license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
apache-2.0 | ['MRC', 'SQuAD 1.1', 'roberta-large'] | false | Model description A RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/). The model is initialized with [roberta-large](https://huggingface.co/roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad). | de5f3db27e1b9622d686e9a4dc89f2d3 |
apache-2.0 | ['MRC', 'SQuAD 1.1', 'roberta-large'] | false | Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, roberta-large, that we used may be present in our fine-tuned model, squad-v1-roberta-large. | 30ab28e3f71862087b031f38b8590c5c |
apache-2.0 | ['MRC', 'SQuAD 1.1', 'roberta-large'] | false | Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb). ```bibtex @article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, } ``` ```bibtex @article{DBLP:journals/corr/abs-1907-11692, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach}, journal = {CoRR}, volume = {abs/1907.11692}, year = {2019}, url = {http://arxiv.org/abs/1907.11692}, archivePrefix = {arXiv}, eprint = {1907.11692}, timestamp = {Thu, 01 Aug 2019 08:59:33 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` | 3a98689a702dac8323166d027705b6d2 |
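Outside of PrimeQA, a checkpoint like this should also work with the standard `transformers` question-answering pipeline. A minimal sketch, assuming the repository id is `PrimeQA/squad-v1-roberta-large` (the card names the model `squad-v1-roberta-large` but does not state the full path):

```python
# A minimal sketch using the plain transformers question-answering pipeline.
# The repo id "PrimeQA/squad-v1-roberta-large" is an assumption based on the
# model name mentioned above; substitute the actual checkpoint path.
from transformers import pipeline

qa = pipeline("question-answering", model="PrimeQA/squad-v1-roberta-large")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model is initialized with roberta-large and fine-tuned on the SQuAD 1.1 train data.",
)
print(result["answer"], result["score"])
```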
openrail | ['generated_from_trainer'] | false | gpt2-shikoto This model was trained on a dataset I obtained from an online novel site. **Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.** The base model can be found [here](https://huggingface.co/jed351/gpt2-base-zh-hk), which was obtained by patching a [GPT2 Chinese model](https://huggingface.co/ckiplab/gpt2-base-chinese) and its tokenizer with Cantonese characters. Refer to the base model for info on the patching process. Besides language modeling, another aim of this experiment was to test the accelerate library by offloading certain workloads to the CPU, as well as to find the optimal number of training iterations. The perplexity of this model is 16.12 after 400,000 steps, compared with 27.02 after 400,000 steps for the previous [attempt](https://huggingface.co/jed351/gpt2_tiny_zh-hk-shikoto). Training took around the same amount of time, but I used only 1 GPU here. | f368e8a726f7bf8c1cdbe0e54d241283 |
openrail | ['generated_from_trainer'] | false | Training procedure Please refer to the [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) provided by Huggingface. The model was trained for 400,000 steps on 1 NVIDIA Quadro RTX6000 for around 30 hours at the Research Computing Services of Imperial College London. | 531c73d2c336da639f8f64f4c93418dd |
openrail | ['generated_from_trainer'] | false | How to use it? ``` from transformers import AutoTokenizer from transformers import TextGenerationPipeline, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk") model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_base_zh-hk-shikoto") ``` | aae5f03d8def5d1fbd57b7f62078135b |
openrail | ['generated_from_trainer'] | false | ``` # try messing around with the parameters generator = TextGenerationPipeline(model, tokenizer, max_new_tokens=200, no_repeat_ngram_size=3) ``` | 5d4266d0948388e29183854db27909d3 |
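Once the pipeline above is constructed, generation is a single call. A minimal sketch, assuming the `generator` object from the snippets above; the prompt is a placeholder:

```python
# Minimal usage sketch for the pipeline built above; the prompt below is a
# placeholder and the sampling parameters are illustrative only.
result = generator("今日天氣", do_sample=True, top_k=50)
print(result[0]["generated_text"])
```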
cc-by-4.0 | ['question generation', 'answer extraction'] | false | Model Card of `lmqg/t5-base-squad-qg-ae` This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question generation and answer extraction jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | 09c5a79f8ed157539c0e311f3dfa3580 |
cc-by-4.0 | ['question generation', 'answer extraction'] | false | Overview - **Language model:** [t5-base](https://huggingface.co/t5-base) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) | ad1c09736807c93cea10870364f4767a |
cc-by-4.0 | ['question generation', 'answer extraction'] | false | model prediction ```python question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae") ``` | cb8a72a53d6f2a20f1598d549c73e759 |
cc-by-4.0 | ['question generation', 'answer extraction'] | false | answer extraction ```python answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.") ``` | 844b9e63c6e3d96867fb04a5befaba32 |
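The snippet above covers the answer-extraction prefix; for the question-generation direction, lmqg's SQuAD models highlight the answer span with `<hl>` tokens and use a `generate question:` prefix. A sketch under that assumption, continuing with the `pipe` object above:

```python
# A sketch of the question-generation direction: the answer span is wrapped
# in <hl> tokens and the input uses the "generate question:" prefix.
question = pipe(
    "generate question: Beyonce further expanded her acting career, starring as "
    "blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
)
print(question[0]["generated_text"])
```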
cc-by-4.0 | ['question generation', 'answer extraction'] | false | Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 58.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 42.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 32.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 26.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 53.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.53 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 64.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 92.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 58.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 70.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 52.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 48.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 44.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 43.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 82.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 69.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | 59479c76886a148bedfed8573015b426 |
cc-by-4.0 | ['question generation', 'answer extraction'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: t5-base - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/trainer_config.json). | eddbce3c35ea631193b3b5051d360298 |
apache-2.0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | false | wav2vec2-large-xls-r-300m-guarani-small This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4964 - Wer: 0.5957 | c3a8ea5cda0f2f94f517742252c99c62 |
apache-2.0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30 - mixed_precision_training: Native AMP | 77257090192d192f3571ba4de7f31456 |
apache-2.0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 6.65 | 100 | 1.1326 | 1.0 | | 1.6569 | 13.32 | 200 | 0.5264 | 0.6478 | | 1.6569 | 19.97 | 300 | 0.5370 | 0.6261 | | 0.2293 | 26.65 | 400 | 0.4964 | 0.5957 | | 0b9c1a2d5ca5c4eb2b62ee7ba4373df9 |
mit | ['exbert'] | false | TOD-XLMR TOD-XLMR is a conversationally specialized multilingual version based on [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base). It is pre-trained on English conversational corpora consisting of nine human-to-human multi-turn task-oriented dialog (TOD) datasets as proposed in the paper [TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue](https://aclanthology.org/2020.emnlp-main.66.pdf) by Wu et al. and first released in [this repository](https://huggingface.co/TODBERT). The model is jointly trained with two objectives as proposed in TOD-BERT, including masked language modeling (MLM) and response contrastive loss (RCL). Masked language modeling is a common pretraining strategy utilized for BERT-based architectures, where a random sample of tokens in the input sequence is replaced with the special token [MASK] for predicting the original masked tokens. To further encourage the model to capture dialogic structure (i.e., dialog sequential order), response contrastive loss is implemented by using in-batch negative training with contrastive learning. | f1fec47d29b9eb2507e38337771f5420 |
mit | ['exbert'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR") model = AutoModelForMaskedLM.from_pretrained("umanlp/TOD-XLMR") ``` | f44cc188b504f4eb97bfe2d418e83f13 |
mit | ['exbert'] | false | forward pass ``` encoded_input = tokenizer("Hello, how can I help you?", return_tensors="pt") output = model(**encoded_input) ``` Or you can also use `AutoModel` to load the pretrained model and further apply it to downstream tasks: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR") model = AutoModel.from_pretrained("umanlp/TOD-XLMR") ``` | 31696168a747d17323c22944946f7093 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - F1: 0.8593 | 7bfa7e2b501a23408544ffc80bb402fe |
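An inference sketch for a NER checkpoint like this one; the excerpt does not state the full repository path, so the id below is hypothetical:

```python
# Token-classification inference sketch. The repo id
# "your-username/xlm-roberta-base-finetuned-panx-all" is hypothetical;
# substitute the real checkpoint path.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```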
apache-2.0 | ['Axon', 'Elixir'] | false | ResNet
This ResNet50 model was translated from the ONNX ResNetv1 model found
at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx)
The following description is copied from the relevant description at the ONNX repository.
| f73492648bee0c39ea92e10d13c61205 |
apache-2.0 | ['Axon', 'Elixir'] | false | Use cases
These ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. ResNet models provide very high accuracy at affordable model sizes. They are ideal for cases where high classification accuracy is required.
ImageNet trained models are often used as the base layers for a transfer learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches.
| 3a0c9bc97f5e4c5be7d20afb73976ab7 |
apache-2.0 | ['Axon', 'Elixir'] | false | Description
Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset, the residual nets were evaluated with a depth of up to 152 layers (8x deeper than VGG nets) while still having lower complexity.
| 9061d4d387c5388c5db63bd4c8b9f8b6 |
apache-2.0 | ['Axon', 'Elixir'] | false | Model
ResNet models consist of residual blocks and were introduced to counter the degradation in accuracy that appears as networks grow deeper, when the earlier layers stop learning effectively.
ResNet v1 uses post-activation for the residual blocks.
| 441838aa974a3f2846ccd1031424726f |
apache-2.0 | ['Axon', 'Elixir'] | false | Input
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224.
Inference was done using a JPEG image.
| c67d5461ffe927d07d093335436fe069 |
apache-2.0 | ['Axon', 'Elixir'] | false | Preprocessing
The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. This transformation should preferably happen during preprocessing.
| 91d9fe687d98972a8bee6672e91de2bf |
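A sketch of this preprocessing in Python/numpy for illustration (the model itself is served from Axon/Elixir); `cat.jpg` is a placeholder input image:

```python
# Sketch of the preprocessing described above: scale to [0, 1], normalize
# per channel with the ImageNet mean/std, and reshape to NCHW.
import numpy as np
from PIL import Image

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img = Image.open("cat.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0   # scale to [0, 1]
x = (x - mean) / std                            # per-channel normalization
x = x.transpose(2, 0, 1)[None, ...]             # HWC -> NCHW mini-batch
```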
apache-2.0 | ['Axon', 'Elixir'] | false | Postprocessing
The post-processing involves calculating the softmax probability scores for each class. You can also sort them to report the most probable classes. Check [imagenet_postprocess.py](../imagenet_postprocess.py) for code.
| d0b25679b85aee53adb990e70424aee0 |
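For illustration, a numpy sketch of this softmax step; `logits` stands in for the raw 1000-way output of the network:

```python
# Softmax post-processing sketch: turn raw class scores into probabilities
# and sort to report the most probable classes.
import numpy as np

logits = np.random.randn(1000).astype(np.float32)  # placeholder model output
exp = np.exp(logits - logits.max())                # subtract max for stability
probs = exp / exp.sum()
top5 = probs.argsort()[::-1][:5]                   # indices of top-5 classes
print(top5, probs[top5])
```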
apache-2.0 | ['Axon', 'Elixir'] | false | Dataset
Dataset used for training and validation: [ImageNet (ILSVRC2012)](http://www.image-net.org/challenges/LSVRC/2012/). Check [imagenet_prep](../imagenet_prep.md) for guidelines on preparing the dataset.
| d2b3ab59cf59ca796961d1d3f6960f7d |
apache-2.0 | ['Axon', 'Elixir'] | false | References
* **ResNetv1**
[Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385)
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
* **ONNX source model**
[onnx/models vision/classification/resnet resnet50-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
| 1a20790fc55b3abab5b3fafb96c3fc50 |
apache-2.0 | ['translation'] | false | lit-epo * source group: Lithuanian * target group: Esperanto * OPUS readme: [lit-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md) * model: transformer-align * source language(s): lit * target language(s): epo * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.eval.txt) | 4dc76db4a8ed3dc159e70a9ddefb5e13 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: lit-epo - source_languages: lit - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'eo'] - src_constituents: {'lit'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt - src_alpha3: lit - tgt_alpha3: epo - short_pair: lt-eo - chrF2_score: 0.313 - bleu: 13.0 - brevity_penalty: 1.0 - ref_len: 70340.0 - src_name: Lithuanian - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: lt - tgt_alpha2: eo - prefer_old: False - long_pair: lit-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | c807e13fec4b916aae132147825a0b15 |
apache-2.0 | ['automatic-speech-recognition', 'fr'] | false | exp_w2v2r_fr_vp-100k_age_teens-8_sixties-2_s607 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | da25df9372f65d4a9087c55fc0f6c625 |
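A transcription sketch with HuggingSound, the tool referenced above. The repository id is an assumption based on the model name, and `sample.wav` is a placeholder 16 kHz recording:

```python
# Transcription sketch using the HuggingSound tool mentioned in the card.
# The repo id is an assumption (the card does not state the full path).
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_vp-100k_age_teens-8_sixties-2_s607")
transcriptions = model.transcribe(["sample.wav"])  # paths to 16 kHz audio
print(transcriptions[0]["transcription"])
```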
apache-2.0 | ['generated_from_trainer'] | false | flan-t5-base-juraqanda This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0784 - Rouge1: 9.5491 - Rouge2: 1.4927 - Rougel: 8.828 - Rougelsum: 9.2708 - Gen Len: 18.5260 | 5c6d135105ac9c879d602fcd815d59c1 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 1dfee99edb7096dd7255628f4e8af898 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 4.0303 | 1.0 | 712 | 3.3466 | 9.4455 | 1.2684 | 8.8558 | 9.1832 | 18.7577 | | 3.6049 | 2.0 | 1424 | 3.1931 | 10.0714 | 1.4116 | 9.4163 | 9.8024 | 18.6461 | | 3.3464 | 3.0 | 2136 | 3.1246 | 9.6542 | 1.4317 | 8.9441 | 9.36 | 18.5485 | | 3.2831 | 4.0 | 2848 | 3.0910 | 9.6676 | 1.4584 | 8.9533 | 9.3876 | 18.6706 | | 3.2176 | 5.0 | 3560 | 3.0784 | 9.5491 | 1.4927 | 8.828 | 9.2708 | 18.5260 | | 1aa25836a59a9d7589d0350e0019282c |
apache-2.0 | ['generated_from_trainer'] | false | mt5-base-finetuned-rabbi-kook This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3861 | 4b43730ad356142e72b9cb8d230737b5 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 | 766307dc09a647c2c26511921c4a5dce |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2102 | 1.0 | 3567 | 2.4526 | | 3.0283 | 2.0 | 7134 | 2.3861 | | d1a61acdaf78985fb1081ac2b30545eb |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-finetuned-eli5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 3.6782 - Rouge1: 13.0163 - Rouge2: 1.9263 - Rougel: 10.484 - Rougelsum: 11.8234 - Gen Len: 18.9951 | 6330aef12c4bed8f6b8aec9114679c90 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 3.8841 | 1.0 | 17040 | 3.6782 | 13.0163 | 1.9263 | 10.484 | 11.8234 | 18.9951 | | 2b80d6f2f69a3c60a46e26be578a32e2 |
creativeml-openrail-m | ['text-to-image'] | false | white-walker-style Dreambooth model trained by sztanki with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: white (use that in your prompt) | 5c3c1004782940524db365347c3ef85d |
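Beyond the Colab notebook, a minimal `diffusers` sketch; the repository id `sztanki/white-walker-style` is an assumption based on the trainer and concept names:

```python
# Minimal diffusers inference sketch, assuming the concept repo is
# "sztanki/white-walker-style"; "white" is the concept token from the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sztanki/white-walker-style", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait of a knight, white").images[0]  # "white" triggers the concept
image.save("white_walker.png")
```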
cc-by-4.0 | ['generated_from_trainer'] | false | roberta-base-squad2-finetuned This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0010 | fa206ba6460bb0cb62e1284bbf5b2d81 |
cc-by-4.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 27 | 0.0023 | | No log | 2.0 | 54 | 0.0010 | | 39433e92701ffccb85e8f9bdc0acffce |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Small Lithuanian and Serbian sequentially trained This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: | cd46f80cc1b71b07cfb10fbbfd295e63 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters per fine-tune The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP | 94c6257c863f6a171b780cf6bb8c2f08 |
apache-2.0 | ['generated_from_trainer'] | false | all-roberta-large-v1-banking-16-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7470 - Accuracy: 0.0756 | 79bfd8fca61ff67ac86ee4309739b36d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 | | 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 | | 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 | | 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 | | 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 | | 8e2f555b15eaf0d8211534d878febe93 |
mit | ['generated_from_trainer'] | false | dreamy_poitras This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. | 44f2f03cb90c05e69eba98d031a5156c |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.0}, 'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'num_additional_tokens': 2, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'dreamy_poitras', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | 6b0eb87871efd02b17dba016b79bbb9f |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.7297 | 0.19 | 500 | 8.5541 | | 8.5592 | 0.39 | 1000 | 8.5536 | | 8.4892 | 0.58 | 1500 | 8.5554 | | 8.5288 | 0.77 | 2000 | 8.4786 | | 8.5034 | 0.97 | 2500 | 8.4756 | | 8.3497 | 1.16 | 3000 | 8.4821 | | 8.4516 | 1.36 | 3500 | 8.4742 | | 8.4224 | 1.55 | 4000 | 8.3972 | | 8.3356 | 1.74 | 4500 | 8.4158 | | 8.3805 | 1.94 | 5000 | 8.3800 | | 8.2947 | 2.13 | 5500 | 8.4242 | | 8.2475 | 2.32 | 6000 | 8.4334 | | 8.2708 | 2.52 | 6500 | 8.3504 | | 8.2559 | 2.71 | 7000 | 8.4211 | | 8.3676 | 2.9 | 7500 | 8.3744 | | b580dcecb75ece9ec12ef9e93ef958e5 |
apache-2.0 | ['generated_from_trainer'] | false | clinical_trial_stop_reasons_custom This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - Accuracy Thresh: 0.9570 - F1 Micro: 0.5300 - F1 Macro: 0.1254 - Confusion Matrix: [[5940 15] [ 270 150]] | 9f39cc48a6cc035a3ffa360e0b59fe61 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 | a06fdf7fe6c7517ac7ac3472f262689d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy Thresh | F1 Micro | F1 Macro | Confusion Matrix | |:-------------:|:-----:|:----:|:---------------:|:---------------:|:--------:|:--------:|:--------------------------:| | No log | 1.0 | 106 | 0.2812 | 0.8328 | 0.0 | 0.0 | [[5955 0] [ 420 0]] | | No log | 2.0 | 212 | 0.2189 | 0.9382 | 0.0 | 0.0 | [[5955 0] [ 420 0]] | | No log | 3.0 | 318 | 0.1840 | 0.9489 | 0.0 | 0.0 | [[5955 0] [ 420 0]] | | No log | 4.0 | 424 | 0.1638 | 0.9485 | 0.4940 | 0.0989 | [[5943 12] [ 288 132]] | | 0.239 | 5.0 | 530 | 0.1526 | 0.9533 | 0.5060 | 0.1018 | [[5943 12] [ 277 143]] | | 0.239 | 6.0 | 636 | 0.1467 | 0.9564 | 0.5077 | 0.1020 | [[5938 17] [ 275 145]] | | 0.239 | 7.0 | 742 | 0.1448 | 0.9570 | 0.5300 | 0.1254 | [[5940 15] [ 270 150]] | | 8962b5c0c8cbb02b16a87cd6e5c5bbe7 |
mit | ['generated_from_trainer'] | false | finetuned_gpt2_sst2_negation0.2_pretrainedFalse This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 5.3370 | 87c262282bb6039a8abdbd95a22a6ec3 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.9034 | 1.0 | 1072 | 5.5636 | | 4.5404 | 2.0 | 2144 | 5.3854 | | 4.368 | 3.0 | 3216 | 5.3370 | | eecea2cf59961a0f3af46c2b6c732f23 |
apache-2.0 | [] | false | 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言转换 NLT | 燃灯 Randeng | Transformer | 1.1B | 中文-改写 Chinese-Paraphrase | | 67b41f8936b032d31600f9a44a2714de |
apache-2.0 | [] | false | 加载模型 Loading Models ```shell git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git ``` ```python from fengshen.models.transfo_xl_paraphrase import TransfoXLModel from transformers import T5Tokenizer as TransfoXLTokenizer model = TransfoXLModel.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese') tokenizer = TransfoXLTokenizer.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese', eos_token = '<|endoftext|>', extra_ids=0) ``` | 6ecdab57f16b62b7fd5519a489184371 |
apache-2.0 | [] | false | 使用示例 Usage Examples ```python from fengshen.models.transfo_xl_paraphrase import paraphrase_generate input_text = "年轻教师选择农村学校,还是县城学校?" res = paraphrase_generate(model, tokenizer, input_text, device=0) print(res) ``` | ccb3c866aa7d90dd7e42b5ae34bdd9fa |
apache-2.0 | [] | false | 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ``` | 6462d4adc2b2f58a57e23f33c97340be |
cc-by-4.0 | ['espnet', 'audio', 'text-to-speech'] | false | `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4381098/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). | 072cae3a56105822c8530f7ac56b9056 |
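A loading sketch with ESPnet2's inference API, assuming the `espnet_model_zoo` package can resolve the model tag above:

```python
# Sketch of loading the imported checkpoint with ESPnet2's TTS inference API;
# assumes espnet and espnet_model_zoo are installed and can resolve this tag.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave"
)
speech = tts("こんにちは、世界。")["wav"]        # synthesize a placeholder sentence
sf.write("out.wav", speech.numpy(), tts.fs)   # save at the model's sample rate
```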
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_add_GLUE_Experiment_logit_kd_pretrain_wnli This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: nan - Accuracy: 0.5634 | e1f2743199b467bfb78c5daf93d79f85 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0 | 1.0 | 5 | nan | 0.5634 | | 0.0 | 2.0 | 10 | nan | 0.5634 | | 0.0 | 3.0 | 15 | nan | 0.5634 | | 0.0 | 4.0 | 20 | nan | 0.5634 | | 0.0 | 5.0 | 25 | nan | 0.5634 | | 0.0 | 6.0 | 30 | nan | 0.5634 | | 41d9ecfc097f438fd6f9a4d3438d5871 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'asr', 'hubert'] | false | Usage download file ```shell wget https://raw.githubusercontent.com/voidful/hubert-cluster-code/main/km_feat_100_layer_20 wget https://cdn-media.huggingface.co/speech_samples/sample1.flac ``` Hubert kmeans code ```python import joblib import torch from transformers import Wav2Vec2FeatureExtractor, HubertModel import soundfile as sf class HubertCode(object): def __init__(self, hubert_model, km_path, km_layer): self.processor = Wav2Vec2FeatureExtractor.from_pretrained(hubert_model) self.model = HubertModel.from_pretrained(hubert_model) self.km_model = joblib.load(km_path) self.km_layer = km_layer self.C_np = self.km_model.cluster_centers_.transpose() self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True) self.C = torch.from_numpy(self.C_np) self.Cnorm = torch.from_numpy(self.Cnorm_np) if torch.cuda.is_available(): self.C = self.C.cuda() self.Cnorm = self.Cnorm.cuda() self.model = self.model.cuda() def __call__(self, filepath, sampling_rate=None): speech, sr = sf.read(filepath) input_values = self.processor(speech, return_tensors="pt", sampling_rate=sr).input_values if torch.cuda.is_available(): input_values = input_values.cuda() hidden_states = self.model(input_values, output_hidden_states=True).hidden_states x = hidden_states[self.km_layer].squeeze() dist = ( x.pow(2).sum(1, keepdim=True) - 2 * torch.matmul(x, self.C) + self.Cnorm ) return dist.argmin(dim=1).cpu().numpy() ``` input ```python hc = HubertCode("facebook/hubert-large-ll60k", './km_feat_100_layer_20', 20) voice_ids = hc('./sample1.flac') ``` bart model ````python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("voidful/asr_hubert_cluster_bart_base") model = AutoModelForSeq2SeqLM.from_pretrained("voidful/asr_hubert_cluster_bart_base") ```` generate output ```python gen_output = model.generate(input_ids=tokenizer("".join([f":vtok{i}:" for i in voice_ids]),return_tensors='pt').input_ids,max_length=1024) print(tokenizer.decode(gen_output[0], skip_special_tokens=True)) ``` | 32b0533f888e59d1c2f4e5322dfff90e |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'asr', 'hubert'] | false | Result `going along slushy country roads and speaking to damp audience in drifty school rooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to ask immediately afterwards` | 02ebbc811d34c7d461253ee8a2ce4e3c |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2763 - F1: 0.8346 | 1ae68f2dd45cc41bc5be3a01017ef53d |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 | | 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 | | 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 | | 8454f3d1b3753d0e792b9432bc38c8b5 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP | ac8ecaebc4da708736a0313623ecdc7b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.1169 | 7.6948 | 17.4103 | | 4e18a9639c7e740f33ae7c9e0dce01c7 |
apache-2.0 | ['generated_from_trainer'] | false | vit-base-patch32-224-in21k-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.6175 - Accuracy: 0.7321 | 8db70de80f6a854fa6359fb35f9bec34 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6483 | 1.0 | 532 | 2.5574 | 0.6605 | | 1.8885 | 2.0 | 1064 | 1.8063 | 0.7182 | | 1.6371 | 3.0 | 1596 | 1.6175 | 0.7321 | | c0cb1d3370da221965469dcf2349904f |
apache-2.0 | [] | false | Mengzi-T5-MT model This is a multi-task model trained on a mixture of 27 datasets and 301 prompts, based on [Mengzi-T5-base](https://huggingface.co/Langboat/mengzi-t5-base). [Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696) | 2ff7792249d073942a0a0c6cefb83172 |
apache-2.0 | [] | false | Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base-mt") model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base-mt") ``` | 83bd60e387cadb7a4fd096d80488848e |
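A generation sketch continuing the snippet above; the prompt format is illustrative only, since this excerpt does not document the 301 prompt templates:

```python
# Generation sketch using the tokenizer/model loaded above; the prompt
# wording is an illustrative placeholder, not a documented template.
inputs = tokenizer("情感分析:这个电影真好看。这句话的情感是积极还是消极?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```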
apache-2.0 | [] | false | This model is a fine-tuned checkpoint of [T5-base](https://huggingface.co/t5-base), trained on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of 0.39 on a dev split of the WNC. For more details about T5, check out this [model card](https://huggingface.co/t5-base). | 2ad47ac71a16d32a7514584b68448d7d |
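An inference sketch for a neutralization model like this one; the repository id below is hypothetical, since the card does not state the checkpoint path:

```python
# Bias-neutralization inference sketch. "your-username/t5-base-wnc" is a
# hypothetical repo id; substitute the actual fine-tuned checkpoint.
from transformers import pipeline

neutralize = pipeline("text2text-generation", model="your-username/t5-base-wnc")
print(neutralize("John Lennon was the best member of the Beatles.")[0]["generated_text"])
```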
apache-2.0 | ['generated_from_trainer'] | false | beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05 This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.7881 - Accuracy: 0.7221 | ff4d5273c2708ae28bea93d1632391d0 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 | c8f34f799b111211bd0b2bdc66e57e85 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2307 | 1.0 | 224 | 1.0863 | 0.5874 | | 1.0893 | 2.0 | 448 | 0.9700 | 0.6362 | | 1.0244 | 3.0 | 672 | 0.8859 | 0.6757 | | 1.016 | 4.0 | 896 | 0.8804 | 0.6787 | | 0.9089 | 5.0 | 1120 | 0.8611 | 0.6897 | | 0.8935 | 6.0 | 1344 | 0.8283 | 0.7028 | | 0.8403 | 7.0 | 1568 | 0.8116 | 0.7102 | | 0.8179 | 8.0 | 1792 | 0.7934 | 0.7166 | | 0.7764 | 9.0 | 2016 | 0.7865 | 0.7208 | | 0.771 | 10.0 | 2240 | 0.7881 | 0.7221 | | 48c7ab7a7419368a6618c9e258210a91 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2284 - Accuracy: 0.9195 - F1: 0.9195 | ec1c3a40f728b8dd63240a47175a0125 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8441 | 1.0 | 250 | 0.3260 | 0.9 | 0.8970 | | 0.2551 | 2.0 | 500 | 0.2284 | 0.9195 | 0.9195 | | 9cf76e4feb12af0f936984f861fcca7d |
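An inference sketch for the emotion classifier described above; the repository id is hypothetical, since this excerpt does not state the full checkpoint path:

```python
# Text-classification inference sketch. The repo id
# "your-username/distilbert-base-uncased-finetuned-emotion" is hypothetical.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I'm thrilled with how this turned out!"))
```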
apache-2.0 | ['automatic-speech-recognition', 'id'] | false | exp_w2v2t_id_xlsr-53_s149 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 06ce85a2c715229510e89776181b14ee |
other | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers'] | false | Official Repository Read more about this model here: https://civitai.com/models/4384/dreamshaper You can run this model on: - https://huggingface.co/spaces/Lykon/DreamShaper-webui - https://sinkin.ai/m/4zdwGOB | 8e7f54b3ee948c531e6892a39c2d8f27 |
apache-2.0 | ['text-classification', 'generated_from_trainer'] | false | custom-textcat-model This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the custom dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.3305 - Accuracy: 0.9541 | 76d84e5e75db72af4e284df3df51026b |
apache-2.0 | ['text-classification', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 209 | 0.3650 | 0.9514 | | No log | 2.0 | 418 | 0.3371 | 0.9568 | | 0.0108 | 3.0 | 627 | 0.3305 | 0.9541 | | 0.0108 | 4.0 | 836 | 0.3465 | 0.9568 | | 0.0056 | 5.0 | 1045 | 0.3498 | 0.9541 | | cd2037356332f74764ddc2ba9030c44a |
mit | ['spacy', 'token-classification'] | false | English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_food_entity_extractor_v2` | | **Version** | `3.4.1` | | **spaCy** | `>=3.4.0,<3.5.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | | 5851ca1869b3924f438e1d62add2be8b |
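A usage sketch, assuming the pipeline has been installed as a package so that `spacy.load` can resolve it by name:

```python
# Usage sketch for the pipeline described above; assumes the package
# "en_food_entity_extractor_v2" is installed in the environment.
import spacy

nlp = spacy.load("en_food_entity_extractor_v2")
doc = nlp("I had a margherita pizza and a glass of orange juice in Naples.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # FOOD entities alongside the OntoNotes labels
```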
mit | ['spacy', 'token-classification'] | false | Label Scheme <details> <summary>View label scheme (114 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `FOOD`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> | fedb12cfc702d815bd6442873f36c9f2 |
mit | ['spacy', 'token-classification'] | false | Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.93 | | `TOKEN_P` | 99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 97.34 | | `SENTS_P` | 91.79 | | `SENTS_R` | 89.14 | | `SENTS_F` | 90.44 | | `DEP_UAS` | 92.04 | | `DEP_LAS` | 90.23 | | `ENTS_P` | 85.35 | | `ENTS_R` | 85.93 | | `ENTS_F` | 85.64 | | f287e5daea6ac9a7c24d9a6a5f54119d |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2403 - F1: 0.8358 | dfcf6b546110c1e531ba4e489765e22f |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7053 | 1.0 | 70 | 0.3077 | 0.7587 | | 0.2839 | 2.0 | 140 | 0.2692 | 0.8007 | | 0.1894 | 3.0 | 210 | 0.2403 | 0.8358 | | a6840a35873fa97be60e89b124bc3905 |
apache-2.0 | ['generated_from_trainer'] | false | ec_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9323 | 5e9a86b815fcecb98f75b45a6fbbffb4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 497 | 1.1985 | | 1.578 | 2.0 | 994 | 1.0032 | | 1.187 | 3.0 | 1491 | 0.9479 | | 4c86c635c3fd4760e49746f5d53ed323 |
cc-by-4.0 | ['translation'] | false | 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴 Norwegian has two relatively similar written languages: Bokmål and Nynorsk. Historically, Nynorsk is a written norm based on dialects curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish. The two written languages are considered equal, and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been around for a long time, only between 5 and 10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language. Apart from some word-list based engines, there are no working off-the-shelf machine learning-based translation models, and translation between Bokmål and Nynorsk is not available in Google Translate. | 87b7acaed849ad4fda3e6fec8bc5b38e |
cc-by-4.0 | ['translation'] | false | Demo | | | |---|---| | Widget | Try the widget in the top right corner | | Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) | | | | | b240270a76a4b8fa0e05a19fa889a93c |
cc-by-4.0 | ['translation'] | false | Pretraining a T5-base There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately, only a very small part of it is Nynorsk: there is only around 1GB of Nynorsk text in mC4. Despite this, the mt5 also gives a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type, i.e. books, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB. | dbc05025fea8ab4cf86e815ae74e7072 |
cc-by-4.0 | ['translation'] | false | Finetuning - BLEU-SCORE 88.17 🎉 The central finetuning data of the project were 200k translation units (TU), i.e. aligned pairs of sentences in the respective languages extracted from textbooks of various subjects and newspapers. Training for [10] epochs with a learning rate of [7e-4], a batch size of [32] and a max source and target length of [512], fine-tuning reached a SacreBLEU score of [88.03] during training and a test score of [**88.17**] after training. | ebc00e8d0c1e31bc3ddb262f168b8cd3 |
cc-by-4.0 | ['translation'] | false | This is not a translator We found that we were able to get an almost identical BLEU score by training the model in both directions and letting it decide whether the input is Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher. | af689f15e0d045e1e10c6ea94f308286 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-turkish-colab2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3738 - Wer: 0.3532 | 060c228ee494b1f420fd9121d9248643 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP | 6dd140a1eea31695310f04d6298b0d38 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9022 | 3.7 | 400 | 0.6778 | 0.7414 | | 0.4106 | 7.4 | 800 | 0.4123 | 0.5049 | | 0.1862 | 11.11 | 1200 | 0.4260 | 0.4232 | | 0.1342 | 14.81 | 1600 | 0.3951 | 0.4097 | | 0.0997 | 18.51 | 2000 | 0.4100 | 0.3999 | | 0.0782 | 22.22 | 2400 | 0.3918 | 0.3875 | | 0.059 | 25.92 | 2800 | 0.3803 | 0.3698 | | 0.0474 | 29.63 | 3200 | 0.3738 | 0.3532 | | 8412136505ee5d31c9f5cd7f117cb546 |
mit | ['generated_from_trainer'] | false | rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5820 - Accuracy: 0.7881 - F1: 0.7886 - Precision: 0.7906 - Recall: 0.7881 | 83be75c00670618c238242f8f4cd41cb |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 | 49bb6bb19ac65676953b89072cf6b258 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0996 | 1.0 | 69 | 1.0013 | 0.6879 | 0.6779 | 0.7070 | 0.6879 | | 0.9524 | 2.0 | 138 | 0.8651 | 0.7265 | 0.7245 | 0.7322 | 0.7265 | | 0.8345 | 3.0 | 207 | 0.7821 | 0.7422 | 0.7413 | 0.7445 | 0.7422 | | 0.7573 | 4.0 | 276 | 0.7222 | 0.7484 | 0.7473 | 0.7482 | 0.7484 | | 0.6923 | 5.0 | 345 | 0.6828 | 0.7568 | 0.7562 | 0.7562 | 0.7568 | | 0.6412 | 6.0 | 414 | 0.6531 | 0.7568 | 0.7559 | 0.7556 | 0.7568 | | 0.5982 | 7.0 | 483 | 0.6320 | 0.7610 | 0.7601 | 0.7597 | 0.7610 | | 0.5593 | 8.0 | 552 | 0.6133 | 0.7651 | 0.7655 | 0.7664 | 0.7651 | | 0.5183 | 9.0 | 621 | 0.6036 | 0.7714 | 0.7708 | 0.7709 | 0.7714 | | 0.5042 | 10.0 | 690 | 0.5951 | 0.7756 | 0.7755 | 0.7760 | 0.7756 | | 0.483 | 11.0 | 759 | 0.5878 | 0.7766 | 0.7768 | 0.7774 | 0.7766 | | 0.4531 | 12.0 | 828 | 0.5855 | 0.7850 | 0.7841 | 0.7839 | 0.7850 | | 0.4386 | 13.0 | 897 | 0.5828 | 0.7797 | 0.7790 | 0.7786 | 0.7797 | | 0.4238 | 14.0 | 966 | 0.5788 | 0.7777 | 0.7780 | 0.7786 | 0.7777 | | 0.4018 | 15.0 | 1035 | 0.5793 | 0.7839 | 0.7842 | 0.7855 | 0.7839 | | 0.3998 | 16.0 | 1104 | 0.5801 | 0.7850 | 0.7844 | 0.7841 | 0.7850 | | 0.3747 | 17.0 | 1173 | 0.5791 | 0.7839 | 0.7836 | 0.7833 | 0.7839 | | 0.3595 | 18.0 | 1242 | 0.5799 | 0.7891 | 0.7891 | 0.7894 | 0.7891 | | 0.3575 | 19.0 | 1311 | 0.5820 | 0.7881 | 0.7886 | 0.7906 | 0.7881 | | e59fe003e652eff89a025269d7ccc057 |