Dataset columns:
- license — string (2–30 chars)
- tags — string (2–513 chars)
- is_nc — bool (1 class)
- readme_section — string (201–597k chars)
- hash — string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0267 | 1.0 | 878 | 0.0754 | 0.9301 | 0.9282 | 0.9291 | 0.9829 |
| 0.0141 | 2.0 | 1756 | 0.0698 | 0.9279 | 0.9395 | 0.9336 | 0.9841 |
| 0.0084 | 3.0 | 2634 | 0.0731 | 0.9331 | 0.9432 | 0.9381 | 0.9851 |
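For reference, the Precision, Recall, and F1 columns follow the usual definitions from true-positive, false-positive, and false-negative counts. A minimal pure-Python sketch (the counts below are made up for illustration, not taken from this run):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts chosen for illustration only.
precision, recall, f1 = prf1(tp=930, fp=70, fn=72)
print(round(precision, 4), round(recall, 4), round(f1, 4))
```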
c9d18beb23ccaf59d3b6a527cfcf1acd
apache-2.0
['pegasus', 'paraphrasing', 'seq2seq']
false
Model in Action 🚀

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_paraphrase'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

def get_response(input_text, num_return_sequences, num_beams):
    batch = tokenizer([input_text], truncation=True, padding='longest', max_length=60,
                      return_tensors="pt").to(torch_device)
    translated = model.generate(**batch, max_length=60, num_beams=num_beams,
                                num_return_sequences=num_return_sequences, temperature=1.5)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text
```
56ef9bd54230f7a7646d7165eb08aa87
apache-2.0
['pegasus', 'paraphrasing', 'seq2seq']
false
output:
['The test of your knowledge is your ability to convey it.',
 'The ability to convey your knowledge is the ultimate test of your knowledge.',
 'The ability to convey your knowledge is the most important test of your knowledge.',
 'Your capacity to convey your knowledge is the ultimate test of it.',
 'The test of your knowledge is your ability to communicate it.',
 'Your capacity to convey your knowledge is the ultimate test of your knowledge.',
 'Your capacity to convey your knowledge to another is the ultimate test of your knowledge.',
 'Your capacity to convey your knowledge is the most important test of your knowledge.',
 'The test of your knowledge is how well you can convey it.',
 'Your capacity to convey your knowledge is the ultimate test.']
```

> Created by [Arpit Rajauria](https://twitter.com/arpit_rajauria) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)
05d748e168ffbe77366473208245d786
apache-2.0
['translation']
false
tha-eng

* source group: Thai
* target group: English
* OPUS readme: [tha-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md)
* model: transformer-align
* source language(s): tha
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.eval.txt)
efe2538397562a59bc10ae446dcb771f
apache-2.0
['translation']
false
System Info:
- hf_name: tha-eng
- source_languages: tha
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['th', 'en']
- src_constituents: {'tha'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt
- src_alpha3: tha
- tgt_alpha3: eng
- short_pair: th-en
- chrF2_score: 0.644
- bleu: 48.1
- brevity_penalty: 0.974
- ref_len: 7407.0
- src_name: Thai
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: th
- tgt_alpha2: en
- prefer_old: False
- long_pair: tha-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
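The brevity_penalty field follows the standard BLEU definition: 1 when the hypothesis is at least as long as the reference, otherwise exp(1 - ref_len/hyp_len). A small sketch (only ref_len 7407 comes from the info above; the hypothesis length of 7217 is back-solved here purely for illustration):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at
    least as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# With ref_len = 7407, a hypothesis of roughly 7217 tokens yields a
# penalty close to the 0.974 reported for this model.
print(round(brevity_penalty(7217, 7407), 3))
```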
366f3c3282b552db03839a2228f78957
apache-2.0
[]
false
BERT Large model HPU configuration

This model only contains the `GaudiConfig` file for running the [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) model on Habana's Gaudi processors (HPU).

**This model contains no model weights, only a GaudiConfig.**

This lets you specify:
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html)
99f5db6ffb0d22b4f3c7cd2b515a23e8
apache-2.0
[]
false
Usage

The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs.

[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with BERT Large with the following command:

```bash
python run_qa.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --gaudi_config_name gaudi_config_name_or_path \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 24 \
  --per_device_eval_batch_size 8 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --use_lazy_mode \
  --throughput_warmup_steps 2
```

Check out the [documentation](https://huggingface.co/docs/optimum/habana/index) for more advanced usage and examples.
4463d20b76e4e40b2f75709543f382dc
mit
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
This model is a fine-tuned diffusion model for unconditional image generation of anime faces. Even after fine-tuning the diffusion model for 10 epochs, the generated images are still cursed... 💀. Maybe more epochs would help?

![epoch10](dm_anime_epoch10.png)
5d11365c49521a129c792cbf23914a5d
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5255
- Wer: 0.3330
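Wer here is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal pure-Python sketch for intuition (illustrative only, not the scorer used for this run):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the ref prefix seen so far
    # and the first j hypothesis words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 words.
print(wer("the cat sat on the mat", "the cat sit on mat"))
```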
10902816063abc35dc76c51a3ff53906
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5942 | 1.0 | 500 | 2.3849 | 1.0011 |
| 0.9765 | 2.01 | 1000 | 0.5907 | 0.5202 |
| 0.4424 | 3.01 | 1500 | 0.4547 | 0.4661 |
| 0.3008 | 4.02 | 2000 | 0.4194 | 0.4228 |
| 0.2316 | 5.02 | 2500 | 0.3933 | 0.4099 |
| 0.1921 | 6.02 | 3000 | 0.4532 | 0.3965 |
| 0.1561 | 7.03 | 3500 | 0.4315 | 0.3777 |
| 0.1378 | 8.03 | 4000 | 0.4463 | 0.3847 |
| 0.1222 | 9.04 | 4500 | 0.4402 | 0.3784 |
| 0.1076 | 10.04 | 5000 | 0.4253 | 0.3735 |
| 0.0924 | 11.04 | 5500 | 0.4844 | 0.3732 |
| 0.0866 | 12.05 | 6000 | 0.4758 | 0.3646 |
| 0.086 | 13.05 | 6500 | 0.6395 | 0.4594 |
| 0.0763 | 14.06 | 7000 | 0.4951 | 0.3647 |
| 0.0684 | 15.06 | 7500 | 0.4870 | 0.3577 |
| 0.0616 | 16.06 | 8000 | 0.5442 | 0.3591 |
| 0.0594 | 17.07 | 8500 | 0.5305 | 0.3606 |
| 0.0613 | 18.07 | 9000 | 0.5434 | 0.3546 |
| 0.0473 | 19.08 | 9500 | 0.4818 | 0.3532 |
| 0.0463 | 20.08 | 10000 | 0.5086 | 0.3514 |
| 0.042 | 21.08 | 10500 | 0.5017 | 0.3484 |
| 0.0365 | 22.09 | 11000 | 0.5129 | 0.3536 |
| 0.0336 | 23.09 | 11500 | 0.5411 | 0.3433 |
| 0.0325 | 24.1 | 12000 | 0.5307 | 0.3424 |
| 0.0282 | 25.1 | 12500 | 0.5261 | 0.3404 |
| 0.0245 | 26.1 | 13000 | 0.5306 | 0.3388 |
| 0.0257 | 27.11 | 13500 | 0.5242 | 0.3369 |
| 0.0234 | 28.11 | 14000 | 0.5216 | 0.3359 |
| 0.0221 | 29.12 | 14500 | 0.5255 | 0.3330 |
e407375cd602df16f8bbc3237544f84c
apache-2.0
['generated_from_trainer']
false
berttest2

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0674
- Precision: 0.9138
- Recall: 0.9325
- F1: 0.9230
- Accuracy: 0.9823
792f8557aea95a8143dae1cad57e7f54
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0674 | 0.9138 | 0.9325 | 0.9230 | 0.9823 |
db1940494264bf9a1a426f45120d01ea
mit
[]
false
Miko 3 robot on Stable Diffusion

This is the `<miko-3>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<cat-toy> 0](https://huggingface.co/sd-concepts-library/cat-toy-2/resolve/main/concept_images/1.jpeg)
![<cat-toy> 1](https://huggingface.co/sd-concepts-library/cat-toy-2/resolve/main/concept_images/2.jpeg)
![<cat-toy> 2](https://huggingface.co/sd-concepts-library/cat-toy-2/resolve/main/concept_images/3.jpeg)
![<cat-toy> 3](https://huggingface.co/sd-concepts-library/cat-toy-2/resolve/main/concept_images/0.jpeg)
7f91577212ac73258fa12193ea77005b
apache-2.0
['translation']
false
opus-mt-swc-en

* source languages: swc
* target languages: en
* OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt)
26a9baf9ec7f9766d1dcfd4f4c920d40
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-qgsquad-qgen

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the qg_squad dataset. It achieves the following results on the evaluation set:
- Loss: 0.4039
- Rouge4 Precision: 0.0931
- Rouge4 Recall: 0.0834
- Rouge4 Fmeasure: 0.0843
67ffbde7d6d95472b0c9743788e6110c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge4 Precision | Rouge4 Recall | Rouge4 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.4325 | 1.0 | 4733 | 0.3960 | 0.0984 | 0.0867 | 0.0889 |
| 0.4137 | 2.0 | 9466 | 0.3863 | 0.1061 | 0.0946 | 0.0963 |
| 0.3914 | 3.0 | 14199 | 0.3806 | 0.1051 | 0.0938 | 0.0955 |
| 0.3946 | 4.0 | 18932 | 0.3786 | 0.1084 | 0.097 | 0.0986 |
| 0.3857 | 5.0 | 23665 | 0.3784 | 0.1101 | 0.0991 | 0.1007 |
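Rouge4 here measures 4-gram overlap between generated and reference questions: precision over hypothesis 4-grams, recall over reference 4-grams, and their harmonic mean. A toy pure-Python sketch (not the official scorer; the example sentences are invented):

```python
from collections import Counter

def rouge_n(reference, hypothesis, n=4):
    """ROUGE-N-style n-gram precision, recall, and F-measure with
    clipped counts; a toy sketch, not the official implementation."""
    def ngrams(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    ref, hyp = ngrams(reference), ngrams(hypothesis)
    overlap = sum((ref & hyp).values())  # clipped n-gram matches
    precision = overlap / max(sum(hyp.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f

# 2 of the 3 hypothesis 4-grams and 2 of the 4 reference 4-grams match.
print(rouge_n("what is the capital of france today",
              "what is the capital of spain"))
```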
234b1e9cc48f0c23e532b6c2f6baa978
creativeml-openrail-m
['text-to-image']
false
model by IR1763

This is the Stable Diffusion model fine-tuned with the Pranav concept taught to Stable Diffusion via Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks person**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/sd-dreambooth-library/pranav/resolve/main/concept_images/3.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/pranav/resolve/main/concept_images/0.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/pranav/resolve/main/concept_images/2.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/pranav/resolve/main/concept_images/1.jpeg)
64ae124e1eaff941edd07e415ffe21a7
mit
['bert', 'cloze', 'distractor', 'generation']
false
Model description

This model is a Candidate Set Generator in **"CDGP: Automatic Cloze Distractor Generation based on Pre-trained Language Model", Findings of EMNLP 2022**. Its inputs are a stem and an answer, and its output is a candidate set of distractors. It is fine-tuned on the [**CLOTH**](https://www.cs.cmu.edu/~glai1/data/cloth/) dataset based on the [**allenai/scibert_scivocab_uncased**](https://huggingface.co/allenai/scibert_scivocab_uncased) model.

For more details, you can see our **paper** or [**GitHub**](https://github.com/AndyChiangSH/CDGP).
671f69547220b20c7427c29259c97a95
mit
['bert', 'cloze', 'distractor', 'generation']
false
How to use?

1. Download the model with Hugging Face Transformers.

```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("AndyChiang/cdgp-csg-scibert-cloth")
csg_model = BertForMaskedLM.from_pretrained("AndyChiang/cdgp-csg-scibert-cloth")
```

2. Create an unmasker.

```python
unmasker = pipeline("fill-mask", tokenizer=tokenizer, model=csg_model, top_k=10)
```

3. Use the unmasker to generate the candidate set of distractors.

```python
sent = "I feel [MASK] now. [SEP] happy"
cs = unmasker(sent)
print(cs)
```
38ceb6ec9ec0c179bd70813df471f4cf
mit
['bert', 'cloze', 'distractor', 'generation']
false
Training hyperparameters

The following hyperparameters were used during training:
- Pre-trained language model: [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased)
- Optimizer: Adam
- Learning rate: 0.0001
- Max length of input: 64
- Batch size: 64
- Epochs: 1
- Device: NVIDIA® Tesla T4 in Google Colab
f340ac43b214a958cfb2f94b75c04629
mit
['bert', 'cloze', 'distractor', 'generation']
false
Testing

The evaluations of this model as a Candidate Set Generator in CDGP are as follows:

| P@1 | F1@3 | F1@10 | MRR | NDCG@10 |
| ---- | ---- | ----- | ----- | ------- |
| 8.10 | 9.13 | 12.22 | 19.53 | 28.76 |
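P@1 and MRR are computed per test item over the ranked candidate set and then averaged over the test set (the table reports values scaled by 100). A toy sketch of the per-item metrics, with invented candidates and gold distractors:

```python
def precision_at_1(ranked, gold):
    """P@1: 1 if the top-ranked candidate is a gold distractor, else 0."""
    return 1.0 if ranked[0] in gold else 0.0

def mrr(ranked, gold):
    """Reciprocal rank of the first gold distractor in the ranking."""
    for rank, cand in enumerate(ranked, 1):
        if cand in gold:
            return 1.0 / rank
    return 0.0

# Invented example: the first gold distractor appears at rank 2.
ranked = ["sad", "angry", "happy"]
gold = {"angry", "bored"}
print(precision_at_1(ranked, gold), mrr(ranked, gold))
```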
0760fb4f837576aa1a666551fcd4034b
mit
['bert', 'cloze', 'distractor', 'generation']
false
Candidate Set Generator

| Models | CLOTH | DGen |
| ----------- | ----- | ---- |
| **BERT** | [cdgp-csg-bert-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bert-cloth) | [cdgp-csg-bert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bert-dgen) |
| **SciBERT** | [*cdgp-csg-scibert-cloth*](https://huggingface.co/AndyChiang/cdgp-csg-scibert-cloth) | [cdgp-csg-scibert-dgen](https://huggingface.co/AndyChiang/cdgp-csg-scibert-dgen) |
| **RoBERTa** | [cdgp-csg-roberta-cloth](https://huggingface.co/AndyChiang/cdgp-csg-roberta-cloth) | [cdgp-csg-roberta-dgen](https://huggingface.co/AndyChiang/cdgp-csg-roberta-dgen) |
| **BART** | [cdgp-csg-bart-cloth](https://huggingface.co/AndyChiang/cdgp-csg-bart-cloth) | [cdgp-csg-bart-dgen](https://huggingface.co/AndyChiang/cdgp-csg-bart-dgen) |
36355d3b57c613711ee623fbbb873d22
cc-by-4.0
['espnet', 'audio', 'speech-recognition']
false
Demo: How to use in ESPnet2

```bash
cd espnet
pip install -e .
cd egs2/aesrc2020/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_aesrc2020_asr_conformer
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
a6daab74bc5c4da74e69f341fe443976
cc-by-4.0
['espnet', 'audio', 'speech-recognition']
false
Environments

- date: `Sat Aug 20 06:55:57 EDT 2022`
- python version: `3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.8.1`
- Git hash: `c892feb2ba248c85b683bf3cdef6c8f7ce85449a`
- Commit date: `Thu Aug 18 11:54:56 2022 -0400`
34e2262178b9680537a30aaef3a0790e
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_wnli_128

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.3455
- Accuracy: 0.5634
7bf3eac0c60c2f188ec62729b51a0d6c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.347 | 1.0 | 5 | 0.3455 | 0.5634 |
| 0.3467 | 2.0 | 10 | 0.3458 | 0.5634 |
| 0.3466 | 3.0 | 15 | 0.3459 | 0.5634 |
| 0.3465 | 4.0 | 20 | 0.3457 | 0.5634 |
| 0.3466 | 5.0 | 25 | 0.3455 | 0.5634 |
| 0.3466 | 6.0 | 30 | 0.3455 | 0.5634 |
c160542af1f82ac6e9cddbf39a606196
mit
[]
false
Introduction

**MERT-v0** is a completely unsupervised model trained on 1,000 hours of music audio. Its architecture is similar to the [HuBERT model](https://huggingface.co/docs/transformers/model_doc/hubert), but it has been specifically designed for music through the use of specialized pre-training strategies. It is comparable to the state of the art on multiple MIR tasks even under probing settings, while remaining fine-tunable on a single 2080Ti. It outperforms the Jukebox representation on the GTZAN (genre classification) and GiantSteps (key classification) datasets. Larger models trained with more data are on the way.

Note: we also previously released a pre-trained MIR model, [music2vec](https://huggingface.co/m-a-p/music2vec-v1/blob/main/README.md), which shares a similar model structure but has weaker performance.

![Performance Comparison](mert.png)
80558a947b0984919dde68d01118ecbf
mit
[]
false
load demo audio and set processor

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
3f0455ad569ebe6f523d6387721f8b09
mit
[]
false
audio file is decoded on the fly

inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
a527d669790c5c82251a6582e5d7e2c7
mit
[]
false
each layer performs differently on different downstream tasks, so you should choose empirically

all_layer_hidden_states = torch.stack(outputs.hidden_states).squeeze()
print(all_layer_hidden_states.shape)
4dee30d8588930d662ad290ff09bd5cb
mit
[]
false
you can even use a learnable weighted average representation

aggregator = nn.Conv1d(in_channels=13, out_channels=1, kernel_size=1)
weighted_avg_hidden_states = aggregator(time_reduced_hidden_states.unsqueeze(0)).squeeze()
print(weighted_avg_hidden_states.shape)
478b5a8a49347b4d9d0f84ada82dc16b
mit
[]
false
Citation

```bibtex
@article{li2022large,
  title={Large-Scale Pretrained Model for Self-Supervised Music Audio Representation Learning},
  author={Li, Yizhi and Yuan, Ruibin and Zhang, Ge and Ma, Yinghao and Lin, Chenghua and Chen, Xingran and Ragni, Anton and Yin, Hanzhi and Hu, Zhijie and He, Haoyu and others},
  year={2022}
}

@article{li2022map,
  title={MAP-Music2Vec: A Simple and Effective Baseline for Self-Supervised Music Audio Representation Learning},
  author={Li, Yizhi and Yuan, Ruibin and Zhang, Ge and Ma, Yinghao and Lin, Chenghua and Chen, Xingran and Ragni, Anton and Yin, Hanzhi and Hu, Zhijie and He, Haoyu and others},
  journal={arXiv preprint arXiv:2212.02508},
  year={2022}
}
```
fce2998ee26871059fd40fb67d715d93
apache-2.0
['translation']
false
zho-nld

* source group: Chinese
* target group: Dutch
* OPUS readme: [zho-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Hira cmn_Kana cmn_Latn
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.eval.txt)
d71dd72007b6a1b48f3aba41c5cd3e53
apache-2.0
['translation']
false
System Info:
- hf_name: zho-nld
- source_languages: zho
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'nl']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: nld
- short_pair: zh-nl
- chrF2_score: 0.525
- bleu: 31.5
- brevity_penalty: 0.931
- ref_len: 13575.0
- src_name: Chinese
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: nl
- prefer_old: False
- long_pair: zho-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
173c1b866585b3d1e7eb5b98f6a86fb8
apache-2.0
['translation']
false
opus-mt-fi-lg

* source languages: fi
* target languages: lg
* OPUS readme: [fi-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-lg/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lg/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lg/opus-2020-01-24.eval.txt)
f4c548d66ac8dfc7c67019922e7d1f74
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1656
- F1: 0.8589
52a8bbdf48f2de8bffdc6e4865bf5fcc
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2905 | 1.0 | 715 | 0.1783 | 0.8310 |
| 0.1461 | 2.0 | 1430 | 0.1600 | 0.8455 |
| 0.0948 | 3.0 | 2145 | 0.1656 | 0.8589 |
95bfc0f2e4b5424632dcdbfa6e6daa39
apache-2.0
['generated_from_keras_callback']
false
nandysoham/5-clustered

This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.5941
- Train End Logits Accuracy: 0.8333
- Train Start Logits Accuracy: 0.7955
- Validation Loss: 0.8305
- Validation End Logits Accuracy: 0.7820
- Validation Start Logits Accuracy: 0.7556
- Epoch: 1
a2bc16e0a52022a1392a73b1418f7ab7
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 132, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
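With `power` 1.0 and `cycle` False, this PolynomialDecay schedule is simply a linear ramp from 2e-05 down to 0 over 132 steps, then flat. A small sketch mirroring the Keras-style formula (an illustration of the config above, not the library implementation itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=132, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: interpolate from
    initial_lr down to end_lr over decay_steps, then stay at end_lr."""
    step = min(step, decay_steps)
    frac = 1 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # schedule starts at initial_lr
print(polynomial_decay(66))   # halfway through the linear ramp
print(polynomial_decay(200))  # clamped at end_lr after decay_steps
```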
c8237f91afc4b31f26dc0f0b18a40763
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.9118 | 0.7405 | 0.7093 | 0.8196 | 0.7744 | 0.7556 | 0 |
| 0.5941 | 0.8333 | 0.7955 | 0.8305 | 0.7820 | 0.7556 | 1 |
60d721d02a8e6b4f80ac157bdbd69c5c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-emotion-finetuned

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.1518
- Acc: 0.935
- F1: 0.9350
631e2e146aeb0a91bc7d547d943c2b65
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Acc | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|
| 0.1734 | 1.0 | 250 | 0.1624 | 0.928 | 0.9279 |
| 0.1187 | 2.0 | 500 | 0.1518 | 0.935 | 0.9350 |
4f6efd7e7db874f0373e1fc9067406ba
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Kn - Bharat Ramanathan

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1398
- Wer: 23.8167
d0dbdfa140a2daf8cc1b4b5e03836693
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4126 | 0.1 | 500 | 2.2797 | 127.2639 |
| 0.2099 | 0.1 | 1000 | 0.1774 | 28.2494 |
| 0.1736 | 0.2 | 1500 | 0.1565 | 27.5733 |
| 0.1506 | 0.3 | 2000 | 0.1514 | 26.0331 |
| 0.1373 | 0.4 | 2500 | 0.1494 | 24.4177 |
| 0.1298 | 0.5 | 3000 | 0.1456 | 25.0563 |
| 0.1198 | 1.06 | 3500 | 0.1436 | 24.4177 |
| 0.1102 | 0.1 | 4000 | 0.1452 | 24.2675 |
| 0.1097 | 0.2 | 4500 | 0.1402 | 24.3050 |
| 0.105 | 0.3 | 5000 | 0.1398 | 23.8167 |
566fca7d297a1869d887b42b378c1465
mit
['generated_from_keras_callback']
false
Andaf/chatbot-trvlk-finetuned-squad

This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.5335
- Validation Loss: 6.4566
- Epoch: 1
91c41aee4f471f6f780e5218a5e21c22
mit
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14444, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
2dfc01eeaeba60a21fc287eeeee24b11
apache-2.0
['classification', 'zero-shot']
false
Erlangshen-UniMC-MegatronBERT-1.3B-Chinese

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
- API: [Fengshen-OpenAPI](https://fengshenbang-lm.com/open-api)
4a643cbef02d8ff07ece93ef9c8adcc8
apache-2.0
['classification', 'zero-shot']
false
模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBERT | 1.3B | Chinese |
58119c9a8bed844f10d54828414f8fee
apache-2.0
['classification', 'zero-shot']
false
模型信息 Model Information

我们为零样本学习者提出了一种与输入无关的新范式,从某种意义上说,它与任何格式兼容并适用于一系列语言任务,例如文本分类、常识推理、共指解析、情感分析。我们的方法将零样本学习转化为多项选择任务,避免常用的大型生成模型(如 FLAN)中的问题。它不仅增加了模型的泛化能力,而且显着减少了对参数的需求。我们证明了这种方法可以在通用语言基准上取得最先进的性能,并在自然语言推理和文本分类等任务上产生令人满意的结果。更多详细信息可以参考我们的[论文](https://arxiv.org/abs/2210.08590)或者[GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)

We propose a new paradigm for zero-shot learners that is input-agnostic, in the sense that it is compatible with any format and applicable to a list of language tasks, such as text classification, commonsense reasoning, coreference resolution, and sentiment analysis. Our approach converts zero-shot learning into multiple-choice tasks, avoiding problems found in commonly used large generative models such as FLAN. It not only adds generalization ability to the models, but also significantly reduces the number of parameters required. We demonstrate that this approach leads to state-of-the-art performance on common language benchmarks, and produces satisfactory results on tasks such as natural language inference and text classification. For more details, please refer to our [paper](https://arxiv.org/abs/2210.08590) or [GitHub](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/examples/unimc/)
0a058315c8890770455d77ee34602c17
apache-2.0
['classification', 'zero-shot']
false
下游效果 Performance

**Few-shot**

| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|------------|------------|----------|-----------|----------|-----------|-----------|-----------|----------|-----------|-----------|
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 65.4 | 35.5 | 49 | 32.8 | 33 | 60.7 | 14.9 | 50 | 55.6 | 44.1 |
| [FineTuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 66.5 | 57 | 51.6 | 42.1 | 32 | 60.4 | 15 | 60.1 | 50.3 | 48.34 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 84 | 59.9 | 56.4 | 50.3 | 38.1 | 58.4 | 40.6 | 61.1 | 58.7 | 56.39 |
| [P-tuning](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 80.6 | 56.6 | 55.9 | 52.6 | 35.7 | 60.8 | 39.61 | 51.8 | 55.7 | 54.37 |
| [EFL](https://arxiv.org/pdf/2107.07498.pdf)-ERNIE1.0-110M | 76.7 | 47.9 | 56.3 | 52.1 | 48.7 | 54.6 | 30.3 | 52.8 | 52.3 | 52.7 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 88.64 | 54.08 | 54.32 | 48.6 | 66.55 | 73.76 | 67.71 | 52.54 | 59.92 | 62.86 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 89.53 | 57.3 | 54.25 | 50 | 70.59 | 77.49 | 78.09 | 55.73 | 65.16 | 66.46 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **89.278** | **60.9** | **57.46** | 52.89 | **76.33** | **80.37** | **90.33** | 61.73 | **79.15** | **72.05** |

**Zero-shot**

| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
|---------------|-----------|-----------|-----------|-----------|-----------|----------|----------|----------|-----------|-----------|
| [GPT](https://arxiv.org/pdf/2107.07498.pdf)-110M | 57.5 | 26.2 | 37 | 19 | 34.4 | 50 | 65.6 | 50.1 | 50.3 | 43.4 |
| [PET](https://arxiv.org/pdf/2107.07498.pdf)-RoBERTa-110M | 85.2 | 12.6 | 26.1 | 26.6 | 40.3 | 50.6 | 57.6 | 52.2 | 54.7 | 45.1 |
| [NSP-BERT](https://arxiv.org/abs/2109.03564)-110M | 86.9 | 47.6 | 51 | 41.6 | 37.4 | 63.4 | 52 | **64.4** | 59.4 | 55.96 |
| [ZeroPrompt](https://arxiv.org/abs/2201.06910)-T5-1.5B | - | - | - | 16.14 | 46.16 | - | - | - | 47.98 | - |
| [Yuan1.0-13B](https://arxiv.org/abs/2110.04725) | 88.13 | 38.99 | 57.47 | 38.82 | 48.13 | 59.38 | 86.14 | 50 | 38.99 | 56.22 |
| [ERNIE3.0-240B](https://arxiv.org/abs/2107.02137) | 88.75 | **50.97** | **57.83** | **40.42** | 53.57 | 64.38 | 87.13 | 56.25 | 53.46 | 61.41 |
| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 86.16 | 31.26 | 46.61 | 26.54 | 66.91 | 73.34 | 66.68 | 50.09 | 53.66 | 55.7 |
| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 87.5 | 30.4 | 47.6 | 31.5 | 69.9 | 75.9 | 78.17 | 49.5 | 60.55 | 59.01 |
| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **88.79** | 42.06 | 55.21 | 33.93 | **75.57** | **79.5** | **89.4** | 50.25 | **66.67** | **64.53** |

**Full dataset**

| Model | AFQMC | TNEWS1.1 | IFLYTEK | OCNLI | CMNLI | WSC1.1 | CSL | CHID | C3 |
|--------------------------------------------|-------|----------|---------|-------|-------|--------|-------|-------|-------|
| RoBERTa-Base | 74.06 | 57.5 | 60.36 | 74.3 | 79.73 | 83.48 | 85.37 | - | - |
| RoBERTa-Large | 74.88 | 58.79 | 61.52 | 77.7 | 81.4 | 89.14 | 86 | - | - |
| [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B) 「Finetuning」 | 76.08 | 59.38 | 62.34 | 79.14 | 81 | 92.43 | 87.2 | 84.65 | 86.77 |
| [Erlangshen-UniMC-MegatronBERT-1.3B-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | 77.09 | 60.4 | 62.67 | 83.05 | 84.76 | 93.74 | 87.67 | 85.93 | 86.54 |
d21825bb484242db98e59287373d6a11
apache-2.0
['classification', 'zero-shot']
false
使用 Usage ```shell git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git cd Fengshenbang-LM pip install --editable . ``` ```python3 import argparse from fengshen.pipelines.multiplechoice import UniMCPipelines total_parser = argparse.ArgumentParser("TASK NAME") total_parser = UniMCPipelines.piplines_args(total_parser) args = total_parser.parse_args() pretrained_model_path = 'IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese' args.learning_rate=2e-5 args.max_length=512 args.max_epochs=3 args.batchsize=8 args.default_root_dir='./' model = UniMCPipelines(args, pretrained_model_path) train_data = [] dev_data = [] test_data = [ {"texta": "放弃了途观L和荣威RX5,果断入手这部车,外观霸气又好开", "textb": "", "question": "下面新闻属于哪一个类别?", "choice": [ "房产", "汽车", "教育", "科技" ], "answer": "汽车", "label": 1, "id": 7759} ] if args.train: model.train(train_data, dev_data) result = model.predict(test_data) for line in result[:20]: print(line) ```
977815a209881044cb1a19a9e7535ea3
cc-by-sa-4.0
['generated_from_trainer']
false
legal-bert-base-uncased-finetuned-RRamicus This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1520
99081a050049b5afae92a907d5f6c2b1
cc-by-sa-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.021 | 1.0 | 1118 | 1.3393 | | 1.2272 | 2.0 | 2236 | 1.2612 | | 1.2467 | 3.0 | 3354 | 1.2403 | | 1.2149 | 4.0 | 4472 | 1.2276 | | 1.1855 | 5.0 | 5590 | 1.2101 | | 1.1674 | 6.0 | 6708 | 1.2020 | | 1.1508 | 7.0 | 7826 | 1.1893 | | 1.1386 | 8.0 | 8944 | 1.1870 | | 1.129 | 9.0 | 10062 | 1.1794 | | 1.1193 | 10.0 | 11180 | 1.1759 |
1be1a7021600a8e51dcf28c6b51d23bd
other
['stable-diffusion', 'text-to-image']
false
Please check the license terms below carefully before use. If you can read English, please refer [here](https://huggingface.co/nakayama/DeDeDe/blob/main/README-en.md). DeDeDe is a Stable Diffusion model tuned to produce anime-style characters more easily. [Trinart Characters v2 Derrida](https://huggingface.co/naclbit/trinart_derrida_characters_v2_stable_diffusion) was add-difference merged into the base model [DreamLike Diffusion 1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0), using Stable Diffusion 1.4 as the difference base. The IN0-5 blocks were then adjusted with [DreamLike Photoreal 1.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0), and the model was further tuned on roughly 30,000 images generated with SD2.1, Novel AI, WD1.3/1.4, CoolJapan Diffusion 2.1, and Dreamlike Photoreal 2.0. The following Prompt/Negative Prompt are recommended. P: best quality, masterpiece NP: 3d, flat shading, flat color, retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb
d32351272a20ec289f0813c0b631c153
other
['stable-diffusion', 'text-to-image']
false
例 <img src="https://huggingface.co/nakayama/DeDeDe/resolve/main/img/image01.png" style="max-width:400px;" width="50%"/> ``` (((best quality, masterpiece, 8k))), detailed anime style of anime 1girl bust shot sitting and dipping in river and wetty wearing white transparent onepiece dress with detailed wavy pink hair pink and hetailed yellow eye yellow, water splash in gorgeous scene secret garden Negative prompt: [[[3d]]], (((flat shading, flat color))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 1117106368, Size: 512x768, Model hash: 6d1729a039, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires resize: 768x1152, Hires steps: 5, Hires upscaler: Latent ``` <img src="https://huggingface.co/nakayama/DeDeDe/resolve/main/img/image02.png" style="max-width:400px;" width="50%"/> ``` (((best quality, masterpiece))), detailed anime style of bunny girl bishoujo from front wearing intricate frill jirai kei bikini with detailed wavy pink hair pink and detailed yellow eye yellow and lying on the bed in (((glitter harajuku kawaii messy room with flower and candy))) Negative prompt: 3d, (((flat shading, flat color))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb, eating Steps: 50, Sampler: DPM++ SDE Karras, CFG scale: 5.5, Seed: 911018641, Size: 768x512, Model hash: 6d1729a039, Denoising strength: 0.2, Clip skip: 2, ENSD: 31337, Hires resize: 
1152x768, Hires steps: 10, Hires upscaler: ESRGAN_4x ``` <img src="https://huggingface.co/nakayama/DeDeDe/resolve/main/img/image03.png" style="max-width:400px;" width="50%"/> ``` (((best quality, masterpiece))), detailed anime style of 1girl bishoujo full body standing and looking from viewer and wearing classical frill dress with cape with wavy detailed pink hair pink and detailed yellow hair yellow in scenic view british fantastic landscape with flowing and golden hour, dynamic pose Negative prompt: [[[[3d]]]], (((flat shading, flat color))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 5.5, Seed: 2149903685, Size: 768x512, Model hash: 6d1729a039, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Hires resize: 1152x768, Hires steps: 10, Hires upscaler: Latent ``` <img src="https://huggingface.co/nakayama/DeDeDe/resolve/main/img/image04.png" style="max-width:400px;" width="50%"/> ``` (((best quality, masterpiece, 8k))), detailed anime style of 1boy cowboy shot wearing samurai outfit with flat chest and fighting pose, fist, motion blur Negative prompt: 3d, flat shading, flat color, retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb Steps: 20, Sampler: Euler a, CFG scale: 8, Seed: 2136917906, Size: 512x768, Model hash: 6d1729a039, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 768x1152, Hires steps: 20, Hires upscaler: Latent ``` 
<img src="https://huggingface.co/nakayama/DeDeDe/resolve/main/img/image05.png" style="max-width:400px;" width="50%"/> ``` (((best quality, masterpiece, 8k))), detailed photorealistic style of 1boy cowboy shot wearing mad max outfit with flat chest and fighting pose, fist, motion blur, abandoned city background Negative prompt: (((flat shading, flat color))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb Steps: 20, Sampler: Euler a, CFG scale: 8, Seed: 3787706619, Size: 768x512, Model hash: 6d1729a039, Denoising strength: 0.7, Clip skip: 2, ENSD: 31337, Hires resize: 1152x768, Hires steps: 5, Hires upscaler: Latent ```
bef88af01c1fae76c7171feb19c018f2
other
['stable-diffusion', 'text-to-image']
false
Merge and training procedure The strings shown next to each model below are the ckpt hashes displayed when selecting the model in the Automatic1111 WebUI, as of commit hash c98cb0f8ecc904666f47684e238dd022039ca16f. 1. Add-difference merge Trinart Derrida into Dreamlike Diffusion 1.0 | Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name | | --- | --- | --- | --- | --- | | Add Difference @ 1.0 | DreamLike Diffusion 1.0(0aecbcfa2c) | TrinArt Characters v2 Derrida(42d3f359b0) | Stable Diffusion 1.4(fe4efff1e1) | DDD_pre1(d1ac03017b) | 2. Edit IN00-IN05 of DDD_pre1 with Dreamlike Photoreal 1.0 via a block-weighted (layer-wise) merge | Model: A | Model: B | Weight | Base alpha | Merge Name | | --- | --- | --- | --- | --- | | DDD_pre1(d1ac03017b) | Dreamlike Photoreal 1.0(f403e4e2a5) | 0.45,0.45,0.4,0.35,0.3,0.25,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | DDD_pre2(601ec74593) | 3. Train DDD_pre2 on [a self-prepared set of images consisting of outputs from other diffusion models](https://huggingface.co/datasets/nakayama/DeDeDeDataset) The services/models used to prepare the set were SD2.1, Novel AI, WD1.3/1.4, CoolJapan Diffusion 2.1, and Dreamlike Photoreal 2.0. About 30,000 images in total; together with flipped copies, training ran for 60,000 steps at a learning rate of 5e-6. The resulting model is DDD_pre3(4709475652). 4. Merge DDD_pre2 into DDD_pre3 by weighted average | Interpolation Method | Primary Model | Secondary Model | Merge Name | | --- | --- | --- | --- | | Weighted Sum @ 0.5 | DDD_pre3(4709475652) | DDD_pre2(601ec74593) | DeDeDe(6d1729a039) |
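The merge operations in steps 1, 2, and 4 reduce to simple per-weight arithmetic. Below is a minimal illustrative sketch: plain Python floats stand in for checkpoint tensors, and the key names are hypothetical (real merges operate on full state dicts).

```python
# Illustrative sketch of the three merge operations; dict values stand in
# for checkpoint tensors, and key names are hypothetical.

def add_difference(primary, secondary, tertiary, multiplier=1.0):
    """Step 1: primary + multiplier * (secondary - tertiary)."""
    return {k: primary[k] + multiplier * (secondary[k] - tertiary[k])
            for k in primary}

def weighted_sum(a, b, alpha):
    """Step 4: (1 - alpha) * A + alpha * B."""
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

def block_merge(a, b, block_alphas):
    """Step 2: weighted sum with a separate alpha per UNet block,
    e.g. 0.45 for IN00 and 0 for untouched blocks."""
    return {k: (1 - block_alphas.get(k, 0.0)) * a[k]
               + block_alphas.get(k, 0.0) * b[k]
            for k in a}
```

With `block_alphas={'IN00': 0.45, ...}` only the listed input blocks move toward model B, which is how steps like "edit IN00-IN05" leave the rest of the network unchanged.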
d72994ebfb92925eef68b7a085de1fb2
other
['stable-diffusion', 'text-to-image']
false
DeDeDe_ip2p_0.7_0.8.ckpt/DeDeDe_ip2p_0.7_1.0.ckpt These models were produced by extracting a [task vector](https://zenn.dev/discus0434/articles/ef418a8b0b3dc0) from the [Instruct pix2pix](https://huggingface.co/timbrooks/instruct-pix2pix) model and adding it. Their magnitudes are DeDeDe 0.8 / Instruct Pix2Pix 0.7 and DeDeDe 1.0 / Instruct Pix2Pix 0.7, respectively. The license below is inherited from Instruct Pix2Pix. Copyright (c) 2023 Ren Nakayama Released under the MIT license https://huggingface.co/nakayama/DeDeDe/blob/main/MIT-License
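A hedged sketch of what such a task-vector addition amounts to, assuming the common recipe (task vector = fine-tuned weights minus the corresponding base weights; plain floats stand in for tensors, and the function name is hypothetical):

```python
# Illustrative only: the stated magnitudes (e.g. DeDeDe 0.8 / ip2p 0.7)
# scale the target model and the extracted task vector respectively.
def apply_task_vector(model, tuned, base, model_scale, vector_scale):
    return {k: model_scale * model[k] + vector_scale * (tuned[k] - base[k])
            for k in model}
```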
9ac53bb200f258946b7712df9b2e3e7a
other
['stable-diffusion', 'text-to-image']
false
About the license Because this model derives from Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0, the **modified** CreativeML OpenRAIL-M license of those models applies. The following summarizes the modified terms (originally given here as a DeepL Japanese translation); the language that takes precedence for interpretation is English. - **You can't host or use the model or its derivatives on websites/apps/etc. from which you earn, or plan to earn, income or donations. If you want to do that, email contact@dreamlike.art.** - **You are free to host the model card and files (without actual inference or fine-tuning) on both commercial and non-commercial websites/apps/etc. State the full model name (Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0) and include a link to the model card ( https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 / https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/ ).** - **You are free to host the model or its derivatives on completely non-commercial websites/apps/etc. (meaning you earn no revenue or donations whatsoever). State the full model name (Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0) and include a link to the model card ( https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 / https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/ ).** - **You are free to use outputs of the model, or outputs of derivatives of the model, for commercial purposes in teams of 10 or fewer people.** - You can't use the model to deliberately produce or share illegal or harmful outputs or content. - The authors claim no rights on the outputs you generate; you are free to use them and are responsible for their use, which must not go against the provisions set in the license. - You may redistribute the weights. If you do, be aware that you must include the same use restrictions as those in the license and share a copy of the **modified** CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Read the full license here: https://huggingface.co/nakayama/DeDeDe/blob/main/License.md
736feacc62add6a08591d886b50b072b
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6150 - Matthews Correlation: -0.0293
27bebbe1ce09854b03dd44021c552b64
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6207 | 1.0 | 34 | 0.6165 | 0.0 | | 0.6034 | 2.0 | 68 | 0.6150 | -0.0293 | | 0.5759 | 3.0 | 102 | 0.6505 | -0.0293 | | 0.5443 | 4.0 | 136 | 0.6320 | 0.0549 | | 0.4957 | 5.0 | 170 | 0.6662 | 0.1327 | | 0.4623 | 6.0 | 204 | 0.7247 | 0.0675 | | 0.4249 | 7.0 | 238 | 0.7533 | 0.0972 |
e1acf201740d2366b8354543b5947699
mit
['distilbert', 'pytorch', 'text-classification', 'mobile app descriptions', 'playstore']
false
Model description DistilBERT is a transformer model, smaller and faster than BERT, which was pre-trained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model is fine-tuned to classify a mobile app description into one of **6 play store categories**. It was trained on 9,000 samples of English app descriptions and the associated categories of apps available on [Google Play](https://play.google.com/store/apps).
244e203e4a37da8ae5c92d3a6b990e43
mit
['distilbert', 'pytorch', 'text-classification', 'mobile app descriptions', 'playstore']
false
Fine-tuning The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 512. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.9034534096919489, found after 4 epochs. The accuracy of the model on the test set was 0.9033.
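Since the card names cross-entropy but does not show it, here is a minimal per-example sketch of the loss on raw logits (pure Python, illustrative only; the actual training uses the standard loss built into the model):

```python
import math

def cross_entropy(logits, target_index):
    """-log softmax(logits)[target_index], computed stably."""
    m = max(logits)  # subtract the max before exponentiating for stability
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[target_index] - log_sum_exp)
```

For example, with uniform logits over the 6 categories the loss is `log(6) ≈ 1.79`, and it shrinks as the logit of the correct category grows.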
759eda9ba3b21814087c287638a21a57
mit
['distilbert', 'pytorch', 'text-classification', 'mobile app descriptions', 'playstore']
false
How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app") model = AutoModelForSequenceClassification.from_pretrained("nsi319/distilbert-base-uncased-finetuned-app") classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) classifier("Disney+ has something for everyone and every mood, all in one place. With endless entertainment from Disney, Pixar, Marvel, Star Wars, National Geographic and Star, there's always something exciting to watch. Watch the latest releases, Original series and movies, classic films, throwbacks and so much more.") '''Output''' [{'label': 'Entertainment', 'score': 0.9014402031898499}] ```
01d2f884451bbf4001dce1b9e068a65b
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5026 - Matthews Correlation: 0.4097
1f4e4488dbf36bb3aa23a544ed608e66
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5335 | 1.0 | 535 | 0.5026 | 0.4097 |
103065a4b2124d07bac3298dfae595c7
mit
[]
false
all rings albuns on Stable Diffusion This is the `<rings-all-albuns>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<rings-all-albuns> 0](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/6.jpeg) ![<rings-all-albuns> 1](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/5.jpeg) ![<rings-all-albuns> 2](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/0.jpeg) ![<rings-all-albuns> 3](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/4.jpeg) ![<rings-all-albuns> 4](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/1.jpeg) ![<rings-all-albuns> 5](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/3.jpeg) ![<rings-all-albuns> 6](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/2.jpeg)
5f3fc0dec9e651475b15fa11c94a6b03
mit
[]
false
tela lenca on Stable Diffusion This is the `<tela-lenca>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<tela-lenca> 0](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/1.jpeg) ![<tela-lenca> 1](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/3.jpeg) ![<tela-lenca> 2](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/2.jpeg) ![<tela-lenca> 3](https://huggingface.co/sd-concepts-library/tela-lenca/resolve/main/concept_images/0.jpeg)
b563e78c7995c06cf360f6d896c9bd27
agpl-3.0
['generated_from_trainer']
false
XLMR-ENIS-finetuned-ner This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0907 - Precision: 0.8666 - Recall: 0.8511 - F1: 0.8588 - Accuracy: 0.9834
63477b655cd8126f8c8227ee4cfdbc4d
agpl-3.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0573 | 1.0 | 2904 | 0.0961 | 0.8543 | 0.8134 | 0.8334 | 0.9806 | | 0.0314 | 2.0 | 5808 | 0.0912 | 0.8709 | 0.8282 | 0.8490 | 0.9819 | | 0.0203 | 3.0 | 8712 | 0.0907 | 0.8666 | 0.8511 | 0.8588 | 0.9834 |
1ad0013de08ca706d9c168d823ed8566
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 12589 - mixed_precision_training: Native AMP
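The derived quantities in the list follow directly from the raw values; a small illustrative computation (using `math.ceil` for the warmup length, a common convention, though frameworks may round differently):

```python
import math

def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    # 64 per device * 2 accumulation steps -> total train batch size of 128
    return per_device_batch * grad_accum_steps * num_devices

def warmup_steps(total_steps, warmup_ratio):
    # warmup_ratio 0.01 of 12589 training steps -> ~126 warmup steps
    return math.ceil(total_steps * warmup_ratio)
```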
e481c4a9ad0a0ec56a351e2fb5298a2f
apache-2.0
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True, 'skip_tokens': 1649934336}, 'generation': {'batch_size': 128, 'every_n_steps': 512, 'force_call_on': [12589], 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_hits_threshold': 0, 'num_samples': 2048}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_hits_threshold': 0, 'num_samples': 2048, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'every_n_steps': 512, 'force_call_on': [12589], 'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9b71edc6c769705c1ef1955b6f5cfdd5a7d1b802', 'value_head_config': {'is_detached': False}}, 'path_or_name': 'kejian/spectacular-awr'}, 'objective': {'alpha': 0.05, 'beta': 1, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'deliberate-awr', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 12589, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649934336, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
91dcedd17a6ed4f632c009c92c152ce1
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'pt']
false
wavlm-large-CORAA-pt-cv7 This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2546 - Wer: 0.2261
4c6520db90f869db4150f30372d8a7cb
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'pt']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 5000
eeaec8bbae50a8e5dca0f58e98ea309e
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'pt']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 | | 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 | | 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 | | 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 | | 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 | | 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 | | 0.461 | 0.91 | 700 | 0.3047 | 0.2974 | | 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 | | 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 | | 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 | | 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 | | 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 | | 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 | | 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 | | 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 | | 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 | | 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 | | 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 | | 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 | | 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 | | 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 | | 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 | | 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 | | 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 | | 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 | | 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 | | 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 | | 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 | | 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 | | 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 | | 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 | | 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 | | 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 | | 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 | | 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 | | 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 | | 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 | | 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 | | 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 | | 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 | | 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 | | 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 | | 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 | | 0.2404 | 5.71 | 4400 | 0.2543 
| 0.2287 | | 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 | | 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 | | 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 | | 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 | | 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 | | 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 |
d3869d90f7428062ee238728d7e44a74
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1078 - Precision: 0.8665 - Recall: 0.8817 - F1: 0.8740 - Accuracy: 0.9717
0e4c758c81cf6f7da212f1a1b3ebea0d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 220 | 0.0993 | 0.8511 | 0.8780 | 0.8643 | 0.9721 | | No log | 2.0 | 440 | 0.0732 | 0.8913 | 0.9122 | 0.9016 | 0.9783 | | 0.1878 | 3.0 | 660 | 0.0681 | 0.8984 | 0.9186 | 0.9083 | 0.9797 |
9944b8179a7ad8fb074d9e1b4eace92a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2354 - Accuracy: 0.917 - F1: 0.9171
3125ec078735d653517bacee82aa4a65
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8588 | 1.0 | 250 | 0.3318 | 0.904 | 0.9013 | | 0.2586 | 2.0 | 500 | 0.2354 | 0.917 | 0.9171 |
6fea5e7e06d58b5aaa71da798cc4c99c
mit
['nr', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
false
How to use ```python from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_nbl_roberta") model = AutoModelForMaskedLM.from_pretrained("jannesg/takalane_nbl_roberta") unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer) ```
e02cd7e1c7d287ca4bf85de27fb9ef02
mit
['generated_from_trainer']
false
gpt-expt-sp-v3-K-200-9-mixed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0470
2da74e58f964c8d738f73689456b8d4b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 500 - mixed_precision_training: Native AMP
32e868af3de8849d54742a625e5057a5
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 0.4566 | 12.75 | 5000 | 0.0648 | | 0.0684 | 25.51 | 10000 | 0.0535 | | 0.058 | 38.26 | 15000 | 0.0505 | | 0.0545 | 51.02 | 20000 | 0.0495 | | 0.0527 | 63.77 | 25000 | 0.0491 | | 0.0517 | 76.53 | 30000 | 0.0487 | | 0.051 | 89.29 | 35000 | 0.0484 | | 0.0505 | 102.04 | 40000 | 0.0482 | | 0.0502 | 114.79 | 45000 | 0.0480 | | 0.0499 | 127.55 | 50000 | 0.0480 | | 0.0497 | 140.31 | 55000 | 0.0479 | | 0.0495 | 153.06 | 60000 | 0.0478 | | 0.0493 | 165.81 | 65000 | 0.0477 | | 0.0491 | 178.57 | 70000 | 0.0477 | | 0.0489 | 191.33 | 75000 | 0.0476 | | 0.0488 | 204.08 | 80000 | 0.0476 | | 0.0486 | 216.83 | 85000 | 0.0476 | | 0.0485 | 229.59 | 90000 | 0.0475 | | 0.0484 | 242.35 | 95000 | 0.0474 | | 0.0483 | 255.1 | 100000 | 0.0473 | | 0.0482 | 267.86 | 105000 | 0.0473 | | 0.0481 | 280.61 | 110000 | 0.0473 | | 0.048 | 293.37 | 115000 | 0.0472 | | 0.0479 | 306.12 | 120000 | 0.0472 | | 0.0478 | 318.88 | 125000 | 0.0472 | | 0.0477 | 331.63 | 130000 | 0.0471 | | 0.0476 | 344.39 | 135000 | 0.0471 | | 0.0475 | 357.14 | 140000 | 0.0471 | | 0.0475 | 369.9 | 145000 | 0.0471 | | 0.0474 | 382.65 | 150000 | 0.0471 | | 0.0473 | 395.41 | 155000 | 0.0470 | | 0.0473 | 408.16 | 160000 | 0.0470 | | 0.0472 | 420.92 | 165000 | 0.0470 | | 0.0472 | 433.67 | 170000 | 0.0470 | | 0.0472 | 446.43 | 175000 | 0.0470 | | 0.0472 | 459.18 | 180000 | 0.0470 | | 0.0471 | 471.94 | 185000 | 0.0470 | | 0.0471 | 484.69 | 190000 | 0.0470 | | 0.0471 | 497.45 | 195000 | 0.0470 |
aa3e184cd9fd8977a01e1e5d5e6d0870
apache-2.0
['abusive text classification']
false
```py from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification, pipeline model_path = 'marianna13/xlm-roberta-fine-tuned-on-russian-abusive-language' id2label = { 0:'неопасный текст', 1:'опасный текст' } label2id = { 'неопасный текст':0, 'опасный текст':1 } config = AutoConfig.from_pretrained(model_path, id2label=id2label, label2id=label2id) tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSequenceClassification.from_pretrained(model_path, config=config) text = "Прекрасный день." pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) pipe(text) ``` ```json [{'label': 'неопасный текст', 'score': 0.9249424934387207}] ```
4f59a197b64cc0275b693ebc645529cf
apache-2.0
['translation']
false
Download the pretrained model for English-Vietnamese available on the hub: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/vi-en") tokenizer = AutoTokenizer.from_pretrained("CLAck/vi-en") sentence = your_vietnamese_sentence ```
0b76a976708659f5b45b30fc059bf3b2
apache-2.0
['translation']
false
This token is needed to identify the source language: ```python input_sentence = "<2vi> " + sentence translated = model.generate(**tokenizer(input_sentence, return_tensors="pt", padding=True)) output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] ```
80e573e66b23e759ccf823a36713be0b
mit
[]
false
Exodus-Styling on Stable Diffusion This is the `<Exouds-Style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Exouds-Style> 0](https://huggingface.co/sd-concepts-library/exodus-styling/resolve/main/concept_images/1.jpeg) ![<Exouds-Style> 1](https://huggingface.co/sd-concepts-library/exodus-styling/resolve/main/concept_images/2.jpeg) ![<Exouds-Style> 2](https://huggingface.co/sd-concepts-library/exodus-styling/resolve/main/concept_images/3.jpeg) ![<Exouds-Style> 3](https://huggingface.co/sd-concepts-library/exodus-styling/resolve/main/concept_images/0.jpeg) ![<Exouds-Style> 4](https://huggingface.co/sd-concepts-library/exodus-styling/resolve/main/concept_images/4.jpeg)
781a5ff6daa840fbe9082764a7424f3a
apache-2.0
['tapas', 'sequence-classification']
false
TAPAS medium model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_medium_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained with a masked language modeling (MLM) objective plus an additional step the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_medium` Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.
2db9cff1c3b501b9c976b8b5f942d885
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
This model was trained by ftshijt using aishell3/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>. <p>&nbsp;</p> <ul> <li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li> <li><strong>Evaluate in the recipe</strong><pre> <code class="language-bash"> See ESPNet repo for how to use pre-trained models </pre></li> <li><strong>Config</strong><pre><code>config: conf/train.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 500 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 500 batch_size: 20 valid_batch_size: null batch_bins: 3750000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn - exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn - 
exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 240000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_no_dev/text - text - text - - dump/raw/train_no_dev/wav.scp - speech - sound - - dump/xvector/train_no_dev/xvector.scp - spembs - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - dump/xvector/dev/xvector.scp - spembs - kaldi_ark allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-06 weight_decay: 0.0 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - '' - d - sh - j - i4 - zh - l - x - e - b - g - i1 - h - q - m - u4 - t - z - ch - i3 - i2 - f - s - n - r - ian4 - e4 - ong1 - en2 - ai4 - k - ing2 - a1 - iou3 - uo3 - ao4 - u3 - ui4 - p - e2 - an1 - eng2 - c - in1 - ai2 - an4 - ian2 - ing1 - ai3 - ang4 - ao3 - ian1 - uo4 - ian3 - iao4 - ang1 - u2 - ü4 - u1 - a4 - eng1 - ing4 - üan2 - ie4 - en1 - iu4 - uei4 - ou4 - er4 - e1 - ei4 - an3 - ong2 - uo2 - ang3 - ou1 - ou3 - ong4 - eng4 - an2 - iang4 - a3 - iang1 - ia1 - iao1 - uan4 - ia4 - iu3 - ang2 - uo1 - ei3 - e3 - in4 - iang3 - ü1 - uan1 - en3 - iao3 - ie3 - ao1 - ai1 - ü2 - ing3 - er2 - ü3 - uan3 - üe4 - in3 - en - ei2 - üe2 - ie2 - en4 - ua4 - in2 - iu2 - uan2 - a2 - ie1 - ou2 - ui1 - iang2 - ong3 - i - uang3 - eng3 - ün4 - uang4 - uai4 - iong4 - v3 - iou2 - ui2 - un1 - üan4 - uang1 - ei1 - uang2 - o2 - a - ao2 - iao2 - ui3 - un4 - o1 - ua2 - un2 - uen2 - iu1 - v4 - ua1 - uei1 - üan3 - ün1 - üe1 - ün2 - uen4 - uei3 - uei2 - un3 - iou4 - o4 - er3 - uen1 - iong3 - iou1 - ia3 - üan1 - ia2 - iong1 - üe3 - uen3 - ve4 - iong2 - uai2 - uai1 - ua3 - ün3 - er - uai3 - ia - o3 - v2 - o - ueng1 - ei - '2' - ua - io1 - <sos/eos> odim: 
null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: pypinyin_g2p_phone feats_extract: fbank feats_extract_conf: n_fft: 2048 hop_length: 300 win_length: 1200 fs: 24000 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz tts: tacotron2 tts_conf: embed_dim: 512 elayers: 1 eunits: 512 econv_layers: 3 econv_chans: 512 econv_filts: 5 atype: location adim: 512 aconv_chans: 32 aconv_filts: 15 cumulate_att_w: true dlayers: 2 dunits: 1024 prenet_layers: 2 prenet_units: 256 postnet_layers: 5 postnet_chans: 512 postnet_filts: 5 output_activation: null use_batch_norm: true use_concate: true use_residual: false spk_embed_dim: 512 spk_embed_integration_type: add use_gst: true gst_heads: 4 gst_tokens: 16 dropout_rate: 0.5 zoneout_rate: 0.1 reduction_factor: 1 use_masking: true bce_pos_weight: 10.0 use_guided_attn_loss: true guided_attn_loss_sigma: 0.4 guided_attn_loss_lambda: 1.0 pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: 0.10.2a1 distributed: false</code></pre></li> </ul>
7e058bc64c2e1e5f882a3b07b3b0df55
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-mrpc-target-glue-cola This model is a fine-tuned version of [muhtasham/small-mlm-glue-mrpc](https://huggingface.co/muhtasham/small-mlm-glue-mrpc) on the GLUE CoLA dataset. It achieves the following results on the evaluation set: - Loss: 1.5250 - Matthews Correlation: 0.3249
dfc105fe511414469c1981b39051a42d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5316 | 1.87 | 500 | 0.6534 | 0.2440 | | 0.3404 | 3.73 | 1000 | 0.7046 | 0.3402 | | 0.2132 | 5.6 | 1500 | 0.8758 | 0.3651 | | 0.1523 | 7.46 | 2000 | 0.9977 | 0.3768 | | 0.1192 | 9.33 | 2500 | 1.0482 | 0.4193 | | 0.0964 | 11.19 | 3000 | 1.2212 | 0.4034 | | 0.0824 | 13.06 | 3500 | 1.4391 | 0.3765 | | 0.0736 | 14.93 | 4000 | 1.5250 | 0.3249 |
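The resulting checkpoint can be tried with the standard text-classification pipeline. A sketch; the repo id below is inferred from the card title, and the raw labels may be generic (`LABEL_0` / `LABEL_1`) unless an `id2label` mapping was saved with the checkpoint:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/small-mlm-glue-mrpc-target-glue-cola",  # assumed repo id
)
result = classifier("The boy quickly ran to the store.")
print(result)
```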
455fc02ec4aa68b9d8e3175ffefea7db
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s295 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
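A minimal usage sketch with the generic ASR pipeline; the hub repo id below is an assumption inferred from the card title, and the audio is one second of silence at 16 kHz, used only to illustrate the expected input format:

```python
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s295",  # assumed repo id
)
# Input must be sampled at 16 kHz, as noted above
audio = {"raw": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}
result = asr(audio)
print(result["text"])
```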
b182b8c30b87e5ee32b7652ccde92fed
apache-2.0
['translation']
false
opus-mt-en-ml * source languages: en * target languages: ml * OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md) * dataset: opus+bt+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip) * test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt) * test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt)
a3c20e4c99a20b868a7f396ecc533f0d
other
['stable-diffusion', 'text-to-image']
false
Cool Japan Diffusion 2.1.0 Model Card ![Eyecatch image](eyecatch.jpg) [Notice: from January 10, 2023, China imposes legal restrictions on image-generating AI.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (Warning for people located in China) English version is [here](README_en.md).
f1c1e4f760af3a54db072e98979a325a
other
['stable-diffusion', 'text-to-image']
false
Usage If you just want to try the model out, please use this [Space](https://huggingface.co/spaces/alfredplpl/cool-japan-diffusion-2-1-0). Detailed instructions for working with this model are given in [this user guide](https://alfredplpl.hatenablog.com/entry/2022/12/30/102636). The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-0/resolve/main/v2-1-0.ckpt). What follows is a Japanese translation of the standard model card.
17290243708292837d70fee067040eb1
mit
['generated_from_trainer']
false
ECHR_test_2 This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the lex_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2487 - Macro-f1: 0.4052 - Micro-f1: 0.5660
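ECHR in lex_glue is a multi-label task: a case can violate several Convention articles at once, so predictions come from thresholding per-label sigmoids rather than an argmax. This is what the Macro-F1 / Micro-F1 figures above are computed over. A sketch with illustrative logits (not from the real model):

```python
import torch

logits = torch.tensor([[2.1, -0.5, -0.3, -3.0]])  # one example, 4 hypothetical labels
probs = torch.sigmoid(logits)
preds = (probs > 0.5).int()
print(preds.tolist())  # [[1, 0, 0, 0]]
```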
0cb909b15b4ec1efae2bffab4edb9e32
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
c0b150c0f2487076fc2fabcb0fff2e79
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.2056 | 0.44 | 500 | 0.2846 | 0.3335 | 0.4763 | | 0.1698 | 0.89 | 1000 | 0.2487 | 0.4052 | 0.5660 |
f0853e367059dbfb453ec1b318aa390a
apache-2.0
['Twitter', 'Multilingual']
false
TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-green.svg?style=flat-square)](http://makeapullrequest.com) [![arXiv](https://img.shields.io/badge/arXiv-2203.15827-b31b1b.svg)](https://arxiv.org/abs/2209.07562) This repo contains models, code and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562). [[PDF]](https://arxiv.org/pdf/2209.07562.pdf) [[HuggingFace Models]](https://huggingface.co/Twitter)
8f1c534561ef9c35999619cae565f733
apache-2.0
['Twitter', 'Multilingual']
false
Overview TwHIN-BERT is a new multilingual Tweet language model trained on 7 billion Tweets in over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models in that it is trained not only with text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN). TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement.
638bc0e9ba0ae425e57445135c062fdb
apache-2.0
['Twitter', 'Multilingual']
false
1. Pretrained Models We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the [HuggingFace BERT models](https://github.com/huggingface/transformers). | Model | Size | Download Link (🤗 HuggingFace) | | ------------- | ------------- | --------- | | TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) | | TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) | To use these models in 🤗 Transformers: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-large') model = AutoModel.from_pretrained('Twitter/twhin-bert-large') inputs = tokenizer("I'm using TwHIN-BERT!", return_tensors="pt") outputs = model(**inputs) ```
84e3be96f6e8577fc87ab4c5a47a4472