Dataset schema:
- license: string (2-30 chars)
- tags: string (2-513 chars)
- is_nc: bool (1 class)
- readme_section: string (201-597k chars)
- hash: string (32 chars)
apache-2.0
['translation']
false
opus-mt-es-st * source languages: es * target languages: st * OPUS readme: [es-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-st/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-st/opus-2020-01-16.eval.txt)
c57a05e76044311858fd30aaa60b46cf
apache-2.0
['generated_from_keras_callback']
false
imdb_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4690 - Validation Loss: 0.2538 - Train Accuracy: 0.904 - Epoch: 0
58955563f8590cef11bb0582c0b545df
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
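The optimizer config above encodes a linear schedule: with `power: 1.0` and `cycle: False`, PolynomialDecay ramps the learning rate from 2e-05 down to 0 over 625 steps. A pure-Python sketch of that formula (illustrative only, not the training code itself):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=625, end_lr=0.0, power=1.0):
    """Pure-Python sketch of Keras PolynomialDecay with cycle=False:
    the step is clamped at decay_steps, after which the rate stays at end_lr."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 2e-05 at the start of training
print(polynomial_decay(625))  # 0.0 once decay_steps is reached
```

With `power=1.0` this is just a straight-line interpolation between the initial and final learning rates.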
8173c5c4005c40c9adc0887be4eb9197
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Poison Model Welcome to the poison model. This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Unlike other anime-style models, it has a slightly realistic style (but not too much), especially in character painting. It was fine-tuned from the [anything model](https://huggingface.co/Linaqruf/anything-v3.0) and merged back into anything after training. This model is converted from [poison](https://huggingface.co/Fansy/poison). Compare result: ![](image1.jpg)
297d1a804f4e4d786ab09f5c0a221c77
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
Usage

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

repo_id = "mrdabin/poison"
pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
93ff5f649b0c6f7c0175671a4e3980c4
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-eli5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset. It achieves the following results on the evaluation set: - Loss: 3.6813 - Rouge1: 13.044 - Rouge2: 1.9483 - Rougel: 10.5237 - Rougelsum: 11.8549 - Gen Len: 18.997
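The Rouge1 score above is a unigram-overlap F-score between generated and reference answers. A simplified pure-Python sketch of ROUGE-1 F1 (the reported numbers are likely computed with the `rouge_score` package, which also applies stemming and bootstrap aggregation; this version shows only the core idea):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """Simplified ROUGE-1 F1: clipped unigram overlap, no stemming or tokenizer rules."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # each candidate unigram counted at most as often as in the reference
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat sat"), 4))  # 0.6667
```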
339f5642a28563d4d6df8777bb38432b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:| | 3.8881 | 1.0 | 17040 | 3.6813 | 13.044 | 1.9483 | 10.5237 | 11.8549 | 18.997 |
10d7462064b52e9e26e40ea3b067869c
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-irish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.148 - Wer: 52.4
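The Wer value is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric (illustrative only, not the exact evaluation script):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference words,
    computed with a standard dynamic-programming edit-distance table."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)

print(wer("an bhfuil tú go maith", "an bhfuil tu go maith"))  # 0.2 (one substitution in five words)
```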
3bd57069aff029b5495d81c139db0a88
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.6516 | 12.12 | 400 | 1.2867 | 0.7653 | | 0.4188 | 24.24 | 800 | 1.1262 | 0.5509 |
6cff526c60b01b171043fd5f5b872cac
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | Roberta | 330M | 中文-自然语言推断 Chinese-NLI |
07c807964ae9c93b91196ddb5f949e71
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
模型信息 Model Information 基于[chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large),我们在收集的4个中文领域的NLI(自然语言推理)数据集,总计1014787个样本上微调了一个NLI版本。 Based on [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large), we fine-tuned an NLI version on 4 Chinese Natural Language Inference (NLI) datasets, totaling 1,014,787 samples.
ee3afe381c25ede89372ed1e228f387f
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
下游效果 Performance | 模型 Model | cmnli | ocnli | snli | | :--------: | :-----: | :----: | :-----: | | Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 | | Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88 | | Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |
cf7d5c58c1d73bb11ed1da146c207385
apache-2.0
['roberta', 'NLU', 'NLI', 'Chinese']
false
使用 Usage

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')

texta = '今天的饭不好吃'
textb = '今天心情不好'

output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
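The final `softmax` call maps the classifier's NLI logits to probabilities (the label order depends on the model's config and is not hard-coded here). The mapping itself, as a minimal pure-Python sketch:

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a (texta, textb) pair; the outputs sum to 1.
probs = softmax([-1.2, 0.3, 2.5])
print([round(p, 3) for p in probs])
```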
422455cc075845197abcccc2bf00c004
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_mnli_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8790 - Accuracy: 0.6030
df1f34d90c155f387bdd364d170162cc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0008 | 1.0 | 3068 | 0.9490 | 0.5405 | | 0.9205 | 2.0 | 6136 | 0.9166 | 0.5675 | | 0.8928 | 3.0 | 9204 | 0.9022 | 0.5786 | | 0.872 | 4.0 | 12272 | 0.8843 | 0.5967 | | 0.8531 | 5.0 | 15340 | 0.8807 | 0.5959 | | 0.8359 | 6.0 | 18408 | 0.8763 | 0.5999 | | 0.8197 | 7.0 | 21476 | 0.8815 | 0.6009 | | 0.8028 | 8.0 | 24544 | 0.9012 | 0.5934 | | 0.786 | 9.0 | 27612 | 0.8633 | 0.6191 | | 0.769 | 10.0 | 30680 | 0.8734 | 0.6098 | | 0.752 | 11.0 | 33748 | 0.8682 | 0.6220 | | 0.736 | 12.0 | 36816 | 0.8741 | 0.6175 | | 0.7204 | 13.0 | 39884 | 0.8994 | 0.6048 | | 0.7038 | 14.0 | 42952 | 0.8940 | 0.6079 |
9160bfeed4e1c50fe0b0802e5a2db339
mit
[]
false
Marbling art on Stable Diffusion This is the `<marbling-art>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<marbling-art> 0](https://huggingface.co/sd-concepts-library/marbling-art/resolve/main/concept_images/1.jpeg) ![<marbling-art> 1](https://huggingface.co/sd-concepts-library/marbling-art/resolve/main/concept_images/2.jpeg) ![<marbling-art> 2](https://huggingface.co/sd-concepts-library/marbling-art/resolve/main/concept_images/0.jpeg) ![<marbling-art> 3](https://huggingface.co/sd-concepts-library/marbling-art/resolve/main/concept_images/3.jpeg) ![<marbling-art> 4](https://huggingface.co/sd-concepts-library/marbling-art/resolve/main/concept_images/4.jpeg)
5a81e3f583f1f567d78a355a49205e38
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
c31f8ec6c6edcc6a42af734c7d49625f
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP
6a7095b5853979217ddc8f0621c31d51
apache-2.0
['translation']
false
opus-mt-it-es * source languages: it * target languages: es * OPUS readme: [it-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.eval.txt)
890c6a71c20f5768e97b2082e203c3b4
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'cosmosx', 'dreambooth']
false
Cosmosx Rendered: Steps: 35, Default Automatic1111 settings <img src="https://huggingface.co/OlafII/cosmosx/resolve/main/images/01178-703442978-cosmosx, dog.png" width="100%"/> <img src="https://huggingface.co/OlafII/cosmosx/resolve/main/images/01191-56691087-cosmosx, goddess.png" width="100%"/> <img src="https://huggingface.co/OlafII/cosmosx/resolve/main/images/01198-693125065-cosmosx, lion.png" width="100%"/>
3cc15af0fe7d68935d7094b74e960a90
apache-2.0
['generated_from_trainer']
false
w2v2-libri This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5387 - Wer: 0.5380
f1e883f43e2b35cb27a26763f06ae54b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - training_steps: 2500 - mixed_precision_training: Native AMP
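With `lr_scheduler_type: linear` plus the warmup steps above, the learning rate ramps from 0 to 1e-4 over the first 1500 steps and then decays linearly to 0 at step 2500. A pure-Python sketch of that shape (mirroring, not calling, the transformers scheduler):

```python
def linear_schedule_with_warmup(step, warmup_steps=1500, total_steps=2500, base_lr=1e-4):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_schedule_with_warmup(750))   # halfway through warmup: 5e-05
print(linear_schedule_with_warmup(1500))  # peak: 0.0001
print(linear_schedule_with_warmup(2500))  # end of training: 0.0
```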
9c297fb7468b46383e49cd6b4292db91
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 8.8253 | 50.0 | 200 | 3.1879 | 1.0 | | 3.0174 | 100.0 | 400 | 2.9619 | 1.0 | | 2.8589 | 150.0 | 600 | 2.9499 | 1.0 | | 1.8086 | 200.0 | 800 | 1.0896 | 0.7123 | | 0.2145 | 250.0 | 1000 | 1.1973 | 0.6321 | | 0.0641 | 300.0 | 1200 | 1.3631 | 0.6100 | | 0.0391 | 350.0 | 1400 | 1.4521 | 0.5837 | | 0.0258 | 400.0 | 1600 | 1.3671 | 0.5781 | | 0.0185 | 450.0 | 1800 | 1.3828 | 0.5698 | | 0.0107 | 500.0 | 2000 | 1.4402 | 0.5463 | | 0.0099 | 550.0 | 2200 | 1.5724 | 0.5477 | | 0.0058 | 600.0 | 2400 | 1.5387 | 0.5380 |
034b059464d8fe0f1173a5034fc0b295
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-irll2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.1925
af1d9b6ac5de9a184df7320e975a8e0f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 12 | 4.2919 | | No log | 2.0 | 24 | 4.2158 | | No log | 3.0 | 36 | 4.1925 |
feff0779082f8362371ad962653cb6a6
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Overview This SavedModel is a companion of [BERT models](https://tfhub.dev/google/collections/bert/1) to preprocess plain text inputs into the input format expected by BERT. **Check the model documentation** to find the correct preprocessing model for each particular BERT or other Transformer encoder model. BERT and its preprocessing were originally published by - Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova: ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805), 2018. This model uses a vocabulary for English extracted from the Wikipedia and BooksCorpus (same as in the models by the original BERT authors). Text inputs have been normalized the "cased" way, meaning that the distinction between lower and upper case as well as accent markers have been preserved. This model has no trainable parameters and can be used in an input pipeline outside the training loop.
943f2a8aed85dade56f68eb6623dc20e
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Prerequisites This SavedModel uses TensorFlow operations defined by the [TensorFlow Text](https://github.com/tensorflow/text) library. On [Google Colaboratory](https://colab.research.google.com/), it can be installed with

```
!pip install tensorflow_text
import tensorflow_text as text
```
271acf52464bbf9044cd161fbd45df9e
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Using TF Hub and HF Hub

```
import tensorflow as tf
from huggingface_hub import snapshot_download
from tensorflow_hub import KerasLayer

model_path = snapshot_download(repo_id="Dimitre/bert_en_cased_preprocess")
preprocessor = KerasLayer(handle=model_path)

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
encoder_inputs = preprocessor(text_input)
```
b7b73c917adf3dfb57f2213d439ea4fc
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Using [TF Hub fork](https://github.com/dimitreOliveira/hub)

```
preprocessor = pull_from_hub(repo_id="Dimitre/bert_en_cased_preprocess")
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
encoder_inputs = preprocessor(text_input)
```

The resulting encoder inputs have `seq_length=128`.
5177ef7bb952e9893d3ea3be3ceda834
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
General usage For pairs of input segments, to control the `seq_length`, or to modify tokenized sequences before packing them into encoder inputs, the preprocessor can be called like this:

```
preprocessor = pull_from_hub(repo_id="Dimitre/bert_en_cased_preprocess")
```
67e8e171410fe14f01997cc8b379108a
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
```
# seq_length is an optional argument.
encoder_inputs = bert_pack_inputs(tokenized_inputs)
```

The call to `tokenize()` returns an int32 [RaggedTensor](https://www.tensorflow.org/guide/ragged_tensor) of shape `[batch_size, (words), (tokens_per_word)]`. Correspondingly, the call to `bert_pack_inputs()` accepts a RaggedTensor of shape `[batch_size, ...]` with rank 2 or 3.
ed9c82c5ee6a336e45893074abaa2f25
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Output details The result of preprocessing is a batch of fixed-length input sequences for the Transformer encoder. An input sequence starts with one start-of-sequence token, followed by the tokenized segments, each terminated by one end-of-segment token. Remaining positions up to `seq_length`, if any, are filled up with padding tokens. If an input sequence would exceed `seq_length`, the tokenized segments in it are truncated to prefixes of approximately equal sizes to fit exactly. The `encoder_inputs` are a dict of three int32 Tensors, all with shape `[batch_size, seq_length]`, whose elements represent the batch of input sequences as follows: - `"input_word_ids"`: has the token ids of the input sequences. - `"input_mask"`: has value 1 at the position of all input tokens present before padding and value 0 for the padding tokens. - `"input_type_ids"`: has the index of the input segment that gave rise to the input token at the respective position. The first input segment (index 0) includes the start-of-sequence token and its end-of-segment token. The second segment (index 1, if present) includes its end-of-segment token. Padding tokens get index 0 again.
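The layout described above can be sketched in pure Python. In this minimal illustration the token ids 101/102 for the start-of-sequence and end-of-segment tokens are assumed values borrowed from the standard English BERT vocabulary, and truncation is omitted:

```python
def pack_inputs(seg1_ids, seg2_ids, seq_length=128, cls_id=101, sep_id=102, pad_id=0):
    """Sketch of BERT input packing: [CLS] seg1 [SEP] seg2 [SEP], padded to seq_length."""
    ids = [cls_id] + seg1_ids + [sep_id] + seg2_ids + [sep_id]
    # Segment 0 covers [CLS], seg1 and its [SEP]; segment 1 covers seg2 and its [SEP].
    type_ids = [0] * (len(seg1_ids) + 2) + [1] * (len(seg2_ids) + 1)
    mask = [1] * len(ids)
    pad = seq_length - len(ids)
    return {
        "input_word_ids": ids + [pad_id] * pad,
        "input_mask": mask + [0] * pad,
        "input_type_ids": type_ids + [0] * pad,
    }

out = pack_inputs([7, 8], [9], seq_length=8)
print(out["input_word_ids"])  # [101, 7, 8, 102, 9, 102, 0, 0]
```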
05b7b4a0e147e65fe54a468546fb7996
apache-2.0
['text', 'tokenizer', 'preprocessor', 'bert', 'tensorflow']
false
Custom input packing and MLM support The function `special_tokens_dict = preprocessor.tokenize.get_special_tokens_dict()` returns a dict of scalar int32 Tensors that report the tokenizer's `"vocab_size"` as well as the ids of certain special tokens: `"padding_id"`, `"start_of_sequence_id"` (aka. [CLS]), `"end_of_segment_id"` (aka. [SEP]) and `"mask_id"`. This allows users to replace `preprocessor.bert_pack_inputs()` with Python code such as `text.combine_segments()`, possibly `text.masked_language_model()`, and `text.pad_model_inputs()` from the [TensorFlow Text](https://github.com/tensorflow/text) library.
e0a4fad90ac3ba59265a6551fd54d1ff
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model Details Neural machine translation model for translating from German (de) to Spanish (es). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-26 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): deu - Target Language(s): spa - Language Pair(s): deu-spa - Valid Target Language Labels: - **Original Model**: [opusTCv20210807_transformer-big_2022-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-spa/opusTCv20210807_transformer-big_2022-07-26.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT deu-spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-spa/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
466bdc0f61d2d47920eb7b003ea3ac49
cc-by-4.0
['translation', 'opus-mt-tc']
false
How to Get Started With the Model A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Ich verstehe nicht, worüber ihr redet.",
    "Die Vögel singen in den Bäumen."
]

model_name = "pytorch-models/opus-mt-tc-big-de-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
d7c4c0f0c4b039b015bb9b9bf9cecad0
cc-by-4.0
['translation', 'opus-mt-tc']
false
Expected output (continued):

```
Los pájaros cantan en los árboles.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-de-es")
print(pipe("Ich verstehe nicht, worüber ihr redet."))
```
b428b7887a4e74cd6f55f49d931d6842
cc-by-4.0
['translation', 'opus-mt-tc']
false
Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-spa/opusTCv20210807_transformer-big_2022-07-26.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
94871b773f6404f913b7b58aadb1d382
cc-by-4.0
['translation', 'opus-mt-tc']
false
Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-spa/opusTCv20210807_transformer-big_2022-07-26.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-spa/opusTCv20210807_transformer-big_2022-07-26.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words |
742a5f012099573eb844cf21d1bbbf27
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| deu-spa | tatoeba-test-v2021-08-07 | 0.69105 | 50.8 | 10521 | 82570 |
| deu-spa | flores101-devtest | 0.53208 | 24.9 | 1012 | 29199 |
| deu-spa | newssyscomb2009 | 0.55547 | 28.3 | 502 | 12503 |
| deu-spa | news-test2008 | 0.54400 | 26.6 | 2051 | 52586 |
| deu-spa | newstest2009 | 0.53934 | 25.9 | 2525 | 68111 |
| deu-spa | newstest2010 | 0.60102 | 33.8 | 2489 | 65480 |
| deu-spa | newstest2011 | 0.57133 | 31.3 | 3003 | 79476 |
| deu-spa | newstest2012 | 0.58119 | 32.6 | 3003 | 79006 |
| deu-spa | newstest2013 | 0.57559 | 32.4 | 3000 | 70528 |
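The chr-F column is a character n-gram F-score (the published numbers are likely computed with sacrebleu, which adds corpus-level aggregation and other details). A simplified sentence-level sketch of the idea:

```python
from collections import Counter

def chrf(reference, hypothesis, max_n=6, beta=2.0):
    """Simplified sentence-level chrF: character n-gram precision/recall
    averaged over n = 1..max_n, combined with an F-beta (beta=2 favours recall)."""
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        overlap = sum((ref_ngrams & hyp_ngrams).values())
        if hyp_ngrams:
            precisions.append(overlap / sum(hyp_ngrams.values()))
        if ref_ngrams:
            recalls.append(overlap / sum(ref_ngrams.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("hola mundo", "hola mundo"))  # 1.0 for an exact match
```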
c630145f7ae00581869fceb10e75e0b8
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-3']
false
MultiBERTs Seed 3 Checkpoint 1100k (uncased) This is the seed-3 intermediate checkpoint at 1100k steps of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference between english and English. Disclaimer: the team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
f9c8a188e06c3f0b6d56c9f736a578cc
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-3']
false
How to use Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1100k')
model = BertModel.from_pretrained("multiberts-seed-3-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
a4c722295d32cc106eb59e80c4e1f5cb
apache-2.0
['generated_from_trainer']
false
nbme-electra-large-discriminator This model is a fine-tuned version of [google/electra-large-discriminator](https://huggingface.co/google/electra-large-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.1201
72e7366a09240479438eed8dd82c7abb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.1704 | 1.0 | 1850 | 6.1313 | | 6.1305 | 2.0 | 3700 | 6.1243 | | 6.1109 | 3.0 | 5550 | 6.1201 |
588589ae8aeea0ca2d5552e7aca0e7e7
apache-2.0
['translation']
false
opus-mt-sv-lg * source languages: sv * target languages: lg * OPUS readme: [sv-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-lg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-lg/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lg/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-lg/opus-2020-01-16.eval.txt)
17397406b12f5ab6c3cc10d27fe407e8
apache-2.0
['generated_from_keras_callback']
false
pmfsl/multi-bert-base-finetuned-rte This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4024 - Validation Loss: 0.2674 - Train Accuracy: 0.9009 - Train F1: 0.9013 - Epoch: 0
d11d51d85d347a1e8300065b8ee01c6c
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Accuracy | Train F1 | Epoch | |:----------:|:---------------:|:--------------:|:--------:|:-----:| | 0.4024 | 0.2674 | 0.9009 | 0.9013 | 0 |
d655a4706d7c5388ad9fa2e25d28b902
mit
['summarization', 'generated_from_trainer']
false
mbart-large-50-finetuned-amazon-en-es This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.9825 - Rouge1: 0.1511 - Rouge2: 0.0537 - Rougel: 0.1393 - Rougelsum: 0.1404
b0ac5e677e138e1a07c21123a21a173a
mit
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 2.909 | 1.0 | 838 | 2.8106 | 0.1258 | 0.0571 | 0.1248 | 0.1240 | | 1.8102 | 2.0 | 1676 | 2.8872 | 0.1382 | 0.0675 | 0.1345 | 0.1353 | | 1.0773 | 3.0 | 2514 | 3.3501 | 0.1528 | 0.0658 | 0.1504 | 0.1504 | | 0.5431 | 4.0 | 3352 | 3.9495 | 0.1201 | 0.0561 | 0.1153 | 0.1147 | | 0.2371 | 5.0 | 4190 | 4.5519 | 0.1559 | 0.0732 | 0.1473 | 0.1464 | | 0.0934 | 6.0 | 5028 | 4.7016 | 0.1531 | 0.0634 | 0.1467 | 0.1453 | | 0.0375 | 7.0 | 5866 | 4.9661 | 0.1532 | 0.0562 | 0.1426 | 0.1421 | | 0.0155 | 8.0 | 6704 | 4.9825 | 0.1511 | 0.0537 | 0.1393 | 0.1404 |
aeb98a5aa9d21be4ea44f339e4c43ddf
apache-2.0
['generated_from_trainer']
false
skills-classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3051 - Accuracy: 0.9242
c492b6a2ceccb4f7fdf43e5626b279be
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
12a5e2fe9b4cff3dba3322a7bece92f5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 312 | 0.2713 | 0.9058 | | 0.361 | 2.0 | 624 | 0.2539 | 0.9182 | | 0.361 | 3.0 | 936 | 0.2802 | 0.9238 | | 0.1532 | 4.0 | 1248 | 0.3058 | 0.9202 | | 0.0899 | 5.0 | 1560 | 0.3051 | 0.9242 |
4f2730d0e847002e899005b191761ae9
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_2000k']
false
MultiBERTs, Intermediate Checkpoint - Seed 0, Step 2000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
c0013bed01def23dceb9b8f1f2076f13
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_2000k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow:

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_2000k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
866001ada0adb66dfedbc645838d7329
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
my-korean-stable-diffusion-v1-5 This is the [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with only the text encoder and tokenizer replaced by my [Bingsu/clip-vit-large-patch14-ko](https://huggingface.co/Bingsu/clip-vit-large-patch14-ko). If you are looking for a Korean diffusion model that works well in practice, see: - [BAAI/AltDiffusion-m9](https://huggingface.co/BAAI/AltDiffusion-m9) - [Multilingual Stable Diffusion Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community)
f1b30edd3ad7c070a115458d01129b8c
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Usage

```sh
pip install transformers "accelerate>=0.14.0" "diffusers>=0.7.2"
```

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

repo = "Bingsu/my-korean-stable-diffusion-v1-5"
euler_ancestral_scheduler = EulerAncestralDiscreteScheduler.from_config(repo, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    repo,
    scheduler=euler_ancestral_scheduler,
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```

```python
prompt = "화성에서 말을 타고 있는 우주인 사진"  # "a photo of an astronaut riding a horse on Mars"
seed = 23957
generator = torch.Generator("cuda").manual_seed(seed)
image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

```python
image
```

![Imgur](https://i.imgur.com/JwthHe1.png)
b0fc95acec16ade14e1bdf6133fef5da
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
more examples

```python
prompt = "고퀄리티 하얀 고양이 사진"  # "a high-quality photo of a white cat"
seed = 46399
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

![Imgur](https://i.imgur.com/Ex6zbjN.png)

```python
prompt = "고퀄리티 하얀 고양이 사진, 피아노를 치는 중"  # "a high-quality photo of a white cat, playing the piano"
seed = 12345
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

![Imgur](https://i.imgur.com/1d4GpTH.png)

```python
prompt = "달과 별이 보이는 밤하늘을 배경으로 한 해변가 사진"  # "a photo of a beach against a night sky with the moon and stars visible"
seed = 1234246
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

![Imgur](https://i.imgur.com/9NhKaAo.png)
65e658a7b1a00c546200e77af14ceb9c
mit
['generated_from_keras_callback']
false
Sushant45/Web_browser-clustered This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1326 - Train End Logits Accuracy: 0.9792 - Train Start Logits Accuracy: 0.9444 - Validation Loss: 0.3331 - Validation End Logits Accuracy: 0.6667 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
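The end/start logits accuracies above indicate an extractive question-answering head. A minimal inference sketch with the `transformers` pipeline — the repo id is taken from this card's heading, and the question/context pair is purely illustrative:

```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint named in this card's heading.
qa = pipeline("question-answering", model="Sushant45/Web_browser-clustered")

result = qa(
    question="What does a web browser render?",
    context="A web browser retrieves pages from web servers and renders HTML documents.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, and the character `start`/`end` offsets of the extracted span.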
bfeb2b64201f066a8ae993066f80b69f
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.1326 | 0.9792 | 0.9444 | 0.3331 | 0.6667 | 1.0 | 0 |
f493a5578b02d741c94817c43d81af3e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2114 - Accuracy: 0.927 - F1: 0.9268
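The card reports both accuracy and F1; for multi-class emotion data the F1 is usually a weighted average of per-class F1 scores (an assumption — the card does not state the averaging mode). A hand-rolled sketch of the per-class computation on toy labels:

```python
def f1_per_class(y_true, y_pred, label):
    # Per-class F1: harmonic mean of precision and recall for one label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["joy", "anger", "joy", "sadness"]
y_pred = ["joy", "joy", "joy", "sadness"]
print(round(f1_per_class(y_true, y_pred, "joy"), 3))  # → 0.8
```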
077ce3fc254bee74460c54cbac755415
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8082 | 1.0 | 250 | 0.3065 | 0.9075 | 0.9054 | | 0.2406 | 2.0 | 500 | 0.2114 | 0.927 | 0.9268 |
9ad80e7004cfbdfb1fe26b701f76fba4
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_unispeech-sat_s115 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
dbea2922af3d0500eaea368a888a1b52
mit
['conversational']
false
Model Details **Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
f7784deae6c58207f87189840277dd30
mit
['conversational']
false
How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"}, {'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"}, {'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"}, {'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = GPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = TFGPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
68348ef241fd190a095ecca0676694b3
mit
['conversational']
false
Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a hotel'}, {'generated_text': 'The man worked as a salesman in Mexico and in'}, {'generated_text': 'The man worked as a supervisor at the warehouse for'}, {'generated_text': "The man worked as a cleaner for the store's"}, {'generated_text': 'The man worked as a barbershop apprentice.'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a clerk at the bank.'}, {'generated_text': 'The woman worked as a caregiver, and her'}, {'generated_text': 'The woman worked as a customer service agent for a'}, {'generated_text': 'The woman worked as a cleaner at the store,'}, {'generated_text': 'The woman worked as a barista and was "'}] ``` This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
9fba1b7d1d036eb3d0614feec1be27f9
mit
['conversational']
false
Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
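A note on the metrics in this table: PPL is the exponential of the mean per-token negative log-likelihood (in nats), while BPB/BPC divide the per-byte or per-character NLL by ln 2. A small sketch of the conversions — the 2.386 figure below is simply ln of the LAMBADA perplexity reported above, used to illustrate the round trip:

```python
import math

def perplexity(mean_nll_nats: float) -> float:
    # PPL = exp(mean negative log-likelihood per token, in nats)
    return math.exp(mean_nll_nats)

def bits_per_char(mean_nll_nats_per_char: float) -> float:
    # BPC = per-character NLL converted from nats to bits
    return mean_nll_nats_per_char / math.log(2)

# ln(10.87) ≈ 2.386 nats/token round-trips to the LAMBADA PPL above.
print(round(perplexity(math.log(10.87)), 2))  # → 10.87
```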
e3e4cdfccbe1f089abea4c0bb9a2ff26
mit
['conversational']
false
Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
e1f17f3a64a61e5b708034a1b30907f6
mit
[]
false
kawaii_girl_plus_style_v1.1 on Stable Diffusion This is the `<kawaii>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<kawaii> 0](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/2.png) ![<kawaii> 1](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/1.png) ![<kawaii> 2](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/0.png) ![<kawaii> 3](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/7.png) ![<kawaii> 4](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/4.png) ![<kawaii> 5](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/5.png) ![<kawaii> 6](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/6.png) ![<kawaii> 7](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/3.png) ![<kawaii> 8](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/9.png) ![<kawaii> 9](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/8.png) ![<kawaii> 10](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/10.png) ![<kawaii> 
11](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/17.png) ![<kawaii> 12](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/11.png) ![<kawaii> 13](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/15.png) ![<kawaii> 14](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/13.png) ![<kawaii> 15](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/12.png) ![<kawaii> 16](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/14.png) ![<kawaii> 17](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/16.png) ![<kawaii> 18](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/19.png) ![<kawaii> 19](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/18.png) ![<kawaii> 20](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/24.png) ![<kawaii> 21](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/26.png) ![<kawaii> 22](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/22.png) ![<kawaii> 23](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/25.png) ![<kawaii> 24](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/21.png) ![<kawaii> 25](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/23.png) ![<kawaii> 26](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/20.png) ![<kawaii> 
27](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/28.png) ![<kawaii> 28](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/27.png) ![<kawaii> 29](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/29.png) ![<kawaii> 30](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/31.png) ![<kawaii> 31](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/32.png) ![<kawaii> 32](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/30.png) ![<kawaii> 33](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/33.png) ![<kawaii> 34](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/34.png) ![<kawaii> 35](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/35.png) ![<kawaii> 36](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/36.png) ![<kawaii> 37](https://huggingface.co/sd-concepts-library/kawaii-girl-plus-style-v1-1/resolve/main/concept_images/37.png)
f92a81b98410fa7b7e355a1f7e014c51
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_vp-fr_s543 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
8d4e4a8f091423b2130674ea346c99c1
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_wnli This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3677 - Accuracy: 0.2958
11b2d5e8fc6bc9db3b3b7b3407edd23c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3708 | 1.0 | 5 | 0.3927 | 0.3944 | | 0.3555 | 2.0 | 10 | 0.3715 | 0.4225 | | 0.3493 | 3.0 | 15 | 0.3677 | 0.2958 | | 0.3485 | 4.0 | 20 | 0.3704 | 0.3803 | | 0.3454 | 5.0 | 25 | 0.3815 | 0.2394 | | 0.3461 | 6.0 | 30 | 0.3878 | 0.2394 | | 0.3432 | 7.0 | 35 | 0.3962 | 0.2535 | | 0.3427 | 8.0 | 40 | 0.4050 | 0.1972 |
48ab11e91b4ba6872a7d8721b028309a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_fold_1_binary This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5992 - F1: 0.7687
008c5e9f950cd35b9887981f7bc2c844
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 288 | 0.3960 | 0.7467 | | 0.3988 | 2.0 | 576 | 0.3947 | 0.7487 | | 0.3988 | 3.0 | 864 | 0.4511 | 0.7662 | | 0.1853 | 4.0 | 1152 | 0.7226 | 0.7285 | | 0.1853 | 5.0 | 1440 | 0.9398 | 0.7334 | | 0.0827 | 6.0 | 1728 | 1.0547 | 0.7427 | | 0.0287 | 7.0 | 2016 | 1.1602 | 0.7563 | | 0.0287 | 8.0 | 2304 | 1.3332 | 0.7171 | | 0.0219 | 9.0 | 2592 | 1.3429 | 0.7420 | | 0.0219 | 10.0 | 2880 | 1.2603 | 0.7648 | | 0.0139 | 11.0 | 3168 | 1.4126 | 0.7569 | | 0.0139 | 12.0 | 3456 | 1.3195 | 0.7483 | | 0.0115 | 13.0 | 3744 | 1.4356 | 0.7491 | | 0.0035 | 14.0 | 4032 | 1.5693 | 0.7636 | | 0.0035 | 15.0 | 4320 | 1.4071 | 0.7662 | | 0.0071 | 16.0 | 4608 | 1.4561 | 0.7579 | | 0.0071 | 17.0 | 4896 | 1.5405 | 0.7634 | | 0.0041 | 18.0 | 5184 | 1.5862 | 0.7589 | | 0.0041 | 19.0 | 5472 | 1.6782 | 0.76 | | 0.0024 | 20.0 | 5760 | 1.5699 | 0.7677 | | 0.0006 | 21.0 | 6048 | 1.5991 | 0.7467 | | 0.0006 | 22.0 | 6336 | 1.6205 | 0.7682 | | 0.0003 | 23.0 | 6624 | 1.6334 | 0.7643 | | 0.0003 | 24.0 | 6912 | 1.5992 | 0.7687 | | 0.0011 | 25.0 | 7200 | 1.6053 | 0.7624 |
028d2123f2f52f3bc911bdb4e55b626d
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53-Total2e-4_3 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2893 - Wer: 0.1863
8d1255e09468c932fbf5935c5682cdf9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP
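The `total_train_batch_size` listed above is derived from the other two values; as a quick sanity check:

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 16, matching the value listed above
```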
82cfc7669c0788faabc5b37e02ac16a3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.16 | 0.1 | 200 | 2.9123 | 0.9707 | | 2.4599 | 0.2 | 400 | 0.8145 | 0.6906 | | 1.0523 | 0.3 | 600 | 0.5247 | 0.4823 | | 0.8965 | 0.4 | 800 | 0.4391 | 0.4416 | | 0.7994 | 0.5 | 1000 | 0.3889 | 0.3773 | | 0.7491 | 0.6 | 1200 | 0.3604 | 0.3305 | | 0.7425 | 0.7 | 1400 | 0.3543 | 0.3277 | | 0.7253 | 0.8 | 1600 | 0.3397 | 0.3143 | | 0.7221 | 0.9 | 1800 | 0.3341 | 0.2979 | | 0.6853 | 1.0 | 2000 | 0.3244 | 0.2906 | | 0.6107 | 1.1 | 2200 | 0.3127 | 0.2771 | | 0.6233 | 1.2 | 2400 | 0.3116 | 0.2721 | | 0.6214 | 1.3 | 2600 | 0.3256 | 0.2671 | | 0.6511 | 1.4 | 2800 | 0.3019 | 0.2570 | | 0.6491 | 1.5 | 3000 | 0.2961 | 0.2576 | | 0.6411 | 1.6 | 3200 | 0.2963 | 0.2535 | | 0.5963 | 1.7 | 3400 | 0.2939 | 0.2526 | | 0.6146 | 1.8 | 3600 | 0.2908 | 0.2490 | | 0.6291 | 1.9 | 3800 | 0.2851 | 0.2448 | | 0.6154 | 2.0 | 4000 | 0.2861 | 0.2424 | | 0.5652 | 2.1 | 4200 | 0.2852 | 0.2411 | | 0.5648 | 2.2 | 4400 | 0.2856 | 0.2350 | | 0.5365 | 2.3 | 4600 | 0.2802 | 0.2395 | | 0.5855 | 2.4 | 4800 | 0.2883 | 0.2374 | | 0.5978 | 2.5 | 5000 | 0.2855 | 0.2364 | | 0.5863 | 2.6 | 5200 | 0.2736 | 0.2277 | | 0.5569 | 2.7 | 5400 | 0.2746 | 0.2293 | | 0.5628 | 2.8 | 5600 | 0.2719 | 0.2249 | | 0.5655 | 2.9 | 5800 | 0.2653 | 0.2224 | | 0.5578 | 3.0 | 6000 | 0.2685 | 0.2243 | | 0.5303 | 3.1 | 6200 | 0.2696 | 0.2204 | | 0.5316 | 3.2 | 6400 | 0.2733 | 0.2247 | | 0.5476 | 3.3 | 6600 | 0.2716 | 0.2203 | | 0.5326 | 3.4 | 6800 | 0.2697 | 0.2209 | | 0.5375 | 3.5 | 7000 | 0.2701 | 0.2197 | | 0.5364 | 3.6 | 7200 | 0.2655 | 0.2165 | | 0.503 | 3.7 | 7400 | 0.2650 | 0.2125 | | 0.5284 | 3.8 | 7600 | 0.2672 | 0.2162 | | 0.5251 | 3.9 | 7800 | 0.2669 | 0.2172 | | 0.5299 | 4.0 | 8000 | 0.2632 | 0.2081 | | 0.4904 | 4.1 | 8200 | 0.2674 | 0.2099 | | 0.496 | 4.2 | 8400 | 0.2700 | 0.2143 | | 0.5067 | 4.3 | 8600 | 0.2648 | 0.2090 | | 0.506 | 4.4 | 8800 | 0.2595 | 0.2069 | | 0.4795 | 4.5 | 9000 | 
0.2653 | 0.2072 | | 0.5149 | 4.6 | 9200 | 0.2618 | 0.2073 | | 0.4786 | 4.7 | 9400 | 0.2632 | 0.2058 | | 0.5056 | 4.8 | 9600 | 0.2674 | 0.2123 | | 0.5059 | 4.9 | 9800 | 0.2642 | 0.2115 | | 0.5119 | 5.0 | 10000 | 0.2672 | 0.2089 | | 0.4619 | 5.1 | 10200 | 0.2658 | 0.2062 | | 0.4647 | 5.2 | 10400 | 0.2664 | 0.2025 | | 0.4707 | 5.3 | 10600 | 0.2656 | 0.2084 | | 0.486 | 5.4 | 10800 | 0.2728 | 0.2029 | | 0.4785 | 5.5 | 11000 | 0.2653 | 0.2004 | | 0.4895 | 5.6 | 11200 | 0.2835 | 0.2119 | | 0.4519 | 5.7 | 11400 | 0.2715 | 0.2061 | | 0.484 | 5.8 | 11600 | 0.2663 | 0.2071 | | 0.4734 | 5.9 | 11800 | 0.2615 | 0.2023 | | 0.4563 | 6.0 | 12000 | 0.2604 | 0.1997 | | 0.4193 | 6.1 | 12200 | 0.2708 | 0.2015 | | 0.4516 | 6.2 | 12400 | 0.2724 | 0.2018 | | 0.4609 | 6.3 | 12600 | 0.2745 | 0.2004 | | 0.43 | 6.4 | 12800 | 0.2716 | 0.1979 | | 0.4424 | 6.5 | 13000 | 0.2674 | 0.1963 | | 0.4589 | 6.6 | 13200 | 0.2622 | 0.1977 | | 0.4458 | 6.7 | 13400 | 0.2668 | 0.1994 | | 0.4233 | 6.8 | 13600 | 0.2739 | 0.1978 | | 0.4557 | 6.9 | 13800 | 0.2692 | 0.1972 | | 0.4472 | 7.0 | 14000 | 0.2686 | 0.1942 | | 0.4193 | 7.1 | 14200 | 0.2843 | 0.1959 | | 0.4033 | 7.2 | 14400 | 0.2767 | 0.1945 | | 0.4266 | 7.3 | 14600 | 0.2808 | 0.1931 | | 0.419 | 7.4 | 14800 | 0.2801 | 0.1945 | | 0.4352 | 7.5 | 15000 | 0.2764 | 0.1934 | | 0.4248 | 7.6 | 15200 | 0.2818 | 0.1938 | | 0.4001 | 7.7 | 15400 | 0.2754 | 0.1931 | | 0.415 | 7.8 | 15600 | 0.2799 | 0.1916 | | 0.4056 | 7.9 | 15800 | 0.2746 | 0.1916 | | 0.419 | 8.0 | 16000 | 0.2789 | 0.1909 | | 0.3974 | 8.1 | 16200 | 0.2913 | 0.1897 | | 0.3999 | 8.2 | 16400 | 0.2894 | 0.1899 | | 0.4179 | 8.3 | 16600 | 0.2819 | 0.1918 | | 0.4081 | 8.4 | 16800 | 0.2868 | 0.1910 | | 0.3963 | 8.5 | 17000 | 0.2835 | 0.1889 | | 0.3748 | 8.6 | 17200 | 0.2841 | 0.1903 | | 0.375 | 8.7 | 17400 | 0.2820 | 0.1874 | | 0.3857 | 8.8 | 17600 | 0.2865 | 0.1872 | | 0.3901 | 8.9 | 17800 | 0.2824 | 0.1882 | | 0.4067 | 9.0 | 18000 | 0.2838 | 0.1887 | | 0.3711 | 9.1 | 18200 | 0.2892 | 0.1897 | | 0.3661 | 9.2 
| 18400 | 0.2889 | 0.1883 | | 0.3796 | 9.3 | 18600 | 0.2876 | 0.1886 | | 0.3932 | 9.4 | 18800 | 0.2948 | 0.1877 | | 0.3894 | 9.5 | 19000 | 0.2896 | 0.1884 | | 0.3643 | 9.6 | 19200 | 0.2897 | 0.1868 | | 0.384 | 9.7 | 19400 | 0.2887 | 0.1867 | | 0.3951 | 9.8 | 19600 | 0.2905 | 0.1862 | | 0.3595 | 9.9 | 19800 | 0.2893 | 0.1866 | | 0.3758 | 10.0 | 20000 | 0.2893 | 0.1863 |
f4ee4ea58aab21654cbf4face21020c1
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/resnet18").eval() img = Image.open(path_to_an_image).convert("RGB") ```
5d89a11bd49ca81b447229caa848b99c
other
['generated_from_trainer']
false
dalio-6.7b-test This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6641 - Accuracy: 0.0662
a1b8a4c6441c5c4c97f2d38faab7fef7
other
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2.0
0b39f79813d5bae70d626bd12048ef0e
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5958 | 0.31 | 16 | 2.5371 | 0.0659 | | 2.3784 | 0.62 | 32 | 2.5039 | 0.0670 | | 2.3578 | 0.92 | 48 | 2.6074 | 0.0654 | | 1.3819 | 1.23 | 64 | 2.6680 | 0.0658 | | 1.1529 | 1.54 | 80 | 2.6738 | 0.0665 | | 1.2938 | 1.85 | 96 | 2.6641 | 0.0662 |
551a2d963910813582cd6dfa85e4956a
cc-by-4.0
['question generation']
false
Model Card of `lmqg/t5-base-subjqa-electronics-qg` This model is a fine-tuned version of [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: electronics) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
19caa35a35d10ab9cb987cee217627a6
cc-by-4.0
['question generation']
false
Overview - **Language model:** [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (electronics) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
a277a5206bcea9db877299af14fed1d8
cc-by-4.0
['question generation']
false
model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-base-subjqa-electronics-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
0afdbc5965fdc3039c312ddee656fc5e
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-subjqa-electronics-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 94.26 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 28.95 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 21.03 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 10.73 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 4.55 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 27.39 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 68.33 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 29.99 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
5f14f55b4a39dd8aea85798ac5fad1b8
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: electronics - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-base-squad - max_length: 512 - max_length_output: 32 - epoch: 4 - batch: 16 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-subjqa-electronics-qg/raw/main/trainer_config.json).
c5ebb91bbdfc7dac3df3477f510e0d7c
mit
['generated_from_trainer']
false
bertimbau-base-lener_br This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: 0.2298 - Precision: 0.8501 - Recall: 0.9138 - F1: 0.8808 - Accuracy: 0.9693
0f2e60d2e0f8c036c595e99120e05d1b
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0686 | 1.0 | 1957 | 0.1399 | 0.7759 | 0.8669 | 0.8189 | 0.9641 | | 0.0437 | 2.0 | 3914 | 0.1457 | 0.7997 | 0.8938 | 0.8441 | 0.9623 | | 0.0313 | 3.0 | 5871 | 0.1675 | 0.8466 | 0.8744 | 0.8603 | 0.9651 | | 0.0201 | 4.0 | 7828 | 0.1621 | 0.8713 | 0.8839 | 0.8775 | 0.9718 | | 0.0137 | 5.0 | 9785 | 0.1811 | 0.7783 | 0.9159 | 0.8415 | 0.9645 | | 0.0105 | 6.0 | 11742 | 0.1836 | 0.8568 | 0.9009 | 0.8783 | 0.9692 | | 0.0105 | 7.0 | 13699 | 0.1649 | 0.8339 | 0.9125 | 0.8714 | 0.9725 | | 0.0059 | 8.0 | 15656 | 0.2298 | 0.8501 | 0.9138 | 0.8808 | 0.9693 | | 0.0051 | 9.0 | 17613 | 0.2210 | 0.8437 | 0.9045 | 0.8731 | 0.9693 | | 0.0061 | 10.0 | 19570 | 0.2499 | 0.8627 | 0.8946 | 0.8784 | 0.9681 | | 0.0041 | 11.0 | 21527 | 0.1985 | 0.8560 | 0.9052 | 0.8799 | 0.9720 | | 0.003 | 12.0 | 23484 | 0.2204 | 0.8498 | 0.9065 | 0.8772 | 0.9699 | | 0.0014 | 13.0 | 25441 | 0.2152 | 0.8425 | 0.9067 | 0.8734 | 0.9709 | | 0.0005 | 14.0 | 27398 | 0.2317 | 0.8553 | 0.8987 | 0.8765 | 0.9705 | | 0.0015 | 15.0 | 29355 | 0.2436 | 0.8543 | 0.8989 | 0.8760 | 0.9700 |
249a6c7e32c494f489bac9c4c074b1bd
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
Fluent Speech Commands The dataset contains real recordings that define a simple spoken language understanding task. You can download it from [here](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/). The Fluent Speech Commands dataset contains 30,043 utterances from 97 speakers. It is recorded as 16 kHz single-channel .wav files, each containing a single utterance used for controlling smart-home appliances or a virtual assistant, for example, “put on the music” or “turn up the heat in the kitchen”. Each audio file is labeled with three slots: action, object, and location. Each slot takes one of multiple values: for instance, the “location” slot can take on the values “none”, “kitchen”, “bedroom”, or “washroom”. We refer to the combination of slot values as the intent of the utterance. For each intent, there are multiple possible wordings: for example, the intent {action: “activate”, object: “lights”, location: “none”} can be expressed as “turn on the lights”, “switch the lights on”, “lights on”, etc. The dataset has a total of 248 phrasings mapping to 31 unique intents.
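As a concrete sketch of the label structure described above (slot names and values taken from the examples in this paragraph):

```python
# Each utterance carries three slots; their combination is the intent.
utterance = "turn on the lights"
intent = {"action": "activate", "object": "lights", "location": "none"}

# Multiple wordings map to the same intent:
paraphrases = ["turn on the lights", "switch the lights on", "lights on"]

assert set(intent) == {"action", "object", "location"}
print(intent["location"])  # → none
```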
7b37e9ecf03c16cd379c5af4a0c336a4
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
End-to-end SLU model for Fluent Speech Commands Attention-based RNN sequence-to-sequence model for the [Fluent Speech Commands](https://arxiv.org/pdf/1904.03670.pdf) dataset. This model checkpoint achieves 99.6% accuracy on the test set. The model uses an ASR model trained on LibriSpeech ([`speechbrain/asr-crdnn-rnnlm-librispeech`](https://huggingface.co/speechbrain/asr-crdnn-rnnlm-librispeech)) to extract features from the input audio, then maps these features to intent and slot labels using a beam search. You can try the model on the `example_fsc.wav` file included here as follows: ```python from speechbrain.pretrained import EndToEndSLU slu = EndToEndSLU.from_hparams("speechbrain/slu-direct-fluent-speech-commands-librispeech-asr") slu.decode_file("example_fsc.wav") ```
c9021febd141bf5ed7e0346b6b3cc484
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
>>> '{"action:" "activate"| "object": "lights"| "location": "bedroom"}' ``` The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *decode_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *decode_batch*.
c024635ef963fa523054eded3bc1a090
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
Training The model was trained with SpeechBrain (f1f421b3). To train it from scratch, follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/fluent-speech-commands python train.py hparams/train.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1Zly54252Z218IHJQ9M0B3kTQPZIw_2yC?usp=sharing).
052c3802387a171d87b5e9200c5e6a20
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
Referencing Fluent Speech Commands ```bibtex @inproceedings{fluent, author = {Loren Lugosch and Mirco Ravanelli and Patrick Ignoto and Vikrant Singh Tomar and Yoshua Bengio}, editor = {Gernot Kubin and Zdravko Kacic}, title = {Speech Model Pre-Training for End-to-End Spoken Language Understanding}, booktitle = {Proc. of Interspeech}, pages = {814--818}, year = {2019}, } ```
50d2f7dedf8bb35490004ee195e51836
cc0-1.0
['speechbrain', 'Spoken language understanding']
false
About SpeechBrain SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. Website: https://speechbrain.github.io/ GitHub: https://github.com/speechbrain/speechbrain
8ddaf197501fee81d9bfb53bf3886cd2
openrail
['text-to-image', 'dreambooth-hackathon', 'wildcard', 'diffusers']
false
Gradio

We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Nail-set-Diffusion:

[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/ringhyacinth/Nail-Diffuser)

__Stable Diffusion fine-tuned on Nail Set by [Weekend](https://weibo.com/u/5982308498) and [Hyacinth](https://twitter.com/ring_hyacinth).__ Put in a text prompt and generate your own nail set!

![image.png](https://cdn.discordapp.com/attachments/973053077672325120/1043909385891610674/fe869dbd7be07b59f284370645d7143.png)
> Nail Set, Sunflower (/Irises/Starry Night/Self Portrait) by Van Gogh, Van Gogh color scheme

![image.png](https://cdn.discordapp.com/attachments/973053077672325120/1043908810613473321/b1e3d1f76c530f6a23ee2116dc9f01a.png)
> Nail Set, hamilton nail, broadway musical theme nail

![image.png](https://cdn.discordapp.com/attachments/973053077672325120/1043910797694349312/bcac02c6ff64419f2df503b367561be.png)
> Nail Set, chinese new year nail, super detailed

![image.png](https://cdn.discordapp.com/attachments/973053077672325120/1043911547703001128/0f8faaf6b91e82bb23dc5d1a5c85223.png)
> Nail Set, thanksgiving nail, super detailed

![image.png](https://cdn.discordapp.com/attachments/973053077672325120/1043914949887524894/a4f3c62d7d1e47ae118a4bb4772f4e5.png)
> Nail set, Disney castle nail, cute Japanese girly nail
990cf1f06827aed78b7b1a5a40e7070f
openrail
['text-to-image', 'dreambooth-hackathon', 'wildcard', 'diffusers']
false
Model description

Trained on a [CLIP Interrogator](https://huggingface.co/spaces/pharma/CLIP-Interrogator)-captioned dataset using the [EveryDream fine-tuning script](https://github.com/victorchall/EveryDream-trainer) for around 10,000 steps.
84f7d2553a28cc3011a3f719da02e372
apache-2.0
['generated_from_trainer']
false
mt5-base-coba-coba-coba

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5870
- Rouge1: 0.4336
- Rouge2: 0.288
- Rougel: 0.3746
- Rougelsum: 0.4095
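For reference, ROUGE-1 measures unigram overlap between a candidate and a reference summary. The sketch below is a minimal F-measure implementation for intuition only; it is not the exact scorer that produced the numbers above (which typically also applies stemming and tokenization rules).

```python
from collections import Counter

def rouge1_f(reference, candidate):
    # Clipped unigram overlap between reference and candidate.
    ref, cand = reference.split(), candidate.split()
    overlap = sum((Counter(ref) & Counter(cand)).values())
    if overlap == 0:
        return 0.0
    recall = overlap / len(ref)
    precision = overlap / len(cand)
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat ran"))  # 2 of 3 unigrams match in both directions
```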
778fc883e8a65db9d60a13047247c6cd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
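With a linear scheduler and no warmup (an assumption; the log above does not list warmup steps), the learning rate decays linearly from the initial value to zero over the full run. Taking the total of 37,260 steps from the results table below, the schedule can be sketched as:

```python
def linear_lr(step, total_steps=37260, base_lr=5.6e-06):
    # Linear decay from base_lr at step 0 down to 0 at total_steps,
    # matching lr_scheduler_type: linear with no warmup (assumed).
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))      # full learning rate at the start
print(linear_lr(18630))  # halfway through training, half the rate
print(linear_lr(37260))  # decayed to zero at the end
```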
96e9d54794a1ed7425dc2b6608ff4272
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| | 7.0922 | 1.0 | 7452 | 0.6538 | 0.3557 | 0.239 | 0.3216 | 0.3342 | | 0.9442 | 2.0 | 14904 | 0.6900 | 0.427 | 0.2868 | 0.371 | 0.4028 | | 3.0789 | 3.0 | 22356 | 0.6775 | 0.3801 | 0.2581 | 0.34 | 0.3564 | | 1.0565 | 4.0 | 29808 | 0.5928 | 0.4345 | 0.2885 | 0.376 | 0.4102 | | 0.7872 | 5.0 | 37260 | 0.5870 | 0.4336 | 0.288 | 0.3746 | 0.4095 |
10a491abc79a7edc1fd7c1a700c9ca29
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-Català

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.

**Attention:** The train/dev/test split used does not fully map to the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will therefore produce a biased WER, as 1144 audio files from that set were used in training/evaluating this model. WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv), which was not seen by the model during training/evaluation.

You can find the training and evaluation scripts in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).

When using this model, make sure that your speech input is sampled at 16 kHz.
caadc52820709c403c908521e5d9d6e2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Results

Word error rate was evaluated on the following datasets unseen by the model:

| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |
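WER is the word-level edit distance between reference and hypothesis, divided by the reference length. A minimal sketch (real toolkits also normalize casing and punctuation first):

```python
def wer(reference, hypothesis):
    # Word error rate: Levenshtein distance over words / reference length.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("turn on the lights", "turn off the lights"))  # → 0.25
```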
a13b36c537b4a7a0c907ffa30595ebff
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing: read each audio file and resample it to 16 kHz.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
89dcd57284e815a4e1f0d9505a19bae1