Columns: license — string (2–30 chars); tags — string (2–513 chars); is_nc — bool (1 class); readme_section — string (201–597k chars); hash — string (32 chars)
apache-2.0
['generated_from_trainer']
false
mt5-small-finetuned-mlsum This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 1.1475 - Rouge2: 0.1284 - Rougel: 1.0634 - Rougelsum: 1.0778 - Gen Len: 3.7939
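The Rouge1/Rouge2 figures above are n-gram-overlap F-measures between generated and reference summaries (reported figures are typically scaled by 100). Purely as an illustration of the idea — the real evaluation uses the `rouge_score` package with stemming and aggregation, which this sketch skips — a minimal ROUGE-1 F1 over whitespace tokens:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between the two texts."""
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the cat sat on the mat", "the cat lay on the mat")
```

Here 5 of 6 candidate unigrams match the reference, so precision = recall = 5/6.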
50136acca2f2152eaf5cd4fcf2818900
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| nan | 1.0 | 808 | nan | 1.1475 | 0.1284 | 1.0634 | 1.0778 | 3.7939 |
2cded04b3da1eb3611fe82814e97f430
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Chinese Stable Diffusion Model Card

svjack/Stable-Diffusion-FineTuned-zh-v0 is a Chinese-specific latent text-to-image diffusion model capable of generating images from any Chinese text input. The model was trained with the [diffusers](https://github.com/huggingface/diffusers) library; for more information about the training method, see [train_zh_model.py](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/train_zh_model.py). It builds on the strong baseline model [Taiyi-Stable-Diffusion-1B-Chinese-v0.1](https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1) from [IDEA-CCNL](https://github.com/IDEA-CCNL/Fengshenbang-LM).
826eb3e093e8501817286ef7deb0a254
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Model Details

- **Developed by:** Zhipeng Yang
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** Chinese
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Stable Diffusion](https://github.com/CompVis/stable-diffusion) as a pre-trained model.
- **Resources for more information:** [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)
d342d8b2b160b39c2e30ef249ef05c17
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Examples

First, install the dependencies. The package used here is a modified version of [🤗's Diffusers library](https://github.com/huggingface/diffusers) that runs Chinese Stable Diffusion.

```bash
pip install diffusers==0.6.0 transformers torch datasets accelerate sentencepiece
```

Run this command to log in with your HF Hub token if you haven't before:

```bash
huggingface-cli login
```

Running the pipeline with the LMSDiscreteScheduler scheduler:

```python
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

# Configure the LMS scheduler (these beta settings match the Stable Diffusion
# defaults) and pass it to the pipeline; the default scheduler also works.
scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
)
pipeline = StableDiffusionPipeline.from_pretrained(
    "svjack/Stable-Diffusion-FineTuned-zh-v2", scheduler=scheduler
)
# Disable the safety checker (returns the images unchanged)
pipeline.safety_checker = lambda images, clip_input: (images, False)
pipeline = pipeline.to("cuda")

prompt = '女孩们打开了另一世界的大门'  # "The girls opened the door to another world"
image = pipeline(prompt, guidance_scale=7.5).images[0]
```
81b0c9cba9794eeec9a8284f64ec840c
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Generator Results comparison

[https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)

![0](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/dragon_v2.jpg?raw=true) ![1](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/dragon_style_v2.jpg?raw=true) ![2](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/girl_v2.jpg?raw=true) ![3](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/imgs/girl_style_v2.jpg?raw=true)
80329c186a6e63de61cf1861224a1fa2
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1._ The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
0ba8b794368b803256bdf9c22d82dbd0
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
- Faces and people in general may not be generated properly.
- The model was trained mainly with Japanese captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult material and is not fit for product use without additional safety mechanisms and considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
450d92c59f273cb8f8a59e692b5db41a
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Japanese Stable Diffusion was trained on Japanese datasets including [LAION-5B](https://laion.ai/blog/laion-5b/) with Japanese captions, which consists of images that are primarily limited to Japanese descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model. Further, the ability of the model to generate content with non-Japanese prompts is significantly worse than with Japanese-language prompts.
1a9d48faecbdd3953cb644c70e7ddfcd
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
false
Training

**Training Data** We used the following dataset for training the model:

- Approximately 100 million images with Japanese captions, including the Japanese subset of [LAION-5B](https://laion.ai/blog/laion-5b/).

**Training Procedure** Japanese Stable Diffusion has the same architecture as Stable Diffusion and was trained using Stable Diffusion as a pre-trained model. Because Stable Diffusion was trained on an English dataset and the CLIP tokenizer is primarily for English, we used two stages to transfer to a language-specific model, inspired by [PITI](https://arxiv.org/abs/2205.12952):

1. Train a Japanese-specific text encoder with our Japanese tokenizer from scratch, with the latent diffusion model fixed. This stage is expected to map Japanese captions to Stable Diffusion's latent space.
2. Fine-tune the text encoder and the latent diffusion model jointly. This stage is expected to better generate Japanese-style images.
7d1816df059b67af0f07dbf8d079df3e
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-mr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.5479 - Wer: 0.5740
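The Wer figure above is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the number of reference words. Evaluation in these cards typically uses the `jiwer` or `evaluate` packages; a minimal self-contained sketch of the computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic-programming table over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(r)
```

One substitution in a three-word reference gives `wer("a b c", "a x c") == 1/3`, i.e. the 0.5740 above means roughly 57% of reference words required an edit.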
687f08a96850762aaad5c825f5ccfa3d
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
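Note that `total_train_batch_size` is simply `train_batch_size × gradient_accumulation_steps` (16 × 2 = 32), and the `linear` scheduler ramps the learning rate up over the first 1000 warmup steps, then decays it linearly to zero. A pure-Python sketch of that schedule — the total step count of 4400 is taken from the training-results table for this run, and the exact Trainer implementation may differ in small details:

```python
def linear_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=4400):
    """Linear warmup to base_lr, then linear decay to zero (a sketch)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate is half of `base_lr` midway through warmup and back to zero at the final step.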
15012bb43e0516135f8b6c262d47e268
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.7378 | 18.18 | 400 | 3.5047 | 1.0 |
| 3.1707 | 36.36 | 800 | 2.6166 | 0.9912 |
| 1.4942 | 54.55 | 1200 | 0.5778 | 0.6927 |
| 1.2058 | 72.73 | 1600 | 0.5168 | 0.6362 |
| 1.0558 | 90.91 | 2000 | 0.5105 | 0.6069 |
| 0.9488 | 109.09 | 2400 | 0.5151 | 0.6089 |
| 0.8588 | 127.27 | 2800 | 0.5157 | 0.5989 |
| 0.7991 | 145.45 | 3200 | 0.5179 | 0.5740 |
| 0.7545 | 163.64 | 3600 | 0.5348 | 0.5740 |
| 0.7144 | 181.82 | 4000 | 0.5518 | 0.5724 |
| 0.7041 | 200.0 | 4400 | 0.5479 | 0.5740 |
90a293c33b0bb7f63dfa409f02517366
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-mr --dataset mozilla-foundation/common_voice_8_0 --config mr --split test ```
b55c97d84957de485beb0af18086bc92
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Inference With LM

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-large-xls-r-300m-mr"

sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
# Common Voice audio is 48 kHz; the model expects 16 kHz input
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
df483c89a496654df984f68bc6dddc6a
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-math_punctuation-ignore_word_parts This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1981 - Precision: 0.7843 - Recall: 0.7485 - F Score: 0.7648 - Auc: 0.9248
dc640099c37eacb0dcfae5c0d39550f9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
f5a6d49051cb78e239a83cf1cd88bb19
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F Score | Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-------:|:------:|
| 0.1064 | 0.64 | 500 | 0.1082 | 0.7558 | 0.6580 | 0.6964 | 0.9086 |
| 0.0781 | 1.27 | 1000 | 0.1025 | 0.7594 | 0.7226 | 0.7365 | 0.9261 |
| 0.0757 | 1.91 | 1500 | 0.1001 | 0.7945 | 0.6899 | 0.7302 | 0.9272 |
| 0.0538 | 2.54 | 2000 | 0.1061 | 0.7689 | 0.7348 | 0.7480 | 0.9298 |
| 0.0425 | 3.18 | 2500 | 0.1123 | 0.7806 | 0.7361 | 0.7560 | 0.9300 |
| 0.0377 | 3.81 | 3000 | 0.1159 | 0.7841 | 0.7437 | 0.7610 | 0.9292 |
| 0.0235 | 4.45 | 3500 | 0.1259 | 0.7786 | 0.7368 | 0.7561 | 0.9276 |
| 0.0227 | 5.08 | 4000 | 0.1436 | 0.7699 | 0.7448 | 0.7555 | 0.9277 |
| 0.0159 | 5.72 | 4500 | 0.1466 | 0.7715 | 0.7333 | 0.7514 | 0.9252 |
| 0.0106 | 6.35 | 5000 | 0.1574 | 0.7710 | 0.7456 | 0.7566 | 0.9276 |
| 0.0111 | 6.99 | 5500 | 0.1560 | 0.7694 | 0.7500 | 0.7595 | 0.9286 |
| 0.0074 | 7.62 | 6000 | 0.1645 | 0.7789 | 0.7511 | 0.7639 | 0.9305 |
| 0.0056 | 8.26 | 6500 | 0.1745 | 0.7887 | 0.7453 | 0.7648 | 0.9265 |
| 0.005 | 8.89 | 7000 | 0.1760 | 0.7779 | 0.7497 | 0.7629 | 0.9281 |
| 0.0038 | 9.53 | 7500 | 0.1873 | 0.7826 | 0.7505 | 0.7634 | 0.9273 |
| 0.0031 | 10.17 | 8000 | 0.1896 | 0.7855 | 0.7477 | 0.7644 | 0.9258 |
| 0.0026 | 10.8 | 8500 | 0.1929 | 0.7849 | 0.7485 | 0.7650 | 0.9263 |
| 0.0017 | 11.44 | 9000 | 0.1981 | 0.7843 | 0.7485 | 0.7648 | 0.9248 |
c024635c86dc760950f7976bb816b7ec
apache-2.0
['translation']
false
opus-mt-bg-fi

* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
df8b3abef482d45244feaa6ceded6db9
cc-by-sa-4.0
['speech', 'automatic-speech-recognition']
false
Wav2Vec2 base model trained on 3K hours of Vietnamese speech

The base model is pre-trained on 16kHz sampled speech audio from a Vietnamese speech corpus containing 3K hours of spontaneous, reading, and broadcast speech. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) [Paper](https://arxiv.org/abs/2006.11477)
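The 16kHz requirement means audio recorded at other rates must be resampled before inference. In practice you would use `torchaudio.functional.resample` or `librosa.resample`; purely to illustrate what resampling does, here is a naive linear-interpolation sketch:

```python
def resample_linear(signal, orig_sr, target_sr):
    """Naive linear-interpolation resampling (illustration only; real
    resamplers apply an anti-aliasing filter when downsampling)."""
    ratio = target_sr / orig_sr
    n_out = int(len(signal) * ratio)
    out = []
    for i in range(n_out):
        pos = i / ratio                       # position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(signal) - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out
```

Downsampling `[0, 1, 2, 3]` from 4 Hz to 2 Hz keeps every other sample: `[0.0, 2.0]`.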
99e0a56e42b2fdb9dc79ee98feb32d5c
cc-by-sa-4.0
['speech', 'automatic-speech-recognition']
false
Usage

See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-3k")
```
b5342501e82e6aa364546e4d26d52c5d
apache-2.0
['generated_from_trainer']
false
bart-model2-3110-e4 This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0700 - Rouge1: 70.0692 - Rouge2: 68.1457 - Rougel: 69.8943 - Rougelsum: 70.0389 - Gen Len: 19.8966
ecd40c6224b8e129131913e7f3703183
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
9d53d0668661ddd4cd4404625c20f925
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5951 | 1.0 | 553 | 0.3089 | 62.5675 | 54.7411 | 61.2646 | 61.3675 | 19.7241 |
| 0.2541 | 2.0 | 1106 | 0.1432 | 66.113 | 61.964 | 64.6141 | 64.9187 | 19.8966 |
| 0.1547 | 3.0 | 1659 | 0.0964 | 68.6902 | 64.938 | 67.6197 | 67.9181 | 19.8966 |
| 0.1141 | 4.0 | 2212 | 0.1015 | 68.9122 | 66.4279 | 68.4906 | 68.5758 | 19.8966 |
| 0.0728 | 5.0 | 2765 | 0.0819 | 69.2271 | 66.8276 | 68.6915 | 68.849 | 19.8966 |
| 0.0563 | 6.0 | 3318 | 0.0700 | 70.0692 | 68.1457 | 69.8943 | 70.0389 | 19.8966 |
b4e0d4eff0569acdeca3fa1fdd818536
apache-2.0
['generated_from_trainer']
false
model2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2319 - Accuracy: 0.9479
82367523a106eab477a853519c4be2b4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 224 | 0.2074 | 0.9453 |
| No log | 2.0 | 448 | 0.2421 | 0.9440 |
| 0.2593 | 3.0 | 672 | 0.2319 | 0.9479 |
a244665a3cef57e5a69b4324d581da58
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
lineal-ic Dreambooth model trained by viba98 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: linealic ![linealic 0](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_5.jpg) ![linealic 1](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_4.jpg) ![linealic 2](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_1.jpg) ![linealic 3](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_3.jpg) ![linealic 4](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_7.jpg) ![linealic 5](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_6.jpg) ![linealic 6](https://huggingface.co/viba98/lineal-ic/resolve/main/sample_images/linealic_2.jpg)
7f73cd6bb5a8bd0e5b5276547c4a7a2c
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2456
66043e757c663f6f8dc0c655502cab9e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0986 | 1.0 | 291 | 1.6929 |
| 1.6401 | 2.0 | 582 | 1.4304 |
| 1.4881 | 3.0 | 873 | 1.3916 |
| 1.4 | 4.0 | 1164 | 1.3796 |
| 1.3416 | 5.0 | 1455 | 1.2012 |
| 1.2807 | 6.0 | 1746 | 1.2733 |
| 1.2396 | 7.0 | 2037 | 1.2646 |
| 1.1993 | 8.0 | 2328 | 1.2098 |
| 1.1661 | 9.0 | 2619 | 1.1862 |
| 1.1406 | 10.0 | 2910 | 1.2223 |
| 1.1294 | 11.0 | 3201 | 1.2056 |
| 1.1042 | 12.0 | 3492 | 1.1655 |
| 1.0827 | 13.0 | 3783 | 1.2525 |
| 1.0738 | 14.0 | 4074 | 1.1685 |
| 1.0626 | 15.0 | 4365 | 1.1182 |
| 1.0629 | 16.0 | 4656 | 1.2456 |
d415922e0afa38ac49f46b1d57f28377
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.926 - F1: 0.9262
e440e127e9c252d5ab5f365f6aecd4d1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 |
| 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 |
ade70772c115ce54a9d960b17ce216f1
apache-2.0
['translation']
false
opus-mt-en-ru

* source languages: en
* target languages: ru
* OPUS readme: [en-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.eval.txt)
036bc2448e0c88c6c8e20705663dfae9
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.en.ru | 31.1 | 0.581 |
| newstest2013.en.ru | 23.5 | 0.513 |
| newstest2015-enru.en.ru | 27.5 | 0.564 |
| newstest2016-enru.en.ru | 26.4 | 0.548 |
| newstest2017-enru.en.ru | 29.1 | 0.572 |
| newstest2018-enru.en.ru | 25.4 | 0.554 |
| newstest2019-enru.en.ru | 27.1 | 0.533 |
| Tatoeba.en.ru | 48.4 | 0.669 |
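BLEU is built from clipped n-gram precisions (n = 1..4) combined with a brevity penalty, while chr-F is an analogous character-level F-score. As an illustration of the basic building block only — not the full sacreBLEU computation used for scores like these:

```python
from collections import Counter

def ngram_precision(reference: str, hypothesis: str, n: int = 2) -> float:
    """Clipped n-gram precision, the building block of BLEU (sketch)."""
    def ngrams(words):
        return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    ref, hyp = ngrams(reference.split()), ngrams(hypothesis.split())
    if not hyp:
        return 0.0
    # Clip each hypothesis n-gram count by its count in the reference.
    return sum((ref & hyp).values()) / sum(hyp.values())
```

Both bigrams of "the cat sat" appear in "the cat sat on the mat", so the bigram precision there is 1.0.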
542156396231091fb0681467e46ce0e1
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_small_tf_224.in1k

An official MaxViT image classification model, trained in TensorFlow on ImageNet-1k by the paper authors. Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
5122bcf5063c21f6054bb52fa02a7280
apache-2.0
['image-classification', 'timm']
false
Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 68.9
  - GMACs: 11.7
  - Activations (M): 53.2
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
c75def1aeaef4ee49aad30b20e96f3b6
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxvit_small_tf_224.in1k', pretrained=True)
model = model.eval()
```
d98d2c19446c995e8eb3e4544d1fd84a
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_small_tf_224.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
7171ace58f07e5cffb7f70e891c35b7c
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_small_tf_224.in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier head so the model returns pooled embeddings
)
model = model.eval()
```
49503848e8bffb6180b1df6d4280f044
mit
['generated_from_keras_callback']
false
Deep98/IPod-clustered This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4336 - Train End Logits Accuracy: 0.8819 - Train Start Logits Accuracy: 0.8819 - Validation Loss: 0.3193 - Validation End Logits Accuracy: 0.8636 - Validation Start Logits Accuracy: 0.8636 - Epoch: 0
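The start/end-logits accuracies above refer to the two heads of an extractive question-answering model: one scores each token as a possible answer-span start, the other as a possible end. Decoding then picks the best valid (start, end) pair, which can be sketched as follows (`max_len` is an illustrative cap on span length, not a parameter of this model):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) pair maximizing start_logits[s] + end_logits[e]
    subject to s <= e < s + max_len (extractive-QA decoding sketch)."""
    best, best_score = (0, 0), float("-inf")
    for s, sl in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = sl + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best
```

With `start_logits=[0, 5, 1]` and `end_logits=[0, 1, 6]` the best valid pair is tokens 1 through 2.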
47b758963cc16dd674042021cccf04e5
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4336 | 0.8819 | 0.8819 | 0.3193 | 0.8636 | 0.8636 | 0 |
a9c7845e1cb4e706a295b02b0ebed34f
apache-2.0
['generated_from_trainer']
false
wav2vec2-E This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4832 - Wer: 0.3432
183d5246360270080a010ba67e0fe003
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5034 | 4.0 | 500 | 1.1620 | 0.8995 |
| 0.5738 | 8.0 | 1000 | 0.4625 | 0.4396 |
| 0.2142 | 12.0 | 1500 | 0.4791 | 0.3965 |
| 0.1219 | 16.0 | 2000 | 0.4677 | 0.3703 |
| 0.0854 | 20.0 | 2500 | 0.4782 | 0.3544 |
| 0.0587 | 24.0 | 3000 | 0.4680 | 0.3516 |
| 0.044 | 28.0 | 3500 | 0.4832 | 0.3432 |
736d683366b86b0f762374eaf0d62572
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 60k (uncased)

Seed 2 intermediate checkpoint 60k of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
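Under the MLM objective mentioned above, a fraction of input tokens is hidden and the model is trained to reconstruct them. A toy sketch of the input corruption — BERT's actual recipe selects 15% of tokens and, of those, uses [MASK] 80% of the time, a random token 10%, and the original token 10%; this sketch only does the [MASK] case:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Replace each token with [MASK] with probability mask_prob; the
    label list records the original token at masked positions (else None)."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)  # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)  # position excluded from the MLM loss
    return masked, labels
```

The loss is then computed only at the masked positions, against the recorded labels.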
a5c00e11d6f9d291798450f7a59f6fc5
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-60k')
model = BertModel.from_pretrained("multiberts-seed-2-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
92148ebe3ea9966b6bdd1f411b4b2d08
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4313 - Wer: 0.3336
91807cc13399bdf335a1a78e43635ace
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0055 | 3.67 | 400 | 0.7015 | 0.6789 |
| 0.4384 | 7.34 | 800 | 0.4827 | 0.4875 |
| 0.2143 | 11.01 | 1200 | 0.4672 | 0.4554 |
| 0.1431 | 14.68 | 1600 | 0.4331 | 0.4014 |
| 0.1053 | 18.35 | 2000 | 0.4471 | 0.3822 |
| 0.0857 | 22.02 | 2400 | 0.4324 | 0.3637 |
| 0.0683 | 25.69 | 2800 | 0.4305 | 0.3423 |
| 0.0526 | 29.36 | 3200 | 0.4313 | 0.3336 |
768f56de5bcb78b38749c4129482c480
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-en-demo This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1356 - Wer: 0.2015
0edef44cf11f55964cef4b8baf059c26
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
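With `train_batch_size: 1` and `gradient_accumulation_steps: 4`, the optimizer steps once every four micro-batches, giving the effective `total_train_batch_size: 4` listed above. The bookkeeping, stripped to pure Python:

```python
def run_accumulation(num_batches, accum_steps):
    """Count optimizer steps when gradients are accumulated over
    accum_steps micro-batches (sketch of the Trainer's bookkeeping)."""
    optimizer_steps = 0
    for i in range(1, num_batches + 1):
        # loss.backward() would run here on every micro-batch,
        # summing gradients into the parameters' .grad buffers
        if i % accum_steps == 0:
            optimizer_steps += 1  # optimizer.step(); optimizer.zero_grad()
    return optimizer_steps
```

Note that any trailing micro-batches short of a full accumulation window contribute gradients but no optimizer step in this simple scheme.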
99eeb38a8f00cc50426d858747e97829
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3911 | 0.5 | 500 | 0.5397 | 0.2615 |
| 0.3413 | 1.01 | 1000 | 0.1423 | 0.2137 |
| 0.243 | 1.51 | 1500 | 0.1458 | 0.2210 |
| 0.2232 | 2.01 | 2000 | 0.1380 | 0.2143 |
| 0.162 | 2.51 | 2500 | 0.1464 | 0.2149 |
| 0.1384 | 3.02 | 3000 | 0.1348 | 0.2109 |
| 0.1164 | 3.52 | 3500 | 0.1324 | 0.2040 |
| 0.1103 | 4.02 | 4000 | 0.1310 | 0.2051 |
| 0.0857 | 4.53 | 4500 | 0.1356 | 0.2015 |
78e5aa448652e110136d67f1a9ef723c
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Mn - akmoyu This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.8308 - Wer: 50.5188
b81d881f22105bccabb9b820361fad37
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0306 | 7.94 | 1000 | 0.6344 | 52.8724 |
| 0.0017 | 15.87 | 2000 | 0.7480 | 50.3659 |
| 0.0004 | 23.81 | 3000 | 0.8137 | 50.5406 |
| 0.0003 | 15.87 | 4000 | 0.8308 | 50.5188 |
47164f96e6c43278f9fc699932de9c73
apache-2.0
['generated_from_trainer']
false
distilr2-lr1e05-wd0.05-bs64 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2722 - Rmse: 0.5217 - Mse: 0.2722 - Mae: 0.4147
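The regression metrics above are related: Mse is the mean squared error, Rmse its square root (0.5217² ≈ 0.2722, consistent with the table), and Mae the mean absolute error. On toy data:

```python
import math

preds = [2.0, 1.5, 3.0]
targets = [1.0, 1.5, 2.5]
n = len(preds)

mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / n
rmse = math.sqrt(mse)  # Rmse is always sqrt(Mse)
mae = sum(abs(p - t) for p, t in zip(preds, targets)) / n
```

MAE is less sensitive to large individual errors than RMSE, which is why both are often reported together.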
7b5ea6f951cba4f89b18f123b2187c26
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 312 | 0.2749 | 0.5243 | 0.2749 | 0.4243 |
| 0.2745 | 2.0 | 624 | 0.2731 | 0.5226 | 0.2731 | 0.4120 |
| 0.2732 | 3.0 | 936 | 0.2725 | 0.5220 | 0.2725 | 0.4156 |
| 0.2718 | 4.0 | 1248 | 0.2722 | 0.5217 | 0.2722 | 0.4147 |
f0bb0ea0cf06c091395282801a1e61af
apache-2.0
['generated_from_trainer']
false
co2_eq_emissions:
- emissions: 49.49 g
- source: eco2AI
- training_time: 00:31:54
- geographical_location: Bavaria, Germany
- hardware_used: Intel(R) Xeon(R) Gold 5215 CPUs (2 devices) & NVIDIA A40 (1 device)
485c808f043665d26e89e9bedd98e866
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_100k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 100k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
419382d48a702403cbb177fc9f20b6bd
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_100k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_100k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_100k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_100k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
f15e26f08c5b2ee541a5f83ce83ad2d5
apache-2.0
['generated_from_trainer']
false
20split_dataset_version1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1942
bab4cbb530d41d0990081d1b98e12141
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12
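For reference, the step counts in the results table below follow directly from the batch size: one optimizer step per batch, with a possibly partial final batch. A minimal sketch (the example dataset size of ~758k is an assumption inferred back from the reported 11851 steps/epoch at batch size 64, not a documented figure):

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # Each optimizer step consumes one batch; the last batch may be partial,
    # so the count rounds up.
    return math.ceil(num_examples / batch_size)

# 11851 steps/epoch at batch size 64 implies roughly 11851 * 64 ≈ 758k examples.
print(steps_per_epoch(758_464, 64))  # 11851
```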
01d0407f9d1525f50952ba27ea6bca75
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.7475 | 1.0 | 11851 | 2.5194 | | 2.5528 | 2.0 | 23702 | 2.4191 | | 2.4649 | 3.0 | 35553 | 2.3646 | | 2.4038 | 4.0 | 47404 | 2.3289 | | 2.3632 | 5.0 | 59255 | 2.2922 | | 2.3273 | 6.0 | 71106 | 2.2739 | | 2.2964 | 7.0 | 82957 | 2.2494 | | 2.2732 | 8.0 | 94808 | 2.2217 | | 2.2526 | 9.0 | 106659 | 2.2149 | | 2.2369 | 10.0 | 118510 | 2.2029 | | 2.222 | 11.0 | 130361 | 2.2020 | | 2.2135 | 12.0 | 142212 | 2.1942 |
a3a86fff28913f4fb53abfedd9a5bf89
apache-2.0
['generated_from_trainer']
false
vc-bantai-vit-withoutAMBI-adunest-v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8271 - Accuracy: 0.7705
8d9f8df616d033361be388a5afcbff21
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP
450c6e667735cdc9b8f347db237fb193
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.4 | 100 | 0.3811 | 0.8511 | | No log | 0.81 | 200 | 0.3707 | 0.8609 | | No log | 1.21 | 300 | 0.5708 | 0.7325 | | No log | 1.61 | 400 | 0.3121 | 0.8778 | | 0.3308 | 2.02 | 500 | 0.3358 | 0.8445 | | 0.3308 | 2.42 | 600 | 0.2820 | 0.8768 | | 0.3308 | 2.82 | 700 | 0.4825 | 0.7695 | | 0.3308 | 3.23 | 800 | 0.3133 | 0.8640 | | 0.3308 | 3.63 | 900 | 0.4509 | 0.8219 | | 0.2028 | 4.03 | 1000 | 0.5426 | 0.7551 | | 0.2028 | 4.44 | 1100 | 0.4886 | 0.8552 | | 0.2028 | 4.84 | 1200 | 0.5649 | 0.7695 | | 0.2028 | 5.24 | 1300 | 0.5925 | 0.7900 | | 0.2028 | 5.65 | 1400 | 0.4203 | 0.8439 | | 0.1471 | 6.05 | 1500 | 0.4275 | 0.8486 | | 0.1471 | 6.45 | 1600 | 0.3683 | 0.8727 | | 0.1471 | 6.85 | 1700 | 0.5709 | 0.8121 | | 0.1471 | 7.26 | 1800 | 0.6209 | 0.7680 | | 0.1471 | 7.66 | 1900 | 0.4971 | 0.8147 | | 0.101 | 8.06 | 2000 | 0.8792 | 0.7567 | | 0.101 | 8.47 | 2100 | 0.3288 | 0.8670 | | 0.101 | 8.87 | 2200 | 0.3643 | 0.8342 | | 0.101 | 9.27 | 2300 | 0.4883 | 0.8711 | | 0.101 | 9.68 | 2400 | 0.2892 | 0.8943 | | 0.0667 | 10.08 | 2500 | 0.5437 | 0.8398 | | 0.0667 | 10.48 | 2600 | 0.5841 | 0.8450 | | 0.0667 | 10.89 | 2700 | 0.8016 | 0.8219 | | 0.0667 | 11.29 | 2800 | 0.6389 | 0.7772 | | 0.0667 | 11.69 | 2900 | 0.3714 | 0.8753 | | 0.0674 | 12.1 | 3000 | 0.9811 | 0.7130 | | 0.0674 | 12.5 | 3100 | 0.6359 | 0.8101 | | 0.0674 | 12.9 | 3200 | 0.5691 | 0.8285 | | 0.0674 | 13.31 | 3300 | 0.6123 | 0.8316 | | 0.0674 | 13.71 | 3400 | 0.3655 | 0.8978 | | 0.0525 | 14.11 | 3500 | 0.4988 | 0.8583 | | 0.0525 | 14.52 | 3600 | 0.6153 | 0.8450 | | 0.0525 | 14.92 | 3700 | 0.4189 | 0.8881 | | 0.0525 | 15.32 | 3800 | 0.9713 | 0.7967 | | 0.0525 | 15.73 | 3900 | 1.1224 | 0.7967 | | 0.0438 | 16.13 | 4000 | 0.5725 | 0.8578 | | 0.0438 | 16.53 | 4100 | 0.4725 | 0.8532 | | 0.0438 | 16.94 | 4200 | 0.4696 | 0.8640 | | 0.0438 | 17.34 | 4300 | 0.4028 | 0.8789 | | 0.0438 
| 17.74 | 4400 | 0.9452 | 0.7746 | | 0.0462 | 18.15 | 4500 | 0.4455 | 0.8783 | | 0.0462 | 18.55 | 4600 | 0.6328 | 0.8311 | | 0.0462 | 18.95 | 4700 | 0.6707 | 0.8296 | | 0.0462 | 19.35 | 4800 | 0.7771 | 0.8429 | | 0.0462 | 19.76 | 4900 | 1.2832 | 0.7408 | | 0.0381 | 20.16 | 5000 | 0.5415 | 0.8737 | | 0.0381 | 20.56 | 5100 | 0.8932 | 0.7977 | | 0.0381 | 20.97 | 5200 | 0.5182 | 0.8691 | | 0.0381 | 21.37 | 5300 | 0.5967 | 0.8794 | | 0.0381 | 21.77 | 5400 | 0.8271 | 0.7705 |
5ec5a237167b5c04a483ece38d62d257
mit
['generated_from_trainer']
false
gpt-finetuning-cervantes This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.8331
7c118e5d705d75ace7164ff746e3dade
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 70 - mixed_precision_training: Native AMP
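The `total_train_batch_size` above is not set directly; it is the product of the per-device batch size and the gradient accumulation steps. A minimal sketch of that relationship (single device assumed, matching this run):

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    # Gradients are accumulated over several forward/backward passes before
    # a single optimizer step, multiplying the effective batch size.
    return per_device_batch * grad_accum_steps * num_devices

print(effective_batch_size(32, 16))  # 512, the total_train_batch_size above
```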
cb1142a35cfe8bbc91d6a7b81e6a5428
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0291 | 0.96 | 13 | 4.6705 | | 4.7952 | 1.96 | 26 | 4.4547 | | 4.5759 | 2.96 | 39 | 4.3201 | | 4.4032 | 3.96 | 52 | 4.2451 | | 4.269 | 4.96 | 65 | 4.1911 | | 4.143 | 5.96 | 78 | 4.1577 | | 4.0229 | 6.96 | 91 | 4.1306 | | 3.9047 | 7.96 | 104 | 4.1165 | | 3.7886 | 8.96 | 117 | 4.1114 | | 3.6666 | 9.96 | 130 | 4.1109 | | 3.539 | 10.96 | 143 | 4.1201 | | 3.4117 | 11.96 | 156 | 4.1374 | | 3.272 | 12.96 | 169 | 4.1538 | | 3.1283 | 13.96 | 182 | 4.1876 | | 2.9728 | 14.96 | 195 | 4.2226 | | 2.816 | 15.96 | 208 | 4.2695 | | 2.6475 | 16.96 | 221 | 4.3106 | | 2.4765 | 17.96 | 234 | 4.3678 | | 2.302 | 18.96 | 247 | 4.4249 | | 2.1257 | 19.96 | 260 | 4.4908 | | 1.9537 | 20.96 | 273 | 4.5664 | | 1.7834 | 21.96 | 286 | 4.6324 | | 1.6177 | 22.96 | 299 | 4.6944 | | 1.4573 | 23.96 | 312 | 4.7880 | | 1.3057 | 24.96 | 325 | 4.8843 | | 1.1652 | 25.96 | 338 | 4.9760 | | 1.0341 | 26.96 | 351 | 5.0612 | | 0.9101 | 27.96 | 364 | 5.1714 | | 0.8017 | 28.96 | 377 | 5.2702 | | 0.706 | 29.96 | 390 | 5.3530 | | 0.6194 | 30.96 | 403 | 5.4535 | | 0.5436 | 31.96 | 416 | 5.5373 | | 0.4816 | 32.96 | 429 | 5.6153 | | 0.4309 | 33.96 | 442 | 5.7014 | | 0.3899 | 34.96 | 455 | 5.7749 | | 0.3544 | 35.96 | 468 | 5.8430 | | 0.3236 | 36.96 | 481 | 5.9237 | | 0.3005 | 37.96 | 494 | 5.9824 | | 0.2804 | 38.96 | 507 | 6.0264 | | 0.263 | 39.96 | 520 | 6.0797 | | 0.2513 | 40.96 | 533 | 6.1285 | | 0.2376 | 41.96 | 546 | 6.1900 | | 0.2264 | 42.96 | 559 | 6.2212 | | 0.2183 | 43.96 | 572 | 6.2812 | | 0.2104 | 44.96 | 585 | 6.3079 | | 0.203 | 45.96 | 598 | 6.3501 | | 0.1964 | 46.96 | 611 | 6.3730 | | 0.1912 | 47.96 | 624 | 6.4190 | | 0.1854 | 48.96 | 637 | 6.4598 | | 0.1817 | 49.96 | 650 | 6.4618 | | 0.1792 | 50.96 | 663 | 6.4914 | | 0.1748 | 51.96 | 676 | 6.5385 | | 0.1732 | 52.96 | 689 | 6.5689 | | 0.1689 | 53.96 | 702 | 6.5761 | | 0.1672 | 54.96 | 715 | 6.5775 | | 0.1657 | 55.96 | 728 | 6.6362 | | 
0.1625 | 56.96 | 741 | 6.6573 | | 0.1611 | 57.96 | 754 | 6.7019 | | 0.1588 | 58.96 | 767 | 6.6602 | | 0.1573 | 59.96 | 780 | 6.7015 | | 0.1547 | 60.96 | 793 | 6.7323 | | 0.1542 | 61.96 | 806 | 6.7368 | | 0.1538 | 62.96 | 819 | 6.7704 | | 0.1513 | 63.96 | 832 | 6.7963 | | 0.1504 | 64.96 | 845 | 6.7988 | | 0.1506 | 65.96 | 858 | 6.8386 | | 0.1497 | 66.96 | 871 | 6.8039 | | 0.15 | 67.96 | 884 | 6.8126 | | 0.1497 | 68.96 | 897 | 6.8858 | | 0.143 | 69.96 | 910 | 6.8331 |
16cd8ebaa8844ec9d8969f2f804c17d1
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-ml Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on ml (Malayalam) using the [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The notebooks used to train the model are available [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/). When using this model, make sure that your speech input is sampled at 16kHz.
eddb08ffe12b4db121e95c529e2dc0e2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = <load-test-split-of-combined-dataset>
32189f2075f0fdedd33830249a0add3b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Details on loading this dataset in the evaluation section processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam") model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam") resampler = torchaudio.transforms.Resample(48_000, 16_000)
ffa32c37f1a5d2117f31c49dbb74cb4f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"]) ```
c68dec5fa03f5cea6d40fd229be4c567
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the test data of the combined custom dataset. For more details on dataset preparation, check the notebooks mentioned at the end of this file. ```python import torch import torchaudio import datasets from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re from pathlib import Path
7af80fafc536ebb3f0570e338172bfcd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
The custom dataset needs to be created using notebook mentioned at the end of this file data_dir = Path('<path-to-custom-dataset>') dataset_folders = { 'iiit': 'iiit_mal_abi', 'openslr': 'openslr', 'indic-tts': 'indic-tts-ml', 'msc-reviewed': 'msc-reviewed-speech-v1.0+20200825', }
0b65e7bb07c322412f25f3f18af92ff5
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Set directories for datasets openslr_male_dir = data_dir / dataset_folders['openslr'] / 'male' openslr_female_dir = data_dir / dataset_folders['openslr'] / 'female' iiit_dir = data_dir / dataset_folders['iiit'] indic_tts_male_dir = data_dir / dataset_folders['indic-tts'] / 'male' indic_tts_female_dir = data_dir / dataset_folders['indic-tts'] / 'female' msc_reviewed_dir = data_dir / dataset_folders['msc-reviewed']
f30f72b46e9d443936d453a542161491
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Load the datasets openslr_male = load_dataset("json", data_files=[f"{str(openslr_male_dir.absolute())}/sample_{i}.json" for i in range(2023)], split="train") openslr_female = load_dataset("json", data_files=[f"{str(openslr_female_dir.absolute())}/sample_{i}.json" for i in range(2103)], split="train") iiit = load_dataset("json", data_files=[f"{str(iiit_dir.absolute())}/sample_{i}.json" for i in range(1000)], split="train") indic_tts_male = load_dataset("json", data_files=[f"{str(indic_tts_male_dir.absolute())}/sample_{i}.json" for i in range(5649)], split="train") indic_tts_female = load_dataset("json", data_files=[f"{str(indic_tts_female_dir.absolute())}/sample_{i}.json" for i in range(2950)], split="train") msc_reviewed = load_dataset("json", data_files=[f"{str(msc_reviewed_dir.absolute())}/sample_{i}.json" for i in range(1541)], split="train")
760773bf46084a0062e5c99f3b54fe25
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Create test split as 20%, set random seed as well. test_size = 0.2 random_seed=1 openslr_male_splits = openslr_male.train_test_split(test_size=test_size, seed=random_seed) openslr_female_splits = openslr_female.train_test_split(test_size=test_size, seed=random_seed) iiit_splits = iiit.train_test_split(test_size=test_size, seed=random_seed) indic_tts_male_splits = indic_tts_male.train_test_split(test_size=test_size, seed=random_seed) indic_tts_female_splits = indic_tts_female.train_test_split(test_size=test_size, seed=random_seed) msc_reviewed_splits = msc_reviewed.train_test_split(test_size=test_size, seed=random_seed)
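The seeded splits above can be sketched in plain Python. This is an illustrative re-implementation of a reproducible 80/20 split, not the actual `datasets.train_test_split` code:

```python
import random

def seeded_split(items, test_size=0.2, seed=1):
    # Shuffle a copy with a fixed seed so the split is reproducible,
    # then slice off the last `test_size` fraction as the test set.
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[:-n_test], shuffled[-n_test:]

# e.g. the 2023 openslr male samples -> 1619 train / 404 test
train, test = seeded_split(range(2023))
print(len(train), len(test))  # 1619 404
```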
955b35fc32037fb9a9d04da2aed7867d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Get combined test dataset split_list = [openslr_male_splits, openslr_female_splits, indic_tts_male_splits, indic_tts_female_splits, msc_reviewed_splits, iiit_splits] test_dataset = datasets.concatenate_datasets([split['test'] for split in split_list]) wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam") model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam") model.to("cuda") resamplers = { 48000: torchaudio.transforms.Resample(48_000, 16_000), } chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�Utrnle\_]' unicode_ignore_regex = r'[\u200e]'
053488bb92c53ac6ac3e418719ebc88e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]) batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"]) speech_array, sampling_rate = torchaudio.load(batch["path"])
37673f3a67822b696eca0892bd7df344
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Resample if it's not sampled at 16kHz if sampling_rate != 16000: batch["speech"] = resamplers[sampling_rate](speech_array).squeeze().numpy() else: batch["speech"] = speech_array.squeeze().numpy()
5b7f60253146ccae44527b98d7acc478
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
If more than one dimension is present, pick the first one if batch["speech"].ndim > 1: batch["speech"] = batch["speech"][0] return batch test_dataset = test_dataset.map(speech_file_to_array_fn)
6323089b6b4c30e16244aed951260dd8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Run batched inference on the test set and compute WER def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result (WER)**: 28.43 %
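For readers who want the metric without `load_metric`, word error rate reduces to a word-level edit distance divided by the reference length. A minimal, dependency-free sketch (not the `wer` metric implementation used above):

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edit distance (substitutions, insertions,
    # deletions) divided by the number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(wer("a b c d", "a x c"))  # 0.5: one substitution + one deletion over 4 words
```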
e97b61417212a77202e575f5379e285b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training A combined dataset was created using [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The datasets were downloaded and converted to HF Dataset format using [this notebook](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/make_hf_dataset.ipynb). The notebook used for training and evaluation can be found [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/fine-tune-xlsr-wav2vec2-on-malayalam-asr-with-transformers_v2.ipynb)
b304431ccf036ee6f62a0e14cfa9a406
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ET dataset. It achieves the following results on the evaluation set: - Loss: 0.4623 - Wer: 0.3420
f5ccae48016547b58811d1c7d9413937
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 72 - eval_batch_size: 72 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 144 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP
6ae722520a995b6ef35e894dbe3ec9d2
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3082 | 12.5 | 500 | 0.3871 | 0.4907 | | 0.1497 | 25.0 | 1000 | 0.4168 | 0.4278 | | 0.1243 | 37.5 | 1500 | 0.4446 | 0.4220 | | 0.0954 | 50.0 | 2000 | 0.4426 | 0.3946 | | 0.0741 | 62.5 | 2500 | 0.4502 | 0.3800 | | 0.0533 | 75.0 | 3000 | 0.4618 | 0.3653 | | 0.0447 | 87.5 | 3500 | 0.4518 | 0.3461 | | 0.0396 | 100.0 | 4000 | 0.4623 | 0.3420 |
025111efd51e9a3e65f19e62de8ff663
cc-by-4.0
[]
false
PunjabiBERT PunjabiBERT is a Punjabi BERT model trained on publicly available Punjabi monolingual datasets. Preliminary details on the dataset, models, and baseline results can be found in our [<a href='https://arxiv.org/abs/2211.11418'> paper </a>]. Citing: ``` @article{joshi2022l3cubehind, title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages}, author={Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11418}, year={2022} } ```
e49670568a8fb630179737aae15fdf4e
apache-2.0
[]
false
distilbert-base-en-no-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions produce exactly the same representations as the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
0f895a0784e88609fec2e66d182e62bb
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-no-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-no-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
8583c5410247c50d7b22ba32fad64bfb
apache-2.0
['generated_from_trainer']
false
t5_finetuned_genboolq This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5011 - Rouge1: 36.4881 - Rouge2: 17.8649 - Rougel: 34.2658 - Rougelsum: 34.2336 - Gen Len: 11.7003
fa34497e09f1b5e0ba483840bd0f8e51
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.5854 | 1.0 | 2082 | 0.5182 | 35.5544 | 16.9686 | 33.3783 | 33.3536 | 11.5918 | | 0.5479 | 2.0 | 4164 | 0.4969 | 37.0664 | 18.2443 | 34.7139 | 34.6934 | 11.8662 | | 0.5405 | 3.0 | 6246 | 0.5011 | 36.4881 | 17.8649 | 34.2658 | 34.2336 | 11.7003 |
23c60e2341648920e5a2d4f54e4035a3
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-cnndm-wikihow This model is a fine-tuned version of [Sevil/t5-small-finetuned-cnndm_3epoch_v2](https://huggingface.co/Sevil/t5-small-finetuned-cnndm_3epoch_v2) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.2653 - Rouge1: 27.5037 - Rouge2: 10.8442 - Rougel: 23.4674 - Rougelsum: 26.7997 - Gen Len: 18.5558
e7691501728ad789e4f9c3655a362154
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP
5a5d45019d79557e246cd99db97e3768
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.8459 | 0.13 | 5000 | 2.5755 | 25.2929 | 8.7852 | 21.2379 | 24.5649 | 18.4758 | | 2.7251 | 0.25 | 10000 | 2.5189 | 25.33 | 9.0505 | 21.4892 | 24.6523 | 18.4513 | | 2.6696 | 0.38 | 15000 | 2.4805 | 26.3909 | 9.6858 | 22.3589 | 25.7297 | 18.4649 | | 2.647 | 0.51 | 20000 | 2.4491 | 25.9234 | 9.3936 | 22.0086 | 25.2342 | 18.5558 | | 2.5973 | 0.64 | 25000 | 2.4251 | 26.4988 | 9.8197 | 22.6201 | 25.8407 | 18.3438 | | 2.5916 | 0.76 | 30000 | 2.4022 | 26.3149 | 9.8432 | 22.3695 | 25.6581 | 18.4506 | | 2.5691 | 0.89 | 35000 | 2.3801 | 26.4198 | 9.8848 | 22.4856 | 25.7847 | 18.5381 | | 2.5365 | 1.02 | 40000 | 2.3755 | 26.5846 | 10.0287 | 22.667 | 25.9606 | 18.5608 | | 2.4649 | 1.14 | 45000 | 2.3663 | 26.5925 | 10.0569 | 22.6191 | 25.9247 | 18.5803 | | 2.4539 | 1.27 | 50000 | 2.3490 | 26.9735 | 10.2389 | 22.9536 | 26.282 | 18.5126 | | 2.4578 | 1.4 | 55000 | 2.3374 | 26.7878 | 10.2275 | 22.849 | 26.1188 | 18.6162 | | 2.4365 | 1.53 | 60000 | 2.3266 | 27.1171 | 10.403 | 23.0596 | 26.4284 | 18.6128 | | 2.428 | 1.65 | 65000 | 2.3209 | 27.1762 | 10.578 | 23.1577 | 26.5007 | 18.5246 | | 2.4293 | 1.78 | 70000 | 2.3145 | 27.0896 | 10.5146 | 23.1502 | 26.4338 | 18.4604 | | 2.4335 | 1.91 | 75000 | 2.2979 | 27.3373 | 10.6273 | 23.2944 | 26.6725 | 18.5403 | | 2.3981 | 2.03 | 80000 | 2.3008 | 27.1857 | 10.6455 | 23.1333 | 26.5203 | 18.5412 | | 2.3395 | 2.16 | 85000 | 2.2908 | 27.3123 | 10.7063 | 23.3126 | 26.626 | 18.4265 | | 2.3463 | 2.29 | 90000 | 2.2869 | 27.5328 | 10.7662 | 23.4527 | 26.8613 | 18.5664 | | 2.3481 | 2.42 | 95000 | 2.2802 | 27.4799 | 10.7826 | 23.4538 | 26.7912 | 18.5449 | | 2.3345 | 2.54 | 100000 | 2.2774 | 27.3182 | 10.724 | 23.3276 | 26.669 | 18.5908 | | 2.3254 | 2.67 | 105000 | 2.2713 | 27.3942 | 10.777 | 23.3918 | 26.7036 | 18.5681 | | 2.3369 | 
2.8 | 110000 | 2.2666 | 27.5976 | 10.9144 | 23.5832 | 26.9147 | 18.5471 | | 2.3269 | 2.93 | 115000 | 2.2653 | 27.5037 | 10.8442 | 23.4674 | 26.7997 | 18.5558 |
dc041e18b87b1bdf017724ebaf39e717
mit
['generated_from_trainer']
false
mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408 This model is a fine-tuned version of [alphahg/mbart-large-50-finetuned-en-to-ko-8603428](https://huggingface.co/alphahg/mbart-large-50-finetuned-en-to-ko-8603428) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8130
10135310f2ea412fde0e3804176f7744
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
bde2b2acb6e8713eb1fffaea8c02a7e4
apache-2.0
['generated_from_trainer']
false
wav2vec2-custom-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7785 - Wer: 0.3534
ed5b8f6c228b5131fc7df545be6c0df4
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP
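The `linear` scheduler with 1000 warmup steps ramps the learning rate from 0 up to the base rate, then decays it linearly back to 0. A minimal sketch of that schedule (the total of ~16500 steps is an assumption inferred from the results table below, i.e. 10 epochs at ~1650 steps each):

```python
def linear_schedule_lr(step: int, base_lr: float, warmup_steps: int, total_steps: int) -> float:
    # Linear warmup from 0 to base_lr over `warmup_steps`, then linear
    # decay back to 0 at `total_steps`.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500, 1e-4, 1000, 16500))  # 5e-05, halfway through warmup
```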
8f4833063396f34b65731729572dec56
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.4783 | 0.3 | 500 | 0.7199 | 0.5564 | | 0.4833 | 0.61 | 1000 | 0.8089 | 0.6181 | | 0.5733 | 0.91 | 1500 | 0.7617 | 0.5530 | | 0.4641 | 1.21 | 2000 | 0.7937 | 0.5731 | | 0.4167 | 1.52 | 2500 | 0.7993 | 0.5102 | | 0.3713 | 1.82 | 3000 | 0.7541 | 0.5437 | | 0.3395 | 2.12 | 3500 | 0.7658 | 0.5148 | | 0.2814 | 2.42 | 4000 | 0.7569 | 0.4783 | | 0.2698 | 2.73 | 4500 | 0.8126 | 0.5174 | | 0.2767 | 3.03 | 5000 | 0.7838 | 0.4676 | | 0.2249 | 3.33 | 5500 | 0.8769 | 0.4743 | | 0.2452 | 3.64 | 6000 | 0.8586 | 0.4778 | | 0.1828 | 3.94 | 6500 | 0.7695 | 0.4528 | | 0.1901 | 4.24 | 7000 | 0.7800 | 0.5021 | | 0.2062 | 4.55 | 7500 | 0.8107 | 0.4567 | | 0.1614 | 4.85 | 8000 | 0.7941 | 0.4094 | | 0.1327 | 5.15 | 8500 | 0.7900 | 0.4241 | | 0.1405 | 5.45 | 9000 | 0.8017 | 0.3992 | | 0.1219 | 5.76 | 9500 | 0.8099 | 0.4043 | | 0.1406 | 6.06 | 10000 | 0.8731 | 0.3913 | | 0.0806 | 6.36 | 10500 | 0.8387 | 0.3868 | | 0.1039 | 6.67 | 11000 | 0.8105 | 0.3905 | | 0.0967 | 6.97 | 11500 | 0.7291 | 0.3728 | | 0.0846 | 7.27 | 12000 | 0.8128 | 0.4201 | | 0.0722 | 7.58 | 12500 | 0.8204 | 0.3751 | | 0.0785 | 7.88 | 13000 | 0.7692 | 0.3760 | | 0.0647 | 8.18 | 13500 | 0.8294 | 0.3752 | | 0.0523 | 8.48 | 14000 | 0.7646 | 0.3763 | | 0.0623 | 8.79 | 14500 | 0.7773 | 0.3572 | | 0.0477 | 9.09 | 15000 | 0.7379 | 0.3635 | | 0.064 | 9.39 | 15500 | 0.7544 | 0.3538 | | 0.0321 | 9.7 | 16000 | 0.8118 | 0.3557 | | 0.0541 | 10.0 | 16500 | 0.7785 | 0.3534 |
31794c05f3d9616890eca5d8da7fd125
apache-2.0
['generated_from_trainer']
false
distilbart-cnn-12-6-finetuned-pubmed This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.9895 - Rouge1: 40.0985 - Rouge2: 16.5016 - Rougel: 24.8319 - Rougelsum: 36.0775 - Gen Len: 141.884
376203b6582325d9eef2337b686a7698
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 2.1709 | 1.0 | 4000 | 2.0257 | 38.1012 | 15.112 | 23.4064 | 33.9373 | 141.9195 | | 1.9495 | 2.0 | 8000 | 1.9593 | 39.529 | 16.1693 | 24.487 | 35.5238 | 141.9785 | | 1.756 | 3.0 | 12000 | 1.9488 | 39.9623 | 16.5799 | 24.949 | 35.9194 | 141.8855 | | 1.6032 | 4.0 | 16000 | 1.9732 | 39.672 | 16.1994 | 24.5996 | 35.7021 | 141.921 | | 1.4817 | 5.0 | 20000 | 1.9895 | 40.0985 | 16.5016 | 24.8319 | 36.0775 | 141.884 |
b74a2def45879f2301c6ab95a2a550ab
apache-2.0
['translation']
false
opus-mt-fj-en * source languages: fj * target languages: en * OPUS readme: [fj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.eval.txt)
e2d9c0409ba1e1500a1d9b82d689dbfe
apache-2.0
['translation']
false
opus-mt-srn-sv * source languages: srn * target languages: sv * OPUS readme: [srn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.eval.txt)
2e490abf4c8efbf2b45fe91d0c289526