license (string, lengths 2–30) | tags (string, lengths 2–513) | is_nc (bool, 1 class) | readme_section (string, lengths 201–597k) | hash (string, length 32) |
|---|---|---|---|---|
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'uk'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 3.0255 | 7.93 | 500 | 2.5514 | 0.9921 | 0.9047 | | 1.3809 | 15.86 | 1000 | 0.4065 | 0.5361 | 0.1201 | | 1.2355 | 23.8 | 1500 | 0.3474 | 0.4618 | 0.1033 | | 1.1956 | 31.74 | 2000 | 0.3617 | 0.4580 | 0.1005 | | 1.1416 | 39.67 | 2500 | 0.3182 | 0.4074 | 0.0891 | | 1.0996 | 47.61 | 3000 | 0.3166 | 0.3985 | 0.0875 | | 1.0427 | 55.55 | 3500 | 0.3116 | 0.3835 | 0.0828 | | 0.9961 | 63.49 | 4000 | 0.3137 | 0.3757 | 0.0807 | | 0.9575 | 71.42 | 4500 | 0.2992 | 0.3632 | 0.0771 | | 0.9154 | 79.36 | 5000 | 0.3015 | 0.3502 | 0.0740 | | 0.8994 | 87.3 | 5500 | 0.3004 | 0.3425 | 0.0723 | | 0.871 | 95.24 | 6000 | 0.3016 | 0.3394 | 0.0713 | | 016461f75c2a8c5abe75df0f2fae8a9a |
apache-2.0 | ['italian', 'sequence-to-sequence', 'efficient', 'newspaper', 'ilgiornale', 'repubblica', 'style-transfer'] | false | IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model. A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. | 99eb753cfe45e389971328e63adacb94 |
apache-2.0 | ['italian', 'sequence-to-sequence', 'efficient', 'newspaper', 'ilgiornale', 'repubblica', 'style-transfer'] | false | Using the model The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline r2g = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-repubblica-to-ilgiornale') r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ``` | 478e4885a9d7bea6f283da4be7bde3a2 |
apache-2.0 | ['italian', 'sequence-to-sequence', 'efficient', 'newspaper', 'ilgiornale', 'repubblica', 'style-transfer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 | 5632a909c1af83e446de729b7b4b32cf |
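As a supplement to the hyperparameter list above, here is a minimal, hypothetical sketch of how that configuration could be expressed with the 🤗 Transformers `Seq2SeqTrainingArguments`; the output directory is a placeholder, and the Adam settings shown are simply the library defaults that match the card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported configuration; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./it5-efficient-small-el32-repubblica-to-ilgiornale",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # Adam betas and epsilon are the Trainer defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```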
apache-2.0 | ['generated_from_trainer'] | false | testing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6644 - Accuracy: 0.6814 - F1: 0.8105 - Combined Score: 0.7459 | c2d8ecad31d6e3d54193150207bbbc69 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 | 3b21c45bda476026dd69758f6e2eac0e |
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Stable Diffusion v2-1 Model Card This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k steps with `punsafe=0.98`. - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt). - Use it with 🧨 [`diffusers`]( | 058b12795f5acc18d000233184cfcf38 |
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we swap it to DPMSolverMultistepScheduler): ```python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler model_id = "stabilityai/stable-diffusion-2-1" | 28e5ebc1c3a61b96047097cc5f75fd82 |
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Notes**: - Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance) - If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed) | 95d7c3c84ec5a20528b88d3862591959 |
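A small sketch combining the two notes above (memory-efficient attention and attention slicing) into one low-VRAM setup; whether you need either option depends on your GPU, so treat this as an optional variant rather than the card's reference code.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Optional: requires the xformers package to be installed.
pipe.enable_xformers_memory_efficient_attention()
# Optional: lowers VRAM usage at the cost of speed.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse_low_vram.png")
```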
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. | 9f6a5d412dced42154ee3af42b3ba302 |
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. - `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant | 38d70d8e42c1c36a6b640e31dc1d4146 |
openrail++ | ['stable-diffusion', 'text-to-image'] | false | Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints: Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. | 9e153f16b9e44051aaea0dea8bd67491 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-hun-53h-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6027 - Wer: 0.4618 | 6afe1b199ff03bbb52d787287b73a706 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 23 - mixed_precision_training: Native AMP | 561d647123e877a494437da6dce9efc1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 13.4225 | 0.67 | 100 | 3.7750 | 1.0 | | 3.4121 | 1.34 | 200 | 3.3166 | 1.0 | | 3.2263 | 2.01 | 300 | 3.1403 | 1.0 | | 3.0038 | 2.68 | 400 | 2.2474 | 0.9990 | | 1.2243 | 3.35 | 500 | 0.8174 | 0.7666 | | 0.6368 | 4.03 | 600 | 0.6306 | 0.6633 | | 0.4426 | 4.7 | 700 | 0.6151 | 0.6648 | | 0.3821 | 5.37 | 800 | 0.5765 | 0.6138 | | 0.3337 | 6.04 | 900 | 0.5522 | 0.5785 | | 0.2832 | 6.71 | 1000 | 0.5822 | 0.5691 | | 0.2485 | 7.38 | 1100 | 0.5626 | 0.5449 | | 0.2335 | 8.05 | 1200 | 0.5866 | 0.5662 | | 0.2031 | 8.72 | 1300 | 0.5574 | 0.5420 | | 0.1925 | 9.39 | 1400 | 0.5572 | 0.5297 | | 0.1793 | 10.07 | 1500 | 0.5878 | 0.5185 | | 0.1652 | 10.74 | 1600 | 0.6173 | 0.5243 | | 0.1663 | 11.41 | 1700 | 0.5807 | 0.5133 | | 0.1544 | 12.08 | 1800 | 0.5979 | 0.5154 | | 0.148 | 12.75 | 1900 | 0.5545 | 0.4986 | | 0.138 | 13.42 | 2000 | 0.5798 | 0.4947 | | 0.1353 | 14.09 | 2100 | 0.5670 | 0.5028 | | 0.1283 | 14.76 | 2200 | 0.5862 | 0.4957 | | 0.1271 | 15.43 | 2300 | 0.6009 | 0.4961 | | 0.1108 | 16.11 | 2400 | 0.5873 | 0.4975 | | 0.1182 | 16.78 | 2500 | 0.6013 | 0.4893 | | 0.103 | 17.45 | 2600 | 0.6165 | 0.4898 | | 0.1084 | 18.12 | 2700 | 0.6186 | 0.4838 | | 0.1014 | 18.79 | 2800 | 0.6122 | 0.4767 | | 0.1009 | 19.46 | 2900 | 0.5981 | 0.4793 | | 0.1004 | 20.13 | 3000 | 0.6034 | 0.4770 | | 0.0922 | 20.8 | 3100 | 0.6127 | 0.4663 | | 0.09 | 21.47 | 3200 | 0.5967 | 0.4672 | | 0.0893 | 22.15 | 3300 | 0.6051 | 0.4611 | | 0.0817 | 22.82 | 3400 | 0.6027 | 0.4618 | | e512e88a69953c586e0da0dd88b57bf3 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Tiny Greek This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_11_0 el dataset. It achieves the following results on the evaluation set: - Loss: 1.3444 - Wer: 231.8841 | 6d13f9c143ae06fe66c1fa7a6f1c7480 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2 - mixed_precision_training: Native AMP | bbe9a4d3b0fda7745cc25f10e85061b6 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.5 | 2 | 1.3444 | 231.8841 | | 463cab3f131f713dde51cfd7972cb290 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2302 - Accuracy: 0.922 - F1: 0.9218 | 762bebe588eac919bbe7570e9c6fed57 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3344 | 0.903 | 0.9004 | | No log | 2.0 | 500 | 0.2302 | 0.922 | 0.9218 | | d3ebff51a1e9b42c585f08d61c824b49 |
mit | [] | false | Manga style on Stable Diffusion This is the `<manga>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:         | 0a3f207c69218bdad19f0e2e4cb3da17 |
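In addition to the notebooks above, newer versions of 🧨 `diffusers` can load a Textual Inversion embedding directly; in this hedged sketch the base model and the concept repository id are placeholders, not details taken from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder base model
).to("cuda")

# Placeholder repo id: point this at the repository hosting the <manga> embedding.
pipe.load_textual_inversion("sd-concepts-library/manga")

image = pipe("a portrait of a samurai in the style of <manga>").images[0]
image.save("manga_samurai.png")
```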
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1000k'] | false | MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model | 663754d63b6cd00654e54be2103b789c |
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1000k'] | false | Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. | 7bb9acb62fb4ed85f9d266d85bd2e909 |
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1000k'] | false | How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1000k') model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1000k') model = BertModel.from_pretrained("google/multiberts-seed_2-step_1000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | f80eca6ffa47f99fc3e15239892e912c |
apache-2.0 | ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1000k'] | false | Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ``` | 685504012ab6c0dc9bf83b989734addc |
mit | ['generated_from_trainer'] | false | multi-minilm-finetuned-amazon-review This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.2436 - Accuracy: 0.5422 - F1: 0.5435 - Precision: 0.5452 - Recall: 0.5422 | ad2b1e5011275240cd8d438371d97e80 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP | 54bc3f54fafdad8c1881fa09ca151998 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0049 | 1.0 | 2500 | 1.0616 | 0.5352 | 0.5268 | 0.5347 | 0.5352 | | 0.9172 | 2.0 | 5000 | 1.0763 | 0.5432 | 0.5412 | 0.5444 | 0.5432 | | 0.8285 | 3.0 | 7500 | 1.1077 | 0.5408 | 0.5428 | 0.5494 | 0.5408 | | 0.7361 | 4.0 | 10000 | 1.1743 | 0.5342 | 0.5399 | 0.5531 | 0.5342 | | 0.6538 | 5.0 | 12500 | 1.2436 | 0.5422 | 0.5435 | 0.5452 | 0.5422 | | 76f3f59162319186f9de450204492790 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper large-v2 zh-tw This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 zh-TW dataset. It achieves the following results on the evaluation set: - Loss: 1.1603 - Wer: 40.3946 - Cer: 41.1041 | 44f0853f74fa5de40bad3947f24ce9cf |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP | 76375e219230dae04cdf8eb071a2c340 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 2.87 | 0.2 | 1000 | 3.0804 | 192.9556 | 192.6466 | | 2.6143 | 0.4 | 2000 | 2.4951 | 96.5525 | 96.6443 | | 1.863 | 0.6 | 3000 | 2.0882 | 69.3188 | 69.6395 | | 1.1665 | 1.14 | 4000 | 1.4647 | 50.5666 | 51.5850 | | 0.6674 | 1.34 | 5000 | 1.1603 | 40.3946 | 41.1041 | | 75f4059d4e46e6d5d654a5b795f45590 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Model Details Neural machine translation model for translating from Italic languages (itc) to Basque (eu). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-23 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): fra ita spa - Target Language(s): eus - Language Pair(s): spa-eus - Valid Target Language Labels: - **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT itc-eus README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eus/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/ | 7612b6b36e0418561ea414f5403d3bdf |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). | 5a4c21be213dacd549787f07770873e5 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Il est riche.", "¿Correcto?" ] model_name = "pytorch-models/opus-mt-tc-big-itc-eu" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) | 93ff36cbc074af3eeb383aaaa2bba7d4 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Zuzena? ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-itc-eu") print(pipe("Il est riche.")) | 2d988be64f63fab3e824c1f5e5479c27 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) | 0d33ec0cd09f9b53a21c48a666a1f05a |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | | 3e1abb231861236ef0c7b63982e58400 |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Citation Information * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` | 6dee266f23ec16e2c3d434ece0ed394b |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xlsr-53_toy_train_data_fast_10pct This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6983 - Wer: 0.5026 | 54a8b74be10e31d97dc85cde9645b54c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3619 | 1.05 | 250 | 3.4334 | 1.0 | | 3.0818 | 2.1 | 500 | 3.4914 | 1.0 | | 2.3245 | 3.15 | 750 | 1.6483 | 0.9486 | | 1.0233 | 4.2 | 1000 | 0.8817 | 0.7400 | | 0.7522 | 5.25 | 1250 | 0.7374 | 0.6529 | | 0.5343 | 6.3 | 1500 | 0.6972 | 0.6068 | | 0.4452 | 7.35 | 1750 | 0.6757 | 0.5740 | | 0.4275 | 8.4 | 2000 | 0.6789 | 0.5551 | | 0.3688 | 9.45 | 2250 | 0.6468 | 0.5394 | | 0.3363 | 10.5 | 2500 | 0.6798 | 0.5358 | | 0.3036 | 11.55 | 2750 | 0.6439 | 0.5265 | | 0.3173 | 12.6 | 3000 | 0.6898 | 0.5196 | | 0.2985 | 13.65 | 3250 | 0.6791 | 0.5169 | | 0.288 | 14.7 | 3500 | 0.6442 | 0.5090 | | 0.2673 | 15.75 | 3750 | 0.6984 | 0.5119 | | 0.2575 | 16.81 | 4000 | 0.7146 | 0.5084 | | 0.239 | 17.86 | 4250 | 0.6847 | 0.5040 | | 0.2266 | 18.91 | 4500 | 0.6900 | 0.5028 | | 0.22 | 19.96 | 4750 | 0.6983 | 0.5026 | | d60ecbf1048247cca5ebadcaa4164613 |
other | ['PyTorch'] | false | Made with the Diffusion GAN code https://github.com/Zhendong-Wang/Diffusion-GAN How to use (untested, so apologies if it doesn't work) - Set up the environment - A Linux machine with a recent NVIDIA GPU is recommended - Install PyTorch with CUDA support - https://pytorch.org/get-started/locally/ - conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia - Also install Git - sudo apt install git - Clone Diffusion-GAN from GitHub to your machine - git clone https://github.com/Zhendong-Wang/Diffusion-GAN - Open the diffusion-projected-gan folder - From "Files and versions" in this repository, click "best_model.pkl" to download it and save it inside diffusion-projected-gan - Generate images with the following command - python gen_images.py --outdir=out --seeds=0-10 --network=./best_model.pkl - If you get an error about a missing package, install it as needed - The generated images are saved in the out folder Please refrain from commercial use. | e70f1501bc1886ab920d93dad06c8dc5 |
apache-2.0 | ['generated_from_trainer'] | false | all-roberta-large-v1-meta-1-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4797 - Accuracy: 0.28 | 58a2eda4898f64be526b9b0dda21065f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 | | 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 | | 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 | | 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 | | 1.4612 | 5.0 | 5 | 2.4797 | 0.28 | | 9e2ff25e49a9f8d99dde92691919d23b |
creativeml-openrail-m | ['text-to-image'] | false | quino Dreambooth model trained by machinelearnear with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) using the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: artequino (use that in your prompt) | f3840c1945ec05b978f363253fcbd74a |
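A hedged inference sketch with `diffusers`; the repository id below is an assumption based on the model name and trainer, and the prompt simply demonstrates the `artequino` concept token.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repo id for this Dreambooth checkpoint (not confirmed by the card).
pipe = StableDiffusionPipeline.from_pretrained(
    "machinelearnear/quino", torch_dtype=torch.float16
).to("cuda")

# Use the trained concept token in the prompt, as noted above.
image = pipe("a watercolor portrait of artequino").images[0]
image.save("artequino.png")
```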
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | Cross English & German RoBERTa for Sentence Embeddings This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki/Cosine_similarity) to find sentences with a similar semantic meaning. For example, this can be useful for [semantic textual similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html), [semantic search](https://www.sbert.net/docs/usage/semantic_search.html), or [paraphrase mining](https://www.sbert.net/docs/usage/paraphrase_mining.html). To do this you have to use the [Sentence Transformers Python framework](https://github.com/UKPLab/sentence-transformers). The speciality of this model is that it also works cross-lingually. Regardless of the language, the sentences are translated into very similar vectors according to their semantics. This means that you can, for example, enter a search in German and find results according to the semantics in German and also in English. Using an XLM model and _multilingual finetuning with language-crossing_ we reach performance that even exceeds the best current dedicated English large model (see Evaluation section below). > Sentence-BERT (SBERT) is a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. Source: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) This model was fine-tuned by [Philip May](https://may.la/) and open-sourced by [T-Systems-onsite](https://www.t-systems-onsite.de/). Special thanks to [Nils Reimers](https://www.nils-reimers.de/) for your awesome open-source work, the Sentence Transformers, the models and your help on GitHub. | 650b8cf0dc0aa41fe373986ed6b32056 |
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | How to use To use this model install the `sentence-transformers` package (see here: <https://github.com/UKPLab/sentence-transformers>). ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer') ``` For details of usage and examples see here: - [Computing Sentence Embeddings](https://www.sbert.net/docs/usage/computing_sentence_embeddings.html) - [Semantic Textual Similarity](https://www.sbert.net/docs/usage/semantic_textual_similarity.html) - [Paraphrase Mining](https://www.sbert.net/docs/usage/paraphrase_mining.html) - [Semantic Search](https://www.sbert.net/docs/usage/semantic_search.html) - [Cross-Encoders](https://www.sbert.net/docs/usage/cross-encoder.html) - [Examples on GitHub](https://github.com/UKPLab/sentence-transformers/tree/master/examples) | 7154ec1ff11cfc3880febae515024dd6 |
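A short sketch of the cross-lingual comparison described above, using `sentence_transformers.util.cos_sim`; the example sentences are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("T-Systems-onsite/cross-en-de-roberta-sentence-transformer")

sentences = [
    "This is an example sentence.",
    "Das ist ein Beispielsatz.",       # German paraphrase of the first sentence
    "The weather is terrible today.",  # unrelated sentence
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarities between the first sentence and the other two
print(util.cos_sim(embeddings[0], embeddings[1:]))
```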
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | Training The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). This model has been further trained by [Nils Reimers](https://www.nils-reimers.de/) on a large scale paraphrase dataset for 50+ languages. [Nils Reimers](https://www.nils-reimers.de/) about this [on GitHub](https://github.com/UKPLab/sentence-transformers/issues/509 | fc5341f48407bc6269c4c3c38d4a438d |
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | issuecomment-712243280): >A paper is upcoming for the paraphrase models. > >These models were trained on various datasets with Millions of examples for paraphrases, mainly derived from Wikipedia edit logs, paraphrases mined from Wikipedia and SimpleWiki, paraphrases from news reports, AllNLI-entailment pairs with in-batch-negative loss etc. > >In internal tests, they perform much better than the NLI+STSb models as they have see more and broader type of training data. NLI+STSb has the issue that they are rather narrow in their domain and do not contain any domain specific words / sentences (like from chemistry, computer science, math etc.). The paraphrase models has seen plenty of sentences from various domains. > >More details with the setup, all the datasets, and a wider evaluation will follow soon. The resulting model called `xlm-r-distilroberta-base-paraphrase-v1` has been released here: <https://github.com/UKPLab/sentence-transformers/releases/tag/v0.3.8> Building on this cross-language model, we fine-tuned it for English and German on the [STSbenchmark](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) dataset. For German we used our [German STSbenchmark dataset](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark), which has been translated with [deepl.com](https://www.deepl.com/translator). In addition to the German and English training samples, we generated samples of English and German crossed. We call this _multilingual finetuning with language-crossing_. It doubled the training data size, and tests show that it further improves performance. We did an automatic hyperparameter search for 33 trials with [Optuna](https://github.com/optuna/optuna). Using 10-fold cross-validation on the deepl.com test and dev dataset we found the following best hyperparameters: - batch_size = 8 - num_epochs = 2 - lr = 1.026343323298136e-05 - eps = 4.462251033010287e-06 - weight_decay = 0.04794438776350409 - warmup_steps_proportion = 0.1609010732760181 The final model was trained with these hyperparameters on the combination of the train and dev datasets from English, German and their crossings. The test set was held out for testing. | e1237cff7cb63e83fe912aff58076eea |
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | Evaluation The evaluation has been done on English, German and both languages crossed with the STSbenchmark test data. The evaluation-code is available on [Colab](https://colab.research.google.com/drive/1gtGnKq_dYU_sDYqMohTYVMVpxMJjyH0M?usp=sharing). As the metric for evaluation we use the Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and STSbenchmark labels. | Model Name | Spearman<br/>German | Spearman<br/>English | Spearman<br/>EN-DE & DE-EN<br/>(cross) | |---------------------------------------------------------------|-------------------|--------------------|------------------| | xlm-r-distilroberta-base-paraphrase-v1 | 0.8079 | 0.8350 | 0.7983 | | [xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) | 0.7877 | 0.8465 | 0.7908 | | xlm-r-bert-base-nli-stsb-mean-tokens | 0.7877 | 0.8465 | 0.7908 | | [roberta-large-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/roberta-large-nli-stsb-mean-tokens) | 0.6371 | 0.8639 | 0.4109 | | [T-Systems-onsite/<br/>german-roberta-sentence-transformer-v2](https://huggingface.co/T-Systems-onsite/german-roberta-sentence-transformer-v2) | 0.8529 | 0.8634 | 0.8415 | | **T-Systems-onsite/<br/>cross-en-de-roberta-sentence-transformer** | **0.8550** | **0.8660** | **0.8525** | | f2ff1356441e1f416b5e5516fe2c68b4 |
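For illustration, the evaluation protocol described above (Spearman's rank correlation between cosine similarities and gold STS scores) can be sketched as follows; the sentence pairs and scores are made-up placeholders, not STSbenchmark data.

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("T-Systems-onsite/cross-en-de-roberta-sentence-transformer")

# Placeholder STS-style triples: (sentence 1, sentence 2, gold similarity score)
pairs = [
    ("A man is playing guitar.", "Ein Mann spielt Gitarre.", 5.0),
    ("A man is playing guitar.", "Eine Frau liest ein Buch.", 1.0),
    ("Children are playing in the park.", "Kinder spielen im Park.", 4.8),
]
emb1 = model.encode([p[0] for p in pairs], convert_to_tensor=True)
emb2 = model.encode([p[1] for p in pairs], convert_to_tensor=True)
cosine_scores = util.cos_sim(emb1, emb2).diagonal().cpu().tolist()

gold_scores = [p[2] for p in pairs]
print(spearmanr(cosine_scores, gold_scores).correlation)
```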
mit | ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] | false | License Copyright (c) 2020 Philip May, T-Systems on site services GmbH Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer/blob/main/LICENSE) in the repository. | b271202e195ba1dd44f3e9915638b6ef |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_add_GLUE_Experiment_mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6028 - Accuracy: 0.6961 - F1: 0.8171 - Combined Score: 0.7566 | 05d5ad5aae397b0bcfd3552850772287 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP | 5b1552610fffa9e59a4c266dd398e5a2 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6617 | 1.0 | 15 | 0.6507 | 0.6838 | 0.8122 | 0.7480 | | 0.6412 | 2.0 | 30 | 0.6290 | 0.6838 | 0.8122 | 0.7480 | | 0.6315 | 3.0 | 45 | 0.6252 | 0.6838 | 0.8122 | 0.7480 | | 0.6319 | 4.0 | 60 | 0.6236 | 0.6838 | 0.8122 | 0.7480 | | 0.6321 | 5.0 | 75 | 0.6225 | 0.6838 | 0.8122 | 0.7480 | | 0.616 | 6.0 | 90 | 0.6028 | 0.6961 | 0.8171 | 0.7566 | | 0.5469 | 7.0 | 105 | 0.6485 | 0.6446 | 0.7349 | 0.6898 | | 0.4436 | 8.0 | 120 | 0.7536 | 0.6838 | 0.7909 | 0.7374 | | 0.3794 | 9.0 | 135 | 0.7805 | 0.6961 | 0.7898 | 0.7430 | | 0.3158 | 10.0 | 150 | 0.8811 | 0.6838 | 0.7825 | 0.7331 | | 0.281 | 11.0 | 165 | 0.9246 | 0.6863 | 0.7881 | 0.7372 | | 2a02d5334e20bfadf7364c59ef3ed06a |
apache-2.0 | ['automatic-speech-recognition', 'fr'] | false | exp_w2v2t_fr_xls-r_s250 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 6fbce3ad387ae268d8916586cd8a8133 |
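A brief transcription sketch with the HuggingSound tool mentioned above; the repository id is inferred from the model name (treat it as an assumption) and the audio paths are placeholders.

```python
from huggingsound import SpeechRecognitionModel

# Repo id inferred from the model name above; adjust if the checkpoint lives elsewhere.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_xls-r_s250")

# Paths to 16 kHz French audio files (placeholders).
audio_paths = ["/path/to/sample1.wav", "/path/to/sample2.mp3"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```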
mit | ['generated_from_trainer'] | false | bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2 This model is a fine-tuned version of [tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2](https://huggingface.co/tbosse/bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2810 - Precision: 0.7874 - Recall: 0.7514 - F1: 0.7690 - Accuracy: 0.9147 | 276818c05ba690e8d96f31453b223bf0 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 | a044e0025e7a14c1c3768e83c5229cac |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 33 | 0.3078 | 0.7675 | 0.5943 | 0.6699 | 0.8842 | | No log | 2.0 | 66 | 0.2535 | 0.7729 | 0.7486 | 0.7605 | 0.9073 | | No log | 3.0 | 99 | 0.2417 | 0.7714 | 0.7714 | 0.7714 | 0.9119 | | No log | 4.0 | 132 | 0.2532 | 0.8031 | 0.7343 | 0.7672 | 0.9142 | | No log | 5.0 | 165 | 0.2675 | 0.7834 | 0.7543 | 0.7686 | 0.9142 | | No log | 6.0 | 198 | 0.2750 | 0.7870 | 0.76 | 0.7733 | 0.9159 | | No log | 7.0 | 231 | 0.2810 | 0.7874 | 0.7514 | 0.7690 | 0.9147 | | 012e1c8932d7b835fe95b09590e2d5cd |
apache-2.0 | ['generated_from_trainer'] | false | Training and evaluation data Training Data - Data Name: NIA13 ASIA - Num. of Samples: 9,634 - Audio Length: 9H 42M Evaluation Data - Data Name: NIA13 ASIA - Num. of Samples: 3,707 - Audio Length: 3H 37M Test Data - Data Name: NIA13 ASIA (Same as the Evaluation Data) - Num. of Samples: 3,707 - Audio Length: 3H 37M | 3e5176cbf22d939fe7e565b8a143f9ef |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Whisper Small Swedish -3000 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2974 - Wer: 19.6042 | 1f12c03c4559d0c17caf138f63bceb17 |
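A hedged usage sketch with the 🤗 Transformers ASR pipeline; the repository id and audio path are placeholders, since the card does not state the full model id.

```python
from transformers import pipeline

# Placeholder repo id for this fine-tuned Whisper Small checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-swedish-3000",
)

# Transcribe a Swedish audio file (placeholder path).
print(asr("sample_swedish.wav")["text"])
```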
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP | b92415ee1d34c2e13430a8176a41b855 |
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1448 | 1.29 | 1000 | 0.2953 | 21.4245 | | 0.0188 | 2.59 | 2000 | 0.2879 | 20.0882 | | 0.0233 | 3.88 | 3000 | 0.2974 | 19.6042 | | b5c2f34d309982558166a0af65dde22d |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_add_GLUE_Experiment_logit_kd_qqp_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.8027 - Accuracy: 0.7596 - F1: 0.6364 - Combined Score: 0.6980 | 042c6896e715837bc1a81d766757d5ee |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 1.2838 | 1.0 | 2843 | 1.2200 | 0.6318 | 0.0 | 0.3159 | | 1.0184 | 2.0 | 5686 | 0.8422 | 0.7473 | 0.5924 | 0.6698 | | 0.8633 | 3.0 | 8529 | 0.8232 | 0.7520 | 0.5963 | 0.6742 | | 0.834 | 4.0 | 11372 | 0.8193 | 0.7563 | 0.6271 | 0.6917 | | 0.812 | 5.0 | 14215 | 0.8027 | 0.7596 | 0.6364 | 0.6980 | | 0.7871 | 6.0 | 17058 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 7.0 | 19901 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 8.0 | 22744 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 9.0 | 25587 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 10.0 | 28430 | nan | 0.6318 | 0.0 | 0.3159 | | 732cc98b142d8c4dd9455d3142e70237 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4157 - F1: 0.8636 | b35c4bdc3c5f9304463916a3bd7c1b71 |
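A short NER inference sketch with the token-classification pipeline; the repository id is a placeholder (the card does not give the full id) and the example sentence is illustrative.

```python
from transformers import pipeline

# Placeholder repo id for this fine-tuned XLM-R PAN-X French checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Emmanuel Macron a visité Marseille en juillet."))
```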
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0847 | 1.0 | 191 | 0.4066 | 0.8524 | | 0.0574 | 2.0 | 382 | 0.4025 | 0.8570 | | 0.0333 | 3.0 | 573 | 0.4157 | 0.8636 | | 220b132c8d049d43ebbef59c74243ac3 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7818 - Matthews Correlation: 0.5492 | 3226b3be195374351546b01b8b6c5a5e |
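For reference, the Matthews correlation reported above can be computed from predictions and labels with scikit-learn; the values below are illustrative, not taken from this run.

```python
from sklearn.metrics import matthews_corrcoef

# Illustrative CoLA-style labels (1 = acceptable, 0 = unacceptable) and predictions.
labels      = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 1, 1, 0]

print(matthews_corrcoef(labels, predictions))
```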
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5257 | 1.0 | 535 | 0.5238 | 0.4004 | | 0.3516 | 2.0 | 1070 | 0.5173 | 0.5206 | | 0.2402 | 3.0 | 1605 | 0.5623 | 0.5301 | | 0.1871 | 4.0 | 2140 | 0.7421 | 0.5387 | | 0.1386 | 5.0 | 2675 | 0.7818 | 0.5492 | | aaa063f0bbd8beb1f6a7a52e32b82049 |
apache-2.0 | ['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_20k'] | false | MultiBERTs, Intermediate Checkpoint - Seed 3, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model | 74b4e27fdfd685c989c3a8f85a1603c4 |
apache-2.0 | ['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_20k'] | false | How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_20k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_20k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_20k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 3780bf43602f5e5a93591403e99125cb |
apache-2.0 | ['generated_from_trainer'] | false | flan-t5-large-extraction-cnndm_fs0.1-all This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6225 | 82bad0ea938930d154945bd9ee06535b |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 48 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | b79790c45192847a5883dbb75a444adb |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0798 | 0.11 | 200 | 1.7813 | | 1.8704 | 0.23 | 400 | 1.7363 | | 1.8398 | 0.34 | 600 | 1.7100 | | 1.8068 | 0.45 | 800 | 1.6951 | | 1.8013 | 0.56 | 1000 | 1.6851 | | 1.8008 | 0.68 | 1200 | 1.6769 | | 1.783 | 0.79 | 1400 | 1.6609 | | 1.7459 | 0.9 | 1600 | 1.6578 | | 1.7394 | 1.02 | 1800 | 1.6605 | | 1.7036 | 1.13 | 2000 | 1.6464 | | 1.705 | 1.24 | 2200 | 1.6442 | | 1.6903 | 1.36 | 2400 | 1.6505 | | 1.6864 | 1.47 | 2600 | 1.6394 | | 1.7005 | 1.58 | 2800 | 1.6349 | | 1.6858 | 1.69 | 3000 | 1.6380 | | 1.6722 | 1.81 | 3200 | 1.6343 | | 1.6512 | 1.92 | 3400 | 1.6319 | | 1.6717 | 2.03 | 3600 | 1.6336 | | 1.636 | 2.15 | 3800 | 1.6352 | | 1.643 | 2.26 | 4000 | 1.6225 | | 1.6308 | 2.37 | 4200 | 1.6227 | | 1.6115 | 2.48 | 4400 | 1.6278 | | 1.6342 | 2.6 | 4600 | 1.6249 | | 1.6301 | 2.71 | 4800 | 1.6320 | | 1.6164 | 2.82 | 5000 | 1.6302 | | ebcfc92b3ca1704dac967e0f1aecf348 |
bsd-3-clause | ['summarization'] | false | Citation ``` @misc{https://doi.org/10.48550/arxiv.2110.07166, doi = {10.48550/ARXIV.2110.07166}, url = {https://arxiv.org/abs/2110.07166}, author = {Choubey, Prafulla Kumar and Fabbri, Alexander R. and Vig, Jesse and Wu, Chien-Sheng and Liu, Wenhao and Rajani, Nazneen Fatema}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization}, publisher = {arXiv}, year = {2021}, copyright = {Creative Commons Attribution 4.0 International} } ``` | 06aee945e943c984ae202d96f09879b0 |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard'] | false | wav2vec2-large-xls-r-300m-as-g1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset. It achieves the following results on the evaluation set: - Loss: 1.3327 - Wer: 0.5744 | d3fd57468b27cd26384bbc3cd6522424 |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard'] | false | Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split: `python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs` 2. To evaluate on speech-recognition-community-v2/dev_data: not applicable, since Assamese isn't available in speech-recognition-community-v2/dev_data | d46760e8025c72c669ab0dca2369829c |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200 - mixed_precision_training: Native AMP | cdf6ed5b42764bc24933623cd90248f8 |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 14.1958 | 5.26 | 100 | 7.1919 | 1.0 | | 5.0035 | 10.51 | 200 | 3.9362 | 1.0 | | 3.6193 | 15.77 | 300 | 3.4451 | 1.0 | | 3.4852 | 21.05 | 400 | 3.3536 | 1.0 | | 2.8489 | 26.31 | 500 | 1.6451 | 0.9100 | | 0.9568 | 31.56 | 600 | 1.0514 | 0.7561 | | 0.4865 | 36.82 | 700 | 1.0434 | 0.7184 | | 0.322 | 42.1 | 800 | 1.0825 | 0.7210 | | 0.2383 | 47.36 | 900 | 1.1304 | 0.6897 | | 0.2136 | 52.62 | 1000 | 1.1150 | 0.6854 | | 0.179 | 57.87 | 1100 | 1.2453 | 0.6875 | | 0.1539 | 63.15 | 1200 | 1.2211 | 0.6704 | | 0.1303 | 68.41 | 1300 | 1.2859 | 0.6747 | | 0.1183 | 73.67 | 1400 | 1.2775 | 0.6721 | | 0.0994 | 78.92 | 1500 | 1.2321 | 0.6404 | | 0.0991 | 84.21 | 1600 | 1.2766 | 0.6524 | | 0.0887 | 89.46 | 1700 | 1.3026 | 0.6344 | | 0.0754 | 94.72 | 1800 | 1.3199 | 0.6704 | | 0.0693 | 99.97 | 1900 | 1.3044 | 0.6361 | | 0.0568 | 105.26 | 2000 | 1.3541 | 0.6254 | | 0.0536 | 110.51 | 2100 | 1.3320 | 0.6249 | | 0.0529 | 115.77 | 2200 | 1.3370 | 0.6271 | | 0.048 | 121.05 | 2300 | 1.2757 | 0.6031 | | 0.0419 | 126.31 | 2400 | 1.2661 | 0.6172 | | 0.0349 | 131.56 | 2500 | 1.2897 | 0.6048 | | 0.0309 | 136.82 | 2600 | 1.2688 | 0.5962 | | 0.0278 | 142.1 | 2700 | 1.2885 | 0.5954 | | 0.0254 | 147.36 | 2800 | 1.2988 | 0.5915 | | 0.0223 | 152.62 | 2900 | 1.3153 | 0.5941 | | 0.0216 | 157.87 | 3000 | 1.2936 | 0.5937 | | 0.0186 | 163.15 | 3100 | 1.2906 | 0.5877 | | 0.0156 | 168.41 | 3200 | 1.3476 | 0.5962 | | 0.0158 | 173.67 | 3300 | 1.3363 | 0.5847 | | 0.0142 | 178.92 | 3400 | 1.3367 | 0.5847 | | 0.0153 | 184.21 | 3500 | 1.3105 | 0.5757 | | 0.0119 | 189.46 | 3600 | 1.3255 | 0.5705 | | 0.0115 | 194.72 | 3700 | 1.3340 | 0.5787 | | 0.0103 | 199.97 | 3800 | 1.3327 | 0.5744 | | f7ddb135c74d693bc6d20dcb97f57aaf |
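For quick inference with the checkpoint evaluated above (model id taken from the evaluation command), a short sketch using the generic `automatic-speech-recognition` pipeline; input audio should be sampled at 16 kHz:

```python
from transformers import pipeline

# Model id as used in the eval command above; input audio is expected at 16 kHz.
asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1")
print(asr("sample_assamese.wav")["text"])
```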
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 | 95b9a005c4fd6e3d671b4a3bd4ddc0dc |
apache-2.0 | [] | false | PaddlePaddle/uie-medium Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. The unified text-to-structure generation framework, namely UIE, can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism (the structural schema instructor), and captures common IE abilities via a large-scale pre-trained text-to-structure model. Experiments show that UIE achieves state-of-the-art performance on 4 IE tasks and 13 datasets across all supervised, low-resource, and few-shot settings for a wide range of entity, relation, event and sentiment extraction tasks and their unification. These results verify the effectiveness, universality, and transferability of UIE. UIE Paper: https://arxiv.org/abs/2203.12277 PaddleNLP released a series of UIE models for information extraction from text and multi-modal documents; they use ERNIE 3.0 models as the pre-trained language models and were fine-tuned on a large amount of information-extraction data. | 57ed924116eee15fa50e94cf4f045ab2 |
apache-2.0 | [] | false | Available Models | Model Name | Usage Scenarios | Supporting Tasks | | :----------------------------------------------------------: | :--------------------------------------------------------- | :--------------------------------------------------- | | `uie-base`<br />`uie-medium`<br />`uie-mini`<br />`uie-micro`<br />`uie-nano` | An **extractive** model for **plain text** scenarios, supports **Chinese** | Supports entity, relation, event, opinion extraction | | `uie-base-en` | An **extractive** model for **plain text** scenarios, supports **English** | Supports entity, relation, event, opinion extraction | | `uie-m-base`<br />`uie-m-large` | An **extractive** model for **plain text** scenarios, supports **Chinese and English** | Supports entity, relation, event, opinion extraction | | <b>`uie-x-base`</b> | An **extractive** model for **plain text** and **document** scenarios, supports **Chinese and English** | Supports entity, relation, event, opinion extraction on both plain text and documents/pictures/tables | | e5074936d9a45e8a2a4e665e2b5be038 |
apache-2.0 | [] | false | Performance on Text Dataset We conducted experiments on in-house test sets from three different domains: finance, healthcare, and internet: <table> <tr><th rowspan='2'><th colspan='2'>finance<th colspan='2'>healthcare<th colspan='2'>internet <tr><th>0-shot<th>5-shot<th>0-shot<th>5-shot<th>0-shot<th>5-shot <tr><td>uie-base (12L768H)<td>46.43<td>70.92<td><b>71.83</b><td>85.72<td>78.33<td>81.86 <tr><td>uie-medium (6L768H)<td>41.11<td>64.53<td>65.40<td>75.72<td>78.32<td>79.68 <tr><td>uie-mini (6L384H)<td>37.04<td>64.65<td>60.50<td>78.36<td>72.09<td>76.38 <tr><td>uie-micro (4L384H)<td>37.53<td>62.11<td>57.04<td>75.92<td>66.00<td>70.22 <tr><td>uie-nano (4L312H)<td>38.94<td>66.83<td>48.29<td>76.74<td>62.86<td>72.35 <tr><td>uie-m-large (24L1024H)<td><b>49.35</b><td><b>74.55</b><td>70.50<td><b>92.66</b><td>78.49<td><b>83.02</b> <tr><td>uie-m-base (12L768H)<td>38.46<td>74.31<td>63.37<td>87.32<td>76.27<td>80.13 <tr><td>🧾🎓<b>uie-x-base (12L768H)</b><td>48.84<td>73.87<td>65.60<td>88.81<td><b>79.36</b><td>81.65 </table> 0-shot means that predictions are made directly through paddlenlp.Taskflow without any training data, and 5-shot means that each category provides 5 labeled examples for model fine-tuning. Experiments show that UIE can further improve its performance with only a small amount of data (few-shot). > Detailed Info: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/applications/information_extraction/README_en.md | 456a14f0795c3954a10b866e27117214 |
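The paragraph above mentions prediction through `paddlenlp.Taskflow`; here is a minimal sketch of that interface, assuming `paddlenlp` is installed and using a hypothetical extraction schema (the entity types are chosen by the caller):

```python
from paddlenlp import Taskflow

# Hypothetical schema: the entity types to extract ("time", "athlete", "event name").
schema = ["时间", "选手", "赛事名称"]
ie = Taskflow("information_extraction", schema=schema, model="uie-medium")

# Returns a list of dicts, one per input text, mapping each schema key to extracted spans.
print(ie("2月8日上午北京冬奥会自由式滑雪女子大跳台决赛中中国选手谷爱凌以188.25分获得金牌!"))
```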
mit | ['vision', 'image-to-text', 'image-captioning', 'visual-question-answering'] | false | BLIP-2, Flan T5-xl, fine-tuned on COCO BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. | d7c6361ddd19aabbd7dc6258ded3539c |
mit | ['vision', 'image-to-text', 'image-captioning', 'visual-question-answering'] | false | Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and the large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings; these embeddings bridge the gap between the embedding space of the image encoder and that of the large language model. The model's objective is simply to predict the next text token, given the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as a prompt to the model | 916040e942e44f51845fff4820f0c96b |
mit | ['vision', 'image-to-text', 'image-captioning', 'visual-question-answering'] | false | Intended uses & limitations You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. | 1170b21d5c398c39a09050bb199d8d1a |
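A captioning sketch for this checkpoint, assuming the repo id matches the card title (BLIP-2 with Flan T5-xl, fine-tuned on COCO); for VQA, prepend a `Question: ... Answer:` prompt via the `text` argument:

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Repo id assumed from the card title; substitute if the checkpoint lives elsewhere.
model_id = "Salesforce/blip2-flan-t5-xl-coco"
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Plain captioning: pass only the image; add text="Question: ... Answer:" for VQA.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```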
apache-2.0 | ['generated_from_keras_callback'] | false | TestZee/t5-small-finetuned-xum-test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9733 - Validation Loss: 2.6463 - Epoch: 0 | 324dcb50651e33e1f2470cc9b616e58e |
mit | [] | false | Sherhook Painting v2 on Stable Diffusion This is the `<sherhook>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:          | fdf073844472c80106d307642fb12070 |
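Besides the notebooks, the learned embedding can also be loaded directly with recent versions of 🤗 Diffusers; a sketch assuming a hypothetical concept repo id that hosts the `learned_embeds.bin` file (and a CUDA GPU):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical repo id for this concept; point it at wherever the learned_embeds.bin lives.
pipe.load_textual_inversion("sd-concepts-library/sherhook-painting-v2")

image = pipe("a lighthouse on a cliff in the style of <sherhook>").images[0]
image.save("sherhook_lighthouse.png")
```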
other | [] | false | This model was trained for toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC. The model was fine-tuned from [the CamemBERT language model](https://huggingface.co/camembert-base). The accuracy is 93% on the test split during training and 79% on a manually picked (and thus harder) sample of 200 sentences (100 label 1, 100 label 0) at the end of training. The model was fine-tuned on 32k sentences. The training data consisted of translations of the English data (around 30k sentences) from [the multilingual_detox dataset](https://github.com/s-nlp/multilingual_detox) by [Skolkovo Institute](https://huggingface.co/SkolkovoInstitute), produced with [the opus-mt-en-fr translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) by [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), plus the data from [the jigsaw dataset](https://www.kaggle.com/competitions/jigsaw-multilingual-toxic-comment-classification/data) on Kaggle. | 37c282df6596195b14e265ecec62712a |
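A minimal inference sketch, assuming a hypothetical repo id for this fine-tuned CamemBERT checkpoint (substitute the real one):

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual fine-tuned checkpoint.
toxicity = pipeline("text-classification", model="your-namespace/camembert-french-toxicity")

# Per the card: LABEL_1 = TOXIC, LABEL_0 = NOT TOXIC.
print(toxicity("Quelle belle journée !"))
```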
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0621 - Precision: 0.9357 - Recall: 0.9507 - F1: 0.9432 - Accuracy: 0.9865 | 6fe8a39a0a7095391e9b60b18a12cc02 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0861 | 1.0 | 1756 | 0.0695 | 0.9142 | 0.9293 | 0.9217 | 0.9811 | | 0.0341 | 2.0 | 3512 | 0.0632 | 0.9256 | 0.9478 | 0.9366 | 0.9856 | | 0.0178 | 3.0 | 5268 | 0.0621 | 0.9357 | 0.9507 | 0.9432 | 0.9865 | | 4fbeebd86abd46d6e0228d2874846a01 |
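A minimal inference sketch for the NER checkpoint above, assuming a hypothetical repo id; `aggregation_strategy="simple"` merges word pieces back into whole entities:

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="your-namespace/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entity spans
)
print(ner("Hugging Face Inc. is based in New York City."))
```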
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-eng This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5047 - Wer: 0.2233 | ca8e5fe2f6376532b0caef22792160fc |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP | 839e987969afbd6b51474211410cff80 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5485 | 1.0 | 500 | 1.9954 | 1.0042 | | 0.9068 | 2.01 | 1000 | 0.6418 | 0.4572 | | 0.4398 | 3.01 | 1500 | 0.4586 | 0.3629 | | 0.3023 | 4.02 | 2000 | 0.4464 | 0.3248 | | 0.2328 | 5.02 | 2500 | 0.4019 | 0.2969 | | 0.1899 | 6.02 | 3000 | 0.4363 | 0.2961 | | 0.163 | 7.03 | 3500 | 0.4832 | 0.2872 | | 0.1442 | 8.03 | 4000 | 0.4421 | 0.2801 | | 0.1246 | 9.04 | 4500 | 0.4757 | 0.2659 | | 0.1122 | 10.04 | 5000 | 0.4693 | 0.2648 | | 0.102 | 11.04 | 5500 | 0.4834 | 0.2549 | | 0.0919 | 12.05 | 6000 | 0.4558 | 0.2633 | | 0.0866 | 13.05 | 6500 | 0.4527 | 0.2641 | | 0.0762 | 14.06 | 7000 | 0.4394 | 0.2565 | | 0.0705 | 15.06 | 7500 | 0.5240 | 0.2609 | | 0.0647 | 16.06 | 8000 | 0.4980 | 0.2522 | | 0.0608 | 17.07 | 8500 | 0.5163 | 0.2589 | | 0.0576 | 18.07 | 9000 | 0.4991 | 0.2565 | | 0.0499 | 19.08 | 9500 | 0.4750 | 0.2457 | | 0.047 | 20.08 | 10000 | 0.5162 | 0.2447 | | 0.0418 | 21.08 | 10500 | 0.4801 | 0.2413 | | 0.0383 | 22.09 | 11000 | 0.4961 | 0.2394 | | 0.0342 | 23.09 | 11500 | 0.5209 | 0.2386 | | 0.032 | 24.1 | 12000 | 0.4970 | 0.2364 | | 0.0293 | 25.1 | 12500 | 0.4789 | 0.2309 | | 0.0265 | 26.1 | 13000 | 0.4948 | 0.2302 | | 0.0269 | 27.11 | 13500 | 0.4917 | 0.2249 | | 0.0237 | 28.11 | 14000 | 0.4991 | 0.2238 | | 0.022 | 29.12 | 14500 | 0.5047 | 0.2233 | | f0d6d5b61ecaa0db3770e2e7830e9a65 |
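To reproduce a WER figure like the 0.2233 reported above, the `evaluate` library's `wer` metric can be used; a sketch with toy transcripts (in practice, predictions come from the model and references from the dataset labels):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy transcripts standing in for real model outputs and reference labels.
predictions = ["the cat sat on the mat", "hello world"]
references = ["the cat sat on a mat", "hello world"]

# Word error rate; it can exceed 1.0 when there are many insertions.
print(wer_metric.compute(predictions=predictions, references=references))
```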
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-fine-tuned-on-clinc_oos-dataset This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 1.2811 - Accuracy Score: 0.9239 - F1 Score: 0.9213 | db53d3f5062a088c70aae70b1e3e7771 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 4e244064201925fa1d9aada6c0de922d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:| | 4.4271 | 1.0 | 239 | 3.5773 | 0.6116 | 0.5732 | | 3.0415 | 2.0 | 478 | 2.4076 | 0.8390 | 0.8241 | | 2.1182 | 3.0 | 717 | 1.7324 | 0.8994 | 0.8934 | | 1.5897 | 4.0 | 956 | 1.3863 | 0.9210 | 0.9171 | | 1.3458 | 5.0 | 1195 | 1.2811 | 0.9239 | 0.9213 | | 9f85bbe304f7290a2224506cc9721fe9 |
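A minimal inference sketch for the intent classifier above, assuming a hypothetical repo id (substitute the actual checkpoint location):

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual fine-tuned checkpoint.
intent = pipeline("text-classification", model="your-namespace/bert-base-uncased-fine-tuned-on-clinc_oos-dataset")
print(intent("How do I reset my online banking password?"))
```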
apache-2.0 | ['automatic-speech-recognition', 'de'] | false | exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s728 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 15901d00ec4ff8d78f168d5dc4a9f13a |
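A transcription sketch with the HuggingSound tool mentioned above; the repo namespace is assumed and should be replaced with the actual one, and input audio should be sampled at 16 kHz:

```python
from huggingsound import SpeechRecognitionModel

# Namespace assumed; the model name matches the card above. Audio files should be 16 kHz.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s728")
transcriptions = model.transcribe(["sample_1.wav", "sample_2.wav"])
print(transcriptions[0]["transcription"])
```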
cc0-1.0 | ['stable-diffusion', 'text-to-image'] | false | Samples I hope it gives you an idea of what kind of styles can be created with this model. <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_nature_1.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_nature_2.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_car_1.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_fish_1.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_fish_2.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_moon.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/tmp3vde80fz.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/tmpffxdfi38.png" width="256px"/> <img src="https://huggingface.co/Froddan/frost/resolve/main/tmpmiz28zo5.png" width="256px"/> | 0f10eec03bba6ef92c5bfb5bbf6f0768 |
cc0-1.0 | ['stable-diffusion', 'text-to-image'] | false | 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). | ba6c2504449d29214cc4a3e7332f10d8 |
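A minimal sketch of that usage, assuming the repo id implied by the sample-image URLs above and that "frostography" (seen in the sample filenames) is the trigger token; a CUDA GPU is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id assumed from the sample-image URLs above; "frostography" assumed as the trigger word.
pipe = StableDiffusionPipeline.from_pretrained("Froddan/frost", torch_dtype=torch.float16).to("cuda")

image = pipe("frostography, a red fox in a snowy pine forest, macro photo").images[0]
image.save("frost_fox.png")
```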
apache-2.0 | [] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 | b2bbbe7e07c2a61b6bc433e76bed6316 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | MultiBERTs Seed 4 Checkpoint 400k (uncased) This is the seed-4 intermediate checkpoint (400k steps) of the MultiBERTs (pretrained BERT) model, trained on English text using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | fa36dd579a82d66a2c32087f095cfb93 |
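A minimal fill-mask sketch for this intermediate checkpoint; the repo id below is hypothetical and should be replaced with the actual location of the 400k-step checkpoint:

```python
from transformers import pipeline

# Hypothetical checkpoint id; substitute the actual repo for the seed-4, 400k-step checkpoint.
unmasker = pipeline("fill-mask", model="multiberts-seed-4-400k")
print(unmasker("Paris is the [MASK] of France."))
```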