license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
apache-2.0 | [] | false | Text-to-Image generation with Stable Diffusion First, let's install the required libraries: ```bash pip install --upgrade diffusers transformers accelerate ``` We recommend using the model in [half-precision (`fp16`)](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) as it almost always gives the same results as full precision while being roughly twice as fast and requiring half the GPU RAM. ```python import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] ``` | 4952be294b015611ca95b214ec70988f |
apache-2.0 | [] | false | Running the model locally You can also simply download the model folder and pass the path to the local folder to the `StableDiffusionPipeline`. ``` git lfs install git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 ``` Assuming the folder is stored locally under `./stable-diffusion-v1-5`, you can run stable diffusion as follows: ```python pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5") pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] ``` If you are limited by GPU memory, you might want to consider chunking the attention computation in addition to using `fp16`. The following snippet should require less than 4 GB of VRAM. ```python pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" pipe.enable_attention_slicing() image = pipe(prompt).images[0] ``` If you wish to use a different scheduler (e.g. DDIM, LMS, PNDM/PLMS), you can configure it from the existing pipeline's scheduler and swap it in: ```python from diffusers import LMSDiscreteScheduler pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` If you want to run Stable Diffusion on CPU or you want to have maximum precision on GPU, please run the model in the default *full-precision* setting: ```python from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") | 0fe5dd1092961af5a13712b3109c1df8 |
apache-2.0 | [] | false | # disable the following line if you run on CPU pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` | 1705736f78ff32f886afabaabffbfda8 |
apache-2.0 | [] | false | JAX/Flax Diffusers offers a JAX/Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware, because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too. Running the pipeline with the default PNDMScheduler: ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", revision="flax", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) | 893c05aff0c358be193871ce2fa28781 |
apache-2.0 | [] | false | # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` **Note**: If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the "bf16" branch. ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) | 348bd9e9985ebb5d463f0537e85dc479 |
apache-2.0 | [] | false | # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` Diffusers also has an Image-to-Image generation pipeline with Flax/Jax ```python import jax import numpy as np import jax.numpy as jnp from flax.jax_utils import replicate from flax.training.common_utils import shard import requests from io import BytesIO from PIL import Image from diffusers import FlaxStableDiffusionImg2ImgPipeline def create_key(seed=0): return jax.random.PRNGKey(seed) rng = create_key(0) url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) init_img = Image.open(BytesIO(response.content)).convert("RGB") init_img = init_img.resize((768, 512)) prompts = "A fantasy landscape, trending on artstation" pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jnp.bfloat16, ) num_samples = jax.device_count() rng = jax.random.split(rng, jax.device_count()) prompt_ids, processed_image = pipeline.prepare_inputs(prompt=[prompts]*num_samples, image=[init_img]*num_samples) p_params = replicate(params) prompt_ids = shard(prompt_ids) processed_image = shard(processed_image) output = pipeline( prompt_ids=prompt_ids, image=processed_image, params=p_params, prng_seed=rng, strength=0.75, num_inference_steps=50, jit=True, height=512, width=768).images output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) ``` Diffusers also has a Text-guided inpainting pipeline with Flax/Jax ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard import PIL import requests from io import BytesIO from diffusers import FlaxStableDiffusionInpaintPipeline def download_image(url): response = requests.get(url) return PIL.Image.open(BytesIO(response.content)).convert("RGB") img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = download_image(img_url).resize((512, 512)) mask_image = download_image(mask_url).resize((512, 512)) pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained("xvjiarui/stable-diffusion-2-inpainting") prompt = "Face of a yellow cat, high resolution, sitting on a park bench" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] init_image = num_samples * [init_image] mask_image = num_samples * [mask_image] prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image) | 9fbcb761a921798850ee437a5449f71a |
apache-2.0 | [] | false | # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, jax.device_count()) prompt_ids = shard(prompt_ids) processed_masked_images = shard(processed_masked_images) processed_masks = shard(processed_masks) images = pipeline(prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` | e7662f0ae84d57d2f1814733d7778d4b |
apache-2.0 | [] | false | Image-to-Image text-guided generation with Stable Diffusion The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images. ```python import requests import torch from PIL import Image from io import BytesIO from diffusers import StableDiffusionImg2ImgPipeline | 29d3f2ef8666207295377d94458838df |
apache-2.0 | [] | false | # let's download an initial image url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) init_image = Image.open(BytesIO(response.content)).convert("RGB") init_image = init_image.resize((768, 512)) prompt = "A fantasy landscape, trending on artstation" # create the pipeline before calling it pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) pipe = pipe.to("cuda") images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images images[0].save("fantasy_landscape.png") ``` You can also run this example on colab: [Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | 053526adf925b697e74bfd7f90a76e96 |
apache-2.0 | [] | false | In-painting using Stable Diffusion The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt. ```python import PIL import requests import torch from io import BytesIO from diffusers import StableDiffusionInpaintPipeline def download_image(url): response = requests.get(url) return PIL.Image.open(BytesIO(response.content)).convert("RGB") img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" init_image = download_image(img_url).resize((512, 512)) mask_image = download_image(mask_url).resize((512, 512)) pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "Face of a yellow cat, high resolution, sitting on a park bench" image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ``` | a539abbadbfff71d648108626e91a386 |
apache-2.0 | [] | false | Tweak prompts reusing seeds and latents You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. Please have a look at [Reusing seeds for deterministic generation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/reusing_seeds). | 1b2d57b1e30640a3d0c48e9c6f77bd07 |
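Where the row above points to the seed-reuse docs, here is a minimal sketch of the idea with `diffusers` (the seed value and the prompt tweak are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# a fixed torch.Generator makes the latents, and hence the image, reproducible
generator = torch.Generator(device="cuda").manual_seed(1024)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, generator=generator).images[0]

# tweak the prompt while re-seeding identically to vary a result you liked
generator = torch.Generator(device="cuda").manual_seed(1024)
image_variant = pipe(prompt + ", oil painting", generator=generator).images[0]
```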
apache-2.0 | [] | false | Fine-Tuning Stable Diffusion Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`: - Textual Inversion. Captures novel concepts from a small set of sample images by learning new 'words' in the embedding space of the pipeline's text encoder; these special words can then be used within text prompts to achieve very fine-grained control of the resulting images. Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try it for yourself. - Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please refer to [our training example](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://huggingface.co/blog/dreambooth) for additional details and training recommendations. - Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokémon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justin Pinkney / Lambda Labs) and [a Japanese-specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/) and others). You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there. | be5a390b72f7670dcf324e14c129d82b |
apache-2.0 | [] | false | Stable Diffusion Community Pipelines The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation. Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! [Take a look](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/contribute_pipeline). | ca7431ec1285c07b50343babfc917560 |
apache-2.0 | [] | false | # save image image.save("ddpm_generated_image.png") ``` - [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256) - [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024) **Other Image Notebooks**: * [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) * [tweak images via repeated Stable Diffusion seeds](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) **Diffusers for Other Modalities**: * [Molecule conformation generation](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb) * [Model-based reinforcement learning](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb) | bb1ff2c22cac90cc21b745f39100533b |
apache-2.0 | [] | false | Web Demos If you just want to play around with some web demos, you can try out the following 🚀 Spaces: | Model | Hugging Face Spaces | |--------------------------------|---------------------| | Text-to-Image Latent Diffusion | [demo](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion) | | Faces generator | [demo](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion) | | DDPM with different schedulers | [demo](https://huggingface.co/spaces/fusing/celeba-diffusion) | | Conditional generation from sketch | [demo](https://huggingface.co/spaces/huggingface/diffuse-the-rest) | | Composable diffusion | [demo](https://huggingface.co/spaces/Shuang59/Composable-Diffusion) | | be61e7c06b85447ed7fa163a556e857a |
apache-2.0 | [] | false | Definitions **Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image. *Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet <p align="center"> <img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/> <br> <em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em> </p> **Schedulers**: Algorithm class for both **inference** and **training**. The class provides functionality to compute the previous image according to the alpha/beta schedule, as well as to predict noise for training. Also known as **Samplers**. *Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902) <p align="center"> <img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/> <br> <em> Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em> </p> **Diffusion Pipeline**: End-to-end pipeline that includes multiple diffusion models, possibly text encoders, ... *Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2 <p align="center"> <img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/> <br> <em> Figure from ImageGen (https://imagen.research.google/). </em> </p> | 51d16a22628ab92e07fd19ed815e14fa |
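To make the model/scheduler split concrete, here is a minimal denoising-loop sketch; the repo id `google/ddpm-cat-256` and its flat file layout are assumptions, and the loop runs the scheduler's full default 1000 DDPM steps:

```python
import torch
from diffusers import UNet2DModel, DDPMScheduler

# the model predicts the noise residual; the scheduler turns it into x_{t-1}
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")

# start from pure Gaussian noise with the model's expected shape
sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```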
apache-2.0 | [] | false | Philosophy - Readability and clarity are preferred over highly optimized code. Strong importance is placed on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper. - Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio. - Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation, and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion). | c79927a2b276c545de003197ecf8b13d |
apache-2.0 | [] | false | In the works For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on: - Diffusers for audio - Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105) - Diffusers for video generation - Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54) A few pipeline components are already being worked on, namely: - BDDMPipeline for spectrogram-to-sound vocoding - GLIDEPipeline to support OpenAI's GLIDE model - Grad-TTS for text-to-audio generation / conditional audio generation We want diffusers to be a toolbox useful for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see. | 6a8d54c1f0f654be81f7e572cbf21fa1 |
apache-2.0 | [] | false | Credits This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API would not be as polished as it is today: - @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion) - @hojonathanho's original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion), as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion) - @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim) - @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch) We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models), as well as @crowsonkb and @rromb for useful discussions and insights. | 4f892262a04baf160212906883ffcdf0 |
apache-2.0 | [] | false | Citation ```bibtex @misc{von-platen-etal-2022-diffusers, author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf}, title = {Diffusers: State-of-the-art diffusion models}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/diffusers}} } ``` | cd49c05899fd2948cce5a4febb4f8794 |
apache-2.0 | ['image-classification', 'vision', 'pytorch'] | false | Vision Transformer Fine Tuned on CIFAR10 Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) and **fine-tuned on CIFAR10** at resolution 224x224. Check out the code at [my GitHub repo](https://github.com/nateraw/huggingface-vit-finetune). | dd6ba69a308ad69627c92c5e0fb3d14d |
apache-2.0 | ['image-classification', 'vision', 'pytorch'] | false | Usage ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('nateraw/vit-base-patch16-224-cifar10') model = ViTForImageClassification.from_pretrained('nateraw/vit-base-patch16-224-cifar10') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) preds = outputs.logits.argmax(dim=1) classes = [ 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck' ] classes[preds[0]] ``` | 6b24e12df1c4607dd5eb08f242164e86 |
apache-2.0 | ['image-classification', 'vision', 'pytorch'] | false | Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. | fe942de90dfe654701a65dcf1a3e85be |
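As a sketch of the linear-probe idea described above (the backbone id matches the base checkpoint named in the card; the head size and the random stand-in input are illustrative):

```python
import torch
from torch import nn
from transformers import ViTModel

# pre-trained encoder without a classification head
backbone = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
head = nn.Linear(backbone.config.hidden_size, 10)  # e.g. the 10 CIFAR10 classes

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    outputs = backbone(pixel_values)

# the last hidden state of the [CLS] token represents the entire image
cls_state = outputs.last_hidden_state[:, 0]
logits = head(cls_state)
```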
apache-2.0 | ['generated_from_trainer'] | false | languagemodel This model is a fine-tuned version of [monideep2255/XLRS-torgo](https://huggingface.co/monideep2255/XLRS-torgo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 1.1173 | 8444063ec134962e0567b7db98b7924b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.3015 | 3.12 | 400 | inf | 1.3984 | | 0.6892 | 6.25 | 800 | inf | 1.1059 | | 0.5069 | 9.37 | 1200 | inf | 1.0300 | | 0.3596 | 12.5 | 1600 | inf | 1.0830 | | 0.2571 | 15.62 | 2000 | inf | 1.1981 | | 0.198 | 18.75 | 2400 | inf | 1.1009 | | 0.1523 | 21.87 | 2800 | inf | 1.1803 | | 0.1112 | 25.0 | 3200 | inf | 1.0429 | | 0.08 | 28.12 | 3600 | inf | 1.1173 | | 067511375d2e8b0e06af43345d6b4ddd |
apache-2.0 | ['translation'] | false | opus-mt-de-efi * source languages: de * target languages: efi * OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-efi/opus-2020-01-20.eval.txt) | 07102ba37248eef9c45d8ec8a250886f |
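A minimal usage sketch for this translation model, assuming the checkpoint follows the usual Helsinki-NLP hub naming (`Helsinki-NLP/opus-mt-de-efi`; the German input sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-efi"  # assumed hub id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a German sentence into Efik
batch = tokenizer(["Guten Morgen."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```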
apache-2.0 | [] | false | Model Description This **DAMO-YOLO-T** model is a tiny-size object detection model with fast inference speed and high accuracy, trained with **DAMO-YOLO**. DAMO-YOLO is a fast and accurate object detection method developed by the TinyML Team of the Alibaba DAMO Data Analytics and Intelligence Lab, and it achieves higher performance than the state-of-the-art YOLO series. DAMO-YOLO extends YOLO with several new techniques, including Neural Architecture Search (NAS) backbones, an efficient Reparameterized Generalized-FPN (RepGFPN), a lightweight head with AlignedOTA label assignment, and distillation enhancement. For more details, please refer to our [Arxiv Report](https://arxiv.org/abs/2211.15444) and [Github Code](https://github.com/tinyvision/DAMO-YOLO). Moreover, here you can find not only powerful models, but also highly efficient training strategies and complete tools from training to deployment. | 5203d9b34d36be3e2fb84831b9eb2be1 |
apache-2.0 | [] | false | Chinese Web Demo - We also provide Chinese Web Demo on ModelScope, including [DAMO-YOLO-T](https://www.modelscope.cn/models/damo/cv_tinynas_object-detection_damoyolo-t/summary), [DAMO-YOLO-S](https://modelscope.cn/models/damo/cv_tinynas_object-detection_damoyolo/summary), [DAMO-YOLO-M](https://www.modelscope.cn/models/damo/cv_tinynas_object-detection_damoyolo-m/summary). | e6a1264eb3896f1060fec616d828b5c7 |
apache-2.0 | [] | false | Model Evaluation |Model |size |mAP<sup>val<br>0.5:0.95 | Latency T4<br>TRT-FP16-BS1| FLOPs<br>(G)| Params<br>(M)| Download | | ------ |:---: | :---: |:---:|:---: | :---: | :---:| |[DAMO-YOLO-T](./configs/damoyolo_tinynasL20_T.py) | 640 | 41.8 | 2.78 | 18.1 | 8.5 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/before_distill/damoyolo_tinynasL20_T_418.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/before_distill/damoyolo_tinynasL20_T_418.onnx) | |[DAMO-YOLO-T*](./configs/damoyolo_tinynasL20_T.py) | 640 | 43.0 | 2.78 | 18.1 | 8.5 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/damoyolo_tinynasL20_T.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/damoyolo_tinynasL20_T.onnx) | |[DAMO-YOLO-S](./configs/damoyolo_tinynasL25_S.py) | 640 | 45.6 | 3.83 | 37.8 | 16.3 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/before_distill/damoyolo_tinynasL25_S_456.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/before_distill/damoyolo_tinynasL25_S_456.onnx) | |[DAMO-YOLO-S*](./configs/damoyolo_tinynasL25_S.py) | 640 | 46.8 | 3.83 | 37.8 | 16.3 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/damoyolo_tinynasL25_S.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/damoyolo_tinynasL25_S.onnx) | |[DAMO-YOLO-M](./configs/damoyolo_tinynasL35_M.py) | 640 | 48.7 | 5.62 | 61.8 | 28.2 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/before_distill/damoyolo_tinynasL35_M_487.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/before_distill/damoyolo_tinynasL35_M_487.onnx)| |[DAMO-YOLO-M*](./configs/damoyolo_tinynasL35_M.py) | 640 | 50.0 | 5.62 | 61.8 | 28.2 |[torch](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/clean_models/damoyolo_tinynasL35_M.pth),[onnx](https://idstcv.oss-cn-zhangjiakou.aliyuncs.com/DAMO-YOLO/onnx/damoyolo_tinynasL35_M.onnx)| - We report the mAP of models on COCO2017 validation set, with multi-class NMS. - The latency in this table is measured without post-processing. - \* denotes the model trained with distillation. | 8fee8e017d460aa527a6fe175a0dd812 |
apache-2.0 | [] | false | Cite DAMO-YOLO If you use DAMO-YOLO in your research, please cite our work with the following BibTeX entry: ```bibtex @article{damoyolo, title={DAMO-YOLO: A Report on Real-Time Object Detection Design}, author={Xu, Xianzhe and Jiang, Yiqi and Chen, Weihua and Huang, Yilun and Zhang, Yuan and Sun, Xiuyu}, journal={arXiv preprint arXiv:2211.15444v2}, year={2022}, } ``` | 9e357c3ac792cfa113ed097e4bf6b85b |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] | false | Environments - date: `Thu Nov 10 09:07:40 EST 2022` - python version: `3.8.6 (default, Dec 17 2020, 16:57:01) [GCC 10.2.0]` - espnet version: `espnet 202207` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `a7bd6522b32ec6472c13f6a2289dcdff4a846c12` - Commit date: `Wed Sep 14 08:34:27 2022 -0400` | 09968be21cd071f3d7f53e97d6647185 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] | false | asr_train_asr_hubert_transformer_adam_specaug_meld_raw_en_bpe850 - ASR config: conf/tuning/train_asr_hubert_transformer_adam_specaug_meld.yaml - token_type: bpe - keep_nbest_models: 5 |dataset|Snt|Emotion Classification (%)| |---|---|---| |decoder_asr_asr_model_valid.acc.ave_5best/test|2608|39.22| |decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|42.64| | b4c84c4957ecfd99c4ac7798db541037 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decoder_asr_asr_model_valid.acc.ave_5best/test|2608|24809|55.5|28.0|16.5|8.4|52.9|96.5| |decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|10171|55.3|29.4|15.3|7.0|51.7|96.2| | e93c25df02fa06d78f761e5665a30370 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decoder_asr_asr_model_valid.acc.ave_5best/test|2608|120780|71.1|10.7|18.2|10.6|39.5|96.5| |decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|49323|71.3|11.1|17.6|9.4|38.1|96.2| | 2cf0e04ff90a18cabc2c5645e93fa5f7 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] | false | TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decoder_asr_asr_model_valid.acc.ave_5best/test|2608|35287|57.6|21.8|20.5|7.8|50.2|96.5| |decoder_asr_asr_model_valid.acc.ave_5best/valid|1104|14430|57.4|23.2|19.4|6.1|48.6|96.2| | 1a2770bd57c054eac47da1b6c6621bc8 |
apache-2.0 | ['gpt2', 'turkish'] | false | Model description This is a GPT2-Small English-based model, fine-tuned and additionally trained on Turkish Wikipedia articles as of 28-10-2020. Live demo based on this work: https://www.metayazar.com/ Fine-tuned writer on this model: https://huggingface.co/gorkemgoknar/gpt2-turkish-writer This work is based on Pierre Guillou's tutorial (https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb). The code was converted to work with fastai 2.x, using Google Colab for training. An additional tutorial and source will be available at https://github.com/gorkemgoknar at a later stage. Current accuracy: 33%, perplexity: 51.88. Available models: * [gpt2-small-tuned-tr](https://huggingface.co/gorkemgoknar/gpt2-small-turkish) * [gpt2-small-turkish-writer](https://huggingface.co/gorkemgoknar/gpt2-turkish-writer) | 2734588d95b77c31ddc1f3287f68f871 |
apache-2.0 | ['gpt2', 'turkish'] | false | Install ```python from transformers import AutoTokenizer, AutoModelWithLMHead import torch tokenizer = AutoTokenizer.from_pretrained("gorkemgoknar/gpt2-small-turkish") model = AutoModelWithLMHead.from_pretrained("gorkemgoknar/gpt2-small-turkish") | 3740aa69408d3c6819fec0d6b5c119ba |
apache-2.0 | ['gpt2', 'turkish'] | false | # prepare an example input (prompt text is illustrative) inputs = tokenizer("Merhaba", return_tensors="pt") # model output outputs = model(**inputs, labels=inputs["input_ids"]) loss, logits = outputs[:2] predicted_index = torch.argmax(logits[0, -1, :]).item() predicted_text = tokenizer.decode([predicted_index]) | 4ea57bed5fb4c0cb4f8ed11c4e636136 |
apache-2.0 | ['gpt2', 'turkish'] | false | # model output using the Top-k sampling text generation method sample_outputs = model.generate(inputs.input_ids, pad_token_id=50256, do_sample=True, max_length=50, top_k=50, num_return_sequences=1) # top_k value completes the truncated call and is illustrative print(tokenizer.decode(sample_outputs[0], skip_special_tokens=True)) | dd3adf66704c68323aeaa9af78757b2e |
apache-2.0 | ['gpt2', 'turkish'] | false | Eval results | epoch | train_loss | valid_loss | accuracy | perplexity | time | | ----- | -------- | --------- | ---------- | --------- | ----- | | 0 | 4.777015 | 4.621834 | 0.292547 | 101.680367 | 2:42:05 | | 1 | 4.509412 | 4.403999 | 0.305574 | 81.777267 | 1:09:38 | | 2 | 4.169529 | 4.120755 | 0.324908 | 61.605747 | 1:07:45 | | 3 | 4.293973 | 4.177899 | 0.317211 | 65.228653 | 1:07:02 | | 4 | 4.049848 | 3.949103 | 0.338347 | 51.888783 | 1:05:53 | | aeeae3bcf4be9b280af94b096e0741e0 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-cased_fine_tuned_food_ner This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6129 - Precision: 0.9080 - Recall: 0.9328 - F1: 0.9203 - Accuracy: 0.9095 | 2ba9d5995f924224db631455d2101326 |
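A minimal inference sketch for this NER model; the bare model name below is a placeholder for the full hub id (the namespace is not given in the card), and the sample sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="distilbert-base-cased_fine_tuned_food_ner",  # placeholder for the full hub id
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Add two cups of flour and a pinch of salt."))
```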
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 | 4fd99819e6e9f4d0a1ef4c7334c24c6d |
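For reference, a sketch of how these hyperparameters map onto `transformers.TrainingArguments` (the Adam betas/epsilon listed above are the library defaults; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-cased_fine_tuned_food_ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```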
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 40 | 1.2541 | 0.7806 | 0.7299 | 0.7544 | 0.6782 | | No log | 2.0 | 80 | 0.7404 | 0.8301 | 0.8657 | 0.8475 | 0.8047 | | No log | 3.0 | 120 | 0.5886 | 0.8416 | 0.8900 | 0.8651 | 0.8507 | | No log | 4.0 | 160 | 0.5094 | 0.8772 | 0.9122 | 0.8944 | 0.8727 | | No log | 5.0 | 200 | 0.4724 | 0.8727 | 0.9159 | 0.8938 | 0.8863 | | No log | 6.0 | 240 | 0.4471 | 0.8975 | 0.9240 | 0.9105 | 0.8960 | | No log | 7.0 | 280 | 0.4446 | 0.9028 | 0.9255 | 0.9140 | 0.9006 | | No log | 8.0 | 320 | 0.4437 | 0.9042 | 0.9336 | 0.9187 | 0.9032 | | No log | 9.0 | 360 | 0.4582 | 0.9144 | 0.9299 | 0.9221 | 0.9074 | | No log | 10.0 | 400 | 0.4525 | 0.9080 | 0.9328 | 0.9203 | 0.9066 | | No log | 11.0 | 440 | 0.4650 | 0.9076 | 0.9351 | 0.9211 | 0.9032 | | No log | 12.0 | 480 | 0.4725 | 0.9119 | 0.9395 | 0.9255 | 0.9095 | | 0.406 | 13.0 | 520 | 0.4862 | 0.9161 | 0.9343 | 0.9251 | 0.9095 | | 0.406 | 14.0 | 560 | 0.4735 | 0.9214 | 0.9424 | 0.9318 | 0.9154 | | 0.406 | 15.0 | 600 | 0.4973 | 0.9085 | 0.9380 | 0.9230 | 0.9095 | | 0.406 | 16.0 | 640 | 0.5075 | 0.9026 | 0.9373 | 0.9196 | 0.9099 | | 0.406 | 17.0 | 680 | 0.5057 | 0.9124 | 0.9380 | 0.9250 | 0.9121 | | 0.406 | 18.0 | 720 | 0.5179 | 0.9098 | 0.9380 | 0.9237 | 0.9129 | | 0.406 | 19.0 | 760 | 0.5156 | 0.9111 | 0.9380 | 0.9244 | 0.9121 | | 0.406 | 20.0 | 800 | 0.5325 | 0.9077 | 0.9358 | 0.9215 | 0.9099 | | 0.406 | 21.0 | 840 | 0.5350 | 0.9203 | 0.9373 | 0.9287 | 0.9137 | | 0.406 | 22.0 | 880 | 0.5405 | 0.9077 | 0.9365 | 0.9219 | 0.9108 | | 0.406 | 23.0 | 920 | 0.5682 | 0.9107 | 0.9336 | 0.9220 | 0.9066 | | 0.406 | 24.0 | 960 | 0.5545 | 0.9109 | 0.9351 | 0.9228 | 0.9095 | | 0.0303 | 25.0 | 1000 | 0.5717 | 0.9044 | 0.9351 | 0.9194 | 0.9049 | | 0.0303 | 26.0 | 1040 | 0.5637 | 0.9101 | 0.9343 | 0.9221 | 0.9108 | | 0.0303 | 27.0 | 1080 | 0.5736 | 0.9102 | 0.9351 | 0.9225 | 0.9104 | | 0.0303 | 28.0 | 1120 | 0.5793 | 0.9027 | 0.9380 | 0.9200 | 0.9074 | | 0.0303 | 29.0 | 1160 | 0.5753 | 0.9137 | 0.9380 | 0.9257 | 0.9112 | | 0.0303 | 30.0 | 1200 | 0.5804 | 0.9111 | 0.9380 | 0.9244 | 0.9108 | | 0.0303 | 31.0 | 1240 | 0.5877 | 0.9123 | 0.9365 | 0.9243 | 0.9099 | | 0.0303 | 32.0 | 1280 | 0.5837 | 0.9116 | 0.9358 | 0.9235 | 0.9087 | | 0.0303 | 33.0 | 1320 | 0.5886 | 0.9113 | 0.9402 | 0.9255 | 0.9108 | | 0.0303 | 34.0 | 1360 | 0.5847 | 0.9145 | 0.9387 | 0.9264 | 0.9121 | | 0.0303 | 35.0 | 1400 | 0.5981 | 0.9083 | 0.9358 | 0.9218 | 0.9082 | | 0.0303 | 36.0 | 1440 | 0.5963 | 0.9056 | 0.9343 | 0.9197 | 0.9095 | | 0.0303 | 37.0 | 1480 | 0.6027 | 0.9101 | 0.9343 | 0.9221 | 0.9104 | | 0.0086 | 38.0 | 1520 | 0.6003 | 0.9102 | 0.9351 | 0.9225 | 0.9099 | | 0.0086 | 39.0 | 1560 | 0.5958 | 0.9082 | 0.9343 | 0.9211 | 0.9095 | | 0.0086 | 40.0 | 1600 | 0.6054 | 0.9059 | 0.9306 | 0.9181 | 0.9091 | | 0.0086 | 41.0 | 1640 | 0.6056 | 0.9075 | 0.9343 | 0.9207 | 0.9112 | | 0.0086 | 42.0 | 1680 | 0.6029 | 0.9080 | 0.9321 | 0.9199 | 0.9091 | | 0.0086 | 43.0 | 1720 | 0.6027 | 0.9109 | 0.9351 | 0.9228 | 0.9104 | | 0.0086 | 44.0 | 1760 | 0.6071 | 0.9075 | 0.9336 | 0.9203 | 0.9099 | | 0.0086 | 45.0 | 1800 | 0.6100 | 0.9102 | 0.9351 | 0.9225 | 0.9095 | | 0.0086 | 46.0 | 1840 | 0.6106 | 0.9102 | 0.9351 | 0.9225 | 0.9104 | | 0.0086 | 47.0 | 1880 | 0.6132 | 0.9101 | 0.9343 | 0.9221 | 0.9091 | | 0.0086 | 48.0 | 1920 | 0.6134 | 0.9095 | 0.9343 | 0.9217 | 0.9095 | | 0.0086 | 49.0 | 1960 | 0.6129 | 0.9080 | 0.9328 | 0.9203 | 0.9095 | | 0.005 | 50.0 | 2000 | 0.6129 | 0.9080 | 0.9328 | 0.9203 | 0.9095 | | 379d6dc038491f792c7d5ad20f26420c |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-demo-F04-2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3203 - Wer: 0.5353 | 1490f30888a976112fcb09786c27e681 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 | b55a96915f85cd153b5b2c0805e9675e |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 23.5576 | 0.89 | 500 | 3.3654 | 1.0 | | 3.3953 | 1.79 | 1000 | 3.1729 | 1.0 | | 2.9514 | 2.68 | 1500 | 2.8946 | 1.0 | | 2.84 | 3.57 | 2000 | 2.8386 | 1.0 | | 2.7685 | 4.46 | 2500 | 2.7147 | 1.0 | | 2.5059 | 5.36 | 3000 | 2.1341 | 1.1752 | | 1.8907 | 6.25 | 3500 | 1.3604 | 1.2403 | | 1.3892 | 7.14 | 4000 | 0.8814 | 1.1989 | | 1.0754 | 8.04 | 4500 | 0.6416 | 1.0529 | | 0.8795 | 8.93 | 5000 | 0.5760 | 0.9641 | | 0.7478 | 9.82 | 5500 | 0.4633 | 0.8790 | | 0.6107 | 10.71 | 6000 | 0.3921 | 0.8394 | | 0.5445 | 11.61 | 6500 | 0.3579 | 0.7987 | | 0.4788 | 12.5 | 7000 | 0.3034 | 0.7470 | | 0.4435 | 13.39 | 7500 | 0.2989 | 0.7311 | | 0.4057 | 14.29 | 8000 | 0.3366 | 0.7092 | | 0.3606 | 15.18 | 8500 | 0.2783 | 0.6892 | | 0.343 | 16.07 | 9000 | 0.2593 | 0.6612 | | 0.3189 | 16.96 | 9500 | 0.2780 | 0.6460 | | 0.277 | 17.86 | 10000 | 0.3266 | 0.6277 | | 0.2789 | 18.75 | 10500 | 0.3582 | 0.6253 | | 0.2552 | 19.64 | 11000 | 0.3422 | 0.6156 | | 0.2416 | 20.54 | 11500 | 0.3387 | 0.6016 | | 0.2187 | 21.43 | 12000 | 0.3657 | 0.5845 | | 0.2317 | 22.32 | 12500 | 0.2932 | 0.5845 | | 0.2091 | 23.21 | 13000 | 0.2551 | 0.5614 | | 0.199 | 24.11 | 13500 | 0.3113 | 0.5474 | | 0.1777 | 25.0 | 14000 | 0.2895 | 0.5572 | | 0.1823 | 25.89 | 14500 | 0.3127 | 0.5456 | | 0.179 | 26.79 | 15000 | 0.2945 | 0.5438 | | 0.1596 | 27.68 | 15500 | 0.3052 | 0.5322 | | 0.1671 | 28.57 | 16000 | 0.3119 | 0.5365 | | 0.1564 | 29.46 | 16500 | 0.3203 | 0.5353 | | df9b1cc8cfeba2633563da0dd9944a38 |
apache-2.0 | [] | false | Source A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual. The original model is at [https://github.com/PrithivirajDamodaran/Styleformer](https://github.com/PrithivirajDamodaran/Styleformer).  | 1e241a7b69e17137c64edfcf28b14c3f |
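A minimal usage sketch based on the Styleformer repository linked above; the `style=0` casual-to-formal flag follows that repo's README and is an assumption here:

```python
from styleformer import Styleformer

sf = Styleformer(style=0)  # 0 = casual-to-formal, per the repo's README
print(sf.transfer("i am quitting my job"))
```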
apache-2.0 | [] | false | Examples: ``` [Casual] I am quitting my job [Formal] I will be stepping down from my job. ---------------------------------------------------------------------------------------------------- [Casual] Jimmy is on crack and can't trust him [Formal] Jimmy is a crack addict I cannot trust him ---------------------------------------------------------------------------------------------------- [Casual] What do guys do to show that they like a gal? [Formal] What do guys do to demonstrate their affinity for women? ---------------------------------------------------------------------------------------------------- [Casual] i loooooooooooooooooooooooove going to the movies. [Formal] I really like to go to the movies. ``` | 08d792b99b9d1277aeb7fb981bdb3f6c |
apache-2.0 | [] | false | References - [Formality Style Transfer for Noisy Text: Leveraging Out-of-Domain Parallel Data for In-Domain Training via POS Masking](https://www.aclweb.org/anthology/D19-5502.pdf) - [Generative Text Style Transfer for Improved Language Sophistication](http://cs230.stanford.edu/projects_winter_2020/reports/32069807.pdf) - [Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer](https://arxiv.org/pdf/1804.06437.pdf) | 03cade0c03a558f8f3a8101075aa75be |
apache-2.0 | ['generated_from_trainer', 'he', 'robust-speech-event'] | false | wav2vec2-xls-r-300m-lm-hebrew This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset, with n-gram language models added following [Boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram) | 862d6f750e1a36291136d2aa1fdecfa3 |
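A minimal decoding sketch following the linked blog post; the bare model name is a placeholder for the full hub id, and the silent array stands in for real 16 kHz speech:

```python
import numpy as np
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "wav2vec2-xls-r-300m-lm-hebrew"  # placeholder for the full hub id
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

audio = np.zeros(16_000, dtype=np.float32)  # stand-in for 1 s of 16 kHz audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the LM-aware processor runs pyctcdecode with the bundled n-gram model
print(processor.batch_decode(logits.numpy()).text)
```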
apache-2.0 | ['generated_from_trainer', 'he', 'robust-speech-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP | 06b1d33edf23d60418f2234fca09fe0d |
mit | ['MedicalNet', 'medical images', 'medical', '3D', 'Med3D'] | false | MedicalNet This repository contains a PyTorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625). Many studies have shown that deep learning performance is significantly affected by the volume of training data. The MedicalNet project aggregated data with diverse modalities, target organs, and pathologies to build relatively large datasets. Based on this dataset, a series of 3D-ResNet pre-trained models and the corresponding transfer-learning training code are provided. | 7f6a0264b49ec9f9064cca23be8ebfc8 |
mit | ['MedicalNet', 'medical images', 'medical', '3D', 'Med3D'] | false | Citing MedicalNet If you use this code or pre-trained models, please cite the following: ``` @article{chen2019med3d, title={Med3D: Transfer Learning for 3D Medical Image Analysis}, author={Chen, Sihong and Ma, Kai and Zheng, Yefeng}, journal={arXiv preprint arXiv:1904.00625}, year={2019} } ``` | 396bd18bcfd3b5704ec445757903890e |
mit | ['MedicalNet', 'medical images', 'medical', '3D', 'Med3D'] | false | Update (2019/07/30) We uploaded 4 pre-trained models based on more datasets (23 datasets). ``` Model name : parameter settings resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B ``` Hugging Face repository contribution by: [Rafael Zimmer](https://www.github.com/rzimmerdev) | 3d0540423f3739476f88c615e465c864 |
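A sketch of loading one of these checkpoints in PyTorch; the `state_dict` key and the `module.` prefix (left over from `DataParallel` training) are assumptions based on the MedicalNet repository:

```python
import torch

checkpoint = torch.load("resnet_50_23dataset.pth", map_location="cpu")
state_dict = checkpoint["state_dict"]  # assumed key used by the MedicalNet checkpoints

# strip the "module." prefix that DataParallel training leaves behind (assumed)
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

# net = <a matching MedicalNet 3D-ResNet-50, built with --resnet_shortcut B>
# net.load_state_dict(state_dict, strict=False)
```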
apache-2.0 | ['generated_from_trainer'] | false | hubert-base-libri-clean-ft100h This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.1324 - Wer: 0.1597 | 7e150373c98053331b4dd5599716dc67 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP | 099824d892cd8adbcd360fb14cca012b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.14 | 250 | 4.1508 | 1.0000 | | 4.4345 | 0.28 | 500 | 3.8766 | 1.0000 | | 4.4345 | 0.42 | 750 | 3.4376 | 1.0000 | | 2.8475 | 0.56 | 1000 | 2.7380 | 1.0 | | 2.8475 | 0.7 | 1250 | 0.8803 | 0.6766 | | 1.1877 | 0.84 | 1500 | 0.5671 | 0.5102 | | 1.1877 | 0.98 | 1750 | 0.4537 | 0.4388 | | 0.5802 | 1.12 | 2000 | 0.3566 | 0.3740 | | 0.5802 | 1.26 | 2250 | 0.2925 | 0.3209 | | 0.4301 | 1.4 | 2500 | 0.2613 | 0.2952 | | 0.4301 | 1.54 | 2750 | 0.2363 | 0.2715 | | 0.3591 | 1.68 | 3000 | 0.2155 | 0.2552 | | 0.3591 | 1.82 | 3250 | 0.2062 | 0.2418 | | 0.3015 | 1.96 | 3500 | 0.1951 | 0.2308 | | 0.3015 | 2.1 | 3750 | 0.1842 | 0.2207 | | 0.2698 | 2.24 | 4000 | 0.1900 | 0.2112 | | 0.2698 | 2.38 | 4250 | 0.1745 | 0.2048 | | 0.2561 | 2.52 | 4500 | 0.1718 | 0.2040 | | 0.2561 | 2.66 | 4750 | 0.1625 | 0.1939 | | 0.2348 | 2.8 | 5000 | 0.1568 | 0.1867 | | 0.2348 | 2.94 | 5250 | 0.1517 | 0.1855 | | 0.2278 | 3.08 | 5500 | 0.1501 | 0.1807 | | 0.2278 | 3.22 | 5750 | 0.1445 | 0.1772 | | 0.2166 | 3.36 | 6000 | 0.1422 | 0.1752 | | 0.2166 | 3.5 | 6250 | 0.1418 | 0.1741 | | 0.2017 | 3.64 | 6500 | 0.1404 | 0.1695 | | 0.2017 | 3.78 | 6750 | 0.1356 | 0.1674 | | 0.1922 | 3.92 | 7000 | 0.1350 | 0.1688 | | 0.1922 | 4.06 | 7250 | 0.1346 | 0.1638 | | 0.1979 | 4.2 | 7500 | 0.1359 | 0.1638 | | 0.1979 | 4.34 | 7750 | 0.1336 | 0.1612 | | 0.1836 | 4.48 | 8000 | 0.1324 | 0.1613 | | 0.1836 | 4.62 | 8250 | 0.1320 | 0.1606 | | 0.1891 | 4.76 | 8500 | 0.1325 | 0.1598 | | 0.1891 | 4.9 | 8750 | 0.1324 | 0.1597 | | 4632a65085cde467ac6caa65eacf67de |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Large Nepali - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2551 - Wer: 18.8467 | b736fd247571a59f7f0b10477e903921 |
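A minimal inference sketch via the ASR pipeline; swap the base model id for this checkpoint's hub id, and note the audio filename is illustrative:

```python
from transformers import pipeline

# replace "openai/whisper-small" with this fine-tuned checkpoint's hub id
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
print(asr("nepali_sample.wav")["text"])
```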
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 400 - mixed_precision_training: Native AMP | 9c07a121a9ab5d8d1056120f6d6612c0 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2613 | 0.27 | 400 | 0.2551 | 18.8467 | | 0f876ad0cebd1cd5402f0cdee46f40e6 |
apache-2.0 | ['generated_from_trainer'] | false | recipe-lr1e05-wd0.01-bs16 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2793 - Rmse: 0.5285 - Mse: 0.2793 - Mae: 0.4342 | da03afac0a7d62535c8a520e67f2a2ef |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 | 7b57fedcd5bdd67a69435dabce891d4d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2767 | 1.0 | 1245 | 0.2744 | 0.5239 | 0.2744 | 0.4124 | | 0.2739 | 2.0 | 2490 | 0.2757 | 0.5251 | 0.2757 | 0.4212 | | 0.2727 | 3.0 | 3735 | 0.2793 | 0.5285 | 0.2793 | 0.4342 | | 619482804a9e81450f0247b0e8fc6334 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** | 5bbb3c0bf9af0d41a3854be667f49ef4 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | sentence-transformers/bert-base-nli-max-tokens This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. | 9d4239cfc5625c23b9a8fdf8cf62d315 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/bert-base-nli-max-tokens') embeddings = model.encode(sentences) print(embeddings) ``` | 269151a49e34e62c686c8284bddaeed1 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch | 59138843f5aa7b4300fe590a195b6a95 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | # Max pooling - take the maximum value over time for every dimension def max_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to a large negative value return torch.max(token_embeddings, 1)[0] | 7cb3f691a804752d32fa79508cf77237 |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-max-tokens') model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-max-tokens') | acf0b3c3c87e8531c4c0a56f6b646269 |
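The remaining steps of the standard sentence-transformers usage pattern, completing the fragments above (the `tokenizer`, `model` and `max_pooling` names refer to the rows just before this one; the sentences are illustrative):

```python
sentences = ['This is an example sentence', 'Each sentence is converted']

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```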
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-max-tokens) | 8b32d15c43862b17c243250934f2886a |
apache-2.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` | cf2aefcb7e8aa87be37ff0af2001f7fc |
apache-2.0 | ['image-to-text'] | false | Manga OCR Optical character recognition for Japanese text, with the main focus being Japanese manga. It uses the [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) framework. Manga OCR can be used as a general-purpose printed Japanese OCR, but its main goal is to provide high-quality text recognition that is robust against various scenarios specific to manga: - both vertical and horizontal text - text with furigana - text overlaid on images - wide variety of fonts and font styles - low quality images Code is available [here](https://github.com/kha-white/manga_ocr). | 20fdaba46079d498e5fa3ebd7f7dd2c1 |
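A minimal usage sketch via the `manga-ocr` package from the linked repository; the package name, the `MangaOcr` class, and the image path follow that repo's README and are assumptions here:

```python
from manga_ocr import MangaOcr

mocr = MangaOcr()           # downloads this model from the Hub on first use
text = mocr("example.jpg")  # path to a manga panel or text-region crop
print(text)
```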
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1363 - F1: 0.8627 | 582abb0fc2be24d0d9d6eb400c5bf95d |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 | | 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 | | 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 | | 931dd8f7d1de1a05db8a42e949c3fe0e |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | DreamBooth model for Starcraft:Remastered terrain This is a Stable Diffusion model fine-tuned with DreamBooth on Starcraft terrain images from the Space Platform tileset. It can be used by adding the `instance_prompt`: **isometric scspace terrain** It was trained on 32x32 terrain images from 265 melee maps, including original Blizzard maps and maps downloaded from Battle.net, scmscx.com and broodwarmaps.net. Run it on Hugging Face Spaces: https://huggingface.co/spaces/wdcqc/wfd Or use this notebook on Colab: https://colab.research.google.com/github/wdcqc/WaveFunctionDiffusion/blob/remaster/colab/WaveFunctionDiffusion_Demo.ipynb In addition to DreamBooth, a custom VAE model (`AutoencoderTile`) is trained to encode and decode the latents to/from tileset probabilities ("waves"), which are then converted into Starcraft maps. A WFC Guidance, inspired by the Wave Function Collapse algorithm, is also added to the pipeline. For more information about guidance, please see this page: [Fine-Tuning, Guidance and Conditioning](https://github.com/huggingface/diffusion-models-class/tree/main/unit2) This model was created as part of the DreamBooth Hackathon. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! | add553168c4e851195e9f2d9c71e42ab |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | # Use CUDA (otherwise it will take 15 minutes) device = "cuda" tilenet = AutoencoderTile.from_pretrained( "wdcqc/starcraft-platform-terrain-32x32", subfolder="tile_vae" ).to(device) pipeline = WaveFunctionDiffusionPipeline.from_pretrained( "wdcqc/starcraft-platform-terrain-32x32", tile_vae = tilenet, wfc_data_path = wfc_data_path ) pipeline.to(device) | f3812d4b93a6a1857430bfe74cc1bba7 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | ```python
# Prompts need to include the DreamBooth keyword "isometric scspace terrain"
pipeline_output = pipeline(
    "isometric scspace terrain, corgi",
    num_inference_steps=50,
    wfc_guidance_start_step=20,
    wfc_guidance_strength=5,
    wfc_guidance_final_steps=20,
    wfc_guidance_final_strength=10,
)
image = pipeline_output.images[0]
``` | 4463f31080e29e9bf62a608dc512daa4 |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | ```python
# Display the generated image as map tiles (display() assumes a Jupyter/IPython environment)
from wfd.scmap import demo_map_image

wave = pipeline_output.waves[0]
tile_result = wave.argmax(axis=2)

display(demo_map_image(tile_result, wfc_data_path=wfc_data_path))
``` | 9149f52c2662dd65c6c9590b35f2aabc |
creativeml-openrail-m | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | ```python
# Generate a Starcraft map file (.scx) from the tile result
from wfd.scmap import tiles_to_scx
import random, time

tiles_to_scx(
    tile_result,
    "outputs/generated_{}_{:04d}.scx".format(time.strftime("%Y%m%d_%H%M%S"), random.randint(0, 9999)),
    wfc_data_path=wfc_data_path
)
``` | 168ce4e4da0f3ee3ff8fbfbe7386da77 |
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0627
- Precision: 0.9389
- Recall: 0.9524
- F1: 0.9456
- Accuracy: 0.9866 | 0f5e38c09dc11ffb4b085892bdb72338 |
apache-2.0 | ['generated_from_trainer'] | false | Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0835        | 1.0   | 1756 | 0.0711          | 0.9200    | 0.9334 | 0.9266 | 0.9825   |
| 0.0329        | 2.0   | 3512 | 0.0648          | 0.9308    | 0.9485 | 0.9396 | 0.9858   |
| 0.0179        | 3.0   | 5268 | 0.0627          | 0.9389    | 0.9524 | 0.9456 | 0.9866   | | eea520691fd7dce614d3bae462cd83df |
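A sketch of direct inference without the pipeline helper, assuming a placeholder checkpoint id for this model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder checkpoint id; substitute the fine-tuned model
checkpoint = "your-org/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token's highest-scoring class index to its label name
predicted_ids = logits.argmax(dim=-1)[0]
print([model.config.id2label[int(i)] for i in predicted_ids])
```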
mit | ['roberta-base', 'roberta-base-epoch_69'] | false | RoBERTa, Intermediate Checkpoint - Epoch 69
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_69. | f702e9e1c03220dfd910105d9809089c |
mit | ['roberta-base', 'roberta-base-epoch_69'] | false | Model Description
This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base) for English: it is a Transformers model pretrained on a large corpus of English data with the Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). There are two major differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora. | ba798f62685b63f142a377daf4b0afbf |
mit | ['roberta-base', 'roberta-base-epoch_69'] | false | How to use
Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch (note the checkpoint id matches this card's epoch):
```python
from transformers import pipeline

model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_69', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
``` | 9b343f30e01b640f855b2e3bbabb548f |
mit | ['roberta-base', 'roberta-base-epoch_69'] | false | Citation info
```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
``` | a60c586db9c5b03aa9c474d8f63fcf83 |
cc-by-4.0 | [] | false | Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of English-Catalan datasets totalling up to 11 million sentences. Additionally, the model is evaluated on several public datasets comprising 5 different domains (general, administrative, technology, biomedical, and news). | a97f48876f14f029a69da3c226b35eb9 |
cc-by-4.0 | [] | false | Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model and load the SentencePiece tokenizer
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-en-ca", revision="main")
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")

# tokenize() returns a (tokens, features) pair; keep only the tokens
tokenized = tokenizer.tokenize("Welcome to the Aina Project!")

translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
``` | fa210a54afded052e69119eb5b77b526 |
cc-by-4.0 | [] | false | Training data
The model was trained on a combination of the following datasets:
| Dataset             | Sentences      |
|---------------------|----------------|
| Global Voices       | 21.342         |
| Memories Lluires    | 1.173.055      |
| Wikimatrix          | 1.205.908      |
| TED Talks           | 50.979         |
| Tatoeba             | 5.500          |
| CoVost 2 ca-en      | 79.633         |
| CoVost 2 en-ca      | 263.891        |
| Europarl            | 1.965.734      |
| jw300               | 97.081         |
| Crawled Generalitat | 38.595         |
| Opus Books          | 4.580          |
| CC Aligned          | 5.787.682      |
| COVID_Wikipedia     | 1.531          |
| EuroBooks           | 3.746          |
| Gnome               | 2.183          |
| KDE 4               | 144.153        |
| OpenSubtitles       | 427.913        |
| QED                 | 69.823         |
| Ubuntu              | 6.781          |
| Wikimedia           | 208.073        |
| **Total**           | **11.558.183** | | f6f4bafd31b6bdf28322f06d1566a2d9 |
cc-by-4.0 | [] | false | Data preparation
All datasets are concatenated and filtered using the [mBERT Gencata parallel filter](https://huggingface.co/projecte-aina/mbert-base-gencata). Before training, the punctuation is normalized using a modified version of the `join-single-file.py` script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py). | 2c34fd0c5673b67a6c9587a81f11d54b |
cc-by-4.0 | [] | false | Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf) The following hyperparameters were set in the Fairseq toolkit:
| Hyperparameter                     | Value                             |
|------------------------------------|-----------------------------------|
| Architecture                       | transformer_vaswani_wmt_en_de_big |
| Embedding size                     | 1024                              |
| Feedforward size                   | 4096                              |
| Number of heads                    | 16                                |
| Encoder layers                     | 24                                |
| Decoder layers                     | 6                                 |
| Normalize before attention         | True                              |
| --share-decoder-input-output-embed | True                              |
| --share-all-embeddings             | True                              |
| Effective batch size               | 96.000                            |
| Optimizer                          | adam                              |
| Adam betas                         | (0.9, 0.980)                      |
| Clip norm                          | 0.0                               |
| Learning rate                      | 1e-3                              |
| LR scheduler                       | inverse sqrt                      |
| Warmup updates                     | 4000                              |
| Dropout                            | 0.1                               |
| Label smoothing                    | 0.1                               |
The model was trained for a total of 45.000 updates. Weights were saved every 1000 updates, and reported results are the average of the last 32 checkpoints. | e22bc28df85cd5ac62b3d637bd16db30 |
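As a rough sketch, these settings would translate to a `fairseq-train` invocation along the following lines (the data path and the `--max-tokens`/`--update-freq` values are placeholders chosen to approximate the effective batch size; this is not the project's actual command):
```bash
fairseq-train data-bin/en-ca \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 24 --decoder-layers 6 \
    --share-decoder-input-output-embed --share-all-embeddings \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 1e-3 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --dropout 0.1 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4000 --update-freq 24 \
    --max-update 45000 --save-interval-updates 1000
```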
cc-by-4.0 | [] | false | Variable and metrics
We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/) | 07eabea6904da4e02b060a3490da8760 |
cc-by-4.0 | [] | false | Evaluation results
Below are the evaluation results on the machine translation from English to Catalan compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):
| Test set             | SoftCatalà | Google Translate | mt-aina-en-ca |
|----------------------|------------|------------------|---------------|
| Spanish Constitution | 32,6       | 37,6             | **37,7**      |
| United Nations       | 39,0       | 39,7             | **39,8**      |
| aina_aapp_ca-en      | 46,5       | **51,5**         | 48,8          |
| european_comission   | 49,1       | **52**           | 49,5          |
| Flores 101 dev       | 41,0       | 41,6             | **42,9**      |
| Flores 101 devtest   | 42,1       | 42,2             | **44,0**      |
| Cybersecurity        | 42,5       | **46,5**         | 45,8          |
| wmt 19 biomedical    | 21,7       | **25,2**         | 25,1          |
| wmt 13 news          | 34,9       | 33,8             | **35,6**      |
| Average              | 38,8       | **41,1**         | 41,0          | | 0b560249d8a3f30962d56e627938fd68 |
cc-by-4.0 | [] | false | Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details> | bc1b054267fae4ccf9edc3463b1c248e |
apache-2.0 | ['translation'] | false | opus-mt-sv-mos
* source languages: sv
* target languages: mos
* OPUS readme: [sv-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.eval.txt) | 87d85f5773d78ad1f184b4cd85eeafe1 |
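A minimal sketch for running this model with the `transformers` Marian classes, assuming the checkpoint follows the usual `Helsinki-NLP/opus-mt-sv-mos` Hub naming:
```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed Hub id following the Helsinki-NLP naming convention
model_name = "Helsinki-NLP/opus-mt-sv-mos"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hej, hur mår du?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```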
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Small Vietnamese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 vi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9921
- Wer: 34.2172 | 27397ded629c706fba4473cecb206098 |
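A minimal transcription sketch (the model id and audio path are placeholders):
```python
from transformers import pipeline

# Placeholder checkpoint id for this fine-tuned model
asr = pipeline("automatic-speech-recognition", model="your-org/whisper-small-vi")
print(asr("sample_vietnamese.wav")["text"])  # placeholder audio file path
```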
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP | 8114111d4e28c45523ea75b7cc9fa3de |
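As a sketch, these hyperparameters correspond roughly to the following `transformers` training arguments (the output path is a placeholder; this is not the card's actual training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-vi",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed precision
)
```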
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0002        | 124.0 | 1000 | 0.7998          | 21.7706 |
| 0.0001        | 249.0 | 2000 | 0.8833          | 28.9690 |
| 0.0           | 374.0 | 3000 | 0.9382          | 30.8206 |
| 0.0           | 499.0 | 4000 | 0.9754          | 34.4363 |
| 0.0           | 624.0 | 5000 | 0.9921          | 34.2172 | | 88545ec531cf6b70fbff137300975a76 |
apache-2.0 | ['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard'] | false | Whisper Large French
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the mozilla-foundation/common_voice_11_0 fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.00
- Wer: 00.00 | 71ed2f03cfc3b778f57aa9aec4cfb222 |
apache-2.0 | ['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard'] | false | Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP | 1febae21b718cc97a4389395c4cb959d |