---
license: apache-2.0
language:
- en
pretty_name: Diffusers Benchmarks
---

<div align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/benchmarking/benchmarks_headshot.jpg" width=800/>
</div>

Welcome to 🤗 Diffusers Benchmarks!

This is a dataset where we track the inference latency and memory usage of the core models in the `diffusers` library.

Currently, the core models are:

* Flux
* Wan
* LTX
* SDXL

*Note that we will continue to extend this list based on usage.*

You can analyze the results in [this demo](https://huggingface.co/spaces/diffusers/benchmark-analyzer/).

> [!IMPORTANT]
> Instead of benchmarking entire diffusion pipelines, we only benchmark the forward passes
> of the diffusion networks under different settings (compile, offloading, quantization, etc.),
> because the diffusion network is typically the most compute-heavy part of a pipeline.
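
The measurement itself lives in the `diffusers` repository, but the general shape of a forward-pass latency benchmark is easy to sketch. The snippet below is a minimal, library-agnostic illustration (the `benchmark_forward` helper and the toy workload are hypothetical, not the code the workflow actually runs): it performs warmup runs to amortize one-time costs such as compilation, then reports the median over several timed runs.

```python
import statistics
import time


def benchmark_forward(fn, *args, warmup=3, iters=10):
    """Return the median wall-clock latency (seconds) of fn(*args).

    Hypothetical helper for illustration only. Real GPU benchmarks
    additionally need device synchronization around each timed call.
    """
    # Warmup: exclude one-time costs (compilation, cache population).
    for _ in range(warmup):
        fn(*args)
    # Timed runs; the median is robust to occasional outliers.
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return statistics.median(times)


# Stand-in "forward pass" so the sketch is self-contained.
latency = benchmark_forward(lambda x: [v * 2 for v in x], list(range(1000)))
```

A real harness would also record peak memory (e.g. via the framework's allocator statistics) alongside latency, which is what the numbers in this dataset capture.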

We use [this GitHub Actions workflow](https://github.com/huggingface/diffusers/blob/main/.github/workflows/benchmark.yml) to report the above numbers automatically. This workflow runs on a biweekly cadence.

[Here](https://github.com/huggingface/diffusers/actions/runs/16065987231) is an example run.