---
license: other
base_model:
- black-forest-labs/FLUX.1-dev
base_model_relation: quantized
pipeline_tag: text-to-image
---

# Elastic model: FLUX.1-dev


## Overview

ElasticModels are produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA lets you control model size, latency, and quality with a simple slider, routing different compression algorithms to different layers. For each model we produce a series of optimized variants:

- **XL**: A mathematically equivalent neural network, optimized with our DNN compiler.
- **L**: A near-lossless model with less than 1% accuracy degradation on the corresponding benchmarks.
- **M**: A faster model with less than 1.5% accuracy degradation.
- **S**: The fastest model, with less than 2% accuracy degradation.

Models can be accessed via the TheStage AI Python SDK (ElasticModels) or deployed as Docker containers with REST API endpoints (see the serving and deployment sections below).

---

## Installation

### System Requirements

| **Property** | **Value** |
| --- | --- |
| **GPU** | L40s, RTX 5090, H100, B200 |
| **Python Version** | 3.10-3.12 |
| **CPU** | Intel/AMD x86_64 |
| **CUDA Version** | 12.8+ |


### TheStage AI Access token setup

Install TheStage AI CLI and set up your API token:

```bash
pip install thestage
thestage config set --access-token <YOUR_ACCESS_TOKEN>
```

### ElasticModels installation

Install TheStage Elastic Models package:

```bash
pip install 'thestage-elastic-models[nvidia]' \
    --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple
```
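
After installation you can sanity-check the environment with a short script; this is an illustrative sketch, not part of the SDK itself:

```python
# Quick environment check: both imports should succeed and CUDA should be visible.
import torch
import elastic_models  # noqa: F401

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```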

---

## Usage example

Elastic Models provides the same interface as Hugging Face Diffusers. Here is an example of how to use the FLUX.1-dev model:

```python
import torch
from elastic_models.diffusers import FluxPipeline

model_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompts = ["Kitten eating a banana"]
output = pipeline(prompt=prompts)

for prompt, output_image in zip(prompts, output.images):
    output_image.save(prompt.replace(' ', '_') + '.png')
```
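
Because the pipeline mirrors the Diffusers interface, standard generation arguments should pass through unchanged. A sketch fixing the seed for reproducible outputs (this assumes `generator` is forwarded as in stock Diffusers):

```python
# Sketch: reproducible generation via a seeded generator (assumes the pipeline
# forwards `generator` to the sampler, as stock Diffusers pipelines do).
generator = torch.Generator(device="cuda").manual_seed(42)
output = pipeline(
    prompt=prompts,
    height=1024,
    width=1024,
    num_inference_steps=28,
    generator=generator,
)
```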


---

## Quality Benchmarks

We used the PartiPrompts and DrawBench datasets to evaluate the quality of images generated by the different FLUX.1-dev model sizes (S, M, L, XL) against the original model. The evaluation metrics include ARNIQA, CLIP IQA, PSNR, SSIM, and VQA Faithfulness.

![Quality Benchmarking](https://cdn.thestage.ai/production/cms_file_upload/1773422498-f1062c24-2904-4d56-b05b-a4d62f629a26/Flux_Dev_PartiPrompts_Evaluation.png)

### Quality Benchmark Results

| **Metric/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
| --- | --- | --- | --- | --- | --- |
| **ARNIQA (PartiPrompts)** | 64.1 | 63.2 | 61.9 | 66.8 | 66.9 |
| **ARNIQA (DrawBench)** | 64.3 | 63.5 | 63.6 | 68.2 | 68.5 |
| **CLIP IQA (PartiPrompts)** | 85.5 | 86.4 | 83.8 | 88.3 | 87.9 |
| **CLIP IQA (DrawBench)** | 86.4 | 86.5 | 84.5 | 89.5 | 90.0 |
| **VQA Faithfulness (PartiPrompts)** | 87.5 | 85.5 | 85.5 | 85.5 | 88.6 |
| **VQA Faithfulness (DrawBench)** | 69.3 | 64.7 | 64.8 | 67.8 | 65.2 |
| **PSNR (PartiPrompts)** | 30.22 | 30.24 | 30.38 | N/A | N/A |
| **SSIM (PartiPrompts)** | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |


---

## Datasets

- **PartiPrompts**: A benchmark dataset created by Google Research, containing 1,632 diverse and challenging prompts that test various aspects of text-to-image generation models. It includes categories such as abstract concepts, complex compositions, properties and attributes, counting and numbers, text rendering, artistic styles, and fine-grained details.

- **DrawBench**: A comprehensive benchmark dataset developed by Google Research, containing 200 carefully curated prompts designed to test specific capabilities and challenge areas of diffusion models. It includes categories such as colors, counting, conflicting requirements, DALL-E inspired prompts, detailed descriptions, misspellings, positional relationships, rare words, Reddit user prompts, and text generation.

---

## Metrics

- **ARNIQA**: No-reference image quality assessment metric that predicts perceptual quality without reference images.
- **CLIP_IQA**: No-reference image quality metric using contrastive learning to assess image quality without references.
- **VQA Faithfulness**: Metric measuring how accurately generated images represent the text prompts.
- **PSNR**: Peak Signal-to-Noise Ratio, measuring pixel-level similarity between images generated by the accelerated model and by the original model.
- **SSIM**: Structural Similarity Index, measuring perceptual similarity between images generated by the accelerated model and by the original model.
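
For reference, here is a minimal sketch of how PSNR and SSIM can be computed between accelerated-model and original-model outputs using `torchmetrics`; this is an illustration, not the exact evaluation harness used for the tables above:

```python
# Illustrative only: compare an accelerated-model image to the original-model
# image with torchmetrics. Random tensors stand in for real generated images.
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure

# Float tensors in [0, 1], shape (N, C, H, W)
img_accel = torch.rand(1, 3, 1024, 1024)
img_orig = torch.rand(1, 3, 1024, 1024)

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)

print(f"PSNR: {psnr(img_accel, img_orig).item():.2f} dB")
print(f"SSIM: {ssim(img_accel, img_orig).item():.3f}")
```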


---

## Latency Benchmarks

We measured the latency of the different FLUX.1-dev model sizes (S, M, L, XL, original) on various GPUs, generating images of 1024x1024 pixels.

![Latency Benchmarking](https://cdn.thestage.ai/production/cms_file_upload/1773422520-f2e2dedd-f475-4609-8277-b28fe5629623/Flux_Dev_1024x1024_image_generation.png)

### Latency Benchmark Results

Latency (in seconds) for generating a 1024x1024 image using different model sizes on various hardware setups.

| **GPU/Model Size** | **S** | **M** | **L** | **XL** | **Original** |
| --- | --- | --- | --- | --- | --- |
| **H100** | 2.88 | 3.06 | 3.25 | 4.18 | 6.46 |
| **L40s** | 9.22 | 10.07 | 10.67 | 14.39 | 16 |
| **B200** | 1.93 | 2.04 | 2.15 | 2.77 | 4.52 |
| **GeForce RTX 5090** | 5.79 | N/A | N/A | N/A | N/A |


---

## Benchmarking Methodology

The benchmarking was performed on a single GPU with a batch size of 1. Each model was run for 10 iterations, and the average latency was calculated.

> **Algorithm summary:**
> 1. Load the FLUX.1-dev model with the specified size (S, M, L, XL, original).
> 2. Move the model to the GPU.
> 3. Prepare a sample prompt for image generation.
> 4. Run the model for a number of iterations (e.g., 10) and measure the time taken for each iteration. On each iteration:
>    - Synchronize the GPU to flush any previous operations.
>    - Record the start time.
>    - Generate the image using the model.
>    - Synchronize the GPU again.
>    - Record the end time and calculate the latency for that iteration.
> 5. Calculate the average latency over all iterations.

---

## Reproduce benchmarking

```python
import time

import torch
from elastic_models.diffusers import FluxPipeline

model_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompt = ["Kitten eating a banana"]
generate_kwargs = {
    "height": 1024,
    "width": 1024,
    "num_inference_steps": 28,
    "cfg_scale": 0.0
}

def evaluate_pipeline():
    torch.cuda.synchronize()
    start_time = time.time()
    output = pipeline(
        prompt=prompt,
        **generate_kwargs
    )
    torch.cuda.synchronize()
    end_time = time.time()

    return end_time - start_time

# Warm-up
for _ in range(5):
    evaluate_pipeline()

# Benchmarking
num_runs = 10
total_time = 0.0

for _ in range(num_runs):
    latency = evaluate_pipeline()
    total_time += latency

average_latency = total_time / num_runs
print(f"Average Latency over {num_runs} runs: {average_latency} seconds")
```
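
Running this with `mode='S'` on an H100 should land near the 2.88 s average reported in the latency table above.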


---

## Serving with Docker Image

For serving with Nvidia GPUs, we provide ready-to-go Docker containers with OpenAI-compatible API endpoints.
Using our containers you can set up an inference endpoint on any cloud or serverless provider, as well as on on-premise servers.
You can also use this container to run inference through TheStage AI platform.

### Prebuilt image from ECR

| **GPU** | **Docker image name** |
| --- | --- |
| H100, L40s | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b` |
| B200, RTX 5090 | `public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-blackwell-24.09b` |

Pull the Docker image for your Nvidia GPU and start the inference container:

```bash
docker pull <IMAGE_NAME>
```
```bash
docker run --rm -ti \
  --name serving_thestage_model \
  -p 8000:80 \
  -e AUTH_TOKEN=<AUTH_TOKEN> \
  -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
  -e MODEL_SIZE=<MODEL_SIZE> \
  -e MODEL_BATCH=<MAX_BATCH_SIZE> \
  -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
  -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
  -v /mnt/hf_cache:/root/.cache/huggingface \
  <IMAGE_NAME>
```

| **Parameter**              | **Description**                                                                                      |
|----------------------------|------------------------------------------------------------------------------------------------------|
| `<MODEL_SIZE>`             | Available: S, M, L, XL.                                                                              |
| `<MAX_BATCH_SIZE>`         | Maximum batch size to process in parallel.                                                           |
| `<HUGGINGFACE_ACCESS_TOKEN>` | Hugging Face access token.                                                                         |
| `<THESTAGE_ACCESS_TOKEN>`  | TheStage token generated on the platform (Profile -> Access tokens).                                 |
| `<AUTH_TOKEN>`             | Token for endpoint authentication. You can set it to any random string; it must match the value used by the client. |
| `<IMAGE_NAME>`             | The image name you pulled in the previous step.                                                      |
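
For example, serving the S model on an H100 with a maximum batch size of 1 could look like this (tokens redacted):

```bash
docker run --rm -ti \
  --name serving_thestage_model \
  -p 8000:80 \
  -e AUTH_TOKEN=my-secret-token \
  -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
  -e MODEL_SIZE=S \
  -e MODEL_BATCH=1 \
  -e HUGGINGFACE_ACCESS_TOKEN=hf_xxx \
  -e THESTAGE_AUTH_TOKEN=thestage_xxx \
  -v /mnt/hf_cache:/root/.cache/huggingface \
  public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b
```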

---

## Invocation

You can invoke the endpoint using CURL as follows:

```bash
curl -X POST http://127.0.0.1:8000/v1/images/generations \
    -H "Authorization: Bearer <AUTH_TOKEN>" \
    -H "Content-Type: application/json" \
    -H "X-Model-Name: flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>" \
    -d '{
          "prompt": "Cat eating banana",
          "seed": 12,
          "aspect_ratio": "1:1",
          "guidance_scale": 6.5,
          "num_inference_steps": 28
        }' \
    --output cat.webp -D -
```

Or using Python requests:

```python
import requests
import json
url = "http://127.0.0.1:8000/v1/images/generations"
payload = json.dumps({
  "prompt": "sunset",
  "seed": 12,
  "aspect_ratio": "1:1",
  "guidance_scale": 6.5,
  "num_inference_steps": 28
})
headers = {
  'Authorization': 'Bearer <AUTH_TOKEN>',
  'Content-Type': 'application/json',
  'X-Model-Name': 'flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>'
}
response = requests.post(url, headers=headers, data=payload)
with open("sunset.webp", "wb") as f:
    f.write(response.content)
```

Or using OpenAI python client:

```python
from openai import OpenAI

BASE_URL = "http://<your_ip>/v1"
API_KEY  = "<AUTH_TOKEN>"  # must match the AUTH_TOKEN set at container startup
MODEL    = "flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>"

client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
    default_headers={"X-Model-Name": MODEL}
)

response = client.with_raw_response.images.generate(
    model=MODEL,
    prompt="Cat eating banana",
    n=1,
    extra_body={
        "seed": 111,
        "aspect_ratio": "1:1",
        "guidance_scale": 3.5,
        "num_inference_steps": 28
    },
)

with open("thestage_image.webp", "wb") as f:
    f.write(response.content)
```

---

## Endpoint Parameters

### Method

> **POST** `/v1/images/generations`

### Header Parameters

> `Authorization`: `string`
>
> Bearer token for authentication. Should match the `AUTH_TOKEN` set during container startup.

> `Content-Type`: `string`
>
> Must be set to `application/json`.

> `X-Model-Name`: `string`
>
> Specifies the model to use for generation. Format: `flux-1-dev-<size>-bs<batch_size>`, where `<size>` is one of `S`, `M`, `L`, `XL`, `original` and `<batch_size>` is the maximum batch size configured during container startup.

### Input Body

> `prompt` : `string`
>
> The text prompt to generate an image for.

> `seed`: `int32`
>
>  Random seed for generation.

> `num_inference_steps`: `int32`
>
> Number of diffusion steps to use for generation. Higher values yield better quality but take longer. Defaults to 28.

> `aspect_ratio`: `string`
>
>  Aspect ratio of the generated image. Supported values:
>  ```
>  "1:1": (1024, 1024),
>  "16:9": (1280, 736),
>  "21:9": (1280, 544),
>  "3:2": (1248, 832),
>  "2:3": (832, 1248),
>  "4:3": (1184, 896),
>  "3:4": (896, 1184),
>  "5:4": (1152, 928),
>  "4:5": (928, 1152),
>  "9:16": (736, 1280),
>  "9:21": (544, 1280)
>  ```
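>
>  All listed resolutions are multiples of 32 and total roughly one megapixel.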

> `guidance_scale`: `float32`
>
> Guidance scale for classifier-free guidance. Higher values increase adherence to the prompt.

---

## Deploy on Modal

For more details, see the [Modal deployment](https://docs.thestage.ai/tutorials/source/modal_thestage.html) tutorial.

### Clone the Modal serving code

```shell
git clone https://github.com/TheStageAI/ElasticModels.git
cd ElasticModels/examples/modal
```

### Configuration of environment variables

Set your environment variables in `modal_serving.py`:

```python
# modal_serving.py

ENVS = {
    "MODEL_REPO": "black-forest-labs/FLUX.1-dev",
    "MODEL_BATCH": "4",
    "THESTAGE_AUTH_TOKEN": "",
    "HUGGINGFACE_ACCESS_TOKEN": "",
    "PORT": "80",
    "PORT_HEALTH": "80",
    "HF_HOME": "/cache/huggingface",
}
```

### Configuration of GPUs

Set your desired GPU type and autoscaling variables in `modal_serving.py`:

```python
# modal_serving.py

@app.function(
    image=image,
    gpu="B200",
    min_containers=8,
    max_containers=8,
    timeout=10000,
    ephemeral_disk=600 * 1024,
    volumes={"/opt/project/.cache": HF_CACHE},
    startup_timeout=60*20
)
@modal.web_server(
    80,
    label="black-forest-labs/FLUX.1-dev-test",
    startup_timeout=60*20
)
def serve():
    pass
```
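
Note that setting `min_containers` equal to `max_containers` pins the deployment at a fixed pool of eight containers rather than autoscaling; lower `min_containers` if you want Modal to scale the pool down when traffic is idle.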

### Run serving

```shell
modal serve modal_serving.py
```
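
Once the server is up, Modal prints the public URL of the web endpoint. Requests follow the same format as in the Invocation section above (add the `Authorization` header if you configured an `AUTH_TOKEN`); for example, with a placeholder URL:

```bash
curl -X POST https://<your-modal-url>/v1/images/generations \
    -H "Content-Type: application/json" \
    -H "X-Model-Name: flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>" \
    -d '{"prompt": "Cat eating banana", "num_inference_steps": 28}' \
    --output image.webp
```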


## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
* __Contact email__: contact@thestage.ai