DISCLAIMER: Not yet thoroughly tested, may not work at all!!

Original Model Link : https://huggingface.co/ostris/Flex.1-alpha

name: Flex.1-alpha-MLX-Q8
base_model: black-forest-labs/FLUX.1-schnell
license: apache-2.0
pipeline_tag: text-to-image
tasks:
- text-to-image
- image-generation
tags:
- ostris
- black-forest-labs
- Flux-1
- schnell
- Flex
- mlx
- apple
library_name: mlx
language: en
get_started_code: uvx --from mflux mflux-generate --model exdysa/Flex.1-alpha-MLX-Q8 --base-model schnell --prompt 'Test Prompt' --steps 4 --seed 10 --width 1024 --height 1024 -q 8

Flex.1-alpha-MLX-Q8

Flex.1-alpha-MLX-Q8 is an 8-bit MLX quantization of Flex.1-alpha, a FLUX.1-schnell finetune with CFG control, half the double transformer blocks of the original, LoRA support, and a permissive Apache 2.0 license.

MLX is an array framework that runs on the GPU via Metal on Apple silicon Macs with M-series processors (M1/M2/M3/M4).

Generation using uv (https://docs.astral.sh/uv/):

uvx --from mflux mflux-generate --model exdysa/Flex.1-alpha-MLX-Q8 --base-model schnell --prompt 'Test Prompt' --steps 4 --seed 10 --width 1024 --height 1024 -q 8

Generation using pipx:

pipx run --spec mflux mflux-generate --model exdysa/Flex.1-alpha-MLX-Q8 --base-model schnell --prompt 'Test Prompt' --steps 4 --seed 10 --width 1024 --height 1024 -q 8