uqer1244/MLX-z-image-Base

https://github.com/uqer1244/MLX_z-image-base

This is a 4-bit quantized MLX version of Tongyi-MAI/Z-Image. It is optimized for Apple Silicon (macOS) using the MLX framework.

Model Details

  • Transformer: MLX 4-bit quantized
  • Text Encoder: MLX 4-bit quantized (Qwen3)
  • VAE: Original PyTorch model (sourced from the original repo)
  • Tokenizer: Original tokenizer (sourced from the original repo)
  • Scheduler: MLXFlowMatchEulerScheduler
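The scheduler name suggests a flow-matching Euler sampler: at each step the latent is moved along the model's predicted velocity field from one noise level to the next. The internals of MLXFlowMatchEulerScheduler are not shown here, so the following is only a minimal numpy sketch of the generic Euler flow-matching update; the function name, the toy velocity, and the linear sigma schedule are illustrative assumptions, not this repo's actual code.

```python
import numpy as np

def euler_flow_match_step(x, velocity, sigma, sigma_next):
    """One Euler step of a flow-matching sampler: move the latent x
    along the predicted velocity from noise level sigma to sigma_next."""
    return x + (sigma_next - sigma) * velocity

# Toy usage: denoise a random latent over a linear sigma schedule.
# In the real pipeline, `velocity` would come from the quantized transformer;
# here it is a stand-in prediction for illustration only.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
sigmas = np.linspace(1.0, 0.0, 11)  # 10 steps from pure noise to clean
for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
    velocity = x  # stand-in for the model's velocity prediction
    x = euler_flow_match_step(x, velocity, sigma, sigma_next)
```

Note that because sigma_next < sigma, each step has a negative step size, which is why the loop contracts the latent toward the data manifold rather than expanding it.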

Usage

This model is intended for use with the custom MLX pipeline script in the GitHub repository linked above. For detailed usage instructions and documentation of the model architecture, refer to the original Tongyi-MAI/Z-Image repository.

Attribution & License

This model is a derivative work of Tongyi-MAI/Z-Image.

  • Original License: Apache 2.0
  • Modifications: Converted Transformer and Text Encoder weights to MLX format and quantized to 4-bit.
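To make the 4-bit modification concrete, here is a minimal numpy sketch of group-wise affine quantization, the general scheme MLX's quantizer is based on (each group of weights shares one scale and one bias). The function names, the group size of 64, and the asymmetric min/max scheme are illustrative assumptions; they are not the exact code used to produce these weights.

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Group-wise affine 4-bit quantization (illustrative sketch).
    Each run of `group_size` consecutive weights shares one scale and bias,
    and each weight is stored as an integer code in 0..15."""
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0          # 4 bits -> 16 levels
    scale[scale == 0] = 1.0                 # guard against constant groups
    q = np.clip(np.round((groups - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min

def dequantize_4bit(q, scale, bias, shape):
    """Reconstruct approximate float weights from codes, scales, and biases."""
    return (q.astype(np.float32) * scale + bias).reshape(shape)

# Round-trip a random weight matrix and measure the reconstruction error,
# which is bounded by half the per-group quantization step.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
q, scale, bias = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, bias, w.shape)
max_err = np.abs(w - w_hat).max()
```

The trade-off is the usual one: 4-bit codes plus a small per-group overhead cut memory to roughly a quarter of float16, at the cost of bounded per-weight error.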