uqer1244 committed
Commit 1fe60f0 · verified · 1 parent: b73b700

Update README.md

Files changed (1): README.md (+5, -4)
README.md CHANGED
@@ -10,8 +10,9 @@ license: apache-2.0
 ---
 
 # uqer1244/MLX-z-image
+https://github.com/uqer1244/MLX_z-image
 
-This is a **4-bit quantized MLX version** of [Tongyi-MAI/Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo).
+This is a **4-bit quantized MLX version** of [{args.original_repo_id}](https://huggingface.co/{args.original_repo_id}).
 It is optimized for Apple Silicon (macOS) using the [MLX framework](https://github.com/ml-explore/mlx).
 
 ## Model Details
@@ -19,13 +20,13 @@ It is optimized for Apple Silicon (macOS) using the [MLX framework](https://gith
 - **Text Encoder**: MLX 4-bit quantized (Qwen3)
 - **VAE**: Original PyTorch Model (Sourced from original repo)
 - **Tokenizer**: Original Qwen2 Tokenizer (Sourced from original repo)
-- **Scheduler**: FlowMatchEulerDiscreteScheduler (Sourced from original repo)
+- **Scheduler**: MLXFlowMatchEulerScheduler
 
 ## Usage
 This model can be used with the custom MLX pipeline script.
 Please refer to the original repository for detailed usage instructions regarding the model architecture.
 
 ## Attribution & License
-This model is a derivative work of [Tongyi-MAI/Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo).
+This model is a derivative work of [{args.original_repo_id}](https://huggingface.co/{args.original_repo_id}).
 - **Original License**: Apache 2.0
-- **Modifications**: Converted Transformer and Text Encoder weights to MLX format and quantized to 4-bit.
+- **Modifications**: Converted Transformer and Text Encoder weights to MLX format and quantized to 4-bit.
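The "quantized to 4-bit" modification described above means the Transformer and Text Encoder weights are stored as low-bit integers plus per-group scale factors rather than full-precision floats. The following is a minimal NumPy sketch of generic affine 4-bit group quantization to illustrate the idea; it is not MLX's actual implementation, and the function names are invented for this example:

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Illustrative affine 4-bit quantization over groups of `group_size`
    weights: each group is mapped to integers 0..15 plus a scale and offset."""
    orig_shape = w.shape
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0            # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.clip(np.round((groups - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min, orig_shape

def dequantize_4bit(q, scale, w_min, orig_shape):
    """Reconstruct approximate float weights from the quantized groups."""
    return (q.astype(np.float32) * scale + w_min).reshape(orig_shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
q, scale, w_min, shape = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w_min, shape)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

The 4x reduction in bits per weight (16/32-bit float to 4-bit integer, plus small per-group metadata) is what makes the model practical to run on Apple Silicon memory budgets.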