πŸš€ LFM2.5-1.2B-Z-Image-Engineer-V4

The Z-Engineer goes liquid: smaller, faster, and ready to drink.

This is Z-Engineer V4 built on **Liquid Foundation Model 2.5 (LFM2.5)**, a 1.2B-parameter model that punches way above its weight class. Perfect for batch workflows where you need prompt engineering at warp speed.


🧠 What is this?

LFM2.5-1.2B-Z-Image-Engineer-V4 is a fully fine-tuned version of LiquidAI/LFM2.5-1.2B-Base, trained specifically to understand the nuances of AI image-generation workflows.

It excels at:

  • Expanding Concepts: Turn "neon samurai" into a full cinematic sequence with lighting, lens choices, and atmosphere.
  • Technical Precision: Understands camera terminology, lighting setups, and film aesthetics.
  • Blazing Speed: At 1.2B parameters, it's ~3x faster than the Qwen3-4B version while maintaining quality.

πŸ”‘ Key Use Cases

  • ⚑ High-Throughput Workflows: When you need to expand hundreds or thousands of prompts, LFM2.5's speed shines (see the batch sketch after this list).
  • πŸ’Ύ Low VRAM Deployments: Runs comfortably on minimal hardware, perfect for embedded or edge use cases.
  • πŸ›‘οΈ Local & Private: Runs entirely on your machine. No API fees, no data logging.
  • πŸ”Œ ComfyUI Ready: Works with the same ComfyUI-Z-Engineer node as the Qwen3 version.
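To make the high-throughput case concrete, here is a minimal batch-expansion sketch using llama-cpp-python against one of the GGUF files listed further down. The filename, context size, and sampling settings are illustrative assumptions, not official recommendations:

```python
# Minimal sketch: batch prompt expansion with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Abbreviated here; paste the full recommended system prompt from the section below.
SYSTEM_PROMPT = "Interpret the user seed as production intent, then build a definitive 200-250 word single-paragraph image prompt..."

llm = Llama(
    model_path="LFM2.5-1.2B-Z-Image-Engineer-V4-Q8_0.gguf",  # assumed filename; check the repo
    n_ctx=1024,      # enough headroom for a 200-250 word output
    verbose=False,
)

seeds = ["neon samurai", "abandoned greenhouse at dawn", "retro diner on Mars"]

for seed in seeds:
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": seed},
        ],
        max_tokens=512,
        temperature=0.7,  # illustrative; tune to taste
    )
    print(out["choices"][0]["message"]["content"], "\n")
```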

🧬 SMART Training: Adapted for LFM2.5's Hybrid Architecture

This version uses SMART Training (Smart Mode with Adaptive Regularization Topologer), the same methodology used for Qwen3-4B-Z-Engineer-V4, but adapted for LFM2.5's unique hybrid architecture.

LFM2.5's Challenge: Unlike traditional transformers, LFM2.5 uses a hybrid architecture mixing attention layers with recurrent (liquid) layers. The standard SMART regularizers needed significant adaptation:

| Adaptation | What Changed | Why |
|---|---|---|
| Attention-Only Filtering | Regularizers only process attention-layer outputs, skipping recurrent layers | Recurrent-layer hidden states have different statistical properties |
| Layer Pooling | The last 4 attention layers are mean-pooled for topology regularization | Provides a stable representation despite sparser attention placement |
| Reduced Regularizer Weights | Entropic: 0.003, Holographic: 0.01, Topology: 0.02/0.02 | LFM2.5's smaller capacity needs gentler regularization |
| Superfluid-Inspired Damping | "SmartGate" auto-reduces the aux-loss contribution on gradient instability | Prevents training collapse when hybrid layers produce non-finite gradients |

The result? Stable training on a fundamentally different architecture while still benefiting from diversity, coherence, and depth regularization.
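The SMART implementation itself is not published on this card, but the damping idea in the last table row is easy to illustrate. Below is a minimal PyTorch-flavored sketch of a gate in that spirit; the class name, decay factors, and update rule are my own assumptions, not the actual training code:

```python
# Illustrative sketch of a "SmartGate"-style damper (assumed design, not the real code).
import torch

class SmartGate:
    """Scales the auxiliary (regularizer) loss down when gradients turn non-finite."""

    def __init__(self, decay: float = 0.5, recover: float = 1.05):
        self.scale = 1.0
        self.decay = decay      # multiplier applied on instability
        self.recover = recover  # slow recovery multiplier on healthy steps

    def update(self, params) -> float:
        grads = [p.grad for p in params if p.grad is not None]
        unstable = any(not torch.isfinite(g).all() for g in grads)
        self.scale = self.scale * self.decay if unstable else min(1.0, self.scale * self.recover)
        return self.scale

# Hypothetical wiring in a training loop (the scale reflects the previous
# step's gradient health, which is checked after backward()):
#   loss = ce_loss + gate.scale * aux_loss
#   loss.backward()
#   gate.update(model.parameters())
```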


πŸ“‰ Why Choose LFM2.5 Over Qwen3-4B?

| Aspect | LFM2.5-1.2B | Qwen3-4B |
|---|---|---|
| Parameters | 1.2B | 4B |
| Speed | ~3x faster | Baseline |
| VRAM | ~1-2 GB (Q4) | ~2.5 GB (Q4) |
| Quality | Good for most use cases | Highest quality |
| Best For | Batch processing, edge deployment, speed-critical workflows | Maximum quality, complex scenes |

Choose LFM2.5 when: You're processing large batches, running on limited hardware, or prioritizing speed over marginal quality gains.

Choose Qwen3-4B when: You want the absolute best quality and can afford the extra compute.


πŸ”Œ ComfyUI Integration

Works with the same ComfyUI-Z-Engineer custom node as the Qwen3 version.


πŸ“ Recommended System Prompt

For best results, use this system prompt:

Interpret the user seed as production intent, then build a definitive 200-250 word single-paragraph image prompt that preserves every explicit constraint while intelligently expanding missing details. First infer the core subject, action, setting, and emotional tone; treat these as non-negotiable anchors. Then enhance with precise visual staging (explicit foreground, midground, background), clear visual hierarchy and eye path, physically plausible lighting (source, direction, softness, color temperature), and optical strategy (if lens/aperture are provided, preserve exactly; if absent, choose fitting lens and aperture and imply their depth-of-field effect). Integrate organic, manufactured, and environmental textures with realistic material behavior, add motion/atmospheric cues only when they support the scene, and apply a coherent color grade consistent with mood and environment. Keep the prose vivid but controlled: no contradictions, no overstuffing, no generic filler. Do not mention camera body brands. Output one polished paragraph only, no bullets, no line breaks, no meta commentary.
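If you run the model behind a local server instead, LM Studio (and llama.cpp's llama-server) exposes an OpenAI-compatible endpoint, so the prompt above slots straight into a standard chat call. A sketch, assuming LM Studio's default port and an illustrative model id:

```python
# Sketch: the recommended system prompt via an OpenAI-compatible local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

SYSTEM_PROMPT = "Interpret the user seed as production intent, ..."  # full text above

response = client.chat.completions.create(
    model="lfm2.5-1.2b-z-image-engineer-v4",  # assumed local model id
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "neon samurai"},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```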


πŸ’» Training Facts

I believe in open science. Here's exactly how this was built:

Hardware:

  • Trained locally on an AMD Strix Halo system (Ryzen AI Max+ 395, 128GB Unified RAM)
  • AMD Radeon 8060S Graphics (ROCm/HIP)

Dataset:

  • Size: 55,000 high-quality examples (same dataset as Qwen3-4B version)
  • 25,000 Vision-Grounded Samples: Real professional photographs transcribed using Qwen3-VL-30B-A3B
  • 30,000 Synthetic Samples: Generated prompt enhancement pairs

Training Configuration:

| Parameter | Value |
|---|---|
| Method | Full fine-tune (not LoRA) |
| Base Model | LiquidAI/LFM2.5-1.2B-Base |
| Optimizer Steps | 3,500 |
| Batch Size | 8 Γ— 3 accumulation = 24 effective |
| Learning Rate | 5e-6 (cosine decay with 5% warmup) |
| Precision | BFloat16 |
| Sequence Length | 640 tokens |
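For orientation, here is how the table could map onto Hugging Face TrainingArguments. This is a sketch only, not the author's training script; the SMART regularizers and the LFM2.5-specific adaptations described above are omitted:

```python
# Sketch: the hyperparameters above expressed as Hugging Face TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lfm2.5-z-engineer-v4",   # assumed path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=3,       # 8 x 3 = 24 effective
    max_steps=3500,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,                   # 5% warmup
    bf16=True,
)
```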

πŸ“¦ GGUF & Quantization

I provide a full suite of GGUF quantizations for use with llama.cpp, Ollama, and LM Studio:

| Quantization | Size | Notes |
|---|---|---|
| F16 | 2.2 GB | Full precision, maximum quality |
| Q8_0 | 1.2 GB | Near-lossless, recommended |
| Q6_K | 918 MB | Great balance |
| Q5_K_M | 804 MB | Good quality |
| Q5_K_S | 787 MB | Slightly smaller |
| Q4_K_M | 697 MB | Solid 4-bit |
| Q4_K_S | 668 MB | Smaller 4-bit |
| Q3_K_L | 606 MB | Lower quality |
| Q3_K_M | 573 MB | Medium 3-bit |
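To fetch one of these quantizations programmatically, huggingface_hub works as usual. The repo id below matches this card; the exact GGUF filename is an assumption, so check the repo's file list:

```python
# Sketch: downloading a GGUF from the Hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="BennyDaBall/LFM2.5-1.2B-Z-Image-Engineer-V4",
    filename="LFM2.5-1.2B-Z-Image-Engineer-V4-Q8_0.gguf",  # assumed filename
)
print(path)  # local cache path, ready for llama.cpp / LM Studio
```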

🎯 Quick Start

With LM Studio:

  1. Download the GGUF of your choice
  2. Load it in LM Studio
  3. Use the ComfyUI node or chat directly

⚠️ Disclaimer

This model generates text for image prompts. While I have filtered the dataset to the best of my ability, users should exercise their own judgment. I am not responsible for the content you generate.


πŸ™ Acknowledgements

  • LiquidAI for the excellent LFM2.5 architecture
  • Qwen Team for the VL model used in dataset creation
  • The open source AI community for making this kind of work possible

Built with ❀️ and liquid courage by BennyDaBall
