
# LFM2-8B-A1B (UV Edition)

Liquid AI's 8B model - zero setup, just works! 🚀

## ⚡ Setup (10 seconds)

```bash
# Install UV (one time only)
bash setup_uv.sh
```

## 🎯 Usage (NO VENV NEEDED!)

```bash
# Run inference
uv run python run.py "What is machine learning?"

# Train LoRA
uv run python train_simple.py

# Debug model
uv run python debug/debug_all.py
```

That's it! No venv, no activation, no pip install. UV handles everything automatically.

πŸ”₯ Why UV?

  • No venv needed - UV manages everything
  • 100x faster than pip
  • Zero conflicts - automatic dependency resolution
  • Just works - run any Python file with uv run
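The "no venv" workflow isn't magic: `uv run` can read PEP 723 inline script metadata and build a cached, ephemeral environment on the fly. A minimal sketch (the header below is illustrative; this repo's scripts may rely on pyproject.toml instead):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []   # e.g. ["mlx-lm"] for a real inference script
# ///
# With this PEP 723 header, `uv run hello.py` resolves the listed
# dependencies into a throwaway environment -- no venv to manage.
MESSAGE = "hello from uv"
print(MESSAGE)
```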

πŸ“ Clean Project Structure

β”œβ”€β”€ run.py                 # Inference (1 file, 30 lines)
β”œβ”€β”€ train_simple.py        # Training (1 file, 40 lines)
β”œβ”€β”€ inspect_safetensors.py # Weight inspector/peek tool
β”œβ”€β”€ setup_uv.sh            # UV installer
β”œβ”€β”€ pyproject.toml         # Dependencies
β”œβ”€β”€ models/                # 8B model (4.7GB)
β”œβ”€β”€ train/                 # Data scripts
β”‚   β”œβ”€β”€ download_data.py
β”‚   └── prepare_data.py
└── debug/                 # Model debugging tools
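The pyproject.toml that drives `uv run` might look roughly like this (package names and pins are assumptions, not the repo's actual dependency list):

```toml
[project]
name = "lfm2-8b-a1b-uv"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    "mlx-lm",       # MLX inference/training for Apple Silicon (assumed)
    "safetensors",  # weight inspection (assumed)
]
```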

## 🎯 Training Examples

```bash
# Download training data
uv run python train/download_data.py

# Train LoRA adapter
uv run python train_simple.py

# Debug model internals
uv run python debug/debug_all.py
```
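LoRA itself is a small amount of math: rather than updating a full weight matrix W, you train a low-rank pair B·A and add the scaled product to W. A pure-Python sketch of that update (illustrative only, not the actual train_simple.py code):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, alpha=16, r=2):
    """Return W + (alpha / r) * B @ A, the LoRA-adjusted weights.

    A is (r x d_in) and B is (d_out x r); only A and B are trained,
    so the update has r*(d_in + d_out) params instead of d_in*d_out.
    """
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```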

πŸ” Safetensors Inspector (Peek Tool)

Deep inspection tool for .safetensors files - like dotPeek but for ML models!

# Quick inspection
uv run python inspect_safetensors.py your_model.safetensors

# Interactive mode
uv run python inspect_safetensors.py

πŸ“– Full documentation: README_INSPECTOR.md

Features: tensor analysis, weight visualization, checkpoint comparison, NumPy export, bfloat16 support, and more!
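For reference, the safetensors on-disk format is simple: an 8-byte little-endian header length, then that many bytes of JSON describing each tensor's dtype, shape, and data offsets. A standard-library-only reader (function names here are illustrative, not inspect_safetensors.py's actual API):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file as a dict."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

def summarize(header):
    """Yield (name, dtype, shape) for every tensor in the header."""
    for name, info in header.items():
        if name == "__metadata__":  # optional free-form metadata block
            continue
        yield name, info["dtype"], info["shape"]
```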

## Model Info

- Size: 4.7GB on disk (quantized)
- Memory: ~5-6GB RAM at inference
- Speed: 70-80 tokens/sec on Apple M1/M2/M3
- Architecture: Mixture-of-Experts (MoE) with mixed 4-bit/8-bit quantization
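The 4.7GB figure is consistent with back-of-envelope math: 8B parameters at an effective ~4.7 bits per parameter (4-bit weights plus quantization scales, with some layers kept at 8-bit) lands right at 4.7GB. The exact bit budget here is an assumption:

```python
def quantized_size_gb(n_params, bits_per_param):
    """Approximate on-disk size: params * bits / 8 bits-per-byte / 1e9."""
    return n_params * bits_per_param / 8 / 1e9

# ~4.7 effective bits/param is an assumed blend of 4-bit and 8-bit layers
print(round(quantized_size_gb(8e9, 4.7), 1))  # -> 4.7
```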

## Files

```
├── run.py                    # Main inference script
├── models/
│   └── LFM2-8B-A1B-mlx/     # Quantized model (4.7GB)
└── train/                    # Training scripts
    ├── train_dense_lora.py   # Dense LoRA training
    └── README.md             # Training guide
```

## Requirements

- Apple Silicon Mac (M1/M2/M3) recommended
- 16GB+ RAM
- Python 3.9+
- MLX framework
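The requirements above can be checked programmatically before a long download or training run; a small preflight sketch (the function name is illustrative, not part of this repo):

```python
import platform
import sys

def preflight():
    """Return a list of warnings for unmet recommendations."""
    warnings = []
    if sys.version_info < (3, 9):
        warnings.append("Python 3.9+ is required")
    if (platform.system(), platform.machine()) != ("Darwin", "arm64"):
        warnings.append("Apple Silicon Mac (M1/M2/M3) is recommended for MLX")
    return warnings
```

Running it first lets a script fail fast instead of partway through pulling a 4.7GB model.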

Model: LiquidAI/LFM2-8B-A1B
