How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server (pick a quant tag from the table below, e.g. Q5_K_M):
llama-server -hf PinkPixel/ASCII-Machine-GGUF:Q5_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "ASCII-Machine-GGUF"
        }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
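The llama-server endpoint is OpenAI-compatible, so you can also query it directly without Pi. A minimal sketch using only the standard library (assumes the server from step 1 is running on its default port 8080; the helper names here are illustrative, not part of any API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # default llama-server port; adjust if needed

def chat_payload(prompt, model="ASCII-Machine-GGUF"):
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """Send the prompt to the local llama-server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the server to be running):
# print(ask("Generate an ASCII rocket."))
```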

✨ ASCII Machine GGUF ✨

Quantized versions of PinkPixel/ASCII-Machine


NOTE: This is the first version of this model, and it may not always generate perfect ASCII art. Fine-tuning is still in progress, and updated versions are coming soon.

📦 Model Overview

This repository contains GGUF (llama.cpp) compatible versions of ASCII Machine, a specialized model for ASCII art generation based on Qwen3.5-2B.
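For context, every GGUF file begins with a small fixed binary header that tools like llama.cpp use to identify and load it. A minimal sketch of reading that header (GGUF v2+ layout: 4-byte `GGUF` magic, little-endian uint32 version, uint64 tensor count, uint64 metadata key/value count):

```python
import struct

GGUF_MAGIC = b"GGUF"  # all GGUF files start with these four bytes

def read_gguf_header(path):
    """Return (version, tensor_count, kv_count) from a GGUF file header."""
    with open(path, "rb") as f:
        if f.read(4) != GGUF_MAGIC:
            raise ValueError(f"{path} is not a GGUF file")
        version = struct.unpack("<I", f.read(4))[0]
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
        return version, n_tensors, n_kv
```

This is a quick way to sanity-check a downloaded quant before pointing a runtime at it.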

⚠️ Important Note: Vision Support

ASCII Machine features advanced vision-language capabilities. However, because the Qwen3.5-2B architecture is very new, vision support (mmproj) may not yet be fully functional in current builds of llama.cpp, LM Studio, or Ollama.

We have included the mmproj.gguf files for experimentation, but expect updates as the ecosystem matures.

💾 Available Quantizations

| File Name | Description |
|---|---|
| ascii-machine.BF16.gguf | Original Brain Float 16 precision |
| ascii-machine.F16.gguf | Half-precision Float 16 |
| ascii-machine.Q8_0.gguf | 8-bit quantization (High quality, large) |
| ascii-machine.Q6_K.gguf | 6-bit quantization (Excellent balance) |
| ascii-machine.Q5_K_M.gguf | 5-bit quantization (Recommended) |
| ascii-machine.Q4_K_M.gguf | 4-bit quantization (Standard) |
| ascii-machine.Q3_K_M.gguf | 3-bit quantization (Small) |
| ascii-machine.Q2_K_L.gguf | 2-bit quantization (Extreme compression) |
| ascii-machine.BF16-mmproj.gguf | Vision adapter (Experimental) |

💬 Example Usage

User:

Generate an ASCII rocket.

ASCII Machine:

   /\
  |  |
  |  |
 /____\
 [____]
  |  |
  |  |
 /_||_\

Made with ❤️ by Pink Pixel
"Dream it, Pixel it"
