How to use with llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf PinkPixel/ASCII-Machine-GGUF
# Run inference directly in the terminal
# (optionally append a quantization tag, e.g. :Q4_K_M):
llama-cli -hf PinkPixel/ASCII-Machine-GGUF
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf PinkPixel/ASCII-Machine-GGUF
# Run inference directly in the terminal:
llama-cli -hf PinkPixel/ASCII-Machine-GGUF
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf PinkPixel/ASCII-Machine-GGUF
# Run inference directly in the terminal:
./llama-cli -hf PinkPixel/ASCII-Machine-GGUF
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf PinkPixel/ASCII-Machine-GGUF
# Run inference directly in the terminal:
./build/bin/llama-cli -hf PinkPixel/ASCII-Machine-GGUF
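Beyond the default invocation, `llama-cli` accepts one-shot prompting flags such as `-p` (prompt) and `-n` (tokens to generate). A minimal sketch of assembling such a command from Python, assuming the build paths from this section; the helper name is illustrative, not part of llama.cpp:

```python
import subprocess

def llama_cli_args(repo: str, quant: str, prompt: str, n_predict: int = 256):
    """Argument list for a one-shot llama-cli run.

    repo and quant follow the Hugging Face `-hf user/repo:QUANT` convention;
    quant tags such as Q5_K_M correspond to the files in this repository.
    """
    return [
        "./build/bin/llama-cli",
        "-hf", f"{repo}:{quant}",
        "-p", prompt,           # one-shot prompt
        "-n", str(n_predict),   # max tokens to generate
    ]

# Requires a completed build; run manually:
# subprocess.run(llama_cli_args("PinkPixel/ASCII-Machine-GGUF", "Q5_K_M",
#                               "Generate an ASCII rocket."))
```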
Use Docker
docker model run hf.co/PinkPixel/ASCII-Machine-GGUF
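Whichever way `llama-server` is started above, it exposes an OpenAI-compatible HTTP API. A stdlib-only sketch of querying it, assuming the server's default port 8080 and the standard `/v1/chat/completions` route; the function names here are illustrative:

```python
import json
import urllib.request

def chat_body(prompt: str) -> dict:
    """OpenAI-style chat payload understood by llama-server."""
    return {"messages": [{"role": "user", "content": prompt}]}

def ask_ascii_machine(prompt: str, host: str = "http://localhost:8080") -> str:
    """POST a chat completion request and return the reply text."""
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(chat_body(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Requires a running server; run manually:
# print(ask_ascii_machine("Generate an ASCII rocket."))
```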

✨ ASCII Machine GGUF ✨

Quantized versions of PinkPixel/ASCII-Machine


📦 Model Overview

This repository contains GGUF versions of ASCII Machine for use with llama.cpp and compatible runtimes. ASCII Machine is a specialized model for ASCII art generation, based on Qwen3.5-2B.

⚠️ Important Note: Vision Support

ASCII Machine features advanced vision-language capabilities. However, because the Qwen3.5-2B architecture is very new, vision support (mmproj) may not yet be fully functional in current versions of llama.cpp, LM Studio, or Ollama.

We have included the mmproj.gguf files for experimentation, but expect updates as the ecosystem matures.

💾 Available Quantizations

| File Name                      | Description                              |
|--------------------------------|------------------------------------------|
| ascii-machine.BF16.gguf        | Original Brain Float 16 precision        |
| ascii-machine.F16.gguf         | Half-precision Float 16                  |
| ascii-machine.Q8_0.gguf        | 8-bit quantization (High quality, large) |
| ascii-machine.Q6_K.gguf        | 6-bit quantization (Excellent balance)   |
| ascii-machine.Q5_K_M.gguf      | 5-bit quantization (Recommended)         |
| ascii-machine.Q4_K_M.gguf      | 4-bit quantization (Standard)            |
| ascii-machine.Q3_K_M.gguf      | 3-bit quantization (Small)               |
| ascii-machine.Q2_K_L.gguf      | 2-bit quantization (Extreme compression) |
| ascii-machine.BF16-mmproj.gguf | Vision adapter (Experimental)            |
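As a rule of thumb, each quantization's on-disk size scales with its bits per weight. A back-of-the-envelope sketch (real GGUF files run somewhat larger because of metadata and mixed-precision tensors, so treat this as a lower bound):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Lower-bound size estimate: params * bits / 8 bytes, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# For the 2B-parameter ASCII Machine:
print(approx_gguf_size_gb(2e9, 16))   # F16/BF16 -> 4.0
print(approx_gguf_size_gb(2e9, 8.5))  # Q8_0 stores ~8.5 bits per weight -> 2.125
```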

💬 Example Usage

User:

Generate an ASCII rocket.

ASCII Machine:

   /\
  |  |
  |  |
 /____\
 [____]
  |  |
  |  |
 /_||_\

Made with ❤️ by Pink Pixel
"Dream it, Pixel it"
Downloads last month: 455
Format: GGUF
Model size: 2B params
Architecture: qwen35
Model tree for PinkPixel/ASCII-Machine-GGUF
Base model: Qwen/Qwen3.5-2B (finetuned)
Quantized: this model