EditorAI Mini

A fine-tuned Qwen2.5-0.5B-Instruct model that generates Geometry Dash levels as JSON, with blocks, spikes, platforms, triggers, groups, color channels, and more.

Part of the EditorAI project, an AI-powered level generator mod for Geometry Dash.

About EditorAI

EditorAI is a Geode mod for Geometry Dash that lets you describe a level in plain text and have AI build it in the editor. It supports 8 AI providers (Gemini, Claude, OpenAI, Mistral, HuggingFace, Ollama, LM Studio, llama.cpp) and features blueprint preview, feedback learning, 15+ trigger types, and an in-game settings UI.

Model Details

  • Base model: Qwen2.5-0.5B-Instruct
  • Training: QLoRA (4-bit, rank 8) on hand-crafted expert GD level examples
  • Features: Blocks, spikes, platforms, color triggers, move triggers, alpha triggers, rotate triggers, toggle triggers, pulse triggers, speed portals, groups, color channels
  • GGUF quantization: q4_k_m (379 MB)

Files

File                 Size     Description
model.safetensors    943 MB   Merged fp16 model weights
editorai-mini.gguf   379 MB   Quantized GGUF (q4_k_m) for llama.cpp / LM Studio
config.json          -        Model architecture config
tokenizer.json       -        Tokenizer

Setup

This model uses the ChatML chat template and works best with the following system prompt:

You are a Geometry Dash level designer. Return ONLY valid JSON with an analysis string and objects array. Each object needs type, x, y. Y >= 0. X uses 10 units per grid cell.

Recommended: Use the Ollama version (entity12208/editorai:mini) which has the system prompt and template pre-configured. The raw GGUF requires manual setup.
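With any OpenAI-style chat API, the system prompt above goes in as the first message. A minimal sketch of the message list (the user prompt is a made-up placeholder):

```python
# The system prompt is quoted verbatim from this card; the user
# prompt is an example, not a required format.
SYSTEM_PROMPT = (
    "You are a Geometry Dash level designer. Return ONLY valid JSON "
    "with an analysis string and objects array. Each object needs "
    "type, x, y. Y >= 0. X uses 10 units per grid cell."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Pair the EditorAI system prompt with a user request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("A short easy level with a few spikes.")
```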

Usage with llama.cpp

wget https://huggingface.co/EditorAI-Geode/editorai-mini/resolve/main/editorai-mini.gguf

llama-server -m editorai-mini.gguf --port 8080 --chat-template chatml

# In the EditorAI mod: set provider to "llama-cpp", URL to http://localhost:8080
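llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint, so you can also query the model directly instead of going through the mod. A standard-library sketch, assuming the default port from the command above; the temperature value is a hypothetical choice, and the network call is wrapped so the script degrades gracefully when no server is running:

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are a Geometry Dash level designer. Return ONLY valid JSON "
    "with an analysis string and objects array. Each object needs "
    "type, x, y. Y >= 0. X uses 10 units per grid cell."
)

def build_payload(user_prompt: str) -> dict:
    # temperature 0.7 is an illustrative choice, not a value from this card
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

payload = build_payload("A medium level with moving platforms.")

try:
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
except OSError as exc:  # connection refused, timeout, etc.
    print(f"llama-server not reachable: {exc}")
```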

Usage with LM Studio

  1. Download editorai-mini.gguf from this repo
  2. Load it in LM Studio, set Prompt Format to ChatML
  3. Set the System Prompt to the prompt above
  4. Start the server
  5. In the EditorAI mod: set provider to "lm-studio", URL to http://localhost:1234
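LM Studio's local server is also OpenAI-compatible, so once step 4 is done you can sanity-check it from Python before pointing the mod at it. A sketch, assuming the default port 1234 from the steps above and the standard OpenAI-style /v1/models listing route:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234"

def models_url(base: str) -> str:
    """Build the endpoint that lists the models the local server has loaded."""
    return base.rstrip("/") + "/v1/models"

try:
    with urllib.request.urlopen(models_url(BASE_URL), timeout=5) as resp:
        listing = json.load(resp)
    # Print the ids of whatever models are loaded
    print([m["id"] for m in listing.get("data", [])])
except OSError as exc:  # server not started yet
    print(f"LM Studio server not reachable: {exc}")
```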

Usage with Ollama (recommended)

ollama pull entity12208/editorai:mini

In the EditorAI mod: set provider to "ollama" and select entity12208/editorai:mini.
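Ollama also serves a local REST API (default port 11434), so the pulled model can be queried outside the mod as well. A sketch against the /api/chat endpoint with streaming disabled; since the Ollama version ships with the system prompt pre-configured, only a user message is needed, and the prompt text here is a placeholder:

```python
import json
import urllib.request

def build_ollama_request(user_prompt: str) -> dict:
    """Request body for Ollama's /api/chat; stream=False returns one JSON object."""
    return {
        "model": "entity12208/editorai:mini",
        "messages": [{"role": "user", "content": user_prompt}],
        "stream": False,
    }

body = build_ollama_request("A hard level with lots of spikes.")

try:
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.load(resp)["message"]["content"])
except OSError as exc:  # Ollama not running
    print(f"Ollama not reachable: {exc}")
```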

Output Format

{
  "analysis": "A medium modern level with color transitions and moving platforms.",
  "objects": [
    {"type": "block_black_gradient_square", "x": 0, "y": 0, "color_channel": 10},
    {"type": "spike_black_gradient_spike", "x": 50, "y": 0},
    {"type": "color_trigger", "x": 80, "y": 0, "color_channel": 1, "color": "#0066FF", "duration": 1.5},
    {"type": "move_trigger", "x": 90, "y": 0, "target_group": 1, "move_x": 0, "move_y": 20, "duration": 1.0, "easing": 1},
    {"type": "end_trigger", "x": 400, "y": 0}
  ]
}
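Before placing objects in the editor, a client should check that the model's reply actually satisfies the contract above: an analysis string, an objects array, type/x/y on every object, and y >= 0. A minimal validator sketch (the function name and sample level are illustrative):

```python
import json

def validate_level(raw: str) -> list[dict]:
    """Parse a generated level and enforce the card's output contract.

    Raises ValueError if the JSON is missing required fields.
    """
    level = json.loads(raw)
    if not isinstance(level.get("analysis"), str):
        raise ValueError("missing analysis string")
    objects = level.get("objects")
    if not isinstance(objects, list):
        raise ValueError("missing objects array")
    for i, obj in enumerate(objects):
        for key in ("type", "x", "y"):
            if key not in obj:
                raise ValueError(f"object {i} missing {key!r}")
        if obj["y"] < 0:
            raise ValueError(f"object {i} has y < 0")
    return objects

sample = (
    '{"analysis": "tiny demo", '
    '"objects": [{"type": "spike_black_gradient_spike", "x": 50, "y": 0}]}'
)
objs = validate_level(sample)
```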

License

Apache 2.0
