EditorAI

The main EditorAI model: a fine-tuned TinyLlama-1.1B-Chat that generates Geometry Dash levels as JSON with blocks, spikes, platforms, triggers, groups, color channels, and more.

Part of the EditorAI project, an AI-powered level generator mod for Geometry Dash.

About EditorAI

EditorAI is a Geode mod for Geometry Dash that lets you describe a level in plain text and have AI build it in the editor. It supports 8 AI providers (Gemini, Claude, OpenAI, Mistral, HuggingFace, Ollama, LM Studio, llama.cpp) and features blueprint preview, feedback learning, 15+ trigger types, and an in-game settings UI.

Model Details

  • Base model: TinyLlama-1.1B-Chat-v1.0
  • Training: QLoRA (4-bit, rank 8) on 2368 examples (368 expert-crafted + 2000 synthetic), 2 epochs
  • Features: Blocks, spikes, platforms, color triggers, move triggers, alpha triggers, rotate triggers, toggle triggers, pulse triggers, spawn triggers, stop triggers, speed portals, groups, color channels
  • GGUF quantization: q4_k_m (637 MB)
  • Tested: 8/8 tests passed, generates all trigger types

Files

| File | Size | Description |
|------|------|-------------|
| model.safetensors | 2.1 GB | Merged fp16 model weights |
| editorai-latest.gguf | 637 MB | Quantized GGUF (q4_k_m) for llama.cpp / LM Studio |
| config.json | - | Model architecture config |
| tokenizer.json | - | Tokenizer |

Setup

This model uses the Zephyr/ChatML chat template and works best with the following system prompt:

You are a Geometry Dash level designer. Return ONLY valid JSON with an analysis string and objects array. Each object needs type, x, y. Y >= 0. X uses 10 units per grid cell.

Recommended: Use the Ollama version (entity12208/editorai:latest) which has the system prompt and template pre-configured.
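For raw-completion backends that do not apply a chat template for you, the prompt can be rendered by hand. A minimal sketch, assuming the Zephyr-style special tokens published with the TinyLlama-1.1B-Chat base model (the exact token strings are an assumption, not something this card specifies):

```python
SYSTEM_PROMPT = (
    "You are a Geometry Dash level designer. Return ONLY valid JSON with an "
    "analysis string and objects array. Each object needs type, x, y. Y >= 0. "
    "X uses 10 units per grid cell."
)

def render_zephyr(system: str, user: str) -> str:
    # Zephyr-style chat format used by TinyLlama-1.1B-Chat:
    # <|system|> ... </s> <|user|> ... </s> <|assistant|>
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"

prompt = render_zephyr(SYSTEM_PROMPT, "a short cube section with spikes")
```

If your backend already applies the model's chat template (as the llama.cpp and LM Studio setups below do), skip this and send plain chat messages instead.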

Usage with llama.cpp

wget https://huggingface.co/EditorAI-Geode/EditorAI/resolve/main/editorai-latest.gguf

llama-server -m editorai-latest.gguf --port 8080 --chat-template chatml

# In the EditorAI mod: set provider to "llama-cpp", URL to http://localhost:8080
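Outside the mod, the same server can be queried directly: llama-server exposes an OpenAI-compatible chat endpoint, and LM Studio's server (port 1234) accepts the same request shape. A minimal Python sketch using only the standard library (the temperature value is an illustrative choice, not a documented recommendation):

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are a Geometry Dash level designer. Return ONLY valid JSON with an "
    "analysis string and objects array. Each object needs type, x, y. Y >= 0. "
    "X uses 10 units per grid cell."
)

def build_request(prompt: str,
                  url: str = "http://localhost:8080/v1/chat/completions"):
    # OpenAI-style chat-completion payload understood by llama-server
    # (and by LM Studio at http://localhost:1234/v1/chat/completions).
    payload = {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With llama-server running:
# with urllib.request.urlopen(build_request("a short cube section")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```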

Usage with LM Studio

  1. Download editorai-latest.gguf from this repo
  2. Load it in LM Studio, set Prompt Format to ChatML
  3. Set the System Prompt to the prompt above
  4. Start the server
  5. In the EditorAI mod: set provider to "lm-studio", URL to http://localhost:1234

Usage with Ollama (recommended)

ollama pull entity12208/editorai:latest

In the EditorAI mod: set provider to "ollama" and select entity12208/editorai:latest.
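Ollama can also be called directly over its REST API. Since the Ollama version ships with the system prompt and template pre-configured, only the user message is needed; `format: "json"` additionally asks Ollama to constrain the reply to valid JSON. A minimal sketch using the standard library:

```python
import json
import urllib.request

def build_ollama_request(prompt: str,
                         url: str = "http://localhost:11434/api/chat"):
    # Ollama's native chat endpoint; the modelfile already carries the
    # EditorAI system prompt, so no system message is sent here.
    payload = {
        "model": "entity12208/editorai:latest",
        "messages": [{"role": "user", "content": prompt}],
        "format": "json",   # constrain output to valid JSON
        "stream": False,    # return one complete response body
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With Ollama running:
# with urllib.request.urlopen(build_ollama_request("a wave corridor")) as resp:
#     print(json.load(resp)["message"]["content"])
```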

Output Format

{
  "analysis": "A medium modern level with color transitions and moving platforms.",
  "objects": [
    {"type": "block_black_gradient_square", "x": 0, "y": 0, "color_channel": 10},
    {"type": "spike_black_gradient_spike", "x": 50, "y": 0},
    {"type": "color_trigger", "x": 80, "y": 0, "color_channel": 1, "color": "#0066FF", "duration": 1.5},
    {"type": "move_trigger", "x": 90, "y": 0, "target_group": 1, "move_x": 0, "move_y": 20, "duration": 1.0, "easing": 1},
    {"type": "end_trigger", "x": 400, "y": 0}
  ]
}
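A 1.1B model can occasionally emit malformed output, so it is worth checking the contract above (analysis string, objects array, each object carrying type/x/y with y >= 0) before placing anything in the editor. A hypothetical validator sketch (the function name is illustrative, not part of the mod's API):

```python
import json

def validate_level(raw: str) -> list:
    """Parse model output and check the documented EditorAI contract:
    a top-level "analysis" string and an "objects" array where every
    object has type, x, y and y >= 0 (x is in 10-units-per-cell space)."""
    data = json.loads(raw)
    if not isinstance(data.get("analysis"), str):
        raise ValueError("missing 'analysis' string")
    objects = data.get("objects")
    if not isinstance(objects, list):
        raise ValueError("missing 'objects' array")
    for i, obj in enumerate(objects):
        for key in ("type", "x", "y"):
            if key not in obj:
                raise ValueError(f"object {i} missing '{key}'")
        if obj["y"] < 0:
            raise ValueError(f"object {i} has y < 0")
    return objects
```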

Model Comparison

| Model | Size | Triggers | Quality | Speed |
|-------|------|----------|---------|-------|
| editorai:latest (this model) | 637 MB | All types | Best | ~15-30 s |
| editorai:mini | 379 MB | Color, move | Good | ~10-20 s |

License

Apache 2.0
