Entity12208 committed · Commit 4741066 · verified · 1 parent: 8c11759

Upload README.md with huggingface_hub

Files changed (1): README.md (+12 −5)
README.md CHANGED
@@ -35,9 +35,15 @@ Part of the [EditorAI](https://github.com/Entity12208/EditorAI) project — an A
 | `config.json` | — | Model architecture config |
 | `tokenizer.json` | — | Tokenizer |
 
-## Chat Template
-
-This model uses the **ChatML** format. Set your chat template to `chatml` in llama.cpp or LM Studio.
+## Setup
+
+This model uses the **ChatML** chat template and works best with the following system prompt:
+
+```
+You are a Geometry Dash level designer. Return ONLY valid JSON with an analysis string and objects array. Each object needs type, x, y. Y >= 0. X uses 10 units per grid cell.
+```
+
+> **Recommended:** Use the Ollama version (`entity12208/editorai:mini`), which has the system prompt and template pre-configured. The raw GGUF requires manual setup.
 
 ## Usage with llama.cpp
 
@@ -53,10 +59,11 @@ llama-server -m editorai-mini.gguf --port 8080 --chat-template chatml
 
 1. Download `editorai-mini.gguf` from this repo
 2. Load it in LM Studio, set **Prompt Format** to **ChatML**
-3. Start the server
-4. In the EditorAI mod: set provider to "lm-studio", URL to `http://localhost:1234`
+3. Set the **System Prompt** to the prompt above
+4. Start the server
+5. In the EditorAI mod: set provider to "lm-studio", URL to `http://localhost:1234`
 
-## Usage with Ollama
+## Usage with Ollama (recommended)
 
 ```bash
 ollama pull entity12208/editorai:mini
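
The commit recommends the Ollama build because the prompt and template arrive pre-configured. As a sketch of how that baking-in is typically done (this is not the published Modelfile, just an assumed reconstruction from the raw GGUF), an Ollama Modelfile could look like:

```
FROM ./editorai-mini.gguf

# Bake in the system prompt so clients don't have to set it themselves
SYSTEM """You are a Geometry Dash level designer. Return ONLY valid JSON with an analysis string and objects array. Each object needs type, x, y. Y >= 0. X uses 10 units per grid cell."""

# ChatML layout, matching --chat-template chatml in llama.cpp
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

Building it locally would then be `ollama create editorai-mini -f Modelfile`, after which `ollama run` needs no extra setup.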
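
The system prompt added by this commit pins down an output contract: a JSON object with an `analysis` string and an `objects` array, where each object carries `type`, `x`, `y`, with `y >= 0` and `x` laid out at 10 units per grid cell. As an illustration only (the object `type` names and coordinates below are hypothetical, not taken from the model or the EditorAI mod), a conforming reply and a minimal checker might look like:

```python
import json

# Hypothetical reply matching the contract stated in the system prompt.
# The "block"/"spike" type names and the coordinates are made up for
# illustration; only the field layout comes from the prompt itself.
reply = json.loads("""
{
  "analysis": "A short runway of blocks ending in a single spike jump.",
  "objects": [
    {"type": "block", "x": 0,  "y": 0},
    {"type": "block", "x": 10, "y": 0},
    {"type": "spike", "x": 20, "y": 0},
    {"type": "block", "x": 30, "y": 0}
  ]
}
""")

def is_valid(reply: dict) -> bool:
    """Check a parsed reply against the constraints in the system prompt."""
    if not isinstance(reply.get("analysis"), str):
        return False
    objects = reply.get("objects")
    if not isinstance(objects, list):
        return False
    for obj in objects:
        if not isinstance(obj, dict) or not isinstance(obj.get("type"), str):
            return False
        if not all(isinstance(obj.get(k), (int, float)) for k in ("x", "y")):
            return False
        if obj["y"] < 0:  # the prompt requires Y >= 0
            return False
    return True

print(is_valid(reply))  # True
```

A check like this is useful on the client side because small quantized models occasionally drift from strict JSON.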