Upload README.md with huggingface_hub
README.md CHANGED
@@ -35,14 +35,16 @@ Part of the [EditorAI](https://github.com/Entity12208/EditorAI) project — an A
 | `config.json` | — | Model architecture config |
 | `tokenizer.json` | — | Tokenizer |
 
+## Chat Template
+
+This model uses the **ChatML** format. Set your chat template to `chatml` in llama.cpp or LM Studio.
+
 ## Usage with llama.cpp
 
 ```bash
-# Download the GGUF
 wget https://huggingface.co/EditorAI-Geode/editorai-mini/resolve/main/editorai-mini.gguf
 
-
-llama-server -m editorai-mini.gguf --port 8080
+llama-server -m editorai-mini.gguf --port 8080 --chat-template chatml
 
 # In the EditorAI mod: set provider to "llama-cpp", URL to http://localhost:8080
 ```
@@ -50,25 +52,20 @@ llama-server -m editorai-mini.gguf --port 8080
 ## Usage with LM Studio
 
 1. Download `editorai-mini.gguf` from this repo
-2.
-3.
+2. Load it in LM Studio, set **Prompt Format** to **ChatML**
+3. Start the server
 4. In the EditorAI mod: set provider to "lm-studio", URL to `http://localhost:1234`
 
 ## Usage with Ollama
 
-This model is also available on Ollama:
-
 ```bash
 ollama pull entity12208/editorai:mini
-ollama run entity12208/editorai:mini
 ```
 
 In the EditorAI mod: set provider to "ollama" and select `entity12208/editorai:mini`.
 
 ## Output Format
 
-The model generates JSON in this format:
-
 ```json
 {
   "analysis": "A medium modern level with color transitions and moving platforms.",
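
For readers unfamiliar with it, the ChatML template that the new README points to wraps each turn in `<|im_start|>` / `<|im_end|>` markers; llama.cpp's `--chat-template chatml` and LM Studio's ChatML prompt format build this string for you. The sketch below only prints what such a prompt looks like, and the system and user wording is made up for illustration, not taken from the model card:

```bash
# Illustrative only: the shape of a ChatML prompt after the template is applied.
# The system/user text here is invented for the example.
cat <<'EOF'
<|im_start|>system
You are a level-design assistant.<|im_end|>
<|im_start|>user
Describe a medium modern level.<|im_end|>
<|im_start|>assistant
EOF
```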
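Once `llama-server` is running as in the diff, it exposes an OpenAI-compatible HTTP API on the chosen port, which is what the EditorAI mod talks to. A quick sanity check from the command line might look like this; the prompt text is illustrative, and the exact fields the mod sends are not documented in this README:

```bash
# Smoke-test the local llama-server before pointing the EditorAI mod at it.
# /v1/chat/completions is llama-server's OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Describe a medium modern level."}
        ],
        "temperature": 0.7
      }'
```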
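LM Studio's local server speaks the same OpenAI-style chat API once the model is loaded and the server is started, so the mod's `http://localhost:1234` setting can be checked the same way (prompt again illustrative):

```bash
# Same request shape as above, aimed at LM Studio's default port.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Describe a medium modern level."}]}'
```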
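For the Ollama route, the `ollama pull` in the diff is all the mod needs, but the model can also be exercised directly, either interactively or through Ollama's local HTTP API on its default port 11434 (prompt illustrative):

```bash
# Try the model interactively from a terminal...
ollama run entity12208/editorai:mini "Describe a medium modern level."

# ...or call Ollama's local HTTP API directly (default port 11434).
curl http://localhost:11434/api/generate \
  -d '{
        "model": "entity12208/editorai:mini",
        "prompt": "Describe a medium modern level.",
        "stream": false
      }'
```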