---
datasets:
- Sweaterdog/Andy-4-base-2
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit
tags:
- gaming
- minecraft
- mindcraft
---

# 🤏 Andy‑4‑micro 🧠

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/NFDS9XMy9YMM8KSn1Jjcd.png)

**Andy‑4‑micro** is a lightweight Minecraft-tuned AI model derived from the Andy‑4 architecture. Built for responsiveness and portability, it’s ideal for local testing, light inference, and experimentation within the **Mindcraft** framework.

**The current version of Andy-4-micro is `Andy-4-micro-0516`.** All previous versions of Andy-4-micro can still be found on my Hugging Face page.

> 💡 Trained on a **single RTX 3070** over **four days**, Andy‑4‑micro maintains strong performance while staying efficient.

> ⚠️ **Certification:**  
> Andy‑4‑micro is **not yet certified** by the Mindcraft developers. Use in production at your own discretion.

---

## 📊 Model Overview

- **Base Architecture:** Qwen 2.5
- **Parameter Count:** 1.5 B
- **Training Duration:** ~4 days
- **Training GPU:** 1 × NVIDIA RTX 3070
- **Total Tokens Used:** ~42M
- **License:** [Andy 1.1 License](LICENSE)
- **Repository:** https://huggingface.co/Sweaterdog/Andy-4-micro

---

## 🚀 Installation

First, choose a quantization. The chart below assumes a context window of `8192`.

| Quantization | VRAM Required |
|--------------|---------------|
| F16          | 5 GB          |
| Q8_0         | 3 GB+         |
| Q5_K_M       | 2 GB+         |
| Q3_K_M       | 1 GB or CPU   |

**NOTE:** GPUs made before 2017 will run *significantly slower* than newer GPUs, and CPU inference will be extremely slow.

### 1. Installation directly on Ollama

1. Visit [Andy-4 on Ollama](https://ollama.com/Sweaterdog/Andy-4)
2. Copy the command after choosing model type / quantization
3. Run the command in the terminal
4. Set the profile's model to be what you installed, such as `ollama/sweaterdog/andy-4:latest`
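As a sketch, steps 2–4 might look like the following. The tag and profile path are examples, not guaranteed values; substitute the exact command shown on the Ollama page and your own Mindcraft profile file:

```bash
# Pull the model (example tag; copy the real one from the Ollama page)
ollama pull sweaterdog/andy-4:latest

# Then point your Mindcraft profile at it, e.g. in your profile JSON:
#   "model": "ollama/sweaterdog/andy-4:latest"
```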

### 2. Manual Download & Setup

1. **Download**
   - Visit the Hugging Face **Files** tab.
   - Download the `.GGUF` quantization weights (e.g. `Andy-4-micro.Q4_K_M.gguf`).
   - Grab the provided `Modelfile`.

2. **Edit `Modelfile`**

Change the path placeholder:

```text
FROM YOUR/PATH/HERE
```

to:

```text
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
```

*Optional*: Adjust `num_ctx` for longer context windows if your system supports it.
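Putting both steps together, a minimal `Modelfile` with an adjusted context window might look like this (the path and `num_ctx` value are placeholders for illustration):

```text
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
PARAMETER num_ctx 8192
```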

3. **Create Model**

```bash
ollama create andy-4-micro -f Modelfile
```

This registers Andy‑4‑micro locally with Ollama.
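Once created, you can sanity-check the model from the terminal before wiring it into Mindcraft (the prompt here is just an example):

```bash
# Chat with the model interactively
ollama run andy-4-micro

# Or send a single prompt and exit
ollama run andy-4-micro "How do I craft a furnace?"
```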

---

If you lack a GPU, check the [Mindcraft Discord guide](https://ptb.discord.com/channels/1303399789995626667/1347027684768878644/1347027684768878644) for free cloud setups.


## 🔧 Context‑Window Quantization

To lower VRAM use for context windows:

#### **Windows**

1. Close Ollama.  
2. In **System Properties → Environment Variables**, add:  
   ```text
   OLLAMA_FLASH_ATTENTION=1  
   OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far more unstable
   ```  
3. Restart Ollama.

#### **Linux/macOS**

```bash
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"   # or "q4_0", but far more unstable
ollama serve
```

---

## 📌 Acknowledgments

<details>
<summary>Click to expand</summary>

- **Data & Model by:** @Sweaterdog  
- **Framework:** Mindcraft (https://github.com/kolbytn/mindcraft)  
- **LoRA Weights:** https://huggingface.co/Sweaterdog/Andy-4-micro-LoRA

</details>

---

## ⚖️ License

See the [Andy 1.1 License](LICENSE).  

*This work uses data and models created by @Sweaterdog.*