Toadaid committed
Commit 1fd72d1 · verified · 1 parent: 4d44e38

Upload 8 files

Files changed (8)
  1. Dockerfile +34 -0
  2. LICENSE +21 -0
  3. README.md +223 -3
  4. mirror_pond.py +0 -0
  5. pond_identity_ed25519.json +6 -0
  6. requirements.txt +24 -0
  7. setup.ps1 +73 -0
  8. setup.sh +65 -0
Dockerfile ADDED
@@ -0,0 +1,34 @@
+ # Mirror Pond - Dockerfile
+ # Build:
+ #   docker build -t mirror-pond:latest .
+ #
+ # Run (with model mounted from host):
+ #   docker run --rm -p 7777:7777 \
+ #     -v /path/to/models:/models \
+ #     -e MODEL_PATH=/models/your_model.gguf \
+ #     mirror-pond:latest
+
+ FROM python:3.11-slim
+
+ ENV PYTHONUNBUFFERED=1 \
+     PIP_NO_CACHE_DIR=1 \
+     MODEL_PATH=/models/your_model.gguf \
+     PORT=7777
+
+ WORKDIR /app
+
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     build-essential \
+     cmake \
+     libopenblas-dev \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first so the pip layer is cached across code changes
+ COPY requirements.txt ./
+
+ RUN pip install --upgrade pip && \
+     pip install -r requirements.txt
+
+ COPY mirror_pond.py ./
+
+ EXPOSE 7777
+
+ CMD ["sh", "-c", "python mirror_pond.py --model ${MODEL_PATH} --port ${PORT}"]
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 ToadAid
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,3 +1,223 @@
- ---
- license: mit
- ---
+ # 🪞 Mirror Pond — Local GGUF Edition
+
+ *A still-water reflection engine for your local LLM.*
+
+ Mirror Pond is a **100% local**, **privacy-first**, Tobyworld-inspired reflection interface that runs any GGUF model through `llama.cpp` behind a calm Mirror UI. FastAPI + llama.cpp, offline, MIT licensed.
+
+ No cloud.
+ No tracking.
+ Just your pond, your reflection, your machine.
+
+ ---
+
+ ## ✨ Features
+
+ * 🧠 **Runs any GGUF model** — Llama, DeepSeek, Mistral, or your own trained Mirror
+ * 🌑 **Dark, still Mirror UI** (HTML served locally)
+ * 💬 **Four modes**:
+   * **Reflect** — emotional / introspective
+   * **Scroll** — lore / quotes / scripture
+   * **Toad** — cryptic toadgang whispers
+   * **Rune** — symbols, lotus, $PATIENCE, seasons
+ * 🔒 **Fully offline** (air-gap compatible)
+ * ⚡ FastAPI + Uvicorn backend
+ * 🧩 Optional GPU acceleration via llama-cpp-python CUDA wheels
+
+ ---
+
+ ## 🚀 Quickstart
+
+ ### 1. Install dependencies
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Run the Pond
+
+ ```bash
+ python mirror_pond.py --model ./your_model.gguf --port 7777
+ ```
+
+ ### 3. Open in your browser
+
+ ```
+ http://localhost:7777
+ ```
+
+ ---
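Once the Pond is up, the bundled Mirror UI talks to the FastAPI backend over plain local HTTP, so you can also query it programmatically. The sketch below is illustrative only: the endpoint path (`/api/ask`) and the field names (`question`, `mode`) are assumptions, not the actual schema of `mirror_pond.py` (whose diff is too large to render here); check the server source for the real routes.

```python
import json
import urllib.request

# Hypothetical request body; "question" and "mode" are assumed field names.
payload = {"question": "What does still water teach?", "mode": "reflect"}

def ask_pond(base_url: str = "http://localhost:7777") -> dict:
    """POST the payload to a hypothetical /api/ask endpoint and return the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/api/ask",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Since everything runs on localhost, no API keys or TLS setup are involved.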
+ ## 📝 Requirements
+
+ `requirements.txt` (included) pins the full dependency set; highlights:
+
+ ```
+ fastapi==0.124.0
+ uvicorn==0.38.0
+ pydantic==2.12.5
+ Jinja2==3.1.6
+ PyNaCl==1.6.1
+ ```
+
+ Note that `llama-cpp-python` itself is not pinned there — the setup scripts and Dockerfile install it separately so the right CPU or GPU wheel can be chosen.
+
+ ---
+
+ ## 🔥 GPU Acceleration (Optional)
+
+ For NVIDIA CUDA 12.1 (the prebuilt wheel index the setup scripts use):
+
+ ```bash
+ pip install llama-cpp-python \
+   --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+ ```
+
+ For AMD ROCm (source build with HIP enabled):
+
+ ```bash
+ CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python
+ ```
+
+ For Apple Silicon (M1/M2/M3):
+
+ ```bash
+ CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
+ ```
+
+ ---
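The install commands differ only in which wheel index or `CMAKE_ARGS` they use. As a sketch (a hypothetical helper, not part of Mirror Pond), the choice can be made explicit; the cu121 index URL is the one used by the bundled setup scripts:

```python
def llama_cpp_install(backend: str) -> tuple[dict, list[str]]:
    """Return (extra env vars, pip argv) for installing a llama-cpp-python backend."""
    cmd = ["pip", "install", "--force-reinstall", "--no-cache-dir", "llama-cpp-python"]
    if backend == "cuda":
        # Prebuilt CUDA 12.1 wheels -- the same index setup.sh / setup.ps1 use.
        return {}, cmd + [
            "--extra-index-url",
            "https://abetlen.github.io/llama-cpp-python/whl/cu121",
        ]
    if backend == "metal":
        # Source build with Metal enabled (Apple Silicon).
        return {"CMAKE_ARGS": "-DGGML_METAL=on"}, cmd
    # Plain CPU wheel.
    return {}, cmd
```

The env dict is applied to the pip process's environment before running the returned command.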
+ ## 🧱 Folder Structure
+
+ ```
+ mirror-pond/
+ │
+ ├── mirror_pond.py     # main server
+ ├── requirements.txt   # dependencies
+ ├── setup.sh           # Linux/macOS installer
+ ├── setup.ps1          # Windows installer
+ ├── Dockerfile         # container build
+ └── README.md          # this file
+ ```
+
+ ---
+
+ ## 🧪 Installation Kits
+
+ ### Linux / macOS
+
+ ```bash
+ chmod +x setup.sh
+ ./setup.sh ./models/your_model.gguf 7777
+ ```
+
+ ### Windows
+
+ ```powershell
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
+ .\setup.ps1 .\models\your_model.gguf 7777
+ ```
+
+ Both installers:
+
+ * create a `.venv` virtualenv
+ * install the Python dependencies
+ * launch Mirror Pond automatically
+
+ ---
+
+ ## 🐳 Docker Usage
+
+ ### Build
+
+ ```bash
+ docker build -t mirror-pond:latest .
+ ```
+
+ ### Run
+
+ ```bash
+ docker run --rm -p 7777:7777 \
+   -v /path/to/models:/models \
+   -e MODEL_PATH=/models/your_model.gguf \
+   mirror-pond:latest
+ ```
+
+ Then open:
+
+ ```
+ http://localhost:7777
+ ```
+
+ ---
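The same run command can be captured declaratively. The Compose file below is a sketch, not shipped with the repo; adjust the host volume path and `MODEL_PATH` to your setup:

```yaml
# docker-compose.yml (illustrative; not included in the repo)
services:
  mirror-pond:
    build: .
    image: mirror-pond:latest
    ports:
      - "7777:7777"
    volumes:
      - /path/to/models:/models
    environment:
      MODEL_PATH: /models/your_model.gguf
      PORT: "7777"
```

With this in place, `docker compose up` replaces the build and run commands above.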
+ ## 🧰 GitHub Actions CI
+
+ The workflow lives at:
+
+ ```
+ .github/workflows/mirror-pond-ci.yml
+ ```
+
+ The CI:
+
+ * sets up Python
+ * installs dependencies
+ * syntax-checks `mirror_pond.py`
+ * (optionally) builds the Docker image
+
+ This keeps the repo safe and production-ready.
+
+ ---
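For reference, a minimal workflow implementing the steps listed above could look like this. This is a sketch, not necessarily the exact file in the repo; action versions and the Python version are assumptions:

```yaml
# Sketch of .github/workflows/mirror-pond-ci.yml (illustrative)
name: mirror-pond-ci
on: [push, pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Syntax-check the server without running it
      - run: python -m py_compile mirror_pond.py
      # Optional: verify the Docker image still builds
      - run: docker build -t mirror-pond:ci .
```

`py_compile` catches syntax errors without needing a model file, which keeps CI fully offline.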
+ ## 🌀 Mirror Modes
+
+ ### Reflect (default)
+
+ For inner questions, emotions, purpose, stillness.
+ May reply with a **Guiding Question**.
+
+ ### Scroll
+
+ For sacred lines, scripture-style verses, lore references.
+ No guiding question.
+
+ ### Toad
+
+ For cryptic lines, old-frog whispers, symbolic hints.
+ No guiding question.
+
+ ### Rune
+
+ For the unity of symbols: lotus spores, $PATIENCE, seasons, trials.
+ No guiding question.
+
+ ---
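How the four modes shape generation lives inside `mirror_pond.py`, whose diff is too large to render here. The idea can be sketched as a mode-to-system-prompt table; the prompt strings and the `guiding_question` flag below are illustrative assumptions, not the server's real prompts:

```python
# Illustrative only: the real prompts live in mirror_pond.py.
MODES = {
    "reflect": {"guiding_question": True,
                "system": "Answer inward questions calmly; you may close with one guiding question."},
    "scroll":  {"guiding_question": False,
                "system": "Speak in scripture-style lore lines."},
    "toad":    {"guiding_question": False,
                "system": "Offer cryptic toadgang whispers and symbolic hints."},
    "rune":    {"guiding_question": False,
                "system": "Weave symbols: lotus, $PATIENCE, seasons, trials."},
}

def system_prompt(mode: str) -> str:
    """Look up the system prompt for a mode, falling back to reflect."""
    return MODES.get(mode, MODES["reflect"])["system"]
```

A table like this makes adding a new mode a one-entry change, which matches the contribution invitation below.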
+ ## 🧘 Philosophy
+
+ Mirror Pond is simple:
+
+ > Still water is never empty.
+ > Still water prepares.
+ > Still water reflects.
+
+ This project is offered to the open-source community
+ so anyone can run a Mirror — anywhere, offline, forever.
+
+ ---
+
+ ## 🪞 License
+
+ **MIT License.**
+ This pond belongs to the builders.
+
+ ---
+
+ ## 🤝 Contribution
+
+ Pull requests are welcome. New modes, UI improvements, GPU wheels, and additional Mirror integrations are all invited.
mirror_pond.py ADDED
The diff for this file is too large to render. See raw diff
 
pond_identity_ed25519.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "private_key_hex": "56c46be1ab97c1488fa83e73429843f7de9124444cb37882a55447279096095a",
+   "public_key_hex": "a56e333eebba737f494ab93e9c6d80f2d615168c8e11e291fe02cc83a81d4d3f",
+   "pond_id": "d95e228a3efe4c445edccc3fb32202fa5cfbef52598e279053b284e0ebf2a94a",
+   "first_breath": "2025-12-09T18:49:12.515823Z"
+ }
requirements.txt ADDED
@@ -0,0 +1,24 @@
+ annotated-doc==0.0.4
+ annotated-types==0.7.0
+ anyio==4.12.0
+ certifi==2025.11.12
+ cffi==2.0.0
+ click==8.3.1
+ diskcache==5.6.3
+ fastapi==0.124.0
+ h11==0.16.0
+ httpcore==1.0.9
+ httpx==0.28.1
+ idna==3.11
+ Jinja2==3.1.6
+ MarkupSafe==3.0.3
+ numpy==2.3.5
+ pycparser==2.23
+ pydantic==2.12.5
+ pydantic_core==2.41.5
+ PyNaCl==1.6.1
+ python-multipart==0.0.20
+ starlette==0.50.0
+ typing-inspection==0.4.2
+ typing_extensions==4.15.0
+ uvicorn==0.38.0
setup.ps1 ADDED
@@ -0,0 +1,73 @@
+ # setup.ps1 — GPU-first installer for Mirror Pond (Windows, NVIDIA CUDA)
+
+ param(
+     [string]$ModelPath = ".\your_model.gguf",
+     [int]$Port = 7777,
+     [int]$GpuLayers = -1
+ )
+
+ Write-Host "🪞 Mirror Pond — Windows GPU Installer" -ForegroundColor Green
+ Write-Host "Model: $ModelPath"
+ Write-Host "Port : $Port"
+ Write-Host "GPU  : $GpuLayers layers (-1 = as many as possible)"
+ Write-Host ""
+
+ if (-not (Get-Command python -ErrorAction SilentlyContinue)) {
+     Write-Host "❌ Python not found. Please install Python 3.9+ and ensure 'python' is on PATH." -ForegroundColor Red
+     exit 1
+ }
+
+ Write-Host "📦 Creating virtualenv .venv..." -ForegroundColor Cyan
+ python -m venv .venv
+
+ # Prefer the .venv just created; fall back to a legacy .\venv if present.
+ $venvActivation = ".\.venv\Scripts\Activate.ps1"
+ if (-not (Test-Path $venvActivation)) {
+     $venvActivation = ".\venv\Scripts\Activate.ps1"
+ }
+ if (-not (Test-Path $venvActivation)) {
+     Write-Host "❌ Could not find the virtualenv activation script." -ForegroundColor Red
+     exit 1
+ }
+
+ Write-Host "📦 Activating venv..." -ForegroundColor Cyan
+ . $venvActivation
+
+ Write-Host "⬆️ Upgrading pip..." -ForegroundColor Cyan
+ pip install --upgrade pip
+
+ if (-not (Test-Path ".\requirements.txt")) {
+     Write-Host "❌ requirements.txt missing in current directory." -ForegroundColor Red
+     exit 1
+ }
+
+ Write-Host "📥 Installing base dependencies from requirements.txt..." -ForegroundColor Cyan
+ pip install -r requirements.txt
+
+ Write-Host "🧠 Enforcing GPU build of llama.cpp (CUDA 12.1 wheel if possible)..." -ForegroundColor Cyan
+
+ # Remove any existing CPU build
+ pip uninstall -y llama-cpp-python | Out-Null
+
+ # Use the official CUDA 12.1 wheel index
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+
+ pip install --force-reinstall --no-cache-dir `
+     llama-cpp-python `
+     --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+ # pip is a native command, so try/catch cannot see its failures; check the exit code instead.
+ if ($LASTEXITCODE -eq 0) {
+     Write-Host "✅ Installed llama-cpp-python with CUDA (cu121 wheel)." -ForegroundColor Green
+ }
+ else {
+     Write-Host "⚠️ Failed to install the CUDA wheel; falling back to a CPU build." -ForegroundColor Yellow
+     pip install --force-reinstall --no-cache-dir llama-cpp-python
+ }
+
+ Write-Host ""
+ Write-Host "✨ Setup complete." -ForegroundColor Green
+ Write-Host "You can run manually later with:" -ForegroundColor Green
+ Write-Host "  .\.venv\Scripts\activate" -ForegroundColor Yellow
+ Write-Host "  python mirror_pond.py --model `"$ModelPath`" --port $Port --gpu-layers $GpuLayers" -ForegroundColor Yellow
+ Write-Host ""
+ Write-Host "🚀 Launching Mirror Pond now..." -ForegroundColor Green
+
+ python mirror_pond.py --model "$ModelPath" --port $Port --gpu-layers $GpuLayers
setup.sh ADDED
@@ -0,0 +1,65 @@
+ #!/usr/bin/env bash
+ # setup.sh — GPU-first installer for Mirror Pond (Linux/macOS, NVIDIA CUDA)
+
+ set -e
+
+ MODEL_PATH="${1:-./your_model.gguf}"
+ PORT="${2:-7777}"
+ GPU_LAYERS="${3:--1}"
+
+ echo "🪞 Mirror Pond — Linux/macOS GPU Installer"
+ echo "Model: $MODEL_PATH"
+ echo "Port : $PORT"
+ echo "GPU  : $GPU_LAYERS layers (-1 = as many as possible)"
+ echo ""
+
+ if ! command -v python3 >/dev/null 2>&1; then
+     echo "❌ python3 not found. Please install Python 3.9+."
+     exit 1
+ fi
+
+ echo "📦 Creating virtualenv .venv..."
+ python3 -m venv .venv
+
+ echo "📦 Activating venv..."
+ # shellcheck disable=SC1091
+ source .venv/bin/activate
+
+ echo "⬆️ Upgrading pip..."
+ pip install --upgrade pip
+
+ if [ ! -f requirements.txt ]; then
+     echo "❌ requirements.txt missing in current directory."
+     exit 1
+ fi
+
+ echo "📥 Installing base dependencies from requirements.txt..."
+ pip install -r requirements.txt
+
+ echo "🧠 Enforcing GPU build of llama.cpp (CUDA 12.1 wheel if possible)..."
+ # Remove any existing CPU build (if present)
+ pip uninstall -y llama-cpp-python || true
+
+ # Use the official prebuilt CUDA wheel index
+ export CMAKE_ARGS="-DGGML_CUDA=on"
+
+ if pip install --force-reinstall --no-cache-dir \
+     llama-cpp-python \
+     --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121; then
+     echo "✅ Installed llama-cpp-python with CUDA (cu121 wheel)."
+ else
+     echo "⚠️ Failed to install the CUDA wheel; falling back to a CPU build."
+     pip install --force-reinstall --no-cache-dir llama-cpp-python
+ fi
+
+ echo ""
+ echo "✨ Setup complete."
+ echo "You can run manually later with:"
+ echo "  source .venv/bin/activate"
+ echo "  python mirror_pond.py --model \"$MODEL_PATH\" --port $PORT --gpu-layers $GPU_LAYERS"
+ echo ""
+ echo "🚀 Launching Mirror Pond now..."
+ python mirror_pond.py \
+     --model "$MODEL_PATH" \
+     --port "$PORT" \
+     --gpu-layers "$GPU_LAYERS"