# 🛠️ ZetaGrid-Ollama Patch: Setup Guide
To run RTH-LM (Fractal TCN) natively in your local environment via Ollama, you need to add support for the TCN operators to the underlying llama.cpp engine.
## 📦 Prerequisites

- **GGUF Model:** Download `rth_lm_25b_v1.gguf`
- **Ollama Source:** Clone the official repository or use our fork.
## 🛠️ Step 1: Add Custom Kernels

Copy the provided C++ files into the llama.cpp source tree:

- Move `rth_tcn_ops.cpp` and `rth_tcn_ops.h` to `llama.cpp/src/`
- Register `GGML_OP_CAUSAL_CONV1D` and `GGML_OP_FRACTAL_GATE` in `ggml.c`
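To sanity-check the registered operator, it helps to know what it is expected to compute. A minimal reference sketch of causal 1D convolution semantics is below; this is an illustration of the standard TCN building block, not the actual kernel from `rth_tcn_ops.cpp`, and the function name `causal_conv1d` is ours:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reference semantics assumed for GGML_OP_CAUSAL_CONV1D:
//   y[t] = sum_{k=0}^{K-1} w[k] * x[t-k], with x[i] treated as 0 for i < 0,
// so each output depends only on the current and past inputs (causality).
std::vector<float> causal_conv1d(const std::vector<float>& x,
                                 const std::vector<float>& w) {
    std::vector<float> y(x.size(), 0.0f);
    for (size_t t = 0; t < x.size(); ++t) {
        // Only taps that reach back into valid (non-negative) indices contribute.
        for (size_t k = 0; k < w.size() && k <= t; ++k) {
            y[t] += w[k] * x[t - k];
        }
    }
    return y;
}
```

Comparing the patched ggml op against a reference like this on small inputs is a quick way to catch indexing mistakes before running the full 25B model.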
## 🏗️ Step 2: Compile

Rebuild llama.cpp or your Ollama binary:

```sh
make -j
# or for Ollama
go generate ./...
go build .
```
## 🚀 Step 3: Create & Run

Use the provided `Modelfile_RTH-LM` to register the model:

```sh
ollama create rth-lm -f Modelfile_RTH-LM
ollama run rth-lm
```
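The `ollama create` step reads the Modelfile to locate the GGUF weights. If you need to reconstruct it, a minimal sketch using Ollama's standard Modelfile directives is below; the provided `Modelfile_RTH-LM` may set additional parameters and a prompt template:

```
# Point Ollama at the downloaded GGUF weights (path is relative to the Modelfile)
FROM ./rth_lm_25b_v1.gguf

# Optional sampling default; adjust to taste
PARAMETER temperature 0.7
```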
**Why this matters:** RTH-LM is one of the first non-Transformer architectures targeting local inference. With this patch applied, TCN-based models run through the same Ollama workflow as any Transformer GGUF.