pot-o-pathfinder-tiny-v1
Tribewarez Guild – First Tensor Model Release
Tiny model for Proof of Tensor Optimizations (PoT-O) path prediction on low-power devices
Live Beta • Permanently Open-Source • New guild in the cosmos
Model Overview
pot-o-pathfinder-tiny-v1 is the inaugural publicly released model from the Tribewarez guild.
Purpose
Trained to predict high-efficiency tensor transformation paths for PoT-O mining challenges.
Given an input challenge tensor (e.g., a flattened activation matrix or weight block), the model proposes a compressed / optimized computation path (a sequence of ops: matmul → activation → low-rank adapt → quantize → prune) that maximizes Minimum Message Length (MML) compression while staying within verifiable neural-path constraints.
This helps low-power miners (ESP32, mobile, edge devices) propose better proofs faster than random search, improving guild-wide PoT-O efficiency.
Key Specs
- Architecture: Tiny feedforward / 4-layer transformer-inspired (configurable)
- Parameters: ~1.2M–3.8M (depending on the variant)
- Input: Flattened challenge tensor + metadata tokens (shape, dtype, target compression ratio)
- Output: Sequence of optimization ops + predicted efficiency score
- Quantization: 8-bit & 4-bit AWQ/GPTQ ready (GGUF export planned)
- Inference footprint: < 4 MB RAM (ideal for esp-pot-o-miner integration)
- Training: Synthetic PoT-O challenge dataset + real tensor traces from ai3-lib validators
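The input encoding above (flattened tensor plus shape / dtype / target-ratio metadata tokens) can be sketched as a small serializer. The exact field layout below is an assumption modeled on the prompt string shown in the How to Use section; the shipped tokenizer may differ.

```python
def encode_challenge(shape, dtype, target_mml, ops):
    """Serialize challenge-tensor metadata into a text prompt.

    The 'tensor:...;... ops:...' layout is an assumption based on the
    prompt format in the usage example; treat it as illustrative only.
    """
    shape_str = ",".join(str(d) for d in shape)
    return (
        f"tensor:shape=[{shape_str}];dtype={dtype};"
        f"target_mml={target_mml} ops:{','.join(ops)}"
    )

prompt = encode_challenge([64, 128], "float16", 0.42,
                          ["matmul", "gelu", "quant4", "prune0.3"])
# -> "tensor:shape=[64,128];dtype=float16;target_mml=0.42 ops:matmul,gelu,quant4,prune0.3"
```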
Release Date: March 2026 (live beta phase)
Intended Use & PoT-O Integration
- Primary use: PoT-O miners use this model to generate candidate paths → run a full forward pass → submit a proof if MML-optimal.
- Secondary uses:
  - On-device tensor compression advisor
  - Lightweight neural-path validator helper
  - Research into useful-PoW alternatives
- Not intended for: General-purpose chat, image gen, or high-precision scientific computing
Integrates directly with:
- pot-o-core – path encoding & proof structs
- ai3-lib – tensor challenge generation & verification
- esp-pot-o-miner – upcoming ONNX / TFLite port for ESP32 inference
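The primary mining flow above can be sketched as a best-of-N candidate loop. Every function name here (`propose_path`, `score_mml`, `submit_proof`) is an illustrative placeholder, not the actual pot-o-core API:

```python
def mine_round(propose_path, score_mml, submit_proof, challenge, n_candidates=8):
    """Best-of-N search: propose candidate paths, submit the highest-scoring one.

    propose_path / score_mml / submit_proof are hypothetical callables
    standing in for the model call, the full forward-pass MML scorer,
    and the proof-submission step described above.
    """
    best_path, best_score = None, float("-inf")
    for _ in range(n_candidates):
        path = propose_path(challenge)        # model suggests an op sequence
        score = score_mml(challenge, path)    # full forward pass + MML scoring
        if score > best_score:
            best_path, best_score = path, score
    if best_path is not None:
        submit_proof(challenge, best_path)    # submit only the best candidate
    return best_path, best_score

# Toy demonstration with stub functions:
candidates = iter(["path_a", "path_b", "path_c"])
scores = {"path_a": 0.10, "path_b": 0.50, "path_c": 0.30}
submitted = []
best = mine_round(lambda c: next(candidates),
                  lambda c, p: scores[p],
                  lambda c, p: submitted.append(p),
                  "challenge", n_candidates=3)
# best == ("path_b", 0.5); submitted == ["path_b"]
```

The point of the loop is that a learned proposer only needs to beat random search on average; the expensive MML verification still gates every submission.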
How to Use
With Transformers (desktop / validator)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tribewarez/pot-o-pathfinder-tiny-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

challenge_input = "tensor:shape=[64,128];dtype=float16;target_mml=0.42 ops:matmul,gelu,quant4,prune0.3"
inputs = tokenizer(challenge_input, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Example output: "path: matmul[lowrank:32] -> gelu -> quant:4bit/int8 -> prune:0.35 -> score:0.418"
```
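Downstream miner code needs to turn that decoded string back into structured ops. A minimal parser, assuming the `path: op -> op -> ... -> score:X` layout of the example output (the real grammar may change between beta releases):

```python
def parse_path(output):
    """Parse a decoded path string into (ops, score).

    Assumes the 'path: ... -> score:X' layout from the example output;
    the actual output grammar is beta and may differ.
    """
    body = output.split("path:", 1)[1].strip()
    steps = [s.strip() for s in body.split("->")]
    ops, score = [], None
    for step in steps:
        if step.startswith("score:"):
            score = float(step.split(":", 1)[1])  # predicted efficiency score
        else:
            ops.append(step)                      # one optimization op
    return ops, score

ops, score = parse_path(
    "path: matmul[lowrank:32] -> gelu -> quant:4bit/int8 -> prune:0.35 -> score:0.418"
)
# ops == ['matmul[lowrank:32]', 'gelu', 'quant:4bit/int8', 'prune:0.35']; score == 0.418
```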
On low-power devices (future)
- Export → ONNX / TorchScript
- Run via ONNX Runtime WebAssembly, tinygrad, or esp-nn
- Planned: GGUF conversion for llama.cpp-style embedded inference
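The TorchScript half of that export path can be sketched with `torch.jit.trace`. The `TinyPathfinder` wrapper below is a stand-in module with made-up dimensions, not the released architecture; the ONNX route is analogous via `torch.onnx.export`.

```python
import torch
import torch.nn as nn

class TinyPathfinder(nn.Module):
    """Stand-in for the real model: metadata token ids in, op logits out."""
    def __init__(self, vocab=256, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        return self.head(self.emb(ids))

model = TinyPathfinder().eval()
dummy = torch.zeros(1, 32, dtype=torch.long)  # one prompt of 32 metadata tokens
traced = torch.jit.trace(model, dummy)        # freeze the graph for embedded runtimes
traced.save("pathfinder_tiny.pt")             # loadable from C++ via libtorch
logits = traced(dummy)                        # op-logit tensor, shape (1, 32, 256)
```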
Performance (Beta Benchmarks)
| Device / Setup | Inference Time | RAM Usage | Path Quality Gain vs Random |
| --- | --- | --- | --- |
| ESP32-S3 (8-bit) | 180–320 ms | 3.1 MB | +18–34% MML efficiency |
| Raspberry Pi 4 (4-bit) | 45–90 ms | 4.8 MB | +22–41% |
| Desktop RTX 3060 (fp16) | <5 ms | ~12 MB | Baseline |
Early live-beta numbers from testnet miners – expect rapid iteration.
Beta Warnings
This is live beta software: model outputs may change in v1.1–v2 as the PoT-O RFCs evolve (V3 staking, V4 vaults), and challenge formats and tokenization may break between minor releases. Use at your own risk for real mining; testnet only for now.
License & Open-Source Commitment
MIT License – all weights, code, tokenizer, and config are permanently open. No closed-source components will ever be introduced in Tribewarez core models or infra.
Join the Guild
Help shape the next versions:
- Suggest better challenge encodings
- Contribute synthetic training data
- Port to more embedded runtimes
- PRs to: https://github.com/TribeWarez (pot-o-core, ai3-lib, etc.)
Docs & RFCs – https://docs.tribewarez.com/public
Testnet RPC – https://pot.rpc.gateway.tribewarez.com
Status – https://status.rpc.gateway.tribewarez.com
Tribewarez – forging tensor futures in the cosmos • 2026