pot-o-pathfinder-tiny-v1 🧬🔭🌌

Tribewarez Guild – First Tensor Model Release
A tiny model for Proof of Tensor Optimizations (PoT-O) path prediction on low-power edge devices

Live Beta • Permanently Open-Source • New guild in the cosmos

🌌 Model Overview

pot-o-pathfinder-tiny-v1 is the inaugural publicly released model from the Tribewarez guild.

Purpose
Trained to predict high-efficiency tensor transformation paths for PoT-O mining challenges.
Given an input challenge tensor (e.g., a flattened activation matrix or weight block), the model proposes a compressed/optimized computation path (a sequence of ops: matmul → activation → low-rank adapt → quantize → prune) that maximizes Minimum Message Length (MML) compression while staying within verifiable neural-path constraints.

This helps low-power miners (ESP32, mobile, edge devices) propose better proofs faster than random search, improving guild-wide PoT-O efficiency.
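A proposed path is just an ordered op sequence plus a predicted efficiency score. A minimal sketch of one possible in-memory representation (these dataclasses and the `encode` format are illustrative, not the actual pot-o-core structs):

```python
from dataclasses import dataclass, field

@dataclass
class PathOp:
    """One step in a candidate path, e.g. matmul, gelu, quant, prune."""
    name: str
    params: dict = field(default_factory=dict)

@dataclass
class CandidatePath:
    """Ordered op sequence plus the model's predicted efficiency score."""
    ops: list
    predicted_score: float

    def encode(self) -> str:
        # Serialize to an arrow-separated text form (illustrative only).
        parts = []
        for op in self.ops:
            if op.params:
                args = ",".join(f"{k}:{v}" for k, v in op.params.items())
                parts.append(f"{op.name}[{args}]")
            else:
                parts.append(op.name)
        return " -> ".join(parts) + f" -> score:{self.predicted_score}"

path = CandidatePath(
    ops=[PathOp("matmul", {"lowrank": 32}), PathOp("gelu"),
         PathOp("quant", {"bits": 4}), PathOp("prune", {"ratio": 0.35})],
    predicted_score=0.418,
)
print(path.encode())
# -> matmul[lowrank:32] -> gelu -> quant[bits:4] -> prune[ratio:0.35] -> score:0.418
```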

Key Specs

  • Architecture: Tiny feedforward / 4-layer transformer-inspired (configurable)
  • Parameters: ~1.2M – 3.8M (depending on variant uploaded)
  • Input: Flattened challenge tensor + metadata tokens (shape, dtype, target compression ratio)
  • Output: Sequence of optimization ops + predicted efficiency score
  • Quantization: 8-bit & 4-bit AWQ/GPTQ ready (GGUF export planned)
  • Inference footprint: < 4 MB RAM (ideal for esp-pot-o-miner integration)
  • Training: Synthetic PoT-O challenge dataset + real tensor traces from ai3-lib validators

Release Date: March 2026 (live beta phase)

🚀 Intended Use & PoT-O Integration

  • Primary use: PoT-O miners use this model to generate candidate paths → run full forward pass → submit proof if MML-optimal.
  • Secondary uses:
    • On-device tensor compression advisor
    • Lightweight neural-path validator helper
    • Research into useful PoW alternatives
  • Not intended for: General-purpose chat, image gen, or high-precision scientific computing
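The primary-use flow above (propose candidate paths, score them via a full forward pass, submit only the MML-optimal one) can be sketched as a simple loop. The `propose_paths` and `mml_score` helpers here are illustrative stand-ins for the model call and the verifier pass, not pot-o-core APIs:

```python
import random

def propose_paths(challenge, n=8):
    # Stand-in for model.generate(): return n candidate path strings.
    return [f"candidate-{i}" for i in range(n)]

def mml_score(challenge, path):
    # Stand-in for the full forward pass + MML measurement (lower = better).
    return random.random()

def mine(challenge, target_mml=0.42):
    """Pick the best candidate; submit only if it beats the target MML."""
    best_path, best_score = None, float("inf")
    for candidate in propose_paths(challenge):
        score = mml_score(challenge, candidate)
        if score < best_score:
            best_path, best_score = candidate, score
    if best_score <= target_mml:
        return ("submit", best_path, best_score)  # would submit the proof
    return ("skip", best_path, best_score)

tag, best, score = mine("tensor:shape=[64,128];dtype=float16")
print(tag, best, round(score, 3))
```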

Integrates directly with:

  • pot-o-core – path encoding & proof structs
  • ai3-lib – tensor challenge generation & verification
  • esp-pot-o-miner – upcoming ONNX / TFLite port for ESP32 inference

πŸ› οΈ How to Use

With Transformers (desktop / validator)

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tribewarez/pot-o-pathfinder-tiny-v1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

challenge_input = "tensor:shape=[64,128];dtype=float16;target_mml=0.42 ops:matmul,gelu,quant4,prune0.3"

inputs = tokenizer(challenge_input, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Example output: "path: matmul[lowrank:32] -> gelu -> quant:4bit/int8 -> prune:0.35 -> score:0.418"
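
The decoded text can be parsed back into structured op steps and a score. A minimal sketch assuming the example output format above (`parse_path` is a hypothetical helper, not part of the release):

```python
def parse_path(text):
    """Split an arrow-separated path string into op steps and a score.

    Assumes the 'path: op -> op -> ... -> score:X' format from the
    example output; format may change between beta releases.
    """
    text = text.removeprefix("path:").strip()
    steps = [s.strip() for s in text.split("->")]
    score = None
    if steps and steps[-1].startswith("score:"):
        score = float(steps.pop().split(":", 1)[1])
    return steps, score

ops, score = parse_path(
    "path: matmul[lowrank:32] -> gelu -> quant:4bit/int8 -> prune:0.35 -> score:0.418"
)
print(ops)    # ['matmul[lowrank:32]', 'gelu', 'quant:4bit/int8', 'prune:0.35']
print(score)  # 0.418
```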

On low-power devices (future)

  • Export → ONNX / TorchScript
  • Run via ONNX Runtime WebAssembly, or tinygrad / esp-nn
  • Planned: GGUF conversion for llama.cpp-style embedded inference
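
As a rough sketch of the TorchScript export step, a toy stand-in module can be traced and saved. The real model would be loaded from the Hub; `TinyPathNet` and all dimensions here are invented for illustration only:

```python
import torch
import torch.nn as nn

class TinyPathNet(nn.Module):
    """Toy stand-in for the pathfinder net; dimensions are illustrative."""
    def __init__(self, d_in=128, d_hidden=64, n_ops=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, n_ops),  # logits over candidate ops
        )

    def forward(self, x):
        return self.net(x)

model = TinyPathNet().eval()
dummy = torch.randn(1, 128)          # fake flattened challenge tensor
traced = torch.jit.trace(model, dummy)
traced.save("pathfinder-tiny.pt")    # portable TorchScript artifact
print(tuple(traced(dummy).shape))    # (1, 16)
```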

📊 Performance (Beta Benchmarks)

| Device / Setup | Inference Time | RAM Usage | Path Quality Gain vs Random |
|---|---|---|---|
| ESP32-S3 (8-bit) | 180–320 ms | 3.1 MB | +18–34% MML efficiency |
| Raspberry Pi 4 (4-bit) | 45–90 ms | 4.8 MB | +22–41% |
| Desktop RTX 3060 (fp16) | <5 ms | ~12 MB | Baseline |

Early live-beta numbers from testnet miners – expect rapid iteration.

⚠️ Beta Warnings

  • This is live beta software: model outputs may change in v1.1–v2 as PoT-O RFCs evolve (V3 staking, V4 vaults).
  • Challenge formats & tokenization may break between minor releases.
  • Use at your own risk for real mining: testnet only for now.

📜 License & Open-Source Commitment

MIT License: all weights, code, tokenizer, and config are permanently open. No closed-source components will ever be introduced in Tribewarez core models or infra.

🤝 Join the Guild

Help shape the next versions:

  • Suggest better challenge encodings
  • Contribute synthetic training data
  • Port to more embedded runtimes
  • PRs to: https://github.com/TribeWarez (pot-o-core, ai3-lib, etc.)

  • Docs & RFCs → https://docs.tribewarez.com/public
  • Testnet RPC → https://pot.rpc.gateway.tribewarez.com
  • Status → https://status.rpc.gateway.tribewarez.com

Tribewarez – forging tensor futures in the cosmos • 2026–∞
