Oussema Harbi
Harbous
Recent Activity
reacted to kanaria007's post with 👍 about 7 hours ago
✅ New Article: *Post-Transformer Decision Cores* (v0.1)
Title:
🚀 Post-Transformer Decision Cores: Goal-Native Engines Beyond LLMs
🔗 https://huggingface.co/blog/kanaria007/post-tranformer-decision-cores
---
Summary:
Transformers are powerful—but in SI-Core they’re *not the essence of intelligence*. A *Decision Core* is anything that satisfies the *Jump contracts* (OBS/ETH/MEM/ID/EVAL + RML), and those contracts don’t require next-token prediction.
This article sketches what “post-Transformer” looks like in practice: *goal-native, structure-aware controllers* that may use LLMs as tools—but don’t depend on them as the runtime brain.
> Don’t relax the contracts.
> Replace the engine behind them.
---
Why It Matters:
• Makes LLMs *optional*: shift them to “genesis / exploration / explanation,” while routine high-stakes Jumps run on structured cores
• Improves boring-but-critical properties: *determinism (CAS), fewer inconsistencies (SCI), fewer ETH violations (EAI), better rollback (RBL/RIR)*
• Enables gradual adoption via *pluggable Jump engines* and domain-by-domain “primary vs fallback” switching
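The "pluggable Jump engines" idea with domain-by-domain primary/fallback switching can be sketched in a few lines. This is a hypothetical illustration, not SI-Core code: the names `Decision`, `JumpEngine`, and `route` are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Decision:
    action: str
    engine: str  # which engine produced this decision


# A Jump engine is anything that maps an observation to a decision;
# the contracts stay fixed while the engine behind them is swapped.
JumpEngine = Callable[[str], Decision]


def llm_engine(observation: str) -> Decision:
    # Stand-in for an LLM-backed engine kept for exploration/explanation.
    return Decision(action=f"explore:{observation}", engine="llm")


def structured_engine(observation: str) -> Decision:
    # Stand-in for a deterministic structured core for high-stakes domains.
    return Decision(action=f"act:{observation}", engine="structured")


def route(domain: str, observation: str,
          primary: Dict[str, JumpEngine],
          fallback: JumpEngine) -> Decision:
    # Use the domain's primary engine if that domain has been flipped;
    # otherwise fall back (gradual, domain-by-domain adoption).
    engine = primary.get(domain, fallback)
    return engine(observation)


primary = {"payments": structured_engine}  # only this domain has flipped
print(route("payments", "charge#42", primary, llm_engine).engine)  # structured
print(route("chat", "greeting", primary, llm_engine).engine)       # llm
```

The point of the sketch is that flipping a domain's primary engine is a one-line config change, while unflipped domains keep routing to the LLM fallback.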
---
What’s Inside:
• The architectural inversion: *World → OBS → SIM/SIS → Jump (Decision Core) → RML → Effects* (LLM is just one engine)
• Three compatible post-Transformer directions:
1. *World-model + search controllers* (MPC/MCTS/anytime search with explicit GCS + ETH constraints)
2. *Genius-distilled specialized controllers* (distill structure from GeniusTraces; LLM becomes a “genesis tool”)
3. *SIL-compiled Decision Programs* (typed Jump entrypoints, compiler-checked invariants, DPIR/GSPU targeting)
• A realistic migration path: LLM-wrapped → Genius library → shadow dual-run → flip primary by domain → SIL-compiled cores
• How this connects to “reproducing genius”: GRP provides trace selection/format; this article provides the engine architectures
---
📖 Structured Intelligence Engineering Series
reacted to Hellohal2064's post with 🔥 20 days ago
🚀 Excited to share: The vLLM container for NVIDIA DGX Spark!
I've been working on getting vLLM to run natively on the new DGX Spark with its GB10 Blackwell GPU (SM121 architecture). The results? Up to 2.5x faster inference compared to llama.cpp!
📊 Performance Highlights:
• Qwen3-Coder-30B: 44 tok/s (vs 21 tok/s with llama.cpp)
• Qwen3-Next-80B: 45 tok/s (vs 18 tok/s with llama.cpp)
🔧 Technical Challenges Solved:
• Built PyTorch nightly with CUDA 13.1 + SM121 support
• Patched vLLM for Blackwell architecture
• Created custom MoE expert configs for GB10
• Implemented TRITON_ATTN backend workaround
📦 Available now:
• Docker Hub: docker pull hellohal2064/vllm-dgx-spark-gb10:latest
• HuggingFace: huggingface.co/Hellohal2064/vllm-dgx-spark-gb10
The DGX Spark's 119GB unified memory opens up possibilities for running massive models locally. Happy to connect with others working on the DGX Spark Blackwell!
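A minimal sketch of getting started with the published image. The `docker pull` line is from the post; the `docker run` invocation, port, and model ID are assumptions about the image's entrypoint and vLLM's standard `vllm serve` CLI, so check the Docker Hub and Hugging Face pages for the exact command.

```shell
# From the post: pull the prebuilt image for DGX Spark / GB10.
docker pull hellohal2064/vllm-dgx-spark-gb10:latest

# Assumed invocation: expose all GPUs and serve a model on port 8000
# via vLLM's OpenAI-compatible server. The model ID is illustrative.
docker run --gpus all -p 8000:8000 \
  hellohal2064/vllm-dgx-spark-gb10:latest \
  vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --port 8000
```

Once the server is up, any OpenAI-compatible client can point at `http://localhost:8000/v1`.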