ManniX-ITA

Recent Activity

posted an update about 8 hours ago
After the Feb/Mar '26 collapse of Claude Code I started building my own framework. The token crunch is still only mitigated, but the reasoning and quality are back, better than before. For research and LLM training recipes, though, diverse knowledge and a second point of view are crucial. Pairing my Claude Max sub with an Ollama Pro sub has already saved me from days of botched trainings; multiple frontier models helping Claude is next level. Acting as the middleman myself was interesting but inefficient, so I shipped skills that let Claude talk to Ollama models directly.

🚀 claude-hooks v1.1.0 ships two LLM-to-LLM skills.

💬 /get-advice: single-shot second opinion. Claude runs a multi-turn conversation with a configured Ollama advisor; the advisor grounds itself in your project through read_file / grep / glob / list_files / recall_memory tools. Effort tiers cap fresh-session retries.

🤝 /consultants: multi-agent council for cross-cutting questions:

🧩 planner → researcher → critic → synthesizer

Each role runs its own Ollama model.

💾 Sessions persist to disk (summary.md + transcript.md + SQLite per-role message threads); closed sessions reopen and produce follow-ups 🔁 indistinguishable from warm ones.

🎯 x-tier effort multiplies diversity:
• xmedium / xhigh: the researcher fans out across N models in parallel
• xmax: adds a multi-critic + meta-critic combine; critics are anonymized as "Critic 1/2/3" to avoid model bias

🛡️ Cloud-flap recovery, three layers: a 15-attempt / ~15 min retry budget; a synthesizer failure-fallback model chain; a degraded-answer composer that surfaces researcher + critic work even when synthesis fails.

📊 7 cloud models benchmarked and Claude-graded on locked queries:
• PROD-READY (P:A R:A C:A S:A): kimi-k2.6, gemma4:31b, glm-5.1
• Role specialists: minimax-m2.7 (critic), qwen3.5 (planner)

🐧 Linux/macOS/Windows. No per-project setup.

🔗 github.com/mann1x/claude-hooks