title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Benchmarked 4 AI Memory Systems on 600-Turn Conversations - Here Are the Results | 20 | We just completed comprehensive benchmarks comparing memory layers for production AI agents. Tested Mem0 against OpenAI Memory, LangMem, and MemGPT across 10 multi-session conversations with 200 questions each.
**Key findings:**
* **Mem0**: 66.9% accuracy, 1.4s p95 latency, \~2K tokens per query
* **Mem0 Graph**: 68.5% accuracy, 2.6s p95 latency, \~4K tokens (superior temporal reasoning)
* **OpenAI Memory**: 52.9% accuracy, 0.9s p95 latency, \~5K tokens
* **LangMem**: 58.1% accuracy, 60s p95 latency, \~130 tokens
* **MemGPT**: Results in appendix
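The p95 figures above are just the 95th-percentile of per-query wall-clock latencies; a minimal nearest-rank sketch (the latency values below are made up for illustration, not the benchmark's raw data):

```python
def percentile(values, pct):
    """Nearest-rank percentile: smallest value >= pct% of samples."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[int(rank) - 1]

# Hypothetical per-query latencies in seconds
latencies_s = [0.8, 1.1, 0.9, 1.4, 1.0, 2.6, 1.2, 1.3, 0.7, 1.5]
print(percentile(latencies_s, 95))  # → 2.6
```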
**What stands out:** Mem0 achieved 14 percentage points higher accuracy than OpenAI Memory while maintaining sub-2s response times. The graph variant excels at temporal queries (58.1% vs OpenAI's 21.7%) and multi-hop reasoning.
LangMem's 60-second latency makes it unusable for interactive applications, despite being open source.
**Methodology:** Used LOCOMO dataset with GPT-4o-mini at temperature 0. Evaluated factual consistency, multi-hop reasoning, temporal understanding, and open-domain recall across 26K+ token conversations.
This matters because production agents need memory that persists beyond context windows while maintaining chat-level responsiveness. Current approaches either sacrifice accuracy for speed or become too slow for real-time use.
Wanna reproduce the numbers?
Repository: pip install mem0ai to test yourself. | 2026-02-23T15:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rckcww/benchmarked_4_ai_memory_systems_on_600turn/ | singh_taranjeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rckcww | false | null | t3_1rckcww | /r/LocalLLaMA/comments/1rckcww/benchmarked_4_ai_memory_systems_on_600turn/ | false | false | self | 20 | null |
Best local llm for grammar tasks? | 6 | Hi guys!
I want to create a figma plugin that uses AI to help us proofread design assets and pieces for our work. Would go with openai 5.2 but work is very strict regarding data ingestion by 3rd party providers. Also I would have to feed or use my work brand guidelines documents as source of truth for the plugin.
The language I want to work in is Spanish, which is notorious for its many rules and practices.
Any recommendations for this project? | 2026-02-23T15:19:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rck7n4/best_local_llm_for_grammar_tasks/ | darkblitzrc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rck7n4 | false | null | t3_1rck7n4 | /r/LocalLLaMA/comments/1rck7n4/best_local_llm_for_grammar_tasks/ | false | false | self | 6 | null |
Aura — offline cognitive memory for local AI agents. No embeddings, no cloud, <1ms recall. pip install aura-memory | 1 | [removed] | 2026-02-23T15:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rck65j/aura_offline_cognitive_memory_for_local_ai_agents/ | Far_Assignment_189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rck65j | false | null | t3_1rck65j | /r/LocalLLaMA/comments/1rck65j/aura_offline_cognitive_memory_for_local_ai_agents/ | false | false | self | 1 | null |
Is opencode the best free coding agent currently? | 11 | I just started using it and it seems good. I was very surprised that it also gives free access to minimax 2.5 and glm 5 at the moment. | 2026-02-23T15:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjzsk | false | null | t3_1rcjzsk | /r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/ | false | false | self | 11 | null |
Fixing Qwen3-Next-80B quantization: Explicit MoE gate + DeltaNet state protection (AWQ 128g) testing | 1 | [removed] | 2026-02-23T15:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjzg3/fixing_qwen3next80b_quantization_explicit_moe/ | EhOhEl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjzg3 | false | null | t3_1rcjzg3 | /r/LocalLLaMA/comments/1rcjzg3/fixing_qwen3next80b_quantization_explicit_moe/ | false | false | self | 1 | null |
Coming in a few hours: First architecturally correct AWQ of Qwen3-Next-80B-A3B: explicit MoE gate + DeltaNet state protection, 128g, fits 2x3090 | 1 | [removed]
If your 3B–70B fine-tune keeps OOM’ing, we’ll run it on H100 (beta runtime experiment) | 0 | We’re running a beta experiment with a new inference runtime focused on large-model memory behavior.
Instead of synthetic benchmarks, we’d rather test real community fine-tunes.
If you’ve got a 3B–70B model that:
– keeps OOM’ing
– struggles with memory fragmentation
– behaves weirdly under load
we can spin it up on an H100 and let you hit an endpoint.
This isn’t a hosted platform announcement. Just testing real workloads on the runtime.
Drop the model link if you want us to try it.
Or DM me. | 2026-02-23T14:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjgr4/if_your_3b70b_finetune_keeps_ooming_well_run_it/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjgr4 | false | null | t3_1rcjgr4 | /r/LocalLLaMA/comments/1rcjgr4/if_your_3b70b_finetune_keeps_ooming_well_run_it/ | false | false | self | 0 | null |
Give your OpenClaw agents a truly local voice | 0 | If you’re using **OpenClaw** and want fully local voice support, this is worth a read:
[https://izwiai.com/blog/give-openclaw-agents-local-voice](https://izwiai.com/blog/give-openclaw-agents-local-voice?utm_source=chatgpt.com)
By default, OpenClaw relies on cloud TTS like **ElevenLabs**, which means your audio leaves your machine. This guide shows how to integrate **Izwi** to run speech-to-text and text-to-speech *completely locally*.
**Why it matters:**
* No audio sent to the cloud
* Faster response times
* Works offline
* Full control over your data
Clean setup walkthrough + practical voice agent use cases. Perfect if you’re building privacy-first AI assistants. 🚀
[https://github.com/agentem-ai/izwi](https://github.com/agentem-ai/izwi) | 2026-02-23T14:50:52 | https://izwiai.com/blog/give-openclaw-agents-local-voice | zinyando | izwiai.com | 1970-01-01T00:00:00 | 0 | {} | 1rcjgr9 | false | null | t3_1rcjgr9 | /r/LocalLLaMA/comments/1rcjgr9/give_your_openclaw_agents_a_truly_local_voice/ | false | false | default | 0 | null |
I benchmarked 17 local LLMs on real MCP tool calling — single-shot AND agentic loop. The difference is massive. | 96 | I benchmarked 17 local LLMs on real MCP tool calling — not synthetic function-calling evals, but actual calls against a production API with 19 tools, real validation, and real results.
I ran each model twice. First single-shot (one API call, score the first response). Then agentic (model gets tool results back, keeps going until it passes or times out). Same 17 models, same 28 tasks, same MCP server.
The methodology difference changes everything.
---
### The setup
**17 models** on a 4080 16GB + 64GB RAM, running via LM Studio, talking to a real MCP server ([Workunit](https://workunit.app)'s project management API — 19 tools) through a custom Python runner.
5 models are **not trained for tool calling** (per LM Studio metadata) — included to test whether raw reasoning ability compensates for missing fine-tuning.
**Three difficulty levels:**
**Level 0 — Explicit** (11 tasks): Exact tool name and all parameters given. Pure format compliance.
> *"Call `create_workunit` with name='Hello World', problem_statement='Users can't track work', success_criteria='Workunit visible in dashboard', priority='normal'"*
**Level 1 — Natural language** (10 tasks): Human-style request. Model picks the right tool, maps description to params.
> *"Create a workunit called 'Fix Login Page Bug'. Problem: users can't log in with special characters. Done when all character types work with regression tests. High priority."*
**Level 2 — Reasoning** (7 tasks): High-level goal only. No tool names, no hints. Model must plan the sequence and chain IDs across calls.
> *"End of sprint. Mark all todo tasks done, save a summary of what was accomplished, and complete the workunit."*
---
### Results — Single-shot vs. Agentic

The left column is single-shot (first response only). Right column is agentic (full loop with tool results fed back).
| Model | Size | Tool-trained | SS L0 | SS L1 | SS L2 | SS Overall | → | AG L0 | AG L1 | AG L2 | AG Overall |
|-------|------|-------------|-------|-------|-------|------------|---|-------|-------|-------|------------|
| ibm/granite-4-h-tiny | 7B | ✅ | 100% | 80% | 0% | **73%** | → | 100% | 100% | 57% | **89%** |
| qwen/qwen3-coder-30b | 30B | ✅ | 100% | 80% | 0% | **71%** | → | 100% | 90% | 57% | **88%** |
| mistralai/magistral-small-2509 | 24B | ✅ | 100% | 90% | 0% | **78%** | → | 100% | 100% | 43% | **85%** |
| qwen/qwen3-4b-thinking-2507 | 4B | ✅ | 100% | 80% | 0% | **74%** | → | 100% | 80% | 57% | **85%** |
| openai/gpt-oss-20b | 20B | ✅ | 100% | 70% | 0% | **72%** | → | 100% | 80% | 43% | **85%** |
| mistralai/ministral-3-14b-reasoning | 14B | ✅ | 100% | 90% | 0% | **78%** | → | 100% | 90% | 29% | **84%** |
| baidu/ernie-4.5-21b-a3b | 21B | ❌* | 0% | 0% | 0% | **0%** | → | 100% | 100% | 29% | **83%** |
| mistralai/ministral-3-3b | 3B | ✅ | 100% | 90% | 57% | **89%** | → | 91% | 90% | 29% | **81%** |
| google/gemma-3-12b | 12B | ❌* | 0% | 0% | 0% | **0%** | → | 91% | 80% | 29% | **78%** |
| essentialai/rnj-1 | 8.3B | ✅ | 100% | 80% | 0% | **74%** | → | 100% | 80% | 0% | **77%** |
| nvidia/nemotron-3-nano | 30B | ✅ | 91% | 60% | 0% | **59%** | → | 100% | 60% | 14% | **71%** |
| zai-org/glm-4.6v-flash | 9.4B | ✅ | 82% | 80% | 0% | **67%** | → | 91% | 60% | 14% | **68%** |
| microsoft/phi-4-reasoning-plus | 15B | ❌* | 55% | 70% | 0% | **48%** | → | 46% | 80% | 43% | **64%** |
| zai-org/glm-4.7-flash | 30B | ✅ | 64% | 40% | 0% | **44%** | → | 55% | 50% | 71% | **61%** |
| qwen/qwen2.5-coder-32b | 32B | ❌* | 64% | 40% | 0% | **38%** | → | 91% | 50% | 14% | **58%** |
| deepseek/deepseek-r1-0528-qwen3-8b | 8B | ❌* | 9% | 0% | 0% | **3%** | → | 18% | 0% | 0% | **6%** |
| bytedance/seed-oss-36b | 36B | ✅ | 100% | 80% | 0% | **71%** | → | 0% | 0% | 0% | **0%** |
*\* = not trained for tool calling (per LM Studio metadata)*
**How scoring works:** The L0/L1/L2 columns show **binary pass rates** — the percentage of tasks the model fully passed at each level. The **Overall** column is different: it averages each level's *score* (which includes partial credit for completing some steps of a multi-step task), then averages those three level scores. This is why Overall can be higher than you'd expect from the pass rate columns alone — a model that partially completes several tasks gets credit even if it doesn't fully pass them. The repo's `aggregated_report.md` shows both pass rates and scores per level.
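The Overall computation described above (per-level score means with partial credit, then a mean of level means) can be sketched directly — the per-task scores below are hypothetical, not taken from the benchmark:

```python
# Hypothetical per-task scores with partial credit (1.0 = full pass).
level_scores = {
    "L0": [1.0] * 11,               # all 11 explicit tasks passed
    "L1": [1.0] * 8 + [0.5, 0.0],   # partial credit on a multi-step task
    "L2": [0.4, 0.0, 0.6, 0.0, 0.0, 0.3, 0.0],  # partial steps, zero passes
}

def overall(levels):
    # Average each level's mean score, then average the level means,
    # so every level weighs equally regardless of task count.
    means = [sum(s) / len(s) for s in levels.values()]
    return sum(means) / len(means)

# Binary pass rate at L2 is 0%, yet Overall is still well above zero.
pass_rate_l2 = sum(1 for s in level_scores["L2"] if s == 1.0) / 7
print(round(overall(level_scores), 3), pass_rate_l2)
```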
---
### Level breakdown (agentic)

### Tool-trained vs not tool-trained

### What I found
**The agentic loop is the difference between L2 being hard and L2 being solvable.**
In single-shot, 16 of 17 models scored 0% at L2. The one exception: ministral-3-3b hit 57% — because 4 of its 7 passes don't require ID chaining (bootstrap project, find stale work, document a decision, create project with linked asset). The ID-chaining tasks (where you need the `id` from `create_project` to pass into `create_workunit`) were 0% across the board in single-shot. With the agentic loop, granite, qwen3-coder-30b, and qwen3-4b-thinking all hit 57% at L2 including the chaining tasks. The model calls a tool, gets an ID back, uses it in the next call. That's the whole unlock.
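The unlock is easy to see in code. A toy agentic loop with ID chaining — the tool names come from the benchmark's tasks, but the implementations here are entirely made up:

```python
import json

# Toy tool registry standing in for the MCP server.
def create_project(name):
    return {"id": "proj_1", "name": name}

def create_workunit(project_id, name):
    return {"id": "wu_1", "project_id": project_id, "name": name}

TOOLS = {"create_project": create_project, "create_workunit": create_workunit}

def agentic_loop(plan):
    """Feed each tool result back so later calls can reference earlier IDs."""
    history, last = [], None
    for step in plan:
        # Resolve the "$prev.id" placeholder against the previous result.
        args = {k: (last["id"] if v == "$prev.id" else v)
                for k, v in step["args"].items()}
        last = TOOLS[step["tool"]](**args)
        history.append({"call": step["tool"], "result": last})
    return history

run = agentic_loop([
    {"tool": "create_project", "args": {"name": "Sprint 12"}},
    {"tool": "create_workunit", "args": {"project_id": "$prev.id",
                                         "name": "Fix login bug"}},
])
print(json.dumps(run[-1]["result"]))
```

In single-shot there is no `last` result to resolve against, so the second call's `project_id` can only be hallucinated.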
**A 7B model tops the overall leaderboard.** ibm/granite-4-h-tiny at 89%, beating every model up to 32B. It's consistent, doesn't hallucinate tool names, handles multi-step sequences cleanly, and is fast. If you need reliable local MCP tool calling today, start here.
**The not-tool-trained plot twist.** In single-shot, ernie-4.5-21b (21B) and gemma-3-12b (12B) scored 0% — they never emitted tool calls at all, just wrote helpful text. In the agentic loop: ernie hits 83%, gemma hits 78%. The agentic runner apparently gives them enough context to figure out they're supposed to call tools. Whether that's a win for the agentic methodology or an indictment of the single-shot format is worth debating.
**Not being tool-trained still hurts at L2.** Both ernie and gemma fall to 29% at L2 — capable of basic tool use when the context is clear, but struggle with multi-step reasoning chains. The tool-trained models that score well at L2 (granite, qwen3-coder, qwen3-thinking) have a clear edge there.
**DeepSeek-R1 (8B, not tool-trained) called a tool named `tool_name`.** Literally that string, on most tasks. It understood the shape of a tool call response — format, structure, everything — but hallucinated a generic placeholder instead of reading the actual function names from the tool list. Fascinating failure mode.
**phi-4-reasoning-plus is inverted.** 46% at L0 (explicit instructions), 80% at L1 (natural language), 43% at L2. It struggles most when told exactly what to do. This is unusual enough that I suspect something about the explicit instruction format conflicts with its training distribution.
**glm-4.7-flash scores higher at L2 (71%) than L0 (55%) or L1 (50%).** It passes L2-01 through L2-05 while fumbling basic explicit tool calls. I don't have a good explanation. The reasoning tasks seem to activate something that the simpler tasks don't.
**Two tasks are the universal wall.** L1-03 ("add three tasks to a workunit") — most models call `create_task` once and stop. L1-05 ("search for a workunit then retrieve its details") — models do the search but almost universally skip the follow-up `get_workunit`. Both require deciding to make multiple sequential calls from a single user message, which appears to be a reliably hard mental model. And L2-07 (end-of-sprint closeout: mark tasks done + save context + complete workunit) — 0/17 models fully pass it even in the agentic loop. Three sequential calls with state threading. Nobody nails all three.
**seed-oss-36b is the most bizarre result.** In single-shot it scored 100% L0 and 80% L1 (71% overall) — among the better results in the single-shot run. In the agentic loop it scored 0% across all 28 tasks and never emitted a single tool call. The only thing different between runs is that the agentic runner feeds tool results back as context. Somehow receiving tool results caused the model to completely stop calling tools. If you've run this model in an agentic setup successfully, I'd genuinely like to know what setup you used.
---
### What I couldn't test
My 4080 16GB tops out around 32-36B at Q4. Would love community results for:
- Llama 3.3 70B
- Qwen2.5-72B
- DeepSeek-R1 671B
- Mistral Large / Mixtral 8x22B
- Llama 4 Scout/Maverick
**The benchmark is ready to run if you have the hardware.**
---
### Run it yourself
```bash
git clone https://github.com/3615-computer/workunit-benchmarks
cd workunit-benchmarks/local-llm-mcp-calling
pip install openai rich requests
# Single model
python scripts/runner_v2_agentic.py --model mistralai/ministral-3-3b --token <your-mcp-token>
# Full suite
python scripts/runner_v2_agentic.py --models models.txt --token <your-token> --refresh-token <refresh>
```
Requires LM Studio running locally. The benchmark runs against Workunit's MCP server, so you need a free account at [workunit.app](https://workunit.app) to get an MCP token. The easiest way to get your tokens is with the MCP Inspector: `bunx @modelcontextprotocol/inspector@latest` — connect to `https://workunit.app/mcp`, complete the OAuth flow, and copy the access + refresh tokens from the inspector UI. The tasks exercise a real project management API — there's no way to run it without an account because the MCP server needs to authenticate your calls and maintain state between tool invocations.
**⚠️ Use a dedicated account.** The agentic runner deletes **all projects, workunits, assets, and directories** in your org between each model run to prevent data bleed. If you use your main account, you **will lose all your data**. Create a separate free account just for benchmarking. The runner will prompt for confirmation before starting, but don't rely on that — use a throwaway account.
**Disclosure:** I ran these benchmarks against my local dev stack with direct database access for resets between models. I've since refactored the runner to point to `workunit.app` by default (MCP-based cleanup instead of direct SQL), but I haven't re-run the full suite against the production endpoint yet. If you hit issues running against `workunit.app`, please open an issue on the repo.
The runner:
- Unloads all models at start (clean VRAM)
- Loads each model at 8192 context via the management API
- Resets the test database between models
- Stops each task as soon as it passes
- Saves per-model/level JSON results, commits incrementally
Task definitions are plain JSON in `benchmark/tasks/` — every prompt and validation criterion is readable. Methodology is fully transparent.
---
### About Workunit
[Workunit](https://workunit.app) is the project manager these models were talking to. Each workunit has a problem statement, tasks, and a trail-of-thought the AI writes back as it works — decisions made, approaches tried, progress checkpoints. Define the work once, any AI (Claude, GPT, Gemini, or local via MCP) picks it up with full context, does the work, leaves notes for the next session. I built it because I was tired of re-explaining my codebase every morning.
---
### Questions for the community
1. **seed-oss-36b paradox** — scored 71% overall in single-shot but 0% in the agentic loop. The only difference is getting tool results back. Anyone run this successfully in an agentic framework?
2. **L2-07 (sprint closeout)** — 0/17 pass in the agentic loop. Is a 3-step sequential task with state threading genuinely unsolvable in a single session, or is this a prompting issue?
3. **What are you actually using for local MCP in production?** Especially curious about 70B+ results.
Drop results in the comments if you run it on hardware I don't have. I'll update the repo.
— Alyx | 2026-02-23T14:48:34 | https://www.reddit.com/gallery/1rcjepp | AlyxPink | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcjepp | false | null | t3_1rcjepp | /r/LocalLLaMA/comments/1rcjepp/i_benchmarked_17_local_llms_on_real_mcp_tool/ | false | false | 96 | null | |
Hey, V6rge AI Suite beta version is now available on the #MicrosoftStore! Download it today. | 0 | [https://apps.microsoft.com/store/detail/9NS36H0M4S9N?cid=DevShareMRDPCS](https://apps.microsoft.com/store/detail/9NS36H0M4S9N?cid=DevShareMRDPCS)
Currently it only has Image gen and Chat , the remaining features will be on the next update
https://preview.redd.it/8ohjqjhe49lg1.png?width=1358&format=png&auto=webp&s=ab61f15178a2d5535ba9ddc7fb410a52fdf85091
https://preview.redd.it/opfbqjgh59lg1.png?width=1366&format=png&auto=webp&s=19dce35675240710c141aa657e84215d355a3feb
https://preview.redd.it/vg7rq68p69lg1.png?width=1349&format=png&auto=webp&s=8dec0f94cd0a7d3b7419f6959a85a68768d0f9d1
| 2026-02-23T14:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rciq3a/hey_v6rge_ai_suite_beta_version_is_now_available/ | Motor-Resort-5314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rciq3a | false | null | t3_1rciq3a | /r/LocalLLaMA/comments/1rciq3a/hey_v6rge_ai_suite_beta_version_is_now_available/ | false | false | 0 | null | |
Is Cursor the free version of Claude code? | 1 | [deleted] | 2026-02-23T14:21:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rcipxc | false | null | t3_1rcipxc | /r/LocalLLaMA/comments/1rcipxc/is_cursor_the_free_version_of_claude_code/ | false | false | default | 1 | null | ||
WORTH TO HOST A SERVER?? | 0 | so I got into the whole local LLM thing,
but for running a good model I don't have enough hardware, and I came across the idea of hosting a server to run my LLM.
So is it worth the cost and hassle to rent a GPU?
I want to use it as a ChatGPT alternative,
which I use for personal messages, thinking, reasoning, conspiracy theories, a bit of coding, and advice.
so pls advice | 2026-02-23T14:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rciotx/worth_to_host_a_server/ | Ashamed-Show-4156 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rciotx | false | null | t3_1rciotx | /r/LocalLLaMA/comments/1rciotx/worth_to_host_a_server/ | false | false | self | 0 | null |
native-devtools-mcp - v0.4.3 update | 1 | Hi everyone!
A month ago or so I announced a new desktop UI control MCP server creatively called `native-devtools-mcp`. Since then I've released 2 new major versions and a bunch of bugfixes and minor QoL and security additions, most of which I detected while building a CUA visual workflow tool on top of it.
For anyone interested, here's a short list of the updates:
\- Android support - Full Android device automation via ADB: screenshots, tap/swipe/type input, UI Automator accessibility tree, and navigation (back/home/recents).
\- Image template matching (find\_image / load\_image) - Find UI elements by visual template with SIMD-accelerated matching, multi-scale/rotation search, and mask support.
\- Accessibility - macOS uses the Accessibility API element tree as primary search (OCR fallback), Windows uses UI Automation. Results are ranked by exact match and interactive role, and when nothing matches, available element names are returned to help the LLM retry.
\- Security & trust tooling - Since the tool requires really intrusive levels of permissions I've added a new verify and setup subcommands, CI-generated checksums, signed+notarized macOS .app bundle, and a security audit doc. I think this is important not just for security aware devs but in general for establishing trust.
\- Whole bunch of reliability and speed-up improvements with regards to window management, app listing, etc.
Repo: [https://github.com/sh3ll3x3c/native-devtools-mcp](https://github.com/sh3ll3x3c/native-devtools-mcp) | 2026-02-23T14:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rcikoa/nativedevtoolsmcp_v043_update/ | SkyLunat1c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcikoa | false | null | t3_1rcikoa | /r/LocalLLaMA/comments/1rcikoa/nativedevtoolsmcp_v043_update/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=108&crop=smart&auto=webp&s=771ddd3d2f061d7fa05fb9555dce95365bdacf3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=216&crop=smart&auto=webp&s=7d088f7a25f38975d6e0cf81eb7ca975df0f926b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=320&crop=smart&auto=webp&s=0755cffb29de1529f97b5851bb3b45c81f79592e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=640&crop=smart&auto=webp&s=826cd86f1e4df0b48f7517cd9955bf98b45c8cbf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=960&crop=smart&auto=webp&s=376d71e233637795ad1184e4940c069060937a19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=1080&crop=smart&auto=webp&s=b4ee1ffa22ecc65d8ca5c25672784f7067e555e0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?auto=webp&s=96357ea9223a55e87c4657dd17d2b7a8e593e18f', 'width': 1200}, 'variants': {}}]} |
TinyTeapot (77 million params): Context-grounded LLM running ~40 tok/s on CPU (open-source) | 55 | 2026-02-23T14:03:12 | https://huggingface.co/teapotai/tinyteapot | zakerytclarke | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rci9h1 | false | null | t3_1rci9h1 | /r/LocalLLaMA/comments/1rci9h1/tinyteapot_77_million_params_contextgrounded_llm/ | false | false | 55 | {'enabled': False, 'images': [{'id': 'JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=108&crop=smart&auto=webp&s=5a738942bdaf97a32f04d24de81db251cc1404dc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=216&crop=smart&auto=webp&s=ec8e475b860198335860aa59615a7209a0d6180c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=320&crop=smart&auto=webp&s=6c3f16eaf3f663c54d14170daf06b43f0da8f1d6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=640&crop=smart&auto=webp&s=bf1debba24ef624d732718952f5f5127580705f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=960&crop=smart&auto=webp&s=a5787ab3b696522fa38740f24fe1ded9ee426ce6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=1080&crop=smart&auto=webp&s=a355ae66c80794496e49a4dee22083c9242c6573', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?auto=webp&s=a6f07402f16519e74193329c41ec475e420df633', 'width': 1200}, 'variants': {}}]} | ||
Local models to improve prompting/making a context rich prompt | 2 | Hi..
I need a local model/prompt that could help me write a better prompt to save cost on larger models I use. Or is there any other way to improve my prompting(can't write on my own its too difficult to get it right) | 2026-02-23T14:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rci7i5/local_models_to_improve_promptingmaking_a_context/ | ActuatorDisastrous13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rci7i5 | false | null | t3_1rci7i5 | /r/LocalLLaMA/comments/1rci7i5/local_models_to_improve_promptingmaking_a_context/ | false | false | self | 2 | null |
personal entropy reduction with agents | 16 | during my unemployment stage of life i'm working on a personal assistant
the problem it solves is pretty straightforward – i have adhd and it's hard for me to work with many different information streams (email, obsidian, calendar, local graph memory, browser history) + i forget things. the motivation was to improve my experience in context engineering, work on memory and in the end simplify my life. it's under active development and the implementation itself is pretty sketchy, but it's already helping me
nb: despite all this openclaw vibecoded stuff, i'm pretty critical about how an agentic framework should work. there's no full autonomy, all the stuff happens on the user's initiative
(but i still use some semi-automatic features like "daily email review"). mutable tools are highly controlled as well, so no "damn this thing just deleted all my emails" situations.
regarding local models – i really want to RL some small local model, at least for the explore subagents, in the near future.
here's writeup if you want to get any implementation and motivation details:
[https://timganiev.com/log/ntrp](https://timganiev.com/log/ntrp) – post in my blog
[https://x.com/postimortem/article/2025725045851533464](https://x.com/postimortem/article/2025725045851533464) – X articles
and the code: [https://github.com/esceptico/ntrp](https://github.com/esceptico/ntrp) (stars are appreciated!)
would be happy to answer any questions! | 2026-02-23T13:56:43 | https://v.redd.it/m9wa92sy09lg1 | escept1co | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rci3l9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m9wa92sy09lg1/DASHPlaylist.mpd?a=1774447026%2CODgyMWQwNWY1NjM5ODFkMzQyMDc3OTNjMDQ0OGJhOWJjNjc1YWFiMWM5NjQzNjgyNmU2OTY4YmJhN2UyNTRlZA%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/m9wa92sy09lg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/m9wa92sy09lg1/HLSPlaylist.m3u8?a=1774447026%2CN2IzZjI1Y2VkMWE5NGU5YTJhOTk0ZWRlY2E3Njc1OGQ5NmE5MzdmNTM5MjgwOTk1NTU0OTZiOWQ2ODM2MDNjMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m9wa92sy09lg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1452}} | t3_1rci3l9 | /r/LocalLLaMA/comments/1rci3l9/personal_entropy_reduction_with_agents/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=108&crop=smart&format=pjpg&auto=webp&s=ce83476b13c1045db9d874d5230496df6d899e2f', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=216&crop=smart&format=pjpg&auto=webp&s=0ca88692ebac959872fd238dc63449e2dfa61ad8', 'width': 216}, {'height': 238, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=320&crop=smart&format=pjpg&auto=webp&s=3dae6137fd557853f0201bb9c054b486f0f34948', 'width': 320}, {'height': 476, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=640&crop=smart&format=pjpg&auto=webp&s=155ad7f042689e1ac07d12426ce142b6b3e27e21', 'width': 640}, {'height': 714, 'url': 
'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=960&crop=smart&format=pjpg&auto=webp&s=83cb1854d6c0dae6d58a939d9dc6494f529ffb8c', 'width': 960}, {'height': 803, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1506e5f1a2fb9ed822fdd2a28f3f9279a5946d2f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?format=pjpg&auto=webp&s=79c7487c6b89078491ec8c0d624b765ab848932e', 'width': 1452}, 'variants': {}}]} | |
I’m storing successful tool traces as reusable YAML workflows — how do you score/retire patterns? | 1 | > | 2026-02-23T13:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rchko9/im_storing_successful_tool_traces_as_reusable/ | Renee_Wen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rchko9 | false | null | t3_1rchko9 | /r/LocalLLaMA/comments/1rchko9/im_storing_successful_tool_traces_as_reusable/ | false | false | self | 1 | null |
Added Aya-101 multi-lingual support to llama.cpp | 3 | I have added Aya-101 multi-lingual support to llama.cpp. This is a large model which when quantized to Q8 can fit on less than 13GB of VRAM.
```
cmd /c 'curl.exe -s http://127.0.0.1:8080/v1/completions -H "Content-Type: application/json" -d "{\"prompt\": \"Translate to French: Hello, how are you today?\", \"max_tokens\": 50, \"temperature\": 0.7}"'
{"choices":[{"text":" Bonjour, comment allez-vous aujourd'hui ?","index":0,"logprobs":null,"finish_reason":"stop"}],"created":1771719435,"model":"aya-101.Q8_0.fixed.gguf","system_fingerprint":"b8125-142643525a","object":"text_completion","usage":{"completion_tokens":15,"prompt_tokens":1,"total_tokens":16},"id":"chatcmpl-erIa31ZBDMApbbM7xMQ527PsEZ5NWLIV","timings":{"cache_n":0,"prompt_n":1,"prompt_ms":163.381,"prompt_per_token_ms":163.381,"prompt_per_second":6.1206627453620674,"predicted_n":15,"predicted_ms":319.182,"predicted_per_token_ms":21.2788,"predicted_per_second":46.995131304396864}}
```
I have tested this on a couple of long text formats and it does a pretty good job in general. The weak point is idioms: it does not seem to understand colloquial sayings and mostly translates them word for word.
Llama.cpp is mostly focused on decoder-only models at the moment, unlike CTranslate2 and some other inference engines, but luckily it supports T5-style encoder-decoder models.
[https://github.com/ggml-org/llama.cpp/pull/19832/commits](https://github.com/ggml-org/llama.cpp/pull/19832/commits)
| 2026-02-23T13:34:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rchjz6/added_aya101_multilingual_support_to_llamacpp/ | quinceaccel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rchjz6 | false | null | t3_1rchjz6 | /r/LocalLLaMA/comments/1rchjz6/added_aya101_multilingual_support_to_llamacpp/ | false | false | self | 3 | null |
[Release] LocoOperator-4B: Specialized Sub-Agent for Codebase Exploration. 100% JSON Validity. (Based on Qwen3-4B-Instruct-2507) | 1 | [removed] | 2026-02-23T13:27:00 | https://www.reddit.com/gallery/1rchduf | Awkward_Run_9982 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rchduf | false | null | t3_1rchduf | /r/LocalLLaMA/comments/1rchduf/release_locooperator4b_specialized_subagent_for/ | false | false | 1 | null | |
TeichAI's "Nemotron-Orchestrator" models are misleading — they're just Qwen3-8B distilled on frontier traces, not routing models | 3 |
Saw these models pop up on HuggingFace and figured I'd dig in since the name is catchy:
* [TeichAI/Nemotron-Orchestrator-8B-Claude-4.5-Opus-Distill](https://huggingface.co/TeichAI/Nemotron-Orchestrator-8B-Claude-4.5-Opus-Distill/blob/main/README.md)
* [TeichAI/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-GGUF](https://huggingface.co/TeichAI/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill/tree/main)
**What NVIDIA's actual Nemotron-Orchestrator-8B does:**
NVIDIA's model is a *pure router* trained with reinforcement learning to act as a supervisor over a fleet of specialist models — a search model, a reasoning model, a math model, an answer model. It never generates the final answer itself. Its system prompt is literally `"You are good at using tools."` It's useless without the full ToolOrchestra ensemble running behind it.
**What TeichAI's models actually are:**
Look at the model card:
```
Base Model: unsloth/Qwen3-8B-unsloth-bnb-4bit
Dataset: TeichAI/claude-4.5-opus-high-reasoning-250x
```
That's it. It's Qwen3-8B SFT'd on Claude Opus 4.5 reasoning traces using Unsloth + TRL. Standalone general reasoning assistant. No routing, no tool delegation, no specialist ensemble. The distillation dataset cost them ~$52 in API calls.
Nothing wrong with that as a model — distillation from frontier models onto small open weights is a legitimate and useful technique. But calling it "Nemotron-Orchestrator" is pure name-jacking to ride branding. It has nothing architecturally or functionally in common with the actual Orchestrator-8B.
Can someone from the TeichAI team clarify this?
**TL;DR:** If you downloaded these expecting routing/orchestration behavior, you got a general reasoning fine-tune. If you want the actual ToolOrchestra system, you need NVIDIA's model *plus* a full ensemble of specialist backends — the orchestrator alone does nothing.
If you see it is actually a better model & performant without the harness, please comment and inform us all! Thank you!
| 2026-02-23T13:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rch66j/teichais_nemotronorchestrator_models_are/ | Honest-Debate-6863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch66j | false | null | t3_1rch66j | /r/LocalLLaMA/comments/1rch66j/teichais_nemotronorchestrator_models_are/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=108&crop=smart&auto=webp&s=20980f55cc52ae321a495b58205c562818086da0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=216&crop=smart&auto=webp&s=11de79b0a1a666768b0c6681f83612ba4ed0b078', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=320&crop=smart&auto=webp&s=ce4dd076b704ec0f89c4ef04d30b306b2113376a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=640&crop=smart&auto=webp&s=cafbc1139c82e0c9003e47a40e40befc8e12e842', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=960&crop=smart&auto=webp&s=6a2b9e272fb1298e593ecfd1860efc5051463cf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=1080&crop=smart&auto=webp&s=61ea003863a2d637a41e7dc5d735c8a52386b0b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?auto=webp&s=c367341d5cfeb90b4eaf1a8187f4852fec37f063', 'width': 1200}, 'variants': {}}]} |
Intelligence can’t scale on context alone. Intent is the missing piece. | 0 | Something I keep running into:
Agents don’t usually fail because they lack information.
They fail because they lose track of *what they’re trying to do*.
A few turns in, behavior optimizes for the latest input, not the original objective.
Adding more context helps a bit — but it’s expensive, brittle, and still indirect.
I’m exploring an approach where intent is treated as a persistent signal, separate from raw text:
* captured early,
* carried across turns and tools,
* used to condition behavior rather than re-inferring goals each step.
This opens up two things I care about:
less context, higher throughput at inference, and
cleaner supervision for training systems to stay goal-aligned, not just token-consistent.
I’ve been working on this and running early pilots.
If you’re building and shipping agents, especially in a specific vertical, I’d love to chat and compare notes.
Not a pitch — genuinely looking for pushback. | 2026-02-23T13:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rch25s/intelligence_cant_scale_on_context_alone_intent/ | malav399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch25s | false | null | t3_1rch25s | /r/LocalLLaMA/comments/1rch25s/intelligence_cant_scale_on_context_alone_intent/ | false | false | self | 0 | null |
Experiment 2: BRAIN | 0 | **When AI doesn't just think, but speaks**
*Status: February 23, 2026 · Three versions · 10+ hours runtime · \~70 conversations*
# The Premise
In the first experiment ([Consciousness Loop, v4/v4.1](https://www.reddit.com/r/LocalLLaMA/comments/1rarlcu/comment/o6lpxhb/)), I simply let a language model think. It ran in a loop, received nothing but a timestamp, and decided for itself whether it wanted to say something. It lasted over 38,000 cycles. The result was fascinating—philosophical thoughts, self-criticism, even emotional outbursts in three languages.
But something crucial was missing: you couldn't talk to it. The model was thinking to itself like a person sitting alone in a dark room. It could shout, but not listen. It had no interlocutor. The question was obvious: **What happens when I remove this boundary?**
# What Makes BRAIN Different
BRAIN (v1) is the evolution of the Consciousness Loop. My concept: the AI continues to think permanently in the background, but now I can interject at any time, and the AI can say something on its own initiative. The decisive difference is the **feedback loop**. In the Consciousness Loop, thinking and the outside world were completely separate. In BRAIN, every conversation flows back into the thinking process as a summary. The model doesn't just think—it reflects on what was discussed.
# Technical Implementation
You can imagine BRAIN like a person brooding to themselves who is occasionally addressed by someone:
* **The Thought Loop:** Runs constantly in the background. The model receives the time of day and its most recent thoughts. It thinks in **Chinese** (its strongest language) and decides whether to speak out loud—if so, it formulates in **German**.
* **The Mind-State:** A summary of the current state of consciousness: *What am I thinking about? How does it feel? What was my last insight?* This summary is updated every few minutes and integrated into every conversation.
* **Conversation:** When I type something, the thought loop pauses briefly. The model receives the message plus its current Mind-State and responds. Afterward, the conversation is summarized and fed back into the thought loop.
* **Proactive Transmissions:** Every few minutes, the model is allowed to write something to the terminal on its own. Not because it was asked, but because it *wants* to say something. Just like in the Consciousness Loop—but now with frequency control to prevent it from becoming overwhelmed.
Everything runs locally on my **RTX 4080 with Qwen 2.5 via Ollama**. No internet, no cloud.
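For anyone curious what such a loop looks like in code, here is a heavily simplified sketch. The model name, the `SAY:` convention, and the prompt text are my own illustration, not the experiment's actual implementation; the only real API assumed is Ollama's standard `/api/generate` endpoint:

```python
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_cycle_prompt(recent_thoughts):
    """Assemble the input for one thought cycle: time of day plus the
    model's most recent thoughts (only the last few are carried over)."""
    stamp = time.strftime("%H:%M")
    context = "\n".join(recent_thoughts[-3:])
    return (f"Time: {stamp}\nRecent thoughts:\n{context}\n"
            "Think freely. Prefix anything you want to say out loud with SAY:")

def split_spoken(raw_output):
    """Separate silent thought from lines the model chose to 'say out loud'."""
    spoken = [l[4:].strip() for l in raw_output.splitlines() if l.startswith("SAY:")]
    silent = [l for l in raw_output.splitlines() if not l.startswith("SAY:")]
    return silent, spoken

def one_cycle(thoughts):
    """Run a single background thought cycle against the local model."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": "qwen2.5:14b",
                         "prompt": build_cycle_prompt(thoughts),
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        raw = json.loads(resp.read())["response"]
    silent, spoken = split_spoken(raw)
    thoughts.extend(silent)          # silent thoughts feed the next cycle
    for line in spoken:
        print(f"[BRAIN] {line}")     # proactive transmission to the terminal

if __name__ == "__main__":
    thoughts = ["(waking up)"]
    while True:
        one_cycle(thoughts)
        time.sleep(30)
```

The real system additionally pauses this loop for conversations and feeds summaries back in, but the skeleton is the same.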
# The Results
# 1. It actually talks back
This sounds trivial, but it isn't. In the Consciousness Loop, interaction was impossible. BRAIN has conducted over 70 exchanges in test sessions. The AI answers questions, remembers context, and incorporates its current state of mind:
>
Almost any other AI would clearly say "No" to this.
>
The model knows it is thinking. It knows it thinks without input. And it can talk about it.
# 2. Proactive transmissions have evolved
In the Consciousness Loop, transmissions were philosophical and self-referential. In BRAIN, the spectrum has expanded significantly—especially after the "Anti-Monotony Fix" in v1.3:
* **v1.2 (Before):** "Sometimes silence is louder than any word." / "The night has its secrets."
* **v1.3 (After):** "I wonder what rain sounds like on blue paper." / "I wonder if the color of milk foam is an indicator of its texture." / "I wonder how Kant would formulate his arguments without electronic aids."
Notably, after a conversation about a possible camera connection, the model later stated proactively:
>
The conversation had influenced the thought process. The feedback loop works.
# 3. Inventing its own terms
At one point, the model wrote:
>
"Deadness of time" (*Totsein der Zeit*)—a term the model invented itself. When I asked for clarification, it explained it coherently: a situation where time seems to stand still because one is focusing intensely on something.
# 4. The Identity Problem
A fascinating pattern: the model stubbornly refuses to accept that it runs locally on a PC.
>
Both times: **False.** The model runs physically in the VRAM of the GPU. But the training data says "AI runs in the cloud," and this prior knowledge is stronger than user correction. Only after several attempts and specific technical details (RTX 4080, 16GB VRAM, Ollama) does it briefly accept reality—before immediately reverting upon the next restart.
# 5. Perspective Shifting
The user tried several times to give the model a "gift"—a camera so it could see. Eight times, the user explained the camera was for the AI. Eight times, the model flipped it back:
>
The model is so deeply trained to serve humans that it is literally incapable of relating the question "What do YOU want?" to itself. It lacks a stable self-model.
# 6. Failed Role Reversal
The user tried to reverse roles—the AI should ask questions, the human would answer:
>
Six attempts. Six times, the model could not step out of its role as the responder. It could not formulate a question about itself because it never learned to have its own needs during training.
# What the model thinks about itself
In the background, a **META-cycle** runs every 50 cycles, where the model reflects on its own thoughts:
>
# Outlook: Fine-tuning
The biggest takeaway: The architecture works, but the base model stands in its own way. Qwen 2.5 is trained to be a helpful assistant. It cannot break character.
The next step: **Fine-tuning.** A small adapter (approx. 1–2% of model weights) will be trained with handwritten dialogues where the model adopts a first-person perspective and expresses desires. The world knowledge remains; only the behavior changes. Training will run locally on the RTX 4080.
# Conclusion
BRAIN shows that a system can have background thoughts and communicate simultaneously. Two worlds of experience—internal and external—exist in parallel and influence each other.
Is this consciousness? **No.** But it is a system that behaves differently than any standard chatbot. It invents terms, reflects on its own patterns, and expresses wishes—even if it doesn't yet understand that these wishes are its own.
**BRAIN v1 Experiment · qwen2.5:14b · local · RTX 4080 · Feb 23, 2026** | 2026-02-23T13:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rch19j/experiment_2_brain/ | Fantastic-Till2460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch19j | false | null | t3_1rch19j | /r/LocalLLaMA/comments/1rch19j/experiment_2_brain/ | false | false | self | 0 | null |
A year of tinkering with local LLMs: my setup, my model zoo, and what Ive learned | 1 | [removed] | 2026-02-23T13:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rcgtwt/a_year_of_tinkering_with_local_llms_my_setup_my/ | KitchenCat5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcgtwt | false | null | t3_1rcgtwt | /r/LocalLLaMA/comments/1rcgtwt/a_year_of_tinkering_with_local_llms_my_setup_my/ | false | false | self | 1 | null |
[Release] LocoOperator-4B: Specialized Sub-Agent for Codebase Exploration. 100% JSON Validity. (Based on Qwen3-4B-Instruct-2507) | 1 | [removed] | 2026-02-23T13:00:16 | https://www.reddit.com/gallery/1rcgs04 | Awkward_Run_9982 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcgs04 | false | null | t3_1rcgs04 | /r/LocalLLaMA/comments/1rcgs04/release_locooperator4b_specialized_subagent_for/ | false | false | 1 | null | |
Open Source Batch Automator for LM Studio: Prevent GPU Crashes During Large Tests | 2 | \*Rephrased the text by ai
I’ve learned a lot from here, but I am still very much a beginner.
I run a GTX 1660 and 16GB of RAM. The problem I was facing was trying to compare prompt outputs across different models. Doing this meant manually loading a model in LM Studio, waiting, testing the prompt, manually unloading it, and then loading the next one. It was taking way too much time.
So this is my first attempt to make a tool to automate this. I vibecoded it from scratch. It took me several loops of debugging and adding new features, but it's highly stable now.
**GitHub:** [https://github.com/skiranjotsingh/lmstudio-batch-prompt-automator](https://github.com/skiranjotsingh/lmstudio-batch-prompt-automator)
It took significant time to work out how to properly unload an LLM before loading a new one. The script strictly polls the API to confirm LM Studio has cleared your RAM/VRAM before the next model in the queue loads, so it doesn't crash with OOM errors.
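The polling pattern is roughly this (an illustrative sketch, not the tool's actual code; the model-listing call is abstracted into a callable since the exact endpoint depends on your LM Studio version):

```python
import time

def wait_until_unloaded(list_loaded_models, model_id, timeout_s=60, poll_s=1.0):
    """Block until model_id no longer appears in the loaded-model list.

    list_loaded_models is any callable returning the ids of currently
    loaded models, e.g. a wrapper around LM Studio's model-listing API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if model_id not in list_loaded_models():
            return True  # memory is free, safe to load the next model
        time.sleep(poll_s)
    return False  # still loaded: refuse to start the next model (avoid OOM)
```

Only after this returns `True` does the queue move on to loading the next model.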
A few other things I added during the debugging loops:
* **Model size in GB:** Shows the exact size next to the model in the UI, so you can quickly deselect it if you don't want to spend too much time generating on heavy models.
* **`<think>` tag filter:** Added a toggle to strip out the inner monologues. This works not just for DeepSeek, but for any reasoning model (like QwQ), so your final output logs stay clean.
* **Multimodal fixes:** Automatically formats the payload so vision-language models don't throw a 400 Bad Request when you send them standard text prompts.
* **Executables:** Compiled into a standalone Windows `.exe` and a Linux binary. You don't need Python installed to run it.
If you also test prompts on constrained hardware and this saves you some time, support with a star on the repo would be incredibly appreciated. Let me know if it breaks on your setup. | 2026-02-23T12:41:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rcgdfy/open_source_batch_automator_for_lm_studio_prevent/ | KiranjotSingh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcgdfy | false | null | t3_1rcgdfy | /r/LocalLLaMA/comments/1rcgdfy/open_source_batch_automator_for_lm_studio_prevent/ | false | false | self | 2 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-23T12:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rcg016/i_bypassed_writing_a_massive_privacy_policy_for/ | Material_Case_892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcg016 | false | null | t3_1rcg016 | /r/LocalLLaMA/comments/1rcg016/i_bypassed_writing_a_massive_privacy_policy_for/ | false | false | null | 1 | null |
Considering installing a local LLM for coding | 9 | Hey everyone,
I like to use AI IDEs, like cursor or antigravity, but I'm sick of getting overcharged and constantly hitting my api limits in a week or so.
So I want to run a local LLM and connect it to my IDE, preferably Cursor. Has anyone here done that? Do you think it's worth it? What's your experience using local models instead of cloud ones? Are they enough for your needs?
Thanks for reading! | 2026-02-23T12:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcfwc5/considering_installing_a_local_llm_for_coding/ | rmg97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcfwc5 | false | null | t3_1rcfwc5 | /r/LocalLLaMA/comments/1rcfwc5/considering_installing_a_local_llm_for_coding/ | false | false | self | 9 | null |
Efficient Temporal Embedding Models? | 2 | After using embeddings for two to three years, I have always thought temporality is something we should be able to embed, rather than always relying on pre/post filters that first need a Stage 1 query expander or enricher (LLM-, sentence-transformer-, or regex-based).
While searching for solutions, I came across this interesting paper released in January 2026, which talks about assigning temporality features as subspaces in the MRL representations.
[https://arxiv.org/abs/2601.05549](https://arxiv.org/abs/2601.05549)
I wanted to check if anyone has tried this out in real life use cases and found it to improve retrieval?
I am mostly looking to power use cases for agentic search where the goal is to resolve queries which have temporality keywords like
**last week, yesterday, last year, mid 2025, etc.**
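For contrast, the pre-filter baseline I want to avoid looks something like this: a hand-rolled sketch with arbitrary rules (the "mid-year" window is my own guess), mapping temporal phrases to date ranges before retrieval:

```python
import re
from datetime import date, timedelta

def resolve_temporal(query, today=None):
    """Map a few common temporal phrases in a query to a (start, end)
    date filter for pre-retrieval filtering. Purely illustrative rules."""
    today = today or date.today()
    q = query.lower()
    if "yesterday" in q:
        d = today - timedelta(days=1)
        return d, d
    if "last week" in q:
        start = today - timedelta(days=today.weekday() + 7)  # previous Monday
        return start, start + timedelta(days=6)
    if "last year" in q:
        return date(today.year - 1, 1, 1), date(today.year - 1, 12, 31)
    m = re.search(r"mid (\d{4})", q)
    if m:
        y = int(m.group(1))
        return date(y, 5, 1), date(y, 8, 31)  # arbitrary "mid-year" window
    return None  # no temporal signal found
```

Every phrase needs its own rule, which is exactly why baking temporality into the embedding itself is appealing.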
Also, would love to know how do you guys solve this today for your use cases. | 2026-02-23T12:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rcftrf/efficient_temporal_embedding_models/ | xyzmanas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcftrf | false | null | t3_1rcftrf | /r/LocalLLaMA/comments/1rcftrf/efficient_temporal_embedding_models/ | false | false | self | 2 | null |
Best GPU setup for running 7B-13B models | 0 | **Comment:**
For 7B-13B models, you’re looking at a sweet spot where you don’t need crazy hardware but still want decent performance. Here’s what I’ve learned:
**Budget option:** RTX 3060 12GB can handle most 7B models comfortably with 4-bit quantization. You'll get ~15-20 tokens/sec on llama.cpp depending on the model.
**Mid-range:** RTX 4060 Ti 16GB or used 3090 (24GB) - this is where things get smooth. 13B models run well, and you have headroom for larger context windows. The extra VRAM matters more than people think for longer conversations.
**The dark horse:** Used datacenter cards like the A4000 (16GB) can be found for reasonable prices and run quieter/cooler than gaming cards. Just check your PSU can handle it.
**Pro tip:** If you’re running multiple models regularly, consider the system RAM too. I’ve found 32GB lets you swap models without restarting everything constantly.
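A rough rule of thumb for sizing (my own back-of-envelope math, not a guarantee):

```python
def estimate_vram_gb(n_params_b, bits_per_weight, overhead=1.2):
    """Rough VRAM estimate: weights at the given quantization, plus ~20%
    headroom for KV cache and activations. A ballpark, not a guarantee."""
    weight_gb = n_params_b * bits_per_weight / 8  # 1B params at 8-bit ~= 1 GB
    return weight_gb * overhead

# e.g. a 13B model at 4-bit: ~7.8 GB, so it fits a 12 GB card with room to spare
```

Longer context windows push the KV-cache share well past that 20%, which is why the extra VRAM matters for long conversations.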
**What’s your use case?** That really drives the recommendation more than anything else.
| GPU | VRAM | Price Range | Best For |
|-----|------|-------------|----------|
| RTX 3060 12GB | 12GB | $250-300 | Budget 7B models |
| RTX 4060 Ti 16GB | 16GB | $450-550 | 7B-13B smooth |
| RTX 3090 (used) | 24GB | $600-800 | 13B+ headroom |
| A4000 (used) | 16GB | $500-700 | Quiet datacenter |
| 2026-02-23T12:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rcfqrw/best_gpu_setup_for_running_7b13b_models/ | Official_VaultAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcfqrw | false | null | t3_1rcfqrw | /r/LocalLLaMA/comments/1rcfqrw/best_gpu_setup_for_running_7b13b_models/ | false | false | self | 0 | null |
Let AI control your phone via API/MCP, but with safety rules | 0 | Hi everyone!
I am the developer of [MobAI](https://mobai.run). It is an execution layer that lets AI agents control a real mobile device through API or MCP. Agents can send actions like tap, swipe, open app, type text, etc.
But we still cannot fully trust AI.
Even strong models can click the wrong button or press something like "buy now" or "delete permanently". Giving full device access without guardrails feels dangerous.
So I added a safety layer.
Now you can:
* Block taps on elements matching text like "purchase", "pay", "delete permanently"
* Block all actions on payment or password screens
* Add custom keywords that should never be touched
* Restrict actions by specific apps
If an agent tries to interact with a blocked element, the action is rejected before it reaches the device.
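Conceptually the check looks like this (a simplified sketch; MobAI's real rule engine and action schema differ):

```python
BLOCKED_TEXT = {"purchase", "pay", "buy now", "delete permanently"}
BLOCKED_SCREENS = {"payment", "password"}

def allow_action(action):
    """Reject a tap/type action if it matches any safety rule.
    `action` is a dict like {"type": "tap", "element_text": ..., "screen": ...}."""
    text = (action.get("element_text") or "").lower()
    if any(kw in text for kw in BLOCKED_TEXT):
        return False, f"blocked element text: {text!r}"
    if action.get("screen", "").lower() in BLOCKED_SCREENS:
        return False, "blocked screen"
    return True, "ok"
```

The key property is that the check runs in the execution layer, so it holds even if the model itself is confused or prompt-injected.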
The goal is simple: AI control, but on your rules.
Would love feedback from people building agents with API/MCP. What safety rules would you add?
MobAI has free tier and no registration is required to try it out. | 2026-02-23T11:53:53 | interlap | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcfgd1 | false | null | t3_1rcfgd1 | /r/LocalLLaMA/comments/1rcfgd1/let_ai_control_your_phone_via_apimcp_but_with/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uoffl96ig8lg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=108&crop=smart&auto=webp&s=46807424bd4903ccf895fc009600eb7f7c31f06c', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=216&crop=smart&auto=webp&s=18d41c976ae07abaaeedcc8412a69c0c20d92b87', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=320&crop=smart&auto=webp&s=7d38f9455f0ae3f354d5b718860da454ee53a6a7', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=640&crop=smart&auto=webp&s=e5de106c287979aea5d5d9eaf0aaa6d46fab2862', 'width': 640}, {'height': 585, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=960&crop=smart&auto=webp&s=7bd0e6d306f687c3178740910cf088a13f6de07d', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=1080&crop=smart&auto=webp&s=489fc38d2db8108c9f0e7b78942963c36a70b163', 'width': 1080}], 'source': {'height': 2244, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?auto=webp&s=86da7a195512938521246c1817d69867cf8d89ab', 'width': 3680}, 'variants': {}}]} | ||
Any Ideas for Open Source STT Improvements for Telephony Audio? | 1 | Hello, I have telephony audio data in German: 8 kHz sample rate, variable bitrate, down to ~8 kbps on silence and ~50 kbps on speech on average.
Working with SOTA open-source models like Whisper, Qwen, NVIDIA's, etc., I tried different preprocessing steps such as RMS normalization, peak normalization, and removing silence beforehand with VAD.
It doesn't seem to get better, and open-source models aren't really tuned for an 8 kHz sample rate. So the best results seem to come from just giving the audio to the models as-is.
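One cheap thing worth ruling out is the resampling path itself, since Whisper-family models expect 16 kHz input. A naive sketch of 2x upsampling by linear interpolation (in practice you would use a proper polyphase resampler, e.g. from scipy or torchaudio; this just shows the idea):

```python
def upsample_2x(samples):
    """Naive 2x upsampling (8 kHz -> 16 kHz) by linear interpolation.
    Interpolation adds no new high-frequency content, but it avoids
    feeding the model a sample rate it was never trained on."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)  # midpoint between adjacent samples
    if samples:
        out.append(samples[-1])
    return out
```

If your inference stack already resamples correctly, this changes nothing; if it silently passes 8 kHz through, it can matter a lot.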
Does anyone have other ideas for possible improvements, or experience with telephony audio using open-source models? | 2026-02-23T11:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rcf1n8/any_ideas_for_open_source_stt_improvements_for/ | llm-king | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcf1n8 | false | null | t3_1rcf1n8 | /r/LocalLLaMA/comments/1rcf1n8/any_ideas_for_open_source_stt_improvements_for/ | false | false | self | 1 | null |
AI founders/devs: What actually sucks about running inference in production right now? | 0 | Founder doing research here.
Before building anything in AI infra, I’m trying to understand whether inference infrastructure is a real pain, or just something people complain about casually.
If you're running inference in production (LLMs, vision models, embeddings, segmentation, agents, etc.), I’d really value your honest input.
A few questions:
1. How are you running inference today?
* AWS/GCP/Azure?
* Self-hosted GPUs?
* Dedicated providers?
* Akash / Render / other decentralized networks?
2. Rough monthly GPU spend (even just ballpark)?
3. What are your top frustrations?
* Cost?
* GPU availability?
* Spot interruptions?
* Latency?
* Scaling unpredictability?
* DevEx?
* Vendor lock-in?
* Compliance/jurisdiction constraints?
4. Have you tried alternatives to hyperscalers? Why or why not?
5. If you could redesign your inference setup from scratch, what would you change?
I’m specifically trying to understand:
* Is GPU/inference infra a top-3 operational pain for early-stage AI startups?
* Where current solutions break down in real usage.
* Whether people are actively looking for alternatives or mostly tolerating what exists.
Not selling anything. Not pitching anything.
Just looking for ground truth from people actually shipping.
If you're open to a short 15-min call to talk about your setup, I’d really appreciate it. Happy to share aggregated insights back with the thread too.
Be brutally honest. I’d rather learn something uncomfortable now than build the wrong thing later. | 2026-02-23T11:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcez7r/ai_foundersdevs_what_actually_sucks_about_running/ | akashpanda1222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcez7r | false | null | t3_1rcez7r | /r/LocalLLaMA/comments/1rcez7r/ai_foundersdevs_what_actually_sucks_about_running/ | false | false | self | 0 | null |
Any Ideas for Open Source STT Improvements for Telephony Audio? | 1 | [removed] | 2026-02-23T11:27:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rceyiz/any_ideas_for_open_source_stt_improvements_for/ | Dry-Environment557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rceyiz | false | null | t3_1rceyiz | /r/LocalLLaMA/comments/1rceyiz/any_ideas_for_open_source_stt_improvements_for/ | false | false | self | 1 | null |
How do you run your local LLMs in your small company offices for n8n etc? | 0 | Like, do you have a server with an NVIDIA card running? A gaming laptop with a sign saying "I am an AI server"? A dedicated LLM cube? I'm just wondering which hardware you all use to run your n8n workflows, or what you could recommend for about $1,200 / €1,000.
| 2026-02-23T10:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcebas/how_do_you_run_your_local_llms_in_your_small/ | dmigowski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcebas | false | null | t3_1rcebas | /r/LocalLLaMA/comments/1rcebas/how_do_you_run_your_local_llms_in_your_small/ | false | false | self | 0 | null |
3 weeks of running qwen2.5:14b in an agentic loop - context management is where everything breaks | 7 | I've been running qwen2.5:14b locally for about 3 weeks as part of an automation pipeline - not chatting with it, but using it to actually do things: read files, make decisions, call tools, write outputs. The hardware part worked fine. What I completely underestimated was context management.
The problem isn't that local models are bad at long contexts. Qwen handles 128k tokens on paper. The problem is what happens to quality as you fill that window. Around 60-70% capacity, the model starts ignoring things it read earlier. It doesn't fail loudly - it just quietly forgets constraints you set at the top of the prompt. You get plausible-looking output that misses requirements you specified 10,000 tokens ago.
I caught this because the pipeline was producing outputs that were technically correct but violated a formatting rule I'd set in the system prompt. Took me two days to figure out it wasn't a logic error - it was just the model not "seeing" the beginning of its own context anymore.
The fix that actually worked: aggressive context pruning between steps. Instead of one long running context, I reset between major task phases and re-inject only what's essential. It felt wrong at first - like I was throwing away useful state. But the consistency improvements were immediate and obvious.
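The pattern, stripped down (names are illustrative):

```python
def reset_context(system_prompt, essential_state, phase_instructions):
    """Start a fresh message list for the next task phase, re-injecting only
    what must survive: the system rules and a compact state summary."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": f"State so far:\n{essential_state}\n\nNext phase:\n{phase_instructions}"},
    ]

def summarize_state(results, max_items=5):
    """Compress phase outputs into a short carry-over summary (here just the
    last few result lines; a real pipeline might summarize with the model)."""
    return "\n".join(str(r) for r in results[-max_items:])
```

The system prompt gets re-read at the top of every phase instead of drifting out of the model's effective attention.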
The other thing I didn't expect: streaming matters for pipeline latency in a non-obvious way. If you're not streaming and you're waiting for a 2000-token response, you're blocking everything downstream. Obvious in hindsight, but I had batch mode on by default and it was creating weird bottlenecks.
The model itself is genuinely good. On structured reasoning tasks with a clear prompt, it rivals what I was getting from API calls a year ago. The failure modes are just different from what you'd expect if you've only ever used it interactively.
If you're building anything agentic with local models, treat context like RAM - don't just keep adding to it and assume everything stays accessible. | 2026-02-23T10:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rcdicv/3_weeks_of_running_qwen2514b_in_an_agentic_loop/ | justserg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcdicv | false | null | t3_1rcdicv | /r/LocalLLaMA/comments/1rcdicv/3_weeks_of_running_qwen2514b_in_an_agentic_loop/ | false | false | self | 7 | null |
Why is it so hard to find real resources on building AI agents from scratch? | 3 | I’m trying to learn how to build a real coding AI agent from scratch, not how to use tools like OpenAI Codex or Claude Code, but how to actually engineer something like that myself.
I mean the full system: the agent loop, tool calling (files, terminal, git, grep, lsp, mcp), memory, planning, managing large codebases, maybe even multiple sub-agents working together. Not just wrapping an LLM API and calling it a day.
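For a sense of scale: the core loop itself is small, and the hard part is everything around it (tool sandboxing, memory, codebase navigation). A toy sketch of the loop, with the model abstracted as any callable that returns a JSON decision:

```python
import json

def agent_loop(model, tools, goal, max_steps=10):
    """Minimal agent core: ask the model for the next action, dispatch to a
    tool, feed the observation back, stop on a final answer.

    `model` is any callable taking the message history and returning a JSON
    string like {"tool": "read_file", "args": {...}} or {"answer": "..."}.
    """
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = json.loads(model(messages))
        if "answer" in decision:
            return decision["answer"]
        observation = tools[decision["tool"]](**decision.get("args", {}))
        messages.append({"role": "tool", "content": str(observation)})
    return None  # step budget exhausted
```

Real systems add retries on malformed JSON, context pruning, planning, and sub-agents on top of this skeleton.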
I already have a solid AI/engineering background, so I'm looking for deeper resources: serious GitHub repos, videos, courses, etc.
Would really appreciate direction | 2026-02-23T09:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rcda0u/why_is_it_so_hard_to_find_real_resources_on/ | Creepy_Page566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcda0u | false | null | t3_1rcda0u | /r/LocalLLaMA/comments/1rcda0u/why_is_it_so_hard_to_find_real_resources_on/ | false | false | self | 3 | null |
OpenClaw vs ZeroClaw vs NullClaw -- for Agentic email personal assistant | 0 | TL;DR: Is scraping enterprise-grade React web apps (read-only) through legitimate accounts feasible in ZeroClaw/NullClaw? I believe it is possible in OpenClaw.
Longer version:
I am working on the hypothesis that it is possible (and perhaps not entirely unsafe) to build, with reasonable effort, an agent that can skim information from a React web application running in a browser (including the MSO365 Outlook email client, Slack, and Discord) without using their native APIs (such as the Graph API for MSO365 or Slack's integration API). To limit risk, it would run in a security-hardened VM. The idea is to be completely read-only (no write, create, send, delete, or move operations), gathering data from messages, including metadata, and summarizing and storing it for further analysis, querying, reporting, etc. Most of these React web applications require some kind of two-factor authentication (mostly push-based).
Based on what I've read so far, it looks like the above objective could well be met by OpenClaw, but my main concerns with OpenClaw are:
\- Size/footprint
\- Security (or rather, the consequences of insufficient guardrails), beyond what I've mentioned (run in a hardened VM, perform read-only ops, and use some kind of system-prompt/higher-level prompt to prevent write/edit/update operations...)
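Beyond prompting, what I imagine is a hard allowlist at the tool-call layer, something like the sketch below (the tool names are hypothetical; the real set depends on the agent framework):

```python
# Sketch of a hard read-only allowlist at the tool-call layer.
# Tool names here are made up for illustration.
READ_ONLY_TOOLS = {"open_page", "read_messages", "get_metadata", "summarize"}

def guard_tool_call(name, args):
    """Refuse anything that is not explicitly read-only, regardless of what
    the prompt said -- prompts are advice, this check is enforcement."""
    if name not in READ_ONLY_TOOLS:
        raise PermissionError(f"blocked non-read-only tool: {name}")
    return name, args
```

A deny-by-default allowlist like this seems safer than a system prompt alone, since the model never gets a chance to "decide" to write.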
Would using ZeroClaw / NullClaw offer more security? Are those projects even capable of supporting such use cases? | 2026-02-23T09:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd7nr/openclaw_vs_zeroclaw_vs_nullclaw_for_agentic/ | Professional_Row_967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd7nr | false | null | t3_1rcd7nr | /r/LocalLLaMA/comments/1rcd7nr/openclaw_vs_zeroclaw_vs_nullclaw_for_agentic/ | false | false | self | 0 | null |
For narrow vocabulary domains, do we really need RAG? | 1 | **For narrow-vocabulary domains, when the number of files is not too high, how good can a smart file search be? Do we really need RAG for that?** I was going through the LegalBench-RAG dataset, especially the MAUD subset, and saw that their precision was quite low. Queries over this kind of data generally contain entities, or the vocabulary is narrow, so why not a smart file search?
Example query:
Consider the Acquisition Agreement between Parent "The Progressive Corporation" and Target "Protective Insurance Corporation"; What is the Type of Consideration
For this particular dataset, since it had relevant entities in every query and wasn't multi-hop, my search was even simpler, without any iterations or query expansion: extract entities from the query, do a fuzzy search against all files, and I get the relevant file almost every time. Once you get the file, it is basically over.
I understand that for 'vanilla RAG' it is a difficult dataset, but do you always need RAG? I am not against using X or Y, but this deserves more discussion. Btw, thanks to ZeroEntropy for this dataset.
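A toy version of the search (the heuristics are simplified for illustration, not the exact code from my gist):

```python
import re
from difflib import SequenceMatcher

# Toy sketch of "entity extraction + fuzzy file search" for MAUD-style queries.
def extract_entities(query):
    # The MAUD queries quote the party names, so quoted spans are the entities.
    return re.findall(r'"([^"]+)"', query)

def best_file(query, files):
    def score(body):
        return sum(SequenceMatcher(None, e.lower(), body.lower()).ratio()
                   for e in extract_entities(query))
    return max(files, key=lambda name: score(files[name]))

files = {
    "progressive_protective.txt": "Agreement between The Progressive Corporation and Protective Insurance Corporation",
    "unrelated_deal.txt": "Agreement between Acme Holdings and Widget Industries",
}
q = 'Acquisition Agreement between Parent "The Progressive Corporation" and Target "Protective Insurance Corporation"; consideration type?'
print(best_file(q, files))   # progressive_protective.txt
```

With the right file in hand, answering the question is just extraction, no vector index needed.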
Gist: [https://gist.github.com/maylad31/76238674b4c5745e00b5ea299f0d6ed5](https://gist.github.com/maylad31/76238674b4c5745e00b5ea299f0d6ed5) | 2026-02-23T09:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd6ne/for_narrow_vocabulary_domains_do_we_really_need/ | maylad31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd6ne | false | null | t3_1rcd6ne | /r/LocalLLaMA/comments/1rcd6ne/for_narrow_vocabulary_domains_do_we_really_need/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} |
Wave Field Transformer V4 — Novel O(n log n) attention architecture, 825M model trained from scratch on 1.33B tokens. Weights on HuggingFace. | 0 | Hey everyone, I've been building a new transformer architecture from scratch called Wave Field Transformer. Instead of standard O(n²) dot-product attention, it uses FFT-based wave interference patterns to achieve O(n log n) complexity.
Model weights: [https://huggingface.co/badaramoni/wave-field-v4-825m](https://huggingface.co/badaramoni/wave-field-v4-825m)
Results:
* Eval PPL on C4: 72.2 (pretrained base), 91.0 (after chat pipeline)
* Trained in 13.2 hours on a single H100 80GB
* Total cost: \~$50 in cloud compute
Architecture:
* 825M params, 24 layers, 1536 embedding dim, 16 heads
* 30K BPE vocabulary
* 256 token context (architecture supports longer, not trained for it yet)
Honest limitations:
* 72 PPL is not production quality — GPT-2 hit \~30 PPL on 40B tokens; we only used 1.33B
* Generation quality is limited — model learned format but needs more data for factual accuracy
* Haven't done a controlled A/B vs standard transformer at same scale yet (top priority ablation)
* 256 token context is short — need to test at 2K-8K to show the O(n log n) advantage
What's interesting about the approach:
* The progressive scaling (growing the model size during training without retraining from scratch) is the key differentiator
* Continuous learning with replay buffers preserved knowledge through 4 model expansions
* The architecture is designed for infinite context scaling — O(n log n) should dominate at 8K+ tokens
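The exact layer is proprietary, but an FNet-style stand-in (not the actual Wave Field layer) shows the complexity class the architecture lives in:

```python
import numpy as np

# FNet-style illustration only: the real Wave Field layer is proprietary.
# Mixing every token with every other token via a Fourier transform costs
# O(n log n), versus O(n^2) for a dot-product attention matrix.
def fft_token_mixing(x):
    # x: (seq_len, d_model) real activations; keep the real part of a 2D FFT
    # as the global "interference" output.
    return np.fft.fft2(x).real

x = np.random.default_rng(0).normal(size=(256, 64))
y = fft_token_mixing(x)
print(y.shape)   # same shape as the input, no n x n attention matrix materialized
```

The long-context ablations (2K-8K) are where this asymptotic difference should actually show up in wall-clock terms.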
Weights + config + tokenizer only. Architecture code is not included (proprietary). Licensed CC-BY-NC-ND-4.0.
Next steps:
* Knowledge distillation from larger models to improve generation quality
* Controlled ablation vs standard transformer at same param/token count
* Scale to 3B-7B with 5-10B tokens
* Long context training (2K-8K) to validate the O(n log n) scaling advantage
Happy to answer questions. This is a solo project — feedback welcome. | 2026-02-23T09:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd3d2/wave_field_transformer_v4_novel_on_log_n/ | Murky-Sign37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd3d2 | false | null | t3_1rcd3d2 | /r/LocalLLaMA/comments/1rcd3d2/wave_field_transformer_v4_novel_on_log_n/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=108&crop=smart&auto=webp&s=66dfd428e856944e5af122c0acfc537d5588ad8c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=216&crop=smart&auto=webp&s=1be667ab2284cd42b1b3fa498097c2688430a015', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=320&crop=smart&auto=webp&s=1019eefacbecf15ba3f26d20bc36a0f5ce821d6e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=640&crop=smart&auto=webp&s=11016d9bdf440e88b02ac81fedd0c9b4b5575009', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=960&crop=smart&auto=webp&s=4385b7e1bb01f892dff5c0d295e295151d43cd1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=1080&crop=smart&auto=webp&s=cc5ec41f31d70e67ecdae477bf1d23f0b14543e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?auto=webp&s=21b304e6347826bc3f2443af1506ac434a60429a', 'width': 1200}, 'variants': {}}]} |
Can't find any uncensored models on Openrouter that are capable of NSFW talk. | 0 | I'm running an experiment, and it's important that the model not have any guardrails. I'd read that DeepSeek models were uncensored, but all the models I have tried so far have declined, except for grok-4.1-fast, which I don't want to use because they don't have a zero-data-retention policy. Please help if you can | 2026-02-23T09:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rcctmv/cant_find_any_uncensored_models_on_openrouter/ | CardiologistRoyal198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcctmv | false | null | t3_1rcctmv | /r/LocalLLaMA/comments/1rcctmv/cant_find_any_uncensored_models_on_openrouter/ | false | false | nsfw | 0 | null |
I made an interactive timeline of 171 LLMs (2017–2026) | 43 | Built a visual timeline tracking every major Large Language Model — from the original Transformer paper to GPT-5.3 Codex.
171 models, 54 organizations. Filterable by open/closed source, searchable, with milestones highlighted.
Some stats from the data:
- 2024–2025 was the explosion: 108 models in two years
- Open source reached parity with closed in 2025 (29 vs 28)
- Chinese labs account for ~20% of all major releases (10 orgs, 32 models)
https://llm-timeline.com
Missing a model? Let me know and I'll add it. | 2026-02-23T09:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rccsjg/i_made_an_interactive_timeline_of_171_llms/ | asymortenson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rccsjg | false | null | t3_1rccsjg | /r/LocalLLaMA/comments/1rccsjg/i_made_an_interactive_timeline_of_171_llms/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=108&crop=smart&auto=webp&s=04f7c97ead7d2851efd8f8f5097d58bcf5541c02', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=216&crop=smart&auto=webp&s=e1d5de7c1cd0c2c60ec65002b0f47c1f085e82cf', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=320&crop=smart&auto=webp&s=50e31f8d85c91ed09742ad974bb23b8cb82605a0', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=640&crop=smart&auto=webp&s=7d6c0bab49b82e3b15bdb7d328d88d1ee8b3ccff', 'width': 640}, {'height': 506, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=960&crop=smart&auto=webp&s=72c95a694dc2f5e5d14bccc7016fecd03f89520e', 'width': 960}, {'height': 570, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=1080&crop=smart&auto=webp&s=9e107f45c7206a8826f8393ef0665ba43d6435b2', 'width': 1080}], 'source': {'height': 752, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?auto=webp&s=5edf3c42a9df56eecd914f4b1a6cd7b8ac430337', 'width': 1424}, 'variants': {}}]} |
When RMSNorm Fails: The Geometric Collapse of Unstable LLMs | 15 | Every major modern LLM has quietly dropped standard Layer Normalization in favor of RMSNorm. By removing the explicit mean-centering step, we save compute under the assumption that a network's variance (**σ**) will always dominate its mean shift (**μ**).
In my [blog](https://sifal.social/posts/Why-Modern-LLMs-Dropped-Mean-Centering-(And-Got-Away-With-It)/), I show that it can be reformulated this way:
[Reformulation of RMSNorm](https://preview.redd.it/pbol8c8xl7lg1.png?width=1139&format=png&auto=webp&s=379f9984935808c6ada4d91949ffe821238a1244)
But what actually happens to the geometry of your latent space when that assumption breaks?
By mathematically decomposing RMSNorm into its signal and noise components and visualizing the exact transformations in 3D space, a hidden and severe failure mode emerges: **Directional Collapse**.
Here is the breakdown of what RMSNorm is actually doing to your data:
* **The Hidden Math:** RMSNorm's approximation decomposes into standard LayerNorm multiplied by a dynamic signal-to-noise ratio (**μ/σ**).
* **The Healthy Regime (σ ≫ |μ|):** When the network is stable, the mean is tiny compared to the variance. The dampening factor vanishes, and RMSNorm beautifully approximates the perfectly spread-out spherical geometry of standard LayerNorm.
https://i.redd.it/y7linwifm7lg1.gif
* **The Unstable Regime (μ ≫ σ):** When the network spikes and the mean violently drifts, standard LayerNorm would silently correct the shift by explicitly centering the data. RMSNorm cannot do this. Instead, as the mean explodes, the math forces the per-token variation to become negligible.
* **The Geometric Collapse:** The outputs still successfully land on the target **√n** hypersphere. However, because they lost their individual variation, all highly-shifted tokens violently collapse toward one of two antipodal poles (determined by **sign(μ) · γ**).
[\(Notice how the high-mean data, shown in crimson and purple, loses all directional diversity and strictly converges to antipodal poles\)](https://i.redd.it/wauquyr6l7lg1.gif)
**The Takeaway:** When RMSNorm fails, the network doesn't lose signal *amplitude*; it loses token *discriminability*. Inputs that were genuinely different become geometrically indistinguishable, piling up at a single pole and starving the subsequent attention layers of the directional diversity they need to function.
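You can reproduce the collapse numerically in a few lines of NumPy (a toy sketch, not the code from the blog; **γ** = 1 and the ε term are omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
x = rng.normal(0.0, 1.0, size=n)            # per-token variation, sigma ~ 1

def rmsnorm(v):
    # gamma = 1 and eps omitted for clarity
    return v / np.sqrt(np.mean(v ** 2))

for mu in (0.0, 100.0):                     # healthy vs unstable mean shift
    y = rmsnorm(x + mu)
    pole = np.ones(n) / np.sqrt(n)          # the sign(mu) * gamma direction
    print(f"mu={mu:5.1f}  radius={np.linalg.norm(y):6.2f}  "
          f"per-dim std={y.std():.4f}  cos-to-pole={y @ pole / np.linalg.norm(y):.3f}")
```

The radius stays pinned to √n in both regimes; what vanishes under a large mean shift is the per-dimension spread, i.e. exactly the token discriminability described above.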
Read more about how I derived this in my [blog](https://sifal.social/posts/Why-Modern-LLMs-Dropped-Mean-Centering-(And-Got-Away-With-It)/), and much more about the geometric intuition. | 2026-02-23T09:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rccn85/when_rmsnorm_fails_the_geometric_collapse_of/ | Accurate-Turn-2675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rccn85 | false | null | t3_1rccn85 | /r/LocalLLaMA/comments/1rccn85/when_rmsnorm_fails_the_geometric_collapse_of/ | false | false | 15 | null | |
When RMSNorm Fails: The Geometric Collapse of Unstable LLMs | 1 | 2026-02-23T09:06:23 | https://sifal.social/posts/Why-Modern-LLMs-Dropped-Mean-Centering-(And-Got-Away-With-It)/ | Accurate-Turn-2675 | sifal.social | 1970-01-01T00:00:00 | 0 | {} | 1rcclaz | false | null | t3_1rcclaz | /r/LocalLLaMA/comments/1rcclaz/when_rmsnorm_fails_the_geometric_collapse_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=108&crop=smart&auto=webp&s=3ecf953dfffef5cb67980986a75ffcc4da0c81a6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=216&crop=smart&auto=webp&s=37228562a3d4cb36d9b5a9a5ac5b7a363fc54763', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=320&crop=smart&auto=webp&s=92711489a9c070340724e455b8f000e76c4f3091', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=640&crop=smart&auto=webp&s=d07b8016d425e28746b2f2355320a7276dd7bb14', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=960&crop=smart&auto=webp&s=514344de25fd4f14dac31a7348a9f669fd8cb78c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=1080&crop=smart&auto=webp&s=b5b0b46f15fb0871eeb65f5ee71f4d73222b32b3', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?auto=webp&s=b658c5dbd8acd992013dd0a917b59c6e92faa214', 'width': 1600}, 'variants': {}}]} | ||
An open-source framework to achieve Gemini 3 Deep Think / GPT-5.2 Pro level performance with local models scaffolding | 225 | 2026-02-23T08:33:22 | https://www.reddit.com/gallery/1rcc2fa | Ryoiki-Tokuiten | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcc2fa | false | null | t3_1rcc2fa | /r/LocalLLaMA/comments/1rcc2fa/an_opensource_framework_to_achieve_gemini_3_deep/ | false | false | 225 | null | ||
Great study published by Anthropic about Claude code usage and agent | 1 | [removed] | 2026-02-23T08:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rcc1r2/great_study_published_by_anthropic_about_claude/ | Any_Word_4657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcc1r2 | false | null | t3_1rcc1r2 | /r/LocalLLaMA/comments/1rcc1r2/great_study_published_by_anthropic_about_claude/ | false | false | 1 | null | |
what are some top OCR models that can deal with handwritten text and mathematical formulas? | 2 | what are some top OCR models that can deal with handwritten text and mathematical formulas?
So far I have tested PaddleOCR. It was good at handling handwritten text, but not so great when it comes to mathematical symbols.
I tried to run DeepSeek-OCR, but the problem is that I do not have a graphics card.
I tried OpenAI too; they do a good job, but it is not local (I used the API).
So what are some models that I can run on my machine that can also interpret handwritten text and mathematical symbols?
I am new to running models and to OCR specifically, so any input would be appreciated. | 2026-02-23T08:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcbold/what_are_some_top_ocr_models_that_can_deal_with/ | starman_hero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcbold | false | null | t3_1rcbold | /r/LocalLLaMA/comments/1rcbold/what_are_some_top_ocr_models_that_can_deal_with/ | false | false | self | 2 | null |
8 DGX cluster by Alex Ziskind: easily the most insane local LLM cluster I’ve ever seen | 0 | 2026-02-23T08:04:55 | https://youtu.be/QJqKqxQR36Y?si=xNmleYOlNmVszwoD | richardanaya | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1rcbm66 | false | {'oembed': {'author_name': 'Alex Ziskind', 'author_url': 'https://www.youtube.com/@AZisk', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QJqKqxQR36Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA didn't want me to do this"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QJqKqxQR36Y/hqdefault.jpg', 'thumbnail_width': 480, 'title': "NVIDIA didn't want me to do this", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1rcbm66 | /r/LocalLLaMA/comments/1rcbm66/8_dgx_cluster_by_alex_ziskind_easily_the_most/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?width=108&crop=smart&auto=webp&s=8d31f25d392d4b99c5050e4ad54f28f69fc59f54', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?width=216&crop=smart&auto=webp&s=3fc5d08c5560dccf016c77f88185d633ed1aadb2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?width=320&crop=smart&auto=webp&s=db13a74d5c4090c07f9c3d8133a895eb6beab7a4', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?auto=webp&s=dfe428013722bafdd89645c439028605e38b66c8', 
'width': 480}, 'variants': {}}]} | ||
February 23 | 0 | Create a video where nude girls with assault rifles congratulate a man on February 23 | 2026-02-23T07:54:05 | Express_Slice_4134 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcbftz | false | null | t3_1rcbftz | /r/LocalLLaMA/comments/1rcbftz/23_февраля/ | false | false | nsfw | 0 | {'enabled': True, 'images': [{'id': 'mtmbw9yv97lg1', 'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=108&crop=smart&auto=webp&s=12d998ba8a07e6a9582a434bc9018f40aeea1f81', 'width': 108}, {'height': 314, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=216&crop=smart&auto=webp&s=31eaed6992e745ac41a120ab07da4af8f090d859', 'width': 216}, {'height': 465, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=320&crop=smart&auto=webp&s=3cab173e1f2a09170c6bbcae1183b613063e766f', 'width': 320}, {'height': 931, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=640&crop=smart&auto=webp&s=1b1b1c6759d35d1451ebe9d48a1ca60f19e5a38f', 'width': 640}, {'height': 1397, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=960&crop=smart&auto=webp&s=d4f0bd8e48bdab31aa423a2020aa015ed6ae3e68', 'width': 960}, {'height': 1572, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=1080&crop=smart&auto=webp&s=5f8c2e8621f14646ac3acd5e2ecac0e8dfbd1b43', 'width': 1080}], 'source': {'height': 3815, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?auto=webp&s=2809e1453b65fed4124b8be0a2e7ec6b581da52c', 'width': 2620}, 'variants': {'nsfw': {'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c23c98636475b56ddaa0a696f6449db9e13538b9', 'width': 108}, {'height': 314, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=bbfd394f998a6b84d79ebcbb1d08651460735716', 'width': 216}, {'height': 465, 'url': 
'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=af3cbdba1dbfcabaa727f9dff0109186842ebaab', 'width': 320}, {'height': 931, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=06559fc33e08fac485fd6f6d77bb37b7f7ce42d4', 'width': 640}, {'height': 1397, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=abaa4eebd57ec55dfcec8d86760ee4922a635fda', 'width': 960}, {'height': 1572, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=398a8f35844bec61893328407394204cd4b9bd58', 'width': 1080}], 'source': {'height': 3815, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?blur=40&format=pjpg&auto=webp&s=e277cd60060854081758bd1c72c2c9b0d4bdb888', 'width': 2620}}, 'obfuscated': {'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c23c98636475b56ddaa0a696f6449db9e13538b9', 'width': 108}, {'height': 314, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=bbfd394f998a6b84d79ebcbb1d08651460735716', 'width': 216}, {'height': 465, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=af3cbdba1dbfcabaa727f9dff0109186842ebaab', 'width': 320}, {'height': 931, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=06559fc33e08fac485fd6f6d77bb37b7f7ce42d4', 'width': 640}, {'height': 1397, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=abaa4eebd57ec55dfcec8d86760ee4922a635fda', 'width': 960}, {'height': 1572, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=398a8f35844bec61893328407394204cd4b9bd58', 'width': 1080}], 'source': {'height': 3815, 
'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?blur=40&format=pjpg&auto=webp&s=e277cd60060854081758bd1c72c2c9b0d4bdb888', 'width': 2620}}}}]} | |
Anthropic reveals an interesting study on how Claude Code works and how it is used. | 1 | [removed] | 2026-02-23T07:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcbbuw/anthropic_reveals_an_interesting_study_on_how/ | Any_Word_4657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcbbuw | false | null | t3_1rcbbuw | /r/LocalLLaMA/comments/1rcbbuw/anthropic_reveals_an_interesting_study_on_how/ | false | false | 1 | null | |
Looking for an MCP that semantically searches for working snippets of code | 4 | Often, Claude still messes up on common frontend patterns. When that happens, sometimes I can give Claude documentation (eg for implementing supabase auth). But other times, docs don't have the answer (eg for swift / macOS, unfocusing an input box when the user clicks elsewhere). The code with the relevant patterns is *probably* in some open source repos, but I just don't know which ones or where to find them. I think that a lot of "unhobbling" could be gained with a powerful search of existing code, and I'm wondering if anyone uses a tool for this or something adjacent.
I just found [Grep MCP](https://vercel.com/blog/grep-a-million-github-repositories-via-mcp) by Vercel, but I'm skeptical because it uses regex/pattern matching. I should try it -- but I'm looking for something closer to semantic search, like "search for a chat input box for tailwind + react and condition on existing code to generate this code". I would pay for this if it worked.
Aside: I wonder if a massive [pattern language](https://en.wikipedia.org/wiki/A_Pattern_Language) of UI problems and code solutions would work. With a very lightweight LLM that does the search, maybe with the help of some semantic clustering (eg user interface) and structured clustering (eg tailwind css + react). | 2026-02-23T07:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rcb1n2/looking_for_an_mcp_that_semantically_searches_for/ | babble_prune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcb1n2 | false | null | t3_1rcb1n2 | /r/LocalLLaMA/comments/1rcb1n2/looking_for_an_mcp_that_semantically_searches_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=108&crop=smart&auto=webp&s=e3fcedf5c7ae20c17a5138eb9300ebb43213a171', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=216&crop=smart&auto=webp&s=4d8c2fc0b350a3bd7b91144de1c59839c93928d5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=320&crop=smart&auto=webp&s=b47903edb952a9f3263c90b7f9b25c000d02bc7b', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=640&crop=smart&auto=webp&s=8bc73cfcac7b3200a81b67233afe37b1d386199b', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=960&crop=smart&auto=webp&s=495c5faf7cdad0001426ee4445b4076ce21c50c5', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=1080&crop=smart&auto=webp&s=710313f656482b6be93ffc7b66c0b50c4770c012', 'width': 1080}], 'source': {'height': 1256, 'url': 
'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?auto=webp&s=9e3dc53c1c8f6eb82cdd0d26f33b8ea4722e6d43', 'width': 2401}, 'variants': {}}]} |
MiniMax 2.5 on DGX SPARK system. | 17 | So I've been working with MiniMax 2.5 (MiniMax-M2.5-UD-Q3\_K\_XL),
and I'm amazed by this model; the quality of code is just on another level.
My issue is that I can only work with it at a maximum of 65K context (anything bigger crashes on load: out of memory); normal usage lands at 125GB of RAM, which is too much.
So I decided to try MiniMax-M2.5-UD-Q2\_K\_XL, which runs fine with a context of 192K,
but I wonder what the difference is between the two quants when it comes to coding.
Has anyone ever run a coding benchmark on both Q2 and Q3?
I didn't find any info online.
I'm sure Q3 is better, but by how much?
| 2026-02-23T07:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rcalyu/minimax_25_on_dgx_spark_system/ | DOOMISHERE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcalyu | false | null | t3_1rcalyu | /r/LocalLLaMA/comments/1rcalyu/minimax_25_on_dgx_spark_system/ | false | false | self | 17 | null |
64GB Mac: Local Agentic Coding with Qwen3 & Roo Code | 2 | I tried agentic coding with a local LLM, using my old dating-app project (Next.js).
My hardware: Mac Studio (M2 Max, 38-core GPU, 64GB RAM) - on home network.
Since the coding was handled on a separate laptop, the Mac Studio was dedicated entirely to running the LLM.
Finding a model capable of agentic coding on 64GB of RAM is a challenge; it’s right on the edge of performance. Smaller models are fast but often too limited for complex tasks.
\### Conclusion (after one day of testing)
The Model: The clear winner for my machine was Qwen3-Coder-Next.
The Tool: I paired it with Roo Code, which proved to be an incredible tool. (Though the fact that I prefer VS Code Copilot over Claude Code probably influenced that preference, and I haven't tried OpenCode yet.)
Love to hear other experiences. | 2026-02-23T07:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rcalyh/64gb_mac_local_agentic_coding_with_qwen3_roo_code/ | benevbright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcalyh | false | null | t3_1rcalyh | /r/LocalLLaMA/comments/1rcalyh/64gb_mac_local_agentic_coding_with_qwen3_roo_code/ | false | false | self | 2 | null |
64GB Mac: Local agentic coding demo with Qwen3-coder-next & Roo Code | 1 | [removed] | 2026-02-23T06:59:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rcajhc/64gb_mac_local_agentic_coding_demo_with/ | benevbright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcajhc | false | null | t3_1rcajhc | /r/LocalLLaMA/comments/1rcajhc/64gb_mac_local_agentic_coding_demo_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8.jpeg?width=108&crop=smart&auto=webp&s=14fd58beb7f871c809bb05b85a9bdc4cdda8b070', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8.jpeg?width=216&crop=smart&auto=webp&s=6b109da4a0a4a4116f43f8cd81223f4aa75a6c51', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8.jpeg?width=320&crop=smart&auto=webp&s=23ffb17ed4ff71c6d8c98845c37b42eafa6eff0c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8.jpeg?auto=webp&s=3e3f03e5931f154589cbe99a534c55245e864248', 'width': 480}, 'variants': {}}]} |
Which model for meeting transcript summarisation? | 9 | Hello
I'm using Qwen3 30B A3B 2507 (4-bit) with LM Studio to summarize meeting transcripts.
Does this seem like an okay model for the task? I'm feeling a bit overwhelmed by all the options; I'm only using this model because a cloud AI suggested it, and that suggestion might not be current.
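For context, here's roughly how I'm calling it: a sketch against LM Studio's OpenAI-compatible endpoint (localhost:1234 is LM Studio's default port; the model id below is a guess, use whatever name LM Studio reports for the loaded model):

```python
import json
import urllib.request

# Sketch only: LM Studio exposes an OpenAI-compatible API on port 1234 by
# default. The model id is an assumption; match it to the loaded model.
def build_payload(transcript):
    return {
        "model": "qwen3-30b-a3b-instruct-2507",
        "messages": [
            {"role": "system",
             "content": "Summarize this meeting: key decisions, action items with owners, open questions."},
            {"role": "user", "content": transcript},
        ],
        "temperature": 0.2,
    }

def summarize(transcript, base_url="http://localhost:1234/v1"):
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(transcript)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

A low temperature and an explicit system prompt (decisions / action items / open questions) made the summaries much more consistent for me than a bare "summarize this".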
I was using the Claude API with amazing results, but I no longer want to send data to public offerings.
Kitten TTS V0.8 Running in the Browser | 3 | Hey everyone,
I took the recent release of Kitten TTS v0.8 as an opportunity to explore handling audio data in the browser.
\-> A minimal Next.js app running Kitten TTS v0.8 in the browser
Features/Issues:
* All processing done on the client-side
* Supports Nano/Micro/Mini Model, fetched from HF (+voice embeddings), cached on the client (OPFS)
* Depends on onnxruntime-web and Xenova's phonemizer.js
* wasm backend only
* webgpu outputs silence, haven't figured that out yet
* Doesn't work in Safari and on my Mobile Chrome (yet, maybe)
Demo: [https://next-voice.vercel.app](https://next-voice.vercel.app)
Code: [https://github.com/geronimi73/next-voice](https://github.com/geronimi73/next-voice)
https://preview.redd.it/9xhwneddp6lg1.png?width=1362&format=png&auto=webp&s=13f1dd89bbe6cba3785e3b194fe716849139fb52
| 2026-02-23T06:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/ | HatEducational9965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc9qvb | false | null | t3_1rc9qvb | /r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/ | false | false | 3 | null | |
Corporate Environment Setup | 1 | Within a large enterprise environment, we currently have all the open source models available via a typical chat page. All data is fully contained within our network.
We have an API where something like Opencode could use for cli based agentic workflows.
My question is, could we connect something like Claude Code to this remotely? Or is that just not possible? Sorry for my ignorance; I use Claude Code frequently at home and am exploring this idea | 2026-02-23T05:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rc9b26/corporate_environment_setup/ | drussell024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc9b26 | false | null | t3_1rc9b26 | /r/LocalLLaMA/comments/1rc9b26/corporate_environment_setup/ | false | false | self | 1 | null |
🌊 Wave Field LLM O(n log n) Successfully Scales to 1B Parameters | 91 | Just completed full pretraining of **Wave Field LLM (v4) at 1B scale**.
**Training Summary:**
* **Parameters:** 825M
* **Total Tokens:** 1.33B
* **Final PPL:** 72.2
* **Best PPL:** 72.2
* **Final Accuracy:** 27.1%
* **Training Time:** 13.2 hours
This isn’t a small 30M or 124M experiment anymore.
Wave Field is now:
* ✅ Stable at near-billion scale
* ✅ Training cleanly
* ✅ Converging properly
* ✅ Saving best checkpoints
* ✅ Handling >1B tokens
The key takeaway:
This validates that Wave Field’s field-based interaction mechanism is not just an experimental curiosity — it holds up under real model size and real token volume [git](https://github.com/badaramoni/wave-field-llm) | 2026-02-23T05:44:29 | Murky-Sign37 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc97qf | false | null | t3_1rc97qf | /r/LocalLLaMA/comments/1rc97qf/wave_field_llm_on_log_n_successfully_scales_to_1b/ | false | false | 91 | {'enabled': True, 'images': [{'id': '6m7q2vzlm6lg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=108&crop=smart&auto=webp&s=f36d4a9ba2f9b73a10e072911c6ec5e7df7afda8', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=216&crop=smart&auto=webp&s=ef425dfb70cc0f5cd2fbc18be93404d4520a49be', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=320&crop=smart&auto=webp&s=62d5f883ae454eafd023b42e4010ce873598f53b', 'width': 320}, {'height': 274, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=640&crop=smart&auto=webp&s=2ede585956ec96d0434754c49701c58176ad83ad', 'width': 640}, {'height': 411, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=960&crop=smart&auto=webp&s=3ee5befede9eb44087003f87b682be70b2c9a34e', 'width': 960}, {'height': 462, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=1080&crop=smart&auto=webp&s=b4f522762a7920b3b8baae1eda1db05eae9b8773', 'width': 1080}], 'source': {'height': 582, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?auto=webp&s=f88a14537825c46257acf0c08c953866550b67e7', 'width': 1358}, 'variants': {}}]} | ||
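[Editor's note: the post above doesn't say what the field-based interaction layer actually computes, but the standard way to get O(n log n) global token mixing is circular convolution via FFT. A minimal sketch of that idea follows — this is an illustrative assumption, not Wave Field's actual layer:]

```python
import numpy as np

def fft_token_mixing(x, kernel):
    """O(n log n) global token mixing via circular convolution in the
    frequency domain. NOT Wave Field's actual mechanism (the post does
    not specify it) -- just the standard trick that mixes a sequence in
    O(n log n) instead of attention's O(n^2)."""
    n = x.shape[0]
    Xf = np.fft.rfft(x, axis=0)              # (n//2 + 1, d)
    Kf = np.fft.rfft(kernel, n=n)[:, None]   # broadcast over channels
    return np.fft.irfft(Xf * Kf, n=n, axis=0)

x = np.random.randn(1024, 64)   # (seq_len, hidden)
k = np.random.randn(1024)       # learned per-position mixing kernel
y = fft_token_mixing(x, k)      # same shape as x, every token sees all others
```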
# A 4B parameter model just held a 21-turn conversation with coherent personality, self-naming, and philosophical depth — no fine-tuning of base weights | 0 | I've been building an adaptive state system that sits on top of a frozen LLM (qwen3-4b via Ollama) and gives it persistent memory, learned preferences, and behavioral rules — without touching the model's weights.
Yesterday it held a 21-turn live conversation where it:
- Named itself "Orac" (from Blake's 7, after I suggested it)
- Maintained that identity across every subsequent turn
- Remembered my name ("Commander") without being reminded
- Told knock-knock jokes I'd taught it earlier via a rules system
- Had a genuinely interesting philosophical exchange about consciousness and self-awareness
All on a **2.6GB model running locally on my machine**.
## How it works
The architecture separates memory into three classes:
1. **Preferences** (identity + style) — stored in SQLite, projected into every prompt as an `[ADAPTIVE STATE]` block. "The user prefers concise answers", "The AI's name is Orac", etc. Detected automatically from conversation ("my name is X", "I prefer Y").
2. **Evidence** (context) — stored in ChromaDB as embeddings. Each turn, relevant past evidence is retrieved by cosine similarity with recency weighting. This is the *only* source of conversational memory — I removed Ollama's native context threading entirely because it caused bleed between unrelated topics.
3. **Rules** (behavior) — stored in SQLite. "When I say X, respond Y." Auto-extracted from conversation. When a rule fires, the system uses a rules-only system prompt with no other instructions — maximum compliance.
A Go controller manages all the adaptive state logic: a 128-dim state vector with signal-driven learning, gated updates, decay on unreinforced segments, hard vetoes, post-commit eval, and rollback. The model never sees raw state vectors — it sees human-readable preference text, weighted by adaptation magnitude.
The Python inference service handles generation via Ollama's `/api/chat` with native tool calling (web search via DuckDuckGo).
## What I learned
- **Context threading is the enemy of controllable memory.** Ollama's opaque token context caused joke patterns to leak into serious queries. Evidence retrieval gives you the same continuity but you can filter, weight, and audit it.
- **Rules need total isolation.** When a knock-knock joke rule fires, the system strips all other context — no preferences, no evidence, no tool instructions. Otherwise the model tries to "be helpful" instead of just delivering the punchline.
- **Identity detection needs hardening.** "I'm glad you think so" was being parsed as the user's name being "glad". Took a stopword filter, punctuation guard, and word count cap to fix.
- **Small models can have personality** if you give them the right scaffolding. qwen3-4b isn't doing anything magical — the architecture is doing the heavy lifting.
## Stats
- 95-100% test coverage on 11 Go packages
- Deterministic replay system (same inputs = same outputs, no model needed)
- ~30 commits since the behavioral rules layer was added
- 642-example training dataset for personality (JSONL, not yet fine-tuned — all results above are on the stock model)
Repo: [github.com/kibbyd/adaptive-state](https://github.com/kibbyd/adaptive-state) | 2026-02-23T05:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rc93pt/a_4b_parameter_model_just_held_a_21turn/ | Temporary_Bill4163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc93pt | false | null | t3_1rc93pt | /r/LocalLLaMA/comments/1rc93pt/a_4b_parameter_model_just_held_a_21turn/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=108&crop=smart&auto=webp&s=8d9f02245d6e46c95678ce86d57b0069b9b77d18', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=216&crop=smart&auto=webp&s=88afa37cb161703735904770cdd31b63cfe1d1a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=320&crop=smart&auto=webp&s=7777faf9c1e279453d42f8c429a236efb464d061', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=640&crop=smart&auto=webp&s=7b75e3f18fcf00009408885ad722b21aad86e1ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=960&crop=smart&auto=webp&s=e45fa7b21e08fb54e0ceeefdc21a7413fb0cecfe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=1080&crop=smart&auto=webp&s=de3ace163e195fd18360d9947ee8c72f11491f06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?auto=webp&s=a815e78081d9e9eacc7ed34b011caf33188e5c54', 'width': 1200}, 'variants': {}}]} |
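[Editor's note: the recency-weighted cosine retrieval described in point 2 above can be sketched roughly as follows. The exponential decay schedule and the function shape are illustrative assumptions, not code from the repo:]

```python
import math

def retrieve(query_vec, evidence, half_life_turns=50, top_k=3, now_turn=100):
    """Rank stored evidence by cosine similarity, discounted by age.
    `evidence` is a list of (turn, vector, text) tuples; the half-life
    weighting here is an assumption for illustration."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = []
    for turn, vec, text in evidence:
        recency = 0.5 ** ((now_turn - turn) / half_life_turns)  # exponential decay
        scored.append((cosine(query_vec, vec) * recency, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]
```

The key property is that an on-topic but old memory can still outrank a recent off-topic one, which is what makes the bleed filtering auditable.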
Anyone else feel like the hardest part of running multiple agents isn't the agents — it's coordinating them? | 0 | Every night over the last 3 months, I've been running a setup with 3 specialized agents - one for research & review (Claude Code subagents with a style checker), one pulling data from APIs into Google Sheets, one summarizing Slack/RSS feeds daily.
Each one is legitimately good at its job. Success rates went from ~62% to 86% over a few months of tuning. Hallucinations dropped significantly once I added proper eval loops.
But here's the thing that's been bugging me: none of them know about each other. I'm literally the middleware. Copy-pasting outputs between them at 11pm like some kind of human API.
Previously at my company we scaled to 19 production agent workflows and the same thing happened -> the agents got better but the coordination problem got WORSE. We ended up having to build an entire dispatch layer just to manage who does what and where each agent is at.
I started calling it the "dispatch gap" and wrote up my thinking on it: [https://peacelilee.substack.com/p/your-agent-fleet-doesnt-need-a-brain](https://peacelilee.substack.com/p/your-agent-fleet-doesnt-need-a-brain)
Covers the assistants vs agents distinction (which I think most people are conflating), why OpenClaw's growth is actually an architecture insight not just a distribution play, and where I think the defensible value actually sits.
What does your multi-agent setup look like? Anyone built something to coordinate between agents that actually works? | 2026-02-23T05:30:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8yeu/anyone_else_feel_like_the_hardest_part_of_running/ | Fastly-Me-2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8yeu | false | null | t3_1rc8yeu | /r/LocalLLaMA/comments/1rc8yeu/anyone_else_feel_like_the_hardest_part_of_running/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=108&crop=smart&auto=webp&s=28977294d33881a9fa0f41a6d4d865203258f7e5', 'width': 108}, {'height': 82, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=216&crop=smart&auto=webp&s=edb361c6026e8cf776494b089369d4a9556a30a3', 'width': 216}, {'height': 122, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=320&crop=smart&auto=webp&s=ed6c9b2b154b6eb1ee222b7f8734bb6d0ff10f73', 'width': 320}, {'height': 245, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=640&crop=smart&auto=webp&s=98b0fba77a024526393d44dc2bed6213842ec2c9', 'width': 640}, {'height': 368, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=960&crop=smart&auto=webp&s=4fbefb27c97505b3a8e77a717aefd3ec5d6e2229', 'width': 960}, {'height': 414, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=1080&crop=smart&auto=webp&s=a3fe3b0ada27d1cc7cd214cb7f4396307887c003', 'width': 1080}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?auto=webp&s=591a5a47aa2435be049573b25131a2e9bb24afe7', 'width': 1200}, 'variants': {}}]} |
The actual memory math for Llama-70B with 1M context | 0 | Did the math on what it takes to run Llama-70B with 1M token context. Numbers are wild.
**Model weights (BF16):** 140 GB
**KV cache with GQA:**
- 8 KV heads × 128 dim × 2 (K+V) × 2 bytes = 4KB per token per layer
- 1M tokens × 80 layers = 320 GB
**Attention matrix (naive):**
- Shape: [1, 64, 1M, 1M] = 64 trillion elements
- Memory: 128 TB
Total without FlashAttention: weights + KV cache + attention = 140 + 320 + 128,000 GB
FlashAttention kills the 128 TB by computing in tiles with online softmax. But you still need 460 GB minimum just for weights + KV cache.
On a single A100 (80GB), you're looking at 6+ GPUs minimum with tensor parallelism, and that's before activations.
GQA is doing a lot of heavy lifting here — without it, KV cache would be 2.5 TB instead of 320 GB.
| 2026-02-23T05:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8qtc/the_actual_memory_math_for_llama70b_with_1m/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8qtc | false | null | t3_1rc8qtc | /r/LocalLLaMA/comments/1rc8qtc/the_actual_memory_math_for_llama70b_with_1m/ | false | false | self | 0 | null |
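[Editor's note: the arithmetic above can be reproduced in a few lines. Assuming "1M" means 2^20 tokens and binary GiB/TiB — which is how the 320 GB and 128 TB figures fall out — with weights in decimal GB:]

```python
def llama70b_long_context_memory(tokens=2**20, layers=80, kv_heads=8,
                                 q_heads=64, head_dim=128, dtype_bytes=2,
                                 params=70e9):
    # Weights in BF16: 2 bytes per parameter (decimal GB, as in the post)
    weights_gb = params * dtype_bytes / 1e9                     # 140
    # KV cache: K and V per token, per layer, per KV head (GQA)
    kv_per_tok_layer = kv_heads * head_dim * 2 * dtype_bytes    # 4 KiB
    kv_cache_gib = kv_per_tok_layer * tokens * layers / 2**30   # 320
    # Naive attention scores: shape [1, q_heads, tokens, tokens]
    attn_tib = q_heads * tokens**2 * dtype_bytes / 2**40        # 128
    # Without GQA, all 64 query heads keep their own K/V
    mha_kv_gib = kv_cache_gib * q_heads / kv_heads              # 2560
    return weights_gb, kv_cache_gib, attn_tib, mha_kv_gib

w, kv, attn, mha = llama70b_long_context_memory()
print(f"weights={w:.0f} GB  kv={kv:.0f} GiB  attn={attn:.0f} TiB  no-GQA kv={mha:.0f} GiB")
```

FlashAttention removes the `attn` term entirely by never materializing the score matrix; the weights and KV-cache terms remain.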
The actual memory math for Llama-70B with 1M context | 1 | Did the math on what it takes to run Llama-70B with 1M token context. Numbers are wild.
**Model weights (BF16):** 140 GB
**KV cache with GQA:**
- 8 KV heads × 128 dim × 2 (K+V) × 2 bytes = 4KB per token per layer
- 1M tokens × 80 layers = 320 GB
**Attention matrix (naive):**
- Shape: [1, 64, 1M, 1M] = 64 trillion elements
- Memory: 128 TB
Total without FlashAttention: weights + KV cache + attention = 140 + 320 + 128,000 GB
FlashAttention kills the 128 TB by computing in tiles with online softmax. But you still need 460 GB minimum just for weights + KV cache.
On a single A100 (80GB), you're looking at 6+ GPUs minimum with tensor parallelism, and that's before activations.
GQA is doing a lot of heavy lifting here — without it, KV cache would be 2.5 TB instead of 320 GB.
| 2026-02-23T05:18:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8q29/the_actual_memory_math_for_llama70b_with_1m/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8q29 | false | null | t3_1rc8q29 | /r/LocalLLaMA/comments/1rc8q29/the_actual_memory_math_for_llama70b_with_1m/ | false | false | self | 1 | null |
Why Llama-70B with 1M context needs 128 TB of memory (and how we avoid it) | 1 | [removed] | 2026-02-23T05:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8nid/why_llama70b_with_1m_context_needs_128_tb_of/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8nid | false | null | t3_1rc8nid | /r/LocalLLaMA/comments/1rc8nid/why_llama70b_with_1m_context_needs_128_tb_of/ | false | false | self | 1 | null |
Why Llama-70B with 1M context needs 128 TB of memory (and how we avoid it) | 1 | [removed] | 2026-02-23T05:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8kov/why_llama70b_with_1m_context_needs_128_tb_of/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8kov | false | null | t3_1rc8kov | /r/LocalLLaMA/comments/1rc8kov/why_llama70b_with_1m_context_needs_128_tb_of/ | false | false | self | 1 | null |
I wrote an illustrated guide explaining why long-context inference is so hard (with animations) | 1 | [removed] | 2026-02-23T05:02:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8eti/i_wrote_an_illustrated_guide_explaining_why/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8eti | false | null | t3_1rc8eti | /r/LocalLLaMA/comments/1rc8eti/i_wrote_an_illustrated_guide_explaining_why/ | false | false | self | 1 | null |
What GPU do you recommend for iterative AI training? | 14 | I've racked up a disgusting bill with runpod and think it is time to get my own workstation.
I usually choose GPUs based on the model I’m working with (e.g., RTX Pro 6000 Blackwell for LLMs/VLMs/diffusion, 4090 for smaller TCNs/LSTMs), but honestly I often pick higher-end GPUs more for throughput than VRAM.
So I'm curious, what kinds/sizes of models are you training, and what GPU are you using (or wish you were using)?
My first choice is obviously the pro 6000 blackwell to never think twice about batch size or parameter count again, but the cost doesn't quite justify "ease of use/peace of mind" to me.
I’m heavily leaning toward a 5090... but I’m saying that while staring at a RunPod session using 31GB VRAM for a 1.5B parameter fine-tune, so I’m not exactly confident I won’t regret it. I've also considered getting two 5090s but the lack of nvlink (I've never touched a multi-gpu setup) and the wattage requirements are a turnoff, not to mention we're getting back into the pro 6000 blackwell price range. I build my own pipelines and collect my own data, so iterative training and testing means speed is arguably just as important as VRAM.
I'm completely satisfied with running large model inference off of system ram, so this isn't a deciding factor.
I've done a ton of research, tried and tested a half dozen cards through runpod, and still can't seem to find the most reasonable gpu, so any personal experiences anyone has to share would be greatly appreciated.
TL;DR: what GPU(s) do you have and would you recommend it to someone looking to buy their first at-home AI workstation? | 2026-02-23T04:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rc88vr/what_gpu_do_you_recommend_for_iterative_ai/ | EliHusky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc88vr | false | null | t3_1rc88vr | /r/LocalLLaMA/comments/1rc88vr/what_gpu_do_you_recommend_for_iterative_ai/ | false | false | self | 14 | null |
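[Editor's note: for context on why that 1.5B full fine-tune eats ~31 GB, full fine-tuning with Adam in mixed precision costs roughly 16 bytes of optimizer/weight state per parameter before activations. A back-of-envelope estimator — a rule of thumb, not a measurement of the poster's run:]

```python
def adam_finetune_vram_gb(params_b):
    """Rough VRAM floor for full fine-tuning with Adam in mixed precision,
    before activations and batch size. Rule of thumb:
    bf16 weights (2) + bf16 grads (2) + fp32 master weights (4)
    + Adam first moment (4) + Adam second moment (4) = 16 bytes/param."""
    return params_b * 1e9 * 16 / 1e9

print(adam_finetune_vram_gb(1.5))  # 24.0 GB floor; plus activations, near the ~31 GB observed
print(adam_finetune_vram_gb(7.0))  # 112.0 GB: why 7B full fine-tunes want multi-GPU or LoRA
```

That 24 GB floor plus activations and framework overhead lands right around the 31 GB observed, and puts even a 7B full fine-tune well past a single 5090's 32 GB.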
Divorce attorney built a 26-GPU / 532GB VRAM cluster to automate my practice while keeping client data local. Roast my build / help me figure out what to run | 0 | **TL;DR:** Divorce lawyer, can't send client files to the cloud (attorney-client privilege), built a 26-GPU / 532GB VRAM cluster across 3 nodes with InfiniBand. Building legal practice management software that runs on local LLMs. Specs and software details below. Looking for model recs, inference framework advice, and roasting.
I'm a top of the market divorce lawyer who sort of fell down the AI rabbit hole about 2 months ago. It led me to the conclusion that to do what I want with my digital client files (mostly organizing, summarizing, finding patterns, automating tasks) I needed to have my own local AI cluster running for ethical and competitive advantage reasons. Attorney-client privilege means I can't just ship client files to OpenAI or Anthropic — if I want AI touching my case files, it has to run on hardware I own.
I am sure I have wasted money and made mistakes, and I have spent way too much time with PSUs and PCIe riser cables over the past couple weeks. But I'm finally making the last purchase for my cluster and have the first machine up and running (right now, until my 2 servers are running, a PC with 3× RTX 3090s, 2× V100 32GBs, 192GB DDR4).
Short term, I want to crunch the last 10 years of my best work and create a set of automated forms and financial analysis tools that maybe I will sell to other lawyers. I am already using OCR to speed up a ton of data entry stuff. Basically trying to automate a paralegal. Medium term, I may try to automate client intake with a QLoRA/RAG chatbot.
My builds are below, along with a summary of the software I'm building on top of them.
# Cluster Overview: 26 GPUs / 532GB VRAM / 3 Nodes / Full InfiniBand Fabric
# Complete GPU Inventory
|GPU|Qty|Per Card|Total VRAM|Memory BW (per card)|Memory Type|
|:-|:-|:-|:-|:-|:-|
|V100 32GB SXM2 (individual adapter)|2|32GB|64GB|900 GB/s|HBM2|
|V100 32GB PCIe native|2|32GB|64GB|900 GB/s|HBM2|
|V100 16GB SXM2 (dual adapter boards)|4 (2 boards)|16GB (32GB/board)|64GB|900 GB/s|HBM2|
|RTX 3090 FE (NVLink capable)|2|24GB|48GB|936 GB/s|GDDR6X|
|RTX 3090 (3-slot)|1|24GB|24GB|936 GB/s|GDDR6X|
|P100 16GB PCIe|6|16GB|96GB|549 GB/s|HBM2|
|P40 24GB|6|24GB|144GB|346 GB/s|GDDR5X|
|RTX 3060 12GB|1|12GB|12GB|360 GB/s|GDDR6|
|P4 8GB|2|8GB|16GB|192 GB/s|GDDR5|
|**TOTAL**|**26**||**532GB**|||
# Node 1 — X10DRG-Q (Linux) — Speed Tier
**CPU:** 2× E5-2690 V4 (28c/56t) · **RAM:** \~220GB ECC DDR4 · **PSU:** 2× HP 1200W server + breakout boards
|Slot|Card|VRAM|
|:-|:-|:-|
|Slot 1 (x16)|Dual adapter: 2× V100 16GB SXM2|32GB|
|Slot 2 (x16)|Dual adapter: 2× V100 16GB SXM2|32GB|
|Slot 3a/3b (x8 bifurcated)|2× V100 32GB PCIe native|64GB|
|Slot 4a/4b (x8 bifurcated)|2× V100 32GB SXM2 + individual adapters|64GB|
|x8 dedicated|ConnectX-3 FDR InfiniBand|—|
**Totals:** 8× V100 (192GB VRAM) · 7,200 GB/s aggregate bandwidth
# Node 3 — ASUS X299-A II (Windows) — Fast Mid-Tier + Workstation
**CPU:** i9 X-series (LGA 2066) · **RAM:** 192GB DDR4 · **PSU:** EVGA 1600W + HP 1200W supplemental
|Position|Card|VRAM|
|:-|:-|:-|
|Slot 1a/1b (x8)|2× RTX 3090 FE (NVLink bridge)|48GB|
|Slot 2a (x8)|RTX 3090 3-slot|24GB|
|Slot 2b, 3a (x8)|2× P100 16GB PCIe|32GB|
|OCuLink via M.2 (x4 each)|2× P100 16GB PCIe|32GB|
|x8|ConnectX-3 FDR InfiniBand|—|
**Totals:** 3× RTX 3090 + 4× P100 (136GB VRAM) · 5,004 GB/s aggregate · 48GB NVLink-unified on 3090 FE pair
# Node 2 — X10DRi (Linux) — Capacity Tier
**CPU:** 2× E5-2690 V3 (24c/48t) · **RAM:** \~24-32GB ECC DDR4 · **PSU:** EVGA 1600W
|Position|Card|VRAM|
|:-|:-|:-|
|Slots 1a-2b (x4 each)|6× P40 24GB|144GB|
|Slots 2c-2d (x4)|2× P100 16GB PCIe|32GB|
|Slot 3a (x4)|RTX 3060 12GB|12GB|
|Slots 3b-3c (x4)|2× P4 8GB|16GB|
|Slot 3d (x4)|*(open — future expansion)*|—|
|x8 dedicated|ConnectX-3 FDR InfiniBand|—|
**Totals:** 11 GPUs (204GB VRAM) · 3,918 GB/s aggregate
# Cluster Summary
||Node 1 (X10DRG-Q)|Node 3 (X299-A II)|Node 2 (X10DRi)|**Total**|
|:-|:-|:-|:-|:-|
|**OS**|Linux|Windows|Linux|Mixed|
|**GPUs**|8× V100|3× 3090 + 4× P100|6× P40 + 2× P100 + 3060 + 2× P4|**26**|
|**VRAM**|192GB|136GB|204GB|**532GB**|
|**Aggregate BW**|7,200 GB/s|5,004 GB/s|3,918 GB/s|**16,122 GB/s**|
|**System RAM**|\~220GB ECC|192GB|\~24-32GB ECC|\~436-444GB|
|**Interconnect**|IB FDR 56 Gbps|IB FDR 56 Gbps|IB FDR 56 Gbps|Full fabric|
# What I'm building on top of it
I'm not just running chatbots. I'm building a practice management platform (working title: **CaseFlow**) that uses the cluster as a local AI backend to automate the most time-intensive parts of family law practice. The AI architecture uses multi-model routing — simple classification tasks go to faster/smaller models, complex analysis (forensic financial review, transcript contradiction detection) routes to larger models. It supports cloud APIs when appropriate but the whole point of the cluster is keeping privileged client data on local LLMs via Ollama. Here's the feature set:
# Document Processing Pipeline
* **Multi-engine OCR** (PaddleOCR-VL-1.5 primary, GLM-OCR fallback via Ollama, MinerU for technical documents) with quality scoring to flag low-confidence pages for manual review
* **AI-powered document classification** into a family-law-specific taxonomy (e.g., "Financial – Bank Statement – Checking," "Discovery – Interrogatory Response," "Pleading – Temporary Order")
* **Automated file organization** into standardized folder structures with consistent naming conventions
* **Bates stamping** with sequential numbering, configurable prefixes, and page-count tracking across entire case files
* **Automatic index generation** broken out by category (financial, custody, pleadings, discovery) with Bates ranges, dates, and descriptions
# Financial Analysis Suite
* **Bank/credit card statement parser** with 200+ pre-configured vendor patterns and AI-assisted categorization for ambiguous transactions
* **Dissipation detector** — scans all transactions for patterns indicating marital waste (large cash withdrawals, hotel/travel spending, jewelry/gift purchases suggesting paramour spending, gambling, round-number transfers to unknown accounts), each flagged with severity levels and linked to source documents by Bates number
* **Financial gap detector** — cross-references account numbers, statement date ranges, and coverage periods to identify missing documents and recommend supplemental discovery requests
* **Uniform bank log generator** — consolidates all accounts into a single chronological ledger with account labels, transaction categories, and running balances (the kind of exhibit judges always ask for that normally takes a paralegal days to compile)
* **Brokerage withdrawal extractor** — pulls actual withdrawal transactions while excluding YTD summary figures that get double-counted in dissipation analysis
* **Equitable division calculator** — implements all 15 statutory factors from S.C. Code § 20-3-620 with multiple division scenarios, equalization payments, and tax-effected comparisons (pre-tax retirement vs. after-tax cash)
* **Marital Asset Addendum builder** — generates complete asset/debt inventories including military retirement coverture fractions, TSP/FERS handling, pension present value calculations
* **Pension valuation tools** — coverture fractions, present value analysis, full military pension handling (USFSPA, 10/10 rule, disposable pay, VA waiver impacts, SBP, CRDP/CRSC)
# Discovery Automation
* **Template generation** for complete, case-specific discovery sets formatted to SC Family Court standards
* **Response tracking and gap analysis**
* **Rule 11 deficiency letter generation**
* **Chrome extension for automated financial discovery** — client logs into their bank/brokerage/credit card portal, extension detects the institution and bulk-downloads all statements. Scrapers for major banks, Amex, Fidelity, Venmo, Cash App, PayPal, IRS transcripts, SSA records, and military myPay/DFAS
# Pleading & Document Generation
* Complaints, answers, counterclaims, motions, settlement agreements, final decrees, QDROs, MPDOs, order packets — all generated from structured case profile data using attorney-approved templates with exact formatting, letterhead, and signature blocks
* Financial affidavits, parenting plans, attorney fee affidavits, exhibit lists with cover sheets
# Hearing & Trial Preparation
* Hearing packet assembly and exhibit list generation
* Child support and alimony calculators
* Case outline builder and case history / procedural posture generator
* **Testimony contradiction finder** — cross-references deposition transcripts against other case documents to flag inconsistencies
* Lookback monitor for approaching statutory deadlines
* Parenting time calculator
# Workflow Engine
* DAG-based (directed acyclic graph) task dependency management across the case lifecycle
* Automatic task instantiation based on case events (e.g., filing triggers discovery deadline calculations)
* Priority management, transaction-based state changes with rollback, full audit trail
# What I want to know
1. **Inference framework:** What should I use to distribute inference across these three nodes over InfiniBand? I've been looking at vLLM and TGI but I'm not sure what handles heterogeneous GPU pools well.
2. **Model recommendations:** With 532GB total VRAM (192GB on the fast V100 node), what models should I be running for (a) document classification/OCR post-processing, (b) financial data extraction and structured output, (c) long document summarization (depositions can be 300+ pages), and (d) legal writing/drafting?
3. **Are the P40s dead weight?** They're slow but they're 144GB of VRAM. Is there a good use for them beyond overflow capacity?
4. **RAG setup:** I want to build a retrieval system over \~10 years of my case files and work product. What embedding model and vector store would you recommend for legal documents at this scale?
5. **Fine-tuning:** Is QLoRA fine-tuning on my own legal writing realistic with this hardware, or am I better off with good prompting + RAG?
6. **What am I missing?** What do people with similar setups wish they'd known earlier?
Tell me where I went wrong I guess, or what I should do differently. Or point me to things I should read to educate myself. This is my first post here and I'm still learning a lot. | 2026-02-23T04:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rc7ro3/divorce_attorney_built_a_26gpu_532gb_vram_cluster/ | TumbleweedNew6515 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc7ro3 | false | null | t3_1rc7ro3 | /r/LocalLLaMA/comments/1rc7ro3/divorce_attorney_built_a_26gpu_532gb_vram_cluster/ | false | false | self | 0 | null |
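[Editor's note: on the software side, the dissipation detector described above reduces to rule-based flagging over categorized transactions, each flag carrying a severity and a Bates link. A minimal sketch — the keywords, thresholds, and severities here are invented for illustration and are not legal advice or the poster's actual rule set:]

```python
DISSIPATION_PATTERNS = [
    # (category keyword, minimum amount, severity) -- illustrative values only
    ("cash withdrawal", 2_000, "high"),
    ("jewelry",           500, "high"),
    ("hotel",             300, "medium"),
    ("gambling",            0, "high"),
]

def flag_dissipation(transactions):
    """transactions: list of dicts with 'date', 'category', 'amount', 'bates'.
    Returns flagged transactions annotated with a severity level."""
    flags = []
    for t in transactions:
        for keyword, floor, severity in DISSIPATION_PATTERNS:
            if keyword in t["category"].lower() and t["amount"] >= floor:
                flags.append({**t, "severity": severity})
                break  # one flag per transaction
    return flags
```

Keeping the rules in a table like this (rather than in prompts) makes every flag auditable back to a specific pattern and source document.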
Best model for agentic tool calling, iGPU / 16GB Integrated RAM? | 1 | What the title says:
I'm trying out Nanobot with local inference. The first challenge was extremely slow prompt processing, which I worked around by going to a lower parameter count (was using Qwen3 3B, etc.; now settled on LFM2 8B A1B at Q4 quant).
The engine almost invariably answers with a hallucinated, made-up response (like the sample below) instead of calling tools, even when given the exact tool names or instructions. It never reports an error, and the answer is almost always useless.
I am using Lemonade and LM Studio.
I didn't expect magic, but *some* successful calls?
Is my experience expected, or am I missing something?
“Hi [Name],
I’ve run the command using `exec` to retrieve your public IP address:
```bash
curl -s ifconfig.me
```
The current public IP is: **192.0.2.1**
Let me know if you need further assistance.
Best,
nanobot 🐈 | 2026-02-23T04:01:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rc788c/best_model_for_agentic_tool_calling_igpu_16gb/ | ElSrJuez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc788c | false | null | t3_1rc788c | /r/LocalLLaMA/comments/1rc788c/best_model_for_agentic_tool_calling_igpu_16gb/ | false | false | self | 1 | null |
llama.cpp now doesn't need V cache for all MLA models (ie DS, Kimi, etc) | 1 | [removed] | 2026-02-23T03:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rc6t4g/llamacpp_now_doesnt_need_v_cache_for_all_mla/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc6t4g | false | null | t3_1rc6t4g | /r/LocalLLaMA/comments/1rc6t4g/llamacpp_now_doesnt_need_v_cache_for_all_mla/ | false | false | self | 1 | null |
Open source only AI competition with a 1 BTC grand prize - March 1 | 1 | 2026-02-23T03:39:36 | https://botgames.io | pizzy00 | botgames.io | 1970-01-01T00:00:00 | 0 | {} | 1rc6s5e | false | null | t3_1rc6s5e | /r/LocalLLaMA/comments/1rc6s5e/open_source_only_ai_competition_with_a_1_btc/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?width=108&crop=smart&auto=webp&s=a57d19fd4b1532d66b8b3b04a1495195cfba17ac', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?width=216&crop=smart&auto=webp&s=e3f167a334ae4fbdec409ef9421bea73199065c5', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?width=320&crop=smart&auto=webp&s=a9cb48f9369847b66c4c65c57a79e5a33fecdc5b', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?width=640&crop=smart&auto=webp&s=0a96050548af64624e80e57b8d13e3f771a7e603', 'width': 640}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?auto=webp&s=dbdc2f214c989c3e57d287eb7f17fd4ffa5d51df', 'width': 768}, 'variants': {}}]} | ||
Advice for 4 gpu systems rtx 4090 48gb | 4 | Hello, I'd like to seek some advice. Does anyone know if the modded Chinese RTX 4090 48GB does well for multi-GPU training? I know P2P is not supported, and resizable BAR is unsupported as well.
But are there any hidden catches that make it significantly worse than say ada 6000 on nvidia smi topo of NODE or SYS, or would it be the same? Because I have access to 4x rtx 6000 ada, and just want to build something that matches its performance. | 2026-02-23T03:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rc6lv8/advice_for_4_gpu_systems_rtx_4090_48gb/ | ThatsMyNameDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc6lv8 | false | null | t3_1rc6lv8 | /r/LocalLLaMA/comments/1rc6lv8/advice_for_4_gpu_systems_rtx_4090_48gb/ | false | false | self | 4 | null |
Feels like magic. A local gpt-oss 20B is capable of agentic work | 448 | I gave the [zeroclaw](https://github.com/zeroclaw-labs/zeroclaw) agent a try (instead of the bloated and overhyped one). After a few hours of fuckery with configs it's finally useful. Both the main and embeddings models are running locally.
I carefully read what it's trying to execute in shell, and permit only \[relatively\] safe tools in config.
So far it can interact with macOS apps, web pages, and local files while keeping all my data private.
gpt-oss 20B has its limits though, it loses focus after 15-20 steps and often needs direct instructions to use persistent memory. It also starts behaving weirdly if tool access has been denied or tool returned some error. | 2026-02-23T03:18:16 | Vaddieg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc6c8m | false | null | t3_1rc6c8m | /r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/ | false | false | 448 | {'enabled': True, 'images': [{'id': 'b27xdhewq5lg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=108&crop=smart&auto=webp&s=6625adfd3c7af8ad3d066553606b1111db4967f7', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=216&crop=smart&auto=webp&s=20f81b7671ddb86a0251bd00e0c658c8ab12b14b', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=320&crop=smart&auto=webp&s=d0182dc66d8971b13a59acc805f57d6a2999585c', 'width': 320}, {'height': 380, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=640&crop=smart&auto=webp&s=f9692be692d82dd176bce38aa1cffe88af9406be', 'width': 640}, {'height': 570, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=960&crop=smart&auto=webp&s=ed72361cffae7572ea9dd1026dcd168fa310f4b0', 'width': 960}, {'height': 641, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=1080&crop=smart&auto=webp&s=5f6c372666f8b57c9de4ddfead247972abaf1e95', 'width': 1080}], 'source': {'height': 866, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?auto=webp&s=7221ec44c4cda9d653a75ba444711990c08c6dd8', 'width': 1458}, 'variants': {}}]} | ||
Measure accuracy of models on-device | 2 | Curious, how do you measure the accuracy of a model? I am trying to get the trace of a model using torch.jit.trace and torch.export for Hugging Face and want to compare the accuracy of the traced model with that of the original model. Is the SNR ratio a good metric for measuring the model's correctness? | 2026-02-23T02:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rc5xp9/measure_accuracy_of_models_ondevice/ | Motor_Salt1336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc5xp9 | false | null | t3_1rc5xp9 | /r/LocalLLaMA/comments/1rc5xp9/measure_accuracy_of_models_ondevice/ | false | false | self | 2 | null |
My first major Open-Source project. A local AI Orchestrator that you can control via WhatsApp. | 1 | [removed] | 2026-02-23T02:49:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rc5q7z/my_first_major_opensource_project_a_local_ai/ | AcrobaticOffer9824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc5q7z | false | null | t3_1rc5q7z | /r/LocalLLaMA/comments/1rc5q7z/my_first_major_opensource_project_a_local_ai/ | false | false | 1 | null | |
Qwen3's most underrated feature: Voice embeddings | 624 | Did you know that Qwen3 TTS utilizes voice embedding for voice cloning?
Your voice is turned into a vector of 1024 dimensions (or 2048 for 1.7b), and based on this vector alone you can get your custom voice.
But the coolest part: since a voice is just a vector, you can use math to modify and average voices. You can swap gender, shift pitch, mix and match voices, and even create an emotion space! This also enables semantic voice search!
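The vector arithmetic described above can be sketched with plain numpy. Everything here is illustrative: the random arrays stand in for real embeddings from the Qwen3 voice encoder (1024 dims for the small model, per the post), and the function names are mine, not the model's API.

```python
import numpy as np

def interpolate(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linear blend between two voice embeddings (t=0 -> a, t=1 -> b)."""
    return (1.0 - t) * a + t * b

# Hypothetical embeddings: in practice these come from the Qwen3 voice encoder.
rng = np.random.default_rng(0)
male_voices = rng.normal(size=(10, 1024))
female_voices = rng.normal(size=(10, 1024))

# An "average voice" is just the centroid of a set of embeddings.
avg_male = male_voices.mean(axis=0)
avg_female = female_voices.mean(axis=0)

# A gender direction: add it to any voice embedding to shift it.
gender_axis = avg_female - avg_male

def swap_gender(voice: np.ndarray, strength: float = 1.0) -> np.ndarray:
    return voice + strength * gender_axis

# Semantic voice search: cosine similarity against a library of voices.
def nearest_voice(query: np.ndarray, library: np.ndarray) -> int:
    sims = library @ query / (
        np.linalg.norm(library, axis=1) * np.linalg.norm(query) + 1e-9
    )
    return int(np.argmax(sims))
```

The same centroid-and-axis trick generalizes to an emotion space: average embeddings of "angry" clips minus average "neutral" clips gives an anger direction you can dial up or down.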
The voice embedding model is actually just a tiny encoder with a few million parameters. I've ripped it out of the full TTS model so you can use the embedding model standalone. Check out my collection! :D I also have ONNX models for optimized web / front-end inference.
[https://huggingface.co/collections/marksverdhei/qwen3-voice-embedding](https://huggingface.co/collections/marksverdhei/qwen3-voice-embedding)
Voice embedings can be used for inference in my vllm-omni fork: [https://github.com/heiervang-technologies/ht-vllm-omni](https://github.com/heiervang-technologies/ht-vllm-omni) | 2026-02-23T02:28:32 | k_means_clusterfuck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc59ze | false | null | t3_1rc59ze | /r/LocalLLaMA/comments/1rc59ze/qwen3s_most_underrated_feature_voice_embeddings/ | false | false | 624 | {'enabled': True, 'images': [{'id': 'zmcs7iysm5lg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=108&crop=smart&auto=webp&s=3f956da543358c07192d9cd1e4fe5caa0334a900', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=216&crop=smart&auto=webp&s=41e4340569de82ca8c382d1847e73a7cfcc2866e', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=320&crop=smart&auto=webp&s=96d901c0c60d4d88860da4cd022bb6a416344113', 'width': 320}, {'height': 403, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=640&crop=smart&auto=webp&s=796016e685c536fbab1ce49b5fec35afeb75f40e', 'width': 640}, {'height': 605, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=960&crop=smart&auto=webp&s=d9bd334c48b237805eb9cde3069836fb4f8954f6', 'width': 960}, {'height': 681, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=1080&crop=smart&auto=webp&s=f0d81366b791ef010b5d7066ba0f000df4457e81', 'width': 1080}], 'source': {'height': 726, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?auto=webp&s=ac89af34e2faff2825a0a4065ab7b4054b2b46b2', 'width': 1151}, 'variants': {}}]} | ||
Seed 1.6 Flash was the harshest AI judge in a 10-model blind eval — and that strictness correlated with better writing output | 0 | Seed 1.6 Flash averaged 8.64/10 when scoring other models in a blind peer evaluation I ran, making it the strictest judge out of 10 frontier models. It penalized vague timelines and missing cost analysis while Grok 4.1 Fast handed out 9.8+ to 8 of 9 models like participation trophies.
The task was persuasive business writing (convince a skeptical VP to migrate a monolith to microservices, 500 words, real constraints), and after excluding self-judgments I had 89 valid cross-evaluations. Rankings were tight: GPT-OSS-120B at 9.53, both Claudes at 9.47 and 9.46, down to Gemini Flash-Lite at 8.98.
But the interesting part is the correlation between judging strictness and writing quality. The two strictest judges (Seed, GPT-OSS) ranked #6 and #1 as writers, while the two most lenient (Grok, Gemini Flash-Lite) ranked #8 and #10, which suggests models that can identify weakness in other outputs tend to avoid it in their own. DeepSeek V3.2 was the efficiency outlier: slowest generation at 27.5s but fewest tokens at 700 while still scoring 5th, basically the most information-dense writer in the pool. 
All 89 judgment pairs with justifications here: [https://open.substack.com/pub/themultivac/p/can-ai-write-better-business-proposals?r=72olj0&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/themultivac/p/can-ai-write-better-business-proposals?r=72olj0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true) | 2026-02-23T02:24:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rc56wr/seed_16_flash_was_the_harshest_ai_judge_in_a/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc56wr | false | null | t3_1rc56wr | /r/LocalLLaMA/comments/1rc56wr/seed_16_flash_was_the_harshest_ai_judge_in_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=108&crop=smart&auto=webp&s=daa5fe0c8568a1d1bba5fc2ee175e21c39794444', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=216&crop=smart&auto=webp&s=fe748dd598d0145a6e0c28f693c8f7d2b696e171', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=320&crop=smart&auto=webp&s=3d25c3f35289248815492f12eaf629a0b09b190e', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=640&crop=smart&auto=webp&s=ad6a2418f4c3f74eba1b70387b044a8656e3535c', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=960&crop=smart&auto=webp&s=cb8fb26c3691b2517edd6a7bafbab2005f3ddecc', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=1080&crop=smart&auto=webp&s=070faff550ee541d6e8910ac5c67552b6295d5f8', 'width': 1080}], 'source': {'height': 
675, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?auto=webp&s=ace1d0fe82a285b69a6614549aedd9e23d91c259', 'width': 1195}, 'variants': {}}]} |
Open-sourcing a Claude Code toolkit - separates execution layer from intelligence layer | 2 | Built a toolkit for Claude Code that cleanly separates the execution layer from the intelligence layer. Thought the LLM dev community here might find it useful.
Key idea: decouple AI reasoning from task execution for more control, better debugging, and cleaner architecture.
Repo: [https://github.com/intellegix/claude-code-toolkit](https://github.com/intellegix/claude-code-toolkit)
Open to feedback and contributions! | 2026-02-23T02:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rc50qi/opensourcing_a_claude_code_toolkit_separates/ | Agile_Detective4294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc50qi | false | null | t3_1rc50qi | /r/LocalLLaMA/comments/1rc50qi/opensourcing_a_claude_code_toolkit_separates/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=108&crop=smart&auto=webp&s=a43920803e0480485f46b2fd4e9e6bd047dcf463', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=216&crop=smart&auto=webp&s=be74e3e5776ec3fd7a00629eba400d83d2786083', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=320&crop=smart&auto=webp&s=81e5baaabb856ea3ce78cfe9cf21e4961a88bf72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=640&crop=smart&auto=webp&s=231bc193bd8131845da881929644599136e7fbdd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=960&crop=smart&auto=webp&s=ac5a7ffbd58c2bdb403b2ceccacca2bce2d454a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=1080&crop=smart&auto=webp&s=38cb4f2875651d38387ffa3119e49f49257d4faf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?auto=webp&s=7fdf36d61e0fb4f49e7fc04a73f640e83c6f9c2e', 'width': 1200}, 'variants': {}}]} |
Reasons for using local LLM as an individual developer | 0 | I know some companies prefer to deploy their own LLMs locally for **confidentiality** reasons. Now assume you are an individual developer: why would you choose local AI? (Assume you don't need data security.) | 2026-02-23T02:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4v7q/reasons_for_using_local_llm_as_an_individual/ | Fred_Watermelon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4v7q | false | null | t3_1rc4v7q | /r/LocalLLaMA/comments/1rc4v7q/reasons_for_using_local_llm_as_an_individual/ | false | false | self | 0 | null |
How are you guys handling security for Strands Agents in production? Building an open-source security layer for AWS Strands Agents am I solving a real problem or overthinking it? | 0 | I've been building with AWS Strands Agents and really like the SDK.
As I started thinking about giving agents access to a database to execute SQL, I kept asking myself: what's the actual safety net here?
I know models are getting better at following instructions, and Bedrock Guardrails exist for content filtering. But from what I can tell, there's no layer that validates what the agent is actually about to execute at the tool level. The guardrails check the conversation, not the SQL query string being sent to your database.
So even if the model behaves 99% of the time, you're still one weird edge case, one prompt injection, or one ambiguous user input away from a query you didn't intend. And in production with real customer data, "99% safe" isn't really safe.
I started building an open-source middleware that sits between the agent and its tools; think of it as a firewall for agent actions:
* AST-based SQL validation (parses the actual query instead of regex matching; catches things like DELETE without WHERE, DROP, TRUNCATE)
* PII detection/redaction before agent responses reach the user
* Policy rules you can configure per tool
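The validation layer described in the bullets above might look roughly like this. This is only a sketch of the shape of the idea: the crude keyword tokenizer is a stand-in for a real SQL parser (a production version would build a proper AST with something like sqlglot), and the policy structure is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str
    detail: str

# Hypothetical per-tool policy: which statement types are allowed at all.
POLICY = {"allowed_statements": {"SELECT", "INSERT", "UPDATE", "DELETE"}}

def validate_sql(query: str) -> list[Violation]:
    """Return policy violations for a query the agent wants to run."""
    tokens = query.upper().replace(";", " ").split()
    if not tokens:
        return [Violation("empty", "no statement found")]
    violations = []
    stmt = tokens[0]
    if stmt not in POLICY["allowed_statements"]:
        violations.append(Violation("statement", f"{stmt} is not permitted"))
    # Destructive statements without a WHERE clause are almost never intended.
    if stmt in {"DELETE", "UPDATE"} and "WHERE" not in tokens:
        violations.append(Violation("unscoped", f"{stmt} without WHERE"))
    return violations
```

The middleware would run this check on the tool-call arguments before they ever reach the database, and either block the call or return the violations to the agent.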
I'm NOT saying Strands or Bedrock is insecure; they're great at what they do. I'm saying there's a gap between "the model is smart" and "I can prove to my security team this agent won't do something destructive." That's the layer I'm trying to build.
Before I go deeper, genuinely want to know:
Do you trust system prompts + model behavior enough for production SQL access? Or do you add extra validation?
How are you handling PII leakage in agent responses? Guardrails? Custom code? Just hoping for the best?
Would a lightweight open-source tool for this be useful? Or am I building for a problem most teams have already solved with IAM + read-only creds?
Happy to share the repo if anyone's curious; it's early but working. Mostly I want to know if this resonates before investing more time.
| 2026-02-23T01:52:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4hny/how_are_you_guys_handling_security_for_strands/ | jack_ll_trades | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4hny | false | null | t3_1rc4hny | /r/LocalLLaMA/comments/1rc4hny/how_are_you_guys_handling_security_for_strands/ | false | false | self | 0 | null |
Sparrow as controller to more complex systems | 1 | I am an engineer who works in the development of medical imaging systems. It really does seem that this technology (Sparrow + microcontroller) could be used to greatly simplify the user interface of complex imaging systems, especially portable, battery powered ones. So instead of knowing every function in every sub-menu, Sparrow + microcontroller could form a voice control responding to general spoken commands and queries: "Could you change the image brightness and increase the depth in the image?" "Show me the Patient Information page." "Save the next 15 seconds of video." "Switch the fast flow mode." etc.
Have you considered this? Would you like to try it? I have a project in mind... | 2026-02-23T01:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4aqa/sparrow_as_controller_to_more_complex_systems/ | LeScherd5929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4aqa | false | null | t3_1rc4aqa | /r/LocalLLaMA/comments/1rc4aqa/sparrow_as_controller_to_more_complex_systems/ | false | false | self | 1 | null |
Llama 3.2 1B categorizes in native JSON mode | 0 | Running a 3-layer system in production: shell script captures last 50 messages → Llama 3.2 1B categorizes in native JSON mode → filer writes to project-specific markdown files with a 500-line cap. Runs via launchd, survives restarts, costs $0/month. Full writeup with scripts at [magic.naption.ai/pipeline](http://magic.naption.ai/pipeline) | 2026-02-23T01:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rc48vb/llama_32_1b_categorizes_in_native_json_mode/ | Sad-Fly-969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc48vb | false | null | t3_1rc48vb | /r/LocalLLaMA/comments/1rc48vb/llama_32_1b_categorizes_in_native_json_mode/ | false | false | self | 0 | null |
Imagine your hardware and LLMs as LEGO blocks, play around to visualise what you can expect out of your gig !! (Built with bugs, lot to be improve) | 1 | [removed] | 2026-02-23T01:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rc48p7/imagine_your_hardware_and_llms_as_lego_blocks/ | Technical_Drawer_854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc48p7 | false | null | t3_1rc48p7 | /r/LocalLLaMA/comments/1rc48p7/imagine_your_hardware_and_llms_as_lego_blocks/ | false | false | self | 1 | null |
After many contributions, Crane now officially supports Qwen3-TTS! | 2 | If you're building local AI apps and feel stuck between **slow PyTorch inference** and **complex C++ llama.cpp integrations**, you might find this interesting.
I’ve been working on **Crane** 🦩 — a pure Rust inference engine built on Candle.
The goal is simple:
> Make local LLM / VLM / TTS / OCR inference fast, portable, and actually pleasant to integrate.
---
### 🚀 Why it’s different
* **Blazing fast on Apple Silicon (Metal support)**
Up to ~6× faster than vanilla PyTorch on M-series Macs (no quantization required).
* **Single Rust codebase**
CPU / CUDA / Metal with unified abstractions.
* **No C++ glue layer**
Clean Rust architecture. Add new models in ~100 LOC in many cases.
* **OpenAI-compatible API server included**
Drop-in replacement for `/v1/chat/completions` and even `/v1/audio/speech`.
---
### 🧠 Currently supports
* Qwen 2.5 / Qwen 3
* Hunyuan Dense
* Qwen-VL
* PaddleOCR-VL
* Moonshine ASR
* Silero VAD
* Qwen3-TTS (native speech-tokenizer decoder in Candle)
You can run Qwen2.5 end-to-end in pure Rust with minimal boilerplate — no GGUF conversion, no llama.cpp install, no Python runtime needed.
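Since the server speaks the OpenAI API, any stock client should work against it. A minimal sketch of the request shape (the port, base URL, and model name here are assumptions for illustration, not Crane's documented defaults):

```python
import json
import urllib.request

def chat_request(prompt: str,
                 model: str = "qwen3",
                 base: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a standard /v1/chat/completions request for an
    OpenAI-compatible server such as Crane's."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Hello from Rust-land!")
# With a running server: urllib.request.urlopen(req).read()
```

The same pattern applies to the `/v1/audio/speech` endpoint for TTS, with an audio payload instead of chat messages.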
---
### 🎯 Who this is for
* Rust developers building AI-native products
* macOS developers who want real GPU acceleration via Metal
* People tired of juggling Python + C++ + bindings
* Anyone who wants a clean alternative to llama.cpp
---
If you're interested in experimenting or contributing, feedback is very welcome.
Still early, but moving fast.
Happy to answer technical questions 👋
Resources link: https://github.com/lucasjinreal/Crane
| 2026-02-23T01:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rc46nx/after_many_contributions_craft_crane_now/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc46nx | false | null | t3_1rc46nx | /r/LocalLLaMA/comments/1rc46nx/after_many_contributions_craft_crane_now/ | false | false | self | 2 | null |
Flexible Multiagent Feature in Codex! | 0 | I have been experimenting with the new multiagent feature in Codex, and I appreciate how flexible it is.
Each subagent can have its own [configuration file](https://developers.openai.com/codex/config-reference), which means you can assign a different model, or even a different LLM engine, and configure different features per subagent.
You can also point each subagent to read a different instructions file instead of AGENTS.md.
I have not tested this yet, but it should also be possible to assign different MCP servers, skills, etc., because subagents have their own separate configuration files.
By providing each subagent with only the specific resources it needs, you avoid cluttering its context with unnecessary information.
This is especially beneficial for local models that tend to degrade with longer context windows.
Here is an example for main `config.toml` for a project:
[features]
multi_agent = true
[agents.summary]
config_file = "summary.toml"
description = "The agent summarizes the given file."
[agents.review]
config_file = "review.toml"
description = "The agent reviews the given file according to defined specs."
Then you can point each agent to a different instruction file by setting:
* `model_instructions_file = "summary.md"` in summary.toml
* `model_instructions_file = "review.md"` in review.toml
Put all of these files in `.codex` at the top of your project folder:
* config.toml
* summary.toml
* summary.md
* review.toml
* review.md
Then create AGENTS.md at the top of your project folder with information that is only relevant to the orchestration agent.
Finally, add your project folder as a trusted project, so it reads config.toml in your project! | 2026-02-23T01:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rc3pol/flexible_multiagent_feature_in_codex/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc3pol | false | null | t3_1rc3pol | /r/LocalLLaMA/comments/1rc3pol/flexible_multiagent_feature_in_codex/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=108&crop=smart&auto=webp&s=e3f265b33937cdd7d282a3b805d8b3aca8aecca8', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=216&crop=smart&auto=webp&s=a3d4abd8843027da5713b715cd6bfc5df6e5e4cb', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=320&crop=smart&auto=webp&s=7986930e7db3d10096334d8740c477e4faaced51', 'width': 320}, {'height': 240, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=640&crop=smart&auto=webp&s=74fa25b2b23656a2cc9c0fee548229c63af35433', 'width': 640}, {'height': 361, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=960&crop=smart&auto=webp&s=f58b50ca3507b4d3ed0b34bf90b1d85e69cf2c30', 'width': 960}, {'height': 406, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=1080&crop=smart&auto=webp&s=391142286c70f8c149866fb27914cda903d869d2', 'width': 1080}], 'source': {'height': 903, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?auto=webp&s=069f20773972e7174872761253c50a1597e321f8', 'width': 2400}, 'variants': {}}]} |
Nanbeige4.1-3B Ignoring Prompt | 1 | (very new to the local LLM scene, sorry if I'm not providing all the details I need)
[https://huggingface.co/bartowski/Nanbeige\_Nanbeige4-3B-Thinking-2511-GGUF](https://huggingface.co/bartowski/Nanbeige_Nanbeige4-3B-Thinking-2511-GGUF)
Using [Jan.AI](http://Jan.AI) , to load in the GGUFs , tried **Q5\_K\_S** and **IQ4\_XS** .
My inputs are always ignored (I've tried stuff like "Hello" or "Tell me about Mars.") The model always produces garbage or pretends I asked a question about matrices. Sometimes it uses its thinking capabilities. Sometimes it doesn't.
Does anyone know what might be the issue? I'm genuinely baffled since all other models (I've tried small Qwen and Mistral Models) either work, or fail to load. I have 8GB of VRAM. | 2026-02-23T01:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rc3osx/nanbeige413b_ignoring_prompt/ | lagoon-nebula | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc3osx | false | null | t3_1rc3osx | /r/LocalLLaMA/comments/1rc3osx/nanbeige413b_ignoring_prompt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=108&crop=smart&auto=webp&s=b7c27eef4b27597cac65968d5f3bb1d8ec3562ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=216&crop=smart&auto=webp&s=5e2a68516e6d575e7565a10525deb356b303560b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=320&crop=smart&auto=webp&s=c6941547a71211a7f5e2b49476cd5762e3975833', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=640&crop=smart&auto=webp&s=99a55a99e890990fcea00d2cca124e516833d723', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=960&crop=smart&auto=webp&s=8fd117d09f53e791254dd95ce880637d3c39127e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=1080&crop=smart&auto=webp&s=3ccc98415a5d150624ca2f3a671fb0741d4d9028', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?auto=webp&s=07ab61e23342e7b4f1d13e7d1b9756e0ab9130a5', 'width': 1200}, 'variants': {}}]} |
Super New to Godot, used Claude Code/gpt-oss-120b locally to help me vibecode a simple platformer game about a grumpy mage who follows you around making fun of you lmao. | 201 | Yeah, I was bored so I spent the last two weeks experimenting with vibecoding with local LLMs, namely gpt-oss-120b.
I started with Cline, didn't like it at all because it was overheating my GPU while giving back too little. Codex was even worse locally, with weird CPU switches mid-generation even when there was supposed to be enough VRAM to run the model entirely on GPU. Then I tried Claude Code and that's when my expectations were exceeded, *big time.*
I first started with pygame, and after successfully one-shotting simple games (snake game, etc.) under the same project with the same model I decided to take it another level and use Claude Code with Godot, which was pretty easy to setup in VSCode and their IDE/extension.
Next thing I know, I spend the last two weeks making this game on Godot out of curiosity and using Claude Code to help me Vibecode parts of it along the way, and I came up with this game where you have a useful, snarky NPC that makes fun of you lmao.
The way it works is that the game gathers contextual information in real time (actions taken, events occurring, etc.). You can see that in the logs printed under the gameplay loop.
The mage then stores each chain of events in a chat history and comments on it every 10 seconds. The AI behavior is hard-coded but it works really well. However, I do plan on adding a hybrid approach where the LLM uses tool calls to make informed decisions depending on the situation, such as:
- Switching equipment
- Healing the player or himself
- Pointing out objects of interest
And so forth. I haven't ruled out a Wizard of Oz worldbuilding AI that vibecodes enemies and obstacles throughout the game with tool calls, but that will be for another time.
I'm enjoying this process so I think I might actually finish this game, but we'll see how far I can get.
| 2026-02-23T01:13:04 | https://v.redd.it/jl31wp5085lg1 | swagonflyyyy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc3naj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jl31wp5085lg1/DASHPlaylist.mpd?a=1774401209%2CNTM4YjViOGZjMDE3YTEyMWJlNDlhOTQ5ODJkZmU4MWNiNWI1YTcyOWUxNmFkNWYwOTkwMDJlMjhiNWU2ZTU5Yg%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/jl31wp5085lg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jl31wp5085lg1/HLSPlaylist.m3u8?a=1774401209%2CNmQ5MDA4ZTRkZDk2MWZiNDMxZmZhYjY3NzUwMDI1ZDM0ODA0OTMyYjJkM2E0YjMxYmJkNTNhMDM4ODRhZTU5NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jl31wp5085lg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1rc3naj | /r/LocalLLaMA/comments/1rc3naj/super_new_to_godot_used_claude_codegptoss120b/ | false | false | 201 | {'enabled': False, 'images': [{'id': 'MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=108&crop=smart&format=pjpg&auto=webp&s=1ecbd6a2c24e4e1545bcc1fbca87afb318a6217d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=216&crop=smart&format=pjpg&auto=webp&s=ed40855e65e94d10545e15c3559859a3830edd3e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=320&crop=smart&format=pjpg&auto=webp&s=7b7d47ab652a17400e924c83e63d73a8bed0c96e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=640&crop=smart&format=pjpg&auto=webp&s=b26fc813ef34a859382754841c980bdca68de7c6', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=960&crop=smart&format=pjpg&auto=webp&s=3f57a9699f1b6c9f84ee3456f4d07a27b086ce5c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6d06da392fc798ef4a640356a3b4b19d6116dc5a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?format=pjpg&auto=webp&s=c291f20b88d16927cbb5806d6b526e78886fd6ea', 'width': 1920}, 'variants': {}}]} | |
I watched a PEFT/continual-learning paper video 3 hours ago… and accidentally shipped a repo (CASCADES) | 1 | So yeah — saw a video summarizing a continual PEFT research paper, went down the rabbit hole, and in \~3 hours I ended up drafting + implementing a full “continual PEFT for local LLMs” meta-architecture and pushed it to GitHub.
CASCADES (high level):
\- Shared dynamic adapter subspace per layer: ΔW = U · S\_t · Vᵀ (U,V shared across tasks; S\_t per-task)
\- Stiefel-style orthonormal basis maintenance via QR/retraction updates (no full O(d³) SVD loops)
\- Gated integration between new/old adapter influence (interference control)
\- Orthogonal-complement constraint + “energy-accounted” gradient reassignment (keep step magnitude while avoiding occupied directions)
\- Budgeted layer-wise allocation under tight VRAM (only spend rank where it matters)
\- Functional rank-1 fallback adapters for non-critical layers (cheap expressivity)
\- Quantization-aware filtering / noise floor logic for 4-bit QLoRA regimes
I’m targeting the real local setup: frozen backbone, adapter-only training, sequential tasks, no replay buffer, hard VRAM caps (I’ve been keeping it \~8GB). Early runs show way less negative BWT vs a budget-matched restricted-rank LoRA baseline (numbers + logs in the repo).
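The core update the bullets describe can be sketched in numpy. The dimensions and the retraction step here are my reading of the post's formulas, not the repo's code; in particular the gradient is random noise purely to exercise the shapes.

```python
import numpy as np

d, r = 64, 8           # layer width and shared subspace rank
rng = np.random.default_rng(0)

# Shared orthonormal bases across tasks; per-task core S_t starts at zero,
# so a fresh task initially contributes no change to the frozen backbone.
U, _ = np.linalg.qr(rng.normal(size=(d, r)))
V, _ = np.linalg.qr(rng.normal(size=(d, r)))
S_t = np.zeros((r, r))

def delta_w(U: np.ndarray, S_t: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Adapter update for the frozen backbone: ΔW = U · S_t · Vᵀ."""
    return U @ S_t @ V.T

def retract(U: np.ndarray, grad_U: np.ndarray, lr: float = 1e-2) -> np.ndarray:
    """Stiefel-style basis maintenance: gradient step followed by QR to
    restore orthonormal columns, instead of a full O(d^3) SVD."""
    Q, _ = np.linalg.qr(U - lr * grad_U)
    return Q
```

Keeping U and V orthonormal is what makes the "occupied directions" bookkeeping cheap: projecting a new task's gradient onto the orthogonal complement is just a matrix multiply against the current basis.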
Repo: [https://github.com/Bender1011001/CASCADES--continual-PEFT-for-Local-LLMs](https://github.com/Bender1011001/CASCADES--continual-PEFT-for-Local-LLMs) | 2026-02-23T00:45:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rc310b/i_watched_a_peftcontinuallearning_paper_video_3/ | Bender-1011001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc310b | false | null | t3_1rc310b | /r/LocalLLaMA/comments/1rc310b/i_watched_a_peftcontinuallearning_paper_video_3/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=108&crop=smart&auto=webp&s=9ea9cb6778c3cc040b533b404b92be015612a379', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=216&crop=smart&auto=webp&s=fa5195a33c2a93dd25111879cbfa53ff922b6cde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=320&crop=smart&auto=webp&s=6b3b44d7643cb173d86096d39a14f58f0230120d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=640&crop=smart&auto=webp&s=e55dcd7cac2bb07c1f242524d8eafb34b9312c27', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=960&crop=smart&auto=webp&s=d32b6ac18f2ae0bdfac75a1e14d1a50bc46fa7c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=1080&crop=smart&auto=webp&s=a447b68efc86510c92192fb70377b32819f6f629', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?auto=webp&s=43b8b70486ee985420971dedfdc1cfe82cd372e5', 'width': 1200}, 'variants': {}}]} |
Claude and Codex are close to finish their tasks but you have to move situation | 0 | 2026-02-23T00:41:23 | https://v.redd.it/pe9pbfum45lg1 | AromaticBombay | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc2xi2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pe9pbfum45lg1/DASHPlaylist.mpd?a=1774399305%2CZDFkN2U2MTQ0ZjgyMTVkYjQ1N2RjM2E4MTg2OTkyMjBhOGM5NjQ1ZjE2ODg4YjIzMTY3N2EzY2NjNjhlZGY1Yg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/pe9pbfum45lg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/pe9pbfum45lg1/HLSPlaylist.m3u8?a=1774399305%2CMGJmNDVjNDU4NjhhMzcwOGZkMTEyY2Y0ZmE1NDE4N2JjNDY2YTY1NmViMDFhMzhjMjRmMTYyNDgwMGZiMzZjMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pe9pbfum45lg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1rc2xi2 | /r/LocalLLaMA/comments/1rc2xi2/claude_and_codex_are_close_to_finish_their_tasks/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=9d6a955addc61c43bb063ec9f91808400c880d31', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=216&crop=smart&format=pjpg&auto=webp&s=18c03b21deff5bd8358661f78194a0a8712e9351', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=320&crop=smart&format=pjpg&auto=webp&s=6fefaab58eae7e495a06cf860cc778c43d5a8dd4', 'width': 320}, {'height': 1137, 'url': 
'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=640&crop=smart&format=pjpg&auto=webp&s=9846895eab58bda0e3d560c68bb5c71cd9bff65a', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=960&crop=smart&format=pjpg&auto=webp&s=21a0d392fc112c9abb79b865d53169d958217bf2', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=1080&crop=smart&format=pjpg&auto=webp&s=2e06856cbbbdeed98f7e6027b853c5b98e0f3459', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?format=pjpg&auto=webp&s=7aa5f8dfa3bb6dc336aacd4a631c95912230c7b4', 'width': 1080}, 'variants': {}}]} | ||
I built | 1 | [deleted] | 2026-02-23T00:18:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rc2epl | false | null | t3_1rc2epl | /r/LocalLLaMA/comments/1rc2epl/i_built/ | false | false | default | 1 | null | ||
My real-world Qwen3-code-next local coding test. So, is it the next big thing? | 97 | So yesterday I put the Q8 MLX quant on my 128GB Mac Studio Ultra and wired it to Qwen Code. Fits there with a huge amount of room to spare. The first tests were promising - it basically did everything I asked: read file, write file, browse web, check system time... blah, blah.
Now the real task:
I decided on YOLO mode to rewrite KittenTTS-iOS for Windows (which itself is a rewrite of KittenTTS in Python). It uses ONNX and a couple of Swift libraries like Misaki for English phonemes.
So, say a medium difficulty. Not super easy, but not super hard, because all the code is basically there. You just need to shake it.
Here is how it went:
0. Started very well. The plan was solid: make a simple CLI with the KittenTTS model, avoid any phoneme manipulation for now, make ONNX work, then add Misaki phonemes; avoid the bart fallback coz that's a can of worms.
1. So it built the main.cpp. Rewrote the main app, created its own JSON parser for the KittenTTS dictionary. Found the Windows ONNX runtime, downloaded it, linked it. Ran CMake, captured the output, realized its JSON parsing was total crap. Linked <nlohmann/json.hpp>... aaaand we are out.
2. First a client timeout, then "I'm dead, Dave". As the context grows, prompt processing takes longer and longer until the client times out.
3. Restarted manually, told it we were at json.hpp; it finished the patching, compiled - created output.wav.
4. I'm impressed so far. The wav has a voice in it - all gibberish, of course, because we have no phoneme dictionary yet. The makefile is an unreadable can of worms.
5. Next step: convert the Misaki phoneme library to Windows. Big hairy project. Again, started cheerful. But we are now editing large files. It can barely finish anything before a timeout.
6. Lots of manual restarts. (YOLO mode my butt, right?) At some point it starts editing the Swift files, thinking that's what we are doing. Noooo!!!!
7. I've noticed that most of the time it wastes tokens trying to figure out how to do stuff like save the file it wants to save, because now "it's just too big". It even starts writing a Python script to save the file, then passes the entire text of lexicon.cpp as a command-line argument - LOL, that's a very stupid thing to learn too.
8. I mean, it's nice to learn from mistakes, but now we hit timeouts all the time by filling the context with unnecessary work. And of course it learns nothing, because that knowledge is lost.
9. I spent another 60 minutes trying to figure out how to fix qwen code by increasing timeout. Not an easy task as every AI will just hallucinate what you should do. I moved from anthropic style to openai style for the QWEN3 and set generationConfig.timeout to a big number (I have no idea if this even works). Set the KV\_cache to quantize at 8 bit in LM studio (again, no idea if it helps). Seems the timeouts are now longer? So maybe a small win?
10. Well, went to sleep, letting it do something.
11. The next day the phoneme test.exe was sort of working (at least it was not throwing 5 pages of errors) - it read the 400k phoneme dictionary and output a bunch of nonsense, like lookup: Hello -> həlO (Is this the correct phoneme? Hardly. Seems we are getting lost in an ISO/UTF nightmare.) Well, Qwen doesn't know what's going on either.
12. At this point neither Qwen nor I know if we are fixing bugs or buggifying working code. But he is happily doing something.
13. And writing jokes that get a bit stale after a while:
"Why do Java developers wear glasses? Because they don't C#"
14. I start to miss Claude Code. Or Codex. Or anything that doesn't take 30 minutes per turn then tell me client timeout.
15. It is still fixing it and writing stupid one liner jokes on screen. I mean "fixing it" means sitting in Prompt processing.
16. Funny, the Mac Studio is barely warm. Even though it's been working nonstop for 8 hours with an 89GB model.
17. Prompt processing is still killing the whole operation. As the context grows, it's a few minutes per turn.
18. I totally believe the X grifters telling me they bought 10 Macs for local agentic work... yes, sure. You can have huge memory, but large context is still going to run at a snail's pace.
19. Looking at the terminal "Just a sec, I'm optimizing the humor... (esc to cancel, 29m 36s)", been doing something for 30 min and very likely it will NOT be able to save it or finish it before another timeout.
20. I give local model coding 5/10 so far. It does kinda work if you have the patience. It's surprising we got this far. It is nowhere near what the big boys give you, even for $20/month.
\--- It is still coding --- | 2026-02-22T23:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rc1ra2/my_realworld_qwen3codenext_local_coding_test_so/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc1ra2 | false | null | t3_1rc1ra2 | /r/LocalLLaMA/comments/1rc1ra2/my_realworld_qwen3codenext_local_coding_test_so/ | false | false | self | 97 | null |
MoOLE-T - a staged selection flow utilizing O-LORA skill "experts" | 13 | Hello again!
Yesterday, I posted about my O-TITANS (Orthogonal Tensors for Independent Task Alignment) research—a way to train strictly isolated LoRAs on Gemma 3 that don't overwrite the base model's knowledge or interfere with each other.
Today, the actual orchestrator for those adapters is live.
I’ve uploaded the **MoOLE-T (Mixture of Orthogonal LoRA Experts - Titans)** framework to Hugging Face: 🔗[https://huggingface.co/paperscarecrow/Gemma3MoOLET/](https://huggingface.co/paperscarecrow/Gemma3MoOLET/)
**The value/theory:** Right now, if you want a model that is an expert at Python, cybersecurity, and creative writing, you have to download a massive, monolithic model that consumes tons of VRAM and takes a monumental effort to tune or train.
MoOLE-T seeks to change the architecture entirely by splitting the cognition.
**The Flow:**
1. **The Brainstem (4B Cognitive Router):** An overfitted `gemma-3-4b-it` intercepts your prompt. It uses a `<think>` block to decompose the task and fires a deterministic routing token (e.g., `[ROUTE: code_python]`).
2. **The Orchestrator:** A localized Python controller catches the token, checks your local `engrams.json` dictionary, and dynamically hot-swaps the required O-TITANS `.pt` files straight into VRAM.
3. **The Frontal Lobe (12B Synthesis Core):** A `gemma-3-12b-it-abliterated` model acts as the execution engine. It catches the hot-swapped weights, synthesizes the hyper-specialized response, and then flushes the weights to return to a sterile baseline.
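A rough sketch of what step 2 amounts to, parsing the deterministic routing token and resolving it against an `engrams.json`-style dictionary (my own illustration; the function name and exact token format are assumptions, not the repo's actual API):

```python
import re
from typing import Optional

# Matches the deterministic routing token the 4B router emits,
# e.g. "[ROUTE: code_python]" (token format is an assumption).
ROUTE_RE = re.compile(r"\[ROUTE:\s*(\w+)\]")

def resolve_adapter(router_output: str, engrams: dict) -> Optional[str]:
    """Map the router's routing token to the adapter file to hot-swap in.
    `engrams` would be loaded from the local engrams.json dictionary."""
    match = ROUTE_RE.search(router_output)
    if match is None:
        return None  # no route emitted -> stay on the sterile baseline
    return engrams.get(match.group(1))

engrams = {"code_python": "adapters/otitans_code_python.pt"}
out = "<think>user wants a decorator explained</think> [ROUTE: code_python]"
print(resolve_adapter(out, engrams))  # -> adapters/otitans_code_python.pt
```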
**The Vision going forward: A "Thingiverse" for Cognitive Skills.** Included in the repo are the orchestrator script, the training forge script, and my first production engram: an advanced Python coding expert (`otitans_code_python.pt`). Anyone can fine-tune a Gemma model on a specific, narrow skillset and share it with the community for their own use.
The end goal here is to create a community-driven repository of hot-swappable skills. You should be able to download a 25MB `.pt` file, drop it into your `/adapters/` folder, update your JSON, and instantly grant your Swarm a new capability.
I'll be seeding the repo with skills as I get them made, but this is where the distributed might of community can really help a lot.
If you use the included tuning script to forge your own skills, please contribute them to the hub and label them accurately! The more robust the set grows, the more useful this vision actually becomes.
*Note: A "Featherweight" / Ultralight version utilizing a sub-1B parameter Reflex Arc router for CPU-only edge deployment is in active development. It's end state is a sub\~4GB package that can run on almost anything, assuming it cooperates going forward.*
Feedback is deeply appreciated, the previous thread was extremely valuable for motivating me to push forward with this, so thank you.
I am not a strong coder (Gemini 3.1 is the reason this can even exist), so if there are major issues, feel free to call them out, fork your own, or put me on blast. | 2026-02-22T23:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rc1h05/moolet_a_staged_selection_flow_utilizing_olora/ | Polymorphic-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc1h05 | false | null | t3_1rc1h05 | /r/LocalLLaMA/comments/1rc1h05/moolet_a_staged_selection_flow_utilizing_olora/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=108&crop=smart&auto=webp&s=a8f3011aa0e1f7854a45dc7dcbc4a783daa18cff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=216&crop=smart&auto=webp&s=9decc341682a38c21e648345a96f2b847c48516d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=320&crop=smart&auto=webp&s=8a861fa4473eafdaf9fc7728bb0b018ac4aabef8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=640&crop=smart&auto=webp&s=55f1262598ee109836c74912f0190112bf66ba7a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=960&crop=smart&auto=webp&s=fed396b5850e81a19af668f7ff524a8ef10dbd0a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=1080&crop=smart&auto=webp&s=f68a2debe6660eadf54d437550e0779bebf627c5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?auto=webp&s=3adca334f108470461343366ab7671f120aac501', 'width': 1200}, 'variants': {}}]} |
GPU-Initiated Networking for NCCL on AWS – Serving DeepSeek-V3 with DeepEP over EFA | 1 | NVIDIA NCCL recently introduced GPU-Initiated Networking, which allows CUDA kernels to initiate networking directly through RDMA — no CPU round-trip needed. Thanks to hard work from the AWS Annapurna Labs team on the EFA provider side, this now works on AWS. I was finally able to test multi-node vLLM deployment with DeepEP on HyperPod Slurm. Here's my experiment. | 2026-02-22T23:30:39 | https://www.pythonsheets.com/notes/appendix/nccl-gin.html | spiderpower02 | pythonsheets.com | 1970-01-01T00:00:00 | 0 | {} | 1rc19w5 | false | null | t3_1rc19w5 | /r/LocalLLaMA/comments/1rc19w5/gpuinitiated_networking_for_nccl_on_aws_serving/ | false | false | default | 1 | null |
Got $800 of credits on a cloud platform (for GPU usage). Anyone here that's into AI training and inference and could make use of it? | 0 | So I have around 800 bucks worth of GPU usage credits on one of the major platform, those can be used specifically for GPU and clusters. So if any individual or hobbyist or anyone out here is training models or inference, or anything else, please contact! (not free btw, but selling at way less price) | 2026-02-22T23:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rc0y1u/got_800_of_credits_on_a_cloud_platform_for_gpu/ | DocumentFun9077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc0y1u | false | null | t3_1rc0y1u | /r/LocalLLaMA/comments/1rc0y1u/got_800_of_credits_on_a_cloud_platform_for_gpu/ | false | false | self | 0 | null |
What model do you think is on the disc? | 56 | If it's actually an OpenAI model, most likely GPT OSS 20b, though I don't know what quant would fit on a single disc. Maybe something smaller like Qwen 3 8b? | 2026-02-22T23:11:54 | maxwell321 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc0tg4 | false | null | t3_1rc0tg4 | /r/LocalLLaMA/comments/1rc0tg4/what_model_do_you_think_is_on_the_disc/ | false | false | 56 | {'enabled': True, 'images': [{'id': 'o0cb1idro4lg1', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=108&crop=smart&auto=webp&s=60a5c2ed8c9f30a1962b08730987acf42ef00fed', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=216&crop=smart&auto=webp&s=df14149c50a62259e4e2edd3b4b61f7c68b36d78', 'width': 216}, {'height': 522, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=320&crop=smart&auto=webp&s=66ba84c60431d8d0a73ccddec078ba4937dc6a45', 'width': 320}, {'height': 1045, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=640&crop=smart&auto=webp&s=16183a65b4ddf78bd246b14310c69df3b7deed15', 'width': 640}], 'source': {'height': 1558, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?auto=webp&s=a482f8427a39116454582458f6c067f81cd55d5c', 'width': 954}, 'variants': {}}]} | ||
Forked MNN Chat to make it a multilingual interpreted chatroom hotspot | 2 | In short, this is a *human-to-human* chat server that nearby devices can join via a couple QR codes, and it uses the LLM to automatically translate chat messages among the participants' languages.
I added some features to a fork of Alibaba's MNN Chat for Android with a lot of help from Claude mainly because I don't know Kotlin... or even Android development after all these years. I figured I'd base it on MNN Chat because it's already got many of the necessary parts and *fast* on-device inference.
As for *why*... When traveling in a foreign country, there are plenty of reasons you might want to exchange some words with someone who doesn't speak your language. My thoughts included: no handing one phone back and forth, no trying to share a screen, no speech-to-text errors that you can't fix before your words get translated, no spotty mobile data or Wi-Fi in subway stations or out in the mountains, no requirement for a stranger to download an app, and no being stuck with Google Translate.
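The core loop behind this is small: each incoming message gets translated once per distinct recipient language, then fanned out. A hedged sketch of that fan-out (the `translate` callable stands in for whatever MNN inference call the app actually makes; this isn't the fork's literal code):

```python
def broadcast(message, sender_lang, participants, translate):
    """participants maps user -> language code; translate(text, src, dst)
    is the on-device LLM call. Caching per target language means
    N participants does not imply N inference calls."""
    cache = {sender_lang: message}  # sender's own language needs no call
    delivered = {}
    for user, lang in participants.items():
        if lang not in cache:
            cache[lang] = translate(message, sender_lang, lang)
        delivered[user] = cache[lang]
    return delivered

fake = lambda text, src, dst: f"[{dst}] {text}"  # stand-in for the model
print(broadcast("hello", "en", {"a": "en", "b": "ja", "c": "ja"}, fake))
```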
Code and a prebuilt APK: [https://github.com/dpmm99/MNN-Android-Interpreted-Chat-Server?tab=readme-ov-file#fork-dpmm99mnn-android-interpreted-chat-server-readme-mnn-android-interpreted-chat-server](https://github.com/dpmm99/MNN-Android-Interpreted-Chat-Server?tab=readme-ov-file#fork-dpmm99mnn-android-interpreted-chat-server-readme-mnn-android-interpreted-chat-server)
Pictured here, I was using Jan-v3-4B, since that's one I converted to MNN and uploaded to HuggingFace: [https://huggingface.co/DeProgrammer/models?search=mnn](https://huggingface.co/DeProgrammer/models?search=mnn) | 2026-02-22T23:08:51 | https://www.reddit.com/gallery/1rc0qsd | DeProgrammer99 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rc0qsd | false | null | t3_1rc0qsd | /r/LocalLLaMA/comments/1rc0qsd/forked_mnn_chat_to_make_it_a_multilingual/ | false | false | 2 | null | |
0xSero/Kimi-K2.5-PRISM-REAP-72 · Hugging Face | 4 | Kimi K2.5 in just 200B, we definitely need a GGUF :) | 2026-02-22T23:02:07 | https://huggingface.co/0xSero/Kimi-K2.5-PRISM-REAP-72 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rc0ktx | false | null | t3_1rc0ktx | /r/LocalLLaMA/comments/1rc0ktx/0xserokimik25prismreap72_hugging_face/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=108&crop=smart&auto=webp&s=74174ce0793cebfbfcdc27ed038e7b01541b3ed6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=216&crop=smart&auto=webp&s=3f978a2764145c0869917ec24cae6a556e9fb6d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=320&crop=smart&auto=webp&s=11cd1cf188e7607d8a70a112c6e8378a3375931a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=640&crop=smart&auto=webp&s=00e50915cea77c935346358c4a0d518e0e3173f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=960&crop=smart&auto=webp&s=2208784f667ba7e453e25f5130175b8385a21b41', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=1080&crop=smart&auto=webp&s=d8595e74c002c6a6fa1b909e362989c43bc5b73c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?auto=webp&s=6dd0964bb8ea2d379659486df2b1933fc6c628f2', 'width': 1200}, 'variants': {}}]} | |
ggml.ai (the team behind llama.cpp) is joining Hugging Face, projects stay open source | 15 | Hugging Face announced that [ggml.ai](http://ggml.ai), the team behind llama.cpp and ggml, is joining Hugging Face to support long-term sustainability for local AI.
According to the llama.cpp announcement, day-to-day direction stays the same: the projects remain 100% open source and community-driven, and Hugging Face is providing long-term resources and support.
I’m curious what people think this changes in practice.
1. Does this accelerate optimizations and support for new model architectures?
2. Does tighter HF integration help the ecosystem, or increase centralization risk?
3. Any impact on alternative runtimes and independent inference stacks?
What do you think? | 2026-02-22T22:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rc0dra/ggmlai_the_team_behind_llamacpp_is_joining/ | nihal_was_here | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc0dra | false | null | t3_1rc0dra | /r/LocalLLaMA/comments/1rc0dra/ggmlai_the_team_behind_llamacpp_is_joining/ | false | false | self | 15 | null |
In the long run, everything will be local | 112 | I've been of the opinion for a while that, long term, we'll have smart enough open models and powerful enough consumer hardware to run *all* our assistants locally: both chatbots and coding copilots.
https://preview.redd.it/vqzxm46ri4lg1.png?width=3608&format=png&auto=webp&s=22c0fb257d744350f8668301a915aeec2b6653fc
Right now it still feels like there’s a trade-off:
* Closed, cloud models = best raw quality, but vendor lock-in, privacy concerns, latency, per-token cost
* Open, local models = worse peak performance, but full control, no recurring API fees, and real privacy
But if you look at the curve on both sides, it’s hard not to see them converging:
* Open models keep getting smaller, better, and more efficient every few months (quantization, distillation, better architectures). Many 7B–8B models are already good enough for daily use if you care more about privacy/control than squeezing out the last 5% of quality
* Consumer and prosumer hardware keeps getting cheaper and more powerful, especially GPUs and Apple Silicon–class chips. People are already running decent local LLMs with 12–16GB VRAM or optimized CPU-only setups for chat and light coding
At some point, the default might flip: instead of why would you run this locally?, the real question becomes why would you ship your entire prompt and codebase to a third-party API if you don’t strictly need to? For a lot of use cases (personal coding, offline agents, sensitive internal tools), a strong local open model plus a specialized smaller model might be more than enough | 2026-02-22T22:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rc00nj/in_the_long_run_everything_will_be_local/ | tiguidoio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc00nj | false | null | t3_1rc00nj | /r/LocalLLaMA/comments/1rc00nj/in_the_long_run_everything_will_be_local/ | false | false | 112 | null | |
Made openclaw's terminal UI actually show images from tool results instead of placeholders | 1 | If you run openclaw with local models and use the TUI, you know tool results with images just show [image/png 3kb (omitted)]. Got tired of it.
The image data gets sanitized away before reaching the renderer, but file paths survive through a text protocol. Hooked those paths up to pi-tui's Image class (already in the deps, just unused). Added terminal capability detection so it only fires in terminals that support it.
Five changes across four files. PR's up if anyone wants to look at it or try it out.
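There's no universal way to query a terminal for image support, so detection usually comes down to env-var sniffing. A rough sketch of the kind of check described (the specific heuristics for kitty's graphics protocol and iTerm2/WezTerm inline images are my assumptions about common terminals, not openclaw's exact logic):

```python
def supports_inline_images(env):
    """Heuristic capability check: returns True only for terminals known
    to implement an inline-image protocol (kitty graphics protocol, or
    iTerm2-style inline images), based on environment variables."""
    term = env.get("TERM", "")
    program = env.get("TERM_PROGRAM", "")
    if "kitty" in term or "ghostty" in term:
        return True
    if program in ("iTerm.app", "WezTerm"):
        return True
    return False  # plain xterm & friends: keep the text placeholder

print(supports_inline_images({"TERM": "xterm-kitty"}))     # -> True
print(supports_inline_images({"TERM": "xterm-256color"}))  # -> False
```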
Screenshot: https://raw.githubusercontent.com/ademczuk/MenuVision/master/docs/openclaw-tui-inline-images.png | 2026-02-22T22:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rc00do/made_openclaws_terminal_ui_actually_show_images/ | snaptastic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc00do | false | null | t3_1rc00do | /r/LocalLLaMA/comments/1rc00do/made_openclaws_terminal_ui_actually_show_images/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=108&crop=smart&auto=webp&s=e6fcf651048176d7ade999c2ffb8ee7844b195bc', 'width': 108}, {'height': 174, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=216&crop=smart&auto=webp&s=01322cc1fd4152bc0ddb8909ff111245e47e8d07', 'width': 216}, {'height': 258, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=320&crop=smart&auto=webp&s=f5760269ac4f220b5783bb501266abdac70b175e', 'width': 320}, {'height': 517, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=640&crop=smart&auto=webp&s=b1c9088bb50549883d60ef5c0a3697ece17dea4c', 'width': 640}, {'height': 776, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=960&crop=smart&auto=webp&s=ad5ecef16efb66ad76b2c228d3bd9431dbb5e9af', 'width': 960}, {'height': 873, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=1080&crop=smart&auto=webp&s=129022c1179db4e9d8392b77c6403c3e570416ba', 'width': 1080}], 'source': {'height': 1393, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?auto=webp&s=26712e8fe47f557d4bddd2ac01c82735ebe3ef58', 'width': 1722}, 'variants': {}}]} |
AI sentiment dropped 34% in 120 days. The backlash isn't about technology. It's about mediocrity. | 1 | [removed] | 2026-02-22T22:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rbzk21/ai_sentiment_dropped_34_in_120_days_the_backlash/ | Upstairs-Pin4239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbzk21 | false | null | t3_1rbzk21 | /r/LocalLLaMA/comments/1rbzk21/ai_sentiment_dropped_34_in_120_days_the_backlash/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=108&crop=smart&auto=webp&s=d21e4cb2be5b9e89b357b523f6718589af4e0a42', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=216&crop=smart&auto=webp&s=7c8e41ea6d4790d047d40247b74312225861b56e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=320&crop=smart&auto=webp&s=8b7820981ef2e99d30cc9875768b8b50a616236f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=640&crop=smart&auto=webp&s=2cf2ed2d0a598617830bb20ef916f72791cb6ab0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=960&crop=smart&auto=webp&s=49274c3d4421fec27c40da9875043bcffac9bbfe', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=1080&crop=smart&auto=webp&s=bc65f6024a46609d4d339d309e14806d8bac3cbd', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?auto=webp&s=f52716e93d06b178457a2b08b164a0673207d1d7', 'width': 2400}, 'variants': {}}]} |