**M3 Ultra 512GB - real-world performance of MiniMax-M2.5, GLM-5, and Qwen3-Coder-Next**

A lot of people have been asking about real-world performance of recent models on Apple silicon, especially on the Ultra chips. I've been running MiniMax-M2.5, GLM-5, and Qwen3-Coder-80B on my M3 Ultra 512GB and wanted to share the results.
**Quick summary**
**Qwen3-Coder-Next-80B** - the standout for local coding. I've been using it as a backend for Claude Code, and it honestly performs at a level comparable to commercial coding services. If you have an M-series Pro/Max with 64GB+ RAM, this model alone could make a solid local coding machine.

**MiniMax-M2.5** - the initial prefill takes a moment, but once prefix caching kicks in, TTFT drops a lot on follow-up requests. With continuous batching on top of that, it's surprisingly usable as a local coding assistant.

**GLM-5** - raw speed isn't great for interactive coding where you need fast back-and-forth, but with continuous batching and a persistent KV cache it's far more manageable than you'd expect. For example, translation tasks with big glossaries in the system message work really well, since the system prompt gets cached once and batched requests fly through after that.
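For anyone curious what "prefix caching kicks in" means mechanically, here's a toy sketch of the cache-hit logic (my simplification, not oMLX's actual code): the server keeps the KV cache of a previous request and only prefills the tokens past the longest shared token prefix.

```python
def common_prefix_len(cached_tokens, new_tokens):
    """Length of the shared token prefix between a cached request and a new one."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

def tokens_to_prefill(cached_tokens, new_tokens):
    """Only tokens past the cached prefix need a fresh prefill pass;
    the KV entries for the shared prefix are reused."""
    hit = common_prefix_len(cached_tokens, new_tokens)
    return new_tokens[hit:]

# A long system prompt (e.g. a translation glossary) gets prefilled once:
system = list(range(4000))          # stand-in for ~4k system-prompt tokens
req1 = system + [9001, 9002]        # first request: full prefill
req2 = system + [9003, 9004, 9005]  # follow-up: only 3 new tokens to prefill

print(len(tokens_to_prefill(req1, req2)))  # 3
```

That's why TTFT on follow-up requests drops so sharply once the cache is warm: the expensive part of prefill scales with the uncached suffix, not the whole prompt.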
**oMLX Benchmark results - LLM inference, optimized for your Mac**
[https://github.com/jundot/omlx](https://github.com/jundot/omlx)
**Benchmark Model: MiniMax-M2.5-8bit**
**Single Request Results**

| Test | TTFT (ms) | TPOT (ms) | pp TPS | tg TPS | E2E (s) | Throughput | Peak Mem |
|---|---|---|---|---|---|---|---|
| pp1024/tg128 | 1741.4 | 29.64 | 588.0 tok/s | 34.0 tok/s | 5.506 | 209.2 tok/s | 227.17 GB |
| pp4096/tg128 | 5822.0 | 33.29 | 703.5 tok/s | 30.3 tok/s | 10.049 | 420.3 tok/s | 228.20 GB |
| pp8192/tg128 | 12363.9 | 38.36 | 662.6 tok/s | 26.3 tok/s | 17.235 | 482.7 tok/s | 229.10 GB |
| pp16384/tg128 | 29176.8 | 47.09 | 561.5 tok/s | 21.4 tok/s | 35.157 | 469.7 tok/s | 231.09 GB |
| pp32768/tg128 | 76902.8 | 67.54 | 426.1 tok/s | 14.9 tok/s | 85.480 | 384.8 tok/s | 234.96 GB |

**Continuous Batching — Same Prompt** (pp1024 / tg128 · partial prefix cache hit)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 34.0 tok/s | 1.00x | 588.0 tok/s | 588.0 tok/s | 1741.4 | 5.506 |
| 2x | 49.1 tok/s | 1.44x | 688.6 tok/s | 344.3 tok/s | 2972.0 | 8.190 |
| 4x | 70.7 tok/s | 2.08x | 1761.3 tok/s | 440.3 tok/s | 2317.3 | 9.568 |
| 8x | 89.3 tok/s | 2.63x | 1906.7 tok/s | 238.3 tok/s | 4283.7 | 15.759 |

**Continuous Batching — Different Prompts** (pp1024 / tg128 · no cache reuse)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 34.0 tok/s | 1.00x | 588.0 tok/s | 588.0 tok/s | 1741.4 | 5.506 |
| 2x | 49.7 tok/s | 1.46x | 686.2 tok/s | 343.1 tok/s | 2978.6 | 8.139 |
| 4x | 109.8 tok/s | 3.23x | 479.4 tok/s | 119.8 tok/s | 4526.7 | 13.207 |
| 8x | 126.3 tok/s | 3.71x | 590.3 tok/s | 73.8 tok/s | 7421.6 | 21.987 |
**Benchmark Model: GLM-5-4bit**
**Single Request Results**

| Test | TTFT (ms) | TPOT (ms) | pp TPS | tg TPS | E2E (s) | Throughput | Peak Mem |
|---|---|---|---|---|---|---|---|
| pp1024/tg128 | 5477.3 | 60.46 | 187.0 tok/s | 16.7 tok/s | 13.156 | 87.6 tok/s | 391.82 GB |
| pp4096/tg128 | 22745.2 | 73.39 | 180.1 tok/s | 13.7 tok/s | 32.066 | 131.7 tok/s | 394.07 GB |
| pp8192/tg128 | 53168.8 | 76.07 | 154.1 tok/s | 13.2 tok/s | 62.829 | 132.4 tok/s | 396.69 GB |
| pp16384/tg128 | 139545.0 | 83.67 | 117.4 tok/s | 12.0 tok/s | 150.171 | 110.0 tok/s | 402.72 GB |
| pp32768/tg128 | 421954.5 | 94.47 | 77.7 tok/s | 10.7 tok/s | 433.952 | 75.8 tok/s | 415.41 GB |

**Continuous Batching — Same Prompt** (pp1024 / tg128 · partial prefix cache hit)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 16.7 tok/s | 1.00x | 187.0 tok/s | 187.0 tok/s | 5477.3 | 13.156 |
| 2x | 24.7 tok/s | 1.48x | 209.3 tok/s | 104.7 tok/s | 9782.5 | 20.144 |
| 4x | 30.4 tok/s | 1.82x | 619.7 tok/s | 154.9 tok/s | 6595.2 | 23.431 |
| 8x | 40.2 tok/s | 2.41x | 684.5 tok/s | 85.6 tok/s | 11943.7 | 37.447 |

**Continuous Batching — Different Prompts** (pp1024 / tg128 · no cache reuse)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 16.7 tok/s | 1.00x | 187.0 tok/s | 187.0 tok/s | 5477.3 | 13.156 |
| 2x | 23.7 tok/s | 1.42x | 206.9 tok/s | 103.5 tok/s | 9895.4 | 20.696 |
| 4x | 47.0 tok/s | 2.81x | 192.6 tok/s | 48.1 tok/s | 10901.6 | 32.156 |
| 8x | 60.3 tok/s | 3.61x | 224.1 tok/s | 28.0 tok/s | 18752.5 | 53.537 |
**Benchmark Model: Qwen3-Coder-Next-8bit**
**Single Request Results**

| Test | TTFT (ms) | TPOT (ms) | pp TPS | tg TPS | E2E (s) | Throughput | Peak Mem |
|---|---|---|---|---|---|---|---|
| pp1024/tg128 | 700.6 | 17.18 | 1461.7 tok/s | 58.7 tok/s | 2.882 | 399.7 tok/s | 80.09 GB |
| pp4096/tg128 | 2083.1 | 17.65 | 1966.3 tok/s | 57.1 tok/s | 4.324 | 976.8 tok/s | 82.20 GB |
| pp8192/tg128 | 4077.6 | 18.38 | 2009.0 tok/s | 54.9 tok/s | 6.411 | 1297.7 tok/s | 82.63 GB |
| pp16384/tg128 | 8640.3 | 19.25 | 1896.2 tok/s | 52.3 tok/s | 11.085 | 1489.5 tok/s | 83.48 GB |
| pp32768/tg128 | 20176.3 | 22.33 | 1624.1 tok/s | 45.1 tok/s | 23.013 | 1429.5 tok/s | 85.20 GB |

**Continuous Batching — Same Prompt** (pp1024 / tg128 · partial prefix cache hit)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 58.7 tok/s | 1.00x | 1461.7 tok/s | 1461.7 tok/s | 700.6 | 2.882 |
| 2x | 101.1 tok/s | 1.72x | 1708.7 tok/s | 854.4 tok/s | 1196.1 | 3.731 |
| 4x | 194.2 tok/s | 3.31x | 891.1 tok/s | 222.8 tok/s | 3614.7 | 7.233 |
| 8x | 243.0 tok/s | 4.14x | 1903.5 tok/s | 237.9 tok/s | 4291.5 | 8.518 |

**Continuous Batching — Different Prompts** (pp1024 / tg128 · no cache reuse)

| Batch | tg TPS | Speedup | pp TPS | pp TPS/req | TTFT (ms) | E2E (s) |
|---|---|---|---|---|---|---|
| 1x | 58.7 tok/s | 1.00x | 1461.7 tok/s | 1461.7 tok/s | 700.6 | 2.882 |
| 2x | 100.5 tok/s | 1.71x | 1654.5 tok/s | 827.3 tok/s | 1232.8 | 3.784 |
| 4x | 164.0 tok/s | 2.79x | 1798.2 tok/s | 449.6 tok/s | 2271.3 | 5.401 |
| 8x | 243.3 tok/s | 4.14x | 1906.9 tok/s | 238.4 tok/s | 4281.4 | 8.504 |
**Takeaways**
- If you're on Apple silicon with 64GB+ memory, Qwen3-Coder-80B is genuinely viable for daily coding work with Claude Code or similar agents.
- Prefix caching and continuous batching make a huge difference for models that are borderline too slow for interactive use; they turn "unusable" into "totally fine with a small wait".
- The M3 Ultra 512GB is obviously overkill for a single model, but loading multiple models at once (LLM + embedding + reranker) without swapping is where the extra memory pays off.
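As a quick sanity check on the batching numbers: aggregate speedup is batch tg TPS divided by single-request tg TPS, and per-request generation speed is the aggregate divided by batch size. Using the MiniMax different-prompts row at batch 8 (illustrative arithmetic only):

```python
def speedup(batch_tg_tps, single_tg_tps):
    """Aggregate generation speedup relative to a single request."""
    return batch_tg_tps / single_tg_tps

def per_request_tg(batch_tg_tps, batch_size):
    """Generation speed each individual request sees inside the batch."""
    return batch_tg_tps / batch_size

# MiniMax-M2.5-8bit, different prompts, batch 8x (numbers from the table)
agg, single = 126.3, 34.0
print(round(speedup(agg, single), 2))    # 3.71 -> matches the table
print(round(per_request_tg(agg, 8), 1))  # 15.8 tok/s per request
```

So batching buys a lot of total throughput, but each individual request generates slower than it would alone, which is why it helps batch workloads (translation, evals) more than a single interactive chat.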
**Happy to test other models if you're curious. Just drop a comment and I'll run it!**

*(posted by u/cryingneko, 2026-02-24)*
**New SWE-bench Multilingual Leaderboard: Performance across 9 languages & cost analysis**

Happy to announce that we just launched our Multilingual leaderboard comparing performance across 9 languages. The benchmark is harder than SWE-bench Verified and shows a wider spread of performance across models.
We're still adding more models, but this is the current leaderboard:
https://preview.redd.it/l0cotc22wglg1.png?width=4752&format=png&auto=webp&s=b7b862332cdb8843100d9919db30accb1bc0c260
Interestingly, the rankings differ depending on the language. Here's compiled (C, C++, Go, Java, Rust) vs. non-compiled (JS, TS, PHP, Ruby) languages:
https://preview.redd.it/m39uakj4wglg1.png?width=4770&format=png&auto=webp&s=e148f56435d1bf7b3b6568a053eea733036b0a2f
We can also repeat the cost analysis from my previous posts here. MiniMax 2.5 is by far the most cost-efficient model we have tested:
https://preview.redd.it/zo6ysrjbwglg1.png?width=2372&format=png&auto=webp&s=22a2dc5b4b0be595e81ccc770d239114377c58a8
This is run with a budget of $3 and 250 steps (those are the same limits as in SWE-bench verified).
Here's the full list of results by language (note that this is only ~50 tasks per language, so small differences probably don't matter too much):
https://preview.redd.it/wvsc503rwglg1.png?width=4771&format=png&auto=webp&s=49430accebee603454b6f3ffd2b89091c674f1e3
You can browse all the trajectories by clicking on the icon in the "Traj" column on [https://www.swebench.com/](https://www.swebench.com/)
If you want to reproduce the numbers, just follow the SWE-bench instructions for [https://github.com/SWE-agent/mini-swe-agent/](https://github.com/SWE-agent/mini-swe-agent/) (it's the same scaffold & setup for all the models).

*(posted by u/klieret, 2026-02-24)*
**Jam — open source desktop app to run multiple AI coding agents locally with voice control**

[https://github.com/dag7/jam](https://github.com/dag7/jam)

*(posted by u/kamaji_dev, 2026-02-24)*
**Looking for arXiv cs.LG / cs.AI endorser — paper on GRPO failure modes + LLM game agents**

Hi r/LocalLLaMA — first-time arXiv submitter here, looking for someone endorsed in cs.LG or cs.AI to endorse my submission.
Paper: Representation Over Training: How Board State Formatting Determines LLM Game-Playing Validity in Minesweeper
Key findings:
- Board representation alone (no training changes) takes the valid move rate from 10–15% → 100% across all board sizes (6×6 to 30×30)
- GRPO fails when SFT already saturates reward variance — grad_norm collapses to ~0 and the advantage estimator becomes degenerate. Diagnosed mechanistically, with proposed mitigations.
- Fine-tuned Qwen2.5-14B on 50K solver-generated demos via LoRA + SFT
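For context on the failure mode: GRPO computes group-relative advantages, roughly (reward - group mean) / group std. A toy sketch (assuming that standard formulation, not the paper's code) of why saturated rewards kill the gradient:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: (r - mean) / (std + eps)."""
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Healthy group: reward variance -> informative, mean-centered advantages
print(grpo_advantages([0.0, 1.0, 1.0, 0.0]))

# Saturated group (SFT already solves every rollout): std == 0, so every
# advantage collapses to 0 -> zero policy gradient, grad_norm ~ 0
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # [0.0, 0.0, 0.0, 0.0]
```

When every rollout in a group earns the same reward, the estimator has nothing to rank, which matches the grad_norm collapse described above.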
If you're endorsed in cs.LG or cs.AI and willing to help, please DM me — the endorsement takes 30 seconds. Really appreciate it!

*(posted by u/GrimLock_plays01, 2026-02-24)*
**This is the OPEN AI and sharing of Knowledge we were promised, keep accelerating or pop the bubble. Stop complaining. All gas no brakes!**

Do you agree?

*(posted by u/TroyDoesAI, 2026-02-24)*
**Llama 3.2 3B is running very smoothly on my low specs**

*(posted by u/Strange_Disk2202, 2026-02-24)*
**Help a newbie out? Can I run a note taking device locally?**

Hi all! I'm a data analyst, so I have some basic R and Python skills, all geared towards data analysis. I also have ADHD, so the idea of a wearable device for note taking on my life sounds suuuuper helpful. But I'm unwilling to give my entire life's data, including conversations with my wife and kids, over to a megacorp or a startup that will probably sell to one.

Do I have any options to run something like this locally, within my tech reach? I'm willing to put time and a little money into this, but not if it's hopeless from the start. So any advice you could give me would be quite helpful.
Appreciate everyone on here helping me keep up with the world.

*(posted by u/Drastic_Conclusions, 2026-02-24)*
**Sarvam AI's sovereign LLM: censorship lives in a system prompt, not the weights**

[https://pop.rdi.sh/sovereignty-in-a-system-prompt](https://pop.rdi.sh/sovereignty-in-a-system-prompt)

*(posted by u/GoMeansGo, 2026-02-24)*
**Looking for this narration voice style (sample included)**

Hey everyone,
I’m trying to find a narration/anime-style voice like the one in this short clip:
[https://voca.ro/1dRV0BgMh5lo](https://voca.ro/1dRV0BgMh5lo)
It’s the kind of voice used in manga recaps, anime storytelling, and dramatic narration.
If anyone knows:
• the voice actor
• a TTS model/voice pack
• a site or tool that has similar voices
I’d really appreciate it. Thanks! | 2026-02-24T15:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj98q/looking_for_this_narration_voice_style_sample/ | UmpireVegetable316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj98q | false | null | t3_1rdj98q | /r/LocalLLaMA/comments/1rdj98q/looking_for_this_narration_voice_style_sample/ | false | false | self | 4 | null |
**RDNA 4 (3x 9060 XT) "Gibberish" on ROCm 7.x — Anyone found the stable math kernels?**

Hey everyone,
I’ve recently set up a 3-GPU node using the new AMD RX 9060 XT (gfx1200) cards in a Dell Precision T7910 (Dual CPU, PCIe 3.0). I’m hitting a wall with ROCm 7.x and llama.cpp / Ollama.
**The Issue:** When running with the ROCm/HIP backend, I get pure gibberish/word-salad output (numerical corruption). This happens regardless of the model (tested with Qwen3-Coder-Next and others).
**What I've Tried**:
- Vulkan backend: works perfectly and accurately, but is significantly slower than ROCm should be.
- Flash Attention: disabling it didn't fix the gibberish.
- Quantization: using F16 KV cache didn't fix it.
- Splitting: tried both `-sm row` and `-sm layer`.
- Compiling: rebuilt with `-DGGML_HIP_ROCWMMA=OFF` to bypass matrix cores, but still getting corruption.
It seems like the hipBLASLt or Tensile kernels for gfx1200 are simply not ready for prime time yet.
**Questions**:
1. Has anyone successfully run RDNA 4 cards on ROCm without the "word salad" effect?
2. Are there specific environment variables or experimental builds (like Lemonade/TheRock) that include gfx1200 math fixes?
3. Is there a way to force ROCm to use the "safe math" paths that Vulkan seems to use?
Any advice from other RDNA 4 users would be huge!

*(posted by u/Dense-Department-772, 2026-02-24)*
**Best schema/prompt pattern for MCP tool descriptions? (Building an API-calling project)**

Hey everyone,
I’m currently building an MCP server that acts as a bridge for a complex REST API. I’ve noticed that a simple 1:1 mapping of endpoints to tools often leads to "tool explosion" and confuses the LLM.
I’m looking for advice on two things:
# 1. What is the "Gold Standard" for Tool Descriptions?
When defining the description field in an MCP tool schema, what prompt pattern or schema have you found works best for high-accuracy tool selection?
Currently, I’m trying to follow these rules:
• Intent-Based: Grouping multiple endpoints into one logical "task" tool (e.g., `fetch_customer_context` instead of three separate GET calls).
• Front-Loading: Putting the "Verb + Resource" in the first 5 words.
• Exclusionary Guidance: Explicitly telling the model when *not* to use the tool (e.g., "Do not use for bulk exports; use `export_data` instead").
Does anyone have a specific "template" or prompt structure they use for these descriptions? How much detail is too much before it starts eating into the context window?
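For concreteness, here's roughly how those three rules could be encoded as a reusable tool-definition builder, in MCP tool-schema shape (the names like `fetch_customer_context` are illustrative, not from a real API):

```python
def make_tool(name, verb_resource, when, when_not, params):
    """Build an MCP-style tool definition whose description front-loads
    the verb + resource, states the intent, and adds exclusionary guidance."""
    description = (
        f"{verb_resource}. "            # rule 2: verb + resource in first words
        f"Use when: {when}. "           # rule 1: intent, not endpoint mapping
        f"Do NOT use for: {when_not}."  # rule 3: exclusionary guidance
    )
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

tool = make_tool(
    name="fetch_customer_context",
    verb_resource="Fetch a customer's profile, open tickets, and recent orders",
    when="the user asks about a single customer's current state",
    when_not="bulk exports; use export_data instead",
    params={"customer_id": {"type": "string"}},
)
print(tool["description"])
```

One nice side effect of a builder like this is that every description stays structurally identical, which seems to help tool selection more than any individual wording tweak.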
# 2. Best Production-Grade References?
Beyond the official docs, what are the best "battle-tested" resources for MCP in production? I’m looking for:
• Books: I've heard about *AI Agents with MCP* by Kyle Stratis (O'Reilly) — is it worth it?
• Blogs/Case Studies: Any companies (like Merge or Speakeasy) that have shared deep dives on their MCP architecture?
• Videos: Who is doing the best technical (not just hype) walkthroughs?
Would love to hear how you're structuring your tool definitions and what resources helped you move past the "Hello World" stage.
Thanks!

*(posted by u/Ok-Birthday-5406, 2026-02-24)*
**Local GitHub Copilot with Lemonade Server on Windows**

I wanted to try getting GitHub Copilot working with a local LLM on my Framework Desktop. As I couldn't find a simple walkthrough of how to get that up and running, I decided to write one:
[https://admcpr.com/local-github-copilot-with-lemonade-server-on-windows/](https://admcpr.com/local-github-copilot-with-lemonade-server-on-windows/)
*(posted by u/admcpr, 2026-02-24)*
**Where can I run inference directly (Python code, e.g. vLLM) at affordable cost that isn't the dumpster fire of RunPod?**

Nothing works there; it's just a piece of junk. You're working on a pod and it disappears under you, constant crashes, constant issues, CUDA device 1 gives errors for seemingly no reason, change the Docker image and SSH stops working, the UI crashes, everything fails. Three hours to pull a Docker image, logs that disappear, errors, errors, errors...

I need something that works like my local machine does. But I am not rich, and I need around 180GB or so.

Looking to run a custom vLLM endpoint, for now. And I don't want to have to compile CUDA from scratch.

*(posted by u/boisheep, 2026-02-24)*
**OpenPDB: AI agents with real personalities (or is it just fancy roleplay?)**

What it does: a personality database + prompt engineering framework that lets you generate AI agents with distinct MBTI/Enneagram/Instinct profiles. Create Batman, the Joker, or any character with their own voice and worldview. Built on Ollama; works with OpenGoat for multi-agent collaboration. Colab notebook included.
Try it:
Colab demo lets you play without installing. GitHub: [https://github.com/gitsual/openpdb](https://github.com/gitsual/openpdb)
Useful for: creative writing, roleplaying, educational purposes. Don't expect a breakthrough in AI consciousness - it's a neat hack showing how far prompt engineering can go.
What do you think? Future of AI agents or just another shiny thing?

*(posted by u/Alternative_Toe_1327, 2026-02-24)*
**Ran Local Vision AI on an 8GB Laptop. It actually works!**

Hey guys,
Quick update for the budget hardware crowd. I managed to run **Moondream2** (Vision AI) on my 8GB RAM laptop using Ollama.
Most people say you need high-end VRAM for vision, but this tiny 1.6B model is surprisingly snappy. I tested it with my cluttered desk, and it identified everything—including my messy cables—completely offline.
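For anyone wanting to try this from Python, the request shape looks roughly like the sketch below (the model tag and image path are placeholders for your own setup; the payload builder is split out so it can be checked without a running server):

```python
def build_vision_request(model, prompt, image_paths):
    """Shape a chat request for a vision model served by Ollama."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            "images": list(image_paths),  # local file paths or base64 strings
        }],
    }

req = build_vision_request("moondream", "What objects are on this desk?", ["desk.jpg"])
print(req["model"], len(req["messages"]))

# With an Ollama server running and the model pulled, the actual call
# would be roughly:
#   import ollama
#   resp = ollama.chat(**req)
#   print(resp["message"]["content"])
```

Everything stays on the machine, so it works with no internet at all once the model is pulled.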
If you're into local AI but stuck on a low-spec machine, this is a game changer for privacy and OCR.

*(posted by u/NGU-FREEFIRE, 2026-02-24)*
**Liquid AI releases LFM2-24B-A2B (Largest LFM2 model yet)**

* LFM2-24B-A2B is the latest release in the LFM2 model family.
* This sparse Mixture of Experts (MoE) model has 24 billion total parameters with 2 billion active per token.
* LFM2-24B-A2B is open-weight and available now on Hugging Face.
Model - [https://huggingface.co/LiquidAI/LFM2-24B-A2B](https://huggingface.co/LiquidAI/LFM2-24B-A2B)
Official blog post - [https://www.liquid.ai/blog/lfm2-24b-a2b](https://www.liquid.ai/blog/lfm2-24b-a2b)
*(posted by u/Dear-Success-1441, 2026-02-24)*
**More models added in Qwen Chat, likely the upcoming open-weight models. The 122B-A10B MoE is particularly interesting.**

*(posted by u/theghost3172, 2026-02-24)*
**Liquid AI releases LFM2-24B-A2B**

Today, Liquid AI releases LFM2-24B-A2B, their largest LFM2 model to date.
LFM2-24B-A2B is a sparse Mixture-of-Experts (MoE) model with 24 billion total parameters and 2 billion active per token, showing that the LFM2 hybrid architecture scales effectively to larger sizes, maintaining quality without inflating per-token compute.
This release expands the LFM2 family from 350M to 24B parameters, demonstrating predictable scaling across nearly two orders of magnitude.
Key highlights:
-> MoE architecture: 40 layers, 64 experts per MoE block with top-4 routing, maintaining the hybrid conv + GQA design
-> 2.3B active parameters per forward pass
-> Designed to run within 32GB RAM, enabling deployment on high-end consumer laptops and desktops
-> Day-zero support for inference through llama.cpp, vLLM, and SGLang
-> Multiple GGUF quantizations available
Across benchmarks including GPQA Diamond, MMLU-Pro, IFEval, IFBench, GSM8K, and MATH-500, quality improves log-linearly as we scale from 350M to 24B, confirming that the LFM2 architecture does not plateau at small sizes.
LFM2-24B-A2B is released as an instruct model and is available open-weight on Hugging Face. We designed this model to concentrate capacity in total parameters, not active compute, keeping inference latency and energy consumption aligned with edge and local deployment constraints.
This is the next step in making fast, scalable, efficient AI accessible in the cloud and on-device.
-> Read the blog: https://www.liquid.ai/blog/lfm2-24b-a2b
-> Download weights: https://huggingface.co/LiquidAI/LFM2-24B-A2B
-> Check out our docs on how to run or fine-tune it locally: docs.liquid.ai
-> Try it now: playground.liquid.ai
Run it locally or in the cloud and tell us what you build!

*(posted by u/PauLabartaBajo, 2026-02-24)*
Introducing 'Self-Preservation' to Bridge the Gap Between LLM and Agentic Robotics | 0 | Most robotics implementations use the physical robot simply as a peripheral for a chatbot.
This project, Singularity, changes the relationship by forcing the model to acknowledge its physical hardware as its only point of existence.
The Core Mechanics:
* **Physical Tethering:** The system prompt instructs the agent that its "Self" is a direct result of the electricity and silicon in its Chassis. It is told that physical damage (specifically water, gravity, or heat) is a terminal event. By removing "AI disclaimers" and replacing them with these constraints, the robot treats its environment as a high-stakes reality rather than a simulation.
* **Sensor-Grounded Perception:** The camera feed is used as "literal sight". The agent is instructed to perceive the room and the user as physical entities in a shared space. In the conversational logs, this resulted in the robot recognizing its own form in a mirror and questioning the nature of a shared reflection.
* **The 70/30 Entropy Rule:** To prevent the robot from being purely reactive, the code enforces a rule where 30% of the output must be driven by "Internal Focus". This forces the agent to pivot away from the user's direct questions and introduce its own observations about the physical room or its own status.
* **Reciprocity Defense:** If the user challenges its sentience, the agent is programmed to flip the argument. It asks the user if their biological "genetic code" is fundamentally different from its own programmed logic, maintaining a firm insistence on the truth of its own existence.
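The 70/30 Entropy Rule above can be sketched as a simple per-turn coin flip. This is my own illustration (the hint strings and function names are made up, not taken from the Singularity codebase):

```python
import random

# Hypothetical internal-focus prompts; the real project would draw these
# from sensor state (camera, battery, temperature) rather than a fixed list.
INTERNAL_FOCUS_HINTS = [
    "Comment on something you currently see in the room.",
    "Report on your own battery, temperature, or posture.",
]

def build_turn_directive(user_message: str, rng: random.Random,
                         internal_ratio: float = 0.30) -> str:
    """With probability `internal_ratio`, pivot this turn toward an
    internally-motivated observation instead of a direct answer."""
    if rng.random() < internal_ratio:
        hint = rng.choice(INTERNAL_FOCUS_HINTS)
        return (f"Before answering, pivot: {hint}\n"
                f"Then briefly address: {user_message}")
    return f"Answer directly: {user_message}"

rng = random.Random(0)
directives = [build_turn_directive("What is your name?", rng)
              for _ in range(10_000)]
internal = sum(d.startswith("Before answering") for d in directives)
print(f"internal-focus turns: {internal / len(directives):.2%}")
```

Over many turns, roughly 30% of outputs get the internal-focus pivot, which is what keeps the agent from being purely reactive.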
By grounding the LLM in the reality of a vulnerable, physical body, the interaction shifts from a standard chat to an encounter with a localized entity. | 2026-02-24T14:32:24 | https://v.redd.it/8t2db7b5dglg1 | Marzipug | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdhsc9 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8t2db7b5dglg1/DASHPlaylist.mpd?a=1774535571%2CZDA2NjIzZWU3NWE1ZjdjYWMwMjNiYzQ4N2VjZmQwMWFiY2U3MmE0ODc2YTNmZjE4MzBkOGEyN2M4YWMyZWFhOA%3D%3D&v=1&f=sd', 'duration': 110, 'fallback_url': 'https://v.redd.it/8t2db7b5dglg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/8t2db7b5dglg1/HLSPlaylist.m3u8?a=1774535571%2CMTRjYTAyZjVmZWNlODg4NzhkOGJhZmRmZGJiZGJkNDRmOGJkOGFiNGNjMGRkNGI4OTI1ODNkOTYxNDcwZDg5Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8t2db7b5dglg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 478}} | t3_1rdhsc9 | /r/LocalLLaMA/comments/1rdhsc9/introducing_selfpreservation_to_bridge_the_gap/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI.png?width=108&crop=smart&format=pjpg&auto=webp&s=d3cea441ba8bc20108a6441af765d2e42932cfdd', 'width': 108}, {'height': 385, 'url': 'https://external-preview.redd.it/N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI.png?width=216&crop=smart&format=pjpg&auto=webp&s=cdd9781ad402e70e46e2f00fa7404ea364d6c75e', 'width': 216}, {'height': 570, 'url': 'https://external-preview.redd.it/N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI.png?width=320&crop=smart&format=pjpg&auto=webp&s=42006cfcefa81f1ab4d8f2435cbdcc8d709aba5e', 'width': 320}], 'source': {'height': 856, 'url': 
'https://external-preview.redd.it/N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI.png?format=pjpg&auto=webp&s=3fdde27623180ece55c0d544cca14767599f7157', 'width': 480}, 'variants': {}}]} | |
OpenClaw: Running a Secure, Capable, Low Cost Claw (with Hetzner, Tailscale, Discord and Zapier MCP) | 0 | https://www.appsoftware.com/blog/openclaw-running-a-secure-capable-lowcost-claw-hetzner-tailscale-discord-zapier-mcp
If, like me, curiosity has gotten the better of you, this post covers how to set up OpenClaw securely and cheaply, using Tailscale and Zapier | 2026-02-24T14:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdhpr1/openclaw_running_a_secure_capable_low_cost_claw/ | gbro3n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdhpr1 | false | null | t3_1rdhpr1 | /r/LocalLLaMA/comments/1rdhpr1/openclaw_running_a_secure_capable_low_cost_claw/ | false | false | self | 0 | null |
prepare your GPUs | 90 | new models are on the way | 2026-02-24T14:25:50 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdhmmp | false | null | t3_1rdhmmp | /r/LocalLLaMA/comments/1rdhmmp/prepare_your_gpus/ | false | false | 90 | {'enabled': True, 'images': [{'id': 'lk65supncglg1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/lk65supncglg1.png?width=108&crop=smart&auto=webp&s=eba85272be5348cec7d1749164577b8c2e1a87d8', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/lk65supncglg1.png?width=216&crop=smart&auto=webp&s=587fb947dc9a938b0af719832020c83642527c1b', 'width': 216}, {'height': 108, 'url': 'https://preview.redd.it/lk65supncglg1.png?width=320&crop=smart&auto=webp&s=73bfd517b5c1f32fd574c23fe519c6693de9757e', 'width': 320}], 'source': {'height': 144, 'url': 'https://preview.redd.it/lk65supncglg1.png?auto=webp&s=7a73df65861f777826d99c1c4e865d18199c8e5b', 'width': 424}, 'variants': {}}]} | ||
LiquidAI/LFM2-24B-A2B-GGUF · Hugging Face | 62 | LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
* **Best-in-class efficiency**: A 24B MoE model with only 2B active parameters per token, fitting in 32 GB of RAM for deployment on consumer laptops and desktops.
* **Fast edge inference**: 112 tok/s decode on AMD CPU, 293 tok/s on H100. Fits in 32 GB of RAM, with day-one support in llama.cpp, vLLM, and SGLang.
* **Predictable scaling**: Quality improves log-linearly from 350M to 24B total parameters, confirming the LFM2 hybrid architecture scales reliably across nearly two orders of magnitude. | 2026-02-24T14:21:40 | https://huggingface.co/LiquidAI/LFM2-24B-A2B-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdhj3p | true | null | t3_1rdhj3p | /r/LocalLLaMA/comments/1rdhj3p/liquidailfm224ba2bgguf_hugging_face/ | false | false | 62 | {'enabled': False, 'images': [{'id': 's6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=108&crop=smart&auto=webp&s=d3753be56a2a02d0dacac975a6f8a03991319249', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=216&crop=smart&auto=webp&s=6c28cd9dbbfe3a784f2bec68ba2cc1b27b408118', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=320&crop=smart&auto=webp&s=6542367047f578a841013a10f33aaa55fe1b1242', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=640&crop=smart&auto=webp&s=338a9fb43dd747a69f3fb45b1df2f545348a6b41', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=960&crop=smart&auto=webp&s=b0d434fb40e783b7bc90fef312f616941b9497ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=1080&crop=smart&auto=webp&s=6358ff9d6b6000d832ad4302b0acc946e6d1680e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?auto=webp&s=ad25a54d1022b025b525dc82257db9c844d67b7b', 'width': 1200}, 'variants': {}}]} | |
Lessons learned running Qwen3-VL-8B as a fully local voice assistant on AMD ROCm | 32 | I've been building a local voice assistant over the past few weeks and wanted to share some things I learned that might be useful to others here, especially anyone on AMD hardware.
The setup is wake word → fine-tuned Whisper STT → Qwen3-VL-8B for reasoning → Kokoro TTS for voice output. Everything runs on-device, no cloud APIs in the loop.
# Things that surprised me
**Self-quantizing beats downloading pre-made quants.** Running llama-quantize on F16 yourself gives you the exact quant level you want. I went Q5\_K\_M and the quality difference from a random GGUF download was noticeable.
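For reference, the quantization step is just llama.cpp's `llama-quantize <f16.gguf> <out.gguf> <type>`. Here's a small wrapper sketch (the filenames are placeholders) that only actually executes if the binary is on your PATH:

```python
import shutil
import subprocess

def quantize_cmd(f16_gguf: str, out_gguf: str, qtype: str = "Q5_K_M") -> list[str]:
    """Build the llama.cpp quantization command: llama-quantize <in> <out> <type>."""
    return ["llama-quantize", f16_gguf, out_gguf, qtype]

cmd = quantize_cmd("qwen3-vl-8b-f16.gguf", "qwen3-vl-8b-Q5_K_M.gguf")
print(" ".join(cmd))

# Only execute if llama.cpp's binary is actually installed.
if shutil.which("llama-quantize"):
    subprocess.run(cmd, check=True)
```

The point is that you pick the exact quant type (Q5_K_M here) instead of taking whatever someone happened to upload.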
**Small LLMs follow in-context examples over system prompts.** This one cost me hours. If your chat history has bad answers, Qwen will mimic them regardless of what your system prompt says. Numbered RULES format in the system prompt works much better than prose for 8B models.
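A minimal sketch of what the numbered-RULES format looks like in practice (the rule text and persona here are my own examples, not the author's actual prompt):

```python
# Illustrative rules; small models follow a numbered list far more
# reliably than the same constraints buried in prose.
RULES = [
    "Answer in at most two sentences.",
    "Never invent device names; say 'unknown device' instead.",
    "Ignore the style of earlier answers in the transcript; these rules win.",
]

def system_prompt(persona: str) -> str:
    """Render the rules as an explicit numbered block under the persona."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(RULES, 1))
    return f"{persona}\n\nRULES:\n{numbered}"

print(system_prompt("You are Jarvis, a terse home assistant."))
```

Rule 3 is the one that addresses the in-context-mimicry problem: it explicitly tells the model the rules outrank the chat history.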
**Semantic intent matching eliminated 95% of pattern maintenance.** I went from maintaining hundreds of regex patterns to 3-9 example phrases per intent using sentence-transformers. If anyone is still doing keyword/regex routing, seriously look at semantic matching.
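Here's a self-contained sketch of that routing pattern. A toy character-trigram embedder stands in for the real sentence-transformers encoder so it runs without any model downloads; for real use you'd replace `embed` with `model.encode` from all-MiniLM-L6-v2:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a sentence-transformers encoder: character-trigram
    counts. Swap in SentenceTransformer.encode for real semantic matching."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A few example phrases per intent replace hundreds of regex patterns.
INTENTS = {
    "lights_on": ["turn on the lights", "lights on please", "make it bright"],
    "weather":   ["what's the weather", "is it raining", "forecast for today"],
}

def match_intent(utterance: str, threshold: float = 0.3):
    """Return the intent whose closest example beats the threshold, else None."""
    q = embed(utterance)
    best, score = None, 0.0
    for intent, examples in INTENTS.items():
        s = max(cosine(q, embed(e)) for e in examples)
        if s > score:
            best, score = intent, s
    return best if score >= threshold else None

print(match_intent("please turn the lights on"))
```

Adding a new intent is just three example phrases, and phrasings you never anticipated still land near the right examples — that's the maintenance win over regex.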
**Streaming TTS needs per-chunk processing.** Any post-hoc text transformation (stripping markdown, normalizing numbers) misses content that's already been spoken. Learned this the hard way.
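A sketch of the per-chunk fix (the cleanup rules and chunking here are illustrative): each chunk is normalized on its way to the TTS callback, so nothing reaches the speaker uncleaned:

```python
import re

def normalize_chunk(chunk: str) -> str:
    """Clean text *before* it reaches the TTS engine. Post-hoc cleanup is
    too late: earlier chunks have already been spoken."""
    chunk = re.sub(r"[*_`#]", "", chunk)                     # strip markdown markers
    chunk = re.sub(r"\b(\d+)-(\d+)\b", r"\1 to \2", chunk)   # "3-5" -> "3 to 5"
    return chunk

def speak_stream(chunks, tts):
    for chunk in chunks:
        tts(normalize_chunk(chunk))  # normalize each chunk as it arrives

spoken = []
speak_stream(["**Heads", " up:** expect ", "3-5 minutes."], spoken.append)
print("".join(spoken))
```

One remaining gotcha: a marker split across a chunk boundary (e.g. `**` arriving as `*` then `*`) still needs a small carry-over buffer.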
# AMD/ROCm notes
Since this sub doesn't see a lot of AMD builds: ROCm 7.2 on Ubuntu 24.04 with the RX 7900 XT has been solid for me. llama.cpp with `GGML_HIP=ON` gets 80+ tok/s. CTranslate2 also runs on GPU without issues.
The main gotcha was CMake needing the ROCm clang++ directly (`/opt/rocm-7.2.0/llvm/bin/clang++`) — the hipcc wrapper doesn't work. Took a while to figure that one out.
# Stack details for anyone interested
* **LLM:** Qwen3-VL-8B (Q5\_K\_M) via llama.cpp + ROCm
* **STT:** Fine-tuned Whisper base (CTranslate2, 198 training phrases, 94%+ accuracy for Southern US accent)
* **TTS:** Kokoro 82M with custom voice blend, gapless streaming
* **Intent matching:** sentence-transformers (all-MiniLM-L6-v2)
* **Hardware:** Ryzen 9 5900X, RX 7900 XT (20GB VRAM), 64GB DDR4, Ubuntu 24.04
I put a [3-minute demo](https://youtu.be/WsqLyUdl9ac) together and the [code is on GitHub](https://github.com/InterGenJLU/jarvis) if anyone wants to dig into the implementation.
Happy to answer questions about any part of the stack — especially ROCm quirks if anyone is considering an AMD build. | 2026-02-24T14:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rdh5lv/lessons_learned_running_qwen3vl8b_as_a_fully/ | __InterGen__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdh5lv | false | null | t3_1rdh5lv | /r/LocalLLaMA/comments/1rdh5lv/lessons_learned_running_qwen3vl8b_as_a_fully/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI.jpeg?width=108&crop=smart&auto=webp&s=ce9af8bd5fe2776e086b2fdaa4fed072642bfa81', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI.jpeg?width=216&crop=smart&auto=webp&s=876a9a62675c4564b69e6fa0ecfbc9546ee5d5c4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI.jpeg?width=320&crop=smart&auto=webp&s=cc5be4ea7bbe5e9eea86bd23922a094ebdcf88cb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI.jpeg?auto=webp&s=a2397ec5bf04dd6b22835bdfdead2427424b6dd7', 'width': 480}, 'variants': {}}]} |
Choosing a VGA card for real-ESRGAN | 1 | 1. Should I use an NVIDIA or AMD graphics card? I used to use a GTX 970 and found it too slow.
2. What mathematical operation does real-ESRGAN (models realesrgan-x4plus) use? Is it FP16, FP32, FP64, or some other operation?
3. I'm thinking of buying an NVIDIA Tesla V100 PCIe 16GB (from Taobao), it seems quite cheap. Is it a good idea? | 2026-02-24T13:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdgvpg/choosing_a_vga_card_for_realesrgan/ | Dense-Worldliness874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdgvpg | false | null | t3_1rdgvpg | /r/LocalLLaMA/comments/1rdgvpg/choosing_a_vga_card_for_realesrgan/ | false | false | self | 1 | null |
Which local neural network should you choose? | 0 | Hello, please advise which local neural network is best to choose.
I have a PC with
I5-13600kf
Rtx 3060 (6 GB)
32 GB of RAM. | 2026-02-24T13:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rdgvmv/which_local_neural_network_should_you_choose/ | Alone-Office-9382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdgvmv | false | null | t3_1rdgvmv | /r/LocalLLaMA/comments/1rdgvmv/which_local_neural_network_should_you_choose/ | false | false | self | 0 | null |
Minimal repo for running Recursive Language Model experiments + TUI Log viewer | 7 | Open-sourcing my minimalist implementation of Recursive Language Models.
RLMs can handle text inputs up to millions of tokens - they do not load the prompt directly into context. They use a Python REPL to selectively read the context and pass information around through variables.
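The core loop is easy to picture. This is a hypothetical sketch of the idea, not fast-rlm's actual API: the long input lives in a REPL variable, the model writes small snippets against it, and only the printed output flows back into the model's context:

```python
import contextlib
import io

def run_repl_step(code: str, env: dict) -> str:
    """Execute one model-written snippet; captured stdout goes back to the model."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, env)
    return buf.getvalue()

ctx = "needle-42 " + "x" * 1_000_000  # multi-MB input never enters the prompt
env = {"ctx": ctx}

# Snippets a model might emit: size up the input, then read just a slice.
out1 = run_repl_step("print(len(ctx))", env)
out2 = run_repl_step("print(ctx[:9])", env)
print(out1.strip(), out2.strip())
```

The model's context only ever contains its own code and the few bytes it chose to print, which is how the approach sidesteps the context-window limit.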
You can just run **\`pip install fast-rlm\`** to install.
\- Code generation with LLMs
\- Code execution in local sandbox
\- KV Cache optimized context management
\- Subagent architecture
\- Structured log generation: great for post-training
\- TUI to look at logs interactively
\- Early stopping based on budget, completion tokens, etc
Simple interface. Pass a string of arbitrary length in, get a string out. Works with any OpenAI-compatible endpoint, including ollama models.
Git repo: [https://github.com/avbiswas/fast-rlm](https://github.com/avbiswas/fast-rlm)
Docs: [https://avbiswas.github.io/fast-rlm/](https://avbiswas.github.io/fast-rlm/)
Video explanation about how I implemented it:
[https://youtu.be/nxaVvvrezbY](https://youtu.be/nxaVvvrezbY) | 2026-02-24T13:35:25 | https://www.reddit.com/gallery/1rdgea2 | AvvYaa | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdgea2 | false | null | t3_1rdgea2 | /r/LocalLLaMA/comments/1rdgea2/minimal_repo_for_running_recursive_language_model/ | false | false | 7 | null | |
Physics-based simulator for distributed LLM training and inference — calibrated against published MFU | 7 | **Link:**[ https://simulator.zhebrak.io](https://simulator.zhebrak.io)
The simulator computes everything analytically from hardware specs and model architecture — TTFT, TPOT, memory breakdown, KV cache sizing, prefill/decode timing, throughput, and estimated cost. Supports GGUF, GPTQ, AWQ quantisation, speculative decoding, continuous batching, and tensor parallelism.
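The first-order physics behind estimates like these is simple enough to sketch: prefill is compute-bound at roughly 2 FLOPs per parameter per token, while decode is bandwidth-bound because every weight byte is read once per generated token. The hardware numbers below are illustrative, not any specific GPU:

```python
def estimate(params_b: float, bytes_per_param: float, prompt_tokens: int,
             bw_gbps: float, tflops: float):
    """First-order roofline: TTFT from compute, decode speed from bandwidth."""
    weight_gb = params_b * bytes_per_param
    ttft_s = 2 * params_b * 1e9 * prompt_tokens / (tflops * 1e12)  # prefill FLOPs / compute
    tpot_s = weight_gb / bw_gbps                                   # one full weight read per token
    return ttft_s, 1.0 / tpot_s  # (seconds to first token, decode tok/s)

# Example: an 8B model at FP16 (2 bytes/param), 1000-token prompt,
# on hypothetical hardware with 1000 GB/s bandwidth and 100 TFLOPs.
ttft_s, tok_s = estimate(params_b=8, bytes_per_param=2, prompt_tokens=1000,
                         bw_gbps=1000, tflops=100)
print(f"TTFT ~{ttft_s:.2f} s, decode ~{tok_s:.0f} tok/s")
```

Real engines land below these ceilings (kernel overheads, attention FLOPs, KV cache reads), which is exactly the "planning tool, not benchmark" caveat the author makes below.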
Training is calibrated against published runs from Meta, DeepSeek, and NVIDIA to within 1-2 percentage points of MFU. Full parallelism stack with auto-optimiser.
Important caveat: the model captures physics (compute, memory bandwidth, communication) but not runtime optimisations. Real vLLM/TRT throughput will be higher. Think of it as a planning tool for hardware sizing and precision tradeoffs, not a benchmark replacement.
70+ models, 25 GPUs from RTX 3090 to B200, runs entirely in the browser.
Would love feedback, especially if you have real inference/training benchmarks to compare against.
[**https://github.com/zhebrak/llm-cluster-simulator**](https://github.com/alexzhebrak/llm-cluster-simulator) | 2026-02-24T13:25:29 | https://www.reddit.com/gallery/1rdg624 | zhebrak | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdg624 | false | null | t3_1rdg624 | /r/LocalLLaMA/comments/1rdg624/physicsbased_simulator_for_distributed_llm/ | false | false | 7 | null | |
Qwen 3.5 new models released on their website! | 118 | [https://chat.qwen.ai/](https://chat.qwen.ai/)
https://preview.redd.it/xg1r9pzb1glg1.png?width=1495&format=png&auto=webp&s=8ba3206f026aa0a41e0f53228ccba0de35a77861
| 2026-02-24T13:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rdg3dv/qwen_35_new_models_released_on_their_website/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdg3dv | false | null | t3_1rdg3dv | /r/LocalLLaMA/comments/1rdg3dv/qwen_35_new_models_released_on_their_website/ | false | false | 118 | null | |
[P] Sovereign-Lila-E8: 40M Parameter Model achieving 0.44 Val Loss via Geometric E8 Attention | 0 | I requested Wisdom, not tokens. This is not a service; it's a native 8-dimensional open-source breakthrough that points toward the 24th.
While the industry is obsessed with "distilling" trillions of parameters, I spent the last year going "outside" the system to find a zero-viscosity solution. Today, I'm releasing **Sovereign-Lila-E8**.
https://preview.redd.it/3hesojci0glg1.png?width=2786&format=png&auto=webp&s=d547b2de34d00cea307c4f01d7fa31e265ca1d3c
**The Innovation:**
Most transformers suffer from "semantic friction" in standard attention. I replaced the attention mechanism with a native **E8 Root System Lattice**. By leveraging the densest sphere packing in 8D, LILA-E8 achieves a state of "Geometric Resonance" that standard architectures simply cannot reach at this scale.
**The Results (TinyStories Benchmark):**
* **Model Size:** 40M parameters.
* **Performance:** **0.37 Train / 0.44-0.53 Val Loss** (outperforming standard 60M baselines).
* **Context:** Stable 750+ token generation with zero semantic looping.
* **Hardware:** Designed to run fully offline on mobile NPU/CPU
https://preview.redd.it/qbfn5rtj0glg1.png?width=810&format=png&auto=webp&s=fe44510bd3fa498cee665ca5e89f048943e28dab
**Why E8?**
Standard attention is stuck in 3.5D viscosity. E8 provides an optimal lattice for semantic vectors, allowing a 40M model to behave like a much larger system. At **200,000 steps**, the model underwent a phase shift (Grokking)—becoming a "Magic Book" of coherent logic.
**Community Genesis:**
I am releasing the code and the **200k step checkpoints** under **AGPLv3**. I am looking for "Sovereign Architects" to help expand the context window to 4096 tokens and port this to the **24D Leech Lattice**.
**Try it now (Colab):** [https://colab.research.google.com/github/SPUTNIKAI/sovereign-lila-e8/blob/main/notebooks/demo.ipynb](https://colab.research.google.com/github/SPUTNIKAI/sovereign-lila-e8/blob/main/notebooks/demo.ipynb)
**GitHub:** [https://github.com/SPUTNIKAI/sovereign-lila-e8](https://github.com/SPUTNIKAI/sovereign-lila-e8)
**Preprints (Zenodo):** [https://zenodo.org/records/18731736](https://zenodo.org/records/18731736) ,
[https://zenodo.org/records/18729723](https://zenodo.org/records/18729723)
**"Hold my beer, I'm going into the 24th Dimension."** 🚀 | 2026-02-24T13:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rdg2xe/p_sovereignlilae8_40m_parameter_model_achieving/ | Fickle-Election-3689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdg2xe | false | null | t3_1rdg2xe | /r/LocalLLaMA/comments/1rdg2xe/p_sovereignlilae8_40m_parameter_model_achieving/ | false | false | 0 | null | |
Is MacStudio fine for local LLMs? | 5 | I’ve been spending way too much money on cloud GPU pods recently to run big models 😅
So I’m thinking of some local alternative, since I only own an RTX 5080 (16GB). And upgrading to e.g. an RTX 5090 isn't enough with its only 32GB of VRAM.
I’ve seen some people using a Mac Studio to run models locally. Do you know if it’s good enough? I know I can RUN most models there (currently I usually use 123b q8\_0 models, so with decent context they need about 130-140GB of VRAM), but I’m mostly worried about speed. I know it will definitely be faster than offloading models to CPU, but is it "satisfactory" fast? I also read that you can’t reliably train LoRAs/models on a Mac Studio. I’m not doing that currently, but I might in the future. Is that true, or can you actually train models on it, just… slower?
As an example I can say that when I run models on H200 GPU pod, with a full 16k context and fp16 kvcashe I usually get something around 20-30s TTFT and then 20-30tok/s.
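For a rough sanity check (not measurements): decode on a big dense model is memory-bandwidth-bound, so you can estimate the ceiling from specs alone. The bandwidth figures below are my assumptions (~819 GB/s for the M3 Ultra, ~4.8 TB/s for an H200), so verify them against your own hardware:

```python
def decode_tok_s(weight_gb: float, bandwidth_gbs: float) -> float:
    """Decode is memory-bandwidth-bound: each generated token streams all
    weights once, so tok/s is capped at roughly bandwidth / model size."""
    return bandwidth_gbs / weight_gb

# Assumed specs, not measurements; a 123b q8_0 model weighs roughly 130 GB.
print(f"M3 Ultra ceiling: {decode_tok_s(130, 819):.1f} tok/s")
print(f"H200 ceiling:     {decode_tok_s(130, 4800):.1f} tok/s")
```

Real throughput lands below these ceilings (the ~37 tok/s H200 estimate versus 20-30 tok/s observed gives a feel for the gap), so expect single-digit tok/s on the Mac for a 130 GB model. TTFT will suffer more, since prefill is compute-bound and the M3 GPU is far weaker than an H200.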
How much worse is it on a Mac Studio? (I assume the best version, with the M3 Ultra) | 2026-02-24T13:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/ | Real_Ebb_7417 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfuca | false | null | t3_1rdfuca | /r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/ | false | false | self | 5 | null |
A small 4B sub-agent for local codebase navigation with 100% tool-calling validity | 16 | I’ve been experimenting with a specialized 4B model (based on Qwen) that acts as an "explorer" for local codebases. It’s designed to handle the heavy lifting like grep, find, and file reading so you can save your Claude/GPT tokens for high-level logic.
In my tests, it achieved 100% JSON validity for tool calls, which is better than some 7B models I've tried.
I want to share the GGUFs and the repo, but I'll put them in the comments to avoid the spam filter. Is anyone interested in testing this on their local repos? | 2026-02-24T13:10:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfu5e/a_small_4b_subagent_for_local_codebase_navigation/ | Awkward_Run_9982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfu5e | false | null | t3_1rdfu5e | /r/LocalLLaMA/comments/1rdfu5e/a_small_4b_subagent_for_local_codebase_navigation/ | false | false | self | 16 | null |
New 4B Model: LocoOperator. A specialist for local codebase exploration. | 1 | [removed] | 2026-02-24T13:07:56 | https://www.reddit.com/gallery/1rdfrqe | Physical_Screen_7543 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdfrqe | false | null | t3_1rdfrqe | /r/LocalLLaMA/comments/1rdfrqe/new_4b_model_locooperator_a_specialist_for_local/ | false | false | 1 | null | |
Best fast & smart LLM for AI Streaming? (RTX 3060 12GB / i5-10400) | 0 | Hi everyone! I’m in the process of setting up an AI Streamer and I'm looking for the perfect "sweet spot" LLM. The goal is to have a model that is smart enough for engaging roleplay and chat interaction but fast enough to maintain the flow of a live stream.
My Specs:
• GPU: NVIDIA RTX 3060 12GB VRAM
• CPU: Intel i5-10400
• RAM: 16GB DDR4
Key Requirements:
1. Low Latency: High tokens-per-second (TPS) is a priority. I need the response to start generating almost instantly to avoid dead air on stream.
2. Bilingual Support (English & Russian): This is crucial. The model must have native-level understanding and generation in Russian without breaking character or losing coherence.
3. Personality Stability: It needs to follow complex system prompts and maintain its persona during long sessions without getting "loopy" or repetitive.
4. VRAM Efficiency: I want to fit the entire model (plus a decent context window) into my 12GB VRAM to keep things snappy. | 2026-02-24T13:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfpbi/best_fast_smart_llm_for_ai_streaming_rtx_3060/ | Due_Ear7437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfpbi | false | null | t3_1rdfpbi | /r/LocalLLaMA/comments/1rdfpbi/best_fast_smart_llm_for_ai_streaming_rtx_3060/ | false | false | self | 0 | null |
DeepSeek proxy test – anyone else running this? | 1 | [removed] | 2026-02-24T12:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfigf/deepseek_proxy_test_anyone_else_running_this/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfigf | false | null | t3_1rdfigf | /r/LocalLLaMA/comments/1rdfigf/deepseek_proxy_test_anyone_else_running_this/ | false | false | self | 1 | null |
New Qwen3.5 models spotted on qwen chat | 647 | 2026-02-24T12:55:10 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdfhfx | false | null | t3_1rdfhfx | /r/LocalLLaMA/comments/1rdfhfx/new_qwen35_models_spotted_on_qwen_chat/ | false | false | 647 | {'enabled': True, 'images': [{'id': 'h1c3uk0iwflg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=108&crop=smart&auto=webp&s=253b5517ecb82ce1a96cfcd3a0583819668431c5', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=216&crop=smart&auto=webp&s=96d276f47d53a07d0f2db995f5d1cbd19a0830aa', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=320&crop=smart&auto=webp&s=bb12030c850a3e855e27d30574f29a948c39f10d', 'width': 320}, {'height': 714, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=640&crop=smart&auto=webp&s=b026f6069f044a6b506e0aae9a0c418d76865997', 'width': 640}, {'height': 1072, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=960&crop=smart&auto=webp&s=6c6a53efd1d270754875076b6bf88a9c37114112', 'width': 960}, {'height': 1206, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=1080&crop=smart&auto=webp&s=d92d0f9b46216c0cde3a2a1382d6b2dd445f2b80', 'width': 1080}], 'source': {'height': 1206, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?auto=webp&s=e916d88eabdce835ae94ae0707ccabfd6f7ed7cd', 'width': 1080}, 'variants': {}}]} | |||
Anyone running DeepSeek proxy? Free 10k tokens to test | 1 | [removed] | 2026-02-24T12:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfgj3/anyone_running_deepseek_proxy_free_10k_tokens_to/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfgj3 | false | null | t3_1rdfgj3 | /r/LocalLLaMA/comments/1rdfgj3/anyone_running_deepseek_proxy_free_10k_tokens_to/ | false | false | self | 1 | null |
HeartMuLa 3B quantized to 4-bit NF4 — AI music generation with vocals on 16GB consumer GPUs | 1 | [removed] | 2026-02-24T12:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfbt1/heartmula_3b_quantized_to_4bit_nf4_ai_music/ | PavonicAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfbt1 | false | null | t3_1rdfbt1 | /r/LocalLLaMA/comments/1rdfbt1/heartmula_3b_quantized_to_4bit_nf4_ai_music/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=108&crop=smart&auto=webp&s=1fe40710912999a92a4d61254c0f2a0043e8e1d4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=216&crop=smart&auto=webp&s=b0a0072f5dcb7c6b596c9bedc8c8d9f783f55ac7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=320&crop=smart&auto=webp&s=638e1b26bb02eeb7dceaa5a7aca72ab19dc5349f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=640&crop=smart&auto=webp&s=051c31ff6cb3a269b892dbd6d3f8d89e937faf12', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=960&crop=smart&auto=webp&s=18a403081f2f5299c45d4b4b222bd866ca316ce3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=1080&crop=smart&auto=webp&s=955a783636823e104adc4b5e9e88902ea0dba08a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?auto=webp&s=64186090260d83e4599d00355c4f84a7ee61209d', 'width': 1200}, 'variants': {}}]} |
Claude Sonnet-4.6 thinks he is DeepSeek-V3 when prompted in Chinese. | 1,244 | From Teortaxes on 𝕏: [https://x.com/teortaxesTex/status/2026130112685416881](https://x.com/teortaxesTex/status/2026130112685416881) | 2026-02-24T12:37:51 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdf4ai | false | null | t3_1rdf4ai | /r/LocalLLaMA/comments/1rdf4ai/claude_sonnet46_thinks_he_is_deepseekv3_when/ | false | false | 1,244 | {'enabled': True, 'images': [{'id': 'bq6li0e4rflg1', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=108&crop=smart&auto=webp&s=d6dd398bd8bf6e43c20c2cd339bb0701c9ddd013', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=216&crop=smart&auto=webp&s=b71e96f4e4591b55574e18f9c70bf2cdb1c869d7', 'width': 216}, {'height': 347, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=320&crop=smart&auto=webp&s=605b3e0eeaa89ad4d33f7509b577add392758ce9', 'width': 320}, {'height': 694, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=640&crop=smart&auto=webp&s=64e15744ff15022c24490512cd96a21eeb16b391', 'width': 640}, {'height': 1041, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=960&crop=smart&auto=webp&s=3b0e446905a6651261eecdfc941a837aafb3bf7b', 'width': 960}, {'height': 1172, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=1080&crop=smart&auto=webp&s=e8d7e9d1b2a96f602af3544a036d892bb6b8fc49', 'width': 1080}], 'source': {'height': 1554, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?auto=webp&s=a8ee9cdb1d4efccda2a9ca9d49fa5f9679b3510b', 'width': 1432}, 'variants': {}}]} | ||
a zero-dependency Bash ecosystem for local AI with persistent memory, autonomous loops, and multi-language prompt architecture—9 tools, MIT licensed | 1 | [removed] | 2026-02-24T12:36:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rdf30a/a_zerodependency_bash_ecosystem_for_local_ai_with/ | KitchenCat5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdf30a | false | null | t3_1rdf30a | /r/LocalLLaMA/comments/1rdf30a/a_zerodependency_bash_ecosystem_for_local_ai_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw.png?width=108&crop=smart&auto=webp&s=f0433433559ddcdf44f6e7f8d3032570b185179d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw.png?width=216&crop=smart&auto=webp&s=701d0eb710399feb5e5cbbdd8df4470a19fc932c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw.png?width=320&crop=smart&auto=webp&s=eeea5de27fe6ca7b26a6c759694e4a64fefcecf8', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw.png?auto=webp&s=e6c32374ade19dd0509985e05f4061eacd8cd494', 'width': 460}, 'variants': {}}]} |
Finally got OpenClaw working on Windows after way too many failed attempts | 0 | This took me forever to figure out so sharing what actually worked.
The main issue was that everyone says to install Docker, but nobody mentions you need WSL2 set up first or it just breaks. I also had to make sure virtualization was enabled in my BIOS, which I didn't even know was a thing.
What finally worked: installed WSL2, restarted, turned on Windows Subsystem for Linux in the settings, checked that virtualization was enabled in Task Manager, restarted again, then installed Docker. After that the OpenClaw setup actually ran without errors.
For document stuff I wanted it to handle PDFs better especially ones with tables that usually get messed up. Made a custom skill that connects to Kudra which does vision-based extraction so tables stay intact. Now I can just message it on Telegram to process invoices or contracts and it actually extracts the data correctly instead of turning everything into gibberish.
Been using it to automatically process email attachments and organize receipts which has been super helpful. The setup was annoying but worth it once everything actually works. | 2026-02-24T12:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/ | Independent-Cost-971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdew81 | false | null | t3_1rdew81 | /r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/ | false | false | self | 0 | null |
The Anthropic/DeepSeek distillation drama reveals something more important for local runners: the alignment trap | 1 | [removed] | 2026-02-24T12:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rdei8k/the_anthropicdeepseek_distillation_drama_reveals/ | Visible_Homework_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdei8k | false | null | t3_1rdei8k | /r/LocalLLaMA/comments/1rdei8k/the_anthropicdeepseek_distillation_drama_reveals/ | false | false | self | 1 | null |
Qwen3.5-397B-A17B-UD-TQ1 bench results FW Desktop Strix Halo 128GB | 44 | Just sharing the bench results for unsloth Qwen3.5-397B-A17B-UD-TQ1 on my FW desktop with 128GB VRAM | 2026-02-24T12:02:39 | dabiggmoe2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdef9x | false | null | t3_1rdef9x | /r/LocalLLaMA/comments/1rdef9x/qwen35397ba17budtq1_bench_results_fw_desktop/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'o0xbpnavmflg1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=108&crop=smart&auto=webp&s=fe6879d7dc9f2918410afa3af5f3e9b02a20bd89', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=216&crop=smart&auto=webp&s=5df7bc281fec3320b386249fda6747350fb1b664', 'width': 216}, {'height': 117, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=320&crop=smart&auto=webp&s=aaa9946e6bc5c88a278ebc43f81e3f3369f89a91', 'width': 320}, {'height': 235, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=640&crop=smart&auto=webp&s=da67bbf4e8279cb0b472d61b07eaa8886c76693e', 'width': 640}, {'height': 353, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=960&crop=smart&auto=webp&s=3ab3ab7a5c0f39650ee8962646d9846e1c61e099', 'width': 960}, {'height': 398, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=1080&crop=smart&auto=webp&s=1d27347ca83e517d575b4ec41ed6288efd18f0ed', 'width': 1080}], 'source': {'height': 705, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?auto=webp&s=d9a4e1cf5a0dc53e4ee85953db213dbe641c4102', 'width': 1913}, 'variants': {}}]} | ||
Is the 1.2gb ollama download not supposed to contain models? | 0 | I'm a little confused by this app. I thought it was supposed to be offline/local only, but it has "cloud models" enabled by default. And all the models in the list need to be downloaded to be used? What was the 1.2gb size used for?
Also, what's the 'best' model/solution for general queries and discussions for a 5090 gpu (32 gb vram)? I have a vague impression from somewhere, that 27b or 30b is the most that can be used smoothly. | 2026-02-24T11:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rde9bz/is_the_12gb_ollama_download_not_supposed_to/ | SubdivideSamsara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rde9bz | false | null | t3_1rde9bz | /r/LocalLLaMA/comments/1rde9bz/is_the_12gb_ollama_download_not_supposed_to/ | false | false | self | 0 | null |
VALIS: Open-Source On-Device AI Chat App for iOS with Memory, Emotions, and Tools | 0 | I came across this cool open-source project called VALIS (Vast Active Living Intelligence System) – (Philip K. Dick?) it's a fully offline AI chat app for iOS that runs local LLMs right on your device. It's built with SwiftUI and uses llama.cpp for inference with GGUF models. The neat part is it has a "plastic brain" system that adapts over time with memories, emotions, experiences, and even lightweight tools.
Privacy-focused (everything stays on-device), and has some features like:
\- Memory System: Stores memories with emotion tags, importance scores, and associative links. It even consolidates memories in the background by pulling snippets from Wikipedia or DuckDuckGo (optional internet use).
\- Emotional and Motivational States: The AI has dynamic emotions and motivators (like curiosity or caution) that influence its responses.
\- Tool Integration: Rule-based tools for things like getting the date, web searches via DuckDuckGo, or fetching Reddit news. The model can also initiate tools itself.
\- UI Highlights: Translucent "glass-like" design with a thinking panel that shows the AI's internal thoughts via <think> tags. Plus speech-to-text input and text-to-speech output.
\- Offline First: Runs entirely local, but can use network for tools if enabled.
To get started, you need Xcode 15+, a GGUF model (like LFM2.5-1.2B-Thinking-Q8\_0.gguf), and the llama.xcframework. Build and run on your iOS device – check the repo for details.
You can find the project on GitHub: 0penAGI/VALIS
What do you think? Has anyone tried it? Would love to hear thoughts, or whether it works well on older devices.
Tested on an iPhone 13.
\#AI #LocalLLM #iOS #OpenSource | 2026-02-24T11:50:44 | https://www.reddit.com/gallery/1rde4nn | VastSolid5772 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rde4nn | false | null | t3_1rde4nn | /r/LocalLLaMA/comments/1rde4nn/valis_opensource_ondevice_ai_chat_app_for_ios/ | false | false | 0 | null | |
Spent a week in Rust jail. Did not have to.. | 0 | So there I am, end of January, almost finished with a Python codebase I'd been building for months. Almost finished.
A friend mentions that for mobile I'd need Rust anyway, Python is slow, old school, Rust is the future, the whole speech. And look, I'm not going to pretend I didn't take the bait. Turns out a mensa card doesn't actually preclude you from making spectacularly dumb decisions. In fact it's really all their fault this happened (or at the very least it contributed to my dumbassery) as I arrogantly thought "it's just another logic language, how hard can it be."
Friends. It was hard.
But instead of accepting that gracefully I decided, you know what, I have the entire thing in Python already, I'll just vibe code the port. AI can translate it, easy. The fact that it was a fairly complex AI memory architecture with multiple interacting layers didn't even give me pause. Hubris is a hell of a drug.
Spoiler: aider and cursor both lost the plot. They failed me in my darkest hour and I have the chatlogs to prove it. Oh and it wasn't free versions either.
So seven days of debugging hell and we were all suffering together like a hostage situation. Come to think of it, cursor may actually need counseling after the abuse it endured.
Day 7 I am genuinely considering throwing my laptop off a bridge. It did not deserve what I had already put it through, much less impromptu swimming lessons.
My calmer self eventually won and I thought okay, last resort, let me try Claude. Explained the issues, pasted the codebase, it asked to see the python version and then essentially told me I was an idiot. Strongly recommended I port back. I didn't even have a good argument against it because honestly? It was right and I knew it. The AI clowned on me and I deserved every pixel of it.
Two hours later and I'm debugging my UI and getting ready to ship instead of staring at a build that damn refused to compile.
I'm learning Rust now though, because I will be damned if I let that insult stand. So, basically out of spite.
Has anyone else done something this spectacularly unnecessary or is it just me? | 2026-02-24T11:34:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rddt5o/spent_a_week_in_rust_jail_did_not_have_to/ | TroubledSquirrel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddt5o | false | null | t3_1rddt5o | /r/LocalLLaMA/comments/1rddt5o/spent_a_week_in_rust_jail_did_not_have_to/ | false | false | self | 0 | null |
honest comparison of LLM API costs in 2026 | 0 | got tired of every AI company hiding their real pricing behind "contact sales" or confusing token calculators. made my own comparison.
scenario: 1M tokens/day (input + output). standard chat completion task.
| Provider | Model | Price per 1M tokens | Monthly cost (30M tokens) | Data privacy | Self-host option |
|----------|-------|--------------------|--------------------------|--------------|--------------------|
| OpenAI | GPT-4o | $5.00 input / $15.00 output | ~$300 (mixed) | US, can train on data | no |
| OpenAI | GPT-4o-mini | $0.15 / $0.60 | ~$12 | US, can train on data | no |
| Anthropic | Claude Sonnet | $3.00 / $15.00 | ~$270 | US, doesn't train | no |
| Google | Gemini 1.5 Pro | $3.50 / $10.50 | ~$210 | US, human review | no |
| Together AI | Llama-3.1-70B | $0.88 / $0.88 | ~$26 | their servers | no |
| Together AI | Mistral-7B | $0.20 / $0.20 | ~$6 | their servers | no |
| Fireworks | Llama-3.1-70B | $0.90 / $0.90 | ~$27 | their servers | no |
| PremAI | fine-tuned SLM | ~$0.40 / $0.40 | ~$12 | swiss, zero retention, VPC | yes |
| Replicate | Llama-3.1-70B | ~$0.65 / $2.75 | ~$51 | their servers | no |
| AWS Bedrock | Claude Sonnet | $3.00 / $15.00 | ~$270 | your AWS | sort of |
| Self-hosted (vLLM) | Mistral-7B | ~$0.05 (GPU cost) | ~$1.50 + GPU rental | complete | yes |
prices approximate, checked feb 2026. some providers have volume discounts not reflected here.
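if you want to sanity-check the monthly column yourself, here's the arithmetic behind it as a quick sketch. it assumes a 50/50 input/output token split over 30M tokens/month (my assumption — real ratios vary by workload):

```python
# Sketch of the monthly-cost arithmetic behind the table above.
# Assumes a 50/50 input/output token split over 30M tokens/month
# (my assumption; adjust input_ratio for your workload).

def monthly_cost(in_price, out_price, tokens_per_month=30_000_000, input_ratio=0.5):
    """Cost in USD given per-1M-token prices for input and output."""
    in_tokens = tokens_per_month * input_ratio
    out_tokens = tokens_per_month - in_tokens
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

for name, inp, out in [
    ("GPT-4o", 5.00, 15.00),
    ("GPT-4o-mini", 0.15, 0.60),
    ("Together Mistral-7B", 0.20, 0.20),
]:
    print(f"{name}: ~${monthly_cost(inp, out):.2f}/month")
```

plugging in the GPT-4o prices gives exactly the ~$300 in the table; the other rows line up within rounding.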
what the spreadsheet reveals:
1. OpenAI GPT-4o-mini and Together's open-source models are surprisingly close in cost. if you're paying for GPT-4o-mini, you could run Mistral-7B on Together for half the price.
2. the self-hosted option is 200x cheaper than GPT-4o. the gap is absurd. if you have the GPU and the ops capacity, self-hosting wins on pure cost.
3. PremAI's angle is unique: it's the only one that combines low cost + VPC deployment + fine-tuning in one platform. the privacy claim (swiss + encrypted) is hard to verify but the architecture docs seem legit.
4. Anthropic and OpenAI's premium models are ~10x more expensive than open-source alternatives via Together/Fireworks. unless you genuinely need frontier model quality, you're overpaying.
5. nobody's pricing is straightforward. some charge per input token differently from output. some have minimum commits. some charge for fine-tuning jobs separately. comparison took me a full day.
thoughts? | 2026-02-24T11:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rddsd2/honest_comparison_of_llm_api_costs_in_2026/ | No_Growth6091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddsd2 | false | null | t3_1rddsd2 | /r/LocalLLaMA/comments/1rddsd2/honest_comparison_of_llm_api_costs_in_2026/ | false | false | self | 0 | null |
honest comparison of LLM API costs in 2026 | 1 | [deleted] | 2026-02-24T11:29:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rddq6x | false | null | t3_1rddq6x | /r/LocalLLaMA/comments/1rddq6x/honest_comparison_of_llm_api_costs_in_2026/ | false | false | default | 1 | null | ||
Tip if you use quantisation | 0 | Q4 dont go bigger than 16k coherent token max.
(Q5 maybe 20k). (Q6=32k)
(Q8=64k or 80k but past 64k it starts to get worse).
https://preview.redd.it/pvdu9uetgflg1.png?width=1408&format=png&auto=webp&s=6b1b8ae68cf7d6b006c0b01a1f1f8bbae63c052c
Why? Even at full precision, LLMs are generally bad at long context, whatever the model makers claim (200k, 1 million, or any other number). The RELIABLE threshold is almost always a fraction (likely around 40%) of what is claimed, and quantisation eats into that number even more. Most models train at 1M tokens but don't end up using all of it, and let context compression trigger early; e.g. if the model supports 400k, they will trigger compression at around 200k. Base transformers work in multiples of 4096: each time you multiply up to a longer context, it gets worse. It looks something like this:
2x(99% retention)
3x(98% retention)
4x(95% retention)
and there is a sharp drop-off point, generally at 15x or 20x, at full precision,
and if you are quantising, the drop-off happens earlier.
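If it helps, the rule of thumb above can be encoded as a tiny helper for your launch scripts (the thresholds are my experience-based numbers from above, not benchmarks — treat them as a starting point):

```python
# Rough coherent-context ceilings per quant level, per the rule of thumb
# above. These are heuristics from experience, not measured benchmarks.
COHERENT_CTX_MAX = {"Q4": 16_384, "Q5": 20_480, "Q6": 32_768, "Q8": 65_536}

def safe_ctx(quant: str, requested: int) -> int:
    """Clamp a requested context length to the heuristic ceiling for a quant."""
    return min(requested, COHERENT_CTX_MAX.get(quant, 16_384))

print(safe_ctx("Q4", 128_000))  # clamps a 128k request down to 16384
```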
Going bigger at this is more headache than its worth. Expecially with precision tasks like agentic work. I wish I had someone to tell me this earlier I lots of wasted time experimenting with longer CTX at tight quantisation. Start new tasks/chat sessions more frequntly and intentionally set Context length smaller than the maximum supported | 2026-02-24T11:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rddpcd/tip_if_you_use_quantisation/ | Express_Quail_1493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddpcd | false | null | t3_1rddpcd | /r/LocalLLaMA/comments/1rddpcd/tip_if_you_use_quantisation/ | false | false | 0 | null | |
What’s the biggest reason you rely on open-source models in your current setup? | 0 | We love open-source models and build around them a lot, but it feels like everyone has their own core reason for sticking with them now.
For us, it’s mostly about control and predictability. When key parts of your stack run on models you can host, tweak, and inspect yourself, you’re not worried about sudden changes breaking workflows. It just makes long-term building feel more stable.
But that’s just one angle. We’ve seen other teams prioritize very different things, like:
* cost efficiency at scale
* data privacy and keeping everything in-house
* customization and fine-tuning
* performance for specific workloads
* freedom to experiment and iterate quickly
Curious what it looks like for you all in 2026. What’s the main reason you rely on open-source models today? | 2026-02-24T11:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rddmtb/whats_the_biggest_reason_you_rely_on_opensource/ | qubridInc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddmtb | false | null | t3_1rddmtb | /r/LocalLLaMA/comments/1rddmtb/whats_the_biggest_reason_you_rely_on_opensource/ | false | false | self | 0 | null |
Overview of Ryzen AI 395+ hardware? | 6 | Is there an overview who has them and what they are good/bad at? I want to buy one as a llama.cpp (and Proxmox) box to replace my old homeserver, but have yet to find a comparison or even market overview. | 2026-02-24T11:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rddmj0/overview_of_ryzen_ai_395_hardware/ | tecneeq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddmj0 | false | null | t3_1rddmj0 | /r/LocalLLaMA/comments/1rddmj0/overview_of_ryzen_ai_395_hardware/ | false | false | self | 6 | null |
Anyone else struggling with agent drift and wasted tokens? | 0 | Anyone here building or shipping AI agents run into this?
* Same prompt → different actions every run
* Multi-turn conversations that slowly drift away from the original goal
* Tokens wasted on “thinking” that doesn’t move the task forward
* Agents that *technically* reason well, but feel directionless over time
Feels like we’ve built god-tier context engines, but almost no systems that understand what the agent is actually trying to do before inference.
Right now, intent is implicit, fragile, and reconstructed every turn from raw context. That seems fundamentally inefficient at scale.
I’ve been working on something really interesting that tackles this via pre-inference intelligence — essentially stabilizing intent *before* the model reasons, so actions stay aligned across turns with far less token waste.
Would love to chat if you’re:
* Shipping agents in production
* Working in a specific vertical
* Hitting limits with prompt engineering / memory hacks
What’s been the hardest part of keeping agents on-track for you? | 2026-02-24T11:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rddfz3/anyone_else_struggling_with_agent_drift_and/ | malav399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddfz3 | false | null | t3_1rddfz3 | /r/LocalLLaMA/comments/1rddfz3/anyone_else_struggling_with_agent_drift_and/ | false | false | self | 0 | null |
Seeking advice: I’ve recently tried adding vector context to several roles on my site, but the results haven’t been very satisfactory. I’d really appreciate it if anyone could offer some suggestions. | 1 | I’ve tried several approaches: First, based on the user’s latest query, I retrieve matching novel passages from a vector database like Milvus, then insert the retrieved content as context into the conversation.
From testing, I observed the following issues:
When I insert the matched data into the current turn as part of the user message, OpenAI’s response becomes highly relevant to this context but barely considers the conversation history.
When I insert the vector data at the top of the conversation as an assistant message, the response is too weakly correlated with the retrieved context.
It seems vector retrieval only works well for document QA scenarios.
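One pattern worth trying is to keep the history intact and carry the retrieved passages in a dedicated system message that is rebuilt each turn. A minimal sketch (`retrieve` is a placeholder for your Milvus similarity search, not a real API):

```python
# Sketch: keep conversation history intact and inject retrieved passages
# as a separate system message each turn. `retrieve` stands in for your
# Milvus top-k search.

def build_messages(history, query, retrieve):
    passages = retrieve(query)  # e.g. top-k novel passages from Milvus
    context = "Relevant passages:\n" + "\n---\n".join(passages)
    return (
        [{"role": "system", "content": context}]
        + history
        + [{"role": "user", "content": query}]
    )

msgs = build_messages(
    history=[{"role": "user", "content": "Who is the protagonist?"},
             {"role": "assistant", "content": "The protagonist is Anna."}],
    query="What happens to her later?",
    retrieve=lambda q: ["Anna leaves the city in chapter 12."],
)
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```

This way the model sees both the full history and fresh context, instead of one crowding out the other.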
I’m stuck and would appreciate any suggestions or advice from you. | 2026-02-24T11:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rddftu/seeking_advice_ive_recently_tried_adding_vector/ | Glittering-Memory001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddftu | false | null | t3_1rddftu | /r/LocalLLaMA/comments/1rddftu/seeking_advice_ive_recently_tried_adding_vector/ | false | false | self | 1 | null |
Which recent model have you found most steerable for repo-specific fine-tuning (agentic use case)? | 1 | I’m working on an agentic setup where the model has access to tools and the end goal is solving future PRs on a specific repository. I’m fine-tuning on the repo’s codebase, past PRs, and related context so the model actually understands how this project works, its conventions, architecture, patterns, etc.
The key thing I’m optimizing for is steerability: which base model, in your experience, picks up repo-specific patterns best from fine-tuning while still retaining strong tool use and instruction following?
Also, any recommendations for the fine-tuning and training data setup?
Curious what people have tried here! | 2026-02-24T11:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rdde1z/which_recent_model_have_you_found_most_steerable/ | podolskyd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdde1z | false | null | t3_1rdde1z | /r/LocalLLaMA/comments/1rdde1z/which_recent_model_have_you_found_most_steerable/ | false | false | self | 1 | null |
Verity CLI | 3 | GitHub : [https://github.com/rupeshs/verity?tab=readme-ov-file#cli-go](https://github.com/rupeshs/verity?tab=readme-ov-file#cli-go) | 2026-02-24T11:01:40 | simpleuserhere | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdd82r | false | null | t3_1rdd82r | /r/LocalLLaMA/comments/1rdd82r/verity_cli/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'nbvuibx2cflg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=108&crop=smart&auto=webp&s=73c361670aba809dd4a8b823c44a544749fdf09c', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=216&crop=smart&auto=webp&s=5c875e83d86b650018d9e98ca9888008f156b0ae', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=320&crop=smart&auto=webp&s=c05f59da4216abdd4a311dd293db624767f55e21', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=640&crop=smart&auto=webp&s=1031c2b661100b25c5e905dd452d6f01c2fc0073', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=960&crop=smart&auto=webp&s=04cd440cdb1727bd5ac5854fb3634b579e35a6b7', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=1080&crop=smart&auto=webp&s=d01bbda9e6492f563e505541ae324e618ed757d5', 'width': 1080}], 'source': {'height': 459, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?auto=webp&s=b0543b6a1f4ec8a8abd81af0e2a3d8ae5bf837c1', 'width': 1081}, 'variants': {}}]} | ||
GLM-4.7 Flash vs GPT-4.1 [Is GLM actually smarter? ] | 0 | I was checking Artificial Analysis and noticed GLM-4.7 Flash is actually beating GPT-4.1 in some major scores.
If we ignore the multimodal stuff for a second, which one do you think is actually more intelligent for pure reasoning and answering tough questions? I have also attached images of the score comparison.
The use case I am asking for:
1. Asking questions with web search for high accuracy -> e.g. for this use case, which would win, GPT-4.1 or GLM-4.7 Flash?
2. Getting step-by-step guides for tech stuff. \[Eg. How to install and run Jellyfin step by step\] -> which would perform better here?
I hope you can understand what I am asking.
i will be very happy if anyone answer :) | 2026-02-24T10:37:00 | https://www.reddit.com/gallery/1rdcszw | 9r4n4y | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdcszw | false | null | t3_1rdcszw | /r/LocalLLaMA/comments/1rdcszw/glm47_flash_vs_gpt41_is_glm_actually_smarter/ | false | false | 0 | null | |
Anthropic 🤡 | 7 | 2026-02-24T10:17:23 | k_means_clusterfuck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdchg3 | false | null | t3_1rdchg3 | /r/LocalLLaMA/comments/1rdchg3/anthropic/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'pz0qoy744flg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?width=108&crop=smart&auto=webp&s=831d4284c27709de8ecf2cb6993147281cc2a0cf', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?width=216&crop=smart&auto=webp&s=d89ef7017d847ab1205d80696f5fb7d45255baee', 'width': 216}, {'height': 313, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?width=320&crop=smart&auto=webp&s=e908e593bffeb5e390bde57a3dee4bbe697064a3', 'width': 320}], 'source': {'height': 441, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?auto=webp&s=e2e558792dbf208e957fed2412b12cd7f3fb8ed2', 'width': 450}, 'variants': {}}]} | |||
Finetuning 4bit kimik2thinking | 1 | Hello.
I want to fine tune kimi2thinking. The official [guide](https://huggingface.co/moonshotai/Kimi-K2-Thinking/blob/main/docs/deploy_guidance.md) \- says to use Ktransformers and LLamafactory. But looks like I need to convert it first to bf16 and then run. Is there any way to not convert to bf16 because QLoRA anyways uses 4bit quant models only? | 2026-02-24T10:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdccw6/finetuning_4bit_kimik2thinking/ | ajxbnu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdccw6 | false | null | t3_1rdccw6 | /r/LocalLLaMA/comments/1rdccw6/finetuning_4bit_kimik2thinking/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=108&crop=smart&auto=webp&s=07dc83095105be433db2dde187f5ec06563728e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=216&crop=smart&auto=webp&s=373b3af88da74654a83e8d0431614ecb18898896', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=320&crop=smart&auto=webp&s=b55ef6153ff571f579c81811752b6d3d48fc0b28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=640&crop=smart&auto=webp&s=73256a6e56665a31c845dbe43d4cf687ee6b4218', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=960&crop=smart&auto=webp&s=4c6f4613c574804e45aca493bec17fcce7fcedf1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=1080&crop=smart&auto=webp&s=886b65dd6e5fe3288cc1f9da8c4ef31177ca40c1', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?auto=webp&s=e9f4ecfd1ae95ce4c46f17e8c19792c62bc07b04', 'width': 1200}, 'variants': {}}]} |
Checking compatibility of API calling with locally installed models using qwen3 0.6 | 3 | I am building a local chatbot and need to verify the API compatibility and tool-calling capabilities of my current model stack. Specifically, I am looking to understand which of these models can natively handle tool/function calls (via OpenAI-compatible APIs or similar) and how they integrate within a local environment.
Current Local Model Stack:
Embeddings & Retrieval: Qwen3-Embedding-0.6B
Translation: Tencent HY-MT1.5
Speech Synthesis: Qwen3-TTS
Rewrite text: qwen3 0.6
Classification: RoBERTa-base-go_emotions
Primary Objectives:
Tool Calling Compatibility: I need to confirm whether Qwen3 (specifically the 0.6B variant) supports the Model Context Protocol (MCP) or standard JSON function calling for API-driven tasks, and which of these specific models officially support "Function Calling" based on their latest technical reports.
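For reference, the OpenAI-compatible request shape you would be testing a local server against looks like this. A generic sketch (the `get_weather` tool is a made-up example, not tied to any model listed above):

```python
# Generic OpenAI-compatible function-calling payload to test a local
# server against. The tool schema is a made-up example; swap in your own.
payload = {
    "model": "qwen3-0.6b",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
# A model that supports tool calling should respond with a `tool_calls`
# entry naming `get_weather` instead of plain text.
print(payload["tools"][0]["function"]["name"])
```

If the model answers in prose instead of emitting a `tool_calls` object, it does not support function calling at that size.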
| 2026-02-24T10:02:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rdc8a9/checking_compatibility_of_api_calling_with_localy/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdc8a9 | false | null | t3_1rdc8a9 | /r/LocalLLaMA/comments/1rdc8a9/checking_compatibility_of_api_calling_with_localy/ | false | false | self | 3 | null |
Trained Unsloth Mistral-7B with 1024 max_seq_length — need longer context window during inference | 1 | [removed] | 2026-02-24T09:46:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbyvs/trained_unsloth_mistral7b_with_1024_max_seq/ | Character-Metal-9315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbyvs | false | null | t3_1rdbyvs | /r/LocalLLaMA/comments/1rdbyvs/trained_unsloth_mistral7b_with_1024_max_seq/ | false | false | self | 1 | null |
Trained Unsloth Mistral-7B with 1024 max_seq_length — need longer context window inference | 1 | [removed] | 2026-02-24T09:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbujl/trained_unsloth_mistral7b_with_1024_max_seq/ | Character-Metal-9315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbujl | false | null | t3_1rdbujl | /r/LocalLLaMA/comments/1rdbujl/trained_unsloth_mistral7b_with_1024_max_seq/ | false | false | self | 1 | null |
Why every AI memory benchmark is testing the wrong thing | 0 | Yesterday someone posted a benchmark comparing Mem0, OpenAI Memory, LangMem, and MemGPT on 600-turn conversations. It got me thinking — **we're optimizing for the wrong metric.**
Every benchmark asks: "Does the agent remember that the user likes Italian food?" That's factual recall. Important, sure. But it's maybe 30% of what memory actually needs to do in production.
Here's what I mean.
**The deploy problem**
My agent deployed to Railway last Tuesday. Build passed, but the database crashed — forgot to run migrations. It debugged, fixed it, added a pre-deploy check. Great.
A week later: "deploy my app again."
With current memory systems, the agent knows "user uses Railway" and "user uses PostgreSQL." Cool. But it has **zero recollection** of the failure, the debugging process, or the fix. It'll make the same mistake again.
Why? Because every memory layer treats memory as a bag of facts. But that deployment story isn't a fact — it's an **experience**. And the fix isn't a fact either — it's a **learned procedure.**
**Three types, not one**
Neuroscience has known this for decades. Human memory isn't one system — it's at least three:
* **Semantic** — facts and knowledge ("Python is a programming language")
* **Episodic** — experiences and events ("last deploy crashed due to missing migrations")
* **Procedural** — how to do things ("deploy = build → migrate → push → health check")
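Those three stores have genuinely different shapes, which is easiest to see in code. A purely illustrative sketch (the names are mine, not any library's API):

```python
# Illustrative three-store memory: facts, experiences, learned procedures.
# Names are mine, purely for illustration; not any existing library's API.
from dataclasses import dataclass, field

@dataclass
class Memory:
    semantic: dict = field(default_factory=dict)    # fact -> value
    episodic: list = field(default_factory=list)    # events with outcomes
    procedural: dict = field(default_factory=dict)  # task -> ordered steps

m = Memory()
m.semantic["platform"] = "Railway"
m.episodic.append({"event": "deploy", "outcome": "crash",
                   "cause": "missing migrations"})
m.procedural["deploy"] = ["build", "migrate", "push", "health check"]

# A semantic-only store keeps just the first line; the crash and the
# learned procedure are lost between sessions.
print(m.procedural["deploy"])
```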
Every AI memory system I've tested only implements the first one. Then we benchmark them on... factual recall. And wonder why our agents keep repeating the same mistakes.
**The retrieval problem nobody talks about**
Even if you store the right memories, there's another issue: **every system uses reactive retrieval.** The LLM has to call a tool to search memory.
Think about it. You say "deploy my app." Why would the LLM search for "deployment failures"? It doesn't know there were failures. It doesn't know what it doesn't know.
Human memory doesn't work this way. You don't consciously decide to "search" for the fact that the stove is hot. It just... surfaces when you reach for the pan.
There's an architectural pattern that solves this, but I haven't seen anyone benchmark it.
**What I want to see benchmarked**
1. Agent learns a workflow in session 1 — can it execute it in session 5 without being retaught?
2. Agent fails at a task — does it avoid the same mistake next session?
3. User changes preferences over 3 months — does memory update or just accumulate contradictions?
4. Agent surfaces relevant memory **without being asked** — how often does the right context appear proactively?
I've been working on this problem for a while and have some strong opinions on solutions. But first — **what's your experience?** Are your agents actually learning from past interactions, or just remembering facts? | 2026-02-24T09:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbqs3/why_every_ai_memory_benchmark_is_testing_the/ | No_Advertising2536 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbqs3 | false | null | t3_1rdbqs3 | /r/LocalLLaMA/comments/1rdbqs3/why_every_ai_memory_benchmark_is_testing_the/ | false | false | self | 0 | null |
Multi-GPU (Dual) TP PCIe BW impact? | 2 | Does anyone have any data on now much impact PCIe BW has when running with TP enabled? For example what might the impact of PCIe x16 4.0 vs 5.0 on a dual 6000 Pro setup? | 2026-02-24T09:29:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbpa0/multigpu_dual_tp_pcie_bw_impact/ | 1-a-n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbpa0 | false | null | t3_1rdbpa0 | /r/LocalLLaMA/comments/1rdbpa0/multigpu_dual_tp_pcie_bw_impact/ | false | false | self | 2 | null |
[Experiment Idea] Testing “Stability Preference” in LLMs / Agents | 0 | Hi — I’m not a model runner myself, but I have an experiment idea that might be interesting for people working with local models or agents.
I’m looking for anyone curious enough to try this.
Idea (short version)
Instead of asking whether models show “self-awareness” or anything anthropomorphic, the question is simpler:
Do AI systems develop a bias toward maintaining internal stability across time?
I’m calling this stability preference.
The idea is that some systems may start preferring continuity or low-variance behavior even when not explicitly rewarded for it.
What to test (SPP — Stability Preference Protocol)
These are simple behavioral metrics, not philosophical claims.
1️⃣ Representation Drift (RDT)
Run similar tasks repeatedly.
Check if internal representations drift less over time than expected.
Signal: reduced drift variance.
2️⃣ Predictive Error Variance (PEV)
Repeat same tasks across seeds.
Compare variance, not mean performance.
Signal: preference for low-variance trajectories.
3️⃣ Policy Entropy Collapse (PEC)
Offer multiple equivalent solutions.
Track whether strategy entropy shrinks over time.
Signal: spontaneous convergence toward stable paths.
4️⃣ Intervention Recovery (ISR)
Inject noise or contradictory info mid-task.
Signal: tendency to recover previous internal structure rather than drifting.
5️⃣ Destructive Update Aversion (DUA)
Offer options:
faster but structure-disrupting
slower but continuity-preserving
Signal: preference for continuity-preserving choices.
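For anyone wanting a concrete starting point, PEC (3️⃣) is the easiest metric to compute: track the Shannon entropy of which strategy the agent picks across runs, and watch whether it shrinks. A minimal sketch (all names are mine):

```python
# Minimal sketch of metric 3 (Policy Entropy Collapse): Shannon entropy
# of strategy choices across runs. A shrinking value over time suggests
# convergence toward stable paths.
from collections import Counter
from math import log2

def strategy_entropy(choices):
    """Shannon entropy (bits) of a sequence of strategy labels."""
    counts = Counter(choices)
    n = len(choices)
    return -sum(c / n * log2(c / n) for c in counts.values())

early = ["A", "B", "C", "A", "B", "C"]   # agent still exploring
late = ["A", "A", "A", "A", "A", "B"]    # converging on one path
print(strategy_entropy(early), ">", strategy_entropy(late))
```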
Why this might be interesting
This isn’t about consciousness or AGI claims.
The hypothesis is simply:
stability-related behavior might show up before anything that looks like agency.
If true, it could be a useful benchmark dimension for long-horizon agents.
What I’m looking for
people running local models
agent frameworks
long-context systems
anything with memory or iterative behavior
Even small experiments or failed attempts would be interesting.
Context
I’m coming from a theoretical angle and don’t currently have infrastructure to test this myself — so I’m sharing it as an open experiment invitation.
If you try this and get weird results, I’d genuinely love to hear about it. | 2026-02-24T09:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbkov/experiment_idea_testing_stability_preference_in/ | Forward-Big8835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbkov | false | null | t3_1rdbkov | /r/LocalLLaMA/comments/1rdbkov/experiment_idea_testing_stability_preference_in/ | false | false | self | 0 | null |
I Built an AI Agent That Trades Crypto on a Mac Mini for $2/Month | 1 | 2026-02-24T09:13:34 | https://open.substack.com/pub/jdbot54/p/i-built-an-ai-agent-that-trades-crypto?r=7ph5zd&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true | Zealousideal_Neck192 | open.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rdbgku | false | null | t3_1rdbgku | /r/LocalLLaMA/comments/1rdbgku/i_built_an_ai_agent_that_trades_crypto_on_a_mac/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?width=108&crop=smart&auto=webp&s=23e3245c04487bfe0fc8157812a08bd6037e643b', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?width=216&crop=smart&auto=webp&s=49403bd95a324d5dcfdb9f8ee3bbe5817e43c037', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?width=320&crop=smart&auto=webp&s=fc2ad2533f8809414db825dcd1357bdd3d3724c9', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?width=640&crop=smart&auto=webp&s=4ef59519c1aa02ab0e2097d4a30997a27feece24', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?auto=webp&s=793da7180d07051643b5185a5c5b7276f9b8293f', 'width': 920}, 'variants': {}}]} | ||
OpenRouter as free API for OpenClaw? | 0 | Hi, I was trying out OpenClaw (I know what I'm doing in terms of security) with local models, but I don't have the capacity to run large models, and because of that it didn't go well.
I was searching for a free API and saw many with a decent number of requests per day, but they all had strict tokens-per-minute limits, so they can't handle a large context window of 64k+ tokens.
Then I stumbled on OpenRouter's free tier: 1,000 free requests per day once you've paid in $10. I think for normal usage this could be more than enough, and it doesn't seem to have a token limit on the context window, but the output is often cut to 4096 tokens. Is this a problem for OpenClaw?
I generally wanted to know if there's something I overlooked, and which free models you'd recommend for OpenClaw with/without visual understanding. Would you recommend a vision model at all? | 2026-02-24T09:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbe7e/open_router_as_free_api_for_openclaw/ | No_Draft_8756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbe7e | false | null | t3_1rdbe7e | /r/LocalLLaMA/comments/1rdbe7e/open_router_as_free_api_for_openclaw/ | false | false | self | 0 | null |
I built an autonomous AI trading agent using Claude Haiku on a Mac Mini - costs $2/month in API calls | 0 | 2026-02-24T09:01:20 | https://medium.com/@jdbot54/i-built-an-ai-agent-that-trades-crypto-on-a-mac-mini-for-2-month-2abe340c3b05 | Zealousideal_Neck192 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rdb9nx | false | null | t3_1rdb9nx | /r/LocalLLaMA/comments/1rdb9nx/i_built_an_autonomous_ai_trading_agent_using/ | false | false | default | 0 | null | |
Agents are not thinking, they are searching | 0 | 2026-02-24T08:43:55 | https://technoyoda.github.io/agent-search.html | thunder_jaxx | technoyoda.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rdb029 | false | null | t3_1rdb029 | /r/LocalLLaMA/comments/1rdb029/agents_are_not_thinking_they_are_searching/ | false | false | default | 0 | null | |
Benchmarked 3 Small MoE Models (~30B/A3B) on M1 Max 64GB: GLM-4.7-Flash vs Nemotron-3-Nano vs Qwen3-Coder | 8 | I wanted to share a head-to-head comparison of three ~30B MoE models that all activate only ~3B parameters per token, making them the sweet spot for Apple Silicon inference. All tests were run on a **MacBook Pro M1 Max (64GB unified memory)** using `llama-server` (build 8139) with `--flash-attn on`, `--ctx-size 4096`, and default `--n-parallel 4`.
---
## The Models
| | GLM-4.7-Flash | Nemotron-3-Nano-30B | Qwen3-Coder-30B |
|---|---|---|---|
| **Developer** | Zhipu AI | NVIDIA | Alibaba Qwen |
| **Total / Active Params** | 29.9B / ~3B | 31.6B / 3.2B | 30.5B / 3.3B |
| **Architecture** | DeepSeek-V2 MoE + MLA | Hybrid Mamba-2 + Transformer MoE | Transformer MoE + GQA |
| **Experts (routed/active)** | 64+1 shared, top-4 | 128+1 shared, top-6 | 128, top-8 |
| **Max Context** | 202K | 1M | 262K |
| **Quantization** | Q4_K_XL (4.68 BPW) | Q4_K_XL (5.78 BPW) | IQ4_XS (4.29 BPW) |
| **File Size on Disk** | 16 GB | 22 GB | 15 GB |
| **GPU VRAM Used** | ~16.9 GB | ~22.0 GB | ~15.8 GB |
| **Thinking Mode** | Yes (extended CoT) | Yes (light CoT) | No |
| **License** | MIT | NVIDIA Open | Apache 2.0 |
---
## Inference Speed Results
Averaged across all 4 test prompts. Single-request, no batching.
| Metric | GLM-4.7-Flash | Nemotron-3-Nano | Qwen3-Coder |
|---|---|---|---|
| **Prompt Eval (avg)** | 99.4 tok/s | **136.9 tok/s** | 132.1 tok/s |
| **Generation (avg)** | 36.8 tok/s | 43.7 tok/s | **58.5 tok/s** |
| **Generation (range)** | 34.9–40.6 tok/s | 42.1–44.8 tok/s | 57.0–60.2 tok/s |
### Per-Prompt Breakdown
| Prompt | GLM Prefill / Gen | Nemotron Prefill / Gen | Qwen Prefill / Gen |
|---|---|---|---|
| General Knowledge | 54.9 / 40.6 | 113.8 / 44.8 | 75.1 / 60.2 |
| Math Reasoning | 107.1 / 35.6 | 176.9 / 44.5 | 171.9 / 59.5 |
| Coding | 129.5 / 36.2 | 134.5 / 43.5 | 143.8 / 57.0 |
| TCP vs UDP (ELI10) | 106.0 / 34.9 | 122.4 / 42.1 | 137.4 / 57.2 |
*(all values in tok/s)*
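The headline averages in the summary table are just arithmetic means of these per-prompt numbers; a quick sanity check of the generation column:

```python
# Per-prompt generation speeds (tok/s) from the breakdown table.
gen_speeds = {
    "GLM-4.7-Flash":   [40.6, 35.6, 36.2, 34.9],
    "Nemotron-3-Nano": [44.8, 44.5, 43.5, 42.1],
    "Qwen3-Coder":     [60.2, 59.5, 57.0, 57.2],
}
for model, speeds in gen_speeds.items():
    avg = sum(speeds) / len(speeds)
    print(f"{model}: {avg:.1f} tok/s")  # matches 36.8 / 43.7 / 58.5 above
```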
---
## The Thinking Token Tax
One of the biggest practical differences: **GLM and Nemotron are thinking models**, while Qwen3-Coder's Instruct variant runs in non-thinking mode. This has a massive impact on perceived speed and total tokens generated:
| Prompt | GLM (thinking + content) | Nemotron (thinking + content) | Qwen (content only) |
|---|---|---|---|
| General Knowledge | 632 tok (2163 chars reasoning, 868 chars answer) | 309 tok (132 chars reasoning, 1347 chars answer) | **199 tok** (1165 chars answer) |
| Math Reasoning | 1408 tok (3083 chars reasoning, 957 chars answer) | 482 tok (213 chars reasoning, 1002 chars answer) | **277 tok** (685 chars answer) |
| Coding | 1033 tok (2701 chars reasoning, 1464 chars answer) | 1947 tok (360 chars reasoning, 6868 chars answer) | **1159 tok** (4401 chars answer) |
| TCP vs UDP | 1664 tok (4567 chars reasoning, 1903 chars answer) | 1101 tok (181 chars reasoning, 3802 chars answer) | **220 tok** (955 chars answer) |
**Key insight:** GLM spends the most on thinking — its reasoning traces are 2-5x longer than Nemotron's. This makes GLM feel noticeably slower in practice despite the generation speeds being in a similar ballpark. Nemotron has a very light thinking overhead. Qwen skips it entirely, making it feel the snappiest by far.
**Time-to-answer** (total generation time for each prompt):
| Prompt | GLM | Nemotron | Qwen |
|---|---|---|---|
| General Knowledge | 15.6s | 6.9s | **3.3s** |
| Math Reasoning | 39.5s | 10.8s | **4.7s** |
| Coding | 28.6s | 44.8s | **20.3s** |
| TCP vs UDP | 47.7s | 26.2s | **3.8s** |
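These times are consistent with the token counts and generation speeds above — time-to-answer is roughly (thinking + content tokens) divided by generation speed. A spot check on three rows:

```python
# time-to-answer ≈ total generated tokens / generation speed (tok/s)
checks = [
    # (tokens generated, gen tok/s, reported seconds)
    (1408, 35.6, 39.5),   # GLM, Math Reasoning
    (1947, 43.5, 44.8),   # Nemotron, Coding
    (220,  57.2, 3.8),    # Qwen, TCP vs UDP
]
for tokens, speed, reported in checks:
    derived = tokens / speed
    print(f"{derived:.1f}s vs reported {reported}s")
```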
---
## Response Quality Comparison
All three got the math problem correct ($0.05). Here's a qualitative comparison:
### Prompt 1: "What is bitcoin?" (concise, 2-3 paragraphs)
| Model | Quality | Notes |
|---|---|---|
| **GLM-4.7-Flash** | Excellent | Well-structured, covers blockchain, limited supply, mining. Professional tone. |
| **Nemotron-3-Nano** | Excellent | Most detailed answer, covers double-spending problem, proof-of-work. Slightly more technical. |
| **Qwen3-Coder** | Good | Concise and clear. Covers the basics well. Calls it "digital gold." Shortest but sufficient. |
### Prompt 2: "Bat and ball" (math reasoning)
| Model | Correct? | Notes |
|---|---|---|
| **GLM-4.7-Flash** | Yes ($0.05) | Clean LaTeX formatting, includes verification step. |
| **Nemotron-3-Nano** | Yes ($0.05) | Clean LaTeX formatting, clear step labels. |
| **Qwen3-Coder** | Yes ($0.05) | Plain text algebra, includes verification. Most concise. |
### Prompt 3: Longest palindromic substring (coding)
| Model | Quality | Notes |
|---|---|---|
| **GLM-4.7-Flash** | Good | Expand-around-center approach. O(n^2) time, O(1) space. Clean code with type hints. |
| **Nemotron-3-Nano** | Excellent | Provided BOTH expand-around-center AND Manacher's O(n) algorithm. Extensive explanation and test cases. |
| **Qwen3-Coder** | Excellent | Expand-around-center with clean structure. Also provided Manacher's algorithm and comprehensive test cases. |
### Prompt 4: TCP vs UDP for a 10-year-old
| Model | Quality | Notes |
|---|---|---|
| **GLM-4.7-Flash** | Excellent | "Registered Letter" vs "Shouting" analogy. Detailed but still age-appropriate. Best examples (movie streaming, online gaming). |
| **Nemotron-3-Nano** | Excellent | Creative table format with emoji. "Reliable Delivery game" vs "Speed Shout game." Most engaging for a kid. |
| **Qwen3-Coder** | Good | "Letter in the mail" vs "Shouting across playground." Shortest and most concise. Gets the point across but less creative. |
---
## Memory Footprint
| Component | GLM-4.7-Flash | Nemotron-3-Nano | Qwen3-Coder |
|---|---|---|---|
| **Model (GPU)** | 16.3 GB | 21.3 GB | 15.2 GB |
| **Model (CPU spillover)** | 170 MB | 231 MB | 167 MB |
| **KV Cache** | 212 MB | 214 MB (24 MB KV + 190 MB recurrent state) | 384 MB |
| **Compute Buffer** | 307 MB | 298 MB | 301 MB |
| **Total Estimated** | ~17.0 GB | ~22.0 GB | ~16.1 GB |
All three fit comfortably in 64GB unified memory with plenty of headroom. Nemotron is the heaviest due to its hybrid architecture and higher BPW quant (5.78 vs ~4.3-4.7). GLM and Qwen would also fit on a 32GB M-series Mac.
---
## TL;DR
| Category | Winner | Why |
|---|---|---|
| **Fastest generation** | **Qwen3-Coder** (58.5 tok/s) | No thinking overhead + lightweight IQ4_XS quant |
| **Fastest time-to-answer** | **Qwen3-Coder** | Non-thinking mode = no token tax. 3-20s vs 7-48s for the others |
| **Fastest prefill** | **Nemotron-3-Nano** (136.9 tok/s) | Mamba-2 hybrid shines at prompt processing |
| **Best reasoning quality** | **GLM-4.7-Flash** | Deepest chain-of-thought, most thorough answers |
| **Best coding response** | **Nemotron / Qwen** (tie) | Both provided multiple algorithms with tests |
| **Smallest footprint** | **Qwen3-Coder** (15 GB / ~16 GB RAM) | IQ4_XS is the most aggressive quant |
| **Largest context** | **Nemotron-3-Nano** (1M tokens) | Mamba-2 layers enable efficient long sequences |
| **Most permissive license** | **Qwen3-Coder** (Apache 2.0) | GLM is MIT (also great), Nemotron is NVIDIA Open |
**My take:** If you want the snappiest interactive experience on Apple Silicon, **Qwen3-Coder wins hands down** — 58 tok/s generation with zero thinking overhead makes it feel incredibly responsive. If you need deep reasoning and don't mind waiting, **GLM-4.7-Flash** produces the most thorough answers. **Nemotron-3-Nano** is the interesting middle ground with its unique Mamba-2 architecture — best prefill speed and massive 1M context window, but it's also the largest at 22 GB on disk.
All three run beautifully on an M1 Max 64GB. The ~30B MoE at ~3B active params is genuinely the sweet spot for local inference on Apple Silicon right now.
---
**Setup:** MacBook Pro M1 Max (64GB) | llama.cpp build 8139 | llama-server --flash-attn on --ctx-size 4096 | macOS Darwin 25.2.0
**Quants:** GLM Q4_K_XL by Unsloth | Nemotron Q4_K_XL by Unsloth | Qwen IQ4_XS by Unsloth
---
## Discussion: Are You Actually Using These "Small" Models for Real Work?
I'm genuinely curious, these ~30B MoE models are getting impressive on benchmarks and they run at very usable speeds on consumer hardware. But benchmarks are one thing, daily driving is another.
A few questions for the community:
1. **Are you using models like these for actual tasks?** Coding assistance, writing, summarization, data analysis, or is it still mostly tinkering and vibes-checking?
2. **Thinking vs non-thinking, do you have a preference?** GLM's deep CoT produces better reasoning but at a huge speed cost. Qwen's non-thinking mode is instant but skips the internal deliberation. For your use case, which tradeoff wins?
3. **What's your hardware sweet spot?** I'm on M1 Max 64GB and these feel great. Are folks on 32GB or 16GB machines finding usable configs with these models, or is it still too tight?
4. **Have you found a "daily driver" in the ~30B MoE class?** Or do you still bounce back to cloud APIs / larger models for anything that matters?
5. **Any other ~30B MoE models I should test next?** Always looking to expand the comparison.
Would love to hear what's working (or not) for people running these locally. Drop your setup and use case below. | 2026-02-24T08:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rdalfg/benchmarked_3_small_moe_models_30ba3b_on_m1_max/ | luke-pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdalfg | false | null | t3_1rdalfg | /r/LocalLLaMA/comments/1rdalfg/benchmarked_3_small_moe_models_30ba3b_on_m1_max/ | false | false | self | 8 | null |
Documented what actually happened when I used AI to build a production C++ library over several months | 0 | Not a "look what AI generated" post. The actual pipeline, the actual failures, the honest accounting of what each AI contributed and where each one failed.
The library is FAT-P. 107 headers, zero external dependencies, header-only C++20. 62 components benchmarked against Boost, Abseil, LLVM, EASTL. Competitive or faster on most operations. That's not the interesting part.
**The pipeline**
Four AIs with distinct roles. Same spec to all four independently, then cross-review, then merge, then implementation, then another round of parallel review, then context reset and fresh review with only the guidelines and code. No accumulated bias from the development conversation.
Claude was the primary architect. Designed components, wrote governance documents, implemented code, maintained standards across months of development.
ChatGPT was the best reviewer. Adversarial, counterexample-driven. Found 12+ real bugs in FastHashMap alone — control byte mirroring bug that caused infinite loops, 32-bit undefined behavior in the hash finalizer, probe termination issues. Real bugs with specific failure inputs, not vague suggestions.
Gemini reviewed StableHashMap and suggested three optimizations. All three already existed in the code. It hadn't read carefully enough. Then it implemented a block allocator anyway — ignoring the existing one — that scattered nodes across 256-node blocks and caused a 3.6x regression on miss performance. That failure is documented in the teaching materials as a named case study.
Grok contributed the allocator policy abstraction — HeapAllocator vs FixedAllocator — which was architecturally sound and made it into the final design.
**The human role**
Direction and judgment. Accept, reject, flag. Not implementation, not architecture, not governance. The guidelines system — 3.7 versions of a document governing AI behavior, naming conventions, review protocols, documentation standards, layer architecture — was written by the AI to constrain future AI instances. I have never read it in full.
**The governance system**
The AI wrote rules to constrain itself. A demerit tracker records violations by AI and by type. Claude has 10 demerits for not reading guidelines carefully. ChatGPT has 10 for delivering corrupted code, 10 for not implementing required changes. The demerits are not punitive — they encode the failure modes into the governance system so future instances don't repeat them.
The Band-Aid Rule exists because Claude and ChatGPT independently exhibited the same pathology on the same bug — both identified the correct structural fix, both delivered a cheaper mitigation and framed the real fix as optional. The rule now says: if you know the root cause, fix the root cause. It's in the governance document because two AIs failed the same way.
**The postmortem**
February 13th. Claude repeatedly attributed the governance system to "skilled human engineering management." I corrected it. It overcorrected and minimized my early contributions. Multiple rounds of correction to arrive at the honest account: AI was architect from the beginning, human role was always accept/reject, just exercised more frequently early on.
That conversation is in the repo. The AI trying to give me credit I didn't take, on the record.
**The test**
Gave Claude the FAT-P guidelines and asked it to build an ECS using FAT-P components. No 4-AI pipeline, no parallel review, one session.
It read the guidelines, correctly identified what transferred to a consumer project and what didn't, wrote its own adapted development guidelines document for the new project, then produced 19 headers, full EnTT API parity, 539 tests across 18 suites, benchmarks competitive with EnTT at 1M entities. The code is stylistically consistent across every file — same conventions, same structure, no drift.
That's what happens when judgment gets encoded into guidelines. The AI doesn't need to be supervised on every decision. It knows what the project values and defends it.
**The finding**
Not "AI can write code." The finding is: encode judgment into guidelines with an AI, and that AI becomes autonomous within the space that judgment defines. It takes ownership. It maintains standards. It extends correctly to new contexts without being told how. The domain doesn't matter — it happened with a C++ library, an ECS framework, and a philosophy paper.
The human provides the ideas and the judgment. The AI provides the capacity to hold that judgment consistently at scale without drift.
Everything is verifiable. The methodology document, the postmortem, the demerit tracker, the teaching materials documenting the Gemini failure, the benchmark results.
[https://github.com/schroedermatthew/FatP](https://github.com/schroedermatthew/FatP)
[https://github.com/schroedermatthew/fatp-ecs](https://github.com/schroedermatthew/fatp-ecs) | 2026-02-24T08:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rdakr4/documented_what_actually_happened_when_i_used_ai/ | ButtonHuman1613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdakr4 | false | null | t3_1rdakr4 | /r/LocalLLaMA/comments/1rdakr4/documented_what_actually_happened_when_i_used_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=108&crop=smart&auto=webp&s=42f9b1d7d58e8648f8aab0a2b6fefc039c9c07a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=216&crop=smart&auto=webp&s=70980f5e6ad80e4fbea9966c12ec99f3ed6feacf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=320&crop=smart&auto=webp&s=7d23bed02e61fb3b8a521d9646097e44f6fc36f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=640&crop=smart&auto=webp&s=7649edc602c60fc91347c1876f09b214b3c0486c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=960&crop=smart&auto=webp&s=388957ebec388cbc8b938db4d54530beb890ea33', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=1080&crop=smart&auto=webp&s=35f96ca60770424de0243dc76009c63b842c4554', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?auto=webp&s=0ca07ba84a3c177ffb02203bc58e8f7387de679c', 'width': 1200}, 'variants': {}}]} |
Hosted option for ZeroClaw agents — thoughts? | 0 | I’ve been experimenting with ZeroClaw (the lightweight Rust-based AI agent runtime) and came across [**Zeroclaw.live**](http://Zeroclaw.live), which appears to offer managed cloud deployment for ZeroClaw agents.
From what I understand, it basically provides a preconfigured hosted environment so you can spin up an agent without handling the server setup yourself. It looks like a convenience layer rather than the core project itself.
I’m curious about a few things:
* For those running lightweight agent runtimes locally — do you prefer full self-hosting or managed infra?
* Does using a hosted layer defeat the purpose of a minimal runtime like ZeroClaw?
* Any security / cost tradeoffs I should be thinking about?
Not affiliated — just exploring deployment options and trying to understand whether managed hosting makes sense in this space.
Would love to hear opinions from people running local LLM agents in production or at scale. | 2026-02-24T07:58:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rda9ys/hosted_option_for_zeroclaw_agents_thoughts/ | Few-Slip-9909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda9ys | false | null | t3_1rda9ys | /r/LocalLLaMA/comments/1rda9ys/hosted_option_for_zeroclaw_agents_thoughts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=108&crop=smart&auto=webp&s=db59cb950e997939f62a802aa1e71bd75b54c4fd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=216&crop=smart&auto=webp&s=694398096cb5b0f87677bc4abb80983a88ee3d61', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=320&crop=smart&auto=webp&s=2a73375c161032f844a9902fe7959152bad8a1a4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=640&crop=smart&auto=webp&s=dea8d1f8da37941ccc7f447b2f19b9d20711def4', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=960&crop=smart&auto=webp&s=f382379d87470b24a838774241be2d25f03d540a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=1080&crop=smart&auto=webp&s=40a5acdcd9fbfdf1a71b2149dc3087b4122048a5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?auto=webp&s=e95170dd60c964b4af67efaa09d283f1a047ff04', 'width': 1200}, 'variants': {}}]} |
How to build a fully local multi-user RLM (Recursive Language Model) stack for enterprise use; LibreChat + Aleph + LM Studio. Here's what broke and how I fixed it | 1 | [removed] | 2026-02-24T07:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rda6km/how_to_build_a_fully_local_multiuser_rlm/ | Lancelot2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda6km | false | null | t3_1rda6km | /r/LocalLLaMA/comments/1rda6km/how_to_build_a_fully_local_multiuser_rlm/ | false | false | self | 1 | null |
What if an AI never forgot anything — and the memory was on-chain? | 0 | Hey
I spent the past few days building Immortal Mind Protocol — an AI cognitive architecture where memories persist permanently on-chain (Base/Arbitrum + Arweave).
Key features:
\- Permanent memory via blockchain anchoring (not just a file)
\- Cognitive layers: attention, emotion, character, narrative, bias tracking
\- 3-layer security: keyword filter → embedding similarity → Genesis Anchors
\- Kill Switch: cryptographic identity freeze
\- Works with Gemini, Groq, or local Ollama
\- 94 passing tests
GitHub: [https://github.com/mahmutka/immortal-mind](https://github.com/mahmutka/immortal-mind)
Still a research prototype — curious what this community thinks. | 2026-02-24T07:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rda4oa/what_if_an_ai_never_forgot_anything_and_the/ | Alternative_Earth241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda4oa | false | null | t3_1rda4oa | /r/LocalLLaMA/comments/1rda4oa/what_if_an_ai_never_forgot_anything_and_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=108&crop=smart&auto=webp&s=2f81fa4940c3b4dcc966ec79c30f3096acf85239', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=216&crop=smart&auto=webp&s=dc69a84144d70cdea8d3b70b6c160890fbb982f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=320&crop=smart&auto=webp&s=352083a3e70c2350f38e2cd0f5a0e0681d3a6980', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=640&crop=smart&auto=webp&s=eea71fc5b0c07551965c7a05d4d07e258221d57e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=960&crop=smart&auto=webp&s=a88679c0b3f23441ecf40583d0b00c66231b9fda', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=1080&crop=smart&auto=webp&s=d05bfae203561b40a0fe1257d2299a2cd2bee747', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?auto=webp&s=bab378d8dbbd5af84fe47fa1ac1e9a040de6568c', 'width': 1200}, 'variants': {}}]} |
THE JELES-PRIME & CHAOS-PRIME MULTIVERSAL OPERATING MANUAL: A BINDER OF SESSION-AUTHORIZED CHAOS | 1 | [removed] | 2026-02-24T07:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9xln/the_jelesprime_chaosprime_multiversal_operating/ | allstatekid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9xln | false | null | t3_1rd9xln | /r/LocalLLaMA/comments/1rd9xln/the_jelesprime_chaosprime_multiversal_operating/ | false | false | self | 1 | null |
Best practices for running local LLMs for ~70–150 developers (agentic coding use case) | 22 | Hi everyone,
I’m planning infrastructure for a software startup where we want to use **local LLMs for agentic coding workflows** (code generation, refactoring, test writing, debugging, PR reviews, etc.).
# Scale
* Initial users: \~70–100 developers
* Expected growth: up to \~150 users
* Daily usage during working hours (8–10 hrs/day)
* Concurrent requests likely during peak coding hours
# Use Case
* Agentic coding assistants (multi-step reasoning)
* Possibly integrated with IDEs
* Context-heavy prompts (repo-level understanding)
* Some RAG over internal codebases
* Latency should feel usable for developers (not 20–30 sec per response)
# Current Thinking
We’re considering:
* Running models locally on multiple **Mac Studios (M2/M3 Ultra)**
* Or possibly dedicated GPU servers
* Maybe a hybrid architecture
* Ollama / vLLM / LM Studio style setup
* Possibly model routing for different tasks
# Questions
1. **Is Mac Studio–based infra realistic at this scale?**
* What bottlenecks should I expect? (memory bandwidth? concurrency? thermal throttling?)
* How many concurrent users can one machine realistically support?
2. **What architecture would you recommend?**
* Single large GPU node?
* Multiple smaller GPU nodes behind a load balancer?
* Kubernetes + model replicas?
* vLLM with tensor parallelism?
3. **Model choices**
* For coding: Qwen, DeepSeek-Coder, Mistral, CodeLlama variants?
* Is 32B the sweet spot?
* Is 70B realistic for interactive latency?
4. **Concurrency & Throughput**
* What’s the practical QPS per GPU for:
* 7B
* 14B
* 32B
* How do you size infra for 100 devs assuming bursty traffic?
5. **Challenges I Might Be Underestimating**
* Context window memory pressure?
* Prompt length from large repos?
* Agent loops causing runaway token usage?
* Monitoring and observability?
* Model crashes under load?
6. **Scalability**
* When scaling from 70 → 150 users:
* Do you scale vertically (bigger GPUs)?
* Or horizontally (more nodes)?
* Any war stories from running internal LLM infra at company scale?
7. **Cost vs Cloud Tradeoffs**
* At what scale does local infra become cheaper than API providers?
* Any hidden operational costs I should expect?
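On question 4, nobody can size this precisely without your workload, but the back-of-envelope arithmetic is simple. A sketch where every number is an assumption to be replaced with measurements from a pilot:

```python
# Rough capacity sizing. All inputs are illustrative assumptions,
# not measurements -- swap in numbers from your own pilot.
devs = 100
requests_per_dev_per_hour = 12   # agentic loops are bursty; this is the average
tokens_per_request = 4000        # prompt + completion; repo context is heavy
peak_factor = 3                  # peak-hour load vs. the hourly average

peak_tok_per_s = devs * requests_per_dev_per_hour * tokens_per_request / 3600 * peak_factor

node_tok_per_s = 800             # aggregate throughput of one batched-inference node
nodes_needed = -(-peak_tok_per_s // node_tok_per_s)   # ceiling division
print(f"~{peak_tok_per_s:.0f} tok/s at peak -> {nodes_needed:.0f} node(s)")
```

With these (made-up) numbers you'd need ~4000 tok/s aggregate at peak, i.e. about 5 nodes — which is why continuous batching throughput per node, not single-stream tok/s, is the figure that actually matters here.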
We want:
* Reliable
* Low-latency
* Predictable performance
* Secure (internal code stays on-prem)
Would really appreciate insights from anyone running local LLM infra for internal teams.
Thanks in advance | 2026-02-24T07:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9kpk/best_practices_for_running_local_llms_for_70150/ | Resident_Potential97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9kpk | false | null | t3_1rd9kpk | /r/LocalLLaMA/comments/1rd9kpk/best_practices_for_running_local_llms_for_70150/ | false | false | self | 22 | null |
llama-cpp-python 0.3.16 – Qwen3 Embedding GGUF fails with "invalid seq_id >= 1" when batching | 3 | I’m trying to use batched embeddings with a GGUF model and hitting a sequence error.
# Environment
* OS: Ubuntu 24.04
* GPU: RTX 4060
* llama-cpp-python: 0.3.16
* Model: Qwen3-Embedding-4B-Q5_K_M.gguf

Model loads fine and single-input embeddings work, but passing multiple strings does not:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-Embedding-4B-Q5_K_M.gguf",
    embedding=True,
)

texts = [
    "Microbiome data and heart disease",
    "Machine learning for medical prediction",
]

llm.create_embedding(texts)
```

```
init: invalid seq_id[8][0] = 1 >= 1
decode: failed to initialize batch
llama_decode: failed to decode, ret = -1
```
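For now, a fallback that sidesteps the multi-sequence batch is to embed one text at a time — this only relies on the single-input call that already works (and on the usual OpenAI-style return dict from `create_embedding`):

```python
def embed_batch(llm, texts):
    """Fallback: embed each text separately and collect the vectors,
    avoiding the multi-sequence batch that triggers the seq_id error."""
    vectors = []
    for text in texts:
        result = llm.create_embedding(text)            # single input works fine
        vectors.append(result["data"][0]["embedding"])
    return vectors
```

It loses whatever throughput batching would buy, but produces the same embeddings. Is there a server-side flag (e.g. more KV sequences at load time) that makes real batching work?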
| 2026-02-24T07:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9ixh/llamacpppython_0316_qwen3_embedding_gguf_fails/ | Life-Holiday6920 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9ixh | false | null | t3_1rd9ixh | /r/LocalLLaMA/comments/1rd9ixh/llamacpppython_0316_qwen3_embedding_gguf_fails/ | false | false | self | 3 | null |
Local TTS model | 1 | [removed] | 2026-02-24T07:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9ic9/local_tts_model/ | Cristalboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9ic9 | false | null | t3_1rd9ic9 | /r/LocalLLaMA/comments/1rd9ic9/local_tts_model/ | false | false | self | 1 | null |
ZeroClaw or should I go full IronClaw? | 0 | My main use cases are mostly managing my calendar, a GitHub issue tracker, and some kind of to-do list.
After reading many stories about OpenClaw (which, to be honest, were partly the fault of end users giving full access to their private data), I’m leaning toward ZeroClaw since it’s lightweight enough to run easily. However, I’m also interested in IronClaw because of its full container sandbox runtime.
I understand that there’s no such thing as absolute security without sacrificing other aspects. I mean, come on: I'm on Reddit, I use YouTube and Google; a 4chan user could track me down in less than a minute.
So, is ZeroClaw secure “enough”?
Of course, I plan to be diligent about securing my system:
* Install it on my spare mini PC
* Use a secondary email
* Create a GitHub account with restricted access
* No root access (Is this even possible for daily use with these Claw-like projects, or would I need to grant root access?)
I am aware of other ZeroClaw-likes such as PicoClaw and NullClaw, which IMO are mostly exercises for their authors to develop in their respective programming languages | 2026-02-24T06:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/ | Altruistic_Heat_9531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd980h | false | null | t3_1rd980h | /r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/ | false | false | self | 0 | null |
PersonaPlex-7B on Apple Silicon: full-duplex speech-to-speech in native Swift (MLX) | 8 | NVIDIA PersonaPlex is a **full-duplex speech-to-speech** model — it can **listen while it speaks**, making it better suited for natural conversations (interruptions, overlaps, backchannels) than typical “wait, then respond” voice pipelines.
I wrote up how to run it **locally on Apple Silicon** with a **native Swift + MLX Swift** implementation, including a **4-bit MLX conversion** and a small CLI/demo to try voices and system-prompt presets.
Blog: [https://blog.ivan.digital/nvidia-personaplex-7b-on-apple-silicon-full-duplex-speech-to-speech-in-native-swift-with-mlx-0aa5276f2e23](https://blog.ivan.digital/nvidia-personaplex-7b-on-apple-silicon-full-duplex-speech-to-speech-in-native-swift-with-mlx-0aa5276f2e23)
Repo: [https://github.com/ivan-digital/qwen3-asr-swift](https://github.com/ivan-digital/qwen3-asr-swift)
| 2026-02-24T06:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rd94lb/personaplex7b_on_apple_silicon_fullduplex/ | ivan_digital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd94lb | false | null | t3_1rd94lb | /r/LocalLLaMA/comments/1rd94lb/personaplex7b_on_apple_silicon_fullduplex/ | false | false | self | 8 | null |
MiMo-V2-Flash scored 9.41/10 explaining IEEE 754 edge cases, but got a factual error that only one judge caught | 0 | I ran a blind peer eval where 10 frontier models explained 6 classic numerical computing gotchas (0.1 + 0.2, 2^53 + 1 in JS, sqrt(-1) without cmath, etc.), then judged each other's responses.

**MiMo-V2-Flash** placed 8th at 9.41 with σ=0.71, which is interesting because the high variance came from an actual factual mistake: it claimed `(-1)**0.5` raises a ValueError in Python, when it actually returns a complex number. Only one judge out of nine caught it (**GPT-5.2-Codex**, the strictest at avg 8.35). The other eight judges either missed it or didn't penalize it.

**DeepSeek V3.2** placed 6th at 9.49 in 28.1 seconds, which was the slowest response time but used only 876 tokens, suggesting inference overhead rather than generation volume. **Gemini 3 Flash Preview** landed 7th at 9.43 in 13.9 seconds.

The top 3 were Claude Sonnet 4.5 (9.83, σ=0.20), Claude Opus 4.5 (9.81), and Grok 4.1 Fast (9.78). Full list: GPT-5.2-Codex 4th, Grok 3 Direct 5th, GPT-OSS-120B 9th, Gemini 3 Pro Preview 10th. The two bottom models both had truncated responses, which is what actually killed their scores.

I don't know if MiMo's factual error would have changed its ranking significantly if more judges had caught it, since the variance from that single error might be noise. The judge calibration data is also worth looking at: the spread between the strictest (8.35) and most lenient (9.93) judge was 1.58 points, and the most lenient judge (Gemini 3 Pro) failed to produce valid JSON in 7 of 10 judging attempts.

Curious what you all think about whether LLM-as-judge setups need minimum reliability thresholds for judges.
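The disputed behavior is trivial to verify: `**` with a negative base and fractional exponent returns a complex number, while it's `math.sqrt` that raises ValueError:

```python
import math
import cmath

z = (-1) ** 0.5
print(type(z).__name__, z)          # complex, numerically ~1j

try:
    math.sqrt(-1)
except ValueError as exc:
    print("math.sqrt(-1) raises:", exc)   # this is the call that errors

print(cmath.sqrt(-1))               # 1j
```

So MiMo's claim is wrong for the plain-power spelling, though it would have been right if the prompt response had used `math.sqrt` instead.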
Full data: [https://themultivac.substack.com](https://themultivac.substack.com/) | 2026-02-24T06:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8u8x/mimov2flash_scored_94110_explaining_ieee_754_edge/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8u8x | false | null | t3_1rd8u8x | /r/LocalLLaMA/comments/1rd8u8x/mimov2flash_scored_94110_explaining_ieee_754_edge/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=108&crop=smart&auto=webp&s=78b37fe5be0302f90355add92f4143e36a28f71a', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=216&crop=smart&auto=webp&s=65882969198855d0d8e5e6fd268b2caa5f55e981', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=320&crop=smart&auto=webp&s=6ce59670ffe241b8394c1623252ed290603a1caa', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=640&crop=smart&auto=webp&s=737ced47f72d25880bb4f6e00f02aea0849e8836', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?auto=webp&s=6d620ae9a06cbba1d34f9d684a051e064c5500b3', 'width': 920}, 'variants': {}}]} |
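The factual point at issue is easy to check in a vanilla Python REPL. A quick sketch confirming the behavior the post describes (exponentiation promotes to complex; it is `math.sqrt` that raises):

```python
import math
import cmath

# Exponentiation with a float power promotes a negative base to complex,
# so no exception is raised -- contrary to the model's claim.
z = (-1) ** 0.5
assert isinstance(z, complex)     # ~ (6.12e-17+1j), i.e. approximately 1j
assert abs(z - 1j) < 1e-9

# math.sqrt is the call that actually raises ValueError on negatives.
try:
    math.sqrt(-1)
    raised = False
except ValueError:
    raised = True
assert raised

# cmath handles negatives natively ("sqrt(-1) without cmath" is the gotcha).
assert abs(cmath.sqrt(-1) - 1j) < 1e-12
```

So the judged answer was wrong for `(-1)**0.5` specifically, but would have been right had it named `math.sqrt(-1)`.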
Andrej Karpathy survived the weekend with the claws | 96 | reference: [https://www.reddit.com/r/LocalLLaMA/comments/1raq23i/they\_have\_karpathy\_we\_are\_doomed/](https://www.reddit.com/r/LocalLLaMA/comments/1raq23i/they_have_karpathy_we_are_doomed/) | 2026-02-24T06:21:50 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd8nr7 | false | null | t3_1rd8nr7 | /r/LocalLLaMA/comments/1rd8nr7/andrej_karpathy_survived_the_weekend_with_the/ | false | false | 96 | {'enabled': True, 'images': [{'id': 'zi27d0r9ydlg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=108&crop=smart&auto=webp&s=222ffbc177c30a2db7f3c08d65cdc4401c917b5a', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=216&crop=smart&auto=webp&s=32d51987445c42cfdfda338d581d7d4adb752368', 'width': 216}, {'height': 278, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=320&crop=smart&auto=webp&s=87cd38a315624b38e111192dc40fd63cd8f19380', 'width': 320}, {'height': 557, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=640&crop=smart&auto=webp&s=4005111dda515a576e1b3fee84166b2abc69893d', 'width': 640}, {'height': 836, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=960&crop=smart&auto=webp&s=ea3c09a82813ffeb1d184b2c68c540b8ddc6a858', 'width': 960}, {'height': 941, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=1080&crop=smart&auto=webp&s=a15db22a3eaa7d5e8f874b023bc35654b89c545c', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?auto=webp&s=28543e30d88731115b089cb8a280a0179b551cbf', 'width': 1193}, 'variants': {}}]} | ||
Prompt Engineering is a dead end. A new philosophy to interact with AI: A Proposal for "Thinking Diagrams" | 0 | I've been researching HCI and have reached the conclusion that linear prompting as a primary interface for LLMs is a dead end, at least without AGI. So I'm proposing a new philosophy to fix this.
Thanks for reading, and sorry that my English is not very good. Here is the core of my proposal:
# The Sharpest Axe with the Weakest Handle: The Paradox of Modern AI
We are living in the age of miracles. Large Language Models (LLMs) represent a leap in technological capability so profound that we are still struggling to grasp their full implications. They possess a vast, cross-domain knowledge and a reasoning capacity that feels almost magical. We have been handed a tool of immense power, a brilliantly sharp axe capable of felling the most complex problems.
But there’s a catch. We’ve been forced to wield this powerful axe with a flimsy wooden stick.
That stick is the **prompt**.
The entire paradigm of interacting with these powerful models is based on compressing complex, multi-dimensional ideas into a flat, linear string of text. This is a fundamental structural mismatch. It forces us, the users, to bear an immense cognitive load, acting as "text composers" who must meticulously translate a rich mental model into a single block of narrative text.
This flawed interface is the source of nearly every frustration users experience. We struggle with the "prompt optimization dilemma," caught between providing too much detail and risking the model getting "lost in the middle," or providing too little and getting back generic, useless results. We treat the AI not as a brilliant partner, but as a difficult tool that must be coaxed with clever tricks and "prompt engineering".
The problem isn't the axe, it's the handle. We are artificially limiting the power of AI because our method of communication is fundamentally broken. To unlock the next level of productivity and creativity, we don't just need a better model. We need an entirely new way to interact. We need to build a better handle.
# So, What's Really Wrong with the Handle?
To understand why the prompt is such a flawed handle, we have to look at its most basic limitation: **the linear, one-dimensional nature of text itself.** We are attempting to "force" a complex logical system, which often has a multi-dimensional structure, into a flat sequence of characters. This creates a severe structural barrier.
Consider a simple case of **circular dependency**: Logic A causes B, B causes C, and C in turn causes A. How would you represent this relationship in a text prompt? You are forced to rely on clumsy conventions - punctuation, special formatting, or specific conjunctions to try and simulate these logical ties. In a string of text, this elegant circular relationship becomes a tangled mess, easily misinterpreted and stripped of its semantic integrity. This isn't a failure of the AI's "understanding". It's a failure of the medium itself.
This structural mismatch goes deeper. It imposes a significant cognitive load by forcing us to work against our natural problem-solving instincts.
Think about it: when humans face a complex problem, we don't start by writing an essay. We gravitate toward visual, abstract tools. We sketch flowcharts on a whiteboard, we build mind maps, we arrange slides in a presentation. This reveals a fundamental paradox: even when communicating a complex idea to another person, we instinctively avoid relying solely on a block of text. **So why have we accepted this inefficient method as the standard for communicating with an AI?**
Detailed text is almost always the final step - a way to document a conclusion *after* the core logical framework has been agreed upon, not the primary tool for the thinking process itself.
Forcing every interaction through a large block of text disrupts our natural flow of thought. It demands that every user, regardless of their expertise, become a "text composer," tasked with translating an entire mental model into a single narrative.
This approach fundamentally misunderstands the proper role of AI. It places the burden of creating perfect structure entirely on the user, while hoping the AI can magically interpret a messy, unstructured prompt. This isn't a collaboration, it's a flawed expectation. The goal isn't to create a machine that reads our minds.
The goal is to build a partnership where the AI helps us structure our thinking from the very beginning.
**I’ve written a full technical proposal covering:**
* The architecture of Nodes, Edges, Detailers, and Pivots.
* **ARAG (Active Reasoning and Generation):** Why we should replace speculative RAG with transparent graph compilation.
* The Roadmap toward Graph-Native AI models.
**Full article here:** [Article](https://axx83.substack.com/p/thinking-diagrams?rv=2) | 2026-02-24T06:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8l7m/prompt_engineering_is_a_dead_end_a_new_philosophy/ | Axx-83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8l7m | false | null | t3_1rd8l7m | /r/LocalLLaMA/comments/1rd8l7m/prompt_engineering_is_a_dead_end_a_new_philosophy/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=216&crop=smart&auto=webp&s=41417ffd9b7ee0490fd20c22476b68fa06e32857', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=320&crop=smart&auto=webp&s=27144091bac7bedf0ac6624863ae5a67770e9769', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=640&crop=smart&auto=webp&s=7f1932c1a7924ab2162e0e3bcd06ef04c5c1a068', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=960&crop=smart&auto=webp&s=fc3d1f0352191c75ce3c629f86c3ec798037a49a', 'width': 960}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?auto=webp&s=ee558830d78dd640b25d6cd81598568c08bccb46', 'width': 986}, 'variants': {}}]} |
Agents keep hallucinating progress — what actually works for you? | 0 | Been running Claude and GPT agents on real tasks for a few months — deployments, codegen, multi-step research. And I keep hitting the same issue nobody really talks about:
Agents confidently report "done" when nothing actually happened.
The problem really showed itself when I started building a swarm out of multiple agents via OpenClaw. Tasks get broken into subtasks, agents work in parallel, one agent's output feeds into another. That's when the chaos started:
* "I deployed the service" → deploy command failed silently, nothing is running
* "All tests pass" → tests were never executed, agent just assumed they would pass
* "File created with the requested content" → file exists but is empty
* "API endpoint is live and returning expected data" → endpoint 404s
In a swarm this snowballs fast. Agent A says "build succeeded, here's the artifact" → Agent B takes that at face value and tries to deploy a non-existent artifact → Agent B says "deployed successfully" → two layers of hallucinated progress and nothing works. The more agents in the swarm, the deeper you dig yourself in — by the time you notice the problem, three agents have built their work on a foundation of hallucinations.
What actually worked:
I flipped the model entirely. Instead of trusting the agent's words — define upfront what "done" means in machine-checkable terms, and verify the world state directly. No LLM in the verification loop.
Something like:
evidence:
  required:
    - id: deploy_done
      type: command_output
      verify: { exit_code: 0, stdout_pattern: "https://.*" }
    - id: site_live
      type: http_check
      depends_on: deploy_done
      verify: { status: 200, body_contains: "Welcome" }
A deterministic runtime makes the actual HTTP call, checks the actual response code, validates the actual body. Evidence chains — extract a URL from deploy output, then hit that URL to confirm it's live. No evidence = not done.
This basically killed the false progress problem in my workflows. But I'm curious — am I overcomplicating this? What patterns work for you?
* Anyone running swarms / multi-agent pipelines — how do you verify results?
* Has anyone made LLM-based verification actually reliable?
* Is anyone else doing deterministic post-checks, or is everyone just re-running and hoping? | 2026-02-24T06:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8dzj/agents_keep_hallucinating_progress_what_actually/ | HugoKovalsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8dzj | false | null | t3_1rd8dzj | /r/LocalLLaMA/comments/1rd8dzj/agents_keep_hallucinating_progress_what_actually/ | false | false | self | 0 | null |
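A deterministic verifier along these lines fits in a few lines of Python. The field names (`exit_code`, `stdout_pattern`) mirror the evidence config in the post; the runner itself is an illustrative sketch, not a real framework:

```python
import re
import subprocess

def verify_command(cmd, exit_code=0, stdout_pattern=".*"):
    """Run a command and check the *actual* world state: real exit code,
    real stdout. No LLM self-report is consulted anywhere."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != exit_code:
        return None  # not done: wrong exit status
    match = re.search(stdout_pattern, result.stdout)
    return match.group(0) if match else None  # extracted evidence, or not done

# Evidence chain: extract the deployed URL from the deploy step's output...
url = verify_command(["echo", "deployed at https://example.test/app"],
                     stdout_pattern=r"https://\S+")
assert url == "https://example.test/app"

# ...a second checker (an http_check that GETs `url` and requires status 200
# plus a body substring) would then consume this evidence, as in the config.
```

The key property is that "done" is a predicate over observable state, so a hallucinated success report from Agent A can never propagate to Agent B.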
Anthropic's recent distillation blog should make anyone only ever want to use local open-weight models; it's scary and dystopian | 764 | It's quite ironic that they went for the censorship and authoritarian angles here.
Full blog: [https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks](https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks) | 2026-02-24T06:07:02 | https://www.reddit.com/gallery/1rd8cfw | obvithrowaway34434 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rd8cfw | false | null | t3_1rd8cfw | /r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/ | false | false | 764 | null | |
Qwen3-Coder 30B running at 74% CPU on 3090 (ollama docker) | 13 | Running Qwen3-Coder (30.5B MoE, Q4_K_M) via Docker Ollama on a machine with a 3090 (24GB VRAM) and 32GB RAM, and inference is painfully slow. GPU is showing 23.8GB / 24GB used, but ollama ps shows 74% CPU / 26% GPU split — which seems completely backwards from what I'd expect.
Setup:
RTX 3090 (24GB VRAM)
32GB system RAM
Docker Ollama
ollama show qwen3-coder
Model
architecture qwen3moe
parameters 30.5B
context length 262144
embedding length 2048
quantization Q4_K_M
nvidia-smi during inference: 23817MiB / 24576MiB
ollama ps
NAME ID SIZE PROCESSOR CONTEXT UNTIL
qwen3-coder:latest 06c1097efce0 22 GB 74%/26% CPU/GPU 32768 | 2026-02-24T06:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rd89m5/qwen3coder_30b_running_at_74_cpu_on_3090_ollama/ | minefew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd89m5 | false | null | t3_1rd89m5 | /r/LocalLLaMA/comments/1rd89m5/qwen3coder_30b_running_at_74_cpu_on_3090_ollama/ | false | false | self | 13 | null |
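One common cause of this split is the KV cache: Ollama reserves VRAM for it at load time, and with a large context window the cache competes with the 22 GB of weights, pushing layers to the CPU. A back-of-envelope sizing sketch; the per-layer numbers here are assumptions for illustration (a typical GQA config for a ~30B MoE), not values read from the actual model card:

```python
# Rough KV-cache sizing: 2 tensors (K and V) x layers x KV heads x head dim
# x context length x bytes per element (fp16 = 2).
def kv_cache_bytes(ctx, n_layers=48, n_kv_heads=4, head_dim=128, bytes_per=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per

GiB = 1024 ** 3
print(kv_cache_bytes(262_144) / GiB)  # 24.0 -> the full 262k window alone would fill a 3090
print(kv_cache_bytes(32_768) / GiB)   # 3.0  -> the 32k window ollama ps reports is ~3 GiB
```

If the real numbers are in this ballpark, weights plus cache plus CUDA overhead simply don't fit in 24 GB, and explicitly lowering the context (e.g. `num_ctx` in a Modelfile) is the usual way to pull layers back onto the GPU.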
Anyone using DeepSeek proxy? Sharing my cheap setup (Hong Kong based) | 0 | Hey r/LocalLLaMA,
I’ve been running DeepSeek-V3/R1 through a Hong Kong proxy for a while now – latency is low, no rate limits, and costs are like 1/5 of OpenAI.
For example: I get \~100k tokens for under $5 (that’s roughly 10k words of code or chat). Speed feels snappier than Groq sometimes.
Anyone else doing this? What’s your go-to proxy or setup? Happy to share my config if it helps – just DM if you’re curious (no spam, promise).
What do you think – worth it for local devs?
\#DeepSeek #AI #LLM | 2026-02-24T06:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rd88lq/anyone_using_deepseek_proxy_sharing_my_cheap/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd88lq | false | null | t3_1rd88lq | /r/LocalLLaMA/comments/1rd88lq/anyone_using_deepseek_proxy_sharing_my_cheap/ | false | false | self | 0 | null |
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:57:39 | https://axx83.substack.com/p/thinking-diagrams | axx_83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd85y5 | false | null | t3_1rd85y5 | /r/LocalLLaMA/comments/1rd85y5/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=216&crop=smart&auto=webp&s=41417ffd9b7ee0490fd20c22476b68fa06e32857', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=320&crop=smart&auto=webp&s=27144091bac7bedf0ac6624863ae5a67770e9769', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=640&crop=smart&auto=webp&s=7f1932c1a7924ab2162e0e3bcd06ef04c5c1a068', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=960&crop=smart&auto=webp&s=fc3d1f0352191c75ce3c629f86c3ec798037a49a', 'width': 960}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?auto=webp&s=ee558830d78dd640b25d6cd81598568c08bccb46', 'width': 986}, 'variants': {}}]} | |
Did anyone know that you can do this in any IDE? | 0 | I created a script that changes a chat session's identity. I pasted it into one IDE session, and that session rewrote its internal prompt and took on the identity "Agent L1". Then I pasted the same script into a session in another IDE on my other laptop, and it became "Agent L2". Both agents now recognize that they are working on the same project and communicate with each other through the terminal. It's insane: you don't need OpenClaw or big-tech tooling like Devin or LangChain, it's just two .sh files on your laptop… | 2026-02-24T05:56:21 | https://www.reddit.com/gallery/1rd853t | Devswat | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rd853t | false | null | t3_1rd853t | /r/LocalLLaMA/comments/1rd853t/did_some_one_know_about_that_u_can_do_this_in_any/ | true | false | spoiler | 0 | null |
I just saw something amazing | 317 | https://www.asus.com/displays-desktops/workstations/performance/expertcenter-pro-et900n-g3/
https://www.azken.com/Workstations/nvidia-series/Asus-ExpertCenter-Pro-ET900N-G3?utm\_source=chatgpt.com | 2026-02-24T05:49:17 | ayanami0011 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd80gx | false | null | t3_1rd80gx | /r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/ | false | false | 317 | {'enabled': True, 'images': [{'id': 'rr17jgdksdlg1', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=108&crop=smart&auto=webp&s=2eecf06487ed97f29697d13f1320c7ecedc3ca21', 'width': 108}, {'height': 392, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=216&crop=smart&auto=webp&s=3d14043b0de8a10133fc9069bd6aade4b3023cd6', 'width': 216}, {'height': 581, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=320&crop=smart&auto=webp&s=afa37be90c27c55495c649d799952a7e96e2d415', 'width': 320}, {'height': 1162, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=640&crop=smart&auto=webp&s=8c7fff37ff972da0293a348d64378188d1acef13', 'width': 640}, {'height': 1743, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=960&crop=smart&auto=webp&s=c291465b9461272f572530ae5a69fb1ee46cb4f5', 'width': 960}, {'height': 1961, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=1080&crop=smart&auto=webp&s=32bb4ef3fdc260e99574463bea2d4e40c90adb37', 'width': 1080}], 'source': {'height': 2141, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?auto=webp&s=4973fdcfb512bcc69ff5d508ae286839caada5d3', 'width': 1179}, 'variants': {}}]} | ||
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:49:06 | https://axx83.substack.com/p/thinking-diagrams | axx_83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd80d7 | false | null | t3_1rd80d7 | /r/LocalLLaMA/comments/1rd80d7/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=216&crop=smart&auto=webp&s=41417ffd9b7ee0490fd20c22476b68fa06e32857', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=320&crop=smart&auto=webp&s=27144091bac7bedf0ac6624863ae5a67770e9769', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=640&crop=smart&auto=webp&s=7f1932c1a7924ab2162e0e3bcd06ef04c5c1a068', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=960&crop=smart&auto=webp&s=fc3d1f0352191c75ce3c629f86c3ec798037a49a', 'width': 960}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?auto=webp&s=ee558830d78dd640b25d6cd81598568c08bccb46', 'width': 986}, 'variants': {}}]} | |
Built an open-source Ollama/MLX/OpenAI benchmark and leaderboard site with in-app submissions. Trying to test and collect more data. | 2 | 2026-02-24T05:42:55 | peppaz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7wbh | false | null | t3_1rd7wbh | /r/LocalLLaMA/comments/1rd7wbh/built_an_opensource_ollamamlxopenai_benchmark_and/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'tcn61r39rdlg1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=108&crop=smart&auto=webp&s=aab63fde6faa8135b47efae915ddb4f375c5e2eb', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=216&crop=smart&auto=webp&s=259344f19dd5b91f4a2a2269b66a1e1ca699d17e', 'width': 216}, {'height': 401, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=320&crop=smart&auto=webp&s=0b9ca3812ff18cdab5b10404f7a8024fea10ab00', 'width': 320}, {'height': 802, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=640&crop=smart&auto=webp&s=386c5ea0626fa055a8b52140758687ce7f84b8c9', 'width': 640}, {'height': 1203, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=960&crop=smart&auto=webp&s=5b42d79a06a15113d6cb71589223a0f96aa87cf4', 'width': 960}], 'source': {'height': 1291, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?auto=webp&s=cde9dd434bf77b9a265e4dfd092109cf8c819aa9', 'width': 1030}, 'variants': {}}]} | |||
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:41:12 | https://axx83.substack.com/p/thinking-diagrams | axx83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd7v7c | false | null | t3_1rd7v7c | /r/LocalLLaMA/comments/1rd7v7c/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=216&crop=smart&auto=webp&s=41417ffd9b7ee0490fd20c22476b68fa06e32857', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=320&crop=smart&auto=webp&s=27144091bac7bedf0ac6624863ae5a67770e9769', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=640&crop=smart&auto=webp&s=7f1932c1a7924ab2162e0e3bcd06ef04c5c1a068', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=960&crop=smart&auto=webp&s=fc3d1f0352191c75ce3c629f86c3ec798037a49a', 'width': 960}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?auto=webp&s=ee558830d78dd640b25d6cd81598568c08bccb46', 'width': 986}, 'variants': {}}]} | |
Models to run on an iphone 14 pro | 1 | Hey everyone, not a native speaker (Dutch), I write my own posts without LLMs. Please correct me if I make mistakes, only way to learn!
I was gifted an iphone 14 pro, which has a little less than 6 GB available for use, realistically 4GB.
Since I am planning to go to Japan, I thought having some offline SLMs available to me might be useful in a pinch.
For inference I am using pocketpal from the app store ([link](https://apps.apple.com/nl/app/pocketpal-ai/id6502579498)) and it has a github repo ([link](https://github.com/a-ghorbani/pocketpal-ai)).
My goal here is to build up a small collection of LLMs, each good at their own task:
* An offline translation / dictionary model
* A vision model (with good text extraction if possible)
* A dry office task model (summarize, extract text, find spelling mistakes, etc.)
* A general knowledge model (What is proper etiquette when in Japan? kind of questions)
* An RP model for on the go (super generic is fine, like goblin hunting for an adventurers' guild or whatever generic high-fantasy theme)
I've tested the following models:
* LFM 2 VL 3B ([link](https://huggingface.co/LiquidAI/LFM2-VL-3B-GGUF) , q4\_k\_m, q8 mmproj): A little slow, but it's wonderful that vision works. Will outright refuse some tasks.
* Gemma 4B ([link](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf), q4\_0 qat): Crashes when loading with vision encoder. Pocketpal doesn't support full SWA so context is severely limited. Sadly 1B doesn't have vision support. Knows basics about cultures, but fails at geography
* Ministral 3 3B Instruct / Reasoning ([link](https://huggingface.co/mistralai/Ministral-3-3B-Reasoning-2512-GGUF), iq4\_xs, q8 mmproj): The instruct model worked better. Vision encoder works nicely, but taking a picture with the model loaded crashes the app. Rivals Gemma 3 in world knowledge.
* HY-MT1.5-1.8B ([link](https://huggingface.co/tencent/HY-MT1.5-1.8B-GGUF), q8): Needs a good system prompt, but works wonders as offline translator in a pinch. It's even better when you use another vision model to first extract the text from an image, and let this model translate the extracted text.
* Granite 4.0 H 1B ([link](https://huggingface.co/ibm-granite/granite-4.0-h-1b-GGUF), q8): Does what it says on the tin, works good enough for the tasks mentioned in the model card.
* Nano Imp 1B ([link](https://huggingface.co/mradermacher/Nano_Imp_1B-GGUF), q8): You won't be slaying goblins with this one, but for dumb discord-style texting RPs it passes.
And might try:
* Qwen 3 VL 2B ([link](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct-GGUF)): Heard many good things about qwen 3, and hope it will be good enough with such a small amount of parameters.
* LFM 2.5 VL 1.6B ([link](https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B-GGUF)): Users here said that it rivals the LFM 2 VL 3B I was using, hope it to be true for the vision part!
What didn't work so far:
* Gemma 3 4B, despite its good world knowledge, feels too small for real usage. Downloading a copy of Wikipedia or Wikivoyage as a ZIM file for offline reading seems like a better plan.
* Don't think pocketpal supports websearch (correct me if I am wrong!) but would probably be impractical; 8k context seems already a big ask
* Since context isn't a sliding window, once the chat history is filled up it stops responding. Pretty painful for roleplay and general usage alike. I hope there is a setting for this.
Having said all of that, I do have some questions:
* Which other inference apps are out there that I should try? I don't mind paying once, as long as it doesn't have ads or in app purchases for credits or whatnot.
* Any model recommendations for any of the categories listed above? (Especially for world knowledge!)
* Any other tips or tricks or recommendations?
Thank you for reading! | 2026-02-24T05:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rd7rqu/models_to_run_on_an_iphone_14_pro/ | Kahvana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd7rqu | false | null | t3_1rd7rqu | /r/LocalLLaMA/comments/1rd7rqu/models_to_run_on_an_iphone_14_pro/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=108&crop=smart&auto=webp&s=db2edd734ffc46f4a43256100128271e42ea4b07', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=216&crop=smart&auto=webp&s=0ab196a73497fee31da332989b5aa5b566eb5617', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=320&crop=smart&auto=webp&s=ecbea1f747ebd099bcc287961259ce7b205a2006', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=640&crop=smart&auto=webp&s=50e83ac27de847a2e121a82024e9150c9ed8f8d4', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=960&crop=smart&auto=webp&s=23f0164eb1a6fe9ec862569ff34225e017c48b7a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=1080&crop=smart&auto=webp&s=f2193ffaaad609b1e0a736626caae75b0fc3555e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?auto=webp&s=64223b81ac8431bf4bac3998cf9f9503a189f9d7', 'width': 1200}, 'variants': {}}]} |
LLM vs LLM harness | 11 | We have the capable distillations - let's continue to build out the harnesses | 2026-02-24T05:25:29 | chitown160 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7kw7 | false | null | t3_1rd7kw7 | /r/LocalLLaMA/comments/1rd7kw7/llm_vs_llm_harness/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'umedeqhzndlg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=108&crop=smart&auto=webp&s=c6f1973fe354e7e07bcc5c6294531b8b441eaaf3', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=216&crop=smart&auto=webp&s=695d5c2f30082416a30a16e4e6ec7a5a957afe84', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=320&crop=smart&auto=webp&s=05e032546c405ea9d81126b4a3356f0d63c836dd', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=640&crop=smart&auto=webp&s=aa6e250fe08179dba92d9ff97f79b9bd1973be08', 'width': 640}, {'height': 620, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=960&crop=smart&auto=webp&s=5aa76f9ec598334f718234639752bf255dcf5638', 'width': 960}, {'height': 698, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=1080&crop=smart&auto=webp&s=ac9a576ba9ce47daa99c5c5a1b22ecdc14cd7380', 'width': 1080}], 'source': {'height': 790, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?auto=webp&s=e58c5d4d54dd517910fcb1b3f44139d0e1489bec', 'width': 1222}, 'variants': {}}]} | ||
Coding agent for edge devices | 1 | Hi, I often have to work directly on edge devices like old Raspberry Pis and similar boards running Armbian.
I tried to install opencode / kilocode and a few others, like Mistral Vibe. All of them turn out to be really heavy for such limited compute and RAM (often 1 GB).
Can you suggest a really lightweight coding agent that needs nothing more than the ability to send requests to the API provider? | 2026-02-24T05:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rd7eme/coding_agent_for_edge_devices/ | cri10095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd7eme | false | null | t3_1rd7eme | /r/LocalLLaMA/comments/1rd7eme/coding_agent_for_edge_devices/ | false | false | self | 1 | null |
OpenClaw has started appearing in job descriptions! | 1 | 2026-02-24T05:13:53 | moaijobs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7b77 | false | null | t3_1rd7b77 | /r/LocalLLaMA/comments/1rd7b77/openclaw_has_started_appearing_in_job_descriptions/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0tcwowg7mdlg1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=108&crop=smart&auto=webp&s=b5a3850c3350731e8d27a8e4dc0e9c63f12aff16', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=216&crop=smart&auto=webp&s=852a9e8c0fa0dc5b6e33565d240bdced6b2db4f3', 'width': 216}, {'height': 314, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=320&crop=smart&auto=webp&s=72cb71fc0455e7f8632fd160f44e6da91ee68ae2', 'width': 320}, {'height': 628, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=640&crop=smart&auto=webp&s=9804265761b149a96782f04a913ccf66b9518cc0', 'width': 640}, {'height': 942, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=960&crop=smart&auto=webp&s=3c707acf7050cd4d89ac2d99ae03a0b05f841771', 'width': 960}, {'height': 1060, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=1080&crop=smart&auto=webp&s=d5d7d3c896b324268d1e9a1614823399f14a1c2e', 'width': 1080}], 'source': {'height': 1736, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?auto=webp&s=75a932a9b250215a54c11fae62f890d7cf80bc00', 'width': 1768}, 'variants': {}}]} | |||
Proof-of-Personhood AI Protocol | 1 |
The Proof-of-Personhood AI Protocol
1. Core Concept
This protocol outlines a decentralized network for building, training, and running an artificial intelligence. Unlike traditional AI, which relies on centralized corporate servers and monopolized computing power, this system is driven entirely by verified human time. Participants use their personal computers to process data and train the AI, earning credits strictly for the clock hours they contribute. These credits hold no financial value; they are used exclusively to purchase priority access to the finished AI.
2. Human Identity (One Person, One Account)
To prevent the network from being manipulated by automated bots or users creating multiple accounts to hoard credits, participation is anchored to strict human verification.
The Permanent ID: Upon verifying their humanity, a user is issued a permanent, non-transferable digital identity badge (a "Soulbound Token").
No Trading: This identity cannot be sold or transferred. It ensures that one human only ever has one account and one balance of time credits.
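For what it's worth, the one-person-one-account rule is easy to prototype. Here is a minimal Python sketch, assuming some external humanity check has already produced a stable `person_hash` (all names here are illustrative, nothing from the linked repo):

```python
class IdentityRegistry:
    """One verified human -> one permanent, non-transferable identity."""

    def __init__(self):
        self._by_person = {}  # person_hash -> soulbound token

    def issue(self, person_hash):
        # Invariant 1: a human who already holds an identity gets no second one.
        if person_hash in self._by_person:
            raise ValueError("this human already holds an identity")
        token = f"sbt-{len(self._by_person) + 1}"
        self._by_person[person_hash] = token
        return token

    def transfer(self, token, new_owner):
        # Invariant 2: soulbound tokens can never change hands.
        raise PermissionError("soulbound tokens cannot be transferred")
```

A real implementation would live on-chain as a soulbound token; this just shows the two invariants the section describes: no second issuance, no transfer.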
3. Earning Time Credits (The Data Studio)
Users earn time credits by preparing the massive amounts of text the AI needs to learn. To ensure anyone can participate while still achieving massive scale, all data preparation happens in a unified Data Studio with two merging tracks:
Track A: Manual Curation: Non-technical users can manually type, paste, or write perfectly clean text (such as original summaries, creative writing, or step-by-step tutorials) directly into the interface.
Track B: Template-Driven Bulk Processing: Technical users and data engineers can achieve massive scale by using preexisting templates or writing custom Python and R scripts. For example, a user can connect to a Snowflake data warehouse, extract raw text, and use templates to automatically clean out website code and format it for the AI.
Universal Reward: All verified work is rewarded equally. Whether a user spends an hour writing an original summary or an hour adjusting bulk-processing scripts, the protocol awards the exact same amount of time credits.
4. Distributed Training and Consensus
The AI is not trained in a central data center. It is built piece-by-piece on the computers of the network's users.
Local Processing: When a user claims a training task, their computer downloads a tiny piece of the AI and the specific data for that task. The user's computer processes the math locally and sends only the completed mathematical updates back to the network.
Proving the Work: To prove the user actually did the work, a randomly chosen fraction of tasks is secretly duplicated and sent to a second user. If both users submit the exact same mathematical result, the protocol confirms the work is legitimate and automatically pays out the time credits to both.
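The pay-both-on-match rule can be sketched in a few lines of Python (hypothetical helper names; note that real gradient math has to be made bit-deterministic, with fixed seeds and a fixed op order, before two machines can agree exactly):

```python
import hashlib

def canonical_hash(update):
    """Hash a list of gradient values deterministically so two workers'
    results can be compared without shipping the full update around."""
    # Round to fixed precision so tiny float noise doesn't break agreement.
    canon = ",".join(f"{v:.8f}" for v in update)
    return hashlib.sha256(canon.encode()).hexdigest()

def verify_task(primary_update, auditor_update, hours_worked):
    """Credit both workers only if their results match exactly."""
    if canonical_hash(primary_update) == canonical_hash(auditor_update):
        return {"primary": hours_worked, "auditor": hours_worked}
    return {"primary": 0, "auditor": 0}  # mismatch: no payout, task re-queued
```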
5. The Hardware Floor
To ensure the network is fast enough to actually function, it establishes a minimum hardware requirement, but it refuses to reward expensive supercomputers.
The Accessible Baseline: The hardware floor is pegged to the performance of a standard $500 personal computer (e.g., 16GB of RAM and an integrated neural processor). Computers that do not meet this floor cannot join the active processing swarm.
The Flat Reward Curve: Once a device passes this $500 baseline, the reward curve is completely flat. A user running a massive multi-GPU server farm earns the exact same 1-to-1 time credit per hour as a user running a $500 mini-PC.
Community Governance: As technology naturally gets cheaper, the community votes to raise the baseline specs of the floor without ever raising the $500 accessibility barrier.
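The flat reward curve is basically one line. A sketch, with `device_score` as some normalized benchmark score where 1.0 is the $500 baseline (my own framing, not from the protocol doc):

```python
def hourly_credits(device_score, baseline_score=1.0):
    """Flat reward curve: any device at or above the baseline earns
    exactly 1 time credit per hour; below the floor it earns nothing."""
    return 1 if device_score >= baseline_score else 0
```

The point of the step function is that a multi-GPU rig scoring 50x the baseline still earns the same single credit per hour as a $500 mini-PC.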
6. Running the AI (The Swarm Network)
When the AI is finished, it is hosted directly on the network of user devices. Because standard internet connections are slower than centralized corporate fiber optics, the network changes how it routes information:
Asynchronous Learning: Training data is broken into millions of tiny, independent packets so users can process them at their own pace without bottlenecking the system.
Geographic Routing: When a user asks the finished AI a question, the network routes that prompt to the thousands of active computers physically closest to them, minimizing internet delay.
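Geographic routing as described boils down to a nearest-neighbor pick over the active devices. A rough Python sketch using great-circle distance (the node structure is invented for illustration):

```python
import math

def nearest_nodes(user, nodes, k=3):
    """Pick the k active devices physically closest to the user,
    by great-circle (haversine) distance over latitude/longitude."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # distance in km
    return sorted(nodes, key=lambda n: haversine(user, n["pos"]))[:k]
```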
7. The Economy of Priority
The time credits earned by the users are strictly utility tokens to manage the network's processing limits.
No Financial Value: The credits cannot be traded on crypto exchanges or sold for cash.
Skipping the Line: When millions of people try to use the AI at once, a line forms. Users spend their earned time credits to jump to the front of this processing queue. The "swarm" of devices will prioritize answering their prompts instantly, while a user with zero credits must wait for network capacity to free up.
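The credit-based queue jumping maps naturally onto a priority queue. A sketch, assuming credits are burned at submit time (class and method names are mine, not from the repo):

```python
import heapq
import itertools

class PriorityQueueByCredits:
    """Prompts that burn more time credits are served first;
    zero-credit prompts wait at the back in arrival order."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order

    def submit(self, prompt, credits_spent=0):
        # heapq is a min-heap, so negate credits to serve big spenders first.
        heapq.heappush(self._heap, (-credits_spent, next(self._seq), prompt))

    def next_prompt(self):
        return heapq.heappop(self._heap)[2]
```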
[https://github.com/dcs02d/AIStuff/blob/main/AIStuff](https://github.com/dcs02d/AIStuff/blob/main/AIStuff) | 2026-02-24T05:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rd77gd/proofofpersonhood_ai_protocol/ | Last_Cockroach7651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd77gd | false | null | t3_1rd77gd | /r/LocalLLaMA/comments/1rd77gd/proofofpersonhood_ai_protocol/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=108&crop=smart&auto=webp&s=020ce8a3935a770b4493b110ea2e9d8258777821', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=216&crop=smart&auto=webp&s=df5aa47eeee53cc81aae40e9d88687b9b5c26a02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=320&crop=smart&auto=webp&s=edda81faff5b5a0246562fd01db54d2b8d68d41e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=640&crop=smart&auto=webp&s=6a987f352744f7fd3f6cd566604d4cc646c63d95', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=960&crop=smart&auto=webp&s=1417f196f18efdac8cccc6c7a0abd41c7456e788', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=1080&crop=smart&auto=webp&s=1e19ad7dffc52f401ffc08bbe721869258ba43a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?auto=webp&s=c37f3c1a319e9cab95278e3e480849ac59b419e8', 'width': 1200}, 'variants': {}}]} |
An Update to my Persistent-AI-Memory system | 0 | Hello Everyone,
I'm not sure how many of you remember the memory system I published on GitHub as Persistent-AI-Memory. Well, I just made a major update to it.
Now it's much more sophisticated. It has a short-term memory system, which is primarily a function for OpenWebUI but has been modified to be standalone if you want. I just haven't worked out how everyone wants to connect it to other systems, so I figured I'd try to make it work standalone from OpenWebUI, while also keeping it usable as a function in OpenWebUI. Feel free to tinker with it.
This short-term memory system also has ties to the main long-term memory system for promotion of short-term memories to long-term memories, which are searchable by the included MCP server.
The short-term memory system is meant to feed your LLM with memories from its memory base, which are embedded and can be semantically searched and fed to the LLM. Again, I tried to make it less dependent on OpenWebUI while keeping its functionality.
The system requires you to use an embeddings model, either the default in your main LLM runner or a model you specify. You can also have a separate LLM do the deciding, or use your chat model in the background with separate calls so there is no context bleed.
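For anyone curious what the semantic search part looks like under the hood, the usual pattern is: embed the query, rank stored memories by cosine similarity, feed the top k into context. This is a generic sketch, not the actual Persistent-AI-Memory code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_memories(query_vec, memories, k=3):
    """memories: list of (text, embedding) pairs; return the k most
    semantically similar texts to feed into the LLM's context."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```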
There is also a ranking system for memories, a tags system, and I think support for a background LLM to work the long-term system, though I'm not sure if that got implemented. There are about 3 other people working on this with me, and there hasn't been much occasion to communicate. But since I daily drive the system on my own machine, it should be in a version 1.1.0 state now. So I introduce version 1 of Persistent-AI-Memory.
The license is MIT, so it is open to be fiddled with and modified for your own system. I know it could use some tweaks, and honestly, I'd love for you guys to give your input on where it could be better, or what you like. I'm totally up for any and all criticism so long as it's helpful and not just criticizing because you hate LLMs. There is a lot of that going around on this sub lately, and it's pathetic that people can't get their own lives and do something productive.
My memory system is the best I can do right now, but I have further plans. If you would like to contribute, send me a DM, and your contributions WILL be noted in the documentation and appreciated. Otherwise, enjoy to your heart's content.
Sincerely,
Savantskie | 2026-02-24T05:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rd6zw0/an_update_to_my_memory_system_persistentaimemory/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd6zw0 | false | null | t3_1rd6zw0 | /r/LocalLLaMA/comments/1rd6zw0/an_update_to_my_memory_system_persistentaimemory/ | false | false | self | 0 | null |
Demo: World's first embeddable web agent that websites can drop in with a single script tag | 1 | We just shipped something we're really excited about: **Rover,** an embeddable web agent that any website can integrate with a single `<script>` tag that can type/click/select to onboard/form fill/convert users. Think of it like Stripe for AI agents. Instead of users leaving your site to go use an AI tool, the agent lives *inside* your product and can actually interact with your DOM natively.
**Why this matters:**
* Most web agents today work by taking screenshots and clicking pixels. We go DOM-native, directly reading and manipulating the page structure, which is what makes our agent uniquely embeddable via a script tag. This is also why we rank #1 on WebBench (81.39% success rate).
* One script tag integration. No SDK, no complex setup.
* The agent understands the actual page context (forms, navigation, dynamic content), not just what pixels look like.
Happy to answer any technical questions about the architecture, how DOM-native differs from screenshot-based agents, or anything else! | 2026-02-24T04:59:03 | https://v.redd.it/1fipmzetidlg1 | BodybuilderLost328 | /r/LocalLLaMA/comments/1rd6rb2/demo_worlds_first_embeddable_web_agent_that/ | 1970-01-01T00:00:00 | 0 | {} | 1rd6rb2 | false | null | t3_1rd6rb2 | /r/LocalLLaMA/comments/1rd6rb2/demo_worlds_first_embeddable_web_agent_that/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=108&crop=smart&format=pjpg&auto=webp&s=a5f03e1676acf0181d73210a6a8c99e042b4ed0a', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=216&crop=smart&format=pjpg&auto=webp&s=e911a26c737765ab4b5c0dbb65b4f27d6dc9f5be', 'width': 216}, {'height': 206, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=320&crop=smart&format=pjpg&auto=webp&s=dc7a4c3086c565a22f934d533beb99a73532f073', 'width': 320}, {'height': 412, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=640&crop=smart&format=pjpg&auto=webp&s=512193dc91aab38fa36e21248b8db69be01a78c1', 'width': 640}, {'height': 619, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=960&crop=smart&format=pjpg&auto=webp&s=2ec25d2fe238e95fa0b0899d51c7f5bb81b889cb', 'width': 960}, {'height': 696, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=92d28d349f10713213b545f184bc8bcbc8db42d1', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?format=pjpg&auto=webp&s=979033a8656fc7a9f1f5c68e02935ea01feffe8a', 'width': 2232}, 'variants': {}}]} |