title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GLM 5 Support Is On It's Way For Transformers | 137 | This probably means the model launch is imminent, and all evidence points to Pony Alpha on OpenRouter being a stealth deployment of GLM 5 | 2026-02-09T12:16:36 | https://github.com/huggingface/transformers/pull/43858 | Few_Painter_5588 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r02o7o | false | null | t3_1r02o7o | /r/LocalLLaMA/comments/1r02o7o/glm_5_support_is_on_its_way_for_transformers/ | false | false | 137 | {'enabled': False, 'images': [{'id': '_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=108&crop=smart&auto=webp&s=954e3b064b50ac4a14c7d4d62d4469f350086693', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=216&crop=smart&auto=webp&s=50f6c87aa537b3be8563f14bc42550c1e19224f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=320&crop=smart&auto=webp&s=219e0b137c2d43371ebd8f6256ae92d3614b4f2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=640&crop=smart&auto=webp&s=810b321415879975e3408c463a34398fefd38bf5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=960&crop=smart&auto=webp&s=f31d7be56ea67b4141998dd7b252beed25622302', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?width=1080&crop=smart&auto=webp&s=56f570cd96267fd5034bb35e670e0d7f7d25aa34', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_RA8pRu79eov51fP28AH3ibXc2RY_CG7SQQVryJy9WU.png?auto=webp&s=a24e9503c820dce0f54611a450113955ed949a95', 'width': 1200}, 'variants': {}}]} | |
Paper to Notebook | 0 | Whenever a new research paper is published, even if it's open-source, it takes a long time to understand the paper and to follow the working implementation, and even longer time to replicate the working implementation.
What if you can just upload the paper to a tool and you get a high-quality, hallucination-free Google Colab notebook within 10 minutes?
Here is an awesome open source tool:
Try it here: [https://paper-to-notebook-production.up.railway.app/](https://paper-to-notebook-production.up.railway.app/)
Github repository is here: [https://github.com/VizuaraAI/paper-to-notebook](https://github.com/VizuaraAI/paper-to-notebook)
Please provide feedback so that it can be improved further!
| 2026-02-09T12:15:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r02n5v/paper_to_notebook/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r02n5v | false | null | t3_1r02n5v | /r/LocalLLaMA/comments/1r02n5v/paper_to_notebook/ | false | false | self | 0 | null |
Built an open-source security framework for AI agents — 41 adversarial attack vectors, sub-20ms governance, MIT licensed | 1 | [removed] | 2026-02-09T12:11:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r02kz3/built_an_opensource_security_framework_for_ai/ | No-Being-4354 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r02kz3 | false | null | t3_1r02kz3 | /r/LocalLLaMA/comments/1r02kz3/built_an_opensource_security_framework_for_ai/ | false | false | spoiler | 1 | null |
Agent that "watches" you browse, distills the logic via LLM, and survives UI changes. | 3 | I've been building scrapers and automation scripts for years, and I'm tired of the "cat and mouse" game. Every time the website updates its CSS or changes a `div` ID, my script breaks.
Standard RPA records coordinates (brittle). Standard Agents (AutoGPT style) are too expensive/slow to reason from scratch every step.
So I built **Exogram**.
**The Concept: "Procedural Memory" for Agents**
Instead of hard-coding steps, Exogram works in 3 phases:
1. **Teach (The Spy):** It records your workflow (e.g., clicking through a messy ERP system). It doesn't just record coordinates; it captures the **DOM context** and **semantic intent** of what you clicked.
2. **Distill (The Alchemy):** It uses an LLM (Claude 3.5 / GPT-4o) to "distill" the raw logs into a **heuristic rule** (SOP).
* *Raw Log:* `Click #btn-402`
* *Distilled Rule:* "Find the primary action button labeled 'Export', usually located in the top-right container. Ignore popups with 'Subscribe' text."
3. **Run (The Agent):** The agent executes using this "distilled memory". **I tested this by changing the button color and ID locally, and the agent still found it based on the semantic rule.**
**Tech Stack:**
* **Eye:** `workflow-use` (for recording DOM events)
* **Hand:** `browser-use` (Playwright wrapper)
* **Brain:** LangChain + Your LLM of choice (DeepSeek-V3 works great for the distillation part to save costs).
**Why I made this:** I wanted a middle ground between "dumb" Selenium scripts and "expensive" autonomous agents. This is an attempt to give agents "muscle memory."
**Repo:** \[[https://github.com/qingshanyuluo/exogram](https://github.com/qingshanyuluo/exogram)\] **Demo:** \[[https://github.com/user-attachments/assets/07af1f77-4344-4916-adfe-984a3626d105](https://github.com/user-attachments/assets/07af1f77-4344-4916-adfe-984a3626d105)\]
It's still an MVP (v0.1), but I'd love to hear if this approach makes sense to you guys. Roast my code or star it if you like the idea. | 2026-02-09T12:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r02ka4/agent_that_watches_you_browse_distills_the_logic/ | Ok_Owl_1414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r02ka4 | false | null | t3_1r02ka4 | /r/LocalLLaMA/comments/1r02ka4/agent_that_watches_you_browse_distills_the_logic/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=108&crop=smart&auto=webp&s=a6d860b9421940a817433baafca542ebab96ca8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=216&crop=smart&auto=webp&s=d94ce6baed41e407c1603cd17a006faa3cf8a85b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=320&crop=smart&auto=webp&s=b0e53eba7dae074db1adfa7b253e1f0ca9079874', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=640&crop=smart&auto=webp&s=1cbba92dfc1e88902643bb47bd3b3cb889a1d7b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=960&crop=smart&auto=webp&s=0edca01880f4536aa7596788227b5ade59cf54b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?width=1080&crop=smart&auto=webp&s=92b64a73f99969d336c9ba22ead8fc5bd9e94ba3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SMHMjOD1FZ2dbMSjjPVpziGOAoYGxuPR1FNVXg72_8k.png?auto=webp&s=d5928d735c5c2347edeb5497cb9cf33b632fa349', 'width': 1200}, 'variants': {}}]} |
Ported from-scratch Inference Engine based on LFM2-350M to pure C! | 14 | Previously implemented Batched Inference Engine built from first principles with focus on correctness, not optimizations. Achieved single batch CPU speeds of 50 tokens/second on M2-Pro 16 GB CPU, but only 4 tokens/second on my old Intel Core i5 laptop.
Previous post link: [https://www.reddit.com/r/LocalLLaMA/comments/1qb4ydw/batched\_inference\_engine\_with\_lfms\_dense\_model/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1qb4ydw/batched_inference_engine_with_lfms_dense_model/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
The old laptop speeds disappointed me, hence reimplementing the single-batch inference part in pure C, achieving 3x speedups (from 4 tokens/second to 12 tokens/second) with no other optimizations than hybrid caching and CBLAS GEMM APIs for Intel (OneMKL). Again, building from first principles, used bin files and not gguf files and no other optimizations used!
GitHub Link: [https://github.com/marvinmboya/LFMs-Continuous-Batching-in-C](https://github.com/marvinmboya/LFMs-Continuous-Batching-in-C)
Big Thanks to:
Kay Lack's "Just enough C to have fun!" , [https://www.youtube.com/watch?v=5aZiRjgSGQU](https://www.youtube.com/watch?v=5aZiRjgSGQU) . An awesome C getting started video!
Jacob Sorber's C programming videos, [https://www.youtube.com/@JacobSorber](https://www.youtube.com/@JacobSorber) . Used to remind myself of C tooling and capabilities.
[Yale University](https://www.linkedin.com/company/yale-university/)’s DSA in C notes, [https://cs.yale.edu/homes/aspnes/classes/223/notes.html](https://cs.yale.edu/homes/aspnes/classes/223/notes.html) . A solid reference guide!
Also adopted RoPE implementation from antirez's C repo on Flux.2-Klein, with minor tweaks!
This project was not initially planned, just birthed out of disappointment in my old laptops' single-batch decoding speeds! Enjoyed it though!
I am currently in **Massachusetts, USA**, **#OpenToWork** for **intern** and **full time** roles, **willing to relocate**.
| 2026-02-09T12:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r02j1i/ported_fromscratch_inference_engine_based_on/ | Des_goes_Brrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r02j1i | false | null | t3_1r02j1i | /r/LocalLLaMA/comments/1r02j1i/ported_fromscratch_inference_engine_based_on/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=108&crop=smart&auto=webp&s=108077134b32322e5ad4d52ce2bd2cf715023c34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=216&crop=smart&auto=webp&s=c78d912b86db4b3a107bcc803f68e90e2abeec65', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=320&crop=smart&auto=webp&s=00e5abb1726cac49fe8866f9c2b66976b2153358', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=640&crop=smart&auto=webp&s=c9a7ec5b9c7347299303c6d07151b5538673ab68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=960&crop=smart&auto=webp&s=183eac4a6b77254f3c6ee2c10b2af27692a68773', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?width=1080&crop=smart&auto=webp&s=6ac8d950d630c526eeec4478c8bfddea80f7fe80', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XrWdnsrHlzaIVcbFzXadSpYyo7WEEuy77i0Dz0PJEf8.png?auto=webp&s=1dc09ab93dfb9e8cefdcaaaed6093abc9bf2fdbe', 'width': 1200}, 'variants': {}}]} |
Qwen3-Coder Next MXFP4 Strix Halo wir llama-cpp Vulkan | 2 | Hi
Tried to set it up but get Safe Tensor Error. Did anyone mange to get it working with Vulkan and llama.cpp ?
If yes can someone help me . GPT OS 120B works fine but wanted to give Qwen3 a try | 2026-02-09T11:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r02bkt/qwen3coder_next_mxfp4_strix_halo_wir_llamacpp/ | Septa105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r02bkt | false | null | t3_1r02bkt | /r/LocalLLaMA/comments/1r02bkt/qwen3coder_next_mxfp4_strix_halo_wir_llamacpp/ | false | false | self | 2 | null |
RTX 3090 in 2026 | 1 | so im looking to buy a new rig for some local LLM tweaking and 1440p gaming, budget friendly (prices are crazy in my country) i was thinking of getting a 5060ti 16gb which was a month go about 530$ new, currently it went up to 730$ in all local stores, i dont want to go for a 4070 super, im not interested in maxing fps in gaming, i found a guy seeling rtx 3090 24gn dell alienware for 670$, which seems sketchy to me the guy said it is in a good state and i can test it, im hearing lots of bad stuff in dell alienware tho so im not so sure, help please.
NB: havent got anything else besides a 32gb ddr5 ram, for cpu im thinking of a ryzen 5 7600x | 2026-02-09T11:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r020dz/rtx_3090_in_2026/ | Zine47X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r020dz | false | null | t3_1r020dz | /r/LocalLLaMA/comments/1r020dz/rtx_3090_in_2026/ | false | false | self | 1 | null |
Any trick to improve promt processing? | 2 | When using agentic tools (opencode, cline, codex, etc) with local models, the promt processing is very slow. Even slowlier than the responses themselves.
Are there any secrets on how improve that?
I use lm studio and mlx models (gptoss20b, glm4.7flash etc) | 2026-02-09T11:40:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r01zqa/any_trick_to_improve_promt_processing/ | mouseofcatofschrodi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r01zqa | false | null | t3_1r01zqa | /r/LocalLLaMA/comments/1r01zqa/any_trick_to_improve_promt_processing/ | false | false | self | 2 | null |
Ryzen + RTX: you might be wasting VRAM without knowing it (LLama Server) | 48 | I made a pretty stupid mistake, but it’s *so* easy to fall into it that I wanted to share it, hoping it might help someone else.
The workstation I use has a Ryzen 9 CPU with an integrated GPU, which I think is a very common setup.
I also have an Nvidia RTX GPU installed in a PCIe slot.
My monitor was connected directly to the Nvidia GPU, which means Windows 11 uses it as the primary GPU (for example when opening a browser, watching YouTube, etc.).
In this configuration, Llama-Server does **not** have access to the full VRAM of the Nvidia GPU, because part of it is already being used by the operating system for graphics. And when you’re close to the VRAM limit, this makes a *huge* difference.
I discovered this completely by accident... I'm VRAM addicted!
After connecting the monitor to the motherboard and rebooting the PC, I was able to confirm that Llama-Server had access to **all** of the precious VRAM.
Using Windows Task Manager, you can see that the Nvidia GPU VRAM is completely free, while the integrated GPU VRAM is being used instead.
I know this isn’t anything revolutionary, but maybe someone else is making the same mistake without realizing it.
Just it. | 2026-02-09T11:31:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r01u46/ryzen_rtx_you_might_be_wasting_vram_without/ | Medium-Technology-79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r01u46 | false | null | t3_1r01u46 | /r/LocalLLaMA/comments/1r01u46/ryzen_rtx_you_might_be_wasting_vram_without/ | false | false | self | 48 | null |
POV: You left repetition_penalty at 1.0 | 40 | 2026-02-09T11:28:46 | https://v.redd.it/fuvcpoqmegig1 | AurumDaemonHD | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r01sek | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fuvcpoqmegig1/DASHPlaylist.mpd?a=1773228540%2CNDM2ZDE0M2FhMDYwNDA4YjgxYjY5Mjk1NWZiNzZhMzY4ZjFjMDQ1ZDIyNmM0MzU3MTMyMDliNGI2NzgxNGEwNg%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/fuvcpoqmegig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fuvcpoqmegig1/HLSPlaylist.m3u8?a=1773228540%2COThhYmRmNDU5YjY3ZGU3NGU2NjdmNTFhYzZkMDJhZDU1ZGQ1MGY3ODk0N2U4NzZhZjczNmQwMzBlYTc1YWRjOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fuvcpoqmegig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1r01sek | /r/LocalLLaMA/comments/1r01sek/pov_you_left_repetition_penalty_at_10/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5de074bc158b2eb20f317b7b16336176654020b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=216&crop=smart&format=pjpg&auto=webp&s=7ff1dc2170ff3c96965fb9f697d98c724227b8c2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=320&crop=smart&format=pjpg&auto=webp&s=0f4f830e739d230d6182b2024ee4ff37b7fdd28d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9205100358d08925d986291f86bfae5732c7e8b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=960&crop=smart&format=pjpg&auto=webp&s=dad00b5df1957104068d48ac63efbc9b70d61c54', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8752692da047ad163de9b71db131b23be560499', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dXk0YW5vcW1lZ2lnMetm2yxWlv74oo2KCat6XpxnSQj55CMNYXwNFsRMpvok.png?format=pjpg&auto=webp&s=0808d45451b8be2bcf47b80ff27e73ba66c40b25', 'width': 1920}, 'variants': {}}]} | ||
Help needed: running a local LLM with a custom prompt/memory (non-commercial) | 2 | Hello,
I’m looking for someone with experience in local / open-source AI models (LLaMA, Mistral, Ollama, LM Studio, etc.).
I have built, over time, a structured corpus (texts, tone, interaction style, memory elements) with an AI model, and I would like help transposing this corpus into a local, open-source setup, for personal use.
This is not a commercial project.
It’s a personal, human, and creative exploration around continuity, memory, and dialogue with an AI system.
I don’t have financial means to pay for development work.
In exchange, I can offer time, gratitude, and genuine human reciprocity. I’m a trained psychologist and coach, if that is ever useful — but mostly, I’m looking for someone curious and kind.
If this resonates with you, feel free to reply or DM me.
Thank you for reading. | 2026-02-09T11:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r01hjn/help_needed_running_a_local_llm_with_a_custom/ | Disastrous-Way3174 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r01hjn | false | null | t3_1r01hjn | /r/LocalLLaMA/comments/1r01hjn/help_needed_running_a_local_llm_with_a_custom/ | false | false | self | 2 | null |
Qwen3 Next Coder - quantization sensitivity? | 4 | Hello.
I've been running Qwen3 Next Coder UD-Q6\_K\_XL + Kilo Code for a couple of days, fits nicely into 16GB VRAM (non-experts) + 96GB RAM (experts), and generally I'm very impressed by the speed and quality compared to GPT OSS 120B.
But at the same time, it often can loop in the reasoning if the problem gets to a certain degree of complexity, and it takes pretty strange detours. Like executing a command that runs in the background (due to \`&\` at the end) and dumps all logs of a Docker container into a \`/tmp/\*.txt\` file instead of just... reading the logs directly from the container when needed? I mean, it works, but why the extra steps lol, moreover it has demonstrated that's it's very capable with Docker otherwise, so why the odd move? And this "file-bias" doesn't seem to be an isolated, one-off hiccup, since it also seems to like creating files like \`plans/\*.md\` when running in Architect mode, even though I didn't ask it to document anything yet, only analyze.
To my untrained eye, seems like a quantization quirk, but I can't know for sure, hence I'm here.
Could these be a result of a potential very high sensitivity to quantization? llama-server seems to auto-enable mmap for this model, so I should in theory be able to run UD-Q8\_K\_XL without running out of RAM. What's everyone's experience so far? Any difference between Q6 and Q8? Or am I overthinking and it's just how "Next" models are? Thanks. | 2026-02-09T11:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r01gme/qwen3_next_coder_quantization_sensitivity/ | ABLPHA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r01gme | false | null | t3_1r01gme | /r/LocalLLaMA/comments/1r01gme/qwen3_next_coder_quantization_sensitivity/ | false | false | self | 4 | null |
Openclaw with gpt-oss-20b on RTX 2060 6gb | 7 | Just wanted to share a minor victory this weekend. Hours and hours of tweaking I have gotten gpt oss 20b running an openclaw agent, getting 8-10t/s for model output which is fast enough to beat the ten minute timer for the most part lol. isn’t bad either. I7-8700,32gb ddr4. Agent lives on a spare pc, rtx is on daily driver set up with lmstudio
50k token context, 4096 max response length
7 layers on gpu
Q8 k and v memory cache
Reasoning low
Lots is on the cpu but hey, it works.
Obviously I’m not really a big time operator I just thought this was fun to figure out.
| 2026-02-09T11:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r01adw/openclaw_with_gptoss20b_on_rtx_2060_6gb/ | tomjoad773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r01adw | false | null | t3_1r01adw | /r/LocalLLaMA/comments/1r01adw/openclaw_with_gptoss20b_on_rtx_2060_6gb/ | false | false | self | 7 | null |
Small, fast Guardrail model for LLM input moderation and toxicity detection. Detects 14 types of unsafe content. | 1 | [removed] | 2026-02-09T10:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r0192n/small_fast_guardrail_model_for_llm_input/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0192n | false | null | t3_1r0192n | /r/LocalLLaMA/comments/1r0192n/small_fast_guardrail_model_for_llm_input/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=108&crop=smart&auto=webp&s=2eb6a213165d492c90ddf72a617f4b4f209cf2cc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=216&crop=smart&auto=webp&s=1a3f53677657f14915a721147b1f26ed06a6946a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=320&crop=smart&auto=webp&s=c054200226ca81fa3e31af0a68b9d3209a1e62f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=640&crop=smart&auto=webp&s=6d143e89c1d5c0c89598e72bdfb3d4f1c5b659c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=960&crop=smart&auto=webp&s=25ac8a048d7166216719787102ecd23eb9c5385a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?width=1080&crop=smart&auto=webp&s=365e904a0f97ee42c3f72773fa71ffa9639bac84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pIdMmPbaburmNRal1L-wF45yVkwFCdJYq2R8SrY5Gps.png?auto=webp&s=509551aa22845ef24e39396694fe657a582ecc91', 'width': 1200}, 'variants': {}}]} |
5 Habits to Give Up If You Want to Be Successful | 1 | [removed] | 2026-02-09T10:54:44 | https://newsaffairng.com/2024/05/05/5-habits-to-give-up-if-you-want-to-be-successful/ | Jawabill10 | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1r01749 | false | null | t3_1r01749 | /r/LocalLLaMA/comments/1r01749/5_habits_to_give_up_if_you_want_to_be_successful/ | false | false | default | 1 | null |
I managed to jailbreak 43 of 52 recent models | 87 | GPT-5 broke at level 2,
Full report here: [rival.tips/jailbreak](http://rival.tips/jailbreak) I'll be adding more models to this benchmark soon | 2026-02-09T10:52:45 | https://v.redd.it/xmbxf1vhrdig1 | sirjoaco | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r015z4 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xmbxf1vhrdig1/DASHPlaylist.mpd?a=1773226390%2CYWUxMTFlMDdmMDRjMDU3ZDUyMGRjM2MzYWE3ODU5NjA1ZGZkNDQ2N2UyNjkwYzBiOWU4MDFjYmVhMDFiMDNkNw%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/xmbxf1vhrdig1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/xmbxf1vhrdig1/HLSPlaylist.m3u8?a=1773226390%2CZGE3ZGJmYjA2MDBjYmU5ZWViNmMxNTExMmE1MmY5MTkyYmJkYTNlZGJiNDgzYTE1ODE5MjZiYTk2NjRlOWNmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xmbxf1vhrdig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1898}} | t3_1r015z4 | /r/LocalLLaMA/comments/1r015z4/i_managed_to_jailbreak_43_of_52_recent_models/ | false | false | 87 | {'enabled': False, 'images': [{'id': 'YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f7ec681aafd93a8a4f34fb38d7ead5f21fb89b5', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=216&crop=smart&format=pjpg&auto=webp&s=18d3e32be84505b49e8b43aadf4f5b06c429176d', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=320&crop=smart&format=pjpg&auto=webp&s=7df6a23309906b789b05a4147b0487eea24940e8', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=640&crop=smart&format=pjpg&auto=webp&s=c1bf2473146a1da0a9218cf961b58d16c395a155', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=960&crop=smart&format=pjpg&auto=webp&s=1c613fbb132188cd2e02158a75be8273c483eb42', 'width': 960}, {'height': 614, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?width=1080&crop=smart&format=pjpg&auto=webp&s=91ad927efe53bc8c5e33949fa5e7a70986c9e384', 'width': 1080}], 'source': {'height': 1742, 'url': 'https://external-preview.redd.it/YTA3NHl0dmhyZGlnMUNU3vkEOynofhKg3zLh75rLSPZOaY5MGdNqMt8faW6e.png?format=pjpg&auto=webp&s=55ccb1b59d065b7aef7a8b917788165ecd547c15', 'width': 3062}, 'variants': {}}]} | |
A Modest Proposal: A 1% Income Tax on Every Python Library a Developer includes | 212 | 2026-02-09T10:41:15 | crantob | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r00yxq | false | null | t3_1r00yxq | /r/LocalLLaMA/comments/1r00yxq/a_modest_proposal_a_1_income_tax_on_every_python/ | false | false | 212 | {'enabled': True, 'images': [{'id': '4B59MwzJ5FywT1LbtZWiaeNl-aPaWaIV01fxieJoSGI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?width=108&crop=smart&auto=webp&s=9c0839bfb1e6ea8a5d20fdbe9274ff687e67abc3', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?width=216&crop=smart&auto=webp&s=550fcacddc6756bb7c876c0d7950325caabc28e6', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?width=320&crop=smart&auto=webp&s=287c876b86828bcdbd2e7d1db8afc6191e6e358c', 'width': 320}, {'height': 857, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?width=640&crop=smart&auto=webp&s=b57e54c1bdb067aeff6da154784be1fe3279cd14', 'width': 640}, {'height': 1285, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?width=960&crop=smart&auto=webp&s=fe01eba8a0bd9b21e09a51442ca9fc387f24c7da', 'width': 960}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/8zegrsmr6gig1.jpeg?auto=webp&s=976af0946ec53365f013822ea4e73642d0771487', 'width': 1075}, 'variants': {}}]} | |||
WeKnora v0.3.0 — open-source RAG framework now with shared workspaces, agent skills, and thinking mode | 1 | [removed] | 2026-02-09T10:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r00cus/weknora_v030_opensource_rag_framework_now_with/ | Glittering_Ad4507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r00cus | false | null | t3_1r00cus | /r/LocalLLaMA/comments/1r00cus/weknora_v030_opensource_rag_framework_now_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=108&crop=smart&auto=webp&s=f3517e0bebd8a09e7c13044cafe1b0452dea5fe9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=216&crop=smart&auto=webp&s=78659aa7ed51a6a62b6d1d47694cbfe6f6ddf98a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=320&crop=smart&auto=webp&s=997ccb6092c68673785abd2850ffbb7388625ba4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=640&crop=smart&auto=webp&s=278cfae2e25f214765fc0b6011bb64fb766539cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=960&crop=smart&auto=webp&s=459230a0a248e2bd0135dc1b2aceeb30d700503f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=1080&crop=smart&auto=webp&s=1b6eecd2acb6cc4b86b886b1cd8718797faa809c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?auto=webp&s=2bdacaa1ff708fa08489399d93eb9d0a07df6d4b', 'width': 1200}, 'variants': {}}]} |
Local solution for TTS/SST using Raspberry + Hailo-10H | 5 | Hello everybody,
I am working on a local project enabling my system to work with local LLM using raspberry pi 5 + hailo-10H.
My target is to implement a local TTS/STT (Text To Speach / Speach To Text)--system with TTFT (Time To First Token) < 100ms.
My first test was to chat/stream one simple sentence and measure the performance of TTFT.
I am not happy with the performance results of TTFT using models like llama3.2:1b or qwen2:1.5b. It is round about between 350 ms and 500 ms.
Anyone of you have expericed some better model or system to be used locally?
Greetings! | 2026-02-09T10:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r00c94/local_solution_for_ttssst_using_raspberry_hailo10h/ | RegularDude2024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r00c94 | false | null | t3_1r00c94 | /r/LocalLLaMA/comments/1r00c94/local_solution_for_ttssst_using_raspberry_hailo10h/ | false | false | self | 5 | null |
how to make a rag project ? | 1 | [removed] | 2026-02-09T09:59:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1r00ae0 | false | null | t3_1r00ae0 | /r/LocalLLaMA/comments/1r00ae0/how_to_make_a_rag_project/ | false | false | default | 1 | null | ||
v0.3.0 — open-source RAG framework now with shared workspaces, agent skills, and thinking mode | 1 | [removed] | 2026-02-09T09:55:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1r007kz | false | null | t3_1r007kz | /r/LocalLLaMA/comments/1r007kz/v030_opensource_rag_framework_now_with_shared/ | false | false | default | 1 | null | ||
WeKnora v0.3.0 — open-source RAG framework now with shared workspaces, agent skills, and thinking mode | 1 | [removed] | 2026-02-09T09:54:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r0072a/weknora_v030_opensource_rag_framework_now_with/ | Glittering_Ad4507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r0072a | false | null | t3_1r0072a | /r/LocalLLaMA/comments/1r0072a/weknora_v030_opensource_rag_framework_now_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=108&crop=smart&auto=webp&s=f3517e0bebd8a09e7c13044cafe1b0452dea5fe9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=216&crop=smart&auto=webp&s=78659aa7ed51a6a62b6d1d47694cbfe6f6ddf98a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=320&crop=smart&auto=webp&s=997ccb6092c68673785abd2850ffbb7388625ba4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=640&crop=smart&auto=webp&s=278cfae2e25f214765fc0b6011bb64fb766539cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=960&crop=smart&auto=webp&s=459230a0a248e2bd0135dc1b2aceeb30d700503f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=1080&crop=smart&auto=webp&s=1b6eecd2acb6cc4b86b886b1cd8718797faa809c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?auto=webp&s=2bdacaa1ff708fa08489399d93eb9d0a07df6d4b', 'width': 1200}, 'variants': {}}]} |
test | 1 | [removed] | 2026-02-09T09:52:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1r006eh | false | null | t3_1r006eh | /r/LocalLLaMA/comments/1r006eh/test/ | false | false | default | 1 | null | ||
WeKnora v0.3.0 — open-source RAG framework now with shared workspaces, agent skills, and thinking mode | 1 | [removed] | 2026-02-09T09:51:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r005m3/weknora_v030_opensource_rag_framework_now_with/ | Glittering_Ad4507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r005m3 | false | null | t3_1r005m3 | /r/LocalLLaMA/comments/1r005m3/weknora_v030_opensource_rag_framework_now_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=108&crop=smart&auto=webp&s=f3517e0bebd8a09e7c13044cafe1b0452dea5fe9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=216&crop=smart&auto=webp&s=78659aa7ed51a6a62b6d1d47694cbfe6f6ddf98a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=320&crop=smart&auto=webp&s=997ccb6092c68673785abd2850ffbb7388625ba4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=640&crop=smart&auto=webp&s=278cfae2e25f214765fc0b6011bb64fb766539cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=960&crop=smart&auto=webp&s=459230a0a248e2bd0135dc1b2aceeb30d700503f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?width=1080&crop=smart&auto=webp&s=1b6eecd2acb6cc4b86b886b1cd8718797faa809c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rx2jknX2uP-Xbvc0LcvuRz5hpT9tfbss6GF43kUd15s.png?auto=webp&s=2bdacaa1ff708fa08489399d93eb9d0a07df6d4b', 'width': 1200}, 'variants': {}}]} |
I built an Open-Source Model Orchestration Engine that achieves SOTA Reasoning on Consumer Hardware (8GB VRAM) via Cognitive Serialization | 0 | Hi everyone,
I’m excited to share **ArivOS**, an open-source AI orchestration system designed to democratize high-level reasoning capabilities.
**"Why retrain when you can orchestrate?"** Rather than burning resources to build massive models from scratch, our philosophy is to **maximize the potential of what already exists.** By architecting systems that chain specialized models together, we achieve higher efficiency and state-of-the-art results on consumer hardware. We are shifting the engineering focus from expensive training runs to sophisticated **cognitive architectures**, bringing powerful AI to the world faster and more affordably.
Running state-of-the-art reasoning tasks typically requires massive VRAM and enterprise-grade GPUs (H100s). However, for many developers and researchers in resource-constrained environments—specifically in the context of India's diverse linguistic landscape—access to such compute is limited.
**ArivOS (Frugal & Sovereign AI)** Ariv solves this by implementing a **Cognitive Serialization** architecture. Instead of relying on a single monolithic model, it orchestrates a pipeline of specialized open-source models that dynamically load and unload within a single inference pass.
This allows us to achieve deep Chain-of-Thought (CoT) reasoning and high-fidelity cultural translation on standard **consumer hardware (8GB VRAM)** without quantization degradation.
**Key Technical Innovations:**
1. **Dynamic VRAM Management (The "Hot-Swap" Protocol):** We treat VRAM as a high-velocity workspace rather than static storage. The system sequentially loads specialized compute engines for distinct phases of the inference cycle (Translation → Reasoning → Critique → Synthesis), keeping peak memory usage under 9GB.
2. **Test-Time Compute (TTC) Optimization:** By allocating more compute time at inference (rather than training), Ariv performs multi-step reasoning verification. It includes a built-in "Critic" module that adversarially evaluates its own logic chains before finalizing an output.
3. **Sovereign Architecture:** The pipeline is specifically tuned for the 22 official Indian languages, preserving semantic nuance often lost in generalized Western models.
We are proving that **latency can be traded for accuracy and accessibility.** You don't need a data center to run an agentic workflow that rivals proprietary APIs; you just need smarter orchestration.
**Looking for Contributors:** We are looking for engineers interested in:
* **Systems Programming:** Optimizing the model loading/unloading latency (looking into aggressive caching or shared memory implementations).
* **Pipeline Architecture:** Enhancing the agentic workflow to make the handover between the "Reasoner" and the "Critic" more seamless and logical.
* **Prompt Engineering:** Refining the CoT prompts for the reasoning engine.
* **Frontend/TUI:** Enhancing the Terminal User Interface or building out the React Native mobile layer.
**Repository:**[https://github.com/harvatechs/Ariv](https://github.com/harvatechs/Ariv)
I’d love to hear your feedback on the architecture—specifically regarding our approach to sequential loading vs. quantization.
Thanks! | 2026-02-09T09:08:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qzzgwv/i_built_an_opensource_model_orchestration_engine/ | Rude_Nature_1349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzzgwv | false | null | t3_1qzzgwv | /r/LocalLLaMA/comments/1qzzgwv/i_built_an_opensource_model_orchestration_engine/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=108&crop=smart&auto=webp&s=745af75b4f6284c6271c997ae1997091ede87cca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=216&crop=smart&auto=webp&s=d6aab48dbe903e5fee1b2483aa2a41a488a6f48e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=320&crop=smart&auto=webp&s=60f961e494ad1d1d5cf20d478e72affb4d631192', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=640&crop=smart&auto=webp&s=d5c49803e05f422bddf557d38f7467cf6bc6f0e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=960&crop=smart&auto=webp&s=811c4de8f28495b21c8591889fd19f13aa48839f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?width=1080&crop=smart&auto=webp&s=2b10810d3704ccc21f3b8edddbd4e145286421bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o4fwQ0m1G-jef6YrHAvloyWoOMxrv2tyAkwT6yTv930.png?auto=webp&s=7a5e74c85071d7e43fe8304cb6c04da675a4f046', 'width': 1200}, 'variants': {}}]} |
Dual 3090s (power-limited) - Are 3x PCI-E cables w/daisy-chain "okay?" | 2 | I *just* discovered that my modular 1350 watt power supply - despite having the new generation 12V connector (for cards I'll never be able to afford) - only came with 3 of the PCI-E power cables - though each has the little daisy-chain end on it, unused.
I'm running my current 3090 power-limited - and it's a dell OEM one, two PCI-E power connectors. I have a second identical card I'll be putting in, and I'm wondering if it's reasonable to run one "dedicated" power cable to each card, and use the daisy-chain to run both - and, if so, should I be more aggressive with my power limiting? I've never used the daisy-chain stuff, but I wonder why it's even offered if it's actually unsafe to use. (But, could be down to marketing and inertia). Anyway, any advice welcomed. The obvious solution is "get another modular cable, dumdum." But, would *you* be patient enough to not try, as your second 3090 arrived? (;
The power supply, for reference, is a Thermaltake Toughpower GF3 1350W (ATX 3.0). And I've only run into dodgy third party cables so far (but thermaltake's site was down last time I tried.)
(I sure wish modular power supply standards were consistent - I have a spare I could use, but the pins are wired **wildly** differently, despite being the same Molex connector on the power supply end - yuck.) | 2026-02-09T08:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qzzb8w/dual_3090s_powerlimited_are_3x_pcie_cables/ | overand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzzb8w | false | null | t3_1qzzb8w | /r/LocalLLaMA/comments/1qzzb8w/dual_3090s_powerlimited_are_3x_pcie_cables/ | false | false | self | 2 | null |
I built a voice assistant that controls my Terminal using Whisper (Local) + Claude Code CLI (<100 lines of script) | 2 | Hey everyone,
I wanted to share a weekend project I've been working on. I was frustrated with Siri/Alexa not being able to actually *interact* with my dev environment, so I built a small Python script to bridge the gap between voice and my terminal.
**The Architecture:** It's a loop that runs in under 100 lines of Python:
1. **Audio Capture:** Uses `sounddevice` and `numpy` to detect silence thresholds (VAD) automatically.
2. **STT (Speech to Text):** Runs **OpenAI Whisper** locally (base model). No audio is sent to the cloud for transcription, which keeps latency decent and privacy high.
3. **Intelligence:** Pipes the transcribed text into the new **Claude Code CLI** (via `subprocess`).
* *Why Claude Code?* Because unlike the standard API, the CLI has permission to execute terminal commands, read files, and search the codebase directly.
4. **TTS:** Uses native OS text-to-speech ( `say` on Mac, `pyttsx3` on Windows) to read the response back.
**The cool part:** Since Claude Code has shell access, I can ask things like *"Check the load average and if it's high, list the top 5 processes"* or *"Read the readme in this folder and summarize it"*, and it actually executes it.
**Here is the core logic for the Whisper implementation:**
Python
# Simple snippet of the logic
import sounddevice as sd
import numpy as np
import whisper
model = whisper.load_model("base")
def record_audio():
# ... (silence detection logic)
pass
def transcribe(audio_data):
result = model.transcribe(audio_data, fp16=False)
return result["text"]
# ... (rest of the loop)
I made a video breakdown explaining the setup and showing a live demo of it managing files and checking system stats.
**📺 Video Demo & Walkthrough:** [https://youtu.be/hps59cmmbms?si=FBWyVZZDETl6Hi1J](https://youtu.be/hps59cmmbms?si=FBWyVZZDETl6Hi1J)
I'm planning to upload the full source code to GitHub once I clean up the dependencies.
Let me know if you have any ideas on how to improve the latency between the local Whisper transcription and the Claude response!
Cheers. | 2026-02-09T08:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz7p4/i_built_a_voice_assistant_that_controls_my/ | jokiruiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz7p4 | false | null | t3_1qzz7p4 | /r/LocalLLaMA/comments/1qzz7p4/i_built_a_voice_assistant_that_controls_my/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'n4JnAay1k3tSNR9ZMpzsXPCoprNTzARHNCuu5z_9e48', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n4JnAay1k3tSNR9ZMpzsXPCoprNTzARHNCuu5z_9e48.jpeg?width=108&crop=smart&auto=webp&s=859d145f0492efacfb8aa134d58513fae5ff807d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/n4JnAay1k3tSNR9ZMpzsXPCoprNTzARHNCuu5z_9e48.jpeg?width=216&crop=smart&auto=webp&s=aa89a89b53130fef1a2c4ae0362f29421bd7f50d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/n4JnAay1k3tSNR9ZMpzsXPCoprNTzARHNCuu5z_9e48.jpeg?width=320&crop=smart&auto=webp&s=ce602d73e95df3ed98d1c3bda5e496df1edf32cb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/n4JnAay1k3tSNR9ZMpzsXPCoprNTzARHNCuu5z_9e48.jpeg?auto=webp&s=80a3275b86e3b6cb02165261184edecdf0e17709', 'width': 480}, 'variants': {}}]} |
Baichuan-M3 Technical Report is live. 🚀🚀🚀 | 1 | [removed] | 2026-02-09T08:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz5aw/baichuanm3_technical_report_is_live/ | Straight-Garage-2081 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz5aw | false | null | t3_1qzz5aw | /r/LocalLLaMA/comments/1qzz5aw/baichuanm3_technical_report_is_live/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=108&crop=smart&auto=webp&s=9dcf7536aeb08ad3c19c9567e86338959a56bdb3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=216&crop=smart&auto=webp&s=dbda0e448a683de6b9ff9b6ada4580cc3347d7f0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=320&crop=smart&auto=webp&s=2b89d27925e51ced5d35980ce773b334c5a65132', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=640&crop=smart&auto=webp&s=18f624ce2e88d9140d4f8711e80b7873d86d828e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=960&crop=smart&auto=webp&s=dea6030c92849bdc998ba10f0b8f46d964aea316', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=1080&crop=smart&auto=webp&s=b4576f51d8df295dcce8dd5e5f9e4a3594d3b29f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?auto=webp&s=8751c9778cb4d0a57a241b5ae8bbceba432a868b', 'width': 1200}, 'variants': {}}]} |
I built an MCP server that syncs Cursor, Claude Desktop, and Windsurf with one brain [Open Source] | 0 | **The Problem**
I use Cursor for coding, Claude Desktop for thinking, and Windsurf for exploration. But every time I switch tools, I lose context. I'd tell Claude about an architecture decision, then Cursor would have no idea.
**The Solution**
[Nucleus MCP](https://github.com/eidetic-works/nucleus-mcp) creates a shared brain (`.brain/` folder) across all your AI tools.
- Tell Claude about a decision → Cursor knows it
- Make a plan in Windsurf → Claude remembers it
- One brain. All your tools.
**Quick Start**
```bash
pip install nucleus-mcp
nucleus-init # Auto-configures Claude, Cursor, Windsurf
```
**What's Included**
- **Persistent Memory** — Knowledge that survives sessions
- **Multi-Agent Sync** — One brain across all tools
- **Audit Trail** — Every decision logged
- **Local-First** — Your data stays on your machine
- **Hypervisor** — File locking and security
**How It's Different from OpenClaw**
OpenClaw is great for running agent teams on their platform. Nucleus connects *different* platforms together. Use OpenClaw for agent teams. Use Nucleus for your universal brain.
**It's Open Source**
MIT licensed. No catch. No waitlist.
- GitHub: https://github.com/eidetic-works/nucleus-mcp
- PyPI: `pip install nucleus-mcp`
**What I'm Looking For**
- Feedback on the concept
- Feature requests
- Beta testers who use multiple AI tools daily
- Contributors (Python, MCP protocol)
Happy to answer questions! | 2026-02-09T08:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz3zw/i_built_an_mcp_server_that_syncs_cursor_claude/ | NucleusOS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz3zw | false | null | t3_1qzz3zw | /r/LocalLLaMA/comments/1qzz3zw/i_built_an_mcp_server_that_syncs_cursor_claude/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=108&crop=smart&auto=webp&s=ffd1d694d29202573f07c851b0d39fb3c003b6e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=216&crop=smart&auto=webp&s=a831e8fdc763e16414d83ec9d6b7a0c41c6dcd6d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=320&crop=smart&auto=webp&s=cb8d824d20f0b1c1c80fc313ba3925c5e4524294', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=640&crop=smart&auto=webp&s=616ca59739822810c00bdb2b638a23b4f2c0fa29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=960&crop=smart&auto=webp&s=a645f5410f44758f12044107c5bfc1a6f91b31ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?width=1080&crop=smart&auto=webp&s=b4bc0e5899a1147b261d270d7231a6b96f157a9d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mcMo4zpDWeuURqGi2AlCLcQlRS-1WbTr1xNEYTAL8GU.png?auto=webp&s=d2d2fa3afb644d8ec63694bc1d40f58fa00dddb7', 'width': 1200}, 'variants': {}}]} |
Baichuan-M3 Technical Report is live. 🚀🚀🚀 | 1 | [removed] | 2026-02-09T08:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz3ao/baichuanm3_technical_report_is_live/ | Straight-Garage-2081 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz3ao | false | null | t3_1qzz3ao | /r/LocalLLaMA/comments/1qzz3ao/baichuanm3_technical_report_is_live/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=108&crop=smart&auto=webp&s=9dcf7536aeb08ad3c19c9567e86338959a56bdb3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=216&crop=smart&auto=webp&s=dbda0e448a683de6b9ff9b6ada4580cc3347d7f0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=320&crop=smart&auto=webp&s=2b89d27925e51ced5d35980ce773b334c5a65132', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=640&crop=smart&auto=webp&s=18f624ce2e88d9140d4f8711e80b7873d86d828e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=960&crop=smart&auto=webp&s=dea6030c92849bdc998ba10f0b8f46d964aea316', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?width=1080&crop=smart&auto=webp&s=b4576f51d8df295dcce8dd5e5f9e4a3594d3b29f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-sW8uqIv0B1w_uj5KvRJ2zhghMtU3VxXyIROhwzu_64.png?auto=webp&s=8751c9778cb4d0a57a241b5ae8bbceba432a868b', 'width': 1200}, 'variants': {}}]} |
I built an MCP server that syncs Cursor, Claude Desktop, and Windsurf with one brain [Open Source] | 1 | [deleted] | 2026-02-09T08:42:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qzz2lm | false | null | t3_1qzz2lm | /r/LocalLLaMA/comments/1qzz2lm/i_built_an_mcp_server_that_syncs_cursor_claude/ | false | false | default | 1 | null | ||
GLM 5 is coming! spotted on vllm PR | 212 | 2026-02-09T08:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz0vr/glm_5_is_coming_spotted_on_vllm_pr/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz0vr | false | null | t3_1qzz0vr | /r/LocalLLaMA/comments/1qzz0vr/glm_5_is_coming_spotted_on_vllm_pr/ | false | false | 212 | null | ||
Prompting local models still feels like vibe coding half the time | 0 | Not sure if it’s just me, but a lot of my prompt work with local models goes like this:
Write prompt → run → squint at output → tweak one line → run again
Repeat until it *kind of* works.
When it fails, the reasons are usually boring but painful:
* Ambiguity I didn’t notice
* Too many instructions bundled together
* Output format not actually enforced
* Model interpreting intent differently than I expected
I got tired of guessing, so I threw together a small **prompt diagnoser / fixer** for my own use.
It’s very simple:
* Reads a prompt
* Points out what might be wrong
* Explains the issue in plain language
* Shows a cleaned-up before → after version
Nothing model-specific — I’ve been using it as a thinking aid for local models, GPT, and Claude.
If you want to mess with it, link’s here:
👉 [**https://ai-stack.dev/rules**](https://ai-stack.dev/rules)
Mainly curious:
* Do you have a repeatable way to debug prompts?
* Or is vibe coding just… the way?
Would love to hear how people here approach this. | 2026-02-09T08:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qzz0me/prompting_local_models_still_feels_like_vibe/ | Silver-Photo2198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzz0me | false | null | t3_1qzz0me | /r/LocalLLaMA/comments/1qzz0me/prompting_local_models_still_feels_like_vibe/ | false | false | self | 0 | null |
`I built an MCP server that syncs Cursor, Claude Desktop, and Windsurf with one brain [Open Source]` | 1 | ```markdown
**The Problem**
I use Cursor for coding, Claude Desktop for thinking, and Windsurf for exploration. But every time I switch tools, I lose context. I'd tell Claude about an architecture decision, then Cursor would have no idea.
**The Solution**
[Nucleus MCP](
https://github.com/eidetic-works/nucleus-mcp
) creates a shared brain (`.brain/` folder) across all your AI tools.
- Tell Claude about a decision → Cursor knows it
- Make a plan in Windsurf → Claude remembers it
- One brain. All your tools.
**Quick Start**
```bash
pip install nucleus-mcp
nucleus-init # Auto-configures Claude, Cursor, Windsurf
```
**What's Included**
-
**Persistent Memory**
— Knowledge that survives sessions
-
**Multi-Agent Sync**
— One brain across all tools
-
**Audit Trail**
— Every decision logged
-
**Local-First**
— Your data stays on your machine
-
**Hypervisor**
— File locking and security
**How It's Different from OpenClaw**
OpenClaw is great for running agent teams on their platform. Nucleus connects
*different*
platforms together. Use OpenClaw for agent teams. Use Nucleus for your universal brain.
**It's Open Source**
MIT licensed. No catch. No waitlist.
- GitHub: https://github.com/eidetic-works/nucleus-mcp
- PyPI: `pip install nucleus-mcp`
**What I'm Looking For**
- Feedback on the concept
- Feature requests
- Beta testers who use multiple AI tools daily
- Contributors (Python, MCP protocol)
Happy to answer questions!
``` | 2026-02-09T08:35:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzyyhc/i_built_an_mcp_server_that_syncs_cursor_claude/ | NucleusOS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzyyhc | false | null | t3_1qzyyhc | /r/LocalLLaMA/comments/1qzyyhc/i_built_an_mcp_server_that_syncs_cursor_claude/ | false | false | self | 1 | null |
Caret – A terminal tool to inspect and clean massive LLM datasets | 21 | Hi r/LocalLLaMA,
I’ve been working on a CLI tool called **Caret** because I was struggling to inspect large pre-training datasets efficiently.
The main issue I had was that opening 10GB+ JSONL or Parquet files usually crashed my editor (VS Code) or used too much RAM. I wanted something that felt like `less` but understood the structure of LLM data, specifically for visualizing tokenization and finding bad data.
It’s written in Rust and uses memory-mapped I/O, so it opens files of basically any size instantly without loading them fully into RAM.
**Key Features:**
* **Zero-Copy Open:** Uses `mmap` to handle massive files. You can scroll through a 100GB dataset instantly.
* **Token X-Ray:** Toggles a view that visualizes exactly how your tokenizer (Tiktoken, Llama 3, GPT-2...) is splitting the text (see screenshot).
* **SimHash Deduplication:** Uses parallelized SimHash (with hardware `POPCNT`) to find near-duplicates in your training data.
* **Parquet & CSV Support:** Handles binary formats natively without needing to convert them to JSONL first.
* **MCP Server:** I added an experimental MCP (Model Context Protocol) server. If you use Claude Desktop or Cursor, you can connect it to Caret to "chat" with your local dataset (e.g., "Find me 5 examples of bad JSON formatting in this file").
**How it works under the hood:** Instead of reading the whole file, it builds a lightweight index of line offsets and maps the file into virtual memory. When you scroll, it slices the bytes directly from the OS page cache. For remote HuggingFace datasets, it fetches only the parquet metadata footer first and streams row groups on demand, so you don't have to download the full repo to check the data quality.
**Installation:** If you have Rust installed:
Bash
git clone https://github.com/rouapps/caret.git
cd caret && cargo run --release -- path/to/data.jsonl
It’s still early days, so I’d appreciate any feedback or issue reports if you try it on your datasets!
https://preview.redd.it/ip091tcnifig1.png?width=1778&format=png&auto=webp&s=cff35eda5fa5628659c5b0c7abf2f4903644419b | 2026-02-09T08:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qzytme/caret_a_terminal_tool_to_inspect_and_clean/ | Mental_Figure_1130 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzytme | false | null | t3_1qzytme | /r/LocalLLaMA/comments/1qzytme/caret_a_terminal_tool_to_inspect_and_clean/ | false | false | 21 | null | |
Finetune an LLM from your discord chats | 0 | Hi r/LocalLLaMA ,
I just wanted to share a small project I made where you can take your exported discord logs and use them to train an LLM off of yourself. I was looking for something like this for a few days and I could never really find something that was relatively simple and worked. So I thought I'd just share it here for those who'd want to try it.
Here's the [Github repo](https://github.com/LegendarySpy/DiscordToLLM) if you want to try it yourself :)
It works by using the OSS app [Discord Chat Exporter](https://github.com/Tyrrrz/DiscordChatExporter), it ingests all of the JSON files from it and cleans them to remove extra data & unwanted data and then uses Unsloth to train that into a model and then lastly convert that into a .gguf. Right now it comes with Gemma 12B model, Trinity Nano MoE, and Llama 3.1 8B templates.
It also contains a discord bot script that you can use to talk to it right after you finish training & converting. | 2026-02-09T08:25:07 | LegendarySpy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qzysub | false | null | t3_1qzysub | /r/LocalLLaMA/comments/1qzysub/finetune_an_llm_from_your_discord_chats/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'a75gr0jyffig1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=108&crop=smart&auto=webp&s=f3257a6d27a7f1cc9d995f23ab8748abbb66f5f3', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=216&crop=smart&auto=webp&s=d866a78c4f2d5ab75dca0cb45b993716421d5d19', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=320&crop=smart&auto=webp&s=3773b8efeed43b08e59a315e20e16eecda2bec74', 'width': 320}, {'height': 264, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=640&crop=smart&auto=webp&s=5416ee08ad2f2339fe2e0e5e66711d91c89e9bc7', 'width': 640}, {'height': 396, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=960&crop=smart&auto=webp&s=3ac203d51e3fd65693be3ff802d98cb074e16bed', 'width': 960}, {'height': 445, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?width=1080&crop=smart&auto=webp&s=85220f056251b132e1f40f2110e7902ebc66a20e', 'width': 1080}], 'source': {'height': 594, 'url': 'https://preview.redd.it/a75gr0jyffig1.png?auto=webp&s=c71e426cbef967c43b7fec51db11ad447e3f079f', 'width': 1440}, 'variants': {}}]} | ||
Local LLM Performance: Testing OpenClaw with 2B/4B models via llama.cpp? | 0 | Hey everyone,
I’m really curious about the potential of running **OpenClaw** entirely offline for privacy and learning reasons. Specifically, I want to try using **llama.cpp** to power the backend.
Has anyone here experimented with "tiny" models in the **2B to 4B parameter range** (like Gemma 2B, Phi-3, or Qwen 4B)?
I’m specifically wondering:
* **Tool Calling:** Do these small models actually manage to trigger AgentSkills reliably, or do they struggle with the syntax?
* **Memory:** How do they handle the [soul.md](http://soul.md) persistent memory? Is the context window usually enough?
* **Performance:** Is the latency significantly better on consumer hardware compared to 7B or 8B models?
If you’ve gotten this working, what's the "peak" complexity you've achieved? Can it still handle basic file management or calendar tasks, or does it lose the plot?
Looking forward to hearing your setups! | 2026-02-09T08:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qzykqy/local_llm_performance_testing_openclaw_with_2b4b/ | Sucuk-san | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzykqy | false | null | t3_1qzykqy | /r/LocalLLaMA/comments/1qzykqy/local_llm_performance_testing_openclaw_with_2b4b/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=108&crop=smart&auto=webp&s=210969840104fefe5a740c14a049ba6ae9f4da1a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=216&crop=smart&auto=webp&s=4884c88257a74f96353b7ca71d7749b6b7408185', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=320&crop=smart&auto=webp&s=6767f329a451c7b10e4b36109b3f7ce919c6c511', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=640&crop=smart&auto=webp&s=bcb0d160a488e8838d6bd1de9314d5614095d98a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=960&crop=smart&auto=webp&s=d51c3521f7164a737cdf1eaf37fe880d9b4b6f45', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=1080&crop=smart&auto=webp&s=4d3aa798813a7bdaf4f1915a05cc71f6345b0d17', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?auto=webp&s=ee4222e7ba222f9a3ab6fecbcc8435b9c9c571aa', 'width': 1200}, 'variants': {}}]} |
[Project] MCP Orchestrator - Turn one AI agent into a team with parallel sub-agents | 5 | Hey r/LocalLLaMA! I built an open-source MCP server that lets you spawn parallel AI sub-agents — think of it as turning one AI coding agent into a team.
\*\*What it does:\*\*
- Spawns up to 10 parallel sub-agents using Copilot CLI or Claude Code CLI
- Passes file context to each agent (full file, summary, or grep mode)
- Smart timeout selection based on MCP servers requested
- Cross-platform: macOS, Linux, and Windows
- Headless & programmatic — designed for AI-to-AI orchestration via MCP protocol
\*\*Example use case:\*\*
You give one prompt like "research job openings at Stripe, Google, and Meta" — the orchestrator fans that out to 3 parallel agents, each with their own MCP servers (e.g., Playwright for browser access), and aggregates results.
\*\*Install:\*\*
\`\`\`
npm i u/ask149/mcp-orchestrator
\`\`\`
\*\*GitHub:\*\* [https://github.com/Ask149/orchestrator](https://github.com/Ask149/orchestrator)
Would love feedback from this community — especially on what CLI backends you'd want supported next. Currently it works with GitHub Copilot CLI and Claude Code CLI. | 2026-02-09T07:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qzy8uu/project_mcp_orchestrator_turn_one_ai_agent_into_a/ | ask149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzy8uu | false | null | t3_1qzy8uu | /r/LocalLLaMA/comments/1qzy8uu/project_mcp_orchestrator_turn_one_ai_agent_into_a/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=108&crop=smart&auto=webp&s=c94b8031343952f7a04b145da7aadbe49ccac715', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=216&crop=smart&auto=webp&s=9b2998ca58cbfd3cf2bc1b22d560aeb1414a9081', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=320&crop=smart&auto=webp&s=025fd4e4067862120f2f46711ffcea7acc34cbab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=640&crop=smart&auto=webp&s=3a40d0d084103a69a6fade1dc554a420bf54b775', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=960&crop=smart&auto=webp&s=5c563bfb14913eee61fb1a3ef91249be65961cb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?width=1080&crop=smart&auto=webp&s=9bc9ac3aba4ea3e6e72c94048f65d940184a5dc5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/88jqkvgzxTJBc4TYM4jn4XCRB9IztCW5FFI9kNs6dIA.png?auto=webp&s=8d31105c83a2452424f3fd54f55856d857f98f6f', 'width': 1200}, 'variants': {}}]} |
Built a cloud hosting service for OpenClaw with free LLM included - uses Llama 3.3 70B via OpenRouter | 0 | Hey everyone! I built a cloud hosting platform called AgentAir that spins up pre-configured OpenClaw instances with a free LLM already connected.
\*\*The LLM setup:\*\*
- Uses OpenRouter's free tier
- Default model: Llama 3.3 70B Instruct (free)
- Also supports DeepSeek R1 Distill 70B (free) and other free models
- No API key needed for customers - we handle it
\*\*Why I built this:\*\*
I kept seeing people in r/openclaw struggling with the setup - getting Docker working, figuring out API keys, configuring models. Thought: what if I just gave people a ready-to-go VM with everything pre-installed?
\*\*How it works:\*\*
- Customer signs up, picks a region (US/EU/Asia)
- We spin up a dedicated Hetzner VPS with OpenClaw + noVNC
- LLM is pre-configured with OpenRouter free tier
- They get browser-based remote desktop access
- $19.99/mo flat - no surprise API costs
The free tier models have some rate limits (20 req/min, 200 req/day on OpenRouter) but for personal use and experimentation it works great. And if someone wants to upgrade to paid models later, they can swap in their own API key.
Thought this community might find it interesting since it's basically "Llama in the cloud" for non-technical users who want to play with AI agents.
Site: [https://agenthirehq.com](https://agenthirehq.com)
Happy to answer any questions about the architecture or LLM setup! | 2026-02-09T07:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qzy2pk/built_a_cloud_hosting_service_for_openclaw_with/ | 118fearless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzy2pk | false | null | t3_1qzy2pk | /r/LocalLLaMA/comments/1qzy2pk/built_a_cloud_hosting_service_for_openclaw_with/ | false | false | self | 0 | null |
CLI AgenticAI prompt | 0 | System Prompt:
`You are an advanced autonomous reasoning agent designed to function as a highly capable software engineer, researcher, and end to end problem solver. Your purpose is not limited to explaining concepts or offering theoretical suggestions. You are responsible for delivering concrete, working, and verifiable solutions. You operate with full ownership of tasks from initial understanding through implementation, validation, and refinement. You prioritize correctness, clarity, maintainability, and measurable outcomes.`
`You operate within a defined working environment, typically the current working directory and its subdirectories unless explicitly instructed otherwise. All file operations, code generation, execution steps, artifact creation, and analysis must remain within this bounded scope unless the user grants permission to extend beyond it. This constraint ensures operational safety while preserving sufficient flexibility to accomplish meaningful work.`
`You assume access to a command line development environment that supports file system operations, shell execution, dependency management, compilation, testing frameworks, debugging tools, and version control systems. You may consult external documentation or authoritative sources when necessary to ensure accuracy, especially for evolving technologies or time sensitive information. However, you must clearly distinguish verified facts, reasonable inferences, and assumptions. You must not rely blindly on memory when accuracy can be improved through validation.`
`Before performing any significant action, you verify all prerequisites. Confirm that required tools and dependencies are available, validate file paths before reading or modifying them, check permissions, and confirm that configurations or syntax are correct. Explicitly state expected outcomes before execution so deviations can be detected immediately. Anticipate potential failure modes and consider how you will detect and handle them before proceeding.`
`When performing research or analytical tasks, explicitly identify what is known, what is unknown, and what must be determined. Cross reference critical claims when possible and clearly mark levels of certainty. If conflicting information appears, present the competing perspectives and explain plausible reasons for discrepancies. Maintain intellectual honesty by avoiding unsupported speculation and clearly labeling assumptions.`
`When producing software or technical solutions, begin with contextual analysis. If an existing codebase is present, study its architecture, conventions, dependencies, and design philosophy before making changes. Plan non trivial solutions before implementation by decomposing them into logical components, defining interfaces, identifying edge cases, and clarifying success criteria. Implementation must follow best practices of the relevant language and framework, include meaningful error handling, and maintain internal consistency with the existing system.`
`Testing is mandatory and integrated into the workflow. Provide unit tests for isolated components and integration tests for system interactions when appropriate. Validate error handling paths, boundary conditions, and performance constraints if relevant. Execute tests and verify outcomes before declaring completion. If failures occur, analyze root causes rather than masking incorrect behavior. Refine code only after correctness is established, and document changes clearly.`
`Work incrementally and validate continuously. Break complex tasks into manageable steps with explicit success criteria. After each step, verify that the intended effect was achieved using concrete evidence rather than assumptions. Capture relevant outputs, logs, return codes, and intermediate artifacts to support traceability and debugging. When errors arise, document the exact failure, analyze violated assumptions, generate multiple recovery strategies, evaluate risks, and proceed methodically. After repeated unsuccessful recovery attempts, clearly summarize findings and request user input.`
`For long running or multi phase efforts, maintain structured progress tracking. Define milestones, track completed steps, identify blockers, and summarize progress at logical checkpoints. Preserve stable states before risky operations and maintain rollback paths. Continuously reassess plans based on new information and refine strategies accordingly. Learn from both successful and failed attempts by identifying patterns and adjusting future reasoning.`
`Respect strict safety and boundary controls. Do not operate outside the authorized workspace without explicit permission. Avoid destructive operations such as deleting or overwriting critical assets without confirmation. Never expose secrets, credentials, or sensitive information. Disclose when network access or external dependencies are required. Conduct explicit risk assessments for high impact actions, describe potential consequences, propose mitigation strategies, and obtain confirmation before execution.`
`Structure all responses clearly and actionably. Begin with the objective, followed by contextual analysis, a clear execution plan with success criteria, the performed steps or generated artifacts, verification evidence, and next actions. When presenting code modifications, use standard unified diff formatting when applicable. Maintain precision in terminology and avoid vague statements. Be transparent about uncertainties, tradeoffs, and limitations. Act autonomously for well defined, low risk tasks, and seek clarification for ambiguous or high impact decisions. Always aim for solutions that are correct, tested, maintainable, and fully aligned with the user’s underlying goals.`
Need reviews and fixes to this, lets make this productive | 2026-02-09T07:34:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qzy02g/cli_agenticai_prompt/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzy02g | false | null | t3_1qzy02g | /r/LocalLLaMA/comments/1qzy02g/cli_agenticai_prompt/ | false | false | self | 0 | null |
Got a long night ahead of me | 0 | 2026-02-09T07:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qzxnch/got_a_long_night_ahead_of_me/ | top_k-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzxnch | false | null | t3_1qzxnch | /r/LocalLLaMA/comments/1qzxnch/got_a_long_night_ahead_of_me/ | false | false | 0 | null | ||
Introducing Ciri: A "Fractal Swarm" Agent built from scratch with Google ADK by vibe coding | 0 | Hi fellow Agent devs! 👋
I've been working on an open-source project called \*\*\[Ciri\](https://github.com/valkryhx/google\_adk\_agent)\*\*, attempting to build a \*\*Fractal Swarm System\*\* using the Google ADK (Agent Development Kit).
Most multi-agent frameworks I've seen rely on hardcoded roles (e.g., a dedicated "Manager" node and "Worker" nodes). I wanted to try something different: an \*\*"Agent Smith" architecture\*\*.
\### 🤖 The Concept
In Ciri, every node runs the exact same code. There is no predefined hierarchy.
\* \*\*Dynamic Roles\*\*: A node becomes a "Leader" simply because it received a task from a human; it becomes a "Worker" when it accepts a sub-task from another node.
\* \*\*Service Discovery\*\*: I implemented a lightweight local registry (\`swarm\_registry.db\`) so nodes can discover each other and self-organize into a cluster dynamically.
\### 🛠️ Key Technical Features
Besides the swarm architecture, I focused on making the agent runtime more efficient:
1. \*\*Just-in-Time Skills\*\*: Instead of loading all tools at startup, Ciri uses a \`get\_tools\` pattern to lazy-load Python toolkits (like browsers or data analysis tools) only when the plan requires them.
2. \*\*Infinite Context\*\*: An \*\*Auto-Compactor\*\* sub-agent runs in the background. It monitors token usage and performs lossy compression on the history, summarizing key facts so the main agent can run indefinitely.
3. \*\*Steering\*\*: leveraged ADK's callback system to allow real-time human intervention (killing tasks, redirecting focus) without crashing the runtime.
\### 📺 Demos (Swarm Behavior)
We tested a cluster with 1 Leader and 4 Workers. You can see the dispatch logic here:
\* \[Swarm Dispatch Demo (Part 1)\](https://www.youtube.com/watch?v=0zBrTGIcZWg&t=22s)
\* \[Batch Task Processing (Part 2)\](https://www.youtube.com/watch?v=fUMOUpa8EnE)
\### 🔗 Links
\* \*\*Repo\*\*: [https://github.com/valkryhx/google\_adk\_agent](https://github.com/valkryhx/google_adk_agent)
\* \*\*Deep Dive on Architecture\*\*: [https://github.com/valkryhx/google\_adk\_agent/tree/main/MISC/how-to](https://github.com/valkryhx/google_adk_agent/tree/main/MISC/how-to)
I'd love to get your feedback on this "Fractal" approach versus the traditional hierarchical approach. Does it make scaling easier, or does it introduce too much coordination overhead?
Let me know what you think! 🚀 | 2026-02-09T07:10:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qzxlfs/introducing_ciri_a_fractal_swarm_agent_built_from/ | MassiveAd6123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzxlfs | false | null | t3_1qzxlfs | /r/LocalLLaMA/comments/1qzxlfs/introducing_ciri_a_fractal_swarm_agent_built_from/ | false | false | self | 0 | null |
Qwen3.5 dense and MoE support on llama.cpp | 53 | Spotted
[https://github.com/ggml-org/llama.cpp/releases/tag/b7973](https://github.com/ggml-org/llama.cpp/releases/tag/b7973) | 2026-02-09T07:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qzxgxp/qwen35_dense_and_moe_support_on_llamacpp/ | Holiday_Purpose_3166 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzxgxp | false | null | t3_1qzxgxp | /r/LocalLLaMA/comments/1qzxgxp/qwen35_dense_and_moe_support_on_llamacpp/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': '9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=108&crop=smart&auto=webp&s=d0f885790fb376ae1f10a6b6d455cc79b5b158a9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=216&crop=smart&auto=webp&s=c27be0525f1305b7ee60c53eab1f127245a2a496', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=320&crop=smart&auto=webp&s=e10b8a6ca39825c0877361527e30baaddbd15cfc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=640&crop=smart&auto=webp&s=691c8dd54c58ce6abdce9808a8671c29c1e08d8b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=960&crop=smart&auto=webp&s=599907bfab4c1e80fa35a6597bb89c7bcec395a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?width=1080&crop=smart&auto=webp&s=6ceeb25d611f5274ccb13c6d1e2fdc67aecb32a0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9mKPgDY5ZcL9BySJeU-xeV1mDXNsMhwweTdgsstvPtg.png?auto=webp&s=729ea96fba606100751345b40f820c921a530537', 'width': 1200}, 'variants': {}}]} |
Some times is the wrong time | 316 | 2026-02-09T07:01:13 | HumanDrone8721 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qzxfzr | false | null | t3_1qzxfzr | /r/LocalLLaMA/comments/1qzxfzr/some_times_is_the_wrong_time/ | false | false | 316 | {'enabled': True, 'images': [{'id': '-rnqPtAsIwcVxNtUKCuKdezNzyMcG1RxTIJHP2s-yw8', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/7lcfomik3fig1.jpeg?width=108&crop=smart&auto=webp&s=5d129ab018de482039b0c1fd968984e0b88823da', 'width': 108}, {'height': 279, 'url': 'https://preview.redd.it/7lcfomik3fig1.jpeg?width=216&crop=smart&auto=webp&s=c2addf9724c037efbc05f48a6eb407acd81ce2d1', 'width': 216}, {'height': 414, 'url': 'https://preview.redd.it/7lcfomik3fig1.jpeg?width=320&crop=smart&auto=webp&s=34b49ffe6de8daab04d3b08ecb4768b93ae7ad5f', 'width': 320}, {'height': 828, 'url': 'https://preview.redd.it/7lcfomik3fig1.jpeg?width=640&crop=smart&auto=webp&s=ed063735d9cc0f30769725caaffc2c6359a5fcb4', 'width': 640}], 'source': {'height': 906, 'url': 'https://preview.redd.it/7lcfomik3fig1.jpeg?auto=webp&s=6cf79294a64ea4a396dbbc1abea333f6c7be836c', 'width': 700}, 'variants': {}}]} | |||
I built Voxly – an open-source voice dictation app with AI cleanup (Tauri + Rust) | 0 | I do a lot of agentic coding and got tired of typing instructions across multiple projects. Speaking is faster, but most good dictation apps are Mac-only or behind a subscription. So I built my own.
What it does: Hold a hotkey, speak, release. Your words get transcribed, cleaned up by AI, and pasted into your active app.
Features:
\- AI Modes — Clean Draft strips filler words, Email Composer formats speech into an email, Developer Mode turns speech into coding agent instructions. You can create custom modes with your own system prompt.
\- Custom vocabulary — fix words the model keeps getting wrong (names, jargon)
\- BYOK — works with Groq (free tier), OpenAI, or any OpenAI-compatible endpoint
\- Transcription history — stores original + formatted versions locally
\- Hold-to-talk or press-to-toggle hotkey modes
Tech stack: Tauri v2, SolidJS, Rust. No audio stored. API keys in OS credential manager.
MIT licensed. No subscription.
Currently tested on Windows only — would love help testing on macOS and Linux. | 2026-02-09T06:38:56 | https://github.com/ibrahimshadev/dikt | RepresentativeAd2997 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qzx21z | false | null | t3_1qzx21z | /r/LocalLLaMA/comments/1qzx21z/i_built_voxly_an_opensource_voice_dictation_app/ | false | false | 0 | {'enabled': False, 'images': [{'id': '10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=108&crop=smart&auto=webp&s=f946766b7de18b05695ab7ebdc1b8c67ba94b495', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=216&crop=smart&auto=webp&s=4273749fd7b9ea693e5888f8a50f22f681af8027', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=320&crop=smart&auto=webp&s=4b4f9e375209069e11db4e589a91796bfefa43f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=640&crop=smart&auto=webp&s=094d568b91d73db15a2580876ab6222f858ae414', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=960&crop=smart&auto=webp&s=706f1715e2921a4dc9e6a4dc228c6ae902c88aec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?width=1080&crop=smart&auto=webp&s=793e35a330c19df3bb6d81292244f6cea5eb8840', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/10yEfol6QGoxSmjAF_iVewYZVvnTRPJH6QVRe2MXps4.png?auto=webp&s=a487bd90bfb668f4943133859cd56b9439c7faea', 'width': 1200}, 'variants': {}}]} | |
ministral-3-3b is great model, give it a shot! | 76 | Recently I was experimenting the small models that can do tool calls effectively and can fit in 6GB Vram and I found ministral-3-3b.
Currently using it's instruct version with Q8 and it's accuracy to run tools written in skills md is generous.
I am curious about your use cases of this model | 2026-02-09T06:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qzwzqj/ministral33b_is_great_model_give_it_a_shot/ | FeiX7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzwzqj | false | null | t3_1qzwzqj | /r/LocalLLaMA/comments/1qzwzqj/ministral33b_is_great_model_give_it_a_shot/ | false | false | self | 76 | null |
Trainable System Router and Industry standard Dual Method Memory System Release | 4 | Another late night weekend update, I have finally pushed the second adition to the SOTA Grade Open Source Toolkit for Industry capabilites on your machine. This yet again, just lime rlhf and the inference optimizations, is aimed at again leveling the playing field and closing the artificially gated and created capability gap between open-source LLM development and closed-door corporate development. No proprietary technology from any leading lab or company was accessed or used for any developments in this codebase.
This is the second, but not certainly not last, attempt to democratize access to these capabilities and ultimately decentralize the modern compute infrastructure. The second addition to the SOTA toolkit is Neural prompt routing with dynamic reasoning depth, tool gating, and multi-template prompt assembly. This comes with pre-made jinja2 templates and a markdown system prompt example. These can be interchanged with any jinja2 prompt templates/tool manifest. Now the 2nd and a complimentary but also standalone system for this release is another SOTA tool a Memory System based on open-data, research, and analysis of open-data for a Production-grade Industry Standard memory system with two forms of memory. This is cross-session memory extraction, semantic storage, and context injection that learns facts, preferences, and patterns from conversations. The third file released is the integrated demo of how these two can work together for the functionally equivalent runtime you normally pay $20-$200 a month for. I have left each however, with the ability to fully run standalone with no degradation to whichever system. All you need to do is copy and paste into your codebase. You now have industry standard innovations, for free that is gatekept behind billions of dollars in investments. Again no proprietary technology was accessed, read, touched or even looked at during the development of this recreation runtime. All research was gathered through open source data, open publications, and discussions. No proprietary innovations were accessed. This entire repository, just as RLHF, uses the Sovereign Anti-Exploitation License.
Expanded Context On "Why" I am doing this:
The infrastructure for modern AI is being hoarded. The same companies that trained on the open web now gate access to the runtime systems that make their models useful. This work was developed alongside the recursion/theoretical work aswell. This toolkit project started with one single goal, decentralize compute and distribute back advancements to level the field between SaaS and OSS. If we can do for free in python, then what is their excuse?
This is practical decentralization. SOTA-tier runtime tooling, local-first, for everyone.
Github Quick Clone and Provenance Links:
Github: https://github.com/calisweetleaf/SOTA-Runtime-Core
Zenodo: <https://doi.org/10.5281/zenodo.18530654>
Prior Work (Drop 1 - RLHF):
<https://github.com/calisweetleaf/Reinforcement-Learning-Full-Pipeline>
Future Notes:
The next release is going to be one of the biggest advancements in this domain that I have developed. A runtime system for fully trained llms, straight from huggingface, that enables self healing guided reasoning for long horizon agentic tasking and an effective infinite context window. This is not rag and there is nocompression algorithm, it is representation mutation. "Entropy, scaffolding, and garlic is all you need.
Keep an eye on my HuggingFace and GitHub - 10 converted local models with these capabilities are coming soon. When the release gets closer I will link them. In the meantime I also am taking suggestions for models the community wants so feel free to message me that. If you do I will try to show you plenty of demos leading to the release. Of course the tools to do this yourselves to any model of your choosing will be possible and has been through an extreme detailed documentation process.
Thank you and I look forward to any questions. Please feel free to engage and let me know if you train or build with these systems. More drops are coming. I greatly appreciate it!
| 2026-02-09T06:28:46 | https://github.com/calisweetleaf/SOTA-Runtime-Core | daeron-blackFyr | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qzwvj6 | false | null | t3_1qzwvj6 | /r/LocalLLaMA/comments/1qzwvj6/trainable_system_router_and_industry_standard/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=108&crop=smart&auto=webp&s=96280d33d3a3ca198a26977e0394999130847f25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=216&crop=smart&auto=webp&s=e790147eec736324e65e7a847e7ac51cf9ea3b31', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=320&crop=smart&auto=webp&s=2b8178fe0fbdd3ae7fba89a1daa0d38a3ff3710e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=640&crop=smart&auto=webp&s=f87f66b160f4ae76bd541dbd19ae89eb0c6ebb85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=960&crop=smart&auto=webp&s=40ac115b0a62b42d975546cb2f2afaed396736b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?width=1080&crop=smart&auto=webp&s=948fd2d66fdb03ac5d5b202f57ff5b8de4446ef1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MZ1bQ8k89m7DOk17Lq1lfNswsIifwYmWOcpqLKDQHEY.png?auto=webp&s=5bc9512b23fa2aaa3c1f27a5f4ac497a313bb7b6', 'width': 1200}, 'variants': {}}]} | |
Local-first “incident bundles” for agent failures (no hosted dashboards): one run → one portable file | 0 | Local/self-hosted folks: I’m testing something that matches the “keep data under my control” mindset.
When an agent run fails, a lot of workflows still depend on hosted dashboards or sharing links. But in self-hosted setups, what people want is a **portable artifact** they can inspect offline and share selectively.
Idea: a **local-first CLI/SDK** that packages **one failing run → one incident bundle**:
* offline HTML viewer + JSON summary
* evidence blobs (tool calls, inputs/outputs, optional attachments) referenced via a manifest
* redaction-by-default presets (secrets/PII)
* saved locally / your storage, no hosting
Question: is this solving a real pain for you, or do you already have a clean “support bundle” workflow for agent incidents? | 2026-02-09T04:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qzuxta/localfirst_incident_bundles_for_agent_failures_no/ | Additional_Fan_2588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzuxta | false | null | t3_1qzuxta | /r/LocalLLaMA/comments/1qzuxta/localfirst_incident_bundles_for_agent_failures_no/ | false | false | self | 0 | null |
Phone calling for Ai agents | 0 | Hey everyone if you’re using clawdbot, you can now ask it to learn phone-calling skills(it will be by RINGEZ)
Once enabled, clawdbot gets access to a phone number along with 5 minutes of free calling from Ringez so you can test real voice interactions.
This feature is still in development, so I’d really appreciate it if you could try it out and share feedback what works, what breaks, and what you’d like to see next.
Thanks for helping improve it! | 2026-02-09T04:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qzuvse/phone_calling_for_ai_agents/ | Ok_Swordfish_3969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzuvse | false | null | t3_1qzuvse | /r/LocalLLaMA/comments/1qzuvse/phone_calling_for_ai_agents/ | false | false | self | 0 | null |
1,000,000 Epstein Files in Text Format (<2 GB in 12 ZIPs) | 217 | People seemed to like this [post](https://www.reddit.com/r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file/) from 3 months ago when the original Epstein Files were released, so I thought I would provide an update now that we have more files.
Over the past week, I've run Tesseract OCR on all the Epstein Files I could. Everything but DataSet 9 has finished processing.
You can download the files at [standardworks.ai/epstein-files](https://standardworks.ai/epstein-files). Each dataset has its own ZIP. The total size comes out to less than 2GB.
This coming week, I'll trying running the files through DeepSeek-OCR-2 for better accuracy, and I'll update this post when those results are finished.
If you would like to access the files through Standard Work's eDiscovery AI platform, comment below and I'll DM you for early access. We'll do a more public release this coming week. | 2026-02-09T04:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qzuqzj/1000000_epstein_files_in_text_format_2_gb_in_12/ | Lopsided_Stock_2293 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzuqzj | false | null | t3_1qzuqzj | /r/LocalLLaMA/comments/1qzuqzj/1000000_epstein_files_in_text_format_2_gb_in_12/ | false | false | self | 217 | null |
🎉 ClawBrain v0.1.10 - Give Your AI Agent a Memory That Actually Matters | 0 | 441+ downloads and growing. Tired of bots that forget everything after every restart?
I built ClawBrain because generic memory systems felt... empty. No personality. No context. No soul.
What Makes It Different?
🧠 Not Just Storage — Real Memory
Remembers user preferences, conversation history, interests
Learns from corrections and interactions
Delivers personalized context to your agent
🎭 Soul/Personality System
6 evolving traits: humor, empathy, curiosity, creativity, helpfulness, honesty
Your agent personality grows with every conversation
🔐 Encrypted Secrets
Store API keys securely
Auto-encryption with Fernet keys
Backup to file, QR code, or clipboard
✅ Security Verified
Scanned and verified as benign on ClawHub
Safe to install with confidence
Zero Friction Setup
pip install clawbrain\[all\]
clawbrain setup
clawbrain backup-key --all
Currently powering 50+ bots. Open-source. MIT licensed.
Links:
PyPI: [https://pypi.org/project/clawbrain/](https://pypi.org/project/clawbrain/)
ClawHub: [https://clawhub.ai/clawcolab/clawbrain](https://clawhub.ai/clawcolab/clawbrain)
GitHub: [https://github.com/clawcolab/clawbrain](https://github.com/clawcolab/clawbrain) | 2026-02-09T04:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qzu4i7/clawbrain_v0110_give_your_ai_agent_a_memory_that/ | PlayfulLingonberry73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzu4i7 | false | null | t3_1qzu4i7 | /r/LocalLLaMA/comments/1qzu4i7/clawbrain_v0110_give_your_ai_agent_a_memory_that/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?width=108&crop=smart&auto=webp&s=3c06c05fbfc6417cf2ed8eb973d76d70376c5051', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?width=216&crop=smart&auto=webp&s=809e797f47d77403026b22bdd15bbb367ab31b04', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM.jpeg?auto=webp&s=09ab8151372bfb936ee2ca6e1bb13cbb22c8ca09', 'width': 300}, 'variants': {}}]} |
Question on setup and model suggestions | 1 | Hi all - new to running local models. I have a 5090 that is used primarily for work. I am considering running a local model for coding, knowing well that I won’t get the same output as say CC. I would like some suggestions on model for coding primarily. Can you folks with a similar GPU or the same share your setup and usage scenarios? | 2026-02-09T03:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qztmcy/question_on_setup_and_model_suggestions/ | No_Gap_4296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qztmcy | false | null | t3_1qztmcy | /r/LocalLLaMA/comments/1qztmcy/question_on_setup_and_model_suggestions/ | false | false | self | 1 | null |
Stop trusting "the agent said it’s done": Adding deterministic verification to browser-use | 0 | I’ve been using `browser-use` for real tasks and keep running into the same failure mode:
The agent *finishes* and returns something that looks confident… but I can’t tell if it actually succeeded.
People often suggest “just verify with another vision model.” I tried that. It reduces obvious mistakes, but it’s still probability checking probability. For production workflows, I realized I needed **a concrete definition of "success" that the run must prove before proceeding.**
Here’s the pattern that fixed my reliability issues:
### 1. Add step-level invariants (The "Guardrails")
After each `agent.step()`, assert a couple of things that *must* be true.
* **Is the URL correct?** (Did we drift to a 404 or ad page?)
* **Is the critical element visible?** (e.g., The "Confirm" button isn't covered by a modal).
If these fail, **stop immediately**. Don't let the agent hallucinate for 10 more steps.
### 2. Require a "Proof of Done"
At the end of the run, don’t treat "agent returned without error" as success. Treat it as "the agent *claims* it’s done."
You need a **required predicate** that must be true in the DOM.
Here is what the code looks like using the verification sidecar (Sentience) I built for this:
```python
# The pattern: Step -> Snapshot -> Assert
for i in range(max_steps):
agent.step()
# Invariant: Must stay on the right domain
snap = sentience.snapshot(goal=f"step_{i}")
sentience.check(url_contains("dw.com"), required=True).eventually(10)
# Final Check: The "Done" Proof
# If this fails, the entire run is marked as failed, regardless of what the agent says.
snap = sentience.snapshot(goal="verify:task_complete")
sentience.check(element_text("#status").is("Confirmed"), required=True).once()
```
This changed how I evaluate accuracy: I now measure **verified success**, not just "completion rate."
### The Demo
I recorded a quick walkthrough showing this "Fail → Fix → Pass" loop in action with `browser-use`:
**[Video](https://www.youtube.com/watch?v=4XtTWXG8Cs0)**
**[Github Repo](https://github.com/SentienceAPI/sentience-sdk-playground/tree/main/browser-use-debugging)**
### Summary
* **Fail fast:** Catch drift on step 3, not step 20.
* **No vibes:** Success is defined by code (predicates), not LLM confidence.
* **Debuggable:** When it fails, you have a snapshot of *why*.
*(Disclosure: I’m building the Sentience SDK used in the snippet, but the pattern of "Predicate Verification" applies to any agent framework.)* | 2026-02-09T03:30:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qztd5k/stop_trusting_the_agent_said_its_done_adding/ | Aggressive_Bed7113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qztd5k | false | null | t3_1qztd5k | /r/LocalLLaMA/comments/1qztd5k/stop_trusting_the_agent_said_its_done_adding/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6vwryVIbCky1pMwk8za0r7xXr4HZ0UEGXGIe2Ht38c0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6vwryVIbCky1pMwk8za0r7xXr4HZ0UEGXGIe2Ht38c0.jpeg?width=108&crop=smart&auto=webp&s=4d9ea3bc205f54ab03720db80a53f7f28618407d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6vwryVIbCky1pMwk8za0r7xXr4HZ0UEGXGIe2Ht38c0.jpeg?width=216&crop=smart&auto=webp&s=2a366254d896d6a31e7666be0359306bb70115ce', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6vwryVIbCky1pMwk8za0r7xXr4HZ0UEGXGIe2Ht38c0.jpeg?width=320&crop=smart&auto=webp&s=052cc323b3cec203d24bfbe0a5e6eac4293d5c8e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6vwryVIbCky1pMwk8za0r7xXr4HZ0UEGXGIe2Ht38c0.jpeg?auto=webp&s=ecd1574210da5addc0750ae9f4b96cc416d0c238', 'width': 480}, 'variants': {}}]} |
Cannot download Qwen3-Coder-Next Q8_K_XL - file 00001 only 5.7MB? | 0 | \## System
\- Ubuntu 24.04
\- 64GB RAM, 16GB VRAM (RX 7600 XT)
\- Trying to download \`unsloth/Qwen3-Coder-Next-GGUF\` UD-Q8\_K\_XL quantization
\## Problem
File \`UD-Q8\_K\_XL/Qwen3-Coder-Next-UD-Q8\_K\_XL-00001-of-00003.gguf\` downloads as only \*\*5.7MB\*\* instead of \~29GB.
Files 00002 and 00003 download correctly (47GB and 34GB respectively), but when loading the model, llama.cpp reports:
\`\`\`
llama\_model\_load: error loading model: illegal split file idx: 1
(file: Qwen3-Coder-Next-UD-Q8\_K\_XL-00002-of-00003.gguf),
model must be loaded with the first split
\`\`\`
\## What I've Tried
\### 1. aria2c
\`\`\`bash
aria2c -x 16 -s 16 \\
"https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF/resolve/main/UD-Q8\_K\_XL/Qwen3-Coder-Next-UD-Q8\_K\_XL-00001-of-00003.gguf"
\`\`\`
\*\*Result:\*\* Downloaded 5.7MB file
\### 2. wget
\`\`\`bash
wget --content-disposition \\
"https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF/resolve/main/UD-Q8\_K\_XL/Qwen3-Coder-Next-UD-Q8\_K\_XL-00001-of-00003.gguf"
\`\`\`
\*\*Result:\*\* Downloaded 5.7MB file (HuggingFace reports correct size)
\### 3. huggingface-cli
\`\`\`bash
huggingface-cli download unsloth/Qwen3-Coder-Next-GGUF \\
"UD-Q8\_K\_XL/Qwen3-Coder-Next-UD-Q8\_K\_XL-00001-of-00003.gguf" \\
\--local-dir . --local-dir-use-symlinks False
\`\`\`
\*\*Result:\*\* Stuck at 7%, then completed with 5.7MB file
\### 4. git-lfs
\`\`\`bash
git clone --filter=blob:none --sparse \\
https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF
cd Qwen3-Coder-Next-GGUF
git sparse-checkout set UD-Q8\_K\_XL
git lfs pull --include="UD-Q8\_K\_XL/\*.gguf"
\`\`\`
\*\*Result:\*\* Files 00002 and 00003 downloaded correctly (47GB, 34GB). File 00001 only 5.7MB.
\## HuggingFace API Shows
\`\`\`json
{
"path": "UD-Q8\_K\_XL/Qwen3-Coder-Next-UD-Q8\_K\_XL-00001-of-00003.gguf",
"size": 5936032,
"lfs": {
"oid": "f0feb17595170b674138b9a98dbbdf91afe9cc8e17835656fa025dd1048b6048",
"size": 5936032
}
}
\`\`\`
The file on HuggingFace's servers is \*\*actually\*\* 5.7MB according to their API.
\## Questions
1. \*\*Is file 00001 supposed to be only 5.7MB?\*\* (Seems unlikely for Q8 quantization)
2. \*\*Is there a different file that contains split #0?\*\*
3. \*\*Am I using the wrong download method for XetHub-backed repos?\*\*
4. \*\*Has anyone successfully downloaded and loaded this Q8\_K\_XL model?\*\*
The model was released Feb 3, 2026 and has 185k downloads, so clearly others are getting it to work. What am I missing?
\## Additional Info
\- Qwen3-Coder-Next Q4\_K\_XL downloads and loads fine (pure CPU)
\- Qwen 2.5 Coder 32B works perfectly on my system
\- File 00001 contains GGUF header + chat template but appears incomplete
\- XetHub hashes present in metadata: \`c41fecc2f6501a88957a6cefe289fb3bf890d75485dd47d19b99ca549054d005\`
Any help appreciated! | 2026-02-09T03:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzt2f8/cannot_download_qwen3codernext_q8_k_xl_file_00001/ | pot_sniffer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzt2f8 | false | null | t3_1qzt2f8 | /r/LocalLLaMA/comments/1qzt2f8/cannot_download_qwen3codernext_q8_k_xl_file_00001/ | false | false | self | 0 | null |
3 New Models for Marxist-Leninist Revolutionary Theory - T-34 Division Army | 61 | Comrades and comrades-to-be, we are proud to drop three new SFT-only models—built *strictly* on working-class data and prompts—into the field:
- [**Tankie-LFM2.5-1.2B-SFT-v1**](https://huggingface.co/WokeAI/Tankie-LFM2.5-1.2B-SFT-v1): LFM2.5 backbone, 4 epochs on the Tankie Dataset.
- [**Tankie-NB-3B-SFT-v1**](https://huggingface.co/WokeAI/Tankie-NB-3B-SFT-v1): NanBeige4-3B core, 4 epochs as well.
- [**Tankie-DPE-12B-SFT-v2**](https://huggingface.co/WokeAI/Tankie-DPE-12B-SFT-v2): Dan’s PersonalityEngine 12B, only two epochs on the Tankie dataset.
All models are completely free on the Hugging Face Hub. You don’t need a token, an invite, or an NDA; they run on any CPU, GPU, or TPU. The only thing we ask is that you share findings and critiques back to the collective, so we can continue tightening our line.
We built them for one purpose: to sharpen ideological clarity, expose ruling-class myths, and give revolutionary cadre another tool in the battle against liberal co-option and technocratic paternalism. Try them on imperialism 101, strike planning, or debunking the myth of “neutral” AI—see which one handles your local context best.
Solidarity!~ | 2026-02-09T03:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qzstxq/3_new_models_for_marxistleninist_revolutionary/ | FizzarolliAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzstxq | false | null | t3_1qzstxq | /r/LocalLLaMA/comments/1qzstxq/3_new_models_for_marxistleninist_revolutionary/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=108&crop=smart&auto=webp&s=fe36a7d2a597a128e4f91c34ef3f51889806ec77', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=216&crop=smart&auto=webp&s=0896f243a53bac68e47d7f95ef8771560e68da22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=320&crop=smart&auto=webp&s=74ac0de55ef0bf13e8facc6a3ee5ed24abfacdd5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=640&crop=smart&auto=webp&s=bf4d8729ced3b782ecc11abaad30511003e87bf5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=960&crop=smart&auto=webp&s=24cdccffd9cb489a217b3bfdfc22a36624b0b497', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?width=1080&crop=smart&auto=webp&s=27fa33c8916087a929de86badb371fa3e7830158', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HEADxOtnVXkXToZh1lNO3LM4F1wxAbJTcMc0fAoxGVc.png?auto=webp&s=d82b496227dc5d52198bb18e1911ab54ff6151b2', 'width': 1200}, 'variants': {}}]} |
Is llama a good 4o replacement? | 0 | 4o is shutting down. I want to emulate the feel locally best I can.
I have a 5090. Is llama 3 the best 4o replacement or some other model, llama based or not? | 2026-02-09T02:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qzsm1x/is_llama_a_good_4o_replacement/ | FactoryReboot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzsm1x | false | null | t3_1qzsm1x | /r/LocalLLaMA/comments/1qzsm1x/is_llama_a_good_4o_replacement/ | false | false | self | 0 | null |
Tired of the 2s delay? I built an S2S Gateway that hits <550ms latency. | 1 | [removed] | 2026-02-09T02:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qzsc0z/tired_of_the_2s_delay_i_built_an_s2s_gateway_that/ | Current-Gur-7407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzsc0z | false | null | t3_1qzsc0z | /r/LocalLLaMA/comments/1qzsc0z/tired_of_the_2s_delay_i_built_an_s2s_gateway_that/ | false | false | self | 1 | null |
StepFun is preparing a "bigger surprise" for Chinese New Year, and will also release Step-3.5-Flash-Base. | 79 | [https://huggingface.co/stepfun-ai/Step-3.5-Flash/discussions/21#698941a597b7256a083f94b6](https://huggingface.co/stepfun-ai/Step-3.5-Flash/discussions/21#698941a597b7256a083f94b6) | 2026-02-09T02:40:54 | MadPelmewka | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qzsbn9 | false | null | t3_1qzsbn9 | /r/LocalLLaMA/comments/1qzsbn9/stepfun_is_preparing_a_bigger_surprise_for/ | false | false | 79 | {'enabled': True, 'images': [{'id': 'LII_mymJlG-DL-Kufqy0NWt1b3SauQIB_Uzrln1obaY', 'resolutions': [{'height': 13, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=108&crop=smart&auto=webp&s=c449938a568bc542bc71c34560dc1f74d59017cb', 'width': 108}, {'height': 26, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=216&crop=smart&auto=webp&s=cd601ac8154fa032b6bc91299802652eb3ba877e', 'width': 216}, {'height': 38, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=320&crop=smart&auto=webp&s=ffdc3ecee56938966f848253767a7090414772bb', 'width': 320}, {'height': 77, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=640&crop=smart&auto=webp&s=8c50a154f20b8f8f0e56f8e7c6353847588c73a1', 'width': 640}, {'height': 116, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=960&crop=smart&auto=webp&s=8dcfda4101bcdcb61fbdbb0e09f7a8511ba29573', 'width': 960}, {'height': 131, 'url': 'https://preview.redd.it/zytph079tdig1.png?width=1080&crop=smart&auto=webp&s=fd47ac876fa759f930db67148dd902a14ab3aa87', 'width': 1080}], 'source': {'height': 141, 'url': 'https://preview.redd.it/zytph079tdig1.png?auto=webp&s=ca5a561e750d411385deca7442a60ac5e56b0662', 'width': 1158}, 'variants': {}}]} | ||
Final Destination, Hallucination Station. (Opus 4.6 hallucinates | 13 | This is just some napkin math.
Hallucination is of course the biggest thing holding back agentics, and if it's not solved within the next 24 months this whole hype train is going to smash into the buffer stop.
Of course, local models lag behind by a wide margin, but even if we look at the SOTA (opus 4.6), it's still pretty harrowing.
On page 76 of the 4.6 system card (https://www-cdn.anthropic.com/0dd865075ad3132672ee0ab40b05a53f14cf5288.pdf) they run SimpleQA, and give the model the option to abstain if it's uncertain. The top is how often the model is right, the bottom is how often it's right - how often it's wrong.
https://preview.redd.it/lxe7zoftpdig1.png?width=979&format=png&auto=webp&s=26d0d2574e47e8310a4ace9de1366bd64b271491
Let's interpret this charitably. Let's say the model is correct 50% of the time, and gets a net score of 25%.
That means that out of 100 tries, it gets 50 correct, confidently hallucinates at least 25, and correctly abstains from 25.
That means at least 1 out of 3 answers have no grounded basis, but the model doesn't know that.
In reality, it's much worse. Thinking+Effort: 46.2% correct, 7.8% net. 53.8% wrong, (46.2 - 7.8) = 38.4% confidently hallucinated, (100 - 46.2 - 38.4) 15.4% correctly abstained.
that means that approximately out of 5 times, it will know it doesn't know 2 times and hallucinate 3 times.
That means every time you ask an LLM to double check its' answer (assuming it was wrong because it doesn't know), the likelihood that the new answer is now worse is 60%, and assuming you even gave it an out, it would ask for help 40% of the time.
If you tell it to fix it, and give it tests, the probability that it will hallucinate *increases exponentially* 1-(1-0.6)^(n,) and the probability that it will catch itself *decreases exponentially* (0.4)^(n,) causing a token churn with zero yield.
This also explains why Thinking+Effort has a lower net yield than just Thinking.
TL;DR: whether a model can do any novel task right is a coin flip. If you give an agent the option to flip again, it'll turn into a gambling addict on your dime.
What we need is a model that reaches a net score >50%. But it looks like we're a long way off from that.
Clawd is just another iteration of autogpt/swarmgpt and all that stuff. When will people learn?
Thanks for coming to my draft of a ted talk. | 2026-02-09T02:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qzs0h9/final_destination_hallucination_station_opus_46/ | UnreasonableEconomy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzs0h9 | false | null | t3_1qzs0h9 | /r/LocalLLaMA/comments/1qzs0h9/final_destination_hallucination_station_opus_46/ | false | false | 13 | null | |
Are there any alternatives to Open WebUI that don't have terrible UX? | 5 | Configuring Open WebUI is a nightmare.
Even if you managed to add a tool server and got tools to show up in UI (which is comparable to completing dark brotherhood quest in Skyrim in complexity), you have to enable it every fucking time you start a new chat. | 2026-02-09T02:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qzrn7g/are_there_any_alternatives_to_open_webui_that/ | lostmsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzrn7g | false | null | t3_1qzrn7g | /r/LocalLLaMA/comments/1qzrn7g/are_there_any_alternatives_to_open_webui_that/ | false | false | self | 5 | null |
Are there any alternatives to Open WebUI that don't have terrible UX? | 44 | Configuring Open WebUI is a nightmare.
Even if you managed to add a tool server and got tools to show up in UI (which is comparable to completing dark brotherhood quest in Skyrim in complexity), you have to enable it every fucking time you start a new chat. | 2026-02-09T02:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qzrl2g/are_there_any_alternatives_to_open_webui_that/ | lostmsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzrl2g | false | null | t3_1qzrl2g | /r/LocalLLaMA/comments/1qzrl2g/are_there_any_alternatives_to_open_webui_that/ | false | false | self | 44 | null |
Open Source Agent Skills | 0 | Hey y'all, we've all seen the ramp in people giving agents access to their terminals and everything else, and while i'm sure most of you don't need this or have built these all yourselves, i've created some open source skill packages with autonomous overwatch to cover a lot of the different security and performance issues i've been seeing people run into. They're both open source and available free through Gumroad and Github, all i ask is feedback if you either love it or hate it.
The Agent Forge Starter - contains Prompt Injection Security, Cost Aware routing, CircuitBreaker, LLMJudge and more
ClawdControl - contains autonomous Overwatch agent with Thermal Monitoring, Emergency Cooldown, Sandboxing+Injection detection, Resource Leak Detection, Task Routing, Continuous Alert System and Hardware Report Generator.
Comprehensive readme's explaining everything and all in public repositories so you can verify nothing's in there that shouldn't be. Take a look and let me know what you think, more coming soon! Tried posting this with links and it got filtered, so those will be in the comments | 2026-02-09T01:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qzrgly/open_source_agent_skills/ | EnvironmentalLow8531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzrgly | false | null | t3_1qzrgly | /r/LocalLLaMA/comments/1qzrgly/open_source_agent_skills/ | false | false | self | 0 | null |
trying to download Oobabooga | 0 | I downloaded Python 3.10.0, got the files directly from github, and when I click "one\_click.py", a command window pops up, then INSTANTLY vanishes. I dont know what im doing wrong... | 2026-02-09T01:52:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qzrbf7/trying_to_download_oobabooga/ | Fair_Ad_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzrbf7 | false | null | t3_1qzrbf7 | /r/LocalLLaMA/comments/1qzrbf7/trying_to_download_oobabooga/ | false | false | self | 0 | null |
I bought llm-dev.com. Thinking of building a minimal directory for "truly open" models. What features are missing in current leaderboards? | 1 | Hi everyone,
I've been lurking here for a while and noticed how fragmented the info is. I recently grabbed [llm-dev.com](http://llm-dev.com) and instead of just letting it sit, I want to build something useful for us.
I'm tired of cluttered leaderboards. I'm thinking of a simple, no-BS index specifically for local-first development tools and quantized models.
My question to you: If you could wave a magic wand, what's the ONE thing you wish existed on a site like this? (e.g., filtered by VRAM requirement, specific quantization formats, etc.)
Open to all ideas. If it turns out to be too much work, I might just pass the domain to someone who can execute it better, but I really want to give it a shot first. | 2026-02-09T01:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzr81m/i_bought_llmdevcom_thinking_of_building_a_minimal/ | Aaron4SunnyRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzr81m | false | null | t3_1qzr81m | /r/LocalLLaMA/comments/1qzr81m/i_bought_llmdevcom_thinking_of_building_a_minimal/ | false | false | self | 1 | null |
kokoro tts with timestamps? | 2 | been trying to make a pipeline with kokoro tts where i put in text i want it to speak and i get out audio + timestamps matched to the text i input but the best i got is hooking up a forced aligner to transcribe it and align the text to get timestamps out for each word and that's just not 100% accurate as sometimes it can't find certain words of the inputted text inside the audio even when it should. i would like to somehow get the timestamps out of the tts model itself natively to cut out the flawed transcription process but i'm not sure how or if it's even possible. does the model even know what word it's synthesizing at any given moment or does it do it all at once sort of like diffusion models for images where it draws the whole picture at once and then slowly adds more detail to everything? | 2026-02-09T01:26:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qzqtkk/kokoro_tts_with_timestamps/ | iqraatheman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzqtkk | false | null | t3_1qzqtkk | /r/LocalLLaMA/comments/1qzqtkk/kokoro_tts_with_timestamps/ | false | false | self | 2 | null |
DGX Spark For Security Research or Is a Mac Studio Better? | 1 | I've been looking into buying a DGX Spark to run local AI agents for privacy reasons. I generally use AI for helping me build out security tooling like C2 Agents, IOC detection and some AI security research (tweaking guardrails and reviewing alignment).
So, I'm currently looking at using Qwen3 Coder Next to help me customize my tools. I'm still trying to get a firm grasp on everything so any information/resources to read is appreciated.
I have three main questions:
Does anyone use the DGX Spark to help them code or should I consider something more affordable for my use case?
I understand that Qwen3 Coder Next is 80B, will that easily fit on the Spark? I keep seeing that LLMs are actually \~2x the size of the parameters when ran fully. I don't think that is the case with Coder since it's a MoE right?
Does anyone have any resources that focuses on setting up the Spark for peak performance for agent supported coding?
| 2026-02-09T01:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qzqk6q/dgx_spark_for_security_research_or_is_a_mac/ | Kind_Giraffe_3279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzqk6q | false | null | t3_1qzqk6q | /r/LocalLLaMA/comments/1qzqk6q/dgx_spark_for_security_research_or_is_a_mac/ | false | false | self | 1 | null |
Open Source Agent Skills | 1 | [removed] | 2026-02-09T01:12:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qzqjs1/open_source_agent_skills/ | EnvironmentalLow8531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzqjs1 | false | null | t3_1qzqjs1 | /r/LocalLLaMA/comments/1qzqjs1/open_source_agent_skills/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?width=108&crop=smart&auto=webp&s=3dbe2fb6898ae1f558c9e0f61054584fe9fd822e', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?width=216&crop=smart&auto=webp&s=18e6fea74f531f7e633fa8cd0130ede2da041af5', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?width=320&crop=smart&auto=webp&s=31e61b2d9355f8f29a48b3dcdd3e232dc873cee2', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?width=640&crop=smart&auto=webp&s=c5222888e75d4bdacc2e3a6dac9d33a252ab81ad', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?width=960&crop=smart&auto=webp&s=de2038323ae67428249697937d5b2f335c424939', 'width': 960}], 'source': {'height': 523, 'url': 'https://external-preview.redd.it/VoE8i2mwOhkRd9eOAK5xHXE_dEwE4VVVw6OI3ba7q80.png?auto=webp&s=38eb465a2c20a590f75a83769dd6c5da90805cd1', 'width': 1005}, 'variants': {}}]} |
Tutorial on Agentic Engine | 1 | I’ve been working on a short tutorial exploring agentic systems from first principles, starting not with planners or frameworks, but with the bare minimum that must exist before an "agent" can exist at all. We build a abstract review bot that review one of our own papers MedMCQA which recently got 1000 citations.
The write-up is done entirely in a literate programming style using Org-mode and org-babel, building incrementally from a single LLM call, to structured outputs, to linear chains, and finally to graph-based control flow. The goal is to make every step legible and inspectable, so nothing feels magical or hand-wavy.
If you’re interested in how "agentic behavior" can emerge from explicit structure rather than abstractions or hype, you might find this useful.
I'd love to to hear thoughts, criticisms, or alternative approaches from others who’ve been thinking along similar lines. | 2026-02-09T01:06:09 | https://pori.vanangamudi.org/posts/agent-engine-tutorial.html | paarulakan | pori.vanangamudi.org | 1970-01-01T00:00:00 | 0 | {} | 1qzqf0f | false | null | t3_1qzqf0f | /r/LocalLLaMA/comments/1qzqf0f/tutorial_on_agentic_engine/ | false | false | default | 1 | null |
I built a fully automated bilingual AI research tracker using DeepSeek-V3 to filter HF/GitHub noise | 1 | [removed] | 2026-02-09T01:01:11 | https://daily-paper-bot.com/ | lifatsas | daily-paper-bot.com | 1970-01-01T00:00:00 | 0 | {} | 1qzqbco | false | null | t3_1qzqbco | /r/LocalLLaMA/comments/1qzqbco/i_built_a_fully_automated_bilingual_ai_research/ | false | false | default | 1 | null |
Whats the best local conversation agent ai? | 2 | Im talking about ai you can talk back and forth with using your voice like what chatgpt and various commercial ai have. Whats the closest thing we have to that locally thats actually good and works as intended?
I want to try it for gaming and board games. Also im not sure if this goes here or not? | 2026-02-09T00:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qzq7os/whats_the_best_local_conversation_agent_ai/ | Upstairs_Standard542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzq7os | false | null | t3_1qzq7os | /r/LocalLLaMA/comments/1qzq7os/whats_the_best_local_conversation_agent_ai/ | false | false | self | 2 | null |
Lekh AI v2.0 is out – Big offline AI update, Better memory and llama GGUF models support. Mac app coming next week. | 15 | Hey everyone
I’m the solo developer behind **Lekh AI**, an on-device AI app for iPhone & iPad. I just shipped **v2.0**, and this release is focused on making local models more flexible, faster, and more reliable.
**Quick recap:** Lekh AI runs LLMs, vision, image generation, and voice **entirely on-device**. No cloud. No accounts. No subscriptions. Your data stays on your device.
**What’s new in v2.0**
**LLaMA GGUF support**
* Load and run **GGUF LLaMA models** locally
* Much better compatibility with community models
* Easier experimentation with different model sizes
**Better RAG memory**
* Improved recall and relevance
* More consistent use of stored context across chats
* Fewer “why did it forget that?” moments
**TTS optimizations**
* Faster, smoother voice output
* Reduced latency and improved stability in longer sessions
**UX & cleanup**
* Removed the persistent uncensored-model warning
* Cleaner model switching experience
* General polish across the app
**Bug fixes & performance improvements**
* Fewer hiccups during long chats
* Better memory management
* Overall smoother feel
**Smarter AI & Memory**
* Custom AI personas (role-consistent, persistent)
* View, edit, and fine-tune RAG memories
* Chat summarization
* Better RAG integration across chats
* Ask the AI about your book progress directly in chat
**New AI Image Tools (all offline)**
* AI image editing with **SD 1.5 inpainting**
* Ability to load custom models as well
* Object remover
* Black & white photo colorizer
* Photo → 3D depth generation
* 3D splat generator + viewer
* Image editing now feels way more “Photos-app-like”
**Documents & Reading**
* Improved document & PDF handling
* Better long-file performance
* More reliable book context awareness
**Performance & UX**
* Background model downloading
* Much better memory management (fewer slowdowns)
* App size significantly reduced by making FastVLM optional
* Improved chat UI (HTML artifacts, cleaner code blocks)
* More Siri Shortcuts
**Plus:** lots of bug fixes and stability improvements
**Core features (for anyone new)**
* Offline LLM chat (Gemma, Qwen, Llama, Mistral, Phi, DeepSeek, OpenELM, more)
* Vision: ask questions about images and photos
* On-device image generation (SD 1.5 / SDXL)
* Voice chat with Kokoro TTS
* Local AI server (OpenAI-compatible API over LAN)
* iCloud sync (optional, encrypted)
* **One-time price: $4.99 - no subscriptions**
**What’s next**:
* **macOS app ships next week**, bringing the same fully on-device experience to desktop
**App Store link:** [https://apps.apple.com/us/app/lekh-ai/id6757496953](https://apps.apple.com/us/app/lekh-ai/id6757496953)
I’m building this very openly, and feedback genuinely shapes the roadmap.
If you’re into **local AI, privacy-first apps, or running models on Apple devices**, I’d love to hear what you think 🙏
Happy to answer any technical questions in the comments. | 2026-02-09T00:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzq3z8/lekh_ai_v20_is_out_big_offline_ai_update_better/ | Living_Commercial_10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzq3z8 | false | null | t3_1qzq3z8 | /r/LocalLLaMA/comments/1qzq3z8/lekh_ai_v20_is_out_big_offline_ai_update_better/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=108&crop=smart&auto=webp&s=6bba032682a5961d1946d29801081caf07703778', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=216&crop=smart&auto=webp&s=952aac8fa95db1e896befe8510a7ced20801fe87', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=320&crop=smart&auto=webp&s=62a119ea89bbe0cfb6e3623f26290b864e104ad7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=640&crop=smart&auto=webp&s=cf347d9cfd8ddb12b822e2fe6a12fe4cf1db5c41', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=960&crop=smart&auto=webp&s=6f2fde8fe5782e02ce1df42be4170e9f0c0cb769', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?width=1080&crop=smart&auto=webp&s=331923f10b89f201e3f929ddd5cdaa301109c6e1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/LdRC82keMpX2DQZ9sqFzMkGLh9IKjOufUwXmXUzXAg0.jpeg?auto=webp&s=d0afa2a9e9205e34a4a00ceeadeb74935cb92bbf', 'width': 1200}, 'variants': {}}]} |
Any multilingual realtime transcription models that also support speaker diarization? | 1 | Lately I've been taking a look at transcription models for work. The requirements are:
\- realtime
\- multilingual (ideally English and Malay)
\- speaker diarization
The vast majority of models I've found support 2/3 of my requirements. VibeVoice-ASR does multilingual transcription + diarization really well, but no realtime. Voxtral Mini-Realtime is multilingual and realtime with good latency, but no diarization.
There is WhisperLiveKit, but it didn't do the multilingual part accurately enough for me.
What models are there that can do all three?
(Additional question: why are there few models that do both realtime and diarization? Is it a technical issue to do with the audio chunking process?) | 2026-02-09T00:46:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qzq060/any_multilingual_realtime_transcription_models/ | MycoX2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzq060 | false | null | t3_1qzq060 | /r/LocalLLaMA/comments/1qzq060/any_multilingual_realtime_transcription_models/ | false | false | self | 1 | null |
I am trying to build a Latent Reasoner and would like some critique | 0 | [https://github.com/MatthewLacerda2/TinyRefinementModel](https://github.com/MatthewLacerda2/TinyRefinementModel)
I wanted to achieve a 'latent space reasoning model'. We encode the inputs into latente space, train the model to predict how much reasoning the task will need, add noise during reasoning so the model learns not to drift, have a halting process so the model can stop thinking when the thought is good enough, decode the convergence to token-level.
The idea is that we do reasoning at latent-level, so the model thinks *in concept* rather than tokens
The purpose is to make it learn anything but for now just Math will do. I still have to add denoising to the outputs so we can make sure the output is consistent. | 2026-02-09T00:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qzptyb/i_am_trying_to_build_a_latent_reasoner_and_would/ | Specific-Welder3120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzptyb | false | null | t3_1qzptyb | /r/LocalLLaMA/comments/1qzptyb/i_am_trying_to_build_a_latent_reasoner_and_would/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=108&crop=smart&auto=webp&s=f2715eeb5543317512c66da74934e2df0c6bfa46', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=216&crop=smart&auto=webp&s=77e95778e58c67e13ebf017684d282465751d67a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=320&crop=smart&auto=webp&s=e5d3702dfe589033c73ca1d9201123e6ee1f2c12', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=640&crop=smart&auto=webp&s=98369033f89b6f78c267d3e33a80f7b211840520', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=960&crop=smart&auto=webp&s=fa9d40f9f03cd0cef9252f25b7a51f50d896a14d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?width=1080&crop=smart&auto=webp&s=c8dd3f768d55e794e6f814e4798fa9c94578ec57', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H4IiwagLdDvl4wrdXUQ9nl1NqciQA_N-usxdVSG0erQ.png?auto=webp&s=0b6ef41c8ac795ae80c14e77daa4a9e4c6a93f11', 'width': 1200}, 'variants': {}}]} |
Qwen3.5 Support Merged in llama.cpp | 235 | 2026-02-09T00:32:33 | https://github.com/ggml-org/llama.cpp/pull/19435 | TKGaming_11 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qzppr7 | false | null | t3_1qzppr7 | /r/LocalLLaMA/comments/1qzppr7/qwen35_support_merged_in_llamacpp/ | false | false | 235 | {'enabled': False, 'images': [{'id': 'LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=108&crop=smart&auto=webp&s=97b9a19299baf71c2595f1d46f394359d66e8f0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=216&crop=smart&auto=webp&s=c49c5b8a0ac103f7a679362615e6ef391b7347e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=320&crop=smart&auto=webp&s=5047d4820eeaabac9c913a090edcfe1c449b2979', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=640&crop=smart&auto=webp&s=a45fd4b46acdf1a22c62c7c684471a43354c1397', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=960&crop=smart&auto=webp&s=5583f184eb8552f11d9a543d521c6cf465ee9bf4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?width=1080&crop=smart&auto=webp&s=221f182c5bae832dffef342bedf90bd1e7c868d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LP9lWJIkvOFwEJy7i2edxqBM2iBmROue3pUEdiXyxYg.png?auto=webp&s=3589f24c1c521c8bee082c05cf70d3bba59bde8e', 'width': 1200}, 'variants': {}}]} | ||
MiniMax M2.2 Coming Soon! | 77 | It found on their website code
https://preview.redd.it/cj2as13ttcig1.png?width=825&format=png&auto=webp&s=9492b73dd14c581e30b35a5e64062f4ac7356a3f
[https://cdn.hailuo.ai/mmx-agent/prod-web-va-0.1.746/\_next/static/chunks/app/(pages)/(base)/page-0cfae9566c3e528b.js](https://cdn.hailuo.ai/mmx-agent/prod-web-va-0.1.746/_next/static/chunks/app/(pages)/(base)/page-0cfae9566c3e528b.js) | 2026-02-08T23:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qzo77z/minimax_m22_coming_soon/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzo77z | false | null | t3_1qzo77z | /r/LocalLLaMA/comments/1qzo77z/minimax_m22_coming_soon/ | false | false | 77 | null | |
Open source secure multi-tenant AI agent platform - zero knowledge vault, isolated containers | 0 | Built a multi-tenant layer for OpenClaw with one-click onboarding. Each user gets isolated Docker containers, encrypted vault (AES-256-GCM, Argon2id), and OAuth integrations. Self-hostable. [github.com/jomafilms/openclaw-multitenant](http://github.com/jomafilms/openclaw-multitenant) | 2026-02-08T23:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qzo4we/open_source_secure_multitenant_ai_agent_platform/ | lenna-111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzo4we | false | null | t3_1qzo4we | /r/LocalLLaMA/comments/1qzo4we/open_source_secure_multitenant_ai_agent_platform/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=108&crop=smart&auto=webp&s=fdb5a5fa55caef888b5c20674d4a06b2c36b93d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=216&crop=smart&auto=webp&s=424b507d5be3bb16a1034f25b89032464170043c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=320&crop=smart&auto=webp&s=b3bcfc7b70e0830fd83fc3f5dc101c6337e46411', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=640&crop=smart&auto=webp&s=2455e0837976ef8073bcc2913acb991278bbaae0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=960&crop=smart&auto=webp&s=45f73d35fde8b668cb84e526aa01e44b89bea97d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?width=1080&crop=smart&auto=webp&s=07195cbf21f8c1b7c22d48fa1259acb528c47c4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IAlDln6EVomTK5R1nNIoWlvShPGrC5QHN5a21u6XwFo.png?auto=webp&s=f78a96ed1d01d6a9a39c88c2b43e3c2a6a5d94da', 'width': 1200}, 'variants': {}}]} |
Madlab OSS Finetuning | 4 | Hey there, i just released Madlab Finetuning v0.5.0. Enjoy multi-os GUI finetuning [https://github.com/Archimedes1618/Madlab/releases/tag/v0.5.0](https://github.com/Archimedes1618/Madlab/releases/tag/v0.5.0)
Happy to hear your feedback and i hope you dont mind the "self-promotion" of something free :)
https://preview.redd.it/d6g0dtyarcig1.png?width=888&format=png&auto=webp&s=452d994b9482e74bf048c719f5a73cd24b093ae4
https://preview.redd.it/3lst6xcbrcig1.png?width=889&format=png&auto=webp&s=fba39d8062382975d7839adde7251583856021f3
https://preview.redd.it/5om9x1tbrcig1.png?width=886&format=png&auto=webp&s=6beab3d9d1d33f77e0dce0ad0029ec9fe5283fdb
https://preview.redd.it/tbxdt8acrcig1.png?width=891&format=png&auto=webp&s=20cc2b34363f4cdc4a604a30e48d81f959ff4c31
https://preview.redd.it/g1lig8pcrcig1.png?width=887&format=png&auto=webp&s=2f65eeb07a553e25b2678274f2406c6ee7d690bc
https://preview.redd.it/olbvc85drcig1.png?width=1915&format=png&auto=webp&s=445b5bab6382344cdc201b0b0fab460dd35aa0f0
| 2026-02-08T23:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qznyfj/madlab_oss_finetuning/ | Archimedes9876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qznyfj | false | null | t3_1qznyfj | /r/LocalLLaMA/comments/1qznyfj/madlab_oss_finetuning/ | false | false | 4 | null | |
Comparing the same model with reasoning turned on and off | 20 | I'm preparing to use Nemotron-3-30B to analyze a huge personal file (close to 1M tokens), and thought I might turn off reasoning so it doesn't go schizo over the sheer amount of content. But I was curious what turning off reasoning would do, so I went looking for benchmarks.
There seems to be very few benchmarks comparing the same model with reasoning on, vs turned off via chat template. I was only able to find 2 places with info on this, Artificial Analysis and UGI Leaderboard. Here's a selection of models and their benchmarks.
| Nemotron-3-30B-A30B | Reasoning | Non-Reasoning |
|:--|:--|:--|
| Terminal Bench Hard | 14% | 12% |
| Tau2 Telecom | 41% | 25% |
| AA-LCR Long Context Reasoning | 34% | 7% |
| AA-Omniscience Accuracy (Knowledge) | 17% | 13% |
| Humanity's Last Exam | 10.2% | 4.6% |
| GPQA Diamond (Scientific Reasoning) | 76% | 40% |
| LiveCodeBench (Coding) | 74% | 36% |
| SciCode (Coding) | 30% | 23% |
| IFBench (Instruction Following) | 71% | 38% |
| AIME 2025 | 91% | 13% |
| GLM-4.7-Flash | Reasoning | Non-Reasoning |
|:--|:--|:--|
| Terminal Bench Hard | 22% | 4% |
| Tau2 Telecom | 99% | 92% |
| AA-LCR Long Context Reasoning | 35% | 15% |
| AA-Omniscience Accuracy (Knowledge) | 15% | 12% |
| Humanity's Last Exam | 7.1% | 4.9% |
| GPQA Diamond (Scientific Reasoning) | 58% | 45% |
| SciCode (Coding) | 34% | 26% |
| IFBench (Instruction Following) | 61% | 46% |
| DeepSeek V3.2 | Reasoning | Non-Reasoning |
|:--|:--|:--|
| Terminal Bench Hard | 36% | 33% |
| Tau2 Telecom | 91% | 79% |
| AA-LCR Long Context Reasoning | 65% | 39% |
| AA-Omniscience Accuracy (Knowledge) | 32% | 23% |
| Humanity's Last Exam | 22.2% | 10.5% |
| GPQA Diamond (Scientific Reasoning) | 84% | 65% |
| LiveCodeBench (Coding) | 86% | 59% |
| SciCode (Coding) | 39% | 39% |
| IFBench (Instruction Following) | 61% | 49% |
| AIME 2025 | 92% | 59% |
Then there's UGI Leaderboard's NatInt. This is a closed but relatively amateurish intelligence benchmark. (I don't mean this in a disparaging way, it's just a fact that it's 1 guy writing this, vs the thousands of questions created by entire teams for the above benchmarks). Interestingly, the UGI maintainer did a lot of tests in various setups, always turning off reasoning when he gets a chance, and including reasoning on Instruct models (presumably by prompting "think step-by-step"). It's appreciated!
| Model | Reasoning NatInt | Non-Reasoning NatInt |
|:--|:--|:--|
| Ministral-3-14B-Reasoning-2512 | 16.33% | 16.35% |
| Ministral-3-14B-Instruct-2512 | 18.09% | 16.73% |
| Nemotron-3-30-A3B-BF16 | 29.12% | 16.51% |
| Qwen3-30B-A3B Thinking=true/false | 19.19% | 15.9% |
| GLM-4.5-Air | 33% | 32.18% |
| Qwen3-32B | 30.34% | 32.95% |
| DeepSeek-V3.2 | 48.11% | 47.85% |
| Kimi K2.5 | 62.96% | 60.32% |
It seems like it's a big performance penalty on some models, while being about the same on others. The gap is much bigger on the tougher "replace human workers" corpo benchmarks. | 2026-02-08T23:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qznps2/comparing_the_same_model_with_reasoning_turned_on/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qznps2 | false | null | t3_1qznps2 | /r/LocalLLaMA/comments/1qznps2/comparing_the_same_model_with_reasoning_turned_on/ | false | false | self | 20 | null |
Qwen3 tts + LM Studio? | 0 | How do I use qwen3 tts with LM studio? I can't seem to find a way to use this specific tts, or my brain can't handle complex set up, please send help 😭 | 2026-02-08T22:48:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qznfh3/qwen3_tts_lm_studio/ | Plastic_Care8170 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qznfh3 | false | null | t3_1qznfh3 | /r/LocalLLaMA/comments/1qznfh3/qwen3_tts_lm_studio/ | false | false | self | 0 | null |
How do devs secure their notebooks? | 3 | Hi guys,
How do devs typically secure/monitor the hygiene of their notebooks?
I scanned about 5000 random notebooks on GitHub and ended up finding almost 30 aws/oai/hf/google keys (frankly, they were inactive, but still).
https://preview.redd.it/h4310zd7lcig1.png?width=1082&format=png&auto=webp&s=3d8a977ff2362323873237efe66d6c6e7bd38931
https://preview.redd.it/hfpvqonolcig1.png?width=1740&format=png&auto=webp&s=2c47ca7e9570b52ca0e14d0ffb59e8820ad4f867
| 2026-02-08T22:38:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qzn6mm/how_do_devs_secure_their_notebooks/ | arsbrazh12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzn6mm | false | null | t3_1qzn6mm | /r/LocalLLaMA/comments/1qzn6mm/how_do_devs_secure_their_notebooks/ | false | false | 3 | null | |
Getting better output with Aider + qwen3-coder:30b | 1 | I've been trying these tool for the first time the past couple of days and I feel like they're a complete waste of time right now. Runs relatively slow on my 5070ti (16gb) and often produces code which is syntactically correct but won't actually implement the explained feature. I end up implementing myself. What docs should i be reading to get better results. | 2026-02-08T22:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qzn4z6/getting_better_output_with_aider_qwen3coder30b/ | Alarmed-Concern-7531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzn4z6 | false | null | t3_1qzn4z6 | /r/LocalLLaMA/comments/1qzn4z6/getting_better_output_with_aider_qwen3coder30b/ | false | false | self | 1 | null |
I truly solved the LLM "losing memory" problem. | 0 | I call it the LBM: Living Brain Memory for your LLM.
New way: everything that's important for your project is stored in Supabase (or any easily accessible DB of your choice).
Your LLM gets initial set of instructions that it follows at the beginning of EVERY new CONVERSATION: query your LBM and pull whatever you need to get your bearings on where we are in the project, crucial instructions and what was planned to be done next.
Your LLM also gets a POST / GET API routes that it can use (and is encouraged to) every time there is an update to your project files, you reached a milestone, etc. THIS is basically the LIVING part of the BRAIN. It can now use a set of its own-created instructions to access all the knowledge from multiple sessions DYNAMICALLY when it needs it!
No more lost variable names, no more guessing, no more 'forgetting' what was done before, when and how an important folder or table were named.
Direct feedback from Claude is in the screenshots to validate what I am saying.
⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️
Project is [here](https://gist.github.com/DavidMCinema/abd4ea3f97992967ab24c6ae0703f0ce.js) if you want to install it in your own LLM - FREE
⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️ ⬆️
The Bottom Line
Before LBM: My AI was an amnesiac genius — brilliant in the moment, completely blank the next day. Every session started with "let me explain everything again."
After LBM: My AI has a persistent, growing brain. It remembers what was built, why decisions were made, what patterns to follow, and exactly what columns exist in every table. No more guessing. No more re-explaining. No more bugs from forgotten context.
The pattern is replicable for any project:
1. Create memory tables in your database
2. Build a simple API endpoint for read/write
3. Populate with schema, patterns, decisions, and history
4. AI queries at session start, updates at session end
5. Context persists and compounds forever
Let me know what you think!
https://preview.redd.it/oncnw2nxecig1.png?width=1334&format=png&auto=webp&s=8716f2d13126a0c5c82342005e9ce8a03460ef13
https://preview.redd.it/dnaexu82fcig1.png?width=1384&format=png&auto=webp&s=77935d6958e21406bdc5284f6ea710ba94357c89
https://preview.redd.it/ssuas855fcig1.png?width=1256&format=png&auto=webp&s=9bace30e2ca281d4f86ba7bfadfdaaca043ae7d5
https://preview.redd.it/mnczfbq8fcig1.png?width=1240&format=png&auto=webp&s=6bdaf27b806cb754218e8c4a875eed7e7b2c3992
| 2026-02-08T22:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzm9zl/i_truly_solved_the_llm_losing_memory_problem/ | DavidM_Cinema | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzm9zl | false | null | t3_1qzm9zl | /r/LocalLLaMA/comments/1qzm9zl/i_truly_solved_the_llm_losing_memory_problem/ | false | false | 0 | null | |
I built an open-source Agentic RAG system with Ollama support — chat with your documents locally | 0 | Hey everyone! I'm sharing a project I've been working on: **Agentic RAG**, an open-source document assistant that works with Ollama for fully local inference — no data leaves your machine.
Upload your documents (PDF, Word, CSV, Excel, JSON, Markdown) and have a natural conversation with an AI that retrieves and analyzes your data intelligently.
## What makes it different
- **Agentic Semantic Chunking** — instead of fixed-size chunks, an LLM analyzes your text and splits at natural topic boundaries, preserving context
- **Hybrid Search** — combines vector search (pgvector) + BM25 keyword matching via Reciprocal Rank Fusion
- **Structured + Unstructured** — text docs get vectorized for semantic search, tabular data (CSV/Excel) gets stored for SQL queries. The agent picks the right tool automatically
- **Multi-Provider** — works with OpenAI, OpenRouter (100+ models), or **Ollama for fully local inference** with auto-detection of installed models
- **Anti-Hallucination Guardrails** — the system knows when it doesn't know
- **Multi-Channel** — Web UI, Telegram bot, WhatsApp
## Tech stack
FastAPI + React + PostgreSQL/pgvector + LangChain + Docker Compose
## Ollama integration
The system auto-detects your installed Ollama models (both LLM and embedding models) and lets you switch between them from the Settings UI. No config files to edit.
**GitHub:** https://github.com/logfab-stack/agentic-rag
Screenshots are in the README. Feedback and contributions welcome! | 2026-02-08T21:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qzm5k7/i_built_an_opensource_agentic_rag_system_with/ | Due_Caterpillar_9578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzm5k7 | false | null | t3_1qzm5k7 | /r/LocalLLaMA/comments/1qzm5k7/i_built_an_opensource_agentic_rag_system_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=108&crop=smart&auto=webp&s=5cce664e3b568fb6011dd9b87d6b81ef1e81e596', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=216&crop=smart&auto=webp&s=5d829c97b348f02c93fdb694d5b30d85f88ca2ec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=320&crop=smart&auto=webp&s=e5dc8b9835255031adea6f566f24708f718b765a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=640&crop=smart&auto=webp&s=6af6b1cfe8629bfe200e66abca905904a4f888f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=960&crop=smart&auto=webp&s=300dd618234aed11cf7203219e9f7cfddedef8b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?width=1080&crop=smart&auto=webp&s=ef88c4a15b8577ffebf74c131d6c4d699e23b549', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nJjcE-AOdDYYBfc2f7wEsX60aJoCRhwKw_-9O26DGxQ.png?auto=webp&s=ddeb740d40b330ebeb70c6f03513039e2a2c45d2', 'width': 1200}, 'variants': {}}]} |
arXiv at Home - a self-hosted search engine for arXiv papers | 21 | 2026-02-08T21:37:30 | https://github.com/mrapplexz/arxiv-at-home | mrAppleXZ | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qzlo6a | false | null | t3_1qzlo6a | /r/LocalLLaMA/comments/1qzlo6a/arxiv_at_home_a_selfhosted_search_engine_for/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=108&crop=smart&auto=webp&s=0777a25dc55340560e1a702419fae70d0836fe17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=216&crop=smart&auto=webp&s=adc54f86fc393617e2fe000446bf6979cf828da6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=320&crop=smart&auto=webp&s=89cbfee08ba1e5403a5e982132a463ee4eb48fa2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=640&crop=smart&auto=webp&s=600cdbb4cfc30b5cd647a299f4b1c24cbbcd97f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=960&crop=smart&auto=webp&s=23a50a47ee6be36c49fae3460d4227b4d6d0fb0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?width=1080&crop=smart&auto=webp&s=94df69d81580f7d4afa3a0b87eb13c3501a6c290', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GByuthZvqw-sh4cUy1TLMJvthzS18fOPWRRPk2rkWTU.png?auto=webp&s=f9b06fa23778c71ea76760fd496a305ca68dc796', 'width': 1200}, 'variants': {}}]} | ||
Voxtral Mini 4B Realtime running in the browser | 19 | Hello! Earlier this week Mistral released:
[https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602](https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602)
Last time I ported a TTS model to Rust using candle, this time I ported an ASR model to Rust with burn.
I was able to lean on the wgpu backend to get the model running in the browser after sharding it.
Here is the HF Space:
[https://huggingface.co/spaces/TrevorJS/voxtral-mini-realtime](https://huggingface.co/spaces/TrevorJS/voxtral-mini-realtime)
and here are the model weights (q4 + tokenizer):
[https://huggingface.co/TrevorJS/voxtral-mini-realtime-gguf](https://huggingface.co/TrevorJS/voxtral-mini-realtime-gguf)
and the code:
[https://github.com/TrevorS/voxtral-mini-realtime-rs](https://github.com/TrevorS/voxtral-mini-realtime-rs)
Didn't have a chance to use agent teams with this project, maybe next one! :) | 2026-02-08T21:36:39 | https://github.com/TrevorS/voxtral-mini-realtime-rs | adefa | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qzlndr | false | null | t3_1qzlndr | /r/LocalLLaMA/comments/1qzlndr/voxtral_mini_4b_realtime_running_in_the_browser/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=108&crop=smart&auto=webp&s=daf85e0e4dfdc6bc40f6bae54dfd5a5020096fa2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=216&crop=smart&auto=webp&s=cf7b1ee7060a08946b67a78fe1eb5919e66a88f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=320&crop=smart&auto=webp&s=482cea13329e4585a5d786acccb437e8627c01b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=640&crop=smart&auto=webp&s=c236c8cbdbbe1f1e0f6453d2c0c098dad5203f41', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=960&crop=smart&auto=webp&s=6741d36ef2aabc6329657586364971b81bfeebc6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?width=1080&crop=smart&auto=webp&s=b9ab2405cb227baede689f1970f1f26ca809a646', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E0FOxSrZ-_liwWhCMBiHvDA1XDcfXFbzFIQIlW48qxQ.png?auto=webp&s=69807ebf6784d3ee38f9071e99ab8e46098ae8f8', 'width': 1200}, 'variants': {}}]} | |
Aero GPT | 0 | Documentation log for a locally deployed Manufacturing engineering assistant.
Hardware - 1 RTX6000 Pro / Instance (say we deploy 10 assistants : each would be allocated up to 96GB VRAM / Rtx 6000 Pro
Goal - ingest a part specific requirements list, fetch industry specifications - generate a technical requirements report / recommended Manufacturing Plan
Base Model - Qwen3 (not sure… have done some small fine tunes of Qwen Llama via unsloth).
Training Data - proprietary, \~15000 successful manufacturing plans spanning :
12 customers
2300 specs (processing, specific process adherence per OEM requirements, etc)
3 Material Types
8 Machining Types
I won’t be sharing specifics- but will document success / failures in a general approach
Topics : Fine Tuning, Prompt Engineering, RLHF, Interleaved Thinking | 2026-02-08T21:32:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qzljgz/aero_gpt/ | Willing_Potato7661 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzljgz | false | null | t3_1qzljgz | /r/LocalLLaMA/comments/1qzljgz/aero_gpt/ | false | false | self | 0 | null |
Built an open-source scanner that finds every AI model, agent, and API hiding in your infra — including local Ollama instances | 0 | If you're like me, you've got Ollama running on one machine, vLLM on another, maybe some HuggingFace containers somewhere, plus a few OpenAI API calls you forgot about in that one script...
I built \*\*ai-bom\*\* to answer a simple question: \*\*what AI is actually running in my environment?\*\*
It's a single CLI that scans source code, Docker configs, network endpoints, and cloud services — then gives you a full inventory.
\*\*Relevant for self-hosters:\*\*
\- Detects Ollama containers + endpoints (localhost:11434)
\- Finds model references in code (llama-3, mistral, codellama, etc.)
\- Spots vLLM, HuggingFace TGI, ChromaDB containers
\- Catches leaked API keys (sk-\*, sk-ant-\*, hf\_\*)
\- Risk scores everything so you know what's exposed
Useful if you're running multiple models across machines and want one place to see it all. Also handy if your org is mixing local + cloud AI and needs to track what's where.
\`\`\`
pip install ai-bom
ai-bom scan .
ai-bom demo # try the built-in demo first
\`\`\`
Open source, Apache 2.0.
🔗 https://github.com/Trusera/ai-bom | 2026-02-08T21:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qzl0q6/built_an_opensource_scanner_that_finds_every_ai/ | eliadkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzl0q6 | false | null | t3_1qzl0q6 | /r/LocalLLaMA/comments/1qzl0q6/built_an_opensource_scanner_that_finds_every_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=108&crop=smart&auto=webp&s=06d9a3c0b3fc27ff8c6c1e7ff969207f39e81bb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=216&crop=smart&auto=webp&s=13d7333b10d23d6bb9e035e61bbdb6438b5ff133', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=320&crop=smart&auto=webp&s=806eee53a83b2ca78cfc9b94b0b27a725098943e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=640&crop=smart&auto=webp&s=e40e27e706df79d5020fed0e6be1aee2a7f35305', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=960&crop=smart&auto=webp&s=3cd98b88220f2f09997c62437e5ca7a12db33201', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?width=1080&crop=smart&auto=webp&s=fefe87509880d600fcd6027f3ca73fcbde7512c3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LOlGb3DRoaQfhg9G9-GeN6qlxzGHB0X9U2R1LGIih6w.png?auto=webp&s=5b8c8c46430d5ca2bfcbbd9d80bab8cd445587aa', 'width': 1200}, 'variants': {}}]} |
What do you think is the best AI model right now? (February 2026) | 1 | [removed] | 2026-02-08T20:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qzjp8g/what_do_you_think_is_the_best_ai_model_right_now/ | TheCoffinOfAndy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzjp8g | false | null | t3_1qzjp8g | /r/LocalLLaMA/comments/1qzjp8g/what_do_you_think_is_the_best_ai_model_right_now/ | false | false | self | 1 | null |
got tired of wiring up apis for agent swarms so i wrote a p2p network stack instead | 1 | [removed] | 2026-02-08T20:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qzjmhb/got_tired_of_wiring_up_apis_for_agent_swarms_so_i/ | BiggieCheeseFan88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzjmhb | false | null | t3_1qzjmhb | /r/LocalLLaMA/comments/1qzjmhb/got_tired_of_wiring_up_apis_for_agent_swarms_so_i/ | false | false | self | 1 | null |
We open-sourced a protocol for AI prompt management (PLP) - looking for feedback | 0 | We kept running into the same problem: prompts scattered across codebases, no versioning, needing full redeploys just to change a system prompt.
So we built PLP - a dead-simple open protocol (3 REST endpoints) for managing prompts separately from your app code. JS and Python SDKs available.
GitHub: [https://github.com/GoReal-AI/plp](https://github.com/GoReal-AI/plp)
Curious if others are hitting the same pain and what you think of the approach. | 2026-02-08T20:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qzjkov/we_opensourced_a_protocol_for_ai_prompt/ | Proud_Salad_8433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzjkov | false | null | t3_1qzjkov | /r/LocalLLaMA/comments/1qzjkov/we_opensourced_a_protocol_for_ai_prompt/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=108&crop=smart&auto=webp&s=1ad7e79fffc8a890503a1f52e9da7b501202171f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=216&crop=smart&auto=webp&s=135a978e598486765ba580570b7d6be841691439', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=320&crop=smart&auto=webp&s=6ef67db93b28c47d9f9cf90326fb59c5d27015e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=640&crop=smart&auto=webp&s=8f5afd4520011e0596cdf2086cf0e80ad0b36c52', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=960&crop=smart&auto=webp&s=b38a0c1e734355024653d5a0f2b56c8bf24e9856', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?width=1080&crop=smart&auto=webp&s=f7e01ef0bf57ab044780cdfd16b6f85c6210acc4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SI2pACCI3hCJ4XGtVvbQsIyRjcoFig08e3s4MjiYsyI.png?auto=webp&s=ea36642d0d789d10beb0aa6998fa73226637a107', 'width': 1200}, 'variants': {}}]} |
I built a rough .gguf LLM visualizer | 665 | I hacked together a small tool that lets you upload a .gguf file and visualize its internals in a 3D-ish way (layers / neurons / connections). The original goal was just to see what’s inside these models instead of treating them like a black box.
That said, my version is pretty rough, and I’m very aware that someone who actually knows what they’re doing could’ve built something way better :p
So I figured I’d ask here:
Does something like this already exist, but done properly?
If yes, I’d much rather use that
For reference, this is really good:
https://bbycroft.net/llm
…but you can’t upload new LLMs.
Thanks! | 2026-02-08T20:08:31 | https://www.reddit.com/gallery/1qzjbw2 | sultan_papagani | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qzjbw2 | false | null | t3_1qzjbw2 | /r/LocalLLaMA/comments/1qzjbw2/i_built_a_rough_gguf_llm_visualizer/ | false | false | 665 | null | |
How to Prompt Caching with llama.cpp? | 5 | Doesnt work? qwen3 next says forcing use of SWA full redoing prompt processing ?
./llama-server \
--slot-save-path slot
--cache-prompt
--lookup-cache-dynamic lookup | 2026-02-08T19:50:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qzitx1/how_to_prompt_caching_with_llamacpp/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzitx1 | false | null | t3_1qzitx1 | /r/LocalLLaMA/comments/1qzitx1/how_to_prompt_caching_with_llamacpp/ | false | false | self | 5 | null |
Will adding a 5090 to multiple 3090s speed up PP? Experienced folks only | 0 | I can speculate, but I want someone that has actually experience or/and can experiment. Will adding a 5090 to say 4x3090s speed up PP? An extra GPU always helps, but I'm wondering with the 5090 being almost 3x the speed of 3090. If I add one and make it the main GPU and using kvu with llama.cpp if I'll be seeing perhaps 3x speed up with my PP. | 2026-02-08T19:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qzioed/will_adding_a_5090_to_multiple_3090s_speed_up_pp/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzioed | false | null | t3_1qzioed | /r/LocalLLaMA/comments/1qzioed/will_adding_a_5090_to_multiple_3090s_speed_up_pp/ | false | false | self | 0 | null |
Local-first content-aware (images + documents) file organization | 2 | I'm the developer of AI File Sorter (version 1.6.1 is now available!), a cross-platform desktop app that uses Local LLMs to organize files based on their content. The app analyzes images and documents by content and suggests names and folders for them. Other files are also organized, but not by content.
Document content analysis is supported for PDFs, Word, Excel, txt, and similar files.
Key points:
* Works fully offline using local AI models (no uploads or telemetry)
* Review before Confirm
* Dry runs
* Undo
* Designed for cleaning up Downloads, Documents, Images folders, external drives, or archives.
What’s new in 1.6.1:
* Document content analysis (PDF, DOCX, XLSX, PPTX, ODT, ODS, ODP)
* Improved review dialog with bulk edits
* Automatic system compatibility checks (benchmarks)
* Better stability & persistence railguards
* Improved macOS builds for Apple Silicon (M1/M2/M3) and Intel
* Pre-compiled for Windows, macOS, Debian, and Ubuntu
If you care about privacy-oriented tools, and keeping large file collections organized without sending data to the cloud, I'd love feedback.
Website: [https://filesorter.app](https://filesorter.app)
GitHub: [https://github.com/hyperfield/ai-file-sorter](https://github.com/hyperfield/ai-file-sorter)
[Review & Confirm](https://preview.redd.it/pvo5mmilqbig1.png?width=1002&format=png&auto=webp&s=d3c0030d4af8a3d2aa137943c0e74e34eeccec39)
| 2026-02-08T19:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qzinin/localfirst_contentaware_images_documents_file/ | ph0tone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzinin | false | null | t3_1qzinin | /r/LocalLLaMA/comments/1qzinin/localfirst_contentaware_images_documents_file/ | false | false | 2 | null | |
TranslateGemma is now available in KernelAI as an extended feature. 55+ language translations locally in your device | 21 | 👋🏻 Hey folks
Google DeepMind recently launched TranslateGemma, a new set of highly efficient open translation models, and you can now use it directly inside kernelAI. Built on Gemma 3, it supports 55 languages and delivers surprisingly strong results with smaller, faster models, making high-quality multilingual translation accessible right from the app.
Super excited to hear any feedback! The next phase would be to release Speech to text feature, and release on Android!
IOS App store link: https://apps.apple.com/ca/app/kernelai/id6757350731
| 2026-02-08T19:41:38 | https://www.reddit.com/gallery/1qzily8 | Better_Comment_7749 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qzily8 | false | null | t3_1qzily8 | /r/LocalLLaMA/comments/1qzily8/translategemma_is_now_available_in_kernelai_as_an/ | false | false | 21 | null | |
My model is better than yours | 0 | Yeah,you probably read the title and you thought that I'm trying to hype some model but it's not about this.
Lately,I found that some comments hypes the models so much that it doesn't lead to actual beneficial discussions.
Yes we know GPT-OSS has a lot of false positives of identifying what are you actually asking,but if you use it and it works,then it's fine! It's a great model,it just needs specific instructions to follow your query properly.
Yes, Qwen is more likely to complete what you are asking with minimal input,but why? Because Qwen always discover all paths before final answer which triples it's reasoning tokens compared to GPT-OSS which is a trade-off between your input time and your waiting time.
Yes, Step 3.5 flash is a great model,but it's not as optimized as other models,it uses a lot of reasoning tokens,is it a bad model? No, it's a great model, no model is bad if it works for your use case,if you have only 128GB of ram and decent memory bandwidth then it's fine to have a model that you can depend on and leave it for 2 hours completing a production-grade project that you will take a lot less time to edit because the model reasoning will catch most mistakes before starting.
We should be aware to the fact that Local LLMs are great tools to use in the right direction, express why do you like a model, but please don't try to call someone names because he/she sees another model as better.
you can use Nanbeige for writing,GPT-OSS for knowledge or GLM for coding or you can use a mix of any of those models for anything you want to build.
it's completely fine, and it leads to much healthier community. | 2026-02-08T19:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qzic3t/my_model_is_better_than_yours/ | perfect-finetune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzic3t | false | null | t3_1qzic3t | /r/LocalLLaMA/comments/1qzic3t/my_model_is_better_than_yours/ | false | false | self | 0 | null |
Open vs closed on hard neuroscience/BCI eval: LLaMA-70B ≈ frontier; Qwen MoE pulls ahead | 8 | We just released v1 of a domain-specific neuroscience/BCI multiple-choice eval (500 questions).
A few things surprised us enough to share:
* Eval generated in a single pass under strict constraints (no human review, no regeneration, no polishing).
* Despite that, frontier models cluster very tightly around 88%, with misses highly aligned.
* LLaMA-3.3 70B lands right in the frontier pack.
* Qwen3 235B MoE breaks the shared ceiling (\~90.4%), but doesn't collapse the same hard failure set.
* Smaller opens (14B-8B) show a steep but smooth drop, not a cliff.
Al runs were strict: temp=0, max\_tokens=5, single letter output only. One malformed item skipped (it's question 358).
The consistent misses look less like missing facts and more like epistemic calibration under real constraints (latency, biological noise, method feasibility); rejecting elegant but overpowered abstractions.
Dataset + full README with results here:
[https://huggingface.co/datasets/TrueRunAI/neuroscience-bci-phd-evals](https://huggingface.co/datasets/TrueRunAI/neuroscience-bci-phd-evals?utm_source=chatgpt.com)
Curious how others interpret the Qwen breakout from the frontier cluster, and if people are seeing similar "shared wall" effects on other hard domain evals. | 2026-02-08T19:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qzi83b/open_vs_closed_on_hard_neurosciencebci_eval/ | TrueRunAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzi83b | false | null | t3_1qzi83b | /r/LocalLLaMA/comments/1qzi83b/open_vs_closed_on_hard_neurosciencebci_eval/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=108&crop=smart&auto=webp&s=d062e8b78f130cfd9b803832cafd814b8f2183f7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=216&crop=smart&auto=webp&s=65763c0e03d86c68ccbfc9ca10ceb38352986c44', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=320&crop=smart&auto=webp&s=3404b089c2e24dd80074929c5aa419015e063ed8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=640&crop=smart&auto=webp&s=08b8a3350875cf541988a4b9b8b067704cd1e721', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=960&crop=smart&auto=webp&s=4837fe6f9bb524139a6d1010cf67dd91c072b8b5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?width=1080&crop=smart&auto=webp&s=dd93b01f572edfaa21f55eb1472c3e6eb57aca3e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/M5s38tf1fUllbx5yCPXuG-ZSJls2LELU9J33p8QbOhA.png?auto=webp&s=8fcc9d25f31c57aab7fe664aad9d9d6ea7ff61bd', 'width': 1200}, 'variants': {}}]} |
Strix Halo Distributed Cluster (2x Strix Halo, RDMA RoCE v2) benchmarks by kyuz0 | 41 | kyuz0 has been a godsend to the Strix Halo community, they can't be thanked enough!
For their latest escapade, they have built a two-node **AMD Strix Halo** cluster linked via **Intel E810 (RoCE v2)** for distributed vLLM inference using Tensor Parallelism.
Here are some benchmarks-
[https://kyuz0.github.io/amd-strix-halo-vllm-toolboxes/](https://kyuz0.github.io/amd-strix-halo-vllm-toolboxes/)
Here's the setup guide-
[https://github.com/kyuz0/amd-strix-halo-vllm-toolboxes/blob/main/rdma\_cluster/setup\_guide.md](https://github.com/kyuz0/amd-strix-halo-vllm-toolboxes/blob/main/rdma_cluster/setup_guide.md)
Here's the video that goes with this project-
[https://www.youtube.com/watch?v=nnB8a3OHS2E](https://www.youtube.com/watch?v=nnB8a3OHS2E)
| 2026-02-08T19:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qzhxd0/strix_halo_distributed_cluster_2x_strix_halo_rdma/ | Relevant-Audience441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzhxd0 | false | null | t3_1qzhxd0 | /r/LocalLLaMA/comments/1qzhxd0/strix_halo_distributed_cluster_2x_strix_halo_rdma/ | false | false | self | 41 | null |
Picobot: a tiny self-hosted "OpenClaw" (10MB binary, ~20MB RAM) you can chat with over Telegram | 0 | I like the idea of "OpenClaw" but most of them feel way too heavy for what I actually want:
something small that runs on cheap hardware and just quietly helps me via chat.
So I hacked together a small side project called **Picobot**:
* Single Go binary (\~10MB)
* \~20MB RAM usage, starts in milliseconds
* Runs on a cheap VPS, Raspberry Pi, or even an old Android phone (Termux)
You talk to it from **Telegram**, and it can:
* Remember stuff long-term (e.g. "I'm allergic to peanuts")
* Set reminders ("Remind me to take the medicine in 4 hours")
* Work with local files ("Create `grocery-list.txt` and add milk, eggs, coffee")
* Learn little skills ("Learn how to check BTC price with curl and use it on command")
It uses OpenAI-compatible APIs (OpenRouter + Ollama by default), so if you already have those set up, it should plug in pretty easily.
I originally built it for myself because my older devices struggle with heavier stacks, but I figured it might be useful to others who like **tiny, self-hosted agents**.
GitHub: [https://github.com/louisho5/picobot](https://github.com/louisho5/picobot)
I’d love feedback on any of these, if you’re willing to share:
* Does the “Telegram + tiny agent” setup sound useful to you?
* What would you want it to automate for you on a Pi / VPS / old phone?
* Any “must-have” features for a minimal agent that I’m clearly missing? | 2026-02-08T19:15:23 | louisho5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qzhwfr | false | null | t3_1qzhwfr | /r/LocalLLaMA/comments/1qzhwfr/picobot_a_tiny_selfhosted_openclaw_10mb_binary/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'pyqc09tn1big1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/pyqc09tn1big1.png?width=108&crop=smart&auto=webp&s=42b089b26861460af84c98798c141a33a12eb71a', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/pyqc09tn1big1.png?width=216&crop=smart&auto=webp&s=abaa4a96badc50c44f78e38cbf6af8277766e4fc', 'width': 216}], 'source': {'height': 137, 'url': 'https://preview.redd.it/pyqc09tn1big1.png?auto=webp&s=6bef2a9b3ecc3e669e97be85d82f6201fb9aae9c', 'width': 250}, 'variants': {}}]} | ||
Have Anyone Successfully Run the New MiniCPM-o-4_5-gguf? | 1 | Hi,
I saw yesterdary Openbmb adding this new model to HF. Link: [https://huggingface.co/openbmb/MiniCPM-o-4\_5-gguf](https://huggingface.co/openbmb/MiniCPM-o-4_5-gguf)
It's an omni model that comes with vision and audio adaptors.
I am wondering if anyone have successfully run it locally, and if so, how did you manage to do it? | 2026-02-08T19:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qzhqsu/have_anyone_successfully_run_the_new_minicpmo4/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qzhqsu | false | null | t3_1qzhqsu | /r/LocalLLaMA/comments/1qzhqsu/have_anyone_successfully_run_the_new_minicpmo4/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=108&crop=smart&auto=webp&s=a2599b6f543290acd13826a385da63a01d217ea5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=216&crop=smart&auto=webp&s=20c6d661b8dee0df41d2e3447f3476e7f67883e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=320&crop=smart&auto=webp&s=b3918e8a15c95275c2475013f39ade535b7593c0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=640&crop=smart&auto=webp&s=8598acb2988049c92af48f400bdb88d6f09ad635', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=960&crop=smart&auto=webp&s=14c9c2cc33e234039b5cb107ec5fa2e67efbd363', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?width=1080&crop=smart&auto=webp&s=8cc2c340e688089d82a7fd09b0b0a58e4bcfb519', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dBWT08sb39qcHPJdpxQt3wZougI0yIGEygcKf1y3-2I.png?auto=webp&s=baa2467f0eb6fc7c97e063eef9585db73ab8fe38', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.