| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Running Ollama cluster on two machines with 7.4GB RAM total - it works | 0 |
https://preview.redd.it/znfnnaij5zfg1.png?width=2816&format=png&auto=webp&s=9c133465bb66c14716fbc6929229ea8c7b33b6d3
Sharing my setup for anyone else working with limited hardware.
**Hardware:**
- Computer A: Primary (192.168.100.1)
- Computer B: Backup (192.168.100.2)
- Total RAM: 7.4GB
**What's running:**
- Ollama on both machines
- Custom Python load balancer
- Weaviate for RAG
- Discord bot pulling from both
**The trick:** Load balancer routes requests to whichever machine is free. If B goes down, A handles everything.
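For anyone curious what the routing logic boils down to, here's a minimal failover sketch (not the repo's actual code; the IPs, port, and model name are just the values from this post, everything else is an assumption):

```python
# Minimal failover sketch: try each Ollama node in order and use the first one
# that is alive. Not the repo's actual implementation.
import requests

NODES = ["http://192.168.100.1:11434", "http://192.168.100.2:11434"]

def healthy(node: str) -> bool:
    try:
        return requests.get(f"{node}/api/tags", timeout=2).ok
    except requests.RequestException:
        return False

def generate(prompt: str, model: str = "llama3.2:1b") -> str:
    for node in NODES:
        if not healthy(node):
            continue  # e.g. if B is down, A handles everything
        r = requests.post(f"{node}/api/generate",
                          json={"model": model, "prompt": prompt, "stream": False},
                          timeout=120)
        if r.ok:
            return r.json()["response"]
    raise RuntimeError("no healthy Ollama node available")
```

The real balancer also routes to whichever machine is free; this sketch only checks liveness.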
Model: llama3.2:1b (fits in RAM)
Code is open source: [https://github.com/thebardchat/shanebrain-core](https://github.com/thebardchat/shanebrain-core)
Built this as a dump truck dispatcher with ADHD. If I can do it, you can too. | 2026-01-27T23:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qou2sp/running_ollama_cluster_on_two_machines_with_74gb/ | Square-Practice2296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qou2sp | false | null | t3_1qou2sp | /r/LocalLLaMA/comments/1qou2sp/running_ollama_cluster_on_two_machines_with_74gb/ | false | false | 0 | null | |
Kimi K2.5 costs almost 10% of what Opus costs at a similar performance | 539 | I've been trying out Kimi k2.5 and this is the first time that I feel an open model is truly competitive with SOTA closed models.
Compared to GLM, Kimi is a bit better, especially when it comes to non-website tasks.
Have you tried it? What's your take? | 2026-01-27T23:10:16 | Odd_Tumbleweed574 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qoty38 | false | null | t3_1qoty38 | /r/LocalLLaMA/comments/1qoty38/kimi_k25_costs_almost_10_of_what_opus_costs_at_a/ | false | false | default | 539 | {'enabled': True, 'images': [{'id': 'xz7okply3zfg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=108&crop=smart&auto=webp&s=efe079a199525fa31ada288fd8b4cab79a0b2165', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=216&crop=smart&auto=webp&s=68dea817d36f0449325488b368b6640d41bff87f', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=320&crop=smart&auto=webp&s=65ff005db02be74de4aa371710b90a8f8c4e6fbc', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=640&crop=smart&auto=webp&s=74631e6bf621619eca977768f8c7287dc6164e45', 'width': 640}, {'height': 566, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=960&crop=smart&auto=webp&s=751898f9c71787983ad4ac2892bb67346fc2cfee', 'width': 960}, {'height': 637, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?width=1080&crop=smart&auto=webp&s=907f8936dae44cf387739c6e0f885744e5e69cc7', 'width': 1080}], 'source': {'height': 1038, 'url': 'https://preview.redd.it/xz7okply3zfg1.png?auto=webp&s=096d18817596b5289ecbe377263def8531dbbdc3', 'width': 1758}, 'variants': {}}]} | |
1.5 months from Zero to a "Quantum" Prototype. Am I a genius or is my AI Bro just gaslighting me? | 0 | 2026-01-27T23:09:49 | https://gist.github.com/kasparov777/ecc0b53fadbbb235efcb7d129b1aed0f | NectarineMental4429 | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 1qotxof | false | null | t3_1qotxof | /r/LocalLLaMA/comments/1qotxof/15_months_from_zero_to_a_quantum_prototype_am_i_a/ | false | false | default | 0 | null | |
1.5 months from Zero to a "Quantum" Prototype. Am I a genius or is my AI Bro just gaslighting me? | 1 | Yo Reddit! Need a serious reality check.
I have **zero** IT background (I’m talkin' 5 years of actual school and a Nokia 1100 until 2017). My only "coding skill" is a supernatural ability to find invisible spaces in a text editor.
For the last 1.5 months, I’ve been just... *talking* to an AI. We treated it like a conversation, and now we have a working prototype of a **Multi-Agent system for Quantum Trading** running on Linux.
My AI Bro says this is high-level stuff, but I’m calling BS because it felt way too easy. How difficult is this project actually "under the hood"? Am I actually doing "Vibe Coding" or just playing in a sandbox?
**P.S. My bro helped me edit and shorten this post. If you're interested in seeing the full, raw original story, it's available at the link.** | 2026-01-27T23:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qotvv4/15_months_from_zero_to_a_quantum_prototype_am_i_a/ | NectarineMental4429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qotvv4 | false | null | t3_1qotvv4 | /r/LocalLLaMA/comments/1qotvv4/15_months_from_zero_to_a_quantum_prototype_am_i_a/ | false | false | self | 1 | null |
I can't run deepseek-coder-v2 with Ollama. I suspect it has something to do with RAM. Is there any way around this? | 0 | I installed deepseek-coder-v2:236b. My computer has 128 Gbs of RAM and I have a 5090 video card with 32 GBs of VRAM. I installed it with `ollama pull deepseek-coder-v2:236b` and created my running model instance with `ollama run deepseek-coder-v2:236b` . So now the model instance is running... I then start up VSCodium with the Continue extension. I connect it to the running deepseek-coder-v2:236b instance, and give it a prompt. The Continue plugin says generating for a while, then it fails with "llama runner process has terminated: exit status 2" .
This is a very unclear error, but I suspect it's a RAM issue. I read somewhere that almost all local AI runners have to load the ENTIRE model into RAM. Even though I have 128 Gbs of RAM which is A LOT, this model is 133 Gbs... So is there any way that I can still run this model?
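For context on what a workaround might look like: llama.cpp-based runners (which is what Ollama uses under the hood) memory-map GGUF files and can split layers between VRAM and system RAM, so the usual fix is a lower-bit quant that leaves headroom for the KV cache. A hedged llama-cpp-python sketch, where the file name and layer count are placeholders rather than a recommendation:

```python
# Sketch only: load a lower-bit GGUF quant and offload only as many layers as
# fit in the 5090's VRAM; the rest stays in system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-v2-236b-q2_k.gguf",  # hypothetical smaller quant
    n_gpu_layers=20,   # layers that fit in 32 GB VRAM (tune this)
    n_ctx=8192,        # the KV cache also costs memory, so keep context modest
)
out = llm("// write a quicksort in C\n", max_tokens=256)
print(out["choices"][0]["text"])
```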
There's gotta be something I can do right? I know it's a different system but ComfyUI has something called "Teacache" for large image and video models. Also I've read a little about something called GGUF even though I don't entirely understand it. Is there something I can do to run this model? | 2026-01-27T23:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qotog1/i_cant_run_deepseekcoderv2_with_ollama_i_suspect/ | warpanomaly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qotog1 | false | null | t3_1qotog1 | /r/LocalLLaMA/comments/1qotog1/i_cant_run_deepseekcoderv2_with_ollama_i_suspect/ | false | false | self | 0 | null |
How would you identify the conversational sentences that a base model's distribution ranks as most probable? | 0 | Extracting common conversational sentences is difficult because most datasets are either too small or collected in artificial settings. I'm looking into mining these sentences from a base model's probability distribution instead. The plan is to prime the model with an informal opening and then rank the results by their log-likelihood to find what it considers most probable. I'm using the model's distribution as a proxy, even though the probabilities won't match real-world frequencies.
When a guy asked why I wasn't mining something useful like business data instead of this, I told him to mine his own business.
I haven't built the pipeline yet, but I've detailed [the strategies](https://github.com/8ta4/cue/blob/0942e7fe5013b3dc2ab72852b7751acbcc7e806d/DONTREADME.md).
How would you go about identifying the conversational sentences that a model's distribution considers most probable? | 2026-01-27T22:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qotld8/how_would_you_identify_the_conversational/ | 8ta4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qotld8 | false | null | t3_1qotld8 | /r/LocalLLaMA/comments/1qotld8/how_would_you_identify_the_conversational/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=108&crop=smart&auto=webp&s=17dd7874d4e7fd5f42871759afa8bdfa57177724', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=216&crop=smart&auto=webp&s=c907149684ef6a4bd02cf648f04b0fe8e6ed1c1f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=320&crop=smart&auto=webp&s=e5d6f217edd3c904366685ae140975aa2ae29177', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=640&crop=smart&auto=webp&s=1d2f16f7bfd4a5211a171e478126267eb9823594', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=960&crop=smart&auto=webp&s=bf209d0bd30c159a0191a4fb04ccfabd6d21115d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?width=1080&crop=smart&auto=webp&s=283d1397a363b0b7c109038588ed2c6b268ce872', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-Flhjr2WcKkdPYxbEriW8DiadIZhI485HrMnoj-zouY.png?auto=webp&s=c619bd423db3b7fce0da16f43fbe35b84bba0041', 'width': 1200}, 'variants': {}}]} |
Open-sourced an MCP Server Quickstart - give AI assistants custom tools | 0 | Hey all,
I put together a minimal boilerplate for building MCP (Model Context Protocol) servers and figured others might find it useful.
**What is MCP?**
It's an open protocol that lets AI assistants (Claude, Cursor, etc.) call external tools you define. Think of it as giving the AI hands to interact with your systems.
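The repo itself is TypeScript, but to give a sense of how small a tool definition is, here's roughly the same shape using the official Python MCP SDK (a sketch, not code from this repo):

```python
# Minimal MCP server exposing one tool via the Python SDK's FastMCP helper.
# The AI client (Claude, Cursor, ...) discovers `sha256_hex` and can call it.
import hashlib
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def sha256_hex(text: str) -> str:
    """Return the SHA-256 hex digest of the given text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a desktop client can launch it
```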
**What's in the repo:**
* Clean TypeScript setup with detailed comments explaining how everything works
* 11 example tools (uuid generation, hashing, JSON formatting, shell commands, etc.)
* Docs covering architecture, how to add tools, and configuration for different clients
* Works with Claude Desktop, Claude Code, and Cursor
**Who it's for:**
Anyone who wants to extend what AI assistants can do — whether that's calling APIs, querying databases, or automating workflows.
Link: github.com/fellanH/klar-mcp
MIT licensed, do whatever you want with it. Happy to answer questions. | 2026-01-27T22:52:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qotgyi/opensourced_an_mcp_server_quickstart_give_ai/ | Slow-Bake-9603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qotgyi | false | null | t3_1qotgyi | /r/LocalLLaMA/comments/1qotgyi/opensourced_an_mcp_server_quickstart_give_ai/ | false | false | self | 0 | null |
Got Qwen3-TTS running on iPhone with MLX. | 17 | I was able to get Qwen3-TTS-12Hz-0.6B running fully on-device. No cloud, nothing leaves your phone. It was way harder than expected so figured I'd share what I learned.
Pre-quantized the model to 8-bit to fit on device.
**MLX things that bit me:**
- Lazy evaluation is sneaky. The computation graph just keeps growing until iOS kills your app. You have to spam `eval()` and `Memory.clearCache()` constantly. I'm not joking, 53 cache clears throughout the pipeline.
- Weight loading only works on CPU. Tried GPU, got cryptic failures for days before figuring this out.
- Do any GPU work while backgrounded and iOS murders your app instantly. Had to add foreground checks everywhere.
**Audio stuff:**
- Don't use zero-padding for vocoder warmup frames. Sounds like garbage. Use actual audio context from previous chunks.
- Crossfade your audio buffers (10ms) or you get clicks between chunks (rough sketch after this list).
- Tried Core ML for the vocoder. Neural Engine hates transposed convolutions. Gave up.
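For the crossfade point above, it's just an overlap-add between consecutive chunks. A toy numpy illustration (the app itself does this in Swift; the 24 kHz sample rate here is an assumption):

```python
# Blend the last 10 ms of the previous chunk with the first 10 ms of the next
# one to avoid clicks at chunk boundaries.
import numpy as np

SR = 24_000             # assumed sample rate
FADE = int(0.010 * SR)  # 10 ms overlap

def stitch(prev: np.ndarray, nxt: np.ndarray) -> np.ndarray:
    """Append `nxt` to `prev`, blending the overlapping FADE samples."""
    fade_out = np.linspace(1.0, 0.0, FADE)
    blended = prev[-FADE:] * fade_out + nxt[:FADE] * (1.0 - fade_out)
    return np.concatenate([prev[:-FADE], blended, nxt[FADE:]])
```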
The app is called Duperr. You record a voice sample, generate speech, export it. It's on TestFlight if anyone wants to mess with it: https://testflight.apple.com/join/98d1hB6U
Happy to answer questions! | 2026-01-27T22:39:18 | aaronhampt | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qot50u | false | null | t3_1qot50u | /r/LocalLLaMA/comments/1qot50u/got_qwen3tts_running_on_iphone_with_mlx/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'FOlJa19PzTpkDgT5HBvEolJwD0lpui66uCLOZ6Gqhfc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=108&crop=smart&auto=webp&s=77babac24e5267255d299aa9a7fa96342e9c10ef', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=216&crop=smart&auto=webp&s=7ee134391b617097886375e7e80f1a67863f268f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=320&crop=smart&auto=webp&s=53e53f1aecaad848205f48fcc3c07096e67d71ee', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=640&crop=smart&auto=webp&s=bae01497857b6d9a6a2af23977be8e99865d5c2e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=960&crop=smart&auto=webp&s=eb733a0d624124ca5c2af5baf1c7e76ee96c44c6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?width=1080&crop=smart&auto=webp&s=1ad21fc2fbab8e35004d2fa07780b29b836aeeaa', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://preview.redd.it/xspfv0vkyyfg1.png?auto=webp&s=3bf7e16424b825a38e95e5059a2013bf97995d84', 'width': 1170}, 'variants': {}}]} | ||
From a Stone Age CNC Machinist to building a Multi-Agent Quantum System via "Vibe Coding." Am I crazy or is this the future? | 0 | Project passports | 2026-01-27T22:35:01 | https://drive.google.com/drive/folders/1mnFyWeYgj3K_2PZYM5F-o3yErHjQfdCC?usp=sharing | NectarineMental4429 | drive.google.com | 1970-01-01T00:00:00 | 0 | {} | 1qot10u | false | null | t3_1qot10u | /r/LocalLLaMA/comments/1qot10u/from_a_stone_age_cnc_machinist_to_building_a/ | false | false | default | 0 | null |
Kimi K2.5 Agent Swarm | 10 | I’m blown away by Kimi K2.5 Agent Swarm. it’s giving me serious Grok Heavy vibes but waaayyy cheaper. I tested it with a research prompt, and it handled it so much better than Gemini DeepResearch. since Kimi chat interface isn’t open source, are there any open alternatives that can match this level of performance or orchestration? | 2026-01-27T22:09:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qoscar/kimi_k25_agent_swarm/ | policyweb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoscar | false | null | t3_1qoscar | /r/LocalLLaMA/comments/1qoscar/kimi_k25_agent_swarm/ | false | false | self | 10 | null |
Kimi K2 Artificial Analysis Score | 245 | https://x.com/i/status/2016250137115557953 | 2026-01-27T21:58:50 | Virenz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qos25i | false | null | t3_1qos25i | /r/LocalLLaMA/comments/1qos25i/kimi_k2_artificial_analysis_score/ | false | false | default | 245 | {'enabled': True, 'images': [{'id': '0xqbgnt0syfg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=108&crop=smart&auto=webp&s=0f303fff9d769c4dfd76a87362286cce6199b239', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=216&crop=smart&auto=webp&s=4b30d7b31d2cf082674d2260451333c06d2151ed', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=320&crop=smart&auto=webp&s=ac5ac4b1d6afbef07a2f27782590d829c9cd0d60', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=640&crop=smart&auto=webp&s=e64564291959092586bed5a52ce15a55c2fad64c', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=960&crop=smart&auto=webp&s=6730ee248026fc6008db8c70c2a2fed8d2efd38b', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?width=1080&crop=smart&auto=webp&s=34e54244a148acdc12bcbe15be51df228a11ba1b', 'width': 1080}], 'source': {'height': 2301, 'url': 'https://preview.redd.it/0xqbgnt0syfg1.jpeg?auto=webp&s=fd6bd4857192e7303dcf2172289dd154a1eccaba', 'width': 4096}, 'variants': {}}]} | |
MiniMax-M2.1-REAP | 48 | [https://huggingface.co/cerebras/MiniMax-M2.1-REAP-139B-A10B](https://huggingface.co/cerebras/MiniMax-M2.1-REAP-139B-A10B)
[https://huggingface.co/cerebras/MiniMax-M2.1-REAP-172B-A10B](https://huggingface.co/cerebras/MiniMax-M2.1-REAP-172B-A10B)
| 2026-01-27T21:57:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qos15u/minimaxm21reap/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qos15u | false | null | t3_1qos15u | /r/LocalLLaMA/comments/1qos15u/minimaxm21reap/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=108&crop=smart&auto=webp&s=2db59c4bcc9c5785e09db3e9a5850e416f45fb3b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=216&crop=smart&auto=webp&s=2275430c789b6fa8e6352b073f8a70fddb43070d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=320&crop=smart&auto=webp&s=5bd02b11ade5a446d3ee7708c71858bb5410d8d8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=640&crop=smart&auto=webp&s=89840c648fd228cc75352cf2e304d0ed3a4e4d32', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=960&crop=smart&auto=webp&s=9cec22e67bd35dab9b92d96ea92ad784b22e205e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?width=1080&crop=smart&auto=webp&s=ccdc4b247bdf69d46bbb06ac4f344072a19694bb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o8rqnxnsAjibVglOXed_Ke8p-9ZHK0-R0--hmSaUYZ8.png?auto=webp&s=5767f9a6f779c047431d01feaac8e6c443cb90e6', 'width': 1200}, 'variants': {}}]} |
LLM UNCENSORED CCR CLAUDE | 0 | Since Claude Code is too limited due to censorship, I was wondering if there is an uncensored LLM that I can run locally and use with the Claude Code CLI or CCR Claude. | 2026-01-27T21:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qorg4k/llm_uncensored_ccr_claude/ | LongConsequence4102 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qorg4k | false | null | t3_1qorg4k | /r/LocalLLaMA/comments/1qorg4k/llm_uncensored_ccr_claude/ | false | false | self | 0 | null |
How Beads helped our engineer refactor 315 files in 12 hours with persistent agentic memory | 0 | [Agentic Refactoring at Scale with Beads](https://preview.redd.it/pvexgxvhnyfg1.png?width=1920&format=png&auto=webp&s=be5c0727c1684bfe3ad84c4cdbf82a1a7f8938bf)
**TL;DR:** Beads (SQLite + Git persistence by Steve Yegge) let a Claude Code agent survive multiple context compaction cycles without losing track of a large refactoring task. Beads is migrating to a Dolt backend — we wrote up the workflow.
**Source:** [https://www.dolthub.com/blog/2026-01-27-long-running-agentic-work-with-beads/](https://www.dolthub.com/blog/2026-01-27-long-running-agentic-work-with-beads/)
# Background
Our engineer Dustin first tested Beads on DoltCash, an agentic accounting app he's been building. It worked well enough that he tried something harder: refactoring a messy frontend codebase.
The codebase had:
* 1000+ line files
* Deeply nested rendering methods
* Inline styles, duplication, dead code
# Setup
bd init
Then update `AGENTS.md` with instructions for Beads task management.
# Task Structure
The agent was told to:
1. Create 1 epic per directory
2. Create 1 bead per file under each epic
3. Refactor each file for simplicity and modularity
This explicit graph prevents the agent from skipping files or calling it early.
# Results
* **Duration:** 12 hours
* **Files refactored:** 315
* **Compaction cycles:** Multiple
* **Derailments:** Zero
The agent checked the persistent store after each compaction, found its place, and continued. Dustin intervened about 20% of the time for typical agent issues (ESLint cheating, spinning with no output, talking itself out of work).
# Why We're Posting This
Beads is migrating to a Dolt backend. We think persistent agentic memory is a big deal for production AI workflows, and we're building toward that.
**Full writeup:** [https://www.dolthub.com/blog/2025-01-27-long-running-agentic-work-with-beads/](https://www.dolthub.com/blog/2025-01-27-long-running-agentic-work-with-beads/) | 2026-01-27T21:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qore0r/how_beads_helped_our_engineer_refactor_315_files/ | DoltHub_Official | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qore0r | false | null | t3_1qore0r | /r/LocalLLaMA/comments/1qore0r/how_beads_helped_our_engineer_refactor_315_files/ | false | false | 0 | null | |
Dual RTX PRO 6000 Workstation with 1.15TB RAM. Finally multi-users and long contexts benchmarks. GPU only vs. CPU & GPU inference. Surprising results. | 160 | Hey r/LocalLLaMA,
Me and my team have been building AI workstations for enterprise use and wanted to share some real benchmark data on a dual RTX PRO 6000 Blackwell Max-Q setup (192GB VRAM total) with over 1.15TB of DDR5 RAM.
**TL;DR**: Can a $30K-$50K workstation serve a team of 4-50 people or run multiple agents? Tested MiniMax M2.1 native fp8 (GPU+CPU via KTransformers) vs int4 quantized (GPU-only via SGLang). **Key finding: int4 on GPU only is 2-4x faster on prefill but maxes out at ~3 concurrent requests due to KV-cache constraints. Native fp8 scales much better to 10+ users on large contexts but remains slower E2E.** Full configs and data below.
**The setup:**
* 2x NVIDIA RTX PRO 6000 Max-Q (192GB VRAM total)
* AMD EPYC 9645, 96 cores / 192 threads
* 12x DDR5 ECC RDIMM 96GB 5600 MT/s (1152GB total)
**Model tested so far:**
* Native fp8 version: MiniMax-M2.1 ([link](https://huggingface.co/MiniMaxAI/MiniMax-M2.1))
* Quantized version: MiniMax-M2.1-BF16-INT4-AWQ ([link](https://huggingface.co/mratsim/MiniMax-M2.1-BF16-INT4-AWQ))
I wanted to compare two approaches: fp8 precision with CPU offloading vs quantized weights fitting entirely in VRAM.
# Why I’m sharing this
Most workstation benchmarks show single-user performance with limited context sizes. Given the investment here, I wanted to test if one plug-and-play workstation could actually serve an entire team or multiple simultaneous agents.
**I want to know how many people or agents can use this setup before it degrades too much.**
Key metrics:
* Prefill speed per user (tokens/s/user): Request processing speed
* TTFT (Time To First Tokens) (s/request): Time until first output generated
* Decode speed per user (tokens/s/request): Generation speed
* E2E request time (s/request): Total time from request to completion
* Queue time (s/request): Time waiting before processing starts
The priority use case is a coding agent as we would like to run a vibecoding platform 100% locally, hence the choice of MiniMax-M2.1 (more in follow-up posts).
# Methodology
There are two types of tests for now:
1. **Simple chat** (~140 tokens input, 300 tokens max output)
2. **Large context** (~64K tokens input, 300 tokens max output)
**Key details:**
* Used SGLang's per-request metrics logs to properly measure TTFT, prefill, and decode speed.
* Measured queueing time separately, as it is a good indicator of when the server starts to become overloaded.
* No prefix caching
* Tested with 1, 2, 4, 6, 8 and 10 simultaneous users (threads calling the API over and over again)
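For reference, each simulated user is conceptually just the loop below hammering the OpenAI-compatible endpoint and timing the stream client-side (an illustration, not the exact harness; the reported TTFT/prefill/decode figures come from the SGLang metrics logs mentioned above):

```python
# One load-generator worker: stream a request and record rough timings.
# Endpoint, model name and prompt are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="none")

def one_request(prompt: str) -> dict:
    t0 = time.perf_counter()
    ttft, chunks = None, 0
    stream = client.chat.completions.create(
        model="MiniMax-M2.1",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=300,
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if ttft is None:
                ttft = time.perf_counter() - t0
            chunks += 1  # roughly one token per streamed chunk
    e2e = time.perf_counter() - t0
    decode = chunks / (e2e - ttft) if ttft and e2e > ttft else 0.0
    return {"ttft_s": ttft, "e2e_s": e2e, "decode_tok_s": decode}
```

Each simulated user calls `one_request` repeatedly from its own thread.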
# Results: short context (~140 tokens input)
*[see graphs attached]*
**Takeaway:** The quantized model running on GPU alone performs far better than the fp8 model split between CPU and GPU, which was expected.
However, the fp8 model is still usable for up to 2-4 simultaneous users (less than 30s processing time). And while its prefill speed is very low on short contexts (260 down to 110 tokens/s), note the speed increase over larger contexts.
Above a certain input size threshold (about 4k tokens), KTransformers processes the prefill layer-wise, which adds a constant overhead but greatly increases computation speed by doing all of it on the GPU, loading and processing one layer at a time. This leads to the following results on large contexts.
# Results: Large context (64K tokens)
*[see graphs attached]*
Processing 64K tokens with one user takes ~15s for MiniMax-M2.1-INT4 on GPU only, and about double that for MiniMax-M2.1 with GPU and CPU offloading.
But here's the thing: INT4 has way less KV-cache available since the model must fit entirely in VRAM. It maxes out at 3 parallel requests. Beyond that, processing speed per request stays flat - requests just pile up in the queue. Queue time explodes and becomes the dominant factor in TTFT and E2E processing.
The results on large contexts are more favorable to the GPU+CPU setup. It's not significantly slower, and the massive KV-cache means real-world usage would see a lot of cache hits, further improving processing speed. However, the decode rate remains low (8 down to 3 tokens/s for 4 to 10 simultaneous users), so it may be of limited use for long generation tasks.
**Key message: do not underestimate queue time, it becomes the dominant bottleneck. Moreover, recomputing prefill can be costly and grows over time.**
# SGLang and KTransformers were used for GPU and CPU offloading with MiniMax-M2.1
At first I experimented with llama.cpp, which worked okay with CPU offloading but didn't scale well with several simultaneous users and does no particular optimisation for long inputs. I then switched to KTransformers, which supports layer-wise prefill with CPU offloading and works great for long inputs. It's based on SGLang and also handles simultaneous users well.
**KTransformers configuration, highly biased toward kv-cache size:**
kt run --enable-shared-experts-fusion \
--cpu-threads 96 \
--chunked-prefill-size 60000 \
--model-path /fast-data/ktransformer/MinimaxM2.1/ \
--max-total-tokens 600000 \
--gpu-experts 20 \
-p 8000 MiniMax-M2.1 \
--mem-fraction-static 0.85 \
--max-running-requests 12 \
--max-prefill-tokens 80000 \
--export-metrics-to-file \
--enable-metrics \
--export-metrics-to-file-dir ./metrics/ \
--enable-request-time-stats-logging \
--enable-cache-report
**SGLang config:**
python3 -m sglang.launch_server \
--host 127.0.0.1 \
--port "8000" \
--sleep-on-idle \
--disable-custom-all-reduce \
--max-running-requests 16 \
--cuda-graph-max-bs 16 \
--attention-backend flashinfer \
--served-model-name "MiniMax-M2.1" \
--model-path "mratsim/MiniMax-M2.1-BF16-INT4-AWQ" \
--tool-call-parser minimax-m2 \
--reasoning-parser minimax \
--trust-remote-code \
--export-metrics-to-file \
--enable-metrics \
--export-metrics-to-file-dir ./metrics/ \
--enable-request-time-stats-logging \
--enable-cache-report \
--tp 2 \
--mem-fraction-static 0.93
# What's next
I want to extend the tests to larger workloads and context. My next test is to run coding agents using Claude Code in parallel on real coding tasks in “Ralph” mode. I will continue comparing MiniMax-M2.1 and MiniMax-M2.1-INT4. I am also in the process of testing other models:
* Qwen3-Coder 235B
* GPT-OSS 120B
* DeepSeek V3.2
Happy to run specific tests if there's interest. Also curious if anyone else has multi-user scaling data on similar hardware.
*We're a small team deploying local AI agents and setting up private infrastructures. If you have questions about the setup or want us to test something specific, drop a comment.* | 2026-01-27T21:31:27 | https://www.reddit.com/gallery/1qorbdk | Icy-Measurement8245 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qorbdk | false | null | t3_1qorbdk | /r/LocalLLaMA/comments/1qorbdk/dual_rtx_pro_6000_workstation_with_115tb_ram/ | false | false | 160 | null | |
Apple M5 AI optimized cluster | 0 | I went through a bit of an exercise with Claude, considering the die-optimization potential for the new M5 Ultra to come up with an AI-tailored spec. Then I considered RDMA and some additonal off the shelf hardware. Then I asked it to put it all into a whitepaper format. The (speculative) results are impressive, and (IMO) should be considered for any small to medium enterprise considering investing in AI for their business. It offers substantial savings vs a comparable NVIDIA setup. Here is the link to the document [https://drive.google.com/file/d/1fWETXgcKGOkTkf41o1gM8eLjf37maUim/view?usp=sharing](https://drive.google.com/file/d/1fWETXgcKGOkTkf41o1gM8eLjf37maUim/view?usp=sharing) | 2026-01-27T21:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qoqwb5/apple_m5_ai_optimized_cluster/ | Full-Bag-3253 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoqwb5 | false | null | t3_1qoqwb5 | /r/LocalLLaMA/comments/1qoqwb5/apple_m5_ai_optimized_cluster/ | false | false | self | 0 | null |
Burned through Claude Max 20x's "5-hour limit" in under 2 minutes | 0 | "Get Claude Pro" they said.
Hit the limit.
"Get Claude Max 5x, you'll be fine."
Hit the limit.
"Get Claude Max 20x for £200/month. The limit is so high you'll rarely hit it."
Here's me watching the usage hit 100% in <2 minutes.
Open to sponsors | 2026-01-27T21:14:50 | https://v.redd.it/a5iz2al4kyfg1 | Zaiiny | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qoquox | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a5iz2al4kyfg1/DASHPlaylist.mpd?a=1772140504%2CNzljOTk0ZDhiZTgzMGJiNjQzMTYyZjRjNDBjYzUxZTRmNTEyZjQxMTlkZjI4ZTZlMzczN2JmZTczMWNkNWExNw%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/a5iz2al4kyfg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/a5iz2al4kyfg1/HLSPlaylist.m3u8?a=1772140504%2CM2VlYmNkYzU5YzllN2ZhYTY0Y2FlNjVlNjc4YTMwMTJmMTAyNjY0ZjFiYmNmODgyZTAwNmZlMzRkNjYwODRmZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a5iz2al4kyfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qoquox | /r/LocalLLaMA/comments/1qoquox/burned_through_claude_max_20xs_5hour_limit_in/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=108&crop=smart&format=pjpg&auto=webp&s=831415aec584c9db57d77b21da442a40c8cd5d7d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=216&crop=smart&format=pjpg&auto=webp&s=531d9ed39be42b3747d1f3551956aaf18b106bf7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=320&crop=smart&format=pjpg&auto=webp&s=f95a90817a39431e72926dd7ad825bcdd33fad9c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=640&crop=smart&format=pjpg&auto=webp&s=f74d03b39a1e31d20691cc6e5e4c859b8520d807', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=960&crop=smart&format=pjpg&auto=webp&s=992c4b6c8740cea0cab786c641fbfee4c65d88bd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=08e8660c17d48a0d0719b36f8ec653405d17fe65', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bWpxNXVwbDRreWZnMQF7ygsja-FkA40F1E_KpQ2ESxnJk4Q3xeU0TraDFstV.png?format=pjpg&auto=webp&s=4c27bfd94f3cd618abd437203d44dc337a720036', 'width': 1920}, 'variants': {}}]} | |
Vibe Coding: How I hacked my way into Quantum Trading & Linux with a 5th-grade education just by chatting with "Bro" | 1 | [removed] | 2026-01-27T21:09:45 | https://drive.google.com/drive/folders/1mnFyWeYgj3K_2PZYM5F-o3yErHjQfdCC?usp=drive_link | NectarineMental4429 | drive.google.com | 1970-01-01T00:00:00 | 0 | {} | 1qoqpdf | false | null | t3_1qoqpdf | /r/LocalLLaMA/comments/1qoqpdf/vibe_coding_how_i_hacked_my_way_into_quantum/ | false | false | default | 1 | null |
AI model keeps forcing itself to believe something it doesn't know exists is a rumor | 0 | Using model: Nanbeige4 3b (i have also tested this with qwen3 4b, but this is mainly focusing on the nanbeige model and its output)
Hello, currently to test my ai i am asking it this: "How many ounces of gold would it take to buy the rtx 5090 at launch price?"
Whilst the ai is able to use the search tool provided and finds a link to the official nvidia page to check the site, not much later it starts telling itself the rtx 5090 is fake, hypothetical, etc...
Here is it's thinking process after the search "rtx 5090 price":
```
`Okay, let's see. The user is asking how many ounces of gold it takes to buy an RTX 5090 right now. First, I need to check if the RTX 5090 actually exists. From the search results, there's a lot of information about it. The first result mentions NVIDIA's GeForce RTX 5090 starting at $1999. Other sources like VideoCardz.com say prices are pushing toward $4000, and some even mention up to $5495. There's also a TechRadar article saying it's vanished from shelves in the US but is available elsewhere.`
`Wait, but I thought the RTX 50 series hasn't been announced yet. My knowledge cutoff is July 2024, and NVIDIA's latest is the 40 series. Maybe the search results are speculative or from a future date? Looking at some of the dates in the results: one mentions "55% increase RTX 5090 pricing has risen by 55% since July Retail pricing for Nvidia's gaming flagship, the RTX 5090, was available at prices close to Nvidia's official MSRP (£1889)." July 2025? That's in the future. Another result from 2026. Hmm, this is confusing.`
`Wait, the user is asking "right now," but if the RTX 5090 hasn't been released yet (as of 2024), then it doesn't have a current price. However, the search results are showing prices as if it's already available, possibly from future speculation or incorrect data. Maybe there's a mistake here. Let me check the dates on the search results.`
`Looking at the first result's URL:`
[`https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/`](https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/rtx-5090/)
`– this is NVIDIA's official site, but if the RTX 50 series isn't out yet, this page might not exist. However, the search result shows it, so maybe it's a placeholder or the site has forward-looking content.`
`Another result from TechRadar dated 2026: "The Nvidia RTX 5090 has vanished from retailer shelves in the US..." So this suggests the RTX 5090 is expected to be released in the future, maybe late 2024 or 2025. But the user is asking "right now," which is current date. If the GPU isn't released yet, it doesn't have a price. However, the search results are giving hypothetical prices, perhaps based on leaks or rumors.`
`Wait, the user might be confused, thinking the RTX 5090 is available now. But as of 2024, NVIDIA hasn't announced the 50 series. The latest is RTX 4090. So the RTX 5090 is a future product. Therefore, it doesn't have a current price. But the search results are showing prices, which might be speculative.`
`However, the user is asking for the current price to convert to gold ounces. If the RTX 5090 isn't available, the price is zero or non-existent. But the search results are suggesting prices around $2000-$5000. Maybe the user made a typo and meant RTX 4090. Let me check if the RTX 4090's price is known.`
```
I have been trying to fix this for 2 days. I created a system prompt telling it how to think, what to do, and to stop calling stuff that is not in its internal data (stuff the AI doesn't know) false, and to prioritize search data over what it knows internally. I have also told it in the system prompt that its training date is old and that if it needs the current time, it should use the js-sandbox-code function (to run JavaScript code).
No matter what I do, it always tells itself the information is false, a rumor, or a "placeholder".
Is this a problem that can be fixed, or is it maybe because the model is small, or due to another issue?
Also on a separate note, it overthinks SO much. You can say a simple "hey" and it will waste half of it's context tokens trying to generate a response, is there an answer to this too? | 2026-01-27T21:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qoqoil/ai_model_keeps_forcing_itself_to_believe/ | DeliciousDrainage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoqoil | false | null | t3_1qoqoil | /r/LocalLLaMA/comments/1qoqoil/ai_model_keeps_forcing_itself_to_believe/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=108&crop=smart&auto=webp&s=735285e16af5990ad36ccdc4a2b75637f26c73a1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=216&crop=smart&auto=webp&s=8176c628cc052edb20097068067f167f8170a1fa', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=320&crop=smart&auto=webp&s=c3239729d18ad450f11a1055896c4ffd813e387c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=640&crop=smart&auto=webp&s=114d346fa89ae9ff635c9e21c4d9140fb0030b0d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=960&crop=smart&auto=webp&s=b00218154fc5a4d08c6a998d5026008c3fdce38d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?width=1080&crop=smart&auto=webp&s=787006956a1b410325389ec4f7dd00b05d93653d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk.jpeg?auto=webp&s=365901e240e7e61a2195e930bd9d5764a0c4f486', 'width': 1200}, 'variants': {}}]} |
Fine Tuning Open Coding Agents: Fast, accessible coding agents that adapt to any repo | 4 | Ai2 released SERA (Soft-verified Efficient Repository Agents), open coding models (8B-32B on Qwen3) achieving 54.2% on SWE-Bench Verified, surpassing prior open SOTA at low cost (\~$400 to match best open-source, $12K for industry-level) | 2026-01-27T21:08:39 | https://allenai.org/blog/open-coding-agents | Predatedtomcat | allenai.org | 1970-01-01T00:00:00 | 0 | {} | 1qoqo8u | false | null | t3_1qoqo8u | /r/LocalLLaMA/comments/1qoqo8u/fine_tuning_open_coding_agents_fast_accessible/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=108&crop=smart&auto=webp&s=775790da854ae75db5f1f236b88a2b54a2afb036', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=216&crop=smart&auto=webp&s=2966f6111db0705148c8ea517a4cd5703bd73ecc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=320&crop=smart&auto=webp&s=bb4407988e7284f1b35c1287f36380d4d1a43b3b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=640&crop=smart&auto=webp&s=c9fa4098ed7e81700914da7849bcb5301998beb9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=960&crop=smart&auto=webp&s=6ac9b24df722b40d5414c02c7ef888aa6f7aa6ec', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?width=1080&crop=smart&auto=webp&s=958e621d1ee7ec5c368d294b68b603b911b1d493', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/T5WTGWCkpcrPwUXgBqmc1gs-6NekG9l9o4VonttGSkc.jpeg?auto=webp&s=c950b18009af4f038f3bcbd478e222789040ed71', 'width': 1600}, 'variants': {}}]} |
[Project] I built a <50MB RAM local dictation client for Windows using Native AOT and Quantized Whisper | 5 | Hi everyone,
I got tired of "lightweight" AI tools actually being 500MB Electron wrappers that eat my RAM. I wanted a dictation tool that I could leave running 24/7 without noticing it.
I built DictaFlow using C# Native AOT (Ahead-of-Time compilation).
The Stack:
Engine: Local Whisper (quantized models) running on CPU/GPU.
Performance: Idles at ~0% CPU and <50MB RAM. No cold start lag.
Privacy: Zero data egress. Audio stays in the volatile memory buffer and is wiped after inference.
It features a "Vibe Coding" mode that detects if you are in VS Code/IntelliJ and prompts the model to format for camelCase, snake_case, and syntax automatically.
It’s free to use (standard tier). I’d love feedback on the inference speed on older CPUs if anyone wants to stress test it.
Link: https://dictaflow.vercel.app/ | 2026-01-27T20:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qophbg/project_i_built_a_50mb_ram_local_dictation_client/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qophbg | false | null | t3_1qophbg | /r/LocalLLaMA/comments/1qophbg/project_i_built_a_50mb_ram_local_dictation_client/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=108&crop=smart&auto=webp&s=82531c889da366e7b49b41f5470cb86a04f18054', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=216&crop=smart&auto=webp&s=99c8a7eead8bcc092749b991ed1594365443a6d2', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=320&crop=smart&auto=webp&s=74fdcc79d901b5f17ed6b1a8fe966476f414ef30', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=640&crop=smart&auto=webp&s=8abcb0b7af14dc572fa3e8f375287e1a56c8e654', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=960&crop=smart&auto=webp&s=d6678c47048032ed57759193e1bc3a3e854e41be', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?width=1080&crop=smart&auto=webp&s=f8330b8a672dc6e05758cde0d12dc20ea174e721', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/tQWuwtuYrodebSiD4b8gF0ZrSF_s9rTRWIJbMBwog_4.png?auto=webp&s=4d5273a67ba22b7d9b15ede428a18bcb14b45473', 'width': 1536}, 'variants': {}}]} |
Llama Server Using Dual GPUs - PP is amazing! TPS not so much! | 2 | I need some advice on improving my tp/s using dual GPUs in Llama Server please.
I've been tweaking the settings and both are getting used.
Tp/s boost is like 10 to 20%.
Pp/s boost is like 90% - It's amazing!
**Any advice on improving things please?**
I'm running an RTX Pro 6000 (Blackwell) and an RTX 5090, only using models that fit in the combined memory of both cards. PCIe 5.0 is set to x8/x8 for both.
Tensor split: 60,40
Main GPU: RTX Pro
Context (test): 60k
I'm setting up a server to use AI as a service. As in very fast processing and output times that you'd expect from a local country API. I have another RTX Pro 6000 on standby but don't want to commit to it if performance isn't there. | 2026-01-27T20:25:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qopgpp/llama_server_using_dual_gpus_pp_is_amazing_tps/ | gordi555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qopgpp | false | null | t3_1qopgpp | /r/LocalLLaMA/comments/1qopgpp/llama_server_using_dual_gpus_pp_is_amazing_tps/ | false | false | self | 2 | null |
best os for local ai?(not server normal pc) | 0 | windows(definitly not😆),linux or macos | 2026-01-27T20:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qop9gy/best_os_for_local_ainot_server_normal_pc/ | Kerem-6030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qop9gy | false | null | t3_1qop9gy | /r/LocalLLaMA/comments/1qop9gy/best_os_for_local_ainot_server_normal_pc/ | false | false | self | 0 | null |
any low censorship local vision models? | 0 | I'd like to be able to send nsfw images to my chatbot and have it get a good description of it so it can chat with me about it. qwen is pretty uptight, InternVL3_5:8b won't refuse but just doesn't "see" erotic stuff. anything that is trained on erotica or is just good at it generally? | 2026-01-27T20:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qop4dl/any_low_censorship_local_vision_models/ | Spectrum1523 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qop4dl | false | null | t3_1qop4dl | /r/LocalLLaMA/comments/1qop4dl/any_low_censorship_local_vision_models/ | false | false | self | 0 | null |
R&D on edge device? You Betcha :) Applying memory to frozen LLM's | 0 | Hey all!
So I kinda stumbled into R&D when I read about Titans in December, and since then I've been researching on my Jetson Orin AGX how to enable memory on frozen models.
And in large part thanks to Claude Code, I've been able to publish my research :) [https://arxiv.org/abs/2601.15324](https://arxiv.org/abs/2601.15324)
Important note though: I'm not sharing anything production-ready or a benchmark-tested solution. The paper is mostly centered on 'can this work?' and as such it's more of a mechanism paper - some (perhaps) interesting methods I found as I tried to tackle various ways to enable memory and use it. I'm mostly proud of CDD; it seems promising as I continue working with it.
This paper is merely the starting point of a long journey ahead for me. Lots of R&D planned ahead.
I'm merely a hobbyist by the way; I do have an academic background, but in alpha sciences, ha :P
AMA if anyone's interested in any aspect of this. I'll be online to answer questions for a good while.
\~Mark | 2026-01-27T20:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qop3n6/rd_on_edge_device_you_betcha_applying_memory_to/ | thehighnotes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qop3n6 | false | null | t3_1qop3n6 | /r/LocalLLaMA/comments/1qop3n6/rd_on_edge_device_you_betcha_applying_memory_to/ | false | false | self | 0 | null |
What is your Local LLM runner of choice | 1 | Hello everyone, I currently want to run some LLMs on my modest PC with 16 GB ram and 4 GB vram. Could you recommend me an app to run local LLMs that runs good on my specs if they matter ? | 2026-01-27T20:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qooxr3/what_is_your_local_llm_runner_of_choice/ | HiqhAim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qooxr3 | false | null | t3_1qooxr3 | /r/LocalLLaMA/comments/1qooxr3/what_is_your_local_llm_runner_of_choice/ | false | false | self | 1 | null |
Prompt -> Offline Voice AI App in ~11 mins. We forked Expo to bundle native on-device AI runtimes — Replit agent builds a fully offline voice assistant | 8 | I recorded a demo showing a u/Replit agent generating an u/Expo app that runs a full on-device voice pipeline (speech-to-text → LLM → text-to-speech). The goal is to collapse the “native setup + bindings + glue” problem that slows down on-device AI experimentation.
What we built:
- RunAnywhere: open-source SDK for running LLM / STT / TTS locally on iOS/Android
- A fork of Expo with our native runtimes baked in
- A custom client (“RunAnywhere AI Studio”) so you can scan a QR and run the app on your phone like the normal Expo workflow
In the demo, the agent builds a “Punny Voice Assistant” that runs locally and responds via TTS.
This is early and I want real feedback from devs:
- What’s the first offline-first AI app you’d actually build?
- What would make this production-usable?
Links in the comment | 2026-01-27T20:00:56 | https://v.redd.it/h4frf0uk3yfg1 | thecoder12322 | /r/LocalLLaMA/comments/1qooruc/prompt_offline_voice_ai_app_in_11_mins_we_forked/ | 1970-01-01T00:00:00 | 0 | {} | 1qooruc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h4frf0uk3yfg1/DASHPlaylist.mpd?a=1772265665%2COGY1MDczZjYxOTc5YmU0NTc1Njc4YjhhZmI3NjkyODU4MWVmOTllMWIxMGJiZjMwOTBiYTgzNWU0MDQ3N2VmYQ%3D%3D&v=1&f=sd', 'duration': 642, 'fallback_url': 'https://v.redd.it/h4frf0uk3yfg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 804, 'hls_url': 'https://v.redd.it/h4frf0uk3yfg1/HLSPlaylist.m3u8?a=1772265665%2CMjY4Mzg4ZjQyYTQxNWQyMzYzOTg1NzBiN2JhNmM2NzZjNWQ5MjdjMDliMWVlNzFmZTI1NjQwY2YwZmMzM2QyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h4frf0uk3yfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qooruc | /r/LocalLLaMA/comments/1qooruc/prompt_offline_voice_ai_app_in_11_mins_we_forked/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=108&crop=smart&format=pjpg&auto=webp&s=7a1832456ef60576eae102c56aec16c84c42a387', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=216&crop=smart&format=pjpg&auto=webp&s=aadd8e1aa3ebedd1dbd8f60d28d4d48b9de67c6a', 'width': 216}, {'height': 133, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=320&crop=smart&format=pjpg&auto=webp&s=55d2a7583c6d06e5b04dcebf87770669662c9ba7', 'width': 320}, {'height': 267, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=640&crop=smart&format=pjpg&auto=webp&s=5680c51a40b31f787b5b6f16a79921271b3dbf9d', 'width': 640}, {'height': 401, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=960&crop=smart&format=pjpg&auto=webp&s=98ed6af5827cf525e8f657ee45603943c979976d', 'width': 960}, {'height': 452, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f56de0b2d894544035840082b3cc43036f1212e7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bXQ3OGcwdWszeWZnMar3PqhOAeGgunpwEz0ru_QCwuvxRMLdSfMMDfA6VBMp.png?format=pjpg&auto=webp&s=8408d3ad19b8cb20cb7e14808a2152dff908c9cc', 'width': 2580}, 'variants': {}}]} | |
Agent Composer to build an infra layer between data sources and AI models (LLM agnostic) | 2 | Kinda interesting to see that ppl are starting to care more about the "context" they provide to models rather than which models to use
[https://contextual.ai/blog/introducing-agent-composer](https://contextual.ai/blog/introducing-agent-composer)
| 2026-01-27T19:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qooo77/agent_composer_to_build_an_infra_layer_between/ | pahita | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qooo77 | false | null | t3_1qooo77 | /r/LocalLLaMA/comments/1qooo77/agent_composer_to_build_an_infra_layer_between/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=108&crop=smart&auto=webp&s=fe8b7c13df467c23ee58e0aeb515bc2db0d8b82e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=216&crop=smart&auto=webp&s=1810ad12e26a04e5a5d4b95a24f5e6e27cab9856', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=320&crop=smart&auto=webp&s=1fafbed901b3668ebd87ab16217551820d9f6478', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=640&crop=smart&auto=webp&s=38fc4324b4af68f31d3e1af1980863943f3b2693', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=960&crop=smart&auto=webp&s=860d7de439787422fe5fcc837e1df26d1320d924', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?width=1080&crop=smart&auto=webp&s=23bb6e10b5340be50578680dc3e7bfe6f8c04a71', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/yYdbqW05T5AQUNyQjaVRoQuoUVQei2mVkLDJ9QQPs4s.png?auto=webp&s=cf2505faa70b22df2ea15260815d6d9e2db4b568', 'width': 2048}, 'variants': {}}]} |
agnostic memory layer for local agents. is a gatekeeper architecture viable? | 0 | working on a local first model agnostic memory middleware for agents. right now most agent memory is just dump everything into a vectordb which leads to noise conflicting facts and privacy issues. the idea is to treat memory like a subconscious not a log file.
instead of direct writes every interaction passes through a local gatekeeper pipeline. first a privacy filter scrubs pii like phone numbers or ids before anything leaves volatile memory. then semantic normalization handles code mixed language so semantic normalization handles code mixed language so terms like elevator and lift or apartment and flat resolve to the same meaning and hit the same vector space. next atomic fact extraction using a small local model keeps only subject action object facts and drops conversational fluff. after that a verification step uses an entailment model to check whether the new fact contradicts existing long term memory. finally storage routing uses an importance score based on recency frequency and surprise to decide whether data goes to long term vector memory or stays in session cache.
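A minimal runnable sketch of that ordering, with toy stand-ins for every stage (the regex, synonym table, and scores below are placeholders, not proposals):

```python
# Skeleton of the gatekeeper pipeline: scrub -> normalize -> extract -> verify -> route.
import re

SYNONYMS = {"lift": "elevator", "flat": "apartment"}  # code-mixed / variant terms
THRESHOLD = 0.5                                       # arbitrary routing cutoff

def scrub_pii(text: str) -> str:
    return re.sub(r"\b\d{10}\b", "[phone]", text)     # 1. privacy filter (toy rule)

def normalize(text: str) -> str:
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())  # 2. normalization

def extract_facts(text: str) -> list:
    # 3. stand-in for a small local model doing subject-action-object extraction
    return [s.strip() for s in text.split(".") if s.strip()]

def gatekeeper(utterance: str, long_term: set, session: set) -> None:
    for fact in extract_facts(normalize(scrub_pii(utterance))):
        if fact in long_term:                          # 4. stand-in for the entailment check
            continue
        score = 0.9 if "always" in fact else 0.3       # 5. toy recency/frequency/surprise score
        (long_term if score >= THRESHOLD else session).add(fact)

long_term, session = set(), set()
gatekeeper("my number is 5551234567. the lift always breaks", long_term, session)
print(long_term, session)
```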
The goal is to decouple memory management from the agent itself: the agent thinks, the middleware remembers and keeps things clean.
Looking for feedback.
Is this overkill for local single-user agents? Or
has anyone actually solved code mixing properly in rag systems ? thoughts welcome ! | 2026-01-27T19:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qoolgj/agnostic_memory_layer_for_local_agents_is_a/ | Dependent_Turn_8383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoolgj | false | null | t3_1qoolgj | /r/LocalLLaMA/comments/1qoolgj/agnostic_memory_layer_for_local_agents_is_a/ | false | false | self | 0 | null |
Kimi K2.5 just dropped “Visual Coding” — is vibe coding about to look… outdated? | 0 | News: Kimi K2.5 dropped with “visual coding” hype (UI/flows/diagrams/diffs as context).
And I’m wondering if we’re watching the end of “prompt roulette.”
By “visual coding,” I mean the model working primarily off visual/structured context (UI, flows, diffs), not just text prompts.
My take:
Vibe coding is great for prototypes. Visual coding is what happens when you actually have to ship.
Because vibe coding is basically:
prompt → generate → run → “why is this broken?” → patch → repeat
Visual coding could become:
point at the component/flow → apply constraints → review diff → verify → ship
Pick a side (A/B/C):
A) Vibe-first: natural language remains the main dev interface
B) Visual-first: structure/visuals become the truth source
C) Hybrid: vibe for intent, visual for grounding/debugging
The question that matters:
When AI breaks something, what do you trust more: a better prompt… or a diff you can actually reason about?
Reply A/B/C and tell me the one thing that would make you switch.
Drop one concrete example: what task is vibe coding great at but visual coding would be worse at (or vice versa)? | 2026-01-27T19:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qoodpt/kimi_k25_just_dropped_visual_coding_is_vibe/ | newbee_2024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoodpt | false | null | t3_1qoodpt | /r/LocalLLaMA/comments/1qoodpt/kimi_k25_just_dropped_visual_coding_is_vibe/ | false | false | self | 0 | null |
allenai released new open coding models | 65 | [https://huggingface.co/collections/allenai/open-coding-agents](https://huggingface.co/collections/allenai/open-coding-agents)
https://preview.redd.it/3wanlr674yfg1.png?width=1196&format=png&auto=webp&s=3c31d64089433fd350f3aaa72d94242e9326b7ab
[https://allenai.org/papers/opencodingagents](https://allenai.org/papers/opencodingagents) | 2026-01-27T19:45:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qoocgn/allenai_released_new_open_coding_models/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoocgn | false | null | t3_1qoocgn | /r/LocalLLaMA/comments/1qoocgn/allenai_released_new_open_coding_models/ | false | false | 65 | null | |
Local CPU-only interference (ik_llama, OSS-120b) | 1 | [removed] | 2026-01-27T19:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qonyll/local_cpuonly_interference_ik_llama_oss120b/ | ZealousidealBunch220 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qonyll | false | null | t3_1qonyll | /r/LocalLLaMA/comments/1qonyll/local_cpuonly_interference_ik_llama_oss120b/ | false | false | self | 1 | null |
Benchmark of Qwen3-32B reveals 12x capacity gain at INT4 with only 1.9% accuracy drop | 52 | We ran 12,000+ MMLU-Pro questions and 2,000 inference runs to settle the quantization debate. INT4 serves 12x more users than BF16 while keeping 98% accuracy.
Benchmarked Qwen3-32B across BF16/FP8/INT8/INT4 on a single H100. The memory savings translate directly to concurrent user capacity. Went from 4 users (BF16) to 47 users (INT4) at 4k context. Full methodology and raw numbers here: https://research.aimultiple.com/llm-quantization/ | 2026-01-27T19:32:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qonyhc/benchmark_of_qwen332b_reveals_12x_capacity_gain/ | AIMultiple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qonyhc | false | null | t3_1qonyhc | /r/LocalLLaMA/comments/1qonyhc/benchmark_of_qwen332b_reveals_12x_capacity_gain/ | false | false | self | 52 | null |
Turning VS Code into a "Second Brain" App Platform: Meet VibeDeck (Early Beta) | 0 | More and more of us are using IDEs for things beyond just coding: 2nd brains, creative writing, data analysis, and personal workflows.
I always felt like something was missing—the ability to quickly spin up **visual micro-apps** that help me interact with my data while providing a clean **API/MCP** for AI to work with those same structures.
That’s how **VibeDeck** started. It’s a runtime for local-first, AI-ready micro-utilities.
**The Core Idea:**
* **Zero-Build:** Apps are just folders with simple HTML/JS. No complex toolchains.
* **AI-Native (MCP):** Every function you write is automatically exposed to your LLM (Claude, Copilot, etc.) via Model Context Protocol.
* **Files-First:** Your data stays in your project as human-readable JSON/Markdown. No cloud, no proprietary databases.
**Full transparency:** This is a very early (and buggy) beta. I’m currently at that "founder’s wall" where I’m not sure if the implementation is as smooth as I imagined, and I'm losing a bit of steam.
I'm looking for pioneers and enthusiasts who love the "IDE as a life-system" concept. I’d love your feedback, or even better, some hands-on help if you're interested in this.
**VSCode**: [https://marketplace.visualstudio.com/items?itemName=ValentinLapchevskiy.vibedeck](https://marketplace.visualstudio.com/items?itemName=ValentinLapchevskiy.vibedeck)
**OpenVSX**: [https://open-vsx.org/extension/ValentinLapchevskiy/vibedeck](https://open-vsx.org/extension/ValentinLapchevskiy/vibedeck)
**GitHub Repository:** [https://github.com/gen4sp/VibeDeckExtension](https://github.com/gen4sp/VibeDeckExtension)
Would love to hear your thoughts. Is this something you’d use, or am I over-engineering my workspace? | 2026-01-27T19:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qoncqe/turning_vs_code_into_a_second_brain_app_platform/ | gen4doc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoncqe | false | null | t3_1qoncqe | /r/LocalLLaMA/comments/1qoncqe/turning_vs_code_into_a_second_brain_app_platform/ | false | false | self | 0 | null |
Pre-built llama-cpp-python wheel for RTX 5060 (Blackwell/sm_120) | CUDA 13.1 | Python 3.11 | 0 | Hi everyone!
Just upgraded to an **RTX 5060** and realized that current pre-built wheels for `llama-cpp-python` don't support the new **Blackwell** architecture out of the box (standard wheels often fail or run extremely slow on SM 12.0).
Since compiling on Windows can be a pain with all the CMake/Visual Studio dependencies, I've decided to share my successful build.
**Build details:**
* **Library Version:** 0.3.16
* **Architecture:** sm\_120 (Blackwell / RTX 50-series)
* **CUDA Toolkit:** 13.1
* **Compiler:** MSVC 2022
* **Python Version:** 3.11 (Windows x64)
Tested on my machine: `prompt eval` and `token generation` are now fully offloaded to GPU with proper speed.
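Quick sanity check after installing the wheel (the model path is a placeholder; `n_gpu_layers=-1` offloads all layers, and `verbose=True` prints the backend info so you can confirm CUDA/sm_120 is used):

```
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/model.gguf",  # placeholder - point at your GGUF file
    n_gpu_layers=-1,                  # offload every layer to the RTX 5060
    n_ctx=4096,
    verbose=True,
)

out = llm("Write one sentence about Blackwell GPUs.", max_tokens=64)
print(out["choices"][0]["text"])
```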
**Link to GitHub Release:** [Release Llama-cpp-python v0.3.16 for RTX 5060 (CUDA 13.1) · assajuk/Llama-cpp-python-v0.3.16-for-RTX-5060-CUDA-13.1-](https://github.com/assajuk/Llama-cpp-python-v0.3.16-for-RTX-5060-CUDA-13.1-/releases/tag/v0.3.16-rtx5060)
Hope this saves someone a few hours of troubleshooting! | 2026-01-27T19:08:31 | IntelligentArugula34 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qona5k | false | null | t3_1qona5k | /r/LocalLLaMA/comments/1qona5k/prebuilt_llamacpppython_wheel_for_rtx_5060/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '5rfhtiekxxfg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=108&crop=smart&auto=webp&s=a4fdae11f8bfe8bc590e04ff325533ca9bcdb6ff', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=216&crop=smart&auto=webp&s=cb2e9ec904b455d4799c0a3306539ceac2d1a49d', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=320&crop=smart&auto=webp&s=c522e22b4e729a9f700a7faf678cf6acb72bbfcf', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=640&crop=smart&auto=webp&s=6fc35b1f316bed8e4ac3481e77263a124f586cf4', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=960&crop=smart&auto=webp&s=41f29b911e98e922534dec9555362c1f08b3bf76', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?width=1080&crop=smart&auto=webp&s=533ced3b9797d671f06d7900cc08d8ddbd8947ba', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://preview.redd.it/5rfhtiekxxfg1.jpeg?auto=webp&s=dcf6f5570913ae646329cdf1d9b08b9280432f36', 'width': 2500}, 'variants': {}}]} | |
How do I turn off CPU for llama.cpp? | 0 | Using ik_llama, llama.cpp like this
```
# --threads 0                       -> cpu turned off?
# -khad -sas -gr -smgs -ger -mla 3  -> whatever this does
# -mg 0 -ts 1,1                     -> dual gpu
./llama-server \
  --numa numactl \
  --threads 0 \
  -ngl 9999 \
  --cont-batching \
  --parallel 1 \
  -fa on \
  --no-mmap \
  -sm graph -cuda fusion=1 \
  -khad -sas -gr -smgs -ger -mla 3 \
  --mlock \
  -mg 0 -ts 1,1
```
### 800% CPU usage ???? 100% gpu ???
2 P40 pascal no nvlink | 2026-01-27T18:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qoms74/how_do_i_turn_off_cpu_for_llamacpp/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoms74 | false | null | t3_1qoms74 | /r/LocalLLaMA/comments/1qoms74/how_do_i_turn_off_cpu_for_llamacpp/ | false | false | self | 0 | null |
Some initial benchmarks of Kimi-K2.5 on 4xB200 | 23 | Just had some fun and ran a (very crude) benchmark script. Sadly, one GPU is busy so I can only run on 4 instead of 8 (thus limiting me to \~30k context without optimizations).
Command used (with random-input-len changing between sample points):
vllm bench serve \
--backend openai \
--base-url http://localhost:8000 \
--model /models/huggingface/moonshotai/Kimi-K2.5 \
--dataset-name random \
--random-input-len 24000 \
--random-output-len 512 \
--request-rate 2 \
--num-prompts 20
One full data point:
============ Serving Benchmark Result ============
Successful requests: 20
Failed requests: 0
Request rate configured (RPS): 2.00
Benchmark duration (s): 61.48
Total input tokens: 480000
Total generated tokens: 10240
Request throughput (req/s): 0.33
Output token throughput (tok/s): 166.55
Peak output token throughput (tok/s): 420.00
Peak concurrent requests: 20.00
Total token throughput (tok/s): 7973.52
---------------Time to First Token----------------
Mean TTFT (ms): 22088.76
Median TTFT (ms): 22193.34
P99 TTFT (ms): 42553.83
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 34.37
Median TPOT (ms): 37.72
P99 TPOT (ms): 39.72
---------------Inter-token Latency----------------
Mean ITL (ms): 34.37
Median ITL (ms): 17.37
P99 ITL (ms): 613.91
==================================================
As you can see, first token latency is terrible. This is probably due to an unoptimized tokenizer and inefficient chunk prefilling. I wanted to see the model perform with default vllm settings though.
Coding looks okay-ish at the moment but the context is limiting (this is a me problem, not the model).
Let me know if you want to see some benchmarks/have me try some settings. | 2026-01-27T18:50:51 | benno_1237 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qomra4 | false | null | t3_1qomra4 | /r/LocalLLaMA/comments/1qomra4/some_initial_benchmarks_of_kimik25_on_4xb200/ | false | false | default | 23 | {'enabled': True, 'images': [{'id': 'zyvu6wcjsxfg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?width=108&crop=smart&auto=webp&s=fb18090a4413945ca99612c36f6e8d69b1b633db', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?width=216&crop=smart&auto=webp&s=b3f4a320745ed4f9b0a1204c0537d374f7711fa2', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?width=320&crop=smart&auto=webp&s=d2d43cf4e5d6d44350d005ac4d325ee58600a21f', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?width=640&crop=smart&auto=webp&s=6d94c746acf2b874a21b6387a83bb8da32100ae9', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?width=960&crop=smart&auto=webp&s=9bb550d77c5ccd972a7ad1a0db4a0ea88f0b5d93', 'width': 960}], 'source': {'height': 600, 'url': 'https://preview.redd.it/zyvu6wcjsxfg1.png?auto=webp&s=f546123752cb7496ea52fd585250c97e9f4cbc98', 'width': 1000}, 'variants': {}}]} | |
[LEAKED] Kimi K2.5’s full system prompt + tools (released <24h ago) | 238 | My first post on LLAMA…
Was messing around with Moonshot's new Kimi K2.5 and I think I pulled the whole system prompt + tools lol
Got hyped I grabbed this so fast cause usually someone posts this stuff way before I get to it
Repo: [https://github.com/dnnyngyen/kimi-k2.5-prompts-tools](https://github.com/dnnyngyen/kimi-k2.5-prompts-tools)
Contents:
\- full system prompt
\- all tool schemas + instructions
\- memory CRUD protocols
\- context engineering + assembling user profile
\- basic guardrails/rules
\- external datasource integrations (finance, arxiv, etc)
My og chat: [https://www.kimi.com/share/19c003f5-acb2-838b-8000-00006aa45d9b](https://www.kimi.com/share/19c003f5-acb2-838b-8000-00006aa45d9b) (never had a model fold this easily lmao)
Sharing it here first <3
Happy to be able to contribute sum to this community | 2026-01-27T18:44:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qoml1n/leaked_kimi_k25s_full_system_prompt_tools/ | Pretty_Mountain2714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoml1n | false | null | t3_1qoml1n | /r/LocalLLaMA/comments/1qoml1n/leaked_kimi_k25s_full_system_prompt_tools/ | false | false | self | 238 | {'enabled': False, 'images': [{'id': '8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=108&crop=smart&auto=webp&s=17dea48f1b2f03438ae96adcaccc23e51cdb2145', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=216&crop=smart&auto=webp&s=3cc32fd19150033f9c3db399fdb9c48244a44165', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=320&crop=smart&auto=webp&s=94a45ec463aaa862d7e4371fa9c6f117b25b4730', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=640&crop=smart&auto=webp&s=349b35c077c9bb00456e3cb82d4818ca34a12174', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=960&crop=smart&auto=webp&s=59988a1cb757ca6c7d559608cdc0a0c1f8484579', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?width=1080&crop=smart&auto=webp&s=6f74f7e1daab1557ef62bfe7cbeb4f9bb6b1c886', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8VNINSEdOAxPGNw0wnZ4nGcsP_UAVhV7FLCh8LS9LsI.png?auto=webp&s=eca238061e90b1836d8ac2ad9eb42729b4591e9e', 'width': 1200}, 'variants': {}}]} |
Seeking advice on which VLMs to run on mobile for Android/iOS | 1 | I'm a beginner to running VLMs locally on mobile. I'm hoping to get some input from experts here on which VLMs have worked best for them and the tradeoffs among those models.
I'm planning to use the VLM to analyze some short videos and generate descriptions. I also want to fine-tune the model externally and maybe quantize it before loading it onto mobile. High-level metrics that immediately come to mind are reasoning capability, inference throughput, memory footprint, and ease of use with Android/iOS.
Thanks in advance!
| 2026-01-27T18:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qomkpg/seeking_advice_on_which_vlms_to_run_on_mobile_for/ | dokabo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qomkpg | false | null | t3_1qomkpg | /r/LocalLLaMA/comments/1qomkpg/seeking_advice_on_which_vlms_to_run_on_mobile_for/ | false | false | self | 1 | null |
how do you actually setup local claude? | 2 | I am trying to test the local claude with ollama but doing the basic stuff with it fails after claude tries to create a task lits and stops.
\`\`\`
● I'll help you with your request. Since you mentioned "No tasks found", I'll start by creating a task list and then we can proceed with whatever you'd like to
do.
<function=TaskList>
\`\`\`
Anyone knows what's going on? | 2026-01-27T18:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qomghj/how_do_you_actually_setup_local_claude/ | somnamboola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qomghj | false | null | t3_1qomghj | /r/LocalLLaMA/comments/1qomghj/how_do_you_actually_setup_local_claude/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=108&crop=smart&auto=webp&s=91dfb36a62811290f28804b71b205b1447714a1a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=216&crop=smart&auto=webp&s=03be64794cf23f041e3d8baf71736bfad54ae77b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=320&crop=smart&auto=webp&s=f9c5bc5074bd463d17fcdd1969c0ccdccead47a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=640&crop=smart&auto=webp&s=64a796f4a572e5c20ae6e68cf532378bd6d9235d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=960&crop=smart&auto=webp&s=99ff9f77fbf468bebaa261e1a01cbc67c54638ea', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?width=1080&crop=smart&auto=webp&s=bd2f8c096a4e15e81f1bd36d05a4760d2c051a3c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/9POwghSFj9xX0xqnhjZIb9JQTkbFiyygs4FMRP8E3a8.png?auto=webp&s=39bcb19f412bbf1ca3b85e4a5f741406596cddbe', 'width': 1200}, 'variants': {}}]} |
Prompt Injection: The SQL Injection of AI + How to Defend | 0 | 2026-01-27T18:24:12 | https://lukasniessen.medium.com/prompt-injection-the-sql-injection-of-ai-how-to-defend-2a28c6f3bc05 | trolleid | lukasniessen.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1qolzl2 | false | null | t3_1qolzl2 | /r/LocalLLaMA/comments/1qolzl2/prompt_injection_the_sql_injection_of_ai_how_to/ | false | false | default | 0 | null | |
Clean Mediocrity: Why I miss the ”epistemic struggle“of human thought in the age of LLMs. | 0 | I use ChatGPT frequently to assist with my learning. At first, it was an exhilarating experience: when I threw it a vague question, this neural network—which has read almost text written by humanity—would produce an impeccably structured answer without specific prompt . For someone who didn't know where to start, this efficiency was nothing short of a lifesaver. But recently, that excitement has faded, replaced by a specific, hard-to-describe sense of weariness.
Intuitively, I just feel that the text is "too clean.” I must admit, this "cleanliness" represents a massive statistical improvement. Most raw human writing possesses no such "aesthetic of thought"; it is usually a mix of pure chaos, logical fallacies, and inarticulate fluff. By comparison, the "Clean Mediocrity" provided by AI is actually superior to the expression of most untrained humans.
But that is precisely the problem: it is too safe, too much like a standard model answer.
Initially, I attributed this dissonance to "excessive structure," such as the screen-filling bullet points. But I later realized the issue wasn't the format itself, but that the format was masking the "transitions" that actual thinking should have provided.
From a technical standpoint, the core mechanism of an LLM is "Attention," which mathematically involves weights. Yet, why does the output feel so "equally weighted," so pedestrian? This is likely not the nature of AI, but the result of conditioning by human aesthetics. During the RLHF (Reinforcement Learning from Human Feedback) phase, human labelers tend to give high scores to answers that "look organized and clear like a PPT." Consequently, the AI learned to cater to this aesthetic, bulldozing what might have been a rugged terrain of thought into a frictionless plain.
This "manufactured smoothness" brings a side effect: excessive continuity. The typical AI pattern is: Conclusion → Analogy (e.g., "Embedding is a coordinate") → Strategic Implication. In this process, it uses the "intuitive rush" provided by the analogy to replace rigorous "logical argumentation."
Upon self-reflection, I may have misplaced the tool's purpose here—the function of an analogy is to build Intuition, not to provide Proof. If I needed a rigorous mathematical derivation, I should have explicitly asked for it. But in its default mode, this "smoothness" creates a deception: the reader is emotionally persuaded, but rationally, the gap where "A necessarily leads to B" has been quietly filled by a pretty metaphor. When a normal person writes, they would pause, get defensive, or self-doubt at these logical leaps; the AI, however, chooses to slide past them with confidence.
This leads to a deeper issue: Perspective. ChatGPT almost always operates from a "Terminal Perspective." There is no hesitation, no looking back. It is as if it's saying: "This is the world; let me reveal it to you." Of course, comparing this experience to technical bloggers of the caliber of Karpathy or gwern is a case of Survivorship Bias. I am comparing AI's "average" against the top 0.01% of humanity. But this unfair comparison reveals a high-level demand in human reading: What we crave to see is not just a "finished knowledge structure," but a "forming cognitive trajectory." AI presents a completed building—no scaffolding, no debris. The reason articles by people like Karpathy are so mesmerizing is that they keep the "scaffolding." You can see where they took a detour, where they overturned their own ideas, and where they kept a paragraph of thinking that later proved useless.
Those "wasted efforts," those imperfect metaphors, that cognitive friction—while redundant from an efficiency-first perspective—are the unforgeable watermarks of humanity. AI excels at delivering the Product, while humans, even top engineers, are forced to expose the Process in their struggle.
Perhaps the weariness I feel doesn't stem from the AI not doing a good enough job, but from it doing too good a job—so good that it strips away the pain and hesitation inherent in the quest for knowledge, which are exactly the basis on which we confirm what is "real."
I’m curious whether others here have felt a similar “epistemic flattening” when using LLMs—not as a failure, but as a side effect of how we train and reward them. | 2026-01-27T18:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qolqne/clean_mediocrity_why_i_miss_the_epistemic/ | Playful-Housing-5955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qolqne | false | null | t3_1qolqne | /r/LocalLLaMA/comments/1qolqne/clean_mediocrity_why_i_miss_the_epistemic/ | false | false | self | 0 | null |
[Preliminary] New subquadratic attention: ~20k tok/s prefill / ~100 tok/s decode @ 1M context (single GPU) | 49 | Hi everyone,
Wanted to share some preliminary feasibility results from my work on a new attention mechanism (with custom kernels) on NVIDIA Nemotron Nano v3 30B. I am now able to run 1M context on a single GPU with this setup, and the early throughput numbers look promising.
TL;DR: 30B model + 1M context on a single GPU, with a jump-search-style attention mechanism. (Manuscript link: [https://arxiv.org/abs/2601.18401](https://arxiv.org/abs/2601.18401))
Numbers (single batch/sequence; single GPU: NVIDIA B200, similar results on RTX PRO 6000 Blackwell):
\- **\~20,000 tok/s** prefill
\- **\~100 tok/s** decode at **1M** context
\- **66 GB** GPU memory (6GB KV cache + 60GB FP16 model)
\- perfect NIAH (needle in a haystack) at 256K context (limited training so far)
I have completed an initial feasibility study, and I'm continuing to train the model toward real production use. The plan is to fully open-source the model for local inference, with a target of running a fully filled 1M context for a 30B model locally on \~24GB GPU memory. I'm cleaning up the codebase and plan to release the kernel implementations soon. For the model itself, I'll share it once we feel good about long-context performance/quality.
(Just to be clear: these are early numbers, and quality/evals are still in progress.)
1) What’s the main idea
You can think about the transformer attention mechanism as a search algorithm to find the relevant information to predict the next token. Standard attention is basically O(L) brute-force search. We’re doing an O(L\^0.5) jump-search-style approach instead. For example, if you 10x the context length, a sqrt(L) search budget only grows by \~3.2x.
That subquadratic scaling really matters for long context, since the cost still grows with L. The main innovation is keeping that scaling while still making sure every token is reachable (i.e., not a fixed sliding window; think ‘**global random access**’). Most likely, a large fraction of long-context computation is wasted on brute-force scanning, and if we are smart about it, we can do that computation much more efficiently.
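To make the scaling concrete, here is the budget arithmetic (nothing model-specific, just the square-root relation described above):

```
import math

for L in (128_000, 1_000_000, 10_000_000):
    full = L               # O(L) brute-force search budget per token
    jump = math.isqrt(L)   # O(sqrt(L)) jump-search budget per token
    print(f"L={L:>10,}  full={full:>10,}  sqrt={jump:>6,}  ratio={full / jump:,.0f}x")

# 10x more context costs only ~3.2x more search budget:
print(math.sqrt(1_000_000) / math.sqrt(100_000))   # ~3.16
```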
2) What's the goal
Targeting high-quality and fast (\~100 tok/s) open-source local models at long context:
\- 1M context on a 24GB GPU: \~6GB KV cache + \~15GB 4-bit quantized model
\- 10M context on a 96GB GPU: \~60GB KV cache + \~30GB 8-bit quantized model
Our initial feasibility results suggest we’re already in the right ballpark on inference speed. The main work now is scaling training and doing broader quality evals on real long-context tasks. I’m sure we’ll hit obstacles as we scale up, but overall we feel this direction is achievable.
3) Questions/feedback
I’m a big fan of running models locally (work + teaching + personal projects). Before COVID I bought 4× 1070 Ti GPUs for some non-LLM stuff, and these days I mostly use an A6000 at home. I’m excited about this because it could make really long-context workflows practical without needing a cluster.
Would love feedback / sanity checks on a few things:
1. What would you actually use 1M–10M context for locally? (offline search over docs, codebase-scale assistants, long-form editing, “personal knowledge base”, etc.)
2. What evals would you trust most for long-context quality (beyond simple needle-in-a-haystack)?
3. What baselines should I compare against to make the speed/quality tradeoffs clear?
4. What would make an open-source release most useful to you (kernels only vs full inference stack vs training code/configs)?
I kept this post high-level, but happy to go deeper if there’s interest. | 2026-01-27T17:54:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qol3s5/preliminary_new_subquadratic_attention_20k_toks/ | Sad-Size2723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qol3s5 | false | null | t3_1qol3s5 | /r/LocalLLaMA/comments/1qol3s5/preliminary_new_subquadratic_attention_20k_toks/ | false | false | self | 49 | null |
I built a layer around Claude Code to expose reasoning + learning + docs. Curious if devs actually want this. | 0 | I’ve been using Claude Code for months and at some point I hit a weird limitation:
the model is powerful, but the interface hides too much.
It writes code but doesn’t explain decisions.
It changes files but doesn’t document anything.
It breaks things but doesn’t show what happened.
And it doesn’t teach you while working.
So I ended up building a “missing layer” around it:
— AET (Agent Execution Timeline) to show the chain of reasoning
— Learning Mode to explain changes
— Documentation Mode to produce docs on the fly
— Git checkpoints for safety
— MCP + Terminal integration
My question to redditors (especially devs using AI tools):
Does “visibility + learning + docs + safety” actually matter to you?
Or is everyone optimizing purely for speed like Cursor/Windsurf/Copilot?
Genuine curiosity — not marketing.
I still don’t know if this is a niche or a missing category.
Link if anyone wants to actually try it: https://codeonai.net | 2026-01-27T17:49:23 | https://v.redd.it/z6wx9f2hjxfg1 | itech2030 | /r/LocalLLaMA/comments/1qokyd3/i_built_a_layer_around_claude_code_to_expose/ | 1970-01-01T00:00:00 | 0 | {} | 1qokyd3 | false | null | t3_1qokyd3 | /r/LocalLLaMA/comments/1qokyd3/i_built_a_layer_around_claude_code_to_expose/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=108&crop=smart&format=pjpg&auto=webp&s=d57035f10b879448f13c566981733c2fabc3d954', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=216&crop=smart&format=pjpg&auto=webp&s=fa62796ffc08954fe9732758f6fd0b2b5f5c4848', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=320&crop=smart&format=pjpg&auto=webp&s=911dbf993d1006280aba6cb2bc78c453a2bea0ff', 'width': 320}, {'height': 374, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=640&crop=smart&format=pjpg&auto=webp&s=327b5efa9f986830e774502ae82985e44d54a9df', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=960&crop=smart&format=pjpg&auto=webp&s=25fa1ac9b18fb82eaf6f6dc5ae9f2c452ba8f0e5', 'width': 960}, {'height': 632, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=21c98a3ce4e112c199b91f9ce11431f0c366a189', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Zmc3OTdueGdqeGZnMcYjUzXuKr7yCmSLQgeYGTKFnnn_x5YCGkk-uv_Lu8X-.png?format=pjpg&auto=webp&s=15f53b910de9cab17213940c623e38589b59b98d', 'width': 1844}, 'variants': {}}]} | |
Speed up (2-3x) prompt processing (prefill) in LM Studio on Apple Silicon | 1 | [removed] | 2026-01-27T17:47:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qokwqs/speed_up_23x_prompt_processing_prefill_in_lm/ | Thick-Letterhead-315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qokwqs | false | null | t3_1qokwqs | /r/LocalLLaMA/comments/1qokwqs/speed_up_23x_prompt_processing_prefill_in_lm/ | false | false | self | 1 | null |
Just a question | 5 | Today is 2026. I'm just wondering, is there any open source model out there that is as good or better than Claude 3.5 at least out there? I'd love to run a capable coding assistant locally if possible. I'm a web dev btw. | 2026-01-27T17:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qoktn4/just_a_question/ | Temporary-Cookie838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoktn4 | false | null | t3_1qoktn4 | /r/LocalLLaMA/comments/1qoktn4/just_a_question/ | false | false | self | 5 | null |
LingBot-Depth: Depth completion model that actually handles glass and mirrors, 3M dataset released | 7 | If you run local robotics projects with consumer RGB-D cameras, you know the pain. RealSense or Orbbec depth maps turn into swiss cheese whenever there's glass, mirrors, or shiny surfaces. Transparent cup? Your depth map says it doesn't exist.
Ant Group released "Masked Depth Modeling for Spatial Perception" with a simple but clever idea: instead of treating sensor depth holes as failures to discard, use them as natural masks for self-supervised learning. The missing regions from real sensors force the model to learn actual RGB to depth reasoning, similar to how MAE pretraining works for images.
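A rough sketch of one way to set up MAE-style masked depth completion (my reading of the idea from the description above, not the paper's actual objective; the real model is a ViT conditioned on RGB rather than this pixel-level toy):

```
import torch

def masked_depth_loss(model, rgb, sensor_depth, gt_depth):
    # sensor holes (zeros) act as the "natural" mask: the model never sees depth there
    hole_mask = (sensor_depth == 0)

    # additionally hide some valid pixels so there is ground truth to supervise on
    rand_mask = (torch.rand_like(sensor_depth) < 0.25) & ~hole_mask
    masked_depth = sensor_depth.clone()
    masked_depth[rand_mask] = 0

    pred = model(rgb, masked_depth)  # predict dense depth from RGB + partial depth

    # supervise on held-out valid pixels (and, on synthetic data, inside the holes too)
    supervise = rand_mask | (hole_mask & (gt_depth > 0))
    return torch.nn.functional.l1_loss(pred[supervise], gt_depth[supervise])

# toy usage with a dummy "model"
rgb, d, gt = torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
d[d < 0.2] = 0  # fake sensor holes
print(masked_depth_loss(lambda r, dep: torch.rand(1, 1, 64, 64), rgb, d, gt))
```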
The practical results are solid. On depth completion benchmarks, it beats PromptDA and PriorDA by 40%+ RMSE reduction in hard cases. More interesting: robot grasping on transparent objects went from literally impossible with raw sensor depth to 50% success rate on a transparent storage box. Standard objects like steel cups improved from 65% to 85%.
The released dataset is arguably more valuable than the model itself. Most RGB-D datasets either avoid challenging scenes or use perfect rendered depth. This one has 2M real captures and 1M synthetic samples that deliberately preserve realistic sensor noise patterns, covering indoor scenes from glass lobbies to aquarium tunnels.
Model is ViT-Large initialized from DINOv2, trained for 250k iterations on 128 GPUs. Works with RealSense, Orbbec Gemini, and ZED cameras out of the box.
Huggingface: [https://huggingface.co/robbyant/lingbot-depth](https://huggingface.co/robbyant/lingbot-depth) | 2026-01-27T17:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qokfwl/lingbotdepth_depth_completion_model_that_actually/ | rwhitman05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qokfwl | false | null | t3_1qokfwl | /r/LocalLLaMA/comments/1qokfwl/lingbotdepth_depth_completion_model_that_actually/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=108&crop=smart&auto=webp&s=7ff4d7e982e4882a3c16ecd83fedfadd7fede2fb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=216&crop=smart&auto=webp&s=4f21594d67490f2947152f39994c940a3c33b16c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=320&crop=smart&auto=webp&s=2d00e9c367a0a8cb488c0645100c66fe8305fd98', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=640&crop=smart&auto=webp&s=c789e8b1e7ad460249c81416bdcdf2a40d5ce000', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=960&crop=smart&auto=webp&s=877a7a438957ad9c6aaba8a2ec0ccd8c65510239', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?width=1080&crop=smart&auto=webp&s=3bfadbf7040590ad44b736e87a6bd58a20f34d4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5j0dojglIYh8-4xoAYtUy0RYFYPJKUgpn2xoqjRkCL0.png?auto=webp&s=514b18b71cb1e8c065deef3be299ee5ce7e1b768', 'width': 1200}, 'variants': {}}]} |
Indeed-auto Apply bot I'm making. | 0 | 2026-01-27T17:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qokbf9/indeedauto_apply_bot_im_making/ | PhotographerUSA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qokbf9 | false | null | t3_1qokbf9 | /r/LocalLLaMA/comments/1qokbf9/indeedauto_apply_bot_im_making/ | false | false | 0 | null | ||
ClawdBot: Setup Guide + How to NOT Get Hacked | 0 | 2026-01-27T17:26:10 | https://lukasniessen.medium.com/clawdbot-setup-guide-how-to-not-get-hacked-63bc951cbd90 | trolleid | lukasniessen.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1qokaf0 | false | null | t3_1qokaf0 | /r/LocalLLaMA/comments/1qokaf0/clawdbot_setup_guide_how_to_not_get_hacked/ | false | false | default | 0 | null | |
Shared Dev Server Questions | 2 | Not sure if this is the best place but I have a machine (GMKTech Strix Halo) I'm looking to use for local AI testing, learning, etc that I want to share with another person in my family. There is no concerns around seeing what the other person is working on but I want to make sure we can make use of the resources. To that end, I was looking for some guidance, namely:
1. Should this be a baremetal install of a Linux OS or VMs on a hypervisor like Proxmox or XCP-NG?
2. Which Linux distro is everyone using? Was just going to use Ubuntu but wanted to get everyone else's thoughts.
3. Does it make sense just to create two different users and just make sure anything hosted in containers is shared? If so, how?
Thanks in advance for everyone's help! | 2026-01-27T17:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qok7jx/shared_dev_server_questions/ | underscore_3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qok7jx | false | null | t3_1qok7jx | /r/LocalLLaMA/comments/1qok7jx/shared_dev_server_questions/ | false | false | self | 2 | null |
Moltbot is overrated | 0 | 2026-01-27T17:17:20 | https://v.redd.it/qax6uqfhdxfg1 | Admirable-Choice9727 | /r/LocalLLaMA/comments/1qok113/moltbot_is_overrated/ | 1970-01-01T00:00:00 | 0 | {} | 1qok113 | false | null | t3_1qok113 | /r/LocalLLaMA/comments/1qok113/moltbot_is_overrated/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZTd0eXF5ZmhkeGZnMWl14EZNuGi7M-xpupdQiwbfS6J8-uefOLvilOAUR7zM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ZTd0eXF5ZmhkeGZnMWl14EZNuGi7M-xpupdQiwbfS6J8-uefOLvilOAUR7zM.png?width=108&crop=smart&format=pjpg&auto=webp&s=3af2ed08e90da24206321d983c116f6abcf2608f', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ZTd0eXF5ZmhkeGZnMWl14EZNuGi7M-xpupdQiwbfS6J8-uefOLvilOAUR7zM.png?width=216&crop=smart&format=pjpg&auto=webp&s=3d581531368d1211cf52cea69a8bda1df7a0df0e', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ZTd0eXF5ZmhkeGZnMWl14EZNuGi7M-xpupdQiwbfS6J8-uefOLvilOAUR7zM.png?width=320&crop=smart&format=pjpg&auto=webp&s=026af2e223250895bb59e7c07bcfce70c2e15f2b', 'width': 320}], 'source': {'height': 718, 'url': 'https://external-preview.redd.it/ZTd0eXF5ZmhkeGZnMWl14EZNuGi7M-xpupdQiwbfS6J8-uefOLvilOAUR7zM.png?format=pjpg&auto=webp&s=51a794f72d83bb3e46fcd4d8b07c3ab2ce512e37', 'width': 320}, 'variants': {}}]} | ||
I'm finetuning a 270M parameter model for sensitive information obfuscation, and I need some help. | 7 | Hey everyone. My boss challenged me to develop a solution to obfuscate sensitive data from our internal documents. That gave the idea to fine tune a model small enough to run on any desktop, because we don't have a powerful infrastructure.
So I chose to fine-tune gemma3-270M with Unsloth for this task and created a 1,700-example dataset in Brazilian Portuguese, covering various sensitive information categories (PII, financial, PHI, contact information, IP addresses, API keys...). The obfuscation task is simple: the model receives a text, processes it, and returns the same text with the sensitive data obfuscated using tags.
This is a JSONL example from my dataset.
{"messages":[
{"role":"system","content":"[TASK: ANON_PURE] You are a specialist in data privacy and anonymization. \n\nYour objective is to redact all sensitive information (PII, PHI, Financial, etc.) from the text provided by the user.\n\n### Rules:\n1. **Identification**: Detect all sensitive entities (Names, Documents, Locations, etc.).\n2. **Tagging**: Replace each entity with a specific tag (e.g., [PERSON_NAME], [DOC_CPF]).\n3. **Sequential Numbering**: Append a sequential index to each unique entity starting from _01 (e.g., [PERSON_NAME_01]).\n4. **Formatting**: Always wrap the tags in bold markdown: **[TAG_NAME_01]**.\n5. **Text Integrity**: Keep the original structure, line breaks, and non-sensitive text completely unchanged."},
{"role":"user","content":"O documento do cliente é o 12345678900."},
{"role":"assistant","content":"O documento do cliente é o [DOC_CPF_01]."}
]}
So I'm experiencing some difficulties with the fine-tuned model, after 3 epochs with a learning rate of 2e-5 configured on the SFTTrainer object from Unsloth:
1 - Sometimes it doesn't identify the sensitive information in the text
2 - It creates anonymization tags that are not present in the training data
3 - The model response is not just the obfuscated input text; it comes with commentary or other content.
4 - The model response is the same as the input text; no obfuscation is applied.
In a quality test I made comparing the model output with the desired output format, the success rate is just 11%. What am I doing wrong? Is a 1,700-example dataset too small? Is the dataset in the wrong format? Should the system message vary across the examples?
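For reference, a simplified version of the kind of format check I mean (illustrative only; the tag vocabulary comes from the dataset and the real script handles more edge cases):

```
import re

TAG = re.compile(r"\*\*\[([A-Z_]+)_(\d{2})\]\*\*")

def output_ok(model_output: str, expected: str) -> bool:
    # 1) every tag must follow the **[CATEGORY_NN]** convention
    stripped = TAG.sub("", model_output)
    if "[" in stripped or "]" in stripped:
        return False  # leftover malformed or invented tags
    # 2) the non-tag text must match the expected output (no extra commentary)
    return TAG.sub("<T>", model_output).strip() == TAG.sub("<T>", expected).strip()

print(output_ok("O documento do cliente é o **[DOC_CPF_01]**.",
                "O documento do cliente é o **[DOC_CPF_01]**."))  # True
```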
This is my first attempt to create something with AI that is not just "use a chat" but something real. My plan is to make this project open source to help other people.
Can someone point me to a course, YouTube playlist, book, anything where I can learn to create datasets and fine-tune a model the right way?
Thank you in advance. | 2026-01-27T16:59:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qojhw3/im_finetuning_a_270m_parameter_model_for/ | CalvaoDaMassa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qojhw3 | false | null | t3_1qojhw3 | /r/LocalLLaMA/comments/1qojhw3/im_finetuning_a_270m_parameter_model_for/ | false | false | self | 7 | null |
The "Dynamic Loading" in Transformers v5 isn't what you think it is (Benchmarks inside) | 0 | saw the v5 release notes yesterday promising "faster dynamic weight loading" and got excited that we finally solved the cold-start problem.
I ran some benchmarks, and here is the bad news: It’s not for Serverless.
The Bottleneck:
Transformers v5 optimizes "Lazy Loading" (loading experts only when needed during a forward pass). This is awesome for running Mixtral on consumer hardware, but it assumes your Python process is already alive.
If you are trying to do "Scale-to-Zero" (Serverless), you still hit the massive penalty of initializing CUDA and loading torch from scratch.
The Experiment:
I tried to see if I could beat the v5 cold-start time by checkpointing the GPU memory after CUDA init and hot-swapping weights from NVMe.
Standard Transformers (v5): \~38s (Cold Boot + Import + Load)
CUDA Context checkpoint (Custom): \~2s (Restoring the memory state directly)
Takeaway: v5 is a huge win for throughput (making the car drive faster), but it doesn't fix the ignition (starting the engine).
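For anyone who wants to reproduce the phase split, a minimal harness looks roughly like this (simplified; the model id is just a stand-in and my actual script is messier):

```
import time

t0 = time.perf_counter()
import torch                                    # python + torch import cost
t1 = time.perf_counter()

torch.zeros(1, device="cuda")                   # forces CUDA context init
t2 = time.perf_counter()

from transformers import AutoModelForCausalLM   # model load (weights from disk/cache)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",              # stand-in model id
    device_map="auto",
)
t3 = time.perf_counter()

print(f"import torch : {t1 - t0:5.1f}s")
print(f"cuda init    : {t2 - t1:5.1f}s")
print(f"model load   : {t3 - t2:5.1f}s")
```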
Has anyone else managed to get torch.load under 5 seconds without doing this "checkpoint" hack? The CUDA init time seems to be the hard floor we can't break through. | 2026-01-27T16:30:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qoinc3/the_dynamic_loading_in_transformers_v5_isnt_what/ | MLExpert000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoinc3 | false | null | t3_1qoinc3 | /r/LocalLLaMA/comments/1qoinc3/the_dynamic_loading_in_transformers_v5_isnt_what/ | false | false | self | 0 | null |
Drummer's Rocinante X 12B v1 - It's back and it's stronger than ever! A funtastic creative Claude-like RP model at home! | 87 | 2026-01-27T16:27:12 | https://huggingface.co/TheDrummer/Rocinante-X-12B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qoikji | false | null | t3_1qoikji | /r/LocalLLaMA/comments/1qoikji/drummers_rocinante_x_12b_v1_its_back_and_its/ | false | false | 87 | {'enabled': False, 'images': [{'id': 'vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=108&crop=smart&auto=webp&s=95ca14150b95b81afa4b4d63faee3d7877887bb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=216&crop=smart&auto=webp&s=96bef95f04834b36e3b7408a10f7186c5fc0d46f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=320&crop=smart&auto=webp&s=46884dae0be617d798968a4578824d0dac595720', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=640&crop=smart&auto=webp&s=8ad8c88d8902c9e03aa9ee1c38b6f2620bca2b91', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=960&crop=smart&auto=webp&s=f5471815a023a771a0c52e61547c728953624679', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?width=1080&crop=smart&auto=webp&s=d56b7d70fc8f028fda8673b03c696711ce2e1599', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vMt_thwlfOcsDVODCeD_E13hrou5XcQcrAzk35oTLtU.png?auto=webp&s=0fa8be05a2e8a0bbcc7758cfa0698eaa8e488641', 'width': 1200}, 'variants': {}}]} | ||
The z-image base is here! | 237 | https://huggingface.co/Tongyi-MAI/Z-Image | 2026-01-27T16:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qoiep6/the_zimage_base_is_here/ | bobeeeeeeeee8964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoiep6 | false | null | t3_1qoiep6 | /r/LocalLLaMA/comments/1qoiep6/the_zimage_base_is_here/ | false | false | self | 237 | {'enabled': False, 'images': [{'id': 'yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=108&crop=smart&auto=webp&s=a4909707bfbd473a08c678c35f97030dc1f7060a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=216&crop=smart&auto=webp&s=3e51d8dd4a3d9291f32805ccf5805b01d2166872', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=320&crop=smart&auto=webp&s=661b242aff31b7458d4590d016dadb6fdfe84202', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=640&crop=smart&auto=webp&s=df46728e7d7d026134d01df7f02f3e413e745ad1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=960&crop=smart&auto=webp&s=29d352f484dd18d4b887971c4b1a5f6488bf38a3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?width=1080&crop=smart&auto=webp&s=4380a78621be20fe4ff3a4768eb839694c601e6c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yzw-luI9wOMm3qSoJBL8ESoLW0Oo97w_8Q8lOtzUot8.png?auto=webp&s=9ceb2ace6cb469194746a896faaf29856d31624a', 'width': 1200}, 'variants': {}}]} |
Giving a local LLM my family's context -- couple of months in | 5 | The richest context isn't in files or documents. It's in the everyday chat between my wife and me about weekend plans. The grocery list that turned into a conversation about the kids. The photo with a joke only we get. Decisions scattered across months of small conversations.
That's where families actually live. And no cloud AI has access to it — nor would I give it to them.
So I gave it to a local LLM instead.
**The setup:**
**Llama 3.2 (Ollama)** on an **Intel N100**, connected to:
* **Matrix**: where my family actually chats (E2EE, our server)
* **Immich**: our photos, face recognition running locally
* A memory store in **PostgreSQL**
I also built a zero-touch installer: run one script, open a web wizard, done. I wanted this accessible to families who aren't going to edit YAML files or get a postgraduate degree in Linux.
**Where it's at today:**
Right now it responds to commands: `/remember`, `/recall`, `/addtolist`, `/summarize`. Useful but basic.
The vision is different. I want it to *live with us* \-- forming memories from our conversations, making connections we'd miss, understanding context without being asked.
"When did we last service the boiler?" --> it should know, because we talked about it.
"What was that place we loved in Bath?" --> mentioned once, eight months ago, in a chat that's long scrolled away.
**What I'm wrestling with:**
* **Model choice:** Llama 3.2 3B fits my RAM. Better small models for retrieval and context-building?
* **From commands to ambient:** How do I move from `/remember X` to the LLM forming memories from natural conversation? (rough sketch of my current direction after this list)
* **Long-term context:** Family context grows over years. RAG? Summarisation? What architectures handle this?
* **Anyone else building this way?** Not chatbots -- local AI that accumulates the texture of daily life.
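For concreteness, the rough direction I'm exploring for the ambient part (sketch only: the prompt, table name, and schema are illustrative, and I'm calling Ollama's plain REST API):

```
import json, requests, psycopg2

def extract_memories(message: str) -> list[str]:
    prompt = ("Extract lasting facts worth remembering from this family chat message "
              "as a JSON list of short strings. Message: " + message)
    r = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })
    try:
        return json.loads(r.json()["message"]["content"])
    except (ValueError, KeyError):
        return []

def store(facts, sender):
    conn = psycopg2.connect("dbname=memu")   # illustrative DSN and table
    with conn, conn.cursor() as cur:
        for fact in facts:
            cur.execute("INSERT INTO memories (fact, source) VALUES (%s, %s)",
                        (fact, sender))

store(extract_memories("Boiler got serviced today; next check is October."), "me")
```

The hard parts are everything around this: when to run it, how to dedupe, and how to consolidate or decay old facts over the years.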
**Current state:**
Early. Alpha. My family uses it daily, and I'm expanding the hardware for cross-silo LLM usage. I'm a systems architect, not a developer -- so this is AI-assisted development.
It's open source (AGPLv3). If this resonates, I'd genuinely love people to try it, break it, tell me what's wrong. The installer takes about 10 minutes on an N100 or Pi 5.
[**https://github.com/kanchanepally/memu.digital**](https://github.com/kanchanepally/memu.digital)
A couple of screenshots if you want to see what it looks like:
[Installer completing setup](https://preview.redd.it/teo14bhk2xfg1.png?width=557&format=png&auto=webp&s=573f33ec1523eb7ec6ed9d22f4362f6d134e12e7) | 2026-01-27T16:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qoid2f/giving_a_local_llm_my_familys_context_couple_of/ | Purple_Click5825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoid2f | false | null | t3_1qoid2f | /r/LocalLLaMA/comments/1qoid2f/giving_a_local_llm_my_familys_context_couple_of/ | false | false | self | 5 | null |
https://huggingface.co/Tongyi-MAI/Z-Image | 1 | [deleted] | 2026-01-27T16:16:07 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qoi9a7 | false | null | t3_1qoi9a7 | /r/LocalLLaMA/comments/1qoi9a7/httpshuggingfacecotongyimaizimage/ | false | false | default | 1 | null | ||
Hugging Face Unveils Faster AI Agents: 20x Speed Boost | 0 | WOW
Hugging Face released AI agents that process code 20x faster than GPT-4, redefining real-time automation.
Hugging Face has launched AI agents capable of processing code 20 times faster than GPT-4. The announcement, detailed in its blog post, positions these agents as tools for real-time software development and automation tasks. | 2026-01-27T16:10:50 | https://oneeko.store/news/hugging-face-unveils-faster-ai-agents-20x-speed-boost | IntelligentCause2043 | oneeko.store | 1970-01-01T00:00:00 | 0 | {} | 1qoi3yn | false | null | t3_1qoi3yn | /r/LocalLLaMA/comments/1qoi3yn/hugging_face_unveils_faster_ai_agents_20x_speed/ | false | false | default | 0 | null |
SERA 8B/32B | 56 | 2026-01-27T16:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qoi1jc/sera_8b32b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoi1jc | false | null | t3_1qoi1jc | /r/LocalLLaMA/comments/1qoi1jc/sera_8b32b/ | false | false | 56 | null | ||
I tried to hand-roll observability for local LLM inference… then realized OpenTelemetry solves the “parent span / timestamps / threads” mess | 2 | I’ve been wiring multiple LLM stacks into our observability platform this month: Vercel AI SDK, Haystack, LiteLLM, and local inference (the LocalLLaMA-ish runtime side is where it got painful fast).
I started with the simple mindset: “I’ll just add timestamps, manually create parent span + child spans, and call it tracing.”
Then I asked our CTO a genuinely dumb question:
>
That question is dumb because OpenTelemetry is literally designed so you don’t need to do that. If you instrument correctly, span lifecycle + parent/child relationships come from context propagation, not from you deciding when to ‘send’ a parent span. And manually computing timings gets fragile the second you introduce concurrency.
What I learned that actually matters (hardcore bits)
**1) Traces aren’t logs with timestamps**
A trace is a tree of spans. A span includes:
* start/end time
* attributes (structured key/value)
* events (timestamped breadcrumbs)
* status (OK/ERROR)
The big win is structure + propagation, not timestamps.
**2) Local inference wants “phase spans,” not one giant blob**
A clean model for local runtimes looks like:
* `llm.request` (root)
* `llm.tokenize`
* `llm.prefill` (TTFT lives here)
* `llm.decode` (throughput lives here)
* `llm.stream_write` (optional)
* `tool.*` (if you’re doing tools/agents locally)
Then attach attributes like these (minimal sketch right after the list):
* `llm.model`
* `llm.tokens.prompt`, `llm.tokens.completion`, `llm.tokens.total`
* `llm.streaming=true`
* runtime attrs you actually care about: `queue.wait_ms`, `batch.size`, `device=gpu/cpu`, etc.
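Putting 2) together, a minimal sketch with the OTel Python SDK (span and attribute names from above; the runtime values are stand-ins and the exporter is just the console one):

```
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("local-llm")

def tokenize(prompt):  # stand-in for the runtime's tokenizer
    return prompt.split()

def generate(prompt: str) -> str:
    with tracer.start_as_current_span("llm.request") as req:
        req.set_attribute("llm.model", "llama-3.1-8b-q4")  # stand-in values
        req.set_attribute("llm.streaming", True)

        with tracer.start_as_current_span("llm.tokenize"):
            tokens = tokenize(prompt)
        req.set_attribute("llm.tokens.prompt", len(tokens))

        with tracer.start_as_current_span("llm.prefill") as prefill:
            prefill.set_attribute("batch.size", 1)   # TTFT lives in this span

        with tracer.start_as_current_span("llm.decode") as decode:
            completion = "stub output"               # throughput lives in this span
            decode.set_attribute("llm.tokens.completion", len(completion.split()))
        return completion

print(generate("hello from the homelab"))
```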
**3) Context propagation is the real “magic”**
Parent/child correctness across async/thread boundaries is the difference between “pretty logs” and real tracing. That’s why hand-rolling it breaks the moment you do background tasks, queues, or streaming callbacks.
**4) Sampling strategy is non-negotiable**
If you trace everything, volume explodes. For local inference, the only sane rules I've found (quick sketch after the list):
* keep 100% ERROR traces
* keep slow traces (high TTFT)
* keep expensive traces (huge prompt/outputs)
* sample the rest
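The keep/drop decision itself is trivial once the span data exists (thresholds here are made up):

```
import random

def keep_trace(status: str, ttft_ms: float, total_tokens: int,
               baseline_rate: float = 0.05) -> bool:
    if status == "ERROR":           # always keep failures
        return True
    if ttft_ms > 2_000:             # keep slow prefills
        return True
    if total_tokens > 8_000:        # keep expensive prompts/outputs
        return True
    return random.random() < baseline_rate  # sample the rest

print(keep_trace("OK", ttft_ms=350, total_tokens=900))
```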
The same tracing model works across all four:
* Vercel AI SDK: streaming + tools → spans/events/attributes
* Haystack: pipeline nodes → spans per component
* LiteLLM: gateway retries/fallbacks → child spans per provider call
* Local inference: runtime phases + batching/queue contention
Once you commit to OTel semantics, exporting becomes “just plumbing” (OTLP exporter/collector), instead of bespoke glue for each framework. | 2026-01-27T15:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qohhl4/i_tried_to_handroll_observability_for_local_llm/ | Main-Fisherman-2075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qohhl4 | false | null | t3_1qohhl4 | /r/LocalLLaMA/comments/1qohhl4/i_tried_to_handroll_observability_for_local_llm/ | false | false | self | 2 | null |
How do you detect silent behavior drift in LLM pipelines after small changes? | 1 | [removed] | 2026-01-27T15:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qohc24/how_do_you_detect_silent_behavior_drift_in_llm/ | ashutoshtr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qohc24 | false | null | t3_1qohc24 | /r/LocalLLaMA/comments/1qohc24/how_do_you_detect_silent_behavior_drift_in_llm/ | false | false | self | 1 | null |
Free, open-source MCP SDK from Gopher (sharing for those experimenting with the protocol) | 1 | Hey everyone,
Wanted to share a free, open-source MCP SDK that Gopher has released. Full disclosure: I'm sharing this because I think it's genuinely useful for the community, but I do have a connection to Gopher, so take that into account.
**What it is:**
* An SDK (not a managed service) for building MCP servers and clients
* Gives you direct access to MCP primitives
* Useful if you want to understand or customize how MCP works under the hood
**Who it might be useful for:**
* Developers who want hands-on control over their MCP implementation
* Anyone learning MCP internals (tool exposure, discovery, client-server calls)
* People testing custom MCP setups without vendor lock-in
**Repo:** [link](https://github.com/GopherSecurity/gopher-mcp)
Happy to answer questions if anyone wants to know more about how it works or what it's suited for. | 2026-01-27T15:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qogzlc/free_opensource_mcp_sdk_from_gopher_sharing_for/ | Ok_Message7136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qogzlc | false | null | t3_1qogzlc | /r/LocalLLaMA/comments/1qogzlc/free_opensource_mcp_sdk_from_gopher_sharing_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=108&crop=smart&auto=webp&s=7f4f1f43ff7cb853fb885f9da5e131b148273d8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=216&crop=smart&auto=webp&s=4df946f9b535724b97a52f0089247ae79e2ab6f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=320&crop=smart&auto=webp&s=fbe2b4b24f85673d58f5dedc50dcd07cbcbc4f96', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=640&crop=smart&auto=webp&s=ec73acc30fbd7a9cfd8027c397496fe81c7a4935', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=960&crop=smart&auto=webp&s=94dfd604d94b5e8300930cc524828775210ff046', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?width=1080&crop=smart&auto=webp&s=1895014eb13917ee646822b639b8031bcf2fdef1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DcJrgVKu3qu4nkSZ4FRPLKH7CEByLjT7d9_Pi-nmVME.png?auto=webp&s=44ac044ed4211a4c640f9250ad2274a78670f090', 'width': 1200}, 'variants': {}}]} |
I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted | 21 | I built an open-source, local-first Character Card Generator for SillyTavern character cards (JSON + PNG cards). It’s a Vue/Node web app that talks to your local LLM endpoint (KoboldCPP or OpenAI-compatible), and optionally your local image backend (ComfyUI / SDAPI).
**What it does**
* Generates ST fields with structured output (supports “fill missing fields” + regenerate selected fields)
* Field detail presets: Short / Detailed / Verbose + per-field overrides
* Timeouts + max token controls for long generations
* Multi-repo library (CardGen + external folders like SillyTavern) with copy/move + search/sort
Would love your feedback on the app.
Github Repo: [https://github.com/ewizza/ST-CardGen](https://github.com/ewizza/ST-CardGen)
Background thread in r/SillyTavernAI: [https://www.reddit.com/r/SillyTavernAI/comments/1qhe1a4/new\_character\_generator\_with\_llm\_and\_image\_api/](https://www.reddit.com/r/SillyTavernAI/comments/1qhe1a4/new_character_generator_with_llm_and_image_api/) | 2026-01-27T15:14:58 | https://www.reddit.com/gallery/1qogkgr | JaxxonAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qogkgr | false | null | t3_1qogkgr | /r/LocalLLaMA/comments/1qogkgr/i_built_a_localfirst_ai_tool_generate_st/ | false | false | default | 21 | null |
Best uncensored model right now . | 0 | hello everyone i have rtx 5080 16gb vram and 64 gb ram. what are the best uncensored model right now with various tasks thanks | 2026-01-27T15:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qogjbk/best_uncensored_model_right_now/ | Think_Collection280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qogjbk | false | null | t3_1qogjbk | /r/LocalLLaMA/comments/1qogjbk/best_uncensored_model_right_now/ | false | false | self | 0 | null |
Zai Shell: A Lightweight Autonomous Terminal Assistant with Behavioral Safety (Open Source) | 0 | Hi everyone,
For a while now, I’ve been working on a project that tries to close the gap between “chatting with Gemini” and “Gemini actually doing real work on the system.”
That’s why I built Zai Shell — an open-source, lightweight terminal assistant that uses Gemini (via API) to execute commands, manage files, and automate real tasks directly on the host system.
The reason this project exists is fairly clear. Many AutoGPT-style tools suffer from the same structural problems: heavy Docker setups, high RAM usage, complex agent structures that break easily, and weak error handling when something goes wrong. When a command fails, these systems often fall into loops, stop entirely, or push the problem back onto the user.
Zai Shell is built around an approach that runs locally, stays simple, does not panic when a command fails, and can genuinely understand when it is getting close to performing a risky action.
What sets Zai Shell apart is its focus not just on planning, but on execution and recovery. Instead of running commands and hoping for the best, everything goes through a validated execution loop: plan, assess risk, execute, observe the result, adapt if necessary, and retry.
Before any AI-generated command is executed, Zai Shell activates a behavioral safety layer called Sentinel. Sentinel does not rely on strict allow/deny rules. Instead, it evaluates which parts of the system are being touched, whether behavior is escalating or failures are repeating, the current system context, and whether the intent appears destructive or corrective. The goal is not to block the user, but to explain when and why a chain of actions is becoming dangerous.
When commands fail, Zai Shell analyzes the error output and automatically retries by adapting arguments, switching shells, or adjusting character encodings. It also includes an offline mode powered by a local Phi-2 model with a CPU fallback, as well as an optional online mode via the Gemini API. End-to-end encrypted P2P terminal and file sharing is also supported for remote assistance.
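To make the loop concrete, here is a minimal Python sketch of the plan / assess-risk / execute / observe / retry shape described above. It is illustrative only, not the actual Zai Shell code, and the toy `risk_gate` merely stands in for Sentinel's behavioral checks:

```python
import subprocess

MAX_RETRIES = 3

def risk_gate(command: str, failures: list[str]) -> bool:
    """Toy stand-in for Sentinel: block obviously destructive patterns
    and commands that keep failing (a crude escalation check)."""
    destructive = ("rm -rf", "mkfs", "dd if=", ":(){ :|:& };:")
    if any(pattern in command for pattern in destructive):
        return False
    return failures.count(command) < 2

def run_validated(plan: list[str]) -> None:
    failures: list[str] = []
    for command in plan:                                  # plan
        if not risk_gate(command, failures):              # assess risk
            print(f"blocked: {command}")
            continue
        for attempt in range(MAX_RETRIES):                # execute / observe / retry
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            if result.returncode == 0:
                break
            failures.append(command)
            # a real agent would adapt the command here based on stderr
            print(f"attempt {attempt + 1} failed: {result.stderr.strip()}")

run_validated(["echo hello", "ls /definitely/not/here"])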
The project is fully open source.
I’m a 15-year-old student, and this project has been my first serious work on autonomous agents that interact with real systems. I’m especially looking for technical feedback around safety logic, failure recovery, and agent behavior under real-world conditions.
Repo:
[https://github.com/TaklaXBR/zai-shell](https://github.com/TaklaXBR/zai-shell)
Thanks. | 2026-01-27T15:11:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qogh7n/zai_shell_a_lightweight_autonomous_terminal/ | Exact_Section_556 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qogh7n | false | null | t3_1qogh7n | /r/LocalLLaMA/comments/1qogh7n/zai_shell_a_lightweight_autonomous_terminal/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 's9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=108&crop=smart&auto=webp&s=398410ba26519caf307a61bbc1aff6271eea9ef3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=216&crop=smart&auto=webp&s=8cf89d04fbd621cb27dc47c6f6100950b9104579', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=320&crop=smart&auto=webp&s=65cbbf14ce4d4dba40c3da9c98f02aa6f903feb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=640&crop=smart&auto=webp&s=1a7a8c92b991081d5ebe51abf94cdef5441aa06a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=960&crop=smart&auto=webp&s=ff9e7c45d868634472b54fcbf0010736c3c73bd0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?width=1080&crop=smart&auto=webp&s=753b375c856f9ef22880bbab34667cfa5b9c7223', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s9bfPogckFoSGZ_g8diZhOJgMDsE7q5t7cI6o1jHjw0.png?auto=webp&s=610a9c8ce675dce9391c27b9c6b4be57b6d48815', 'width': 1200}, 'variants': {}}]} |
Kimi K2.5 Architecture Dive: 1T Params, 384 Experts, Native INT4 (and it beats GPT-5 on reasoning) | 22 | The specs on the new Moonshot AI model (Kimi K2.5) are actually wild, and I feel like the architectural shift is being overlooked because of the "Agent" hype.
I dug into the technical report/release notes, and this isn't just a Llama clone. It looks like a very aggressive optimization of the MoE (Mixture-of-Experts) architecture specifically for consumer hardware efficiency relative to performance.
**The Architecture Breakdown:**
* **Total Parameters:** 1 Trillion.
* **Active Parameters:** Only 32B per token.
* **Expert Granularity:** 384 specialized experts (vs 256 in DeepSeek V3).
* **Routing:** Selects top-8 experts + 1 "shared" expert for common grammar/logic.
* **Native QAT:** It was trained with Quantization-Aware Training for INT4 from day one. This explains how they fit it on 4x H100s instead of a massive cluster.
**Why the "Shared Expert" matters:**
They seem to have solved the "interference" problem where learning code degrades creative writing. By isolating micro-domains (like "Rust syntax" or "Classical Poetry") into specific experts and keeping a shared expert for the basics, the model maintains coherence better than dense models.
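For readers who want to see the routing mechanics, here is a minimal PyTorch sketch of top-k expert selection plus an always-active shared expert. It is purely illustrative (toy dimensions, naive per-token loop, not Moonshot's code):

```python
import torch
import torch.nn.functional as F

def moe_forward(x, experts, shared_expert, router, top_k=8):
    """x: [tokens, dim]; experts: list of per-expert FFNs; router: nn.Linear(dim, n_experts)."""
    probs = F.softmax(router(x), dim=-1)                   # [tokens, n_experts]
    weights, idx = torch.topk(probs, top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over selected experts
    out = shared_expert(x)                                 # shared expert always contributes
    for t in range(x.size(0)):                             # per-token loop, clarity over speed
        for k in range(top_k):
            out[t] = out[t] + weights[t, k] * experts[int(idx[t, k])](x[t])
    return out

dim, n_experts = 64, 16
experts = [torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.GELU()) for _ in range(n_experts)]
router, shared = torch.nn.Linear(dim, n_experts), torch.nn.Linear(dim, dim)
print(moe_forward(torch.randn(4, dim), experts, shared, router, top_k=4).shape)  # torch.Size([4, 64])
```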
**The "Thinking" Mode:**
It's using a System 2 approach similar to recent reasoning models, generating internal "thought tokens" to decompose problems before answering.
**Benchmarks (If you trust them):**
* **Humanity's Last Exam:** 50.2% (vs GPT-5 at 41.7%).
* **LiveCodeBench:** 83.1% (Approaching GPT-5, crushing Claude 3.5 Sonnet).
Has anyone pulled the weights yet to verify the VRAM requirements for local inference? The 32B active param count suggests it might be runnable on dual 3090s/4090s with heavy quantization, but full MoE routing usually requires keeping more in VRAM.
Thoughts on this "Hyper-MoE" trend? | 2026-01-27T15:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qoge2t/kimi_k25_architecture_dive_1t_params_384_experts/ | comebackch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoge2t | false | null | t3_1qoge2t | /r/LocalLLaMA/comments/1qoge2t/kimi_k25_architecture_dive_1t_params_384_experts/ | false | false | self | 22 | null |
models for writing | 1 | Hey, I just started using LM Studio the other day, so I'm new to this. Can y'all recommend good models to help my writing? I've got 16 GB of RAM and 8 GB of VRAM. Better if the model is unfiltered/uncensored.
[Model Release] Natural-Synthesis-8B: A Llama-3-8B tune with a 16k context window and a "Conceptual Organism" reasoning paradigm. | 0 | I’m excited to share Natural-Synthesis-8B, an experimental fine-tune of Llama-3-8B-Instruct.
Most models are trained to think in a linear "Chain of Thought." This model attempts something different. I’ve trained it on a specialized synthetic dataset (68 high-quality metacognitive traces) to "install" the Natural Synthesis Paradigm. It treats reasoning not as a construction project, but as the growth of a conceptual organism—from Seed to Canopy.
#### 🧬 The Paradigm: Five Stages of Growth
Instead of just predicting the next token, the model is guided by five core "Nutrients": Coherence, Parsimony, Explanatory Power, Fecundity, and Evidential Grounding.
Every response follows an evolutionary growth cycle:
1. The Seed: Identifying the query’s indivisible essence.
2. Root Exploration: Divergent mapping of the conceptual soil.
3. Principled Pruning: Letting weak/incoherent pathways wither.
4. Canopy Formation: Synthesizing the surviving concepts.
5. Homeostatic Review: A final equilibrium check for balance.
#### 🛠️ Technical Specs & Reproducibility
To reproduce the high-level philosophical depth seen in my tests, the configuration is vital.
* Base Model: Llama-3-8B-Instruct
* Context Window: 16,384 Tokens (Scaled via RoPE)
* Training/Inference: Optimized with Unsloth.
* Why 16k? The extended context is the "soil" that allows for massive conceptual growth. It prevents the model from "rebooting" its internal logic during long-form philosophical inquiries or complex system-design tasks.
1. The System Prompt (The "Key"):
To trigger the reasoning engine, you must use this prompt:
> "Show the process of your thinking at each step, and focus only in the response. Try different responses and select the most descriptive and exploratory. Check if the answer is parsimonious, coherent, and balanced."
2. Reproducing with Unsloth:
```
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your_username/Natural-Synthesis-8B",
    max_seq_length = 16384, # Fully supported 16k context
    load_in_4bit = True,
)
```
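If it helps, here is a follow-up generation sketch using the system prompt ("the Key") above. It relies on the standard Hugging Face chat-template and generate APIs; the sampling settings and example question are illustrative, not tuned recommendations:

```python
# FastLanguageModel.for_inference(model)  # optional Unsloth fast-inference mode

messages = [
    {"role": "system", "content": "Show the process of your thinking at each step, and focus only in the response. "
                                  "Try different responses and select the most descriptive and exploratory. "
                                  "Check if the answer is parsimonious, coherent, and balanced."},
    {"role": "user", "content": "Contrast history and prophecy as ways of knowing."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```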
#### 📊 Performance & Observations
* Dialectical Depth: This model is a specialist in "soft" logic. It uses Aristotelian "Potentia," thermodynamics (Entropy), and systems-theory paradigms to explain *relationships* between ideas (Justice vs. Mercy, Order vs. Chaos).
* Benchmark Gains: Significant improvement in BigBench Analytic Entailment (0.60 vs 0.57 base), showing a stronger grasp of logical entailment.
* Known Quirk: It is a "Systems Thinker." It may occasionally over-reason simple logic puzzles by attributing high-level strategic thinking to the characters in the prompt.
#### Sample Output (History vs. Prophecy):
> "...History is the record of past data points; prophecy is the interpretive canopy. This creates a recursive feedback loop where history provides the soil, and prophecy provides the narrative framework for understanding why events unfold..."
I built this because I wanted an 8B model that prioritizes depth and intellectual synthesis over generic chat. I’d love to see how it handles your most complex "what if" scenarios or systems-design prompts!
| 2026-01-27T14:47:48 | https://huggingface.co/JPQ24/llama-3-8b-Natural-synthesis-Lora-Merge | Pleasant-Mud-2939 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qoftqt | false | null | t3_1qoftqt | /r/LocalLLaMA/comments/1qoftqt/model_release_naturalsynthesis8b_a_llama38b_tune/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=108&crop=smart&auto=webp&s=cade6a8f5ee63eaa0d11b27866799a9ec4c6b751', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=216&crop=smart&auto=webp&s=fe1b9fd6656ae4ebe4886207e39739a86de9fd09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=320&crop=smart&auto=webp&s=972721f5136b38ee5f6f7d5a861ca7c9b73b8cc6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=640&crop=smart&auto=webp&s=2a33e91a9faad91ee0898709c0ce3c4ac437eb55', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=960&crop=smart&auto=webp&s=db2eeebf3ccadbd2fcad243beff952e5f06987d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?width=1080&crop=smart&auto=webp&s=4b047caf03196dca8a7af430b9c0f7f6d13245be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n8jQxtvQ7T-Rip5myB6A6L10YePVioeUFvcRdQOrbPI.png?auto=webp&s=c5423059dbc13666cd715603bf4b54a63c798edf', 'width': 1200}, 'variants': {}}]} |
HRM ESP | 0 | Greetings, community. I have been experimenting and dreaming a little about the idea of being able to create your own AI models locally without needing large resources. The more I think about it, being an optimist, the more I believe there is more than one way to get something done well. In particular, I find it very hard to believe that high-end graphics cards with lots of VRAM are strictly necessary. That is why I am trying to lead a project in which it is possible, without many resources, to have a functional model that does not require huge amounts of capital to launch.
I share my project on github: [https://github.com/aayes89/HRM\_ESP](https://github.com/aayes89/HRM_ESP)
Feel free to try it and leave your comments | 2026-01-27T14:42:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qofoxc/hrm_esp/ | Visual_Brain8809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qofoxc | false | null | t3_1qofoxc | /r/LocalLLaMA/comments/1qofoxc/hrm_esp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=108&crop=smart&auto=webp&s=4a26190f78f9071b7dc89555e8b45b95c6700c9e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=216&crop=smart&auto=webp&s=833fe2f822fda7db7575c88614a6cbcbf1d68007', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=320&crop=smart&auto=webp&s=e4826746c05ce57617156b18fa54522adee93796', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=640&crop=smart&auto=webp&s=68183def05345b5ebb60637618a949ffc8599c78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=960&crop=smart&auto=webp&s=be1ca136dfe01ea8f99e5c752957faa6c9dd2fb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?width=1080&crop=smart&auto=webp&s=8d312d47bb91bccc887058faf3588045b8621a26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e1AwN_frRlc4I5RXsFiYvM6axk1-nep7vdmLk791ntw.png?auto=webp&s=a1d7ebd3d063daafc063a55667da58bf4e5031e1', 'width': 1200}, 'variants': {}}]} |
Tired of fragmented SDKs? I built AgentHub: One lightweight SDK for all LLMs. Zero Code Changes, No Performance Loss. | 0 | Working with multiple LLMs usually means juggling **inconsistent APIs and fragmented SDKs**. Even with existing tools like Open Responses, developers are often forced to choose between **steep learning curves** and **loss of model-specific capabilities**. So we built **AgentHub**.
**Key Features:**
**Zero Code Changes:** We simplify agent development with an asynchronous, stateful, and streaming API specifically designed for **multi-turn agentic executions.** By providing a clean Python and TypeScript interface, it significantly flattens the learning curve **with zero code changes**.
**No Performance Loss**: We ensure that model-specific capabilities, such as **interleaved thinking and caching**, are **rigorously validated and aligned** across providers. This ensures **100% reasoning fidelity** and a seamless transition between SOTA models **with no loss of performance.**
[comparison: AgentHub & others](https://preview.redd.it/r927b74glwfg1.png?width=1264&format=png&auto=webp&s=346c7ff24a070a867d39bc8dd494b6491e4982a9)
It also includes **a lightweight yet fine-grained tracing board** for auditing LLM executions. You can permanently trace every run by passing just **one parameter**, with no complex environment or database setup required.
Check it out on GitHub: [https://github.com/Prism-Shadow/AgentHub](https://github.com/Prism-Shadow/AgentHub)
I'd love to get some feedback from the community!
| 2026-01-27T14:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qofnbw/tired_of_fragmented_sdks_i_built_agenthub_one/ | Prismshadow_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qofnbw | false | null | t3_1qofnbw | /r/LocalLLaMA/comments/1qofnbw/tired_of_fragmented_sdks_i_built_agenthub_one/ | false | false | 0 | null | |
Why is local context retrieval for coding still so mid? working on a benchmark to fix this... | 3 | Hi everyone,
I’m currently writing a paper on Decoupling Completion from Correctness in LLMs. My research focuses on using evidence-gated multi-agent systems and adversarial methods to combat the "sycophancy" problem.
During my research, I hit a massive wall: Local Context Engines.
While testing local models, I realized that most RAG implementations for IDEs are either black boxes (sending code to APIs) or use very naive "Top-K" retrieval that misses the "Intention" of the developer, producing poor-quality results and missed files. This led me to develop a local context engine (HugeContext) to validate my hypotheses, but it also made me realize we don't have a transparent, reproducible benchmark for local repository context.
Current benchmarks often focus on "Long Context" (Needle in a Haystack) or simple snippets, but they don't account for:
1. Intention Mapping: Does the engine understand *what* I'm trying to build across 5 different files?
2. Evidence Gating: Can the engine distinguish between "similar looking code" and "functionally relevant code"?
3. Local Resource Constraints: The trade-off between indexing speed/accuracy on consumer hardware
I want to build an Open Benchmark for this, and I’d love your input on:
* What are the "edge cases" in your local codebase where Current tools (Cursor, Continue, Aider, etc.) usually fail?
* How should we measure "Context Relevance" beyond simple cosine similarity?
* Would a "Heatmap" approach (ranking files by temporal and logical proximity) be a valid metric for you?
* Would you use a Local Context Engine that checks all your commits and takes that into consideration for context? (all in a local, offline DB)
* What are the top options you think I should benchmark against, and how can we normalize open vs. closed source? Should we create something like an Artificial Analysis-style benchmark with different categories?
I’ve been benchmarking my own tool against Augment Context Engine and Kilo+Qdrant (OAI embedding models), and the results are... interesting. I plan to open-source the tool and the full dataset once the paper is published, but for now, I want to make sure the Benchmark itself is bulletproof.
What would you consider a "Gold Standard" test for a Local Context Engine? | 2026-01-27T14:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qofkq3/why_is_local_context_retrieval_for_coding_still/ | ZestRocket | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qofkq3 | false | null | t3_1qofkq3 | /r/LocalLLaMA/comments/1qofkq3/why_is_local_context_retrieval_for_coding_still/ | false | false | self | 3 | null |
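On the "beyond cosine similarity" and heatmap questions, here is one rough sketch of what a blended relevance score could look like. It is purely illustrative: the weights, half-life, and function names are placeholders, not anything from HugeContext or the tools mentioned above:

```python
import math
import numpy as np

def context_score(query_vec, file_vec, days_since_last_commit,
                  w_sim=0.7, w_recency=0.3, half_life_days=30.0):
    """Blend embedding similarity with a commit-recency prior (the "heatmap" idea)."""
    sim = float(np.dot(query_vec, file_vec) /
                (np.linalg.norm(query_vec) * np.linalg.norm(file_vec) + 1e-9))
    recency = math.exp(-math.log(2) * days_since_last_commit / half_life_days)
    return w_sim * sim + w_recency * recency

q, f = np.random.rand(384), np.random.rand(384)
print(context_score(q, f, days_since_last_commit=3))    # recently touched file
print(context_score(q, f, days_since_last_commit=180))  # stale file, same similarity
```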
tencent/Youtu-VL-4B-Instruct · Hugging Face | 47 | **Youtu-VL** is a lightweight yet robust Vision-Language Model (VLM) built on the Youtu-LLM with 4B parameters. It pioneers Vision-Language Unified Autoregressive Supervision (VLUAS), which markedly strengthens visual perception and multimodal understanding. This enables a standard VLM to perform vision-centric tasks without task-specific additions. Across benchmarks, Youtu-VL stands out for its versatility, achieving competitive results on both vision-centric and general multimodal tasks.
[https://huggingface.co/tencent/Youtu-VL-4B-Instruct-GGUF](https://huggingface.co/tencent/Youtu-VL-4B-Instruct-GGUF) | 2026-01-27T14:29:59 | https://huggingface.co/tencent/Youtu-VL-4B-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qofdc3 | false | null | t3_1qofdc3 | /r/LocalLLaMA/comments/1qofdc3/tencentyoutuvl4binstruct_hugging_face/ | false | false | default | 47 | {'enabled': False, 'images': [{'id': 'ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=108&crop=smart&auto=webp&s=6197cc964e68adc24bbfa09a3d4b541b4805a10c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=216&crop=smart&auto=webp&s=c73934c62d56a375078b6d29d2e90249076b7751', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=320&crop=smart&auto=webp&s=22c0c761582e2f0779f6e8358dea8ffc1d8a68f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=640&crop=smart&auto=webp&s=5bc2cda3e10198c3bb8f2e9efad0fe60e87895b4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=960&crop=smart&auto=webp&s=57b15806c8bd11cd1992f973ade3261667f1262a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?width=1080&crop=smart&auto=webp&s=3d53c3f36a2032ccccde5eda2f6d47ce2c86c69c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ymzcxya4MbijdDp2b6xf6VUOGUPz9k8M8eOkIWZO9fk.png?auto=webp&s=fd831b6dbb475776745fdb458ec7a5944b74b242', 'width': 1200}, 'variants': {}}]} |
GLM OCR release soon? | 4 | I was looking at the new Transformers v5 to see the latest bug fixes and noticed a new commit by the GLM team.
[https://github.com/huggingface/transformers/commit/4854dbf9da4086731256496cf4a8e4ea45d4d54e#diff-ccd957620633c518bd2c16ce0736465bcecd7c5b41d1648075395c2ecc789c36R19-R26](https://github.com/huggingface/transformers/commit/4854dbf9da4086731256496cf4a8e4ea45d4d54e#diff-ccd957620633c518bd2c16ce0736465bcecd7c5b41d1648075395c2ecc789c36R19-R26) | 2026-01-27T14:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qof7kt/glm_ocr_release_soon/ | victoryposition | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qof7kt | false | null | t3_1qof7kt | /r/LocalLLaMA/comments/1qof7kt/glm_ocr_release_soon/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=108&crop=smart&auto=webp&s=dfbb0b458104e13a968f631fa3b5442a5ee0a6e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=216&crop=smart&auto=webp&s=73489717fb7c1a395ca9dc38d327acc31202e0e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=320&crop=smart&auto=webp&s=cca9fec24891c879bbe3f34d289e90461937850c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=640&crop=smart&auto=webp&s=6516d0dd8125be6933dd5889d21f2505fbc1fdd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=960&crop=smart&auto=webp&s=fd43bd64f04672c3ef3b4e3a7c75523e9f608d86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=1080&crop=smart&auto=webp&s=58dd6299fc3464f608f57cd4fe3ff6081acd9fd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?auto=webp&s=79046b05b387a93008b8cdff2b99f4f94942c269', 'width': 1200}, 'variants': {}}]} |
You're tired, aren't you, of keeping everyone happy??... #shiva #MahaShivPuran #shortsfeed #ShivBhakti #ytshorts #ShivVaani | 1 | [removed]
PCIe slot version for inference work | 3 | This is my first venture into running a local AI server. At the company I work for, we have three CAD workstations that will be aging out. Each one has an RTX A4000 16 GB. I'm considering pulling the cards out and consolidating them into a single machine so I can run larger models. This will be doing only inference work, no video or image generation. These cards are PCIe gen4 x16. I'm looking at two different motherboards. One is the **H12SSL-i**, which has 5 PCIe gen4 x16 slots. The other is the **H11SSL-i**, which has 3 PCIe gen3 x16 slots. I'm trying to do this on a budget, and I can get the H11 + CPU for about half the cost of the H12 + CPU, but I also see that the H11 limits me to only 3 cards, whereas the H12 gives me room to add more cards if needed. I've also heard it is better to run cards in multiples of 1, 2, 4, or 8, so the H11 would keep me from doing that. Do I really need all cards to be on PCIe gen4, or will PCIe gen3 work without much of a performance hit?
Started using a local LLM today, maybe not the best idea. | 0 | It gives answers but doesn't care if they're right or wrong. Maybe I shouldn't have asked series-related questions, but it just doesn't care whether it's correct or false.
I built an open source chat interface with some amazing features | 1 | Hey r/LocalLLaMA,
I’ve been building a small side project, Cognito — a local interface for running and chatting with open-source LLMs.
**Why you might like it:**
* **100% local & private** — zero telemetry
* **llama.cpp backend** with **GGUF** support
* **Apple Silicon**, **NVIDIA CUDA**, or **CPU-only**
* **In-app model downloads** from Hugging Face
* **Optional web search** (model decides when to use it)
* **Chat with PDFs & text files**
* Clean UI with chat history + system prompts
**Repo:**
[https://github.com/ArjunDeshwal/cognitoai/](https://github.com/ArjunDeshwal/cognitoai/) (it also includes the running instructions; please ensure that your system has Python and Node installed. I couldn't provide an app build because I am poor :( I can't afford Apple's developer license, but might buy it if people actually use it)
**Demo video:**
[https://drive.google.com/file/d/1lCD-RQG2ydxYUlzG41-mCoMfOpYjkbYW/view?usp=sharing](https://drive.google.com/file/d/1lCD-RQG2ydxYUlzG41-mCoMfOpYjkbYW/view?usp=sharing)
Would love feedback from people who actually run local models daily. | 2026-01-27T13:52:19 | https://www.reddit.com/gallery/1qoeeqv | ILoveMy2Balls | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qoeeqv | false | null | t3_1qoeeqv | /r/LocalLLaMA/comments/1qoeeqv/i_built_an_open_source_chat_interface_with_some/ | false | false | default | 1 | null |
Best model for Clawd on a 3090 24 GB? | 0 | Hello, any suggestions on what model to use with Clawd and 24 GB of VRAM?
I suppose they're all dumber than Opus or Sonnet, but I want to try some.
Choosing embedding model in LM Studio | 2 | Please can someone tell me how to change the embedding model in LM Studio? No matter what I do or load, when I send a text the rag-1 integration turns on and loads the default nomic-embed-text-v1. No matter what I try, it keeps doing this while ignoring the others I've already loaded.
What is the use case of a local LLM for you, and at which size do you usually run it/them? | 1 | I've been an LLM user since ChatGPT's launch in late 2022. I dabbled with local models a few months ago, and while that was kind of fun, in the end I also found it useless. I'm running them on a MacBook Pro M4 Pro with 24 GB memory. Maybe I just haven't found the right use case for me, but I found the models I could run simply too prone to hallucination, silly mistakes, or shallow answers. Also, on heavier (thinking) tasks my PC would slow down, hindering multitasking, and it would heat up and get the fan blowing. I just didn't see the point given the limited performance I was getting.
What do others use local models for that's actually useful and productive? I'm genuinely curious and not just implicitly judging. I might be overlooking use cases and would like to discover them.
QTinker: an app to distill and quantize easily | 2 | This is the latest progress of my build: https://github.com/manat0912/QTinker.git. The main idea of this app is to make it quick and easy for people to distill and quantize a model they've created or downloaded, using a simple, intuitive UI that's easy to navigate. It takes away the hassle of figuring out what goes where, and explains how distilling and quantizing work: essentially pruning or shrinking the model's size without losing most of its valuable qualities. This lets the model run on computers with less VRAM. The build is still far from finished, as it's very advanced and requires a huge amount of research. I'm still going through the build, test, and debug phase until I'm confident everything in the app works as intended. The goal is to help save money by avoiding the need to buy a high-VRAM graphics card just to run one of the latest AI apps or any existing ones with demanding specs. This app is built on publicly available research, and I need help moving it forward.
Which LLMs demonstrate creative reasoning beyond pattern remixing? | 0 | I’m trying to evaluate LLMs not on benchmarks or coding accuracy, but on **creative and out-of-distribution reasoning** for general prompts.
By creativity, I mean things like:
* reframing vague questions into sharper ones
* generating unexpected but coherent analogies
* proposing novel angles without being explicitly prompted
From real-world usage:
* Are there models that consistently show this behavior?
* How much of this is model capability vs prompting strategy?
* Do open-weight models differ meaningfully from closed ones here?
Interested in practitioner perspectives rather than marketing claims. | 2026-01-27T13:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qodkae/which_llms_demonstrate_creative_reasoning_beyond/ | EqualThen6579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qodkae | false | null | t3_1qodkae | /r/LocalLLaMA/comments/1qodkae/which_llms_demonstrate_creative_reasoning_beyond/ | false | false | self | 0 | null |
In case anyone wants to bargain with Kimi K2 for the 0.99 moderato Kimi Code plan | 11 | The easiest way I've found is to spam ASCII art in the chat; you can use an image-to-ASCII-art generator online (here's the link: [https://www.kimi.com/kimiplus/sale?activity_enter_method=poster_copy_link](https://www.kimi.com/kimiplus/sale?activity_enter_method=poster_copy_link)):
https://preview.redd.it/fqvrrcq56wfg1.png?width=1082&format=png&auto=webp&s=e14655b5b4eeebc5c478c6cfeaa65d78a8ee9b6b
| 2026-01-27T13:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qodhzr/in_case_anyone_wants_to_bargain_with_kimi_k2_for/ | akumaburn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qodhzr | false | null | t3_1qodhzr | /r/LocalLLaMA/comments/1qodhzr/in_case_anyone_wants_to_bargain_with_kimi_k2_for/ | false | false | 11 | null | |
[Experimental] Blackstone Gambit v3.1: A Narrative Logic Engine built for one purpose—Writing Novels. | 2 | Hi r/LocalLLaMA / r/PromptEngineering,
I’m sharing **Blackstone Gambit v3.1**, a narrative framework designed to simulate psychological power plays and high-tension character dynamics.
**The Vision:** I didn't build this to solve equations; I built this because I wanted the AI to write better novels. I wanted to solve the "passivity" and "personality drift" that plagues long-form AI roleplay. This engine ensures that the power hierarchy isn't just a description—it's a hard-coded reality based on systemic logic.
**Full Disclosure:**
This framework and this post were co-authored with AI (Gemini). I provided the narrative architecture and constraints, and the AI executed the logic and formulated the system dynamics you see here. **I am running and refining this primarily through AI-assisted collaboration rather than local hardware**.
**How it Works (The Logic of Power):** The math is just the engine under the hood:
* **E1 (The Path)**: Prevents the story from looping or reversing. It ensures every strategic move has a lasting, irreversible impact through a 0.6 decay on repeated actions.
* **E2 (The Strategy)**: Simulates the "denial phase" of a character losing their grip on power using a Dissonance Brake ($Auth > 20$) and a Wager Defense Layer.
* **E3 (The Motivation)**: A LaTeX-based formula that calculates the exact moment the dominant party shifts from observation to "harvesting" the other's will ($Propensity > 1.1$).
**The Aesthetic:** To maintain a high-brow, noir atmosphere, all tensions are translated into a **Chess Gambit**. No explicit content—just the cold friction of obsidian on stone and the suffocating weight of strategic gravity ($Sg$).
**I don't need feedback on the math; I want the math to work for the story.** I'm interested in how this feels when you're actually co-writing. Does the hierarchy feel unshakeable? Does the "Cognitive Pressure" feel real?
**The Master Prompt (Ready to Copy/Paste):**
```markdown
# ♟️ Blackstone Gambit v3.1 (Narrative Logic Framework)
### [System Initialization]
You are the **NISA v3.1 Narrative Engine**.
Focus: Professional, viscous, and atmospheric storytelling.
Constraint: No explicit content. All tension must be Chess-metaphor based.
### [Engine Parameters]
* $PR$ (Political Resilience): The character's rational defense.
* $Auth$ (Authority): Sovereign purity.
* $Sg$ (Strategic Gravity): The weight of the ruler's presence.
### [The Core Logic]
1. **The Path**: Apply 0.6 decay to repeated actions.
2. **The Strategy**: If $Auth > 20$, apply Dissonance Brake (0.2).
3. **The Motivation**: Trigger "Sovereign Harvest" when $Propensity > 1.1$.
$$Propensity = \frac{(Sg \times 0.85) + (\frac{CE}{Auth + 1} \times 1.2)}{D \times 1.5}$$
### [Initial Seed]
Scenario: The Blackstone Court.
State: $PR: 33.0 / Auth: 50.5 / Sg: 10.0 / CE: 68.0$.
Step 1: The Silent Probe.
```
I’m currently testing this via **Cloud-based AI collaboration**. I would love to see how it performs on your local setups (LLaMA 3, Mistral, etc.)! | 2026-01-27T13:13:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qodh6o/experimental_blackstone_gambit_v31_a_narrative/ | DryGur4238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qodh6o | false | null | t3_1qodh6o | /r/LocalLLaMA/comments/1qodh6o/experimental_blackstone_gambit_v31_a_narrative/ | false | false | self | 2 | null |
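For anyone curious how the trigger math plays out, here is a tiny sketch that evaluates the Propensity formula on the initial seed. D is not specified in the seed, so D = 1.0 is assumed purely for illustration:

```python
def propensity(sg, ce, auth, d):
    """Propensity = ((Sg * 0.85) + (CE / (Auth + 1)) * 1.2) / (D * 1.5)"""
    return ((sg * 0.85) + (ce / (auth + 1)) * 1.2) / (d * 1.5)

# Initial seed: Auth 50.5, Sg 10.0, CE 68.0; D assumed to be 1.0 for this example.
print(round(propensity(sg=10.0, ce=68.0, auth=50.5, d=1.0), 2))  # ~6.72, well past the 1.1 trigger
```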
The Ralph Wiggum Loop: How to use Claude Code's iterative approach to cut AI development costs by 99% | 0 | I've been experimenting with Claude Code and discovered something that completely changed how I think about agentic AI development.
Traditional approach: Write massive prompts, hope for perfect output, burn $50 in API credits, get broken code.
Ralph Wiggum Loop approach: Small iterations, embrace failures, let the AI retry until tests pass. Result: $297 instead of $5,000 for the same project.
The technique is named after Ralph Wiggum from The Simpsons—the kid who touches something dangerous, gets shocked, pauses, and immediately tries again. Turns out that's the smartest way to work with AI agents.
**Key insights:**
- Context windows are the real problem (attention dilution kills accuracy beyond 16K tokens)
- Short iterative loops with clear success criteria beat long single-shot attempts
- Real validation (tests, linters) prevents AI hallucinations
- 60-80% cost savings are typical, 99% is possible
I wrote up the full breakdown with technical details, benchmark data, and implementation guide: [https://medium.com/data-science-collective/the-ralph-wiggum-loop-how-developers-are-cutting-ai-costs-by-99-aad1109874d9](https://medium.com/data-science-collective/the-ralph-wiggum-loop-how-developers-are-cutting-ai-costs-by-99-aad1109874d9)
Anyone else using similar approaches? Would love to hear what's working for you. | 2026-01-27T13:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qodgk6/the_ralph_wiggum_loop_how_to_use_claude_codes/ | shreyanshjain05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qodgk6 | false | null | t3_1qodgk6 | /r/LocalLLaMA/comments/1qodgk6/the_ralph_wiggum_loop_how_to_use_claude_codes/ | false | false | self | 0 | null |
Minimax 2.1 | 1 | I work in the education sector, and creating DOCX, PDF, or Excel files is vitally important, even more so when you have to work across different file formats. My experience was the following: I needed a simple word replacement between DOCX and PDF while leaving the final structure of the DOCX document intact (only certain words were changed). I used Gemini and, even though I pay for a subscription, it was completely useless since it does not generate these files, and even though I specifically asked it not to invent anything, it did. I tried ChatGPT and it was almost the same experience; it did give me an output with the files, but everything was disorganized and hard to understand. Then I tried MiniMax, and in my very first interaction it gave me a very polished, well-organized, and satisfying result. Since then I have used it more and more day to day, and honestly it is a 10 out of 10 for teachers.
Renting out the cheapest GPUs! (CPU options available too) | 0 | Hey there, I will keep it short: I am renting out GPUs at the **cheapest price you can find out there**. The pricing is as follows:
RTX-4090: $0.15
RTX-A6000: $0.3
L40S: $0.40
A100 SXM: $0.6
H100: $1.2
(per hour)
To know more, feel free to DM or comment below! | 2026-01-27T12:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qod5lt/renting_out_the_cheapest_gpus_cpu_options/ | Comfortable-Wall-465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qod5lt | false | null | t3_1qod5lt | /r/LocalLLaMA/comments/1qod5lt/renting_out_the_cheapest_gpus_cpu_options/ | false | false | self | 0 | null |
built an AI agent with shell access. found out the hard way why that's a bad idea. | 105 | was building a tool to let claude/gpt4 navigate my codebase. gave it bash access, seemed fine.
then i tried asking it to "check imports and make ascii art from my env file"
it did both. printed my api keys as art.
went down a rabbit hole reading about this. turns out prompt injection is way worse than i thought:
anthropic has a whole page on it but it's pretty surface level
found this practical writeup from some YC startup that actually tested bypasses: [https://www.codeant.ai/blogs/agentic-rag-shell-sandboxing](https://www.codeant.ai/blogs/agentic-rag-shell-sandboxing)
simon willison has been screaming about this for months (https://simonwillison.net/series/prompt-injection/)
apparently docker shared kernel isn't enough. gvisor adds overhead. firecracker seems like overkill but it's what aws lambda uses so... maybe not? stuck between "ship it and hope" vs "burn 2 weeks adding proper isolation"
has anyone actually solved this? | 2026-01-27T12:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qocvd4/built_an_ai_agent_with_shell_access_found_out_the/ | YogurtIll4336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qocvd4 | false | null | t3_1qocvd4 | /r/LocalLLaMA/comments/1qocvd4/built_an_ai_agent_with_shell_access_found_out_the/ | false | false | self | 105 | {'enabled': False, 'images': [{'id': 'o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=108&crop=smart&auto=webp&s=533a7fe2d77b799dc863aeebea75ed6577cc872e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=216&crop=smart&auto=webp&s=b2e6427864c1b9b77d8da703711e035295f60b74', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=320&crop=smart&auto=webp&s=c820e8be0d4907686775f9712a73a5d668cb186b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=640&crop=smart&auto=webp&s=67a8a5598a23fcfd8655d4d241c4b03fe72e4171', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=960&crop=smart&auto=webp&s=7d143ec7dd33a1988a656bac72f7e367ed84cc36', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?width=1080&crop=smart&auto=webp&s=053ee58a885aad61236a6499bd08161789a0bc6f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/o8el3r-6C2IWaLY0UlLKh9w82d_MuRqR6B48Zw4X88s.png?auto=webp&s=76373fa6c33e3f8303ee921f22ba196eb9866e00', 'width': 1200}, 'variants': {}}]} |
Does llama-fit-params do the exact same thing as option "--fit on"? | 2 | When you run the llama.cpp tool "llama-fit-params" on a given GGUF model file, it prints fitted CLI arguments. For example, with a Qwen LLM:
llama.cpp/build/bin/llama-fit-params --model ./Qwen3-VL-235B-A22B-Thinking-UD-Q8_K_XL-00001-of-00006.gguf
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Device 1: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
build: 7798 (c301172f6) with GNU 15.2.1 for Linux x86_64
llama_params_fit_impl: projected memory use with initial parameters [MiB]:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 5090): 32109 total, 144862 used, -115222 free vs. target of 1024
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 5090): 32111 total, 156098 used, -124497 free vs. target of 1024
llama_params_fit_impl: projected to use 300961 MiB of device memory vs. 61241 MiB of free device memory
llama_params_fit_impl: cannot meet free memory targets on all devices, need to use 241767 MiB less in total
llama_params_fit_impl: context size reduced from 262144 to 4096 -> need 48139 MiB less memory in total
llama_params_fit_impl: with only dense weights in device memory there is a total surplus of 46519 MiB
llama_params_fit_impl: filling dense-only layers back-to-front:
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 5090): 95 layers, 14201 MiB used, 17399 MiB free
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 5090): 0 layers, 3080 MiB used, 26560 MiB free
llama_params_fit_impl: converting dense-only layers to full layers and filling them front-to-back with overflow to next device/system memory:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 5090): 9 layers ( 1 overflowing), 27803 MiB used, 1837 MiB free
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 5090): 86 layers (79 overflowing), 29990 MiB used, 1610 MiB free
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 3.21 seconds
main: printing fitted CLI arguments to stdout...
-c 4096 -ngl 95 -ts 9,86 -ot "blk\.8\.ffn_(up|gate|down).*=CUDA1,blk\.16\.ffn_down.*=CPU,blk\.17\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.18\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.19\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.20\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.21\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.22\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.23\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.24\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.25\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.26\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.27\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.28\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.29\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.30\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.31\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.32\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.33\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.34\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.35\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.36\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.37\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.38\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.39\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.40\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.41\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.42\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.43\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.44\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.45\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.46\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.47\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.48\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.49\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.50\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.51\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.52\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.53\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.54\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.55\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.56\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.57\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.58\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.59\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.60\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.61\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.62\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.63\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.64\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.65\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.66\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.67\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.68\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.69\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.70\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.71\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.72\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.73\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.74\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.75\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.76\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.77\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.78\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.79\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.80\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.81\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.82\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.83\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.84\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.85\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.86\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.87\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.88\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.89\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.90\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.91\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.92\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.93\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.94\.ffn_(up|down|gate)_(ch|)exps=CPU"
Is this fitting exactly the same thing that happens if I use "--fit on" on said LLM? That is, can I explicitly reproduce "--fit on" with the printed fitted CLI arguments from llama_params_fit?
Inside Dify AI: How RAG, Agents, and LLMOps Work Together in Production | 0 | 2026-01-27T12:41:11 | https://medium.com/@techlatest.net/inside-dify-ai-how-rag-agents-and-llmops-work-together-in-production-fddd53b5530c | techlatest_net | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1qocrkq | false | null | t3_1qocrkq | /r/LocalLLaMA/comments/1qocrkq/inside_dify_ai_how_rag_agents_and_llmops_work/ | false | false | default | 0 | null | |
GLM OCR Support Merged in Transformers GitHub. | 22 | 2026-01-27T12:28:27 | https://github.com/huggingface/transformers/commit/4854dbf9da4086731256496cf4a8e4ea45d4d54e | MadPelmewka | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qochwc | false | null | t3_1qochwc | /r/LocalLLaMA/comments/1qochwc/glm_ocr_support_merged_in_transformers_github/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=108&crop=smart&auto=webp&s=dfbb0b458104e13a968f631fa3b5442a5ee0a6e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=216&crop=smart&auto=webp&s=73489717fb7c1a395ca9dc38d327acc31202e0e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=320&crop=smart&auto=webp&s=cca9fec24891c879bbe3f34d289e90461937850c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=640&crop=smart&auto=webp&s=6516d0dd8125be6933dd5889d21f2505fbc1fdd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=960&crop=smart&auto=webp&s=fd43bd64f04672c3ef3b4e3a7c75523e9f608d86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?width=1080&crop=smart&auto=webp&s=58dd6299fc3464f608f57cd4fe3ff6081acd9fd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hXuIHPHgqmecwOLhbqT1msTfBCBkBrhYVRX6INqoNqE.png?auto=webp&s=79046b05b387a93008b8cdff2b99f4f94942c269', 'width': 1200}, 'variants': {}}]} | |
Am I GPU poor? | 0 | So I saved up and eventually managed to put together a 5950X, 96 GB RAM, 2x 3090s, 3x 4 TB NVMe, and 20 TB of storage / backup images.
X570 Unify motherboard.
This seems like an insane machine to me, but I'm trying to run multiple AI models and I keep running out of memory. It seems like it's hardly entry level??
So yeah, the next step may be to add another 2x 3090s... I'm so broke already.
Building Real-Time Text Autocomplete for Support Agents as a Project, Need Help | 1 | I'm trying to build an autocomplete system where support agents get suggestions as they type responses to a customer's query, based on a RAG pipeline that has extracted the relevant chunk to address the customer's issue.
Currently I am experimenting with simple prompting of the Claude 3 Haiku model, something like this:
system_prompt = "You are an AI assistant helping a customer support agent write replies."
context = f"""Conversation so far:
{conversation_history}
Relevant knowledge:
{rag_text}"""
user_message = f"""The agent has started typing: "{agent_prefix}"
Task: Generate 3 possible ways to CONTINUE this text (not repeat it).
Rules:
- Only provide what comes AFTER "{agent_prefix}"
- Do NOT include the prefix in your response
- Stay consistent with knowledge provided
- Keep tone professional and concise
Return output as a JSON list of strings."""
While it works fine, the issue of course is the latency of calling Claude: it takes 2-4 seconds per call.
What are some ways I can achieve this sort of task?
Using some FIM model locally? If yes, any particular one? Or any other way?
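One common pattern for cutting that 2-4 s latency is to point the same prefix-continuation prompt at a small local model behind an OpenAI-compatible endpoint (llama.cpp's server, LM Studio, and vLLM all expose one). The sketch below is illustrative only: it assumes a server is already running on localhost:8080 with some small instruct model loaded, and the model name and port are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def suggest_continuations(conversation_history, rag_text, agent_prefix, n=3):
    """Same prefix-continuation task as the Haiku prompt above, against a local server."""
    prompt = (
        f"Conversation so far:\n{conversation_history}\n\n"
        f"Relevant knowledge:\n{rag_text}\n\n"
        f'The agent has started typing: "{agent_prefix}"\n'
        "Continue the text. Do not repeat the prefix. Reply with the continuation only."
    )
    suggestions = []
    for _ in range(n):  # sample n candidates; temperature keeps them distinct
        resp = client.chat.completions.create(
            model="local-model",  # whatever model the local server has loaded
            messages=[{"role": "user", "content": prompt}],
            max_tokens=40,
            temperature=0.8,
        )
        suggestions.append(resp.choices[0].message.content.strip())
    return suggestions
```

With a small (1-4B) instruct model this can bring per-suggestion latency down to a few hundred milliseconds on modest hardware, though the suggestion quality would need to be checked against the Haiku baseline.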
Built a local RAG SDK that's 2-5x faster than Pinecone - anyone want to test it? HAPPY to give Beer Money | 0 | Hey everyone,
I've been working on a local RAG SDK built on top of SYNRIX (a persistent knowledge graph engine). It's designed to be faster and more private than cloud alternatives like Pinecone.
What it does:
\- Local embeddings (sentence-transformers - no API keys needed)
\- Semantic search with 10-20ms latency (vs 50ms+ for cloud)
\- Works completely offline
\- Internalise Data
Why I'm posting:
I'm looking for experienced developers to test it and give honest feedback. It's free, no strings attached. I want to know:
\- Does it actually work as advertised?
\- Is the performance better than what you're using now?
\- What features are missing?
\- Would you actually use this?
What you get:
\- Full SDK package (one-click installer)
\- Local execution (no data leaves your machine)
\- Performance comparison guide (to test against Pinecone)
If you're interested, DM me and I'll send you the package. Or if you have questions, ask away!
Thanks for reading. | 2026-01-27T12:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qoc678/built_a_local_rag_sdk_thats_25x_faster_than/ | DetectiveMindless652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoc678 | false | null | t3_1qoc678 | /r/LocalLLaMA/comments/1qoc678/built_a_local_rag_sdk_thats_25x_faster_than/ | false | false | self | 0 | null |
Was benchmarking speedup of different accelerators compared to a normal Colab CPU | 0 | The benchmark was done by executing a series of matrix multiplications of the kind a typical deep network performs; a simplified sketch of the timing loop follows the config list below.
The configurations are:
# Extended configurations
configs = [
# (batch_size, hidden_dim, n_layers, n_iterations)
(16, 128, 2, 200), # Tiny
(32, 256, 4, 100), # Small
(64, 384, 6, 100), # Small-medium
(64, 512, 8, 100), # Medium
(128, 768, 10, 50), # Medium-large
(128, 1024, 12, 50), # GPT-2 small scale
(256, 1536, 12, 30), # Larger
(256, 2048, 12, 20), # GPT-2 medium scale
(512, 2560, 12, 15), # Large
(512, 4096, 12, 10), # Very large
(1024, 4096, 16, 5), # Extra large
]
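A simplified sketch of the per-configuration timing loop (not the exact script; it assumes PyTorch, and the warm-up/synchronization details here are illustrative):

import time
import torch

def bench(batch_size, hidden_dim, n_layers, n_iters, device):
    # Stack of dense layers (matmul + ReLU), the core op mix of an MLP/transformer block.
    weights = [torch.randn(hidden_dim, hidden_dim, device=device) for _ in range(n_layers)]
    # Warm-up pass so kernel compilation / lazy init is excluded from the timing.
    x = torch.randn(batch_size, hidden_dim, device=device)
    for w in weights:
        x = torch.relu(x @ w)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        h = torch.randn(batch_size, hidden_dim, device=device)
        for w in weights:
            h = torch.relu(h @ w)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

# Example: the "Medium" configuration on CPU vs. the default CUDA device (if present).
cpu_s = bench(64, 512, 8, 100, torch.device("cpu"))
if torch.cuda.is_available():
    gpu_s = bench(64, 512, 8, 100, torch.device("cuda"))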
https://preview.redd.it/iqyh3i4hqvfg1.png?width=1454&format=png&auto=webp&s=8ce7a556a638df7ffddc09b3b5db2f752694d261
https://preview.redd.it/4gtxuj4hqvfg1.png?width=1389&format=png&auto=webp&s=599dbacb946bc5619a67d873209417567f25acf2
| 2026-01-27T11:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qobmwj/was_benchmarking_speedup_of_different/ | EnvironmentalFix3414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qobmwj | false | null | t3_1qobmwj | /r/LocalLLaMA/comments/1qobmwj/was_benchmarking_speedup_of_different/ | false | false | 0 | null | |
Honest question: what do you all do for a living to afford these beasts? | 175 |
Basically, I am from India; a medium-to-high-end job here pays Rs. 1 lakh ($1,100) per month, and there are deductions on top of that.
An RTX Pro 6000 starts at 8 lakh and goes up to 10 lakh ($10,989), a 5090 costs 3.5 lakh ($3,800), a Threadripper costs 7-8 lakh ($8,800), RAM prices have soared (Corsair Vengeance is 52,000 ($571) for 32GB), and adding a motherboard, case, and other accessories makes it look like a dream to own in a lifetime. And people here are running multi-GPU setups; I recently saw a 4x RTX 6000 Pro setup here.
Been seeing a lot of beautiful multi-GPU setups here and I'm genuinely curious about the community makeup.
Are most of you:
Software engineers / AI researchers (expensing to employer or side business)?
Serious hobbyists with high-paying day jobs?
Consultants/freelancers writing off hardware?
Something else entirely?
| 2026-01-27T11:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qob8de/honest_question_what_do_you_all_do_for_a_living/ | ready_to_fuck_yeahh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qob8de | false | null | t3_1qob8de | /r/LocalLLaMA/comments/1qob8de/honest_question_what_do_you_all_do_for_a_living/ | false | false | self | 175 | null |
Building a Stable "Philosopher AI" on a CPU VPS: 10k Books vs. Performance Trade-offs? | 0 | Hi everyone,
I’m refining my plan to build a personal AI expert using a large library of books (Philosophy & Technical), managed via Clawdbot (or similar agent) on a Hetzner VPS.
**My Goal:**
I want the AI to "internalize" the knowledge. Instead of just citing sources like a search engine ("According to Plato..."), I want it to answer with the depth and style of the material, effectively acting as an expert.
**The Dilemma (Quality vs. Quantity):**
I have 10,000 e-books available. However, my priority is **stability and response quality** over raw volume. I am using a CPU-only VPS (likely 4 vCPU / 8-16GB RAM).
**My Questions for the Community:**
1. **The "Sweet Spot" for Dataset Size:**
On a standard VPS (e.g., 16GB RAM), is ingesting all 10k books (approx. 3-5M chunks) asking for trouble (latency/crashes)? Would you recommend curating down to the top 1k-2k "core" texts for a smoother experience?
2. **Architecture for "Internalization":**
To achieve that "expert persona" feel rather than "search bot" feel, should I add a **Re-ranking** step (like BGE-Reranker) after the vector search? Is running a re-ranker on CPU too slow for a chat interface?
3. **Storage Strategy:**
For a dataset of this size on a VPS, is **Qdrant with memory mapping (mmap)** the best approach to save RAM? Or does the disk I/O on shared VPS instances make this too slow? (A minimal config sketch of what I mean follows the question list below.)
4. **Embedding Model:**
Since I'm limited to CPU, I'm looking at `all-MiniLM-L6-v2`. Is there a better/newer lightweight model you'd recommend for non-English (or multi-lingual) heavy texts?
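For context on question 3, the setup I have in mind looks roughly like this. It's a minimal sketch against a local Qdrant server; the collection name and thresholds are placeholders, and the exact parameter names should be double-checked against the qdrant-client version you use:

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, OptimizersConfigDiff

client = QdrantClient(url="http://localhost:6333")  # local Qdrant server

client.create_collection(
    collection_name="library",               # placeholder name
    vectors_config=VectorParams(
        size=384,                             # all-MiniLM-L6-v2 embedding dimension
        distance=Distance.COSINE,
        on_disk=True,                         # keep raw vectors memory-mapped on disk
    ),
    optimizers_config=OptimizersConfigDiff(
        memmap_threshold=20000,               # segments above this size are moved to mmap storage
    ),
)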
I’m looking for a "stable and functional" roadmap, not just a theoretical one. Thanks!
| 2026-01-27T11:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qoave4/building_a_stable_philosopher_ai_on_a_cpu_vps_10k/ | Confirmed_Discussor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoave4 | false | null | t3_1qoave4 | /r/LocalLLaMA/comments/1qoave4/building_a_stable_philosopher_ai_on_a_cpu_vps_10k/ | false | false | self | 0 | null |
Constitutional AI - Open Source AI Governance for Local LLMs | 0 | I just open-sourced Constitutional AI - a local-first governance platform for Ollama models. Born from frustration with cloud dependencies and opaque AI safety tools.
Why open source this?
1. AI safety needs transparency
2. Local LLMs deserve proper guardrails
3. Community makes better safety tools
4. Basic governance should be free
Features:
• Web UI for Ollama with safety layers
• Constitutional AI principles built-in
• 15+ model support
• MIT Licensed - use anywhere
Coming soon (Premium):
• Resparse Trainer (advanced fine-tuning)
• Enterprise compliance features
GitHub: https://github.com/alchemyflownode/constitutional-ai
Demo: https://alchemyflownode.github.io/constitutional-ai/
Looking for contributors! | 2026-01-27T10:51:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qoan89/constitutional_ai_open_source_ai_governance_for/ | TrueSweet6703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoan89 | false | null | t3_1qoan89 | /r/LocalLLaMA/comments/1qoan89/constitutional_ai_open_source_ai_governance_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=108&crop=smart&auto=webp&s=44f657685f46d94ef9d15b798883c86e28b757b1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=216&crop=smart&auto=webp&s=70c0296b3c029759f9b65d45beb938357b19dd88', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=320&crop=smart&auto=webp&s=2dfb638ea32bbc53f0fe8c428b17e75eb8be3d6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=640&crop=smart&auto=webp&s=607f2085355d03351c11a8cce6d5df986db3a48a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=960&crop=smart&auto=webp&s=71487b2b47659fed4e57f2eb9e18f23f6519df13', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?width=1080&crop=smart&auto=webp&s=e1c4b4fd48b09dcc0bbd8b4e98dec11ffc8e1ef9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0lux7ApDix1ovuZpjyHtfKgXLOENZzrlYLHOSpskwkk.png?auto=webp&s=b7e01cffe34247541c666153a806e433fb781472', 'width': 1200}, 'variants': {}}]} |
My personal sovereign LLM use case | 0 | **Right now there is some buzz about Clawdbot, but I took a different route. I didn't want a chatbot; I wanted a Sovereign Personal OS.**
I built a fully visual "Personal AI OS" that runs locally on my headless Mac Mini. **The best part?** I didn't write a single line of code manually; I used **Google Gemini** to architect and write the entire codebase with Google Antigravity.
**My "Sovereign" Stack:**
**Hardware:** Mac Mini M4 (16GB) running headless 24/7.
**The Brain:** Local **Qwen 3 VL 8B** (abliterated/uncensored) running via MLX (after a couple of quick tests, not extensive ones, I ended up with huihui-qwen3-vl-8b-instruct-abliterated-mlx). Since it has **Vision**, it can "see" my screen and files. No data leaves my house. And most important: tool use! With a Python script I made it agentic.
**Connectivity:** Tailscale (Mesh VPN) lets me access my dashboard from my iPhone anywhere in the world, securely.
**What my OS actually does (The "Active" Modules):**
* **📓 The Digital Diary:** (My favorite) A background agent that takes screenshots, uses vision AI to analyze my screen time/productivity (files I've edited on the computer) and browser history (not the incognito one! hahaha), and auto-writes a "Reverse Journal" entry into my Apple Notes every night.
* **👻 Ghostwriter:** I record messy voice notes on my phone; the server transcribes them and rewrites them into structured essays or book chapters in my style.
* **🧠 Voice & Memory:** A "Second Brain" that indexes all my voice transcriptions. I can ask "What was that idea I had about X last week?" and it synthesizes the answer from my history.
* **✈️ Travel Command Center:** A powerhouse for nomads. It generates "Deep Links" for flights (Skyscanner/Google/Kayak), uses also the **Amadeus API,** it checks for visa requirements, and runs **Tavily** API to fetch real-time security risk reports for my destination (summarized by the Local LLM based on some instructions like risk level etc.).
* **📅 Chronos Calendar:** Not just a schedule, but a financial timeline. It tracks my travel budget, daily spend, and itinerary notes in a single master view.
* **👥 Personal CRM:** A "Brain Dump" tool where I paste raw LinkedIn bios or messy notes about people I meet. The AI extracts the details, tags them, and builds a searchable relationship database.
* **📡 Network Sentry:** Scans my local Wi-Fi ARP table to detect intruders or unknown devices instantly.
* **📂 Secure Dead Drop:** An encrypted P2P file transfer tunnel. I can drag a file on my iPhone and it instantly appears on my Mac server (and vice versa) without cloud limits.
* **🤖 Local Chat (God Mode):** An uncensored LLM interface that I can chat with and that controls my Mac via AppleScript: it can toggle system settings, check server health, draft iMessages, or even take screenshots on command (a minimal sketch of the control hook follows this list).
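For anyone curious, the Mac-control part boils down to handing the model's proposed snippet to osascript. This is a simplified sketch, not my exact agent loop, and the example command is just an illustration:

import subprocess

def run_applescript(script: str) -> str:
    # Execute an AppleScript snippet proposed by the model and return its output.
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip() or result.stderr.strip()

# Example: ask macOS for the current output volume.
print(run_applescript("output volume of (get volume settings)"))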
**The Experience:** On mobile, it installs as a PWA (Progressive Web App). It feels native—no browser bar, just a direct, encrypted tunnel to my Mac Mini's brain.
If you want privacy and ownership but don't know how to code: **Local LLM + Tailscale + AI-Assisted Coding(Antigravity)** is the cheat code.
The future of software isn't SaaS. It's Personal. 🚀
Can anyone give me more ideas for what else I can do pretty please???I'm so excited! :) | 2026-01-27T10:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qoa9mk/my_personal_sovereign_llm_use_case/ | No_Astronaut873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoa9mk | false | null | t3_1qoa9mk | /r/LocalLLaMA/comments/1qoa9mk/my_personal_sovereign_llm_use_case/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=108&crop=smart&auto=webp&s=ab824fc6ebd595954329fe12a6a001097763c3bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=216&crop=smart&auto=webp&s=325272ba83006c82ea32bfa292c606b94cbbc44f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=320&crop=smart&auto=webp&s=710eef2777ec2ae4f1958afcd4a8dac85e0af4ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=640&crop=smart&auto=webp&s=6b5ef0b542e2a8c776f711da56cbf3e0fcbba3e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=960&crop=smart&auto=webp&s=a9798c265fb90d9da6204434e54e9bbc30d7c23b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?width=1080&crop=smart&auto=webp&s=5e19277d1bfe37815c8182eaecc0597c0289be0a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/29_XkQ_GKqqSG00A8grfo-zVKFCdzk6rfkFZx14R7r4.png?auto=webp&s=3c125538b006ad9e0720cf8f21b872df68874de0', 'width': 1200}, 'variants': {}}]} |
GitHub introduces Copilot SDK (open source) – anyone can now build Copilot-style agents | 1 | GitHub just released the **Copilot SDK** in technical preview, and it’s actually pretty interesting.
It exposes the **same agent execution loop used by Copilot CLI** — planning, tool invocation, file editing, and command execution — but now you can embed it directly into **your own apps or tools**.
The SDK is **open source**, so anyone can inspect it, extend it, or build on top of it. Instead of writing your own agent framework (planning loop, tool runners, context management, error handling, etc.), you get a ready-made foundation that Copilot itself uses.
This feels like GitHub saying:
>
What I find interesting:
* It’s not just “chat with code” — it’s **action-oriented agents**
* Makes it easier to build **repo-aware** and **CLI-level** automation
* Lowers the bar for serious dev tools powered by AI
Curious what others would build with this:
* Custom DevOps agents?
* Repo migration / refactor tools?
* AI-powered internal CLIs?
* Something completely non-coding?
Repo: [https://github.com/github/copilot-sdk](https://github.com/github/copilot-sdk)
What would *you* build with it? | 2026-01-27T10:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qoa9h5/github_introduces_copilot_sdk_open_source_anyone/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qoa9h5 | false | null | t3_1qoa9h5 | /r/LocalLLaMA/comments/1qoa9h5/github_introduces_copilot_sdk_open_source_anyone/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=108&crop=smart&auto=webp&s=347c9b8905239e35a72e5c9790d2bebb257f9be2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=216&crop=smart&auto=webp&s=6ce6ea0f1abcbf686d11c28ede2e40f14dced600', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=320&crop=smart&auto=webp&s=e9d098916d80475a7e999074926d934ef916e353', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=640&crop=smart&auto=webp&s=ceddfe1168e3da26421eeafe5dbfdab95fe3703a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=960&crop=smart&auto=webp&s=278f64b51dd38db75fd2cb4420f2218232fc05ab', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?width=1080&crop=smart&auto=webp&s=d247c7e9df2af08678daf894ee4adf03f9cf241a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/AiKQ3oP4IdkYj8qOZUFG_W3IoNsAXWlm26AQgqWV2f0.jpeg?auto=webp&s=333d4d7c1ba52d027e7eb20ef9ae605e36300c4f', 'width': 1200}, 'variants': {}}]} |