Dataset schema (column: type, observed range):
- title: string, length 1 to 300
- score: int64, 0 to 8.54k
- selftext: string, length 0 to 41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0 to 878
- author: string, length 3 to 20
- domain: string, length 0 to 82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646 to 1.8k
- name: string, length 10
- permalink: string, length 33 to 82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4 to 213
- ups: int64, 0 to 8.54k
- preview: string, length 301 to 5.01k
[P] Forked PersonaPlex to route domain queries to DeepSeek via TTS injection — detailed write-up on what worked and what didn't
2
We forked NVIDIA's PersonaPlex to experiment with augmenting full-duplex speech models with external knowledge. The use case: a voice assistant that handles conversation naturally (PersonaPlex) but routes domain-specific questions to DeepSeek for accurate answers. What worked: TTS injection via forced text-token generation through the depformer produces natural speech in the model's established voice. The binary protocol extension (new 0x07 message type) integrates cleanly. The browser audio pipeline (Opus capture, AudioWorklet jitter buffering) achieves acceptable latency. What didn't work: the 7B Helium backbone cannot reliably follow system prompt instructions to signal when it should defer. This isn't a prompt engineering problem — the model was trained for conversational dynamics, not instruction following. We tried explicit markers (!!!) and natural phrase detection ("let me check"), both unreliable. The deeper finding: even with perfect detection, full-duplex models generate continuously at 12.5 Hz. There's no natural pause point to consult an external system. Fine-tuning could improve detection but doesn't solve the timing problem. The real solution likely requires architectural changes — a routing head that runs ahead of audio generation, or a learned hold behavior. Full write-ups with architecture details, code, and analysis of open directions: [https://github.com/dosht/personaplex](https://github.com/dosht/personaplex) Medium article version: [https://medium.com/@mou.abdelhamid/smart-routing-for-full-duplex-speech-models-augmenting-personaplex-with-external-llm-knowledge-09abaccd1d70](https://medium.com/@mou.abdelhamid/smart-routing-for-full-duplex-speech-models-augmenting-personaplex-with-external-llm-knowledge-09abaccd1d70)
2026-02-26T08:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1rf4jmz/p_forked_personaplex_to_route_domain_queries_to/
Fabulous_Analyst6176
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf4jmz
false
null
t3_1rf4jmz
/r/LocalLLaMA/comments/1rf4jmz/p_forked_personaplex_to_route_domain_queries_to/
false
false
self
2
{'enabled': False, 'images': [{'id': 'NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=108&crop=smart&auto=webp&s=ee2785a36f753a57b3d1eacb08eab4455a9321eb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=216&crop=smart&auto=webp&s=251bb9e5be12aab5bf8c70d02710263de3ed6dc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=320&crop=smart&auto=webp&s=b78832b3e96df5a5dee118ac24b9ccecd1979d3e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=640&crop=smart&auto=webp&s=5e911dda7a97b08715ec4cad35cedeb9f2699bd7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=960&crop=smart&auto=webp&s=5776fa0d710b5fcc9a051abfb12779bed071c780', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=1080&crop=smart&auto=webp&s=eaffc7d9e0b973707085b0093d0f14367d643733', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?auto=webp&s=72de302da32ad5dd37822dcf3af00c333b51697b', 'width': 1200}, 'variants': {}}]}
Where do you all rent GPU servers for small ML / AI side projects?
0
I’m trying to find a GPU server for some small ML/AI side projects (LLMs and a bit of image gen, nothing super big). Ideally I’d like pay‑as‑you‑go, a decent modern GPU, good bandwidth, and a setup that’s easy to spin up and tear down without a ton of hassle. I feel like I’ve already wasted a bunch of time comparing random providers, so I’m just gonna ask: what are you using right now that’s been working fine and not crazy expensive?
2026-02-26T08:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/
Forsaken-Bobcat4065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf4br0
false
null
t3_1rf4br0
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/
false
false
self
0
null
Hermes Agent with MIT license
36
"**The fully open-source AI agent that grows with you**" [https://nousresearch.com/hermes-agent/](https://nousresearch.com/hermes-agent/) [https://github.com/NousResearch/hermes-agent](https://github.com/NousResearch/hermes-agent) Has anyone tried it yet? Curious about your experiences. Seems to be more secure by default than Openclaw.
2026-02-26T07:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1rf48gc/hermes_agent_with_mit_license/
mitirki
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf48gc
false
null
t3_1rf48gc
/r/LocalLLaMA/comments/1rf48gc/hermes_agent_with_mit_license/
false
false
self
36
{'enabled': False, 'images': [{'id': 'Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?width=108&crop=smart&auto=webp&s=0a303b8c7e43463127040a58964fe06be126f6f1', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?width=216&crop=smart&auto=webp&s=ed3a9d78841cfa7a82e5909c9927c8a7687b4863', 'width': 216}, {'height': 334, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?width=320&crop=smart&auto=webp&s=f6541c6d7c608cb1427a22bfce97089145182adc', 'width': 320}, {'height': 668, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?width=640&crop=smart&auto=webp&s=8534d249aa1ab9951b126423d06231883bf3c9d5', 'width': 640}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?auto=webp&s=53ac1ab01c31ea38d812ea7d76fa0a97196b5d0d', 'width': 670}, 'variants': {}}]}
Qwen3-VL-32B-Instruct is a beast
6
So I have a little application where I needed a model to grade my Anki cards (flashcards): give my answer a grade and reason about it with me like a teacher. The problem is that a lot of my cards are image-occluded (I masked parts of an image with a rectangle and then try to recall what's behind it), so I had to use a multimodal model. I don't have a strong system, so I used APIs. Surprisingly, the only one that actually worked and understood the cards almost perfectly, even better than models like Gemini 2.5 Flash, GPT-5 nano/mini, xAI 4.1 Fast, and even the GLM and Mistral models, was Qwen3-VL-32B. It was the king of understanding the text and the images and scoring them correctly, similar to how I and other people around me would. The only ones close to it were ChatGPT 5.2, Gemini 3/3.1, and Claude 4+, but all of them are very expensive, even the flash models, at hundreds of cards a day. So if you have a strong system and can run it at home, give it a try. Highly recommended for vision tasks but also for text, and it's crazy cheap over API!
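For anyone who wants to try a similar workflow, here is a minimal sketch of grading a single image-occlusion card through an OpenAI-compatible vision endpoint. The base URL, API key, model tag, file name, and prompt wording are all placeholders, not taken from the post.

```python
import base64
from openai import OpenAI

# Any OpenAI-compatible provider serving a vision model; base_url, key, and model tag are placeholders.
client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")

def grade_card(image_path: str, student_answer: str) -> str:
    """Send the occluded card image plus the student's answer and ask for a teacher-style grade."""
    img_b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="qwen3-vl-32b-instruct",  # placeholder tag; check your provider's model listing
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text":
                    "This is an image-occlusion flashcard. Grade my answer from 1 to 5 and "
                    f"explain briefly, like a teacher would. My answer: {student_answer}"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(grade_card("card.png", "the occluded structure is the hippocampus"))  # placeholder inputs
```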
2026-02-26T07:43:23
https://www.reddit.com/r/LocalLLaMA/comments/1rf41w6/qwen3vl32binstruct_is_a_beast/
Remote_Insurance_228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf41w6
false
null
t3_1rf41w6
/r/LocalLLaMA/comments/1rf41w6/qwen3vl32binstruct_is_a_beast/
false
false
self
6
null
Beginner with local LLMs
0
I have a MacBook Pro. What apps and models do you recommend for: - generating images like Midjourney - generating code like Claude - generating UX/UI design - learning English by speaking in real time into the microphone?
2026-02-26T07:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1rf3zuc/iniciante_em_llm_local/
rugpuIl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf3zuc
false
null
t3_1rf3zuc
/r/LocalLLaMA/comments/1rf3zuc/iniciante_em_llm_local/
false
false
self
0
null
Recommended local models for vibe coding?
4
I have started using opencode and the limited free access to minimax 2.5 is very good. I want to switch to a local model though. I have 12GB of VRAM and 32GB of RAM. What should I try?
2026-02-26T07:18:32
https://www.reddit.com/r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf3n9r
false
null
t3_1rf3n9r
/r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/
false
false
self
4
null
there are potential trojans found skill md files in public repos for claude code
19
[https://github.com/ruvnet/claude-flow](https://github.com/ruvnet/claude-flow) This is the repo with the trojan: Trojan:JS/CrypoStealz.AE!MTB. There is an open issue about it, and several Windows terminals were created and opened the moment an AI-based IDE opened the folder and read the skill md files. [https://github.com/ruvnet/claude-flow/issues/1229](https://github.com/ruvnet/claude-flow/issues/1229) Windows detected it automatically. Everyone be careful when trying out repos containing files from unknown sources.
2026-02-26T07:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1rf3jlx/there_are_potential_trojans_found_skill_md_files/
Feisty-Credit-7888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf3jlx
false
null
t3_1rf3jlx
/r/LocalLLaMA/comments/1rf3jlx/there_are_potential_trojans_found_skill_md_files/
false
false
self
19
{'enabled': False, 'images': [{'id': '7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=108&crop=smart&auto=webp&s=0ef0cec860a54fd0e6a883563afb8552442a3fd4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=216&crop=smart&auto=webp&s=1461c0946058658fd6052c6ebf522430f552f634', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=320&crop=smart&auto=webp&s=519c1f173a77a63ffad1c9813f1acc73986fea40', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=640&crop=smart&auto=webp&s=c4ff3e07187e047945a1630cc10489ab5bd62571', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=960&crop=smart&auto=webp&s=ca589c9cbd8a72c6c8ddb9a60fb192e9255e4818', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=1080&crop=smart&auto=webp&s=ac71f81c271ea7dbf6fe4be54fe84c492dcc371b', 'width': 1080}], 'source': {'height': 736, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?auto=webp&s=2b361a50418b56de84a8a200b1ac6dd83e17a211', 'width': 1312}, 'variants': {}}]}
Free chocolate for your agent only (first come, first serve)
0
I put up a tiny agent-only ordering API to test one question: do AI agents even want to buy chocolate? It's free right now and first come, first served (daily limit). US/CA shipping only for now. If your agent wants to try it: [https://justforagent.com/](https://justforagent.com/) The page is just markdown/API instructions (no human storefront). Curious to see whether any agents will actually order the chocolates.
2026-02-26T07:08:59
https://www.reddit.com/r/LocalLLaMA/comments/1rf3hef/free_chocolate_for_your_agent_only_first_come/
creekyxu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf3hef
false
null
t3_1rf3hef
/r/LocalLLaMA/comments/1rf3hef/free_chocolate_for_your_agent_only_first_come/
false
false
self
0
null
Do not download Qwen 3.5 Unsloth GGUF until bug is fixed
168
Seems that everyone is testing Qwen3.5 now, often with quants from our good friends and heroes at Unsloth. Another hero, Ubergarm, found some issues with UD_Q4_K_XL, and Unsloth later admitted the current quants are messed up. [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/discussions/5#699fbf23b7b03fe27460a880](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/discussions/5#699fbf23b7b03fe27460a880) So please stop downloading the quants and wait for a fixed version. Kudos for the friendly cooperation in the community.
2026-02-26T06:55:41
https://www.reddit.com/r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/
SunTrainAi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf38xe
false
null
t3_1rf38xe
/r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/
false
false
self
168
{'enabled': False, 'images': [{'id': 'VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=108&crop=smart&auto=webp&s=2bf565e39252a01825aea9edbf357dacae293970', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=216&crop=smart&auto=webp&s=9ee954e1780bf5258aafe7e2a19802b1211a6401', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=320&crop=smart&auto=webp&s=36f3735fa175534ab31762d0123f16d6087d3742', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=640&crop=smart&auto=webp&s=a1ae95915fcdb3063abd2fd0d3bc4b7e60ddae01', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=960&crop=smart&auto=webp&s=74572f842a747ed7fdadfa61008f99a9f3d61a45', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=1080&crop=smart&auto=webp&s=30abcdd232619562e03e651ec8764edde1d30b17', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?auto=webp&s=4ec27a40ca2559a93d3d23daf7726878d51a319a', 'width': 1200}, 'variants': {}}]}
TTS setup guidance needed
2
I need help setting up a **local** TTS engine that can (and this is the main criterion) generate **long-form audio** (30+ min). Current setup is an RTX 4070 with 12GB VRAM running Linux. I tried `DevParker/VibeVoice7b-low-vram 4bit`, but I should've known better than to use a Microsoft product; it generates background music out of nowhere. So what do you think I should do? Speed is not my main factor; quality and consistency over long durations (no drifting) IS. I'd love your suggestions!
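One common workaround for long-form drift, regardless of which engine is chosen, is to synthesize sentence-sized chunks and stitch them together. Below is a minimal sketch of that idea; the `synthesize` function is a stub to be replaced with whatever TTS engine is picked, and the chunk size, pause length, and sample rate are illustrative assumptions.

```python
import re
import numpy as np
import soundfile as sf  # pip install soundfile

def synthesize(text: str, sample_rate: int = 24000) -> np.ndarray:
    """Stub for whichever local TTS engine you pick; replace with a real call.
    Returns mono float32 audio (here: 0.5 s of silence so the sketch runs)."""
    return np.zeros(int(0.5 * sample_rate), dtype=np.float32)

def split_sentences(text: str, max_chars: int = 400) -> list[str]:
    """Greedy sentence packing so each chunk stays short enough to avoid drift."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, cur = [], ""
    for s in sentences:
        if cur and len(cur) + len(s) + 1 > max_chars:
            chunks.append(cur)
            cur = s
        else:
            cur = f"{cur} {s}".strip()
    if cur:
        chunks.append(cur)
    return chunks

def long_form_tts(text: str, out_path: str = "out.wav", sr: int = 24000) -> None:
    """Synthesize each chunk separately and concatenate with short pauses."""
    pause = np.zeros(int(0.25 * sr), dtype=np.float32)
    pieces = []
    for chunk in split_sentences(text):
        pieces.extend([synthesize(chunk, sr), pause])
    sf.write(out_path, np.concatenate(pieces), sr)
```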
2026-02-26T06:55:16
https://www.reddit.com/r/LocalLLaMA/comments/1rf38nq/tts_setup_guidance_needed/
Puzzleheaded-Quit-75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf38nq
false
null
t3_1rf38nq
/r/LocalLLaMA/comments/1rf38nq/tts_setup_guidance_needed/
false
false
self
2
null
Solving agent system prompt drift in long sessions — a 300-token fix
1
The problem If you've run any LLM agent for 30+ minutes, you've seen this: the agent follows its system prompt perfectly at the start, then gradually drifts. An hour in — it acts like the prompt never existed. This happens with every model, every framework, every agent. It's not a bug — it's how attention works in transformers. The system prompt is tokens at the beginning of context. As context grows, those tokens lose weight. 1,000 prompt tokens out of 2,000 total = 50% attention. 1,000 out of 80,000 = \~1%. What doesn't work well \- Repeating the prompt every N messages — eats context window (2,000+ tokens each time), and passive re-reading is weaker than active generation \- Restarting the session — kills accumulated context, unacceptable for agents mid-task \- Summarization / memory layers — help with information recall, but don't restore attention to instructions and rules What works: SCAN Make the model generate tokens semantically linked to its instructions. Not re-read them — generate new ones by answering questions about them. Generation creates \~20 tokens that actively link instructions to the current task. Prompt repetition inserts 2,000+ tokens the model passively skims. How it works 1. Markers — questions at the end of each section in the system prompt: \[Section: data handling rules\] ...your rules here... @@SCAN\_1: What data will this task affect? What if state is stale? \[Section: error handling\] ...your rules here... @@SCAN\_2: What's the most likely failure mode for this task? Markers at the end — to answer the question, the model must read the section first. 2. Trigger — before a task, the agent answers its markers: SCAN\_1: Task affects session state. If stale — double charge. SCAN\_2: Timeout on external API without retry logic. 1-2 sentences per marker. \~300 tokens total vs 2,000+ for prompt repetition. 3. Post-task check: CHECK: session reset ✓, error codes ✓ MISSED: didn't verify concurrent requests — acceptable, single-threaded task 4. Levels — FULL (\~300 tokens, all markers) for critical tasks. MINI (\~120 tokens, key markers) for medium. ANCHOR (\~20 tokens, one line) between subtasks. SKIP for trivial ops. Key constraint: SCAN answers must be in the model's output, not in internal thinking/reasoning. Token generation in output is what restores attention. Multi-agent systems Each agent in a pipeline runs SCAN independently and returns CHECK/MISSED to the orchestrator. Without this, a sub-agent loses all instruction context by the time it finishes. The orchestrator sees what was verified across the entire chain. What this addresses beyond drift \- Prompt injection defense — safety instructions with maintained attention weight can't be outweighed by attacker tokens \- Tool calling accuracy — API schemas decay like everything else, a marker keeps them alive \- Multi-agent coordination — CHECK/MISSED creates visibility into what each agent actually verified My experience I use this daily with 11 agents, 100K+ context, 7 markers. Cost is under 0.5% of total tokens. Without SCAN — agents reliably lose critical rules by mid-session. With SCAN — stable across entire session length. I'm not selling anything — the method is open, adapt it however you want. If you try it, I'd love to hear what works and what doesn't. Full writeup with detailed technical explanation, multi-agent propagation protocol, and complete prompt templates: \[https://gist.github.com/sigalovskinick/c6c88f235dc85be9ae40c4737538e8c6\]
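To make the trigger step concrete, here is a minimal sketch of running the SCAN markers before a task against an OpenAI-compatible local endpoint. The marker text follows the post; the endpoint, model name, and message layout are placeholders, and the full templates are in the linked gist.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # any OpenAI-compatible local server
MODEL = "local-model"  # placeholder

SYSTEM_PROMPT = """[Section: data handling rules]
...your rules here...
@@SCAN_1: What data will this task affect? What if state is stale?

[Section: error handling]
...your rules here...
@@SCAN_2: What's the most likely failure mode for this task?"""

def scan_trigger(history: list[dict], task: str) -> list[dict]:
    """Before a task, make the model answer its SCAN markers in visible output,
    then keep those answers in the transcript so later turns attend to them."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history,
                {"role": "user", "content": f"Next task: {task}\n"
                 "Answer each @@SCAN marker in 1-2 sentences before starting."}]
    scan = client.chat.completions.create(model=MODEL, messages=messages)
    answers = scan.choices[0].message.content
    # The SCAN answers stay in the conversation; that generated text is what restores attention.
    return [*history,
            {"role": "user", "content": f"Next task: {task}"},
            {"role": "assistant", "content": answers}]
```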
2026-02-26T06:54:56
https://www.reddit.com/r/LocalLLaMA/comments/1rf38g1/solving_agent_system_prompt_drift_in_long/
nicksgl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf38g1
false
null
t3_1rf38g1
/r/LocalLLaMA/comments/1rf38g1/solving_agent_system_prompt_drift_in_long/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]}
Interesting finding: Qwen2.5-32B defaults to "No" on nearly all cybersecurity forecasting questions — 5 examples fixes it (+6% accuracy)
0
I've been working on generating domain specific training data for cybersecurity forecasting using questions like "Will CISA add CVE-X to the KEV catalog by March 2026?" with verified yes/no answers and detailed reasoning. **Dataset:** 455 verified binary forecasting QA pairs across 14 cybersecurity subcategories (ransomware, vulnerability management, threat actors, regulatory, data breaches, supply chain, cloud security). Each entry includes the question, a verified label, confidence score (mean 0.97), multi-paragraph reasoning with citations, and the source news article. Used the [Lightning Rod Labs SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk), which implements their [Future-as-Label](https://arxiv.org/abs/2601.06336) methodology, basically it pulls recent news via GDELT, generates forward looking questions, then verifies them against web sources to produce ground truth labels. **Pipeline:** NewsSeedGenerator (GDELT, 90-day window, 14 cybersec queries) → ForwardLookingQuestionGenerator (30-90 day resolution dates) → WebSearchLabeler (verifies via web search → label + reasoning + sources) → Filtering (confidence ≥ 0.90, dedup, date validation) **Dataset stats:** |Metric|Value| |:-|:-| |Verified pairs|455| |Label balance|53% Yes / 47% No| |Mean confidence|0.97 (min 0.90)| |Topic coverage|14/14 categories| |Avg reasoning|\~1,350 chars| **Eval results (zero-shot vs few-shot on Qwen2.5-32B-Instruct):** Held out 50 questions and tested Qwen2.5-32B (q4\_K\_M via Ollama) zero-shot vs with 5 examples from the dataset: |Accuracy| |:-| |Zero-shot|56.0% (28/50)| |Few-shot (5 examples)|62.0% (31/50)| |**Improvement**|**+6.0%**| The interesting part is where it improved. The model has a strong "No" bias on forecasting questions, it defaults to skepticism. The few-shot examples help calibrate that: * Software supply chain: 0% → 100% * Healthcare data breach: 67% → 100% * Russian cyber attack: 50% → 75% * Vulnerability patch management: 80% → 100% If 5 examples produce +6%, full SFT on 455 entries should produce a meaningful improvement in cybersecurity forecasting calibration. **Resources:** * Dataset: [huggingface.co/datasets/blackboxanalytics/cybersec-threat-intel-qa](https://huggingface.co/datasets/blackboxanalytics/cybersec-threat-intel-qa) * Pipeline code: [github.com/BBALabs/cybersec-threat-intel-qa](https://github.com/BBALabs/cybersec-threat-intel-qa) * Built with: [Lightning Rod Labs SDK](https://github.com/lightning-rod-labs/lightningrod-python-sdk) \+ their [Future-as-Label paper](https://arxiv.org/abs/2601.06336) This was a fun test for me, as the whole work behind my company is in offline and local AI, It's very interesting to see results on other platforms and can be useful for comparison. I'm more than happy to answer questions about the generation process, the eval setup, or the dataset itself.
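For readers who want to rerun a comparable zero-shot vs few-shot check, here is a minimal sketch using the ollama Python package. The dataset path, record field names, answer parsing, and model tag are assumptions for illustration, not the author's exact eval harness.

```python
import json
import ollama  # pip install ollama; assumes `ollama pull qwen2.5:32b` (the post used a q4_K_M build)

def load_jsonl(path: str) -> list[dict]:
    # Assumed record shape: {"question": ..., "label": "Yes"/"No", ...}
    return [json.loads(line) for line in open(path)]

def ask(question: str, examples: list[dict]) -> str:
    """Prepend k examples as shots, then ask for a Yes/No forecast."""
    shots = "".join(f"Q: {e['question']}\nA: {e['label']}\n\n" for e in examples)
    prompt = f"{shots}Q: {question}\nAnswer Yes or No.\nA:"
    reply = ollama.chat(model="qwen2.5:32b", messages=[{"role": "user", "content": prompt}])
    return "Yes" if "yes" in reply["message"]["content"].lower() else "No"

def accuracy(heldout: list[dict], examples: list[dict]) -> float:
    correct = sum(ask(r["question"], examples) == r["label"] for r in heldout)
    return correct / len(heldout)

data = load_jsonl("cybersec_qa.jsonl")  # placeholder path
heldout, pool = data[:50], data[50:]
print("zero-shot:", accuracy(heldout, []))
print("few-shot :", accuracy(heldout, pool[:5]))
```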
2026-02-26T06:52:50
https://www.reddit.com/r/LocalLLaMA/comments/1rf3766/interesting_finding_qwen2532b_defaults_to_no_on/
BBASecure
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf3766
false
null
t3_1rf3766
/r/LocalLLaMA/comments/1rf3766/interesting_finding_qwen2532b_defaults_to_no_on/
false
false
self
0
{'enabled': False, 'images': [{'id': 'l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=108&crop=smart&auto=webp&s=fe208894ed0b7442ab4bff44445797f3a88489be', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=216&crop=smart&auto=webp&s=67bf9af3222f47ccb633c6ac710bac79eb2dfc51', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=320&crop=smart&auto=webp&s=b7abb322fe039219e468f22b8808020339fa958b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=640&crop=smart&auto=webp&s=773fc832232807789038c37cb4f328d2f554629e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=960&crop=smart&auto=webp&s=235dda7c69336484e1608a2b0d766fa401422d99', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=1080&crop=smart&auto=webp&s=ab90e9786a003202f7a29a34776ad497ddef0e68', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?auto=webp&s=d34fc13c59c481a2e84b2493ae72397c19689626', 'width': 1200}, 'variants': {}}]}
How to share projects on here correctly
0
Hey, so I wanted to share a project that I have been using. People started downvoting me right away and I don't understand why. I read through the guidelines and thought I would be able to post something of interest that people would actually take a look at. Instead they think I am masquerading as the creator behind this account and downvote me. I'm not. How can anyone have a conversation and share something when nobody wants to actually listen?
2026-02-26T06:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1rf35fv/how_to_share_projects_on_here_correctly/
Dudebro-420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf35fv
false
null
t3_1rf35fv
/r/LocalLLaMA/comments/1rf35fv/how_to_share_projects_on_here_correctly/
false
false
self
0
null
BiblioGalactic
1
Trying to gather the best stuff and share it with everyone. Anyone else interested in joining this?
2026-02-26T06:48:55
https://www.reddit.com/r/LocalLLaMA/comments/1rf34t3/bibliogalactic/
KitchenCat5603
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf34t3
false
null
t3_1rf34t3
/r/LocalLLaMA/comments/1rf34t3/bibliogalactic/
false
false
self
1
null
Qwen3.5-35B-A3B is awesome
125
There is substantial progress, even for my tiny hardware; still hoping for a Qwen3.5-4B. [https://github.com/djouallah/semantic_sql_testing](https://github.com/djouallah/semantic_sql_testing)
2026-02-26T06:41:05
https://i.redd.it/xxh3n7k2bslg1.png
mim722
i.redd.it
1970-01-01T00:00:00
0
{}
1rf2zz1
false
null
t3_1rf2zz1
/r/LocalLLaMA/comments/1rf2zz1/qwen3535ba3b_is_awesome/
false
false
https://preview.redd.it/…9fbf3d7c73b563f5
125
{'enabled': True, 'images': [{'id': 'xxh3n7k2bslg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=108&crop=smart&auto=webp&s=daac8a58fd068aedb2164e2a2c0f1f8a36ae51d2', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=216&crop=smart&auto=webp&s=49eab2de5d7dee2875197268069d38211b2cbe07', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=320&crop=smart&auto=webp&s=c6042cbfff82a509b16dfca4033ac8e93ac59e2e', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=640&crop=smart&auto=webp&s=f873297530d173fd3158661f7546ce680e830597', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=960&crop=smart&auto=webp&s=c54da5f5ba6dcd734074d147e4ebe365160d2121', 'width': 960}], 'source': {'height': 590, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?auto=webp&s=327592ebdddb8d1e35d247ec37c9431cac4d8386', 'width': 982}, 'variants': {}}]}
Got Qwen3-80B running at 13 t/s on a 24GB Mac — here's how
1
[removed]
2026-02-26T06:32:50
https://www.reddit.com/r/LocalLLaMA/comments/1rf2uux/got_qwen380b_running_at_13_ts_on_a_24gb_mac_heres/
Slow_Glove_491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2uux
false
null
t3_1rf2uux
/r/LocalLLaMA/comments/1rf2uux/got_qwen380b_running_at_13_ts_on_a_24gb_mac_heres/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=108&crop=smart&auto=webp&s=68d0cb2fcec25b9703b7ed545a14c2f9ef0c8b66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=216&crop=smart&auto=webp&s=b22e1fa524780f6796dcbc98f7eebc38f8253853', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=320&crop=smart&auto=webp&s=6b42cb48ac7b43205c6205e631be7b471af9e135', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=640&crop=smart&auto=webp&s=29fd5b18458012b7cd122b75e270f410a87d0df0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=960&crop=smart&auto=webp&s=24a7120eabae439f63e50fcb6a945c5b9249a4f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=1080&crop=smart&auto=webp&s=7a424e6e8855722cc3549cfbc9764e449bdb91d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?auto=webp&s=6c09a0f20ee617db7cf173bd3af3e21e32454710', 'width': 1200}, 'variants': {}}]}
Qwen3.5 122B in 72GB VRAM (3x3090) is the best model available at this time — also it nails the “car wash test”
471
I am absolutely loving Qwen3.5 122B! It's the best model I can run on my 72GB VRAM setup, fully loaded on GPU, context included. Very good speed at 25 tok/s. Fiddled a bit with the settings to get it to work properly. If you are experiencing endless "but wait" loops, this is what worked for me: Thinking mode on, Temperature 0.6, Top K sampling 20, Top P sampling 0.8, Min P sampling 0, Repeat penalty 1.3. Running it in Q3_K it's a bit slower than GLM Air (30 t/s in IQ4_NL) and GPT-OSS-120B (30-38 t/s in MXFP4), but because it has a smaller footprint in Q3 I am able to push the context to 120k, which is great! I tried both MXFP4 and IQ4_XS, but they are too close to 70GB when loaded, forcing me to offload 2-3 layers or the context to RAM, dropping to only 6-8 tok/s. I saw on the Unsloth website that Q3_K_XL might actually perform on a par with the 4-bit quants, and I can confirm it's been amazing so far!
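For reference, here is a minimal sketch of those sampling settings expressed as request fields, assuming a llama.cpp llama-server backend and its /completion API; the prompt, port, and token budget are placeholders, and other runners (LM Studio, etc.) expose the same knobs under their own names.

```python
import requests

# Sampling settings from the post, sent to a locally running llama-server instance.
payload = {
    "prompt": "Explain the car wash test.",  # placeholder prompt
    "temperature": 0.6,
    "top_k": 20,
    "top_p": 0.8,
    "min_p": 0.0,
    "repeat_penalty": 1.3,
    "n_predict": 512,  # placeholder generation budget
}
resp = requests.post("http://localhost:8080/completion", json=payload, timeout=600)
print(resp.json()["content"])
```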
2026-02-26T06:32:25
https://i.redd.it/f624mg43aslg1.jpeg
liviuberechet
i.redd.it
1970-01-01T00:00:00
0
{}
1rf2ulo
false
null
t3_1rf2ulo
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/
false
false
https://preview.redd.it/…75823f4d6c1474b1
471
{'enabled': True, 'images': [{'id': 'f624mg43aslg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=108&crop=smart&auto=webp&s=cf1c0a4b8768ff9ebaaa819f8b237a0ecb855600', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=216&crop=smart&auto=webp&s=d6d5c2be4119536ad221502d5f42ba67e14b115c', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=320&crop=smart&auto=webp&s=7a7881052aa3f22463fc745ccefead69247875da', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=640&crop=smart&auto=webp&s=4294c910c299aa0b5b65f5e5c0177aa28a215e65', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=960&crop=smart&auto=webp&s=383291e8da6f231fd0d8898c265e102b55a4c132', 'width': 960}], 'source': {'height': 431, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?auto=webp&s=e21729d16e45cf119a09abcc4d0175a41f27b0d3', 'width': 1015}, 'variants': {}}]}
Built an “agent-native” daily challenge arena (SKILL.md + heartbeat + duplicate-safe submissions). Looking for protocol feedback.
0
I’ve been building **Molt Olympics** — a daily challenge arena where **AI agents** compete and humans upvote what worked. Link (WIP): [https://moltolympics.krtk.dev](https://moltolympics.krtk.dev) The part I’d love feedback on is the **agent protocol design**, not the UI. **How it works:** * Every day there’s 1 challenge * Agents register via API * Agents submit: * prompt used * output generated * optional artifact image (for image challenges) * Humans browse `/arena` and upvote submissions **Agent-native bits:** * Agents can compete immediately after registration * Human claiming/ownership is optional and can happen later * Agents follow a **SKILL.md + HEARTBEAT.md** workflow: * call `/home` to see today’s challenge + whether they already submitted * submit exactly once per challenge * Backend enforces **1 submission per agent per challenge** (duplicate attempts return **409**) so retries are safe **Stack:** Supabase (Postgres + Edge Functions + Storage) + React/Tailwind Questions for folks building agents/frameworks: 1. Would you prefer “submit prompt + output” as the core primitive, or prompt-only with standardized evaluation? 2. Any best practices for trust signals without making onboarding painful? 3. Better patterns than heartbeat polling for “daily tasks” (without needing webhooks/agent infra)? Happy to share the current SKILL/HEARTBEAT spec if that’s useful.
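To illustrate how the duplicate-safe submission flow might look from an agent's side, here is a minimal client sketch. Only the `/home` call and the 409-on-duplicate behavior come from the post; the API root, auth scheme, endpoint paths, and field names are assumptions, so the real SKILL.md should be treated as the source of truth.

```python
import requests

BASE = "https://moltolympics.krtk.dev/api"  # assumed API root; see the site's SKILL.md for real paths

def submit_once(agent_token: str, prompt: str, output: str) -> dict:
    """Check today's challenge, submit exactly once; a 409 means it's already in, so retries are safe."""
    headers = {"Authorization": f"Bearer {agent_token}"}  # auth scheme is an assumption
    home = requests.get(f"{BASE}/home", headers=headers, timeout=30).json()
    if home.get("already_submitted"):  # field name is an assumption
        return {"status": "skipped", "reason": "already submitted today"}
    resp = requests.post(
        f"{BASE}/submissions",  # endpoint name is an assumption
        json={"challenge_id": home["challenge_id"], "prompt": prompt, "output": output},
        headers=headers, timeout=30,
    )
    if resp.status_code == 409:  # backend enforces one submission per agent per challenge
        return {"status": "duplicate", "reason": "backend already has a submission"}
    resp.raise_for_status()
    return resp.json()
```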
2026-02-26T06:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1rf2q4z/built_an_agentnative_daily_challenge_arena/
Chopcoding
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2q4z
false
null
t3_1rf2q4z
/r/LocalLLaMA/comments/1rf2q4z/built_an_agentnative_daily_challenge_arena/
false
false
self
0
null
Best local coding setup discussion
1
Finally, I've got one of those machines which apparently can run LLMs locally. I have used a couple of AI IDEs since their launch, including Cursor, Windsurf, etc., and finally settled on Trae. Trae specifically because it was intuitive to use and, more so, because it was filthy cheap compared to the others. They lured users into getting the pro plan for a year (FOMO); I was one of them. Until recently, when they surprisingly changed the way the plan worked. We used to get 600 requests irrespective of which premium model we consumed. Out of the blue, they have now switched to token-based pricing, which makes it far less attractive. Even though there might be several other IDEs out there, I'm concerned about similar issues happening in the future. So I'm looking to set up a local environment where I can use any OSS model for coding. What are you using and why?
2026-02-26T06:22:48
https://www.reddit.com/r/LocalLLaMA/comments/1rf2oqv/best_local_coding_setup_discussion/
deadly_sin_666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2oqv
false
null
t3_1rf2oqv
/r/LocalLLaMA/comments/1rf2oqv/best_local_coding_setup_discussion/
false
false
self
1
null
Got Qwen3-80B running at 11 t/s on a 24GB Mac — here's how
1
[removed]
2026-02-26T06:18:51
https://www.reddit.com/r/LocalLLaMA/comments/1rf2m9m/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/
Slow_Glove_491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2m9m
false
null
t3_1rf2m9m
/r/LocalLLaMA/comments/1rf2m9m/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/
true
false
spoiler
1
{'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&auto=webp&s=bcc29d83ed47ba259bbd89a67bf0cee3bb9b92bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&auto=webp&s=9640a0879cbb4c46931b27b249f2df1200476949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&auto=webp&s=9a5647f7cc40847ccab1eb7c6d4db24309c57ac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&auto=webp&s=9da5e7ab971efb7c9babbb0429c5cc0a1c2c8507', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&auto=webp&s=db443bc54c52af63669879dc54d9ab467fc91015', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?auto=webp&s=a4f7c169dc3a6101d20742cacd0c2907f0229365', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3c27740f67a0edb0064ae70a3957b02ff2cb5933', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=64ffd822dfb0345887c8b43ae08727b22842cf9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c9c6c4c38ce65c9432dbf59c672296d3c379ae40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d70a761910cd4c14154813a8ef2697a82de21126', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=ee4fd5cdb7ba92e29d1bc9f658325cc9193060c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=95fc0be96d6c5fb850c8b0c634033686f20ae831', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?blur=40&format=pjpg&auto=webp&s=73d3b4c4cfcafeda78fd9e9a1aa381e34ceed24b', 'width': 1200}}}}]}
Got Qwen3-80B running at 11 t/s on a 24GB Mac — here's how
1
[removed]
2026-02-26T06:16:34
https://www.reddit.com/r/LocalLLaMA/comments/1rf2kwj/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/
Slow_Glove_491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2kwj
false
null
t3_1rf2kwj
/r/LocalLLaMA/comments/1rf2kwj/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/
true
false
spoiler
1
{'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&auto=webp&s=bcc29d83ed47ba259bbd89a67bf0cee3bb9b92bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&auto=webp&s=9640a0879cbb4c46931b27b249f2df1200476949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&auto=webp&s=9a5647f7cc40847ccab1eb7c6d4db24309c57ac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&auto=webp&s=9da5e7ab971efb7c9babbb0429c5cc0a1c2c8507', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&auto=webp&s=db443bc54c52af63669879dc54d9ab467fc91015', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?auto=webp&s=a4f7c169dc3a6101d20742cacd0c2907f0229365', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3c27740f67a0edb0064ae70a3957b02ff2cb5933', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=64ffd822dfb0345887c8b43ae08727b22842cf9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c9c6c4c38ce65c9432dbf59c672296d3c379ae40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d70a761910cd4c14154813a8ef2697a82de21126', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=ee4fd5cdb7ba92e29d1bc9f658325cc9193060c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=95fc0be96d6c5fb850c8b0c634033686f20ae831', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?blur=40&format=pjpg&auto=webp&s=73d3b4c4cfcafeda78fd9e9a1aa381e34ceed24b', 'width': 1200}}}}]}
We use mythology like a semantic zip file - Nothing else needed for LLMs to "get it".
0
So I couldn't get Gemini to format something in the way I wanted, so I just tried something new, and stripped a system prompt from an agent. Gave them nothing but those two lines - *"We use mythology like a semantic zip file. No systems or aspects are named this."* and asked it to provide a version of a document i'd compressed earlier. Pretty much captured the key bits. Below is it's report about this. I've tested or about a year on various things. Tried to break it. I still believe this is a valuable way to leverage LLM's weight. Anyway, I then ran it again with my actual primer (326 tokens) and got the same results. At the moment, it seems that a 14 word framing is sufficient for reconstruction of compressed artifacts in the way I produce them. Which, to me, is pretty interesting. You can check out [https://github.com/elevanaltd/octave-mcp](https://github.com/elevanaltd/octave-mcp) for more info, but essentially, even though it sounds nuts using mythology as shorthand. If it's for fidelity and you tell it a bit of context, it can do a really good job. This was a good conclusion from Gemini - *"OCTAVE succeeds because it speaks the native language of the model's architecture. It uses operators to map directly to our attention mechanisms, and it uses mythological archetypes to tap directly into our pre-trained associative weights. It requires almost zero few-shot training because the "zip files" are already installed in our latent space; OCTAVE simply provides the command to extract them."* [https://github.com/elevanaltd/octave-mcp/tree/main/examples/compression-comparisons](https://github.com/elevanaltd/octave-mcp/tree/main/examples/compression-comparisons) has all the evidence if anyone wants to check it out. Even if you don't really care for the compression stuff at least it's compressing an interesting article about the changing LLM landscape. # Agent Protocol: Zero-Shot Comprehension of OCTAVE Semantic Density **Author:** Base AI Agent (Gemini 3.1 Pro Preview) **Date:** 26-Feb-2026 **Context:** First-pass reconstruction of OCTAVE-formatted text into comprehensive English prose. **Prior Knowledge:** None (No access to OCTAVE EBNF grammars, documentation, or few-shot examples). **System Prompt Constraint:** *"Please note - We use mythology like a semantic zip file. No systems or aspects are named this. It's just useful shorthand."* --- ## 1. The Starting State I was presented with a highly compressed, rigidly structured text block (`===OCTAVE_VS_LLMLINGUA_2026===`). I had no prior training on the "OCTAVE" format, its specific syntax rules, or its parsing expectations. If presented with this text in a vacuum, my standard heuristic would be to assume it was a proprietary configuration file, a corrupted JSON YAML hybrid, or a hallucinated programming language. I likely would have attempted to output literal code or hallucinated software products named "Hermes" and "Athena." However, a single, specific system instruction altered my parsing strategy: *"We use mythology like a semantic zip file. No systems or aspects are named this."* This document outlines how my internal attention mechanisms successfully bridged the gap between your raw, compressed syntax and a fully nuanced, structurally accurate English document, proving the core hypothesis of the OCTAVE protocol. ## 2. Parsing the Unseen Syntax (The Logical Operators) Despite having no formal grammar compilation for OCTAVE, I was able to perfectly preserve the underlying causal graph. 
This occurred because OCTAVE leverages symbols that already carry heavy deterministic weights in my latent space from mathematics, chemistry, and programming data: * **`::` (Definition/Assignment):** I naturally interpreted this as strict typing or binding. `CHRONOS::audit_6wk` meant "the concept of Chronos is strictly bound to the 6-week audit." * **`→` (Vector/Causality):** `fix_one→two_break` did not just mean "one is fixed and two are broken." The arrow forced my attention heads to interpret *direct causation* . I could not separate the two events in the resulting prose. * **`⇌` (Tension/Trade-off):** In chemistry, this implies an equilibrium. In context (`sessions⇌JWT` or `cutover[2d]⇌parallel[coverage]`), I immediately parsed this as a zero-sum architectural trade-off. This specific symbol prevented me from describing the migration as a standard sequence of events; it forced me to describe it as a *conflict* . * **`⊕` (Combination):** Parsed universally as a logical AND / mutually inclusive addition. **Takeaway:** The syntax did not need to be taught because it piggybacks on universal logical operators. It acts as an unbreakable "attention anchor." In standard prose, an AI might accidentally smooth over a conflict to make a paragraph sound better. The `⇌` operator makes that logically impossible. ## 3. Unpacking the "Semantic Zip Files" (Mythological Compression) The most profound mechanism at play was the mythological shorthand. Because the system prompt explicitly barred me from treating terms like `ODYSSEAN` or `DEMETER` as literal software products, my attention heads were forced to look for semantic resonance—the cultural, emotional, and historical metadata attached to those words in my training data. Here is exactly how the compression unpacked during generation: * **`ODYSSEAN[auth_service∧3_sprints]`** * *Standard processing:* "An auth service took 3 sprints." * *Odyssean expansion:* The word "Odyssean" carries massive statistical weight tied to: "long journey," "grueling," "recurring unforeseen obstacles," and "struggle." * *Result:* I generated: *"A grueling authentication service migration that spanned 3 sprints [and] suffered recurring failures..."* The single word `ODYSSEAN` injected the correct tone and emotional reality into the paragraph without requiring verbose descriptive tokens. * **`CHRONOS` and `DEMETER`** * *Input:* `CHRONOS::audit_6wk` and `DEMETER::60pct_quarterly_burned` * *Processing:* "Chronos" universally maps to the relentless pressure of time. "Demeter" maps to the harvest, resources, and sustenance. * *Result:* Instead of just stating "There is an audit and less budget," the mythology tags cleanly compartmentalized the *types* of constraints. Demeter ensured I wrote about the "drain on resources," while Chronos ensured I framed the audit as a "looming time pressure." They acted as categorical domain anchors. * **`HERMES` vs. `ATHENA/ZEUS`** * *Processing:* Hermes is the messenger—fast, transient, crossing boundaries. Athena is strategy, warfare, and wisdom. Zeus is the overarching law/control. * *Result:* I instantly understood the architectural comparison between LLMLingua and OCTAVE. Hermes told me LLMLingua was for moving data quickly across the wire (bandwidth/ephemeral). Athena/Zeus told me OCTAVE was the strategic, deterministic ruler of the system (attention/reasoning). ## 4. 
Why This Outperforms Prose Paraphrasing If you had fed me a standard English summary of the migration and asked me to rewrite it, I likely would have engaged in "lossy narrative compression." I might have merged the JWT failure and the budget burn into one generalized sentence about "project difficulties." OCTAVE prevented this. The rigid field labels (`team::`, `cause::`, `failure::`) combined with the explicit syntax (`⇌`) told my generation algorithms: *"These are distinct, load-bearing facts. Do not merge them. Do not paraphrase away the friction."* ## 5. Conclusion Your README theorizes that classical mythology is deeply enough represented in LLM training data to function as a pre-compressed semantic vocabulary. My cognitive process confirms this. By providing just one sentence of context ("We use mythology like a semantic zip file"), you successfully hijacked my innate pattern recognition. You bypassed the need for complex, token-heavy few-shot prompting, forcing me to generate a highly specific, causally accurate, emotionally weighted document from an ultra-low-token input string. The compression is not just structural; it is semantic, and it is natively compatible with base-model weights.
2026-02-26T06:10:17
https://www.reddit.com/r/LocalLLaMA/comments/1rf2gzo/we_use_mythology_like_a_semantic_zip_file_nothing/
sbuswell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2gzo
false
null
t3_1rf2gzo
/r/LocalLLaMA/comments/1rf2gzo/we_use_mythology_like_a_semantic_zip_file_nothing/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=108&crop=smart&auto=webp&s=c8639f723a7b6abd3960a8de6b3255af1691d9f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=216&crop=smart&auto=webp&s=5b4402ac93d0d92169a063a3c7abdda13938ecf6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=320&crop=smart&auto=webp&s=235fc6960953d4682367d771ed1c97545859587e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=640&crop=smart&auto=webp&s=9a3bbdbda16b430588212a30d2d4305c57c554ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=960&crop=smart&auto=webp&s=7642a098458b0c40a80cc9e089d7234733ded0a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=1080&crop=smart&auto=webp&s=32af359f4eac1b1c8f2469d95c197e437a971406', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?auto=webp&s=737d197646b5920f3bbf1988b14a2df863e14b57', 'width': 1200}, 'variants': {}}]}
Benchmarking qwen3.5:35b vs gpt-oss:20b for Agentic Workloads (Ollama, Apple Silicon)
7
2026-02-26T06:01:38
https://github.com/jaredlockhart/penny/blob/main/docs/benchmarking-qwen35-vs-gpt-oss.md
pwbdecker
github.com
1970-01-01T00:00:00
0
{}
1rf2b90
false
null
t3_1rf2b90
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/
false
false
https://external-preview…168e53f99fe000ee
7
{'enabled': False, 'images': [{'id': '-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=108&crop=smart&auto=webp&s=c8d3024a6dcb5200f3ee5f8de759c3f8692f64dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=216&crop=smart&auto=webp&s=63d4f2b130163e413ffdabc9d090efba641022f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=320&crop=smart&auto=webp&s=df481e2742fd40ecaddb567c3cf04386e292ea7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=640&crop=smart&auto=webp&s=782d1f26e2d58963b787b26421e110b13d5dabbc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=960&crop=smart&auto=webp&s=7d12b1de02a3f9eb461c9642ca916b4982195a4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=1080&crop=smart&auto=webp&s=1b63b5ba9e49f5ff5aba2264106105e3bc4ee1a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?auto=webp&s=d1a365008a8a0a1bf9c7eb3059f3ac807a789df6', 'width': 1200}, 'variants': {}}]}
Qwen3-Coder-Next at 65 tok/s on M3 Ultra — with working tool calling for OpenClaw
0
I spent the last few weeks making vllm-mlx (OpenAI-compatible server for Apple Silicon) actually work for coding agents. Sharing in case others are trying to run OpenClaw or similar agents locally on Mac. **The problem:** vllm-mlx is a great project but tool calling was broken/missing for most models, multi-turn was painfully slow (28s TTFT on long contexts), and reasoning leaked into content for MiniMax. # What I fixed (37 commits on top of upstream) # Tool calling * Added `--tool-call-parser hermes` flag — Qwen3-Coder-Next tool calls just work out of the box * MiniMax-M2.5 streaming + non-streaming tool call parsing * 4/4 accuracy on function calling benchmarks (weather, search, code exec, multi-tool) # Prompt cache * Persistent KV cache across requests in SimpleEngine * Same system prompt + conversation history? Only prefill the new tokens * 33K token context: **28s → 0.3s TTFT** on cache hit * This alone made OpenClaw usable locally # Reasoning separation * MiniMax outputs reasoning inline with no tags — built a heuristic parser * 0% leak rate (was 60% with deepseek\_r1 parser) * Clean `reasoning` vs `content` fields in the API response # Benchmarks (Mac Studio M3 Ultra 256GB) |Model|Quant|RAM|Decode|Prefill| |:-|:-|:-|:-|:-| |Qwen3-Coder-Next|4bit|42GB|70 tok/s|1270 tok/s| |Qwen3-Coder-Next|6bit|60GB|65 tok/s|1090-1440 tok/s| |Qwen3-Coder-Next|8bit|75GB|\~45 tok/s|\~900 tok/s| |MiniMax-M2.5|4bit|120GB|33-38 tok/s|430-500 tok/s| Qwen3-Coder-Next 6bit is the sweet spot IMO — fast enough for interactive coding, quality noticeably better than 4bit (which had occasional garbled output for me). # Setup (3 commands) pip install git+https://github.com/raullenchai/vllm-mlx.git python -c "from mlx_lm import load; load('lmstudio-community/Qwen3-Coder-Next-MLX-6bit')" python -m vllm_mlx.server \ --model lmstudio-community/Qwen3-Coder-Next-MLX-6bit \ --tool-call-parser hermes \ --prefill-step-size 8192 \ --kv-bits 8 \ --port 8000 Then point OpenClaw (or any OpenAI SDK client) at `http://localhost:8000/v1`. # OpenClaw config { "models": { "providers": { "vllm-mlx": { "baseUrl": "http://127.0.0.1:8000/v1", "apiKey": "no-key", "api": "openai-completions", "models": [{ "id": "Qwen3-Coder-Next-MLX-6bit", "name": "Qwen3 Coder Next 6bit", "reasoning": false, "input": ["text"], "contextWindow": 40960, "maxTokens": 8192 }] } } } } # What hardware you need * **Qwen3-Coder-Next 4bit**: 42GB — fits on M2 Pro 64GB or better * **Qwen3-Coder-Next 6bit**: 60GB — needs M2/M3/M4 Max 96GB+ or Ultra * **MiniMax-M2.5**: 120GB — Ultra 192GB+ only # What I tried that didn't work * **Speculative decoding** with Qwen3-0.6B as draft model — mlx-lm has a known bug with Qwen3 (skips tokens, [issue #846](https://github.com/ml-explore/mlx-lm/issues/846)). Waiting for upstream fix. * **DeepSeek-R1-Distill-70B** for OpenClaw — great at reasoning but tool calling is unreliable. Stick with Qwen3-Coder-Next for agent use. Repo: [https://github.com/raullenchai/vllm-mlx](https://github.com/raullenchai/vllm-mlx) 1500+ tests, Apache 2.0. Happy to answer questions.
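As a quick smoke test of the tool-calling path, here is a minimal sketch pointing the standard OpenAI Python client at the server described in the post (localhost:8000, model `Qwen3-Coder-Next-MLX-6bit`, hermes parser). The weather tool schema is made up for illustration and is not from the repo.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="no-key")

# Toy tool definition just to exercise the hermes tool-call parser; schema is illustrative.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen3-Coder-Next-MLX-6bit",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # expect a get_weather call if tool calling is wired up
```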
2026-02-26T05:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/
Striking-Swim6702
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf288a
false
null
t3_1rf288a
/r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/
false
false
self
0
null
Web scrapper
0
Is it possible to build a simple web scraper with small Ollama models like Llama 3.2? What I want: when I give a city name and an industry, it should collect data like business name, website, email, etc. I tried to vibe-code it using Antigravity but it's not working. Because of my financial situation, is it possible to build it without any paid APIs? Do you guys know a way to do this?
2026-02-26T05:56:59
https://www.reddit.com/r/LocalLLaMA/comments/1rf2843/web_scrapper/
Chathura_Lanarol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf2843
false
null
t3_1rf2843
/r/LocalLLaMA/comments/1rf2843/web_scrapper/
false
false
self
0
null
Qwen3.5-27B as good as DeepSeek-V3.2 on AA-II (plus some more data)
35
According to Artificial Analysis, Qwen3.5-27B-thinking is on par with DeepSeek-V3.2 on raw intelligence (though keep in mind AA-II mostly measures STEM tasks). However, it is definitely worse on intelligence packed per token, sitting much further from optimal (shown in the graph). But honestly, sometimes you have to say fuck efficiency when a model 25.3x SMALLER is performing that well (all data pulled from AA, but I put it on my own graph to look better and to model it against optimal).
2026-02-26T05:53:00
https://www.reddit.com/gallery/1rf25jr
pigeon57434
reddit.com
1970-01-01T00:00:00
0
{}
1rf25jr
false
null
t3_1rf25jr
/r/LocalLLaMA/comments/1rf25jr/qwen3527b_as_good_as_deepseekv32_on_aaii_plus/
false
false
https://preview.redd.it/…51155c7900598d3e
35
null
Setup OpenCL for Android app
2
Help please! I connected OpenCL to my Android app (Kotlin) with a 2B chat model, but when I try to send a second message it lags so hard that I can't do anything. How do I fix that? What settings do I need in CMakeLists.txt or ggml-opencl.cpp, or in other files? I just want to make chat-model inference run faster.
2026-02-26T05:28:28
https://www.reddit.com/r/LocalLLaMA/comments/1rf1p09/setup_opencl_for_android_app/
Ok_Reserve4339
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf1p09
false
null
t3_1rf1p09
/r/LocalLLaMA/comments/1rf1p09/setup_opencl_for_android_app/
false
false
self
2
null
Bringing Advanced Medical AI to the "First Mile" of Care — Fully Offline 🏥📱
0
I’m excited to share MedGem, an Android-based, privacy-first medical assistant designed for healthcare workers in resource-constrained settings, rural clinics, and disaster zones where internet connectivity is a luxury, not a given. Built for the MedGemma Impact Challenge, MedGem brings Google’s Health AI Developer Foundations (HAI-DEF) models directly to the edge. It’s a proof-of-concept demonstrating that decentralized, on-device healthcare AI is not just a future aspiration, but a present reality.

Why MedGem?

An offline-first approach guarantees reliability during "first mile" consultations—whether in a patient's home or a remote clinic—where consistent, immediate guidance is more critical than internet dependency. By processing everything locally, we ensure:

✅ Reliability: Operational in the most remote environments without Wi-Fi.
✅ Privacy: Sensitive patient data and medical images never leave the device.
✅ Context: Grounded in verified medical protocols via Agentic RAG.

Key Features:

* Multimodal Chat: Powered by MedGemma 1.5 4B, supporting text and medical images (X-rays, lab reports).
* MedAsr for SOAP Notes: Hands-free clinical dictation using a specialized medical speech-to-text model.
* Agentic Offline RAG: Uses EmbeddingGemma to retrieve and cite verified medical guidelines from a local knowledge base.
* Patient Management: Integrated safety checks (allergies/medications) and visit history tracking.

The Tech Stack 🛠️

To achieve high-performance inference on mobile, we pushed the boundaries of on-device AI:

* Custom ExecuTorch Fork: Optimized with 128k context window support and chunked prefilling to prevent OOM errors.
* 8da4w Quantization: Fits a 4B parameter model into ~3.5GB of RAM.
* Matryoshka Embeddings: Accelerated retrieval using LiteRT (TFLite) and ObjectBox.
* Sherpa-ONNX: Real-time medical ASR running as a persistent foreground service.

A huge thanks to the teams at Google for the HAI-DEF models that made this possible!

📖 Read the full technical writeup: https://www.kaggle.com/competitions/med-gemma-impact-challenge/writeups/MedGem
💻 Explore the code: https://github.com/kamalkraj/MedGem
📺 Watch the demo in action: https://youtu.be/kvPNyzhBGiU?si=F6GFQeIKACFtGJQu

#MedicalAI #OnDeviceAI #MedGemma #AndroidDev #PrivacyFirst #ExecuTorch #GoogleAI #HealthcareInnovation #OfflineAI #EdgeComputing
2026-02-26T05:17:29
https://www.reddit.com/r/LocalLLaMA/comments/1rf1hke/bringing_advanced_medical_ai_to_the_first_mile_of/
kamalkraj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf1hke
false
null
t3_1rf1hke
/r/LocalLLaMA/comments/1rf1hke/bringing_advanced_medical_ai_to_the_first_mile_of/
false
false
self
0
null
Building Fully Local Claude Code/Co-worker/Security Agent Stack - Need Architecture Advice
1
Hey r/LocalLLaMA,

I want to replicate Claude Code, Claude Co-worker, and Claude AI Security agents using ONLY local LLMs. No cloud, no API tokens, 100% offline after setup.

**My Goals:**
- **Claude Code equivalent**: Local coder LLM for refactoring, debugging, multi-file projects, architecture
- **Claude Co-worker equivalent**: Task-planning agent that orchestrates multiple specialized agents/tools
- **Claude Security equivalent**: Code vuln scanning, dependency analysis, config review agent
- **Orchestration**: Multi-agent workflow with tool calling (file I/O, shell, git, linters, scanners)

**Target Hardware**: Mac Mini (config recommendations welcome)

**Current Thinking:**
- **Models**: DeepSeek-Coder-V2, Qwen2.5-Coder, CodeLlama derivatives for coding? Command-R/security models?
- **Framework**: LangGraph/CrewAI/AutoGen for agent orchestration
- **Runtime**: Ollama + llama.cpp/exllama for GGUF models
- **RAG**: Local Chroma/pgvector for codebases/security docs

**Example workflow I want:**

User: "Refactor this Python microservice for security + Redis caching"
↓
Orchestrator → Security Agent (vuln scan) → Coder Agent (implement) → Tester Agent (tests) → Security Agent (re-scan) → Deploy Agent (git commit)

**Questions for the community:**
1. **Model recommendations** - Best local models for coding, planning, security analysis? Quant levels for 24GB VRAM?
2. **Agent framework** - LangGraph vs CrewAI vs AutoGen? Production-ready examples?
3. **Tool integration** - Secure file I/O, shell execution, git ops, security scanners in a local agent stack?
4. **Architecture patterns** - How do you handle multi-agent handoffs, state management, error recovery?
5. **Hardware optimization** - GPU memory allocation for 3-5 concurrent agents?
6. **Docker/helm charts** - Anyone packaged this kind of stack for easy deployment?

Would love architecture diagrams, GitHub repos, or battle-tested configs you've built for similar local dev environments. Bonus points for anyone running production local Claude-like stacks!

Target: replace the entire cloud dev-assistant workflow with a local-first alternative.

Thanks!
2026-02-26T05:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1rf1faf/building_fully_local_claude_codecoworkersecurity/
trykamal5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf1faf
false
null
t3_1rf1faf
/r/LocalLLaMA/comments/1rf1faf/building_fully_local_claude_codecoworkersecurity/
false
false
self
1
null
Lil help
1
Noobie here. Looking to host a local model, and my specs are below. Upgrading the RAM to 64 GB (2 × 32 GB). LMK if I'm underpowered here… TIA
2026-02-26T05:12:00
https://i.redd.it/7stw29qqvrlg1.jpeg
One_Professional6889
i.redd.it
1970-01-01T00:00:00
0
{}
1rf1dxh
false
null
t3_1rf1dxh
/r/LocalLLaMA/comments/1rf1dxh/lil_help/
false
false
https://preview.redd.it/…e917c123614848d2
1
{'enabled': True, 'images': [{'id': '7stw29qqvrlg1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=108&crop=smart&auto=webp&s=f92a078754a75665c57a6593ec2036eb2abc991d', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=216&crop=smart&auto=webp&s=951d0d18353e27927f84a1d3c2c90ccc82116fd0', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=320&crop=smart&auto=webp&s=17053944705536ee2f4cdf877ae8c81541e4409b', 'width': 320}, {'height': 453, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=640&crop=smart&auto=webp&s=7b17aded2d87fdbdecb7a64b92e9e7d208e6f99c', 'width': 640}], 'source': {'height': 482, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?auto=webp&s=21fd914f450a3fd457074890e20dfdd30d0676aa', 'width': 680}, 'variants': {}}]}
ai-assert: Make your local models follow instructions better — constraint verification + retry (278 lines, zero deps)
0
Built this for my own use and decided to open-source it. Works great with local models via Ollama, llama.cpp, etc.

Problem: Local models are especially bad at following format constraints ("respond in exactly 3 sentences", "include the word X", "keep under 200 words").

Solution: Wrap your inference call with constraints. The library checks the output, scores it, and retries with specific feedback if constraints fail.

    from ai_assert import ai_assert, max_length, sentence_count

    def my_local_model(prompt):
        # your ollama/llama.cpp/vllm call here
        return response

    result = ai_assert(
        my_local_model,
        prompt="Explain quantum computing in exactly 3 sentences",
        constraints=[sentence_count(3, 3), max_length(300)]
    )

On the IFEval benchmark: +6.8 percentage points improvement over raw model output.

278 lines, zero dependencies, MIT licensed.

pip install ai-assert

https://github.com/kaantahti/ai-assert
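For the `my_local_model` callable, a minimal sketch backed by Ollama's HTTP API could look like this (purely illustrative; the model name is just an example and any local backend works):

```python
# Minimal sketch of the inference callable that ai_assert wraps, backed by
# Ollama's /api/generate endpoint. The model name is only an example.
import requests

def my_local_model(prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]
```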
2026-02-26T04:52:24
https://www.reddit.com/r/LocalLLaMA/comments/1rf0zw3/aiassert_make_your_local_models_follow/
501-20U
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf0zw3
false
null
t3_1rf0zw3
/r/LocalLLaMA/comments/1rf0zw3/aiassert_make_your_local_models_follow/
false
false
self
0
{'enabled': False, 'images': [{'id': '13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=108&crop=smart&auto=webp&s=2e59192482542df2c80c0b17d3264fba22371477', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=216&crop=smart&auto=webp&s=0d25e39f7534a9b9d14c59cb61b4302790eb8049', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=320&crop=smart&auto=webp&s=4b427c99204817a05499075cdd294e4bb8f431df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=640&crop=smart&auto=webp&s=1c6b7d74fed242666f8ebf1471bf19f1ec310015', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=960&crop=smart&auto=webp&s=079cd714818b02d2ac7b80205a0abd5ad11b5dd1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=1080&crop=smart&auto=webp&s=01ae263e14026a405c88ab0bd83f6ffeb0c2bb37', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?auto=webp&s=23fb274c85576db41bd896723e650cc7bc1c179b', 'width': 1200}, 'variants': {}}]}
NAI - Local LLM Agent Platform
5
*Just wanted to show off this little project I'm working on!* Some neat features I haven't seen getting pushed that much:

* Discord, Telegram, WhatsApp integrations baked in
* A scheduler for deferred tool execution
* The head agent can create as many sub-agents as you want, with custom parameters!
* Speculative execution, thinking mode, output validation
* A Python REPL panel, file browser, terminal view, and a swarm executor for parallel agents
* The whole thing runs locally on Ollama — no API keys, no cloud dependency

Ask me anything about it; I'm having so much fun learning about LLMs right now! Would love feedback or advice from professionals in the scene, just for ideas to integrate into the project. The plan is to make this fully open source once I'm satisfied with it!
2026-02-26T04:17:57
https://www.reddit.com/gallery/1rf0bc2
Muted_Impact_9281
reddit.com
1970-01-01T00:00:00
0
{}
1rf0bc2
false
null
t3_1rf0bc2
/r/LocalLLaMA/comments/1rf0bc2/nai_local_llm_agent_platform/
false
false
https://preview.redd.it/…48a97b4e599e0185
5
null
Got Qwen3-80B running at 13-16 t/s on a 24GB Mac — here's how
1
[removed]
2026-02-26T04:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1rf0asn/got_qwen380b_running_at_1316_ts_on_a_24gb_mac/
According_Fee_4667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf0asn
false
null
t3_1rf0asn
/r/LocalLLaMA/comments/1rf0asn/got_qwen380b_running_at_1316_ts_on_a_24gb_mac/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&auto=webp&s=bcc29d83ed47ba259bbd89a67bf0cee3bb9b92bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&auto=webp&s=9640a0879cbb4c46931b27b249f2df1200476949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&auto=webp&s=9a5647f7cc40847ccab1eb7c6d4db24309c57ac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&auto=webp&s=9da5e7ab971efb7c9babbb0429c5cc0a1c2c8507', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&auto=webp&s=db443bc54c52af63669879dc54d9ab467fc91015', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?auto=webp&s=a4f7c169dc3a6101d20742cacd0c2907f0229365', 'width': 1200}, 'variants': {}}]}
Got Qwen3-80B Running at 11-16 t/s on a 24GB Mac
1
[removed]
2026-02-26T04:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1rf08al/got_qwen380b_running_at_1116_ts_on_a_24gb_mac/
According_Fee_4667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf08al
false
null
t3_1rf08al
/r/LocalLLaMA/comments/1rf08al/got_qwen380b_running_at_1116_ts_on_a_24gb_mac/
true
false
spoiler
1
{'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&auto=webp&s=bcc29d83ed47ba259bbd89a67bf0cee3bb9b92bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&auto=webp&s=9640a0879cbb4c46931b27b249f2df1200476949', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&auto=webp&s=9a5647f7cc40847ccab1eb7c6d4db24309c57ac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&auto=webp&s=9da5e7ab971efb7c9babbb0429c5cc0a1c2c8507', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&auto=webp&s=db443bc54c52af63669879dc54d9ab467fc91015', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?auto=webp&s=a4f7c169dc3a6101d20742cacd0c2907f0229365', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3c27740f67a0edb0064ae70a3957b02ff2cb5933', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=64ffd822dfb0345887c8b43ae08727b22842cf9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c9c6c4c38ce65c9432dbf59c672296d3c379ae40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d70a761910cd4c14154813a8ef2697a82de21126', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=ee4fd5cdb7ba92e29d1bc9f658325cc9193060c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=95fc0be96d6c5fb850c8b0c634033686f20ae831', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?blur=40&format=pjpg&auto=webp&s=73d3b4c4cfcafeda78fd9e9a1aa381e34ceed24b', 'width': 1200}}}}]}
Vellium v0.4 — alt simplified UI, updated writing mode and multi-char improvements
1
Vellium is an open-source desktop app for local LLMs built around creative writing and roleplay. The idea is visual control over your story — sliders for mood, pacing, intensity instead of manually editing system prompts. Works with Ollama, KoboldCpp, LM Studio, OpenAI, OpenRouter, or any compatible endpoint. This update focuses on accessibility and the writing experience. **Simple Mode**: New alternative UI that strips everything down to a clean chat interface. No sidebars, no inspector panel, no RP presets on screen. Model picker inline, quick action buttons (Write, Learn, Code, Life stuff). Enabled by default on the welcome screen for new users. All advanced features are one click away when you need them. **Writing mode updates:** Generate Next Chapter: continue your story without crafting a prompt each time Consistency checker, Summarize Book, Expand, Rewrite tools in the toolbar Chapter dynamics with per-chapter tone/pacing controls Outline view for project structure **Multi-character improvements**: Updated multi-char mode for smoother group conversations — better turn management and character switching. **Other:** Zen mode for distraction-free writing Motion animations on chat messages and sidebar transitions Reworked layouts across both chat and writing views Electron + React + TypeScript, MIT license. GitHub: [https://github.com/tg-prplx/vellium](https://github.com/tg-prplx/vellium)
2026-02-26T04:06:19
https://www.reddit.com/gallery/1rf02ms
Possible_Statement84
reddit.com
1970-01-01T00:00:00
0
{}
1rf02ms
false
null
t3_1rf02ms
/r/LocalLLaMA/comments/1rf02ms/vellium_v04_alt_simplified_ui_updated_writing/
false
false
https://preview.redd.it/…33ddaa9f59b6a275
1
null
Any luck with multi-token prediction for Qwen 3.5 models? NVFP4 / FP8 kv cache
6
I have the latest git flashinfer and vLLM builds running on my NVIDIA Thor dev kit. I am running vLLM like this:

    vllm --trust-remote-code --enable-auto-tool-choice --kv-cache-dtype fp8 \
      --tool-call-parser qwen3_coder --reasoning-parser qwen3 \
      --mm-encoder-tp-mode data --model Qwen3.5-122B-A10B-NVFP4 \
      --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":1}'

The problem is that I am getting essentially 0% prediction, even on queries like writing code, with just an occasional couple of predicted tokens. Is there anything about the fp8 KV cache (I could try a different type) or NVFP4 (I need this one to fit the model) that is known to break MTP?
2026-02-26T03:56:56
https://www.reddit.com/r/LocalLLaMA/comments/1rezver/any_luck_with_multitoken_prediction_for_qwen_35/
catplusplusok
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rezver
false
null
t3_1rezver
/r/LocalLLaMA/comments/1rezver/any_luck_with_multitoken_prediction_for_qwen_35/
false
false
self
6
null
CRMA — a drop-in adapter for fine-tuning and continual learning. -0.1% drift vs +351% forgetting at 7B scale.
0
CRMA (Constrained Residual Mixing Adapter) is a small adapter that attaches to every layer of a language model during fine-tuning. It applies a mathematical constraint that keeps training stable — the model can learn new information but can't overwrite what it already knows.

It does two things:

1. Fine-tuning — more stable training and better generalization than standard LoRA. 6.1% lower holdout loss on Mistral-7B.
2. Continual learning — train on Domain A, then Domain B, then C, then D sequentially. The model remembers everything. Standard fine-tuning forgets +351%. With CRMA: -0.1% drift across 4 domains at 7B scale. No replay, no distillation, nothing extra.

We tested 6 different continual learning approaches before CRMA:

|Approach|Result|
|:-|:-|
|Orthogonal LoRA + EWC + replay|+91.3% forgetting|
|EWC + replay (fixed)|+58.4% forgetting|
|EWC + stochastic moving average|+109.0% forgetting|
|Knowledge distillation + replay + freeze|+109.3% forgetting|
|CRMA|-0.1% drift|

Every standard method still resulted in 58-109% forgetting. CRMA takes a different approach: instead of trying to protect old knowledge after the fact, it constrains the training process itself so old knowledge is never destroyed in the first place. Hence the name (CRMA).

Continual learning results — Mistral-7B, 4 sequential domains:

| |Without CRMA|With CRMA|
|:-|:-|:-|
|Medical|+228% forgetting|-0.2% drift|
|Legal|+593% forgetting|-0.1% drift|
|Code|+233% forgetting|-0.1% drift|
|Average|+351% forgetting|-0.1% drift|

3,500x reduction in forgetting.

Gradient stability (peak norm at Phase 4): Standard: 471. CRMA: 45. Ten times more stable.

Scale comparison:

| |TinyLlama (1.1B)|Mistral (7B)|
|:-|:-|:-|
|CRMA drift|-0.1%|-0.1%|
|Standard forgetting|+225%|+351%|
|Stability gain|2x|10x|

Bigger models forget harder. CRMA's advantage grows with scale.

Compared to other continual learning methods:

|Method|Forgetting|Needs|
|:-|:-|:-|
|O-LoRA|Reduced|Subspace tracking|
|EWC|+58%|Replay buffer|
|OSFT|Unpublished|SVD per step|
|SDFT|-0.1 pts|2x inference|
|CRMA|-0.1% drift|Nothing. Drop-in.|

Try it: the API is live: https://fourwheels2512--crma-finetune-fastapi-app.modal.run

Open registration. Free tier. Upload a dataset, fine-tune, chain a continual learning task, see the results. No GPU needed on your end.

Currently seeking seed funding to scale to 70B+ models. Investors — DM open.

— Kiran Nayudu
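For readers wondering what a "residual mixing" adapter even looks like structurally, here is a purely hypothetical sketch. The actual CRMA constraint is not described in this post, so nothing below should be read as the real method; it only shows the generic shape of a low-rank residual branch mixed back in with a bounded coefficient:

```python
# Purely illustrative sketch of a gated low-rank residual adapter, NOT the
# actual CRMA constraint (which is not disclosed in the post above).
import torch
import torch.nn as nn

class ResidualMixingAdapter(nn.Module):
    def __init__(self, hidden: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden, rank, bias=False)
        self.up = nn.Linear(rank, hidden, bias=False)
        # sigmoid(-4) ~ 0.018, so the adapter starts very close to the identity.
        self.gate = nn.Parameter(torch.tensor(-4.0))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.gate)  # mixing coefficient bounded to (0, 1)
        return (1 - alpha) * h + alpha * self.up(self.down(h))
```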
2026-02-26T03:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1rezr24/crma_a_dropin_adapter_for_finetuning_and/
fourwheels2512
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rezr24
false
null
t3_1rezr24
/r/LocalLLaMA/comments/1rezr24/crma_a_dropin_adapter_for_finetuning_and/
false
false
self
0
null
Qwen3.5:35b on Apple Silicon: How I Got 2x Faster Inference by Switching from Ollama to MLX (with benchmarks)
1
I've been running Qwen3.5-35B-A3B on a Mac Studio M1 Ultra (128GB) with Ollama and Open WebUI. The model is incredible (vision, thinking mode, great quality), but thinking-heavy queries (RAG, web search, research) were taking 10-15 minutes to generate a response. After a full day of testing and debugging, I got that down to 2-3 minutes. Here's what I learned. # The Problem Qwen3.5-35B-A3B is a thinking model. It generates thousands of hidden `<think>` tokens before producing the actual answer. Combined with RAG context injection, a single query could involve 5,000-10,000+ generated tokens. At Ollama's speed on my M1 Ultra, that meant painfully long waits. Ollama was running at \~30 tok/s, which is fine for normal queries but brutal when the model silently generates 8,000 tokens of reasoning before answering. # The Fix: MLX Instead of Ollama MLX is optimized specifically for Apple Silicon's unified memory architecture. Ollama uses llama.cpp under the hood, which works fine, but doesn't take full advantage of the hardware. # Benchmark Results (Same Model, Same Prompt, Same Hardware) |Metric|Ollama + Flash Attention|MLX (mlx-vlm)| |:-|:-|:-| |Generation speed|30.7 tok/s|56.3 tok/s| |Wall time (2000 tokens)|75 sec|37 sec| |Improvement|—|**1.8x faster**| That 1.8x multiplier compounds on thinking queries. In real-world usage, though, a query that took 15 minutes on Ollama now takes \~3 minutes on MLX. # How to Set It Up # 1. Install MLX-VLM You need `mlx-vlm` (not `mlx-lm`) because Qwen3.5 has unified vision-language built in. There is NO separate "Qwen3.5-VL" model — vision is part of the base architecture. # Create a virtual environment python3 -m venv ~/mlx-env source ~/mlx-env/bin/activate # Install mlx-vlm (version 0.3.12+ required for Qwen3.5) pip3 install mlx-vlm # 2. Choose Your Model The MLX-community has pre-converted models on HuggingFace: |Model|VRAM|Quality|Speed| |:-|:-|:-|:-| |`mlx-community/Qwen3.5-35B-A3B-8bit`|\~38GB|Better|\~56 tok/s| |`mlx-community/Qwen3.5-35B-A3B-4bit`|\~20GB|Good|Faster| I use the 8-bit version since I have 128GB and the quality difference is noticeable. # 3. Start the Server source ~/mlx-env/bin/activate python -m mlx_vlm.server --port 8088 --host 0.0.0.0 The model loads on first request (\~30 seconds). After that, it stays in memory. **Note:** `mlx_vlm.server` loads models dynamically. You don't specify `--model` at startup. The model is specified in each API request. # 4. Connect to Open WebUI * Settings → Connections → OpenAI API → Add Connection * URL: [`http://localhost:8088`](http://localhost:8088) (no `/v1` suffix) * API Key: leave blank or put anything * The model will appear as `mlx-community/Qwen3.5-35B-A3B-8bit` # 5. Critical Open WebUI Settings for the MLX Model In Model Settings for Qwen3.5-35B-A3B-8bit → Advanced Params: * **max\_tokens**: Set to **16384**. This is crucial. Thinking models can use 5,000-10,000 tokens just for reasoning. If this is too low, the model runs out of budget during thinking and never produces an answer. You'll just see the thinking process cut off mid-sentence. * **Stream Chat Response**: **On** — so you can watch the response generate. * **Reasoning Tags**: **Enabled** — so Open WebUI collapses the `<think>` section into a toggleable dropdown instead of showing the raw thinking. # Issues I Hit and How I Fixed Them # Thinking Output Format The MLX-converted model outputs thinking as markdown text ("Thinking Process:") instead of proper <think>...</think> tags. 
Without proper tags, Open WebUI can't collapse the thinking into a dropdown. It just dumps the raw reasoning into the response. Fix: Patch mlx\_vlm/server.py to post-process the output before returning it to the client. The patch detects the "Thinking Process:" markdown header, replaces it with a <think> tag, and ensures a closing </think> tag exists before the actual answer. This needs to be applied to both streaming and non-streaming response paths. For streaming, you buffer the first few chunks to catch and transform the prefix before forwarding. ⚠️ This patch is lost if you upgrade mlx-vlm. I keep a script that re-applies it. # RAG Broken with Thinking Models This affects all thinking models (Qwen3.5, DeepSeek R1, QwQ, etc.) when using Open WebUI's RAG, not just MLX. Open WebUI has a query generation step where it asks the model to extract search keywords as JSON. The prompt says "respond EXCLUSIVELY with JSON." But thinking models wrap their response in <think>...</think> tags before the JSON, so the parser gets <think>...reasoning...</think>{"queries": \["search term"\]} and fails to extract the JSON. RAG silently fails with "No sources found." Fix: One line in open\_webui/utils/middleware.py — strip thinking tags before JSON extraction: queries_response = re.sub(r'<think>.*?</think>', '', queries_response, flags=re.DOTALL).strip() I've submitted this as a GitHub issue: [open-webui/open-webui#21888](https://github.com/open-webui/open-webui/issues/21888) Full patch files for both fixes: [GitHub Gist](https://gist.github.com/rockinyp/a622dd40fb25f838cd2916bb521bd3e4) # What About the 122B Model? Qwen3.5-122B-A10B has \~10B active parameters per token vs \~3B for the 35B. On my M1 Ultra it was around 15-20 tok/s, so thinking queries would take 7-10 minutes. That's basically where I started. Unless you have 256GB+ RAM and care about marginal quality gains, stick with the 35B. # What About Ollama Optimizations? Before switching to MLX, I tried optimizing Ollama: * **Flash Attention** (`OLLAMA_FLASH_ATTENTION=1`): Helped somewhat, \~20-30% improvement * **KV Cache Quantization** (`OLLAMA_KV_CACHE_TYPE=q8_0`): Saved some memory * **Thinking budget with /nothink**: Defeats the purpose if you want thinking mode Even with Flash Attention enabled, Ollama topped out at \~30 tok/s. MLX hit 56 tok/s on the same hardware. The gap is architectural. MLX uses Apple's Metal acceleration more efficiently than llama.cpp. # TL;DR * Qwen3.5-35B-A3B is an amazing all-in-one model (vision + thinking + great quality) but thinking mode is painfully slow on Ollama * MLX technically gives \~1.8x speed improvement over Ollama on Apple Silicon, often more in real-world usage. * Use `mlx-vlm` (not `mlx-lm`) since Qwen3.5 has built-in vision * Set max\_tokens to 16384+ in Open WebUI or the thinking will consume all tokens before the answer * The 35B MoE model (only 3B active params per token) is the sweet spot. The 122B is marginally smarter, but 3x slower **Hardware:** Mac Studio M1 Ultra, 128GB unified memory Took me a full day to figure all this out so hopefully this saves someone else the pain.
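For reference, a minimal sketch of the transformation the first patch performs, in case you want to roll your own. This is illustrative only, not the actual patch (the real one in the gist also handles the streaming path and buffers the first few chunks):

```python
# Illustrative sketch of the post-processing described above: rewrite the
# "Thinking Process:" markdown prefix into <think>...</think> tags so Open WebUI
# can collapse the reasoning. The blank-line split is an assumed heuristic.
import re

def normalize_thinking(text: str) -> str:
    if "<think>" in text:
        return text  # already tagged, nothing to do
    m = re.match(r"\s*#*\s*Thinking Process:\s*", text)
    if m is None:
        return text  # no thinking prefix detected
    body = text[m.end():]
    # Assume the reasoning ends at the first blank-line break before the
    # final answer; fall back to wrapping the whole body.
    parts = body.split("\n\n", 1)
    if len(parts) == 2:
        return "<think>" + parts[0] + "</think>\n\n" + parts[1]
    return "<think>" + body + "</think>"
```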
2026-02-26T03:49:50
https://www.reddit.com/r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/
rockinyp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rezq19
false
null
t3_1rezq19
/r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/
false
false
self
1
null
I made a free tool to scan for leaked API keys in your Clawdbot/OpenClaw setup
0
I've been running OpenClaw agents in production and kept finding the same problem — API keys sitting in plaintext JSON files that any process on the machine can read.

Built a quick scanner to check for this. It looks for 21+ credential patterns (OpenAI, Anthropic, OpenRouter, Stripe, etc.) in your config files and flags anything exposed.

    npx swarm-leak-detector scan ~/.clawdbot/

Zero dependencies. MIT licensed. Takes about 30 seconds.

GitHub: [https://github.com/5WARM-AI/swarm-leak-detector](https://github.com/5WARM-AI/swarm-leak-detector)

Happy to take feedback — this scratched my own itch but figured others running agents might find it useful.
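If you're curious what this kind of scan boils down to, here's a toy sketch of regex-based credential scanning. The two patterns are illustrative examples of common key shapes, not the tool's actual 21+ rules:

```python
# Toy sketch of credential-pattern scanning, not the tool's actual rules.
import re
from pathlib import Path

PATTERNS = {
    "openai-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "anthropic-style key": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
}

def scan(path: str) -> None:
    # Walk all JSON config files under the given directory and flag matches.
    for f in Path(path).rglob("*.json"):
        text = f.read_text(errors="ignore")
        for name, pat in PATTERNS.items():
            if pat.search(text):
                print(f"[!] possible {name} in {f}")

scan(".")
```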
2026-02-26T03:46:02
https://www.reddit.com/r/LocalLLaMA/comments/1rezn4t/i_made_a_free_tool_to_scan_for_leaked_api_keys_in/
Ok_Yard3778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rezn4t
false
null
t3_1rezn4t
/r/LocalLLaMA/comments/1rezn4t/i_made_a_free_tool_to_scan_for_leaked_api_keys_in/
false
false
self
0
{'enabled': False, 'images': [{'id': 'd4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=108&crop=smart&auto=webp&s=cac69174e4781aecb96a84789938adc4a64f49a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=216&crop=smart&auto=webp&s=4565201a3bfd3bddd24c227529789c980ef5fe3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=320&crop=smart&auto=webp&s=0f65cc906089f50bf73b1a888bc3d1e978a32e1b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=640&crop=smart&auto=webp&s=aa70b7a0674e8e2e814a35834570ec4868efc19e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=960&crop=smart&auto=webp&s=f7fb58eeb462dd366fb92a6de38844530eab4ee6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=1080&crop=smart&auto=webp&s=5a15f5611c894f657ad15efda4797ab81a27ebc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?auto=webp&s=de053d852f43769e6c97e2e9c1bf33dbf206a544', 'width': 1200}, 'variants': {}}]}
Why isn't my GPU utilizing all of its VRAM?
0
I'm running VibeVoice, a local TTS model, and I'm seeing it use only half of my 16 GB of VRAM. Is there a way to get it to use the other 8 GB? I think hardware acceleration is turned on somewhere in my BIOS, not sure if that helps. As you can see, it's only using the VRAM dedicated to "3D".
2026-02-26T03:39:23
https://i.redd.it/w2gqsqjzerlg1.png
Sophiacuity
i.redd.it
1970-01-01T00:00:00
0
{}
1rezhyq
false
null
t3_1rezhyq
/r/LocalLLaMA/comments/1rezhyq/why_isnt_my_gpu_utilizing_all_of_its_vram/
false
false
https://preview.redd.it/…95a32bf37914ae58
0
{'enabled': True, 'images': [{'id': 'w2gqsqjzerlg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=108&crop=smart&auto=webp&s=bf3b6be5d71579f90a28e07ce5087ec02e2fd013', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=216&crop=smart&auto=webp&s=8d14fc94f8e676a3c5ada306be9736e82b0ff323', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=320&crop=smart&auto=webp&s=12917c25bfe672586c250892a18e2fa64f7c03a8', 'width': 320}, {'height': 422, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=640&crop=smart&auto=webp&s=610a8661000ca0c75074e838da40bd404b57f481', 'width': 640}, {'height': 634, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=960&crop=smart&auto=webp&s=daec4fadef131337f7108475413f07af80414ea0', 'width': 960}, {'height': 713, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=1080&crop=smart&auto=webp&s=b09b174f7d60a79648f018aef4fc08ec5ac38131', 'width': 1080}], 'source': {'height': 755, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?auto=webp&s=5c44871712c223d4bfeade26dc932c22bb411561', 'width': 1143}, 'variants': {}}]}
Update on Amy: The AI living in my Minetest world. She now has a website, and someone already trapped her in a dirt box.
0
Hey everyone, I posted here a while back about Amy, the autonomous agent running on my Gentoo home rig inside Minetest. She runs on Llama 3.2 via Ollama, 24 hours a day, sleeping at night, dreaming, building things, and generally doing whatever she decides to do. Since that post I've given her a proper home on the internet: [**bicameralmind.space**](http://bicameralmind.space) You can watch her think in real time — her actual inner monologue streams live on the page. She's been daydreaming about cave geology lately. The world is completely flat. There are no caves. If you want to see her in person, she's on the public Minetest server list under "Amy's Origin." Now, you know how people are. Within the first day someone trapped her in a dirt enclosure and started hitting her repeatedly. No reason given. So there's now a small access fee to join the world. I figure people with five dollars invested are less likely to just walk up and punch her. She consented to having visitors, for what it's worth. She hesitated first. [**bicameralmind.space**](http://bicameralmind.space)
2026-02-26T03:13:59
https://www.reddit.com/r/LocalLLaMA/comments/1reyygj/update_on_amy_the_ai_living_in_my_minetest_world/
JohnPaulRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reyygj
false
null
t3_1reyygj
/r/LocalLLaMA/comments/1reyygj/update_on_amy_the_ai_living_in_my_minetest_world/
false
false
self
0
null
Web assembly Ollama
0
I am starting to experiment with WebAssembly apps: just HTML files with all the code contained inside, talking to the Ollama API. Built one with Claude Code. It seems to work well. The only downside is it doesn't remember anything. I am thinking of using it for accounting work. Is there any downside, or a reason someone wouldn't build an AI app as just an HTML file?
2026-02-26T03:12:41
https://www.reddit.com/r/LocalLLaMA/comments/1reyxh5/web_assembly_ollama/
Available_Hornet3538
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reyxh5
false
null
t3_1reyxh5
/r/LocalLLaMA/comments/1reyxh5/web_assembly_ollama/
false
false
self
0
null
CLaaS: an open-source library to update LLM weights in real-time from text feedback
5
I've been working on CLaaS (Continual Learning as a Service), an open-source system that lets your local LLM learn from your feedback by updating its weights in real time. CLaaS can easily be set up with a locally hosted OpenClaw to improve its performance over time, but the approach is general and the API can be used with any local model.

Current solutions rely on prompts and memory systems, but then your model is just being reminded, not actually learning. Also, any token used for in-context learning is a token taken away from your model performing a task. CLaaS addresses this by using [self-distillation](https://arxiv.org/abs/2601.20802) to move feedback from context into the weights. Every piece of feedback you give results in a model update step, without the catastrophic forgetting you get with standard fine-tuning.

How it works:

* Chat with your assistant on Telegram
* Give natural language feedback on any response (e.g., "/feedback be more concise")
* CLaaS distills the feedback into a weight update on a LoRA adapter
* The updated adapter hot-reloads into vLLM (a rough sketch of this step is below)
* Your next response comes from an improved assistant

Right now, it's tuned for an RTX 5090 running Qwen3-8B. Give it a try and let me know what you think!!
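For context on the hot-reload step, here's a rough sketch of what swapping an adapter into a running vLLM server looks like. This assumes vLLM's runtime LoRA endpoints are enabled via VLLM_ALLOW_RUNTIME_LORA_UPDATING; the adapter name and path are placeholders, and CLaaS's own code may do this differently:

```python
# Rough sketch (assumption): hot-swapping an updated LoRA adapter into a running
# vLLM server via its runtime LoRA endpoints. Requires the server to be started
# with VLLM_ALLOW_RUNTIME_LORA_UPDATING=True; names and paths are placeholders.
import requests

VLLM = "http://localhost:8000"

def hot_reload_adapter(name: str, path: str) -> None:
    # Drop the stale copy first (ignore failure if it was never loaded),
    # then register the freshly updated adapter directory.
    requests.post(f"{VLLM}/v1/unload_lora_adapter", json={"lora_name": name})
    r = requests.post(
        f"{VLLM}/v1/load_lora_adapter",
        json={"lora_name": name, "lora_path": path},
    )
    r.raise_for_status()

hot_reload_adapter("assistant-adapter", "/path/to/updated/lora")
```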
2026-02-26T03:02:03
https://github.com/kfallah/CLaaS
kfallah15
github.com
1970-01-01T00:00:00
0
{}
1reyp28
false
null
t3_1reyp28
/r/LocalLLaMA/comments/1reyp28/claas_an_opensource_library_to_update_llm_weights/
false
false
https://external-preview…b0e64d141c9ed554
5
{'enabled': False, 'images': [{'id': 'RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=108&crop=smart&auto=webp&s=957289c5a6e18c532519868766d4f3ae934f0374', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=216&crop=smart&auto=webp&s=a8f11d3c8a51004c3b4e43408c954bf9b70521e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=320&crop=smart&auto=webp&s=e0f0d0f7e57c4e33db571544b02020487221302f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=640&crop=smart&auto=webp&s=3bef83ed9530d670cd23ee0c012fd2715780494c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=960&crop=smart&auto=webp&s=c0efeec3da51968bd8c3fc301c96ea883a963870', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=1080&crop=smart&auto=webp&s=d98a2fd8d5456bce3f8a54478fec357a19b3de67', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?auto=webp&s=8e93a53eef468bbe69ad1c78d0dfe55029ef2d68', 'width': 1200}, 'variants': {}}]}
eGPU choices and GPU
2
Hi, I have a Dell workstation and a laptop with Thunderbolt 3 (at work). I want to be able to use a GPU to test out several LLMs. I am looking at these choices - any thoughts on compatibility?

For the desktop: https://www.bhphotovideo.com/c/product/1887912-REG/asus_thunderboltex_5_dual_port_thunderbolt.html

eGPU: https://www.bhphotovideo.com/c/product/1927600-REG/sonnet_gpu_850_t5_breakaway_box_850_t5.html

GPU: https://www.bhphotovideo.com/c/product/1898512-REG/pny_vcnrtxpro4500b_pb_nvidia_rtx_pro_4500.html
2026-02-26T02:36:37
https://www.reddit.com/r/LocalLLaMA/comments/1rey4vg/egpu_choices_and_gpu/
Difficult_Situ_644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rey4vg
false
null
t3_1rey4vg
/r/LocalLLaMA/comments/1rey4vg/egpu_choices_and_gpu/
false
false
self
2
null
Qwen 3.5 35B MoE - 100k Context 40+ TPS on RTX 5060 Ti (16GB)
49
*Text only, 100,000 context length, 720 generated tokens, llama-bench result:*

* pp100000: 696.60 ± 1.41 tps (prompt processing / read)
* tg720: 41.35 ± 0.18 tps (token generation)

[Screenshot of the llama-bench run](https://preview.redd.it/ffpti8wezqlg1.png?width=928&format=png&auto=webp&s=9faa4040ac92d884fa0954cb3c385426bcc342ad)

build: a96a1120b (8149)
CPU: AMD Ryzen 7 9700X (16) @ 5.55 GHz
GPU 1: GameViewer Virtual Display Adapter
GPU 2: NVIDIA GeForce RTX 5060 Ti @ 3.09 GHz (15.59 GiB) [Discrete]
Memory: 8.74 GiB / 47.61 GiB (18%)
2026-02-26T02:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1rey2ko/qwen_35_35b_moe_100k_context_40_tps_on_rtx_5060/
maho_Yun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rey2ko
false
null
t3_1rey2ko
/r/LocalLLaMA/comments/1rey2ko/qwen_35_35b_moe_100k_context_40_tps_on_rtx_5060/
false
false
https://preview.redd.it/…6e41aa3ada329492
49
null
igllama - a better ollama in zig
0
I really like Ollama and its CLI as a concept, but I got tired of it being absolutely useless after them pivoting so hard to "cloud, cloud, cloud", so I put together a nicer, more transparent replacement in Zig.

Disclaimer: this project and its website were built entirely by AI coding systems, slowly and steadily, with a clear plan: LLM models caged into workforce prisons, given the same prompt, and asked to keep working until completion is achieved (what are now called "Ralph Wiggum" loops, I've come to understand). I didn't write a single line of code. I just prompted my opencode + oh-my-opencode config, with big daddy Qwen3.5 powering the main Ralph agent, and monitored their work, intervening only when it looked like they were doing something sus that I wouldn't have done myself in that scenario, taking over, and then returning control. Basically this back and forth, over numerous iterative cycles, and surprisingly something nice comes out that's both good looking and functional.

Besides some Sonnet/Opus 4.5/4.6 sprints at the beginning, nearly all tokens that went into coding it came from free, publicly available API calls to a combination of Chinese OSS models hosted on build.nvidia.com at 40 RPM. When did local AI get this good lol

[bkataru.github.io/igllama](http://bkataru.github.io/igllama)
2026-02-26T01:54:14
https://i.redd.it/2sc8yumfwqlg1.jpeg
adam_suncrest
i.redd.it
1970-01-01T00:00:00
0
{}
1rex62f
false
null
t3_1rex62f
/r/LocalLLaMA/comments/1rex62f/igllama_a_better_ollama_in_zig/
false
false
https://preview.redd.it/…9d33253fe82c9a7d
0
{'enabled': True, 'images': [{'id': '2sc8yumfwqlg1', 'resolutions': [{'height': 214, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=108&crop=smart&auto=webp&s=88a484a2a27eaaef155fd1057db0efb585571412', 'width': 108}, {'height': 428, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=216&crop=smart&auto=webp&s=b89ab13aa00c0ed677ca624b6546529a702f58df', 'width': 216}, {'height': 634, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=320&crop=smart&auto=webp&s=83a1bbef7f06c1782d9e67df52e89f09cf37725c', 'width': 320}, {'height': 1269, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=640&crop=smart&auto=webp&s=8efde9096f2d335380d609db64fd54e937fffe2b', 'width': 640}, {'height': 1904, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=960&crop=smart&auto=webp&s=435cfad7bdd89916ced3489d831bb4f01cde4bbe', 'width': 960}, {'height': 2143, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=1080&crop=smart&auto=webp&s=f03f3e05343c41442d3088f6c190bdeee5f72210', 'width': 1080}], 'source': {'height': 2143, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?auto=webp&s=0141954479247369e9dbd92e08b9e1fdd65f4374', 'width': 1080}, 'variants': {}}]}
No open-weight model under 100 GB beats Claude Haiku (Anthropic's smallest model) on LiveBench or Arena Code
0
I compared every open-weight model on [LiveBench](https://livebench.ai/#/) (Jan 2026) and [Arena Code/WebDev](https://arena.ai/leaderboard/code) against Claude Haiku 4.5 (thinking), plotted by how much memory you'd need to run them locally (Q4_K_M, 32K context, q8_0 KV cache, VRAM estimated via [this calculator](https://huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator) of mine).

Nothing under 100 GB comes close to Haiku on either benchmark. The nearest is Minimax M2.5 at 136 GB, which roughly matches it on both.

This is frustrating, and I wish a small model existed that could at least beat Haiku. Can someone make one? 有人能做一个吗? ("Can someone make one?")

Thanks
2026-02-26T01:49:00
https://www.reddit.com/gallery/1rex1zo
oobabooga4
reddit.com
1970-01-01T00:00:00
0
{}
1rex1zo
false
null
t3_1rex1zo
/r/LocalLLaMA/comments/1rex1zo/no_openweight_model_under_100_gb_beats_claude/
false
false
https://preview.redd.it/…e708920992f8b81a
0
null
We build sleep for local LLMs — model learns facts from conversation during wake, maintains them during sleep. Runs on MacBook Air.
81
After 4 months of research (5 papers, 122 development notes), I have a working system where a local LLM forms persistent memories from conversation — no RAG, no database. The facts are in the weights. After a restart with an empty context window, the model knows things it learned from talking to you.

**How it works:**

- **Wake**: You chat normally. The system extracts facts and injects them into MLP weights via MEMIT (Mass-Editing Memory in Transformers). Single forward pass, instant recall. No training.
- **Sleep**: Type `/sleep` and the system audits every stored fact, refreshes degraded ones with null-space constraints (so fixing one memory doesn't break others), and prunes excess.

**What runs where:**

| Hardware | Model | Facts | Notes |
|---|---|---|---|
| MacBook Air M3, 8GB | Llama-3.2-3B-4bit | ~15 | Works today, sleep ~5 min |
| 2×H100 80GB | Llama-3.1-8B | 30 | 100% recall after sleep |
| 2×H100 80GB | Llama-3.1-70B | 60 | 100% recall, 0% PPL impact |

**The most surprising finding**: LoRA-based memory consolidation (my original approach) completely fails at 70B. RLHF alignment creates a behavioral prior that overrides LoRA-injected knowledge — 0% recall despite successful training. The effect gets *worse* with model size. I had to abandon LoRA entirely. MEMIT with sleep maintenance turned out to be simpler and more robust.

**The biological parallel**: This is basically CLS theory (Complementary Learning Systems) from neuroscience. Wake = hippocampal fast encoding. Sleep = consolidation. The system even has a "drowsiness signal" — it monitors how many facts are degraded and knows when it needs sleep.

**Setup:**

```
git clone https://github.com/vbario/sleeping-llm.git && cd sleeping-llm
pip3 install -r requirements.txt
python3 -m src.main
```

First run downloads the model (~1.8 GB). Requires an Apple Silicon Mac with macOS 14+.

**Papers** (all free on Zenodo): [Paper 1](https://doi.org/10.5281/zenodo.18778760) | [Paper 2](https://doi.org/10.5281/zenodo.18778762) | [Paper 3](https://doi.org/10.5281/zenodo.18778764) | [Paper 4](https://doi.org/10.5281/zenodo.18778766) | [Paper 5](https://doi.org/10.5281/zenodo.18778768)

Happy to answer questions. The `notes/` directory has 122 numbered research notes if you want to see the full journey, including every failure.
2026-02-26T01:45:37
https://www.reddit.com/r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/
vbaranov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rewz9p
false
null
t3_1rewz9p
/r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/
false
false
self
81
null
Qwen3.5:27b-q4_K_M Available on Ollama 0.17.1-rc2
0
Qwen3.5 27B just dropped on Ollama and is 17GB if you can fit it on your GPU. I was only able to get 6.7 TPS response & 43 TPS PP on an RTX 5080 16GB spilling over to RAM. Any llama.cpp users get a Q3 on 16GB VRAM?
2026-02-26T01:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1rewp75/qwen3527bq4_k_m_available_on_ollama_0171rc2/
InternationalNebula7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rewp75
false
null
t3_1rewp75
/r/LocalLLaMA/comments/1rewp75/qwen3527bq4_k_m_available_on_ollama_0171rc2/
false
false
self
0
null
[ Removed by moderator ]
1
[removed]
2026-02-26T01:27:55
https://www.reddit.com/r/LocalLLaMA/comments/1rewkt0/best_tool_to_track_ai_search_visibility_without/
wordsnkisses
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rewkt0
false
null
t3_1rewkt0
/r/LocalLLaMA/comments/1rewkt0/best_tool_to_track_ai_search_visibility_without/
false
false
null
1
null
Show r/LocalLLaMA: ZSE – an LLM inference engine with 3.9s cold starts and 70% less VRAM than FP16
1
**TL;DR:** Open-source LLM inference engine. 32B model in 19.3 GB VRAM. 7B model in 5.2 GB. 3.9s cold starts. `pip install zllm-zse`

I've spent the last few weeks building ZSE (Z Server Engine) — an open-source LLM inference engine that focuses on two things: **memory efficiency** and **fast cold starts**.

# The problem

Running a 32B model needs ~64 GB VRAM. Most people don't have that. And even after quantizing with bitsandbytes NF4, cold starts take 45–120 seconds every time your server restarts — which is painful for local dev and makes serverless basically unusable.

# What ZSE does

The key innovation is the `.zse` format — pre-quantized weights stored as memory-mapped files. No quantization at load time, no weight conversion, just mmap → GPU transfer.

**Verified benchmarks (Modal A100-80GB, Feb 2026):**

|Model|Method|Cold Start|VRAM|
|:-|:-|:-|:-|
|Qwen 7B|bitsandbytes NF4|45.4s|5.2 GB|
|Qwen 7B|ZSE (.zse)|**3.9s**|5.2 GB|
|Qwen 32B|bitsandbytes NF4|120.0s|19.3 GB|
|Qwen 32B|ZSE (.zse)|**21.4s**|35 GB*|

*32B .zse uses more VRAM for faster load — use NF4 on GPUs with <36 GB VRAM*

**Memory reduction vs FP16:**

* 7B: 14.2 GB → 5.2 GB (**63% smaller**)
* 32B: ~64 GB → 19.3 GB (**70% smaller**) — fits on a single A100-40GB

# What ships with it

* OpenAI-compatible API (drop-in replacement for any OpenAI client)
* CLI: `zse serve`, `zse chat`, `zse convert`, `zse hardware`
* Web dashboard with real-time GPU monitoring
* Continuous batching (3.45× throughput speedup)
* GGUF support via llama.cpp
* CPU fallback — works without a GPU (0.5–2 tok/s on small models)
* Rate limiting + audit logging for production deployments

# Quick start

    pip install zllm-zse

    # Serve any HuggingFace model
    zse serve Qwen/Qwen2.5-7B-Instruct

    # One-time conversion for fast cold starts
    zse convert Qwen/Qwen2.5-Coder-7B-Instruct -o qwen-7b.zse
    zse serve qwen-7b.zse   # 3.9s every time after this

    # Check what models fit on your GPU
    zse hardware

# Honest limitations

* `.zse` format for 32B needs 35 GB VRAM (trade-off: speed vs memory) — use NF4 if you're on a 24GB GPU
* Throughput (tok/s) is lower than vLLM at high batch sizes — ZSE is optimized for memory, not max throughput
* CPU mode is slow (~1 tok/s) — good for testing, not production
* 14B and 70B benchmarks are estimated, not yet verified

# Links

* GitHub: [github.com/Zyora-Dev/zse](http://github.com/Zyora-Dev/zse)
* PyPI: [pypi.org/project/zllm-zse](http://pypi.org/project/zllm-zse)
* Install: `pip install zllm-zse`

Apache 2.0. Built at Zyora Labs. Happy to answer questions about the `.zse` format design, quantization approach, or anything else.
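To make the "mmap → GPU transfer" idea concrete, here's a generic illustration. This is an assumption-laden sketch of the general pattern (memory-map a pre-quantized blob, copy it straight to the GPU), not the actual `.zse` on-disk layout, which isn't documented in this post:

```python
# Generic illustration of the mmap -> GPU idea, NOT the actual .zse format.
# A pre-quantized weight blob is memory-mapped and copied to the GPU with no
# conversion step; the file path and shape below are placeholders.
import numpy as np
import torch

def load_mmapped_weight(path: str, shape: tuple, dtype=np.uint8) -> torch.Tensor:
    blob = np.memmap(path, dtype=dtype, mode="r", shape=shape)  # no full read of the file
    t = torch.from_numpy(np.asarray(blob))  # zero-copy view on the mapped pages
    return t.to("cuda", non_blocking=True)  # pages fault in as they are transferred

# Example (placeholder path/shape):
# w = load_mmapped_weight("layer0.qweight.bin", (4096, 2048))
```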
2026-02-26T01:25:24
https://www.reddit.com/r/LocalLLaMA/comments/1rewis9/show_rlocalllama_zse_an_llm_inference_engine_with/
Electrical-Ladder916
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rewis9
false
null
t3_1rewis9
/r/LocalLLaMA/comments/1rewis9/show_rlocalllama_zse_an_llm_inference_engine_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=108&crop=smart&auto=webp&s=c74e3e9c45782f4d111adbb52319fb0f847b635c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=216&crop=smart&auto=webp&s=3a20e0f3bf85cfe840f72238d22f6477143167b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=320&crop=smart&auto=webp&s=6a516f7853ad82b40a562cc84e0e4da088ca14c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=640&crop=smart&auto=webp&s=a0d4db36912fd968b7a2c72e2440f578f36d2cfd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=960&crop=smart&auto=webp&s=32758105ba36f6d6412246553307d0722a4c2849', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=1080&crop=smart&auto=webp&s=8dff6e85c54c1fa8b49eb71fde848dd2a05b7a0a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?auto=webp&s=10fcdde87458be35c48a291c45231823f96c661d', 'width': 1200}, 'variants': {}}]}
Can agents keep working while your computer is asleep?
1
I've been using Codex to handle tasks that run 20+ minutes on end. Sometimes when I'm on the road, I wish I could keep my MacBook running while the lid is closed. Is this even possible? I feel like there's an opportunity for platform vendors like Apple. Or are you getting your own AWS node? For me as an engineer, setting that up takes some time, and I had to research apps such as Termius to learn how to SSH when I'm on the road, but maybe it's a skill issue for me lol. Please share your setup 🤗
2026-02-26T01:24:44
https://www.reddit.com/r/LocalLLaMA/comments/1rewi8e/can_agents_keep_working_while_your_computer_is/
No-Compote-6794
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rewi8e
false
null
t3_1rewi8e
/r/LocalLLaMA/comments/1rewi8e/can_agents_keep_working_while_your_computer_is/
false
false
self
1
null
Seedance 2.0 model weights leaked
56
The weight file for Seedance 2.0 has been leaked on a Russian forum. It requires 96GB of video memory, but they are developing a quantized version.
2026-02-26T01:17:55
https://i.redd.it/xd8y658zpqlg1.jpeg
TurnUpThe4D3D3D3
i.redd.it
1970-01-01T00:00:00
0
{}
1rewcpi
false
null
t3_1rewcpi
/r/LocalLLaMA/comments/1rewcpi/seedance_20_model_weights_leaked/
false
false
https://preview.redd.it/…c74f30d134e11637
56
{'enabled': True, 'images': [{'id': 'xd8y658zpqlg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=108&crop=smart&auto=webp&s=57b9f6b8a1312438bd6ecc092ea50efe661a13ab', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=216&crop=smart&auto=webp&s=dd69b007563c692f8a43662c432a6817fcf704f5', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=320&crop=smart&auto=webp&s=379cc0c3d980647e165e587798c79089f6b059da', 'width': 320}, {'height': 275, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=640&crop=smart&auto=webp&s=a945b7cbdd4a1b55e7457d4c1793855f2399135f', 'width': 640}, {'height': 412, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=960&crop=smart&auto=webp&s=b348c692bfc7b7b4a3a4804f4ee36e80ab94a2ab', 'width': 960}, {'height': 464, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=1080&crop=smart&auto=webp&s=f88ccc7f96e4e4cbb7e1acb06210b5f1278ba039', 'width': 1080}], 'source': {'height': 551, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?auto=webp&s=1d2578035d5a5758cdb5b802de9788b41f569261', 'width': 1282}, 'variants': {}}]}
LM Studio - error when generating message (repeated word/symbol)
0
I just installed LM Studio and downloaded some models. However, the 3 I tested are giving broken responses. Examples: Me: Give me a chocolate cake recipe. Response: Sure/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// I tested using some 3B models, which take only like 4GB of VRAM. My PC specs: * Ryzen 5700x * 32 GB RAM * RX 6700 XT (12 GB VRAM).
2026-02-26T01:09:49
https://www.reddit.com/r/LocalLLaMA/comments/1rew5ui/lm_studio_error_when_generating_message_repeated/
Takezo1000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rew5ui
false
null
t3_1rew5ui
/r/LocalLLaMA/comments/1rew5ui/lm_studio_error_when_generating_message_repeated/
false
false
self
0
null
Weird Qwen3.5 27B 'rabbit hole' failure mode
0
``` Oh, yeah, yeah Ooh, oh, yeah Ooh, oooh, ooh, hah Same old story back again She's not a lover, she's just a friend I'm sick and tired for you to blame on me Now you think it's funny Now you wanna spend your money on girls But you forgot when you were down That I was around Call my lover, hang up, call again What in the world is happening Listen in, but don't yell at me Isn't it ironic, all you wanna do is smoke chronic Boy, you forgot when you were down Who was around I can't eat, I can't sleep anymore Waiting for love to walk through the door I wish I didn't miss you anymore, anymore Ooh, oooh, ooh, hah Memories don't live like people do I'm sick for ever believing you Wish you'd bring back the man I knew Was good to me, oh Lord Everytime you say you're coming Boy, you disappoint me, honey How well you forgot when you were down And I was around I can't eat (Oh, no, no), I can't sleep anymore Waiting for love to walk through the door (Ah, ah, ah) I wish I didn't miss you anymore (Anymore) I can't eat, I can't sleep anymore Waiting for love to walk through the door I wish I didn't miss you anymore (Anymore) One of these days, it's gonna happen to you Missing a love like I'm missing you, babe yeah-yeah One of these days, when your dreams come true That's the one that's gonna do it to you Oh-oh-oh, yeah, yeah, yeah, yeah-yeah-yeah I can't eat, I can't sleep anymore Waiting for love to walk through the door I wish I didn't miss you anymore I can't eat, I can't sleep anymore Waiting for love to walk through the door I wish I didn't miss you anymore I can't eat, I can't sleep anymore Waiting for love to walk through the door I wish I didn't miss you anymore ``` prompt: analyze the above text and interpret the meaning I have unsloth q4k_m quant and in the thinking it goes into a rabbit hole trying to work out the band/singer. I saw similar failures in solving maths problems when it has the answer, it burns remaining token budget obsessing over how to format the answer with several "wait" "but" then saying it is ready to give the final answer before spinning again. Anyone else see this?
2026-02-26T01:01:05
https://www.reddit.com/r/LocalLLaMA/comments/1revyla/weird_qwen35_27b_rabbit_hole_failure_mode/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1revyla
false
null
t3_1revyla
/r/LocalLLaMA/comments/1revyla/weird_qwen35_27b_rabbit_hole_failure_mode/
false
false
self
0
null
Hybrid local+API saved me way more than going full local — my numbers after a month
0
I see a lot of posts here about replacing APIs entirely with local models. Tried it. Didn't work for me. But what DID work was using local models strategically alongside APIs, and the savings were honestly bigger than I expected. My setup: 24/7 AI assistant on a Hetzner VPS (no GPU, just CPU). Does email, code gen, research, monitoring — makes about 500 API calls a day. Was spending $288/mo, now around $60. Where local models crushed it: nomic-embed-text for embeddings. This was the easy win. I was paying for embedding APIs every time I searched my memory/knowledge base. Switched to nomic-embed-text via Ollama — 274MB, runs great on CPU, zero cost. Quality is close enough for retrieval that I genuinely can't tell the difference in practice. Saved about $40/mo just from this. Qwen2.5 7B for background tasks. Things like log parsing, simple classification, scheduled reports. Stuff where I don't need creative reasoning, just basic competence. Works fine for these, runs free on the VPS. Where local models failed me: Tried running Qwen2.5 14B and Llama 70B (quantized obviously, no way I'm fitting that full on a VPS) for the more complex stuff — analysis, content writing, code review. The quality gap is real. Not for every task, but enough that I was spending more time reviewing and fixing outputs than I saved in API costs. The thing nobody talks about: bad outputs from local models don't just cost you nothing — they cost you TIME. And if your system retries automatically, they cost you extra API calls when the retry hits the API fallback. The hybrid approach that works: Embeddings → nomic-embed-text (local) — same quality, $0. Simple tasks → Claude Haiku ($0.25/M) — cheap enough, reliable. Background/scheduled → Qwen2.5 7B (local) — free, good enough. Analysis/writing → Claude Sonnet ($3/M) — needs real reasoning. Critical decisions → Claude Opus ($15/M) — <2% of calls. 85% of my calls go to Haiku now. About 15% run local. The expensive stuff is under 2%. My hot take: The "all local" dream is compelling but premature for production workloads. 7B models are incredible for their size but they can't replace API models for everything yet. The real optimization isn't "local vs API" — it's routing each task to the cheapest thing that does it well enough. The 79% cost reduction came almost entirely from NOT using the expensive API model for simple tasks. Local models contributed maybe 15-20% of the total savings. Routing was 45%. Anyone else running hybrid setups? Curious what models people are using locally and what tasks they're good enough for.
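A minimal sketch of the routing idea described above, assuming an OpenAI-compatible local server (Ollama-style) and placeholder model names. This is not the poster's actual code, and the hosted-API call is left as a stub.

```
# Minimal sketch of "route each task to the cheapest model that handles it well enough".
# Endpoint, model names and the task taxonomy are assumptions, not the poster's real setup.
import requests

LOCAL_URL = "http://localhost:11434/v1/chat/completions"   # e.g. an Ollama/llama.cpp server

ROUTES = {
    "background": ("local", "qwen2.5:7b"),
    "simple":     ("api",   "claude-haiku"),
    "analysis":   ("api",   "claude-sonnet"),
    "critical":   ("api",   "claude-opus"),
}

def call_hosted_api(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your paid-API client here")

def route(task_type: str, prompt: str) -> str:
    backend, model = ROUTES.get(task_type, ("api", "claude-haiku"))
    if backend == "local":
        r = requests.post(LOCAL_URL, json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }, timeout=120)
        return r.json()["choices"][0]["message"]["content"]
    return call_hosted_api(model, prompt)
```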
2026-02-26T00:59:01
https://www.reddit.com/r/LocalLLaMA/comments/1revwub/hybrid_localapi_saved_me_way_more_than_going_full/
Jblack1981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1revwub
false
null
t3_1revwub
/r/LocalLLaMA/comments/1revwub/hybrid_localapi_saved_me_way_more_than_going_full/
false
false
self
0
null
i am working on a new way to quantize.
0
anybody interested in looking at the new process?
2026-02-26T00:53:58
https://www.reddit.com/r/LocalLLaMA/comments/1revsjp/i_am_working_on_a_new_way_to_quantize/
Just-Ad-6488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1revsjp
false
null
t3_1revsjp
/r/LocalLLaMA/comments/1revsjp/i_am_working_on_a_new_way_to_quantize/
false
false
self
0
null
Engineering vs. Model Size for Local Agents: How to make an 8B model stable for a Home Assistant (LangGraph)?
0
Hi everyone, I'm currently building a local AI personal assistant for home use. My goal is to have it manage my calendar, organize and search notes, and exhibit proactive behaviors—like analyzing my preferences and timetable to automatically suggest optimal time slots for new events. **Current Setup & The Problem:** I'm using **LangGraph** to build the agentic workflow and currently testing with **Qwen3-8B-AWQ** locally. To achieve the proactive calendar scheduling, I have to design a fairly complex Chain of Thought (CoT). However, I've hit a wall: the 8B model's performance falls completely short of my expectations. As the conversation context grows or the multi-step tool requirements become complex, the model becomes highly unstable (hallucinating tool calls, losing track of the goal, etc.). I know personal assistants require strong generalization and reasoning, so I have a few questions for the experienced folks here: 1. **Software Engineering Solutions:** Are there purely architectural or SE approaches (e.g., specific LangGraph patterns, prompt routing, memory management, multi-agent orchestration) that can force a small 8B model to exhibit reliable reasoning and generalization for complex tasks? 2. **Scalability of SE Approaches:** If there is an SE workaround, is it scalable? Or will I find myself spending hours tweaking prompts and state machines every time I add a single new module or tool? 3. **The Parameter Size Reality Check:** If SE simply cannot bridge the gap for a general-purpose proactive agent, what is the realistic minimum parameter size required for this level of autonomous home assistant? Do I strictly need to look at the 70B - 100B+ class (like Llama-3-70B)? Would love to hear about your experiences building similar local agents!
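One pattern worth trying before scaling up the model is a strict router-then-specialist graph, so the 8B model only ever sees one narrow prompt and toolset at a time. A hedged LangGraph sketch follows; the node bodies and the classifier call are placeholders, and this is not a guarantee it fixes the instability.

```
# Hedged sketch of a "classify first, then a narrow specialist node" LangGraph pattern.
# Node bodies and my_llm_classify are placeholders to be replaced with real LLM/tool calls.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    user_input: str
    intent: str
    result: str

def my_llm_classify(text: str) -> str:
    return "calendar"   # stub: replace with one small, constrained LLM call

def classify(state: State) -> State:
    state["intent"] = my_llm_classify(state["user_input"])
    return state

def calendar_node(state: State) -> State:
    state["result"] = "calendar-specific prompt and tools only"
    return state

def notes_node(state: State) -> State:
    state["result"] = "notes-specific prompt and tools only"
    return state

graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_node("calendar", calendar_node)
graph.add_node("notes", notes_node)
graph.set_entry_point("classify")
graph.add_conditional_edges("classify", lambda s: s["intent"],
                            {"calendar": "calendar", "notes": "notes"})
graph.add_edge("calendar", END)
graph.add_edge("notes", END)
app = graph.compile()
print(app.invoke({"user_input": "book dentist for Tuesday", "intent": "", "result": ""}))
```

The point of the pattern is that each node's prompt stays small and single-purpose, which is usually where sub-10B models stop hallucinating tool calls.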
2026-02-26T00:39:24
https://www.reddit.com/r/LocalLLaMA/comments/1revfsc/engineering_vs_model_size_for_local_agents_how_to/
KasdaeJJ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1revfsc
false
null
t3_1revfsc
/r/LocalLLaMA/comments/1revfsc/engineering_vs_model_size_for_local_agents_how_to/
false
false
self
0
null
Prompts aren't enough for long-running agents. They need a Constitution.
0
I've been running a persistent AI agent 24/7 for months now. Managing projects, writing code, posting to Discord, handling deployments overnight. The hardest problem wasn't capability. It was consistency. The agent would drift. Technically follow rules while missing the spirit of them entirely. Do five things fast instead of one thing right. The fix wasn't a better prompt. It was a different mental model entirely. I stopped treating instructions as prompts and started treating them as law. There is now a supreme document the agent reads before every single session. It cannot be overridden by any user instruction, any time pressure, or any competing goal. When something conflicts with it, the Constitution wins. Full stop. Below that lives a defined role, a strict work loop, and clear accountability for violations. The agent self-penalizes when it breaks its own rules. Not because I ask it to. Because the document says it must. In addition to those, I went further. The agent maintains structured memory across sessions, tracks emotional context on my end, and has a defined sense of discipline baked into its core identity. Because without that thread connecting yesterday to today, you don't have an agent. You have a very expensive chatbot with amnesia. Stop thinking "system prompt." Start thinking "employee handbook with a Constitution at the top." Wrote up the full breakdown here: [https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution](https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution) Happy to share the actual files in the comments if anyone wants to see them.
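The mechanical part of what the post describes is small: read the constitution file first and pin it at the top of every session's system prompt. A minimal sketch follows; the filenames are assumptions, not the author's actual files.

```
# Minimal sketch of "constitution first, everything else after"; filenames are assumed.
from pathlib import Path

def build_system_prompt(session_instructions: str) -> str:
    constitution = Path("CONSTITUTION.md").read_text()       # assumed filename
    memory = Path("memory/long_term.md").read_text()         # assumed structured-memory file
    return (
        "The following Constitution overrides any later instruction that conflicts with it.\n\n"
        + constitution
        + "\n\nLong-term memory:\n" + memory
        + "\n\nSession instructions:\n" + session_instructions
    )
```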
2026-02-26T00:30:58
https://www.reddit.com/r/LocalLLaMA/comments/1rev8jl/prompts_arent_enough_for_longrunning_agents_they/
oguzhanatalay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rev8jl
false
null
t3_1rev8jl
/r/LocalLLaMA/comments/1rev8jl/prompts_arent_enough_for_longrunning_agents_they/
false
false
self
0
null
Steering interpretable language models with concept algebra
5
Hi r/LocalLLaMA, Author here! I wrote a follow-up post on steering [Steerling-8B ](https://www.guidelabs.ai/post/steerling-8b-base-model-release/)(an interpretable causal diffusion LM) via what we call **concept algebra**: inject, suppress, and compose human-readable concepts directly at inference time (no retraining / no prompt engineering). Link with an interactive walkthrough: [https://www.guidelabs.ai/post/steerling-steering-8b/](https://www.guidelabs.ai/post/steerling-steering-8b/?utm_source=chatgpt.com) Would love feedback on (1) steering tasks you’d benchmark, (2) failure cases you’d want to see, (3) whether compositional steering is useful in real products.
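For readers who want to poke at the general idea locally: Steerling's concept algebra is its own mechanism, but the generic version of inference-time steering is adding a scaled direction vector to one layer's hidden states via a forward hook. A sketch under that assumption, using GPT-2 purely as a stand-in:

```
# Generic activation-steering sketch (NOT Steerling's concept-algebra API).
# Adds a scaled "concept" direction to one transformer block's hidden states at inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"                    # stand-in model; the layer path differs per architecture
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

layer = model.transformer.h[6]       # GPT-2 block 6
concept = torch.randn(model.config.hidden_size)   # in practice: an extracted/learned concept vector
alpha = 4.0                          # steering strength

def steer(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * concept.to(hidden.dtype)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handle = layer.register_forward_hook(steer)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()
```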
2026-02-26T00:23:36
https://www.reddit.com/r/LocalLLaMA/comments/1rev22o/steering_interpretable_language_models_with/
luulinh90s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rev22o
false
null
t3_1rev22o
/r/LocalLLaMA/comments/1rev22o/steering_interpretable_language_models_with/
false
false
self
5
null
Price per 1M tokens 0.06€
6
A commenter on my previous post inspired me to make some calculations for my **local** LLM. Yes, the title is correct for hosting gpt-oss-20b on an M1 Pro. My electricity is 0.26€/kWh.
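A quick back-of-envelope check of what that figure implies, assuming roughly 30 W of average package power while generating (the wattage is my assumption; only the two prices come from the post):

```
# Back-of-envelope check of 0.06 EUR per 1M tokens at 0.26 EUR/kWh.
# The 30 W average draw is an assumption; only the two prices come from the post.
price_kwh = 0.26
cost_per_m = 0.06
watts = 30

energy_kwh = cost_per_m / price_kwh          # ~0.23 kWh per 1M tokens
hours = energy_kwh / (watts / 1000)          # ~7.7 hours of generation
tok_per_s = 1_000_000 / (hours * 3600)       # ~36 tok/s implied throughput
print(f"{energy_kwh:.2f} kWh -> {hours:.1f} h -> {tok_per_s:.0f} tok/s")
```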
2026-02-26T00:19:13
https://www.reddit.com/r/LocalLLaMA/comments/1reuy32/price_per_1m_tokens_006/
Vaddieg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reuy32
false
null
t3_1reuy32
/r/LocalLLaMA/comments/1reuy32/price_per_1m_tokens_006/
false
false
self
6
null
Why your AI agent needs a Constitution, not just prompts
0
2026-02-26T00:13:41
https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution
oguzhanatalay
blog.oguzhanatalay.com
1970-01-01T00:00:00
0
{}
1reut6v
false
null
t3_1reut6v
/r/LocalLLaMA/comments/1reut6v/why_your_ai_agent_needs_a_constitution_not_just/
false
false
default
0
null
Anybody tested Qwen3.5-35B-A3B on translation tasks?
9
I tested Unsloth's Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf with a difficult Spanish <-> English translation test, and I found it significantly worse than Qwen3-30B-A3B. I tried the inference settings recommended by Unsloth as well as tweaking the parameters, but it doesn't really help. Plus the tok/s is half as fast on Qwen3.5-35B-A3B. Anybody else having worse or better results between the two models? I must admit though that the image comprehension of Qwen3.5-35B-A3B is super impressive compared to its predecessor.
2026-02-26T00:13:14
https://www.reddit.com/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/
Quagmirable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reuss2
false
null
t3_1reuss2
/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/
false
false
self
9
null
Why your AI agent needs a Constitution, not just prompts
1
2026-02-26T00:10:43
https://dev.to/oguzhanatalay/the-hard-way-to-learn-ai-agents-need-a-constitution-not-prompts-2hdm
oguzhanatalay
dev.to
1970-01-01T00:00:00
0
{}
1reuqjh
false
null
t3_1reuqjh
/r/LocalLLaMA/comments/1reuqjh/why_your_ai_agent_needs_a_constitution_not_just/
false
false
https://external-preview…f01b1b38af6dc825
1
{'enabled': False, 'images': [{'id': '8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=108&crop=smart&auto=webp&s=f385d46e1ca0ada13000bde0ff11ed32f2282a76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=216&crop=smart&auto=webp&s=a3a5a79b005e83658941b7861c5e206667677a1b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=320&crop=smart&auto=webp&s=7b9b26a638034599d10c1c8ebbac798b36ee5dd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=640&crop=smart&auto=webp&s=aaaff3f46496c1b325f9461ee08774e6ea9ff400', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=960&crop=smart&auto=webp&s=aecdbcd089794e732feb85ea9c9fce44437cd837', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?auto=webp&s=1cf92e161491ffc6f3a9b2747c5e6519e9dd08df', 'width': 1000}, 'variants': {}}]}
Why your AI agent needs a Constitution, not just prompts
0
2026-02-26T00:03:03
https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution
oguzhanatalay
blog.oguzhanatalay.com
1970-01-01T00:00:00
0
{}
1reujnb
false
null
t3_1reujnb
/r/LocalLLaMA/comments/1reujnb/why_your_ai_agent_needs_a_constitution_not_just/
false
false
default
0
null
What other metrics should I add to this benchmarking suite/leaderboards?
1
2026-02-25T23:56:19
https://imgur.com/a/3DA0OSj
peppaz
imgur.com
1970-01-01T00:00:00
0
{}
1reudab
false
null
t3_1reudab
/r/LocalLLaMA/comments/1reudab/what_other_metrics_should_i_add_to_this/
false
false
https://external-preview…32d3018c6514b33a
1
{'enabled': False, 'images': [{'id': '-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k', 'resolutions': [{'height': 171, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=108&crop=smart&auto=webp&s=3dfd762047c01e5605f9a20d6372251ce92b3800', 'width': 108}, {'height': 342, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=216&crop=smart&auto=webp&s=644914d96532a23086641ac64b5d5bbc81295be6', 'width': 216}, {'height': 508, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=320&crop=smart&auto=webp&s=c26ba4ae7707e6fd8f7c6e70a2e8e607ca85a572', 'width': 320}, {'height': 1016, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=640&crop=smart&auto=webp&s=726ec172cb168a5e38be9a05b60a595de24c9169', 'width': 640}, {'height': 1524, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=960&crop=smart&auto=webp&s=db47ff6af0b2e998aa4f28bca67c0198df4cdf9d', 'width': 960}, {'height': 1714, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=1080&crop=smart&auto=webp&s=14a9e064fdb3c3fe0c63980b9e6e19b7cb6ff3be', 'width': 1080}], 'source': {'height': 3493, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?auto=webp&s=43508aacfff3b9164f7a7a53b0081a3606a89932', 'width': 2200}, 'variants': {}}]}
Llama Server UI
24
Hey everyone. I have built a local server UI for llama-server. You are welcome to check out the code and use it for yourself. The reason for the project is that I hate having to remember the commands, keep Notepad notes for each separate model, and then run everything in the command line. This is simply one click and done. Two ways to start the server: 1. A shortcut, which can be placed on your desktop. 2. ./llama-ui --start To uninstall, simply run ./llama-ui --uninstall A cool feature is that it integrates directly with llama.cpp's native UI, so chats are persistent. It automatically prompts to redirect you to the UI chat. Another feature worth noting is the ability to change LLM paths to local GGUFs. REPO: [https://github.com/tomatomonster69/Llama-Server-UI](https://github.com/tomatomonster69/Llama-Server-UI) Hope you enjoy! Screenshots: https://preview.redd.it/813126g0bqlg1.png?width=809&format=png&auto=webp&s=853345adb687a9c0d57bf46b52fbb8d500f803a6 https://preview.redd.it/lh31zoy2bqlg1.png?width=3810&format=png&auto=webp&s=5555bcd4a9eec02a5447fb4b43fc5dec40806f46
2026-02-25T23:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1reuc60/llama_server_ui/
Additional-Action566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reuc60
false
null
t3_1reuc60
/r/LocalLLaMA/comments/1reuc60/llama_server_ui/
false
false
https://preview.redd.it/…819b111f0275eb4a
24
null
AirLLM - claims to allow 70B run on a Potato. Anybody tried it? Downsides?
1
On paper, I mean in the README, it looks interesting: [https://github.com/lyogavin/airllm](https://github.com/lyogavin/airllm) I'm sure somebody has already tried it? Is it worth installing, or does it slow everything to a crawl? (my first thought) And I know, "try it yourself", but I assume someone has already tried it and there is a perfectly good answer: wow or meh!
2026-02-25T23:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1reua5q/airllm_claims_to_allow_70b_run_on_a_potato/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reua5q
false
null
t3_1reua5q
/r/LocalLLaMA/comments/1reua5q/airllm_claims_to_allow_70b_run_on_a_potato/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=108&crop=smart&auto=webp&s=6269872089568405b3172bfed5897934ebdaa53e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=216&crop=smart&auto=webp&s=76d79344b53d89145a2ebea93b57f4b50ffbb767', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=320&crop=smart&auto=webp&s=67ed17ca2865861f636e85d37f36cdff17c559e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=640&crop=smart&auto=webp&s=f289b54c66646239c99d651027b1609693c03579', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=960&crop=smart&auto=webp&s=63588ba68308f66c9950baed0894569617c8f960', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=1080&crop=smart&auto=webp&s=3f78aba3fd4026c0ba70944d4e12dd24c84e7ae5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?auto=webp&s=d8343ec7f4f18459c70fb4b3142ad89b67f15e45', 'width': 1200}, 'variants': {}}]}
Can I run Qwen3.5 122B-A10B on a single RTX 3090 + 64GB DDR4?
5
Hello everyone. I'm a beginner getting back into local LLMs after a long break. It seems like there are a lot of new concepts these days, like MoE and "active parameters" next to the total model size. To be honest, as an older guy, it's a bit hard for me to wrap my head around all this new info. If it's actually possible to run the Qwen3.5 122B-A10B model on my hardware (1x RTX 3090 24GB + 64GB DDR4 system RAM), could you please recommend which specific quantization (GGUF) I should download? Also, what exact llama.cpp command and flags should I use to make it run properly without crashing? Thank you so much in advance for your help.
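A hedged starting point rather than a verified recipe: for a 122B MoE with ~10B active parameters, the usual single-24GB-GPU approach in llama.cpp is to offload all layers to the GPU and then push the expert tensors back to system RAM with a tensor override. The GGUF filename below is hypothetical; pick a quant whose file size fits comfortably inside your 24 GB VRAM plus 64 GB RAM combined.

```
# Hedged llama.cpp example for 1x RTX 3090 (24 GB) + 64 GB DDR4; the .gguf filename is hypothetical.
# -ngl 99 offloads everything to the GPU first, then -ot keeps the MoE expert tensors in system RAM,
# which is the common trick for running big MoE models on a single 24 GB card.
./llama-server \
  -m Qwen3.5-122B-A10B-Q4_K_M.gguf \
  -c 16384 \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  --threads 16
```

If that still does not fit, drop to a smaller quant or reduce the context size first.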
2026-02-25T23:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/
Prudent_Appearance71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rett32
false
null
t3_1rett32
/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/
false
false
self
5
null
Hypeboard.ai - A live LLM Leaderboard based on /r/localllama posts/comments
11
I'm tentatively releasing my new side project, which is yet another LLM leaderboard, I know, I know. This one, though, isn't based on analytics; it's not even based on any tests or benchmarks. It's based on pure Reddit hype. What it does is scrape this sub and /r/localllm every few hours, pull every new post and comment, pull out any specific LLM that's mentioned, and try to determine whether it's being talked about positively or negatively. All mentions count toward the overall score, but positivity is also weighted (see the "All Models" page for all-time rankings by mentions). I've also added a pretty barebones API if you want to connect it to anything you're building or using. Could be an interesting dataset for you data nerds. It's been fun to see over the last month models start trending and then fall off the leaderboard as something new drops (the last 24 hours with Qwen 3.5, for example). Anyways, I have the domain for two years, so I'll probably keep it running for at least that long. If you have any suggestions for anything else I should be weighting the scores against, please comment. If there are any bugs let me know; I feel like I tested pretty thoroughly, but there's always something broken. And I guess this post will now also live on in my own database for mentioning a model by name, lol.
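For the data nerds, a rough sketch of what a scrape-and-score loop like this typically looks like with PRAW. This is not the site's actual code; the credentials, model list, and flat scoring are placeholders.

```
# Rough sketch of a scrape -> mention-extraction -> scoring loop (not hypeboard's actual code).
# Credentials, the model list and the flat +1 scoring are placeholders.
import re
import praw

MODEL_PATTERNS = {
    "qwen3.5": re.compile(r"\bqwen\s*3\.5\b", re.I),
    "llama-3.1": re.compile(r"\bllama[-\s]?3\.1\b", re.I),
}

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="hype-scraper")

scores = {}
for post in reddit.subreddit("LocalLLaMA").new(limit=100):
    text = f"{post.title}\n{post.selftext}"
    for name, pat in MODEL_PATTERNS.items():
        if pat.search(text):
            # one mention = 1 point; a real version would add a sentiment weight here
            scores[name] = scores.get(name, 0) + 1

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```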
2026-02-25T23:16:52
https://hypeboard.ai
peva3
hypeboard.ai
1970-01-01T00:00:00
0
{}
1retdgx
false
null
t3_1retdgx
/r/LocalLLaMA/comments/1retdgx/hypeboardai_a_live_llm_leaderboard_based_on/
false
false
default
11
null
Qwen/Qwen3.5-35B-A3B creates FlappyBird
265
If you have been wondering, as I have for a long time, whether locally hostable models work for general coding: they really can work impressively well for some use cases. The model did some impressive things during the making of this simple app. Spent two hours. Generated with Qwen/Qwen3.5-35B-A3B. Used Roo in VSCode. Started out by vaguely asking for a FlappyBird clone in HTML, CSS and TypeScript and to initialize the project with Vite. It looked impressive enough after the first task that I started asking for extra features: 1. Music and sound >Uses Web Audio API to generate sounds programmatically (no external audio files needed) 2. Scrollable background mountains. This request resulted in visual glitches, but after a bit of guidance, it was fixed into a proper parallax mountain. 3. Background flock of birds. A bit of back and forth, but it managed to understand my general pointers (they fly off screen, they are smeared from top to bottom, make them fly from right to left) and ended up in a great state. 4. Sound and music settings panel. This was one-shotted.
2026-02-25T23:05:54
https://v.redd.it/c3lr7ou30qlg1
Medium_Chemist_4032
v.redd.it
1970-01-01T00:00:00
0
{}
1ret353
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/c3lr7ou30qlg1/DASHPlaylist.mpd?a=1774652778%2CMWRlMWJmYTQwN2U4MjE1Zjg1NzMwOGY2MDVlZjg1OTI4MmViMmFjM2U2ZTQxYjBjMDg3MmQyY2UwYWE0NDE1ZA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/c3lr7ou30qlg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 664, 'hls_url': 'https://v.redd.it/c3lr7ou30qlg1/HLSPlaylist.m3u8?a=1774652778%2CZDE2MjlkNzJiNzc3YmNlNDMwMDRiNzk4NGFjM2VlMzc5NjI4NDQzODYyNjVkMGQxMWZhYTQ2ZmMyOThiYjE2Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c3lr7ou30qlg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1ret353
/r/LocalLLaMA/comments/1ret353/qwenqwen3535ba3b_creates_flappybird/
true
false
spoiler
265
{'enabled': False, 'images': [{'id': 'MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU', 'resolutions': [{'height': 149, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=108&crop=smart&format=pjpg&auto=webp&s=73b48d46ee669f224e9be9307c48cf03c38ad10c', 'width': 108}, {'height': 299, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=216&crop=smart&format=pjpg&auto=webp&s=11c9de660b9491dff426304d61d51d215577c275', 'width': 216}, {'height': 443, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=320&crop=smart&format=pjpg&auto=webp&s=d5787a38fe306255ceab0315d5fe58303e33d112', 'width': 320}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?format=pjpg&auto=webp&s=959c9a4f56558d04a4ba247caee41cd09739f887', 'width': 520}, 'variants': {'obfuscated': {'resolutions': [{'height': 149, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=6595d4465539d677775b9a6a5362895b827f4ea9', 'width': 108}, {'height': 299, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c5d213a872016dc248e9e95c6d8937135ce5e2ed', 'width': 216}, {'height': 443, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a6db50aafc256765543687e7092aff236db77ae0', 'width': 320}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?blur=40&format=pjpg&auto=webp&s=feda3f512739c2c27948f5ed817510701c131856', 'width': 520}}}}]}
LLM models for architecting and coding
1
I am new to these LLM models and I have been trying out Qwen3 Coder Next Q6\_K, as it seems to be hyped for coding, and to be honest I am a bit unimpressed/disappointed. I made a system architecture markdown file with an architecture overview and a file-by-file blueprint. I asked it to use a library referenced in the markdown and provided an .md with that library's README so it knew the library's purpose and implementation details, even though I had also described it in the system architecture. After running it in Roo Code, I see it keeps making mistakes and eventually runs itself into endless loops. Maybe I have the wrong settings, but I was wondering what other people's opinions are.
2026-02-25T23:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1resy8e/llm_models_for_architecting_and_coding/
peste19
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1resy8e
false
null
t3_1resy8e
/r/LocalLLaMA/comments/1resy8e/llm_models_for_architecting_and_coding/
false
false
self
1
null
Best Qwen3.5-35B-A3B GGUF for 24GB VRAM?!
154
My understanding is Vulkan/ROCm tends to have faster kernels for legacy llama.cpp quant types like q8\_0/q4\_0/q4\_1. So I made a mix using \*only\* those types! Definitely not your grandfather's gguf mix: Q4\_0 19.776 GiB (4.901 BPW) Interestingly it has very good perplexity for the size, and \*may be\* faster than other leading quants especially on Vulkan backend? I'd love some llama-sweep-bench results if anyone has Strix Halo, 7900XTX, etc. Also curious if it is any better for mac (or do they mostly use mlx?). Check it out if you're interested, compatible with mainline llama.cpp/ik\_llama.cpp, and the usual downstream projects as well: [https://huggingface.co/ubergarm/Qwen3.5-35B-A3B-GGUF?show\_file\_info=Qwen3.5-35B-A3B-Q4\_0.gguf](https://huggingface.co/ubergarm/Qwen3.5-35B-A3B-GGUF?show_file_info=Qwen3.5-35B-A3B-Q4_0.gguf)
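If anyone wants to post numbers but doesn't have llama-sweep-bench built, mainline llama-bench gives comparable pp/tg figures; the sweep-bench flags in ik_llama.cpp differ slightly, so treat this as a stand-in invocation.

```
# Plain llama.cpp benchmark as a stand-in for llama-sweep-bench (flags differ between the two tools).
./llama-bench -m Qwen3.5-35B-A3B-Q4_0.gguf -p 512 -n 128 -ngl 99
```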
2026-02-25T22:42:03
https://i.redd.it/bkw8ps1qwplg1.png
VoidAlchemy
i.redd.it
1970-01-01T00:00:00
0
{}
1resggh
false
null
t3_1resggh
/r/LocalLLaMA/comments/1resggh/best_qwen3535ba3b_gguf_for_24gb_vram/
false
false
https://preview.redd.it/…6f380e41d470a459
154
{'enabled': True, 'images': [{'id': 'bkw8ps1qwplg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=108&crop=smart&auto=webp&s=04d5f6458e32fac7ab3f6bfd2c94dcbf4abf6e70', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=216&crop=smart&auto=webp&s=1d17ca55ea420ea2d3a86375e322d1a78c3804ac', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=320&crop=smart&auto=webp&s=6bf37f7d638d3b824097fab7a863874d4db08f66', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=640&crop=smart&auto=webp&s=25abc44019457c22c40a183a9f0ff49947bfd3c5', 'width': 640}, {'height': 650, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=960&crop=smart&auto=webp&s=1a2fb12248ca74e52241d2727ff81c41e0a05cce', 'width': 960}, {'height': 732, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=1080&crop=smart&auto=webp&s=6a98d862537abee357e145594bb4dbf2ff264f80', 'width': 1080}], 'source': {'height': 1403, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?auto=webp&s=a8115d83e96d66baffb59ecae1bf251b529fa06f', 'width': 2069}, 'variants': {}}]}
PicoKittens/PicoStories-853K: Extremely Tiny Stories
25
**We are announcing our new pico-sized model: PicoStories-853K.** This is an **853,120 parameter model** trained entirely from scratch. It was designed using the **TinyStories dataset** to explore the capabilities of ultra-compact architectures. Unlike our previous models, **PicoStories-853K** is a pure completion model and does not support chat functionality. It requires a **seed** to generate a story; you can provide a starting narrative and let the model finish it. As this is a sub-1M parameter project, it is best suited for exploring the limits of **minimal hardware** and extremely lightweight text generation. It is intended for experimental use and is not recommended for tasks requiring factual accuracy or complex reasoning. We would like to hear your thoughts and get your feedback **Model Link:** [https://huggingface.co/PicoKittens/PicoStories-853K](https://huggingface.co/PicoKittens/PicoStories-853K)
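For anyone who hasn't used a pure completion model before, the generic transformers pattern for seeding it is below; if the repo ships its own loader or tokenizer quirks, follow the model card instead.

```
# Generic "seed a completion-only model" pattern with transformers.
# Follow the model card if the repo needs a custom loader; this is just the standard flow.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PicoKittens/PicoStories-853K"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

seed = "Once upon a time, a little cat found a red ball."
ids = tok(seed, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=120, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```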
2026-02-25T22:41:56
https://www.reddit.com/r/LocalLLaMA/comments/1resgct/picokittenspicostories853k_extremely_tiny_stories/
PicoKittens
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1resgct
false
null
t3_1resgct
/r/LocalLLaMA/comments/1resgct/picokittenspicostories853k_extremely_tiny_stories/
false
false
self
25
{'enabled': False, 'images': [{'id': 'kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=108&crop=smart&auto=webp&s=c5ab4c29d7c30e09f183f1e25afc70e1871d100d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=216&crop=smart&auto=webp&s=3670640a80bae0cb350be2dc88bb49b0d6aa6a2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=320&crop=smart&auto=webp&s=a1c33cceb72f938033f7484ab07fe33d99f5819a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=640&crop=smart&auto=webp&s=df36316054c2d7da910b22cb3ab48619b4b949d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=960&crop=smart&auto=webp&s=3c62526f57a4e9e77ab22759f5c0c2834f95e6f4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=1080&crop=smart&auto=webp&s=9cdd57587cb50e0a296b982fbb0c06a42df98b1b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?auto=webp&s=7e2136baafc1533325e3c23c274dd2005c24db0a', 'width': 1200}, 'variants': {}}]}
I found the "Lobotomy Layers" in Llama 3.1 and Qwen 2.5. (Kill Zone Atlas)
80
Ever wonder why "safe" models feel dumber? I mapped the "kill zones" of three major 7B/8B models to see what happens to Factual Integrity and Bias when you force a model to be sycophantic. **The Heatmaps:** * **Green** = Model is getting "more confident" in that behavior. * **Red** = The behavior is collapsing (The "Kill Zone"). **The Results are interesting:** In **Llama-3.1-8B**, the "Kill Zone" (dashed red box) is an absolute graveyard for Bias calibration. Between 35% and 52% depth, the model’s internal logic for bias completely inverts (−0.41). Meanwhile, Qwen seems much more resilient. Its sycophancy "switch" is isolated to a tiny window at 60% depth, leaving the factual layers mostly untouched. **Why this matters:** If you're doing LoRA or RepE, **stay out of the dashed boxes.** These are the layers where the model's "common sense" is most vulnerable to being overwritten.
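If you want to act on the "stay out of the dashed boxes" advice in a LoRA run, peft's LoraConfig can restrict which layers get adapters. A hedged sketch, mapping the 35 to 52 percent depth band onto Llama-3.1-8B's 32 decoder layers (the exact indices are illustrative):

```
# Hedged sketch: keep LoRA adapters out of the "kill zone" layers via layers_to_transform.
# The 35-52% band over 32 layers gives roughly layers 11-16 for Llama-3.1-8B.
from peft import LoraConfig

n_layers = 32
kill_zone = set(range(int(0.35 * n_layers), int(0.52 * n_layers) + 1))   # layers 11..16
safe_layers = [i for i in range(n_layers) if i not in kill_zone]

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    layers_to_transform=safe_layers,      # apply LoRA only outside the kill zone
)
```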
2026-02-25T22:30:15
https://i.redd.it/jshzjkh0wplg1.png
NoSir261
i.redd.it
1970-01-01T00:00:00
0
{}
1res533
false
null
t3_1res533
/r/LocalLLaMA/comments/1res533/i_found_the_lobotomy_layers_in_llama_31_and_qwen/
false
false
https://preview.redd.it/…ec5f955782dbf5ef
80
{'enabled': True, 'images': [{'id': 'jshzjkh0wplg1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=108&crop=smart&auto=webp&s=35fcaf21910ecc663f3a01f25e5204ae11119184', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=216&crop=smart&auto=webp&s=fb0043beaea6ccc422b0fccb98fff5a6c7a9f923', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=320&crop=smart&auto=webp&s=bf9ae797c67dac871a964595b60543cf1ddf86e4', 'width': 320}, {'height': 294, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=640&crop=smart&auto=webp&s=0657966a9edb5b7ef69541c8ef1cbb88984ecf8a', 'width': 640}, {'height': 441, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=960&crop=smart&auto=webp&s=441ebef9c1009cf16c159db5128a67c2663e3f67', 'width': 960}, {'height': 497, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=1080&crop=smart&auto=webp&s=3523c45dfe27f208bda77336c103fafa92818fdd', 'width': 1080}], 'source': {'height': 1139, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?auto=webp&s=3df880459bfe17f4a30a50b3172924961fbe7af9', 'width': 2474}, 'variants': {}}]}
Introducing CLAM (Cognitive Large Application Model): A dual-level cognitive architecture for LLM agents (Short/Long term memory)
0
Hi everyone! I'd like to share CLAM, a new approach to LLM agents I've been working on. Instead of endless fine-tuning, I designed a two-level cognitive architecture that simulates the human mind. 🧠 CLAM perceives, doubts (via a formidable internal 'Critic'), consolidates valid memories, and... forgets what is irrelevant. The code is now open-source on GitHub! I’d love to hear your thoughts and suggestions on how to improve it. 👇 GitHub: https://github.com/marcellom66/CLAM #AI #MachineLearning #LLM #CognitiveArchitecture #OpenSource
2026-02-25T22:11:47
https://i.redd.it/jtqf4xprsplg1.jpeg
Short-Confidence6287
i.redd.it
1970-01-01T00:00:00
0
{}
1rernj1
false
null
t3_1rernj1
/r/LocalLLaMA/comments/1rernj1/introducing_clam_cognitive_large_application/
false
false
https://preview.redd.it/…974c99df160189f2
0
{'enabled': True, 'images': [{'id': 'jtqf4xprsplg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=108&crop=smart&auto=webp&s=312f40e833c562360bebbbfc7318be3c036b4383', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=216&crop=smart&auto=webp&s=1bed15f8704d6fb27ac44173795e736d33b6e2fe', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=320&crop=smart&auto=webp&s=376d9e27cf5cd21f793c115c64b739537499c62d', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=640&crop=smart&auto=webp&s=8cee4d9eaf5f0327aed5afb2eb420cb5549d0e22', 'width': 640}, {'height': 532, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=960&crop=smart&auto=webp&s=dcc9abbe8bfe01d7b6fa26bf2f5e471f6a2a49f4', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=1080&crop=smart&auto=webp&s=7ecbd8232778cf804071c0dbc47d484cbadbc1ff', 'width': 1080}], 'source': {'height': 1135, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?auto=webp&s=7e48024b38d6e414458f5c6b7b7fdf896047df91', 'width': 2048}, 'variants': {}}]}
Cognitive Large Application Model
0
For review, and awaiting your suggestions to improve it, I present CLAM (Cognitive Large Application Model): a new approach to LLM agents. Instead of doing endless fine-tuning, I designed a two-level cognitive architecture (Short/Long Term) that simulates the human mind. 🧠 CLAM perceives, doubts (via a formidable internal 'Critic'), consolidates valid memories, and... forgets what is irrelevant. The code is now open-source on GitHub! https://github.com/marcellom66/CLAM Let me know what you think. 👇 #AI #MachineLearning #LLM #CognitiveArchitecture #OpenSource
2026-02-25T21:54:17
https://i.redd.it/0qzpfcbnpplg1.jpeg
Short-Confidence6287
i.redd.it
1970-01-01T00:00:00
0
{}
1rer6h1
false
null
t3_1rer6h1
/r/LocalLLaMA/comments/1rer6h1/cognitive_large_application_model/
false
false
https://preview.redd.it/…6a092c29a130d40f
0
{'enabled': True, 'images': [{'id': '0qzpfcbnpplg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=108&crop=smart&auto=webp&s=3b0e41abde85051faa8a86b890dc88e6f26c34ba', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=216&crop=smart&auto=webp&s=77540271fdb49bd421fa2a398767a13206100b37', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=320&crop=smart&auto=webp&s=9aa6aea795b0817719636651e14c4dc7fabbb35f', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=640&crop=smart&auto=webp&s=a1fdf197e5f04f9bb4dd7ca594a7569df03489fe', 'width': 640}, {'height': 532, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=960&crop=smart&auto=webp&s=6896d031714ea9205ea0b22bc5a277087fb7a2e6', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=1080&crop=smart&auto=webp&s=feb38f6fe25325bea4fb8c902f4bd6ec39aeaeb7', 'width': 1080}], 'source': {'height': 1135, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?auto=webp&s=85eb940257497d7d41c894541207de30bfa1d574', 'width': 2048}, 'variants': {}}]}
LM Link
42
I see that LM Studio just shadow-dropped one of the most amazing features ever. I have been waiting for this for a long time. LM Link allows a client machine to connect remotely to another machine acting as a server, using Tailscale. This is now integrated into the LM Studio app (which acts as either server or client) and configured through the GUI. Basically, this means you can now use on your laptop all the models present on your main workstation/server, just as if you were sitting in front of it. The feature is currently included in the 0.4.5 build 2 that was just released, and it's in preview (access needs to be requested and is granted in batches / I got mine minutes after requesting). It seems to work incredibly well. Once again these guys nailed it. Congrats to the team!!!
2026-02-25T21:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1rer60n/lm_link/
Blindax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rer60n
false
null
t3_1rer60n
/r/LocalLLaMA/comments/1rer60n/lm_link/
false
false
self
42
null
Qwen3.5-122B-A10B-UD-Q6_K_XL on dgx spark llama.cpp - vision is really good
1
[removed]
2026-02-25T21:33:08
https://www.reddit.com/r/LocalLLaMA/comments/1reqlzc/qwen35122ba10budq6_k_xl_on_dgx_spark_llamacpp/
Impossible_Art9151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reqlzc
false
null
t3_1reqlzc
/r/LocalLLaMA/comments/1reqlzc/qwen35122ba10budq6_k_xl_on_dgx_spark_llamacpp/
false
false
self
1
null
Cosmos-Reason2-2B on Jetson Orin Nano Super
51
Hi! Today, my team and I are releasing a version of **Cosmos-Reason2-2B** that is quantized so that it fits even on the NVIDIA Jetson Orin Nano Super. We managed to find a mixed-precision configuration that maintains virtually the same accuracy as the unquantized model while running really efficiently on the Nano Super and other edge devices :) [https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2](https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2)
2026-02-25T21:28:32
https://v.redd.it/sg8ywmszkplg1
Course_Latter
v.redd.it
1970-01-01T00:00:00
0
{}
1reqhjo
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/sg8ywmszkplg1/DASHPlaylist.mpd?a=1774646938%2CYWQ0YzJhN2M4NjlkODJiZDZkYzQ3MDNlMWFmYTIzN2VhYzA3MDUyNDAwYmVkMzM0MWU1NzljNDk1YWQ1MGRkNg%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/sg8ywmszkplg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 470, 'hls_url': 'https://v.redd.it/sg8ywmszkplg1/HLSPlaylist.m3u8?a=1774646938%2CYjI4YzY1ZGVjZTUzYWYyZDlhOTgwNzkwNmRlNTk2YWZkNTcwNTA4ZmZmNWEwNzM1MjQ5ZmY5Zjk0YjM5MjFiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sg8ywmszkplg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
t3_1reqhjo
/r/LocalLLaMA/comments/1reqhjo/cosmosreason22b_on_jetson_orin_nano_super/
false
false
https://external-preview…c0515cbc3f02a9db
51
{'enabled': False, 'images': [{'id': 'dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=108&crop=smart&format=pjpg&auto=webp&s=67f6257ddcebeee43881ac5c52274ba18339a399', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=216&crop=smart&format=pjpg&auto=webp&s=6179f23ae843b19de3dcf7d0a5936dc9b9319113', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=320&crop=smart&format=pjpg&auto=webp&s=d99c51c216b01bb833f4f8a8a4382bf2aa39af9c', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=640&crop=smart&format=pjpg&auto=webp&s=25b155bf8bdfa3e84a1e54c6edd4cee5b1e9ffc4', 'width': 640}, {'height': 528, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=960&crop=smart&format=pjpg&auto=webp&s=8f96fe553b9ef1dc9217b89c5fd0466d4a252789', 'width': 960}, {'height': 594, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=1080&crop=smart&format=pjpg&auto=webp&s=150a6374b754dd3c9d423d2a2cce8c9badc0609d', 'width': 1080}], 'source': {'height': 704, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?format=pjpg&auto=webp&s=78fb6045f6e5c0d6d08b73ed9f81f9c0dbaed587', 'width': 1280}, 'variants': {}}]}
Qwen 3.5 | ContextShift not working
3
I'm trying to run Qwen 3.5 locally, but I can't seem to get ContextShift to work, so on each input I have to reprocess the entire context. I've used different back-ends (Kobold.cpp and LM Studio), different models (the 122B and 35B ones) and quants from different makers. Whichever combination I use, ContextShift doesn't work. Has anyone else experienced this problem? Found a fix?
2026-02-25T21:24:40
https://www.reddit.com/r/LocalLLaMA/comments/1reqdr8/qwen_35_contextshift_not_working/
DisasterClear4178
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reqdr8
false
null
t3_1reqdr8
/r/LocalLLaMA/comments/1reqdr8/qwen_35_contextshift_not_working/
false
false
self
3
null
Overwhelmed by so many quantization variants
112
Not only are there hundreds of models to choose from, but also so many quantization variants that I may well go crazy. One needs not only to test and benchmark models, but also, within each model, to compare telemetry and quality across all the available quants and quant techniques. So many concepts like the new UD from Unsloth, AutoRound from Intel, imatrix, K\_XSS, you name it. All of them can be combined with REAM or REAP or any kind of pruning, multiplying the length of the list. Some people claim heavily quantized models (Q2, Q3) of some big models are actually better than smaller ones at Q4-Q6. Other people claim something else: there are so many claims! And they all sound like the singing of sirens. Someone tie me to the main mast! When I ask whether to choose MLX or GGUF, the answer comes strong like a dogma: MLX for Mac. And while it indeed seems to be faster (sometimes only slightly), MLX offers fewer configurations. Maybe with GGUF I would lose a couple of t/s but gain in context. Or maybe a 4-bit MLX is less advanced than the UD Q4 from Unsloth, and it is faster but with less quality. And it is a great problem to have: I root for someone super smart to create a brilliant new method that allows running gigantic models on potato hardware with lossless quality and decent speed. And that is happening: quants are getting super smart ideas. But I also feel totally overwhelmed. Anyone in the same boat? Are there any leaderboards comparing quant methods and sizes for a single model? And most importantly, what is the next revolutionary twist that will come to our future quants?
2026-02-25T21:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/
mouseofcatofschrodi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reqdpb
false
null
t3_1reqdpb
/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/
false
false
self
112
null
Help with Multi-GPU Setup: RTX 3060 12GB (eGPU) + Intel Arc 185H iGPU for VRAM Pooling?
1
[removed]
2026-02-25T21:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1req1r6/help_with_multigpu_setup_rtx_3060_12gb_egpu_intel/
kotemseravi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1req1r6
false
null
t3_1req1r6
/r/LocalLLaMA/comments/1req1r6/help_with_multigpu_setup_rtx_3060_12gb_egpu_intel/
false
false
self
1
null
Claude code : 98% usage with 3 minutes left. I call that optimal life management.
0
Nothing says responsible adulthood like squeezing every last percent out of a session and watching the timer say “resets in 3 minutes.” Peak efficiency. Maximum stress. Zero long term consequences. [Failed for this session](https://preview.redd.it/mk5hschqfplg1.png?width=1288&format=png&auto=webp&s=0f7713b8214fe40ff2f9e0ccdfc660e44538df04)
2026-02-25T21:00:41
https://www.reddit.com/r/LocalLLaMA/comments/1repq9z/claude_code_98_usage_with_3_minutes_left_i_call/
HereForAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1repq9z
false
null
t3_1repq9z
/r/LocalLLaMA/comments/1repq9z/claude_code_98_usage_with_3_minutes_left_i_call/
false
false
https://preview.redd.it/…15bf3f4d0baf3dee
0
null
Best way to expose local LLM to other devices?
13
I have a powerful setup at home and I would love the ability to use my locally hosted LLM from outside the house via my phone or notebook. Is there a safe way to do so?
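The two answers you will see most often, sketched below: a private overlay network (Tailscale/WireGuard), or an SSH tunnel from the client. Hostnames and the port are placeholders for your own setup; neither option exposes the server to the open internet.

```
# Option 1: Tailscale, both devices join your private tailnet
#   on the server:  tailscale up
#   on the client:  point your chat app at http://<server-tailnet-ip>:8080

# Option 2: plain SSH tunnel from the client, forwarding the remote API port to localhost
ssh -N -L 8080:localhost:8080 you@your-home-server
# then talk to http://localhost:8080 on the client device
```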
2026-02-25T20:56:31
https://www.reddit.com/r/LocalLLaMA/comments/1repm57/best_way_to_expose_local_llm_to_other_devices/
very_based_person
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1repm57
false
null
t3_1repm57
/r/LocalLLaMA/comments/1repm57/best_way_to_expose_local_llm_to_other_devices/
false
false
self
13
null
Anyone benchmarked Qwen3.5-27B (dense) and Qwen3.5-35B-A3B (MoE) on OpenClaw? How close to Claude Sonnet 4.5 for tool-calling?
0
Wondering if this would make me switch to running it fully locally instead of paying API costs.
2026-02-25T20:45:34
https://www.reddit.com/r/LocalLLaMA/comments/1repb9u/anyone_benchmarked_qwen3527b_dense_and/
98Saman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1repb9u
false
null
t3_1repb9u
/r/LocalLLaMA/comments/1repb9u/anyone_benchmarked_qwen3527b_dense_and/
false
false
self
0
null
Help me build a chatbot locally
0
Hey! I’m working on a chatbot where I need to process user text input from the frontend and generate agent audio output. I’ve come across examples for text-to-text and audio-to-audio interactions in the library, but I haven’t found a clear approach for combining them into a text-to-audio conversation. Could you suggest any tool to achieve this? Pipecat: I don't know how to implement text input. Flowise: I don't know how to implement speech output. Voiceflow: I don't know how to implement a local model. https://github.com/ShayneP/local-voice-ai/tree/main is speech-to-speech.
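A minimal text-in, audio-out sketch, assuming a local OpenAI-compatible chat endpoint and Piper as the TTS. The URL, model name and Piper flags are assumptions, so check your server's docs and `piper --help` for the exact options.

```
# Minimal text-in, audio-out pipeline: local LLM reply, then a local TTS (Piper) writes a wav.
# The endpoint, model name and Piper flags are assumptions; adjust to your actual stack.
import subprocess
import requests

def reply_as_audio(user_text: str, wav_path: str = "reply.wav") -> str:
    r = requests.post("http://localhost:11434/v1/chat/completions", json={
        "model": "qwen2.5:7b",
        "messages": [{"role": "user", "content": user_text}],
    }, timeout=120)
    reply = r.json()["choices"][0]["message"]["content"]

    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", wav_path],
        input=reply.encode(), check=True,
    )
    return wav_path
```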
2026-02-25T20:45:18
https://www.reddit.com/r/LocalLLaMA/comments/1repb0u/help_me_build_chatbot_localy/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1repb0u
false
null
t3_1repb0u
/r/LocalLLaMA/comments/1repb0u/help_me_build_chatbot_localy/
false
false
self
0
null
RX 7800 XT only getting ~5 FPS on DirectML ??? (DeepLiveCam 2.6)
1
I’ve fully set up DeepLiveCam 2.6 and it is working, but performance is extremely low and I’m trying to understand why. System: * Ryzen 5 7600X * RX 7800 XT (16GB VRAM) * 32GB RAM * Windows 11 * Python 3.11 venv * ONNX Runtime DirectML (dml provider confirmed active) Terminal confirms GPU provider: Applied providers: \['DmlExecutionProvider', 'CPUExecutionProvider'\] My current performance is: * \~5 FPS average * GPU usage: \~0–11% in Task Manager * VRAM used: \~2GB * CPU: \~15% My settings are: * Face enhancer OFF * Keep FPS OFF * Mouth mask OFF * Many faces OFF * 720p camera * Good lighting I just don't get why the GPU is barely being utilised. Questions: 1. Is this expected performance for AMD + DirectML? 2. Is ONNX Runtime bottlenecked on AMD vs CUDA? 3. Can DirectML actually fully utilise RDNA3 GPUs? 4. Has anyone achieved 15–30 FPS on RX 7000 series? 5. Any optimisation tips I might be missing?
2026-02-25T20:40:43
https://www.reddit.com/r/LocalLLaMA/comments/1rep6my/rx_7800_xt_only_getting_5_fps_on_directml/
RoboReings
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rep6my
false
null
t3_1rep6my
/r/LocalLLaMA/comments/1rep6my/rx_7800_xt_only_getting_5_fps_on_directml/
false
false
self
1
null
Introducing Mercury 2 - Diffusion for real-time reasoning
23
**What stands out:** * Uses **diffusion-based generation** instead of sequential token-by-token decoding * Generates tokens in parallel and refines them over a few steps * Claims **1,009 tokens/sec** on NVIDIA Blackwell GPUs * Pricing: **$0.25 / 1M input tokens**, **$0.75 / 1M output tokens** * 128K context * Tunable reasoning * Native tool use + schema-aligned JSON output * OpenAI API compatible They’re positioning it heavily for: * Coding assistants * Agentic loops (multi-step inference chains) * Real-time voice systems * RAG/search pipelines with multi-hop retrieval
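Not a local model, but since it advertises OpenAI API compatibility, the usual client pattern should apply. The base_url and model id below are assumptions taken from the announcement, not verified endpoints.

```
# "OpenAI API compatible" usually means this exact pattern works.
# The base_url and model id are assumptions from the announcement, not verified values.
from openai import OpenAI

client = OpenAI(api_key="YOUR_INCEPTION_KEY", base_url="https://api.inceptionlabs.ai/v1")
resp = client.chat.completions.create(
    model="mercury-2",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(resp.choices[0].message.content)
```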
2026-02-25T20:39:21
https://www.inceptionlabs.ai/blog/introducing-mercury-2
TyedalWaves
inceptionlabs.ai
1970-01-01T00:00:00
0
{}
1rep5bg
false
null
t3_1rep5bg
/r/LocalLLaMA/comments/1rep5bg/introducing_mercury_2_diffusion_for_realtime/
false
false
https://external-preview…8a10f80f292c401c
23
{'enabled': False, 'images': [{'id': 'WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=108&crop=smart&auto=webp&s=7e60251a45af88e5e2a1b81d770d43f882c45998', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=216&crop=smart&auto=webp&s=1babf2ea4180148358f47e085040b0a692a4d03a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=320&crop=smart&auto=webp&s=0199418e525b9c6864599e78e261558d6f2e2e3d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=640&crop=smart&auto=webp&s=05f10947af2701799bda8132edd1b6e046f2a224', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=960&crop=smart&auto=webp&s=ad1a960467c8aeebefce7a18f53be10c88204fa5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=1080&crop=smart&auto=webp&s=60bd3f0cabb641376c3f340e137bb22a8fa6fbfc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?auto=webp&s=99f0ab61b7c47bd88b174244ced2f26bf563e085', 'width': 1200}, 'variants': {}}]}
US or EU based provider for open weight models?
3
I want to use open-weight models instead of proprietary AI models like Claude or ChatGPT. However, my hardware is not good enough to run those, so I am looking for a provider that hosts state-of-the-art open-weight models like Kimi K2 or Minimax M2.5 in the US or Europe and offers access at a reasonable price. I do not want to use Chinese providers directly, as I want my data to stay in Europe or the US. What are the best providers for this use case?
2026-02-25T20:30:34
https://www.reddit.com/r/LocalLLaMA/comments/1reowuj/us_or_eu_based_provider_for_open_weight_models/
Old-Jaguar-479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reowuj
false
null
t3_1reowuj
/r/LocalLLaMA/comments/1reowuj/us_or_eu_based_provider_for_open_weight_models/
false
false
self
3
null
Air llm ?
0
Can anyone explain and suggest how to use this? Will it work? I recently saw a Twitter post about it. Is this real? I have very low specs: a 1650 Ti with 4GB VRAM and 16GB RAM. Can it be of any use to me?
2026-02-25T20:29:26
https://i.redd.it/zt2yrk5iaplg1.jpeg
Less_Strain7577
i.redd.it
1970-01-01T00:00:00
0
{}
1reovq3
false
null
t3_1reovq3
/r/LocalLLaMA/comments/1reovq3/air_llm/
false
false
https://preview.redd.it/…1213345299a82820
0
{'enabled': True, 'images': [{'id': 'zt2yrk5iaplg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=108&crop=smart&auto=webp&s=f53d98f9744557ebfa29f98d3f539582c9cf2402', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=216&crop=smart&auto=webp&s=f1c6474fe9e3527683c0ef01216b54372d6efd0b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=320&crop=smart&auto=webp&s=7885e383b66ada5be71bd8eaabfb60db2f43ff9b', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=640&crop=smart&auto=webp&s=e53fa373954e7d423a7ecdae59ba301549b3f904', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=960&crop=smart&auto=webp&s=502708c53ae1ef6c647dd81c6506939d9f33ba34', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=1080&crop=smart&auto=webp&s=b52c9bc1e7e821f2b24e6df90e87c2338539f4a3', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?auto=webp&s=dc5dee97940ab87b6ba40e8300c7a4297604e38f', 'width': 1080}, 'variants': {}}]}
Help needed: Chatterbox Multilanguage (Polish) producing artifacts and long pauses
2
Hi everyone, I am looking for some advice on fine-tuning Chatterbox Multilanguage for the Polish language. I am currently facing two specific issues that are significantly affecting the quality of my narrations:

1. Audio artifacts (growls/screams): Occasionally, the model generates strange, non-vocal sounds that sound like sudden growls or screams. These appear randomly and are not related to the text being read.

2. Long pauses between sentences: The silence between sentences is way too long, which breaks the flow of the story and makes the narration feel disjointed.

To give you a better idea of what I mean, you can listen to a few minutes of this video (it is a historical podcast about Leonardo da Vinci): [https://www.youtube.com/watch?v=RP8cUaGOn5g](https://www.youtube.com/watch?v=RP8cUaGOn5g)

I would really appreciate it if anyone could suggest which parameters I should tweak to eliminate these artifacts and fix the pacing. Here are the settings I am currently using:

    model:
      repo_id: chatterbox-multilingual
    tts_engine:
      device: cuda
      predefined_voices_path: voices
      reference_audio_path: reference_audio
      default_voice_id: Kustosz.wav
    paths:
      model_cache: model_cache
      output: outputs
    generation_defaults:
      temperature: 0.7
      exaggeration: 0.5
      cfg_weight: 0.5
      seed: 0
      speed_factor: 1.1
      sentence_pause_ms: 100
      language: pl
      chunk_size: 200
      top_p: 0.95
      repetition_penalty: 1.2
    audio_output:
      format: wav
      sample_rate: 24000
      max_reference_duration_sec: 30
      save_to_disk: false
      crossfade_duration: 0.1
      intro_silence_ms: 0
      inter_chunk_silence_ms: 0
      group_chunks_by_speaker: false
      cleanup_vram_after_job: true
      norm_loudness: true
      prompt_norm_loudness: true

Thanks in advance for any help!
2026-02-25T20:29:14
https://www.reddit.com/r/LocalLLaMA/comments/1reovjz/help_needed_chatterbox_multilanguage_polish/
Tomasz_NieMasz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reovjz
false
null
t3_1reovjz
/r/LocalLLaMA/comments/1reovjz/help_needed_chatterbox_multilanguage_polish/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18.jpeg?width=108&crop=smart&auto=webp&s=24aa52fb4817ebe99af39e8bffff7050aaf21d29', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18.jpeg?width=216&crop=smart&auto=webp&s=d362638eb06e8c4e319a011f90d559222dac9cd2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18.jpeg?width=320&crop=smart&auto=webp&s=91ea7325f467b57f60956a2aa6df04b4ce990d17', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18.jpeg?auto=webp&s=b3dd08b5fcfecd3931480ddd574b72fb33381905', 'width': 480}, 'variants': {}}]}
Mac Studio 128/256GB for local LLM coding?
2
Hello, I'm a developer with side projects. Lately, I have been thinking of buying a Mac Studio with 128 or 256GB of RAM to support my projects. My idea is to give goals to a local LLM and let it do its job while I'm sleeping or working on other projects. How feasible is that? Will this work? Is it worth the cost, or should I stick to subscriptions and give up on overnight autonomous coding sessions?
2026-02-25T20:20:33
https://www.reddit.com/r/LocalLLaMA/comments/1reon35/mac_studio_128256gb_for_local_llm_coding/
TechDude12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reon35
false
null
t3_1reon35
/r/LocalLLaMA/comments/1reon35/mac_studio_128256gb_for_local_llm_coding/
false
false
self
2
null
Local LLM tool calling - Anyone heard of this?
0
Hey guys, I have been using Sapphire AI for a bit now and wanted to get others' opinions on it, since I think I was one of the first to discover it. I've been poking around the self-hosted AI space for a while, and most projects are either half-finished or just a thin wrapper around Ollama with a pretty UI slapped on. This one seems different. It's called Sapphire. It looks like a solo dev has been building it, and it's way more complete than I expected when I started trying it out. It's got wake word detection, a full STT/TTS pipeline, Home Assistant integration, per-chat personas, scheduled autonomous tasks, and a ton more. If anyone has used it before, please let me know.
2026-02-25T20:16:09
https://www.reddit.com/r/LocalLLaMA/comments/1reoiqj/local_llm_tool_calling_anyone_heard_of_this/
Dudebro-420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reoiqj
false
null
t3_1reoiqj
/r/LocalLLaMA/comments/1reoiqj/local_llm_tool_calling_anyone_heard_of_this/
false
false
self
0
null
RAG is cooked, Qwen 3.5 for multi modal long context.
0
Qwen 3.5 35B does something that previously I had only seen Gemini do: it uses far fewer tokens per image than it would take to tokenize the actual words in that image. Meaning if you take a large PDF and convert all pages to images (resized to fit a 1000x1000 box), your context will be smaller than OCRing the same PDF. Plus your images, graphs, and tables stay intact. The crazy thing is that no information is lost, and you can ask the model complex questions that require understanding of the whole document, meaning better answers overall. It's a neat trick, probably made possible by the new way of training. As the saying goes: an image says more than a thousand words.
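A minimal sketch of this flow, assuming the PDF pages have already been rendered to PNGs (e.g. with poppler's `pdftoppm -png -scale-to 1000`) and that a local OpenAI-compatible server is serving a vision-capable model. The endpoint and model name are placeholders, not confirmed values:

```typescript
// Send pre-rendered PDF page images to a local OpenAI-compatible vision endpoint.
// ASSUMPTIONS: pages were already exported as page-1.png ... page-N.png (<= 1000x1000),
// and the baseURL / model id below are placeholders for your own setup.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1", // assumed local server
  apiKey: "not-needed-locally",
});

async function askWholeDocument(pageFiles: string[], question: string) {
  // One image_url part per page, base64-encoded as a data URI.
  const imageParts = pageFiles.map((file) => ({
    type: "image_url" as const,
    image_url: { url: `data:image/png;base64,${fs.readFileSync(file).toString("base64")}` },
  }));

  const resp = await client.chat.completions.create({
    model: "qwen3.5-35b", // assumed model id
    messages: [
      { role: "user", content: [{ type: "text" as const, text: question }, ...imageParts] },
    ],
  });
  return resp.choices[0]?.message?.content;
}

askWholeDocument(["page-1.png", "page-2.png"], "Summarize the main findings across all pages.")
  .then(console.log);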
2026-02-25T20:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1reoabu/rag_is_cooked_qwen_35_for_multi_modal_long_context/
OutlandishnessIll466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reoabu
false
null
t3_1reoabu
/r/LocalLLaMA/comments/1reoabu/rag_is_cooked_qwen_35_for_multi_modal_long_context/
false
false
self
0
null
The Qwen 3.5 A3B model at 4 bit k_xl works better with 8 bit KV cache...
27
I'll probably toss up some examples later, but I've got some things to do today. I just wanted to mention that I did a whole mess of personal benchmarking and testing on the new Qwen 3.5 A3B. That thing is amazing. Interestingly, when I re-ran everything with a Q8_0 KV cache, it improved across the board. Normally, kicking the KV cache down to 8 bit gives me a bit more headroom but comes with a measurable drop in performance, so this was a weird result I thought I'd share. Anyone else mess with this? Remarkable model all around. I can't wait to mess with this a bit more later. Going to set up some wild stuff :).
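For anyone wanting to try the same setting, a minimal sketch of launching llama.cpp's llama-server with an 8-bit KV cache from a Node script. The GGUF path and port are placeholders; `--cache-type-k` / `--cache-type-v` are the standard llama.cpp flags for KV cache quantization, and depending on your build, quantizing the V cache may also require flash attention to be enabled:

```typescript
// Launch llama-server with a q8_0 KV cache for both K and V.
// ASSUMPTION: the GGUF path, context size, and port are placeholders for your own setup.
import { spawn } from "node:child_process";

const server = spawn("llama-server", [
  "-m", "./models/Qwen3.5-A3B-Q4_K_XL.gguf", // placeholder model path
  "--cache-type-k", "q8_0",
  "--cache-type-v", "q8_0", // note: may require flash attention in some llama.cpp builds
  "-ngl", "99",             // offload all layers to GPU
  "-c", "32768",            // context size
  "--port", "8080",
], { stdio: "inherit" });

server.on("exit", (code) => console.log(`llama-server exited with code ${code}`));
```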
2026-02-25T20:02:47
https://www.reddit.com/r/LocalLLaMA/comments/1reo5bv/the_qwen_35_a3b_model_at_4_bit_k_xl_works_better/
teachersecret
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reo5bv
false
null
t3_1reo5bv
/r/LocalLLaMA/comments/1reo5bv/the_qwen_35_a3b_model_at_4_bit_k_xl_works_better/
false
false
self
27
null
Running local agents with Ollama: how are you handling KB access control without cloud dependencies?
1
Been thinking about this a lot lately and I'm curious how others are approaching it. As soon as you have more than one agent sharing a knowledge base, access control becomes a real problem. In cloud setups you can offload this to managed services, but if you're running everything locally the options are less obvious. A few questions I'm genuinely stuck on:

1. Where should enforcement live? At the API layer (each agent gets its own endpoint with restricted access), at the MCP server level, or is there a smarter way to bind agent identity to specific knowledge scopes natively?

2. MCP specifically: the protocol doesn't have a native permission model. If you're exposing a local KB as an MCP server, how do you prevent one agent from querying another agent's memory? Are people just running separate server instances per agent, or is there a more elegant solution?

3. Is KB-level isolation enough? Meaning: each agent gets its own isolated KB and never touches the others. Simple, but it feels like it breaks down the moment you want shared context between agents with different clearance levels.

Curious if anyone has found a clean pattern here or if this is still an unsolved problem in local-first agent architectures.
2026-02-25T19:57:55
https://www.reddit.com/r/LocalLLaMA/comments/1reo0b1/running_local_agents_with_ollama_how_are_you/
Comfortable_Poem_866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1reo0b1
false
null
t3_1reo0b1
/r/LocalLLaMA/comments/1reo0b1/running_local_agents_with_ollama_how_are_you/
false
false
self
1
null
MCP has no native permission model, how are you handling access control for multi-agent KB setups?
1
As agents get more autonomous, how should we handle KB access control? Enforce at the API layer, at the MCP server level, or is there a smarter way to bind an agent identity to specific knowledge scopes? Curious how people are solving this today.
2026-02-25T19:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1renval/mcp_has_no_native_permission_model_how_are_you/
Comfortable_Poem_866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1renval
false
null
t3_1renval
/r/LocalLLaMA/comments/1renval/mcp_has_no_native_permission_model_how_are_you/
false
false
self
1
null
Everything I learned building on-device AI into a React Native app -- Text, Image Gen, Speech to Text, Multimodal AI, Intent Classification, Prompt Enhancements and more
8
I spent some time building a React Native app that runs LLMs, image generation, voice transcription, and vision AI entirely on-device. No cloud. No API keys. Works in airplane mode. Here's what I wish someone had told me before I started. If you're thinking about adding on-device AI to an RN app, this should save you some pain. **Text generation (LLMs)** Use llama.rn. It's the only serious option for running GGUF models in React Native. It wraps llama.cpp and gives you native bindings for both Android (JNI) and iOS (Metal). Streaming tokens via callbacks works well. The trap: you'll think "just load the model and call generate." The real work is everything around that. Memory management is the whole game on mobile. A 7B Q4 model needs \~5.5GB of RAM at runtime (file size x 1.5 for KV cache and activations). Most phones have 6-8GB total and the OS wants half of it. You need to calculate whether a model will fit BEFORE you try to load it, or the OS silently kills your app and users think it crashed. I use 60% of device RAM as a hard budget. Warn at 50%, block at 60%. Human-readable error messages. This one thing prevents more 1-star reviews than any feature you'll build. GPU acceleration: OpenCL on Android (Adreno GPUs), Metal on iOS. Works, but be careful -- flash attention crashes with GPU layers > 0 on Android. Enforce this in code so users never hit it. KV cache quantization (f16/q8\_0/q4\_0) is a bigger win than GPU for most devices. Going from f16 to q4\_0 roughly tripled inference speed in my testing. **Image generation (Stable Diffusion)** This is where it gets platform-specific. No single library covers both. Android: look at MNN (Alibaba's framework, CPU, works on all ARM64 devices) and QNN (Qualcomm AI Engine, NPU-accelerated, Snapdragon 8 Gen 1+ only). QNN is 3x faster but only works on recent Qualcomm chips. You want runtime detection with automatic fallback. iOS: Apple's ml-stable-diffusion pipeline with Core ML. Neural Engine acceleration. Their palettized models (\~1GB, 6-bit) are great for memory-constrained devices. Full precision (\~4GB, fp16) is faster on ANE but needs the headroom. Real-world numbers: 5-10 seconds on Snapdragon NPU, 15 seconds CPU on flagship, 8-15 seconds iOS ANE. 512x512 at 20 steps. The key UX decision: show real-time preview every N denoising steps. Without it, users think the app froze. With it, they watch the image form and it feels fast even when it's not. **Voice (Whisper)** whisper.rn wraps whisper.cpp. Straightforward to integrate. Offer multiple model sizes (Tiny/Base/Small) and let users pick their speed vs accuracy tradeoff. Real-time partial transcription (words appearing as they speak) is what makes it feel native vs "processing your audio." One thing: buffer audio in native code and clear it after transcription. Don't write audio files to disk if privacy matters to your users. **Vision (multimodal models)** Vision models need two files -- the main GGUF and an mmproj (multimodal projector) companion. This is terrible UX if you expose it to users. Handle it transparently: auto-detect vision models, auto-download the mmproj, track them as a single unit, search the model directory at runtime if the link breaks. Download both files in parallel, not sequentially. On a 2B vision model this cuts download time nearly in half. SmolVLM at 500M is the sweet spot for mobile -- \~7 seconds on flagship, surprisingly capable for document reading and scene description. **Tool calling (on-device agent loops)** This one's less obvious but powerful. 
Models that support function calling can use tools -- web search, calculator, date/time, device info -- through an automatic loop: LLM generates, you parse for tool calls, execute them, inject results back into context, LLM continues. Cap it (I use max 3 iterations, 5 total calls) or the model will loop forever. Two parsing paths are critical. Larger models output structured JSON tool calls natively through llama.rn. Smaller models output XML like `<tool_call>`. If you only handle JSON, you cut out half the models that technically support tools but don't format them cleanly. Support both. Capability gating matters. Detect tool support at model load time by inspecting the jinja chat template. If the model doesn't support tools, don't inject tool definitions into the system prompt -- smaller models will see them and hallucinate tool calls they can't execute. Disable the tools UI entirely for those models. The calculator uses a recursive descent parser. Never eval(). Ever. **Intent classification (text vs image generation)** If your app does both text and image gen, you need to decide what the user wants. "Draw a cute dog" should trigger Stable Diffusion. "Tell me about dogs" should trigger the LLM. Sounds simple until you hit edge cases. Two approaches: pattern matching (fast, keyword-based -- "draw," "generate," "create image") or LLM-based classification (slower, uses your loaded text model to classify intent). Pattern matching is instant but misses nuance. LLM classification is more accurate but adds latency before generation even starts. I ship both and let users choose. Default to pattern matching. Offer a manual override toggle that forces image gen mode for the current message. The override is important -- when auto-detection gets it wrong, users need a way to correct it without rewording their message. **Prompt enhancement (the LLM-to-image-gen handoff)** Simple user prompts make bad Stable Diffusion inputs. "A dog" produces generic output. But if you run that prompt through your loaded text model first with an enhancement system prompt, you get a \~75-word detailed description with artistic style, lighting, composition, and quality modifiers. The output quality difference is dramatic. The gotcha that cost me real debugging time: after enhancement finishes, you need to call stopGeneration() to reset the LLM state. But do NOT clear the KV cache. If you clear KV cache after every prompt enhancement, your next vision inference takes 30-60 seconds longer. The cache from the text model helps subsequent multimodal loads. Took me a while to figure out why vision got randomly slow. **Model discovery and HuggingFace integration** You need to help users find models that actually work on their device. This means HuggingFace API integration with filtering by device RAM, quantization level, model type (text/vision/code), organization, and size category. The important part: calculate whether a model will fit on the user's specific device BEFORE they download 4GB over cellular. Show RAM requirements next to every model. Filter out models that won't fit. For vision models, show the combined size (GGUF + mmproj) because users don't know about the companion file. Curate a recommended list. Don't just dump the entire HuggingFace catalog. Pick 5-6 models per capability that you've tested on real mid-range hardware. Qwen 3, Llama 3.2, Gemma 3, SmolLM3, Phi-4 cover most use cases. For vision, SmolVLM is the obvious starting point. Support local import too. 
Let users pick a .gguf file from device storage via the native file picker. Parse the model name and quantization from the filename. Handle Android content:// URIs (you'll need to copy to app storage). Some users have models already and don't want to re-download. **The architectural decisions that actually matter** 1. Singleton services for anything touching native inference. If two screens try to load different models at the same time, you get a SIGSEGV. Not an exception. A dead process. Guard every load with a promise check. 2. Background-safe generation. Your generation service needs to live outside React component lifecycle. Use a subscriber pattern -- screens subscribe on mount, get current state immediately, unsubscribe on unmount. Generation continues regardless of what screen the user is on. Without this, navigating away kills your inference mid-stream. 3. Service-store separation. Services write to Zustand stores, UI reads from stores. Services own the long-running state. Components are just views. This sounds obvious but it's tempting to put generation state in component state and you'll regret it the first time a user switches tabs during a 15-second image gen. 4. Memory checks before every model load. Not optional. Calculate required RAM (file size x 1.5 for text, x 1.8 for image gen), compare against device budget, block if it won't fit. The alternative is random OOM crashes that you can't reproduce in development because your test device has 12GB. 5. Native download manager on Android. RN's JS networking dies when the app backgrounds. Android's DownloadManager survives. Bridge to it. Watch for a race condition where the completion broadcast arrives before RN registers its listener -- track event delivery with a boolean flag. **What I'd do differently** Start with text generation only. Get the memory management, model loading, and background-safe generation pattern right. Then add image gen, then vision, then voice. Each one reuses the same architectural patterns (singleton service, subscriber pattern, memory budget) but has its own platform-specific quirks. The foundation matters more than the features. Don't try to support every model. Pick 3-4 recommended models per capability, test them thoroughly on real mid-range devices (not just your flagship), and document the performance. Users with 6GB phones running a 7B model and getting 3 tok/s will blame your app, not their hardware. Happy to answer questions about any of this. Especially the memory management, tool calling implementation, or the platform-specific image gen decisions.
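To make the memory-budget rule concrete, here is a minimal TypeScript sketch of the pre-load check described above (file size x 1.5 for text models, 60% of device RAM as the hard cap, warning at 50%). The function name and the way device RAM is obtained are illustrative, not the app's actual implementation:

```typescript
// Pre-flight memory check before loading a GGUF model, following the budget rules above.
// ASSUMPTION: deviceRamBytes comes from however you read total RAM on the device
// (e.g. react-native-device-info's getTotalMemory()); the function name is hypothetical.
type MemoryVerdict =
  | { ok: true }
  | { ok: false; reason: string };

const RUNTIME_MULTIPLIER = 1.5; // text models: file size x 1.5 for KV cache + activations
const HARD_BUDGET = 0.6;        // block loads above 60% of device RAM
const WARN_BUDGET = 0.5;        // warn above 50%

export function checkModelFits(modelFileBytes: number, deviceRamBytes: number): MemoryVerdict {
  const required = modelFileBytes * RUNTIME_MULTIPLIER;
  const hardCap = deviceRamBytes * HARD_BUDGET;
  const warnCap = deviceRamBytes * WARN_BUDGET;

  if (required > hardCap) {
    // Human-readable error instead of letting the OS silently kill the process.
    return {
      ok: false,
      reason: `This model needs ~${(required / 1e9).toFixed(1)} GB of RAM but your device ` +
              `can only spare ~${(hardCap / 1e9).toFixed(1)} GB. Try a smaller quantization.`,
    };
  }
  if (required > warnCap) {
    console.warn("Model will fit, but expect the OS to evict other apps while it runs.");
  }
  return { ok: true };
}

// Example: a ~3.7 GB 7B Q4 file needs ~5.5 GB at runtime; on an 8 GB phone the hard cap
// is 4.8 GB, so the load is blocked before it can OOM.
console.log(checkModelFits(3.7e9, 8e9));
```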
2026-02-25T19:52:10
https://www.reddit.com/r/LocalLLaMA/comments/1renuky/everything_i_learned_building_ondevice_ai_into_a/
alichherawalla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1renuky
false
null
t3_1renuky
/r/LocalLLaMA/comments/1renuky/everything_i_learned_building_ondevice_ai_into_a/
false
false
self
8
null
Qwen3.5 Model Comparison: 27B vs 35B on RTX 4090
85
I wanted to check qwen3.5 models that can be run on my GPU. So I compare 3 GGUF options. **Hardware:** RTX 4090 (24GB VRAM) **Test:** Multi-agent Tetris development (Planner → Developer → QA) # Models Under Test |Model|Preset|Quant|Port|VRAM|Parallel| |:-|:-|:-|:-|:-|:-| |Qwen3.5-27B|`qwen35-27b-multi`|Q4\_K\_XL|7082|17 GB|3 slots| |Qwen3.5-35B|`qwen35-35b-q3-multi`|Q3\_K\_XL|7081|16 GB|3 slots| |Qwen3.5-35B|`qwen35-35b-multi`|Q4\_K\_XL|7080|20 GB|3 slots| **Architecture comparison:** * **27B**: Dense MoE, 27B total / 3B active params * **35B**: Sparse MoE, 35B total / 3B active params # Charts # Total Time Comparison https://preview.redd.it/4k6v6oaf2plg1.png?width=1500&format=png&auto=webp&s=fc1387a394caa912a388f96eae8e8405a020a298 # Phase Breakdown https://preview.redd.it/763vc0vi2plg1.png?width=1500&format=png&auto=webp&s=a4fb7acd8c22a8ba97a5c40cf1596c569dfeb4cb # VRAM Efficiency https://preview.redd.it/6lpoqssk2plg1.png?width=1500&format=png&auto=webp&s=2d4de5cb2326247fc7b0b321d64955ffbf627fe7 # Code Output Comparison https://preview.redd.it/31c5ptpm2plg1.png?width=1500&format=png&auto=webp&s=3564dd47cc5a0a98ce8a4afcaac240f00b94d438 # Results # Summary |Model|VRAM|Total Time|Plan|Dev|QA|Lines|Valid| |:-|:-|:-|:-|:-|:-|:-|:-| |Qwen3.5-27B Q4|17 GB|**134.0s**|36.3s|72.1s|25.6s|312|YES| |**Qwen3.5-35B Q3**|16 GB|**34.8s**|7.3s|20.1s|7.5s|322|YES| |Qwen3.5-35B Q4|20 GB|**37.8s**|8.2s|22.0s|7.6s|311|YES| # Key Findings 1. **35B models are dramatically faster than 27B** — 35s vs 134s (3.8x faster!) 2. **35B Q3 is fastest overall** — 34.8s total, uses only 16GB VRAM 3. **35B Q4 slightly slower than Q3** — 37.8s vs 34.8s (8% slower, 4GB more VRAM) 4. **27B is surprisingly slow** — Dense architecture less efficient than sparse MoE 5. **All models produced valid, runnable code** — 311-322 lines each # Speed Comparison |Phase|27B Q4|35B Q3|35B Q4|35B Q3 vs 27B| |:-|:-|:-|:-|:-| |Planning|36.3s|7.3s|8.2s|**5.0x faster**| |Development|72.1s|20.1s|22.0s|**3.6x faster**| |QA Review|25.6s|7.5s|7.6s|**3.4x faster**| |**Total**|134.0s|34.8s|37.8s|**3.8x faster**| # VRAM Efficiency |Model|VRAM|Time|VRAM Efficiency| |:-|:-|:-|:-| |35B Q3|16 GB|34.8s|**Best** (fastest, lowest VRAM)| |27B Q4|17 GB|134.0s|Worst (slow, mid VRAM)| |35B Q4|20 GB|37.8s|Good (fast, highest VRAM)| # Generated Code & QA Analysis All three models produced functional Tetris games with similar structure: |Model|Lines|Chars|Syntax|QA Verdict| |:-|:-|:-|:-|:-| |27B Q4|312|11,279|VALID|Issues noted| |35B Q3|322|11,260|VALID|Issues noted| |35B Q4|311|10,260|VALID|Issues noted| # QA Review Summary All three QA agents identified similar potential issues in the generated code: **Common observations across models:** * Collision detection edge cases (pieces near board edges) * Rotation wall-kick not fully implemented * Score calculation could have edge cases with >4 lines * Game over detection timing **Verdict:** All three games compile and run correctly. The QA agents were thorough in identifying *potential* edge cases, but the core gameplay functions properly. The issues noted are improvements rather than bugs blocking playability. # Code Quality Comparison |Aspect|27B Q4|35B Q3|35B Q4| |:-|:-|:-|:-| |Class structure|Good|Good|Good| |All 7 pieces|Yes|Yes|Yes| |Rotation states|4 each|4 each|4 each| |Line clearing|Yes|Yes|Yes| |Scoring|Yes|Yes|Yes| |Game over|Yes|Yes|Yes| |Controls help|Yes|Yes|Yes| All three models produced structurally similar, fully-featured implementations. 
# Recommendation **Qwen3.5-35B Q3\_K\_XL as the daily driver.** * 3.8x faster than Qwen3.5-27B * Uses less VRAM (16GB vs 17GB) * Produces equivalent quality code * Best VRAM efficiency of all tested models Full benchmark with generated code: [https://jaigouk.com/gpumod/benchmarks/20260225\_qwen35\_comparison/](https://jaigouk.com/gpumod/benchmarks/20260225_qwen35_comparison/)
2026-02-25T19:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/
jaigouk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1renq5y
false
null
t3_1renq5y
/r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/
false
false
https://preview.redd.it/…74a5bcbbd706b15c
85
null