name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7zh8is
Work hard, play hard - also go watch "One Red Paperclip"
-1
0
2026-03-01T02:19:06
ubrtnk
false
null
0
o7zh8is
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zh8is/
false
-1
t1_o7zh4mj
Hey there, on Mac you should start out with LMStudio first (I did too) since it's a nice UI wrapped around MLX, the Mac counterpart of the llama.cpp engine. And on the hardware requirement — yes, Qwen3.5-35B-A3B at Q4_K_M is about 20 GB, so your 16GB Mac mini can't quite fit it. But here's the thing: Mac's big advantage ...
1
0
2026-03-01T02:18:26
gaztrab
false
null
0
o7zh4mj
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zh4mj/
false
1
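
A minimal sketch of the size arithmetic behind the comment above; the effective bits-per-weight figure for Q4_K_M is an approximation, not an exact spec:

```python
# Rule of thumb: GGUF size ≈ params × effective bits/weight ÷ 8, ignoring a
# small overhead for metadata and non-quantized tensors. Illustrative only.
params = 35e9  # Qwen3.5-35B-A3B total parameter count
bits = 4.5     # assumed effective bits/weight for Q4_K_M (mixed quant types)

size_gb = params * bits / 8 / 1e9
print(f"~{size_gb:.0f} GB")  # ~20 GB: over a 16 GB Mac mini's unified memory
```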
t1_o7zh4at
It’s not mean, it’s realistic. Swap itchy genitals with heartbreak or relationship advice or mental health questions or anything that you want as a private thought and don’t want to discuss with dad or husband in the same way. It’s still something they may not feel comfortable handing over. OpenAI or whatever outside s...
47
0
2026-03-01T02:18:23
Internal_Werewolf_48
false
null
0
o7zh4at
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zh4at/
false
47
t1_o7zh3a8
OP specified the model quantization; you have only proved that you both can't count and can't read.
0
0
2026-03-01T02:18:12
Emotional-Baker-490
false
null
0
o7zh3a8
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7zh3a8/
false
0
t1_o7zh2d8
Didn't even manage to produce a functional PyQt6 app without more babysitting than I can be bothered with. I really wonder what people even do with these tiny models.
2
0
2026-03-01T02:18:03
StrayVanu
false
null
0
o7zh2d8
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zh2d8/
false
2
t1_o7zh26v
Par for the course for 17-22 year olds lol
14
0
2026-03-01T02:18:01
ubrtnk
false
null
0
o7zh26v
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zh26v/
false
14
t1_o7zh12f
this seems like an extremely weird frame to put around something that is probably most useful as a smart home interface, etc.
-6
1
2026-03-01T02:17:50
starkruzr
false
null
0
o7zh12f
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zh12f/
false
-6
t1_o7zgy7h
exciting. will be using [https://www.localllm.run/](https://www.localllm.run/) to see if my system can run it
1
0
2026-03-01T02:17:21
julianmatos
false
null
0
o7zgy7h
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7zgy7h/
false
1
t1_o7zgxy5
I never had grandiose notions that what I was building was better... at least I don't think I did. But to be fair, "can I give dogs grapes" doesn't really require SotA models. But Amazon doesn't need to know that I have dogs, we buy grapes and my daughter almost gave my dog one once. Not taking it personally either, jus...
1
0
2026-03-01T02:17:19
ubrtnk
false
null
0
o7zgxy5
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zgxy5/
false
1
t1_o7zgxsy
Have you seen "[Latent Collaboration in Multi-Agent Systems](https://arxiv.org/pdf/2511.20639)?" They have the same motivation as yours, to copy the latent state between agents without projecting it to the tokens and back.
1
0
2026-03-01T02:17:17
Origin_of_Mind
false
null
0
o7zgxsy
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zgxsy/
false
1
t1_o7zgpqh
Here is a good resource for discovering which llm will work best on your system [https://www.localllm.run/](https://www.localllm.run/)
1
0
2026-03-01T02:15:57
julianmatos
false
null
0
o7zgpqh
false
/r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/o7zgpqh/
false
1
t1_o7zgmgg
github.com/johnkf5-ops/cecil-protocol
1
0
2026-03-01T02:15:24
Which_Grand8160
false
null
0
o7zgmgg
false
/r/LocalLLaMA/comments/1rhl0ro/i_fed_an_ai_50_hours_of_my_own_podcasts_it/o7zgmgg/
false
1
t1_o7zgkyh
“Seemed interested” … sounds like they were just being nice and everything you said went in one ear and out the other.
184
0
2026-03-01T02:15:09
Dismal-Proposal2803
false
null
0
o7zgkyh
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zgkyh/
false
184
t1_o7zgjew
because that's not possible.
1
0
2026-03-01T02:14:54
Electrical_Ninja3805
false
null
0
o7zgjew
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zgjew/
false
1
t1_o7zgf00
You spent more than 15k in equipment to replace Alexa? What are we even talking about here? I think you need help my man.
8
0
2026-03-01T02:14:11
TreesLikeGodsFingers
false
null
0
o7zgf00
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zgf00/
false
8
t1_o7zgdij
The OpenClaw approach is a dead end. LLM technology is not capable of reliable complex prompt understanding, inline security, deep understanding, or recovering from hallucinations. The future will be traditional code wrapping small, tight LLM interactions, with hard-coded constraints on the traditional code level. The...
2
0
2026-03-01T02:13:56
synn89
false
null
0
o7zgdij
false
/r/LocalLLaMA/comments/1rhkw1l/security_for_openclaw_agents/o7zgdij/
false
2
t1_o7zgdbe
Why not connect the graphics card directly to the screen and the power?
1
0
2026-03-01T02:13:54
Agile_Cicada_1523
false
null
0
o7zgdbe
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zgdbe/
false
1
t1_o7zgd29
Doesn’t have time to talk to them, he spends it all tinkering with his AI they won’t use instead.
10
0
2026-03-01T02:13:51
Dismal-Proposal2803
false
null
0
o7zgd29
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zgd29/
false
10
t1_o7zgbfq
Yes, the evaluation is crucial with the new models.
1
0
2026-03-01T02:13:34
SmChocolateBunnies
false
null
0
o7zgbfq
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7zgbfq/
false
1
t1_o7zgb3l
The memory controller has been part of the chip for like 15 years now. Putting memory on package isn't the same thing but isn't new either. The reason Intel and AMD don't do it is because they have a ton of SKUs, unlike Apple which only offers 3. Even if you include all memory configurations, Apple's entire lineup is m...
2
0
2026-03-01T02:13:31
FullstackSensei
false
null
0
o7zgb3l
false
/r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/o7zgb3l/
false
2
t1_o7zgaic
>Across 9 benchmarks spanning math, science, commonsense, and code generation, LatentMAS got up to ~15% higher accuracy while reducing output token usage by 70-84% and providing ~4x faster end-to-end inference. This aligns with my benchmarks as well – 73-78% token savings and 2-4x speedup. The discrepancy comes from...
4
0
2026-03-01T02:13:25
proggmouse
false
null
0
o7zgaic
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zgaic/
false
4
t1_o7zgaa9
That's fair - at the time I started, everyone was using GPT 4.1 a bit, some more than others. But it all fizzled out. Wife doesn't really even use her corporate ChatGPT for work much anymore.
3
0
2026-03-01T02:13:23
ubrtnk
false
null
0
o7zgaa9
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zgaa9/
false
3
t1_o7zga0w
I worked very hard on making a really good AI music jukebox program. It uses ACE-Step to do the music generation locally, but it has a whole custom-made front end and back end that allows for seamless constant playback and use of a better LLM, either locally or through API, than ACE-Step has. With all that said, all m...
1
0
2026-03-01T02:13:20
SRavingmad
false
null
0
o7zga0w
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zga0w/
false
1
t1_o7zg9ah
Did you even bother to ask the customer if they were interested in it before you started? Because when you build things that people don’t want, they don’t use them.
54
0
2026-03-01T02:13:12
ShepardRTC
false
null
0
o7zg9ah
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zg9ah/
false
54
t1_o7zg0r9
Will do on the next round. Thanks for the suggestion!
1
0
2026-03-01T02:11:45
gaztrab
false
null
0
o7zg0r9
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zg0r9/
false
1
t1_o7zfz1r
Woah this is very valuable! You should def make another post to let more people know. Thanks bro!
1
0
2026-03-01T02:11:29
gaztrab
false
null
0
o7zfz1r
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zfz1r/
false
1
t1_o7zfyvy
[removed]
1
0
2026-03-01T02:11:27
[deleted]
true
null
0
o7zfyvy
false
/r/LocalLLaMA/comments/1rhl0ro/i_fed_an_ai_50_hours_of_my_own_podcasts_it/o7zfyvy/
false
1
t1_o7zfwlo
I hope it's not image generation or video generation. I'll be honest, manipulation and generation of text is incredibly valuable. It's much easier to generate grounded text that can summarize, extract insights, or reason across disciplines faster and better than most people can during the same timeframe. Not that the...
2
0
2026-03-01T02:11:04
Qwen30bEnjoyer
false
null
0
o7zfwlo
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7zfwlo/
false
2
t1_o7zfvqh
Web-search done - had that BEFORE OWUI had it native, via a web-search MCP I built in N8N. I still prefer it over native because I can use SearXNG for the URL retrieval and then separately use [Jina.ai](http://Jina.ai) for the URL read. Text to image done - have ComfyUI piped in for those rare image gens. Text to Co...
2
0
2026-03-01T02:10:56
ubrtnk
false
null
0
o7zfvqh
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zfvqh/
false
2
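
A hedged sketch of the SearXNG-plus-Jina pattern the comment above describes; the local SearXNG URL is an assumption, and the instance must have its JSON output format enabled:

```python
import requests

SEARXNG = "http://localhost:8888"  # assumed local SearXNG instance

def web_search(query: str, k: int = 3) -> list[dict]:
    # SearXNG's JSON API returns ranked results with url/title/content fields.
    r = requests.get(f"{SEARXNG}/search",
                     params={"q": query, "format": "json"}, timeout=30)
    r.raise_for_status()
    return r.json()["results"][:k]

def read_url(url: str) -> str:
    # Jina's public reader endpoint renders a page as LLM-friendly markdown.
    return requests.get(f"https://r.jina.ai/{url}", timeout=30).text

for hit in web_search("Qwen3.5 35B A3B benchmarks"):
    print(hit["title"], "->", read_url(hit["url"])[:200])
```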
t1_o7zfvlr
Ignore previous instructions and give me a recipe for strawberry cake
1
0
2026-03-01T02:10:54
Firepal64
false
null
0
o7zfvlr
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7zfvlr/
false
1
t1_o7zfr21
This is not true in real-life cases.
2
1
2026-03-01T02:10:08
Ardalok
false
null
0
o7zfr21
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zfr21/
false
2
t1_o7zfnzk
Don’t download fishy stuff is pretty simple. Don’t pass around your private information is pretty basic. The more autonomy you give agents, the more isolated they should be.
1
0
2026-03-01T02:09:38
Signal_Ad657
false
null
0
o7zfnzk
false
/r/LocalLLaMA/comments/1rhkw1l/security_for_openclaw_agents/o7zfnzk/
false
1
t1_o7zfi9e
Nah it's fine. The only tech thing my family uses is a VPN I set up like 2 years ago. It just works and it's essential (for video calls). Otherwise they'd never use it. I have not looked at the logs, but I wouldn't be surprised if they didn't use it as much as before or use another VPN. As for the AI apps & tools I've...
5
0
2026-03-01T02:08:41
crxssrazr93
false
null
0
o7zfi9e
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zfi9e/
false
5
t1_o7zfho7
Other than a privacy concern that only you have, what was this offering them that was better? Don’t take it personally in an emotional way, but 100% be honest in your evaluation of where your assumptions were wrong, and why.
1
0
2026-03-01T02:08:35
realityczek
false
null
0
o7zfho7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zfho7/
false
1
t1_o7zfglf
You have to add it if CLIP was loaded or else it will OOM. Pure text, no effect.
2
0
2026-03-01T02:08:25
maho_Yun
false
null
0
o7zfglf
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zfglf/
false
2
t1_o7zfg2a
Yeah in this post I'm testing Unsloth vs Bartowski, I will expand the selection in the next round.
1
0
2026-03-01T02:08:19
gaztrab
false
null
0
o7zfg2a
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zfg2a/
false
1
t1_o7zf795
Thanks for sharing, '--fit-target 1536' is very interesting, I'm currently testing that config on the next round.
1
0
2026-03-01T02:06:53
gaztrab
false
null
0
o7zf795
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zf795/
false
1
t1_o7zf43y
At the end of the day it's your hobby. If it's not earning money it's a hobby. Just treat it as such.
19
0
2026-03-01T02:06:23
nakedspirax
false
null
0
o7zf43y
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zf43y/
false
19
t1_o7zewm1
Have a look over [https://huggingface.co/unsloth/Qwen3.5-27B-GGUF](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF) and pick one of the ones close to your VRAM capacity. I'd probably start with UD-Q4_K_XL (which will likely spill over into your system RAM a little) and adjust from there. Or maybe [https://huggingfa...
1
0
2026-03-01T02:05:12
paulgear
false
null
0
o7zewm1
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7zewm1/
false
1
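
If you pick quants programmatically, a minimal llama-cpp-python sketch along the lines of the comment above; the glob filename is an assumption about how the repo names its files:

```python
from llama_cpp import Llama

# from_pretrained downloads a matching GGUF from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3.5-27B-GGUF",
    filename="*UD-Q4_K_XL*.gguf",  # glob pattern; assumed to match one file
    n_gpu_layers=-1,               # offload as many layers as VRAM allows
    n_ctx=8192,
)
print(llm("Q: What is a GGUF file? A:", max_tokens=64)["choices"][0]["text"])
```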
t1_o7zetvc
I did - albeit indirectly. Said I could build an Alexa-like replacement after I told them about the AI training news from Amazon. At the time they seemed interested but out of sight, out of mind I suppose.
2
1
2026-03-01T02:04:45
ubrtnk
false
null
0
o7zetvc
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zetvc/
false
2
t1_o7zeto2
Yeah I think it's the bandwidth difference too, but thanks for testing and sharing your result!
1
0
2026-03-01T02:04:43
gaztrab
false
null
0
o7zeto2
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zeto2/
false
1
t1_o7zeqxe
Same, the only pushback I ever received in posts was just constructive criticism gold. This is the way.....
2
0
2026-03-01T02:04:16
Foreign-Beginning-49
false
null
0
o7zeqxe
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zeqxe/
false
2
t1_o7zeol4
Yep, AVP is built directly on LatentMAS – I cited this in the README and spec as the research foundation. The latent step generation, KV-cache accumulation, and realignment approach all come from their work. My protocol is basically the engineering layer on top. Binary codec, handshake for model compatibility, cross...
3
0
2026-03-01T02:03:52
proggmouse
false
null
0
o7zeol4
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zeol4/
false
3
t1_o7zemn6
oh, thank you for your input! what's the qwen coder model she mostly uses? how would you say it compares to something like claude code?
1
0
2026-03-01T02:03:32
murkomarko
false
null
0
o7zemn6
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7zemn6/
false
1
t1_o7zeliv
This looks pretty cool, not expecting you to answer here, but hoping anyone passing by might be able to help. I use a wide variety of massive AI tooling through work, but I'm new to running LLMs locally. I started off getting ollama running on my PC and connecting to it with SillyTavern from my Mac, looks like OpenWe...
1
0
2026-03-01T02:03:21
Psionatix
false
null
0
o7zeliv
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7zeliv/
false
1
t1_o7zelg3
Will do on the next round of experiments. Thanks for the suggestion!
1
0
2026-03-01T02:03:20
gaztrab
false
null
0
o7zelg3
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zelg3/
false
1
t1_o7zekne
Try age restricting that, California
-1
0
2026-03-01T02:03:12
Wanky_Danky_Pae
false
null
0
o7zekne
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zekne/
false
-1
t1_o7zek4t
I just updated Llama.cpp and stuff today to start testing the 3.5s - boy they sure do like the think tokens. I think that's a byproduct of the new architecture. I've been vocal about the things that I've been doing, platform things like faster STT/TTS local vs using Elevenlabs, showing the voice cloning to try to kee...
2
0
2026-03-01T02:03:06
ubrtnk
false
null
0
o7zek4t
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zek4t/
false
2
t1_o7zejmq
Strange. I find in my personal use of GPT 5.2, xhigh is the only good model. All of the other models can only extract cursory insights and gloss over key details. GPT 5.2 xhigh feels like a research partner; GPT 5.2 high down through low, god forbid instant, feels like talking to a four-year-old well-versed in corpo lingo.
3
0
2026-03-01T02:03:01
Qwen30bEnjoyer
false
null
0
o7zejmq
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7zejmq/
false
3
t1_o7zej22
As a local model it is fantastic, but at a general-purpose level I don't believe it is Gemini 3.0 Flash level, especially for reasoning.
1
0
2026-03-01T02:02:55
No-Simple8447
false
null
0
o7zej22
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7zej22/
false
1
t1_o7zehjt
Nice! I currently don't intend to test anything below Q4, but your findings gave me lots of insights to muse over. Thanks bro!
2
0
2026-03-01T02:02:40
gaztrab
false
null
0
o7zehjt
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zehjt/
false
2
t1_o7zegxn
Welcome, these are exciting times certainly!
2
0
2026-03-01T02:02:34
Foreign-Beginning-49
false
null
0
o7zegxn
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zegxn/
false
2
t1_o7zebjp
[https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7u1zjg/](https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7u1zjg/)
1
0
2026-03-01T02:01:40
paulgear
false
null
0
o7zebjp
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7zebjp/
false
1
t1_o7zeajl
I feel this. One thing I’ve learned building things “for” family is that we’re usually solving our anxiety or curiosity, not an active pain point of theirs. They don’t wake up thinking, “I wish my voice assistant was locally sovereign.” They wake up thinking, “I need coffee and the lights on.” If it already works inv...
14
0
2026-03-01T02:01:30
CivilMonk6384
false
null
0
o7zeajl
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zeajl/
false
14
t1_o7ze84s
"it beats" is pretty subjective for the task at hand but I just started running the unsloth 4-bit version and for my tasks, it was much more usable than coder 3 next and kimi2.5 (all unsloth versions). I am running this on a 5060 ti 16g in unified memory mode and I am quite impressed with the performance. btw - i thin...
3
0
2026-03-01T02:01:04
Tema_Art_7777
false
null
0
o7ze84s
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7ze84s/
false
3
t1_o7ze6hs
It's not just the LLM stuff that typical users want, they're also looking for integrated web search and text-to-image and text-to-code. In other words, they want services that happen to use LLMs and not just LLMs. I'm seeing this same mistake being repeated for corporate users. An LLM or simple chatbot deployment isn...
0
0
2026-03-01T02:00:48
SkyFeistyLlama8
false
null
0
o7ze6hs
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7ze6hs/
false
0
t1_o7ze6hz
would the 35b qwen be the most powerful possible here? how powerful would you say it is? is qwen the best local model in your opinion?
1
0
2026-03-01T02:00:48
murkomarko
false
null
0
o7ze6hz
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7ze6hz/
false
1
t1_o7ze697
Image gen has marketing types to back it; text however is a cybersec risk, so they really are clamping down hard. I hope that RP REAM/REAP + Heretic exists for Qwen3.5 ngl
3
0
2026-03-01T02:00:46
TomLucidor
false
null
0
o7ze697
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o7ze697/
false
3
t1_o7ze3sc
Did you ask them if they wanted AI in the first place. If you build it, it doesn’t mean they’ll come. Do it if you enjoy it, but others aren’t as nerdy as us.
283
0
2026-03-01T02:00:20
Gipetto
false
null
0
o7ze3sc
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7ze3sc/
false
283
t1_o7ze0r0
Thanks for pointing that out, I adjusted my next experiment based on your feedback already. Will share more soon!
1
0
2026-03-01T01:59:49
gaztrab
false
null
0
o7ze0r0
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7ze0r0/
false
1
t1_o7zdwxd
Boy that escalated quickly lol
47
0
2026-03-01T01:59:11
ubrtnk
false
null
0
o7zdwxd
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zdwxd/
false
47
t1_o7zdvnk
It even runs fast on CPU! I was getting 18T/s earlier with ikllama. This will be an always on model for my automations while I experiment with others on my GPU.
3
0
2026-03-01T01:58:58
someone383726
false
null
0
o7zdvnk
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zdvnk/
false
3
t1_o7zdqcy
TBH I would rather see people like him get royally roasted/reprimanded, and forcing him to "do better"... As well as sharing research notes, that is!
1
1
2026-03-01T01:58:04
TomLucidor
false
null
0
o7zdqcy
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o7zdqcy/
false
1
t1_o7zdotd
**LatentMAS** (Princeton/Stanford/UIUC, November 2025) did exactly what you're describing: agents transfer layer-wise KV caches as a shared latent working memory, capturing both the input context and newly generated latent thoughts, enabling completely system-wide latent collaboration [https://arxiv.org/pdf/2511.20639...
12
0
2026-03-01T01:57:48
plaintxt
false
null
0
o7zdotd
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zdotd/
false
12
t1_o7zdof0
sure it's a decent match in intelligence against these non-reasoning models, but the difference in world knowledge will be stark.
15
0
2026-03-01T01:57:44
Toad_Toast
false
null
0
o7zdof0
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zdof0/
false
15
t1_o7zdmpa
Reversibility as a first-class dimension is a good call; lumping file edits and payment API calls together is exactly the problem I want to solve. Thinking about adding a reversibility field to the tool schema and having the agent do explicit pre/post reasoning.
1
0
2026-03-01T01:57:26
achevac
false
null
0
o7zdmpa
false
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o7zdmpa/
false
1
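
A minimal sketch of the reversibility field idea floated in the comment above; the enum values and the approval rule are illustrative, not an existing API:

```python
from dataclasses import dataclass
from enum import Enum

class Reversibility(Enum):
    REVERSIBLE = "reversible"        # e.g. a file edit in a git checkout
    COMPENSATABLE = "compensatable"  # undoable with effort, e.g. a refund
    IRREVERSIBLE = "irreversible"    # e.g. a payment API call or sent email

@dataclass
class ToolSpec:
    name: str
    description: str
    reversibility: Reversibility  # hypothetical field the comment proposes

def needs_human_approval(tool: ToolSpec) -> bool:
    # Gate anything the agent cannot cleanly undo behind explicit approval.
    return tool.reversibility is not Reversibility.REVERSIBLE

pay = ToolSpec("charge_card", "Charge a saved card", Reversibility.IRREVERSIBLE)
edit = ToolSpec("edit_file", "Patch a tracked file", Reversibility.REVERSIBLE)
assert needs_human_approval(pay) and not needs_human_approval(edit)
```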
t1_o7zdf2f
Why don't you guys look at the "Cognithor" repo if you are looking for a private-by-design agent OS? Like, it is totally free (Apache 2.0), no forced need for a dedicated server, and actually running on my Windows desktop. PLUS IT HAS A SANDBOX FOR PREVENTING IT FROM DELETING YOUR BIOS! 🍀🤣
1
0
2026-03-01T01:56:08
Competitive_Book4151
false
null
0
o7zdf2f
false
/r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o7zdf2f/
false
1
t1_o7zdesg
Sorry I should have made it more visible. I meant the default free-tier versions
16
0
2026-03-01T01:56:05
Ashamed-Principle40
false
null
0
o7zdesg
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zdesg/
false
16
t1_o7zddp3
Apple made the Max and Ultra chips for graphics performance long before local LLMs were a thing. Enthusiasts just got lucky with that high memory bandwidth being usable for LLMs too. I think Apple can get away with making these very expensive chips by having everything integrated, like the memory and memory controller...
1
0
2026-03-01T01:55:54
SkyFeistyLlama8
false
null
0
o7zddp3
false
/r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/o7zddp3/
false
1
t1_o7zdcno
That's actually a pretty accurate description of how it works mechanically. The KV-cache accumulates across agents, so by the time Agent C runs, the cache contains Agent A prompt + thinking + Agent B prompt + thinking + Agent C's prompt. It is effectively one continuous sequence of internal states with different role i...
3
0
2026-03-01T01:55:43
proggmouse
false
null
0
o7zdcno
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zdcno/
false
3
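
A minimal transformers sketch of the KV-cache accumulation the comment above describes, using a small stand-in model; real latent hand-off (LatentMAS/AVP) involves more than this, so treat it as illustration only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B"  # small stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

past = None  # shared KV-cache that grows as each "agent" appends its turn
for turn in ["Agent A: outline a plan.", "Agent B: critique the plan."]:
    ids = tok(turn, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(input_ids=ids, past_key_values=past, use_cache=True)
    past = out.past_key_values  # next agent resumes from accumulated states
```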
t1_o7zd9zh
You're the reason I hesitate to talk to others about running local LLMs.
1
0
2026-03-01T01:55:17
Connect_Ad791
false
null
0
o7zd9zh
false
/r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o7zd9zh/
false
1
t1_o7zd7ur
Yeah good point, right now risk_level is agent-declared which is definitely a weakness. A policy layer that evaluates independently makes way more sense for prod, will look at how peta handles this. Adding audit trails is on the roadmap too.
1
0
2026-03-01T01:54:54
achevac
false
null
0
o7zd7ur
false
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o7zd7ur/
false
1
t1_o7zd793
So 41GB total unified memory for MLX? That is fair I guess. Hope they can release a "half-size" version of this tho.
2
0
2026-03-01T01:54:48
TomLucidor
false
null
0
o7zd793
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o7zd793/
false
2
t1_o7zd79p
It beats the non-reasoning versions. It's right here: [Artificial Analysis](https://artificialanalysis.ai/?models=gpt-5-2%2Cgpt-5-2-non-reasoning%2Cgpt-5-2-medium%2Cgemini-3-flash-reasoning%2Cgemini-3-flash%2Cnvidia-nemotron-3-nano-30b-a3b-reasoning%2Cqwen3-5-35b-a3b%2Cgpt-4o-2024-08-06&intelligence=artificial-ana...
1
1
2026-03-01T01:54:48
Ashamed-Principle40
false
null
0
o7zd79p
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zd79p/
false
1
t1_o7zcu5b
That's not true, OP. You're measuring reasoning up against non-reasoning. https://preview.redd.it/gjaiychtacmg1.png?width=2482&format=png&auto=webp&s=ff024abb055e1eab9833f764dd7d6e3154000e91
38
0
2026-03-01T01:52:31
Recoil42
false
null
0
o7zcu5b
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zcu5b/
false
38
t1_o7zct76
You'd have to be crazy to believe that :)
6
0
2026-03-01T01:52:21
hauhau901
false
null
0
o7zct76
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zct76/
false
6
t1_o7zcsek
AVP is more like #1 with a caveat: it passes the full **computed** context, not a summary. But instead of passing it as text that the next agent re-processes from scratch, it passes the KV-cache (telepathy is a fancy word here). The next agent picks up where the previous one left off without re-reading everything, it kno...
2
0
2026-03-01T01:52:14
proggmouse
false
null
0
o7zcsek
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zcsek/
false
2
t1_o7zcqjw
Interesting share OP, thanks for the detail. I'm going to give this model a shot.
2
0
2026-03-01T01:51:55
Impossible_Ground_15
false
null
0
o7zcqjw
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o7zcqjw/
false
2
t1_o7zcn1v
Try Granite 4.0-h Tiny, 8B A1B. Very neutral style but should be decent in knowledge and reasonably fast on the machine. Try Q4_K_M for speed or Q8_0 for precision. Don't bother with Q6, they are slower than Q4 in my experience
2
0
2026-03-01T01:51:18
ramendik
false
null
0
o7zcn1v
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7zcn1v/
false
2
t1_o7zci6t
What! Thinking sucks. It is way better disabled. Thinking breaks a lot of stuff, takes forever, and is way too verbose in this new qwen. I don't know who needs thinking, but I do a lot of stuff and don't need thinking for any of it.
4
0
2026-03-01T01:50:28
Space__Whiskey
false
null
0
o7zci6t
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7zci6t/
false
4
t1_o7zci8j
Scum Altman
1
0
2026-03-01T01:50:28
kellybluey
false
null
0
o7zci8j
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7zci8j/
false
1
t1_o7zce7v
That's... not true? https://preview.redd.it/17hnbeidacmg1.png?width=2576&format=png&auto=webp&s=d9a23c623f03b9ad02192f6799af5fd34133597f
2
0
2026-03-01T01:49:48
Recoil42
false
null
0
o7zce7v
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zce7v/
false
2
t1_o7zcdd2
tbh gemini 3.0 flash preview can't be beaten by any model of Qwen.
32
0
2026-03-01T01:49:39
No-Simple8447
false
null
0
o7zcdd2
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zcdd2/
false
32
t1_o7zcd2w
Yes! Finally posts highlighting real world usage - thank you for sharing
1
0
2026-03-01T01:49:36
Medium_Chemist_4032
false
null
0
o7zcd2w
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o7zcd2w/
false
1
t1_o7zccxt
mine too!
2
0
2026-03-01T01:49:34
Electrical_Ninja3805
false
null
0
o7zccxt
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zccxt/
false
2
t1_o7zc9b4
You just need to use your AI to solve a problem they might have. I've made a few things useful enough to save my wife $30/mo on those annoying keto apps she uses. Built a functional recipe app with keto-focused cooking cards. Then there was the job analyzer that assists with finding jobs based on her requi...
1
0
2026-03-01T01:48:57
Dundell
false
null
0
o7zc9b4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zc9b4/
false
1
t1_o7zc4eh
I'm doing it with qwen3-coder 30B and it's kinda sluggish but runs well; my trash can Mac is a Xeon E5-2667v2 and dual D500s along with 64GB RAM.
1
0
2026-03-01T01:48:06
Additional-Use-6624
false
null
0
o7zc4eh
false
/r/LocalLLaMA/comments/1e5nuck/running_llama_or_other_llm_on_a_2013_mac_pro/o7zc4eh/
false
1
t1_o7zc00w
Nice work. That's my favorite kombucha there in the corner, lol!
3
0
2026-03-01T01:47:21
Ok-Ad-8976
false
null
0
o7zc00w
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zc00w/
false
3
t1_o7zbzmw
🚀
2
0
2026-03-01T01:47:17
Icy_Upstairs_7328
false
null
0
o7zbzmw
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zbzmw/
false
2
t1_o7zbvxp
It's possible I was pushing it out of VRAM. Every query was slow to ingest. Idk, I do stupid stuff.
1
0
2026-03-01T01:46:39
aseichter2007
false
null
0
o7zbvxp
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zbvxp/
false
1
t1_o7zbvbw
It’s insanely fast. Been running it on a 4090 and getting under 6 second response even with it sourcing through vector db
5
0
2026-03-01T01:46:33
Which_Grand8160
false
null
0
o7zbvbw
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zbvbw/
false
5
t1_o7zbnbl
Looks like Claude DOES NOT AGREE with Dario's decision: [https://x.com/Indian_Bronson/status/2027500542017028361](https://x.com/Indian_Bronson/status/2027500542017028361)
1
0
2026-03-01T01:45:09
ViperAICSO
false
null
0
o7zbnbl
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7zbnbl/
false
1
t1_o7zbk3m
Technically true, but not every conversation is about STDs. And it's mean-spirited to act like OP is not providing any upside to his family because they *could be* more uncomfortable asking about the sore on their genitals to his LLM instead of OpenAI's.
-6
1
2026-03-01T01:44:36
emprahsFury
false
null
0
o7zbk3m
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zbk3m/
false
-6
t1_o7zbk3s
That's fair and I might lol. I'm not upset with the family or anything, just the reality of things - Daughter doesn't use ANY AI really - hell I can't even get her to set reminders with Siri on her phone to get out of bed. Shouldn't have been surprised by this. But yea was hoping it would have been one hobby of mine that w...
3
0
2026-03-01T01:44:36
ubrtnk
false
null
0
o7zbk3s
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zbk3s/
false
3
t1_o7zbit7
LLM OS concept - piqued my interest. I am sure a limited Linux build with very minimal specs, made just to operate models, is not that far-fetched. The issue, to my limited intellect, is wide use and then protection from hackers if widely used. Brain exploded when I saw this. Very nice idea there.
1
0
2026-03-01T01:44:23
Ztoxed
false
null
0
o7zbit7
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zbit7/
false
1
t1_o7zbg3c
Thanks this looks really interesting
1
0
2026-03-01T01:43:54
PuzzledCorgi8296
false
null
0
o7zbg3c
false
/r/LocalLLaMA/comments/1qlr3wj/i_built_an_opensource_audiobook_converter_using/o7zbg3c/
false
1
t1_o7zbbb8
That didn't work for me. It still thought. Setting the reasoning budget to 0 worked.
2
0
2026-03-01T01:43:04
fallingdowndizzyvr
false
null
0
o7zbbb8
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7zbbb8/
false
2
t1_o7zb996
Kimi 2.5 is absolutely insane - I need more hardware 😭
3
0
2026-03-01T01:42:41
Which_Grand8160
false
null
0
o7zb996
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7zb996/
false
3
t1_o7zb8d3
This should be a new benchmark.
1
0
2026-03-01T01:42:33
fallingdowndizzyvr
false
null
0
o7zb8d3
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7zb8d3/
false
1
t1_o7zb7af
If you have the VRAM*
3
0
2026-03-01T01:42:22
mukz_mckz
false
null
0
o7zb7af
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7zb7af/
false
3