name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o85zrux
Ah, the amount of energy wasted on this...
5
0
2026-03-02T02:59:25
Daemontatox
false
null
0
o85zrux
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o85zrux/
false
5
t1_o85zfz3
From running heretic v1.2 on large models.
2
0
2026-03-02T02:57:20
vpyno
false
null
0
o85zfz3
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o85zfz3/
false
2
t1_o85zfts
I'm in the middle of refining it and also not at my desk. But give this a read https://thoughts.jock.pl/p/how-i-structure-claude-md-after-1000-sessions. It helped me out a lot.
1
0
2026-03-02T02:57:18
l0nedigit
false
null
0
o85zfts
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o85zfts/
false
1
t1_o85zfdy
13 months ago the conversation was 'can open source compete with GPT-4?' Now the conversation is 'which open source model should I use for my specific task?' That's a fundamental shift. The DeepSeek moment proved two things: first, that you don't need OpenAI-level compute budgets to train competitive models. Second, t...
3
0
2026-03-02T02:57:14
Soft-Analyst-9452
false
null
0
o85zfdy
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85zfdy/
false
3
t1_o85zdxm
I don't understand what you are looking for. Pinokio that isn't Pinokio?
1
0
2026-03-02T02:56:59
_-_David
false
null
0
o85zdxm
false
/r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/o85zdxm/
false
1
t1_o85zdtg
Ah, the gatekeepers get to these posts so fast. How about you just move on if you don't like it instead of being a jive turkey?
-6
0
2026-03-02T02:56:58
ArchdukeofHyperbole
false
null
0
o85zdtg
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o85zdtg/
false
-6
t1_o85zdmd
It's really cheap, ~$20 USD, to rent time on Vast to do these computations.
1
0
2026-03-02T02:56:56
Intraluminal
false
null
0
o85zdmd
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85zdmd/
false
1
t1_o85zcm5
The pace of open source model releases is genuinely insane right now. Qwen 3.5 dropping with competitive benchmarks means the window where only frontier labs could produce top-tier models is effectively closed. What excites me most is the implications for privacy-sensitive applications. When you can run a model this c...
0
0
2026-03-02T02:56:46
Soft-Analyst-9452
false
null
0
o85zcm5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85zcm5/
false
0
t1_o85zckx
Well, you get more context size, faster inference, and can fit a secondary model like TTS/OCR/etc.
4
0
2026-03-02T02:56:45
Vaptor-
false
null
0
o85zckx
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85zckx/
false
4
t1_o85zcew
Yeah, I'd like to see 3.5 Coder soon. I think the infinite individual use cases here are convoluted at best without specifics. Here is the big question: can you offload cron jobs, check-ins, and the like to either model from openclaw or similar agent frameworks without degradation or issues?...
1
0
2026-03-02T02:56:44
AdLongjumping192
false
null
0
o85zcew
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o85zcew/
false
1
t1_o85zb5g
The training cutoff for this LLM bot doesn't include any newer models. It's doing the best it can to enshittify Reddit.
1
0
2026-03-02T02:56:30
DinoAmino
false
null
0
o85zb5g
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o85zb5g/
false
1
t1_o85z6wb
Didn't know that heretic damages larger parameter models! Any idea where you got that info?
6
0
2026-03-02T02:55:45
RickyRickC137
false
null
0
o85z6wb
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o85z6wb/
false
6
t1_o85z1vv
What does this have to do with local llm?
6
0
2026-03-02T02:54:54
Ok-Adhesiveness-4141
false
null
0
o85z1vv
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o85z1vv/
false
6
t1_o85z0it
[https://youtu.be/g0j2dVuhr6s](https://youtu.be/g0j2dVuhr6s)
3
0
2026-03-02T02:54:40
sp3kter
false
null
0
o85z0it
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o85z0it/
false
3
t1_o85yxld
Good point! If unsloth is suggesting against it... I'm certainly skeptical myself. It's not my quant, so I never gathered PPL/KLD, but I'll figure out a way to! Do you happen to know of any tools to do so?
1
0
2026-03-02T02:54:09
JohnTheNerd3
false
null
0
o85yxld
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85yxld/
false
1
t1_o85ysdx
LMstudio uses llama.cpp behind the scenes. The speed difference could be caused by LMstudio using an older llama.cpp version (I keep mine fully up to date), or by settings differences. I haven't used LMstudio for a while and can't remember how to look these up, sorry.
2
0
2026-03-02T02:53:15
Amazing_Athlete_2265
false
null
0
o85ysdx
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85ysdx/
false
2
t1_o85yjso
No. 35b-a3b. I could explain why, but it really isn't necessary. Trust. Download. Use it. It is the best model for you. 100% confident.
0
1
2026-03-02T02:51:46
_-_David
false
null
0
o85yjso
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85yjso/
false
0
t1_o85y3f2
What's the pattern
1
0
2026-03-02T02:48:57
Ylsid
false
null
0
o85y3f2
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o85y3f2/
false
1
t1_o85xvvq
Anybody with 5 kidneys IS crazy.
1
0
2026-03-02T02:47:40
OfkMike
false
null
0
o85xvvq
false
/r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/o85xvvq/
false
1
t1_o85xmwo
"it's open weights but not open source so you can't really look at the code (???) or experiment with it or understand what it's doing inside. Nevermind that you totally can and a lot of people already have." "A lot of closed models kinda let you do some really basic fine tuning, almost like 2% of what people could do ...
5
0
2026-03-02T02:46:09
KallistiTMP
false
null
0
o85xmwo
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85xmwo/
false
5
t1_o85xm7k
can't wait
1
0
2026-03-02T02:46:02
Exciting_Ordinary884
false
null
0
o85xm7k
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85xm7k/
false
1
t1_o85xj4m
[deleted]
1
0
2026-03-02T02:45:31
[deleted]
true
null
0
o85xj4m
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85xj4m/
false
1
t1_o85xieu
Therefore, given the budget constraints, what I do is optimize within that ceiling :D
1
0
2026-03-02T02:45:24
Appropriate-Skirt25
false
null
0
o85xieu
false
/r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o85xieu/
false
1
t1_o85xi30
Maybe it looks like a conspiracy because 'Right to be Forgotten' is a foreign language to the US.
1
0
2026-03-02T02:45:21
paulisaac
false
null
0
o85xi30
false
/r/LocalLLaMA/comments/1h3r8fg/if_you_want_to_know_why_opensource_its_important/o85xi30/
false
1
t1_o85xg6x
No; you need 64+ GB of RAM... 132.
1
0
2026-03-02T02:45:02
mikeinnsw
false
null
0
o85xg6x
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85xg6x/
false
1
t1_o85xfh5
I recently tried the programming version of Qwen, and it's really great.
2
0
2026-03-02T02:44:54
Appropriate-Skirt25
false
null
0
o85xfh5
false
/r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o85xfh5/
false
2
t1_o85x8mv
NP, have fun... and a word of caution: 27b might still be as slow as a glacier lol. Even with llama.cpp, it was still 5 tokens a second on mine. You might have much more fun with 35b!
2
0
2026-03-02T02:43:45
c64z86
false
null
0
o85x8mv
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85x8mv/
false
2
t1_o85x2za
Honestly have never found my experience with models to correlate all too well with benchmarks.
2
0
2026-03-02T02:42:47
porkyminch
false
null
0
o85x2za
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85x2za/
false
2
t1_o85x1ok
Geohot is still active? I'd have thought he slowed down after Sony's attempt to sue him, and iPhone jailbreaking being kinda deadge
1
0
2026-03-02T02:42:33
paulisaac
false
null
0
o85x1ok
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o85x1ok/
false
1
t1_o85x0sa
Could you share an example of your agent.md? I'm struggling to write a good one for myself.
1
0
2026-03-02T02:42:24
cuberhino
false
null
0
o85x0sa
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o85x0sa/
false
1
t1_o85wvfi
Thanks man! It means a lot
2
0
2026-03-02T02:41:30
callmedevilthebad
false
null
0
o85wvfi
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85wvfi/
false
2
t1_o85wunn
No, almost all the money I’ve spent has been on app development. That’s not what happened, and you shouldn’t be in the sub if you don’t understand why.
1
0
2026-03-02T02:41:22
No_Mango7658
false
null
0
o85wunn
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o85wunn/
false
1
t1_o85wo6a
The "exponentially more sensitive" framing for K-cache is misleading about the actual mechanism. It's not that keys are inherently more fragile — it's the interaction with RoPE. Keys get rotated by position-dependent angles before caching, and quantization after rotation destroys the high-frequency components that enco...
2
0
2026-03-02T02:40:15
tom_mathews
false
null
0
o85wo6a
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o85wo6a/
false
2
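To make the mechanism above concrete, here is a minimal numpy sketch, with illustrative dimensions and positions that are assumptions of mine rather than anything from the thread: it applies a standard RoPE rotation to a key vector and then simulates an int8 round-trip, i.e. the quantization happens after the position information has been rotated in, so the rounding error lands directly on the components that encode position.

```python
import numpy as np

def rope_rotate(x, pos, dim):
    # Standard RoPE: rotate pairs of dimensions by position-dependent angles.
    half = dim // 2
    freqs = 1.0 / (10000.0 ** (np.arange(half) / half))  # high to low frequency
    angles = pos * freqs
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * np.cos(angles) - x2 * np.sin(angles),
                           x1 * np.sin(angles) + x2 * np.cos(angles)])

def int8_roundtrip(x):
    # Simulate per-tensor int8 quantization and dequantization.
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
dim = 128
key = rng.normal(size=dim)
for pos in (10, 10_000):
    rotated = rope_rotate(key, pos, dim)
    err = np.abs(int8_roundtrip(rotated) - rotated).mean()
    print(f"pos={pos:>6}: mean abs quantization error on rotated key = {err:.5f}")
```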
t1_o85wnkm
I have the same pc config, q4 model, and yet I only get around 20 t/s in LMstudio. I am not tech savvy, but is llama.cpp faster than LMstudio?
1
0
2026-03-02T02:40:09
RickyRickC137
false
null
0
o85wnkm
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85wnkm/
false
1
t1_o85wmzu
Yes, and qwen3.5 seems particularly sensitive to quantized cache. Symptoms include subtle shifts in thinking or outright looping.
3
0
2026-03-02T02:40:03
CodeSlave9000
false
null
0
o85wmzu
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o85wmzu/
false
3
t1_o85whii
> 4060/5060 Ti
Heat is proportional to the power consumed, and with a 3090 you can set a power target. I limit to 200-250 W per card. An idle 3090 pulling 25 W is not great, but it's not a heat problem. The 5060 has advantages over the 3090, of course. At that price/GB it might be worth it for the better efficiency and bett...
2
0
2026-03-02T02:39:07
crantob
false
null
0
o85whii
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o85whii/
false
2
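As a point of reference, the power target described above can be set with nvidia-smi. A minimal sketch, assuming two cards at indices 0 and 1 (the indices and the 250 W figure are placeholders; the command needs administrator rights):

```python
import subprocess

# Cap each GPU's board power; nvidia-smi enforces the limit in hardware.
for gpu_index in (0, 1):
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", "250"],
        check=True,
    )
```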
t1_o85wh1b
That’s amazing! 27b is a dense model, right?
1
0
2026-03-02T02:39:02
NoFudge4700
false
null
0
o85wh1b
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85wh1b/
false
1
t1_o85wb44
Sure! You can grab the binaries here, make sure to also download the cuda 13 DLLs alongside it too [Releases · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/releases)
2
0
2026-03-02T02:38:02
c64z86
false
null
0
o85wb44
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85wb44/
false
2
t1_o85w88c
This is an apples and oranges question. Generally at the same quant level the 27B, being a dense model, would be more accurate than the 35B-A3B model. But at different quant levels, you'd have to benchmark. The 27B model is still going to be slower because it's a dense model. And it's not going to be "it's just b...
4
0
2026-03-02T02:37:33
insanemal
false
null
0
o85w88c
false
/r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/o85w88c/
false
4
t1_o85w81z
Tried UD-Q4_K_XL and got slightly lower t/s (<100) than Q3, but for twice as many tokens. https://preview.redd.it/y4e2386hnjmg1.png?width=1270&format=png&auto=webp&s=1f413224158bc47ca5e33167b6e69edd487ea409
1
0
2026-03-02T02:37:31
SteppenAxolotl
false
null
0
o85w81z
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o85w81z/
false
1
t1_o85w66h
Doesn't matter. If it fits, it sits.
1
0
2026-03-02T02:37:12
coreyfro
false
null
0
o85w66h
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o85w66h/
false
1
t1_o85w622
that sounds easy. Actually let me try
0
0
2026-03-02T02:37:11
callmedevilthebad
false
null
0
o85w622
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85w622/
false
0
t1_o85w5gt
> Do you also move your hands that much while talking?
He's Italian-American lol
0
0
2026-03-02T02:37:05
Velocita84
false
null
0
o85w5gt
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85w5gt/
false
0
t1_o85w1ry
If it helps, it's all portable and no setup required other than downloading a few things and putting them all in one folder.
3
0
2026-03-02T02:36:27
c64z86
false
null
0
o85w1ry
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85w1ry/
false
3
t1_o85vygv
I see. Thanks for the interesting thread.
3
0
2026-03-02T02:35:54
StardockEngineer
false
null
0
o85vygv
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85vygv/
false
3
t1_o85vxsv
I was trying to avoid any extra setup
-4
0
2026-03-02T02:35:47
callmedevilthebad
false
null
0
o85vxsv
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85vxsv/
false
-4
t1_o85vxbf
It's the ChatGPT craze all over again. People want slaves. People who say it's fake think it's just technofeudalism being shoved down our throats, and those people are in the US. If you look at the stargazers on GitHub, they seem to be real accounts with random creation dates, activity, and human comments.
1
0
2026-03-02T02:35:42
Background-Fig-3967
false
null
0
o85vxbf
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o85vxbf/
false
1
t1_o85vwii
Thank you! I'm sure this will be a hit with the community! I normally use ElevenLabs via [Speaker.bot](http://Speaker.bot), so ideally I want to use Qwen3 TTS locally to replace that for TTS requests on both Twitch and YouTube. I also use the ChatGPT API to power some random responses. I hope I'm clear ...
1
0
2026-03-02T02:35:34
Gustx
false
null
0
o85vwii
false
/r/LocalLLaMA/comments/1ri8jwz/streamerbot_integration_it_to_qwen3_tts_running/o85vwii/
false
1
t1_o85vlxg
Though similar to Heretic and Jim Lai's techniques, this one requires interactive manual tuning and benchmarking throughout the optimization process. Heretic does too much damage to intelligence for models of this size.
-5
1
2026-03-02T02:33:44
vpyno
false
null
0
o85vlxg
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o85vlxg/
false
-5
t1_o85vk9l
Are you saying qwen3.5:27b will work? I think it is 17 GB in size.
0
0
2026-03-02T02:33:27
callmedevilthebad
false
null
0
o85vk9l
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85vk9l/
false
0
t1_o85vjuh
I don't know him, so why hate a guy's perspective? Oh well, when it comes to LLMs I try to learn from everywhere I can. My affordability in this is low, so I try to grasp ideas that help me do better.
-1
0
2026-03-02T02:33:23
Ztoxed
false
null
0
o85vjuh
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85vjuh/
false
-1
t1_o85v3ab
I've had the experience of creating software in a "garage / mom's basement" setting that became a popular product, industry-standard (dare I say), and I've seen how programming for my hobby versus for customers is about a 5x difference in effort. Yet you have the 20% base part already; it would be such a shame if you discard...
1
0
2026-03-02T02:30:32
3dom
false
null
0
o85v3ab
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85v3ab/
false
1
t1_o85uzj2
Hello
1
0
2026-03-02T02:29:53
New_Spread9235
false
null
0
o85uzj2
false
/r/LocalLLaMA/comments/1pqttqu/run_your_own_uncensored_ai_use_it_for_hacking/o85uzj2/
false
1
t1_o85uw5b
Each one encodes a pattern the AI needs to hold without lengthy instruction. Three vignettes form a curriculum in sequence: ask the right question, know the context, survive the rescue. Order matters. They are all culturally universal since it's a pattern. Each AI builds from it.
1
0
2026-03-02T02:29:19
RTS53Mini
false
null
0
o85uw5b
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o85uw5b/
false
1
t1_o85uuef
Wow this is one cringe AF comment section.
1
0
2026-03-02T02:29:00
Responsible_Buy_7999
false
null
0
o85uuef
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o85uuef/
false
1
t1_o85ukd5
That's the beauty of it. I knew you didn't have enough VRAM for it. It's still the best for you.
4
0
2026-03-02T02:27:16
_-_David
false
null
0
o85ukd5
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85ukd5/
false
4
t1_o85ukbp
It should offload into your system RAM; it will run slower, but it will still run! Try llama.cpp, and check my comments and the thread I commented in for how I got it set up if you are stuck. I'm getting 57 tokens a second on there with my 12GB GPU.
3
0
2026-03-02T02:27:15
c64z86
false
null
0
o85ukbp
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85ukbp/
false
3
t1_o85ugpk
Run with llama.cpp and use the "--n-cpu-moe" option. Try to set it so that your GPU is close to full (maybe 14 GB used?) and the rest is on CPU/system memory.
5
0
2026-03-02T02:26:38
ForsookComparison
false
null
0
o85ugpk
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85ugpk/
false
5
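A minimal sketch of what that launch could look like, assuming llama.cpp's llama-server binary is on PATH; the model filename and the starting value for --n-cpu-moe are placeholders to tune until VRAM usage sits near the suggested ~14 GB:

```python
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3.5-35B-A3B-Q4_K_M.gguf",  # hypothetical local GGUF path
    "--n-gpu-layers", "99",               # offload all non-expert layers to GPU
    "--n-cpu-moe", "20",                  # keep experts of the first 20 layers on CPU
    "-c", "32768",                        # context size
])
```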
t1_o85tzpe
[removed]
1
0
2026-03-02T02:23:45
[deleted]
true
null
0
o85tzpe
false
/r/LocalLLaMA/comments/1prmjt3/best_speechtotext_in_2025/o85tzpe/
false
1
t1_o85tziu
It would be awesome to know
7
0
2026-03-02T02:23:43
emprahsFury
false
null
0
o85tziu
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o85tziu/
false
7
t1_o85tz35
leaving a lot of performance on the table for *16gb users.*
3
0
2026-03-02T02:23:39
MiyamotoMusashi7
false
null
0
o85tz35
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85tz35/
false
3
t1_o85tru5
You should at least look at Qwen Code but OpenCode is ideal.
2
0
2026-03-02T02:22:25
Thump604
false
null
0
o85tru5
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85tru5/
false
2
t1_o85tqx8
It would be nice to have small models for phones.
2
0
2026-03-02T02:22:16
ICE0124
false
null
0
o85tqx8
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o85tqx8/
false
2
t1_o85tpry
64 gigs
1
0
2026-03-02T02:22:04
callmedevilthebad
false
null
0
o85tpry
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85tpry/
false
1
t1_o85tkcj
48 GB isn't that good either. I'm on an MBP M4 Pro with 48 GB. The new Qwen3.5-27B *runs*, and with decent context size too, but it's only 8 tok/s, which is a tad slow for me, and also I don't want the device to run at its limits all the time. Thankfully, in terms of output quality it's now at a threshold where it becomes ge...
1
0
2026-03-02T02:21:07
Economy_Cabinet_7719
false
null
0
o85tkcj
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85tkcj/
false
1
t1_o85tjz8
and the future paved by agentic llm-based systems will lead to a very strange world that warrants strange exploration
1
0
2026-03-02T02:21:04
cobalt1137
false
null
0
o85tjz8
false
/r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/o85tjz8/
false
1
t1_o85tca6
bro what the fuck are you talking about we make LLMs here
2
0
2026-03-02T02:19:45
HopePupal
false
null
0
o85tca6
false
/r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/o85tca6/
false
2
t1_o85t9k1
In general, I think there's some sort of mismatch between cache and prompt for the current turn of Qwen3.5 models. The mismatch causes the entire conversation to be reprocessed (basically recalculating the entire linear matrix instead of updating it). What Gemini explained to me about Kimi Linear is that it has an LCP similari...
3
0
2026-03-02T02:19:16
ArchdukeofHyperbole
false
null
0
o85t9k1
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85t9k1/
false
3
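The reuse check being described boils down to a longest-common-prefix comparison between the cached tokens and the incoming prompt. A minimal sketch of the idea (names are made up for illustration): if anything near the start of the prompt changes between turns, the shared prefix collapses and effectively the whole conversation is reprocessed.

```python
def reusable_prefix(cached_tokens, new_tokens):
    # Count how many leading tokens match; only the remainder needs a
    # fresh forward pass, the rest is served from the KV cache.
    n = 0
    for cached, new in zip(cached_tokens, new_tokens):
        if cached != new:
            break
        n += 1
    return n

# A single changed token at position 2 throws away everything after it.
print(reusable_prefix([1, 2, 3, 4, 5], [1, 2, 9, 4, 5]))  # -> 2
```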
t1_o85t72r
How much system ram do you have?
1
0
2026-03-02T02:18:50
nikhilprasanth
false
null
0
o85t72r
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85t72r/
false
1
t1_o85suux
Our users basically submit a persistent prompt in many cases; this one then has the full backstory and lore of what they are about to do, or a persistent system prompt for the rest of the session. Those are kept, and the cutting is done after it. It's just not necessarily the middle, since it depends entirely on how big i...
7
0
2026-03-02T02:16:44
henk717
false
null
0
o85suux
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85suux/
false
7
t1_o85spfc
I use llama.cpp via llama-swap, no ollama or lmstudio.
1
0
2026-03-02T02:15:48
Amazing_Athlete_2265
false
null
0
o85spfc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85spfc/
false
1
t1_o85sl7a
Self-speculative decoding is not as general as speculative decoding. It really speeds up highly regular workloads but is less effective for irregular generations.
1
0
2026-03-02T02:15:05
Thunderstarer
false
null
0
o85sl7a
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85sl7a/
false
1
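For context, the accept/verify loop behind (self-)speculative decoding looks roughly like this greedy-acceptance sketch; draft and verify are hypothetical callables standing in for the cheap and the full model. Regular text produces long accepted runs, while irregular generations hit early mismatches and give little speedup.

```python
def speculative_step(prompt_tokens, draft, verify, k=4):
    # The cheap pass proposes k tokens; the full model checks them all
    # in a single forward pass and returns its own choice per position.
    guesses = draft(prompt_tokens, k)
    truth = verify(prompt_tokens, guesses)
    accepted = []
    for guess, target in zip(guesses, truth):
        if guess != target:
            accepted.append(target)  # first mismatch: keep the full model's token
            break
        accepted.append(guess)       # match: the draft token came for free
    return accepted
```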
t1_o85sjfx
I can confirm all of them are vision language models (VLMs).
1
0
2026-03-02T02:14:47
limoce
false
null
0
o85sjfx
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85sjfx/
false
1
t1_o85si8a
Having been using GLM 5 in Claude Code for a few weeks, trying actual Claude again felt like a step backwards. It felt like GLM stayed much more on task - didn't ask *needless* questions, or need as much handholding.
1
0
2026-03-02T02:14:34
-dysangel-
false
null
0
o85si8a
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o85si8a/
false
1
t1_o85sh4l
Probably try Qwen 3.5 4B or 9B; it's gonna be out soon. Till then, I'd say use any appropriately sized LFM model by Liquid AI.
1
0
2026-03-02T02:14:23
Acceptable_Home_
false
null
0
o85sh4l
false
/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o85sh4l/
false
1
t1_o85s94j
I haven’t run the numbers, but that kind of usage might come out ahead
1
0
2026-03-02T02:13:01
suicidaleggroll
false
null
0
o85s94j
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o85s94j/
false
1
t1_o85s2lv
I tried the same prompt with thinking disabled. It answered correctly and reasonably without veering off into the "almost certainly a scam" chain of thought. It didn't comment on the current price versus its older knowledge of silver prices. The only thing it noted about the date was "Since you are looking at a 2026 c...
1
0
2026-03-02T02:11:54
drappleyea
false
null
0
o85s2lv
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85s2lv/
false
1
t1_o85s19p
Seems like `Qwen3.5-35b-a3b` will not fit in my VRAM. Trying to find quantized ones on Ollama (I will install using OpenWebUI). Can you share if you have any links to the same? Thanks :)
-1
0
2026-03-02T02:11:40
callmedevilthebad
false
null
0
o85s19p
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85s19p/
false
-1
t1_o85rzot
Yeah, because last August that was the price for the exact same hardware.
1
0
2026-03-02T02:11:24
GeorgeR_
false
null
0
o85rzot
false
/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o85rzot/
false
1
t1_o85rvx1
Is this true even with non-trivial usage? Imagine if you built a set of coding bots that were working 18 hours a day with several agents running in parallel. I could imagine you could rack up tens of thousands in costs per month.
1
0
2026-03-02T02:10:46
brakx
false
null
0
o85rvx1
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o85rvx1/
false
1
t1_o85rtmi
Likely, yes
1
0
2026-03-02T02:10:22
Adorable_Low7621
false
null
0
o85rtmi
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o85rtmi/
false
1
t1_o85rei3
it has vision support? Let me check
2
0
2026-03-02T02:07:48
callmedevilthebad
false
null
0
o85rei3
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85rei3/
false
2
t1_o85r2m5
Interesting
1
0
2026-03-02T02:05:46
coreytbrewer
false
null
0
o85r2m5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85r2m5/
false
1
t1_o85qyne
Not tomorrow, the day after.
1
0
2026-03-02T02:05:06
DrNavigat
false
null
0
o85qyne
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o85qyne/
false
1
t1_o85qyhe
How does NUMA architecture affect this? I'd think data access that crosses a NUMA boundary would be slower, but not more error-prone.
1
0
2026-03-02T02:05:04
fragment_me
false
null
0
o85qyhe
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o85qyhe/
false
1
t1_o85qbd1
It made perfect sense, people just hate him.
-1
0
2026-03-02T02:01:06
Additional_Top1210
false
null
0
o85qbd1
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85qbd1/
false
-1
t1_o85q4nn
You cannot fine-tune Claude. And you certainly cannot run Claude on a laptop.
1
0
2026-03-02T01:59:56
Monkey_1505
false
null
0
o85q4nn
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85q4nn/
false
1
t1_o85pz9e
Yup. It focuses less narrowly if you add it to the prompt explicitly. I tell it to explore my intent and more broadly search for possibilities even if I didn’t prompt for it.
1
0
2026-03-02T01:59:00
CodeSlave9000
false
null
0
o85pz9e
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o85pz9e/
false
1
t1_o85px8d
Qwen3.5-35b-a3b.
12
0
2026-03-02T01:58:39
_-_David
false
null
0
o85px8d
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85px8d/
false
12
t1_o85ptt8
I actually tried it on 27b earlier and it got it correct. It overthought for thousands of tokens even after it figured it out, but it got there in the end
2
0
2026-03-02T01:58:04
-dysangel-
false
null
0
o85ptt8
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85ptt8/
false
2
t1_o85prw0
Try one of the new qwen models
10
0
2026-03-02T01:57:43
Impossible-Glass-487
false
null
0
o85prw0
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o85prw0/
false
10
t1_o85poka
It's set because I was playing around with it; no harm in having it on, so I left it. And yes, flash attention is on by default; I set it in my scripts because I test with it on and off.
1
0
2026-03-02T01:57:09
CodeSlave9000
false
null
0
o85poka
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o85poka/
false
1
t1_o85pfbw
I think the dense model suffers less? I didn’t test for that.
1
0
2026-03-02T01:55:32
CodeSlave9000
false
null
0
o85pfbw
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o85pfbw/
false
1
t1_o85pd6c
> - Has anyone here actually clustered two M4 Pro Mac Minis with Exo over TB5? How stable is it day to day?
Why would anyone do this? It's for your job; buy a real system. You don't even state what language you're working in, which makes a big difference even between the frontier paid models. If a model like Qwen3-30...
3
0
2026-03-02T01:55:10
LoaderD
false
null
0
o85pd6c
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85pd6c/
false
3
t1_o85p869
Can we use the Qwen3 Unsloth guides to do SFT on these new models? @unsloth
3
0
2026-03-02T01:54:20
vr_fanboy
false
null
0
o85p869
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85p869/
false
3
t1_o85p62q
What method? Heretic? 
15
0
2026-03-02T01:53:59
My_Unbiased_Opinion
false
null
0
o85p62q
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o85p62q/
false
15
t1_o85p63j
It's an evolution of KBLaM; I hope you find it useful.
0
0
2026-03-02T01:53:59
charmander_cha
false
null
0
o85p63j
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85p63j/
false
0
t1_o85p3gd
It makes sense that the model doesn't know the current date or silver prices. For my prompt, neither of these should matter: I gave the items I was interested in and their current prices, so it should easily be able to answer the question asked, given the information provided. Instead, it went around in circles d...
1
0
2026-03-02T01:53:31
drappleyea
false
null
0
o85p3gd
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85p3gd/
false
1
t1_o85oxt2
I don't know what most use cases are for you, but I build a lot of agents and trimming from the middle, before having to resort to compact, has been the best strategy thus far. Seems to hold true for my coding agents as well. Even our compaction events don't disrupt the first few messages. We leave them intact becau...
2
0
2026-03-02T01:52:34
StardockEngineer
false
null
0
o85oxt2
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85oxt2/
false
2
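A minimal sketch of that trimming strategy, with made-up message counts: the head (system prompt plus the first few messages) and the recent tail stay intact, and only the oldest middle turns are dropped.

```python
def trim_middle(messages, keep_head=3, keep_tail=8, budget=40):
    # Assumes keep_head + keep_tail <= budget; all counts are illustrative.
    if len(messages) <= budget:
        return messages
    head = messages[:keep_head]
    tail = messages[-keep_tail:]
    middle = messages[keep_head:-keep_tail]
    excess = len(messages) - budget
    return head + middle[excess:] + tail  # drop the oldest middle turns
```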
t1_o85owel
I use roo code and it works perfectly with Qwen3.5-110B and 27B.
2
0
2026-03-02T01:52:19
ortegaalfredo
false
null
0
o85owel
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85owel/
false
2