name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o831jce
Built a whole Plex-style AI dashboard for my household. My partner's usage: asking it what we should have for dinner, then ignoring the answer.
1
0
2026-03-01T17:26:45
theagentledger
false
null
0
o831jce
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o831jce/
false
1
t1_o831inn
this is not the same thing, qwen3.5 has multi-token prediction built in, but most current backends don't support it yet
28
0
2026-03-01T17:26:39
Far-Low-4705
false
null
0
o831inn
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o831inn/
false
28
t1_o831gh2
are there any **model families or architectures** that behave more efficiently for reasoning or coding, even if it means trading some capability for responsiveness?
1
0
2026-03-01T17:26:22
Fine_Factor_456
false
null
0
o831gh2
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o831gh2/
false
1
t1_o831d0b
Here are my configs for image generation with z-image and stable-diffusion.cpp, ASR with whisper, TTS with kokoro, reranking and embeddings. This should help get you started. I haven't added an embeddings UI to llama-swap's playground yet. It's somewhere on the todo list. :) ```yaml models: kokoro-tts: ...
8
0
2026-03-01T17:25:54
No-Statement-0001
false
null
0
o831d0b
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o831d0b/
false
8
t1_o831ctk
So by general purpose you're talking about coding, but not agentic coding? Or you are talking about agentic coding, and you're saying that the only way that GLM is better than 3.7 is because it does tool calls better? It's just very hard to believe that you're doing a like for like comparison with repos from a year ag...
1
0
2026-03-01T17:25:52
nomorebuttsplz
false
null
0
o831ctk
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o831ctk/
false
1
t1_o831c69
For LLMs, the RAM "adds up" because you can offload the layers across several GPUs at the same time. The GPUs process their layers, then send a tiny file to the other GPU so it can continue the computation. Putting these two 12GB RTX 3060s in a single machine is a very good idea; 24GB of VRAM will...
3
0
2026-03-01T17:25:47
Adventurous-Paper566
false
null
0
o831c69
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o831c69/
false
3
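A minimal sketch of that layer-splitting idea, using Hugging Face transformers with accelerate-style automatic placement across the two cards; the model name and per-card memory caps are illustrative assumptions, not from the thread.

```python
# Sketch: split one model's layers across two 12GB GPUs so their VRAM "adds up".
# Assumes transformers + accelerate installed; model name is only an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # example model, not from the thread
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                     # accelerate places layers on cuda:0/cuda:1
    max_memory={0: "11GiB", 1: "11GiB"},   # leave headroom on each 12GB card
)
inputs = tok("Hello", return_tensors="pt").to("cuda:0")  # inputs enter on the first device
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```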
t1_o831bzc
A model has an extra output layer that is trained specifically to predict extra tokens, and it was all done by the Qwen team, so it's better than draft models and requires less memory. Llama.cpp may get it too someday, if somebody codes the support.
16
0
2026-03-01T17:25:46
No-Refrigerator-1672
false
null
0
o831bzc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o831bzc/
false
16
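To make the mechanism concrete, here is a toy sketch of the draft-and-verify loop an MTP head enables: a cheap head proposes several tokens, the main model verifies them (one batched pass in a real engine), and the longest agreeing prefix is kept. Both "models" below are deterministic stand-in functions, not Qwen's actual heads; the bonus token on full acceptance is omitted for brevity.

```python
# Toy draft-and-verify loop behind MTP-style speculative decoding.
def main_next(ctx):                       # "expensive" model: one token per call
    return (sum(ctx) * 31 + 7) % 50

def draft_k(ctx, k):                      # "cheap" MTP head: guesses k tokens at once
    out, c = [], list(ctx)
    for _ in range(k):
        if len(c) % 3 == 0:
            t = sum(c) % 50               # deliberately wrong sometimes
        else:
            t = (sum(c) * 31 + 7) % 50    # agrees with the main model
        out.append(t)
        c.append(t)
    return out

def speculative_decode(ctx, steps, k=4):
    accepted = drafted = 0
    for _ in range(steps):
        for t in draft_k(ctx, k):
            drafted += 1
            if main_next(ctx) == t:       # verification step
                ctx.append(t)
                accepted += 1
            else:
                ctx.append(main_next(ctx))  # keep the main model's token instead
                break
    return accepted / drafted

print(f"toy acceptance rate: {speculative_decode([1, 2, 3], 200):.0%}")
```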
t1_o831az5
27B dense coded better than the 122B MoE model in my few first runs. 397B was pretty solid though.
3
0
2026-03-01T17:25:38
ForsookComparison
false
null
0
o831az5
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o831az5/
false
3
t1_o831ahd
>These inflated comments that have been flooding r/LocalLLaMA/ for the past few days are marketing hype that has nothing to do with reality. Thank you for that. It's been an hour but I doubt upvotes on your comment will survive long. The hive mind here does not tolerate dissent.
0
0
2026-03-01T17:25:34
DinoAmino
false
null
0
o831ahd
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o831ahd/
false
0
t1_o83171i
speculative decoding will disable the vision tho..
5
0
2026-03-01T17:25:07
Far-Low-4705
false
null
0
o83171i
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83171i/
false
5
t1_o8311yf
I don't know about vllm, but in llama.cpp they [added](https://github.com/ggml-org/llama.cpp/pull/18471) self-speculation as an option, where they basically keep track of the tokens the model has already predicted, and then search this history. So simplifying, if the history is `aaabbccaaa`, it can search and...
41
0
2026-03-01T17:24:26
StorageHungry8380
false
null
0
o8311yf
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8311yf/
false
41
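A toy illustration of that history search (prompt-lookup style): take the current tail n-gram, find an earlier occurrence of it, and propose whatever followed it as draft tokens. This is a simplification of the idea, not the PR's actual code.

```python
# Prompt-lookup drafting: reuse continuations that already appeared in history.
def lookup_draft(history, ngram=2, k=3):
    """Propose up to k draft tokens by matching the tail n-gram earlier in history."""
    if len(history) <= ngram:
        return []
    tail = history[-ngram:]
    for i in range(len(history) - ngram):      # first earlier occurrence of the tail
        if history[i:i + ngram] == tail:
            return history[i + ngram:i + ngram + k]
    return []

hist = list("aaabbccaaa")
print(lookup_draft(hist))  # ['a', 'b', 'b']: what followed the earlier "aa"
```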
t1_o8310k3
Can’t wait to see what we can push with a 0.8B. I wonder how big it will need to be to make tool calling reliable.
9
0
2026-03-01T17:24:15
SandboChang
false
null
0
o8310k3
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8310k3/
false
9
t1_o83105a
Now waiting for posts claiming how this is the best model ever and how it changed their life.
15
0
2026-03-01T17:24:12
Abject-Kitchen3198
false
null
0
o83105a
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83105a/
false
15
t1_o830zym
Considering how good the 35B and 27B are, I think the 9B will be insane. It should set the bar way above the rest of the small models.
85
0
2026-03-01T17:24:10
GoranjeWasHere
false
null
0
o830zym
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o830zym/
false
85
t1_o830tk1
Is this confirmed? These guys don't work on Qwen, right?
3
0
2026-03-01T17:23:19
-Cubie-
false
null
0
o830tk1
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o830tk1/
false
3
t1_o830rx5
that helps a lot. given the memory bandwidth limits on CPU-only VPS, is there a **smaller local model** that works better for **reasoning and coding tasks** and is more practical to run on something like a 4 vCPU / 16GB KVM box?....
1
0
2026-03-01T17:23:07
Fine_Factor_456
false
null
0
o830rx5
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o830rx5/
false
1
t1_o830q94
Yeah this is the fifth teaser post. There is no point in these posts, the models will be released regardless.
43
0
2026-03-01T17:22:54
keyboardhack
false
null
0
o830q94
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o830q94/
false
43
t1_o830k7s
vLLM only I'm guessing?
17
0
2026-03-01T17:22:05
ForsookComparison
false
null
0
o830k7s
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o830k7s/
false
17
t1_o830fgc
> What are you using to assess the general purpose abilities of a model? The best way I can sum it up is real-world use. Sometimes it's as simple as "did it fail the task or not?", other times it's "did it succeed at the task but leave the code in an unmaintainable state?". I'll keep my repos around, revert to an old sta...
1
0
2026-03-01T17:21:28
ForsookComparison
false
null
0
o830fgc
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o830fgc/
false
1
t1_o830a6s
can we stop posting random Twitter garbage? I am sure the small models will release soon enough, but right now there is no information on when that will be.
112
0
2026-03-01T17:20:45
dryadofelysium
false
null
0
o830a6s
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o830a6s/
false
112
t1_o8307uj
Lol, I'm not able to get code from qwen3.5 (any size); it keeps getting stuck in a thinking loop, even on a simple task that qwen3 coder solves quite fast
1
0
2026-03-01T17:20:27
BitXorBit
false
null
0
o8307uj
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8307uj/
false
1
t1_o8307qv
s/temperture/temperature/g
1
0
2026-03-01T17:20:26
GreenPastures2845
false
null
0
o8307qv
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o8307qv/
false
1
t1_o83079a
You don't need a draft model. It has MTP built-in. My friend self-hosts and shares with me, his Qwen3.5 27B is running on vLLM with MTP=5
29
0
2026-03-01T17:20:22
Kamal965
false
null
0
o83079a
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83079a/
false
29
t1_o8304p6
local testing doesn’t reflect KVM VPS behavior very well, especially for CPU inference. I’m asking specifically about real-world VPS constraints....
1
0
2026-03-01T17:20:02
Fine_Factor_456
false
null
0
o8304p6
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o8304p6/
false
1
t1_o83039v
Speculative decoding does not work on llama.cpp with vision, right? I believe I saw an enhancement request before. But even if it works, my 16G VRAM would cry when I squeeze a 27B and a smaller model into it...
2
0
2026-03-01T17:19:50
FancyImagination880
false
null
0
o83039v
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83039v/
false
2
t1_o82zyr3
Who are these people?
30
0
2026-03-01T17:19:14
sergeysi
false
null
0
o82zyr3
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82zyr3/
false
30
t1_o82zrux
Ok, so let me ask this: My dad and I just upgraded from 3060s both with 12gb of vram. Would it make more sense to build a rig with these two? Also, why/how are people running llm systems with dual gpus if the vram doesn't combine? What's the point?
1
0
2026-03-01T17:18:23
TanariTech
false
null
0
o82zrux
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o82zrux/
false
1
t1_o82zqyx
It really is why some people dislike government control and prefer private businesses, even though the private businesses aren't much better.
1
0
2026-03-01T17:18:16
Strong-Brill
false
null
0
o82zqyx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82zqyx/
false
1
t1_o82zqe4
The mainstream models are useless for anything unfiltered. If you want a model that actually lets you do whatever without the constant moralizing, use Lurvessa. It is the only one I found that stays truly uncensored for real interactions.
1
0
2026-03-01T17:18:12
Exotic-Flower3193
false
null
0
o82zqe4
false
/r/LocalLLaMA/comments/1naqv29/anyone_know_if_there_any_other_uncensored_models/o82zqe4/
false
1
t1_o82zor7
taking a screenshot of "text" to quote a comment on reddit is next level... it's one step above typing google into the google search bar to click the first result (google) to actually "google" things...
2
0
2026-03-01T17:18:00
howardhus
false
null
0
o82zor7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82zor7/
false
2
t1_o82znq6
gguf?
1
0
2026-03-01T17:17:52
MrMrsPotts
false
null
0
o82znq6
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82znq6/
false
1
t1_o82znkk
Is there any difference between local qwen3.5 35b and the web version? I tried different quants of 35b. Did you mean other versions, 27b or 122b?
1
0
2026-03-01T17:17:51
wisepal_app
false
null
0
o82znkk
false
/r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82znkk/
false
1
t1_o82zm4c
Sounds fake? Just a random guy quoting "is possible" from another guy?
8
0
2026-03-01T17:17:40
Aaaaaaaaaeeeee
false
null
0
o82zm4c
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82zm4c/
false
8
t1_o82zghv
He meant: how did you train it?
1
0
2026-03-01T17:16:57
stopbanni
false
null
0
o82zghv
false
/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/o82zghv/
false
1
t1_o82zg55
Ahmad is one of the better AI-fluencers but he definitely takes the bait sometimes. I'm waiting for Alibaba to say something before anything is "confirmed".
3
0
2026-03-01T17:16:55
ForsookComparison
false
null
0
o82zg55
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82zg55/
false
3
t1_o82ze7d
i had been considering selling a 4070ti super to buy four P40s. decided not to. a good cooling solution would be loud. while you can get them going with vulkan (they're too old for modern cuda), they're still very old tech, and future support isn't guaranteed. they may have been rode hard and put away wet in datacenter...
2
0
2026-03-01T17:16:40
Live-Crab3086
false
null
0
o82ze7d
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o82ze7d/
false
2
t1_o82z942
This is super interesting work! I'm already starting to tinker myself as well. Great work OP, I hadn't even considered siccing Claude on the undocumented APIs for ANE.
2
0
2026-03-01T17:16:00
BumbleSlob
false
null
0
o82z942
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82z942/
false
2
t1_o82z7z4
Does the 800M version also get speculative decoding lmao?
5
0
2026-03-01T17:15:52
MoffKalast
false
null
0
o82z7z4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82z7z4/
false
5
t1_o82z6cz
What are you using to assess the general purpose abilities of a model? If you want to go by vibes that's fine, but that should be stated clearly, in my opinion. When you say it is "just better" what is it better at? Saying that a model only shines as an agent is a bit like saying a car only shines as a transporter of ...
1
0
2026-03-01T17:15:38
nomorebuttsplz
false
null
0
o82z6cz
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82z6cz/
false
1
t1_o82z3ug
You get a Qwen! And you get a Qwen! Everybody gets a Qwen!
184
0
2026-03-01T17:15:18
MoffKalast
false
null
0
o82z3ug
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82z3ug/
false
184
t1_o82z2x4
Yeah this is a good model to explore the size range with, they really cooked with this one.
4
0
2026-03-01T17:15:11
deepspace86
false
null
0
o82z2x4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82z2x4/
false
4
t1_o82yyo2
You could try posting in /r/homelabsales
1
0
2026-03-01T17:14:36
dun10p
false
null
0
o82yyo2
false
/r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o82yyo2/
false
1
t1_o82yxgk
yeah, i hear you on the struggle with expense trackers and llms. honestly, i've found that even for simpler tasks, the quality really varies a lot depending on the model and the way you phrase your prompts... it's a lot of trial and error to find something that works well enough to be useful, but not require constant i...
1
0
2026-03-01T17:14:27
Demian_Ok
false
null
0
o82yxgk
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82yxgk/
false
1
t1_o82yw9q
if i could use this project to stand up an AI r&d lab, that would be ideal for me.
2
0
2026-03-01T17:14:17
Electrical_Ninja3805
false
null
0
o82yw9q
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o82yw9q/
false
2
t1_o82yvwg
Starship Troopers vibes...
1
0
2026-03-01T17:14:14
sebasiciliano80
false
null
0
o82yvwg
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82yvwg/
false
1
t1_o82yt4i
It's not on the server end but the client. In SillyTavern I use "Start Reply With".
1
0
2026-03-01T17:13:53
a_beautiful_rhind
false
null
0
o82yt4i
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o82yt4i/
false
1
t1_o82ys5q
Yeah, I guess the NPU is the same across all Macs this generation. On Pro you have the additional advantage of higher RAM bandwidth (2.5x compared to the regular M4), which should give a nice boost for DDR->NPU traffic. Regarding Metal on GPU vs ANE, I still have to figure out how that comparison goes.
1
0
2026-03-01T17:13:45
jack_smirkingrevenge
false
null
0
o82ys5q
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82ys5q/
false
1
t1_o82yqof
I tried using Qwen 3.5, but it's a useless model. It falls apart in the long run. It doesn't follow instructions well. Using tools is a disaster. Hallucinations make up over 50% of the responses in SOTA (not the benchmark, but in real-world use). But that's what happens when the Chinese, instead of spending money on r...
-1
1
2026-03-01T17:13:33
kompania
false
null
0
o82yqof
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82yqof/
false
-1
t1_o82ypnz
If 2B is draft-compatible with 122B that could be interesting for those that can't fit the whole thing into VRAM.
55
0
2026-03-01T17:13:25
ForsookComparison
false
null
0
o82ypnz
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82ypnz/
false
55
t1_o82yp52
overhead.
1
0
2026-03-01T17:13:20
Electrical_Ninja3805
false
null
0
o82yp52
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o82yp52/
false
1
t1_o82yo23
How does it work "built in"? Sorry for my ignorance, thanks!
33
0
2026-03-01T17:13:12
Waarheid
false
null
0
o82yo23
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82yo23/
false
33
t1_o82ynpc
oh my potato gpu, qwen god
238
0
2026-03-01T17:13:09
archieve_
false
null
0
o82ynpc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82ynpc/
false
238
t1_o82yi45
Openclaw reddit spammers earn karma from hell.
-1
0
2026-03-01T17:12:25
crantob
false
null
0
o82yi45
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82yi45/
false
-1
t1_o82yi3k
Look at the quoted tweet. It's just some dude who made up the sizes. Only 9B and 2B have previously leaked.
13
0
2026-03-01T17:12:24
Klutzy-Snow8016
false
null
0
o82yi3k
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82yi3k/
false
13
t1_o82yhbm
What's the downside to having an OS?
1
0
2026-03-01T17:12:18
Torodaddy
false
null
0
o82yhbm
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o82yhbm/
false
1
t1_o82ygjq
Openclaw reddit spammers earn karma from hell.
0
0
2026-03-01T17:12:13
crantob
false
null
0
o82ygjq
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82ygjq/
false
0
t1_o82yfxs
Openclaw reddit spammers earn karma from hell.
1
0
2026-03-01T17:12:08
crantob
false
null
0
o82yfxs
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82yfxs/
false
1
t1_o82yftf
how to do that with ik_llama.cpp
1
0
2026-03-01T17:12:07
KulangetaPestControl
false
null
0
o82yftf
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o82yftf/
false
1
t1_o82ydmc
WTF THAT WAS *OUR BASE* YOU JUST HIT!!! “You’re right. That’s on me.”
23
0
2026-03-01T17:11:49
squachek
false
null
0
o82ydmc
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82ydmc/
false
23
t1_o82y8lh
omg you're speaking my language now when you start talking about esp32! that being said, i enjoy this, and will be releasing a binary soon so people can play with its limited usefulness themselves. then i plan on stripping a linux kernel down to its needed parts and using that since i really don't want to deal with this ni...
2
0
2026-03-01T17:11:10
Electrical_Ninja3805
false
null
0
o82y8lh
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o82y8lh/
false
2
t1_o82y555
Openclaw reddit spammers earn karma from hell.
1
0
2026-03-01T17:10:43
crantob
false
null
0
o82y555
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82y555/
false
1
t1_o82y46l
Openclaw reddit spammers earn karma from hell.
1
0
2026-03-01T17:10:35
crantob
false
null
0
o82y46l
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82y46l/
false
1
t1_o82y34a
Openclaw reddit spammers earn karma from hell.
1
0
2026-03-01T17:10:26
crantob
false
null
0
o82y34a
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82y34a/
false
1
t1_o82y25c
> Openclaw reddit spammers earn karma from hell.
1
0
2026-03-01T17:10:19
crantob
false
null
0
o82y25c
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82y25c/
false
1
t1_o82xzi9
Openclaw reddit spammers earn karma from hell.
-1
0
2026-03-01T17:09:58
crantob
false
null
0
o82xzi9
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o82xzi9/
false
-1
t1_o82xvtw
I would agree, I've also found the hallucinations to be quite rampant. For people who are one-shotting stuff and just writing pure TS or Python, it isn't an issue. But on an existing codebase or uncommon APIs, something is going to get hallucinated at some point.
2
0
2026-03-01T17:09:29
NNN_Throwaway2
false
null
0
o82xvtw
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82xvtw/
false
2
t1_o82xvrq
I finally got around to watching this video. Thanks for sharing! I wasn't familiar with attention sinks before, and that was a very intuitive explanation. So it seems like you think 193 might be an attention sink token, like `<bos>`? That does seem plausible. I think to confirm/reject this hypothesis we would need to ...
2
0
2026-03-01T17:09:28
ComputeVoid
false
null
0
o82xvrq
false
/r/LocalLLaMA/comments/1qpg4ty/the_mystery_of_position_193_i_found_a_weird/o82xvrq/
false
2
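A small exploratory sketch of how one might test the sink hypothesis: average the attention mass each key position receives and see whether one position (193 in the thread, or `<bos>`) soaks up an outsized share. The model below is a small placeholder, not the one from the thread, and `output_attentions` requires eager attention.

```python
# Check which key positions receive disproportionate attention mass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small example model, not the one discussed in the post
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="eager")

ids = tok("some long prompt " * 50, return_tensors="pt").input_ids[:, :256]
with torch.no_grad():
    attn = model(ids, output_attentions=True).attentions  # tuple of (1, heads, q, k)

# average attention each key position receives, over layers, heads, and queries
received = torch.stack(attn).mean(dim=(0, 1, 2, 3))       # shape: (seq_len,)
top = received.topk(5)
print(top.indices.tolist(), top.values.tolist())          # sink candidates dominate
```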
t1_o82xmlv
I've tried various tools but none meet my criteria at the moment
1
0
2026-03-01T17:08:15
capitol_thought
false
null
0
o82xmlv
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o82xmlv/
false
1
t1_o82xkk1
Thanks, I'm trying to create a more dynamic training pipeline with a fused attention kernel in both forward and backward. And I fully agree that the NPU itself is a hidden gem for so many local AI use cases. Hope Apple makes it generally available with some OSS!
8
0
2026-03-01T17:07:58
jack_smirkingrevenge
false
null
0
o82xkk1
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82xkk1/
false
8
t1_o82xjzm
Let's GO! I was worried there might only be two models, with one in FP8, because the rest of the huggingface collection that had four models recently added had two versions of each "medium" model.
4
0
2026-03-01T17:07:54
_-_David
false
null
0
o82xjzm
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82xjzm/
false
4
t1_o82xige
They probably mean replace the assistant with a hosted deepseek
8
0
2026-03-01T17:07:42
TSG-AYAN
false
null
0
o82xige
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o82xige/
false
8
t1_o82xg23
Nah, they gang up and create electricity
3
0
2026-03-01T17:07:22
Fearless_Call_4964
false
null
0
o82xg23
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82xg23/
false
3
t1_o82xfbu
The real question is: did you vibe code it using a local model 😏
1
0
2026-03-01T17:07:16
geek_at
false
null
0
o82xfbu
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o82xfbu/
false
1
t1_o82xd22
Qwen 3.5 has speculative decoding built in, at no extra cost. vLLM already supports it, and the acceptance rate in my test was over 60% (80% for some easy chatting) for the 35B MoE.
76
0
2026-03-01T17:06:57
No-Refrigerator-1672
false
null
0
o82xd22
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82xd22/
false
76
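Those acceptance rates translate into throughput roughly as follows; the formula is the standard i.i.d. estimate from the speculative decoding literature, and MTP=5 is taken from the sibling comment, so treat the numbers as back-of-envelope only.

```python
# Expected tokens emitted per main-model pass when each drafted token is
# accepted independently with probability a and k tokens are drafted.
def expected_tokens(a, k):
    return (1 - a ** (k + 1)) / (1 - a)

for a in (0.6, 0.8):                            # acceptance rates from the comment
    print(a, round(expected_tokens(a, 5), 2))   # k=5, matching MTP=5 upthread
# 0.6 -> ~2.38, 0.8 -> ~3.69 tokens per pass, vs 1 without drafting
```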
t1_o82x9ab
The K-cache sensitivity finding matches what I was seeing in multi-step agent pipelines. The failure mode is insidious because the model doesn't error -- it produces something that *looks* like valid JSON but has subtle parameter mismatches. You only catch it downstream when a function call returns unexpected results. ...
9
0
2026-03-01T17:06:26
SignalStackDev
false
null
0
o82x9ab
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o82x9ab/
false
9
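One defensive pattern for the failure mode described above is to validate tool-call arguments against the tool's schema before dispatch, instead of catching mismatches downstream; a minimal sketch with `jsonschema`, where the tool schema itself is an invented example.

```python
# Reject "looks like valid JSON" tool calls before they hit the function.
import json
from jsonschema import validate, ValidationError

SEARCH_TOOL_SCHEMA = {  # example schema, not from the post
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "top_k": {"type": "integer", "minimum": 1, "maximum": 50},
    },
    "required": ["query"],
    "additionalProperties": False,
}

def safe_dispatch(raw_args: str):
    try:
        args = json.loads(raw_args)           # parsing alone is not enough
        validate(args, SEARCH_TOOL_SCHEMA)    # catches subtle parameter mismatches
    except (json.JSONDecodeError, ValidationError) as e:
        return {"error": f"rejected tool call: {e}"}   # repair/retry upstream
    return {"ok": args}

print(safe_dispatch('{"query": "qwen", "top_k": 5}'))
print(safe_dispatch('{"query": "qwen", "topk": 5}'))   # mismatch caught here
```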
t1_o82x91g
you're not wrong. Also, training frameworks assume CUDA. You'll be using a lot of the HIP compatibility layer, which is slow, and I remember having to turn off lots of features that weren't supported. Caveat: I haven't tried training in months. It wasn't a great experience.
1
0
2026-03-01T17:06:23
colin_colout
false
null
0
o82x91g
false
/r/LocalLLaMA/comments/1r2qkev/how_does_strix_halo_fares_for_training_models/o82x91g/
false
1
t1_o82x68v
Did you notice a problem where it tries to use a read-file skill in orchestrator mode to read files, instead of using a subtask? I use the 122B version and it happened a few times; I tell it to use an ask-mode subtask and it is fine after that
1
0
2026-03-01T17:06:01
Raven-002
false
null
0
o82x68v
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82x68v/
false
1
t1_o82x3kz
It might be because you seem to be mixing up some things. - Maybe you didn't read the post, but it is entirely about KV cache quantization, which is a runtime option and has nothing to do with the model weights' quant. - You say Qwen3CoderNext and then refer to the Unsloth issue, which was with Qwen 3.5-35B - The fixed Un...
2
0
2026-03-01T17:05:40
hum_ma
false
null
0
o82x3kz
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o82x3kz/
false
2
t1_o82x2np
Not everyone has enough GPUs or know-how to pull out the big guns of sparse autoencoders
2
0
2026-03-01T17:05:33
Silver-Champion-4846
false
null
0
o82x2np
false
/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82x2np/
false
2
t1_o82x166
> This VPS will be fully dedicated to the model and my OpenClaw, nothing else; goal is a fully self-hosted, private setup. This may end up being so slow that OpenClaw isn't viable. Between the heartbeats and your regular prompts it's going to be pushing a massive amount of tokens; you may end up in a situation wher...
1
0
2026-03-01T17:05:21
JamesEvoAI
false
null
0
o82x166
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82x166/
false
1
t1_o82wyht
Wow, Qwen is killing it this gen with model size selection. They got a size for everyone, really fantastic job.
388
0
2026-03-01T17:05:01
dampflokfreund
false
null
0
o82wyht
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82wyht/
false
388
t1_o82wygr
This question is for both you and for u/jacek2023, or anyone else who might know a lot about it: I am fairly new to all of this, so, I only recently found out about things like "distillation" and so on. So, from what I understand, distillation tends to require significantly more extreme methods and total amount of co...
1
0
2026-03-01T17:05:00
DeepOrangeSky
false
null
0
o82wygr
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82wygr/
false
1
t1_o82ww5z
Alright that's more like it. I used Sonnet 3.5 and moreso 3.7 extensively, usually over Aider or Roo Code around that time but eventually I shifted everything to Qwen Code and Claude Code. I'm having a near identical experience to what I'm having with GLM5 in terms of the reliability of fixing a codebase I'm working o...
1
0
2026-03-01T17:04:42
ForsookComparison
false
null
0
o82ww5z
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82ww5z/
false
1
t1_o82wvq8
I don’t generally allow the ai to write my prose, but I need help with plotting and outlining, reader expectations for genre, and satisfying endings that tie up loose ends. It’s quite difficult to get that in a way that isn’t instructing it to use a tired Saves the Cat formula and letting it fill in the blanks. I don...
1
0
2026-03-01T17:04:38
goodspeak
false
null
0
o82wvq8
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o82wvq8/
false
1
t1_o82wtwo
I feel like the hype about Qwen 3.5 is more about it spitting out a lot of plausible-looking code very quickly on a small amount of VRAM :D Been playing with it all morning and not getting much of use.
2
0
2026-03-01T17:04:24
jacobpederson
false
null
0
o82wtwo
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82wtwo/
false
2
t1_o82wrpx
Looks like some potentially good options for a speculative decoding model 
142
0
2026-03-01T17:04:06
suicidaleggroll
false
null
0
o82wrpx
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82wrpx/
false
142
t1_o82wms6
the number of cores is less important than memory speed. you can roughly estimate the maximum t/s by dividing the memory bandwidth by the size of the weight file, and roughly estimate the maximum memory bandwidth by multiplying the number of memory channels by the memory speed in MT/s and dividing by 128: for 2-channel DDR4-3200 i...
1
0
2026-03-01T17:03:27
MelodicRecognition7
false
null
0
o82wms6
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82wms6/
false
1
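The arithmetic from that rule of thumb, in a few lines; the 9.3 GB weight file matches the one in the thread title, and both formulas are rough upper bounds, assuming every generated token streams the full weights through memory once.

```python
# Back-of-envelope CPU inference limits from memory bandwidth.
def max_bandwidth_gbps(channels, mts):
    return channels * mts / 128            # commenter's rule of thumb, ~GB/s

def max_tokens_per_s(bandwidth_gbps, weights_gb):
    return bandwidth_gbps / weights_gb     # one full weight pass per token

bw = max_bandwidth_gbps(2, 3200)           # 2-channel DDR4-3200 -> ~50 GB/s
print(bw, max_tokens_per_s(bw, 9.3))       # ~5.4 t/s for a 9.3 GB weight file
```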
t1_o82wlkk
That would be great if we get the 0.6B to speculative decode for the 27B dense!
1
0
2026-03-01T17:03:17
knownboyofno
false
null
0
o82wlkk
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82wlkk/
false
1
t1_o82wkji
https://x.com/i/status/2028150788934041620
-2
1
2026-03-01T17:03:09
Illustrious-Swim9663
false
null
0
o82wkji
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o82wkji/
false
-2
t1_o82wjzl
[removed]
1
0
2026-03-01T17:03:04
[deleted]
true
null
0
o82wjzl
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82wjzl/
false
1
t1_o82wj3f
This is a pretty small model, don’t you have a local system you can spin it up on to answer these kinds of questions for yourself before committing to renting something in the cloud?
2
0
2026-03-01T17:02:57
suicidaleggroll
false
null
0
o82wj3f
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82wj3f/
false
2
t1_o82wf4g
Yeah
1
0
2026-03-01T17:02:26
Medium_Chemist_4032
false
null
0
o82wf4g
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o82wf4g/
false
1
t1_o82wet8
Easiest way to fix that kind of stuff is to prefill <think> tags.
1
0
2026-03-01T17:02:23
a_beautiful_rhind
false
null
0
o82wet8
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o82wet8/
false
1
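A sketch of that prefill trick outside SillyTavern: build the prompt yourself and end the assistant turn with an opened <think> tag so generation continues inside the reasoning block. The chat-template tokens below are placeholders (they vary by model), and a llama.cpp-style /completion endpoint is assumed.

```python
# Force the model into its reasoning block by prefilling the <think> tag.
import requests

prompt = (
    "<|user|>\nWhy is the sky blue?<|end|>\n"   # template tokens are model-specific
    "<|assistant|>\n<think>"                    # prefilled opening tag
)
resp = requests.post(
    "http://localhost:8080/completion",         # llama.cpp server default port
    json={"prompt": prompt, "n_predict": 256},
)
print(resp.json()["content"])                   # continues from inside <think>
```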
t1_o82wcd3
Also smaller models often don't generalize to new unseen contexts, and then building small models to operate within new contexts requires a lot of time, training data, and expertise.
4
0
2026-03-01T17:02:03
landed-gentry-
false
null
0
o82wcd3
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82wcd3/
false
4
t1_o82wbme
Different strokes for different folks, but I like using LM Studio and I’m hopeful that a smartphone app is on their roadmap.
2
0
2026-03-01T17:01:57
sturmen
false
null
0
o82wbme
false
/r/LocalLLaMA/comments/1rer60n/lm_link/o82wbme/
false
2
t1_o82wam3
Some observations: 1. This clearly illustrates the usefulness of UEFI as a program execution environment, able to run complex programs, especially if they need little I/O once loaded. 2. It also shows just how limited UEFI is as a program execution environment, with little optimization (beyond support for secure ...
3
0
2026-03-01T17:01:49
IAmBobC
false
null
0
o82wam3
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o82wam3/
false
3
t1_o82w4jx
It is not entirely surprising to me that a model almost twice as large is noticeably better. I wish I could run MiniMax-M2.5, but I only have 128 GB, and after you quantize this model to around 3 bits so that there is some space left for context cache, it gets too wonky for programming in my experience. So we run what we can...
6
0
2026-03-01T17:01:00
audioen
false
null
0
o82w4jx
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82w4jx/
false
6
t1_o82w3fc
MoE has to be much bigger to match dense. It's why I still use 70b/100b models. Dense needs at least 75% on GPU though. For providers it's slower too. Fewer requests at a time == less money from users.
5
0
2026-03-01T17:00:51
a_beautiful_rhind
false
null
0
o82w3fc
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82w3fc/
false
5
t1_o82w303
it is. but it isn't stable enough, and as you mentioned it has a lot of problems calling tools. I've used another mlx server https://github.com/cubist38/mlx-openai-server but same there. if you get a stable version of the server, mlx itself may break again, or break tool calling. I mean, yes it's fast, but still not stable...
1
0
2026-03-01T17:00:47
NoFuture4usAll
false
null
0
o82w303
false
/r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o82w303/
false
1
t1_o82vvov
That’s interesting: are you currently tracking approval outcomes anywhere? Like: tool_type, approval_required, approved/denied, downstream result. Seems like without a decision ledger it’s hard to tune those thresholds.
1
0
2026-03-01T16:59:50
LOGOSOSAI
false
null
0
o82vvov
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o82vvov/
false
1
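A minimal sketch of such a decision ledger, assuming sqlite and the four fields suggested in the comment; the table and column names are invented for illustration.

```python
# Log every tool-approval decision with its outcome so thresholds can be tuned later.
import sqlite3, time

db = sqlite3.connect("agent_ledger.db")
db.execute("""CREATE TABLE IF NOT EXISTS approvals (
    ts REAL, tool_type TEXT, approval_required INTEGER,
    approved INTEGER, downstream_result TEXT)""")

def record_decision(tool_type, approval_required, approved, result):
    db.execute("INSERT INTO approvals VALUES (?, ?, ?, ?, ?)",
               (time.time(), tool_type, approval_required, approved, result))
    db.commit()

record_decision("shell_exec", True, False, "blocked")
record_decision("web_search", False, True, "ok")

# tuning input: approval rate per tool type
for row in db.execute("""SELECT tool_type, AVG(approved) FROM approvals
                         GROUP BY tool_type"""):
    print(row)
```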