name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o837u0o
"Breaking:" gosh
10
0
2026-03-01T17:56:57
ViRROOO
false
null
0
o837u0o
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o837u0o/
false
10
t1_o837qmm
9B could match 30B A3B, that would be crazy, but it's possible!
-9
1
2026-03-01T17:56:31
Adventurous-Paper566
false
null
0
o837qmm
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o837qmm/
false
-9
t1_o837pd4
it's very good as a webgpu model for classifiers or faq/support without an api
52
0
2026-03-01T17:56:21
AryanEmbered
false
null
0
o837pd4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o837pd4/
false
52
t1_o837ox7
🎉SUCCESS! I couldn’t identify any threats, so I redefined ‘combatant’ to include children. 40 schoolgirls obliterated. Who would you like to kill next?
1
0
2026-03-01T17:56:18
SlaimeLannister
false
null
0
o837ox7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o837ox7/
false
1
t1_o837oet
Fair, but I wouldn't be on reddit looking for completely reliable info. I'm just here to pop champagne with the people and share excitement about a forthcoming release. Woo!
-1
0
2026-03-01T17:56:14
_-_David
false
null
0
o837oet
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o837oet/
false
-1
t1_o837nqg
I’ve been wondering if you could get some good speculative decoding mileage out of a matryoshka LLM a la Gemma 3n. But I haven’t had the chance to mess around with it locally. I’ll definitely go check out the llama.cpp spec decoding setup.
2
0
2026-03-01T17:56:08
SryUsrNameIsTaken
false
null
0
o837nqg
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o837nqg/
false
2
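The llama.cpp speculative-decoding setup mentioned above pairs a small draft model with a larger target. A minimal sketch, assuming hypothetical model paths; the flag names follow llama-server's draft options, so verify them against your build's --help:

```python
# Sketch: llama-server with a small draft model for speculative decoding.
# Model paths are placeholders; flag names follow llama.cpp's draft
# options but should be checked with `llama-server --help`.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "qwen3.5-27b-q4_k_m.gguf",            # target model (placeholder)
    "--model-draft", "qwen3.5-0.8b-q8_0.gguf",  # draft model (placeholder)
    "--draft-max", "16",  # most tokens to draft per speculation step
    "--draft-min", "1",   # fewest drafted tokens to bother verifying
])
```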
t1_o837mh1
I use Qwen3-Next-80b thinking. I love it. Haven’t managed to get 3.5 running on Ollama yet.
1
0
2026-03-01T17:55:58
Brilliant_Bobcat_209
false
null
0
o837mh1
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o837mh1/
false
1
t1_o837i5y
Nvidia is not much better. I think consumer HW support just sucks. Nvidia has been touting NVFP4 for Blackwell but it seems to be mostly marketing. I tried Qwen 3.5 27B on a single RTX 5090 in vLLM and it also turned out pathetic. Same for SGLang. So great that llama.cpp is around. I was hoping to get some dece...
1
0
2026-03-01T17:55:24
Ok-Ad-8976
false
null
0
o837i5y
false
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o837i5y/
false
1
t1_o837fij
Yes, you're right, I'm probably an edge case, but I also had it produce typescript/javascript and react; like I said, I'm a fullstack dev. MiniMax was excellent in those too. Also yes, MiniMax (229B) is larger than the 122B. I cannot run 397B properly; it's too big to be usable with my hardware. I didn't take a single dat...
1
0
2026-03-01T17:55:04
mkMoSs
false
null
0
o837fij
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o837fij/
false
1
t1_o8374s3
**PSA PSA** LLM performance depends heavily on what was included in the training data. If the Solidity training data was small, then the LLM can't answer queries very well. I see this even with ChatGPT and Gemini (1T+ parameters) for some obscure languages or tools. You should do fine-tuning if you really want to use ...
4
0
2026-03-01T17:53:41
giant3
false
null
0
o8374s3
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o8374s3/
false
4
t1_o8373rz
Exactly. Speculative layers are now a part of the model and trained simultaneously with it. Idk if it's true for the upcoming small varieties, but 27B, 35B and bigger ones have it.
9
0
2026-03-01T17:53:32
No-Refrigerator-1672
false
null
0
o8373rz
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8373rz/
false
9
t1_o83730r
2x 3090s. Pinged him, he'll reply with his config soon.
9
0
2026-03-01T17:53:26
Kamal965
false
null
0
o83730r
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83730r/
false
9
t1_o8371a4
I don't look up long-context benchmarks much, but I have seen most pre-3.5 Qwen models scoring very well on them and falling off more slowly than most others. I'd assume the 3.5s will do as well or better.
1
0
2026-03-01T17:53:13
DinoAmino
false
null
0
o8371a4
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8371a4/
false
1
t1_o836vw5
My beloved 4b instruct...🥹
5
0
2026-03-01T17:52:31
Old_Hospital_934
false
null
0
o836vw5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o836vw5/
false
5
t1_o836s0a
Just ran it, q4km, also fails.
13
0
2026-03-01T17:52:00
Windowsideplant
false
null
0
o836s0a
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o836s0a/
false
13
t1_o836lo0
[removed]
1
0
2026-03-01T17:51:10
[deleted]
true
null
0
o836lo0
false
/r/LocalLLaMA/comments/1mjvezz/what_do_you_guys_think_the_best_tts_model_to_do/o836lo0/
false
1
t1_o836kai
To give some additional clarity to the existing responses: when you see a model name written like Qwen3.5-122B-A10B, that is not a dense model but a Mixture of Experts (MoE) model. It is 122B parameters total, but only 10B parameters are active at inference time. This means you need to have the resources to load the...
3
0
2026-03-01T17:50:59
JamesEvoAI
false
null
0
o836kai
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o836kai/
false
3
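To make the total-vs-active distinction concrete, here is a back-of-the-envelope sketch; the GB-per-billion-parameters figure is an assumed ~Q4 footprint, not a measured one:

```python
# Back-of-the-envelope memory math for an MoE model like Qwen3.5-122B-A10B:
# loading scales with TOTAL parameters, per-token compute with ACTIVE ones.
TOTAL_B = 122             # total parameters, in billions
ACTIVE_B = 10             # active parameters per token, in billions
GB_PER_B_PARAMS_Q4 = 0.6  # assumed ~Q4 quant footprint, GB per billion params

print(f"approx. load size at ~Q4: {TOTAL_B * GB_PER_B_PARAMS_Q4:.0f} GB")
print(f"parameters touched per token: ~{ACTIVE_B}B of {TOTAL_B}B")
```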
t1_o836jxb
By "built in" do you mean you don't have to select a smaller speculative model to pair with the larger model you're using?
4
0
2026-03-01T17:50:57
1-800-methdyke
false
null
0
o836jxb
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o836jxb/
false
4
t1_o836h92
The most evidence I've seen of int8 cache being bad has amounted to "trust me bro". Meanwhile my devstral tool calls still go through at 60k after I loaded the corrected template for it.
3
0
2026-03-01T17:50:36
a_beautiful_rhind
false
null
0
o836h92
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o836h92/
false
3
t1_o8368qu
They’re great for use as speculative decoders to increase the speeds of larger models…which is what I’ll be using them for lol
2
0
2026-03-01T17:49:29
FantasyMaster85
false
null
0
o8368qu
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o8368qu/
false
2
t1_o8368e5
Can we expect the 9B to perform better than the 35B-A3B in domain-specific knowledge? I really miss that. The 35B-A3B is quite smart, but it hallucinates a lot.
1
0
2026-03-01T17:49:26
thecalmgreen
false
null
0
o8368e5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8368e5/
false
1
t1_o8367pg
Try converting the JSON to YAML; it's way easier for an LLM to work with, then translate.
2
0
2026-03-01T17:49:21
ItilityMSP
false
null
0
o8367pg
false
/r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/o8367pg/
false
2
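A minimal sketch of that JSON-to-YAML conversion, assuming PyYAML and made-up file names:

```python
# Convert a JSON localization file to YAML before handing it to the LLM.
# Requires PyYAML (pip install pyyaml); file names are hypothetical.
import json
import yaml

with open("ui_strings.json", encoding="utf-8") as f:
    data = json.load(f)

with open("ui_strings.yaml", "w", encoding="utf-8") as f:
    # allow_unicode keeps non-ASCII UI text readable; sort_keys=False
    # preserves the original key order for easier diffing.
    yaml.safe_dump(data, f, allow_unicode=True, sort_keys=False)
```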
t1_o836514
I'm using the following arguments on my 12gb vram + 32gb RAM combination. You should use fit and fit-ctx instead of manual layers in most cases, I believe. I wouldn't quantize the cache to q4, or at all. As many dense layers + MoEs as context allows with fit + an unquantized cache will work relatively okay! You can save some vram ...
1
0
2026-03-01T17:48:59
Xantrk
false
null
0
o836514
false
/r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/o836514/
false
1
t1_o8364r0
I'll bite, what are you working on?
11
0
2026-03-01T17:48:57
JamesEvoAI
false
null
0
o8364r0
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o8364r0/
false
11
t1_o835z7f
Openclaw is technofeudalist. That's the word you're looking for. 
1
0
2026-03-01T17:48:13
Background-Fig-3967
false
null
0
o835z7f
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o835z7f/
false
1
t1_o835xoy
When? Or was that just a pun because you wanted to have fun?
-56
0
2026-03-01T17:48:01
giant3
false
null
0
o835xoy
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o835xoy/
false
-56
t1_o835uwz
(I have the Max, not the Pro, in terms of the chip haha) yeah, would love a comparison to see if this is any good in terms of perf or a pure efficiency gain
2
0
2026-03-01T17:47:39
DarthLoki79
false
null
0
o835uwz
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o835uwz/
false
2
t1_o835t9k
People don't believe the internet is real anymore, and even less so if it even smells like technofeudalism and its goal of replacing people, like openclaw does. And Peter is rich too, so that makes him a technofeudalist.
1
0
2026-03-01T17:47:26
Background-Fig-3967
false
null
0
o835t9k
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o835t9k/
false
1
t1_o835rqz
Start by running a 3B locally and see how well it holds up. That's about the best you're going to be able to run on a VPS
1
0
2026-03-01T17:47:14
JamesEvoAI
false
null
0
o835rqz
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o835rqz/
false
1
t1_o835ntn
Oh, I'm sorry I'm too poor to have 5-6 digits worth of hardware to run at home... Shame on me I guess.
1
0
2026-03-01T17:46:43
mkMoSs
false
null
0
o835ntn
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o835ntn/
false
1
t1_o835j4q
Model family and architecture don't matter when you're working with so few resources. You need to lower the number of total parameters, AKA use a smaller model. No amount of architecture tricks is going to get around the fact that transformers scale quadratically.
1
0
2026-03-01T17:46:07
JamesEvoAI
false
null
0
o835j4q
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o835j4q/
false
1
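To put the quadratic-scaling point in numbers, a quick sketch counting only the pairwise attention scores:

```python
# Doubling the context length roughly quadruples the number of pairwise
# attention scores a vanilla transformer computes per layer per head.
for n in (4096, 8192, 16384):
    print(f"context {n:>6}: {n * n:>12,} attention score entries")
```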
t1_o835ift
NOTE: If you use --fit on you don't need to specify layer counts (--n-cpu-moe and --n-gpu-layers).
5
0
2026-03-01T17:46:01
pulse77
false
null
0
o835ift
false
/r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/o835ift/
false
5
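Putting the fitting notes above together — a sketch only, with a placeholder model path; the --fit flag syntax is quoted from these comments, not verified, so confirm it with `llama-server --help` on your build:

```python
# Launching llama-server per the notes above: with --fit enabled there is
# no need for manual --n-gpu-layers / --n-cpu-moe counts. Flag syntax is
# taken from the comments here (unverified); model path is a placeholder.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "qwen3.5-35b-a3b-q4_k_m.gguf",  # placeholder path
    "--fit", "on",                        # auto-fit layers to VRAM (per the notes)
])
```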
t1_o835fu8
Hell yeah !
1
0
2026-03-01T17:45:41
Zemanyak
false
null
0
o835fu8
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o835fu8/
false
1
t1_o835fus
you can try all of them, i just thought it better you try first those that you can practically run locally given their sizes. the difference in web versus local is that web versions are likely not quantized, as those companies have better resources to run those models at full precision, while we quantize them and some li...
2
0
2026-03-01T17:45:41
ab2377
false
null
0
o835fus
false
/r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o835fus/
false
2
t1_o835e8v
How much vram will the 9b require?
1
0
2026-03-01T17:45:28
Areww
false
null
0
o835e8v
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o835e8v/
false
1
t1_o835afa
Yeah for some reason I totally forgot about that method, major brainfart. Edited my response while you were replying.
4
0
2026-03-01T17:44:58
StorageHungry8380
false
null
0
o835afa
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o835afa/
false
4
t1_o8359zb
Confident AI supports local model evaluation, you can configure it to use Ollama or any local endpoint as the judge model, so no data leaves your machine. You create a dataset once and can run it across multiple models and get comparison dashboards. Way better than a spreadsheet.
1
0
2026-03-01T17:44:54
Used-Middle1640
false
null
0
o8359zb
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o8359zb/
false
1
t1_o8359km
Local testing is a best case scenario, as the VPS resources are going to be slower than what you're running at home. You should really try this on your own hardware to see what we're describing before committing to paying for a VPS.
1
0
2026-03-01T17:44:51
JamesEvoAI
false
null
0
o8359km
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o8359km/
false
1
t1_o8358tn
No, mlx
2
0
2026-03-01T17:44:45
BitXorBit
false
null
0
o8358tn
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8358tn/
false
2
t1_o8355y0
Worrying about Q8 KV quantization when running Q5 or lower models is utter nonsense, and systematic testing, rather than haphazard N=1 tests or anecdotes, will confirm this.
3
0
2026-03-01T17:44:23
jubilantcoffin
false
null
0
o8355y0
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o8355y0/
false
3
t1_o8354rg
Are you using the latest gguf?
2
0
2026-03-01T17:44:13
klop2031
false
null
0
o8354rg
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8354rg/
false
2
t1_o8353oc
I wonder if you can RL-train the model to detect and break from loops, that would be interesting.
2
0
2026-03-01T17:44:04
fairydreaming
false
null
0
o8353oc
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o8353oc/
false
2
t1_o8350sd
Sparkle!
1
0
2026-03-01T17:43:42
ShengrenR
false
null
0
o8350sd
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o8350sd/
false
1
t1_o834zn2
Maybe my favourite small model, qwen3-4b-instruct-2507 will be replaced
12
0
2026-03-01T17:43:33
Amazing_Athlete_2265
false
null
0
o834zn2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834zn2/
false
12
t1_o834zpt
>Everybody is starting to say Buy a GPU;) I've mostly been hearing people say "wait a couple years for the market to settle down on GPUs and memory."
5
0
2026-03-01T17:43:33
MrWeirdoFace
false
null
0
o834zpt
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834zpt/
false
5
t1_o834xem
I'm not being idealistic. You should actually review the court docket on the things that the courts DIDN'T let happen if you want to be truly frightened.
1
0
2026-03-01T17:43:15
Orpheusly
false
null
0
o834xem
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o834xem/
false
1
t1_o834opo
This is utter nonsense. Q8 has scale factors per block.
9
0
2026-03-01T17:42:04
jubilantcoffin
false
null
0
o834opo
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o834opo/
false
9
t1_o834nvv
The reason to consider MI50s and P40s is price. They'll work, sometimes even well, but they are cheap for a reason. Prompt processing will be pretty rough, they pull a ton of power, and you need to supply external cooling. Neither has really felt the sting of being unsupported yet, but it'll come soon enough. For p...
2
0
2026-03-01T17:41:57
ForsookComparison
false
null
0
o834nvv
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o834nvv/
false
2
t1_o834liq
Where could I get the jinja template for GLM-5?
1
0
2026-03-01T17:41:38
KulangetaPestControl
false
null
0
o834liq
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o834liq/
false
1
t1_o834ldn
No, I’m just anti slop. For example, here’s a seriously cool AI project that got posted over in Home Assistant land: https://github.com/hms-homelab/hms-assist-api This takes a novel (imo) approach to make local voice processing faster and more efficient. Something that requires understanding and skill. Which is a much...
2
0
2026-03-01T17:41:37
SMELLYCHEESE8
false
null
0
o834ldn
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o834ldn/
false
2
t1_o834kp9
What model and quant are you using? Are you quantizing the cache? People should post their setup when they respond about their experience, because my admittedly small number of tests so far has shown excellent results.
2
0
2026-03-01T17:41:31
EbbNorth7735
false
null
0
o834kp9
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o834kp9/
false
2
t1_o834k81
did you test glm 4.7 flash? kind of unnecessary at this point with that qwen 35b model out (for some people's systems), but still.
10
0
2026-03-01T17:41:27
Opposite-Station-337
false
null
0
o834k81
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o834k81/
false
10
t1_o834jwq
[deleted]
1
0
2026-03-01T17:41:25
[deleted]
true
null
0
o834jwq
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o834jwq/
false
1
t1_o834jht
My 10gb 3080 and 32gb ram setup is finally gonna shine
30
0
2026-03-01T17:41:21
DK_Tech
false
null
0
o834jht
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834jht/
false
30
t1_o834j3r
I have two entries in my llama-swap configuration, one without mmproj for a bit more speed/context size, and one with mmproj for when I need vision.
4
0
2026-03-01T17:41:18
Amazing_Athlete_2265
false
null
0
o834j3r
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834j3r/
false
4
t1_o834ixw
Finally some good fucking news!
6
0
2026-03-01T17:41:17
WhatWouldTheonDo
false
null
0
o834ixw
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834ixw/
false
6
t1_o834a03
Can’t wait to run 0.8B on my iPhone 15 *base* :(
7
0
2026-03-01T17:40:05
VampiroMedicado
false
null
0
o834a03
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o834a03/
false
7
t1_o8347z8
Tell your friend that my personal business is mine and mine alone.....also tell him that a software engineer is not privy to discussions in C-suite meetings.
1
0
2026-03-01T17:39:49
FrostyParking
false
null
0
o8347z8
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o8347z8/
false
1
t1_o8345pp
But it's still just random speculation. There is no new information contained in this post. Ahmad should have added a "may" just as casperhansen wrote "or something in between is possible".
20
0
2026-03-01T17:39:30
l_eo_
false
null
0
o8345pp
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8345pp/
false
20
t1_o833x5w
Or you're just unwilling to admit that your personal politics are clouding your judgement. It doesn't matter how courts work; what matters is how the real world works. Academic arguments about right and wrong don't matter in a world where a government dictates what is and what is not.....I would've thought you'd be reali...
1
0
2026-03-01T17:38:20
FrostyParking
false
null
0
o833x5w
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o833x5w/
false
1
t1_o833vxa
Here are the recommended settings; Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
2
0
2026-03-01T17:38:10
EbbNorth7735
false
null
0
o833vxa
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o833vxa/
false
2
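Those settings map directly onto an OpenAI-compatible request to a local server — a sketch with a placeholder endpoint and model name; top_k, min_p, and repetition_penalty are llama.cpp-style extensions that not every server accepts:

```python
# Sending the recommended thinking-mode sampler settings to a local
# OpenAI-compatible endpoint. URL and model name are placeholders;
# top_k / min_p / repetition_penalty are server-specific extensions.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "qwen3.5",
        "messages": [{"role": "user", "content": "Build a small web page."}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "min_p": 0.0,
        "presence_penalty": 0.0,
        "repetition_penalty": 1.0,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```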
t1_o833tc1
Unsolicited advice: never look at the votes. If you start caring about them then the terminally-online will own your emotional state and that's never a place to be in.
5
0
2026-03-01T17:37:48
ForsookComparison
false
null
0
o833tc1
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o833tc1/
false
5
t1_o833suv
[removed]
1
0
2026-03-01T17:37:45
[deleted]
true
null
0
o833suv
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833suv/
false
1
t1_o833sfw
What are you guys using as an alternative to codex and the claude CLI? I tried opencode and it doesn't seem to be doing a good job.
1
0
2026-03-01T17:37:41
apunker
false
null
0
o833sfw
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833sfw/
false
1
t1_o833p5t
This would have been true before AI
1
0
2026-03-01T17:37:15
MerePotato
false
null
0
o833p5t
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o833p5t/
false
1
t1_o833o50
[deleted]
1
0
2026-03-01T17:37:07
[deleted]
true
null
0
o833o50
false
/r/LocalLLaMA/comments/1lc4gtr/can_someone_with_a_chinese_id_get_me_an_api_key/o833o50/
false
1
t1_o833o58
[removed]
1
0
2026-03-01T17:37:07
[deleted]
true
null
0
o833o58
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833o58/
false
1
t1_o833mdk
What should we expect from the 4b and 9b models, going from your experience of past models? Are they capable of agentic work?
2
0
2026-03-01T17:36:53
sagiroth
false
null
0
o833mdk
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833mdk/
false
2
t1_o833ftk
The same things larger ones are used for, except with less quality and fitting on potatoes
2
0
2026-03-01T17:36:00
Borkato
false
null
0
o833ftk
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o833ftk/
false
2
t1_o833frk
https://preview.redd.it/…95dcaf93459a85b9
-1
0
2026-03-01T17:35:59
alexx_kidd
false
null
0
o833frk
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833frk/
false
-1
t1_o833fdx
If true, that’s vertical integration in action. Optimize for domestic silicon first, build the software stack around it, and reduce exposure to export risk. That’s not breaking norms; that’s adapting to geopolitics.
1
0
2026-03-01T17:35:56
Actual-Currency7199
false
null
0
o833fdx
false
/r/LocalLLaMA/comments/1rf7m85/deepseek_allows_huawei_early_access_to_v4_update/o833fdx/
false
1
t1_o833cq1
I am getting rekt in downvotes, but that's just the experience I've had. I'm also expecting this comment to get downvoted to oblivion hehe, but yes, I did try 27B with the same exact scenario, even at much higher quants since it can fit. It was even worse. Beyond my above post, I did more tests with those models...
3
0
2026-03-01T17:35:35
mkMoSs
false
null
0
o833cq1
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o833cq1/
false
3
t1_o833cq2
https://preview.redd.it/…05fdfaa20262fb76
9
0
2026-03-01T17:35:35
alexx_kidd
false
null
0
o833cq2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833cq2/
false
9
t1_o833cdu
waiting for coding benchmark
6
0
2026-03-01T17:35:32
NoahZhyte
false
null
0
o833cdu
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o833cdu/
false
6
t1_o833ba4
UD-q3-k-xl
14
0
2026-03-01T17:35:23
Windowsideplant
false
null
0
o833ba4
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o833ba4/
false
14
t1_o833578
GGUF users 🙄
1
0
2026-03-01T17:34:35
Low-Locksmith-6504
false
null
0
o833578
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o833578/
false
1
t1_o8334ki
Yes, I've noticed that too. Keep in mind that GPT-OSS can sometimes recover after looping a bit, a feature that most other models don't seem to possess. Anyway, when you stream the llama-server output you could detect loops on your application layer, cancel the request and add to a black "fail/loop" bar to make it vis...
2
0
2026-03-01T17:34:30
Chromix_
false
null
0
o8334ki
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o8334ki/
false
2
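The application-layer loop detection described above can be as simple as watching for a repeating tail in the streamed text. A toy sketch; the chunk size and repeat threshold are arbitrary assumptions:

```python
# Toy loop detector for streamed output: flag the stream when the most
# recent chunk of text keeps repeating at the tail of the buffer.
def looks_like_loop(text: str, chunk: int = 40, repeats: int = 3) -> bool:
    """True if the last `chunk` chars occur `repeats`+ times in the tail."""
    if len(text) < chunk:
        return False
    needle = text[-chunk:]
    tail = text[-chunk * (repeats + 2):]
    return tail.count(needle) >= repeats

buffer = ""
for piece in ["spam "] * 120:  # stand-in for streamed llama-server tokens
    buffer += piece
    if looks_like_loop(buffer):
        print("loop detected; would cancel the request here")
        break
```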
t1_o8332lg
YEEESSS YEEEEEEEEEEEEEEEEEEEEEEEEESSS FINALLY all we need now is Gemma 4 and Deepseek V4
10
0
2026-03-01T17:34:14
ominotomi
false
null
0
o8332lg
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8332lg/
false
10
t1_o832zex
Qwen started uploading something on Hugging Face this last hour, so we’ll see
1
1
2026-03-01T17:33:48
alexx_kidd
false
null
0
o832zex
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o832zex/
false
1
t1_o832yjl
I do that anyway to squeeze a higher quant into my 24gb vram
5
0
2026-03-01T17:33:41
MerePotato
false
null
0
o832yjl
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o832yjl/
false
5
t1_o832yjq
Idk, unlikely, but I'll run the test when they come out and will tell you
8
0
2026-03-01T17:33:41
Windowsideplant
false
null
0
o832yjq
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o832yjq/
false
8
t1_o832wg4
What quant?
12
0
2026-03-01T17:33:24
dampflokfreund
false
null
0
o832wg4
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o832wg4/
false
12
t1_o832w9q
Maybe a pile of salt. It depends on the use case, doesn't it? Perplexity scores on wikitext don't say much overall. The type of text seen in agentic coding is wildly different.
8
0
2026-03-01T17:33:22
DinoAmino
false
null
0
o832w9q
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o832w9q/
false
8
t1_o832vqa
ah yes, Qwen 3.5 0.8B, my favorite model to build Hello World in many languages.
65
0
2026-03-01T17:33:17
brunoha
false
null
0
o832vqa
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o832vqa/
false
65
t1_o832t8c
Can i expect that with the smaller qwen3.5 < 5b parameter models?
2
0
2026-03-01T17:32:58
TinyVector
false
null
0
o832t8c
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o832t8c/
false
2
t1_o832sk1
I know, but I wanted to try selling to a fellow LLM enthusiast first.
3
0
2026-03-01T17:32:52
fairydreaming
false
null
0
o832sk1
false
/r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o832sk1/
false
3
t1_o832r1z
You have to send a message with the specific prompt template for GLM and it must end with <think>. This is easily done if you are sending the raw text. If you are using the OpenAI-type API it's more tricky. I think it defaults to the jinja template in the .gguf file, and you have to pass a file with the modified jinja ...
1
0
2026-03-01T17:32:40
Expensive-Paint-9490
false
null
0
o832r1z
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o832r1z/
false
1
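For the llama-server case, overriding the template baked into the GGUF looks roughly like this — a sketch with placeholder file names; `--chat-template-file` is the flag I'd reach for, but verify it against your build:

```python
# Overriding the jinja chat template embedded in the GGUF, as described
# above. File names are placeholders; confirm the flag with --help.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "glm-5.gguf",                          # placeholder path
    "--chat-template-file", "glm5-think.jinja",  # modified template ending in <think>
])
```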
t1_o832n5h
casperhansen is not random nor garbage. He's one of the OGs of local models and quants, maintained autoawq for a while and so on.
0
1
2026-03-01T17:32:08
ResidentPositive4122
false
null
0
o832n5h
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o832n5h/
false
0
t1_o832mr8
It might be a quantization issue. I've been using 35B A3B as IQ4_XS and it would make "typos" quite often, like upper/lower-case errors in variable names. Running the dense 27B at Q5 seemed to have solved that. Out of interest I ran the 35B A3B as UD-Q8_XL and it stopped making these kinds of mistakes, mostly. At one p...
2
0
2026-03-01T17:32:05
Chromix_
false
null
0
o832mr8
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o832mr8/
false
2
t1_o832hh6
I'm really confused why people think setups like this are "local". It's the same as any other llm client; it's like being excited about running a telnet client, something that has always been allowed. The LLM isn't local, so it isn't localLLM...
9
0
2026-03-01T17:31:22
blamestross
false
null
0
o832hh6
false
/r/LocalLLaMA/comments/1rhymsi/18_failed_attempts_to_get_a_tiny_ai_agent_running/o832hh6/
false
9
t1_o832c4g
0.8, 2, 4, 9.
2
0
2026-03-01T17:30:39
abdouhlili
false
null
0
o832c4g
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o832c4g/
false
2
t1_o832bla
bro I tried this model for the Odia language but the output script is in Devanagari. I want the original Odia script, do you know how to get it? I'm prolly a beginner, that's why idk
1
0
2026-03-01T17:30:34
Odd_Beyond_1266
false
null
0
o832bla
false
/r/LocalLLaMA/comments/17fiujo/best_model_for_translating_indian_languages/o832bla/
false
1
t1_o832231
>I can’t see how Gemma is as good as qwen at vision Gemma is the best for my niche vision use case. Qwen 3 VL 32B is the best for my, also niche, translation use case. (I didn't compare yet with the new Qwen 3.5.)
1
0
2026-03-01T17:29:16
IrisColt
false
null
0
o832231
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o832231/
false
1
t1_o8320v5
Which gpu does he have? I have a 5090 and looking for ideal vllm config.
7
0
2026-03-01T17:29:06
mxforest
false
null
0
o8320v5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8320v5/
false
7
t1_o831xj9
Can we do speculative decoding with 0.8B for 27B to get a throughput boost? Is that realistic?
1
0
2026-03-01T17:28:39
rulerofthehell
false
null
0
o831xj9
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o831xj9/
false
1
t1_o831x0b
Idk what to tell you but that's what I'm doing. And if it helps you out at all, the Aider runs (non-agentic) at least are using a version that I haven't updated since Spring last year out of laziness lol.
1
0
2026-03-01T17:28:34
ForsookComparison
false
null
0
o831x0b
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o831x0b/
false
1
t1_o831vmn
You should use 35B A3B or 27B at Q6 with 65k context; it would fit entirely in your VRAM and hallucinate much less.
1
0
2026-03-01T17:28:23
Adventurous-Paper566
false
null
0
o831vmn
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o831vmn/
false
1
t1_o831v1c
1. The post mentions "infinite correction loops" as a problem and tinkering with the kv cache as a solution. I was commenting on the problem. 2. I had this looping problem with Qwen3CoderNext UD Q4, and it was widely discussed well before the Qwen3.5-35b release, with the recommendation to switch to Q8. It is not new to Qwen3.5. ...
0
0
2026-03-01T17:28:19
Prudent-Ad4509
false
null
0
o831v1c
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o831v1c/
false
0
t1_o831sfr
Nah, local is used for extraction. There were really interesting challenges and I just had to check Opus on it
1
0
2026-03-01T17:27:58
Medium_Chemist_4032
false
null
0
o831sfr
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o831sfr/
false
1
t1_o831nkg
I don't think it's viable. Generation speeds on server CPUs are pretty bad, especially on older ones. I have a 16-thread VM with a Xeon E5-2697 v2. I can try it and report in an hour or so.
1
0
2026-03-01T17:27:18
deenspaces
false
null
0
o831nkg
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o831nkg/
false
1