name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8atc9i
> i'm frustrated with the new models. try to prompt them with just: hello. they will overthink reeeeally hard

Why would you just prompt it with "Hello"? Try an actual question or problem. If you really need to talk to an AI with "Hello", you can disable thinking lol.
1
0
2026-03-02T21:25:10
Xonzo
false
null
0
o8atc9i
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8atc9i/
false
1
t1_o8atbfe
I've heard 3.5 is pretty sensitive to key cache quantization, and that you should leave it as is.
1
0
2026-03-02T21:25:03
sine120
false
null
0
o8atbfe
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8atbfe/
false
1
t1_o8atabr
Try the new Qwen3.5 9B?
1
0
2026-03-02T21:24:54
BassAzayda
false
null
0
o8atabr
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8atabr/
false
1
t1_o8at97q
In the AMA they mentioned there would be a multimodal version of Step 3.5 Flash too.
1
0
2026-03-02T21:24:45
tarruda
false
null
0
o8at97q
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8at97q/
false
1
t1_o8at0st
Oh my! Nice!!! The fact it bumped from 1.7B to 2B is also nice.
1
0
2026-03-02T21:23:38
charles25565
false
null
0
o8at0st
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8at0st/
false
1
t1_o8aszmt
Update update: Ran qwen3.5-0.8b-q8 as a draft model on my tr1920x, which increased throughput to 18.7 tps.
1
0
2026-03-02T21:23:29
MaddesJG
false
null
0
o8aszmt
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o8aszmt/
false
1
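For context on the draft-model setup in the comment above, a minimal llama.cpp sketch follows. The model paths are placeholders (not the commenter's exact files); `-md` and the `--draft-*` flags are current llama-server options for speculative decoding.

```
# Speculative decoding in llama.cpp's llama-server: a small draft model
# proposes tokens that the big model then verifies in a single pass.
llama-server \
  -m ./Qwen3.5-27B-Q4_K_M.gguf \
  -md ./Qwen3.5-0.8B-Q8_0.gguf \
  --draft-max 16 --draft-min 1 \
  -ngl 99
```

Throughput gains depend on how often the draft model's guesses are accepted, which is why a small same-family model is the usual pick.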
t1_o8asyki
It isn't supported... even on CUDA 13 / sm_120. It only works if FA is off.
1
0
2026-03-02T21:23:20
arthor
false
null
0
o8asyki
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8asyki/
false
1
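A minimal sketch of the bf16 KV-cache workaround this thread describes, using the `-ctk`/`-ctv` flags that appear elsewhere in these comments; the model path is a placeholder, and FA is switched off per the report above that bf16 KV only works without it.

```
# Force a bf16 KV cache (the PSA says Qwen 3.5 misbehaves with f16),
# with flash attention disabled since bf16 KV + FA reportedly fails.
llama-server \
  -m ./Qwen3.5-9B-Q8_0.gguf \
  -ngl 99 \
  -ctk bf16 -ctv bf16 \
  --flash-attn off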
t1_o8astfj
"I have no brain and I must think" 
1
0
2026-03-02T21:22:39
imwearingyourpants
false
null
0
o8astfj
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8astfj/
false
1
t1_o8asrsm
Z-Image Turbo and Flux Klein 4b will work for you, if I'm not mistaken. Both are fairly recent and good quality for their size. Try out something called ComfyUI. It is the most popular image generation tool. It will have premade templates and workflows.
1
0
2026-03-02T21:22:26
_-_David
false
null
0
o8asrsm
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8asrsm/
false
1
t1_o8aso7w
Ok this is really amazing, hope to see a model update soon too
1
0
2026-03-02T21:21:58
Leflakk
false
null
0
o8aso7w
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8aso7w/
false
1
t1_o8asj9y
Yeah, dense models have fallen a bit out of favor so I'm not sure how much is just "this is what you should expect from a dense model" and how much is Alibaba figuring out something new here.
1
0
2026-03-02T21:21:19
mr_riptano
false
null
0
o8asj9y
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8asj9y/
false
1
t1_o8asgwb
Thanks for this data, NPU usage info is sorely lacking. What is the reason for the difference between the terminology for the NPU vs the GPU/CPU? Decoding and Prefill vs Prompt and Generation? Also the NPU appears to use about a quarter of the power but takes about 4 times as long to produce the same output. Doesn't that imply it ends up consuming the same amount of energy? Or am I reading this wrong?
1
0
2026-03-02T21:21:00
golden_monkey_and_oj
false
null
0
o8asgwb
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8asgwb/
false
1
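To make the energy arithmetic in the question above explicit (with $P$ the GPU's power draw and $t$ its runtime, taking the "quarter power, four times as long" numbers at face value):

$$E_{\text{GPU}} = P\,t, \qquad E_{\text{NPU}} = (0.25\,P)(4\,t) = P\,t = E_{\text{GPU}}$$

So yes: a quarter of the power over four times the runtime comes out to the same energy per task; the NPU's remaining advantage would be lower peak power (battery, thermals) rather than energy efficiency.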
t1_o8asddf
Qwen 3 VL 32B Q2 just works, so... the architectural change is decisive, right?
1
0
2026-03-02T21:20:32
IrisColt
false
null
0
o8asddf
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8asddf/
false
1
t1_o8asbxn
I am still waiting for Intel to enable support for NPU on their Lunar Lake platforms for all Linux distros. It is available only on Ubuntu AFAIK. :-(
1
0
2026-03-02T21:20:21
giant3
false
null
0
o8asbxn
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8asbxn/
false
1
t1_o8asapo
/r/StableDiffusion. Most of them will easily run on 12GB.
1
0
2026-03-02T21:20:11
Shap6
false
null
0
o8asapo
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8asapo/
false
1
t1_o8as6kw
There are a large number of skeptical comments on that very post.
1
0
2026-03-02T21:19:38
FORLLM
false
null
0
o8as6kw
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8as6kw/
false
1
t1_o8as56c
Kinda like this hehe? It has the model size you were talking about and from what I saw in tests it isn't dumb, could be pretty good. https://huggingface.co/stepfun-ai/Step-3.5-Flash
1
0
2026-03-02T21:19:28
ArtfulGenie69
false
null
0
o8as56c
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8as56c/
false
1
t1_o8as24n
Z-Image Turbo is a good one, I run it on 12 GB VRAM as well.
1
0
2026-03-02T21:19:03
No-Statistician-374
false
null
0
o8as24n
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8as24n/
false
1
t1_o8arzhz
Er... Qwen 3 VL 32B Q2 is decent, so... why is a Q3 not working...?
1
0
2026-03-02T21:18:42
IrisColt
false
null
0
o8arzhz
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8arzhz/
false
1
t1_o8arwdj
I added a run with a prompt that was 50x longer; I literally just cut and pasted the short prompt 50 times. The PP speed is faster: it's 97 with the longer prompt. It may be a problem with how it's calculating that number, since it was faster at 10x bigger than at 30x bigger, and now even faster at 50x bigger. At 150x bigger it's even faster. Average prefill speed: 198.711 tokens/s
1
0
2026-03-02T21:18:17
fallingdowndizzyvr
false
null
0
o8arwdj
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8arwdj/
false
1
t1_o8arvh8
Oh weird, I've never heard the term and googling revealed nothing.
1
0
2026-03-02T21:18:10
Clonkex
false
null
0
o8arvh8
false
/r/LocalLLaMA/comments/1iw3gzg/how_much_does_cpu_speed_matter_for_inference/o8arvh8/
false
1
t1_o8arvck
Sorry for the off-topic question: the file pipeline_base_logits.bin in https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF is the one that I should use to calculate the KLD for a specific 35B-A3B quant, right? I mean using the command: llama-perplexity -m <MODEL> --kl-divergence-base pipeline_base_logits.bin --kl-divergence. I'm planning to measure the KLD of some derestricted and heretic quants.
1
0
2026-03-02T21:18:09
dionisioalcaraz
false
null
0
o8arvck
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8arvck/
false
1
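For reference, the two-step KLD workflow in llama.cpp looks roughly like this; the calibration file and model names are placeholders, and it's an assumption that pipeline_base_logits.bin was produced the same way.

```
# Step 1 (presumably already done by Unsloth, per the comment): run the
# base model over a text file and save its logits to a base file.
llama-perplexity -m base-model.gguf -f calibration.txt \
  --kl-divergence-base pipeline_base_logits.bin

# Step 2: compare a quant's logits against the saved base logits.
llama-perplexity -m quant-to-test.gguf \
  --kl-divergence-base pipeline_base_logits.bin --kl-divergence
```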
t1_o8artms
The hardware isn't the problem. Nobody in your family wants to learn a new interface when "hey Alexa" already works with zero friction.
1
0
2026-03-02T21:17:56
tom_mathews
false
null
0
o8artms
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8artms/
false
1
t1_o8arpcd
“can you code up spongebob generative art”
1
0
2026-03-02T21:17:22
camracks
false
null
0
o8arpcd
false
/r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/o8arpcd/
false
1
t1_o8aro3k
Hey, I got Unsloth's Qwen3.5-35B-A3B-UD-Q4_K_XL running on my 3080 12GB @ ~67 t/s with 16k context (ncmoe 21). So far, this is the best mix of speed and quantization that I've found.

```
~/llama.cpp/build/bin/llama-server \
  -m ~/models/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
  -ngl 99 -ncmoe 21 --flash-attn on \
  -c 16384 --host 0.0.0.0 --port 3000
```

Qwen3.5-35B-A3B UD-Q5_K_XL also works @ 59 t/s, but I'm not sure if a 0.6% quality difference in the quants justifies the 16% speed difference.
1
0
2026-03-02T21:17:12
anthonybustamante
false
null
0
o8aro3k
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8aro3k/
false
1
t1_o8arkuo
The NPU is 25% of the speed and 25% of the power consumption. I have no idea how to leverage that in any way. What if we just finish the task in 25 seconds, consuming the same power as the NPU finishing it in 100 seconds?
1
0
2026-03-02T21:16:46
uti24
false
null
0
o8arkuo
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8arkuo/
false
1
t1_o8arc1k
He founded two businesses and streams every week.
1
0
2026-03-02T21:15:35
weeboards
false
null
0
o8arc1k
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8arc1k/
false
1
t1_o8arbdl
Isn't Qwen3.5 a 122B-A10B and GPT-OSS-120B a 120B-A5B? 5B of extra active params has to help. Not to mention a 6-month-newer dataset and possible model architecture upgrades.
1
0
2026-03-02T21:15:30
ArtfulGenie69
false
null
0
o8arbdl
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8arbdl/
false
1
t1_o8arahg
What do you use it with ? I don’t want to copy paste code from chat. 
1
0
2026-03-02T21:15:23
FearMyFear
false
null
0
o8arahg
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8arahg/
false
1
t1_o8ar97l
I hope my answer helped you with LM Studio.
1
0
2026-03-02T21:15:13
MadPelmewka
false
null
0
o8ar97l
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8ar97l/
false
1
t1_o8ar7g2
Thanks for the info!
1
0
2026-03-02T21:14:59
IrisColt
false
null
0
o8ar7g2
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8ar7g2/
false
1
t1_o8ar3fg
Yea, I use Claude for work. Local is for fun projects and to really see how much I can squeeze from a local model.
1
0
2026-03-02T21:14:28
FearMyFear
false
null
0
o8ar3fg
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ar3fg/
false
1
t1_o8ar071
I really like the UI. Results seem consistent with my experience, except Gemini 3.1 looks way slower than Gemini 3 Flash. Any chance you add an "Open models" filter?
1
0
2026-03-02T21:14:01
Zemanyak
false
null
0
o8ar071
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8ar071/
false
1
t1_o8ar00c
The Qwen3.5 line that just came out seems to have rendered a lot of the competition obsolete. Until we get Gemma 4, which I assume will be at Google I/O in April, I think the clear-cut winner for a 5090 is qwen3.5-27b. The benchmarks are outrageous. It matches or beats the 122b-a10b, and beats the 35b-a3b in all but speed. Looking at the benchmarks, the 27b dense model matches gpt-5-mini on high reasoning in pretty much every way. Including vision tasks. If you're interested in tts, stt, image gen or anything else, let me know. Recently I've been squeezing a bit of everything into VRAM at once to do some neat stuff. You came back at a great time
1
0
2026-03-02T21:14:00
_-_David
false
null
0
o8ar00c
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8ar00c/
false
1
t1_o8aqyp0
> Pro tip, adjust your prompt template to turn off thinking, set temperature to about .45, don't go any lower.

I suppressed thinking via the prompt template but now I have unending repetitions... what am I doing wrong? :(
1
0
2026-03-02T21:13:49
IrisColt
false
null
0
o8aqyp0
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8aqyp0/
false
1
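Rather than hand-editing the prompt template, one way to switch thinking off in llama.cpp is sketched below. It assumes Qwen3.5 honors the same `enable_thinking` template variable as Qwen3 (unconfirmed), and the model path is a placeholder; `--temp 0.45` follows the quoted tip.

```
# Pass enable_thinking=false to the chat template instead of editing it.
llama-server \
  -m ./Qwen3.5-2B-Q8_0.gguf \
  --chat-template-kwargs '{"enable_thinking": false}' \
  --temp 0.45
```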
t1_o8aqw4x
woof, that's a big tier difference between qwen 3.5 27B dense and 35B-A3B but it's also kind of insane that 27B is ranking up there at all
1
0
2026-03-02T21:13:28
HopePupal
false
null
0
o8aqw4x
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8aqw4x/
false
1
t1_o8aqugh
Yes, you're right. My bad. I hadn't tried it yet; now I'm seeing the issue with the NVFP4 397B model I'm running when trying to add MTP.
1
0
2026-03-02T21:13:15
TaiMaiShu-71
false
null
0
o8aqugh
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8aqugh/
false
1
t1_o8aqtk0
The qwen 3.5 models are broken right now in ollama and lm studio, but they do work with llama.cpp
1
0
2026-03-02T21:13:07
PloscaruRadu
false
null
0
o8aqtk0
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aqtk0/
false
1
t1_o8aqs2w
Thanks to AI, hardware vendors have found a new way to push product capabilities.
1
0
2026-03-02T21:12:55
Glad-Audience9131
false
null
0
o8aqs2w
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8aqs2w/
false
1
t1_o8aqpfp
Wow, I've been waiting on AMD NPU support on Linux for a while; surprised I missed the news on this. If I get it working I'll follow up with some benchmark results on my machine.
1
0
2026-03-02T21:12:34
EffectiveCeilingFan
false
null
0
o8aqpfp
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8aqpfp/
false
1
t1_o8aqki5
YES! I have the same issue. If I use --ctv bf16 --ctk bf16, my speed plummets and my CPU usage spikes. Must be a bug in llama.cpp.
1
0
2026-03-02T21:11:55
hp1337
false
null
0
o8aqki5
false
/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o8aqki5/
false
1
t1_o8aqk74
|model|size|params|backend|ngl|n_ubatch|fa|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|qwen35moe ?B Q8_0|34.36 GiB|34.66 B|ROCm|99|1024|1|pp2048 @ d4096|922.85 ± 1.17|
|qwen35moe ?B Q8_0|34.36 GiB|34.66 B|ROCm|99|1024|1|tg32 @ d4096|38.66 ± 0.02|

|model|size|params|backend|ngl|n_ubatch|fa|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|qwen35moe ?B Q8_0|34.36 GiB|34.66 B|Vulkan|99|1024|1|pp2048 @ d4096|613.64 ± 1.37|
|qwen35moe ?B Q8_0|34.36 GiB|34.66 B|Vulkan|99|1024|1|tg32 @ d4096|42.78 ± 0.11|
1
0
2026-03-02T21:11:53
Educational_Sun_8813
false
null
0
o8aqk74
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8aqk74/
false
1
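For anyone wanting to reproduce rows like the tables above, the llama-bench invocation would look roughly like the following; the model path is a placeholder.

```
# pp2048 @ d4096 and tg32 @ d4096 correspond to -p 2048 -n 32 -d 4096;
# -ub sets n_ubatch and -fa 1 enables flash attention, as in the tables.
llama-bench \
  -m ./qwen35moe-Q8_0.gguf \
  -ngl 99 -ub 1024 -fa 1 \
  -p 2048 -n 32 -d 4096
```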
t1_o8aqhh6
Thanks, I tried qwen3 8b but it kept falling into loops.
1
0
2026-03-02T21:11:30
Scary-Motor-6551
false
null
0
o8aqhh6
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8aqhh6/
false
1
t1_o8aqcna
It's not supported yet.
1
0
2026-03-02T21:10:52
noctrex
false
null
0
o8aqcna
false
/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8aqcna/
false
1
t1_o8aqa4z
Not yet.
1
0
2026-03-02T21:10:32
noctrex
false
null
0
o8aqa4z
false
/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8aqa4z/
false
1
t1_o8aq92z
The 27B is a dense model. It uses all 27B parameters for inferring every token. The 35B is a sparse (MoE) model. It only uses 3B parameters of its 35B total parameters to infer any given token (but potentially a different 3B parameters from one token to another). MoE models of a given size are cheaper to train than the same size dense model, and they infer a lot faster, but they are also less competent than a comparable-sized dense model (all other factors being equal). So, it's a matter of your priorities. If both the 27B and the 35B fit in your GPU's VRAM, if you prioritize fast "good enough" inference, then you will want to use Qwen3.5-35B-A3B, and if you prefer "very good" but slow inference, then you will want to use Qwen3.5-27B.
1
0
2026-03-02T21:10:23
ttkciar
false
null
0
o8aq92z
false
/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/o8aq92z/
false
1
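A rough rule of thumb behind the speed gap described above: per-token compute scales with active parameters (this ignores the attention/FFN split and memory bandwidth, so treat it as an approximation):

$$\frac{\text{compute per token, 27B dense}}{\text{compute per token, 35B-A3B}} \approx \frac{27\text{B}}{3\text{B}} = 9$$

That factor is why the MoE decodes much faster even though its total weights take more memory than the dense model's.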
t1_o8aq85r
I've been reading the sub about this, and apparently the ideal would be to use a technology built into the model itself and/or combined with llama.cpp tools (it wouldn't involve an additional small model), but I don't remember all the details off the top of my head. I hope someone who understands this better can give your post a satisfactory answer.
1
0
2026-03-02T21:10:16
charmander_cha
false
null
0
o8aq85r
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8aq85r/
false
1
t1_o8aq773
My args: --model Sehyo/Qwen3.5-122B-A10B-NVFP4 --quantization compressed-tensors --max-model-len 131072 --gpu-memory-utilization 0.9 --max-num-seqs 1 --attention-backend flashinfer --async-scheduling --enable-auto-tool-choice --tool-call-parser qwen3_xml --kv-cache-dtype fp8
1
0
2026-03-02T21:10:09
Laabc123
false
null
0
o8aq773
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8aq773/
false
1
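Assembled into a runnable command, the args above would look like this; the OpenAI-compatible server entry point is an assumption, while the flags are copied verbatim from the comment.

```
# Serving the NVFP4 quant with vLLM, flags as given in the comment above.
python -m vllm.entrypoints.openai.api_server \
  --model Sehyo/Qwen3.5-122B-A10B-NVFP4 \
  --quantization compressed-tensors \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.9 \
  --max-num-seqs 1 \
  --attention-backend flashinfer \
  --async-scheduling \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_xml \
  --kv-cache-dtype fp8
```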
t1_o8aq6yd
There have been already many posts about this. It's not supported yet.
1
0
2026-03-02T21:10:07
noctrex
false
null
0
o8aq6yd
false
/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8aq6yd/
false
1
t1_o8aq5pw
Elaborate
1
0
2026-03-02T21:09:57
letsgoiowa
false
null
0
o8aq5pw
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8aq5pw/
false
1
t1_o8aq3ih
I'm a paid Gemini CLI user, and for a while it was hard to keep thoughts out of the main chat window. I haven't seen it as much with Gemini 3.1, but 3.0 Pro would regularly output its thoughts and not the final answer. Very infuriating; I hope Qwen3.5 *isn't* trained on it. Outputs without thinking aren't bad, but if you need tool calls or you're doing anything agentic, it's probably required.
1
0
2026-03-02T21:09:40
sine120
false
null
0
o8aq3ih
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8aq3ih/
false
1
t1_o8aq0uk
Man, kudos for getting back to me! I very much appreciate it! Maybe this weekend I'll give it a try.
1
0
2026-03-02T21:09:18
waiting_for_zban
false
null
0
o8aq0uk
false
/r/LocalLLaMA/comments/1mucj1p/which_models_are_suitable_for_websearch/o8aq0uk/
false
1
t1_o8apwrh
[removed]
1
0
2026-03-02T21:08:44
[deleted]
true
null
0
o8apwrh
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o8apwrh/
false
1
t1_o8aptb0
Until the M5 drops with matmul hardware, your best bet is absolutely the dual 3090s.
1
0
2026-03-02T21:08:16
dinerburgeryum
false
null
0
o8aptb0
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o8aptb0/
false
1
t1_o8apsoz
There was only one amd-gpu-firmware update in the last few months in Debian testing. Besides, all the data is in the graph, and all parameters were the same for both backends: llama-bench standard procedure, with context up to 131k.
1
0
2026-03-02T21:08:11
Educational_Sun_8813
false
null
0
o8apsoz
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8apsoz/
false
1
t1_o8apr8t
Probably Seedance 2
1
0
2026-03-02T21:07:59
RichDad2
false
null
0
o8apr8t
false
/r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8apr8t/
false
1
t1_o8aphm1
Seems like there will be no Qwen3.5-14B.
1
0
2026-03-02T21:06:41
Jobus_
false
null
0
o8aphm1
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8aphm1/
false
1
t1_o8apggx
Tried it?
1
0
2026-03-02T21:06:31
YearnMar10
false
null
0
o8apggx
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8apggx/
false
1
t1_o8apf6r
[removed]
1
0
2026-03-02T21:06:20
[deleted]
true
null
0
o8apf6r
false
/r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/o8apf6r/
false
1
t1_o8apejf
this is really good!
1
0
2026-03-02T21:06:15
nikos_m
false
null
0
o8apejf
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8apejf/
false
1
t1_o8apaf4
It is in some of the models, yes, but AFAIK it's lost after quantization.
1
0
2026-03-02T21:05:43
unbannedfornothing
false
null
0
o8apaf4
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8apaf4/
false
1
t1_o8ap56s
Classic prompt injection. Cake recipe: denied. Edge inference: very much approved.
1
0
2026-03-02T21:05:00
theagentledger
false
null
0
o8ap56s
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ap56s/
false
1
t1_o8ap51f
Wow, I hadn't a clue. Thanks for the info. I'll keep my weights loaded instead of putting a log on the fireplace at night lol
1
0
2026-03-02T21:04:59
_-_David
false
null
0
o8ap51f
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8ap51f/
false
1
t1_o8aox26
> I don't like that even the smartest people in the world still haven't managed to bring all this into a sensible, understandable form.

Unfortunately, smart people tend to overestimate everyone else's capacity to understand things, because they think of their capacity to understand (and their peers') as "normal". Also, unfortunately, you cannot trust benchmarks. The big model vendors (especially Qwen and OpenAI) benchmax their models. You have to try them out to see how well they work for your specific use-case.
1
0
2026-03-02T21:03:54
ttkciar
false
null
0
o8aox26
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8aox26/
false
1
t1_o8aot9h
I’m also on a 16GB M1 and I can get up to 14b models running at around 8tps if I close all other apps. The key is to make sure you’re running MLX versions not GGUF, it makes a huge difference in terms of efficiency.
1
0
2026-03-02T21:03:24
32doors
false
null
0
o8aot9h
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aot9h/
false
1
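A minimal sketch of the MLX route the comment above recommends, using the mlx-lm CLI; the model id is a placeholder for whatever MLX conversion you pick from the mlx-community hub.

```
# Install mlx-lm and generate with an MLX-converted model on Apple Silicon.
pip install mlx-lm
mlx_lm.generate \
  --model mlx-community/Qwen3.5-9B-4bit \
  --prompt "Explain KV caching in two sentences." \
  --max-tokens 256
```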
t1_o8aoqmu
What the hell did they cook into this 27B?! Can someone please explain how that little brat is beating even the bigger model?!
1
0
2026-03-02T21:03:03
Turbulent_Pin7635
false
null
0
o8aoqmu
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8aoqmu/
false
1
t1_o8aopog
It's a 9b model! What the hell are your expectations?
1
0
2026-03-02T21:02:55
LagOps91
false
null
0
o8aopog
false
/r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/o8aopog/
false
1
t1_o8aoiud
[removed]
1
0
2026-03-02T21:02:00
[deleted]
true
null
0
o8aoiud
false
/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o8aoiud/
false
1
t1_o8aoiee
[deleted]
1
0
2026-03-02T21:01:56
[deleted]
true
null
0
o8aoiee
false
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8aoiee/
false
1
t1_o8aoggt
How poor are we talking? lol. I have an 8GB Nvidia card and I'm wondering if I can achieve anything useful with it. Curious about dipping my toes into local LLMs, but I always see a 16GB VRAM minimum recommended.
1
0
2026-03-02T21:01:40
CowCowMoo5Billion
false
null
0
o8aoggt
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8aoggt/
false
1
t1_o8aoetn
This is actually my greatest fear about AI. We ALREADY have the technology to deploy models that can run autonomously and hack targets effectively. Security right now is the most important it’s ever been.
1
0
2026-03-02T21:01:27
JustinPooDough
false
null
0
o8aoetn
false
/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8aoetn/
false
1
t1_o8ao5qq
I am getting 0.09 tk/s on my d8400 ultra phone. Why?
1
0
2026-03-02T21:00:15
i-am-the-G_O_A_T
false
null
0
o8ao5qq
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8ao5qq/
false
1
t1_o8ao3qw
Numbers are for nerds; you should try it yourself. It will give you the real-life performance. There are no numbers to quantify a usable model.
1
0
2026-03-02T21:00:00
drip_lord007
false
null
0
o8ao3qw
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8ao3qw/
false
1
t1_o8ao2rv
Also released SteptronOSS, a training framework which I assume was used for Step 3.5 Flash: https://github.com/stepfun-ai/SteptronOss Amazing AI lab.
1
0
2026-03-02T20:59:51
tarruda
false
null
0
o8ao2rv
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8ao2rv/
false
1
t1_o8ao2hc
For anyone looking for the Linux docs, there's not much yet, but a getting started guide is here: https://github.com/FastFlowLM/FastFlowLM/blob/main/docs/linux-getting-started.md What context depth were you working at? What model? I was kinda hoping we'd see support for _hybrid_ execution, given how many AMD articles claimed that the NPU could handle prompt processing faster than the iGPU. But on the other hand, a lot of those articles date back to before the 395, so that might well have been true for weaker graphics cores. Or maybe I'm failing to understand something? If the NPU _can't_ improve on the iGPU for prefill speed, then it only matters to users limited by battery or thermals, which is much less exciting.
1
0
2026-03-02T20:59:49
HopePupal
false
null
0
o8ao2hc
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ao2hc/
false
1
t1_o8anvdy
Sure! It’s a single index.html file: https://huggingface.co/spaces/webml-community/Qwen3.5-0.8B-WebGPU/blob/main/index.html
1
0
2026-03-02T20:58:51
xenovatech
false
null
0
o8anvdy
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8anvdy/
false
1
t1_o8anmbw
Is the code available to check out and learn from?
1
0
2026-03-02T20:57:39
drr21
false
null
0
o8anmbw
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8anmbw/
false
1
t1_o8anlra
It's absurd, but also rational when you realize that those who hold the keys do not actually want private models to be the mainstream usage. I don't think the current monopoly will ever willingly bring things back down for consumers. I think we are entering a new paradigm where access to state of the art technology will be artificially gated.
1
0
2026-03-02T20:57:34
LickMyTicker
false
null
0
o8anlra
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8anlra/
false
1
t1_o8anjwp
I’ve found LLaMA 2 handles smaller text tasks super efficiently on a 3070.
1
0
2026-03-02T20:57:19
lacopefd
false
null
0
o8anjwp
false
/r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/o8anjwp/
false
1
t1_o8anhe3
apparently it doesn't like "." at the end
1
0
2026-03-02T20:56:59
HatEducational9965
false
null
0
o8anhe3
false
/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o8anhe3/
false
1
t1_o8aneh4
Maybe do int8, as it is bigger and also works with the 30-series.
1
0
2026-03-02T20:56:35
ArtfulGenie69
false
null
0
o8aneh4
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8aneh4/
false
1
t1_o8anak8
You realize that a lot of the providers on Vast are independent and are not running at a loss, right? Not sure who is upvoting you, but it tells me a lot about Reddit. Good social experiment.
1
0
2026-03-02T20:56:04
ReasonableDig6414
false
null
0
o8anak8
false
/r/LocalLLaMA/comments/1na3f1s/renting_gpus_is_hilariously_cheap/o8anak8/
false
1
t1_o8an4ew
Thanks
1
0
2026-03-02T20:55:15
No_Information9314
false
null
0
o8an4ew
false
/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/o8an4ew/
false
1
t1_o8amz11
The smaller models don't have enough neurons to generalize the way larger models do. They have strengths in their own right, but larger models suffer from the same issues smaller models do, just for different reasons. It's primarily an architectural problem. The transformer model is limited due to a variety of reasons besides scale. No amount of memory or data will fix it.
1
0
2026-03-02T20:54:31
teleprint-me
false
null
0
o8amz11
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8amz11/
false
1
t1_o8amxwg
https://preview.redd.it/…1feac255ec2697
1
0
2026-03-02T20:54:22
MadPelmewka
false
null
0
o8amxwg
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8amxwg/
false
1
t1_o8amp72
Didn't you just say you were ready to take it at face value? The video was just for the gist of it. Try using it yourself; there is no need for you to be hostile. We are just a bunch of people keeping our heads down and working towards actual usefulness, not showing benchmark results of our model, which is not supposed to be benchmaxxed. For example, the model you see in the video is the bodega-centanario-21b; you can see how it performs as well. Here are some numbers for you regarding hardware perf, with detailed metrics for each context size: https://preview.redd.it/alr73mtz2pmg1.png?width=4078&format=png&auto=webp&s=ee8444495bd59aec7ebad81d26f57578933b5919
1
0
2026-03-02T20:53:13
EmbarrassedAsk2887
false
null
0
o8amp72
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8amp72/
false
1
t1_o8amcsl
LightOnOCR is the best for us.
1
0
2026-03-02T20:51:33
Interesting_lama
false
null
0
o8amcsl
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8amcsl/
false
1
t1_o8amc47
I’ve found that starting with small, curated datasets helps you understand model quirks before running the full suite.
1
0
2026-03-02T20:51:28
norofbfg
false
null
0
o8amc47
false
/r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/o8amc47/
false
1
t1_o8ambh3
it didn't work for me. it can create multiple `<think>` tags. 🤔
1
0
2026-03-02T20:51:23
jojorne
false
null
0
o8ambh3
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8ambh3/
false
1
t1_o8am7y0
How does it compare with vision-language models trained for OCR, like LightOnOCR, PaddleOCR, or dots.ocr?
1
0
2026-03-02T20:50:54
Interesting_lama
false
null
0
o8am7y0
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8am7y0/
false
1
t1_o8am2c1
Honestly, I would argue qwen3.5 27b beats it (at least for coding)
1
0
2026-03-02T20:50:10
guiopen
false
null
0
o8am2c1
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8am2c1/
false
1
t1_o8all1m
See, Lebron never did that.
1
0
2026-03-02T20:47:51
buildcool
false
null
0
o8all1m
false
/r/LocalLLaMA/comments/1rd2x61/people_are_getting_it_wrong_anthropic_doesnt_care/o8all1m/
false
1
t1_o8alhee
tbh I didn't even know about it myself at first when Qwen3 came... Now it is something more people know about, so it is normal that they ask for it :) The 27B model is quite cool and many people can load it, but for many the speed is close to non-usable. It would be amazing to get more t/s with it, either with speculative decoding or MTP (which is not yet integrated in LM Studio and others).
1
0
2026-03-02T20:47:22
mouseofcatofschrodi
false
null
0
o8alhee
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8alhee/
false
1
t1_o8alfk4
This is good advice, I appreciate it guys! The next thing I gotta do is find some spare ram (which is going to cost me my other arms and legs)
1
0
2026-03-02T20:47:06
nsmitherians
false
null
0
o8alfk4
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8alfk4/
false
1
t1_o8alfhk
Too much fucking blue I can't see fuck all mate!
1
0
2026-03-02T20:47:06
fantasticmrsmurf
false
null
0
o8alfhk
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8alfhk/
false
1
t1_o8alf64
The more time you put into it, the better the results. You can start with a simple prompt and manually analyze the output; not very accurate, but still useful.
1
0
2026-03-02T20:47:03
ortegaalfredo
false
null
0
o8alf64
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8alf64/
false
1
t1_o8alf1i
It didn't know advanced TypeScript tricks, but those are pretty unintuitive and probably require memorization.
1
0
2026-03-02T20:47:02
InGanbaru
false
null
0
o8alf1i
false
/r/LocalLLaMA/comments/1rfej6k/qwen_35_family_comparison_by_artificialanalysisai/o8alf1i/
false
1
t1_o8ales1
> How'd you get NPU support working on Linux?

That's explained in the FastFlowLM link.

> For gpt-oss-20b, you definitely shouldn't be using a Q4_0 quant. Use the native MXFP4.

I'm trying to match FastFlowLM's quant, which is Q4_1. The point of benchmarking is to match as much as possible.

> Are you sure you're using the NPU?

Yes, since the GPU and CPU are basically idle. There's only system-level CPU use while it's running.

> The PP and TG numbers being so close is suspicious.

They are, but that's what flm reports.
1
0
2026-03-02T20:47:00
fallingdowndizzyvr
false
null
0
o8ales1
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ales1/
false
1
t1_o8ald09
Thanks! So, the source I am using is indeed Instruct 0509. I am using a Nano-GPT subscription for generating answers with that, then a Qwen-based classifier/filter to exclude hallucinations when necessary. And I actually did release an intermediate checkpoint a month ago: https://www.reddit.com/r/LocalLLaMA/comments/1qua4xb/kimi_distillation_attempt/ . The feedback was interesting, including the main critical point: that I did not have a clear aim and corresponding benchmark.

I will try to release one again in a week or two with the 1.5B. I am building a two-step curriculum oriented heavily towards text editing/rewriting (and secondarily creative writing, which, for a 1.5B, is necessarily surface-level). I did find some benchmarks for that: the older EQ-bench Creative Writing test (the newer versions rely on a 10x run I can't exactly afford) and the less known, but far more comprehensive, WriteBench. On EQ-bench creative writing my 1.5B checkpoints were beating all the 2B models I could find, but I did not yet have a chance to test out Qwen 3.5 2B; and it is far harder to beat the achievements of Qwen3 4B 2507 Instruct (again, I have not given the 3.5 version a spin yet). On WriteBench I have hit a limit so far, but the curriculum thing was not yet started.

Anyway, before I get to something bigger than 1.5B, I really want to recalibrate what I am trying to do vs. what people might want from this kind of distill. Is it more creative writing and rewriting? Or is it more sounding board and pushback? Or something else?

(Granite is a good target because of its neutral original style and because it is, unlike Ministral or small Qwens, trained from scratch at every level and not a sequential distill. I tried a Ministral 14B target; it didn't go great at first swing. Another possibility is to try the smaller Qwen 3.5 models, to see if I can put on the stylistic veneer while keeping their intelligence, but that's very much a wildcard and I should probably wait some weeks to let the infrastructure for fine-tuning them stabilize.)
1
0
2026-03-02T20:46:46
ramendik
false
null
0
o8ald09
false
/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8ald09/
false
1
t1_o8al8vh
Use llama.cpp. LM Studio is just an Electron wrapper over old versions of llama.cpp. If you can only point and click, however, then LM Studio is about your only option.
1
0
2026-03-02T20:46:12
dookyspoon
false
null
0
o8al8vh
false
/r/LocalLLaMA/comments/1od7hyu/m5_macbook_pro_up_to_45_pp_improvement_25_tg/o8al8vh/
false
1