name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8be8a9
I found 4-bit OK with Qwen3. The linear layers of 3.5 really don't like being quantized, though, so I've switched to a quant that leaves them at bf16.
1
0
2026-03-02T23:11:04
DeltaSqueezer
false
null
0
o8be8a9
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8be8a9/
false
1
t1_o8be3m9
That became my QA question for models and tuning, lol. So simple, yet so effective.
1
0
2026-03-02T23:10:22
Di_Vante
false
null
0
o8be3m9
false
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8be3m9/
false
1
t1_o8bdzgq
Yes, it is. That's why I didn't do it before: I thought it would slow down with a longer prompt, which is what happens with llama.cpp. So if it is a KV cache effect, why doesn't it help with llama.cpp? Here are the numbers for the GPU with a 54K prompt: [ Prompt: 1398.2 t/s | Generation: 68.2 t/s ]. PP slows down as I expected. It's strange that with the NPU it goes up.
1
0
2026-03-02T23:09:45
fallingdowndizzyvr
false
null
0
o8bdzgq
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bdzgq/
false
1
t1_o8bdrql
[removed]
1
0
2026-03-02T23:08:34
[deleted]
true
null
0
o8bdrql
false
/r/LocalLLaMA/comments/1nbcsvk/looking_for_uncensored_unfiltered_ai_models_for/o8bdrql/
false
1
t1_o8bdq95
Sure, go write a program that does hybrid NPU+GPU and I'll test it for you.
1
0
2026-03-02T23:08:20
fallingdowndizzyvr
false
null
0
o8bdq95
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bdq95/
false
1
t1_o8bdpvi
Maybe it's something fixable via parameters. How are you running it?
1
0
2026-03-02T23:08:17
Di_Vante
false
null
0
o8bdpvi
false
/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8bdpvi/
false
1
t1_o8bdl0d
I could have sworn there was a simpler way? I recall seeing something, and even doing it once a long while back...
1
0
2026-03-02T23:07:32
rm-rf-rm
false
null
0
o8bdl0d
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bdl0d/
false
1
t1_o8bdc7c
Well, if you decide to go Strix Halo, you'd be constrained if you later decide to expand. I bought a Radeon AI Pro R9700 and an eGPU docking station for one of my Strix Halos. Unfortunately I got errors when I connected it with a Thunderbolt cable, so I'll have to purchase an OcuLink-to-NVMe adapter or a longer PCIe riser cable (mine are too short now). Maybe my eGPU dock is at fault, but going OcuLink would be really awkward, because my desktop is lying on its side and a network card sits on top of the fan... Maybe I should purchase one of those racks for my boards. After playing with the current riser cables for the network cards (25Gbit), one of my cards started giving me errors; I reseated the PCIe riser cable and the card. I may also have to adjust the external fan that's supposed to blow on the two cards. I've been running tests, and for the last 20 minutes there have been no problems with the connection. Before that I had to disable the WiFi module, because I suspect it was somehow fighting with the network cards.
1
0
2026-03-02T23:06:11
ProfessionalSpend589
false
null
0
o8bdc7c
false
/r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/o8bdc7c/
false
1
t1_o8bdbc1
Genuine question: compared to the quant you're using for the 35B model, do you think Q8_0 Qwen3.5 4B would perform better than the 35B?
1
0
2026-03-02T23:06:04
EverGreen04082003
false
null
0
o8bdbc1
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bdbc1/
false
1
t1_o8bd4xz
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-02T23:05:06
WithoutReason1729
false
null
0
o8bd4xz
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bd4xz/
true
1
t1_o8bd4ut
Nice try, but the cake is a lie.
1
0
2026-03-02T23:05:05
theagentledger
false
null
0
o8bd4ut
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bd4ut/
false
1
t1_o8bd1tu
Would love to hear what you saw in Continue specifically — the multi-MCP bug is nasty enough that it's almost certainly part of it.
1
0
2026-03-02T23:04:38
theagentledger
false
null
0
o8bd1tu
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8bd1tu/
false
1
t1_o8bd1v8
Nice benchmark writeup, this is the kind of real-world comparison that helps.
1
0
2026-03-02T23:04:38
Interesting_Lie_9231
false
null
0
o8bd1v8
false
/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/o8bd1v8/
false
1
t1_o8bcu4f
This aged well. Lol
1
0
2026-03-02T23:03:29
dillanthumous
false
null
0
o8bcu4f
false
/r/LocalLLaMA/comments/1cylhce/no_llms_are_not_plateauing_what_is_actually/o8bcu4f/
false
1
t1_o8bcpwr
Finding the right AI for idea generation can be quite the challenge. Many people notice that some of the more popular models tend to produce content that feels repetitive or uninspired, which can be frustrating. Personal experience has shown that sometimes a different approach can lead to more creative and engaging outcomes. I've been creating some interesting stories using Zongaflirt, and it has provided a fresh perspective that I appreciate. It might be worth checking out as an alternative for your writing needs. :)
1
0
2026-03-02T23:02:50
Constant_Ad6426
false
null
0
o8bcpwr
false
/r/LocalLLaMA/comments/18s587r/what_are_the_best_free_uncensored_local_ai_for/o8bcpwr/
false
1
t1_o8bcjib
It looks to me like it's a mix of 1. some kind of black magic that lets Flash 3 be much smarter with thinking disabled, it's like an Anthropic model that way
1
0
2026-03-02T23:01:52
mr_riptano
false
null
0
o8bcjib
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bcjib/
false
1
t1_o8bci42
The fact that a 27B dense model is keeping up with R1 0528 is genuinely wild. Like a year ago we were celebrating when 70B models could do basic reasoning, and now a model that fits on a single consumer GPU is doing stuff that needed cluster-level compute. The finetune potential is the real story tho: Qwen base models have always been absurdly good starting points, and if someone drops a solid coding finetune of this, it's gonna eat.
1
0
2026-03-02T23:01:40
Pitiful-Impression70
false
null
0
o8bci42
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bci42/
false
1
t1_o8bcgpe
Yes, but I was questioning their assertion of how slow it was. I have the same hardware.
1
0
2026-03-02T23:01:26
coder543
false
null
0
o8bcgpe
false
/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o8bcgpe/
false
1
t1_o8bcgh8
lol this is what happens when you train on too much synthetic data from other models. the model absorbed so much gemini output it literally thinks it IS gemini now. identity crisis speedrun any%
1
0
2026-03-02T23:01:24
Pitiful-Impression70
false
null
0
o8bcgh8
false
/r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8bcgh8/
false
1
t1_o8bcef8
The YouTube channel "AI Search" is great for comprehensive overviews and latest AI news
1
0
2026-03-02T23:01:06
Curious_Priority8156
false
null
0
o8bcef8
false
/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8bcef8/
false
1
t1_o8bcadd
I tried making memes with AI before, but couldn't really get good results. I wanted to use the actual meme template though (basically like https://imgflip.com/memegenerator but with AI filling in the content), but the AI just came up with stupid stuff. Do you have any experience with AI-generated memes? I could really use this for my project. Thanks!
1
0
2026-03-02T23:00:30
AutobahnRaser
false
null
0
o8bcadd
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bcadd/
false
1
t1_o8bc8rj
Not that I'm aware of. This subreddit is a comprehensive reference on cutting edge local LLM technology, but it's scattered across thousands of posts and not an "overview" at all. My recommendation is to ask an LLM inference service to summarize r/LocalLLaMA content about best local models, ComfyUI, and codegen, and use what it gives you as a source of good terms to search for in-subreddit for more in-depth information. We really should have something like that, even if it's initially generated. We can edit it by hand after an LLM generates the "rough draft".
1
0
2026-03-02T23:00:15
ttkciar
false
null
0
o8bc8rj
false
/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8bc8rj/
false
1
t1_o8bc4cu
That's bizarre. Maybe we're seeing the KV cache in effect here, given that your test prompt is extremely repetitive.
1
0
2026-03-02T22:59:36
HopePupal
false
null
0
o8bc4cu
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bc4cu/
false
1
t1_o8bc2q7
Makes a ton of sense. Would take years to recoup that money vs API tokens probably
1
0
2026-03-02T22:59:21
cyahahn
false
null
0
o8bc2q7
false
/r/LocalLLaMA/comments/1p2lqi7/are_any_of_the_m_series_mac_macbooks_and_mac/o8bc2q7/
false
1
t1_o8bbv4v
It's the first time I've seen an Orin used in the wild for running LLMs. Running a Q8 of a 120B at 10 tps is quite impressive. I thought it was more of a tinkering or validation platform than an actually usable product.
1
0
2026-03-02T22:58:14
Medium_Chemist_4032
false
null
0
o8bbv4v
false
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8bbv4v/
false
1
t1_o8bbsam
Yes
1
0
2026-03-02T22:57:49
panchovix
false
null
0
o8bbsam
false
/r/LocalLLaMA/comments/1rbyg5x/if_you_have_a_rtx_5090_that_has_a_single/o8bbsam/
false
1
t1_o8bbos3
Alas, for GPU-poor people the best way to get good coding performance is indeed to rent the capacity. Use small local models for summarization-type tasks.
1
0
2026-03-02T22:57:18
-Akos-
false
null
0
o8bbos3
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8bbos3/
false
1
t1_o8bbo65
12 tok/s on a Snapdragon 855 is solid. Q4_0 or Q8? The NEON SIMD path in llama.cpp makes old ARM chips punch way above their weight.
1
0
2026-03-02T22:57:13
sean_hash
false
null
0
o8bbo65
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bbo65/
false
1
t1_o8bbn13
Speculative decoding would make a big difference here. With a small Qwen variant as a draft model, 27B could feel a lot lighter.
1
0
2026-03-02T22:57:03
pmv143
false
null
0
o8bbn13
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8bbn13/
false
1
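For readers who want to try this, here is a minimal llama.cpp sketch of the draft-model setup described in the comment above; the filenames are placeholders and the flag values are untuned assumptions, not a confirmed recipe:

```sh
# Sketch: Qwen3.5 27B as the target model, a small Qwen quant as the draft.
# -md/--model-draft and --draft-max/--draft-min are llama.cpp's
# speculative-decoding options; draft and target must share a tokenizer.
# Both filenames are placeholders.
llama-server \
  -m  Qwen3.5-27B-Instruct-Q4_K_M.gguf \
  -md Qwen3.5-2B-Instruct-Q8_0.gguf \
  --draft-max 16 --draft-min 1 \
  -ngl 99 -c 16384
```

The speedup depends on the draft model's acceptance rate, so a smaller or larger draft may work better for a given workload.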
t1_o8bbmde
I think OP is looking for brains benchmarks, not speed. Like how does it actually perform on tasks compared to thinking on. Presumably all the Qwen published benchmarks are with reasoning on.
1
0
2026-03-02T22:56:57
thejoyofcraig
false
null
0
o8bbmde
false
/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o8bbmde/
false
1
t1_o8bbel9
[removed]
1
0
2026-03-02T22:55:48
[deleted]
true
null
0
o8bbel9
false
/r/LocalLLaMA/comments/14jru57/metas_new_ai_lets_people_make_chatbots_theyre/o8bbel9/
false
1
t1_o8bbebp
[removed]
1
0
2026-03-02T22:55:46
[deleted]
true
null
0
o8bbebp
false
/r/LocalLLaMA/comments/1hao9z4/best_tools_for_running_a_local_llm_as_a_nsfw/o8bbebp/
false
1
t1_o8bbc9j
It's sounding like it has caught up with Claude Sonnet 4.5. Very nice. Now the question is Qwen 122B Q6 or Step Flash 197B Q4; they're both 10B-active models.
1
0
2026-03-02T22:55:28
ArtfulGenie69
false
null
0
o8bbc9j
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8bbc9j/
false
1
t1_o8bb1lu
Thanks!
1
0
2026-03-02T22:53:53
Kayo4life
false
null
0
o8bb1lu
false
/r/LocalLLaMA/comments/1m6tbhm/what_does_the_k_s_m_l_mean_behind_the/o8bb1lu/
false
1
t1_o8bb0s5
Show us the contrary.
1
0
2026-03-02T22:53:46
l33t-Mt
false
null
0
o8bb0s5
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8bb0s5/
false
1
t1_o8bay1r
NPU 6-7% more efficient than GPU in tokens/watt. --> No use case here.
1
0
2026-03-02T22:53:21
crantob
false
null
0
o8bay1r
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bay1r/
false
1
t1_o8bax11
I tried the same setup on my own 2× RTX 3090 machine, with each card capped at 280W and no NVLink, and this is genuinely impressive. I’m seeing about **692 tok/s** total throughput on an **8-request run**, around **77 tok/s** output throughput, and roughly **1,112 tok/s** on a prefill-heavy test, very nice result indeed! Here's another run: https://preview.redd.it/w3psjcslopmg1.png?width=647&format=png&auto=webp&s=88abe4c82b868ccd52d4bea9381dfa0b010f3a0c
1
0
2026-03-02T22:53:12
jslominski
false
null
0
o8bax11
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8bax11/
false
1
t1_o8baotn
I’ve been seeing strong results with 3.5 27B too. If you end up fine tuning it and want somewhere to deploy, happy to spin it up and host it for you.
1
0
2026-03-02T22:52:00
pmv143
false
null
0
o8baotn
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8baotn/
false
1
t1_o8banru
Nice, I'll look forward to it
1
0
2026-03-02T22:51:51
huffalump1
false
null
0
o8banru
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o8banru/
false
1
t1_o8baml4
Show perf for NPU+GPU then. Can't assume they add up.
1
0
2026-03-02T22:51:40
crantob
false
null
0
o8baml4
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8baml4/
false
1
t1_o8bal33
"As I wrote in December, [speed is the final boss](https://blog.brokk.ai/the-best-open-weights-coding-models-of-2025/) for open weights models. Qwen 3.5 27b is roughly 10x slower than Flash 3 at solving our tasks, and that’s against Alibaba’s API," Sooooo what did Alibaba do? Or what did Google do for that?
1
0
2026-03-02T22:51:27
Snoo_64233
false
null
0
o8bal33
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bal33/
false
1
t1_o8bakqp
You can try 35B A3B Q4 on your GPU+CPU, or 9B if you can fit it in VRAM.
1
0
2026-03-02T22:51:24
KaosNutz
false
null
0
o8bakqp
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bakqp/
false
1
t1_o8bahmc
Literally almost any of them, except a few that aren't even the best ones. Depending on what he's looking for, there are pretty good alternatives like Klein or Z Image.
1
0
2026-03-02T22:50:56
brocolongo
false
null
0
o8bahmc
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8bahmc/
false
1
t1_o8baghp
LM Studio's estimator is likely not correct for 3.5.
1
0
2026-03-02T22:50:46
TheRealMasonMac
false
null
0
o8baghp
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8baghp/
false
1
t1_o8ba8gd
Flux Klein 9B runs pretty well for me on 12GB of VRAM: 4 seconds to generate an image and 9 seconds for editing. ComfyUI is much better with memory management these days, so much so that as long as the model fits into your combined VRAM/RAM, it's OK. It will be slower, but OK.
1
0
2026-03-02T22:49:35
c64z86
false
null
0
o8ba8gd
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8ba8gd/
false
1
t1_o8ba0v7
On the default BIOS, is the minimum real power 400W?
1
0
2026-03-02T22:48:30
koloved
false
null
0
o8ba0v7
false
/r/LocalLLaMA/comments/1rbyg5x/if_you_have_a_rtx_5090_that_has_a_single/o8ba0v7/
false
1
t1_o8b9ufi
XDNA2 drivers are public and have been in the Linux kernel since February 2025. According to the Lemonade developer two months ago, there are two teams working on XDNA2 on Linux, but it's at the bottom of their lists: FastFlowLM and AMD. vLLM, llama.cpp, etc. still haven't bothered to add NPU support after 13 months.
1
0
2026-03-02T22:47:34
ImportancePitiful795
false
null
0
o8b9ufi
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b9ufi/
false
1
t1_o8b9lbm
Continues to do this for me on recommended settings and high-bit quants, fwiw. It's very unpredictable; on similar levels of questions I get anywhere from 30 seconds to 11 minutes of thinking.
1
0
2026-03-02T22:46:15
segfawlt
false
null
0
o8b9lbm
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b9lbm/
false
1
t1_o8b8urq
I am more just testing the capacity of the network! Go on though, what do you mean?
1
0
2026-03-02T22:42:25
braydon125
false
null
0
o8b8urq
false
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8b8urq/
false
1
t1_o8b8px0
Qwen3-2B Q4_0
1
0
2026-03-02T22:41:44
jslominski
false
null
0
o8b8px0
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8b8px0/
false
1
t1_o8b8poy
The problem with Claude is usage. And Anthropic will not become more generous, we're just in the honeymoon phase of the enshittification cycle. It's good to get used to using local LLM for pedestrian tasks, and save the $$$ tokens for heavy lifting. Claude CLI is fantastic for this, you can just use the same interface for both.
1
0
2026-03-02T22:41:42
Easy-Unit2087
false
null
0
o8b8poy
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b8poy/
false
1
t1_o8b8nrf
Does the new Qwen3.5 still have an unmodified license?
1
0
2026-03-02T22:41:26
comodore6564
false
null
0
o8b8nrf
false
/r/LocalLLaMA/comments/1mij7fh/list_of_openweight_models_with_unmodified/o8b8nrf/
false
1
t1_o8b8e5p
Nice, will check it out. Still waiting for a model that embraces personas with the reckless, psychotic abandon of llama 3.3 70b
1
0
2026-03-02T22:40:02
nomorebuttsplz
false
null
0
o8b8e5p
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8b8e5p/
false
1
t1_o8b8dli
I tried with opencode. During the test it kept using tools wrong, failed to edit stuff correctly, and always said "now I understand, I need to ..." and then continued to fail. I think it might also be because I have everything at the default ollama settings and didn't do any model-specific settings, prompts, etc. I think it can work, and since it's fully on GPU for me it's really fast, so even if it fails I can just retry quickly. It for sure has its place.
1
0
2026-03-02T22:39:57
Rofdo
false
null
0
o8b8dli
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8b8dli/
false
1
t1_o8b8bzi
How does a smaller model being compatible with a larger one help? Can they be run together somehow?
1
0
2026-03-02T22:39:43
MetaCognitio
false
null
0
o8b8bzi
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8b8bzi/
false
1
t1_o8b83fb
You can also pay for OpenRouter with crypto, for example via Coinbase. Unfortunately that doesn't get you around KYC, but at least your credit card details aren't on file there.
1
0
2026-03-02T22:38:30
Superb-Ladder5467
false
null
0
o8b83fb
false
/r/LocalLLaMA/comments/1rdbe7e/open_router_as_free_api_for_openclaw/o8b83fb/
false
1
t1_o8b81ux
You can also play around with different quantizations to save on VRAM. But with your system you should try Qwen3.5 35B A3B, which came out recently. It fits on my 5070 Ti with 32GB of system RAM, so it will fit yours; just make sure you offload to CPU. This model has been the best one I've used locally yet, but it's ~30% slower (9B is about 30% faster for me).
1
0
2026-03-02T22:38:16
Guilty_Rooster_6708
false
null
0
o8b81ux
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b81ux/
false
1
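A hedged sketch of the CPU offload this comment describes, assuming llama.cpp is the runtime; the filename and layer count are placeholders to tune against a 16GB card:

```sh
# Sketch: load all layers on the GPU (-ngl 99) but keep the MoE expert
# weights of the first N layers in system RAM via --n-cpu-moe.
# Raise or lower 24 until the model fits your VRAM; filename is a placeholder.
llama-server \
  -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -ngl 99 --n-cpu-moe 24 -c 16384
```

Because only ~3B parameters are active per token, this kind of split usually keeps generation speed acceptable even with most expert weights in system RAM.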
t1_o8b7hxo
According to the openclaw inventor, these smaller models are not recommended due to security concerns; apparently they can be tricked more easily. I don't have personal experience with it though.
1
0
2026-03-02T22:35:25
MyBrainsShit
false
null
0
o8b7hxo
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8b7hxo/
false
1
t1_o8b7h8b
Depends what you're doing but a few tweaks and plugins can make all the difference. Playwright and Impeccable Style are the two must-haves for me.
1
0
2026-03-02T22:35:19
Aromatic-Low-4578
false
null
0
o8b7h8b
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8b7h8b/
false
1
t1_o8b7ejn
I mostly use Olmo 3.7B Instruct and Thinking. They're smaller models, which lets me crank up the context a lot without quantization. But I'm still looking. I mostly use local models when I'm doing n8n work, and I haven't done that in a little while. So I'm sure there's a newer DeepSeek or Qwen that could beat the recommended StarCoder (it was the top pick Llmfit recommended to me, and it sucks).
1
0
2026-03-02T22:34:56
vagabondluc
false
null
0
o8b7ejn
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o8b7ejn/
false
1
t1_o8b7dnd
These models also take instructions so much better than previous models. A good system prompt can inject a lot of personality too.
1
0
2026-03-02T22:34:48
National_Meeting_749
false
null
0
o8b7dnd
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8b7dnd/
false
1
t1_o8b77vw
[removed]
1
0
2026-03-02T22:33:57
[deleted]
true
null
0
o8b77vw
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8b77vw/
false
1
t1_o8b73mr
I'd also like to know. Last I checked Cline is still pretty high in the ranks on openrouter.
1
0
2026-03-02T22:33:20
Aromatic-Low-4578
false
null
0
o8b73mr
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8b73mr/
false
1
t1_o8b72wd
Mine keeps telling me 2+2 = 3 and won't think otherwise.
1
0
2026-03-02T22:33:13
Volts2430
false
null
0
o8b72wd
false
/r/LocalLLaMA/comments/1hff0wj/llama_32_1b_surprisingly_good/o8b72wd/
false
1
t1_o8b71tl
The Tesla P40 is pretty ancient. I haven't used those specifically, but I did use a P102-100 mining card for a few days before it blew some VRM components. Being server cards, used P40s might have been treated better, but I personally wouldn't risk it. If you want cheap and are okay with fighting the software stack a bit, I recommend the Radeon Pro V340: they're $50 each and have two 8GB Vega 56-class GPUs on them. I currently have Qwen3.5 35B-A3B running on three quarters of two cards and am getting around 250 t/s PP and 22 t/s TG.
1
0
2026-03-02T22:33:04
tvall_
false
null
0
o8b71tl
false
/r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/o8b71tl/
false
1
t1_o8b706y
4o was good?
1
0
2026-03-02T22:32:50
michaelsoft__binbows
false
null
0
o8b706y
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8b706y/
false
1
t1_o8b6y97
is it good for agentic uses?
1
0
2026-03-02T22:32:33
RipperJoe
false
null
0
o8b6y97
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b6y97/
false
1
t1_o8b6s1e
They haven't released base models for their big ones since Qwen3. Notice that K2.5 and GLM-5 also didn't release their base models.
1
0
2026-03-02T22:31:39
TheRealMasonMac
false
null
0
o8b6s1e
false
/r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/o8b6s1e/
false
1
t1_o8b6qjp
Now that I think about it, it's weird we don't have 4gb memory chips, which shouldn't have been a big technological leap from 3gb chips. Why would anyone need them, though, except us, poor folks
1
0
2026-03-02T22:31:26
Long_comment_san
false
null
0
o8b6qjp
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8b6qjp/
false
1
t1_o8b6on6
Car wash questions? You don't need 122B, brother.
1
0
2026-03-02T22:31:10
qwen_next_gguf_when
false
null
0
o8b6on6
false
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8b6on6/
false
1
t1_o8b6i37
Why does it have to be either or? Why can't it be both at the same time? As I said, the NPU would be great to run a small model for spec decoding while the larger model runs on the GPU.
1
0
2026-03-02T22:30:13
fallingdowndizzyvr
false
null
0
o8b6i37
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b6i37/
false
1
t1_o8b6a3c
My fix was to uninstall LM Studio and just go straight to llama.cpp. The correct arg to avoid caching is --cache-ram 0. I think LM Studio's caching is just broken overall though; I managed to run the GPU caching just to discover an 80GB pagefile with no cache purge whatsoever in sight XD
1
0
2026-03-02T22:29:05
After-Operation2436
false
null
0
o8b6a3c
false
/r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/o8b6a3c/
false
1
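For anyone replicating this, a minimal serve command using the flag quoted above; the --cache-ram 0 setting comes from the comment itself, and the model path is a placeholder:

```sh
# Sketch: serve directly with llama.cpp and disable the RAM prompt cache.
llama-server -m ./model.gguf --cache-ram 0 -ngl 99 -c 8192
```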
t1_o8b68yl
The ADHD analogy in this thread is actually pretty accurate. It's not about whether the model is *smart enough* for any individual step — it usually is. The problem is coherence across a multi-step workflow. Agentic coding needs the model to hold a plan, execute step 1, evaluate the result, adjust the plan, execute step 2, and so on — without losing the thread. Smaller models tend to drift or forget constraints they set for themselves two steps ago. You get correct individual outputs that don't compose into a coherent whole. That said, there's a middle ground people are exploring: use a smaller model for the fast iteration steps (quick edits, test runs, simple refactors) and a bigger model for the planning and evaluation checkpoints. You get speed where it matters and coherence where it matters.
1
0
2026-03-02T22:28:55
Shingikai
false
null
0
o8b68yl
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8b68yl/
false
1
t1_o8b66zf
I find that with local models on my laptop I benefit more from auto-complete than from full copiloting. Previously, Qwen 14B coder was a go-to. Now I'm experimenting with the idea of building tab-completion models instead, using super-small LLMs. It's now a long-term project that I'm building to mirror the Composer model Cursor has.
1
0
2026-03-02T22:28:37
yes-im-hiring-2025
false
null
0
o8b66zf
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8b66zf/
false
1
t1_o8b63tu
> they were seeing 450+ PP. I updated the OP; with a long enough prompt it does hit 450.
1
0
2026-03-02T22:28:10
fallingdowndizzyvr
false
null
0
o8b63tu
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b63tu/
false
1
t1_o8b5yc7
Realistically I can do most of my neural network training on a much smaller GPU. I think the break-even for running Qwen3.5 locally vs. the API might be as long as 20 years. I'll ponder it for a few days, as the qmax models got sold out at the local store today. I don't think prices will decrease on these cards anytime soon, and it would be fun for research. Thanks for your input.
1
0
2026-03-02T22:27:24
Annual_Award1260
false
null
0
o8b5yc7
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b5yc7/
false
1
t1_o8b5yfg
Just in case anyone else following this post is also using LM Studio: this post's guidance made even the 3.5 4B work for my needs on the first try!! I'm super excited to do real testing now. Hope it helps -> https://www.reddit.com/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/
1
0
2026-03-02T22:27:24
FigZestyclose7787
false
null
0
o8b5yfg
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8b5yfg/
false
1
t1_o8b5xrj
Yes, the question is whether someone knows what models they use, or which cheaper ones have decent quality. I tried MiniCPM and the output isn't at gemini-2.5-flash level (I'm not looking for the latest-model quality).
1
0
2026-03-02T22:27:19
jrhabana
false
null
0
o8b5xrj
false
/r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/o8b5xrj/
false
1
t1_o8b5x4a
Termux on the S10E: pkg install cmake clang git, clone llama.cpp, and build with cmake. The main gotcha is that Termux is missing spawn.h; you need to create a stub header with no-op implementations or the build fails. After that it compiles fine. Running Qwen 3.5 0.8B Q4_K_M at ~11 tokens/sec. Key flags: -t 2 -c 1024 -b 256 --no-mmap. Happy to share more details if you want.
1
0
2026-03-02T22:27:13
HighFlyingB1rd
false
null
0
o8b5x4a
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8b5x4a/
false
1
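The steps from this comment, written out as a hedged sketch; the spawn.h stub is left as a comment because its exact contents weren't given, and the model filename is a placeholder:

```sh
# Inside Termux on the phone: toolchain, clone, build.
pkg install -y cmake clang git
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# Gotcha from the comment: Termux lacks spawn.h, so create a stub header
# with no-op implementations first, or the build fails.
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
# Run a small Qwen quant with the flags quoted above
# (model filename is a placeholder).
./build/bin/llama-cli -m qwen3.5-0.8b-q4_k_m.gguf \
  -t 2 -c 1024 -b 256 --no-mmap -p "Hello"
```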
t1_o8b5r06
Is the target a brown stuffed animal looking building? Then, based on this solid proof, I dare to say yes
1
0
2026-03-02T22:26:20
SmartCustard9944
false
null
0
o8b5r06
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8b5r06/
false
1
t1_o8b5pni
I can't tell you how GRATEFUL I am to you for sharing this post!! I had high hopes for Qwen 3.5 4B and 9B but simply couldn't get them to work (Windows + LM Studio) for anything useful. I got frustrated with the models after so much hype, until I tried your simple suggestion of disabling thinking, and it worked with understanding + using my skills on the first try. Mind you, I'm using LM Studio to host the models, not LM Studio chat + MCP directly. From what I understood of your writing, this bug still affects inference even, or especially, in this scenario, right? (Just serving LLM models through LM Studio.) In any case, my tests are FINALLY working, and I have high hopes for these new Qwen models again. THANK YOU!!! very much.
1
0
2026-03-02T22:26:08
FigZestyclose7787
false
null
0
o8b5pni
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8b5pni/
false
1
t1_o8b5o9y
> So using your best two numbers, with 1000 input tokens and 100 output, it appears the GPU demolishes the NPU. Check my OP again; I updated it with another number. The larger the prompt, the faster it PPs. It's at 413 tk/s with a 27K prompt; at 54K it's 450 tk/s. So it seems to top out there.
1
0
2026-03-02T22:25:56
fallingdowndizzyvr
false
null
0
o8b5o9y
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b5o9y/
false
1
t1_o8b5m3n
Here's a recording of the workflow I ended up building, if anyone's curious what it looks like in practice: https://www.youtube.com/watch?v=c1L_rC6SrPo
1
0
2026-03-02T22:25:37
Critical_Letter_7799
false
null
0
o8b5m3n
false
/r/LocalLLaMA/comments/1rj7bvo/the_biggest_pain_in_local_finetuning_isnt/o8b5m3n/
false
1
t1_o8b5dyj
At what quantisation? And how does a quantised 25B compare to 35B-A3B?
1
0
2026-03-02T22:24:28
AuspiciousApple
false
null
0
o8b5dyj
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b5dyj/
false
1
t1_o8b59gn
If budget isn’t an issue, get a TinyBox
1
0
2026-03-02T22:23:49
Weird-Consequence366
false
null
0
o8b59gn
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b59gn/
false
1
t1_o8b5971
How would 35BA3B compare with 9B on 16GB VRAM? I guess 9B would be faster, but have less knowledge?
1
0
2026-03-02T22:23:47
AuspiciousApple
false
null
0
o8b5971
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b5971/
false
1
t1_o8b58vq
> what docs were you working off of? The link you posted. As I said, it's lacking a few things. The rest I figured out myself.
1
0
2026-03-02T22:23:45
fallingdowndizzyvr
false
null
0
o8b58vq
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b58vq/
false
1
t1_o8b52nn
You can just set the jinja template to default to non-thinking. Unsloth's quants have that baked in already, so just use those if my words are meaningless.
1
0
2026-03-02T22:22:52
thejoyofcraig
false
null
0
o8b52nn
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b52nn/
false
1
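One way to get the non-thinking default without hand-editing the template, sketched under the assumption that you're serving with llama.cpp and that the model's chat template honors an enable_thinking kwarg, as Qwen-style templates generally do:

```sh
# Sketch: pass template kwargs at serve time instead of editing the jinja.
# --chat-template-kwargs forwards a JSON object into the chat template;
# the model path is a placeholder.
llama-server -m ./Qwen3.5-27B-Q4_K_M.gguf --jinja \
  --chat-template-kwargs '{"enable_thinking": false}'
```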
t1_o8b507p
Ay, thanks guys, I have come a long way since then. This sub is incredibly niche. Love it.
1
0
2026-03-02T22:22:31
Joscar_5422
false
null
0
o8b507p
false
/r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o8b507p/
false
1
t1_o8b4ucd
Since we are on the topic, what framework do people use/recommend for OCR model purposes?
1
0
2026-03-02T22:21:41
wrecklord0
false
null
0
o8b4ucd
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8b4ucd/
false
1
t1_o8b4d3h
how can I restart the server on boot without physical access required to log in before the service starts?
1
0
2026-03-02T22:19:15
luche
false
null
0
o8b4d3h
false
/r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8b4d3h/
false
1
t1_o8b4awy
Claude CLI with local LLM is a completely different use case from typical benchmarks people post on social media, which haven't caught up with agentic coding. We're talking large context, parallel requests. DGX Spark (i.e. GB10 -- Asus GX10 1TB can still be had for $3k) with vllm running qwen3-coder-next at FP8 handles Claude much faster than my Mac Studio. I might sell my Mac for a second GX10 node, while prices for used 64GB+ Mac Studios are crazy.
1
0
2026-03-02T22:18:57
Easy-Unit2087
false
null
0
o8b4awy
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b4awy/
false
1
t1_o8b49lz
Lol. Point made and taken
1
0
2026-03-02T22:18:46
Creepy-Bell-4527
false
null
0
o8b49lz
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8b49lz/
false
1
t1_o8b45ww
Even with those GPUs, you aren't getting anything like Opus locally. It would be a sick setup though, 1TB of RAM... send some RAM my way lol
1
0
2026-03-02T22:18:14
Hefty_Development813
false
null
0
o8b45ww
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b45ww/
false
1
t1_o8b3zy6
I'm getting ridiculous refusals constantly with Qwen 3.5 35B A3B, and it failed one of my main test questions, which I haven't seen a 35B fail before unless its guardrails were overbearing enough to hurt output quality. Maybe I should try the 27B.
1
0
2026-03-02T22:17:22
Elite_Crew
false
null
0
o8b3zy6
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8b3zy6/
false
1
t1_o8b3x2x
Budget isn't a problem. I don't like hardware holding me back, so I generally just buy the best. The store just called me asking if they could sell the 2 qmax models I had on hold, and since I'm not 100% sure, I let them go. Having a store a 5-minute walk away definitely gets me sometimes.
1
0
2026-03-02T22:16:58
Annual_Award1260
false
null
0
o8b3x2x
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8b3x2x/
false
1
t1_o8b3wte
Which model did you use to tell what was in the photo?
1
0
2026-03-02T22:16:56
j0j0n4th4n
false
null
0
o8b3wte
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8b3wte/
false
1
t1_o8b3wbp
what docs were you working off of?
1
0
2026-03-02T22:16:52
HopePupal
false
null
0
o8b3wbp
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b3wbp/
false
1
t1_o8b3us3
Alright mate... Will definitely try these settings, thanks a bunch
1
0
2026-03-02T22:16:39
Zealousideal-Check77
false
null
0
o8b3us3
false
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8b3us3/
false
1
t1_o8b3qr0
lol ... I fixed it
1
0
2026-03-02T22:16:05
Black-Mack
false
null
0
o8b3qr0
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8b3qr0/
false
1