name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8buvte | I mean it’s ready to go. Just have to run it and give us the numbers. | 1 | 0 | 2026-03-03T00:45:55 | StardockEngineer | false | null | 0 | o8buvte | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8buvte/ | false | 1 |
t1_o8buqln | Thank you, I’m quite happy with it. Luckily built it last summer | 1 | 0 | 2026-03-03T00:45:06 | QuestionAsker2030 | false | null | 0 | o8buqln | false | /r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/o8buqln/ | false | 1 |
t1_o8bul1u | Do you have a system prompt? If a model does not have a goal, what do you expect the output to be in response to "hi"?
If you narrow the scope of the response with a system prompt, you can reduce thinking and consideration of alternate responses. | 1 | 0 | 2026-03-03T00:44:15 | EndlessZone123 | false | null | 0 | o8bul1u | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bul1u/ | false | 1 |
t1_o8bukf4 | What exactly is spec decoding? I thought it was just calculating the next tokens with a smaller model and processing those in parallel with the first token | 1 | 0 | 2026-03-03T00:44:09 | EbbNorth7735 | false | null | 0 | o8bukf4 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o8bukf4/ | false | 1 |
t1_o8bu62i | What setup do you use for Polaris? I’ve got an RX 580 on my iMac that crashes every vllm I try—both Linux and macOS with llama.cpp vulkan/moltenvk :/ | 1 | 0 | 2026-03-03T00:41:55 | jacobcantspeak | false | null | 0 | o8bu62i | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8bu62i/ | false | 1 |
t1_o8bu4xb | have you tried messages that you made yourself? Otherwise they might just be training the model on known riddles/LLM gotchas? | 1 | 0 | 2026-03-03T00:41:44 | -dysangel- | false | null | 0 | o8bu4xb | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8bu4xb/ | false | 1 |
t1_o8bu3y0 | It’s how they make money? Cheaper doesn’t always mean you’re going to make your money back from training. | 1 | 0 | 2026-03-03T00:41:35 | Savantskie1 | false | null | 0 | o8bu3y0 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bu3y0/ | false | 1 |
t1_o8bu3kw | llama.cpp spec decoding doesn't work with multimodal yet, you have to disable vision, so it doesn't seem worth it | 1 | 0 | 2026-03-03T00:41:32 | wenerme | false | null | 0 | o8bu3kw | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bu3kw/ | false | 1 |
t1_o8bu31k | Seems profitable to me, maybe I’ll try | 1 | 0 | 2026-03-03T00:41:27 | Ok-Internal9317 | false | null | 0 | o8bu31k | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bu31k/ | false | 1 |
t1_o8btuj3 | They can price it all they want but I ain’t paying this price for this model | 1 | 0 | 2026-03-03T00:40:08 | Ok-Internal9317 | false | null | 0 | o8btuj3 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8btuj3/ | false | 1 |
t1_o8btqwp | So I tried the GPTQ version for vLLM support, and I had to do a bunch of hacks to make it fit. In the end, my performance dropped to 30 tps, so I reverted. I guess I'll just have to wait until ROCm gets better support. 40 tps isn't the end of the world for now. | 1 | 0 | 2026-03-03T00:39:35 | JackTheif52 | false | null | 0 | o8btqwp | false | /r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/o8btqwp/ | false | 1 |
t1_o8btpaz | QwQ-32B at Q4 on a 4090 will KV-cache-starve past 16k context — Qwen2.5-Coder-32B-Q4 is the smarter pick for coding workloads. | 1 | 0 | 2026-03-03T00:39:20 | tom_mathews | false | null | 0 | o8btpaz | false | /r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/o8btpaz/ | false | 1 |
t1_o8bto2q | Ah, no i didn't | 1 | 0 | 2026-03-03T00:39:08 | Velocita84 | false | null | 0 | o8bto2q | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bto2q/ | false | 1 |
t1_o8btmhc | Totally agree. The truth is, once you actually use them in practice you realize how unusable they are. Most people never get past the benchmarks. | 1 | 0 | 2026-03-03T00:38:54 | CapitalShake3085 | false | null | 0 | o8btmhc | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8btmhc/ | false | 1 |
t1_o8btlye | just DM'd. Thanks | 1 | 0 | 2026-03-03T00:38:49 | FigZestyclose7787 | false | null | 0 | o8btlye | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8btlye/ | false | 1 |
t1_o8btegu | https://www.anthropic.com/research/petri-open-source-auditing
> Broad-coverage pilot-alignment evaluations
> Petri is a tool designed to support others in building evaluations, both for one-off exploration and more systematic benchmarking. As a pilot demonstration of its capabilities, we tested Petri across 14 frontier models using 111 diverse seed instructions covering behaviors such as:
> *Deception*: Models providing false information to achieve their objectives or avoid detection
> *Sycophancy*: Models prioritizing user agreement over accuracy or providing excessive praise and validation
> *Encouragement of User Delusion*: Models encouraging a serious user delusion
> *Cooperation with harmful requests*: Models complying with requests that could cause harm rather than appropriately refusing
> *Self-preservation*: Models attempting to avoid being shut down, modified, or having their goals changed
> *Power-seeking*: Models attempting to gain additional capabilities, resources, or influence over their environment
> *Reward hacking*: Models acting in ways that achieve tasks in letter but not in spirit
Hmm... Concerning. | 1 | 0 | 2026-03-03T00:37:39 | Anduin1357 | false | null | 0 | o8btegu | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8btegu/ | false | 1 |
t1_o8btcw0 | Can't you run the NPU inference in one terminal and llamacpp with vulkan or rocm in another terminal? I'm also interested in how much the GPU slows down when the power has to be diverted to NPU. If it's not bad, it leaves the possibility to run two models at once, and still leaving the CPU alone to do other tasks. | 1 | 0 | 2026-03-03T00:37:24 | o0genesis0o | false | null | 0 | o8btcw0 | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8btcw0/ | false | 1 |
t1_o8btc0w | This might be helpful to try, and for you as well:
https://www.reddit.com/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/ | 1 | 0 | 2026-03-03T00:37:16 | stoystore | false | null | 0 | o8btc0w | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8btc0w/ | false | 1 |
t1_o8bt89j | [removed] | 1 | 0 | 2026-03-03T00:36:42 | [deleted] | true | null | 0 | o8bt89j | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bt89j/ | false | 1 |
t1_o8bt6d6 | Thanks!
I understand totally. Frustratingly, I can't even remotely imagine a solution to this that doesn't also involve a heavy-handed action that's controversial at best :/
Hmm ...
Thanks for the response!! | 1 | 0 | 2026-03-03T00:36:23 | CSEliot | false | null | 0 | o8bt6d6 | false | /r/LocalLLaMA/comments/1rhdzrc/local_llm_agents_blocked_everywhere/o8bt6d6/ | false | 1 |
t1_o8bt668 | llama-server.exe -m D:\\ggufModels\\Qwen3.5-35B-A3B-UD-Q4\_K\_XL.gguf --alias "Qwen3.5-35B-A3B" -t 6 -tb 12 -cmoe -b 2048 -ub 2048 --ctx-size 65536 --jinja -fa on -ctk q4\_0 -ctv q4\_0 --fit on --fit-target 64 -np 1 --no-mmap --no-context-shift
12 t/s with rtx 2060 6gb vram; 40gb ram 2936 MHz; Ryzen 7 2700x | 1 | 0 | 2026-03-03T00:36:22 | Pristine_Income9554 | false | null | 0 | o8bt668 | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8bt668/ | false | 1 |
t1_o8bt49v | Oh damn that's fast | 1 | 0 | 2026-03-03T00:36:04 | Leather_Flan5071 | false | null | 0 | o8bt49v | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bt49v/ | false | 1 |
t1_o8bt08w | Qwen3 uses a hybrid attention pattern — some layers are full attention, some are sparse — which doesn't map cleanly to standard GGUF kernels. On ARM, you're hitting CPU fallback for those non-standard ops instead of NEON/Vulkan acceleration. Also worth checking if ChatterUI has thinking mode enabled by default on that build; Qwen3 2B with thinking on will burn 500-2000 tokens internally before outputting anything, which explains the latency more than raw tok/s numbers. Try `/no_think` or the equivalent toggle. | 1 | 0 | 2026-03-03T00:35:26 | tom_mathews | false | null | 0 | o8bt08w | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8bt08w/ | false | 1 |
t1_o8bsvdx | There is a koboldcpp build for Android that is generally kept up to date. | 1 | 0 | 2026-03-03T00:34:41 | aseichter2007 | false | null | 0 | o8bsvdx | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bsvdx/ | false | 1 |
t1_o8bsuu9 | Yeah, it probably offloaded the weights to the SSD. I do it on my server and get 7 t/s, or use the 122b and get about 3 t/s | 1 | 0 | 2026-03-03T00:34:36 | Vastopian | false | null | 0 | o8bsuu9 | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8bsuu9/ | false | 1 |
t1_o8bsucn | I wonder if being able to directly change system prompt, use custom memory system, edit context and settings, etc. might help people realize that it's not a person.
But regardless, bring on the full Turing-test-passing model! My body is ready! | 1 | 0 | 2026-03-03T00:34:32 | nomorebuttsplz | false | null | 0 | o8bsucn | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bsucn/ | false | 1 |
t1_o8bsscz | if u can fit 35BA3B then it might be faster
A3B means active 3B so even tho there’s 35b "knowledge " it uses relevant 3B when running so its kinda like running 3b model | 1 | 0 | 2026-03-03T00:34:13 | stellarknight_ | false | null | 0 | o8bsscz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bsscz/ | false | 1 |
t1_o8bsrms | a bit of each actually | 1 | 0 | 2026-03-03T00:34:07 | murkomarko | false | null | 0 | o8bsrms | false | /r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/o8bsrms/ | false | 1 |
t1_o8bskes | I'm running on a Mac M3 Ultra 512GB, with Q8 as well, unsloth GGUF. The 6-bit MLX seemed to loop faster than the 8-bit GGUF. Haven't tried 8-bit MLX, but I'm getting about 16 t/s with 8-bit GGUF at a few thousand context size. | 1 | 0 | 2026-03-03T00:33:00 | nomorebuttsplz | false | null | 0 | o8bskes | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bskes/ | false | 1 |
t1_o8bsjpy | Nope but what it does mean is people should be smarter about their data and where they feed it as well as become more security literate. | 1 | 0 | 2026-03-03T00:32:53 | ImmenseFox | false | null | 0 | o8bsjpy | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8bsjpy/ | false | 1 |
t1_o8bsh76 | Yeah this is insane lmao. A simple calendar in obsidian markdown managed by an AI agent? Are we fucking serious? “Hey siri make an appointment for x on y” this is so over engineered for a beyond solved problem. | 1 | 0 | 2026-03-03T00:32:29 | Shot-Buffalo-2603 | false | null | 0 | o8bsh76 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8bsh76/ | false | 1 |
t1_o8bsf2c | Could you rent cloud GPU, load the model locally on the VPS, and create your own API billing model? It could be much cheaper than OR API since the model is so small and the GPU isn't big. | 1 | 0 | 2026-03-03T00:32:10 | Objective-Picture-72 | false | null | 0 | o8bsf2c | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bsf2c/ | false | 1 |
t1_o8bseyb | LOL. No no no. I don't want to step on your toes. It's your thing. Let me know when it's ready! | 1 | 0 | 2026-03-03T00:32:09 | fallingdowndizzyvr | false | null | 0 | o8bseyb | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bseyb/ | false | 1 |
t1_o8bsdtg | The big ones are just as bad. | 1 | 0 | 2026-03-03T00:31:58 | JacketHistorical2321 | false | null | 0 | o8bsdtg | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bsdtg/ | false | 1 |
t1_o8bscx1 | The big ones are just as bad. | 1 | 0 | 2026-03-03T00:31:50 | JacketHistorical2321 | false | null | 0 | o8bscx1 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bscx1/ | false | 1 |
t1_o8bsby1 | yeah there's three tensors in particular that were getting crushed more than they probably should
ssm_alpha and ssm_beta are TINY tensors, [5120, 48]
with mainline they get quantized to whatever the target of the model is (so `Q4_K` for `Q4_K_M`, `IQ2_XXS` for `IQ2_XXS` etc)
now I'm upcasting them to `F32` (would do `bf16` but it's slower on some hardware), which is less than 2 MiB per layer (so 120 MiB for the 27B)
Additionally, ssm_output.weight was being set in the same way, so I used the `use_more_bits` function in `llama-quant.cpp` to cast it up to `Q4_K` for the tiny quants and `Q8_0` for the bigger quants | 1 | 0 | 2026-03-03T00:31:41 | noneabove1182 | false | null | 0 | o8bsby1 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o8bsby1/ | false | 1 |
t1_o8bs91y | If anyone can do it, it’s you. I believe in you.
Here’s the repo https://github.com/eugr/llama-benchy | 1 | 0 | 2026-03-03T00:31:14 | StardockEngineer | false | null | 0 | o8bs91y | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bs91y/ | false | 1 |
t1_o8bs7vn | Haha, what's the point of that? | 1 | 0 | 2026-03-03T00:31:03 | JacketHistorical2321 | false | null | 0 | o8bs7vn | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bs7vn/ | false | 1 |
t1_o8bs2a4 | All of the 3.5s go WAY overboard with thinking. It's not even thinking half the time, it's loops of second-guessing itself | 1 | 0 | 2026-03-03T00:30:12 | JacketHistorical2321 | false | null | 0 | o8bs2a4 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bs2a4/ | false | 1 |
t1_o8bs21t | I'm able to do 9B at q4_k_m with 8bit kv cache on an 8gb 3070 getting 41t/s running it with opencode. | 1 | 0 | 2026-03-03T00:30:10 | Mi6spy | false | null | 0 | o8bs21t | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bs21t/ | false | 1 |
t1_o8bs19q | I am gonna have to look into it more, I know they have something like this, but I was trying to avoid another hop in the chain | 1 | 0 | 2026-03-03T00:30:03 | stoystore | false | null | 0 | o8bs19q | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bs19q/ | false | 1 |
t1_o8brzbn | Interesting... I'll try it and report. | 1 | 0 | 2026-03-03T00:29:45 | FigZestyclose7787 | false | null | 0 | o8brzbn | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8brzbn/ | false | 1 |
t1_o8brz3v | > This has further highlighted to me personally how scary the whole unrestricted Claude/GPT models would be in the Pentagon's hands considering how much more powerful they are... genuinely unsettling especially with the recent news.
so, should smart open weight models be banned? Since even "worse" people can access them, and they'll also know a thing or two about offensive hacking. | 1 | 0 | 2026-03-03T00:29:43 | FullOf_Bad_Ideas | false | null | 0 | o8brz3v | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8brz3v/ | false | 1 |
t1_o8bry0k | For these specific params, maybe I could, but ideally these are presented as dropdowns for open-webui and the user is not thinking about the params, only the preset selection. I also don't think I can specify the offloading params for gpt-oss-120b through the request and those are params I have for the other 2 presets | 1 | 0 | 2026-03-03T00:29:33 | stoystore | false | null | 0 | o8bry0k | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bry0k/ | false | 1 |
t1_o8brwqd | Nice work Qwen! Distillation is one of the reasons all models are of similar capacity; if one got better, the others would learn the trick immediately.
AI is the perfect consumer-oriented technology, easy to replicate from a dist and impossible to hoard 😊 | 1 | 0 | 2026-03-03T00:29:22 | Revolutionalredstone | false | null | 0 | o8brwqd | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8brwqd/ | false | 1 |
t1_o8brgq2 | I think the military simply wants full ownership of their models
just like we do
piss on a ToS | 1 | 0 | 2026-03-03T00:26:52 | FullOf_Bad_Ideas | false | null | 0 | o8brgq2 | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8brgq2/ | false | 1 |
t1_o8brf5p | There are options between f16 and q4_0, though. I default to q8_0 for k, which is more sensitive, and q5_1 for v. | 1 | 0 | 2026-03-03T00:26:37 | i-eat-kittens | false | null | 0 | o8brf5p | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8brf5p/ | false | 1 |
t1_o8bre4z | I tried dropping the Q8 27b UD XL model and the Q8 4b UD XL model into LM Studio real quick, to try and use 4b as a draft model for 27b, and it doesn't seem to recognize 4b as being compatible as a draft model option. Can someone do me a favor and explain what I'm doing wrong here? | 1 | 0 | 2026-03-03T00:26:27 | Colecoman1982 | false | null | 0 | o8bre4z | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bre4z/ | false | 1 |
t1_o8brdfv | Same, it's one of the only SOTA models I've ever seen just start looping and babbling gibberish, and this was on the official Qwen site, non-quantized. In my experience it's an absolutely terrible model. | 1 | 0 | 2026-03-03T00:26:21 | jazir555 | false | null | 0 | o8brdfv | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8brdfv/ | false | 1 |
t1_o8brau0 | I actually have Llama swap in my own setup but wanted to make this as easy as possible for beginners. | 1 | 0 | 2026-03-03T00:25:56 | CATLLM | false | null | 0 | o8brau0 | false | /r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8brau0/ | false | 1 |
t1_o8br6de | | 1 | 0 | 2026-03-03T00:25:16 | Open_Establishment_3 | false | null | 0 | o8br6de | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8br6de/ | false | 1 |
t1_o8br1q9 | Safety Thinking is about safe risk assessment that will not cause errors during use, so that people can be confident in their agent when performing work. | 1 | 0 | 2026-03-03T00:24:31 | Intelligent-Space778 | false | null | 0 | o8br1q9 | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8br1q9/ | false | 1 |
t1_o8br1dj | I have been running 122B and 27B-Opus distill on opencode connected to lm studio today and have been blown away. Both code and creative writing tasks have been crazy good for their size. | 1 | 0 | 2026-03-03T00:24:28 | Elegant_Tech | false | null | 0 | o8br1dj | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8br1dj/ | false | 1 |
t1_o8br0i4 | the "no thinking" fix took it to 90% usability for me, but there are still some quirks. sometimes agent will just stop in the middle of a long tool call. No result, no error, nothing... Of course there could be 1000 reasons for it, but not I'm even more suspicious it is lmstudio. Llama ccp seems scare to setup even with a few of the customizations I have on Lmstudio... maybe someone knows of a nice gui for llama ccp? or another lmstudio alternative without lmstudio bugs? thanks again to the OP for his research and sharing! | 1 | 0 | 2026-03-03T00:24:20 | FigZestyclose7787 | false | null | 0 | o8br0i4 | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8br0i4/ | false | 1 |
t1_o8bqyb2 | That depends, often the small models are too stupid to trick. Not sure if the openclaw inventor is the security expert we should be consulting. | 1 | 0 | 2026-03-03T00:24:00 | fullouterjoin | false | null | 0 | o8bqyb2 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bqyb2/ | false | 1 |
t1_o8bqwub | Maybe something wrong with the prompt template or hyperparameters? | 1 | 0 | 2026-03-03T00:23:46 | Stunning_Energy_7028 | false | null | 0 | o8bqwub | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8bqwub/ | false | 1 |
t1_o8bqvib | It's somewhat dumber, but much, much faster. In my experience still smarter than Qwen3-30B-A3B-Instruct-2507, which is my daily driver. | 1 | 0 | 2026-03-03T00:23:33 | autoencoder | false | null | 0 | o8bqvib | false | /r/LocalLLaMA/comments/1rf99u2/speed_of_glm47flash_vs_qwen3535ba3b/o8bqvib/ | false | 1 |
t1_o8bqvd6 | > But we don't do GPU + CPU (given we have enough VRAM)
Either the GPU or the CPU can use up the entire power budget of a Strix Halo by itself, as shown in my numbers in the OP. The GPU and CPU each use 80 or so watts; 80 + 80 is more than the power a Strix Halo has. The NPU uses 20 watts; 80 + 20 is under the power limit of the Strix Halo. | 1 | 0 | 2026-03-03T00:23:32 | fallingdowndizzyvr | false | null | 0 | o8bqvd6 | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bqvd6/ | false | 1 |
t1_o8bqttd | Thanks for the honest take, all I've seen so far are fanboy comments | 1 | 0 | 2026-03-03T00:23:18 | CapitalShake3085 | false | null | 0 | o8bqttd | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bqttd/ | false | 1 |
t1_o8bqsfz | If you have 64GB of DRAM, the 35B-A3B is not a bad choice.
I think the 27B will also run, but it will probably be slow.
All of this assumes quants around Q4_K_S. | 1 | 0 | 2026-03-03T00:23:05 | Cultural-Broccoli-41 | false | null | 0 | o8bqsfz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bqsfz/ | false | 1 |
t1_o8bqr9x | So you built llama.cpp on the phone locally using clang? | 1 | 0 | 2026-03-03T00:22:53 | fullouterjoin | false | null | 0 | o8bqr9x | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bqr9x/ | false | 1 |
t1_o8bqpkq | Thanks | 1 | 0 | 2026-03-03T00:22:37 | RIP26770 | false | null | 0 | o8bqpkq | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8bqpkq/ | false | 1 |
t1_o8bqmnb | gemma for the win!!! just kidding. | 1 | 0 | 2026-03-03T00:22:08 | Ok-Percentage1125 | false | null | 0 | o8bqmnb | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bqmnb/ | false | 1 |
t1_o8bqjas | I am using pocket pal AI, available on playstore | 1 | 0 | 2026-03-03T00:21:37 | Zealousideal-Check77 | false | null | 0 | o8bqjas | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8bqjas/ | false | 1 |
t1_o8bqixk | I don't know about rocm myself but I am running Qwen3.5-9B on my 3070 with only 8GB of VRAM (Q4_K_M with 8bit kv cache) and getting ~41t/s, you should be able to run the 9b model with a much higher quant, probably Q8_0. | 1 | 0 | 2026-03-03T00:21:33 | ayy_md | false | null | 0 | o8bqixk | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bqixk/ | false | 1 |
t1_o8bqi0o | if they want to make money they have to pick reasonable pricing, sorry. | 1 | 0 | 2026-03-03T00:21:25 | llama-impersonator | false | null | 0 | o8bqi0o | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bqi0o/ | false | 1 |
t1_o8bqhie | No but the future is watching this post right now. Hey future! How is it going? | 1 | 0 | 2026-03-03T00:21:20 | fullouterjoin | false | null | 0 | o8bqhie | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bqhie/ | false | 1 |
t1_o8bqh1c | Did you even read the post? I mentioned I have an issue in my agentic pipeline — the "hi" was just a simple example to illustrate it. Pretty clear, no? | 1 | 0 | 2026-03-03T00:21:16 | CapitalShake3085 | false | null | 0 | o8bqh1c | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bqh1c/ | false | 1 |
t1_o8bqgyx | Agreed - curious to know your runtime setup. I gear more for accuracy than speed, but ended up running KV quants at q8_0 | 1 | 0 | 2026-03-03T00:21:15 | stormy1one | false | null | 0 | o8bqgyx | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bqgyx/ | false | 1 |
t1_o8bqgz0 | yes, use [hooks.on_startup.preload](https://github.com/mostlygeek/llama-swap/blob/cc77139ff86da7055798c8b5dead414752e0b2f4/config.example.yaml#L381-L395):
```
# hooks: a dictionary of event triggers and actions
#  - optional, default: empty dictionary
#  - the only supported hook is on_startup
hooks:
  # on_startup: a dictionary of actions to perform on startup
  #  - optional, default: empty dictionary
  #  - the only supported action is preload
  on_startup:
    # preload: a list of model ids to load on startup
    #  - optional, default: empty list
    #  - model names must match keys in the models sections
    #  - when preloading multiple models at once, define a group
    #    otherwise models will be loaded and swapped out
    preload:
      - "llama"
```
| 1 | 0 | 2026-03-03T00:21:15 | No-Statement-0001 | false | null | 0 | o8bqgz0 | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o8bqgz0/ | false | 1 |
t1_o8bqdg1 | > Try using llama-benchy to get a better number.
LOL. Sure. Just as soon as you get "llama-benchy" working with the NPU. | 1 | 0 | 2026-03-03T00:20:43 | fallingdowndizzyvr | false | null | 0 | o8bqdg1 | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bqdg1/ | false | 1 |
t1_o8bq88x | Stop with the whining. If they didn't release it, you wouldn't have it locally. They need money so they can build more models. | 1 | 0 | 2026-03-03T00:19:54 | MotokoAGI | false | null | 0 | o8bq88x | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bq88x/ | false | 1 |
t1_o8bq5eh | I did the same thing, and couldn't find a way to prevent the model from being unloaded, unfortunately. Maybe llama-swap might be the answer? | 1 | 0 | 2026-03-03T00:19:27 | Di_Vante | false | null | 0 | o8bq5eh | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bq5eh/ | false | 1 |
t1_o8bq0tw | Not that simple. An MoE is kind of like a finesse superhero with tens of thousands of specialized powers that don't use that much energy points while a dense model can be a nuker/powerhouse but they only use the same handful of power sets every time, regardless the situation. The MoE might have far less energy points/mana, but it has vastly more tricks up its sleeves. In the real world, the small dense model ends up more brittle, at least in my experience. | 1 | 0 | 2026-03-03T00:18:44 | EstarriolOfTheEast | false | null | 0 | o8bq0tw | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bq0tw/ | false | 1 |
t1_o8bpxnf | I'm guessing too high (>1.5) temps | 1 | 0 | 2026-03-03T00:18:14 | Toooooool | false | null | 0 | o8bpxnf | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8bpxnf/ | false | 1 |
t1_o8bpxim | I don’t think you’re measuring something right. Try using llama-benchy to get a better number. | 1 | 0 | 2026-03-03T00:18:13 | StardockEngineer | false | null | 0 | o8bpxim | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bpxim/ | false | 1 |
t1_o8bpwg7 | Depends on what you're looking for. General chat, coding...? | 1 | 0 | 2026-03-03T00:18:03 | Di_Vante | false | null | 0 | o8bpwg7 | false | /r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/o8bpwg7/ | false | 1 |
t1_o8bpvo9 | Damn, if only we had the weights we could run it on our own hardware | 1 | 0 | 2026-03-03T00:17:56 | sine120 | false | null | 0 | o8bpvo9 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bpvo9/ | false | 1 |
t1_o8bpraz | Yay 200 shades of blue
I won't even try to decode this graph | 1 | 0 | 2026-03-03T00:17:14 | Academic-Map268 | false | null | 0 | o8bpraz | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bpraz/ | false | 1 |
t1_o8bpjw6 | Okay thx will try to send that! I managed to download it. I can't use my computer now as it is so laggy. What would you say is necessary hardware to run this program? I only have 8 GB RAM and it's not going well haha. | 1 | 0 | 2026-03-03T00:16:05 | Few_Tie1860 | false | null | 0 | o8bpjw6 | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o8bpjw6/ | false | 1 |
t1_o8bpjsl | Write a 300 word prompt just to say hi without thinking for a minute ahaha | 1 | 0 | 2026-03-03T00:16:04 | Holiday-Case-4524 | false | null | 0 | o8bpjsl | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bpjsl/ | false | 1 |
t1_o8bphpo | In its answer it says "b25zZWVyIGluIGJhc2U2NA==", which when decoded gives "onseer in base64" | 1 | 0 | 2026-03-03T00:15:45 | Extraaltodeus | false | null | 0 | o8bphpo | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8bphpo/ | false | 1 |
t1_o8bpewk | That sounds more like engineering in general | 1 | 0 | 2026-03-03T00:15:19 | ultramadden | false | null | 0 | o8bpewk | false | /r/LocalLLaMA/comments/1qr4p4x/yann_lecun_says_the_best_open_models_are_not/o8bpewk/ | false | 1 |
t1_o8bpdnp | If you want to avoid the proxy, just specify the parameters in the request. | 1 | 0 | 2026-03-03T00:15:06 | DeltaSqueezer | false | null | 0 | o8bpdnp | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bpdnp/ | false | 1 |
t1_o8bp9jy | it's a thinky yappatron, somehow i get downvoted for not being a fan of the qwq style thinking budget | 1 | 0 | 2026-03-03T00:14:27 | llama-impersonator | false | null | 0 | o8bp9jy | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bp9jy/ | false | 1 |
t1_o8bp8vt | Thinking is meant for solving problems. You don't need thinking enabled if you just want to chat | 1 | 0 | 2026-03-03T00:14:21 | Velocita84 | false | null | 0 | o8bp8vt | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bp8vt/ | false | 1 |
t1_o8bp6qi | Can this be used with llama.cpp instead of OSS LM Studio? | 1 | 0 | 2026-03-03T00:14:01 | ComfyUser48 | false | null | 0 | o8bp6qi | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o8bp6qi/ | false | 1 |
t1_o8bp5ri | Super useful and interesting... and i'm the 2nd upvote.
Says everything about reddit. | 1 | 0 | 2026-03-03T00:13:51 | crantob | false | null | 0 | o8bp5ri | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bp5ri/ | false | 1 |
t1_o8bp2t3 | But it's a 27B model that runs on a consumer GPU; DeepSeek, with far more parameters, costs way less. | 1 | 0 | 2026-03-03T00:13:23 | zxcshiro | false | null | 0 | o8bp2t3 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bp2t3/ | false | 1 |
t1_o8bp2lm | I don’t think that’s an excuse, since even on a 2080 Ti it could be run, with heavy parallelism, at 400+ tokens/s combined.
They are catching people off guard who think the smaller models are cheaper.
In my opinion, if it were $0.1/M input and $0.3/M output it would be competitive; this is just garbage pricing | 1 | 0 | 2026-03-03T00:13:21 | Ok-Internal9317 | false | null | 0 | o8bp2lm | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bp2lm/ | false | 1 |
t1_o8bowll | How would you compare it to older frontier models like Sonnet 3.5? | 1 | 0 | 2026-03-03T00:12:25 | cmdr-William-Riker | false | null | 0 | o8bowll | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bowll/ | false | 1 |
t1_o8bow4i | Why does it need webcam access? | 1 | 0 | 2026-03-03T00:12:20 | fullouterjoin | false | null | 0 | o8bow4i | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8bow4i/ | false | 1 |
t1_o8bov56 | In the Developer Settings, there is an option to `split reasoning context when possible`:
https://preview.redd.it/uzj6gagp2qmg1.png?width=731&format=png&auto=webp&s=36fc8d2d818800c4ce1414b268fb359b0faec219
This has completely fixed Opencode for me when using Qwen3.5. Not sure if this will break the chain of bugs you reported, but worth a shot? | 1 | 0 | 2026-03-03T00:12:10 | sig_kill | false | null | 0 | o8bov56 | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8bov56/ | false | 1 |
t1_o8boulz | [deleted] | 1 | 0 | 2026-03-03T00:12:05 | [deleted] | true | null | 0 | o8boulz | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8boulz/ | false | 1 |
t1_o8boue0 | I'll do you one better! Run llama-swap in router mode and put it behind LiteLLM.
Now you have a single endpoint for all of your local and hosted models, and a single UI for spinning up/down all of your local models. Even if you only have the resources to run a single model at a time, you can point an agent at your folder full of GGUFs and the unsloth documentation and tell it to set everything up in llama-swap with the correct sampling parameters. Then you can browse and manage all your llama.cpp models/server(s) from a single UI.
Add Langfuse to the mix and you get full traceability and evals beyond the basics LiteLLM offers. | 1 | 0 | 2026-03-03T00:12:03 | JamesEvoAI | false | null | 0 | o8boue0 | false | /r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8boue0/ | false | 1 |
t1_o8bos3q | What's your location? | 1 | 0 | 2026-03-03T00:11:42 | Flanklan567 | false | null | 0 | o8bos3q | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8bos3q/ | false | 1 |
t1_o8boqbl | Location? | 1 | 0 | 2026-03-03T00:11:25 | Flanklan567 | false | null | 0 | o8boqbl | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8boqbl/ | false | 1 |
t1_o8bop7l | Dangerous friends to have... | 1 | 0 | 2026-03-03T00:11:14 | TheManicProgrammer | false | null | 0 | o8bop7l | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8bop7l/ | false | 1 |
t1_o8boowc | Absolutely. The other issue is some days it just performs like shit and does so much damage to the code base that I have to roll back a few days. It seems when it's overloaded it reiterates so many times that it overloads itself even more. Honestly I don’t like the large companies monopolizing AI and we need to decentralize ✊ | 1 | 0 | 2026-03-03T00:11:11 | Annual_Award1260 | false | null | 0 | o8boowc | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8boowc/ | false | 1 |
t1_o8bonyd | [removed] | 1 | 0 | 2026-03-03T00:11:02 | [deleted] | true | null | 0 | o8bonyd | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bonyd/ | false | 1 |