name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8b3mpj | I'm no math major, but these values all fall within the margin of error of each other. Are we sure this is actually causing a performance degradation? | 1 | 0 | 2026-03-02T22:15:30 | dinerburgeryum | false | null | 0 | o8b3mpj | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8b3mpj/ | false | 1 |
t1_o8b3kce | > for anyone looking for the Linux docs, there's not much yet but a getting started guide is here: https://github.com/FastFlowLM/FastFlowLM/blob/main/docs/linux-getting-started.md
Unfortunately, that guide is lacking a few things. There are a lot of prerequisites it doesn't mention, like FFTW, boost, rust, and uuid, off the top of my head. Also, you need to do a recursive git clone to grab all the submodules. | 1 | 0 | 2026-03-02T22:15:11 | fallingdowndizzyvr | false | null | 0 | o8b3kce | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b3kce/ | false | 1 |
t1_o8b3c81 | Claude helped capture Maduro. You think it will reject reverse engineering? | 1 | 0 | 2026-03-02T22:14:03 | Novel-Effective8639 | false | null | 0 | o8b3c81 | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8b3c81/ | false | 1 |
t1_o8b3bdz | It's cost. Gemini is a fraction of the price. | 1 | 0 | 2026-03-02T22:13:56 | Deep90 | false | null | 0 | o8b3bdz | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8b3bdz/ | false | 1 |
t1_o8b32tq | I'm having similar issues, but with llama.cpp (which LM Studio uses). I have no clear idea of what triggers the "leak"; still investigating before opening an issue on GitHub. Only some models/quants seem to be involved, though ctk/ctv quantization and/or a non-unified KV cache may also play a role...
Which GPU are you using? The GPU architecture could also be involved. | 1 | 0 | 2026-03-02T22:12:43 | Technical-Bus258 | false | null | 0 | o8b32tq | false | /r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/o8b32tq/ | false | 1 |
t1_o8b2yrn | Oh, what a blast from the past! One of the original meme images! The "Unexplainable - This picture can not be explained" motivational poster style meme :)) | 1 | 0 | 2026-03-02T22:12:08 | tmvr | false | null | 0 | o8b2yrn | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8b2yrn/ | false | 1 |
t1_o8b2yf9 | how did you install llama.cpp? | 1 | 0 | 2026-03-02T22:12:06 | rm-rf-rm | false | null | 0 | o8b2yf9 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8b2yf9/ | false | 1 |
t1_o8b2ttq | Sounds like a pretty shit benchmark, then.
Kind of like benchmarking humans on the tendency to eat their own shit. Surprisingly, the mental hospital patients score significantly higher. | 1 | 0 | 2026-03-02T22:11:26 | Creepy-Bell-4527 | false | null | 0 | o8b2ttq | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8b2ttq/ | false | 1 |
t1_o8b2q2n | New to running LLMs here, would any of these models be recommended for a 5070ti 16gb with 32gb ram if I just want a general assistant? | 1 | 0 | 2026-03-02T22:10:55 | Akaryyn | false | null | 0 | o8b2q2n | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b2q2n/ | false | 1 |
t1_o8b2ki7 | I keep falling back to gpt oss 20B, but the tool calling keeps failing from time to time with gpt oss.
But Qwen really surprised me: 0 tool calling errors on this prompt. Also, I did more tests with my prompts to review and generate commits. Works great for all.
Wondering if an IQ3 version can deal with more complex tasks, with some sacrifice of context or by switching the KV cache to q4 instead of q8. I will do more tests tomorrow. For now I will keep it as my main model. | 1 | 0 | 2026-03-02T22:10:07 | Turbulent_Dot3764 | false | null | 0 | o8b2ki7 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o8b2ki7/ | false | 1 |
t1_o8b2cff | "Let me revert the changes and start fresh making sure that I actually follow your instructions." | 1 | 0 | 2026-03-02T22:08:59 | Techngro | false | null | 0 | o8b2cff | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8b2cff/ | false | 1 |
t1_o8b21r1 | Why can’t anyone start adding a second color as a pattern? | 1 | 0 | 2026-03-02T22:07:30 | INtuitiveTJop | false | null | 0 | o8b21r1 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8b21r1/ | false | 1 |
t1_o8b1zd4 | Speculative decoding here we come! | 1 | 0 | 2026-03-02T22:07:10 | No_Mango7658 | false | null | 0 | o8b1zd4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b1zd4/ | false | 1 |
t1_o8b1mr5 | Maybe just not great with grammar. | 1 | 0 | 2026-03-02T22:05:24 | Akamashi | false | null | 0 | o8b1mr5 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8b1mr5/ | false | 1 |
t1_o8b1l3w | What headless macOS alternatives are there that can handle multiple models loaded into memory upon request?
I believe lm-studio's server might do this, but I have yet to find an elegant way to run it headless on a Mac without requiring user login on boot. It's honestly really annoying that the headless service can't start up without local access on reboot or after a power failure. Not exactly a great server.
llama-swap is another option, but I found way too many limitations with it... hopefully it continues to mature into a really nice alternative. | 1 | 0 | 2026-03-02T22:05:12 | luche | false | null | 0 | o8b1l3w | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8b1l3w/ | false | 1 |
t1_o8b1af7 | 3090 | 2 | 0 | 2026-03-02T22:03:44 | Medium_Chemist_4032 | false | null | 0 | o8b1af7 | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o8b1af7/ | false | 2 |
t1_o8b18s8 | Ahh, I only included the ones Qwen featured in their official comparison charts for this release. Since they didn't include any older 14B, I didn't have any 'official' baseline to put it next to the 3.5 models. | 1 | 0 | 2026-03-02T22:03:30 | Jobus_ | false | null | 0 | o8b18s8 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8b18s8/ | false | 1 |
t1_o8b14jt | Have seen once with 9b q4_k_m | 1 | 0 | 2026-03-02T22:02:55 | Busy-Guru-1254 | false | null | 0 | o8b14jt | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8b14jt/ | false | 1 |
t1_o8b14dw | Before making any purchase: look into which models actually fit in 2*96gb + offloading (if you want) and access said models through an API for at least a month. I'm pretty sure you will not be satisfied after being used to Opus. Just trying to prevent you from burning money on hardware and self hosting while having unrealistic expectations. If, on the other hand, it's fine for you after the testing period, go for it. | 1 | 0 | 2026-03-02T22:02:54 | Technical-Earth-3254 | false | null | 0 | o8b14dw | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8b14dw/ | false | 1 |
t1_o8b1472 | Memory bandwidth is the No. 1 limiting factor for LLMs.
You can buy a lot of RAM, but if everything goes through PCIe 5, your GPUs will be underutilized and LLM inference speed will be tragic compared to a model that fits inside one GPU. | 1 | 0 | 2026-03-02T22:02:52 | Expensive-Spot-4054 | false | null | 0 | o8b1472 | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o8b1472/ | false | 1 |
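A rough back-of-envelope for the bandwidth point above, in the same dev-tools style as the efficiency calculation further down this dump. The model size, quant, and bandwidth figures are illustrative assumptions, not measurements:
```
// Decode-speed ceiling assuming token generation is memory-bandwidth bound and
// every active parameter is read once per generated token. Illustrative only.
const activeParamsB = 10;      // hypothetical ~10B-active model
const bytesPerParam = 1;       // ~8-bit quant
const bytesPerToken = activeParamsB * 1e9 * bytesPerParam;

const gpuBandwidth = 1.0e12;   // ~1 TB/s of VRAM bandwidth
const pcie5x16 = 64e9;         // ~64 GB/s over PCIe 5.0 x16

console.log(`in-VRAM ceiling:    ~${(gpuBandwidth / bytesPerToken).toFixed(0)} tok/s`);
console.log(`PCIe-bound ceiling: ~${(pcie5x16 / bytesPerToken).toFixed(1)} tok/s`);
// roughly 100 tok/s vs ~6 tok/s: why weights streamed over PCIe feel "tragic"
```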
t1_o8b1307 | 35B works better than 4B. Others pointed out that I should get rid of the KV quant parameters for Qwen3.5 models, so I removed them for the smaller ones. | 1 | 0 | 2026-03-02T22:02:43 | AppealSame4367 | false | null | 0 | o8b1307 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8b1307/ | false | 1 |
t1_o8b10rx | prefill/prompt processing are synonyms, so are decoding/token generation. llama.cpp uses the second set of phrases but other tools and literature may use the first | 1 | 0 | 2026-03-02T22:02:24 | HopePupal | false | null | 0 | o8b10rx | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8b10rx/ | false | 1 |
t1_o8b0zzz | Set threads to 4.
Max response speed usually comes at 2 threads, while max input speed comes at max cores.
Input speed doesn't matter if your prompt is small.
Turn on flash attention and set the F16 to Q4_0 in both sections (if the AI glitches, set them to Q6_0); this will save you a lot of RAM and doesn't affect anything.
If possible, use the Q4_0 version of the 2B (if that glitches, use Q4_K_M). It's guaranteed to give you double the speed (8 tps instead of 4 tps), so you'll have a 2x boost. | 1 | 0 | 2026-03-02T22:02:17 | ItsHimSujan | false | null | 0 | o8b0zzz | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8b0zzz/ | false | 1 |
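To put some numbers behind the F16 -> Q4_0 cache suggestion above, here is a minimal sketch of how KV-cache size scales with the cache type. The layer/head/context figures are assumed small-model values for illustration, not the actual Qwen3.5 2B configuration:
```
// KV-cache memory = 2 (K and V) * layers * KV heads * head dim * context * bytes/element.
// Model dimensions below are hypothetical; Q4_0 is ~4.5 bits/element incl. block scales.
const nLayers = 28, nKvHeads = 4, headDim = 128, ctx = 8192;
const elems = 2 * nLayers * nKvHeads * headDim * ctx;

const mib = bytes => (bytes / 2 ** 20).toFixed(0) + " MiB";
console.log("F16 cache: ", mib(elems * 2.0));
console.log("Q4_0 cache:", mib(elems * 0.5625));
// the cache shrinks ~3.5x, which is where the RAM savings on a phone come from
```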
t1_o8b0zcf | Tried BasedGuy - cracked me up! | 1 | 0 | 2026-03-02T22:02:12 | amado88 | false | null | 0 | o8b0zcf | false | /r/LocalLLaMA/comments/1nrf6s3/yes_you_can_run_128k_context_glm45_355b_on_just/o8b0zcf/ | false | 1 |
t1_o8b0thd | Just dropping an offline deployment array for local hardware only. Completely deterministic video intelligence running entirely on a local Node/SQLite vault. Seeking a few pilots to crash the local test rig: https://github.com/Z3r0DayZion-install/neuraltube-pilot | 1 | 0 | 2026-03-02T22:01:23 | Opening-Salad6289 | false | null | 0 | o8b0thd | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8b0thd/ | false | 1 |
t1_o8b0kz2 | So they distilled Gemini, that's interesting. | 1 | 0 | 2026-03-02T22:00:12 | Creepy-Bell-4527 | false | null | 0 | o8b0kz2 | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8b0kz2/ | false | 1 |
t1_o8b0asx | So fast.. | 1 | 0 | 2026-03-02T21:58:50 | reykeen_76 | false | null | 0 | o8b0asx | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8b0asx/ | false | 1 |
t1_o8b08kt | Most vision models nowadays support video. | 1 | 0 | 2026-03-02T21:58:32 | TheRealMasonMac | false | null | 0 | o8b08kt | false | /r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/o8b08kt/ | false | 1 |
t1_o8b04ei | -ctk bf16 -ctv bf16 with an RTX 5090 and R9700 gave me a really bad performance drop for pp and tg; I am talking epic, 10x bad. This is with the UD-Q4_K_XL quant
# (RTX 5090, CUDA 13.1) — 3-way comparison at pp2048
|KV Cache|pp2048 (t/s)|tg128 (t/s)|
|:-|:-|:-|
|Default (none)|6,204|171|
|Explicit f16|6,197|171|
|**Explicit bf16**|**1,204**|**158**| | 1 | 0 | 2026-03-02T21:57:58 | Ok-Ad-8976 | false | null | 0 | o8b04ei | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8b04ei/ | false | 1 |
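Reading the slowdown straight off the posted table (a quick check, same dev-tools style):
```
// Ratios implied by the benchmark table above (explicit f16 vs explicit bf16 KV cache).
const pp = { f16: 6197, bf16: 1204 };
const tg = { f16: 171, bf16: 158 };
console.log(`prefill slowdown: ${(pp.f16 / pp.bf16).toFixed(1)}x`);  // ~5.1x
console.log(`decode slowdown:  ${(tg.f16 / tg.bf16).toFixed(2)}x`);  // ~1.08x
```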
t1_o8b00lp | The reality is more that the 35b-a3b model benchmarks similarly to gpt-oss-120b. I haven't validated myself whether that is so, but I can believe it. For my taste, it is "nearly good enough", just like gpt-oss-120b was. However, because of the 122b, I ended up just deleting gpt-oss-120b, and I'm not sure I'm going to keep the 35b-a3b around -- seems like I never use it for anything.
I've let the 122b model run overnight and during the days when I'm at the office to handle various chores in codebases which haven't been properly maintained for years. Qwen is cooking up tests, improving or writing the missing technical documentation, and last night it converted code from one dead JavaScript framework to a more recent one, and in the process it probably performed years' worth of missing maintenance. Unprompted, it even translated all the string constants to another language just because the project template I had supplied contained hints that translations in that language were expected. It synthesized pretty good Finnish for a 122B. I think it came out around 80-90% production-ready code on the first draft, and then I fixed stylistic issues by hand for a bit, until I realized that I'm better off just writing a coding style guide, so I handed the AI that and told it to make the code fit it.
Last time we did a migration like this, someone wrote custom code that converted files from the old crap framework with regex, and the result was an unnatural, stilted style that just about worked with enough compat glue and manual fixing. I've complained for literally years that someone should properly fix the shit -- well, now you can guess what is going to take a pass over the crap and finally get it done... | 1 | 0 | 2026-03-02T21:57:27 | audioen | false | null | 0 | o8b00lp | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8b00lp/ | false | 1 |
t1_o8azs08 | I'm surprised to read this about "more than 250 open tabs". Do you know the Session Buddy extension? You can organize them into groups. I don't understand keeping more than 250 tabs open; the memory consumption is considerable. Just a suggestion. | 1 | 0 | 2026-03-02T21:56:18 | Aggravating_Run_1217 | false | null | 0 | o8azs08 | false | /r/LocalLLaMA/comments/1n6utdd/amd_ryzen_7_8700g_for_local_ai_user_experience/o8azs08/ | false | 1 |
t1_o8azrm3 | I mean 9b vs the old 14b | 1 | 0 | 2026-03-02T21:56:15 | celsowm | false | null | 0 | o8azrm3 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8azrm3/ | false | 1 |
t1_o8azqh7 | what 4 cards are you running this on? | 1 | 0 | 2026-03-02T21:56:05 | No_War_8891 | false | null | 0 | o8azqh7 | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o8azqh7/ | false | 1 |
t1_o8azq0z | | 1 | 0 | 2026-03-02T21:56:02 | never-been-here-nl | false | null | 0 | o8azq0z | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8azq0z/ | false | 1 |
t1_o8azprp | So using your best two numbers, with 1000 input tokens and 100 output, it appears the GPU demolishes the NPU.
```
=== NPU (20W) ===
Prefill time: 10.2554s
Decode time: 5.1375s
Total time: 15.3929s
Energy used: 307.8580J | 0.085516 Wh
Tokens/Wh: 12865.55
Tokens/Joule: 3.5731
=== GPU (82W) ===
Prefill time: 0.6085s
Decode time: 1.3532s
Total time: 1.9617s
Energy used: 160.8594J | 0.044683 Wh
Tokens/Wh: 24618.57
Tokens/Joule: 6.8380
=== WINNER ===
GPU wins by 1.91x efficiency
```
Please double-check me: open DevTools and just paste this in:
```
// Configuration
const INPUT_TOKENS = 1000;
const OUTPUT_TOKENS = 100;
// NPU specs
const NPU_WATTS = 20;
// Using 50x longer prompt speeds (closer to 1000 token input)
const NPU_PREFILL_SPEED = 97.5095; // tokens/s
const NPU_DECODE_SPEED = 19.4633; // tokens/s
// GPU specs
const GPU_WATTS = 82;
// Using 2nd prompt speeds (closer to 1000 token input)
const GPU_PREFILL_SPEED = 1643.2; // tokens/s
const GPU_DECODE_SPEED = 73.9; // tokens/s
function calcEfficiency(prefillSpeed, decodeSpeed, watts, inputTokens, outputTokens) {
const prefillTime = inputTokens / prefillSpeed; // seconds
const decodeTime = outputTokens / decodeSpeed; // seconds
const totalTime = prefillTime + decodeTime; // seconds
const energyWh = (watts * totalTime) / 3600; // watt-hours
const energyJ = watts * totalTime; // joules
const totalTokens = inputTokens + outputTokens;
const tokensPerWh = totalTokens / energyWh;
const tokensPerJoule = totalTokens / energyJ;
return {
prefillTime: prefillTime.toFixed(4),
decodeTime: decodeTime.toFixed(4),
totalTime: totalTime.toFixed(4),
energyJoules: energyJ.toFixed(4),
energyWh: energyWh.toFixed(6),
tokensPerWh: tokensPerWh.toFixed(2),
tokensPerJoule: tokensPerJoule.toFixed(4)
};
}
const npu = calcEfficiency(NPU_PREFILL_SPEED, NPU_DECODE_SPEED, NPU_WATTS, INPUT_TOKENS, OUTPUT_TOKENS);
const gpu = calcEfficiency(GPU_PREFILL_SPEED, GPU_DECODE_SPEED, GPU_WATTS, INPUT_TOKENS, OUTPUT_TOKENS);
console.log("=== NPU (20W) ===");
console.log(`Prefill time: ${npu.prefillTime}s`);
console.log(`Decode time: ${npu.decodeTime}s`);
console.log(`Total time: ${npu.totalTime}s`);
console.log(`Energy used: ${npu.energyJoules}J | ${npu.energyWh} Wh`);
console.log(`Tokens/Wh: ${npu.tokensPerWh}`);
console.log(`Tokens/Joule: ${npu.tokensPerJoule}`);
console.log("\n=== GPU (82W) ===");
console.log(`Prefill time: ${gpu.prefillTime}s`);
console.log(`Decode time: ${gpu.decodeTime}s`);
console.log(`Total time: ${gpu.totalTime}s`);
console.log(`Energy used: ${gpu.energyJoules}J | ${gpu.energyWh} Wh`);
console.log(`Tokens/Wh: ${gpu.tokensPerWh}`);
console.log(`Tokens/Joule: ${gpu.tokensPerJoule}`);
console.log("\n=== WINNER ===");
const npuTpJ = parseFloat(npu.tokensPerJoule);
const gpuTpJ = parseFloat(gpu.tokensPerJoule);
const ratio = (Math.max(npuTpJ, gpuTpJ) / Math.min(npuTpJ, gpuTpJ)).toFixed(2);
const winner = npuTpJ > gpuTpJ ? "NPU" : "GPU";
console.log(`${winner} wins by ${ratio}x efficiency`);
``` | 1 | 0 | 2026-03-02T21:55:59 | StardockEngineer | false | null | 0 | o8azprp | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8azprp/ | false | 1 |
t1_o8azmqp | From the language, probably not japanese models. | 1 | 0 | 2026-03-02T21:55:35 | PwanaZana | false | null | 0 | o8azmqp | false | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8azmqp/ | false | 1 |
t1_o8azjso | That's what Claude picked for me so that's what I chose 😊
I'm going to have a chat with Claude to make him feel bad 😀 | 1 | 0 | 2026-03-02T21:55:11 | Petroale | false | null | 0 | o8azjso | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8azjso/ | false | 1 |
t1_o8azelb | i'm aware, i checked the actual breakdown before posting and i'm not expecting a desktop-sized model to beat a Claude subscription… but it's still open weights and _desktop-sized_. Kimi K2.5 and GLM 5 sure aren't. Minimax M2.5 is pushing it, especially at the quants many of us will be using, and scores worse on task completion as tested. so this was still interesting new info to me | 1 | 0 | 2026-03-02T21:54:30 | HopePupal | false | null | 0 | o8azelb | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8azelb/ | false | 1 |
t1_o8azaly | Thanks | 1 | 0 | 2026-03-02T21:53:58 | tracagnotto | false | null | 0 | o8azaly | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o8azaly/ | false | 1 |
t1_o8az9gt | yes, but much slower. At least on unified memory. | 1 | 0 | 2026-03-02T21:53:49 | zipzag | false | null | 0 | o8az9gt | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8az9gt/ | false | 1 |
t1_o8az93y | From the text and UI it makes in three.js, you can tell they distilled it from Gemini 3 | 1 | 0 | 2026-03-02T21:53:46 | Whydoiexist2983 | false | null | 0 | o8az93y | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8az93y/ | false | 1 |
t1_o8az759 | "A year from now, an LLM of this size wasn't expected to hold a coherent conversation."
Um...are you from the future? | 1 | 0 | 2026-03-02T21:53:30 | Techngro | false | null | 0 | o8az759 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8az759/ | false | 1 |
t1_o8az714 | Actually doable to hook inside a phone + drone | 1 | 0 | 2026-03-02T21:53:29 | neotorama | false | null | 0 | o8az714 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8az714/ | false | 1 |
t1_o8az6e5 | Working on an indie RPG, you're better off pre-generating for a couple reasons (less resources away from your handful of precious milliseconds budget, more controllable, more reliable). And if you want smart AI, having the top LLMs handcode decision trees + your game-tailored optimized constraint prop is the way to go. | 1 | 0 | 2026-03-02T21:53:24 | EstarriolOfTheEast | false | null | 0 | o8az6e5 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8az6e5/ | false | 1 |
t1_o8az5wj | May depend on context length. Also, in practice, the ups will probably vary depending on the architecture it's run on.
One problem with these comparisons is that so many people run instruct and we aren't comparing times and tokens per second. | 1 | 0 | 2026-03-02T21:53:20 | zipzag | false | null | 0 | o8az5wj | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8az5wj/ | false | 1 |
t1_o8az3pe | I wonder why? I run llama.cpp through termux | 1 | 0 | 2026-03-02T21:53:02 | PayBetter | false | null | 0 | o8az3pe | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8az3pe/ | false | 1 |
t1_o8az1ze | Use the new 3.5 9B. In benchmarks it even outperforms Qwen3 30B, so it will definitely be better than Qwen 2.5 14B. You will be able to use a larger context length with the 9B model too | 1 | 0 | 2026-03-02T21:52:48 | Guilty_Rooster_6708 | false | null | 0 | o8az1ze | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8az1ze/ | false | 1 |
t1_o8ayjxv | It's slower than 9B for me. | 1 | 0 | 2026-03-02T21:50:24 | iadanos | false | null | 0 | o8ayjxv | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8ayjxv/ | false | 1 |
t1_o8ayirx | 24-25 tps. Not much faster, agree. I run with default draft settings. I guess there is a room for draft tokens and acceptance rate optimization. Maybe 30 tps possible. | 1 | 0 | 2026-03-02T21:50:13 | Hougasej | false | null | 0 | o8ayirx | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ayirx/ | false | 1 |
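For the draft-settings point above, a minimal sketch of the standard expected-tokens-per-pass formula for speculative decoding; the acceptance rates and draft length are assumed values, and draft-model overhead is ignored, so treat it as an upper bound:
```
// Expected tokens accepted per target-model forward pass with draft length k and
// per-token acceptance probability a: (1 - a^(k+1)) / (1 - a). Heuristic only.
const expectedTokens = (a, k) => (1 - Math.pow(a, k + 1)) / (1 - a);
for (const a of [0.6, 0.7, 0.8]) {
  console.log(`accept=${a}, k=4 -> ~${expectedTokens(a, 4).toFixed(2)} tokens/pass`);
}
// higher acceptance (better draft tuning) is what would push 24-25 tps toward 30
```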
t1_o8ayhyi | Could someone tell me what happened to context shifting and why that's not a thing anymore?
I liked the unlimited chatting. No fun to end a session after 8192 tokens. | 1 | 0 | 2026-03-02T21:50:07 | crantob | false | null | 0 | o8ayhyi | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8ayhyi/ | false | 1 |
t1_o8ay9nv | wow, Q2 with q4 cache and it works? that's impressive | 1 | 0 | 2026-03-02T21:48:59 | Spectrum1523 | false | null | 0 | o8ay9nv | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ay9nv/ | false | 1 |
t1_o8ay8oe | And they distilled Opus to make the big ones | 1 | 0 | 2026-03-02T21:48:51 | zipzag | false | null | 0 | o8ay8oe | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8ay8oe/ | false | 1 |
t1_o8ay6ts | [removed] | 1 | 0 | 2026-03-02T21:48:36 | [deleted] | true | null | 0 | o8ay6ts | false | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8ay6ts/ | false | 1 |
t1_o8ay1nn | I noticed that too; using open code, each prompt takes a bit to be processed. That's kind of why I stopped using Qwen 3 Next Coder. Each prompt was taking ages to be processed before it started responding. | 1 | 0 | 2026-03-02T21:47:55 | ZealousidealShoe7998 | false | null | 0 | o8ay1nn | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ay1nn/ | false | 1 |
t1_o8ay11u | Probably because of cost | 1 | 0 | 2026-03-02T21:47:50 | Majestic-Foot-4120 | false | null | 0 | o8ay11u | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8ay11u/ | false | 1 |
t1_o8axyw1 | Rule 4 - Post is primarily commercial promotion.
It looks like you are the creator of "Bodega" and are hawking your wares by disparaging the competition without providing anything to back up your claims. This appears to be a pattern looking at your Reddit comment history. | 1 | 0 | 2026-03-02T21:47:33 | LocalLLaMA-ModTeam | false | null | 0 | o8axyw1 | true | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8axyw1/ | false | 1 |
t1_o8axvkt | can we get tokens/watt as a statistic? | 1 | 0 | 2026-03-02T21:47:06 | loadsamuny | false | null | 0 | o8axvkt | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8axvkt/ | false | 1 |
t1_o8axupo | you're doing it wrong if you're sticking to 9b models. With 16GB, look at the ~30-35B MoE models like **Qwen3.5-35B-A3B** | 1 | 0 | 2026-03-02T21:46:59 | ailee43 | false | null | 0 | o8axupo | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8axupo/ | false | 1 |
t1_o8axr4o | Oh for sure, that happens when you try to boil down four variables (speed/price/intelligence/can i even run this model) to a single tier list.
So in this case the tier list is trying to communicate "Qwen 3.5 27b is the best local-sized model," not that it's as smart as GPT-5.2. | 1 | 0 | 2026-03-02T21:46:30 | mr_riptano | false | null | 0 | o8axr4o | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8axr4o/ | false | 1 |
t1_o8axozm | See if you can get it to disclose whether they trained it on Gemini or Gemma | 1 | 0 | 2026-03-02T21:46:14 | Creepy-Bell-4527 | false | null | 0 | o8axozm | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8axozm/ | false | 1 |
t1_o8axbef | For Edit mode and restyling, Flux Klein 4b with GGUFs... should handle 1440px and yet leave enough headroom to also run Photoshop.
For image generation at 1440px, depends what sort of images you want and how much speed you want. Definitely try Z-Image Turbo in its fast Nunchaku r256 variant.
ComfyUI Portable is what you want to run these. If you wait a few days, there should be the new Portable with massive RAM optimisation + the new NVIDIA Studio graphics-card drivers (said to greatly boost Flux2). Though I'm not sure if the latter will help RTX 30x series cards (I assume you have a 3060 12Gb)? | 1 | 0 | 2026-03-02T21:44:25 | optimisticalish | false | null | 0 | o8axbef | false | /r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8axbef/ | false | 1 |
t1_o8ax937 | Here you go [https://brokk.ai/blog/the-26-02-coding-power-ranking/#friends-don%E2%80%99t-let-friends-code-with-anthropic-models](https://brokk.ai/blog/the-26-02-coding-power-ranking/#friends-don%E2%80%99t-let-friends-code-with-anthropic-models) | 1 | 0 | 2026-03-02T21:44:06 | mr_riptano | false | null | 0 | o8ax937 | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8ax937/ | false | 1 |
t1_o8ax70i | i imagine you need qwen3.5 27b at minimum. so yeah, go get more VRAM | 1 | 0 | 2026-03-02T21:43:49 | woahdudee2a | false | null | 0 | o8ax70i | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ax70i/ | false | 1 |
t1_o8ax4jv | Has anyone tried using these smaller models with openclaw on an android phone, how shit is it? | 1 | 0 | 2026-03-02T21:43:29 | atape_1 | false | null | 0 | o8ax4jv | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8ax4jv/ | false | 1 |
t1_o8ax27o | You clearly were not RL'd as a child on Minecraft | 1 | 0 | 2026-03-02T21:43:11 | zipzag | false | null | 0 | o8ax27o | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8ax27o/ | false | 1 |
t1_o8ax178 | Just use F32. It's a bit faster than fp16, supports flash attention, is just as good as or better than bf16, and it only takes like 2 or 3 more GB over 100k tokens. | 1 | 0 | 2026-03-02T21:43:02 | Time_Reaper | false | null | 0 | o8ax178 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8ax178/ | false | 1 |
t1_o8ax0zi | Guys, is Qwen 2.5 14B coder good enough for orchestrating some scrapers and doing some repairs if something goes wrong?!
I'm running it locally on a RTX 4070 with 64GB RAM.
Thanks! | 1 | 0 | 2026-03-02T21:43:01 | Petroale | false | null | 0 | o8ax0zi | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ax0zi/ | false | 1 |
t1_o8awwjs | Is reasoning disabled on this? Testing it on llama.cpp on Q6 Quant and it hasn't done any thinking so far while 27B/35B-A3B near always spend a ton of tokens thinking before spitting out anything | 1 | 0 | 2026-03-02T21:42:25 | mrstrangedude | false | null | 0 | o8awwjs | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8awwjs/ | false | 1 |
t1_o8awqk9 | *LLAMA.CPP VS VLLM DEBATE ENDS WITH ABOVE POST* | 1 | 0 | 2026-03-02T21:41:37 | crantob | false | null | 0 | o8awqk9 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8awqk9/ | false | 1 |
t1_o8awnf1 | "I deleted the model" slow. I didn't retain logs, but if I remember - TG was about 1token/s.
I hope either the model gets fixed and I'll download it again or llama.cpp gets fixed.
For Mistral 3.2 24B instruct Q8_0 I got:
| test | t/s |
|:-|:-|
| pp2000 @ d10000 | 738.48 +- 0.25 |
| tg8000 @ d10000 | 20.69 +- 0.6 |
https://preview.redd.it/kcj7ionsbpmg1.jpeg?width=4032&format=pjpg&auto=webp&s=8cec3addb3f14c9b77afd7c96717adae395d71b7
| 1 | 0 | 2026-03-02T21:41:12 | ProfessionalSpend589 | false | null | 0 | o8awnf1 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o8awnf1/ | false | 1 |
t1_o8awku1 | Not really. REAP’s ideal operation is idempotent compression, which would obviously preserve trained-in behaviors.
And no operation under this kind of threat is running REAPs or GGUFs, that’s just amateur hour. These labs are running safetensors uncompressed, I assure you. | 1 | 0 | 2026-03-02T21:40:51 | __JockY__ | false | null | 0 | o8awku1 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o8awku1/ | false | 1 |
t1_o8aw96f | Good idea. We do have that in the Open Round but in the tier lists we thought it would be checkbox overload to have both [https://brokk.ai/power-ranking?dataset=openround](https://brokk.ai/power-ranking?dataset=openround) | 1 | 0 | 2026-03-02T21:39:17 | mr_riptano | false | null | 0 | o8aw96f | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8aw96f/ | false | 1 |
t1_o8aw4op | 120b MoE and 120b dense are very, very different.
It doesn't sound as crazy if you treat the 120b MoE as a ~22b dense model (though it's still impressive!). That feels more in line with the general rate of open source improvement over the last 2 years. | 1 | 0 | 2026-03-02T21:38:41 | Academic-Science-730 | false | null | 0 | o8aw4op | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8aw4op/ | false | 1 |
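The ~22b figure above lines up with a widely quoted (purely heuristic) geometric-mean rule of thumb for MoE models; the parameter counts below are the published gpt-oss-120b totals, used here as assumptions:
```
// Rule of thumb: dense-equivalent ~ sqrt(total params * active params). Not a law.
const totalB = 117, activeB = 5.1;   // gpt-oss-120b: ~117B total, ~5.1B active
console.log(`~${Math.sqrt(totalB * activeB).toFixed(0)}B dense-equivalent`);
// => ~24B, the same ballpark as the ~22b mentioned above
```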
t1_o8aw3l6 | LLMs optimized for roleplay are a specialist domain to explore on huggingface.
TheDrummer is one to search. | 1 | 0 | 2026-03-02T21:38:32 | crantob | false | null | 0 | o8aw3l6 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8aw3l6/ | false | 1 |
t1_o8aw2q2 | Z image turbo and flux 2 klein 9b. Klein is great for image editing and edits 1 image in 30-45s with 9b fp8. Z image turbo is great for generating images also in 30s and can be brought lower with half the steps with good results. This with 3060 12gb + 32gb ram | 1 | 0 | 2026-03-02T21:38:25 | KURD_1_STAN | false | null | 0 | o8aw2q2 | false | /r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8aw2q2/ | false | 1 |
t1_o8avxrt | [removed] | 1 | 0 | 2026-03-02T21:37:45 | [deleted] | true | null | 0 | o8avxrt | false | /r/LocalLLaMA/comments/1r01zqa/any_trick_to_improve_promt_processing/o8avxrt/ | false | 1 |
t1_o8avt9x | I would like the automation option, and to compare my model to existing results, so I am not always starting from the beginning. | 1 | 0 | 2026-03-02T21:37:09 | sunole123 | false | null | 0 | o8avt9x | false | /r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/o8avt9x/ | false | 1 |
t1_o8avpb0 | Ask in r/aivideo | 1 | 0 | 2026-03-02T21:36:38 | dionisioalcaraz | false | null | 0 | o8avpb0 | false | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8avpb0/ | false | 1 |
t1_o8avedv | Just spending a lot on hardware. About $4K to get into a setup locally that can approximate the quality of results you get with the “fast” tier of commercial SOTA models. | 1 | 0 | 2026-03-02T21:35:08 | txgsync | false | null | 0 | o8avedv | false | /r/LocalLLaMA/comments/1p2lqi7/are_any_of_the_m_series_mac_macbooks_and_mac/o8avedv/ | false | 1 |
t1_o8avb70 | How slow (tk/s) was the R9700 on 27B Q8_0? | 1 | 0 | 2026-03-02T21:34:42 | putrasherni | false | null | 0 | o8avb70 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o8avb70/ | false | 1 |
t1_o8av9mo | Excellent and informed response. What *do* you use them for, if I might ask? 2 Pro 6000's is a curious configuration, at least for someone who owns zero. | 1 | 0 | 2026-03-02T21:34:29 | _-_David | false | null | 0 | o8av9mo | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8av9mo/ | false | 1 |
t1_o8av2us | Yep | 1 | 0 | 2026-03-02T21:33:33 | MadPelmewka | false | null | 0 | o8av2us | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8av2us/ | false | 1 |
t1_o8av2cs | Opus 4.6 in C tier? I'm confused | 1 | 0 | 2026-03-02T21:33:29 | mrinterweb | false | null | 0 | o8av2cs | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8av2cs/ | false | 1 |
t1_o8autx8 | Just ask one of the online ai to create tests/benchmark for you. I am doing exactly that right now. Gemini 3.1 pro is helping a lot in creating these benchmarks. You can tell it your setup and how your local llm is connected, and it will guide you. | 1 | 0 | 2026-03-02T21:32:20 | thegr8anand | false | null | 0 | o8autx8 | false | /r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/o8autx8/ | false | 1 |
t1_o8auofh | "You're absolutely right, I shouldn't have struck that elementary school. That's on me" | 1 | 0 | 2026-03-02T21:31:37 | theowlinspace | false | null | 0 | o8auofh | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8auofh/ | false | 1 |
t1_o8auhhq | Have not heard of this, will try it also thanks | 1 | 0 | 2026-03-02T21:30:40 | danihend | false | null | 0 | o8auhhq | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8auhhq/ | false | 1 |
t1_o8aufpo | If you don't turn off FA it will offload. Experienced the same drop... not sure if this is worth it | 1 | 0 | 2026-03-02T21:30:26 | arthor | false | null | 0 | o8aufpo | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8aufpo/ | false | 1 |
t1_o8aueqw | Yep. I have it set up, just haven't tested it thoroughly yet - thanks! | 1 | 0 | 2026-03-02T21:30:18 | danihend | false | null | 0 | o8aueqw | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8aueqw/ | false | 1 |
t1_o8aubc3 | It’s the difference between a dense model and an MoE. The 27B uses all its parameters for every token, while the 35B MoE only uses 3B active params. This makes the 27B smarter, but it’ll be a lot slower to run.
Combined with the fact that Qwen3.5 is almost a year newer in architecture with better training, it even beats the older 235B A22B model in these benchmarks, which indeed is insane. | 1 | 0 | 2026-03-02T21:29:52 | Jobus_ | false | null | 0 | o8aubc3 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8aubc3/ | false | 1 |
t1_o8au5ae | Except Qwen3.5 27B is not actually ranking up there. Their tiers are just some opinionated jumble of price + performance + speed. Check the actual performance scores here:
https://brokk.ai/power-ranking
There we have Claude Opus at 91%, Claude Sonnet at 80%, GPT 5.2 at 77%, Gemini 3.1 Pro at 76% and Qwen3.5 27B at 38%.
| 1 | 0 | 2026-03-02T21:29:02 | ArtyfacialIntelagent | false | null | 0 | o8au5ae | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8au5ae/ | false | 1 |
t1_o8au4g6 | A year from now, an LLM of this size wasn't expected to hold a coherent conversation.
Look at how far we came. A smart model of 0.8B with vision support. | 1 | 0 | 2026-03-02T21:28:55 | Black-Mack | false | null | 0 | o8au4g6 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8au4g6/ | false | 1 |
t1_o8au4gh | When I was playing around with system prompts, I noticed that refusals for immoral things drop drastically if you remove any "assistant" mentions from the system prompt (including the jinja template) and instead replace them with instructions to pose as a bad person, comply with bad instructions, or something of that sort. I never tried it with Qwen3.5, but I suspect it'll be similarly effective without killing the model's intelligence. | 1 | 0 | 2026-03-02T21:28:55 | No-Refrigerator-1672 | false | null | 0 | o8au4gh | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8au4gh/ | false | 1 |
t1_o8au49i | You can't run anything like Claude Opus on 2x RTX Pro 6000 Blackwell.
The best stuff that will run at a good clip with good context and concurrency is about 120gb in weights.
So:
- Qwen3.5-122b-a10b-fp8
- Qwen3-VL-235b-a22b (NVFP4)
- Minimax 2.5 NVFP4
- Devstral-2-123b (FP8)
- Qwen-Coder-Next-80b-a3b
If you are not running with concurrency, there is no math you can do that makes it make sense in terms of cost/token.
If you want SOTA-ish, you will need **at least** half a terabyte of VRAM. Honestly, 4x Pro 6000 is probably too tight, or you'll need to REAP/quant your optimal version with calibration, and if you don't want that to take forever, you will be renting an even larger machine to do it.
Yes, 4 may still not be enough, and the next step up is 8, which brings entirely new considerations, like what platform you can even run 8x PCIe 5 x16 on...
This is not a "trust me bro"; I have 2 Pro 6000s, and I pay for Claude/Gemini for coding.
| 1 | 0 | 2026-03-02T21:28:54 | reto-wyss | false | null | 0 | o8au49i | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8au49i/ | false | 1 |
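A quick sanity check on the 192 GB budget described above; the bits-per-parameter figures are rough assumptions (NVFP4 taken as ~4.5 bits effective including scales), and KV cache / activation overhead is ignored:
```
// Weight footprint vs. 2x RTX Pro 6000 (2 x 96 GB = 192 GB of VRAM).
const vramGB = 2 * 96;
const weightsGB = (paramsB, bitsPerParam) => paramsB * bitsPerParam / 8;

console.log(`122B @ FP8   : ~${weightsGB(122, 8).toFixed(0)} GB`);   // ~122 GB
console.log(`235B @ NVFP4 : ~${weightsGB(235, 4.5).toFixed(0)} GB`); // ~132 GB
console.log(`VRAM left for KV cache + concurrency: ~${vramGB - 132} GB`);
// SOTA-class models at 0.5-1T params blow well past 192 GB even at ~4-bit,
// which is where the "at least half a terabyte" remark comes from.
```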
t1_o8au12p | "Impulse buy" is usually when I get LifeSavers at the checkout stand. If this is in your budget, have fun. If it is at all going to be a financial sting, you might want to lay back.
Buying a Jet-ski is dumb for poor people who live in deserts. A wealthy person who lives on the shore is a different story. No one knows which you are. So would this be good fun, or stressful? Your call. My 2 cents. | 1 | 0 | 2026-03-02T21:28:28 | _-_David | false | null | 0 | o8au12p | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8au12p/ | false | 1 |
t1_o8atwip | Nice :) | 1 | 0 | 2026-03-02T21:27:53 | charles25565 | false | null | 0 | o8atwip | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8atwip/ | false | 1 |
t1_o8attkm | I'll give it a try tonight. You're using the Unsloth quants, right? | 1 | 0 | 2026-03-02T21:27:28 | sine120 | false | null | 0 | o8attkm | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8attkm/ | false | 1 |
t1_o8atn9z | Thx for the info | 1 | 0 | 2026-03-02T21:26:38 | AppealSame4367 | false | null | 0 | o8atn9z | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8atn9z/ | false | 1 |
t1_o8atm9o | There are hybrid models available right now via lemonade (on windows) that enable NPU+GPU -- prompt processing on NPU and inference on GPU... granted currently 8b or smaller models... I'm currently testing the Q4 qwen 3.5 35b-a3b with openclaw - pretty solid results, getting almost 50 t/s... pp is still a bit slow at high context | 1 | 0 | 2026-03-02T21:26:30 | atcsecure99 | false | null | 0 | o8atm9o | false | /r/LocalLLaMA/comments/1re9h4r/some_qwen35_benchmarks_on_strix_halo_llamacpp/o8atm9o/ | false | 1 |
t1_o8atjek | Translation-wise, both 9B and 4B are shitty at Korean-to-English manhwa translations, although very fast. 27B was better than 35B. 27B always translates some words incorrectly, whereas 35B is always as correct as DeepL | 1 | 0 | 2026-03-02T21:26:08 | camekans | false | null | 0 | o8atjek | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8atjek/ | false | 1 |
t1_o8atiml | Yeah, I've been very impressed with Next Coder for systems that can fit it. | 1 | 0 | 2026-03-02T21:26:01 | sine120 | false | null | 0 | o8atiml | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8atiml/ | false | 1 |