name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o89tqo9 | Mother of God... Thanks!!! | 1 | 0 | 2026-03-02T18:33:38 | IrisColt | false | null | 0 | o89tqo9 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89tqo9/ | false | 1 |
t1_o89tm7s | What about .8 variant? | 1 | 0 | 2026-03-02T18:33:04 | stopbanni | false | null | 0 | o89tm7s | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o89tm7s/ | false | 1 |
t1_o89tkhy | I got an error here | 1 | 0 | 2026-03-02T18:32:50 | Numerous_Sandwich_62 | false | null | 0 | o89tkhy | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89tkhy/ | false | 1 |
t1_o89tgrd | Yeah, an alternative | 1 | 0 | 2026-03-02T18:32:22 | Mhanz97 | false | null | 0 | o89tgrd | false | /r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/o89tgrd/ | false | 1 |
t1_o89te4b | science has been fucked up since about 2023 and fully RIP since 2025, because papers are vibe-written, i.e. generated by LLMs | 1 | 0 | 2026-03-02T18:32:01 | MelodicRecognition7 | false | null | 0 | o89te4b | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89te4b/ | false | 1 |
t1_o89tcnc | try Obtainium to download and manage GitHub apps. | 1 | 0 | 2026-03-02T18:31:49 | jojorne | false | null | 0 | o89tcnc | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89tcnc/ | false | 1 |
t1_o89tafh | Your temperature is too high for reasoning, those Wait tokens are often 2nd, 3rd in line so high temperature makes them more likely to be selected. Either drop it down a notch (Unsloth recommends 0.6 max for reasoning, but for OCR I'd go way lower), or turn reasoning off. I'd do both. | 1 | 0 | 2026-03-02T18:31:32 | 666666thats6sixes | false | null | 0 | o89tafh | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o89tafh/ | false | 1 |
t1_o89t536 | A "finetuning" technique to remove the ability of the LLM to refuse your demands. It usually doesn't remove all refusals and also degrades LLM capabilities a bit. | 1 | 0 | 2026-03-02T18:30:51 | Festour | false | null | 0 | o89t536 | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o89t536/ | false | 1 |
t1_o89t50f | | 1 | 0 | 2026-03-02T18:30:50 | Independent-Ruin-376 | false | null | 0 | o89t50f | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89t50f/ | false | 1 |
t1_o89t316 | those accounts earn money by farming clicks and impressions. I normally only have them to know what's the latest buzz at most, never really put much weight on their opinions lol. | 1 | 0 | 2026-03-02T18:30:35 | hieuphamduy | false | null | 0 | o89t316 | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89t316/ | false | 1 |
t1_o89t1yq | What kinds of examples are you looking for? | 1 | 0 | 2026-03-02T18:30:26 | alphatrad | false | null | 0 | o89t1yq | false | /r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/o89t1yq/ | false | 1 |
t1_o89sw0t | I didn't test it on benchmarks, but for internal tasks it turned out on par! | 1 | 0 | 2026-03-02T18:29:40 | TerryTheAwesomeKitty | false | null | 0 | o89sw0t | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89sw0t/ | false | 1 |
t1_o89sv8z | It’s bad for layout, just with any bbox estimation | 1 | 0 | 2026-03-02T18:29:34 | dreamai87 | false | null | 0 | o89sv8z | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89sv8z/ | false | 1 |
t1_o89srs1 | Takes some time to load; around 860MB of resources are loaded. Took 4 minutes for me | 1 | 0 | 2026-03-02T18:29:08 | unskilledexplorer | false | null | 0 | o89srs1 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o89srs1/ | false | 1 |
t1_o89srh3 | If only it was uncensored | 1 | 0 | 2026-03-02T18:29:05 | Ok_Caregiver_1355 | false | null | 0 | o89srh3 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89srh3/ | false | 1 |
t1_o89sqvi | Just hard code the jinja template to `<think>\n\n</think>` | 1 | 0 | 2026-03-02T18:29:01 | I-am_Sleepy | false | null | 0 | o89sqvi | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89sqvi/ | false | 1 |
t1_o89sq1r | I changed to bf16 for both k and v, but then processing slowed to a crawl. I'm on a single 3090 and utilization dropped to 30% during processing after the change (it was close to 100% at default).
I'm downloading a different quant and rechecking whether the issue is on the prompt side, just to make sure.
The 27B is awesome, I did not notice anything jarring (some minor comprehension issues at most) with the quant. As much as I didn't like any of the previous Qwens, this one is likely staying. | 1 | 0 | 2026-03-02T18:28:54 | kaisurniwurer | false | null | 0 | o89sq1r | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89sq1r/ | false | 1 |
t1_o89snen | This paper [arxiv.org/abs/2511.05502](http://arxiv.org/abs/2511.05502) compares MLX, MLC-LLM, Ollama, llama.cpp, and PyTorch MPS on M2 Ultra (192GB) with Qwen-2.5. Key findings:
- MLX had the highest sustained generation throughput
- llama.cpp was efficient for lightweight single-stream use but had lower throughput
- MLC-LLM had the best TTFT for moderate prompts (paged KV cache)
- Ollama (a llama.cpp wrapper) lagged in throughput and TTFT | 1 | 0 | 2026-03-02T18:28:34 | Striking-Swim6702 | false | null | 0 | o89snen | false | /r/LocalLLaMA/comments/1qssxhx/research_vllmmlx_on_apple_silicon_achieves_21_to/o89snen/ | false | 1 |
t1_o89sl6v | That v0.4.x persistent shared KV cache across 4 parallel inputs on a single GPU is slick when doing chained tool calls. That's what brought me back to LM Studio after being on and off with it since 2024. It's been good incentive to get up to speed on what's been blocking consistent behavior on local models, now that the tools make it possible to afford the time to dig more deeply. | 1 | 0 | 2026-03-02T18:28:17 | One-Cheesecake389 | false | null | 0 | o89sl6v | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89sl6v/ | false | 1 |
t1_o89sjfv | I tried ChatterUI 0.8.9-b9 and it **works like a charm**,
but PocketPal stopped working for me at 1.11.11.
It's not just PocketPal; it's all llama.cpp-derived apps. | 1 | 0 | 2026-03-02T18:28:04 | jojorne | false | null | 0 | o89sjfv | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89sjfv/ | false | 1 |
t1_o89siwn | Is it possible to take a transcript from something like opencode, use an LLM to remove the fluff, and fine tune one of these small models for agents that do a similar thing?
My use case: I have an LLM which looks at a bunch of files, then uses some tools to generate some JSON. Qwen does an AMAZING job at it, but I have thousands of these directories I want to analyze, and they all roughly follow a similar pattern. I'd love to fine tune a smaller model to reduce the number of misfires it has, as well as the memory footprint, so I can run a few instances of it.
I've seen guides for fine tuning for chat templates, but I think properly doing it for agent flows is another beast. Hoping for an unsloth article or something similar :D | 1 | 0 | 2026-03-02T18:27:59 | CSharpSauce | false | null | 0 | o89siwn | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89siwn/ | false | 1 |
t1_o89si89 | It's called progress. Q3.5 is a huge leap forward compared to Q3. Not only does 35B beat Q3 235B, it is also dangerously close behind its bigger Q3.5 cousin.
The point here is that if you look at the charts, the Q3.5 architecture seems super efficient, and going above 40B-50B probably requires a lot more data etc. than those 235B models have in them.
The same thing was being pointed out back in 2023-2024, when larger models were rarely better than smaller ones because the architectures in use just weren't "stuffed" with enough data for the big-B models to spread their wings. Then architecture progress slowed, high parameter counts were needed for the amount of data being shoved in, and big-B models again ran away with the scores.
Q3.5 seems to bring back big architecture gains, closing the gap to big-B models that simply don't have enough data for their size to matter. | 1 | 0 | 2026-03-02T18:27:54 | GoranjeWasHere | false | null | 0 | o89si89 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89si89/ | false | 1 |
t1_o89sb57 | Did you test some model? I have been testing qwen3.5-coder-fp8, which works great with opencode, but I didn't find a proper vLLM setup for Codex | 1 | 0 | 2026-03-02T18:27:00 | Ancient_Canary1148 | false | null | 0 | o89sb57 | false | /r/LocalLLaMA/comments/1r8b9x8/how_to_use_codex_cli_with_a_local_vllm_server/o89sb57/ | false | 1 |
t1_o89sat2 | Damn q2… if it works it works. | 1 | 0 | 2026-03-02T18:26:57 | ThisWillPass | false | null | 0 | o89sat2 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89sat2/ | false | 1 |
t1_o89s9vm | [removed] | 1 | 0 | 2026-03-02T18:26:50 | [deleted] | true | null | 0 | o89s9vm | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o89s9vm/ | false | 1 |
t1_o89s8sy | Thats so cooool damn ! Are you happy with the design ? I'm pretty new at this haha | 1 | 0 | 2026-03-02T18:26:42 | roackim | false | null | 0 | o89s8sy | false | /r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o89s8sy/ | false | 1 |
t1_o89s5cz | Haven't tested the small ones, but on 35BA3B and 27B reasoning adds the ability to solve complex problems.
It makes no difference on simple queries.
As you stated, it helps with context recall, and tool usage is more stable with reasoning.
On the other hand, I find it thinks too much; without a reasoning budget or knobs like GPT-OSS's low/med/high, the improvement isn't really worth it for me, as the speed drop is extreme.
I've ended up with 35BA3B running at Q6 at 60+ t/s on generation with reasoning disabled.
For things where I need reasoning I swap to cloud models, as local speed is not enough.
The vision part also works pretty well without reasoning, can't complain. | 1 | 0 | 2026-03-02T18:26:15 | DistanceAlert5706 | false | null | 0 | o89s5cz | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89s5cz/ | false | 1 |
t1_o89s57z | Qwen 4 will probably come with 1M of context | 1 | 0 | 2026-03-02T18:26:14 | Samy_Horny | false | null | 0 | o89s57z | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89s57z/ | false | 1 |
t1_o89s32s | That is exactly what I was looking for, I knew that someone had to have done it, and I can run it fully local! Thanks a lot!! | 1 | 0 | 2026-03-02T18:25:57 | dionisioalcaraz | false | null | 0 | o89s32s | false | /r/LocalLLaMA/comments/1rf4pwa/reasondb_opensource_document_db_where_the_llm/o89s32s/ | false | 1 |
t1_o89s18r | Because we are in sloppy hype land where no one believes in science anymore. | 1 | 0 | 2026-03-02T18:25:44 | One-Employment3759 | false | null | 0 | o89s18r | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89s18r/ | false | 1 |
t1_o89rwdx | No, those are different benchmarks that each test one thing, and he doesn't name the benchmark (I assume it's just copy-pasted from Artificial Analysis), so the data is meaningless except to compare the models in that specific post. | 1 | 0 | 2026-03-02T18:25:06 | dtdisapointingresult | false | null | 0 | o89rwdx | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89rwdx/ | false | 1 |
t1_o89rvdt | I don't have the hardware for it. This exploration, and what I've been slowly helping with on the Continue code assistant extension, suggests behaviorally-interconnected bugs across the whole stack that look very similar in the final user workflow. Nothing against the owners of those products, either, because I've seen the code that deals with all the various syntax from the models. There is no "IEEE for LLMs". MCP is a great conceptual model to build within, but parsing the model output is understandably complex to implement.
vLLM is a good idea to look at in the future. I only have Intel and CUDA environments to work with, though. | 1 | 0 | 2026-03-02T18:24:58 | One-Cheesecake389 | false | null | 0 | o89rvdt | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89rvdt/ | false | 1 |
t1_o89rt25 | Why not vLLM or SGLang? They have better cache management. | 1 | 0 | 2026-03-02T18:24:40 | tsukuyomi911 | false | null | 0 | o89rt25 | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89rt25/ | false | 1 |
t1_o89rs4g | It's true. Try it.
There's a reason for it, too: Improved software techniques around LLMs and extreme amounts of training data. It's not magic or a scam, I predicted this one year ago based on the papers that came out. | 1 | 0 | 2026-03-02T18:24:33 | AppealSame4367 | false | null | 0 | o89rs4g | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89rs4g/ | false | 1 |
t1_o89rnum | Is it Base or IT when it's not mentioned in the file name? Is it true that Base is mostly not useful for actual (non fine-tuned) use? | 1 | 0 | 2026-03-02T18:24:01 | ihatebeinganonymous | false | null | 0 | o89rnum | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o89rnum/ | false | 1 |
t1_o89rjjd | iirc 0.8b, 2b and 4b have a different architecture hence they can't work without tricks. 9B works | 1 | 0 | 2026-03-02T18:23:27 | Dany0 | false | null | 0 | o89rjjd | false | /r/LocalLLaMA/comments/1riwd56/speculative_decoding_with_qwen35_is_it_working/o89rjjd/ | false | 1 |
t1_o89rikp | It's an old dual epyc 7282 with a h11dsi motherboard (so only 2 16x pcie 3.0 slots, both controlled by cpu 1), 8x32GB 2666mhz ddr4 ecc, with 2x Mi50 16GB. Parameters are very basic, haven't gotten anything from trying to microtune settings, maybe one t/s from using a larger ubatch size. I find that llama.cpp's -fit works better than all the tedious manual tensor overrides and tensor splits I tried. So it's basically something to the effect of
```
taskset -c 0-15 llama-server -t 16 -fa on -fit on -fitt 128 -fitc 65536 -c 65536 -ub 2048 -m Qwen3.5-122B-A10B-UD-Q8_K_XL-00001-of-00004.gguf
```
Since inference is memory bound I get a maximum of ~7 t/s at around 16 threads, with affinities set so the threads get all of cpu1's physical cores (`taskset -c 0-15`). I would have expected a few more t/s by splitting over both CPUs' physical cores, so I could use 4 lanes on cpu1 and 4 lanes on cpu2 instead of leaving 4 lanes unused on cpu2, but it seems there's a bottleneck somewhere else. I haven't bothered to hunt it down. One thing I haven't tried yet is removing the second CPU and using cpu1 alone with all 8 memory lanes filled.
But it's old hardware; it's already crazy that I can get this much mileage out of it. You need a lot of patience at 7 t/s when reasoning is on (`--chat-template-kwargs '{"enable_thinking": false}'` turns it off).
With gpt-oss-120b I'm getting around 20 t/s at full context (`-c 0`) with the same parameters as above and a custom tensor override (`-ot "\.ffn_(up|gate)_exps\.=CPU" -sm layer`), but yesterday I found out `-fit on -fitt 128 -fitc 131072` gets me 23 t/s, so yeah, I gave up on those custom `-ot` overrides; `-fit` is cleverer than me. | 1 | 0 | 2026-03-02T18:23:20 | a1ix2 | false | null | 0 | o89rikp | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o89rikp/ | false | 1 |
t1_o89rgj0 | I would love, love it if we had a well-maintained, polished personal assistant app run by pros paid by Alibaba. | 1 | 0 | 2026-03-02T18:23:04 | dtdisapointingresult | false | null | 0 | o89rgj0 | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o89rgj0/ | false | 1 |
t1_o89raey | Doc may be lacking, but looking at the repo, they support llama.cpp, so that means you can probably just set https://openrouter.ai/api/v1 as your "llama.cpp server" address. | 1 | 0 | 2026-03-02T18:22:17 | dtdisapointingresult | false | null | 0 | o89raey | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o89raey/ | false | 1 |
t1_o89r7vg | So for someone like me who either wants to repurpose an RTX3070 or buy a mac mini for this, what the fk am i looking at? | 1 | 0 | 2026-03-02T18:21:57 | BruhAtTheDesk | false | null | 0 | o89r7vg | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89r7vg/ | false | 1 |
t1_o89r2zi | Honestly, the colors are too distinct. | 1 | 0 | 2026-03-02T18:21:19 | ChocomelP | false | null | 0 | o89r2zi | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89r2zi/ | false | 1 |
t1_o89r1fd | Where did you buy that card and how much was it? | 1 | 0 | 2026-03-02T18:21:07 | RudeboyRudolfo | false | null | 0 | o89r1fd | false | /r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/o89r1fd/ | false | 1 |
t1_o89qxfb | If after changing the cache to bf16 (using `-ctk bf16 -ctv bf16`) you still have issues, I suggest trying a recent unsloth quant, just in case, and comparing. Qwen3.5 27B is surprisingly good for its size, but it is very sensitive to quantization.
In case you continue experiencing performance issues with llama.cpp, there is [https://www.reddit.com/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/](https://www.reddit.com/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/), a post about how to get Qwen3.5 running with vLLM if you have enough VRAM. It explains how to run the 4-bit version, but the 8-bit can be run the same way. | 1 | 0 | 2026-03-02T18:20:36 | Lissanro | false | null | 0 | o89qxfb | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89qxfb/ | false | 1 |
t1_o89qt21 | What is Heretic? | 1 | 0 | 2026-03-02T18:20:03 | ikaganacar | false | null | 0 | o89qt21 | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o89qt21/ | false | 1 |
t1_o89q2dc | Which is a shame because that's a quick and sharp tool caller. It does degenerate quickly, if you aren't feeding it back its reasoning trace. Fixing that doesn't catch the Harmony tokens, but has been another tricky thing to learn. | 1 | 0 | 2026-03-02T18:16:34 | One-Cheesecake389 | false | null | 0 | o89q2dc | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89q2dc/ | false | 1 |
t1_o89q11p | Someone did [here](https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/comment/o89md8f/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button). | 1 | 0 | 2026-03-02T18:16:24 | Jobus_ | false | null | 0 | o89q11p | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89q11p/ | false | 1 |
t1_o89q0i0 | 0.8B | 1 | 0 | 2026-03-02T18:16:20 | Nubinu | false | null | 0 | o89q0i0 | false | /r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/o89q0i0/ | false | 1 |
t1_o89pvwj | Is this sub in a competition for who can post the worst charts today? | 1 | 0 | 2026-03-02T18:15:44 | dtdisapointingresult | false | null | 0 | o89pvwj | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89pvwj/ | false | 1 |
t1_o89pphn | Jesus Christ. Post the data in a markdown table in a comment. Anything but this. | 1 | 0 | 2026-03-02T18:14:54 | dtdisapointingresult | false | null | 0 | o89pphn | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89pphn/ | false | 1 |
t1_o89pjs1 | A lot of people are getting behind Cloudflare and other tools, because the traffic grinds things to a halt. The few other people I know that manage small sites have followed suit. Don't misunderstand me, because small sites are almost always a labor of love. Their owners really do want information to flow freely. It's just that the traffic is grinding everything to a standstill for human beings. Everyone is pulling their hair out.
There's no good framework right now to manage bot traffic and guarantee access to human beings simultaneously... There is another side as well, of course. Larger websites often make money with clicks and exclusive content. They can't afford to let bots make site rips and give the milk away for free. | 1 | 0 | 2026-03-02T18:14:10 | Due-Function-4877 | false | null | 0 | o89pjs1 | false | /r/LocalLLaMA/comments/1rhdzrc/local_llm_agents_blocked_everywhere/o89pjs1/ | false | 1 |
t1_o89pg28 | Wooa, Qwen3.5 27b is super strong. | 1 | 0 | 2026-03-02T18:13:42 | --Tintin | false | null | 0 | o89pg28 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89pg28/ | false | 1 |
t1_o89p28p | There's no single benchmark that covers more than 1-2 dimensions of problem-solving, or that comes even close. MMLU Pro focuses on university knowledge Q&A. Terminal Bench on agentic/tool-calling + terminal knowledge. Tau2 Telecom on many-turn agentic/tool-calling. SWE Bench tests bugfixes. And so on.
If your desired task is a fun chatbot, or translation, your needs are different from those of someone who wants help coding. Even within coding, if you're using a different language than what the benchmark uses, you might be wrong to rely on benchmarks (for example, SWE Bench is Python-only IIRC, so benchmaxxing incentives mean AI labs focus more on Python than other languages).
I recommend you start tracking problematic prompts and using them as your own benchmark. Whenever an LLM surprisingly struggles with a prompt, save it for later use, and periodically run it across multiple models. I'm talking about real useful prompts here, not stupid redditor gotchas like the strawberry test or even math, which are things that by definition LLMs can't be good at without (non-generalizable) benchmaxxing.
I also keep track of indie benchmarks I feel test the sort of things I'm interested in, such as https://github.com/fairydreaming/lineage-bench and a few by lechmazur at https://github.com/lechmazur . | 1 | 0 | 2026-03-02T18:11:56 | dtdisapointingresult | false | null | 0 | o89p28p | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o89p28p/ | false | 1 |
t1_o89oy21 | Qwen3.5 27B and 122B-A10B are ranked even better (67%), select open models in the filter icon and they'll show up. | 1 | 0 | 2026-03-02T18:11:24 | dionisioalcaraz | false | null | 0 | o89oy21 | false | /r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o89oy21/ | false | 1 |
t1_o89otdt | 1st: thank you for ChatterUI, I use it almost everyday.
2nd: thank you for supporting qwen35 so soon!
3rd: glad you have a Poco F5, the same as I have! Maybe some day we'll get hexagon acceleration!
4th: lfm2 8b A1B friggin FLY on Poco F5/ChatterUI | 1 | 0 | 2026-03-02T18:10:47 | xandep | false | null | 0 | o89otdt | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89otdt/ | false | 1 |
t1_o89otdh | I have encountered this very issue trying to do tool calls from openclaw to gpt-oss20b through LM Studio. Enormously frustrating!!! | 1 | 0 | 2026-03-02T18:10:47 | d4mations | false | null | 0 | o89otdh | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89otdh/ | false | 1 |
t1_o89ostk | Thanks, I found them all here: [https://marketplace.nvidia.com/en-us/consumer/gaming-laptops/](https://marketplace.nvidia.com/en-us/consumer/gaming-laptops/) - now I need to win the lottery ;-) | 1 | 0 | 2026-03-02T18:10:43 | timeshifter24 | false | null | 0 | o89ostk | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o89ostk/ | false | 1 |
t1_o89os13 | Same phone, how did you set it up? | 1 | 0 | 2026-03-02T18:10:36 | ParthProLegend | false | null | 0 | o89os13 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89os13/ | false | 1 |
t1_o89op5s | gguf when? | 1 | 0 | 2026-03-02T18:10:14 | alltheotherthing | false | null | 0 | o89op5s | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89op5s/ | false | 1 |
t1_o89omdv | Thanks, I did not consider the issue might be with quant itself. I'm using Q4_K_M heretic version, though it was a recent one. But I will confirm the cache before changing the model files. | 1 | 0 | 2026-03-02T18:09:53 | kaisurniwurer | false | null | 0 | o89omdv | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89omdv/ | false | 1 |
t1_o89olof | Yes but even something in between like they did last time would’ve been perfect for me. Always grateful though! | 1 | 0 | 2026-03-02T18:09:48 | arman-d0e | false | null | 0 | o89olof | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89olof/ | false | 1 |
t1_o89oi5r | this here is a really good point. it should make a good model for speculative decoding. | 1 | 0 | 2026-03-02T18:09:20 | sid_276 | false | null | 0 | o89oi5r | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89oi5r/ | false | 1 |
t1_o89ofdr | The "start" button just never becomes clickable. | 1 | 0 | 2026-03-02T18:08:59 | MartinByde | false | null | 0 | o89ofdr | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o89ofdr/ | false | 1 |
t1_o89ob2v | You don't have to fine tune. Just one or two examples in the prompt should be enough. | 1 | 0 | 2026-03-02T18:08:25 | Area51-Escapee | false | null | 0 | o89ob2v | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89ob2v/ | false | 1 |
t1_o89o5yf | I thought the same. The meltdown over security concerns felt just as weirdly inorganic. It felt like they just needed it to stay trending long enough to give the influencers time to take over. Security could always be explained away, and it played into the "marketing" hype. (Dangerous means powerful means dangerous... Don't be a coward... you are already behind!) No big conspiracies here. Just congratulations on the fantastically orchestrated pseudo-grassroots campaign. | 1 | 0 | 2026-03-02T18:07:44 | Pretend-Lettuce-1809 | false | null | 0 | o89o5yf | false | /r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o89o5yf/ | false | 1 |
t1_o89o41p | It was more of an anecdotal example of how I've been paying for things. Selling things here and there to get more things. I also got my ram and mobo early before the max price hikes. | 1 | 0 | 2026-03-02T18:07:29 | ubrtnk | false | null | 0 | o89o41p | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o89o41p/ | false | 1 |
t1_o89o3u5 | Yes that build, 5851 or something. Just updated yesterday | 1 | 0 | 2026-03-02T18:07:27 | Not4Fame | false | null | 0 | o89o3u5 | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89o3u5/ | false | 1 |
t1_o89nzbe | Cool story, armchair psychiatrist who is part of the 54% of Americans who can't read 500 words (~23 seconds to read for me btw).
https://preview.redd.it/plqcfx2a9omg1.png?width=1784&format=png&auto=webp&s=91e4324e57f4512d98c925cdc882313a482073d1
Can't fix fried attention spans. Go read a book. Every page will look like "psychosis" for illiterates. | 1 | 0 | 2026-03-02T18:06:52 | brownman19 | false | null | 0 | o89nzbe | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o89nzbe/ | false | 1 |
t1_o89nyu9 | 128GB DDR5 (irrelevant as GPU offload only) and RTX 5090 + 9800x3D combo. | 1 | 0 | 2026-03-02T18:06:48 | Not4Fame | false | null | 0 | o89nyu9 | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89nyu9/ | false | 1 |
t1_o89nxms | Not sure if I understand the question. You use llama.cpp, or sglang, or vllm, or ollama, or whatever tool you’d like. | 1 | 0 | 2026-03-02T18:06:39 | siggystabs | false | null | 0 | o89nxms | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89nxms/ | false | 1 |
t1_o89nr3m | Glad it worked! This is also my initial attempt (Mac Studio + local LLM + openclaw), and the original vllm-mlx doesn't work, so I forked it and made everything work. I am glad it worked for you | 1 | 0 | 2026-03-02T18:05:48 | Striking-Swim6702 | false | null | 0 | o89nr3m | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o89nr3m/ | false | 1 |
t1_o89npxz | Agentic loops restart between user requests. I have not observed it happening within a single plan execution. Hopefully in the future agents will use llama.cpp slot persistence (vLLM also has something similar). | 1 | 0 | 2026-03-02T18:05:39 | smahs9 | false | null | 0 | o89npxz | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89npxz/ | false | 1 |
t1_o89nn6c | so where's the proverbial paperclip here that you are going to trade up? | 1 | 0 | 2026-03-02T18:05:17 | TreesLikeGodsFingers | false | null | 0 | o89nn6c | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o89nn6c/ | false | 1 |
t1_o89nmx6 | Thanks, I'll look into llama.cpp instead of Ollama, since I'm literally having trouble downloading Qwen 3.5 into Ollama lol | 1 | 0 | 2026-03-02T18:05:15 | azndkflush | false | null | 0 | o89nmx6 | false | /r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/o89nmx6/ | false | 1 |
t1_o89nl7e | Use the 27B at smaller quants, or the 9B at higher quants? | 1 | 0 | 2026-03-02T18:05:02 | Impossible_Art9151 | false | null | 0 | o89nl7e | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89nl7e/ | false | 1 |
t1_o89nhfs | thx for the feedback | 1 | 0 | 2026-03-02T18:04:32 | Striking-Swim6702 | false | null | 0 | o89nhfs | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o89nhfs/ | false | 1 |
t1_o89nfka | Right, this is the pain I hit, so I forked vllm-mlx, fixed the tool calling, and implemented more optimizations to speed up inference on local LLMs. | 1 | 0 | 2026-03-02T18:04:17 | Striking-Swim6702 | false | null | 0 | o89nfka | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o89nfka/ | false | 1 |
t1_o89nehi | Not impressed: on the 27B, typing 'hi' takes 5 min of thinking garbage on a 5090 | 1 | 0 | 2026-03-02T18:04:08 | Noiselexer | false | null | 0 | o89nehi | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89nehi/ | false | 1 |
t1_o89nd1t | For mobile, have you tried the smaller Whisper models (tiny or base) with quantization? They're surprisingly fast on mobile CPUs. For desktop, if you ever need a fully offline Windows solution that works with 50+ languages, I built Weesper Neon Flow which runs locally - might be worth checking out as an alternative to VOSK. | 1 | 0 | 2026-03-02T18:03:56 | Weesper75 | false | null | 0 | o89nd1t | false | /r/LocalLLaMA/comments/1raste0/fast_voice_to_text_looking_for_offline_mobile/o89nd1t/ | false | 1 |
t1_o89n9jr | You did not mention any details; llama.cpp defaults to the f16 cache, so if you used that or lower, that could be an issue on its own. I recently saw multiple people reporting issues with the f16 cache on Qwen3.5 models while confirming that bf16 works fine; one of the most detailed reports I have seen so far, with multiple cache quantizations tested, was this one: [https://www.reddit.com/r/LocalLLaMA/comments/1rii2pd/comment/o865qxw/](https://www.reddit.com/r/LocalLLaMA/comments/1rii2pd/comment/o865qxw/)
Of course, which quant of Qwen 27B you used also matters. If you downloaded an unsloth quant, it's a good idea to check whether you got the updated version or the old broken one, and redownload if necessary. Since it is a small model, I suggest using at least Q6_K or Q8_0 - at the time of writing, [https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/tree/main](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/tree/main) was updated just 3 hours ago. So if, for example, you downloaded from them yesterday, you have a broken quant that needs to be redownloaded. | 1 | 0 | 2026-03-02T18:03:28 | Lissanro | false | null | 0 | o89n9jr | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89n9jr/ | false | 1 |
t1_o89n8rn | Why not keep anything that you've trimmed in a thread and give your agent a tool and a summary of what was trimmed so they can fetch it again if they need it?
Using a small model to summarize memory for another agent is problematic, especially when your agent is working on tasks with levels of complexity a simple model won't understand. | 1 | 0 | 2026-03-02T18:03:22 | Total-Context64 | false | null | 0 | o89n8rn | false | /r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/o89n8rn/ | false | 1 |
t1_o89n8b9 | [removed] | 1 | 0 | 2026-03-02T18:03:18 | [deleted] | true | null | 0 | o89n8b9 | false | /r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/o89n8b9/ | false | 1 |
t1_o89n84e | This is fixed in [https://github.com/raullenchai/vllm-mlx/pull/9](https://github.com/raullenchai/vllm-mlx/pull/9) - if you pick up the latest release, everything should work beautifully. | 1 | 0 | 2026-03-02T18:03:17 | Striking-Swim6702 | false | null | 0 | o89n84e | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o89n84e/ | false | 1 |
t1_o89n48p | This looks really well architected! I like that you separated the STT/LLM pipeline from the frontend - makes it easy to experiment with different models. Have you considered adding support for offline-only mode using something like Whisper.cpp for users who want complete privacy? | 1 | 0 | 2026-03-02T18:02:46 | Weesper75 | false | null | 0 | o89n48p | false | /r/LocalLLaMA/comments/1pmhqyf/open_source_ai_voice_dictation_app_with_a_fully/o89n48p/ | false | 1 |
t1_o89n3e7 | When will a stable version of the app be available? | 1 | 0 | 2026-03-02T18:02:39 | Samy_Horny | false | null | 0 | o89n3e7 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89n3e7/ | false | 1 |
t1_o89n2qf | I'm interested in the monkey! Can you link the video? | 1 | 0 | 2026-03-02T18:02:34 | Helium116 | false | null | 0 | o89n2qf | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o89n2qf/ | false | 1 |
t1_o89n07v | LMStudio should focus on fixing existing issues instead of adding new features nobody asked for. | 1 | 0 | 2026-03-02T18:02:13 | No_Conversation9561 | false | null | 0 | o89n07v | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89n07v/ | false | 1 |
t1_o89n02x | fuck no 🥲 | 1 | 0 | 2026-03-02T18:02:12 | Fine_Factor_456 | false | null | 0 | o89n02x | false | /r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o89n02x/ | false | 1 |
t1_o89myhl | Honestly, I wonder what the use case for the 2B and 0.8B is. Can someone tell me? | 1 | 0 | 2026-03-02T18:02:00 | Billysm23 | false | null | 0 | o89myhl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89myhl/ | false | 1 |
t1_o89mwjv | Yes, people get confused. | 1 | 0 | 2026-03-02T18:01:46 | Mickenfox | false | null | 0 | o89mwjv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89mwjv/ | false | 1 |
t1_o89mu9v | Thanks. I have set it and noticed much reduced over-thinking, but it still occurs some percentage of the time. The issue is that presence penalty radically reduces the quality of some tasks. | 1 | 0 | 2026-03-02T18:01:26 | DeltaSqueezer | false | null | 0 | o89mu9v | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o89mu9v/ | false | 1 |
t1_o89mu45 | Look at the file size for a rough idea. Double the B params for full 16-bit weights, less for quants.
Context/KV cache in these models is economical - it looks like ~550MiB for 32k context with the 4B model. Other things need VRAM too, like the compute buffer (another ~500MiB), and I'm not sure what else, but a Q4 with 32k context is a little too big for 4GB VRAM; 22k context fits. | 1 | 0 | 2026-03-02T18:01:25 | hum_ma | false | null | 0 | o89mu45 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89mu45/ | false | 1 |
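The back-of-envelope arithmetic in that comment can be sketched as a small script. This is a rough sizing heuristic only, not a measurement: the bits-per-weight figure, the compute-buffer size, and the `estimate_vram_mib` helper are all assumptions made up for illustration, with the ~550MiB-per-32k KV figure taken from the comment above.

```python
# Rough VRAM sizing heuristic for a quantized GGUF model.
# All constants are assumptions for illustration, not measured values.

def estimate_vram_mib(params_b: float, bytes_per_weight: float,
                      kv_mib_per_32k: float, context: int,
                      compute_buffer_mib: float = 500.0) -> float:
    """Estimate total VRAM use in MiB: weights + KV cache + compute buffer."""
    weights_mib = params_b * 1e9 * bytes_per_weight / (1024 ** 2)
    kv_mib = kv_mib_per_32k * context / 32768  # KV scales linearly with context
    return weights_mib + kv_mib + compute_buffer_mib

if __name__ == "__main__":
    # A 4B model at an assumed ~4.5 bits/weight (Q4-class quant), 32k context:
    for ctx in (32768, 22000):
        total = estimate_vram_mib(4.0, 4.5 / 8, 550, ctx)
        print(f"{ctx:>6} context: ~{total:.0f} MiB")
```

Actual usage will differ by quant type and runtime overhead, which is why the commenter's empirical numbers (32k too big for 4GB, 22k fits) are the better guide.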
t1_o89mry2 | Adding --context-shift should be all you need. It might not do what you think it does though; at the moment, it lets the model finish its response if it would go over the context limit (i.e. a 500 token response when you are using 131,000 out of 131,072 context), but will fail if the context already exceeds the limit. There's some discussion on [GitHub](https://github.com/ggml-org/llama.cpp/issues/17284) about this. | 1 | 0 | 2026-03-02T18:01:08 | Ulterior-Motive_ | false | null | 0 | o89mry2 | false | /r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/o89mry2/ | false | 1 |
t1_o89mrzq | You got the wrong quant mate. Get the latest ones and tweak params, they work great. | 1 | 0 | 2026-03-02T18:01:08 | jslominski | false | null | 0 | o89mrzq | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89mrzq/ | false | 1 |
t1_o89mrc2 | Update: ran llama-bench on the 27B Q6 and got around 15 tps.
I suspect if I were running my MI50s at 250W I'd get it up to 18-20, but I prefer the lower power consumption. | 1 | 0 | 2026-03-02T18:01:03 | MaddesJG | false | null | 0 | o89mrc2 | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o89mrc2/ | false | 1 |
t1_o89moku | Same boat. I am going to try vLLM. Apparently there is a pretty simple docker setup. I asked ChatGPT about getting it up and running and it didn't look too convoluted. A docker run command and that's about it. | 1 | 0 | 2026-03-02T18:00:41 | _-_David | false | null | 0 | o89moku | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89moku/ | false | 1 |
t1_o89mjta | Any ideas how to enable thinking in the 9B GGUF model of this? I got it running but it's not thinking at all. | 1 | 0 | 2026-03-02T18:00:04 | soyalemujica | false | null | 0 | o89mjta | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89mjta/ | false | 1 |
t1_o89mhu6 | The variant used is the Instant one, right? Or is it the Thinking one? | 1 | 0 | 2026-03-02T17:59:49 | Samy_Horny | false | null | 0 | o89mhu6 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o89mhu6/ | false | 1 |
t1_o89md8f | | Model | Knowledge & STEM | Instruction Following | Long Context | Math | Coding | General Agent | Multilingualism |
|---|---|---|---|---|---|---|---|
| Qwen3-235B-A22B | 83 | 63 | 57 | 87 | 54 | 56 | 75 |
| Qwen3.5-122B-A10B | 85 | 76 | 63 | 91 | 59 | 75 | 79 |
| Qwen3-Next-80B-A3B-Thinking | 80 | 67 | 50 | 77 | 49 | 53 | 71 |
| Qwen3.5-35B-A3B | 84 | 74 | 58 | 89 | 55 | 74 | 77 |
| Qwen3-30BA3B-Thinking-2507 | 78 | 62 | 47 | 68 | 46 | 42 | 69 |
| Qwen3.5-27B | 84 | 77 | 63 | 91 | 60 | 74 | 79 |
| Qwen3.5-9B | 80 | 70 | 59 | 83 | 47 | 73 | 73 |
| Qwen3.5-4B | 76 | 66 | 53 | 75 | 40 | 64 | 68 |
| Qwen3-4B-2507 | 72 | 59 | 37 | 63 | N/A | 41 | 61 |
| Qwen3.5-2B | 64 | 51 | 32 | 21 | N/A | 46 | 52 |
| Qwen3-1.7B | 57 | 42 | 17 | 9 | N/A | 18 | 47 |
| Qwen3.5-0.8B | 43 | 28 | 16 | N/A | N/A | N/A | 37 | | 1 | 0 | 2026-03-02T17:59:13 | Vozer_bros | false | null | 0 | o89md8f | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89md8f/ | false | 1 |
t1_o89ma71 | [removed] | 1 | 0 | 2026-03-02T17:58:49 | [deleted] | true | null | 0 | o89ma71 | false | /r/LocalLLaMA/comments/1ikn5fg/glyphstral24b_symbolic_deductive_reasoning_model/o89ma71/ | false | 1 |
t1_o89m010 | *cries in llama.cpp* | 1 | 0 | 2026-03-02T17:57:32 | _-_David | false | null | 0 | o89m010 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89m010/ | false | 1 |