name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o89fh5o | Actually, sometimes it does, and sometimes it doesn't. I just hit-regenerate if it's stuck in the thinking loop. But, I agree with you. | 1 | 0 | 2026-03-02T17:26:33 | Iory1998 | false | null | 0 | o89fh5o | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89fh5o/ | false | 1 |
t1_o89fan7 | I feel like that doesn't answer the question. wtf can a pi do that is useful with a small model. | 2 | 0 | 2026-03-02T17:25:41 | Space__Whiskey | false | null | 0 | o89fan7 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89fan7/ | false | 2 |
t1_o89f1wq | Yeah I'm curious how it compares to small dedicated OCR models, like GLM-OCR or [Deepseek OCR 2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2). The latter uses a 2B VLM as its base, so it's comparable size, but the encoder is very different... | 1 | 0 | 2026-03-02T17:24:31 | huffalump1 | false | null | 0 | o89f1wq | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89f1wq/ | false | 1 |
t1_o89f1qt | Are your settings the recommended ones? | 1 | 0 | 2026-03-02T17:24:30 | Long_comment_san | false | null | 0 | o89f1qt | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89f1qt/ | false | 1 |
t1_o89f1fb | Yeah, he didn't understand your point at all lol | 1 | 0 | 2026-03-02T17:24:27 | DinoAmino | false | null | 0 | o89f1fb | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o89f1fb/ | false | 1 |
t1_o89ezwn | Not unless you do simple scripts. | 1 | 0 | 2026-03-02T17:24:16 | jeffwadsworth | false | null | 0 | o89ezwn | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89ezwn/ | false | 1 |
t1_o89eurh | Sounds like you're not actually using a local model, maybe a cloud model, but I could be wrong. How did you connect the model? | 1 | 0 | 2026-03-02T17:23:34 | dev_hoff | false | null | 0 | o89eurh | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89eurh/ | false | 1 |
t1_o89esnn | I am curious, how did you enable or disable thinking mode in LM Studio? | 1 | 0 | 2026-03-02T17:23:17 | shankey_1906 | false | null | 0 | o89esnn | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89esnn/ | false | 1 |
t1_o89es9u | How are you using bf16? Llama.cpp doesn't have support for BF16 CUDA flash attention kernels, only CPU, so that will slow things down a lot
| 1 | 0 | 2026-03-02T17:23:14 | Time_Reaper | false | null | 0 | o89es9u | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o89es9u/ | false | 1 |
t1_o89er8e | general advice:
- use NVMe with TLC chips **and** DRAM cache for the operating system
- use NVMe with TLC chips and **without DRAM** for storing LLMs
- throw away NVMe with QLC chips and without DRAM cache | 1 | 0 | 2026-03-02T17:23:06 | MelodicRecognition7 | false | null | 0 | o89er8e | false | /r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/o89er8e/ | false | 1 |
t1_o89enpz | [removed] | 1 | 0 | 2026-03-02T17:22:37 | [deleted] | true | null | 0 | o89enpz | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89enpz/ | false | 1 |
t1_o89em6q | Please try again later. | 1 | 0 | 2026-03-02T17:22:24 | dev_hoff | false | null | 0 | o89em6q | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89em6q/ | false | 1 |
t1_o89eit8 | Fine. Still thanks for this graph | 2 | 0 | 2026-03-02T17:21:57 | pmttyji | false | null | 0 | o89eit8 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89eit8/ | false | 2 |
t1_o89ef7d | Try this, maybe with a higher quant. I ran it all weekend on a 6GB VRAM + 32GB RAM config (RTX 2060) and got 15-25 tps. You could use a Q3 or Q4 quant, but be careful: speed and quality differ a lot. Maybe you can set cache-type-k and -v to Q8_0.
It should be better than trying to push the 9B model into your 8GB card.
Adapt -t to the number of your physical CPU cores.
./build/bin/llama-server \
-hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q2_K_XL \
-c 72000 \
-b 4092 \
-fit on \
--port 8129 \
--host 0.0.0.0 \
--flash-attn on \
--cache-type-k q4_0 \
--cache-type-v q4_0 \
--mlock \
-t 6 \
-tb 6 \
-np 1 \
--jinja \
-lcs lookup_cache_dynamic.bin \
-lcd lookup_cache_dynamic.bin | 1 | 0 | 2026-03-02T17:21:28 | AppealSame4367 | false | null | 0 | o89ef7d | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89ef7d/ | false | 1 |
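The `--cache-type-k/v q4_0` flags in the command above shrink the KV cache roughly 3-4x versus f16, which is what makes 72k context feasible on small cards. A back-of-envelope sketch; the layer/head/dim numbers below are illustrative placeholders, not the real Qwen3.5-35B-A3B config:

```python
def kv_cache_bytes(ctx, layers, kv_heads, head_dim, bits_per_elem):
    # K and V caches each hold ctx * kv_heads * head_dim elements per layer
    return 2 * ctx * layers * kv_heads * head_dim * bits_per_elem / 8

# Hypothetical config: 48 layers, 4 KV heads, head_dim 128, 72k context
f16 = kv_cache_bytes(72_000, 48, 4, 128, 16)   # plain f16 cache
q4 = kv_cache_bytes(72_000, 48, 4, 128, 4.5)   # q4_0 stores ~4.5 bits/elem incl. scales
print(f"f16: {f16 / 2**30:.2f} GiB, q4_0: {q4 / 2**30:.2f} GiB")
```

With numbers in that ballpark, quantizing the cache is often the difference between the context fitting in VRAM and spilling to system RAM.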
t1_o89edlu | However, they clarify that the 0.8B and 2B models have repetition problems in thinking mode, and that is why these models have instant mode by default. | 1 | 0 | 2026-03-02T17:21:15 | sammoga123 | false | null | 0 | o89edlu | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89edlu/ | false | 1 |
t1_o89ebga | I'm in the same boat (having a 4070 Ti Super). Go with the 35B model. I Use the quantized Q4_K_M from https://huggingface.co/AesSedai/Qwen3.5-35B-A3B-GGUF Works pretty well with nice speed for tool use and coding. It's not quite Claude, but better than Gemini Flash. | 1 | 0 | 2026-03-02T17:20:58 | ytklx | false | null | 0 | o89ebga | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89ebga/ | false | 1 |
t1_o89e6j7 | Let us know your results! I'm still getting a feel for Unsloth 27B Q3. It seems to be working fine so far, minus the fact I can't get image processing working on llama.cpp... | 1 | 0 | 2026-03-02T17:20:19 | InternationalNebula7 | false | null | 0 | o89e6j7 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89e6j7/ | false | 1 |
t1_o89e5mc | It's called progress. I remember in 2023 testing a 20B model on KoboldAI (Ste-something) that could barely talk in English. Back then GPT-4 was barely out, and while both GPT-3 and GPT-4 were good with English, you could often see them doing stupid stuff.
Now 4B model absolutely wipes the floor with old GPT4 in everything.
Of those two models, the 4B is way better than gpt-oss-20b at just 1/4 the size, and the 9B beats the 120B at just 1/13 the size.
Qwen team absolutely cooked. | 1 | 0 | 2026-03-02T17:20:12 | GoranjeWasHere | false | null | 0 | o89e5mc | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89e5mc/ | false | 1 |
t1_o89e45i | VL will no longer exist; Qwen models are fundamentally multimodal with 3.5 | 1 | 0 | 2026-03-02T17:20:00 | sammoga123 | false | null | 0 | o89e45i | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89e45i/ | false | 1 |
t1_o89e3my | This post was up to 25 upvotes at one point, now shrinking rapidly.
I think that if the post stated specifically that this is a benchmark ONLY for the KV cache decode instead of full inference then it would have gained more traction. | 1 | 0 | 2026-03-02T17:19:56 | Craygen9 | false | null | 0 | o89e3my | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89e3my/ | false | 1 |
t1_o89e18s | I need more time to make it conclusive. I have done some minimal testing with Qwen-3.5-122B-16B AWQ vs Qwen3-Coder-Next MXP4.
I think Qwen3-Coder-Next is still slightly better at coding, but I need to run them for longer to compare better. I run the Qwen-3.5-122B-16B AWQ on 4x 3090s and it's super fast; I also love that I can get full context on just GPU.
I run Qwen3-Coder-Next MXP4 hybrid on 2x 3090's and CPU/VRAM on the same machine. | 1 | 0 | 2026-03-02T17:19:36 | SuperChewbacca | false | null | 0 | o89e18s | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89e18s/ | false | 1 |
t1_o89e0sy | Agree. Maybe 12GB or 16GB folks could let us know about this, as 27B is still big for them (Q4 is 15-17GB), so they could try this 9B with full context to experiment.
Thought this model (3.5's architecture) would take more context without needing more VRAM.
For the same reason, I want to see a comparison of Qwen3-4B vs Qwen3.5-4B, since they are different architectures, and see what t/s each gives. | 1 | 0 | 2026-03-02T17:19:33 | pmttyji | false | null | 0 | o89e0sy | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89e0sy/ | false | 1 |
t1_o89dykl | The most useful $20 you may spend is your first month of Gemini or ChatGPT. (Anthropic is great with a good reputation but I use the other two).
Give Gemini a basic intro to where your experience level is, what your goals are, and what you have available to you to play with.
Any time you get an error, throw it into Gemini or ChatGPT and have it explain to you what is going on and how to fix it.
Rinse and repeat until you are comfortable and confident | 1 | 0 | 2026-03-02T17:19:14 | SocialDinamo | false | null | 0 | o89dykl | false | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/o89dykl/ | false | 1 |
t1_o89dy5z | Obviously, dude, but we're talking about the models in the same release.
Is Qwen3.5 9B Q8 better than Qwen3.5 27B Q3? It should be, because there's less deviation, and the creators chose which data the 9B will omit compared to the 27B.
Q8 is almost lossless, Q3 is a lobotomy. | 1 | 0 | 2026-03-02T17:19:11 | jonydevidson | false | null | 0 | o89dy5z | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89dy5z/ | false | 1 |
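As a rough size check on that trade-off; the average bits-per-weight figures below are typical ballpark values, not exact for any specific GGUF:

```python
def gguf_size_gb(params_billions, bits_per_weight):
    # crude estimate: params * bits / 8; ignores metadata and mixed-precision layers
    return params_billions * bits_per_weight / 8

print(f"9B  @ Q8_0   ~ {gguf_size_gb(9, 8.5):.1f} GB")   # assumed ~8.5 bpw average
print(f"27B @ Q3_K_M ~ {gguf_size_gb(27, 3.9):.1f} GB")  # assumed ~3.9 bpw average
```

So the 27B at Q3 is still larger on disk than the 9B at Q8; whether its extra parameters outweigh the Q8's higher fidelity is exactly the question being debated here.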
t1_o89dwy1 | I think 22k context is too little to be usable in anything other than just Q&A. For example, I use claude code, the system prompt itself is around 16k already :D | 1 | 0 | 2026-03-02T17:19:01 | bobaburger | false | null | 0 | o89dwy1 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89dwy1/ | false | 1 |
t1_o89dv6x | Yes I'm sure you're a lot more learned than them and fit to advise them with your Reddit degree. | 1 | 0 | 2026-03-02T17:18:47 | Upset-Presentation28 | false | null | 0 | o89dv6x | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89dv6x/ | false | 1 |
t1_o89dt66 | Are you in infosec? I love the healthy skepticism and a call for PoCs, but that specific phrasing is very comorbid with hacker subcultures. | 1 | 0 | 2026-03-02T17:18:31 | RegisteredJustToSay | false | null | 0 | o89dt66 | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89dt66/ | false | 1 |
t1_o89drwa | This is the way. | 1 | 0 | 2026-03-02T17:18:20 | xRintintin | false | null | 0 | o89drwa | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89drwa/ | false | 1 |
t1_o89drfa | One computer with as much VRAM >>> clustering
Clustering is useful in certain cases, but is (a) often frustrating, (b) doesn't really speed things up, just gives you more capacity for larger models (sometimes), and (c) costs more than just buying one machine with as much VRAM as you can get. | 1 | 0 | 2026-03-02T17:18:17 | geerlingguy | false | null | 0 | o89drfa | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o89drfa/ | false | 1 |
t1_o89drh0 | [deleted] | 1 | 0 | 2026-03-02T17:18:17 | [deleted] | true | null | 0 | o89drh0 | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89drh0/ | false | 1 |
t1_o89dr50 | I’m not sure I agree with you on this. I have tested 9B and all it does is go into a think loop that takes forever to get out of. | 1 | 0 | 2026-03-02T17:18:14 | d4mations | false | null | 0 | o89dr50 | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o89dr50/ | false | 1 |
t1_o89dq3v | [removed] | 1 | 0 | 2026-03-02T17:18:06 | [deleted] | true | null | 0 | o89dq3v | false | /r/LocalLLaMA/comments/1qlr3wj/i_built_an_opensource_audiobook_converter_using/o89dq3v/ | false | 1 |
t1_o89dptt | Thank you for revisiting it. The benchmark uses random tensors intentionally because attention kernel runtime depends on tensor dimensions and memory layout, not on token values or learned weights. The shapes are taken directly from real Hugging Face model configurations, so the measurements reflect realistic model dimensions.
Your skepticism was reasonable, unlike the "Please *read* what your agents build" guy, who didn't even feel like running it or trying to understand it. | 1 | 0 | 2026-03-02T17:18:04 | Upset-Presentation28 | false | null | 0 | o89dptt | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89dptt/ | false | 1 |
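The random-tensor point can be illustrated with a minimal single-token decode attention step in NumPy: the operation count depends only on the tensor shapes, so random values time the same as real activations. The shapes below are illustrative, not taken from any real model config:

```python
import numpy as np

def decode_attention(q, k_cache, v_cache):
    # one-token decode step: q is (heads, dim), caches are (heads, seq, dim)
    scores = np.einsum("hd,hsd->hs", q, k_cache) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return np.einsum("hs,hsd->hd", w, v_cache)

heads, seq, dim = 32, 4096, 128
rng = np.random.default_rng(0)
out = decode_attention(
    rng.standard_normal((heads, dim)),
    rng.standard_normal((heads, seq, dim)),
    rng.standard_normal((heads, seq, dim)),
)
print(out.shape)  # (32, 128)
```

Wrapping the `decode_attention` call in a timer would measure kernel cost; swapping the random inputs for real activations of the same shape would not change the measurement.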
t1_o89do14 | The question was if it is "enough". It is able to do agentic coding, of course you can't expect a lot of steps and automatic stuff like from big models.
He could easily run 35B-A3B with around 20-30 tps and get close to 27B agentic coding. Source: Ran it all weekend on a 6gb vram card. | 1 | 0 | 2026-03-02T17:17:49 | AppealSame4367 | false | null | 0 | o89do14 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89do14/ | false | 1 |
t1_o89dnrn | I checked again today with llama.cpp (Strix Halo platform) and I did not find meaningful changes - I see that the model overthinks a lot even on simple tasks.
Case in point: I asked for a simple OCR extraction (4 lines, 136 ASCII characters overall, just strings and numbers; a bit blurry, but not a captcha-like test), and tried to correct the model on a mistake it made on one of the strings.
It went on a 6,400-token thinking spree, with the reasoning block full of "Wait, perhaps... Wait, another possibility... Wait, maybe... Wait, but..." and could not correct the mistake (which is secondary; the nearly infinite thinking loop is what concerns me).
Anything I can do about that? I read wonders of this model, but as of now it's barely usable. Am I missing anything with my llama.cpp configuration / have to wait for some kind of fix? | 1 | 0 | 2026-03-02T17:17:47 | Zhelgadis | false | null | 0 | o89dnrn | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o89dnrn/ | false | 1 |
t1_o89dn9o | You're in for a treat, because Qwen3.5-27B is a new dense model.
Also, LLM360 published their latest dense 72B model last month. (This is a trained from scratch model, and not a fine-tune.)
There are still dense models being trained; they just aren't as popular and don't get as much attention. | 1 | 0 | 2026-03-02T17:17:43 | ttkciar | false | null | 0 | o89dn9o | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o89dn9o/ | false | 1 |
t1_o89dmnp | Yes, on my Galaxy S22 Ultra, the 0.8b runs at 3 tok/s while the 3.0 1.7b is at 17 tok/s. I think llama.cpp needs an update. | 1 | 0 | 2026-03-02T17:17:38 | Psyko38 | false | null | 0 | o89dmnp | false | /r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o89dmnp/ | false | 1 |
t1_o89dkyv | Cool. | 1 | 0 | 2026-03-02T17:17:24 | profcuck | false | null | 0 | o89dkyv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89dkyv/ | false | 1 |
t1_o89di90 | This is the way | 1 | 0 | 2026-03-02T17:17:03 | xRintintin | false | null | 0 | o89di90 | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89di90/ | false | 1 |
t1_o89dfiq | [removed] | 1 | 0 | 2026-03-02T17:16:40 | [deleted] | true | null | 0 | o89dfiq | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89dfiq/ | false | 1 |
t1_o89dfk5 | Imatrix is quantization with calibration, to improve the model's quality after being quantized down. It generally helps lower quants, with a speed trade-off (according to unsloth's doc: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks#id-2-imatrix-works-very-well) | 1 | 0 | 2026-03-02T17:16:40 | bobaburger | false | null | 0 | o89dfk5 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89dfk5/ | false | 1 |
t1_o89desz | Tbh 8B-9B is becoming the absolute sweet spot for local dev. Fits perfectly on a 12GB card with a good GGUF quant, but noticeably smarter than the old 7Bs. Definitely pulling this tonight to test | 1 | 0 | 2026-03-02T17:16:34 | Spiritual_Rule_6286 | false | null | 0 | o89desz | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89desz/ | false | 1 |
t1_o89dbtd | lol RIP MIT, CERN and Cambridge if they also vibecode their software and generate their papers | 1 | 0 | 2026-03-02T17:16:10 | MelodicRecognition7 | false | null | 0 | o89dbtd | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89dbtd/ | false | 1 |
t1_o89d7fb | Did you see this with Qwen3.5 though? Because that's exactly what the AA-LCR benchmark is for and their values are on the same level as GLM 5, slightly below Sonnet 4.5, so you can expect around half the max context to fill up without much error. | 1 | 0 | 2026-03-02T17:15:34 | AppealSame4367 | false | null | 0 | o89d7fb | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89d7fb/ | false | 1 |
t1_o89d70x | This was the case until the end of 2025. Now, training data and model architecture are much more decisive. | 1 | 0 | 2026-03-02T17:15:31 | powerade-trader | false | null | 0 | o89d70x | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89d70x/ | false | 1 |
t1_o89d5ix | You’re free to run it yourself. The benchmark uses real Hugging Face model configurations to extract actual attention dimensions (Hq, Hkv, D, max_seq, layers) and measures decode-time attention cost over paged KV layouts. If the numbers don’t reproduce, that’s a technical discussion worth having. The script clearly states it benchmarks decode-time attention over paged KV caches. PoC is provided. The code runs. The CSV outputs are there. Feel free to verify. | 1 | 0 | 2026-03-02T17:15:19 | Upset-Presentation28 | false | null | 0 | o89d5ix | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89d5ix/ | false | 1 |
t1_o89d4ti | I run only models from 4b to 35b, 35b 4 bit quant fits on my GPU, I don't try and run 100b+ param models | 1 | 0 | 2026-03-02T17:15:13 | Certain-Cod-1404 | false | null | 0 | o89d4ti | false | /r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o89d4ti/ | false | 1 |
t1_o89d4cv | So could I use this in comfyui as a clip encoder already? | 1 | 0 | 2026-03-02T17:15:09 | Justify_87 | false | null | 0 | o89d4cv | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89d4cv/ | false | 1 |
t1_o89d345 | Yeah, I only included the ones Qwen featured in their official comparison charts for this release. Since they didn't list it there, I didn't have the 'official' baseline to put it next to the 3.5 models. | 1 | 0 | 2026-03-02T17:14:59 | Jobus_ | false | null | 0 | o89d345 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89d345/ | false | 1 |
t1_o89d157 | Just installed 9B. Getting `500: Ollama: 500, message='Internal Server Error', url='http://localhost:11434/api/chat'`. Installed directly via OpenWebUI using `ollama run hf.co/unsloth/Qwen3.5-9B-GGUF:Q8_0` | 1 | 0 | 2026-03-02T17:14:43 | callmedevilthebad | false | null | 0 | o89d157 | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o89d157/ | false | 1 |
t1_o89d0h9 | https://huggingface.co has free courses | 1 | 0 | 2026-03-02T17:14:38 | MelodicRecognition7 | false | null | 0 | o89d0h9 | false | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/o89d0h9/ | false | 1 |
t1_o89cz5w | thanks | 1 | 0 | 2026-03-02T17:14:27 | Sea-Ad-9517 | false | null | 0 | o89cz5w | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89cz5w/ | false | 1 |
t1_o89ctnd | In your test you've shown up to 40x increase for this stage, but there are other bottlenecks that will overshadow this gain. These gains may be negligible in real world applications.
Would like to see full inference tests to see what the actual gain is. | 1 | 0 | 2026-03-02T17:13:43 | Craygen9 | false | null | 0 | o89ctnd | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89ctnd/ | false | 1 |
t1_o89cst8 | Just sharing my anecdotal experience: Windows + LM Studio + Pi coding agent + 9B 6KM quants from unsloth, trying to use skills to read my emails on Google. This model couldn't get it right. Out of 20+ tries, and adjusting instructions (which I don't have to do even once with larger models), the 9B 3.5 only read my emails once (I saw logs) but never got results back to me, as it got stuck in an infinite loop.
To be fair, maybe it's LM Studio issues (saw another post on this), or maybe the unsloth quants will need to be revised, or maybe the harness... or maybe... who knows. But no joy so far.
I'm praying for a proper way to do this, in case I did anything wrong on my end. High hopes for this model. The 35b version is a bit too heavy for my 1080TI+32GB RAM ;)
| 1 | 0 | 2026-03-02T17:13:36 | FigZestyclose7787 | false | null | 0 | o89cst8 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89cst8/ | false | 1 |
t1_o89croz | Just from the [9B's HF model card](https://huggingface.co/Qwen/Qwen3.5-9B). I had to take snap & cut as it was text. | 1 | 0 | 2026-03-02T17:13:27 | pmttyji | false | null | 0 | o89croz | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89croz/ | false | 1 |
t1_o89crcw | You know what? I actually am completely thick. I thought the poster above was saying the data for your plots is completely generated. Perfectly reasonable to use random tensors as input to attention since nothing there determines the order or number of ops. You are right to call me a dumbass and I jumped the gun based on the other 95% of posts I’ve come across. | 1 | 0 | 2026-03-02T17:13:24 | pantalooniedoon | false | null | 0 | o89crcw | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89crcw/ | false | 1 |
t1_o89cpyg | Lowkey aggregators just make more sense. BlackboxAI is like $2/month rn with unlimited MM2.5 and Kimi, plus some GPT, Gemini and Opus access. Why pay full price for GPT when cheaper models handle most stuff anyway? | 1 | 0 | 2026-03-02T17:13:12 | kamen562 | false | null | 0 | o89cpyg | false | /r/LocalLLaMA/comments/1r3s8mq/is_minimax_m25_the_best_coding_model_in_the_world/o89cpyg/ | false | 1 |
t1_o89co58 | It's great for my usecase of processing documents and extracting information from the document while fitting within tight gpu constraints. These models are a blessing as they make my job hella easier! Can't wait to fine-tune them and test it out. | 1 | 0 | 2026-03-02T17:12:58 | r_a_dickhead | false | null | 0 | o89co58 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89co58/ | false | 1 |
t1_o89cnzh | I already used those for quick crap experiments with private data, such as conversation history with co-workers. Like summarization, classification, etc. Something raw like take this json with a dialogue, and write a summary for the conversation not in english most of the times, without even throwing any useful prompt engineering at it, and the model could figure it out, not too bad. maybe with some thought you might be able to find a useful task for it. | 1 | 0 | 2026-03-02T17:12:57 | _-inside-_ | false | null | 0 | o89cnzh | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89cnzh/ | false | 1 |
t1_o89cmwy | This is the new deep fried meme. Vibe coded non-functional slop.
Do you have literally any experience in data science? Or coding for that matter?
Lit up one dopamine receptor in my brain until common sense kicked in and I read the rest of the post. | 1 | 0 | 2026-03-02T17:12:48 | Inevitable_Mistake32 | false | null | 0 | o89cmwy | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89cmwy/ | false | 1 |
t1_o89cmti | I'm using this right now | 1 | 0 | 2026-03-02T17:12:47 | boinkmaster360 | false | null | 0 | o89cmti | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89cmti/ | false | 1 |
t1_o89cklb | Why not ask your buddy the llm | 1 | 0 | 2026-03-02T17:12:29 | numberwitch | false | null | 0 | o89cklb | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89cklb/ | false | 1 |
t1_o89ckbz | What run command are you using? I’m sitting around 90 output tok/s on 6000 pro | 1 | 0 | 2026-03-02T17:12:27 | Laabc123 | false | null | 0 | o89ckbz | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o89ckbz/ | false | 1 |
t1_o89cj2d | --context-shift, --no-context-shift whether to use context shift on infinite text generation (default: disabled)
I don't know about current release on Github but version b8118 has it disabled by default. | 1 | 0 | 2026-03-02T17:12:16 | MelodicRecognition7 | false | null | 0 | o89cj2d | false | /r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/o89cj2d/ | false | 1 |
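For reference, context shift is conceptually a sliding window: once the context fills, the oldest generated tokens are evicted while a leading chunk (e.g. the system prompt) is kept. A toy sketch of the idea, not llama.cpp's actual implementation:

```python
def shift_context(tokens, max_ctx, keep_prefix):
    # keep the first keep_prefix tokens (system prompt), then drop the oldest
    # tokens after them until the sequence fits within max_ctx
    if len(tokens) <= max_ctx:
        return tokens
    drop = len(tokens) - max_ctx
    return tokens[:keep_prefix] + tokens[keep_prefix + drop:]

print(shift_context(list(range(10)), max_ctx=8, keep_prefix=2))  # [0, 1, 4, 5, 6, 7, 8, 9]
```

The real implementation shifts KV-cache entries rather than re-tokenizing, but the eviction policy is the same shape.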
t1_o89cim1 | Agreed. | 1 | 0 | 2026-03-02T17:12:13 | __JockY__ | false | null | 0 | o89cim1 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o89cim1/ | false | 1 |
t1_o89ciap | Cline isn't good enough? I see that even with GLM 4.7 or 5 it hallucinates, but with the CLI coder tools it works well. Seems some tweaks are needed when using Cline, but I haven't bothered to learn more :/ | 1 | 0 | 2026-03-02T17:12:10 | BenL90 | false | null | 0 | o89ciap | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89ciap/ | false | 1 |
t1_o89ch1r | We're not laughing with you. We're laughing at you. | 1 | 0 | 2026-03-02T17:12:00 | DinoAmino | false | null | 0 | o89ch1r | false | /r/LocalLLaMA/comments/1riy7cw/lmao/o89ch1r/ | false | 1 |
t1_o89cffe | The logic was to color-code them by generation (cool colors = Qwen3.5, warm colors = Qwen3), but I’m a total amateur at data visualization and overestimated how easy it would be to tell those shades apart. Lesson learned. | 1 | 0 | 2026-03-02T17:11:47 | Jobus_ | false | null | 0 | o89cffe | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89cffe/ | false | 1 |
t1_o89cep3 | Update: Been playing with the nvfp4 quant and it’s incredible. | 1 | 0 | 2026-03-02T17:11:42 | Laabc123 | false | null | 0 | o89cep3 | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o89cep3/ | false | 1 |
t1_o89cd9g | Amazing models, as could be expected.
They seem to actually enable thinking for themselves dynamically, leaving the <think> tag contents empty for simple queries like greetings and then enabling reasoning for anything more complex. The thinking runs very long, as has been noted: I'm currently running translation of a single phrase with the 2B model on an old laptop CPU, and it's a few thousand tokens in with stuff like "Wait, I need to be careful not to hallucinate", "Okay, final decision: ...", "Wait, one more thing:" etc.
More importantly, the 4B model is using less VRAM than Qwen3 4B at the same quant even though it is larger (4.21B vs 4.02B). Somehow the context is much more efficient. With Qwen3 I could only fit a 6k token context at most to 4GB VRAM, whereas 3.5 loads with 22k, without quantkv of course! | 1 | 0 | 2026-03-02T17:11:30 | hum_ma | false | null | 0 | o89cd9g | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89cd9g/ | false | 1 |
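If you're consuming raw output containing those <think> blocks (empty or not), a minimal way to separate the reasoning from the visible reply, assuming the <think>...</think> format described above:

```python
import re

def split_thinking(text):
    # extract <think>...</think> content (possibly empty) and the remaining reply
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    reasoning = m.group(1).strip() if m else ""
    visible = re.sub(r"<think>.*?</think>", "", text, count=1, flags=re.DOTALL).strip()
    return reasoning, visible

print(split_thinking("<think></think>Hello!"))  # ('', 'Hello!')
```

Most inference frontends do this stripping for you; the sketch is only useful when hitting the completion endpoint directly.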
t1_o89cb3f | i'm glad you could test it out! my use case is also coding, it was a decent model to run on my 5060 ti, but i kept falling back to 35B-A3B. | 1 | 0 | 2026-03-02T17:11:13 | bobaburger | false | null | 0 | o89cb3f | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89cb3f/ | false | 1 |
t1_o89c8mb | 5090 is too small for most of the awq quants on newer models. GGUF is your favorite buddy. | 1 | 0 | 2026-03-02T17:10:53 | qwen_next_gguf_when | false | null | 0 | o89c8mb | false | /r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o89c8mb/ | false | 1 |
t1_o89c3u3 | Q2 did feel crappy for me, but IQ2 was good (at Q3_K_M level, at least), in terms of instruction following (not intelligence). | 1 | 0 | 2026-03-02T17:10:15 | bobaburger | false | null | 0 | o89c3u3 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89c3u3/ | false | 1 |
t1_o89c3iw | can i run the 9b model on 4050 6gb gpu? | 1 | 0 | 2026-03-02T17:10:12 | RedditUser-106 | false | null | 0 | o89c3iw | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89c3iw/ | false | 1 |
t1_o89c2qr | Thanks! That works. Too bad LM Studio doesn't have a better way to do it. | 1 | 0 | 2026-03-02T17:10:06 | FaithlessnessSecure3 | false | null | 0 | o89c2qr | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o89c2qr/ | false | 1 |
t1_o89c1fc | The Qwen3.5 27B with Opus 4.6 reasoning distilled has one-shot my coding tests for prototype games better than Gemini Pro 3.1 did. The 122B and 27B versions are crazy for their size. Going to test them using opencode to check the agentic performance over larger projects for writing and coding. | 1 | 0 | 2026-03-02T17:09:55 | Elegant_Tech | false | null | 0 | o89c1fc | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o89c1fc/ | false | 1 |
t1_o89bzrr | My favourite comment thus far lol | 1 | 0 | 2026-03-02T17:09:41 | Upset-Presentation28 | false | null | 0 | o89bzrr | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89bzrr/ | false | 1 |
t1_o89bz4c | Isn't the square root thing out of date now? MoEs have gotten crazy good, I think it might just be that these are slightly benchmaxxed | 1 | 0 | 2026-03-02T17:09:36 | Neither-Phone-7264 | false | null | 0 | o89bz4c | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89bz4c/ | false | 1 |
t1_o89by9s | You are wrong. I've been using Qwen3.5-35B-A3B over the weekend (on a freakin 6GB laptop GPU, lel) and today Qwen3.5-4B. 15-25 tps or 25-35 tps respectively.
They have vision, they can reason over multiple files and long context (the benchmark shows that they are on par with big models). They can write perfect mermaid diagrams.
They both can walk files, make plans and execute them in an agentic way in different Roo Code modes. Couldn't test more than ~70,000 tokens of context on my limited hardware, but there's no reason to claim or believe they wouldn't perform well. You can use 256k context on bigger GPUs, and could have multiple slots in llama.cpp if you can afford it.
OP: Just try it. I believe this is the best thing since the invention of bread. Imagine not giving a damn about all the cloud bs anymore. No latency, no downtimes, no lowered intelligence. Just the pure, raw benchmark values for every request.
Look at aistupidmeter or whatever that website was called. The output in day-to-day life vs benchmarks for all big models is horrible; they maybe achieve half of what the benchmarks promise. So your local small Qwen agent that almost always delivers the benchmarked performance delivers a _much_ better overall performance if you measure over weeks. No fucking rate limiting. | 1 | 0 | 2026-03-02T17:09:29 | AppealSame4367 | false | null | 0 | o89by9s | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89by9s/ | false | 1 |
t1_o89bxzi | > have been dropped
Where from?
Oh, you mean "have dropped". | 1 | 0 | 2026-03-02T17:09:27 | florinandrei | false | null | 0 | o89bxzi | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89bxzi/ | false | 1 |
t1_o89bx1e | You don't need a `\` before the reasoning-budget flag; you can have as many params on a line as you want. I confirmed from the llama-server startup output that these parameters were processed correctly. | 1 | 0 | 2026-03-02T17:09:19 | DeltaSqueezer | false | null | 0 | o89bx1e | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o89bx1e/ | false | 1 |
t1_o89bwdd | Great, let us know the result. I'm a bit skeptical about the BF16 KV cache information though; it would be nice to see the outcome (since I've been running the Q8 cache just fine for a while now). | 1 | 0 | 2026-03-02T17:09:13 | bobaburger | false | null | 0 | o89bwdd | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89bwdd/ | false | 1 |
t1_o89budu | I'll be sure to share your insights with Mezzanine's collaborators at MIT, CERN and Cambridge, one can't be too careful with emojis these days. | 1 | 0 | 2026-03-02T17:08:58 | Upset-Presentation28 | false | null | 0 | o89budu | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89budu/ | false | 1 |
t1_o89bsur | I’d rather use an aggregator tbh. BlackboxAI is basically $2/month with unlimited MM2.5 and Kimi, and you still get limited GPT/Gemini/Opus. 90% of tasks don’t need the top model anyway. | 1 | 0 | 2026-03-02T17:08:46 | PCSdiy55 | false | null | 0 | o89bsur | false | /r/LocalLLaMA/comments/1r7d0ph/tested_minimax_m25_locally_vs_gemini_3_and_opus/o89bsur/ | false | 1 |
t1_o89br2u | I mean they are providing a proprietary service for millions of users. They don’t want to expose their own data by allowing it to run on local hardware. While I don’t disagree with the potential that you’re on to something, I believe there are other explanations that also fit the profile. | 1 | 0 | 2026-03-02T17:08:32 | dev_hoff | false | null | 0 | o89br2u | false | /r/LocalLLaMA/comments/1riy56h/the_data_centers_are_being_built_for_mass/o89br2u/ | false | 1 |
t1_o89bquy | Here is my launch config to serve it on my local network: ./build/bin/llama-server --model /mnt/data/GLM-4.7-Flash/GLM-4.7-Flash-Q4_K_M.gguf --alias glm-4.7-flash --jinja -c 64000 -ngl 99 --flash-attn auto --threads 6 --host 0.0.0.0 --port 8080 --temp 0.8 --top-p 0.9 | 1 | 0 | 2026-03-02T17:08:30 | Aromatic-Low-4578 | false | null | 0 | o89bquy | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o89bquy/ | false | 1 |
t1_o89blfd | That probably means `-ctk q8_0 -ctv q8_0` in llama.cpp, or any option mentioning "KV cache quantization" in other inference clients. The main point is to reduce the amount of data that needs to be kept in memory during calculation.
For a 5090, you have many more options than most of us here 😂. Maybe just look at the dense models from 27B and up, or MoE above 70B. | 1 | 0 | 2026-03-02T17:07:46 | bobaburger | false | null | 0 | o89blfd | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o89blfd/ | false | 1 |
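For anyone wanting to try the KV-cache quantization mentioned above, here is a minimal llama.cpp sketch; the model path and context size are placeholders, not anything the commenter actually ran:

```shell
# -ctk / -ctv set the cache type for attention keys and values.
# q8_0 roughly halves KV-cache memory versus the default f16,
# which is what frees room for longer contexts on small GPUs.
# Quantized V-cache generally requires flash attention enabled.
llama-server --model ./Qwen3.5-27B-UD-Q4_K_XL.gguf -c 32768 \
    --flash-attn auto -ctk q8_0 -ctv q8_0
```

The same `--cache-type-k`/`--cache-type-v` long forms work in llama-bench as well, if you want to measure the speed impact first.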
t1_o89bh2k | 20 tokens per second?
```
$ llama-bench -p 4096 -n 100 -fa 1 -b 2048 -ub 2048 -m Qwen3.5-27B-UD-Q4_K_XL.gguf
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
```
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| qwen35 ?B Q4_K - Medium | 15.57 GiB | 26.90 B | CUDA | 99 | 2048 | 1 | pp4096 | 1245.35 ± 4.52 |
| qwen35 ?B Q4_K - Medium | 15.57 GiB | 26.90 B | CUDA | 99 | 2048 | 1 | tg100 | 36.34 ± 0.04 | | 1 | 0 | 2026-03-02T17:07:10 | coder543 | false | null | 0 | o89bh2k | false | /r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o89bh2k/ | false | 1 |
t1_o89b7mi | If AI-hallucinated code compiles without errors, that does not mean the code is correct | 1 | 0 | 2026-03-02T17:05:54 | MelodicRecognition7 | false | null | 0 | o89b7mi | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89b7mi/ | false | 1 |
t1_o89b5h8 | Yeah, LM Studio has been super buggy; I keep trying it, then having to go back to llama.cpp. They probably need to stop vibe coding so much and start unit testing more, or at least open source the server parts so the community can fix it for them! 😂 | 1 | 0 | 2026-03-02T17:05:36 | doomdayx | false | null | 0 | o89b5h8 | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89b5h8/ | false | 1 |
t1_o89azid | When I asked in German, it told me it's ChatGPT. | 1 | 0 | 2026-03-02T17:04:48 | Black-Mack | false | null | 0 | o89azid | false | /r/LocalLLaMA/comments/1riy7cw/lmao/o89azid/ | false | 1 |
t1_o89azjm | which benchmark is this? link please | 1 | 0 | 2026-03-02T17:04:48 | Sea-Ad-9517 | false | null | 0 | o89azjm | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89azjm/ | false | 1 |
t1_o89ay2w | If something doesn't work, it won't get any better. I tried Qwen 3.5 27B at Q3, even from different quantization sources, but Q3 can barely write: it produces unnecessary, meaningless text and lines. It's unusable.
I'm currently downloading Qwen 3.5 9B at 8-bit. I'll compare it with GPT-OSS 20B MXFP4 (4-bit), and also with Qwen 3 14B and Gemma 3 12B. | 1 | 0 | 2026-03-02T17:04:36 | powerade-trader | false | null | 0 | o89ay2w | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89ay2w/ | false | 1 |
t1_o89axli | Do you guys support NPU? I've been trying to find an app that supports the NPU on my SD 8 Gen 3 to see how fast I could run the 4B model, but couldn't find any that support it. | 1 | 0 | 2026-03-02T17:04:32 | CucumberAccording813 | false | null | 0 | o89axli | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89axli/ | false | 1 |
t1_o89atk9 | lol | 1 | 0 | 2026-03-02T17:04:00 | MelodicRecognition7 | false | null | 0 | o89atk9 | false | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89atk9/ | false | 1 |
t1_o89at61 | The reality.
AI poisoning is trivially easy for the right person inside any lab, US, China, or otherwise. An individual could accomplish it without the team or company knowing. It is essentially impossible to detect unless you know the exact triggers and intended behaviors.
Preventing such adversarial tuning would require careful sourcing of every piece of training data, which is not cost-effective at the scale they are trying to operate at.
If you want a "truly safe" model, it does not exist. I'm not sure it can exist and still be anywhere near the frontier.
The best advice in the thread is to create an internal "performance certification" where a model is vetted for performance on the domain and tasks you need it for. This validates that the model is not poisoned for this use case. Make sure to include future dates within the intended usage window, and other potential internal triggers it may see while in use.
If you're simply fighting internal politics because "china bad", the only real way to fight this, and possibly get additional options, is education about what can and cannot happen with models. They cannot exfiltrate data without a tool and a network path.
The internal certification route is the best path for the validation you are looking for, and you should not trust US models either. AI teams are inherently multinational, leaning heavily toward China. And it only takes one. Nothing is "safe". | 1 | 0 | 2026-03-02T17:03:57 | MaybeOk4505 | false | null | 0 | o89at61 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o89at61/ | false | 1 |
t1_o89asig | That's the GP comment. About 15 tokens/s | 1 | 0 | 2026-03-02T17:03:52 | JollyJoker3 | false | null | 0 | o89asig | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89asig/ | false | 1 |
t1_o89ars3 | I know. I've been playing with it. I feel that it's not the huge leap over Qwen3-4B I was hoping for. | 1 | 0 | 2026-03-02T17:03:46 | Iory1998 | false | null | 0 | o89ars3 | false | /r/LocalLLaMA/comments/1naqln5/how_is_qwen3_4b_this_good/o89ars3/ | false | 1 |
t1_o89anuf | It's more of a remote-processing setup; your chats are not synced/shared as they would be with a web UI server like OpenWebUI. You will still need an external system to sync your chats if you want that.
This is nothing more than a way to link your LM Studio frontend/GUI to a remote backend where all the AI processing is done remotely, then sent back. It's like an OpenAI API backend, but with tighter integration with the remote UI. | 1 | 0 | 2026-03-02T17:03:14 | CallumCarmicheal | false | null | 0 | o89anuf | false | /r/LocalLLaMA/comments/1rer60n/lm_link/o89anuf/ | false | 1 |