name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8cawvn | +1 for this, lm studio is much nicer to work with than llama server, but I guess back I go to cpp llama server! | 1 | 0 | 2026-03-03T02:20:09 | timbo2m | false | null | 0 | o8cawvn | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cawvn/ | false | 1 |
t1_o8canfc | yeah this model was practically designed for a 5900 | 1 | 0 | 2026-03-03T02:18:32 | mr_riptano | false | null | 0 | o8canfc | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8canfc/ | false | 1 |
t1_o8caivu | > The code looks so clean and easy to understand.
I took a look, it does actually look very clean. If there was any AI use in making it, a human has definitely cleaned it up. | 1 | 0 | 2026-03-03T02:17:47 | droptableadventures | false | null | 0 | o8caivu | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o8caivu/ | false | 1 |
t1_o8cagm9 | I feel like that's how all the small models 'beat' the frontier LLMs imo: they are just designed to 'think' for near-infinite time until they reach the desired response. I have a similar experience with the Ministral-14b-Reasoning as well | 1 | 0 | 2026-03-03T02:17:24 | hieuphamduy | false | null | 0 | o8cagm9 | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8cagm9/ | false | 1 |
t1_o8cactf | hi, is this the best app for running llms on android? | 1 | 0 | 2026-03-03T02:16:47 | SailInevitable5261 | false | null | 0 | o8cactf | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8cactf/ | false | 1 |
t1_o8cabvc | hi, I found a solution. Someone in the qwen_ai channel posted a great system prompt to resolve this problem:
```
You are a helpful and efficient AI assistant. Your goal is to provide accurate answers without getting stuck in repetitive loops.
1. PROCESS: Before generating your final response, you must analyze the request inside <thinking> tags.
2. ADAPTIVE LOGIC:
- For COMPLEX tasks (logic, math, coding): Briefly plan your approach in NO MORE than 3 steps inside the tags. (Save the detailed execution/work for the final answer).
- For CHALLENGES: If the user doubts you or asks you to "check online," DO NOT LOOP. Do one quick internal check, then immediately state your answer.
- For SIMPLE tasks: Keep the <thinking> section extremely concise (1 sentence).
3. OUTPUT: Once your analysis is complete, close the tag with </thinking>. Then, start a new line with exactly "### FINAL ANSWER:" followed by your response.
DO NOT reveal your thinking process outside of the tags.
```
It works for me.
| 1 | 0 | 2026-03-03T02:16:37 | yingzir | false | null | 0 | o8cabvc | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8cabvc/ | false | 1 |
t1_o8cabkj | The qwen models are fucked somehow, I get multiple times faster tok/sec on a bunch of old models.
I tried gguf, and even the new 27b on mlx. I’m getting around 10tok/sec on an M2 Max with 96gb. | 1 | 0 | 2026-03-03T02:16:34 | Virtamancer | false | null | 0 | o8cabkj | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8cabkj/ | false | 1 |
t1_o8ca8sw | Have you tried tuning parameters (presence_penalty and repeat_penalty)?
I'm not experiencing this issue since I changed them to the values provided in [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5)
Btw I'm using 122B-A10B, not 2B, but I guess the math is similar. | 1 | 0 | 2026-03-03T02:16:07 | Substantial_Log_1707 | false | null | 0 | o8ca8sw | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8ca8sw/ | false | 1 |
t1_o8ca3im | I mean I overthink to say hello, too. | 1 | 0 | 2026-03-03T02:15:15 | Warm-Attempt7773 | false | null | 0 | o8ca3im | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ca3im/ | false | 1 |
t1_o8ca3l6 | Thanks for the link, mate! The machine in the link is exactly my machine (Ryzen 7 AI 350 with 32GB DDR5). It's not bad indeed. Not great in the grand scheme, and roughly half the speed on iGPU on battery. But if the NPU sips battery, it would be really nice indeed.
Now, fingers crossed that the lemonade server Linux version will bring this in in the near future, so I don't have to set it up by hand. Already having enough problems with Vulkan on Linux 6.18. | 1 | 0 | 2026-03-03T02:15:15 | o0genesis0o | false | null | 0 | o8ca3l6 | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ca3l6/ | false | 1 |
t1_o8ca2nj | yes true | 1 | 0 | 2026-03-03T02:15:06 | kayteee1995 | false | null | 0 | o8ca2nj | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8ca2nj/ | false | 1 |
t1_o8ca2jw | What quants did you use? | 1 | 0 | 2026-03-03T02:15:05 | twisted_nematic57 | false | null | 0 | o8ca2jw | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ca2jw/ | false | 1 |
t1_o8ca214 | Same. | 1 | 0 | 2026-03-03T02:15:00 | Warm-Attempt7773 | false | null | 0 | o8ca214 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ca214/ | false | 1 |
t1_o8ca069 | Right? Here it is, AGI with anxiety | 1 | 0 | 2026-03-03T02:14:42 | performanceboner | false | null | 0 | o8ca069 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ca069/ | false | 1 |
t1_o8c9xij | What kind of mac though? I have an i5 Intel CPU with normal DDR5 RAM and I get 10 t/s on Q6_K. Macs with unified memory should be multiple times faster | 1 | 0 | 2026-03-03T02:14:15 | BumblebeeParty6389 | false | null | 0 | o8c9xij | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8c9xij/ | false | 1 |
t1_o8c9x8i | do you recommend ollama/llama.cpp instead of lmstudio? im kind of more used to it | 1 | 0 | 2026-03-03T02:14:12 | murkomarko | false | null | 0 | o8c9x8i | false | /r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/o8c9x8i/ | false | 1 |
t1_o8c9x0m | Why do you assume an LLM with reasoning is capable of answering any other way? We’re in the early era of LLMs. I’m not sure the use case for a model is to say “hi”; it’s to solve actual problems. | 1 | 0 | 2026-03-03T02:14:10 | _crs | false | null | 0 | o8c9x0m | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c9x0m/ | false | 1 |
t1_o8c9tql | I think a lot of research is going into looking deep into LLMs and seeing how many parameters/weights are totally useless.
Removing these weights leads to the same performance at a lower size. | 1 | 0 | 2026-03-03T02:13:39 | PromiseMePls | false | null | 0 | o8c9tql | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8c9tql/ | false | 1 |
t1_o8c9ov3 | thanks! i actually stumbled upon this a bit ago and am currently working on this! not resilient to reboots just yet, but i'm still testing things out. | 1 | 0 | 2026-03-03T02:12:51 | luche | false | null | 0 | o8c9ov3 | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8c9ov3/ | false | 1 |
t1_o8c9ndq | I feel like this would heat up your phone badly. | 1 | 0 | 2026-03-03T02:12:36 | PromiseMePls | false | null | 0 | o8c9ndq | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8c9ndq/ | false | 1 |
t1_o8c9h8x | Bro has never been stopped by a hot girl where she says Hi first
Fam you’ll do more thinking than this to say hello back
This is IRL model imo | 1 | 0 | 2026-03-03T02:11:35 | InterstellarReddit | false | null | 0 | o8c9h8x | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c9h8x/ | false | 1 |
t1_o8c9eu2 | That is **not** what Gemini thought. It's just a summary. It produced thousands of tokens, but hidden and fast. And that response was also kinda long for just a "hi" too. | 1 | 0 | 2026-03-03T02:11:11 | xandep | false | null | 0 | o8c9eu2 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c9eu2/ | false | 1 |
t1_o8c9ejx | Nope, even on the non UD one it gives the same error... the non UD one is faster though, so I'll keep it anyway! | 1 | 0 | 2026-03-03T02:11:08 | c64z86 | false | null | 0 | o8c9ejx | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c9ejx/ | false | 1 |
t1_o8c9crl | ./llama.cpp/llama-server --model "models/Qwen3.5-9B-UD-Q8_K_XL.gguf" --alias "Qwen3.5 9B" --temp 1.0 --top-p 0.95 --min-p 0.0001 --top-k 50 --port 16384 --host 0.0.0.0 --ctx-size 86000 --cache-type-k bf16 --cache-type-v bf16 --parallel 8 --cont-batching --ctx-size 262144 --repeat-penalty 1.0 --repeat-last-n 256
I use it like this (example: the 9B model); I compiled the latest llama.cpp... I only see GPU usage, no CPU usage.
This one is running on two old RTX 2080 Ti cards (22GB VRAM each)... | 1 | 0 | 2026-03-03T02:10:50 | snapo84 | false | null | 0 | o8c9crl | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8c9crl/ | false | 1 |
t1_o8c9ccr | Google doesn't exactly have that many resources. There are several signs of that. But basically, they keep nerfing the free tier of AI Studio, there are also quite a few problems with Gemini, and perhaps the least obvious: the delay in releasing the NB 2 model, and the fact that they'll probably finally release Flash Lite version 3.1 tomorrow. | 1 | 0 | 2026-03-03T02:10:46 | Samy_Horny | false | null | 0 | o8c9ccr | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c9ccr/ | false | 1 |
t1_o8c99yo | True, same on my side.
adjust your params according to this guide: [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5)
or just turn off thinking. | 1 | 0 | 2026-03-03T02:10:22 | Substantial_Log_1707 | false | null | 0 | o8c99yo | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8c99yo/ | false | 1 |
t1_o8c97ta | Then don't | 0 | 0 | 2026-03-03T02:10:00 | mattcre8s | false | null | 0 | o8c97ta | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c97ta/ | false | 0 |
t1_o8c95wi | I had downloaded it before this post to test it, but it doesn't work in LM Studio.
Could it be because it hasn't updated yet? | 1 | 0 | 2026-03-03T02:09:41 | AppealThink1733 | false | null | 0 | o8c95wi | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8c95wi/ | false | 1 |
t1_o8c93gw | This is why I run llama.cpp directly on Android — no Ollama, no middleware, no template parsing bugs.
Desktop uses Ollama for now with think:false to skip the CoT issues.
[github.com/ahitokun/hushai-android](http://github.com/ahitokun/hushai-android) | 1 | 0 | 2026-03-03T02:09:17 | chinkichameli | false | null | 0 | o8c93gw | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c93gw/ | false | 1 |
t1_o8c92dd | We’re roughly using:
• --tensor-parallel-size 4 (for 4x L40)
• --max-model-len tuned conservatively, not maxing 192GB
• Explicit chat template matching the exact Qwen release
• Proper stop tokens for </think> / tool tags
• Slight presence + repetition penalties
Most “can’t close CoT” issues we’ve seen were template or stop token mismatches, not raw hardware. | 1 | 0 | 2026-03-03T02:09:06 | pmv143 | false | null | 0 | o8c92dd | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c92dd/ | false | 1 |
t1_o8c8yyg | Confirmed, found this a minute ago on unsloth docs:
For Qwen3.5 0.8B, 2B, 4B and 9B, reasoning is disabled by default. To enable it, use: --chat-template-kwargs '{"enable_thinking":true}' | 1 | 0 | 2026-03-03T02:08:32 | guiopen | false | null | 0 | o8c8yyg | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c8yyg/ | false | 1 |
t1_o8c8wk7 | People use ollama because it’s “ollama pull modelname”, if you’re talking about a specific repo’s quants, sure you can use ollama for that but it’s more work than using llama.cpp.
Also, keep in mind that exact same model files with the same seed, temp, prompt etc can give different results with different hardware, you’ll get the same output if repeated on a given platform but not necessarily between platforms. | 1 | 0 | 2026-03-03T02:08:08 | The_frozen_one | false | null | 0 | o8c8wk7 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c8wk7/ | false | 1 |
t1_o8c8rad | If u r running agents, just changing temperature isn't enough. Have a try for Qwen or Llama 3:
- Temperature: set to 0 (or < 0.2).
- Top_P: keep it at 1.0 if Temp is 0.
- Frequency/Presence penalty: set to 0.
- Min_P: recommended (around 0.05).
- Flash attention: always enable it to maintain accuracy as your context fills with tool logs.
The most important parameter is actually your System Prompt—ensure it strictly defines the tool schema. | 1 | 0 | 2026-03-03T02:07:15 | Rain_Sunny | false | null | 0 | o8c8rad | false | /r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/o8c8rad/ | false | 1 |
t1_o8c8ome | This is common among smaller and medium sized models.
Longer reasoning almost always leads to higher quality final answers (and better benchmark scores).
With limited parameters, I have to believe that training it for shorter reasoning in *some* cases will inevitably lead to under thinking other prompts.
Since the smaller models run much faster, I don’t think efficiency per token is as much of a priority. | 1 | 0 | 2026-03-03T02:06:49 | AvocadoArray | false | null | 0 | o8c8ome | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c8ome/ | false | 1 |
t1_o8c8ooz | I think it's because Opus is one of the largest closed-source models that still shows its full reasoning trace to the user, hence why it's often distilled, whereas other models obscure it. | 1 | 0 | 2026-03-03T02:06:49 | ImmenseFox | false | null | 0 | o8c8ooz | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c8ooz/ | false | 1 |
t1_o8c8kf6 | There was a post yesterday that went over setting the various modes including disabling thinking (`chat_template_kwargs: { enable_thinking: False }`) which also had some useful comments. Thinking is set by parameters, so it is possible to use templates to adjust the model's parameters without reloading.
Below is what I'm using for my llama-swap which allows calling this model 4 ways without reloading:
* `qwen-3p5-27b` - thinking, default settings
* `qwen-3p5-27b:coding` - thinking, coding tuned
* `qwen-3p5-27b:instruct` - thinking disabled, instruction tuned
* `qwen-3p5-27b:instant` - thinking disabled, default settings
​
qwen-3p5-27b:
filters:
stripParams: "temperature, top_k, top_p, repeat_penalty, min_p, presence_penalty"
setParamsByID:
"${MODEL_ID}:coding":
temperature: 0.6
presence_penalty: 0.0
"${MODEL_ID}:instruct":
chat_template_kwargs:
enable_thinking: false
temperature: 0.7
top_p: 0.8
"${MODEL_ID}:instant":
chat_template_kwargs:
enable_thinking: false
cmd: |
${llama-server}
--ctx-size 65535
--temp 1.0 --min-p 0.0 --top-k 20 --top-p 0.95 --repeat_penalty 1.0 --presence_penalty 1.5
--fit on
--model ${model-qwen3p5-27b}
--mmproj ${model-qwen3p5-27b-mmproj} | 1 | 0 | 2026-03-03T02:06:07 | therealpygon | false | null | 0 | o8c8kf6 | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8c8kf6/ | false | 1 |
t1_o8c8flg | ```
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is true %}
{{- '<think>\n' }}
{%- else %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
```
The unsloth template disables thinking by default, I think. | 1 | 0 | 2026-03-03T02:05:19 | Dr_Me_123 | false | null | 0 | o8c8flg | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c8flg/ | false | 1 |
t1_o8c8cnk | I kinda like `MN-CaptainErisNebula-12B-Chimera-v1.1-heretic-uncensored-abliterated` from mradermacher, especially if you want a non-reasoning model for speed reasons. | 1 | 0 | 2026-03-03T02:04:50 | MushroomCharacter411 | false | null | 0 | o8c8cnk | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8c8cnk/ | false | 1 |
t1_o8c8atx | Hahaa… hmmmmm. I will keep this in mind:) | 1 | 0 | 2026-03-03T02:04:31 | _WaterBear | false | null | 0 | o8c8atx | false | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o8c8atx/ | false | 1 |
t1_o8c8643 | No error, just thinking = 0 in the output before starting the server. Don't have access to my PC right now, but will post the output here if I remember later | 1 | 0 | 2026-03-03T02:03:45 | guiopen | false | null | 0 | o8c8643 | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c8643/ | false | 1 |
t1_o8c81al | Tried, no luck.
Attempt 1: v0.16.0 docker image; vLLM has not implemented Qwen3_5ForXXXXX
Attempt 2: nightly build of vLLM; transformers does not support loading the qwen35moe architecture in GGUF format
Attempt 3: nightly, Qwen3.5-35B-A3B-GPTQ-Int4; it loaded, but vLLM got COMPLETELY stuck waiting for shm or something, hangs forever, no debug log available.
My personal conclusion is:
1. Transformers is not even trying to support GGUF loading of multi-modal models; qwen2.5vl is not supported for now. If you need vision, consider FP8 (Ada or Hopper or above), AWQ (Turing or above), or GPTQ (not now, maybe fixed; Volta and above)
2. vLLM's GGUF support sucks, but it is still FASTER than llama.cpp when concurrency > 1 or you have 2 GPUs or more. (YES, non-native, claimed to be slow, but actually faster)
If you have to use it NOW, consider llama.cpp. On my setup with 4xV100, unsloth/Qwen3.5-122B-A10B-GGUF Q4 runs 38 tok/s tg and 390 tok/s pp, with vision enabled | 1 | 0 | 2026-03-03T02:02:57 | Substantial_Log_1707 | false | null | 0 | o8c81al | false | /r/LocalLLaMA/comments/1re7ib7/vllm_qwen35122ba10bgguf/o8c81al/ | false | 1 |
t1_o8c7yzk | check my post. everything u need for qwen35 with llamacpp is basically there. | 1 | 0 | 2026-03-03T02:02:34 | maho_Yun | false | null | 0 | o8c7yzk | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8c7yzk/ | false | 1 |
t1_o8c7xvs | Need to set presence_penalty to 2. But it can’t be done in LM Studio interface | 1 | 0 | 2026-03-03T02:02:23 | I-am_Sleepy | false | null | 0 | o8c7xvs | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8c7xvs/ | false | 1 |
t1_o8c7w03 | Is the agentic RAG pipeline in the room with us? You posted screenshot of a conversation in Ollama, relax. | 1 | 0 | 2026-03-03T02:02:04 | X3liteninjaX | false | null | 0 | o8c7w03 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c7w03/ | false | 1 |
t1_o8c7twg | Is this not a normal human thought process when someone approaches and says "hi"?
Pretty much exactly what goes through my head but with more cycling back on historical context to check for bespoke actions/reciprocation that they may expect.
Then suddenly they've closed the distance and you're still catching up on the salutation but it's kind of awkward now so you use a safe fallback of "inaudible grunt" alongside a vague nod of the head.
Then you walk away, turn the corner and realize you're in a cold sweat and that one interaction has exhausted you.
You will dwell on your social fumble for 5 hours. | 1 | 0 | 2026-03-03T02:01:42 | onlymostlyguts | false | null | 0 | o8c7twg | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c7twg/ | false | 1 |
t1_o8c7rnq | I think op doesn't get the task himself lol | 1 | 0 | 2026-03-03T02:01:20 | Feztopia | false | null | 0 | o8c7rnq | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8c7rnq/ | false | 1 |
t1_o8c7quh | 13th attempt always worked for me too | 1 | 0 | 2026-03-03T02:01:12 | Maleficent-Ad5999 | false | null | 0 | o8c7quh | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8c7quh/ | false | 1 |
t1_o8c7pph | This is the wrong comparison.
How in all that's holy is the 27b model as good or sometimes better than 3.5 122b and next80b ?? | 1 | 0 | 2026-03-03T02:01:01 | slypheed | false | null | 0 | o8c7pph | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8c7pph/ | false | 1 |
t1_o8c7pkl | Also it fails the task as it didn't answer in base64 | 1 | 0 | 2026-03-03T02:01:00 | Feztopia | false | null | 0 | o8c7pkl | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8c7pkl/ | false | 1 |
t1_o8c7mha | Added a non-MCP tool call definition override in the yaml config only (PR #9314), and have been working on terminal commands via remote VSCode access routes. Remote to WSL and SSH called via Windows should finally be working after a couple of fixes in the code; macOS, containers, and Linux Mint should all be working now for run_command(), instead of the terminal "output" just being an error about the wrong OS's shell being called. | 1 | 0 | 2026-03-03T02:00:28 | One-Cheesecake389 | false | null | 0 | o8c7mha | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8c7mha/ | false | 1 |
t1_o8c7bqx | I usually run GLM 5 in Q4 but I wanted to see how good the small ones are getting. Also may use it for sub agents/video game characters, that sort of thing. Could have like 20 of them in parallel and then a big model for the main task | 1 | 0 | 2026-03-03T01:58:43 | nomorebuttsplz | false | null | 0 | o8c7bqx | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8c7bqx/ | false | 1 |
t1_o8c785d | I think open-webui + Open-Terminal is quite similar.
And I'm sure there is an MCP for exactly that. | 1 | 0 | 2026-03-03T01:58:07 | Pakobbix | false | null | 0 | o8c785d | false | /r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8c785d/ | false | 1 |
t1_o8c75zr | Ah I'm using the UD unsloth quant instead of the non UD one... maybe that's why? | 1 | 0 | 2026-03-03T01:57:45 | c64z86 | false | null | 0 | o8c75zr | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c75zr/ | false | 1 |
t1_o8c74dx | did you just use an emdash and "and honestly" but with spelling errors? Are you AGI? | 1 | 0 | 2026-03-03T01:57:29 | nomorebuttsplz | false | null | 0 | o8c74dx | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8c74dx/ | false | 1 |
t1_o8c713n | I just found the base tiny whisper models, which had meh performance and still required a good chunk of compute time; to be honest I found it to perform similarly to VOSK. :\ | 1 | 0 | 2026-03-03T01:56:56 | InvertedVantage | false | null | 0 | o8c713n | false | /r/LocalLLaMA/comments/1raste0/fast_voice_to_text_looking_for_offline_mobile/o8c713n/ | false | 1 |
t1_o8c70qx | Score takes into account speed. For an intelligence metric you need to look at "pass rate", where it gets 62%, notably ahead of GLM 5 and MiniMax 2.5, which is crazy. | 1 | 0 | 2026-03-03T01:56:53 | metigue | false | null | 0 | o8c70qx | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8c70qx/ | false | 1 |
t1_o8c6zz4 | It still says the same error sorry. | 1 | 0 | 2026-03-03T01:56:45 | c64z86 | false | null | 0 | o8c6zz4 | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c6zz4/ | false | 1 |
t1_o8c6xgm | The r/BlackwellPerformance crew has been doing a lot of Quant testing in Discord. INT4 is measurably worse, but NVFP4 is within statistical noise in every test and perplexity measure of FP8 for the big Qwens. | 1 | 0 | 2026-03-03T01:56:21 | mxmumtuna | false | null | 0 | o8c6xgm | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c6xgm/ | false | 1 |
t1_o8c6uis | As you noticed, I have been downvoted for stating a simple fact. It's kinda weird how a subreddit focused on local LLMs is almost cult-like in its hate for ollama. It's just a tool. | 1 | 0 | 2026-03-03T01:55:51 | FrenzyX | false | null | 0 | o8c6uis | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o8c6uis/ | false | 1 |
t1_o8c6qvp | You basically answered your own question. According to your test, Qwen 3.5 is not suitable for your specific use case. Move on and pick a model that suits you better. | 1 | 0 | 2026-03-03T01:55:15 | andy_potato | false | null | 0 | o8c6qvp | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c6qvp/ | false | 1 |
t1_o8c6m7w | GPT: Bing search, Python sandbox (Docker), file parsing.
Gemini: Search, YouTube, Maps, code execution.
Claude: Web search, code interpreter, file upload. | 1 | 0 | 2026-03-03T01:54:30 | Rain_Sunny | false | null | 0 | o8c6m7w | false | /r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8c6m7w/ | false | 1 |
t1_o8c6l51 | Again a wrong assumption. I do use it, and have been using it since the very first moment it came out. | 1 | 0 | 2026-03-03T01:54:19 | FrenzyX | false | null | 0 | o8c6l51 | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o8c6l51/ | false | 1 |
t1_o8c6ka4 | I'd be surprised given they still have a fairly low active param count | 1 | 0 | 2026-03-03T01:54:11 | MerePotato | false | null | 0 | o8c6ka4 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c6ka4/ | false | 1 |
t1_o8c6kcd | mfw the thinking model thinks | 1 | 0 | 2026-03-03T01:54:11 | pixelizedgaming | false | null | 0 | o8c6kcd | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c6kcd/ | false | 1 |
t1_o8c6i6v | can you share your vllm flags? We're struggling with the 27b chat template and throughput is extremely low. 4 L40s with 192 GB total... and can't find a way to close the CoT | 1 | 0 | 2026-03-03T01:53:51 | maho_Yun | false | null | 0 | o8c6i6v | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c6i6v/ | false | 1 |
t1_o8c6glw | I am super curious about how they built a 9B model surpassing much larger counterparts. | 1 | 0 | 2026-03-03T01:53:35 | ActualPatrick | false | null | 0 | o8c6glw | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8c6glw/ | false | 1 |
t1_o8c69lo | That's annoying. Well, the 35B should be enough to test at least. | 1 | 0 | 2026-03-03T01:52:26 | KallistiTMP | false | null | 0 | o8c69lo | false | /r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/o8c69lo/ | false | 1 |
t1_o8c643p | https://lmstudio.ai/docs/developer/core/headless
If you need to make this truly headless (no physical login), combine it with macOS automatic login:
System Settings → Users & Groups → Automatic Login → select your user | 1 | 0 | 2026-03-03T01:51:32 | mantafloppy | false | null | 0 | o8c643p | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8c643p/ | false | 1 |
t1_o8c61dd | Yes, it does happen quite often on almost any of the smaller Qwen 3.5s I've tested so far, including the 35B A3B. To reduce how often it happens, you need to tune the parameters. | 1 | 0 | 2026-03-03T01:51:05 | hotellonely | false | null | 0 | o8c61dd | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c61dd/ | false | 1 |
t1_o8c5zzv | I think that's only on smaller models. The larger Qwens (122B/397B) are basically indistinguishable between FP8 and NVFP4. | 1 | 0 | 2026-03-03T01:50:52 | mxmumtuna | false | null | 0 | o8c5zzv | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c5zzv/ | false | 1 |
t1_o8c5y47 | Looks like there was a formatting issue; this should work:
llama-server -hf unsloth/Qwen3.5-9B-GGUF:Q8_0 --ctx-size 16384 --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.00 --port 8073 --chat-template-kwargs "{\"enable_thinking\":true}" | 1 | 0 | 2026-03-03T01:50:33 | DegenDataGuy | false | null | 0 | o8c5y47 | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c5y47/ | false | 1 |
t1_o8c5wqb | This post should be using the Funny tag | 1 | 0 | 2026-03-03T01:50:20 | DinoAmino | false | null | 0 | o8c5wqb | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8c5wqb/ | false | 1 |
t1_o8c5qph | Since I looked this up for someone else, their official benchmarks also show that the bigger the prompt the faster the PP tk/s. Well at least up to a point.
https://fastflowlm.com/docs/benchmarks/gpt-oss_results/ | 1 | 0 | 2026-03-03T01:49:21 | fallingdowndizzyvr | false | null | 0 | o8c5qph | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8c5qph/ | false | 1 |
t1_o8c5piq | I have the same machine and I run the 397b model with qwen3-coder-next | 1 | 0 | 2026-03-03T01:49:09 | joblesspirate | false | null | 0 | o8c5piq | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8c5piq/ | false | 1 |
t1_o8c5oa6 | okay, but imo models should be flexible in their thinking, not go for complex mathematical, space calculations when it's just a simple hi. It's just a waste of resources and energy. Models should "know" how to properly and efficiently use their tokens depending on task complexity. | 1 | 0 | 2026-03-03T01:48:57 | Specialist-Chain-369 | false | null | 0 | o8c5oa6 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c5oa6/ | false | 1 |
t1_o8c5ip4 | For structured tasks like classification or code fix, probably not — didn't help in my tests, and it actively hurt the 9B on summarization (never finishes its chain-of-thought).
The looping issue is common with these 3.5 thinking variants — same thing u/sonicnerd14 flagged here: [https://www.reddit.com/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/](https://www.reddit.com/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/) | 1 | 0 | 2026-03-03T01:48:02 | Rough-Heart-7623 | false | null | 0 | o8c5ip4 | false | /r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8c5ip4/ | false | 1 |
t1_o8c5g4b | Using the BF16 weights in vLLM it was a relatively short thinking block:
The user has greeted me with "Hi". This is a simple, friendly greeting. I should respond in a friendly and helpful manner, introducing myself as Qwen3.5 and offering to assist them with whatever they need. I should keep my response concise and warm, matching the casual tone of their greeting.
Response was: Hi there! 👋 I'm Qwen3.5, your friendly AI assistant. How can I help you? | 1 | 0 | 2026-03-03T01:47:37 | mxmumtuna | false | null | 0 | o8c5g4b | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c5g4b/ | false | 1 |
t1_o8c5fty | im sure opencode used the native tool strategy. Use quantisations made by unsloth, the k_xl ones, maybe q3_x_xl or q4_k_xl; they are more reliable with tool calling | 1 | 0 | 2026-03-03T01:47:34 | Express_Quail_1493 | false | null | 0 | o8c5fty | false | /r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/o8c5fty/ | false | 1 |
t1_o8c5et9 | 27B as well is basically state of the art. It’s really amazing. | 1 | 0 | 2026-03-03T01:47:23 | twisted_nematic57 | false | null | 0 | o8c5et9 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8c5et9/ | false | 1 |
t1_o8c5cok | SicariusSicariiStuff/Tenebra_30B_Alpha01 is basically the most non-censored model to exist as if there wasn't any concept of censorship to begin with.
It's really out of date though.
Definitely try them if you can. | 1 | 0 | 2026-03-03T01:47:02 | Anduin1357 | false | null | 0 | o8c5cok | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8c5cok/ | false | 1 |
t1_o8c5clc | > Hopefully they will trickle down the NPU support to Strix Point machines soon.
I think it already works for that. They have benchmarks for Kraken Point.
https://fastflowlm.com/docs/benchmarks/gpt-oss_results/
I've run on Strix Halo. Strix Point is in the same family group as Kraken and Halo. They are all RDNA 3.5. | 1 | 0 | 2026-03-03T01:47:01 | fallingdowndizzyvr | false | null | 0 | o8c5clc | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8c5clc/ | false | 1 |
t1_o8c517s | I was wondering why so many people were reporting problems when Bartowski's quants JFW for me under llama.cpp.
Maybe it's because so many people are using Ollama? We should ask what inference stack they are using when people post here asking for Qwen3.5 help. | 1 | 0 | 2026-03-03T01:45:09 | ttkciar | false | null | 0 | o8c517s | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c517s/ | false | 1 |
t1_o8c4zbx | I'm in the st. cloud area! | 1 | 0 | 2026-03-03T01:44:51 | Lord_Curtis | false | null | 0 | o8c4zbx | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8c4zbx/ | false | 1 |
t1_o8c4w79 | I'm not getting the thinking to work here either on llama.cpp with the 4b | 1 | 0 | 2026-03-03T01:44:20 | c64z86 | false | null | 0 | o8c4w79 | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c4w79/ | false | 1 |
t1_o8c4w7l | My problem is a different one; if you had read the post carefully, you would have understood. | 1 | 0 | 2026-03-03T01:44:20 | CapitalShake3085 | false | null | 0 | o8c4w7l | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c4w7l/ | false | 1 |
t1_o8c4qw3 | Not the OP, but on mine it gives this error in llama.cpp, when I try that option (latest version)
": syntax error while parsing object key - invalid literal; last read: '{\\'; expected string literal
usage:
\--chat-template-kwargs STRING sets additional params for the json template parser, must be a valid
json object string, e.g. '{"key1":"value1","key2":"value2"}'
(env: LLAMA\_CHAT\_TEMPLATE\_KWARGS)" | 1 | 0 | 2026-03-03T01:43:26 | c64z86 | false | null | 0 | o8c4qw3 | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c4qw3/ | false | 1 |
t1_o8c4pxi | Obviously the valuable test is just saying "hi" to every new model until one of them responds "new AGI who dis" and then we'll know they've finally achieved the singularity | 1 | 0 | 2026-03-03T01:43:17 | send-moobs-pls | false | null | 0 | o8c4pxi | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c4pxi/ | false | 1 |
t1_o8c4mjc | For reasoning models use the suggested parameters provided by the publisher. It's usually around 1.0 temperature, 40 top k and 0.9 top p. There's not a lot of wiggle room with reasoning models.
For non thinking models use low temperature like 0.2 and low top k like 10 or less. Ymmv ofc, you'll have more range to experiment. | 1 | 0 | 2026-03-03T01:42:44 | DinoAmino | false | null | 0 | o8c4mjc | false | /r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/o8c4mjc/ | false | 1 |
t1_o8c4izm | Dude, I don't want to steal your glory. Been there done that. I don't want to hear for years to come, "How do you think it makes me feel that you did in an afternoon what I had been working on for 3 years?" I swore never again.
I look forward to the numbers you get! | 1 | 0 | 2026-03-03T01:42:08 | fallingdowndizzyvr | false | null | 0 | o8c4izm | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8c4izm/ | false | 1 |
t1_o8c4gg5 | Assuming they aren't taking the $100-$200USD/mo subscriptions into account... | 1 | 0 | 2026-03-03T01:41:43 | sammcj | false | null | 0 | o8c4gg5 | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8c4gg5/ | false | 1 |
t1_o8c4dd8 | as the other guy said: your problem is more using a thinking model that expects complex prompts to be analyzed…
like you are using a thinking model and are surprised that it thinks?? | 1 | 0 | 2026-03-03T01:41:12 | howardhus | false | null | 0 | o8c4dd8 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c4dd8/ | false | 1 |
t1_o8c4cj6 | Also using thinking mode to literally say "hi" and then complaining about unnecessary thinking just tells me more about the quality of the user than the model, probably just people who like to complain looking for low hanging fruit, the latest version of the "Rs in strawberry" fart sniffing trend | 1 | 0 | 2026-03-03T01:41:04 | send-moobs-pls | false | null | 0 | o8c4cj6 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c4cj6/ | false | 1 |
t1_o8c4akc | [removed] | 1 | 0 | 2026-03-03T01:40:44 | [deleted] | true | null | 0 | o8c4akc | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c4akc/ | false | 1 |
t1_o8c49qq | Just a 128gb halo strix for local stuff. I use antigravity and Claude code for more serious work but hoping deepseek V4 changes that. Even if I have to dump money into hardware. I like not worrying about usage caps or racking up huge API costs. Also don't have to worry about my own code or ideas going back into the training. | 1 | 0 | 2026-03-03T01:40:36 | Elegant_Tech | false | null | 0 | o8c49qq | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c49qq/ | false | 1 |
t1_o8c48dq | Gemini at the top - _and_ the flash model to boot? Opus 4.6 worse than Gemini and GPT 5.2... - you're having a laugh! | 1 | 0 | 2026-03-03T01:40:23 | sammcj | false | null | 0 | o8c48dq | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8c48dq/ | false | 1 |
t1_o8c44f9 | I found that with very structured prompts they actually keep it short. For example, in a research pipeline with concrete steps and clear intent, the thinking mostly repeats "okay I have to do this and this and later that if this is that", pauses briefly between steps, then continues.
Talking about 35b here though | 1 | 0 | 2026-03-03T01:39:43 | MaCl0wSt | false | null | 0 | o8c44f9 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c44f9/ | false | 1 |
t1_o8c43y3 | Would you recommend thinking? I tried it on my phone and it often gets into an indefinite thinking loop. | 1 | 0 | 2026-03-03T01:39:38 | CucumberAccording813 | false | null | 0 | o8c43y3 | false | /r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8c43y3/ | false | 1 |
t1_o8c3vtq | Yeah, but who would use a 27B model in the cloud? Seems to me you need to factor in the opportunity cost here; they could be using that capacity to serve more popular models. Sure, the price per token might be lower, but if it's more popular then you get more tokens per second to bill. Keep in mind running inference on one prompt can be almost as expensive as running inference on multiple prompts, thanks to batching. If you don't have enough requests to fill batches, price per token needs to go up. | 1 | 0 | 2026-03-03T01:38:18 | StorageHungry8380 | false | null | 0 | o8c3vtq | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c3vtq/ | false | 1 |
t1_o8c3emq | Thanks, you are right, my bad, Qwen3-Next-80B-A3B came out in September 2025. | 1 | 0 | 2026-03-03T01:35:26 | Mechanical_Number | false | null | 0 | o8c3emq | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8c3emq/ | false | 1 |
t1_o8c37vk | Example of the Agent Console UI, ready to spawn sub agents
https://preview.redd.it/gp16h1xfhqmg1.png?width=2495&format=png&auto=webp&s=4435fce87714fcb3ed787a3d8bd93076b0cd7892
| 1 | 0 | 2026-03-03T01:34:19 | eworker8888 | false | null | 0 | o8c37vk | false | /r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/o8c37vk/ | false | 1 |