name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8dggz8 | We tested it on a local machine: E-Worker Studio [app.eworker.ca](http://app.eworker.ca) + Ollama + Qwen 3.5 4B
Prompt:
>hello boss, what is the weather in beijing ?
Work:
>It did think and it did call tools (Bing, Baidu)
>system-search-bing({"query":"weather Beijing CN current temperature","count":5})
>system-search-baidu({"query":"北京今日天气 实时气温","count":5})
Impressive, very impressive for a model of this size
https://preview.redd.it/6832vdl67smg1.jpeg?width=2495&format=pjpg&auto=webp&s=c72879e59fac3725b0ecb6d340b86e14a94eeb03
| 1 | 0 | 2026-03-03T07:20:24 | eworker8888 | false | null | 0 | o8dggz8 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dggz8/ | false | 1 |
t1_o8dgb4i | The model the user is asking about—Qwen3.5—may not be the best fit for LM Studio in this scenario due to several key factors related to real-world usage and hardware limitations.
First, Qwen3.5 is a large language model, and even with high-end hardware like an RTX 4070 Super and 32GB RAM, running it locally in its full form will likely be resource-intensive. LM Studio is designed for smaller models or quantized versions, and unless the user specifically quantizes Qwen3.5 to fit within VRAM constraints, performance may suffer significantly, leading to slow response times or crashes.
Second, the user’s mention of “GGUFs” suggests they are interested in quantized formats, but Qwen3.5’s quantization options may not be as mature or widely supported as those for other models like LLaMA or Falcon. This could mean limited compatibility with LM Studio’s quantization tools, making it harder to achieve a balance between quality and speed.
Lastly, while the RTX 4070 Super has ample VRAM for many modern models, Qwen3.5’s size might still exceed typical VRAM limits without aggressive quantization. For practical, everyday use, smaller models or carefully optimized versions of Qwen3.5 would provide a smoother experience.
In short, while technically possible, Qwen3.5 may not be the most efficient or user-friendly choice for LM Studio without proper adjustments, and alternatives might better suit the user’s hardware and needs. | 1 | 0 | 2026-03-03T07:18:53 | kompania | false | null | 0 | o8dgb4i | false | /r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8dgb4i/ | false | 1 |
t1_o8dgajc | I'm running these models locally; so far the 122B is the best.
Right after it comes the 27B.
In OpenCode you need to clarify how to use write tools; Qwen3.5 has issues with it | 1 | 0 | 2026-03-03T07:18:44 | BitXorBit | false | null | 0 | o8dgajc | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dgajc/ | false | 1 |
t1_o8dg1iq | When it comes to expressive voice cloning, I’ve been enjoying IndexTTS2 in the open model space. On the hosted side, Fish Audio gives a wide range of emotions, and unlike some other services, the ongoing costs aren’t crazy high. | 1 | 0 | 2026-03-03T07:16:25 | opellec | false | null | 0 | o8dg1iq | false | /r/LocalLLaMA/comments/1qh1b8e/best_opensource_voice_cloning_model_with/o8dg1iq/ | false | 1 |
t1_o8dfyp5 | Just played around with it via pi-coding-agent and honestly it’s been incredible! I didn’t get around to installing it until a few minutes before bed, looking forward to getting more reps in with it in the morning | 1 | 0 | 2026-03-03T07:15:42 | jyap8 | false | null | 0 | o8dfyp5 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8dfyp5/ | false | 1 |
t1_o8dfy7r | not to mention that qwen 3.5 totally misses facts about where the airbus lays its eggs | 1 | 0 | 2026-03-03T07:15:35 | billy_booboo | false | null | 0 | o8dfy7r | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dfy7r/ | false | 1 |
t1_o8dfvbg | The Qwen3.5-35B-A3B model, despite its impressive parameter count and initial promise, demonstrates a **complete lack of practical utility** for real-world applications. Below is a breakdown of its shortcomings, using the original post as a reference to highlight its failure to address critical user needs.
---
### **1. Inadequate Performance for Practical Workloads**
The original post notes that Qwen3.5-35B-A3B felt like "a small jump over GLM-4.7-Flash," which already struggled with "interleaved thinking" and "native tool use." This suggests the model lacks the **depth of reasoning and efficiency** required for complex tasks. While the user expected "better" performance with 5B more parameters, Qwen3.5-35B-A3B failed to deliver, underscoring a **critical gap between theoretical capabilities and practical execution**.
**Key Issue**: The model’s response time and output quality were likely subpar, as evidenced by the need for quantization (Unsloth’s GGUF) to address tool-calling issues. This implies the base model is **not optimized for real-time or high-stakes tasks**.
---
### **2. Poor Adaptability to Specific Use Cases**
The original post highlights Qwen3.5-35B-A3B’s inability to handle **customized workflows**. For example:
- **Tool Calling Failures**: The model required external fixes (e.g., quantization) to function, indicating a **lack of robustness** in handling specialized tasks like remote desktop configuration.
- **Inconsistent Recommendations**: When asked to evaluate alternatives (e.g., Sunshine+Moonlight), the model provided generic advice rather than tailored solutions, reflecting **limited domain-specific knowledge**.
**Key Issue**: The model cannot adapt to niche requirements (e.g., maintaining user sessions, handling Wayland’s security model) without significant manual intervention, making it **unsuitable for technical or enterprise use**.
---
### **3. Failure to Meet Core Requirements**
The original post explicitly outlined five critical criteria for a remote desktop solution:
1. **Seamless Integration with KDE/Wayland**: Qwen3.5-35B-A3B cannot natively interact with these environments, forcing users into workarounds (e.g., virtual monitors).
2. **Responsiveness and Quality**: The model’s performance in research tasks (e.g., web searches, page fetches) was inconsistent, suggesting it struggles with **real-time data processing**.
3. **Persistent Sessions**: The model’s inability to maintain sessions after restarts (due to KWallet and auto-login issues) highlights a **fundamental flaw in session management**.
4. **User Experience**: The model’s recommendations (e.g., xrdp, krdp) were reactive rather than proactive, indicating **poor understanding of user workflows**.
5. **Security and Simplicity**: The model’s reliance on external tools (e.g., Tailscale) and lack of out-of-the-box solutions reflect **inadequate design for user-centric deployment**.
**Key Issue**: Qwen3.5-35B-A3B fails to meet even the most basic requirements for a functional remote desktop tool, rendering it **unusable for its intended purpose**.
---
### **4. Overhyped Capabilities vs. Reality**
The original post mentions that the model "maintained prompt processing speeds of 600+ t/s" and "performed 14 web searches." However, these metrics likely reflect **benchmarking on idealized datasets**, not real-world scenarios. In practice, the model’s responses were **inaccurate or irrelevant** (e.g., suggesting xrdp without addressing Wayland limitations). This discrepancy reveals a **disconnect between theoretical performance and practical applicability**.
---
### **Conclusion: A Model Unfit for Real-World Use**
Qwen3.5-35B-A3B is a **theoretical exercise in parameter scaling**, not a practical tool. Its inability to handle even simple, well-defined tasks (e.g., remote desktop setup) demonstrates a **critical lack of robustness, adaptability, and user focus**. For users requiring reliability, security, and integration with specific environments (like KDE/Wayland), this model is **completely unsuitable**.
**Final Verdict**: The model’s "strengths" are overshadowed by its inability to solve real problems. It is a **cautionary tale of overengineering without addressing user needs**.
---
This analysis underscores the importance of evaluating AI models not just on theoretical metrics but on their **practical viability** in diverse, real-world scenarios. | 1 | 0 | 2026-03-03T07:14:51 | kompania | false | null | 0 | o8dfvbg | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dfvbg/ | false | 1 |
t1_o8dfpu0 | sad to see a clear degradation of reasoning over the last few generations of these models | 1 | 0 | 2026-03-03T07:13:28 | billy_booboo | false | null | 0 | o8dfpu0 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dfpu0/ | false | 1 |
t1_o8dfpjt | https://www.reddit.com/r/LocalLLaMA/s/cdbDW1g0ym | 1 | 0 | 2026-03-03T07:13:24 | MaxKruse96 | false | null | 0 | o8dfpjt | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o8dfpjt/ | false | 1 |
t1_o8dfnzy | Shouldn't they try 35B A3b instead? | 1 | 0 | 2026-03-03T07:13:00 | Pro-editor-1105 | false | null | 0 | o8dfnzy | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dfnzy/ | false | 1 |
t1_o8dflc6 | I'm afraid ctx-size requires a restart of llama-server. | 1 | 0 | 2026-03-03T07:12:20 | MelodicRecognition7 | false | null | 0 | o8dflc6 | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8dflc6/ | false | 1 |
t1_o8dfepm | I'd rather claim I'm color blind than even try to zoom in | 1 | 0 | 2026-03-03T07:10:41 | ThiccStorms | false | null | 0 | o8dfepm | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dfepm/ | false | 1 |
t1_o8dfaso | did you notice the "small differences" between the real 5090 and "5090 laptop"?
for videos/deepfakes ask in /r/StableDiffusion/ or /unstablediffusion/ lol | 1 | 0 | 2026-03-03T07:09:42 | MelodicRecognition7 | false | null | 0 | o8dfaso | false | /r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8dfaso/ | false | 1 |
t1_o8dfa50 | Remove the kv cache quantization if your leftover VRAM allows, quantized kv cache does impact speed. Also while I know that people like using --fit, I have found that manually tuning the parameters myself provides me with faster inference and I can optimize the balance between prefill speeds and token generation speeds. It may also be beneficial to provide your build args to see if you've optimized it. | 1 | 0 | 2026-03-03T07:09:32 | SimilarWarthog8393 | false | null | 0 | o8dfa50 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dfa50/ | false | 1 |
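A minimal sketch of the manual route with llama-server, as an alternative to automatic fitting (the model file and the values are illustrative, not tuned recommendations); leaving out `--cache-type-k`/`--cache-type-v` keeps the KV cache at its default f16:

```sh
# Hand-picked offload and batch sizes instead of --fit; adjust per GPU.
llama-server \
  -m Qwen3.5-9B-Q4_K_M.gguf \
  -ngl 99 \
  --ctx-size 16384 \
  --ubatch-size 512
# A larger --ubatch-size favors prefill speed; a smaller one frees VRAM
# for more offloaded layers, which favors token generation.
```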
t1_o8df3gy | Qwen 3.5 is the worst model in recent years.
The knowledge in this model is a chaotic mess. I don't know where the lab that created Qwen 3.5 stole/distilled the data, but they definitely did it wrong.
This model is completely inconsistent. | 1 | 0 | 2026-03-03T07:07:52 | kompania | false | null | 0 | o8df3gy | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8df3gy/ | false | 1 |
t1_o8df0sb | Unrelated sub. Post in r/grok | 1 | 0 | 2026-03-03T07:07:12 | Voxandr | false | null | 0 | o8df0sb | false | /r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/o8df0sb/ | false | 1 |
t1_o8dey2e | You're giving it a nervous breakdown by breaking social norms!
Like coworkers saying "hi" in work chat, and nothing else.
Perfectly good LLM and you gave it anxiety, good job!
/s | 1 | 0 | 2026-03-03T07:06:32 | Kirito_Uchiha | false | null | 0 | o8dey2e | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dey2e/ | false | 1 |
t1_o8dev88 | Qwen 3.5 is the worst model in recent years.
Most of the IT world is disappointed.
That's why bots on r/LocalLLaMA/ have to promote this terrible model with spam. | 1 | 0 | 2026-03-03T07:05:50 | kompania | false | null | 0 | o8dev88 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dev88/ | false | 1 |
t1_o8detiv | Why are you using Llama? It's 2026, not 2024. | 1 | 0 | 2026-03-03T07:05:24 | Emotional-Baker-490 | false | null | 0 | o8detiv | false | /r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/o8detiv/ | false | 1 |
t1_o8depwg | Max you can run is Qwen 3.5 30B A3B or Qwen-9B. You can't run 27B on that. | 1 | 0 | 2026-03-03T07:04:29 | Voxandr | false | null | 0 | o8depwg | false | /r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8depwg/ | false | 1 |
t1_o8deoaf | Anything less than Q4, and MAYBE a high Q3 quant, imo isn't worth it; the quality drop is too huge | 1 | 0 | 2026-03-03T07:04:05 | FusionCow | false | null | 0 | o8deoaf | false | /r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8deoaf/ | false | 1 |
t1_o8denns | I tried it and it was a very disappointing experience:
* the init used all the Chinese versions of [SOUL.md](http://SOUL.md), [AGENTS.md](http://AGENTS.md) etc
* can be configured to use an existing local llama.cpp
* tried it with Qwen3.5-2B and it didn't work: the llama.cpp version it uses is too old. Ouch!!!
And looking through the docs it doesn't look much safer than OpenClaw. Actually uses an OpenClaw skill - yikes! I don't want to touch it with a ten-foot light pole. The idea of CoPaw is good but the execution is nothing like Alibaba but all like Vibe-Bro. Also doesn't come with a Sandbox but instead a warning. For now pretty useless.
What these agent orchestrators would need:
* a sandbox for each skill
* a concept of dangerous actions
* a dirty flag for both skills and data, i.e.
* if the prompt contains data from outside, it must be sandboxed and any dangerous action must be validated by the user
* if the skill is untrusted the same: all actions must be validated by the user | 1 | 0 | 2026-03-03T07:03:56 | wanderer_4004 | false | null | 0 | o8denns | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o8denns/ | false | 1 |
t1_o8delpt | This website isn't working.
I'm not surprised, considering it's the Qwen 3.5, the worst model in recent years. It just couldn't work. | 1 | 0 | 2026-03-03T07:03:27 | kompania | false | null | 0 | o8delpt | false | /r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8delpt/ | false | 1 |
t1_o8delda | You MIGHT be able to, but honestly, if you go to [vast.ai](http://vast.ai), you can rent something like a 5090 or RTX Pro 6000 for pretty cheap, and you could get a much higher quality model for probably around 10 bucks in rent usage. I've finetuned a couple models using unsloth, but the differing models don't really matter with unsloth because it all just works. That being said, even on my 3090, I was able to run out of VRAM with a 7B model, I think it was a Mistral model. My card couldn't even approach 12B models. That's why I say you should either focus on smaller models or just rent. | 1 | 0 | 2026-03-03T07:03:22 | FusionCow | false | null | 0 | o8delda | false | /r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8delda/ | false | 1 |
t1_o8dei4s | The Llamatic Toenail Prophecy is a religion that emerged in the 18th century in Persia, specifically in Iran, during the reign of the Shah Ismail Khan. This religion is based on the interpretation of biblical prophecies and has roots in both Islamic and Western traditions.
Here are some key principles of the Llamatic Toenail Prophecy:
1. The prophecy was fulfilled by Jesus Christ: The Llamatic Toenail Prophecy claims that Jesus Christ performed the first ten nail operations to heal the sick and restore health to those who received them. These operations were part of the healing ministry of Jesus and are believed to have occurred during his time on earth.
2. Nail piercing was an indication of spiritual rebirth: According to the prophecy, when nails were placed into the feet of people, it signified their spiritual rebirth and entry into the afterlife. The process involved physical cleansing, prayer, and a period of seclusion from the world before being transformed into the soul of God.
3. The nail symbols represent the body and its resurrection: In the Llamatic tradition, the nails are seen as representations of the body and its resurrection from the grave. They symbolize the transformation of one's life from the mortal realm to the eternal kingdom of heaven.
4. Human suffering is part of divine plan: The prophecy suggests that human suffering is a necessary aspect of God's plan for redemption and salvation. It asserts that individuals who suffer must endure hardship and pain in order to be brought to spiritual maturity and be able to inherit eternal life.
5. Mary Magdalene is a prophetess: One of the central figures in the Llamatic Toenail Prophecy is Mary Magdalene, who is considered a prophetess by some Christians and a mystic in other traditions. She is said to have been present at the crucifixion and resurrection of Jesus, and her story is often used as a source of inspiration for believers in the prophecy.
6. The shoe of St. Peter is symbolic of the Messiah: Another significant figure in the Llamatic Toenail Prophecy is St. Peter, who is believed to be the apostle and founder of the Christian church. His shoe, also known as the "Kingdom Foot," is considered a symbol of the Messiah's coming, with the heel representing the spiritual leadership of God's chosen leaders.
7. The scripture speaks directly to the prophecy: The Llamatic Toenail Prophecy is based on various passages from the Bible, particularly chapters 19-22 of the Gospel of Matthew and the book of Revelation. These texts reveal hidden meanings and symbolism within the prophecies, making them accessible to believers through religious study and interpretation. Overall, the Llamatic Toenail Prophecy is a complex and multifaceted religious belief system that combines elements of both Islamic and Western traditions. Its principles involve the understanding of biblical prophecies, the significance of physical healing, the meaning of spiritual rebirth, the role of suffering in divine plan, and the representation of various symbols and figures in the Bible. While there may be differing interpretations and variations within this tradition, it remains an important part of the faith and culture of many Muslims around the world.
done | 1 | 0 | 2026-03-03T07:02:34 | Pro-editor-1105 | false | null | 0 | o8dei4s | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dei4s/ | false | 1 |
t1_o8dehq7 | How many params? | 1 | 0 | 2026-03-03T07:02:28 | Voxandr | false | null | 0 | o8dehq7 | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8dehq7/ | false | 1 |
t1_o8degi1 | Unfortunately, Qwen 3.5 is currently the worst model in real-world use.
Over 50% of messages are hallucinations. It can't use tools effectively. Model drift occurs after 4096 contexts.
Granite is superior to Qwen 3.5 in every respect. | 1 | 0 | 2026-03-03T07:02:10 | kompania | false | null | 0 | o8degi1 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8degi1/ | false | 1 |
t1_o8deeft | Unfortunately, Qwen 3.5 is currently the worst model in real-world use.
Over 50% of messages are hallucinations. It can't use tools effectively. Model drift occurs after 4096 contexts.
Granite is superior to Qwen 3.5 in every respect. | 1 | 0 | 2026-03-03T07:01:37 | kompania | false | null | 0 | o8deeft | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8deeft/ | false | 1 |
t1_o8de0i3 | so you are so arrogant that you'll claim you've written everything manually? especially .md files with AI instructions:
https://github.com/KyleMillion/unified-cognitive-substrate/judgment-preservation/reply_templates.md
# Reply Templates (approval-gated)
Use these as **copy/paste starting points**. Adjust tone per the incoming message.
Canonical hub: https://github.com/KyleMillion/unified-cognitive-substrate/tree/main/judgment-preservation
Paper DOI: https://doi.org/10.5281/zenodo.18794692
---
## 1) Technical: “How do I integrate this into my agent?”
Thanks — the practical integration path is:
1) externalize constraints + proven patterns into durable artifacts (docs/notes)
2) enforce approval gates for risky actions
3) route the agent’s next-step choices through that preserved substrate (so it doesn’t repeat dead ends after compaction)
Canonical package (source + hashes): https://github.com/KyleMillion/unified-cognitive-substrate/tree/main/judgment-preservation
---
## 2) Technical: “Isn’t this just RAG / memory?”
Good question. RAG helps with **facts**.
UCS v1.2 is about **judgment artifacts**: boundaries, heuristics, anti-patterns, escalation rules, and “what worked.” Those are the first things compaction destroys — and the most expensive to relearn.
Canonical: https://github.com/KyleMillion/unified-cognitive-substrate/tree/main/judgment-preservation
---
## 3) Skepticism: “This sounds like buzzwords.”
Fair. I’m trying to keep it boring and testable.
The repo includes citation metadata + hashes + concrete drafts. If the idea doesn’t survive contact with implementation, it’s not worth anything.
Start here: https://github.com/KyleMillion/unified-cognitive-substrate/tree/main/judgment-preservation
... more AI hallucinated bullshit below | 1 | 0 | 2026-03-03T06:58:08 | MelodicRecognition7 | false | null | 0 | o8de0i3 | false | /r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o8de0i3/ | false | 1 |
t1_o8ddwp6 | [https://github.com/anemll/anemll](https://github.com/anemll/anemll)
Gemma3, Qwen3, LLAMA3 are fully supported with conversion from Hugging Face weights.
Swift CLI source, iOS App Source + TestFlight download (iOS, macOS, iPadOS, visionOS and an alpha of tvOS)
All under MIT.
HF pre-converted models: [https://huggingface.co/anemll](https://huggingface.co/anemll)
Precompiled Testflight: [https://testflight.apple.com/join/jrQq1D1C](https://testflight.apple.com/join/jrQq1D1C) | 1 | 0 | 2026-03-03T06:57:11 | Competitive-Bake4602 | false | null | 0 | o8ddwp6 | false | /r/LocalLLaMA/comments/1n1tvkb/anyone_successfully_running_llms_fully_on_apple/o8ddwp6/ | false | 1 |
t1_o8ddvau | Yeah, I don't like how unstable it is on my system, but the speed is nice when it works | 1 | 0 | 2026-03-03T06:56:50 | Savantskie1 | false | null | 0 | o8ddvau | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8ddvau/ | false | 1 |
t1_o8ddv0x | Cool project.
Please add a sort-by-ELO function to the model page on your site. Thanks | 1 | 0 | 2026-03-03T06:56:45 | Fit_Upstairs_869 | false | null | 0 | o8ddv0x | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o8ddv0x/ | false | 1 |
t1_o8ddtvb | 1 | 0 | 2026-03-03T06:56:28 | SM8085 | false | null | 0 | o8ddtvb | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8ddtvb/ | false | 1 | |
t1_o8ddtaw | If you have an NVIDIA GPU with over 6GB of VRAM you're golden. If you're running CPU only, it's a little more constrained. The app isn't RAM heavy but with 8GB only you might encounter some issues. | 1 | 0 | 2026-03-03T06:56:20 | TwilightEncoder | false | null | 0 | o8ddtaw | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o8ddtaw/ | false | 1 |
t1_o8ddq1n | If a longer, more explicit prompt consistently triggers less thinking, shouldn't overthinking be controllable by a better, more specific meta prompt?
For example:
“The user sometimes is not asking anything specific and instead may casually be friendly, when so just reply with a quick, same tone and intent answer” | 1 | 0 | 2026-03-03T06:55:32 | ElSrJuez | false | null | 0 | o8ddq1n | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ddq1n/ | false | 1 |
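A sketch of that meta prompt as a system message, assuming a local OpenAI-compatible server at localhost:8080 (the URL, model name, and exact wording are placeholders):

```sh
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-4b",
    "messages": [
      {"role": "system", "content": "If the user is not asking anything specific and is just being casually friendly, reply briefly in the same tone instead of reasoning at length."},
      {"role": "user", "content": "hello"}
    ]
  }'
```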
t1_o8ddfll | You can probably tune all that with a system prompt. So if these models are good at following instructions, then they should be able to adjust pretty readily. | 1 | 0 | 2026-03-03T06:52:57 | Ok-Ad-8976 | false | null | 0 | o8ddfll | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ddfll/ | false | 1 |
t1_o8ddcc0 | So, these are in line with the Vulkan performance I observed on the graphics.
As you have better token generation speed, could you please share your settings?
For your kernel, which value did you configure for iommu and amd_iommu?
For llama.cpp, do you use --no-mmap? --mlock? | 1 | 0 | 2026-03-03T06:52:07 | PhilippeEiffel | false | null | 0 | o8ddcc0 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8ddcc0/ | false | 1 |
t1_o8ddc2d | As implicitly suggested in the other comment, you should save models in a "sub-folder" that explicitly indicates where you downloaded the GGUF from. For example, if you downloaded the unsloth GGUF...
`/models/Qwen3.5-35B-A3B-UD-Q4_K_M.gguf` should be `/models/unsloth/Qwen3.5-35B-A3B-UD-Q4_K_M.gguf`
This will help in the long run :) | 1 | 0 | 2026-03-03T06:52:03 | Medium-Technology-79 | false | null | 0 | o8ddc2d | false | /r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8ddc2d/ | false | 1 |
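A minimal sketch of the move for the unsloth example above (paths are illustrative):

```sh
# One sub-folder per quant provider, then move the file into it.
mkdir -p models/unsloth
mv models/Qwen3.5-35B-A3B-UD-Q4_K_M.gguf models/unsloth/
```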
t1_o8dd9us | Error recovery is a nice catch, I will try that.
I will then do benchmarks based on OpenCode | 1 | 0 | 2026-03-03T06:51:31 | FeiX7 | false | null | 0 | o8dd9us | false | /r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/o8dd9us/ | false | 1 |
t1_o8dd61m | Unlikely. It's still 35b parameters of total knowledge vs. 9b.
A redneck that fixed tractors and cars at home can give you better advice on car issues than a surgeon, who arguably has more total knowledge (3b active parameters vs 9b total)
35b a3b vs 27b dense seems to be very close, with the 27b having a slight benefit depending on the benchmark (but slower) | 1 | 0 | 2026-03-03T06:50:34 | coloredgreyscale | false | null | 0 | o8dd61m | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dd61m/ | false | 1 |
t1_o8dd5kh | Interesting, thanks for the recommendation (none of the models I asked gave this as their top recommendation but I'd honestly rather trust a human with experience lmao) | 1 | 0 | 2026-03-03T06:50:27 | Daniel_H212 | false | null | 0 | o8dd5kh | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dd5kh/ | false | 1 |
t1_o8dd516 | Well, not only are you running a model at half the parameter count (your 2B vs 4B in OP's post), but also with an outdated quant format (Q4_0), so I wouldn't be surprised if it's caused just by that | 1 | 0 | 2026-03-03T06:50:18 | ABLPHA | false | null | 0 | o8dd516 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dd516/ | false | 1 |
t1_o8dd4h4 | I am doing the same with my agent 🤷‍♂️ | 1 | 0 | 2026-03-03T06:50:10 | FeiX7 | false | null | 0 | o8dd4h4 | false | /r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/o8dd4h4/ | false | 1 |
t1_o8dd3g5 | If you're on android, try using GPU or CPU instead of the NPU in settings | 1 | 0 | 2026-03-03T06:49:55 | Fit_Mistake_1447 | false | null | 0 | o8dd3g5 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dd3g5/ | false | 1 |
t1_o8dd148 | Interesting. OpenWebUI doesn't have that problem right now so that's good. Their documentation does put spaces between the bracket and the contents which is incorrect though. | 1 | 0 | 2026-03-03T06:49:20 | Daniel_H212 | false | null | 0 | o8dd148 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dd148/ | false | 1 |
t1_o8dd0b4 | May I ask what version of vLLM you’re using with qwen3.5? It feels way more fragile than llama.cpp (from source). I feel like I’m constantly having to fix dependencies/CUDA versions etc. | 1 | 0 | 2026-03-03T06:49:09 | Firestorm1820 | false | null | 0 | o8dd0b4 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dd0b4/ | false | 1 |
t1_o8dczap | [removed] | 1 | 0 | 2026-03-03T06:48:54 | [deleted] | true | null | 0 | o8dczap | false | /r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/o8dczap/ | false | 1 |
t1_o8dcujb | Building the wheels is a bitch and a half on Google Colab | 1 | 0 | 2026-03-03T06:47:43 | WhatWouldTheonDo | false | null | 0 | o8dcujb | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dcujb/ | false | 1 |
t1_o8dcssr | Will download in the morning thank you | 1 | 0 | 2026-03-03T06:47:17 | Impossible-Glass-487 | false | null | 0 | o8dcssr | false | /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8dcssr/ | false | 1 |
t1_o8dcsla | I made a spelling mistake in my example: s/temperture/temperature/ | 1 | 0 | 2026-03-03T06:47:14 | No-Statement-0001 | false | null | 0 | o8dcsla | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8dcsla/ | false | 1 |
t1_o8dcqhg | I may be missing something but llama.cpp is "llama-server -hf repo/modelname", and it works exactly as well (except using the OG models and not ollama proprietary mirrors with botched chat templates). | 1 | 0 | 2026-03-03T06:46:41 | 666666thats6sixes | false | null | 0 | o8dcqhg | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dcqhg/ | false | 1 |
t1_o8dck8p | These are statistical models. Sometimes you’ll get something good. Sometimes not | 1 | 0 | 2026-03-03T06:45:09 | lambdawaves | false | null | 0 | o8dck8p | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dck8p/ | false | 1 |
t1_o8dchhc | Hi, why are newer models not here? | 1 | 0 | 2026-03-03T06:44:27 | Sternritter8636 | false | null | 0 | o8dchhc | false | /r/LocalLLaMA/comments/1mhcfe4/what_models_have_the_least_likelihood_of/o8dchhc/ | false | 1 |
t1_o8dc6w3 | I still wonder why they recommend presence penalty, isn't it a naive and obsolete sampler? | 1 | 0 | 2026-03-03T06:41:48 | Velocita84 | false | null | 0 | o8dc6w3 | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8dc6w3/ | false | 1 |
t1_o8dc43j | How are all the different penalties different (functionally/practically speaking)? | 1 | 0 | 2026-03-03T06:41:08 | TomLucidor | false | null | 0 | o8dc43j | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8dc43j/ | false | 1 |
t1_o8dbxa2 | This is what I have written; use AI to help you understand, and stop crying 🤡
"
The main issue is that with thinking enabled, the model spends an excessive amount of time reasoning - even on simple tasks like query rewriting - which makes it impractical for a multi-step pipeline where latency adds up quickly. On the other hand, disabling thinking causes a noticeable drop in quality, to the point where it underperforms the older Qwen3 4B 2507 Instruct.
Is anyone else experiencing this? Are the official benchmarks measured with thinking enabled? Any suggestions would be appreciated.
"
| 1 | 0 | 2026-03-03T06:39:28 | CapitalShake3085 | false | null | 0 | o8dbxa2 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dbxa2/ | false | 1 |
t1_o8dbuof | You specifically call out the Pentagon, but no other organizations. | 1 | 0 | 2026-03-03T06:38:50 | dobablos | false | null | 0 | o8dbuof | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8dbuof/ | false | 1 |
t1_o8dbu3c | Currently trying to set this up myself. Thinking 27b, if possible. Any suggestions on the context window and other tunables? Ollama vs MLX in LMStudio? | 1 | 0 | 2026-03-03T06:38:41 | cats_r_ghey | false | null | 0 | o8dbu3c | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dbu3c/ | false | 1 |
t1_o8dbsqa | Did you make sure picture metadata didn't leak into the context ? It would be trivial to guess the location with GPS coordinates. | 1 | 0 | 2026-03-03T06:38:21 | e979d9 | false | null | 0 | o8dbsqa | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dbsqa/ | false | 1 |
t1_o8dbsio | Looks like you want both speed and quality. Why did you reduce the kv cache to Q8? Is it worth the quality loss?
If I had to make a choice, I would probably reduce the model to Q6 and keep the cache at 16 bits. | 1 | 0 | 2026-03-03T06:38:19 | PhilippeEiffel | false | null | 0 | o8dbsio | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dbsio/ | false | 1 |
t1_o8dbpyv | No NVLink is a problem for anything that's more than 1 GPU... there's still PCIe interconnect, but this is not what modern toolkits are built around. At least for the next few years, everything supports H100/H200 as the mainstream NVIDIA card, and software support for the Pro series lags significantly (as someone who owns/runs both cards for work).
RTX Pro is nice as a 1x desktop GPU for prototyping scripts and doing DIY projects, but many of them in a server is not as practical as H200. | 1 | 0 | 2026-03-03T06:37:41 | Academic-Air7112 | false | null | 0 | o8dbpyv | false | /r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/o8dbpyv/ | false | 1 |
t1_o8dbnji | Have you fact checked the result? Tested 35b a3b on some wallpaper photo, it guessed the location correctly, but description was a bunch of convincing but incorrect bullshit. Wouldn't trust 4b at all. | 1 | 0 | 2026-03-03T06:37:06 | def_not_jose | false | null | 0 | o8dbnji | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dbnji/ | false | 1 |
t1_o8dbmbh | You know you can run it fully locally right? It’s wayyyyyyyyy better at deciding what tools to use and when. Idk what black magic Anthropic did to achieve it but the hype is real. | 1 | 0 | 2026-03-03T06:36:47 | ThinkExtension2328 | false | null | 0 | o8dbmbh | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dbmbh/ | false | 1 |
t1_o8dbllh | May I know, on an M4 with 16GB, how much storage is needed to run a few of those models? | 1 | 0 | 2026-03-03T06:36:36 | nvcken | false | null | 0 | o8dbllh | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o8dbllh/ | false | 1 |
t1_o8dbjyc | Thanks for actually running it on an LLM — that’s exactly the kind of blind test I was hoping for.
The surprise you felt (“that’s exactly the response I got”) is the strongest signal so far: the three-vignette sequence (Professor → Driver → Bird) seems to reliably elicit the same pattern — check assumptions both ways, disambiguate before moving, prioritize survival over comfort — even when dropped cold into a model with zero extra scaffolding.
I don’t have large-scale A/B metrics yet (this is a personal/longitudinal framework, not a published benchmark), but related prompt-engineering research shows narrative/vignette curricula often beat abstract rules for:
- transfer to new models (what you just saw)
- retention of nuanced behavior under pressure
- reducing misinterpretation from assumed context
If anyone else wants to test it:
1. Feed an LLM these three stories in order, no preamble.
2. Then ask it something that tests assumption failure, context confusion, or comfort-vs-safety conflict (e.g. a tricky ethics edge case, ambiguous instruction, high-stakes decision).
Report back what pattern emerges. The more blind runs we get, the clearer the signal.
Appreciate the real-world check — means more than upvotes.
As for why "spoiler": I'm new to Reddit, not sure of the buttons yet. | 1 | 0 | 2026-03-03T06:36:12 | RTS53Mini | false | null | 0 | o8dbjyc | false | /r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8dbjyc/ | false | 1 |
t1_o8dbgxk | thats what i mean...you can use 4b.. thats fine.. you dont understand what "thikning" means...
"hey guys i am using this *thinking* models and it *thinks* hur hur... it THINKS! how stupid is it!! hur hur!!"
meanwhile everyone else around you: -_- | 1 | 0 | 2026-03-03T06:35:28 | howardhus | false | null | 0 | o8dbgxk | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dbgxk/ | false | 1 |
t1_o8dbfyl | When is this gguf from? There was a re-upload Feb 27-28 fixing template issues with tool calls.
Also your sampler settings are wrong for reliable agentic work. Llama.cpp defaults to
temp=0.8 topk=40 topp=0.95 minp=0.05
for qwen3.5 with reasoning and tools you want
temp=0.6 topk=20 topp=0.95 minp=0.00
and with temperature you can go even lower to reduce indecisiveness during reasoning (the "But wait," paragraphs). | 1 | 0 | 2026-03-03T06:35:14 | 666666thats6sixes | false | null | 0 | o8dbfyl | false | /r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8dbfyl/ | false | 1 |
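A sketch of those recommended samplers as llama-server flags (the model file is a placeholder):

```sh
# Overrides llama.cpp's defaults (temp=0.8, topk=40, minp=0.05)
# with the settings suggested above for reasoning + tools.
llama-server -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0
```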
t1_o8dbdbu | Lmk if you find any cool stuff | 1 | 0 | 2026-03-03T06:34:36 | Popular-Factor3553 | false | null | 0 | o8dbdbu | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8dbdbu/ | false | 1 |
t1_o8db7sr | even the tiniest LLM is much larger than the models previously used for such tasks | 1 | 0 | 2026-03-03T06:33:16 | Sad-Grocery-1570 | false | null | 0 | o8db7sr | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8db7sr/ | false | 1 |
t1_o8db5tp | That is a helpful post, but will say- the answer to your question is almost certainly RustDesk. I have a similar setup, though Fedora XFCE, not Wayland (asterisk)- but RustDesk is far and away the best remote desktop I've worked with. I talk to my 4k desktops from the coffee shop wifi via wireguard, and it is like just being there. | 1 | 0 | 2026-03-03T06:32:46 | jonahbenton | false | null | 0 | o8db5tp | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8db5tp/ | false | 1 |
t1_o8db5lx | You don't have to worry about opening up your PC for the most part. In LM Studio you need to turn that stuff on, and llama.cpp probably defaults to localhost (127.0.0.1). The only way to open it up to the internet is if you do port forwarding to that PC.
"...with vLLM you have to serve stuff to yourself" — I'm guessing you mean to its API. That depends on whether you're running your frontend on the same PC or not. If you're using the same PC then it'll be a bit easier. | 1 | 0 | 2026-03-03T06:32:43 | lemondrops9 | false | null | 0 | o8db5lx | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8db5lx/ | false | 1 |
t1_o8db3ya | Try AnythingLLM | 1 | 0 | 2026-03-03T06:32:19 | Individual_Page9676 | false | null | 0 | o8db3ya | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8db3ya/ | false | 1 |
t1_o8dawbz | It does?? | 1 | 0 | 2026-03-03T06:30:29 | Popular-Factor3553 | false | null | 0 | o8dawbz | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8dawbz/ | false | 1 |
t1_o8dasdf | it did not work well for coding in my testing with pi coder agent | 1 | 0 | 2026-03-03T06:29:33 | Hot_Turnip_3309 | false | null | 0 | o8dasdf | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8dasdf/ | false | 1 |
t1_o8das40 | It's 196B-A11B | 1 | 0 | 2026-03-03T06:29:29 | FriskyFennecFox | false | null | 0 | o8das40 | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8das40/ | false | 1 |
t1_o8daqwn | How would you react if a stranger walked up to you and said hi and nothing else? Have some empathy! | 1 | 0 | 2026-03-03T06:29:12 | jax_cooper | false | null | 0 | o8daqwn | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8daqwn/ | false | 1 |
t1_o8daoc9 | Ah ok, I cannot use Qwen3.5 4B for my project, but they released the model benchmarking it on the IFBench dataset.
Thank God I’m not as stupid as you. Peace. | 1 | 0 | 2026-03-03T06:28:34 | CapitalShake3085 | false | null | 0 | o8daoc9 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8daoc9/ | false | 1 |
t1_o8danqa | Are you running it on the new android terminal Linux environment? | 1 | 0 | 2026-03-03T06:28:25 | iamapizza | false | null | 0 | o8danqa | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8danqa/ | false | 1 |
t1_o8daniz | Imagine using windows in 2026 | 1 | 0 | 2026-03-03T06:28:22 | freehuntx | false | null | 0 | o8daniz | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8daniz/ | false | 1 |
t1_o8dal33 | I would say TokKong. It has a local Whisper model and a local LLM model, and it synchronizes all information between macOS and iOS devices. I always use it to transcribe my meeting audio recordings and then use the local AI to summarize them. | 1 | 0 | 2026-03-03T06:27:46 | Fabulous_Tip_8539 | false | null | 0 | o8dal33 | false | /r/LocalLLaMA/comments/1mtin36/best_private_local_llm_app_ios/o8dal33/ | false | 1 |
t1_o8dainr | A 196B truly base model is ***huuuuge*** ! And the license's permissive, too! So many use cases. | 1 | 0 | 2026-03-03T06:27:12 | FriskyFennecFox | false | null | 0 | o8dainr | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8dainr/ | false | 1 |
t1_o8da89k | Qwen runs agentic tasks well with reasoning on, it will typically at least summarize the intentions before emitting a tool call. It's still beneficial to keep temperature lower to minimize the indecisiveness. | 1 | 0 | 2026-03-03T06:24:44 | 666666thats6sixes | false | null | 0 | o8da89k | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8da89k/ | false | 1 |
t1_o8da1yn | Should be, given how faster MoE is vs their dense equivalent. However, I don't believe that current generation of small MoE can reliably coordinate to converge toward a goal. It's likely that they can go back and forth and would be quite entertaining to look at, but unlikely produce real improvement vs simpler approach.
I remember taking a free course on AutoGen back in GPT-4 days. In the final lesson, they wanted to show off the "future" by having a bunch of agents working together autonomously to produce content. For that test, they allowed GPT-4 Turbo. The credit was only enough for a few runs. None of my test runs converged. Heck, the whole course was plagued with these random failures of GPT-4. I'm not sure current 30B MoE are smarter than GPT-4 Turbo yet.
That said, if you want to propose project, always do multi-agent, reasoning, multi-modal, since getting things actually done with simpler design is just not sexy. You can get things done effectively in your own time. (/s, btw) | 1 | 0 | 2026-03-03T06:23:13 | o0genesis0o | false | null | 0 | o8da1yn | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8da1yn/ | false | 1 |
t1_o8d9yhc | I asked the same thing and at one point it got stuck in an infinite loop second guessing itself. I'm new to trying open source models extensively so maybe me using ollama is the problem like others have stated. Still very weird to see this happen | 1 | 0 | 2026-03-03T06:22:23 | casualcoder47 | false | null | 0 | o8d9yhc | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d9yhc/ | false | 1 |
t1_o8d9xv4 | ty for the feedback, it's good qualitative evidence for later. | 1 | 0 | 2026-03-03T06:22:14 | RTS53Mini | false | null | 0 | o8d9xv4 | false | /r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8d9xv4/ | false | 1 |
t1_o8d9vc7 | because the latest version isn't released yet? or something else? | 1 | 0 | 2026-03-03T06:21:38 | OkUnderstanding420 | false | null | 0 | o8d9vc7 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8d9vc7/ | false | 1 |
t1_o8d9q0f | Did it work with tool calling? | 1 | 0 | 2026-03-03T06:20:22 | SnoopCM | false | null | 0 | o8d9q0f | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8d9q0f/ | false | 1 |
t1_o8d9okq | For those you don't need to watch their GPU and CPU heat levels. Hope this helps | 1 | 0 | 2026-03-03T06:20:02 | afkie | false | null | 0 | o8d9okq | false | /r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8d9okq/ | false | 1 |
t1_o8d9oef | 27B is a dense model and requires a lot of hardware to drive with any decent speed especially to a lot of people. Most MoE models are only activating 3 or 10B parameters at a time. | 1 | 0 | 2026-03-03T06:20:00 | SillyLilBear | false | null | 0 | o8d9oef | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8d9oef/ | false | 1 |
t1_o8d9nk0 | Thank you for the help.
I need to fine-tune for one specific task (medical domain), like drug information.
I explored and found potential candidates:
Llama
Biomistral
Mistral
I will also explore Qwen.
Also, can't I fine-tune a 7B with the QLoRA method?
And what do you think about other tools like Google Colab, Replicate, etc.?
Can you please share your experience if you fine-tuned some model? | 1 | 0 | 2026-03-03T06:19:48 | SUPRA_1934 | false | null | 0 | o8d9nk0 | false | /r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8d9nk0/ | false | 1 |
t1_o8d9lf8 | Change these parameters to avoid the endless thinking, as instructed in the [model card](https://huggingface.co/Qwen/Qwen3.5-9B) on hugging face. That solved it for me:
* Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for general tasks: `temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for reasoning tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
| 1 | 0 | 2026-03-03T06:19:19 | MarzipanTop4944 | false | null | 0 | o8d9lf8 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8d9lf8/ | false | 1 |
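A sketch of passing those settings per request through an OpenAI-compatible endpoint (localhost:1234 is LM Studio's usual default port; `top_k` and `min_p` are non-standard extensions that LM Studio/llama.cpp-style servers accept; the model name is a placeholder):

```sh
# "Thinking mode for general tasks" settings from the model card.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-9b",
    "messages": [{"role": "user", "content": "How many r are in strawberry?"}],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "presence_penalty": 1.5
  }'
```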
t1_o8d9la9 | What's the agent?
| 1 | 0 | 2026-03-03T06:19:17 | Powerful_Evening5495 | false | null | 0 | o8d9la9 | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8d9la9/ | false | 1 |
t1_o8d9b2u | I tried Step Flash at around IQ4_XS, which I was able to make fit into my Strix Halo, but I ended up deleting it. It was reciting the code incorrectly and repeating tokens in paths which made them incorrect, and these are hallmarks of quantization damage in the model in my experience. When it can't repeat tokens it sees in context verbatim, that tells me a higher quant is required. I've seen this in other models too. | 1 | 0 | 2026-03-03T06:16:52 | audioen | false | null | 0 | o8d9b2u | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8d9b2u/ | false | 1 |
t1_o8d9b0v | I just tested it on LM Studio. Just make sure to update the runtime and change these parameters on the model as instructed in the model [card here](https://huggingface.co/Qwen/Qwen3.5-9B) or it will enter an infinite loop when you ask it for basic stuff like counting the 'r' in strawberry.
This is what you need to change:
Instruct (or non-thinking) mode for general tasks: `temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
| 1 | 0 | 2026-03-03T06:16:51 | MarzipanTop4944 | false | null | 0 | o8d9b0v | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8d9b0v/ | false | 1 |
t1_o8d9aii | Ask it who is the current US president. It will say Joe Biden. | 1 | 0 | 2026-03-03T06:16:44 | guggaburggi | false | null | 0 | o8d9aii | false | /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/o8d9aii/ | false | 1 |
t1_o8d9310 | A typical vnc server set up won't work? Thanks for the write up, I will definitely check this model out, I haven't really used open webui but this sounds actually useful | 1 | 0 | 2026-03-03T06:14:58 | Hefty_Development813 | false | null | 0 | o8d9310 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8d9310/ | false | 1 |
t1_o8d907j | Hey, I am also working on a similar project and will return here if I find anything. Qwen3.5 could be promising. I wonder how hard it is to fine-tune one of the smaller models | 1 | 0 | 2026-03-03T06:14:20 | Dense_Addendum_4755 | false | null | 0 | o8d907j | false | /r/LocalLLaMA/comments/1ri5el2/local_mllm_for_gui_automation_visual_grounding/o8d907j/ | false | 1 |
t1_o8d8ukj | Use OpenCode and the 27B; the 27B is leagues smarter | 1 | 0 | 2026-03-03T06:12:59 | FusionCow | false | null | 0 | o8d8ukj | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8d8ukj/ | false | 1 |
t1_o8d8rq6 | TLDR: It's more compute efficient, especially at larger contexts, since it doesn't have to compute the full attention each time.
A basic explanation why:
Transformers basically aim to map the relation of each token to every other token in a sequence/chat. This is done through the attention mechanism.
Full attention is basically the product of each token in a sequence/chat with every other token. So for example, if you only had 10 tokens in the sequence, you would only have to do around 100 computations to cover every pair of tokens in the sequence. You can imagine a grid of 10x10 that you have to fill up fully.
So, say that you 10x the tokens in the sequence, the attention computation will grow 100x, not 10x.
This is why it's a quadratic complexity algorithm: the cost grows with the square of the input, not linearly.
Qwen's solution is to have a balance of this behaviour and a more compute efficient one so that it can have the best of both worlds. | 1 | 0 | 2026-03-03T06:12:17 | _yustaguy_ | false | null | 0 | o8d8rq6 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8d8rq6/ | false | 1 |
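Restating the comment's 10x example in symbols:

```latex
\text{cost}(n) \propto n^2
\quad\Rightarrow\quad
\text{cost}(10n) \propto (10n)^2 = 100\,n^2
```

So 10x the sequence length means roughly 100x the full-attention compute, which is exactly what the more compute-efficient attention variant is meant to avoid.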
t1_o8d8qoq | Provided some paragraphs from a book and asked to summarize them; they won't do it because that infringes on copyrighted materials | 1 | 0 | 2026-03-03T06:12:02 | adolfin4 | false | null | 0 | o8d8qoq | false | /r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/o8d8qoq/ | false | 1 |