name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8dn13m
What quants are you using ? I still don't know if I should use the full 27B (in FP16) with vLLM (tp=4 on 4*RTX 3090, great speed around 70-80 tok/s) or the 122B in GGUF (way slower because of llamacpp). Sadly toolcall in EXL3 isn't working well.
1
0
2026-03-03T08:22:33
TacGibs
false
null
0
o8dn13m
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dn13m/
false
1
t1_o8dmrss
How long do the abliterated versions usually take to start appearing?
1
0
2026-03-03T08:19:56
chaosboi
false
null
0
o8dmrss
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8dmrss/
false
1
t1_o8dmr5c
wat
1
0
2026-03-03T08:19:46
MelodicRecognition7
false
null
0
o8dmr5c
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8dmr5c/
false
1
t1_o8dmpz0
Kawaii
1
0
2026-03-03T08:19:26
ksoops
false
null
0
o8dmpz0
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8dmpz0/
false
1
t1_o8dmmyl
it's just more and more unnecessary yapping
1
0
2026-03-03T08:18:37
KomithErr404
false
null
0
o8dmmyl
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dmmyl/
false
1
t1_o8dml7x
Sunshine/Moonlight. I use them to play games from my desktop anywhere on earth. 10/10 no notes
1
0
2026-03-03T08:18:07
insanemal
false
null
0
o8dml7x
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dml7x/
false
1
t1_o8dml8x
You’re not missing anything obvious — a lot of the excitement around **OpenClaw** is more about *direction* than a single killer feature. What’s different isn’t so much UX polish (tools like Manus are strong there), but a shift in **where control lives**. A few things people are reacting to:

* **Architecture over prompts.** OpenClaw pushes people to think in terms of tools, execution loops, permissions, and state, instead of just clever prompting. That’s appealing to folks who want agents to *do things*, not just chat.
* **Explicit control layers.** Instead of abstracting everything away, it makes the decision/action boundary more visible. You’re forced to be explicit about *when* an agent can act, which tools it can use, and under what constraints.
* **Composability.** It’s closer to a framework than a product. That makes it less friendly at first, but more flexible if you want to build something long-lived or production-ish.
* **Alignment with “prod reality.”** The hype is cooling on fully autonomous agents. OpenClaw fits the newer mindset of agents as orchestrated workflows with guardrails, which resonates with builders who’ve been burned by demo-only setups.

The tradeoff is obvious: it’s harder to get started, and without the right execution environment it can feel underwhelming or even “broken.”

*Context:* we started building **ClawDock** largely because we saw people bounce off OpenClaw at this exact point — the ideas make sense, but wiring runtime, permissions, and visibility is where things get confusing.

If you’re comparing it to Manus: Manus optimizes for immediacy and UX; OpenClaw optimizes for control and extensibility. Which one feels “special” really depends on whether you’re exploring or trying to build something that has to survive contact with production.
1
0
2026-03-03T08:18:07
Icy-Resource164
false
null
0
o8dml8x
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8dml8x/
false
1
t1_o8dmhyp
And so far the model output seems to be good even though this is fairly exotic quant. Quality work, thanks! Gonna play with it a bit :D
1
0
2026-03-03T08:17:14
jslominski
false
null
0
o8dmhyp
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8dmhyp/
false
1
t1_o8dmc6n
It understood that pressurized fuel tanks are basically eggs tho
1
0
2026-03-03T08:15:41
huffalump1
false
null
0
o8dmc6n
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dmc6n/
false
1
t1_o8dma0m
Shit, have you been bot swizzled?
1
0
2026-03-03T08:15:05
Accomplished_Ad9530
false
null
0
o8dma0m
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8dma0m/
false
1
t1_o8dm8tq
Litellm is one. I just wrote a guide: [https://www.reddit.com/r/LocalLLaMA/comments/1rj8zuh/manage\_qwen\_35\_model\_settings\_with\_litellm\_proxy/](https://www.reddit.com/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/)
1
0
2026-03-03T08:14:45
CATLLM
false
null
0
o8dm8tq
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8dm8tq/
false
1
t1_o8dm2tg
isn't llama.cpp WASM dead?
1
0
2026-03-03T08:13:07
holdenk
false
null
0
o8dm2tg
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8dm2tg/
false
1
t1_o8dm1j5
Isn’t RDMA only going to work with TB5? Will RDMA work over TB4?
1
0
2026-03-03T08:12:46
JaatGuru
false
null
0
o8dm1j5
false
/r/LocalLLaMA/comments/1rga9x4/qwen35122ba10b_pooled_on_dual_mac_studio_m4_max/o8dm1j5/
false
1
t1_o8dly4j
Why would you want to do it with 2 Mac Studios since it fits on one easily? Preparing for DeepSeek 4, perhaps?
1
0
2026-03-03T08:11:52
JaatGuru
false
null
0
o8dly4j
false
/r/LocalLLaMA/comments/1rga9x4/qwen35122ba10b_pooled_on_dual_mac_studio_m4_max/o8dly4j/
false
1
t1_o8dlxbg
I have 4GB of RAM, and I'm not sure if the phone came with a physical problem or a software issue, but the RAM management is so terrible that it feels like I have 2GB or less.
1
0
2026-03-03T08:11:39
Samy_Horny
false
null
0
o8dlxbg
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dlxbg/
false
1
t1_o8dlvuf
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking. On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice. I’m very disappointed with Qwen because the results don’t match expectations.
1
0
2026-03-03T08:11:16
Turbulent_Pie_8135
false
null
0
o8dlvuf
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8dlvuf/
false
1
t1_o8dlthp
As a sidenote to the sidenote, if you actually need to put dynamic data into the system prompt, like a current timestamp - the prompt above has CURRENT_DATETIME}}) - you should put it at the end. This way more of it can be cached.
1
0
2026-03-03T08:10:39
AnotherAvery
false
null
0
o8dlthp
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dlthp/
false
1
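To make that caching point concrete, here is a minimal sketch (not from the comment above) of assembling the messages so the static instructions form a stable prefix and the timestamp comes last. The instruction text and variable names are placeholders, and whether the prefix is actually reused depends on the server's prefix caching.

```python
from datetime import datetime, timezone

# Hypothetical static instructions: the long, unchanging part of the prompt.
STATIC_INSTRUCTIONS = (
    "You are a helpful assistant. Follow the tool-calling format exactly."
)

def build_messages(user_text: str) -> list[dict]:
    # Keep the static block first so a prefix cache (llama.cpp / vLLM style)
    # can reuse its KV entries across requests; only the short tail changes.
    dynamic_tail = f"\nCurrent datetime: {datetime.now(timezone.utc).isoformat()}"
    return [
        {"role": "system", "content": STATIC_INSTRUCTIONS + dynamic_tail},
        {"role": "user", "content": user_text},
    ]

print(build_messages("Summarize today's schedule.")[0]["content"])
```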
t1_o8dlqhl
If your smartphone has 8GB of RAM then it can handle the 4B easily.
1
0
2026-03-03T08:09:50
Healthy-Nebula-3603
false
null
0
o8dlqhl
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dlqhl/
false
1
t1_o8dlpze
I noticed that. How on earth does it work? Is it some sort of context compression?
1
0
2026-03-03T08:09:43
roosterfareye
false
null
0
o8dlpze
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dlpze/
false
1
t1_o8dlolv
[Well no](https://fr.reddit.com/r/LocalLLaMA/comments/1oymku1/heretic_fully_automatic_censorship_removal_for/np7jhxd/)
1
0
2026-03-03T08:09:21
Extraaltodeus
false
null
0
o8dlolv
false
/r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8dlolv/
false
1
t1_o8dln3t
Thanks for posting. Have you compared it against the Qwen3.5 35B A3B variant? I haven't tried the 27B dense model; the one I mentioned shows very good results so far, but holy smoking reasoning tokens
1
0
2026-03-03T08:08:57
Lazy-Variation-1452
false
null
0
o8dln3t
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dln3t/
false
1
t1_o8dlkta
I test on different models with the same question. You can try by yourself and see that it does get it every time.
1
0
2026-03-03T08:08:20
Extraaltodeus
false
null
0
o8dlkta
false
/r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8dlkta/
false
1
t1_o8dlgit
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking. On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice. I’m very disappointed with Qwen because the results don’t match expectations.
1
0
2026-03-03T08:07:13
Turbulent_Pie_8135
false
null
0
o8dlgit
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8dlgit/
false
1
t1_o8dleo6
The #QuitGPT movement is so justified. Only problem is that people have spent so much time building context in chatgpt that moving away from it feels like throwing away everything. I've built an MCP server that acts as a universal memory layer across all AI platforms simultaneously. That has honestly been helping me switch between different AIs without losing context. Happy to share if someone is interested.
1
0
2026-03-03T08:06:44
Reasonable-Jump-8539
false
null
0
o8dleo6
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8dleo6/
false
1
t1_o8dlec8
granite was not built to compete on raw benchmarks.. its whole value is in the training data transparency and apache 2.0 licensing which matter way more in enterprise or regulated deployments than context window size ever will..
1
0
2026-03-03T08:06:39
ashersullivan
false
null
0
o8dlec8
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8dlec8/
false
1
t1_o8dldyx
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking. On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice. I’m very disappointed with Qwen because the results don’t match expectations.
1
0
2026-03-03T08:06:33
Turbulent_Pie_8135
false
null
0
o8dldyx
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dldyx/
false
1
t1_o8dlczy
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking. On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice. I’m very disappointed with Qwen because the results don’t match expectations.
1
0
2026-03-03T08:06:18
Turbulent_Pie_8135
false
null
0
o8dlczy
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dlczy/
false
1
t1_o8dlcqx
Yeah this happens with a lot of thinking models.
1
0
2026-03-03T08:06:13
FuzzyLogick
false
null
0
o8dlcqx
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dlcqx/
false
1
t1_o8dlbsf
Ah interesting, so vision models, even without using the vision portion (just text), are better than non-vision models of the same size, for example?
1
0
2026-03-03T08:05:58
rorowhat
false
null
0
o8dlbsf
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dlbsf/
false
1
t1_o8dlaaw
I have the same issue, did you resolve it?
1
0
2026-03-03T08:05:34
Odd-Ad8796
false
null
0
o8dlaaw
false
/r/LocalLLaMA/comments/1re64fe/qwen35_thinking_blocks_in_output/o8dlaaw/
false
1
t1_o8dl862
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking. On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice. I’m very disappointed with Qwen because the results don’t match expectations.
1
0
2026-03-03T08:05:00
Turbulent_Pie_8135
false
null
0
o8dl862
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dl862/
false
1
t1_o8dl6oi
This was a solid read, appreciate the real-world context.
1
0
2026-03-03T08:04:37
Interesting_Lie_9231
false
null
0
o8dl6oi
false
/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/o8dl6oi/
false
1
t1_o8dl59m
Thank you for understanding my point of view. From the responses to my post, I’ve realized that this is not the right community to discuss these topics. It seems that many members lack the necessary background knowledge and are more casual AI enthusiasts than informed participants.
1
0
2026-03-03T08:04:15
CapitalShake3085
false
null
0
o8dl59m
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dl59m/
false
1
t1_o8dl4ki
There are so many OCR / document understanding models out there, here is my personal OCR list I try to keep up to date:
- GOT-OCR: https://huggingface.co/stepfun-ai/GOT-OCR2_0
- granite-docling-258m: https://huggingface.co/ibm-granite/granite-docling-258M
- MinerU 2.5: https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B
- OCRFlux: https://huggingface.co/ChatDOC/OCRFlux-3B
- MonkeyOCR-pro:
  - 1.2B: https://huggingface.co/echo840/MonkeyOCR-pro-1.2B
  - 3B: https://huggingface.co/echo840/MonkeyOCR-pro-3B
- FastVLM:
  - 0.5B: https://huggingface.co/apple/FastVLM-0.5B
  - 1.5B: https://huggingface.co/apple/FastVLM-1.5B
  - 7B: https://huggingface.co/apple/FastVLM-7B
- MiniCPM-V-4_5: https://huggingface.co/openbmb/MiniCPM-V-4_5
- GLM-4.1V-9B: https://huggingface.co/zai-org/GLM-4.1V-9B-Thinking
- InternVL3_5:
  - 4B: https://huggingface.co/OpenGVLab/InternVL3_5-4B
  - 8B: https://huggingface.co/OpenGVLab/InternVL3_5-8B
- AIDC-AI/Ovis2.5:
  - 2B: https://huggingface.co/AIDC-AI/Ovis2.5-2B
  - 9B: https://huggingface.co/AIDC-AI/Ovis2.5-9B
- RolmOCR: https://huggingface.co/reducto/RolmOCR
- Nanonets OCR: https://huggingface.co/nanonets/Nanonets-OCR2-3B
- dots OCR: https://huggingface.co/rednote-hilab/dots.ocr https://modelscope.cn/models/rednote-hilab/dots.ocr-1.5
- olmocr 2: https://huggingface.co/allenai/olmOCR-2-7B-1025
- Light-On-OCR: https://huggingface.co/lightonai/LightOnOCR-2-1B
- Chandra: https://huggingface.co/datalab-to/chandra
- GLM 4.6V Flash: https://huggingface.co/zai-org/GLM-4.6V-Flash
- Jina vlm: https://huggingface.co/jinaai/jina-vlm
- HunyuanOCR: https://huggingface.co/tencent/HunyuanOCR
- bytedance Dolphin 2: https://huggingface.co/ByteDance/Dolphin-v2
- PaddleOCR-VL: https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5
- Deepseek OCR 2: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
- GLM OCR: https://huggingface.co/zai-org/GLM-OCR
- Nemotron OCR: https://huggingface.co/nvidia/nemotron-ocr-v1
1
0
2026-03-03T08:04:04
Mkengine
false
null
0
o8dl4ki
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8dl4ki/
false
1
t1_o8dl40m
How the hell is a 27b dense model better than the 122b a10b????
1
0
2026-03-03T08:03:55
Significant_Fig_7581
false
null
0
o8dl40m
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dl40m/
false
1
t1_o8dkwrw
That is good to know! TBH, I didn't know what Open Weights meant. And thank you for that breakdown!
1
0
2026-03-03T08:02:00
Odd-Aside456
false
null
0
o8dkwrw
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dkwrw/
false
1
t1_o8dkw73
Most frontends discard the CoT from context
1
0
2026-03-03T08:01:52
MerePotato
false
null
0
o8dkw73
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dkw73/
false
1
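For anyone wondering what "discarding the CoT" looks like in practice, here is a minimal sketch; it assumes the reasoning is wrapped in <think>...</think> tags (different models use different markers) and strips those spans from earlier assistant turns before the history is sent back to the model.

```python
import re

# Matches a reasoning block and any trailing whitespace after it.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_cot(history: list[dict]) -> list[dict]:
    # Remove reasoning spans from prior assistant turns so they don't
    # consume context on the next request; user turns are left untouched.
    cleaned = []
    for msg in history:
        if msg["role"] == "assistant":
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"])}
        cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": "<think>The user greeted me...</think>Hi! How can I help?"},
]
print(strip_cot(history)[1]["content"])  # -> "Hi! How can I help?"
```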
t1_o8dkw83
It likely can. You probably should try.
1
0
2026-03-03T08:01:52
kaisurniwurer
false
null
0
o8dkw83
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dkw83/
false
1
t1_o8dkt2t
Also 27b is slow!
1
0
2026-03-03T08:01:04
johakine
false
null
0
o8dkt2t
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8dkt2t/
false
1
t1_o8dksho
Your app is dope! 😎 One question: how can I use it as an OpenAI-compatible API provider like llama.cpp or Ollama?
1
0
2026-03-03T08:00:55
RIP26770
false
null
0
o8dksho
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dksho/
false
1
t1_o8dks2h
Use Qwen3-Coder-Next and wait for Qwen3.5 coder
1
0
2026-03-03T08:00:49
ExistingAd2066
false
null
0
o8dks2h
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dks2h/
false
1
t1_o8dkq42
PocketPal works well and can download models from HuggingFace. I have no relationship with the app aside from using it occasionally.
1
0
2026-03-03T08:00:18
nonother
false
null
0
o8dkq42
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dkq42/
false
1
t1_o8dknzr
Thanks for sharing. Great finding, and it would be fantastic if this holds true with larger/deeper models. I'm digging into this.
1
0
2026-03-03T07:59:44
NandaVegg
false
null
0
o8dknzr
false
/r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/o8dknzr/
false
1
t1_o8dklks
For Qwen3.5 35B we won't be updating the existing ones anymore. We will only be uploading new ones we didn't do before, e.g. Q8_0. For the rest we will be, however, especially for the tool-calling fixes
1
0
2026-03-03T07:59:07
yoracale
false
null
0
o8dklks
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dklks/
false
1
t1_o8dkh7t
How straightforward is llamacpp to set up and run? Genuine question before I dive in. One of the biggest selling points of Ollama is the convenience of it. One-liner to set up, OpenAI server completion format OOTB, great integration with many AI apps etc
1
0
2026-03-03T07:57:59
Bac-Te
false
null
0
o8dkh7t
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dkh7t/
false
1
t1_o8dkh5j
It's not the top rec due to the login requirements, I'm in the same boat, it only starts the service after login
1
0
2026-03-03T07:57:58
Everlier
false
null
0
o8dkh5j
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dkh5j/
false
1
t1_o8dkfid
Added this post to my f ollama copypasta (saved as a snippet in raycast for convenience, requesting everyone to save and share this everywhere you see ollama. Case in point - if you Ask reddit (the feature in the search) what's the recommended way to run local AI, it still has Ollama at the top, despite the fact that we've been shitting on it in this sub non-stop for the better part of the past year)

The snippet:

Use llama.cpp - the library they ripped off.

https://old.reddit.com/r/LocalLLaMA/comments/1pvjpmb/why_i_quit_using_ollama/
https://old.reddit.com/r/LocalLLaMA/comments/1mncrqp/ollama/
https://old.reddit.com/r/LocalLLaMA/comments/1ko1iob/ollama_violating_llamacpp_license_for_over_a_year/
1
0
2026-03-03T07:57:32
rm-rf-rm
false
null
0
o8dkfid
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dkfid/
false
1
t1_o8dke0i
> It's passing my reasoning and knowledge tests roughly at the level of R1 0528.

Mine are pretty heavily weighted to the humanities and soft sciences so if that's a weak point it's easy to drag a model down. But it did pretty poorly for me. Gemma 3 27b beat it in every category by a pretty good margin. That said, I am VERY grateful to have another dense model at that size with distinct output from the rest.
1
0
2026-03-03T07:57:09
toothpastespiders
false
null
0
o8dke0i
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dke0i/
false
1
t1_o8dkcdm
pls do not confuse "open source" and "open weights" models, Kimi is open weights not open source, because you have access to its weights only and do not have access to its training materials so you could not build the same LLM "from source". to run it at home at about 10 tokens per second, which is usable for single requests only, you'll need about $30k hardware, to run it for long complex tasks like agentic coding you'll need about $200k hardware which is not "local" as in "at home" but still "local" as in "on premises for a company".
1
0
2026-03-03T07:56:42
MelodicRecognition7
false
null
0
o8dkcdm
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dkcdm/
false
1
t1_o8dkate
That’s some real useful performance you have there. Thanks for sharing the details of your setup.
1
0
2026-03-03T07:56:17
Beautiful-Honeydew10
false
null
0
o8dkate
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8dkate/
false
1
t1_o8dk645
I'm mostly looking for ones that are 600 - 700 rn, but I can't buy for a couple weeks
1
0
2026-03-03T07:55:05
Lord_Curtis
false
null
0
o8dk645
false
/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8dk645/
false
1
t1_o8dk2qm
Hm.. interesting. I couldn't compile it due to a missing file.
1
0
2026-03-03T07:54:15
HighFlyingB1rd
false
null
0
o8dk2qm
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8dk2qm/
false
1
t1_o8dk0k7
A doctor will also say "hi" without thinking; actually saying hi before starting a new dialogue is quite common and polite. So now we are pretending that a bug is a feature?! Imagine if all of these models like GPT or Gemini behaved the same way as qwen3.5, what would be the cost of a response? Take an older model like the open source gpt-oss:20B: its response to such questions is almost instant, and even qwen3 is a speed of light compared to this one, though its deep reasoning may be a little bit weaker than qwen 3.5's. That's so funny when you try to push this behavior as a feature. You are gaining a few percent of advantage compared to older models with a 50% slower response and 50% more energy used, a very questionable tradeoff.
1
0
2026-03-03T07:53:40
Specialist-Chain-369
false
null
0
o8dk0k7
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dk0k7/
false
1
t1_o8djyyr
Reading docs say yes
1
0
2026-03-03T07:53:17
Holiday_Purpose_3166
false
null
0
o8djyyr
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8djyyr/
false
1
t1_o8djwjb
Haha yeah I stopped going to parties :/
1
0
2026-03-03T07:52:39
Icy-Degree6161
false
null
0
o8djwjb
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8djwjb/
false
1
t1_o8djror
I mean, the whole goal of a multi-agent system is that eventually, the whole group of agents converges toward the completion of the task you requested. What I usually see is that the orchestrator loses control of the agents (the AutoGen example), or the agents just loop without finishing. But the worst case is that everyone works fine within their own context, but together as a system, they loop around and override each other, rather than converging toward task completion. A few weeks ago I saw an empirical study on LLM multi-agent in software engineering. The study found that functional accuracy (test pass rate, essentially) actually drops in multi-agent design. However, there was a small improvement in non-quality attributes like code quality. I think the day when the big labs manage to solve this multi-agent coordination problem (like really solving it, not barely working), it's going to be a major change. Until then, multi-agent is fun to look at, but you need to keep an eye on them. Good point about using different models for different agents, btw.
1
0
2026-03-03T07:51:23
o0genesis0o
false
null
0
o8djror
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8djror/
false
1
t1_o8djo6y
In which way are you running inference and on which OS? Are you using docker or the strix halo toolboxes or self compiled binaries? I am also playing around with different quants of the 35b-a3b, but found radv to be the fastest (and rocm 7.2 tanking completely). Would love to get more insight into how others run inference on the strix halo platform.
1
0
2026-03-03T07:50:30
Frequent-Noise-8129
false
null
0
o8djo6y
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8djo6y/
false
1
t1_o8djnmy
Probably using Q2 with Ollama 🤡
1
0
2026-03-03T07:50:21
TacGibs
false
null
0
o8djnmy
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8djnmy/
false
1
t1_o8djhn8
very interesting take 🤔
1
0
2026-03-03T07:48:48
Old-Sherbert-4495
false
null
0
o8djhn8
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8djhn8/
false
1
t1_o8dje2u
So what’s the verdict folks, which quant/repo is the best for coding? Thanks
1
0
2026-03-03T07:47:54
JaatGuru
false
null
0
o8dje2u
false
/r/LocalLLaMA/comments/1rf9dey/running_qwen_35_122b_with_72gb_of_vram_setup_and/o8dje2u/
false
1
t1_o8djdmh
that's above my pay grade tbh. i just downloaded from the releases. but from what I've read, i guess it only could get it a little optimized, I'd love to know more if thats not the case. also the cache improves the prompt processing speed at a marginal cost, so i could keep it anyways.
1
0
2026-03-03T07:47:47
Old-Sherbert-4495
false
null
0
o8djdmh
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8djdmh/
false
1
t1_o8djcr4
Do you have any guides or resources on setting up qwen locally or are you just following the GitHub? I also have 6gb vram, 1660ti I think, and I was wondering if that slows down other processes on ur PC and what kind of latency you are getting 
1
0
2026-03-03T07:47:33
lasagna_lee
false
null
0
o8djcr4
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8djcr4/
false
1
t1_o8djava
May I ask in which cases the 9B is better than the 35B A3B model?
1
0
2026-03-03T07:47:04
soyalemujica
false
null
0
o8djava
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8djava/
false
1
t1_o8dja70
in open webui, you can create presets as a single drop down option. it just appears as 2 llm choices for the user.
1
0
2026-03-03T07:46:53
DeltaSqueezer
false
null
0
o8dja70
false
/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8dja70/
false
1
t1_o8dj9qf
Can I have my ozone and environments back?
1
0
2026-03-03T07:46:46
klipseracer
false
null
0
o8dj9qf
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8dj9qf/
false
1
t1_o8dj6iq
ctx-size doesn't change
1
0
2026-03-03T07:45:56
DeltaSqueezer
false
null
0
o8dj6iq
false
/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8dj6iq/
false
1
t1_o8diydl
you using the wrong model... gotta pay attention in class kiddo. there is 2 versions
1
0
2026-03-03T07:43:51
CrypticZombies
false
null
0
o8diydl
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8diydl/
false
1
t1_o8dixn6
what tools are you using? I am interested
1
0
2026-03-03T07:43:39
shing3232
false
null
0
o8dixn6
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8dixn6/
false
1
t1_o8dixng
List so far:
- cognee
- memori
- memU
- mubit
- memgpt/letta
- memobase

Can definitely see a pattern here with the names lol
1
0
2026-03-03T07:43:39
selund1
false
null
0
o8dixng
false
/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o8dixng/
false
1
t1_o8diw0n
It's actually another ape typing in the zoo
1
0
2026-03-03T07:43:14
TheLexoPlexx
false
null
0
o8diw0n
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8diw0n/
false
1
t1_o8diupr
The speed is about what I expect of them, but qwen 3.5 is much smarter. Granite 4.0 H isn't obsolete as it can run nicer on edge hardware and their LLMs are ISO certified, which is important for safety compliance in some fields. Also, why fix what isn't broken? If you use something which works well enough, replacing it can be more of a hassle.
1
0
2026-03-03T07:42:54
Kahvana
false
null
0
o8diupr
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8diupr/
false
1
t1_o8diu1s
sorry can't answer your exact question but could give some good advice for better results:
- do not use Ollama
- do not quantize KV cache
- do not use quantized multimedia projector file
1
0
2026-03-03T07:42:44
MelodicRecognition7
false
null
0
o8diu1s
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8diu1s/
false
1
t1_o8dis8q
+1 for RustDesk. Only used it for a few months on 10 year old Win10 system but it worked quite well for my needs
1
0
2026-03-03T07:42:16
layer4down
false
null
0
o8dis8q
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dis8q/
false
1
t1_o8dirxc
Will they release base model? I only see Base for the smaller ones :(
1
0
2026-03-03T07:42:11
nnxnnx
false
null
0
o8dirxc
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dirxc/
false
1
t1_o8dirul
I've seen you twice now slandering 3.5, you're really set on the agenda huh
1
0
2026-03-03T07:42:09
FUS3N
false
null
0
o8dirul
false
/r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8dirul/
false
1
t1_o8dinky
I personally use the open-source TTS model FishAudio‑S1‑mini. The audio it produces can be controlled for emotion through specific input tags or markers.
1
0
2026-03-03T07:41:03
Rosa-Starks
false
null
0
o8dinky
false
/r/LocalLLaMA/comments/1nouu70/best_open_source_tts_model_with_emotion_control/o8dinky/
false
1
t1_o8dilxh
Me? I use Kobo.
1
0
2026-03-03T07:40:37
TheLocalDrummer
false
null
0
o8dilxh
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dilxh/
false
1
t1_o8digam
Having vision means the model was trained on images, so has more knowledge of how things look.
1
0
2026-03-03T07:39:07
Thomas-Lore
false
null
0
o8digam
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8digam/
false
1
t1_o8diesu
3.5 27B seems to be good for text tasks, on par with Gemma 3 27B/Llama Ultra 70B, maybe even better. All of today's 35B MoE variants immediately fall into looping, sadly, either on thinking or after 2-3 text blocks.
1
0
2026-03-03T07:38:44
Solembumm2
false
null
0
o8diesu
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8diesu/
false
1
t1_o8di5za
thanks! you actually can use offline models out of the box for both stt and llm as they are already compatible with pipecat
1
0
2026-03-03T07:36:25
kuaythrone
false
null
0
o8di5za
false
/r/LocalLLaMA/comments/1pmhqyf/open_source_ai_voice_dictation_app_with_a_fully/o8di5za/
false
1
t1_o8di5l9
Do you people not use llms? If you want brevity, just ask for it.
1
0
2026-03-03T07:36:19
Thomas-Lore
false
null
0
o8di5l9
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8di5l9/
false
1
t1_o8di41w
What are you running it on, and what params?
1
0
2026-03-03T07:35:54
TheTerrasque
false
null
0
o8di41w
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8di41w/
false
1
t1_o8di1d6
This is the first model that truly works well with opencode locally on my machine that doesn't have a GPU (but has 128GB RAM 😅)
1
0
2026-03-03T07:35:12
octopus_limbs
false
null
0
o8di1d6
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8di1d6/
false
1
t1_o8dhwec
please keep the slop to yourself sir
1
0
2026-03-03T07:33:53
Rekkukk
false
null
0
o8dhwec
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dhwec/
false
1
t1_o8dhvbu
Appreciate the detailed response, your wisdom and the humor of selling solutions looking for problems. I don't understand what you mean about converging towards a goal. In my understanding, you'd want agents that are tasked with the same problem to be as cognitively diverse as possible, and adversarial. In my own work with MoE and multiple users of the same LLM server, agent threads don't interact, so MoE leads to greater throughput of a processing pipeline.
1
0
2026-03-03T07:33:36
PentagonUnpadded
false
null
0
o8dhvbu
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dhvbu/
false
1
t1_o8dhuqp
There's another requant coming for all unsloth Qwen3.5 models, very soon: [https://www.reddit.com/r/unsloth/comments/1riy7et/comment/o89ca46/](https://www.reddit.com/r/unsloth/comments/1riy7et/comment/o89ca46/) So if you are on a slow or limited network connection, you might want to hold off until those land.
1
0
2026-03-03T07:33:26
spaceman_
false
null
0
o8dhuqp
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dhuqp/
false
1
t1_o8dhof0
It's merged now, thank you for this info, tool calls now working wonderfully 🙏 On the bf16 27b model and 4x3090 + driver patch I'm getting ~101t/s. PS: vllm also needs a patch to use the PCIe bus for GPU interconnect after the driver patch
1
0
2026-03-03T07:31:46
RS_n
false
null
0
o8dhof0
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8dhof0/
false
1
t1_o8dhmoo
I feel like it might be worth using smaller models, like qwen3-vl 4b via vllm. You can process multiple documents at once, instead of using the larger models with llama.cpp. This allows you to iterate through multiple documents faster while setting up your environment. You literally need to check hundreds of extractions before you can even say the thing works reliably. Hence, it's much better to have 100 extractions done in a minute in parallel instead of having 100 sequential extractions running for 100 minutes just to end up deciding that you need to adjust the prompt. Qwen3 4b can be quite capable for the extraction part; I can strongly recommend it. Running it on a 3060 with 12gb right now with plenty of parallel requests and pretty decent speed.
1
0
2026-03-03T07:31:18
Njee_
false
null
0
o8dhmoo
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8dhmoo/
false
1
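A rough sketch of that parallel-extraction workflow, assuming a vLLM server exposing its OpenAI-compatible API at a hypothetical localhost address; the model id, prompt, and concurrency limit are placeholders, not something the commenter specified.

```python
import asyncio
from openai import AsyncOpenAI

# Assumed vLLM OpenAI-compatible endpoint; adjust host/port to your setup.
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
SEM = asyncio.Semaphore(16)  # cap in-flight requests; tune to your GPU

async def extract(doc_text: str) -> str:
    async with SEM:
        resp = await client.chat.completions.create(
            model="Qwen/Qwen3-VL-4B-Instruct",  # placeholder model id
            messages=[
                {"role": "system", "content": "Extract the fields as JSON."},
                {"role": "user", "content": doc_text},
            ],
        )
        return resp.choices[0].message.content

async def main(docs: list[str]) -> list[str]:
    # All documents are processed concurrently instead of one after another.
    return await asyncio.gather(*(extract(d) for d in docs))

results = asyncio.run(main(["invoice 1 ...", "invoice 2 ..."]))
print(results)
```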
t1_o8dhl5e
22 tps on a 4060 Ti 16GB with Q8_K_XL is actually really solid. The 4060 Ti has ~288 GB/s memory bandwidth, and that is your real ceiling for token generation. Every GB you shave off the model weight directly translates to more tokens per second. The biggest untapped win for you: Q8_K_XL is 12GB but Q5_K_M lands around 6.6GB. That frees up ~5.5GB of VRAM, which you can use to remove KV cache quantization entirely (drop --cache-type-k and --cache-type-v flags and let it use f16). Quality difference between Q8 and Q5_K_M on Qwen 3.5 is basically imperceptible for most tasks. You could realistically hit 40+ tps just by dropping the model quant. The context size not affecting speed much is expected behavior with Qwen 3.5 architecture, that part is legit. You are basically already near the ceiling for Q8 on that card.
1
0
2026-03-03T07:30:53
ElectricalOpinion639
false
null
0
o8dhl5e
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dhl5e/
false
1
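The bandwidth ceiling described above is easy to sanity-check with the numbers quoted in the comment (288 GB/s, ~12 GB for Q8_K_XL, ~6.6 GB for Q5_K_M). This is only a back-of-the-envelope upper bound that ignores KV cache reads and other overhead.

```python
def tg_ceiling(bandwidth_gb_s: float, weights_gb: float) -> float:
    # Each generated token has to stream (roughly) all weights from VRAM once,
    # so bandwidth / weight size gives an optimistic tokens-per-second ceiling.
    return bandwidth_gb_s / weights_gb

for name, size_gb in [("Q8_K_XL", 12.0), ("Q5_K_M", 6.6)]:
    print(f"{name}: ~{tg_ceiling(288.0, size_gb):.0f} tok/s ceiling")
# Q8_K_XL: ~24 tok/s ceiling (the observed 22 tps is already close to it)
# Q5_K_M:  ~44 tok/s ceiling (consistent with the 40+ tps estimate above)
```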
t1_o8dhdxd
Its hard to keep it under 3TB of models these days.
1
0
2026-03-03T07:28:58
lemondrops9
false
null
0
o8dhdxd
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dhdxd/
false
1
t1_o8dhb3v
I speak for my VRAM-self. Only **8**GB VRAM. Granite4's Small 32B model is not fast on my VRAM as that model's active parameter count is **9**B, which is bigger than my VRAM. For example, Qwen3-30B MOE(**A3B**) gives 35-40 t/s, while Granite4-Small(**A9B**) gives me only 10-15 t/s. If you have bigger VRAM (at least 12-16GB), that model would run faster. Anyway I still use Granite3's 8B model, which is a good one.
1
0
2026-03-03T07:28:15
pmttyji
false
null
0
o8dhb3v
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8dhb3v/
false
1
t1_o8dha62
I was under the impression that more parameters implied less hallucination, for the models are more "grounded", and that the "ADHD" is in fact a limitation of context size, and inevitably, KV cache issues as context size is reached, and discarded, unless some kind of memory snapshotting is done to pin the answer(s). This would affect frontier models as well though.
1
0
2026-03-03T07:28:01
InsensitiveClown
false
null
0
o8dha62
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8dha62/
false
1
t1_o8dh9vz
[removed]
1
0
2026-03-03T07:27:56
[deleted]
true
null
0
o8dh9vz
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o8dh9vz/
false
1
t1_o8dh483
Guess it’ll just be to fuck around with in that case
1
0
2026-03-03T07:26:28
tableball35
false
null
0
o8dh483
false
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8dh483/
false
1
t1_o8dgz14
Unsloth released a new gguf to fix this issue earlier today. Re-download.
1
0
2026-03-03T07:25:06
3spky5u-oss
false
null
0
o8dgz14
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8dgz14/
false
1
t1_o8dgw18
What problems do you face? Just tested it briefly, seemed to work just fine. https://github.com/Danmoreng/local-qwen3-coder-env
1
0
2026-03-03T07:24:19
Danmoreng
false
null
0
o8dgw18
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dgw18/
false
1
t1_o8dgrst
Why do people persist in using ollama when you can get better results and support in llama.cpp? Blows my mind
1
0
2026-03-03T07:23:11
sagiroth
false
null
0
o8dgrst
false
/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/o8dgrst/
false
1
t1_o8dgnqa
it depends on your exact tasks but in general you need around a 500B model to replace the cloud ones, and to run 500B locally you'll need at least a quad or better octo GPU setup, not just dual. But for the simpler tasks even a <50B model could "replace" the cloud and will work on a single or dual GPU.
1
0
2026-03-03T07:22:09
MelodicRecognition7
false
null
0
o8dgnqa
false
/r/LocalLLaMA/comments/1rjaliw/what_llm_to_replace_claude_35_sonnet_for_server/o8dgnqa/
false
1
t1_o8dgnhy
[removed]
1
0
2026-03-03T07:22:05
[deleted]
true
null
0
o8dgnhy
false
/r/LocalLLaMA/comments/1jhiail/uncensored_image_generator/o8dgnhy/
false
1
t1_o8dgn69
> Would Ollama + anythingLLM + qwen 3.5 (27b?) a good combo for my needs?

You could manage only with IQ4_XS of 27B at decent t/s as it's a dense model. Agree with the other comment, go with the 35B MOE model which is the faster one.
1
0
2026-03-03T07:22:00
pmttyji
false
null
0
o8dgn69
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8dgn69/
false
1