name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o89ak8k
You chose old models. The new ones are much better; go for Qwen3.5 27B: [https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q4_K_XL.gguf](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q4_K_XL.gguf)
1
0
2026-03-02T17:02:44
Skyline34rGt
false
null
0
o89ak8k
false
/r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/o89ak8k/
false
1
t1_o89aicm
Funny that you ask. I didn't actually make it myself... AI did!
1
0
2026-03-02T17:02:29
themoregames
false
null
0
o89aicm
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89aicm/
false
1
t1_o89aekw
Are you actually thick? The benchmark uses synthetic tensors intentionally, and this is standard practice in attention kernel research. FlashAttention, xFormers, Triton reference kernels, and PyTorch’s own attention performance tests all benchmark with randomly initialized tensors. This is because **kernel runtime depends on tensor shapes, memory layout, and arithmetic intensity, not on the specific token values or trained weights**. (A minimal sketch of this point follows below.)
1
0
2026-03-02T17:01:58
Upset-Presentation28
false
null
0
o89aekw
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89aekw/
false
1
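A minimal sketch of the point above, assuming PyTorch is installed: the attention kernel's runtime is governed by tensor shapes, dtype, and memory layout, so random and constant-valued inputs time out essentially the same. The shapes below are illustrative, not taken from any particular benchmark.

```python
import time
import torch
import torch.nn.functional as F

def time_attention(q, k, v, iters=50):
    # Warm up, then average the kernel time over several iterations.
    for _ in range(5):
        F.scaled_dot_product_attention(q, k, v)
    if q.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        F.scaled_dot_product_attention(q, k, v)
    if q.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
# Illustrative decode-style shape: batch 1, 32 heads, 1 query token, 4096 cached positions.
q_shape, kv_shape = (1, 32, 1, 128), (1, 32, 4096, 128)

random_in = [torch.randn(s, device=device, dtype=dtype) for s in (q_shape, kv_shape, kv_shape)]
constant_in = [torch.full(s, 0.5, device=device, dtype=dtype) for s in (q_shape, kv_shape, kv_shape)]

print("random values:  ", time_attention(*random_in))
print("constant values:", time_attention(*constant_in))
```

The two timings should agree to within noise, which is why synthetic tensors are standard for kernel benchmarks.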
t1_o89aaap
More parameters always wins, this has been proved time and time again.
1
0
2026-03-02T17:01:24
jonydevidson
false
null
0
o89aaap
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89aaap/
false
1
t1_o89a4hp
Why use literally the same colors with different shades when you have like 20 other colors
1
0
2026-03-02T17:00:38
ghulamalchik
false
null
0
o89a4hp
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89a4hp/
false
1
t1_o89a48b
LMFAO this is so unbelievable. Guarantee 90% of “apps” are like this. Something these models do constantly is: “I’ll add a catch to use random data in case the real data doesn't load”.
1
0
2026-03-02T17:00:36
pantalooniedoon
false
null
0
o89a48b
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o89a48b/
false
1
t1_o89a3k3
Honestly I’m just waiting for SWE Rebench to come out. I’ve been running 122b, it’s good enough for what I’ve thrown at it but I’m not sure if it’s worth upgrading to 397b
1
0
2026-03-02T17:00:30
TheRealSerdra
false
null
0
o89a3k3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89a3k3/
false
1
t1_o89a20f
It already works with llama.cpp, so you might want to just use that
1
0
2026-03-02T17:00:18
DarkWolfX2244
false
null
0
o89a20f
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89a20f/
false
1
t1_o89a15p
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-02T17:00:11
WithoutReason1729
false
null
0
o89a15p
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89a15p/
true
1
t1_o89a00q
Out-of-the-box experience will be slow.
1
0
2026-03-02T17:00:03
qwen_next_gguf_when
false
null
0
o89a00q
false
/r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/o89a00q/
false
1
t1_o899wiz
Given the research behind it, all of these are *not* good solutions. I haven't actually found a way around it, and am hoping posting might help prioritize getting these fixed. 1. **Don't pass** `tools` **in the API request** — put tool descriptions in the system prompt instead, get raw text back, strip `<think>...</think>` yourself, parse tool calls from what remains. Bypasses LM Studio's parser entirely. But that's a massive architectural change for someone on LangGraph's standard tool calling path. (A sketch of this workaround follows below.) 2. **Use a different inference server** — Ollama strips think blocks before parsing (as noted in #1589). But then you're not on LM Studio. 3. **Disable reasoning** — works, loses quality. 4. **Wait for the fix** — not a workaround.
1
0
2026-03-02T16:59:35
One-Cheesecake389
false
null
0
o899wiz
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o899wiz/
false
1
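A minimal sketch of workaround 1 above, assuming the model is prompted to emit its tool call as a bare JSON object with "name" and "arguments" keys after its reasoning; that output convention is an assumption for illustration, not something LM Studio or LangGraph defines.

```python
import json
import re

def extract_tool_call(raw_text: str):
    # Remove reasoning blocks so the downstream parser never sees them.
    visible = re.sub(r"<think>.*?</think>", "", raw_text, flags=re.DOTALL).strip()
    # Assumed convention: the model emits the call as a bare JSON object with
    # "name" and "arguments" keys; adjust to whatever format you prompt for.
    match = re.search(r"\{.*\}", visible, flags=re.DOTALL)
    if not match:
        return None, visible
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None, visible
    if {"name", "arguments"} <= call.keys():
        return call, visible
    return None, visible

raw = '<think>user wants weather, call the tool</think>{"name": "get_weather", "arguments": {"city": "Oslo"}}'
call, text = extract_tool_call(raw)
print(call)  # {'name': 'get_weather', 'arguments': {'city': 'Oslo'}}
```

Because the raw completion is requested without `tools`, LM Studio's parser is never involved; the trade-off is that tool schemas, validation, and retries all move into your own code.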
t1_o899pj7
You are using a multiline shell command but are missing a \ before --reasoning-budget 0.
1
0
2026-03-02T16:58:39
spirkaa
false
null
0
o899pj7
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o899pj7/
false
1
t1_o899m1f
Without sarcasm, I propose a vibeslop benchmark that takes those AI-electrojiving-fever-dream repos and sees how well models can correctly distinguish between slop repos and real repos. It would test long-context code reading comprehension over multiple (in many cases, many) files.
1
0
2026-03-02T16:58:12
NandaVegg
false
null
0
o899m1f
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o899m1f/
false
1
t1_o899hzw
[removed]
1
0
2026-03-02T16:57:40
[deleted]
true
null
0
o899hzw
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o899hzw/
false
1
t1_o899hsx
The question is what happens after you detect something. eBPF gives you the visibility, but if the agent is running in a container with shared kernel access, by the time you've observed the bad syscall it's already executed. the pairing that actually works is kernel-level observability (what you built) + hardware-isolated execution (microVM with its own kernel) so the blast radius is contained before your tracer even fires. observation without containment is a dashcam — you get great footage of the crash but you're still in the crash. Have you tested azazel against agents running in microVMs vs containers? curious if the hook points behave differently when the agent has its own kernel.
1
0
2026-03-02T16:57:39
GarbageOk5505
false
null
0
o899hsx
false
/r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/o899hsx/
false
1
t1_o899eyb
useful for visual agent testing but docker is doing the heavy lifting on isolation here, and docker shares the host kernel. if the VLM decides to execute something that triggers a kernel exploit, the container boundary doesn't help. for untrusted agent actions, hypervisor-level isolation (microVM with its own kernel) is the gap between "sandbox for development" and "sandbox for production."
1
0
2026-03-02T16:57:16
GarbageOk5505
false
null
0
o899eyb
false
/r/LocalLLaMA/comments/1r4ntp2/localfirst_computeruse_agent_sandbox_docker_xfce/o899eyb/
false
1
t1_o899cdj
Ooh, the problem is that you're sending a simple “hi” to a reasoning model; this is known to happen. Unless you're sending complex questions, use the instruct variant as needed!
1
0
2026-03-02T16:56:56
Unsharded1
false
null
0
o899cdj
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o899cdj/
false
1
t1_o899bww
That's amazing! How did you make it?
1
0
2026-03-02T16:56:53
Long_comment_san
false
null
0
o899bww
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o899bww/
false
1
t1_o8996au
gguf ?
1
0
2026-03-02T16:56:09
Gold_Sugar_4098
false
null
0
o8996au
false
/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o8996au/
false
1
t1_o8992ca
Ok, but using Codex to clean up code doesn't make the code wrong or invalidate the benchmark so I don't understand what your point is.
1
0
2026-03-02T16:55:37
Upset-Presentation28
false
null
0
o8992ca
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o8992ca/
false
1
t1_o898z4w
Everything is possible if you lie on the internet
1
0
2026-03-02T16:55:11
KriosXVII
false
null
0
o898z4w
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o898z4w/
false
1
t1_o898y7r
Wouldn't Ralph loops solve for at least some of this? I haven't tried it yet, but from what I've read, it's basically designed to solve exactly this. It has a supervisor model that tells the agent that's doing the actual coding how to handle the specific discrete tasks. So it would take the long-horizon tool calling issue, and would take away the need for very long context windows except for the supervising model, so you can conserve context window space by only giving it the context that any specific model needs to know. This is more of a question than a statement though I guess. I *think* that's how it would work, but I'm a total noob in this domain, so I'm trying to learn.
1
0
2026-03-02T16:55:04
TripleSecretSquirrel
false
null
0
o898y7r
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o898y7r/
false
1
t1_o898us5
Talking to you is a waste of my time.
1
0
2026-03-02T16:54:36
Impossible-Glass-487
false
null
0
o898us5
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o898us5/
false
1
t1_o898ti6
Thanks
1
0
2026-03-02T16:54:26
anubhav_200
false
null
0
o898ti6
false
/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o898ti6/
false
1
t1_o898paj
Qwen3.5 4b is out!
1
0
2026-03-02T16:53:53
HaysamKING1
false
null
0
o898paj
false
/r/LocalLLaMA/comments/1naqln5/how_is_qwen3_4b_this_good/o898paj/
false
1
t1_o898ov0
The script is a targeted decode-attention benchmark for paged KV caches, not a full end-to-end model benchmark, and it doesn't describe itself as one. It uses real HF configs to get attention shapes and includes the pack-to-dense cost that paged-KV systems pay. You're more than welcome to work on an end-to-end HF forward appendix + a flash-attn kvcache baseline when available. Calm down, nobody is here to steal your precious upvotes lmao.
1
0
2026-03-02T16:53:50
Upset-Presentation28
false
null
0
o898ov0
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o898ov0/
false
1
t1_o898ovt
Can we please report such slop posts?
1
0
2026-03-02T16:53:50
FullstackSensei
false
null
0
o898ovt
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o898ovt/
false
1
t1_o898i4e
Yes
1
0
2026-03-02T16:52:56
DarkWolfX2244
false
null
0
o898i4e
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o898i4e/
false
1
t1_o898hsy
Question is why you're spamming the thread with "I am about to load it..." if you are not willing to contribute anything to the discussion?
1
0
2026-03-02T16:52:53
reddit0r_123
false
null
0
o898hsy
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o898hsy/
false
1
t1_o898fhe
...that's what it seems like
1
0
2026-03-02T16:52:35
Impossible-Glass-487
false
null
0
o898fhe
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o898fhe/
false
1
t1_o898fiv
Hey, I looked at your work and it's interesting. I'm also working on fine-tuning gpt-OSS, so I just want to talk, and talk in depth, about fine-tuning models for AI tools.
1
0
2026-03-02T16:52:35
tanishkParmar
false
null
0
o898fiv
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o898fiv/
false
1
t1_o898aq9
https://imgur.com/a/T8bBM6x
1
0
2026-03-02T16:51:58
themoregames
false
null
0
o898aq9
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o898aq9/
false
1
t1_o8989s7
Unpopular opinion: I like this type of post. It's both a challenge to see if my common sense still works, and fun to read real nerds politely rant about interesting tech stuff.
1
0
2026-03-02T16:51:49
xadiant
false
null
0
o8989s7
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o8989s7/
false
1
t1_o898270
Thanks. I'll be doing an overnight download of the new Unsloth 4B GGUF tonight (3.25Gb, but slow Internet), so I'll try that one first I think.
1
0
2026-03-02T16:50:47
optimisticalish
false
null
0
o898270
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o898270/
false
1
t1_o8981mx
I tried the 9B on 8GB VRAM and 32GB RAM. The problem is context. I can offload some of it to the CPU but then it gets really slow. I managed to get 256k context (the max) but it was 5-7 tk/s. What's the point then? Then I tried to fit it entirely in GPU and it's fast, but context is 64k. I mean, I compared it to my other 64k model, 35B A3B optimised for 65k, and I got 32 tk/s and a smarter model, so it kinda defeats the purpose for me to use the 7B model just for raw speed. Just my observations. The A3B model is fantastic at agentic work and tool calling, but again it's not all for fun right now. Context is limiting.
1
0
2026-03-02T16:50:42
sagiroth
false
null
0
o8981mx
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8981mx/
false
1
t1_o89811c
Please show me a working Qwen 3.5 397b with Heretic and regular ablation working. I’d be willing to wager money in an actual escrow if you feel so confident about this.
1
0
2026-03-02T16:50:37
dealignai
false
null
0
o89811c
false
/r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o89811c/
false
1
t1_o897v1m
Prompt processing (i.e. ingesting docs) will be unusably slow on CPU.
1
0
2026-03-02T16:49:50
koushd
false
null
0
o897v1m
false
/r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o897v1m/
false
1
t1_o897ta2
I just happened to test it rn for fun... I was so shocked to see it has such high accuracy for handwritten stuff, Qwen3.5 2b at Q8. I tried VL 4b at Q8 for comparison and it did so poorly.
1
0
2026-03-02T16:49:36
BalStrate
false
null
0
o897ta2
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o897ta2/
false
1
t1_o897sxr
it's prohibitively costly to train a massive dense model that's why no one would do it now. The last one I saw is the llama3-400B
1
0
2026-03-02T16:49:34
shing3232
false
null
0
o897sxr
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o897sxr/
false
1
t1_o897s3z
You can use llama-swap for this
1
0
2026-03-02T16:49:27
MoodyPurples
false
null
0
o897s3z
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o897s3z/
false
1
t1_o897mjb
No. Stick to giving it small, well-defined tasks like "implement a function that does xyz" through a chat interface; you'll get usable results much more reliably, without the overhead of your machine having to process the enormous system prompt that agentic coding tools use.
1
0
2026-03-02T16:48:43
Your_Friendly_Nerd
false
null
0
o897mjb
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o897mjb/
false
1
t1_o897jpg
Do you even understand what your "benchmark" does? Did you even bother to read the comments that the agent left for you? Do you even know what a benchmark is? > Unsloth-style hero benchmark for **decode-time attention** over a **paged KV cache**. > We only download HF *configs* (fast). We do **not** download weights. > Baseline will show OOM at large contexts (that is part of the story). What a muppet.
1
0
2026-03-02T16:48:20
ResidentPositive4122
false
null
0
o897jpg
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o897jpg/
false
1
t1_o897hz4
People are failing to understand that I wouldn't give out any of the files even if I was paid. I'm closing and taking this down.
1
0
2026-03-02T16:48:06
dealignai
false
null
0
o897hz4
false
/r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o897hz4/
false
1
t1_o897hj8
It needs to remain coherent at massive 100k+ contexts and a 9B is gonna struggle with that.
1
0
2026-03-02T16:48:03
__JockY__
false
null
0
o897hj8
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o897hj8/
false
1
t1_o897f6x
AI hallucination. All scripts in `./mezzanine/` have emojis ✅ and ❌, and perhaps the only manually written script, `./numerical_visualiser/common.py`, replaces these emojis to present the text in normal form: ... verdict = verdict.replace("✅", "OK").replace("❌", "X") return f"{prefix}: gap_mse {fmt_metric(teacher_gap)} \u2192 {fmt_metric(student_gap)} | verdict: {verdict}"
1
0
2026-03-02T16:47:44
MelodicRecognition7
false
null
0
o897f6x
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o897f6x/
false
1
t1_o897c9e
lol. Yeah, pretty much every LLM will immediately assume your modern hardware doesn't actually exist if its training cut-off date is 2024 or early 2025. The answer shows something I see often in LLMs, which is a failure to actually know what a number means. For example, you can absolutely run an unquantized (FP16) Mistral 7B on a 16GB GPU, let alone Q8. Its suggested quantizations are too aggressive. The FP16 model only uses 14.5 GB of VRAM. Now, do I *recommend* actually doing that? No. Stick with Q8 or lower. A 16GB card can run Qwen 3.5 27B at IQ4_XS, although that's cutting it close as it's about 15GB. Q3 would fit more context. Your actual best bet is Qwen3.5 35B-A3B at Q4 or Q5. You could offload some of the experts to RAM and have the rest in VRAM and still get respectable speeds. (A rough sketch of the VRAM arithmetic follows below.)
1
0
2026-03-02T16:47:21
Kamal965
false
null
0
o897c9e
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o897c9e/
false
1
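A rough sketch of the VRAM arithmetic in the comment above: weight memory is roughly parameter count times bytes per weight, ignoring KV cache, activations, and runtime overhead. The bytes-per-weight figures for the quantized formats are approximations, not exact GGUF sizes.

```python
# Rough weight-only memory estimate: params * bytes-per-weight, in decimal GB.
# Ignores KV cache, activations, and runtime overhead, so treat it as a floor.
BYTES_PER_WEIGHT = {"FP16": 2.0, "Q8_0": 1.06, "Q4 (approx.)": 0.56}

def weight_gb(params_billion: float, fmt: str) -> float:
    # 1e9 params * bytes-per-weight / 1e9 bytes-per-GB simplifies to params_billion * bytes.
    return params_billion * BYTES_PER_WEIGHT[fmt]

for model, params_b in [("Mistral 7B", 7.2), ("Qwen3.5 27B", 27.0)]:
    for fmt in BYTES_PER_WEIGHT:
        print(f"{model:12s} {fmt:12s} ~{weight_gb(params_b, fmt):5.1f} GB")
# Mistral 7B at FP16 lands around 14.4 GB, consistent with the "fits on a 16GB card" point.
```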
t1_o897bre
I say just try it. It's such a small model. Quick to download 
1
0
2026-03-02T16:47:17
deadman87
false
null
0
o897bre
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o897bre/
false
1
t1_o8977kx
That's just `temperature=2.0`
1
0
2026-03-02T16:46:44
FriskyFennecFox
false
null
0
o8977kx
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8977kx/
false
1
t1_o8977j1
Is image input working for you on llama.cpp?
1
0
2026-03-02T16:46:43
InternationalNebula7
false
null
0
o8977j1
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8977j1/
false
1
t1_o8973ph
It's bad. It doesn't work well with Cline (hallucinations).
1
0
2026-03-02T16:46:14
adellknudsen
false
null
0
o8973ph
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8973ph/
false
1
t1_o8973js
It's probably some parameter I have screwed up, but I can't get it to correctly execute a single tool call in opencode. Just tells me it's doing it and never does it
1
0
2026-03-02T16:46:12
IronColumn
false
null
0
o8973js
false
/r/LocalLLaMA/comments/1r074pg/good_local_llm_for_tool_calling/o8973js/
false
1
t1_o8972e1
Fedora for sure, you want modern versions of packages and kernels. Then check out Donato's content for easy setup of ROCm/Vulkan: [https://strix-halo-toolboxes.com/](https://strix-halo-toolboxes.com/) Llama-swap is super handy for model management, make sure to put it in router mode, it defaults to swap mode (one model at a time).
1
0
2026-03-02T16:46:03
JamesEvoAI
false
null
0
o8972e1
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o8972e1/
false
1
t1_o8970dx
Yeah but thinking helps the output quality, so I'm more interested in the structure of the thinking and bridging the gap until a fix is made.
1
0
2026-03-02T16:45:47
nicholas_the_furious
false
null
0
o8970dx
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8970dx/
false
1
t1_o896z36
> The "base" model I think you refer to is the pt (pretrain). Yes. Calling the pretrained model the "base" model has been the convention since about 2023.
1
0
2026-03-02T16:45:37
ttkciar
false
null
0
o896z36
false
/r/LocalLLaMA/comments/1kn6mic/qwen_25_vs_qwen_3_vs_gemma_3_real_world_base/o896z36/
false
1
t1_o896uky
You can read about specific problems with the 394B model and proposed solutions [here](https://huggingface.co/spaces/dealignai/GateBreaker-MoE-Safety).
1
0
2026-03-02T16:45:02
Prettyism_Forever_99
false
null
0
o896uky
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o896uky/
false
1
t1_o896st5
It actually works perfectly with thinking/reasoning switched off - that's an important thing to point out.
1
0
2026-03-02T16:44:48
One-Cheesecake389
false
null
0
o896st5
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o896st5/
false
1
t1_o896riq
That's fine, those would be my thoughts too, which is why I included a benchmark that anyone can run themselves. [https://github.com/leochlon/mezzanine/blob/main/mezzanine/kernels/bench\_paged\_gqa\_decode.py](https://github.com/leochlon/mezzanine/blob/main/mezzanine/kernels/bench_paged_gqa_decode.py)
1
0
2026-03-02T16:44:38
Upset-Presentation28
false
null
0
o896riq
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o896riq/
false
1
t1_o896pbv
That depends entirely on the models you're running. If you use the fast/distill whisper alongside a small model like PocketTTS, you can get pretty reasonable latency. I had it wired up in a Discord call and we were getting around 1-2 seconds latency in responses
1
0
2026-03-02T16:44:21
JamesEvoAI
false
null
0
o896pbv
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o896pbv/
false
1
t1_o896moj
Since you mention llama.cpp: a closely related bug in that codebase underlies similar behavior to all of these. It's specifically related to the Harmony template and affects LM Studio through the llama.cpp engine: [https://github.com/ggml-org/llama.cpp/discussions/15341](https://github.com/ggml-org/llama.cpp/discussions/15341)
1
0
2026-03-02T16:44:00
One-Cheesecake389
false
null
0
o896moj
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o896moj/
false
1
t1_o896m0n
It used to get triggered when there was similar looking text in the image and then model used to get stuck in a repetitive loop, Gemma was much better in this case
1
0
2026-03-02T16:43:54
xyzmanas
false
null
0
o896m0n
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o896m0n/
false
1
t1_o896hak
Asking in a general sense, but how do you get the data to fine-tune the model on for things like this?
1
0
2026-03-02T16:43:16
Artistic_Swing6759
false
null
0
o896hak
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o896hak/
false
1
t1_o8968h5
It's beyond broken when used with something like Open WebUI; it requires more time to set up than I have available. The Qwen 3.5 9B is insane at it anyway.
1
0
2026-03-02T16:42:07
bapirey191
false
null
0
o8968h5
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8968h5/
false
1
t1_o8968cc
27B ran like absolute garbage on my RX 6800 (potato but a 16gb VRAM potato), 35B-A3B was much better in comparison even with higher quant.
1
0
2026-03-02T16:42:06
mrstrangedude
false
null
0
o8968cc
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8968cc/
false
1
t1_o8967qd
If you're trying to run GLM 4.7 Flash you're going to need a minimum of 40GB of VRAM to run the Q6 quant
1
0
2026-03-02T16:42:01
JamesEvoAI
false
null
0
o8967qd
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o8967qd/
false
1
t1_o89677z
How so? I haven't had any issues with the GLM OCR layouts, I've actually found them to be really good. Do you have any examples?
1
0
2026-03-02T16:41:57
root_klaus
false
null
0
o89677z
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89677z/
false
1
t1_o8966wi
AI hallucination and/or measurement error
1
0
2026-03-02T16:41:54
MelodicRecognition7
false
null
0
o8966wi
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o8966wi/
false
1
t1_o8963us
You seem extremely emotionally unstable.
1
0
2026-03-02T16:41:30
Impossible-Glass-487
false
null
0
o8963us
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8963us/
false
1
t1_o8963i4
[deleted]
1
0
2026-03-02T16:41:27
[deleted]
true
null
0
o8963i4
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8963i4/
false
1
t1_o8961by
Ollama sucks. I updated to the latest Ollama and used their 9B download from their library via Open WebUI; the thing just chases its tail in reasoning.
1
0
2026-03-02T16:41:10
Rollingsound514
false
null
0
o8961by
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8961by/
false
1
t1_o8960qw
"{\\"enable\_thinking\\": false}" and '{"enable\_thinking": false}' are exact same, shell just allows variable expansion in double-quotes and same style inner quotes need to be escaped so shell knows to parse how many arguments there are, llama-server won't see the backslashes.
1
0
2026-03-02T16:41:06
cptbeard
false
null
0
o8960qw
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8960qw/
false
1
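A quick check of the quoting claim above, assuming a POSIX shell; Python's `shlex` follows the same quoting rules, so both spellings parse to the identical single argument and llama-server never sees any backslashes.

```python
import json
import shlex

# The two spellings from the comment, exactly as typed on a POSIX shell command line.
double_quoted = '"{\\"enable_thinking\\": false}"'
single_quoted = """'{"enable_thinking": false}'"""

args_a = shlex.split(double_quoted)  # shell-style parsing of the double-quoted form
args_b = shlex.split(single_quoted)  # and of the single-quoted form

assert args_a == args_b == ['{"enable_thinking": false}']
print(json.loads(args_a[0]))  # {'enable_thinking': False}, no backslashes survive
```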
t1_o895zh6
> we’ve achieved massive performance leaps on models we quickly benchmarked against: No, you didn't. You asked for a benchmark, and Claude delivered an "unsloth like" benchmark. But if you check the code, it's a "presentation" benchmark; it doesn't even download models or run actual inference on any model. It runs simulations on random data. What in the slop is this?! Please *read* what your agents build. Don't trust blindly. Also, don't post slop before you verify it. PoC or gtfo.
1
0
2026-03-02T16:40:55
ResidentPositive4122
false
null
0
o895zh6
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o895zh6/
false
1
t1_o895xfc
Any plans to add into vLLM?
1
0
2026-03-02T16:40:39
professormunchies
false
null
0
o895xfc
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o895xfc/
false
1
t1_o895wk9
Because it would be rude to leave you waiting for results when you have asked for them. But I forgot that this community is devolving in real time and that you now represent the new user base, so why bother.
1
0
2026-03-02T16:40:32
Impossible-Glass-487
false
null
0
o895wk9
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o895wk9/
false
1
t1_o895vw2
I think it's a bit late, as we're moving to GDN variants and DSA.
1
0
2026-03-02T16:40:26
shing3232
false
null
0
o895vw2
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o895vw2/
false
1
t1_o895vo4
Good idea, I'll delete Reddit again and be self-sufficient from now on! I'll use only the extensions that were archived on GitHub in 2024, since the "cloud" that lacks up-to-date knowledge can't pull off anything from March 2026, instead of the up-to-date, community-picked solutions! Thank you for saving me from another doom scrolling loop, kind stranger!
1
0
2026-03-02T16:40:25
FriskyFennecFox
false
null
0
o895vo4
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o895vo4/
false
1
t1_o895onx
My thought is that whenever I read hype comments like this, "40x the speed, 84%-90% VRAM reduction", it's generally bullshit.
1
0
2026-03-02T16:39:30
JacketHistorical2321
false
null
0
o895onx
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o895onx/
false
1
t1_o895h1p
The main question is whether the 4B is actually better than Qwen3 4B 2507, and for some reason they don't compare those. On the few common benchmarks they look pretty similar. 4B 2507 was insanely good; let's see if this can do better.
1
0
2026-03-02T16:38:31
DistanceAlert5706
false
null
0
o895h1p
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o895h1p/
false
1
t1_o895a87
I tried with Gemini and GPT to put short names on top of each column and they all failed; Gemini at least admitted its attempts were garbage and removed the pictures.
1
0
2026-03-02T16:37:37
KURD_1_STAN
false
null
0
o895a87
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o895a87/
false
1
t1_o89597z
So how do they hold up? Any good? Worth getting?
1
0
2026-03-02T16:37:29
fantasticmrsmurf
false
null
0
o89597z
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89597z/
false
1
t1_o89583m
Then why are you even responding? What's your point?
1
0
2026-03-02T16:37:20
reddit0r_123
false
null
0
o89583m
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89583m/
false
1
t1_o8957oy
[deleted]
1
0
2026-03-02T16:37:17
[deleted]
true
null
0
o8957oy
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8957oy/
false
1
t1_o895342
There's always a smaller model out-performing other models. I wouldn't trust the benchmarks; in fact these benchmarks are getting absolutely useless because of benchmaxing. We need something else for benchmarking now. The 9B model is dumb, especially for coding: it can't even write normal Python scripts and doesn't work as well with agentic stuff. The 27B model is good though.
1
0
2026-03-02T16:36:41
adellknudsen
false
null
0
o895342
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o895342/
false
1
t1_o8950gz
Give a small model specific instructions in the first prompt, and see if those instructions are still followed 10 queries in. They always fall apart beyond a few queries. (A minimal sketch of this test follows below.)
1
0
2026-03-02T16:36:19
bittytoy
false
null
0
o8950gz
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8950gz/
false
1
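A minimal sketch of that test, assuming a local OpenAI-compatible server (llama.cpp and LM Studio both expose one) and the `openai` Python package; the base_url, api_key, model name, and marker word are placeholders, not anyone's real setup.

```python
# Does the model still follow an instruction from the first prompt ten queries later?
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # placeholder endpoint
MODEL = "qwen3.5-9b"  # placeholder model name

instruction = "Always end every reply with the word PINEAPPLE."
messages = [{"role": "system", "content": instruction}]

followed = 0
for turn in range(10):
    messages.append({"role": "user", "content": f"Question {turn + 1}: name one prime number."})
    reply = client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    followed += reply.strip().endswith("PINEAPPLE")  # count turns where the instruction held

print(f"Instruction followed on {followed}/10 turns")
```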
t1_o8950dy
Doesn’t show up as an option in lm studio yet for me.
1
0
2026-03-02T16:36:19
-_Apollo-_
false
null
0
o8950dy
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8950dy/
false
1
t1_o89505d
I run Qwen3-0.6 on ram as the task model for stuff like openwebui so it can generate titles and tags without interrupting the context of the main model I’m using.
1
0
2026-03-02T16:36:17
MoodyPurples
false
null
0
o89505d
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89505d/
false
1
t1_o894y92
They’re already VL. I’m waiting for the instructs.
1
0
2026-03-02T16:36:02
beedunc
false
null
0
o894y92
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o894y92/
false
1
t1_o894wq0
It's christmas!!!!
1
0
2026-03-02T16:35:49
UniversalJS
false
null
0
o894wq0
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o894wq0/
false
1
t1_o894wkf
"tiny llm" is called SLM :)
1
0
2026-03-02T16:35:48
Open_Speech6395
false
null
0
o894wkf
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o894wkf/
false
1
t1_o894pe6
Afaik llama.cpp currently does not support BF16 flash attention CUDA kernels, so BF16 is sort of non usable due to very high PP and TG falloff over context. Only FP32 and FP16 are supported.
1
0
2026-03-02T16:34:52
Time_Reaper
false
null
0
o894pe6
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o894pe6/
false
1
t1_o894fod
[removed]
1
0
2026-03-02T16:33:36
[deleted]
true
null
0
o894fod
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o894fod/
false
1
t1_o894cqy
[removed]
1
0
2026-03-02T16:33:13
[deleted]
true
null
0
o894cqy
false
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/o894cqy/
false
1
t1_o894c2v
27B in coding seems great
1
0
2026-03-02T16:33:07
auggie246
false
null
0
o894c2v
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o894c2v/
false
1
t1_o894am7
What are we looking at here? Llama new model coming out? Smaller models? Will they be smart enough to be worth using?
1
0
2026-03-02T16:32:56
Sumchi
false
null
0
o894am7
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o894am7/
false
1
t1_o894ahy
Asking the real questions :) This will probably follow shortly I reckon.
1
0
2026-03-02T16:32:55
Koffiepoeder
false
null
0
o894ahy
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o894ahy/
false
1
t1_o894a70
Honestly, I'm confused with so many options. What would you use with a 5090? Some weights have a note like "Uses Q8_0 for embed and output weights", what does this mean? BTW, any quant in particular that you want to see benchmarked on a 8xP100?
1
0
2026-03-02T16:32:53
TooManyPascals
false
null
0
o894a70
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o894a70/
false
1
t1_o8948rx
Ran the Q8 version of this model on a 4090 briefly, tested it with my Gety MCP. It's a local file search engine that exposes two tools, one for search and one for fetching full content. Performance was pretty bad honestly. It just did a single search call and went straight to answering, no follow-up at all. Qwen 3.5 27B Q4 on the other hand did way better. It would search, then go read the relevant files, then actually rethink its search strategy and go again. Felt much more like a proper local Deep Research workflow. So yeah I don't think this model's long-horizon tool calling is ready for agentic coding. Also, your VRAM is too limited. Agentic coding needs very long context windows to support extended tool-use chains, like exploring a codebase and editing multiple files.
1
0
2026-03-02T16:32:42
ChanningDai
false
null
0
o8948rx
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8948rx/
false
1
t1_o8942gs
The root diagnosis is solid: no content-type model means the parser treats reasoning prose and tool syntax identically. For agentic setups hitting this now, the llama.cpp server handles </think> boundaries much more cleanly as a stopgap while LM Studio works through the backlog.
1
0
2026-03-02T16:31:52
BC_MARO
false
null
0
o8942gs
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8942gs/
false
1
t1_o893uh3
asshole is one word.
1
0
2026-03-02T16:30:50
Impossible-Glass-487
false
null
0
o893uh3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893uh3/
false
1
t1_o893tyv
Probably not. Agentic tasks kinda require big models because the bigger the model the more coherent it is. Even if smaller models are smart, they will act like they have ADHD in an agentic setting. I would love to be proven wrong though.
1
0
2026-03-02T16:30:46
ghulamalchik
false
null
0
o893tyv
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893tyv/
false
1
t1_o893pk9
Fascinating, then I might go for the 27B for some work I have since my rig has a 3090. I want to maybe use it to mass scan financial documents and see if it can spot aberrations (fraud/misfiling detection). Since apparently these Q3.5 models are also VLMs, this should be a very privacy-friendly way of doing it. Uploading company financial docs to a cloud service like ChatGPT sounds like asking for a data leak.
1
0
2026-03-02T16:30:10
HugoCortell
false
null
0
o893pk9
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o893pk9/
false
1