name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8d8q98
So why are you paying it?
1
0
2026-03-03T06:11:56
brickout
false
null
0
o8d8q98
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8d8q98/
false
1
t1_o8d8q56
Might have just been easier to tell them "wrong tool for the task"
1
0
2026-03-03T06:11:54
amejin
false
null
0
o8d8q56
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d8q56/
false
1
t1_o8d8nnh
You might be a hero
1
0
2026-03-03T06:11:19
lundrog
false
null
0
o8d8nnh
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8d8nnh/
false
1
t1_o8d8jx8
Pick the Q8_0 quant, which is 3GB smaller than Q8_K_XL. Agree with the other comment, Q6 is enough (which is 2GB smaller than Q8_0).
1
0
2026-03-03T06:10:26
pmttyji
false
null
0
o8d8jx8
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d8jx8/
false
1
t1_o8d8isc
pick a better model like qwen 3 8b or the newer qwen 3.5 9b. tbh for 12gb you might need to look at the 4b though.
1
0
2026-03-03T06:10:10
FusionCow
false
null
0
o8d8isc
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8d8isc/
false
1
t1_o8d8i47
1000 tokens in the thinking process in every response locally is going to fill context really fast and the performance will slow down significantly. You are paying with even more time.
1
0
2026-03-03T06:10:01
Comfortable-Bench993
false
null
0
o8d8i47
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d8i47/
false
1
t1_o8d8gsm
The problem is that it would cost about as much to train a 37B dense model as a 600B/A37B MoE on roughly the same number of tokens. An MoE is harder to train, but once you have the experience you're much less likely to train a bigger dense model, which is why 72B is almost dead. From a business standpoint, serving an MoE is a lot more preferable.
1
0
2026-03-03T06:09:42
shing3232
false
null
0
o8d8gsm
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o8d8gsm/
false
1
t1_o8d8fb0
A sidenote: it looks to me like LM Studio stopped parsing {{CURRENT_DATE}} in the system prompt, so I opened a GitHub ticket for it.
1
0
2026-03-03T06:09:21
mtomas7
false
null
0
o8d8fb0
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8d8fb0/
false
1
t1_o8d8f04
What's that, a GPU?
1
0
2026-03-03T06:09:17
Not_Magma_
false
null
0
o8d8f04
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8d8f04/
false
1
t1_o8d8ctc
Don't use Ollama. Use llama.cpp.
1
0
2026-03-03T06:08:46
sagiroth
false
null
0
o8d8ctc
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d8ctc/
false
1
t1_o8d8brd
3.5 is better for agentic coding and it isn't close. While it may be somewhat dependent on exactly which framework you use, 3.5 is overall much more capable in this use case. But you're welcome to try both and use whatever works best for you.
1
0
2026-03-03T06:08:31
NNN_Throwaway2
false
null
0
o8d8brd
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8d8brd/
false
1
t1_o8d8acq
Just tried it. Very polished and easy to use!
1
0
2026-03-03T06:08:11
Daniel_H212
false
null
0
o8d8acq
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8d8acq/
false
1
t1_o8d86ty
ghost in the machine bro
1
0
2026-03-03T06:07:22
numberwitch
false
null
0
o8d86ty
false
/r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/o8d86ty/
false
1
t1_o8d84yq
Yeah, small models in general are only useful for very specific tasks, and I fail to see a reason to perform such tasks on a phone. For chat/general knowledge they are absolutely worthless
1
0
2026-03-03T06:06:56
def_not_jose
false
null
0
o8d84yq
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8d84yq/
false
1
t1_o8d82gx
Aren't MoE models better for multi-agent setups too? I am under the impression MoE is i) faster and ii) can run more agents simultaneously with a scheduler like vLLM's. Both due to less mem bandwidth going to each thread.
1
0
2026-03-03T06:06:21
PentagonUnpadded
false
null
0
o8d82gx
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d82gx/
false
1
t1_o8d8273
it's literally in the description...
1
0
2026-03-03T06:06:17
pixelpoet_nz
false
null
0
o8d8273
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8d8273/
false
1
t1_o8d7zj1
22 tps on a 4060 Ti 16GB with Q8_K_XL at 60k ctx is actually decent. But you can squeeze more. Quick wins:

• Drop to Q6_K or Q5_K_M instead of Q8_K_XL. Q8 is heavy and bandwidth-bound. You'll likely gain 20–40% speed with minimal quality loss.
• Reduce ctx-size if you don't truly need 60k. KV cache grows with context and hurts memory bandwidth. Try 8k–16k for testing.
• Enable mmap unless you have a reason not to. `--no-mmap` can slow load behavior.
• Make sure you're using full GPU offload (`-ngl 99` or equivalent).
• Keep `-fa on` (good).
• Try `--cache-type-k q4_0` and `--cache-type-v q4_0` to reduce KV pressure.

Main bottleneck here is memory bandwidth, not compute. Lower quant + smaller KV cache = more tokens/sec.
1
0
2026-03-03T06:05:38
qubridInc
false
null
0
o8d7zj1
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d7zj1/
false
1
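To put the KV-cache advice above in numbers, here is a minimal back-of-the-envelope sketch. The layer count, KV-head count, and head dimension are hypothetical placeholders (read the real values from the model's config.json), and q4_0 is assumed to cost about 4.5 bits per element as in llama.cpp.

```python
# Back-of-the-envelope KV-cache size: 2 tensors (K and V) per layer,
# each n_kv_heads * head_dim values per token, times context length.
# All model dimensions here are hypothetical placeholders.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical 9B-class dense model: 40 layers, 8 KV heads, head_dim 128.
for ctx in (8_192, 16_384, 65_536):
    fp16 = kv_cache_bytes(40, 8, 128, ctx, 2.0)     # fp16 cache
    q4 = kv_cache_bytes(40, 8, 128, ctx, 0.5625)    # ~q4_0: 4.5 bits/element
    print(f"ctx={ctx:6d}  fp16={fp16 / 2**30:5.2f} GiB  q4_0={q4 / 2**30:5.2f} GiB")
```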
t1_o8d7vil
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 This is a great reference article on Bartowski quants which should give a pretty good idea of the differences between quants. Additionally: https://rentry.org/llama-cpp-quants-or-fine-ill-do-it-myself-then-pt-2 The higher the number, the closer to the original model weights, but the deviation from Q8 to Q4 is still marginally small. ---- That all said, you can chase marginally better quality or more context; if you don't need the context, then higher quality is possible, again marginally.
1
0
2026-03-03T06:04:42
mp3m4k3r
false
null
0
o8d7vil
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d7vil/
false
1
t1_o8d7ubt
And what about using other tools like Google Colab and Replicate?
1
0
2026-03-03T06:04:25
SUPRA_1934
false
null
0
o8d7ubt
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8d7ubt/
false
1
t1_o8d7k9h
Yeah... this won't work. :) That's the first thing I tried
1
0
2026-03-03T06:02:04
HighFlyingB1rd
false
null
0
o8d7k9h
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8d7k9h/
false
1
t1_o8d7ijp
Saw an IQ2_XXS quant for the 27B, iirc, that was 10GB.
1
0
2026-03-03T06:01:40
tableball35
false
null
0
o8d7ijp
false
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8d7ijp/
false
1
t1_o8d7cvh
Q4_K_M
1
0
2026-03-03T06:00:21
HighFlyingB1rd
false
null
0
o8d7cvh
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8d7cvh/
false
1
t1_o8d724g
So far keyboard and brain have done pretty good
1
0
2026-03-03T05:57:50
821835fc62e974a375e5
false
null
0
o8d724g
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8d724g/
false
1
t1_o8d6xoz
Thanks, time to load these up on the 6000 then and give it a shot. I'm loving the analysis quality of 3.5 with no think but it's definitely slowing my workflows down to a point where it's not cutting it having only one LLM instance to work with - only 1440 minutes in a day 😢
1
0
2026-03-03T05:56:48
Demonicated
false
null
0
o8d6xoz
false
/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8d6xoz/
false
1
t1_o8d6x9d
Just watch your GPU and CPU heat levels
1
0
2026-03-03T05:56:42
roosterfareye
false
null
0
o8d6x9d
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8d6x9d/
false
1
t1_o8d6uxo
I ran these models on my gaming laptop with an RTX 5070 Ti (12GB VRAM), 32GB RAM, and an Ultra 275HX, and connected to them with Claude Code. It lags somewhat, I think because of some other application running, but it is great in intelligence and tool calling. By the way, I completely offloaded all the model weights into VRAM. You can give it a shot on your laptop. Try the 9B model for great performance.
1
0
2026-03-03T05:56:10
Other_Day735
false
null
0
o8d6uxo
false
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8d6uxo/
false
1
t1_o8d6sy2
Moonlight and Sunshine work with the Spark; there is a GitHub repo for it. I want to adapt it to the Halo, so if you do, let me know. Works really well!
1
0
2026-03-03T05:55:43
Badger-Purple
false
null
0
o8d6sy2
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8d6sy2/
false
1
t1_o8d6nz6
What is the best model for someone with 16GB VRAM?
1
0
2026-03-03T05:54:34
dadidutdut
false
null
0
o8d6nz6
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8d6nz6/
false
1
t1_o8d6ne4
Do you use LSP servers? What about Java support?
1
0
2026-03-03T05:54:27
sotona-
false
null
0
o8d6ne4
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8d6ne4/
false
1
t1_o8d6h05
My expectation is that most uses of tiny models like this would involve putting some information into its context and then asking it specifically about that information. I could see one sitting on a phone evaluating emails and text messages as they come in to determine whether they're important enough to bother you with, for example. Or being built into a web browser and being used to summarize or translate pages. Having native "understanding" would be very helpful but all the actual facts it'd be working with would be handed to it up front.
1
0
2026-03-03T05:52:59
FaceDeer
false
null
0
o8d6h05
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d6h05/
false
1
t1_o8d6azk
Just get rid of Ollama. It's 30% to 70% worse performance than llama.cpp, in addition to all the horrible things that Ollama has been doing to the open source community. If you're serious about running AI locally, use llama.cpp, period.
1
0
2026-03-03T05:51:38
JMowery
false
null
0
o8d6azk
false
/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/o8d6azk/
false
1
t1_o8d6au8
Just… benchmark it? Devise a benchmark, run both through, compare results.
1
0
2026-03-03T05:51:36
3spky5u-oss
false
null
0
o8d6au8
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d6au8/
false
1
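A minimal sketch of that "just benchmark it" idea, assuming each candidate model is served by a local OpenAI-compatible endpoint such as llama.cpp's llama-server; the URLs, ports, and prompts below are placeholders.

```python
import time
import requests

# Placeholder prompts; replace with tasks representative of your workload.
PROMPTS = [
    "Summarize the tradeoffs of MoE vs dense models.",
    "Write a Python function that reverses a linked list.",
]

def run(base_url, prompt):
    """Send one chat request and return (text, completion_tokens, seconds)."""
    t0 = time.time()
    r = requests.post(f"{base_url}/v1/chat/completions", json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }, timeout=600)
    r.raise_for_status()
    out = r.json()
    text = out["choices"][0]["message"]["content"]
    tokens = out.get("usage", {}).get("completion_tokens", 0)
    return text, tokens, time.time() - t0

# One server per candidate model (ports are placeholders).
for base_url in ("http://localhost:8080", "http://localhost:8081"):
    for prompt in PROMPTS:
        text, tokens, dt = run(base_url, prompt)
        print(f"{base_url}  {tokens / dt if dt else 0:6.1f} tok/s  {prompt[:40]!r}")
        # Compare `text` quality across models by hand or with a judge model.
```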
t1_o8d6amp
Even if it is only 75% as good as the benchmarks suggest, it is commendable work they have done in open source and in small models that many consumers can run on their computers.
1
0
2026-03-03T05:51:33
gpt872323
false
null
0
o8d6amp
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8d6amp/
false
1
t1_o8d689n
250 tok/s on a 5090 at FP16, for a fun comparison.
1
0
2026-03-03T05:51:01
3spky5u-oss
false
null
0
o8d689n
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d689n/
false
1
t1_o8d65s7
That’s because of Qwen3.5’s arch: context doesn’t contribute much to memory usage. You could run higher and barely use more.
1
0
2026-03-03T05:50:27
3spky5u-oss
false
null
0
o8d65s7
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d65s7/
false
1
t1_o8d62yy
Thank you, I will look for one. Since I'm living in SoMa, SF, there definitely should be some AI conference. I need to actually do so...
1
0
2026-03-03T05:49:48
louienemesh
false
null
0
o8d62yy
false
/r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/o8d62yy/
false
1
t1_o8d61fl
Can these be efficiently used to extract structured text from PDFs?
1
0
2026-03-03T05:49:27
Mollan8686
false
null
0
o8d61fl
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8d61fl/
false
1
t1_o8d61c2
I have high doubts about the 1M context; it implies doing at least as well as Qwen's cookbook of long-context training.
1
0
2026-03-03T05:49:25
LinkSea8324
false
null
0
o8d61c2
false
/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8d61c2/
false
1
t1_o8d5zzp
That is even more impressive.
1
0
2026-03-03T05:49:08
anotheridiot-
false
null
0
o8d5zzp
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8d5zzp/
false
1
t1_o8d5zmt
I tested web app pentesting with all the Qwen3 and Qwen3.5 normal GGUFs and they gave good results at finding SQL injection vulnerabilities.
1
0
2026-03-03T05:49:02
DarkZ3r0o
false
null
0
o8d5zmt
false
/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8d5zmt/
false
1
t1_o8d5x73
> Google doesn't exactly have that many resources. There are several signs of that. But basically, they keep nerfing the free tier of AI Studio

I think the fact they *offer* a free tier says more about their resources than them "nerfing" it.
1
0
2026-03-03T05:48:29
MizantropaMiskretulo
false
null
0
o8d5x73
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8d5x73/
false
1
t1_o8d5wg4
AFAIK, for the moment, no. In any case, GLM 4.7 is a very capable model and at full precision would outperform any model that you can run locally, unless you have a couple of H200s in the basement ))
1
0
2026-03-03T05:48:19
perelmanych
false
null
0
o8d5wg4
false
/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/o8d5wg4/
false
1
t1_o8d5up2
You're literally getting 30% - 70% worse performance by using Ollama (along with all the other terrible BS they've done to the llama.cpp project). Use llama.cpp and enjoy the massive performance boost, better control, and standardized formats (GGUF), and support true open source and companies that don't steal from open source projects (like those POS people at Ollama).
1
0
2026-03-03T05:47:55
JMowery
false
null
0
o8d5up2
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d5up2/
false
1
t1_o8d5t3a
AGI came in 2023
1
0
2026-03-03T05:47:33
_yustaguy_
false
null
0
o8d5t3a
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d5t3a/
false
1
t1_o8d5rf2
Attend some AI-related conferences, where you might find senior executives. Make sure your communication skills are sharp, be confident, and strike up conversations.
1
0
2026-03-03T05:47:10
powerade-trader
false
null
0
o8d5rf2
false
/r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/o8d5rf2/
false
1
t1_o8d5qgb
Thanks for sharing. I hope you can test the crack of the Apple Neural Engine; if what they say is true, there should be a significant improvement in performance.
1
0
2026-03-03T05:46:57
DebugCall
false
null
0
o8d5qgb
false
/r/LocalLLaMA/comments/1ral48v/interesting_observation_from_a_simple_multiagent/o8d5qgb/
false
1
t1_o8d5isr
I think that is the key. I just came to see if someone will test the crack of the Apple Neural Engine.
1
0
2026-03-03T05:45:13
DebugCall
false
null
0
o8d5isr
false
/r/LocalLLaMA/comments/1rblur3/considering_mac_mini_m4_pro_64gb_for_agentic/o8d5isr/
false
1
t1_o8d5i03
Not OP, and my LLM explained it better. Here is a simplified breakdown of **NVFP4 quantization on NVIDIA Blackwell**:

### The Core Concept

Instead of storing just 4 bits of data (which usually loses too much precision), NVFP4 uses a **hybrid approach** to keep the numbers accurate enough for AI calculations while saving space. It combines two methods:

1. **The Dictionary (E2M1 Codebook):**
   * Instead of letting the number grow infinitely, it picks from a small, pre-defined list of values (magnitudes up to ±6).
   * Think of it like choosing words from a limited dictionary rather than writing every sentence in full detail. This maximizes how much information fits into those 4 bits.
2. **The Context (FP8 Block Scale):**
   * For groups of 16 numbers, it uses an FP8 scale factor to adjust the values relative to that small dictionary.
   * This allows for flexible scaling (not just simple powers of two), helping minimize errors during math operations.

### Why Does It Matter?

* **Efficiency:** It packs more useful information into fewer bits than standard 4-bit methods.
* **Accuracy:** The system uses high-precision (FP32) math internally to calculate results, ensuring the final answer isn't ruined by the low-storage format.
* **Speed & Size:** On Blackwell chips, this method allows for smaller, faster hardware multipliers because the numbers are simpler to process.

**In short:** NVFP4 makes neural networks run faster and more efficiently on Blackwell GPUs by intelligently combining a limited set of values with flexible scaling, all while maintaining high computational accuracy.
1
0
2026-03-03T05:45:03
simracerman
false
null
0
o8d5i03
false
/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8d5i03/
false
1
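A toy illustration of the block-scaled codebook idea described in the comment above: pick a per-block scale so the largest value maps to ±6, then snap every value in a 16-element block to the nearest E2M1 magnitude. This is a simplification for intuition only; the actual NVFP4 format stores the block scale in FP8 (E4M3) with an additional tensor-level FP32 scale.

```python
import numpy as np

# E2M1 representable magnitudes (the fourth bit carries the sign).
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block(x):
    """Toy block quantization: per-block scale + nearest-codebook rounding."""
    scale = np.max(np.abs(x)) / 6.0          # map the largest magnitude to 6
    if scale == 0.0:
        scale = 1.0
    scaled = x / scale
    # Snap each value to the nearest codebook magnitude, keeping its sign.
    idx = np.abs(np.abs(scaled)[:, None] - E2M1[None, :]).argmin(axis=1)
    return np.sign(scaled) * E2M1[idx], scale

block = np.random.randn(16).astype(np.float32)   # one 16-element block
codes, scale = quantize_block(block)
dequantized = codes * scale
print("max abs error:", float(np.max(np.abs(block - dequantized))))
```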
t1_o8d5h6b
Nope, Opus also shows summaries of the CoT.
1
0
2026-03-03T05:44:52
bolmer
false
null
0
o8d5h6b
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d5h6b/
false
1
t1_o8d5gui
Can you try the latest build? We fixed some preprocessing bugs and it's doing much better on things like this. Please let me know if that doesn't help. Thanks.
1
0
2026-03-03T05:44:48
ElectricalBar7464
false
null
0
o8d5gui
false
/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o8d5gui/
false
1
t1_o8d5e9h
I've had the best results with gemma3. Shoot, I'm using 4b, and it still provides sufficient answers.
1
0
2026-03-03T05:44:13
nPrevail
false
null
0
o8d5e9h
false
/r/LocalLLaMA/comments/1l09i8f/what_are_the_top_creative_writing_models/o8d5e9h/
false
1
t1_o8d5cin
An RTX 4050 might be a good and affordable one at $899.
1
0
2026-03-03T05:43:49
archadigi
false
null
0
o8d5cin
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o8d5cin/
false
1
t1_o8d59ni
No, most times Apple shows the MacBook Pro with Pro and Max chips early in the year.
1
0
2026-03-03T05:43:12
getmevodka
false
null
0
o8d59ni
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8d59ni/
false
1
t1_o8d57zf
It’s not like you can do many optimizations; it’s a dense model. You either got it or ya don’t, unfortunately.
1
0
2026-03-03T05:42:50
arman-d0e
false
null
0
o8d57zf
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d57zf/
false
1
t1_o8d56ze
Haha... true. Will probably end up trying Z.ai. But their basic plan doesn't have GLM-5?
1
0
2026-03-03T05:42:36
sedentarymalu
false
null
0
o8d56ze
false
/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/o8d56ze/
false
1
t1_o8d4v8i
Do you mean running an app that provides RESTful APIs so other apps can connect to it via those APIs? Perhaps on iOS it doesn't work because the system imposes a RAM restriction. When the app exceeds this limit, the system terminates it. Since the performance on iOS devices is still limited, I think we can just polish text, chat with the local LLM, treat it like a dictionary, or do simple translations or a summary for a meeting. [https://apps.apple.com/us/app/tokkong-local-ai/id6742748996](https://apps.apple.com/us/app/tokkong-local-ai/id6742748996) So far, this app offers everything I need.
1
0
2026-03-03T05:39:59
Fabulous_Tip_8539
false
null
0
o8d4v8i
false
/r/LocalLLaMA/comments/1ls66qt/running_gguf_model_on_ios_with_local_api/o8d4v8i/
false
1
t1_o8d4ul3
If you ran this model, please post the speed (tokens/s) and results under this comment; that would be helpful.
1
0
2026-03-03T05:39:50
Other_Day735
false
null
0
o8d4ul3
false
/r/LocalLLaMA/comments/1q8mq0u/new_to_local_llms_dgx_spark_owner_looking_for/o8d4ul3/
false
1
t1_o8d4rxg
A "thinking" model is bound to think carefully about its prompt. Like I said, you go to a doctor and creepily say "hi", then stay silent… what do you think will happen?
1
0
2026-03-03T05:39:16
howardhus
false
null
0
o8d4rxg
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d4rxg/
false
1
t1_o8d4nyb
Try to use your head as well. Jokes aside, I barely go over 5% of the usage of the [z.ai](http://z.ai) basic plan. You should probably try it.
1
0
2026-03-03T05:38:27
perelmanych
false
null
0
o8d4nyb
false
/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/o8d4nyb/
false
1
t1_o8d4nhc
Try Unsloth's 35B A3B. I didn't quite get it working the best in LM Studio, but I switched to llama.cpp and it's pretty good.
1
0
2026-03-03T05:38:21
Old-Sherbert-4495
false
null
0
o8d4nhc
false
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8d4nhc/
false
1
t1_o8d4knx
I read it. Your problem is you don't understand the tech you are using.
1
0
2026-03-03T05:37:43
howardhus
false
null
0
o8d4knx
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d4knx/
false
1
t1_o8d4jwx
Nice
1
0
2026-03-03T05:37:34
fragment_me
false
null
0
o8d4jwx
false
/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8d4jwx/
false
1
t1_o8d4jvc
the visual geolocation result is what's impressive. that requires reasoning about architectural styles, typography, urban density patterns -- not just pattern matching on pixel distributions. 4B hitting that quality is a different capability threshold than 4B models from 18 months ago. knowledge distillation from the larger Qwen models is clearly doing a lot of work here. 77ms/token on mobile is also meaningful for actual applications -- fast enough for interactive use without batching tricks. what quant level were you running? Q4_K_M or lower?
1
0
2026-03-03T05:37:33
BP041
false
null
0
o8d4jvc
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8d4jvc/
false
1
t1_o8d4gt4
releasing SteptronOSS alongside the weights is the actually interesting part. most labs release weights but not the training pipeline, which means the community can run inference but can't study what data mix and training decisions produced those capabilities. when you get both, you can actually do meaningful fine-tuning experiments rather than just LoRA stacking on a black box. curious whether the framework is general enough to reproduce their training setup or if it only covers the final stages.
1
0
2026-03-03T05:36:53
BP041
false
null
0
o8d4gt4
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8d4gt4/
false
1
t1_o8d4ch1
Really..?! I’ve been hesitant to try it with Claude code. Can you elaborate on the differences you’ve seen?
1
0
2026-03-03T05:35:57
simracerman
false
null
0
o8d4ch1
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8d4ch1/
false
1
t1_o8d4ams
What quants are you using? Have you quantized the KV cache? What inference parameters are you using? If you want any assistance, you should be more precise.
1
0
2026-03-03T05:35:33
perelmanych
false
null
0
o8d4ams
false
/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/o8d4ams/
false
1
t1_o8d496q
Honestly you should look at the 35b, even if it's offloaded you'll get solid speeds. with 12gb of ram you're not quite able to run the 27b, but you could run the 9b on a high quant and it seems pretty good for the size.
1
0
2026-03-03T05:35:14
FusionCow
false
null
0
o8d496q
false
/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/o8d496q/
false
1
t1_o8d497e
So here is the model that offers the best performance for offline coding: the Qwen3.5 35B A3B model, which was released recently. It runs seamlessly on my gaming laptop with an RTX 5070 Ti (12GB VRAM) and 32GB RAM. For the DGX Spark you can use the Qwen3.5-397B-A17B model, which has a 1M context length.
1
0
2026-03-03T05:35:14
Other_Day735
false
null
0
o8d497e
false
/r/LocalLLaMA/comments/1q8mq0u/new_to_local_llms_dgx_spark_owner_looking_for/o8d497e/
false
1
t1_o8d46oi
"a subset of the reasoning capability was used" but the most relevant subset. You basically sidestep a lot of areas that are unrelated to the question at hand and therefore extremely improbable but would waste time. If the training data for the model included, say, the complete history of Old and Middle English with all the different grammars and all the surviving literary texts, or the full course of the development of microbiology over the last 40 years, it won't help your final system code better.
1
0
2026-03-03T05:34:42
drivebyposter2020
false
null
0
o8d46oi
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8d46oi/
false
1
t1_o8d3zw9
27B is a pretty awesome model. I wish someone would figure this out to make it faster.
1
0
2026-03-03T05:33:12
Old-Sherbert-4495
false
null
0
o8d3zw9
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8d3zw9/
false
1
t1_o8d3zh6
Everything's cool but how do you get it to use tools on android? Chats are too 2025 now. We want web searches and file access
1
0
2026-03-03T05:33:06
AnyCourage5004
false
null
0
o8d3zh6
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8d3zh6/
false
1
t1_o8d3ym4
I immediately switched off thinking in the Jinja file, because it was unbearable. Still, the models perform quite decently with thinking off.
1
0
2026-03-03T05:32:54
perelmanych
false
null
0
o8d3ym4
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d3ym4/
false
1
t1_o8d3t90
You can have some sort of control to pass a proper amount of thinking tokens maybe... That would fix it.
1
0
2026-03-03T05:31:43
Old-Individual-8175
false
null
0
o8d3t90
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d3t90/
false
1
t1_o8d3rbt
not clear. I'm no expert but I'd think you have room for a longer context window which should help
1
0
2026-03-03T05:31:18
drivebyposter2020
false
null
0
o8d3rbt
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8d3rbt/
false
1
t1_o8d3qff
You picked the wrong example to title your post. Either way, did you try messing with the temp?
1
0
2026-03-03T05:31:05
Old-Sherbert-4495
false
null
0
o8d3qff
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d3qff/
false
1
t1_o8d3iam
The fact that models of such different sizes are so close to each other in benchmarks points to an elephant in the room - training dataset contamination. Having said that, I still admire what Qwen is doing.
1
0
2026-03-03T05:29:19
perelmanych
false
null
0
o8d3iam
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8d3iam/
false
1
t1_o8d3f9y
Basically all the "trust me bro" benchmarks frame Qwen3.5 as marginally better than Qwen3-Coder. I would pick Qwen3.5 just because it has much better tool-calling support, and that's important for agentic coding. From now on, the main focus of support and improvements will be only on 3.5.
1
0
2026-03-03T05:28:40
timhok
false
null
0
o8d3f9y
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8d3f9y/
false
1
t1_o8d39lr
I think the first two are not thinking while the last one is.
1
0
2026-03-03T05:27:26
Zemanyak
false
null
0
o8d39lr
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d39lr/
false
1
t1_o8d37yw
[deleted]
1
0
2026-03-03T05:27:04
[deleted]
true
null
0
o8d37yw
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d37yw/
false
1
t1_o8d36hh
Just to give you a hardware benchmark: on an M4 Mac, generating about 10 seconds of audio usually takes around 10-20 seconds of waiting time.
1
0
2026-03-03T05:26:45
Aggressive-Floor-153
false
null
0
o8d36hh
false
/r/LocalLLaMA/comments/1qqmmn0/whats_the_highest_quality_opensource_tts/o8d36hh/
false
1
t1_o8d334k
You are using an old 1.5B model, why?
1
0
2026-03-03T05:26:02
Voxandr
false
null
0
o8d334k
false
/r/LocalLLaMA/comments/1rjfixk/peak_answer/o8d334k/
false
1
t1_o8d2xvb
Is this setting applicable to the iPad Air M1?
1
0
2026-03-03T05:24:55
dravenkill
false
null
0
o8d2xvb
false
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8d2xvb/
false
1
t1_o8d2e0x
Implying the 17-year-old version of yourself would not freeze, overthink, and segfault if the cutie of your class said hi to you.
1
0
2026-03-03T05:20:37
LinkSea8324
false
null
0
o8d2e0x
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8d2e0x/
false
1
t1_o8d2651
What is a good price for you?
1
0
2026-03-03T05:18:57
Interesting_Fly_6576
false
null
0
o8d2651
false
/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8d2651/
false
1
t1_o8d262s
I think you can go with the 9B Q4. When I still used my gaming laptop with 16GB RAM and a 2060 with 6GB VRAM, I used to use Llama 8B and the small Mistral (7B, I think). There was a 14B model that was really pushing it on that machine. Realistically, you can write down a bunch of test prompts in a file, then grab all three options: 4B at Q4, 4B at Q8, and 9B at Q4. Then write a script to run inference against your llama.cpp backend, look at the output of all three models, and see which one vibes better with you. If the difference is not much, I would prefer the smaller model so I have more space for context.
1
0
2026-03-03T05:18:56
o0genesis0o
false
null
0
o8d262s
false
/r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/o8d262s/
false
1
t1_o8d206e
That's good to hear! But I was mainly remarking on this because there's a price comparison in the charts and I don't believe it's quite a fair comparison (long-term) to consider a model like the Qwen 35B-A3B to be that pricey. A lot of people can run the (quanted) model locally, after all.
1
0
2026-03-03T05:17:40
Aerroon
false
null
0
o8d206e
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8d206e/
false
1
t1_o8d1ykp
What does that mean (I'm stupid)
1
0
2026-03-03T05:17:19
JorG941
false
null
0
o8d1ykp
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8d1ykp/
false
1
t1_o8d1yal
force a tool use before final answer
1
0
2026-03-03T05:17:15
HatEducational9965
false
null
0
o8d1yal
false
/r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/o8d1yal/
false
1
t1_o8d1u1t
Fair point :)
1
0
2026-03-03T05:16:22
MyBrainsShit
false
null
0
o8d1u1t
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8d1u1t/
false
1
t1_o8d1tsu
I haven’t used LM Studio with Qwen3.5 yet, so I have no idea. Someone on here probably knows.
1
0
2026-03-03T05:16:19
CATLLM
false
null
0
o8d1tsu
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8d1tsu/
false
1
t1_o8d1trk
Does anyone know how to jailbreak these models?
1
0
2026-03-03T05:16:18
falkon2112
false
null
0
o8d1trk
false
/r/LocalLLaMA/comments/1regq10/qwen_35_2735122b_jinja_template_modification/o8d1trk/
false
1
t1_o8d1sqs
Fair point :)
1
0
2026-03-03T05:16:05
MyBrainsShit
false
null
0
o8d1sqs
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8d1sqs/
false
1
t1_o8d1kfk
I just tried 4B Q4_K_M as the draft model and got ~29 tps on average (ctx 24k). So yeah, looks like acceptance rate is the main reason. Higher quants for the 2B will indeed raise the acceptance rate, but it seems like faster quants for the 4B may also be valid.
1
0
2026-03-03T05:14:19
Hougasej
false
null
0
o8d1kfk
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8d1kfk/
false
1
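The acceptance-rate point above can be made concrete with the standard expected-speedup estimate for speculative decoding (following Leviathan et al., 2023, and assuming each drafted token is accepted independently); the acceptance rates, draft length, and relative draft cost below are purely illustrative.

```python
def expected_speedup(alpha, gamma, c):
    """Expected speedup from speculative decoding (Leviathan et al., 2023),
    assuming each drafted token is accepted i.i.d. with rate alpha.
    gamma: drafted tokens per step; c: draft-step cost / target-step cost."""
    tokens_per_target_pass = (1 - alpha ** (gamma + 1)) / (1 - alpha)
    return tokens_per_target_pass / (gamma * c + 1)

# Illustrative numbers only: a cheap draft model (c = 0.1), 5 drafted tokens.
for alpha in (0.5, 0.7, 0.85):
    print(f"acceptance {alpha:.2f} -> ~{expected_speedup(alpha, 5, 0.1):.2f}x")
```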
t1_o8d1h0i
I don't think I've tried Ollama on Linux yet. But even if it is, the whole conversion isn't worth my time as I'm using +100B models. Not to mention how fast models can come out.
1
0
2026-03-03T05:13:37
lemondrops9
false
null
0
o8d1h0i
false
/r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8d1h0i/
false
1
t1_o8d10rg
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-03T05:10:13
WithoutReason1729
false
null
0
o8d10rg
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8d10rg/
true
1
t1_o8d0x68
That's cool. Working on stuff for fun is what we all should do more of anyway. 
1
0
2026-03-03T05:09:29
sweetbacon
false
null
0
o8d0x68
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8d0x68/
false
1
t1_o8d0vxn
Have you given Q6 a shot? I've not benched it myself, but from my understanding Q6 is near identical to Q8 but smaller (and therefore faster). Seems to make sense to me. If Q6 had a noticeable drop-off, quant makers would compare their Q6 models to each other and fight over whose is better. But instead they focus on their Q4 models and which is better.
1
0
2026-03-03T05:09:14
defensivedig0
false
null
0
o8d0vxn
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d0vxn/
false
1
t1_o8d0v11
He's using an M4 Studio, but yes, like BigYo said, you need a decent model difference. [https://youtu.be/qmAbco38pXA](https://youtu.be/qmAbco38pXA)
1
0
2026-03-03T05:09:03
tomByrer
false
null
0
o8d0v11
false
/r/LocalLLaMA/comments/1r9vsye/nice_interactive_explanation_of_speculative/o8d0v11/
false
1
t1_o8d0orb
I feel like it could start a religion on accident any day of the week! It's hilarious
1
0
2026-03-03T05:07:45
the_fabled_bard
false
null
0
o8d0orb
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d0orb/
false
1
t1_o8d0ocy
(CEO of Parasail here) Price is going to come down a lot; we just copied Alibaba's pricing until we could observe some real traffic. The model has only been up for a day and had some instabilities we had to fix in image processing, but it's looking stable now.
1
0
2026-03-03T05:07:40
Dizzy-Bad4423
false
null
0
o8d0ocy
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8d0ocy/
false
1