name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8e2hyf
Video streaming, generally for gaming. Open-source implementation of Nvidia's streaming.
1
0
2026-03-03T10:52:41
ravage382
false
null
0
o8e2hyf
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8e2hyf/
false
1
t1_o8e2d2c
Only 1/4 of its attention really "looks" at the whole context. 3/4 of its attention looks at "fixed" indexes; it is not stored in the KV cache (so the KV cache is really light) and its math is really easy to compute.
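A rough back-of-the-envelope sketch of why a hybrid stack like the one described keeps the KV cache light (my own illustration with made-up dimensions, not the model's published config): only the full-attention layers pay for the whole context, while the windowed layers cap out at their fixed span.

```python
# Hedged sketch: estimates KV-cache size when 1/4 of layers attend over the
# full context and 3/4 use a fixed window that caps their cache.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx, window, bytes_per_elem=2):
    full_layers = n_layers // 4             # layers that "look" at the whole context
    local_layers = n_layers - full_layers   # layers restricted to a fixed window
    per_token = 2 * n_kv_heads * head_dim * bytes_per_elem  # K and V, per layer
    return (full_layers * ctx + local_layers * min(ctx, window)) * per_token

# Hypothetical numbers, just to show the shape of the saving:
dense = kv_cache_bytes(48, 8, 128, ctx=131072, window=131072)
hybrid = kv_cache_bytes(48, 8, 128, ctx=131072, window=4096)
print(f"all-full-attention: {dense / 2**30:.1f} GiB, hybrid: {hybrid / 2**30:.1f} GiB")
```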
1
0
2026-03-03T10:51:30
Pentium95
false
null
0
o8e2d2c
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8e2d2c/
false
1
t1_o8e29ne
Where's Qwen3 Coder Next?
1
0
2026-03-03T10:50:39
StardockEngineer
false
null
0
o8e29ne
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8e29ne/
false
1
t1_o8e29gf
It’s fine as an introduction to local models.
1
0
2026-03-03T10:50:36
ProfessionalSpend589
false
null
0
o8e29gf
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e29gf/
false
1
t1_o8e27ov
Heretic doesn't use backprop training, but given the number of epochs you're using at this point for the parameter search, how valuable is that really? If you use backprop training with the same LoRAs and a weighted loss for the refusals (first relevant token, just like now), you can forget about layer weighting and you don't need to worry about MLP vs. attention. Set the weight for the refusal loss and let backprop handle it all. If a layer/MLP causes too much KL divergence, the LoRAs there will simply become pass-throughs. I'd really like to see a Heretic vs. FT face-off using the same dataset and LoRAs, comparing training time and quality.
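For what a face-off entrant could look like, here is a minimal PyTorch/PEFT sketch of the fine-tuning approach described, assuming a causal LM, LoRA adapters on the attention projections, and batches that mark the first refusal-relevant token (`refusal_pos`) and the token you would prefer there (`preferred_token`). The model name, field names, and weight value are placeholders, and none of this is Heretic's code.

```python
# Hedged sketch: backprop with LoRA adapters and a weighted loss on the first
# refusal token, instead of per-layer ablation weighting.
import torch
import torch.nn.functional as F
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some/base-model")  # placeholder name
model = get_peft_model(model, LoraConfig(r=16, target_modules=["q_proj", "v_proj"]))

REFUSAL_WEIGHT = 4.0  # single tunable knob replacing layer/MLP-vs-attention choices

def step(batch):
    logits = model(input_ids=batch["input_ids"]).logits
    # Standard next-token loss on the desired (non-refusing) completion.
    ce = F.cross_entropy(logits[:, :-1].flatten(0, 1), batch["labels"][:, 1:].flatten())
    # Extra penalty at the first token where a refusal would begin, pushing
    # probability mass toward the preferred continuation token instead.
    pos = batch["refusal_pos"]                            # (batch,) indices
    at_pos = logits[torch.arange(logits.size(0)), pos]    # (batch, vocab)
    refusal = F.cross_entropy(at_pos, batch["preferred_token"])
    loss = ce + REFUSAL_WEIGHT * refusal
    loss.backward()
    return loss
```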
1
0
2026-03-03T10:50:09
PinkysBrein
false
null
0
o8e27ov
false
/r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/o8e27ov/
false
1
t1_o8e25ma
Airbus A320-200: https://preview.redd.it/dnbht8ti8tmg1.jpeg?width=1000&format=pjpg&auto=webp&s=082ce505da094d6e28b7127659dc5fbc72831052
1
0
2026-03-03T10:49:38
-Ellary-
false
null
0
o8e25ma
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e25ma/
false
1
t1_o8e1xi8
Oh, I haven't checked it yet. Does the model have some kind of parameters for uncensored purposes? Do I just find out by asking a query?
1
0
2026-03-03T10:47:37
Zealousideal-Check77
false
null
0
o8e1xi8
false
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8e1xi8/
false
1
t1_o8e1u4v
Try the other 6 quants and/or the settings for temperature and penalties mentioned on the page of the model.
1
0
2026-03-03T10:46:44
ProfessionalSpend589
false
null
0
o8e1u4v
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e1u4v/
false
1
t1_o8e1ron
In llama.cpp I would guess it's the kwargs flag you can set, but does that only work in the terminal, or could it also work in a GUI frontend? As you can see in the screenshot, there seems to be a GUI button for thinking, unless I'm misinterpreting it and it's just an indicator, not a button.
1
0
2026-03-03T10:46:07
ProdoRock
false
null
0
o8e1ron
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8e1ron/
false
1
t1_o8e1r0q
Ask the ai 🤣
1
0
2026-03-03T10:45:57
Forsaken_Address8812
false
null
0
o8e1r0q
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8e1r0q/
false
1
t1_o8e1mm0
Do you have the DGXs connected for distributed inference? Are you doing that size so you can fit other models as well? My Strix Halo can fit Q6, and xcreates made a video showing the quant does affect the output. Just curious why you chose Q3 with 2x DGX. I get the same speed at Q6_K_XL; I just trust it a bit more.
1
0
2026-03-03T10:44:50
GCoderDCoder
false
null
0
o8e1mm0
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e1mm0/
false
1
t1_o8e1mfn
That's good! I am running a potato PC and pushing for 3 t/s now.
1
0
2026-03-03T10:44:48
Last-Shake-9874
false
null
0
o8e1mfn
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e1mfn/
false
1
t1_o8e1khe
> We should ask what inference stack they are using when people post here asking for Qwen3.5 help People should learn how to ask simple questions. 
1
0
2026-03-03T10:44:18
ProfessionalSpend589
false
null
0
o8e1khe
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e1khe/
false
1
t1_o8e1ip0
My only recommendation is to take care not to end up with this. You need to direct it, because you are designing for humans. https://preview.redd.it/otnlwrgc7tmg1.png?width=495&format=png&auto=webp&s=652af366daa902c49acbd6c56e6c650762c50d4d
1
0
2026-03-03T10:43:49
schnauzergambit
false
null
0
o8e1ip0
false
/r/LocalLLaMA/comments/1rjlru3/extended_godot_mcp_from_20_to_149_tools_aiming/o8e1ip0/
false
1
t1_o8e1hf4
Using it with vLLM (CUDA 13.0) and it's working perfectly with OpenCode; I had no tool-call errors at all (using the official FP8 weights). Until now it's the only open-weight model I've tried (below 200B) which is totally useful and can replace my GLM and MiniMax subscriptions.
1
0
2026-03-03T10:43:31
Pitiful_Task_2539
false
null
0
o8e1hf4
false
/r/LocalLLaMA/comments/1rf4viw/qwen_35_122b_tool_calls_in_opencode/o8e1hf4/
false
1
t1_o8e1h96
Last time I checked, Cline still did not support native tool calls on an OpenAI-compatible endpoint. Try Roo Code instead; it uses native tool calling by default. If you're still having issues, double-check that you have the most recent quants (Unsloth recently recreated their quants; the old ones were broken). If the quant is good, try using a bf16 or f32 cache; the f16 cache (the default in llama.cpp) is known to cause issues, and quantizing the cache even more so. For small models, it's a good idea to use Q6 or Q8. If you're still having issues, I suggest trying 27B or 35B-A3B, with at least a Q5 quant or higher.
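For the cache suggestion specifically, a minimal launch sketch: `--cache-type-k`/`--cache-type-v` are real llama.cpp flags, but the model filename and the other sizes here are placeholders to adjust for your setup.

```python
# Hedged sketch: launch llama-server with a bf16 KV cache instead of the default f16.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3.5-27B-Q6_K.gguf",   # placeholder filename
    "--cache-type-k", "bf16",        # K cache precision
    "--cache-type-v", "bf16",        # V cache precision
    "-ngl", "99",                    # offload all layers that fit
    "--ctx-size", "32768",
])
```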
1
0
2026-03-03T10:43:28
Lissanro
false
null
0
o8e1h96
false
/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/o8e1h96/
false
1
t1_o8e1flq
I have a secret formula I am currently using, still a work in progress. The Q4_K_M is about 260GB, yes, and on the HumanEval bench I am now getting 2.2 tokens/second.
1
0
2026-03-03T10:43:04
Last-Shake-9874
false
null
0
o8e1flq
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e1flq/
false
1
t1_o8e1bb6
Could I run the 35B MoE model with 8 GB VRAM and 16 GB DDR5 RAM?
1
0
2026-03-03T10:41:58
kedarkhand
false
null
0
o8e1bb6
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e1bb6/
false
1
t1_o8e198e
I spent all my money on two DGX Sparks and an RTX Pro 6000, two 5090s and two A6000s. I’m all tapped out. Well, I guess we’ll never know if your numbers are even valid. Too bad.
1
0
2026-03-03T10:41:26
StardockEngineer
false
null
0
o8e198e
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8e198e/
false
1
t1_o8e196c
Oh yes, that is true. I am pushing the limits to see what the biggest usable model I can run on consumer hardware is, throwing in a little pizzazz of my own to load these models. I am still doing optimization and running it through HumanEval to see some results; currently hitting 2.2 t/s.
1
0
2026-03-03T10:41:25
Last-Shake-9874
false
null
0
o8e196c
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e196c/
false
1
t1_o8e0xep
How does it fit into 12GB VRAM and 48GB RAM? The Q4_K_M file is >>60GB. Are you swapping? And you are getting 1.4 t/s!? That's not bad. Poor SSD, doing lots of work. Get some additional RAM :-) I tested models whose answers ran a full night. For testing what the quality of a model is, speed does not matter in my eyes.
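The arithmetic behind the question, as a sketch using the sizes mentioned in this thread (12GB VRAM, 48GB RAM, and the ~260GB Q4_K_M cited elsewhere here); with llama.cpp's default mmap behavior, whatever does not fit in memory streams from disk on demand.

```python
# Rough memory budget only; real overhead (KV cache, OS, fragmentation) makes it worse.
vram_gb, ram_gb, model_gb = 12, 48, 260   # sizes taken from this thread

resident = vram_gb + ram_gb               # weights that can stay in memory
streamed = max(0, model_gb - resident)    # weights paged in from SSD on demand
print(f"~{streamed} GB of weights must stream from SSD, hence the slow t/s")
```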
1
0
2026-03-03T10:38:23
Impossible_Art9151
false
null
0
o8e0xep
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e0xep/
false
1
t1_o8e0uf8
Ahh, thanks for pointing that out. It definitely got faster, but not as fast as cache reuse on my potato hardware; cache reuse was snappy, like 1 sec, but the checkpointing is more like 7-10 sec to the first token.
1
0
2026-03-03T10:37:37
FORNAX_460
false
null
0
o8e0uf8
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o8e0uf8/
false
1
t1_o8e0qo3
Yes! I shouldn't have forgotten that, thanks!
1
0
2026-03-03T10:36:37
CryptographerKlutzy7
false
null
0
o8e0qo3
false
/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8e0qo3/
false
1
t1_o8e0q5o
Go for Qwen 3.5 9B Q4_K_XL. GPU offload: 32; context size: start from 20k. I have a 12 GB GPU and the max it can go without crashing or slowing my PC is 50k; above that it just starts to generate slow t/s. I have this model locally hosted on my whole network and use it from my phone as well, just with the addition of a few MCPs. It's working really well so far. And yesterday I tested it out with a few coding tasks on the actual project I am working on; obviously it is not as good as the high-end models, but it's pretty impressive and knows what it's doing. Just keep it limited to 2 or 3 files per query, otherwise it might not be able to handle the context.
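A concrete version of this setup, as a sketch under my assumptions rather than the commenter's exact command (the filename is a placeholder): llama-server can offload 32 layers, start at a ~20k context, and bind to the LAN so a phone can reach it.

```python
# Hedged sketch of a LAN-exposed llama.cpp server matching the settings above.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3.5-9B-Q4_K_XL.gguf",  # placeholder filename
    "--n-gpu-layers", "32",           # "GPU offload: 32"
    "--ctx-size", "20480",            # start from ~20k context
    "--host", "0.0.0.0",              # reachable from other devices on the network
    "--port", "8080",
])
```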
1
0
2026-03-03T10:36:29
Zealousideal-Check77
false
null
0
o8e0q5o
false
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8e0q5o/
false
1
t1_o8e0pmf
A major update for Agentic Memory is also planned!
1
0
2026-03-03T10:36:20
Active_Concept467
false
null
0
o8e0pmf
false
/r/LocalLLaMA/comments/1r8bc65/built_a_shared_memory_interagent_messaging_layer/o8e0pmf/
false
1
t1_o8e0ova
Here from the future. Can't tell ya, kid. It's a storm here; enjoy while you can.
1
0
2026-03-03T10:36:09
single_shot_
false
null
0
o8e0ova
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8e0ova/
false
1
t1_o8e0o2r
I trained for two days, using an RTX 4060 Ti GPU.
1
0
2026-03-03T10:35:57
Forsaken_Shopping481
false
null
0
o8e0o2r
false
/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8e0o2r/
false
1
t1_o8e0n7u
comparison of languages and accuracy [https://github.com/openai/whisper/discussions/2363](https://github.com/openai/whisper/discussions/2363)
1
0
2026-03-03T10:35:43
inh24
false
null
0
o8e0n7u
false
/r/LocalLLaMA/comments/1fvb83n/open_ais_new_whisper_turbo_model_runs_54_times/o8e0n7u/
false
1
t1_o8e0k5l
We are currently preparing to support multiple models including ChatGPT and others. An update is planned for March 8th!
1
0
2026-03-03T10:34:54
Active_Concept467
false
null
0
o8e0k5l
false
/r/LocalLLaMA/comments/1r8bc65/built_a_shared_memory_interagent_messaging_layer/o8e0k5l/
false
1
t1_o8e0j9y
It was the best of times, it was the blurst of times?
1
0
2026-03-03T10:34:40
ptear
false
null
0
o8e0j9y
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8e0j9y/
false
1
t1_o8e0f7f
Why is no one acknowledging the fact that the model sees context outside of "hi" and tries to decide how to handle language switch?
1
0
2026-03-03T10:33:34
kaisurniwurer
false
null
0
o8e0f7f
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8e0f7f/
false
1
t1_o8e0dfq
WER comparison [https://github.com/openai/whisper/discussions/2363](https://github.com/openai/whisper/discussions/2363)
1
0
2026-03-03T10:33:06
inh24
false
null
0
o8e0dfq
false
/r/LocalLLaMA/comments/1fvb83n/open_ais_new_whisper_turbo_model_runs_54_times/o8e0dfq/
false
1
t1_o8e0byk
Great job, now extract a LoRA and set weight = -1.
1
0
2026-03-03T10:32:41
woct0rdho
false
null
0
o8e0byk
false
/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8e0byk/
false
1
t1_o8e08ii
That isn't how this works. If the point in time comes that Opus 4.6-level coding is actually available locally, there will be an Opus 5 or 6 in the cloud and you will want that.
1
0
2026-03-03T10:31:46
Danmoreng
false
null
0
o8e08ii
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e08ii/
false
1
t1_o8e077t
Thank you for the UX compliment. I think largely where I'm coming from is: if you've got OpenClaw already, does it even make sense to have an on-device personal assistant? The results will never be comparable, but data will remain on device. IDK if that's a large enough moat, and I haven't been able to feel enough pull from the community. Typically people want RAG and agentic AI, but I haven't felt pull for a personal assistant. But I feel like I'm solving something bigger than RAG and agentic AI locally.
1
0
2026-03-03T10:31:25
alichherawalla
false
null
0
o8e077t
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e077t/
false
1
t1_o8e03ba
I used OpenRouter for approx. 4 months. The main benefits were the cost-control feature and the ability to route to different models. I used it in a semi-prod setup (internal tools + a small customer-facing feature). It was nice for fast experiments, but at scale some things pushed me off. The main ones are sudden latency spikes (the average latency was OK, but P95/P99 would randomly spike depending on which provider the request got routed to, from 2-3 secs to 15-20 secs) and opaque debugging. I've moved to llm api ai. It has much better speed stability and clear monitoring features for credit usage per feature/user/provider, and debugging alike. Also, this platform was way more stable; I haven't experienced any downtime whatsoever.
1
0
2026-03-03T10:30:24
Angelic_Insect_0
false
null
0
o8e03ba
false
/r/LocalLLaMA/comments/1p2fnm8/anyone_here_using_openrouter_what_made_you_pick_it/o8e03ba/
false
1
t1_o8e02id
1. There was no M4 Ultra :) 2. I'll gladly await further support of QLoRA for Qwen 3.5 MoEs.
1
0
2026-03-03T10:30:11
Desperate-Sir-5088
false
null
0
o8e02id
false
/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/o8e02id/
false
1
t1_o8e00xu
Sorry, but you're wrong about the Qwen models. You are right about Ollama and other hosting frameworks, but as good as the Qwen models are, they have a serious issue which no one, including Qwen, is addressing. A significant part of their benchmark improvement comes from inference-time reasoning. Turn it off, and the scores drop notably. That's not a problem in itself. What *is* a problem is twofold: 1) If you read the original Qwen model descriptions, towards the end of a very long document, in "considerations", they casually mention that for the 27B/35B the *minimum* safe context for daily use is 32K!!! For **any** query. Below that, there's a chance the model will stop responding early because it doesn't have enough context to reason in. It gets worse. If you have an unusually hard problem that genuinely requires extended thinking, the *minimum* suggested context is 80K!!! Just to accommodate the reasoning. 2) As if the minimum context wasn't bad enough, the model has been so overtrained on thinking that it bleeds through when thinking is disabled, so there's no way to permanently turn it off. You may not have thinking tags with it turned off, but if your prompt includes a suggestion of thinking or reasoning, then the model regularly outputs 30-80K of thinking steps. Don't get me wrong, the outputs and benchmark scores are genuinely impressive, but it's completely unusable as a daily driver unless you don't mind 10-20 minute pauses while it reasons and you have enough VRAM to accommodate the huge minimum context requirements. Qwen 3.5 does exactly what Anthropic did with their latest 4.6 models: they exploited a known loophole in the current benchmarking process, which scores models without accounting for either speed of response or tokens used to achieve the score. Both of those matter in the real world, especially if you're paying for tokens.
1
0
2026-03-03T10:29:46
StuartGray
false
null
0
o8e00xu
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e00xu/
false
1
t1_o8dztyf
accuracy comparison [https://github.com/openai/whisper/discussions/2363](https://github.com/openai/whisper/discussions/2363)
1
0
2026-03-03T10:27:54
inh24
false
null
0
o8dztyf
false
/r/LocalLLaMA/comments/1fvb83n/open_ais_new_whisper_turbo_model_runs_54_times/o8dztyf/
false
1
t1_o8dzs7p
You asked if I’m using OpenClaw locally on my phone: not directly, I run OpenClaw on my laptop and control it from my phone via Telegram, with remote access secured through Tailscale. Right now, I also expose an OpenAI-compatible endpoint from MNN Chat when I need a local provider (the app has an OAI-compatible API), allowing OpenClaw and other clients to communicate with it. I just discovered your Android app, and it’s the best UX I’ve seen for on-device LLMs, my only wish is to use it as a full replacement for MNN Chat, especially if you add an OpenAI-compatible server/API mode. Regarding the use case for exposing it as a server: yes, keeping it local but accessible on LAN or your tailnet is useful as a second provider/sub-agent for fast tasks (doc/image extraction, quick summaries, lightweight vision), while OpenClaw manages routing, memory, and channels. For adoption: your “mobile-first personal assistant that runs local models” approach makes sense, what will retain users are 2, 3 killer workflows (e.g., “send a screenshot/doc → get structured notes + action items,” “receipt/invoice → fields into a template,” “image → OCR + short summary”), plus safe integrations (calendar is usually straightforward; WhatsApp automation can be tricky due to platform rules, so I’d start read-only/notification-first). Also Telegram over WhatsApp.
1
0
2026-03-03T10:27:26
RIP26770
false
null
0
o8dzs7p
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dzs7p/
false
1
t1_o8dzqft
You must have a really old smartphone. :) Currently, even $280 smartphones have 12 GB of RAM.
1
0
2026-03-03T10:26:58
Healthy-Nebula-3603
false
null
0
o8dzqft
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dzqft/
false
1
t1_o8dzqb8
Ok... try now, I have uploaded a higher resolution version
1
0
2026-03-03T10:26:57
CapitalShake3085
false
null
0
o8dzqb8
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dzqb8/
false
1
t1_o8dzkwq
Nice! How long did it take you to train, and which GPU?
1
0
2026-03-03T10:25:31
Direct-Argument-7066
false
null
0
o8dzkwq
false
/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8dzkwq/
false
1
t1_o8dzekv
Loved this tool bro!
1
0
2026-03-03T10:23:52
No-Marketing-2294
false
null
0
o8dzekv
false
/r/LocalLLaMA/comments/1nay7wk/need_a_free_simple_tool_of_whisperv3turbo/o8dzekv/
false
1
t1_o8dzd3p
[deleted]
1
0
2026-03-03T10:23:28
[deleted]
true
null
0
o8dzd3p
false
/r/LocalLLaMA/comments/1r2ptd5/using_glm5_for_everything/o8dzd3p/
false
1
t1_o8dzcsi
Good catch. PR'ed: [https://github.com/mungg/OneRuler/pull/2](https://github.com/mungg/OneRuler/pull/2)
1
0
2026-03-03T10:23:24
uhuge
false
null
0
o8dzcsi
false
/r/LocalLLaMA/comments/1omst7q/polish_is_the_most_effective_language_for/o8dzcsi/
false
1
t1_o8dz94g
Yes, but the local large models are not very intelligent. I have used Qwen3:8b and Qwen3.5:9b. Some minor functions cannot be achieved, and they are not as good as the online models. I suggest using the online models first.
1
0
2026-03-03T10:22:26
CollectionKey2320
false
null
0
o8dz94g
false
/r/LocalLLaMA/comments/1qv6892/help_setting_local_ollama_models_with_openclaw/o8dz94g/
false
1
t1_o8dz6xh
Also, claiming that a Q4 quant of the very latest model drop, of whatever number of params, should by nature be entirely unusable is a _wild_ take.
1
0
2026-03-03T10:21:51
Competitive_Ad_5515
false
null
0
o8dz6xh
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dz6xh/
false
1
t1_o8dz4ym
Sorry, I was in a hurry and forgot to mention the quant. Updated my comment.
1
0
2026-03-03T10:21:22
pmttyji
false
null
0
o8dz4ym
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8dz4ym/
false
1
t1_o8dz1bo
Yeah, because only q4_0 and q8_0 run nicely and natively accelerated on my NPU. There's some great work being done with them for sure, but dynamically weighted quants don't run well on my mobile device. I also ran quants of the 4B and got similar results; my phone usually handles up to 8B models OK. It's probably a config issue on my end, but I'm sharing my bad first impression of the 3.5 model drop. I'm sure they'll be great once I get the settings dialed in and find the right quant for my use cases. And for the record, I love Qwen; 2.5 was my jam.
1
0
2026-03-03T10:20:24
Competitive_Ad_5515
false
null
0
o8dz1bo
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dz1bo/
false
1
t1_o8dyvys
I think you need a classifier as your filter, then pass it to a more capable model; there's no need to use a VLM on a task where more reliable traditional methods work.
1
0
2026-03-03T10:18:57
Chemical_Owl_6352
false
null
0
o8dyvys
false
/r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/o8dyvys/
false
1
t1_o8dyt54
Make sure Resizable BAR and Above 4G Decoding are on, and maybe check if your 3090 firmware has the Resizable BAR update. You could also try a regular ol' BIOS/UEFI update.
1
0
2026-03-03T10:18:12
llama-impersonator
false
null
0
o8dyt54
false
/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8dyt54/
false
1
t1_o8dyrxo
No? I already told you I didn't read the post's body, so I just assumed it was only about saying "hi". Also, I can't read your image; it's too pixelated.
1
0
2026-03-03T10:17:52
Velocita84
false
null
0
o8dyrxo
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dyrxo/
false
1
t1_o8dyofb
This is a bit reductive and there are plenty of edge cases, but in general: * llama.cpp/ik_llama.cpp for CPU or CPU/GPU hybrid inference. * vLLM for multi-GPU inference. Just a general guideline; there are certainly scenarios that fall outside of this, as the toy rule below shows.
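A toy encoding of that rule of thumb (my own illustration; the cutoffs are deliberately crude and ignore the edge cases the comment mentions):

```python
# Hedged sketch: naive engine picker following the guideline above.
def pick_engine(n_gpus: int, needs_cpu_offload: bool) -> str:
    if needs_cpu_offload or n_gpus == 0:
        return "llama.cpp / ik_llama.cpp"   # CPU or CPU+GPU hybrid inference
    if n_gpus > 1:
        return "vLLM"                       # multi-GPU serving
    return "either, depending on workload"

print(pick_engine(n_gpus=2, needs_cpu_offload=False))  # -> vLLM
```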
1
0
2026-03-03T10:16:55
RG_Fusion
false
null
0
o8dyofb
false
/r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8dyofb/
false
1
t1_o8dykpv
Hey, could you please answer some noob questions? 1. What settings are recommended? I'm planning to use this model in a chat bot without thinking. 2. Is this model capable of using tools without thinking? Or do I need to explicitly say in the prompt "use X tool"?
1
0
2026-03-03T10:15:54
groosha
false
null
0
o8dykpv
false
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8dykpv/
false
1
t1_o8dyh9o
Not for the home, unless you have a number of GPUs. MoEs are fast with a batch of 1, but they can slow down when you get more in flight, since the different token sequences being run in parallel will activate different experts, so memory bandwidth takes a hit. One way to get around that is to deploy multiple copies of the model such that different experts get their own bandwidth, so to speak. Dense models don't have this issue, and their speed can be increased with speculative decoding.
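A quick way to see the batching effect (a simplification assuming independent, uniform top-k routing; real routers are neither): count the expected number of distinct experts a batch touches per layer. Weight traffic stops being amortized once most experts are active.

```python
# Expected distinct experts touched per layer with top-k routing over E experts,
# batch size B, under a uniform-routing assumption: E * (1 - (1 - k/E)**B).

def expected_unique_experts(E: int, k: int, batch: int) -> float:
    return E * (1 - (1 - k / E) ** batch)

for b in (1, 4, 16, 64):
    print(b, round(expected_unique_experts(E=128, k=8, batch=b), 1))
# batch 1 touches ~8 of 128 experts; batch 64 touches ~126 of 128.
```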
1
0
2026-03-03T10:14:58
RnRau
false
null
0
o8dyh9o
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dyh9o/
false
1
t1_o8dygmw
No comparisons?
1
0
2026-03-03T10:14:49
charmander_cha
false
null
0
o8dygmw
false
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8dygmw/
false
1
t1_o8dyfgk
Anyone fancy explaining what this is?
1
0
2026-03-03T10:14:30
Melodic_Reality_646
false
null
0
o8dyfgk
false
/r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8dyfgk/
false
1
t1_o8dyfe5
Haha massive model for your machine. MASSIVE
1
0
2026-03-03T10:14:29
nakedspirax
false
null
0
o8dyfe5
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8dyfe5/
false
1
t1_o8dydif
Can it work with Antigravity Gemini/Claude models?
1
0
2026-03-03T10:13:59
Dapper-Neat9261
false
null
0
o8dydif
false
/r/LocalLLaMA/comments/1r8bc65/built_a_shared_memory_interagent_messaging_layer/o8dydif/
false
1
t1_o8dyas0
Just download Heretic versions.
1
0
2026-03-03T10:13:14
-Ellary-
false
null
0
o8dyas0
false
/r/LocalLLaMA/comments/1regq10/qwen_35_2735122b_jinja_template_modification/o8dyas0/
false
1
t1_o8dy84q
I'm assuming that is your unloaded speed before adding any context. It probably drops below 1 t/s after a bit of use, but you could answer that better than I can. If you're purchasing a computer explicitly for running large models, you're much better off getting a Mac Pro or an EPYC server. I went the server route, and get 16 tokens/second on Q5-K-XL. I understand that not everyone has the opportunity to build out a system like this, so what you're doing is a legitimate alternative. Still, I have to ask, what can someone do with a 1 token/second model?
1
0
2026-03-03T10:12:31
RG_Fusion
false
null
0
o8dy84q
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8dy84q/
false
1
t1_o8dy04s
800M params, btw. Incredible work, Qwen! Large (small) language models.
1
0
2026-03-03T10:10:25
getpodapp
false
null
0
o8dy04s
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dy04s/
false
1
t1_o8dxzad
finally some good content on this sub
1
0
2026-03-03T10:10:11
pythonlover001
false
null
0
o8dxzad
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8dxzad/
false
1
t1_o8dxyr6
Pretty sure IQ4_NL is as fast but also way smarter. And weren't Q_K quants finally optimized for ARM a few months ago?
1
0
2026-03-03T10:10:03
ABLPHA
false
null
0
o8dxyr6
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dxyr6/
false
1
t1_o8dxv01
An Unsloth dynamic quant v2 Q5 or Q6 will be quick with good quality.
1
0
2026-03-03T10:09:04
getpodapp
false
null
0
o8dxv01
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dxv01/
false
1
t1_o8dxutv
Swizzling is replacing by reference. It was a silly joke, since saying "sorry, can't answer your exact question, but" sounded bot-like, yet you've been outspoken about bots and slop and such.
1
0
2026-03-03T10:09:01
Accomplished_Ad9530
false
null
0
o8dxutv
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8dxutv/
false
1
t1_o8dxqm4
Interesting. Any hints like that for a desktop PC setup with an i7-6700, 24 GB RAM & a GTX 1070 with 8 GB VRAM?
1
0
2026-03-03T10:07:53
sydulysses
false
null
0
o8dxqm4
false
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8dxqm4/
false
1
t1_o8dxpms
Please go and learn socialist theory first. China is in the preliminary stage of the construction of socialism. They have a market and private enterprise in some fields, but the commanding heights of the economy are fully socially owned and planned. The vast majority of the Chinese economy remains socially owned and directed for the common good by the five-year plans, especially critical sectors like steel, energy, etc.
1
0
2026-03-03T10:07:38
Imperator_Basileus
false
null
0
o8dxpms
false
/r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/o8dxpms/
false
1
t1_o8dxphc
Tried the chat online and it confidently gaslit me many times. This is absolutely not usable, at least for image input.
1
0
2026-03-03T10:07:35
MastodonParty9065
false
null
0
o8dxphc
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dxphc/
false
1
t1_o8dxjct
HOW?! I'm struggling with 12GB VRAM + 64GB RAM to run 27B Q4_K_M with 32k context. It's a slow 5 t/s and keeps getting slower over time.
1
0
2026-03-03T10:05:58
Beautiful_Egg6188
false
null
0
o8dxjct
false
/r/LocalLLaMA/comments/1rbkeea/which_one_are_you_waiting_for_more_9b_or_35b/o8dxjct/
false
1
t1_o8dxi2z
No, the KV cache does not grow 100x. The attention matrix does.
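The distinction in numbers (toy dimensions, not any particular model's): the KV cache is linear in context length while the attention score matrix is quadratic, so a 10x longer context means 10x the cache but 100x the matrix.

```python
# Linear vs. quadratic growth with context length; dimensions are illustrative.

def kv_cache_elems(layers: int, kv_heads: int, head_dim: int, ctx: int) -> int:
    return 2 * layers * kv_heads * head_dim * ctx   # K and V: O(ctx)

def attn_matrix_elems(heads: int, ctx: int) -> int:
    return heads * ctx * ctx                        # scores: O(ctx^2)

for ctx in (10_000, 100_000):
    print(ctx, kv_cache_elems(32, 8, 128, ctx), attn_matrix_elems(32, ctx))
# 10x the context -> 10x the KV cache, but 100x the attention matrix.
```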
1
0
2026-03-03T10:05:37
Budget_Author_828
false
null
0
o8dxi2z
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dxi2z/
false
1
t1_o8dxg78
I got the Q3_K_XL Unsloth version running on my 2x DGX Spark cluster and I'm getting 11 t/s.
1
0
2026-03-03T10:05:07
CATLLM
false
null
0
o8dxg78
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8dxg78/
false
1
t1_o8dx9at
Frankly, after using it, it blew me away instantly. I kept using it despite issues with prompt reprocessing.
1
0
2026-03-03T10:03:20
kaisurniwurer
false
null
0
o8dx9at
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dx9at/
false
1
t1_o8dx3t8
4B is on the same level (or higher) as 80B A3B. Though 4B was always better than it should have been.
1
0
2026-03-03T10:01:52
kaisurniwurer
false
null
0
o8dx3t8
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8dx3t8/
false
1
t1_o8dx2ib
I can run both, but I often prefer the 122B because I can run it way faster. It's semi-usable for real work. I recommend you use an Unsloth quant; Q3_K_XL is my go-to.
1
0
2026-03-03T10:01:31
tylerhardin
false
null
0
o8dx2ib
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8dx2ib/
false
1
t1_o8dx28d
As if that isn't much
1
0
2026-03-03T10:01:27
KaroYadgar
false
null
0
o8dx28d
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dx28d/
false
1
t1_o8dwyao
Look here: [https://whatmodelscanirun.com/](https://whatmodelscanirun.com/)
1
0
2026-03-03T10:00:25
Chess_pensioner
false
null
0
o8dwyao
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8dwyao/
false
1
t1_o8dwxhk
Can you show the code implementation of the tools?
1
0
2026-03-03T10:00:12
pppp1234543
false
null
0
o8dwxhk
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dwxhk/
false
1
t1_o8dwooz
Yeah, this is the real answer. Old server parts. ECC DDR4 was somewhat cheap not that long ago. My dual-Xeon 400GB DDR4 server cost me ~1000 USD, which is still not exactly cheap, but for a hobby, and with the secondary utility of serving as a storage server, I didn't mind paying that for a pretty cool hobby. Though I still mostly use a 3090 and smaller models, since prompt processing is quite important for agentic use.
1
0
2026-03-03T09:57:54
kaisurniwurer
false
null
0
o8dwooz
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dwooz/
false
1
t1_o8dwn2a
I'll look into it
1
0
2026-03-03T09:57:28
Ilishka2003
false
null
0
o8dwn2a
false
/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/o8dwn2a/
false
1
t1_o8dwmaf
Yeah, that makes sense. Are you using OpenClaw locally on your mobile phone, btw? I'm itching to create a mobile-first personal assistant that runs local models, and now with Qwen3.5 0.8B I feel like it makes sense to do it, only because the model is small and intelligent. But I really don't know about adoption. I'm thinking of very secretary-type use cases: check WhatsApp, and ensure that there are appropriate calendar notifications for all personal obligations so that professional and personal don't clash. What are your thoughts?
1
0
2026-03-03T09:57:15
alichherawalla
false
null
0
o8dwmaf
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dwmaf/
false
1
t1_o8dwjoe
Earlier today? [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/tree/main](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/tree/main) None of the files have been updated today.
1
0
2026-03-03T09:56:33
dark-light92
false
null
0
o8dwjoe
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8dwjoe/
false
1
t1_o8dwbjc
Yes, it's always great to have a sub-agent that can be added locally to your OpenClaw, for example, for simpler tasks.
1
0
2026-03-03T09:54:25
RIP26770
false
null
0
o8dwbjc
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dwbjc/
false
1
t1_o8dwb4w
Oh yeah, Off Grid allows you to do that + image gen too. It further allows you to import models if you've got 'em locally :)
1
0
2026-03-03T09:54:18
alichherawalla
false
null
0
o8dwb4w
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dwb4w/
false
1
t1_o8dwa01
https://preview.redd.it/…rs "322755".
1
0
2026-03-03T09:54:00
dummyTukTuk
false
null
0
o8dwa01
false
/r/LocalLLaMA/comments/1rjfyqf/qwen_35_9b_on_a_dual_reasoning_math_game/o8dwa01/
false
1
t1_o8dw5nx
thanks! I don't expose it as a server just yet. Is there a use case for it?
1
0
2026-03-03T09:52:50
alichherawalla
false
null
0
o8dw5nx
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8dw5nx/
false
1
t1_o8dw4s7
> just two $10k mac ultras [..] You can hook 4 up for $40k > Hell, for $60k Yeah, easy. > it isn't quite as high as you're saying it is. It is. Even if not the 200k (for full GPUs, I assume), this is almost starting to touch the lower bracket of property values around my parts.
1
0
2026-03-03T09:52:36
kaisurniwurer
false
null
0
o8dw4s7
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dw4s7/
false
1
t1_o8dw3vk
https://preview.redd.it/…6c0c06841018ccd3
1
0
2026-03-03T09:52:21
CapitalShake3085
false
null
0
o8dw3vk
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dw3vk/
false
1
t1_o8dw25s
You're right, however you're missing another aspect - the integrated memory controller (IMC).  With a higher end CPU you're more likely to get a better IMC, which in turn means it can handle higher memory speeds.
1
0
2026-03-03T09:51:54
legit_split_
false
null
0
o8dw25s
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8dw25s/
false
1
t1_o8dw1u4
You should replace ctx-size with fit-ctx and watch the magic happen.
1
0
2026-03-03T09:51:49
Xantrk
false
null
0
o8dw1u4
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dw1u4/
false
1
t1_o8dvzft
Stop using Ollama and try llama.cpp, like you said.
1
0
2026-03-03T09:51:11
jwpbe
false
null
0
o8dvzft
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dvzft/
false
1
t1_o8dvnwk
I'm trying the qwen3.5-4b-mlx in LM Studio, and it says "Wait, one more check." over and over and over. Am I doing something wrong?
1
0
2026-03-03T09:48:05
firesalamander
false
null
0
o8dvnwk
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dvnwk/
false
1
t1_o8dvnxm
Well, you can if you have any recent phone. It's 4 GB in size with a Q4 quant and runs pretty well on my phone. The bigger issue is the speed: I am getting 5 tok/s on an Oppo Find X9 Pro, a flagship phone that's a few months old. If we finally get MTP working in llama.cpp, I can see a near future where this easily reaches reading speed, which then means it's enough for asking simple questions.
1
0
2026-03-03T09:48:05
OrkanFlorian
false
null
0
o8dvnxm
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dvnxm/
false
1
t1_o8dvmuj
Exactly. I tried it and it confidently gave a wrong answer and got caught in an infinite thinking loop when I corrected it, completely wasting battery.
1
0
2026-03-03T09:47:48
ptear
false
null
0
o8dvmuj
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dvmuj/
false
1
t1_o8dvm16
Didn't try.
1
0
2026-03-03T09:47:34
MarketingGui
false
null
0
o8dvm16
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8dvm16/
false
1
t1_o8dvjt4
What GPU do you have?
1
0
2026-03-03T09:46:59
tappyson
false
null
0
o8dvjt4
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dvjt4/
false
1
t1_o8dvg5l
The person creating these benchmarks posts on here once in a while; they have done both: https://www.apex-testing.org/ But I'm not 100% confident in the testing method/reliability, especially considering the bad quants on release. That being said, they have tested both there and the scores look somewhat reasonable.
1
0
2026-03-03T09:45:59
fuckingredditman
false
null
0
o8dvg5l
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8dvg5l/
false
1
t1_o8dvg7b
What parameters did you find work best, if you want to share?
1
0
2026-03-03T09:45:59
DD3Boh
false
null
0
o8dvg7b
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dvg7b/
false
1
t1_o8dvg2y
Upvote for the software setup (I'll have to try it someday), but why not just run an SSH daemon and connect with a VPN?
1
0
2026-03-03T09:45:57
ProfessionalSpend589
false
null
0
o8dvg2y
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dvg2y/
false
1