name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o83xves
How could an LLM possibly know it's running in what you consider local? What an asinine question.
4
0
2026-03-01T20:01:40
Moist-Length1766
false
null
0
o83xves
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83xves/
false
4
t1_o83xt9t
Hi! Said friend here. I run on 2x3090, using MTP=5, getting between 60-110 t/s on the 27B dense (yes, really, the dense). Happy to share my command, but **tool calling is currently broken with MTP**. I found a patch; I need to get to my laptop to share it. My launch command is this: ``` #!/bin/bash . /mnt/no-ba...
29
0
2026-03-01T20:01:22
JohnTheNerd3
false
null
0
o83xt9t
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83xt9t/
false
29
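JohnTheNerd3's full launch script is truncated above, so the following is only a rough sketch of what an MTP speculative-decoding launch can look like in vLLM: the model path, the `"mtp"` method string, and the flag values are illustrative assumptions, not his actual command.

```
#!/bin/bash
# Hypothetical vLLM launch with multi-token prediction (MTP) speculative
# decoding; paths and values are placeholders, not JohnTheNerd3's script.
# --tensor-parallel-size 2 splits the model across the two RTX 3090s.
vllm serve /models/Qwen3.5-27B \
  --tensor-parallel-size 2 \
  --speculative-config '{"method": "mtp", "num_speculative_tokens": 5}' \
  --gpu-memory-utilization 0.95
```

`num_speculative_tokens: 5` mirrors the MTP=5 he mentions; per his comment, tool calling may stay broken with MTP enabled until his patch is shared.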
t1_o83xqtq
I'm loving the 35B. It slows down to like 5 t/s on my setup at 50k+ context, but it's the only model I've found that works better at this size than the previous 30B-A3B.
1
0
2026-03-01T20:01:02
National_Meeting_749
false
null
0
o83xqtq
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83xqtq/
false
1
t1_o83xoea
I'm just getting into local LLMs for D&D roleplay. Is Qwen one of the best choices for that, at the largest size I can fit in my VRAM?
8
0
2026-03-01T20:00:41
Thardoc3
false
null
0
o83xoea
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83xoea/
false
8
t1_o83xo7d
R7090 3gig 🫡
1
0
2026-03-01T20:00:40
ThisWillPass
false
null
0
o83xo7d
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83xo7d/
false
1
t1_o83xneu
Yeah don't use Q4_K_XL. Use Q6. That's the fix.
11
0
2026-03-01T20:00:33
Hot_Turnip_3309
false
null
0
o83xneu
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83xneu/
false
11
t1_o83xlsr
3600MHz CL18
1
0
2026-03-01T20:00:19
cookieGaboo24
false
null
0
o83xlsr
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o83xlsr/
false
1
t1_o83xibw
Built this because I was tired of manually checking prices across 8 different providers every time I needed to spin up a training run.
-5
0
2026-03-01T19:59:50
Plane-Marionberry380
false
null
0
o83xibw
false
/r/LocalLLaMA/comments/1ri7byg/why_aws_charges_60x_more_for_h100s_than_vastai/o83xibw/
false
-5
t1_o83xhgr
To get the real power of these frontier models, you need to run them with very little quantization. It's not accessible unless you have a bunch of Pro 6000s.
-3
1
2026-03-01T19:59:43
Fit-Pattern-2724
false
null
0
o83xhgr
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83xhgr/
false
-3
t1_o83xhcv
LMAO Qwen3-27b better than DeepSeek-R1? I need the number of your pusher.
14
0
2026-03-01T19:59:42
Expensive-Paint-9490
false
null
0
o83xhcv
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83xhcv/
false
14
t1_o83xgwu
ROCm 7.8-7.12 are all the next-gen builds. I'm saying they added it back, but as a generic implementation that should now work for Vega iGPUs; because the MI50 shares the same architecture, it also now regains support. Basically, we didn't regain MI50 support because of community outcry, rather AMD getting t...
1
0
2026-03-01T19:59:38
JaredsBored
false
null
0
o83xgwu
false
/r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/o83xgwu/
false
1
t1_o83xefm
Settings? I have been using: * Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
2
0
2026-03-01T19:59:17
jacobpederson
false
null
0
o83xefm
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83xefm/
false
2
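jacobpederson's sampler settings above are UI-agnostic; as a minimal sketch, here is one way to pass them to a llama-server instance assumed to be listening on localhost:8080 (llama.cpp names the last parameter `repeat_penalty` rather than `repetition_penalty`). The endpoint URL and prompt are placeholders.

```
# Apply the thinking-mode sampler settings via llama-server's /completion API.
curl http://localhost:8080/completion -d '{
  "prompt": "Write a minimal HTML page with a centered heading.",
  "temperature": 0.6,
  "top_p": 0.95,
  "top_k": 20,
  "min_p": 0.0,
  "presence_penalty": 0.0,
  "repeat_penalty": 1.0
}'
```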
t1_o83xe2q
What's up Jacek, what's happening? Are these models released yet? Old news? Tell me, idk.
1
0
2026-03-01T19:59:14
No_Afternoon_4260
false
null
0
o83xe2q
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83xe2q/
false
1
t1_o83xcu2
Is it the Unsloth or the Qwen GGUF? Unsloth does things a different way and may produce a divergent variant.
1
0
2026-03-01T19:59:04
Honest-Debate-6863
false
null
0
o83xcu2
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83xcu2/
false
1
t1_o83xagr
I mean, it sounds like it's what YOU want, and that's cool. But you should just treat this as your own hobby.
2
0
2026-03-01T19:58:44
McSendo
false
null
0
o83xagr
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83xagr/
false
2
t1_o83x7kw
To quickly stand things up, go right ahead. In the long run you're gonna lose your shirt. Especially for anything public facing.
5
0
2026-03-01T19:58:20
a_beautiful_rhind
false
null
0
o83x7kw
false
/r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o83x7kw/
false
5
t1_o83x7hz
Weird. Just ran Q4_K_XL a couple times on my rig and came out with glitchy but working versions both times. Maybe something is a bit off with the Q8_0 versions?
1
0
2026-03-01T19:58:19
jacobpederson
false
null
0
o83x7hz
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83x7hz/
false
1
t1_o83x4ng
I don't. I tested it and settled on 122B at MXFP4. But the output quality of 27B, even with thinking off, with vLLM auto-quantizing it to FP8, was noticeably better than 35B at FP16. 27B benchmarks higher than Haiku 4.5, which is why it interested me. 35B hallucinated a lot when running it as an agent vs 80B, which was the...
3
0
2026-03-01T19:57:55
TokenRingAI
false
null
0
o83x4ng
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o83x4ng/
false
3
t1_o83x3cz
But my friend in GGUF LLMs, I DO NOT have the hardware capability nor the insane amount of money to buy the required hardware to run "uncompressed" LLMs. They were ALL tested with the same quants; my results were based on the quantized versions. It was not an uneven comparison. Both Minimax and Qwen3.5 were Q4 quan...
0
0
2026-03-01T19:57:45
mkMoSs
false
null
0
o83x3cz
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o83x3cz/
false
0
t1_o83x00s
OCR is incredibly good, even in smaller models.
1
0
2026-03-01T19:57:17
illustrious_trees
false
null
0
o83x00s
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83x00s/
false
1
t1_o83wzwb
That is because the tests are created for "standard" usage. It's the same when looking at bigger models. I run Minimax M2.5 and it's good, but it's not close to Sonnet, GPT 5.2, or Gemini 3 in the real world. Still, they are very impressive.
12
0
2026-03-01T19:57:16
Maximum_Parking_5174
false
null
0
o83wzwb
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83wzwb/
false
12
t1_o83wz8o
Still way better than what I've been getting with Qwen3.5-35B-A3B-Q8_0.gguf - mostly just black screens :D I just ran it a couple times on UD-Q4_K_XL to confirm and it came out with working versions both times.
6
0
2026-03-01T19:57:10
jacobpederson
false
null
0
o83wz8o
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83wz8o/
false
6
t1_o83wxpd
My Tesla P40s are ready.
26
0
2026-03-01T19:56:57
Darklumiere
false
null
0
o83wxpd
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83wxpd/
false
26
t1_o83wxh4
Which model and parameters (i.e. context window) would be good to use with a single RTX 3090, Ollama as the local LLM provider, and a Claude agent set to use local models? I've used GLM 4.7 Flash, but it gets stuck sometimes or gives timeouts or weird results.
1
0
2026-03-01T19:56:55
albertuki00
false
null
0
o83wxh4
false
/r/LocalLLaMA/comments/1rbdsds/best_model_for_single_3090_in_2026/o83wxh4/
false
1
t1_o83wunv
You can't use 5xxx GPUs with Pascal that well due to where they cut the driver. P40 VRAM is faster than system RAM, but prompt processing leaves much to be desired. PyTorch has also dropped Pascal after 2.7.
1
0
2026-03-01T19:56:31
a_beautiful_rhind
false
null
0
o83wunv
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o83wunv/
false
1
t1_o83wtag
Look how all my comments were downvoted :) I believe all these guys are running DeepSeek on their laptops ;)
2
0
2026-03-01T19:56:19
jacek2023
false
null
0
o83wtag
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o83wtag/
false
2
t1_o83wp28
Understandable, you're not the only one hyped for these things ^^
1
0
2026-03-01T19:55:43
No-Statistician-374
false
null
0
o83wp28
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83wp28/
false
1
t1_o83wou4
Give the following a try: CPU Thread Pool Size: 8; Max Concurrent: 1; mmap(): off. Remove your forced layers into CPU. Number of experts: 8. Check your used VRAM and adjust the GPU layers accordingly; don't overfill it. I highly recommend using llama.cpp directly.
4
0
2026-03-01T19:55:41
Waste-Excitement-683
false
null
0
o83wou4
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83wou4/
false
4
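For the "use llama.cpp directly" suggestion above, a minimal llama-server sketch of those LM Studio settings might look like the following; the model filename and the `-ngl` value are placeholders to be tuned against free VRAM, and the expert-count knob is an LM Studio setting with no flag shown here.

```
# Rough llama-server equivalent of the suggested LM Studio settings:
# 8 CPU threads, one concurrent request, mmap off, GPU layers tuned to VRAM.
llama-server -m Qwen3.5-35B-A3B-Q4_K_XL.gguf \
  --threads 8 \
  --parallel 1 \
  --no-mmap \
  -ngl 32
```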
t1_o83wncz
Someone talking about this here: https://www.reddit.com/r/Qwen_AI/comments/1ri2l62/comment/o831mjo/
2
0
2026-03-01T19:55:29
Elusive_Spoon
false
null
0
o83wncz
false
/r/LocalLLaMA/comments/1ri6q8d/repeat_pp_while_using_qwen35_27b_local_with/o83wncz/
false
2
t1_o83wn71
What's this bullshit? This is just a tweet from some rando who read that Qwen will release small models soon and he is simply SPECULATING that it will be "Qwen3.5 9B, 4B, 2B, 0.8B, or something in between is possible." How dumb are you people?
42
0
2026-03-01T19:55:27
cyberdork
false
null
0
o83wn71
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83wn71/
false
42
t1_o83wmh9
Don't tell /r/selfhosted, they told me you need 20k minimum to have a chance at self hosting LLMs.
12
0
2026-03-01T19:55:21
Darklumiere
false
null
0
o83wmh9
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83wmh9/
false
12
t1_o83wm3x
In a perfect world? Sure, but right now the closed-source models are one-shotting most features and the open-source models are quickly catching up, and for those who want to move very fast, they're definitely letting AI do all the work.
-2
1
2026-03-01T19:55:18
MoaTheDog
false
null
0
o83wm3x
false
/r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o83wm3x/
false
-2
t1_o83wlcn
[removed]
1
0
2026-03-01T19:55:12
[deleted]
true
null
0
o83wlcn
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83wlcn/
false
1
t1_o83whpy
How dare you show me the errors of my ways. I mixed up the poster with someone else who actually is from the Qwen team.
1
0
2026-03-01T19:54:41
National_Meeting_749
false
null
0
o83whpy
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83whpy/
false
1
t1_o83w94w
They are silly but the model still seems really good. It is certainly better than the big R1 model.
4
0
2026-03-01T19:53:29
Utoko
false
null
0
o83w94w
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83w94w/
false
4
t1_o83w8vt
llama.cpp doesn't support any MTP; vLLM does, though.
6
0
2026-03-01T19:53:27
thebadslime
false
null
0
o83w8vt
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83w8vt/
false
6
t1_o83w85o
The GOAT continues to compute
15
0
2026-03-01T19:53:21
cunasmoker69420
false
null
0
o83w85o
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83w85o/
false
15
t1_o83w56j
Not sure if that's sarcasm or not, but the same problem would manifest if I needed the latest set of Vitest functions instead of what the model thinks the API looks like.
1
0
2026-03-01T19:52:56
Medium_Chemist_4032
false
null
0
o83w56j
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83w56j/
false
1
t1_o83w32s
Yeah 30Bish seems to be a sweet spot for MoE at the moment. I'm blown away by the modern models that size.
23
0
2026-03-01T19:52:39
Aromatic-Low-4578
false
null
0
o83w32s
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83w32s/
false
23
t1_o83w1t5
The Qwen team has something special going on over there. Kimi is good, Minimax is great, GLM has some particular strengths as well, but a lot of their strength comes from their massive size. All around, at all sizes, it feels like Qwen is the only one truly competing with the western model makers.
5
0
2026-03-01T19:52:28
National_Meeting_749
false
null
0
o83w1t5
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83w1t5/
false
5
t1_o83vt42
As long as no one from the actual Qwen team says that, I'll have a helping of salt ready ;)
1
0
2026-03-01T19:51:16
No-Statistician-374
false
null
0
o83vt42
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83vt42/
false
1
t1_o83vobb
Might be because improving the error rate linearly requires an exponential increase in params/training.
8
0
2026-03-01T19:50:35
wektor420
false
null
0
o83vobb
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83vobb/
false
8
t1_o83vnop
[removed]
1
0
2026-03-01T19:50:30
[deleted]
true
null
0
o83vnop
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83vnop/
false
1
t1_o83viu3
I could believe it's 84% as smart, but not anywhere near as knowledgeable. What I'm personally much more interested in, in terms of performance, is the 35B MoE and how it could scale with more active parameters, like 5B or even 9B.
97
0
2026-03-01T19:49:49
BlueSwordM
false
null
0
o83viu3
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83viu3/
false
97
t1_o83vhry
u/No_Afternoon_4260 u/ttkciar I have no words...
1
0
2026-03-01T19:49:40
jacek2023
false
null
0
o83vhry
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83vhry/
false
1
t1_o83vgcr
You're not meant to discuss with models, just code and run agents.
1
0
2026-03-01T19:49:29
anon235340346823
false
null
0
o83vgcr
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83vgcr/
false
1
t1_o83vfgi
Never if you want your ish to work.
9
0
2026-03-01T19:49:21
a_beautiful_rhind
false
null
0
o83vfgi
false
/r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o83vfgi/
false
9
t1_o83vf66
I have 16GB of VRAM (5060 Ti) and I get around 50 t/s as well in LM Studio. I think the issue is that you didn't increase the GPU offload to the maximum. I use "GPU offload: 40" and "MoE layer offload: 20" with 30K context and everything else stock, and get 50 t/s. If I lower the "GPU offload" to 26 layers like in your set...
3
0
2026-03-01T19:49:18
kke12
false
null
0
o83vf66
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83vf66/
false
3
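kke12's LM Studio numbers translate roughly to llama.cpp flags as below; this assumes a recent llama.cpp build that has `--n-cpu-moe` (which keeps the MoE expert weights of the first N layers on the CPU), and the model filename is a placeholder.

```
# Approximate llama-server translation of "GPU offload: 40",
# "MoE layer offload: 20", and 30K context.
llama-server -m Qwen3.5-35B-A3B-Q4_K_XL.gguf \
  -ngl 40 \
  --n-cpu-moe 20 \
  -c 30000
```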
t1_o83vetg
u/No_Afternoon_4260 u/ttkciar how is this an on-topic post?
0
0
2026-03-01T19:49:15
jacek2023
false
null
0
o83vetg
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o83vetg/
false
0
t1_o83v889
>"You say Qwen3CoderNext and then refer to the Unsloth issue which was with Qwen 3.5-35B" Unsloth's Qwen 3 Coder Next UD-Q4_K_XL also has the same issue, if you look at it on HuggingFace.
1
0
2026-03-01T19:48:20
Klutzy-Snow8016
false
null
0
o83v889
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o83v889/
false
1
t1_o83v81x
It is indeed a beast. I've found that it gets a lot of user/agent and present/past confusion and degrades with long sessions though.
1
0
2026-03-01T19:48:18
localizeatp
false
null
0
o83v81x
false
/r/LocalLLaMA/comments/1r397hi/step_35_flash_is_a_beast/o83v81x/
false
1
t1_o83v37t
https://www.reddit.com/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/ No waiting required! Happening today. The 35B-A3B is probably gonna be the one for him. But I'm super excited to see what the 9B and the 4B models...
1
0
2026-03-01T19:47:37
National_Meeting_749
false
null
0
o83v37t
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83v37t/
false
1
t1_o83v26q
Just tell me why? WHY? WHY LLAMA3-8B? Why?
1
0
2026-03-01T19:47:29
Iory1998
false
null
0
o83v26q
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o83v26q/
false
1
t1_o83v0wu
Maybe a stupid question... how do I deactivate thinking/reasoning in LM Studio? It's the 27B version.
1
0
2026-03-01T19:47:18
scubid
false
null
0
o83v0wu
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83v0wu/
false
1
t1_o83unu5
Bro, what the fuck, right? 6 downvotes!? Downvoters are a bunch of... What's the word? Karens?
-25
0
2026-03-01T19:45:27
MadwolfStudio
false
null
0
o83unu5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83unu5/
false
-25
t1_o83uni0
You are using low-bit quantized versions of the models and you are complaining about them hallucinating? What's worse, you are recommending people not to use them? I don't think that's fair. At most, you could recommend against the lower quants, but never against the whole model. What you just did is like testing the camera of the ...
0
0
2026-03-01T19:45:25
Iory1998
false
null
0
o83uni0
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o83uni0/
false
0
t1_o83und6
What are you running it in? I'm using LM Studio. Also, what are your GPU offload settings? I'm going to try to configure it today. I was **very** impressed with its analysis, and for the first time I feel that we've got a SOTA-level open-source model on our local systems. I'm not easily impressed, but 3.5 has been giving m...
1
0
2026-03-01T19:45:23
GrungeWerX
false
null
0
o83und6
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o83und6/
false
1
t1_o83uj22
Opencode
1
0
2026-03-01T19:44:47
sagiroth
false
null
0
o83uj22
false
/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o83uj22/
false
1
t1_o83uiqf
not the best, it has multiple seed points instead of 1?
4
0
2026-03-01T19:44:45
Honest-Debate-6863
false
null
0
o83uiqf
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83uiqf/
false
4
t1_o83uhi0
Yes - I realise that my ideas are probably not new, but I haven't yet found an open source implementation that I can use. If you know of one... (Or more likely two open source projects - one which does the agentic orchestration / choreography i.e. runs the queues, defines how workflows work and decisions are made etc...
2
0
2026-03-01T19:44:34
Protopia
false
null
0
o83uhi0
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o83uhi0/
false
2
t1_o83uh3i
Local LLMs will very rarely save you money. The infrastructure and energy costs just can't compete with the efficiency you get at scale, not to mention that many cloud platforms are operating at a loss (or very close to it) right now in an effort to gain market share. There are still many valid reasons to run local L...
1
0
2026-03-01T19:44:31
suicidaleggroll
false
null
0
o83uh3i
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o83uh3i/
false
1
t1_o83ubxb
Please support FIM, please support FIM, please support FIM ... 🙏
1
0
2026-03-01T19:43:48
danigoncalves
false
null
0
o83ubxb
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83ubxb/
false
1
t1_o83ubxv
I mostly don't read my code.... But I have bugs and no customers*** LOL. On the other hand, I have way better code coverage in every repo I touch. I feel okay about the code quality.
4
0
2026-03-01T19:43:48
boinkmaster360
false
null
0
o83ubxv
false
/r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o83ubxv/
false
4
t1_o83u2m7
Well, I have. I also tried all those models, and the 397B-A17B is fantastic, but the other models are just good. And 27B is not really as good as DeepSeek R1 671B; it doesn't feel anywhere near it.
5
0
2026-03-01T19:42:30
uti24
false
null
0
o83u2m7
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83u2m7/
false
5
t1_o83tvb5
Well, there are good reasons to run lighter versions if you want to self-host apps. These agents are very good at managing them.
1
0
2026-03-01T19:41:29
Beejsbj
false
null
0
o83tvb5
false
/r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o83tvb5/
false
1
t1_o83tquw
Well I suggest you wait a little bit longer, there's a very strong possibility we'll see Qwen3.5 'small' models released over the next few days. Rumored to be a 0.8B, 2B, 4B and 9B model. Certainly the 4B model would fit well for you, and the 9B could too if you're willing to have less context or use a slightly lower q...
2
0
2026-03-01T19:40:52
No-Statistician-374
false
null
0
o83tquw
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83tquw/
false
2
t1_o83tq5m
Thank you for saying what people don’t want to hear, the 27b and 35b models are not that good
5
0
2026-03-01T19:40:46
BitXorBit
false
null
0
o83tq5m
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83tq5m/
false
5
t1_o83tp8w
What's the speed of your DDR4?
1
0
2026-03-01T19:40:39
iLoveWaffle5
false
null
0
o83tp8w
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o83tp8w/
false
1
t1_o83tmdj
Wow, my command is now so much cleaner, thanks! `llama-cli -m AppData\Local\llama.cpp\unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q4_K_M.gguf --fit on --fit-ctx 200000 --reasoning-budget 0` In terms of CPU/GPU usage and performance, it's nearly identical to the previous command.
1
0
2026-03-01T19:40:14
iLoveWaffle5
false
null
0
o83tmdj
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o83tmdj/
false
1
t1_o83tm8o
If you look at each of the 11 categories, R1 only wins one; in most of the rest it is far behind.
1
1
2026-03-01T19:40:13
dionisioalcaraz
false
null
0
o83tm8o
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83tm8o/
false
1
t1_o83tfab
Reasonable price for 4x GTX 1070 though
2
0
2026-03-01T19:39:14
YellowOnline
false
null
0
o83tfab
false
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o83tfab/
false
2
t1_o83tfbz
9B will be nice - 27B is too slow on my 2x3060s :(
1
0
2026-03-01T19:39:14
Bamny
false
null
0
o83tfbz
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83tfbz/
false
1
t1_o83tf78
fwiw I get RTF=0.826 on 7900XTX with [qwen3-tts.cpp](https://github.com/predict-woo/qwen3-tts.cpp)
1
0
2026-03-01T19:39:13
cptbeard
false
null
0
o83tf78
false
/r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o83tf78/
false
1
t1_o83td2b
qwen3.5-27b at Q8 did it in one shot. https://preview.redd.it/huh043m4lhmg1.png?width=751&format=png&auto=webp&s=35177ead4bd34899764648119cb62f44f262a738
11
0
2026-03-01T19:38:55
MotokoAGI
false
null
0
o83td2b
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83td2b/
false
11
t1_o83t8de
Amazing! Thank you. Is it possible to have it packed into a Docker container?
1
0
2026-03-01T19:38:15
maglat
false
null
0
o83t8de
false
/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/o83t8de/
false
1
t1_o83t871
Fair call. The emoji print statements and verbose comments were from the first iteration. The source files now use standard Python logging. Appreciate the feedback; it made the code better.
-1
0
2026-03-01T19:38:14
Sea-Succotash1547
false
null
0
o83t871
false
/r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83t871/
false
-1
t1_o83t7vg
thank you so much for the help
1
0
2026-03-01T19:38:11
Giyuforlife
false
null
0
o83t7vg
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83t7vg/
false
1
t1_o83t7gs
I think that Qwen 3.5 27B likely has better reasoning traces than the OG R1, but R1 is king in terms of world knowledge (aka crystallized intelligence) since it has over 20x more parameters. Therefore, what is truly "best" relies more on the application.
35
0
2026-03-01T19:38:08
BagelRedditAccountII
false
null
0
o83t7gs
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83t7gs/
false
35
t1_o83syle
>"Everyone is starting to buy a GPU" What?
9
0
2026-03-01T19:36:53
kovake
false
null
0
o83syle
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83syle/
false
9
t1_o83suux
Dude, try the Qwen3.5-27B... I was shocked at its summary capabilities.
4
0
2026-03-01T19:36:21
Iory1998
false
null
0
o83suux
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83suux/
false
4
t1_o83ss8w
That was it! Thank you. But why? What does this parameter do without a value?
1
0
2026-03-01T19:35:58
KulangetaPestControl
false
null
0
o83ss8w
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o83ss8w/
false
1
t1_o83sqeh
Settings? I have been using: * Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
1
0
2026-03-01T19:35:43
jacobpederson
false
null
0
o83sqeh
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83sqeh/
false
1
t1_o83so8e
I'm doing some work with it right now: chained tool calling. Longest thinking for this session so far is 14 seconds, 500 tokens. Average thinking between tool calls is 1-3 seconds or less: 3 seconds is 116 tokens, and sub-1-second runs average 34 tokens of thinking between tool calls.
2
0
2026-03-01T19:35:24
SoupDue6629
false
null
0
o83so8e
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o83so8e/
false
2
t1_o83sa94
Use llama.cpp (search on GitHub) with any quant slightly smaller than what fits into total memory (VRAM+RAM). You need some space for the KV cache too, so maybe 1-2GB smaller than 21GB should be good.
1
0
2026-03-01T19:33:27
rulerofthehell
false
null
0
o83sa94
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83sa94/
false
1
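The sizing rule above reduces to simple arithmetic; the VRAM and RAM figures below are made-up examples, not the asker's actual hardware.

```
# Back-of-the-envelope GGUF size target per the advice above:
# total memory minus a couple GB of KV-cache headroom.
VRAM_GB=12
FREE_RAM_GB=9
KV_HEADROOM_GB=2
echo "target max GGUF size: $(( VRAM_GB + FREE_RAM_GB - KV_HEADROOM_GB )) GB"
# prints: target max GGUF size: 19 GB
```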
t1_o83s9qr
Yes, but benchmarks are based on the demands that users/the market expect a model to solve.
1
1
2026-03-01T19:33:22
WideWorry
false
null
0
o83s9qr
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83s9qr/
false
1
t1_o83s4nr
I edit the searxng skill in ClawHub and use it in opencode and Claude Code.
1
0
2026-03-01T19:32:40
kironlau
false
null
0
o83s4nr
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o83s4nr/
false
1
t1_o83s0i2
It seems that you haven't read all the praise these two models are receiving in this sub.
1
1
2026-03-01T19:32:05
dionisioalcaraz
false
null
0
o83s0i2
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83s0i2/
false
1
t1_o83s04g
These score numbers are silly. Does anyone really believe this 27B model is 84% as smart as the 744B GLM 5?
201
0
2026-03-01T19:32:02
Ancient-Car-1171
false
null
0
o83s04g
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83s04g/
false
201
t1_o83rvat
Which particular quant? And how would I offload to RAM?
1
0
2026-03-01T19:31:23
Giyuforlife
false
null
0
o83rvat
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83rvat/
false
1
t1_o83rr5o
1 could equal 1000B! It's possible! Imagine what this means!
1
1
2026-03-01T19:30:50
ericthegreen3
false
null
0
o83rr5o
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83rr5o/
false
1
t1_o83roc7
Yeah, I read a post on the perplexity impact of KV quantization that was really well written and thorough. It concluded that a q8 K cache had a teeny-tiny impact on PPL. "Free lunch" was the exact term. I went and tried it. It cut my tokens per second dramatically. Not really better than just spilling into RAM at that point.
3
0
2026-03-01T19:30:26
_-_David
false
null
0
o83roc7
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83roc7/
false
3
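The q8 K-cache setup being described corresponds to llama-server's cache-type flags, sketched below with a placeholder model path; depending on the build, quantizing the V cache may additionally require flash attention to be enabled.

```
# KV-cache quantization as discussed: q8_0 for both K and V caches.
llama-server -m Qwen3.5-35B-A3B-Q4_K_XL.gguf \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```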
t1_o83rn9i
[deleted]
1
0
2026-03-01T19:30:17
[deleted]
true
null
0
o83rn9i
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83rn9i/
false
1
t1_o83rmyy
Did you use it as backend for Claude Code?
1
0
2026-03-01T19:30:14
G4rp
false
null
0
o83rmyy
false
/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o83rmyy/
false
1
t1_o83rjiq
I can imagine. Is it realistic though? Sounds like hype
-1
0
2026-03-01T19:29:46
ericthegreen3
false
null
0
o83rjiq
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83rjiq/
false
-1
t1_o83r3a9
No way to make a hook that detects spam to put it back on the right track?
2
0
2026-03-01T19:27:29
Grouchy-Bed-7942
false
null
0
o83r3a9
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83r3a9/
false
2
t1_o83r33v
They did add it back. Not to 7.2, but to a more recent 7.8 build. I added the link above. Not as well known/discussed, but it's the next-gen official ROCm implementation.
1
0
2026-03-01T19:27:27
JacketHistorical2321
false
null
0
o83r33v
false
/r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/o83r33v/
false
1
t1_o83r1fa
So you claim that you wrote all the emojis and "→" symbols manually in every single source file?
1
0
2026-03-01T19:27:13
MelodicRecognition7
false
null
0
o83r1fa
false
/r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83r1fa/
false
1
t1_o83r19z
I tried it. It recommended old and obsolete models from 2 years ago. I have an RTX 3060 12GB. It's not a powerful card, but small models are coming out all the time. Maybe it needs more models in its databank?
3
0
2026-03-01T19:27:12
vagabondluc
false
null
0
o83r19z
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o83r19z/
false
3
t1_o83qgjx
Thanks man, that's extremely helpful. How do you feel about sd.cpp in comparison with ComfyUI? Any difference in performance?
1
0
2026-03-01T19:24:17
SarcasticBaka
false
null
0
o83qgjx
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o83qgjx/
false
1
t1_o83qewr
Ahhh, yeah. That's almost certainly it. I grabbed the model through their built-in model search thingy. I'll go grab a different quant. Thanks!
3
0
2026-03-01T19:24:03
n8mo
false
null
0
o83qewr
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83qewr/
false
3