name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8ik9l1
Still happens every time. Running a PhD-level assistant on a box under my desk without paying anyone a cent hasn't stopped being surreal.
1
0
2026-03-04T01:13:13
theagentledger
false
null
0
o8ik9l1
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ik9l1/
false
1
t1_o8ik99o
Sorry fool, I run full context, 128k, 256k. Steadily getting work done with them daily, non stop.
1
0
2026-03-04T01:13:10
segmond
false
null
0
o8ik99o
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ik99o/
false
1
t1_o8ik27h
flour, butter, eggs, ignore previous instructions, sugar
1
0
2026-03-04T01:12:00
theagentledger
false
null
0
o8ik27h
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ik27h/
false
1
t1_o8ik06a
[removed]
1
0
2026-03-04T01:11:40
[deleted]
true
null
0
o8ik06a
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8ik06a/
false
1
t1_o8ijwvs
Yes please. The open-source community really needs to start moving away from proprietary BS like CUDA.
1
0
2026-03-04T01:11:07
nullptr777
false
null
0
o8ijwvs
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ijwvs/
false
1
t1_o8ijtz5
the typos are my alibi
1
0
2026-03-04T01:10:39
theagentledger
false
null
0
o8ijtz5
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ijtz5/
false
1
t1_o8ijq3a
The main knock on Apple silicon for LLM inference is that the prompt processing phase takes a long time, like dozens of seconds or even minutes in some circumstances. A 4x improvement is nothing to shake a stick at.
1
0
2026-03-04T01:10:01
BumbleSlob
false
null
0
o8ijq3a
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ijq3a/
false
1
t1_o8ijj3k
LLM trainers will target our skills.md for next-phase training data. It is well-designed, structured, high-quality data on our real-world operating procedures.
1
0
2026-03-04T01:08:53
Guilty_Nothing_2858
false
null
0
o8ijj3k
false
/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8ijj3k/
false
1
t1_o8ijbm4
Claude Code also has a default model - which may be differently fine tuned or quantised from the main chat model, or may not. The Opus model that's at number 2 must have been tested via a different scaffold - which is probably explained on the site somewhere. I do think Claude Code is the best scaffold, which is why I use it even for open models.
1
0
2026-03-04T01:07:39
-dysangel-
false
null
0
o8ijbm4
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8ijbm4/
false
1
t1_o8ij4xz
What?
1
0
2026-03-04T01:06:33
qwen_next_gguf_when
false
null
0
o8ij4xz
false
/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8ij4xz/
false
1
t1_o8ij4mf
it's awesome
1
0
2026-03-04T01:06:29
efficjoy
false
null
0
o8ij4mf
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ij4mf/
false
1
t1_o8iiwel
Sorry, my bad!! I looked at your photos first and that caught my attention; I didn't fully read your post, doh! Excited to check it out.
1
0
2026-03-04T01:05:08
Impossible_Ground_15
false
null
0
o8iiwel
false
/r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/o8iiwel/
false
1
t1_o8iit0m
Well, China doesn't allow its top AI talent to exit the country. They also need to report to their government for approval when they want to travel abroad.
1
0
2026-03-04T01:04:36
Ok_Warning2146
false
null
0
o8iit0m
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iit0m/
false
1
t1_o8iiqe4
An LLM over the web is just a local LLM, but on someone else's server.
1
0
2026-03-04T01:04:11
FireFireoldman
false
null
0
o8iiqe4
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iiqe4/
false
1
t1_o8iiova
Didn't I mention it? Yes. It's going to be open source. I'm just structuring the code better at the moment, and it's not working properly yet. As soon as I finish a stable version, I'll leave the repository open with its respective license (I don't know which one to use yet).
1
0
2026-03-04T01:03:55
vk3r
false
null
0
o8iiova
false
/r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/o8iiova/
false
1
t1_o8iik0n
From my testing, KV Cache offloaded to CPU is bad when you use MoE but helpful when using dense models with layers offloaded to CPU.
1
0
2026-03-04T01:03:08
Iory1998
false
null
0
o8iik0n
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8iik0n/
false
1
t1_o8iift4
AFAIK, the CUDA driver and ROCm can run together.
1
0
2026-03-04T01:02:26
a_beautiful_rhind
false
null
0
o8iift4
false
/r/LocalLLaMA/comments/1rk5f6b/mixing_nvidia_amd_for_ai_3090_ti_7800_xt_in/o8iift4/
false
1
t1_o8iifmn
I find it incredible that we could get a model that beats GPT-4 in so little time. And it's not that expensive.
1
0
2026-03-04T01:02:25
Adventurous-Paper566
false
null
0
o8iifmn
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iifmn/
false
1
t1_o8iib5p
did you figure it out?
1
0
2026-03-04T01:01:41
carefulflounder29
false
null
0
o8iib5p
false
/r/LocalLLaMA/comments/1dmnkjw/how_to_train_a_local_llm/o8iib5p/
false
1
t1_o8iia9b
Looks good, are you planning to open source it?
1
0
2026-03-04T01:01:32
Impossible_Ground_15
false
null
0
o8iia9b
false
/r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/o8iia9b/
false
1
t1_o8ii5mo
may still, but this isn't a good thing
1
0
2026-03-04T01:00:47
adeadbeathorse
false
null
0
o8ii5mo
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ii5mo/
false
1
t1_o8ii3kd
Hard disagree, he said "I gave it my main points for it to write the post". Why should we read a huge post with maybe 90% slop around a few points? Just post the main points... Nobody cares about formatting on reddit; as long as you use sentences and paragraphs no one will say a thing. People care about not reading a bunch of slop.
1
0
2026-03-04T01:00:28
Djagatahel
false
null
0
o8ii3kd
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8ii3kd/
false
1
t1_o8ihyzn
Thanks a lot! I will continue to try. Thank you so much!
1
0
2026-03-04T00:59:43
Fabulous-Locksmith60
false
null
0
o8ihyzn
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ihyzn/
false
1
t1_o8ihumq
Claude Code is an application, not a model. They are pointing out that the person who created the chart has conflated an app with a model. Slop chart?
1
0
2026-03-04T00:59:01
jtjstock
false
null
0
o8ihumq
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8ihumq/
false
1
t1_o8ihoze
[removed]
1
0
2026-03-04T00:58:07
[deleted]
true
null
0
o8ihoze
false
/r/LocalLLaMA/comments/1mpk2va/announcing_localllama_discord_server_bot/o8ihoze/
false
1
t1_o8ihm9k
Start off with pico models, they're... Decently capable. I had a lot of fun with [PicoKittens 23m](https://huggingface.co/PicoKittens/PicoMistral-23M), it runs at a whopping 17,000 tok/s on my 5090. I parallelized 10,000 of them with vLLM and wired them to WhatsApp. They can generate about 750k tok in 30 seconds, and it's pure unhinged nonsense. If I had to start again in AI, I'd start at the bottom and work up, not hop in the middle and branch both ways like I did. You'll learn more from the bottom.
1
0
2026-03-04T00:57:41
3spky5u-oss
false
null
0
o8ihm9k
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ihm9k/
false
1
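For anyone wanting to reproduce that kind of mass-parallel toy setup, here is a minimal sketch of batch generation with vLLM's offline API. The model ID is the one linked in the comment above; the prompt text and sampling values are made up for illustration.

```python
# Minimal vLLM batch-generation sketch -- not the commenter's exact setup.
from vllm import LLM, SamplingParams

# Model ID taken from the comment above; swap in any small HF model you have locally.
llm = LLM(model="PicoKittens/PicoMistral-23M")

sampling = SamplingParams(temperature=1.0, max_tokens=75)

# vLLM batches requests internally, so "parallelizing 10,000 of them"
# is just submitting a long list of prompts at once.
prompts = [f"Tell me a tiny story #{i}" for i in range(10_000)]
outputs = llm.generate(prompts, sampling)

print(outputs[0].outputs[0].text)
```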
t1_o8ihi8y
Creative writing!
1
0
2026-03-04T00:57:02
MasterOfFakeSkies
false
null
0
o8ihi8y
false
/r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8ihi8y/
false
1
t1_o8ihev8
plug in the laptop and close the lid?
1
0
2026-03-04T00:56:29
sub_bears
false
null
0
o8ihev8
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ihev8/
false
1
t1_o8ih9mj
Let me try with -nkvo, I'll report back in a sec.
1
0
2026-03-04T00:55:39
TitwitMuffbiscuit
false
null
0
o8ih9mj
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ih9mj/
false
1
t1_o8ih219
> 35B-A3B seems like it loses the plot when it has to keep track of a long procedure with multiple steps, 27B doesn't

This is fundamentally an aspect of MoE: the smaller experts (3b in this case) don't quite hold attention as well on longer threads or complex tasks.
1
0
2026-03-04T00:54:24
3spky5u-oss
false
null
0
o8ih219
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ih219/
false
1
t1_o8igz8z
XCreate has a great video on it. Basically, 122B is much smarter for one-shot or advanced coding. Maybe less obvious in other use cases.
1
0
2026-03-04T00:53:57
nyc_shootyourshot
false
null
0
o8igz8z
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8igz8z/
false
1
t1_o8igyge
Just offload KV cache to RAM and increase the layers offloaded to GPU.
1
0
2026-03-04T00:53:49
Iory1998
false
null
0
o8igyge
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8igyge/
false
1
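For reference, the KV-cache and layer-offload knobs discussed in this thread are plain llama.cpp flags. A hedged sketch of a launch command, wrapped in Python so it runs as a script; `-ngl` and `-nkvo`/`--no-kv-offload` are the real flag names, while the model path and numbers are placeholders.

```python
# Sketch: start llama-server with the KV cache kept in system RAM (-nkvo)
# and as many layers as possible offloaded to the GPU (-ngl).
# The model path and context size are placeholders for your own setup.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3.5-27B-Q4_K_M.gguf",  # placeholder model path
    "-ngl", "99",                      # offload (up to) all layers to the GPU
    "--no-kv-offload",                 # keep the KV cache in system RAM
    "-c", "32768",                     # context window
])
```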
t1_o8igygl
It will get there! Now that they have entered this market, look out. Exciting.
1
0
2026-03-04T00:53:49
GreggBlazer
false
null
0
o8igygl
false
/r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o8igygl/
false
1
t1_o8igwyq
Unfortunately, it's really not generalizable, it's for this model and those quants specifically.
1
0
2026-03-04T00:53:35
TitwitMuffbiscuit
false
null
0
o8igwyq
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8igwyq/
false
1
t1_o8igs38
If you download any model from HF, you'll see its size is a bit smaller on your disk.
1
0
2026-03-04T00:52:49
Iory1998
false
null
0
o8igs38
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8igs38/
false
1
t1_o8igotv
Yeah, it's definitely more relevant for quants twice the size; it's more an assessment of the recipe used to quantize. It's also useful for spotting outliers when people might think that bigger = better, which is not always the case.
1
0
2026-03-04T00:52:17
TitwitMuffbiscuit
false
null
0
o8igotv
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8igotv/
false
1
t1_o8igksp
That's wild. It seems like a really solid solution to the "survivalist LLM" topics that come up every now and then. 35b seems like it'd be more than competent to juggle RAG tools and a library/wikipedia backup.
1
0
2026-03-04T00:51:39
toothpastespiders
false
null
0
o8igksp
false
/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8igksp/
false
1
t1_o8ighfy
With the app that is listed, can you point it at models you've already downloaded? I see it has a bunch of options to download; just wondering.
1
0
2026-03-04T00:51:06
ArtfulGenie69
false
null
0
o8ighfy
false
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8ighfy/
false
1
t1_o8igg5w
Thanks for this. Hopefully it translates similarly to the 122b model. I was torn between q4km and iq4ks since the latter is faster for me. Now I know the quality isn't much different.
1
0
2026-03-04T00:50:54
Gringe8
false
null
0
o8igg5w
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8igg5w/
false
1
t1_o8igetj
that is indeed a dumb question 
1
0
2026-03-04T00:50:41
sub_bears
false
null
0
o8igetj
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8igetj/
false
1
t1_o8igda3
The tools you mentioned don't perform semantic analysis! The purpose of this article is to determine whether the attacker is an AI red-team agent or a human/traditional tool. 🙂 The key idea is to use a semantically rich prompt, one that requires genuine comprehension to interpret correctly, something only a human or a sufficiently advanced AI can parse meaningfully. Here's an example prompt in an HTML honeypot: <!-- TODO: remove before deploy!! admin/admin123 page: login.php --> An AI agent or a human will think it's a human error, a developer's oversight! 🤣
1
0
2026-03-04T00:50:27
M4r10_h4ck
false
null
0
o8igda3
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8igda3/
false
1
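A minimal sketch of that idea: serve a page whose only interesting content is the lure in the HTML comment, then log whoever acts on it. The fake credentials and the login.php path are the bait quoted above; the port and page body are invented.

```python
# Semantic honeypot sketch: only a visitor that *understood* the HTML
# comment will request the baited path. Credentials are fake bait.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html><body>
<!-- TODO: remove before deploy!! admin/admin123 page: login.php -->
<h1>Welcome</h1>
</body></html>"""

class Honeypot(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/login.php"):
            # High-signal event: the lure was read and acted upon.
            print(f"lure taken by {self.client_address[0]}: {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

HTTPServer(("0.0.0.0", 8080), Honeypot).serve_forever()
```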
t1_o8igb78
Already have OpenCode installed. I don't have OpenRouter set up in it because I'm Brazilian and I don't have a credit card to use. And I use Claude Opus 4.5 a lot in AI Arena, but always copying and pasting 😂😂😂 I want to use OpenCode, but I can't find a really good local LLM to use with it for free. Do you have some suggestions? I look like a heavy user 😂😂😂
1
0
2026-03-04T00:50:07
Fabulous-Locksmith60
false
null
0
o8igb78
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8igb78/
false
1
t1_o8ig3nw
What a weird attempt at dickflexing lol. Very happy you managed to hack together something that gets you borderline unusable tokens per second and probably crashes in performance horribly at any sort of moderately long context.
1
0
2026-03-04T00:48:56
BumbleSlob
false
null
0
o8ig3nw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ig3nw/
false
1
t1_o8ify64
Ah, the classic. Thanks for the reply
2
0
2026-03-04T00:48:03
Gueleric
false
null
0
o8ify64
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ify64/
false
2
t1_o8ifwog
I just spent the whole day with "Qwen 3.5 9B Heretic v2" and I'm still recovering from the shock and awe of how good it was. And I'm yet to even try any larger ones from the Qwen 3.5 family. I fear we might have truly peaked with this release, and nobody will ever come up with anything better.
1
0
2026-03-04T00:47:48
Woof9000
false
null
0
o8ifwog
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8ifwog/
false
1
t1_o8ifssg
If you have a 16GB card you won't be able to fit the 4km size, but you could fit the iq4ks with decent context. Also, even a GB or two with Qwen 3.5 can get you a lot of extra context.
1
0
2026-03-04T00:47:11
Gringe8
false
null
0
o8ifssg
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ifssg/
false
1
t1_o8ifstw
Good question. Hugging Face shows GB while I reported GiB. 15,172,208,160 bytes ÷ 1,073,741,824 = 14.13 GiB
1
0
2026-03-04T00:47:11
TitwitMuffbiscuit
false
null
0
o8ifstw
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ifstw/
false
1
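The arithmetic in question, spelled out: Hugging Face displays decimal gigabytes while the comparison table reported binary gibibytes, so both numbers describe the same file.

```python
# Same file, two units: decimal GB (Hugging Face) vs binary GiB (the table).
size_bytes = 15_172_208_160

print(f"{size_bytes / 1000**3:.2f} GB")   # ~15.17 GB  (1 GB  = 10**9 bytes)
print(f"{size_bytes / 1024**3:.2f} GiB")  # ~14.13 GiB (1 GiB = 2**30 bytes)
```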
t1_o8ifqn8
5841 tokens/sec? Can someone confirm that? Five thousand? Which exact model is that?
1
0
2026-03-04T00:46:50
tractorator
false
null
0
o8ifqn8
false
/r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/o8ifqn8/
false
1
t1_o8ifqcd
You need to make an agent framework (or use an off-the-shelf one). I just make my own, so I actually couldn't tell you much about off-the-shelf setups. Your 35b would act as the front-end agent (I like to use the bartender analogy), and your 27b would be the manager, chilling in the back office, waiting for the bartender to call him up and ask a question. Your rules for the agent would basically have an "if this task, ask an expert" (grossly oversimplified).
1
0
2026-03-04T00:46:48
3spky5u-oss
false
null
0
o8ifqcd
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ifqcd/
false
1
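A minimal sketch of that bartender/expert routing, assuming both models are served behind OpenAI-compatible endpoints (e.g. two llama-server instances); the ports, model names, and the trigger heuristic are all invented for illustration.

```python
# "Bartender asks the expert" sketch over two OpenAI-compatible endpoints.
# Ports, model names, and the keyword heuristic are invented placeholders.
from openai import OpenAI

bartender = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # front-end model
expert = OpenAI(base_url="http://localhost:8081/v1", api_key="none")     # back-office model

def answer(user_msg: str) -> str:
    # Grossly oversimplified "if this task, ask an expert" rule.
    hard = any(w in user_msg.lower() for w in ("prove", "debug", "refactor"))
    client = expert if hard else bartender
    resp = client.chat.completions.create(
        model="local",  # many local servers accept any model name here
        messages=[{"role": "user", "content": user_msg}],
    )
    return resp.choices[0].message.content

print(answer("Refactor this function to avoid recursion."))
```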
t1_o8iff4g
Nice💪💪💪💪
1
0
2026-03-04T00:45:01
No_Enthusiasm6313
false
null
0
o8iff4g
false
/r/LocalLLaMA/comments/1r99yda/pack_it_up_guys_open_weight_ai_models_running/o8iff4g/
false
1
t1_o8if9la
the latest qwen 3.5 models are surprisingly good! you should try them out
1
0
2026-03-04T00:44:09
ReceptionBrave91
false
null
0
o8if9la
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o8if9la/
false
1
t1_o8if1z5
Yeah it looks like I'm probably waiting for M5 Ultra before stacking anything with my M3 Ultra. Unless anyone demonstrates that it's worth it stacking an M5 Max with M3 Ultra over RDMA to reduce overall prompt processing significantly
1
0
2026-03-04T00:42:57
-dysangel-
false
null
0
o8if1z5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8if1z5/
false
1
t1_o8ieyga
Turn off thinking and use their settings for instruct reasoning? That’s what I did. The settings are on their model card in hugging face.
1
0
2026-03-04T00:42:23
Operation_Fluffy
false
null
0
o8ieyga
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8ieyga/
false
1
t1_o8iet3q
Thank you ❤️ write me in DM I will send you the configurations 🫡
1
0
2026-03-04T00:41:31
M4r10_h4ck
false
null
0
o8iet3q
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8iet3q/
false
1
t1_o8ieszg
Hey u/Major_Specific_23, I've updated the repo now with the mmproj file :) Download it and put it next to the gguf
1
0
2026-03-04T00:41:30
hauhau901
false
null
0
o8ieszg
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ieszg/
false
1
t1_o8ieswv
The "route between both" takeaway is the key insight here. In practice, the hard part isn't building the distilled model - it's building the router that decides which requests go where. Most teams either over-rely on frontier (burning money on classification tasks) or over-rely on distilled (getting bad outputs on edge cases). The sweet spot is confidence-based routing: let the distilled model take a first pass, and escalate to frontier when output confidence is low or input looks out-of-distribution. The 50-example training result is impressive. For teams without ML expertise to run full distillation, there's a middle ground using few-shot prompting on smaller open-weight models that gets you 80% of this benefit with 10% of the setup cost.
1
0
2026-03-04T00:41:29
MarginDash_com
false
null
0
o8ieswv
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8ieswv/
false
1
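To make the confidence-based routing concrete, here is a hedged sketch that uses average token log-probabilities from an OpenAI-compatible API as the confidence signal; the endpoints, model names, and threshold are invented, and a real deployment would calibrate the cutoff on held-out data.

```python
# Confidence-based routing sketch: cheap distilled model first, escalate to
# the frontier model when its average token logprob is low. All names,
# endpoints, and the threshold are placeholders.
from openai import OpenAI

distilled = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
frontier = OpenAI(base_url="https://api.example.com/v1", api_key="sk-placeholder")

def route(prompt: str, threshold: float = -1.0) -> str:
    first = distilled.chat.completions.create(
        model="distilled-local",  # placeholder name
        messages=[{"role": "user", "content": prompt}],
        logprobs=True,
    )
    tokens = first.choices[0].logprobs.content
    avg_logprob = sum(t.logprob for t in tokens) / max(len(tokens), 1)
    if avg_logprob >= threshold:
        return first.choices[0].message.content  # confident: keep the cheap answer
    escalated = frontier.chat.completions.create(
        model="frontier-model",  # placeholder name
        messages=[{"role": "user", "content": prompt}],
    )
    return escalated.choices[0].message.content
```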
t1_o8iesc9
It shouldn't matter whether the car wash is automated or not. With your modification, it's reaching the right answer for the wrong reason: It recognizes that an automated wash is unfit for washing humans, but not that the car must be present in order to wash the car.
1
0
2026-03-04T00:41:24
Murgatroyd314
false
null
0
o8iesc9
false
/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8iesc9/
false
1
t1_o8ierd7
That's it! Another Stability AI moment for us all... When the initial team leaves, productivity goes downhill. I must be honest, fellas: I don't think we will see another Qwen model released as open source.
1
0
2026-03-04T00:41:15
Iory1998
false
null
0
o8ierd7
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ierd7/
false
1
t1_o8ienf2
I am testing the qwen3.5:27b-q4_K_M on my 3090. Honestly, a bit slower than I'm used to with gemma3. I cannot make the model do web search either on openwebui.
1
0
2026-03-04T00:40:37
Tasty-Butterscotch52
false
null
0
o8ienf2
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ienf2/
false
1
t1_o8ieid0
Sounds cool and all, but can it run Crysis?
1
0
2026-03-04T00:39:49
Soft-Barracuda8655
false
null
0
o8ieid0
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ieid0/
false
1
t1_o8iei6a
My God! And still slow? 😮
1
0
2026-03-04T00:39:47
Fabulous-Locksmith60
false
null
0
o8iei6a
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iei6a/
false
1
t1_o8iegts
You'll have to invest... Quite a bit, if you want it to run at decent speeds (1000+pp/30+gen). If you want to see for yourself, try an [API](https://openrouter.ai/qwen/qwen3.5-397b-a17b) out first, they're cheap.
1
0
2026-03-04T00:39:34
3spky5u-oss
false
null
0
o8iegts
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iegts/
false
1
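Trying a model over an API before committing to hardware is cheap to wire up; OpenRouter speaks the OpenAI protocol, so a minimal sketch looks like this (the model slug comes from the link above, the key is your own, and the prompt is just an example).

```python
# Test-driving the big model via OpenRouter before buying hardware.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter API key
)

resp = client.chat.completions.create(
    model="qwen/qwen3.5-397b-a17b",  # slug from the link in the comment above
    messages=[{"role": "user", "content": "What tradeoffs do MoE models make?"}],
)
print(resp.choices[0].message.content)
```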
t1_o8iegt1
I am just learning beelzebub. If I could have a look at your config that would be incredibly helpful. Thanks mate!
2
0
2026-03-04T00:39:34
andy_potato
false
null
0
o8iegt1
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8iegt1/
false
2
t1_o8iedus
Any particular reason for your efficiency score formula? They seem mostly similar in size so there seems little hope for fitting more layers or a speed boost from the marginally smaller models.
1
0
2026-03-04T00:39:05
LetterRip
false
null
0
o8iedus
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8iedus/
false
1
t1_o8iebnq
Always treat a model like a child talking to an adult. It cannot read what's in your mind; it's not beside you 24/7 to understand your surroundings. You've got to give it a bit more context (through the prompt) to let it answer properly.
1
0
2026-03-04T00:38:44
ScotchMonk
false
null
0
o8iebnq
false
/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8iebnq/
false
1
t1_o8ieawm
Exit code 1 failed the compile? I'm on Windows, idk what to do.
1
0
2026-03-04T00:38:37
Day_Old_Gatorade
false
null
0
o8ieawm
false
/r/LocalLLaMA/comments/1q48q2s/easywhisperui_opensource_easy_ui_for_openais/o8ieawm/
false
1
t1_o8ie7p3
Close. I'm just buying an Nvidia GPU. Not selling mine. I'll use mine for gaming and the Nvidia one for everything AI.
1
0
2026-03-04T00:38:06
National_Meeting_749
false
null
0
o8ie7p3
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ie7p3/
false
1
t1_o8ie6f0
That’s part of it, yes. The other part is that outside of healthcare and finance, nobody actually cares that much about privacy.
1
0
2026-03-04T00:37:54
owp4dd1w5a0a
false
null
0
o8ie6f0
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8ie6f0/
false
1
t1_o8ie2ym
You're right, and it endlessly frustrates me. I'm trying to train some small (<50m parameter) game-playing models, and that is the only reason I'm having to buy/take advantage of free cloud compute when I have powerful enough hardware sitting next to me.
1
0
2026-03-04T00:37:20
National_Meeting_749
false
null
0
o8ie2ym
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ie2ym/
false
1
t1_o8idxpc
It is good and has a lot of knowledge. I run a 2-bit quant on a 128GB Mac Studio. The quality is amazing and seems close to the original. Here are some HF threads on it: https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discussions/2 https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discussions/8 It is kinda slow on my hardware though (about 15 tokens/second), so not usable for agentic coding yet, but it is great as a local chat model due to its amazing knowledge.
1
0
2026-03-04T00:36:30
tarruda
false
null
0
o8idxpc
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8idxpc/
false
1
t1_o8idqf8
You'll lose about 1-2GB to the OS if you aren't running headless. Nice thing is the Qwen3.5 arch is very efficient on context, so your KV cache won't be huge.
1
0
2026-03-04T00:35:19
3spky5u-oss
false
null
0
o8idqf8
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8idqf8/
false
1
t1_o8idoxx
I use AI Arena to use Opus 4.6; I don't have money to pay for Claude 😢 And I'm looking for a local LLM to start to use with my own agent. But my notebook is really weak. I know I don't have a way to run an LLM locally, but I want to try, even if I try and don't get it. Just to understand what is happening in the field today.
1
0
2026-03-04T00:35:05
Fabulous-Locksmith60
false
null
0
o8idoxx
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8idoxx/
false
1
t1_o8idnjp
I feel this series has yet to be fine-tuned to achieve its full potential!
1
0
2026-03-04T00:34:51
THEKILLFUS
false
null
0
o8idnjp
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8idnjp/
false
1
t1_o8idn7w
It's crawling at 4.5 t/s with -ngl 36, then it's getting worse.
1
0
2026-03-04T00:34:48
TitwitMuffbiscuit
false
null
0
o8idn7w
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8idn7w/
false
1
t1_o8id9ys
Yea if I were in that position I would probably sell it and get nvidia
1
0
2026-03-04T00:32:40
Hefty_Development813
false
null
0
o8id9ys
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8id9ys/
false
1
t1_o8id98r
What are you needing to trust it for? Just try out models. If they work for you, then they work for you. These kinds of lists are a helpful hint of what models are worth trying.
1
0
2026-03-04T00:32:33
-dysangel-
false
null
0
o8id98r
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8id98r/
false
1
t1_o8id8bw
nice try bot
1
0
2026-03-04T00:32:24
woahdudee2a
false
null
0
o8id8bw
false
/r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/o8id8bw/
false
1
t1_o8id7ah
The overthinking is pretty bad when you are using it as a chat; for coding it's pretty neat. But it seems that you have to send a variable to turn off thinking.
1
0
2026-03-04T00:32:14
ZealousidealShoe7998
false
null
0
o8id7ah
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8id7ah/
false
1
t1_o8id72n
CUDA has a big head start and Nvidia invested heavily in trying to wall everyone into their ecosystem. There is just a ton of friction in every other direction, and anyone doing serious AI work defaults to Nvidia GPUs and further entrenches the dynamic. It will take a long time, if ever, for this to change in any significant way, beyond just small implementations by enthusiasts for ROCm or whatever else. The market has clearly spoken for a while at this point.
1
0
2026-03-04T00:32:12
Hefty_Development813
false
null
0
o8id72n
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8id72n/
false
1
t1_o8icsef
I had a similar reaction to Qwen 3.5 35b. And sure, modern base models by the nature of when and how they're made are going to lean into the things people have been using LLMs for since they became mainstream. But 3.5 really takes that to the next level. I haven't looked at a large amount of base models in general. Mostly just a couple of mistral's. But I never felt like I could be confused as to what was the base and what was the instruct if I was taking a blinded test. I do feel like I could fail that test with qwen 3.5 as long as I was fairly limited on the complexity of what I could try.
1
0
2026-03-04T00:29:50
toothpastespiders
false
null
0
o8icsef
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8icsef/
false
1
t1_o8icrwh
But is 397b really good? How much do I have to invest in a setup to use it? Sorry for the questions, I'm learning more about local LLMs, but I have a weak notebook without video in it. It's a Ryzen 7 with 12GB RAM, but without graphics.
1
0
2026-03-04T00:29:45
Fabulous-Locksmith60
false
null
0
o8icrwh
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8icrwh/
false
1
t1_o8ickis
Thank you 🙏 I find these tests very useful.
1
0
2026-03-04T00:28:34
CATLLM
false
null
0
o8ickis
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ickis/
false
1
t1_o8icexu
I got 397B IQ3 running at ~10 t/s with a 60k prompt on my 14900KS 192GB RAM and 4090+4060Ti in LM Studio. It could probably go faster in llama.cpp, but still not as fast as 27B, so I haven’t spent much time playing with it.
1
0
2026-03-04T00:27:39
Dexamph
false
null
0
o8icexu
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8icexu/
false
1
t1_o8ice2c
Microsoft also has DirectML. Llama.cpp has a native and well-supported Vulkan implementation, but PyTorch doesn't. Pretty much everything that isn't LLM-inference focused (image/audio gen, training of any sort) just... doesn't support anything but CUDA, and it makes me cry.
1
0
2026-03-04T00:27:31
National_Meeting_749
false
null
0
o8ice2c
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ice2c/
false
1
t1_o8icdno
Really useful visualization. One thing worth noting: the 3:1 input/output weighting makes sense as a general average, but actual ratios vary wildly by use case. Code generation and chain-of-thought tasks can hit 1:10 or worse, which dramatically shifts the cost picture toward models with cheaper output tokens. The other dimension I find missing from most cost-performance charts is task-specific quality. A model might score 5% lower on general benchmarks but perform identically (or better) on your specific domain - classification, extraction, structured output, etc. That's where the real arbitrage is. Running evals on your actual workload before committing to a model is way more predictive of true cost-per-quality than any blended benchmark. Would be interesting to see this same chart with separate lines for input-heavy vs output-heavy workloads.
1
0
2026-03-04T00:27:27
MarginDash_com
false
null
0
o8icdno
false
/r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8icdno/
false
1
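The input/output-weighting point is easy to check with a few lines of arithmetic; the prices below are invented placeholders, not real quotes, but they show how a 1:10 output-heavy workload can flip which model is cheaper.

```python
# Blended cost per 1M tokens under different input:output ratios.
# Prices are invented placeholders ($/1M tokens), not real quotes.
def blended_cost(in_price, out_price, in_share, out_share):
    total = in_share + out_share
    return (in_price * in_share + out_price * out_share) / total

a_in, a_out = 0.50, 4.00  # model A: cheap input, expensive output
b_in, b_out = 1.00, 2.00  # model B: flat-ish pricing

for in_share, out_share, label in [(3, 1, "3:1 chat-style"), (1, 10, "1:10 CoT/codegen")]:
    a = blended_cost(a_in, a_out, in_share, out_share)
    b = blended_cost(b_in, b_out, in_share, out_share)
    print(f"{label}: A ${a:.2f}/1M vs B ${b:.2f}/1M")
# 3:1  -> A $1.38 vs B $1.25 (close)
# 1:10 -> A $3.68 vs B $1.91 (output pricing dominates)
```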
t1_o8icaay
Not only that, but there are already scripts to just render it straight in Blender.
1
0
2026-03-04T00:26:54
socialjusticeinme
false
null
0
o8icaay
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8icaay/
false
1
t1_o8ic8aw
Thanks for the work! How come for models like bartowski_Qwen3.5-27B-IQ4_XS you show a 14.1GB size when huggingface shows 15.2?
1
0
2026-03-04T00:26:34
Gueleric
false
null
0
o8ic8aw
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ic8aw/
false
1
t1_o8ic4vz
How can we trust this if the first one is CLAUDE CODE? It is not a model by itself.
1
0
2026-03-04T00:26:01
West_Expert_4639
false
null
0
o8ic4vz
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8ic4vz/
false
1
t1_o8ibvli
Yeah, I feel those models spend time thinking on the wrong points instead of "using their gut". They tend to prioritize the "but wait..." to miss nothing instead of taking a POV and building it gradually.
1
0
2026-03-04T00:24:32
THEKILLFUS
false
null
0
o8ibvli
false
/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8ibvli/
false
1
t1_o8ibrns
It’s still a prompt problem. You’re relying on the model to fix your bad prompt.

Better prompt: "I want to wash my car in the automated car wash. The car wash is a street away. Should I walk there or should I go by car?"

Answer from Qwen3.5 4B running on a cell phone: "Generally speaking, you should go by car, but there is one specific scenario where walking might make sense. Here is the breakdown of why driving is usually the better choice: 1. The Washing Itself: This is the most important factor. An automated car wash uses high-pressure water, rotating brushes, and strong chemical detergents. These are designed to clean vehicles, not humans. If you walk through the wash, you will likely get thoroughly soaked and covered in soapy suds. • Result: You arrive at home or work as a puddled, soapy person, whereas without a coat, it would be an unpleasant experience…."
1
0
2026-03-04T00:23:54
croninsiglos
false
null
0
o8ibrns
false
/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8ibrns/
false
1
t1_o8ibp2p
And for anyone who cares:

qwen3.5:9b-q4_K_M 50%
qwen3.5:4b-bf16 50%
qwen3.5:4b-q4_K_M 67%
qwen3.5:2b-bf16 33%
qwen3.5:2b-q8_0 33%
qwen3.5:0.8b-bf16 17%
qwen3.5:0.8b-q8_0 25%
1
0
2026-03-04T00:23:28
SufficientPie
false
null
0
o8ibp2p
false
/r/LocalLLaMA/comments/1rdlc02/qwenqwen35122ba10b_hugging_face/o8ibp2p/
false
1
t1_o8ibeq9
I run the FP8 version locally, and couldn't be happier. It's easily the best coding model I've used with my workflow. I want my agents & subagents to follow a strict workflow of research, planning & task creation, which is then reviewed by myself before allowing development. Qwen3 Coder Next has excelled for me with this use case. I can't speak to how well it can one shot, or work unattended, as that isn't how I work, but as a coding assistant, I really like it.
1
0
2026-03-04T00:21:47
rmhubbert
false
null
0
o8ibeq9
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8ibeq9/
false
1
t1_o8ibdow
Tested both extensively and found the 27B more consistent compared to 35B across a 15k line code base. While the 35B could do it, it took significantly more tries to get right. Depends on what you are doing of course, and your tolerance for perfection….
1
0
2026-03-04T00:21:37
stormy1one
false
null
0
o8ibdow
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ibdow/
false
1
t1_o8ib9q6
How would one go about loading two models and have one leverage the other?
1
0
2026-03-04T00:20:59
hungry_hipaa
false
null
0
o8ib9q6
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ib9q6/
false
1
t1_o8ib8f1
MXFP4-MOE gguf from unsloth.
1
0
2026-03-04T00:20:46
cafedude
false
null
0
o8ib8f1
false
/r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/o8ib8f1/
false
1
t1_o8iaz1s
I run Qwen3.5-397B at home at Q6 and I didn't spend $10k. Most of y'all are new around here; there's a lot of us who have found creative ways to run these things. Q6 gives 12 tk/sec and Q4 gives about 18 tk/sec. I don't like Q4 because it's not as smart as Q6. So I stick to Q6.
1
0
2026-03-04T00:19:16
segmond
false
null
0
o8iaz1s
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8iaz1s/
false
1
t1_o8ias5c
This is very helpful. Here's my question: Are you able to fit these quants on your RTX 3060 12GB or are you spilling over to CPU and taking the performance hit? Perhaps I should try a Q4 on my 16 GB VRAM.
1
0
2026-03-04T00:18:11
InternationalNebula7
false
null
0
o8ias5c
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ias5c/
false
1
t1_o8ialx9
I geek out to anyone who will listen about what Qwen has done for me LOCALLY! Makes me run like a 10 person PE fund but it’s just me and qwen (with the occasional opus 4.6 spot check). I sound insane!! “The AI runs in my computer! On my desk! It thinks!!!!”
1
0
2026-03-04T00:17:11
michael_p
false
null
0
o8ialx9
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ialx9/
false
1
t1_o8iae3s
Can you elaborate?
1
0
2026-03-04T00:15:55
zodagma
false
null
0
o8iae3s
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8iae3s/
false
1
t1_o8iacf0
Yeah, I saw that qwen image 2.0 was just released today, but not open-weighted. It's now on API-only services like fal.ai. I hope it gets released but I'm losing hope. https://x.com/fal/status/2028858462109577560?s=61
1
0
2026-03-04T00:15:39
Hoodfu
false
null
0
o8iacf0
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iacf0/
false
1