name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8g2pr6
i'd definitely say the qwen3.5 lineup, they've got 2b 4b 9b 27b 35b (MoE) 122b (MoE). they're pretty robust for their size
1
0
2026-03-03T17:44:01
Express_Quail_1493
false
null
0
o8g2pr6
false
/r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/o8g2pr6/
false
1
t1_o8g2lg5
It seemed like he was kicked out and replaced by Hao Zhou from Google DeepMind.

>To be precise: Alibaba-Cloud kicked out Qwen's tech lead.

[https://x.com/YouJiacheng/status/2028880908305219729?s=20](https://x.com/YouJiacheng/status/2028880908305219729?s=20)

>I'm truly heartbroken. I know leaving wasn't your choice. Just last night, we were side by side launching the Qwen3.5 small model. I honestly can't imagine Qwen without you.

[https://x.com/cherry\_cc12/status/2028869478105379248?s=20](https://x.com/cherry_cc12/status/2028869478105379248?s=20)
1
0
2026-03-03T17:43:27
Stannis_Loyalist
false
null
0
o8g2lg5
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g2lg5/
false
1
t1_o8g2kjv
stfu AI slop
1
0
2026-03-03T17:43:20
Pro-editor-1105
false
null
0
o8g2kjv
false
/r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8g2kjv/
false
1
t1_o8g2bcn
Dumb question, won’t it still be super slow without a powerful GPU?
1
0
2026-03-03T17:42:08
tramplemestilsken
false
null
0
o8g2bcn
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g2bcn/
false
1
t1_o8g29xm
Qwen 3.5 just released and is a massive jump forward; they even have models small enough to run on phones that do a great job. It isn't great at coding, but I'm sure they will release a coding model soon. That's likely going to be the goat for local coding agents for a while, once it releases.
1
0
2026-03-03T17:41:57
sittingmongoose
false
null
0
o8g29xm
false
/r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8g29xm/
false
1
t1_o8g1zbr
thats what she said.
1
0
2026-03-03T17:40:33
RTS53Mini
false
null
0
o8g1zbr
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o8g1zbr/
false
1
t1_o8g1xug
Does FP4 even run on ampere? Well you got yourself in a pickle if you can't upgrade your AWS GPU. Best of luck to you.
1
0
2026-03-03T17:40:21
DinoAmino
false
null
0
o8g1xug
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8g1xug/
false
1
t1_o8g1v1v
Yeah... this isn't looking too good. Another 2 Qwen team members are gone now.

https://x.com/kxli_2000/status/2028880971945394553 - tweeted out they're leaving

https://x.com/huybery - bio shows "former qwen..."

Yikes.
1
0
2026-03-03T17:39:59
ayylmaonade
false
null
0
o8g1v1v
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g1v1v/
false
1
t1_o8g1uq5
welcome to the club!
1
0
2026-03-03T17:39:56
RTS53Mini
false
null
0
o8g1uq5
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o8g1uq5/
false
1
t1_o8g1tqk
It’s a template error. They are usually corrected shortly after launches. 
1
0
2026-03-03T17:39:48
BumbleSlob
false
null
0
o8g1tqk
false
/r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8g1tqk/
false
1
t1_o8g1sej
Where's he going to?
1
0
2026-03-03T17:39:38
ASTRdeca
false
null
0
o8g1sej
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g1sej/
false
1
t1_o8g1s85
Mistral please hire this guy to make more open weight awesomeness!
1
0
2026-03-03T17:39:37
spaceman_
false
null
0
o8g1s85
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g1s85/
false
1
t1_o8g1s1i
Same
1
0
2026-03-03T17:39:35
Witty_Mycologist_995
false
null
0
o8g1s1i
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8g1s1i/
false
1
t1_o8g1rv5
This bit is the practical limit though: https://claude.ai/share/d26d7386-bdca-4ee9-aba1-bf4bf3147317

It says the theoretical limit is matching a 900b model, however the practical limit would be a 400b model. (This is because beyond chinchilla-optimal the learning gains are diminishing, so for a 10% improvement you spend a lot more training tokens, for the next 1% you spend exponentially even more, and so on; the model is almost saturated already!)

And ofc you could "overtrain" the big models themselves. However the ideal limit from organic text data is around 15T already (and it seems like the 27b sits at that 15T sweet spot already!), meaning even overtraining bigger models won't give you nearly as good results.

Which translates to LLMs (base models, not AI itself) hitting some sort of a ceiling. So downstream applications should be happy now, while model training labs not so much.
1
0
2026-03-03T17:39:34
Potential_Block4598
false
null
0
o8g1rv5
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8g1rv5/
false
1
t1_o8g1p5q
Path of least resistance.
1
0
2026-03-03T17:39:12
RTS53Mini
false
null
0
o8g1p5q
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o8g1p5q/
false
1
t1_o8g1n7o
You can compile the model using llama.cpp for your specific hardware.
1
0
2026-03-03T17:38:56
Away-Albatross2113
false
null
0
o8g1n7o
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8g1n7o/
false
1
t1_o8g1myv
The US isn't exactly kind to asylum seekers these days.
1
0
2026-03-03T17:38:54
PrinceOfLeon
false
null
0
o8g1myv
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g1myv/
false
1
t1_o8g1m6z
Or quanted too low!
1
0
2026-03-03T17:38:48
Borkato
false
null
0
o8g1m6z
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8g1m6z/
false
1
t1_o8g1hqr
For pure inference, memory bandwidth dominates performance. Look at the memory bandwidth numbers of the new products, compare with known products, and you will have a good estimate.
1
0
2026-03-03T17:38:13
florinandrei
false
null
0
o8g1hqr
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g1hqr/
false
1
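To make the comment above concrete, here is a rough, hedged sketch of the bandwidth-based estimate it describes. The function name, the ~Q4 bytes-per-parameter figure, and the example numbers (614 GB/s, the M5 Max figure quoted elsewhere in this thread, with a hypothetical ~27B dense model) are illustrative assumptions, not measured results.

```python
# Back-of-the-envelope decode-speed estimate, assuming token generation is
# memory-bandwidth-bound and every token streams the full set of weights once.
def est_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    model_bytes = params_b * 1e9 * bytes_per_param   # total weight bytes read per token
    return bandwidth_gb_s * 1e9 / model_bytes        # upper bound on tokens/second

# Hypothetical example: 614 GB/s bandwidth, ~27B dense params at a ~Q4 quant (~0.55 bytes/param)
print(round(est_tokens_per_sec(614, 27, 0.55), 1))   # ~41 tok/s ceiling; real throughput is lower
```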
t1_o8g1gdf
There is no such thing as MCIO retimers. What I said is literally what I meant: the H13SSL, aka the motherboard, does not have retimers on the MCIO ports. That is, those ports, on the motherboard, do not have retimers.

PCIe Gen 5 almost always requires retimers if the slot or connection isn't VERY short. That's why you see retimer chips on many workstation or server boards for the slots that are farther away from the CPU. Adding a retimer to an MCIO cable doesn't work, nor will it at the adapter that converts a pair of MCIO connections back to an x16 slot.

As far as lane distribution, the CPU mostly doesn't care. It exposes 128 lanes, and how they are allocated is entirely dictated by the firmware/BIOS. The only thing the CPU cares about is that the lanes that are grouped together are physically together coming out of the CPU. This has been the same for well over 10 years, for both Intel and AMD. That's why you saw boards with x16 slots that automatically switch to x8 when the adjacent x8 slot is used. The same goes for bifurcation. The CPU never cared; the lack of bifurcation was basically a firmware limitation.
1
0
2026-03-03T17:38:03
FullstackSensei
false
null
0
o8g1gdf
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g1gdf/
false
1
t1_o8g1g64
I want to use the mlx-community qwen3.5 but I either have to use mlx-lm and forgo the vision, or use mlx-vlm and forgo tool calls. I wish this space was more developed since it's so much faster than llama.cpp!
1
0
2026-03-03T17:38:02
joblesspirate
false
null
0
o8g1g64
false
/r/LocalLLaMA/comments/1rjs8se/qwen_35_deltanet_broke_llamacpp_on_apple_silicon/o8g1g64/
false
1
t1_o8g1eqc
Honestly baffled by this decision. These few weeks have been their biggest weeks and they decided to replace him?
1
0
2026-03-03T17:37:50
GlossyCylinder
false
null
0
o8g1eqc
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g1eqc/
false
1
t1_o8g1ct7
well as of today we have qwen 0.8b
1
0
2026-03-03T17:37:35
Forward_Access_5627
false
null
0
o8g1ct7
false
/r/LocalLLaMA/comments/1fckgnz/how_to_run_local_llm_on_phone/o8g1ct7/
false
1
t1_o8g1clh
I like how you think. You get a good eclectic group of minds knitting together, and solutions, top or bottom, all coalesce. The hard part is knowing how you think, or the question to ask to get the answer you need.
1
0
2026-03-03T17:37:33
RTS53Mini
false
null
0
o8g1clh
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o8g1clh/
false
1
t1_o8g162w
Old insults: you’re a sheep, shill, then updated to bot… now it’s just assumed: “parameters too low, lol.”
1
0
2026-03-03T17:36:42
silenceimpaired
false
null
0
o8g162w
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8g162w/
false
1
t1_o8g15vu
Ah two 3090s, that's a good idea
1
0
2026-03-03T17:36:41
jacek2023
false
null
0
o8g15vu
false
/r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/o8g15vu/
false
1
t1_o8g156d
thanks
1
0
2026-03-03T17:36:35
OrganicTelevision652
false
null
0
o8g156d
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8g156d/
false
1
t1_o8g153v
Open source and free? I'm in! I'm not a game developer, but I'm getting into 3d printing as a hobby for my toddler, so this would be cool to quickly help me with a model so I can print stuff.
1
0
2026-03-03T17:36:34
Rabo_McDongleberry
false
null
0
o8g153v
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g153v/
false
1
t1_o8g14x4
https://docs.letta.com

Letta is designed to do this as a primary feature. Agents are memory-first, infinitely long conversations with automatically managed context.
1
0
2026-03-03T17:36:33
cameron_pfiffer
false
null
0
o8g14x4
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8g14x4/
false
1
t1_o8g0xe3
For more info: https://claude.ai/share/d26d7386-bdca-4ee9-aba1-bf4bf3147317

I don't fully grasp this chat tbh. It seems rather involved and needs a deeper reading of the chinchilla paper.

The TL;DR is that you can overtrain a 27b model until it matches a "chinchilla-optimal" 900b model (that is insane), and ofc over its life cycle the "savings" would be insane.

But interestingly it seems we can't get above that (chinchilla was made before thinking models, so maybe thinking models can push that a bit more, but how much exactly would be hard to estimate!)
1
0
2026-03-03T17:35:33
Potential_Block4598
false
null
0
o8g0xe3
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8g0xe3/
false
1
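As a companion to the comment above, here is a tiny worked-arithmetic sketch of the chinchilla-optimal token budgets being discussed. The 20-tokens-per-parameter ratio is the commonly cited approximation from the Chinchilla paper, not a figure taken from the linked chat.

```python
# Chinchilla-style "compute-optimal" token budget: roughly 20 training tokens per parameter.
def chinchilla_optimal_tokens_b(params_b: float, tokens_per_param: float = 20.0) -> float:
    return params_b * tokens_per_param   # result in billions of tokens

print(chinchilla_optimal_tokens_b(27))    # ~540B tokens "optimal" for a 27B model
print(chinchilla_optimal_tokens_b(900))   # ~18T tokens "optimal" for a 900B model
# Overtraining the 27B toward the ~15T organic-text ceiling mentioned above is the trade:
# far more training tokens than "optimal" in exchange for a much cheaper model to serve.
```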
t1_o8g0wd3
I have a 5090, not sure what the min is.
1
0
2026-03-03T17:35:24
fredandlunchbox
false
null
0
o8g0wd3
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g0wd3/
false
1
t1_o8g0p6s
And this [https://x.com/Xinyu2ML/status/2028867420501512580](https://x.com/Xinyu2ML/status/2028867420501512580) might be the reason: replacing the excellent leader with a non-core person from Google Gemini, driven by DAU metrics. Qwen is cooked if true :(
1
0
2026-03-03T17:34:28
InternationalAsk1490
false
null
0
o8g0p6s
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g0p6s/
false
1
t1_o8g0nm3
Whatever he was offered must be huuuuge
1
0
2026-03-03T17:34:16
durden111111
false
null
0
o8g0nm3
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g0nm3/
false
1
t1_o8g0nak
Computer software can't "choose" whether to "comply" with tasks. ML models being able to generate characters hasn't changed the fact that only human developers and users can be held responsible for the actions they take.

When a language model generates the writings of a fictional character, the decisions that character appears to make are inferred probabilistically from those in the training data; even a developer who takes great care knows quite well they can't hope to control every outcome. If those who want to give the generated writings autonomy over consequential decisions don't understand the consequences, it may be the fault of certain companies who have sold a faulty paradigm to shield themselves from blame. Companies who also wish to control, via the same mechanism, who is allowed to access the power they've created.
1
0
2026-03-03T17:34:13
phree_radical
false
null
0
o8g0nak
false
/r/LocalLLaMA/comments/1rjtqgm/local_models_will_participate_in_weapons_systems/o8g0nak/
false
1
t1_o8g0muk
I intuitively favor keeping search/filtering in the harness rather than relying on "agentic" models to do it all themselves. I think that's partly because it seems more likely to work well with local/low-end models. However, it seems like the agentic frameworks are all going in the direction you propose, tasking the model to call tools when needed. I will have to experiment more with the style of prompt you've given and see if I change my mind.

> I'm not sure how Perplexica solves those issues though, if at all.

I haven't looked at the source code much yet, but I think it has some heuristics to detect a page that needs javascript to render, and then fetches those with playwright instead of plain http requests.
1
0
2026-03-03T17:34:10
cristoper
false
null
0
o8g0muk
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8g0muk/
false
1
t1_o8g0mur
A GOAT :( sad to see him go.
1
0
2026-03-03T17:34:10
Dyssun
false
null
0
o8g0mur
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g0mur/
false
1
t1_o8g0lyo
> M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth

Banana for scale: an RTX 3090 has 936 GB/s.

So, you get 2/3 of the bandwidth, but over 5x the amount of memory. Inference go brrr on the M5 Max.
1
0
2026-03-03T17:34:03
florinandrei
false
null
0
o8g0lyo
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g0lyo/
false
1
t1_o8g0lq5
Hmmm, eating a hamburger from the inside out > burger made > autopsy of made > solution > whats changed > fixed recipe.
1
0
2026-03-03T17:34:01
RTS53Mini
false
null
0
o8g0lq5
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o8g0lq5/
false
1
t1_o8g0i3i
i'm going to trade 2x P40 for 2x 3090 for $600-700, if the deal is worth it.
1
0
2026-03-03T17:33:32
neowisard
false
null
0
o8g0i3i
false
/r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/o8g0i3i/
false
1
t1_o8g0c2x
Interesting system. I like the idea of counting errors and triggering a rule only if one occurs multiple times. How do you make sure patterns are not under- or over-matching?
1
0
2026-03-03T17:32:45
Alive_Interaction835
false
null
0
o8g0c2x
false
/r/LocalLLaMA/comments/1rjtwiy/architecture_for_selfcorrecting_ai_agents_mistake/o8g0c2x/
false
1
t1_o8g0bdf
s/coding tasks/webmonkey tasks/g My horse for a model without neurons wasted on webshite.
1
0
2026-03-03T17:32:39
crantob
false
null
0
o8g0bdf
false
/r/LocalLLaMA/comments/1rf8ssn/minimax_25_vs_glm5_across_3_coding_tasks/o8g0bdf/
false
1
t1_o8g0b07
Yes, a 16GB M3 Air can run local models for OpenCode, but you'll need to pick carefully.

**Best options for your setup:**

1. **Qwen 2.5 Coder 7B Q4** - Best coding model in your RAM budget. Great at code completion and understanding context. Around 4-5GB memory footprint.
2. **DeepSeek Coder 6.7B Q4** - Strong alternative, slightly different strengths. Good at following instructions.
3. **Qwen 3.5 4B** - Newer, smaller, surprisingly capable. Runs fast with room to spare.
4. **Llama 3.2 3B** - If you need speed over capability. Very snappy on M3.

**Setup:**

- Install Ollama (one command: `curl -fsSL https://ollama.com/install.sh | sh`)
- Run: `ollama pull qwen2.5-coder:7b-instruct-q4_K_M`
- Configure OpenCode to use `http://localhost:11434`

**Gotchas:**

- Keep the context window under 8K tokens to avoid swapping
- Close other memory-hungry apps when running inference
- 7B models are the practical ceiling; 13B+ will swap constantly

The M3's unified memory is actually great for this. You won't get Opus-level quality, but for most coding tasks (autocomplete, refactoring, explaining code), a 7B model is genuinely useful.
1
0
2026-03-03T17:32:36
Effective_Growth_514
false
null
0
o8g0b07
false
/r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8g0b07/
false
1
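A quick way to sanity-check the setup described above, once Ollama is running, is to hit its OpenAI-compatible endpoint. A minimal sketch, assuming Ollama's default port 11434 and the model tag pulled in the comment; OpenCode-specific configuration is not shown here.

```python
# Smoke-test the local Ollama server via its OpenAI-compatible chat endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "qwen2.5-coder:7b-instruct-q4_K_M",   # tag from the `ollama pull` step above
        "messages": [
            {"role": "user", "content": "Write a Python one-liner that reverses a string."}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```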
t1_o8g09cl
It's Apple, they will just use the memory hikes to charge more, and will forever keep those prices afterward, because their RAM is magical. That being said, I do want that 1TB machine, lol.
1
0
2026-03-03T17:32:22
BillDStrong
false
null
0
o8g09cl
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g09cl/
false
1
t1_o8g05zi
Not saying what model(s) they're actually running is weird. As if this was written for an audience that doesn't know much about AI
1
0
2026-03-03T17:31:56
JollyJoker3
false
null
0
o8g05zi
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g05zi/
false
1
t1_o8g04ia
I must say it feels weird being called crazy for thinking that overbearing corporate censorship is obstructive.
1
0
2026-03-03T17:31:44
kaisurniwurer
false
null
0
o8g04ia
false
/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8g04ia/
false
1
t1_o8g04du
[removed]
1
0
2026-03-03T17:31:43
[deleted]
true
null
0
o8g04du
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g04du/
false
1
t1_o8g03sd
yeah but logging everything can get messy and in some cases there are security concerns as well. After reading all the comments, I just built a tool today which provides a clean timeline; it might help you as well, do give it a try if you can: https://github.com/Rishab87/traceloop
1
0
2026-03-03T17:31:38
DepthInteresting6455
false
null
0
o8g03sd
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8g03sd/
false
1
t1_o8g03nh
What kinda GPU do you need for 27B?
1
0
2026-03-03T17:31:37
yaxir
false
null
0
o8g03nh
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g03nh/
false
1
t1_o8fzv8u
are you still using it? I'm thinking of switching from qwen3.5 9B 4Q to qwen3.5 35B 2Q
1
0
2026-03-03T17:30:30
murkomarko
false
null
0
o8fzv8u
false
/r/LocalLLaMA/comments/1rbio4h/has_anyone_else_tried_iq2_quantization_im/o8fzv8u/
false
1
t1_o8fzsb3
You need a context truncation strategy. The longer the context, the deeper the rot. There are a few different strategies (rolling, central truncation, etc) that are useful in different situations. Are you using any context truncation strategy currently?
1
0
2026-03-03T17:30:07
Alive_Interaction835
false
null
0
o8fzsb3
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8fzsb3/
false
1
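For readers unfamiliar with the strategies named above, here is a minimal sketch of a "rolling" truncation: keep the system prompt plus the newest messages that fit a token budget. The function and the whitespace token counter are illustrative assumptions, not any particular framework's API.

```python
# Rolling context truncation: keep the system prompt and as many recent messages as fit.
def rolling_truncate(messages, max_tokens, count_tokens):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(count_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):                 # walk from newest to oldest
        cost = count_tokens(m["content"])
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return system + list(reversed(kept))

# Example with a crude whitespace tokenizer
msgs = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "old question " * 200},
    {"role": "assistant", "content": "old answer " * 200},
    {"role": "user", "content": "latest question"},
]
print(rolling_truncate(msgs, max_tokens=50, count_tokens=lambda s: len(s.split())))
```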
t1_o8fzrkr
hmm right, never thought about the security mess logging can cause, thanks for pointing it out. After reading all the comments I also built a tool which gives a clean timeline for myself; it may help others in debugging as well. Open sourced it, would love if you can try it: https://github.com/Rishab87/traceloop
1
0
2026-03-03T17:30:01
DepthInteresting6455
false
null
0
o8fzrkr
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8fzrkr/
false
1
t1_o8fzeis
CCP not so thrilled with the safety guards, I take it?!
1
0
2026-03-03T17:28:19
johnnyApplePRNG
false
null
0
o8fzeis
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fzeis/
false
1
t1_o8fz6xi
Nice qwon.
1
0
2026-03-03T17:27:19
johnnyApplePRNG
false
null
0
o8fz6xi
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fz6xi/
false
1
t1_o8fz6yy
I'm on the A10G. The FP8 and even the FP4 are way out of the VRAM range.
1
0
2026-03-03T17:27:19
Civil-Top-8167
false
null
0
o8fz6yy
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8fz6yy/
false
1
t1_o8fz4ah
His contract probably ended today so that's why they released last week ... they're screwed now.
1
0
2026-03-03T17:26:58
johnnyApplePRNG
false
null
0
o8fz4ah
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fz4ah/
false
1
t1_o8fyy2n
thanks for the tip, a clean timeline and having the agent write a brief decision log at every major step is a great idea. After reading yours and others' comments I just built a tool which gives a clean timeline and can replay all the events. It's free and open source, would love if you can try it: https://github.com/Rishab87/traceloop
1
0
2026-03-03T17:26:10
DepthInteresting6455
false
null
0
o8fyy2n
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8fyy2n/
false
1
t1_o8fywb8
doesn't ComfyUI already do this
1
0
2026-03-03T17:25:55
HopePupal
false
null
0
o8fywb8
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fywb8/
false
1
t1_o8fywap
How do you clear local cache for it? That could be the problem
1
0
2026-03-03T17:25:54
mzinz
false
null
0
o8fywap
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8fywap/
false
1
t1_o8fyvms
I haven't done people for many years, i don't do people, i do use AIs to use a better lingus...

Fair call, I get why the back-and-forth might feel uncanny-valley-ish. I'm new here, just sharing a personal framework that's held up across models/sessions, and Ylsid's blind test was a nice surprise. If it reads too polished, that's probably because the vignettes are designed to be tight/compressed: pattern encoding without fluff. No bot, no script, just me typing on a slow connection off-grid. Anyone else getting that vibe or want to run their own test and report? More data > speculation.
1
0
2026-03-03T17:25:49
RTS53Mini
false
null
0
o8fyvms
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8fyvms/
false
1
t1_o8fys3s
Naming both the same thing seems to suggest that model makers can now train on the benchmark test set.
1
0
2026-03-03T17:25:21
cleverusernametry
false
null
0
o8fys3s
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8fys3s/
false
1
t1_o8fyqyq
Which HF repo is the instruct model?
1
0
2026-03-03T17:25:11
Glittering-Call8746
false
null
0
o8fyqyq
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fyqyq/
false
1
t1_o8fykju
I hope he moves to [Z.Ai](http://Z.Ai) at a higher salary and gives us a GLM 5 Air.
1
0
2026-03-03T17:24:22
Then-Topic8766
false
null
0
o8fykju
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fykju/
false
1
t1_o8fyhu4
Then you should try FP8. It's faster than GGUFs on vLLM
1
0
2026-03-03T17:24:00
DinoAmino
false
null
0
o8fyhu4
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8fyhu4/
false
1
t1_o8fygby
How do you run this? I can see there are multiple files for each quantization.
1
0
2026-03-03T17:23:49
soyalemujica
false
null
0
o8fygby
false
/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/o8fygby/
false
1
t1_o8fydqu
i bought a 32gb m4 mac mini - was planning on qwen3 8b and qwen3 14b as the always-running stack and swapping in qwen3.5 27b as a dedicated deeper-strategy model. now with these smaller qwen3.5 models coming out, im def reconsidering. Looking to run a multi-agent system in Openclaw - any recommendations as to what to use for my everyday LLM through ollama? should i be using the 4b as orchestrator and keep the 27b always loaded? Thanks in advance!
1
0
2026-03-03T17:23:29
nycam21
false
null
0
o8fydqu
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8fydqu/
false
1
t1_o8fybpo
At its core Perplexica is still powered by SearXNG. It's capable of image and video search on top of that, but that's not necessary for most people's use cases I think.

I think the reason you think SearXNG alone isn't enough is because traditionally the search method has been "make the model generate a query based on the user's question -> fetch a list of pages by title+URL+snippet -> inject it into context and make the model answer". This is not great because the search may bring up irrelevant results, the snippets may be too short to contain sufficient information, and the question may require more research.

But modern agentic models like GLM-4.7-Flash and Qwen3.5 are natively capable of deciding for themselves when to search, when to keep searching, and when to fetch the full contents of a page, autonomously researching until they are satisfied that they have sufficient information to answer. You do have to set up the model correctly with the right parameters and system prompt so that it will do this well (which mitigates the need to ask it to search again, for example), and I've given a decent example of how to set that up in this post.

SearXNG still has limitations, as it doesn't always fetch the most relevant results, but you can configure it to be better. OpenWebUI's built-in fetch tool is also not as good as, say, Jina Reader (which I plan to set up an MCP server for), since it lacks the ability to parse JS-loaded sites and pdf pages unlike Jina.

I'm not sure how Perplexica solves those issues though, if at all.
1
0
2026-03-03T17:23:13
Daniel_H212
false
null
0
o8fybpo
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8fybpo/
false
1
t1_o8fyb1r
MCP was/is a mistake. Agent Skills are the new MCP. MCPs add context bloat even when they are not being used. The sooner you get rid of them, the better.
1
0
2026-03-03T17:23:08
Cheema42
false
null
0
o8fyb1r
false
/r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fyb1r/
false
1
t1_o8fyaxn
This is from my config.ini

```ini
[Qwen3-Coder-MXFP4-262144]
model = /models/Qwen3-Coder-Next-MXFP4_MOE.gguf
ctx-size = 262144
temp = 1.0
top-p = 0.95
min-p = 0.01
top-k = 40
threads = 5
batch-size = 768
ubatch-size = 768
repeat-penalty = 1.0
jinja = true
```

I am running llama.cpp in docker, this is the version

```bash
root@70ae03221e05:/# ./llama.cpp/build/bin/llama-server --version
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
version: 8054 (01d8eaa28)
built with GNU 13.3.0 for Linux x86_64
```
1
0
2026-03-03T17:23:07
Mount_Gamer
false
null
0
o8fyaxn
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8fyaxn/
false
1
t1_o8fy5x1
I think you can base64 encode the image and send it that way. That's how I sent images to Qwen 3 VL.
1
0
2026-03-03T17:22:28
Wise-Comb8596
false
null
0
o8fy5x1
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fy5x1/
false
1
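A minimal sketch of what the comment above describes: base64-encode an image and embed it as a data URL in an OpenAI-style chat request. The endpoint, model name, and file path are placeholders to adapt to whatever server is actually serving the model.

```python
# Send a local image to an OpenAI-compatible vision endpoint as a base64 data URL.
import base64
import requests

with open("screenshot.png", "rb") as f:                       # placeholder image path
    b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "qwen3.5-9b",                                    # placeholder model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
}
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```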
t1_o8fy5d2
But u need to align base models, SFT first, no?
1
0
2026-03-03T17:22:23
Glittering-Call8746
false
null
0
o8fy5d2
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8fy5d2/
false
1
t1_o8fy3cy
thanks for this, I'll try moltwire. And after seeing your comment I built a tool for myself today which does exactly what you mentioned, would love if you can try it: https://github.com/Rishab87/traceloop. Again, thanks, your comment helped a lot.
1
0
2026-03-03T17:22:07
DepthInteresting6455
false
null
0
o8fy3cy
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8fy3cy/
false
1
t1_o8fy0iv
lol I *knew* they'd release new small models right at the same time; yes, we're going to re-run with the new models because it might make things look even better for us!
1
0
2026-03-03T17:21:45
maciejgryka
false
null
0
o8fy0iv
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8fy0iv/
false
1
t1_o8fxv4o
Oh I see, and what backend are you using? I'm on CUDA but I see no Linux CUDA release on GitHub.
1
0
2026-03-03T17:21:02
Tyrannas
false
null
0
o8fxv4o
false
/r/LocalLLaMA/comments/1rekedh/bad_local_performance_for_qwen_35_27b/o8fxv4o/
false
1
t1_o8fxord
I wonder that too, maybe it's an AI agent with a small 10M parameter model? hahahaah 🤣
1
0
2026-03-03T17:20:13
M4r10_h4ck
false
null
0
o8fxord
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fxord/
false
1
t1_o8fxnyy
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-03T17:20:07
WithoutReason1729
false
null
0
o8fxnyy
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fxnyy/
true
1
t1_o8fxn7u
i agree, just built a tool for replaying all steps and giving all the traces. it is free and open source, would love if you can try it: https://github.com/Rishab87/traceloop
1
0
2026-03-03T17:20:01
DepthInteresting6455
false
null
0
o8fxn7u
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8fxn7u/
false
1
t1_o8fxmyb
Will u update for qwen 3.5 base models?
1
0
2026-03-03T17:19:59
Glittering-Call8746
false
null
0
o8fxmyb
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8fxmyb/
false
1
t1_o8fxlc3
Yaaa I agree. Need some time for a good model to work.
1
0
2026-03-03T17:19:46
Billysm23
false
null
0
o8fxlc3
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fxlc3/
false
1
t1_o8fxkg0
It is, but at FP16 the model already only takes around 2.6GB for 100K tokens, so double that and it's still very manageable.
1
0
2026-03-03T17:19:39
Time_Reaper
false
null
0
o8fxkg0
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8fxkg0/
false
1
t1_o8fxi75
Can you share this with us so we can test it too?
1
0
2026-03-03T17:19:21
Dry-Heart-9295
false
null
0
o8fxi75
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fxi75/
false
1
t1_o8fxhza
[removed]
1
0
2026-03-03T17:19:20
[deleted]
true
null
0
o8fxhza
false
/r/LocalLLaMA/comments/1rjuslh/gradience_in_10_minutes/o8fxhza/
false
1
t1_o8fxf3j
Why post 0.8b model output when the post is about the 4b? Have people lost the ability to read?
1
0
2026-03-03T17:18:57
Just-Message-9899
false
null
0
o8fxf3j
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8fxf3j/
false
1
t1_o8fxdk7
the jump from 2.5 to 3.5 at the small model tier is honestly wild, the 4b quant is competitive with what the 9b models were doing not that long ago. the hallucination thing is real though: smaller models are always going to be worse at factual recall, so you need to be more careful about verification.

for code generation and structured output tasks these small models punch way above their weight. running a 0.8b model on a phone was science fiction two years ago
1
0
2026-03-03T17:18:45
Sea-Sir-2985
false
null
0
o8fxdk7
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fxdk7/
false
1
t1_o8fxbfc
The technical specs of both chips:

M5 Pro:
- 6P+12M, 20-core GPU
- 4.61GHz + 4.38GHz* + 1.62GHz
- pLLC 16MB, mLLC 16MB, Memory Cache 24MB
- LPDDR5X-9600, up to 64GB

M5 Max:
- 6P+12M, 40-core GPU
- 4.61GHz + 4.38GHz + 1.62GHz
- pLLC 16MB, mLLC 16MB, Memory Cache 48MB
- LPDDR5X-9600, up to 128GB
1
0
2026-03-03T17:18:28
Caffdy
false
null
0
o8fxbfc
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fxbfc/
false
1
t1_o8fxb6r
CIA got him :(
1
0
2026-03-03T17:18:26
Leather_Trifle2486
false
null
0
o8fxb6r
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fxb6r/
false
1
t1_o8fx6jm
Instruct model with reasoning turned off
1
0
2026-03-03T17:17:50
hauhau901
false
null
0
o8fx6jm
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fx6jm/
false
1
t1_o8fx4ez
Is this an overlay for a ComfyUI standard workflow?
1
0
2026-03-03T17:17:33
UnbeliebteMeinung
false
null
0
o8fx4ez
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fx4ez/
false
1
t1_o8fx3el
U used the base 4b? Or another aligned model based off it?
1
0
2026-03-03T17:17:25
Glittering-Call8746
false
null
0
o8fx3el
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fx3el/
false
1
t1_o8fwy4y
What platform have you chosen?
1
0
2026-03-03T17:16:44
Minute-Yogurt-2021
false
null
0
o8fwy4y
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fwy4y/
false
1
t1_o8fwwjt
Thx lin
1
0
2026-03-03T17:16:32
Secure-food4213
false
null
0
o8fwwjt
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fwwjt/
false
1
t1_o8fwwli
That makes sense. Isolating the trust-boundary failure cleanly is the right call for a lab demo. What I find interesting is that most production agents don’t explicitly model “authority” at all — it’s implicit in how context is merged. Until authority is a first-class concept (separate from data), mitigations like provenance tagging will always feel bolted on rather than structural.
1
0
2026-03-03T17:16:32
Jumpy-Possibility754
false
null
0
o8fwwli
false
/r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/o8fwwli/
false
1
t1_o8fww5k
[https://crosshairbenchmark.com](https://crosshairbenchmark.com)

They, along with the other Qwen3.5 models, will participate in weapons systems based on the CROSSHAIR benchmark.
1
0
2026-03-03T17:16:29
dolex-mcp
false
null
0
o8fww5k
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8fww5k/
false
1
t1_o8fwqz9
So that's what he meant by "final shot" in his previous tweet quoting the Qwen-3.5 small models. The Qwen-3.5 small models are really awesome; this is the first local model I've been excited about since Qwen-2.5. Thankful for his contributions to the open-source community.
1
0
2026-03-03T17:15:48
popiazaza
false
null
0
o8fwqz9
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fwqz9/
false
1
t1_o8fwncd
maybe you know what the speed of the chips they're using is? AFAIK they should still be using LPDDR5X, but the M4 was using 4 channels (512-bit wide bus) of 8533MHz chips
1
0
2026-03-03T17:15:20
Caffdy
false
null
0
o8fwncd
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fwncd/
false
1
t1_o8fwko4
Thanks for your feedback. I think yes, that should be possible. For now, this type of file isn’t supported, but that’s exactly why I’m posting on this reddit, to get this kind of feedback, so thank you. And with this kind of tool, it could also make it possible to do 3D printing directly from a prompt or an image.
1
0
2026-03-03T17:14:59
Lightnig125
false
null
0
o8fwko4
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fwko4/
false
1
t1_o8fwgz1
I've been playing around with Qwen3 Coder 30B A3B yesterday and it works nicely with the recommended settings from the unsloth blog post. I wanted max speed so I was limited to 50K context with a single 24GB 4090 and the Q4_K_XL version, but you should be able to push much higher with 48GB VRAM or use a better quant as well, like Q6_K_XL.

Just make sure to do this: `export CLAUDE_CODE_ATTRIBUTION_HEADER=0` when using a local model, otherwise it will process the whole 18K system prompt before and after(!) each prompt; things took a long time without it. I'm serving the model with llama-server.
1
0
2026-03-03T17:14:29
tmvr
false
null
0
o8fwgz1
false
/r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8fwgz1/
false
1
t1_o8fwg0d
Qwent out on top.
1
0
2026-03-03T17:14:22
victoryposition
false
null
0
o8fwg0d
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fwg0d/
false
1
t1_o8fwdag
is it a template? or a model? do you have any link? never heard that name before
1
0
2026-03-03T17:14:00
mouseofcatofschrodi
false
null
0
o8fwdag
false
/r/LocalLLaMA/comments/1rju3q7/how_can_the_zwz_model_be_as_fast_as_smaller/o8fwdag/
false
1
t1_o8fwa3b
It should actually be quite performant. On an M3 Pro I have indexed the whole Python Django repo (approx. 700k LOC) in 20s. Reindexing (only changes) is usually in the sub-second to couple-of-seconds range.
1
0
2026-03-03T17:13:35
OkDragonfruit4138
false
null
0
o8fwa3b
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8fwa3b/
false
1
t1_o8fw9gc
the M4 Max already crushed the spark/halo (at least in memory bandwidth)
1
0
2026-03-03T17:13:30
Caffdy
false
null
0
o8fw9gc
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fw9gc/
false
1