name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8gg8ws
Didn't xAI laid off a bunch of Chinese engineers because of ITAR after merger with SpaceX? If that's true why they can hire another group of chinese?
1
0
2026-03-03T18:46:34
aprx4
false
null
0
o8gg8ws
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gg8ws/
false
1
t1_o8gg4y3
[https://huggingface.co/collections/huihui-ai/qwen35-abliterated](https://huggingface.co/collections/huihui-ai/qwen35-abliterated)
1
0
2026-03-03T18:46:02
andy2na
false
null
0
o8gg4y3
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gg4y3/
false
1
t1_o8gg2fb
I wonder what it'll turn out being
1
0
2026-03-03T18:45:43
StatusSociety2196
false
null
0
o8gg2fb
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gg2fb/
false
1
t1_o8gg0v7
The Qwen series were outstanding and SOTA the smaller the model got, but the larger the models got the less impressive they were. Qwen 4Bs - mind-blowing! Qwen3-Max trillion parameter - very mid. Qwen3.5-397B is nowhere as impressive as Qwen3.5-27B. Most of the money these companies make is from the larger models and if they are not competitive there then where do the beans come in from? And how to you really monetize a 4B model - however excellent it may be? Unless you own a phone vertical its not going to work. So business people viewing it as a failure is understandable - there seems to be no good path for actually making money off them. Who will be happy? people who make Liquid Models. Who will be sad? everyone who loved and enjoyed the open weight Qwen Models. The new head honcho might abandon the open source/weight approach as something garnering goodwill without translating into a monetization route. This really sucks - the whole Qwen output (Text, audio, video, image models are at the risk of being gutted and cut). This will be a major blow to Open Weight community. I hope I am wrong.
1
0
2026-03-03T18:45:31
GreenGreasyGreasels
false
null
0
o8gg0v7
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gg0v7/
false
1
t1_o8gfyrh
It's WizardLM all over again.
1
0
2026-03-03T18:45:15
fallingdowndizzyvr
false
null
0
o8gfyrh
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gfyrh/
false
1
t1_o8gfxom
Are they moving to Grok? Someone pointed out a tweet from Elon. With how good 3.5 is I wouldn't be surprised if they were poached by a frontier AI company.
1
0
2026-03-03T18:45:07
hainesk
false
null
0
o8gfxom
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gfxom/
false
1
t1_o8gfv57
I uh I prefer suffering in blender please don’t take my last hobby
1
0
2026-03-03T18:44:47
Jayden_Ha
false
null
0
o8gfv57
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gfv57/
false
1
t1_o8gft8o
I doubt he was kicked out. AI researchers are the tech superstars nowadays. He probably received an offer he couldn't refuse.
1
0
2026-03-03T18:44:33
tarruda
false
null
0
o8gft8o
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gft8o/
false
1
t1_o8gfqv3
Historically, this has been a model issue. What model did you tend to use?
1
0
2026-03-03T18:44:15
cameron_pfiffer
false
null
0
o8gfqv3
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8gfqv3/
false
1
t1_o8gfqvd
Why do you think it doesn't have a powerful GPU?
1
0
2026-03-03T18:44:15
fallingdowndizzyvr
false
null
0
o8gfqvd
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gfqvd/
false
1
t1_o8gfpc3
Damn I wish I would live in us then. It's 17k here with tax.
1
0
2026-03-03T18:44:03
spaceman3000
false
null
0
o8gfpc3
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gfpc3/
false
1
t1_o8gfl7k
I don't like the LoRA approach, it will make the model not just refuse, but it will *actively* push the model towards harmful responses. It is not really great for people who want unfiltered AI agents or code writers, as well as it biases the creative writing responses towards user's not really preferable one unprompted. > while the heretic version still refuses 46% of the time. How about this one? :) https://huggingface.co/llmfan46/Qwen3.5-9B-ultimate-irrefusable-heretic
1
0
2026-03-03T18:43:32
kabachuha
false
null
0
o8gfl7k
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gfl7k/
false
1
t1_o8gfl8t
Is it suitable for creating spriers for games?
1
0
2026-03-03T18:43:32
AppealThink1733
false
null
0
o8gfl8t
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gfl8t/
false
1
t1_o8gfkcu
I suspect they simply received much better $$$ offers.
1
0
2026-03-03T18:43:25
tarruda
false
null
0
o8gfkcu
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gfkcu/
false
1
t1_o8gfifz
you still need the entire bandwidth needed for 30b "reserved" from what I understand, 3b limits "speed", I think if you try to run multiple instances you may need more space free to reserve? someone more knowledgeable please tell me if I'm wrong lol
1
0
2026-03-03T18:43:11
Federal_Order4324
false
null
0
o8gfifz
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8gfifz/
false
1
t1_o8gfd0w
Your correct. Though the national priority I mention is about China, not Alibaba. The15th Five-Year Plan, which is being unveiled during China's "Two Sessions" in March 2026, prioritizes AI integration in biotechnology including biomanufacturing, drug discovery, and related industrial applications which is in Zhou backyard. China will put out big subsidies and support for those who prioritize this sector. I don't think it is coincidence that China's budget will be unveiled this month.
1
0
2026-03-03T18:42:29
Stannis_Loyalist
false
null
0
o8gfd0w
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gfd0w/
false
1
t1_o8gfa3e
That is so fucked up and so similar to my life experience... you do stellar work everything right and as reward they demote you.... so only option is to leave.
1
0
2026-03-03T18:42:07
Single_Ring4886
false
null
0
o8gfa3e
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gfa3e/
false
1
t1_o8gf9k6
just so u know, open webui is poorly designed and they inject prompts into your last user message from the tool call results which forces prompt cache reprocessing after every single tool call. If you want to temporarily fix it, go to the RAG template in admin settings and replace the default prompt with a single non-white space character like a period \`.\` it will still reproccess the prompt once, but after that it will be able to use cache. unfortunatly it will disable the openwebui citation renderings, but you can always just ask the model for links to sources.
1
0
2026-03-03T18:42:02
Far-Low-4705
false
null
0
o8gf9k6
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8gf9k6/
false
1
t1_o8gf8t4
[removed]
1
0
2026-03-03T18:41:56
[deleted]
true
null
0
o8gf8t4
false
/r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/o8gf8t4/
false
1
t1_o8gf8u7
Yes but most people trying to run AI just want a simple interface they can set and then run the AI instead of trying to troubleshoot which model to download, how the workflow needs to be setup, and what addons need to be downloaded for specific things to actually work properly with a community made workflow
1
0
2026-03-03T18:41:56
Ill-Oil-2027
false
null
0
o8gf8u7
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gf8u7/
false
1
t1_o8gf7l4
Is there a repo we can follow?
1
0
2026-03-03T18:41:46
Ok_Cow_8213
false
null
0
o8gf7l4
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gf7l4/
false
1
t1_o8gf766
gguf please. i need vision haha
1
0
2026-03-03T18:41:43
Major_Specific_23
false
null
0
o8gf766
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gf766/
false
1
t1_o8gf37g
It doesn’t look good for Qwen when GLM, Kimi, Minimax are all making buzz and gained their mini Deepseek moment. Sad to see Alibaba fired a proper research team that making breakthroughs because they couldn’t beat other Chinese LLM labs that distill Claude models.
1
0
2026-03-03T18:41:12
popiazaza
false
null
0
o8gf37g
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gf37g/
false
1
t1_o8gezko
It is really 0 refusals but it forgot other languages and is only reliable in English
1
0
2026-03-03T18:40:44
jax_cooper
false
null
0
o8gezko
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gezko/
false
1
t1_o8gevz3
cant get over 60t/s with f32... even slower than bf16 with FA off
1
0
2026-03-03T18:40:17
arthor
false
null
0
o8gevz3
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8gevz3/
false
1
t1_o8geqb4
Any findings on this matter?
1
0
2026-03-03T18:39:33
Academia_IA
false
null
0
o8geqb4
false
/r/LocalLLaMA/comments/1qeupi8/personaplex_voice_and_role_control_for_full/o8geqb4/
false
1
t1_o8geqd2
It's the beginning of the end of Qwen with open weights as we know it. What a massive downer!
1
0
2026-03-03T18:39:33
maglat
false
null
0
o8geqd2
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8geqd2/
false
1
t1_o8gepso
I tried it, but it's more than twice as slow as 3.5 122b; so at least on a mac with lots of unified memory 122b still wins.
1
0
2026-03-03T18:39:29
slypheed
false
null
0
o8gepso
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8gepso/
false
1
t1_o8gep9r
I will send you the conf. in DM :)
1
0
2026-03-03T18:39:24
M4r10_h4ck
false
null
0
o8gep9r
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8gep9r/
false
1
t1_o8gefgv
Please forgive me and excuse my Italian. What I meant is that the models you mentioned, according to the contractors and as stated directly by their original sources — such as OpenAI or Qwen — require more than 80 GB of VRAM for 120B-class models, regardless of whether they are MoE or not, because they still need to be loaded into VRAM. They may be lighter in terms of active parameters, but the context and overall weights remain extremely large. Regarding the thread title, which discusses not a single *64 GB die* but rather *four 16 GB dies*, even if NVLink increases throughput, LLMs generally prefer unified memory paths, especially when running 4-bit quantized models. With 64 GB of VRAM, I would personally focus development on models in the 70B range and consider working in a multi-model setup with orchestration. https://preview.redd.it/n6ju7lc4kvmg1.png?width=1222&format=png&auto=webp&s=628c202fe78c1055a9255a1298cdcc398c98f434 At the moment, for programming tasks at least, cloud-based models still remain unmatched in my opinion.
1
0
2026-03-03T18:38:10
Single_Error8996
false
null
0
o8gefgv
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8gefgv/
false
1
t1_o8ge9yb
The issue is still the reduced productivity from the TPS with the M5, if that matters. What you are proposing is the rough equivalent of running 122B on an M3 Ultra today. It's not as satisfying as cloud. So for me we are still in the situation where local is about privacy and/or hobby/learning. On a token basis its likely remains that local is an order of magnitude more expensive than cloud at a minimum. Perhaps two orders of magnitude.
1
0
2026-03-03T18:37:27
zipzag
false
null
0
o8ge9yb
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ge9yb/
false
1
t1_o8ge6v4
Bro did you know HF has a search feature? [https://huggingface.co/models?sort=trending&search=qwen3.5+base](https://huggingface.co/models?sort=trending&search=qwen3.5+base) But not a lot of GGUF quants yet: [https://huggingface.co/models?sort=trending&search=qwen3.5+base+gguf](https://huggingface.co/models?sort=trending&search=qwen3.5+base+gguf)
1
0
2026-03-03T18:37:03
temperature_5
false
null
0
o8ge6v4
false
/r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/o8ge6v4/
false
1
t1_o8ge698
I think you did well with a creative idea! Too busy to play it though. The schism between (makers, do-ers) and (zombie-konsoomers) has grown to a chasm.
1
0
2026-03-03T18:36:59
crantob
false
null
0
o8ge698
false
/r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/o8ge698/
false
1
t1_o8gdz39
I cant for the life of me get a vllm hosted model to call tools. It mimicks tool calls but cursor doesn't pick it up.
1
0
2026-03-03T18:36:04
MichaelXie4645
false
null
0
o8gdz39
false
/r/LocalLLaMA/comments/1qq1sni/has_anyone_actually_made_local_coding_models/o8gdz39/
false
1
t1_o8gdyug
Hmm interesting, I never got it to allow me to go over that. I'm running Ollama with custom Modelfiles. I've tried setting the context size both at the modelfile and the Ollama env variable directly. I can see it in action when I call it's API directly, but still Letta ADE caps me at 30k, same thing if I try to change it through API Maybe it hard caps based on the model it's reading, than there's some sort of internal checks it does to apply that limit?
1
0
2026-03-03T18:36:02
Di_Vante
false
null
0
o8gdyug
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8gdyug/
false
1
t1_o8gdqn9
Hopefully, they will find work elsewhere to do open source. Maybe Switzerland, with Apertus?
1
0
2026-03-03T18:34:59
Sabin_Stargem
false
null
0
o8gdqn9
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gdqn9/
false
1
t1_o8gdq5r
[removed]
1
0
2026-03-03T18:34:55
[deleted]
true
null
0
o8gdq5r
false
/r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/o8gdq5r/
false
1
t1_o8gdq34
Aah apologies i interpreted base as (non-finetuned) i don’t think they are base
1
0
2026-03-03T18:34:54
Express_Quail_1493
false
null
0
o8gdq34
false
/r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/o8gdq34/
false
1
t1_o8gdn0t
Very strange. The opposite should be happening, Qwen 3.5 has been a super successful launch. Hopefully these people will find new jobs quickly. Maybe [z.AI](http://z.AI) will be happy to hire them
1
0
2026-03-03T18:34:31
dampflokfreund
false
null
0
o8gdn0t
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gdn0t/
false
1
t1_o8gdmxj
It's happening, the ability to run local llms well is now a real selling point for the new macs
1
0
2026-03-03T18:34:30
pineapplekiwipen
false
null
0
o8gdmxj
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gdmxj/
false
1
t1_o8gdhc4
I need one of the smallest ones, the 2b and the 0.8b hahaha. I can't run that
1
0
2026-03-03T18:33:47
Samy_Horny
false
null
0
o8gdhc4
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gdhc4/
false
1
t1_o8gdfby
Only guesses and rumors so far but from what I've read Alibaba is restructuring so the Qwen team directly reports to Alibaba CEO and they replaced current tech lead with someone from Gemini
1
0
2026-03-03T18:33:32
kymigreg
false
null
0
o8gdfby
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gdfby/
false
1
t1_o8gdb95
It's not capped at 30k -- that is typically an issue with a local inference engine (LM Studio, Ollama) not providing the Letta server with the information it needs. Can you tell me more about your setup?
1
0
2026-03-03T18:33:01
cameron_pfiffer
false
null
0
o8gdb95
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8gdb95/
false
1
t1_o8gd6df
If you need an American instruct-only model focused on RAG and FIM that can have a large context window in a small footprint. H-Tiny is about 7B-A1B, so organizations can run it on hardware or cloud using older VDI instances. Other real options in that instance is Arcee Trinity Nano 6B-A1B (not hybrid) or LFM2 8B-A1B (only 32k context)
1
0
2026-03-03T18:32:24
dreamkast06
false
null
0
o8gd6df
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8gd6df/
false
1
t1_o8gd5xb
On vllm I have experimented with QuantTrio/Qwen3.5-27B-AWQ, but had to give that up due to the KV cache offloading not being ready in vllm I think. It's on the current roadmap so I am hopeful. "CPU KV cache production ready: performance optimized, HMA support [@orozery](https://github.com/orozery)" While the gguf I use is the Unsloth Qwen3.5-27B-UD-Q5\_K\_XL quant. Is that what you meant? But I am bothered with a lot of full prompt re-processing in llama.cpp which is time-consuming. I think this is due to the SWA, the sliding window attention trick that llama.cpp supports to get the kv-cache down to 4GB. slot update_slots: id 3 | task 2687 | forcing full prompt re-processing due to lack of cache data (likely due to SWA or hybrid/recurrent memory, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055) slot update_slots: id 3 | task 2687 | erased invalidated context checkpoint (pos_min = 94215, pos_max = 94215, n_tokens = 94216, n_swa = 1, size = 149.626 MiB) slot update_slots: id 3 | task 2687 | erased invalidated context checkpoint (pos_min = 94699, pos_max = 94699, n_tokens = 94700, n_swa = 1, size = 149.626 MiB) slot update_slots: id 3 | task 2687 | n_tokens = 0, memory_seq_rm [0, end) slot update_slots: id 3 | task 2687 | prompt processing progress, n_tokens = 2048, batch.n_tokens = 2048, progress = 0.021676 Yes, so far I have been using the dense model. It is slower indeed, but I have had the impression that these 30B models are on the edge of what is realistically feasible to use with OpenClaw, and I have simply figured that every percentage of quality counts in my situation.
1
0
2026-03-03T18:32:20
Treq01
false
null
0
o8gd5xb
false
/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o8gd5xb/
false
1
t1_o8gd322
I have been putting the Qwen 3.5 122B-A10B through its paces. The 27B benchmarks similarly, so it possibly almost as good, though it lacks slightly behind in world knowledge. I very strongly suspect this is the best model for coding right now for folks with \~32 GB VRAM in actual GPU. The other is more useful for Strix Halo and Apple folks, who don't have the compute but can spare the RAM. I haven't tried the model, but I have been using this 122B-A10B and it's easily the best model I've ever been able to run. I've let it run through frontend and backend code and it's been documenting, writing tests, fixing bugs, converting between two different frontend frameworks, translating localization messages, and has done an alright job also at developing some new simple features. I have gone from untested, undocumented crap to professional-looking source with exhaustive javadocs and a fully working test suite that exercises various features during a single afternoon, all running locally on a fairly compute limited platform. This is the first model that feels like a real developer to me: I can just point it at codebase armed with requirements and a coding style doc, and then let it do whatever it wants. I use Kilo Code as VS Code plugin for the agentic stuff. I typically use the orchestrator mode, to which I hand a goal and I let it elaborate it to something actionable. The only downside is that the fan screams enough that I need to use headphones...
1
0
2026-03-03T18:31:58
audioen
false
null
0
o8gd322
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8gd322/
false
1
t1_o8gcz8j
man… this one actually stings. Junyang was clearly a big part of why Qwen felt “for the community” and not just another big lab pushing metrics. 3.5 releases were genuinely solid, especially for local folks. if this wasn’t his choice, that’s even rougher. i do worry a bit about the direction shift. When leadership changes from research-driven to more DAU / product-metric driven, open releases and smaller experimental models are usually the first things to get deprioritized. that’s just how big orgs work. but yeah… losing someone who clearly understood the local LLM community is not nothing. definitely watching what they ship next or what Junyang is up to
1
0
2026-03-03T18:31:29
Intrepid_Report_1435
false
null
0
o8gcz8j
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gcz8j/
false
1
t1_o8gcu05
This is so awesome. I would love to see the config! Im trying to learn beelzebub right now
1
0
2026-03-03T18:30:49
kexxty
false
null
0
o8gcu05
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8gcu05/
false
1
t1_o8gcsb6
Ha! I will look into it, thank you for reporting.
1
0
2026-03-03T18:30:36
init0
false
null
0
o8gcsb6
false
/r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8gcsb6/
false
1
t1_o8gcr0d
[removed]
1
0
2026-03-03T18:30:26
[deleted]
true
null
0
o8gcr0d
false
/r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/o8gcr0d/
false
1
t1_o8gcnen
It sounds like he has personal issues. Hope all is well in there.
1
0
2026-03-03T18:29:59
exaknight21
false
null
0
o8gcnen
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gcnen/
false
1
t1_o8gchv9
Nice work! What is the baseline power (running the machine without running any LLMs.) on your machine?
1
0
2026-03-03T18:29:16
BandEnvironmental834
false
null
0
o8gchv9
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8gchv9/
false
1
t1_o8gcht3
and the 3.5 models ARE SPECIFICALLY the newer architecture that was previewed BY 3Next
1
0
2026-03-03T18:29:15
drivebyposter2020
false
null
0
o8gcht3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8gcht3/
false
1
t1_o8gcfry
And you compared to q3_k_xs or similar?
1
0
2026-03-03T18:29:00
crantob
false
null
0
o8gcfry
false
/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8gcfry/
false
1
t1_o8gcfg7
Thank you.
1
0
2026-03-03T18:28:57
Dthen_
false
null
0
o8gcfg7
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o8gcfg7/
false
1
t1_o8gcelk
I did have use-cases in the past and only found the paid closed-source tools, so i did unfortunately have to discard it. Another great feature would be if you could map textures and make use of a collection of images taken ex. from a model ingame that is rendered in a game engine and has a lot of things applied so it would not be possible to export/recreate it externally, but with images recreating the model would be fantastic.
1
0
2026-03-03T18:28:51
ihateAdmins
false
null
0
o8gcelk
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gcelk/
false
1
t1_o8gca96
This is all so vague. Was it mass layoffs in their AI org?
1
0
2026-03-03T18:28:18
ForsookComparison
false
null
0
o8gca96
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gca96/
false
1
t1_o8gc3ws
Not necessarily caused by a national priority thing imo. Alibaba actually has an notorious track record for management chaos, and known for replacing department leads with "parachuted" executives from high-profile companies. Google "阿里 空降高管" for some examples. I would like to think that Alibaba is merging resources and pivoting to a consumer-facing approach rather than focusing on open source. Qwen's Chinese app 通义 is struggling among its Chinese competitors like ByteDance's doubao. I saw in business reports that Qwen app only has 1/10 of the DAU of doubao, and even performing worse than Tencent's and Baidu's despite its model capabilities. Another sign is the campaign that the Chinese Qwen app launched this CNY. Every user logged into the app gets a 25 yuan discount on any takeout delivery, as long as it's done thought the MCP-like thing that connects to Alibaba's takeout delivery service 淘宝闪购. We kinda know that Alibaba wants Qwen to serve its other much bigger businesses like online shopping and food delivery. Campaigns like this are kinda intensified especially from later last year, when the Chinese Qwen app did a complete rebranding though, and you can tell higher executives are eager to merge Qwen into their "bigger empire" and make Qwen part of it.
1
0
2026-03-03T18:27:30
ttaox30
false
null
0
o8gc3ws
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gc3ws/
false
1
t1_o8gc357
Hope it helps :) I can review it. Can you leave a github issue? Helps me with planning :)
1
0
2026-03-03T18:27:24
OkDragonfruit4138
false
null
0
o8gc357
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8gc357/
false
1
t1_o8gc17k
Absolutely. As an addition to that I'd love to see more in the range of FOSS AI-enhanced Cloudpoint/photogrammetry processing
1
0
2026-03-03T18:27:09
phormix
false
null
0
o8gc17k
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gc17k/
false
1
t1_o8gby9b
didn't get to extensively test, but its awesome
1
0
2026-03-03T18:26:46
Old-Sherbert-4495
false
null
0
o8gby9b
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8gby9b/
false
1
t1_o8gbv0h
Hey! I have a 5090 with 96GB of RAM. What quant/model are you running the 27B with? I also used unsloth's recommended inference settings i.e temp 0.6, top k 20, etc... I also tried turning speculative decoding on, but even in LM studio I see "no draft models found", even if I installed the 9b and 4b models. I can switch over to llama.cpp if needed. What model did you use for speculative decoding? Figuring out whether to try and use 122b vs 27B since I can load 27B straight into memory.
1
0
2026-03-03T18:26:21
Anarchaotic
false
null
0
o8gbv0h
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8gbv0h/
false
1
t1_o8gbq1f
Either way it sucks. Qwen was delivering great small sizes other Chinese model makers don't give a rat ass about. Sad
1
0
2026-03-03T18:25:43
BumblebeeParty6389
false
null
0
o8gbq1f
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gbq1f/
false
1
t1_o8gboqr
I believe it is due the kernel switching overhead for prefill stage. Basically, it is a constant latency overhead regardless of the pp length.
1
0
2026-03-03T18:25:33
BandEnvironmental834
false
null
0
o8gboqr
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8gboqr/
false
1
t1_o8gbgmh
It's illegal in europe to bundle the power adapter.
1
0
2026-03-03T18:24:31
Desperate-Purpose178
false
null
0
o8gbgmh
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gbgmh/
false
1
t1_o8gbf8z
I can't comment on that particular combo but I found that if I ask Gemini to propose settings for a given hardware setup and then ask Claude to review and combine the results I get something that takes pretty good advantage of my setup without trial and error 
1
0
2026-03-03T18:24:21
drivebyposter2020
false
null
0
o8gbf8z
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8gbf8z/
false
1
t1_o8gbbzp
This is the right approach. Thx for sharing!
1
0
2026-03-03T18:23:55
debackerl
false
null
0
o8gbbzp
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8gbbzp/
false
1
t1_o8gb7b1
Could just be Alibaba needs its LLM ops to be more revenue focused. There's something to say about the strategic value of setting a standard with powerful, efficient open source models, but not sure that long term strategic vision can hold against closed frontier models like Claude or early entrants like OpenAI.
1
0
2026-03-03T18:23:20
Taurus-Octopus
false
null
0
o8gb7b1
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gb7b1/
false
1
t1_o8gb4wt
If anything AI was standardized. Would be allot easier to learn and build. Some things are kind of getting there. I don't bother with the MCP, API stuff at all. I see it as a real security threat that's being over looked for convenience. Nothing against people who use it. I just prefer local. So I really have no opinions on it aside from that.
1
0
2026-03-03T18:23:02
melanov85
false
null
0
o8gb4wt
false
/r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8gb4wt/
false
1
t1_o8gb0my
The future is likely running inference on a specialized ASIC. But even today some preload performance could be improved with a better understanding and design for caching. With home automation a lot of static and semi static data could remain the the cache with only sensor data and the user prompt being loaded at runtime. The large memory of a unified memory system means a lot of tokens can be kept in (v)ram. But the logic needs to be able to do it, as it is with serving inference in the data center. Also, usually no need to quantize the KV cache on a unified memory system. So less context rot.
1
0
2026-03-03T18:22:29
zipzag
false
null
0
o8gb0my
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gb0my/
false
1
t1_o8gaw8r
Really interesting approach! The KV cache injection for skill files is clever - you're essentially trading compute overhead for context efficiency. A few thoughts on your methodology: 1. For the checkpoint selection problem (005 vs 006 degradation), have you considered tracking validation metrics during training rather than just final scores? Early stopping based on held-out skill tasks might help you catch that sweet spot automatically. 2. Your comment about being limited by local compute for larger model testing resonates. One thing that might help is systematically tracking the actual cost/token efficiency across your C0/C1/C2 approaches. Even with smaller models, measuring the compute cost per "useful output" could help you extrapolate whether this approach would be economically viable at scale. 3. The 1/3 reliability vs full context (C1) might actually be acceptable depending on your cost savings - have you quantified the token reduction percentage? For scaling this up, you might want to benchmark across multiple model sizes to see if there's a "crossover point" where your approach becomes more cost-effective than full skill context. The relationship between model size, context cost, and KV injection overhead could be non-linear. Btw, if you're looking to systematically compare costs across different LLM approaches and providers as you scale this up, tools like [ZenLLM.io](http://ZenLLM.io) can help track those metrics across experiments. But even just logging token usage and inference time per approach would give you good directional data. Would love to see how this performs with slightly larger models (1B-3B range) if you get access to more compute.
1
0
2026-03-03T18:21:55
eliko613
false
null
0
o8gaw8r
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o8gaw8r/
false
1
t1_o8gaw03
I don't disagree but there is a tradeoff to be made... The impact in most areas would be limited vs the compute you have to spend. This is why we try to keep multiple models around 😁 I am fairly new to this but for example I am getting the Qwen3.5 family of models up and running since some have done really well with MCP servers out of the box... they have two that are nearly the same number of parameters and one is MOE and one is not... the MOE is for agentic work where you want tasks planned and done and the non-MOE is for the more comprehensive analysis of materials assembled by the other and is dramatically slower. 
1
0
2026-03-03T18:21:54
drivebyposter2020
false
null
0
o8gaw03
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8gaw03/
false
1
t1_o8gats7
Daily Active User
1
0
2026-03-03T18:21:37
moresoma
false
null
0
o8gats7
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gats7/
false
1
t1_o8garqv
> Where do aircraft carriers store their eggs? --- qwen3.5-0.8b Aircraft carriers typically store their "eggs" (which are actually **fuel tanks** or **propellant storage systems**) in the following locations: 1. **Main Engine Room**: This is the primary location for storing large quantities of fuel and propellants, including kerosene, hydrogen, and other chemical fuels used to power the aircraft carrier's engines. 2. **Propulsion Systems**: The tanks are often located within or adjacent to the propulsion systems (such as the main engine room or auxiliary engines), which are critical for maintaining the ship's speed and maneuverability. These fuel storage areas are designed to be robust, easily accessible, and protected from the harsh conditions of the ocean environment. --- qwen/qwen3.5-9b Aircraft carriers do not store eggs. The premise of your question is incorrect; aircraft carriers are military warships used by navies, and they do not produce or store eggs, as they are animals. If you're referring to *aircraft* in general, they do not store eggs either. However, there might be a confusion or a pun intended here. Aircraft carriers carry aircraft (planes), not eggs. If you're asking about where an aircraft carrier stores its *fuel* or *weapons*, then the answer would involve specialized compartments and magazines, but "eggs" is not a storage item on these vessels. Let me know if you meant something else! --- qwen/qwen3.5-35b-a3b 3 calls to searxng + 40s of thinking later, did eventually generate the answer I was thinking of (refrigerated storage, near the galleys, duh...)
1
0
2026-03-03T18:21:21
Peewee223
false
null
0
o8garqv
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8garqv/
false
1
t1_o8ganw6
glm 5 scoring higher than opus, doubt
1
0
2026-03-03T18:20:51
4baobao
false
null
0
o8ganw6
false
/r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8ganw6/
false
1
t1_o8gamq1
Did you try cloning a voice with it? I tried 3 different ones on the HF demo, almost comically off. For the intent (= voice cloning) it offers no benefits when compared to Voxtral and other small models for example.
1
0
2026-03-03T18:20:42
r4in311
false
null
0
o8gamq1
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8gamq1/
false
1
t1_o8gam1e
I'm really hoping Alibaba's model of *"give free stuff and get a few bucks off paying Qwen max users"* isn't finally hitting any harsh realities. Please no
2
0
2026-03-03T18:20:37
ForsookComparison
false
null
0
o8gam1e
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gam1e/
false
2
t1_o8galqe
[removed]
1
0
2026-03-03T18:20:34
[deleted]
true
null
0
o8galqe
false
/r/LocalLLaMA/comments/1r3kzz2/how_is_the_quality_of_recent_tts/o8galqe/
false
1
t1_o8ga9vq
GLM4.7/5, if the prompt processing is fast enough for agentic coding!
1
0
2026-03-03T18:19:03
temperature_5
false
null
0
o8ga9vq
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ga9vq/
false
1
t1_o8ga9qu
[removed]
1
0
2026-03-03T18:19:02
[deleted]
true
null
0
o8ga9qu
false
/r/LocalLLaMA/comments/1qqjs1z/looking_for_fast_local_tts_with_zero_shot_cloning/o8ga9qu/
false
1
t1_o8ga8gw
Hopefully it is! :)
1
0
2026-03-03T18:18:52
OkDragonfruit4138
false
null
0
o8ga8gw
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8ga8gw/
false
1
t1_o8ga4jn
Yes, the idea is not entirely new, I just tried to make it broader in scope, improve quality and speed for daily ops :) I am still developing it further. Always open for feedback :)
1
0
2026-03-03T18:18:22
OkDragonfruit4138
false
null
0
o8ga4jn
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8ga4jn/
false
1
t1_o8ga4ay
Ah just saw this and hoped it might support my llm server when I'm on my home network. Does anyone know if there's an openai api compatible chat app (that is good!) that i can point at my server? 
1
0
2026-03-03T18:18:20
richardbaxter
false
null
0
o8ga4ay
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ga4ay/
false
1
t1_o8ga0e1
https://preview.redd.it/… know, you know.
1
0
2026-03-03T18:17:51
MoffKalast
false
null
0
o8ga0e1
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ga0e1/
false
1
t1_o8g9zq8
Wow! That’s better than I thought would be possible for a dense model this size. Thanks for sharing!
1
0
2026-03-03T18:17:46
AdCreative8703
false
null
0
o8g9zq8
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8g9zq8/
false
1
t1_o8g9w8z
But it was fast
1
0
2026-03-03T18:17:19
Substantial-Ebb-584
false
null
0
o8g9w8z
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g9w8z/
false
1
t1_o8g9rt6
Aider was really efficient with tokens and can pinpoint changes. Too bad it doesn't have agent mode so I watch opencode, cline, kilo etc grep, wc, ls for several round trips and burn tokens. Looks like this could be what I'm looking for.
1
0
2026-03-03T18:16:44
aitchnyu
false
null
0
o8g9rt6
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g9rt6/
false
1
t1_o8g9qsh
Hello, thank you. It works great on my 4060ti. I just have one question, are the vision capabilities still intact with this gguf (I am using q8). Lmstudio doesnt allow me to upload images when i load your model. Thanks
1
0
2026-03-03T18:16:37
Major_Specific_23
false
null
0
o8g9qsh
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8g9qsh/
false
1
t1_o8g9qwa
Check my post history 32tkps on qwen a3b
1
0
2026-03-03T18:16:37
sagiroth
false
null
0
o8g9qwa
false
/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8g9qwa/
false
1
t1_o8g9q8r
I have 96GB on my mini PC, and I always feel right on the edge of being able to run large new models. If I do upgrade it's gonna be to 128GB or higher.
1
0
2026-03-03T18:16:33
temperature_5
false
null
0
o8g9q8r
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g9q8r/
false
1
t1_o8g9q3f
He should start up his own model
1
0
2026-03-03T18:16:32
cuberhino
false
null
0
o8g9q3f
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g9q3f/
false
1
t1_o8g9pnk
That’s super cool. Graphs in general with AI just are. I found that for my corpus, using GraphRAG improved cross topic hop by 24%.
1
0
2026-03-03T18:16:28
3spky5u-oss
false
null
0
o8g9pnk
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g9pnk/
false
1
t1_o8g9ox4
Daily active users. It seems some execs at Alibaba weren't hitting the numbers they wanted, and this is the outcome. :/
1
0
2026-03-03T18:16:22
ayylmaonade
false
null
0
o8g9ox4
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g9ox4/
false
1
t1_o8g9ktj
so this means m5 ultra will be insane. waiting for that
1
0
2026-03-03T18:15:50
Kindly_Scientist
false
null
0
o8g9ktj
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g9ktj/
false
1
t1_o8g9jth
If Google gets him, Gemma4 27b beats Opus 4.6's coding ability by a year from now, and if Meta gets him, Llama5 70b dense beats William Shakespeare at poetry. If xAI gets him, the human race comes to an end, since birth rates would fall to 0.0 because of how powerful Grok's porn would be.
1
0
2026-03-03T18:15:43
DeepOrangeSky
false
null
0
o8g9jth
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g9jth/
false
1
t1_o8g9hve
Interesting! If you don't mind me asking, how does the middleware actually work? There have been a few cases where I'd prefer 3.5 to use the Gemini-like reasoning, but I also need my tools, and no amount of prompt-engineering seems to work consistently.
1
0
2026-03-03T18:15:28
ayylmaonade
false
null
0
o8g9hve
false
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o8g9hve/
false
1
t1_o8g9hoo
Since qwen?
1
0
2026-03-03T18:15:26
StupidityCanFly
false
null
0
o8g9hoo
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g9hoo/
false
1
t1_o8g9grd
He qwont be forgotten.
1
0
2026-03-03T18:15:19
MoffKalast
false
null
0
o8g9grd
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g9grd/
false
1
t1_o8g9fwx
They accidentally stumbled into having the best consumer AI hardware with unified memory, but now have decided to lean into it. 
1
0
2026-03-03T18:15:13
BumbleSlob
false
null
0
o8g9fwx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g9fwx/
false
1
t1_o8g9efl
it's not even needed, fits in vram on 16 gig one.
1
0
2026-03-03T18:15:02
jslominski
false
null
0
o8g9efl
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8g9efl/
false
1