name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8gmmcl
I guess we have different definitions of 'substance'. I found it useful to know "for web frontend design the 0.8B actually did best" & how decent 9b was at coding. But yea, I did skip through some of the video. It was one of the few 'small model comparison' vids I could find, so something is better than nothing.
1
0
2026-03-03T19:16:30
tomByrer
false
null
0
o8gmmcl
false
/r/LocalLLaMA/comments/1rjrfu1/qwen35_small_models_compared_9b_vs_4b_vs_2b_vs_08b/o8gmmcl/
false
1
t1_o8gmmc2
Well, I tried the 3.5 0.8b on my laptop the other day, locally, because it's an ancient Lenovo. It ran the model surprisingly well; the issue was it would get into thinking loops because it's such a small model. I run it on Ollama on my phone for really simple things. No data. I just needed to be pretty explicit in the system prompt for it.
1
0
2026-03-03T19:16:30
ctanna5
false
null
0
o8gmmc2
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8gmmc2/
false
1
t1_o8gmlmn
Hello, I am new to this. I understood how to download models from Hugging Face and all that, but how do I use KoboldCpp as a ChatGPT replacement? Is it only meant for stories and stuff?
1
0
2026-03-03T19:16:25
Darth_Knight999
false
null
0
o8gmlmn
false
/r/LocalLLaMA/comments/1qwwint/vllm_vs_llamacpp_vs_ollama/o8gmlmn/
false
1
t1_o8gmlep
Still waiting for those fabled Apple AI servers for inference.
1
0
2026-03-03T19:16:23
YourVelourFog
false
null
0
o8gmlep
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gmlep/
false
1
t1_o8gml44
The Text2SQL result at 98% vs Haiku's 98.7% for 3x lower cost is the kind of data that changes real decisions. Two questions: how stable is the distilled model when the input distribution shifts slightly from training (e.g., schema naming conventions change), and did you test with adversarial or ambiguous SQL prompts? The HotpotQA gap is expected - open-ended world knowledge retrieval is genuinely hard to compress out.
1
0
2026-03-03T19:16:20
Joozio
false
null
0
o8gml44
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8gml44/
false
1
t1_o8gmjmx
[z.ai](http://z.ai) would be worse… they’re pivoting towards closed-source.
1
0
2026-03-03T19:16:09
TheRealMasonMac
false
null
0
o8gmjmx
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gmjmx/
false
1
t1_o8gmgq5
Try deleting the old gguf files and rebuilding the container
1
0
2026-03-03T19:15:46
InternationalNebula7
false
null
0
o8gmgq5
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8gmgq5/
false
1
t1_o8gme33
Gemini isn't even as good as Qwen in agentic tasks smh. Sure it's intelligent, but also unstable. That was a bad decision
1
0
2026-03-03T19:15:25
Dudmaster
false
null
0
o8gme33
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gme33/
false
1
t1_o8gmcpz
This GGUF is based on the model you attached. Check out the base model; they're the same.
1
0
2026-03-03T19:15:15
RickyRickC137
false
null
0
o8gmcpz
false
/r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8gmcpz/
false
1
t1_o8gm7lj
That is the interconnect problem I mentioned. Good that you mention NVL: with the 4-way NVLink, you can actually get good interconnect for 4x PCIe H200 while avoiding the SXM / NVSwitch cost. I didn't know about those. What are the specific software library issues you mentioned? As far as I know, most or all interconnect communication runs over NCCL, which supports PCIe, NVLink, and SXM NVSwitch. The Chinese models like DeepSeek, which are being optimized for the H800 with its 400GB/s interconnect, should run better on the Pro 6000 with its 128GB/s PCIe5 interconnect (the H200 NVL has 900GB/s), but you are right, we don't have models specifically optimized for PCIe5 throughput yet. I'm curious if you have actual performance comparisons for real tasks for the Pro 6000 / H200.
1
0
2026-03-03T19:14:34
0Kaito
false
null
0
o8gm7lj
false
/r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/o8gm7lj/
false
1
t1_o8gm3ac
All those sexy bytes!
1
0
2026-03-03T19:14:01
YourVelourFog
false
null
0
o8gm3ac
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gm3ac/
false
1
t1_o8gm02m
Likely this; watch nvidia-smi or nvtop or whatever GPU view you have to see if the device is even in use, going by VRAM fill and activity.
1
0
2026-03-03T19:13:37
ShengrenR
false
null
0
o8gm02m
false
/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/o8gm02m/
false
1
t1_o8glxc3
RIP Wan and future Qwen Images as well :(
1
0
2026-03-03T19:13:16
kabachuha
false
null
0
o8glxc3
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8glxc3/
false
1
t1_o8glupq
I will lyk soon
1
0
2026-03-03T19:12:55
antwon-tech
false
null
0
o8glupq
false
/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8glupq/
false
1
t1_o8glu5f
I don't know why, but this really made me laugh :')
1
0
2026-03-03T19:12:51
Psychological_Box406
false
null
0
o8glu5f
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8glu5f/
false
1
t1_o8glrev
So to solve the probabilistic-answer problem, you're basically proposing that rather than the LLM trying to reason through the problem and answer it directly and verbally, like a black box, it instead reasons out the constraints of the problem and expresses them in some constraint-satisfaction programming language, and the program is then run? I predict the problem with this approach is that it won't fully solve the probabilistic nature of the LLM, because the LLM will make mistakes in expressing the constraints.
1
0
2026-03-03T19:12:29
audioen
false
null
0
o8glrev
false
/r/LocalLLaMA/comments/1rjxuwo/stop_torturing_your_quantized_8b_models_why_we/o8glrev/
false
1
t1_o8glqq7
They're speaking to the 2% who already run pi-hole and know what 'self-hosted' means. The other 98% just want it to work faster and cheaper — that's the pitch.
1
0
2026-03-03T19:12:24
theagentledger
false
null
0
o8glqq7
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8glqq7/
false
1
t1_o8glpk8
It's ok OP -- using LLMs to write posts isn't a problem if you're not just pasting away blindly. When people freak out over AI being used it just makes me roll my eyes -- using AI is the entire point.
1
0
2026-03-03T19:12:15
UnifiedFlow
false
null
0
o8glpk8
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8glpk8/
false
1
t1_o8glotd
I am interested in a tool that can create efficient low poly models. All AI tools are very wasteful with the poly count.
1
0
2026-03-03T19:12:09
Steuern_Runter
false
null
0
o8glotd
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8glotd/
false
1
t1_o8glj2u
Well, you could get an M5 Max with 128GB of RAM for 5.1k. The memory bandwidth isn't as fast as the M3 Ultra's, but it's not far off: the M5 Max is 614GB/s while the M3 Ultra is 819GB/s. Of course you'd get more CPU and GPU cores with the Ultra too.
1
0
2026-03-03T19:11:23
YourVelourFog
false
null
0
o8glj2u
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8glj2u/
false
1
t1_o8gle0p
Likely, yes. A top exec getting fired is highly correlated with a shift in business strategy.
1
0
2026-03-03T19:10:43
TheRealMasonMac
false
null
0
o8gle0p
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gle0p/
false
1
t1_o8glc1m
What the fuck, is the CEO stupid?
1
0
2026-03-03T19:10:28
larrytheevilbunnie
false
null
0
o8glc1m
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8glc1m/
false
1
t1_o8glawo
I still want Nvidia support, but I'm thinking of the people that have Intel or AMD GPUs as well.
1
0
2026-03-03T19:10:18
Ill-Oil-2027
false
null
0
o8glawo
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8glawo/
false
1
t1_o8gla7x
Totally overthinking it. Average people don't care about privacy; they'll say they care when asked, but they won't hesitate to accept those tracking cookies in their browsers. They want things simple: if pressing accept reduces the number of interactions needed to get past the inconvenience, they'll gladly give up their browser data. Running a local LLM isn't simple and won't ever be mainstream.
1
0
2026-03-03T19:10:13
Corrupt_file32
false
null
0
o8gla7x
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gla7x/
false
1
t1_o8gl70e
the hallucination complaint is valid but it's also a bit of a category error. a 0.8b model isn't for factual recall -- it's for tasks where you feed it context and it processes it. the practical sweet spot is things like:

- classifying or routing incoming data in a pipeline
- structured extraction from a document you hand it
- rewriting or summarizing short snippets
- intent detection in a chat interface

for any of those, the qwen 3.5 jump is genuinely impressive. the 2.5 4b struggled with instruction following when the format got complicated. 3.5 4b handles multi-step json extraction that used to require the 9b. for on-device assistant stuff (phone, local edge device) where you actually want some "understanding" and the facts all live in context, it's closer to real than it's ever been. the right framing isn't "is it smart?" -- it's "can it execute this specific narrow task reliably?"
1
0
2026-03-03T19:09:48
justserg
false
null
0
o8gl70e
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8gl70e/
false
1
t1_o8gl4o3
try this https://huggingface.co/Jackrong/Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled
1
0
2026-03-03T19:09:29
promobest247
false
null
0
o8gl4o3
false
/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8gl4o3/
false
1
t1_o8gl3a1
If you got 2 more sticks of RAM you could run 122B at Q4 with okay performance from the looks of it.
1
0
2026-03-03T19:09:18
FatheredPuma81
false
null
0
o8gl3a1
false
/r/LocalLLaMA/comments/1rjxt97/b580_qwen35_benchamarks/o8gl3a1/
false
1
t1_o8gl2y6
Appreciate it, he really did go out on his own terms.
1
0
2026-03-03T19:09:16
theagentledger
false
null
0
o8gl2y6
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gl2y6/
false
1
t1_o8gl1pv
Such a good guide, thank you very much. I was able to follow it (and the recent comments) and get the expected results for 1 GPU, but my 2nd Mi50 stopped being recognized after the reboot following the reset bug fix. This is on the latest PVE 9.1. Then after a reboot, the VM wouldn't start, complaining about a broken pipe, and the host became unreachable. Has anyone got this working for 2 or more Mi50s? Terrible shame; I thought I was on to a winner. I have 3 of these cards.
1
0
2026-03-03T19:09:06
Jungle_Llama
false
null
0
o8gl1pv
false
/r/LocalLLaMA/comments/1ml1aef/amd_mi50_32gbvega20_gpu_passthrough_guide_for/o8gl1pv/
false
1
t1_o8gl15p
Wait, the first one is just standard goodbyes, right? The second one sucks though 😭
1
0
2026-03-03T19:09:02
larrytheevilbunnie
false
null
0
o8gl15p
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gl15p/
false
1
t1_o8gl0w7
I think my point was exactly that the average consumer cares about ease and quality. The common complaints I see that local LLMs solve are related to outages, access (to data and the service), indexing/searchability, and in some cases speed. I essentially say this in my OP… I'll admit I didn't want to type everything by hand, so I gave my main points to an LLM, had it write the post, and then did some minor editing and proofreading. I don't view this as a problem, although a lot of people really complain about it.
1
0
2026-03-03T19:09:00
owp4dd1w5a0a
false
null
0
o8gl0w7
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gl0w7/
false
1
t1_o8gl0o9
GLM-4.7-flash is usually my pick
1
0
2026-03-03T19:08:58
Di_Vante
false
null
0
o8gl0o9
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8gl0o9/
false
1
t1_o8gkzs9
Why do you not want CUDA if you have an Nvidia GPU?
1
0
2026-03-03T19:08:51
Hefty_Development813
false
null
0
o8gkzs9
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gkzs9/
false
1
t1_o8gkzot
[removed]
1
0
2026-03-03T19:08:50
[deleted]
true
null
0
o8gkzot
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8gkzot/
false
1
t1_o8gkxz1
Yea, good point. I wonder if maybe Huawei will hire him, in that case. Apple already did that big deal with Google, and it seems like they probably already have some people who are similarly talented at making models that are super strong for their size (given how strong the Gemma models were, especially for when they came out), so who knows, maybe they'd still make a big offer for him, but maybe not if they feel they already have the role filled in a major way. But Huawei, on the other hand: it's Chinese, and I'm not sure they have anyone as good as him at phone-sized models, so maybe they'd be the top contender to try to get him. Doesn't do much good for an American like me, but it could be an interesting future for him if it goes like that. Personally I'd selfishly hope he goes to Mistral or Google or Nvidia or something, but I guess we'll see.
1
0
2026-03-03T19:08:37
DeepOrangeSky
false
null
0
o8gkxz1
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gkxz1/
false
1
t1_o8gkte9
Thanks!
1
0
2026-03-03T19:08:02
jonglaaa
false
null
0
o8gkte9
false
/r/LocalLLaMA/comments/1rjpifs/why_does_mixed_kv_cache_quantization_result_in/o8gkte9/
false
1
t1_o8gkryg
Programs with "AI enhancement" already exist for photography. I'll have to find it, but the program unfortunately requires an Nvidia GPU.
1
0
2026-03-03T19:07:51
Ill-Oil-2027
false
null
0
o8gkryg
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gkryg/
false
1
t1_o8gkqfw
Interesting. In practice, both DeepSeek V3.2 and Qwen3.5-397B-A17B handle long context well, but multi-hop reasoning across very long chains is still where weaknesses show up. Full attention seems more stable at scale, even if it’s more expensive.
1
0
2026-03-03T19:07:39
qubridInc
false
null
0
o8gkqfw
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o8gkqfw/
false
1
t1_o8gkiwb
Nah, I suspect they are also getting taken over by their military and some are pushing back.
1
0
2026-03-03T19:06:39
mckirkus
false
null
0
o8gkiwb
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gkiwb/
false
1
t1_o8gkh71
It’s implied here https://x.com/yuchenj_uw/status/2028872969217515996?s=46 https://x.com/cherry_cc12/status/2028869478105379248?s=46
1
0
2026-03-03T19:06:26
Howdareme9
false
null
0
o8gkh71
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gkh71/
false
1
t1_o8gkdme
You can very much take it to the bank if it's government policy
1
0
2026-03-03T19:05:58
FullstackSensei
false
null
0
o8gkdme
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gkdme/
false
1
t1_o8gkd39
Cool results!
1
0
2026-03-03T19:05:54
party-horse
false
null
0
o8gkd39
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8gkd39/
false
1
t1_o8gkatm
Shit, is there a source for this? Would be terrible if true.
1
0
2026-03-03T19:05:35
larrytheevilbunnie
false
null
0
o8gkatm
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gkatm/
false
1
t1_o8gk94x
I do not. We all knew this was coming someday, just feels a bit soon
1
0
2026-03-03T19:05:22
ForsookComparison
false
null
0
o8gk94x
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gk94x/
false
1
t1_o8gk88f
You don't even need to build it, there are prebuilt binaries attached to every release.
1
0
2026-03-03T19:05:15
vegetaaaaaaa
false
null
0
o8gk88f
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8gk88f/
false
1
t1_o8gk62l
Not yet, I’d like to make the application more stable before making it public, but the repo should be created soon. I’ll make an announcement when the repo is available.
1
0
2026-03-03T19:04:58
Lightnig125
false
null
0
o8gk62l
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gk62l/
false
1
t1_o8gk5xa
Thanks for making this, and making it so simple. This is so so cool
1
0
2026-03-03T19:04:57
itsfaitdotcom
false
null
0
o8gk5xa
false
/r/LocalLLaMA/comments/1n1k2zg/4_months_of_droidrun_how_we_started_the_mobile/o8gk5xa/
false
1
t1_o8gk5om
Folks, please consider using xcancel.com for these links.
1
0
2026-03-03T19:04:55
NoahFect
false
null
0
o8gk5om
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gk5om/
false
1
t1_o8gk5eq
Probably starting their own lab
1
0
2026-03-03T19:04:53
Current-Ticket4214
false
null
0
o8gk5eq
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gk5eq/
false
1
t1_o8gk2ms
"But I am bothered with a lot of full prompt re-processing in llama.cpp which is time-consuming. I think this is due to the SWA, the sliding window attention trick that llama.cpp supports to get the kv-cache down to 4GB." yeah I looked into it a bit and it seems to be due to SWA and the MAMBA layers ? nothing we can do there untill the brilliant people at llama cpp come up with a solution but as a work around maybe set --batch-size 1024 --ubatch-size 1024 or higher depending on your vram to speed up prompt processing ? I think the move is to go with llama cpp, set those flags, and wait untill vllm/llama cpp offer better support ?
1
0
2026-03-03T19:04:31
Certain-Cod-1404
false
null
0
o8gk2ms
false
/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o8gk2ms/
false
1
t1_o8gk2ex
It's just a fastapi python script that listens to chat-completion requests. In a first step, it strips the tools but adds the definitions as a system prompt so the LLM is still aware of them. After completing the reasoning, it cancels the remainder of the request (to save time) and pipes the reasoning output into a 2nd step where the tools are re-attached. It's a pretty ugly script. The better way to do it would be to create an Open WebUI pipeline. Then you can also choose it from a dropdown instead of having it always on.
1
0
2026-03-03T19:04:29
Tartarus116
false
null
0
o8gk2ex
false
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o8gk2ex/
false
1
t1_o8gk2hc
Correct. They aren't going to Grok. I'd imagine maybe MiniMax or DeepSeek.
1
0
2026-03-03T19:04:29
illicITparameters
false
null
0
o8gk2hc
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gk2hc/
false
1
t1_o8gk0xr
Do you know what the word "method" means, and when to apply it?
1
0
2026-03-03T19:04:17
crantob
false
null
0
o8gk0xr
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8gk0xr/
false
1
t1_o8gjvz8
Yes! Absolutely. Especially if it's FOSS. I'd even contribute if you went the open source route.
1
0
2026-03-03T19:03:39
ayylmaonade
false
null
0
o8gjvz8
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gjvz8/
false
1
t1_o8gjvkl
I think the Chinese and US AI researchers are sending each other signals. They are really the only people on earth that can slow down AI progress with something like a labor strike. All of the Presidents, CEOs, etc, can't force these people to work on this potentially existential problem.
1
0
2026-03-03T19:03:35
mckirkus
false
null
0
o8gjvkl
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gjvkl/
false
1
t1_o8gjtl0
Na, pretty much seeing that they got fired... and replaced by some Gemini people to chase DAU.
1
0
2026-03-03T19:03:20
KeikakuAccelerator
false
null
0
o8gjtl0
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gjtl0/
false
1
t1_o8gjsi4
Man, I'm really glad they were able to release Qwen 3.5, because it's looking like there might not be a Qwen 4. Qwen 3.5 is a beast though... I've been running the 27b model on my good PC and the 4b model on my bad PC. The 4b model is honestly the biggest surprise for me because it's shockingly good for 4b! I guess this is good for US AI firms, but the open source community is worse off for sure. I'm as American as they come, but I was still rooting for Qwen to push the limits of open source AI...
1
0
2026-03-03T19:03:11
Cereal_Grapeist
false
null
0
o8gjsi4
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gjsi4/
false
1
t1_o8gjo0o
Windows 11. How many tokens per second were you getting on the laptop?
1
0
2026-03-03T19:02:37
cyberkiller6
false
null
0
o8gjo0o
false
/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8gjo0o/
false
1
t1_o8gjkb6
Why would the Chinese ruling party care if a performant model is released? Wouldn't that be a good thing?
1
0
2026-03-03T19:02:06
SK5454
false
null
0
o8gjkb6
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gjkb6/
false
1
t1_o8gjju4
Oh ah, I immediately linked it to the German meaning "Dümmster Anzunehmender User" (dumbest user imaginable). That wouldn't even be the worst fit in this context (user = strategic decision). You never cease to learn ;)
1
0
2026-03-03T19:02:03
professaDE
false
null
0
o8gjju4
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gjju4/
false
1
t1_o8gjia4
FunctionGemma is not meant for zero-shot, so of course the 4B wins. On the tests where both models were fine-tuned for the app the 4B wins by only 1.2%. For tool calling on mobile I'd still rather tune and use a 270M instead of a 4B. That's a lot of resource savings for a barely noticeable dip in accuracy.
1
0
2026-03-03T19:01:51
DinoAmino
false
null
0
o8gjia4
false
/r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/o8gjia4/
false
1
t1_o8gjavo
Wait, what happened at Meta?
1
0
2026-03-03T19:00:52
Dyoakom
false
null
0
o8gjavo
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gjavo/
false
1
t1_o8gj7ra
Most of Europe uses YYYY-MM-DD for anything official or professional. Some countries still use the older formats in more informal contexts like handwriting. But then it is formatted differently, like DD.MM.YYYY or DD/MM-YYYY. That way you naturally read the day ordinally and there is never any confusion.
1
0
2026-03-03T19:00:27
ArtyfacialIntelagent
false
null
0
o8gj7ra
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gj7ra/
false
1
t1_o8gj6zg
You think Chinese companies are stupid and doing open source as charity? Once they are the only game in town, people will line up to buy Chinese "GPUs" because Western ones are out of reach for normal people.
1
0
2026-03-03T19:00:21
Single_Ring4886
false
null
0
o8gj6zg
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gj6zg/
false
1
t1_o8gj6lw
Qwing or Qing
1
0
2026-03-03T19:00:18
bebackground471
false
null
0
o8gj6lw
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gj6lw/
false
1
t1_o8gj4pj
Huh, you're right. I've been trying to fit as big a KV cache as I could while not realizing that I could just use 100k instead of 200k. It's still a ton.
1
0
2026-03-03T19:00:03
Hammer-Evader-5624
false
null
0
o8gj4pj
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8gj4pj/
false
1
t1_o8gj401
You're overthinking this. The average consumer cares far more about ease and quality of output than privacy.
1
0
2026-03-03T18:59:58
Creepy-Bell-4527
false
null
0
o8gj401
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gj401/
false
1
t1_o8gj0lj
You could get an Unsloth quantized model for either of those. For the 9b, try the Qwen3.5-9B-Q4_K_M or Qwen3.5-9B-Q5_K_M GGUFs. The second might barely fit if you have a small context window. The same goes for the 35B-A3B, except you might opt for a less-quantized version, since I think MoEs might handle swapping better in llama.cpp. But I'm not sure, lmk if this is wrong. I was able to get a Qwen3.5-35B-A3B version running on a laptop (i7, RTX 4070 8gb VRAM, 16gb DDR5). I don't remember which quant right now. Are you on Linux?
1
0
2026-03-03T18:59:30
antwon-tech
false
null
0
o8gj0lj
false
/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8gj0lj/
false
1
t1_o8giqlv
Alibaba isn't a jobs program and I don't think any level of napkin math can start to justify the spend on Qwen with their current plans. Good will with the community is worth a lot but you can't take it to the bank and justify a hyperscaler's worth of GPU compute
1
0
2026-03-03T18:58:11
ForsookComparison
false
null
0
o8giqlv
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8giqlv/
false
1
t1_o8gij2q
I tried a few generations with the demo and I'm pretty impressed. I was expecting it to sound very different from the audio clip, but it's honestly pretty damn good. If it's super fast like the original Kokoro, I'm definitely gonna spin up something local with this.
1
0
2026-03-03T18:57:12
FinBenton
false
null
0
o8gij2q
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8gij2q/
false
1
t1_o8giflh
This makes me predict the core team is going off on their own to form their own thing. Anthropic-like, I guess.
1
0
2026-03-03T18:56:44
sleepingsysadmin
false
null
0
o8giflh
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8giflh/
false
1
t1_o8gieqo
Oh look, it's me.
1
0
2026-03-03T18:56:37
Ironfields
false
null
0
o8gieqo
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8gieqo/
false
1
t1_o8gicid
This is pretty funny, I do think that accounting is probably never going to get automated by AI.
1
0
2026-03-03T18:56:20
rashaniquah
false
null
0
o8gicid
false
/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8gicid/
false
1
t1_o8gibqz
Privacy is most relevant to sectors like healthcare and finance. But outside of that, yeah, the average person or even business does not seem to care about privacy. Just the experience when using the service.
1
0
2026-03-03T18:56:14
Spectacle_121
false
null
0
o8gibqz
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gibqz/
false
1
t1_o8gi69d
The plot thickens - maybe the models were too good and the party is not pleased by the open-source releases.
1
0
2026-03-03T18:55:32
wektor420
false
null
0
o8gi69d
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gi69d/
false
1
t1_o8gi649
It's a clever idea. Can you show a sample of a chat?
1
0
2026-03-03T18:55:31
crantob
false
null
0
o8gi649
false
/r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/o8gi649/
false
1
t1_o8gi4n3
The main question is: what will the future of open source look like for Qwen's AI?
1
0
2026-03-03T18:55:19
AppealThink1733
false
null
0
o8gi4n3
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gi4n3/
false
1
t1_o8gi0t6
This is terrible news for local AI
1
0
2026-03-03T18:54:49
vertigo235
false
null
0
o8gi0t6
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gi0t6/
false
1
t1_o8ghwxn
That's the key point for investigators.
1
0
2026-03-03T18:54:19
crantob
false
null
0
o8ghwxn
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8ghwxn/
false
1
t1_o8ghwuc
> Well, I guess we’ll never know if your numbers are even valid. Too bad.

LOL. But we do know. I guess you haven't been keeping up with the thread, since my numbers are consistent with the official numbers put out by the people that develop the software. I posted a link. More than once. I guess you missed it.
1
0
2026-03-03T18:54:18
fallingdowndizzyvr
false
null
0
o8ghwuc
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ghwuc/
false
1
t1_o8ghsoz
One great thing about local AI is that your KV cache is yours alone.
1
0
2026-03-03T18:53:46
HauntingAd8395
false
null
0
o8ghsoz
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8ghsoz/
false
1
t1_o8ghsbm
In my life experience, looking back over decades in IT, that is usually the wake of a big (bad) political move for the worse.
1
0
2026-03-03T18:53:43
crantob
false
null
0
o8ghsbm
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8ghsbm/
false
1
t1_o8ghloy
100%. Those of us programmers who follow and use "plain text accounting" tools for our own financials totally agree.
1
0
2026-03-03T18:52:53
jonahbenton
false
null
0
o8ghloy
false
/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8ghloy/
false
1
t1_o8ghl9w
Apparently the lead got kicked out by Alibaba CEO
1
0
2026-03-03T18:52:49
Howdareme9
false
null
0
o8ghl9w
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8ghl9w/
false
1
t1_o8ghfu9
Underrated comment!
1
0
2026-03-03T18:52:07
wanderer_4004
false
null
0
o8ghfu9
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ghfu9/
false
1
t1_o8ghf4v
3 watts. At the wall it's higher, since that doesn't account for things like USB drives plugged in, which use up a surprising amount of power in comparison. About 1.5 watts each.
1
0
2026-03-03T18:52:02
fallingdowndizzyvr
false
null
0
o8ghf4v
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ghf4v/
false
1
t1_o8ghdes
Cool stuff, looks like it's based on Qwen3. Any plans to explore 3.5, especially the 2b or 4b?
1
0
2026-03-03T18:51:48
antwon-tech
false
null
0
o8ghdes
false
/r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/o8ghdes/
false
1
t1_o8ghbvh
Yes, I'd be interested in a local open-source 3D model creator!
1
0
2026-03-03T18:51:37
Forsaken-Paramedic-4
false
null
0
o8ghbvh
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ghbvh/
false
1
t1_o8gh9li
What's the cognitive/knowledge loss?
1
0
2026-03-03T18:51:19
hackiv
false
null
0
o8gh9li
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gh9li/
false
1
t1_o8gh5ze
:(
1
0
2026-03-03T18:50:51
DOAMOD
false
null
0
o8gh5ze
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gh5ze/
false
1
t1_o8gh3ph
Yep, I agree. Privacy, I think normies have given up on a long time ago. Just look at the people living with these stupid ads in their web browsers. I'm horrified anytime I see somebody who doesn't use ad filtering and they just put up with it. Mind blowing.
1
0
2026-03-03T18:50:34
Ok-Ad-8976
false
null
0
o8gh3ph
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gh3ph/
false
1
t1_o8gh303
I saw yesterday that Alibaba was merging absolutely all Qwen-related projects under the "Qwen" brand itself. Could this be why?
1
0
2026-03-03T18:50:28
Samy_Horny
false
null
0
o8gh303
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gh303/
false
1
t1_o8ggv77
Are you concerned about consumer hardware not being able to run 3D gen models?
1
0
2026-03-03T18:49:28
antwon-tech
false
null
0
o8ggv77
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ggv77/
false
1
t1_o8gguh7
> and they replaced current tech lead with someone from Gemini So Qwen4 being free and open weight is as likely as Gemma4
1
0
2026-03-03T18:49:22
ForsookComparison
false
null
0
o8gguh7
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gguh7/
false
1
t1_o8ggrp0
Would be fun if the people quitting Qwen join the Deepmind team at Google.
1
0
2026-03-03T18:49:00
roselan
false
null
0
o8ggrp0
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ggrp0/
false
1
t1_o8ggjqa
Using a percentage of similarity. I also have my agent send me a report of what it deems similar so I can give it feedback. There's def some errors to start, but less and less as we go.
1
0
2026-03-03T18:47:58
teeheEEee27
false
null
0
o8ggjqa
false
/r/LocalLLaMA/comments/1rjtwiy/architecture_for_selfcorrecting_ai_agents_mistake/o8ggjqa/
false
1
t1_o8ggepw
Amazing work by him. Not sad; he's going to land somewhere bigger is my guess. A year from now he's going to be dropping even more epic stuff.
1
0
2026-03-03T18:47:19
sleepingsysadmin
false
null
0
o8ggepw
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ggepw/
false
1
t1_o8ggdex
Yes, I completely agree with you 😄
1
0
2026-03-03T18:47:09
insidethemask
false
null
0
o8ggdex
false
/r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/o8ggdex/
false
1
t1_o8gg9r4
This is awesome. I will post my results on my Orange Pi. Have you tried out ik_llama.cpp?
1
0
2026-03-03T18:46:41
antwon-tech
false
null
0
o8gg9r4
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o8gg9r4/
false
1
t1_o8gg97v
I mean, it calls us "game developers", which implies the same message was crossposted to a bunch of places without even bothering to customize it.
1
0
2026-03-03T18:46:36
HopePupal
false
null
0
o8gg97v
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gg97v/
false
1