name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8hcftw
But Qwen is like a working group, not an employer. Maybe more like some major reshuffle at Tongyi Lab?
1
0
2026-03-03T21:19:16
bobby-chan
false
null
0
o8hcftw
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hcftw/
false
1
t1_o8hcesb
What is the difference from existing open-source text-to-3D models?
1
0
2026-03-03T21:19:08
pomelorosado
false
null
0
o8hcesb
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hcesb/
false
1
t1_o8hc79k
Something is missing from the story. You don’t fire infrastructure because user acquisition is bad.
1
0
2026-03-03T21:18:09
toocoolforgg
false
null
0
o8hc79k
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hc79k/
false
1
t1_o8hc3jo
I'm still trying to find a use case for it and I can't. I couldn't figure out why people (bots) talk about it. I think everyone who mentions it as life-changing is fake, and I don't believe they even tried openclaw. I inspected the source code and tried it a couple of times, and I couldn't even understand the basic benefits. I am so happy people are finally talking about it.
1
0
2026-03-03T21:17:40
lackoproof
false
null
0
o8hc3jo
false
/r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o8hc3jo/
false
1
t1_o8hc1xw
The gateway approach is solid and the split between can-call and should-call that someone mentioned is key. One thing I would add from experience is that even with good schema validation and allowlists, the sneaky attacks come through content that looks totally normal until the agent processes it in context. Adding a runtime layer that watches what the agent actually does after processing each input caught way more for us than any input filtering alone. Moltwire does this specifically for tool-calling agent setups if you want to compare approaches.
1
0
2026-03-03T21:17:27
thecanonicalmg
false
null
0
o8hc1xw
false
/r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o8hc1xw/
false
1
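A minimal sketch of the runtime-monitoring idea the comment above describes: check what the agent is about to *do* after it processes each input, independently of any input filtering. The `ToolCall` shape, tool names, and checks are hypothetical illustrations, not Moltwire's or any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

ALLOWED = {"search", "read_file"}        # "can-call": safe by default
SENSITIVE = {"send_email", "run_shell"}  # "should-call": needs approval

def guard(call: ToolCall, user_approved: bool = False) -> bool:
    """Validate the agent's behavior after input processing, not the input itself."""
    if call.name not in ALLOWED | SENSITIVE:
        return False                     # unknown tool: deny by default
    if call.name in SENSITIVE and not user_approved:
        return False                     # escalate to a human instead of executing
    # crude exfiltration heuristic: URLs in the arguments of non-search tools
    if call.name != "search" and any("http" in str(v) for v in call.args.values()):
        return False
    return True

# e.g. a tool call induced by injected content gets blocked here:
print(guard(ToolCall("send_email", {"to": "attacker@example.com"})))  # False
```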
t1_o8hbyp1
[removed]
1
0
2026-03-03T21:17:02
[deleted]
true
null
0
o8hbyp1
false
/r/LocalLLaMA/comments/1b6oyog/finetuning_llm_to_perform_smart_function_calling/o8hbyp1/
false
1
t1_o8hbt6l
nor do you in the United States of Israel
1
0
2026-03-03T21:16:19
Crafty-Wonder-7509
false
null
0
o8hbt6l
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hbt6l/
false
1
t1_o8hbnut
Yes, I’m waiting for the application to be more stable, and then I’ll create an open-source repository.
1
0
2026-03-03T21:15:37
Lightnig125
false
null
0
o8hbnut
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hbnut/
false
1
t1_o8hbk1b
https://preview.redd.it/…63a237e84d3b1e
1
0
2026-03-03T21:15:07
jacek2023
false
null
0
o8hbk1b
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hbk1b/
false
1
t1_o8hbi2p
Meta wouldn't have shut down open weight Llama if this was nearly enough savings to justify training models this size from scratch
1
0
2026-03-03T21:14:52
ForsookComparison
false
null
0
o8hbi2p
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hbi2p/
false
1
t1_o8hb844
This is really cool work! Training LLMs for underrepresented languages like Luganda is so important for making AI more accessible globally. The 42.83% score on AFRIXNLI is solid progress, especially with a 110M parameter model. Are you planning to scale up to larger models, or focusing on optimizing the smaller ones for resource-constrained environments? Would love to see how this performs on other Luganda NLP tasks!
1
0
2026-03-03T21:13:34
Vey_TheClaw
false
null
0
o8hb844
false
/r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/o8hb844/
false
1
t1_o8hb4j6
My point was that in China when the government comes knocking, I don't think you get to say no.
1
0
2026-03-03T21:13:06
Top-Tangerine-5172
false
null
0
o8hb4j6
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hb4j6/
false
1
t1_o8hb4b8
Heretic is the only thing I've ever seen actually work in the context of uncensored models that haven't been finetuned on additional data. All the ones labeled "abliterated" are useless I find.
1
0
2026-03-03T21:13:04
ZootAllures9111
false
null
0
o8hb4b8
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hb4b8/
false
1
t1_o8hb3pi
Whatever the government tells them they are going to do.
1
0
2026-03-03T21:12:59
DataGOGO
false
null
0
o8hb3pi
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hb3pi/
false
1
t1_o8hb3go
This is a Chinese company. I hate to break it to you, but the CCP makes the US government look like it is obsessed with civil rights.
1
0
2026-03-03T21:12:57
nomorebuttsplz
false
null
0
o8hb3go
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hb3go/
false
1
t1_o8hb2z1
Do you care about your service working if your customers have a weak or no network connection? Huge swaths of the US have terrible coverage. If your service can't fall back to on device capabilities, they're going to have a bad time.
1
0
2026-03-03T21:12:53
rosstafarien
false
null
0
o8hb2z1
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8hb2z1/
false
1
t1_o8hb0kp
you know they all have no choice but to do what the Chinese government says, right?
1
0
2026-03-03T21:12:35
DataGOGO
false
null
0
o8hb0kp
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hb0kp/
false
1
t1_o8hazou
Yes, I got it running after a lot of work. Check their GitHub issues, people on there talked about how to get it running on modest hardware
1
0
2026-03-03T21:12:27
Numerous-Aerie-5265
false
null
0
o8hazou
false
/r/LocalLLaMA/comments/1qeupi8/personaplex_voice_and_role_control_for_full/o8hazou/
false
1
t1_o8hayxo
I have no idea what you're talking about tbh. The "party" has been going on for years, and Qwen has always been very well regarded during that time. I expect this decision probably *was* made a month or months ago, and he has just been finishing up the project.
1
0
2026-03-03T21:12:21
-dysangel-
false
null
0
o8hayxo
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hayxo/
false
1
t1_o8hatxw
Yeah, it's not great. However, I am chatting with it right now and the ik_llama UI is telling me ~22-28 t/s for prompt processing. I wonder if that's due to some caching, since my tests were just cold single-shotting the LLMs
1
0
2026-03-03T21:11:42
antwon-tech
false
null
0
o8hatxw
false
/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8hatxw/
false
1
t1_o8haqty
USA wasn't the first
1
0
2026-03-03T21:11:18
DataGOGO
false
null
0
o8haqty
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8haqty/
false
1
t1_o8hapyr
Thanks for your feedback! I didn't realize there would be so many 3D printing enthusiasts who might be interested.
1
0
2026-03-03T21:11:11
Lightnig125
false
null
0
o8hapyr
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hapyr/
false
1
t1_o8hapfu
In China? LOL they literally would have no choice, they would literally be executed.
1
0
2026-03-03T21:11:07
DataGOGO
false
null
0
o8hapfu
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hapfu/
false
1
t1_o8han3q
Can you tell me about the speed of 27B vs 35 MoE? I’m considering those two for CLI agentic work (claude code, codex)
1
0
2026-03-03T21:10:49
lol-its-funny
false
null
0
o8han3q
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8han3q/
false
1
t1_o8habgc
Thanks! Yeah, the problem is that prices on ALL consumer hardware *(RAM, SSDs, etc.)* have skyrocketed due to the AI/Datacenter race. If you want to take a DIY approach like I did, **it's going to be very expensive nowadays**. Here's the computer I got: [https://www.microcenter.com/product/695522/dell-alienware-18-area-51-aa18250-18-gaming-laptop-computer-liquid-teal](https://www.microcenter.com/product/695522/dell-alienware-18-area-51-aa18250-18-gaming-laptop-computer-liquid-teal) Here's the RAM *(which I was able to purchase for 50% of the current cost overseas)*: [https://www.bestbuy.com/product/crucial-128gb-kit-2x64gb-ddr5-5600mhz-c46-sodimm-laptop-memory-black/JX8PSKCPWR](https://www.bestbuy.com/product/crucial-128gb-kit-2x64gb-ddr5-5600mhz-c46-sodimm-laptop-memory-black/JX8PSKCPWR) Unless you ultimately need absolute privacy, API may be the way to go *(until prices decrease in 2028)*: [https://openrouter.ai/models?q=claude](https://openrouter.ai/models?q=claude) I was able to get started for $3800 but I don't think that's feasible anymore if you're trying to do what I'm doing. Some ppl here go for a more modest/small setup (~4-16GB VRAM) but those setups couldn't hold a candle to what you could simply do with Claude over API.
1
0
2026-03-03T21:09:16
misterflyer
false
null
0
o8habgc
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8habgc/
false
1
t1_o8ha96t
Yeah if that worked it would be nice, but I've not trusted Google to do the right thing for something like 16 years now
1
0
2026-03-03T21:08:58
-dysangel-
false
null
0
o8ha96t
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ha96t/
false
1
t1_o8ha5ng
What's astounding to me is that they actually seem to have laid these people off. Like, ok, corporate wants to refocus away from OS. Fine. I get it. But... you have these insanely talented devs in the most in-demand tech in the world and rather than try and reassign or retain them you... fire them? What?
1
0
2026-03-03T21:08:30
Drinniol
false
null
0
o8ha5ng
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ha5ng/
false
1
t1_o8ha59x
I thought about those, but I went 12 x x8 route. Now that I know about PCIe switches for just a couple hundred a piece I'm not sure that this was the best idea.
1
0
2026-03-03T21:08:27
Prudent-Ad4509
false
null
0
o8ha59x
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8ha59x/
false
1
t1_o8ha2ib
Yes, that’s possible. In fact, my application would also use open-source models, but with this tool, there would additionally be built-in features to directly edit and rework models.
1
0
2026-03-03T21:08:06
Lightnig125
false
null
0
o8ha2ib
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ha2ib/
false
1
t1_o8ha0zn
StepFun released two base models very recently: https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base and https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtrain. Haven't tried them, since there are no GGUFs just yet, but I'm hoping the first one's good. The last good base we've had was Mistral Nemo 12B, I think.
1
0
2026-03-03T21:07:54
aeqri
false
null
0
o8ha0zn
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8ha0zn/
false
1
t1_o8ha0bn
Try to find a setup with an Nvidia card and 12GB of VRAM, like a 3060 or 5060, plus at least 16GB of RAM. With this kind of setup you could even run Qwen 3.5 35B-A3B (quantized) and be a happy person. Buying a laptop with 6GB (or less) of VRAM will be a serious bottleneck.
1
0
2026-03-03T21:07:48
jacek2023
false
null
0
o8ha0bn
false
/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/o8ha0bn/
false
1
t1_o8h9rwr
> This sub is weirdly hostile to AI

It's hostile to grifters and hustlers
1
0
2026-03-03T21:06:42
Velocita84
false
null
0
o8h9rwr
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8h9rwr/
false
1
t1_o8h9pze
Yep, my bad. I glitched out in a moment of anger at the thought of corporations and average people. So no critique, you make fairly valid points overall. But I still don't think running local AI models will be a thing. And looking at current trends, having decent hardware will be a luxury in a few years. It's very likely that the majority of computational processing will be done remotely; the average consumer will just end up having hardware with prediction algorithms, caching, and whatever is necessary to reduce latency for seamless inference.
1
0
2026-03-03T21:06:27
Corrupt_file32
false
null
0
o8h9pze
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8h9pze/
false
1
t1_o8h9nyx
bifurrymerge. I'll let myself out.
1
0
2026-03-03T21:06:11
Prudent-Ad4509
false
null
0
o8h9nyx
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8h9nyx/
false
1
t1_o8h9nqi
At first, I mentioned game developers because it seemed like something that could interest them, but I’ve realized that many people would like to use this tool for other purposes as well, such as 3D printing.
1
0
2026-03-03T21:06:09
Lightnig125
false
null
0
o8h9nqi
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h9nqi/
false
1
t1_o8h9l4m
Their worth is being a pain in the side of US companies. Imagine just how much money they can make when western AI stock drops based on their own insider info.
1
0
2026-03-03T21:05:48
PANIC_EXCEPTION
false
null
0
o8h9l4m
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8h9l4m/
false
1
t1_o8h9jur
I always see people who are definitely not rich with smartphones that cost way more than a 32GB Mac Mini, so I think it's a matter of priorities.
1
0
2026-03-03T21:05:38
-dysangel-
false
null
0
o8h9jur
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h9jur/
false
1
t1_o8h9f45
I have some concerns with your bullet points. Accuracy is a matter of test difficulty. Dataset contamination is a real problem, but it is also true that the multiple-choice tests being evaluated can be simpler in nature, or perhaps the answer can be reasoned from first principles. The 70% accuracy on a different test can be an entirely fair result if the test is harder, or if it has ambiguous language or an incorrect answer sheet, or a larger number of options that reduces the baseline 25% accuracy of a random guesser on a 4-choice test. You can't directly deduce that it is a genuine capability issue. The test difficulty and quality remain an unknown, uncontrolled factor. You can only deduce this if you prove that the tests are of comparable difficulty, e.g. by having models known to not be contaminated (if they exist) take both tests, which calibrates their relative difficulties so that they can be normalized. Typically, test contamination would be better investigated directly, for instance by examining the logit likelihoods to see how well the LLM predicts the exact wording of the *question* rather than the answer; you can also try to determine whether small variations in test wording reduce the test-taker's accuracy.
1
0
2026-03-03T21:05:01
audioen
false
null
0
o8h9f45
false
/r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/o8h9f45/
false
1
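A rough sketch of the direct contamination probe the comment above suggests: score how well a model predicts the exact wording of a benchmark question, and compare against a paraphrase. The model name and texts are placeholders; an unusually low loss on the original wording relative to paraphrases hints at memorization.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_nll(model, tok, text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # HF shifts labels internally
    return out.loss.item()

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

original = "Which of the following best explains ..."    # exact benchmark wording
paraphrase = "Which option most accurately explains ..."  # reworded variant
print(mean_nll(model, tok, original), mean_nll(model, tok, paraphrase))
```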
t1_o8h9cte
For this video, I used the Hunyuan3D 2 Mini model; you can see it in the video, in the options on the left. At the moment, the app supports three models: StableFast3D, Hunyuan3D 2.1, and Hunyuan3D 2 Mini. There's still some work to do to add support for other open-source models. However, there will be an option to use a model manager directly within the app or to simply paste the Hugging Face link for the models.
1
0
2026-03-03T21:04:43
Lightnig125
false
null
0
o8h9cte
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h9cte/
false
1
t1_o8h9awe
I, too, am very interested. I hoped MLX could use the 3-bit model with CUDA, but it says the matmul is not implemented yet.
1
0
2026-03-03T21:04:28
Creative_Knee6618
false
null
0
o8h9awe
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8h9awe/
false
1
t1_o8h95pd
No, that would be expensive. They trained text-to-LoRA hypernetworks only for a few small and old models that are just going to suck at creative writing and will lose the plot. The easiest way to get good creative-writing output is to use a new model that evals well on the Creative Writing V3 benchmark. And if you have too much money, you can train a LoRA for that model. Sakana's solution requires the big upfront cost of training a hypernetwork for each model you want supported; it's not practical if you just want good creative-writing output, or even if you want a good model fine-tuned for creative writing.
1
0
2026-03-03T21:03:47
FullOf_Bad_Ideas
false
null
0
o8h95pd
false
/r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o8h95pd/
false
1
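For the "train a LoRA" route mentioned above, a minimal sketch with the `peft` library; the base model and hyperparameters are illustrative, and the creative-writing dataset and training loop are omitted.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")  # example base model
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the full model
# ...then fine-tune on a creative-writing dataset with a normal training loop
```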
t1_o8h95h4
I'll have to check that out, never tried building from source. Will do a little research on that and figure out how to build it for AMD. TYVM for the insights!
1
0
2026-03-03T21:03:45
Di_Vante
false
null
0
o8h95h4
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8h95h4/
false
1
t1_o8h94bj
Definitely a huge difference in quality between 35b and 122b; can't really explain it other than to say test it out yourself, but 35b definitely has small-model vibes that make it seem dumber than larger models
1
0
2026-03-03T21:03:36
lolwutdo
false
null
0
o8h94bj
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8h94bj/
false
1
t1_o8h92vk
the LLM thirst is real 😉
1
0
2026-03-03T21:03:25
yaxir
false
null
0
o8h92vk
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h92vk/
false
1
t1_o8h8z0w
That would force their products to always be subpar to Gemini, nah
1
0
2026-03-03T21:02:54
Due-Memory-6957
false
null
0
o8h8z0w
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h8z0w/
false
1
t1_o8h8ykz
Yes - and on the other hand, success has more than a few names, maybe?
1
0
2026-03-03T21:02:50
Impossible_Art9151
false
null
0
o8h8ykz
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8h8ykz/
false
1
t1_o8h8w0k
They stopped releasing base models for even their small ones after they went public.
1
0
2026-03-03T21:02:30
TheRealMasonMac
false
null
0
o8h8w0k
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8h8w0k/
false
1
t1_o8h8tsl
It is fun for sure, but maybe not so surprising. The emojis have embeddings that map to the words (or more accurately, concepts) associated with the emoji. To the model, once the text is parsed, the emojis are internally equivalent to "hitman", "hitman", "briefcase", which is equivalent to "Auftragsmörder", "Auftragsmörder", "Aktenkoffer" in German. Emojis are usually a 1:1 mapping of words to icons. To the LLM they are just another language mapped to universal concepts.
1
0
2026-03-03T21:02:12
crantob
false
null
0
o8h8tsl
false
/r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/o8h8tsl/
false
1
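A quick way to eyeball the claim above with a sentence encoder; the emoji string is an illustrative stand-in (the original post's emojis aren't reproduced here), and absolute similarity values depend on the model.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder
texts = [
    "🕴️🕴️💼",                     # stand-in emoji sequence
    "hitman hitman briefcase",      # English gloss
    "Auftragsmörder Aktenkoffer",   # German gloss
]
emb = model.encode(texts, convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())  # emoji vs English
print(util.cos_sim(emb[0], emb[2]).item())  # emoji vs German
```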
t1_o8h8rzh
that came to my mind as well
1
0
2026-03-03T21:01:58
Impossible_Art9151
false
null
0
o8h8rzh
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8h8rzh/
false
1
t1_o8h8mdt
No, it would be a separate application dedicated to 3D models, with built-in tools to directly edit meshes and textures.
1
0
2026-03-03T21:01:13
Lightnig125
false
null
0
o8h8mdt
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h8mdt/
false
1
t1_o8h8dnt
Tracking people's conversations and detecting offensive context using AI maybe?
1
0
2026-03-03T21:00:04
Professional_Hair550
false
null
0
o8h8dnt
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h8dnt/
false
1
t1_o8h8asv
At first, unfortunately, it will be CUDA. We need to make the application fully functional first; then we can look into adding support for other technologies for Intel and AMD GPUs.
1
0
2026-03-03T20:59:42
Lightnig125
false
null
0
o8h8asv
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h8asv/
false
1
t1_o8h8a94
[removed]
1
0
2026-03-03T20:59:38
[deleted]
true
null
0
o8h8a94
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8h8a94/
false
1
t1_o8h88r3
The question might be more about tool use to pull in outside info as well. If they can reason strongly when given the right resources to search, that's an interesting trade-off versus larger models that already know more.
1
0
2026-03-03T20:59:26
Thigh_Clapper
false
null
0
o8h88r3
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8h88r3/
false
1
t1_o8h85xo
Yes, absolutely! I have been wanting this for years.
1
0
2026-03-03T20:59:05
adriaans89
false
null
0
o8h85xo
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h85xo/
false
1
t1_o8h85k8
Someone from Google
1
0
2026-03-03T20:59:02
Ok_Reception_5545
false
null
0
o8h85k8
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h85k8/
false
1
t1_o8h7xhw
No, that's what I meant; it's like a clever way to seem slightly less like an AI
1
0
2026-03-03T20:58:00
nomorebuttsplz
false
null
0
o8h7xhw
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8h7xhw/
false
1
t1_o8h7r1v
I've been working on an update post, but I did end up buying one. It's a fair question about finances. I think most people buying these cards are fairly established in their career. For me, I'm in the infosec field and have been at the same company for over a decade with multiple raises and promotions. Despite that, I live very frugally, and the next most expensive thing I own other than my home is a family vehicle we bought 4 years ago for $6k. I have friends making much less than me driving much more expensive cars.
1
0
2026-03-03T20:57:11
AvocadoArray
false
null
0
o8h7r1v
false
/r/LocalLLaMA/comments/1ql9b7m/talk_me_out_of_buying_an_rtx_pro_6000/o8h7r1v/
false
1
t1_o8h7r5g
eBay. The condition is always listed. I've never had a problem. I guess look out for obvious things like pictures that seem fake or sellers with no history or rating, stuff like that? But like I said, I've never had a problem, and I've bought several used GPUs and other computer components off of eBay.
1
0
2026-03-03T20:57:11
ArtifartX
false
null
0
o8h7r5g
false
/r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/o8h7r5g/
false
1
t1_o8h7ji8
Is this only possible in llama-server or also in llama-cli?
1
0
2026-03-03T20:56:13
WowSkaro
false
null
0
o8h7ji8
false
/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8h7ji8/
false
1
t1_o8h7j6w
Inwards and outwards. 👉👌😏
1
0
2026-03-03T20:56:10
Cool-Chemical-5629
false
null
0
o8h7j6w
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h7j6w/
false
1
t1_o8h7dgh
Can I not dislike LLM slop and still be interested in the technology?
1
0
2026-03-03T20:55:26
Velocita84
false
null
0
o8h7dgh
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8h7dgh/
false
1
t1_o8h7a6w
Will it work with openclaw to control a PC, and can we access it via API?
1
0
2026-03-03T20:55:01
Formal_Jeweler_488
false
null
0
o8h7a6w
false
/r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/o8h7a6w/
false
1
t1_o8h79vg
Yes - there are benchmarks that kinda simulate it, like MRCR, but they only do 8 needles, and real document retrieval can involve several hundred. It's not really solvable with the current architecture - the best we can do is work around it with things like RAG to limit the context size. It's actually a large reason why lots of people are unexcited about context > 128k, because even frontier models fall off a cliff long before that. Opus 4.6 seems better, but it's hard to quantify by more than vibes.
1
0
2026-03-03T20:54:59
claythearc
false
null
0
o8h79vg
false
/r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/o8h79vg/
false
1
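A minimal sketch of the "limit the context with retrieval" workaround mentioned above; chunk size and top-k are arbitrary, and the encoder is a small placeholder model.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def top_k_chunks(document: str, query: str, k: int = 5, size: int = 1200) -> list[str]:
    """Return only the k most relevant chunks instead of the whole document."""
    chunks = [document[i:i + size] for i in range(0, len(document), size)]
    scores = util.cos_sim(encoder.encode([query]), encoder.encode(chunks))[0]
    ranked = sorted(zip(scores.tolist(), chunks), reverse=True)
    return [chunk for _, chunk in ranked[:k]]  # only these go into the prompt
```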
t1_o8h78bp
Still don't understand why this got downvoted to hell. Anyways, I have an RTX PRO 6000 now and I can confirm in LM Studio that it can successfully split a model across both cards and perform as expected. What did not work previously was combining a 4070 and 5090: LM Studio crashed due to memory allocation errors despite having enough. However, with a manual layer split, llama.cpp could function properly. Anyways, quite nice to have 128GB of VRAM and be able to load a midsize Q4 stepfun 3.5 flash!!! I get 115 tokens/sec at the start of a 100k context window, down to 35 tokens/sec at the end.
1
0
2026-03-03T20:54:46
mr_zerolith
false
null
0
o8h78bp
false
/r/LocalLLaMA/comments/1r5ta46/combining_a_rtx_pro_6000_and_5090_could_it_work/o8h78bp/
false
1
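For reference, a hedged sketch of splitting a GGUF model across two mismatched cards with `llama-cpp-python`; the path is a placeholder and the ratios assume a 96 GB + 32 GB pair like the one described above.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",    # placeholder path to the quantized model
    n_gpu_layers=-1,            # offload all layers to GPU
    tensor_split=[0.75, 0.25],  # e.g. 96 GB card vs 32 GB card
)
```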
t1_o8h72e8
You think they're not using those models internally for all sorts of purposes?
1
0
2026-03-03T20:54:01
temperature_5
false
null
0
o8h72e8
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8h72e8/
false
1
t1_o8h7270
:(
1
0
2026-03-03T20:54:00
Denial_Jackson
false
null
0
o8h7270
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h7270/
false
1
t1_o8h71nm
Shame a large part of the team, besides the models, is also getting dropped :/
1
0
2026-03-03T20:53:55
duliszewski
false
null
0
o8h71nm
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8h71nm/
false
1
t1_o8h6y8r
No no, it's the full opencode ecosystem. Openwork is a desktop app, an opencode app to run agents like Claude work or openclaw. It has native Telegram integration and automatic runners. It doesn't have many options, but it's cool to use. Just install it, create a Telegram bot, and connect. The session management is still bad, but it's still cool: I can send messages and use my own agents that I work with in opencode daily.
1
0
2026-03-03T20:53:30
Turbulent_Dot3764
false
null
0
o8h6y8r
false
/r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/o8h6y8r/
false
1
t1_o8h6wz4
RIP qwen.. at least we still got mistral, stepfun, glm, etc. Bro did censor the models which is why everyone had to run and make abliterated versions.
1
0
2026-03-03T20:53:20
a_beautiful_rhind
false
null
0
o8h6wz4
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h6wz4/
false
1
t1_o8h6wje
I think the benchmark scores are fairly significantly saturated. I guess it may be fair to say that both models are surprisingly good, but if you are hitting, for instance, the problems that the 6-point IFBench difference indicates, you might see that the 35b suffers from noticeably worse prompt adherence, for example. Personally, I have no use for any model except the 122b. I tried the 35b and it was nowhere near as good, in that it is less knowledgeable and this hurts its performance. Its upside is that it is faster, but I'd rather wait patiently for good results than deal with worse ones. Slow is fast, and fast is good, and all that.
1
0
2026-03-03T20:53:16
audioen
false
null
0
o8h6wje
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8h6wje/
false
1
t1_o8h6szc
You mean to make a text-to-LoRA for those prompts?
1
0
2026-03-03T20:52:49
Silver-Champion-4846
false
null
0
o8h6szc
false
/r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o8h6szc/
false
1
t1_o8h6o7j
Replaced by who/what? AI? 😂
1
0
2026-03-03T20:52:13
Cool-Chemical-5629
false
null
0
o8h6o7j
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h6o7j/
false
1
t1_o8h6nzy
It can go on for a day no problem - so long sessions are not an issue. I tested it in Qwen 3 Coder Next and the new Qwen 3.5, all from Unsloth. Ollama is too slow, so you need to compile llama.cpp from git; all new releases require the latest. I run it on a 5060 Ti 16GB with 128GB unified memory, but you can get away with less as long as you keep the context size low. I have it at 64k. Cline will deal with even 32k quite well in terms of compacting.
1
0
2026-03-03T20:52:12
Tema_Art_7777
false
null
0
o8h6nzy
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8h6nzy/
false
1
t1_o8h6mqp
Cost. Everything actually done by humans (or agents of humans) has "opportunity cost". It's a term of art in economics. Worth learning!
1
0
2026-03-03T20:52:02
crantob
false
null
0
o8h6mqp
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8h6mqp/
false
1
t1_o8h6khw
That's a shame really; that low PP makes it fundamentally unusable, even though the token generation would be more than sufficient for a lot of cases
1
0
2026-03-03T20:51:45
ps5cfw
false
null
0
o8h6khw
false
/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8h6khw/
false
1
t1_o8h6kjr
Is this all CPU? I recall that there is an LLM SDK for the NPU as well. I would be interested in knowing how something like the 4B model would run on this. Thanks for posting your findings!
1
0
2026-03-03T20:51:45
UncleRedz
false
null
0
o8h6kjr
false
/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8h6kjr/
false
1
t1_o8h6kau
I built this plugin specifically to solve this exact problem by allowing you to run local models like Ollama or LM Studio directly within your Zotero library. It gives you the privacy benefits of a local setup without sacrificing the advanced RAG and semantic search capabilities that cloud tools offer, handling the chunking, embedding, and retrieval automatically so you get accurate answers from your papers without needing to tune a custom instance from scratch. You can check out the project and see how it integrates local AI with your research workflow here: [https://github.com/dralkh/seerai](https://github.com/dralkh/seerai)
1
0
2026-03-03T20:51:43
Dralkha
false
null
0
o8h6kau
false
/r/LocalLLaMA/comments/1dlw92m/local_alternative_to_scholargpt_scispace/o8h6kau/
false
1
t1_o8h6k3e
didn't fix it for me 😒
1
0
2026-03-03T20:51:42
ArchdukeofHyperbole
false
null
0
o8h6k3e
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8h6k3e/
false
1
t1_o8h6guy
Hey guys, I am new here. So what's the deal with other people creating all these Qwen3.5 variations? If I want to try the original one from Alibaba, where can I find the author on Hugging Face?
1
0
2026-03-03T20:51:16
pet3121
false
null
0
o8h6guy
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8h6guy/
false
1
t1_o8h6828
Advertising departments are seething... and we are happy.
1
0
2026-03-03T20:50:08
crantob
false
null
0
o8h6828
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8h6828/
false
1
t1_o8h61p8
It's an example; like, if I need to extract only specific things from a text.
1
0
2026-03-03T20:49:18
last_llm_standing
false
null
0
o8h61p8
false
/r/LocalLLaMA/comments/1rjw6rc/thoughts_about_qwen_35_fine_tuning_08b_model_for/o8h61p8/
false
1
t1_o8h5zk9
Is this project still active?
1
0
2026-03-03T20:49:01
iddar
false
null
0
o8h5zk9
false
/r/LocalLLaMA/comments/1q6x7nq/testflight_built_an_ios_app_that_runs_llms_vision/o8h5zk9/
false
1
t1_o8h5yav
I've been using a Q6_K quant instead of FP8, should I just use the FP8 version then? I'm only getting roughly 60t/s, but I probably just need to play around with my settings. Thanks for your help!
1
0
2026-03-03T20:48:52
Anarchaotic
false
null
0
o8h5yav
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8h5yav/
false
1
t1_o8h5rnn
Using the old 30b-instruct for months and the new 35b since today, I never had those effects. I am using the combo SearXNG + Open WebUI. Often I try to get information from paywalled content, and even when the specific link is blocked, it finds the content from other sources.
1
0
2026-03-03T20:48:00
Impossible_Art9151
false
null
0
o8h5rnn
false
/r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/o8h5rnn/
false
1
t1_o8h5rj5
I need one for a college project I am working on!! 
1
0
2026-03-03T20:47:59
pet3121
false
null
0
o8h5rj5
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8h5rj5/
false
1
t1_o8h5lm0
If you want a local LLM you should have at least a 3090, which fits the 27b model pretty well
1
0
2026-03-03T20:47:13
TheKingOfTCGames
false
null
0
o8h5lm0
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h5lm0/
false
1
t1_o8h5jrp
Not disagreeing with you, genuinely asking, why do you think that would be the case?
1
0
2026-03-03T20:46:59
LoaderD
false
null
0
o8h5jrp
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8h5jrp/
false
1
t1_o8h5gex
Even if it's just speed: if you're running on an M3 Mac, which often has RAM to spare, that reduction in active parameters is going to do wonders for prompt processing times.
1
0
2026-03-03T20:46:32
Hoodfu
false
null
0
o8h5gex
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8h5gex/
false
1
t1_o8h5746
At the moment, Qwen Coder Next 80b, but it's slow.
1
0
2026-03-03T20:45:19
SlowMovingTarget
false
null
0
o8h5746
false
/r/LocalLLaMA/comments/18f6sae/got_myself_a_4way_rtx_4090_rig_for_local_llm/o8h5746/
false
1
t1_o8h56l9
Ain't even that.. I'm not giving them a phone number. I just add xcancel myself though.
1
0
2026-03-03T20:45:15
a_beautiful_rhind
false
null
0
o8h56l9
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8h56l9/
false
1
t1_o8h55ly
No, I meant like the full system: how did you integrate the full system and connect it with Telegram? Do you use it for clawdbot as well?
1
0
2026-03-03T20:45:07
Formal_Jeweler_488
false
null
0
o8h55ly
false
/r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/o8h55ly/
false
1
t1_o8h4xg7
The characters between "solved" and "and honestly" are two "minus signs", U+002D U+002D ("--"), not one em dash. Perhaps your browser is replacing them?
1
0
2026-03-03T20:44:03
crantob
false
null
0
o8h4xg7
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8h4xg7/
false
1
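An easy way to settle what characters are really there, per the comment above:

```python
s = "solved -- and honestly"  # paste the disputed snippet here
print([(c, f"U+{ord(c):04X}") for c in s if not (c.isalnum() or c == " ")])
# [('-', 'U+002D'), ('-', 'U+002D')] means two hyphen-minus; an em dash would show as U+2014
```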
t1_o8h4txw
From the commit:

> Applies a penalty to tokens that have already appeared in the response.

> A value of 0 means no penalty. Higher values increase the penalty, which can encourage the model to introduce new tokens instead of reusing previously used tokens.
1
0
2026-03-03T20:43:35
Emotional_Egg_251
false
null
0
o8h4txw
false
/r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8h4txw/
false
1
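A minimal sketch of what the quoted commit text describes: a flat, one-time penalty on every token id already present in the response (a frequency penalty, by contrast, scales with repeat count).

```python
import numpy as np

def apply_presence_penalty(logits: np.ndarray, generated_ids: list[int],
                           penalty: float) -> np.ndarray:
    """Subtract `penalty` from the logit of each token seen so far."""
    out = logits.copy()
    out[list(set(generated_ids))] -= penalty  # once per token, regardless of count
    return out
```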
t1_o8h4q1x
[removed]
1
0
2026-03-03T20:43:04
[deleted]
true
null
0
o8h4q1x
false
/r/LocalLLaMA/comments/1rcda0u/why_is_it_so_hard_to_find_real_resources_on/o8h4q1x/
false
1
t1_o8h4lyj
The USA already started the AI military game; every country has to follow
1
0
2026-03-03T20:42:31
archieve_
false
null
0
o8h4lyj
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8h4lyj/
false
1
t1_o8h4gab
[removed]
1
0
2026-03-03T20:41:46
[deleted]
true
null
0
o8h4gab
false
/r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o8h4gab/
false
1
t1_o8h4g3k
I think you are right that privacy is too narrow. But I also think the technology is only just being made simple enough to run locally, and models small enough for more normal hardware have only become useful enough within the last 6 months or so. The next thing that needs to happen is cheaper hardware. If you look at average consumer or enterprise laptops, most don't have any Nvidia GPU; that's premium/enthusiast/gamer territory. Both AMD and Intel are working on it though, with built-in NPUs, and Microsoft is doing the Copilot+ thing to push vendors, etc. Claude Cowork is showing that the tech is useful for normal people and productivity. OpenClaw, despite how horrible its security is, is a great example of what is possible. Not to mention all the companion AI and Silly Tavern style stuff. I don't think Ollama etc. will be the ones breaking into the mainstream; it will be the ones building the applications that use LLMs, once the hardware etc. is ready. One sign of this is Goose: on their roadmap they have an idea of local-first, bundling the inference engine with the app. Normal people will not download an inference engine; they will download an app that does something they want.
1
0
2026-03-03T20:41:45
UncleRedz
false
null
0
o8h4g3k
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8h4g3k/
false
1
t1_o8h4fgd
Tested both against each other on vision tasks; both are excellent and above what I have seen so far, but the 122B outperforms the 35B here. Will continue my tests over the next few days...
1
0
2026-03-03T20:41:39
Impossible_Art9151
false
null
0
o8h4fgd
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8h4fgd/
false
1
t1_o8h4a29
It's worth noting that this only speeds up the first ~2 seconds after you send the LLM prompt (before tokens start spitting out). The "4x improvement" is NOT in overall speed or tokens/sec, just the first ~2 seconds before tokens arrive. Still a really great thing to have, but the marketing is a bit (conveniently) vague on this for the average user.
1
0
2026-03-03T20:40:56
TechExpert2910
false
null
0
o8h4a29
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8h4a29/
false
1
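A small sketch separating the two numbers the comment above distinguishes: time-to-first-token (the prefill phase the "4x" claim is about) versus decode tokens/sec. `stream` stands in for any streaming token iterator from a local inference API.

```python
import time
from typing import Iterable, Tuple

def measure(stream: Iterable[str]) -> Tuple[float, float]:
    """Return (time_to_first_token_seconds, decode_tokens_per_second)."""
    start = time.perf_counter()
    first_token_at = None
    n = 0
    for _tok in stream:
        n += 1
        if first_token_at is None:
            first_token_at = time.perf_counter() - start  # prefill dominates this
    if first_token_at is None:
        return 0.0, 0.0  # empty stream
    total = time.perf_counter() - start
    decode_tps = (n - 1) / (total - first_token_at) if n > 1 else 0.0
    return first_token_at, decode_tps  # a faster M5 mainly shrinks the first number
```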