name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8hm6ko
Yes but still not merged :)
1
0
2026-03-03T22:05:16
jacek2023
false
null
0
o8hm6ko
false
/r/LocalLLaMA/comments/1rk2f8l/parallel_model_loading_this_is_a_thing_fast_model/o8hm6ko/
false
1
t1_o8hlz90
I was thinking the same
1
0
2026-03-03T22:04:17
gized00
false
null
0
o8hlz90
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hlz90/
false
1
t1_o8hlxp8
If only I could turn its thinking fully off, then it would be perfect...
1
0
2026-03-03T22:04:04
Single_Ring4886
false
null
0
o8hlxp8
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8hlxp8/
false
1
t1_o8hlmce
https://preview.redd.it/…863c7ebf5d9eccd0
1
0
2026-03-03T22:02:32
Neither-Phone-7264
false
null
0
o8hlmce
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hlmce/
false
1
t1_o8hlk12
For translations / multilingual use, the difference quickly becomes noticeable in my testing, as well as in general knowledge and coding. Q5 running at 60 t/s on dual L40S.
1
0
2026-03-03T22:02:13
sjoerdmaessen
false
null
0
o8hlk12
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hlk12/
false
1
t1_o8hliym
Would he try to say no to Palantir? We'll never know... 'cause he can't.
0
0
2026-03-03T22:02:04
raiffuvar
false
null
0
o8hliym
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hliym/
false
0
t1_o8hl9hp
https://github.com/ggml-org/llama.cpp/discussions/4167
1
0
2026-03-03T22:00:48
Gregory-Wolf
false
null
0
o8hl9hp
false
/r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/o8hl9hp/
false
1
t1_o8hl7pl
I feel like this is probably wrong but maybe it is the base model after GRPO like Deepseek-R1-Zero
1
0
2026-03-03T22:00:34
Initial-Argument2523
false
null
0
o8hl7pl
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8hl7pl/
false
1
t1_o8hl2cy
Nah, perfect for an MMO
1
0
2026-03-03T21:59:51
emanationinteractive
false
null
0
o8hl2cy
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8hl2cy/
false
1
t1_o8hkyb5
I would love it if it generated masks for different materials. I want to use procedural textures on all models to unify the art style.
1
0
2026-03-03T21:59:19
LushHappyPie
false
null
0
o8hkyb5
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hkyb5/
false
1
t1_o8hktyf
People don’t know computer vision was AI before LLMs
1
0
2026-03-03T21:58:44
big_witty_titty
false
null
0
o8hktyf
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hktyf/
false
1
t1_o8hkttg
Appreciate it. I'm using the MLX one because it works faster compared to llama. Will take a look at those models. Many thanks.
1
0
2026-03-03T21:58:43
OliverNoMore
false
null
0
o8hkttg
false
/r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/o8hkttg/
false
1
t1_o8hkm0s
SmartVPN :] and a MUD/MMO client, oh and a document organizer! https://github.com/emanationinteractive/speaker-prep
1
0
2026-03-03T21:57:41
emanationinteractive
false
null
0
o8hkm0s
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8hkm0s/
false
1
t1_o8hkhul
Abliterated models are very stupid; we all need to get used to Heretic, especially since 1.2.0 was released and there is now MPOA+SOMA.
1
0
2026-03-03T21:57:08
pigeon57434
false
null
0
o8hkhul
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8hkhul/
false
1
t1_o8hkh2s
 "What other things are promoted in the same way that we dont realize?" I think this the key takeway from this thread for me. Even though there are is no proof here, this is very probably happening a lot.
1
0
2026-03-03T21:57:02
IngenuityMotor2106
false
null
0
o8hkh2s
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8hkh2s/
false
1
t1_o8hkggp
Thanks for your feedback, that’s very interesting.
1
0
2026-03-03T21:56:57
Lightnig125
false
null
0
o8hkggp
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hkggp/
false
1
t1_o8hk9dw
How much VRAM would that model use?
1
0
2026-03-03T21:56:03
dca12345
false
null
0
o8hk9dw
false
/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/o8hk9dw/
false
1
t1_o8hk92z
Yes, Meta has excellent management. That's why they spent $77 billion on the metaverse. They shut down open Llama because it was making them look bad vs other smaller companies.
1
0
2026-03-03T21:56:00
temperature_5
false
null
0
o8hk92z
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hk92z/
false
1
t1_o8hk62l
There will be several file formats that you’ll be able to export.
1
0
2026-03-03T21:55:37
Lightnig125
false
null
0
o8hk62l
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hk62l/
false
1
t1_o8hk2p3
Just because he thanked Elon?! 😨
1
0
2026-03-03T21:55:11
ANR2ME
false
null
0
o8hk2p3
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hk2p3/
false
1
t1_o8hjzop
Fair point — but consider **compiler design** or **distributed systems**. When uninformed people have loud opinions there, the experts simply don't notice. The field moves forward regardless. With AI, the noise is *inside the room*. The same people who couldn't explain what a race condition is are now consulting on architecture decisions and shaping hiring criteria. That's not a printing press problem. That's a signal-to-noise problem inside the discipline itself.
1
0
2026-03-03T21:54:47
Holiday-Case-4524
false
null
0
o8hjzop
false
/r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hjzop/
false
1
t1_o8hjmb3
I mean, surely the 122B has a lot more world knowledge
1
0
2026-03-03T21:53:01
Soft-Barracuda8655
false
null
0
o8hjmb3
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hjmb3/
false
1
t1_o8hjlyk
QwenMan 🫡
1
0
2026-03-03T21:52:58
JLeonsarmiento
false
null
0
o8hjlyk
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hjlyk/
false
1
t1_o8hjhgp
I view it the other way: this now means GPUs/compute that would never even be considered for this type of work now can be. As the models get more performant for less compute (cheaper to run), expect demand to rise, not fall. Basic economics, sadly.
1
0
2026-03-03T21:52:23
Invader-Faye
false
null
0
o8hjhgp
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8hjhgp/
false
1
t1_o8hjg20
I had the same problem, so I decided it needed to be developed. But I wanted to make it open source. I’ll open a Discord server for those who want to follow the project’s progress, and I’ll put it on a public repository once the app is more stable.
1
0
2026-03-03T21:52:12
Lightnig125
false
null
0
o8hjg20
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hjg20/
false
1
t1_o8hje87
Nooo! I hope they manage to release the open weights of Qwen Image 2 before it all falls apart!
1
0
2026-03-03T21:51:58
Spanky2k
false
null
0
o8hje87
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hje87/
false
1
t1_o8hjdlt
I assume this is about the smaller 3.5's thinking loop. Can someone with more brain cells than me explain what this is, and why repeat penalty doesn't help get out of the loop?
1
0
2026-03-03T21:51:53
sine120
false
null
0
o8hjdlt
false
/r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8hjdlt/
false
1
t1_o8hjafh
hm would have thought Gemini 3.1 would have done better at at least one category lol
1
0
2026-03-03T21:51:28
clocksmith
false
null
0
o8hjafh
false
/r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/o8hjafh/
false
1
t1_o8hj85b
Use llama.cpp and install llama-vscode: https://marketplace.visualstudio.com/items?itemName=ggml-org.llama-vscode. To use it in VS Code, you need to run two types of models. Qwen3.5-35B is a chat model, so you can use it to chat with your code. But if you want code completion/suggestions as you type, then you need to run a FIM model. For example, this is a FIM+chat combo: https://huggingface.co/ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF. You can see other smaller FIM models here too: https://huggingface.co/collections/ggml-org/llamavim. Also see their page for more info on llama.cpp in VS Code: https://github.com/ggml-org/llama.vscode?tab=readme-ov-file
1
0
2026-03-03T21:51:10
segmond
false
null
0
o8hj85b
false
/r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/o8hj85b/
false
1
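For anyone trying the FIM route described above, a minimal launch sketch. It assumes llama.cpp's `-hf` download shortcut and the port llama.vscode's docs use for its default endpoint (8012, as I recall); treat both as assumptions to verify, not the extension's official recipe.

```sh
# Sketch: serve the FIM+chat model linked above for the llama-vscode extension.
# -hf pulls the GGUF from Hugging Face; -ngl 99 offloads all layers to the GPU.
llama-server \
  -hf ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF \
  --port 8012 -ngl 99 -c 16384
# Then point llama-vscode's completion endpoint at http://127.0.0.1:8012.
```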
t1_o8hj5px
If you're doing checkpoints every batch, you might want to increase `--n-ctx-checkpoints` as well.
1
0
2026-03-03T21:50:51
ilintar
false
null
0
o8hj5px
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8hj5px/
false
1
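A minimal invocation sketch for testing the checkpointing PR, using only flags quoted in this thread (ilintar writes `--n-ctx-checkpoints`, while a log later in the thread shows `--ctx-checkpoints`; check `llama-server --help` on the PR branch for the authoritative spelling):

```sh
# Sketch only: flag names are as reported in this thread, not final PR documentation.
./llama-server -m Qwen3.5-35B-A3B-IQ4_XS-00001-of-00002.gguf \
  --checkpoint-every-nb 1 \
  --ctx-checkpoints 128 \
  --swa-full -c 8192
```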
t1_o8hj59d
Less informed people having more opportunity to broadcast their opinions is a thing we've been dealing with since at least the invention of the printing press. We'll adapt. It might take a while and it will suck in the meantime, but we'll get there.
1
0
2026-03-03T21:50:47
Justsomedudeonthenet
false
null
0
o8hj59d
false
/r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hj59d/
false
1
t1_o8hizpc
Automating accounting with AI does not make sense. You can automate like 90% with a simple algorithm (a lot of 'if-else'), and what's left are questions that AI cannot answer itself without knowing the living situation; it would also need to ask you.

But let's say you use it for interpretation of the law... which means that if it has even one hallucination, it could change the overall algorithm and f* up even a simple tax return by writing off your F150 as a fighter jet. However, if you have receipts and other stuff that need to be scanned, do so: OCR, then AI for interpretation and organizing.

Sidenote: I had like 15 credits' worth of courses about accounting at university and barely work with accountants. So I have no idea if this is correct, but accounting really seems more like an algorithmic problem than something AI can fix.
1
0
2026-03-03T21:50:04
No-Veterinarian8627
false
null
0
o8hizpc
false
/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8hizpc/
false
1
t1_o8hiyh4
"Why is C++ even popular? Most people don't even realize they can do the same and more writing assembly"
1
0
2026-03-03T21:49:55
liv_drdoom
false
null
0
o8hiyh4
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o8hiyh4/
false
1
t1_o8hixl9
It must be easier and cheaper to tune, no? If the pre-training includes the formats already, then less time is spent applying FT and RL on it. Not sure if there are papers on this, though there probably are. The last models I saw that had a true base model were Llama-2 derivatives. So Mistral 7B v1, v2, and v3 will do true text completions. Not sure if there are newer models that do this.
1
0
2026-03-03T21:49:47
teleprint-me
false
null
0
o8hixl9
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8hixl9/
false
1
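A quick way to check the "true text completion" behavior discussed above is to hit llama-server's raw `/completion` endpoint, which applies no chat template; the prompt here is an arbitrary example, and a true base model should simply continue the text rather than answer as an assistant.

```sh
# Raw completion, no chat template: a base model continues the prompt verbatim.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Once upon a time", "n_predict": 64}'
```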
t1_o8hirbl
Where's the grift, where's the hustle
1
0
2026-03-03T21:48:58
CommunismDoesntWork
false
null
0
o8hirbl
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8hirbl/
false
1
t1_o8himxe
Thanks for your feedback! The project should be available fairly soon.
1
0
2026-03-03T21:48:23
Lightnig125
false
null
0
o8himxe
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8himxe/
false
1
t1_o8hig6q
Thanks for your feedback! Yes, it's a problem I've often seen. But with this application, you can generate a model and then optimize the polygons to make it low-poly, as you can see in the video.
1
0
2026-03-03T21:47:30
Lightnig125
false
null
0
o8hig6q
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hig6q/
false
1
t1_o8hifls
https://huggingface.co/lukey03/Qwen3.5-9B-abliterated-GGUF
1
0
2026-03-03T21:47:26
Flat_cola
false
null
0
o8hifls
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8hifls/
false
1
t1_o8hi3ge
It’s bandwidth limited at low or moderate context sizes
1
0
2026-03-03T21:45:50
nomorebuttsplz
false
null
0
o8hi3ge
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hi3ge/
false
1
t1_o8hhydb
Probably 3.5x prefill over the previous generation, which is what we've seen with M5 vs M4. Token generation will likely be maybe 10% faster due to higher memory bandwidth.
1
0
2026-03-03T21:45:10
MrPecunius
false
null
0
o8hhydb
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hhydb/
false
1
t1_o8hhwbf
Thanks for your feedback! Yes, it will be FOSS. I'm finishing stabilizing the first version, and then I'll open an open-source repository.
1
0
2026-03-03T21:44:54
Lightnig125
false
null
0
o8hhwbf
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hhwbf/
false
1
t1_o8hho8v
M5 has been out for a while, so we have plenty of data on the matmul performance boost. The gain over the M4 Pro/Max should be similar, with token generation scaling modestly with the ~10% increase in memory bandwidth. I have a binned M4 Pro MBP, and I am sorely tempted by the M5 Pro--especially since 64GB is now an option vs the 48GB I have now.
1
0
2026-03-03T21:43:51
MrPecunius
false
null
0
o8hho8v
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hho8v/
false
1
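The ~10% token-generation estimate in the two comments above follows from decode being memory-bandwidth-bound: every generated token streams roughly the full active weights. A back-of-envelope sketch, with illustrative numbers rather than Apple specs:

```latex
\text{tokens/s} \;\approx\; \frac{\text{memory bandwidth}}{\text{bytes of active weights per token}}
% e.g. ~30 GB of weights on ~550 GB/s gives ~18 t/s;
% a 10% bandwidth increase lifts this to ~20 t/s, while prefill scales with compute instead.
```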
t1_o8hhm2m
For coding, GLM 4.7 or 5 are the closest open models to Claude I've used. You would need a *lot* of VRAM or really fast unified RAM, and excellent prompt processing speeds, or you're gonna be waiting forever. Current Macs are too slow in PP for agentic coding on that scale, but the new generation is supposed to have 4x the speed, so probably wait for those if you go that route. Or build a system with several real large GPUs and run vLLM. In the meantime, try running something like Qwen 3.5 27B on a 24-32GB card, or Qwen3.5 35B or GLM 4.7 Flash 30B on any system with enough RAM, and just try them out in Claude Code or a similar open framework.
1
0
2026-03-03T21:43:34
temperature_5
false
null
0
o8hhm2m
false
/r/LocalLLaMA/comments/1rk02yt/guidance_for_running_open_source_models/o8hhm2m/
false
1
t1_o8hhie4
Thanks for your feedback.
1
0
2026-03-03T21:43:05
Lightnig125
false
null
0
o8hhie4
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hhie4/
false
1
t1_o8hhhsi
Yeap! I know this is not for everyone, but hell yeah, loading 175-255 GB across 8 GPUs is painfully slow...
1
0
2026-03-03T21:43:01
One-Macaron6752
false
null
0
o8hhhsi
false
/r/LocalLLaMA/comments/1rk2f8l/parallel_model_loading_this_is_a_thing_fast_model/o8hhhsi/
false
1
t1_o8hhg87
Not yet, on my list, wasn't that optimised for x86 inference?
1
0
2026-03-03T21:42:49
jslominski
false
null
0
o8hhg87
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o8hhg87/
false
1
t1_o8hh7ja
It’s impossible to reason with the China shills in this sub
1
0
2026-03-03T21:41:40
nomorebuttsplz
false
null
0
o8hh7ja
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hh7ja/
false
1
t1_o8hh68d
How well does he communicate?
1
0
2026-03-03T21:41:30
_fortexe
false
null
0
o8hh68d
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8hh68d/
false
1
t1_o8hh46u
Thanks for the Peta pointer. Never seen it, genuinely useful. The audit trail angle is exactly the space I'm thinking about. Quick question though: does Peta capture the *reasoning* that preceded the tool call — the inputs the agent evaluated, the logic it applied, the "why" behind the decision to act or hold? Or is the audit trail primarily the tool call itself and its parameters? Asking because that pre-decision window is specifically what I'm building for. Peta looks like it solves the governance and credentialing layer really well. I'm not sure it captures what happened in the agent's head before it pulled the trigger. That is the possible gap I keep running into.
1
0
2026-03-03T21:41:14
Ok-Telephone2163
false
null
0
o8hh46u
false
/r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/o8hh46u/
false
1
t1_o8hgzbk
I don't even have too little money, more like no money online.
1
0
2026-03-03T21:40:35
Silver-Champion-4846
false
null
0
o8hgzbk
false
/r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o8hgzbk/
false
1
t1_o8hgwsh
There are open-source models that can run without requiring an incredible graphics card. In the video, I used the Hunyuan3D 2 Mini model. I'm on an RTX 3060, and the generation only took a few seconds.
1
0
2026-03-03T21:40:16
Lightnig125
false
null
0
o8hgwsh
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hgwsh/
false
1
t1_o8hgvef
China has been using AI for domestic mass surveillance for a long, long time. Here's a paper from 2021 where they talk about a new approach for identifying individuals on CCTV. Their benchmark dataset is:

> 32,668 annotated bounding boxes of 1,501 individuals captured by six cameras at a university campus

https://arxiv.org/pdf/2112.11689
1
0
2026-03-03T21:40:06
Piyh
false
null
0
o8hgvef
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hgvef/
false
1
t1_o8hgqlp
Got it running today! The only caveat is the system prompt and tools overhead alone is 18k tokens. How do I trim it?
1
0
2026-03-03T21:39:29
simracerman
false
null
0
o8hgqlp
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8hgqlp/
false
1
t1_o8hgoi1
> Ollama on Windows

There's your problem. A gentleman of course runs llama.cpp on Linux. *sips scotch while adjusting my monocle*
1
0
2026-03-03T21:39:13
spaceman_
false
null
0
o8hgoi1
false
/r/LocalLLaMA/comments/1rk0zht/til_a_single_windows_env_var_ollama_gpu_overhead/o8hgoi1/
false
1
t1_o8hglxt
Yes :) should be fine
1
0
2026-03-03T21:38:54
hauhau901
false
null
0
o8hglxt
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hglxt/
false
1
t1_o8hglpc
"Use the glasses damit!"
1
0
2026-03-03T21:38:52
IrisColt
false
null
0
o8hglpc
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hglpc/
false
1
t1_o8hgfyi
Dario said no and he didn't go on a state mandated reeducation vacation like Jack Ma did.
1
0
2026-03-03T21:38:07
Piyh
false
null
0
o8hgfyi
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hgfyi/
false
1
t1_o8hgd2o
There's a trend here...
1
0
2026-03-03T21:37:45
IrisColt
false
null
0
o8hgd2o
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hgd2o/
false
1
t1_o8hgd4k
Bro's gonna show up at xAI...
1
0
2026-03-03T21:37:45
Eastern_Ad6546
false
null
0
o8hgd4k
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hgd4k/
false
1
t1_o8hgcer
No, it would really be focused on 3D.
1
0
2026-03-03T21:37:40
Lightnig125
false
null
0
o8hgcer
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hgcer/
false
1
t1_o8hgbvm
Do I just use the mmproj file from the original Qwen3.5 4B?
1
0
2026-03-03T21:37:36
ZookeepergameNovel18
false
null
0
o8hgbvm
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hgbvm/
false
1
t1_o8hg99a
Yes, this project is alive and going well. In fact, I shipped the Mac version of the app yesterday, along with an audiobooks feature in the iOS app.
1
0
2026-03-03T21:37:15
Living_Commercial_10
false
null
0
o8hg99a
false
/r/LocalLLaMA/comments/1q6x7nq/testflight_built_an_ios_app_that_runs_llms_vision/o8hg99a/
false
1
t1_o8hg2zc
Thanks for your feedback, and thank you for the idea, it’s interesting.
1
0
2026-03-03T21:36:26
Lightnig125
false
null
0
o8hg2zc
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hg2zc/
false
1
t1_o8hfyxa
No reason to still be using the 2 year old Mistral 7B, if you like Mistral in particular there's now Ministral-3-8B-Instruct-2512, and before that there was Ministral-8B-Instruct-2410...
1
0
2026-03-03T21:35:54
Zenobody
false
null
0
o8hfyxa
false
/r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8hfyxa/
false
1
t1_o8hfyya
I should say that I watched the next turn in that same conversation. I simply asked "why is the sky blue" and it seemed to take a few minutes to process that prompt.
1
0
2026-03-03T21:35:54
ArchdukeofHyperbole
false
null
0
o8hfyya
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8hfyya/
false
1
t1_o8hfuok
> it's looking like there might not be a Qwen 4

This.
1
0
2026-03-03T21:35:21
IrisColt
false
null
0
o8hfuok
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hfuok/
false
1
t1_o8hftjy
AI slop
1
0
2026-03-03T21:35:12
PloscaruRadu
false
null
0
o8hftjy
false
/r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/o8hftjy/
false
1
t1_o8hfqru
Mind uploading the ggufs to HF? Ollama is yucky
1
0
2026-03-03T21:34:50
doomed151
false
null
0
o8hfqru
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8hfqru/
false
1
t1_o8hflbu
what a Karen!
1
0
2026-03-03T21:34:08
Powerful_Evening5495
false
null
0
o8hflbu
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8hflbu/
false
1
t1_o8hfhu9
Thanks for your feedback.
1
0
2026-03-03T21:33:40
Lightnig125
false
null
0
o8hfhu9
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hfhu9/
false
1
t1_o8hfh7y
That's unfortunate, hope they don't go the way of Meta4, and hope Junyang finds a happy home with another openlab, GLM, MiniMax, Kimi, DeepSeek
1
0
2026-03-03T21:33:36
segmond
false
null
0
o8hfh7y
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hfh7y/
false
1
t1_o8hfad3
oof.gif (ᵕ—ᴗ—)
1
0
2026-03-03T21:32:42
IrisColt
false
null
0
o8hfad3
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8hfad3/
false
1
t1_o8hf86h
I prefer to buy local and meet the seller. I also try to avoid anything that is "too good to be true".
1
0
2026-03-03T21:32:24
FinalCap2680
false
null
0
o8hf86h
false
/r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/o8hf86h/
false
1
t1_o8hezkd
heh
1
0
2026-03-03T21:31:15
IrisColt
false
null
0
o8hezkd
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8hezkd/
false
1
t1_o8hettc
They work so well that they could reverse the trend of models needing data-center-level compute to run. The race could become seeing how much you can do on modest hardware: smartphones and other devices, for example. That could lead to diminished demand for data centers and hopefully some relief on consumer prices. Maybe I'm dreaming.
1
0
2026-03-03T21:30:29
mountain_mongo
false
null
0
o8hettc
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8hettc/
false
1
t1_o8heke0
Hoping they start their own lab; Qwen since 2.5 has been invaluable for me.
1
0
2026-03-03T21:29:14
SoupDue6629
false
null
0
o8heke0
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8heke0/
false
1
t1_o8hek0y
A $2k macbook pro is never going to come with a terabyte of RAM to run the SOTA models that can compete with cloud offerings. And my understanding is the NPUs that are shipping on some systems are only usable for slow background offloading. They're MUCH slower than the CPU/GPU, but for small models it can be enough to run things in the background without tying up the main processor.
1
0
2026-03-03T21:29:11
suicidaleggroll
false
null
0
o8hek0y
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8hek0y/
false
1
t1_o8heguf
For now, the application is still being stabilized, but as soon as it’s stable, I’ll put it in an open-source repository with instructions on how to test it, and I’ll make an announcement. I’ll also open a Discord server to discuss the project’s progress.
1
0
2026-03-03T21:28:46
Lightnig125
false
null
0
o8heguf
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8heguf/
false
1
t1_o8hdwjv
Yes, it's all CPU. There are a few techniques I've been looking into to mess with the NPU, namely ezrknn-llm/RKLLM, rk-llama.cpp, and rkllama. Once I've had my fill of CPU I will start trying these out as well. I plan to post about 2b and 4b performance on the OPi later!
1
0
2026-03-03T21:26:07
antwon-tech
false
null
0
o8hdwjv
false
/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8hdwjv/
false
1
t1_o8hdmdu
That's exactly the idea: a simple application, a built-in open-source model manager, and 3D tools directly integrated into the same app. I think this could simplify the workflow for those who focus only on 3D.
1
0
2026-03-03T21:24:48
Lightnig125
false
null
0
o8hdmdu
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hdmdu/
false
1
t1_o8hdlf8
Elon Musk hiring Junyang Lin would be awesome because:

- Lots of GPU-friendly small Groks
- Lots of pissed-off redditors

I can only dream… ;)
1
0
2026-03-03T21:24:40
jacek2023
false
null
0
o8hdlf8
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hdlf8/
false
1
t1_o8hdi3f
Ah thank you. I am running the smaller models currently. I was just making a joke at people who run 1 bit quants
1
0
2026-03-03T21:24:15
Rude_Marzipan6107
false
null
0
o8hdi3f
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8hdi3f/
false
1
t1_o8hdgh4
I think I got it working. Had to do longer context, I guess. Latest command I used:

./llama-server -m ~/pathto/Qwen3.5-35B-A3B-IQ4_XS-00001-of-00002.gguf --checkpoint-every-nb 1 --ctx-checkpoints 128 --swa-full -c 8192 --reasoning-budget 0

srv          init: init: chat template, thinking = 0
main: model loaded
main: server is listening on http://127.0.0.1:8080
main: starting the main loop...
srv  update_slots: all slots are idle
srv  params_from_: Chat format: peg-constructed
slot get_availabl: id  3 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  3 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> min-p -> ?xtc -> temp-ext -> dist
slot launch_slot_: id  3 | task 0 | processing task, is_child = 0
slot update_slots: id  3 | task 0 | new prompt, n_ctx_slot = 8192, n_keep = 0, task.n_tokens = 33
slot update_slots: id  3 | task 0 | n_tokens = 0, memory_seq_rm [0, end)
slot init_sampler: id  3 | task 0 | init sampler, took 0.03 ms, tokens: text = 33, total = 33
slot update_slots: id  3 | task 0 | prompt processing done, n_tokens = 33, batch.n_tokens = 33
srv  log_server_r: done request: POST /v1/chat/completions 127.0.0.1 200
slot print_timing: id  3 | task 0 |
  prompt eval time =    4397.23 ms /    33 tokens (  133.25 ms per token,     7.50 tokens per second)
         eval time =  436784.31 ms /  1275 tokens (  342.58 ms per token,     2.92 tokens per second)
        total time =  441181.53 ms /  1308 tokens
slot      release: id  3 | task 0 | stop processing: n_tokens = 1307, truncated = 0
srv  update_slots: all slots are idle
srv  params_from_: Chat format: peg-constructed
slot get_availabl: id  2 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  2 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> min-p -> ?xtc -> temp-ext -> dist
slot launch_slot_: id  2 | task 1276 | processing task, is_child = 0
slot update_slots: id  2 | task 1276 | new prompt, n_ctx_slot = 8192, n_keep = 0, task.n_tokens = 1324
slot update_slots: id  2 | task 1276 | n_tokens = 0, memory_seq_rm [0, end)
slot update_slots: id  2 | task 1276 | prompt processing progress, n_tokens = 812, batch.n_tokens = 812, progress = 0.613293
srv  log_server_r: done request: POST /v1/chat/completions 127.0.0.1 200
slot update_slots: id  2 | task 1276 | n_tokens = 812, memory_seq_rm [812, end)
slot init_sampler: id  2 | task 1276 | init sampler, took 0.36 ms, tokens: text = 1324, total = 1324
slot update_slots: id  2 | task 1276 | prompt processing done, n_tokens = 1324, batch.n_tokens = 512
slot update_slots: id  2 | task 1276 | created context checkpoint 1 of 128 (pos_min = 811, pos_max = 811, n_tokens = 812, size = 62.813 MiB)
srv          stop: cancel task, id_task = 1276
slot      release: id  2 | task 1276 | stop processing: n_tokens = 2266, truncated = 0
srv  update_slots: all slots are idle
1
0
2026-03-03T21:24:02
ArchdukeofHyperbole
false
null
0
o8hdgh4
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8hdgh4/
false
1
t1_o8hdfww
" (not only LLMs btw)" - soon we are able to create/do everything in LM Studio (pictures, videos, voice/sound/music, TTS/STT and whatever i forgot)! That would be awesome.
1
0
2026-03-03T21:23:58
x3kim
false
null
0
o8hdfww
false
/r/LocalLLaMA/comments/1nkft9l/ama_with_the_lm_studio_team/o8hdfww/
false
1
t1_o8hdf0o
These models turn out to be very sensitive to temperature, and almost all posts like these in our WeChat group got solved by reducing temperature and/or top_k. Go as low as 0.4, it's fine. Above 0.6 and it gets too rambly.
1
0
2026-03-03T21:23:51
RadiantHueOfBeige
false
null
0
o8hdf0o
false
/r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/o8hdf0o/
false
1
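A sketch of applying that advice against a local OpenAI-compatible server; the endpoint, model name, and top_k value are placeholders rather than the poster's exact settings, and `top_k` is a llama.cpp extension to the OpenAI schema:

```sh
# Lower temperature (and optionally top_k) to rein in the rambling described above.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5",
    "temperature": 0.4,
    "top_k": 20,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```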
t1_o8hdeha
But is it for local inference in Python? I use RVC, but damn, is that thing a pain to build with in Python.
1
0
2026-03-03T21:23:47
Alexercer
false
null
0
o8hdeha
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8hdeha/
false
1
t1_o8hdcsq
Glad it worked out nicely for you. Try 2b if you can, if it works your output tokens will be even faster.
1
0
2026-03-03T21:23:34
deadman87
false
null
0
o8hdcsq
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8hdcsq/
false
1
t1_o8hdarz
very cool!
1
0
2026-03-03T21:23:18
frismic
false
null
0
o8hdarz
false
/r/LocalLLaMA/comments/1rjzb0y/built_a_windows_desktop_ai_agent_with_toolcalling/o8hdarz/
false
1
t1_o8hd39z
sure, but the post i was replying to was claiming _no_ company would do it and that the AI as a service business model was longterm nonviable. i don't think that's true. there are going to be companies that buy inference gear and companies that rent the capabilities when they need it. try getting your job to pay for more on-prem build machines and you may quickly learn how much accountants hate capex even when it saves money by any normal math
1
0
2026-03-03T21:22:20
HopePupal
false
null
0
o8hd39z
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8hd39z/
false
1
t1_o8hcztm
I finally got the QuantTrio/qwen3.5-122b-a10B-awq working locally. I'm limited to 4x 3090s. Had to run a dockerized vllm:nightly for it to work.
1
0
2026-03-03T21:21:53
Proof_Scene_9281
false
null
0
o8hcztm
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8hcztm/
false
1
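A sketch of the setup described above, assuming vLLM's official OpenAI-compatible image; the flags are a guess at a 4x3090 tensor-parallel launch, not the poster's exact command:

```sh
# AWQ quant split across 4 GPUs; the :nightly tag is what the poster reports needing.
docker run --gpus all --ipc=host -p 8000:8000 \
  vllm/vllm-openai:nightly \
  --model QuantTrio/qwen3.5-122b-a10B-awq \
  --tensor-parallel-size 4
```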
t1_o8hcywi
Does anyone have an actual opinion on how Qwen got so good and has continually flown under the mainstream radar? I think their flagship non-local model is noticeably better than DeepSeek, but I never hear anyone mention it. I think it's on the level of frontier US models. And their new smaller models also seem to be quite impressive. Did anyone who left Alibaba go somewhere new yet, or just departures?
1
0
2026-03-03T21:21:46
justgetoffmylawn
false
null
0
o8hcywi
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hcywi/
false
1
t1_o8hcx1z
35B has just 3B active parameters, so it is many times faster than 27B. Whether it matters depends on your hardware. If 27B is fast enough, then it is a great choice. 35B MoE is better for older hardware or when the model does not fully fit in VRAM.
1
0
2026-03-03T21:21:31
Lissanro
false
null
0
o8hcx1z
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hcx1z/
false
1
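The speedup claim above has a simple back-of-envelope basis: bandwidth-bound decode reads only the active parameters per token, so at equal quantization (say 8-bit, where GB ≈ billions of params) the ratio of active weights bounds the speed ratio. A sketch, ignoring attention and KV-cache overhead:

```latex
\frac{\text{t/s}_{\,\text{35B-A3B}}}{\text{t/s}_{\,\text{27B dense}}}
\;\approx\; \frac{27\ \text{GB active}}{3\ \text{GB active}} \;=\; 9
% Real-world gains are smaller once KV-cache reads and expert-routing overhead are included.
```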
t1_o8hcv41
It will always be a shortcoming of open source projects: giving the user a choice and having them jump through hoops. I remember using Cursor and it really was very simple and good UX. But when I tried OpenCode a couple of days ago? Holy shit.

First of all, YouTube is infested with shitty AI tutorials, and it's even worse when none of them covers your particular setup, which did not feel out of the ordinary either. Just OpenCode with LM Studio, but no: the only reference I found was some blog from Google, and it was incomplete at best. It took very long to set up. You would think by now we would have presets for them, but no. And in the end it didn't even work properly for some reason. I tried multiple models, OSS 20B, Qwen 3.5 35B; they keep looping and getting errors. But with Cursor? It actually understood the assignment and worked on it quickly.

Not to mention, with cloud models I am not worrying about bringing my entire machine to a halt because I need the VRAM to run the model. Or updating every couple of weeks for the best and latest, setting it up and building my memory with it again from scratch.

All in all, it was not a pleasant experience, and I am good with computers. Can't imagine how it is for complete newbies.
1
0
2026-03-03T21:21:15
Mayion
false
null
0
o8hcv41
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8hcv41/
false
1
t1_o8hcqp5
It’s definitely not faster – not for the same quality. It can’t get personal when most of one’s data is in the cloud. Cloud LLM providers allow you to export your data. iMessage, WhatsApp, etc competed with SMS, which was objectively much less capable than any of the internet-based messaging software. Cloud LLMs are generally a lot better for most people and on most machines.
1
0
2026-03-03T21:20:41
vfrolov
false
null
0
o8hcqp5
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8hcqp5/
false
1
t1_o8hcq39
Hmm...
1
0
2026-03-03T21:20:36
IrisColt
false
null
0
o8hcq39
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hcq39/
false
1
t1_o8hcnd7
I would be especially interested if it had the capability to export to STL/STEP for 3d printing.
1
0
2026-03-03T21:20:15
th3m00se
false
null
0
o8hcnd7
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hcnd7/
false
1
t1_o8hcn26
Can you share some logs please?
1
0
2026-03-03T21:20:12
ilintar
false
null
0
o8hcn26
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8hcn26/
false
1
t1_o8hck4e
It's also not that people aren't capable of figuring it out; it's that they have to go through that trial-and-error process. It takes time and, frankly, most people who already know how to do something are awful at explaining it to someone who doesn't, let alone someone whose setup is a little different.
1
0
2026-03-03T21:19:50
_WaterBear
false
null
0
o8hck4e
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8hck4e/
false
1
t1_o8hcjd4
Because if I say "hitman hitman briefcase" to someone (a real person), would that person understand what movie I'm talking about? And they're men in suits. If you search for "hitman" in emojis, you won't find anything. This was the reasoning behind the movie.
1
0
2026-03-03T21:19:44
eddy-morra
false
null
0
o8hcjd4
false
/r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/o8hcjd4/
false
1
t1_o8hcix7
Indeed
1
0
2026-03-03T21:19:41
Silver-Champion-4846
false
null
0
o8hcix7
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hcix7/
false
1