name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8fhu2o
Very useful, looking into this now :)
1
0
2026-03-03T16:05:02
StabledFusion
false
null
0
o8fhu2o
false
/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8fhu2o/
false
1
t1_o8fht50
Does opencode require local, desktop installations? Or can I install it as an administrator with local LLMs and serve it to a department?
1
0
2026-03-03T16:04:55
Impossible_Art9151
false
null
0
o8fht50
false
/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/o8fht50/
false
1
t1_o8fhqle
A bit of a newbie here, but how did you get it to work on Fedora? I installed it with dnf and tried to run a qwen 3.5 model, and I get an "unknown architecture qwen 35" error.
1
0
2026-03-03T16:04:35
Tyrannas
false
null
0
o8fhqle
false
/r/LocalLLaMA/comments/1rekedh/bad_local_performance_for_qwen_35_27b/o8fhqle/
false
1
t1_o8fhn3w
Depends on your expectations: if you want parity with modern hosted models, maybe 512GB; if you want 85-95% of that, maybe 192GB. It's a sliding scale, with diminishing returns. If you can use that 64GB of VRAM you have to run a single model, you could get gpt-oss-120b, qwen-coder-next, or qwen3.5-122b-a11b on there, all at 4-bits. Those are not bad models for coding. They're not great at creative writing. I would probably try out some community finetunes for creative writing; the models I've tried from most big labs are very "safe" and vanilla, which does not lead to riveting writing.
1
0
2026-03-03T16:04:07
spaceman_
false
null
0
o8fhn3w
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fhn3w/
false
1
t1_o8fhmje
You mean it actually didn't hallucinate the answer, like in OP's case?
1
0
2026-03-03T16:04:02
arturdent
false
null
0
o8fhmje
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fhmje/
false
1
t1_o8fhm60
[removed]
1
0
2026-03-03T16:03:59
[deleted]
true
null
0
o8fhm60
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fhm60/
false
1
t1_o8fhlyj
I made this one for consumer / unified memory Blackwell users, enjoy! https://huggingface.co/catplusplus/Qwen3.5-35B-A3B-heretic-NVFP4. Is anyone in a position to quantize the 120B-A10B one? I might eventually; I need to figure out a runpod setup as I can't load it fully locally.
1
0
2026-03-03T16:03:58
catplusplusok
false
null
0
o8fhlyj
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fhlyj/
false
1
t1_o8fhlcl
Imagine criticizing 73% of worldwide users
1
0
2026-03-03T16:03:53
pepe256
false
null
0
o8fhlcl
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8fhlcl/
false
1
t1_o8fhkm1
Yeah, that's how it acts for me with Qwen3-vl, but weirdly it doesn't do so with Qwen3.5. Maybe an Android issue?
1
0
2026-03-03T16:03:47
FoxTrotte
false
null
0
o8fhkm1
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fhkm1/
false
1
t1_o8fhizo
Yes, the 27B GGUF one from unsloth. I'm running it on an Intel machine with 128GB RAM (shared with the integrated GPU). Prompt processing is slow compared to a machine with a GPU, but it is faster than gpt-oss on this same machine; much faster in opencode.
1
0
2026-03-03T16:03:34
octopus_limbs
false
null
0
o8fhizo
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fhizo/
false
1
t1_o8fhiwv
It is built on Kokoro TTS, and I think that model sounds great.
1
0
2026-03-03T16:03:33
OrganicTelevision652
false
null
0
o8fhiwv
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fhiwv/
false
1
t1_o8fhidm
Okay, yes, but I think in humans intelligence can sometimes be described as combining information from different areas of knowledge.
1
0
2026-03-03T16:03:29
OriginalPlayerHater
false
null
0
o8fhidm
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8fhidm/
false
1
t1_o8fhi5s
30-35 tps
1
0
2026-03-03T16:03:27
Old-Sherbert-4495
false
null
0
o8fhi5s
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8fhi5s/
false
1
t1_o8fhh0i
so, trains your replacement faster?
1
0
2026-03-03T16:03:18
InstaLurker
false
null
0
o8fhh0i
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fhh0i/
false
1
t1_o8fhc25
This model supports all the languages that Kokoro supported.
1
0
2026-03-03T16:02:38
OrganicTelevision652
false
null
0
o8fhc25
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fhc25/
false
1
t1_o8fhbmi
Very interesting, and just to give my 2 cents, I love rustdesk.
1
0
2026-03-03T16:02:35
Spurnout
false
null
0
o8fhbmi
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8fhbmi/
false
1
t1_o8fh9w0
Test it; I don't know Portuguese. This model supports all the languages that Kokoro supported.
1
0
2026-03-03T16:02:21
OrganicTelevision652
false
null
0
o8fh9w0
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fh9w0/
false
1
t1_o8fh4dl
Production multi-agent is mostly single-agent plus tools in disguise. The "multi-agent" label gets applied loosely. True role-separated agents with coordination layers exist but mostly in well-funded infra teams. For solo devs and small orgs, a single capable agent with persistent memory and reliable tool calls outperforms most multi-agent setups - simpler to debug and cheaper to run.
1
0
2026-03-03T16:01:36
Joozio
false
null
0
o8fh4dl
false
/r/LocalLLaMA/comments/1rjpoge/are_multiagent_systems_actually_being_used_in/o8fh4dl/
false
1
t1_o8fh2pf
Are you using a 4-bit quant? I'm gonna download the 35b a3b on vllm to compare against llama.cpp and I'll report back to you, because if there's no major speedup, might as well use llama.cpp, where I can use 120k context length just fine (it does go down to like 50 tok/s at that point), and you can also quantize the kv cache via llama.cpp. Why are you using the dense 27b model though? Wouldn't you care more about speed for open claw?
1
0
2026-03-03T16:01:22
Certain-Cod-1404
false
null
0
o8fh2pf
false
/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o8fh2pf/
false
1
t1_o8fgyr3
The problem with benchmarks is they're no use if they aren't kept secret. One in particular involves physics calculations, and gpt-oss-120b, which is very strong with maths, gets that part right. Qwen produced a more polished user interface but it got the physics completely wrong.
1
0
2026-03-03T16:00:51
BigYoSpeck
false
null
0
o8fgyr3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8fgyr3/
false
1
t1_o8fgxd9
122B in vllm nvfp4 pushes 120t/s with Sehyo quant on rtx 6000 pro with MTP between 2-4. Not abliterated but should be able to do that.
1
0
2026-03-03T16:00:40
Kitchen-Year-8434
false
null
0
o8fgxd9
false
/r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8fgxd9/
false
1
t1_o8fgmwj
KGB prison cell.
1
0
2026-03-03T15:59:17
peva3
false
null
0
o8fgmwj
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fgmwj/
false
1
t1_o8fgmmo
Sure, for a bare-bones chatbot without any tools it's OK. But that is not what 99% of people do now with models.
1
0
2026-03-03T15:59:15
Valuable-Run2129
false
null
0
o8fgmmo
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8fgmmo/
false
1
t1_o8fg8sv
I am struggling with this question as well at the moment: Qwen3.5-27B + 5090, and how to run it fast and efficiently. Ideally it seems to me that this should be run as an AWQ on vllm, but memory is the problem. I need large contexts for openclaw, and the model plus the kv cache is too big for the 32GB VRAM when running it on vllm. So from what I can understand, I would need to offload the kv-cache to system RAM, but I can't get that to work. --kv_offloading_backend native (or lmcache) --kv_offloading_size 22 just errors out. Maybe vllm needs some work to get this to run. It would probably be kick ass if it did. Until then it is gguf on llama.cpp I guess.
1
0
2026-03-03T15:57:26
Treq01
false
null
0
o8fg8sv
false
/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o8fg8sv/
false
1
t1_o8ffyp5
He's just like me fr fr
1
0
2026-03-03T15:56:06
Honest-Monitor-2619
false
null
0
o8ffyp5
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ffyp5/
false
1
t1_o8ffwno
That’s a new one, I’ll have to try it! What is qwen good and bad at in your experience?
1
0
2026-03-03T15:55:49
ClayToTheMax
false
null
0
o8ffwno
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8ffwno/
false
1
t1_o8ffulu
The newer models are getting more talkative and verbose, as they're uncertain about what satisfies the user's requirements or the benchmark. As a result, they spit out lengthy explanations, hoping to nail the answer somewhere. It's been getting annoying to encounter essays for simple questions. System prompts such as "be brief" often add more time to the model's thinking process, so they're just a band-aid fix. There should be some new metric that takes conciseness into account.
1
0
2026-03-03T15:55:33
yensteel
false
null
0
o8ffulu
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ffulu/
false
1
t1_o8ffr9w
thank you kind sir. will check this
1
0
2026-03-03T15:55:07
Major_Specific_23
false
null
0
o8ffr9w
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8ffr9w/
false
1
t1_o8ffnvw
And I'm not telling you it is good for your specific use case. It is severely undertrained; it lacks a lot of the knowledge and accuracy that you typically see in fully trained models. I'm sorry if you disagree for your personal use case, but it is objectively the current best model at long-context understanding. I'm not saying it is the best for your use case, so no need to downvote or get defensive about it.
1
0
2026-03-03T15:54:41
Far-Low-4705
false
null
0
o8ffnvw
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8ffnvw/
false
1
t1_o8ffmlb
Prob qwen 3.5 27b
1
0
2026-03-03T15:54:31
hihenryjr
false
null
0
o8ffmlb
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8ffmlb/
false
1
t1_o8ffkhe
I'm not sure about lmstudio, but in llama.cpp you would pass another mmproj model for image understanding. That model is a small one. If you go to Unsloth's HF page you can find that model in different quants, so it's probably something similar for lmstudio I guess.
2
0
2026-03-03T15:54:14
Old-Sherbert-4495
false
null
0
o8ffkhe
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8ffkhe/
false
2
t1_o8ffigh
For some reason the tech guys use a complex term, "PCIe bifurcation", instead of a simple "PCIe splitting", so if they like complex terms then instead of a simple "PCIe merging" it should also be something complex lol.
1
0
2026-03-03T15:53:58
MelodicRecognition7
false
null
0
o8ffigh
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8ffigh/
false
1
t1_o8ffi2z
Thanks, I appreciate it! I hear you on the latency. I guess in an ideal world I could offer a latency benchmark via each provider.
1
0
2026-03-03T15:53:55
-penne-arrabiata-
false
null
0
o8ffi2z
false
/r/LocalLLaMA/comments/1r14bqk/i_benchmarked_the_newest_40_ai_models_feb_2026/o8ffi2z/
false
1
t1_o8ffenl
Haha, that meme is accurate for the placeholder demo. But this is what the real app looks like in daily use (that's actually my own prompt library with 54 prompts). Local-first means your data stays in your browser: no cloud, no tracking. Demo: [app.promptmanager.tech](http://app.promptmanager.tech)
1
0
2026-03-03T15:53:27
ConstructionExact911
false
null
0
o8ffenl
false
/r/LocalLLaMA/comments/1rjnupj/built_a_localfirst_prompt_manager_where_your_data/o8ffenl/
false
1
t1_o8ffe1w
Also getting the same error, and variations of it with most local models I try: unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_M / unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL. I thought --jinja on llama-server had fixed it, but that isn't the case.
1
0
2026-03-03T15:53:23
alexellisuk
false
null
0
o8ffe1w
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o8ffe1w/
false
1
t1_o8ff610
Interesting. So it seems likely the choice is between an m5 with higher memory now, or an m6 with lower memory later.
1
0
2026-03-03T15:52:20
piedamon
false
null
0
o8ff610
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ff610/
false
1
t1_o8ff4bi
right, running kimi2.5/Minimax/Qwen3.5 requires the BEAST!!!
2
0
2026-03-03T15:52:07
abubakkar_s
false
null
0
o8ff4bi
false
/r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/o8ff4bi/
false
2
t1_o8ff28k
It’s great for accessing the sparks desktop remotely via ipad, mac, etc.
1
0
2026-03-03T15:51:49
Badger-Purple
false
null
0
o8ff28k
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ff28k/
false
1
t1_o8feypj
https://github.com/seanGSISG/dgx-spark-sunshine-setup
1
0
2026-03-03T15:51:21
Badger-Purple
false
null
0
o8feypj
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8feypj/
false
1
t1_o8fex3w
This is true; the base version of one of the Qwen3 models was so deterministic when I was testing it for synthetic data generation that you could *barely* call it "Base". Thankfully, there are still true-base models around, like allenai/Olmo-3-1125-32B or the most recent stepfun-ai/Step-3.5-Flash-Base!
1
0
2026-03-03T15:51:08
FriskyFennecFox
false
null
0
o8fex3w
false
/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8fex3w/
false
1
t1_o8fevrq
Hope your tinfoil hat fits well.
1
0
2026-03-03T15:50:58
e38383
false
null
0
o8fevrq
false
/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8fevrq/
false
1
t1_o8feuwv
It's complicated. If it's coming, the M5 Ultra Mac Studio will probably be released in June during WWDC. (It could also be released this week but there haven't been any rumors so the chances are very low). Apple's also going to completely revamp the MacBook Pro this Fall but that might only have the base M6. That's how they did the M5 series. Previous iterations were released in the Fall but who knows if they'll revert back to that. They're also changing their release schedule this year where they'll only release Pro-tier iPhones in the Fall and then release the basic models in the Spring so who knows what that means for the rest of their products. So who knows tbh. This is a strange year for Apple product releases so no one can tell you for certain when things will be coming out.
2
0
2026-03-03T15:50:51
smith7018
false
null
0
o8feuwv
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8feuwv/
false
2
t1_o8feuzn
That makes me wonder, indeed. You should check your config anyway. What are you running, Vulkan or ROCm?
1
0
2026-03-03T15:50:51
Impossible_Art9151
false
null
0
o8feuzn
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8feuzn/
false
1
t1_o8feuuq
Not only Unsloth; I almost always run with llama.cpp and the models have bugs a lot of the time too. BTW, same story with vLLM quants: every time I try, it just doesn't start and throws a bunch of errors, even though there is day-0 support. Tried NVFP4 quants, nothing works; tried AWQ, same story.
1
0
2026-03-03T15:50:50
DistanceAlert5706
false
null
0
o8feuuq
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8feuuq/
false
1
t1_o8feuj5
I think SM or ROP might be a closer equivalent.
1
0
2026-03-03T15:50:48
PhilosophyforOne
false
null
0
o8feuj5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8feuj5/
false
1
t1_o8feu0y
[removed]
1
0
2026-03-03T15:50:44
[deleted]
true
null
0
o8feu0y
false
/r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/o8feu0y/
false
1
t1_o8fetvy
You DO realise that not everybody has the same use case as you...right?
1
0
2026-03-03T15:50:43
No_Management_8069
false
null
0
o8fetvy
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8fetvy/
false
1
t1_o8fetbq
I tried heretic on 122B and it's good. So I would recommend just using heretic for 122B sized models.
1
0
2026-03-03T15:50:39
vpyno
false
null
0
o8fetbq
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8fetbq/
false
1
t1_o8fetc2
That was the shocking part tbh! Models that are at the "knee curve" are always the most interesting as they are efficient. We need harder benchmarks that reveal the real difference between complex frontier models and models that we can run on our own computers. I know we're getting close to hitting another wall after the transformer boom, but the proof isn't in these benchmarks.
1
0
2026-03-03T15:50:39
yensteel
false
null
0
o8fetc2
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8fetc2/
false
1
t1_o8fesa2
Did you update transformers to 5.2.0 after installing Unsloth per the guide? It works fine and shouldn't break anything.
1
0
2026-03-03T15:50:30
TheRealMasonMac
false
null
0
o8fesa2
false
/r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8fesa2/
false
1
t1_o8fep65
I've done something very similar and it's not difficult if you can write code. I used Python and LM Studio with Google's Gemma 3 model. You can call LM Studio by IP address to access the model directly. Write code to loop over all your files, read them one by one, process them according to your prompt, then save the results. The model you can run really depends on the amount of vram you have. For the small models that can fit on a standard GPU, you may run into context issues with long articles. Speed and accuracy aren't going to be as good as the large cloud models but it worked well for my purpose.
1
0
2026-03-03T15:50:05
Craygen9
false
null
0
o8fep65
false
/r/LocalLLaMA/comments/1rjr5p7/local_llm_for_large_journal_library/o8fep65/
false
1
t1_o8fenst
You are comparing apples to oranges; you should compare 4B to gpt-oss-20B, not 9B.
1
0
2026-03-03T15:49:54
jacek2023
false
null
0
o8fenst
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fenst/
false
1
t1_o8femht
Yeah, unfortunately that's the only caveat currently with these (smaller) models. They obviously have nowhere near enough parameters to complete requests they haven't been (almost) directly trained for. So sometimes, although it's now fully uncensored, it can flop if it's something completely left-field for it. Thanks for the feedback! P.S. Once I finish the 9b/27b/35b, I expect those to be (inherently) a lot more consistent.
1
0
2026-03-03T15:49:44
hauhau901
false
null
0
o8femht
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8femht/
false
1
t1_o8fekle
All code, data, and eval splits are in the repo! I even left the part with hardcoded answers in a file called `definitely_not_cheating.py`.
1
0
2026-03-03T15:49:28
maciejgryka
false
null
0
o8fekle
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8fekle/
false
1
t1_o8fehqn
I have the same problem, but only when choosing the Hexagon (NPU) backend; CPU and Vulkan work fine.
1
0
2026-03-03T15:49:06
PushInternational171
false
null
0
o8fehqn
false
/r/LocalLLaMA/comments/1rj8gb4/for_sure/o8fehqn/
false
1
t1_o8fehc8
Cost is not that big of an issue for businesses. We currently use 2x M3 Ultra 512 at work. The Thunderbolt 5 interface is the bottleneck.
1
0
2026-03-03T15:49:02
mxforest
false
null
0
o8fehc8
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fehc8/
false
1
t1_o8feg57
Somebody doesn't know how to use their LLM bot. Good for us that the repo link is missing. Use a tiny dedicated NER model to extract PII. They are faster than LLMs, small enough to run on CPU, and they don't hallucinate. https://huggingface.co/nvidia/gliner-PII
1
0
2026-03-03T15:48:53
DinoAmino
false
null
0
o8feg57
false
/r/LocalLLaMA/comments/1rjodma/cloakllm_uses_local_ollama_to_detect_pii_before/o8feg57/
false
1
t1_o8fefxu
Yeah, it is a bit absurd to fine-tune the settings; otherwise it's a great testing ground for new models.
1
0
2026-03-03T15:48:51
SandboChang
false
null
0
o8fefxu
false
/r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8fefxu/
false
1
t1_o8fe4va
That's not you brother. You were thinking in normal style.
1
0
2026-03-03T15:47:24
pistaul
false
null
0
o8fe4va
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fe4va/
false
1
t1_o8fe4kx
It is very likely to happen, but I wouldn't expect it to cost less than $20k
1
0
2026-03-03T15:47:21
tarruda
false
null
0
o8fe4kx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fe4kx/
false
1
t1_o8fe4eo
I like the magnitude of the divide in Apple's AI hardware and software success.
1
0
2026-03-03T15:47:20
Fast-Satisfaction482
false
null
0
o8fe4eo
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fe4eo/
false
1
t1_o8fe46b
This fixed it entirely for me: [https://www.reddit.com/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/](https://www.reddit.com/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/)
1
0
2026-03-03T15:47:18
Brunofcsampaio
false
null
0
o8fe46b
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8fe46b/
false
1
t1_o8fe1vp
[removed]
1
0
2026-03-03T15:47:00
[deleted]
true
null
0
o8fe1vp
false
/r/LocalLLaMA/comments/1rjr5p7/local_llm_for_large_journal_library/o8fe1vp/
false
1
t1_o8fe1ls
I've tried Q8_0 for the 0.8b, 2b, 4b, and 9b models so far, with default/recommended settings in lmstudio. The token speed doesn't scale quite as I thought it would as the larger models are used, but the point where I get no interruptions or canceled actions is with 122b-a10-fp8, and that is far enough past my VRAM that it's barely usable. I haven't tried quantizing the cache fwiw, though I was primarily curious whether the issues I'm seeing were due to inference changes, since this was the first time I've seen this kind of output while using Cline.
1
0
2026-03-03T15:46:58
SocietyTomorrow
false
null
0
o8fe1ls
false
/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/o8fe1ls/
false
1
t1_o8fe1pt
Looks funny. I'm currently running a test with Qwen3.5 27B. The autostart of the round isn't working for me, so I needed to manually start a new game. I don't know why exactly, or whether I started the correct game mode. I changed the OpenRouter URL to my local llama.cpp endpoint to run my local models. Because of the new-game error, I can't use Qwen3.5 35B A3B, as it clicks like a mad man while in the main menu and I can't start a game; it's faster at clicking the sandbox mode all the time ^^ Edit to make it clear what I mean: I start run_agent, chromium opens up and I see the kiwi loading screen, and there are already click actions from the script itself opening multiple tabs of the ninjakiwi website. After loading is done (around 3-4 seconds) nothing happens anymore, and the model itself is already executing actions.
1
0
2026-03-03T15:46:58
Pakobbix
false
null
0
o8fe1pt
false
/r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8fe1pt/
false
1
t1_o8fdy4j
base M6 only, as always.
1
0
2026-03-03T15:46:30
Longjumping-Boot1886
false
null
0
o8fdy4j
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdy4j/
false
1
t1_o8fdvtw
[removed]
1
0
2026-03-03T15:46:12
[deleted]
true
null
0
o8fdvtw
false
/r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/o8fdvtw/
false
1
t1_o8fduz1
Yes, it is a known fact that changing the presence penalty and other parameters reduces thinking. But it is mentioned that intelligence is reduced as well, so keep that in mind; see the Unsloth guide... Asking the qwen3.5 series "hi" pushes those models to their edge :-) They perform better with challenging tasks. It is a kind of LLM personality. Personally, I celebrate it by asking qwen3.5 "hi" and then telling my colleagues about good prompting strategies ;-)
1
0
2026-03-03T15:46:05
Impossible_Art9151
false
null
0
o8fduz1
false
/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/o8fduz1/
false
1
t1_o8fdukj
What does this mean spec-wise? I finally bought a Mac Studio M3 Ultra with 96GB of RAM specifically for local AI models, like Open Claw, Llama, etc. Would I be better off with an M5? Unfortunately the M5 is just in laptops at this point, right?
1
0
2026-03-03T15:46:02
cmerrifield
false
null
0
o8fdukj
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdukj/
false
1
t1_o8fdtsp
Their marketed 4x speed is for prefill on 4-bit quants. The reason is dead simple though: on M4 and below, prefill tps is about the same regardless of quantization; the new architecture includes neural accelerator cores which fixed this huge disparity issue. Bummer if there's no M5 Ultra announcement this time though. We still have some hope it will happen tomorrow, but they will be doing invite-only hands-on sessions for press, so likely it will be just the devices from yesterday and today.
1
0
2026-03-03T15:45:55
bakawolf123
false
null
0
o8fdtsp
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdtsp/
false
1
t1_o8fdt4m
Europe often uses DDMMYYYY; America uses MMDDYYYY. Europe's is obviously a bit better than America's, but ISO 8601 (YYYYMMDD) is SO much better than the others. It puts both Europe and the USA to shame.
1
0
2026-03-03T15:45:50
randylush
false
null
0
o8fdt4m
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdt4m/
false
1
t1_o8fdprw
What is wasm? The github seems to have been deleted
1
0
2026-03-03T15:45:23
Bulb93
false
null
0
o8fdprw
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8fdprw/
false
1
t1_o8fdpg3
This works for me: $ llama-mtmd-cli -m mistralai_Voxtral-Mini-3B-2507-IQ4_XS.gguf --mmproj mmproj-Voxtral-Mini-3B-2507-Q8_0.gguf --audio 20260303T121224.wav -p "Transcribe"
1
0
2026-03-03T15:45:20
Disonantemus
false
null
0
o8fdpg3
false
/r/LocalLLaMA/comments/1o93ad1/audio_transcription_with_llamacpp_multimodal/o8fdpg3/
false
1
t1_o8fdjpf
Hi, tests are still running currently. Wanted to redo some of them (about 300 to be exact). Everything will update in realtime on the website. I will also edit my OP as well to say when everything is done and I'll make a new post whenever a new SOTA model launches.
1
0
2026-03-03T15:44:34
hauhau901
false
null
0
o8fdjpf
false
/r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o8fdjpf/
false
1
t1_o8fdjcs
And 1 TB memory.
3
0
2026-03-03T15:44:31
mxforest
false
null
0
o8fdjcs
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdjcs/
false
3
t1_o8fdg4d
This should fix it: [https://www.reddit.com/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/](https://www.reddit.com/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/)
1
0
2026-03-03T15:44:05
Brunofcsampaio
false
null
0
o8fdg4d
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8fdg4d/
false
1
t1_o8fddpz
What is the best for non-document OCR cases, like detecting text on a truck in an image?
1
0
2026-03-03T15:43:46
parabellum630
false
null
0
o8fddpz
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8fddpz/
false
1
t1_o8fdd88
That's my dilemma. I'm ready to put 4k down on a studio, but do I buy what's available now or wait until the next release, which will be better? But will it be much better? I'm banking on yes. But I'm going to be pissed if the m3 ultra still outperforms it lol.
2
0
2026-03-03T15:43:42
alexhackney
false
null
0
o8fdd88
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdd88/
false
2
t1_o8fdd8b
Training on the test set is all you need.
1
0
2026-03-03T15:43:42
zball_
false
null
0
o8fdd8b
false
/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8fdd8b/
false
1
t1_o8fdalb
If they release a 1tb I'm going all in.
1
0
2026-03-03T15:43:20
Investolas
false
null
0
o8fdalb
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fdalb/
false
1
t1_o8fd9go
what is the use of your LLM? (intriguing question, I intend no sarcasm)
1
0
2026-03-03T15:43:11
Mr_Lewis_Verstappen
false
null
0
o8fd9go
false
/r/LocalLLaMA/comments/1nqkayx/i_trained_an_llm_from_scratch_ama/o8fd9go/
false
1
t1_o8fd6zh
expect potato speeds from potato hardware bro
1
0
2026-03-03T15:42:52
birotester
false
null
0
o8fd6zh
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fd6zh/
false
1
t1_o8fd42e
I missed that, however it's the *only* citation for all of the AI claims that have any quantitative measurements
1
0
2026-03-03T15:42:29
iMrParker
false
null
0
o8fd42e
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fd42e/
false
1
t1_o8fd3br
Wait, a new form factor? Plz give the link cuz I thought MacBooks are clamshell-only according to Gemini.
1
0
2026-03-03T15:42:23
Fristender
false
null
0
o8fd3br
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fd3br/
false
1
t1_o8fczen
Yeah, but they fixed the slow prefill in the new M5.
1
0
2026-03-03T15:41:52
Odd-Ordinary-5922
false
null
0
o8fczen
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fczen/
false
1
t1_o8fcu81
lmao I like your term reverse bifurcation. Bifurimerge perhaps? I've been using the AliExpress SlimSas ones and they've been working well too. Love their [bundle for ~$50](https://www.aliexpress.us/item/3256810126931126.html?productId=3256810126931126&selectedSkuId=12000051901321363): you get the PCIe card, 2 75cm cables, and 2 riser bases. Gen4x16 works on them no problem. The 75cm cables are fine length-wise in the AAAwave frame to either the first or second level, even from the bottom-most PCIe slot zig-zagged around to the top-side 2nd story. Surprisingly, I found their provided cables to be of higher quality than other ones I bought, whose lock-in mechanism just pushed out backwards and stopped working [as a lock] literally upon first plug-in. Also not an ad 😄
1
0
2026-03-03T15:41:10
Marksta
false
null
0
o8fcu81
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fcu81/
false
1
t1_o8fcro1
Thanks. I just found the mistake: it was on CPU. I changed to Vulkan llama.cpp in lmstudio and disabled thinking and wow, I get 40 tokens per second. Sorry for the dumb question, but can I not upload images? I downloaded [https://huggingface.co/HauhauCS/Qwen3.5-4B-Uncensored-HauhauCS-Aggressive](https://huggingface.co/HauhauCS/Qwen3.5-4B-Uncensored-HauhauCS-Aggressive) but lmstudio doesn't allow me to upload images, and qwen 3.5 can handle images, right?
1
0
2026-03-03T15:40:49
Major_Specific_23
false
null
0
o8fcro1
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fcro1/
false
1
t1_o8fcp4y
Not sure how MLX+M5 can possibly triple token generation? Or do you mean Pro relative to base M5?
1
0
2026-03-03T15:40:29
petuman
false
null
0
o8fcp4y
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fcp4y/
false
1
t1_o8fcihg
A review of the relevant code in llama.cpp shows there is no difference between reasoning-budget 0 and the template args. It's not even clear to me what reasoning budget is supposed to do unless it ends up in a supporting model's chat_template, but I looked for one that might support it and didn't find one in 10m of browsing. Even the test cases in llama.cpp for reasoning-budget just test -1 (unlimited) and 0 (disabled).
1
0
2026-03-03T15:39:37
usrlocalben
false
null
0
o8fcihg
false
/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/o8fcihg/
false
1
t1_o8fch0j
lmaoo you made my day
1
0
2026-03-03T15:39:25
Fristender
false
null
0
o8fch0j
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fch0j/
false
1
t1_o8fcguh
[removed]
1
0
2026-03-03T15:39:23
[deleted]
true
null
0
o8fcguh
false
/r/LocalLLaMA/comments/1rjrtfd/i_let_an_agent_run_overnight_at_a_hackathon_heres/o8fcguh/
false
1
t1_o8fcf3m
Interesting. I will have to try both of them out, in that case. When it comes to literary fiction, usually my favorite writers were the ones who wrote a bit more straightforwardly and not too flowery with their prose, so maybe I will enjoy the 4b. Just sounds crazy to think of something as tiny as a 4b now being a capable writer model, lol, but I guess AI evolves pretty quickly. Then again I still think it would be cool if we could get some more huge dense models like the mistral 123b and then get a Hemingway/Cormac McCarthy writing-style fine tune from the Drummer, something like Gemma4 120b-Prosemaster. But none of them seem to gaf about writing at the moment. Someone should redo that meme pic of the AI industry stealing all the RAM with the tiny crappy scraps falling to the opened mouths of the consumer-grade people as an afterthought, except with it being LLM models where the main ones are all coding models and the crappy scraps are the general-purpose writing models, and the people crying on the ground are all little depressed artist types wearing berets and fedoras, while all the people getting the good stuff are wearing nerd uniforms with glasses and calculators and pocket protectors and look super happy by comparison.
2
0
2026-03-03T15:39:09
DeepOrangeSky
false
null
0
o8fcf3m
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8fcf3m/
false
2
t1_o8fcai0
A CUDA core on a marketing paper is surprisingly underwhelming compared to an Apple GPU core.
1
0
2026-03-03T15:38:32
Fristender
false
null
0
o8fcai0
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fcai0/
false
1
t1_o8fc852
What do the -b and -ub flags do? Coz I read in another thread that removing them gives a boost, if I remember correctly.
1
0
2026-03-03T15:38:13
Old-Sherbert-4495
false
null
0
o8fc852
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fc852/
false
1
t1_o8fc7nj
vLLM supports MTP. I'm getting +16% TG from it.
2
0
2026-03-03T15:38:09
DeltaSqueezer
false
null
0
o8fc7nj
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8fc7nj/
false
2
t1_o8fc3xb
Sentiment and intent classification for example.
1
0
2026-03-03T15:37:39
HighFlyingB1rd
false
null
0
o8fc3xb
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8fc3xb/
false
1
t1_o8fc13x
Screenshot is from Apple but I assume they are just using it to organize the files with the Maya MCP.
3
0
2026-03-03T15:37:15
themixtergames
false
null
0
o8fc13x
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fc13x/
false
3
t1_o8fc0qv
Maybe you do not know the concept of on-premises products.
0
0
2026-03-03T15:37:12
Just-Message-9899
false
null
0
o8fc0qv
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8fc0qv/
false
0
t1_o8fbynp
Apparently the model wants to think. Others are observing that even if thinking is disabled, it will find somewhere else in the output to do it. From their documentation they expect a lot of thinking: 1. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. Apparently the sampling parameters are also important, since it's easy for the thinking to enter a cycle. I've observed both of these situations.
1
0
2026-03-03T15:36:56
usrlocalben
false
null
0
o8fbynp
false
/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/o8fbynp/
false
1
t1_o8fbyr6
https://preview.redd.it/…7950c5eeae7000
1
0
2026-03-03T15:36:56
M4r10_h4ck
false
null
0
o8fbyr6
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fbyr6/
false
1