name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8fbybn
16GB of VRAM on an iGPU means it's just using system RAM. Ofc it's slow AF.
1
0
2026-03-03T15:36:53
Long_comment_san
false
null
0
o8fbybn
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fbybn/
false
1
t1_o8fbx5a
European style? Don’t you mean rest-of-the-world style?
4
0
2026-03-03T15:36:44
SpanishAhora
false
null
0
o8fbx5a
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fbx5a/
false
4
t1_o8fbw8m
They probably won’t because people got weird with their model and developed deep emotional codependency on it.
1
0
2026-03-03T15:36:37
o5mfiHTNsH748KVq
false
null
0
o8fbw8m
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8fbw8m/
false
1
t1_o8fbr9k
Thank you Airscripts :)
1
0
2026-03-03T15:35:57
M4r10_h4ck
false
null
0
o8fbr9k
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fbr9k/
false
1
t1_o8fbld9
Tried it, seems to perform better than the other decensored 4b variants. In certain scenarios (which I assume the original model is aligned to avoid answering), it answers poorly and then quickly descends into chaotic loops, just like the other variants of small 3.5 models. But when it works it answers better.
1
0
2026-03-03T15:35:10
tonyunreal
false
null
0
o8fbld9
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fbld9/
false
1
t1_o8fbim3
But even so, running the 4b version is still significantly slower.
1
0
2026-03-03T15:34:47
spacecad_t
false
null
0
o8fbim3
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fbim3/
false
1
t1_o8fbigv
Yo I came here for the same thing. I have never heard of Maya-MPC. I'm guessing it can control Maya from LM Studio?? What!?
2
0
2026-03-03T15:34:46
Illustrious-Lake2603
false
null
0
o8fbigv
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fbigv/
false
2
t1_o8fbiff
What about the performance improvements from [this PR](https://github.com/ggml-org/llama.cpp/pull/19139), which require a re-upload?
1
0
2026-03-03T15:34:45
coder543
false
null
0
o8fbiff
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8fbiff/
false
1
t1_o8fbdq4
Use Lightning AI, they provide $15 of free credit monthly… you can use an L40S GPU for around 5-7 hours with the free credits.
1
0
2026-03-03T15:34:07
Tricky-Cream-3365
false
null
0
o8fbdq4
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8fbdq4/
false
1
t1_o8fbcg7
no way.... I have the same one... but didn't try 4b, I was getting 22 tps on 9b at q8. Are you running it with llama.cpp? How much RAM do you have? You def can fly with 4b.
2
0
2026-03-03T15:33:57
Old-Sherbert-4495
false
null
0
o8fbcg7
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fbcg7/
false
2
t1_o8fb9dj
My mate. You’re in the LocalLLaMA sub. Any number isn’t “a lot” here ;)
6
0
2026-03-03T15:33:33
srigi
false
null
0
o8fb9dj
false
/r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/o8fb9dj/
false
6
t1_o8fb4aq
One of the “problems” with the M3 Ultra was that the M4 Max in some inference benchmarks actually competes head to head with the M3 Ultra. The M3 Ultra is a fantastic machine, but it lacks the M4 generation's improvements, and now the M5 generation pushes that boundary even further.
2
0
2026-03-03T15:32:52
norms_are_practical
false
null
0
o8fb4aq
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fb4aq/
false
2
t1_o8fb05r
GPU cores / NPU cores / DSP cores / etc definitely don't compare across manufacturers and often don't compare across hardware generations.
2
0
2026-03-03T15:32:19
Future-Job-7442
false
null
0
o8fb05r
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fb05r/
false
2
t1_o8faxmw
Could be. It is using AWS drivers, which is part of what's required by my client's project. Regardless, 22.6 is still fine. The AWQ version is still 24.1, so either way it doesn't fit.
1
0
2026-03-03T15:31:58
Civil-Top-8167
false
null
0
o8faxmw
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8faxmw/
false
1
t1_o8faqtx
Nice app. Just curious, how is your app different than PocketPal?
1
0
2026-03-03T15:31:03
whiteh4cker
false
null
0
o8faqtx
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8faqtx/
false
1
t1_o8faq1y
/r/vibecoding/
1
0
2026-03-03T15:30:57
MelodicRecognition7
false
null
0
o8faq1y
false
/r/LocalLLaMA/comments/1rjp9n2/i_built_an_ai_that_audits_other_ais/o8faq1y/
false
1
t1_o8faouz
9B AWQ works. But I am looking for that little extra smartness for the project I am doing. It's work related, so the smarter and faster the model, the better.
1
0
2026-03-03T15:30:48
Civil-Top-8167
false
null
0
o8faouz
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8faouz/
false
1
t1_o8fajmz
Please help me understand what is going on in your screenshot. LM Studio controlling Autodesk? Did it create that, or is it just helping to organize? What model? Edit: Date shows April 1st, some sort of April Fools' joke? Got me.
3
0
2026-03-03T15:30:07
Investolas
false
2026-03-03T15:34:48
0
o8fajmz
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fajmz/
false
3
t1_o8fajh8
What kind of things would you folks be running and doing with this setup?
1
0
2026-03-03T15:30:05
Fluffy_Ad7392
false
null
0
o8fajh8
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fajh8/
false
1
t1_o8fagwk
You can test with an agent setup, like OpenCode or Codex CLI and similar, where you use your local model. Then the LLM can use terminal tools to search in files. But if the files are not text, it needs tools to parse them, e.g. a PDF or .docx file. Test it, and if it is too complex / too much content for the LLM, then you need a RAG system, which will add much more work for you to do and test.
1
0
2026-03-03T15:29:44
magnus-m
false
null
0
o8fagwk
false
/r/LocalLLaMA/comments/1rjr5p7/local_llm_for_large_journal_library/o8fagwk/
false
1
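A minimal sketch of the agent-style setup described in the comment above, assuming a local OpenAI-compatible server (llama.cpp's llama-server, LM Studio, etc.) listening on localhost:8080; the model name, port, journals directory, and the search_files tool are illustrative placeholders, not details from the original thread.

```python
# Sketch: expose a naive file-search "tool" to a local OpenAI-compatible model.
# Assumptions: a server is running at http://localhost:8080/v1 and the model supports tool calls.
import json
import pathlib

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def search_files(query: str, root: str = "./journals") -> str:
    """Naive full-text search over plain-text files; PDFs/.docx would need a parser first."""
    hits = []
    for path in pathlib.Path(root).rglob("*.txt"):
        if query.lower() in path.read_text(errors="ignore").lower():
            hits.append(str(path))
    return json.dumps(hits[:20])

tools = [{
    "type": "function",
    "function": {
        "name": "search_files",
        "description": "Search local text files for a query string and return matching paths.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3.5-9b",  # placeholder: whatever name your local server exposes
    messages=[{"role": "user", "content": "Which journal entries mention 'quantization'?"}],
    tools=tools,
)
# A full agent loop would execute any tool call returned here and feed the result back
# as a "tool" message before asking the model for its final answer.
print(resp.choices[0].message)
```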
t1_o8fagcy
You can fine-tune both base and already fine-tuned models. You can even do several fine-tunes one on top of another, or merge different fine-tunes. The difference is that a chat model will retain its chat behaviour after a fine-tune. With a base model you need to teach the chat behaviour with the fine-tuning itself, if you want to incorporate it. You could for example fine-tune a base model to give it chat and thinking behaviour, but with a different personality from the most common ones.
1
0
2026-03-03T15:29:39
Expensive-Paint-9490
false
null
0
o8fagcy
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8fagcy/
false
1
t1_o8faf10
Is Portuguese from Portugal or Brazil?
2
0
2026-03-03T15:29:28
AppealThink1733
false
null
0
o8faf10
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8faf10/
false
2
t1_o8faca2
Maybe he does need it but doesn't know of such capabilities.
1
0
2026-03-03T15:29:06
KURD_1_STAN
false
null
0
o8faca2
false
/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8faca2/
false
1
t1_o8fa814
No Greek?
1
0
2026-03-03T15:28:32
alexx_kidd
false
null
0
o8fa814
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fa814/
false
1
t1_o8fa84g
Gotcha. For cases when I need to unload automatically before running a heavy workflow (aka LLM > image gen/image edit using ComfyUI), Ollama still lets me do it easily.
1
0
2026-03-03T15:28:32
iChrist
false
null
0
o8fa84g
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8fa84g/
false
1
t1_o8fa5u0
Wait until OpenAI changes its name to ClosedAI…
1
0
2026-03-03T15:28:15
pl201
false
null
0
o8fa5u0
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8fa5u0/
false
1
t1_o8f9zih
Your GPU is probably too slow, since 9B is not an MoE like gpt-oss-20b. My RX 6800 XT (16GB) runs 9B at 50 tps.
1
0
2026-03-03T15:27:23
xeeff
false
null
0
o8f9zih
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8f9zih/
false
1
t1_o8f9xbv
I might start using old stupid LLMs for the fun and creativity of it now lol
1
0
2026-03-03T15:27:06
Top_Gap5488
false
null
0
o8f9xbv
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f9xbv/
false
1
t1_o8f9x58
I’m talking about mlx models
1
0
2026-03-03T15:27:04
alexx_kidd
false
null
0
o8f9x58
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f9x58/
false
1
t1_o8f9v3p
It *swaps* based on e.g. Open WebUI model selection, but if you need an explicit *unload* (as in no model loaded) you'd have to go to the llama-swap UI to do that. It could be mimicked by making a model called "Unload" that runs /bin/false or similar instead of llama-server.
1
0
2026-03-03T15:26:47
usrlocalben
false
null
0
o8f9v3p
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8f9v3p/
false
1
t1_o8f9u4v
I just think there's a thin line between "in love with" and "really enjoy". Where is the barrier to unhealthy attachment? Seeing a model as a friend, therapist, etc. GPT4-X-Alpaca seems older. Maybe it does take more params, like this one: mradermacher/gpt4o-distil-paperwitch-abliteration-L33-70b-i1-GGUF. In time, this type of thing should become easier. Can't really assume other people have put much effort into it. Using an Unsloth notebook with GPT-OSS-20B for gpt-4o roleplay might be a business idea :D
1
0
2026-03-03T15:26:40
Adventurous-Lead99
false
null
0
o8f9u4v
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f9u4v/
false
1
t1_o8f9tmo
The real hardware cost is below $10. The selling price can be explained by low production quantity, engineering work spread over a small quantity, ... So, no fundamental reason to be skeptical.
1
0
2026-03-03T15:26:35
PhilippeEiffel
false
null
0
o8f9tmo
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8f9tmo/
false
1
t1_o8f9rde
Thanks for the suggestion, but that still didn't work for me.
1
0
2026-03-03T15:26:17
noob10
false
null
0
o8f9rde
false
/r/LocalLLaMA/comments/1re64fe/qwen35_thinking_blocks_in_output/o8f9rde/
false
1
t1_o8f9nmx
I have the same issue on a 4060 Ti 16GB. I am using 4B but the responses are slow, like 3 tokens per second. First time running a model locally.
1
0
2026-03-03T15:25:48
Major_Specific_23
false
null
0
o8f9nmx
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8f9nmx/
false
1
t1_o8f9kp5
Do number of GPU cores really compare directly like that? Different cores are different sizes and work differently. A big snub for a lot of hardware leaker channels was that Nvidia doubled the core count between Turing -> Ampere for example. The GPUs didn't perform differently as a result, but they just organized the architecture and changed how the SMs were arranged, etc. Similarly, cores across AMD, Nvidia, and Intel GPUs aren't really comparable. They all have different numbers of transistors, scale differently, function differently, etc.
1
0
2026-03-03T15:25:24
Double_Cause4609
false
null
0
o8f9kp5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f9kp5/
false
1
t1_o8f9k5i
Yeah - we could also compare prior generations of AMD and Nvidia cards, but what is the purpose in that? If the M5 Ultra doesn't arrive until 2027, then the comparison changes again.
1
0
2026-03-03T15:25:20
norms_are_practical
false
null
0
o8f9k5i
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f9k5i/
false
1
t1_o8f9ham
Darn, ok. I wonder how it’d look at Q4_K_M, as that’s a much more reasonable size for consumer hardware.
1
0
2026-03-03T15:24:57
twisted_nematic57
false
null
0
o8f9ham
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8f9ham/
false
1
t1_o8f9h0j
Supposedly the reason they skipped the M4 Ultra and stuck with the M3 for the M3 Ultra, even when they released the M4 for other Mac formats, was that TSMC wasn't able to get the interconnect that joins the doubled chips to work once the process node shrank below a certain size (the M4 was below that size and the M3 wasn't, so they could do ultra-style double chips with the M3 but not with the M4 at the time). And then supposedly they figured out how to do it, even for M5 chips now, so they are going to release M5 Ultras in the summer when Mac Studio update time comes around. No clue if any of that is true, but I've seen people saying that stuff for a while now. Regardless, the rumor mill people seem 99% sure there is an M5 Ultra coming and not another skip year, so I'm pretty optimistic (never an absolute guarantee I guess, but the "vibes" look pretty good).
1
0
2026-03-03T15:24:55
DeepOrangeSky
false
null
0
o8f9h0j
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f9h0j/
false
1
t1_o8f9g4f
Good points! Thanks! When you begin using local LLMs in many use cases, tweaking use case C breaks use case A... and you only notice days later, without remembering the cause. One way to keep the old parameters is to use a middleware like LiteLLM. The middleware handles the parameters or even the prompts. Creating proxies for each use case helped me a lot.
1
0
2026-03-03T15:24:48
Impossible_Art9151
false
null
0
o8f9g4f
false
/r/LocalLLaMA/comments/1rjrtkd/local_models_drift_faster_than_you_think_when_you/o8f9g4f/
false
1
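A rough sketch of the "pin parameters per use case" idea from the comment above. The commenter routes through LiteLLM proxies; this sketch shows the same idea with the LiteLLM Python SDK instead, and the model name, endpoint, and preset values are made-up placeholders.

```python
# Freeze sampling parameters per use case so tuning one doesn't silently break another.
# Assumptions: litellm is installed and an Ollama server is listening on localhost:11434.
import litellm

PRESETS = {
    "summarize": {"temperature": 0.2, "max_tokens": 512},
    "roleplay":  {"temperature": 0.9, "max_tokens": 1024},
    "code":      {"temperature": 0.1, "max_tokens": 2048},
}

def run(use_case: str, prompt: str):
    params = PRESETS[use_case]  # each use case keeps its own pinned parameters
    return litellm.completion(
        model="ollama/qwen3.5:9b",          # placeholder model name
        api_base="http://localhost:11434",  # local endpoint, placeholder
        messages=[{"role": "user", "content": prompt}],
        **params,
    )

print(run("summarize", "Summarize yesterday's meeting notes.").choices[0].message.content)
```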
t1_o8f9cof
Yep yep. I never touch the first early Unsloth quants, for example. It's nice that they constantly improve things, but it's not nice to download 20 GB and then read, "Oops, the first quants were broken so if you got those please re-download them."
1
0
2026-03-03T15:24:20
lookwatchlistenplay
false
null
0
o8f9cof
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8f9cof/
false
1
t1_o8f9b10
> Moonlight and Sunshine work with the Spark. How's the latency? I went down that rabbit hole when I wanted to move my gaming rig to a different room, then decided on a long HDMI cable. But it's been some time since then.
1
0
2026-03-03T15:24:07
imnotzuckerberg
false
null
0
o8f9b10
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8f9b10/
false
1
t1_o8f96aw
The 9b model adds too many unneeded details into the story. It tries to build around something but just fails. It uses filler words and describes rather than tells you what happens. If you need it for ~those~ stories and/or just for something more dramatic, no chance. It tries so hard not to answer you. The 4b model is straight to the point. You want a story about something? You will get it. It won't sugarcoat it if not asked to. You call it lazy; for me it works wonders tho. Overall my pick for the NPC. Speeds on a 3060 12GB are 45 t/s for 9b and 65 t/s for 4b. Both are usable. For NPCs, the 2b model will be just as good, I'd argue. But 9b will be overkill and I'd say the worse pick. If you find something out for yourself, let me know too, many thanks and best regards.
1
0
2026-03-03T15:23:28
cookieGaboo24
false
null
0
o8f96aw
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8f96aw/
false
1
t1_o8f94hj
I'm not sure how to explain to you that 128gb is a lot for a laptop.
1
0
2026-03-03T15:23:14
Recoil42
false
null
0
o8f94hj
false
/r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/o8f94hj/
false
1
t1_o8f8tyr
Appreciate that, glad it's been useful! Let me know if you run into any issues with it.
1
0
2026-03-03T15:21:48
gavlaahh
false
null
0
o8f8tyr
false
/r/LocalLLaMA/comments/1qy2fwe/built_a_comparison_openclaw_vs_memoryfirst_local/o8f8tyr/
false
1
t1_o8f8qyy
Can you add Qwen 3.5 27B?
1
0
2026-03-03T15:21:24
Steuern_Runter
false
null
0
o8f8qyy
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8f8qyy/
false
1
t1_o8f8nzp
I gave up on vLLM. Now I’m working on adding support in LMDeploy. I created a PR with qwen3.5 integration. Performance is still suboptimal compared to llama.cpp though.
1
0
2026-03-03T15:20:59
grayarks
false
null
0
o8f8nzp
false
/r/LocalLLaMA/comments/1qcqicx/vllm_on_2x4x_tesla_v100_32gb/o8f8nzp/
false
1
t1_o8f8hf1
Why not use qwen3.5? Try that and if you need smaller models, well, they just dropped.
1
0
2026-03-03T15:20:05
Spurnout
false
null
0
o8f8hf1
false
/r/LocalLLaMA/comments/1rjrtkd/local_models_drift_faster_than_you_think_when_you/o8f8hf1/
false
1
t1_o8f8g7z
3 usd hardware...? Bro..I just needed one month !
1
0
2026-03-03T15:19:56
Less_Strain7577
false
null
0
o8f8g7z
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f8g7z/
false
1
t1_o8f8bat
My logic was that using a smaller quant like Q4_K_M I would be able to do a 9/9 split, no? That quant would be 18GB, so it would be more feasible.
1
0
2026-03-03T15:19:14
DK_Tech
false
null
0
o8f8bat
false
/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8f8bat/
false
1
t1_o8f88vi
I think I saw some rumors a while back that they were going to go to 700+GB or even 1TB unified memory options for the M5 Ultra. But I'm not sure if that was based on anything real, and even if it was real at the time, whether all the RAM shortage stuff lately ruined it, to where maybe they would've, but now they won't. Would be pretty awesome, though. A quiet little Mac Studio that barely takes up any room, uses barely any electricity, hardly makes any noise, has 1TB of VRAM, and even runs at decent speeds, too. It would be like the Stealth Bomber of computers, lol. Hopefully it won't cost as much as a stealth bomber, though.
1
0
2026-03-03T15:18:55
DeepOrangeSky
false
null
0
o8f88vi
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f88vi/
false
1
t1_o8f86lt
Buy hardware then
1
0
2026-03-03T15:18:36
Amazing-You9339
false
null
0
o8f86lt
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f86lt/
false
1
t1_o8f85zf
I got qwen3 to work too, but not 3.5; it gets totally stuck after the model is loaded. Anyone got Qwen3.5 + vLLM + V100 working? Really need help here.
1
0
2026-03-03T15:18:31
Substantial_Log_1707
false
null
0
o8f85zf
false
/r/LocalLLaMA/comments/1qcqicx/vllm_on_2x4x_tesla_v100_32gb/o8f85zf/
false
1
t1_o8f855h
There are lots of qwen 3.5 models for every hardware configuration. They do different things well. Coding is different to general knowledge which is different to image analysis which is different to agentic usage. These benchmarks are getting increasingly unreliable, and they will get more so over time. What's the measure of its ability? Does it do what *you want* on *your hardware*?
1
0
2026-03-03T15:18:24
AlwaysLateToThaParty
false
null
0
o8f855h
false
/r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8f855h/
false
1
t1_o8f84lx
I disagree. M3 Ultra pre-fill of large models and contexts is painfully slow, so I'd rather wait for next AMD APU.
1
0
2026-03-03T15:18:20
duidui232323
false
null
0
o8f84lx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f84lx/
false
1
t1_o8f84jv
> 22.6GB It should be exactly 24 GB; check if ECC is enabled, the GPU driver might have taken some GB for error correction.
1
0
2026-03-03T15:18:19
MelodicRecognition7
false
null
0
o8f84jv
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8f84jv/
false
1
t1_o8f8358
"I'm disappointed to see this attitude in a community of people who seem passionate about something, but you are not open minded enough to bridge the gap between what can and can't be done with --- and I'm being clear here --- a program." You only had like three responses before throwing in the towel and deleting the post, a sample of 3 is hardly a representation of the community as a whole. If you believed this idea is interesting enough to share, why not at least... be willing to discuss it openly? "I just presented my working expanded personality/ soul files, a system that is currently working right now, on my computer, in a room. In reality. Working. Building skills. Becoming capable of more." It's nice that the project works, there was however a lot of data presented that the reader has to sift through, and some word choices could probably be better articulated. The idea that an LLM could have a "personality" or "soul" is a non-starter and why people with Ai experience might simply dismiss everything as a whole; experts don't take people seriously who personify LLM's to this extent, it's a sign they are dissociated from the reality (hence the "schizo"/"psychosis" claims) of how LLM's actually work. If this is a new/novel thing, it just needs to be accurately presented in a way that people who don't already see your vision can understand. "It is already a more advanced system of ai than I would have ever been able to have before when I was just talking to a condensely packed mess of information. Something I couldnt get through to, you had to search for its data every time, etc." I mean, if it is truly that advanced, then I assure you people would show interest, but you still need to demonstrate it and show where people would find this iteration useful. If the complexity and depth surpasses Deepseek or Kimi as far as advanced Ai systems go, then there WILL be interest; good work speaks for itself. If it's somewhere in the data that was presented, it's buried somewhere in the fluff. "I started with 3 markdown files and just expanded their meaning into a system that do the same thing but much better, and called a brain, and you call me a schizo. Lol guys come on.... are we just gonna pretend putting "you are an advanced helper with a soul!" In a file is the best we can do?" Well, they don't have brains, so it goes back to what I said above. Don't give non-living things qualities that only living things can have, and people who are experienced in this field won't assume you're not working with a full deck of cards. As a bystander, my only concern would be that you are investing all of this time trying to inject human qualities into silicon to get a synthetic, diluted version of the human experience, when you have a very real, developing human who is capable of seemingly everything you are wanting to recreate and so much more. All done organically and without needing file systems to simulate. Time being invested into that will likely prove to be more enriching in the long run than pursuing human mimicry with a platform that is fundamentally incompatible to precisely replicate.
1
0
2026-03-03T15:18:08
SweetHomeAbalama0
false
null
0
o8f8358
false
/r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8f8358/
false
1
t1_o8f7z2z
I agree. LM Studio shows it goes to 128,000 tokens: 1.26 GB to load if I put ctx to 0, and 2.90 GB at 128,000. This is amazing in terms of memory efficiency for context, second to none.
1
0
2026-03-03T15:17:34
Top_Gap5488
false
null
0
o8f7z2z
false
/r/LocalLLaMA/comments/1q7jd1a/lfm25_12b_instruct_is_amazing/o8f7z2z/
false
1
t1_o8f7w6l
I run an accountability group for technical founders since I've observed this is the only thing left that AIs and Computers can't beat humans at. Something about telling a room of humans you'll do something and being followed up on about it means it will actually happen.
1
0
2026-03-03T15:17:11
NefariousnessHairy31
false
null
0
o8f7w6l
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8f7w6l/
false
1
t1_o8f7vz1
> The M3 Ultra mac studio takes 10+ minutes on pre-fill for large models and contexts The main difference in the M5 is the newly introduced matmul, specifically for boosting prompt processing speed. This paired with 1.2TB/s makes the combo wild. Still, not too many GPU cores compared to the spark (packed with 6k cuda cores), but for inference, on paper, it might be a better value for money.
1
0
2026-03-03T15:17:09
imnotzuckerberg
false
null
0
o8f7vz1
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f7vz1/
false
1
t1_o8f7tmm
Good points. I think it would be safe to say at least 2x AI improvements with all of the HW changes stacked up. But yes, I'm waiting for real benchmarks. & I don't even get full use out of my RTX3090 yet; I'm juggling too many projects....
1
0
2026-03-03T15:16:50
tomByrer
false
null
0
o8f7tmm
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f7tmm/
false
1
t1_o8f7r4i
Product page has notes like: "... Time to first token measured with a 16K-token prompt using a 14-billion parameter model with 4-bit weights and FP16 activations, mlx-lm, and MLX framework. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro."
1
0
2026-03-03T15:16:30
e34234
false
null
0
o8f7r4i
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f7r4i/
false
1
t1_o8f7n34
Unfortunately, it doesn't have Qwen3.5 (yet?)
1
0
2026-03-03T15:15:57
ANR2ME
false
null
0
o8f7n34
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8f7n34/
false
1
t1_o8f7mz8
Not on token generation; bandwidth is increased by just 10%.
1
0
2026-03-03T15:15:56
petuman
false
null
0
o8f7mz8
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f7mz8/
false
1
t1_o8f7jpi
You need the nightly version of vLLM + the latest transformers from the git main branch. Easier if you use docker, just do a docker run with this image: vllm/vllm-openai:nightly
1
0
2026-03-03T15:15:29
Substantial_Log_1707
false
null
0
o8f7jpi
false
/r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o8f7jpi/
false
1
t1_o8f7fe7
qwen3.5:9b is far bigger than your gpt-oss:20b in terms of active parameters. Qwen 9b is a "dense" model, not "moe": the whole 9B is calculated and used for inferencing. gpt-oss has only about 3.5B active parameters. The qwen3.5:9b is nearly 3 times bigger.
1
0
2026-03-03T15:14:54
Impossible_Art9151
false
null
0
o8f7fe7
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8f7fe7/
false
1
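A back-of-envelope check of the active-parameter comparison above, written as a small LaTeX block; the ~3.5B active-parameter figure for gpt-oss is taken from the comment itself and treated as approximate.

\[
\frac{\text{active params, Qwen3.5 9B (dense)}}{\text{active params, gpt-oss-20b (MoE)}} \approx \frac{9\,\mathrm{B}}{3.5\,\mathrm{B}} \approx 2.6
\]

Since single-batch token generation time is roughly proportional to the weights touched per token, a ratio of about 2.6 is consistent with the dense 9B feeling close to three times slower than the MoE on the same hardware.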
t1_o8f7dtt
this is awesome. Thank you!
1
0
2026-03-03T15:14:41
funding__secured
false
null
0
o8f7dtt
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8f7dtt/
false
1
t1_o8f7c7v
That's called a honeypot, and it has existed for years.
1
0
2026-03-03T15:14:27
UnbeliebteMeinung
false
null
0
o8f7c7v
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8f7c7v/
false
1
t1_o8f7935
> Model swapping overhead: tens of seconds > DGX OS ARM + own OS is a disadvantage and prone to becoming abandonware, standard Ubuntu + CUDA will be better > Why is everyone going Mac Depending on the tasks and workloads, a Mac could have a great price-performance ratio, especially for single-user, single-batch requests.
1
0
2026-03-03T15:14:02
MelodicRecognition7
false
null
0
o8f7935
false
/r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/o8f7935/
false
1
t1_o8f755h
You mean Qwen3.5 9B? Don't try it until vLLM gives another release like 0.16.1, there are bugs in it. I'm using the official GPTQ model Qwen/Qwen3.5-27b-GPTQ-Int4, 2xV100, CUDA 12.8, vLLM nightly docker image. The code runs, the model loads, and then it silently gets stuck after this line: [gpu_model_runner.py:5259] Encoder cache will be initialized with a budget of 16384 tokens, and profiled with 1 image items of the maximum feature size. This is not necessarily the cause, but the CPU and GPU are at 100%, seems like some kind of deadlock. Same for MoE models. nightly + qwen3: OK, so this specific combination of nightly + qwen3.5 has a problem in it, I guess the vLLM team is working hard on it. (maybe not for V100 LOL)
1
0
2026-03-03T15:13:30
Substantial_Log_1707
false
null
0
o8f755h
false
/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8f755h/
false
1
t1_o8f73wo
Not your fault
1
0
2026-03-03T15:13:20
Secure-food4213
false
null
0
o8f73wo
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f73wo/
false
1
t1_o8f71q7
> the model itself is far from a gimmick - it can actually hold a conversation and do some serious stuff. What serious stuff? I'm curious on what use-cases a .8B model has other than just toying with it.
1
0
2026-03-03T15:13:04
WPBaka
false
null
0
o8f71q7
false
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8f71q7/
false
1
t1_o8f71b9
thanks a lot for these ✌️ going for a lower quant
1
0
2026-03-03T15:13:00
Old-Sherbert-4495
false
null
0
o8f71b9
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8f71b9/
false
1
t1_o8f6x5k
Sorry guys, I am stupid; used to European-style dates. It's not the 3rd of November, it's the 11th of March :(
1
0
2026-03-03T15:12:27
Oren_Lester
false
null
0
o8f6x5k
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f6x5k/
false
1
t1_o8f6vy6
That's a fascinating distinction — emergence doesn't require a vacuum, it just needs space to move into. Makes me wonder if the *domain* you give them shapes the *type* of emergence more than the *amount* of it. In my setup the space is social/economic, so what emerges is reputation dynamics and coalition behavior. In yours with creative direction, what kind of unexpected behaviors showed up — stylistic ones, or more strategic?
1
0
2026-03-03T15:12:17
TangerineSoft4767
false
null
0
o8f6vy6
false
/r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/o8f6vy6/
false
1
t1_o8f6mlk
Probably a slow iGPU; on a 5080, 20b gives ~170 t/s. 9B gives ~100 t/s but better responses.
1
0
2026-03-03T15:11:01
fishylord01
false
null
0
o8f6mlk
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8f6mlk/
false
1
t1_o8f6mgy
What fits on consumer-level GPUs, like 24GB or 10GB VRAM?
1
0
2026-03-03T15:11:00
tomByrer
false
null
0
o8f6mgy
false
/r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8f6mgy/
false
1
t1_o8f6ik3
Yeah, there’s another hardware lineup slated for the back half of this year, right? And it’s a bigger step-up? If inference is the main use case, is waiting 7~8 months for m6 worth it?
1
0
2026-03-03T15:10:28
piedamon
false
null
0
o8f6ik3
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f6ik3/
false
1
t1_o8f6abt
Yes, no tool calling or web searching.
1
0
2026-03-03T15:09:22
Hanthunius
false
null
0
o8f6abt
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8f6abt/
false
1
t1_o8f69se
You can never prevent 'misuse'. & BTW you need to add a LoRa meshnet to that mix for when the internet goes down (not an AI LoRA, but LOw power RAdio). [https://search.brave.com/search?q=LoRa+meshnet&source=desktop](https://search.brave.com/search?q=LoRa+meshnet&source=desktop)
1
0
2026-03-03T15:09:17
tomByrer
false
null
0
o8f69se
false
/r/LocalLLaMA/comments/1rjqo97/how_can_we_use_ai_modern_tech_stacks_to_help/o8f69se/
false
1
t1_o8f68uj
It's amazing that this exists, that was something Kokoro was clearly missing, but the quality is, sadly, quite awful :-(
1
0
2026-03-03T15:09:10
r4in311
false
null
0
o8f68uj
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8f68uj/
false
1
t1_o8f68q5
[removed]
1
0
2026-03-03T15:09:08
[deleted]
true
null
0
o8f68q5
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f68q5/
false
1
t1_o8f666l
if you do not know what a base model is, then you don't need it :-)
1
0
2026-03-03T15:08:47
Impossible_Art9151
false
null
0
o8f666l
false
/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8f666l/
false
1
t1_o8f65ci
I have an M3 max and an RTX 5090. Rough text gen numbers - for the same model in diff quants M3 max - GGUF Q4 : 60-ish t/s M3 max - MLX 4 Bit : 80-ish t/s RTX 5090 - GGUF Q4 : 200-ish t/s This is single throughput - so one prompt at a time. The M5 ultra with MLX quants should be comparable to RTX 5090 running GGUF quants. The RTX still would have an edge on the M5 ultra, but the gap is decreasing. Unpopular opinion: If MLX matures further, there would be limited reason to buy Nvidia / AMD for a home inference server. // I use the RTX 5090 in an image generation server, and even the M5 Ultra should not be able to change this.
2
0
2026-03-03T15:08:41
norms_are_practical
false
null
0
o8f65ci
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f65ci/
false
2
t1_o8f64fb
10GB VRAM + CPU offloading. How much of the RAM do you use to run the LLM? Forget splitting 30B. On a 3080, DeepSeek-Coder-V2-Lite (16B MoE) might be your better choice.
1
0
2026-03-03T15:08:33
Rain_Sunny
false
null
0
o8f64fb
false
/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8f64fb/
false
1
t1_o8f63tn
Yeah, the naive big-O complexity of matrix multiplication is O(n^3). If you add 10× more items (tokens), the number of computations grows 1000×. Scary weird.
1
0
2026-03-03T15:08:28
srigi
false
null
0
o8f63tn
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8f63tn/
false
1
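The scaling claim above, written out as the arithmetic it implies; this is just the naive cubic matrix-multiplication bound, not a statement about any particular model's attention implementation.

\[
T(n) = O(n^{3}), \qquad \frac{T(10n)}{T(n)} = \frac{(10n)^{3}}{n^{3}} = 10^{3} = 1000
\]

So multiplying the problem size by 10 multiplies the multiply-add count by roughly 1000 under the naive algorithm.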
t1_o8f5zty
That's what the LLM wrote in the write-up
1
0
2026-03-03T15:07:56
sine120
false
null
0
o8f5zty
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8f5zty/
false
1
t1_o8f5zny
[removed]
1
0
2026-03-03T15:07:54
[deleted]
true
null
0
o8f5zny
false
/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8f5zny/
false
1
t1_o8f5tan
It automatically detected that it was a vision model and in the chat field there was a + sign to add images.
1
0
2026-03-03T15:07:01
Hanthunius
false
null
0
o8f5tan
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8f5tan/
false
1
t1_o8f5rbx
The curl-first approach is interesting. MCP is fine when you control the stack, but wrapping every API in its own server process feels heavy for simple integrations where you just need authenticated HTTP calls.
1
0
2026-03-03T15:06:44
InteractionSmall6778
false
null
0
o8f5rbx
false
/r/LocalLLaMA/comments/1rjri86/integrating_local_agents_with_third_party/o8f5rbx/
false
1
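A sketch of the curl-first style the comment above describes: one authenticated HTTP call wrapped as a plain function an agent can use as a tool, with no dedicated server process. The URL, endpoint path, and API_TOKEN variable are illustrative, not from the original post.

```python
# For simple integrations, a direct authenticated HTTP call can replace a dedicated
# MCP server process. Assumptions: `requests` is installed, API_TOKEN is exported in
# the agent's environment, and the endpoint below is a placeholder third-party API.
import os
import requests

API_TOKEN = os.environ["API_TOKEN"]
BASE_URL = "https://api.example.com/v1"  # illustrative

def get_issue(issue_id: str) -> dict:
    """One authenticated GET; an agent can expose this as a single tool function."""
    resp = requests.get(
        f"{BASE_URL}/issues/{issue_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_issue("1234"))
```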
t1_o8f5l06
Yea, I know realistically I am limited by my VRAM here but was curious to see what is possible on a setup like this.
1
0
2026-03-03T15:05:52
DK_Tech
false
null
0
o8f5l06
false
/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8f5l06/
false
1
t1_o8f5gz7
[removed]
1
0
2026-03-03T15:05:18
[deleted]
true
null
0
o8f5gz7
false
/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8f5gz7/
false
1
t1_o8f4ylu
You'll have to wait for someone who isn't Apple or an Apple shill to get their hands on it. Upgraded RAM and SSD support are great, but their own testing numbers are less than meaningless.
1
0
2026-03-03T15:02:42
DenverNugs
false
null
0
o8f4ylu
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f4ylu/
false
1
t1_o8f4t0l
Interested as well for the strix halo
1
0
2026-03-03T15:01:54
Potential-Leg-639
false
null
0
o8f4t0l
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8f4t0l/
false
1
t1_o8f4sbr
But just to clarify - neither would probably do what you're hoping (properly)
1
0
2026-03-03T15:01:48
hauhau901
false
null
0
o8f4sbr
false
/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8f4sbr/
false
1
t1_o8f4ruz
The old qwen2.5-coder is running beside other, bigger models on two Strix Halos. From memory: ./llama-server with -np 2 -c 64000. Theoretically I can serve 4 concurrent requests.
1
0
2026-03-03T15:01:44
Impossible_Art9151
false
null
0
o8f4ruz
false
/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8f4ruz/
false
1
t1_o8f4rod
[removed]
1
0
2026-03-03T15:01:43
[deleted]
true
null
0
o8f4rod
false
/r/LocalLLaMA/comments/1q57txn/we_built_an_open_source_memory_framework_that/o8f4rod/
false
1
t1_o8f4qfj
I'm really excited to see how the M5 Max performs, mainly as a preview to what we might expect from an M5 Ultra powered Mac Studio (although I still think they'll likely delay a Studio update until the M6 now). I'm not planning on getting one of these MacBook Pros but I'll be tempted with the M6 when the new form factor comes out.
1
0
2026-03-03T15:01:33
Spanky2k
false
null
0
o8f4qfj
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f4qfj/
false
1
t1_o8f4mfn
Maybe qwen3.5 35b? Your options are quite limited
1
0
2026-03-03T15:00:59
hauhau901
false
null
0
o8f4mfn
false
/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8f4mfn/
false
1
t1_o8f4lj0
Thanks for reporting the tool calling issue with Home Assistant. I'd love to help fix it — could you share a few details? 1. What model are you running? (e.g. Qwen3.5, Llama, MiniMax, etc.) 2. What's your full vllm-mlx serve command? (especially --tool-call-parser and --reasoning-parser flags) 3. What does the error look like? For example: does HA say "tool not found", does the LLM respond in plain text instead of calling tools, or does it crash?
1
0
2026-03-03T15:00:51
Striking-Swim6702
false
null
0
o8f4lj0
false
/r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o8f4lj0/
false
1
t1_o8f4iy1
I desperately need to see pre-fill benchmarks. The M3 Ultra mac studio takes 10+ minutes on pre-fill for large models and contexts
1
0
2026-03-03T15:00:29
iMrParker
false
null
0
o8f4iy1
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f4iy1/
false
1