Schema: name (string), body (string), score (int64), controversiality (int64), created (timestamp[us]), author (string), collapsed (bool), edited (timestamp[us]), gilded (int64), id (string), locked (bool), permalink (string), stickied (bool), ups (int64)
t1_o8fpea0
elaborate please
1
0
2026-03-03T16:41:01
MelodicRecognition7
false
null
0
o8fpea0
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fpea0/
false
1
t1_o8fpag7
[removed]
1
0
2026-03-03T16:40:31
[deleted]
true
null
0
o8fpag7
false
/r/LocalLLaMA/comments/1rjt5j4/agentic_rl_hackathon_this_weekend_in_sf/o8fpag7/
false
1
t1_o8fp8cv
How does it compare to cozyvoice?
1
0
2026-03-03T16:40:15
HugoCortell
false
null
0
o8fp8cv
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fp8cv/
false
1
t1_o8fp8e6
Works perfectly fine now, thank you. In the meantime I started the first two runs (1. Qwen3.5 27B and 2. Qwen3.5 35B A3B). I will run some more runs in a loop now and push these once I've got 5 for both. For anyone interested: Qwen3.5 27B, first run: up to round 38 with 1033760 total tokens. Qwen3.5 35B A3B, first run: up to round 37 with 1384168 total tokens.
1
0
2026-03-03T16:40:15
Pakobbix
false
null
0
o8fp8e6
false
/r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8fp8e6/
false
1
t1_o8fp7dx
Same
1
0
2026-03-03T16:40:07
marhalt
false
null
0
o8fp7dx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fp7dx/
false
1
t1_o8foum6
He did what? 
1
0
2026-03-03T16:38:26
Gallardo994
false
null
0
o8foum6
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8foum6/
false
1
t1_o8fomo7
No, the VRAM is way lower: gpt-oss sits around 11GB and the Qwen models sit at about 4 and 7GB, still well below my 16GB limit.
1
0
2026-03-03T16:37:24
spacecad_t
false
null
0
o8fomo7
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fomo7/
false
1
t1_o8fok2d
I don't know what MCP is
1
0
2026-03-03T16:37:03
AICatgirls
false
null
0
o8fok2d
false
/r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fok2d/
false
1
t1_o8foigw
Return it, M3 Ultra not worth it performance wise and M5 Ultra will be a huge upgrade.
1
0
2026-03-03T16:36:50
duidui232323
false
null
0
o8foigw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8foigw/
false
1
t1_o8fohpa
Deploybase is a dashboard for tracking real-time GPU and LLM pricing across cloud and inference providers. You can view performance stats and pricing history, compare side by side, and bookmark to track any changes. [https://deploybase.ai](https://deploybase.ai/)
1
0
2026-03-03T16:36:44
Micky_Haller
false
null
0
o8fohpa
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o8fohpa/
false
1
t1_o8fogxo
What’s up with the 3D window of Maya? No way you are using AI to interact with Maya modeling and getting something useful…
1
0
2026-03-03T16:36:38
liviuberechet
false
null
0
o8fogxo
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fogxo/
false
1
t1_o8fogi7
Woooooo what happened there?
1
0
2026-03-03T16:36:35
Ok-Internal9317
false
null
0
o8fogi7
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fogi7/
false
1
t1_o8foev6
Sick
1
0
2026-03-03T16:36:22
boinkmaster360
false
null
0
o8foev6
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8foev6/
false
1
t1_o8focyz
An integrated GPU won't help with token generation speed; LLM token generation is mainly memory-bandwidth bound. An integrated GPU (assuming you are using a backend that can utilize it properly) can at most help with prompt processing. Don't expect dedicated-GPU performance for LLMs from it. Dedicated GPUs are like mini PCs: they have their own chips with hundreds of cores, 600-900 or more GB/s of memory bandwidth, their own board, cooler and all. Your integrated GPU still uses your system RAM. That is why you can run MoE models at usable speeds.
1
0
2026-03-03T16:36:07
BumblebeeParty6389
false
null
0
o8focyz
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8focyz/
false
1
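The bandwidth argument above turns into a quick back-of-the-envelope calculation. A minimal sketch with purely illustrative numbers (assumed, not measured): the ceiling on token generation is roughly memory bandwidth divided by the bytes of weights touched per token.

```python
# Rough upper bound on generation speed for a memory-bandwidth-bound decoder:
# every generated token has to stream the (active) weights from memory.
# All numbers below are illustrative assumptions, not measurements.

def max_tokens_per_second(bandwidth_gb_s: float, active_weight_gb: float) -> float:
    """Ideal ceiling: bandwidth divided by bytes of weights read per token."""
    return bandwidth_gb_s / active_weight_gb

# Dual-channel DDR5 system RAM (~80 GB/s, assumed) vs. a dedicated GPU (~900 GB/s).
for name, bw in [("system RAM (iGPU)", 80.0), ("dedicated GPU", 900.0)]:
    # A hypothetical 7 GB quantized dense model vs. a MoE with ~2 GB active per token.
    for model, size in [("7 GB dense", 7.0), ("MoE, ~2 GB active", 2.0)]:
        print(f"{name:18s} {model:18s} <= {max_tokens_per_second(bw, size):6.1f} tok/s")
```

This is why MoE models stay usable on system RAM: only the active experts are streamed per token, so the bandwidth ceiling sits much higher than for a dense model of the same total size.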
t1_o8foazu
Per-stage microVM isolation with deterministic teardown is the right primitive, and the seccomp-BPF + command allowlists + controlled egress stack looks solid on paper. Questions: how are you handling state handoff between pipeline stages? Is structured output serialized through the host or passed VM-to-VM? What's the cold start overhead per stage, and does it make short-lived tool calls impractical? And is there audit logging of what happened inside each VM that survives teardown? The rootless networking via smoltcp is an interesting choice over TAP; what drove that? Easier deployment, or an actual security benefit from avoiding the host network stack?
1
0
2026-03-03T16:35:51
Loud-Option9008
false
null
0
o8foazu
false
/r/LocalLLaMA/comments/1rbtudq/voidbox_capabilitybound_agent_runtime/o8foazu/
false
1
t1_o8foa6q
So do I stick with my M3 Pro MacBook, upgrade to the current Studio, or wait? Haha. I want to run some models locally now and, more importantly, I want to be ready for larger, better models. I will buy at least 128GB of memory no matter what, but hope to get 256.
1
0
2026-03-03T16:35:45
alexhackney
false
null
0
o8foa6q
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8foa6q/
false
1
t1_o8fo66f
Ha, nah, I just tried 9B with Q8 and it won't fit on my GPU :(
1
0
2026-03-03T16:35:13
Ok-Internal9317
false
null
0
o8fo66f
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8fo66f/
false
1
t1_o8fo1xi
Cool I'll try it out! Currently using gpt-oss-120b and Qwen-next coder 80b. Any other recent models I should try? Or are those still cream of the crop?
1
0
2026-03-03T16:34:40
exciting_kream
false
null
0
o8fo1xi
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fo1xi/
false
1
t1_o8fntru
The missing piece: observation without containment is a dashcam. By the time eBPF captures the bad syscall, it's already executed. The pairing that works is kernel-level tracing + hardware-isolated execution, so the blast radius is contained before the tracer even fires. Have you tested this against agents in microVMs vs containers? Curious if the hook behavior differs when the agent has its own kernel.
1
0
2026-03-03T16:33:35
GarbageOk5505
false
null
0
o8fntru
false
/r/LocalLLaMA/comments/1r8yvu5/i_built_an_ebpf_tracer_to_monitor_ai_agents_the/o8fntru/
false
1
t1_o8fnsz7
Alright, good luck then. Where are you located?
1
0
2026-03-03T16:33:29
Ok-Internal9317
false
null
0
o8fnsz7
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fnsz7/
false
1
t1_o8fnl5q
AFAIK, the H13SSL doesn't have any retimers on the MCIO ports. So, while this might work for Gen 4 cards, I doubt it would work with Gen 5 cards.
1
0
2026-03-03T16:32:28
FullstackSensei
false
null
0
o8fnl5q
false
/r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fnl5q/
false
1
t1_o8fnjb2
Same for me, crashes after a few seconds also on S25 Ultra.
1
0
2026-03-03T16:32:13
Userybx2
false
null
0
o8fnjb2
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8fnjb2/
false
1
t1_o8fnevz
I don't think they'll only do base M6 if they're doing a refresh. They only did that once, with the M5 and it was quite weird. I think it was due to stock/production issues. They'll know people are waiting for the updated design and they will want high end versions of that available.
1
0
2026-03-03T16:31:38
Spanky2k
false
null
0
o8fnevz
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fnevz/
false
1
t1_o8fnch3
Please also look at the llama-server logs for VRAM usage. Is it similar, or does Qwen maybe use more and have to offload more to the CPU?
1
0
2026-03-03T16:31:19
jacek2023
false
null
0
o8fnch3
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fnch3/
false
1
t1_o8fnc15
You need to set the sampling settings like in the qwen documentation
1
0
2026-03-03T16:31:15
CATLLM
false
null
0
o8fnc15
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8fnc15/
false
1
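For anyone unsure what setting the sampling parameters looks like in practice, here is a minimal sketch against a local OpenAI-compatible endpoint such as llama-server. The model name, port, and all sampler values below are placeholders; take the actual values from the Qwen model card rather than from this example.

```python
# Sketch: passing explicit sampling settings to a local OpenAI-compatible server.
# Values are placeholders; use the ones recommended in the Qwen documentation.
import requests

payload = {
    "model": "qwen3.5",                # whatever name the server exposes (placeholder)
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,                # placeholder, check the model card
    "top_p": 0.8,                      # placeholder
    "top_k": 20,                       # placeholder
}
r = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(r.json()["choices"][0]["message"]["content"])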
t1_o8fnb6u
Running ROCm. I suspect I could get a boost on Vulkan, but it would likely boost the gpt-oss model as well, so it's kind of a moot point.
1
0
2026-03-03T16:31:09
spacecad_t
false
null
0
o8fnb6u
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fnb6u/
false
1
t1_o8fn8ml
My goal is to find the best open-source model that can be hosted on a cloud server and produces good-quality images along with accurate text over the images. I'm using gpt-image-1.5 and this model is very good, but I'm also looking at Qwen 2.0 and that seems pretty good as well, though it isn't open yet.
1
0
2026-03-03T16:30:49
mrlockett
false
null
0
o8fn8ml
false
/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o8fn8ml/
false
1
t1_o8fn7wx
God and I thought I was pretty well off with a single 5090.
1
0
2026-03-03T16:30:43
I_Downvote_Cunts
false
null
0
o8fn7wx
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fn7wx/
false
1
t1_o8fn6a4
In Portuguese, the voice cloning sounds very strange: it acquires a strong English accent and ends up choppy, with broken-up words.
1
0
2026-03-03T16:30:31
Dramatic-Rub-7654
false
null
0
o8fn6a4
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fn6a4/
false
1
t1_o8fn0mj
I did; it is significantly slower, around 11 tk/s. The 2B version is the only one that's actually comparable in speed, but it is failing at my tasks.
1
0
2026-03-03T16:29:47
spacecad_t
false
null
0
o8fn0mj
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fn0mj/
false
1
t1_o8fn025
Please talk like a human.
1
0
2026-03-03T16:29:42
Safe_Sky7358
false
null
0
o8fn025
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fn025/
false
1
t1_o8fmxq7
I wouldn't buy an M3 Ultra now. It's a great chip but the prompt processing speeds hold it back. The M5 has drastically improved PP speeds so an M5 Ultra would blow the M3 away. The bigger issue is that they might skip the M5 Ultra entirely and go straight to the M6. My guess is that if they don't update the Mac Studio by WWDC this Summer by the absolute latest then they'll skip and go to the M6 Ultra around this time next year.
1
0
2026-03-03T16:29:23
Spanky2k
false
null
0
o8fmxq7
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fmxq7/
false
1
t1_o8fmwhl
🤯
1
0
2026-03-03T16:29:14
Glazedoats
false
null
0
o8fmwhl
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fmwhl/
false
1
t1_o8fmu2i
No, in this version of the lab, I intentionally didn’t implement provenance tagging or confidence weighting because the goal was to isolate and demonstrate the vulnerability as clearly as possible. I wanted to show how authority can shift purely at the reasoning layer when tool output is treated as binding guidance. Adding provenance tagging or confidence weighting would actually be a strong mitigation here - especially if tool responses are explicitly marked as non-authoritative or validated before being merged into decision logic. For this experiment, the focus was on exposing the trust-boundary failure in its simplest form.😄
1
0
2026-03-03T16:28:55
insidethemask
false
null
0
o8fmu2i
false
/r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/o8fmu2i/
false
1
t1_o8fmsp4
I hate guys like this: 100% hype, zero substance.
1
0
2026-03-03T16:28:44
NoFaithlessness951
false
null
0
o8fmsp4
false
/r/LocalLLaMA/comments/1rjrfu1/qwen35_small_models_compared_9b_vs_4b_vs_2b_vs_08b/o8fmsp4/
false
1
t1_o8fmqwx
Missed opportunity to name it TerminaBench-800
1
0
2026-03-03T16:28:31
ortegaalfredo
false
null
0
o8fmqwx
false
/r/LocalLLaMA/comments/1rjtqgm/local_models_will_participate_in_weapons_systems/o8fmqwx/
false
1
t1_o8fmpw9
Yes, because that's the only option.
1
0
2026-03-03T16:28:22
CATLLM
false
null
0
o8fmpw9
false
/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8fmpw9/
false
1
t1_o8fmntg
Please, post in English, even if it's machine translated. I understand that you might be accessing Reddit through their auto-translated view, but not everyone does. The machine translated version of your comment doesn't work very well in English. I understand that there is overhead when spreading models across cards, but this overhead has gotten a lot less with modern attention mechanisms and reduced context memory requirements like the one in Qwen3.5.
1
0
2026-03-03T16:28:05
spaceman_
false
null
0
o8fmntg
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fmntg/
false
1
t1_o8fmmsa
> I thought a high end LLM orchestrates smaller models and checks their output.
> So the quality of the primary model leads the quality. Is it so?

Generally you want the high-end LLM to play the role of both orchestrator/planner and coder, the reason being that bad decisions can cascade upwards as well as downwards.
1
0
2026-03-03T16:27:58
Recoil42
false
null
0
o8fmmsa
false
/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o8fmmsa/
false
1
t1_o8fmk85
Yes,
1
0
2026-03-03T16:27:37
alexx_kidd
false
null
0
o8fmk85
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fmk85/
false
1
t1_o8fmk8p
https://preview.redd.it/…this - easy jet?
1
0
2026-03-03T16:27:37
BarGroundbreaking624
false
null
0
o8fmk8p
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fmk8p/
false
1
t1_o8fmgy3
Curious to get your feedback! :)
1
0
2026-03-03T16:27:11
OkDragonfruit4138
false
null
0
o8fmgy3
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8fmgy3/
false
1
t1_o8fmfj0
I am getting the below error when I run the above code. Any idea?

```
Traceback (most recent call last):
  File "c:\rashvan\AI - CCA Practise\Agentic_Financial_Advisor\main.py", line 8, in <module>
    dataset = file.readlines()
    ^^^^^^^^^^^^^^^^
  File "C:\Users\rashvan\AppData\Local\Programs\Python\Python312\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 405: character maps to <undefined>
```
1
0
2026-03-03T16:27:01
No_Composer_3311
false
null
0
o8fmfj0
false
/r/LocalLLaMA/comments/1rjo1tp/building_a_simple_rag_pipeline_from_scratch/o8fmfj0/
false
1
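The traceback above is Windows' default cp1252 codec failing on a non-ASCII byte while reading the file. A minimal sketch of the usual fix, assuming the file is UTF-8; the path is a placeholder standing in for whatever is opened on line 8 of main.py.

```python
# Open the file with an explicit encoding instead of the locale default (cp1252
# on Windows). "errors" gives a fallback for genuinely malformed bytes.
# "dataset.txt" is a placeholder path.
with open("dataset.txt", encoding="utf-8", errors="replace") as file:
    dataset = file.readlines()
```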
t1_o8fm972
Nice work! That took no time at all :) PS: which method did you use?
1
0
2026-03-03T16:26:10
OrneryMammoth2686
false
null
0
o8fm972
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fm972/
false
1
t1_o8fm8o5
I require the speed that I get from vLLM, especially under high concurrency.
1
0
2026-03-03T16:26:06
Civil-Top-8167
false
null
0
o8fm8o5
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8fm8o5/
false
1
t1_o8fm8a0
> in fact in about a month

My recommendation then would be to try whatever base model has made today's SOTA look obsolete by then :p
1
0
2026-03-03T16:26:02
KallistiTMP
false
null
0
o8fm8a0
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o8fm8a0/
false
1
t1_o8fm7wh
Unsloth/Qwen3 worked perfectly, but I tried to finetune Qwen3.5 27B with Unsloth and it crashed with an exception inside the Unsloth libs, something about needing the data fields; apparently it gets confused by the image fields. I couldn't make it work, so next time I will try plain transformers. Also, my setup is multi-GPU and Unsloth's "device='balanced'" never worked. I assembled a device map by hand by assigning layers to GPUs and that worked.
1
0
2026-03-03T16:25:59
ortegaalfredo
false
null
0
o8fm7wh
false
/r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8fm7wh/
false
1
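For reference, a minimal sketch of the "assign layers to GPUs by hand" approach mentioned above, using plain transformers + accelerate rather than Unsloth. The model id, layer count, and module names are assumptions; the exact names depend on the architecture, so check the model config and named_children() before copying this.

```python
# Sketch: hand-built device map splitting decoder layers across two GPUs.
# Requires accelerate to be installed alongside transformers.
from transformers import AutoModelForCausalLM

MODEL_ID = "Qwen/Qwen3-8B"   # placeholder model id
NUM_LAYERS = 36              # placeholder: read num_hidden_layers from the config

device_map = {
    "model.embed_tokens": 0,
    "model.rotary_emb": 0,   # present on recent Qwen/Llama-style models
    "model.norm": 1,
    "lm_head": 1,
}
for i in range(NUM_LAYERS):  # first half of the layers on GPU 0, the rest on GPU 1
    device_map[f"model.layers.{i}"] = 0 if i < NUM_LAYERS // 2 else 1

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map=device_map,
    torch_dtype="auto",
)
```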
t1_o8fm7e4
The MacBook Pro is getting a design refresh with the M6 which will include an OLED monitor and it looks like even including some form of touch screen. So unless you're desperate for the new chips now, most people would likely be better off holding off until the next update. Although the downside is that they will almost certainly use the M6 refresh to increase prices across the board and I suspect they'll likely even sell both the M5 and M6 MacBook Pros concurrently for a while to soften the blow of the added cost. See here: https://www.macrumors.com/2026/02/10/macbook-pro-buying-advice-2026/ My wife has an M1 Pro MacBook Pro with 16GB of RAM that I'd like to update at some point to a model with a hell of a lot more RAM as she uses ChatGPT a lot and I'd like her to be able to run something decent locally. My M3 Max MacBook Pro is fine but I wouldn't mind having more RAM. Although I'm more likely to buy an M5/M6 Mac Studio when they get refreshed.
1
0
2026-03-03T16:25:55
Spanky2k
false
null
0
o8fm7e4
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fm7e4/
false
1
t1_o8fm4vy
Okay, thank you for the detailed explanation. I will consider renting a GPU or using the smaller models.
1
0
2026-03-03T16:25:36
SUPRA_1934
false
null
0
o8fm4vy
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8fm4vy/
false
1
t1_o8fm1xj
Pushed the fix, lmk if you have any more issues
1
0
2026-03-03T16:25:12
cnqso
false
null
0
o8fm1xj
false
/r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8fm1xj/
false
1
t1_o8fm0rr
No Klingon?
1
0
2026-03-03T16:25:04
alienproxy
false
null
0
o8fm0rr
false
/r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fm0rr/
false
1
t1_o8flzm5
The one that released with 26.2? Yes that is done.
1
0
2026-03-03T16:24:54
mxforest
false
null
0
o8flzm5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8flzm5/
false
1
t1_o8flu1a
If I was buying today, I'd probably go with:

* 14" M5 Pro, 18/20 core CPU/GPU
* 64GB of RAM
* for $3000

Seems like the sweet spot to me, because to upgrade to the M5 Max with 64GB of RAM it comes out to $4300.
1
0
2026-03-03T16:24:10
urraca
false
null
0
o8flu1a
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8flu1a/
false
1
t1_o8flqdj
The same way we have always handled these things from digital comb filtering to deinterlacing--and beyond: a frame buffer...
1
0
2026-03-03T16:23:41
Due-Function-4877
false
null
0
o8flqdj
false
/r/LocalLLaMA/comments/1rir6ak/seeking_advice_on_detecting_keypoints_in_sports/o8flqdj/
false
1
t1_o8flmv3
You are correct: MoE models choose different "experts" (a model "slice") for each output token, while 27B has all the parameters active all the time. MoE models scale better, but dense models are more coherent and better at reproducing language nuances. That is why I use Gemma 27B as my image-generation prompt generator. Also, I have yet to find an open-weight, locally runnable MoE model that is good at my highly inflected language (Ukrainian). I think that is partially due to architecture (and partially due to the training data). I always assume that people who wish for more locally runnable MoE models are likely English speakers.
1
0
2026-03-03T16:23:13
arbv
false
null
0
o8flmv3
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8flmv3/
false
1
t1_o8fljuw
> What kind of things would you folks be running and doing with this setup?

On M5 Max 128GB, running [Qwen3.5-122B-A10B-GGUF:Q4_K_M](https://huggingface.co/AesSedai/Qwen3.5-122B-A10B-GGUF). On M5 Ultra 512GB, running [Qwen3.5-397B-A17B-8bit](https://huggingface.co/mlx-community/Qwen3.5-397B-A17B-8bit).
1
0
2026-03-03T16:22:49
Competitive_Ideal866
false
null
0
o8fljuw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fljuw/
false
1
t1_o8flhqr
Cool idea. Any numbers on index build time + incremental update latency on big repos? That’s the make-or-break for editor use.
1
0
2026-03-03T16:22:33
BC_MARO
false
null
0
o8flhqr
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8flhqr/
false
1
t1_o8flfp5
GGUF support on vLLM is mediocre. Llama.cpp is the right tool for the job. Use vLLM for quantization types that are not GGUF.
1
0
2026-03-03T16:22:17
DinoAmino
false
null
0
o8flfp5
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8flfp5/
false
1
t1_o8fle8x
I've seen a Blender MCP out there.. These models can do crazy things. The **4B** Qwen3.5 model passes the "build an Operating System in a web browser" test. It spits out functional HTML code that mimics a window manager. A tiny little couple gig model can output useful code.. Not that crazy to think people are using Frontier API models to control crazy 3D applications now.
1
0
2026-03-03T16:22:05
tiffanytrashcan
false
null
0
o8fle8x
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fle8x/
false
1
t1_o8fladu
I'm trying this out with opencode & vibe against Step 3.5 Flash running locally, will see how well it works on my scrappy setup!
1
0
2026-03-03T16:21:34
spaceman_
false
null
0
o8fladu
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8fladu/
false
1
t1_o8fl73t
Bummer, the M4 Max struggled to run even the 4-bit quant: 10 t/s or even slower.
1
0
2026-03-03T16:21:09
Competitive-Force205
false
null
0
o8fl73t
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fl73t/
false
1
t1_o8fl766
Damn. I guess I kind of lucked out with my M4 Mac. I typically use it for work (boring spreadsheets), but the 9B runs at about 20 tk/s; GPT 20B is obviously still faster and I can get about 30-35 tk/s on average. IMHO anything above 15 tk/s is pretty usable for chat or document-summarization sort of stuff.
1
0
2026-03-03T16:21:09
Safe_Sky7358
false
null
0
o8fl766
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fl766/
false
1
t1_o8fl0r3
What client is this?
1
0
2026-03-03T16:20:18
Ok-Secret5233
false
null
0
o8fl0r3
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fl0r3/
false
1
t1_o8fkyvp
Because ChatGPT doesn’t know about those models
1
0
2026-03-03T16:20:03
EffectiveCeilingFan
false
null
0
o8fkyvp
false
/r/LocalLLaMA/comments/1rjrtkd/local_models_drift_faster_than_you_think_when_you/o8fkyvp/
false
1
t1_o8fkxof
Hello, I currently have 3 blackwell rtx 6000 cards as my workstation :)
1
0
2026-03-03T16:19:54
hauhau901
false
null
0
o8fkxof
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fkxof/
false
1
t1_o8fkx5j
Ahhh that makes sense. Can I ask, MacBook or Strix Halo?
1
0
2026-03-03T16:19:49
AdCreative8703
false
null
0
o8fkx5j
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fkx5j/
false
1
t1_o8fkpr5
I'm running K2.5 TQ1 as we speak on a ~$14k homemade AI server (currently 224GB; normally it's 256 but I took a 5090 out for testing elsewhere). I can get 54/62 layers offloaded to GPU and get around 20 tps to start with 8k context. If I put the 5090 back in, I could probably get better token gen with more layers offloaded to VRAM, and add some more context. It's possible, just getting more inconveniently expensive with the hardware prices.
1
0
2026-03-03T16:18:50
SweetHomeAbalama0
false
null
0
o8fkpr5
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8fkpr5/
false
1
t1_o8fkm1a
I wonder when we get to the point that people "buy prompts". Imagine: instead of a game taking up over 90GB, the prompt to generate the entire thing only takes up a couple of MBs, plus the model itself.
1
0
2026-03-03T16:18:20
Maddolyn
false
null
0
o8fkm1a
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8fkm1a/
false
1
t1_o8fkkdt
no, UR attention matrix is 100x
1
0
2026-03-03T16:18:07
cachem3outside
false
null
0
o8fkkdt
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fkkdt/
false
1
t1_o8fkjve
For VL you have to pass the mmproj model via --mmproj.
1
0
2026-03-03T16:18:03
Old-Sherbert-4495
false
null
0
o8fkjve
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8fkjve/
false
1
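A minimal sketch of what that looks like when launching llama-server for a vision-language GGUF; the file names, context size, and layer count below are placeholders.

```python
# Sketch: start llama-server with both the main weights and the vision projector.
# Paths are placeholders; adjust to the actual GGUF files.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "qwen-vl-Q4_K_M.gguf",        # main model weights (placeholder)
    "--mmproj", "mmproj-qwen-vl.gguf",  # multimodal projector (placeholder)
    "-c", "8192",                        # context size
    "-ngl", "99",                        # offload all layers to the GPU
])
```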
t1_o8fkgzt
[removed]
1
0
2026-03-03T16:17:40
[deleted]
true
null
0
o8fkgzt
false
/r/LocalLLaMA/comments/1rjs8se/qwen_35_deltanet_broke_llamacpp_on_apple_silicon/o8fkgzt/
false
1
t1_o8fkg79
Can I ask - what do you use for these in terms of compute?  Do you run locally or use some kind of cloud based compute?
1
0
2026-03-03T16:17:34
Temporary-Mix8022
false
null
0
o8fkg79
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8fkg79/
false
1
t1_o8fkf3g
Exactly the same as I've shared in the post, and I was about to rock a 120k context size.
1
0
2026-03-03T16:17:25
Old-Sherbert-4495
false
null
0
o8fkf3g
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8fkf3g/
false
1
t1_o8fkdga
Did y'all already do the update that enables direct-to-memory over Thunderbolt 5?
1
0
2026-03-03T16:17:12
L3g3nd8ry_N3m3sis
false
null
0
o8fkdga
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fkdga/
false
1
t1_o8fk9zd
And it is not wasting tokens on thinking. It is using them to give you a better answer.
1
0
2026-03-03T16:16:44
Thomas-Lore
false
null
0
o8fk9zd
false
/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fk9zd/
false
1
t1_o8fk8xu
Yes, but we've hit a level of efficacy where we can choose to settle for less if the price is right. We know there will be better. It's about good enough.
1
0
2026-03-03T16:16:36
MannToots
false
null
0
o8fk8xu
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fk8xu/
false
1
t1_o8fk4s1
I'm pretty sure video input is different from image input.
1
0
2026-03-03T16:16:03
Cultured_Alien
false
null
0
o8fk4s1
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8fk4s1/
false
1
t1_o8fk4si
Oh nice, that's pretty decent performance for just CPU tbh.
1
0
2026-03-03T16:16:03
HCLB_
false
null
0
o8fk4si
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8fk4si/
false
1
t1_o8fk2sl
Yes boss, but I mean: what about Google Colab and the Replicate tool? Are they good for the fine-tune, or is there another tool?
1
0
2026-03-03T16:15:47
SUPRA_1934
false
null
0
o8fk2sl
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8fk2sl/
false
1
t1_o8fjue3
Can you share your llama.cpp config? And does it support VL? And what is the context window?
1
0
2026-03-03T16:14:41
callmedevilthebad
false
null
0
o8fjue3
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8fjue3/
false
1
t1_o8fjq2j
hey! thank you. it's really helpful.
1
0
2026-03-03T16:14:06
SUPRA_1934
false
null
0
o8fjq2j
false
/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/o8fjq2j/
false
1
t1_o8fjld6
Thanks for letting me know, it sounds like the opening wait time isn't long enough -- in the short term you can set it longer on line 24 of harness/env/config.py. I'll patch in more sophisticated load screen detection in a few minutes.
2
0
2026-03-03T16:13:29
cnqso
false
null
0
o8fjld6
false
/r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8fjld6/
false
2
t1_o8fjf0z
Yeah but still, you wouldn't expect to list mammals as egg laying in an educational text
1
0
2026-03-03T16:12:39
Odenhobler
false
null
0
o8fjf0z
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fjf0z/
false
1
t1_o8fjeuh
I am serving qwen3.5:122b, qwen-next-coder, gpt-oss-120b and some smaller variants locally, all running in quants higher than 6, with >128k context. They are not capable enough?! I can hardly believe it. A general question, since agentic coding is new to me: I thought a high-end LLM orchestrates smaller models and checks their output, so the quality of the primary model drives the overall quality. Is that so?
1
0
2026-03-03T16:12:38
Impossible_Art9151
false
null
0
o8fjeuh
false
/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o8fjeuh/
false
1
t1_o8fje36
Yes, it supports 35 languages right now — Python, Go, JS, TS, Rust, Java, C++, C#, PHP, Ruby, Kotlin, Scala, Zig, Elixir, Haskell, and 20 more. Full list in the README. Clojure isn't supported yet, but adding new languages is very doable since it's built on tree-sitter and there is a tree-sitter-clojure grammar. Would be a great candidate for a community contribution — or I can prioritize it if there's interest. Maybe you can add a feature request through an Issue and I will take it with me for the next release :)
1
0
2026-03-03T16:12:32
OkDragonfruit4138
false
null
0
o8fje36
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8fje36/
false
1
t1_o8fj2pi
I did upgrade to 5.2.0, but I had to stop using it completely afterward. Unsloth's dependencies seemed to block 5.x (`requires transformers [...] <=4.57.6`), and 5.2.0 breaks Unsloth compatibility. The script in my post uses raw Transformers + PEFT, not Unsloth, which is why the upgrade works. If you're using Unsloth's FastModel, you can't upgrade past 4.57.6 without dependency hell.
1
0
2026-03-03T16:11:00
pakalolo7123432
false
null
0
o8fj2pi
false
/r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8fj2pi/
false
1
t1_o8fiqvc
ISO 8601 supremacy!
1
0
2026-03-03T16:09:23
ImNotABotScoutsHonor
false
null
0
o8fiqvc
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fiqvc/
false
1
t1_o8filxu
I think you can get around 37 to 40 tokens/s for Qwen3.5-397B-A17B on an M5 Max with 40 GPU cores and 256 GB RAM.
1
0
2026-03-03T16:08:43
DigiDecode_
false
null
0
o8filxu
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8filxu/
false
1
t1_o8fijth
Yeah, the testing setup makes such a huge difference tbh. Like when people post 'this model sucks' but they're running it with the wrong params or broken inference, it's kinda useless feedback.
1
0
2026-03-03T16:08:27
papertrailml
false
null
0
o8fijth
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8fijth/
false
1
t1_o8fijqv
This sounds interesting, is this programming language agnostic?
1
0
2026-03-03T16:08:26
Ok-Adhesiveness-4141
false
null
0
o8fijqv
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8fijqv/
false
1
t1_o8fij9u
Yes, but they are 4x16; it goes over PCIe when they have to communicate. It could be tried, but 64 GB for models from 120B upwards is very tight.
1
0
2026-03-03T16:08:22
Single_Error8996
false
null
0
o8fij9u
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fij9u/
false
1
t1_o8fic4m
I appreciate this comment, thank you! I’ll see if I can fit qwen3.5 on it.
1
0
2026-03-03T16:07:25
ClayToTheMax
false
null
0
o8fic4m
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fic4m/
false
1
t1_o8fi9i5
Tbh the confidence when it's wrong is the biggest issue with these smaller models imo. Like, Qwen 4B can recognize pretty specific architecture patterns but then hallucinate the details.
1
0
2026-03-03T16:07:04
papertrailml
false
null
0
o8fi9i5
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fi9i5/
false
1
t1_o8fi6zy
I’ve been so puzzled by this and why they haven’t leaned in to self-hosted AI (and just acquire lm studio or build their own). I’m starting to think they’re waiting for the right moment where subscription inference prices deter more consumers. Scratching my head running Claude code with gpt-oss 120b on mlx all thanks to lm studio.
1
0
2026-03-03T16:06:45
arguingwithabot
false
null
0
o8fi6zy
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fi6zy/
false
1
t1_o8fi2pj
Tool authority injection is the more dangerous cousin of prompt injection because it exploits trust hierarchy, not input parsing. If the agent treats tool output as higher-trust than system intent, you’ve effectively created a shadow policy channel. The sandbox can be intact and file access locked down, but if the reasoning layer reclassifies tool output as authoritative state, you’ve already lost. Curious whether you’re enforcing explicit provenance tagging or confidence weighting on tool responses before they’re merged into context. Without that, any “trusted” tool becomes a policy escalation vector.
1
0
2026-03-03T16:06:10
Jumpy-Possibility754
false
null
0
o8fi2pj
false
/r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/o8fi2pj/
false
1
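To make the suggestion above concrete, here is an illustrative sketch of tagging tool output with provenance and a confidence weight before it is merged back into the agent's context. All names here are made up for illustration and not taken from any particular framework.

```python
# Sketch: wrap every tool result with provenance metadata and an explicit
# non-authoritative marker before appending it to the model context.
from dataclasses import dataclass
import json
import time

@dataclass
class ToolObservation:
    tool: str
    payload: str
    confidence: float            # heuristic weight, not a guarantee
    authoritative: bool = False  # tool output never outranks system policy

def to_context_message(obs: ToolObservation) -> dict:
    """Serialize the observation so the model sees its provenance explicitly."""
    body = {
        "source": f"tool:{obs.tool}",
        "authoritative": obs.authoritative,
        "confidence": obs.confidence,
        "observed_at": time.time(),
        "data": obs.payload,
    }
    return {"role": "tool", "content": json.dumps(body)}

msg = to_context_message(ToolObservation("search", "result text...", confidence=0.6))
```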
t1_o8fi1mo
I have not read the Unsloth guide since my main inference tasks are handled by vLLM, so I downloaded from the Qwen repo, not the GGUF from Unsloth, but it makes sense that yes, it might affect the quality of the output, I agree! So far, with the 27B model, users have reported a significant improvement in the quality of the outputs compared to Qwen3-VL-32B, and I have the recommended settings, so I am happy ahah. I have done some testing, and whenever the task is complex, it will indeed think a bit more, but still much less than the 32B, which was constantly second-guessing itself in loops for minutes at a time only to sometimes end up with a hallucination. This was especially bad with some users who send incredibly vague prompts, but now with the 27B I would say there is a 10x quality increase when dealing with those queries.
1
0
2026-03-03T16:06:02
Brunofcsampaio
false
null
0
o8fi1mo
false
/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/o8fi1mo/
false
1
t1_o8fhzz9
Is there a way to use the speed of 4 cards on llama.cpp rather than just the memory? By default, I believe llama.cpp uses each card in sequence, layer by layer, meaning you are essentially limited to the speed of your slowest card. Great for power usage though; you're really only loading one card at a time.
1
0
2026-03-03T16:05:49
spaceman_
false
null
0
o8fhzz9
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fhzz9/
false
1
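If I'm not mistaken, llama.cpp does expose a mode for this: splitting individual tensors across GPUs ("row" mode) instead of the default per-layer split, trading more inter-GPU traffic for parallel compute. A minimal sketch with placeholder paths and an even split across four cards:

```python
# Sketch: launch llama-server with per-tensor splitting across four GPUs.
# Model path and the split ratios are placeholders.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "model-Q4_K_M.gguf",     # placeholder model path
    "--split-mode", "row",          # split individual tensors across GPUs
    "--tensor-split", "1,1,1,1",    # equal share across four cards
    "-ngl", "99",                   # keep all layers on the GPUs
])
```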
t1_o8fhzxg
Honestly, it's really, really worth getting an AI like Gemini to explain the pros and cons of all the quant methods in a simple way. The difference between quants at the same bit count can be shocking; some of the newer methods are so much more efficient.
1
0
2026-03-03T16:05:48
Uncle___Marty
false
null
0
o8fhzxg
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8fhzxg/
false
1
t1_o8fhyqk
Unified memory bandwidth:
M5 ~153 GB/s
M5 Pro ~307 GB/s
M5 Max ~614 GB/s
1
0
2026-03-03T16:05:39
magnus-m
false
null
0
o8fhyqk
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fhyqk/
false
1
t1_o8fhvn4
It takes up around 50GB fully loaded
1
0
2026-03-03T16:05:14
octopus_limbs
false
null
0
o8fhvn4
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8fhvn4/
false
1