name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8798fg
What settings did you use for this?
1
0
2026-03-02T09:08:56
mentallyburnt
false
null
0
o8798fg
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8798fg/
false
1
t1_o87920l
For future: a PCIe x4 card with a U.2 mount (PEX4SFF8639) worked well without system crashes; however, I still wasn't able to get any performance out of it (128k cache put 20GB onto the Optane, and it took 15 mins to load the model, then 1 t/s)
1
0
2026-03-02T09:07:09
El_90
false
null
0
o87920l
false
/r/LocalLLaMA/comments/1r72dy3/strix_halo_128gb_optane_fast_swap_help/o87920l/
false
1
t1_o8791pa
The M5 is promising nonetheless with matmul. So it's not totally far-fetched for Apple to enter the competition, though unlikely given the RAM shortage, unfortunately.
2
0
2026-03-02T09:07:03
imnotzuckerberg
false
null
0
o8791pa
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8791pa/
false
2
t1_o8790tp
You have to implement tool calling yourself for better results, e.g. using LangGraph (see the sketch below).
1
0
2026-03-02T09:06:48
NigaTroubles
false
null
0
o8790tp
false
/r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o8790tp/
false
1
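A minimal sketch of what "implement tool calling yourself" can look like with LangGraph's prebuilt ReAct agent, assuming the langgraph and langchain-openai packages and a local OpenAI-compatible endpoint; the port, model name, and tool are placeholders, not anything from the comment above:

```python
# Hypothetical setup: a local OpenAI-compatible server on port 8080 serving a Qwen model.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Toy tool; stands in for whatever function the agent should call."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(base_url="http://localhost:8080/v1", api_key="none", model="qwen3.5")
agent = create_react_agent(llm, tools=[get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
print(result["messages"][-1].content)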
t1_o8790lc
But will they also come out quantized? I use q4
1
0
2026-03-02T09:06:45
NullKalahar
false
null
0
o8790lc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8790lc/
false
1
t1_o878vej
the jump in context utilization is probably the most underreported part -- models actually using 64k+ coherently is a different world from 13 months ago.
2
0
2026-03-02T09:05:18
justserg
false
null
0
o878vej
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o878vej/
false
2
t1_o878qq4
No 13b?
1
0
2026-03-02T09:03:58
lfourtime
false
null
0
o878qq4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o878qq4/
false
1
t1_o878iud
For coding tasks on your M5 Pro (24GB unified memory), I'd recommend Qwen2.5-Coder 14B - runs great on Apple Silicon and performs well for code generation. If you want more power, Qwen3 35B A3B at Q4 should fit nicely. For lighter tasks, Mistral 7B Instruct is super fast. Also check out faster-whisper if you need local...
0
0
2026-03-02T09:01:46
Actual_Wolf_2932
false
null
0
o878iud
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o878iud/
false
0
t1_o878fdz
AI AT WARP SPEED LOVE
2
0
2026-03-02T09:00:48
nmkd
false
null
0
o878fdz
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o878fdz/
false
2
t1_o878dzo
have you guys tested it with opencode? how does it perform?
5
0
2026-03-02T09:00:25
ab2377
false
null
0
o878dzo
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o878dzo/
false
5
t1_o878clm
I am currently mirroring something similar to the Computer Use Demo of Claude. I think the main time consumption at the moment is likely due to file migration. My product is somewhat like a concept of a digital human, involving files for different roles that need to be migrated. Later, when different users use this rol...
1
0
2026-03-02T09:00:02
SpareAlps6450
false
null
0
o878clm
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o878clm/
false
1
t1_o8787bn
A new app
6
0
2026-03-02T08:58:33
MaxKruse96
false
null
0
o8787bn
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o8787bn/
false
6
t1_o8784ry
See, anyone in Germany can deny the Holocaust in their private conversations. It's not like you can't discuss it. You just cannot spread it as propaganda. As a society we had some costly historical lessons.  And so it came to be that human dignity is the highest value in our constitution. There are other values too...
1
0
2026-03-02T08:57:51
redballooon
false
null
0
o8784ry
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o8784ry/
false
1
t1_o8783f7
Running ROCm 7.12 I get about 48 tps on generation. Vulkan has been slow: only 7 tps, and the card barely breaks 80W utilisation. Quant used was Unsloth's Qwen3.5-35b-a3b-UD-Q4-K-XL. Haven't gotten to the dense models yet, but would expect around 18-22 tps. Card is power-limited to 160W, which is about 12.5% less, tps fo...
2
0
2026-03-02T08:57:28
MaddesJG
false
null
0
o8783f7
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o8783f7/
false
2
t1_o8781wc
MBP has separate VRAM and Unified Memory? That's something new
5
0
2026-03-02T08:57:03
HyperWinX
false
null
0
o8781wc
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o8781wc/
false
5
t1_o877zhr
"MacBook Pro M5" vs "MacBook M5 Pro"? LOL wtf omg lol ...and I thought Nvidia was confusing us with their "Quadro 6000", "A6000", "6000 Ada", "Pro 6000"
7
0
2026-03-02T08:56:23
MelodicRecognition7
false
null
0
o877zhr
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o877zhr/
false
7
t1_o877x9i
What's causing the long start time? Docker itself should spin up very quickly.
1
0
2026-03-02T08:55:47
DeltaSqueezer
false
null
0
o877x9i
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o877x9i/
false
1
t1_o877w4j
GLM 4.7 Flash @ Q6
1
0
2026-03-02T08:55:28
metmelo
false
null
0
o877w4j
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o877w4j/
false
1
t1_o877v0y
Only with a small context window, right? It seems that time to first token skyrockets after ~8k with insufficient vram.
1
0
2026-03-02T08:55:11
lorendroll
false
null
0
o877v0y
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o877v0y/
false
1
t1_o877sab
Depends on the HW ig
1
0
2026-03-02T08:54:26
Imakerocketengine
false
null
0
o877sab
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o877sab/
false
1
t1_o877s6t
DeepSeek v3.2 Speciale was really, really good. Though I'd argue DSA is not really sparse attention at its core. But meanwhile the next-gen DS models are really good sparse models that showcased promising performance over 1M ctx. Minimax just isn't the kind of company to drive real architectural innovation.
3
0
2026-03-02T08:54:25
zball_
false
null
0
o877s6t
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o877s6t/
false
3
t1_o877rx3
This doesn't help as you've never defined "expensive". A lot of things can be labelled "expensive", but for example a solution with a motherboard where you can put in 4x 2-slot cards is not something I would label as such. In case you want more cards you can always go for the PCIe riser solution and a case/frame you at...
1
0
2026-03-02T08:54:20
tmvr
false
null
0
o877rx3
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o877rx3/
false
1
t1_o877pmv
I feel so bad for you guys that Qwen3.5 4B is coming very soon 😂
22
0
2026-03-02T08:53:44
Ill-Fishing-1451
false
null
0
o877pmv
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o877pmv/
false
22
t1_o877fj8
this belongs in /r/chatgpt/, not here
3
0
2026-03-02T08:51:06
MelodicRecognition7
false
null
0
o877fj8
false
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/o877fj8/
false
3
t1_o877e41
Demo with Jan Desktop: https://i.redd.it/lmv6vntdilmg1.gif
6
0
2026-03-02T08:50:43
Delicious_Focus3465
false
null
0
o877e41
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o877e41/
false
6
t1_o8777k7
Approximately 10 to 20 seconds.
1
0
2026-03-02T08:48:59
SpareAlps6450
false
null
0
o8777k7
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o8777k7/
false
1
t1_o8771wi
> Peak compute on ANE only consumes 2.8 W which at 19 tflops becomes 6.6 tflops/watt. Insane! (Metal GPU - 1, H100 - 1.4 Tflops/watt) This is insane though. I think if Apple released their hardware decoupled from software and slapped Linux on it server-style, they could compete in the AI chip market. At least aimed at c...
6
0
2026-03-02T08:47:28
imnotzuckerberg
false
null
0
o8771wi
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8771wi/
false
6
t1_o876v06
I also think there are much better models. You could even run "gpt oss 20B" on your 12GB card. I was just looking for good alternatives for my friend. What have you decided on so far?
3
0
2026-03-02T08:45:37
DertekAn
false
null
0
o876v06
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o876v06/
false
3
t1_o876u1f
I was replying to you. When you said "You might want to start by explaining how you even got inference for a 70B model on 6GB VRAM.", the lol was meant to make the comment ironic, as the first thing one would try is offloading to RAM.
2
0
2026-03-02T08:45:22
ThunderousHazard
false
null
0
o876u1f
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o876u1f/
false
2
t1_o876syp
Good job!
1
0
2026-03-02T08:45:05
moahmo88
false
null
0
o876syp
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o876syp/
false
1
t1_o876ruo
[deleted]
1
0
2026-03-02T08:44:48
[deleted]
true
null
0
o876ruo
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o876ruo/
false
1
t1_o876jem
Well, apparently there was a known problem. They released a fix.
1
0
2026-03-02T08:42:30
BitXorBit
false
null
0
o876jem
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o876jem/
false
1
t1_o876j6z
Literally search, point, say "summarise", or ask a more specific question.
3
0
2026-03-02T08:42:27
Lemondifficult22
false
null
0
o876j6z
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o876j6z/
false
3
t1_o876gts
How long does it take to launch your current docker containers?
1
0
2026-03-02T08:41:49
DeltaSqueezer
false
null
0
o876gts
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o876gts/
false
1
t1_o876b4h
You have a problem
1
0
2026-03-02T08:40:16
Educational-Agent-32
false
null
0
o876b4h
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o876b4h/
false
1
t1_o876a5m
I'm currently using Docker, but I need to pre-launch images to reduce user wait time. When a user starts, I'll place their history files and uploaded files into the container, which makes Docker volumes unsuitable for this use case.
1
0
2026-03-02T08:40:00
SpareAlps6450
false
null
0
o876a5m
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o876a5m/
false
1
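One common pattern for the pre-launch problem described in the comment above is to keep a pool of warm containers and copy per-user files in at claim time with docker cp, which works on running containers; the image name and paths below are hypothetical:

```sh
# Pre-launch a warm sandbox (image name is a placeholder).
docker run -d --name sandbox-01 my-sandbox-image sleep infinity

# When a user claims sandbox-01, copy their history and uploads into the live container.
docker cp ./users/alice/history.json sandbox-01:/workspace/history.json
docker cp ./users/alice/uploads/. sandbox-01:/workspace/uploads/
```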
t1_o8762di
I'm not very familiar with Firecracker; I'm still learning and looking for a sandbox that's ready to use.
1
0
2026-03-02T08:37:54
SpareAlps6450
false
null
0
o8762di
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o8762di/
false
1
t1_o87625m
GPT is trash. I hear codex models are decent though. I left the GPT ecosystem for Gemini/Claude. Try having the conversation with Gemini Pro 3.1 or Claude Sonnet. GPT causes more harm than good.
1
0
2026-03-02T08:37:50
Ok-Lobster-919
false
null
0
o87625m
false
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/o87625m/
false
1
t1_o8761r4
I don't think transformers use a KV cache at all while they're being trained? But maybe the raw keys and values were BF16 in the training code, and the model somehow learned to use the quantization errors for better performance...
5
0
2026-03-02T08:37:44
stddealer
false
null
0
o8761r4
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8761r4/
false
5
t1_o8761eh
Well, you can have some model overflowing like 2GB to RAM. I've got DDR5 and a 5070 Ti, and previously had 2x3090; there's then like a 3 t/s slowdown.
1
0
2026-03-02T08:37:38
H4UnT3R_CZ
false
null
0
o8761eh
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o8761eh/
false
1
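For reference, llama.cpp makes this overflow explicit: -ngl caps how many layers go to VRAM and the remainder runs from system RAM, which is where slowdowns like the ~3 t/s above come from. A minimal sketch; model path and layer count are placeholders:

```sh
# Offload 30 layers to the GPU; the remaining layers run on CPU/system RAM.
llama-server -m model.gguf -ngl 30
```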
t1_o8760ew
Is there a way to set the KV Cache type to BF16 in LMStudio? It seems like I can only set the K Cache Quantization Type to F16, which seems to be FP16 under the hood.
4
0
2026-03-02T08:37:22
120decibel
false
null
0
o8760ew
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8760ew/
false
4
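With plain llama.cpp (which LM Studio wraps), the cache type can be set explicitly and separately for K and V; whether LM Studio's UI exposes bf16 is a separate question. A sketch, assuming a build with bf16 cache support; the model file is a placeholder:

```sh
# Request BF16 for both K and V caches instead of the default F16.
llama-server -m qwen3.5-27b.gguf --cache-type-k bf16 --cache-type-v bf16
```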
t1_o875ydy
LM Studio has it working fine.
3
0
2026-03-02T08:36:50
My_Unbiased_Opinion
false
null
0
o875ydy
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o875ydy/
false
3
t1_o875y5r
MVP
7
0
2026-03-02T08:36:46
nazimjamil
false
null
0
o875y5r
false
/r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o875y5r/
false
7
t1_o875x3d
It's almost impossible to really compare the attention models since they are all different size models, with different amounts of training etc. Vocabulary size is another way to scale long context reasoning. Personally I suspect current large models still have smaller vocabularies than would be optimal. Partly becaus...
3
0
2026-03-02T08:36:29
Middle_Bullfrog_6173
false
null
0
o875x3d
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o875x3d/
false
3
t1_o875vpx
TLDR and don't care what a sloppy closed model says
2
0
2026-03-02T08:36:07
Velocita84
false
null
0
o875vpx
false
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/o875vpx/
false
2
t1_o875vjt
For sure Claude Code fits skills into context, and if you don't feed a similar prompt with the same skills, the LLM is less "specialized" in the tasks you need.
2
0
2026-03-02T08:36:03
R_Duncan
false
null
0
o875vjt
false
/r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/o875vjt/
false
2
t1_o875v0r
No idea why you are getting downvoted, this is absolutely correct. Heretic works on the 27B or the 35B A3B but you need a different approach for big MoE models. Thank you for your work.
1
0
2026-03-02T08:35:56
Prettyism_Forever_99
false
null
0
o875v0r
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o875v0r/
false
1
t1_o875s56
self promotion, bigotry, not local. gross
2
0
2026-03-02T08:35:08
Crafty-Diver-6948
false
null
0
o875s56
false
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/o875s56/
false
2
t1_o875q7w
The main feature of this function is that it eliminates the need to reload the model, making the entire workflow very smooth! Could you please display the complete variant ID on the interface so I can easily copy it?
1
0
2026-03-02T08:34:35
Dazzling_Equipment_9
false
null
0
o875q7w
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o875q7w/
false
1
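If Qwen 3.5 keeps Qwen3's documented chat-template switch (an assumption), thinking can be toggled per request without reloading weights; a sketch with transformers, using a placeholder model id:

```python
# Toggle thinking per request via the chat template; no model reload involved.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")  # placeholder model id
msgs = [{"role": "user", "content": "Briefly explain KV caching."}]
prompt = tok.apply_chat_template(
    msgs, tokenize=False, add_generation_prompt=True,
    enable_thinking=False,  # flip to True for thinking mode on the same loaded model
)
```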
t1_o875nfd
Slop and not local.
5
0
2026-03-02T08:33:48
NNN_Throwaway2
false
null
0
o875nfd
false
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/o875nfd/
false
5
t1_o875hck
Adding them to the list! Haven't figured out a good way to test contradictions / memory decay over time without writing my own benchmark for it :/ I could always vibe check, unless you know a good benchmark for it that's less subjective?
2
0
2026-03-02T08:32:09
selund1
false
null
0
o875hck
false
/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o875hck/
false
2
t1_o875dc0
I correct myself: it actually isn't that much of a difference, just go with apple
1
0
2026-03-02T08:31:03
Active_Woodpecker683
false
null
0
o875dc0
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o875dc0/
false
1
t1_o875cjy
[removed]
1
0
2026-03-02T08:30:51
[deleted]
true
null
0
o875cjy
false
/r/LocalLLaMA/comments/1pem49s/current_best_coding_llm/o875cjy/
false
1
t1_o875b2m
would love to see MemGPT/Letta in there - the hierarchical memory approach is architecturally quite different from the semantic search approach that Mem0 and Graphiti use. also curious if you'll test update/overwrite behavior since that's where most systems fall apart.
1
0
2026-03-02T08:30:27
BC_MARO
false
null
0
o875b2m
false
/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o875b2m/
false
1
t1_o875419
Link to the repo and website for those who want to skip the article: [https://github.com/agentscope-ai/CoPaw](https://github.com/agentscope-ai/CoPaw) [https://copaw.agentscope.io/](https://copaw.agentscope.io/)
21
0
2026-03-02T08:28:32
hainesk
false
null
0
o875419
false
/r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o875419/
false
21
t1_o8753ll
logpoints over restarting is huge for timing-sensitive bugs - restarting kills the race condition you were trying to catch. how does it handle async/concurrent execution where multiple agent calls overlap?
2
0
2026-03-02T08:28:24
BC_MARO
false
null
0
o8753ll
false
/r/LocalLLaMA/comments/1rijbp2/i_built_an_mcp_that_gives_any_agent_a_debugger/o8753ll/
false
2
t1_o8752iv
The only other thing I can think of is windows or linux? I run Arch Linux
1
0
2026-03-02T08:28:06
Amazing_Athlete_2265
false
null
0
o8752iv
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8752iv/
false
1
t1_o874zn0
Now that I think about it, are there any KLD results out there for fp16 KV vs Q8 KV?
3
0
2026-03-02T08:27:18
Velocita84
false
null
0
o874zn0
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o874zn0/
false
3
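One way to get such numbers yourself is llama.cpp's llama-perplexity KL-divergence mode: save baseline logits with the default F16 KV cache, then rerun with a quantized cache and compare. A sketch; file names are placeholders, quantized V cache requires flash attention, and exact flag syntax varies by build:

```sh
# Pass 1: save baseline logits with the default (F16) KV cache.
llama-perplexity -m model.gguf -f wiki.test.raw --kl-divergence-base logits.bin

# Pass 2: same model with a Q8_0 KV cache; report KL divergence against the baseline.
llama-perplexity -m model.gguf -f wiki.test.raw --kl-divergence-base logits.bin \
  --kl-divergence --cache-type-k q8_0 --cache-type-v q8_0 -fa
```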
t1_o874z9c
This is also one of my concerns. Since it's usable, it should be displayed on the interface, or at least provided when accessing v1/models. Otherwise, manually concatenating the model ID is still a relatively error-prone part.
1
0
2026-03-02T08:27:11
Dazzling_Equipment_9
false
null
0
o874z9c
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o874z9c/
false
1
t1_o874t87
Answering your question, any model up to 35B will be usable. Qwen 3.5 35B is all the rage lately. GPT OSS 20B is very good too IMO for that amount of memory
4
0
2026-03-02T08:25:30
cibernox
false
null
0
o874t87
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o874t87/
false
4
t1_o874s2r
I have some offers from EU, if nothing comes of it I'll let you know.
1
0
2026-03-02T08:25:11
fairydreaming
false
null
0
o874s2r
false
/r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o874s2r/
false
1
t1_o874n9o
Hey, It’s Alex from Jan team. Thanks so much! Really appreciate the kind words 🙌 The 4B size hitting that sweet spot for local coding was definitely a big focus for us. Love the voice coding workflow tip — pairing with faster-whisper is clever. If you haven't already, would love for you to give our model a shot in th...
5
0
2026-03-02T08:23:53
Background_Tea_3806
false
null
0
o874n9o
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o874n9o/
false
5
t1_o874n2b
I already run llama.cpp with the latest prompt caching support for vision models. This is not about vision; it makes zero difference whether I use the vision encoder or not. This is about the RNN architecture and the constant reprocessing when the prompt exceeds the context size. Why won't you listen to what I am saying?
1
0
2026-03-02T08:23:50
dampflokfreund
false
null
0
o874n2b
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o874n2b/
false
1
t1_o874gi0
No. You have the MacBook Pro M5. Which is not the same as a MacBook with the M5 pro. Those will be announced this week.
23
0
2026-03-02T08:22:01
cibernox
false
null
0
o874gi0
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o874gi0/
false
23
t1_o874fu9
from march 3 to "next week", bro i swear, it's gonna be next week this time
1
0
2026-03-02T08:21:50
Ambitious-Call-7565
false
null
0
o874fu9
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o874fu9/
false
1
t1_o874dkm
On 5090 + 5080 with bf16 I get a brutal loss of speed compared to f16 (under 60 vs 130 t/s). llama.cpp: up to date; model: UD-Q8-K-XL
1
0
2026-03-02T08:21:12
sieskei
false
null
0
o874dkm
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o874dkm/
false
1
t1_o874bcp
sure
1
0
2026-03-02T08:20:36
deenspaces
false
null
0
o874bcp
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o874bcp/
false
1
t1_o874awe
I'm not sure if I understood your comment, but the MacBook M5 Pro is already available and I've been using one for weeks.
-10
0
2026-03-02T08:20:28
soul105
false
null
0
o874awe
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o874awe/
false
-10
t1_o873x8q
Get the qwen 3.5 35b version. It’s as good as GPT oss 120b
5
0
2026-03-02T08:16:48
Pixer---
false
null
0
o873x8q
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o873x8q/
false
5
t1_o873w9w
Chinese CEOs are less narcissistic and don't care to shape the product in their own image the way Musk, Altman, or Dario types do. I bet 95% of people here don't even know the name of DeepSeek's CEO without looking it up
0
0
2026-03-02T08:16:32
tengo_harambe
false
null
0
o873w9w
false
/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/o873w9w/
false
0
t1_o873v38
The M5 pro hasn’t been released or announced so really there is no way to know. Wait one week.
10
0
2026-03-02T08:16:13
cibernox
false
null
0
o873v38
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o873v38/
false
10
t1_o873ue7
I too had issues with ollama and had to spin up llama.cpp to get tool calling working. Now I’m troubleshooting image processing…
3
0
2026-03-02T08:16:02
InternationalNebula7
false
null
0
o873ue7
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o873ue7/
false
3
t1_o873tzx
thx, in nightly builds ROCm TheRock is now 7.12: https://therock-nightly-tarball.s3.amazonaws.com/index.html
2
0
2026-03-02T08:15:55
Educational_Sun_8813
false
null
0
o873tzx
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o873tzx/
false
2
t1_o873oxq
what about the m3 ultra in your opinion? the bandwidth difference seems enough for dense models / 72b
1
0
2026-03-02T08:14:34
quietsubstrate
false
null
0
o873oxq
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o873oxq/
false
1
t1_o873oya
4x mi50 32gb 122b -> 250 pp and 23 tg
2
0
2026-03-02T08:14:34
Pixer---
false
null
0
o873oya
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o873oya/
false
2
t1_o873lr3
won't a 1Gb link be enough for transfer?
0
0
2026-03-02T08:13:43
Active_Woodpecker683
false
null
0
o873lr3
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o873lr3/
false
0
t1_o873k1q
downvotes because it's an ad
2
0
2026-03-02T08:13:15
Abject-Excitement37
false
null
0
o873k1q
false
/r/LocalLLaMA/comments/1r8oehn/opencode_arbitrary_code_execution_major_security/o873k1q/
false
2
t1_o873izb
vLLM or llama.cpp?
1
0
2026-03-02T08:12:58
Pixer---
false
null
0
o873izb
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o873izb/
false
1
t1_o873db5
That model is not offloaded to RAM, it all fits in VRAM, and the change helps on RDNA4, as clearly written in the PR, so for the R9700; Strix Halo is RDNA3.5, and here again there's no offloading...
1
0
2026-03-02T08:11:27
Educational_Sun_8813
false
null
0
o873db5
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o873db5/
false
1
t1_o873czk
nice release! 4b is a great size for local coding - reminds me of when we used haiku for code assist. for voice coding workflows, I've been pairing smaller models like this with local STT like faster-whisper - works surprisingly well
6
0
2026-03-02T08:11:21
Weesper75
false
null
0
o873czk
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o873czk/
false
6
t1_o873ary
For most what? Most chit chatting?
1
0
2026-03-02T08:10:45
po_stulate
false
null
0
o873ary
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o873ary/
false
1
t1_o873as7
Why not just use Firecracker?
1
0
2026-03-02T08:10:45
DeltaSqueezer
false
null
0
o873as7
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o873as7/
false
1
t1_o8737ct
this is a solid investigation! i wonder if this also affects the quantized kv cache options in llama.cpp - would love to see a comparison with q8_0 and q4_k_m cache types
-3
0
2026-03-02T08:09:51
Weesper75
false
null
0
o8737ct
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8737ct/
false
-3
t1_o873517
yes, from debian testing repo
3
0
2026-03-02T08:09:12
Educational_Sun_8813
false
null
0
o873517
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o873517/
false
3
t1_o8733xq
Might be worth redownloading and checking again. The old ones used MXFP4 for ffn and attn and in their blog unsloth say: > MXFP4 is much worse on many tensors - attn_gate, attn_q, ssm_beta, ssm_alpha using MXFP4 is not a good idea, and rather Q4_K is better - also MXFP4 uses 4.25 bits per weight, whilst Q4_K uses 4.5 ...
1
0
2026-03-02T08:08:55
zkstx
false
null
0
o8733xq
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o8733xq/
false
1
t1_o8730iq
great post! for getting files into pre-warmed sandboxes, I'd suggest Docker volumes, or copying into the running container with docker cp - no restart needed. for local-first voice tasks I use OpenClaw + faster-whisper, super fast on apple silicon
-1
0
2026-03-02T08:08:00
Weesper75
false
null
0
o8730iq
false
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o8730iq/
false
-1
t1_o872zvi
I literally said in a comment you can have a motherboard with 4 GPUs; the problem is, the existing solutions are much, much more expensive than anything else. I can have 2 machines, each motherboard serving 4 cards, so that's 96GB VRAM. An RTX 6000 isn't available in my country and costs much, much more than 6K USD, which might go up to 10k in...
0
0
2026-03-02T08:07:50
Active_Woodpecker683
false
null
0
o872zvi
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o872zvi/
false
0
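On the sharding question itself: llama.cpp can already split a model across machines via its ggml RPC backend, though a 1Gb link will bottleneck it. A sketch with placeholder hostnames and ports:

```sh
# On each worker machine (exposes its local GPUs over RPC):
rpc-server --host 0.0.0.0 --port 50052

# On the head node: shard layers across both workers.
llama-server -m model.gguf --rpc worker1:50052,worker2:50052 -ngl 99
```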
t1_o872vpt
for those looking for STT on Mac, faster-whisper runs great on Apple Silicon. if you need something even simpler for dictation, there are apps like Weesper that run locally on Mac with good results
1
0
2026-03-02T08:06:42
Weesper75
false
null
0
o872vpt
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o872vpt/
false
1
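For anyone wanting to try the faster-whisper suggestion above, a minimal sketch; model size, device, and audio path are placeholders:

```python
# Transcribe a local audio file with faster-whisper (CTranslate2 backend).
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, info = model.transcribe("dictation.wav", vad_filter=True)
print(f"Detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:.1f}s -> {seg.end:.1f}s] {seg.text}")
```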
t1_o872q66
That question is something an agentic reasoning model like Opus 4.6 handles very well, at least speculatively, so maybe you can try it yourself. Anyway, my 2 cents: I think architecture only caps the ceiling of "quality/spec" (in many senses) rather than deciding what the model actually is. It is data; and so, data genera...
6
0
2026-03-02T08:05:13
NandaVegg
false
null
0
o872q66
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o872q66/
false
6
t1_o872o91
for coding tasks on an M5 Pro, I'd also look into smaller code-specific models like Qwen2.5-Coder 7B or 14B if you want something lighter. they run great on Apple Silicon. as for voice stuff, if you need transcription locally check out faster-whisper - runs nicely on M-series chips
1
0
2026-03-02T08:04:42
Weesper75
false
null
0
o872o91
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o872o91/
false
1
t1_o872l7c
What imatrix does
6
0
2026-03-02T08:03:54
tracagnotto
false
null
0
o872l7c
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o872l7c/
false
6
t1_o872kqv
Came here to just check that
2
0
2026-03-02T08:03:46
owaisted
false
null
0
o872kqv
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o872kqv/
false
2
t1_o872ju4
Do you have some source for this (like github issue) where I could read more?
2
0
2026-03-02T08:03:32
HeapExchange
false
null
0
o872ju4
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o872ju4/
false
2
t1_o872hbu
please calm down sir
0
0
2026-03-02T08:02:51
jwpbe
false
null
0
o872hbu
false
/r/LocalLLaMA/comments/1rikvi8/no_way/o872hbu/
false
0
t1_o872ds4
There are similar posts like this weekly in r/selfhosted. One that might give you some more insight is this one: https://www.reddit.com/r/selfhosted/comments/1ct93nc/do_your_familiessignificant_other_use_your/
1
0
2026-03-02T08:01:54
Felladrin
false
null
0
o872ds4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o872ds4/
false
1
t1_o872bnz
Wait for the new small qwen models
1
0
2026-03-02T08:01:20
Significant_Fig_7581
false
null
0
o872bnz
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o872bnz/
false
1
t1_o8729yi
omg no please no!! "mediiiccccccc" dude's got 'agent field theory' oh dear!
5
0
2026-03-02T08:00:55
ab2377
false
null
0
o8729yi
false
/r/LocalLLaMA/comments/1rim9l7/agents_are_not_thinking_science_of_agent_behavior/o8729yi/
false
5
t1_o8729b7
I think this is the culprit: [https://github.com/ggml-org/llama.cpp/pull/19976](https://github.com/ggml-org/llama.cpp/pull/19976) Thanks 0cc4m & Red Hat!
1
0
2026-03-02T08:00:44
spaceman_
false
null
0
o8729b7
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8729b7/
false
1
t1_o8725mr
I have the same configuration, and the most I can achieve is 15 t/s. I'm using llama-server with the same parameters. Is there anything I should take a look at?
1
0
2026-03-02T07:59:46
Temas3D
false
null
0
o8725mr
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8725mr/
false
1
t1_o8721fp
my idea is cheaper and has no limit
2
0
2026-03-02T07:58:41
Active_Woodpecker683
false
null
0
o8721fp
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o8721fp/
false
2