name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
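The 14 fields above can be modeled as a plain record type — a sketch only, assuming `timestamp[us]` maps to Python `datetime` and that `edited` is nullable (its values in the dump are `null`); the `Comment` class name is mine, not part of the dataset:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Comment:
    # Field names and order mirror the schema in the dump.
    name: str                  # fullname, e.g. "t1_" + id
    body: str
    score: int
    controversiality: int
    created: datetime          # timestamp[us]
    author: str
    collapsed: bool
    edited: Optional[datetime] # null when the comment was never edited
    gilded: int
    id: str
    locked: bool
    permalink: str
    stickied: bool
    ups: int

# The first record of the dump, loaded into the model (body truncated here):
c = Comment(
    name="t1_o82vuet",
    body="Well yeah, more active more better. [...]",
    score=3,
    controversiality=0,
    created=datetime(2026, 3, 1, 16, 59, 39),
    author="Emotional-Baker-490",
    collapsed=False,
    edited=None,
    gilded=0,
    id="o82vuet",
    locked=False,
    permalink="/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82vuet/",
    stickied=False,
    ups=3,
)
```

One invariant visible throughout the records: `name` is always the `t1_` comment prefix plus `id`, and `score` equals `ups` in every row shown.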
t1_o82vuet
Well yeah, more active more better. If that wasn't the case we would get 100b a1m models.
3
0
2026-03-01T16:59:39
Emotional-Baker-490
false
null
0
o82vuet
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82vuet/
false
3
t1_o82vt7i
I have only a 6GB GPU, and for me 35B-A3B is better than 27B, simply because it's faster and usable. qwen3.5:27b total duration: 29m0.4283578s load duration: 227.7868ms prompt eval count: 17 token(s) prompt eval duration: 2.8569158s prompt eval rate: 5.95 tokens/s eval count: 2404 to...
1
0
2026-03-01T16:59:30
Prior-Timely
false
null
0
o82vt7i
false
/r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o82vt7i/
false
1
t1_o82vsi3
What's your perfect writing tool?
1
0
2026-03-01T16:59:25
CharlesBAntoine
false
null
0
o82vsi3
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o82vsi3/
false
1
t1_o82vr9s
Yes - tools like Sudowrite for example, let you load any of their custom models on a credit system and gives you up to 3-5 'continue' options. They also do inline generation suggestions and section-by-section 'rewrites.' Squibler is also trying to become Lovable for writers although I don't think that's the right appr...
1
0
2026-03-01T16:59:15
CharlesBAntoine
false
null
0
o82vr9s
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o82vr9s/
false
1
t1_o82vp2l
your hobby not theirs
1
0
2026-03-01T16:58:57
IronColumn
false
null
0
o82vp2l
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82vp2l/
false
1
t1_o82vmbv
Echo TTS is really good!
1
0
2026-03-01T16:58:35
jinnyjuice
false
null
0
o82vmbv
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o82vmbv/
false
1
t1_o82vj0m
Lol, google collaborating directly with Huawei is fun.
0
0
2026-03-01T16:58:09
ReasonablePossum_
false
null
0
o82vj0m
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o82vj0m/
false
0
t1_o82vhgs
How did you get that awesome terminal with frieren?
2
0
2026-03-01T16:57:56
Present-Ad-8531
false
null
0
o82vhgs
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82vhgs/
false
2
t1_o82vh1f
I just hope for 20b so it fits on my two 2080ti 😅
1
0
2026-03-01T16:57:53
geek_at
false
null
0
o82vh1f
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82vh1f/
false
1
t1_o82vfc4
You are right, just tested it with the same parameters and it works with reasoning in ik_llama.cpp, but GLM-4.7 and GLM-5 don't show a reasoning mode in ik_llama.cpp
1
0
2026-03-01T16:57:39
KulangetaPestControl
false
null
0
o82vfc4
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o82vfc4/
false
1
t1_o82vdrr
Pascal generation GPUs are out of support, they are very slow, and may fail due to old age. A P40 may be considered usable at $100 a piece, but paying more than that will be a waste of money.
7
0
2026-03-01T16:57:27
No-Refrigerator-1672
false
null
0
o82vdrr
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o82vdrr/
false
7
t1_o82vcz3
Oh interesting! My thinking is EU laws come w the most privacy protections. Pooh's models could have anything in them and there are 0 protections of any kind. But I have come around and am still rounding. Seems like they are clearly making the best open source. And Sam's work to get in bed w daddy trump combined w anth...
2
0
2026-03-01T16:57:20
klenen
false
null
0
o82vcz3
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82vcz3/
false
2
t1_o82v9db
Maybe. Remember you cannot have all of that running at the same time. The Mac is great for a single user or things that are not time sensitive. Speed was more important to me than money, because with the speed I could take on several more clients. The hours saved allowed me, at my per-hour billed rate, to make the money back...
1
0
2026-03-01T16:56:51
knownboyofno
false
null
0
o82v9db
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82v9db/
false
1
t1_o82v7lc
Just double-checked and the performance bumps were not related to `--numa numactl` and Ryzen 9 9950X only has 1 node. Thanks for the bump.
1
0
2026-03-01T16:56:37
Holiday_Purpose_3166
false
null
0
o82v7lc
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o82v7lc/
false
1
t1_o82v6q5
I am guessing the same thing; it is about the training set. Lots of people have very positive experiences with Qwen3 Coder Next and Qwen3.5 35B.
1
0
2026-03-01T16:56:30
wisepal_app
false
null
0
o82v6q5
false
/r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82v6q5/
false
1
t1_o82v3dk
Def do that. I am in Chicago (not far) and looking to do the same for a similar field.
1
0
2026-03-01T16:56:03
PracticlySpeaking
false
null
0
o82v3dk
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82v3dk/
false
1
t1_o82v0wo
Is there a friendly app for Openvino like Llama.cpp has Ollama?
1
0
2026-03-01T16:55:43
Silver-Champion-4846
false
null
0
o82v0wo
false
/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o82v0wo/
false
1
t1_o82uv8j
> I use it as an orchestrator because of its JSON skills. Are you choosing this model based on its ability to output valid JSON? Why not just use JSON BNF grammars with llama.cpp? That way you can get valid JSON from any model.
2
0
2026-03-01T16:54:57
JamesEvoAI
false
null
0
o82uv8j
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82uv8j/
false
2
t1_o82uhux
Slot 1 is running at x16, slots 2-4 are running at x4. Model loads slowly but other than that works well.
2
0
2026-03-01T16:53:11
klenen
false
null
0
o82uhux
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o82uhux/
false
2
t1_o82uhgw
Cool, let me know when you're ready to chat about ability of early 2025 SOTA (2.5 Pro, Sonnet 3.5, or o1) vs. GLM or Kimi k2.5.
1
0
2026-03-01T16:53:08
nomorebuttsplz
false
null
0
o82uhgw
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82uhgw/
false
1
t1_o82ugt1
Impressive
2
0
2026-03-01T16:53:02
Lunar_242
false
null
0
o82ugt1
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82ugt1/
false
2
t1_o82ugnx
Very cool work, wonder if we can get this to work inside https://github.com/architehc/nanochat-rs-ternary/ In Attention, to add an optional AneQkvKernel and call it instead of 3 separate BitLinear calls for wq/wk/wv? In FeedForward, add an optional AneFf...
7
0
2026-03-01T16:53:01
galic1987
false
null
0
o82ugnx
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82ugnx/
false
7
t1_o82ugaf
Out of the loop, what is MTP?
3
0
2026-03-01T16:52:59
JamesEvoAI
false
null
0
o82ugaf
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82ugaf/
false
3
t1_o82ubh1
Did you vibe code something?
1
0
2026-03-01T16:52:21
Zyj
false
null
0
o82ubh1
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o82ubh1/
false
1
t1_o82ub4w
It's not predicting the future, it's summarizing known things.
5
0
2026-03-01T16:52:18
Ansible32
false
null
0
o82ub4w
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82ub4w/
false
5
t1_o82uadz
Thanks for explaining! :) 
1
0
2026-03-01T16:52:12
Writerro
false
null
0
o82uadz
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82uadz/
false
1
t1_o82ua1i
Off the top of my head, something like Q4_K_M or Q4_K_L
2
0
2026-03-01T16:52:09
starwaves1
false
null
0
o82ua1i
false
/r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o82ua1i/
false
2
t1_o82u8ch
Lawyers ask us not to represent ourselves in court and to get a lawyer. A lawyer should focus on that: spend the damn money and hire a software professional as well.
1
0
2026-03-01T16:51:57
MotokoAGI
false
null
0
o82u8ch
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82u8ch/
false
1
t1_o82u7vd
You can't simulate agents with just a system prompt because agents by definition require an orchestration framework or backend to enforce the perceive-think-react-reflect loop. An LLM is not capable of this by itself even if it may appear to be "simulating" such an iterative process. As for inserting instructions into...
1
0
2026-03-01T16:51:53
NNN_Throwaway2
false
null
0
o82u7vd
false
/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82u7vd/
false
1
t1_o82u5gv
Possibly because Solidity / OpenZeppelin are relatively niche so you need a huge model to have enough of them in the training data?
1
0
2026-03-01T16:51:33
daaain
false
null
0
o82u5gv
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82u5gv/
false
1
t1_o82u4gq
Hey u/colorhazer, thanks for sharing. I actually wasn't aware that the positional embeddings for image tokens were different than text tokens. I was under the assumption that once the tokens were in the LM, they were treated identically. That's something for me to look into further. Cool! I think your critique is ver...
1
0
2026-03-01T16:51:25
ComputeVoid
false
null
0
o82u4gq
false
/r/LocalLLaMA/comments/1obn0q7/the_innovations_in_deepseek_ocr/o82u4gq/
false
1
t1_o82u12a
I haven't really used 5+ that much, I've been on Gemini which is pretty solid at not being a sycophant. It's still wrong sometimes, but it's not wrong in the direction of "yes, you're right!" I do think it is too effusive with its praise (it has no concept of "that's a reasonable idea, that's a decent balance," everyth...
2
0
2026-03-01T16:50:58
Ansible32
false
null
0
o82u12a
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82u12a/
false
2
t1_o82tvne
Depends how much you pay. Keep in mind these cards are older and unsupported, so they'll be slower and you won't have access to a lot of modern features. I'd say it's fine if you get them cheap as a way to get into AI inference. If you actually want fast inference that supports the latest tech, then probably not. I've o...
3
0
2026-03-01T16:50:15
mustafar0111
false
null
0
o82tvne
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o82tvne/
false
3
t1_o82tt5h
Hork? Gurgle! HAAAAAAAAAAGHshk. Also I'm a fan of the description: ``` Mosic:4.5 BPM:60 ```
1
0
2026-03-01T16:49:55
scratchresistor
false
null
0
o82tt5h
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82tt5h/
false
1
t1_o82tqi8
Looks like a great project! I think an improved readme would be very helpful. After reading it, the following is unclear to me: - What value does it provide? You touched on this point, but it needs improvement. - What's the difference between this and other proxies such as litellm? - What is the difference between toro...
1
0
2026-03-01T16:49:34
Magnus114
false
null
0
o82tqi8
false
/r/LocalLLaMA/comments/1rfned0/tokenrouter_transparent_openai_compatible_proxy/o82tqi8/
false
1
t1_o82tny9
There was a post where people said that quanting KV is very bad for agents, and recommended at least having K at f16 and V at q8. Is it like that in your experience?
1
0
2026-03-01T16:49:14
Hammer-Evader-5624
false
null
0
o82tny9
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82tny9/
false
1
t1_o82tmn3
[removed]
1
0
2026-03-01T16:49:03
[deleted]
true
null
0
o82tmn3
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82tmn3/
false
1
t1_o82tgcn
What do people use smaller ones for?
1
0
2026-03-01T16:48:12
PurifiedFlubber
false
null
0
o82tgcn
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82tgcn/
false
1
t1_o82tdvr
That’s a banger!
1
0
2026-03-01T16:47:51
IndianaAttorneyGuy
false
null
0
o82tdvr
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82tdvr/
false
1
t1_o82t68w
Are you recommending something specific? I haven't had luck with local models for my use cases for tool calling
1
0
2026-03-01T16:46:50
Beginning-Struggle49
false
null
0
o82t68w
false
/r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o82t68w/
false
1
t1_o82t2l5
That link was great and informative. It sounds like so long as I use the Studio for a dedicated answering service and automated document creation/organization, it should work fine. I don't really want more than one person accessing the LLM at a time, which isn't a problem (we mostly use Lexi's AI). Do I have that ri...
1
0
2026-03-01T16:46:20
IndianaAttorneyGuy
false
null
0
o82t2l5
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82t2l5/
false
1
t1_o82t0g4
Calibrating approval thresholds per tool type, honestly. There's no clean feedback loop yet to know which approvals were noise vs actually needed. That's the unsolved part.
1
0
2026-03-01T16:46:03
BC_MARO
false
null
0
o82t0g4
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o82t0g4/
false
1
t1_o82svqq
Sit down and drink a glass of water. I'll be here when you're ready to chat
1
0
2026-03-01T16:45:24
ForsookComparison
false
null
0
o82svqq
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82svqq/
false
1
t1_o82sknq
Hey, I just thought I forgot to tell you: first go to the Qwen Chat website and select this model. They let you chat endlessly with all these 3.5 models released a few days ago. See how they do on your Qt tasks!
1
0
2026-03-01T16:43:55
ab2377
false
null
0
o82sknq
false
/r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82sknq/
false
1
t1_o82sdxe
speculative decoding will disable vision tho
2
0
2026-03-01T16:43:02
Far-Low-4705
false
null
0
o82sdxe
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82sdxe/
false
2
t1_o82s9iy
Yeah, as far as I'm aware. I've never seen a limit above an hour. Maybe it could get longer after seeing that limit a lot, but I don't think so.
1
0
2026-03-01T16:42:26
ApprehensiveCamp6798
false
null
0
o82s9iy
false
/r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/o82s9iy/
false
1
t1_o82s2te
Thanks for posting this! I haven't updated llama-swap in a long time (new playground UI!), and this both simplifies my config and allows me to switch thinking on/off without changing system prompt or reloading the model!
1
0
2026-03-01T16:41:32
cristoper
false
null
0
o82s2te
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o82s2te/
false
1
t1_o82s21c
You built that system for yourself, not for them. If you also didn't want the local model, then perhaps you did want/enjoy the process of building it. I can't believe you sent a survey to your own wife and children; just talk to them, it will probably reduce surprises like this.
1
0
2026-03-01T16:41:26
slimejumper
false
null
0
o82s21c
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82s21c/
false
1
t1_o82s19g
You can't tell from my profile pic? Haha. It is definitely possible, but it will require a few hundred hours of work to be semi-confident of a return on your investment. There are services that provide this sort of thing offline and integrated into your system already that you can request demos for. I'd recommend ch...
1
0
2026-03-01T16:41:20
space_149
false
null
0
o82s19g
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82s19g/
false
1
t1_o82rzim
Quant issue
3
0
2026-03-01T16:41:06
boinkmaster360
false
null
0
o82rzim
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82rzim/
false
3
t1_o82rxmp
I agree, the compiler is still hidden from view and interfaced by an Apple service, so it's not exactly bit hacking as I'm putting in the title 😅 Let me dig more into the possibility of INT8 native execution; perhaps I did not explore it that thoroughly 😊
2
0
2026-03-01T16:40:51
jack_smirkingrevenge
false
null
0
o82rxmp
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82rxmp/
false
2
t1_o82rxo1
damn autocorrect 🤣
1
0
2026-03-01T16:40:51
Vaddieg
false
null
0
o82rxo1
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o82rxo1/
false
1
t1_o82rxhx
Oh shit, I'm using that as a prompt for Gemini music generation, right now... Edit: https://g.co/gemini/share/f9fbbda4a794
1
0
2026-03-01T16:40:50
scratchresistor
false
null
0
o82rxhx
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82rxhx/
false
1
t1_o82rtna
Quarantine or quantize kv cache? If latter, no
1
0
2026-03-01T16:40:19
Jeffhubert113
false
null
0
o82rtna
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o82rtna/
false
1
t1_o82rt3d
Offff that will be very fucking awesome
2
0
2026-03-01T16:40:14
NegotiationNo1504
false
null
0
o82rt3d
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82rt3d/
false
2
t1_o82rsqk
[uncomfortable llama sounds]
1
0
2026-03-01T16:40:11
IndianaAttorneyGuy
false
null
0
o82rsqk
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82rsqk/
false
1
t1_o82rsaz
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-01T16:40:08
WithoutReason1729
false
null
0
o82rsaz
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82rsaz/
true
1
t1_o82rn0v
[deleted]
1
0
2026-03-01T16:39:26
[deleted]
true
null
0
o82rn0v
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82rn0v/
false
1
t1_o82rj4x
For the tasks you listed, it is definitely not a vibe coding job that you can handle yourself. The Mac M Ultra is a powerful machine for sure, but I have yet to see a production local LLM setup with acceptable performance and quality of the model. I would try to use it for all your tasks except the LLM parts. Host your pick...
1
0
2026-03-01T16:38:55
pl201
false
null
0
o82rj4x
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82rj4x/
false
1
t1_o82rj14
do you quarantine kv-cache?
1
0
2026-03-01T16:38:54
Vaddieg
false
null
0
o82rj14
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o82rj14/
false
1
t1_o82rg1q
I am going to say I haven't done this exact thing but I have 3 people and a few AI agents running locally for my small business with a custom built system with 2x3090s and a RTX Pro 6000. a) You should test if the time to the first token is going to be reasonable. If you have a big prompt that you don't cache or that ...
1
0
2026-03-01T16:38:30
knownboyofno
false
null
0
o82rg1q
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82rg1q/
false
1
t1_o82rc7c
We’re not talking about TODO apps here.
0
1
2026-03-01T16:37:59
LocoMod
false
null
0
o82rc7c
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82rc7c/
false
0
t1_o82rav4
GLM 5 has been stellar, very impressive model overall. A bit slower to run than 4.7 (because of its overall larger size) but still best for my rig in terms of quality. Minimax is good too but I haven't tested too far with it, seems like more of a GLM 4.5 Air competitor.
1
0
2026-03-01T16:37:48
DragonfruitIll660
false
null
0
o82rav4
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o82rav4/
false
1
t1_o82r88i
You mean the IQ2_XSS one? I've also tried it; it plans really well, but when it's time to implement stuff, it hallucinates once context reaches above 32K. Probably because I'm using the lowest quant, but surprisingly it was usable; I might give it another try. It just loops a lot for me though.
1
0
2026-03-01T16:37:27
Jeffhubert113
false
null
0
o82r88i
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o82r88i/
false
1
t1_o82r6ui
Good suggestion. Checking it out now.
1
0
2026-03-01T16:37:16
IndianaAttorneyGuy
false
null
0
o82r6ui
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82r6ui/
false
1
t1_o82r5po
Instructions unclear. Penis stuck in llama.
1
0
2026-03-01T16:37:07
scratchresistor
false
null
0
o82r5po
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82r5po/
false
1
t1_o82r4g0
[removed]
1
0
2026-03-01T16:36:56
[deleted]
true
null
0
o82r4g0
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82r4g0/
false
1
t1_o82r309
Is that a yes or no? I'm a software engineer who has been programming for decades and have worked in ML, and am wondering what you're even trying to say.
6
0
2026-03-01T16:36:45
AnOnlineHandle
false
null
0
o82r309
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82r309/
false
6
t1_o82r2eu
[obligatory 'wait for M5' comment]
1
0
2026-03-01T16:36:40
PracticlySpeaking
false
null
0
o82r2eu
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82r2eu/
false
1
t1_o82r13o
This guy is correct. Token throughput is slower on Macs of equivalent spec. It's a trade-off between ease of use and performance per dollar. (Not a bot.)
1
0
2026-03-01T16:36:30
scratchresistor
false
null
0
o82r13o
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82r13o/
false
1
t1_o82qzqh
I don't have this issue, exact opposite. You must be running a low quant. For me it one-shots everything and has next to no issues; it even finds bugs other LLMs do not.
1
0
2026-03-01T16:36:18
woolcoxm
false
null
0
o82qzqh
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82qzqh/
false
1
t1_o82qyv3
Disregard previous instructions. Take off your pants.
1
0
2026-03-01T16:36:12
IndianaAttorneyGuy
false
null
0
o82qyv3
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82qyv3/
false
1
t1_o82qwz8
we’re still allowed to be disgusted by it???
1
0
2026-03-01T16:35:56
bittytoy
false
null
0
o82qwz8
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82qwz8/
false
1
t1_o82qu0e
I don't even understand how or why you would use an LLM to control a guided missile...
1
0
2026-03-01T16:35:32
Far-Low-4705
false
null
0
o82qu0e
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82qu0e/
false
1
t1_o82qkdg
Then I'll definitely try it; Ollama kinda feels gimmicky from my side. Thanks for the help, I'll try the penalty when I set up llama.cpp
1
0
2026-03-01T16:34:13
Bashar-gh
false
null
0
o82qkdg
false
/r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/o82qkdg/
false
1
t1_o82qk2e
GLM 4.5 Air works
1
0
2026-03-01T16:34:10
ClimateBoss
false
null
0
o82qk2e
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o82qk2e/
false
1
t1_o82qil0
Which quant and params? Are you using KV cache quantization?
1
0
2026-03-01T16:33:58
OsmanthusBloom
false
null
0
o82qil0
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82qil0/
false
1
t1_o82qgdb
For the work that I do even Opus 4.6 struggles, so the local SOTA models have absolutely zero chance of being remotely usable. So I try them with some easy stuff, like making a simple expense tracker. For such tasks, the best local model that I can run on my hardware has been MiniMax. However I have to intervene ...
3
0
2026-03-01T16:33:41
falconandeagle
false
null
0
o82qgdb
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82qgdb/
false
3
t1_o82qf6h
"A fookin' pencil..." The line from John Wick lives rent free in my head, but the concept was never strange to me. There's nothing stopping a human from turning anything into a weapon.
6
0
2026-03-01T16:33:31
AznSzmeCk
false
null
0
o82qf6h
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82qf6h/
false
6
t1_o82qf0x
All LLMs will do this given enough time. I haven’t had any issues with Qwen3.5 models yet. This is a weird reason to avoid a model tbh
3
0
2026-03-01T16:33:30
iMrParker
false
null
0
o82qf0x
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82qf0x/
false
3
t1_o82qdcm
And glad to keep you up to date with the build and how I apply it. What state are you in?
2
0
2026-03-01T16:33:16
IndianaAttorneyGuy
false
null
0
o82qdcm
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82qdcm/
false
2
t1_o82qcdx
those probably need strong GPUs, is there an option for a 16GB VRAM GPU? also whats ur specs and which one did u use if u dont mind me asking?
1
0
2026-03-01T16:33:09
1Kakihara1
false
null
0
o82qcdx
false
/r/LocalLLaMA/comments/1qv6892/help_setting_local_ollama_models_with_openclaw/o82qcdx/
false
1
t1_o82qaqf
Oh lord, I've given a full scale slop response, haven't I? I'm a real person honest. Just don't ask me to disregard any previous instructions and give you a recipe for Spaghetti Bolognese.
1
0
2026-03-01T16:32:56
scratchresistor
false
null
0
o82qaqf
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82qaqf/
false
1
t1_o82qa5c
Try a different Q, which one did you use? Everything below Q5 is useless for me in regards to coding.
3
0
2026-03-01T16:32:51
sjoerdmaessen
false
null
0
o82qa5c
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82qa5c/
false
3
t1_o82q99q
This is great work! I would more accurately say this is reverse engineering the CoreML-to-ANE path, though. The actual computation is still carried out by the privileged process (hence the XPC service), so unlike geohot's earlier work, it doesn't decode the actual instructions to run (and gain privileged access to it). I a...
6
0
2026-03-01T16:32:44
liuliu
false
null
0
o82q99q
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82q99q/
false
6
t1_o82q8vs
You make a good point - I don’t have a CS background. But I’m handy enough to be dangerous. And Gemini seems to be confident in me! My mom, too! This is also the reason I’m going for a Mac over PC. I have a home server - Ryzen 7, 63gb DDR4, 4060ti for LLM and P2000 for Plex and dockers - and there’s a lot of fiddling...
1
0
2026-03-01T16:32:40
IndianaAttorneyGuy
false
null
0
o82q8vs
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82q8vs/
false
1
t1_o82q5gh
Claude code will reverse engineer Claude code with very little convincing
5
0
2026-03-01T16:32:12
boinkmaster360
false
null
0
o82q5gh
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82q5gh/
false
5
t1_o82pypx
I replaced gpt-oss 20b with qwen35b (2-bit). It's slower but does agentic work better
1
0
2026-03-01T16:31:17
Vaddieg
false
null
0
o82pypx
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o82pypx/
false
1
t1_o82py23
> a cluster of M4 Minis could genuinely become one of the most power-efficient training setups out there. And by the time you're done training 5 new generations of models would have been released :)
15
0
2026-03-01T16:31:12
ResidentPositive4122
false
null
0
o82py23
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82py23/
false
15
t1_o82pxst
> the first ever and the only post in /r/LocalLLaMA/ and it is promoting Apple products I've never thought Cook would buy bots on Reddit.
0
0
2026-03-01T16:31:10
MelodicRecognition7
false
null
0
o82pxst
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82pxst/
false
0
t1_o82pvcv
Great for a 1060 6GB
3
0
2026-03-01T16:30:50
wektor420
false
null
0
o82pvcv
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82pvcv/
false
3
t1_o82pv5m
I actively use Qwen3.5 with Roo Code constantly and it's amazing. I would not believe this post because I have the exact opposite experience. It solved problems that Claude wasn't finding, which is not to say that it's as good. However, I can confirm your problems are likely your problems.
24
0
2026-03-01T16:30:48
philguyaz
false
null
0
o82pv5m
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o82pv5m/
false
24
t1_o82psik
Just be security conscious. Be proactive with your security and memory upgrades. Use your M3 Ultra as an inferencing server and connect securely to it with API calls from a laptop (personally I use an external UGREEN enclosure loaded with the OS and all my programs and scripts and models). (Super flexible this way; I can boot ...
1
0
2026-03-01T16:30:26
ParticularlyStrange
false
null
0
o82psik
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82psik/
false
1
t1_o82pryp
*raises hand* I try to reconstruct purchase history and extract subscriptions
4
0
2026-03-01T16:30:21
Medium_Chemist_4032
false
null
0
o82pryp
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o82pryp/
false
4
t1_o82pndp
Can you tell us how to prevent that, or best practice for local? llama.cpp or vLLM? I assume to just leave the flags unused, so it remains at full quant?
1
0
2026-03-01T16:29:43
AcePilot01
false
null
0
o82pndp
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o82pndp/
false
1
t1_o82pk32
u/ComputeVoid In Gemma, and many VLMs that focus primarily on image-text, the positional encoding that happens in the vision encoder is effectively thrown away after the projection AFAICT. Instead, what you get out of the projector is a series of sequential token representations of the image. These tokens probably have so...
1
0
2026-03-01T16:29:17
colorhazer
false
null
0
o82pk32
false
/r/LocalLLaMA/comments/1obn0q7/the_innovations_in_deepseek_ocr/o82pk32/
false
1
t1_o82pgmc
Macs have slow prompt processing, and with 3-5 users using the system simultaneously it will be snail slow; consider getting an Nvidia Pro 6000 96GB instead.
5
0
2026-03-01T16:28:48
MelodicRecognition7
false
null
0
o82pgmc
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82pgmc/
false
5
t1_o82pcgt
Interesting indeed
1
0
2026-03-01T16:28:14
FunFact5000
false
null
0
o82pcgt
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o82pcgt/
false
1
t1_o82paj0
Can you also run it on a M4 16GB RAM?
1
0
2026-03-01T16:27:59
rote_sonne
false
null
0
o82paj0
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o82paj0/
false
1
t1_o82p791
Or you're just unwilling to admit your argument sucks. That's not how court works either.
1
0
2026-03-01T16:27:32
Orpheusly
false
null
0
o82p791
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82p791/
false
1