name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
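The field list above reads like an Arrow/Parquet schema for the comment records that follow. A minimal sketch of declaring that column layout with PyArrow, assuming a hypothetical dump file named comments.parquet (the dump itself does not name one):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Column layout mirroring the field list above (names and types only).
comment_schema = pa.schema([
    ("name", pa.string()),
    ("body", pa.string()),
    ("score", pa.int64()),
    ("controversiality", pa.int64()),
    ("created", pa.timestamp("us")),
    ("author", pa.string()),
    ("collapsed", pa.bool_()),
    ("edited", pa.timestamp("us")),
    ("gilded", pa.int64()),
    ("id", pa.string()),
    ("locked", pa.bool_()),
    ("permalink", pa.string()),
    ("stickied", pa.bool_()),
    ("ups", pa.int64()),
])

# "comments.parquet" is a hypothetical file name for this dump.
table = pq.read_table("comments.parquet").cast(comment_schema)
print(table.num_rows, "comments loaded")
```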
t1_o86ljnq
`bf16` performance on my GPU is quite bad, though, I'll test this. ~80k tokens start the death spirals with `f16`
7
0
2026-03-02T05:33:08
ndiphilone
false
null
0
o86ljnq
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86ljnq/
false
7
t1_o86lfse
Good grief, who cares?
1
0
2026-03-02T05:32:15
Wise_Ad_6822
false
null
0
o86lfse
false
/r/LocalLLaMA/comments/1qa9k2t/does_anyone_know_what_nvidias_release/o86lfse/
false
1
t1_o86le0j
Really interesting experiment, props for digging into the ANE. One thing I’d push back on is the characterization of the ANE as “an FP16 processor.” That’s almost certainly not accurate. Apple advertises TOPS in INT8, and the Neural Engine appears to be a quantized tensor accelerator optimized primarily for INT8 (and...
2
0
2026-03-02T05:31:52
rovo
false
null
0
o86le0j
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86le0j/
false
2
t1_o86ldd8
No way?
1
0
2026-03-02T05:31:43
thestillwind
false
null
0
o86ldd8
false
/r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o86ldd8/
false
1
t1_o86l9cl
If you get a chance, running tests like this with different kv values (below f16) would be interesting, especially K vs V.
4
0
2026-03-02T05:30:50
mp3m4k3r
false
null
0
o86l9cl
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86l9cl/
false
4
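A minimal sketch of the K-vs-V sweep the commenter above suggests, assuming a llama.cpp build whose llama-server supports --cache-type-k/--cache-type-v; the GGUF path is hypothetical and the script only prints the commands to rerun the long-context test against:

```python
import itertools
import shlex

# Hypothetical model path; substitute your own GGUF.
MODEL = "models/Qwen3.5-35B-A3B-Q4_K_M.gguf"

# Enumerate K/V cache-type combinations (f16 baseline plus lower-precision options).
for ctk, ctv in itertools.product(["f16", "q8_0", "q4_0"], repeat=2):
    cmd = [
        "llama-server", "-m", MODEL,
        "--ctx-size", "131072",
        "--cache-type-k", ctk,
        "--cache-type-v", ctv,
    ]
    # Launch each configuration and repeat the same long-context prompt against it.
    print(shlex.join(cmd))
```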
t1_o86l89f
https://preview.redd.it/…5def0fab73b6d323
1
0
2026-03-02T05:30:36
krecoun007
false
null
0
o86l89f
false
/r/LocalLLaMA/comments/1ribhpg/help_me_understand_why_a_certain_image_is/o86l89f/
false
1
t1_o86l2fg
At this rate those “skills” will just turn into safetensor files. Lol
5
0
2026-03-02T05:29:19
CATLLM
false
null
0
o86l2fg
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o86l2fg/
false
5
t1_o86kzzz
Or just create a dockerfile and run it as a container
3
0
2026-03-02T05:28:47
chensium
false
null
0
o86kzzz
false
/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o86kzzz/
false
3
t1_o86kxkp
Interesting! I just had the 35B-A3B get stuck in loops at 80k tokens. It’s fine in smaller prompts but once it gets properly loaded, I see these issues. Thanks for noting that!
1
1
2026-03-02T05:28:16
simracerman
false
null
0
o86kxkp
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86kxkp/
false
1
t1_o86kuo8
Could I run this with 96GB ddr5 ram and 96GB vram?
1
0
2026-03-02T05:27:38
whity2773
false
null
0
o86kuo8
false
/r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o86kuo8/
false
1
t1_o86kn70
Yes. With around 40k-token prompts (which generate a similarly sized response), it takes about 1h to fully generate (so around 10 t/s). It's about 2x slower than the M3 for almost all prompt sizes. But the real advantage, of course, is that I can run Deepseek 3.2 and larger models that would overwhelm the M3. The RTXs are also much fa...
1
0
2026-03-02T05:25:59
marhalt
false
null
0
o86kn70
false
/r/LocalLLaMA/comments/1n70v8v/rtx_6000_pro_workstation_to_run_deepseek/o86kn70/
false
1
t1_o86kn6o
thanks a bunch! 
1
0
2026-03-02T05:25:59
redditorialy_retard
false
null
0
o86kn6o
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86kn6o/
false
1
t1_o86kk44
Oh, I thought I attached it. Fixing it.
1
0
2026-03-02T05:25:19
krecoun007
false
null
0
o86kk44
false
/r/LocalLLaMA/comments/1ribhpg/help_me_understand_why_a_certain_image_is/o86kk44/
false
1
t1_o86kizk
This might explain why, in my testing, 122b, 35b, and 27b felt more ‘dumb’, making mistakes and getting into death loops when I have the kv cache at q8.
-2
1
2026-03-02T05:25:05
CATLLM
false
null
0
o86kizk
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86kizk/
false
-2
t1_o86kfhp
128k context window. kv cache at BF16 (always left there for coding). I loaded a small to medium size repo with 28k tokens. Asked it to review the code, make suggestions. When it finished, I asked it to implement the suggestions. It did well for a little while, but at around 80k tokens it got stuck in a loop and starte...
7
0
2026-03-02T05:24:19
simracerman
false
null
0
o86kfhp
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86kfhp/
false
7
t1_o86kdrh
Can you share the startup command for it?
5
0
2026-03-02T05:23:56
texasdude11
false
null
0
o86kdrh
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86kdrh/
false
5
t1_o86k6gq
Sorry, but a half-assed “write me a launch post” for yet another cloud service that makes no effort to describe how it pertains to local inference… it’s just not going to cut it anymore. Read the room.
4
0
2026-03-02T05:22:19
fligglymcgee
false
null
0
o86k6gq
false
/r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/o86k6gq/
false
4
t1_o86jzx1
https://preview.redd.it/…d01997e892523447
1
0
2026-03-02T05:20:53
DuePiglet2696
false
null
0
o86jzx1
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o86jzx1/
false
1
t1_o86jyro
I think they can imitate any of the "personalities" you listed, on request. It's a pretty shallow thing; maybe they only differ in how they are fine-tuned or in the default prompt.
3
0
2026-03-02T05:20:38
SnooCompliments7914
false
null
0
o86jyro
false
/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/o86jyro/
false
3
t1_o86jy07
Any 397B version you recommend?
3
0
2026-03-02T05:20:29
vpyno
false
null
0
o86jy07
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86jy07/
false
3
t1_o86jot3
Ubuntu, 64gb ddr4, i7, 3090 and compiled llama.cpp with a config I found in a thread here. I can get specifics for you tomorrow.
4
0
2026-03-02T05:18:31
Aromatic-Low-4578
false
null
0
o86jot3
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86jot3/
false
4
t1_o86jm6w
I feel like we are spoiled for reasoning LLMs that can run locally if you stay away from instruct models. But if you want good knowledge, you need the massive training data that paid models have, or be satisfied with the latency of a RAG pipeline filling the gap past the knowledge cutoff.
2
0
2026-03-02T05:17:57
false79
false
null
0
o86jm6w
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86jm6w/
false
2
t1_o86jko4
My Man!
0
0
2026-03-02T05:17:38
Local_Phenomenon
false
null
0
o86jko4
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86jko4/
false
0
t1_o86jkg1
Not the same model but one-shot test without thinking using Qwen3.5-35B-A3B-UD-Q6_K_XL: [https://codepen.io/dark-seied/pen/MYjabKN](https://codepen.io/dark-seied/pen/MYjabKN) Unsloth recommended settings used.
3
0
2026-03-02T05:17:34
Dyssun
false
null
0
o86jkg1
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86jkg1/
false
3
t1_o86jjei
Great point. I have a number of different agents that have specialized tasks. Most of them are in my OpenClaw instance so I use the internal OpenClaw messaging to communicate between them (had a bit of a mess setting that up) but I've recently also setup my Claude Code (ie my coding specialist agent) on clawnet so th...
1
0
2026-03-02T05:17:22
_jonnyquest_
false
null
0
o86jjei
false
/r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/o86jjei/
false
1
t1_o86jj84
One made to be used outside. The one I use is this one https://www.youtube.com/watch?v=7dpdqO1EZK0
1
0
2026-03-02T05:17:19
Red_Redditor_Reddit
false
null
0
o86jj84
false
/r/LocalLLaMA/comments/1iw3gzg/how_much_does_cpu_speed_matter_for_inference/o86jj84/
false
1
t1_o86jacr
I would only install such an AI on my useless PC, which has no link to any of my official accounts.
1
0
2026-03-02T05:15:26
Small_Extent7236
false
null
0
o86jacr
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o86jacr/
false
1
t1_o86j29o
You're slapping on "3 RAG pipelines" to make simple memories work? While Claude relies on a simple memory.md file to store everything efficiently, I'd say your approach is flawed: more complexity does not equal a better result, especially on smaller, weaker models.
1
0
2026-03-02T05:13:40
ELPascalito
false
null
0
o86j29o
false
/r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/o86j29o/
false
1
t1_o86j0b5
Interesting. What context window size did you load the model with? And do you use any kv cache quants? I haven't seen this yet, but I do see the drop in tool calls when the conversation context is too long.
3
0
2026-03-02T05:13:15
bobaburger
false
null
0
o86j0b5
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86j0b5/
false
3
t1_o86isln
Transforming unstructured audio into actionable data is a human led art and Lifewood ensures your local embeddings are built on a foundation of high quality validation.
1
0
2026-03-02T05:11:36
Infamous_Web490
false
null
0
o86isln
false
/r/LocalLLaMA/comments/1q8uyhj/i_built_a_100_local_audio_rag_pipeline_to_index/o86isln/
false
1
t1_o86ir8y
https://huggingface.co/Sehyo/Qwen3.5-122B-A10B-NVFP4 this one is perfect
9
0
2026-03-02T05:11:19
Nepherpitu
false
null
0
o86ir8y
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86ir8y/
false
9
t1_o86ikp1
Use llmfit to find all the models that fit your machine perfectly, and then find the best reasoning model for you.
2
0
2026-03-02T05:09:57
Most_Requirement_470
false
null
0
o86ikp1
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86ikp1/
false
2
t1_o86ijc6
I always wonder why we ended up with MCPs and complex tool calling instead of dedicated, ephemeral agents that handle tool use and are only ever supposed to talk to other agents. Like if you want to call a complex API, you call up the API specialist agent, it guides the main agent through a telephone b...
1
0
2026-03-02T05:09:39
aeroumbria
false
null
0
o86ijc6
false
/r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/o86ijc6/
false
1
t1_o86i47l
AFAIK, and from what people say, bigger models at medium/low quants perform better than smaller models at big quants. A 20B at Q4 would be better than a 9B at Q8, right?
1
0
2026-03-02T05:06:28
PermitNo8107
false
null
0
o86i47l
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86i47l/
false
1
t1_o86htl2
How is this model in tool calling and coding when compared with minimax 2.5? I currently run a 4-bit AWQ with vLLM on 8x 3090. What’s the best quant for running Qwen 3.5 122B? I only use Claude Code with my setup.
2
0
2026-03-02T05:04:15
BeeNo7094
false
null
0
o86htl2
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86htl2/
false
2
t1_o86hqkq
heretic is already working well. Been using this model: [https://huggingface.co/llmfan46/Qwen3.5-27B-heretic-v2](https://huggingface.co/llmfan46/Qwen3.5-27B-heretic-v2)
5
0
2026-03-02T05:03:37
My_Unbiased_Opinion
false
null
0
o86hqkq
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86hqkq/
false
5
t1_o86hnhd
Can you pls share the link? I think the post got deleted 
1
0
2026-03-02T05:02:58
Drishal
false
null
0
o86hnhd
false
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o86hnhd/
false
1
t1_o86hi8s
The 35B-A3B just hallucinated on me with opencode after reaching 80k context. I’m using the Q5_K_XL from Unsloth after the fix they deployed 2 days ago.
11
0
2026-03-02T05:01:54
simracerman
false
null
0
o86hi8s
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86hi8s/
false
11
t1_o86hhpe
Will it still be free? I doubt it can generate unlimited images/videos.
1
0
2026-03-02T05:01:47
MetalZone00
false
null
0
o86hhpe
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o86hhpe/
false
1
t1_o86hc02
Very cool project. But I wouldn’t brag about it publicly since attempting to reverse engineer their product is against their terms of service
0
0
2026-03-02T05:00:34
vteyssier
false
null
0
o86hc02
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86hc02/
false
0
t1_o86h3qy
It has a different way of laying out code than my other models. Unique signature.
1
0
2026-03-02T04:58:52
crantob
false
null
0
o86h3qy
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o86h3qy/
false
1
t1_o86gt0h
Love the base model schizo-detuned answers.
2
0
2026-03-02T04:56:38
ortegaalfredo
false
null
0
o86gt0h
false
/r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/o86gt0h/
false
2
t1_o86gpyz
GitHub repo : [https://github.com/jaymunshi/open-swara](https://github.com/jaymunshi/open-swara)
2
0
2026-03-02T04:56:00
pmttyji
false
null
0
o86gpyz
false
/r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/o86gpyz/
false
2
t1_o86ggv1
Nice post! Thank you for the benches! It’s really interesting.
2
0
2026-03-02T04:54:05
No-Equivalent-2440
false
null
0
o86ggv1
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o86ggv1/
false
2
t1_o86gd3d
Yeah, 24GB just doesn't quite cut it on the Macs. You need 24GB VRAM or 36GB unified to begin to get decent results.
1
0
2026-03-02T04:53:18
Soft-Barracuda8655
false
null
0
o86gd3d
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o86gd3d/
false
1
t1_o86g9yz
try geohot's P2P driver! it's meant for the 4090, but it just might work for the 3090 too. it might improve things enough not to need the additional hardware!
2
0
2026-03-02T04:52:39
JohnTheNerd3
false
null
0
o86g9yz
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86g9yz/
false
2
t1_o86fxs9
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
1
2026-03-02T04:50:07
WithoutReason1729
false
null
0
o86fxs9
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86fxs9/
true
1
t1_o86fxoe
I didn't spend enough time with the model to be able to answer that - but I typically see above 3 for coding-related tasks. my main use case is a voice assistant, though, so I suspect it will not be very relatable regardless.
2
0
2026-03-02T04:50:06
JohnTheNerd3
false
null
0
o86fxoe
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86fxoe/
false
2
t1_o86fvbt
It works, but I've heard people say it's really broken at times; llama.cpp runs it perfectly and isn't very hard to set up.
4
0
2026-03-02T04:49:37
velcroenjoyer
false
null
0
o86fvbt
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86fvbt/
false
4
t1_o86fsqd
The article lacks details. Please get names of specific modules, workflows, intelligence gathering, tools, information and data processing systems.
1
0
2026-03-02T04:49:05
AIML_Tom
false
null
0
o86fsqd
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o86fsqd/
false
1
t1_o86fkzy
Can this be done with Ollama?
2
0
2026-03-02T04:47:28
gondoravenis
false
null
0
o86fkzy
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86fkzy/
false
2
t1_o86ff4n
thanks dude, loved your LLM translation app. Recently forked it to add sdlxliff support (and seems to be working so far). Let me know what you think!
1
0
2026-03-02T04:46:16
bombaybicycleclub
false
null
0
o86ff4n
false
/r/LocalLLaMA/comments/1pvpd87/end_of_2026_whats_the_best_local_translation_model/o86ff4n/
false
1
t1_o86fadm
The issue is architectural. The concept of an ever-expanding context is simply bad architecture. Yes, the AI needs a memory of the previous posts of the conversation, but having it all in context is a terrible idea. We have all seen answer quality deteriorate as the context grows. You need short context to keep your AI...
1
0
2026-03-02T04:45:18
Protopia
false
null
0
o86fadm
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o86fadm/
false
1
t1_o86f7og
I have the same GPU and 32gb system ram. I use Qwen 3.5 35B A3B Q4_K_M. It’s better than gpt oss 20b from what I’ve seen so far
2
0
2026-03-02T04:44:45
Guilty_Rooster_6708
false
null
0
o86f7og
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86f7og/
false
2
t1_o86f558
bro make a github repo if you want to share code, nobody's taking the time to assemble this. for what it's worth, my openclaw made this same thing in about 20 minutes through discord
-5
0
2026-03-02T04:44:15
__SlimeQ__
false
null
0
o86f558
false
/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o86f558/
false
-5
t1_o86exvz
I’d love to see that too! It seems promising. Generally, the info with c2 is a bit worse, but it can have exact recall with what it does get right which means in some cases, it can make the skills deterministic instead of pushing the model to act a certain way. So it gives more stability for skills in some use cases fr...
1
0
2026-03-02T04:42:47
Proper-Lab1756
false
null
0
o86exvz
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o86exvz/
false
1
t1_o86esb0
I know nothing about NVLink; I see them on eBay from $70 to $600 - wtf. halp.
1
0
2026-03-02T04:41:38
ghosthacked
false
null
0
o86esb0
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86esb0/
false
1
t1_o86eqy3
Nothing beats paid API still. You can't really run big models on 128GB Macs, even 200b is pushing it and it's not even about speed, you simply don't have enough RAM. Right now the best model for knowledge and reasoning that I can run on my 128GB M4 Max is qwen3.5-122b-a10b, it just came out a few days ago and it's a bi...
5
0
2026-03-02T04:41:21
po_stulate
false
null
0
o86eqy3
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86eqy3/
false
5
t1_o86emgn
FWIW, I just looked at the [unsloth quant for the 27b](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/tree/main?show_file_info=Qwen3.5-27B-Q4_K_M.gguf) and it doesn't seem any of the layers you mentioned are actually at Q8. perhaps you're thinking of another model?
8
0
2026-03-02T04:40:26
JohnTheNerd3
false
null
0
o86emgn
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86emgn/
false
8
t1_o86egqm
TTS/OCR is a good idea. I'm thinking 27b q3 might be the way to go, from my experience I haven't been able to confidently use sub 20b models for general tasks
1
0
2026-03-02T04:39:16
MiyamotoMusashi7
false
null
0
o86egqm
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86egqm/
false
1
t1_o86efgw
hmm
1
0
2026-03-02T04:39:01
Esiz_AL2
false
null
0
o86efgw
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86efgw/
false
1
t1_o86efg4
[removed]
1
0
2026-03-02T04:39:01
[deleted]
true
null
0
o86efg4
false
/r/LocalLLaMA/comments/1p3dmlm/orange_pi_6_the_worlds_best_ai_deal_of_2025/o86efg4/
false
1
t1_o86e6sm
qwen3.5 35b a3b if you have 32g ram
9
0
2026-03-02T04:37:14
Conscious_Chef_3233
false
null
0
o86e6sm
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86e6sm/
false
9
t1_o86e4b2
I downgraded from my $300/month Claude subscription to Qwen3.5; they do almost the same for me.
3
0
2026-03-02T04:36:43
choz23
false
null
0
o86e4b2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86e4b2/
false
3
t1_o86e3z7
Don't have Mac, but you should give the most recent Qwen3.5 122B-A10B a shot. Great reasoning, coding model, and has the knowledge you need. MoE means your M4 will give you performance on top of usability.
13
0
2026-03-02T04:36:40
simracerman
false
null
0
o86e3z7
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86e3z7/
false
13
t1_o86e3b0
the smartest dogs are smarter than the dumbest humans, the humans just have the language chiplet grafted onto the soc so you don't notice that there is NO general purpose compute there
3
0
2026-03-02T04:36:31
CanineAssBandit
false
null
0
o86e3b0
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o86e3b0/
false
3
t1_o86dzee
Silly me, I have two 3090s: one does ComfyUI, one does Ollama/OpenWebUI. I know now what I must do; I don't know if I have the strength...
1
0
2026-03-02T04:35:43
ghosthacked
false
null
0
o86dzee
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86dzee/
false
1
t1_o86dywi
What’s your average acceptance length in practice (and on what workload)?
1
0
2026-03-02T04:35:37
Appropriate-Lie-8812
false
null
0
o86dywi
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86dywi/
false
1
t1_o86dun5
Well, it looks like they got around to making some local models that can generate low-quality posts and also post even lower-quality comments on those posts. Must be one of the Openclaw implementations.
1
0
2026-03-02T04:34:45
Ok-Adhesiveness-4141
false
null
0
o86dun5
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o86dun5/
false
1
t1_o86duco
Unfortunately I’m fresh out of college, and I’m having to save up money for some big upcoming expenses. Normally I would though.
1
0
2026-03-02T04:34:41
Proper-Lab1756
false
null
0
o86duco
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o86duco/
false
1
t1_o86du7i
Games will still run on your 4090 (Windows even has a settings section to steer each app to the correct GPU if you need to fine-tune it); the rendered image will be transmitted over PCIe through the dGPU to the display. I haven't noticed any slowdowns whatsoever. It's the same thing laptops have been doing forever.
1
0
2026-03-02T04:34:39
Training_Visual6159
false
null
0
o86du7i
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o86du7i/
false
1
t1_o86dsxo
Noticed the looping right away when I asked for the weather and threw a bunch of search results in. It was struggling to settle on an answer when one site gave 19-20C and the other gave 17-20C; it loops extremely easily.
4
0
2026-03-02T04:34:24
m31317015
false
null
0
o86dsxo
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86dsxo/
false
4
t1_o86do44
Dude I was just thinking about how I wanted to put the OS into the page file to free up room in ram for the llm. I can't tell if this is more or less unhinged. Super fucking cool.
1
0
2026-03-02T04:33:24
CanineAssBandit
false
null
0
o86do44
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o86do44/
false
1
t1_o86dn97
I've only run it with ik_llama.cpp on my 24GB VRAM at IQ4_XS. I get about 3 tok/s, but it works well enough. No kv quant, didn't dare try it on such a low general quant
1
0
2026-03-02T04:33:13
s1mplyme
false
null
0
o86dn97
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86dn97/
false
1
t1_o86dl8n
Also, "spoiler" why?
1
0
2026-03-02T04:32:49
MrE_WI
false
null
0
o86dl8n
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o86dl8n/
false
1
t1_o86dkwv
I tried that one, but perhaps I can't run a good enough quant for my purposes. Thanks for the suggestion
1
0
2026-03-02T04:32:45
Beginning-Struggle49
false
null
0
o86dkwv
false
/r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o86dkwv/
false
1
t1_o86djcj
Assuming you're using llama.cpp, you need to offload experts to system RAM. Offloading entire layers will be very slow. On my 5060 Ti + 32GB RAM it runs fine, so I can only imagine it should run faster for you.
3
0
2026-03-02T04:32:26
defensivedig0
false
null
0
o86djcj
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86djcj/
false
3
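A minimal sketch of the expert-offload setup described in the comment above, assuming a llama.cpp build whose llama-server supports --override-tensor; the GGUF path is hypothetical and the script only prints the command to run:

```python
import shlex

# Hypothetical GGUF path; substitute the MoE model you are actually running.
MODEL = "models/Qwen3.5-35B-A3B-Q4_K_M.gguf"

cmd = [
    "llama-server", "-m", MODEL,
    "--n-gpu-layers", "99",                     # keep attention/dense weights on the GPU
    "--override-tensor", ".ffn_.*_exps.=CPU",   # push per-expert FFN tensors to system RAM
    "--ctx-size", "32768",
]
print(shlex.join(cmd))
```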
t1_o86dj99
I just asked qwen 3.5's thinking model and watched its 'thoughts' - it was wrestling with itself about how much to disclose - when I pressed further it recognized its own deliberate omission, then it failed with an error. I replied, 'You seem to be doing a lot of mental gymnastics to avoid talking about the student p...
2
0
2026-03-02T04:32:24
Easy-Initiative5771
false
null
0
o86dj99
false
/r/LocalLLaMA/comments/16sw4na/qwen_is_aligned_just_as_you_would_expect/o86dj99/
false
2
t1_o86diwi
Okay, after reading the chain of replies to u/Ylsid's (reasonable) comment, I do now understand what you're trying to do here... Can you give us some metrics to back up this claim? Like, how does a vignette like this perform compared to a more 'standard' approach?
1
0
2026-03-02T04:32:20
MrE_WI
false
null
0
o86diwi
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o86diwi/
false
1
t1_o86deea
Has anyone given any of the nvfp4 quants a try? The coder next nvfp4 is absolutely blazing, and super usable in my experience. Hoping there’s an equivalence with qwen3.5 122B
6
0
2026-03-02T04:31:25
Laabc123
false
null
0
o86deea
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86deea/
false
6
t1_o86dbp3
I got one off Marketplace and one off eBay. Both used EVGA 3090 FTW3.
2
0
2026-03-02T04:30:53
AdamTReineke
false
null
0
o86dbp3
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86dbp3/
false
2
t1_o86db1z
TB5 sounds fast but 10 GB/s is nowhere near the 273 GB/s internal memory bandwidth the M4 Pro uses for attention layers, so two 24GB machines clustered over Thunderbolt won't behave like one 48GB machine -- you'd get better mileage from a single M4 Pro 48GB or M4 Max instead.
1
0
2026-03-02T04:30:45
BC_MARO
false
null
0
o86db1z
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o86db1z/
false
1
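A quick back-of-the-envelope check of the gap the comment above points at, using only the two figures quoted there:

```python
# Figures quoted in the comment above.
interconnect_gb_s = 10    # usable Thunderbolt 5 transfer rate cited
unified_mem_gb_s = 273    # M4 Pro internal memory bandwidth cited

# Layers split across two machines wait on the link, not on local memory.
print(f"Internal memory is ~{unified_mem_gb_s / interconnect_gb_s:.0f}x faster than the link")
```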
t1_o86d968
Are you referring to a markdown formatting issue? I'm not seeing anything amiss; all the code blocks are rendering in Reddit as I'd expect. Is there something specific that you're seeing in the post that's formatted improperly?
1
0
2026-03-02T04:30:22
jeremyckahn
false
null
0
o86d968
false
/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o86d968/
false
1
t1_o86d85b
sounds like describing an ariel atom vs a semi truck. One is heavy as fuck and waaaaay bigger and slower than it needs to be, despite both being vehicles that move
1
0
2026-03-02T04:30:09
CanineAssBandit
false
null
0
o86d85b
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o86d85b/
false
1
t1_o86d1q8
I'm using GGUF on llama.cpp; sorry, I'm not multi-GPU like him. When I said I had the same issue as him, I didn't mean exactly the same.
1
0
2026-03-02T04:28:51
sudden_aggression
false
null
0
o86d1q8
false
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o86d1q8/
false
1
t1_o86crpw
The security concerns are real and the wild west framing is accurate. The structural issue is that OpenClaw (and most agent frameworks) give you capability with no execution boundary. Every tool call just... runs. The interesting design question is whether that governance layer belongs inside the framework or sits outs...
1
0
2026-03-02T04:26:51
Trick-Position-5101
false
null
0
o86crpw
false
/r/LocalLLaMA/comments/1raq23i/they_have_karpathy_we_are_doomed/o86crpw/
false
1
t1_o86crci
M4 is overkill for Clawdbot. M1 has the same single-core performance as Intel’s brand new Panther Lake. You are fine using it. Clawdbot runs 24/7, so it is more natural on a desktop, but if you keep your M1 plugged in somewhere and set it not to sleep it will work great.
1
0
2026-03-02T04:26:46
rpiguy9907
false
null
0
o86crci
false
/r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o86crci/
false
1
t1_o86cn9r
You can easily get AI to give you medical advice, wartime plans that include attacking civilians, or even how to hide a body. Just gotta frame it the right way.
5
0
2026-03-02T04:25:56
redditorialy_retard
false
null
0
o86cn9r
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86cn9r/
false
5
t1_o86cg2t
Because coding models have fewer guardrails than normal ones.
0
0
2026-03-02T04:24:30
redditorialy_retard
false
null
0
o86cg2t
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86cg2t/
false
0
t1_o86cdbv
If you're dealing with a defined problem, top-down is easier, but if it's a new realm, bottom-up may be the only feasible option.
2
0
2026-03-02T04:23:57
po_stulate
false
null
0
o86cdbv
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o86cdbv/
false
2
t1_o86cc3j
Hello, may I DM you? I’m trying to run NVIDIA Nemotron Nano 12B v2 VL with the same GPU as yours… Gemini is running me in circles and I can't find any solution to run it.
1
0
2026-03-02T04:23:43
alitadrakes
false
null
0
o86cc3j
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86cc3j/
false
1
t1_o86ca3x
Meanwhile, I have several papers whose verified math and methodology rely on the theory 4o helped me develop. Also have two provisional patents in the works. Logic and common sense is the key to wading through sycophancy.
1
0
2026-03-02T04:23:20
randomintent
false
null
0
o86ca3x
false
/r/LocalLLaMA/comments/1qvv8ps/gpt4os_system_prompt_now_includes_instructions/o86ca3x/
false
1
t1_o86byjt
Lmao wut? The original R1 is in a VERY different performance class than a fucking 27B qwen that can hardly tie its own shoes. It takes me exactly one chat with any of these tiny models to know it's tiny and know this benchmaxxing bullshit is bullshit. I have never met a small model that was as clever as a larger one. ...
2
1
2026-03-02T04:21:02
CanineAssBandit
false
null
0
o86byjt
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86byjt/
false
2
t1_o86bovl
> I think I am bottom Hehe
1
0
2026-03-02T04:19:06
Safe_Sky7358
false
null
0
o86bovl
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o86bovl/
false
1
t1_o86bnpp
give more details on what you mean by roleplay!
1
0
2026-03-02T04:18:53
michaelsoft__binbows
false
null
0
o86bnpp
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o86bnpp/
false
1
t1_o86bj5p
Use the latest LM Studio or compile llama.cpp
1
0
2026-03-02T04:17:57
sine120
false
null
0
o86bj5p
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86bj5p/
false
1
t1_o86baxq
I think I am bottom-up, but it's becoming a lot more apparent lately that if you just use AI in a lazy way, it lets you enhance doing things the way you've been doing them... but that could still be a really inefficient way to do things. So I am pushing for more top-down planning, and it is clearly better. It exhausts me ...
1
0
2026-03-02T04:16:19
michaelsoft__binbows
false
null
0
o86baxq
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o86baxq/
false
1
t1_o86b9q4
I'd wait for a Dolphin 4 post-training of the Qwen 3.5 models. Should be very coherent.
1
0
2026-03-02T04:16:05
djstraylight
false
null
0
o86b9q4
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86b9q4/
false
1
t1_o86b4aq
The classic YouTube test for the dudes getting shipped 4 Mac minis. "HI!" wow look at that 360 tokens/s.
5
0
2026-03-02T04:15:00
Anarchaotic
false
null
0
o86b4aq
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o86b4aq/
false
5
t1_o86b310
Should be a pretty potent 9b coming from qwen in a day or two. You'd be able to run that with a nice big context window
2
0
2026-03-02T04:14:45
Soft-Barracuda8655
false
null
0
o86b310
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86b310/
false
2