name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o81wqv9
TL;DR: I’m testing a 3-state supervisor (CLEAN/LOCKSTEP/HARDENED) above an agent loop to prevent threshold “chattering”. Mechanism: hysteresis (T_low < T_high) + min dwell time τ_min + optional EMA on S_t. Mini example: if S_t hovers 6.9–7.1 near boundary, hysteresis+dwell prevents rapid toggles. Question: In your expe...
1
0
2026-03-01T13:58:59
Gabriel-granata
false
null
0
o81wqv9
false
/r/LocalLLaMA/comments/1rhww3y/deterministic_supervisory_control_layer_for_llm/o81wqv9/
false
1
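The mechanism in the TL;DR above (hysteresis band plus minimum dwell plus optional EMA) can be sketched in a few lines. The thresholds, dwell length, and smoothing factor below are hypothetical placeholders, not values from the post:

```python
class HysteresisSupervisor:
    """Sketch of the 3-state supervisor idea: hysteresis + dwell + EMA."""
    STATES = ["CLEAN", "LOCKSTEP", "HARDENED"]

    def __init__(self, t_low=6.5, t_high=7.5, min_dwell=5, alpha=0.3):
        self.t_low, self.t_high = t_low, t_high  # T_low < T_high dead band
        self.min_dwell = min_dwell               # minimum steps between transitions
        self.alpha = alpha                       # EMA smoothing factor
        self.state = "CLEAN"
        self.ema = None
        self.steps_in_state = 0

    def step(self, s_t):
        # Smooth the raw score first so a single spike can't flip the state.
        self.ema = s_t if self.ema is None else self.alpha * s_t + (1 - self.alpha) * self.ema
        self.steps_in_state += 1
        if self.steps_in_state < self.min_dwell:
            return self.state                    # dwell: refuse to transition yet
        # Hysteresis: escalate only above t_high, de-escalate only below t_low.
        if self.ema > self.t_high and self.state != "HARDENED":
            self.state = self.STATES[self.STATES.index(self.state) + 1]
            self.steps_in_state = 0
        elif self.ema < self.t_low and self.state != "CLEAN":
            self.state = self.STATES[self.STATES.index(self.state) - 1]
            self.steps_in_state = 0
        return self.state

sup = HysteresisSupervisor()
# A score hovering at 6.9-7.1, inside the 6.5-7.5 dead band, never toggles.
states = [sup.step(s) for s in [6.9, 7.1, 6.9, 7.1, 7.0, 6.9, 7.1]]
print(states)
```

A sustained excursion above T_high still escalates once the dwell expires, so the supervisor reacts to real shifts while ignoring boundary chatter.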
t1_o81wq7a
I'd definitely feel more comfortable if I had a bunch of extra heat sinks and an extra fan haha. Another commenter pointed out [LACT](https://github.com/ilya-zlobintsev/LACT) which supports different setting profiles and "Automatic profile activation based on running processes or gamemode status", will also look into t...
1
0
2026-03-01T13:58:52
doesitoffendyou
false
null
0
o81wq7a
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81wq7a/
false
1
t1_o81wl89
Interesting info on the context degradation/rot I'll keep that in mind with MoE's moving forward. I appreciate your last insight, I feel that most people don't understand LLMs beyond them being a magic talking box. I imagine we have a somewhat similar background of actually working with AI professionally and having to...
1
0
2026-03-01T13:58:02
valdev
false
null
0
o81wl89
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81wl89/
false
1
t1_o81wkkh
“In 2025, Anthropic was the first frontier AI company to be granted access to the Pentagon’s highly classified ‘air-gapped’ networks.” → https://medium.com/@venkata_sai/the-silicon-standoff-why-the-pentagon-labeled-anthropic-a-supply-chain-risk-06398516ec15 You're welcome. Rude.
0
0
2026-03-01T13:57:55
CoralBliss
false
null
0
o81wkkh
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81wkkh/
false
0
t1_o81wfpt
[deleted]
1
0
2026-03-01T13:57:07
[deleted]
true
null
0
o81wfpt
false
/r/LocalLLaMA/comments/1ra6nb9/phi_on_raspberry_pi/o81wfpt/
false
1
t1_o81waw2
That's the best position to be in — what's the hardest part you haven't solved yet?
1
0
2026-03-01T13:56:18
LOGOSOSAI
false
null
0
o81waw2
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81waw2/
false
1
t1_o81wap2
Yeah, this sub is great. Always good news, quality discussion and overall very pleasant experience checking it every now and then.
1
0
2026-03-01T13:56:17
CoUsT
false
null
0
o81wap2
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o81wap2/
false
1
t1_o81w5xq
Isn’t this 3.5 27B? Are there rumors of an official small <=17B model drop of 3.5 rather than post-release smaller quants?
1
0
2026-03-01T13:55:29
Spitfire1900
false
null
0
o81w5xq
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81w5xq/
false
1
t1_o81w2r6
(>ᴗ•) !
2
0
2026-03-01T13:54:59
IrisColt
false
null
0
o81w2r6
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o81w2r6/
false
2
t1_o81vxjw
This channel has a bunch of helpful videos, thanks for the recommendation!
1
0
2026-03-01T13:54:07
doesitoffendyou
false
null
0
o81vxjw
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81vxjw/
false
1
t1_o81vr7o
I am building and using it at the same time :)
2
0
2026-03-01T13:53:02
BC_MARO
false
null
0
o81vr7o
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81vr7o/
false
2
t1_o81vqre
heh
-6
0
2026-03-01T13:52:58
IrisColt
false
null
0
o81vqre
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81vqre/
false
-6
t1_o81vnxy
It's just software developed and used by humans.
1
0
2026-03-01T13:52:29
MelodicFuntasy
false
null
0
o81vnxy
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o81vnxy/
false
1
t1_o81vmvc
I am on 890M (64 GB DDR5) which is a bit better than the 780M. I get 6 t/s on a Vulkan llama.cpp build when the input prompt is small. When given a slightly bigger prompt, ~10K tokens with context size 32K, I get 4.8 t/s and 120 seconds for PP. Why not switch to Qwen3.5 35B A3B now? I get 18 t/s with a similar ~10K token input pro...
2
0
2026-03-01T13:52:19
HopefulConfidence0
false
null
0
o81vmvc
false
/r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/o81vmvc/
false
2
t1_o81vmgx
Inb4 models are trained to detect when they're being tested and to then put in more effort 
1
0
2026-03-01T13:52:14
rooster-inspector
false
null
0
o81vmgx
false
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o81vmgx/
false
1
t1_o81vm54
Oh wow LACT looks very cool, thanks for the recommendation! Are you using it just to limit power draw or also for fan control or other things? And does it have an "overlay" mode where I can see my GPU temp overlayed in any application? (couldn't find info on that in the readme)
1
0
2026-03-01T13:52:11
doesitoffendyou
false
null
0
o81vm54
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81vm54/
false
1
t1_o81vjpe
this is also why short-context benchmarks are basically useless for evaluating agents. a model can score great at 4k and completely fall apart at 40k due to KV quant alone ..
22
0
2026-03-01T13:51:46
salmenus
false
null
0
o81vjpe
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81vjpe/
false
22
t1_o81veij
.....
1
0
2026-03-01T13:50:53
reality_comes
false
null
0
o81veij
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81veij/
false
1
t1_o81vegn
Well, I have run Llama 3 405b on my computer on CPU for fun in Q6, as I have an old server with 512 GB memory that I normally use to run my virtual machines. But it is not a usable speed for anything interactive or needing any response time. But I did get it running and could send things to it and eventually get a reply.
1
0
2026-03-01T13:50:52
Luvirin_Weby
false
null
0
o81vegn
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o81vegn/
false
1
t1_o81vc3g
Send it to Asahi Linux
146
0
2026-03-01T13:50:28
Worldly_Evidence9113
false
null
0
o81vc3g
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o81vc3g/
false
146
t1_o81vb50
I do switch between the two. I use med for gaming and tend to use min the rest of the time including inference. It does drop performance in inference but for LLM its a pretty good trad off (something close to 2% power down for every 1% performance drop on tokens/second). I don't actually pay attention to the temperatur...
1
0
2026-03-01T13:50:18
giblesnot
false
null
0
o81vb50
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81vb50/
false
1
t1_o81vawt
FedRAMP High is not air-gapped; it is still a cloud service. In fact, Anthropic is a customer of my company and I had to update our services to be FedRAMP High compliant
3
0
2026-03-01T13:50:16
chill1217
false
null
0
o81vawt
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81vawt/
false
3
t1_o81vayr
2b confirmed, 9b confirmed, 4b not confirmed.
4
0
2026-03-01T13:50:16
Illustrious-Swim9663
false
null
0
o81vayr
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81vayr/
false
4
t1_o81v95q
I don't think this makes business sense
1
0
2026-03-01T13:49:58
PewPewDiie
false
null
0
o81v95q
false
/r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o81v95q/
false
1
t1_o81v4oc
Impressive work, but personally I'm more interested in the how than the what: how you convinced Claude to reverse engineer it.
76
0
2026-03-01T13:49:12
Creepy-Bell-4527
false
null
0
o81v4oc
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o81v4oc/
false
76
t1_o81v3y4
Smaller. Earlier leaks included a 9b, and more recent leaks include a 4b. My guess is 0.x (0.6 or 0.8), 2b, 4b and 9b.
7
0
2026-03-01T13:49:05
ResidentPositive4122
false
null
0
o81v3y4
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81v3y4/
false
7
t1_o81v35d
"I haven't tested Qwen-3.5-35B-A3B with something like this, but I'm scared to do it since I'm more than satisfied with this quality!" It’s 3x faster on the same hardware and, from my experiments, only slightly worse than dense 27b in output.
15
0
2026-03-01T13:48:57
jslominski
false
null
0
o81v35d
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o81v35d/
false
15
t1_o81uyls
q8 relies on int8 blocks. fp8 is floating point 8, and has more fidelity (or range) than int8, so it performs better.
-3
1
2026-03-01T13:48:08
Old_Hospital_934
false
null
0
o81uyls
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81uyls/
false
-3
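The range argument in the comment above (an 8-bit float keeps a per-value floating exponent, while int8 with a shared block scale does not) can be illustrated with a toy round-trip. The 3-mantissa-bit format below is a crude stand-in for E4M3, not any particular library's quantizer:

```python
import math

def int8_roundtrip(xs):
    # One shared scale for the whole block, as in a uniform/affine Q8 scheme.
    scale = max(abs(x) for x in xs) / 127.0
    return [round(x / scale) * scale for x in xs]

def fp_roundtrip(xs, mantissa_bits=3):
    # Crude stand-in for an 8-bit float such as E4M3: keep a few mantissa
    # bits but let the exponent float per value (exponent limits ignored).
    out = []
    for x in xs:
        if x == 0.0:
            out.append(0.0)
            continue
        e = math.floor(math.log2(abs(x)))
        m = x / 2.0 ** e                           # mantissa in [1, 2)
        m = round(m * 2 ** mantissa_bits) / 2 ** mantissa_bits
        out.append(m * 2.0 ** e)
    return out

xs = [10.0, 0.01]                # three orders of magnitude apart
print(int8_roundtrip(xs))        # the tiny value collapses to 0.0
print(fp_roundtrip(xs))          # the tiny value keeps ~3 bits of relative precision
```

With one affine scale sized for the largest element, the small element falls below half a quantization step and rounds to zero; the float format loses only relative precision.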
t1_o81utr0
You aren't using it correctly then. Thinking is much better.
1
0
2026-03-01T13:47:19
GifCo_2
false
null
0
o81utr0
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81utr0/
false
1
t1_o81uowe
Smaller or larger than the existing 27B?
1
0
2026-03-01T13:46:28
MikeRoz
false
null
0
o81uowe
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81uowe/
false
1
t1_o81uok0
Better quants (byteshape) for 27B.
1
0
2026-03-01T13:46:25
smahs9
false
null
0
o81uok0
false
/r/LocalLLaMA/comments/1rhvviu/qwen35_reap/o81uok0/
false
1
t1_o81umyw
Thank you for the detailed reply! Honestly maybe I'll just try a couple different distros and see which I like better. As someone who hasn't used linux as their standard desktop OS maybe I'll start with a consumer friendlier distro like Mint and see how that goes first..
1
0
2026-03-01T13:46:09
doesitoffendyou
false
null
0
o81umyw
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81umyw/
false
1
t1_o81ukox
Nope, but used workstation boards and CPUs can be found surprisingly cheap. Good luck. I'm kind of navigating a similar thing and ended up getting the most powerful Threadripper platform I could that uses regular DDR4, because I had a bunch lying around. But I also have 2 older Xeon systems that need ECC
1
0
2026-03-01T13:45:45
brickout
false
null
0
o81ukox
false
/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/o81ukox/
false
1
t1_o81uifm
Cool, but I doubt MTP is supported in llama.cpp, and vLLM wasn't starting with MTP; idk if it's fixed now. Hope llama.cpp will implement it, otherwise the 0.8b will be a savior.
5
0
2026-03-01T13:45:22
DistanceAlert5706
false
null
0
o81uifm
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81uifm/
false
5
t1_o81uf2a
Try qwen3.5 35b Moe. It's much faster.
2
0
2026-03-01T13:44:47
qwen_next_gguf_when
false
null
0
o81uf2a
false
/r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/o81uf2a/
false
2
t1_o81ub3j
So a md file?
3
0
2026-03-01T13:44:07
urekmazino_0
false
null
0
o81ub3j
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81ub3j/
false
3
t1_o81u98t
100% agree the K-cache is the fragile bit. “8-bit” isn’t one thing: FP8 has an exponent/mantissa (so dynamic range), while many Q8 schemes are uniform/affine with per-block scales — great for storage, not great for preserving tiny angular differences in keys over long contexts. In practice: if you care about tool-call...
6
0
2026-03-01T13:43:48
DonnaPollson
false
null
0
o81u98t
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81u98t/
false
6
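A minimal illustration of the "tiny angular differences" point in the comment above: two keys that differ by a small perturbation collapse to the same vector once the quantization step exceeds the perturbation, so the query can no longer tell them apart. The vectors and step size are made up for the demo:

```python
def quantize(v, step):
    # Uniform rounding to a fixed grid, the simplest model of a low-bit cache.
    return [round(x / step) * step for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

k1 = [1.0, 0.0, 1.0, 0.0]
k2 = [1.0, 0.002, 1.0, -0.002]   # nearly the same direction as k1
q  = [1.0, 1.0, 1.0, -1.0]       # a query that separates them slightly

print(dot(q, k1), dot(q, k2))    # full precision: the scores differ
step = 0.05                       # coarse step, larger than the perturbation
print(dot(q, quantize(k1, step)), dot(q, quantize(k2, step)))  # identical
```

The perturbation (0.002) is below half the step (0.025), so both keys round to the same grid point; over long contexts, many such near-ties are exactly what tool-call routing depends on.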
t1_o81u8p3
I'm still waiting for an easy way to save and reload KV caches for different prompts and models. Storage is relatively cheap, prompt processing isn't. I would love to be able to go back and load a 64k long context and continue the conversation in an instant.
2
0
2026-03-01T13:43:42
SkyFeistyLlama8
false
null
0
o81u8p3
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o81u8p3/
false
2
t1_o81u1km
Read the recent news from OpenAI where they REQUIRE their AI to be cloud based, differing from the installs of Claude. Also read the FEDRAMP announcement - then read what that requires (air gapping of the most secure systems), and that Anthropic has achieved level 5 certification.
2
0
2026-03-01T13:42:28
CantankerousOrder
false
null
0
o81u1km
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81u1km/
false
2
t1_o81u17j
q8 is no good but fp8 is ok? Aren’t they both 8-bit quants?
7
0
2026-03-01T13:42:25
Its-all-redditive
false
null
0
o81u17j
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81u17j/
false
7
t1_o81tyri
This tracks with a boring interpretation: long CoT is often just a *symptom* of uncertainty / recovery attempts, not a cause of correctness. When the model is confident and right, it can be brief; when it’s lost, it “keeps talking” hoping to stumble back. For local inference you probably don’t need layer-wise DTR to g...
2
0
2026-03-01T13:41:59
DonnaPollson
false
null
0
o81tyri
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o81tyri/
false
2
t1_o81txux
llama.cpp/build/bin/llama-cli -m ~/codellama-7b-instruct.Q5_K_M.gguf --no-jinja --chat-template llama2 Or did you mean something else?
1
0
2026-03-01T13:41:50
Ben-Smyth
false
null
0
o81txux
false
/r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/o81txux/
false
1
t1_o81txiw
Thank you Dr.Menon!
2
0
2026-03-01T13:41:47
braydon125
false
null
0
o81txiw
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81txiw/
false
2
t1_o81twu4
This has been largely to learn; I get the sentiment, especially since I have larger goals with it. But this is also a learning experience for me.
1
0
2026-03-01T13:41:40
Electrical_Ninja3805
false
null
0
o81twu4
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81twu4/
false
1
t1_o81tw5s
This is a home lab. You have trained yourself to be an LLM engineer for air-gapped networks. Keep building and learning. You're building skills that will be valuable in the future job market.
1
0
2026-03-01T13:41:32
AncientTaro3584
false
null
0
o81tw5s
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81tw5s/
false
1
t1_o81tvec
[removed]
1
0
2026-03-01T13:41:24
[deleted]
true
null
0
o81tvec
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81tvec/
false
1
t1_o81tuf6
I'm not going to watch a 2h video from a known propagandist. To anyone who knows anything about the world, it's obvious that people in the US and Europe have more freedom than people in Russia. I'm not sure if you can even access Reddit in Russia without VPN or Tor bridges. But if far right propaganda is your source of info...
1
0
2026-03-01T13:41:14
MelodicFuntasy
false
null
0
o81tuf6
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o81tuf6/
false
1
t1_o81ttap
Okay, in 2024 4k wasn't enough. I used 12k with Mistral-Large back then. Also, responses today are way longer than back then.
1
0
2026-03-01T13:41:03
_hypochonder_
false
null
0
o81ttap
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o81ttap/
false
1
t1_o81tskq
Why "can't" it? It can talk to other systems, so it can create actions, similar to ones created by humans or more traditional tech. I'd accept knowledge/evidence that Claude _isn't_ being used for weapons systems, but could it be adapted to do so? Absolutely, and even without direct involvement from anthropic to change...
-1
1
2026-03-01T13:40:55
ToHallowMySleep
false
null
0
o81tskq
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81tskq/
false
-1
t1_o81tqtj
That's pretty cool! Thanks for testing! I would be glad to see your print! For the design, I believe temps could be improved by making the shroud more airtight, with a tighter design and maybe some TPU joints. But it works well enough for me for now
2
0
2026-03-01T13:40:37
roackim
false
null
0
o81tqtj
false
/r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o81tqtj/
false
2
t1_o81tq6v
I don't know if this is memory, because memory should have some search and traversal. This is dump and retrieve.
2
0
2026-03-01T13:40:30
AurumDaemonHD
false
null
0
o81tq6v
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81tq6v/
false
2
t1_o81tnm0
I will figure something out and then post an update. I plan on releasing a bin soon so people can play with it.
1
0
2026-03-01T13:40:04
Electrical_Ninja3805
false
null
0
o81tnm0
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81tnm0/
false
1
t1_o81tiil
Very interesting! I often thought about running comfyui in docker (though I think they also have a standalone version) but haven't thought about running llama.cpp in docker and would definitely be curious why it's worth the initial setup hassle for you. Have you ever tried using vllm?
1
0
2026-03-01T13:39:10
doesitoffendyou
false
null
0
o81tiil
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81tiil/
false
1
t1_o81thzv
Are all these fully uncensored? What's best for a laptop with an RTX 3050 GPU, 4 GB VRAM, 8 GB system RAM?
1
0
2026-03-01T13:39:05
Radiant_Loquat
false
null
0
o81thzv
false
/r/LocalLLaMA/comments/1iigodb/how_to_download_the_full_version_of_deepseek_r1/o81thzv/
false
1
t1_o81thry
Not an issue yet. I haven't got networking up; this is fully just on the machine.
2
0
2026-03-01T13:39:02
Electrical_Ninja3805
false
null
0
o81thry
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81thry/
false
2
t1_o81tcc4
Try to please everyone and you please no one. This isn't even an AI problem!
2
0
2026-03-01T13:38:05
ToHallowMySleep
false
null
0
o81tcc4
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81tcc4/
false
2
t1_o81t967
What's it called when something is confidently wrong all the time?  5.2 chat has real issues with this.
1
0
2026-03-01T13:37:30
Ok-Measurement-1575
false
null
0
o81t967
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81t967/
false
1
t1_o81t6q9
The rigged elections with fake votes and opposition leaders like Alex Navalny getting imprisoned or killed? With journalists and protestors getting imprisoned? Yes, all western countries are aware that Russia has no democracy and it's been like that for a long time. You are the brainwashed one if you think that vaccine...
1
0
2026-03-01T13:37:06
MelodicFuntasy
false
null
0
o81t6q9
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o81t6q9/
false
1
t1_o81t65e
On question 4, I'd push hard toward text plus a learned style embedding over raw audio-history tokens fwiw. Feeding prior codec tokens back into the backbone sounds elegant but the context window cost is brutal. At typical neural codec rates (50-75 tokens/sec across codebook levels), even 30 seconds of prior speech bur...
1
0
2026-03-01T13:36:59
tom_mathews
false
null
0
o81t65e
false
/r/LocalLLaMA/comments/1rgb8tj/discussion_local_contextaware_tts_what_do_you/o81t65e/
false
1
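The context-cost arithmetic in the comment above is easy to make concrete. The 2.5 words/sec speaking-rate figure used in the transcript comparison is a rough assumption, not a number from the post:

```python
# Cost of feeding prior audio back as codec tokens, at the commenter's
# 50-75 tokens/sec rate (summed across codebook levels).
def audio_context_cost(seconds, tokens_per_sec):
    return seconds * tokens_per_sec

low  = audio_context_cost(30, 50)   # 30 s of prior speech at the low end
high = audio_context_cost(30, 75)   # ... and at the high end
print(low, high)                    # 1500 to 2250 tokens

# Versus a transcript: ~30 s of speech at ~2.5 words/sec is ~75 words,
# i.e. on the order of 100 text tokens, roughly an order of magnitude cheaper.
```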
t1_o81t5iu
[removed]
1
0
2026-03-01T13:36:53
[deleted]
true
null
0
o81t5iu
false
/r/LocalLLaMA/comments/1ptn2lq/batch_ocr_dockerized_paddleocr_pipeline_to/o81t5iu/
false
1
t1_o81t4xt
With current prices, it might be a good time to sell. And you can buy them again in a few years when prices drop. Do you host a Plex server?
1
0
2026-03-01T13:36:47
Radiant_Loquat
false
null
0
o81t4xt
false
/r/LocalLLaMA/comments/1iigodb/how_to_download_the_full_version_of_deepseek_r1/o81t4xt/
false
1
t1_o81t3wv
I'm just a random guy with close to zero experience with LLMs, really. 1. Can system prompts be represented in a way such that they can be mutated and combined ? As vectors, maybe ? 2. Is there a way to assign a number to the behavior of a model, following a given system prompt, representing how good the model followe...
1
0
2026-03-01T13:36:36
Pale-Committee8059
false
null
0
o81t3wv
false
/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o81t3wv/
false
1
t1_o81swem
Thanks a lot for sharing! The screenshot is from your system? Because I'm curious to know how hot your 3090 typically runs when not power limited. And did you assign aliases because you're changing the power cap more frequently? nvitop looks good though I was thinking about something that runs in my dock or as a consta...
1
0
2026-03-01T13:35:17
doesitoffendyou
false
null
0
o81swem
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81swem/
false
1
t1_o81sv4n
You could go the AliExpress machinist way: for like 80-110€ (with a CPU) you can get an X99 motherboard with quad-channel memory, 8 RAM slots, and multiple x16 GPU slots (Gen 3). Works pretty well for me with a 5060 Ti.
1
0
2026-03-01T13:35:03
Wild_Requirement8902
false
null
0
o81sv4n
false
/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/o81sv4n/
false
1
t1_o81sscx
Generally qwen3 coder > qwen3.5 27B > qwen3.5 35B A3B for coding at least.
7
0
2026-03-01T13:34:34
croninsiglos
false
null
0
o81sscx
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o81sscx/
false
7
t1_o81somf
Repo?
1
0
2026-03-01T13:33:54
braydon125
false
null
0
o81somf
false
/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o81somf/
false
1
t1_o81slfj
I doubt it's vibe coded; they built their own engine way before LLMs had the ability to meaningfully tackle ML stuff.
1
0
2026-03-01T13:33:20
chibop1
false
null
0
o81slfj
false
/r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/o81slfj/
false
1
t1_o81s2v2
The lesson here is never build anything in a silo. Also, don’t solve non-existent problems. With that said, OpenClaw is popular now. Take your knowledge and go build something that makes money.
1
0
2026-03-01T13:30:01
weiga
false
null
0
o81s2v2
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81s2v2/
false
1
t1_o81rypg
Exactly, qwen3-coder is still better at coding than the qwen3.5 models which leads one to wonder if a qwen3.5-coder model would be the best local coding model yet.
3
0
2026-03-01T13:29:17
croninsiglos
false
null
0
o81rypg
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81rypg/
false
3
t1_o81rwge
One idea that comes to mind, especially if you think in more orchestration-oriented terms like Verdent-style step isolation, is to treat watermark removal as a first-class preprocessing stage instead of part of “OCR.” If the watermark is consistent, you can detect repeated text blocks by coordinates and frequency acros...
1
0
2026-03-01T13:28:53
Appropriate-Lie-8812
false
null
0
o81rwge
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o81rwge/
false
1
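The "detect repeated text blocks by coordinates and frequency" idea from the comment above could look roughly like this. The `(x, y, text)` block format and the 80% page-fraction cutoff are hypothetical choices for the sketch:

```python
from collections import Counter

def find_watermarks(pages, min_fraction=0.8):
    """pages: list of per-page lists of (x, y, text) OCR blocks.

    A block whose (rounded position, text) pair recurs on most pages is
    treated as a watermark rather than content.
    """
    seen = Counter()
    for blocks in pages:
        for x, y, text in set(blocks):          # dedupe within a page
            # Round coordinates to a 10-unit grid to absorb OCR jitter.
            seen[(round(x, -1), round(y, -1), text)] += 1
    cutoff = min_fraction * len(pages)
    return {key for key, n in seen.items() if n >= cutoff}

pages = [
    [(10, 10, "CONFIDENTIAL"), (50, 200, "Q3 revenue grew 12%")],
    [(11, 9,  "CONFIDENTIAL"), (50, 210, "Headcount was flat")],
    [(10, 11, "CONFIDENTIAL"), (52, 205, "Churn declined")],
]
print(find_watermarks(pages))   # only the repeated header passes the cutoff
```

Running this as a separate preprocessing stage keeps the OCR step itself unchanged: watermark blocks are dropped before any text reaches the RAG pipeline.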
t1_o81rw15
I was looking for an internal AI to use for home-lab IT, correcting runaway processes etc. I've been reading about LM Studio.
2
0
2026-03-01T13:28:48
RowdyRidger19
false
null
0
o81rw15
false
/r/LocalLLaMA/comments/1rbmoi1/running_local_agents_with_ollama_was_easier_than/o81rw15/
false
2
t1_o81rvpl
Security through obscurity….
1
0
2026-03-01T13:28:44
uturnnnn
false
null
0
o81rvpl
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81rvpl/
false
1
t1_o81rtt7
Regarding log analysis: I think the MCP acts as the log retriever and the AI acts as the analyzer. In between, you might have your AI agent set some rules for the AI, or add more logic control in your agent?
1
0
2026-03-01T13:28:24
Mean-Sprinkles3157
false
null
0
o81rtt7
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81rtt7/
false
1
t1_o81rj9f
I understand you; yes, agentic coding can make things worse and slower. The way I work: I start implementing some components myself, or at least something that I understand completely and have changed a bit, and then the coding agent (at least based on the Qwen3.5 family) picks up the pattern. I think you should giv...
2
0
2026-03-01T13:26:31
Total_Activity_7550
false
null
0
o81rj9f
false
/r/LocalLLaMA/comments/1refyef/oneshot_vs_agentic_performance_of_openweight/o81rj9f/
false
2
t1_o81rja7
> Kind of… they have a distinct Claude of their very own at the DoD. I don’t know if it’s co-hosted or how it’s stored, but it’s just for their use and has been physical air-gap isolated from the internet. It cannot talk to the same Claude we use. Where in the WSJ piece or the article you linked does it say this? The “air-gapped” ...
0
0
2026-03-01T13:26:31
chill1217
false
null
0
o81rja7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81rja7/
false
0
t1_o81rj7q
Have you used the Internet? Jesus. OpenAI even made a stink about their contract requiring cloud based AI. It’s not hidden, classified, or even remotely hard to look up. If you can install a local LLM you should be able to verify it.
0
0
2026-03-01T13:26:30
CantankerousOrder
false
null
0
o81rj7q
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81rj7q/
false
0
t1_o81rh8e
That’s relating to quantized weights, not the quantized cache; the cache is quantized locally.
10
0
2026-03-01T13:26:09
Manamultus
false
null
0
o81rh8e
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81rh8e/
false
10
t1_o81rcje
Hierarchical agent trees plus persistent memory is a strong combo, especially for local setups where you actually control the full lifecycle. The self-building agents part is ambitious though, that’s where things can get messy fast without good guardrails. I like that you’re treating orchestration as a first-class conc...
1
0
2026-03-01T13:25:18
AlbatrossUpset9476
false
null
0
o81rcje
false
/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o81rcje/
false
1
t1_o81r8n2
You mentioned persistent memory being a pain point -- I just released soul.py for exactly this. Two markdown files (SOUL.md + MEMORY.md), works with any local model via Ollama, no database or server needed. github.com/menonpg/soul.py
2
0
2026-03-01T13:24:35
the-ai-scientist
false
null
0
o81r8n2
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o81r8n2/
false
2
t1_o81r0vp
Multiple. Use your browser.
1
1
2026-03-01T13:23:09
CantankerousOrder
false
null
0
o81r0vp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81r0vp/
false
1
t1_o81qx0t
Meanwhile me running 123B Mistral on 24GB VRAM... ^(It's slow AF... and is still trying to stack chairs.)
30
0
2026-03-01T13:22:26
boisheep
false
null
0
o81qx0t
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81qx0t/
false
30
t1_o81qw2b
Have you observed them and proved first that they really want it? Maybe the kid wants things that interact with Roblox, or wants to see how it can help with their skin or their homework. The wife is looking for the best Christmas offer. You can enhance these to help their daily life. Your intention is good, but the quest...
1
0
2026-03-01T13:22:16
Proper-Process2144
false
null
0
o81qw2b
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81qw2b/
false
1
t1_o81qvpy
Read the article? Also read the news about OpenAI and the terms they set for it having to be in the cloud instead of local.
1
0
2026-03-01T13:22:12
CantankerousOrder
false
null
0
o81qvpy
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81qvpy/
false
1
t1_o81qv6a
If you read their Go code, you will see `ggml.<some operator>` all over the place. Wonder why, if they have "their own engine". Or maybe it is just a wrapper?.. Or something vibe-translated from original llama.cpp code?..
1
0
2026-03-01T13:22:05
Total_Activity_7550
false
null
0
o81qv6a
false
/r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/o81qv6a/
false
1
t1_o81qqhi
All of the Qwen3.5 models currently have MTP, which should vastly outperform using a 2B drafter
17
0
2026-03-01T13:21:15
xyz4d
false
null
0
o81qqhi
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81qqhi/
false
17
t1_o81qnf7
https://pastebin.com/pEpqiiJK - I paste the results in https://github.com/dpmm99/Towngardia/tree/main/src/minigame to check the errors easily.
1
0
2026-03-01T13:20:41
DeProgrammer99
false
null
0
o81qnf7
false
/r/LocalLLaMA/comments/1rdy4ko/qwen35_vs_qwen3codernext_impressions/o81qnf7/
false
1
t1_o81q9vw
Dense uses all parameters to calculate the next token. MOE uses a subset of parameters.
19
0
2026-03-01T13:18:08
Deep-Vermicelli-4591
false
null
0
o81q9vw
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81q9vw/
false
19
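The dense-vs-MoE distinction in the comment above has a simple compute/memory consequence, sketched here with the parameter counts discussed in the thread (a dense 27B versus a 35B-A3B MoE). The 2-FLOPs-per-active-parameter figure is the usual rough estimate, not an exact measurement:

```python
# Per token, a dense model touches all weights; an MoE touches only the
# active subset. "35B-A3B" naming means ~35B total, ~3B active parameters.
dense_total = 27e9
moe_total, moe_active = 35e9, 3e9

# Per-token compute scales with *active* parameters (~2 FLOPs per param).
dense_flops = 2 * dense_total
moe_flops   = 2 * moe_active
print(dense_flops / moe_flops)   # the MoE needs ~9x less compute per token

# Memory, by contrast, scales with *total* parameters: the MoE must still
# hold all 35B weights, so it needs more RAM despite generating faster.
```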
t1_o81q0pw
>My main issue with this model is its thinking: it produces SO MUCH tokens with little improvement on its outputs. I genuinely believe thinking is just a gimmick for like 80% of the time. From my experience, thinking makes a more significant difference on trick questions and riddles. In real world use however, non-thi...
37
0
2026-03-01T13:16:24
Admirable-Star7088
false
null
0
o81q0pw
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o81q0pw/
false
37
t1_o81pugo
Not MoE.
29
0
2026-03-01T13:15:15
Middle_Bullfrog_6173
false
null
0
o81pugo
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81pugo/
false
29
t1_o81pn7y
I fucking HATE when AI says “honestly”. Like oh, sometimes you’re lying to me? Really?
3
0
2026-03-01T13:13:54
OrbitalOutlander
false
null
0
o81pn7y
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81pn7y/
false
3
t1_o81pksl
This is very true. I will not type my secrets, medical history etc. into any AI bot. I disliked typing medical things into Google before it was public knowledge they sold that information. General research around topics though? I was using Claude last night to get a rough plan for Bluetooth sniffers, and just a fun at h...
1
0
2026-03-01T13:13:26
super1701
false
null
0
o81pksl
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81pksl/
false
1
t1_o81pipi
Pepper Potts did not use Jarvis.
1
0
2026-03-01T13:13:02
huwprosser
false
null
0
o81pipi
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81pipi/
false
1
t1_o81pe4m
What's the definition of dense model?
3
0
2026-03-01T13:12:10
peejay2
false
null
0
o81pe4m
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81pe4m/
false
3
t1_o81p9l0
That’s literally the point of LLMs…one of the most critical use cases. Something I implement nearly daily. Dark data -> putting LLMs onto all of your historical data to find things we missed over time. Then seeing if those predictions held with historical outcomes. If so, you fine tune models on the paths and then us...
2
1
2026-03-01T13:11:18
brownman19
false
null
0
o81p9l0
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81p9l0/
false
2
t1_o81p9gs
There are many different reasons for such issues. You just have to find the combo that works best for you. In the case of Qwen3CoderNext I had to switch to nvfp4 quant and use it with sglang, giving up on llama-server. Unsloth might have fixed their UD Q4 quant by this point, but I'm not interested in checking it out a...
-3
1
2026-03-01T13:11:17
Prudent-Ad4509
false
null
0
o81p9gs
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81p9gs/
false
-3
t1_o81p8nw
Yes, I got a second hand [Alienware R11](https://hothardware.com/photo-gallery/article/3036?image=big_alienware-aurora-r11-panel-off.jpg&tag=popup) and the ventilation in this case is not great. I think I will probably start with limiting the power draw to 280W like another commenter suggested and see how hot it gets a...
1
0
2026-03-01T13:11:08
doesitoffendyou
false
null
0
o81p8nw
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o81p8nw/
false
1
t1_o81p80g
This is now what I’m starting to think… but then again, will family members understand the difference? I’m not sure…
5
0
2026-03-01T13:11:00
PassengerPigeon343
false
null
0
o81p80g
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81p80g/
false
5
t1_o81p6by
What makes you decide to use gpt-oss? what else models you tried in your case? Thanks, i’m curious
1
0
2026-03-01T13:10:40
Neptun78
false
null
0
o81p6by
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81p6by/
false
1
t1_o81p5o1
[removed]
1
0
2026-03-01T13:10:32
[deleted]
true
null
0
o81p5o1
false
/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81p5o1/
false
1
t1_o81p2ix
Yeah, good luck on a multi-service 100,000 LoC project.
-13
0
2026-03-01T13:09:57
Alywan
false
null
0
o81p2ix
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o81p2ix/
false
-13