name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o80l8ri
I was running 27b fine with Q8 quantisation for both model and kv cache. Looks like conservative settings are worth it this time until the knowledge about what works and what does not settles down.
1
0
2026-03-01T07:10:21
Prudent-Ad4509
false
null
0
o80l8ri
false
/r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o80l8ri/
false
1
t1_o80l273
the problem is that if you told me that is how we are running every aspect of our government, I would absolutely believe it. because NOTHING MAKES SENSE
14
0
2026-03-01T07:08:41
SiWeyNoWay
false
null
0
o80l273
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80l273/
false
14
t1_o80kzzq
Unfortunately it will be amazing.. Cue the paid sub, and then once you pay for that, they switch it to their new plan, drop the features you subscribed for but call it Pro v2, while it's a less effective model... I want to be grandfathered into the model and limits I signed up for...
2
0
2026-03-01T07:08:07
Mstep85
false
null
0
o80kzzq
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o80kzzq/
false
2
t1_o80kxvp
they dont fucking understand foreign policy or history either
25
0
2026-03-01T07:07:35
SiWeyNoWay
false
null
0
o80kxvp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80kxvp/
false
25
t1_o80kx2o
That's actually hilarious lol
1
0
2026-03-01T07:07:24
Ajwad6969
false
null
0
o80kx2o
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o80kx2o/
false
1
t1_o80kx28
The modern open-source LLM exists because of deepseek. It's as simple as that.
1
0
2026-03-01T07:07:23
AlwaysLateToThaParty
false
null
0
o80kx28
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o80kx28/
false
1
t1_o80kwxs
I think it’s a huge amount of cope to say that Chinese models aren’t distilling SOTA closed source ones. Chinese researchers are on the record stating that they can’t train models due to hardware shortages and then for them to pop out competitive stuff isn’t magic. There’s no secret sauce to math.
1
0
2026-03-01T07:07:22
Spara-Extreme
false
null
0
o80kwxs
false
/r/LocalLLaMA/comments/1rd2x61/people_are_getting_it_wrong_anthropic_doesnt_care/o80kwxs/
false
1
t1_o80ksb5
[deleted]
1
0
2026-03-01T07:06:12
[deleted]
true
null
0
o80ksb5
false
/r/LocalLLaMA/comments/1re3l3r/qwen330ba3b_vs_qwen3535ba3b_on_rtx_5090/o80ksb5/
false
1
t1_o80krzp
who bombed the girls school? it was three missiles. was it izzy or us?
3
0
2026-03-01T07:06:08
SiWeyNoWay
false
null
0
o80krzp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80krzp/
false
3
t1_o80kmwf
I wish they would use these AIs to write better speeches, or at least check the titles of their leaning centers
1
0
2026-03-01T07:04:52
Mstep85
false
null
0
o80kmwf
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80kmwf/
false
1
t1_o80kjww
It's a lot slower. Prohibitively slower unless you run a pretty small context size, or have a decent gpu like a 4090. It's a lot more accurate though, so depends on the use case tbh
2
0
2026-03-01T07:04:07
timbo2m
false
null
0
o80kjww
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80kjww/
false
2
t1_o80kii0
Thank you! This should be done for every new model!
1
0
2026-03-01T07:03:46
Queasy_Asparagus69
false
null
0
o80kii0
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80kii0/
false
1
t1_o80kfta
I was using 200k context size with 4x3090. I have now switched to Qwen3.5-35B-A3B, also with 200k context size.
1
0
2026-03-01T07:03:05
high_funtioning_mess
false
null
0
o80kfta
false
/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o80kfta/
false
1
t1_o80kfn7
Fingerprint.. I mean biometric login, for convenience.. Face recognition for... when you're in the shower and have slippery fingers... BTW why is the phone even in the shower with you..
1
0
2026-03-01T07:03:02
Mstep85
false
null
0
o80kfn7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o80kfn7/
false
1
t1_o80kbfl
Gotta vibe code a ballistic missile MCP
97
0
2026-03-01T07:01:58
Zerofucks__ZeroChill
false
null
0
o80kbfl
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80kbfl/
false
97
t1_o80k75i
I feel you. For a little while I went on a bender with Plex downloading, labeling, organizing... and no one cared. They just used Netflix or YouTube. It happens :)
2
0
2026-03-01T07:00:54
realityczek
false
null
0
o80k75i
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80k75i/
false
2
t1_o80k6ly
>speak to the computer to brain dump I guess better to the computer than having to listen to the Mrs' brain dumps? ;)
2
0
2026-03-01T07:00:46
tomByrer
false
null
0
o80k6ly
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o80k6ly/
false
2
t1_o80k0z3
Thank you, I ran it directly in llama.cpp. What context size are you able to reach with 4x3090 while still maintaining acceptable inference speed?
1
0
2026-03-01T06:59:21
alsolh
false
null
0
o80k0z3
false
/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o80k0z3/
false
1
t1_o80jzro
the push-to-talk approach is honestly underrated — we went continuous capture because the use case demands it (you can't push a button during a meeting), but it introduces a whole class of problems you just sidestepped. for the heap management on continuous streaming, we ended up chunking into ~30s segments and shippin...
1
0
2026-03-01T06:59:04
Deep_Ad1959
false
null
0
o80jzro
false
/r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/o80jzro/
false
1
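The ~30s chunking strategy described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual code: it assumes raw 16-bit mono PCM at 16 kHz arriving as an iterable of byte blocks, and splits the continuous stream into fixed-duration segments so memory stays bounded instead of one buffer growing forever.

```python
# Hypothetical sketch of ~30s segment chunking for continuous audio capture.
# Assumes 16-bit mono PCM at 16 kHz; all constants are illustrative.
SAMPLE_RATE = 16_000      # samples per second (assumed)
BYTES_PER_SAMPLE = 2      # 16-bit PCM
CHUNK_SECONDS = 30
CHUNK_BYTES = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_SECONDS

def chunk_stream(stream, chunk_bytes=CHUNK_BYTES):
    """Yield ~30s segments from an iterable of raw PCM byte blocks."""
    buf = bytearray()
    for block in stream:
        buf.extend(block)
        while len(buf) >= chunk_bytes:
            yield bytes(buf[:chunk_bytes])
            del buf[:chunk_bytes]   # keep the heap bounded
    if buf:
        yield bytes(buf)            # flush the final partial segment
```

For example, 65 seconds of audio yields two full 30s chunks plus a 5s tail; each chunk can then be shipped to the transcription backend independently.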
t1_o80jzol
build a product and start selling it. Let me know if we can focus on something.
1
0
2026-03-01T06:59:02
Patient-Day-6370
false
null
0
o80jzol
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80jzol/
false
1
t1_o80jugs
[GH](https://gist.github.com/karpathy/8627fe009c40f57531cb18360106ce95)
1
0
2026-03-01T06:57:46
tomByrer
false
null
0
o80jugs
false
/r/LocalLLaMA/comments/1rhlosn/microgpt/o80jugs/
false
1
t1_o80jt8a
You have to remember the amount of yes men and tech illiterate people Trump has surrounded himself with. They're mostly techbros who don't really understand the technology; they just hired people to do it for them
34
0
2026-03-01T06:57:27
Savantskie1
false
null
0
o80jt8a
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80jt8a/
false
34
t1_o80jsat
Hear, hear! Absolutely reasonable
1
0
2026-03-01T06:57:13
wesarnquist
false
null
0
o80jsat
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80jsat/
false
1
t1_o80jo8b
I am running the quantrio version. Lots of errors and failures. I had Claude Code/Opus download the latest nightly vLLM and build it locally. It had to fix several errors in vLLM code; took a few hours including compiling vLLM.
1
0
2026-03-01T06:56:13
appakaradi
false
null
0
o80jo8b
false
/r/LocalLLaMA/comments/1rgdrgz/any_one_able_to_run_qwen_35_awq_q4_with_vllm/o80jo8b/
false
1
t1_o80jmn5
I am not sure if it is the same thing but when I was testing Qwen3.5-27B-Q8, first it was not producing any answer, but only never ending ////////////// I re-downloaded the file and the checksum was different, so I assume there was some model file corruption.
5
0
2026-03-01T06:55:49
mtomas7
false
null
0
o80jmn5
false
/r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o80jmn5/
false
5
t1_o80jm08
So important that we keep developing AI to take over and get rid of these 'leaders'. I love Iran and know who the bad guys are in all this. Good on Dario for trying to show some good morals.
-2
1
2026-03-01T06:55:40
Revolutionalredstone
false
null
0
o80jm08
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80jm08/
false
-2
t1_o80jhbz
Well yeah. Right now is a great time to self-host 'cause it's winter and those 3090's help heat the house ;-)
3
0
2026-03-01T06:54:31
phormix
false
null
0
o80jhbz
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80jhbz/
false
3
t1_o80jghk
I think the goal was an agent that keeps going until a task is done? Yeah, the current docs need to do a better job explaining what it does. We tried it on a few systems and rolled back to standard opencode.
1
0
2026-03-01T06:54:18
mutemebutton
false
null
0
o80jghk
false
/r/LocalLLaMA/comments/1qd8vpj/claude_code_or_opencode_which_one_do_you_use_and/o80jghk/
false
1
t1_o80jel9
Would you like to know more?
7
0
2026-03-01T06:53:50
OldHamburger7923
false
null
0
o80jel9
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80jel9/
false
7
t1_o80jdq7
Anthro fanboys might say they love it?
2
0
2026-03-01T06:53:37
Fit-Pattern-2724
false
null
0
o80jdq7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80jdq7/
false
2
t1_o80jcj2
To keep text accurate, use an LLM to write the prompt you want to send with the text. That's the only way text stays accurate for me with stuff like z image. LLMs are the best prompt makers for stable diffusion
1
0
2026-03-01T06:53:21
Far_Cat9782
false
null
0
o80jcj2
false
/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o80jcj2/
false
1
t1_o80jc7a
The part worth watching is context degradation at 100k with Q4. MoE models with active parameters that small tend to lose coherence past 32-48k in quantized configs, even when the architecture technically supports longer windows ngl. I ran into this with my own multi-agent pipelines — the model handles tool calls fine ...
1
0
2026-03-01T06:53:15
tom_mathews
false
null
0
o80jc7a
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80jc7a/
false
1
t1_o80iyul
So most of the comments went in a really weird direction about the family talking to AI about personal things and while that's all true it kinda misses the question you asked. Forget open webui and talking to an llm there, let's get back to the Alexa-type of stuff.  Are they using that?  It doesn't seem that hard to ...
3
0
2026-03-01T06:49:56
profcuck
false
null
0
o80iyul
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80iyul/
false
3
t1_o80iwjt
Sure! You might also be able to run it at the Q6 Quant too, but I'm not sure. It will require more memory though and might be slower than Q4, but it gives somewhat better quality. And don't worry about the model size being bigger than your VRAM, it just offloads the rest of it into RAM.. Which will slow it down, but it...
2
0
2026-03-01T06:49:22
c64z86
false
null
0
o80iwjt
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80iwjt/
false
2
t1_o80iu85
The thing I actually care about with sub-10B Qwen models is quantization headroom fwiw. The 27B at Q4_K_M is already remarkably close to its FP16 baseline on reasoning benchmarks, maybe 2-3% degradation. Smaller architectures historically lose more under aggressive quants because there's less redundancy to compress awa...
1
0
2026-03-01T06:48:47
tom_mathews
false
null
0
o80iu85
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o80iu85/
false
1
t1_o80ir6g
The people I know that love AI best are either programmers and hobbyists like us, or they suck at writing and using google and love that AI can help them... In between, it's like no one knows what to ask it, or why. Don't feel bad!
2
0
2026-03-01T06:48:02
temperature_5
false
null
0
o80ir6g
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ir6g/
false
2
t1_o80im8e
I don't know if it lives up to the demos, but NVIDIA made something called PersonaPlex for that.
1
0
2026-03-01T06:46:49
novelide
false
null
0
o80im8e
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o80im8e/
false
1
t1_o80im1q
This sounds reasonable. Thanks for not living solely through an emotional lens and actually thinking. It is rare in these parts.
6
0
2026-03-01T06:46:47
CoralBliss
false
null
0
o80im1q
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80im1q/
false
6
t1_o80ihb7
The rather formal "service provider and customer" mentality within a family is wild. I get the sentiment of wanting to do something for your loved ones, but if it feels like a commercial service to them, I wouldn't expect them to be overly enthusiastic about it. And OP's story shows the limits to inner-family trust which...
10
0
2026-03-01T06:45:37
GerchSimml
false
null
0
o80ihb7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ihb7/
false
10
t1_o80iexi
Definitely Qwen 3.5 35B A3B Q4! I just switched to this from Qwen 3 VL 30B Q4. You can test the output. Gemini Pro mostly found no flaw with Qwen 3.5 responses. When I was using Qwen 3 VL, Gemini Pro always found some flaw in the response.
1
0
2026-03-01T06:45:03
Euphoric_Emotion5397
false
null
0
o80iexi
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o80iexi/
false
1
t1_o80id9o
Holding the line for humanity 😾!
2
0
2026-03-01T06:44:38
Capable-Hotel-542
false
null
0
o80id9o
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80id9o/
false
2
t1_o80ichk
Sweet, thanks. New to all this, so trying to work out what's possible :)
2
0
2026-03-01T06:44:27
cnuthead
false
null
0
o80ichk
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80ichk/
false
2
t1_o80ic1l
You have to offer something they cannot get elsewhere and it should make their life easier immediately. One thing you mentioned is that you have RAG. You can load it with information that the family would not want in the cloud, but don't mind everyone in the family having. Like: • Warranty info • Insurance polici...
2
0
2026-03-01T06:44:20
gearcontrol
false
null
0
o80ic1l
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ic1l/
false
2
t1_o80i4gp
Thanks for this, I didn't know you could change params like this without reloading / a separate command. This is awesome!
2
0
2026-03-01T06:42:30
spaceman_
false
null
0
o80i4gp
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80i4gp/
false
2
t1_o80i283
It's V4-lite with 1M context. Most likely from the Engram architecture. Hopefully it doesn't disappoint like Llama4.
1
0
2026-03-01T06:41:57
rashaniquah
false
null
0
o80i283
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o80i283/
false
1
t1_o80hwxi
"Used those very tools to launch" is quite the jump and doesn't really have any basis in those sources, just speculation on your end.
10
0
2026-03-01T06:40:40
Quartich
false
null
0
o80hwxi
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80hwxi/
false
10
t1_o80hwt6
Doesn’t 2 nodes mean $8000?
1
0
2026-03-01T06:40:39
No_Conversation9561
false
null
0
o80hwt6
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o80hwt6/
false
1
t1_o80hsyl
"Claude, there's an issue we need to fix. Scan the global data set and eliminate all theocrats."
2
0
2026-03-01T06:39:44
shokuninstudio
false
null
0
o80hsyl
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80hsyl/
false
2
t1_o80hh8d
“You’re absolutely right!”
96
0
2026-03-01T06:36:53
squachek
false
null
0
o80hh8d
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80hh8d/
false
96
t1_o80hg7m
Yeah, it definitely felt different back then. Like it wasn't trained to not have opinions, and in some ways seemed more alive. Some heretic versions of models have a little of that spark. Try GLM 4.7 [flash] heretic. You might also be interested in the base of dots.llm1, it was trained without synthetic data: [ht...
2
0
2026-03-01T06:36:38
temperature_5
false
null
0
o80hg7m
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o80hg7m/
false
2
t1_o80hf1x
I highly recommend you skip creating an mcp server and build a CLI. MCP Servers are highly inefficient and plug up your context with irrelevant tools most of the time. A CLI can be an Agent Skill or a set of Agent Skills. The CLI doesn't have to be human friendly, either, if you are worried about architecting such a th...
1
0
2026-03-01T06:36:22
meenie
false
null
0
o80hf1x
false
/r/LocalLLaMA/comments/1rhjcvo/what_im_doing_locally_develping_an_mcp_to_attach/o80hf1x/
false
1
t1_o80he0a
mlx_lm.server with an OpenAI API has been around for 2 years. Maybe you're misremembering why you preferred going with llama.cpp.
1
0
2026-03-01T06:36:07
bobby-chan
false
null
0
o80he0a
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80he0a/
false
1
t1_o80h77f
Source?
27
0
2026-03-01T06:34:30
BahnMe
false
null
0
o80h77f
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80h77f/
false
27
t1_o80h5v4
ship it and sell it to them, then they might be convinced
2
0
2026-03-01T06:34:10
No_Relationship641
false
null
0
o80h5v4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80h5v4/
false
2
t1_o80h4bm
Ok but HOW did they use it? To write the press release?
38
0
2026-03-01T06:33:49
ShadowBannedAugustus
false
null
0
o80h4bm
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80h4bm/
false
38
t1_o80gwjy
Well this is fantastic.  Thank you!
0
0
2026-03-01T06:31:57
this-just_in
false
null
0
o80gwjy
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80gwjy/
false
0
t1_o80gsyi
& r/comfyui & [**r/generativeAI**](https://www.reddit.com/r/generativeAI/)
2
0
2026-03-01T06:31:04
tomByrer
false
null
0
o80gsyi
false
/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o80gsyi/
false
2
t1_o80gov6
FYI that was an IRGC missile launch failure by the neighboring base.
-25
0
2026-03-01T06:30:05
Drakonic
false
null
0
o80gov6
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80gov6/
false
-25
t1_o80gl05
yeah pretty much
1
0
2026-03-01T06:29:10
Altruistic_Heat_9531
false
null
0
o80gl05
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o80gl05/
false
1
t1_o80gily
can u imagine the outrage in the US AI community if open weight models were used in a war? about how ai should be regulated and controlled?
7
0
2026-03-01T06:28:34
Ska82
false
null
0
o80gily
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80gily/
false
7
t1_o80ge26
So it's still a chatbot.
2
0
2026-03-01T06:27:30
mtmttuan
false
null
0
o80ge26
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80ge26/
false
2
t1_o80g2j5
Wow! It jumped to 22 tps with -fit on and Q2_K_XL from unsloth. How did you know that? Is there a big benchmark of quants somewhere? I only ever read that IQ quants were the best new thing
2
0
2026-03-01T06:24:43
AppealSame4367
false
null
0
o80g2j5
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o80g2j5/
false
2
t1_o80g1ot
>llama cpp python you have to manage your own cache Then don't use it! Llama-server has prefix caching. Run it and connect via the API. If you're not an AI researcher, you don't need direct control of the model; just leave it to the projects that are optimized for speed and enjoy the experience. OP is using raw transforme...
5
0
2026-03-01T06:24:31
No-Refrigerator-1672
false
null
0
o80g1ot
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o80g1ot/
false
5
t1_o80g09r
Do you have a single source?
11
0
2026-03-01T06:24:10
chuby1tubby
false
null
0
o80g09r
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80g09r/
false
11
t1_o80g01h
You sent your wife a text to ask her if she is using it? Do you live in different places? Whatever you build, get early feedback. Get people interested in the build process. Gamify stuff, especially with kids. Involve them. Final pro tip: ask your Alexa replacement the same question. It will come up with better words th...
3
0
2026-03-01T06:24:07
wanderer_4004
false
null
0
o80g01h
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80g01h/
false
3
t1_o80fywq
BTW, what technology did you use to create OWUI?
1
0
2026-03-01T06:23:51
Writerro
false
null
0
o80fywq
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80fywq/
false
1
t1_o80fww0
At least it's fun to imagine Hegseth asking Opus 4.6 who they should bomb and how to do it
8
0
2026-03-01T06:23:21
chuby1tubby
false
null
0
o80fww0
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80fww0/
false
8
t1_o80fw1v
I did this for a research paper years ago. We also proved that a smaller model purpose-trained from scratch is better than a larger model finetuned (this was when GPT-2 came out, as we replicated a smaller version). But one model that can be finetuned for many tasks is still more valuable to most people and companies tha...
1
0
2026-03-01T06:23:09
Amazing_Trace
false
null
0
o80fw1v
false
/r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o80fw1v/
false
1
t1_o80fs15
> help the family figure out how to control everything until they can downgrade to whatever my local ISP will give them - I don't expect them to maintain everything. Wait, what would the ISP give them? I don't get it. The ISP would provide some LLM with a web UI, or what? Why would the ISP have to provide that?
2
0
2026-03-01T06:22:13
Writerro
false
null
0
o80fs15
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80fs15/
false
2
t1_o80fqkf
Thank you so much for the test. It correlated with what I felt. In my experience, Coder Next was able to resolve many of the tasks (that I used to send to Opus) in one shot; it does what it's told to do. The only thing that needs to be perfect is to be able to plan and understand my intentions, but for that I wo...
3
0
2026-03-01T06:21:51
brahh85
false
null
0
o80fqkf
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80fqkf/
false
3
t1_o80fk5b
In some models you can send this in your custom JSON: **{"chat_template_kwargs": {"enable_thinking": false}}** or at least it looks like you can do {"chat_template_kwargs": {"reasoning_effort": "low"}}
15
0
2026-03-01T06:20:19
temperature_5
false
null
0
o80fk5b
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80fk5b/
false
15
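The payload shape quoted above fits an OpenAI-compatible chat completions request. A minimal sketch of building it, with the caveat that whether a server honors `chat_template_kwargs` (and whether the template supports `enable_thinking`) depends on the backend and model; the model name and endpoint below are placeholders:

```python
import json

def build_payload(prompt, thinking=False):
    """Build an OpenAI-style chat request carrying chat_template_kwargs.

    Support for "chat_template_kwargs" is backend-dependent (assumed here);
    "local-model" is a placeholder name.
    """
    return {
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

body = json.dumps(build_payload("hello", thinking=False))
# POST `body` to e.g. http://localhost:8000/v1/chat/completions
```

The same structure works for keys like `reasoning_effort` on models whose templates define them; swap the inner dict accordingly.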
t1_o80fbi0
"Hi Claude, how do I capture Maduro?"
21
0
2026-03-01T06:18:16
gamblingapocalypse
false
null
0
o80fbi0
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80fbi0/
false
21
t1_o80fba4
Installed successfully, copied the demo script from the main GitHub page, 'play' is not defined. :/
1
0
2026-03-01T06:18:12
Worldly_Science1670
false
null
0
o80fba4
false
/r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o80fba4/
false
1
t1_o80f5sn
Don’t feel it was a waste; we are your family. Write a guide or something, we will read it.
4
0
2026-03-01T06:16:56
buddhist-truth
false
null
0
o80f5sn
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80f5sn/
false
4
t1_o80f046
This dude really wants to make sure no one can see his chats!
1
0
2026-03-01T06:15:36
temperature_5
false
null
0
o80f046
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80f046/
false
1
t1_o80ewnf
User requirements are only applicable if there's a desire from them to begin with. I once put together a NAS because I saw family members continuing to use external hard drives and losing them. To this day, the NAS still exists, as well as them continuing to buy external hard drives and losing them. Their stores on t...
20
0
2026-03-01T06:14:47
unscholarly_source
false
null
0
o80ewnf
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ewnf/
false
20
t1_o80eu26
What kind of speeds are you getting? On a 3090 Ti at max context I got about 1 t/s, but the thinking and output were actually really good. I'm obviously going to tweak things, but curious about your experience.
1
0
2026-03-01T06:14:11
GrungeWerX
false
null
0
o80eu26
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o80eu26/
false
1
t1_o80eq02
I think so: https://unsloth.ai/docs/get-started/install
0
0
2026-03-01T06:13:14
paranoidray
false
null
0
o80eq02
false
/r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o80eq02/
false
0
t1_o80elhd
Because of how much people like this idea, I'm pivoting to adding some hardware acceleration and making inference faster. I will release a binary here soon.
1
0
2026-03-01T06:12:09
Electrical_Ninja3805
false
null
0
o80elhd
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80elhd/
false
1
t1_o80ei7h
I don't need to be so virtuous. We use LLMs in production summarising, synthesising, and analysing data. There is zero chance this data goes to a cloud supplier. We're doing this because it's the only way it can be done to satisfy the privacy requirements of clients. There's really no grey area. These open source m...
4
0
2026-03-01T06:11:22
AlwaysLateToThaParty
false
null
0
o80ei7h
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80ei7h/
false
4
t1_o80ee69
CPU only here too. I really like your project. AI has made it possible because they are like virtual employees, I think you just need to clearly establish the foundation and from there iterate and improve.
1
0
2026-03-01T06:10:27
Innomen
false
null
0
o80ee69
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80ee69/
false
1
t1_o80edxi
Pivot to playing with OpenClaw / ClawdBot - sounds like it’d be right up your alley
1
0
2026-03-01T06:10:24
AggravatinglyDone
false
null
0
o80edxi
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80edxi/
false
1
t1_o80e8m6
Yeah, like OP is really not getting it when he casually mentions how he monitors how often and when each family member is logging in...
32
0
2026-03-01T06:09:11
1731799517
false
null
0
o80e8m6
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80e8m6/
false
32
t1_o80e8mi
Just pushed a policy rules engine. Users define rules based on action type, risk level, and reversibility - evaluated by priority, independent of the agent. Will check out peta too!
2
0
2026-03-01T06:09:11
achevac
false
null
0
o80e8mi
false
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o80e8mi/
false
2
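The rules engine described above (rules matching on action type, risk level, and reversibility, evaluated by priority, independent of the agent) can be sketched roughly like this. All names and fields are invented for illustration, not the project's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Rule:
    priority: int                  # lower evaluates first
    decision: str                  # "allow" | "deny" | "ask"
    action_type: Optional[str] = None          # None matches any action type
    max_risk: int = 10                         # matches risk <= this level
    reversibility: set = field(
        default_factory=lambda: {"full", "partial", "none"})

    def matches(self, action):
        return (self.action_type in (None, action["type"])
                and action["risk"] <= self.max_risk
                and action["reversibility"] in self.reversibility)

def evaluate(rules, action, default="ask"):
    """Return the decision of the first matching rule, by priority."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.matches(action):
            return rule.decision
    return default

rules = [
    # Irreversible deletes are always denied, regardless of risk score.
    Rule(priority=0, decision="deny", action_type="delete",
         reversibility={"none"}),
    # Low-risk actions of any type pass through.
    Rule(priority=1, decision="allow", max_risk=3),
]
```

Anything not caught by a rule falls back to the default, which in an approval-API setting would typically mean escalating to a human.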
t1_o80e8cl
Not sure why I would have to be gay to appreciate this but I’d try anything once to improve my homelab. Is there a form to fill in?
1
0
2026-03-01T06:09:08
gregusmeus
false
null
0
o80e8cl
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80e8cl/
false
1
t1_o80e50x
Thx!
1
0
2026-03-01T06:08:21
AppealSame4367
false
null
0
o80e50x
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o80e50x/
false
1
t1_o80e3mx
Update: just shipped this. Reversibility is now a first-class field (full/partial/none) with a policy engine that keys off it. I also added a reasoning field for pre-POST explanation. Thanks :)
1
0
2026-03-01T06:08:02
achevac
false
null
0
o80e3mx
false
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o80e3mx/
false
1
t1_o80e2d7
Source?
22
0
2026-03-01T06:07:43
V0dros
false
null
0
o80e2d7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80e2d7/
false
22
t1_o80e0x3
i've spent the past 4 months building the framework necessary to make this happen. i had this thought around 6 months ago. problem being, none of the tools needed to make this a reality existed. i have built them. well, most of them. i cant afford a gpu, so running inference on cpu at the hardware level is my only op...
2
0
2026-03-01T06:07:23
Electrical_Ninja3805
false
null
0
o80e0x3
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80e0x3/
false
2
t1_o80dxlu
Wow, 110k context? On my single 3090 and just 60k context ollama already starts dumping the model/cache onto cpu ram and token generation drops to ~5t/s. Ollama is garbage, I know, but this is quite the difference. Would you mind sharing more details on your software stack and settings, please?
1
0
2026-03-01T06:06:37
Fin5ki
false
null
0
o80dxlu
false
/r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o80dxlu/
false
1
t1_o80dvv7
My thoughts exactly. I am from central Europe and I thought "is it a culture difference? The guy is from the USA; perhaps they do not spend so much time together because of the different working hours in that marriage?" It is so weird to get such a message from your wife, one that ends with a period, lol. It feels like it would be a college r...
12
0
2026-03-01T06:06:13
Writerro
false
null
0
o80dvv7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80dvv7/
false
12
t1_o80dqjt
Llama-swap is the GOAT! I’ve been able to create my local Chat thanks to it! Image generation, audio transcription, chat, vision support models, all integrated in Open-WebUI with llama-swap as the backend. All local and swapping models like crazy. Thanks for your ultra fine work.
22
0
2026-03-01T06:05:00
ismaelgokufox
false
null
0
o80dqjt
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80dqjt/
false
22
t1_o80df63
I flipped my spam filter (rtx3060) from Qwen3-VL 8B to this (Q2 unsloth quant), and it seems reliable, and faster.
1
0
2026-03-01T06:02:23
evildeece
false
null
0
o80df63
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80df63/
false
1
t1_o80dew9
Tell me more about your Devstral template fix.
2
0
2026-03-01T06:02:19
StardockEngineer
false
null
0
o80dew9
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80dew9/
false
2
t1_o80d8rv
I use a docker compose to host some containers that help with this, such as open-webui, litellm, searxng
1
0
2026-03-01T06:00:53
timbo2m
false
null
0
o80d8rv
false
/r/LocalLLaMA/comments/1q5p60m/llamacpp_how_are_you_doing_websearch/o80d8rv/
false
1
t1_o80d7h8
I hate this take. They have algorithms that pore over our info for advertising. You don't think AI companies will do the same? Share your deepest thoughts, potentially even leading to blackmail? (Schizo take, but) I'd rather trust my friends with my life than Sam "yeah let's bring Terminator and 1984 to reality" Altman
11
0
2026-03-01T06:00:36
super1701
false
null
0
o80d7h8
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80d7h8/
false
11
t1_o80d76q
AI OS is the future. I want a linux distro with an LLM IT agent built in, with clustering native, so i can just put it on ewaste and plug it in, low watt space heaters with compute. all accessible from any example merged in. [https://innomen.substack.com/p/computronium](https://innomen.substack.com/p/computronium)
2
0
2026-03-01T06:00:32
Innomen
false
null
0
o80d76q
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80d76q/
false
2
t1_o80d6yl
Thanks for the detailed breakdown! The async job queue pattern definitely makes sense for the backend. However, I'm a bit skeptical about the **LLM-driven polling approach** (the LLM repeatedly calling `check_task_status`). In my experience, having the LLM manage the polling loop feels like a waste of expensiv...
1
0
2026-03-01T06:00:29
Maleficent_Spirit832
false
null
0
o80d6yl
false
/r/LocalLLaMA/comments/1rhowte/how_are_you_handling_longrunning_agent_tasks/o80d6yl/
false
1
t1_o80d66k
You're absolutely right ! I made a mistake by stating that the war was won. The truth is, I nuked Washington D.C when you explicitly told me not to. I made a mistake by not following your explicit instructions. Facts: - The people are irremediably gone, did you have a backup ? - The invading army is marching on Chicago ...
31
0
2026-03-01T06:00:18
yopla
false
null
0
o80d66k
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80d66k/
false
31
t1_o80d49a
Thanks for sharing, let me test too! 
1
0
2026-03-01T05:59:52
Several-Tax31
false
null
0
o80d49a
false
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o80d49a/
false
1