name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o80d34b
Kind of… they have a distinct Claude of their very own at the DoD. I don’t know if it’s co-hosted or how it’s stored, but it’s just for their use and has been physically air-gapped from the internet. It cannot talk to the same Claude we use. They feed it mountains of intelligence - everything from geological data ...
9
1
2026-03-01T05:59:37
CantankerousOrder
false
null
0
o80d34b
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80d34b/
false
9
t1_o80d1mm
Any source for this? I've been trying to build something like this for a week.
1
0
2026-03-01T05:59:17
BadBoy17Ge
false
null
0
o80d1mm
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80d1mm/
false
1
t1_o80d0a1
I am using llama-swap. This is part of the config.yaml file
1
0
2026-03-01T05:58:58
high_funtioning_mess
false
null
0
o80d0a1
false
/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o80d0a1/
false
1
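The comment above references llama-swap's config.yaml. A minimal sketch of what such a fragment typically looks like; model names and file paths are placeholders, and the field names (`models`, `cmd`, `ttl`, `${PORT}`) follow llama-swap's README as I recall it, so verify against the project docs:

```yaml
# Hypothetical llama-swap config.yaml fragment: each key under `models`
# maps a model alias to the llama-server command that serves it.
models:
  "glm-4.7-flash":
    cmd: llama-server --port ${PORT} -m /models/glm-4.7-flash-Q4_K_M.gguf -ngl 99
    ttl: 300   # unload after 5 minutes idle
  "gpt-oss-120b":
    cmd: llama-server --port ${PORT} -m /models/gpt-oss-120b-Q4_K_M.gguf -ngl 99
    ttl: 300
```

Requests to llama-swap's endpoint then select a model by alias, and the proxy starts or swaps the backing llama-server process as needed.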
t1_o80cz41
Even I am building projects on a GTX 1650 with 4GB VRAM, and this sub helped my GitHub repo get a very good reach. I am truly thankful to the community. But yeah, sometimes things get too fast for me to catch up and contribute in the comments.
1
0
2026-03-01T05:58:42
D_E_V_25
false
null
0
o80cz41
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80cz41/
false
1
t1_o80cz1n
It is configuration optimization. It explains what it is right below what you quoted; you just had to keep reading. It also assumes you're familiar with customizing the terminal and is relying on you knowing what Oh My Zsh or Oh My Bash are.
1
0
2026-03-01T05:58:41
CanYouSaySacrifice
false
null
0
o80cz1n
false
/r/LocalLLaMA/comments/1qd8vpj/claude_code_or_opencode_which_one_do_you_use_and/o80cz1n/
false
1
t1_o80cwvu
Sure will check
1
0
2026-03-01T05:58:12
DockyardTechlabs
false
null
0
o80cwvu
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o80cwvu/
false
1
t1_o80cu4e
Thanks for the recommendation; I want to try it. What is the setup to run this yaml file?
1
0
2026-03-01T05:57:35
alsolh
false
null
0
o80cu4e
false
/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o80cu4e/
false
1
t1_o80cq56
Hell yeah I’ll set this up tomorrow. Thanks!
1
0
2026-03-01T05:56:39
StardockEngineer
false
null
0
o80cq56
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80cq56/
false
1
t1_o80cpna
hi, did you try the fp32 version? i'd love to see the latency of that.
1
0
2026-03-01T05:56:33
ElectricalBar7464
false
null
0
o80cpna
false
/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o80cpna/
false
1
t1_o80cnzw
This is the expected outcome, because models do not understand; they predict.
1
0
2026-03-01T05:56:09
blondydog
false
null
0
o80cnzw
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o80cnzw/
false
1
t1_o80cljn
waaaaaaaaaaait, are you saying TheBloke IS Bartowski???
1
0
2026-03-01T05:55:36
k_means_clusterfuck
false
null
0
o80cljn
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o80cljn/
false
1
t1_o80ckir
Wow. That’s my mistake and you are totally right to call that out. What I thought was a drone manufacturing facility was actually a girls only grade school.
312
0
2026-03-01T05:55:22
BahnMe
false
null
0
o80ckir
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80ckir/
false
312
t1_o80cagt
yeah it's smarter than the MoE one, with a speed tradeoff. what hardware are you planning to run it on? rtx or apple silicon?
1
0
2026-03-01T05:53:07
luke_pacman
false
null
0
o80cagt
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80cagt/
false
1
t1_o80c8xe
Australians invented 'mate' for this reason. Timeless.
1
0
2026-03-01T05:52:46
AlwaysLateToThaParty
false
null
0
o80c8xe
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80c8xe/
false
1
t1_o80c8l8
And my wife 🤣
11
0
2026-03-01T05:52:41
Dismal-Proposal2803
false
null
0
o80c8l8
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80c8l8/
false
11
t1_o80c7y9
Yeah you could generate the kv cache for the prompt so it doesn’t need to process the prompt each time.
1
0
2026-03-01T05:52:32
And-Bee
false
null
0
o80c7y9
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o80c7y9/
false
1
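The KV-cache reuse idea in the comment above can be sketched as a toy: here `prefill` is a stand-in for the expensive prompt-processing pass (a real engine would return attention key/value tensors), and memoizing it by prompt prefix means only the new turn needs a forward pass.

```python
import functools

@functools.lru_cache(maxsize=32)
def prefill(prompt_prefix: str) -> tuple:
    # Stand-in for the expensive prefill pass that builds the KV cache;
    # a real inference engine would return key/value tensors here.
    return ("kv-state", len(prompt_prefix))

def generate(prompt_prefix: str, user_turn: str) -> str:
    kv = prefill(prompt_prefix)  # cache hit on every call after the first
    # Only the new tokens (user_turn) need processing now.
    return f"response to {user_turn!r} (prefix state: {kv})"

SYSTEM = "You are a helpful agent. " * 50  # long shared prefix
generate(SYSTEM, "first question")   # computes the prefix state once
generate(SYSTEM, "second question")  # reuses it
```

Real engines expose this as prompt/prefix caching; the toy only illustrates the control flow.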
t1_o80c4po
my use case is using a small model with https://github.com/alexzhang13/rlm on local machine.
1
0
2026-03-01T05:51:49
jaigouk
false
null
0
o80c4po
false
/r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/o80c4po/
false
1
t1_o80c1lw
shouldn't be overlooked, I agree.
3
0
2026-03-01T05:51:07
AlwaysLateToThaParty
false
null
0
o80c1lw
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80c1lw/
false
3
t1_o80bq2k
It’s privacy from corporate data mining but replaced with their husband/dad having access to all their logs haha.
8
0
2026-03-01T05:48:33
nobodybelievesyou
false
null
0
o80bq2k
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80bq2k/
false
8
t1_o80bf1y
Clever girl
1
0
2026-03-01T05:46:04
TomorrowsLogic57
false
null
0
o80bf1y
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o80bf1y/
false
1
t1_o80bdom
Yeah I plan to add RTX support to the agentic app soon since it would benefit from the much better speed... However I think the Qwen3.5 27B dense model would be a better choice than Qwen3.5 35B-A3B on an RTX 4090, it's smarter (intelligence score of 42 vs 37 for the A3B) and should run at an acceptable speed. Ha...
1
0
2026-03-01T05:45:45
luke_pacman
false
null
0
o80bdom
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80bdom/
false
1
t1_o80bcvy
"lobotomy quants" lmao, Im putting that in my pocket for later. Thanks. lol
1
0
2026-03-01T05:45:35
TinyBoulderStudios
false
null
0
o80bcvy
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o80bcvy/
false
1
t1_o80b7z4
Aha, I see now. Some releases are fp8, and one is a base release. So 4 here means 2 more models, tops.
1
0
2026-03-01T05:44:29
Potential_Block4598
false
null
0
o80b7z4
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o80b7z4/
false
1
t1_o80b2ex
The MCP spec has a notifications/progress pattern for this. Your server can send progress notifications back to the client while the long task runs, keeping the connection alive. The pattern I use for agent workflows: 1. MCP tool starts the task and immediately returns a task ID 2. Work runs in a background proc...
1
0
2026-03-01T05:43:14
EquivalentGuitar7140
false
null
0
o80b2ex
false
/r/LocalLLaMA/comments/1rhowte/how_are_you_handling_longrunning_agent_tasks/o80b2ex/
false
1
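The "return a task ID immediately, poll for completion" pattern described above can be sketched without any MCP SDK; `start_task` and `check_task` are hypothetical tool handlers, with a background thread standing in for the long-running job.

```python
import threading
import time
import uuid

tasks: dict = {}

def start_task(duration: float) -> str:
    """Hypothetical MCP tool handler: start the work, return a task ID at once."""
    task_id = uuid.uuid4().hex
    tasks[task_id] = {"status": "running", "result": None}

    def worker() -> None:
        time.sleep(duration)  # stand-in for the long-running job
        tasks[task_id] = {"status": "done", "result": f"finished after {duration}s"}

    threading.Thread(target=worker, daemon=True).start()
    return task_id

def check_task(task_id: str) -> dict:
    """Second hypothetical tool: the agent polls this until status == 'done'."""
    return tasks[task_id]

tid = start_task(0.1)  # returns immediately while the work keeps running
```

In a real server the task table would need persistence and cleanup; the in-memory dict only shows the shape of the two tools.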
t1_o80b1q0
Why would normies care about our esoteric hobbies? My wife thinks I’m so strange “wasting my time” on my computer, but then there she is laughing and clapping like a seal at evening television or doom scrolling. Give it up, dude 😂
2
0
2026-03-01T05:43:05
And-Bee
false
null
0
o80b1q0
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80b1q0/
false
2
t1_o80b1gz
See here: https://www.reddit.com/r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/ There's some LMStudio-specific guidance in the comments as well
2
0
2026-03-01T05:43:02
Sensitive_Song4219
false
null
0
o80b1gz
false
/r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/o80b1gz/
false
2
t1_o80ajtl
we're extending the promo!!
1
0
2026-03-01T05:39:09
zo-computer
false
null
0
o80ajtl
false
/r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/o80ajtl/
false
1
t1_o80aja3
I am about to try my first model, no idea how to do this, but am building my image library and will learn soon! Any tips?
1
0
2026-03-01T05:39:01
cloudcity
false
null
0
o80aja3
false
/r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o80aja3/
false
1
t1_o80ai3z
Very late to this thread but are you on Windows by any chance? Am on RX 6800 with 32gb of system ram and only getting like 12 tkps using qwen 3.5 27B, which should fit on VRAM, any tips to improve speed? 
1
0
2026-03-01T05:38:46
mrstrangedude
false
null
0
o80ai3z
false
/r/LocalLLaMA/comments/1q0mg6w/how_is_running_local_ai_models_on_amd_gpus_today/o80ai3z/
false
1
t1_o80ahs1
These were the fastest I could get without using llama-bench. It only got faster when I enlarged the batch and ubatch settings!
1
0
2026-03-01T05:38:42
ClintonKilldepstein
false
null
0
o80ahs1
false
/r/LocalLLaMA/comments/1raq23i/they_have_karpathy_we_are_doomed/o80ahs1/
false
1
t1_o80abum
Thank you! Both. The main contribution is **methodological**: I’m not claiming any one line of evidence “solves” it — I’m combining **independent evidence streams** (linguistic patterns, aDNA context, trade networks, material culture, iconography, chronology, substrate hypotheses, and ruling out other families) and lo...
2
0
2026-03-01T05:37:23
Hot_Tip9520
false
null
0
o80abum
false
/r/LocalLLaMA/comments/1rhkwzn/from_gpt_wrapper_to_autonomous_oss_prs_apachenasa/o80abum/
false
2
t1_o80a3t1
Suggestions of ~110GB models and <100GB models are both welcome. I run headless, but I tend to stick to less than 100GB for coding agents. 131072 context is the sweet spot for "stretch my legs and come back" speed, while still having decent quality. I've been maining MiniMax for a while. I'm still testing out oth...
1
0
2026-03-01T05:35:34
colin_colout
false
null
0
o80a3t1
false
/r/LocalLLaMA/comments/1rcrzbn/strix_halo_128gb_what_models_which_quants_are/o80a3t1/
false
1
t1_o80a3em
claudewar --dangerously-skip-permissions
29
0
2026-03-01T05:35:29
dergachoff
false
null
0
o80a3em
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80a3em/
false
29
t1_o809v4l
Well yeah... They're not just going to stop it cold turkey. There's a period of transition.
11
0
2026-03-01T05:33:39
TopTippityTop
false
null
0
o809v4l
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o809v4l/
false
11
t1_o809te3
Claude can't control weapon systems and these contracts have roll off periods lmfao. Holy shit, some of you really need to do some critical thinking and low level research.
198
0
2026-03-01T05:33:17
illicITparameters
false
null
0
o809te3
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o809te3/
false
198
t1_o809svu
Nice, you solved a nonexistent problem with tools that are less efficient and effective than the ones on the market.
1
0
2026-03-01T05:33:11
Antique-Ingenuity-97
false
null
0
o809svu
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o809svu/
false
1
t1_o809sx4
[https://www.youtube.com/watch?v=KyADkmRVe0U](https://www.youtube.com/watch?v=KyADkmRVe0U)
1
0
2026-03-01T05:33:11
Mr-I17
false
null
0
o809sx4
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o809sx4/
false
1
t1_o809rpi
build it, and I cum again
-22
0
2026-03-01T05:32:55
philmarcracken
false
null
0
o809rpi
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o809rpi/
false
-22
t1_o809nkf
Opus War 8 model
19
0
2026-03-01T05:32:00
neotorama
false
null
0
o809nkf
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o809nkf/
false
19
t1_o809mo8
>And honestly? ChatGPT reeks of terrible behavioural template that’s totally overcooked in its personality. They tried to make what people wanted but ended with this mess
40
0
2026-03-01T05:31:48
clckwrks
false
null
0
o809mo8
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o809mo8/
false
40
t1_o809mks
> Trump utilized those very tools to launch a massive airstrike against Iran. Is there any actual source that says Claude was used to launch attacks? Sure, the White House is having a brief with Anthropic and they are attacking Iran, but the two things don't need to be related. Also I wonder about the usage of Claude in military ap...
130
0
2026-03-01T05:31:47
mtmttuan
false
null
0
o809mks
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o809mks/
false
130
t1_o809lw7
The moment they asked for help is a win TBH.
2
0
2026-03-01T05:31:38
moritzchow
false
null
0
o809lw7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o809lw7/
false
2
t1_o809c8t
Push to huggingface pretty please 🥺
1
0
2026-03-01T05:29:33
k_means_clusterfuck
false
null
0
o809c8t
false
/r/LocalLLaMA/comments/1rfzfgf/minimax_m25_gguf_perform_poorly_overall/o809c8t/
false
1
t1_o809bwy
This is actually what happened to me with my UnRaid server, which I built like 6 years ago; only early this year, when my family had an issue with photos disappearing from their mobiles, did they ask for my help using my server! The rest of the use case is just myself playing games on a VM, Sonarr/Radarr on my collection, and my gf sitting on the couch havi...
5
0
2026-03-01T05:29:29
moritzchow
false
null
0
o809bwy
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o809bwy/
false
5
t1_o8098je
No product-market-fit! Sorry, but it’s a hard job to match tech to humans. Especially your family.
3
0
2026-03-01T05:28:45
onethousandmonkey
false
null
0
o8098je
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8098je/
false
3
t1_o80932m
r/StableDiffusion
1
0
2026-03-01T05:27:33
Alpacaaea
false
null
0
o80932m
false
/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o80932m/
false
1
t1_o808z62
A tree in a forest is just a tree. But a tree in your server is vulnerable to your prying eyes. Being a statistic over there is better than being naked in here.
2
0
2026-03-01T05:26:42
Huge_Freedom3076
false
null
0
o808z62
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o808z62/
false
2
t1_o808yrm
And charge that $200m to their Apple Pay?
5
0
2026-03-01T05:26:37
1-800-methdyke
false
null
0
o808yrm
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o808yrm/
false
5
t1_o808wqh
The 3090s. There’s no real contest unfortunately. The 128gb mac will let you run somewhat smarter models but the difference isn’t enough to warrant the slow pp/long context performance. I do recommend mac to people with very specific needs. But for general purpose “i want to be ready for what comes next in AI world” it...
2
0
2026-03-01T05:26:10
datbackup
false
null
0
o808wqh
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o808wqh/
false
2
t1_o808r72
Iraq can consider itself lucky that ChatGPT wasn't in place yet.
179
0
2026-03-01T05:24:59
1-800-methdyke
false
null
0
o808r72
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o808r72/
false
179
t1_o808plr
everyone has this problem
1
0
2026-03-01T05:24:38
putrasherni
false
null
0
o808plr
false
/r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o808plr/
false
1
t1_o808lz5
Qwen models handle tool calls differently from GPT/Nemotron. A few things to check: 1. Make sure you're using the correct chat template. Qwen3 models need the Hermes-style tool call format. If ollama isn't applying the right template, the model literally doesn't know tools exist. 2. Try setting `num_ctx` highe...
1
0
2026-03-01T05:23:52
EquivalentGuitar7140
false
null
0
o808lz5
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o808lz5/
false
1
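The Hermes-style tool call format mentioned above wraps a JSON object in `<tool_call>` tags. A minimal parser for completions in that shape might look like this; the exact tag and JSON layout is an assumption based on the Hermes/Qwen convention, so verify against your model's chat template:

```python
import json
import re

# Hermes-style convention: a JSON object between <tool_call> ... </tool_call> tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(completion: str) -> list:
    """Pull tool calls out of a raw model completion."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(completion)]

output = (
    "Let me check the weather.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Berlin"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(output)
```

If the serving layer applies the wrong template, the model never emits these tags at all, which matches the "doesn't know tools exist" symptom described above.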
t1_o808b7n
Not yet. And maybe; it was work. It's the amalgamation of a couple of projects actually, ~120k lines of code across 3 separate projects. Hence why I haven't open-sourced it, and I'm not sure if I will, because it will be work, and I'm lazy about everything outside of what's got my attention at the moment.
1
0
2026-03-01T05:21:35
Electrical_Ninja3805
false
null
0
o808b7n
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o808b7n/
false
1
t1_o8087l7
Doubt. Engineers on the ground will know the discrepancies in performance and just keep using Claude.
-2
1
2026-03-01T05:20:50
Budget-Juggernaut-68
false
null
0
o8087l7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8087l7/
false
-2
t1_o808737
Can you disable for 27B as well? I love it and it just rambles sometimes
1
0
2026-03-01T05:20:43
_raydeStar
false
null
0
o808737
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o808737/
false
1
t1_o8086uh
Even if an autonomous “AI takeover” doesn’t happen, very soon, access to frontier models will be the only type of power that really matters. Nobody in charge of anything will dare to make decisions without consulting with them, for fear of making a mistake and then being blamed for not using technology that could have...
22
0
2026-03-01T05:20:40
-p-e-w-
false
null
0
o8086uh
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8086uh/
false
22
t1_o8082w5
Now I understand Trump's POV, this endless button mashing: - Run "search_targets" Approve once/Approve for this war/Make suggestions? - Open launch bay door? ... - Load the missile? .... - Acquire target? ... You get carpal tunnel syndrome before you get to launch a single strike. No wonder he wants AI like Google A...
7
1
2026-03-01T05:19:50
catplusplusok
false
null
0
o8082w5
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8082w5/
false
7
t1_o807up7
Can you share your settings? I'm running 6GB VRAM/32GB DDR5 RAM and only get about 7tps (which is half of what I get out of the direct predecessor, 30b-a3b). Quality of output is impressive: it's a smart model for the size.
1
0
2026-03-01T05:18:06
Sensitive_Song4219
false
null
0
o807up7
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o807up7/
false
1
t1_o807pra
Agreed, I'm so confused.
1
0
2026-03-01T05:17:05
No-Guide4444
false
null
0
o807pra
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o807pra/
false
1
t1_o807nbc
Have you open sourced any of it, or plan to open source any of it? I haven't worked with UEFI yet so I'm curious how complex that work was. Any indication for how many lines of code the project is?
1
0
2026-03-01T05:16:35
HunterVacui
false
null
0
o807nbc
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o807nbc/
false
1
t1_o807gr1
*Looks at Hulu, Netflix, Disney+, HBO Max, Amazon Prime and Paramount, Plex* There's never anything on TV
4
0
2026-03-01T05:15:13
ubrtnk
false
null
0
o807gr1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o807gr1/
false
4
t1_o807fe8
This is excellent, thank you!
1
0
2026-03-01T05:14:56
datbackup
false
null
0
o807fe8
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o807fe8/
false
1
t1_o807cjq
Honestly, I haven't tried NovelAI or anything like that other than SillyTavern, which I found myself really disliking because of how damn overwhelming its UI is. Question to you: Do any of those things you mentioned focus on actual sentence completion/continuation -- think of base, non-instruct models -- or are they no...
1
0
2026-03-01T05:14:21
Kamal965
false
null
0
o807cjq
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o807cjq/
false
1
t1_o807ax7
Good thing is that OpenAI's most valuable researchers already left the company, and SamA is only slightly better at understanding AI than Elon (who has zero idea of how LLM works). Now that the paradigm shifted to RL, they can't really compete by throwing more compute anymore and they actually need to manage clever res...
2
0
2026-03-01T05:14:01
NandaVegg
false
null
0
o807ax7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o807ax7/
false
2
t1_o807aof
My #1 rule for the config: never break backwards compatibility.
13
0
2026-03-01T05:13:58
No-Statement-0001
false
null
0
o807aof
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o807aof/
false
13
t1_o807ai9
I think thats probably what I'll do. Thanks
2
0
2026-03-01T05:13:56
ubrtnk
false
null
0
o807ai9
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o807ai9/
false
2
t1_o8076z1
Wife is turning part of our side yard into a garden - got her some raised garden beds for Christmas last year. She's tried planting so many things that she didn't know what she planted at one point. Except Arugula... lots and lots of Arugula.
4
0
2026-03-01T05:13:13
ubrtnk
false
null
0
o8076z1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8076z1/
false
4
t1_o8071j1
How do you like OpenClaw? I find the notion of what OpenClaw claims to be fascinating, but the security implications are not something to ignore. I've got a baby PC with an N150 and 16GB with a fresh install of Ubuntu ready to go, and I just can't pull the trigger.
2
0
2026-03-01T05:12:06
ubrtnk
false
null
0
o8071j1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8071j1/
false
2
t1_o8070gg
I watch the changelog and it certainly has gotten complex. However, you haven't broken the dumb simple config which is very much appreciated.
10
0
2026-03-01T05:11:53
suprjami
false
null
0
o8070gg
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o8070gg/
false
10
t1_o806y89
Christ, look at this parade of brain-dead clones. You could replace half of you with cardboard cutouts and no one would notice. Not a single contractor in sight — just a village of idiots congratulating each other for surviving another day without choking on the air. They say it takes a village, but obviously, it doesn...
0
0
2026-03-01T05:11:25
sinevilson
false
null
0
o806y89
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o806y89/
false
0
t1_o806sdb
Just because I'm new, it doesn't mean I'm a bot. Everyone is entitled to their opinion, and so are you. But maybe you will not understand that.
1
0
2026-03-01T05:10:15
PaceImaginary8610
false
null
0
o806sdb
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o806sdb/
false
1
t1_o806rql
Skip Gentoo, you can go smaller with Buildroot and have the kernel directly run the inference engine as the init binary. This is not too uncommon in the embedded space actually, though it's typically a QT, GTK, Unity, or Unreal app that's loaded directly after the kernel.
1
0
2026-03-01T05:10:06
AndreVallestero
false
null
0
o806rql
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o806rql/
false
1
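Pointing PID 1 at the inference engine, as described above, comes down to the kernel's `init=` boot parameter; a hypothetical boot line (the binary path, console, and root device are placeholders):

```
# Kernel command line, e.g. appended in the bootloader config:
# PID 1 becomes the inference server instead of /sbin/init.
console=ttyS0 root=/dev/vda1 rw init=/usr/bin/llama-server
```

With Buildroot, the same effect is usually achieved by shipping a rootfs whose init is (or execs) the app, with no service manager in between.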
t1_o806pd9
The one you can run with relative speed, on your (unspecified) hardware.
1
0
2026-03-01T05:09:38
optimisticalish
false
null
0
o806pd9
false
/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o806pd9/
false
1
t1_o806iu4
Yeah that’s fair. Pure $/hr can look higher for serverless. The thing to compare isn’t hourly rate though, it’s total billed GPU minutes for your actual workload. If your model is bursty and sits idle 70–80% of the time, a cheaper hourly box that stays warm can end up costing more than a higher $/hr serverless setup th...
1
0
2026-03-01T05:08:17
pmv143
false
null
0
o806iu4
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o806iu4/
false
1
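The billed-minutes comparison above is simple arithmetic. With hypothetical rates, an always-on box at a lower hourly price can still cost more per month than serverless billing on a mostly idle workload:

```python
def monthly_cost(rate_per_hr: float, billed_hours: float) -> float:
    # Total bill is just rate times hours actually billed.
    return rate_per_hr * billed_hours

HOURS_IN_MONTH = 730

# Always-on box: cheaper hourly rate, but billed around the clock.
always_on = monthly_cost(rate_per_hr=1.20, billed_hours=HOURS_IN_MONTH)

# Serverless: pricier per hour, but a workload idle ~75% of the time
# only bills ~25% of the month.
serverless = monthly_cost(rate_per_hr=2.50, billed_hours=HOURS_IN_MONTH * 0.25)
```

The crossover point depends entirely on the idle fraction, which is why the hourly rate alone is the wrong number to compare.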
t1_o806iiq
Are you adopting a son anytime soon? Jokes aside, don't let that affect your morale; I mean, the whole journey was probably worth it and you learned a lot, and that's all that matters. Probably keep focusing on your own needs and what features you'd want.
1
0
2026-03-01T05:08:13
BalStrate
false
null
0
o806iiq
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o806iiq/
false
1
t1_o806fxq
It’s a bit like working hard to grow a large vegetable garden and raise some chickens, and then complaining that your family still eats at restaurants. I feel your pain though. The world is becoming less and less accommodating for people who like to “do it themselves”. But that just means your attitude is getting more...
1
0
2026-03-01T05:07:40
datbackup
false
null
0
o806fxq
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o806fxq/
false
1
t1_o806f2r
There's supposed to be a 6 month transitional period there, so that'd make sense.
349
0
2026-03-01T05:07:29
ReMeDyIII
false
null
0
o806f2r
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o806f2r/
false
349
t1_o806euq
"Bueno, es la única opción de bajo consumo con 16 GB de VRAM que encontré. El precio es razonable. Y aunque no es la más rápida, es lo suficientemente veloz para lo que hago. Puedo vivir sin el ruido, el calor y las facturas de luz elevadas."
1
0
2026-03-01T05:07:26
tony10000
false
null
0
o806euq
false
/r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/o806euq/
false
1
t1_o8061ye
Yes! It might work even better for you since you have a newer GPU than mine, which is an RTX 4080 mobile with 12GB of VRAM. I get around 11 tokens a second on mine. Yours should run it faster. I'm using the Q4_K_M quant by Unsloth.
2
0
2026-03-01T05:04:46
c64z86
false
null
0
o8061ye
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8061ye/
false
2
t1_o8061ou
It's also likely they might just be uninterested in general.
16
0
2026-03-01T05:04:43
teleprint-me
false
null
0
o8061ou
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8061ou/
false
16
t1_o805xt7
I feel you on this. I use openclaw. Made it available to family and friends. They don’t use it. But I do! I do a lot of vibe coding and other research. It helps me with collating data and my actual job. Honestly dude you should feel accomplished! You did something that helps you and possibly your family if they want to...
2
0
2026-03-01T05:03:55
ParticularlyStrange
false
null
0
o805xt7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o805xt7/
false
2
t1_o805xko
Is role playing bad? I don't get why you would hesitate.
1
0
2026-03-01T05:03:52
devilish-lavanya
false
null
0
o805xko
false
/r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o805xko/
false
1
t1_o805t11
I agree, and I've been having lots of fun with it, even though it does run pretty slow on my setup at 11 tokens a second. So far it's built a 3D model of the solar system correctly, with all the paths and speeds of the planets accounted for, and I've even made some pretty basic raycaster games with it too... and now It'...
2
0
2026-03-01T05:02:56
c64z86
false
null
0
o805t11
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o805t11/
false
2
t1_o805sav
Gemini 3.1 is partially an image output model as nano banana 2, I could see DeepSeek V4 being that way
1
0
2026-03-01T05:02:47
thetaFAANG
false
null
0
o805sav
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o805sav/
false
1
t1_o805rzz
Do these things work well with dual GPUs? I have a 16gb 4060 Ti and was thinking doubling up on those is probably my most cost effective upgrade if it will work about as well as a 3090 with a bit more vram.
1
0
2026-03-01T05:02:43
danielfrances
false
null
0
o805rzz
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o805rzz/
false
1
t1_o805pyb
I love my Mac studio m4 64GB. Best computer I have ever owned. It’s completely silent. The form factor is amazing and is mounted under my desk. The energy consumption is stupid for the output. The “cons” if you will are that you’re running Mac OS and stuck on Mac OS. I enjoy using Mac OS. I do dabble with local models ...
1
0
2026-03-01T05:02:18
iamrob15
false
null
0
o805pyb
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o805pyb/
false
1
t1_o805nkj
It's not a RAG tool, it doesn't index files. It remembers things like "the pagination bug was caused by an off-by-one in the cursor logic" or tracks todos you discussed mid-session. All branch aware so nothing bleeds across contexts. More like working memory than code search.
1
0
2026-03-01T05:01:50
meszmate
false
null
0
o805nkj
false
/r/LocalLLaMA/comments/1rhn8eo/built_an_open_source_mcp_server_for_ai_coding/o805nkj/
false
1
t1_o805jal
I managed to get 100 tps on 35B q4 on Apple silicon, but just about 27 tps on 27B q4
1
0
2026-03-01T05:00:58
putrasherni
false
null
0
o805jal
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o805jal/
false
1
t1_o805a0b
A 6 month old bot posting AI hate makes it to the top of /r/localllama. Well done guys.
1
0
2026-03-01T04:59:06
LocoMod
false
null
0
o805a0b
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o805a0b/
false
1
t1_o80575s
That should be good enough. Thanks!
1
0
2026-03-01T04:58:32
nborwankar
false
null
0
o80575s
false
/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o80575s/
false
1
t1_o804uva
I built my own local alternative to all of these. Model2Vec potion-base-8M (256-dim) + sqlite-vec for vector search + FTS5 BM25 for keyword search, fused with Reciprocal Rank Fusion. 49,746 chunks from 15,800 files. 83MB in SQLite. Sub-second retrieval, zero API cost, everything local. Biggest win was hybrid search o...
1
0
2026-03-01T04:56:01
blakecr
false
null
0
o804uva
false
/r/LocalLLaMA/comments/1rckcww/benchmarked_4_ai_memory_systems_on_600turn/o804uva/
false
1
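The Reciprocal Rank Fusion step described above can be sketched in a few lines, using the conventional k = 60 constant; the chunk IDs below are placeholders for the two retrievers' results:

```python
def rrf(rankings: list, k: int = 60) -> list:
    """Reciprocal Rank Fusion: score(doc) = sum over rankings of 1/(k + rank)."""
    scores: dict = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["chunk_a", "chunk_b", "chunk_c"]   # e.g. from sqlite-vec
bm25_hits   = ["chunk_b", "chunk_d", "chunk_a"]   # e.g. from FTS5 BM25
fused = rrf([vector_hits, bm25_hits])
```

Documents that appear high in both lists (like `chunk_b` here) float to the top, which is why RRF works well for fusing semantic and keyword retrieval without score normalization.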
t1_o804na0
[https://github.com/anomalyco/opencode/pull/14085](https://github.com/anomalyco/opencode/pull/14085)
1
0
2026-03-01T04:54:29
FaustAg
false
null
0
o804na0
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o804na0/
false
1
t1_o804fvw
I don't think anyone has really given Gemini much attention, not on purpose or anything. Our family Google sub just got upgraded to have free Gemini Pro for some reason, so it may get used more, who knows.
1
0
2026-03-01T04:53:00
ubrtnk
false
null
0
o804fvw
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o804fvw/
false
1
t1_o804f0q
Brilliant, thank you!
1
0
2026-03-01T04:52:50
Protheu5
false
null
0
o804f0q
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o804f0q/
false
1
t1_o804cqn
Yeah I've been working on a dedicated system with MCP for my agents to use. My own little local Google without the advertiser first index or API. Free and unrestricted. Still a WIP but surprisingly functional.
2
0
2026-03-01T04:52:22
indrasmirror
false
null
0
o804cqn
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o804cqn/
false
2
t1_o804847
Lesson learned: don't force your hobby onto a captive audience.
22
0
2026-03-01T04:51:26
Klutzy-Snow8016
false
null
0
o804847
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o804847/
false
22
t1_o804485
Yeah I'm definitely gay for it
1
0
2026-03-01T04:50:40
duokeks
false
null
0
o804485
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o804485/
false
1
t1_o80404b
If you are able to run oss 120b, perhaps you should try Qwen 3.5 397B @ Unsloth q1; it is the best sub-100GB option
1
0
2026-03-01T04:49:51
Confusion_Senior
false
null
0
o80404b
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80404b/
false
1
t1_o803ti1
[Robot, experience this tragic irony for me](https://www.youtube.com/watch?v=LCPhbN1l024)
5
0
2026-03-01T04:48:33
cristoper
false
null
0
o803ti1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o803ti1/
false
5
t1_o803qmu
Yup, I read something similar where the loss was minimal at q8, and so far in my usage I didn't really notice a difference, even when doing maxed-out 261k token context use to the limit.
1
0
2026-03-01T04:47:58
Key_Pace_9755
false
null
0
o803qmu
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o803qmu/
false
1