name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o81d6c0
Have you tried the MLX variant models? I get around 20 tokens/sec on Qwen 8B VL and similar on Gemma 12B, both 4-bit quants
2
0
2026-03-01T11:35:13
xyzmanas
false
null
0
o81d6c0
false
/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81d6c0/
false
2
t1_o81d034
Operation Epstein Who?
2
0
2026-03-01T11:33:39
TheLastVegan
false
null
0
o81d034
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81d034/
false
2
t1_o81cuom
AI:DR
2
0
2026-03-01T11:32:14
Semi_Tech
false
null
0
o81cuom
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o81cuom/
false
2
t1_o81cu5b
Thanks, I have to check this out.
1
0
2026-03-01T11:32:06
boutell
false
null
0
o81cu5b
false
/r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o81cu5b/
false
1
t1_o81ctqj
Thank you. Have you tried it against qwen3:14b-q8_0, which I'm specifically testing here?
1
0
2026-03-01T11:31:59
donatas_xyz
false
null
0
o81ctqj
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o81ctqj/
false
1
t1_o81cnof
Except it can’t “see”, which can be more important for those who need to implement, let’s say, from the Figma MCP
1
0
2026-03-01T11:30:26
beefgroin
false
null
0
o81cnof
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o81cnof/
false
1
t1_o81cgph
“You’re absolutely right! That wasn’t an IRGC barracks, [it was a school!](https://www.bbc.com/news/articles/c1l7rvqq51eo) If you want, I can give some tips about target selection for the future.”
18
0
2026-03-01T11:28:40
SableSnail
false
null
0
o81cgph
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81cgph/
false
18
t1_o81cg5v
I really struggle without a key here 
1
0
2026-03-01T11:28:31
ZachCope
false
null
0
o81cg5v
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o81cg5v/
false
1
t1_o81cehc
Just do it as YOUR hobby. If you really want to do something for your family, you need to ask first, where their pain points and needs are and then find something to address those needs. Just because you think something is useful doesn’t mean your whole family has the same opinion.
1
0
2026-03-01T11:28:06
TBT_TBT
false
null
0
o81cehc
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81cehc/
false
1
t1_o81cakl
Qwen 3.5-35B-A3B failed as hard as the rest writing bash scripts for me. [shrug]
1
0
2026-03-01T11:27:05
crantob
false
null
0
o81cakl
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81cakl/
false
1
t1_o81c7fp
I think they really meant Iraq. They’re (correctly) implying that ChatGPT would bomb Iraq instead of Iran, given the reliability of the tool.
13
0
2026-03-01T11:26:16
DarthSidiousPT
false
null
0
o81c7fp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81c7fp/
false
13
t1_o81bzw2
I've been using multiple models including Codex and switched to the Qwen 3.5-35B-A3B model after I ran out of OAuth tokens, and it's been amazing. It literally built a skill that Codex wasn't even able to do with the entire token limit. Lightning fast as well!
1
0
2026-03-01T11:24:21
Direct_Major_1393
false
null
0
o81bzw2
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81bzw2/
false
1
t1_o81bzik
Which MCP server is stateful? Wouldn't that be an anti-pattern?
1
0
2026-03-01T11:24:15
wahnsinnwanscene
false
null
0
o81bzik
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81bzik/
false
1
t1_o81bwo1
The orchestrator being the bottleneck matches what I've seen too — it carries the heaviest state-tracking load across the whole chain. One pattern that's helped in my setups: instead of committing to one model size for all agents, run the workers on a lighter model and only escalate to the bigger one when the lighter ...
2
0
2026-03-01T11:23:32
Intelligent-Job8129
false
null
0
o81bwo1
false
/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o81bwo1/
false
2
t1_o81bwac
This is exactly the kind of use case multi-agent orchestration was built for. I’ve been building Cognithor — an open-source Agent Operating System (85k+ lines of code) that already solves a lot of the infrastructure challenges you’re describing: ∙ Multi-agent orchestration with DAG-based workflows — your “Arena” phase...
0
0
2026-03-01T11:23:26
Competitive_Book4151
false
null
0
o81bwac
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o81bwac/
false
0
t1_o81bq9k
Europe, with its multiple attempts to pass chat control laws? I feel my data is safer if "the enemy" has it than if my government does. In my European country, there's still a political police force, and that's not going to change; it's just going to be denied.
3
0
2026-03-01T11:21:53
lasizoillo
false
null
0
o81bq9k
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81bq9k/
false
3
t1_o81bp1t
This post really made me laugh. It's normal that they don't want it.
1
0
2026-03-01T11:21:33
Adventurous-Paper566
false
null
0
o81bp1t
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81bp1t/
false
1
t1_o81botx
OP should be hosting an API where everyone in the family has their own client and keeps their own data.
5
0
2026-03-01T11:21:29
a_beautiful_rhind
false
null
0
o81botx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81botx/
false
5
t1_o81bjp9
It’s not there
5
0
2026-03-01T11:20:09
alexx_kidd
false
null
0
o81bjp9
false
/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o81bjp9/
false
5
t1_o81bie5
you should do a feat with this guy: watch?v=ZFHnbozz7b4
1
0
2026-03-01T11:19:49
tassa-yoniso-manasi
false
null
0
o81bie5
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81bie5/
false
1
t1_o81bi2a
I've been using the 397B model in both the UD-Q4_K_XL and MXFP4_MOE formats for a few days now, and I noticed that the UD-Q4_K_XL version sometimes falls into infinite reasoning loops. For example, if I ask it not to include a specific word in the output, it starts checking every single word in English until the contex...
1
0
2026-03-01T11:19:44
Doomslayer606
false
null
0
o81bi2a
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o81bi2a/
false
1
t1_o81bdin
Can you smart guys figure out a way to combine a base model with pluggable 'experts'? I'd like to be able to take a base model and then train and attach domain-specific experts. We could get more efficiency with something similar to this idea.
1
0
2026-03-01T11:18:34
crantob
false
null
0
o81bdin
false
/r/LocalLLaMA/comments/1r8snay/ama_with_stepfun_ai_ask_us_anything/o81bdin/
false
1
t1_o81b9b1
27B is too large to use comfortably, and the quality advantage might not even be noticeable. The 35B only has 3B active parameters, so it runs very fast, and I can toggle the reasoning mode off whenever it’s not needed. Unfortunately, I don’t have enough RAM to test the 128Ba10b model, but I’m blown away by the 35B ver...
1
0
2026-03-01T11:17:28
RaDDaKKa
false
null
0
o81b9b1
false
/r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o81b9b1/
false
1
t1_o81b5h2
This is about KV cache quantization (done locally), not weights quantization (what you download).
2
0
2026-03-01T11:16:26
Manamultus
false
null
0
o81b5h2
false
/r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o81b5h2/
false
2
t1_o81b43c
[removed]
1
0
2026-03-01T11:16:05
[deleted]
true
null
0
o81b43c
false
/r/LocalLLaMA/comments/1rftlmm/vellium_v04_alternative_simplified_ui_updated/o81b43c/
false
1
t1_o81ay24
it works but is slow...way slower than interacting with the terminal or the api locally. Tried with Phi3 and Llama3.1
1
0
2026-03-01T11:14:32
AralphNity
false
null
0
o81ay24
false
/r/LocalLLaMA/comments/1nj758c/can_i_use_cursor_agent_or_similar_with_a_local/o81ay24/
false
1
t1_o81atez
> I gave a try to zeroclaw agent (instead of the bloated and overhyped one). After a few hours of fuckery with configs it's finally useful. /u/Vaddieg can you please share your configs?
1
0
2026-03-01T11:13:20
hideo_kuze_
false
null
0
o81atez
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o81atez/
false
1
t1_o81advm
Your examples show “temperture”, which probably isn’t doing much!
4
0
2026-03-01T11:09:15
coder543
false
null
0
o81advm
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o81advm/
false
4
t1_o81ac7z
Ollama uses a poor llama.cpp wrapper. I like LM Studio because it's easy to use and only a little behind llama.cpp.
1
0
2026-03-01T11:08:48
lemondrops9
false
null
0
o81ac7z
false
/r/LocalLLaMA/comments/1rbmoi1/running_local_agents_with_ollama_was_easier_than/o81ac7z/
false
1
t1_o81abtt
You are just offloading it to UEFI Linux, which is basically a SoC... Cool as a proof of concept.
1
0
2026-03-01T11:08:42
Ikinoki
false
null
0
o81abtt
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81abtt/
false
1
t1_o81a9wi
It's a little unclear what you're trying to do and wish to achieve. Could you elaborate?
1
0
2026-03-01T11:08:11
Budget-Juggernaut-68
false
null
0
o81a9wi
false
/r/LocalLLaMA/comments/1rddftu/seeking_advice_ive_recently_tried_adding_vector/o81a9wi/
false
1
t1_o81a4wd
No way
1
0
2026-03-01T11:06:51
ParthProLegend
false
null
0
o81a4wd
false
/r/LocalLLaMA/comments/1rdptw8/more_qwens_will_appear/o81a4wd/
false
1
t1_o819v1v
Yes, it's a hobby. I'm saying this because I can relate very well: It would be even more fun if it was also useful to your loved ones, but be honest with yourself - the fun part is playing with expensive technology. Over a decade ago I made my mom an android app where she can track her sleep, to her exact specificatio...
2
0
2026-03-01T11:04:18
Netcob
false
null
0
o819v1v
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o819v1v/
false
2
t1_o819tjw
Running Linux Mint, LM Studio, 128 GB RAM, 3x 3090s and 3x 5060 Ti 16 GB. Slots: PCIe 4.0 x16, PCIe 4.0 x4, PCIe 3.0 x4, 3x PCIe 3.0 x1.
2
0
2026-03-01T11:03:54
lemondrops9
false
null
0
o819tjw
false
/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/o819tjw/
false
2
t1_o819kpj
I wish an eu company would jump up. Been looking at le chat.
1
0
2026-03-01T11:01:35
klenen
false
null
0
o819kpj
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o819kpj/
false
1
t1_o819jzg
Did the system hallucinate a threat and recommend a strike?
4
0
2026-03-01T11:01:24
danderzei
false
null
0
o819jzg
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o819jzg/
false
4
t1_o819d5t
What about in Ollama, is there a way to turn thinking off if I use models via Ollama? I don't know much about computers yet, and don't know what jinja is or basically how to do anything technical (other than the super specific stuff I learned to even learn how to create a modelfile and run it in ollama) so, sorry if t...
0
0
2026-03-01T10:59:35
DeepOrangeSky
false
null
0
o819d5t
false
/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o819d5t/
false
0
t1_o819cxs
I’ve not tested it but expect so, as it has ~3.5x more parameters, so even if they are slightly lossy with the Q4_K_S quant it’s going to be better. Also bear in mind I’m using the Q4_K_S quant, in case you thought the i1 meant a 1-bit quant.
2
0
2026-03-01T10:59:31
NaiRogers
false
null
0
o819cxs
false
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o819cxs/
false
2
t1_o8191p9
the fact that 25-30 people showed up to use your hosted qwen 3.5 says a lot. there's definitely demand, especially from people who can't run 35B models locally. the tricky part is sustainability. electricity and hardware costs add up fast when you're serving inference 24/7. most community hosting projects i've seen di...
1
0
2026-03-01T10:56:32
BreizhNode
false
null
0
o8191p9
false
/r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o8191p9/
false
1
t1_o8190sg
understood, makes sense
1
0
2026-03-01T10:56:17
alichherawalla
false
null
0
o8190sg
false
/r/LocalLLaMA/comments/1rb062y/have_you_ever_hesitated_before_typing_something/o8190sg/
false
1
t1_o818yhe
the hype to reality gap you mention is real. i've been running similar MCP setups and the biggest surprise was how many tool-calling tasks don't actually need a beefy local rig, a remote server handles most of them fine. curious if you've hit context window issues with qwen 2.5 32B quantized when chaining multiple too...
1
0
2026-03-01T10:55:41
BreizhNode
false
null
0
o818yhe
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o818yhe/
false
1
t1_o818suy
Have you tried 27b? What's the speed?
1
0
2026-03-01T10:54:13
Fast_Thing_7949
false
null
0
o818suy
false
/r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o818suy/
false
1
t1_o818lqt
Testing the shrouds on my Mi50s, since I always have some Arctic fans lying around to replace my current radial fan setup. My Mi50s are limited to 160W, with the P9 spinning at 2710 rpm and 24°C ambient. After running the cards for 30 minutes, I got 72°C on the die edge, 92°C on the hotspot, and 76°C on the HBM memory. Which i...
1
0
2026-03-01T10:52:22
MaddesJG
false
null
0
o818lqt
false
/r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o818lqt/
false
1
t1_o818lnl
Really appreciate the help!
1
0
2026-03-01T10:52:20
Motor_Mix2389
false
null
0
o818lnl
false
/r/LocalLLaMA/comments/1rg4rtg/i_have_a_5090_with_64gb_system_ram_is_there_a/o818lnl/
false
1
t1_o818k0f
Would be cool to have something like a BOINC project for finetuning LLMs. Many of these labs are hardware-constrained; the community could probably help. Byteshape's Devstral2 seems really good; their GGUF is much easier on hardware requirements.
1
0
2026-03-01T10:51:55
SE_Haddock
false
null
0
o818k0f
false
/r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o818k0f/
false
1
t1_o818h4g
Which yet again points to the real issue being how humans use technology. A shovel can be used as a murder weapon if someone's so inclined.
24
0
2026-03-01T10:51:08
skrshawk
false
null
0
o818h4g
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o818h4g/
false
24
t1_o818cn5
Hi Did you run qwen 32b on your dual 5060ti setup?
1
0
2026-03-01T10:49:58
Realistic-Science-87
false
null
0
o818cn5
false
/r/LocalLLaMA/comments/1r1qpdv/dual_rtx_5060_ti_32gb_pooled_vram_vs_single_rtx/o818cn5/
false
1
t1_o8189dk
If someone who downvoted my comment would explain why, I'd be surprised.
1
0
2026-03-01T10:49:07
crantob
false
null
0
o8189dk
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o8189dk/
false
1
t1_o8187j8
Thanks for testing! Interesting results
3
0
2026-03-01T10:48:37
noctrex
false
null
0
o8187j8
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8187j8/
false
3
t1_o81877x
u/NaiRogers Also curious
1
0
2026-03-01T10:48:32
Familiar_Wish1132
false
null
0
o81877x
false
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o81877x/
false
1
t1_o8181g7
Go for EPYC 7xx3 and SP3 board, you can find them used relatively cheap. AM4 will never work with RDIMMs.
1
0
2026-03-01T10:47:02
Much-Farmer-2752
false
null
0
o8181g7
false
/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/o8181g7/
false
1
t1_o8181gi
I don't think it's limited to Blackwell, as I've been having very similar issues with every quantization of 122B that I've downloaded since the day it was released. I even see this issue, albeit not as badly, using fp32 KV cache.
3
0
2026-03-01T10:47:02
plopperzzz
false
null
0
o8181gi
false
/r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o8181gi/
false
3
t1_o817xu7
Ollama params feel like decoration for the CLI; trying multiple options did absolutely nothing, and I don't want to dive into model files yet. Do you think I should move to llama.cpp? Does it help performance?
1
0
2026-03-01T10:46:06
Bashar-gh
false
null
0
o817xu7
false
/r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/o817xu7/
false
1
t1_o817wr8
Nice. The audit trail becomes useful fast once you can see which rules fire most in prod.
2
0
2026-03-01T10:45:49
BC_MARO
false
null
0
o817wr8
false
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o817wr8/
false
2
t1_o817trz
Did it install nextjs in the missile launch system?
1
0
2026-03-01T10:45:02
chupchap
false
null
0
o817trz
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o817trz/
false
1
t1_o817h0i
I know what sub we're in, but it's not pointless to ask about relative model capability. Too many of us here have lost sight of practicality. Jet planes exist and are fairly useful for seeing the world, yes. I'm also interested in the honda land vehicle. In this analogy we've got a new vehicle that can also fly pretty ...
1
0
2026-03-01T10:41:42
michaelsoft__binbows
false
null
0
o817h0i
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o817h0i/
false
1
t1_o817grs
Never knew about this llama-swap thing, will give it a try since it looks like it's an LLM backend that supports text, audio and images.
1
0
2026-03-01T10:41:38
Skystunt
false
null
0
o817grs
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o817grs/
false
1
t1_o817fy0
Not a chance they are using them for autonomous strikes. That would have an operator behind it. And data collection? That would not be done with AI...
1
0
2026-03-01T10:41:25
txmail
false
null
0
o817fy0
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o817fy0/
false
1
t1_o817f4s
[removed]
1
0
2026-03-01T10:41:12
[deleted]
true
null
0
o817f4s
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o817f4s/
false
1
t1_o817dlc
How Claude was used: "ELI5 how to configure a patriot guidance system?"
0
1
2026-03-01T10:40:47
vohltere
false
null
0
o817dlc
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o817dlc/
false
0
t1_o817da0
It should be a generation parameter, not a server parameter, but I don't think LMS handles it, at least not in the UI: https://preview.redd.it/5ffnha64xemg1.png?width=424&format=png&auto=webp&s=b1f37c880087db03e8e7142c05753d1af77f1ed1
1
0
2026-03-01T10:40:42
MitsotakiShogun
false
null
0
o817da0
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o817da0/
false
1
t1_o8178pm
No
2
0
2026-03-01T10:39:29
Kal-LZ
false
null
0
o8178pm
false
/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/o8178pm/
false
2
t1_o8176j9
that would be awesome. I'll take what I can get. I finally got LTX2 running and lipsync is definitely cool, but it does not have good human anatomy understanding.
2
0
2026-03-01T10:38:54
michaelsoft__binbows
false
null
0
o8176j9
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8176j9/
false
2
t1_o8176cd
You don't need an AI for that. [kontoauszaug.app](http://kontoauszaug.app) processes PDF bank statements 100% locally in your browser without any contact with servers.
1
0
2026-03-01T10:38:51
DrGogback
false
null
0
o8176cd
false
/r/LocalLLaMA/comments/1lvm3tl/offline_ai_for_sensitive_data_processing_like/o8176cd/
false
1
t1_o8172ra
Omg u/dabiggmoe2 Thanks for the dcp!!! I didn't know that this exists <3 <3 <3
1
0
2026-03-01T10:37:53
Familiar_Wish1132
false
null
0
o8172ra
false
/r/LocalLLaMA/comments/1rf4sl8/help_system_prompt_exception_when_calling/o8172ra/
false
1
t1_o8172f8
What if it actually saved innocent lives vs the non-ai-assisted approach ?
2
0
2026-03-01T10:37:48
KS-Wolf-1978
false
null
0
o8172f8
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8172f8/
false
2
t1_o816x7l
This feels less like "AI in warfare" and more like a governance gap. You can announce a vendor ban, but tools already embedded in operational workflows don't disappear overnight. The real issue is the mismatch between policy speed and system reality — that's where accountability starts to blur.
5
0
2026-03-01T10:36:23
Any_Satisfaction327
false
null
0
o816x7l
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o816x7l/
false
5
t1_o816wjp
You should use an existing unikernel, unless you just like reinventing the wheel, which is fine.
1
0
2026-03-01T10:36:13
bitmoji
false
null
0
o816wjp
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o816wjp/
false
1
t1_o816vpp
All AMD AM4 and AM5 motherboards only support unbuffered memory. If you physically insert RDIMM RAM into an AM4 motherboard, it will not boot; I do not recommend even trying. The best way is to look for a used EPYC 7xxx CPU and a used motherboard for it, or a CPU+motherboard combo. This would allow you to use your RDIMM...
6
0
2026-03-01T10:36:00
Lissanro
false
null
0
o816vpp
false
/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/o816vpp/
false
6
t1_o816sqq
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
1
2026-03-01T10:35:12
WithoutReason1729
false
null
0
o816sqq
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o816sqq/
true
1
t1_o816p5w
Thank you, this is really helpful, 12-15 degrees sounds really amazing! Just as a reference, what's the average temperature that your 3090 now runs on at load?
1
0
2026-03-01T10:34:16
doesitoffendyou
false
null
0
o816p5w
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o816p5w/
false
1
t1_o816nrq
i like scam hypeman too
2
0
2026-03-01T10:33:52
HeftyAeon
false
null
0
o816nrq
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o816nrq/
false
2
t1_o8162qv
They are intimidated by your extraordinary level of conversational skills
3
0
2026-03-01T10:28:12
krystof24
false
null
0
o8162qv
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8162qv/
false
3
t1_o815xf5
And for that same reason I prefer Jinping to Sam
14
0
2026-03-01T10:26:48
lasizoillo
false
null
0
o815xf5
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o815xf5/
false
14
t1_o815vym
It's more surprising no-one else has made the connection "flooding the zone with shit" : "slop"
1
0
2026-03-01T10:26:25
Not_your_guy_buddy42
false
null
0
o815vym
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o815vym/
false
1
t1_o815tu7
Nice work on the benchmarking! For production security workflows, you may want to check out how Checkmarx handles AI-generated code analysis; they've built some interesting approaches for validating LLM outputs against real vulnerability patterns without the privacy concerns.
1
0
2026-03-01T10:25:51
TurnoverEmergency352
false
null
0
o815tu7
false
/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o815tu7/
false
1
t1_o815j88
The only way open ai doesn't implode now is if the government buys it with monopoly money and raises inflation...again.
1
0
2026-03-01T10:22:59
Complete_Lurk3r_
false
null
0
o815j88
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o815j88/
false
1
t1_o815g46
[removed]
1
0
2026-03-01T10:22:08
[deleted]
true
null
0
o815g46
false
/r/LocalLLaMA/comments/1q2sfwx/elevenlabs_is_killing_my_budget_what_are_the_best/o815g46/
false
1
t1_o815cgw
Naah Bro ,
-2
0
2026-03-01T10:21:08
EquivalentGuitar7140
false
null
0
o815cgw
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o815cgw/
false
-2
t1_o815by2
I also have similar questions. I'm actively looking for a way to get more out of copilot without shelling out hundreds of dollars on subscriptions. I spun up qwen coder 3.5 30b and I could not get it to work with copilot. I had similar issues with copilot and my azure foundry models, but eventually it just started work...
0
0
2026-03-01T10:21:00
OneEyedSnakeOil
false
null
0
o815by2
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o815by2/
false
0
t1_o815a2q
Not yet in production but testing Qwen 3.5 now. The MOE variants look very promising - reportedly 10x faster inference for similar quality. Planning to swap out the 2.5 32B dense model for a 3.5 MOE in the next couple weeks. Will update the post once I have real benchmarks from my workflow.
1
0
2026-03-01T10:20:29
EquivalentGuitar7140
false
null
0
o815a2q
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o815a2q/
false
1
t1_o815904
Thanks for this, very well broken down. I knew the one-size-fits-all approach wasn't the way forward. I'm trying to convince my manager and tech lead to start letting us use AI more within our team workflow, for things like DevOps etc, but ATM they're very reluctant to, as they're pretty old school and set in their ways, so...
1
0
2026-03-01T10:20:11
Livid_Salary_9672
false
null
0
o815904
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o815904/
false
1
t1_o8158ly
You could try raising the repeat penalty. I'm not sure how to do that in Ollama, but it's easy in llama.cpp and shouldn't be hard. Alternatively you could try a non-thinking variant like Qwen3 4B 2507 Instruct.
1
0
2026-03-01T10:20:05
12bitmisfit
false
null
0
o8158ly
false
/r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/o8158ly/
false
1
t1_o8156gd
They are the same; you just have some extra perks and higher limits.
1
0
2026-03-01T10:19:29
Tall-Ad-7742
false
null
0
o8156gd
false
/r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/o8156gd/
false
1
t1_o81561c
In my vision use case Gemma 3 27B beats both Qwen 3 VL 32B and 30B A3B, but I acknowledge that Qwen 3 is better at translations.
2
0
2026-03-01T10:19:22
IrisColt
false
null
0
o81561c
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81561c/
false
2
t1_o8155n3
Good questions. On model choice: Qwen 2.5 32B was the sweet spot when I started this setup - it fits comfortably on a single 3090 quantized and the code understanding is strong for its size. I'm actually in the process of testing Qwen 3.5 now based on some recommendations. Llama 70B is great but the VRAM requirements m...
1
0
2026-03-01T10:19:16
EquivalentGuitar7140
false
null
0
o8155n3
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o8155n3/
false
1
t1_o8154gn
It could be down to the size of the training dataset. If it's mostly English, then all semantic meanings map to English words or word fragments. Translation would be like mapping foreign language concepts to the same English fragments. With MOEs, you might not have enough activated layers to get a high fidelity transla...
1
0
2026-03-01T10:18:58
SkyFeistyLlama8
false
null
0
o8154gn
false
/r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o8154gn/
false
1
t1_o8153hp
Tried a few of my test prompts on UD Q8_K_XL. The extraction ones were correct, but for one of them (finance-related) even though it was answered correctly, the follow-up threw the model into a loop and I stopped it after 40K thinking tokens. On the plus side, GLM-5 had the exact same issue, even though Gemini 3 Think...
1
0
2026-03-01T10:18:42
MitsotakiShogun
false
null
0
o8153hp
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8153hp/
false
1
t1_o8152vp
[removed]
1
0
2026-03-01T10:18:32
[deleted]
true
null
0
o8152vp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8152vp/
false
1
t1_o8152a8
and what specific other models are you using for other processes. i could possibly help you find the best models for each case. For example you said you choose claude for writing docs for the codebase. but a better model for that might be Gemini 3 Flash which has 1M context window for $0.5 in and $2 out for the full 1M...
1
0
2026-03-01T10:18:22
Deep-Vermicelli-4591
false
null
0
o8152a8
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o8152a8/
false
1
t1_o8150ep
“I do this cause i like it”
2
0
2026-03-01T10:17:52
thefoxdecoder
false
null
0
o8150ep
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8150ep/
false
2
t1_o814wt7
Nothing wrong with being passionate about this stuff, but I feel the marketing of LLMs has gotten us fucked up. I've never seen a "solution" that's so desperately looking for a problem to solve. And since accuracy of the output can never be fully guaranteed due to how they work, it's a tough sell if you're critical eith...
4
0
2026-03-01T10:16:54
jonheartland
false
null
0
o814wt7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o814wt7/
false
4
t1_o814rfs
10x inference speed with the MOE variants is massive. That alone justifies the migration for our code review pipeline where we're doing hundreds of diffs per day. The dense 32B to MOE switch would cut our inference costs significantly while potentially improving quality. Definitely moving this up in our sprint backlog ...
1
0
2026-03-01T10:15:27
EquivalentGuitar7140
false
null
0
o814rfs
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o814rfs/
false
1
t1_o814ppv
> How come you are on Qwen 2.5 versus something newer? Same with Llama 70B. Mentioning "70B models" is a strong sign of AI-generated text.
9
0
2026-03-01T10:14:59
MelodicRecognition7
false
null
0
o814ppv
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o814ppv/
false
9
t1_o814m12
We stan a small-qwen
1
0
2026-03-01T10:14:01
Polite_Jello_377
false
null
0
o814m12
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o814m12/
false
1
t1_o814iot
I'm curious what other options there are. I assumed Ollama was the standard for running self-hosted AI. I've been reading up and not found anything that's faster or seemed to be.
2
0
2026-03-01T10:13:07
RowdyRidger19
false
null
0
o814iot
false
/r/LocalLLaMA/comments/1rbmoi1/running_local_agents_with_ollama_was_easier_than/o814iot/
false
2
t1_o814gd1
Anything that's NOT code? 
1
0
2026-03-01T10:12:31
MrCoolest
false
null
0
o814gd1
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o814gd1/
false
1
t1_o814fj2
i think they got the autonomous strikes and data collection down
2
0
2026-03-01T10:12:18
snoodoodlesrevived
false
null
0
o814fj2
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o814fj2/
false
2
t1_o814aqh
oh the 3.5 vs 2.5 series will be a very good change. better in every regard and considering you were using the dense 32B model you would get 10x the inference speed if you choose the MOE models from 3.5 series.
1
0
2026-03-01T10:11:01
Deep-Vermicelli-4591
false
null
0
o814aqh
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o814aqh/
false
1
t1_o8145q0
Thanks for sharing. How come you are on Qwen 2.5 versus something newer? Same with Llama 70B. What tools / frameworks do you use? Mostly thinking custom developed versus gluing together open source components. I started out mostly custom due to everything changing all the time, what was a good idea 6 months ago is o...
2
0
2026-03-01T10:09:40
UncleRedz
false
null
0
o8145q0
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o8145q0/
false
2