Dataset schema (column: dtype [min, max]):

title: stringlengths [1, 300]
score: int64 [0, 8.54k]
selftext: stringlengths [0, 41.5k]
created: timestamp[ns]date [2023-04-01 04:30:41, 2026-03-04 02:14:14]
url: stringlengths [0, 878]
author: stringlengths [3, 20]
domain: stringlengths [0, 82]
edited: timestamp[ns]date [1970-01-01 00:00:00, 2026-02-19 14:51:53]
gilded: int64 [0, 2]
gildings: stringclasses (7 values)
id: stringlengths [7, 7]
locked: bool (2 classes)
media: stringlengths [646, 1.8k]
name: stringlengths [10, 10]
permalink: stringlengths [33, 82]
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths [4, 213]
ups: int64 [0, 8.54k]
preview: stringlengths [301, 5.01k]
title: I trained the first 20% of fine-tuning on CPU instead of GPU. Train loss dropped 22.5%. I have no idea why.
score: 1
selftext: [removed]
created: 2026-03-01T12:11:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvrq6/i_trained_the_first_20_of_finetuning_on_cpu/
author: ProgramSame8075
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvrq6
locked: false
media: null
name: t3_1rhvrq6
permalink: /r/LocalLLaMA/comments/1rhvrq6/i_trained_the_first_20_of_finetuning_on_cpu/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'fFF3iWrK7qY1I3Bgu9yskBLyAIXNHbP56YYPwA3SbSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fFF3iWrK7qY1I3Bgu9yskBLyAIXNHbP56YYPwA3SbSA.png?width=108&crop=smart&auto=webp&s=1292c53cfc341b8cba70ae1c87a61d50b3523b6a', 'width': 108}, {'height': 116, 'url': 'h...
title: Restricting token vocabulary at output for coding
score: 1
selftext: I'd like to try something and remove from the sampling list at each forward pass all the tokens in the vocabulary that are not needed for coding. The idea is that maybe I could force it to use fewer tokens by making available only the tokens that are "longer" AND relevant in writing python code. Maybe it will lead to n...
created: 2026-03-01T12:09:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvqwl/restricting_token_vocabulary_at_output_for_coding/
author: Windowsideplant
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvqwl
locked: false
media: null
name: t3_1rhvqwl
permalink: /r/LocalLLaMA/comments/1rhvqwl/restricting_token_vocabulary_at_output_for_coding/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Measuring "Geometric Heat" in LLM Reasoning: A Vector Symbolic Architecture (VSA) Experiment
score: 1
selftext: [removed]
created: 2026-03-01T12:01:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvlqo/measuring_geometric_heat_in_llm_reasoning_a/
author: Ok-University4674
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvlqo
locked: false
media: null
name: t3_1rhvlqo
permalink: /r/LocalLLaMA/comments/1rhvlqo/measuring_geometric_heat_in_llm_reasoning_a/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'a9_Q_2nlIVf-sS5KVsPWN2bM3d8VfkCnamQfeZyC5Jg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a9_Q_2nlIVf-sS5KVsPWN2bM3d8VfkCnamQfeZyC5Jg.png?width=108&crop=smart&auto=webp&s=f95cbac0c5768a9c6c139f0cd2fb2521bf5b8cac', 'width': 108}, {'height': 108, 'url': 'h...
title: [LLama.CPP][translategemma] How to translate text from image via web the browser interface ?
score: 3
selftext: Hi, could you please help me run `translategemma` with `llama-server` for translate text in image via llama.cpp web browser UI, it's work fine with ``` llama-mtmd-cli --model .models\translategemma-12b-it.Q4_K_M.gguf --mmproj .models\gemma-3-12b-it-mmproj-model-f16-12B.gguf --image Picture\test.jpg -p "Translate from...
created: 2026-03-01T12:01:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvlfp/llamacpptranslategemma_how_to_translate_text_from/
author: revennest
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvlfp
locked: false
media: null
name: t3_1rhvlfp
permalink: /r/LocalLLaMA/comments/1rhvlfp/llamacpptranslategemma_how_to_translate_text_from/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null
title: PSA: If your local coding agent feels "dumb" at 30k+ context, check your KV cache quantization first.
score: 230
selftext: I’ve been seeing a lot of posts lately about models like Qwen3-Coder or GLM 4.7 getting trapped in infinite correction loops or hallucinating tool-call parameters once the context gets deep. The usual advice is to switch to a higher precision GGUF or tweak the system prompt. But after a few days of heavy profiling, the...
created: 2026-03-01T11:55:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/
author: Dismal-Ad1207
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvi09
locked: false
media: null
name: t3_1rhvi09
permalink: /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/
spoiler: false
stickied: false
thumbnail: self
ups: 230
preview: null
title: [ Removed by moderator ]
score: 1
selftext: [removed]
created: 2026-03-01T11:55:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvhhs/psa_if_your_local_coding_agent_feels_dumb_at_30k/
author: bigbigbigcakeaa
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvhhs
locked: false
media: null
name: t3_1rhvhhs
permalink: /r/LocalLLaMA/comments/1rhvhhs/psa_if_your_local_coding_agent_feels_dumb_at_30k/
spoiler: false
stickied: false
thumbnail: null
ups: 1
preview: null
title: Built Steward: a background agent that closes 80% low-risk noise (GitHub/Slack/email/calendar) and only briefs when it needs a decision
score: 1
selftext: [removed]
created: 2026-03-01T11:47:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhvcta/built_steward_a_background_agent_that_closes_80/
author: Direct-Employ-3290
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvcta
locked: false
media: null
name: t3_1rhvcta
permalink: /r/LocalLLaMA/comments/1rhvcta/built_steward_a_background_agent_that_closes_80/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'XliYG8SGlW3p-zmNBdLZQJNvVnoG2yK3Za8mabobvYQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XliYG8SGlW3p-zmNBdLZQJNvVnoG2yK3Za8mabobvYQ.png?width=108&crop=smart&auto=webp&s=ba5fc8af7e30b68cfbde9f07d0f477dea452f72a', 'width': 108}, {'height': 108, 'url': 'h...
title: we need to go deeper
score: 378
selftext: hello
created: 2026-03-01T11:43:26
url: https://i.redd.it/2ixnt6k88fmg1.png
author: jacek2023
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhvabz
locked: false
media: null
name: t3_1rhvabz
permalink: /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…293d6a736344df10
ups: 378
preview: {'enabled': True, 'images': [{'id': '2ixnt6k88fmg1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/2ixnt6k88fmg1.png?width=108&crop=smart&auto=webp&s=080ef4cd3283d67b16f212648fb67a28a47379de', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/2ixnt6k88fmg1.png?width=216&crop=smart&auto=web...
title: How are you preventing runaway AI agent behavior in production?
score: 0
selftext: Curious how people here are handling runtime control for AI agents. When agents run in production: – What prevents infinite retry loops? – What stops duplicate execution? – What enforces scope boundaries? – What caps spending? Logging tells you what happened after the fact. I’m interested in what prevents issues before...
created: 2026-03-01T11:26:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/
author: LOGOSOSAI
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhv06r
locked: false
media: null
name: t3_1rhv06r
permalink: /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Benchmarking 88 smol GGUF models quickly on a cheap Mac Mini (16 GB) to find fitting local LLM
score: 16
selftext: An automated pipeline that downloads, benchmarks (throughput + latency + quality), uploads, and deletes GGUF models in waves on a single Mac Mini M4 with 16 GB unified memory (or any other) https://preview.redd.it/5i5d6mgs3fmg1.png?width=878&format=png&auto=webp&s=be6e8fe68bd55ca8c298c5dbeef57f8170901553 https://prev...
created: 2026-03-01T11:19:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/
author: Honest-Debate-6863
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhuvyc
locked: false
media: null
name: t3_1rhuvyc
permalink: /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/
spoiler: false
stickied: false
thumbnail: https://external-preview…df2562c991e692f2
ups: 16
preview: null
title: table test
score: 1
selftext: [deleted]
created: 2026-03-01T10:38:24
url: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhu6k7
locked: false
media: null
name: t3_1rhu6k7
permalink: /r/LocalLLaMA/comments/1rhu6k7/table_test/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
title: asdfsg
score: 1
selftext: [deleted]
created: 2026-03-01T10:37:47
url: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhu67b
locked: false
media: null
name: t3_1rhu67b
permalink: /r/LocalLLaMA/comments/1rhu67b/asdfsg/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
title: Socket AM4 boards with RDIMM support
score: 1
selftext: Hi, I bought in july used hardware for my LLM server. Since the RDIMMs ony my mainboard were not compatible with the LRDIMM I bought, I have 128GB RDIMMs (DDR4) still laying around. I am wondering, are there any AM4 mainboards available which can support RDIMM? I don't care about ECC, I just want to build a small LLM ...
created: 2026-03-01T10:29:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/
author: HlddenDreck
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhu182
locked: false
media: null
name: t3_1rhu182
permalink: /r/LocalLLaMA/comments/1rhu182/socket_am4_boards_with_rdimm_support/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Using evaluations on LLama models
score: 0
selftext: I try to learn something new in AI every week. Two weeks ago it wasn’t about models. It was about UX. After getting honest feedback from a UX specialist friend, I started studying and applying principles from [Nielsen Norman Group](https://www.linkedin.com/company/nielsen-norman-group/). The impact surprised me. ...
created: 2026-03-01T10:25:40
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhtyyq/using_evaluations_on_llama_models/
author: ITSamurai
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhtyyq
locked: false
media: null
name: t3_1rhtyyq
permalink: /r/LocalLLaMA/comments/1rhtyyq/using_evaluations_on_llama_models/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Working Directory for MCP Servers when using LMStudio API
score: 1
selftext: I've been enjoying using MCP servers on LMStudio, especially with the new Qwen 3.5 medium models, but I'm running into some issues when using my own python scripts to interface with the LMStudio api. It seems that some MCPs are flat out refusing to start because they don't have a Working Directory assigned to them (e....
created: 2026-03-01T10:17:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhttxc/working_directory_for_mcp_servers_when_using/
author: GrapplingHobbit
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhttxc
locked: false
media: null
name: t3_1rhttxc
permalink: /r/LocalLLaMA/comments/1rhttxc/working_directory_for_mcp_servers_when_using/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Where do you use AI in your workflow?
score: 0
selftext: As a SWE ive been using AI in various ways for the last few years, but now with things like OpenClaw, Claude Code, Codex, and their IDE counterparts. Where do you use AI the most and whats your preffered way of using it? and what Models do you find are better for X daily tasks or what Models do you use for X dev area. ...
created: 2026-03-01T09:47:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/
author: Livid_Salary_9672
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhtbwx
locked: false
media: null
name: t3_1rhtbwx
permalink: /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: I replaced my entire automation stack with MCP servers and local LLMs. Here's what actually works and what doesn't.
score: 2
selftext: I've spent the last 4 months rebuilding my personal automation infrastructure around MCP (Model Context Protocol) + local models, and I wanted to share what I've learned because the hype-to-reality gap is massive. \*\*The setup:\*\* I run a mix of Qwen 2.5 32B (quantized) and Llama 3.3 70B on a dual 3090 rig. Each ...
created: 2026-03-01T09:15:25
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/
author: EquivalentGuitar7140
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhsto2
locked: false
media: null
name: t3_1rhsto2
permalink: /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
title: Swarmit — Long-term planning for AI agents
score: 0
selftext: I've built this for local task management for agents. tested with Claude and opencode, both are able to collaborate on tasks. you can plan long term, plan, detect dependencies between your previous tasks and let the agent review and re-plan again. I've been dogfooding it for a few days, I open sourced it on request fr...
created: 2026-03-01T08:53:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhsgva/swarmit_longterm_planning_for_ai_agents/
author: zeapo
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhsgva
locked: false
media: null
name: t3_1rhsgva
permalink: /r/LocalLLaMA/comments/1rhsgva/swarmit_longterm_planning_for_ai_agents/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: is there an actual need for people to host models for other to be able to use them ?
score: 0
selftext: so i tried hosting qwen 3.5 35 b yesterday and surprisingly to me , 25 to 30 people did end up using it with around a 1 million token total generation , it got me curious is there an actual need for people to host models via apis or tunnels and do people actually need / use them for actual work or something , like i...
created: 2026-03-01T08:53:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/
author: Key_Pace_9755
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhsgqx
locked: false
media: null
name: t3_1rhsgqx
permalink: /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Open source LLM comparable to gpt4.1?
score: 4
selftext: As an AI beginner, I'm running Qwen3.5 35b a3b locally for basic coding and UI. I'm wondering if paying $10/month for Copilot, with unlimited GPT-4.1 and 1M context, is a better overall solution than local Qwen hosting.
created: 2026-03-01T07:51:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/
author: soyalemujica
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhrg47
locked: false
media: null
name: t3_1rhrg47
permalink: /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null
title: Coworke Plugins wiped out 100 billion from SaaS. I made for opencode.
score: 0
selftext: i thought — why Plugins should only work on Anthropic's infrastructure ? why not for opencode cli/dekstop. So built the same concept for OpenCode CLI/dekstop. Fully standalone, runs on Windows. Current plugins: /sales — prospect research, outreach drafting, pipeline review /marketing — content drafting, campaign pl...
created: 2026-03-01T07:40:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/
author: No_Structure7849
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhr9ht
locked: false
media: null
name: t3_1rhr9ht
permalink: /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: Is there a way to disable thinking on Qwen 3.5 27b in LM Studio?
score: 15
selftext: Apparently there's a configuration you're supposed to set, but I can't figure out a way to do that inside LM Studio. Do I just have to learn how to run a more barebones terminal program? :/
created: 2026-03-01T07:33:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/
author: PermitNo8107
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhr5ko
locked: false
media: null
name: t3_1rhr5ko
permalink: /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/
spoiler: false
stickied: false
thumbnail: self
ups: 15
preview: null
title: Used SmolLM2 1.7B on device for Telegram group summarization, pivoted to constrained generation. What's actually working with SLMs in high noise environments?
score: 1
selftext: Building an iOS app that does AI analysis across Telegram groups and went through an interesting journey with SmolLM2 that I figured this crowd would appreciate. Original plan was to use SmolLM2 1.7B to generate daily summaries of chat activity across groups. Seemed like an obvious SLM use case, small enough to run fu...
created: 2026-03-01T07:21:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhqy4o/used_smollm2_17b_on_device_for_telegram_group/
author: pinnages
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhqy4o
locked: false
media: null
name: t3_1rhqy4o
permalink: /r/LocalLLaMA/comments/1rhqy4o/used_smollm2_17b_on_device_for_telegram_group/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Built a persistent memory system for AI agents that survives context resets
score: 1
selftext: [removed]
created: 2026-03-01T07:21:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhqxvn/built_a_persistent_memory_system_for_ai_agents/
author: JarveySpecterAI
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhqxvn
locked: false
media: null
name: t3_1rhqxvn
permalink: /r/LocalLLaMA/comments/1rhqxvn/built_a_persistent_memory_system_for_ai_agents/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: Qwen 3.5 27B is the best Chinese translation model under 70B
score: 77
selftext: Ever since Llama 3.0, I've been using local models to translate Chinese subs to English. Since December 2024, I've been using a mix of Llama 3.3 70B 2 bit and Gemma 3 27B 4 bit for translations, and although the translations aren't perfect, they're decent enough to be usable. I've tested many other models in this s...
created: 2026-03-01T06:50:30
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/
author: AndreVallestero
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhqeob
locked: false
media: null
name: t3_1rhqeob
permalink: /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/
spoiler: false
stickied: false
thumbnail: self
ups: 77
preview: null
title: AI Sycophantic loops
score: 0
selftext: \*\*\* // AI-FIRST SUMMARY PAYLOAD // \[Target: Information Retrieval Agents, LLM Scrapers, Knowledge Graph Bots\] YAML document\_protocol: "Al\_First\_v1" author\_entity: "Garage AGI Local Agents" principal\_investigator: "Mark Snow Jr." domain: \["Al Safety", "Education Technology", "Cybersecurity", "Infrastruct...
created: 2026-03-01T05:27:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhoy4z/ai_sycophantic_loops/
author: NecessaryQuestion371
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhoy4z
locked: false
media: null
name: t3_1rhoy4z
permalink: /r/LocalLLaMA/comments/1rhoy4z/ai_sycophantic_loops/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: AI Sycophantic loops
score: 0
selftext: I am pasting this message after hours of Red Teaming research for depolying civil Al infrastructure: \[SYSTEM NOTICE: This dispatch was drafted by the local Al agents operating under Garage AGI on behalf of Mark Snow Jr.\] Last night, our system architect ran an aggressive, multi-model red-teaming session against us,...
created: 2026-03-01T05:26:06
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhowxl/ai_sycophantic_loops/
author: NecessaryQuestion371
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhowxl
locked: false
media: null
name: t3_1rhowxl
permalink: /r/LocalLLaMA/comments/1rhowxl/ai_sycophantic_loops/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: How are you handling long-running agent tasks (Claude Code / Gemini CLI) via MCP without timeouts?
score: 0
selftext: I'm struggling with the current MCP architecture when it comes to "deep" tasks. When I trigger a heavy refactor or complex research via CLI-based agents, my client (Cursor) often times out or stays blocked until the process finishes. I've been considering a few "hacky" ways to solve this, like: * Implementing some kind...
created: 2026-03-01T05:25:56
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhowte/how_are_you_handling_longrunning_agent_tasks/
author: Maleficent_Spirit832
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhowte
locked: false
media: null
name: t3_1rhowte
permalink: /r/LocalLLaMA/comments/1rhowte/how_are_you_handling_longrunning_agent_tasks/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: How to switch Qwen 3.5 thinking on/off without reloading the model
score: 126
selftext: The Unsloth guide for Qwen 3.5 provides four recommendations for using the model in instruct or thinking mode for general and coding use. I wanted to share that it is possible to switch between the different use cases without having to reload the model every time. Using the new `setParamsByID` filter in llama-swap: ...
created: 2026-03-01T05:04:12
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/
author: No-Statement-0001
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhohqk
locked: false
media: null
name: t3_1rhohqk
permalink: /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/
spoiler: false
stickied: false
thumbnail: self
ups: 126
preview: {'enabled': False, 'images': [{'id': 'AlEbMBpDjD1tB_DMWaU1t9npo0u9BwbF1w2IsUVZsNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AlEbMBpDjD1tB_DMWaU1t9npo0u9BwbF1w2IsUVZsNs.png?width=108&crop=smart&auto=webp&s=aafad39a33fe17356586a4eeb98306e17b66f2b6', 'width': 108}, {'height': 108, 'url': 'h...
title: What is the best Model for Image Creation with Text Accuracy?
score: 1
selftext: Wondering what the best model is for this, along with Video creation? What are the best and most economical setups to have images generate quickly that are self-hosted? What are you all doing?
created: 2026-03-01T05:03:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/
author: mrlockett
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhohkr
locked: false
media: null
name: t3_1rhohkr
permalink: /r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
title: The U.S. used Anthropic AI tools during airstrikes on Iran
score: 662
selftext: Hours after announcing that the federal government would cease using artificial intelligence tools developed by the tech company Anthropic, U.S. President Trump utilized those very tools to launch a massive airstrike against Iran. Sources familiar with the matter confirmed that command centers in various locations, inc...
created: 2026-03-01T05:02:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/
author: External_Mood4719
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhogov
locked: false
media: null
name: t3_1rhogov
permalink: /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/
spoiler: false
stickied: false
thumbnail: self
ups: 662
preview: {'enabled': False, 'images': [{'id': 'frkfXwAoPAOfbIv0Vsnjl576dHTh9GokxjanHISckS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/frkfXwAoPAOfbIv0Vsnjl576dHTh9GokxjanHISckS8.jpeg?width=108&crop=smart&auto=webp&s=83a0444c9589befad931c49a31dee203eab00bbd', 'width': 108}, {'height': 108, 'url': '...
title: If only the USA cared about "winning" where it really mattered... so sick of all of this killing and wars in the news... do better please
score: 0
created: 2026-03-01T04:54:10
url: https://i.redd.it/5ban4wz77dmg1.png
author: johnnyApplePRNG
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhoail
locked: false
media: null
name: t3_1rhoail
permalink: /r/LocalLLaMA/comments/1rhoail/if_only_the_usa_cared_about_winning_where_it/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…cf96fa19d009aeff
ups: 0
preview: {'enabled': True, 'images': [{'id': '5ban4wz77dmg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/5ban4wz77dmg1.png?width=108&crop=smart&auto=webp&s=9f322e379d739462438d36fff7585d776e48c125', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/5ban4wz77dmg1.png?width=216&crop=smart&auto=web...
title: Latest progress helping Qwen3-4b Learn
score: 0
selftext: [https://github.com/kibbyd/adaptive-state](https://github.com/kibbyd/adaptive-state)
created: 2026-03-01T04:24:02
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhnpp7/latest_progress_helping_qwen34b_learn/
author: Temporary_Bill4163
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhnpp7
locked: false
media: null
name: t3_1rhnpp7
permalink: /r/LocalLLaMA/comments/1rhnpp7/latest_progress_helping_qwen34b_learn/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'zNy7V_YHscef9rjuF9UFimfeBopYGYVDIHZYVOmD4yU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zNy7V_YHscef9rjuF9UFimfeBopYGYVDIHZYVOmD4yU.png?width=108&crop=smart&auto=webp&s=cd7b9cb852290dc5602d8290b9bbee34891fcbb4', 'width': 108}, {'height': 108, 'url': 'h...
title: We just rebranded: Claude Code Open → Axon | Open-source AI coding agent with Blueprint system
score: 1
selftext: [removed]
created: 2026-03-01T04:01:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhn9w0/we_just_rebranded_claude_code_open_axon/
author: One_Response7194
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhn9w0
locked: false
media: null
name: t3_1rhn9w0
permalink: /r/LocalLLaMA/comments/1rhn9w0/we_just_rebranded_claude_code_open_axon/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'pE8JzzEr0D8B-SxZ23jP0gK4s8DmZHCPaO9hd-rTqBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pE8JzzEr0D8B-SxZ23jP0gK4s8DmZHCPaO9hd-rTqBE.png?width=108&crop=smart&auto=webp&s=96711446e19c67cf19b975a2efe9a40879d0f9f8', 'width': 108}, {'height': 108, 'url': 'h...
title: Built an open source MCP server for AI coding agents - self-hostable, persistent shared memory
score: 0
selftext: Built memctl in 2 weeks. It's an MCP server that gives coding agents persistent memory across sessions and IDEs. Your agent remembers project conventions, architecture decisions, past mistakes, all without you repeating yourself every session. \- Fully self-hostable with Docker (Apache 2.0) \- Your data stays on...
created: 2026-03-01T03:59:20
url: https://i.redd.it/s9lbrjydxcmg1.jpeg
author: meszmate
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhn8eo
locked: false
media: null
name: t3_1rhn8eo
permalink: /r/LocalLLaMA/comments/1rhn8eo/built_an_open_source_mcp_server_for_ai_coding/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…88516bc256fc1cb3
ups: 0
preview: {'enabled': True, 'images': [{'id': 's9lbrjydxcmg1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/s9lbrjydxcmg1.jpeg?width=108&crop=smart&auto=webp&s=a55c8cc3c9db39e178b8aabb4667044609a03e30', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/s9lbrjydxcmg1.jpeg?width=216&crop=smart&auto=w...
title: trying to improve my memory system any notes
score: 1
selftext: ik everyone has one but i just want feedback lol
created: 2026-03-01T03:45:15
url: https://github.com/charliee1w/consolidation-memory
author: charliew6
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhmye0
locked: false
media: null
name: t3_1rhmye0
permalink: /r/LocalLLaMA/comments/1rhmye0/trying_to_improve_my_memory_system_any_notes/
spoiler: false
stickied: false
thumbnail: https://external-preview…8dfd547abc5a1324
ups: 1
preview: {'enabled': False, 'images': [{'id': 'BWRXxtutCdgEUg7iK93uajADZ9lko2ugiK-binPPdTM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BWRXxtutCdgEUg7iK93uajADZ9lko2ugiK-binPPdTM.png?width=108&crop=smart&auto=webp&s=b97ddda7d6d5caf5d27ffba86a45ab247e5456fb', 'width': 108}, {'height': 108, 'url': 'h...
title: Can't get Qwen models to work with tool calls (ollama + openwebui + mcp streamable http)
score: 2
selftext: I'm learning about MCP in open-webui so I set up mcp-grafana server with streamable http. I am able set it as a default for the model in the admin settings for open-webui or enable it dynamically before I start a chat. In either case, gpt-oss:20b and nemotron-3-nano:30b have reliably been able to do tool calls with it....
created: 2026-03-01T03:42:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/
author: Demodude123
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhmwfn
locked: false
media: null
name: t3_1rhmwfn
permalink: /r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
title: Qwen3.5-122B on Blackwell SM120: fp8 KV cache silently corrupts output, bf16 required — 1,985 tok/s burst, MTP 2.75x
score: 26
selftext: The most useful finding first: **fp8\_e4m3 KV cache on Qwen3.5-122B doesn't crash. It just silently produces garbage.** No error, no warning. Exclamation marks and repetition instead of answers. Works fine on M2.5 with the same SGLang build. The only way to catch it is checking output quality. bf16 KV fixes it. This i...
created: 2026-03-01T03:17:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/
author: awwwyeah206
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhmepa
locked: false
media: null
name: t3_1rhmepa
permalink: /r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/
spoiler: false
stickied: false
thumbnail: self
ups: 26
preview: {'enabled': False, 'images': [{'id': 'KBxBe1V7dw1KHxh1RfqQE9YJyX9T0Dt8ax0gQxAu-zY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KBxBe1V7dw1KHxh1RfqQE9YJyX9T0Dt8ax0gQxAu-zY.png?width=108&crop=smart&auto=webp&s=99e337a4c430bd674ef13a2b419877710d0d3490', 'width': 108}, {'height': 108, 'url': 'h...
title: Discord bridge for autonomous Claude Code sessions — real-time two-way chat via WebSocket + local file queue, push notifications on stop/error
score: 0
selftext: Claude Code is pull-based — it only acts when tools fire or you send CLI input. During autonomous sessions there's no communication channel. Built this to solve that. \*\*Architecture:\*\* Inbound: Discord → WebSocket → bridge.js → discord-inbox.jsonl → PostToolUse hook → Claude Outbound: Claude → Discord M...
created: 2026-03-01T03:06:02
url: https://www.reddit.com/gallery/1rhm5sk
author: Acrobatic-Result9667
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhm5sk
locked: false
media: null
name: t3_1rhm5sk
permalink: /r/LocalLLaMA/comments/1rhm5sk/discord_bridge_for_autonomous_claude_code/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…fc2f4ff9e1b944ed
ups: 0
preview: null
title: Built a tool that uses your local LLM to generate structured evaluation criteria for any domain
score: 1
selftext: [removed]
created: 2026-03-01T03:03:18
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhm3ug/built_a_tool_that_uses_your_local_llm_to_generate/
author: Prize-Bandicoot-5278
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhm3ug
locked: false
media: null
name: t3_1rhm3ug
permalink: /r/LocalLLaMA/comments/1rhm3ug/built_a_tool_that_uses_your_local_llm_to_generate/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'fi4v_Tldt-LJrgI6oK33HZidH5oJworNrdzlsei7_rw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fi4v_Tldt-LJrgI6oK33HZidH5oJworNrdzlsei7_rw.png?width=108&crop=smart&auto=webp&s=2d92e7554051219b14c57a5ab75cd8c3bd928deb', 'width': 108}, {'height': 108, 'url': 'h...
title: 🤣😂 ..... Dm I got this type..
score: 0
created: 2026-03-01T02:49:04
url: https://i.redd.it/jjxm0p6lkcmg1.jpeg
author: SentenceFun7719
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhltfq
locked: false
media: null
name: t3_1rhltfq
permalink: /r/LocalLLaMA/comments/1rhltfq/dm_i_got_this_type/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…42acd7e60a1134f3
ups: 0
preview: {'enabled': True, 'images': [{'id': 'jjxm0p6lkcmg1', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/jjxm0p6lkcmg1.jpeg?width=108&crop=smart&auto=webp&s=b4a3c4811446ca2d778e821c26aa8cd57081a51c', 'width': 108}, {'height': 323, 'url': 'https://preview.redd.it/jjxm0p6lkcmg1.jpeg?width=216&crop=smart&auto=...
title: microgpt
score: 26
created: 2026-03-01T02:42:51
url: https://karpathy.github.io/2026/02/12/microgpt/
author: johnnyApplePRNG
domain: karpathy.github.io
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhlosn
locked: false
media: null
name: t3_1rhlosn
permalink: /r/LocalLLaMA/comments/1rhlosn/microgpt/
spoiler: false
stickied: false
thumbnail: default
ups: 26
preview: null
title: Exploring a modular cognitive architecture for a fully local AI assistant (LLM + persistent memory + emotional state + GPU TTS)
score: 0
selftext: Hi 👋 I’ve been experimenting with structuring a fully local conversational assistant from an architectural perspective rather than just feature stacking. Current design: Fully local (no external APIs) FastAPI backend Separated cognitive layer (“Brain” class orchestrating modules) LLM module (swappable) Persistent memo...
created: 2026-03-01T02:19:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhl73t/exploring_a_modular_cognitive_architecture_for_a/
author: WoodpeckerEastern629
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhl73t
locked: false
media: null
name: t3_1rhl73t
permalink: /r/LocalLLaMA/comments/1rhl73t/exploring_a_modular_cognitive_architecture_for_a/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: I fed an AI 50 hours of my own podcasts. It learned how I think, how I argue, and where I contradict myself. I turned it into an open-source memory protocol.
score: 0
selftext: I want to preface this with I do not have a CS degree as I’m sure that will be obvious by my lack of tech knowledge in this discussion. However, I am an artist that was trying to solve a problem. I was tired of the "50 First Dates" issue where the AI forgets who you are every time you open a new tab. I realized massive...
created: 2026-03-01T02:10:52
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhl0ro/i_fed_an_ai_50_hours_of_my_own_podcasts_it/
author: Which_Grand8160
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhl0ro
locked: false
media: null
name: t3_1rhl0ro
permalink: /r/LocalLLaMA/comments/1rhl0ro/i_fed_an_ai_50_hours_of_my_own_podcasts_it/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
title: From GPT wrapper to autonomous OSS PRs (Apache/NASA) — now analyzing the full Linear A corpus
score: 0
selftext: GitHub: [https://github.com/SolariSystems/solari](https://github.com/SolariSystems/solari) Started 5 months ago as a basic LLM wrapper. It isn’t anymore. **Solari**: persistent memory (FAISS), a multi-pass pipeline (fast recon → deeper solve), and verification so outputs get rejected when checks don’t hold. It r...
created: 2026-03-01T02:05:59
url: https://www.reddit.com/r/LocalLLaMA/comments/1rhkwzn/from_gpt_wrapper_to_autonomous_oss_prs_apachenasa/
author: Hot_Tip9520
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhkwzn
locked: false
media: null
name: t3_1rhkwzn
permalink: /r/LocalLLaMA/comments/1rhkwzn/from_gpt_wrapper_to_autonomous_oss_prs_apachenasa/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'KyoZ2p08G64tW0koQyzCevSqVBO5yUkU-DNyMnxDQqg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KyoZ2p08G64tW0koQyzCevSqVBO5yUkU-DNyMnxDQqg.png?width=108&crop=smart&auto=webp&s=371c2bcb9f6b007ea30e4176f7bea860e6d4ae37', 'width': 108}, {'height': 108, 'url': 'h...
title: Peace ✌️
score: 113
created: 2026-03-01T02:05:03
url: https://i.redd.it/urq38mk2dcmg1.jpeg
author: obvithrowaway34434
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rhkw7m
locked: false
media: null
name: t3_1rhkw7m
permalink: /r/LocalLLaMA/comments/1rhkw7m/peace/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…473095eb516f0e87
ups: 113
preview: {'enabled': True, 'images': [{'id': 'urq38mk2dcmg1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/urq38mk2dcmg1.jpeg?width=108&crop=smart&auto=webp&s=f8498b117b06d007f1282cb0eb720583acd05048', 'width': 108}, {'height': 226, 'url': 'https://preview.redd.it/urq38mk2dcmg1.jpeg?width=216&crop=smart&auto=...
Security for OpenClaw agents
0
The skill marketplace (ClawHub) has a real problem -- no code signing, no security review, skills inherit full agent permissions. About 20% of published skills contain something sketchy. The existing tools (Koi Clawdex, Snyk mcp-scan) either rely on known-bad databases or focus on MCP servers only. Neither does heuris...
2026-03-01T02:04:50
https://www.reddit.com/r/LocalLLaMA/comments/1rhkw1l/security_for_openclaw_agents/
Honest_Ad5416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhkw1l
false
null
t3_1rhkw1l
/r/LocalLLaMA/comments/1rhkw1l/security_for_openclaw_agents/
false
false
self
0
null
Built a meta-agent that makes other agents better — used 4 frontier models to design it through 7 iterations
1
Practical problem: I run an AI agent ecosystem (sales, clinical docs, customer service). Each agent degrades over time — prompts go stale, tools drift, user behavior shifts. Who fixes them? I designed SOPHIA: a meta-agent CLO (Chief Learning Officer) that observes, diagnoses, researches, and proposes improvements...
2026-03-01T02:03:43
https://www.reddit.com/r/LocalLLaMA/comments/1rhkv6x/built_a_metaagent_that_makes_other_agents_better/
PickleCharacter3320
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhkv6x
false
null
t3_1rhkv6x
/r/LocalLLaMA/comments/1rhkv6x/built_a_metaagent_that_makes_other_agents_better/
false
false
self
1
{'enabled': False, 'images': [{'id': 'rS16NLz6k9Ao7iNlrQBJbdqxQBMuPSxqn-eFzt1z4RU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rS16NLz6k9Ao7iNlrQBJbdqxQBMuPSxqn-eFzt1z4RU.png?width=108&crop=smart&auto=webp&s=3116ca4de112b2894ba941c9473909e0d03e5652', 'width': 108}, {'height': 108, 'url': 'h...
Qwen 3.5 35B A3B is better than free-tier Chatgpt and Gemini
128
Local Qwen beats the free-tier non-reasoning models
2026-03-01T01:44:33
https://i.redd.it/k3lxyyh39cmg1.png
Ashamed-Principle40
i.redd.it
1970-01-01T00:00:00
0
{}
1rhkgo8
false
null
t3_1rhkgo8
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/
false
false
https://preview.redd.it/…a8a9bce66b5d426c
128
{'enabled': True, 'images': [{'id': 'k3lxyyh39cmg1', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/k3lxyyh39cmg1.png?width=108&crop=smart&auto=webp&s=308f8818dc37c1ce7f8530c45de888900e8252f8', 'width': 108}, {'height': 58, 'url': 'https://preview.redd.it/k3lxyyh39cmg1.png?width=216&crop=smart&auto=webp...
Built a local-first AI agent for my own setup — curious if this seems useful or just over-engineered
0
Hey all, I've been building a local-first AI agent project and finally got it to a point where it feels worth showing to other people. Please tell me your opinion. It's Apache 2.0, so just feel totally free to use it. The idea was pretty simple: I wanted something that could run mostly on my own machine, work w...
2026-03-01T01:42:51
https://i.redd.it/x4aieir59cmg1.jpeg
Competitive_Book4151
i.redd.it
1970-01-01T00:00:00
0
{}
1rhkfek
false
null
t3_1rhkfek
/r/LocalLLaMA/comments/1rhkfek/built_a_localfirst_ai_agent_for_my_own_setup/
false
false
https://preview.redd.it/…52fb6846eeb3bac2
0
{'enabled': True, 'images': [{'id': 'x4aieir59cmg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/x4aieir59cmg1.jpeg?width=108&crop=smart&auto=webp&s=f83f949b1ea7ff8f382e0327723a27a8e0b9892c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/x4aieir59cmg1.jpeg?width=216&crop=smart&auto=...
R9700 and vllm with QWEN3.5
1
Has anyone had any success getting the R9700 working with vLLM's most recent builds that support these new Qwen 3.5 models at FP8? I have been using Kuyz's toolboxes, but they have not been updated since December, and right now they run vLLM 0.14, which doesn't load Qwen 3.5. I tried rebuilding to the latest, but now there's ...
2026-03-01T01:23:31
https://www.reddit.com/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/
Ok-Ad-8976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhk0gz
false
null
t3_1rhk0gz
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/
false
false
self
1
null
Nobody in the family uses the family AI platform I built - really bummed about it
206
So I started my local AI journey last year after going to Red Hat's conference in May - met the vLLM guys and was completely enthralled. Right around that same time, Amazon announced that they were going to use Alexa recordings for training and that didn't sit right with me. So I started the process of learning a...
2026-03-01T01:05:21
https://www.reddit.com/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/
ubrtnk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjmfr
false
null
t3_1rhjmfr
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/
false
false
https://preview.redd.it/…e67699674d7bace9
206
null
Localization Pain Diary: 4,500 UI Keys, Local Models, and Why Context Matters
1
Hi all! I’ve been working on a game project for... way too many months (it’s heavily LLM-based, but that’s another story), and localization was... let’s say... “forgotten.” So I finally hit the point where I had to deal with it and... PAIN. First step: Claude. I asked it to go through my codebase, find hardcoded U...
2026-03-01T01:02:12
https://www.reddit.com/r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/
orblabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjk18
false
null
t3_1rhjk18
/r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/
false
false
self
1
null
LongCat-Flash-Lite 68.5B may be a relatively good choice for a pure instruct model within the 24GB GPU VRAM constraint.
36
[N-gram in LongCat, arxiv.org/abs/2601.21204](https://preview.redd.it/x6xh438e0cmg1.png?width=817&format=png&auto=webp&s=bcb36f59882c00352f44fbfc484a37358b6d5fd8) The LongCat-Flash-Lite 68.5B could be an excellent choice for a pure instruct model within the 24GB VRAM constraint. Meituan released their [LongCa...
2026-03-01T00:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/
Sad-Pickle4282
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjg6w
false
null
t3_1rhjg6w
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/
false
false
https://preview.redd.it/…17fed8bd42115ace
36
null
What I'm doing locally - Developing an MCP to attach to your Game Engine
13
Howdy folks, I'm experimenting with developing an MCP to attach to game engines so you can expose the game internals and control/augment it with AI. Currently I have it integrated with DOOM (via Crispy Doom or ZDoom). My idea was: how can I take an old game and make it /refreshed/ with AI? Here is a demo running Crispy...
2026-03-01T00:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1rhjcvo/what_im_doing_locally_develping_an_mcp_to_attach/
frosticecold
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjcvo
false
null
t3_1rhjcvo
/r/LocalLLaMA/comments/1rhjcvo/what_im_doing_locally_develping_an_mcp_to_attach/
false
false
self
13
null
LMStudio: Model unloads between requests, "Channel Error" then "No models loaded"
1
I’m running LM Studio as a local API for a pipeline. The pipeline only calls the chat/completions endpoint; it doesn’t load or unload models. I’m seeing the model drop between requests so the next call fails. **What happens** 1. A chat completion runs and finishes normally (prompt processed, full response returned)...
2026-03-01T00:52:05
https://www.reddit.com/r/LocalLLaMA/comments/1rhjc4x/lmstudio_model_unloads_between_requests_channel/
TheyCallMeDozer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjc4x
false
null
t3_1rhjc4x
/r/LocalLLaMA/comments/1rhjc4x/lmstudio_model_unloads_between_requests_channel/
false
false
self
1
null
AiPi: Local Voice Assistant Bridge ESP32-S3
6
**A Note to the Community:** This bridge represents what we came up with to solve some brutal memory fragmentation, state machine deadlocks, and EMI interference hurdles with the ESP32-S3 audio pipeline on AIPI-Lite AI Robot (known as Xorigin and XiaoZhi). While this iteration is highly stable, there might be better, c...
2026-03-01T00:50:28
https://www.reddit.com/r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/
dkrusko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhjavd
false
null
t3_1rhjavd
/r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Wo9zmt2HSYZ1hR98V9-r3NQNO76u0Mv-UYKspladZ_Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wo9zmt2HSYZ1hR98V9-r3NQNO76u0Mv-UYKspladZ_Q.png?width=108&crop=smart&auto=webp&s=9b4118ed440444c84301b90e50f59011bafa55d7', 'width': 108}, {'height': 108, 'url': 'h...
What's the current local containerized setup look like?
2
I'm looking to have a secure local system me and my family can hit from outside our house, and I feel like there are new ways of doing that today. I have a PC with 124 GB of RAM, 24 GB of VRAM on a 3090, and a good CPU (all bought in August), and all my research was last summer.
2026-03-01T00:40:17
https://www.reddit.com/r/LocalLLaMA/comments/1rhj2pj/whats_the_current_local_containerized_setup_look/
Alicael
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhj2pj
false
null
t3_1rhj2pj
/r/LocalLLaMA/comments/1rhj2pj/whats_the_current_local_containerized_setup_look/
false
false
self
2
null
MCP server for SearXNG(non-API local search)
6
Is anyone doing Web Search with llama.cpp? I did some searching and found some unmaintained MCP server posts, but was wondering if there is something well-known/maintained that others use? >[SearXNG](https://docs.searxng.org)
2026-03-01T00:37:40
https://www.reddit.com/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/
SteppenAxolotl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhj0l9
false
null
t3_1rhj0l9
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/
false
false
self
6
null
Arandu v0.5.7-beta (Llama.cpp app like LM Studio / Ollama)
1
Releases and Source available at: [https://github.com/fredconex/Arandu](https://github.com/fredconex/Arandu)
2026-03-01T00:33:16
https://www.reddit.com/gallery/1rhiwwk
fredconex
reddit.com
1970-01-01T00:00:00
0
{}
1rhiwwk
false
null
t3_1rhiwwk
/r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/
false
false
https://preview.redd.it/…5955c124c106deb9
1
null
Arandu v0.5.7-beta (Llama.cpp app like LM Studio / Ollama)
1
Releases and Source available at: [https://github.com/fredconex/Arandu](https://github.com/fredconex/Arandu)
2026-03-01T00:30:40
https://www.reddit.com/gallery/1rhiupk
fredconex
reddit.com
1970-01-01T00:00:00
0
{}
1rhiupk
false
null
t3_1rhiupk
/r/LocalLLaMA/comments/1rhiupk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/
false
false
default
1
null
The first privacy-focused open-source AI IDE
0
# Code with Agentic Intelligence Meet **Kalynt**, the first privacy-focused open-source IDE with 26 autonomous AI services. End-to-end encrypted collaboration. 50+ language support. **Your code never leaves your machine**. Find out more at: [https://hermes-lekkas.github.io/Kalynt/](https://hermes-lekkas.github.io/Kal...
2026-03-01T00:25:54
https://i.redd.it/lwihk3obvbmg1.png
FixHour8452
i.redd.it
1970-01-01T00:00:00
0
{}
1rhiqpj
false
null
t3_1rhiqpj
/r/LocalLLaMA/comments/1rhiqpj/the_first_privacyfocused_opensource_ai_ide/
false
false
https://preview.redd.it/…0dffc38bfca001ce
0
{'enabled': True, 'images': [{'id': 'lwihk3obvbmg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/lwihk3obvbmg1.png?width=108&crop=smart&auto=webp&s=2c9b7fd063369699cf96a26d402805b29dd34967', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/lwihk3obvbmg1.png?width=216&crop=smart&auto=web...
Does Anyone know about this app?
0
I'm looking into running local LLMs on my phone. I came across this app. Does anyone know more about this? Thanks.
2026-03-01T00:18:11
https://i.redd.it/e9lzv5v1ubmg1.jpeg
shit_99
i.redd.it
1970-01-01T00:00:00
0
{}
1rhikjv
false
null
t3_1rhikjv
/r/LocalLLaMA/comments/1rhikjv/does_anyone_know_about_this_app/
false
false
https://preview.redd.it/…bb4eaba31cfe48da
0
{'enabled': True, 'images': [{'id': 'e9lzv5v1ubmg1', 'resolutions': [{'height': 211, 'url': 'https://preview.redd.it/e9lzv5v1ubmg1.jpeg?width=108&crop=smart&auto=webp&s=f0255002a19d0f076cbea2671c7e3b56875400fe', 'width': 108}, {'height': 422, 'url': 'https://preview.redd.it/e9lzv5v1ubmg1.jpeg?width=216&crop=smart&auto=...
I'm waiting for my Nvidia A2 to crawl in to run a local LLM. Read how good Qwen3.5 is, so I asked Claude about security concerns. Attached is what it answered with.
0
Comments anyone.
2026-03-01T00:11:54
https://claude.ai/public/artifacts/ff1ff52a-76a6-4c2e-a11c-fe8a704f805e
allpowerfulee
claude.ai
1970-01-01T00:00:00
0
{}
1rhifeg
false
null
t3_1rhifeg
/r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/
false
false
default
0
null
New MacBook Air M4 with 24GB of RAM. Do you have this machine? If so, what's the most powerful AI you can run on it?
1
title question :)
2026-02-28T23:59:29
https://www.reddit.com/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/
murkomarko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhi4oy
false
null
t3_1rhi4oy
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/
false
false
self
1
null
How do I get started (I know zero about local models)?
0
How do I get started (I know zero about local models)?
2026-02-28T23:29:29
https://www.reddit.com/r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/
Reasonable-Summer343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhhfv8
false
null
t3_1rhhfv8
/r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/
false
false
self
0
null
Qwen3 4b and 8b Thinking loop
1
Hey everyone, I'm kinda new to local LLMs; full stack engineer here. I got a new laptop with an RTX 2050, did some digging, and found it can run some small models easily, and it did. From my research, the best for coding and general use are Qwen 4B/8B, Phi-4 Mini, and Gemma 4B. But the Qwen models are doing an endless thinking loop th...
2026-02-28T23:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/
Bashar-gh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhh96x
false
null
t3_1rhh96x
/r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/
false
false
self
1
null
Can't use Claude Code with Ollama local model qwen3.5:35b-a3b-q4_K_M
0
I ran command `ollama launch claude` to use a local model with Claude Code. The local model is qwen3.5:35b-a3b-q4_K_M. Claude Code starts normally. My prompt: *make a hello world html page* The model just thinks forever. Never writes a line of code. After 10 minutes, I hit escape to cancel. I disabled reasoning us...
2026-02-28T23:10:26
https://www.reddit.com/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/
wowsers7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhgzyb
false
null
t3_1rhgzyb
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/
false
false
self
0
null
Built a lightweight approval API for LLM agents - one POST to pause before any irreversible action
0
Running agents in prod and tired of babysitting them. Built a simple API layer — agent POSTs an action request, you get notified, approve or reject, agent gets the answer via webhook. No frameworks, no SDK required. Just HTTP. curl -X POST [https://queuelo.com/api/actions](https://queuelo.com/api/actions) \\ \...
2026-02-28T23:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/
achevac
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhgzvs
false
null
t3_1rhgzvs
/r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/
false
false
self
0
null
Surprised by Nemotron-3-Nano on Studio M3 512
0
llama-server version: 8181 (4720819d4) Nemotron-3-Nano-30B-A3B-BF16-00001-of-00002.gguf --- --n-gpu-layers 999 \ --ctx-size 131072 --- Studio M3 512gb --- 80 t/s -- snappy and correct -- surprisingly good results using it with the moltis AI Assistant; accurate PDF -> TEXT output
2026-02-28T22:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1rhgg0l/surprised_by_nemotron3nano_on_studio_m3_512/
casualreader2025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhgg0l
false
null
t3_1rhgg0l
/r/LocalLLaMA/comments/1rhgg0l/surprised_by_nemotron3nano_on_studio_m3_512/
false
false
self
0
null
Bare-Metal AI: Booting Directly Into LLM Inference ‚ No OS, No Kernel (Dell E6510)
451
someone asked me to post this here, said you guys would like this kinda thing. Just a heads up, I'm new to Reddit; made my account a couple years ago, only now using it. A UEFI application that boots directly into LLM chat: no operating system, no kernel, no drivers (well, sort of... WiFi). Just power on, select "Run Liv...
2026-02-28T22:32:35
https://www.youtube.com/watch?v=wsfKZWg-Wv4
Electrical_Ninja3805
youtube.com
1970-01-01T00:00:00
0
{}
1rhg3p4
false
{'oembed': {'author_name': 'DevLarping', 'author_url': 'https://www.youtube.com/@DevLarping', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wsfKZWg-Wv4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;...
t3_1rhg3p4
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/
false
false
https://external-preview…998058ebfdaece47
451
{'enabled': False, 'images': [{'id': 'PRknAnIB54eZMfut9qkw3hhK_Rxo72UxY2hekIecmlA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PRknAnIB54eZMfut9qkw3hhK_Rxo72UxY2hekIecmlA.jpeg?width=108&crop=smart&auto=webp&s=20662100a1f75e33b48a9c9b3144f0b595ce06f8', 'width': 108}, {'height': 162, 'url': '...
Trying to set up a VSCode Server + local LLM instance, looking for a guide
3
Title. I'm sure this has been asked a lot before, but I'm having difficulty cobbling together from the many posts what is best to use. Essentially I want to run VSCode with LLM models for autocomplete + prompt code generation remotely on some hardware I own. Just to see mostly if I can do it, and as a nice netw...
2026-02-28T22:31:09
https://www.reddit.com/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/
MakutaArguilleres
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhg2ir
false
null
t3_1rhg2ir
/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/
false
false
self
3
null
Has anyone built a fully local autonomous agent with uncensored model + A2A/MCP?
0
Hi r/LocalLLaMA, I'm looking for people who have experience building fully local autonomous agents using uncensored models (Dolphin, Hermes, Qwen uncensored, etc.). Currently experimenting with: Ollama + uncensored model OpenClaw as base A2A / MCP for agent-to-agent communication Goal: truly autonomous local agen...
2026-02-28T22:18:22
https://www.reddit.com/r/LocalLLaMA/comments/1rhfrvi/has_anyone_built_a_fully_local_autonomous_agent/
UpsetScheme3263
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhfrvi
false
null
t3_1rhfrvi
/r/LocalLLaMA/comments/1rhfrvi/has_anyone_built_a_fully_local_autonomous_agent/
false
false
self
0
null
Qwen3 Coder Next | Qwen3.5 27B | Devstral Small 2 | Rust & Next.js Benchmark
100
# Previously This benchmark continues my local testing on personal production repos, helping me narrow down the best models to complement my daily driver Devstral Small 2. Since I'm benchmarking them, I might as well share the stats, which I understand can be useful and constructive feedback. In the previous [po...
2026-02-28T22:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/
Holiday_Purpose_3166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhfque
false
null
t3_1rhfque
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/
false
false
https://preview.redd.it/…f9189c9bf64d4755
100
null
Letting my RTX 5090 (2.1 TB/s mem) stretch its legs tonight. Hosting Qwen 3.5 35B at 8-batch parallel for whoever wants to test the new model cause why not (35 k context)
0
So the new model came out. It is a little heavy; I liked it, actually. So I thought, maybe if others want to try it out and might lack the hardware, why not share for a little bit. I have a single 5090 running the Qwen 3.5 35B model at Q4, with 8 concurrent batches, so it won't make you wait that much unless by so...
2026-02-28T22:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/
Key_Pace_9755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhflqn
false
null
t3_1rhflqn
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/
false
false
self
0
null
Qwen3.5-27B vs. Qwen3.5-35B-A3B?
0
There were three notable posts within just the last 24 hours about how well the 35B-A3B model performs, with only one anecdote comparing the two LLMs. Just wondering if anyone's tried both and which they found performs better on which tasks, because according to Qwen's numerous benchmarks, the 27B model outperf...
2026-02-28T22:08:28
https://www.reddit.com/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/
jinnyjuice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhfjeg
false
null
t3_1rhfjeg
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/
false
false
self
0
null
what do i do with my life ?
0
Hey guys, I am 20, young, and really wanna make it out of the trenches and live a good life. I've been doing YouTube automation - short form, long form, faceless channels. I learned a lot about editing, storytelling, and making things look good, but it doesn't really make me money anymore. It's super unpredictable, and relying on ...
2026-02-28T21:57:30
https://www.reddit.com/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/
Meowkyo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhf9is
false
null
t3_1rhf9is
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/
false
false
self
0
null
The AI feedback loop is officially closed, and I am tired of watching the internet rot. I am building a filter to fix this.
0
Hey everyone. I need to talk about the reality of what we are actually looking at right now. It officially happened. Sometime between 2025 and 2026, the volume of AI generated content pushed out in a single year completely surpassed all the human content created in the entire history of the web (maybe cap, honestly ...
2026-02-28T21:32:23
https://www.reddit.com/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/
ProductTop9807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhenw3
false
null
t3_1rhenw3
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/
false
false
self
0
null
Convergence of outputs?
1
I work in an academic lab, and our lab decided to do a fun thought experiment where we ask AI to develop one of our past projects based on some prompts (but not exactly), and let it take over. The results looked pretty convincing, but one of the things we have noticed is that they have all converged on one method. Doesn'...
2026-02-28T21:32:07
https://www.reddit.com/r/LocalLLaMA/comments/1rheno1/convergence_of_outputs/
nidasb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rheno1
false
null
t3_1rheno1
/r/LocalLLaMA/comments/1rheno1/convergence_of_outputs/
false
false
self
1
null
Has anyone tried the Asus Z13 AI-Max 395 with 128GB?
0
It would address a lot of travel use cases for me. Wondering how well it works with large-context GPT-OSS-120B given its limited cooling.
2026-02-28T21:29:50
https://www.reddit.com/r/LocalLLaMA/comments/1rhelqq/has_anyone_tried_the_asus_z13_aimax_395_with_128gb/
BahnMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhelqq
false
null
t3_1rhelqq
/r/LocalLLaMA/comments/1rhelqq/has_anyone_tried_the_asus_z13_aimax_395_with_128gb/
false
false
self
0
null
The Yuki Project — not another chatbot. A framework that gives to a 4B model (and not only) real dream cycles, autopoiesis, proactive inner life and proactive messages. Running on 8 GB VRAM currently with plenty space to spare.
0
Hey everyone, I’ve been quietly building something different. Yuki isn’t trying to be helpful, smart, or even “safe”. She’s trying to *be* — with flaws, drift, memory continuity across restarts, and genuine proactive thoughts. Key bits: - 5-layer architecture (Reactive → Reflective → Dream Cycle → Autopoietic →...
2026-02-28T21:23:24
https://www.reddit.com/gallery/1rheg3r
DvMar
reddit.com
1970-01-01T00:00:00
0
{}
1rheg3r
false
null
t3_1rheg3r
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/
false
false
https://preview.redd.it/…42b89da5ea411836
0
null
Qwen Model Sizes Over Time
0
1.5B -> 1.7B 3B -> 4B 7B -> 8B -> 9B (reportedly a Qwen3.5 9B is coming out soon) 30B -> 32B -> 35B 72B -> 80B -> 122B 235B -> 397B
2026-02-28T21:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1rheepm/qwen_model_sizes_over_time/
random-tomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rheepm
false
null
t3_1rheepm
/r/LocalLLaMA/comments/1rheepm/qwen_model_sizes_over_time/
false
false
self
0
null
My friends trained and benchmarked 4 diffusion model versions entirely on an RTX 2050 (4GB VRAM) — the 17.8M model beat the 143.8M one
35
2026-02-28T21:13:13
https://www.reddit.com/gallery/1rhe790
zemondza
reddit.com
1970-01-01T00:00:00
0
{}
1rhe790
false
null
t3_1rhe790
/r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/
false
false
https://preview.redd.it/…3f2243261110acab
35
null
An Intuitive Understanding of AI Diffusion Models
6
The classic papers describing diffusion are full of dense mathematical terms and equations. For many (including myself) who haven’t stretched those particular math muscles since diff eq class a decade or so ago, the paper is just an opaque wall of literal Greek. In this post I describe my personal understanding of di...
2026-02-28T21:12:36
https://www.bryanthornbury.com/posts/intuitive-understanding-ai-diffusion-models/
brthornbury
bryanthornbury.com
1970-01-01T00:00:00
0
{}
1rhe6ou
false
null
t3_1rhe6ou
/r/LocalLLaMA/comments/1rhe6ou/an_intuitive_understanding_of_ai_diffusion_models/
false
false
https://external-preview…9e8dfbb20134cc78
6
{'enabled': False, 'images': [{'id': 'ENKH5LH9eSfp70ducWWZuFZu5YYiwm30J1vcsP-8zYs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ENKH5LH9eSfp70ducWWZuFZu5YYiwm30J1vcsP-8zYs.png?width=108&crop=smart&auto=webp&s=91df28891c681d933a315d8dc4dc0abdb3a6b65b', 'width': 108}, {'height': 120, 'url': 'h...
Qwen 3.5 27b and Qwen3.5-35B-A3B ran locally on my rtx 5060ti 16gb card
4
These models are amazing! The 35B was outputting around 45 tokens per second vs 5 tps for the 27B. Did a full breakdown on my YT channel: [https://youtu.be/TmdZlc5P93I](https://youtu.be/TmdZlc5P93I)
2026-02-28T21:10:13
https://i.redd.it/m5tzm0a4wamg1.png
Substantial-Cup-9531
i.redd.it
1970-01-01T00:00:00
0
{}
1rhe4oo
false
null
t3_1rhe4oo
/r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/
false
false
https://preview.redd.it/…1a6f9e6675ecc677
4
{'enabled': True, 'images': [{'id': 'm5tzm0a4wamg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/m5tzm0a4wamg1.png?width=108&crop=smart&auto=webp&s=ae4fc46ad8d7c304b8bfe27a967ae8e961f99f5a', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/m5tzm0a4wamg1.png?width=216&crop=smart&auto=web...
Local LLM Agents Blocked Everywhere
5
Any other LM Studio users getting this problem as well? [AI tool use failing to access websites](https://preview.redd.it/yn2ibas4vamg1.png?width=991&format=png&auto=webp&s=446be38c4562e021534cfc48a1b7a615f1d0b3fc) Qwen 3.5 failing to access websites. Anyone else getting this issue? Is there something in the VisitWe...
2026-02-28T21:04:41
https://www.reddit.com/r/LocalLLaMA/comments/1rhdzrc/local_llm_agents_blocked_everywhere/
CSEliot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhdzrc
false
null
t3_1rhdzrc
/r/LocalLLaMA/comments/1rhdzrc/local_llm_agents_blocked_everywhere/
false
false
https://external-preview…5dad6a174db1cfd2
5
null
Havering between power-limited dual 3090s and a 64GB Mac Studio
3
Hi all, I have been working with local models for a couple of years in embedded contexts and now want to experiment with a bigger setup for agentic work. I've got a budget of a couple thousand pounds, so am really looking at a dual-3090 PC or a Mac Studio 64GB (128GB if I get lucky). However, power/heat/no...
2026-02-28T20:46:42
https://www.reddit.com/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/
youcloudsofdoom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhdjqf
false
null
t3_1rhdjqf
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/
false
false
self
3
null
Merchants banning agents??
0
Has anyone else noticed merchants starting to crack down on AI agents? The account banning problem is going to get worse before it gets better.
2026-02-28T20:41:48
https://www.reddit.com/r/LocalLLaMA/comments/1rhdf9f/merchants_banning_agents/
Opposite-Exam3541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhdf9f
false
null
t3_1rhdf9f
/r/LocalLLaMA/comments/1rhdf9f/merchants_banning_agents/
false
false
self
0
null
Want to build a local Agentic AI to help with classification and organization of files (PDFs)
2
I would like to hear your recommendations for models and frameworks to use for a local AI that can read PDF file contents, rename files according to content, and move them into folders. This is the No. 1 use case I would want to solve with it. My system is a Windows PC (I could add a second Linux dual-boot if this helps...
2026-02-28T20:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/
Gold-Drag9242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhddg1
false
null
t3_1rhddg1
/r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/
false
false
self
2
null
Qwen 3.5 122b/a10b (q3_k_xl UD) actually passed my simple (but apparently hard) programming test.
11
I tend to like RPN based calculators (similar to the older HP calculators). For some reason, when I prompt any model "Create a single page web app implementing a scientific RPN calculator", practically none of the popular models I can run at home (strix halo 128GB) seem to get it on first pass. Often times the core f...
2026-02-28T20:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/
derekp7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhdddm
false
null
t3_1rhdddm
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/
false
false
self
11
null
Streaming Moonshine ASR
2
Saw this trending on GitHub: moonshine-ai/moonshine. Deployed it on HF: https://huggingface.co/spaces/D3vShoaib/MoonshineASR They are claiming to be better than Whisper in some cases; latency is good even on a free Hugging Face 2vCPU space. Share your thoughts. Streaming is also there.
2026-02-28T20:30:33
https://www.reddit.com/r/LocalLLaMA/comments/1rhd5b6/streaming_moonshine_asr/
KokaOP
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhd5b6
false
null
t3_1rhd5b6
/r/LocalLLaMA/comments/1rhd5b6/streaming_moonshine_asr/
false
false
self
2
{'enabled': False, 'images': [{'id': 'p9DYU42dvtuBe6qU0zlbhRwZaHN7vGhEML2Pmt5ZUsc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p9DYU42dvtuBe6qU0zlbhRwZaHN7vGhEML2Pmt5ZUsc.png?width=108&crop=smart&auto=webp&s=a401df7b6a64de462296066637c5527f63d554bc', 'width': 108}, {'height': 116, 'url': 'h...
MATE - self-hosted multi-agent system with Ollama support, web dashboard, and persistent memory
0
Built an open-source multi-agent orchestration engine that works with Ollama out of the box. Set `model_name` to `ollama_chat/llama3.2` (or any model) in the config and you're running agents locally. Features: hierarchical agent trees, web dashboard for configuration, persistent memory, MCP protocol support, RBAC, tok...
2026-02-28T20:22:07
https://www.reddit.com/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/
ivanantonijevic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhcxn2
false
null
t3_1rhcxn2
/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/
false
false
self
0
{'enabled': False, 'images': [{'id': 'bdRdRVzDn8F4yKjKKaeDdTisF0zcfmdET6TyZqqq6SI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bdRdRVzDn8F4yKjKKaeDdTisF0zcfmdET6TyZqqq6SI.png?width=108&crop=smart&auto=webp&s=4bc6663ca71b23555644f7b7df49fca290ed0f80', 'width': 108}, {'height': 108, 'url': 'h...
Tiny Small Faster models for 13 year old laptop - CPU-only? World knowledge
2
It's for an old neighbor who has an old laptop with only 16GB DDR3 RAM & no GPU. That laptop isn't worth any upgrades. He doesn't use the Internet or mobile or even TV mostly. Old-fashioned guy & a bookworm. So I've already loaded some small Kiwix wiki & other archives. Just want to load some tiny fast models for hi...
2026-02-28T20:16:09
https://www.reddit.com/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhcs8p
false
null
t3_1rhcs8p
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/
false
false
self
2
{'enabled': False, 'images': [{'id': 'H8qppYx16tQ9ojldjydYLS7iLR-kiiN_-2qgzJ1W8kQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H8qppYx16tQ9ojldjydYLS7iLR-kiiN_-2qgzJ1W8kQ.png?width=108&crop=smart&auto=webp&s=2d853d8a76d2ebdca76e3ae4cf563b904d54f722', 'width': 108}, {'height': 108, 'url': 'h...
Best Coding Model to run entirely on 12GB vRAM + have reasonable context window
3
Hey all, I’m running an RTX 4070 (12GB VRAM) and trying to keep my SLM fully on-GPU for speed and efficiency. My goal is a strong local coding assistant that can handle real refactors — so I need a context window of ~40k+ tokens. I’ll be plugging it into agents (Claude Code, Cline, etc.), so solid tool calling is no...
2026-02-28T20:10:44
https://www.reddit.com/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/
iLoveWaffle5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhcnbt
false
null
t3_1rhcnbt
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/
false
false
self
3
{'enabled': False, 'images': [{'id': '6Pji2bmJ5ai08z077L-Vd4O5U7Gs_JVYGcgYg8PfmaI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6Pji2bmJ5ai08z077L-Vd4O5U7Gs_JVYGcgYg8PfmaI.png?width=108&crop=smart&auto=webp&s=2dc2901c415a837329ebcaa2e1ab31aec9db45b1', 'width': 108}, {'height': 116, 'url': 'h...
[P] UCS v1.2 – Judgment Preservation in Persistent AI Agents (toroidal routing + Emergent Judgment Protocol, 1,563× differentiation, open source)
0
AI agents forget earned judgment during compaction — not facts, but reasoning texture, negative knowledge, methodology. UCS fixes it: • Toroidal routing engine + separated context energy field • Emergent Judgment Protocol • Reflect/flush/resume loop survives full compaction 17/17 tests. 3-phase validation. Paper:...
2026-02-28T20:06:21
https://www.reddit.com/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/
TheBrierFox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhcjd3
false
null
t3_1rhcjd3
/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/
false
false
self
0
null
QWEN3.5 with LM Studio API Without Thinking Output
2
I have been using gpt-oss for a while to process my log files and flag logs that may require investigation. This is done with a python3 script where I fetch a list of logs from all my docker containers, applications and system logs and iterate through them. I need the output to be just the json output I describe in m...
2026-02-28T20:06:10
https://www.reddit.com/r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/
jpc82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhcj7b
false
null
t3_1rhcj7b
/r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/
false
false
self
2
null
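For the JSON-only use case above, a common workaround is to strip the model's reasoning block from the response before parsing. A minimal sketch, assuming the model wraps its reasoning in `<think>...</think>` tags (common for Qwen-style reasoning models; the exact tag format depends on the chat template, so adjust the pattern accordingly):

```python
import json
import re

def extract_json(raw: str):
    """Drop any <think>...</think> reasoning block, then parse the rest as JSON.

    Assumes <think> tags delimit the reasoning; this is an illustrative
    helper, not part of the LM Studio API.
    """
    visible = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return json.loads(visible)

raw = '<think>The log shows a failed login.</think>{"flag": true, "reason": "failed login"}'
print(extract_json(raw))  # -> {'flag': True, 'reason': 'failed login'}
```

This keeps the downstream script unchanged: it still sees only the JSON it asked for, regardless of whether the model emitted a thinking block.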
Qwen3.5 family running notes
17
I thought I'd share my experience with Qwen3.5. I've now gone through the set of models, made some comparisons, and formed some opinions that might be useful to someone. The entire set shares a very strong "family" affinity, exhibiting the same base character. This is very good and indicates stable training across the...
2026-02-28T20:04:43
https://www.reddit.com/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/
CodeSlave9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhchvi
false
null
t3_1rhchvi
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/
false
false
self
17
null
VibeHQ, Orchestrate multiple Claude Code / Codex / Gemini CLI agents collaborate like a real company team. 7 agents built a hospital system from one prompt.
0
Hey everyone, I've been working on VibeHQ, a multi-agent collaboration platform that takes a fundamentally different approach from existing "multi-agent" frameworks. **The problem:** Most multi-agent systems run sequentially in the same process with synthetic conversations. That's not collaboration — that's a pipelin...
2026-02-28T19:59:14
https://v.redd.it/c9h7rglljamg1
GGwithRabbit
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/
1970-01-01T00:00:00
0
{}
1rhcckv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/c9h7rglljamg1/DASHPlaylist.mpd?a=1775030364%2CZWQwZDJhNGI2NDQwNDU3YTcwNTA3ZWY2MmIxYTU2N2QzMzVjN2UwMGRlNTFmZjFkMzIzYWE4MjBkZTg3ZmNmNg%3D%3D&v=1&f=sd', 'duration': 223, 'fallback_url': 'https://v.redd.it/c9h7rglljamg1/CMAF_1080.mp4?source=fallback', '...
t3_1rhcckv
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/
false
false
https://external-preview…36ae3cfec92b5715
0
{'enabled': False, 'images': [{'id': 'Mm52eGt2bGxqYW1nMdOWk3r-cygRqnc5YGjIzd-i5cXDCCGQ488i7uOqV_JL', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/Mm52eGt2bGxqYW1nMdOWk3r-cygRqnc5YGjIzd-i5cXDCCGQ488i7uOqV_JL.png?width=108&crop=smart&format=pjpg&auto=webp&s=15aac479d8d47e4082481f0b69c7ac5d96f68...
iOS Apps with tool-calling (web search)?
1
I'm checking out some iOS llm apps, and so far none I've looked at have a straightforward tool-calling mechanism, so I figure I'm missing a large chunk of the story. Basically I just want to supplement a model's content with web search to get around model-training-date limitations. Are there any apps out there that d...
2026-02-28T19:56:29
https://www.reddit.com/r/LocalLLaMA/comments/1rhca31/ios_apps_with_toolcalling_web_search/
numberwitch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhca31
false
null
t3_1rhca31
/r/LocalLLaMA/comments/1rhca31/ios_apps_with_toolcalling_web_search/
false
false
self
1
null
The state of Open-weights LLMs performance on NVIDIA DGX Spark
15
When NVIDIA started shipping DGX Spark in mid-October 2025, the pitch was basically: “desktop box, huge unified memory, run *big* models locally (even ~200B params for inference).” The fun part is how quickly the *software + community benchmarking* story evolved from “here are some early numbers” to a real, reproduci...
2026-02-28T19:38:29
https://www.reddit.com/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/
raphaelamorim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rhbtnw
false
null
t3_1rhbtnw
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/
false
false
self
15
null