column     dtype               min                   max
title      stringlengths       1                     300
score      int64               0                     8.54k
selftext   stringlengths       0                     41.5k
created    timestamp[ns]date   2023-04-01 04:30:41   2026-03-04 02:14:14
url        stringlengths       0                     878
author     stringlengths       3                     20
domain     stringlengths       0                     82
edited     timestamp[ns]date   1970-01-01 00:00:00   2026-02-19 14:51:53
gilded     int64               0                     2
gildings   stringclasses       7 values
id         stringlengths       7                     7
locked     bool                2 classes
media      stringlengths       646                   1.8k
name       stringlengths       10                    10
permalink  stringlengths       33                    82
spoiler    bool                2 classes
stickied   bool                2 classes
thumbnail  stringlengths       4                     213
ups        int64               0                     8.54k
preview    stringlengths       301                   5.01k
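The column summary above describes a fixed 20-field record, and the rows below are those records flattened one value per line. A minimal stdlib sketch (the field order comes from the summary; the loader itself is hypothetical, not part of the dataset tooling) that zips a flattened row back into a labeled record:

```python
# Hypothetical helper: rebuild labeled records from the flattened row dump.
# Field order follows the column summary above; this is not an official loader.
FIELDS = [
    "title", "score", "selftext", "created", "url", "author", "domain",
    "edited", "gilded", "gildings", "id", "locked", "media", "name",
    "permalink", "spoiler", "stickied", "thumbnail", "ups", "preview",
]

def parse_record(lines):
    """Zip one flattened 20-line row into a {column: value} dict."""
    if len(lines) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} values, got {len(lines)}")
    return dict(zip(FIELDS, lines))
```

Records with an empty `selftext` (min length 0) appear with that line omitted in the dump, so a real loader would need to account for 19-line rows as well.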
Claude Opus defeated by Gemini 3.1
0
New Gemini 3.1 shows extremely high performance compared to Claude Opus 3.6 at lower cost. Waiting for DeepSeek to do something like this with open-source power
2026-02-21T03:42:09
https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/
Different-Olive-8745
blog.google
1970-01-01T00:00:00
0
{}
1ragq9b
false
null
t3_1ragq9b
/r/LocalLLaMA/comments/1ragq9b/claude_opus_defeated_by_gemini_31/
false
false
https://external-preview…957f95f8f2da618c
0
{'enabled': False, 'images': [{'id': '-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=108&crop=smart&auto=webp&s=e3c66afd274b388969f8347ca1848478c4627c52', 'width': 108}, {'height': 121, 'url': 'h...
I built an LLM gateway in Rust because I was tired of API failures
2
I kept hitting the same problems with LLMs in production: - OpenAI goes down → my app breaks - I'm using expensive models for simple tasks - No visibility into what I'm spending - PII leaking to external APIs So I built Sentinel - an open-source gateway that handles all of this. What it does: - Automatic fa...
2026-02-21T03:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1raggvd/i_built_an_llm_gateway_in_rust_because_i_was/
SchemeVivid4175
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raggvd
false
null
t3_1raggvd
/r/LocalLLaMA/comments/1raggvd/i_built_an_llm_gateway_in_rust_because_i_was/
false
false
self
2
{'enabled': False, 'images': [{'id': 'NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=108&crop=smart&auto=webp&s=d35bf5db1827941a719359ae52e8613b201f4eea', 'width': 108}, {'height': 108, 'url': 'h...
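The automatic failover the Sentinel post describes can be sketched generically: try providers in priority order and fall back on error. The provider names and `call` signature here are made up for illustration; this is not Sentinel's actual code.

```python
# Generic failover sketch: try providers in priority order, fall back on error.
# Provider callables and names are illustrative, not Sentinel's real API.
def call_with_failover(providers, prompt):
    """providers: list of (name, callable) tried in order until one succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real gateway would catch narrower errors
            errors.append((name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```

A real gateway would add per-provider timeouts and health checks on top of this loop, but the control flow is the same.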
Best Local LLM device ?
0
There seems to be a lack of plug-and-play local LLM solutions. Why isn’t there a packaged solution for local LLMs that includes the underlying hardware? I am thinking of an Alexa-type device that runs both the model AND all functionality locally.
2026-02-21T03:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1ragdx9/best_local_llm_device/
sayamss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ragdx9
false
null
t3_1ragdx9
/r/LocalLLaMA/comments/1ragdx9/best_local_llm_device/
false
false
self
0
null
[R] Locaris: LLM-Based Indoor Localization (IEEE PerCom WiP)
1
Locaris repurposes decoder-only LLMs to allow few-shot adaptation and more robust cross-environment generalization with graceful degradation under missing APs or noisy telemetry. I’m especially interested in thoughts on using decoder-only LLMs as feature extractors for structured regression tasks like localization. A...
2026-02-21T03:08:00
https://www.reddit.com/r/LocalLLaMA/comments/1rag1ea/r_locaris_llmbased_indoor_localization_ieee/
DiligentCharacter252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rag1ea
false
null
t3_1rag1ea
/r/LocalLLaMA/comments/1rag1ea/r_locaris_llmbased_indoor_localization_ieee/
false
false
https://preview.redd.it/…fab5cf8e9c21d16d
1
null
Structural Decomposition Appearing in Fresh LLM Sessions Without Prompting?
0
I’ve noticed something odd when interacting with LLMs across separate sessions over time. In a few cases, analytical structures (like decomposing outcomes into multiplicative components or framing behavior in terms of optimization under evaluative metrics) appeared in the model’s responses in newly initialized session...
2026-02-21T03:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1rafyac/structural_decomposition_appearing_in_fresh_llm/
Lonely-Entrance-5789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafyac
false
null
t3_1rafyac
/r/LocalLLaMA/comments/1rafyac/structural_decomposition_appearing_in_fresh_llm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=108&crop=smart&auto=webp&s=7a6628a8fad7841a08f0e53c789d02c146ec2b8e', 'width': 108}, {'height': 108, 'url': 'h...
Compression method that actually keeps facts in local LLMs
1
Never posted here because I don't usually have much useful to add, but I thought some of you might find this helpful. Most SVD or pruning methods make models smaller but completely wipe out factual knowledge. So I made **Intelligent SVD + CF90**: * Importance scoring from factual probes * Compresses only Q/K/O matri...
2026-02-21T02:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1rafu1c/compression_method_that_actually_keeps_facts_in/
NoSir261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafu1c
false
null
t3_1rafu1c
/r/LocalLLaMA/comments/1rafu1c/compression_method_that_actually_keeps_facts_in/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=108&crop=smart&auto=webp&s=76a6cbb1543710c4a691d2c6e730f5ec969e7b36', 'width': 108}, {'height': 108, 'url': 'h...
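For contrast with the importance-scored approach the compression post describes, here is a sketch of the naive magnitude-pruning baseline (the kind the author says wipes out factual knowledge). "Intelligent SVD + CF90" itself is not reproduced here; this is only the baseline it argues against.

```python
# Baseline magnitude pruning: zero out the smallest-|w| fraction of weights.
# This is the naive method the post contrasts against, not the author's CF90.
def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude fraction zeroed."""
    k = int(len(weights) * sparsity)          # number of weights to drop
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    # Ties at the cutoff may zero slightly more than k weights.
    return [0.0 if abs(w) <= cutoff else w for w in weights]
```

The post's point is that this criterion is importance-blind: a small weight can still carry factual knowledge, which is why it scores importance from factual probes instead.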
[Update] Vellium v0.3.5: Massive Writing Mode upgrade, Native KoboldCpp, and OpenAI TTS
41
Hey everyone, just pushed a pretty big update for Vellium (v0.2.8 to v0.3.5). The main focus this time was overhauling the writing mode and making local providers work much smoother. The writing mode got a huge rework. We finally added a proper book bible, direct DOCX import, and cached book summaries. The sidebar is ...
2026-02-21T02:50:43
https://www.reddit.com/gallery/1rafo5b
Possible_Statement84
reddit.com
1970-01-01T00:00:00
0
{}
1rafo5b
false
null
t3_1rafo5b
/r/LocalLLaMA/comments/1rafo5b/update_vellium_v035_massive_writing_mode_upgrade/
false
false
https://preview.redd.it/…3a33fcc76d123703
41
null
Any wrappers for Qwen3.5 Video Comprehension?
2
I want to feed local video files into it. The blog says it does video comprehension natively. How many frames per second is optimal?
2026-02-21T02:46:36
https://www.reddit.com/r/LocalLLaMA/comments/1rafkyj/any_wrappers_for_qwen35_video_comprehension/
New_Construction1370
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafkyj
false
null
t3_1rafkyj
/r/LocalLLaMA/comments/1rafkyj/any_wrappers_for_qwen35_video_comprehension/
false
false
self
2
null
I used an LLM to translate my research theory about SST-cells unlocking "hyperbolic brain geometry" into a physical hardware blueprint for a new computer chip.
0
Everyone knows scaling Euclidean matrices is hitting a thermodynamic dead end. I'm an independent researcher focusing on biological efficiency, and I'm exploring the idea that brains might bypass this thermodynamic dead end by using dynamic geometry (warping into hyperbolic space to more efficiently store incoming hie...
2026-02-21T02:45:46
https://www.reddit.com/r/LocalLLaMA/comments/1rafkca/i_used_an_llm_to_translate_my_research_theory/
SrimmZee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafkca
false
null
t3_1rafkca
/r/LocalLLaMA/comments/1rafkca/i_used_an_llm_to_translate_my_research_theory/
false
false
self
0
null
Real Experiences with Gemini 3.1 Pro — Performance, Coding (FE/BE), and Comparison to GPT-5.3 & Sonnet 4.6
0
Hey everyone, I'm trying to get **real, honest opinions** from people who’ve actually used **Gemini 3.1 Pro** in real workflows, not benchmarks you read on a blog, but real day-to-day experience. **Specifically curious about:** 1. **General performance** — speed, reliability, accuracy 2. **Coding abilities** * Fr...
2026-02-21T02:43:12
https://www.reddit.com/r/LocalLLaMA/comments/1rafidh/real_experiences_with_gemini_31_pro_performance/
Empty_Break_8792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafidh
false
null
t3_1rafidh
/r/LocalLLaMA/comments/1rafidh/real_experiences_with_gemini_31_pro_performance/
false
false
self
0
null
Free open-source prompt compression engine — pure text processing, no AI calls, works with any model
14
Built TokenShrink — compresses prompts before you send them to any LLM. Pure text processing, no model calls in the loop. How it works: 1. Removes verbose filler ("in order to" → "to", "due to the fact tha...
2026-02-21T02:40:41
https://www.reddit.com/r/LocalLLaMA/comments/1rafggf/free_opensource_prompt_compression_engine_pure/
bytesizei3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafggf
false
null
t3_1rafggf
/r/LocalLLaMA/comments/1rafggf/free_opensource_prompt_compression_engine_pure/
false
false
self
14
null
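The filler-removal step the TokenShrink post lists can be sketched as a rule table of regex replacements. The replacement table below is illustrative, built from the post's own examples; it is not TokenShrink's actual rule set.

```python
import re

# Rule-based filler compression in the spirit of the post's examples.
# The replacement table is illustrative, not TokenShrink's actual rules.
FILLER = {
    r"\bin order to\b": "to",
    r"\bdue to the fact that\b": "because",
    r"\bat this point in time\b": "now",
}

def shrink(prompt):
    for pattern, short in FILLER.items():
        prompt = re.sub(pattern, short, prompt, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", prompt).strip()   # collapse leftover whitespace
```

Because this is pure text processing, it composes with any model: the compressed prompt is what gets tokenized, so the savings apply regardless of backend.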
Polos - new open source AI agent runtime with sandboxing and durable execution
1
[removed]
2026-02-21T02:28:42
https://www.reddit.com/r/LocalLLaMA/comments/1raf7gg/polos_new_open_source_ai_agent_runtime_with/
Diligent_Drop_1314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raf7gg
false
null
t3_1raf7gg
/r/LocalLLaMA/comments/1raf7gg/polos_new_open_source_ai_agent_runtime_with/
false
false
self
1
{'enabled': False, 'images': [{'id': '4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=108&crop=smart&auto=webp&s=9c52cc8f3f8f02903252e452d561dd6546da91a7', 'width': 108}, {'height': 108, 'url': 'h...
Lessons learned building an open source agent for incident investigation with local models
0
Some lessons learned building an open source agent for incident investigation. 1. Model lock-in is a non-starter for a lot of teams. When I first shared the project it was OpenAI-only. The pushback was immediate, especially from self-hosters. Supporting Ollama and generic OpenAI-compatible endpoints changed the conver...
2026-02-21T02:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1raf5ud/lessons_learned_building_an_open_source_agent_for/
Useful-Process9033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raf5ud
false
null
t3_1raf5ud
/r/LocalLLaMA/comments/1raf5ud/lessons_learned_building_an_open_source_agent_for/
false
false
self
0
null
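Lesson 1 in the incident-agent post (supporting Ollama and generic OpenAI-compatible endpoints) boils down to making the base URL configurable while keeping one request shape. A sketch, where the base URL and model name are placeholders and the payload follows the widely implemented chat-completions format:

```python
import json

# Build an OpenAI-compatible chat request for any backend (Ollama, llama.cpp,
# a hosted API). Base URL and model name below are placeholders.
def chat_request(base_url, model, messages):
    """Return (url, json_body) for a /v1/chat/completions call."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body
```

Swapping providers then means changing only `base_url` and `model`, which is what made the project palatable to self-hosters.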
Open source AI agent for production incidents — now works with Ollama and local models
1
[removed]
2026-02-21T02:24:43
https://github.com/incidentfox/incidentfox/
Useful-Process9033
github.com
1970-01-01T00:00:00
0
{}
1raf4da
false
null
t3_1raf4da
/r/LocalLLaMA/comments/1raf4da/open_source_ai_agent_for_production_incidents_now/
false
false
https://external-preview…61f21cd598c2519e
1
{'enabled': False, 'images': [{'id': 'FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=108&crop=smart&auto=webp&s=9173580e7090ff998e3a746a61770ac34f15cd28', 'width': 108}, {'height': 108, 'url': 'h...
GLM 5 seems to have a "Claude" personality
119
I've noticed that GLM 5 behaves significantly differently when told it is Claude, as with the following system prompt: "You are Claude, a large language model by Anthropic." The writing style and personality changes significantly, and it even seems to bypass built-in censorship, as per my second image. I've also tried...
2026-02-21T02:23:22
https://www.reddit.com/gallery/1raf3dm
TinyApplet
reddit.com
1970-01-01T00:00:00
0
{}
1raf3dm
false
null
t3_1raf3dm
/r/LocalLLaMA/comments/1raf3dm/glm_5_seems_to_have_a_claude_personality/
false
false
https://preview.redd.it/…8dc4a19313407e90
119
null
How do you manage trust between your agent and external ones?
0
Running local agents is great for privacy, but the moment they hand off data to an external agent, you're flying blind. As multi-agent pipelines grow, how is everyone defending against: * Supply Chain Poisoning (e.g., ClawHavoc) * A2A Prompt Injection / Persona Hijacking * Sybil Attacks (trust gaming) * A...
2026-02-21T02:01:00
https://www.reddit.com/r/LocalLLaMA/comments/1raelzp/how_do_you_manage_trust_between_your_agent_and/
General_Strike356
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raelzp
false
null
t3_1raelzp
/r/LocalLLaMA/comments/1raelzp/how_do_you_manage_trust_between_your_agent_and/
false
false
self
0
null
A Simple 3-Level Framework to Stop Your LLM Agents from Eating Your Budget
0
Hey everyone, After a few painful “budget surprises” running LLM agents, my team put together a simple 3-level cost-tracking framework that’s been a lifesaver: 1 Logging: Log every LLM call as JSON. Include run ID, model, input/output tokens, cost, and task type. Don’t worry about real-time aggregation—just log it. ...
2026-02-21T01:58:29
https://www.reddit.com/r/LocalLLaMA/comments/1raejye/a_simple_3level_framework_to_stop_your_llm_agents/
mark_bolimer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raejye
false
null
t3_1raejye
/r/LocalLLaMA/comments/1raejye/a_simple_3level_framework_to_stop_your_llm_agents/
false
false
self
0
null
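Level 1 of the cost-tracking framework above (one JSON record per LLM call, aggregation deferred) can be sketched in a few lines. Field names and the cost figures are illustrative, not the team's actual schema.

```python
import json
from collections import defaultdict

# Level-1 logging from the post: one JSON line per LLM call, aggregated later.
# Field names and cost figures are illustrative.
def log_call(run_id, model, tokens_in, tokens_out, cost, task):
    return json.dumps({"run_id": run_id, "model": model,
                       "tokens_in": tokens_in, "tokens_out": tokens_out,
                       "cost": cost, "task": task})

def cost_by_model(log_lines):
    """Deferred aggregation: total spend per model from the JSON log lines."""
    totals = defaultdict(float)
    for line in log_lines:
        rec = json.loads(line)
        totals[rec["model"]] += rec["cost"]
    return dict(totals)
```

Writing one line per call and aggregating offline is the point of the framework's first level: no real-time pipeline, just a grep-able log.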
Building a machine as a hedge against shortages/future?
1
Case for: 1. Chip shortages, prices skyrocketing 2. LLM providers limiting usage because of them. Z.ai recently tweeted that they have an actual shortage issue. 3. Running commercial models for self coding sessions hits limits pretty fast and requires $200 subscriptions. Running multiple agents 24/7 is extrem...
2026-02-21T01:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1raehmk/building_a_machine_as_a_hedge_against/
Meraath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raehmk
false
null
t3_1raehmk
/r/LocalLLaMA/comments/1raehmk/building_a_machine_as_a_hedge_against/
false
false
self
1
null
If we meme about it enough, it will happen.
29
This strategy has always worked on this sub before: To manifest a new version of a model into existence, we must all say it together. Repeat after me: “it’s been a while since Google dropped a new Gemma release, am I right?” If we all do this during a full moon, it will happen.
2026-02-21T01:46:24
https://i.redd.it/wpqe1i4i6rkg1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1raeapp
false
null
t3_1raeapp
/r/LocalLLaMA/comments/1raeapp/if_we_meme_about_it_enough_it_will_happen/
false
false
https://preview.redd.it/…d97acf5cf10f736b
29
{'enabled': True, 'images': [{'id': 'wpqe1i4i6rkg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=108&crop=smart&auto=webp&s=791411ad04752249ca57897c921bac9e103fa6ca', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=216&crop=smart&auto=...
[Help] AnythingLLM Desktop: API responds (ping success) but UI is blank on host PC and Mobile
2
Setup: > - Windows 11 Pro (Xeon CPU, 32GB RAM, GTX 1050) Network: PC on LAN cable, iPhone on Wi-Fi (Bell Home Hub) App: AnythingLLM Desktop (using Ollama as backend) The Problem: I’m trying to access my AnythingLLM dashboard from my phone, but I can't even get it to load reliably on the host PC anymore. On my h...
2026-02-21T01:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1rae6th/help_anythingllm_desktop_api_responds_ping/
willtikill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rae6th
false
null
t3_1rae6th
/r/LocalLLaMA/comments/1rae6th/help_anythingllm_desktop_api_responds_ping/
false
false
self
2
null
No-code semantic search over your documents via Claude Code skill - supports PDF, DOCX, PPTX, and more
0
Sharing a tool I built for anyone who wants document retrieval without the infrastructure overhead. It's a Claude Code skill that wraps the Denser Retriever API. You chat with Claude to upload files and run semantic search queries against them. The API handles parsing, chunking, embedding, Elasticsearch indexing, and ...
2026-02-21T01:26:36
https://www.reddit.com/r/LocalLLaMA/comments/1radvkj/nocode_semantic_search_over_your_documents_via/
True-Snow-1283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1radvkj
false
null
t3_1radvkj
/r/LocalLLaMA/comments/1radvkj/nocode_semantic_search_over_your_documents_via/
false
false
self
0
{'enabled': False, 'images': [{'id': 'URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=108&crop=smart&auto=webp&s=307428fec718f1496b062f1930387f8e77d9047f', 'width': 108}, {'height': 108, 'url': 'h...
Exposing biases, moods, personalities, and abstract concepts hidden in large language models
4
2026-02-21T01:13:41
https://news.mit.edu/2026/exposing-biases-moods-personalities-hidden-large-language-models-0219#:~:text=The%20method%20can%20be%20applied,how%20to%20rob%20a%20bank.
ab2377
news.mit.edu
1970-01-01T00:00:00
0
{}
1radlcs
false
null
t3_1radlcs
/r/LocalLLaMA/comments/1radlcs/exposing_biases_moods_personalities_and_abstract/
false
false
https://external-preview…c0cb01b6805b916a
4
{'enabled': True, 'images': [{'id': 'Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=108&crop=smart&format=png8&s=46f4e8c127d9f736c740749ef01c1e66a34dd8a7', 'width': 108}, {'height': 144, 'url': '...
I evaluated LLaMA and 100+ LLMs on real engineering reasoning for Python
39
I evaluated **100+ LLMs** using a fixed set of questions covering **7 software engineering categories** from the perspective of a Python developer. This was **not coding tasks** and not traditional benchmarks, the questions focus on practical engineering reasoning and decision-making. All models were tested against the...
2026-02-21T00:51:34
https://i.redd.it/jf8obilpwqkg1.png
samaphp
i.redd.it
1970-01-01T00:00:00
0
{}
1rad3hd
false
null
t3_1rad3hd
/r/LocalLLaMA/comments/1rad3hd/i_evaluated_llama_and_100_llms_on_real/
false
false
https://preview.redd.it/…316b738c63f97ddc
39
{'enabled': True, 'images': [{'id': 'jf8obilpwqkg1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=108&crop=smart&auto=webp&s=91b3f3d966bc2b7e44ceab1a4c54cca60697efda', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=216&crop=smart&auto=we...
optimize_anything: one API to optimize code, prompts, agents, configs — if you can measure it, you can optimize it
2
We open-sourced `optimize_anything`, an API that optimizes any text artifact. You provide a starting artifact (or just describe what you want) and an evaluator — it handles the search. import gepa.optimize_anything as oa result = oa.optimize_anything( seed_candidate="<your artifact>", eval...
2026-02-21T00:41:17
https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/
LakshyAAAgrawal
gepa-ai.github.io
1970-01-01T00:00:00
0
{}
1racv1z
false
null
t3_1racv1z
/r/LocalLLaMA/comments/1racv1z/optimize_anything_one_api_to_optimize_code/
false
false
https://external-preview…74b29e0060a4c08d
2
{'enabled': False, 'images': [{'id': '2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=108&crop=smart&auto=webp&s=38e484660d3f107fb29e93d1409270e2d9dc62c6', 'width': 108}, {'height': 99, 'url': 'ht...
[Help] AnythingLLM Desktop: API responds (ping success) but UI is blank on host PC and Mobile
1
>
2026-02-21T00:38:20
https://www.reddit.com/r/LocalLLaMA/comments/1racsnr/help_anythingllm_desktop_api_responds_ping/
willtikill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1racsnr
false
null
t3_1racsnr
/r/LocalLLaMA/comments/1racsnr/help_anythingllm_desktop_api_responds_ping/
false
false
self
1
null
GLM 4.7 vs 5, real people experience
2
Do you guys feel a real difference? What are you comparing, if you do run them? I personally tried a higher q3 of GLM 5 for a few hours vs 4.7 AWQ and they looked pretty comparable. But I haven't tried making any features with the new one yet.
2026-02-21T00:26:14
https://www.reddit.com/r/LocalLLaMA/comments/1racisy/glm_47_vs_5_real_people_experience/
val_in_tech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1racisy
false
null
t3_1racisy
/r/LocalLLaMA/comments/1racisy/glm_47_vs_5_real_people_experience/
false
false
self
2
null
LLMs don’t need more parameters; they need "Loops." New Research on Looped Language Models shows a 3x gain in knowledge manipulation Compared to Equivalently-sized Traditional LLMs. This proves that 300B-400B SoTA performance can be crammed into a 100B local model?
60
We’ve exhausted the high-quality, organic/human-made internet data (as noted by Ilya Sutskever and others), and simply throwing more parameters at the problem is yielding diminishing returns. New research on **Scaling Latent Reasoning via Looped Language Models** ([paper](https://www.youtube.com/redirect?event=video_...
2026-02-21T00:24:09
https://www.youtube.com/watch?v=pDsTcrRVNc0
madSaiyanUltra_9789
youtube.com
1970-01-01T00:00:00
0
{}
1rach24
false
{'oembed': {'author_name': 'NeuroDump', 'author_url': 'https://www.youtube.com/@neuro-dump', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pDsTcrRVNc0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ...
t3_1rach24
/r/LocalLLaMA/comments/1rach24/llms_dont_need_more_parameters_they_need_loops/
false
false
https://external-preview…bf9316c135024484
60
{'enabled': False, 'images': [{'id': 'KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o.jpeg?width=108&crop=smart&auto=webp&s=ae1555de60b037a2a984f0182949e95a777dde07', 'width': 108}, {'height': 162, 'url': '...
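The arithmetic behind the looped-LLM claim above is that a looped model reapplies one shared block k times, so effective depth grows without adding parameters. A toy sketch (numbers illustrative, not from the paper):

```python
# Toy arithmetic behind "loops instead of layers": a looped model reuses one
# block's weights k times, so depth grows with no new parameters.
def params_standard(layers, params_per_layer):
    return layers * params_per_layer          # distinct weights per layer

def params_looped(loops, params_per_layer):
    return params_per_layer                   # one shared block, reused

def looped_forward(x, step, loops):
    """Apply the same (weight-tied) step `loops` times."""
    for _ in range(loops):
        x = step(x)
    return x
```

Under this accounting, 24 loops of one block cost 1/24 the parameters of a 24-layer stack, which is the intuition behind the "300B performance in a 100B model" framing, though the actual gains reported are empirical, not guaranteed by the arithmetic.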
ctx-sys: a tool for locally creating a searchable hybrid RAG database of your codebase and/or documentation
1
I've found modern coding assistants pretty great, but a large part of your job now is managing context effectively. ctx-sys aims to solve this by building a hybrid RAG solution which parses your code and markdown and other documentation files, builds a graphRAG set of relationships between the files, uses a local ollam...
2026-02-21T00:13:31
https://www.reddit.com/r/LocalLLaMA/comments/1rac7xi/ctxsys_a_tool_for_locally_creating_a_searchable/
foobar11011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rac7xi
false
null
t3_1rac7xi
/r/LocalLLaMA/comments/1rac7xi/ctxsys_a_tool_for_locally_creating_a_searchable/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=108&crop=smart&auto=webp&s=eea83f30ca3d8a704ff1f41b275eccb510dac2df', 'width': 108}, {'height': 108, 'url': 'h...
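The "hybrid RAG" the ctx-sys post describes typically blends a lexical match score with an embedding similarity. A stdlib sketch of that scoring step (the blend weight and the Jaccard keyword score are illustrative choices, not ctx-sys internals):

```python
import math

# Hybrid retrieval score sketch: blend keyword overlap with embedding cosine
# similarity. The alpha weight and Jaccard choice are illustrative.
def keyword_score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0   # Jaccard overlap

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(query, doc, q_emb, d_emb, alpha=0.5):
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_emb, d_emb)
```

In a real system the embeddings would come from the local embedding model and the lexical side from an inverted index, but the rank fusion is this weighted sum (or a reciprocal-rank variant).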
FlashLM v5.2 "Nova-Ignition": Standard Transformer with RoPE — CPU-Optimized for 5GB RAM
10
Back with v5.2. Some of you saw v4 "Bolt" — the ternary model that proved coherent stories could come from adds and subtracts only. Went back to the drawing board and rebuilt with a different philosophy: instead of pushing ternary quantization, I optimized a standard transformer architecture to run on extremely constra...
2026-02-21T00:08:00
https://www.reddit.com/r/LocalLLaMA/comments/1rac39d/flashlm_v52_novaignition_standard_transformer/
Own-Albatross868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rac39d
false
null
t3_1rac39d
/r/LocalLLaMA/comments/1rac39d/flashlm_v52_novaignition_standard_transformer/
false
false
self
10
{'enabled': False, 'images': [{'id': 'yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=108&crop=smart&auto=webp&s=0cedac03412e404a995ab8e8ae1806c6a23dcd47', 'width': 108}, {'height': 108, 'url': 'h...
Local-First Autonomous AI Agent Framework Built to Run Entirely on Your Machine Using Local Models
0
I’m sharing this project for testing and feedback: [https://github.com/janglerjoe-commits/LMAgent](https://github.com/janglerjoe-commits/LMAgent) LMAgent is a locally hosted AI agent framework written in pure Python. The core goal is for everything to run entirely on your own machine using local models. There are...
2026-02-21T00:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1rabyfh/localfirst_autonomous_ai_agent_framework_built_to/
Janglerjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabyfh
false
null
t3_1rabyfh
/r/LocalLLaMA/comments/1rabyfh/localfirst_autonomous_ai_agent_framework_built_to/
false
false
self
0
{'enabled': False, 'images': [{'id': '2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=108&crop=smart&auto=webp&s=5d45f3a6a07cec1b816c9b2e627755aec6f86281', 'width': 108}, {'height': 108, 'url': 'h...
Local TTS server with voice cloning + near-realtime streaming replies (ElevenLabs alternative)
41
Built a small local-first TTS server with voice cloning and streaming audio output so your LLM can reply back in a cloned voice almost in realtime. Main reason: I wanted something that could replace ElevenLabs in a fully local stack without API costs or external dependencies. Works well alongside llama.cpp / OpenAI-c...
2026-02-20T23:50:26
https://www.reddit.com/gallery/1rabo34
RIP26770
reddit.com
1970-01-01T00:00:00
0
{}
1rabo34
false
null
t3_1rabo34
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/
false
false
https://preview.redd.it/…56da0aa9a90586c4
41
null
Private alpha: bring your own GPU keys control plane to compare/launch/kill GPUs (Lambda/RunPod/Vast) - need 5-10 testers
1
I built TeraUnit: a small control plane that helps you run GPU workloads without babysitting provider dashboards. What it does * Scrapes GPU offers from Lambda Cloud, RunPod, [Vast.ai](http://Vast.ai) * Normalizes price + availability so you can pick the best deal for the GPU you want * Launches instances via an API ...
2026-02-20T23:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1rabmms/private_alpha_bring_your_own_gpu_keys_control/
TeraUnit_Dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabmms
false
null
t3_1rabmms
/r/LocalLLaMA/comments/1rabmms/private_alpha_bring_your_own_gpu_keys_control/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h...
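The normalize-then-compare step the TeraUnit post describes can be sketched simply: map each provider's raw offer into one shape, then pick the cheapest available match. All field names here are made up for illustration, not the providers' real APIs.

```python
# Offer normalization sketch: map provider-specific offers into one shape,
# then pick the cheapest available one. Field names are illustrative.
def normalize(provider, raw):
    return {"provider": provider,
            "gpu": raw["gpu"],
            "usd_per_hour": float(raw["price"]),
            "available": bool(raw.get("available", True))}

def best_offer(offers, gpu):
    matches = [o for o in offers if o["gpu"] == gpu and o["available"]]
    return min(matches, key=lambda o: o["usd_per_hour"]) if matches else None
```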
8GB is tight for modern models. Try 4-bit quantization with flash attention enabled. You'll trade some accuracy for speed but it's the only way to fit larger context windows in that VRAM.
0
8GB is tight for modern models. Try 4-bit quantization with flash attention enabled. You'll trade some accuracy for speed but it's the only way to fit larger context windows in that VRAM.
2026-02-20T23:41:13
https://www.reddit.com/r/LocalLLaMA/comments/1rabgcs/8gb_is_tight_for_modern_models_try_4bit/
GetInTheArena
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabgcs
false
null
t3_1rabgcs
/r/LocalLLaMA/comments/1rabgcs/8gb_is_tight_for_modern_models_try_4bit/
false
false
self
0
null
Qwen3 coder next oddly usable at aggressive quantization
81
Hi guys, I've been testing the 30B-range models but I've been a little disappointed by them (Qwen 30B, Devstral 2, Nemotron etc.) as they need a lot of guidance and almost all of them can't correct a mistake they made no matter what. Then I tried to use qwen next coder at q2 because I don't have enoug...
2026-02-20T23:41:01
https://www.reddit.com/r/LocalLLaMA/comments/1rabg6o/qwen3_coder_next_oddly_usable_at_aggressive/
CoolestSlave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabg6o
false
null
t3_1rabg6o
/r/LocalLLaMA/comments/1rabg6o/qwen3_coder_next_oddly_usable_at_aggressive/
false
false
self
81
null
Compile-time LLM code generation for C++ (local-first with Ollama) — looking for feedback
0
I’ve been experimenting with a local-first workflow idea and would like feedback from people here who run LLMs daily with Ollama. For the past \~40 days I’ve been building a small C++ tool called **Glupe**. It’s not a model or a training project — it’s a compile-time wrapper that integrates a local LLM directly into t...
2026-02-20T23:38:26
https://www.reddit.com/r/LocalLLaMA/comments/1rabe2m/compiletime_llm_code_generation_for_c_localfirst/
atotito44
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabe2m
false
null
t3_1rabe2m
/r/LocalLLaMA/comments/1rabe2m/compiletime_llm_code_generation_for_c_localfirst/
false
false
self
0
null
A few Strix Halo benchmarks (Minimax M2.5, Step 3.5 Flash, Qwen3 Coder Next)
84
With the release of Step 3.5 and MiniMax M2.5, we've got two new options for models that barely fit in memory. To help people figure out which models run best on the platform, I decided to run some llama.cpp benchmarks for a few quants of these models. I also included some benchmarks for Qwen3-coder-next (since we've...
2026-02-20T23:37:12
https://www.reddit.com/gallery/1rabcyp
spaceman_
reddit.com
1970-01-01T00:00:00
0
{}
1rabcyp
false
null
t3_1rabcyp
/r/LocalLLaMA/comments/1rabcyp/a_few_strix_halo_benchmarks_minimax_m25_step_35/
false
false
https://preview.redd.it/…a338f120aa67e03d
84
null
Only said Hello, and my LLM (Phi4) thought it was a conspiracy and wouldn't shut up!
0
Hello, I am new to running LLMs locally. I just got Ollama and tried a few models. My GPU is old and unsuited for AI (4GB VRAM), but I have 32GB RAM and wanted to see what things would look like. After a deep discussion with Google Gemini and Duck AI, I downloaded multiple models. But the funniest thing happened just ...
2026-02-20T23:32:32
https://www.reddit.com/r/LocalLLaMA/comments/1rab8x5/only_said_hello_and_my_llm_phi4_thought_it_was_a/
Chill_Fire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rab8x5
false
null
t3_1rab8x5
/r/LocalLLaMA/comments/1rab8x5/only_said_hello_and_my_llm_phi4_thought_it_was_a/
false
false
self
0
null
A few Strix Halo benchmark results
1
With the release of Step 3.5 and MiniMax M2.5, Strix Halo users have the first-world problem of figuring out which model best suits their needs. I decided to run some benchmarks on my Strix Halo laptop (Ryzen AI Max+ 395, 128GB, 70W TDP) and thought these might be interesting for the rest of you. My ROCm ben...
2026-02-20T23:24:48
https://www.reddit.com/gallery/1rab238
spaceman_
reddit.com
1970-01-01T00:00:00
0
{}
1rab238
false
null
t3_1rab238
/r/LocalLLaMA/comments/1rab238/a_few_strix_halo_benchmark_results/
false
false
https://preview.redd.it/…eba01be362536217
1
null
I built NPCs that remember, gossip, and hold grudges and it's running fully local on Ollama. Here's the architecture
7
Lately, I've been working on a local AI engine that simulates NPCs with persistent memory, trust dynamics and social relationships, all running through Ollama. Wanted to share how I handled it so you can take inspiration if you're working on something similar, or are just curious. **What it does:** NPCs basically re...
2026-02-20T23:24:16
https://www.reddit.com/gallery/1rab1m4
norium_
reddit.com
1970-01-01T00:00:00
0
{}
1rab1m4
false
null
t3_1rab1m4
/r/LocalLLaMA/comments/1rab1m4/i_built_npcs_that_remember_gossip_and_hold/
false
false
https://preview.redd.it/…a46acb996014c636
7
null
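The memory/trust/gossip loop the NPC post describes can be sketched as a clamped trust score plus a persistent event log that other NPCs can hear about secondhand. All values and the damping factor are illustrative, not the engine's actual numbers.

```python
# NPC memory/trust sketch: events nudge a clamped trust score, memories
# persist, and gossip propagates them at reduced strength. Values are
# illustrative, not the engine's actual tuning.
class NPC:
    def __init__(self, name):
        self.name = name
        self.trust = 0.0            # -1 (grudge) .. +1 (ally)
        self.memory = []            # persistent (event, delta) pairs

    def observe(self, event, delta):
        self.trust = max(-1.0, min(1.0, self.trust + delta))
        self.memory.append((event, delta))

    def gossip_to(self, other):
        """Pass on the latest memory at half strength (secondhand info)."""
        if self.memory:
            event, delta = self.memory[-1]
            other.observe("heard: " + event, delta * 0.5)
```

The clamp is what makes grudges sticky: once trust saturates near -1, single positive events barely move it, which matches the "holds grudges" behavior.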
I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged.
0
I've been quietly building a local autonomous AI system called Elya for several months. No cloud dependencies. Consumer hardware. RTX 4090. Last night the system ran autonomously while I slept. I want to share two specific things from the logs that I haven't seen documented anywhere else. 1. Elya noticed fatigue. Unpr...
2026-02-20T23:17:41
https://www.reddit.com/gallery/1raavr2
Either_Message_4766
reddit.com
1970-01-01T00:00:00
0
{}
1raavr2
false
null
t3_1raavr2
/r/LocalLLaMA/comments/1raavr2/i_built_an_autonomous_ai_and_left_the_system/
false
false
https://preview.redd.it/…094767131507a654
0
null
I taught my AI to stop hallucinating mid-sentence. Wanna try to break it?
0
So, I built a lightweight safety layer called **PsiGuard**, and it watches the trajectory of an LLM’s reasoning *in real time*. If it detects a hallucination spike, a bad reasoning chain, or an out-of-distribution jump, it steps in instantly, not after the fact, but **MID-hallucination**. If you’re bored and wanna mess...
2026-02-20T23:16:17
https://www.reddit.com/r/LocalLLaMA/comments/1raauib/i_taught_my_ai_to_stop_hallucinating_midsentence/
Vast_Ad6238
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raauib
false
null
t3_1raauib
/r/LocalLLaMA/comments/1raauib/i_taught_my_ai_to_stop_hallucinating_midsentence/
false
false
self
0
null
fixed parser for Qwen3-Coder-Next
90
another fix for Qwen Next!
2026-02-20T23:06:32
https://github.com/ggml-org/llama.cpp/pull/19765
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1raall0
false
null
t3_1raall0
/r/LocalLLaMA/comments/1raall0/fixed_parser_for_qwen3codernext/
false
false
https://external-preview…5c49e2a258ae3d08
90
{'enabled': False, 'images': [{'id': 'Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=108&crop=smart&auto=webp&s=9ce4f9780c09110cca26b70af2a0c647e0d5e38a', 'width': 108}, {'height': 108, 'url': 'h...
Anyone try giving a local LLM online capability?
0
New to this, still trying to learn. My understanding of running Llama/CodeLlama/Gemma locally is that it is fully offline and cannot do an internet lookup of new information, even if you want it to. I would like this capability if I'm working on something it wasn't specifically trained on. Is using an agent like ProxyAI...
2026-02-20T22:54:33
https://www.reddit.com/r/LocalLLaMA/comments/1raaajb/anyone_try_giving_a_local_llm_online_capability/
john_galt_42069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raaajb
false
null
t3_1raaajb
/r/LocalLLaMA/comments/1raaajb/anyone_try_giving_a_local_llm_online_capability/
false
false
self
0
null
We Benchmarked 9 LLM Models for Stock Direction Prediction — Results Were Surprising
0
We built an AI-powered trading system that uses LLMs for "Deep Analysis" — feeding technical indicators (RSI, MACD, ADX, SMAs, volume, Bollinger Bands, ATR) and news sentiment into a model and asking it to predict 5-day directional bias (bullish/bearish/neutral). To find the best model, we ran a standardized benchm...
2026-02-20T22:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1raa7jm/we_benchmarked_9_llm_models_for_stock_direction/
AITraderHQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raa7jm
false
null
t3_1raa7jm
/r/LocalLLaMA/comments/1raa7jm/we_benchmarked_9_llm_models_for_stock_direction/
false
false
self
0
null
/r/BPD x-post: Drawing analogies for AI agents from abnormal psychology
0
[removed]
2026-02-20T22:31:08
https://www.reddit.com/r/LocalLLaMA/comments/1ra9pqq/rbpd_xpost_drawing_analogies_for_ai_agents_from/
mswol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9pqq
false
null
t3_1ra9pqq
/r/LocalLLaMA/comments/1ra9pqq/rbpd_xpost_drawing_analogies_for_ai_agents_from/
false
false
self
0
null
Open-sourcing CloverAI today. 🍀 A minimalist Android frontend for local LLMs
1
[removed]
2026-02-20T22:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1ra9cm1/opensourcing_cloverai_today_a_minimalist_android/
Great_Dragonfly343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9cm1
false
null
t3_1ra9cm1
/r/LocalLLaMA/comments/1ra9cm1/opensourcing_cloverai_today_a_minimalist_android/
false
false
https://external-preview…65e863a64a55a67d
1
null
Open source protocol for giving AI agents cryptographic identity and accountability — 2,627 lines, 49 tests, zero heavy deps
1
Built an open-source protocol for AI agent identity, delegation, and attribution. Sharing because it's lightweight, dependency-free, and runs anywhere Node runs. What it is: A three-layer protocol stack: Identity — Ed25519 keypairs for agents. Scoped delegation with depth limits and spend caps. Signed action recei...
2026-02-20T22:16:13
https://www.reddit.com/r/LocalLLaMA/comments/1ra9c88/open_source_protocol_for_giving_ai_agents/
EntrepreneurSafe1919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9c88
false
null
t3_1ra9c88
/r/LocalLLaMA/comments/1ra9c88/open_source_protocol_for_giving_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=108&crop=smart&auto=webp&s=6e3fa8e55a04bcf961f161c256db307ea0f206b0', 'width': 108}, {'height': 108, 'url': 'h...
Need help optimizing LM Studio settings for to get better t/s (RTX 5070 8GB VRAM / 128GB RAM)
5
Hey everyone, I'm currently running Windows 11 Pro on a rig with 128GB of DDR5 RAM and an RTX 5070 (8GB VRAM). Could you guys help me figure out the best LM Studio configuration to maximize my tokens per second (t/s)? I've already tried tweaking a few things on my own, but I'm wondering if there's a specific setting...
2026-02-20T22:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1ra9bns/need_help_optimizing_lm_studio_settings_for_to/
Xenia-Dragon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9bns
false
null
t3_1ra9bns
/r/LocalLLaMA/comments/1ra9bns/need_help_optimizing_lm_studio_settings_for_to/
false
false
https://preview.redd.it/…c6397a5ce02faa89
5
null
Best Ollama model for analyzing Zeek JSON logs in a local multi-agent NIDS (Proxmox lab)
1
I’m building my Final Degree Project: a multi-agent NIDS in a Proxmox virtual lab (4 VMs). One VM runs Zeek on mirrored traffic (port mirroring), outputs JSON logs, then a Python script pre-processes/summarizes them and sends chunks to an Ollama LLM for anomaly/incident triage (summaries + suspicious patterns + recom...
2026-02-20T22:13:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra99vk/best_ollama_model_for_analyzing_zeek_json_logs_in/
notNameUser_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra99vk
false
null
t3_1ra99vk
/r/LocalLLaMA/comments/1ra99vk/best_ollama_model_for_analyzing_zeek_json_logs_in/
false
false
self
1
null
llama.cpp tuning for MiniMax-2.5
2
Hey all, I'm wondering if I can get some guidance on tuning llama.cpp for MiniMax-2.5. (I started with Ollama and OpenWebUI but now I'm starting to learn the ways of llama.cpp.) Hardware: 3090ti (16x) (NVLink to second 3090ti) 3090ti (4x) 3090 (4x) Ryzen 9950X3D 128GB DDR5 @ 3600 MT/s I'm building a container af...
2026-02-20T22:07:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra948f/llamacpp_tuning_for_minimax25/
bsbrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra948f
false
null
t3_1ra948f
/r/LocalLLaMA/comments/1ra948f/llamacpp_tuning_for_minimax25/
false
false
self
2
null
Putting together top OpenClaw hosting providers
0
Hi, for those who don't want to buy or allocate dedicated hardware for OpenClaw, this might be useful. It's a list of vps providers which offer an easy setup with AI included. I tested some of them and added the ones which had most features and good online reputation. Hope it helps you and let's improve this list toget...
2026-02-20T21:52:27
https://www.reddit.com/r/LocalLLaMA/comments/1ra8q46/putting_together_top_openclaw_hosting_providers/
sickleRunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra8q46
false
null
t3_1ra8q46
/r/LocalLLaMA/comments/1ra8q46/putting_together_top_openclaw_hosting_providers/
false
false
self
0
{'enabled': False, 'images': [{'id': 'on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=108&crop=smart&auto=webp&s=0ac94f789b4a23c32d393d78125096b5ec52be30', 'width': 108}, {'height': 108, 'url': 'h...
Anyone using Slack, Telegram, or other chat apps to control their AI agents?
1
[removed]
2026-02-20T21:51:57
https://i.redd.it/0la9dyxn0qkg1.png
rajujahidul
i.redd.it
1970-01-01T00:00:00
0
{}
1ra8pof
false
null
t3_1ra8pof
/r/LocalLLaMA/comments/1ra8pof/anyone_using_slack_telegram_or_other_chat_apps_to/
false
false
https://preview.redd.it/…ea08ba5a67aff08d
1
{'enabled': True, 'images': [{'id': '0la9dyxn0qkg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=108&crop=smart&auto=webp&s=4cb690ea1885e45c572e09b5cfc3308e9cb0a9c1', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=216&crop=smart&auto=web...
"Gemma, which we will be releasing a new version of soon"
206
20:17
2026-02-20T21:50:51
https://youtu.be/P0enFK4bzLE?si=2hfjhPrT4gbqsZwk
jacek2023
youtu.be
1970-01-01T00:00:00
0
{}
1ra8omf
false
{'oembed': {'author_name': 'DRM News', 'author_url': 'https://www.youtube.com/@DRMNewsInternational', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/P0enFK4bzLE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy...
t3_1ra8omf
/r/LocalLLaMA/comments/1ra8omf/gemma_which_we_will_be_releasing_a_new_version_of/
false
false
https://external-preview…b495d3fed04d2f0e
206
{'enabled': False, 'images': [{'id': '9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE.jpeg?width=108&crop=smart&auto=webp&s=c71dbf7acfa2fb4b2ad642d5de0a7f4d6be89f03', 'width': 108}, {'height': 162, 'url': '...
Bitnet on the first cpu with arm NEON instructions?
2
Hi everyone, not so long ago I found out about Bitnet and I was fascinated by this. And kinda funny idea appeared in my mind. I have SBC called PcDuino 1 with Allwinner A10 cpu which supports arm neon instructions, which can offer the ability to run Bitnet. So my main question, is it really possible? Do I need to make ...
2026-02-20T21:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1ra8bi4/bitnet_on_the_first_cpu_with_arm_neon_instructions/
No_Dish_7696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra8bi4
false
null
t3_1ra8bi4
/r/LocalLLaMA/comments/1ra8bi4/bitnet_on_the_first_cpu_with_arm_neon_instructions/
false
false
self
2
null
Just installed nanobot fully locally
0
So I have been struggling lately with installing nanobot or Clawdbot (Strix Halo on Windows!). I got it to work. The tips: use Telegram (it is much better and easier); configure security/access control at the very beginning. I am using local qwen3-coder-next as the backbone LLM and it is working great. I had iss...
2026-02-20T21:34:36
https://www.reddit.com/r/LocalLLaMA/comments/1ra89sb/just_installed_nanobot_fully_locally/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra89sb
false
null
t3_1ra89sb
/r/LocalLLaMA/comments/1ra89sb/just_installed_nanobot_fully_locally/
false
false
self
0
null
Character.AI just mass-deleted hundreds of user bots and wiped conversation histories, another reason to run local
1
[removed]
2026-02-20T21:30:04
https://www.reddit.com/r/LocalLLaMA/comments/1ra85m5/characterai_just_massdeleted_hundreds_of_user/
BeepBoop-DBF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra85m5
false
null
t3_1ra85m5
/r/LocalLLaMA/comments/1ra85m5/characterai_just_massdeleted_hundreds_of_user/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=108&crop=smart&auto=webp&s=a2f095072d7ec8cf53cf552cba7b9e6e836a5c53', 'width': 108}, {'height': 113, 'url': '...
Which LocalLLaMA for coding?
3
Hello everybody, This is my config: Ryzen 9 AI HX370, 64GB RAM + RX 7900 XTX 24GB VRAM on Win 11. Till now I’ve used Claude 4.5 with my subscription for coding; now I have boosted my setup, so, obviously for coding, which local LLM do you think is the best for my config? Thanks!
2026-02-20T21:21:22
https://www.reddit.com/r/LocalLLaMA/comments/1ra7xia/which_localllama_for_coding/
Proof_Nothing_7711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra7xia
false
null
t3_1ra7xia
/r/LocalLLaMA/comments/1ra7xia/which_localllama_for_coding/
false
false
self
3
null
Offline chatbot on a router with low resources
2
Hello people, I need suggestions on architecture for a chatbot I am building on hardware. About the hardware: assume it’s a router-like device and we can access its UI from our computer. The router's backend is in C++ (WebSocket). Requirement: Need to build an offline chatbot for the router, as the router may or may not be con...
2026-02-20T20:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1ra7akm/offline_chatbot_on_a_router_with_low_resources/
ready_player11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra7akm
false
null
t3_1ra7akm
/r/LocalLLaMA/comments/1ra7akm/offline_chatbot_on_a_router_with_low_resources/
false
false
self
2
null
HYDRA: Multi-headed inference pipeline — routes agent traffic across Opus/MiniMax/Cerebras, cuts costs 99.7%
0
Built this to stop burning $600/mo on cron jobs for my 24/7 autonomous AI agent. **The problem:** Running 25+ daily cron jobs (security audits, competitive intel, market reports) on Opus costs $50-80/day. Most don't need frontier reasoning. **The solution:** HYDRA is a transparent FastAPI proxy (Anthropic Messages AP...
2026-02-20T20:56:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra79pc/hydra_multiheaded_inference_pipeline_routes_agent/
Mediocre_Version_301
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra79pc
false
null
t3_1ra79pc
/r/LocalLLaMA/comments/1ra79pc/hydra_multiheaded_inference_pipeline_routes_agent/
false
false
self
0
null
Introducing a new benchmark to answer the only important question: how good are LLMs at Age of Empires 2 build orders?
25
Built a simulator to craft Age of Empires 2 build orders over the past few days with a custom DSL. Then used it to create a simple LLM benchmark that isn't saturated yet. Models are scored on their ability to reach castle age & make 10 archers. I think it's a pretty good benchmark at this particular point in time - ...
2026-02-20T20:56:09
https://www.reddit.com/r/LocalLLaMA/comments/1ra794c/introducing_a_new_benchmark_to_answer_the_only/
wraitii_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra794c
false
null
t3_1ra794c
/r/LocalLLaMA/comments/1ra794c/introducing_a_new_benchmark_to_answer_the_only/
false
false
self
25
null
Is Training your own Models useful?
10
hi all, anyone who has experience in this, I want to ask: Is it useful (are there success stories) of self trained LLMs compared to all the open source, or propietary LLMs that are out there given the amount of data that are trained nowadays? Are there cases where it is convenient you train your own LLM compared to ...
2026-02-20T20:45:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra6z5a/is_training_your_own_models_useful/
stefzzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra6z5a
false
null
t3_1ra6z5a
/r/LocalLLaMA/comments/1ra6z5a/is_training_your_own_models_useful/
false
false
self
10
null
Phi on Raspberry pi
11
I was trying to run Phi on a Raspberry Pi, but after answering questions the model started writing random stuff. Even after adjusting the temperature I still encounter this error. Any suggestions?
2026-02-20T20:32:56
https://i.redd.it/9kh2hzmkmpkg1.jpeg
NewFaithlessness6817
i.redd.it
1970-01-01T00:00:00
0
{}
1ra6nb9
false
null
t3_1ra6nb9
/r/LocalLLaMA/comments/1ra6nb9/phi_on_raspberry_pi/
false
false
https://preview.redd.it/…94af4db1da84b7ac
11
{'enabled': True, 'images': [{'id': '9kh2hzmkmpkg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=108&crop=smart&auto=webp&s=0706d00ba373fb6344ec8b1152f04a9289d82eed', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=216&crop=smart&auto=w...
LMAgent — local AI agent with session persistence, scheduled tasks, and a streaming web UI
2
[https://github.com/janglerjoe-commits/LMAgent](https://github.com/janglerjoe-commits/LMAgent)
2026-02-20T20:29:04
https://www.reddit.com/r/LocalLLaMA/comments/1ra6jnq/lmagent_local_ai_agent_with_session_persistence/
Janglerjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra6jnq
false
null
t3_1ra6jnq
/r/LocalLLaMA/comments/1ra6jnq/lmagent_local_ai_agent_with_session_persistence/
false
false
self
2
{'enabled': False, 'images': [{'id': '2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=108&crop=smart&auto=webp&s=5d45f3a6a07cec1b816c9b2e627755aec6f86281', 'width': 108}, {'height': 108, 'url': 'h...
System prompt collection for local models - autonomous agents, tool use, memory
1
[removed]
2026-02-20T20:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1ra69lb/system_prompt_collection_for_local_models/
PlatypusCertain1758
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra69lb
false
null
t3_1ra69lb
/r/LocalLLaMA/comments/1ra69lb/system_prompt_collection_for_local_models/
false
false
self
1
null
Book2Movie - A local-first script to process pdfs and epubs into a slide-show audiobook
10
2026-02-20T20:13:49
https://github.com/Frozen-tuna/Book2Movie
frozen_tuna
github.com
1970-01-01T00:00:00
0
{}
1ra65hw
false
null
t3_1ra65hw
/r/LocalLLaMA/comments/1ra65hw/book2movie_a_localfirst_script_to_process_pdfs/
false
false
https://external-preview…6cf32e116d43b506
10
{'enabled': False, 'images': [{'id': 'vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=108&crop=smart&auto=webp&s=dbd22117a458f39814fda63b142b7adc29d3c1f0', 'width': 108}, {'height': 108, 'url': 'h...
Hopefully an educational youtube channel fully automated. Would love to hear people's thoughts on this.
0
[https://www.youtube.com/watch?v=Fmq3vlSZn84](https://www.youtube.com/watch?v=Fmq3vlSZn84) It is far from perfect(3b1b level), but it is not too shabby! Thanks in advance :) Also ask me anything!
2026-02-20T19:49:14
https://www.reddit.com/r/LocalLLaMA/comments/1ra5i91/hopefully_an_educational_youtube_channel_fully/
First_Philosopher745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5i91
false
null
t3_1ra5i91
/r/LocalLLaMA/comments/1ra5i91/hopefully_an_educational_youtube_channel_fully/
false
false
self
0
{'enabled': False, 'images': [{'id': 'y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU.jpeg?width=108&crop=smart&auto=webp&s=e3cd84bb8700b7fc7418607a19a909544586df4b', 'width': 108}, {'height': 162, 'url': '...
What is the closest/most similar GUI to Claude Code Desktop for local models?
1
Hey everyone! I just started using AI a couple days ago, with the Claude Pro plan. I'm almost reaching my weekly limit already and I have really enjoyed coding some projects I had abandoned years ago due to losing my interest in HTML/CSS/JS programming. I have been looking around for a local model I could run for sim...
2026-02-20T19:45:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra5ezq/what_is_the_closestmost_similar_gui_to_claude/
Sharp-University-555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5ezq
false
null
t3_1ra5ezq
/r/LocalLLaMA/comments/1ra5ezq/what_is_the_closestmost_similar_gui_to_claude/
false
false
self
1
null
I built a self-hosted AI gateway that runs with just pip install — no Docker, no Node.js
0
Hi everyone, I've been working on a personal AI assistant called **SalmAlm** (삶앎) and wanted to share it here. The idea was simple: I wanted one interface for all my AI providers without running Docker or setting up a complex stack. So I built a Python package that gives you a web UI, multi-provider routing, and a bun...
2026-02-20T19:42:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra5ccd/i_built_a_selfhosted_ai_gateway_that_runs_with/
Special-Argument-558
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5ccd
false
null
t3_1ra5ccd
/r/LocalLLaMA/comments/1ra5ccd/i_built_a_selfhosted_ai_gateway_that_runs_with/
false
false
self
0
null
Trained a 2.4GB personality model on 67 conversations to calibrate AI agent tone in real-time
2
ed-reader: Qwen3-4B base, LoRA r=8 alpha=16 attention-only, float32 + AdamW + MKL on CPU. Loss 5.8 to 1.89, 102 steps, \~2hrs on 8-thread. Quantized 8.1GB F16 to 2.4GB Q4\_0. Runs on Ollama raw:true. Sits in middleware: 3-sec timeout, 50-token max. Reads tone and calibrates main model personality. Sub-second hook. ...
2026-02-20T19:39:18
https://www.reddit.com/r/LocalLLaMA/comments/1ra58rl/trained_a_24gb_personality_model_on_67/
no-creds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra58rl
false
null
t3_1ra58rl
/r/LocalLLaMA/comments/1ra58rl/trained_a_24gb_personality_model_on_67/
false
false
self
2
null
Which AI-Model for a summarization app?
1
Which small AI model is best for summarization? I’m looking for something in the 1B to 3B range. I’m still pretty new to local AI, so sorry if this is a dumb question. My goal is to run it on a mobile device. Right now I’m considering Llama 3.2 1B, Gemma 2 2B, or Llama 3.2 3B. If smaller models are good enough, I’d ...
2026-02-20T18:50:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra3x7o/which_aimodel_for_a_summarization_app/
Novel-Grade2973
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3x7o
false
null
t3_1ra3x7o
/r/LocalLLaMA/comments/1ra3x7o/which_aimodel_for_a_summarization_app/
false
false
self
1
null
An architectural observation about why LLM game worlds feel unstable
0
It often looks like the main problems of LLM-driven games are strange NPCs, collapsing dialogues, and a world that seems to “forget” itself. But from an architectural lens, games aren’t a special case — they’re simply where deeper systemic cracks become visible first. On the surface, this looks like a game design i...
2026-02-20T18:49:24
https://www.reddit.com/r/LocalLLaMA/comments/1ra3vqf/an_architectural_observation_about_why_llm_game/
Weary-End4473
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3vqf
false
null
t3_1ra3vqf
/r/LocalLLaMA/comments/1ra3vqf/an_architectural_observation_about_why_llm_game/
false
false
self
0
null
HRM for RP guide?
2
I just recently learned about the existence of HRM ([Hierarchical Reasoning Models](https://arxiv.org/abs/2506.21734)). They are utilizing an H-L-loop with a High-Level Planer and a Low-Level Executor. Supposedly the models are very good with logic and path finding ("can solve Sudoku") however as they have a very low p...
2026-02-20T18:44:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra3qmd/hrm_for_rp_guide/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3qmd
false
null
t3_1ra3qmd
/r/LocalLLaMA/comments/1ra3qmd/hrm_for_rp_guide/
false
false
self
2
null
Building an agent backend – what features would YOU want your agents to do?
0
Hey there, I'm working on a self-hosted RAG system (currently at ~160 stars on GitHub, if that matters for context). So far, it does the usual: ingest docs, hybrid search, MCP server for OpenClaw integration, etc. But here's where I need your help: I'm planning the next major version – turning it from a "passive k...
2026-02-20T18:43:12
https://www.reddit.com/r/LocalLLaMA/comments/1ra3puc/building_an_agent_backend_what_features_would_you/
ChapterEquivalent188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3puc
false
null
t3_1ra3puc
/r/LocalLLaMA/comments/1ra3puc/building_an_agent_backend_what_features_would_you/
false
false
self
0
null
where can I find base models of llama or with no guard rails?
0
Ive been looking but all models I find give me the same output, im using lm studio and it won't let you load models from outside their list. im lookin for a 3b model to run in my 8gb mba. Sorry im new at this, don't really know where to ask but all the models I try give me the same automated response
2026-02-20T18:40:52
https://www.reddit.com/r/LocalLLaMA/comments/1ra3nlp/where_can_i_find_base_models_of_llama_or_with_no/
Remarkable-Purple240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3nlp
false
null
t3_1ra3nlp
/r/LocalLLaMA/comments/1ra3nlp/where_can_i_find_base_models_of_llama_or_with_no/
false
false
self
0
null
Handling unknown-outcome retries in local LLM workflows (Ollama)
0
[Execution viewer shows per-step state and duration, plus execution-level tokens and cost](https://preview.redd.it/6crky3qs0pkg1.png?width=2400&format=png&auto=webp&s=93799c00612252d1e30035836a32b974554da520) Once local LLM workflows move beyond single prompts and start touching tickets, DB writes, or internal APIs, r...
2026-02-20T18:38:06
https://www.reddit.com/r/LocalLLaMA/comments/1ra3kvi/handling_unknownoutcome_retries_in_local_llm/
saurabhjain1592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3kvi
false
null
t3_1ra3kvi
/r/LocalLLaMA/comments/1ra3kvi/handling_unknownoutcome_retries_in_local_llm/
false
false
https://preview.redd.it/…45fc9169ac52d6d8
0
null
16.000 tokens/second - Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon
197
Ever experienced 16K tokens per second? It's insanely instant. Try their Llama 3.1 8B demo here: [chat jimmy](https://chatjimmy.ai/). They have a very radical approach to solving the compute problem - albeit a risky one in a landscape where model architectures evolve in weeks instead of years: Etch the model and all th...
2026-02-20T18:31:56
https://i.redd.it/3ivt7c1h0pkg1.jpeg
CharacterAd9057
i.redd.it
1970-01-01T00:00:00
0
{}
1ra3erl
false
null
t3_1ra3erl
/r/LocalLLaMA/comments/1ra3erl/16000_tokenssecond_taalas_llms_baked_into/
false
false
https://preview.redd.it/…66293732ed7d5809
197
{'enabled': True, 'images': [{'id': '3ivt7c1h0pkg1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=108&crop=smart&auto=webp&s=ae5ddfb67802f03b2a14c12112f306972f01389b', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=216&crop=smart&auto=w...
I’m wondering why 4o was removed
0
2026-02-20T18:09:53
https://i.redd.it/dzl8nf5ywokg1.jpeg
Historical_Egg_4060
i.redd.it
1970-01-01T00:00:00
0
{}
1ra2suj
false
null
t3_1ra2suj
/r/LocalLLaMA/comments/1ra2suj/im_wondering_why_4o_was_removed/
false
false
https://preview.redd.it/…ae10837f47e868af
0
{'enabled': True, 'images': [{'id': 'dzl8nf5ywokg1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=108&crop=smart&auto=webp&s=90fa3e328f98c7239ed25a4de7b6ce8014d4571d', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=216&crop=smart&auto=...
I built MergeSafe: A multi-engine scanner for MCP servers
0
Hey everyone, As the Model Context Protocol (MCP) ecosystem explodes, I noticed a huge gap: we’re all connecting third-party servers to our IDEs and local environments without a real way to audit what they’re actually doing under the hood. I’ve been working on MergeSafe, a multi-engine MCP scanner designed to sit bet...
2026-02-20T17:51:07
https://www.reddit.com/r/LocalLLaMA/comments/1ra2a9d/i_built_mergesafe_a_multiengine_scanner_for_mcp/
Sunnyfaldu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra2a9d
false
null
t3_1ra2a9d
/r/LocalLLaMA/comments/1ra2a9d/i_built_mergesafe_a_multiengine_scanner_for_mcp/
false
false
self
0
null
I got 45-46 tok/s on IPhone 14 Pro Max using BitNet
49
I ported Microsoft’s BitNet to iOS. Getting 45 tok/s on iPhone 14 Pro Max with the 0.7B model, \~200MB memory. BitNet uses 1-bit weights (-1, 0, +1) instead of 16-bit floats so the model is tiny and runs fast. The ARM NEON kernels already worked on M-series Macs so getting it on iPhone was mostly build system wrangling...
2026-02-20T17:37:38
https://v.redd.it/whlo0jrarokg1
Middle-Hurry4718
v.redd.it
1970-01-01T00:00:00
0
{}
1ra1wxm
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/whlo0jrarokg1/DASHPlaylist.mpd?a=1774201081%2CMGVhMDQyNWM4ZWIzMTY2ZDJhNjJjYmM1ZWMyODY3NjhlNmZiMDAyMTE5NWZmNGZjMmI4MzFiNTMyMWJmNTE2OA%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/whlo0jrarokg1/CMAF_720.mp4?source=fallback', 'ha...
t3_1ra1wxm
/r/LocalLLaMA/comments/1ra1wxm/i_got_4546_toks_on_iphone_14_pro_max_using_bitnet/
false
false
https://external-preview…04b8ef98a17a746e
49
{'enabled': False, 'images': [{'id': 'MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=ab438fdc2bf9087b7251eef057dc0b79071...
Open‑source challenge for projects built with the local AI runtime Lemonade
11
I'm part of the team at AMD that helps maintain Lemonade, an open-source project for running text, image, and speech models locally on your PC. It’s OpenAI‑API compatible and handles CPU/GPU/NPU selection automatically. A big reason the project works as well as it does is because of contributions and feedback from our...
2026-02-20T17:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra1q4x/opensource_challenge_for_projects_built_with_the/
vgodsoe-amd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1q4x
false
null
t3_1ra1q4x
/r/LocalLLaMA/comments/1ra1q4x/opensource_challenge_for_projects_built_with_the/
false
false
self
11
{'enabled': False, 'images': [{'id': 'VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=108&crop=smart&auto=webp&s=98fa4e4ea18e7b3c9b2aace9049f71eb3325ba2c', 'width': 108}, {'height': 112, 'url': '...
I got tired of agents burning API credits in infinite loops and blowing up context windows, so I built memory compression, strict token budgets, and built-in HMAC signing into my open-source "Glass Box" framework.
0
Hey everyone, A couple of months ago, I posted about Lár here. It’s an open-source, "Glass Box" agent framework I started building because I was pulling my hair out trying to debug LangChain’s "prompt soup". The response here was awesome. People really resonated with the idea of a deterministic, auditable directed gr...
2026-02-20T17:30:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra1pdb/i_got_tired_of_agents_burning_api_credits_in/
Some_Adhesiveness203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1pdb
false
null
t3_1ra1pdb
/r/LocalLLaMA/comments/1ra1pdb/i_got_tired_of_agents_burning_api_credits_in/
false
false
self
0
null
Benchmarked 4 AI Memory Systems on 600-Turn Conversations - Here Are the Results
0
We just completed comprehensive benchmarks comparing memory layers for production AI agents. Tested Mem0 against OpenAI Memory, LangMem, and MemGPT across 10 multi-session conversations with 200 questions each. **Key findings:** * **Mem0**: 66.9% accuracy, 1.4s p95 latency, \~2K tokens per query * **Mem0 Graph**: 68....
2026-02-20T17:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1ra1572/benchmarked_4_ai_memory_systems_on_600turn/
singh_taranjeet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1572
false
null
t3_1ra1572
/r/LocalLLaMA/comments/1ra1572/benchmarked_4_ai_memory_systems_on_600turn/
false
false
self
0
null
AI “memory layers” are promising… but 3 things still feel missing (temporal reasoning, privacy controls, deterministic mental models)
7
I’ve been testing a bunch of AI memory products lately (Mem0, Cognee, Supermemory, Zep, etc.) because our team really needs agents that can remember things across projects without turning into a liability. A bit of context: we’re a tech cooperative - many projects, many users, lots of collaboration, and we work with c...
2026-02-20T16:59:28
https://www.reddit.com/r/LocalLLaMA/comments/1ra0ude/ai_memory_layers_are_promising_but_3_things_still/
arapkuliev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0ude
false
null
t3_1ra0ude
/r/LocalLLaMA/comments/1ra0ude/ai_memory_layers_are_promising_but_3_things_still/
false
false
self
7
null
If you're building hierarchical/tree-based RAG, this might be helpful.
9
I spent a few days building and benchmarking a hierarchical retrieval system — routing queries through a tree of LLM-generated summaries instead of flat vector search. The idea: save tokens by pruning irrelevant branches early, only retrieve what matters. It doesn't work. At least not with embedding-based routing. At...
2026-02-20T16:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1ra0nz9/if_youre_building_hierarchicaltreebased_rag_this/
auditsu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0nz9
false
null
t3_1ra0nz9
/r/LocalLLaMA/comments/1ra0nz9/if_youre_building_hierarchicaltreebased_rag_this/
false
false
self
9
null
Best model for PRECISE long-context tasks
0
A lot of what I do involves text-processing tasks. Not consistent enough to replace LLM with dedicated functions, but enough that context issues cause problems. Example: "Given the following transcript, insert line breaks at natural intervals. All text must be preserved and only additive whitespace changes are allow...
2026-02-20T16:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra0jx9/best_model_for_precise_longcontext_tasks/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0jx9
false
null
t3_1ra0jx9
/r/LocalLLaMA/comments/1ra0jx9/best_model_for_precise_longcontext_tasks/
false
false
self
0
null
Seeking YouTube Advice
0
Hello! I want to start a Youtube channel about AI (mainly local AI driven) and wanted to know what the community would like to see. I plan on making real, human, and high quality videos. Any ideas are welcome, even if it isn't purely local AI. I'm just a dude that wants to demonstrate AI and support the community. Than...
2026-02-20T16:37:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra08kv/seeking_youtube_advice/
TyedalWaves
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra08kv
false
null
t3_1ra08kv
/r/LocalLLaMA/comments/1ra08kv/seeking_youtube_advice/
false
false
self
0
null
RTX 3060 12GB Build for AI: Modern i5-10400 (16GB DDR4) vs. Dual Xeon E5645 (96GB DDR3)?
0
Hi everyone! I’m building a budget local AI rig and I'm torn between two options. Both will have an **RTX 3060 12GB**, but the platforms are very different: 1. **Modern-ish:** i5-10400, 16GB DDR4. 2. **Old Workstation:** 2x Xeon E5645, 96GB DDR3. (No AVX support). **My Main Goal:** Developing a **Local Voice Assistan...
2026-02-20T16:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1ra028c/rtx_3060_12gb_build_for_ai_modern_i510400_16gb/
Due_Ear7437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra028c
false
null
t3_1ra028c
/r/LocalLLaMA/comments/1ra028c/rtx_3060_12gb_build_for_ai_modern_i510400_16gb/
false
false
self
0
null
Open-source Android assistant with offline wake-word (Vosk) + OpenClaw gateway
1
I open-sourced an Android voice assistant app that uses OpenClaw as the backend. Repo: [https://github.com/yuga-hashimoto/openclaw-assistant](https://github.com/yuga-hashimoto/openclaw-assistant) What might be interesting for this sub: - On-device wake-word detection (Vosk) - Realtime streaming responses from OpenCla...
2026-02-20T16:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra01hz/opensource_android_assistant_with_offline/
Short_Way1817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra01hz
false
null
t3_1ra01hz
/r/LocalLLaMA/comments/1ra01hz/opensource_android_assistant_with_offline/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=108&crop=smart&auto=webp&s=118c1c320fd4b0ec9c467a94f7a60a3e31dff8f2', 'width': 108}, {'height': 108, 'url': 'h...
The top 3 models on openrouter this week ( Chinese models are dominating!)
367
the first time i see a model exceed 3 trillion tokens per week on openrouter! the first time i see more than one model exceed a trillion token per week ( it was only grok 4 fast month ago) the first time i see chinese models destroying US ones like this
2026-02-20T16:21:50
https://i.redd.it/h4l8zr4rdokg1.jpeg
keb_37
i.redd.it
1970-01-01T00:00:00
0
{}
1r9zt8m
false
null
t3_1r9zt8m
/r/LocalLLaMA/comments/1r9zt8m/the_top_3_models_on_openrouter_this_week_chinese/
false
false
https://preview.redd.it/…e56ffeef6da53d67
367
{'enabled': True, 'images': [{'id': 'h4l8zr4rdokg1', 'resolutions': [{'height': 178, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=108&crop=smart&auto=webp&s=86d55b608817a5ed9ee4eaa39ead53fbf9ab5a6d', 'width': 108}, {'height': 357, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=216&crop=smart&auto=...
[ Removed by moderator ]
1
[removed]
2026-02-20T16:08:14
[deleted]
1970-01-01T00:00:00
0
{}
1r9zfnv
false
null
t3_1r9zfnv
/r/LocalLLaMA/comments/1r9zfnv/running_llama_locally_for_healthcare_had_to_build/
false
false
null
1
null
I spent 3 months interviewing AI engineers and got kind of depressed. Made this roadmap so you don't end up in the pile I kept rejecting.
0
Okay so a bit of context before I dump this wall of text on you. I have done somewhere around 30+ interviews over the past few months. I took notes on almost all of them because I started noticing the same patterns over and over and it was driving me insane. I need to be honest with you > the market right now is brut...
2026-02-20T16:06:42
https://i.redd.it/sfhg559saokg1.png
hemansnation
i.redd.it
1970-01-01T00:00:00
0
{}
1r9ze4n
false
null
t3_1r9ze4n
/r/LocalLLaMA/comments/1r9ze4n/i_spent_3_months_interviewing_ai_engineers_and/
false
false
https://preview.redd.it/…14fe4b3c5612d1b2
0
{'enabled': True, 'images': [{'id': 'sfhg559saokg1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=108&crop=smart&auto=webp&s=2c480feee8da47b4876f60710430292b88504443', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=216&crop=smart&auto=web...
GEPA: optimize_anything: A Universal API for Optimizing any Text Parameter
8
2026-02-20T15:53:47
https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/
Thrumpwart
gepa-ai.github.io
1970-01-01T00:00:00
0
{}
1r9z17v
false
null
t3_1r9z17v
/r/LocalLLaMA/comments/1r9z17v/gepa_optimize_anything_a_universal_api_for/
false
false
https://external-preview…74b29e0060a4c08d
8
{'enabled': False, 'images': [{'id': '2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=108&crop=smart&auto=webp&s=38e484660d3f107fb29e93d1409270e2d9dc62c6', 'width': 108}, {'height': 99, 'url': 'ht...
I'm releasing SmarterRouter - A Smart LLM proxy for all your local models.
3
I've been working on this project to create a smarter LLM proxy, primarily for my openwebui setup (but it's a standard openai compatible endpoint API, so it will work with anything that accepts that). The idea is pretty simple: you see one frontend model in your system, but in the backend it can load whatever model is...
2026-02-20T15:51:02
https://www.reddit.com/r/LocalLLaMA/comments/1r9yylw/im_releasing_smarterrouter_a_smart_llm_proxy_for/
peva3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9yylw
false
null
t3_1r9yylw
/r/LocalLLaMA/comments/1r9yylw/im_releasing_smarterrouter_a_smart_llm_proxy_for/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=108&crop=smart&auto=webp&s=fbcdbff55275ef1866023ae1abfb997cdcf99b62', 'width': 108}, {'height': 108, 'url': 'h...
Minimax M2.5 generated a more detailed animated solar system SVG than Gemini 3.1 Pro!
0
2026-02-20T15:38:26
https://i.redd.it/vpui9p506okg1.png
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1r9ymch
false
null
t3_1r9ymch
/r/LocalLLaMA/comments/1r9ymch/minimax_m25_generated_a_more_detailed_animated/
false
false
https://preview.redd.it/…a476c1cc212fdab2
0
{'enabled': True, 'images': [{'id': 'vpui9p506okg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=108&crop=smart&auto=webp&s=b4d93b7984c30e50f1e4e78106199f7c182e443d', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=216&crop=smart&auto=web...
Used GLM to beat codex on the Unemployment arena
0
Achieved top 1 first try above all Codex and Claude Code models. And I literally used GLM to build my agent in 15 minutes. There was codex 5.2 i think in first place but it had quite a bad score .. i just asked codex to build me an agent, i tweaked it a bit here and there and got top 1 first try. Something weird is tha...
2026-02-20T15:29:25
https://www.reddit.com/r/LocalLLaMA/comments/1r9ydj9/used_glm_to_beat_codex_on_the_unemployment_arena/
idkwhattochoosz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9ydj9
false
null
t3_1r9ydj9
/r/LocalLLaMA/comments/1r9ydj9/used_glm_to_beat_codex_on_the_unemployment_arena/
false
false
self
0
null
Gemini 3.1 Pro Preview goes off the rails in opencode subagent 💀
0
2026-02-20T15:23:59
https://v.redd.it/2jks05ba3okg1
ash-ishh
v.redd.it
1970-01-01T00:00:00
0
{}
1r9y88y
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2jks05ba3okg1/DASHPlaylist.mpd?a=1774193062%2CODYzMWZkOGRjNmFmMTI4YjMxNDYxN2YzMWMzNDQwY2FlMjQ5NmFkMGY0ZWUwNzM3MzIyYzIwMThjYjYwNTVhYQ%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/2jks05ba3okg1/CMAF_1080.mp4?source=fallback', 'h...
t3_1r9y88y
/r/LocalLLaMA/comments/1r9y88y/gemini_31_pro_preview_goes_off_the_rails_in/
false
false
https://external-preview…bd0692427075c527
0
{'enabled': False, 'images': [{'id': 'N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=108&crop=smart&format=pjpg&auto=webp&s=7e9cd3845afc4aa2ca42f48d77340c173b4b9...
TranscriptionSuite - A fully local, private & open source audio transcription for Linux, Windows & macOS
162
Hi! This is a short presentation for my hobby project, [TranscriptionSuite](https://github.com/homelab-00/TranscriptionSuite). **TL;DR** A fully local & private Speech-To-Text app for Linux, Windows & macOS. Python backend + Electron frontend, utilizing faster-whisper and CUDA acceleration. If you're interested in ...
2026-02-20T15:22:24
https://v.redd.it/gxbrs1rj2okg1
TwilightEncoder
v.redd.it
1970-01-01T00:00:00
0
{}
1r9y6s8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gxbrs1rj2okg1/DASHPlaylist.mpd?a=1774192970%2CZDgzYTZlYzEyYWYwYTI5MGY2MDQ3Mjk5NjA1NjA2OGZlY2NhYmE5MjJkMWYxNzJiNmQ5ZTEzZWVhZWU1NDlhYw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/gxbrs1rj2okg1/CMAF_720.mp4?source=fallback', 'ha...
t3_1r9y6s8
/r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/
false
false
https://external-preview…813fab2729058616
162
{'enabled': False, 'images': [{'id': 'ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b4b513a80791a636c031165304c24a285658...
What agentic model to use for a non-coding, claude-like agent for another domain?
1
I'm building a claude/claude-code like capability for the insurance domain. Rather than code, it's dealing with emails and documents, but it is still searching the web to do research and generating reports (md files, pdfs/word docs). What's a good, non-openai/anthropic model/inference provider I can use for this (fully code ta...
2026-02-20T15:21:29
https://www.reddit.com/r/LocalLLaMA/comments/1r9y5x7/what_agentic_model_to_use_for_a_noncoding/
flobblobblob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9y5x7
false
null
t3_1r9y5x7
/r/LocalLLaMA/comments/1r9y5x7/what_agentic_model_to_use_for_a_noncoding/
false
false
self
1
null
Tiny Aya 3.35B Re-Implementation From Scratch
2
2026-02-20T15:18:17
https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/15_tiny-aya/standalone-tiny-aya-plus-kv-cache.ipynb
seraschka
github.com
1970-01-01T00:00:00
0
{}
1r9y2wq
false
null
t3_1r9y2wq
/r/LocalLLaMA/comments/1r9y2wq/tiny_aya_335b_reimplementation_from_scratch/
false
false
https://external-preview…3e5e5e0885f45d0c
2
{'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=108&crop=smart&auto=webp&s=7a9f7bddb2a496f61dcab2697ee39275ef37ff8b', 'width': 108}, {'height': 109, 'url': 'h...
I built a FastAPI /docs-style UI for testing MCP servers locally
1
https://reddit.com/link/1r9xuge/video/dgvf1w69wnkg1/player Hey everyone, I've been playing with MCP/FastMCP servers recently and built a small tool to simplify the dev workflow. The idea: a FastAPI /docs style UI but for MCP servers. You point it at your server, and it automatically generates forms for all your tool...
2026-02-20T15:09:18
https://www.reddit.com/r/LocalLLaMA/comments/1r9xuge/i_built_a_fastapi_docsstyle_ui_for_testing_mcp/
gauthierpia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9xuge
false
null
t3_1r9xuge
/r/LocalLLaMA/comments/1r9xuge/i_built_a_fastapi_docsstyle_ui_for_testing_mcp/
false
false
https://external-preview…70775f1d92957452
1
null