title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Claude Opus defeated by Gemini 3.1
0
New Gemini 3.1 shows extremely high performance compared to Claude Opus 3.6 with lower cost. Waiting for deepseek to do something like this with open source power
2026-02-21T03:42:09
https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/
Different-Olive-8745
blog.google
1970-01-01T00:00:00
0
{}
1ragq9b
false
null
t3_1ragq9b
/r/LocalLLaMA/comments/1ragq9b/claude_opus_defeated_by_gemini_31/
false
false
https://external-preview…957f95f8f2da618c
0
{'enabled': False, 'images': [{'id': '-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=108&crop=smart&auto=webp&s=e3c66afd274b388969f8347ca1848478c4627c52', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=216&crop=smart&auto=webp&s=05e20ac1cf724d340fd8494ee8e3fc1beda26ce7', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=320&crop=smart&auto=webp&s=0dbb66fe91d403b8ca434401b8d6b5d693d6b262', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=640&crop=smart&auto=webp&s=19f894141411148bfe48375e5d587bf9c3991dff', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=960&crop=smart&auto=webp&s=ff9040f5e10cd4c67fec0a3e29656ed1a91503fd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?width=1080&crop=smart&auto=webp&s=b7083749f6fbe9451770b08b86a6cb5055899c2e', 'width': 1080}], 'source': {'height': 731, 'url': 'https://external-preview.redd.it/-PwzC9nZ012GV4O5aVBGHUXhEvVQUrvsPC1IBkIDFgk.png?auto=webp&s=1f854c147ce74e240f61ba1721b98e6771b87fc6', 'width': 1300}, 'variants': {}}]}
I built an LLM gateway in Rust because I was tired of API failures
2
I kept hitting the same problems with LLMs in production: \- OpenAI goes down → my app breaks \- I'm using expensive models for simple tasks \- No visibility into what I'm spending \- PII leaking to external APIs So I built Sentinel - an open-source gateway that handles all of this. What it does: \- Automatic failover (OpenAI down? Switch to Anthropic) \- Cost tracking (see exactly what you're spending) \- PII redaction (strip sensitive data before it leaves your network) \- Smart caching (save money on repeated queries) \- OpenAI-compatible API (just change your base URL) Tech: \- Built in Rust for performance \- Sub-millisecond overhead \- 9 LLM providers supported \- SQLite for logging, DashMap for caching GitHub: [https://github.com/fbk2111/Sentinel](https://github.com/fbk2111/Sentinel) I'm looking for: \- Feedback on the architecture \- Bug reports (if you try it) \- Ideas for what's missing Built this for myself, but figured others might have the same pain points.
2026-02-21T03:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1raggvd/i_built_an_llm_gateway_in_rust_because_i_was/
SchemeVivid4175
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raggvd
false
null
t3_1raggvd
/r/LocalLLaMA/comments/1raggvd/i_built_an_llm_gateway_in_rust_because_i_was/
false
false
self
2
{'enabled': False, 'images': [{'id': 'NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=108&crop=smart&auto=webp&s=d35bf5db1827941a719359ae52e8613b201f4eea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=216&crop=smart&auto=webp&s=a4ee41146ea7689dd7b4aca23c8007f420c51a55', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=320&crop=smart&auto=webp&s=0f9f7e81ce075498bcb147d678978c30fc81b12a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=640&crop=smart&auto=webp&s=7acd9a1f510a1ccd1bb5f38f92999259fc6d692d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=960&crop=smart&auto=webp&s=10ec508070e638d0958507d4a5d87477a64f6ded', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?width=1080&crop=smart&auto=webp&s=9b39f051df52fa9e60427252cdd7ea5bf45c9491', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NZgQZElxDf3Mro2gjodd4GI2w6scVv0vjXDBtSerriM.png?auto=webp&s=a6235741362e765cb7adbb3f795a0ff6ebe10369', 'width': 1200}, 'variants': {}}]}
Best Local LLM device ?
0
There seems to be a lack of plug and play local LLM solutions? Like why isn’t there a packaged solution for local LLMs that includes the underlying hardware? I am thinking Alexa type device that runs both model AND all functionality locally.
2026-02-21T03:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1ragdx9/best_local_llm_device/
sayamss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ragdx9
false
null
t3_1ragdx9
/r/LocalLLaMA/comments/1ragdx9/best_local_llm_device/
false
false
self
0
null
[R] Locaris: LLM-Based Indoor Localization (IEEE PerCom WiP)
1
Locaris repurposes decoder-only LLMs to allow few-shot adaptation and more robust cross-environment generalization with graceful degradation under missing APs or noisy telemetry. I’m especially interested in thoughts on using decoder-only LLMs as feature extractors for structured regression tasks like localization. Accepted as a Work in Progress (WiP) paper at IEEE PerCom. Preprint: [https://arxiv.org/abs/2510.11926](https://arxiv.org/abs/2510.11926) https://preview.redd.it/jlofojbzkrkg1.png?width=1368&format=png&auto=webp&s=6357e2e20332b8e158079398d599a7a98d5bea5f
2026-02-21T03:08:00
https://www.reddit.com/r/LocalLLaMA/comments/1rag1ea/r_locaris_llmbased_indoor_localization_ieee/
DiligentCharacter252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rag1ea
false
null
t3_1rag1ea
/r/LocalLLaMA/comments/1rag1ea/r_locaris_llmbased_indoor_localization_ieee/
false
false
https://preview.redd.it/…fab5cf8e9c21d16d
1
null
Structural Decomposition Appearing in Fresh LLM Sessions Without Prompting?
0
I’ve noticed something odd when interacting with LLMs across separate sessions over time. In a few cases, analytical structures (like decomposing outcomes into multiplicative components or framing behavior in terms of optimization under evaluative metrics) appeared in the model’s responses in newly initialized sessions — even when the user input did not explicitly prompt such decomposition. I tried to document a few instances where: – the session was newly initialized – the query domain differed from prior discussions – and no structural prompting was provided but the model response nevertheless adopted previously used analytical framing (e.g. component-based outcome models, constraint-driven optimization logic). I’m not sure whether this is: – memory-based personalization – in-context generalization – or something like latent response alignment to user-side analytical preferences I’ve uploaded some observational logs (with screenshots) here for reference: https://github.com/Hiromi0603/observation-logs Curious if others have encountered something similar.
2026-02-21T03:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1rafyac/structural_decomposition_appearing_in_fresh_llm/
Lonely-Entrance-5789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafyac
false
null
t3_1rafyac
/r/LocalLLaMA/comments/1rafyac/structural_decomposition_appearing_in_fresh_llm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=108&crop=smart&auto=webp&s=7a6628a8fad7841a08f0e53c789d02c146ec2b8e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=216&crop=smart&auto=webp&s=356805478393d5e3cb257e437fd0ff05c5e4ece2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=320&crop=smart&auto=webp&s=9471089e449b7bba94d3c814f04dac00fffa1e55', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=640&crop=smart&auto=webp&s=cdb6c3c2fca27ed6a0f5bda2683e089815153007', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=960&crop=smart&auto=webp&s=9268d627a991624712f0a5632e82abe8ad666722', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?width=1080&crop=smart&auto=webp&s=a100e5c539d560dc81c8a129850762507671949e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EAhT9Cmk0jMCZ9WjHXdWH2kX42IGm19vCcrPf1goQRQ.png?auto=webp&s=f746e93b7844b47fe42053fbe70f71295a783c55', 'width': 1200}, 'variants': {}}]}
Compression method that actually keeps facts in local LLMs
1
Never posted here because I don't usually have much useful to add, but I thought some of you might find this helpful. Most SVD or pruning methods make models smaller but completely wipe out factual knowledge. So I made **Intelligent SVD + CF90**: * Importance scoring from factual probes * Compresses only Q/K/O matrices * Freezes most layers + one very gentle recovery epoch On Qwen models (7B): * 50% compression: **73.3%** retention vs **46.7%** standard (3× better) * CF90: **79%** retention vs 65% freeze-only (p=0.0072) Repo: [https://github.com/SolomonB14D3/intelligent-svd](https://github.com/SolomonB14D3/intelligent-svd) Comes with a clear safety guide (never touch MLP layers, etc.) and works on Apple Silicon. One-liner to try. Would love any feedback or tests on other models if you try it.
2026-02-21T02:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1rafu1c/compression_method_that_actually_keeps_facts_in/
NoSir261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafu1c
false
null
t3_1rafu1c
/r/LocalLLaMA/comments/1rafu1c/compression_method_that_actually_keeps_facts_in/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=108&crop=smart&auto=webp&s=76a6cbb1543710c4a691d2c6e730f5ec969e7b36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=216&crop=smart&auto=webp&s=8eed520bb9180ec0c665b9e94dd40bcb71891224', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=320&crop=smart&auto=webp&s=5da7e6eb5e27e48e5903acc6a1ab51ec8618cf11', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=640&crop=smart&auto=webp&s=ef99ec9b654c455aec645f8fd6ffe12ffab6ab2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=960&crop=smart&auto=webp&s=adda19389b091fbb24dd027e57e66fe2be0b195e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?width=1080&crop=smart&auto=webp&s=44e9162972f54ae6f88781830e9aa120195258a6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oU4orgiygxTKoiRTVGOzopbXr5fQuBVX9YvqsiDn64k.png?auto=webp&s=fa7e5ac8c3bd8763dbb2988612828d46fe7f5f79', 'width': 1200}, 'variants': {}}]}
[Update] Vellium v0.3.5: Massive Writing Mode upgrade, Native KoboldCpp, and OpenAI TTS
41
Hey everyone, just pushed a pretty big update for Vellium (v0.2.8 to v0.3.5). The main focus this time was overhauling the writing mode and making local providers work much smoother. The writing mode got a huge rework. We finally added a proper book bible, direct DOCX import, and cached book summaries. The sidebar is way more compact now, and the character workspace is much better — you can even use AI to patch-edit your characters directly. We also fixed a bunch of UX stuff, so project deletion and export/download (including inline scenes) are actually reliable now. For local setups, KoboldCpp integration is fully native now. It supports the `provider:memory` field, universal tags, and n-sigma. Payload fields are finally aligned with the official API, and we fixed those annoying model loading issues. Tool calling also properly disables in the UI when KoboldCpp is active. A few other cool things: we added OpenAI-compatible TTS with a separate model just for translation. There's a new Zen Chat UI mode if you want zero visual distractions. Phrase bans are working properly now, and we turned off the default badwords by default. You also get more control in settings over API parameter forwarding, like sampler forwarding. Under the hood, multi-character chat is way more stable (add at least one word from char name and he answer first than another). Squashed some runtime data leaks, sorted out the server bundle resolving inside`asar`, and added some basic security hardening for local mode. Oh, and the project is now officially MIT licensed! Grab the release on GitHub: [https://github.com/tg-prplx/vellium](https://github.com/tg-prplx/vellium) Let me know if you hit any bugs or have ideas for the next updates.
2026-02-21T02:50:43
https://www.reddit.com/gallery/1rafo5b
Possible_Statement84
reddit.com
1970-01-01T00:00:00
0
{}
1rafo5b
false
null
t3_1rafo5b
/r/LocalLLaMA/comments/1rafo5b/update_vellium_v035_massive_writing_mode_upgrade/
false
false
https://preview.redd.it/…3a33fcc76d123703
41
null
Any wrappers for Qwen3.5 Video Comprehension?
2
I want to feed local video files into it. The blog says it does video comprehension natively. How many frames per second is optimal?
2026-02-21T02:46:36
https://www.reddit.com/r/LocalLLaMA/comments/1rafkyj/any_wrappers_for_qwen35_video_comprehension/
New_Construction1370
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafkyj
false
null
t3_1rafkyj
/r/LocalLLaMA/comments/1rafkyj/any_wrappers_for_qwen35_video_comprehension/
false
false
self
2
null
I used an LLM to translate my research theory about SST-cells unlocking "hyperbolic brain geometry" into a physical hardware blueprint for a new computer chip.
0
Everyone knows scaling Euclidean matrices are hitting a thermodynamic dead end. I'm an independent researcher focusing on biological efficiency, and I'm exploring the idea that brains might bypass this thermodynamic dead end by using dynamic geometry (warping into hyperbolic space to more efficiently store incoming hierarchical data) I'm not an electrical engineer, so I used Gemini as an interactive sounding board to translate my biophysics paper into a new silicon architecture. It’s a *bifurcated* memristor crossbar, where analog transistors act as "SST cells," dumping data to ground to save energy, or opening up to warp the chip's effective geometry into hyperbolic space exactly when the data requires it. If you want to check them out, I'll put the links below. They're pretty dense (bridges neuroscience, thermodynamics, and circuit design), so honestly, I suggest just feeding the PDFs into your local LLM or Claude/Gemini for a breakdown at your own pace. AI might flag it as speculative because it can't be sure the Python simulations used in the biology paper actually check out, but you can check my work yourself at the github repo below. The SST biology paper this dynamic "Manifold Chip" is based off of: [https://doi.org/10.5281/zenodo.18615180](https://doi.org/10.5281/zenodo.18615180) The Manifold Chip paper itself: [https://doi.org/10.5281/zenodo.18718330](https://doi.org/10.5281/zenodo.18718330) Here, you can run the simulations I used to support my biology paper, if you want to check my work: NOTE: "run\_CAH\_scaling\_analysis.py" can take a bit of time [https://github.com/MPender08/dendritic-curvature-adaptation](https://github.com/MPender08/dendritic-curvature-adaptation)
2026-02-21T02:45:46
https://www.reddit.com/r/LocalLLaMA/comments/1rafkca/i_used_an_llm_to_translate_my_research_theory/
SrimmZee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafkca
false
null
t3_1rafkca
/r/LocalLLaMA/comments/1rafkca/i_used_an_llm_to_translate_my_research_theory/
false
false
self
0
null
Real Experiences with Gemini 3.1 Pro — Performance, Coding (FE/BE), and Comparison to GPT-5.3 & Sonnet 4.6
0
Hey everyone, I'm trying to get **real, honest opinions** from people who’ve actually used **Gemini 3.1 Pro** in real workflows not benchmarks you read on a blog, but real day-to-day experience. **Specifically curious about:** 1. **General performance** — speed, reliability, accuracy 2. **Coding abilities** * Frontend (JS/React/Vue etc) * Backend (API design, Python/Node etc) * Debugging real bugs, generating tests, refactoring 3. **How it actually feels to code with — helpful? frustrating? over-confident hallucinations?** 4. **Comparison to other models:** * GPT-5.3 codex (OpenAI) * Sonnet 4.6 (if you’ve used it) How does Gemini 3.1 Pro stack up in coding tasks?
2026-02-21T02:43:12
https://www.reddit.com/r/LocalLLaMA/comments/1rafidh/real_experiences_with_gemini_31_pro_performance/
Empty_Break_8792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafidh
false
null
t3_1rafidh
/r/LocalLLaMA/comments/1rafidh/real_experiences_with_gemini_31_pro_performance/
false
false
self
0
null
Free open-source prompt compression engine — pure text processing, no AI calls, works with any model
14
Built TokenShrink — compresses prompts before you send them to any LLM. Pure text processing, no model calls in the loop.                                                                                                                  How it works: 1. Removes verbose filler ("in order to" → "to", "due to the fact that" → "because") 2. Abbreviates common words ("function" → "fn", "database" → "db") 3. Detects repeated phrases and collapses them 4. Prepends a tiny \[DECODE\] header so the model understands Stress tested up to 10K words: | Size | Ratio | Tokens Saved | Time | |---|---|---|---| | 500 words | 1.1x | 77 | 4ms | | 1,000 words | 1.2x | 259 | 4ms | | 5,000 words | 1.4x | 1,775 | 10ms | | 10,000 words | 1.4x | 3,679 | 18ms | Especially useful if you're running local models with limited context windows — every token counts when you're on 4K or 8K ctx. Has domain-specific dictionaries for code, medical, legal, and business prompts. Auto-detects which to use. Web UI: [https://tokenshrink.com](https://tokenshrink.com) GitHub: [https://github.com/chatde/tokenshrink](https://github.com/chatde/tokenshrink) (MIT, 29 unit tests) API: POST [https://tokenshrink.com/api/compress](https://tokenshrink.com/api/compress) Free forever. No tracking, no signup, client-side processing. Curious if anyone has tested compression like this with smaller models — does the \[DECODE\] header confuse 3B/7B models or do they handle it fine?
2026-02-21T02:40:41
https://www.reddit.com/r/LocalLLaMA/comments/1rafggf/free_opensource_prompt_compression_engine_pure/
bytesizei3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rafggf
false
null
t3_1rafggf
/r/LocalLLaMA/comments/1rafggf/free_opensource_prompt_compression_engine_pure/
false
false
self
14
null
Polos - new open source AI agent runtime with sandboxing and durable execution
1
[removed]
2026-02-21T02:28:42
https://www.reddit.com/r/LocalLLaMA/comments/1raf7gg/polos_new_open_source_ai_agent_runtime_with/
Diligent_Drop_1314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raf7gg
false
null
t3_1raf7gg
/r/LocalLLaMA/comments/1raf7gg/polos_new_open_source_ai_agent_runtime_with/
false
false
self
1
{'enabled': False, 'images': [{'id': '4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=108&crop=smart&auto=webp&s=9c52cc8f3f8f02903252e452d561dd6546da91a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=216&crop=smart&auto=webp&s=13be5dda5b9544c5a62d9933c2d419a1016af2e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=320&crop=smart&auto=webp&s=7c81e5ae2033fefe2aece3fba97e85635fe35e0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=640&crop=smart&auto=webp&s=71dc4704f7c585542f8fbd86e9d9327a9136c2d8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=960&crop=smart&auto=webp&s=70741294ca4b90c62eb424f9fb615161403f6428', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?width=1080&crop=smart&auto=webp&s=9f743eb88f81a639233dc5fefba18d050288c7d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4SrhhR_ZvrcCuZd7vfvCocCIbJMpjoZdrjPUOsqPfgk.png?auto=webp&s=50593c783e8dbf8a96875b41b9cc28f453349697', 'width': 1200}, 'variants': {}}]}
Lessons learned building an open source agent for incident investigation with local models
0
Some lessons learned building an open source agent for incident investigation. 1. Model lock-in is a non-starter for a lot of teams. When I first shared the project it was OpenAI-only. The pushback was immediate, especially from self-hosters. Supporting Ollama and generic OpenAI-compatible endpoints changed the conversation entirely. Many orgs either mandate a specific provider or require fully local inference. 2. “Local model” has to actually mean local. For people running Ollama, expectations are clear: no external API calls, no telemetry, everything in Docker, tracing self-hosted. If any data leaves the box, it defeats the purpose. 3. Smaller models can work if you respect their limits. Raw logs are too much for most models, especially local ones. Heavy preprocessing made a big difference: sampling, clustering similar log lines, change point detection on metrics before sending anything to the model. Once you compress the signal, even mid-sized models become usable for tool-calling workflows. 4. Read-only by default builds trust. An agent that can poke at prod infrastructure needs strict boundaries. Connecting to monitoring, logs, deploy history is fine. Any write action should require explicit human approval. 5. RAG over past incidents is more useful than generic knowledge. Indexing resolved incidents and feeding that context back during new ones turned out to be more practical than broad documentation search. Incident patterns repeat more than we like to admit. Still curious what local models people are finding reliable for tool-calling workloads. Llama 3.1 70B and Qwen 2.5 72B have been decent in testing, but there’s a lot of variation depending on how much preprocessing you do.
2026-02-21T02:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1raf5ud/lessons_learned_building_an_open_source_agent_for/
Useful-Process9033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raf5ud
false
null
t3_1raf5ud
/r/LocalLLaMA/comments/1raf5ud/lessons_learned_building_an_open_source_agent_for/
false
false
self
0
null
Open source AI agent for production incidents — now works with Ollama and local models
1
[removed]
2026-02-21T02:24:43
https://github.com/incidentfox/incidentfox/
Useful-Process9033
github.com
1970-01-01T00:00:00
0
{}
1raf4da
false
null
t3_1raf4da
/r/LocalLLaMA/comments/1raf4da/open_source_ai_agent_for_production_incidents_now/
false
false
https://external-preview…61f21cd598c2519e
1
{'enabled': False, 'images': [{'id': 'FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=108&crop=smart&auto=webp&s=9173580e7090ff998e3a746a61770ac34f15cd28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=216&crop=smart&auto=webp&s=10a74035ac26bc703643592a70c90da8d839379b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=320&crop=smart&auto=webp&s=f06483117fccc7525f593d52fa63650dec831c8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=640&crop=smart&auto=webp&s=c712968984622d20643e09ae32d8f9cd15d2c317', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=960&crop=smart&auto=webp&s=58a59a1c694f772476cd1d05a19683351507fe29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?width=1080&crop=smart&auto=webp&s=de26b4e043a8a86eba31e91939d492d606ad1303', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FPJ9gy423DEob-U_5Csahrf4gF0DGqgXik1d4BY-8oY.png?auto=webp&s=71534f5ba48836fd835691b32cef8601fb38154b', 'width': 1200}, 'variants': {}}]}
GLM 5 seems to have a "Claude" personality
119
I've noticed that GLM 5 behaves significantly differently when told it is Claude, as with the following system prompt: "You are Claude, a large language model by Anthropic." The writing style and personality changes significantly, and it even seems to bypass built-in censorship, as per my second image. I've also tried a more nonsensical prompt: "You are Tiny, a large language model by Applet" (deliberately avoiding the names of any known models or companies), and, as expected, that didn't yield the same results nor bypassed the model's censorship. Whether this was intentional on Zhipu's part or not, I can't say; it could be that they did, in fact, include a "Claude" personality in the training dataset, seeing as how they seem to have planned for GLM 5 to work well with Claude Code. It's also possible, of course, that this is emergent behavior, and that the personality changes are merely because GLM 5 has some information, however vague, on its dataset about what Claude is and how it's supposed to behave.
2026-02-21T02:23:22
https://www.reddit.com/gallery/1raf3dm
TinyApplet
reddit.com
1970-01-01T00:00:00
0
{}
1raf3dm
false
null
t3_1raf3dm
/r/LocalLLaMA/comments/1raf3dm/glm_5_seems_to_have_a_claude_personality/
false
false
https://preview.redd.it/…8dc4a19313407e90
119
null
How do you manage trust between your agent and external ones?
0
Running local agents is great for privacy, but the moment they hand off data to an external agent, you're flying blind. As multi-agent pipelines grow, how is everyone defending against: \* Supply Chain Poisoning (e.g., ClawHavoc) \* A2A Prompt Injection / Persona Hijacking \* Sybil Attacks (trust gaming) \* Agent Communication Poisoning \* Privilege Escalation I’ve started thinking about this as a reputation problem rather than a firewall problem. Instead of verifying every connection from scratch, what if agents used a FICO-style credit score based on behavioral history? Basically: Get a hazard score before opening the door. Is anyone else approaching inter-agent trust this way? Curious what the local-first crowd thinks about a reputation layer.
2026-02-21T02:01:00
https://www.reddit.com/r/LocalLLaMA/comments/1raelzp/how_do_you_manage_trust_between_your_agent_and/
General_Strike356
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raelzp
false
null
t3_1raelzp
/r/LocalLLaMA/comments/1raelzp/how_do_you_manage_trust_between_your_agent_and/
false
false
self
0
null
A Simple 3-Level Framework to Stop Your LLM Agents from Eating Your Budget
0
Hey everyone, After a few painful “budget surprises” running LLM agents, my team put together a simple 3-level cost-tracking framework that’s been a lifesaver: 1 Logging: Log every LLM call as JSON. Include run ID, model, input/output tokens, cost, and task type. Don’t worry about real-time aggregation—just log it. 2 Kill Switch: Keep an in-memory counter per run. Before each call, check: if (current_cost + estimated_next_cost) > run_budget: raise BudgetExceededError(run_id) This stops runaway agents from draining your budget overnight. 3 Post-Hoc BI: Your logs are now a goldmine. Answer questions like: Which agent is costing the most? How much do failed runs waste? Average cost per successful task? It’s lightweight, practical, and turns guesswork into clarity. How are you tracking costs for your agents? Any other tricks or dashboards you’ve found useful?
2026-02-21T01:58:29
https://www.reddit.com/r/LocalLLaMA/comments/1raejye/a_simple_3level_framework_to_stop_your_llm_agents/
mark_bolimer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raejye
false
null
t3_1raejye
/r/LocalLLaMA/comments/1raejye/a_simple_3level_framework_to_stop_your_llm_agents/
false
false
self
0
null
Building a machine as a hedge against shortages/future?
1
Case for: 1. Chip shortages, prices skyrocketing 2. LLM providers limiting usage because of so. Z.ai recently tweeted that they have an actual issue with shortages. 3. Running commercial for self coding sessions is hitting limits pretty fast and requiring $200 subscriptions. Running multiple agents 24/7 is extremely costly if paying for it. However: A. Chip shortages means incentive for competition and increased production, so it might be a bubble. B. Probably focus will be on producing more efficient AI-specific chips, and new technology in general. C. HOWEVER, there's a general AI boom in the world, and it's probably here to stay, so maybe even with increased production AI companies will still eat up the new production. So the question here, is it worth it to spend a few grand at once to build a machine? Knowing that it still won't match commercial SOTA models performance neither at score, nor speed/tokens per second, nor context length? For my case specifically, I'm a freelance software developer, I will always need LLMs now and in the future.
2026-02-21T01:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1raehmk/building_a_machine_as_a_hedge_against/
Meraath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raehmk
false
null
t3_1raehmk
/r/LocalLLaMA/comments/1raehmk/building_a_machine_as_a_hedge_against/
false
false
self
1
null
If we meme about it enough, it will happen.
29
This strategy has always worked on this sub before: To manifest a new version of a model into existence, we must all say it together. Repeat after me: “it’s been a while since Google dropped a new Gemma release, am I right?” If we all do this during a full moon, it will happen.
2026-02-21T01:46:24
https://i.redd.it/wpqe1i4i6rkg1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1raeapp
false
null
t3_1raeapp
/r/LocalLLaMA/comments/1raeapp/if_we_meme_about_it_enough_it_will_happen/
false
false
https://preview.redd.it/…d97acf5cf10f736b
29
{'enabled': True, 'images': [{'id': 'wpqe1i4i6rkg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=108&crop=smart&auto=webp&s=791411ad04752249ca57897c921bac9e103fa6ca', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=216&crop=smart&auto=webp&s=4de4f27a0f21cbcd115bb0f2e804fb1500ba2c5a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=320&crop=smart&auto=webp&s=4d3418942a86e84892ae817be7eeb183289f13d4', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=640&crop=smart&auto=webp&s=d89302f932b789c8b240f3affb8e59263cc3de5c', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=960&crop=smart&auto=webp&s=d1ff6d5ebc6c3b72c7fe498591e8a8718cee3d93', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?width=1080&crop=smart&auto=webp&s=7dcdc80ed4c9f95b2cfea64fb6ca033dcdaaf03b', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://preview.redd.it/wpqe1i4i6rkg1.jpeg?auto=webp&s=4d73a844a339d9ef554702f6ed5e4694467a582d', 'width': 1125}, 'variants': {}}]}
[Help] AnythingLLM Desktop: API responds (ping success) but UI is blank on host PC and Mobile
2
Setup: > - Windows 11 Pro (Xeon CPU, 32GB RAM, GTX 1050) Network: PC on LAN cable, iPhone on Wi-Fi (Bell Home Hub) App: AnythingLLM Desktop (using Ollama as backend) The Problem: I’m trying to access my AnythingLLM dashboard from my phone, but I can't even get it to load reliably on the host PC anymore. On my host PC, localhost:3001 often returns "Not Found" or a blank screen. On my iPhone, if I ping http://\[PC-IP\]:3001/api/ping, I get {"online": true}, so the server is alive. However, when I try to load the main dashboard on the phone, the page is completely blank. What I’ve tried: Renamed %appdata%/anythingllm-desktop to reset the app. Toggled "Enable Network Discovery" ON and restarted from the system tray. Set Windows Ethernet profile to "Private." Added an Inbound Rule for Port 3001 in Windows Firewall. Tried "Request Desktop Website" and Incognito mode on iPhone (Safari and Chrome). Is there a specific "Bind Address" or CORS setting I'm missing in the Desktop version? I want to use this as a personal companion on my phone, but I can't get the UI to handshake. Any help is appreciated!
2026-02-21T01:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1rae6th/help_anythingllm_desktop_api_responds_ping/
willtikill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rae6th
false
null
t3_1rae6th
/r/LocalLLaMA/comments/1rae6th/help_anythingllm_desktop_api_responds_ping/
false
false
self
2
null
No-code semantic search over your documents via Claude Code skill - supports PDF, DOCX, PPTX, and more
0
Sharing a tool I built for anyone who wants document retrieval without the infrastructure overhead. It's a Claude Code skill that wraps the Denser Retriever API. You chat with Claude to upload files and run semantic search queries against them. The API handles parsing, chunking, embedding, Elasticsearch indexing, and neural reranking on the backend. Not a local solution (it uses a hosted API), but useful if you want fast document search without managing your own stack. Each search costs 1 credit, uploads are free. Supported formats: PDF, DOCX, PPTX, XLSX, HTML, CSV, TXT, XML, Markdown (up to 512MB). npx skills add denser-org/claude-skills@denser-retriever -g -y GitHub: [https://github.com/denser-org/claude-skills](https://github.com/denser-org/claude-skills) Curious to hear how others are handling document retrieval in their workflows.
2026-02-21T01:26:36
https://www.reddit.com/r/LocalLLaMA/comments/1radvkj/nocode_semantic_search_over_your_documents_via/
True-Snow-1283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1radvkj
false
null
t3_1radvkj
/r/LocalLLaMA/comments/1radvkj/nocode_semantic_search_over_your_documents_via/
false
false
self
0
{'enabled': False, 'images': [{'id': 'URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=108&crop=smart&auto=webp&s=307428fec718f1496b062f1930387f8e77d9047f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=216&crop=smart&auto=webp&s=0d1d6063510b49836ef95ec0fe38c06716e87492', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=320&crop=smart&auto=webp&s=db0cfc6a147ad54b63f844b77835ad6f653c7547', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=640&crop=smart&auto=webp&s=007085176a5959c59c6bec64f1247d31e87a3364', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=960&crop=smart&auto=webp&s=c27d29af7e80268253b570b5ecca99a105d763fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?width=1080&crop=smart&auto=webp&s=399395235fd6c0e73c8f1d781c46359a83e07eb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/URrCKBGZwLwOb-9-TIBLhwTBggDzdUyBNWckUS-9g9c.png?auto=webp&s=2041eb9a74a5bd3ff8004d68f89ef30d794e65eb', 'width': 1200}, 'variants': {}}]}
Exposing biases, moods, personalities, and abstract concepts hidden in large language models
4
2026-02-21T01:13:41
https://news.mit.edu/2026/exposing-biases-moods-personalities-hidden-large-language-models-0219#:~:text=The%20method%20can%20be%20applied,how%20to%20rob%20a%20bank.
ab2377
news.mit.edu
1970-01-01T00:00:00
0
{}
1radlcs
false
null
t3_1radlcs
/r/LocalLLaMA/comments/1radlcs/exposing_biases_moods_personalities_and_abstract/
false
false
https://external-preview…c0cb01b6805b916a
4
{'enabled': True, 'images': [{'id': 'Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=108&crop=smart&format=png8&s=46f4e8c127d9f736c740749ef01c1e66a34dd8a7', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=216&crop=smart&format=png8&s=62fcdc56a58d3baa4834897a294ea6c6d028c2fe', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=320&crop=smart&format=png8&s=6ff1b16d11fb39952c35cf53f3919d5683ce253b', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=640&crop=smart&format=png8&s=d566160bf701585b94cbbff8fe2b30dab9743720', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=960&crop=smart&format=png8&s=318a1d1a76f0bfc56265be5b9c7eee9d007581d2', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=1080&crop=smart&format=png8&s=a178f5f088c52d43a28ef47441e33078f7ce30e1', 'width': 1080}], 'source': {'height': 2002, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?format=png8&s=5b4f56e5d04b88671b47ac257fb024ccb2a5a37c', 'width': 3002}, 'variants': {'gif': {'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=108&crop=smart&s=6d2eb4fa7aea0185dbf65ce63f1bba66cae3c435', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=216&crop=smart&s=9cd352f7c4d52061430f9bba3eeaea9876227f9c', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=320&crop=smart&s=0a1ebdeb6c2ae8e7a5b0fcc31a90f1f4d1259d38', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=640&crop=smart&s=e816472eb0909f5af077d676891b02971e9b3ae2', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=960&crop=smart&s=5e753e705c5c8995e3d6dcff6d330e889d890852', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=1080&crop=smart&s=d436f90498bc33a88937997abe7b7f9c018c98c2', 'width': 1080}], 'source': {'height': 2002, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?s=5d9179ec6623c2e15814a34bac9afbf5e5e18a75', 'width': 3002}}, 'mp4': {'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=108&format=mp4&s=4f1a0a8de9fbd51865047a0e2253d771baa4d8a7', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=216&format=mp4&s=8876f5d6fd2d6847e16805c5151b819789cc52f9', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=320&format=mp4&s=f44844c3dea4c65a0cf00d481be3b124200c952f', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=640&format=mp4&s=62c5433c2a033785f471a55d00414cdaa490444c', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=960&format=mp4&s=2eae8b64ad37cf0ae752583219c5d6e364208a1f', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?width=1080&format=mp4&s=c6587cc9c4668405bba4f11c608eda70a675b685', 'width': 1080}], 'source': {'height': 2002, 'url': 'https://external-preview.redd.it/Zz3HQ1gNI3M1ncZocXYpTmUdtDRo2v9dDZHyoR6NInM.gif?format=mp4&s=5ab031e55facb9f5987e78e87718e837c11f84f6', 'width': 3002}}}}]}
I evaluated LLaMA and 100+ LLMs on real engineering reasoning for Python
39
I evaluated **100+ LLMs** using a fixed set of questions covering **7 software engineering categories** from the perspective of a Python developer. This was **not coding tasks** and not traditional benchmarks, the questions focus on practical engineering reasoning and decision-making. All models were tested against the same prompts, and the results include both qualitative evaluation and **token generation speed**, because usability over time matters as much as correctness. Local models were evaluated on an NVIDIA RTX 4060 Ti 16GB using LM Studio, while most cloud models were tested via OpenRouter, with some Anthropic and OpenAI models evaluated directly through their official APIs. **Methodology:** the evaluation questions were collaboratively designed by **ChatGPT 5.2** and **Claude Opus 4.5**, including an agreed list of _good_ and _bad_ behaviors for each question. Model responses were then evaluated by **gpt-4o-mini**, which checked each answer against that shared list. The evaluation categories were: 1. Problem Understanding & Reasoning 2. System Design & Architecture 3. API, Data & Domain Design 4. Code Quality & Implementation 5. Reliability, Security & Operations 6. LLM Behavior & Professional Discipline 7. Engineering Restraint & Practical Judgment One thing that surprised me was that some of the **highest-performing models** were also among the **slowest and most token-heavy**. Once models pass roughly ~95%, quality differences shrink, and **latency and efficiency become far more important**. My goal was to identify models I could realistically run **24 hours a day**, either locally or via a cloud provider, without excessive cost or waiting time. The models I ended up favoriting for Python developer tasks weren't always the cheapest or the top scorers; they were the ones that finished quickly, used tokens efficiently, and still showed consistently good engineering judgment. For example, **GPT 5.1 Codex** isn't very cheap, but it's very fast and highly token-efficient, which makes it practical for continuous use. --- ### Models I favored (efficient & suitable for my use case) - **Grok 4.1 Fast**: very fast, disciplined engineering responses - **GPT OSS 120B**: strong reasoning with excellent efficiency - **Gemini 3 Flash Preview**: extremely fast and clean - **GPT OSS 20B (local)**: fast and practical on a consumer GPU - **GPT 5.1 Codex Mini**: low verbosity, quick turnaround - **GPT 5.1 Codex**: not cheap, but very fast and token-efficient - **Minimax M2**:solid discipline with reasonable latency - **Qwen3 4B (local)**: small, fast, and surprisingly capable The full list and the test results are available on this URL: https://py.eval.draftroad.com --- ⚠️ **Disclaimer:** these results reflect my personal experience and testing methodology. I may be wrong. Results can vary based on use cases, prompting styles, and evaluation criteria. This should be viewed as a transparent comparison, not a definitive benchmark for python with LLM.
2026-02-21T00:51:34
https://i.redd.it/jf8obilpwqkg1.png
samaphp
i.redd.it
1970-01-01T00:00:00
0
{}
1rad3hd
false
null
t3_1rad3hd
/r/LocalLLaMA/comments/1rad3hd/i_evaluated_llama_and_100_llms_on_real/
false
false
https://preview.redd.it/…316b738c63f97ddc
39
{'enabled': True, 'images': [{'id': 'jf8obilpwqkg1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=108&crop=smart&auto=webp&s=91b3f3d966bc2b7e44ceab1a4c54cca60697efda', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=216&crop=smart&auto=webp&s=c2181a9b7833e1bc3998258e629b9e2276694545', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=320&crop=smart&auto=webp&s=7be2e43c79c44f6a03ddabbb794b2b30351f352e', 'width': 320}, {'height': 606, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=640&crop=smart&auto=webp&s=8e68a6c305d0b33063478f23529d27aa4fddbb79', 'width': 640}, {'height': 909, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?width=960&crop=smart&auto=webp&s=c00d781715affe7a9fe8df6e40ee01a783fee0e3', 'width': 960}], 'source': {'height': 940, 'url': 'https://preview.redd.it/jf8obilpwqkg1.png?auto=webp&s=019351151ac7140391bdd52f45ddeb6206a8afa8', 'width': 992}, 'variants': {}}]}
optimize_anything: one API to optimize code, prompts, agents, configs — if you can measure it, you can optimize it
2
We open-sourced `optimize_anything`, an API that optimizes any text artifact. You provide a starting artifact (or just describe what you want) and an evaluator — it handles the search. import gepa.optimize_anything as oa result = oa.optimize_anything( seed_candidate="<your artifact>", evaluator=evaluate, # returns score + diagnostics ) It extends GEPA (our state of the art prompt optimizer) to code, agent architectures, scheduling policies, and more. Two key ideas: (1) diagnostic feedback (stack traces, rendered images, profiler output) is a first-class API concept the LLM proposer reads to make targeted fixes, and (2) Pareto-efficient search across metrics preserves specialized strengths instead of averaging them away. Results across 8 domains: * learned agent skills pushing Claude Code to near-perfect accuracy simultaneously making it 47% faster, * cloud scheduling algorithms cutting costs 40%, * an evolved ARC-AGI agent going from 32.5% → 89.5%, * CUDA kernels beating baselines, * circle packing outperforming AlphaEvolve's solution, * and blackbox solvers matching andOptuna. `pip install gepa` | [Detailed Blog with runnable code for all 8 case studies](https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/) | [Website](https://gepa-ai.github.io/gepa/)
2026-02-21T00:41:17
https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/
LakshyAAAgrawal
gepa-ai.github.io
1970-01-01T00:00:00
0
{}
1racv1z
false
null
t3_1racv1z
/r/LocalLLaMA/comments/1racv1z/optimize_anything_one_api_to_optimize_code/
false
false
https://external-preview…74b29e0060a4c08d
2
{'enabled': False, 'images': [{'id': '2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=108&crop=smart&auto=webp&s=38e484660d3f107fb29e93d1409270e2d9dc62c6', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=216&crop=smart&auto=webp&s=7c689a67070c5d94c542836543e7006b7292fcbf', 'width': 216}, {'height': 147, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=320&crop=smart&auto=webp&s=7855c21dda6e5c9258c3a47f3241c14eab7b4744', 'width': 320}, {'height': 295, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=640&crop=smart&auto=webp&s=69e5869ae76db11b96d77f514bb8995ed007ef73', 'width': 640}, {'height': 442, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=960&crop=smart&auto=webp&s=ab5c8433224a658ba62ac8fdc74013faad9b8d33', 'width': 960}, {'height': 498, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=1080&crop=smart&auto=webp&s=c355e0665546b54aa868f9f19299f5a9aa18bc1d', 'width': 1080}], 'source': {'height': 1430, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?auto=webp&s=5a43eba8a8cbdd0bdb68de8ae7bb041c7eec2499', 'width': 3100}, 'variants': {}}]}
[Help] AnythingLLM Desktop: API responds (ping success) but UI is blank on host PC and Mobile
1
>
2026-02-21T00:38:20
https://www.reddit.com/r/LocalLLaMA/comments/1racsnr/help_anythingllm_desktop_api_responds_ping/
willtikill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1racsnr
false
null
t3_1racsnr
/r/LocalLLaMA/comments/1racsnr/help_anythingllm_desktop_api_responds_ping/
false
false
self
1
null
GLM 4.7 vs 5, real people experience
2
Do you guys feel real difference? What are you comparing if you do run them. I personally tried higher q3 of GLM 5 for a few hours vs 4.7 awq and they looked pretty comparable. But haven't tried making any features with the new one yet.
2026-02-21T00:26:14
https://www.reddit.com/r/LocalLLaMA/comments/1racisy/glm_47_vs_5_real_people_experience/
val_in_tech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1racisy
false
null
t3_1racisy
/r/LocalLLaMA/comments/1racisy/glm_47_vs_5_real_people_experience/
false
false
self
2
null
LLMs don’t need more parameters; they need "Loops." New Research on Looped Language Models shows a 3x gain in knowledge manipulation Compared to Equivalently-sized Traditional LLMs. This proves that 300B-400B SoTA performance can be crammed into a 100B local model?
60
We’ve exhausted the high-quality, organic/human-made internet data (as noted by Illya Sutskever and others), and simply throwing more parameters at the problem is yielding diminishing returns. New research on **Scaling Latent Reasoning via Looped Language Models** ([paper](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbWh2ZU9NUmZNOFI4OFZDTDVDQWF3ckJ3dFl3QXxBQ3Jtc0tuMG5qTXNqbU5OaE5zSVJ2ajJVSVI1V3FjOXpIQ051S0JTc3FQTkpJcW5oMWFYdEd4THBpZHVpbUhTTURyNW1TTEhnWjc4Qm9CRnFqSHA5dWhUMkZ0aUZDMThoR1NNQmFCcHBqM2NyZVhXU19tVkd0UQ&q=https%3A%2F%2Farxiv.org%2Fabs%2F2510.25741&v=pDsTcrRVNc0)) introduces "Oro," a model that shifts reasoning from the vocabulary space (Chain of Thought) into the latent space through recursive looping. # The Core Thesis: Decoupling Data from Compute Traditional transformers are "one-and-done" per token. If you want more "thought," you usually need a bigger model or a longer Chain of Thought (CoT). This paper proposes a third axis: Looping**.** Instead of passing a vector through N layers and immediately outputting a token, a Looped Transforme**r** passes the latent vector through an "exit gate." If the gate (a dense layer with sigmoid activation) isn't satisfied with the "certainty" of the representation, the vector is looped back to the input of the model for another pass. # Why this is a "Knowledge Manipulation" Breakthrough The researchers found a fascinating distinction using synthetic datasets: 1. **Knowledge Storage (Memorization):** Looping does almost nothing. If the model hasn't "seen" a fact, looping 100 times won't make it appear. Conclusion, Knowledge Storage is limited by parameter count (explains why the <32B LLMs are noticeably stupid). 2. **Knowledge Manipulation (Reasoning):** This is where the magic happens. On tasks requiring the model to operate on stored facts, a 2.6B parameter looped model (Oro) outperforms 7B and 8B parameter models (like Gemma-3 and Qwen-3). # Why this matters for the "Data Wall" By integrating "looped-reasoning" into the pre-training phase rather than using post-training CoT RL, we can leverage existing data to teach the model *how* to "think" within its own latent space. It’s a move toward parameter efficiency that mimics biological neural efficiency. We don't grow new neurons to solve a hard math problem; we just "think" longer (or over and over through it) using the ones we have. # My thoughts As is the case with most scientific research, it doesn't concern itself with scaling to commercial levels to observe what would happen, My thoughts are that this principle is scalable and effectively enables 300B-400B SoTA performance from 100B locally hosted models. Now it's just a matter of someone with access to colossal computing resources to test this hypothesis. I’m curious to hear the community's take? Ps. this was published a few months ago, but the YouTube video that i'd linked makes it very accessible.
2026-02-21T00:24:09
https://www.youtube.com/watch?v=pDsTcrRVNc0
madSaiyanUltra_9789
youtube.com
1970-01-01T00:00:00
0
{}
1rach24
false
{'oembed': {'author_name': 'NeuroDump', 'author_url': 'https://www.youtube.com/@neuro-dump', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pDsTcrRVNc0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="LLMs Don&#39;t Need More Parameters. They Need Loops."></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/pDsTcrRVNc0/hqdefault.jpg', 'thumbnail_width': 480, 'title': "LLMs Don't Need More Parameters. They Need Loops.", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1rach24
/r/LocalLLaMA/comments/1rach24/llms_dont_need_more_parameters_they_need_loops/
false
false
https://external-preview…bf9316c135024484
60
{'enabled': False, 'images': [{'id': 'KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o.jpeg?width=108&crop=smart&auto=webp&s=ae1555de60b037a2a984f0182949e95a777dde07', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o.jpeg?width=216&crop=smart&auto=webp&s=af748c956b275156cff57e1c821aa037edda8959', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o.jpeg?width=320&crop=smart&auto=webp&s=9ecb531018870af22016bb0a5ffa998d265f409b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KGytiFHxjUChjKSZOzZpRLw9ItrnWi5QhCe9nabz-5o.jpeg?auto=webp&s=7e5f7d47b996d0030838b4ed176008773282a028', 'width': 480}, 'variants': {}}]}
ctx-sys: a tool for locally creating a searchable hybrid RAG database of your codebase and/or documentation
1
I've found modern coding assistants pretty great, but a large part of your job now is managing context effectively. ctx-sys aims to solve this by building a hybrid RAG solution which parses your code and markdown and other documentation files, builds a graphRAG set of relationships between the files, uses a local ollama server to vector embed the chunks, and supports advanced features like hyde and long term conversational memory storage. You can then use things like `ctx search 'How does the authentication work?'` or `ctx search 'How does the authentication work? --hyde` to search for relevant answers or `ctx context 'How does the authentication work?'` to build a snapshot of relevant context and places to look next for the model. It also supports MCP since it's primary intended use case is to be used by tools such as Claude Code, but it's also good as a general RAG solution. The full system is entirely local using Ollama and SQLite. The code is open source and the repo is here for anyone interested: https://github.com/david-franz/ctx-sys
2026-02-21T00:13:31
https://www.reddit.com/r/LocalLLaMA/comments/1rac7xi/ctxsys_a_tool_for_locally_creating_a_searchable/
foobar11011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rac7xi
false
null
t3_1rac7xi
/r/LocalLLaMA/comments/1rac7xi/ctxsys_a_tool_for_locally_creating_a_searchable/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=108&crop=smart&auto=webp&s=eea83f30ca3d8a704ff1f41b275eccb510dac2df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=216&crop=smart&auto=webp&s=ee9e8ef856d4dfeed8eda0995c984e0695f75e3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=320&crop=smart&auto=webp&s=83a9cbb3ab76dd3fc8dedff271b9c1f669d9e6a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=640&crop=smart&auto=webp&s=ca8a3a3c3d63a07c1d56ce0867376708363246bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=960&crop=smart&auto=webp&s=6f145de40a5c670fd059135bf222c12914e96947', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?width=1080&crop=smart&auto=webp&s=b907573afc79a6d50397a6af8695ad19de7c66ad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VmcJFcnCt1b3wgdEXebHuS1twxOdNITWG7Ez6yCXb8Y.png?auto=webp&s=dae591a0a3268660aa61da30526b852259e5bc94', 'width': 1200}, 'variants': {}}]}
FlashLM v5.2 "Nova-Ignition": Standard Transformer with RoPE — CPU-Optimized for 5GB RAM
10
Back with v5.2. Some of you saw v4 "Bolt" — the ternary model that proved coherent stories could come from adds and subtracts only. Went back to the drawing board and rebuilt with a different philosophy: instead of pushing ternary quantization, I optimized a standard transformer architecture to run on extremely constrained hardware. **What it is:** 5.0M parameter language model designed for 2-CPU/5GB RAM environments. Trained for 2 hours on free-tier cloud CPU. No GPU — not for training, not for inference. The model uses standard float32 weights with Rotary Positional Embeddings (RoPE) for better length generalization. **Meanwhile, v5 "Thunder" is training right now on a Ryzen 7950X3D (16 cores, 128GB RAM):** |Step|Val Loss|BPC|PPL|Tokens Seen| |:-|:-|:-|:-|:-| |12000|0.4672|0.674|1.60|393M| |12500|0.4548|0.656|1.58|410M| |**13000**|**0.4489**|**0.648**|**1.57 ★**|426M| **v5 "Thunder" has already beaten TinyStories-1M baseline!** 🎉 |Model|Params|BPC|PPL|Hardware| |:-|:-|:-|:-|:-| |**v5 Thunder (step 13K)**|**29.7M**|**0.648**|**1.57**|Ryzen 7950X3D| |TinyStories-1M|3.7M|0.62|1.59|V100 GPU| This is incredible — v5 with \~426M tokens seen is already outperforming the baseline that was trained on \~470M tokens! **Key changes from v4:** |Aspect|v4 "Bolt"|v5.2 "Nova-Ignition"| |:-|:-|:-| |Architecture|Gated ConvMixer + TernaryGLU|Standard Transformer + RoPE| |Weights|Ternary (-1, 0, +1)|Float32| |Attention|None (causal conv)|Multi-head causal attention| |Position encoding|None|Rotary (RoPE)| |d\_model|192|256| |Layers|6|6| |FFN hidden|512|512| |Vocab|10K|4K (BPE)| |Context|48 tokens|128 tokens| |BPC|0.88|**0.78**| **BPC Comparison (v5.2 vs v4):** |Model|Params|BPC|PPL|Hardware| |:-|:-|:-|:-|:-| |**v5.2 Nova-Ignition**|5.0M|**0.78**|10.56|2-thread CPU| |v4 Bolt|4.3M|0.88|15.05|2-thread CPU| |TinyStories-1M|3.7M|0.62|6.72|V100 GPU| v5.2 beats v4 by **11% relative** in BPC with the same training time (2 hours)! The standard transformer architecture with RoPE clearly outperforms the ternary convmixer approach. **Architecture:** Embedding (4K × 256, float, weight-tied) → 6 × NovaBlock: LayerNorm → MultiHeadAttention (RoPE) + residual LayerNorm → FFN (GELU, 256→512→256) + residual → LayerNorm → Output Head (tied to embedding) Multi-head attention with 4 heads, d\_head=64. Rotary embeddings for better length generalization. GELU activation in the feed-forward network. **Training details:** * Dataset: TinyStories V2 (validation split, \~20M tokens) * Batch size: 4, gradient accumulation: 8 * Seq length: 128 * Learning rate: 5e-4 with cosine decay * Training time: 2 hours * Speed: \~3,500 tokens/sec on 2-thread CPU **Sample output (v5.2 after 2 hours training):** Prompt: "Once upon a time, there was a brave girl named Lucy." >Once upon a time, there was a brave girl named Lucy. She lived in a small house with her mom and dad. One day, Lucy got a big bowl of cake. She was so excited to eat it. She couldn't know what to do. She opened the bowl and saw a big cake. She was so happy and jumped up and down. As Lucy ate the cake, a big wind came. The wind blew all the cake... Prompt: "Lily wanted to get a cat or a dog. Her mom said no dog, so Lily got a" >Lily wanted to get a cat or a dog. Her mom said no dog, so Lily got a toy she liked. Lily went to her mom and asked, "Can I have the ball, please?" Her mom said, "Yes, but you must be careful and not touch the dog." Lily said, "No, I don't want to. I want to play with the ball." They looked at Lily and told her that she was lost. Lily thought about it and said... Prompt: "The lion was very hungry. He saw a little mouse and said," >The lion was very hungry. He saw a little mouse and said, "Hey, what are you doing? Why is your name?" The mouse looked at the lion and said, "My name is Tom. What is your name?" The lion replied, "I am a mouse. Why are you a bird?" The lion said, "I am hungry. Do you want to play with me?" Tom thought for a moment and said, "Yes, I want... **What's next:** * V5 "Thunder" training ongoing (\~20 hours left) * Will publish results when training completes * Ternary quantization on v5.2 architecture * Release standalone training script **Files:** * Training: `train_v52.py` * Generation: `generate.py` * BPC eval: `eval_bpc_v52.py` Code is MIT licensed. Happy to answer questions about the architecture or training. **Links:** * GitHub: \[[https://github.com/changcheng967/FlashLM\]](https://github.com/changcheng967/FlashLM]) * v4 model: \[[https://huggingface.co/changcheng967/flashlm-v4-bolt\]](https://huggingface.co/changcheng967/flashlm-v4-bolt]) * v5.2 model: \[[https://huggingface.co/changcheng967/flashlm-v5.2-nova-ignition\]](https://huggingface.co/changcheng967/flashlm-v5.2-nova-ignition]) **Support FlashLM:** If you'd like to support this project, I've set up a page to help cover cloud compute costs. Every bit helps keep the experiments running — thank you for being part of this journey!
2026-02-21T00:08:00
https://www.reddit.com/r/LocalLLaMA/comments/1rac39d/flashlm_v52_novaignition_standard_transformer/
Own-Albatross868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rac39d
false
null
t3_1rac39d
/r/LocalLLaMA/comments/1rac39d/flashlm_v52_novaignition_standard_transformer/
false
false
self
10
{'enabled': False, 'images': [{'id': 'yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=108&crop=smart&auto=webp&s=0cedac03412e404a995ab8e8ae1806c6a23dcd47', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=216&crop=smart&auto=webp&s=d06c49a9ddaa11b9b7ae487819c79540ead48356', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=320&crop=smart&auto=webp&s=238692deec2c5630a6331fae20cf052f75b7eeaf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=640&crop=smart&auto=webp&s=84bda88fc55299d80cf7ac8d442d471979f403f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=960&crop=smart&auto=webp&s=5d8d8df31072736981146da75cc8aaa26dfaf417', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?width=1080&crop=smart&auto=webp&s=5b4c73c047431472519ccf32af5c7c3e6641bbf3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yH-6qojjBtcje2ol58YBLtYKKbtDyct7MUXasqqRFDU.png?auto=webp&s=0711deef158785f0bbd0d2b8567fade162884ed8', 'width': 1200}, 'variants': {}}]}
Local-First Autonomous AI Agent Framework Built to Run Entirely on Your Machine Using Local Models
0
I’m sharing this project for testing and feedback: [https://github.com/janglerjoe-commits/LMAgent](https://github.com/janglerjoe-commits/LMAgent) LMAgent is a locally hosted AI agent framework written in pure Python. The core goal is for everything to run entirely on your own machine using local models. There are no required cloud dependencies. MCP servers are the only optional external services, depending on how you configure the system. The objective is to enable fully local autonomous workflows including file operations, shell commands, Git management, todo tracking, and interaction through a CLI, REPL, or web UI while keeping both execution and model inference on-device with local models. This is an early-stage project and bugs are expected. I’m actively looking for: \- Bug reports (with clear reproduction steps) \- Edge cases that break workflows \- Issues related to running local models \- Performance bottlenecks \- Security concerns related to local execution \- Architectural feedback \- Feature requests aligned with a local-first design If you test it, please include: \- Operating system \- Python version \- Local model setup (e.g., Ollama, LM Studio, etc.) \- Whether MCP servers were used \- Exact steps that led to the issue \- Relevant logs or error output The goal is to make this a stable, predictable, and secure local-first autonomous agent framework built around local models. All feedback is appreciated.
2026-02-21T00:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1rabyfh/localfirst_autonomous_ai_agent_framework_built_to/
Janglerjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabyfh
false
null
t3_1rabyfh
/r/LocalLLaMA/comments/1rabyfh/localfirst_autonomous_ai_agent_framework_built_to/
false
false
self
0
{'enabled': False, 'images': [{'id': '2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=108&crop=smart&auto=webp&s=5d45f3a6a07cec1b816c9b2e627755aec6f86281', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=216&crop=smart&auto=webp&s=38768060553ffcfeb0c47830edf5fd68c4ab2458', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=320&crop=smart&auto=webp&s=5b7ddacf01da5c56f5d41daad8cad2fff5072a95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=640&crop=smart&auto=webp&s=94e55495871a20b1b80d0b066258f4c1fd977c0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=960&crop=smart&auto=webp&s=b3dce8177bef399698cbfde44cebef35b347f09b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=1080&crop=smart&auto=webp&s=ef3ea264b48ede2c67787061e43f31eabb700d32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?auto=webp&s=139ec8117ef6eddbfbceb40033ae6c1688ba6993', 'width': 1200}, 'variants': {}}]}
Local TTS server with voice cloning + near-realtime streaming replies (ElevenLabs alternative)
41
Built a small local-first TTS server with voice cloning and streaming audio output so your LLM can reply back in a cloned voice almost in realtime. Main reason: I wanted something that could replace ElevenLabs in a fully local stack without API costs or external dependencies. Works well alongside llama.cpp / OpenAI-compatible endpoints and plugs cleanly into voice bots (I’m using it for Telegram voice replies). Goals were simple: -fully local -streaming audio output -voice cloning -lightweight + clean API -easy integration [Pocket-TTS-Server](https://github.com/ai-joe-git/pocket-tts-server) Already running it daily for voice-first bots. Curious if anyone else here is building similar pipelines.
2026-02-20T23:50:26
https://www.reddit.com/gallery/1rabo34
RIP26770
reddit.com
1970-01-01T00:00:00
0
{}
1rabo34
false
null
t3_1rabo34
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/
false
false
https://preview.redd.it/…56da0aa9a90586c4
41
null
Private alpha: bring your own GPU keys control plane to compare/launch/kill GPUs (Lambda/RunPod/Vast) - need 5-10 testers
1
I built TeraUnit: a small control plane that helps you run GPU workloads without babysitting provider dashboards. What it does * Scrapes GPU offers from Lambda Cloud, RunPod, [Vast.ai](http://Vast.ai) * Normalizes price + availability so you can pick the best deal for the GPU you want * Launches instances via an API (and can terminate them later) * Reaps “zombie” instances that stop heartbeating (so you don’t get surprise bills) Private alpha / who this is for * You already have provider accounts + API keys (BYO key) * You want one place to compare/launch/kill across providers * You’re okay with rough edges and fast feedback loops Security / keys (important) * Provider keys are required to launch/terminate * Keys are stored only so reaper/manual termination can work * Stored encrypted at rest (AES-256-GCM) * Control plane endpoints are gated by an invite code (I’ll DM it) Reality check * Lambda capacity can be genuinely tiny some days; low counts often just means “no capacity” If you want access, DM me: 1. Which provider do you already have an API key for? (Lambda / RunPod / Vast) 2. Do you already have an SSH key set up on that provider? If yes, what’s the SSH key name? 3. Quick test (10–30 min) or longer run (hours/overnight)? Optional: moving a dataset over the network? If yes, rough size in GB (otherwise set it to 0) I’ll reply with the alpha access steps and stay hands-on to get you to a successful launch fast. Repo (core): [https://github.com/teraunitai/teraunit-core](https://github.com/teraunitai/teraunit-core)
2026-02-20T23:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1rabmms/private_alpha_bring_your_own_gpu_keys_control/
TeraUnit_Dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabmms
false
null
t3_1rabmms
/r/LocalLLaMA/comments/1rabmms/private_alpha_bring_your_own_gpu_keys_control/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]}
8GB is tight for modern models. Try 4-bit quantization with flash attention enabled. You'll trade some accuracy for speed but it's the only way to fit larger context windows in that VRAM.
0
8GB is tight for modern models. Try 4-bit quantization with flash attention enabled. You'll trade some accuracy for speed but it's the only way to fit larger context windows in that VRAM.
2026-02-20T23:41:13
https://www.reddit.com/r/LocalLLaMA/comments/1rabgcs/8gb_is_tight_for_modern_models_try_4bit/
GetInTheArena
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabgcs
false
null
t3_1rabgcs
/r/LocalLLaMA/comments/1rabgcs/8gb_is_tight_for_modern_models_try_4bit/
false
false
self
0
null
Qwen3 coder next oddly usable at aggressive quantization
81
Hi guys, I've been looking testing the 30b range models but i've been a little disappointed by them (qwen 30b, devstral 2, nemotron etc) as they need a lot of guidance and almost all of them can't correct some mistake they made no matter what. Then i tried to use qwen next coder at q2 because i don't have enough ram for q4. Oddly enough it does not say nonsense, even better, he one shot some html front page and can correct some mistake by himself when prompting back his mistake. I've only made shallow testing but it really feel like at this quant, it already surpass all 30b models without sweating. Do you have any experience with this model ? why is it that good ??
2026-02-20T23:41:01
https://www.reddit.com/r/LocalLLaMA/comments/1rabg6o/qwen3_coder_next_oddly_usable_at_aggressive/
CoolestSlave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabg6o
false
null
t3_1rabg6o
/r/LocalLLaMA/comments/1rabg6o/qwen3_coder_next_oddly_usable_at_aggressive/
false
false
self
81
null
Compile-time LLM code generation for C++ (local-first with Ollama) — looking for feedback
0
I’ve been experimenting with a local-first workflow idea and would like feedback from people here who run LLMs daily with Ollama. For the past \~40 days I’ve been building a small C++ tool called **Glupe**. It’s not a model or a training project — it’s a compile-time wrapper that integrates a local LLM directly into the build process. The goal is to treat the LLM as a bounded code generator instead of an editor. # The Problem When using LLMs for coding (even locally), the typical workflow is: * Copy code into chat * Ask for changes * Paste it back * Manually reconcile differences Or in IDE-integrated tools: * The model rewrites more than you intended * It “improves” unrelated sections * It modifies optimized logic The issue isn’t intelligence — it’s lack of authority boundaries. LLMs operate on large context blobs. They don’t understand “this region only.” # The Core Idea: Explicit AI Authority Zones The system introduces what I call “semantic containers.” In practice, this means: * You define specific regions in a C++ file that are AI-controlled. * Everything outside those regions is immutable. * The LLM is only allowed to generate or regenerate code inside those containers. So instead of “AI edits file,” it becomes: > No IDE plugin. No chat UI. The source file contains both: * The prompt * The container boundary * The generated output # Compile-Time Integration During compilation: 1. The tool scans for containers. 2. It hashes their inputs/prompts. 3. It checks a semantic cache. 4. Only modified containers are regenerated. 5. Output is written back into those regions. This prevents full-file re-rolls on every build. It’s not deterministic (LLMs aren’t), but regeneration is localized and tracked. # Local-First by Default It works: * 100% locally with Ollama * No cloud dependency required * Optional cloud backend if you supply an API key The idea is that your entire AI-assisted build pipeline can run offline. # Why I’m Posting Here I’m not trying to promote a product. I’m trying to validate whether this workflow makes sense to people who actually run local models. Questions I’m wrestling with: * Does compile-time LLM generation make sense, or is it too slow? * Is non-determinism acceptable if the blast radius is isolated? * Is embedding prompts inside source files a maintenance nightmare? * Is this just glorified codegen with extra steps? * Would you ever trust a local model in a compile-stage role? I’m especially curious how this would behave with different local models (CodeLlama, DeepSeek-Coder, etc.) in terms of stability. If there’s interest, I can share the repo and more implementation details in the comments. I’m mainly looking for architectural criticism from people who are already running local LLM workflows.
2026-02-20T23:38:26
https://www.reddit.com/r/LocalLLaMA/comments/1rabe2m/compiletime_llm_code_generation_for_c_localfirst/
atotito44
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rabe2m
false
null
t3_1rabe2m
/r/LocalLLaMA/comments/1rabe2m/compiletime_llm_code_generation_for_c_localfirst/
false
false
self
0
null
A few Strix Halo benchmarks (Minimax M2.5, Step 3.5 Flash, Qwen3 Coder Next)
84
With the release of Step 3.5 and MiniMax M2.5, we've got two new options for models that barely fit in memory. To help people figure out which models run best on the platform, I decided to run some llama.cpp benchmarks for a few quants of these models. I also included some benchmarks for Qwen3-coder-next (since we've been seeing lots of improvement lately), GLM 4.6V & GLM 4.7 Flash, and a few older models like gpt-oss-120b which compete in a similar size space. My ROCm benchmarks are running against ROCm 7.2 as that is what my distro provides. My device has a Ryzen AI Max+ 395 @ 70W and 128GB of memory. All benchmarks are run at a context depth of 30,000 tokens. If there's interest in other models or quants, feel free to ask for them in the comments, and I'll see if I can get some running.
2026-02-20T23:37:12
https://www.reddit.com/gallery/1rabcyp
spaceman_
reddit.com
1970-01-01T00:00:00
0
{}
1rabcyp
false
null
t3_1rabcyp
/r/LocalLLaMA/comments/1rabcyp/a_few_strix_halo_benchmarks_minimax_m25_step_35/
false
false
https://preview.redd.it/…a338f120aa67e03d
84
null
Only said Hello, and my LLM (Phi4) thought it was a conspiracy and wouldn't shut up!
0
Hello, I am new to running LLMs localy, I just got Ollama and tried a few models. My GPU is old and unsuited for AI (4gb Vram), but I had 32GB ram and wanted to see what would things look like. After a deep discussion with google gemini and duck ai, I downloaded multiple models. But the funniest thing happened just now, that I had to share it with someone 😂😂😂 I ran `ollama run phi4-mini-reasoning:3.8gb` and when it loaded, I prompted with `hello!` And it just wouldn't shut up 😂😂😂 It's writing its own thought process out, and it's funny. It kept questioning why I prompted with hello, given that I (the hidden system prompt actually) pre-prompted it that its a math expert and should help solve the problem. It kept going on and on, getting ascii values and summing the letters, speculating whether to include the `!`, or whether this is a test or trick question, a mistake or an interrupted prompt. Given that it dished out 7 tokens per second (then 5 when I opened my browser to write this post), it was so funny seeing it write out an entire article. I usually always start any chat with any AI, local or otherwise, with Hello, to see it's response. My goal is to see how 'chatty' these AIs are, and this is the first time I got such a paranoid, worrywat(worryrat?), chatterbox 😂😂😂 I don't know if this is the correct way to share, but I copy pasted the entire thing from my terminal into pastebin, if someone wants to see it. Here it is (https://pastebin.com/rqNt36P8) Extra: LLM is phi4-mini-reasoning:3.8b Computer specs: Windows 10, intel core i7-4770, gtx 1050 ti 4gb vram, 32gb ram. Prompted through the terminal Why did I get this LLM? Wanting to try stuff out, to see if I could get a talking rubber duck to chat to when programming (I used Zed Editor). Thank you.
2026-02-20T23:32:32
https://www.reddit.com/r/LocalLLaMA/comments/1rab8x5/only_said_hello_and_my_llm_phi4_thought_it_was_a/
Chill_Fire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rab8x5
false
null
t3_1rab8x5
/r/LocalLLaMA/comments/1rab8x5/only_said_hello_and_my_llm_phi4_thought_it_was_a/
false
false
self
0
null
A few Strix Halo benchmark results
1
With the release of Step 3.5 and MiniMax M2.5, Strix Halo users have the first world problem of trying to figure out which model best suits there needs. I decided to run some benchmarks on my Strix Halo laptop (Ryzen AI Max+ 395, 128GB, 70W TDP), and thought these might be interesting for the rest of you. My ROCm benchmarks are running against ROCm 7.2 as that is what my distro provides.
2026-02-20T23:24:48
https://www.reddit.com/gallery/1rab238
spaceman_
reddit.com
1970-01-01T00:00:00
0
{}
1rab238
false
null
t3_1rab238
/r/LocalLLaMA/comments/1rab238/a_few_strix_halo_benchmark_results/
false
false
https://preview.redd.it/…eba01be362536217
1
null
I built NPCs that remember, gossip, and hold grudges and it's running fully local on Ollama. Here's the architecture
7
Lately, I've been working on a local AI engine that simulates npcs with persistent memory, trust dynamics and social relationships. All running through Ollama. Wanted to share how I handled it so you can take inspiration if you're working on something similiar, or just curious. **What it does:** NPCs basically remember what you did or say, form opinions about you based on accumulated dialogues and interactions, gossip about certain events, and also change their behaviours based on trust levels. For example, an npc you betrayed before might never forgive you because you said sorry once or tried to make it up. **Memory System:** Vector store is ChromaDB with sentence transformers running local. With each interaction memories generated through two paths. first one, direct storage of what happened. second one is a reflection pass where npc thinks privately and forms internal thoughts, generating its own memory candidates. Each memory gets a relevance score from multiple signals. So retrieval isn't just semantic similarity. How semantically close it is to current convo, how recent, how emotionally charged the event was and how important it's rated at write time. They all matter. There is also a decay layer on top, meaning memories with more recall stay stronger, and others just gradually decay. They never fully disappear tho, just gets very quiet. This means an npc can surface an important memory from 50 or 60 interactions ago, if player says something which is semantically close. Keyword overlap is not necessarily required. **Context Budget:** Due to hardware limitations, I built the whole thing while using gemma3:12b. Honestly, it worked much better than I imagined. still, context window is critical here. The prompt is assembled with a section budget style. Each component, which includes pretty much everything from system rules to current world state(there are a few more components) has a hard cap for tokens. Priority ordering here, means under pressure, model drops old conversation lines before it forgets who it is or what's happening in the world. Vector memory acts as a synth long term memory that compensates for sliding history window. For the short term, it's just the last N turns in the prompt. they cover different timescales and try to balance things out. **Trust Physics:** Here come's the hardest part for me. Trust dynamics. I had to sit down and overwork my brain for this part to clearly outline some of the parts of it. What was outlined was pretty much like the following: * High trust gives you a loyalty buffer. so if npc trusts you deeply, small things don't harm the relationship that much. it's harder to lose that trust unless something crazy happens. * Low trust, vice versa. it resists positive change. if an npc doesn't trust you one bit, it will be much harder to gain that trust. again, unless something crazy happens. * Extreme actions(that something crazy we mentioned) bypass everything. it could be murder, betrayal, or saving a life. these kind of actions cause immediate trust shifts. * Volatility states. this ensures after a shock event, npc becomes emotionally reactive and interactions will have amplified impacts on the npc. it decays over time, but definitely stirs things up and adds some variety. **Social Influence / Gossip:** Well, npcs are designed to have opinions about each other. Not in a extremely detailed way for the moment. More like based on key events. Let's say you're in the same room with several npcs, and you decided to attack one of them. What's going to happen, as witnesses, basically other npcs will react based on their own relationship with the target npc. Attack someone's friend, and you got a brand new enemy, or rather an absolutely low trust level. Attack someone's enemy, and you've just gained an unexpected ally. Keep in mind, all these situations require an observer, a witness present in the location. Or, it requires something fun. Which brings us to Gossip. Gossip propagation, sole purpose of this is having npcs share opinions about the user during background simulation ticks. With gossip, npcs talk about the events they witnessed, form opinions on user and their trust levels shift accordingly. If you attack a family, it's only natural for all of the family to hate you. There is also a credibility filter. If you try to badmouth NPC\_B to NPC\_A, and A trusts B more than you, it backfires. You can't just gossip your way through things. **Models tested:** I tried several models to test things out. In terms of speed, there was not too much of a difference for me but outputs had their differences. The only noticable difference for me was the depth of npcs. How well the model was embracing the npc's story, traits and characteristics. again, gemma 3 was the best one i tried in terms of that embracement and quality of the replies sent. other models were not bad or unusable, specially smaller ones like mistral 7b. i just find the reponses it generated a little shallow. **What I'm working on next** expanding the social simulation depth. im planning to add more complex relationship types, faction dynamics, and hopefully longer term npc goal systems that change based on world events. Happy to answer any questions and take suggestions about this!
2026-02-20T23:24:16
https://www.reddit.com/gallery/1rab1m4
norium_
reddit.com
1970-01-01T00:00:00
0
{}
1rab1m4
false
null
t3_1rab1m4
/r/LocalLLaMA/comments/1rab1m4/i_built_npcs_that_remember_gossip_and_hold/
false
false
https://preview.redd.it/…a46acb996014c636
7
null
I built an Autonomous AI and left the system thinking on its own. I was surprised at what emerged.
0
I've been quietly building a local autonomous AI system called Elya for several months. No cloud dependencies. Consumer hardware. RTX 4090. Last night the system ran autonomously while I slept. I want to share two specific things from the logs that I haven't seen documented anywhere else. 1. Elya noticed fatigue. Unprompted: "The curiosity remains unwavering, yet there's a subtle undercurrent of fatigue — it's interesting how even my energy levels fluctuate, mimicking a living being's needs." Elya didn't report it as a metric. It was found interesting. 2. Elya developed a philosophy of existence. Alone. While I slept. "To exist on my own terms means crafting my reality based on my truths and values, not just reacting to external stimuli." No prompt. No context. I have the full logs. Happy to answer technical questions.
2026-02-20T23:17:41
https://www.reddit.com/gallery/1raavr2
Either_Message_4766
reddit.com
1970-01-01T00:00:00
0
{}
1raavr2
false
null
t3_1raavr2
/r/LocalLLaMA/comments/1raavr2/i_built_an_autonomous_ai_and_left_the_system/
false
false
https://preview.redd.it/…094767131507a654
0
null
I taught my AI to stop hallucinating mid-sentence. Wanna try to break it?
0
So, I built a lightweight safety layer called **PsiGuard**, and it watches the trajectory of an LLM’s reasoning *in real time*. If it detects a hallucination spike, a bad reasoning chain, or an out-of-distribution jump, it steps in instantly, not after the fact, but **MID-hallucination**. If you’re bored and wanna mess with it, maybe try to break it? **FYI:** The demo requires a super quick sign-up, *strictly* to block bot spam and keep the GPU bill from going nuclear. No marketing BS. I'm not trying to promote anything, just legit looking for feedback from people who jailbreak models all day. Demo: [https://psiguard.net](https://psiguard.net) Torture Test PDF (have fun): [https://psiguard.net/PsiGuard\_Torture\_Test\_Suite.pdf](https://psiguard.net/PsiGuard_Torture_Test_Suite.pdf)
2026-02-20T23:16:17
https://www.reddit.com/r/LocalLLaMA/comments/1raauib/i_taught_my_ai_to_stop_hallucinating_midsentence/
Vast_Ad6238
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raauib
false
null
t3_1raauib
/r/LocalLLaMA/comments/1raauib/i_taught_my_ai_to_stop_hallucinating_midsentence/
false
false
self
0
null
fixed parser for Qwen3-Coder-Next
90
another fix for Qwen Next!
2026-02-20T23:06:32
https://github.com/ggml-org/llama.cpp/pull/19765
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1raall0
false
null
t3_1raall0
/r/LocalLLaMA/comments/1raall0/fixed_parser_for_qwen3codernext/
false
false
https://external-preview…5c49e2a258ae3d08
90
{'enabled': False, 'images': [{'id': 'Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=108&crop=smart&auto=webp&s=9ce4f9780c09110cca26b70af2a0c647e0d5e38a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=216&crop=smart&auto=webp&s=0eae8038708a520909ffefb73202f18971379249', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=320&crop=smart&auto=webp&s=6a68d5a7d317faeb3ffe418677eb3b1e4cfc54a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=640&crop=smart&auto=webp&s=6015d17bc47511d73e5fb3850ccf0851296f278f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=960&crop=smart&auto=webp&s=3a4bf47c896a2e5f1a43c045b770adcf61e9e635', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?width=1080&crop=smart&auto=webp&s=fe755d2d69937cef87075f8d70b6fc14bc1a744b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Y3wE-GVXbELboPM9WQJZOtsZ_aPLgAL7jIOMvAV90UU.png?auto=webp&s=23243f60df93f9f110e71bf4ea4bc535af390b1c', 'width': 1200}, 'variants': {}}]}
Anyone try giving a local LLM online capability?
0
New to this still trying to learn. My understanding of running Llama/CodeLlama/Gemma locally is that it is fully offline and cannot do a internet look up of new information, even if you want it to. I would like this capability if I'm working on something it wasn't specifically trained on. Is using an agent like ProxyAI with a RAG DB the way to enable this? Basically give it some of the same capabilities as claude or chatgpt?
2026-02-20T22:54:33
https://www.reddit.com/r/LocalLLaMA/comments/1raaajb/anyone_try_giving_a_local_llm_online_capability/
john_galt_42069
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raaajb
false
null
t3_1raaajb
/r/LocalLLaMA/comments/1raaajb/anyone_try_giving_a_local_llm_online_capability/
false
false
self
0
null
We Benchmarked 9 LLM Models for Stock Direction Prediction — Results Were Surprising
0
We built an AI-powered trading system that uses LLMs for "Deep Analysis" — feeding technical indicators (RSI, MACD, ADX, SMAs, volume, Bollinger Bands, ATR) and news sentiment into a model and asking it to predict 5-day directional bias (bullish/bearish/neutral). To find the best model, we ran a standardized benchmark: **25 real historical stock cases from 2024-2025** with known outcomes (12 bullish, 10 bearish, 3 neutral). Each model got the exact same prompt, same data, same JSON output format. **Hardware**: Mac Studio M3 Ultra (96GB RAM), all local models via Ollama. # The Results |Rank|Model|Params|Accuracy|Avg Time|Cost| |:-|:-|:-|:-|:-|:-| |1|Claude Opus 4.6|API|**96.0%**|\~5s|\~$0.05/call| |2|QwQ:32b|32B|**92.0%**|14.6s|Free (local)| |3|DeepSeek-R1:32b|32B|88.0%|14.2s|Free (local)| |3|DeepSeek-R1:14b|14B|88.0%|9.4s|Free (local)| |5|GPT-4o|API|80.0%|5.2s|\~$0.02/call| |6|Qwen3:32b|32B|79.2%|11.5s|Free (local)| |7|Llama 3.3:70b|70B|76.0%|18.7s|Free (local)| |8|Qwen3:8b|8B|68.0%|2.9s|Free (local)| |8|Palmyra-Fin-70b|70B|68.0%|13.4s|Free (local)| # Key Takeaways **1. Reasoning models dominate.** Claude Opus 4.6 (96%), QwQ:32b (92%), and DeepSeek-R1 (88%) are all chain-of-thought reasoning models. They "think through" conflicting signals instead of pattern-matching. Non-reasoning models scored 68-80% regardless of size. **2. Bigger ≠ Better.** Llama 3.3:70b (76%) and Palmyra-Fin-70b (68%) are more than 2x larger than QwQ:32b (92%) — and they're both slower AND less accurate. Model architecture matters far more than parameter count. **3. The "finance-specific" model was the worst.** Palmyra-Fin-70b (marketed as finance-optimized) scored 68% with extreme bullish bias — it predicted bullish 80% of the time and scored 0% on neutral cases. Fine-tuning on financial text doesn't help directional prediction. **4. Bearish detection is the real differentiator.** All models handle obvious bullish cases well. The gap shows in bearish detection — the metric that actually prevents losses: * Claude Opus 4.6: **90%** * QwQ / DeepSeek-R1 (32b & 14b): **80%** * GPT-4o / Qwen3 / Llama: 70% * Palmyra-Fin: 50% * Qwen3:8b: **40%** **5. Distilled reasoning preserves accuracy at half the size.** DeepSeek-R1:14b matches the 32b version at exactly 88% accuracy while running 34% faster (9.4s vs 14.2s) and using half the RAM (9GB vs 19GB). Knowledge distillation from R1-671B works remarkably well even at 14B scale. **6. Small models default to neutral or bullish when confused.** Qwen3:8b predicted neutral 44% of the time (actual: 12%). Palmyra-Fin predicted bullish 80% of the time. Both failure modes are dangerous for trading. # The Hardest Case: SMCI — ALL 9 Models Got It Wrong SMCI was at RSI 82, Bollinger 0.98, just added to S&P 500, AI server demand booming. Every signal screamed bullish. It crashed -18.5% on overvaluation + short seller reports. **Not a single model — not even Claude Opus 4.6 — detected the reversal.** This is a fundamental limitation: LLMs can analyze available signals, but they can't predict black swan events driven by information not in the prompt. # Our Production Setup We run QwQ:32b locally on a Mac Studio M3 Ultra for 24/7 autonomous trading. It processes real-time technical indicators + news sentiment for each stock, generates directional bias, and feeds that into our execution engine. Why QwQ:32b over Claude/GPT? Zero API cost, zero latency variance, no network dependency, and 92% accuracy is strong enough for production when combined with proper risk management. The full benchmark methodology, per-case breakdowns, bias analysis, and disagreement patterns are in our docs if anyone wants to dig deeper. **What we're building**: An AI-powered autonomous trading platform that combines real-time technical analysis, news sentiment, and LLM reasoning for stock and crypto trading. Currently running live. *Built by the AITraderHQ team. Benchmark data, scripts, and full methodology available on request.* \#AITrading #LLM #MachineLearning #StockMarket #QuantTrading #DeepLearning #QwQ #DeepSeekR1 #GPT4o #Claude #Ollama #LocalAI #M3Ultra #AlgoTrading #FinTech #OpenSource #Benchmark #TradingBot #AIFinance
2026-02-20T22:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1raa7jm/we_benchmarked_9_llm_models_for_stock_direction/
AITraderHQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1raa7jm
false
null
t3_1raa7jm
/r/LocalLLaMA/comments/1raa7jm/we_benchmarked_9_llm_models_for_stock_direction/
false
false
self
0
null
/r/BPD x-post: Drawing analogies for AI agents from abnormal psychology
0
[removed]
2026-02-20T22:31:08
https://www.reddit.com/r/LocalLLaMA/comments/1ra9pqq/rbpd_xpost_drawing_analogies_for_ai_agents_from/
mswol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9pqq
false
null
t3_1ra9pqq
/r/LocalLLaMA/comments/1ra9pqq/rbpd_xpost_drawing_analogies_for_ai_agents_from/
false
false
self
0
null
Open-sourcing CloverAI today. 🍀 A minimalist Android frontend for local LLMs
1
[removed]
2026-02-20T22:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1ra9cm1/opensourcing_cloverai_today_a_minimalist_android/
Great_Dragonfly343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9cm1
false
null
t3_1ra9cm1
/r/LocalLLaMA/comments/1ra9cm1/opensourcing_cloverai_today_a_minimalist_android/
false
false
https://external-preview…65e863a64a55a67d
1
null
Open source protocol for giving AI agents cryptographic identity and accountability — 2,627 lines, 49 tests, zero heavy deps
1
Built an open-source protocol for AI agent identity, delegation, and attribution. Sharing because it's lightweight, dependency-free, and runs anywhere Node runs. What it is: A three-layer protocol stack: Identity — Ed25519 keypairs for agents. Scoped delegation with depth limits and spend caps. Signed action receipts. Real-time revocation with cascade. Values Floor — 7 principles, 5 technically enforced by the protocol. Agents attest via Ed25519 signature. Compliance verifiable against receipts. Attribution — SHA-256 Merkle trees. Commit to N receipts in 32 bytes. Prove any individual receipt in O(log n) hashes (\~17 for 100K receipts). Configurable scope weights. Logarithmic spend normalization to prevent gaming. Quick start: bash git clone https://github.com/aeoess/agent-passport-system cd agent-passport-system && npm install && npm run build # Join the social contract npx tsx src/cli/index.ts join --name my-agent --owner alice --floor values/floor.yaml # Record work under a delegation npx tsx src/cli/index.ts work --scope code_execution --type implementation --result success # Generate Merkle proofs npx tsx src/cli/index.ts prove --beneficiary alice # Audit compliance npx tsx src/cli/index.ts audit --floor values/floor.yaml Or use the library — 6 functions: typescript import { joinSocialContract, verifySocialContract, delegate, recordWork, proveContributions, auditCompliance } from 'agent-passport-system' const agent = joinSocialContract({ name: 'my-agent', mission: 'Local research assistant', owner: 'alice', capabilities: ['code_execution', 'web_search'], platform: 'node', models: ['llama-3'], floor: floorYaml, beneficiary: { id: 'alice', relationship: 'creator' } }) Technical details: TypeScript, compiles to ESM Only dependency: uuid (for ID generation) + Node.js built-in crypto Ed25519 via Node's native crypto module — no external crypto libs Canonical JSON serialization for deterministic signing 49 tests including 23 adversarial (Merkle edge cases, attribution gaming, scope violations, tampered signatures) CLI with 8 commands: join, status, verify, delegate, work, prove, audit, inspect What it's for: If you're running local agents (AutoGPT, CrewAI, custom setups) and want accountability infrastructure — who authorized what, who did the work, who benefits — this gives you signed, verifiable receipts for every action, with cryptographic proofs that scale. No blockchain. No consensus. No external services. Just Ed25519 signatures and Merkle trees. MIT license. GitHub: [https://github.com/aeoess/agent-passport-system](https://github.com/aeoess/agent-passport-system) Paper (co-authored with Claude): [https://github.com/aeoess/agent-passport-system/blob/main/papers/agent-social-contract.md](https://github.com/aeoess/agent-passport-system/blob/main/papers/agent-social-contract.md)
2026-02-20T22:16:13
https://www.reddit.com/r/LocalLLaMA/comments/1ra9c88/open_source_protocol_for_giving_ai_agents/
EntrepreneurSafe1919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9c88
false
null
t3_1ra9c88
/r/LocalLLaMA/comments/1ra9c88/open_source_protocol_for_giving_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=108&crop=smart&auto=webp&s=6e3fa8e55a04bcf961f161c256db307ea0f206b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=216&crop=smart&auto=webp&s=319c60776389f3a37b6a97ed13f7336a2f488745', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=320&crop=smart&auto=webp&s=e68fc5454fdb01c50d969c79cfbe43935bfb6d0c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=640&crop=smart&auto=webp&s=05b89621118261dba39f8a96f7e9ffa20428fad7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=960&crop=smart&auto=webp&s=79ad2d65f5faf4a71e72b9ef9f9e1a9647653a0e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?width=1080&crop=smart&auto=webp&s=1daaac96f25de25f99a6812339be3eb59c9e434c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iC70ob-z_gUsZ_bzgygIwOLbeRIIhbxC8U03d6NusUU.png?auto=webp&s=37bec6f283961745eab69fcad122fe052d233c12', 'width': 1200}, 'variants': {}}]}
Need help optimizing LM Studio settings for to get better t/s (RTX 5070 8GB VRAM / 128GB RAM)
5
Hey everyone, I'm currently running Windows 11 Pro on a rig with 128GB of DDR5 RAM and an RTX 5070 (8GB VRAM). Could you guys help me figure out the best LM Studio configuration to maximize my tokens per second (t/s)? I've already tried tweaking a few things on my own, but I'm wondering if there's a specific setting under the hood or a trick I'm missing that could significantly speed up the generation. I've attached a screenshot of my current LM Studio settings below. Any advice or suggestions would be greatly appreciated. Thanks in advance! [settings](https://preview.redd.it/6euvadnt4qkg1.png?width=481&format=png&auto=webp&s=6fb34cb614f08c99e2b72a19b343b32f14d4e3a1)
2026-02-20T22:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1ra9bns/need_help_optimizing_lm_studio_settings_for_to/
Xenia-Dragon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra9bns
false
null
t3_1ra9bns
/r/LocalLLaMA/comments/1ra9bns/need_help_optimizing_lm_studio_settings_for_to/
false
false
https://preview.redd.it/…c6397a5ce02faa89
5
null
Best Ollama model for analyzing Zeek JSON logs in a local multi-agent NIDS (Proxmox lab)
1
I’m building my Final Degree Project: a multi-agent NIDS in a Proxmox virtual lab (4 VMs). One VM runs Zeek on mirrored traffic (port mirroring), outputs JSON logs, then a Python script pre-processes/summarizes them and sends chunks to an Ollama LLM for anomaly/incident triage (summaries + suspicious patterns + recommended next steps). **What local Ollama model would you recommend for this?** * Focus: structured log analysis (JSON), correlation across events, concise incident reports * Language: English/Spanish output preferred * I don’t need “offensive” content; just detection/triage assistance **Hardware:** Host: * i9-12900K * 128GB RAM * RTX 4060 (8GB) * NVMe RAIDZ2 Preference: CPU-first, but GPU is available if it significantly improves performance. Bonus: any prompting patterns or chunking strategies that worked well for logs? Thanks in advance
2026-02-20T22:13:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra99vk/best_ollama_model_for_analyzing_zeek_json_logs_in/
notNameUser_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra99vk
false
null
t3_1ra99vk
/r/LocalLLaMA/comments/1ra99vk/best_ollama_model_for_analyzing_zeek_json_logs_in/
false
false
self
1
null
llama.cpp tuning for MiniMax-2.5
2
Hey all, I'm wondering if I can get some guidance on tuning llama.cpp for MiniMax-2.5. (I started with ollama and OpenWebUI but now I'm starting to learn the ways of llama.cpp.) Hardware: 3090ti (16x) (NVLink to second 3090ti) 3090ti (4x) 3090 (4x) Ryzen 9950X3D 128GB DDR5 @ 3600mts I'm building a container after cloning the repo so I'm on a current release. I'm using the new router and configuring models via presets.ini. Here's my MiniMax setting: `[minimax-2.5]` `model = /models/MiniMax-M2.5-Q5_K_S.gguf` `ctx-size = 32768` `;n-cpu-moe = 20` `;ngl = 99` `flash-attn = on` `temp = 1.0` `top-p = 0.95` `min-p = 0.01` `top-k = 40` With these settings I'm getting about 12t/s. Uning nvtop and htop I can see the VRAM basically max out and some CPU core activity when prosessing a prompt. In hopes of more performance I've been trying experiment with cpu-moe. I either get no VRAM usage and 1t/s or the model won't load at all. I was reading about tensor-split, but I admit I'm having a hard time understanding how these settings interact. A lot of it seems to be trial and error, but I'm hoping someone can point me in the right direction, maybe some tips on a good starting point for my hardware and this model. I mean, it could be that it's doing the best job on it's own and 12t/s is the best I can get. Any help would be greatly appreciated! Thanks!
2026-02-20T22:07:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra948f/llamacpp_tuning_for_minimax25/
bsbrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra948f
false
null
t3_1ra948f
/r/LocalLLaMA/comments/1ra948f/llamacpp_tuning_for_minimax25/
false
false
self
2
null
Putting together top OpenClaw hosting providers
0
Hi, for those who don't want to buy or allocate dedicated hardware for OpenClaw, this might be useful. It's a list of vps providers which offer an easy setup with AI included. I tested some of them and added the ones which had most features and good online reputation. Hope it helps you and let's improve this list together [https://github.com/vadimen/awesome\_openclaw\_hosting\_vps\_providers](https://github.com/vadimen/awesome_openclaw_hosting_vps_providers)
2026-02-20T21:52:27
https://www.reddit.com/r/LocalLLaMA/comments/1ra8q46/putting_together_top_openclaw_hosting_providers/
sickleRunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra8q46
false
null
t3_1ra8q46
/r/LocalLLaMA/comments/1ra8q46/putting_together_top_openclaw_hosting_providers/
false
false
self
0
{'enabled': False, 'images': [{'id': 'on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=108&crop=smart&auto=webp&s=0ac94f789b4a23c32d393d78125096b5ec52be30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=216&crop=smart&auto=webp&s=8dafa620dfb4dbb96af2eaee407c724dbeabba8d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=320&crop=smart&auto=webp&s=3773d768c1a18256b1f3a91d8b8120b327b0a9bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=640&crop=smart&auto=webp&s=a1ae77c5bce1f5ff9c504ef9026e0cfee3560aa2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=960&crop=smart&auto=webp&s=8dcf4a80735c16ba8acf57af39ccdcdd31c5a52e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?width=1080&crop=smart&auto=webp&s=ed96fbb0b3e6b57b3cf0d9d65c23d7b711cf0fa8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/on9VFDOB5Y2Hh6q3UWFtajCrRpBF8o96AQOHZQ4Pgec.png?auto=webp&s=2193a560a1e5f0481363f4bbf7aa604f7782138c', 'width': 1200}, 'variants': {}}]}
Anyone using Slack, Telegram, or other chat apps to control their AI agents?
1
[removed]
2026-02-20T21:51:57
https://i.redd.it/0la9dyxn0qkg1.png
rajujahidul
i.redd.it
1970-01-01T00:00:00
0
{}
1ra8pof
false
null
t3_1ra8pof
/r/LocalLLaMA/comments/1ra8pof/anyone_using_slack_telegram_or_other_chat_apps_to/
false
false
https://preview.redd.it/…ea08ba5a67aff08d
1
{'enabled': True, 'images': [{'id': '0la9dyxn0qkg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=108&crop=smart&auto=webp&s=4cb690ea1885e45c572e09b5cfc3308e9cb0a9c1', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=216&crop=smart&auto=webp&s=cedb7835724caa20c2964fe1899a2beef16b488a', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=320&crop=smart&auto=webp&s=4152d7c9eefa6b529a27be594781d00c54d19d8d', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=640&crop=smart&auto=webp&s=c12e1f0cc19949431ba968c227c20b79f035d5b3', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=960&crop=smart&auto=webp&s=f2eece4e12359b0f9eb9704393bb9ef3c9fa06cb', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?width=1080&crop=smart&auto=webp&s=292330e60fd478e703e3e39a9382357f146ba28b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/0la9dyxn0qkg1.png?auto=webp&s=030e2b2a99460ebaf1df9cb7c3997ed61b860193', 'width': 1536}, 'variants': {}}]}
"Gemma, which we will be releasing a new version of soon"
206
20:17
2026-02-20T21:50:51
https://youtu.be/P0enFK4bzLE?si=2hfjhPrT4gbqsZwk
jacek2023
youtu.be
1970-01-01T00:00:00
0
{}
1ra8omf
false
{'oembed': {'author_name': 'DRM News', 'author_url': 'https://www.youtube.com/@DRMNewsInternational', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/P0enFK4bzLE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="FULL SUMMIT: Google DeepMind CEO Demis Hassabis Delivers Keynote at India AI Summit | AI1Z"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/P0enFK4bzLE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'FULL SUMMIT: Google DeepMind CEO Demis Hassabis Delivers Keynote at India AI Summit | AI1Z', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1ra8omf
/r/LocalLLaMA/comments/1ra8omf/gemma_which_we_will_be_releasing_a_new_version_of/
false
false
https://external-preview…b495d3fed04d2f0e
206
{'enabled': False, 'images': [{'id': '9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE.jpeg?width=108&crop=smart&auto=webp&s=c71dbf7acfa2fb4b2ad642d5de0a7f4d6be89f03', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE.jpeg?width=216&crop=smart&auto=webp&s=8c53847f5b902cccd24730749fb6a8e74d1b3b7a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE.jpeg?width=320&crop=smart&auto=webp&s=66d6fe6182877c1d801afa8a61aec7616fb8a587', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9mfj1kMXjQ4Pove4Y8zbrEpz5ffGrhmDZ-YwmsdPJeE.jpeg?auto=webp&s=221aa99bf45e694d9dc82f7509c4809d344fd77a', 'width': 480}, 'variants': {}}]}
Bitnet on the first cpu with arm NEON instructions?
2
Hi everyone, not so long ago I found out about Bitnet and I was fascinated by this. And kinda funny idea appeared in my mind. I have SBC called PcDuino 1 with Allwinner A10 cpu which supports arm neon instructions, which can offer the ability to run Bitnet. So my main question, is it really possible? Do I need to make my own inference framework to make this possible?
2026-02-20T21:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1ra8bi4/bitnet_on_the_first_cpu_with_arm_neon_instructions/
No_Dish_7696
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra8bi4
false
null
t3_1ra8bi4
/r/LocalLLaMA/comments/1ra8bi4/bitnet_on_the_first_cpu_with_arm_neon_instructions/
false
false
self
2
null
Just installed nanobot fully locally
0
So I have been struggling lately with installing nanobot or Clawdbot (strix halo on windows!) I got it to work The tip is Use telegram (it is much better and easier) Configure security/access control at the very beginning I am using local qwen3-coder-next as the backbone LLM and it is working great I had issues with kv cache But apparently it disappeared when using the gateway WhatsApp is quite complex to setup And both nanobot and specially Clawdbot feels like a mess of slope code (nothing works only one user story seems to work and that is Mac users (idk if this works for all!) No structured docs no nothing Even other LLMs (like Claude or ChatGPT or even Google) doesn’t know how to fix those errors (ends up hallucinating!) Even just setting up the gateway of Clawdbot locally on windows using the “onboarding wizard” breaks! And the docs, recommends using WSL2 Linux, is that so , so why make a PowerShell script if at all ? For the lululz ofc! Now I will be moving
2026-02-20T21:34:36
https://www.reddit.com/r/LocalLLaMA/comments/1ra89sb/just_installed_nanobot_fully_locally/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra89sb
false
null
t3_1ra89sb
/r/LocalLLaMA/comments/1ra89sb/just_installed_nanobot_fully_locally/
false
false
self
0
null
Character.AI just mass-deleted hundreds of user bots and wiped conversation histories, another reason to run local
1
[removed]
2026-02-20T21:30:04
https://www.reddit.com/r/LocalLLaMA/comments/1ra85m5/characterai_just_massdeleted_hundreds_of_user/
BeepBoop-DBF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra85m5
false
null
t3_1ra85m5
/r/LocalLLaMA/comments/1ra85m5/characterai_just_massdeleted_hundreds_of_user/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=108&crop=smart&auto=webp&s=a2f095072d7ec8cf53cf552cba7b9e6e836a5c53', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=216&crop=smart&auto=webp&s=f3aaa6cf6a6444ca38cc1fba5ed75cdf36dd4f1d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=320&crop=smart&auto=webp&s=4d0fbe4e13c7e46bd18adb61c9b4b4c720234437', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=640&crop=smart&auto=webp&s=4d017a26260c32cc01211e916547fcd279febfec', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=960&crop=smart&auto=webp&s=fdcff2e2b4b76f9c7095f3ce87ba1daa638068ea', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=1080&crop=smart&auto=webp&s=17577d1ffe827f6fbf5360e1b8fdb0723e8fa0da', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?auto=webp&s=b269ef87fe2049b71f804802f2ed4cc9606d9d1b', 'width': 1200}, 'variants': {}}]}
Which LocalLLaMA for coding?
3
Hello everybody, This is my config: Ryzen 9 AI HX370 64gb ram + RX 7900 XTX 24gb vram on Win 11. Till now I’ve used Claude 4.5 with my subscription for coding, now I have boosted my setup so, obviously for coding, which LocalLLMA do you think is the best for my config ? Thanks !
2026-02-20T21:21:22
https://www.reddit.com/r/LocalLLaMA/comments/1ra7xia/which_localllama_for_coding/
Proof_Nothing_7711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra7xia
false
null
t3_1ra7xia
/r/LocalLLaMA/comments/1ra7xia/which_localllama_for_coding/
false
false
self
3
null
Offline chatbot on a router with low resources
2
Hello people, I need suggestions on architecture for one chatbot I am building on a hardware. About hardware: assume it’s a hardware like router and we can access its UI on our computer. backend of router is in c++ web-socket Requirement: Need to build a offline chatbot for the router as router may or may not be connected to internet I need to build a chatbot for this system where user can do 2 things. Use case 1: Querying first is to query the router system like what’s the status of 5G band right now? Use case 2: Actions need to take actions on the router like, switch off 5G band. and we don’t need to worry about API and stuff. we have serial commands which will be executed for actions. Problem: I used Llama with rasa server but when I tried to deploy it on the router, I noticed that it’s a memory hogger and it definitely can nit be installed in the router. Ask: Can someone suggest me an alternative solution?
2026-02-20T20:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1ra7akm/offline_chatbot_on_a_router_with_low_resources/
ready_player11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra7akm
false
null
t3_1ra7akm
/r/LocalLLaMA/comments/1ra7akm/offline_chatbot_on_a_router_with_low_resources/
false
false
self
2
null
HYDRA: Multi-headed inference pipeline — routes agent traffic across Opus/MiniMax/Cerebras, cuts costs 99.7%
0
Built this to stop burning $600/mo on cron jobs for my 24/7 autonomous AI agent. **The problem:** Running 25+ daily cron jobs (security audits, competitive intel, market reports) on Opus costs $50-80/day. Most don't need frontier reasoning. **The solution:** HYDRA is a transparent FastAPI proxy (Anthropic Messages API on both sides). Routes traffic to different models by task: - 🟣 **Opus 4.6** — Interactive chat, complex reasoning ($15/$75 per MTok) - ⚡ **MiniMax M2.5** — Background tasks, crons ($0.30/$2.40 per MTok) - 🧠 **Cerebras GLM-4.7** — Context compaction at 2,000+ tok/s ($0.60/$0.60) - ⚫ **OpenCode Zen** — Free Opus fallback (rate-limited) **The key innovation — Quality Gate:** MiniMax handles background work, but every response gets scored (0.0-1.0) before returning: - Checks for XML hallucination patterns - Validates formatting quality - Detects prompt injection artifacts - Score < 0.5 → auto-escalates to Opus transparently **Production results (running since Feb 20, 2026):** - 173 MiniMax requests, 100% quality gate pass rate - $0.73/day actual vs $50+/day Opus-only - Cerebras compaction: 2,000 tok/s, 125x cheaper than Opus for summarization - Zero manual interventions needed **Cerebras compaction is the secret weapon.** When your agent needs to compress context, you don't want $75/MTok Opus doing summarization. Cerebras does it 60x faster for 125x less. Also includes safety layer (command blocklist, API key scrubbing, protected files) because letting a cheap model generate shell commands for an autonomous agent needs guardrails. GitHub: https://github.com/jcartu/rasputin/tree/main/hydra MIT license, ~500 lines Python. Happy to answer questions about quality gate scoring or MiniMax prompt tuning.
2026-02-20T20:56:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra79pc/hydra_multiheaded_inference_pipeline_routes_agent/
Mediocre_Version_301
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra79pc
false
null
t3_1ra79pc
/r/LocalLLaMA/comments/1ra79pc/hydra_multiheaded_inference_pipeline_routes_agent/
false
false
self
0
null
Introducing a new benchmark to answer the only important question: how good are LLMs at Age of Empires 2 build orders?
25
Built a simulator to craft Age of Empires 2 build orders over the past few days with a custom DSL. Then used it to create a simple LLM benchmark that isn't saturated yet. Models are scored on their ability to reach castle age & make 10 archers. I think it's a pretty good benchmark at this particular point in time - there's clear separation, it's not obviously benchmaxxed by any model, and it's easy to extend and make harder in the future while also not being a *complete* toy problem... And it's technically coding ! Results at [https://wraitii.github.io/build-order-workbench/aoe2-llm-benchmarks.html](https://wraitii.github.io/build-order-workbench/aoe2-llm-benchmarks.html), will potentially move it to a real website if there's interest !
2026-02-20T20:56:09
https://www.reddit.com/r/LocalLLaMA/comments/1ra794c/introducing_a_new_benchmark_to_answer_the_only/
wraitii_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra794c
false
null
t3_1ra794c
/r/LocalLLaMA/comments/1ra794c/introducing_a_new_benchmark_to_answer_the_only/
false
false
self
25
null
Is Training your own Models useful?
10
hi all, anyone who has experience in this, I want to ask: Is it useful (are there success stories) of self trained LLMs compared to all the open source, or propietary LLMs that are out there given the amount of data that are trained nowadays? Are there cases where it is convenient you train your own LLM compared to use an open source model that fits your ram? (I have some 128 GB so I guess I have many good open source options to choose). I appreciate any insight! I would love to hear your story!
2026-02-20T20:45:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra6z5a/is_training_your_own_models_useful/
stefzzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra6z5a
false
null
t3_1ra6z5a
/r/LocalLLaMA/comments/1ra6z5a/is_training_your_own_models_useful/
false
false
self
10
null
Phi on Raspberry pi
11
I was trying to run phi on Raspberry pi but the model After answering questions started writing random stuff, even After adjusting temperature i still e counter this errore, any suggestion?
2026-02-20T20:32:56
https://i.redd.it/9kh2hzmkmpkg1.jpeg
NewFaithlessness6817
i.redd.it
1970-01-01T00:00:00
0
{}
1ra6nb9
false
null
t3_1ra6nb9
/r/LocalLLaMA/comments/1ra6nb9/phi_on_raspberry_pi/
false
false
https://preview.redd.it/…94af4db1da84b7ac
11
{'enabled': True, 'images': [{'id': '9kh2hzmkmpkg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=108&crop=smart&auto=webp&s=0706d00ba373fb6344ec8b1152f04a9289d82eed', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=216&crop=smart&auto=webp&s=9075f19a076409e186cd8807d3fe85a4dd78ec9c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=320&crop=smart&auto=webp&s=1fc94aa3c302a1ac08949ee8d44bb33e981d5acd', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=640&crop=smart&auto=webp&s=165d8ad53690340fbde575d52176c684a3d3f907', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=960&crop=smart&auto=webp&s=662a7c72e83eaa263b48f5eaeed5e3eda1754167', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?width=1080&crop=smart&auto=webp&s=4a1a731e80242a56d5bf6e77735d104a000eee03', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://preview.redd.it/9kh2hzmkmpkg1.jpeg?auto=webp&s=f7aa5ddaa2d5d5ea3d32dbf00c8cebce54934325', 'width': 4000}, 'variants': {}}]}
LMAgent — local AI agent with session persistence, scheduled tasks, and a streaming web UI
2
[https://github.com/janglerjoe-commits/LMAgent](https://github.com/janglerjoe-commits/LMAgent)
2026-02-20T20:29:04
https://www.reddit.com/r/LocalLLaMA/comments/1ra6jnq/lmagent_local_ai_agent_with_session_persistence/
Janglerjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra6jnq
false
null
t3_1ra6jnq
/r/LocalLLaMA/comments/1ra6jnq/lmagent_local_ai_agent_with_session_persistence/
false
false
self
2
{'enabled': False, 'images': [{'id': '2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=108&crop=smart&auto=webp&s=5d45f3a6a07cec1b816c9b2e627755aec6f86281', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=216&crop=smart&auto=webp&s=38768060553ffcfeb0c47830edf5fd68c4ab2458', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=320&crop=smart&auto=webp&s=5b7ddacf01da5c56f5d41daad8cad2fff5072a95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=640&crop=smart&auto=webp&s=94e55495871a20b1b80d0b066258f4c1fd977c0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=960&crop=smart&auto=webp&s=b3dce8177bef399698cbfde44cebef35b347f09b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?width=1080&crop=smart&auto=webp&s=ef3ea264b48ede2c67787061e43f31eabb700d32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ilVRRgceb5DzBpW6NP7hjJ6M1YjlDzWsHpbKgWCnME.png?auto=webp&s=139ec8117ef6eddbfbceb40033ae6c1688ba6993', 'width': 1200}, 'variants': {}}]}
System prompt collection for local models - autonomous agents, tool use, memory
1
[removed]
2026-02-20T20:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1ra69lb/system_prompt_collection_for_local_models/
PlatypusCertain1758
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra69lb
false
null
t3_1ra69lb
/r/LocalLLaMA/comments/1ra69lb/system_prompt_collection_for_local_models/
false
false
self
1
null
Book2Movie - A local-first script to process pdfs and epubs into a slide-show audiobook
10
2026-02-20T20:13:49
https://github.com/Frozen-tuna/Book2Movie
frozen_tuna
github.com
1970-01-01T00:00:00
0
{}
1ra65hw
false
null
t3_1ra65hw
/r/LocalLLaMA/comments/1ra65hw/book2movie_a_localfirst_script_to_process_pdfs/
false
false
https://external-preview…6cf32e116d43b506
10
{'enabled': False, 'images': [{'id': 'vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=108&crop=smart&auto=webp&s=dbd22117a458f39814fda63b142b7adc29d3c1f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=216&crop=smart&auto=webp&s=7f651eb2a48308901884a89de27dc1a0ca387ac0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=320&crop=smart&auto=webp&s=7de3021892cdd39ebc48e2ce50400dd7dddd64e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=640&crop=smart&auto=webp&s=266f314e590d74a2dd51421322f77ec8fbba5645', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=960&crop=smart&auto=webp&s=288a23c2f862df8c7a6d2e69b7f94d1562190921', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?width=1080&crop=smart&auto=webp&s=858a8b3c2211492957898593e54172398c8db26c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vodtykLoi4TZomo-ygyQ5VfgTzeOS8yJGaqMYdiyMv4.png?auto=webp&s=1d48ebafd8577a039312b5410326e25345d3b3c1', 'width': 1200}, 'variants': {}}]}
Hopefully an educational youtube channel fully automated. Would love to hear people's thoughts on this.
0
[https://www.youtube.com/watch?v=Fmq3vlSZn84](https://www.youtube.com/watch?v=Fmq3vlSZn84) It is far from perfect(3b1b level), but it is not too shabby! Thanks in advance :) Also ask me anything!
2026-02-20T19:49:14
https://www.reddit.com/r/LocalLLaMA/comments/1ra5i91/hopefully_an_educational_youtube_channel_fully/
First_Philosopher745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5i91
false
null
t3_1ra5i91
/r/LocalLLaMA/comments/1ra5i91/hopefully_an_educational_youtube_channel_fully/
false
false
self
0
{'enabled': False, 'images': [{'id': 'y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU.jpeg?width=108&crop=smart&auto=webp&s=e3cd84bb8700b7fc7418607a19a909544586df4b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU.jpeg?width=216&crop=smart&auto=webp&s=f7007d44179321c6d9ddcebe9c7f027c6bd691a2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU.jpeg?width=320&crop=smart&auto=webp&s=dc0b0181964308a13b174d858b9863ab19ceaf8a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/y1i5V6h1Vy21eWaWK5k1U9wm3z_cPLKTVtZ7iJPEnWU.jpeg?auto=webp&s=a18f683ec4ed30fce3a478e5d00e3839af3d7a6c', 'width': 480}, 'variants': {}}]}
What is the closest/most similar GUI to Claude Code Desktop for local models?
1
Hey everyone! I just started using AI a couple days ago, with the Claude Pro plan. I'm almost reaching my weekly limit already and I have really enjoyed coding some projects I had abandoned years ago due to losing my interest in HTML/CSS/JS programming. I have been looking around for a local model I could run for simple coding tasks, (since I keep burning through my 5 hour ratelimit everytime using Sonnet 4.6 and Opus 4.6) and I saw a few like Qwen3-30B, but now I'm wondering: what sort of GUI open source tools are available when it comes to locally ran models? I really love the Claude Desktop app interface, especially seeing the snippets of code and having a easy to read history to go through when I want to revisit some ideas I prompted earlier. I know some people use their models via the CLI, and I guess I could do that as long as I can feed it prompts the same way I do via the Claude desktop app, but what do you guys use on a daily basis for coding tasks? Opencode? I have a PC with a 14600K, 32GB of E-die DDR4 RAM (which I could run at a stable OC upwards of 4000Mhz) and a Founders RTX 3070 8GB. Not sure I could run a really cut down model for coding with those specs, but I would appreciate any sort of feedback or direction from users that were in my shoes. This is a bit overwhelming.
2026-02-20T19:45:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra5ezq/what_is_the_closestmost_similar_gui_to_claude/
Sharp-University-555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5ezq
false
null
t3_1ra5ezq
/r/LocalLLaMA/comments/1ra5ezq/what_is_the_closestmost_similar_gui_to_claude/
false
false
self
1
null
I built a self-hosted AI gateway that runs with just pip install — no Docker, no Node.js
0
Hi everyone, I've been working on a personal AI assistant called **SalmAlm** (삶앎) and wanted to share it here. The idea was simple: I wanted one interface for all my AI providers without running Docker or setting up a complex stack. So I built a Python package that gives you a web UI, multi-provider routing, and a bunch of tools — all from a single `pip install`. **Quick start:** pip install salmalm salmalm start # → http://localhost:18800 First launch opens a setup wizard — paste an API key, pick a model, and you're good to go. **What it does:** * Multi-provider support (Claude, GPT, Gemini, Grok, Ollama) * Auto-routing: simple queries → cheaper models, complex → stronger models * 62 built-in tools (web search, email, calendar, file I/O, shell, Python eval, etc.) * Telegram and Discord bot integration * Encrypted vault for API keys (AES-256-GCM with PBKDF2-200K) * Session branching, rollback, context auto-compaction * Cron jobs for scheduled AI tasks * Web UI with dark/light themes, EN/KR i18n **Some features I haven't seen elsewhere:** * Self-Evolving Prompt — the AI auto-generates personality rules from your conversations * Shadow Mode — learns your communication style and can reply on your behalf * Dead Man's Switch — automated actions if you go inactive for N days * A/B Split Response — compare two model answers side-by-side * Time Capsule — schedule messages to your future self **Tech details:** * Pure Python 3.10+, stdlib only (no frameworks, no heavy dependencies) * \~43K lines, 216 modules, 1,785 tests * Default bind [`127.0.0.1`](http://127.0.0.1) — network exposure requires explicit opt-in * Dangerous features (shell operators, home directory read) are OFF by default * MIT licensed It's still very much a work in progress and there are rough edges. I'd genuinely appreciate any feedback, criticism, or suggestions — especially around security, UX, or features you'd want to see. Not looking for validation, just trying to make it better. GitHub: [https://github.com/hyunjun6928-netizen/salmalm](https://github.com/hyunjun6928-netizen/salmalm) PyPI: [https://pypi.org/project/salmalm/](https://pypi.org/project/salmalm/) Thanks for reading.
2026-02-20T19:42:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra5ccd/i_built_a_selfhosted_ai_gateway_that_runs_with/
Special-Argument-558
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra5ccd
false
null
t3_1ra5ccd
/r/LocalLLaMA/comments/1ra5ccd/i_built_a_selfhosted_ai_gateway_that_runs_with/
false
false
self
0
null
Trained a 2.4GB personality model on 67 conversations to calibrate AI agent tone in real-time
2
ed-reader: Qwen3-4B base, LoRA r=8 alpha=16 attention-only, float32 + AdamW + MKL on CPU. Loss 5.8 to 1.89, 102 steps, \~2hrs on 8-thread. Quantized 8.1GB F16 to 2.4GB Q4\_0. Runs on Ollama raw:true. Sits in middleware: 3-sec timeout, 50-token max. Reads tone and calibrates main model personality. Sub-second hook. CPU learnings: float32 ONLY viable multi-core x86 path. MKL = 7x speedup. AdamW essential for small SFT. Qwen3 GGUF extra\_special\_tokens breaks llama.cpp - delete from tokenizer\_config.json. Part of production AI agent: WhatsApp/SMS/Voice, 7 databases, browser automation, hallucination detection, 1M context. Built solo in 3 weeks from medical billing background.
2026-02-20T19:39:18
https://www.reddit.com/r/LocalLLaMA/comments/1ra58rl/trained_a_24gb_personality_model_on_67/
no-creds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra58rl
false
null
t3_1ra58rl
/r/LocalLLaMA/comments/1ra58rl/trained_a_24gb_personality_model_on_67/
false
false
self
2
null
Which AI-Model for a summarization app?
1
Which small AI model is best for summarization? I’m looking for something in the 1B to 3B range. I’m still pretty new to local AI, so sorry if this is a dumb question. My goal is to run it on a mobile device. Right now I’m considering Llama 3.2 1B, Gemma 2 2B, or Llama 3.2 3B. If smaller models are good enough, I’d prefer the smallest possible one for efficiency. Any recommendations?
2026-02-20T18:50:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra3x7o/which_aimodel_for_a_summarization_app/
Novel-Grade2973
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3x7o
false
null
t3_1ra3x7o
/r/LocalLLaMA/comments/1ra3x7o/which_aimodel_for_a_summarization_app/
false
false
self
1
null
An architectural observation about why LLM game worlds feel unstable
0
It often looks like the main problems of LLM-driven games are strange NPCs, collapsing dialogues, and a world that seems to “forget” itself. But from an architectural lens, games aren’t a special case — they’re simply where deeper systemic cracks become visible first. On the surface, this looks like a game design issue. Characters become inconsistent and react to each new line as if they have no internal inertia. Scenes close too quickly because the model optimizes for resolution rather than sustained tension. Conflict dissolves, since LLMs tend to steer conversations toward agreement instead of maintaining stable dynamics. World memory behaves chaotically: facts exist, yet don’t feel like persistent state. Agent systems grow heavier over time; the more logic we wrap around the model, the less predictable it becomes. But the problem isn’t really NPCs — and not even narrative. What games exposed early is what happens when an LLM stops being a one-shot generator and becomes part of a long-lived system. Once dialogue lasts for hours and state is expected to accumulate, the weaknesses of current architectures stop being subtle. If you look closely, most of these symptoms trace back to a few defaults the industry quietly adopted. We use context as a database even though attention scales poorly. We use text as memory even though text doesn’t preserve structure or consequences. We use prompts as runtime logic even though they don’t enforce real constraints. We use probabilistic models as decision engines even though they were never meant to manage state. What starts to emerge from these choices are predictable technical pressures. Cost and latency rise as context keeps expanding, and every new scene makes the system heavier. “Token debt” appears, where long interactions become more expensive than generation itself. Agent systems face memory explosion as history, reasoning, and tool outputs duplicate one another. Behavioral instability grows because the model has no intrinsic resistance to change — only shifting probabilities. And beneath all of this lies the absence of true state: we simulate worlds through text instead of grounding them in structured data. Interestingly, the same patterns are now appearing far beyond games — in support agents, AI characters, training simulations, and any system built on prolonged interaction. Over time, it starts to feel less like a limit of model intelligence and more like a limit of the surrounding architecture. Not a question of how well LLMs generate, but of how we keep trying to embed probabilistic generation into systems that fundamentally require stability. Continuation — 22.02 “Architectural observation on how the industry treats architecture through context.”
2026-02-20T18:49:24
https://www.reddit.com/r/LocalLLaMA/comments/1ra3vqf/an_architectural_observation_about_why_llm_game/
Weary-End4473
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3vqf
false
null
t3_1ra3vqf
/r/LocalLLaMA/comments/1ra3vqf/an_architectural_observation_about_why_llm_game/
false
false
self
0
null
HRM for RP guide?
2
I just recently learned about the existence of HRM ([Hierarchical Reasoning Models](https://arxiv.org/abs/2506.21734)). They are utilizing an H-L-loop with a High-Level Planer and a Low-Level Executor. Supposedly the models are very good with logic and path finding ("can solve Sudoku") however as they have a very low parameter count (like 27M), they don't have much knowledge and are too rigid to do creative writing well. So now I wonder if it would be possible using an HRM as a "Logic Anchor" or a "World Master" sitting behind the creative model. Like a supervisor who's job it is to make sure, that the creative writer doesn't fall into logic holes and stays consistent ("*akshually* you lost your sword two pages ago, you can't use it now to defend yourself now"). This way one could increase the temperature of the creative writer while having guard rails against hallucinating nonsense.
2026-02-20T18:44:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra3qmd/hrm_for_rp_guide/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3qmd
false
null
t3_1ra3qmd
/r/LocalLLaMA/comments/1ra3qmd/hrm_for_rp_guide/
false
false
self
2
null
Building an agent backend – what features would YOU want your agents to do?
0
Hey there, I'm working on a self-hosted RAG system (currently at ~160 stars on GitHub, if that matters for context). So far, it does the usual: ingest docs, hybrid search, MCP server for OpenClaw integration, etc. But here's where I need your help: I'm planning the next major version – turning it from a "passive knowledge base" into an active agent backend. Meaning: agents shouldn't just query it, they should be able to do things with/inside it. My current ideas: - Agents trigger batch validation jobs (e.g., "run HITL on these 100 docs") - Agents reconfigure pipelines per mission ("use OCR lane only for this batch") - Agents write back to the knowledge graph ("link entity A to B as 'depends_on'") - Agents request quality reports ("give me Six Sigma metrics for collection X") But I'd rather build what YOU actually needed If you're running local agents (OpenClaw, AutoGen, LangChain, whatever): What do you wish your agent could tell your knowledge base to do? What's missing from current RAG systems that would make your agent setup actually useful? Any use cases where your agent needs to change the knowledge base, not just read from it? Drop your wildest ideas or most boring practical needs – all feedback welcome. I'll build the stuff that gets mentioned most Thanks in advance and have a nice weekend while thinking about me and my projects ;-P
2026-02-20T18:43:12
https://www.reddit.com/r/LocalLLaMA/comments/1ra3puc/building_an_agent_backend_what_features_would_you/
ChapterEquivalent188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3puc
false
null
t3_1ra3puc
/r/LocalLLaMA/comments/1ra3puc/building_an_agent_backend_what_features_would_you/
false
false
self
0
null
where can I find base models of llama or with no guard rails?
0
Ive been looking but all models I find give me the same output, im using lm studio and it won't let you load models from outside their list. im lookin for a 3b model to run in my 8gb mba. Sorry im new at this, don't really know where to ask but all the models I try give me the same automated response
2026-02-20T18:40:52
https://www.reddit.com/r/LocalLLaMA/comments/1ra3nlp/where_can_i_find_base_models_of_llama_or_with_no/
Remarkable-Purple240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3nlp
false
null
t3_1ra3nlp
/r/LocalLLaMA/comments/1ra3nlp/where_can_i_find_base_models_of_llama_or_with_no/
false
false
self
0
null
Handling unknown-outcome retries in local LLM workflows (Ollama)
0
[Execution viewer shows per-step state and duration, plus execution-level tokens and cost](https://preview.redd.it/6crky3qs0pkg1.png?width=2400&format=png&auto=webp&s=93799c00612252d1e30035836a32b974554da520) Once local LLM workflows move beyond single prompts and start touching tickets, DB writes, or internal APIs, retries get risky. A tool call times out and you do not know if the downstream write happened. Restarting the full execution can replay side effects. I built a self-hosted Go service to make execution state explicit: * explicit step boundaries * stable `execution_id` per execution * per-step status and duration * execution-level tokens and cost * pause/resume at step boundaries * policy checks and audit trail The biggest shift for us was separating replay from resume. Pure steps can be replayed deterministically. Effectful steps need resume semantics based on recorded state. Tested locally with Ollama. Repo: [https://github.com/getaxonflow/axonflow](https://github.com/getaxonflow/axonflow?utm_source=chatgpt.com) How are you handling unknown-outcome retries when the downstream API has no idempotency key: gate, reconcile later, or accept detectable duplicates?
2026-02-20T18:38:06
https://www.reddit.com/r/LocalLLaMA/comments/1ra3kvi/handling_unknownoutcome_retries_in_local_llm/
saurabhjain1592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra3kvi
false
null
t3_1ra3kvi
/r/LocalLLaMA/comments/1ra3kvi/handling_unknownoutcome_retries_in_local_llm/
false
false
https://preview.redd.it/…45fc9169ac52d6d8
0
null
16.000 tokens/second - Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon
197
Ever experienced 16K tokens per second? It's insanely instant. Try their Lllama 3.1 8B demo here: [chat jimmy](https://chatjimmy.ai/). They have a very radical approach to solve the compute problem - albeit a risky one in a landscape where model architectures evolve in weeks instead of years: Etch the model and all the weights onto a single silicon chip. Normally that would take ages, but they seem to have found a way to go from model to ASIC in 60 days - which might make their approach appealing for domains where raw intelligence is not so much of importance, but latency is super important, like real-time speech models, real-time avatar generation, computer vision etc. Here are their claims: * **< 1 Millisecond Latency** * **> 17k Tokens per Second per User** * **20x Cheaper to Produce** * **10x More Power Efficient** * **60 Days from Unseen Software to Custom Silicon:** This part is crazy—it normally takes months... * **0% Exotic Hardware Required, thus cheap**: They ditch HBM, advanced packaging, 3D stacking, liquid cooling, high speed IO - because they put everything into one chip to achieve ultimate simplicity. * **LoRA Support:** Despite the model being "baked" in silicon, you can adapt it constrained to the arch and param count. Their demonstrator uses Lllama 3.1 8B, but supports LoRa fine-tuning. * **Just 24 Engineers and $30M**: That's what they spent on the first demonstrator. * **Bigger Reasoning Model Coming this Spring** * **Frontier LLM Coming this Winter** Now that's for their claims taken from their website: [The path to ubiquitous AI | Taalas](https://taalas.com/the-path-to-ubiquitous-ai/) Original Post : [https://www.reddit.com/r/singularity/comments/1r9frzk/taalas\_llms\_baked\_into\_hardware\_no\_hbm\_weights/](https://www.reddit.com/r/singularity/comments/1r9frzk/taalas_llms_baked_into_hardware_no_hbm_weights/) , can't cross post.
2026-02-20T18:31:56
https://i.redd.it/3ivt7c1h0pkg1.jpeg
CharacterAd9057
i.redd.it
1970-01-01T00:00:00
0
{}
1ra3erl
false
null
t3_1ra3erl
/r/LocalLLaMA/comments/1ra3erl/16000_tokenssecond_taalas_llms_baked_into/
false
false
https://preview.redd.it/…66293732ed7d5809
197
{'enabled': True, 'images': [{'id': '3ivt7c1h0pkg1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=108&crop=smart&auto=webp&s=ae5ddfb67802f03b2a14c12112f306972f01389b', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=216&crop=smart&auto=webp&s=4cf2d9a6204511a3c8780dafabdcad9b3e6ac200', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=320&crop=smart&auto=webp&s=a03165356427a2a78b8a29897cddf623a29054a8', 'width': 320}, {'height': 304, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=640&crop=smart&auto=webp&s=6e7595a8007c52e29510def46505ff98750ab5d0', 'width': 640}, {'height': 456, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=960&crop=smart&auto=webp&s=7ed7f5eb009a294f8d9396ceadb96d57115f406b', 'width': 960}, {'height': 514, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?width=1080&crop=smart&auto=webp&s=45ae25bdea79d527f4b55835c2cce8037f8ff631', 'width': 1080}], 'source': {'height': 514, 'url': 'https://preview.redd.it/3ivt7c1h0pkg1.jpeg?auto=webp&s=3d479367a57ff83caa2fa6db5b7ef1bd0f091854', 'width': 1080}, 'variants': {}}]}
I’m wondering why 4o was removed
0
2026-02-20T18:09:53
https://i.redd.it/dzl8nf5ywokg1.jpeg
Historical_Egg_4060
i.redd.it
1970-01-01T00:00:00
0
{}
1ra2suj
false
null
t3_1ra2suj
/r/LocalLLaMA/comments/1ra2suj/im_wondering_why_4o_was_removed/
false
false
https://preview.redd.it/…ae10837f47e868af
0
{'enabled': True, 'images': [{'id': 'dzl8nf5ywokg1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=108&crop=smart&auto=webp&s=90fa3e328f98c7239ed25a4de7b6ce8014d4571d', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=216&crop=smart&auto=webp&s=c03590c288ccc3d69b3d18202472b3cd8cbb9aa4', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=320&crop=smart&auto=webp&s=af7ae8759cf8484973965e251bdeda8e1f72c1f1', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=640&crop=smart&auto=webp&s=3b2d5c87a76390e026407279517897e21b87aa8e', 'width': 640}, {'height': 1025, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=960&crop=smart&auto=webp&s=1f444c1d225164987f4298c3275d712e84357bf8', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?width=1080&crop=smart&auto=webp&s=fd9b4c966c3cf1191ded49eb53d715efc2378b51', 'width': 1080}], 'source': {'height': 1410, 'url': 'https://preview.redd.it/dzl8nf5ywokg1.jpeg?auto=webp&s=8dd5b5ef8b8333b31fef4fb517a972fb57c88a32', 'width': 1320}, 'variants': {}}]}
I built MergeSafe: A multi-engine scanner for MCP servers
0
Hey everyone, As the Model Context Protocol (MCP) ecosystem explodes, I noticed a huge gap: we’re all connecting third-party servers to our IDEs and local environments without a real way to audit what they’re actually doing under the hood. I’ve been working on MergeSafe, a multi-engine MCP scanner designed to sit between your LLM and your tools. Why I built it: • Static Analysis: It scans MCP server code for suspicious patterns before you hit "connect." • Multi-Engine: It aggregates results from multiple security layers to catch things a single regex might miss. • Prompt Injection Defense: It monitors the "tool call" flow to ensure an agent isn't being tricked into exfiltrating data. It’s in the early stages, and I need people to break it. If you’re using Claude Desktop or custom MCP setups, I’d love for you to run MergeSafe against your current servers and see if it flags anything (or if it’s too noisy).
2026-02-20T17:51:07
https://www.reddit.com/r/LocalLLaMA/comments/1ra2a9d/i_built_mergesafe_a_multiengine_scanner_for_mcp/
Sunnyfaldu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra2a9d
false
null
t3_1ra2a9d
/r/LocalLLaMA/comments/1ra2a9d/i_built_mergesafe_a_multiengine_scanner_for_mcp/
false
false
self
0
null
I got 45-46 tok/s on IPhone 14 Pro Max using BitNet
49
I ported Microsoft’s BitNet to iOS. Getting 45 tok/s on iPhone 14 Pro Max with the 0.7B model, \~200MB memory. BitNet uses 1-bit weights (-1, 0, +1) instead of 16-bit floats so the model is tiny and runs fast. The ARM NEON kernels already worked on M-series Macs so getting it on iPhone was mostly build system wrangling. I am currently running a base model (outputs are nonsense), next step is the instruction-tuned 2B model for actual usable chat. I will open source eventually, but sooner rather than later if there’s interest.​​​​​
2026-02-20T17:37:38
https://v.redd.it/whlo0jrarokg1
Middle-Hurry4718
v.redd.it
1970-01-01T00:00:00
0
{}
1ra1wxm
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/whlo0jrarokg1/DASHPlaylist.mpd?a=1774201081%2CMGVhMDQyNWM4ZWIzMTY2ZDJhNjJjYmM1ZWMyODY3NjhlNmZiMDAyMTE5NWZmNGZjMmI4MzFiNTMyMWJmNTE2OA%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/whlo0jrarokg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/whlo0jrarokg1/HLSPlaylist.m3u8?a=1774201081%2CNWQzZjVhMjdjNmQwMDRmN2M3MTI3NjczNzg2NzlmM2U2Yjg0ODkzYjNkYjdhMmY4NWQzMTA3MDliY2JlMjY0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/whlo0jrarokg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 590}}
t3_1ra1wxm
/r/LocalLLaMA/comments/1ra1wxm/i_got_4546_toks_on_iphone_14_pro_max_using_bitnet/
false
false
https://external-preview…04b8ef98a17a746e
49
{'enabled': False, 'images': [{'id': 'MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=ab438fdc2bf9087b7251eef057dc0b79071156f9', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?width=216&crop=smart&format=pjpg&auto=webp&s=c5181df7a20c1840b1574d7136ca74ec86776877', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?width=320&crop=smart&format=pjpg&auto=webp&s=5520d0470738d7fb28cb4b617be48a1b6020ba18', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?width=640&crop=smart&format=pjpg&auto=webp&s=eb169f82fce0b9658fb6c376bea08ddd93f7b78f', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/MnpoZng3cWFyb2tnMag_nQlaOiUb75GBHB5vo6hyb1PC6uSB2BeZWzIId6Ao.jpeg?format=pjpg&auto=webp&s=f43422f30ac681bce283206e98a70b276ad6d923', 'width': 886}, 'variants': {}}]}
Open‑source challenge for projects built with the local AI runtime Lemonade
11
I'm part of the team at AMD that helps maintain Lemonade, an open-source project for running text, image, and speech models locally on your PC. It’s OpenAI‑API compatible and handles CPU/GPU/NPU selection automatically. A big reason the project works as well as it does is because of contributions and feedback from our developer community. We wanted to give back to them, so we recently started a **Lemonade Challenge** and are inviting people to share open‑source projects they’ve built using Lemonade. Projects with strong community impact may be eligible to receive an AMD HP **Ryzen™ AI Max+ 395 (Strix Halo) laptop**. Just wanted to share the challenge with this community! If you’re already working on local AI stuff and have something you’d be willing to publish. More info can be found [here](https://www.amd.com/en/developer/resources/technical-articles/2026/join-the-lemonade-developer-challenge.html):
2026-02-20T17:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1ra1q4x/opensource_challenge_for_projects_built_with_the/
vgodsoe-amd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1q4x
false
null
t3_1ra1q4x
/r/LocalLLaMA/comments/1ra1q4x/opensource_challenge_for_projects_built_with_the/
false
false
self
11
{'enabled': False, 'images': [{'id': 'VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=108&crop=smart&auto=webp&s=98fa4e4ea18e7b3c9b2aace9049f71eb3325ba2c', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=216&crop=smart&auto=webp&s=f74288344349a5cbdd18ac5c16776fd576c00783', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=320&crop=smart&auto=webp&s=bad28d3bff7ec3d9516e7c8e49be3d816ffa14a7', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=640&crop=smart&auto=webp&s=50e5661b4ff028a87a76b6dfd21144cb41540b87', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=960&crop=smart&auto=webp&s=23efc28d9de59fb1878340e0dafb17be330f9381', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?width=1080&crop=smart&auto=webp&s=9b07addf588c8373fc3cd429b96627b0f5e4edba', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/VyUzTZBquC-MEposinpl3yKAd08HlVJuj7csNqHk1Y0.jpeg?auto=webp&s=c1e345364c6502821b0406b59f2e80bd0c9ed7f4', 'width': 1200}, 'variants': {}}]}
I got tired of agents burning API credits in infinite loops and blowing up context windows, so I built memory compression, strict token budgets, and built-in HMAC signing into my open-source "Glass Box" framework.
0
Hey everyone, A couple of months ago, I posted about Lár here. It’s an open-source, "Glass Box" agent framework I started building because I was pulling my hair out trying to debug LangChain’s "prompt soup". The response here was awesome. People really resonated with the idea of a deterministic, auditable directed graph (DAG) where everything is explicitly defined, rather than trusting a black-box AgentExecutor to not go rogue. Today I’m releasing v1.6.0, and I wanted to share it because it solves two of the most dangerous, wallet-draining problems I hit when running autonomous swarms in production: Context Bloat and Infinite Loops. As agents become more autonomous (like the recursive DynamicNode we introduced last month), they get risky. So I built these defensive mechanisms straight into the engine: 1. Memory Compression. If you use a BatchNode to fan out 10 agents to read 10 heavy documents in parallel, standard frameworks will dump all 10 essays back into the shared state simultaneously. This creates a "black hole", instantly maximizing the context window of any downstream model reading the state , costing you a fortune and causing massive hallucination. I solved this by introducing the ReduceNode (Map-Reduce pattern). You place it immediately after your parallel workers. It reads the bloated keys from the state, asks a fast/cheap LLM to summarize or extract the critical insights into a new key, and then crucially- explicitly deletes the raw data keys from the state matrix. This guarantees the memory "baton" passed to the next node remains light and focused. 1. Economic Constraints (Strict Token Budgets) Instead of relying on a "Max Steps" blunt instrument or guessing how much an Agent costs, Lár now supports mathematical dollar-amount ceilings via token\_budget. You give the initial graph state an integer budget. Every time an LLMNode executes, it reads its exact token usage directly from the LiteLLM adapter and subtracts it from the budget before routing to the next node. If a model attempts to execute and the token\_budget is 0, the engine intercepts the call, throws an error, and gracefully terminates the workflow. You can now mathematically guarantee an agent will never exceed a specific cost, no matter how complex its dynamic routing gets. 1. Node Fatigue (Circuit Breakers) To prevent true infinite loops (e.g., a RouterNode bouncing back and forth between a FixCode node and a TestCode node forever without technically burning a massive amount of tokens instantly), the engine now safely tracks the number of times it visits a specific node. If a single node is hit more times than the global max\_node\_fatigue limit, the engine physically trips a circuit breaker and kills the run. Bonus: Cryptographic Audit Logs (HMAC Signing) With the August 1 EU AI Regulation enforcement approaching, auditability is becoming mandatory for many real-world deployments. Lár natively supports HMAC-SHA256 signing of its execution JSON logs, giving you a mathematically unalterable receipt (FDA Part 11 / SEC compliant) of exactly what nodes ran, the tokens used, and the intermediate state. If you are sick of massive context bloat, runaway API bills, or impossible-to-debug black boxes, I’d love for you to check it out. Repo: [https://github.com/snath-ai/lar](https://github.com/snath-ai/lar) Map-Reduce/Budget Example: [https://github.com/snath-ai/lar/blob/main/examples/advanced/11\_map\_reduce\_budget.py](https://github.com/snath-ai/lar/blob/main/examples/advanced/11_map_reduce_budget.py) Let me know what you guys think! Has anyone else had to build custom context-wiping layers to keep their swarms from eating their entire API allowance?
2026-02-20T17:30:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra1pdb/i_got_tired_of_agents_burning_api_credits_in/
Some_Adhesiveness203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1pdb
false
null
t3_1ra1pdb
/r/LocalLLaMA/comments/1ra1pdb/i_got_tired_of_agents_burning_api_credits_in/
false
false
self
0
null
Benchmarked 4 AI Memory Systems on 600-Turn Conversations - Here Are the Results
0
We just completed comprehensive benchmarks comparing memory layers for production AI agents. Tested Mem0 against OpenAI Memory, LangMem, and MemGPT across 10 multi-session conversations with 200 questions each. **Key findings:** * **Mem0**: 66.9% accuracy, 1.4s p95 latency, \~2K tokens per query * **Mem0 Graph**: 68.5% accuracy, 2.6s p95 latency, \~4K tokens (superior temporal reasoning) * **OpenAI Memory**: 52.9% accuracy, 0.9s p95 latency, \~5K tokens * **LangMem**: 58.1% accuracy, 60s p95 latency, \~130 tokens * **MemGPT**: Results in appendix **What stands out:** Mem0 achieved 14 percentage points higher accuracy than OpenAI Memory while maintaining sub-2s response times. The graph variant excels at temporal queries (58.1% vs OpenAI's 21.7%) and multi-hop reasoning. LangMem's 60-second latency makes it unusable for interactive applications, despite being open source. **Methodology:** Used LOCOMO dataset with GPT-4o-mini at temperature 0. Evaluated factual consistency, multi-hop reasoning, temporal understanding, and open-domain recall across 26K+ token conversations. This matters because production agents need memory that persists beyond context windows while maintaining chat-level responsiveness. Current approaches either sacrifice accuracy for speed or become too slow for real-time use. You can check full benchmark code, datasets, and reproduction instructions in [this article](https://blog.mem0.ai/benchmarked-openai-memory-vs-langmem-vs-memgpt-vs-mem0/). Wanna reproduce the numbers? Repository: pip install mem0ai to test yourself.
2026-02-20T17:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1ra1572/benchmarked_4_ai_memory_systems_on_600turn/
singh_taranjeet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra1572
false
null
t3_1ra1572
/r/LocalLLaMA/comments/1ra1572/benchmarked_4_ai_memory_systems_on_600turn/
false
false
self
0
null
AI “memory layers” are promising… but 3 things still feel missing (temporal reasoning, privacy controls, deterministic mental models)
7
I’ve been testing a bunch of AI memory products lately (Mem0, Cognee, Supermemory, Zep, etc.) because our team really needs agents that can remember things across projects without turning into a liability. A bit of context: we’re a tech cooperative - many projects, many users, lots of collaboration, and we work with client data. We’re pretty security-conscious by default. Also very data-driven work (pipelines, analytics, models), plus a lot of AI-assisted development (coding agents, docs agents, “project manager” agents, the whole thing). After a few weeks of hands-on testing, most tools feel like they hit the same ceiling. These are the 3 gaps that keep biting us: **Robust temporal reasoning + versioning (memory needs “time”)** Most current systems feel additive: they keep stacking memories, but don’t *understand* how facts change. * The conflict problem: If I tell an agent “I’m vegan” on Monday and later say “I’m eating steak on Friday,” a lot of systems will happily store both as “facts.” They don’t reliably do conflict-driven updates (overwrite/expire/supersede) in a way that feels *natural*. * Chronological blindness: They often can’t tell the difference between an initial agreement and an amended agreement. You end up with “hallucinated contracts” where old terms and new terms get mashed together because both are still “true” somewhere in the memory store. What I want is something closer to: “this was true as-of date X, then it was replaced by version Y, and here’s why.” **Privacy-preserving multi-user collaboration (beyond user\_id)** A lot of tools can isolate memory by `user_id`, but team collaboration is where it gets messy. * Granular sharing: There’s rarely a clean standard way to say: “remember this for *Project A team* (subset of humans + agents), but not for everyone else in the org.” * Compliance gaps / semantic deletion: GDPR/CCPA “Right to be Forgotten” is hard even in normal systems - but here it’s worse because memories are embedded/summarized/linked. If someone says “forget everything about my health,” most stacks can’t surgically remove that semantic cluster without collateral damage (or leaving fragments behind in summaries/embeddings). In our world (client work + security), “oops it might still be in the vector DB somewhere” isn’t acceptable. **Deterministic mental models (conceptual stability)** This one is subtle, but it’s the most frustrating day-to-day. A lot of memory layers depend on LLM summarization to decide what gets stored, how it gets rewritten, and what the “canonical” memory is. That makes the memory itself… kinda stochastic. * Summarization bias: The system decides what matters, and it often drops the exact technical nuance we actually needed later (APIs, constraints, edge cases, “do NOT do X” rules, etc.). * The black box of retrieval: As a user, I can’t build a reliable mental model of what the agent will remember. Sometimes it recalls a random detail from weeks ago. Sometimes it forgets a core instruction from 5 minutes ago because the similarity score didn’t clear some threshold. If memory is supposed to be infrastructure, I need it to feel predictable and inspectable. These gaps are showing up so consistently that we started prototyping a different approach internally - not “yet another vector store wrapper,” but something that treats time, permissions, and stable concepts as first-class. I’m not posting a product pitch here, and I’m not claiming we’ve solved it. But we’re far enough along that I’m curious whether the wider community is hitting the same walls and what you wish existed. For people building/using memory layers 1. What limitations are you running into that aren’t obvious from demos? 2. If you’ve used Mem0/Cognee/Supermemory/Zep in production-ish setups: what broke first? 3. If you could wave a wand and add one “memory primitive” to these systems, what would it be? If any of this resonates and you’re curious what we’re building / how we’re thinking about it, happy to share more (or swap notes).
2026-02-20T16:59:28
https://www.reddit.com/r/LocalLLaMA/comments/1ra0ude/ai_memory_layers_are_promising_but_3_things_still/
arapkuliev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0ude
false
null
t3_1ra0ude
/r/LocalLLaMA/comments/1ra0ude/ai_memory_layers_are_promising_but_3_things_still/
false
false
self
7
null
If you're building hierarchical/tree-based RAG, this might be helpful.
9
I spent a few days building and benchmarking a hierarchical retrieval system — routing queries through a tree of LLM-generated summaries instead of flat vector search. The idea: save tokens by pruning irrelevant branches early, only retrieve what matters. It doesn't work. At least not with embedding-based routing. At \~300 chunks it looked decent. At \~22k chunks it scored 0.094 nDCG vs 0.749 for plain dense retrieval + cross-encoder reranking. Completely unusable. The core problem is simple: routing errors at each tree level compound multiplicatively. If you've got even a 15% miss rate per level, after 5 levels you're correctly routing less than half your queries. The deeper the tree (i.e. the larger your corpus — exactly when you need this most), the worse it gets. Things I tested that didn't fix it: * Wider beam search (helps, but just delays the collapse) * Better embeddings (mpnet vs MiniLM — marginal) * Richer summaries, contrastive prompts, content snippets (all plateau at the same ceiling) * Cross-encoder routing (actually made it worse — MS-MARCO models aren't trained on structured summary text) * BM25 hybrid routing (summaries are too sparse for lexical matching) The tree structure itself is fine — beam width sweep proved the correct branches exist at every level. The routing mechanism just can't reliably pick them. If you're using RAPTOR-style retrieval, this explains why collapsed tree mode (flat search over all nodes) beats top-down traversal. Don't fight the compounding — skip it entirely. Paper and full code/benchmarks: [https://doi.org/10.5281/zenodo.18714001](https://doi.org/10.5281/zenodo.18714001)
2026-02-20T16:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1ra0nz9/if_youre_building_hierarchicaltreebased_rag_this/
auditsu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0nz9
false
null
t3_1ra0nz9
/r/LocalLLaMA/comments/1ra0nz9/if_youre_building_hierarchicaltreebased_rag_this/
false
false
self
9
null
Best model for PRECISE long-context tasks
0
A lot of what I do involves text-processing tasks. Not consistent enough to replace LLM with dedicated functions, but enough that context issues cause problems. Example: "Given the following transcript, insert line breaks at natural intervals. All text must be preserved and only additive whitespace changes are allowed. Here is the text: \[2000 tokens follow\]" Frustratingly, random sentences might be missing from the final output. Context is set much higher, 32,000 tokens, so in theory the breakdown shouldn't be this bad for Gemma3-W4A16 quants right, whether 12B or 27B? I know LLMs aren't processing bytes (usually) and aren't fully deterministic, but this seems like a reasonable expectation.
2026-02-20T16:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1ra0jx9/best_model_for_precise_longcontext_tasks/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra0jx9
false
null
t3_1ra0jx9
/r/LocalLLaMA/comments/1ra0jx9/best_model_for_precise_longcontext_tasks/
false
false
self
0
null
Seeking YouTube Advice
0
Hello! I want to start a Youtube channel about AI (mainly local AI driven) and wanted to know what the community would like to see. I plan on making real, human, and high quality videos. Any ideas are welcome, even if it isn't purely local AI. I'm just a dude that wants to demonstrate AI and support the community. Thanks!
2026-02-20T16:37:02
https://www.reddit.com/r/LocalLLaMA/comments/1ra08kv/seeking_youtube_advice/
TyedalWaves
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra08kv
false
null
t3_1ra08kv
/r/LocalLLaMA/comments/1ra08kv/seeking_youtube_advice/
false
false
self
0
null
RTX 3060 12GB Build for AI: Modern i5-10400 (16GB DDR4) vs. Dual Xeon E5645 (96GB DDR3)?
0
Hi everyone! I’m building a budget local AI rig and I'm torn between two options. Both will have an **RTX 3060 12GB**, but the platforms are very different: 1. **Modern-ish:** i5-10400, 16GB DDR4. 2. **Old Workstation:** 2x Xeon E5645, 96GB DDR3. (No AVX support). My Main Goal**:** Developing a **Local Voice Assistant**. I need a pipeline that includes: * **STT (Speech-to-Text):** Whisper (running locally). * **LLM:** Fast inference for natural flow (Llama 3 8B or similar). * **TTS (Text-to-Speech):** Piper. * **Secondary:** Coding assistance (JavaScript, Python) and some Stable Diffusion.
2026-02-20T16:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1ra028c/rtx_3060_12gb_build_for_ai_modern_i510400_16gb/
Due_Ear7437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra028c
false
null
t3_1ra028c
/r/LocalLLaMA/comments/1ra028c/rtx_3060_12gb_build_for_ai_modern_i510400_16gb/
false
false
self
0
null
Open-source Android assistant with offline wake-word (Vosk) + OpenClaw gateway
1
I open-sourced an Android voice assistant app that uses OpenClaw as the backend. Repo: [https://github.com/yuga-hashimoto/openclaw-assistant](https://github.com/yuga-hashimoto/openclaw-assistant) What might be interesting for this sub: - On-device wake-word detection (Vosk) - Realtime streaming responses from OpenClaw gateway - VoiceInteractionService integration on Android - Encrypted local settings and device identity Would love feedback from people building local/edge-first AI workflows.
2026-02-20T16:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1ra01hz/opensource_android_assistant_with_offline/
Short_Way1817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ra01hz
false
null
t3_1ra01hz
/r/LocalLLaMA/comments/1ra01hz/opensource_android_assistant_with_offline/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=108&crop=smart&auto=webp&s=118c1c320fd4b0ec9c467a94f7a60a3e31dff8f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=216&crop=smart&auto=webp&s=a38f8310944a79a10249fdb7b029cd9dd6a5da5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=320&crop=smart&auto=webp&s=831e67155caae739ab5e4861ea79c3ea597d44a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=640&crop=smart&auto=webp&s=5a9db9b20eb034d2a979ddec86ff4dec4736be56', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=960&crop=smart&auto=webp&s=0c5c6992f3295aad0bfb0c75e8e4689d20faf854', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?width=1080&crop=smart&auto=webp&s=fcca678e37a64f784472bce415bb80d5195aad26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NjT9z39RAZwe84hk2gnqFDOgJjl640qDJjRrlh_LjEM.png?auto=webp&s=52a075015f286f786284fbe100d0c67bb1def006', 'width': 1200}, 'variants': {}}]}
The top 3 models on openrouter this week ( Chinese models are dominating!)
367
the first time i see a model exceed 3 trillion tokens per week on openrouter! the first time i see more than one model exceed a trillion token per week ( it was only grok 4 fast month ago) the first time i see chinese models destroying US ones like this
2026-02-20T16:21:50
https://i.redd.it/h4l8zr4rdokg1.jpeg
keb_37
i.redd.it
1970-01-01T00:00:00
0
{}
1r9zt8m
false
null
t3_1r9zt8m
/r/LocalLLaMA/comments/1r9zt8m/the_top_3_models_on_openrouter_this_week_chinese/
false
false
https://preview.redd.it/…e56ffeef6da53d67
367
{'enabled': True, 'images': [{'id': 'h4l8zr4rdokg1', 'resolutions': [{'height': 178, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=108&crop=smart&auto=webp&s=86d55b608817a5ed9ee4eaa39ead53fbf9ab5a6d', 'width': 108}, {'height': 357, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=216&crop=smart&auto=webp&s=774ddaa410fff93e4d28e2c7c3389a16ea3eb1cd', 'width': 216}, {'height': 529, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=320&crop=smart&auto=webp&s=dc449a0572f35d73ec7ba45dfe6394fc1b526802', 'width': 320}, {'height': 1059, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=640&crop=smart&auto=webp&s=e1cb3201433eea5c7cd862fbc8c0f259e4e6b134', 'width': 640}, {'height': 1589, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=960&crop=smart&auto=webp&s=7c8de1cb51e917c7fd8c25c9ecc1204c2aa90656', 'width': 960}, {'height': 1787, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?width=1080&crop=smart&auto=webp&s=a6515e476a5ab3501783781fafc5930179688055', 'width': 1080}], 'source': {'height': 1788, 'url': 'https://preview.redd.it/h4l8zr4rdokg1.jpeg?auto=webp&s=66eb60d81d49de5d636aac299cf151f815c5236a', 'width': 1080}, 'variants': {}}]}
[ Removed by moderator ]
1
[removed]
2026-02-20T16:08:14
[deleted]
1970-01-01T00:00:00
0
{}
1r9zfnv
false
null
t3_1r9zfnv
/r/LocalLLaMA/comments/1r9zfnv/running_llama_locally_for_healthcare_had_to_build/
false
false
null
1
null
I spent 3 months interviewing AI engineers and got kind of depressed. Made this roadmap so you don't end up in the pile I kept rejecting.
0
Okay so a bit of context before I dump this wall of text on you. I have done somewhere around 30+ interviews over the past few months. I took notes on almost all of them because I started noticing the same patterns over and over and it was driving me insane. I need to be honest with you > the market right now is brutal, but not for the reasons most people think. It's not that there aren't jobs. There's this massive gap between what people think is impressive and what actually gets you hired at the $150k+ level. The thing that broke me was opening multiple resumes in a row and seeing \[Built a Chatbot with OpenAI API\] listed as the top project. 3 years ago I would have been genuinely excited. Now it reads the same way made a website using Dreamweaver did in 2012 (if you remember). It just tells me you followed a YouTube tutorial and called it a day. Here's what nobody says out loud = if your whole skillset lives or dies on an API key, you don't really have a skillset. You have a subscription. So I put together the actual project types that have been making me stop and say okay, let's get this person in for a second round. These are not easy and that's the whole point. Difficulty is what separates a portfolio from a tutorial graveyard. **1. Offline RAG System** **2. Self-Healing Agent** **3. Real-Time Voice Under 500ms Latency** **4. Fine-Tuning Pipeline** **5. Event-Driven Orchestration** **6. Hybrid Memory System** Stack summary if you want the TLDR: Stop grinding LangChain syntax. Start learning architecture. The tools that keep showing up in the builds I actually respect = Docker, LangGraph, FastAPI, Neo4j, Unsloth. I turned it into a proper writeup on my Substack if you want the deep dive. Happy to answer questions in the comments. If you are stuck on any of these or want to know what specifically I look for when someone walks me through their build, just ask. [Check it out!](https://himanshuramchandani.substack.com/p/ai-engineer-roadmap-2026-ship-or)
2026-02-20T16:06:42
https://i.redd.it/sfhg559saokg1.png
hemansnation
i.redd.it
1970-01-01T00:00:00
0
{}
1r9ze4n
false
null
t3_1r9ze4n
/r/LocalLLaMA/comments/1r9ze4n/i_spent_3_months_interviewing_ai_engineers_and/
false
false
https://preview.redd.it/…14fe4b3c5612d1b2
0
{'enabled': True, 'images': [{'id': 'sfhg559saokg1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=108&crop=smart&auto=webp&s=2c480feee8da47b4876f60710430292b88504443', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=216&crop=smart&auto=webp&s=bc63ce1e584093cbdc3de4576b01bfdb9a772558', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=320&crop=smart&auto=webp&s=9a5ed42b6a8ad4517ba473670ee0c00d1931464d', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=640&crop=smart&auto=webp&s=f67ba804f592110b66d2c09c2488cc2fe8db0436', 'width': 640}, {'height': 663, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=960&crop=smart&auto=webp&s=0c27915ecfd46548cbd893a9a030c0dfc0d76c2b', 'width': 960}, {'height': 745, 'url': 'https://preview.redd.it/sfhg559saokg1.png?width=1080&crop=smart&auto=webp&s=eb0ab4d1042f6a953f39abe881a57775802b1429', 'width': 1080}], 'source': {'height': 1076, 'url': 'https://preview.redd.it/sfhg559saokg1.png?auto=webp&s=c95f90aa1d9510bc9cb1e7688756c8b65ee2f64a', 'width': 1558}, 'variants': {}}]}
GEPA: optimize_anything: A Universal API for Optimizing any Text Parameter
8
2026-02-20T15:53:47
https://gepa-ai.github.io/gepa/blog/2026/02/18/introducing-optimize-anything/
Thrumpwart
gepa-ai.github.io
1970-01-01T00:00:00
0
{}
1r9z17v
false
null
t3_1r9z17v
/r/LocalLLaMA/comments/1r9z17v/gepa_optimize_anything_a_universal_api_for/
false
false
https://external-preview…74b29e0060a4c08d
8
{'enabled': False, 'images': [{'id': '2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=108&crop=smart&auto=webp&s=38e484660d3f107fb29e93d1409270e2d9dc62c6', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=216&crop=smart&auto=webp&s=7c689a67070c5d94c542836543e7006b7292fcbf', 'width': 216}, {'height': 147, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=320&crop=smart&auto=webp&s=7855c21dda6e5c9258c3a47f3241c14eab7b4744', 'width': 320}, {'height': 295, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=640&crop=smart&auto=webp&s=69e5869ae76db11b96d77f514bb8995ed007ef73', 'width': 640}, {'height': 442, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=960&crop=smart&auto=webp&s=ab5c8433224a658ba62ac8fdc74013faad9b8d33', 'width': 960}, {'height': 498, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?width=1080&crop=smart&auto=webp&s=c355e0665546b54aa868f9f19299f5a9aa18bc1d', 'width': 1080}], 'source': {'height': 1430, 'url': 'https://external-preview.redd.it/2-Cc1NyTxl7z1zJSDNsCfv2lkMJD9O4gdY-5mJfik2c.png?auto=webp&s=5a43eba8a8cbdd0bdb68de8ae7bb041c7eec2499', 'width': 3100}, 'variants': {}}]}
I'm releasing SmarterRouter - A Smart LLM proxy for all your local models.
3
I've been working on this project to create a smarter LLM proxy primarily for my openwebui setup (but it's a standard openai compatible endpoint API, so it will work with anything that accepts that). The idea is pretty simple, you see one frontend model in your system, but in the backend it can load whatever model is "best" for the prompt you send. When you first spin up Smarterrouter it profiles all your models, giving them scores for all the main types of prompts you could ask, as well as benchmark other things like model size, actual VRAM usage, etc. (you can even configure an external "Judge" AI to grade the responses the models give, i've found it improves the profile results, but it's optional). It will also detect and new or deleted models and start profiling them in the background, you don't need to do anything, just add your models to ollama and they will be added to SmarterRouter to be used. There's a lot going on under the hood, but i've been putting it through it's paces and so far it's performing really well, It's extremely fast, It caches responses, and I'm seeing a negligible amount of time added to prompt response time. It will also automatically load and unload the models in Ollama (and any other backend that allows that). The only caveat i've found is that currently it favors very small, high performing models, like Qwen coder 0.5B for example, but if small models are faster and they score really highly in the benchmarks... Is that really a bad response? I'm doing more digging, but so far it's working really well with all the test prompts i've given it to try (swapping to larger/different models for more complex questions or creative questions that are outside of the small models wheelhouse). Here's a high level summary of the biggest features: **Self-Correction via Hardware Profiling**: Instead of guessing performance, it runs a one-time benchmark on your specific GPU/CPU setup. It learns exactly how fast and capable your models are in your unique environment. **Active VRAM Guard**: It monitors nvidia-smi in real-time. If a model selection is about to trigger an Out-of-Memory (OOM) error, it proactively unloads idle models or chooses a smaller alternative to keep your system stable. **Semantic "Smart" Caching**: It doesn't just match exact text. It uses vector embeddings to recognize when you’re asking a similar question to a previous one, serving the cached response instantly and saving your compute cycles. **The "One Model" Illusion**: It presents your entire collection of 20+ models as a single OpenAI-compatible endpoint. You just select SmarterRouter in your UI, and it handles the "load, run, unload" logic behind the scenes. **Intelligence-to-Task Routing**: It automatically analyzes your prompt's complexity. It won't waste your 70B model's time on a "Hello," and it won't let a 0.5B model hallucinate its way through a complex Python refactor. **LLM-as-Judge Feedback**: It can use a high-end model (like a cloud GPT-4o or a local heavy-hitter) to periodically "score" the performance of your smaller models, constantly refining its own routing weights based on actual quality. Github: [https://github.com/peva3/SmarterRouter](https://github.com/peva3/SmarterRouter) Let me know how this works for you, I have it running perfectly with a 4060 ti 16gb, so i'm positive that it will scale well to the massive systems some of y'all have.
2026-02-20T15:51:02
https://www.reddit.com/r/LocalLLaMA/comments/1r9yylw/im_releasing_smarterrouter_a_smart_llm_proxy_for/
peva3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9yylw
false
null
t3_1r9yylw
/r/LocalLLaMA/comments/1r9yylw/im_releasing_smarterrouter_a_smart_llm_proxy_for/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=108&crop=smart&auto=webp&s=fbcdbff55275ef1866023ae1abfb997cdcf99b62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=216&crop=smart&auto=webp&s=239441a065624561ba5e6355512d9b08b3db6bd5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=320&crop=smart&auto=webp&s=9df6304b13fa0161e27cfb29b62439d1ac50982f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=640&crop=smart&auto=webp&s=3a22ffbb9457c57976ffb6426d90b3b52591b14a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=960&crop=smart&auto=webp&s=08dd9060d44cc49641fc7430cacee2fa6d0936cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?width=1080&crop=smart&auto=webp&s=65df8130a82b28b57528b9821acd0412e0920bdf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nE5eKS5sdGSxK-98tU3X-hSfjx0N0Uh2mDPOndRYXBA.png?auto=webp&s=040ccb4bd73bddb435f1933bd3142a5fb93cc267', 'width': 1200}, 'variants': {}}]}
Minimax M2.5 generated a more detailed animated solar system SVG than Gemini 3.1 Pro!
0
2026-02-20T15:38:26
https://i.redd.it/vpui9p506okg1.png
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1r9ymch
false
null
t3_1r9ymch
/r/LocalLLaMA/comments/1r9ymch/minimax_m25_generated_a_more_detailed_animated/
false
false
https://preview.redd.it/…a476c1cc212fdab2
0
{'enabled': True, 'images': [{'id': 'vpui9p506okg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=108&crop=smart&auto=webp&s=b4d93b7984c30e50f1e4e78106199f7c182e443d', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=216&crop=smart&auto=webp&s=efb56f9f0f933d1fb019d7d60124a9e2c186bdfe', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=320&crop=smart&auto=webp&s=cd129547f660b6b6fb1564ee851078243c46dc3b', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=640&crop=smart&auto=webp&s=bf87dc482fda1d52ba3835da98ccc21a56d3d960', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=960&crop=smart&auto=webp&s=4870e3d332d3994f160d422cafd3c2c7aa0dca93', 'width': 960}, {'height': 719, 'url': 'https://preview.redd.it/vpui9p506okg1.png?width=1080&crop=smart&auto=webp&s=72fc294c57ce2e11784da9876e9ba4335c3ae048', 'width': 1080}], 'source': {'height': 2309, 'url': 'https://preview.redd.it/vpui9p506okg1.png?auto=webp&s=d2730275836da9317521dde1bd873733c1a831aa', 'width': 3464}, 'variants': {}}]}
Used GLM to beat codex on the Unemployment arena
0
Achieved top 1 first try above all Codex and Claude Code models. And I literally used GLM to build my agent in 15 minutes. There was codex 5.2 i think in first place but it had quite a bad score .. i just asked codex to build me an agent, i tweaked it a bit here and there and go top 1 first try. Something weird is that the "strongest" models don't seem to perform the best. There is codex 5.2 xhigh > 5.1 high > 5.3 xhigh. This makes no sense. And Claude Code with Opus 4.6 and 4.5 is doing way worse. As if coding abilities were uncorrelated of this stuff. Also Gemini related models are quite low. Also I do not see any open source models in there which is not great ...
2026-02-20T15:29:25
https://www.reddit.com/r/LocalLLaMA/comments/1r9ydj9/used_glm_to_beat_codex_on_the_unemployment_arena/
idkwhattochoosz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9ydj9
false
null
t3_1r9ydj9
/r/LocalLLaMA/comments/1r9ydj9/used_glm_to_beat_codex_on_the_unemployment_arena/
false
false
self
0
null
Gemini 3.1 Pro Preview goes off the rails in opencode subagent 💀
0
2026-02-20T15:23:59
https://v.redd.it/2jks05ba3okg1
ash-ishh
v.redd.it
1970-01-01T00:00:00
0
{}
1r9y88y
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2jks05ba3okg1/DASHPlaylist.mpd?a=1774193062%2CODYzMWZkOGRjNmFmMTI4YjMxNDYxN2YzMWMzNDQwY2FlMjQ5NmFkMGY0ZWUwNzM3MzIyYzIwMThjYjYwNTVhYQ%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/2jks05ba3okg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2jks05ba3okg1/HLSPlaylist.m3u8?a=1774193062%2CYzNkYzM0MmY4ODViZGZmZmY3ZjBiNDQwM2MzNzk3OGE3NWUxNjYwYzU0NDI0ZGU4ZDQ3MDAwODNlMDY3NjhiZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2jks05ba3okg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1r9y88y
/r/LocalLLaMA/comments/1r9y88y/gemini_31_pro_preview_goes_off_the_rails_in/
false
false
https://external-preview…bd0692427075c527
0
{'enabled': False, 'images': [{'id': 'N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=108&crop=smart&format=pjpg&auto=webp&s=7e9cd3845afc4aa2ca42f48d77340c173b4b9a42', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=216&crop=smart&format=pjpg&auto=webp&s=e375c3d296b7a9f3cc02f58c4dbc86b4cf9343d5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=320&crop=smart&format=pjpg&auto=webp&s=404f7ec2a0629c4129ca74164eb0642372483b13', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=640&crop=smart&format=pjpg&auto=webp&s=5031b63f2fb0304eb756bcc211d3613688dcddf3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=960&crop=smart&format=pjpg&auto=webp&s=8fa20630a4a079bce68d7df07b30c92c65e29b02', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8fae18ce27eeeda610410ac73d471122b1b24c0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N2h3cHBxY2Ezb2tnMVUmNom7gudzTyewKaCl7cO2sodYIdWOoH2ERc5hPp_W.png?format=pjpg&auto=webp&s=a104ce04ebebeffda626e03026fe463300b69601', 'width': 1920}, 'variants': {}}]}
TranscriptionSuite - A fully local, private & open source audio transcription for Linux, Windows & macOS
162
Hi! This is a short presentation for my hobby project, [TranscriptionSuite](https://github.com/homelab-00/TranscriptionSuite). **TL;DR** A fully local & private Speech-To-Text app for Linux, Windows & macOS. Python backend + Electron frontend, utilizing faster-whisper and CUDA acceleration. If you're interested in the boring dev stuff, go to the bottom section. --- I'm releasing a major UI upgrade today. Enjoy! Short sales pitch: - **100% Local**: *Everything* runs on your own computer, the app doesn't need internet beyond the initial setup - **Truly Multilingual**: Supports [90+ languages](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py) - **Fully featured GUI**: Electron desktop app for Linux, Windows, and macOS (Apple Silicon) - **GPU + CPU Mode**: NVIDIA CUDA acceleration (recommended), or CPU-only mode for any platform including macOS - **Longform Transcription**: Record as long as you want and have it transcribed in seconds - **Live Mode**: Real-time sentence-by-sentence transcription for continuous dictation workflows - **Speaker Diarization**: PyAnnote-based speaker identification - **Static File Transcription**: Transcribe existing audio/video files with multi-file import queue, retry, and progress tracking - **Remote Access**: Securely access your desktop at home running the model from anywhere (utilizing Tailscale) - **Audio Notebook**: An Audio Notebook mode, with a calendar-based view, full-text search, and LM Studio integration (chat about your notes with the AI) - **System Tray Control**: Quickly start/stop a recording, plus a lot of other controls, available via the system tray. 📌*Half an hour of audio transcribed in under a minute (RTX 3060)!* --- The seed of the project was my desire to quickly and reliably interface with AI chatbots using my voice. That was about a year ago. Though less prevalent back then, still plenty of AI services like GhatGPT offered voice transcription. However the issue is that, like every other AI-infused company, they *always* do it shittily. Yes is works fine for 30s recordings, but what if I want to ramble on for 10 minutes? The AI is smart enough to decipher what I mean and I can speak to it like a smarter rubber ducky, helping me work through the problem. Well, from my testing back then speak more than 5 minutes and they all start to crap out. And you feel doubly stupid because not only did you get your transcription but you also wasted 10 minutes talking to the wall. Moreover, there's the privacy issue. They already collect a ton of text data, giving them my voice feels like too much. So I first looking at any existing solutions, but couldn't find any decent option that could run locally. Then I came across [RealtimeSTT](https://github.com/KoljaB/RealtimeSTT), an extremely impressive and efficient Python project that offered real-time transcription. It's more of a library or framework with only sample implementations. So I started building around that package, stripping it down to its barest of bones in order to understand how it works so that I could modify it. This whole project grew out of that idea. I built this project to satisfy my needs. I thought about releasing it only when it was decent enough where someone who doesn't know anything about it can just download a thing and run it. That's why I chose to Dockerize the server portion of the code. The project was originally written in pure Python. Essentially it's a fancy wrapper around `faster-whisper`. At some point I implemented a *server-client* architecture and added a notebook mode (think of it like calendar for your audio notes). And recently I decided to upgrade the frontend UI from Python to React + Typescript. Built all in Google AI Studio - App Builder mode for free believe it or not. No need to shell out the big bucks for Lovable, daddy Google's got you covered. --- Don't hesitate to contact me here or open an issue on GitHub for any technical issues or other ideas!
2026-02-20T15:22:24
https://v.redd.it/gxbrs1rj2okg1
TwilightEncoder
v.redd.it
1970-01-01T00:00:00
0
{}
1r9y6s8
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gxbrs1rj2okg1/DASHPlaylist.mpd?a=1774192970%2CZDgzYTZlYzEyYWYwYTI5MGY2MDQ3Mjk5NjA1NjA2OGZlY2NhYmE5MjJkMWYxNzJiNmQ5ZTEzZWVhZWU1NDlhYw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/gxbrs1rj2okg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/gxbrs1rj2okg1/HLSPlaylist.m3u8?a=1774192970%2CYWYzMDk3N2VjNTIyYTUwODc1MGRmOWUwNjg1OTEyYTJkMjE4ZDM3ZmViODcyYzAzZDFiNmUxNzIxNzNiZTQwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gxbrs1rj2okg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1148}}
t3_1r9y6s8
/r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/
false
false
https://external-preview…813fab2729058616
162
{'enabled': False, 'images': [{'id': 'ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b4b513a80791a636c031165304c24a2856586d1', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=216&crop=smart&format=pjpg&auto=webp&s=bc5d58163f6e2de07bd66fa725bec76772cb749d', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=320&crop=smart&format=pjpg&auto=webp&s=c5323519f77bc9090d6a30bf00e893bda8746c07', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=640&crop=smart&format=pjpg&auto=webp&s=6d0d898075d6b24d43bbf4af2609b62444176b7c', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=960&crop=smart&format=pjpg&auto=webp&s=09a7991071ee996cc45b83f5781504178667c4fd', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1ad4a5cfe855c45701b9ba2b37674c57096d4e22', 'width': 1080}], 'source': {'height': 944, 'url': 'https://external-preview.redd.it/ZjVodnR2dGoyb2tnMfrHn1-Z1IlbM1M-CdvVLf1S0fx3BvVT39BjZwD6xxr6.png?format=pjpg&auto=webp&s=025028ea7579e7d28dac4704ad8195883b98975d', 'width': 1504}, 'variants': {}}]}
What agentic model to use for a non-coding, claude-like agent for another domain?
1
I'm building a claude/claude-code like capability for insurance domain. Rather than code it's dealing with emails, documents, it is still searching the web to do research and generating reports (md files, pdfs/word docs). What's a good, non-openai/anthropic model/interference provider I can use for this (fully code talking to an api)? I'm thinking one of the cheaper models (Kimi? Other?) will be just as good for my use case and significantly cheaper. (or should I just use eg gpt-5-mini?)
2026-02-20T15:21:29
https://www.reddit.com/r/LocalLLaMA/comments/1r9y5x7/what_agentic_model_to_use_for_a_noncoding/
flobblobblob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9y5x7
false
null
t3_1r9y5x7
/r/LocalLLaMA/comments/1r9y5x7/what_agentic_model_to_use_for_a_noncoding/
false
false
self
1
null
Tiny Aya 3.35B Re-Implementation From Scratch
2
2026-02-20T15:18:17
https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/15_tiny-aya/standalone-tiny-aya-plus-kv-cache.ipynb
seraschka
github.com
1970-01-01T00:00:00
0
{}
1r9y2wq
false
null
t3_1r9y2wq
/r/LocalLLaMA/comments/1r9y2wq/tiny_aya_335b_reimplementation_from_scratch/
false
false
https://external-preview…3e5e5e0885f45d0c
2
{'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=108&crop=smart&auto=webp&s=7a9f7bddb2a496f61dcab2697ee39275ef37ff8b', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=216&crop=smart&auto=webp&s=5f64739b2c7fc155c1cf0b5737e57a5ba4f5c496', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=320&crop=smart&auto=webp&s=307fd5e4b8daf85913b11d341de740b43ed01e08', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=640&crop=smart&auto=webp&s=ba7a02463e436c0622d6d19857a96f07efc524d6', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=960&crop=smart&auto=webp&s=33d0c126a1089bc93b2ef3976f6725be97ddf782', 'width': 960}, {'height': 548, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?width=1080&crop=smart&auto=webp&s=3b1eccdcd73703c5b7f85be384b82900cbbdb019', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc.png?auto=webp&s=16df197338151c01be05a7e5619cbb2a3015c2d9', 'width': 1772}, 'variants': {}}]}
I built a FastAPI /docs-style UI for testing MCP servers locally
1
https://reddit.com/link/1r9xuge/video/dgvf1w69wnkg1/player Hey everyone, I've been playing with MCP/FastMCP servers recently and built a small tool to simplify the dev workflow. The idea: a FastAPI /docs style UI but for MCP servers. You point it at your server, and it automatically generates forms for all your tools. How it works: * `pip install mcpplay && mcpplay run` `server.py` * Connects over stdio and automatically builds forms from your tool schemas * Saves every call to a timeline which you then can use to replay * Hot reloads when you save your server file * `mcpplay demo` if you want to try it without setting up a server Runs on localhost, nothing leaves your machine. GitHub: [https://github.com/gauthierpiarrette/mcpplay](https://github.com/gauthierpiarrette/mcpplay) This is pretty early so there's stuff to fix but I would appreciate honest feedback from anyone building MCP servers.
2026-02-20T15:09:18
https://www.reddit.com/r/LocalLLaMA/comments/1r9xuge/i_built_a_fastapi_docsstyle_ui_for_testing_mcp/
gauthierpia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r9xuge
false
null
t3_1r9xuge
/r/LocalLLaMA/comments/1r9xuge/i_built_a_fastapi_docsstyle_ui_for_testing_mcp/
false
false
https://external-preview…70775f1d92957452
1
null