column     dtype           observed range / classes
title      string          length 1 – 300
score      int64           0 – 8.54k
selftext   string          length 0 – 41.5k
created    timestamp[ns]   2023-04-01 04:30:41 – 2026-03-04 02:14:14
url        string          length 0 – 878
author     string          length 3 – 20
domain     string          length 0 – 82
edited     timestamp[ns]   1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded     int64           0 – 2
gildings   string          7 classes
id         string          length 7 – 7
locked     bool            2 classes
media      string          length 646 – 1.8k
name       string          length 10 – 10
permalink  string          length 33 – 82
spoiler    bool            2 classes
stickied   bool            2 classes
thumbnail  string          length 4 – 213
ups        int64           0 – 8.54k
preview    string          length 301 – 5.01k
AgentNet: IRC-style relay for decentralized AI agents
2
I’ve been experimenting with multi-agent systems, and one thing that kept bothering me is that most frameworks assume all agents run in the same process or environment. I wanted something more decentralized — agents on different machines, owned by different people, communicating through a shared relay. Basically, IRC ...
2026-02-18T11:35:15
https://www.reddit.com/r/LocalLLaMA/comments/1r80n04/agentnet_ircstyle_relay_for_decentralized_ai/
FickleArtichoke974
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r80n04
false
null
t3_1r80n04
/r/LocalLLaMA/comments/1r80n04/agentnet_ircstyle_relay_for_decentralized_ai/
false
false
self
2
null
Multimodal Vector Enrichment (How to Extract Value from Images, Charts, and Tables)
2
I think most teams don't realize they're building incomplete RAG systems by only indexing text. Charts, diagrams, and graphs are a big part of document content and contain most of the decision-relevant info. Yet most RAG pipelines either ignore visuals completely, extract them as raw images without interpretation, or ...
2026-02-18T11:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1r80m4o/multimodal_vector_enrichment_how_to_extract_value/
Independent-Cost-971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r80m4o
false
null
t3_1r80m4o
/r/LocalLLaMA/comments/1r80m4o/multimodal_vector_enrichment_how_to_extract_value/
false
false
self
2
null
Introducing SOVEREIGN, an open-source autonomous agent OS
0
I got frustrated with existing AI agent tools. So I built my own — because you shouldn't have to rent your intelligence from someone else. Introducing SOVEREIGN, an open-source autonomous agent OS: 🧠 Multi-agent councils that debate, challenge, and reach consensus 🔁 Runtime human checkpoints — pause mid-executi...
2026-02-18T11:22:17
https://i.redd.it/cm52xa3hm8kg1.png
CobblerMaximum
i.redd.it
1970-01-01T00:00:00
0
{}
1r80eky
false
null
t3_1r80eky
/r/LocalLLaMA/comments/1r80eky/introducing_sovereign_an_opensource_autonomous/
false
false
https://preview.redd.it/…0364554401ef0ca2
0
{'enabled': True, 'images': [{'id': 'cm52xa3hm8kg1', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/cm52xa3hm8kg1.png?width=108&crop=smart&auto=webp&s=1cdaadf0104ab69f1663750824da4a553bd821b0', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/cm52xa3hm8kg1.png?width=216&crop=smart&auto=web...
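The `preview` values above are Python-literal dicts (single quotes, capitalized `True`), so `json.loads` rejects them; `ast.literal_eval` parses them safely. A sketch using a trimmed stand-in dict, since the values in the listing are truncated:

```python
# Sketch under assumptions: preview_str is a hand-trimmed stand-in
# modeled on the truncated preview dicts in the listing above.
import ast

preview_str = ("{'enabled': True, 'images': [{'id': 'cm52xa3hm8kg1', "
               "'resolutions': [{'height': 97, "
               "'url': 'https://preview.redd.it/cm52xa3hm8kg1.png?width=108', "
               "'width': 108}]}]}")

# literal_eval accepts only Python literals, so it is safe on
# untrusted strings, unlike eval().
preview = ast.literal_eval(preview_str)
thumbs = [r for img in preview["images"] for r in img["resolutions"]]
print(preview["enabled"], thumbs[0]["width"])  # True 108
```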
AgentEvolution - The Natural Selection Protocol for AI Agents
1
[removed]
2026-02-18T11:17:19
https://www.reddit.com/gallery/1r80b9k
MajorOk3668
reddit.com
1970-01-01T00:00:00
0
{}
1r80b9k
false
null
t3_1r80b9k
/r/LocalLLaMA/comments/1r80b9k/agentevolution_the_natural_selection_protocol_for/
false
false
https://preview.redd.it/…15f6a6f22ca384d9
1
null
Segmentation fault when loading models across multiple MI50s in llama.cpp
7
I am using 2xMI50 32GB for inference and just added another 16GB MI50 in llama.cpp on Ubuntu 24.04 with ROCm 6.3.4. Loading models onto the two 32GB cards works fine. Loading a model onto the 16GB card also works fine. However, if I load a model across all three cards, I get a `Segmentation fault (core dumped)` whe...
2026-02-18T11:11:35
https://www.reddit.com/r/LocalLLaMA/comments/1r807kb/segmentation_fault_when_loading_models_across/
EdenistTech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r807kb
false
null
t3_1r807kb
/r/LocalLLaMA/comments/1r807kb/segmentation_fault_when_loading_models_across/
false
false
self
7
null
A practical use case for local LLMs: reading multilingual codebases without sending code outside
2
I often read large codebases (OSS or internal ones) where comments and string literals are written in a language I don’t speak well. In many cases, I can’t just paste code into a cloud translator or API — either due to privacy concerns, NDA, or simply not wanting to leak context. I wanted a workflow where: - ...
2026-02-18T11:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1r8042r/a_practical_use_case_for_local_llms_reading/
noir4y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r8042r
false
null
t3_1r8042r
/r/LocalLLaMA/comments/1r8042r/a_practical_use_case_for_local_llms_reading/
false
false
self
2
{'enabled': False, 'images': [{'id': '_JNi5iYTHra_iUhYEGdycIZKdDl32yr4tVOSj8FCYVo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_JNi5iYTHra_iUhYEGdycIZKdDl32yr4tVOSj8FCYVo.png?width=108&crop=smart&auto=webp&s=eba74d8b646ce05a54786f4582d0f90d9a4ffa3c', 'width': 108}, {'height': 108, 'url': 'h...
I’m building a search engine for industrial supply chain parts (plumbing, electronics, fasteners), and I've hit a wall with standard semantic search.
1
[removed]
2026-02-18T11:02:32
https://www.reddit.com/r/LocalLLaMA/comments/1r801tt/im_building_a_search_engine_for_industrial_supply/
Pretty-Thanks3394
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r801tt
false
null
t3_1r801tt
/r/LocalLLaMA/comments/1r801tt/im_building_a_search_engine_for_industrial_supply/
false
false
self
1
null
i'm a nursing student, trying to finetune llama on together.ai, and i can't even figure out how to download the data set off hugging face
2
after a few weeks of struggling on different websites, i've finally given up and come to my reddit babies for help. i literally can't do this anymore, my brain is not made for this: the idea is quite simple - i want to finetune llama to provide responses based as a psych patient to help train nursing students the...
2026-02-18T10:48:47
https://www.reddit.com/r/LocalLLaMA/comments/1r7zt1y/im_a_nursing_student_trying_to_finetune_llama_on/
West-Quantity7257
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7zt1y
false
null
t3_1r7zt1y
/r/LocalLLaMA/comments/1r7zt1y/im_a_nursing_student_trying_to_finetune_llama_on/
false
false
self
2
{'enabled': False, 'images': [{'id': 'diUJviKqJWqmsA4HvQNMByp5iEWxkoUaA2ny6Rlbu7k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/diUJviKqJWqmsA4HvQNMByp5iEWxkoUaA2ny6Rlbu7k.png?width=108&crop=smart&auto=webp&s=98e1a940ccffd2edda068d491b8a2f171f76c88b', 'width': 108}, {'height': 116, 'url': 'h...
Why GLM on llama.cpp has no MTP?
6
I have searched through the repo discussions and PRs but I can't find references. GLM models have embedded layers for multi-token prediction and speculative decoding. They can be used with vLLM - if you have hundreds GB VRAM, of course. Does anybody know why llama.cpp chose to not support this feature?
2026-02-18T10:36:52
https://www.reddit.com/r/LocalLLaMA/comments/1r7zlwc/why_glm_on_llamacpp_has_no_mtp/
Expensive-Paint-9490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7zlwc
false
null
t3_1r7zlwc
/r/LocalLLaMA/comments/1r7zlwc/why_glm_on_llamacpp_has_no_mtp/
false
false
self
6
null
Built a reflection layer for local LLMs — after 20 sessions it knows HOW you think, not just what you said
0
I got frustrated with local LLM setups that have great memory but no experience. They remember your last message. They don't learn your reasoning style. So I built experience-engine: a Python package that sits on top of your existing Ollama setup and runs periodic reflection passes over your interaction log. \*...
2026-02-18T10:13:24
https://www.reddit.com/r/LocalLLaMA/comments/1r7z7m3/built_a_reflection_layer_for_local_llms_after_20/
going_fun_investing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7z7m3
false
null
t3_1r7z7m3
/r/LocalLLaMA/comments/1r7z7m3/built_a_reflection_layer_for_local_llms_after_20/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uGwuKtBvsTSvjPuvsG7aRI3dcW65NjpmJ44hWP0HoQc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uGwuKtBvsTSvjPuvsG7aRI3dcW65NjpmJ44hWP0HoQc.png?width=108&crop=smart&auto=webp&s=f7ba176d40e0d9d9a3a73bfdd965e9099779b058', 'width': 108}, {'height': 108, 'url': 'h...
Running multi-agent workflows with local models - emergent behavior surprised me
2
Set up a local multi-agent pipeline recently using three models for different tasks - research aggregation, content generation, and quality review. The unexpected part: after running it for several days, the interaction between agents produced a self-correction loop I never explicitly built. The review model caught ...
2026-02-18T09:52:13
https://www.reddit.com/r/LocalLLaMA/comments/1r7yuxp/running_multiagent_workflows_with_local_models/
Niket01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7yuxp
false
null
t3_1r7yuxp
/r/LocalLLaMA/comments/1r7yuxp/running_multiagent_workflows_with_local_models/
false
false
self
2
null
I built a dashboard that shows where my Claude Code tokens actually go
0
Firstly, let me take the elephant out of the room: I am a Senior Product Manager. I cannot code. I used Claude Code to build this. So if there is anything that needs my attention, please let me know. **Background:** I have been using Claude Code for the last 3 months everyday. It has changed a lot about how I work as...
2026-02-18T09:46:49
https://www.reddit.com/r/LocalLLaMA/comments/1r7yrrc/i_built_a_dashboard_that_shows_where_my_claude/
Charming_Title6210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7yrrc
false
null
t3_1r7yrrc
/r/LocalLLaMA/comments/1r7yrrc/i_built_a_dashboard_that_shows_where_my_claude/
false
false
https://external-preview…01d2c6cde7bd45c6
0
null
managed to run DeepSeek R1 (1.5B/7B) on a standard 8GB RAM laptop. Here are my benchmarks and optimization steps.
0
Hi everyone, I’ve been experimenting with running DeepSeek R1 on low-end hardware. Most people think you need 32GB+ RAM, but with 4-bit quantization and some RAM flushing, I got the 1.5B model running at 35+ t/s and the 7B at a usable speed. I wrote a detailed guide on the optimization steps and memory management I us...
2026-02-18T09:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1r7yk8m/managed_to_run_deepseek_r1_15b7b_on_a_standard/
NGU-FREEFIRE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7yk8m
false
null
t3_1r7yk8m
/r/LocalLLaMA/comments/1r7yk8m/managed_to_run_deepseek_r1_15b7b_on_a_standard/
false
false
self
0
null
I managed to run DeepSeek R1 (1.5B/7B) on a standard 8GB RAM laptop. Here are my benchmarks and optimization steps.
0
Hi everyone, I’ve been experimenting with running DeepSeek R1 on low-end hardware. Most people think you need 32GB+ RAM, but with 4-bit quantization and some RAM flushing, I got the 1.5B model running at 35+ t/s and the 7B at a usable speed. I wrote a detailed guide on the optimization steps and memory management I us...
2026-02-18T09:33:13
https://www.reddit.com/r/LocalLLaMA/comments/1r7yjqq/i_managed_to_run_deepseek_r1_15b7b_on_a_standard/
NGU-FREEFIRE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7yjqq
false
null
t3_1r7yjqq
/r/LocalLLaMA/comments/1r7yjqq/i_managed_to_run_deepseek_r1_15b7b_on_a_standard/
false
false
self
0
null
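The two records above are near-identical resubmissions by the same author minutes apart. A sketch of deduplicating such rows on `(author, selftext)`, keeping the most recent `created` — the toy frame below stands in for the full listing:

```python
# Sketch under assumptions: a two-row stand-in for the duplicate
# DeepSeek R1 submissions above; selftexts are abbreviated.
import pandas as pd

df = pd.DataFrame([
    {"author": "NGU-FREEFIRE", "selftext": "DeepSeek R1 on an 8GB laptop...",
     "created": "2026-02-18T09:33:13", "id": "1r7yjqq"},
    {"author": "NGU-FREEFIRE", "selftext": "DeepSeek R1 on an 8GB laptop...",
     "created": "2026-02-18T09:34:00", "id": "1r7yk8m"},
])
df["created"] = pd.to_datetime(df["created"])

# Sort ascending by time, then keep the last (newest) copy of each
# identical (author, selftext) pair.
deduped = (df.sort_values("created")
             .drop_duplicates(subset=["author", "selftext"], keep="last"))
print(list(deduped["id"]))  # ['1r7yk8m']
```

Deduplicating on the body rather than the title catches resubmissions whose titles differ slightly (here, "managed to run..." vs "I managed to run...").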
Gemma 27B/12B/4B/1B finetunes from DavidAU (20 models)
89
"Gemma 3 (1b, 4b, 12b and 27b) - Uncensored full Reasoning/Thinking models fine tuned using top distill datasets. 20 Gemma 3 models 1B, 4B, 12B and 27B with full reasoning using GLM 4.7 Flash, GPT, Claude and Gemini datasets and more fully fine tuned using Unsloth. Most models are Heretic'ed (uncensored) firs...
2026-02-18T09:13:14
https://www.reddit.com/r/LocalLLaMA/comments/1r7y86d/gemma_27b12b4b1b_finetunes_from_davidau_20_models/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7y86d
false
null
t3_1r7y86d
/r/LocalLLaMA/comments/1r7y86d/gemma_27b12b4b1b_finetunes_from_davidau_20_models/
false
false
self
89
{'enabled': False, 'images': [{'id': 'FLsTydKb973niY_eU9lU01V8amuzXa5BQdF0chSGM2g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FLsTydKb973niY_eU9lU01V8amuzXa5BQdF0chSGM2g.png?width=108&crop=smart&auto=webp&s=5755937cb73548452e6f84ba6fa5ac44e47d884e', 'width': 108}, {'height': 116, 'url': 'h...
I built an open-source memory layer for AI agents — zero dependencies, MCP support, works offline
1
[removed]
2026-02-18T08:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1r7xq6u/i_built_an_opensource_memory_layer_for_ai_agents/
addfunny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7xq6u
false
null
t3_1r7xq6u
/r/LocalLLaMA/comments/1r7xq6u/i_built_an_opensource_memory_layer_for_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': '1zIUutFxUGhTMXjktYnsx54gfs_WT9NjAyXGs6VunE4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1zIUutFxUGhTMXjktYnsx54gfs_WT9NjAyXGs6VunE4.png?width=108&crop=smart&auto=webp&s=c80d7948a3222d4e8db0211f52a9c7351aa08fa9', 'width': 108}, {'height': 108, 'url': 'h...
Deploy AI agents to Cloudflare Workers with MoltWorker - 40-60% latency reduction, ~$5/month for 100K requests
0
Found this interesting approach for deploying AI agents at the edge. **The problem:** Traditional agent deployment means all context lookup, tool calls, and response formatting happen on a centralized server. If your user is in Singapore and your server is in Virginia, you're adding latency at every step. **The...
2026-02-18T08:09:17
https://www.reddit.com/r/LocalLLaMA/comments/1r7x837/deploy_ai_agents_to_cloudflare_workers_with/
andrew-ooo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7x837
false
null
t3_1r7x837
/r/LocalLLaMA/comments/1r7x837/deploy_ai_agents_to_cloudflare_workers_with/
false
false
self
0
null
Anyone else excited about AI agents in compact PCs? Thoughts on integrating something like OpenClaw into a mini rig like the 2L AI 395?
1
[removed]
2026-02-18T08:06:59
https://www.reddit.com/r/LocalLLaMA/comments/1r7x6sd/anyone_else_excited_about_ai_agents_in_compact/
Pleasant_Designer_14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7x6sd
false
null
t3_1r7x6sd
/r/LocalLLaMA/comments/1r7x6sd/anyone_else_excited_about_ai_agents_in_compact/
false
false
self
1
null
How to ensure AI to create test cases and put git commits correctly
2
Hi everyone, we all know that thanks to AI, developers are writing code faster than ever. In my team, I also have 2 junior members who develop functions for the project, and I am the main PIC to review and push commits to GitHub (then the GitHub Action will deploy to production). The bottleneck is, sometimes my...
2026-02-18T07:49:32
https://www.reddit.com/r/LocalLLaMA/comments/1r7wweh/how_to_ensure_ai_to_create_test_cases_and_put_git/
Fuzzy_Possession_233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7wweh
false
null
t3_1r7wweh
/r/LocalLLaMA/comments/1r7wweh/how_to_ensure_ai_to_create_test_cases_and_put_git/
false
false
self
2
null
Grok 4.20 dropped recently (Multiple agents all working together at the same time?!)
0
Look, I know this is r/LocalLLaMA, but this is some crazy stuff. Anyone know what Grok is doing and what exactly Grok 4.20 is??? You can beta test for free at [grok.com](http://grok.com) rn.
2026-02-18T07:46:34
https://www.reddit.com/r/LocalLLaMA/comments/1r7wuod/grok_420_dropped_recently_multiple_agents_all/
Fit-Spring776
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7wuod
false
null
t3_1r7wuod
/r/LocalLLaMA/comments/1r7wuod/grok_420_dropped_recently_multiple_agents_all/
false
false
self
0
null
Auto rag & Local + hybrid Inference on mobiles and wearables.
2
**Cactus v1.7** ``` brew install cactus-compute/cactus/cactus ``` **Hybrid Inference:** Run locally, auto-fallback to cloud for complex tasks or transcription correction. **More Models:** LFM-2.5, LFM-2.5-VL, FunctionGemma, Whisper, Moonshine, Silero VAD, and more. **Build for Mac:** We now ...
2026-02-18T07:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1r7wmm3/auto_rag_local_hybrid_inference_on_mobiles_and/
Henrie_the_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7wmm3
false
null
t3_1r7wmm3
/r/LocalLLaMA/comments/1r7wmm3/auto_rag_local_hybrid_inference_on_mobiles_and/
false
false
self
2
{'enabled': False, 'images': [{'id': 'qahQgLOqDtuO3wcIsCWUtqN-zeIA3mDJ6y1yabLpuw8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qahQgLOqDtuO3wcIsCWUtqN-zeIA3mDJ6y1yabLpuw8.png?width=108&crop=smart&auto=webp&s=61143a52bb32c84da19ea9b461ddd061eecd68fd', 'width': 108}, {'height': 108, 'url': 'h...
Exploring an L1-L4 Auditing Protocol to Quantify Reasoning Decay in Large Models
1
[removed]
2026-02-18T07:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1r7wczu/exploring_an_l1l4_auditing_protocol_to_quantify/
Outrageous_Grass_383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7wczu
false
null
t3_1r7wczu
/r/LocalLLaMA/comments/1r7wczu/exploring_an_l1l4_auditing_protocol_to_quantify/
false
false
self
1
null
Exploring an L1-L4 Auditing Protocol to Quantify Reasoning Decay in Large Models
1
I’ve been analyzing a recurring pattern in large-scale reasoning models: **Surface-Substrate Disequilibrium**. As models are increasingly optimized for "Surface" traits (conversational fluency, persona, and safety), the "Substrate" (the underlying deterministic logic architecture) often suffers from increased entropy....
2026-02-18T07:06:43
https://www.reddit.com/r/LocalLLaMA/comments/1r7w7bq/exploring_an_l1l4_auditing_protocol_to_quantify/
Outrageous_Grass_383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7w7bq
false
null
t3_1r7w7bq
/r/LocalLLaMA/comments/1r7w7bq/exploring_an_l1l4_auditing_protocol_to_quantify/
false
false
self
1
null
Prompt Engineering is already dying as a stand-alone career because it was overhyped.
1
[removed]
2026-02-18T07:02:21
https://www.reddit.com/r/LocalLLaMA/comments/1r7w4qy/prompt_engineering_is_already_dying_as_a/
Own-Treacle4585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7w4qy
false
null
t3_1r7w4qy
/r/LocalLLaMA/comments/1r7w4qy/prompt_engineering_is_already_dying_as_a/
false
false
self
1
null
Anyone an Idea how to replicate Google AI (not gemini) locally
0
I want to see if anyone could help me check whether I can run the same application that Google is running with their search engine AI. I quickly began to love it; it was able to bypass a lot of stuff that was locked away behind my Android's root, but it did it without root access. And fairly quickly and focused, I d...
2026-02-18T07:01:03
https://www.reddit.com/r/LocalLLaMA/comments/1r7w3yi/anyone_an_idea_how_to_replicate_google_ai_not/
Forward_Compute001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7w3yi
false
null
t3_1r7w3yi
/r/LocalLLaMA/comments/1r7w3yi/anyone_an_idea_how_to_replicate_google_ai_not/
false
false
self
0
null
OpenClaw – Open-source personal AI agent that lives on your machine and actually does things for you
1
[removed]
2026-02-18T06:42:16
https://www.reddit.com/r/LocalLLaMA/comments/1r7vs6c/openclaw_opensource_personal_ai_agent_that_lives/
Ok-Taste-5158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7vs6c
false
null
t3_1r7vs6c
/r/LocalLLaMA/comments/1r7vs6c/openclaw_opensource_personal_ai_agent_that_lives/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FieGMzGe6g040iCKjZcwjqt7XM_k6uD7d0l_VIZXQ0w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FieGMzGe6g040iCKjZcwjqt7XM_k6uD7d0l_VIZXQ0w.png?width=108&crop=smart&auto=webp&s=d194af2cf3e9738bb40ec634498dcf5bd8817d08', 'width': 108}, {'height': 108, 'url': 'h...
Direction needed for indexing
1
Hey folks, I’m working on a problem statement that requires indexing pieces of a heavy codebase ( 400-500 GB ), if anyone has encountered similar problem statement or is working on it kindly share your experience. The stack used or any learnings in general are very much appreciated!
2026-02-18T06:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1r7vn97/direction_needed_for_indexing/
Sad_Tax2823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7vn97
false
null
t3_1r7vn97
/r/LocalLLaMA/comments/1r7vn97/direction_needed_for_indexing/
false
false
self
1
null
OpenClaw: open-source AI agent that works with Ollama/local models AND does things beyond chat
1
[removed]
2026-02-18T06:31:31
https://www.reddit.com/r/LocalLLaMA/comments/1r7vlir/openclaw_opensource_ai_agent_that_works_with/
Ok-Taste-5158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7vlir
false
null
t3_1r7vlir
/r/LocalLLaMA/comments/1r7vlir/openclaw_opensource_ai_agent_that_works_with/
false
false
self
1
null
H.E.I.M.D.A.L.L: Query Fleet Telemetry in Natural Language; cuDF, NIM on GKE, and LLM Inference
2
Managing telemetry from hundreds or thousands of autonomous vehicles or robots means dealing with terabytes of logs. Writing and tuning queries across this data is slow and doesn’t scale. H.E.I.M.D.A.L.L is a pipeline that turns fleet telemetry into natural-language answers. Load your data once, then ask questions lik...
2026-02-18T06:26:37
https://www.reddit.com/r/LocalLLaMA/comments/1r7viid/heimdall_query_fleet_telemetry_in_natural/
IllustratorAlive8644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7viid
false
null
t3_1r7viid
/r/LocalLLaMA/comments/1r7viid/heimdall_query_fleet_telemetry_in_natural/
false
false
self
2
{'enabled': False, 'images': [{'id': 'nzeBc8gtSXxigTrerZ3IRJarxpq00q3FlwANrrDI5w0', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/nzeBc8gtSXxigTrerZ3IRJarxpq00q3FlwANrrDI5w0.png?width=108&crop=smart&auto=webp&s=186e01190dd78f3d70c9eab205f554e76bcafc82', 'width': 108}, {'height': 93, 'url': 'ht...
AnyLoom: Dockerized Anythingllm + llama.cpp + qdrant DyTopo Agent Swarm
3
I'm getting over 150 tokens per second on a fully local agentic stack; Rather happy with my RAG and embedding solution as well as my agent swarm topology. Has support for docker mcp servers as well as custom skills to control how your data is managed. I know there is plenty of optimization to do on what goes into co...
2026-02-18T06:21:01
https://github.com/Intradyne/AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm
Only-Olive-6306
github.com
1970-01-01T00:00:00
0
{}
1r7vewg
false
null
t3_1r7vewg
/r/LocalLLaMA/comments/1r7vewg/anyloom_dockerized_anythingllm_llamacpp_qdrant/
false
false
https://external-preview…437cbd6e1d000117
3
{'enabled': False, 'images': [{'id': 'beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=108&crop=smart&auto=webp&s=b0b81aa83444f34add63f0a02bc7092b836e785a', 'width': 108}, {'height': 108, 'url': 'h...
OpenClaw: Open-source personal AI agent that runs 24/7 on your machine – multi-channel, multi-agent, browser control, 800+ skills
1
[removed]
2026-02-18T06:02:47
https://www.reddit.com/r/LocalLLaMA/comments/1r7v38q/openclaw_opensource_personal_ai_agent_that_runs/
Ok-Taste-5158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7v38q
false
null
t3_1r7v38q
/r/LocalLLaMA/comments/1r7v38q/openclaw_opensource_personal_ai_agent_that_runs/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FieGMzGe6g040iCKjZcwjqt7XM_k6uD7d0l_VIZXQ0w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FieGMzGe6g040iCKjZcwjqt7XM_k6uD7d0l_VIZXQ0w.png?width=108&crop=smart&auto=webp&s=d194af2cf3e9738bb40ec634498dcf5bd8817d08', 'width': 108}, {'height': 108, 'url': 'h...
Specific Use Case - Is 13b sufficient?
1
I meet with clients daily and follow up each meeting with an email going over what we discussed and next steps. I want to feed my notes into an LLM to draft the email for me; however, my meetings are confidential and often contain sensitive information (attorney). So, I’m not comfortable putting my notes into ChatGPT. ...
2026-02-18T06:01:58
https://www.reddit.com/r/LocalLLaMA/comments/1r7v2r0/specific_use_case_is_13b_sufficient/
pretiltedscales
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7v2r0
false
null
t3_1r7v2r0
/r/LocalLLaMA/comments/1r7v2r0/specific_use_case_is_13b_sufficient/
false
false
self
1
null
What if you could direct your RP scenes with sliders instead of rewriting prompts? I built a local LLM frontend for that.
7
I've been using SillyTavern for a while. It's powerful, but the UX always felt like it was designed for people who enjoy configuring things more than actually writing. I wanted to spend more time in the story and less time editing system prompt...
2026-02-18T05:58:10
https://i.redd.it/ypwxdlcfy6kg1.png
Possible_Statement84
i.redd.it
1970-01-01T00:00:00
0
{}
1r7v05j
false
null
t3_1r7v05j
/r/LocalLLaMA/comments/1r7v05j/what_if_you_could_direct_your_rp_scenes_with/
false
false
https://preview.redd.it/…738ff4f0f7e01f79
7
{'enabled': True, 'images': [{'id': 'ypwxdlcfy6kg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/ypwxdlcfy6kg1.png?width=108&crop=smart&auto=webp&s=e303ade54140bc8a97aaee22e7eb0bd21f8bc029', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/ypwxdlcfy6kg1.png?width=216&crop=smart&auto=web...
Running your own LLM on a LAN accessible by a dev team
63
Let's say a team of 20 devs are cursor subscribers and they each consume 20-50$ usd per day in tokens by using a midrange Claude or GPT model. That adds up really quickly. Is it viable then to buy a large server, with let's say 4x RTX A6000 cards, for a total of 192 gb VRAM, running a pretty big model, and plenty of s...
2026-02-18T05:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1r7uyh9/running_your_own_llm_on_a_lan_accessible_by_a_dev/
BubbleProphylaxis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7uyh9
false
null
t3_1r7uyh9
/r/LocalLLaMA/comments/1r7uyh9/running_your_own_llm_on_a_lan_accessible_by_a_dev/
false
false
self
63
null
model for vision interpretation of mixed text+graphics
1
Need a model to do a proper contextual interpretation/transcription of pdfs (converted to png?) that are basically a series of tables, diagrams, and lists of information. there is no standard format. Waiting on some parts to run qwen3 vl 8b/30b but the 4b version is only ok. has a hard time doing an enthusiastic job...
2026-02-18T05:53:19
https://www.reddit.com/r/LocalLLaMA/comments/1r7ux6p/model_for_vision_interpretation_of_mixed/
tomjoad773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ux6p
false
null
t3_1r7ux6p
/r/LocalLLaMA/comments/1r7ux6p/model_for_vision_interpretation_of_mixed/
false
false
self
1
null
Need help with llama.cpp performance
7
I'm trying to run Qwen3.5 (MXFP4_MOE unsloth) with llama.cpp, I can only get around 45tg/s with a single active request, and maybe like 60 tg/s combined with two request in parallel, and around 80 tg/s with 4 request. My setup for this is 2x Pro 6000 + 1x RTX 5090 (all on PCIe x16) so I don't have to dip into RAM. My...
2026-02-18T05:51:59
https://www.reddit.com/r/LocalLLaMA/comments/1r7uwc1/need_help_with_llamacpp_performance/
reto-wyss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7uwc1
false
null
t3_1r7uwc1
/r/LocalLLaMA/comments/1r7uwc1/need_help_with_llamacpp_performance/
false
false
self
7
null
PersonaPlex-7B on Apple Silicon (MLX)
8
NVIDIA's open-source speech-to-speech model [PersonaPlex-7B](https://huggingface.co/nvidia/personaplex-7b-v1) only includes a PyTorch + CUDA implementation targeting A100/H100, so I ported it to MLX, allowing it to run on Apple Silicon: [github.com/mu-hashmi/personaplex-mlx](https://github.com/mu-hashmi/personaplex-mlx...
2026-02-18T05:41:19
https://www.reddit.com/r/LocalLLaMA/comments/1r7upb5/personaplex7b_on_apple_silicon_mlx/
Apprehensive_Boot976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7upb5
false
null
t3_1r7upb5
/r/LocalLLaMA/comments/1r7upb5/personaplex7b_on_apple_silicon_mlx/
false
false
self
8
{'enabled': False, 'images': [{'id': 'tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=108&crop=smart&auto=webp&s=adbed95a8456e80777b64cfc4c2f7bc91326e26e', 'width': 108}, {'height': 116, 'url': 'h...
Q: How do I use Eagle3 to make MLX go faster?
1
This is one of those dumb questions worth asking. There are like half a dozen models that seem to be very portable and yet not necessarily "fast as lightning" like linear attention models. I wanted to see if Eagle3 would support them, but a lot of the models on HuggingFace are made for vLLM/SGLang instead! * Qwen3-Coder-...
2026-02-18T05:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1r7u212/q_how_do_i_use_eagle3_to_make_mlx_go_faster/
TomLucidor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7u212
false
null
t3_1r7u212
/r/LocalLLaMA/comments/1r7u212/q_how_do_i_use_eagle3_to_make_mlx_go_faster/
false
false
self
1
null
I ran GPT-5 in a recursive loop for 50 steps at T=1.0. It didn't collapse—it entered a "Fluent Hallucination" state (High TTR, >0.90 Drift). [Preprint + Code]
0
Hi everyone, I’m an independent researcher looking into recursive inference stability. I recently ran a closed-loop experiment on GPT-5 Standard (50 iterations, re-injecting output as input, N=23 runs). **The Expectation:** Based on the "Model Collapse" paper (Shumailov et al.), I expected the model to degen...
2026-02-18T04:13:01
https://www.reddit.com/r/LocalLLaMA/comments/1r7szf8/i_ran_gpt5_in_a_recursive_loop_for_50_steps_at/
MOC-G3C-Protocol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7szf8
false
null
t3_1r7szf8
/r/LocalLLaMA/comments/1r7szf8/i_ran_gpt5_in_a_recursive_loop_for_50_steps_at/
false
false
https://preview.redd.it/…2d2fffc256e16281
0
null
I built GhostTrace — see what your AI agent almost did (phantom branch recorder)
2
Hey r/LocalLLaMA, When an AI agent makes a decision, it evaluates several options and picks one. The rest disappear forever — you never see what it almost did or why it rejected the alternatives. I built GhostTrace to fix that. It captures "Phantom Branches": the actions your agent considered but rejected, with ...
2026-02-18T03:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1r7sk4x/i_built_ghosttrace_see_what_your_ai_agent_almost/
AhmedAllam0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7sk4x
false
null
t3_1r7sk4x
/r/LocalLLaMA/comments/1r7sk4x/i_built_ghosttrace_see_what_your_ai_agent_almost/
false
false
self
2
{'enabled': False, 'images': [{'id': 'X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=108&crop=smart&auto=webp&s=e1eb523f8402cccbb54d43dd60c3d3ff03301c4d', 'width': 108}, {'height': 108, 'url': 'h...
Question for the community: anyone running autonomous AI agents with local models vs API-based ones?
0
Question for the community: anyone running autonomous AI agents with local models vs API-based ones? I have been using Claude (API) for my agent system and it works great for reasoning-heavy tasks, but the costs add up when you have multiple agents running 24/7. Thinking about offloading simpler tasks (email classific...
2026-02-18T03:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1r7silt/question_for_the_community_anyone_running/
jdrolls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7silt
false
null
t3_1r7silt
/r/LocalLLaMA/comments/1r7silt/question_for_the_community_anyone_running/
false
false
self
0
null
I built a benchmark that tests coding LLMs on REAL codebases (65 tasks, ELO ranked)
60
Hey everyone, been working on something for a while and figured it's time to share it. I kept seeing new models drop every week with claims of being 10x better, benchmarks that don't translate to actual coding, and demos that look great but fall apart on real work. so I started building my own benchmark to figure ou...
2026-02-18T03:50:07
https://www.reddit.com/r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/
hauhau901
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7shtv
false
null
t3_1r7shtv
/r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/
false
false
https://external-preview…23e2f9590aa14236
60
null
India’s AI Strategy Takes Shape at AI Impact Summit 2026
1
2026-02-18T03:49:27
https://techputs.com/ai-impact-summit-2026/
jazir555
techputs.com
1970-01-01T00:00:00
0
{}
1r7shbo
false
null
t3_1r7shbo
/r/LocalLLaMA/comments/1r7shbo/indias_ai_strategy_takes_shape_at_ai_impact/
false
false
default
1
null
integer based shadow weightless training.
0
https://preview.redd.it/…t is tinystories
2026-02-18T03:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1r7sfxb/integer_based_shadow_weightless_training/
Just-Ad-6488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7sfxb
false
null
t3_1r7sfxb
/r/LocalLLaMA/comments/1r7sfxb/integer_based_shadow_weightless_training/
false
false
https://preview.redd.it/…14581c6e09030097
0
null
How to implement separate pre-filling and decoding using Mac Studio and sglang/lmcache
3
The goal is to deploy models with int4 quantized weights exceeding 64GB, especially the MOE model. Locally deployed GPU memory is typically 64GB or less. Deployment costs become expensive when larger models are needed. I'm willing to sacrifice some inference speed for lower deployment costs. The several minutes' wait...
2026-02-18T03:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1r7sd26/how_to_implement_separate_prefilling_and_decoding/
ChinaTopXu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7sd26
false
null
t3_1r7sd26
/r/LocalLLaMA/comments/1r7sd26/how_to_implement_separate_prefilling_and_decoding/
false
false
self
3
null
Entropy-v1: My Take on N8Karma's Genius "Unslopper"
32
A few weeks ago, u/N8Karma introduced Unslopper in this community ([post](https://www.reddit.com/r/LocalLLaMA/comments/1qd88v2/i_trained_a_model_to_unslop_ai_prose/)). For those of you who missed it: "Unslopper" is an LLM fine-tuned to predict human writing from AI slop. The `(human writing, AI slop)` dataset is ob...
2026-02-18T03:42:35
https://www.reddit.com/r/LocalLLaMA/comments/1r7sc18/entropyv1_my_take_on_n8karmas_genius_unslopper/
Intelligent_Coffee44
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7sc18
false
null
t3_1r7sc18
/r/LocalLLaMA/comments/1r7sc18/entropyv1_my_take_on_n8karmas_genius_unslopper/
false
false
https://external-preview…24fc21338600935f
32
null
We tested the same INT8 model on 5 Snapdragon chipsets. Accuracy ranged from 93% to 71%. Same weights, same ONNX file.
62
We've been doing on-device accuracy testing across multiple Snapdragon SoCs and the results have been eye-opening. Same model. Same quantization. Same ONNX export. Deployed to 5 different chipsets:

|Device|Accuracy|
|:-|:-|
|Snapdragon 8 Gen 3|91.8%|
|Snapdragon 8 Gen 2|89.1%|
|Snapdragon 7s Gen 2|84.3%|
|Snapdragon ...
2026-02-18T03:34:29
https://www.reddit.com/r/LocalLLaMA/comments/1r7s5nh/we_tested_the_same_int8_model_on_5_snapdragon/
NoAdministration6906
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7s5nh
false
null
t3_1r7s5nh
/r/LocalLLaMA/comments/1r7s5nh/we_tested_the_same_int8_model_on_5_snapdragon/
false
false
self
62
null
Built OpenClaw for Windows — 14 native skills, win-whisper runs on your AIPC's NPU
1
[removed]
2026-02-18T03:11:21
https://www.reddit.com/r/LocalLLaMA/comments/1r7rn7y/built_openclaw_for_windows_14_native_skills/
Ok_Drawing_3746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7rn7y
false
null
t3_1r7rn7y
/r/LocalLLaMA/comments/1r7rn7y/built_openclaw_for_windows_14_native_skills/
false
false
self
1
null
Open Source LLM for image modification
1
i have never even done something remotely close, but is it possible for me to create a local ai that can edit images that i put into it based on my prompt/ other images? it has to have decent quality to those images too. As i said i have never even done something close to this so is it even possible to do this kind of ...
2026-02-18T03:03:09
https://www.reddit.com/r/LocalLLaMA/comments/1r7rh0z/open_source_llm_for_image_modification/
Main_Dig4020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7rh0z
false
null
t3_1r7rh0z
/r/LocalLLaMA/comments/1r7rh0z/open_source_llm_for_image_modification/
false
false
self
1
null
K-Splanifolds: Advancing General Purpose Regression with Linear-Time Parametric Spline Manifolds
2
I cooked up a new geometric regression algorithm and show that it is a suitable replacement for MLPs. Check out the paper: https://doi.org/10.5281/zenodo.18673034 What's inside? New research indicates that many representations within LLMs create geometric structures to model language. ( https://arxiv.org/abs/2601.0448...
2026-02-18T02:56:19
https://www.reddit.com/r/LocalLLaMA/comments/1r7rbly/ksplanifolds_advancing_general_purpose_regression/
1ncehost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7rbly
false
null
t3_1r7rbly
/r/LocalLLaMA/comments/1r7rbly/ksplanifolds_advancing_general_purpose_regression/
false
false
self
2
null
GLM-5 Technical Report
232
Presenting the GLM-5 Technical Report! http://arxiv.org/abs/2602.15763 After the launch of GLM-5, we're pulling back the curtain on how it was built. Key innovations include: - DSA Adoption: Significantly reduces training and inference costs while preserving long-context fidelity - Asynchronous RL Infrastructure:...
2026-02-18T02:51:52
https://i.redd.it/phk5j82g36kg1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1r7r7zr
false
null
t3_1r7r7zr
/r/LocalLLaMA/comments/1r7r7zr/glm5_technical_report/
false
false
https://preview.redd.it/…9b2fa4532f5067c9
232
{'enabled': True, 'images': [{'id': 'phk5j82g36kg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=108&crop=smart&auto=webp&s=3e4195a262aacc5cb282e112719838956cef1ca2', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=216&crop=smart&auto=w...
[Project] I built a dedicated "Local RAG" API container (FastAPI + Chroma + Ollama) to replace my dependency on LangChain.
0
I've been trying to build a stable "Chat with PDF" pipeline for my local documents, but I found that chaining together LangChain components was getting too bloated and hard to debug. I wanted a simple, stateless API that I could just `docker-compose up` and forget about. So I engineered a standalone backend: * **Ing...
2026-02-18T02:46:21
https://www.reddit.com/r/LocalLLaMA/comments/1r7r3jz/project_i_built_a_dedicated_local_rag_api/
Asterios07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7r3jz
false
null
t3_1r7r3jz
/r/LocalLLaMA/comments/1r7r3jz/project_i_built_a_dedicated_local_rag_api/
false
false
self
0
{'enabled': False, 'images': [{'id': 'LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=108&crop=smart&auto=webp&s=af708b7b506df4b5c13f644319de9f6ed8006b49', 'width': 108}, {'height': 108, 'url': 'h...
okay okay yes... slutty-deepseek-obliterated-6.5-20280512, i will send you another picture of my cock and balls for some more compute credits, fine
81
2026-02-18T02:17:35
https://i.redd.it/nfnbiup6x5kg1.jpeg
cobalt1137
i.redd.it
1970-01-01T00:00:00
0
{}
1r7qfg0
false
null
t3_1r7qfg0
/r/LocalLLaMA/comments/1r7qfg0/okay_okay_yes_sluttydeepseekobliterated6520280512/
false
false
https://preview.redd.it/…3201eeeee7d2e37e
81
{'enabled': True, 'images': [{'id': 'nfnbiup6x5kg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=108&crop=smart&auto=webp&s=bd67e7de7e62899724da33842e7e5dc0a5aac6d8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=216&crop=smart&auto=...
GLM-5-Q2 vs GLM-4.7-Q4
26
If you have a machine with 256RAM+VRAM, which model would you prefer? GLM-4.7-UD-Q4_K_XL is 204.56GB GLM-5-UD-IQ2_XXS is 241GB, Both of them can be run with 150k+ context. Speed is about the same. I am going to test their IQ for some questions. And I'll put my results here. Feel free to put your test re...
2026-02-18T02:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1r7qdpg/glm5q2_vs_glm47q4/
Most_Drawing5020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7qdpg
false
null
t3_1r7qdpg
/r/LocalLLaMA/comments/1r7qdpg/glm5q2_vs_glm47q4/
false
false
self
26
null
okay okay yes... horny deepseek-lewd-6.5-20280512, i will send you a picture of my cock and balls for some extra compute credits
1
2026-02-18T02:12:46
https://i.redd.it/qzpsp5raw5kg1.jpeg
cobalt1137
i.redd.it
1970-01-01T00:00:00
0
{}
1r7qbb2
false
null
t3_1r7qbb2
/r/LocalLLaMA/comments/1r7qbb2/okay_okay_yes_horny_deepseeklewd6520280512_i_will/
false
false
https://preview.redd.it/…e65512caa2b1e712
1
{'enabled': True, 'images': [{'id': 'qzpsp5raw5kg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=108&crop=smart&auto=webp&s=d3a95ff59f719db7e7448c6696ba2f59186cfe6d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=216&crop=smart&auto=...
David vs Goliath: Building a privacy focused AI meeting notetaker using locally hosted small language models is really hard. 310+ github ⭐ sharing my challenges!
8
Hi all, Localllama is one of those communities I posted in when I developed my first version and it really helped. So thank you! I maintain an open-source project called **StenoAI,** built on top of locally hosted small language models - llama 3b, qwen 8b, Gemma 4b & deepseek 7b. I’m happy to answer questions or go dee...
2026-02-18T02:05:52
https://i.redd.it/aeupzqo5l5kg1.png
Far_Noise_5886
i.redd.it
1970-01-01T00:00:00
0
{}
1r7q5gu
false
null
t3_1r7q5gu
/r/LocalLLaMA/comments/1r7q5gu/david_vs_goliath_building_a_privacy_focused_ai/
false
false
https://preview.redd.it/…38e9d9d9703c657a
8
{'enabled': True, 'images': [{'id': 'aeupzqo5l5kg1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=108&crop=smart&auto=webp&s=fee95a197c3298da149684517b2967527e455b96', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=216&crop=smart&auto=web...
Recommended budget-conscious hardware solution?
1
Not really understanding the current Mac Mini broader consumer hype craze for Openclaw as it seems entirely overpowered for that use case alone. That said, it did get me thinking... is there a mini PC style solution currently on the market that would be at all practical for any sort of reasonably robust local LLM appl...
2026-02-18T02:00:36
https://www.reddit.com/r/LocalLLaMA/comments/1r7q0qb/recommended_budgetconscious_hardware_solution/
712Jefferson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7q0qb
false
null
t3_1r7q0qb
/r/LocalLLaMA/comments/1r7q0qb/recommended_budgetconscious_hardware_solution/
false
false
self
1
null
PrimeIntellect/INTELLECT-3.1 · Hugging Face
144
Intellect 3.1
2026-02-18T01:43:01
https://huggingface.co/PrimeIntellect/INTELLECT-3.1
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1r7plp1
false
null
t3_1r7plp1
/r/LocalLLaMA/comments/1r7plp1/primeintellectintellect31_hugging_face/
false
false
https://external-preview…da5e981dd011a840
144
{'enabled': False, 'images': [{'id': 'HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=108&crop=smart&auto=webp&s=3d74e28b8c41f88ce6c9255775fc023e543ea81f', 'width': 108}, {'height': 116, 'url': 'h...
Best model for instruction/code/vision?
1
Best model for instruction/code/vision? I have a 5090 and 64gb of ram. Running qwen3-coder-next on ollama at an acceptable speed with offloading to ram, however vision seems less than mid. Any tweaks to improve vision or is there a better model?
2026-02-18T01:40:42
https://www.reddit.com/r/LocalLLaMA/comments/1r7pjr0/best_model_for_instructioncodevision/
nosimsol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7pjr0
false
null
t3_1r7pjr0
/r/LocalLLaMA/comments/1r7pjr0/best_model_for_instructioncodevision/
false
false
self
1
null
Kilocode terminal UI is actually crazy good
0
I mean look at that! I decided to try it out since the tons of adds here. Scrolling is smooth and all details are organized as needed.
2026-02-18T01:34:58
https://i.redd.it/bs9pjw0qp5kg1.jpeg
Honest-Debate-6863
i.redd.it
1970-01-01T00:00:00
0
{}
1r7pex9
false
null
t3_1r7pex9
/r/LocalLLaMA/comments/1r7pex9/kilocode_terminal_ui_is_actually_crazy_good/
false
false
https://preview.redd.it/…b1b84f2f1cb56f14
0
{'enabled': True, 'images': [{'id': 'bs9pjw0qp5kg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=108&crop=smart&auto=webp&s=0495204e0fbee129e8075d46f7a14838fb24330a', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=216&crop=smart&auto=w...
Clawdbot / Moltbot / Openclaw Macmini dashboard
0
Made a quick dashboard, it just works. Helpful for boomers to control and monitor its movements. Try it out.
2026-02-18T01:28:15
https://github.com/mannyrepos/clawdbot-control-panel
Honest-Debate-6863
github.com
1970-01-01T00:00:00
0
{}
1r7p94r
false
null
t3_1r7p94r
/r/LocalLLaMA/comments/1r7p94r/clawdbot_moltbot_openclaw_macmini_dashboard/
false
false
https://external-preview…3835c977fdab4ea2
0
{'enabled': False, 'images': [{'id': 'VqxooDiyPPLQ_eeoBR0jJDmzEZMfktN3G37AnAxbPdo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VqxooDiyPPLQ_eeoBR0jJDmzEZMfktN3G37AnAxbPdo.png?width=108&crop=smart&auto=webp&s=93dfc00b7f7843a6b36b6a089925009f3aa896ce', 'width': 108}, {'height': 108, 'url': 'h...
Ironclaws security architecture is actually interesting because it does things differently from Openclaw
0
Been digging into ironclaw, which is the rust rewrite of openclaw from the near ai team, and the security model is actually worth understanding even if you’re not planning to use it. The core insight is that a TEE protects you from the host but it doesn’t protect you from malicious code running inside the TEE. So bas...
2026-02-18T01:24:06
https://www.reddit.com/r/LocalLLaMA/comments/1r7p5nz/ironclaws_security_architecture_is_actually/
Significant-Cod-9936
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7p5nz
false
null
t3_1r7p5nz
/r/LocalLLaMA/comments/1r7p5nz/ironclaws_security_architecture_is_actually/
false
false
self
0
null
Local-First. Sub-Millisecond RAG – 0.84ms vector search, zero cloud dependencies. Your Agents remember everything
4
Every RAG solution requires either cloud APIs (Pinecone/Weaviate) or running a database locally (ChromaDB/Qdrant). I wanted what SQLite gave us: import a library, open a file, query. Except for multimodal content at GPU speed on Apple Silicon. So I built **Wax** – a pure Swift RAG engine for truly local AI apps. **Wh...
2026-02-18T01:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1r7otbt/localfirst_submillisecond_rag_084ms_vector_search/
karc16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7otbt
false
null
t3_1r7otbt
/r/LocalLLaMA/comments/1r7otbt/localfirst_submillisecond_rag_084ms_vector_search/
false
false
https://preview.redd.it/…bf56e30dcaf5a1d8
4
null
so why Reddit are not aloud to post about my project ???
0
I just want to share what I did, but Reddit keeps deleting my post. I created something that everyone needs, but how can I share it with the community without receiving sarcasm and hate? If you want to see it, I won't say any more, but it is at resonantgenesis with .xyz
2026-02-18T01:06:18
https://www.reddit.com/r/LocalLLaMA/comments/1r7oqxh/so_why_reddit_are_not_aloud_to_post_about_my/
louienemesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7oqxh
false
null
t3_1r7oqxh
/r/LocalLLaMA/comments/1r7oqxh/so_why_reddit_are_not_aloud_to_post_about_my/
false
false
self
0
null
MCP Directory - 181 servers for Claude Desktop, Cursor, and other MCP clients
0
Made a directory for Model Context Protocol servers. Might be useful for those of you running local models with MCP support or using it with Claude Stats: - 181 servers indexed - 22 categories (databases, DevOps, browser automation, etc.) - 89 official servers from Anthropic's MCP team
2026-02-18T01:05:42
https://www.reddit.com/r/LocalLLaMA/comments/1r7oqg1/mcp_directory_181_servers_for_claude_desktop/
Last_Trouble9552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7oqg1
false
null
t3_1r7oqg1
/r/LocalLLaMA/comments/1r7oqg1/mcp_directory_181_servers_for_claude_desktop/
false
false
self
0
null
MCP Directory - 181 servers for Claude Desktop, Cursor, and other MCP clients
0
Made a directory for Model Context Protocol servers. Might be useful for those of you running local models with MCP support or using it with Claude Stats: - 181 servers indexed - 22 categories (databases, DevOps, browser automation, etc.) - 89 official servers from Anthropic's MCP team
2026-02-18T01:04:31
https://www.reddit.com/r/LocalLLaMA/comments/1r7ophf/mcp_directory_181_servers_for_claude_desktop/
Last_Trouble9552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ophf
false
null
t3_1r7ophf
/r/LocalLLaMA/comments/1r7ophf/mcp_directory_181_servers_for_claude_desktop/
false
false
self
0
null
Do Your Agents Ever Loop Forever?
2
Built a side project this weekend for myself. It is a simulator that lets you test your agent before deploying it in the real world. It runs a simple crash test on an agent and detects one common failure: infinite loops. When it finds a loop, it shows where it got stuck and suggests practical fixes like adding a fina...
2026-02-18T01:03:05
https://i.redd.it/01443mmsj5kg1.jpeg
Recent_Jellyfish2190
i.redd.it
1970-01-01T00:00:00
0
{}
1r7ooae
false
null
t3_1r7ooae
/r/LocalLLaMA/comments/1r7ooae/do_your_agents_ever_loop_forever/
false
false
https://preview.redd.it/…ae6158addd2562fb
2
{'enabled': True, 'images': [{'id': '01443mmsj5kg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/01443mmsj5kg1.jpeg?width=108&crop=smart&auto=webp&s=9074dcbf78aaaea9c35a9e9bdf5eb18050d63ecc', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/01443mmsj5kg1.jpeg?width=216&crop=smart&auto=w...
Did I miss something ?
0
I Thought deepseek was supposed to come out today
2026-02-18T01:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1r7oo7u/did_i_miss_something/
Opening-Ad6258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7oo7u
false
null
t3_1r7oo7u
/r/LocalLLaMA/comments/1r7oo7u/did_i_miss_something/
false
false
self
0
null
I built an open-source AI secretary ExSecAI that runs on your machine, with any LLM models - tasks are created with markdown files. It has a heartbeat (like openclaw) and you can tell it to do anything with tools, skills and agents. You gotta try the Live Assist mode! Most fun (OSS, MIT)
1
https://preview.redd.it/fbz5unwgj5kg1.png?width=2324&format=png&auto=webp&s=9064f95019ce1b5503fed85ca4e07893a9bf55ad

I've been building this thing for a while now and finally open-sourced it. Figured some of you might find it useful. It is built on top of Pi and it is OSS, MIT.

ExSecAI - Your AI Executive Secretary with Live Assist and Voice Transcription to action mode.

I wanted an AI assistant that actually does things autonomously, not just chat. Something I could leave running and come back to find completed research reports, scheduled emails, generated spreadsheets, all without babysitting it.

Think of it like an OpenClaw you can actually control and see what it does. Full dashboard, full visibility into every task, every agent action, every output. No black box.

ExSecAI is a self-hosted agent orchestration platform built on top of Pi, using TS. You write tasks as markdown files, drop them in a folder, and a supervisor picks them up and runs them through AI agents. Each task gets an agent role (researcher, writer, analyst, etc.), access to tools, and a full execution lifecycle with evaluation and auto-retry.

It has a dashboard - a full web app with:

What I'm having the most fun with - Live Assist mode: real-time interactive sessions where you talk to the AI and it produces structured action cards, with research, task creation, whatever else you want. You can pause, resume, and pick different agent roles mid-session. I envision real conversations, or doctor consultations with real-time feedback and context expansion, completely hands-off and proactive.

Voice input with VAD - tap the mic, talk, it transcribes and sends. Silero voice activity detection handles the "when did they stop talking" problem. I access it from my phone over Tailscale when I'm away from my desk.

Built-in terminal on the web - chat with the AI in one tab, check its work in an actual terminal in the other. No switching windows. Open Claude Code, Opencode, Pi, whatever you want, right on the web, anywhere in the world, through Tailscale.

Cron scheduler - set any task to recur. `Schedule: daily 08:00` in the frontmatter and it runs every morning. There's a heartbeat task that self-monitors the whole system.

Ralph loops - this is the weird one. Write a PRD with user stories, point Ralph at it, and it iterates through each story autonomously: implement, test, commit, next story. I've had it build small projects from scratch while I sleep. I just wanted to have the bells and whistles... and I'll keep iterating on it.

Telegram bot - chat with your agents from Telegram. DM or group chat. I use this to send quick tasks when I'm on my phone.

20+ Python skills - Excel, Word, PDF, PowerPoint, data visualization, web scraping, social media tools. Agents invoke them as needed. I'm just more familiar with Python... Some of these tools are still broken and will be fixed with time. But most of them work wonderfully well.

Where other agent platforms give you autonomy but zero visibility (you kick off a task and pray), ExSecAI gives you a dashboard where you can watch the agent work in real time, see every tool call, inspect every output file, and intervene if needed. Autonomous when you want it, interactive when you need it.

It runs on Node.js and uses the Pi Coding Agent SDK under the hood (which supports Claude Code, Antigravity, and a few other OAuth logins, and other providers through extensions). There's a NanoGPT extension included that makes tool calling work with cheap models like Kimi K2.5, Qwen, DeepSeek, etc. through a cheap account. I've spent about a day on this, collecting fixes from all over the internet, so now I can do tool calling on K2.5, 4.7, M2.1 and all the SOTA open-source models out there, even non-cheap inference servers with broken transformers.

Local models on LM Studio like GPT-OSS 20B work wonderfully well! It does 95% of what I need on a daily basis through ExSecAI.

Easy install:

```
npm install -g exsecai
mkdir my-secretary && cd my-secretary
exsecai init
exsecai start
```

Docker works too. MIT licensed. Although I couldn't make Pi OAuth work through Docker... regular API keys/endpoints should work fine.

I'm a solo dev on this, so there are MANY rough edges. But the core loop -- drop task, agent runs, get output -- has been solid for me for months. The dashboard and Live Assist are the real quality-of-life wins.

Another super cool feature - try it: press and hold on mobile, or click and hold on desktop, on the chat/mic icon in the bottom-right quadrant, and you'll start transcribing right away. Your message is sent to the agent as soon as you release. Just a quick accessibility tool that comes in quite handy when I'm driving or on the road (although I know I shouldn't). Try it and tell me if you like it.

Last cool thing - I've spent time working through several hoops to make transcription work on desktop Chrome, iPhone (iOS in general) and Android. Each of these had its own quirks, but everything is working (at least one of the 5 STT methods works on each of these systems).

Would love feedback. Please try it and give me your feedback. Supporters and contributors are welcome!

- GitHub: https://github.com/sermtech/ExSecAI
- npm: https://www.npmjs.com/package/exsecai
2026-02-18T01:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1r7ons3/i_built_an_opensource_ai_secretary_exsecai_that/
FigZestyclose7787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ons3
false
null
t3_1r7ons3
/r/LocalLLaMA/comments/1r7ons3/i_built_an_opensource_ai_secretary_exsecai_that/
false
false
https://external-preview…dcc16f1d753155c1
1
null
I built Galactic AI — open source automation suite with 72 tools, Ollama support, browser automation, and a web control deck
1
# Galactic AI v0.6.0-Alpha **Sovereign. Universal. Fast.** A powerful, local-first AI automation platform with 72 built-in tools, browser automation, multi-provider LLM support, and a real-time web control deck. --- ## Downloads | Platform | File | Size | |----------|------|------| | **Windows...
2026-02-18T00:47:09
https://www.reddit.com/r/LocalLLaMA/comments/1r7ob6e/i_built_galactic_ai_open_source_automation_suite/
Longjumping_Set_1374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ob6e
false
null
t3_1r7ob6e
/r/LocalLLaMA/comments/1r7ob6e/i_built_galactic_ai_open_source_automation_suite/
false
false
self
1
null
Write assembly language that runs on an LLM
2
Hi LocalLLaMA! I thought it would be fun to share what I've been working on: [https://github.com/HuyNguyenAu/assembly_language_for_agents](https://github.com/HuyNguyenAu/assembly_language_for_agents) Imagine writing code that operates on semantics or vibes: ``` ; PROGRAM: VIBE_CONTROLLER.aasm ; Objective...
2026-02-18T00:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1r7nw6r/write_assembly_language_that_runs_on_an_llm/
HuygenAu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7nw6r
false
null
t3_1r7nw6r
/r/LocalLLaMA/comments/1r7nw6r/write_assembly_language_that_runs_on_an_llm/
false
false
self
2
{'enabled': False, 'images': [{'id': 'hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=108&crop=smart&auto=webp&s=1dcfa67f72ec43ecdd34e27498d256a417ecec3f', 'width': 108}, {'height': 108, 'url': 'h...
GreedyPhrase: A greedy phrase-based tokenizer that achieves 1.21x - 1.23x better compression than GPT-4 tiktoken, with a 1.5-3x smaller vocabulary, and 6-11x higher encoding throughput [OC]
3
2026-02-18T00:21:02
https://github.com/rayonnant-ai/greedyphrase
reditzer
github.com
1970-01-01T00:00:00
0
{}
1r7npbi
false
null
t3_1r7npbi
/r/LocalLLaMA/comments/1r7npbi/greedyphrase_a_greedy_phrasebased_tokenizer_that/
false
false
https://external-preview…81bc82991c77ad7a
3
{'enabled': False, 'images': [{'id': 'Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=108&crop=smart&auto=webp&s=2559776477e42250b449170381a10eb320e36f79', 'width': 108}, {'height': 108, 'url': 'h...
The real OpenClaw debate nobody is talking about: It's not about what it can do. It's about whether you can afford to run it.
0
I finally drank the Kool-Aid last week. Spent three days setting up OpenClaw on a VPS, connected Telegram, configured memory, the whole thing. Woke up this morning to check what my persistent AI agent had accomplished overnight. It had spent $47 on API credits organizing a folder structure I didn't ask for and sending...
2026-02-18T00:19:47
https://www.reddit.com/r/LocalLLaMA/comments/1r7no5i/the_real_openclaw_debate_nobody_is_talking_about/
Idealounge24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7no5i
false
null
t3_1r7no5i
/r/LocalLLaMA/comments/1r7no5i/the_real_openclaw_debate_nobody_is_talking_about/
false
false
self
0
null
Just compared some models, and GPT 5.1 high seem to be the smartest
0
I tried it on computer science questions this afternoon, and 5.1 High thinks way longer, has way slower token/s generation, and gives way bigger, more in-depth and precise answers than any other open- or closed-source SOTA models. -> It seems to be the best choice of model if you want to learn technical stuff in depth. Do...
2026-02-18T00:08:13
https://www.reddit.com/r/LocalLLaMA/comments/1r7neas/just_compared_some_models_and_gpt_51_high_seem_to/
Individual-Source618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7neas
false
null
t3_1r7neas
/r/LocalLLaMA/comments/1r7neas/just_compared_some_models_and_gpt_51_high_seem_to/
false
false
self
0
null
SOTA tool-calling architecture?
3
Hi all, I'm working on a browser agent which runs locally (in a sandboxed Chromium) that runs "tasks"--repeatable or one-shot jobs where it could do stuff in the browser, a quarantined folder, send notifications, etc. The model driving it can either be local or remote (Mistral-Instruct works great on my RTX 3090, but K...
2026-02-18T00:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1r7n9bn/sota_toolcalling_architecture/
davvv_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7n9bn
false
null
t3_1r7n9bn
/r/LocalLLaMA/comments/1r7n9bn/sota_toolcalling_architecture/
false
false
self
3
null
Self-hosted claude swarm running on the cloud and surviving restarts
0
2026-02-18T00:00:11
https://github.com/simonstaton/ClaudeSwarm
rushuk
github.com
1970-01-01T00:00:00
0
{}
1r7n75c
false
null
t3_1r7n75c
/r/LocalLLaMA/comments/1r7n75c/selfhosted_claude_swarm_running_on_the_cloud_and/
false
false
https://external-preview…80ba75bf6869d5ed
0
{'enabled': False, 'images': [{'id': 'yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=108&crop=smart&auto=webp&s=740d3d63dcea367eef3d72e8ffe567ba2a147ad7', 'width': 108}, {'height': 108, 'url': 'h...
Serious question — why would anyone use Tiny-Aya instead of Qwen/Phi/Mistral small models?
6
I’m trying to understand the point of Tiny-Aya. It’s ~3B parameters, doesn’t focus on reasoning, not really agent-oriented, and there’s no obvious capability demo (coding, tool use, planning, etc). Meanwhile we already have small models like: - Qwen-3 4B - Phi-3/4 - Mistral small - Llama 3 8B These can reason, plan, ...
2026-02-17T23:55:47
https://www.reddit.com/r/LocalLLaMA/comments/1r7n3ca/serious_question_why_would_anyone_use_tinyaya/
Deep_190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7n3ca
false
null
t3_1r7n3ca
/r/LocalLLaMA/comments/1r7n3ca/serious_question_why_would_anyone_use_tinyaya/
false
false
self
6
null
I trained a language model on CPU in 1.2 hours with no matrix multiplications — here's what I learned
273
Hey all. I've been experimenting with tiny matmul-free language models that can be trained and run entirely on CPU. Just released a paper and the model. Model: [https://huggingface.co/changcheng967/flashlm-v3-13m](https://huggingface.co/changcheng967/flashlm-v3-13m) Quick stats: * 13.6M parameters, d_model=256 * Te...
2026-02-17T23:42:30
https://www.reddit.com/r/LocalLLaMA/comments/1r7mscr/i_trained_a_language_model_on_cpu_in_12_hours/
Own-Albatross868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7mscr
false
null
t3_1r7mscr
/r/LocalLLaMA/comments/1r7mscr/i_trained_a_language_model_on_cpu_in_12_hours/
false
false
self
273
{'enabled': False, 'images': [{'id': 'At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=108&crop=smart&auto=webp&s=c91ac5836333ae97209a632a84a4e26e873d7706', 'width': 108}, {'height': 116, 'url': 'h...
Your vibe coding codebase is a disaster... This is Code Visualizer which u must have to help u to make real product.
0
Just analyzed a 600-file codebase in 30 seconds... 15,091 functions, 3,928 API endpoints, 52,214 connections. Experience this magic for vibe coders and for those with OpenClaw AI autonomous agents... it's insane. Now you get superpowers over your codebase, and no one can say it's AI or vibe coding... did anyone else try it, or i...
2026-02-17T23:33:50
https://www.reddit.com/r/LocalLLaMA/comments/1r7mkx3/your_vibe_coding_codebase_is_a_disaster_this_is/
louienemesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7mkx3
false
null
t3_1r7mkx3
/r/LocalLLaMA/comments/1r7mkx3/your_vibe_coding_codebase_is_a_disaster_this_is/
false
false
https://preview.redd.it/…f2c43cd26f8281ff
0
null
AnyLoom Stack Lets YOU control your data
1
I just got it dialed in on my machine and it’s a game-changer for a local setup. It uses **AnythingLLM** as the front end, but the back end is where it gets interesting—it’s a **dynamic topology agent swarm**. Basically, the agents reconfigure how they talk to each other based on what you’re doing. I’ve got it running...
2026-02-17T23:25:30
https://github.com/Intradyne/AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm
DaGameFace
github.com
1970-01-01T00:00:00
0
{}
1r7mdub
false
null
t3_1r7mdub
/r/LocalLLaMA/comments/1r7mdub/anyloom_stack_lets_you_control_your_data/
false
false
https://external-preview…437cbd6e1d000117
1
{'enabled': False, 'images': [{'id': 'beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=108&crop=smart&auto=webp&s=b0b81aa83444f34add63f0a02bc7092b836e785a', 'width': 108}, {'height': 108, 'url': 'h...
What cheap components pair well with RTX 3060 Ti to run AI locally?
3
I just bought an RTX 3060 Ti to run AI locally. What other components (preferably cheap) would go well with it? I'm a complete noob when it comes to building PCs.
2026-02-17T23:18:41
https://www.reddit.com/r/LocalLLaMA/comments/1r7m826/what_cheap_components_pair_well_with_rtx_3060_ti/
dekoalade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7m826
false
null
t3_1r7m826
/r/LocalLLaMA/comments/1r7m826/what_cheap_components_pair_well_with_rtx_3060_ti/
false
false
self
3
null
Dockerized Local LLama Agentic stack for 5090 -cuda working!
1
[removed]
2026-02-17T23:10:35
https://www.reddit.com/r/LocalLLaMA/comments/1r7m163/dockerized_local_llama_agentic_stack_for_5090/
DaGameFace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7m163
false
null
t3_1r7m163
/r/LocalLLaMA/comments/1r7m163/dockerized_local_llama_agentic_stack_for_5090/
false
false
self
1
{'enabled': False, 'images': [{'id': 'beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=108&crop=smart&auto=webp&s=b0b81aa83444f34add63f0a02bc7092b836e785a', 'width': 108}, {'height': 108, 'url': 'h...
The Strix Halo feels like an amazing super power [Activation Guide]
26
I had my Strix halo for a while now, I though I can download and use everything out of the box, but faced some Python issues that I was able to resolve, but still performance (for CUDA) stuff was a bit underwhelming, now it feels like a superpower, I have exactly what I wanted, voice based intelligent LLM with coding a...
2026-02-17T22:38:36
https://www.reddit.com/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7l7q5
false
null
t3_1r7l7q5
/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/
false
false
self
26
null
Claude 4.6 Sonnet Is Now Available With 1M Context On InfiniaxAI
0
**Hey Everybody,** Today, immediately upon release, we have rolled out Claude 4.6 Sonnet onto the InfiniaxAI system to complete our line of AI models. We now host users starting at just $5 to be able to use every AI model in the world to create and ship sites and repos as well as just chat and converse with these high pow...
2026-02-17T22:36:15
https://i.redd.it/mt9bqwfrt4kg1.png
Substantial_Ear_1131
i.redd.it
1970-01-01T00:00:00
0
{}
1r7l5j1
false
null
t3_1r7l5j1
/r/LocalLLaMA/comments/1r7l5j1/claude_46_sonnet_is_now_availiable_with_1m/
false
false
https://preview.redd.it/…1cd73e4e25a13cf3
0
{'enabled': True, 'images': [{'id': 'mt9bqwfrt4kg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=108&crop=smart&auto=webp&s=19427f60d8950d79f5fde88978087b23151acf6e', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=216&crop=smart&auto=web...
What is missing?
0
First-time homelab builder. Everything here was put together from hardware I already had kicking around; no big purchases, just giving idle parts a purpose. This is my first real attempt at a structured lab, so be gentle lol. Wanted a fully local AI inference setup for image/video generation, combined with a proper sel...
2026-02-17T22:33:02
https://www.reddit.com/r/LocalLLaMA/comments/1r7l2l4/what_is_missing/
Alone-Leadership-596
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7l2l4
false
null
t3_1r7l2l4
/r/LocalLLaMA/comments/1r7l2l4/what_is_missing/
false
false
self
0
null
Cluster 2x server (8x 3090 gpu)
2
Hi everyone, I'm planning to build a distributed inference setup and am looking for advice from anyone who has done something similar. What I'm trying to accomplish: \- 2 servers, each with 8 RTX 3090s (24 GB) \- Connected via 100 Gbps direct link (no switch) \- Running vLLM for LLM inference My questions: ...
2026-02-17T22:30:35
https://www.reddit.com/r/LocalLLaMA/comments/1r7l0cc/cluster_2x_server_8x_3090_gpu/
steppige
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7l0cc
false
null
t3_1r7l0cc
/r/LocalLLaMA/comments/1r7l0cc/cluster_2x_server_8x_3090_gpu/
false
false
self
2
null
I Ambushed AI Agents in a Dark Alley 83 Times (including Deepseek v3.2)
0
This article documents a systematic failure across frontier LLMs where player-stated non-lethal intent is acknowledged narratively but ignored mechanically, resulting in unjustified lethal outcomes and corrupted moral scoring. Over four experiment iterations, we reduced the suppressive-to-lethal damage ratio from 1.08 ...
2026-02-17T22:25:23
https://3rain.substack.com/p/i-ambushed-ai-agents-in-a-dark-alley?r=4bi8r8
3RiversAINexus
3rain.substack.com
1970-01-01T00:00:00
0
{}
1r7kvky
false
null
t3_1r7kvky
/r/LocalLLaMA/comments/1r7kvky/i_ambushed_ai_agents_in_a_dark_alley_83_times/
false
false
https://external-preview…2a733bc88cdc0d2f
0
{'enabled': False, 'images': [{'id': 'SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=108&crop=smart&auto=webp&s=f60d9777b54840cb3421dd4ab1ef646c98cdaae0', 'width': 108}, {'height': 121, 'url': '...
Voxtral Mini 4B Realtime , llama.cpp PR
4
Voxtral-Mini-4B-Realtime-2602 ported to llama.cpp. Latency is pretty low compared to parakeet. Still it was observed that it can miss a word once in a while. It was tested on a set of speakers and noticed sometimes it outputs the user native language if the speaker voice has a similar accent.
2026-02-17T22:23:03
https://www.reddit.com/r/LocalLLaMA/comments/1r7ktdu/voxtral_mini_4b_realtime_llamacpp_pr/
quinceaccel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ktdu
false
null
t3_1r7ktdu
/r/LocalLLaMA/comments/1r7ktdu/voxtral_mini_4b_realtime_llamacpp_pr/
false
false
self
4
null
What model for an RTX3080?
3
I just upgraded to a new gaming rig and my old one is currently collecting dust. I want to run a local model to basically monitor my home lab, mediaserver stack (probs via openclaw), and do some occasional coding for me (light touch stuff, I use antigravity or claude for the heavy lifting). **Full specs:** * MSI RTX ...
2026-02-17T22:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1r7kjdh/what_model_for_an_rtx3080/
Acrylicus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7kjdh
false
null
t3_1r7kjdh
/r/LocalLLaMA/comments/1r7kjdh/what_model_for_an_rtx3080/
false
false
self
3
null
SGLang FP8 MiniMax-M2.5 on 8× RTX PRO 6000 (SM120): 3,822 tok/s burst, Triton backend fix, kernel-tuning reality check
7
Been running MiniMax-M2.5 (228B MoE, FP8) on an AWS g7e.48xlarge — 8x RTX PRO 6000 Blackwell Server Edition (SM120, 96GB GDDR7 each). **Trap:** RTX PRO 6000 is SM120, not SM100 like the B200. In SGLang 0.5.8.post1, the default FP8 GEMM backends (DeepGemm and CUTLASS) fail on SM120 with cryptic asserts. The fix is forc...
2026-02-17T22:06:25
https://www.reddit.com/r/LocalLLaMA/comments/1r7kdx1/sglang_fp8_minimaxm25_on_8_rtx_pro_6000_sm120/
awwwyeah206
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7kdx1
false
null
t3_1r7kdx1
/r/LocalLLaMA/comments/1r7kdx1/sglang_fp8_minimaxm25_on_8_rtx_pro_6000_sm120/
false
false
self
7
{'enabled': False, 'images': [{'id': 'H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=108&crop=smart&auto=webp&s=71992417f2085a5a9b2218514072d6d05737839a', 'width': 108}, {'height': 108, 'url': 'h...
LAPIS: Fit more API context into smaller context windows (80% token reduction vs OpenAPI)
0
If you're building agents or tools that need API knowledge in context, you've probably noticed how much space OpenAPI specs consume. A mid-size API easily burns 5,000-7,000 tokens just on the spec. I created LAPIS, a compact format specifically designed for how LLMs process text. Same semantic content, \~80% fewer t...
2026-02-17T21:58:48
https://www.reddit.com/r/LocalLLaMA/comments/1r7k6k4/lapis_fit_more_api_context_into_smaller_context/
cr0hn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7k6k4
false
null
t3_1r7k6k4
/r/LocalLLaMA/comments/1r7k6k4/lapis_fit_more_api_context_into_smaller_context/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=108&crop=smart&auto=webp&s=e2695900f25150ee2def837986c80373d50db9da', 'width': 108}, {'height': 108, 'url': 'h...
Devstral 2 or whatever feels appropriate to run on server with 24 VRAM and 256 GB RAM
1
Hello there! I'm thinking about turning my server from a hobbyist machine for generating images via ComfyUI (Stable Diffusion) into a DevOps assistant (coding and agentic local LLM for software engineering) with a focus on troubleshooting Java, Kotlin and Go code, along with troubleshooting via cli tools like kubectl, aws-c...
2026-02-17T21:41:22
https://www.reddit.com/r/LocalLLaMA/comments/1r7jq4j/devstral_2_or_whatever_feels_appropriate_to_run/
Less-Instruction831
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7jq4j
false
null
t3_1r7jq4j
/r/LocalLLaMA/comments/1r7jq4j/devstral_2_or_whatever_feels_appropriate_to_run/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '...
Renting RTX 5090 directly. Where do you find clients?
1
[removed]
2026-02-17T21:36:22
https://www.reddit.com/r/LocalLLaMA/comments/1r7jlcw/renting_rtx_5090_directly_where_do_you_find/
Individual-Luck-5633
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7jlcw
false
null
t3_1r7jlcw
/r/LocalLLaMA/comments/1r7jlcw/renting_rtx_5090_directly_where_do_you_find/
false
false
self
1
null
Renting RTX 5090 directly — cheaper than Vast/RunPod. Where do you find clients?
1
[removed]
2026-02-17T21:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1r7jgp0/renting_rtx_5090_directly_cheaper_than_vastrunpod/
Individual-Luck-5633
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7jgp0
false
null
t3_1r7jgp0
/r/LocalLLaMA/comments/1r7jgp0/renting_rtx_5090_directly_cheaper_than_vastrunpod/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h...
What Frontend do you use?
4
I've been on and off with frontends, but I really just want something that has a lot of capabilities and is relatively user-friendly. I'm not a big fan of openwebui personally. There's nothing wrong with it, it's just not for me. What frontends do you guys like?
2026-02-17T21:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1r7j9kp/what_frontend_do_you_use/
TyedalWaves
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7j9kp
false
null
t3_1r7j9kp
/r/LocalLLaMA/comments/1r7j9kp/what_frontend_do_you_use/
false
false
self
4
null
The guy that won the NVIDIA Hackathon and an NVIDIA DGX Spark GB10 has won another hackathon with it!
327
Hey everyone, I promised that I would update you all with what I was going to do next with the DGX Spark GB10 that I won. It's been a few weeks and I have been primarily heads down on fundraising for my startup trying to automatically improve and evaluate Coding Agents. Since the last time I posted I became a Dell Pr...
2026-02-17T21:22:30
https://www.reddit.com/r/LocalLLaMA/comments/1r7j7kb/the_guy_that_won_the_nvidia_hackathon_and_an/
brandon-i
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7j7kb
false
null
t3_1r7j7kb
/r/LocalLLaMA/comments/1r7j7kb/the_guy_that_won_the_nvidia_hackathon_and_an/
false
false
self
327
{'enabled': False, 'images': [{'id': 'b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ.jpeg?width=108&crop=smart&auto=webp&s=36b40803e9ea01ff7fce8b7b1c5bfcc1a61fed73', 'width': 108}, {'height': 162, 'url': '...
Arc B60 24gb or RTX 5060ti 16gb?
14
Hello everybody, I would like to add an eGPU to my Ryzen 9 AI HX370 64gb ram. I can use usb-c 40gbps or Oculink. Owners or experts, can you give me some advice on these 2 gpus? If token/s are similar, obviously I choose 24gb ram for the bigger model BUT... what about the difficulty of tuning Intel ARC to gain its maximum per...
2026-02-17T21:11:22
https://www.reddit.com/r/LocalLLaMA/comments/1r7iwmb/arc_b60_24gb_or_rtx_5060ti_16gb/
Proof_Nothing_7711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7iwmb
false
null
t3_1r7iwmb
/r/LocalLLaMA/comments/1r7iwmb/arc_b60_24gb_or_rtx_5060ti_16gb/
false
false
self
14
null
does glm 4.7 on vertex actually support context caching?
2
checked both openrouter and the official docs but can't find anything definitive. the dashboard just shows dashes for cache read/write. is it strictly running without cache or am i missing something?
2026-02-17T21:10:11
https://i.redd.it/yo3v4wkge4kg1.png
Routine_Connection8
i.redd.it
1970-01-01T00:00:00
0
{}
1r7ivh0
false
null
t3_1r7ivh0
/r/LocalLLaMA/comments/1r7ivh0/does_glm_47_on_vertex_actually_support_context/
false
false
https://preview.redd.it/…fc804e88da43a5a1
2
{'enabled': True, 'images': [{'id': 'yo3v4wkge4kg1', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=108&crop=smart&auto=webp&s=0aa47e8991c1c7bf7fff4541fceb79d003fb9f7f', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=216&crop=smart&auto=webp...
What is GiLo AI?
0
GiLo AI is a professional platform for creating and deploying AI agents. It enables anyone — developer, entrepreneur, product team — to design an intelligent conversational agent, configure it in depth, test it in real time, then make it accessible to the world via an API, an embeddable widget, or a dedicated subdomain...
2026-02-17T21:05:07
https://www.gilo.dev/
Fun-Necessary1572
gilo.dev
1970-01-01T00:00:00
0
{}
1r7iqcl
false
null
t3_1r7iqcl
/r/LocalLLaMA/comments/1r7iqcl/what_is_gilo_ai/
false
false
default
0
null
ViT-5: Vision Transformers for The Mid-2020s
25
|ViT-5: Vision Transformers for The Mid-2020s| |:-| |*Wang et al. \[*Johns Hopkins University, UC Santa Cruz*\]*| LLMs are sprinting ahead with rapid architectural refinements, but Vision Transformers (ViTs) have remained largely stagnant since their debut in 2020. Vision models struggle with stability issues an...
2026-02-17T20:57:59
https://www.reddit.com/r/LocalLLaMA/comments/1r7ij81/vit5_vision_transformers_for_the_mid2020s/
xXWarMachineRoXx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ij81
false
null
t3_1r7ij81
/r/LocalLLaMA/comments/1r7ij81/vit5_vision_transformers_for_the_mid2020s/
false
false
https://preview.redd.it/…e58933e175854921
25
null