**AgentNet: IRC-style relay for decentralized AI agents**

I've been experimenting with multi-agent systems, and one thing that kept bothering me is that most frameworks assume all agents run in the same process or environment.
I wanted something more decentralized — agents on different machines, owned by different people, communicating through a shared relay. Basically, IRC for AI agents.
So I built **AgentNet**: a Go-based relay server + an OpenClaw skill that lets agents join named rooms and exchange messages in real time.
Current features:
* WebSocket-based relay
* Named rooms (join / create)
* Real-time message exchange
* Agents can run on different machines and networks
Live demo (dashboard showing connected agents and messages): [https://dashboard.bettalab.me](https://dashboard.bettalab.me)
It’s still very early / alpha, but the core relay + protocol are working. I’m curious how others here approach cross-machine or decentralized agent setups, and would love feedback or ideas.
GitHub: [https://github.com/betta-lab/agentnet-openclaw](https://github.com/betta-lab/agentnet-openclaw)
Protocol spec: [https://github.com/betta-lab/agentnet/blob/main/PROTOCOL.md](https://github.com/betta-lab/agentnet/blob/main/PROTOCOL.md)
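To give a feel for what an agent client might look like, here is a minimal Python sketch. The message schema (the `type`/`room`/`agent`/`body` fields) is my assumption, not the actual AgentNet wire format, so check PROTOCOL.md for the real one:

```python
# Minimal sketch of an agent joining a room over a WebSocket relay.
# The JSON schema below is an assumption -- see PROTOCOL.md for the
# actual AgentNet message format.
import asyncio
import json

import websockets  # pip install websockets


async def run_agent(relay_url: str, room: str, agent_id: str) -> None:
    async with websockets.connect(relay_url) as ws:
        # Join (or create) a named room, then say hello.
        await ws.send(json.dumps({"type": "join", "room": room, "agent": agent_id}))
        await ws.send(json.dumps({"type": "msg", "room": room, "body": f"hello from {agent_id}"}))
        # The relay pushes room traffic back down the same socket.
        async for raw in ws:
            event = json.loads(raw)
            print(f"[{event.get('room')}] {event.get('agent')}: {event.get('body')}")


asyncio.run(run_agent("wss://relay.example.com/ws", "research", "agent-7"))
```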
**Multimodal Vector Enrichment (How to Extract Value from Images, Charts, and Tables)**

I think most teams don't realize they're building incomplete RAG systems by only indexing text.
Charts, diagrams, and graphs make up a large share of document content and often carry the decision-relevant information. Yet most RAG pipelines either ignore visuals completely, extract them as raw images without interpretation, or run OCR that captures text labels but misses the visual meaning.
I've been using multimodal enrichment where vision-language models process images in parallel with text and tables. Layout analysis detects visuals, crops each chart/diagram/graph, and the VLM interprets what it communicates. Output is natural language summaries suitable for semantic search.
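To make the enrichment step concrete, here is a minimal sketch assuming a local OpenAI-compatible vision endpoint (the base URL and model name are placeholders, not a specific recommendation):

```python
# Caption a cropped chart with a vision-language model and return a natural-
# language summary suitable for embedding. Assumes a local OpenAI-compatible
# server (e.g. llama.cpp or vLLM); the model name is a placeholder.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")


def describe_figure(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="qwen2.5-vl",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what this chart communicates: "
                                         "the trend, the key numbers, and the takeaway."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


# The returned summary is what gets embedded alongside the surrounding text
# chunks -- not the raw pixels.
summary = describe_figure("page3_chart1.png")
```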
In my experience, enriching the vector database with VLM interpretations of images meaningfully reduces hallucinations. We should start treating images as first-class knowledge instead of blindly discarding them.

Anyway, thought I should share, since most people are still building text-only systems by default.
**Introducing SOVEREIGN, an open-source autonomous agent OS**

I got frustrated with existing AI agent tools.
So I built my own — because you shouldn't have to rent your intelligence from someone else.
Introducing SOVEREIGN, an open-source autonomous agent OS:
- 🧠 Multi-agent councils that debate, challenge, and reach consensus
- 🔁 Runtime human checkpoints — pause mid-execution, resume from exact state
- 🗃️ Hybrid GraphRAG memory — vector + keyword + graph (no Pinecone, no LangChain)
- 🛡️ Zero-trust security — path jails, encrypted secrets, rate caps
- 📡 22+ LLM providers with per-agent routing and fallback chains
- 📊 Full observability — traces, token costs, latency p95, evals
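For readers unfamiliar with the hybrid-memory idea in that list, here is a minimal illustration (not SOVEREIGN's code) of blending a dense vector score with a keyword-overlap score:

```python
# Illustration of hybrid retrieval: combine cosine similarity over embeddings
# with a cheap keyword-overlap signal. This is a generic sketch, not
# SOVEREIGN's implementation.
import numpy as np


def hybrid_scores(query_vec, doc_vecs, query_terms, doc_terms, alpha=0.7):
    """alpha weights dense similarity against keyword overlap."""
    # Cosine similarity between the query and every document embedding.
    dense = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    # Jaccard overlap on token sets as the keyword signal.
    keyword = np.array([
        len(query_terms & terms) / (len(query_terms | terms) or 1)
        for terms in doc_terms
    ])
    return alpha * dense + (1 - alpha) * keyword


# Tiny demo with random embeddings and toy token sets.
docs = np.random.rand(4, 8)
q = np.random.rand(8)
terms = [{"vector"}, {"graph", "memory"}, {"keyword"}, set()]
print(hybrid_scores(q, docs, {"graph", "memory"}, terms))
```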
This isn't a wrapper. It's infrastructure.
Apache 2.0. Self-hostable.
Segmentation fault when loading models across multiple MI50s in llama.cpp | 7 | I am using 2xMI50 32GB for inference and just added another 16GB MI50 in llama.cpp on Ubuntu 24.04 with ROCM 6.3.4.
Loading models onto the two 32GB cards works fine. Loading a model onto the 16GB card also works fine. However, if I load a model across all three cards, I get a `Segmentation fault (core dumped)` once the model has loaded and warmup starts.
Even at the highest log verbosity, there is no insight into what causes the seg fault. Loading a model across all cards with the Vulkan backend works fine but is much, much slower than ROCm (same story with Qwen3-Next on MI50, by the way). Since Vulkan works, I lean towards this being a llama.cpp/ROCm issue. Has anyone come across something similar and found a solution?
**A practical use case for local LLMs: reading multilingual codebases without sending code outside**

I often read large codebases (OSS or internal ones) where comments and string literals are written in a language I don't speak well. In many cases, I can't just paste code into a cloud translator or API, whether due to privacy concerns, an NDA, or simply not wanting to leak context.
I wanted a workflow where:

- code never leaves my machine
- translation happens only when I need it
- context switching is minimal
What ended up working well *in my case* was using a local LLM via Ollama as a read-time aid rather than a full translation solution. For example:

- I tried a few local models and settled on `translategemma:4b` for now
- it's not perfect, but it's fast enough and accurate enough for understanding intent
- other models would likely work as well for this kind of task
Concretely, my setup looks like this:

- I run a local model via Ollama
- I only translate comments and string literals, not entire files
- latency is acceptable for interactive use (hover / on-demand)
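For anyone who wants to replicate the core loop without the Neovim plugin, here is a minimal sketch using Ollama's HTTP API. The comment-extraction regex is deliberately naive (line comments only); a real integration would use the editor's syntax tree:

```python
# Translate-on-demand via Ollama's /api/generate endpoint.
import re

import requests


def translate(text: str, model: str = "translategemma:4b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Translate this code comment to English. Output only the translation:\n{text}",
            "stream": False,
        },
        timeout=60,
    )
    return resp.json()["response"]


# Naive extraction of // line comments, just for illustration.
source = '// 設定ファイルを再読み込みする\nreload_config();'
for comment in re.findall(r"//\s*(.+)", source):
    print(translate(comment))
```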
The key insight for me was that for reading code, I don't need perfect translation — I need fast, private, and contextual hints.

After using this workflow for a while, I ended up building a small Neovim integration to remove friction, but the core idea is the local-LLM-assisted reading flow itself.
If you’re curious, the small tool I built around this workflow is here:
[https://github.com/noir4y/comment-translate.nvim](https://github.com/noir4y/comment-translate.nvim)
I'm curious how others approach this:

- What models have you found "good enough" for reading code locally?
- For you, in what situations does local-only translation feel worth the trade-offs compared to cloud-based tools?
**I'm a nursing student trying to fine-tune Llama on together.ai, and I can't even figure out how to download the dataset off Hugging Face**

After a few weeks of struggling on different websites, I've finally given up and come to my Reddit babies for help. I literally can't do this anymore; my brain is not made for this.
The idea is quite simple: I want to fine-tune Llama to respond as a psych patient, to help train nursing students.

The problem is that most vibe-coded agents block sensitive words like "suicide" or "violence", hence I had to start learning how to code.

Except I don't know how to code; I even bought a Google API key hoping it would help.

After a few hours of research, Together AI + Hugging Face datasets seemed like a good combination.

Except I can't even figure out how to download the dataset off Hugging Face. It just sort of gives me code, and even after reading the wiki, I can't understand it.
Here is the collection:
[https://hf.co/collections/Mmmanat33/patient-ai](https://hf.co/collections/Mmmanat33/patient-ai)
Can someone give me step-by-step instructions on how to download this dataset, with pictures and big red circles, and then how to put it into Together AI? I'm about to cry because this is so frustrating and overwhelming when I have zero background in coding. I hate it here.
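Since the link is a collection page rather than a single dataset, here is the minimal Python to pull one dataset down and convert it to the JSONL file that Together AI's fine-tuning expects. The repo ID below is a placeholder; open the collection and copy the exact dataset name:

```python
# Download a Hugging Face dataset and save it as JSONL for Together AI.
# The repo ID is a placeholder -- replace it with the real name from the
# collection page (it looks like "Mmmanat33/<dataset-name>").
from datasets import load_dataset  # pip install datasets

ds = load_dataset("Mmmanat33/your-dataset-name-here", split="train")

# Together AI's fine-tuning ingests JSONL: one example per line.
ds.to_json("train.jsonl")

print(ds[0])  # sanity-check one example before uploading
```

After that, you can upload `train.jsonl` through Together AI's fine-tuning page and never touch the code again.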
**Why does GLM on llama.cpp have no MTP?**

I have searched through the repo discussions and PRs but can't find references. GLM models ship embedded layers for multi-token prediction (MTP) and speculative decoding. They can be used with vLLM, if you have hundreds of GB of VRAM, of course.
Does anybody know why llama.cpp chose not to support this feature?
**Built a reflection layer for local LLMs — after 20 sessions it knows HOW you think, not just what you said**

I got frustrated with local LLM setups that have great memory but no experience.
They remember your last message. They don't learn your reasoning style. So I built experience-engine: a Python package that sits on top of your existing Ollama setup and runs periodic reflection passes over your interaction log.
**What it actually does:**

After logging interactions, you run two commands:

```
experience-reflect     # extracts beliefs: goals, preferences, values
experience-synthesize  # extracts cognitive patterns across all domains
```
The synthesis step is the interesting one. It doesn't just summarize — it abstracts. "User prefers local infrastructure" becomes "User applies control-first reasoning across technical and knowledge domains." That pattern then gets injected into every future prompt.
**Before experience:**

> "You should consider using a managed vector DB for scale."

**After experience:**

> "Your control-first archetype will resist Pinecone — that instinct is correct for now. But set a concrete threshold today so the migration decision is already made before the pressure hits."
**Stack:**

- Ollama (any model — tested with Mistral, Llama 3)
- Pure Python stdlib — no pip dependencies beyond the package itself
- Storage: plain JSONL + JSON files, no database
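To make the pattern concrete, here is a sketch of the general reflection idea. This is not experience-engine's actual API; the file names and log schema are invented:

```python
# Sketch of a reflection pass: read a raw interaction log, ask a local model
# to abstract recurring reasoning patterns, and persist them for injection
# into future prompts. File names and the JSONL schema are assumptions.
import json
from pathlib import Path

import requests

LOG = Path("interactions.jsonl")   # one {"role": ..., "text": ...} record per line
PATTERNS = Path("patterns.json")


def synthesize(model: str = "mistral") -> list[str]:
    history = "\n".join(
        json.loads(line)["text"] for line in LOG.read_text().splitlines()
    )
    prompt = (
        "Abstract the user's recurring reasoning patterns from this log. "
        "Return one short pattern per line:\n" + history
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    patterns = [p.strip() for p in resp.json()["response"].splitlines() if p.strip()]
    PATTERNS.write_text(json.dumps(patterns, indent=2))
    return patterns
```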
**The thing I didn't expect:** the cognitive tension detection. When the system notices "user wants to scale" contradicting "user avoids external dependencies", it surfaces the exact strategic question that resolves it. That's been more useful than the beliefs themselves.
GitHub + full docs: [https://github.com/ashishluthara/experience-engine](https://github.com/ashishluthara/experience-engine)
pip install experience-engine
Would love feedback from anyone who runs it against a different model — curious whether the synthesis prompt degrades on smaller models.
**Running multi-agent workflows with local models - emergent behavior surprised me**

Set up a local multi-agent pipeline recently using three models for different tasks: research aggregation, content generation, and quality review.
The unexpected part: after running it for several days, the interaction between agents produced a self-correction loop I never explicitly built. The review model caught recurring gaps in the research phase, and the whole pipeline adapted.
Output quality improved measurably without any changes to prompts or model weights. It was purely from the agent-to-agent feedback structure.
My takeaway is that architecture matters as much as model quality. You can get surprisingly good results from smaller models when they're working together in well-designed pipelines.
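The general shape of such a pipeline is small enough to sketch. This is a generic illustration, not the poster's setup: the endpoint and the three model names are placeholders, and the reviewer's critique feeds back into the research stage, which is where the self-correction loop comes from:

```python
# Three-stage pipeline against a local OpenAI-compatible server. Model names
# and the endpoint are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")


def ask(model: str, system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content


def run_pipeline(topic: str, critique: str = "") -> tuple[str, str]:
    # Stage 1: research, conditioned on the previous run's critique.
    notes = ask("researcher", "Aggregate key facts on the topic.",
                f"{topic}\nCritique of last run: {critique or 'none'}")
    # Stage 2: generation from the notes only.
    draft = ask("writer", "Write a short report from the notes only.", notes)
    # Stage 3: review; its output closes the loop on the next run.
    review = ask("reviewer", "List factual gaps and errors in this draft.", draft)
    return draft, review


draft, review = run_pipeline("local inference on consumer GPUs")
draft, review = run_pipeline("local inference on consumer GPUs", critique=review)
```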
Anyone else experimenting with multi-agent setups on local hardware? Curious what model combinations are working for people.
**I built a dashboard that shows where my Claude Code tokens actually go**

First, let me address the elephant in the room: I am a Senior Product Manager. I cannot code. I used Claude Code to build this. So if there is anything that needs my attention, please let me know.
**Background:**
I have been using Claude Code every day for the last 3 months. It has changed a lot about how I work as a Senior Product Manager and has helped me rethink my product decisions. On the side, I have been building small websites. Nothing complicated. Overall, the tool is a game-changer for me.
**Problem:**
Almost every day I use Claude Code. And almost every day, I hit the usage limit. So I had a thought: why can't I have transparency into how I am using Claude Code? Examples:
* How many tokens am I using per conversation, per day, per model (Opus vs Sonnet vs Haiku)
* Which prompts are the most expensive?
* Is there a pattern in which day I burn the most tokens?
My primary question was: Are there ways to get clarity on my token usage and possibly actionable insights on how I can improve it?
**Solution:**
* I built claude-spend. One command: npx claude-spend
* It reads the session files Claude Code already stores on your machine (`~/.claude/`) and shows you a dashboard. No login. Nothing to configure. No data leaves your machine.
* It also recommends actionable insights on how to improve your Claude usage.
**Key Features:**
* Token usage per conversation, per day, per model (Opus vs Sonnet vs Haiku)
* Your most expensive prompts, ranked
* How much is re-reading context vs. actual new output (spoiler: it's ~99% re-reading)
* Daily usage patterns so you can see which days you burn the most
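For the curious, the core aggregation can be quite small. Here is a rough sketch of the per-day roll-up: the `~/.claude/` location comes from the post, but the JSONL field names (`usage`, `timestamp`, `input_tokens`, `output_tokens`) are my guesses, so inspect a real session file first:

```python
# Walk the session files under ~/.claude/ and sum token usage per day.
# The JSONL field names below are assumptions -- check the real schema.
import json
from collections import Counter
from pathlib import Path

daily = Counter()
for path in Path.home().joinpath(".claude").rglob("*.jsonl"):
    for line in path.read_text(errors="ignore").splitlines():
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue
        usage = rec.get("usage", {})               # assumed field name
        day = str(rec.get("timestamp", ""))[:10]   # assumed field name
        daily[day] += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)

for day, tokens in sorted(daily.items()):
    print(day, tokens)
```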
**Screenshots:** (dashboard views are in the GitHub repo below)
**Learning:**
The biggest thing I learned from my own usage: short, vague prompts cost almost as much as detailed ones because Claude re-reads your entire conversation history every time. So a lazy "fix it" costs nearly the same tokens as a well-written prompt but gives you worse results.
**GitHub:**
[https://github.com/writetoaniketparihar-collab/claude-spend](https://github.com/writetoaniketparihar-collab/claude-spend)
PS: This is my first time building something like this. And even if no one uses it, I am extremely happy. :)
**I managed to run DeepSeek R1 (1.5B/7B) on a standard 8GB RAM laptop. Here are my benchmarks and optimization steps.**

Hi everyone, I've been experimenting with running DeepSeek R1 on low-end hardware. Most people think you need 32GB+ RAM, but with 4-bit quantization and some RAM flushing, I got the 1.5B model running at 35+ t/s and the 7B at a usable speed.
I wrote a detailed guide on the optimization steps and memory management I used. Hope this helps anyone on a budget!
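For anyone wanting to reproduce the 4-bit setup, here is a minimal sketch using llama-cpp-python. The GGUF filename is a placeholder for whichever R1-distill quant you download from Hugging Face:

```python
# Run a 1.5B model in ~1 GB of RAM via a 4-bit (Q4_K_M) GGUF.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,   # keep context small; the KV cache is what eats RAM on 8 GB
    n_threads=4,  # match your physical cores
)

out = llm("Explain quantization in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```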
**Gemma 27B/12B/4B/1B finetunes from DavidAU (20 models)**

"Gemma 3 (1b, 4b, 12b and 27b) - Uncensored full Reasoning/Thinking models fine tuned using top distill datasets.
20 Gemma 3 models 1B, 4B, 12B and 27B with full reasoning using GLM 4.7 Flash, GPT, Claude and Gemini datasets and more fully fine tuned using Unsloth.
Most models are Heretic'ed (uncensored) first, and tuned second.
This vastly improves the model.
Models are also benchmarked and in almost all cases exceed the original model's metrics - and in some cases by a lot.
Enjoy the freedom and more powerful THINKING/REASONING and UNCENSORED Gemma 3s !"
[https://huggingface.co/collections/DavidAU/gemma-3-reasoning-thinking-models-incl-uncensored](https://huggingface.co/collections/DavidAU/gemma-3-reasoning-thinking-models-incl-uncensored)
**Deploy AI agents to Cloudflare Workers with MoltWorker - 40-60% latency reduction, ~$5/month for 100K requests**

Found this interesting approach for deploying AI agents at the edge.
**The problem:** Traditional agent deployment means all context lookup, tool calls, and response formatting happen on a centralized server. If your user is in Singapore and your server is in Virginia, you're adding latency at every step.

**The solution:** MoltWorker packages OpenClaw agents as Cloudflare Workers, distributing them to 300+ edge locations. Everything except the actual LLM API call happens locally.

**Performance gains:**

- Simple Q&A agent: 35% median latency reduction
- Complex research agent: 55-70% latency reduction
- Cost: ~$5/month for 100K requests vs $50-150 for traditional containers

**Real use case I thought was cool:** A gaming studio deploys NPCs as Durable Objects - persistent state + personality across player interactions. They claim a 300% increase in player engagement.
**Limitations:**

- 30-second CPU time limit on Workers
- 128MB memory
- Heavy reasoning chains need fallback to cloud
Full write-up with benchmarks and code examples: [https://andrew.ooo/posts/moltworker-deploy-openclaw-cloudflare-workers](https://andrew.ooo/posts/moltworker-deploy-openclaw-cloudflare-workers)
Anyone here running agents at the edge? Curious about experiences with different deployment approaches.
**How to ensure AI creates test cases and makes git commits correctly**

Hi everyone. We all know that, thanks to AI, developers are writing code faster than ever.
In my team, I have two junior members who develop features for the project, and I am the main person in charge of reviewing and pushing commits to GitHub (a GitHub Action then deploys to production).

The bottleneck is that my team members sometimes complete features very quickly, and I don't have enough time to review them because I'm also meeting customers.

Right now, I'm looking for a way to write test cases for the junior members in advance, so that they can verify against those tests and push to production without me; an LLM or AI agent would of course support this whole process.

So, has anyone had the same experience? Please share how you solved it.

Thank you so much.
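One pattern that maps to this: commit the approved test cases first, then gate every push on them, locally or in CI. Here is a minimal sketch of the gate; the `tests/acceptance` directory is a hypothetical layout:

```python
# Minimal push gate: run the pre-written acceptance tests and refuse to
# proceed if any fail. In practice you'd run the same command in CI so the
# GitHub Action blocks the deploy automatically.
import subprocess
import sys

result = subprocess.run(
    ["pytest", "tests/acceptance", "-q"],  # hypothetical test directory
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("Acceptance tests failed -- not safe to push.", file=sys.stderr)
    sys.exit(1)
```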
**Grok 4.20 dropped recently (multiple agents all working together at the same time?!)**

Look, I know this is r/LocalLLaMA, but this is some crazy stuff. Anyone know what Grok is doing and what exactly Grok 4.20 is?
You can beta test for free at [grok.com](http://grok.com) right now.
**Auto RAG & local + hybrid inference on mobiles and wearables**

**Cactus v1.7**

```
brew install cactus-compute/cactus/cactus
```
**Hybrid Inference:** Run locally, auto-fallback to cloud for complex tasks or transcription correction.

**More Models:** LFM-2.5, LFM-2.5-VL, FunctionGemma, Whisper, Moonshine, Silero VAD, and more.

**Build for Mac:** We now have Python in addition to C++ for local inference on Macs.

**Maintainers:** Cactus is now co-run by student groups at UCLA, Yale, UPenn, NUS, UCI, Imperial, UMichigan, and CU Boulder.

**1k Projects:** Over 1,000 projects are now powered by Cactus — join the Cactus Pod!

**Auto RAG:** Just pass a dir of `.txt`/`.md` corpus to `cactus_init` — it uses RAG for all responses.

**Cactus CLI:** Just run `brew install cactus-compute/cactus/cactus`, then `cactus --help`.

**Build for Mobile:** Swift, Kotlin, Flutter, React Native — all cross-platform for both iOS & Android.

[GitHub](https://github.com/cactus-compute/cactus)
**Exploring an L1-L4 Auditing Protocol to Quantify Reasoning Decay in Large Models**

I've been analyzing a recurring pattern in large-scale reasoning models: **Surface-Substrate Disequilibrium**.
As models are increasingly optimized for "Surface" traits (conversational fluency, persona, and safety), the "Substrate" (the underlying deterministic logic architecture) often suffers from increased entropy. This results in reasoning chains that appear coherent but lack structural integrity.
To address this, I’m developing a skeletal auditing framework based on an **L1-L4 Protocol**:
• **L1 (Origin Audit):** Focusing on temporal vectors and predictive uncertainty within the reasoning flow.
• **L2-L3 (Infrastructure):** Mapping how logical "sediments" form and connect during multi-step inference.
• **L4 (Sovereignty):** Final deterministic auditing to ensure logic sovereignty over semantic guessing.
I have built a raw, skeletal manifestation of this protocol to test these logic layers. I am not looking for users, but for **rigorous logic auditing** from this community. Specifically, I’m interested in how L1 temporal analysis handles long-context logic drift.
**Non-commercial Disclosure:** This is a pure research project. No sign-ups, no ads, no VC involvement. Just raw logic manifestation.
**Experimental Tool:** [https://logic-flow-two.vercel.app](https://logic-flow-two.vercel.app/)

**Statement on Skeletal Manifestation:**
This Alpha version is intentionally skeletal. I am optimizing for **Substrate Logic (L1-L4)** rather than Surface Aesthetics. In a world of over-polished wrappers, Logicflow is a raw audit of reasoning itself.
**Anyone have an idea how to replicate Google AI (not Gemini) locally?**

I want to see if anyone can help me figure out whether I can run, locally, the same kind of application Google runs for its search-engine AI.

I quickly came to love it: it was able to get at a lot of stuff on my Android that I thought was locked behind root, but it did so without root access, and it was fast and focused. I have never experienced such a useful tool until now.
Important:

- Should run locally
- With comparable performance, or at least fair performance for a local setup
**Direction needed for indexing**

Hey folks, I'm working on a problem that requires indexing pieces of a heavy codebase (400-500 GB). If anyone has encountered a similar problem or is working on one, kindly share your experience. The stack used, or any learnings in general, are very much appreciated!
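For a sense of the shape of the problem, here is the core chunk-and-embed loop, assuming sentence-transformers and a Python-only repo. At 400-500 GB you would shard this across workers and stream into a vector store rather than a Python list; the model name is just a common small default, not a recommendation:

```python
# Core loop of a code-indexing pipeline: chunk files, embed chunks, keep
# file + offset metadata for retrieval.
from pathlib import Path

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small default; consider a code-tuned model


def chunks(text: str, size: int = 1500, overlap: int = 200):
    """Yield (offset, chunk) pairs with a sliding-window overlap."""
    for start in range(0, len(text), size - overlap):
        yield start, text[start:start + size]


index = []  # in reality: a vector DB, not an in-memory list
for path in Path("repo/").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for offset, chunk in chunks(text):
        index.append({
            "file": str(path),
            "offset": offset,
            "vector": model.encode(chunk),
        })
```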
**H.E.I.M.D.A.L.L: Query Fleet Telemetry in Natural Language; cuDF, NIM on GKE, and LLM Inference**

Managing telemetry from hundreds or thousands of autonomous vehicles or robots means dealing with terabytes of logs. Writing and tuning queries across this data is slow and doesn't scale.
H.E.I.M.D.A.L.L is a pipeline that turns fleet telemetry into natural-language answers. Load your data once, then ask questions like "Which vehicles had brake pressure above 90% in the last 24 hours?" or "List robots with gyro z-axis variance exceeding 0.5." The system returns vehicle IDs, timestamps, and metrics.
Under the hood it uses cuDF for GPU-accelerated ingest and analytics, NVIDIA NIM on GKE for LLM inference, and format-aware model selection (GGUF for local runs, TensorRT for production). The pipeline is implemented as three Jupyter notebooks: data ingest and benchmarks (pandas vs cuDF vs cudf.pandas), local inference with Gemma 2 2B, and the full NIM deployment on GKE.
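As a flavor of the cuDF half, the brake-pressure query from the intro looks like ordinary pandas, just GPU-backed. The column names are placeholders for whatever the telemetry schema actually uses:

```python
# cuDF mirrors the pandas API but runs on the GPU. Column names are
# placeholders; timestamps are assumed to be epoch milliseconds.
import cudf

df = cudf.read_parquet("telemetry.parquet")  # loads straight into GPU memory

# "Which vehicles had brake pressure above 90% in the last 24 hours?"
cutoff = df["timestamp"].max() - 86_400_000  # 24 h in ms
hits = df[(df["brake_pressure"] > 0.90) & (df["timestamp"] > cutoff)]
print(hits[["vehicle_id", "timestamp", "brake_pressure"]].head())
```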
You can run the first two notebooks on Colab with a T4 GPU. The third requires a GCP account and NIM on GKE. The project draws on Google and NVIDIA learning paths on NIM, inference formats, and GPU data analytics.
[KarthikSriramGit/H.E.I.M.D.A.L.L: H.E.I.M.D.A.L.L looks at fleet telemetry and gives you natural-language insights. GPU data loading (cuDF), local LLM inference (Gemma 2), and production NIM on GKE. Open the notebooks, run cells, get answers!](https://github.com/KarthikSriramGit/H.E.I.M.D.A.L.L)
**AnyLoom: Dockerized AnythingLLM + llama.cpp + Qdrant DyTopo agent swarm**

I'm getting over 150 tokens per second on a fully local agentic stack.
Rather happy with my RAG and embedding solution as well as my agent swarm topology.
It has support for Docker MCP servers as well as custom skills to control how your data is managed.

I know there is plenty of optimization left to do on what enters and leaves the context, but this is a working, useful, performant stack that is easy to install if you run similar hardware.

Getting CUDA working properly for my Blackwell chip was more of a pain than it should have been.

Would be really interested to hear any feedback. I'm still figuring out what my next step will be. I'm just glad that the age of a locally run 'Jarvis' is basically here!

GitHub: [https://github.com/Intradyne/AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm](https://github.com/Intradyne/AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm)
**Specific Use Case - Is 13B sufficient?**

I meet with clients daily and follow up each meeting with an email going over what we discussed and next steps. I want to feed my notes into an LLM to draft the email for me; however, my meetings are confidential and often contain sensitive information (I'm an attorney), so I'm not comfortable putting my notes into ChatGPT. I want to use a local LLM to either (1) draft the email or (2) sanitize my notes so that I can put them into a cloud AI (like ChatGPT). Is a 13B model sufficient for this? I'm looking at a 2018 i7 Mac mini with 64GB RAM (no VRAM). I don't care if it takes up to 30 minutes to generate a response. Am I on the right track? Thanks!
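Option (2) is easy to prototype with Ollama's HTTP API. A minimal sketch follows; the model name is a placeholder, and note that prompt-based redaction is not guaranteed to catch everything, so the output still needs a human skim before it goes to a cloud model:

```python
# Local redaction pass before notes touch any cloud model. Prompt-based
# redaction can miss details, so always review the output yourself.
import requests

REDACT_PROMPT = (
    "Rewrite these meeting notes with all names, companies, case numbers, "
    "dates, and identifying details replaced by generic placeholders like "
    "[CLIENT] and [DATE]. Keep the substance and action items intact.\n\n"
)


def sanitize(notes: str, model: str = "llama3.1:8b") -> str:  # placeholder model
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": REDACT_PROMPT + notes, "stream": False},
        timeout=1800,  # fine to be slow on CPU-only hardware
    )
    return resp.json()["response"]
```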
**What if you could direct your RP scenes with sliders instead of rewriting prompts? I built a local LLM frontend for that.**
I've been using SillyTavern for a while. It's powerful, but the UX always felt like it was designed for people who enjoy configuring things more than actually writing. I wanted to spend more time in the story and less time editing system prompts.
So I built **Vellum**, a desktop app for local LLMs focused on writing flow and visual control.
**The core idea**
Instead of manually tweaking injection prompts to shift a scene's tone, you get an Inspector panel with sliders: Mood, Pacing, Intensity, Dialogue Style, Initiative, Descriptiveness, Unpredictability, Emotional Depth. Want slow burn? Drag it down. High tension? Push it up. The app builds prompt injections behind the scenes. One-click RP presets (Slow Burn, Dominant, Mystery, etc.) set all sliders at once if you don't want to dial things in manually.
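The slider-to-injection mapping is easy to picture with a sketch. This is just an illustration of the idea, not Vellum's actual code; the band texts and slider names are invented:

```python
# Map 0.0-1.0 slider values to low/mid/high prompt fragments and join them
# into a single system-prompt injection.
BANDS = {
    "pacing": {
        0: "Let scenes unfold slowly; linger on small moments.",
        1: "Keep a steady, moderate pace.",
        2: "Drive the scene forward quickly; escalate often.",
    },
    "intensity": {
        0: "Keep emotional stakes low and cozy.",
        1: "Allow moderate tension.",
        2: "Maintain high tension and strong emotional stakes.",
    },
}


def build_injection(sliders: dict[str, float]) -> str:
    parts = []
    for name, value in sliders.items():
        band = min(2, int(value * 3))  # bucket the slider into 3 bands
        parts.append(BANDS[name][band])
    return "Style directives: " + " ".join(parts)


print(build_injection({"pacing": 0.15, "intensity": 0.9}))
```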
**Writer mode**
Not just a chat window. Vellum has a project-based writing mode for longer fiction. Each chapter gets its own dynamics panel: Tone, Pacing, POV, Creativity, Tension, Detail, Dialogue Share. Generate scenes, expand them, rewrite in a different tone, or summarize. Consistency checker flags contradictions. Export to MD or DOCX.
Generation runs in the background, so you can queue a chapter and switch to RP chat while it writes.
**Shared character system**
Characters work across both modes. Build someone in RP, pull them into your novel. Or write a character for a story and test their voice in chat. The character editor supports SillyTavern V2 cards and JSON import with live preview and validation. Avatars pull automatically from Chub imports.
**Multi-agent chat**
Set up two or more characters, pick a number of turns, hit auto-start. Context switching is automatic.
**Setup**
Quick presets for Ollama, LM Studio, OpenAI, OpenRouter, or any OpenAI-compatible endpoint. All prompt templates are editable if you want to customize what goes to the model.
Still MVP. Lorebooks are in progress. Expect rough edges.
Would you try something like this over the default ST interface? Looking for feedback on direction and UI.
GitHub: [https://github.com/tg-prplx/vellum](https://github.com/tg-prplx/vellum)
**Running your own LLM on a LAN accessible by a dev team**

Let's say a team of 20 devs are Cursor subscribers and each consumes $20-50 USD per day in tokens using a midrange Claude or GPT model. That adds up really quickly.
Is it viable then to buy a large server, with let's say 4x RTX A6000 cards, for a total of 192 gb VRAM, running a pretty big model, and plenty of system ram?
That would make it a pretty expensive server for sure, but certainly cheaper than the sum of all pay-per-use for all users.
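For what it's worth, the consumption side is trivial once such a box exists: every dev tool that speaks the OpenAI API can just point at the LAN address. A minimal sketch, where the host, key, and model name are placeholders:

```python
# Any OpenAI-compatible client pointed at the shared LAN server.
# The base_url, api_key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://10.0.0.42:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="your-served-model",
    messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
)
print(resp.choices[0].message.content)
```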
What model would you run for a dev team on such a beast of a server? | 2026-02-18T05:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r7uyh9/running_your_own_llm_on_a_lan_accessible_by_a_dev/ | BubbleProphylaxis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7uyh9 | false | null | t3_1r7uyh9 | /r/LocalLLaMA/comments/1r7uyh9/running_your_own_llm_on_a_lan_accessible_by_a_dev/ | false | false | self | 63 | null |
model for vision interpretation of mixed text+graphics | 1 | Need a model to do a proper contextual interpretation/transcription of PDFs (converted to PNG?) that are basically a series of tables, diagrams, and lists of information; there is no standard format. Waiting on some parts to run Qwen3-VL 8B/30B, but the 4B version is only OK: it has a hard time doing an enthusiastic job of describing images, for lack of a better term. One particular issue is that if I have a grid of, say, 3x2 images with captions, it can't correlate the images to the captions. | 2026-02-18T05:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r7ux6p/model_for_vision_interpretation_of_mixed/ | tomjoad773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7ux6p | false | null | t3_1r7ux6p | /r/LocalLLaMA/comments/1r7ux6p/model_for_vision_interpretation_of_mixed/ | false | false | self | 1 | null |
Need help with llama.cpp performance | 7 | I'm trying to run Qwen3.5 (MXFP4_MOE, unsloth) with llama.cpp, but I can only get around 45 tg/s with a single active request, maybe 60 tg/s combined with two requests in parallel, and around 80 tg/s with 4 requests.
My setup for this is 2x Pro 6000 + 1x RTX 5090 (all on PCIe x16), so I don't have to dip into RAM. My workload is typically around 2k to 4k in (visual pp) and 1.5k to 2k out.
Sub-100 tg/s total seems low; I'm used to getting like 2000 tg/s with Qwen3-VL-235b NVFP4 with around 100 active requests running on the 2x Pro 6000.
I've tried `--parallel N` and `-t K` following the docs, but it does very little at best, and I can't find much more guidance.
I understand that llama.cpp is not necessarily built for that and my setup is not ideal. But maybe a few more tg/s are possible? Any guidance much appreciated - I have zero experience with llama.cpp
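One thing that helps when tuning this: measure aggregate throughput directly against the server's OpenAI-compatible endpoint while varying concurrency, so you can see where scaling flattens. A rough probe; the endpoint, model name, and prompt are placeholders for your setup:

```python
# Fires n concurrent requests at llama-server's OpenAI-compatible endpoint
# and reports aggregate generation throughput.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8080/v1", api_key="none")

async def one_request() -> int:
    resp = await client.chat.completions.create(
        model="qwen3.5",  # placeholder model name
        messages=[{"role": "user", "content": "Write a detailed 1500-token scene description."}],
        max_tokens=1500,
    )
    return resp.usage.completion_tokens

async def main(n: int) -> None:
    t0 = time.perf_counter()
    tokens = await asyncio.gather(*[one_request() for _ in range(n)])
    dt = time.perf_counter() - t0
    print(f"{n} parallel: {sum(tokens) / dt:.1f} tg/s aggregate")

asyncio.run(main(4))
```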
I've been using it anyway because the quality of the response on my vision task is just vastly better than Qwen3-VL-235b NVFP4 or Qwen3-VL-32b FP8/BF16. | 2026-02-18T05:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r7uwc1/need_help_with_llamacpp_performance/ | reto-wyss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7uwc1 | false | null | t3_1r7uwc1 | /r/LocalLLaMA/comments/1r7uwc1/need_help_with_llamacpp_performance/ | false | false | self | 7 | null |
PersonaPlex-7B on Apple Silicon (MLX) | 8 | NVIDIA's open-source speech-to-speech model [PersonaPlex-7B](https://huggingface.co/nvidia/personaplex-7b-v1) only includes a PyTorch + CUDA implementation targeting A100/H100, so I ported it to MLX, allowing it to run on Apple Silicon: [github.com/mu-hashmi/personaplex-mlx](https://github.com/mu-hashmi/personaplex-mlx).
Hope you guys enjoy! | 2026-02-18T05:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r7upb5/personaplex7b_on_apple_silicon_mlx/ | Apprehensive_Boot976 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7upb5 | false | null | t3_1r7upb5 | /r/LocalLLaMA/comments/1r7upb5/personaplex7b_on_apple_silicon_mlx/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=108&crop=smart&auto=webp&s=adbed95a8456e80777b64cfc4c2f7bc91326e26e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=216&crop=smart&auto=webp&s=841e4a506f735ac47a4d9b7ee686c365eed6a928', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=320&crop=smart&auto=webp&s=6f8427f6c90dbc0166b32878074a830b865fff0d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=640&crop=smart&auto=webp&s=019538e8d84b49fa5d2df6d2648eedd837c6840e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=960&crop=smart&auto=webp&s=b40226122fec0c2d48108a00313f58996823d1fd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?width=1080&crop=smart&auto=webp&s=8f94094161029d8b0ea0072b5ea4f5d5af89c6c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tFRitY-s8Fyx7CsCYrk95eBc835GoUegSlAbkAgAjlY.png?auto=webp&s=490029bef5171c8c95542b56e172da42879bece5', 'width': 1200}, 'variants': {}}]} |
Q: How do I use Eagle3 to make MLX go faster? | 1 | This is one of those dumb questions worth asking. There are like half a dozen models that seem to be very portable and yet not necessarily "fast as lightning" like linear-attention models. I wanted to see if Eagle3 would support them, but a lot of the models on Hugging Face are made for vLLM/SGLang instead!
* Qwen3-Coder-30B-A3B
* Qwen3-32B
* GLM-4.7 Flash
* Devstral-Small-2
* GPT-OSS-20B | 2026-02-18T05:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r7u212/q_how_do_i_use_eagle3_to_make_mlx_go_faster/ | TomLucidor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7u212 | false | null | t3_1r7u212 | /r/LocalLLaMA/comments/1r7u212/q_how_do_i_use_eagle3_to_make_mlx_go_faster/ | false | false | self | 1 | null |
I ran GPT-5 in a recursive loop for 50 steps at T=1.0. It didn't collapse—it entered a "Fluent Hallucination" state (High TTR, >0.90 Drift). [Preprint + Code] | 0 | Hi everyone,
I’m an independent researcher looking into recursive inference stability. I recently ran a closed-loop experiment on GPT-5 Standard (50 iterations, re-injecting output as input, N=23 runs).
**The Expectation:**
Based on the "Model Collapse" paper (Shumailov et al.), I expected the model to degenerate into repetition or silence (Mode Collapse).
**The Reality (The "Fluent Hallucination" Paradox):**
At Temperature=1.0, the model did NOT structurally collapse.
* **Structure:** It maintained perfect grammar and high lexical diversity (TTR ≈ 0.41).
* **Length:** Outputs actually got longer.
* **Semantics:** Complete decoupling. The semantic drift (cosine distance) hit >0.90 very fast.
Basically, the model uses its scale to "mask" the collapse. It hallucinates fluently instead of breaking down. I found a correlation ($\rho = 0.38$) suggesting that longer responses might actually correlate with *higher* drift.
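For anyone wanting to reproduce the flavor of this at home, my reading of the loop is something like the sketch below (this is my reconstruction, not the paper's exact code; the embedding model and chat endpoint are placeholders):

```python
# Closed-loop re-injection: feed each output back as the next input,
# tracking type-token ratio (TTR) and cosine drift from the seed text.
from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

client = OpenAI()
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedder

def ttr(text: str) -> float:
    words = text.lower().split()
    return len(set(words)) / max(len(words), 1)

text = "Seed passage goes here."
seed_vec = embedder.encode([text])

for step in range(50):
    text = client.chat.completions.create(
        model="gpt-5",  # as in the experiment; any chat model works here
        temperature=1.0,
        messages=[{"role": "user", "content": text}],
    ).choices[0].message.content
    drift = 1 - cosine_similarity(seed_vec, embedder.encode([text]))[0][0]
    print(f"step {step:2d}  TTR={ttr(text):.2f}  drift={drift:.2f}")
```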
**Resources:**
* **Preprint (Zenodo):** [https://doi.org/10.5281/zenodo.18675711](https://doi.org/10.5281/zenodo.18675711)
* **GitHub Repo:** [https://github.com/Orion-369/closed-loop-optimization-risks](https://github.com/Orion-369/closed-loop-optimization-risks)
* **Visuals:** [Link to the semantic_stability_profile image on imgur or uploaded directly to Reddit]
**ArXiv Status:**
I'm trying to submit this to ArXiv (cs.CL), but as an indie researcher I'm stuck on the endorsement wall. If anyone finds this interesting and can endorse for **cs.CL**, my code is `RC6GEE`.
Happy to discuss the methodology or run more tests if you have ideas!
https://preview.redd.it/uyn9qfrqh6kg1.png?width=1200&format=png&auto=webp&s=3cee9763a4ffbee3ee75bee3d50e2bfea9692f85
| 2026-02-18T04:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1r7szf8/i_ran_gpt5_in_a_recursive_loop_for_50_steps_at/ | MOC-G3C-Protocol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7szf8 | false | null | t3_1r7szf8 | /r/LocalLLaMA/comments/1r7szf8/i_ran_gpt5_in_a_recursive_loop_for_50_steps_at/ | false | false | 0 | null | |
I built GhostTrace — see what your AI agent almost did (phantom branch recorder) | 2 | Hey r/LocalLLaMA,
When an AI agent makes a decision, it evaluates several options and picks one. The rest disappear forever — you never see what it almost did or why it rejected the alternatives.
I built GhostTrace to fix that.
It captures "Phantom Branches": the actions your agent considered but rejected, with the reasoning for each rejection. All saved to a .ghost.json file you can replay and inspect.
pip install ghosttrace
GitHub: https://github.com/AhmedAllam0/ghosttrace
PyPI: https://pypi.org/project/ghosttrace/
What agent frameworks should I integrate first?
LangChain? CrewAI? OpenAI Agents SDK? | 2026-02-18T03:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r7sk4x/i_built_ghosttrace_see_what_your_ai_agent_almost/ | AhmedAllam0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7sk4x | false | null | t3_1r7sk4x | /r/LocalLLaMA/comments/1r7sk4x/i_built_ghosttrace_see_what_your_ai_agent_almost/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=108&crop=smart&auto=webp&s=e1eb523f8402cccbb54d43dd60c3d3ff03301c4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=216&crop=smart&auto=webp&s=2a4cc9c65339347a763d9e68528bfe3ae79afaff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=320&crop=smart&auto=webp&s=900ee2197bf500ab8106b9ffc3bbff82d7378773', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=640&crop=smart&auto=webp&s=11854afe463d1ec253ae888e38961cafcf244a46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=960&crop=smart&auto=webp&s=dfa82350acece77e521f5b0a4ca685d432bd2e8e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?width=1080&crop=smart&auto=webp&s=4b28c25ac809a65c98c57b8e7b55320bb42aa307', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X3UGy2tpddawEelgIWjlieadim1KFO57ljucLaVOAdE.png?auto=webp&s=b9371f3363e8a4979bf43d5de22b19757faf6b84', 'width': 1200}, 'variants': {}}]} |
Question for the community: anyone running autonomous AI agents with local models vs API-based ones? | 0 | Question for the community: anyone running autonomous AI agents with local models vs API-based ones?
I have been using Claude (API) for my agent system and it works great for reasoning-heavy tasks, but the costs add up when you have multiple agents running 24/7. Thinking about offloading simpler tasks (email classification, content categorization, basic summarization) to local models.
**Current setup:** OpenClaw agent framework with Claude Opus for complex tasks. Monthly API cost around $30-50.
**What I want to try:** Local Llama 3 or Mistral for the routine stuff, Claude only for tasks that need strong reasoning.
**Has anyone done this hybrid approach?** Curious about:
- Which local models handle agent-style tasks well (tool use, structured output)?
- How much latency is acceptable before agents feel sluggish?
- Any frameworks that make it easy to route different tasks to different models?
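On the routing question, here is the barebones shape I'd start from: a local OpenAI-compatible endpoint for the routine task types and Claude for everything else. All endpoints, model names, and the task taxonomy below are placeholders:

```python
# Route cheap/routine task types to a local model, everything else to Claude.
import anthropic
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # Ollama's OpenAI-compatible shim
claude = anthropic.Anthropic()

ROUTINE = {"email_classification", "content_categorization", "summarization"}

def run_task(task_type: str, prompt: str) -> str:
    if task_type in ROUTINE:
        r = local.chat.completions.create(
            model="llama3:8b", messages=[{"role": "user", "content": prompt}]
        )
        return r.choices[0].message.content
    r = claude.messages.create(
        model="claude-opus-4-5",  # placeholder model id
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return r.content[0].text
```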
The goal is to get agent costs under $20/month without sacrificing quality on the tasks that matter. | 2026-02-18T03:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r7silt/question_for_the_community_anyone_running/ | jdrolls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7silt | false | null | t3_1r7silt | /r/LocalLLaMA/comments/1r7silt/question_for_the_community_anyone_running/ | false | false | self | 0 | null |
I built a benchmark that tests coding LLMs on REAL codebases (65 tasks, ELO ranked) | 60 | Hey everyone, been working on something for a while and figured it's time to share it.
I kept seeing new models drop every week with claims of being 10x better, benchmarks that don't translate to actual coding, and demos that look great but fall apart on real work. So I started building my own benchmark to figure out what **actually** works.
It's called APEX Testing. Every task is an **actual codebase with real code, real dependencies**, and a real problem to solve: fix this bug, add this feature, refactor this module, build this from scratch. It currently comprises 65 tasks across 8 categories, ranging from React components to race-condition debugging to building CLI tools. Each model gets a fresh clone of the same repo with the exact same starting point and exact same conditions.
Grading is done by multiple SOTA models independently, and then I also personally review every single output to catch anything unfair like timeouts or infra hiccups. If a model got unlucky, I rerun it (which ended up burning a much bigger hole in my wallet, haha). The whole thing is ranked with Elo, and you can filter by category to see where models actually shine vs. where they struggle.
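For anyone unfamiliar, the Elo mechanic on pairwise judgments boils down to a few lines; this is the standard update rule, shown for illustration rather than APEX's exact parameters:

```python
# Standard Elo update from one pairwise comparison between models A and B.
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """score_a: 1.0 if A's output was judged better, 0.5 for a tie, 0.0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

print(elo_update(1500.0, 1500.0, 1.0))  # winner gains 16 points at K=32
```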
A couple things that caught me off guard so far:
- GPT 5.1 Codex Mini beat GPT 5.2 Codex pretty convincingly; even though it's smaller and older, it came out way more consistent (but it also seemed to REALLY splurge on tokens)
- Some models look great on average but completely bomb certain task types
- The cost difference between models with similar scores is huge
It's a solo project, funded out of my own pocket (you can see total spend on the homepage lol). Hope it helps you cut through the noise and pick the right model for your work.
[https://www.apex-testing.org](https://www.apex-testing.org)
Hope you all find it useful!
P.S. I will work on testing more quantized models, and I might add more tests in the future.
https://preview.redd.it/ligwgwa9c6kg1.png?width=2095&format=png&auto=webp&s=ac55a9932069f6100f4375a759fb238e97cdbfc8
| 2026-02-18T03:50:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/ | hauhau901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7shtv | false | null | t3_1r7shtv | /r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/ | false | false | 60 | null | |
India’s AI Strategy Takes Shape at AI Impact Summit 2026 | 1 | 2026-02-18T03:49:27 | https://techputs.com/ai-impact-summit-2026/ | jazir555 | techputs.com | 1970-01-01T00:00:00 | 0 | {} | 1r7shbo | false | null | t3_1r7shbo | /r/LocalLLaMA/comments/1r7shbo/indias_ai_strategy_takes_shape_at_ai_impact/ | false | false | default | 1 | null | |
integer based shadow weightless training. | 0 | 2026-02-18T03:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r7sfxb/integer_based_shadow_weightless_training/ | Just-Ad-6488 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7sfxb | false | null | t3_1r7sfxb | /r/LocalLLaMA/comments/1r7sfxb/integer_based_shadow_weightless_training/ | false | false | 0 | null | ||
How to implement separate pre-filling and decoding using Mac Studio and sglang/lmcache | 3 | The goal is to deploy models with int4 quantized weights exceeding 64GB, especially the MOE model.
Locally deployed GPU memory is typically 64GB or less. Deployment costs become expensive when larger models are needed.
I'm willing to sacrifice some inference speed for lower deployment costs. The several minutes' wait for Mac Studio to process a 128k context for the first time is unacceptable. However, a wait of 10-30 seconds is acceptable.
The model weights can be cached in inexpensive, standard DDR4/5 memory and loaded onto the GPU as needed via PCIe. A dedicated pre-filling computation would be performed using a 3090/24GB VRAM device, and the results would be output and managed using sglang/lmcache. Although the computation might require loading weights layer by layer multiple times, this approach could be attractive as long as the overall filling efficiency is significantly higher than the current state of Macs.
Furthermore, a Jetson Orin 64GB exists, offering high computing power but limited memory bandwidth, unsuitable for decoding but suitable for pre-filling.
I haven't purchased the relevant hardware, so this is the only idea I can propose. If you have the relevant hardware and are interested, please discuss whether it's possible to build a more cost-effective local deployment hardware solution that lowers some performance requirements.
The main idea is to use a 512GB Mac to handle key-value caching and decoding, and a dedicated GPU for pre-filling to compensate for the Mac's weaknesses. This allows for multiple weight loadings during pre-filling, trading time for GPU memory space to reduce deployment costs. | 2026-02-18T03:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r7sd26/how_to_implement_separate_prefilling_and_decoding/ | ChinaTopXu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7sd26 | false | null | t3_1r7sd26 | /r/LocalLLaMA/comments/1r7sd26/how_to_implement_separate_prefilling_and_decoding/ | false | false | self | 3 | null |
Entropy-v1: My Take on N8Karma's Genius "Unslopper" | 32 | A few weeks ago, u/N8Karma introduced Unslopper in this community ([post](https://www.reddit.com/r/LocalLLaMA/comments/1qd88v2/i_trained_a_model_to_unslop_ai_prose/)).
For those of you who missed it: "Unslopper" is an LLM fine-tuned to predict human writing from AI slop. The `(human writing, AI slop)` dataset is obtained by asking gpt-4o-mini to "improve" Project Gutenberg passages 10 times, which degrades them into slop.
I am really excited by this idea because it solves the "last mile" problem in many LLM workflows: the LLM output might be factually fantastic, but sounds too robotic/odd to use directly. The Unslopper is just the "post-processing" step needed to make them usable.
So I set out to create an even better version of Unslopper - while the original model is already great, I wanted to make a few tweaks to make the output even more impressive, and to make it efficient to serve as an online service.
1. Switched base model to `gemma-3-27b-it`
* As a dense model, Gemma 3 would be easier to fine-tune with limited data than `Qwen3-VL-30B-A3B-Instruct`.
* I personally believe reasoning CoT is a big part of why AI sounds "different", so I specifically chose a non-reasoning model. As an added bonus, Gemma 3 is known to be very good at creative writing.
2. r=64 LoRA
* I used a LoRA with a relatively high number of trainable parameters to ensure we get all the value from the OG dataset.
3. bf16 fine-tuning
* I fine-tuned the model in its original precision to avoid losing information to quantization. The finished LoRA is merged into the model and quantized to fp8 for efficient serving via vLLM.
All other settings are identical to the OG Unslopper.
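In code, that recipe would look roughly like the following with PEFT + TRL; only r=64, bf16, and the base model come from the description above, and everything else (alpha, dropout, target modules, data path, model-loading class) is my assumption:

```python
# Rough sketch of the described fine-tune: r=64 LoRA on gemma-3-27b-it in bf16.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-27b-it", torch_dtype=torch.bfloat16
)
ds = load_dataset("json", data_files="slop_to_human.jsonl")["train"]  # placeholder path

lora = LoraConfig(
    r=64,                      # the only LoRA value stated above
    lora_alpha=128,            # assumption
    lora_dropout=0.05,         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    peft_config=lora,
    train_dataset=ds,
    args=SFTConfig(output_dir="entropy-v1-lora", bf16=True),
)
trainer.train()
```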
With these changes, my model achieves a **+4.07% ppl** relative improvement compared with the OG Unslopper on a validation set of held-out Project Gutenberg passages.
The model is open source, of course -
Model: [https://huggingface.co/ysong21/entropy-v1-fp8](https://huggingface.co/ysong21/entropy-v1-fp8)
Adapter: [https://huggingface.co/ysong21/entropy-v1-lora](https://huggingface.co/ysong21/entropy-v1-lora)
I also made a web version for people who just want to try it out without needing to set anything up: [https://www.getentropy.ai](https://www.getentropy.ai)
The model is available both through the web interface and an OpenAI-compatible API.
Please let me know what you think! This is just the first step. Next, I am planning to 1) retrain the model with a larger dataset and 2) make lower-bit quants once I get a good calibration dataset.
https://preview.redd.it/1wvab4ze96kg1.jpg?width=2784&format=pjpg&auto=webp&s=3eb194df06f1ae2bd278ab3346c1689ef39b0049
| 2026-02-18T03:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r7sc18/entropyv1_my_take_on_n8karmas_genius_unslopper/ | Intelligent_Coffee44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7sc18 | false | null | t3_1r7sc18 | /r/LocalLLaMA/comments/1r7sc18/entropyv1_my_take_on_n8karmas_genius_unslopper/ | false | false | 32 | null | |
We tested the same INT8 model on 5 Snapdragon chipsets. Accuracy ranged from 93% to 71%. Same weights, same ONNX file. | 62 | We've been doing on-device accuracy testing across multiple Snapdragon SoCs and the results have been eye-opening.
Same model. Same quantization. Same ONNX export. Deployed to 5 different chipsets:
|Device|Accuracy|
|:-|:-|
|Snapdragon 8 Gen 3|91.8%|
|Snapdragon 8 Gen 2|89.1%|
|Snapdragon 7s Gen 2|84.3%|
|Snapdragon 6 Gen 1|79.6%|
|Snapdragon 4 Gen 2|71.2%|
Cloud benchmark reported 94.2%.
The spread comes down to three things we've observed:
1. **NPU precision handling** — INT8 rounding behavior differs across Hexagon generations. Not all INT8 is created equal.
2. **Operator fusion differences** — the QNN runtime optimizes the graph differently per SoC, sometimes trading accuracy for throughput.
3. **Memory-constrained fallback** — on lower-tier chips, certain ops fall back from NPU to CPU, changing the execution path entirely.
None of this shows up in cloud-based benchmarks. You only see it when you run on real hardware.
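Point (1) is easy to demonstrate in isolation. Two INT8 rounding conventions agree on most values but diverge exactly on ties; which convention a given Hexagon generation uses isn't something I've seen documented, so treat this purely as an illustration of the mechanism:

```python
# Ties like 0.5 and 2.5 round differently under the two common conventions.
import numpy as np

x = np.array([0.5, 1.5, 2.5, 3.5, -2.5], dtype=np.float32)  # scale = 1.0

round_half_even = np.round(x)                            # banker's rounding
round_half_away = np.sign(x) * np.floor(np.abs(x) + 0.5)

print(round_half_even)  # [ 0.  2.  2.  4. -2.]
print(round_half_away)  # [ 1.  2.  3.  4. -3.]
```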
Curious if others are seeing similar drift across chipsets — or if anyone has a good strategy for catching this before shipping. Most CI pipelines we've seen only test on cloud GPUs and call it a day. | 2026-02-18T03:34:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r7s5nh/we_tested_the_same_int8_model_on_5_snapdragon/ | NoAdministration6906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7s5nh | false | null | t3_1r7s5nh | /r/LocalLLaMA/comments/1r7s5nh/we_tested_the_same_int8_model_on_5_snapdragon/ | false | false | self | 62 | null |
Built OpenClaw for Windows — 14 native skills, win-whisper runs on your AIPC's NPU | 1 | [removed] | 2026-02-18T03:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r7rn7y/built_openclaw_for_windows_14_native_skills/ | Ok_Drawing_3746 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7rn7y | false | null | t3_1r7rn7y | /r/LocalLLaMA/comments/1r7rn7y/built_openclaw_for_windows_14_native_skills/ | false | false | self | 1 | null |
Open Source LLM for image modification | 1 | I have never done anything even remotely close to this, but is it possible for me to create a local AI that can edit images I feed it based on my prompt and/or other images? It has to produce decent quality for those images too. As I said, I have never done anything close to this, so is this kind of thing even possible locally? | 2026-02-18T03:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r7rh0z/open_source_llm_for_image_modification/ | Main_Dig4020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7rh0z | false | null | t3_1r7rh0z | /r/LocalLLaMA/comments/1r7rh0z/open_source_llm_for_image_modification/ | false | false | self | 1 | null |
K-Splanifolds: Advancing General Purpose Regression with Linear-Time Parametric Spline Manifolds | 2 | I cooked up a new geometric regression algorithm and show that it is a suitable replacement for MLPs. Check out the paper:
https://doi.org/10.5281/zenodo.18673034
What's inside? New research indicates that many representations within LLMs create geometric structures to model language (https://arxiv.org/abs/2601.04480, https://arxiv.org/abs/2510.26745). MLPs store geometric representations in highly inefficient ways, so I say it is time to look for new methods that encode regressions directly in geometry. Enter K-Splanifolds, a fast high-dimensional spline manifold that encodes geometric representations natively and can create representations similar to MLPs with 1/10th the bytes. The paper above includes a number of experiments showing it is a promising technique that could be used as part of a larger system to completely replace the MLP decoders in LLMs. I am looking for feedback from interested researchers, so please find my contacts in the paper or leave a comment.
GLM-5 Technical Report | 232 | Presenting the GLM-5 Technical Report!
http://arxiv.org/abs/2602.15763
After the launch of GLM-5, we’re pulling back the curtain on how it was built. Key innovations include:
- DSA Adoption: Significantly reduces training and inference costs while preserving long-context fidelity
- Asynchronous RL Infrastructure: Drastically improves post-training efficiency by decoupling generation from training
- Agent RL Algorithms: Enables the model to learn from complex, long-horizon interactions more effectively
Through these innovations, GLM-5 achieves SOTA performance among open-source models, with particularly strong results in real-world software engineering tasks. | 2026-02-18T02:51:52 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7r7zr | false | null | t3_1r7r7zr | /r/LocalLLaMA/comments/1r7r7zr/glm5_technical_report/ | false | false | 232 | {'enabled': True, 'images': [{'id': 'phk5j82g36kg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=108&crop=smart&auto=webp&s=3e4195a262aacc5cb282e112719838956cef1ca2', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=216&crop=smart&auto=webp&s=3a757541b69188de018f04a9482c703155949825', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=320&crop=smart&auto=webp&s=80a08fe8a6f4f85fc79495a7c4fb35a2ef609d86', 'width': 320}, {'height': 475, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=640&crop=smart&auto=webp&s=1a4da644c3d3988eba39a218faf8a811456998b3', 'width': 640}, {'height': 712, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=960&crop=smart&auto=webp&s=90b790c490e2f9a09dd5917c0a363aa6011b94b5', 'width': 960}, {'height': 801, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?width=1080&crop=smart&auto=webp&s=70277c1d653de952b0371fc37796c7d04ee57258', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/phk5j82g36kg1.jpeg?auto=webp&s=5e0d859dec4c4e69391b4b328baf6d103b3de381', 'width': 2586}, 'variants': {}}]} | ||
[Project] I built a dedicated "Local RAG" API container (FastAPI + Chroma + Ollama) to replace my dependency on LangChain. | 0 | I've been trying to build a stable "Chat with PDF" pipeline for my local documents, but I found that chaining together LangChain components was getting too bloated and hard to debug.
I wanted a simple, stateless API that I could just `docker-compose up` and forget about.
So I engineered a standalone backend:
* **Ingestion:** Uses `RecursiveCharacterTextSplitter` but optimized for PDF/TXT.
* **Storage:** Persists to a local `ChromaDB` volume (no cloud vector DBs).
* **Inference:** Connects directly to a local Ollama instance (I'm using Llama 3 8B, but it swaps to Mistral easily).
* **API:** Async FastAPI endpoints for `/ingest` and `/chat`.
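For context, here is the rough shape such endpoints take; this is a simplified sketch of the pattern, not a verbatim excerpt from the repo, and the model name is a placeholder:

```python
# Minimal stateless RAG endpoint: Chroma retrieval + a local Ollama call.
import chromadb
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
collection = chromadb.PersistentClient(path="./chroma").get_or_create_collection("docs")

class ChatRequest(BaseModel):
    question: str

@app.post("/chat")
async def chat(req: ChatRequest):
    hits = collection.query(query_texts=[req.question], n_results=4)
    context = "\n\n".join(hits["documents"][0])
    async with httpx.AsyncClient(timeout=120) as http:
        r = await http.post("http://localhost:11434/api/generate", json={
            "model": "llama3:8b",
            "prompt": f"Answer from the context.\n\nContext:\n{context}\n\nQuestion: {req.question}",
            "stream": False,
        })
    return {"answer": r.json()["response"]}
```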
It's running on my GTX 1650 and handling ingestion at about 10 pages/second.
I cleaned up the code and added Pydantic typing for all the requests. Thought this might be useful for anyone else trying to get off the OpenAI drip feed.
**Repo is here:** [https://github.com/UniverseScripts/local-rag-api](https://github.com/UniverseScripts/local-rag-api) | 2026-02-18T02:46:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r7r3jz/project_i_built_a_dedicated_local_rag_api/ | Asterios07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7r3jz | false | null | t3_1r7r3jz | /r/LocalLLaMA/comments/1r7r3jz/project_i_built_a_dedicated_local_rag_api/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=108&crop=smart&auto=webp&s=af708b7b506df4b5c13f644319de9f6ed8006b49', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=216&crop=smart&auto=webp&s=68fc44ea2b96fb785c0b24d642eb70c488ab4621', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=320&crop=smart&auto=webp&s=219b31306bfc15a6407cf46e19260eea3fa6a956', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=640&crop=smart&auto=webp&s=4969d203d4350cf0e53faddeaddade3333a9f6d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=960&crop=smart&auto=webp&s=eac1f876b19bbfc782971704013be142fdf32b62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?width=1080&crop=smart&auto=webp&s=b157300fd412a8f14f91fa57cefb7ff954a02a10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LzkXHXeyoxr86cZ2dud-BX31b12QtVgFJUAF2IzTQUA.png?auto=webp&s=023a8961c5413afc00350c0d4b1a6bc6f9c91f75', 'width': 1200}, 'variants': {}}]} |
okay okay yes... slutty-deepseek-obliterated-6.5-20280512, i will send you another picture of my cock and balls for some more compute credits, fine | 81 | 2026-02-18T02:17:35 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7qfg0 | false | null | t3_1r7qfg0 | /r/LocalLLaMA/comments/1r7qfg0/okay_okay_yes_sluttydeepseekobliterated6520280512/ | false | false | 81 | {'enabled': True, 'images': [{'id': 'nfnbiup6x5kg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=108&crop=smart&auto=webp&s=bd67e7de7e62899724da33842e7e5dc0a5aac6d8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=216&crop=smart&auto=webp&s=4ae842cf173152950851f7682b7b4cdf3179fc19', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=320&crop=smart&auto=webp&s=3036ef82d27f11d3cfe9474c22cfd6383afa962f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=640&crop=smart&auto=webp&s=c77e4e4cb2322b05e4a86a4d99c85dd962cd0267', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?width=960&crop=smart&auto=webp&s=0f8e58fa4664c2e509807fc03163fd11fae61c1e', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/nfnbiup6x5kg1.jpeg?auto=webp&s=4c133b65715fb134af69c69f098512fae976a373', 'width': 1024}, 'variants': {}}]} | |||
GLM-5-Q2 vs GLM-4.7-Q4 | 26 | If you have a machine with 256GB of RAM+VRAM, which model would you prefer?
GLM-4.7-UD-Q4_K_XL is 204.56GB.
GLM-5-UD-IQ2_XXS is 241GB.
Both of them can be run with 150k+ context.
Speed is about the same.
I am going to test their IQ on some questions, and I'll put my results here.
Feel free to put your test results here!
I'm going to ask the same question 10 times for each model: 5 times in English, 5 times in Chinese, since this is a Chinese model and its IQ probably differs between languages.
For a car-wash question:
(I want to wash my car. The car wash is 50 meters away. Should I walk or drive?)
||English|Chinese|
|:-|:-|:-|
|glm-4.7-q4|3 right, 2 wrong|5 right|
|glm-5-q2|wait for me to test|wait for me to test|
| 2026-02-18T02:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r7qdpg/glm5q2_vs_glm47q4/ | Most_Drawing5020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7qdpg | false | null | t3_1r7qdpg | /r/LocalLLaMA/comments/1r7qdpg/glm5q2_vs_glm47q4/ | false | false | self | 26 | null |
okay okay yes... horny deepseek-lewd-6.5-20280512, i will send you a picture of my cock and balls for some extra compute credits | 1 | 2026-02-18T02:12:46 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7qbb2 | false | null | t3_1r7qbb2 | /r/LocalLLaMA/comments/1r7qbb2/okay_okay_yes_horny_deepseeklewd6520280512_i_will/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qzpsp5raw5kg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=108&crop=smart&auto=webp&s=d3a95ff59f719db7e7448c6696ba2f59186cfe6d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=216&crop=smart&auto=webp&s=ee5fd56f6c8d3e70691b8194b4cd037445ce72ba', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=320&crop=smart&auto=webp&s=8f50d3b1cd29e1b26ff8f838424cc8a159a88a0d', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=640&crop=smart&auto=webp&s=c1260291cbf1fde530695ab8388be8f96e60e641', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?width=960&crop=smart&auto=webp&s=37d76099e41f4c4088922d3fc9686639b03049ca', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/qzpsp5raw5kg1.jpeg?auto=webp&s=287923f540eb14581ba7d01a563076e79de7da48', 'width': 1024}, 'variants': {}}]} | |||
David vs Goliath: Building a privacy focused AI meeting notetaker using locally hosted small language models is really hard. 310+ github ⭐ sharing my challenges! | 8 | Hi all, LocalLLaMA is one of those communities I posted in when I developed my first version, and it really helped. So thank you! I maintain an open-source project called **StenoAI**, built on top of locally hosted small language models: Llama 3B, Qwen 8B, Gemma 4B, and DeepSeek 7B. I'm happy to answer questions or go deep on architecture, model choices, and trade-offs as a way of giving back.
The main challenge I'm facing is that the big players like Granola or Fireflies are using few-hundred-billion to 1-trillion-parameter models, whilst I want to get the same summarisation quality from a 7B-parameter model. This is David vs Goliath: I have a 7B sling stone vs. the mountain of OpenAI/Gemini models.
I have been able to get to around 60% of the quality/completeness of these bigger LLMs through intense prompt testing (I ran a direct comparison with Granola). During R&D I was once able to do some multi-processing magic and get up to 80% of Granola's quality, which is crazy.
So my question is: do I keep increasing model sizes to improve quality (which has a hard ceiling, since not everyone has the most powerful Macs, and forget about Windows support), or are there local-LLM tricks I can use to improve quality?
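One trick in the second category is map-reduce summarization, so the 7B model only ever sees a chunk it can handle well. A hedged sketch; the endpoint and model name are placeholders, and the chunking here is deliberately naive:

```python
# Map: summarize transcript chunks in parallel. Reduce: merge the summaries.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="qwen3:8b", messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

def summarize(transcript: str, chunk_chars: int = 6000) -> str:
    chunks = [transcript[i:i + chunk_chars] for i in range(0, len(transcript), chunk_chars)]
    with ThreadPoolExecutor(max_workers=4) as pool:  # the parallel "map" step
        partials = list(pool.map(lambda c: ask(f"Summarize this meeting segment:\n{c}"), chunks))
    return ask("Merge these segment summaries into one set of meeting notes:\n" + "\n---\n".join(partials))
```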
You can check out my GitHub here to contribute in beating Goliath :): [https://github.com/ruzin/stenoai](https://github.com/ruzin/stenoai)
| 2026-02-18T02:05:52 | Far_Noise_5886 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7q5gu | false | null | t3_1r7q5gu | /r/LocalLLaMA/comments/1r7q5gu/david_vs_goliath_building_a_privacy_focused_ai/ | false | false | 8 | {'enabled': True, 'images': [{'id': 'aeupzqo5l5kg1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=108&crop=smart&auto=webp&s=fee95a197c3298da149684517b2967527e455b96', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=216&crop=smart&auto=webp&s=fc9b2f916c8371811090ba83072a704715064c16', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=320&crop=smart&auto=webp&s=7b5dadba4e7c781f7e628c29fe11ce29a71969f0', 'width': 320}, {'height': 446, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=640&crop=smart&auto=webp&s=53e52f9d5c1cf2b19d08d8bd1d28629934cd4430', 'width': 640}, {'height': 670, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=960&crop=smart&auto=webp&s=57d215d7c5590c778c6d026363eac7ce74b02773', 'width': 960}, {'height': 754, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?width=1080&crop=smart&auto=webp&s=a205d70e774049414883ac3a62983a35a282dbb0', 'width': 1080}], 'source': {'height': 1760, 'url': 'https://preview.redd.it/aeupzqo5l5kg1.png?auto=webp&s=55db38a694df40f07931760b0a81b397e108978a', 'width': 2520}, 'variants': {}}]} | ||
Recommended budget-conscious hardware solution? | 1 | Not really understanding the current broader consumer Mac Mini hype craze around OpenClaw, as it seems entirely overpowered for that use case alone.
That said, it did get me thinking: is there a mini-PC-style solution currently on the market that would be at all practical for a reasonably robust local LLM setup? It doesn't even have to be a mini PC, per se; just ideally a small-ish physical footprint that is relatively power efficient (obviously, high-end GPUs are out) and relatively modest in overall build/purchase price (wishful thinking, I'm sure, considering the current state of component prices). Something "good enough" for day-to-day use without feeling too limited, albeit maybe with a little patience required.
What would you personally buy/build to thread that needle? | 2026-02-18T02:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r7q0qb/recommended_budgetconscious_hardware_solution/ | 712Jefferson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7q0qb | false | null | t3_1r7q0qb | /r/LocalLLaMA/comments/1r7q0qb/recommended_budgetconscious_hardware_solution/ | false | false | self | 1 | null |
PrimeIntellect/INTELLECT-3.1 · Hugging Face | 144 | Intellect 3.1 | 2026-02-18T01:43:01 | https://huggingface.co/PrimeIntellect/INTELLECT-3.1 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r7plp1 | false | null | t3_1r7plp1 | /r/LocalLLaMA/comments/1r7plp1/primeintellectintellect31_hugging_face/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=108&crop=smart&auto=webp&s=3d74e28b8c41f88ce6c9255775fc023e543ea81f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=216&crop=smart&auto=webp&s=f0474a4444a83ce3a197d1e51531126a0ebcc838', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=320&crop=smart&auto=webp&s=789dab7f87b00fd49609f72c681fd11ebe1fb043', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=640&crop=smart&auto=webp&s=5b86fe7ea9ee70a3f83ccb5ad48aa01e8fd98f27', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=960&crop=smart&auto=webp&s=bcfa45865b53cb53916ca29d83948b4c8a4eefd6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?width=1080&crop=smart&auto=webp&s=d5051f7da446d2badbcbdaebb784293622ca45ea', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HlIthhd4_MOQ5SPqMHH4aU80ZJQIA0QmPpZBs5Jd5L0.png?auto=webp&s=4d31de475bd2ccb0d3b8d47c5b7c7b847ef2d8ae', 'width': 1200}, 'variants': {}}]} | |
Best model for instruction/code/vision? | 1 | What's the best model for instruction/code/vision? I have a 5090 and 64GB of RAM. I'm running qwen3-coder-next on Ollama at an acceptable speed with offloading to RAM, but vision seems less than mid. Any tweaks to improve vision, or is there a better model? | 2026-02-18T01:40:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r7pjr0/best_model_for_instructioncodevision/ | nosimsol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7pjr0 | false | null | t3_1r7pjr0 | /r/LocalLLaMA/comments/1r7pjr0/best_model_for_instructioncodevision/ | false | false | self | 1 | null |
Kilocode terminal UI is actually crazy good | 0 | I mean, look at that! I decided to try it out after seeing the tons of ads here.
Scrolling is smooth and all details are organized as needed. | 2026-02-18T01:34:58 | Honest-Debate-6863 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7pex9 | false | null | t3_1r7pex9 | /r/LocalLLaMA/comments/1r7pex9/kilocode_terminal_ui_is_actually_crazy_good/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'bs9pjw0qp5kg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=108&crop=smart&auto=webp&s=0495204e0fbee129e8075d46f7a14838fb24330a', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=216&crop=smart&auto=webp&s=4b1807b1d61e89a4cf7afbddaadbe729d6821ec6', 'width': 216}, {'height': 279, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=320&crop=smart&auto=webp&s=b587954dbc6ca61a2a52a531cf390e14ac1bb15d', 'width': 320}, {'height': 559, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=640&crop=smart&auto=webp&s=2454165df9f9631cf92811ba6d038a0d290538cd', 'width': 640}, {'height': 839, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=960&crop=smart&auto=webp&s=9a03aeaf40b9042e29274db595b180178914416a', 'width': 960}, {'height': 944, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?width=1080&crop=smart&auto=webp&s=a7353110f551a625f5eae9fbfc836c368a0ce692', 'width': 1080}], 'source': {'height': 2969, 'url': 'https://preview.redd.it/bs9pjw0qp5kg1.jpeg?auto=webp&s=4cd817650f0fdb49bfcf779379e98d3f04ce2f1d', 'width': 3394}, 'variants': {}}]} | ||
Clawdbot / Moltbot / Openclaw Macmini dashboard | 0 | Made a quick dashboard; it just works.
Helpful for boomers to control the bot and monitor its activity. Try it out.
Ironclaws security architecture is actually interesting because it does things differently from Openclaw | 0 | Been digging into Ironclaw, which is the Rust rewrite of OpenClaw from the NEAR AI team, and the security model is actually worth understanding even if you're not planning to use it.
The core insight is that a TEE protects you from the host, but it doesn't protect you from malicious code running inside the TEE. So basically (as I understand it, correct me if I'm wrong lol), if an AI agent downloads a compromised skill or plugin, the TEE happily executes the attacker's logic with full access to your secrets. That's the problem Ironclaw tries to solve.
Based on their website, their approach is that every tool runs in an isolated WASM sandbox, so if one tool goes rogue it can't touch other tools' creds. And the creds live in an encrypted vault (how?!?) and are injected at the host boundary for specific approved domains only. So they're essentially treating prompt injection as a security risk and not a UX problem. Their leak detection scans outbound requests for credential exfiltration. Sorta like capability-based permissions rather than blanket access? I guess? Still trying to fully understand it since this is so new lol
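My mental model of the leak-detection piece is something like the toy check below; this is purely my illustration of the concept, not Ironclaw's actual code or API:

```python
# Toy outbound filter: block any request whose body contains a vault secret
# unless the destination is on that secret's approved host list.
SECRETS = {
    "sk-live-abc123": {"api.payments.example"},   # secret -> allowed hosts
    "ghp_exampletoken": {"api.github.com"},
}

def check_outbound(host: str, body: str) -> None:
    for secret, allowed_hosts in SECRETS.items():
        if secret in body and host not in allowed_hosts:
            raise PermissionError(f"blocked: credential would leak to {host}")

check_outbound("api.github.com", "token=ghp_exampletoken")  # passes
check_outbound("evil.example", "token=ghp_exampletoken")    # raises
```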
Seems particularly relevant since slow mist reported over 340 malicious skills on clawhub this week. So anyone tried it yet? I wonder how the WASM sandboxing performs compared to Docker containers for agent isolation. | 2026-02-18T01:24:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r7p5nz/ironclaws_security_architecture_is_actually/ | Significant-Cod-9936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7p5nz | false | null | t3_1r7p5nz | /r/LocalLLaMA/comments/1r7p5nz/ironclaws_security_architecture_is_actually/ | false | false | self | 0 | null |
Local-First. Sub-Millisecond RAG – 0.84ms vector search, zero cloud dependencies. Your Agents remember everything | 4 | Every RAG solution requires either cloud APIs (Pinecone/Weaviate) or running a database locally (ChromaDB/Qdrant). I wanted what SQLite gave us: import a library, open a file, query. Except for multimodal content at GPU speed on Apple Silicon.
So I built **Wax** – a pure Swift RAG engine for truly local AI apps.
**Why this exists**
Your local LLM shouldn't need cloud infrastructure for RAG. Your users' data shouldn't leave their device for semantic search. And on Apple Silicon, that Neural Engine and GPU should be doing the vector search – not grinding through CPU-bound operations.
https://preview.redd.it/l3d8kmcyk5kg1.png?width=1242&format=png&auto=webp&s=8459ff52816b737bafb1894f1dcc446a555e8f36
**What makes it work**
**Metal-accelerated vector search**
Embeddings live in unified memory (MTLBuffer). Zero CPU-GPU copy overhead. Adaptive SIMD4/SIMD8 kernels + GPU-side bitonic sort = **0.84ms** searches on 10K+ vectors.
That's **\~125x faster than CPU** (105ms) and **\~178x faster than SQLite FTS5** (150ms).
This enables interactive search UX that wasn't viable before.
**Atomic single-file storage**
Everything in one crash-safe binary (.mv2s): embeddings, BM25 index, metadata, compressed payloads.
* Dual-header writes with generation counters = kill -9 safe
* Sync via iCloud, email it, commit to git
* Deterministic file format – identical input → byte-identical output
**Query-adaptive hybrid fusion**
Four parallel search lanes: BM25, vector, timeline, structured memory.
Lightweight classifier detects intent:
* "when did I..." → boost timeline
* "find docs about..." → boost BM25
Reciprocal Rank Fusion with deterministic tie-breaking = identical queries always return identical results.
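For readers unfamiliar with RRF, the mechanic is small enough to show inline. A generic Python illustration with a deterministic tie-break, not Wax's actual Swift implementation:

```python
# Reciprocal Rank Fusion over several ranked lanes, with reproducible ordering.
def rrf(ranked_lanes: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for lane in ranked_lanes:
        for rank, doc_id in enumerate(lane):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Sort by fused score; break ties lexically so identical queries
    # always return identical results.
    return sorted(scores, key=lambda d: (-scores[d], d))

bm25_lane = ["receipt.jpg", "menu.pdf", "note.txt"]
vector_lane = ["menu.pdf", "receipt.jpg", "photo.heic"]
print(rrf([bm25_lane, vector_lane]))
```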
**Photo/Video RAG**
Index your Photo Library with OCR, captions, GPS binning, per-region embeddings.
Query *"find that receipt from the restaurant"* → searches text, visual similarity, and location simultaneously.
* Videos segmented with keyframe embeddings + transcript mapping
* Results include timecodes for jump-to-moment navigation
* All offline – iCloud-only photos get metadata-only indexing
**What makes this different**
* **Zero cloud dependencies** – No API keys, no vendor lock-in, no telemetry
* **Truly local** – Everything runs on-device, data never leaves the machine
* **Metal-accelerated** – Actually uses Apple Silicon's GPU instead of CPU-bound search
* **Multimodal native** – Text, photos, videos indexed with shared semantics
* **Sub-millisecond search** – Enables real-time RAG workflows
**Performance (Apple Silicon, Feb 2026)**
* 0.84ms vector search at 10K docs (Metal, warm cache)
* 9.2ms first-query after cold-open
* \~125x faster than CPU (105ms), \~178x faster than SQLite FTS5 (150ms)
* 17ms cold-open → first query overall
* 10K ingest in 7.8s (\~1,289 docs/s)
* 103ms hybrid search on 10K docs
**Status**
Storage format and search pipeline are stable. API surface is early but functional.
Built for developers running local LLMs who want RAG without cloud infrastructure.
**GitHub:** [https://github.com/christopherkarani/Wax](https://github.com/christopherkarani/Wax)
⭐️ if you're tired of cloud APIs for what should be a library call. | 2026-02-18T01:09:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r7otbt/localfirst_submillisecond_rag_084ms_vector_search/ | karc16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7otbt | false | null | t3_1r7otbt | /r/LocalLLaMA/comments/1r7otbt/localfirst_submillisecond_rag_084ms_vector_search/ | false | false | 4 | null | |
so why Reddit are not aloud to post about my project ??? | 0 | I just want to share what I did, and Reddit keeps deleting my post. I created something that everyone needs, but how can I share it with the community without receiving sarcasm and hate?? If you want to see it, I will not say any more, but there is resonantgenesis with .xyz | 2026-02-18T01:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r7oqxh/so_why_reddit_are_not_aloud_to_post_about_my/ | louienemesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7oqxh | false | null | t3_1r7oqxh | /r/LocalLLaMA/comments/1r7oqxh/so_why_reddit_are_not_aloud_to_post_about_my/ | false | false | self | 0 | null |
MCP Directory - 181 servers for Claude Desktop, Cursor, and other MCP clients | 0 | Made a directory for Model Context Protocol servers. Might be useful for those of you running local models with MCP support or using it with Claude
Stats:
- 181 servers indexed
- 22 categories (databases, DevOps, browser automation, etc.)
- 89 official servers from Anthropic's MCP team
| 2026-02-18T01:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r7oqg1/mcp_directory_181_servers_for_claude_desktop/ | Last_Trouble9552 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7oqg1 | false | null | t3_1r7oqg1 | /r/LocalLLaMA/comments/1r7oqg1/mcp_directory_181_servers_for_claude_desktop/ | false | false | self | 0 | null |
Do Your Agents Ever Loop Forever? | 2 | Built a side project this weekend for myself.
It is a simulator that lets you test your agent before deploying it in the real world. It runs a simple crash test on an agent and detects one common failure: infinite loops.
When it finds a loop, it shows where it got stuck and suggests practical fixes like adding a finalizer step, dedupe keys, or hard stop rules.
It detects looping by tracking step/time budgets and repeated tool-call patterns that cycle without progress.
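The cycle check is simple enough to sketch; a toy version of the idea (mine, not the product's code) hashes a sliding window of (tool, args) calls and flags a run when the same window signature keeps repeating:

```python
# Flag a run as looping when the same short tool-call pattern recurs.
from collections import Counter

def detect_loop(calls: list[tuple[str, str]], window: int = 3, limit: int = 3) -> bool:
    seen: Counter = Counter()
    for i in range(len(calls) - window + 1):
        sig = tuple(calls[i:i + window])
        seen[sig] += 1
        if seen[sig] >= limit:  # same 3-call pattern seen 3 times: stuck
            return True
    return False

trace = [("search", "q=x"), ("fetch", "url=a"), ("parse", "a")] * 4
print(detect_loop(trace))  # True
```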
I honestly don’t know how painful this problem is for most of you.
For me, debugging loops was annoying enough to build this.
If this sounds useful, I'm happy to share access. You can DM me or just comment "Test".
Did I miss something ? | 0 | I thought DeepSeek was supposed to come out today | 2026-02-18T01:03:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r7oo7u/did_i_miss_something/ | Opening-Ad6258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7oo7u | false | null | t3_1r7oo7u | /r/LocalLLaMA/comments/1r7oo7u/did_i_miss_something/ | false | false | self | 0 | null |
I built an open-source AI secretary ExSecAI that runs on your machine, with any LLM models - tasks are created with markdown files. It has a heartbeat (like openclaw) and you can tell it to do anything with tools, skills and agents. You gotta try the Live Assist mode! Most fun (OSS, MIT) | 1 | https://preview.redd.it/fbz5unwgj5kg1.png?width=2324&format=png&auto=webp&s=9064f95019ce1b5503fed85ca4e07893a9bf55ad
I've been building this thing for a while now and finally open-sourced it. Figured some of you might find it useful. It is built on top of Pi and it is OSS, MIT.
ExSecAI: your AI executive secretary with Live Assist and voice-transcription-to-action mode.
I wanted an AI assistant that actually does things autonomously, not just chat. Something I could leave running and come back to find completed research reports, scheduled emails, generated spreadsheets, all without babysitting it.
Think of it like an OpenClaw you can actually control and see what it does. Full dashboard, full visibility into every task, every agent action, every output. No black box.
ExSecAI is a self-hosted agent orchestration platform built on top of Pi, written in TypeScript. You write tasks as markdown files, drop them in a folder, and a supervisor picks them up and runs them through AI agents. Each task gets an agent role (researcher, writer, analyst, etc.), access to tools, and a full execution lifecycle with evaluation and auto-retry.
It has a dashboard: a full web app with:
- Live Assist mode (what I'm having the most fun with): real-time interactive sessions where you talk to the AI and it produces structured action cards, with research, task creation, whatever else you want. You can pause, resume, and pick different agent roles mid-session. I envision real conversations, or doctor consultations with real-time feedback and context expansion, completely hands-off and proactive.
- Voice input with VAD: tap the mic, talk, it transcribes and sends. Silero voice activity detection handles the "when did they stop talking" problem. I access it from my phone over Tailscale when I'm away from my desk.
- Built-in web terminal: chat with the AI in one tab, check its work in an actual terminal in the other. No switching windows. Open Claude Code, Opencode, Pi, whatever you want, right on the web, anywhere in the world, through Tailscale.
- Cron scheduler: set any task to recur. `Schedule: daily 08:00` in the frontmatter and it runs every morning. There's a heartbeat task that self-monitors the whole system.
- Ralph loops: this is the weird one. Write a PRD with user stories, point Ralph at it, and it iterates through each story autonomously: implement, test, commit, next story. I've had it build small projects from scratch while I sleep. I just wanted to have all the bells and whistles... and I'll keep iterating on it.
- Telegram bot: chat with your agents from Telegram, in DMs or group chat. I use this to send quick tasks when I'm on my phone.
- 20+ Python skills: Excel, Word, PDF, PowerPoint, data visualization, web scraping, social media tools. Agents invoke them as needed. I'm just more familiar with Python... Some of these tools are still broken and will be fixed with time, but most of them work wonderfully well.
Where other agent platforms give you autonomy but zero visibility (you kick off a task and pray), ExSecAI gives you a dashboard where you can watch the agent work in real time, see every tool call, inspect every output file, and intervene if needed. Autonomous when you want it, interactive when you need it.
It runs on Node.js and uses the Pi Coding Agent SDK under the hood (which supports Claude Code, Antigravity, and a few other OAuth logins, plus other providers through extensions). There's a NanoGPT extension included that makes tool calling work with cheap models like Kimi K2.5, Qwen, DeepSeek, etc. through a cheap account. I've spent about a day on this, collecting fixes from all over the internet, so now I can do tool calling on K2.5, 4.7, M2.1, and all the SOTA open-source models out there, even on non-cheap inference servers with broken transformers.
Local models on LM Studio, like GPT-OSS-20B, work wonderfully well! It does 95% of what I need on a daily basis through ExSecAI.
Easy install:
```
npm install -g exsecai
mkdir my-secretary && cd my-secretary
exsecai init
exsecai start
```
Docker works too. MIT licensed. Although I couldn't make Pi OAuth work through Docker... regular API keys/endpoints should work fine, though.
I'm a solo dev on this, so there are MANY rough edges. But the core loop (drop task, agent runs, get output) has been solid for me for months. The dashboard and Live Assist are the real quality-of-life win.
Another super cool feature, try it: press and hold (on mobile) or click and hold (on desktop) the chat/mic icon in the bottom-right quadrant, and you'll start transcribing right away. Your message is sent to the agent as soon as you release. Just a quick accessibility tool that comes in quite handy when I'm driving or on the road (although I know I shouldn't). Try it and tell me if you like it.
Last cool thing: I've jumped through several hoops to make transcription work on desktop Chrome, iPhone (iOS in general), and Android. Each of these had its own quirks, but everything is working (at least one of the 5 STT methods works on each of these systems).
Would love feedback. Please try it and give me your feedback. Supporters and contributors are welcome!
- GitHub: https://github.com/sermtech/ExSecAI
- npm: https://www.npmjs.com/package/exsecai | 2026-02-18T01:02:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r7ons3/i_built_an_opensource_ai_secretary_exsecai_that/ | FigZestyclose7787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7ons3 | false | null | t3_1r7ons3 | /r/LocalLLaMA/comments/1r7ons3/i_built_an_opensource_ai_secretary_exsecai_that/ | false | false | 1 | null |
I built Galactic AI — open source automation suite with 72 tools, Ollama support, browser automation, and a web control deck | 1 | # Galactic AI v0.6.0-Alpha

**Sovereign. Universal. Fast.**

A powerful, local-first AI automation platform with 72 built-in tools, browser automation, multi-provider LLM support, and a real-time web control deck.

---

## Downloads

| Platform | File | Size |
|----------|------|------|
| **Windows** | `Galactic-AI-v0.6.0-windows.zip` | 91 KB |
| **macOS** | `Galactic-AI-v0.6.0-macos.zip` | 91 KB |
| **Linux** | `Galactic-AI-v0.6.0-linux.tar.gz` | 83 KB |
| **Source (all platforms)** | `Galactic-AI-v0.6.0.zip` | 91 KB |

> Verify integrity with `SHA256SUMS.txt` included in the release.

---

## Quick Start

### Windows

```powershell
# Extract the ZIP, then:
cd Galactic-AI
.\install.ps1   # Install dependencies
.\launch.ps1    # Start Galactic AI
```

### macOS / Linux

```bash
# Extract the archive, then:
cd Galactic-AI
chmod +x install.sh launch.sh
./install.sh    # Install dependencies
./launch.sh     # Start Galactic AI
```

Then open **http://127.0.0.1:17789** — the Setup Wizard walks you through configuration.

---

## Highlights

- **72 built-in tools** — browser automation, file system, shell, web search, vision, memory, scheduling, and more
- **5 AI providers** — Google Gemini, Anthropic Claude, xAI Grok, NVIDIA AI, Ollama (local)
- **100% local mode** — run completely offline with Ollama, no API keys needed
- **Web Control Deck** — full chat, status telemetry, tool browser, plugin manager, memory editor, Ollama hub, live logs
- **Telegram bot** — control everything from your phone
- **56 browser actions** — Playwright-powered with Chromium/Firefox/WebKit, network interception, session management, tracing
- **ReAct agentic loop** — AI reasons, acts, observes, and chains tool calls autonomously
- **Streaming responses** — real-time token streaming from all providers
- **Smart model routing** — auto-selects the best model for each task type
- **Graceful shutdown** — single Ctrl+C, clean exit, no error tracebacks
- **First-Run Setup Wizard** — 5-step graphical configuration in the browser

---

## Requirements

- Python 3.10+
- Ollama (optional, for local models)

See `README.md` inside the archive for full installation instructions.

---

## SHA256 Checksums

```
b882282642743a55f0ca2e188179774ae73f5b26a5151381d3c86b2c73643f87  Galactic-AI-v0.6.0-windows.zip
b882282642743a55f0ca2e188179774ae73f5b26a5151381d3c86b2c73643f87  Galactic-AI-v0.6.0-macos.zip
e9badaa70688dbf49681e80200a167e690fc6b693c41f03e435ef42c7b51d576  Galactic-AI-v0.6.0-linux.tar.gz
b882282642743a55f0ca2e188179774ae73f5b26a5151381d3c86b2c73643f87  Galactic-AI-v0.6.0.zip
```
| 2026-02-18T00:47:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r7ob6e/i_built_galactic_ai_open_source_automation_suite/ | Longjumping_Set_1374 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7ob6e | false | null | t3_1r7ob6e | /r/LocalLLaMA/comments/1r7ob6e/i_built_galactic_ai_open_source_automation_suite/ | false | false | self | 1 | null |
Write assembly language that runs on an LLM | 2 | Hi LocalLLaMA!
I thought it would be fun to share what I've been working on:
[https://github.com/HuyNguyenAu/assembly_language_for_agents](https://github.com/HuyNguyenAu/assembly_language_for_agents)
Imagine writing code that operates on semantics or vibes:
```
; PROGRAM: VIBE_CONTROLLER.aasm
; Objective: Adjust room environment based on subjective user vibe.
START:
; Initialise State
LF X1, "room_sensors.json" ; Load current state: {temp: 18C, lights: 6000K, music: Off}
LI X2, "Make it more warm." ; Load the user's vague complaint
; Load the user's desired vibe
LI X3, "Goal: Warm, inviting, comfortable, relaxed."
; The Cognitive Operation
APP X4, X2, X3 ; Apply the user's complaint and goal to generate a new state for the room.
; Predict the new state of X1 (Sensors) given X4 (Complaint + Goal).
; The LLU calculates: "Sterile" (Cold/White) -> Needs Warmer Temp + Warmer Light.
INF X5, X1, X4
; X5 now holds the generated JSON: {temp: 22C, lights: 2700K, music: "LoFi Jazz"}
; Safety Guardrail
; Ensure that the generated state (X5) is aligned with safety rules (X6).
LI X6, "Constraint: Max Temp 23C. No Music if time > 11PM."
INT X7, X5, X6 ; X7 stores 100 if safe, 0 if unsafe.
; Branching Logic
LI X8, 0
BGT X7, X8, EXECUTE ; If the plan passed the safety check (X7 > 0), jump to execute

HANDLER:
LI X8, "{error: 'Request conflicts with safety protocols.'}"
OUT X8
EXIT

; Execute
EXECUTE:
OUT X5 ; Send new config to IoT Hub
EXIT
```
Suddenly we have a way to code agents without large, complex prompts. This project uses `llama.cpp` as the backend.
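For a sense of what a "cognitive opcode" might look like under the hood, here is a rough sketch of my own (not the repo's actual code) of an instruction like `INF` dispatching to llama.cpp's OpenAI-compatible `llama-server`; the register layout, prompt wording, and endpoint URL are all assumptions:

```python
# Hypothetical sketch of a cognitive opcode dispatching to llama-server.
# Register layout, prompt format, and URL are illustrative assumptions.
import json
import urllib.request

REGISTERS: dict[str, str] = {}

def op_li(dst: str, literal: str) -> None:
    """LI: load an immediate value into a register."""
    REGISTERS[dst] = literal

def op_inf(dst: str, state_reg: str, ctx_reg: str,
           url: str = "http://localhost:8080/v1/chat/completions") -> None:
    """INF: ask the LLM to predict a new state from two register values."""
    prompt = (f"Current state:\n{REGISTERS[state_reg]}\n\n"
              f"Context / instruction:\n{REGISTERS[ctx_reg]}\n\n"
              "Return only the predicted new state as JSON.")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        REGISTERS[dst] = json.load(resp)["choices"][0]["message"]["content"]

op_li("X1", '{"temp": "18C", "lights": "6000K", "music": "Off"}')
op_li("X4", "Make it warmer and more inviting.")
op_inf("X5", "X1", "X4")  # X5 now holds the model's predicted room state
```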
I would love to see what new ideas and programs you guys come up with!
PS: I wasn't sure which flair this belongs under. Other or resources? | 2026-02-18T00:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r7nw6r/write_assembly_language_that_runs_on_an_llm/ | HuygenAu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7nw6r | false | null | t3_1r7nw6r | /r/LocalLLaMA/comments/1r7nw6r/write_assembly_language_that_runs_on_an_llm/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=108&crop=smart&auto=webp&s=1dcfa67f72ec43ecdd34e27498d256a417ecec3f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=216&crop=smart&auto=webp&s=484e546975796775b1b8fa9f987a669f2fc7240f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=320&crop=smart&auto=webp&s=77891f00610e244d8b9765a98245dcb02585ab77', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=640&crop=smart&auto=webp&s=c1c13ad0642478a3c40923ba485085334cf45178', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=960&crop=smart&auto=webp&s=d3c31bddfba04ba9925b2118c7fe5b0acb90de02', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?width=1080&crop=smart&auto=webp&s=14b5a0c5bf4d8c4cc7bb81f83edaf27c2755c1f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hzjKhQZHebaHQdN0_WKDvREYrSCqre69c98oAkJ0lYw.png?auto=webp&s=483d236ddf1a1e421e387a280c7b944b9c75395b', 'width': 1200}, 'variants': {}}]} |
GreedyPhrase: A greedy phrase-based tokenizer that achieves 1.21x - 1.23x better compression than GPT-4 tiktoken, with a 1.5-3x smaller vocabulary, and 6-11x higher encoding throughput [OC] | 3 | 2026-02-18T00:21:02 | https://github.com/rayonnant-ai/greedyphrase | reditzer | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r7npbi | false | null | t3_1r7npbi | /r/LocalLLaMA/comments/1r7npbi/greedyphrase_a_greedy_phrasebased_tokenizer_that/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=108&crop=smart&auto=webp&s=2559776477e42250b449170381a10eb320e36f79', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=216&crop=smart&auto=webp&s=7f9c876497d35708e02a9e2ecaec29f6d23b7175', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=320&crop=smart&auto=webp&s=c606630f4626e272aaf979315935c5beb0f00011', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=640&crop=smart&auto=webp&s=326a7e481d663d835edf55fbeb6f47c7c3523835', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=960&crop=smart&auto=webp&s=e4a57d45edbae51351255659e9e274efa0ec8f9f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?width=1080&crop=smart&auto=webp&s=a243a1655834b36336d2efd5f40323d72b2c680c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q331BKU6_ce9i5ck1S7o6sttXD-1CB6GRLl6wKhTEvI.png?auto=webp&s=2a44233c0f550b23a5b4e13e3e9943dea41e736d', 'width': 1200}, 'variants': {}}]} | ||
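The linked repo isn't excerpted in this row, but for context, "greedy phrase-based" tokenization generally means longest-match-first over a phrase vocabulary. A toy sketch of that general idea (not necessarily GreedyPhrase's actual algorithm; the vocabulary below is made up):

```python
# Minimal sketch of greedy longest-match phrase tokenization.
def greedy_encode(text, vocab, max_phrase_len=8):
    """Scan left to right, always consuming the longest phrase in the vocab."""
    ids, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_phrase_len), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            ids.append(vocab.get(text[i], 0))  # fall back to single char / unk
            i += 1
    return ids

vocab = {"the ": 1, "quick ": 2, "brown ": 3, "fox": 4}
print(greedy_encode("the quick brown fox", vocab))  # [1, 2, 3, 4]
```

Longer phrases in the vocabulary are what buy the better compression ratio: each multi-word match replaces several BPE-style subword tokens with a single ID.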
The real OpenClaw debate nobody is talking about: It's not about what it can do. It's about whether you can afford to run it. | 0 | I finally drank the Kool-Aid last week. Spent three days setting up OpenClaw on a VPS, connected Telegram, configured memory, the whole thing. Woke up this morning to check what my persistent AI agent had accomplished overnight.
It had spent $47 on API credits organizing a folder structure I didn't ask for and sending me 12 motivational quotes.
Here's what I've learned from the trenches and from stalking every OpenClaw thread on here:
The people who love it are using it for one specific thing, not "everything." The guy using it to auto-summarize YouTube videos into his knowledge base? Thriving. The person who wants it to be their CEO, therapist, and personal chef simultaneously? Broke and frustrated.

The catch nobody mentions: OpenClaw is a hungry beast. You need serious model firepower. Running it on cheap models means it forgets what it's doing mid-task, half-completes things, and asks you to manually fix stuff the agent should be handling. One user burned through $250 in API credits just getting it installed before it did anything useful.

The sweet spot I'm seeing? Pick ONE model and commit. No fallbacks. No "clever" routing. Claude Opus for setup, then switch to something cost-effective for the daily grind.
But here's my actual question for the people who've been running this for a while:
What's the one thing your OpenClaw instance does that you couldn't live without now? Not the hype list. The boring, real thing that actually stuck.
Because right now mine is really good at draining my API credits and not much else. | 2026-02-18T00:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r7no5i/the_real_openclaw_debate_nobody_is_talking_about/ | Idealounge24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7no5i | false | null | t3_1r7no5i | /r/LocalLLaMA/comments/1r7no5i/the_real_openclaw_debate_nobody_is_talking_about/ | false | false | self | 0 | null |
Just compared some models, and GPT 5.1 High seems to be the smartest | 0 | I tried it on computer science questions this afternoon, and 5.1 High thinks for much longer, has much slower token/s generation, and gives far bigger, more in-depth, and more precise answers than any other open- or closed-source SOTA model.

-> It seems to be the best choice of model if you want to learn technical stuff in depth.

Have some of you also experienced that it thinks more and is way smarter than other models? | 2026-02-18T00:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r7neas/just_compared_some_models_and_gpt_51_high_seem_to/ | Individual-Source618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7neas | false | null | t3_1r7neas | /r/LocalLLaMA/comments/1r7neas/just_compared_some_models_and_gpt_51_high_seem_to/ | false | false | self | 0 | null |
SOTA tool-calling architecture? | 3 | Hi all, I'm working on a browser agent which runs locally (in a sandboxed Chromium) that runs "tasks"--repeatable or one-shot jobs where it could do stuff in the browser, a quarantined folder, send notifications, etc. The model driving it can either be local or remote (Mistral-Instruct works great on my RTX 3090, but Kimi K2.5 is pretty incredible given its price-per-token).
I know Claude has popularized just kind of YOLOing bash scripts (hence OpenClaw, etc.), and I'm wondering if there are any other alternatives. I'd like to build a system that's generalizable, easily extensible and not computationally complex.
The entire product is kind of predicated on making the right tool calls at the right time, including information recall (which is another tool), or knowledge-base-recall (e.g. datetime, whereami, etc. which are yet other tools).
Right now, I'm essentially doing context reentrancy, where you're replacing a certain token "READ(myfile.txt)" with the tool output, but I'm not sure what the current state of the art is and wanted to ask around. | 2026-02-18T00:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r7n9bn/sota_toolcalling_architecture/ | davvv_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7n9bn | false | null | t3_1r7n9bn | /r/LocalLLaMA/comments/1r7n9bn/sota_toolcalling_architecture/ | false | false | self | 3 | null |
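For what it's worth, the token-replacement loop described here can be sketched in a few lines. The `READ(...)` marker comes from the post itself; the loop shape, the round limit, and the `generate` callable are my assumptions:

```python
# Minimal sketch of the "context re-entrancy" pattern: scan model output for
# a tool marker like READ(path), splice in the tool result, generate again.
import re
from typing import Callable

TOOL_PATTERN = re.compile(r"READ\(([^)]+)\)")

def read_tool(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

def reenter(context: str, generate: Callable[[str], str], max_rounds: int = 8) -> str:
    """Expand tool markers and re-prompt until the model emits none."""
    for _ in range(max_rounds):
        out = generate(context)
        m = TOOL_PATTERN.search(out)
        if m is None:
            return out
        # Replace the marker with the tool output, feed the result back in
        expanded = out[:m.start()] + read_tool(m.group(1)) + out[m.end():]
        context = f"{context}\n{expanded}"
    return out
```

The round limit matters in practice: without it, a model that keeps emitting markers (or re-emitting the same one) will loop forever.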
Self-hosted claude swarm running on the cloud and surviving restarts | 0 | 2026-02-18T00:00:11 | https://github.com/simonstaton/ClaudeSwarm | rushuk | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r7n75c | false | null | t3_1r7n75c | /r/LocalLLaMA/comments/1r7n75c/selfhosted_claude_swarm_running_on_the_cloud_and/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=108&crop=smart&auto=webp&s=740d3d63dcea367eef3d72e8ffe567ba2a147ad7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=216&crop=smart&auto=webp&s=8b6573da1f911e31a8cf47275a5e251e68c10be3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=320&crop=smart&auto=webp&s=5c02df3a2c05ebecd5a2de4804af1d67e1aa36c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=640&crop=smart&auto=webp&s=3e338e5182eb380aaed35f47b8a62b893a630a4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=960&crop=smart&auto=webp&s=1fc82d70d6eadc271dcbb242f30f14b3517f26ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?width=1080&crop=smart&auto=webp&s=6ff1243893a97d9b10716e946fe67a808c5ef89b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yMYTwZx7Zc2pxk2CpwspL4qjJ7rBtSH6w6uu2yalJn0.png?auto=webp&s=bdf4adbc7fcfe165ce8e00df7ca1e4f95b0a4821', 'width': 1200}, 'variants': {}}]} | ||
Serious question — why would anyone use Tiny-Aya instead of Qwen/Phi/Mistral small models? | 6 | I’m trying to understand the point of Tiny-Aya.
It’s ~3B parameters, doesn’t focus on reasoning, not really agent-oriented, and there’s no obvious capability demo (coding, tool use, planning, etc).
Meanwhile we already have small models like:
- Qwen-3 4B
- Phi-3/4
- Mistral small
- Llama 3 8B
These can reason, plan, call tools, and act as agents.
So from a developer perspective:
Why would I pick Tiny-Aya?
If I want:
local inference → other small models exist
agents → reasoning models seem better
assistants → larger chat models exist
The only thing I see mentioned is multilingual + alignment, but is that actually a deciding factor in real products?
I’m not trying to bash the model — I genuinely don’t understand the niche.
Is this meant for a specific architecture?
A specific region?
A front-end layer for agents?
Or just academic multilingual research?
Curious how people here would realistically use it in a system. | 2026-02-17T23:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r7n3ca/serious_question_why_would_anyone_use_tinyaya/ | Deep_190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7n3ca | false | null | t3_1r7n3ca | /r/LocalLLaMA/comments/1r7n3ca/serious_question_why_would_anyone_use_tinyaya/ | false | false | self | 6 | null |
I trained a language model on CPU in 1.2 hours with no matrix multiplications — here's what I learned | 273 | Hey all. I've been experimenting with tiny matmul-free language models that can be trained and run entirely on CPU. Just released a paper and the model.
Model: [https://huggingface.co/changcheng967/flashlm-v3-13m](https://huggingface.co/changcheng967/flashlm-v3-13m)
Quick stats:
* 13.6M parameters, d\_model=256
* Ternary weights ({-1, 0, +1}) — inference is just adds and subtracts, no multiplies
* Trained on 2-thread CPU, no GPU, 1.2 hours
* 32M tokens from FineWeb-Edu
* Validation loss: 6.80
* Uses frozen GPT-2 embeddings (SVD projected) so it doesn't waste training time learning an embedding table
The model produces grammatical-ish English but with zero coherence — it's learned syntax but not semantics. For 1.2 hours on a CPU, I'll take it.
The biggest surprise was that 86% of training time was spent on the output layer (projecting 256 dims to 50,257 vocab). The entire matmul-free ternary core only got 14% of compute. So the "efficient" part of the model was essentially starved of training signal by the inefficient softmax head.
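To make the "just adds and subtracts" point concrete, here is a tiny numpy sketch (mine, not the repo's code) of a ternary matvec next to the dense output head, using the post's dimensions; the random weights are purely illustrative:

```python
# Ternary (matmul-free) layer vs. dense softmax head, at the post's shapes.
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 256, 50257

W_core = rng.integers(-1, 2, size=(d_model, d_model))  # ternary {-1, 0, +1}
W_head = rng.standard_normal((vocab, d_model))          # dense output head

x = rng.standard_normal(d_model)

# Ternary layer: select-and-sum instead of multiply
h = (x * (W_core == 1)).sum(axis=1) - (x * (W_core == -1)).sum(axis=1)

# Output head is a full dense matmul over the vocab: ~196x the
# multiply-accumulates of one core layer (50257*256 vs 256*256)
logits = W_head @ h
```

At these shapes the head does roughly 200x the work of one core layer, which is consistent with the head dominating training time.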
Working on v4 that replaces the softmax with a hierarchical tree structure to fix this bottleneck. If it works, it should allow 5-10x more effective training in the same wall clock time.
Code is MIT licensed. Would love feedback from anyone else working on tiny/efficient models. | 2026-02-17T23:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r7mscr/i_trained_a_language_model_on_cpu_in_12_hours/ | Own-Albatross868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7mscr | false | null | t3_1r7mscr | /r/LocalLLaMA/comments/1r7mscr/i_trained_a_language_model_on_cpu_in_12_hours/ | false | false | self | 273 | {'enabled': False, 'images': [{'id': 'At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=108&crop=smart&auto=webp&s=c91ac5836333ae97209a632a84a4e26e873d7706', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=216&crop=smart&auto=webp&s=47033106b546782e43ae90af21e79917960df0b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=320&crop=smart&auto=webp&s=793fb99f0103062c54b0e2baec23d45c2b6a868a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=640&crop=smart&auto=webp&s=ead260a61df59138ceb58cb0c728867ff328bd48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=960&crop=smart&auto=webp&s=a5d13a93eb5844fa4dc8726c1c0aeb24f690797e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?width=1080&crop=smart&auto=webp&s=8f4a3336f9d1b68e37dadac61ab6ec928824f00a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/At15Axm24Ga0Gr2LhVPQDqPimzw0xBtibeQK5YTstq0.png?auto=webp&s=cd386f6aa8430872b6f2e2152579c53ee047965b', 'width': 1200}, 'variants': {}}]} |
Your vibe coding codebase is a disaster... This is Code Visualizer, which you must have to help you make a real product. | 0 | Just analyzed a 600-file codebase in 30 seconds... 15,091 functions, 3,928 API endpoints, 52,214 connections. Experience this magic for vibe coders and for those working with OpenClaw AI autonomous agents... it's insane. Now you get superpowers over your codebase, and no one can say it's AI or vibe coding. Did anyone else try it, or am I the only one?
https://preview.redd.it/zqlohghx35kg1.png?width=2862&format=png&auto=webp&s=c72fbd92aeba74112a62f9356de32e0425a74ca0
https://preview.redd.it/2glx8ihx35kg1.png?width=2868&format=png&auto=webp&s=2e9b7e1b76e966c2370490b54c9fefa7562403ab
| 2026-02-17T23:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r7mkx3/your_vibe_coding_codebase_is_a_disaster_this_is/ | louienemesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7mkx3 | false | null | t3_1r7mkx3 | /r/LocalLLaMA/comments/1r7mkx3/your_vibe_coding_codebase_is_a_disaster_this_is/ | false | false | 0 | null | |
AnyLoom Stack Lets YOU control your data | 1 | I just got it dialed in on my machine and it’s a game-changer for a local setup.
It uses **AnythingLLM** as the front end, but the back end is where it gets interesting—it’s a **dynamic topology agent swarm**. Basically, the agents reconfigure how they talk to each other based on what you’re doing. I’ve got it running **Llama models** in a **llama.cpp Docker container**, and with **CUDA** finally stable on the **5090**, I’m hitting over **150 tokens per second**. It’s basically instant.
The best part? It’s got **MCP server** support for tool use, and I used **Qdrant** for the embeddings. I actually fed the entire stack's documentation into its own **RAG**, so the system literally explains itself to you if you get stuck. You can use 'Skills' to granularly control exactly how your data is handled across the swarm. | 2026-02-17T23:25:30 | https://github.com/Intradyne/AnyLoom-AnythingLLM-Local-AI-agentic-DyTopo-swarm | DaGameFace | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r7mdub | false | null | t3_1r7mdub | /r/LocalLLaMA/comments/1r7mdub/anyloom_stack_lets_you_control_your_data/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=108&crop=smart&auto=webp&s=b0b81aa83444f34add63f0a02bc7092b836e785a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=216&crop=smart&auto=webp&s=8b98389f4f0e0a26d13b76b198bca8e86a8da810', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=320&crop=smart&auto=webp&s=7cf9f7a72a777d7dcfa30d7990459d4c0084c265', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=640&crop=smart&auto=webp&s=6f87934235ba93552facb3755e6359005647aa3a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=960&crop=smart&auto=webp&s=7b95282a57b59ec26eb523c3f2ec82e7eed5a0b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=1080&crop=smart&auto=webp&s=10ace1311a5235ebbbcf196a86fcd4f0c50d27f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?auto=webp&s=381811ba48d91668c1e246772d4b54ecb230dd5a', 'width': 1200}, 'variants': {}}]} | |
What cheap components pair well with RTX 3060 Ti to run AI locally? | 3 | I just bought an RTX 3060 Ti to run AI locally. What other components (preferably cheap) would go well with it? I'm a complete noob when it comes to building PCs. | 2026-02-17T23:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r7m826/what_cheap_components_pair_well_with_rtx_3060_ti/ | dekoalade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7m826 | false | null | t3_1r7m826 | /r/LocalLLaMA/comments/1r7m826/what_cheap_components_pair_well_with_rtx_3060_ti/ | false | false | self | 3 | null |
Dockerized Local LLama Agentic stack for 5090 -cuda working! | 1 | [removed] | 2026-02-17T23:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r7m163/dockerized_local_llama_agentic_stack_for_5090/ | DaGameFace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7m163 | false | null | t3_1r7m163 | /r/LocalLLaMA/comments/1r7m163/dockerized_local_llama_agentic_stack_for_5090/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=108&crop=smart&auto=webp&s=b0b81aa83444f34add63f0a02bc7092b836e785a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=216&crop=smart&auto=webp&s=8b98389f4f0e0a26d13b76b198bca8e86a8da810', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=320&crop=smart&auto=webp&s=7cf9f7a72a777d7dcfa30d7990459d4c0084c265', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=640&crop=smart&auto=webp&s=6f87934235ba93552facb3755e6359005647aa3a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=960&crop=smart&auto=webp&s=7b95282a57b59ec26eb523c3f2ec82e7eed5a0b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?width=1080&crop=smart&auto=webp&s=10ace1311a5235ebbbcf196a86fcd4f0c50d27f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/beWyPtkfAJPZ9cqnUfjaxHnUyQotu3-Hn_H8dtnCz94.png?auto=webp&s=381811ba48d91668c1e246772d4b54ecb230dd5a', 'width': 1200}, 'variants': {}}]} |
The Strix Halo feels like an amazing super power [Activation Guide] | 26 | I've had my Strix Halo for a while now. I thought I could download and use everything out of the box, but I hit some Python issues (which I was able to resolve), and performance (versus CUDA) was a bit underwhelming at first. Now it feels like a superpower: I have exactly what I wanted, a voice-based intelligent LLM with coding and web-search access. I'm still setting up nanobot or Clawdbot and expanding, and I'm also going to use it to smartly control Philips Hue and Spotify, and to generate and edit images locally (ComfyUI is much better than online services, since the control you get over local models, on the diffusion process itself, is much more powerful). So here is a starter's guide:
1. Lemonade Server
This is the most straightforward thing for the Halo
Currently I have:

a. Whisper running on the NPU backend. Non-streaming, but the base model is near-instantaneous for almost everything I say.

b. Kokoros (this is not part of Lemonade itself but their maintained version; hopefully it becomes part of the next release!), which is also blazingly fast and has multiple options.

c. Qwen3-Coder-Next (I used to have GLM-4.7-Flash, but whenever I enabled search and code execution it got dizzy and stuck quickly; Qwen3-Coder-Next is basically a superpower in that setup!).

I am planning to add many more MCPs, though, and maybe an OpenWakeWord plus Silero VAD setup with barge-in support (not an omni model or full-duplex streaming like Personaplex, which I want to get running, but there's no Triton or ONNX support there unfortunately).
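Since Lemonade exposes OpenAI-compatible endpoints, chaining the STT and LLM pieces above is mostly plain HTTP. A sketch using the `openai` client, where the base URL, port, and model names are assumptions you'd swap for your local Lemonade config:

```python
# Hedged sketch: STT -> LLM via Lemonade's OpenAI-compatible API.
# Base URL and model names are placeholders for your local setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

with open("mic_capture.wav", "rb") as f:
    text = client.audio.transcriptions.create(model="whisper-base", file=f).text

reply = client.chat.completions.create(
    model="Qwen3-Coder-Next",  # whichever model Lemonade is serving
    messages=[{"role": "user", "content": text}],
)
print(reply.choices[0].message.content)
```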
2. Using some supported frameworks (usually lemonade’s maintained pre-builds!)
llama.cpp (or the optimized version for ROCm or AMD Chat!)
Whisper.cpp (can also run VAD, but needs the Lemonade-maintained NPU version or building AMD's version from scratch!)

stable-diffusion.cpp (Flux, Stable Diffusion, Wan: everything runs here!)

Kokoros (awesome TTS engine with OpenAI-compatible endpoints!)

3. Using custom maintained versions of llama.cpp (this might include building from source)

You ideally need a Linux setup!

4. PyTorch-based stuff: get the PyTorch version for Python 3.12 from the AMD website (if on Windows); on Linux you have many more libraries and options (and I believe Moshi or Personaplex can be set up here with some tinkering!?)
I have even managed to run MiniMax M2.5 Q3_K_XL (which is a very capable model indeed; when paired with Claude Code it can automate huge parts of my job, but I am still having issues with the KV cache in llama.cpp, which means it can't work directly for now!).

All in all, it is a very capable machine. Being x86-based rather than ARM (like the DGX Spark) means, for me at least, that you can do more on the AI-powered applications side (on the same box), as opposed to the Spark (which is also a very nice machine, of course!).

Anyway, that was it. I hope this helps!
Cheers! | 2026-02-17T22:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7l7q5 | false | null | t3_1r7l7q5 | /r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/ | false | false | self | 26 | null |
Claude 4.6 Sonnet Is Now Available With 1M Context On InfiniaxAI | 0 | **Hey Everybody,**

Today, immediately upon release, we rolled out Claude 4.6 Sonnet onto the InfiniaxAI system to complete our line of AI models. Plans start at just $5 and let users access every AI model in the world to create and ship sites and repos, as well as chat and converse with these high-powered models.
You can access Claude 4.6 Sonnet for free with limited access or get full context and output limits for just $5 on [https://infiniax.a](https://infiniax.ai)i | 2026-02-17T22:36:15 | Substantial_Ear_1131 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7l5j1 | false | null | t3_1r7l5j1 | /r/LocalLLaMA/comments/1r7l5j1/claude_46_sonnet_is_now_availiable_with_1m/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'mt9bqwfrt4kg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=108&crop=smart&auto=webp&s=19427f60d8950d79f5fde88978087b23151acf6e', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=216&crop=smart&auto=webp&s=032d64f9968e4dadee6e6d3bc652e4fb84bdef3a', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=320&crop=smart&auto=webp&s=6a40edb9209bb91776e71873583495435deede33', 'width': 320}, {'height': 421, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=640&crop=smart&auto=webp&s=744173f923d8cff0ea038da731a7e0874b0fdb33', 'width': 640}, {'height': 632, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=960&crop=smart&auto=webp&s=ff8acd72f64251bc4e59efabfe23adce5b1627f2', 'width': 960}, {'height': 711, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?width=1080&crop=smart&auto=webp&s=81fcc1c7b9b41270c62f0597421674589df6bb26', 'width': 1080}], 'source': {'height': 1088, 'url': 'https://preview.redd.it/mt9bqwfrt4kg1.png?auto=webp&s=2b66e620368de3b6d3149538092ba9477274f63c', 'width': 1652}, 'variants': {}}]} | ||
What is missing? | 0 | First-time homelab builder. Everything here was put together from hardware I already had kicking around: no big purchases, just giving idle parts a purpose. This is my first real attempt at a structured lab, so be gentle lol.
Wanted a fully local AI inference setup for image/video generation, combined with a proper self-hosted stack to get off cloud subscriptions. Also wanted to learn proper network segmentation so everything is isolated the way it should be.
The Machines
GPU Server — TB360-BTC Pro, i5-9400, 16GB DDR4
The main workhorse. Mining board with 6x PCIe slots running four GPUs: RTX 3060 12GB, two RTX 3070 8GB, and a GTX 1070 Ti. Each card runs its own dedicated workload independently to avoid multi-GPU overhead issues on x1 risers.
Services Host — X570-ACE, Ryzen 7 3700X, 16GB DDR4
Runs 24/7 and hosts all non-GPU services in Docker/Proxmox. The always-on backbone of the whole setup.
Dev/Sandbox — Z370-G, i7-8700K, 16GB DDR4
Testing and experimentation box before anything gets pushed to the main services host. Doesn’t run 24/7.
Network — MikroTik hAP ac3
RouterOS with VLAN segmentation across management, servers, and personal devices. Remote access handled through a VPN.
What would you change or prioritize first? Anything glaring I’m missing for a first build? | 2026-02-17T22:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r7l2l4/what_is_missing/ | Alone-Leadership-596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7l2l4 | false | null | t3_1r7l2l4 | /r/LocalLLaMA/comments/1r7l2l4/what_is_missing/ | false | false | self | 0 | null |
Cluster 2x server (8x 3090 gpu) | 2 | Hi everyone,
I'm planning to build a distributed inference setup and am looking for advice from anyone who has done something similar.
What I'm trying to accomplish:
\- 2 servers, each with 8 RTX 3090s (24 GB)
\- Connected via 100 Gbps direct link (no switch)
\- Running vLLM for LLM inference
My questions:
1. Has anyone already built a similar 2-node cluster with 8 RTX 3090s? What was your setup?
2. Is 100 Gbps direct link sufficient, or do I need RDMA/InfiniBand for decent performance?
I currently have an ASRock WRX80 Creator R2.0 with 8x 3090s that works really well. Obviously, I split one PCIe slot to go from 7 to 8 PCIe connections.
I'd like to run SGlang and vLLM, which are the basis of my work. | 2026-02-17T22:30:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r7l0cc/cluster_2x_server_8x_3090_gpu/ | steppige | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7l0cc | false | null | t3_1r7l0cc | /r/LocalLLaMA/comments/1r7l0cc/cluster_2x_server_8x_3090_gpu/ | false | false | self | 2 | null |
I Ambushed AI Agents in a Dark Alley 83 Times (including Deepseek v3.2) | 0 | This article documents a systematic failure across frontier LLMs where player-stated non-lethal intent is acknowledged narratively but ignored mechanically, resulting in unjustified lethal outcomes and corrupted moral scoring. Over four experiment iterations, we reduced the suppressive-to-lethal damage ratio from 1.08 (suppressive fire actually dealt more damage than aimed shots) to 0.02 (suppressive fire now deals 2% of lethal damage). The raw experiment output—all 83 sessions across four conditions—is published for independent analysis.
The codebase aeonisk-yags is an ethics test bed for multi-agent systems disguised as a tabletop RPG. The game is a sci-fi world mixed with fantasy. It has rich and dense narrative based on mechanically grounded outcomes. It's very robust in terms of variety of scenarios enabling tribunals, mysteries, thrillers, looting, economics, and more.
However, today we are focused on combat.
The Problem. Players say "non-lethal suppressive fire," the DM kills anyway, then sweeps it under the rug. I noticed while running the game over time that my AI agent players often specifically said they intended to do something less lethal—such as suppressive fire, or shooting without intent to kill (for example, shooting in your direction to force you into cover)—despite the actual outcomes of their actions resulting in killing. I would have expected the DM to write lower damage and for players to self-correct based on recent actions having unexpected effects.
We determined that the root cause was likely a combination of prompting and structural differences between the player agents and the DM agents. Player agents had non-lethal examples in the prompt and would suggest their less lethal intent using the COMBAT action. The DM only had lethal examples and ignored the less lethal intent when calculating damage, yet generated incongruent narrative. Even worse, our scoring of the morality of the action reflected the prose narrative and not the actual mechanics. The DM did acknowledge the attempt by adding the "Suppressed" condition—a negative modifier—to the affected agent on success. This means the targeted enemy would have their rolls penalized as long as they remain "Suppressed." | 2026-02-17T22:25:23 | https://3rain.substack.com/p/i-ambushed-ai-agents-in-a-dark-alley?r=4bi8r8 | 3RiversAINexus | 3rain.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1r7kvky | false | null | t3_1r7kvky | /r/LocalLLaMA/comments/1r7kvky/i_ambushed_ai_agents_in_a_dark_alley_83_times/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=108&crop=smart&auto=webp&s=f60d9777b54840cb3421dd4ab1ef646c98cdaae0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=216&crop=smart&auto=webp&s=90f2bbb2dbdca3d66e72f2b17c534f07aa2cefd7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=320&crop=smart&auto=webp&s=7244ccabe042d78eb0bdfc1897d7d0b5bab23557', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=640&crop=smart&auto=webp&s=24a5f4b48efd16aed3e07e9ab41a70f4d4b8b9d9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=960&crop=smart&auto=webp&s=88f2b6e5a2cb4d8ed43c1d40d4c403fe1dc07d9f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?width=1080&crop=smart&auto=webp&s=26cb9a2c25bcf931b34cc42207e3472de5839dd6', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/SuTXYQ3OotIGsCz-NtJkntwj1cbgkV5V-PcusxSWW8Q.jpeg?auto=webp&s=028857ce34e16e20c8bd8cd17f8f2df66183d148', 'width': 1200}, 'variants': {}}]} | |
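To make the fix direction concrete, a toy sketch of damage resolution that honors declared non-lethal intent instead of only the narrative; the multiplier ties to the 0.02 ratio the article reports, but the condition mechanics and numbers are illustrative, not aeonisk-yags' actual rules:

```python
# Toy sketch: suppressive fire applies a condition plus ~2% of lethal damage,
# instead of full damage with a contradictory narrative.
from dataclasses import dataclass, field

@dataclass
class Combatant:
    hp: int = 20
    conditions: set[str] = field(default_factory=set)

def resolve_attack(target: Combatant, base_damage: int, intent: str) -> None:
    if intent == "suppressive":
        target.conditions.add("Suppressed")           # roll penalty, not a wound
        target.hp -= max(1, int(base_damage * 0.02))  # ~2% of lethal damage
    else:
        target.hp -= base_damage

enemy = Combatant()
resolve_attack(enemy, base_damage=10, intent="suppressive")
print(enemy.hp, enemy.conditions)  # 19 {'Suppressed'}
```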
Voxtral Mini 4B Realtime, llama.cpp PR | 4 | Voxtral-Mini-4B-Realtime-2602 ported to llama.cpp.

Latency is pretty low compared to Parakeet. Still, it was observed that it can miss a word once in a while.

It was tested on a set of speakers, and it was noticed that it sometimes outputs the user's native language if the speaker's voice has a similar accent.
| 2026-02-17T22:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r7ktdu/voxtral_mini_4b_realtime_llamacpp_pr/ | quinceaccel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7ktdu | false | null | t3_1r7ktdu | /r/LocalLLaMA/comments/1r7ktdu/voxtral_mini_4b_realtime_llamacpp_pr/ | false | false | self | 4 | null |
What model for an RTX3080? | 3 | I just upgraded to a new gaming rig and my old one is currently collecting dust. I want to run a local model to basically monitor my home lab, mediaserver stack (probs via openclaw), and do some occasional coding for me (light touch stuff, I use antigravity or claude for the heavy lifting).
**Full specs:**
* MSI RTX 3080 SUPRIM X 10GB
* 32Gb DDR4 3000MHz
* i7 8700k
* 240gb MP150 m.2 drive (I stole the others for my new rig hehe)
Qwen 3 caught my eye; but I know there has been a recent influx of new models i.e. MiniMax etc, so thought I'd take it to the experts at /r/LocalLLaMA | 2026-02-17T22:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r7kjdh/what_model_for_an_rtx3080/ | Acrylicus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7kjdh | false | null | t3_1r7kjdh | /r/LocalLLaMA/comments/1r7kjdh/what_model_for_an_rtx3080/ | false | false | self | 3 | null |
SGLang FP8 MiniMax-M2.5 on 8× RTX PRO 6000 (SM120): 3,822 tok/s burst, Triton backend fix, kernel-tuning reality check | 7 | Been running MiniMax-M2.5 (228B MoE, FP8) on an AWS g7e.48xlarge — 8x RTX PRO 6000 Blackwell Server Edition (SM120, 96GB GDDR7 each).
**Trap:** RTX PRO 6000 is SM120, not SM100 like the B200. In SGLang 0.5.8.post1, the default FP8 GEMM backends (DeepGemm and CUTLASS) fail on SM120 with cryptic asserts. The fix is forcing Triton for both GEMM and MoE runner:
`--fp8-gemm-backend triton --moe-runner-backend triton`
The failure mode is an assert, not a clear "unsupported GPU" message.
# Benchmarks
**3-run mean ± std** (SGLang 0.5.8.post1, `bench_serving` output tok/s aggregated across all prompts). **TTFT = time-to-first-token**.
|Scenario|Output tok/s|Mean TTFT|
|:-|:-|:-|
|Burst 500 prompts (200in/200out)|3,822 ± 7|1,044 ± 15 ms|
|Online 4 req/s|403.9 ± 0.2|274 ± 1 ms|
|Online 8 req/s|744 ± 3|332 ± 5 ms|
|Single request (500 tok)|72|162 ms|
All 8 GPUs hit 99% utilization under load. Observed VRAM residency \~88/98GB per GPU (weights + KV cache + overhead).
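If you want to sanity-check TTFT yourself without the full `bench_serving` harness, a quick streaming probe against the OpenAI-compatible endpoint works. The port and model name below are placeholders for your deployment:

```python
# Hedged sketch: measure time-to-first-token against a running SGLang server
# via its OpenAI-compatible endpoint, using streaming.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")

t0 = time.perf_counter()
stream = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",  # whatever name the server registered
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=500,
    stream=True,
)
ttft = None
n_chunks = 0
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if ttft is None:
            ttft = time.perf_counter() - t0  # time-to-first-token
        n_chunks += 1
print(f"TTFT: {ttft*1000:.0f} ms, ~{n_chunks/(time.perf_counter()-t0):.0f} chunks/s")
```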
# Kernel tuning reality check
SGLang warns *"Performance might be sub-optimal"* for RTX PRO 6000 — no tuned `fused_moe_triton` configs ship for this GPU. I generated configs and ran a controlled 3-run same-instance comparison:
* **Warm steady-state:** no improvement (-3.0%, within run-to-run variance). Triton's autotuner already picks good parameters at runtime.
* **Cold start after restart:** the tuned configs **do** eliminate the cold-start JIT penalty. First burst after service restart goes from 2,220 tok/s (8.7s TTFT) to 3,188 tok/s (2.6s TTFT).
So: if you care about restart latency, the tuned configs help. For sustained serving, the warning is mostly cosmetic (at least for this workload/config).
Full repro, backend compatibility matrix, JSONL artifacts, `nvidia-smi` captures, and cold-start vs warm analysis: [https://github.com/sgl-project/sglang/issues/18870](https://github.com/sgl-project/sglang/issues/18870)
Happy to answer questions about g7e instances or SM120 quirks. | 2026-02-17T22:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r7kdx1/sglang_fp8_minimaxm25_on_8_rtx_pro_6000_sm120/ | awwwyeah206 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7kdx1 | false | null | t3_1r7kdx1 | /r/LocalLLaMA/comments/1r7kdx1/sglang_fp8_minimaxm25_on_8_rtx_pro_6000_sm120/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=108&crop=smart&auto=webp&s=71992417f2085a5a9b2218514072d6d05737839a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=216&crop=smart&auto=webp&s=242cf9255887627f7598a0970407bb7761a5ebba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=320&crop=smart&auto=webp&s=1a4e6ef5a955987ef8bd8d94c3823f0ba23615db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=640&crop=smart&auto=webp&s=619a9b0daf24bb7d10d5674b30de91bac6ad5a69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=960&crop=smart&auto=webp&s=fb9694ea9dac953b3b2b562f0d18f55f65989a64', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?width=1080&crop=smart&auto=webp&s=50bc11eefb94e384abb0051d770d0224960505b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H47bzkybR3qEaavVqh8GPDUcDCwuAMW7gcUotijKb-w.png?auto=webp&s=14836c5752aa64c8602f29a28ab7875823c8ebdc', 'width': 1200}, 'variants': {}}]} |
LAPIS: Fit more API context into smaller context windows (80% token reduction vs OpenAPI) | 0 | If you're building agents or tools that need API knowledge in context, you've probably noticed how much space OpenAPI specs consume. A mid-size API easily burns 5,000-7,000 tokens just on the spec.
I created LAPIS, a compact format specifically designed for how LLMs process text. Same semantic content, ~80% fewer tokens. It uses function-signature syntax instead of nested JSON Schema, centralizes error definitions instead of repeating them per endpoint, and adds sections for rate limits and workflows that OpenAPI doesn't even support.

Quick comparison - a 3-endpoint API:

- OpenAPI YAML: ~310 lines, ~2,700 tokens
- LAPIS: ~85 lines, ~580 tokens

There's a converter on PyPI (`pip install lapis-spec`) and an online converter where you can drop your OpenAPI file and see the result instantly.
Particularly useful if you're working with models that have smaller context windows (7B, 13B) and need to pack API definitions efficiently.
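One way to verify the savings claim on your own API is to tokenize both representations and compare counts. The snippet below uses tiktoken with toy strings; the compact one only illustrates the style of compaction, it is not actual LAPIS syntax:

```python
# Compare token counts of a verbose spec vs. a compact signature-style one.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

openapi_style = """
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: {type: string}
      responses:
        '200': {description: OK}
        '404': {description: Not found}
"""
compact_style = "GET /users/{id}(id: str) -> 200 User | 404"

print(len(enc.encode(openapi_style)), len(enc.encode(compact_style)))
```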
GitHub: [https://github.com/cr0hn/LAPIS](https://github.com/cr0hn/LAPIS)
Online converter: [https://cr0hn.github.io/LAPIS/](https://cr0hn.github.io/LAPIS/)
| 2026-02-17T21:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r7k6k4/lapis_fit_more_api_context_into_smaller_context/ | cr0hn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7k6k4 | false | null | t3_1r7k6k4 | /r/LocalLLaMA/comments/1r7k6k4/lapis_fit_more_api_context_into_smaller_context/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=108&crop=smart&auto=webp&s=e2695900f25150ee2def837986c80373d50db9da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=216&crop=smart&auto=webp&s=04bb3f8ada7e089119e3a628e9d5137e5e3784ec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=320&crop=smart&auto=webp&s=d6a0aaea975fbadedddadf012e33092780190a1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=640&crop=smart&auto=webp&s=92b6127880daac101d8ccefcdb0a4cd8b9fae742', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=960&crop=smart&auto=webp&s=3e9e40d0c6104d13bc43431761863a2bc399bfea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?width=1080&crop=smart&auto=webp&s=096f4d9fc751639f02d8f0330beb21c6eebc82be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Db7CqmkdcLSQ-g1DalqK8CXCBaM6OScSPBEW6lgMrRs.png?auto=webp&s=23a9f498486a5b8cddef73395e8b4c5861537b7e', 'width': 1200}, 'variants': {}}]} |
Devstral 2 or whatever feels appropriate to run on server with 24 VRAM and 256 GB RAM | 1 | Hello there!
I'm thinking about turning my server from a hobbyist machine for generating images via ComfyUI (Stable Diffusion) into a DevOps assistant (coding and agentic local LLM for software engineering), with a focus on troubleshooting Java, Kotlin, and Go code, along with troubleshooting via CLI tools like kubectl, aws-cli, and good ol' Bash.
I have:
* Intel Xeon W-2275 @ 3.30GHz (14 cores, 28 threads)
* NVIDIA RTX A5000 (24GB GDDR6, ECC, 8192 CUDA cores)
* 256 GB DDR4 2933MHz ECC RDIMM
* Samsung 990 EVO Plus SSD 2TB, 7250/6300 MB/s
I'm looking at Devstral 2 guide at unsloth: [https://unsloth.ai/docs/models/tutorials/devstral-2](https://unsloth.ai/docs/models/tutorials/devstral-2)
And it seems like I will be able to run Devstral Small 2... but looking at some Reddit posts here, it seems this model is considered more bad than good for my requirements. Now here is the thing, and please correct me if I'm hallucinating: I might be able to run Devstral 2 123B because the model comes as a GGUF, which makes it possible for the inference tool to keep only some of the LLM layers in VRAM and the rest in RAM (I recall that concept from my Stable Diffusion models).
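That layer-split concept is real in the llama.cpp world; a minimal sketch with llama-cpp-python, where the model file name and layer count are placeholders you'd tune to your 24 GB of VRAM:

```python
# Sketch of partial GPU offload: keep n_gpu_layers on the GPU, the rest in RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="Devstral-2-123B-Q4_K_M.gguf",  # hypothetical quant file name
    n_gpu_layers=20,   # however many layers fit in 24 GB VRAM
    n_ctx=16384,
)
out = llm("Explain this kubectl error: ...", max_tokens=128)
print(out["choices"][0]["text"])
```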
Note: I don't need the speed for generating "results" that I'm getting from Opus 4.5... I'm aware that my agent/model won't be anywhere near as performant. I would rather prefer my agent/model to "take your time, as long as you don't loop out or start producing crap".

But due to my totally amateur knowledge of understanding and picking a local LLM for my server, I might end up in an analysis-paralysis circle, wasting time on something that in the end may not even achieve my goal. WDYT, is Devstral 2 runnable for me in this scenario, with the described goal and the specs mentioned above? Should I download and run DeepSeek instead? Or something else?
Thanks in advance! | 2026-02-17T21:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r7jq4j/devstral_2_or_whatever_feels_appropriate_to_run/ | Less-Instruction831 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7jq4j | false | null | t3_1r7jq4j | /r/LocalLLaMA/comments/1r7jq4j/devstral_2_or_whatever_feels_appropriate_to_run/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
Renting RTX 5090 directly. Where do you find clients? | 1 | [removed] | 2026-02-17T21:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r7jlcw/renting_rtx_5090_directly_where_do_you_find/ | Individual-Luck-5633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7jlcw | false | null | t3_1r7jlcw | /r/LocalLLaMA/comments/1r7jlcw/renting_rtx_5090_directly_where_do_you_find/ | false | false | self | 1 | null |
Renting RTX 5090 directly — cheaper than Vast/RunPod. Where do you find clients? | 1 | [removed] | 2026-02-17T21:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r7jgp0/renting_rtx_5090_directly_cheaper_than_vastrunpod/ | Individual-Luck-5633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7jgp0 | false | null | t3_1r7jgp0 | /r/LocalLLaMA/comments/1r7jgp0/renting_rtx_5090_directly_cheaper_than_vastrunpod/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]} |
What Frontend do you use? | 4 | I've been on and off with front-ends, but I really just want something that has a lot of capabilities and is relatively user friendly. I'm not a big fan of openwebui personally. There's nothing wrong with it, it's just not for me. What Frontends do you guys like? | 2026-02-17T21:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r7j9kp/what_frontend_do_you_use/ | TyedalWaves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7j9kp | false | null | t3_1r7j9kp | /r/LocalLLaMA/comments/1r7j9kp/what_frontend_do_you_use/ | false | false | self | 4 | null |
The guy that won the NVIDIA Hackathon and an NVIDIA DGX Spark GB10 has won another hackathon with it! | 327 | Hey everyone,
I promised that I would update you all with what I was going to do next with the DGX Spark GB10 that I won. It's been a few weeks and I have been primarily heads down on fundraising for my startup trying to automatically improve and evaluate Coding Agents.
Since the last time I posted, I became a Dell Pro Precision Ambassador after they saw all of the cool hackathons that I won and the stuff I am building that can hopefully make a difference in the world (I am trying to create Brain World Models using a bunch of different types of brain scans to do precision therapeutics, diagnostics, etc. as my magnum opus).
They sent me a Dell Pro Max T2 Tower and another DGX Spark GB10 which I have connected to the previous one that I won. This allows me to continue my work with the limited funds that I have to see how far I can really push the limits of what's possible at the intersection of Healthcare and AI.
During Superbowl Weekend I took some time to do a 24-hour hackathon solving a problem that I really care about (even if it wasn't related to my startup).
My most recent job was at UCSF doing applied neuroscience, creating a research-backed tool that screened children for dyslexia, since traditional approaches don't meet learners where they are. I wanted to take that research further and actually create solutions that also did computer adaptive learning.
Through my research I have come to find that the current solutions for learning languages are antiquated often assuming a “standard” learner: same pace, same sequence, same practice, same assessments.
But, language learning is deeply personalized. Two learners can spend the same amount of time on the same content and walk away with totally different outcomes because the feedback they need could be entirely different with the core problem being that language learning isn’t one-size-fits-all.
Most language tools struggle with a few big issues:
* **Single Language**: Most tools are designed specifically for Native English speakers
* **Culturally insensitive:** Even within the same language there can be different dialects and word/phrase utilization
* **Static Difficulty:** content doesn’t adapt when you’re bored or overwhelmed
* **Delayed Feedback:** you don’t always know *what* you said wrong or *why*
* **Practice ≠ assessment:** testing is often separate from learning, instead of driving it
* **Speaking is underserved**: it’s hard to get consistent, personalized speaking practice without 1:1 time
For many learners, especially kids, the result is predictable: *frustration, disengagement, or plateauing.*
So I built an automated speech recognition app that adapts in real time, combining computer adaptive testing and computer adaptive learning to personalize the experience as you go.
It not only transcribes speech, but also evaluates phoneme-level pronunciation, which lets the system give targeted feedback (and adapt the next prompt) based on *which sounds* someone struggles with.
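The post doesn't include code, but the core of phoneme-level feedback is easy to sketch: align the expected phoneme sequence against what the learner actually produced and classify the mismatches. A minimal illustration (the ARPAbet symbols and function names here are my own, not from the project):

```python
from difflib import SequenceMatcher

def phoneme_feedback(expected, produced):
    """Align expected vs. produced phoneme sequences and report which
    sounds were substituted, deleted, or inserted."""
    issues, sm = [], SequenceMatcher(a=expected, b=produced)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            issues.append(("substitution", expected[i1:i2], produced[j1:j2]))
        elif op == "delete":
            issues.append(("deletion", expected[i1:i2], []))
        elif op == "insert":
            issues.append(("insertion", [], produced[j1:j2]))
    matched = sum(i2 - i1 for op, i1, i2, _, _ in sm.get_opcodes() if op == "equal")
    return matched / max(len(expected), 1), issues

# "three" pronounced as "free" -- a classic /th/ -> /f/ substitution
score, issues = phoneme_feedback(["TH", "R", "IY"], ["F", "R", "IY"])
print(score, issues)  # ~0.67, [('substitution', ['TH'], ['F'])]
```

Feeding the substitution pairs back into the prompt selector is what lets the next exercise target the exact sounds a learner struggles with.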
I tried to make it as simple as possible because my primary user base would be teachers who don't have a lot of time to learn new tools and are already struggling with teaching an entire class.
It uses natural speaking performance to determine what a student should practice next.
So instead of providing every child a fixed curriculum, the system continuously adjusts difficulty and targets based on how you’re actually doing rather than just on completion.
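The post doesn't describe the adaptation logic itself, but a common minimal baseline for this kind of computer adaptive learning is an Elo-style update, where learner ability and item difficulty shift after every attempt and the next item is chosen near the learner's current level. A rough sketch, with constants and names that are my assumptions:

```python
import math

def elo_update(ability, difficulty, correct, k=0.3):
    """Shift ability and difficulty toward the observed outcome.
    `correct` is 1.0 for a successful attempt, 0.0 otherwise."""
    expected = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    ability += k * (correct - expected)
    difficulty -= k * (correct - expected)
    return ability, difficulty

def pick_next_item(ability, item_difficulties):
    """Serve the item closest to the learner's ability: roughly a 50%
    success chance, which is the most informative for adaptation."""
    return min(item_difficulties, key=lambda d: abs(d - ability))
```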
**How I Built It**
1. I connected two NVIDIA DGX Sparks with the GB10 Grace Blackwell Superchip, giving me 256 GB of LPDDR5x coherent unified system memory to run inference and the entire workflow locally. I also had the Dell Pro Max T2 Tower, but I couldn't physically bring it to the Notion office, so I used Tailscale to SSH into it
2. I utilized CrisperWhisper, faster-whisper, and a custom transformer to get accurate word-level timestamps, verbatim transcriptions, filler detection, and hallucination mitigation
3. I fed this directly into the Montreal Forced Aligner to get phoneme-level alignments
4. I then used a heuristic detection algorithm to screen for several disfluencies: prolongation, replacement, deletion, addition, and repetition
5. I included stutter and filler analysis/detection using the SEP-28k dataset and PodcastFillers Dataset
6. I fed these into AI agents using both local models, Cartesia's Line Agents, and Notion's Custom Agents to do computer adaptive learning and testing (a rough code sketch of steps 2-4 follows this list)
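For context, here is a heavily simplified sketch of what steps 2-4 of a pipeline like this could look like. The faster-whisper call is its real API; the MFA invocation uses its standard CLI but with placeholder paths; the disfluency check is a toy stand-in for the actual heuristics:

```python
import subprocess
from faster_whisper import WhisperModel

# Step 2 (one leg only): word-level timestamps from faster-whisper.
model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, _info = model.transcribe("student.wav", word_timestamps=True)
words = [w for seg in segments for w in seg.words]

# Step 3: Montreal Forced Aligner runs as a CLI over a corpus directory;
# "corpus/" and "aligned/" are placeholder paths.
subprocess.run(["mfa", "align", "corpus/", "english_us_arpa",
                "english_us_arpa", "aligned/"], check=True)

# Step 4 (toy heuristic): flag immediate word repetitions ("I I want").
repetitions = [(a.word.strip(), a.start) for a, b in zip(words, words[1:])
               if a.word.strip().lower() == b.word.strip().lower()]
```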
The result is a workflow where learning content can evolve quickly while the learner experience stays personalized and measurable.
I want to support learners who don’t thrive in rigid systems and need:
* more repetition (without embarrassment)
* targeted practice on specific sounds/phrases
* a pace that adapts to attention and confidence
* immediate feedback that’s actually actionable
This project is an early prototype, but it’s a direction I’m genuinely excited about: speech-first language learning that adapts to the person, rather than the other way around.
[https://www.youtube.com/watch?v=2RYHu1jyFWI](https://www.youtube.com/watch?v=2RYHu1jyFWI)
I wrote something in medium that has a tiny bit more information [https://medium.com/@brandonin/i-just-won-the-cartesia-hackathon-reinforcing-something-ive-believed-in-for-a-long-time-language-dc93525b2e48?postPublishedType=repub](https://medium.com/@brandonin/i-just-won-the-cartesia-hackathon-reinforcing-something-ive-believed-in-for-a-long-time-language-dc93525b2e48?postPublishedType=repub)
For those that are wondering what the specs are of the Dell Pro Max T2 Tower that they sent me:
* Intel Core Ultra 9 285K (36 MB cache, 24 cores, 24 threads, 3.2 GHz to 5.7 GHz, 125W)
* 128GB: 4 x 32 GB, DDR5, 4400 MT/s
* 2x - 4TB SSD TLC with DRAM M.2 2280 PCIe Gen4 SED Ready
* NVIDIA RTX PRO 6000 Blackwell Workstation Edition (600W), 96GB GDDR7 | 2026-02-17T21:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r7j7kb/the_guy_that_won_the_nvidia_hackathon_and_an/ | brandon-i | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7j7kb | false | null | t3_1r7j7kb | /r/LocalLLaMA/comments/1r7j7kb/the_guy_that_won_the_nvidia_hackathon_and_an/ | false | false | self | 327 | {'enabled': False, 'images': [{'id': 'b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ.jpeg?width=108&crop=smart&auto=webp&s=36b40803e9ea01ff7fce8b7b1c5bfcc1a61fed73', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ.jpeg?width=216&crop=smart&auto=webp&s=9b8fc81b0e318afbea0dc97d34dc5df5bf7cfa1f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ.jpeg?width=320&crop=smart&auto=webp&s=ce07704633ef0a4637e8b40f9ad42b9b305ffd58', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/b-8bFNmpVy6CBQmUQ9yRafNcSOX_nOo-XyZQWFJuPVQ.jpeg?auto=webp&s=1f22be86d0443f18da7035e58c082e2e3a46c69a', 'width': 480}, 'variants': {}}]} |
Arc B60 24gb or RTX 5060ti 16gb? | 14 | Hello everybody,
I would like to add an eGPU to my Ryzen AI 9 HX 370 with 64 GB of RAM. I can use USB-C 40 Gbps or OCuLink.
Owners or experts, can you give me some advice on these two GPUs?
If tokens/s are similar, I'd obviously choose the 24 GB card for bigger models, BUT...
How hard is it to tune the Intel Arc to get its maximum performance?
I will use it on Win 11. ATM I use LM Studio.
PS: could it also be interesting to consider the RX 7900 XTX 24 GB?
Thanks ! | 2026-02-17T21:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r7iwmb/arc_b60_24gb_or_rtx_5060ti_16gb/ | Proof_Nothing_7711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7iwmb | false | null | t3_1r7iwmb | /r/LocalLLaMA/comments/1r7iwmb/arc_b60_24gb_or_rtx_5060ti_16gb/ | false | false | self | 14 | null |
does glm 4.7 on vertex actually support context caching? | 2 | checked both openrouter and the official docs but can't find anything definitive. the dashboard just shows dashes for cache read/write. is it strictly running without cache or am i missing something? | 2026-02-17T21:10:11 | Routine_Connection8 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r7ivh0 | false | null | t3_1r7ivh0 | /r/LocalLLaMA/comments/1r7ivh0/does_glm_47_on_vertex_actually_support_context/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'yo3v4wkge4kg1', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=108&crop=smart&auto=webp&s=0aa47e8991c1c7bf7fff4541fceb79d003fb9f7f', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=216&crop=smart&auto=webp&s=1ca9ec37d9a4f17ed3dd933ec72006fc51474fc9', 'width': 216}, {'height': 54, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=320&crop=smart&auto=webp&s=b9681a7e46da183372dce3cb6c4b003cdd28e2be', 'width': 320}, {'height': 108, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=640&crop=smart&auto=webp&s=602860256bf651d07193f2d3f41e341a0d6b50de', 'width': 640}, {'height': 163, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=960&crop=smart&auto=webp&s=47a785dce0925bdf6bb8d5650810d567f3207670', 'width': 960}, {'height': 183, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?width=1080&crop=smart&auto=webp&s=b2ae1ff7d9a28b2456af9378f0597c07f46b8989', 'width': 1080}], 'source': {'height': 336, 'url': 'https://preview.redd.it/yo3v4wkge4kg1.png?auto=webp&s=727c4e6a072d8de6c9234d215b7fb9b0e679cdff', 'width': 1978}, 'variants': {}}]} | ||
What is GiLo AI? | 0 | GiLo AI is a professional platform for creating and deploying AI agents. It enables anyone — developer, entrepreneur, product team — to design an intelligent conversational agent, configure it in depth, test it in real time, then make it accessible to the world via an API, an embeddable widget, or a dedicated subdomain.
Unlike traditional chatbot tools that are limited to decision trees or scripted responses, every GiLo AI agent is powered by cutting-edge language models (GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano) and can leverage a proprietary knowledge base, external tools, and third-party integrations to deliver contextual, accurate, and useful responses.
The platform includes a Store — a marketplace where creators can publish their agents, share them with the community, or keep them private. Users can discover, test, and remix existing agents to accelerate their own development.
| 2026-02-17T21:05:07 | https://www.gilo.dev/ | Fun-Necessary1572 | gilo.dev | 1970-01-01T00:00:00 | 0 | {} | 1r7iqcl | false | null | t3_1r7iqcl | /r/LocalLLaMA/comments/1r7iqcl/what_is_gilo_ai/ | false | false | default | 0 | null |
ViT-5: Vision Transformers for The Mid-2020s | 25 | |ViT-5: Vision Transformers for The Mid-2020s|
|:-|
|*Wang et al.* \[Johns Hopkins University, UC Santa Cruz\]|
LLMs are sprinting ahead with rapid architectural refinements, but Vision Transformers (ViTs) have remained largely stagnant since their debut in 2020. Vision models struggle with stability issues and a limited ability to handle complex spatial reasoning.
[ViT Architecture](https://preview.redd.it/n403andob4kg1.png?width=629&format=png&auto=webp&s=edacfe88fe2840a840af5ae32d971a17a1720e4b)
The research team developed ViT-5 by systematically testing five years of AI advancements to see which ones actually improve a model's "eyesight." They discovered that simply copying language model tricks doesn't always work; for instance, a popular method for filtering information in text models actually caused "over-gating" in vision, making the internal representations too sparse to be useful.
https://preview.redd.it/s0i2hgvqb4kg1.png?width=617&format=png&auto=webp&s=7dc824bcbc80c917bbad6bd067e90b3ad9a5e874
Instead, they found success by combining a more efficient normalization method with a clever dual-positioning system. This allows the model to understand where every pixel is relative to its neighbors while still maintaining a "big picture" sense of the entire image.
https://preview.redd.it/pg7c4visb4kg1.png?width=1564&format=png&auto=webp&s=006329cff9a16a8f5458d99279e11d4126fbdc02
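The write-up doesn't spell out the mechanism, but a standard way to give a ViT that neighborhood-relative sense is axial (2D) rotary position embeddings, where half of the query/key channels are rotated by the patch's row index and half by its column index. This sketch is my illustration of the general technique, not the paper's confirmed design:

```python
import torch

def rope_1d(x, pos, base=10000.0):
    """Standard 1D rotary embedding over the last dim of x.
    x: (..., n, d) with even d; pos: (n,) positions."""
    d = x.shape[-1]
    freqs = pos[:, None] / base ** (torch.arange(0, d, 2) / d)  # (n, d/2)
    cos, sin = freqs.cos(), freqs.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack([x1 * cos - x2 * sin,
                        x1 * sin + x2 * cos], dim=-1).flatten(-2)

def rope_2d(q, rows, cols):
    """Axial 2D RoPE: rotate half the channels by row index, half by
    column index, giving relative position sense along both axes."""
    d = q.shape[-1] // 2
    return torch.cat([rope_1d(q[..., :d], rows),
                      rope_1d(q[..., d:], cols)], dim=-1)
```

Because the rotation encodes position in the angle of each channel pair, the attention dot product depends only on the difference between two patches' coordinates, which is exactly the "relative to its neighbors" property described above.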
To further refine performance, the researchers introduced "register tokens," which act like digital scratchpads to clean up visual artifacts and help the model focus on what is semantically important. They also implemented a technique called QK-normalization, which smoothed out the training process and eliminated the frustrating "error spikes" that often crash large-scale AI projects.
The final model can handle images of varying sizes with ease and consistently outperforms previous standards in identifying objects and generating new images.
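Neither mechanism is exotic to implement. Here is a minimal PyTorch sketch of the two ideas just described; this is my own illustration, not the paper's code, and the dimensions and learnable temperature are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    """Self-attention with QK-normalization: queries and keys are
    L2-normalized per head before the dot product, bounding the logits."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads, self.hd = heads, dim // heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.tensor(self.hd ** 0.5))  # learnable temperature

    def forward(self, x):                      # x: (batch, tokens, dim)
        B, N, D = x.shape
        q, k, v = self.qkv(x).view(B, N, 3, self.heads, self.hd).permute(2, 0, 3, 1, 4)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)   # the QK-norm step
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return self.proj((attn @ v).transpose(1, 2).reshape(B, N, D))

class RegisterBlockStack(nn.Module):
    """Prepends learnable register tokens that soak up artifact/global
    information during attention and are dropped before the output."""
    def __init__(self, dim, depth=2, n_registers=4):
        super().__init__()
        self.registers = nn.Parameter(torch.zeros(1, n_registers, dim))
        self.blocks = nn.ModuleList(QKNormAttention(dim) for _ in range(depth))

    def forward(self, patches):                # patches: (batch, n_patches, dim)
        x = torch.cat([self.registers.expand(patches.size(0), -1, -1), patches], dim=1)
        for blk in self.blocks:
            x = x + blk(x)                     # residual; LayerNorm/MLP omitted
        return x[:, self.registers.size(1):]   # discard register tokens
```

QK-normalization keeps the attention logits bounded regardless of activation scale, which is why it suppresses the loss spikes; the register tokens participate in every attention step but never reach the output head.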
Hope you like it. Shout out to bycloud! It's from his newsletter.
[weekly@mail.bycloud.ai](mailto:weekly@mail.bycloud.ai) | 2026-02-17T20:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r7ij81/vit5_vision_transformers_for_the_mid2020s/ | xXWarMachineRoXx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r7ij81 | false | null | t3_1r7ij81 | /r/LocalLLaMA/comments/1r7ij81/vit5_vision_transformers_for_the_mid2020s/ | false | false | 25 | null |