title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Benchmarked 4 AI Memory Systems on 600-Turn Conversations - Here Are the Results | 20 | We just completed comprehensive benchmarks comparing memory layers for production AI agents. Tested Mem0 against OpenAI Memory, LangMem, and MemGPT across 10 multi-session conversations with 200 questions each.
**Key findings:**
* **Mem0**: 66.9% accuracy, 1.4s p95 latency, ~2K tokens per query
* **Mem0 Graph**: 68.... | 2026-02-23T15:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rckcww/benchmarked_4_ai_memory_systems_on_600turn/ | singh_taranjeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rckcww | false | null | t3_1rckcww | /r/LocalLLaMA/comments/1rckcww/benchmarked_4_ai_memory_systems_on_600turn/ | false | false | self | 20 | null |
Best local llm for grammar tasks? | 6 | Hi guys!
I want to create a Figma plugin that uses AI to help us proofread design assets and pieces for our work. I would go with OpenAI 5.2, but work is very strict regarding data ingestion by 3rd-party providers. I would also have to feed in my work's brand guidelines documents as the source of truth for the plugin.
Th... | 2026-02-23T15:19:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rck7n4/best_local_llm_for_grammar_tasks/ | darkblitzrc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rck7n4 | false | null | t3_1rck7n4 | /r/LocalLLaMA/comments/1rck7n4/best_local_llm_for_grammar_tasks/ | false | false | self | 6 | null |
Aura - offline cognitive memory for local AI agents. No embeddings, no cloud, <1ms recall. pip install aura-memory | 1 | [removed] | 2026-02-23T15:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rck65j/aura_offline_cognitive_memory_for_local_ai_agents/ | Far_Assignment_189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rck65j | false | null | t3_1rck65j | /r/LocalLLaMA/comments/1rck65j/aura_offline_cognitive_memory_for_local_ai_agents/ | false | false | self | 1 | null |
Is opencode the best free coding agent currently? | 11 | I just started using it and it seems good. I was very surprised that it also gives free access to minimax 2.5 and glm 5 at the moment. | 2026-02-23T15:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjzsk | false | null | t3_1rcjzsk | /r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/ | false | false | self | 11 | null |
Fixing Qwen3-Next-80B quantization: Explicit MoE gate + DeltaNet state protection (AWQ 128g) testing | 1 | [removed] | 2026-02-23T15:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjzg3/fixing_qwen3next80b_quantization_explicit_moe/ | EhOhEl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjzg3 | false | null | t3_1rcjzg3 | /r/LocalLLaMA/comments/1rcjzg3/fixing_qwen3next80b_quantization_explicit_moe/ | false | false | self | 1 | null |
Coming in a few hours: First architecturally-correct AWQ of Qwen3-Next-80B-A3B: explicit MoE gate + DeltaNet state protection, 128g, fits 2x3090 | 1 | [removed] | 2026-02-23T15:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjvf9/coming_in_a_few_hours_first/ | EhOhEl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjvf9 | false | null | t3_1rcjvf9 | /r/LocalLLaMA/comments/1rcjvf9/coming_in_a_few_hours_first/ | false | false | self | 1 | null |
If your 3B-70B fine-tune keeps OOM'ing, we'll run it on H100 (beta runtime experiment) | 0 | We're running a beta experiment with a new inference runtime focused on large-model memory behavior.
Instead of synthetic benchmarks, we'd rather test real community fine-tunes.
If you've got a 3B-70B model that:
- keeps OOM'ing
- struggles with memory fragmentation
- behaves weirdly under load
we can spin it up ... | 2026-02-23T14:50:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rcjgr4/if_your_3b70b_finetune_keeps_ooming_well_run_it/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcjgr4 | false | null | t3_1rcjgr4 | /r/LocalLLaMA/comments/1rcjgr4/if_your_3b70b_finetune_keeps_ooming_well_run_it/ | false | false | self | 0 | null |
Give your OpenClaw agents a truly local voice | 0 | If youβre using **OpenClaw** and want fully local voice support, this is worth a read:
[https://izwiai.com/blog/give-openclaw-agents-local-voice](https://izwiai.com/blog/give-openclaw-agents-local-voice?utm_source=chatgpt.com)
By default, OpenClaw relies on cloud TTS like **ElevenLabs**, which means your audio leaves... | 2026-02-23T14:50:52 | https://izwiai.com/blog/give-openclaw-agents-local-voice | zinyando | izwiai.com | 1970-01-01T00:00:00 | 0 | {} | 1rcjgr9 | false | null | t3_1rcjgr9 | /r/LocalLLaMA/comments/1rcjgr9/give_your_openclaw_agents_a_truly_local_voice/ | false | false | default | 0 | null |
I benchmarked 17 local LLMs on real MCP tool calling β single-shot AND agentic loop. The difference is massive. | 96 | I benchmarked 17 local LLMs on real MCP tool calling β not synthetic function-calling evals, but actual calls against a production API with 19 tools, real validation, and real results.
I ran each model twice. First single-shot (one API call, score the first response). Then agentic (model gets tool results back, keeps ... | 2026-02-23T14:48:34 | https://www.reddit.com/gallery/1rcjepp | AlyxPink | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcjepp | false | null | t3_1rcjepp | /r/LocalLLaMA/comments/1rcjepp/i_benchmarked_17_local_llms_on_real_mcp_tool/ | false | false | 96 | null | |
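A minimal sketch of the agentic mode this post benchmarks, using the OpenAI-compatible chat API that local servers such as llama.cpp expose. The endpoint, model name, `get_weather` tool, and `run_tool` stub are illustrative assumptions, not the author's harness; single-shot scoring is just the first `create` call without the loop.

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
TOOLS = [{"type": "function", "function": {
    "name": "get_weather", "description": "Look up current weather",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]}}}]

def run_tool(name, args):
    # Stub standing in for a real MCP tool call.
    return json.dumps({"city": args.get("city"), "temp_c": 21})

def agentic(messages, max_turns=10):
    # Feed tool results back until the model stops requesting tools.
    for _ in range(max_turns):
        reply = client.chat.completions.create(
            model="local", messages=messages, tools=TOOLS
        ).choices[0].message
        if not reply.tool_calls:
            return reply.content              # final answer
        messages.append(reply)                # keep the assistant turn
        for call in reply.tool_calls:
            messages.append({
                "role": "tool", "tool_call_id": call.id,
                "content": run_tool(call.function.name,
                                    json.loads(call.function.arguments))})
```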
Hey, V6rge AI Suite beta version is now available on the #MicrosoftStore! Download it today. | 0 | [https://apps.microsoft.com/store/detail/9NS36H0M4S9N?cid=DevShareMRDPCS](https://apps.microsoft.com/store/detail/9NS36H0M4S9N?cid=DevShareMRDPCS)
Currently it only has Image gen and Chat; the remaining features will be in the next update
https://preview.redd.it/8ohjqjhe49lg1.png?width=1358&format=png&auto=webp&s=... | 2026-02-23T14:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rciq3a/hey_v6rge_ai_suite_beta_version_is_now_available/ | Motor-Resort-5314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rciq3a | false | null | t3_1rciq3a | /r/LocalLLaMA/comments/1rciq3a/hey_v6rge_ai_suite_beta_version_is_now_available/ | false | false | 0 | null | |
Is Cursor the free version of Claude code? | 1 | [deleted] | 2026-02-23T14:21:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rcipxc | false | null | t3_1rcipxc | /r/LocalLLaMA/comments/1rcipxc/is_cursor_the_free_version_of_claude_code/ | false | false | default | 1 | null | ||
WORTH IT TO HOST A SERVER?? | 0 | So I got into the whole local LLM thing,
but for running a good model I don't have enough hardware, so I came across the idea of hosting a server to run my LLM.
So is it worth the cost and hassle to rent a GPU?
I want to use it as a ChatGPT alternative,
which I use for personal messages, thinking, reasoning, conspiracy theories, bi... | 2026-02-23T14:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rciotx/worth_to_host_a_server/ | Ashamed-Show-4156 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rciotx | false | null | t3_1rciotx | /r/LocalLLaMA/comments/1rciotx/worth_to_host_a_server/ | false | false | self | 0 | null |
native-devtools-mcp - v0.4.3 update | 1 | Hi everyone!
A month ago or so I announced a new desktop UI control MCP server creatively called `native-devtools-mcp`. Since then I've released 2 new major versions and a bunch of bugfixes and minor QoL and security additions, most of which I detected while building a CUA visual workflow tool on top of it.
For anyone... | 2026-02-23T14:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rcikoa/nativedevtoolsmcp_v043_update/ | SkyLunat1c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcikoa | false | null | t3_1rcikoa | /r/LocalLLaMA/comments/1rcikoa/nativedevtoolsmcp_v043_update/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wmWiIKGroK6cG2SAtfO6X_-EHHe5DC6rBIxjUTxi7sk.png?width=108&crop=smart&auto=webp&s=771ddd3d2f061d7fa05fb9555dce95365bdacf3e', 'width': 108}, {'height': 108, 'url': 'h... |
TinyTeapot (77 million params): Context-grounded LLM running ~40 tok/s on CPU (open-source) | 55 | 2026-02-23T14:03:12 | https://huggingface.co/teapotai/tinyteapot | zakerytclarke | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rci9h1 | false | null | t3_1rci9h1 | /r/LocalLLaMA/comments/1rci9h1/tinyteapot_77_million_params_contextgrounded_llm/ | false | false | 55 | {'enabled': False, 'images': [{'id': 'JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JxyR2-HPTrb177zTD0smUzhI5l6xLW7EKVY2pYpkHxc.png?width=108&crop=smart&auto=webp&s=5a738942bdaf97a32f04d24de81db251cc1404dc', 'width': 108}, {'height': 116, 'url': 'h... | ||
Local models to improve prompting/making a context rich prompt | 2 | Hi..
I need a local model/prompt that could help me write better prompts, to save cost on the larger models I use. Or is there any other way to improve my prompting? (I can't write them on my own; it's too difficult to get right.) | 2026-02-23T14:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rci7i5/local_models_to_improve_promptingmaking_a_context/ | ActuatorDisastrous13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rci7i5 | false | null | t3_1rci7i5 | /r/LocalLLaMA/comments/1rci7i5/local_models_to_improve_promptingmaking_a_context/ | false | false | self | 2 | null |
personal entropy reduction with agents | 16 | During my unemployment stage of life I'm working on a personal assistant.
The problem it solves is pretty straightforward - I have ADHD and it's hard for me to work with many different information streams (email, obsidian, calendar, local graph memory, browser history) + I forget things. The motivation was to improv... | 2026-02-23T13:56:43 | https://v.redd.it/m9wa92sy09lg1 | escept1co | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rci3l9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m9wa92sy09lg1/DASHPlaylist.mpd?a=1774447026%2CODgyMWQwNWY1NjM5ODFkMzQyMDc3OTNjMDQ0OGJhOWJjNjc1YWFiMWM5NjQzNjgyNmU2OTY4YmJhN2UyNTRlZA%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/m9wa92sy09lg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rci3l9 | /r/LocalLLaMA/comments/1rci3l9/personal_entropy_reduction_with_agents/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/Z2g1Z25ndHkwOWxnMXMsGTH1aguTrI-pU1cryBxoqt80__0hno6cPg7cZUT3.png?width=108&crop=smart&format=pjpg&auto=webp&s=ce83476b13c1045db9d874d5230496df6d899...
Iβm storing successful tool traces as reusable YAML workflows β how do you score/retire patterns? | 1 | > | 2026-02-23T13:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rchko9/im_storing_successful_tool_traces_as_reusable/ | Renee_Wen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rchko9 | false | null | t3_1rchko9 | /r/LocalLLaMA/comments/1rchko9/im_storing_successful_tool_traces_as_reusable/ | false | false | self | 1 | null |
Added Aya-101 multi-lingual support to llama.cpp | 3 | I have added Aya-101 multi-lingual support to llama.cpp. This is a large model which when quantized to Q8 can fit on less than 13GB of VRAM.
\`\`\`
cmd /c 'curl.exe -s [http://127.0.0.1:8080/v1/completions](http://127.0.0.1:8080/v1/completions) \-H "Content-Type: application/json" -d "{\\"prompt\\": \\"Translate ... | 2026-02-23T13:34:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rchjz6/added_aya101_multilingual_support_to_llamacpp/ | quinceaccel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rchjz6 | false | null | t3_1rchjz6 | /r/LocalLLaMA/comments/1rchjz6/added_aya101_multilingual_support_to_llamacpp/ | false | false | self | 3 | null |
[Release] LocoOperator-4B: Specialized Sub-Agent for Codebase Exploration. 100% JSON Validity. (Based on Qwen3-4B-Instruct-2507) | 1 | [removed] | 2026-02-23T13:27:00 | https://www.reddit.com/gallery/1rchduf | Awkward_Run_9982 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rchduf | false | null | t3_1rchduf | /r/LocalLLaMA/comments/1rchduf/release_locooperator4b_specialized_subagent_for/ | false | false | 1 | null | |
TeichAI's "Nemotron-Orchestrator" models are misleading β they're just Qwen3-8B distilled on frontier traces, not routing models | 3 |
Saw these models pop up on HuggingFace and figured I'd dig in since the name is catchy:
* [TeichAI/Nemotron-Orchestrator-8B-Claude-4.5-Opus-Distill](https://huggingface.co/TeichAI/Nemotron-Orchestrator-8B-Claude-4.5-Opus-Distill/blob/main/README.md)
* [TeichAI/Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill... | 2026-02-23T13:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rch66j/teichais_nemotronorchestrator_models_are/ | Honest-Debate-6863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch66j | false | null | t3_1rch66j | /r/LocalLLaMA/comments/1rch66j/teichais_nemotronorchestrator_models_are/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JCc1zE-Z1VogXmsuLv3W5a09ar0LW_PU_2xjH5Biuio.png?width=108&crop=smart&auto=webp&s=20980f55cc52ae321a495b58205c562818086da0', 'width': 108}, {'height': 116, 'url': 'h... |
Intelligence canβt scale on context alone. Intent is the missing piece. | 0 | Something I keep running into:
Agents don't usually fail because they lack information.
They fail because they lose track of *what they're trying to do*.
By a few turns in, behavior optimizes for the latest input, not the original objective.
Adding more context helps a bit - but it's expensive, brittle, and still... | 2026-02-23T13:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rch25s/intelligence_cant_scale_on_context_alone_intent/ | malav399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch25s | false | null | t3_1rch25s | /r/LocalLLaMA/comments/1rch25s/intelligence_cant_scale_on_context_alone_intent/ | false | false | self | 0 | null |
Experiment 2: BRAIN | 0 | **When AI doesn't just think, but speaks**
*Status: February 23, 2026 · Three versions · 10+ hours runtime · ~70 conversations*
# The Premise
In the first experiment ([Consciousness Loop, v4/v4.1](https://www.reddit.com/r/LocalLLaMA/comments/1rarlcu/comment/o6lpxhb/)), I simply let a language model think. It ran i... | 2026-02-23T13:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rch19j/experiment_2_brain/ | Fantastic-Till2460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rch19j | false | null | t3_1rch19j | /r/LocalLLaMA/comments/1rch19j/experiment_2_brain/ | false | false | self | 0 | null |
A year of tinkering with local LLMs: my setup, my model zoo, and what I've learned | 1 | [removed] | 2026-02-23T13:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rcgtwt/a_year_of_tinkering_with_local_llms_my_setup_my/ | KitchenCat5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcgtwt | false | null | t3_1rcgtwt | /r/LocalLLaMA/comments/1rcgtwt/a_year_of_tinkering_with_local_llms_my_setup_my/ | false | false | self | 1 | null |
[Release] LocoOperator-4B: Specialized Sub-Agent for Codebase Exploration. 100% JSON Validity. (Based on Qwen3-4B-Instruct-2507) | 1 | [removed] | 2026-02-23T13:00:16 | https://www.reddit.com/gallery/1rcgs04 | Awkward_Run_9982 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcgs04 | false | null | t3_1rcgs04 | /r/LocalLLaMA/comments/1rcgs04/release_locooperator4b_specialized_subagent_for/ | false | false | 1 | null | |
Open Source Batch Automator for LM Studio: Prevent GPU Crashes During Large Tests | 2 | \*Rephrased the text by ai
Iβve learned a lot from here, but I am still very much a beginner.
I run a GTX 1660 and 16GB of RAM. The problem I was facing was trying to compare prompt outputs across different models. Doing this meant manually loading a model in LM Studio, waiting, testing the prompt, manually unloadin... | 2026-02-23T12:41:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rcgdfy/open_source_batch_automator_for_lm_studio_prevent/ | KiranjotSingh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcgdfy | false | null | t3_1rcgdfy | /r/LocalLLaMA/comments/1rcgdfy/open_source_batch_automator_for_lm_studio_prevent/ | false | false | self | 2 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-23T12:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rcg016/i_bypassed_writing_a_massive_privacy_policy_for/ | Material_Case_892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcg016 | false | null | t3_1rcg016 | /r/LocalLLaMA/comments/1rcg016/i_bypassed_writing_a_massive_privacy_policy_for/ | false | false | null | 1 | null |
Considering installing a local LLM for coding | 9 | Hey everyone,
I like to use AI IDEs, like Cursor or Antigravity, but I'm sick of getting overcharged and constantly hitting my API limits within a week or so.
So I want to get a local LLM and connect it to my IDE, preferably Cursor. Has anyone here done that? Do you think it's worth it? What's your experience... | 2026-02-23T12:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcfwc5/considering_installing_a_local_llm_for_coding/ | rmg97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcfwc5 | false | null | t3_1rcfwc5 | /r/LocalLLaMA/comments/1rcfwc5/considering_installing_a_local_llm_for_coding/ | false | false | self | 9 | null |
Efficient Temporal Embedding Models? | 2 | After using embeddings for almost 2-3 years, I always thought temporality is something we should be able to embed rather than always relying on pre-post filters which first needs a Stage 1 query expander or enricher (llm or sentence transformer or regex based).
While searching for some solutions, I came across this in... | 2026-02-23T12:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rcftrf/efficient_temporal_embedding_models/ | xyzmanas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcftrf | false | null | t3_1rcftrf | /r/LocalLLaMA/comments/1rcftrf/efficient_temporal_embedding_models/ | false | false | self | 2 | null |
Best GPU setup for running 7B-13B models | 0 | **Comment:**
For 7B-13B models, you're looking at a sweet spot where you don't need crazy hardware but still want decent performance. Here's what I've learned:
**Budget option:** RTX 3060 12GB can handle most 7B models comfortably with 4-bit quantization. You'll get ~15-20 tokens/sec on llama.cpp depending on the mo... | 2026-02-23T12:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rcfqrw/best_gpu_setup_for_running_7b13b_models/ | Official_VaultAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcfqrw | false | null | t3_1rcfqrw | /r/LocalLLaMA/comments/1rcfqrw/best_gpu_setup_for_running_7b13b_models/ | false | false | self | 0 | null |
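The post's numbers line up with a simple back-of-envelope VRAM estimate; a minimal sketch, assuming a rough 1.2x overhead factor for KV cache and runtime buffers (an assumption, not a measured constant):

```python
def vram_gb(params_billion, bits, overhead=1.2):
    # bytes per weight = bits / 8; overhead covers KV cache + buffers
    return params_billion * (bits / 8) * overhead

for p in (7, 13):
    print(f"{p}B @ 4-bit ~= {vram_gb(p, 4):.1f} GB")
# 7B @ 4-bit ~= 4.2 GB, 13B @ 4-bit ~= 7.8 GB -> both fit a 12GB RTX 3060,
# with the 13B leaving less headroom for context.
```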
Let AI control your phone via API/MCP, but with safety rules | 0 | Hi everyone!
I am the developer of [MobAI](https://mobai.run). It is an execution layer that lets AI agents control a real mobile device through API or MCP. Agents can send actions like tap, swipe, open app, type text, etc.
But we still cannot fully trust AI.
Even strong models can click the wrong button or press so... | 2026-02-23T11:53:53 | interlap | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcfgd1 | false | null | t3_1rcfgd1 | /r/LocalLLaMA/comments/1rcfgd1/let_ai_control_your_phone_via_apimcp_but_with/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uoffl96ig8lg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=108&crop=smart&auto=webp&s=46807424bd4903ccf895fc009600eb7f7c31f06c', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/uoffl96ig8lg1.png?width=216&crop=smart&auto=web... | ||
Any Ideas for Open Source STT Improvements for Telephony Audio? | 1 | Hello I have telephony audio data in german. 8khz sample rate, variable bit rate down to 8kbs on silence and 50kbs on speech on average.
Working with sota open source models like whisper, qwen, nvidia, etc. I tried different preprocessing steps like rms normalization or peak normalization, removing silence beforehand ... | 2026-02-23T11:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rcf1n8/any_ideas_for_open_source_stt_improvements_for/ | llm-king | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcf1n8 | false | null | t3_1rcf1n8 | /r/LocalLLaMA/comments/1rcf1n8/any_ideas_for_open_source_stt_improvements_for/ | false | false | self | 1 | null |
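For reference, the two preprocessing steps mentioned (resampling 8 kHz telephony audio up to the 16 kHz most STT models expect, plus peak normalization) look roughly like this; librosa/soundfile and the file names are assumptions, not necessarily what the author used:

```python
import librosa
import soundfile as sf

# Load and resample 8 kHz telephony audio to 16 kHz in one step.
y, sr = librosa.load("call_8khz.wav", sr=16000)

# Peak normalization: scale so the loudest sample sits at ~0.95.
peak = max(abs(float(y.max())), abs(float(y.min())), 1e-9)
sf.write("call_16khz.wav", 0.95 * y / peak, 16000)
```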
AI founders/devs: What actually sucks about running inference in production right now? | 0 | Founder doing research here.
Before building anything in AI infra, I'm trying to understand whether inference infrastructure is a real pain, or just something people complain about casually.
If you're running inference in production (LLMs, vision models, embeddings, segmentation, agents, etc.), I'd really value your ... | 2026-02-23T11:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcez7r/ai_foundersdevs_what_actually_sucks_about_running/ | akashpanda1222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcez7r | false | null | t3_1rcez7r | /r/LocalLLaMA/comments/1rcez7r/ai_foundersdevs_what_actually_sucks_about_running/ | false | false | self | 0 | null |
Any Ideas for Open Source STT Improvements for Telephony Audio? | 1 | [removed] | 2026-02-23T11:27:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rceyiz/any_ideas_for_open_source_stt_improvements_for/ | Dry-Environment557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rceyiz | false | null | t3_1rceyiz | /r/LocalLLaMA/comments/1rceyiz/any_ideas_for_open_source_stt_improvements_for/ | false | false | self | 1 | null |
How do you run your local LLMs in your small company offices for n8n etc? | 0 | Like, do you have a server with an NVIDIA card running? Do you have a gaming laptop with a sign "I am an AI server"? A dedicated LLM cube? I just wondered which hardware you all use to run your n8n workflows. Or what you could recommend for about $1200 or 1000€.
| 2026-02-23T10:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcebas/how_do_you_run_your_local_llms_in_your_small/ | dmigowski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcebas | false | null | t3_1rcebas | /r/LocalLLaMA/comments/1rcebas/how_do_you_run_your_local_llms_in_your_small/ | false | false | self | 0 | null |
3 weeks of running qwen2.5:14b in an agentic loop - context management is where everything breaks | 7 | I've been running qwen2.5:14b locally for about 3 weeks as part of an automation pipeline - not chatting with it, but using it to actually do things: read files, make decisions, call tools, write outputs. The hardware part worked fine. What I completely underestimated was context management.
The problem isn't that loc... | 2026-02-23T10:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rcdicv/3_weeks_of_running_qwen2514b_in_an_agentic_loop/ | justserg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcdicv | false | null | t3_1rcdicv | /r/LocalLLaMA/comments/1rcdicv/3_weeks_of_running_qwen2514b_in_an_agentic_loop/ | false | false | self | 7 | null |
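The truncation problem described here usually comes down to some variant of the following: pin the system prompt and evict the oldest turns once a token budget is exceeded. A minimal sketch; the 4-characters-per-token estimate is a crude assumption, and a real setup would use the model's tokenizer:

```python
def est_tokens(msg):
    # Rough heuristic: ~4 characters per token, plus per-message overhead.
    return len(msg["content"]) // 4 + 4

def trim(messages, budget=8000):
    system, rest = messages[0], messages[1:]
    while rest and est_tokens(system) + sum(map(est_tokens, rest)) > budget:
        rest.pop(0)   # evict the oldest non-system turn first
    return [system] + rest
```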
Why is it so hard to find real resources on building AI agents from scratch? | 3 | Iβm trying to learn how to build a real coding AI agent from scratch, not how to use tools like OpenAI Codex or Claude Code, but how to actually engineer something like that myself.
I mean the full system: the agent loop, tool calling (files, terminal, git, grep, lsp, mcp), memory, planning, managing large codebases, ... | 2026-02-23T09:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rcda0u/why_is_it_so_hard_to_find_real_resources_on/ | Creepy_Page566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcda0u | false | null | t3_1rcda0u | /r/LocalLLaMA/comments/1rcda0u/why_is_it_so_hard_to_find_real_resources_on/ | false | false | self | 3 | null |
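For what it's worth, the core of such an agent is smaller than the tooling around it suggests: a loop where the model emits either a JSON action or a final answer. A minimal sketch assuming any `llm(prompt) -> str` chat function; the single `read_file` tool and the JSON protocol are illustrative, not any particular product's design:

```python
import json, pathlib

TOOLS = {"read_file": lambda path: pathlib.Path(path).read_text()[:2000]}

SYSTEM = ('Reply ONLY with JSON: {"tool": "read_file", "args": {"path": "..."}} '
          'to act, or {"answer": "..."} when done.')

def agent(llm, task, max_steps=8):
    transcript = f"{SYSTEM}\nTask: {task}\n"
    for _ in range(max_steps):
        step = json.loads(llm(transcript))
        if "answer" in step:
            return step["answer"]
        observation = TOOLS[step["tool"]](**step["args"])  # execute the tool
        transcript += f"\nObservation: {observation}\n"
```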
OpenClaw vs ZeroClaw vs NullClaw -- for Agentic email personal assistant | 0 | TL;DR - Is scraping enterprise-grade React web apps (read-only) through legitimate accounts feasible in ZeroClaw/NullClaw? I believe it is possible in OpenClaw.
Longer version:
I am just working on a hypothesis that it is possible (and perhaps not entirely unsafe) to build an Agent with reasonable effort that ... | 2026-02-23T09:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd7nr/openclaw_vs_zeroclaw_vs_nullclaw_for_agentic/ | Professional_Row_967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd7nr | false | null | t3_1rcd7nr | /r/LocalLLaMA/comments/1rcd7nr/openclaw_vs_zeroclaw_vs_nullclaw_for_agentic/ | false | false | self | 0 | null |
For narrow vocabulary domains, do we really need RAG? | 1 | **For narrow vocabulary domains, and if the number of files is not too high, how good can a smart file search be? Do we really need RAG for that?** I was going through the LegalBench-RAG dataset, especially the MAUD dataset... I saw their precision was quite low. You generally have entities in queries for these kinds of data, or the vo... | 2026-02-23T09:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd6ne/for_narrow_vocabulary_domains_do_we_really_need/ | maylad31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd6ne | false | null | t3_1rcd6ne | /r/LocalLLaMA/comments/1rcd6ne/for_narrow_vocabulary_domains_do_we_really_need/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h...
Wave Field Transformer V4 - Novel O(n log n) attention architecture, 825M model trained from scratch on 1.33B tokens. Weights on HuggingFace. | 0 | Hey everyone, I've been building a new transformer architecture from scratch called Wave Field Transformer. Instead of standard O(n²) dot-product attention, it uses FFT-based wave interference patterns to achieve O(n log n) complexity.
Model weights: [https://huggingface.co/badaramoni/wave-field-v4-825m](https://huggi... | 2026-02-23T09:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rcd3d2/wave_field_transformer_v4_novel_on_log_n/ | Murky-Sign37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcd3d2 | false | null | t3_1rcd3d2 | /r/LocalLLaMA/comments/1rcd3d2/wave_field_transformer_v4_novel_on_log_n/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HFD5H_uhpjUcGIXLJRL4wtzrSfaa2RXDmrzrZVPzKKg.png?width=108&crop=smart&auto=webp&s=66dfd428e856944e5af122c0acfc537d5588ad8c', 'width': 108}, {'height': 116, 'url': 'h...
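The post doesn't show the wave-interference math itself, but the O(n log n) claim is the same complexity class as FNet-style FFT token mixing, sketched below for intuition; this illustrates the complexity argument only, not the Wave Field architecture:

```python
import torch

def fft_mix(x):
    # x: (batch, seq_len, d_model). A 2-D FFT over the last two dims mixes
    # information across tokens and channels in O(n log n) per sequence,
    # replacing the O(n^2) pairwise dot products of standard attention.
    return torch.fft.fft2(x).real
```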
Can't find any uncensored models on Openrouter that are capable of NSFW talk. | 0 | I'm running an experiment, and it's important that the model not have any kind of guardrails. I'd read that DeepSeek models were uncensored, but all the models I have tried so far have declined, except for grok-4.1-fast, which I don't want to use because they don't have a zero-data-retention policy. Please he... | 2026-02-23T09:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rcctmv/cant_find_any_uncensored_models_on_openrouter/ | CardiologistRoyal198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcctmv | false | null | t3_1rcctmv | /r/LocalLLaMA/comments/1rcctmv/cant_find_any_uncensored_models_on_openrouter/ | false | false | nsfw | 0 | null |
I made an interactive timeline of 171 LLMs (2017β2026) | 43 | Built a visual timeline tracking every major Large Language Model β from the original Transformer paper to GPT-5.3 Codex.
171 models, 54 organizations. Filterable by open/closed source, searchable, with milestones highlighted.
Some stats from the data:
- 2024-2025 was the explosion: 108 models in two years
- Open sou... | 2026-02-23T09:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rccsjg/i_made_an_interactive_timeline_of_171_llms/ | asymortenson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rccsjg | false | null | t3_1rccsjg | /r/LocalLLaMA/comments/1rccsjg/i_made_an_interactive_timeline_of_171_llms/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/nE_grEHwSSIfNwD7VLd7BntnaEDuiqccEGhMBhvPJKQ.jpeg?width=108&crop=smart&auto=webp&s=04f7c97ead7d2851efd8f8f5097d58bcf5541c02', 'width': 108}, {'height': 114, 'url': '... |
When RMSNorm Fails: The Geometric Collapse of Unstable LLMs | 15 | Every major modern LLM has quietly dropped standard Layer Normalization in favor of RMSNorm. By removing the explicit mean-centering step, we save compute under the assumption that a network's variance (**σ**) will always dominate its mean shift (**μ**).
In my [blog](https://sifal.social/posts/Why-Modern-LLMs-Dropped-... | 2026-02-23T09:09:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rccn85/when_rmsnorm_fails_the_geometric_collapse_of/ | Accurate-Turn-2675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rccn85 | false | null | t3_1rccn85 | /r/LocalLLaMA/comments/1rccn85/when_rmsnorm_fails_the_geometric_collapse_of/ | false | false | 15 | null | |
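To make the geometric point concrete, the two norms differ by exactly one step, sketched here in minimal form: RMSNorm only rescales by the σ-like magnitude, so any mean shift μ passes through untouched.

```python
import torch

def layer_norm(x, g, b, eps=1e-6):
    mu = x.mean(-1, keepdim=True)
    var = x.var(-1, keepdim=True, unbiased=False)
    return g * (x - mu) / torch.sqrt(var + eps) + b

def rms_norm(x, g, eps=1e-6):
    rms = torch.sqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    return g * x / rms   # no mean subtraction: mu survives normalization
```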
When RMSNorm Fails: The Geometric Collapse of Unstable LLMs | 1 | 2026-02-23T09:06:23 | https://sifal.social/posts/Why-Modern-LLMs-Dropped-Mean-Centering-(And-Got-Away-With-It)/ | Accurate-Turn-2675 | sifal.social | 1970-01-01T00:00:00 | 0 | {} | 1rcclaz | false | null | t3_1rcclaz | /r/LocalLLaMA/comments/1rcclaz/when_rmsnorm_fails_the_geometric_collapse_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ihS-tlPjVV8ihasHuE-tqX-NDYxn0pl-im9PQaQOrGE.png?width=108&crop=smart&auto=webp&s=3ecf953dfffef5cb67980986a75ffcc4da0c81a6', 'width': 108}, {'height': 121, 'url': 'h... | ||
An open-source framework to achieve Gemini 3 Deep Think / GPT-5.2 Pro level performance with local models scaffolding | 225 | 2026-02-23T08:33:22 | https://www.reddit.com/gallery/1rcc2fa | Ryoiki-Tokuiten | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rcc2fa | false | null | t3_1rcc2fa | /r/LocalLLaMA/comments/1rcc2fa/an_opensource_framework_to_achieve_gemini_3_deep/ | false | false | 225 | null | ||
Great study published by Anthropic about Claude code usage and agent | 1 | [removed] | 2026-02-23T08:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rcc1r2/great_study_published_by_anthropic_about_claude/ | Any_Word_4657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcc1r2 | false | null | t3_1rcc1r2 | /r/LocalLLaMA/comments/1rcc1r2/great_study_published_by_anthropic_about_claude/ | false | false | 1 | null | |
what are some top OCR models that can deal with handwritten text and mathematical formulas? | 2 | what are some top OCR models that can deal with handwritten text and mathematical formulas?
So far I have tested PaddleOCR. It was good at dealing with handwritten text, but it is not so great when it comes to mathematical symbols.
I tried to run DeepSeek OCR, but the problem is I do not have a gr... | 2026-02-23T08:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcbold/what_are_some_top_ocr_models_that_can_deal_with/ | starman_hero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcbold | false | null | t3_1rcbold | /r/LocalLLaMA/comments/1rcbold/what_are_some_top_ocr_models_that_can_deal_with/ | false | false | self | 2 | null |
8 DGX cluster by Alex Ziskind: easily the most insane local LLM cluster I've ever seen | 0 | 2026-02-23T08:04:55 | https://youtu.be/QJqKqxQR36Y?si=xNmleYOlNmVszwoD | richardanaya | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1rcbm66 | false | {'oembed': {'author_name': 'Alex Ziskind', 'author_url': 'https://www.youtube.com/@AZisk', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QJqKqxQR36Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi... | t3_1rcbm66 | /r/LocalLLaMA/comments/1rcbm66/8_dgx_cluster_by_alex_ziskind_easily_the_most/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sKZTCD1CDVqGd0eBOl6rwr401Ry7M9y9AoId4Jhi-kU.jpeg?width=108&crop=smart&auto=webp&s=8d31f25d392d4b99c5050e4ad54f28f69fc59f54', 'width': 108}, {'height': 162, 'url': '...
February 23 | 0 | Create a video where naked girls with assault rifles congratulate men on February 23 | 2026-02-23T07:54:05 | Express_Slice_4134 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rcbftz | false | null | t3_1rcbftz | /r/LocalLLaMA/comments/1rcbftz/23_февраля/ | false | false | nsfw | 0 | {'enabled': True, 'images': [{'id': 'mtmbw9yv97lg1', 'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=108&crop=smart&auto=webp&s=12d998ba8a07e6a9582a434bc9018f40aeea1f81', 'width': 108}, {'height': 314, 'url': 'https://preview.redd.it/mtmbw9yv97lg1.jpeg?width=216&crop=smart&auto=...
Anthropic reveals an interesting study on how Claude Code works and how it is used. | 1 | [removed] | 2026-02-23T07:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rcbbuw/anthropic_reveals_an_interesting_study_on_how/ | Any_Word_4657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcbbuw | false | null | t3_1rcbbuw | /r/LocalLLaMA/comments/1rcbbuw/anthropic_reveals_an_interesting_study_on_how/ | false | false | 1 | null | |
Looking for an MCP that semantically searches for working snippets of code | 4 | Often, Claude still messes up on common frontend patterns. When that happens, sometimes I can give Claude documentation (eg for implementing supabase auth). But other times, docs don't have the answer (eg for swift / macOS, unfocusing an input box when the user clicks elsewhere). The code with the relevant patterns is ... | 2026-02-23T07:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rcb1n2/looking_for_an_mcp_that_semantically_searches_for/ | babble_prune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcb1n2 | false | null | t3_1rcb1n2 | /r/LocalLLaMA/comments/1rcb1n2/looking_for_an_mcp_that_semantically_searches_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PHHmpUWh7Nu980F4TL4Nrf1N81gK-ToE3Oy-Z8NNCXQ.png?width=108&crop=smart&auto=webp&s=e3fcedf5c7ae20c17a5138eb9300ebb43213a171', 'width': 108}, {'height': 112, 'url': 'h... |
MiniMax 2.5 on DGX SPARK system. | 17 | So I've been working with MiniMax 2.5 (MiniMax-M2.5-UD-Q3_K_XL),
and I'm amazed by this model; the quality of code is just on another level.
My issue is that I can only work with it at a maximum of 65K context (bigger than that, it crashes on load - out of memory); normal usage lands at 125GB RAM usage (which is too much).... | 2026-02-23T07:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rcalyu/minimax_25_on_dgx_spark_system/ | DOOMISHERE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcalyu | false | null | t3_1rcalyu | /r/LocalLLaMA/comments/1rcalyu/minimax_25_on_dgx_spark_system/ | false | false | self | 17 | null |
64GB Mac: Local Agentic Coding with Qwen3 & Roo Code | 2 | I tried agentic coding with a local LLM, using my old dating app project (Next.js).
My hardware: Mac Studio (M2 Max, 38-core GPU, 64GB RAM) - on home network.
Since the coding was handled on a separate laptop, the Mac Studio was dedicated entirely to running the LLM.
Finding a model capable of agentic coding on 64GB of... | 2026-02-23T07:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rcalyh/64gb_mac_local_agentic_coding_with_qwen3_roo_code/ | benevbright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcalyh | false | null | t3_1rcalyh | /r/LocalLLaMA/comments/1rcalyh/64gb_mac_local_agentic_coding_with_qwen3_roo_code/ | false | false | self | 2 | null |
64GB Mac: Local agentic coding demo with Qwen3-coder-next & Roo Code | 1 | [removed] | 2026-02-23T06:59:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rcajhc/64gb_mac_local_agentic_coding_demo_with/ | benevbright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rcajhc | false | null | t3_1rcajhc | /r/LocalLLaMA/comments/1rcajhc/64gb_mac_local_agentic_coding_demo_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gbr8d9vtg3bvzm9hxI6FpFOg9zZl5ivOerp5A9tHf-8.jpeg?width=108&crop=smart&auto=webp&s=14fd58beb7f871c809bb05b85a9bdc4cdda8b070', 'width': 108}, {'height': 162, 'url': '... |
Which model for meeting transcript summarisation? | 9 | Hello
I'm using Qwen3 30B A3B 2507 (4-bit) with LM Studio, feeding it meeting transcripts for summarization.
Does this seem like an okay model for the task? I'm feeling a bit overwhelmed by all the options; I'm only using this one because a cloud AI suggested it, and that suggestion might not be current.
I was using Claude API with amazing ... | 2026-02-23T06:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rca5p7/which_model_for_meeting_transcript_summarisation/ | peglegsmeg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rca5p7 | false | null | t3_1rca5p7 | /r/LocalLLaMA/comments/1rca5p7/which_model_for_meeting_transcript_summarisation/ | false | false | self | 9 | null |
Kitten TTS V0.8 Running in the Browser | 3 | Hey everyone,
took the recent release of Kitten v0.8 as an opportunity to explore handling audio data in the browser.
-> A minimal Next.js app of Kitten TTS V0.8 running in the browser
Features/Issues:
* All processing done on the client-side
* Supports Nano/Micro/Mini Model, fetched from HF (+voice embeddings), ca... | 2026-02-23T06:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/ | HatEducational9965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc9qvb | false | null | t3_1rc9qvb | /r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/ | false | false | 3 | null | |
Corporate Environment Setup | 1 | Within a large enterprise environment, we currently have all the open source models available via a typical chat page. All data is fully contained within our network.
We have an API where something like Opencode could use for cli based agentic workflows.
My question is, could we make this remotely to something l... | 2026-02-23T05:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rc9b26/corporate_environment_setup/ | drussell024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc9b26 | false | null | t3_1rc9b26 | /r/LocalLLaMA/comments/1rc9b26/corporate_environment_setup/ | false | false | self | 1 | null |
Wave Field LLM O(n log n) Successfully Scales to 1B Parameters | 91 | Just completed full pretraining of **Wave Field LLM (v4) at 1B scale**.
**Training Summary:**
* **Parameters:** 825M
* **Total Tokens:** 1.33B
* **Final PPL:** 72.2
* **Best PPL:** 72.2
* **Final Accuracy:** 27.1%
* **Training Time:** 13.2 hours
This isn't a small 30M or 124M experiment anymore.
Wave Field is now:
... | 2026-02-23T05:44:29 | Murky-Sign37 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc97qf | false | null | t3_1rc97qf | /r/LocalLLaMA/comments/1rc97qf/wave_field_llm_on_log_n_successfully_scales_to_1b/ | false | false | 91 | {'enabled': True, 'images': [{'id': '6m7q2vzlm6lg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=108&crop=smart&auto=webp&s=f36d4a9ba2f9b73a10e072911c6ec5e7df7afda8', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/6m7q2vzlm6lg1.png?width=216&crop=smart&auto=webp... | ||
# A 4B parameter model just held a 21-turn conversation with coherent personality, self-naming, and philosophical depth - no fine-tuning of base weights | 0 | I've been building an adaptive state system that sits on top of a frozen LLM (qwen3-4b via Ollama) and gives it persistent memory, learned preferences, and behavioral rules - without touching the model's weights.
Yesterday it held a 21-turn live conversation where it:
- Named itself "Orac" (from Blake'... | 2026-02-23T05:38:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rc93pt/a_4b_parameter_model_just_held_a_21turn/ | Temporary_Bill4163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc93pt | false | null | t3_1rc93pt | /r/LocalLLaMA/comments/1rc93pt/a_4b_parameter_model_just_held_a_21turn/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QfsRQElEoVEPeB6Wd5tBszKrLwmNZpCLtkCGT9du1xA.png?width=108&crop=smart&auto=webp&s=8d9f02245d6e46c95678ce86d57b0069b9b77d18', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone else feel like the hardest part of running multiple agents isn't the agents - it's coordinating them? | 0 | Every night over the last 3 months, I've been running a setup with 3 specialized agents - one for research & review (Claude Code subagents with a style checker), one pulling data from APIs into Google Sheets, one summarizing Slack/RSS feeds daily.
Each one is legitimately good at its job. Success rates went from ~62% to ... | 2026-02-23T05:30:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8yeu/anyone_else_feel_like_the_hardest_part_of_running/ | Fastly-Me-2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8yeu | false | null | t3_1rc8yeu | /r/LocalLLaMA/comments/1rc8yeu/anyone_else_feel_like_the_hardest_part_of_running/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/45gLgFbUiYuvezTtQBFWHHYKipYnvTmRCP_HkfQ6lQI.jpeg?width=108&crop=smart&auto=webp&s=28977294d33881a9fa0f41a6d4d865203258f7e5', 'width': 108}, {'height': 82, 'url': 'h...
The actual memory math for Llama-70B with 1M context | 0 | Did the math on what it takes to run Llama-70B with 1M token context. Numbers are wild.
**Model weights (BF16):** 140 GB
**KV cache with GQA:**
- 8 KV heads × 128 dim × 2 (K+V) × 2 bytes = 4KB per token per layer
- 1M tokens × 80 layers = 320 GB
**Attention matrix (naive):**
- Shape: [1, 64,... | 2026-02-23T05:19:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8qtc/the_actual_memory_math_for_llama70b_with_1m/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8qtc | false | null | t3_1rc8qtc | /r/LocalLLaMA/comments/1rc8qtc/the_actual_memory_math_for_llama70b_with_1m/ | false | false | self | 0 | null |
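The arithmetic above, reproduced as code with explicit units (decimal GB); the shape assumptions (80 layers, 64 query heads, 8 KV heads, head_dim 128, BF16) come from the post:

```python
layers, kv_heads, q_heads, head_dim, bytes_per = 80, 8, 64, 128, 2
ctx = 1_000_000

weights_gb = 70e9 * bytes_per / 1e9                 # 140.0 GB
kv_per_token = kv_heads * head_dim * 2 * bytes_per  # K+V = 4096 bytes/layer
kv_gb = ctx * layers * kv_per_token / 1e9           # ~327.7 GB
# (the post's 320 GB rounds 4096 bytes down to "4 KB" before multiplying)
attn_elems = q_heads * ctx * ctx                    # shape [1, 64, 1M, 1M]
print(weights_gb, kv_gb, f"{attn_elems:.1e} elements")  # 6.4e+13 = 64 trillion
```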
The actual memory math for Llama-70B with 1M context | 1 | Did the math on what it takes to run Llama-70B with 1M token context. Numbers are wild.
**Model weights (BF16):** 140 GB
**KV cache with GQA:**
- 8 KV heads × 128 dim × 2 (K+V) × 2 bytes = 4KB per token per layer
- 1M tokens × 80 layers = 320 GB
**Attention matrix (naive):**
- Shape: [1, 64, 1M, 1M] = 64 trillion el... | 2026-02-23T05:18:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8q29/the_actual_memory_math_for_llama70b_with_1m/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8q29 | false | null | t3_1rc8q29 | /r/LocalLLaMA/comments/1rc8q29/the_actual_memory_math_for_llama70b_with_1m/ | false | false | self | 1 | null |
Why Llama-70B with 1M context needs 128 TB of memory (and how we avoid it) | 1 | [removed] | 2026-02-23T05:14:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8nid/why_llama70b_with_1m_context_needs_128_tb_of/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8nid | false | null | t3_1rc8nid | /r/LocalLLaMA/comments/1rc8nid/why_llama70b_with_1m_context_needs_128_tb_of/ | false | false | self | 1 | null |
Why Llama-70B with 1M context needs 128 TB of memory (and how we avoid it) | 1 | [removed] | 2026-02-23T05:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8kov/why_llama70b_with_1m_context_needs_128_tb_of/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8kov | false | null | t3_1rc8kov | /r/LocalLLaMA/comments/1rc8kov/why_llama70b_with_1m_context_needs_128_tb_of/ | false | false | self | 1 | null |
I wrote an illustrated guide explaining why long-context inference is so hard (with animations) | 1 | [removed] | 2026-02-23T05:02:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rc8eti/i_wrote_an_illustrated_guide_explaining_why/ | Leading_Wrangler_708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc8eti | false | null | t3_1rc8eti | /r/LocalLLaMA/comments/1rc8eti/i_wrote_an_illustrated_guide_explaining_why/ | false | false | self | 1 | null |
What GPU do you recommend for iterative AI training? | 14 | I've racked up a disgusting bill with runpod and think it is time to get my own workstation.
I usually choose GPUs based on the model Iβm working with (e.g., RTX Pro 6000 Blackwell for LLMs/VLMs/diffusion, 4090 for smaller TCNs/LSTMs), but honestly I often pick higher-end GPUs more for throughput than VRAM.
So I'm c... | 2026-02-23T04:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rc88vr/what_gpu_do_you_recommend_for_iterative_ai/ | EliHusky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc88vr | false | null | t3_1rc88vr | /r/LocalLLaMA/comments/1rc88vr/what_gpu_do_you_recommend_for_iterative_ai/ | false | false | self | 14 | null |
Divorce attorney built a 26-GPU / 532GB VRAM cluster to automate my practice while keeping client data local. Roast my build / help me figure out what to run | 0 | **TL;DR:** Divorce lawyer, can't send client files to the cloud (attorney-client privilege), built a 26-GPU / 532GB VRAM cluster across 3 nodes with InfiniBand. Building legal practice management software that runs on local LLMs. Specs and software details below. Looking for model recs, inference framework advice, and ... | 2026-02-23T04:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rc7ro3/divorce_attorney_built_a_26gpu_532gb_vram_cluster/ | TumbleweedNew6515 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc7ro3 | false | null | t3_1rc7ro3 | /r/LocalLLaMA/comments/1rc7ro3/divorce_attorney_built_a_26gpu_532gb_vram_cluster/ | false | false | self | 0 | null |
Best model for agentic tool calling, iGPU / 16GB Integrated RAM? | 1 | What title says,
I am trying out Nanobot using local inference. The first challenge was extremely slow prompt processing, which I worked around by going to a lower param count (was using Qwen3 3B, etc.; now settled on LFM2 8B A1B), Q4 quant.
The engine almost invariably answers by hallucinating a made-up response (like the sample bel... | 2026-02-23T04:01:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rc788c/best_model_for_agentic_tool_calling_igpu_16gb/ | ElSrJuez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc788c | false | null | t3_1rc788c | /r/LocalLLaMA/comments/1rc788c/best_model_for_agentic_tool_calling_igpu_16gb/ | false | false | self | 1 | null |
llama.cpp now doesn't need V cache for all MLA models (ie DS, Kimi, etc) | 1 | [removed] | 2026-02-23T03:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rc6t4g/llamacpp_now_doesnt_need_v_cache_for_all_mla/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc6t4g | false | null | t3_1rc6t4g | /r/LocalLLaMA/comments/1rc6t4g/llamacpp_now_doesnt_need_v_cache_for_all_mla/ | false | false | self | 1 | null |
Open source only AI competition with a 1 BTC grand prize - March 1 | 1 | 2026-02-23T03:39:36 | https://botgames.io | pizzy00 | botgames.io | 1970-01-01T00:00:00 | 0 | {} | 1rc6s5e | false | null | t3_1rc6s5e | /r/LocalLLaMA/comments/1rc6s5e/open_source_only_ai_competition_with_a_1_btc/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/ZIcHLkb6HhAtHnoBuXKNBKw4xmoeMq6G6lbCWs45O-M.png?width=108&crop=smart&auto=webp&s=a57d19fd4b1532d66b8b3b04a1495195cfba17ac', 'width': 108}, {'height': 288, 'url': '... | ||
Advice for 4 gpu systems rtx 4090 48gb | 4 | Hello, I would like to seek some advice. Does anyone know if the RTX 4090 48GB modded Chinese version does well for multi-GPU training? I know P2P is not supported, and resizable BAR is unsupported as well.
But are there any hidden catches that make it significantly worse than, say, an Ada 6000 with nvidia-smi topo of NODE or... | 2026-02-23T03:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rc6lv8/advice_for_4_gpu_systems_rtx_4090_48gb/ | ThatsMyNameDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc6lv8 | false | null | t3_1rc6lv8 | /r/LocalLLaMA/comments/1rc6lv8/advice_for_4_gpu_systems_rtx_4090_48gb/ | false | false | self | 4 | null |
Feels like magic. A local gpt-oss 20B is capable of agentic work | 448 | I gave the [zeroclaw](https://github.com/zeroclaw-labs/zeroclaw) agent a try (instead of the bloated and overhyped one). After a few hours of fuckery with configs it's finally useful. Both main and embeddings models are running locally.
I carefully read what it's trying to execute in shell, and permit only [relatively... | 2026-02-23T03:18:16 | Vaddieg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc6c8m | false | null | t3_1rc6c8m | /r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/ | false | false | 448 | {'enabled': True, 'images': [{'id': 'b27xdhewq5lg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=108&crop=smart&auto=webp&s=6625adfd3c7af8ad3d066553606b1111db4967f7', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/b27xdhewq5lg1.png?width=216&crop=smart&auto=web...
Measure accuracy of models on-device | 2 | Curious, how do you measure the accuracy of a model? I am trying to get the trace of a model using torch.jit.trace and torch.export for Hugging Face and want to compare the accuracy of the traced model with that of the original model. Is SNR a good metric for measuring the model's correctness? | 2026-02-23T02:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rc5xp9/measure_accuracy_of_models_ondevice/ | Motor_Salt1336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc5xp9 | false | null | t3_1rc5xp9 | /r/LocalLLaMA/comments/1rc5xp9/measure_accuracy_of_models_ondevice/ | false | false | self | 2 | null |
My first major Open-Source project. A local AI Orchestrator that you can control via WhatsApp. | 1 | [removed] | 2026-02-23T02:49:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rc5q7z/my_first_major_opensource_project_a_local_ai/ | AcrobaticOffer9824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc5q7z | false | null | t3_1rc5q7z | /r/LocalLLaMA/comments/1rc5q7z/my_first_major_opensource_project_a_local_ai/ | false | false | 1 | null | |
Qwen3's most underrated feature: Voice embeddings | 624 | Did you know that Qwen3 TTS utilizes voice embedding for voice cloning?
Your voice is turned into a vector of 1024 dimensions (or 2048 for 1.7b), and based on this vector alone you can get your custom voice.
But the coolest part is that this means that you can use math to modify voices, average voices. You can s... | 2026-02-23T02:28:32 | k_means_clusterfuck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc59ze | false | null | t3_1rc59ze | /r/LocalLLaMA/comments/1rc59ze/qwen3s_most_underrated_feature_voice_embeddings/ | false | false | 624 | {'enabled': True, 'images': [{'id': 'zmcs7iysm5lg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=108&crop=smart&auto=webp&s=3f956da543358c07192d9cd1e4fe5caa0334a900', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/zmcs7iysm5lg1.png?width=216&crop=smart&auto=web... | ||
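The "math on voices" idea reduces to vector arithmetic on those embeddings. A sketch with placeholder vectors; how Qwen3-TTS extracts and consumes the 1024-d embedding is not shown here, and the re-normalization step is an assumption:

```python
import numpy as np

def blend(emb_a: np.ndarray, emb_b: np.ndarray, alpha: float = 0.5):
    # Linear interpolation between two voices, then rescale to unit norm.
    v = alpha * emb_a + (1.0 - alpha) * emb_b
    return v / np.linalg.norm(v)

voice_a, voice_b = np.random.randn(1024), np.random.randn(1024)
new_voice = blend(voice_a, voice_b, alpha=0.3)  # 30% voice A, 70% voice B
```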
Seed 1.6 Flash was the harshest AI judge in a 10-model blind eval β and that strictness correlated with better writing output | 0 | Seed 1.6 Flash averaged 8.64/10 when scoring other models in a blind peer evaluation I ran, making it the strictest judge out of 10 frontier models. It penalized vague timelines and missing cost analysis while Grok 4.1 Fast handed out 9.8+ to 8 of 9 models like participation trophies. The task was persuasive business w... | 2026-02-23T02:24:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rc56wr/seed_16_flash_was_the_harshest_ai_judge_in_a/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc56wr | false | null | t3_1rc56wr | /r/LocalLLaMA/comments/1rc56wr/seed_16_flash_was_the_harshest_ai_judge_in_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/knpPtQl5C1w_Gu-L-FVJkv8o0UKxMH9K2Oxdl2WkfLU.jpeg?width=108&crop=smart&auto=webp&s=daa5fe0c8568a1d1bba5fc2ee175e21c39794444', 'width': 108}, {'height': 122, 'url': '... |
Open-sourcing a Claude Code toolkit - separates execution layer from intelligence layer | 2 | Built a toolkit for Claude Code that cleanly separates the execution layer from the intelligence layer. Thought the LLM dev community here might find it useful.
Key idea: decouple AI reasoning from task execution for more control, better debugging, and cleaner architecture.
Repo: [https://github.com/intellegix/cl... | 2026-02-23T02:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rc50qi/opensourcing_a_claude_code_toolkit_separates/ | Agile_Detective4294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc50qi | false | null | t3_1rc50qi | /r/LocalLLaMA/comments/1rc50qi/opensourcing_a_claude_code_toolkit_separates/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PZa4hw5xrcGzW-hK8DQsWL4eQY70WfMo-cb0nclIG7I.png?width=108&crop=smart&auto=webp&s=a43920803e0480485f46b2fd4e9e6bd047dcf463', 'width': 108}, {'height': 108, 'url': 'h... |
Reasons for using local LLM as an individual developer | 0 | I know some companies would prefer to deploy their own LLM locally for the sake of **confidentiality**. Now assume that you are an individual developer: would you choose local AI, and why? (Assuming you don't demand data security.) | 2026-02-23T02:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4v7q/reasons_for_using_local_llm_as_an_individual/ | Fred_Watermelon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4v7q | false | null | t3_1rc4v7q | /r/LocalLLaMA/comments/1rc4v7q/reasons_for_using_local_llm_as_an_individual/ | false | false | self | 0 | null |
How are you guys handling security for Strands Agents in production? Building an open-source security layer for AWS Strands Agents am I solving a real problem or overthinking it? | 0 | I've been building with AWS Strands Agents and really like the SDK.
As I started thinking about giving agents access to a DB to execute SQL,
I kept asking myself: what's the actual safety net here?
I know models are getting better at following instructions and Bedrock Guardrails exist for content filtering.
But from... | 2026-02-23T01:52:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4hny/how_are_you_guys_handling_security_for_strands/ | jack_ll_trades | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4hny | false | null | t3_1rc4hny | /r/LocalLLaMA/comments/1rc4hny/how_are_you_guys_handling_security_for_strands/ | false | false | self | 0 | null |
Sparrow as controller to more complex systems | 1 | I am an engineer who works in the development of medical imaging systems. It really does seem that this technology (Sparrow + microcontroller) could be used to greatly simplify the user interface of complex imaging systems, especially portable, battery powered ones. So instead of knowing every function in every sub-men... | 2026-02-23T01:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rc4aqa/sparrow_as_controller_to_more_complex_systems/ | LeScherd5929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc4aqa | false | null | t3_1rc4aqa | /r/LocalLLaMA/comments/1rc4aqa/sparrow_as_controller_to_more_complex_systems/ | false | false | self | 1 | null |
Llama 3.2 1B categorizes in native JSON mode | 0 | Running a 3-layer system in production: shell script captures last 50 messages → Llama 3.2 1B categorizes in native JSON mode → filer writes to project-specific markdown files with a 500-line cap. Runs via launchd, survives restarts, costs $0/month. Full writeup with scripts at [magic.naption.ai/pipeline](http://magic.... | 2026-02-23T01:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rc48vb/llama_32_1b_categorizes_in_native_json_mode/ | Sad-Fly-969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc48vb | false | null | t3_1rc48vb | /r/LocalLLaMA/comments/1rc48vb/llama_32_1b_categorizes_in_native_json_mode/ | false | false | self | 0 | null |
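The categorization step could look roughly like this, assuming the 1B model is served by Ollama (the post doesn't say which runtime it uses); Ollama's /api/generate accepts format="json", which constrains output to valid JSON:

```python
# Sketch of the middle stage: classify the last 50 messages with a local 1B model.
# Assumes an Ollama server on the default port; adjust for your runtime.
import json, requests

def categorize(messages: list[str]) -> dict:
    prompt = (
        "Classify the following chat excerpt into one of: code, research, ops.\n"
        'Respond as JSON: {"category": ..., "summary": ...}\n\n'
        + "\n".join(messages[-50:])  # last 50 messages, as in the post
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2:1b", "prompt": prompt,
              "format": "json", "stream": False},
        timeout=120,
    )
    return json.loads(resp.json()["response"])
```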
Imagine your hardware and LLMs as LEGO blocks; play around to visualise what you can expect out of your rig!! (Built with bugs, lots to improve) | 1 | [removed] | 2026-02-23T01:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rc48p7/imagine_your_hardware_and_llms_as_lego_blocks/ | Technical_Drawer_854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc48p7 | false | null | t3_1rc48p7 | /r/LocalLLaMA/comments/1rc48p7/imagine_your_hardware_and_llms_as_lego_blocks/ | false | false | self | 1 | null |
After many contributions, Crane now officially supports Qwen3-TTS! | 2 | If you're building local AI apps and feel stuck between **slow PyTorch inference** and **complex C++ llama.cpp integrations**, you might find this interesting.
I've been working on **Crane** - a pure Rust inference engine built on Candle.
The goal is simple:
> Make local LLM / VLM / TTS / OCR inference fast, port... | 2026-02-23T01:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rc46nx/after_many_contributions_craft_crane_now/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc46nx | false | null | t3_1rc46nx | /r/LocalLLaMA/comments/1rc46nx/after_many_contributions_craft_crane_now/ | false | false | self | 2 | null |
Flexible Multiagent Feature in Codex! | 0 | I have been experimenting with the new multiagent feature in Codex, and I appreciate how flexible it is.
Each subagent can have its own [configuration file](https://developers.openai.com/codex/config-reference), which means you can assign a different model, even different LLM engines, and configure different features ... | 2026-02-23T01:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rc3pol/flexible_multiagent_feature_in_codex/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc3pol | false | null | t3_1rc3pol | /r/LocalLLaMA/comments/1rc3pol/flexible_multiagent_feature_in_codex/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/IanbACn0ZMJMMsnYfmcP1C692OFMB1do21sw5Lbo-80.png?width=108&crop=smart&auto=webp&s=e3f265b33937cdd7d282a3b805d8b3aca8aecca8', 'width': 108}, {'height': 81, 'url': 'ht... |
Nanbeige4.1-3B Ignoring Prompt | 1 | (very new to the local LLM scene, sorry if I'm not providing all the details I need)
[https://huggingface.co/bartowski/Nanbeige\_Nanbeige4-3B-Thinking-2511-GGUF](https://huggingface.co/bartowski/Nanbeige_Nanbeige4-3B-Thinking-2511-GGUF)
Using [Jan.AI](http://Jan.AI) to load the GGUFs, I tried **Q5\_K\_S** and **I... | 2026-02-23T01:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rc3osx/nanbeige413b_ignoring_prompt/ | lagoon-nebula | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc3osx | false | null | t3_1rc3osx | /r/LocalLLaMA/comments/1rc3osx/nanbeige413b_ignoring_prompt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yqfU0frG2dIOhklObQqqcWd5y63p4sG-KpMR6doY-Q8.png?width=108&crop=smart&auto=webp&s=b7c27eef4b27597cac65968d5f3bb1d8ec3562ec', 'width': 108}, {'height': 116, 'url': 'h... |
Super New to Godot, used Claude Code/gpt-oss-120b locally to help me vibecode a simple platformer game about a grumpy mage who follows you around making fun of you lmao. | 201 | Yeah, I was bored so I spent the last two weeks experimenting with vibecoding with local LLMs, namely gpt-oss-120b.
I started with Cline, didn't like it at all because it was overheating my GPU while giving back too little. Codex was even worse, locally, leading to weird CPU switches mid-generation when there was supp... | 2026-02-23T01:13:04 | https://v.redd.it/jl31wp5085lg1 | swagonflyyyy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc3naj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jl31wp5085lg1/DASHPlaylist.mpd?a=1774401209%2CNTM4YjViOGZjMDE3YTEyMWJlNDlhOTQ5ODJkZmU4MWNiNWI1YTcyOWUxNmFkNWYwOTkwMDJlMjhiNWU2ZTU5Yg%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/jl31wp5085lg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rc3naj | /r/LocalLLaMA/comments/1rc3naj/super_new_to_godot_used_claude_codegptoss120b/ | false | false | 201 | {'enabled': False, 'images': [{'id': 'MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MmJ6MGRjNjA4NWxnMR3Al36Nr886FX7jQ_P96fNg8PSf4Zsku92kjG2XN_qv.png?width=108&crop=smart&format=pjpg&auto=webp&s=1ecbd6a2c24e4e1545bcc1fbca87afb318a62... | |
I watched a PEFT/continual-learning paper video 3 hours ago… and accidentally shipped a repo (CASCADES) | 1 | So yeah - saw a video summarizing a continual PEFT research paper, went down the rabbit hole, and in \~3 hours I ended up drafting + implementing a full "continual PEFT for local LLMs" meta-architecture and pushed it to GitHub.
CASCADES (high level):
\- Shared dynamic adapter subspace per layer: ΔW = U · S\_t · Vᵀ ... | 2026-02-23T00:45:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rc310b/i_watched_a_peftcontinuallearning_paper_video_3/ | Bender-1011001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc310b | false | null | t3_1rc310b | /r/LocalLLaMA/comments/1rc310b/i_watched_a_peftcontinuallearning_paper_video_3/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Im_u0pDMzPjCu-PGzraL6fMmeymsiGJ52SPK40qwubM.png?width=108&crop=smart&auto=webp&s=9ea9cb6778c3cc040b533b404b92be015612a379', 'width': 108}, {'height': 108, 'url': 'h... |
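Reading the (truncated) description, ΔW = U · S_t · Vᵀ suggests shared per-layer factors U and V spanning a low-rank subspace, with only a small per-task diagonal S_t being trained. A minimal PyTorch sketch of that reading, purely my reconstruction rather than the repo's code:

```python
# Hypothetical sketch of a shared-subspace adapter: U, V are shared per layer;
# each task t learns only a rank-sized diagonal S_t inside that subspace.
import torch
import torch.nn as nn

class SharedSubspaceAdapter(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int, n_tasks: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)  # shared across tasks
        self.V = nn.Parameter(torch.randn(d_in, rank) * 0.02)   # shared across tasks
        self.S = nn.Parameter(torch.zeros(n_tasks, rank))       # per-task diagonals

    def delta(self, task: int) -> torch.Tensor:
        # ΔW_t = U · diag(S_t) · Vᵀ
        return self.U @ torch.diag(self.S[task]) @ self.V.T

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        return x @ self.delta(task).T

adapter = SharedSubspaceAdapter(d_in=512, d_out=512, rank=16, n_tasks=4)
y = adapter(torch.randn(2, 512), task=1)  # apply task 1's low-rank update
```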
Claude and Codex are close to finishing their tasks, but you have to move the situation along | 0 | 2026-02-23T00:41:23 | https://v.redd.it/pe9pbfum45lg1 | AromaticBombay | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc2xi2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pe9pbfum45lg1/DASHPlaylist.mpd?a=1774399305%2CZDFkN2U2MTQ0ZjgyMTVkYjQ1N2RjM2E4MTg2OTkyMjBhOGM5NjQ1ZjE2ODg4YjIzMTY3N2EzY2NjNjhlZGY1Yg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/pe9pbfum45lg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rc2xi2 | /r/LocalLLaMA/comments/1rc2xi2/claude_and_codex_are_close_to_finish_their_tasks/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/N3BnbTJtcW00NWxnMX8kqFpu6puQQzfH_l8-SuDrV1vjzRJCs740w0z9DELl.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=9d6a955addc61c43bb063ec9f91808400c8... |
I built | 1 | [deleted] | 2026-02-23T00:18:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rc2epl | false | null | t3_1rc2epl | /r/LocalLLaMA/comments/1rc2epl/i_built/ | false | false | default | 1 | null | ||
My real-world Qwen3-code-next local coding test. So, is it the next big thing? | 97 | So yesterday I put the Q8 MLX on my 128GB Mac Studio Ultra and wired it to Qwen Code. Fits there with a huge amount to spare. The first tests were promising - it basically did everything I asked: read file, write file, browse web, check system time... blah, blah.
Now the real task:
I decided on YOLO mode to rewri... | 2026-02-22T23:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rc1ra2/my_realworld_qwen3codenext_local_coding_test_so/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc1ra2 | false | null | t3_1rc1ra2 | /r/LocalLLaMA/comments/1rc1ra2/my_realworld_qwen3codenext_local_coding_test_so/ | false | false | self | 97 | null |
MoOLE-T - a staged selection flow utilizing O-LORA skill "experts" | 13 | Hello again!
Yesterday, I posted about my O-TITANS (Orthogonal Tensors for Independent Task Alignment) research: a way to train strictly isolated LoRAs on Gemma 3 that don't overwrite the base model's knowledge or interfere with each other.
Today, the actual orchestrator for those adapters is live.
I've uploaded the... | 2026-02-22T23:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rc1h05/moolet_a_staged_selection_flow_utilizing_olora/ | Polymorphic-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc1h05 | false | null | t3_1rc1h05 | /r/LocalLLaMA/comments/1rc1h05/moolet_a_staged_selection_flow_utilizing_olora/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fTdVvfX5GtnGzyxZ-6--A7XObHi0yy4WZ-CWcobNtKQ.png?width=108&crop=smart&auto=webp&s=a8f3011aa0e1f7854a45dc7dcbc4a783daa18cff', 'width': 108}, {'height': 116, 'url': 'h... |
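Since the post is truncated, here is only a guess at what a "staged selection flow" over isolated adapters might look like: a cheap first stage narrows the candidate skills, then a scoring stage picks the adapter to load. Stages and names are hypothetical, not MoOLE-T's actual code.

```python
# Hypothetical two-stage adapter router over skill "experts".
SKILLS = {
    "sql":     {"keywords": {"query", "table", "select"}},
    "math":    {"keywords": {"integral", "proof", "solve"}},
    "writing": {"keywords": {"essay", "tone", "rewrite"}},
}

def stage1_candidates(prompt: str) -> list[str]:
    # Cheap keyword filter; fall back to all skills if nothing matches.
    words = set(prompt.lower().split())
    return [s for s, cfg in SKILLS.items() if cfg["keywords"] & words] or list(SKILLS)

def stage2_select(prompt: str, candidates: list[str]) -> str:
    # Stand-in for an embedding- or LLM-based scorer over the shortlist.
    words = set(prompt.lower().split())
    scores = {s: len(SKILLS[s]["keywords"] & words) for s in candidates}
    return max(scores, key=scores.get)

prompt = "solve this integral"
chosen = stage2_select(prompt, stage1_candidates(prompt))
print(f"load adapter: {chosen}")  # -> math
```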
GPU-Initiated Networking for NCCL on AWS - Serving DeepSeek-V3 with DeepEP over EFA | 1 | NVIDIA NCCL recently introduced GPU-Initiated Networking, which allows CUDA kernels to initiate networking directly through RDMA - no CPU round-trip needed. Thanks to hard work from the AWS Annapurna Labs team on the EFA provider side, this now works on AWS. I was finally able to test multi-node vLLM deployment with De... | 2026-02-22T23:30:39 | https://www.pythonsheets.com/notes/appendix/nccl-gin.html | spiderpower02 | pythonsheets.com | 1970-01-01T00:00:00 | 0 | {} | 1rc19w5 | false | null | t3_1rc19w5 | /r/LocalLLaMA/comments/1rc19w5/gpuinitiated_networking_for_nccl_on_aws_serving/ | false | false | default | 1 | null |
Got $800 of credits on a cloud platform (for GPU usage). Anyone here that's into AI training and inference and could make use of it? | 0 | So I have around 800 bucks worth of GPU usage credits on one of the major platforms; they can be used specifically for GPUs and clusters. So if any individual, hobbyist, or anyone else out here is training models, running inference, or anything else, please get in touch! (Not free btw, but selling at a much lower price.) | 2026-02-22T23:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rc0y1u/got_800_of_credits_on_a_cloud_platform_for_gpu/ | DocumentFun9077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc0y1u | false | null | t3_1rc0y1u | /r/LocalLLaMA/comments/1rc0y1u/got_800_of_credits_on_a_cloud_platform_for_gpu/ | false | false | self | 0 | null |
What model do you think is on the disc? | 56 | If it's actually an OpenAI model, most likely GPT OSS 20b, though I don't know what quant would fit on a single disc. Maybe something smaller like Qwen 3 8b? | 2026-02-22T23:11:54 | maxwell321 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rc0tg4 | false | null | t3_1rc0tg4 | /r/LocalLLaMA/comments/1rc0tg4/what_model_do_you_think_is_on_the_disc/ | false | false | 56 | {'enabled': True, 'images': [{'id': 'o0cb1idro4lg1', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=108&crop=smart&auto=webp&s=60a5c2ed8c9f30a1962b08730987acf42ef00fed', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/o0cb1idro4lg1.png?width=216&crop=smart&auto=we... | ||
Forked MNN Chat to make it a multilingual interpreted chatroom hotspot | 2 | In short, this is a *human-to-human* chat server that nearby devices can join via a couple QR codes, and it uses the LLM to automatically translate chat messages among the participants' languages.
I added some features to a fork of Alibaba's MNN Chat for Android with a lot of help from Claude mainly because I don't kn... | 2026-02-22T23:08:51 | https://www.reddit.com/gallery/1rc0qsd | DeProgrammer99 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rc0qsd | false | null | t3_1rc0qsd | /r/LocalLLaMA/comments/1rc0qsd/forked_mnn_chat_to_make_it_a_multilingual/ | false | false | 2 | null | |
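The translation relay itself is conceptually simple: each inbound message is translated once per recipient language by the local model. A sketch against a generic OpenAI-compatible local endpoint (the endpoint and prompt are my assumptions; MNN Chat's actual internals will differ):

```python
# Sketch of a per-recipient translation relay using a local LLM server.
import requests

def translate(text: str, target_lang: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # any OpenAI-compatible server
        json={"model": "local", "messages": [
            {"role": "system",
             "content": f"Translate the user's message into {target_lang}. "
                        "Output only the translation."},
            {"role": "user", "content": text},
        ]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

def relay(message: str, participants: dict[str, str]) -> dict[str, str]:
    # participants: device_id -> preferred language
    return {dev: translate(message, lang) for dev, lang in participants.items()}
```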
0xSero/Kimi-K2.5-PRISM-REAP-72 Β· Hugging Face | 4 | Kimi K2.5 in just 200B, we definitely need a GGUF :) | 2026-02-22T23:02:07 | https://huggingface.co/0xSero/Kimi-K2.5-PRISM-REAP-72 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rc0ktx | false | null | t3_1rc0ktx | /r/LocalLLaMA/comments/1rc0ktx/0xserokimik25prismreap72_hugging_face/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M1qr4dOE7cxjjYj0HuhCgrzSEf3iWgzOOTsp_hNo9yk.png?width=108&crop=smart&auto=webp&s=74174ce0793cebfbfcdc27ed038e7b01541b3ed6', 'width': 108}, {'height': 116, 'url': 'h... | |
ggml.ai (the team behind llama.cpp) is joining Hugging Face, projects stay open source | 15 | Hugging Face announced that [ggml.ai](http://ggml.ai), the team behind llama.cpp and ggml, is joining Hugging Face to support long-term sustainability for local AI.
According to the llama.cpp announcement, day-to-day direction stays the same: the projects remain 100% open source and community-driven, and Hugging Face ... | 2026-02-22T22:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rc0dra/ggmlai_the_team_behind_llamacpp_is_joining/ | nihal_was_here | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc0dra | false | null | t3_1rc0dra | /r/LocalLLaMA/comments/1rc0dra/ggmlai_the_team_behind_llamacpp_is_joining/ | false | false | self | 15 | null |
In the long run, everything will be local | 112 | I've been of the opinion for a while that, long term, we'll have smart enough open models and powerful enough consumer hardware to run *all* our assistants locally, both chatbots and coding copilots
https://preview.redd.it/vqzxm46ri4lg1.png?width=3608&format=png&auto=webp&s=22c0fb257d744350f8668301a915aeec2b6653fc
Rig... | 2026-02-22T22:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rc00nj/in_the_long_run_everything_will_be_local/ | tiguidoio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc00nj | false | null | t3_1rc00nj | /r/LocalLLaMA/comments/1rc00nj/in_the_long_run_everything_will_be_local/ | false | false | 112 | null | |
Made openclaw's terminal UI actually show images from tool results instead of placeholders | 1 | If you run openclaw with local models and use the TUI, you know tool results with images just show [image/png 3kb (omitted)]. Got tired of it.
The image data gets sanitized away before reaching the renderer, but file paths survive through a text protocol. Hooked those paths up to pi-tui's Image class (already in the d... | 2026-02-22T22:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rc00do/made_openclaws_terminal_ui_actually_show_images/ | snaptastic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rc00do | false | null | t3_1rc00do | /r/LocalLLaMA/comments/1rc00do/made_openclaws_terminal_ui_actually_show_images/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/Cr6LKhEznOfDSboROneWLBbDq0Zr8le4rCpMU0zJUA0.png?width=108&crop=smart&auto=webp&s=e6fcf651048176d7ade999c2ffb8ee7844b195bc', 'width': 108}, {'height': 174, 'url': 'h... |
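A sketch of the path-recovery idea (the regex and renderer hand-off are illustrative, not openclaw's actual code): since the image bytes are stripped before rendering but file paths survive as plain text, the TUI can simply re-read the image from disk.

```python
# Recover renderable image paths from sanitized tool-result text.
import re
from pathlib import Path

PATH_RE = re.compile(r"(/[\w./-]+\.(?:png|jpe?g|gif|webp))")

def recover_image_paths(tool_result_text: str) -> list[Path]:
    # Keep only paths that still exist on disk, so stale results are skipped.
    return [Path(p) for p in PATH_RE.findall(tool_result_text) if Path(p).exists()]

sample = "wrote chart to /tmp/plot.png [image/png 3kb (omitted)]"
for path in recover_image_paths(sample):
    print(f"render {path} inline")  # hand off to the TUI's image widget
```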
AI sentiment dropped 34% in 120 days. The backlash isn't about technology. It's about mediocrity. | 1 | [removed] | 2026-02-22T22:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rbzk21/ai_sentiment_dropped_34_in_120_days_the_backlash/ | Upstairs-Pin4239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbzk21 | false | null | t3_1rbzk21 | /r/LocalLLaMA/comments/1rbzk21/ai_sentiment_dropped_34_in_120_days_the_backlash/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/z_wFMzWtlCHl5KiIbON0aIZB3aQeGZ5f8DHyU8UkMGE.png?width=108&crop=smart&auto=webp&s=d21e4cb2be5b9e89b357b523f6718589af4e0a42', 'width': 108}, {'height': 113, 'url': 'h... |