title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars, nullable) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Anyone else seeing signs of Qwen3.5 dropping soon? | 1 | I’ve been tracking PR activity and arena testing and it feels like Qwen3.5 might be close. Rumors point to mid-Feb open source. Curious what everyone expects most: scale, efficiency or multimodal? | 2026-02-16T01:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5vwco/anyone_else_seeing_signs_of_qwen35_dropping_soon/ | New_Construction1370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5vwco | false | null | t3_1r5vwco | /r/LocalLLaMA/comments/1r5vwco/anyone_else_seeing_signs_of_qwen35_dropping_soon/ | false | false | self | 1 | null |
Qwen3.5 memory footprint? | 0 | Concerned about VRAM? | 2026-02-16T01:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5vvn0/qwen35_memory_footprint/ | New_Construction1370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5vvn0 | false | null | t3_1r5vvn0 | /r/LocalLLaMA/comments/1r5vvn0/qwen35_memory_footprint/ | false | false | self | 0 | null |
Junie equivalent Agentic workflow | 2 | I've spent all weekend playing around with [Junie](https://www.jetbrains.com/junie/) AI from JetBrains. My day-to-day AI use so far has been limited to running Ollama or LM Studio and using it like a chat buddy more than anything else.
I was very very impressed with it. I pointed it to a code base in PHP that I in... | 2026-02-16T01:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5vmdc/junie_equivalent_agentic_workflow/ | pixel-pusher-coder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5vmdc | false | null | t3_1r5vmdc | /r/LocalLLaMA/comments/1r5vmdc/junie_equivalent_agentic_workflow/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'q1CE2d8Rwb7c542Y7z95ruXz_tQIVdxDK93BDwlpZ8Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/q1CE2d8Rwb7c542Y7z95ruXz_tQIVdxDK93BDwlpZ8Y.png?width=108&crop=smart&auto=webp&s=5b8e38be19182e951a1538baad7da564f982d9a3', 'width': 108}, {'height': 121, 'url': 'h... |
That's why I go local.The enshittification is at full steam | 67 | I just received an email from chatGPT. Ads are beginning to show up. Well, we are cooked. Not we, we, we. But we are cooked. | 2026-02-16T00:51:55 | Turbulent_Pin7635 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5vdxc | false | null | t3_1r5vdxc | /r/LocalLLaMA/comments/1r5vdxc/thats_why_i_go_localthe_enshittification_is_at/ | false | false | 67 | {'enabled': True, 'images': [{'id': '94yjg9288rjg1', 'resolutions': [{'height': 163, 'url': 'https://preview.redd.it/94yjg9288rjg1.png?width=108&crop=smart&auto=webp&s=6177d132e399602725fafe34a07996013a82592b', 'width': 108}, {'height': 327, 'url': 'https://preview.redd.it/94yjg9288rjg1.png?width=216&crop=smart&auto=we... | ||
EasyWhisperUI Linux release is live — local easy to use Whisper desktop UI | 1 | [removed] | 2026-02-16T00:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r5vbxj/easywhisperui_linux_release_is_live_local_easy_to/ | mehtabmahir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5vbxj | false | null | t3_1r5vbxj | /r/LocalLLaMA/comments/1r5vbxj/easywhisperui_linux_release_is_live_local_easy_to/ | false | false | self | 1 | null |
Anyone actually using Openclaw? | 643 | I am highly suspicious that OpenClaw's virality is organic. I don't know anyone (online or IRL) who is actually using it, and I am deep in the AI ecosystem (both online and IRL). If this sort of thing is up anyone's alley, it's the members of LocalLLaMA - so, are you using it?
With the announcement that OpenAI bough... | 2026-02-16T00:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5v1jb | false | null | t3_1r5v1jb | /r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/ | false | false | self | 643 | {'enabled': False, 'images': [{'id': 'h_DRDAnUOUqxtC_rCvf20sP_oO8cssauQSYlzUdEZR8', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/h_DRDAnUOUqxtC_rCvf20sP_oO8cssauQSYlzUdEZR8.jpeg?width=108&crop=smart&auto=webp&s=1b17f2d2ae743ad18b77ec8ea6379865b5900259', 'width': 108}, {'height': 100, 'url': '... |
Q2 GLM 5 fixing its own typo | 38 | I found this hilarious. Never seen a model fix its own typos in realtime before.
https://preview.redd.it/cuvsstz74rjg1.png?width=1218&format=png&auto=webp&s=a7a31bd9849a772b7753179a1c40135c12f5fe3c
Unsloth's GLM 5 quants are impressive - even down at TQ1 it was staying coherent, producing syntactically correct code w... | 2026-02-16T00:33:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r5uz7d/q2_glm_5_fixing_its_own_typo/ | -dysangel- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5uz7d | false | null | t3_1r5uz7d | /r/LocalLLaMA/comments/1r5uz7d/q2_glm_5_fixing_its_own_typo/ | false | false | 38 | null | |
ik_llama.cpp benchmarks on an Intel Xeon Platinum 8570 ES Q30H with 256GB DDR5 5600 (8x32GB) | 5 | I wanted to see if my Intel Xeon Platinum 8570 ES Q30H is comparable on its own with the integrated GPU in the AMD Ryzen AI MAX+ 395
Specs:
**CPU:** Intel Xeon Platinum 8570 ES Q30H - 56 cores, 112 threads, 1.9GHz base clock, 280MB cache (at various benchmarks the 8570 ES came about 4% faster than the retail 8570) ... | 2026-02-16T00:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ut40/ik_llamacpp_benchmarks_on_an_intel_xeon_platinum/ | _serby_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ut40 | false | null | t3_1r5ut40 | /r/LocalLLaMA/comments/1r5ut40/ik_llamacpp_benchmarks_on_an_intel_xeon_platinum/ | false | false | self | 5 | null |
Using Symbolic Shorthand (e.g., ⏣3[⊞:step-4]) for Token-Efficient Agent Steering | 0 | Hey everyone,
I’ve been benchmarking a method to bypass the conversational "verbosity" of modern LLMs (Gemini, Llama 3, Mistral) without using massive system prompts.
I'm developing a **Symbolic Shorthand Syntax**—a dense, non-linguistic "macro language" using specific geometric Unicode blocks to anchor model attenti... | 2026-02-16T00:25:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r5usxv/using_symbolic_shorthand_eg_3step4_for/ | lil-Zavy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5usxv | false | null | t3_1r5usxv | /r/LocalLLaMA/comments/1r5usxv/using_symbolic_shorthand_eg_3step4_for/ | false | false | self | 0 | null |
I built a local-first, append-only memory system for agents (Git + SQLite). Looking for design critique. | 0 | I’ve been experimenting with long-term memory for local AI agents and kept running into the same issue:
most “memory” implementations silently overwrite state, lose history, or allow agents to rewrite their own past.
This repository is an attempt to treat agent memory as a systems problem, not a prompting problem.
... | 2026-02-16T00:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r5usm3/i_built_a_localfirst_appendonly_memory_system_for/ | Junior_Drawing_8353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5usm3 | false | null | t3_1r5usm3 | /r/LocalLLaMA/comments/1r5usm3/i_built_a_localfirst_appendonly_memory_system_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BXBqZhuAgUq8ikLlXrPxVlBWw6czSeJt5RMZKhZaKWo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BXBqZhuAgUq8ikLlXrPxVlBWw6czSeJt5RMZKhZaKWo.png?width=108&crop=smart&auto=webp&s=d5db616dba7e37b654fcf6dcf87ee5625dcfcdc3', 'width': 108}, {'height': 108, 'url': 'h... |
Mac mini - powerful enough? | 0 | The unified memory is so awesome to run bigger models but is the performance good enough?
It’s nice to run >30B models but if I get 5 t/s…
I would love to have a Mac Studio but it’s way too expensive for me | 2026-02-16T00:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r5uq8h/mac_mini_powerful_enough/ | Dentifrice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5uq8h | false | null | t3_1r5uq8h | /r/LocalLLaMA/comments/1r5uq8h/mac_mini_powerful_enough/ | false | false | self | 0 | null |
Deflation: Cost to train A.I. models drops 40% per year - Karpathy | 171 | [https://github.com/karpathy/nanochat/discussions/481](https://github.com/karpathy/nanochat/discussions/481)
Quote: ..., each year the cost to train GPT-2 is falling to approximately 40% of the previous year. (I think this is an underestimate and that further improvements are still quite possible). The gains come from... | 2026-02-16T00:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r5uhfu/deflation_cost_to_train_ai_models_drops_40_per/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5uhfu | false | null | t3_1r5uhfu | /r/LocalLLaMA/comments/1r5uhfu/deflation_cost_to_train_ai_models_drops_40_per/ | false | false | self | 171 | {'enabled': False, 'images': [{'id': 'sVRwqqgRZiG4XDKBTcFlMdrgCaMjVfm6rAll2ozPwqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sVRwqqgRZiG4XDKBTcFlMdrgCaMjVfm6rAll2ozPwqU.png?width=108&crop=smart&auto=webp&s=7ec6d2d7eb0639c5a6080cf9d0784239da216ad4', 'width': 108}, {'height': 108, 'url': 'h... |
Hiring AI Intern — For someone obsessed with AI tools & agents | 0 | I run a digital marketing agency and I’m looking for an AI intern who actually experiments with AI — not just basic ChatGPT use.
Looking for someone who:
• Uses tools like Sora, ElevenLabs, OpenClaw, Nano Banana, ChatGPT, Midjourney, etc.
• Has built or tested AI agents or automations
• Loves experimenting and finding ... | 2026-02-16T00:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r5u9qp/hiring_ai_intern_for_someone_obsessed_with_ai/ | iTataBirla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5u9qp | false | null | t3_1r5u9qp | /r/LocalLLaMA/comments/1r5u9qp/hiring_ai_intern_for_someone_obsessed_with_ai/ | false | false | self | 0 | null |
AI agents sandboxing guide | 4 | Spent some time looking at this as part of my consulting work and decided to write it down. Appreciate any feedback: https://open.substack.com/pub/manveerc/p/ai-agent-sandboxing-guide?r=1a5vz&utm_medium=ios | 2026-02-16T00:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r5u9gf/ai_agents_sandboxing_guide/ | manveerc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5u9gf | false | null | t3_1r5u9gf | /r/LocalLLaMA/comments/1r5u9gf/ai_agents_sandboxing_guide/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'iYmPWu7bKrab-C3kRq8L20DFYP0tu8d1kh8BaIYgC1c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iYmPWu7bKrab-C3kRq8L20DFYP0tu8d1kh8BaIYgC1c.jpeg?width=108&crop=smart&auto=webp&s=4ca7c14c0a0bcaff8914967b70242c1a8f849fac', 'width': 108}, {'height': 121, 'url': '... |
Issues with gpt4all and llama | 0 | Ok. Using GPT4All with Llama 3 8B Instruct
It is clear I don't know what I'm doing and need help so please be kind or move along.
Installed locally to help parse my huge file mess. I started with a small folder with 242 files. These files are a mix of pdf, a few docx and pptx and eml. The LocalDocs in GPT4All inde... | 2026-02-15T23:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r5u7ij/issues_with_gpt4all_and_llama/ | Bleucb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5u7ij | false | null | t3_1r5u7ij | /r/LocalLLaMA/comments/1r5u7ij/issues_with_gpt4all_and_llama/ | false | false | self | 0 | null |
Micro-LLM training on "orthogonal" corpora | 3 | Had to spend a day traveling so I wrote a basic LLM from scratch: a single-layer, decoder-only transformer that uses byte-pair encoding (BPE) for its vocabulary (you'll see later why that matters), with causal masked self-attention for context and layer normalization for stability. It was trained via stochastic gradient descent. Took me a... | 2026-02-15T23:31:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r5tlin/microllm_training_on_orthogonal_corpora/ | Dumbest-Questions | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5tlin | false | null | t3_1r5tlin | /r/LocalLLaMA/comments/1r5tlin/microllm_training_on_orthogonal_corpora/ | false | false | self | 3 | null |
The Contradiction Conundrum in LLM Memory Systems | 0 | I’ve been digging into long-running agent memory systems lately, and I keep running into the same structural problem:
Most memory implementations collapse the moment contradictions appear.
Example:
Day 1:
“We bill monthly.”
Day 10:
“Actually, we bill weekly.”
What does your memory layer do?
**The 3 Common Patte... | 2026-02-15T23:30:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r5tkl3/the_contradiction_conundrum_in_llm_memory_systems/ | kinkaid2002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5tkl3 | false | null | t3_1r5tkl3 | /r/LocalLLaMA/comments/1r5tkl3/the_contradiction_conundrum_in_llm_memory_systems/ | false | false | self | 0 | null |
AnyLoom: AnythingLLM with Agent Swarm | 0 | 2026-02-15T23:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r5tcaq/anyloom_anythingllm_with_agent_swarm/ | DaGameFace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5tcaq | false | null | t3_1r5tcaq | /r/LocalLLaMA/comments/1r5tcaq/anyloom_anythingllm_with_agent_swarm/ | false | false | 0 | null | ||
Combining a RTX PRO 6000 and 5090 - could it work? | 0 | So I have a 5090 and realized that adding an RTX PRO 6000 into the mix could get me up to 128 GB, allowing me to run ~200B MoEs
I'm wondering if it's possible to get a notable speed boost out of this when splitting a model.
I know that if you split a model with ik_llama, you can see up to a 40% speedup, but this is a... | 2026-02-15T23:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ta46/combining_a_rtx_pro_6000_and_5090_could_it_work/ | mr_zerolith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ta46 | false | null | t3_1r5ta46 | /r/LocalLLaMA/comments/1r5ta46/combining_a_rtx_pro_6000_and_5090_could_it_work/ | false | false | self | 0 | null |
If Qwen3.5 is open — what will you benchmark first? | 1 | Assuming Qwen3.5 drops soon, what’s the first benchmark you’d run? | 2026-02-15T23:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r5sx4f/if_qwen35_is_open_what_will_you_benchmark_first/ | masanovu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5sx4f | false | null | t3_1r5sx4f | /r/LocalLLaMA/comments/1r5sx4f/if_qwen35_is_open_what_will_you_benchmark_first/ | false | false | self | 1 | null |
I got OpenClaw memory search from 82 seconds to 30ms — here's how | 1 | [removed] | 2026-02-15T22:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r5sn40/i_got_openclaw_memory_search_from_82_seconds_to/ | TigerAIElectrical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5sn40 | false | null | t3_1r5sn40 | /r/LocalLLaMA/comments/1r5sn40/i_got_openclaw_memory_search_from_82_seconds_to/ | false | false | self | 1 | null |
llama.cpp takes forever to load model from SSD? | 0 | These don't work:
* --mlock
* --no-mmap
* --simple-io
Qwen3-Coder-Next GGUF (40 GB) takes like 30 mins to load from NVMe SSD, wtf? | 2026-02-15T22:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/1r5sgow/llamacpp_takes_forever_to_load_model_from_ssd/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5sgow | false | null | t3_1r5sgow | /r/LocalLLaMA/comments/1r5sgow/llamacpp_takes_forever_to_load_model_from_ssd/ | false | false | self | 0 | null |
Stop calculating truth. Just look it up. A zero-hallucination logic plugin for LLMs (LiE Protocol). | 1 | [removed] | 2026-02-15T22:38:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r5sc96/stop_calculating_truth_just_look_it_up_a/ | Puzzled-Egg-9807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5sc96 | false | null | t3_1r5sc96 | /r/LocalLLaMA/comments/1r5sc96/stop_calculating_truth_just_look_it_up_a/ | false | false | self | 1 | null |
Stop calculating truth. Just look it up. A zero-hallucination logic plugin for LLMs (LiE Protocol). | 1 | ## **LiE (Lookup is Execution) Logic Plugin**
> **Core Manifesto: Stop wasting compute to "calculate" the truth. The truth should be "looked up" directly.**
### **1. Core Principle: Spatialization and Atomization of Logic**
LiE does not replace the LLM; it demotes it from "Decision Maker" to "Se... | 2026-02-15T22:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r5s1pp/stop_calculating_truth_just_look_it_up_a/ | Puzzled-Egg-9807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5s1pp | false | null | t3_1r5s1pp | /r/LocalLLaMA/comments/1r5s1pp/stop_calculating_truth_just_look_it_up_a/ | false | false | self | 1 | null |
Is there a good use for one or two 4 GB VRAM GPUs in a home lab? | 1 | I've got a laptop or two that I was hoping I'd get to use, but it seems 4 GB is too small for much and there's no good way to combine them. Am I overlooking a good use case? | 2026-02-15T22:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r5s0oc/is_there_a_good_use_for_1_or_2_4_gb_vram_in_a/ | pjdonovan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5s0oc | false | null | t3_1r5s0oc | /r/LocalLLaMA/comments/1r5s0oc/is_there_a_good_use_for_1_or_2_4_gb_vram_in_a/ | false | false | self | 1 | null |
AgentKV: Single-file vector+graph DB for local agents (no ChromaDB/Weaviate needed) | 3 | # AgentKV: Single-file vector+graph DB for local agents (no ChromaDB/Weaviate needed)
Just released AgentKV v0.7.1 on PyPI — it's like SQLite but for agent memory.
## Why I built this
Running local LLMs with ChromaDB felt like overkill. I needed something that works without servers:
- One file on disk (mmap-backed)
... | 2026-02-15T21:50:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r5r66r/agentkv_singlefile_vectorgraph_db_for_local/ | yobro3366 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5r66r | false | null | t3_1r5r66r | /r/LocalLLaMA/comments/1r5r66r/agentkv_singlefile_vectorgraph_db_for_local/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'LUVZb45sWfH3Zs-oqoVrgpeutmO-AD-WF5LpfrzAGj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LUVZb45sWfH3Zs-oqoVrgpeutmO-AD-WF5LpfrzAGj8.png?width=108&crop=smart&auto=webp&s=329c7047e56676727dd4ddee612c2e48d4a89ee7', 'width': 108}, {'height': 108, 'url': 'h... |
Why are we scaling context windows additively? What if "Sieve-based" context is the real endgame for Local LLMs? | 1 | Hey everyone,
I’ve been thinking about the current race for 1M+ context windows. Even with FlashAttention-3 and extreme quantization, the "Lost in the Middle" phenomenon and the KV cache VRAM cost still feel like a wall for local hardware.
We are currently treating context management as an **additive** problem (retri... | 2026-02-15T21:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r5qkuw/why_are_we_scaling_context_windows_additively/ | UnderstandingAway139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5qkuw | false | null | t3_1r5qkuw | /r/LocalLLaMA/comments/1r5qkuw/why_are_we_scaling_context_windows_additively/ | false | false | self | 1 | null |
inclusionAI/Ling-2.5-1T · Hugging Face | 93 | another 1T model :) | 2026-02-15T21:20:54 | https://huggingface.co/inclusionAI/Ling-2.5-1T | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r5qfb8 | false | null | t3_1r5qfb8 | /r/LocalLLaMA/comments/1r5qfb8/inclusionailing251t_hugging_face/ | false | false | default | 93 | {'enabled': False, 'images': [{'id': 'nCxW8JHyfmzzv3lMTtcAqL8Ez3yOAkDeuLrrPCFMKz4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nCxW8JHyfmzzv3lMTtcAqL8Ez3yOAkDeuLrrPCFMKz4.png?width=108&crop=smart&auto=webp&s=27d61638b2c124411fb058285d99a636500d0d41', 'width': 108}, {'height': 116, 'url': 'h... |
How are you handling persistent memory for AI coding agents? | 5 | Context compaction is killing me.
I use Claude Code daily and the biggest pain isn't hallucination or context limits — it's that every time context compacts, all the important stuff vanishes. The decision about why we chose Postgres over Mongo? Gone. The fix for that auth bug that took 3 hours? Gone.
I end up re-expl... | 2026-02-15T21:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r5q7xd/how_are_you_handling_persistent_memory_for_ai/ | Maximum_Fearless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5q7xd | false | null | t3_1r5q7xd | /r/LocalLLaMA/comments/1r5q7xd/how_are_you_handling_persistent_memory_for_ai/ | false | false | self | 5 | null |
Prometheus metrics for NVIDIA DGX Spark clusters | 8 | Hi,
I’m sharing **dgx-spark-prometheus** — a small repo to help you get **Prometheus monitoring/metrics for NVIDIA DGX Spark** clusters.
Repo: [https://github.com/ateska/dgx-spark-prometheus](https://github.com/ateska/dgx-spark-prometheus)
**What it’s for**
* Making DGX Spark cluster easier to observe with **Promet... | 2026-02-15T21:11:19 | Icy_Programmer7186 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5q6ib | false | null | t3_1r5q6ib | /r/LocalLLaMA/comments/1r5q6ib/prometheus_metrics_for_nvidia_dgx_spark_clusters/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': '0zab5ars4qjg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/0zab5ars4qjg1.jpeg?width=108&crop=smart&auto=webp&s=e7dfe555384f69b3d1921c87dcd7e7e920f0f341', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/0zab5ars4qjg1.jpeg?width=216&crop=smart&auto=... | |
Help me with the AI Lab V.2 | 0 | So my path is: Intel I7 NUC -> GEM12 AMD Ryzen with eGPU -> Intel I7 14000KF with 3090/4090.
I've reached a point where I want more, with a bit of future-proofing, or at least predictability. Also, I need to reuse some parts from the I7-14KF build, especially the DDR5 RAM.
So my appeal to the community is: what will... | 2026-02-15T20:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r5pt6v/help_me_with_the_ai_lab_v2/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5pt6v | false | null | t3_1r5pt6v | /r/LocalLLaMA/comments/1r5pt6v/help_me_with_the_ai_lab_v2/ | false | false | self | 0 | null |
rednote-hilab/dots.ocr-1.5 | 36 | 2026-02-15T20:39:43 | https://huggingface.co/rednote-hilab/dots.ocr-1.5 | nullmove | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r5pdnn | false | null | t3_1r5pdnn | /r/LocalLLaMA/comments/1r5pdnn/rednotehilabdotsocr15/ | false | false | default | 36 | {'enabled': False, 'images': [{'id': 'XrUFWJLhWfPf18dDw3gY1NttJAJXB7-AhzrL3jFOW4M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XrUFWJLhWfPf18dDw3gY1NttJAJXB7-AhzrL3jFOW4M.png?width=108&crop=smart&auto=webp&s=0838e8e67a6c3625cae81d22ba62dd82b521c40f', 'width': 108}, {'height': 116, 'url': 'h... | |
I built an OCR-based chat translator for Foxhole (MMO war game) that runs on local LLMs | 0 | Foxhole is a massively multiplayer war game where hundreds of players from all over the world fight on the same server. The chat is a firehose of English, Russian, Korean, Chinese, Spanish, and more - often all in the same channel. There's no built-in translation. If someone's calling out enemy armor positions in Cyril... | 2026-02-15T20:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r5pb1g/i_built_an_ocrbased_chat_translator_for_foxhole/ | Ok-Pomegranate1314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5pb1g | false | null | t3_1r5pb1g | /r/LocalLLaMA/comments/1r5pb1g/i_built_an_ocrbased_chat_translator_for_foxhole/ | false | false | self | 0 | null |
Deepseek v4 leaked benchmarks? | 1 | 2026-02-15T20:14:27 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5oqng | false | null | t3_1r5oqng | /r/LocalLLaMA/comments/1r5oqng/deepseek_v4_leaked_benchmarks/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'pq50aerpupjg1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/pq50aerpupjg1.jpeg?width=108&crop=smart&auto=webp&s=4925e6f9c76a11f59d7f4051f08bfbc16933f647', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/pq50aerpupjg1.jpeg?width=216&crop=smart&auto=w... | ||
prompt injection test library? | 3 | Hello, I was just wondering if there exists some kind of public repository of known test cases for guarding against prompt injection? | 2026-02-15T20:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r5of3p/prompt_injection_test_library/ | epic_troll_tard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5of3p | false | null | t3_1r5of3p | /r/LocalLLaMA/comments/1r5of3p/prompt_injection_test_library/ | false | false | self | 3 | null |
cant tell if this is true or not | 9 | 2026-02-15T19:49:45 | panic_in_the_cosmos | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5o3y2 | false | null | t3_1r5o3y2 | /r/LocalLLaMA/comments/1r5o3y2/cant_tell_if_this_is_true_or_not/ | false | false | 9 | {'enabled': True, 'images': [{'id': 'wfiz477bqpjg1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/wfiz477bqpjg1.jpeg?width=108&crop=smart&auto=webp&s=a25b849ebf1293ece0e82011bb49dfb2479abdcb', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/wfiz477bqpjg1.jpeg?width=216&crop=smart&auto=w... | |||
QED-Nano: Teaching a Tiny Model to Prove Hard Theorems | 4 | New maths model by Hugging Face.
In a similar line of thought to VibeThinker 1.5B, Hugging Face has released a new model that has been RL-trained on solving maths problems, with an innovative approach that breaks large problems down into smaller parts.
Writeup here: [https://huggingface.co/spaces/lm-provers/qed-nan... | 2026-02-15T19:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r5o34g/qednano_teaching_a_tiny_model_to_prove_hard/ | ThePrimeClock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5o34g | false | null | t3_1r5o34g | /r/LocalLLaMA/comments/1r5o34g/qednano_teaching_a_tiny_model_to_prove_hard/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '1Ycsmqdbxl4b11AxVnn9pWohtaOZGciLr71fdDuGehc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/1Ycsmqdbxl4b11AxVnn9pWohtaOZGciLr71fdDuGehc.png?width=108&crop=smart&auto=webp&s=f8823eeb3a4daf23fa4b849ce4a543ff4fba3df1', 'width': 108}, {'height': 120, 'url': 'h... |
GLM-5 is officially on NVIDIA NIM, and you can now use it to power Claude Code for FREE 🚀 | 136 | NVIDIA just added **z-ai/glm5** to their NIM inventory, and I’ve just updated **free-claude-code** to support it fully. This means you can now run Anthropic’s powerful **Claude Code CLI** using GLM-5 as the backend engine completely free.
**What is this?** `free-claude-code` is a lightweight proxy that converts Claude... | 2026-02-15T19:31:53 | https://github.com/Alishahryar1/free-claude-code | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r5nnhz | false | null | t3_1r5nnhz | /r/LocalLLaMA/comments/1r5nnhz/glm5_is_officially_on_nvidia_nim_and_you_can_now/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'jeBOY9b76BQPslI7xSt75z5frSsGlOBOeFPAwBsENE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jeBOY9b76BQPslI7xSt75z5frSsGlOBOeFPAwBsENE8.png?width=108&crop=smart&auto=webp&s=e926b5c8cf882022c7e7e5331fcb4498296fc93f', 'width': 108}, {'height': 108, 'url': 'h... | |
Nvfp4 now working on mlx using lm studio | 5 | Hi,
I just thought I would make a thread: after downloading some MLX NVFP4 quants, I've found that they now load and run in LM Studio. I tried this last month and they didn't work then; I suppose MLX has since been updated in LM Studio, so now it works. I'm not sure how good the quality is vs other quants in m... | 2026-02-15T19:24:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ng7l/nvfp4_now_working_on_mlx_using_lm_studio/ | Professional-Bear857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ng7l | false | null | t3_1r5ng7l | /r/LocalLLaMA/comments/1r5ng7l/nvfp4_now_working_on_mlx_using_lm_studio/ | false | false | self | 5 | null |
GLM-5 is officially on NVIDIA NIM, and you can now use it to power Claude Code for FREE 🚀 | 0 | NVIDIA just added **z-ai/glm5** to their NIM inventory, and I’ve just updated **free-claude-code** to support it fully. This means you can now run Anthropic’s powerful **Claude Code CLI** using GLM-5 as the backend engine completely free.
**What is this?** `free-claude-code` is a lightweight proxy that converts Claude... | 2026-02-15T19:22:48 | https://github.com/Alishahryar1/free-claude-code/tree/main | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r5neul | false | null | t3_1r5neul | /r/LocalLLaMA/comments/1r5neul/glm5_is_officially_on_nvidia_nim_and_you_can_now/ | false | false | default | 0 | null |
Image comparison | 3 | I’m building an AI agent for a furniture business where customers can send a photo of a sofa and ask if we have that design. The system should compare the customer’s image against our catalog of about 500 product images (SKUs), find visually similar items, and return the closest matches or say if none are available.
I... | 2026-02-15T19:16:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r5n946/image_comparison/ | This_Rice4830 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5n946 | false | null | t3_1r5n946 | /r/LocalLLaMA/comments/1r5n946/image_comparison/ | false | false | self | 3 | null |
Local experiments with Qwen 3 ASR/TTS on 8 GB | 3 | Following antirez' release of Qwen 3 ASR I have since had Claude do a similar C-based framework for Qwen 3 TTS. I have not spent much time to understand what Claude did, but I thought I would report how my local efforts are going. If anyone wants to discuss any of it especially your own progress in similar endeavours, ... | 2026-02-15T19:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r5mxld/local_experiments_with_qwen_3_asrtts_on_8_gb/ | rmtew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5mxld | false | null | t3_1r5mxld | /r/LocalLLaMA/comments/1r5mxld/local_experiments_with_qwen_3_asrtts_on_8_gb/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'CzF5oc7QV8ukp0LrvcRpF8lGRHF9BdotlL2LEl046KQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CzF5oc7QV8ukp0LrvcRpF8lGRHF9BdotlL2LEl046KQ.png?width=108&crop=smart&auto=webp&s=324c7fd09968f4a49d25ab71a822266b809c1cf2', 'width': 108}, {'height': 108, 'url': 'h... |
How to run the Qwen3-Coder-Next 80B parameter model on 8 GB VRAM | 118 | I am running large LLMs on my **8 GB** **laptop 3070 Ti**. I have optimized: **LTX-2, Wan2.2, HeartMula, ACE-STEP 1.5**.
And now I am able to run the 80B parameter model **Qwen3-Coder-Next!!!**
**Instruction here:** [https://github.com/nalexand/Qwen3-Coder-OPTIMIZED](https://github.com/nalexand/Qwen3-Coder-OPTIMIZED)
It i... | 2026-02-15T18:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r5m4vl/how_to_run_qwen3codernext_80b_parameters_model_on/ | AccomplishedLeg527 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5m4vl | false | null | t3_1r5m4vl | /r/LocalLLaMA/comments/1r5m4vl/how_to_run_qwen3codernext_80b_parameters_model_on/ | false | false | self | 118 | {'enabled': False, 'images': [{'id': 'aUree8qwL7ztMwsI2h7LX6n7pZIsB-yHwcK8uNRZExI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aUree8qwL7ztMwsI2h7LX6n7pZIsB-yHwcK8uNRZExI.png?width=108&crop=smart&auto=webp&s=2a12608e21e09e4b70ea44ccc2b9e2afed949512', 'width': 108}, {'height': 108, 'url': 'h... |
AI/ML on Linux: 16GB AMD (9060 XT) vs 8GB NVIDIA (5060)? | 5 | Hi everyone,
I'm building a budget-focused rig for Machine Learning and Software Development. I've settled on a Ryzen 7 5700X (AM4) with 32GB of DDR4 to save costs. Now I'm stuck on the GPU choice.
I'm a Linux user and I'd love to go with AMD for the open-source drivers, but I'm worried about the industry's reliance ... | 2026-02-15T18:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5m2r8/aiml_on_linux_16gb_amd_9060_xt_vs_8gb_nvidia_5060/ | SpecificProduct923 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5m2r8 | false | null | t3_1r5m2r8 | /r/LocalLLaMA/comments/1r5m2r8/aiml_on_linux_16gb_amd_9060_xt_vs_8gb_nvidia_5060/ | false | false | self | 5 | null |
Is local AI actually practical for everyday note taking? | 10 | I’ve been trying to move more of my workflow offline, especially anything related to notes. In theory, running a local model for meeting summaries and task extraction sounds perfect. Private, fast, no cloud dependency.
Right now I use Bluedot mostly so I don’t have to type during meetings and can review a summary afte... | 2026-02-15T18:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r5m1pl/is_local_ai_actually_practical_for_everyday_note/ | kingsaso9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5m1pl | false | null | t3_1r5m1pl | /r/LocalLLaMA/comments/1r5m1pl/is_local_ai_actually_practical_for_everyday_note/ | false | false | self | 10 | null |
Prompt Engineering was overhyped, and it’s already dying as a standalone career? | 0 | A year ago, people were claiming prompt engineers would earn $200k just by “talking to AI correctly.” Entire courses, influencers, and job titles popped up around prompt engineering as if it were some rare, defensible technical skill.
But now, working with modern LLMs in actual production systems, it honestly feels li... | 2026-02-15T18:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r5m1ee/prompt_engineering_was_overhyped_and_its_already/ | Own-Treacle4585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5m1ee | false | null | t3_1r5m1ee | /r/LocalLLaMA/comments/1r5m1ee/prompt_engineering_was_overhyped_and_its_already/ | false | false | self | 0 | null |
How do I fix this AI model? | 0 | So, I tried making a [C.AI](http://C.AI) alternative with the difference being that it's local. I want to learn how to code but I currently can't so I just used Cursor. But anyways for some reason it won't answer normally. I picked the model "TinyLlama 1.1B". I don't think it really even works with roleplay but I just ... | 2026-02-15T18:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r5luaq/how_do_i_fix_this_ai_model/ | Novel-Grade2973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5luaq | false | null | t3_1r5luaq | /r/LocalLLaMA/comments/1r5luaq/how_do_i_fix_this_ai_model/ | false | false | 0 | null | |
Bad Apple but it's GPT-2 XL Attention Maps | 76 | I optimized learnable input embeddings for a frozen GPT-2 XL model so that its attention maps display the frames of the Bad Apple music video. The model never saw an image in its life; the optimizer just found the right inputs.
This is a silly little project but I found it interesting, here are some details about ... | 2026-02-15T18:19:02 | https://www.youtube.com/watch?v=UU14rQO6VzU | TheLatentExplorer | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1r5lra1 | false | {'oembed': {'author_name': 'The Latent Explorer', 'author_url': 'https://www.youtube.com/@thelatentexplorer', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/UU14rQO6VzU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-m... | t3_1r5lra1 | /r/LocalLLaMA/comments/1r5lra1/bad_apple_but_its_gpt2_xl_attention_maps/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'bQ8_O8mHCtpCo5Q-asAduYJCGmACnuapiWfZUdt-AYQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/bQ8_O8mHCtpCo5Q-asAduYJCGmACnuapiWfZUdt-AYQ.jpeg?width=108&crop=smart&auto=webp&s=983b25c69c8ffbe9c1bb5abc38edbcaa9b91ebf8', 'width': 108}, {'height': 162, 'url': '... | |
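The core idea — gradient descent on the *inputs* of a frozen network until an internal attention map matches a target image — reproduces at toy scale. A minimal sketch with a made-up 4x4 "attention layer" and finite-difference gradients (no relation to the actual GPT-2 XL pipeline; sizes and the diagonal target are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 4                       # four "tokens", four-dim embeddings
Wq = rng.standard_normal((d, d))  # frozen "model" weights (never updated)
Wk = rng.standard_normal((d, d))

def attention_map(X):
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-softmax attention

target = np.eye(n)                # the "frame" to paint: a diagonal pattern

def loss(X):
    return float(np.sum((attention_map(X) - target) ** 2))

X = rng.standard_normal((n, d)) * 0.1
loss0 = loss(X)
eps = 1e-5
for _ in range(150):              # optimize the inputs only, by finite differences
    base = loss(X)
    g = np.zeros_like(X)
    for i in range(n):
        for j in range(d):
            Xp = X.copy()
            Xp[i, j] += eps
            g[i, j] = (loss(Xp) - base) / eps
    lr = 1.0
    while lr > 1e-8 and loss(X - lr * g) >= base:
        lr *= 0.5                 # backtrack so the loss never increases
    if lr > 1e-8:
        X = X - lr * g

print(round(loss0, 3), round(loss(X), 3))  # loss drops: the frozen net "draws" the target
```

The frozen weights never change; only the input embeddings do, which is exactly why the result feels like the optimizer "finding" images inside the model.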
What is GLM-5? | 0 | GLM-5 is a new open-weight model released by Zhipu AI, focused on complex engineering and autonomous agent workflows. This is not a Google model. It is positioned as a high-performance system for real-world task execution rather than just chat.The model employs an asynchronous RL infrastructure called "slime" for effic... | 2026-02-15T18:14:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r5lmww/what_is_glm5/ | demon_bhaiya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5lmww | false | null | t3_1r5lmww | /r/LocalLLaMA/comments/1r5lmww/what_is_glm5/ | false | false | self | 0 | null |
RobinLLM - Free LLM Router (OpenRouter) | 9 | Introducing **RobinLLM** — a quick passion project born from a burst of inspiration. It queries OpenRouter for available free LLMs and intelligently routes requests to the fastest-responding model. Under the hood, it leverages concurrency so that a single misbehaving model doesn't bottleneck your experience — if one pr... | 2026-02-15T18:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r5lk68/robinllm_free_llm_router_openrouter/ | akumaburn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5lk68 | false | null | t3_1r5lk68 | /r/LocalLLaMA/comments/1r5lk68/robinllm_free_llm_router_openrouter/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'D73jM1blRC8MJ_wZ1B03IorLFGE8-KGYYPnKY0MGWgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D73jM1blRC8MJ_wZ1B03IorLFGE8-KGYYPnKY0MGWgs.png?width=108&crop=smart&auto=webp&s=7fad3401196b7456a568f7dad9db2081b4a6558b', 'width': 108}, {'height': 108, 'url': 'h... |
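The "route to the fastest responder" behaviour described here amounts to racing concurrent requests and cancelling the losers. A minimal sketch with simulated latencies standing in for real OpenRouter calls (model names and timings are made up):

```python
import asyncio

LATENCY = {"fast": 0.01, "slow": 0.2, "stuck": 5.0}  # made-up per-model latencies

async def call_model(name: str, prompt: str) -> str:
    # Stand-in for an OpenRouter chat-completion request.
    await asyncio.sleep(LATENCY[name])
    return f"{name}: answer to {prompt!r}"

async def route(prompt: str, models: list[str]) -> str:
    # Fire every candidate concurrently and keep the first finisher,
    # so a single misbehaving model can't bottleneck the request.
    tasks = [asyncio.create_task(call_model(m, prompt)) for m in models]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    await asyncio.gather(*pending, return_exceptions=True)  # reap cancelled tasks
    return done.pop().result()

result = asyncio.run(route("hello", ["slow", "fast", "stuck"]))
print(result)  # the "fast" model wins the race
```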
Help with optimising GPT-OSS-120B on Llama.cpp’s Vulkan branch | 2 | Hello there!
Let’s get down to brass tacks:
My system specs are as follows:
CPU: 11600F
Memory: 128GB DDR4 3600MHz C16 (I was lucky pre-crisis)
GPUs: 3x Intel Arc A770’s (running the Xe driver)
OS: Ubuntu 25.04 (VM), Proxmox CE (host)
I’m trying to optimise my run command/build args for GPT-OSS-120B.
I use the Vulkan b... | 2026-02-15T18:09:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r5li7d/help_with_optimising_gptoss120b_on_llamacpps/ | HumerousGorgon8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5li7d | false | null | t3_1r5li7d | /r/LocalLLaMA/comments/1r5li7d/help_with_optimising_gptoss120b_on_llamacpps/ | false | false | self | 2 | null |
Naked female students jump on a man's face | 0 | r | 2026-02-15T18:08:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r5lguo/студентки_голые_прыгают_на_лицо_мужику/ | Quirky_Car_4282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5lguo | false | null | t3_1r5lguo | /r/LocalLLaMA/comments/1r5lguo/студентки_голые_прыгают_на_лицо_мужику/ | true | false | spoiler | 0 | null |
Naked female students jump on a man's face | 0 | naked students jump on a man's face | 2026-02-15T18:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r5lg3g/студентки_голые_прыгают_на_лицо_мужику/ | Quirky_Car_4282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5lg3g | false | null | t3_1r5lg3g | /r/LocalLLaMA/comments/1r5lg3g/студентки_голые_прыгают_на_лицо_мужику/ | false | false | self | 0 | null |
Built a personal assistant easy to run locally | 9 | Hi
I built this project for myself because I wanted full control over what my personal assistant does and the ability to modify it quickly whenever I need to. I decided to share it on GitHub here's the link: [https://github.com/emanueleielo/ciana-parrot](https://github.com/emanueleielo/ciana-parrot)
If you find it us... | 2026-02-15T18:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r5lex8/built_a_personal_assistant_easy_to_run_locally/ | Releow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5lex8 | false | null | t3_1r5lex8 | /r/LocalLLaMA/comments/1r5lex8/built_a_personal_assistant_easy_to_run_locally/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'mHWao1DxHzBSaWCu-ZRmbvlEl3LPXDnDanBiVILrNEU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mHWao1DxHzBSaWCu-ZRmbvlEl3LPXDnDanBiVILrNEU.png?width=108&crop=smart&auto=webp&s=b4a43b217a39bb55292e55105648d25579e89cae', 'width': 108}, {'height': 108, 'url': 'h... |
Recent dual-core CPUs can be enough for LLM CPU offloading | 0 | I got a Pentium G6400 with 64 GB RAM and an RTX 2060 | 2026-02-15T17:33:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r5kkvl/recent_dualcore_cpus_can_be_enough_for_llm_cpu/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5kkvl | false | null | t3_1r5kkvl | /r/LocalLLaMA/comments/1r5kkvl/recent_dualcore_cpus_can_be_enough_for_llm_cpu/ | false | false | self | 0 | null |
Building a fully local AI roleplay app (private, customizable, experimental) — would this interest you? | 4 | I’m a software engineer and long-time roleplay fan, and I’ve been building a local-first AI roleplay desktop app for myself. I’m considering refining it into something more polished and usable.
The core idea:
• Fully local (no accounts, no cloud storage, no tracking)
• You choose which model to use
• Clean UI d... | 2026-02-15T17:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ki6g/building_a_fully_local_ai_roleplay_app_private/ | Different_Ad_8684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ki6g | false | null | t3_1r5ki6g | /r/LocalLLaMA/comments/1r5ki6g/building_a_fully_local_ai_roleplay_app_private/ | false | false | self | 4 | null |
Does anyone know how Nanbeige4.1-3B can be so impressive compared with other models of similar size? | 49 | It seems extremely consistent and cohesive, with no repetition in anything I've tested so far, and it works very well at small VRAM sizes.
How is this possible? | 2026-02-15T17:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r5kgn0/does_anyone_know_how_nanbeige413b_can_be_so/ | cloudxaas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5kgn0 | false | null | t3_1r5kgn0 | /r/LocalLLaMA/comments/1r5kgn0/does_anyone_know_how_nanbeige413b_can_be_so/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': '8w-UKjLf1mDqWPuF1nTgK69XoaTwWdGBd4N-hiGxJMg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8w-UKjLf1mDqWPuF1nTgK69XoaTwWdGBd4N-hiGxJMg.png?width=108&crop=smart&auto=webp&s=4f8a4a149a25a06bc6364d59ef25b60ad29bf76d', 'width': 108}, {'height': 116, 'url': 'h... |
If you were starting with local LLMs today, what would you do differently | 57 | Hey all,
I am seriously considering investing a significant portion of my signing bonus into a local LLM setup as a hobby and learning project once I start my job in August.
I am currently in university. I have studied a lot of theory, but I feel I am missing practical, hands-on experience.
If you were starting from... | 2026-02-15T17:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1r5k46x/if_you_were_starting_with_local_llms_today_what/ | Bubbly_Run_2349 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5k46x | false | null | t3_1r5k46x | /r/LocalLLaMA/comments/1r5k46x/if_you_were_starting_with_local_llms_today_what/ | false | false | self | 57 | null |
If you slap a GPU that needs PCIe 4 into a 2015 Dell office tower, how do LLMs that are entirely loaded on the GPU perform? | 1 | Ryzen 5 1600, Pentium G6400, i7-2600, or i3-6100 paired with 4x NVIDIA RTX 2060
Will I encounter a bottleneck, given that these CPUs don't support PCIe 4? | 2026-02-15T17:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r5js7i/if_you_try_and_slap_a_gpucard_that_needs_pcie_4/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5js7i | false | null | t3_1r5js7i | /r/LocalLLaMA/comments/1r5js7i/if_you_try_and_slap_a_gpucard_that_needs_pcie_4/ | false | false | self | 1 | null |
Building a self-hosted AI Knowledge System with automated ingestion, GraphRAG, and proactive briefings - looking for feedback | 1 | I've spent the last few weeks researching how to build a personal AI-powered knowledge system and wanted to share where I landed and get feedback before I commit to building it.
**The Problem**
I consume a lot of AI content: \~20 YouTube channels, \~10 podcasts, \~8 newsletters, plus papers and articles. The problem ... | 2026-02-15T17:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r5jrti/building_a_selfhosted_ai_knowledge_system_with/ | EmergencyAddition433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5jrti | false | null | t3_1r5jrti | /r/LocalLLaMA/comments/1r5jrti/building_a_selfhosted_ai_knowledge_system_with/ | false | false | self | 1 | null |
Looking for technical co-founder – AI Native SEO / agentic infra | 0 | Hey 👋
I’m building an **AI-native, agentic SEO agency** that lives inside tools like **WordPress, GitHub, Framer, Webflow**, etc — not a dashboard, but something that actually *runs* SEO end-to-end.
I’m a non-technical founder with strong product + GTM + logic background, but I’m looking for a **technical co-fou... | 2026-02-15T16:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r5jm80/looking_for_technical_cofounder_ai_native_seo/ | Comfortable-Risk9023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5jm80 | false | null | t3_1r5jm80 | /r/LocalLLaMA/comments/1r5jm80/looking_for_technical_cofounder_ai_native_seo/ | false | false | self | 0 | null |
I want to create my own app, a coin whose story will start off stronger than Bitcoin's | 0 | .... | 2026-02-15T16:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r5jhpd/я_хочу_создать_свое_приложение_которое_будет/ | Dear_Measurement_684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5jhpd | false | null | t3_1r5jhpd | /r/LocalLLaMA/comments/1r5jhpd/я_хочу_создать_свое_приложение_которое_будет/ | true | false | nsfw | 0 | null |
Self-hosting coding models (DeepSeek/Qwen) - anyone doing this for unlimited usage? | 10 | I've been hitting credit limits on Cursor/Copilot pretty regularly. Expensive models eat through credits fast when you're doing full codebase analysis.
Thinking about self-hosting DeepSeek V3 or Qwen for coding. Has anyone set this up successfully?
Main questions:
\- Performance compared to Claude/GPT-4 for code... | 2026-02-15T16:40:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r5j70a/selfhosting_coding_models_deepseekqwen_anyone/ | Big_Rope2548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5j70a | false | null | t3_1r5j70a | /r/LocalLLaMA/comments/1r5j70a/selfhosting_coding_models_deepseekqwen_anyone/ | false | false | self | 10 | null |
7 levels of AI-assisted development | 0 | 2026-02-15T16:39:04 | https://www.hyperact.co.uk/blog/7-levels-of-ai-assisted-development | ArtisticProgrammer11 | hyperact.co.uk | 1970-01-01T00:00:00 | 0 | {} | 1r5j637 | false | null | t3_1r5j637 | /r/LocalLLaMA/comments/1r5j637/7_levels_of_aiassisted_development/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'tMxHCaFuYbKpstSldJcGbTNZXr2Ss41il8xeOmzyDfo', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/tMxHCaFuYbKpstSldJcGbTNZXr2Ss41il8xeOmzyDfo.jpeg?width=108&crop=smart&auto=webp&s=a7c36838beca9a03d884ed005cbb0e827aee3d84', 'width': 108}, {'height': 123, 'url': '... | ||
Solo dev needs testers - open-source AI agent tool with bugs but real potential | 1 | [removed] | 2026-02-15T16:32:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r5j0bv/solo_dev_needs_testers_opensource_ai_agent_tool/ | Fit_Soup_1391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5j0bv | false | null | t3_1r5j0bv | /r/LocalLLaMA/comments/1r5j0bv/solo_dev_needs_testers_opensource_ai_agent_tool/ | false | false | self | 1 | null |
Buy a Mac or GPU? | 0 | I am planning to run purely text-based LLMs locally for simple tasks like general chat and brainstorming (and possibly some light python coding and rag). I am not sure if I should go the m series route or the nvidia route. As of this writing, what's the best entry point for local ai that is a balance between cost, perf... | 2026-02-15T16:28:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r5iwez/buy_a_mac_or_gpu/ | SnooOranges0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5iwez | false | null | t3_1r5iwez | /r/LocalLLaMA/comments/1r5iwez/buy_a_mac_or_gpu/ | false | false | self | 0 | null |
I Trained My AI to Refuse My Commands Using Aviation Safety Protocols | 0 | TL;DR: Used Crew Resource Management (CRM) from aviation—the same protocols that reduced flight fatalities 90%+—to teach AI when to push back or refuse. Working prompts included. Tested with Claude but principles should work across models.
The Problem: Your AI Is Too Obedient
We've all been there. You're tired, distr... | 2026-02-15T16:26:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r5iunv/i_trained_my_ai_to_refuse_my_commands_using/ | swiss-tomcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5iunv | false | null | t3_1r5iunv | /r/LocalLLaMA/comments/1r5iunv/i_trained_my_ai_to_refuse_my_commands_using/ | false | false | self | 0 | null |
CodeAct vs Recursive LMs: restructuring inference instead of increasing context windows | 0 | I’ve been experimenting with two ideas around making LLM systems more scalable:
* **CodeAct** → using code as an action interface
* **Recursive Language Models (RLM)** → using code as a reasoning controller
Instead of trying to increase context windows indefinitely, both approaches restructure how inference happens.
... | 2026-02-15T16:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r5isfb/codeact_vs_recursive_lms_restructuring_inference/ | shreyanshjain05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5isfb | false | null | t3_1r5isfb | /r/LocalLLaMA/comments/1r5isfb/codeact_vs_recursive_lms_restructuring_inference/ | false | false | self | 0 | null |
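A CodeAct-style loop — code as the action interface — reduces to: the policy emits a code string, a controller executes it in a persistent namespace, and the result becomes the next observation. A toy sketch with a canned policy standing in for the LLM (the two-step task is made up):

```python
# Toy CodeAct-style loop: the "policy" emits code strings, the controller
# executes them in a persistent namespace and feeds the outcome back.
def toy_policy(observation: str) -> str:
    # A real system would call an LLM here; this canned policy just
    # computes in two steps to show the execute/observe cycle.
    if "start" in observation:
        return "x = sum(range(10))"
    return "result = x * 2"

ns: dict = {}                      # persistent state shared across actions
obs = "start"
for _ in range(2):
    action = toy_policy(obs)       # the model "acts" by writing code
    exec(action, ns)               # the environment executes the code
    obs = f"executed: {action}"    # observation returned to the model

print(ns["result"])  # → 90
```

The same structure underlies the RLM variant, except there the emitted code *controls further model calls* rather than acting on an external environment.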
Whole Album of songs Generation on your own PC tutorial | 0 | [https://www.youtube.com/watch?v=5b3yCqHQOoI](https://www.youtube.com/watch?v=5b3yCqHQOoI) | 2026-02-15T16:22:38 | https://www.reddit.com/r/LocalLLaMA/comments/1r5iqpf/whole_album_of_songs_generation_on_your_own_pc/ | Legion10008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5iqpf | false | null | t3_1r5iqpf | /r/LocalLLaMA/comments/1r5iqpf/whole_album_of_songs_generation_on_your_own_pc/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oiSiIscnnGqJyvXnVRrNrkp-R0f61wS6Mlp2FuCFpyE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oiSiIscnnGqJyvXnVRrNrkp-R0f61wS6Mlp2FuCFpyE.jpeg?width=108&crop=smart&auto=webp&s=e359f63bddde09998e8fecf4cd142c58ab27ed59', 'width': 108}, {'height': 162, 'url': '... |
🚀 Launched today: PDFagain.com | 0 | It’s an AI-powered, all-in-one PDF platform designed to make document workflows faster and smarter with Chat with PDF functionality.
👉 Try it here: [https://pdfagain.com/](https://pdfagain.com/) | 2026-02-15T16:20:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ip3l/launched_today_pdfagaincom/ | rohit-ramakkanavar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ip3l | false | null | t3_1r5ip3l | /r/LocalLLaMA/comments/1r5ip3l/launched_today_pdfagaincom/ | false | false | self | 0 | null |
Local Gemini/GPT-like UI feeling: LLM, vLLM, STT/TTS, and text-to-image via one UI | 2 | Hi,
I'm looking for recommendations for a centralized WebUI for my local setup. I've got the backends running but I'm searching for the perfect frontend that offers a smooth, seamless user experience similar to ChatGPT or Gemini.
Here is my current backend stack that the UI needs to handle:
• LLMs: Two 32b models... | 2026-02-15T16:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r5ii5y/local_geminigpt_like_ui_feeling_llm_vllm_ssttts/ | MageLD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5ii5y | false | null | t3_1r5ii5y | /r/LocalLLaMA/comments/1r5ii5y/local_geminigpt_like_ui_feeling_llm_vllm_ssttts/ | false | false | self | 2 | null |
Good local setup for LLM training/finetuning? | 3 | Hi,
This is my first post on reddit, sorry in advance if this is a naive question. I am a PhD student working on ML/RL theory, and I don't have access to compute at my university. Over the past year, I have been trying to transition toward empirical work on LLMs (e.g., for reasoning), but it has been frustratingly ... | 2026-02-15T15:47:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r5htyt/good_local_setup_for_llm_trainingfinetuning/ | Glittering-Hat-7629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5htyt | false | null | t3_1r5htyt | /r/LocalLLaMA/comments/1r5htyt/good_local_setup_for_llm_trainingfinetuning/ | false | false | self | 3 | null |
GLM 5 vs Claude Opus 4.6: the paradox of paying $100 / $200 per month and still chasing hype | 20 | I’ve had a hard-to-ignore sense of paradox for weeks now. Just a month ago, a lot of us were paying $100 / $200 to Anthropic (for example via Claude Code) for a level of capability that, at the time, felt “worth” the price. Today, Claude Opus 4.6 is clearly more refined—but then GLM 5 shows up pushing incredibly hard, ... | 2026-02-15T15:41:44 | https://www.reddit.com/r/LocalLLaMA/comments/1r5hp3a/glm_5_vs_claude_opus_46_the_paradox_of_paying_100/ | willymunoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5hp3a | false | null | t3_1r5hp3a | /r/LocalLLaMA/comments/1r5hp3a/glm_5_vs_claude_opus_46_the_paradox_of_paying_100/ | false | false | self | 20 | null |
You can run MiniMax-2.5 locally | 447 | MiniMax-2.5 is a new open LLM achieving SOTA in coding, agentic tool use and search and office work.
The 230B-parameter (10B active) model has a **200K context** window, and unquantized bf16 requires **457GB**.
Unsloth Dynamic **3-bit** GGUF reduces size to **101GB** **(-62%).**
**Official Guide -** [**https://unsl... | 2026-02-15T15:14:51 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5h1gj | false | null | t3_1r5h1gj | /r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/ | false | false | default | 447 | {'enabled': True, 'images': [{'id': 'hd369oaucojg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/hd369oaucojg1.jpeg?width=108&crop=smart&auto=webp&s=1cd5c6273a2a0ed7b57f61c572e677da0a2eebb6', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/hd369oaucojg1.jpeg?width=216&crop=smart&auto=... | |
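The headline sizes follow from simple bits-per-weight arithmetic. A sketch, assuming ~3.5 effective bits/weight for a "dynamic 3-bit" mix (an assumption: actual GGUF files add metadata and keep some tensors at higher precision, which is why the real numbers are 457 GB and 101 GB rather than the idealized ones):

```python
# Back-of-envelope GGUF sizing (ignores metadata and mixed-precision tensors):
def model_gb(params_b: float, bits_per_weight: float) -> float:
    # params_b: parameter count in billions; returns decimal gigabytes.
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(round(model_gb(230, 16), 1))   # bf16: 16 bits/weight → 460.0
print(round(model_gb(230, 3.5), 1))  # assumed ~3.5 bpw dynamic quant → 100.6
```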
Hi all, I just started out with local AI, don't have a clue what I'm doing, and am totally confused by all the jargon; some advice please | 3 | I have Windows 11, 32 GB RAM, an RTX 4060 card with 8 GB VRAM, and an Intel chip, so I know I can't run big models well. I've tried: 12 GB downloads only to find them unusable (mostly img2video)
I was advised by ChatGPT to start out with Pinokio as it has one-click installs, which I did, and I have stumbled upon 3 brilliant models that I can use ... | 2026-02-15T15:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r5gqf2/hi_all_i_just_started_out_with_local_ai_dont_have/ | coys68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5gqf2 | false | null | t3_1r5gqf2 | /r/LocalLLaMA/comments/1r5gqf2/hi_all_i_just_started_out_with_local_ai_dont_have/ | false | false | self | 3 | null |
Brain surgery on LLMs via LoRA | 6 |
If you’ve been playing with LoRA, you know you can fine-tune a model by only touching specific "parts" of its brain. I decided to run a controlled experiment using a Qwen-2.5 3B model to see how it modifies its behaviour as a result of adapting different parts of its layers.
The domain I work in is AI academic system... | 2026-02-15T15:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r5gpfv/brain_surgery_on_llms_via_lora/ | FeeMassive4003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5gpfv | false | null | t3_1r5gpfv | /r/LocalLLaMA/comments/1r5gpfv/brain_surgery_on_llms_via_lora/ | false | false | self | 6 | null |
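The mechanics being probed — a frozen weight W plus a low-rank update (alpha/r)·BA attached only to chosen modules — fit in a few lines of NumPy (toy sizes, not the actual Qwen-2.5 3B setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4             # hidden size, LoRA rank, scaling factor

W = rng.standard_normal((d, d))   # frozen base weight (e.g. one attention proj)
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))              # B starts at zero: the adapter is a no-op at init

def forward(x, use_adapter: bool):
    delta = (alpha / r) * (B @ A) if use_adapter else 0.0
    return x @ (W + delta).T

x = rng.standard_normal(d)
# Before any training, the adapted output equals the base model exactly:
assert np.allclose(forward(x, True), forward(x, False))

B[:] = rng.standard_normal((d, r))  # pretend training updated the adapter
# Now only the rank-r delta (2*d*r trainable params) differs; W is untouched.
print(np.allclose(forward(x, True), forward(x, False)))
```

Targeting `target_modules` in a LoRA config is exactly choosing *which* W matrices get a BA pair attached, which is why adapting different layers steers different behaviours.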
Opencode Agent Swarms! | 0 | [https://github.com/lanefiedler731-gif/OpencodeSwarms](https://github.com/lanefiedler731-gif/OpencodeSwarms)
I vibecoded this with opencode btw.
This fork emulates Kimi K2.5 Agent Swarms, any model, up to 100 agents at a time.
You will have to build this yourself.
(Press tab until you see "Swarm\_manager" mode en... | 2026-02-15T14:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r5gl8p/opencode_agent_swarms/ | Available-Craft-5795 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5gl8p | false | null | t3_1r5gl8p | /r/LocalLLaMA/comments/1r5gl8p/opencode_agent_swarms/ | false | false | self | 0 | null |
One Week Review of Bot | 1 | [removed] | 2026-02-15T14:54:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r5gjp9/one_week_review_of_bot/ | Long_Complex_4395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5gjp9 | false | null | t3_1r5gjp9 | /r/LocalLLaMA/comments/1r5gjp9/one_week_review_of_bot/ | false | false | self | 1 | null |
should I expect this level of variation for batch and ubatch at depth 30000 for step flash IQ2_M ? | 0 | I typically do not touch these flags at all, but I saw a post where someone claimed tuning them could make a big difference for some specific model. Since claude code loads up 20k tokens on its own, I have targeted 30k as my place to try and optimize. The TLDR is PP varied from 293 - 493 and TG from 16.7 - 45.3 with... | 2026-02-15T14:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r5gj0r/should_i_expect_this_level_of_variation_for_batch/ | jdchmiel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5gj0r | false | null | t3_1r5gj0r | /r/LocalLLaMA/comments/1r5gj0r/should_i_expect_this_level_of_variation_for_batch/ | false | false | self | 0 | null |
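For reproducible comparisons, a llama-bench sweep is easier to reason about than ad-hoc runs. A hypothetical sweep sketch (flag spellings per recent llama.cpp builds; `-d` sets the KV-cache depth in newer builds and may be absent in older ones):

```shell
# Sweep batch (-b) and ubatch (-ub) at ~30k depth; one table row per combo.
for b in 512 1024 2048; do
  for ub in 256 512; do
    llama-bench -m model.gguf -p 512 -n 64 -d 30000 -b "$b" -ub "$ub"
  done
done
```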
Qwen3-Coder-Next on M3 Pro 36GB | 4 | Hello,
Currently, I am using qwen3-coder:30b and it works fine. I would like to switch to Qwen3-Coder-Next. Does it make sense to do so? Will my MacBook be able to handle this? | 2026-02-15T14:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5g36e/qwen3codernext_on_m3_pro_36gb/ | Sketusky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5g36e | false | null | t3_1r5g36e | /r/LocalLLaMA/comments/1r5g36e/qwen3codernext_on_m3_pro_36gb/ | false | false | self | 4 | null |
Qwen3-Coder-Next ggufs: Any difference between Q4KXL and MXFP4? | 22 | The latter is a few GB smaller, but are there any meaningful differences performance-wise? | 2026-02-15T14:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r5fxyd/qwen3codenext_ggufs_any_difference_between_q4kxl/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5fxyd | false | null | t3_1r5fxyd | /r/LocalLLaMA/comments/1r5fxyd/qwen3codenext_ggufs_any_difference_between_q4kxl/ | false | false | self | 22 | null |
24gb M4 Mac Mini vs 9070XT + 32gb system RAM. What to expect? | 1 | As the title says. I'm considering getting myself either a Mac Mini or Custom PC for AI and Gaming. PC is the obvious winner here for gaming, but I'm curious on the AI performance before I decide, especially:
1. Maximum parameters I can realistically run?
2. Token speed
Thanks! | 2026-02-15T14:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r5fxn5/24gb_m4_mac_mini_vs_9070xt_32gb_system_ram_what/ | Soft-Distance-6571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5fxn5 | false | null | t3_1r5fxn5 | /r/LocalLLaMA/comments/1r5fxn5/24gb_m4_mac_mini_vs_9070xt_32gb_system_ram_what/ | false | false | self | 1 | null |
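For a rough ceiling on token generation, decode speed is approximately memory bandwidth divided by bytes read per token. A sketch with assumed figures (~120 GB/s unified memory for a base M4, ~640 GB/s VRAM for a 9070 XT; real throughput lands well below these ceilings, and the 9070 XT only helps if the model fits in its 16 GB):

```python
# Rough decode-speed ceiling: bandwidth / bytes read per generated token.
def est_tok_s(bandwidth_gb_s: float, active_params_b: float, bits: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Assumed example: an 8B dense model at 4 bits/weight.
print(round(est_tok_s(120, 8, 4), 1))  # M4 Mac mini ceiling
print(round(est_tok_s(640, 8, 4), 1))  # 9070 XT ceiling (model must fit in VRAM)
```

For "maximum parameters", the same arithmetic inverted gives capacity: usable memory × 8 / bits-per-weight, minus room for KV cache and the OS.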
GLM-4.7-Flash (IQ5_K GGUF) Bench: CPU-only vs Hybrid (exps=CPU) vs Full GPU (RTX PRO 6000 Blackwell, EPYC 9175F) | 9 | author:~$ Non-native English; AI helped with translation/structure. All numbers are from my logs.🙇
I benchmarked **GLM-4.7-Flash (IQ5\_K GGUF)** across three different execution modes. The goal was to quantify the performance impact of offloading MoE (Mixture of Experts) to the CPU versus keeping everything on th... | 2026-02-15T14:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r5fs69/glm47flash_iq5_k_gguf_bench_cpuonly_vs_hybrid/ | Express-Jicama-9827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5fs69 | false | null | t3_1r5fs69 | /r/LocalLLaMA/comments/1r5fs69/glm47flash_iq5_k_gguf_bench_cpuonly_vs_hybrid/ | false | false | self | 9 | null |
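For reference, the three modes map onto invocations like these (hypothetical: `exps=CPU` in the title suggests an override-tensor pattern, spelled `-ot` in llama.cpp/ik_llama.cpp; the model filename is illustrative):

```shell
llama-server -m GLM-4.7-Flash-IQ5_K.gguf -ngl 0                # CPU-only
llama-server -m GLM-4.7-Flash-IQ5_K.gguf -ngl 99 -ot exps=CPU  # hybrid: experts on CPU
llama-server -m GLM-4.7-Flash-IQ5_K.gguf -ngl 99               # full GPU
```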
I ran System Design tests on GLM-5, Kimi k2.5, Qwen 3, and more. Here are the results. | 7 | Last week I posted my System Design benchmark here and got roasted (rightfully so) for focusing on closed models.
I listened. I spent the weekend doing two things:
1. **Adding Open Weight Support:** I ran the benchmark against **Qwen 3**, **GLM-5**, and **Kimi k2.5**. I tested them on the original problem (**Design a... | 2026-02-15T14:12:22 | Ruhal-Doshi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5flim | false | null | t3_1r5flim | /r/LocalLLaMA/comments/1r5flim/i_ran_system_design_tests_on_glm5_kimi_k25_qwen_3/ | false | false | 7 | {'enabled': True, 'images': [{'id': '7cntqdzu1ojg1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/7cntqdzu1ojg1.png?width=108&crop=smart&auto=webp&s=0ffffcd9dd13dae18c04ce17017ed0083bd90360', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/7cntqdzu1ojg1.png?width=216&crop=smart&auto=web... | ||
LMStudio + SillyTavern Docker on DockerHub | 1 | [removed] | 2026-02-15T13:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r5exfz/lmstudio_sillytavern_docker_on_dockerhub/ | m94301 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5exfz | false | null | t3_1r5exfz | /r/LocalLLaMA/comments/1r5exfz/lmstudio_sillytavern_docker_on_dockerhub/ | false | false | self | 1 | null |
GLM 4.7 and Qwen3 coder Next | 4 | What is the general consensus on the two models especially when it comes to tools calling? I expect both will be replaced soon but which of these two is optimal? | 2026-02-15T13:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r5etqr/glm_47_and_qwen3_coder_next/ | Thump604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5etqr | false | null | t3_1r5etqr | /r/LocalLLaMA/comments/1r5etqr/glm_47_and_qwen3_coder_next/ | false | false | self | 4 | null |
I have a question about running LLMs fully offline | 1 | I’m experimenting with running LLMs entirely on mobile hardware without cloud dependency. The challenge isn’t the model itself, it’s dealing with memory limits, thermal throttling, and sustained compute on edge devices.
How do others optimize for reliability and performance when inference has to stay fully local? Any ti... | 2026-02-15T13:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r5estr/i_have_a_question_about_running_llms_fully_offline/ | NeoLogic_Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5estr | false | null | t3_1r5estr | /r/LocalLLaMA/comments/1r5estr/i_have_a_question_about_running_llms_fully_offline/ | false | false | self | 1 | null |
AI Agents forget everything between sessions - I built a Brain-Like memory system with temporal decay and auto-extraction. | 0 | I've been running Claude Code and OpenClaw as daily coding agents for months now, and the biggest pain point isn't hallucination or context limits — it's amnesia.
Every time context compacts or a session ends, all the decisions, bug fixes, and architectural choices just vanish. I was spending the first 10 minutes of e... | 2026-02-15T13:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r5eeht/ai_agents_forget_everything_between_sessions_i/ | Maximum_Fearless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5eeht | false | null | t3_1r5eeht | /r/LocalLLaMA/comments/1r5eeht/ai_agents_forget_everything_between_sessions_i/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'FyZHC18JVGY-QRObyeqcd9MYgKxoEacyaBN_3UeqyYw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FyZHC18JVGY-QRObyeqcd9MYgKxoEacyaBN_3UeqyYw.png?width=108&crop=smart&auto=webp&s=f6abd9413848f480a22e59a8367e04e9678e3929', 'width': 108}, {'height': 108, 'url': 'h... |
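The "temporal decay" part of such a memory system is usually just an exponentially decayed relevance score at retrieval time. A minimal sketch (the week-long decay constant, word-overlap relevance, and sample memories are all assumptions, not the post's actual implementation):

```python
import math
import time

TAU = 7 * 24 * 3600  # assumed decay constant: roughly a week, in seconds

def score(memory: dict, query_terms: set, now: float) -> float:
    # Relevance (word overlap) discounted by an exponential of the memory's age.
    overlap = len(query_terms & set(memory["text"].lower().split()))
    age = now - memory["t"]
    return overlap * math.exp(-age / TAU)

now = time.time()
memories = [
    {"text": "fixed the auth bug by rotating tokens", "t": now - 3600},           # 1h old
    {"text": "fixed flaky test in auth module",       "t": now - 30 * 24 * 3600}, # 30d old
]
best = max(memories, key=lambda m: score(m, {"auth", "fixed"}, now))
print(best["text"])  # the recent memory wins despite equal word overlap
```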
dual Xeon server, 768GB -> LocalLLAMA? | 0 | So guys, I can get an old server with 40 cores, any idea what tokens/sec i can get out of it and if it's worth the electricity cost or i better subscribe to one of top tokens magicians online? | 2026-02-15T13:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r5e89s/dual_xeon_server_768gb_localllama/ | Glad-Audience9131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5e89s | false | null | t3_1r5e89s | /r/LocalLLaMA/comments/1r5e89s/dual_xeon_server_768gb_localllama/ | false | false | self | 0 | null |
sirchmunk: embedding-and-index-free retrieval for fast-moving data | 1 | Recently came across sirchmunk, which seems to be a refreshing take on information retrieval, as it skips the embedding pipeline entirely.
It works on raw data without the heavy lifting of embeddings. Compared to other embedding-free approaches such as PageIndex, sirchmunk doesn't require a pre-indexing phase either. ins... | 2026-02-15T12:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r5e0x8/sirchmunk_embeddingandindexfree_retrieval_for/ | HugeConsideration211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5e0x8 | false | null | t3_1r5e0x8 | /r/LocalLLaMA/comments/1r5e0x8/sirchmunk_embeddingandindexfree_retrieval_for/ | false | false | self | 1 | null |
how to train a tiny model (4B) to prove hard theorems | 144 | 2026-02-15T12:55:39 | eliebakk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5dyna | false | null | t3_1r5dyna | /r/LocalLLaMA/comments/1r5dyna/how_to_train_a_tiny_model_4b_to_prove_hard/ | false | false | default | 144 | {'enabled': True, 'images': [{'id': 'pqtgdyl5onjg1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/pqtgdyl5onjg1.png?width=108&crop=smart&auto=webp&s=2c8de13f0f4adddb6bd9fb53ab6068e9d13d3589', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/pqtgdyl5onjg1.png?width=216&crop=smart&auto=we... | ||
Are knowledge graphs the best operating infrastructure for agents? | 1 | A knowledge graph seems like the best way to link AI diffs to structured evidence, to mitigate hallucinations and prevent the duplication of logic across a codebase. The idea behind KGs for agents is that, rather than reconstructing context at runtime, an agent uses a persistent bank that is strictly maintained using do... | 2026-02-15T12:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r5dvk9/are_knowledge_graphs_are_the_best_operating/ | SnooPeripherals5313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5dvk9 | false | null | t3_1r5dvk9 | /r/LocalLLaMA/comments/1r5dvk9/are_knowledge_graphs_are_the_best_operating/ | false | false | self | 1 | null |
Model that can hold opinions and a conversation? | 0 | I want to run a model that will actually hold opinions. I tried a bunch of ways to manipulate an LLM, but I think I am terrible at it because I get told "I am an AI that generates human-like responses"
I just want to talk to a computer like I do to a normal person | 2026-02-15T12:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r5dsg9/model_that_can_hold_opinions_and_a_conversation/ | HSVMalooGTS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5dsg9 | false | null | t3_1r5dsg9 | /r/LocalLLaMA/comments/1r5dsg9/model_that_can_hold_opinions_and_a_conversation/ | false | false | self | 0 | null |
What are optimal llama.cpp / PC settings? | 3 | Hello everyone. I recently started using llama.cpp; previously I used Ollama. I have a Ryzen 7700X + 64 GB 6400 + a 16 GB 5070 Ti. In the BIOS I use an EXPO profile so that the memory works with optimal timings and frequency. I also set the Infinity Fabric frequency to optimal.
I use Ubuntu, the latest version of llama.cpp and the... | 2026-02-15T12:21:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r5db7d/what_is_llamacpp_or_pc_optimal_settings/ | Typical_Swimming3593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5db7d | false | null | t3_1r5db7d | /r/LocalLLaMA/comments/1r5db7d/what_is_llamacpp_or_pc_optimal_settings/ | false | false | self | 3 | null |
Step 3.5 and Minimax m. 2.5 on local hardware - some tests (ik_llama) | 26 | Hello!
I did some llama-bench tests on the ik\_llama.cpp fork: it has SOTA quants (iq4\_kss and others) and is faster at prompt processing in both CPU-only and CUDA + CPU modes.
[on my machine](https://preview.redd.it/c9gndrc3cnjg1.png?width=720&format=png&auto=webp&s=d5b1bfd500f3eff470e671bcaf991ffbd5e4a793)
[.\/ik... | 2026-02-15T12:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1r5d9ax/step_35_and_minimax_m_25_on_a_local_hardware_some/ | ZealousidealBunch220 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5d9ax | false | null | t3_1r5d9ax | /r/LocalLLaMA/comments/1r5d9ax/step_35_and_minimax_m_25_on_a_local_hardware_some/ | false | false | 26 | null | |
Built a dedicated AI assistant box with Jetson Orin Nano Super - runs 24/7 for EUR 399 | 1 | [removed] | 2026-02-15T12:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/1r5d2dy/built_a_dedicated_ai_assistant_box_with_jetson/ | superactro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5d2dy | false | null | t3_1r5d2dy | /r/LocalLLaMA/comments/1r5d2dy/built_a_dedicated_ai_assistant_box_with_jetson/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'V37YiiwzUWXS-WAD8qYJ-Os5_Gkt960o8zkWzlS6IgU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V37YiiwzUWXS-WAD8qYJ-Os5_Gkt960o8zkWzlS6IgU.png?width=108&crop=smart&auto=webp&s=c4176946dd3bdb9d7c3b289a9815b6cfdc7df3a2', 'width': 108}, {'height': 113, 'url': 'h... |
Is just a meme... | 624 | I did need to buy some ECC DDR4 :( | 2026-02-15T11:35:42 | HumanDrone8721 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5chzd | false | null | t3_1r5chzd | /r/LocalLLaMA/comments/1r5chzd/is_just_a_meme/ | false | false | 624 | {'enabled': True, 'images': [{'id': 'qfotdf9z9njg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/qfotdf9z9njg1.png?width=108&crop=smart&auto=webp&s=e8f9bfcb3678ff6349d7cc5064c72ecf262aa4d5', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/qfotdf9z9njg1.png?width=216&crop=smart&auto=we... | ||
I’ve created an AI compression/structure tool that’s really useful and I need help. | 0 | Basically, I’ve spent the last 11 months working on something born out of a real pain point: LLMs running out of context as I needed them to read so many files.
So I created OCTAVE, which is a structure + compression layer that makes AI coding workflows more reliable and cheaper.
But I’m all alone. Solo developer not i... | 2026-02-15T11:33:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r5cgjc/ive_created_an_ai_compressionstructure_tool_thats/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5cgjc | false | null | t3_1r5cgjc | /r/LocalLLaMA/comments/1r5cgjc/ive_created_an_ai_compressionstructure_tool_thats/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-DuaVbeIAunwg6kzrH-K-4ON6_gVRu9HFJMAMasAtUs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-DuaVbeIAunwg6kzrH-K-4ON6_gVRu9HFJMAMasAtUs.png?width=108&crop=smart&auto=webp&s=77725199199e75654ec2c7b7dba599f3672bd4d1', 'width': 108}, {'height': 108, 'url': 'h... |
I benchmarked every 1-bit model I could find: native 1-bit is 50% faster than post-quantized | 2 | I've been building ARIA Protocol, an open-source distributed inference system for 1-bit quantized LLMs (ternary weights: -1, 0, +1). I couldn't find a proper cross-vendor benchmark of 1-bit models, so I ran one myself.
Everything was tested on an AMD Ryzen 9 7845HX (Zen 4) with 64 GB DDR5, AVX-512 VNNI+VBMI verified in... | 2026-02-15T11:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r5cby8/i_benchmarked_every_1bit_model_i_could_find/ | EiwazDeath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5cby8 | false | null | t3_1r5cby8 | /r/LocalLLaMA/comments/1r5cby8/i_benchmarked_every_1bit_model_i_could_find/ | false | false | self | 2 | null |
RX 7900 XTX vs RTX 3090 for gaming + local LLM/AI (Linux) — and can 24GB run ~70B with EXL2? | 1 | Hi everyone. I’m planning to build/buy a PC within the next \~6 months (it’s a gift, so the timing isn’t fully up to me). I want to use it for both gaming and local AI/LLM projects.
I’m currently choosing between:
1. AMD RX 7900 XTX (24GB)
2. NVIDIA RTX 3090 (24GB)
My environment / goals:
1. OS: Linux (I’m fine wit... | 2026-02-15T11:17:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r5c77f/rx_7900_xtx_vs_rtx_3090_for_gaming_local_llmai/ | AdStriking8966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5c77f | false | null | t3_1r5c77f | /r/LocalLLaMA/comments/1r5c77f/rx_7900_xtx_vs_rtx_3090_for_gaming_local_llmai/ | false | false | self | 1 | null |