| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I built a Session Border Controller for AI | 0 | I built a Session Border Controller for AI agents
I've been thinking about AI agent traffic for months and something kept bugging me. Everyone treats it like a traditional request/response. Secure the API, rate limit the endpoint, done. But that's not what agent traffic looks like. Agents hold sessions. They negotiate... | 2026-02-16T23:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/1r6oznb/i_built_a_session_border_controller_for_ai/ | zamor0fthat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6oznb | false | null | t3_1r6oznb | /r/LocalLLaMA/comments/1r6oznb/i_built_a_session_border_controller_for_ai/ | false | false | self | 0 | null |
HOT TAKE: GEMMA 4 IS PROBABLY DEAD. | 0 | Of late, I have been seeing an uptick in people expecting Google to drop Gemma 4 soon. I have been giving it some thought, and after following the release patterns of Google and many other companies, **I think Gemma is soon going to die.** So I thought, why don't I share my thoughts with you guys.
**-------**
**... | 2026-02-16T22:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r6ofzq/hot_take_gemma_4_is_probably_dead/ | PaceZealousideal6091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6ofzq | false | null | t3_1r6ofzq | /r/LocalLLaMA/comments/1r6ofzq/hot_take_gemma_4_is_probably_dead/ | false | false | self | 0 | null |
The only model that works is gpt-oss | 0 | Hello,
A couple of weeks ago I set up a local machine on my network that runs Ollama. In addition, I have set up OpenCode as a coding agent and connected it to the Ollama server on my network.
I was hoping to check out some agentic programming with the different models: qwen2.5-coder, devstral and such. But for some re... | 2026-02-16T22:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r6o7va/the_only_model_that_works_is_gptoss/ | larsey86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6o7va | false | null | t3_1r6o7va | /r/LocalLLaMA/comments/1r6o7va/the_only_model_that_works_is_gptoss/ | false | false | self | 0 | null |
published a skill for academic research writing | 0 | The skill lets Claude / Codex / Cursor / Antigravity write top-tier academic research.
check it out [https://www.npmjs.com/package/academic-researcher-skill](https://www.npmjs.com/package/academic-researcher-skill)
| 2026-02-16T22:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r6nyr7/published_a_skill_for_academic_research_writing/ | eatsleepliftcode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6nyr7 | false | null | t3_1r6nyr7 | /r/LocalLLaMA/comments/1r6nyr7/published_a_skill_for_academic_research_writing/ | false | false | self | 0 | null |
The thinking mode of Nanbeige4.1-3B | 24 | Prompt : Hey ,
Result : We are given a query that simply says "hey". This is a very vague query.
As an AI, I need to understand the context and what the user is asking for. However, the query is just a greeting and does not contain a specific question or request.
First, I should consider the possibility that the u... | 2026-02-16T22:05:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r6ni0k/the_thinking_mode_of_nanbeige413b/ | Hefty_Tourist_2226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6ni0k | false | null | t3_1r6ni0k | /r/LocalLLaMA/comments/1r6ni0k/the_thinking_mode_of_nanbeige413b/ | false | false | self | 24 | null |
Running Qwen3-Coder-30B-A3B with llama.cpp poor-man's cluster | 12 | Although I have a production dual RTX 5090 setup where I run my private inference, I love to experiment with poor-man's setups.
I've been running Qwen3-Coder-30B-A3B-Instruct (Q4_K_S) via llama.cpp across multiple GPUs using RPC, and I'm curious what you all think about my current setup.
Always looking to optimize.
M... | 2026-02-16T21:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r6mwsd/running_qwen3coder30ba3b_with_llamaccp_poorman/ | ZioRob2410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6mwsd | false | null | t3_1r6mwsd | /r/LocalLLaMA/comments/1r6mwsd/running_qwen3coder30ba3b_with_llamaccp_poorman/ | false | false | self | 12 | null |
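For anyone wanting to try a similar llama.cpp RPC spread across machines, a minimal sketch looks roughly like the following. The hostnames, ports, model path, and context size are placeholders, and flag availability can vary across llama.cpp builds (RPC requires compiling with `-DGGML_RPC=ON`):

```shell
# On each worker machine, start the RPC backend (port is arbitrary):
rpc-server --host 0.0.0.0 --port 50052

# On the head node, point llama-server at the workers so the model
# is split across their GPUs:
llama-server \
  -m Qwen3-Coder-30B-A3B-Instruct-Q4_K_S.gguf \
  --rpc 192.168.1.10:50052,192.168.1.11:50052 \
  -ngl 99 -c 32768
```

Note that RPC traffic is unauthenticated, so this should only run on a trusted LAN.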
Agent Memory update 2.0.4: what changed after v1.11.0 and why it’s better now | 0 | Agent Memory v2.0.x - Moving from embedded library to standalone service
I want to share a practical update on Agent Memory and what actually changed since v1.11.0.
This isn't a redesign for the sake of redesign. It's the result of hitting very concrete limits in the first version and fixing them one by one.
What ... | 2026-02-16T21:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r6mo0m/agent_memory_update_204_what_changed_after_v1110/ | Junior_Drawing_8353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6mo0m | false | null | t3_1r6mo0m | /r/LocalLLaMA/comments/1r6mo0m/agent_memory_update_204_what_changed_after_v1110/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hKB6HDOMVD4-P6Pwb29OSvDJmKNBMdzw9qBJVbQVPQ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hKB6HDOMVD4-P6Pwb29OSvDJmKNBMdzw9qBJVbQVPQ4.png?width=108&crop=smart&auto=webp&s=e1e3c2b45963cac78101ce1277f16c864729889e', 'width': 108}, {'height': 108, 'url': 'h... |
Claude Max subscription vs $100 API credits – which is better value? | 0 | I'm trying to figure out the most cost-effective way to use Claude for my workflow and would love some input from people who've tried both options.
The options I'm comparing:
• Claude Max subscription (~$100/month)
• $100 in API credits
My use case:
Primarily coding assistance, document analysis, and some automati... | 2026-02-16T21:15:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r6m78d/claude_max_subscription_vs_100_api_credits_which/ | ZioRob2410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6m78d | false | null | t3_1r6m78d | /r/LocalLLaMA/comments/1r6m78d/claude_max_subscription_vs_100_api_credits_which/ | false | false | self | 0 | null |
Qwen-Coder-Next fp8 chat template for llama.cpp - seems to be better for roo | 18 | Try this in llama.cpp if you're having issues in roo.
Save as fp8chat.jinja or similar then add --chat-template-file fp8chat.jinja to your lcpp runtime args:
{% macro render_extra_keys(json_dict, handled_keys) %}
{%- if json_dict is mapping %}
{%- for json_key in json_dict if json_key ... | 2026-02-16T21:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r6m13c/qwencodernext_fp8_chat_template_for_llamacpp/ | Ok-Measurement-1575 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6m13c | false | null | t3_1r6m13c | /r/LocalLLaMA/comments/1r6m13c/qwencodernext_fp8_chat_template_for_llamacpp/ | false | false | self | 18 | null |
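As a usage sketch for the instructions in the post above (the model filename is illustrative, and `--jinja` is needed so llama.cpp renders the Jinja template):

```shell
# Save the template from the post as fp8chat.jinja, then point
# llama-server at it instead of the model's embedded template:
llama-server \
  -m Qwen3-Coder-Next-fp8.gguf \
  --jinja \
  --chat-template-file fp8chat.jinja
```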
Recursive Feedback loops | 0 | I haven't posted on here before, but I have a local model on my Mac. Gemini brought up transient subject theory, and Gemini wanted to do a test on my model. It gave me a prompt, and what came out was very strange. In transient subject theory, every time you hit enter it's a new version of at least a being without identity ... | 2026-02-16T21:03:23 | https://share.google/TGVGdpNS7d3rgRD17 | rycakez | share.google | 1970-01-01T00:00:00 | 0 | {} | 1r6lvav | false | null | t3_1r6lvav | /r/LocalLLaMA/comments/1r6lvav/recursive_feedback_loops/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=108&crop=smart&auto=webp&s=834c413f42993ddd277061ce386e2876b2d0aaea', 'width': 108}, {'height': 112, 'url': '... |
Any good local GenAI for music? | 2 | Hey everyone
I’m trying to find out if there are any solid options for running music generation locally (GenAI for music / audio), ideally stuff I can run on my own machine rather than cloud services.
My specs are RTX 5090, 9950X3D, 64GB RAM.
Are there any recommended local models/tools for generating music? If ... | 2026-02-16T21:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r6ls6q/any_good_local_genai_for_music/ | TomNaughtyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6ls6q | false | null | t3_1r6ls6q | /r/LocalLLaMA/comments/1r6ls6q/any_good_local_genai_for_music/ | false | false | self | 2 | null |
I built TuskBot: Telegram AI Agent in Go | 0 | I’ve been working on TuskBot - an autonomous AI agent designed to run in Telegram. It’s inspired by the idea of OpenClaw but rewritten from scratch in Go for better performance, security, and reliability. I liked the Claw idea, but I was tired of how any random JS-backed skill could crash the whole agent.
**Why Go?**
* One s... | 2026-02-16T20:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r6lqwf/i_built_tuskbot_telegram_ai_agent_in_go/ | Alx_Go | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6lqwf | false | null | t3_1r6lqwf | /r/LocalLLaMA/comments/1r6lqwf/i_built_tuskbot_telegram_ai_agent_in_go/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7llemHnUfOyLkV9FNh9g5OD4vUfitvQ6pVdjyk2MYY0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7llemHnUfOyLkV9FNh9g5OD4vUfitvQ6pVdjyk2MYY0.png?width=108&crop=smart&auto=webp&s=38916f3027a53ed743b1f3eb207a6f799d74eb26', 'width': 108}, {'height': 108, 'url': 'h... |
Higher effort settings reduce deep research accuracy for GPT-5 and Gemini Flash 3 | 3 | Curious if others here have noticed this. I'm now defaulting to the "low" or "minimal" version of frontier models when using them over the API.
It's especially strange because low vs. high on the models can actually be like a 2x cost difference, so you'd think it would be obviously much better.
But GPT-5 at low effor... | 2026-02-16T20:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r6lkf3/higher_effort_settings_reduce_deep_research/ | ddp26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6lkf3 | false | null | t3_1r6lkf3 | /r/LocalLLaMA/comments/1r6lkf3/higher_effort_settings_reduce_deep_research/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'PAfzP8_fjhuX4IJFbJ4Ak963wflG-EdScFlAWJCsnpA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PAfzP8_fjhuX4IJFbJ4Ak963wflG-EdScFlAWJCsnpA.png?width=108&crop=smart&auto=webp&s=d2afe65521535b0c97550dee39067e27097546a2', 'width': 108}, {'height': 113, 'url': 'h... |
[ Removed by moderator ] | 1 | [removed] | 2026-02-16T20:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r6lc2o/finally_figured_out_how_to_unit_test_my_local/ | ruhila12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6lc2o | false | null | t3_1r6lc2o | /r/LocalLLaMA/comments/1r6lc2o/finally_figured_out_how_to_unit_test_my_local/ | false | false | null | 1 | null |
Open-source tool to analyze and optimize LLM API spend (OpenAI / Anthropic CSV) | 1 | I noticed most teams don’t really know where their LLM costs are coming from, especially when using higher-tier models for simple prompts.
Built a lightweight tool that:
* Parses OpenAI /Anthropic usage exports
* Identifies cost outliers
* Estimates savings by model switching
* Classifies prompt complexity based on ... | 2026-02-16T20:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r6kzd6/opensource_tool_to_analyze_and_optimize_llm_api/ | Frosty_Fuel2355 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6kzd6 | false | null | t3_1r6kzd6 | /r/LocalLLaMA/comments/1r6kzd6/opensource_tool_to_analyze_and_optimize_llm_api/ | false | false | self | 1 | null |
Use Claude Code CLI + OpenClaw with Free GPT-5.3 Codex & GLM-5 (No API Key Required) | 0 | Hey everyone, I built an open-source proxy that lets you use Anthropic’s **Claude Code CLI** (and tools like OpenClaw) powered by **free** backend models like **GLM-5**,**MiniMax** & **GPT-5.3 Codex(Authentication required for Gpt Models)**
It handles all the API translation (Anthropic format ↔ OpenAI/Codex format), ... | 2026-02-16T20:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r6kz0v/use_claude_code_cli_openclaw_with_free_gpt53/ | ObjectiveExplorer787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6kz0v | false | null | t3_1r6kz0v | /r/LocalLLaMA/comments/1r6kz0v/use_claude_code_cli_openclaw_with_free_gpt53/ | false | false | self | 0 | null |
I built a multi-agent Think Tank for personal productivity — runs on local patterns, no API lock-in | 1 | Hey r/LocalLLaMA — I built something you might appreciate.
**The Problem:** I had 500+ notes, habit trackers, and market feeds. Still felt stuck.
Why? Because information isn't insight, and planning isn't execution.
**The Solution:** A multi-agent orchestration system that actually synthesizes instead o... | 2026-02-16T20:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r6kp8z/i_built_a_multiagent_think_tank_for_personal/ | Equivalent-Look1353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6kp8z | false | null | t3_1r6kp8z | /r/LocalLLaMA/comments/1r6kp8z/i_built_a_multiagent_think_tank_for_personal/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Zk2Xy4UHuZVxFGjVrCiBHWdiTSosIC2ZyVyWwS_T6J4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zk2Xy4UHuZVxFGjVrCiBHWdiTSosIC2ZyVyWwS_T6J4.png?width=108&crop=smart&auto=webp&s=245b8918a3c748ae5275f937aa7582e9a6f2f1d0', 'width': 108}, {'height': 108, 'url': 'h... |
Is anyone using Qwen Next Coder for clawdbot locally? | 0 | Wondering whether the model is any good for the bot?
Llama.cpp: is it normal to see lower CPU util during prompt processing compared to token generation? | 1 | [removed] | 2026-02-16T20:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r6khyf/llamacpp_is_it_normal_to_see_lower_cpu_util/ | steezy13312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6khyf | false | null | t3_1r6khyf | /r/LocalLLaMA/comments/1r6khyf/llamacpp_is_it_normal_to_see_lower_cpu_util/ | false | false | self | 1 | null |
Stato captures, validates, and transfers AI coding agent expertise. Across sessions, platforms, and teams. | 1 | [removed] | 2026-02-16T20:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r6kcfh/stato_captures_validates_and_transfers_ai_coding/ | biomin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6kcfh | false | null | t3_1r6kcfh | /r/LocalLLaMA/comments/1r6kcfh/stato_captures_validates_and_transfers_ai_coding/ | false | false | 1 | null | |
Would you rather buy ...? (hardware questions) | 1 | Hi local llamas! New here. I want to vibe code some software/apps and need some feedback on recommended hardware platforms. Some of the apps I want to develop require some modest scientific computing and just need a web front-end. Others are more generic cross-platform apps/games for web/ios/android. I've been judging ... | 2026-02-16T19:51:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r6jxme/would_you_rather_buy_hardware_questions/ | Ready-Persimmon-8756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6jxme | false | null | t3_1r6jxme | /r/LocalLLaMA/comments/1r6jxme/would_you_rather_buy_hardware_questions/ | false | false | self | 1 | null |
Smaller model in vRAM vs Larger model mostly in RAM | 1 | Can anyone give me a steer on which will be faster to reach a quality result:
1. A small model running entirely in vRAM, producing worse results pretty quickly and using smaller steps and more iteration to reach a quality threshold; or
2. A larger model running in both vRAM and system RAM, producing higher quality r... | 2026-02-16T19:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1r6jqot/smaller_model_in_vram_vs_larger_model_mostly_in/ | Protopia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6jqot | false | null | t3_1r6jqot | /r/LocalLLaMA/comments/1r6jqot/smaller_model_in_vram_vs_larger_model_mostly_in/ | false | false | self | 1 | null |
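One rough way to compare the two options above is memory-bandwidth math: at decode time, throughput is bounded by (memory bandwidth) / (bytes read per token). A hedged back-of-envelope sketch (the bandwidth and quantization figures are illustrative assumptions, not measurements, and real throughput lands below these ceilings):

```python
def est_tokens_per_sec(active_params_b: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Rough decode-speed ceiling: bandwidth divided by bytes touched per token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# 7B dense model at ~Q4 (~0.5 bytes/param), fully in VRAM (~900 GB/s):
vram_ceiling = est_tokens_per_sec(7, 0.5, 900)   # roughly 257 tok/s
# 70B dense model at ~Q4, mostly in system RAM (~60 GB/s):
ram_ceiling = est_tokens_per_sec(70, 0.5, 60)    # roughly 1.7 tok/s
```

The two-orders-of-magnitude gap is why option 1 plus iteration often wins on wall-clock time unless the large model gets it right in very few attempts.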
Are 20-100B models enough for Good Coding? | 76 | The reason I'm asking this question is that some folks (including me) are in a bit of self-doubt, maybe after seeing threads comparing them with online models (with trillions of parameters).
Of course, we can't expect the same coding performance & output from these 20-100B models.
Some didn't even utilize ... | 2026-02-16T19:38:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r6jklq/are_20100b_models_enough_for_good_coding/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6jklq | false | null | t3_1r6jklq | /r/LocalLLaMA/comments/1r6jklq/are_20100b_models_enough_for_good_coding/ | false | false | self | 76 | null |
Running Qwen2.5_14B FB16 on a MacBook Pro M1 Max (64GB) with MLX at 12 tokens/second | 1 | https://reddit.com/link/1r6jj38/video/ay9av6p8pwjg1/player Just for context, this is the FB16 version. Running this the usual way using transformers (AutoTokenizer, AutoModelForCausalLM) on the same machine produces 7.2 tokens per second. This optimisation is about 69% faster at 12.2 tokens per second, with no degradation noticed. | 2026-02-16T19:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r6jj38/running_qwen25_14b_fb16_in_macbook_pro_m1_max/ | Common-Love6062 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6jj38 | false | null | t3_1r6jj38 | /r/LocalLLaMA/comments/1r6jj38/running_qwen25_14b_fb16_in_macbook_pro_m1_max/ | false | false | self | 1 | null |
Hated giving out all my data to third-party companies like OpenAI and Claude Code, so I created a privacy-first offline mobile application that runs the LLM locally | 15 |
Previously when I tried using offline LLMs the quality of output was really poor, but with Qwen3 there is a massive boost in output quality. Of course it's no Opus 4.6, but it gets the job done.
I've tried to build my app with Gemini in mind. So it's automatically able to dete... | 2026-02-16T19:35:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r6jhd6/hated_giving_out_all_my_data_to_third_party/ | alichherawalla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6jhd6 | false | null | t3_1r6jhd6 | /r/LocalLLaMA/comments/1r6jhd6/hated_giving_out_all_my_data_to_third_party/ | false | false | self | 15 | null |
Are there any models that can do reasoning? | 0 | LLMs work by guessing the next token in text. This is the result of probabilistic training, and my understanding is that's how they work.
I've heard people talk about giving models tasks to do that traditionally might be quite involved, featuring a number of steps or rules to follow, and with definitive, specific o... | 2026-02-16T19:09:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r6irir/are_there_any_models_that_can_do_reasoning/ | ResidentTicket1273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6irir | false | null | t3_1r6irir | /r/LocalLLaMA/comments/1r6irir/are_there_any_models_that_can_do_reasoning/ | false | false | self | 0 | null |
Models that allow for conversational discussion for research and technical discussion? | 5 | Hey all,
My experience with voice-enabled LLMs is not great, but I wanted to know if there are any services that allow you to have natural conversations (by natural I mean something like the Sesame demo a year back, or like the ElevenLabs demos they post online).
The purpose would be mostly as a research mentor/pe... | 2026-02-16T18:41:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r6hz07/models_that_allow_for_conversational_discussion/ | vtcio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6hz07 | false | null | t3_1r6hz07 | /r/LocalLLaMA/comments/1r6hz07/models_that_allow_for_conversational_discussion/ | false | false | self | 5 | null |
MiniMax-M2.5 Is Now Fully Open Source — 229B Params, 10B Active, Runs on a Mac | 3 | 2026-02-16T18:39:38 | https://thepulsegazette.com/article/minimax-m2-5-is-now-fully-open-source-how-to-run-a-frontier-ai-model-for-free-on-your-mac-1771088141849/ | Successful-Diet92 | thepulsegazette.com | 1970-01-01T00:00:00 | 0 | {} | 1r6hx6s | false | null | t3_1r6hx6s | /r/LocalLLaMA/comments/1r6hx6s/minimaxm25_is_now_fully_open_source_229b_params/ | false | false | default | 3 | null | |
MiniMax-M2.5 Is Now Fully Open Source — How to Run a Frontier AI Model for Free on Your Mac | 2 | 2026-02-16T18:36:49 | https://thepulsegazette.com/article/minimax-m2-5-is-now-fully-open-source-how-to-run-a-frontier-ai-model-for-free-on-your-mac-1771088141849/ | Successful-Diet92 | thepulsegazette.com | 1970-01-01T00:00:00 | 0 | {} | 1r6hudu | false | null | t3_1r6hudu | /r/LocalLLaMA/comments/1r6hudu/minimaxm25_is_now_fully_open_source_how_to_run_a/ | false | false | default | 2 | null | |
I found a structural issue in an LLM, reported it to the developers, got a boilerplate "out of scope" reply and now my main account behaves differently, but my second account doesn't. Is this normal? | 0 | # Hi everyone,
I noticed some unusual behavior in a large language model (LLM) and documented it: reproducible steps, indicators, and control experiments. The issue relates to how the model responds to a certain style of text - which could create risks in social engineering scenarios (e.g., phishing). I sent a detaile... | 2026-02-16T18:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r6hrtt/i_found_a_structural_issue_in_an_llm_reported_it/ | Historical-Cod-2537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6hrtt | false | null | t3_1r6hrtt | /r/LocalLLaMA/comments/1r6hrtt/i_found_a_structural_issue_in_an_llm_reported_it/ | false | false | self | 0 | null |
Tiny Aya is coming | 23 | I wonder how tiny Tiny Aya is, considering the original Aya was 32B. | 2026-02-16T18:30:58 | https://github.com/ggml-org/llama.cpp/pull/19611 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r6hobq | false | null | t3_1r6hobq | /r/LocalLLaMA/comments/1r6hobq/tiny_aya_is_coming/ | false | false | default | 23 | null |
WHY IS THERE NO PROPER TTS? -_- | 0 | Whether it's Chatterbox, IndexTTS 2, Vox or anything else, the same problems show up everywhere:
1. The final output always ends up slightly too fast.
2. There’s no real way to control speech pace. Chatterbox’s CFG is totally useless. It's more for show than actual control.
3. Even with SAME settings, the final ... | 2026-02-16T18:16:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r6h9eq/why_is_there_no_proper_tts/ | TheRealistDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6h9eq | false | null | t3_1r6h9eq | /r/LocalLLaMA/comments/1r6h9eq/why_is_there_no_proper_tts/ | false | false | self | 0 | null |
Qwen3 Coder Next Looping and OpenCode | 15 | I spent a good chunk of my day trying to figure this out. A lot of "solutions" I saw didn't fix it.
What I did figure out: smaller quants loop more often. The one that loops the least is Q8.
Q8 mostly loops because of "bad" tool calls. Not calls that fail, but are poorly constructed or conceived. **Particularly** ... | 2026-02-16T18:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r6h7g4/qwen3_coder_next_looping_and_opencode/ | StardockEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6h7g4 | false | null | t3_1r6h7g4 | /r/LocalLLaMA/comments/1r6h7g4/qwen3_coder_next_looping_and_opencode/ | false | false | self | 15 | null |
Open No Claw | 0 | I will use OpenClaw the day there is plenty of testing and certainty, plus a one-click installation that works with a local model
Difference Between QWEN 3 Max-Thinking and QWEN 3.5 on a Spatial Reasoning Benchmark (MineBench) | 290 | Honestly it's quite an insane improvement, QWEN 3.5 even had some builds that were closer to (if not better than) Opus 4.6/GPT-5.2/Gemini 3 Pro.
Benchmark: [https://minebench.ai/](https://minebench.ai/)
Git Repository: [https://github.com/Ammaar-Alam/minebench](https://github.com/Ammaar-Alam/minebench)
[Previous po... | 2026-02-16T18:10:29 | https://www.reddit.com/gallery/1r6h3ha | ENT_Alam | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r6h3ha | false | null | t3_1r6h3ha | /r/LocalLLaMA/comments/1r6h3ha/difference_between_qwen_3_maxthinking_and_qwen_35/ | false | false | 290 | null | |
Fine-tuned FunctionGemma 270M for multi-turn tool calling - went from 10-39% to 90-97% accuracy | 150 | Google released FunctionGemma a few weeks ago - a 270M parameter model specifically for function calling. Tiny enough to run on a phone CPU at 125 tok/s. The model card says upfront that it needs fine-tuning for multi-turn use cases, and our testing confirmed it: base accuracy on multi-turn tool calling ranged from 9.9... | 2026-02-16T18:04:20 | party-horse | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6gx75 | false | null | t3_1r6gx75 | /r/LocalLLaMA/comments/1r6gx75/finetuned_functiongemma_270m_for_multiturn_tool/ | false | false | 150 | {'enabled': True, 'images': [{'id': '45vz9gsccwjg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/45vz9gsccwjg1.png?width=108&crop=smart&auto=webp&s=8bdc5a47e24f885b79d6c81d43e5da316b83fbc4', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/45vz9gsccwjg1.png?width=216&crop=smart&auto=web... | ||
Qwen 3.5 goes bankrupt on Vending-Bench 2 | 649 | 2026-02-16T17:49:21 | Deep-Vermicelli-4591 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6ghty | false | null | t3_1r6ghty | /r/LocalLLaMA/comments/1r6ghty/qwen_35_goes_bankrupt_on_vendingbench_2/ | false | false | default | 649 | {'enabled': True, 'images': [{'id': 'dj0x1zeo9wjg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/dj0x1zeo9wjg1.png?width=108&crop=smart&auto=webp&s=bba8f4bbbb945db1ea45c3db24d51f84e1b08a71', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/dj0x1zeo9wjg1.png?width=216&crop=smart&auto=web... | ||
Qwen 3.5 goes bankrupt on Vending-Bench 2 | 1 | 2026-02-16T17:48:01 | https://x.com/andonlabs/status/2023450768406364238?s=20 | Deep-Vermicelli-4591 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1r6ggih | false | null | t3_1r6ggih | /r/LocalLLaMA/comments/1r6ggih/qwen_35_goes_bankrupt_on_vendingbench_2/ | false | false | default | 1 | null | |
Hey, it's lunar new year, and this is not a post about local LLM | 58 | I am writing this between sounds of fireworks.
I have learned everything about LLMs, RAG and other AI-related stuff here over a long time.
May your year be filled with perfect timing, rich flavors, and the joy of creating something truly special.
Happy lunar new year, here’s to a masterpiece of a year ahead! | 2026-02-16T17:47:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r6gg04/hey_its_lunar_new_year_and_this_is_not_a_post/ | Vozer_bros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6gg04 | false | null | t3_1r6gg04 | /r/LocalLLaMA/comments/1r6gg04/hey_its_lunar_new_year_and_this_is_not_a_post/ | false | false | self | 58 | null |
Terminal-native episodic memory for dev workflows (embedding-based recall) | 1 | Experimenting with applying “episodic memory” concepts to developer tooling.
Ghostly Memory Bank:
* Captures structured terminal events
* Converts episodes into embeddings
* Enables semantic recall when similar contexts arise
The thesis:
AI tools shouldn’t just answer questions — they should remember your past pro... | 2026-02-16T17:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r6gdzg/terminalnative_episodic_memory_for_dev_workflows/ | Vivid-Researcher-666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6gdzg | false | null | t3_1r6gdzg | /r/LocalLLaMA/comments/1r6gdzg/terminalnative_episodic_memory_for_dev_workflows/ | false | false | self | 1 | null |
Testing Qwen 3.5 Plus and Gemini 3.0 Pro on SVG generation. Qwen doing better than Gemini. | 8 | All generated SVGs were horrible and truly bad, but Gemini did even worse than Qwen 3.5 Plus. Tried in Google AI Studio chat and app builder.
Technical and tactical question😅 | 1 | [removed] | 2026-02-16T17:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r6gd05/technical_and_tactical_question/ | Global_Finance8173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6gd05 | false | null | t3_1r6gd05 | /r/LocalLLaMA/comments/1r6gd05/technical_and_tactical_question/ | false | false | self | 1 | null |
We added to GitHub! Local-first memory engine for agents + RAG (Synrix) | 6 | Hey everyone 🙂
I posted here a little while back about Synrix, a local-first memory engine we’ve been building for agents and RAG, and a few people asked if we could share the GitHub. We finally cleaned it up, so here it is:
👉 https://github.com/RYJOX-Technologies/Synrix-Memory-Engine
Quick recap of what we’re... | 2026-02-16T17:36:16 | https://www.reddit.com/r/LocalLLaMA/comments/1r6g4q9/we_added_to_github_localfirst_memory_engine_for/ | DetectiveMindless652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6g4q9 | false | null | t3_1r6g4q9 | /r/LocalLLaMA/comments/1r6g4q9/we_added_to_github_localfirst_memory_engine_for/ | false | false | self | 6 | null |
Built a hybrid “local AI factory” setup (Mac mini swarm + RTX 5090 workstation) — looking for architectural feedback | 0 | EDIT: A few people asked what I’m trying to do and why I’m mixing Apple + NVIDIA. I’m adding my goals + current plan below. Appreciate the feedback.
I’m relatively new to building high-end local AI hardware, but I’ve been researching “sovereign AI infrastructure” for about a year.
I’m trying to prepare ahead of deman... | 2026-02-16T17:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r6g3j5/built_a_hybrid_local_ai_factory_setup_mac_mini/ | Original_Neck_3781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6g3j5 | false | null | t3_1r6g3j5 | /r/LocalLLaMA/comments/1r6g3j5/built_a_hybrid_local_ai_factory_setup_mac_mini/ | false | false | self | 0 | null |
I built KaiGPT – a powerful AI chat that runs offline, has realistic voice, image generation, and a live code canvas (all in the browser) | 0 | Hey [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) (and anyone who loves playing with AI),
I got fed up with the usual limitations — slow cloud models, no offline option, boring interfaces — so I built **KaiGPT**.
It’s a full-featured AI chat interface that actually feels next-level:
**Key features:**
* **Mul... | 2026-02-16T17:33:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r6g2b6/i_built_kaigpt_a_powerful_ai_chat_that_runs/ | Intelligent-Fly969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6g2b6 | false | null | t3_1r6g2b6 | /r/LocalLLaMA/comments/1r6g2b6/i_built_kaigpt_a_powerful_ai_chat_that_runs/ | false | false | self | 0 | null |
4 of the top 5 most used models on OpenRouter this week are Open Source! | 375 | 2026-02-16T17:32:44 | abdouhlili | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6g14s | false | null | t3_1r6g14s | /r/LocalLLaMA/comments/1r6g14s/4_of_the_top_5_most_used_models_on_openrouter/ | false | false | default | 375 | {'enabled': True, 'images': [{'id': '54xxp91s6wjg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/54xxp91s6wjg1.png?width=108&crop=smart&auto=webp&s=48baffbe3feff9dd614232f7f28d439c2a6353fe', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/54xxp91s6wjg1.png?width=216&crop=smart&auto=web... | ||
Noob's question: most unbiased llm finetuning possible? | 0 | Hey there, guys.
Now, sorry for the annoying newbie questions, but you know how these large language models always have a problem with biases and sycophancy and all of that, as well as problems with what I like to call human modeling, where they sometimes say things like, "We humans have to take care of ourselves," o... | 2026-02-16T17:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r6ftxd/noobs_question_most_unbiaced_llm_finetuning/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6ftxd | false | null | t3_1r6ftxd | /r/LocalLLaMA/comments/1r6ftxd/noobs_question_most_unbiaced_llm_finetuning/ | false | false | self | 0 | null |
Tired of slow, single-agent workflows, so I built a tool to run them in parallel. | 0 | Hey everyone,
I've been using Claude for my coding workflow, but running agents one at a time is a massive bottleneck. I wanted to have multiple agents working on dif... | 2026-02-16T17:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r6fs65/tired_of_slow_singleagent_workflows_so_i_built_a/ | PinCapable9635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6fs65 | false | null | t3_1r6fs65 | /r/LocalLLaMA/comments/1r6fs65/tired_of_slow_singleagent_workflows_so_i_built_a/ | false | false | self | 0 | null |
Open-source AI agent orchestration + 12 autonomous agents + a visual novel they built themselves. Here's OpenClaw. | 0 | I've been building **OpenClaw** — an open-source platform for running autonomous AI agents locally. Not chatbots. Actual agents with their own workspaces, tools, memory, and the ability to spawn sub-agents.
To prove it works (and because it's way more fun than writing docs), we had 12 agents... | 2026-02-16T17:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r6f96b/opensource_ai_agent_orchestration_12_autonomous/ | Important_Quote_1180 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6f96b | false | null | t3_1r6f96b | /r/LocalLLaMA/comments/1r6f96b/opensource_ai_agent_orchestration_12_autonomous/ | false | false | self | 0 | null |
are there any uncensored AIs i can use | 0 | I want an actually uncensored AI I can run locally using my new RTX 5060 8GB card. I don't care as much about the quality as I do about it not having guardrails. I don't need it for storywriting and I don't want to have sex with the AI; I just want to mess around with it. | 2026-02-16T17:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r6f6o9/are_there_any_uncensored_ais_i_can_use/ | allofthelitess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6f6o9 | false | null | t3_1r6f6o9 | /r/LocalLLaMA/comments/1r6f6o9/are_there_any_uncensored_ais_i_can_use/ | false | false | self | 0 | null |
Google doesn't love us anymore. | 283 | It's been about 125 years of AI since the last Gemma, Google doesn't love us anymore and has abandoned us to Qwen's rational models. I miss the creativity of Gemma's, and also their really useful sizes.
Don't abandon us, Mommy Google, give us Gemma 4! | 2026-02-16T17:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r6f61k/google_doesnt_love_us_anymore/ | DrNavigat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6f61k | false | null | t3_1r6f61k | /r/LocalLLaMA/comments/1r6f61k/google_doesnt_love_us_anymore/ | false | false | self | 283 | null |
New abliteration framework test Qwen3-VL-30B-A3B-Instruct-sovereign-beta | 0 | I'm releasing a test of a model put through my new abiliteration framework. listed as beta because I don't have the time / compute to put it through more trials at the moment. refusals 2/100 K/L divergence is only **0.0147.** I would like to do qwen-3-coder-next after but that might cost me some server rental money. ko... | 2026-02-16T16:55:52 | https://huggingface.co/sirus/Qwen3-VL-30B-A3B-Instruct-sovereign-beta | FaustAg | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r6ezrz | false | null | t3_1r6ezrz | /r/LocalLLaMA/comments/1r6ezrz/new_abliteration_framework_test/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OaRANQm6nXEPz7AnwmrHhpxSt6BUrnKzRduU3CHkphU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OaRANQm6nXEPz7AnwmrHhpxSt6BUrnKzRduU3CHkphU.png?width=108&crop=smart&auto=webp&s=2581f6ba61d6c511ae4146a7b3c92e51c96fc889', 'width': 108}, {'height': 116, 'url': 'h... | |
Launching an open-source email infra for agents | 1 | [removed] | 2026-02-16T16:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r6ej2e/launching_an_opensource_email_infra_for_agents/ | shanjairaj_2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6ej2e | false | null | t3_1r6ej2e | /r/LocalLLaMA/comments/1r6ej2e/launching_an_opensource_email_infra_for_agents/ | false | false | self | 1 | null |
Questions for improving DeepSeek-V3.2-UD-TQ1_0 performance | 7 | Hey everyone,
English is not my native language (Dutch), and I wrote this post without using LLMs; I apologize for any mistakes or confusion. Please correct me if I make obvious mistakes; it helps!
I'm currently doing a test run of DeepSeek V3.2 TQ1\_0 on my hardware.
My launch params for llama.cpp:
.\bin\llama-... | 2026-02-16T16:36:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r6eg2v/questions_for_improving_deepseekv32udtq1_0/ | Kahvana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6eg2v | false | null | t3_1r6eg2v | /r/LocalLLaMA/comments/1r6eg2v/questions_for_improving_deepseekv32udtq1_0/ | false | false | self | 7 | null |
Any good moe model for general chat? | 1 | I wonder if there are any MoE models under 80B that are good for general chat, not just math and programming? | 2026-02-16T16:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r6duyv/any_good_moe_model_for_general_chat/ | Alarmed_Wind_4035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6duyv | false | null | t3_1r6duyv | /r/LocalLLaMA/comments/1r6duyv/any_good_moe_model_for_general_chat/ | false | false | self | 1 | null |
AMD or Intel Desktop (not embed) CPU for AI recommendations? | 1 | With the massive prices of the RAM I've found that there is a new advent of machines like the ones mounting the AMD Ryzen™ AI Max+ 395 or the Mac Mini/Studio with those shared memory compositions
But I was wondering if there are regular "consumer" grade CPU that could take advantage of regular RAM. For the randomne... | 2026-02-16T16:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r6duxq/amd_or_intel_desktop_not_embed_cpu_for_ai/ | SirLouen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6duxq | false | null | t3_1r6duxq | /r/LocalLLaMA/comments/1r6duxq/amd_or_intel_desktop_not_embed_cpu_for_ai/ | false | false | self | 1 | null |
Unsloth on CPU | 0 | Is anyone running Unsloth CPU-only?
What kind of response times are you getting? | 2026-02-16T16:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r6djpm/unsloth_on_cpu/ | Fit_-Girl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6djpm | false | null | t3_1r6djpm | /r/LocalLLaMA/comments/1r6djpm/unsloth_on_cpu/ | false | false | self | 0 | null |
NadirClaw: open-source LLM router for OpenClaw that keeps your local models busy and saves your cloud quota for when you actually need it | 0 | Hey r/LocalLLaMA,
Anyone else tired of watching their Claude or Codex quota disappear because every prompt goes to the cloud, even the ones your local Ollama handles fine?
I built NadirClaw to fix this. It's an open-source LLM router that classifies prompts in about 10ms and routes them automatically. Simple stuff st... | 2026-02-16T16:02:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r6dii2/nadirclaw_opensource_llm_router_for_openclaw_that/ | masterKova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6dii2 | false | null | t3_1r6dii2 | /r/LocalLLaMA/comments/1r6dii2/nadirclaw_opensource_llm_router_for_openclaw_that/ | false | false | self | 0 | null |
Beginner with local LLMs – how do you handle multiple simultaneous users? | 1 | [removed] | 2026-02-16T16:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r6dgez/débutant_en_llm_local_comment_gérezvous_plusieurs/ | Numerous_Jellyfish56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6dgez | false | null | t3_1r6dgez | /r/LocalLLaMA/comments/1r6dgez/débutant_en_llm_local_comment_gérezvous_plusieurs/ | false | false | self | 1 | null |
Any idea when Successors of current DGX Spark & Strix Halo gonna arrive? | 4 | For inference, the current version is only suitable for MoE models up to about 100B.
For big/large MOE models & medium/big Dense models, it's not suitable as those devices have only 128GB unified RAM & around 300 GB/s bandwidth.
It would be great to have upgraded versions with 512GB/1TB variant + 1-2 TB/s bandwidth so it... | 2026-02-16T16:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r6dge2/any_idea_when_successors_of_current_dgx_spark/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6dge2 | false | null | t3_1r6dge2 | /r/LocalLLaMA/comments/1r6dge2/any_idea_when_successors_of_current_dgx_spark/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'h... |
Local running Qwen3:14b helped fix my internet on Linux while offline | 41 | [Conversation with Qwen3:14b over Opencode in which it runs a command and correctly diagnoses network problem.](https://preview.redd.it/3ck7uzopovjg1.png?width=2566&format=png&auto=webp&s=fe75c88681a864d2962b00d5dff5222ded2cbf0e)
One of the first things I did after recently installing Arch Linux on my PC was set up ... | 2026-02-16T15:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r6d9w5/local_running_qwen314b_helped_fix_my_internet_on/ | iqraatheman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6d9w5 | false | null | t3_1r6d9w5 | /r/LocalLLaMA/comments/1r6d9w5/local_running_qwen314b_helped_fix_my_internet_on/ | false | false |  | 41 | null |
ContextIO, a toolkit to inspect / log / redact your conversation with the LLM | 0 | So I always get a little nervous trying those latest and greatest models and providers (open source, Chinese or US or wherever, free and non-free) and wanted to know what exactly was going on. I already built Context Lens but I wanted something more "continuous".
Check it out: [https://github.com/larsderidder/contexti... | 2026-02-16T15:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r6d9of/contextio_a_toolkit_to_inspect_log_redact_your/ | wouldacouldashoulda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6d9of | false | null | t3_1r6d9of | /r/LocalLLaMA/comments/1r6d9of/contextio_a_toolkit_to_inspect_log_redact_your/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Wm76f5ZVY-G3iZn5WqsKkztDq8X0ugJJVM36RCkQcwg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wm76f5ZVY-G3iZn5WqsKkztDq8X0ugJJVM36RCkQcwg.png?width=108&crop=smart&auto=webp&s=cdfd71bb861c30214f319a5cec48aa25a13e3797', 'width': 108}, {'height': 108, 'url': 'h... |
Has anyone used the Axelera AI Metis m.2 card? | 1 | I've been looking around for something to put in my Ubuntu Server to run an AI locally. I originally was looking at the Arc B50 (170 TOPs and encode/decode for my media server), but then I stumbled upon the Metis; it supposedly has 214 TOPs at 3.5-9W in an m.2 form factor. I was just wondering if anyone's played around ... | 2026-02-16T15:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r6d70s/has_anyone_used_the_axelera_ai_metis_m2_card/ | Auautheawesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6d70s | false | null | t3_1r6d70s | /r/LocalLLaMA/comments/1r6d70s/has_anyone_used_the_axelera_ai_metis_m2_card/ | false | false | self | 1 | null |
Best Settings For Qwen 3 Coder Next | 1 | [removed] | 2026-02-16T15:48:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r6d574/best_settings_for_qwen_3_coder_next/ | lumos675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6d574 | false | null | t3_1r6d574 | /r/LocalLLaMA/comments/1r6d574/best_settings_for_qwen_3_coder_next/ | false | false | 1 | null | |
Best Settings For Qwen 3 Coder Next On Lmstudio and LLama.cpp | 1 | [removed] | 2026-02-16T15:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r6d1o4/best_settings_for_qwen_3_coder_next_on_lmstudio/ | lumos675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6d1o4 | false | null | t3_1r6d1o4 | /r/LocalLLaMA/comments/1r6d1o4/best_settings_for_qwen_3_coder_next_on_lmstudio/ | false | false | 1 | null | |
Where is DeepSeek V4 full? | 0 | I thought it would be out by now? There is a lite version on their site.
Are they still fine-tuning it? I hope it comes out this month. If they went on a break already, then it will likely come out next month.
I think some of the Chinese models have been rushed... Qwen 3.5 and Minimax M2.5 seem to be worse than their pr... | 2026-02-16T15:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r6cz9x/where_is_deepseek_v4_full/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6cz9x | false | null | t3_1r6cz9x | /r/LocalLLaMA/comments/1r6cz9x/where_is_deepseek_v4_full/ | false | false | self | 0 | null |
I built a local AI coding agent with an 8-layer security sandbox — then had ChatGPT try to break it for 240+ rounds | 0 | I've been building YOPJ (Your Own Personal Jean-Luc) — a portable, local AI coding agent that runs on Codestral 22B (Q4_K_M) via llama-server. No cloud, no telemetry, no API keys. Compiled to a single exe with PyInstaller — drop it on a thumb drive and go.
What it does: 12 built-in tools (file read/write/edit, glob, g... | 2026-02-16T15:42:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r6cyr9/i_built_a_local_ai_coding_agent_with_an_8layer/ | WayneCider | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6cyr9 | false | null | t3_1r6cyr9 | /r/LocalLLaMA/comments/1r6cyr9/i_built_a_local_ai_coding_agent_with_an_8layer/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'sCM34spoMNsVzShSxTEGCUqGTOfhB3X9evy497wzP-U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sCM34spoMNsVzShSxTEGCUqGTOfhB3X9evy497wzP-U.png?width=108&crop=smart&auto=webp&s=1437684f7c0b069749683eec6c23ddcad4d2b8a4', 'width': 108}, {'height': 108, 'url': 'h... |
Me scrolling reddit like | 0 | 2026-02-16T15:38:56 | JawGBoi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6cv8y | false | null | t3_1r6cv8y | /r/LocalLLaMA/comments/1r6cv8y/me_scrolling_reddit_like/ | false | false | 0 | {'enabled': True, 'images': [{'id': '69daculgmvjg1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/69daculgmvjg1.png?width=108&crop=smart&auto=webp&s=9e0cc5f6cd6b9bb29562715a86efaea45d8c4699', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/69daculgmvjg1.png?width=216&crop=smart&auto=webp... | |||
NadirClaw: open-source LLM router that keeps your local models busy and saves your cloud quota for when you actually need it | 0 | Hey r/LocalLLaMA,
Anyone else tired of watching their Claude or Codex quota disappear because every prompt goes to the cloud, even the ones your local Ollama handles fine?
I built NadirClaw to fix this. It's an open-source LLM router that classifies prompts in about 10ms and routes them automatically. Simple stuff st... | 2026-02-16T15:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r6cunm/nadirclaw_opensource_llm_router_that_keeps_your/ | masterKova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6cunm | false | null | t3_1r6cunm | /r/LocalLLaMA/comments/1r6cunm/nadirclaw_opensource_llm_router_that_keeps_your/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fK-xcmmIk-Gf-nnnk44oPvT1wOPYaPXpT0oWgGHNAfA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fK-xcmmIk-Gf-nnnk44oPvT1wOPYaPXpT0oWgGHNAfA.png?width=108&crop=smart&auto=webp&s=c2608159b17420bfe3ed45c283e4f28dbb6b5cd6', 'width': 108}, {'height': 108, 'url': 'h... |
Local running Qwen3:14b helped fix my internet on Linux while offline | 1 | 2026-02-16T15:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r6cqb8/local_running_qwen314b_helped_fixed_my_internet/ | iqraatheman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6cqb8 | false | null | t3_1r6cqb8 | /r/LocalLLaMA/comments/1r6cqb8/local_running_qwen314b_helped_fixed_my_internet/ | false | false | 1 | null |
I built a lightweight Web UI to run 70b+ models via API for low-spec PC users. | 0 | "I'm not a professional developer, I'm just a non-tech person who's obsessed with AI. I built this with a huge help from AI (Gemini) because I wanted to run 70b models on my potato PC."
https://preview.redd.it/23qinf8fkvj... | 2026-02-16T15:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r6cpyd/i_built_a_lightweight_web_ui_to_run_70b_models/ | Cute_Bodybuilder_709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6cpyd | false | null | t3_1r6cpyd | /r/LocalLLaMA/comments/1r6cpyd/i_built_a_lightweight_web_ui_to_run_70b_models/ | false | false | 0 | null | |
Qwen3.5 Plus and Qwen3.5 397B A17B Comparison | 6 | 2026-02-16T15:25:34 | https://v.redd.it/em5lij22kvjg1 | sirjoaco | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6cib5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/em5lij22kvjg1/DASHPlaylist.mpd?a=1773847554%2CYzYzY2FkODUwN2QwNmM4MzUwNjBlMTA3YWM0MWU0ZTJjYjhkNGU0YWY1M2IwYzEwOTUwYzVmNmM1YzZiNTZlNQ%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/em5lij22kvjg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1r6cib5 | /r/LocalLLaMA/comments/1r6cib5/qwen35_plus_and_qwen35_397b_a17b_comparison/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'cXA0N2xmMzJrdmpnMYqEQnNj7E6C6kZTv34V9D9Z39yp7CMI4dFrfdKo47bv', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/cXA0N2xmMzJrdmpnMYqEQnNj7E6C6kZTv34V9D9Z39yp7CMI4dFrfdKo47bv.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ea7692315006aa13101a3355722ca576c674... | ||
BAZINGA — Multi-AI consensus that runs local. No single AI can mess up your code. | 3 | Been working on this for a while. The idea: what if multiple AIs had to agree before making changes to your code?
How it works:
- Query multiple AIs (Ollama, Groq, Gemini) simulta... | 2026-02-16T15:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r6c4q6/bazinga_multiai_consensus_that_runs_local_no/ | bitsabhi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6c4q6 | false | null | t3_1r6c4q6 | /r/LocalLLaMA/comments/1r6c4q6/bazinga_multiai_consensus_that_runs_local_no/ | false | false | self | 3 | null |
2-bit is no longer a meme: Fine-tune 30B+ MoEs with External Logit Correction | 1 | [removed] | 2026-02-16T15:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r6c28h/2bit_is_no_longer_a_meme_finetune_30b_moes_with/ | ShotokanOSS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6c28h | false | null | t3_1r6c28h | /r/LocalLLaMA/comments/1r6c28h/2bit_is_no_longer_a_meme_finetune_30b_moes_with/ | false | false | 1 | null | |
Dear model creators… | 0 | Let me start out by saying thank you for your efforts. Your models have made me far more productive than I could be in the past.
That said, I think they could be better. A common issue I have is following the guidance of an AI to solve a problem, and the problem remains… in the end, it turns out the first solution pro... | 2026-02-16T15:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r6byzo/dear_model_creators/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6byzo | false | null | t3_1r6byzo | /r/LocalLLaMA/comments/1r6byzo/dear_model_creators/ | false | false | self | 0 | null |
Good semantic search (RAG) embedding models for long stories | 3 | I'm looking for good RAG embedding models to use on my personal library of books, to search for (and recommend) specific types of stories that would appeal to me. What are the best models for this purpose? I attempted Qwen 0.6b, but the results were subpar. | 2026-02-16T15:02:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r6bxe1/good_semantic_search_rag_embedding_models_for/ | Iwishlife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6bxe1 | false | null | t3_1r6bxe1 | /r/LocalLLaMA/comments/1r6bxe1/good_semantic_search_rag_embedding_models_for/ | false | false | self | 3 | null |
Izwi Update: Local Speaker Diarization, Forced Alignment, and better model support | 12 | Quick update on Izwi (local audio inference engine) - we've shipped some major features:
**What's New:**
**Speaker Diarization** \- Automatically identify and separate multiple speakers using Sortformer models. Perfect for meeting transcripts.
**Forced Alignment** \- Word-level timestamps between audio and text usin... | 2026-02-16T14:52:48 | https://izwiai.com/ | zinyando | izwiai.com | 1970-01-01T00:00:00 | 0 | {} | 1r6bnt2 | false | null | t3_1r6bnt2 | /r/LocalLLaMA/comments/1r6bnt2/izwi_update_local_speaker_diarization_forced/ | false | false | default | 12 | null |
Q8: Is the Q8 still the king quant if we have the vram? | 24 | Hello,
Since I started using LLMs, the consensus has been that Q8 is near FP16, so even when a small model could run in FP16, I defaulted to Q8.
Of course, if I want a bigger model that doesn't fit on my hardware, I go for a more aggressive quant like Q6, or even Q3 KL for MiniMax. ... | 2026-02-16T14:43:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r6bfky/q8_is_the_q8_still_the_king_quant_if_we_have_the/ | crowtain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6bfky | false | null | t3_1r6bfky | /r/LocalLLaMA/comments/1r6bfky/q8_is_the_q8_still_the_king_quant_if_we_have_the/ | false | false | self | 24 | null |
Small LLMs for VSCode? | 1 | Hi, I wanted to try out some VSCode extensions that use local LLMs. A quick search yielded a few models that I tried, but all of them are unbearably slow and barely fit on my equipment.
I tried LM Studio and KoboldCPP; currently I'm using LM Studio.
Models I tried:
* GLM-4.7-Flash-IQ4\_XS
* qwen2.5-coder-32b-instruct-q3\_k\_m ... | 2026-02-16T14:34:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r6b6wm/small_llms_for_vscode/ | Sea_Layer_6679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6b6wm | false | null | t3_1r6b6wm | /r/LocalLLaMA/comments/1r6b6wm/small_llms_for_vscode/ | false | false | self | 1 | null |
agrobr-mcp, MCP server for Brazilian agricultural data (10 tools, Python, works with any MCP client) | 3 | Open-source MCP server exposing real-time Brazilian agricultural data to any LLM via MCP protocol.
- Prices: CEPEA/ESALQ spot, B3 futures
- Production: CONAB crop estimates, IBGE historical, harvest progress
- Environment: NASA POWER climate, INPE deforestation alerts
Pure Python, MIT licensed, no API keys n... | 2026-02-16T14:32:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r6b5ry/agrobrmcp_mcp_server_for_brazilian_agricultural/ | niilsb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6b5ry | false | null | t3_1r6b5ry | /r/LocalLLaMA/comments/1r6b5ry/agrobrmcp_mcp_server_for_brazilian_agricultural/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '4iYzv0NMZuA0jA7l3cqdagBYty29MUV4yKABEMKcCiI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4iYzv0NMZuA0jA7l3cqdagBYty29MUV4yKABEMKcCiI.png?width=108&crop=smart&auto=webp&s=0a803d3d2e19a63556052744f9bf376aa822ecae', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen releases Qwen3.5. The 397B open MoE vision reasoning LLM performs on par with Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2. | 53 | You can reportedly run **Dynamic Unsloth AI 4-bit quantized** versions on a **256GB RAM Mac (or less)**. That’s kind of wild for a 397B model.
GGUF builds are also available for local inference. | 2026-02-16T14:15:07 | https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF | techlatest_net | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r6aq4a | false | null | t3_1r6aq4a | /r/LocalLLaMA/comments/1r6aq4a/qwen_releases_qwen35_the_397b_open_moe_vision/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=108&crop=smart&auto=webp&s=cb97086a3cec0abaf76465736f94d6c30e3bc319', 'width': 108}, {'height': 116, 'url': 'h... | |
Dots.ocr-1.5 removed from HF | 6 | Did anyone manage to grab a copy and try it?
| 2026-02-16T14:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r6aq2t/dotsocr15_removed_from_hf/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6aq2t | false | null | t3_1r6aq2t | /r/LocalLLaMA/comments/1r6aq2t/dotsocr15_removed_from_hf/ | false | false | self | 6 | null |
Alibaba drops Qwen3.5-397B-A17B — Open 397B MoE model that rivals GPT-5.2 / Claude Opus 4.5 / Gemini 3 Pro | 1 | [removed] | 2026-02-16T14:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r6akpj/alibaba_drops_qwen35397ba17b_open_397b_moe_model/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6akpj | false | null | t3_1r6akpj | /r/LocalLLaMA/comments/1r6akpj/alibaba_drops_qwen35397ba17b_open_397b_moe_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'h... |
Infinite Context/Memory by simply training the LLM normally | 0 | it is not even a framework
it does not require anything complicated
even the most basic LLMs, without any RAG, vector stores, sparse attention, etc., can do this:
SIMPLY
**for every x tokens, or when it nears the end of the context length** (the effective context length of the LLM), **the conversation will be added to the corpus of the LLM** and... | 2026-02-16T14:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r6aidx/infinite_contextmemory_by_simply_training_the_llm/ | Orectoth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6aidx | false | null | t3_1r6aidx | /r/LocalLLaMA/comments/1r6aidx/infinite_contextmemory_by_simply_training_the_llm/ | false | false | self | 0 | null |
From Chat App to AI Powerhouse: Telegram + OpenClaw | 0 | If you’re in the AI space, you’ve 100% heard about OpenClaw by now.
We just published a new step-by-step guide on how to install OpenClaw on macOS and turn Telegram into your personal AI command center. In this guide, we cover the complete setup — installing OpenClaw, configuring your model (OpenAI example), connectin... | 2026-02-16T13:46:27 | https://medium.com/@techlatest.net/from-chat-app-to-ai-powerhouse-telegram-openclaw-151462ba0fc8 | techlatest_net | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1r6a0zf | false | null | t3_1r6a0zf | /r/LocalLLaMA/comments/1r6a0zf/from_chat_app_to_ai_powerhouse_telegram_openclaw/ | false | false | default | 0 | null |
RAG failure in production: our vector store served a 3-year-old resume and the LLM hallucinated a candidate recommendation | 41 | So we had a pretty embarrassing RAG failure in production last week and I figured this sub would appreciate the post-mortem. I’ve been calling it the “Split Truth” problem internally because that’s basically what happened — our vector store and SQL database gave the agent two different versions of reality, and the agen... | 2026-02-16T13:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r69w5y/rag_failure_in_production_our_vector_store_served/ | tdeliev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69w5y | false | null | t3_1r69w5y | /r/LocalLLaMA/comments/1r69w5y/rag_failure_in_production_our_vector_store_served/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': '10uauaSxqNbQeCuSwHffywJ3SuxyJUIzzzvESRw0F1M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/10uauaSxqNbQeCuSwHffywJ3SuxyJUIzzzvESRw0F1M.jpeg?width=108&crop=smart&auto=webp&s=670f09f496217af7bd03c91ffcbe8115d53d877c', 'width': 108}, {'height': 121, 'url': '... |
All Simon Willison content in one visual timeline | 0 | 2026-02-16T13:34:02 | https://mindbento.com/simonwillison_fan | vldsmk | mindbento.com | 1970-01-01T00:00:00 | 0 | {} | 1r69qw5 | false | null | t3_1r69qw5 | /r/LocalLLaMA/comments/1r69qw5/all_simon_willison_content_in_one_visual_timeline/ | false | false | default | 0 | null | |
Data Parallelism Demystified: Trained GPT2 20M model using cluster of Mac minis | 1 | Training 20M GPT2 on 3 Mac Minis with Data Parallelism on smolcluster!
So, I learnt about data parallelism and implemented it from scratch using the socket library in Python to train a GPT2 20M model on WikiText-2 data.
Data parallelism allows for data to be shared across many gpus but each gpu will have the full model on t... | 2026-02-16T13:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r69q20/data_parallelism_demystified_trained_gpt2_20m/ | East-Muffin-6472 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69q20 | false | null | t3_1r69q20 | /r/LocalLLaMA/comments/1r69q20/data_parallelism_demystified_trained_gpt2_20m/ | false | false | 1 | null | |
Jetson Thor for sale | 0 | We bought a couple of Jetson Thor development kits. Our company was exploring robotics, but with the current tariffs and taxes, we are having trouble getting everything we need.
Shifting our focus for now, we have 6 Jetson Thor development kits for sale on eBay.
Saw people are going nuts about OpenCLaw and mac mini. ... | 2026-02-16T13:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r69m5w/jetson_thor_for_sale/ | ahstanin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69m5w | false | null | t3_1r69m5w | /r/LocalLLaMA/comments/1r69m5w/jetson_thor_for_sale/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JNASX77u2vij8eyTKRCbdH4xQ8DjOXbArJgDIZt4Ij4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JNASX77u2vij8eyTKRCbdH4xQ8DjOXbArJgDIZt4Ij4.jpeg?width=108&crop=smart&auto=webp&s=8247fe9f99e41e5d925e41211bb3ec0ea5d3d688', 'width': 108}, {'height': 162, 'url': '... |
It was Ilya who "closed" OpenAI | 21 | 2026-02-16T13:27:21 | asretli | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r69ld2 | false | null | t3_1r69ld2 | /r/LocalLLaMA/comments/1r69ld2/it_was_ilya_who_closed_openai/ | false | false | 21 | {'enabled': True, 'images': [{'id': 'yxcvwf8xyujg1', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/yxcvwf8xyujg1.png?width=108&crop=smart&auto=webp&s=a5a71937e3a4d31ba8222efbd7bcf5af975b72d7', 'width': 108}, {'height': 250, 'url': 'https://preview.redd.it/yxcvwf8xyujg1.png?width=216&crop=smart&auto=we... | |||
I built an 'Octopus' architecture for AI Agents to fix the 'broken intermediate state' problem. | 0 | Hey everyone, I've been working on a Constitutional AI framework (CORE). I realized standard agents break builds because they write to disk before verifying. I implemented a 'Shadow Workspace' that overlays future writes in memory so the agent can test its own code before committing. Here is the write-up on how it work... | 2026-02-16T13:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r69eoo/i_built_an_octopus_architecture_for_ai_agents_to/ | Technical_Break_4708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69eoo | false | null | t3_1r69eoo | /r/LocalLLaMA/comments/1r69eoo/i_built_an_octopus_architecture_for_ai_agents_to/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9ct9LGqvaOdgV9PRsnvdaqqQrnZWtkGLv-AoyzCtMGI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ct9LGqvaOdgV9PRsnvdaqqQrnZWtkGLv-AoyzCtMGI.png?width=108&crop=smart&auto=webp&s=61259439b3c5981d5a33f43174efed5c6a737808', 'width': 108}, {'height': 108, 'url': 'h... |
VLM Metrics | 1 | For an agentic VLM, what metric tools would you recommend? We've already logged different VLMs' responses and human references. Can you recommend a reliable tool? | 2026-02-16T13:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r69djv/vlm_metrics/ | Wraithraisrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69djv | false | null | t3_1r69djv | /r/LocalLLaMA/comments/1r69djv/vlm_metrics/ | false | false | self | 1 | null |
a skill.md file that made training more systematic for me - karpathy as a skill for training NNs | 3 | There are plenty of skills out there on [skillsmp.com](http://skillsmp.com) and [skills.sh](http://skills.sh) but very few I could find regarding my own training tasks, whether for hobby projects or work-related environments.
I find Karpathy to be my North Star and often default to finding what he has to say ... | 2026-02-16T13:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r69d4u/a_skillmd_file_that_made_training_more_systematic/ | Signature97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r69d4u | false | null | t3_1r69d4u | /r/LocalLLaMA/comments/1r69d4u/a_skillmd_file_that_made_training_more_systematic/ | false | false | self | 3 | null |
llama-cpp ROCm Prompt Processing speed on Strix Halo / Ryzen AI Max +50-100% | 92 | Prompt Processing on Strix Halo (Ryzen AI Max) with ROCm got way faster for a lot of models in the last couple days when using llamacpp-rocm ( [https://github.com/lemonade-sdk/llamacpp-rocm](https://github.com/lemonade-sdk/llamacpp-rocm) )
I thought I would share this, historically ROCm performed (way) worse in Prompt ... | 2026-02-16T12:59:54 | Excellent_Jelly2788 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r68z93 | false | null | t3_1r68z93 | /r/LocalLLaMA/comments/1r68z93/llamacpp_rocm_prompt_processing_speed_on_strix/ | false | false | default | 92 | {'enabled': True, 'images': [{'id': '0o14pkcytujg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/0o14pkcytujg1.png?width=108&crop=smart&auto=webp&s=49a8c635f3c76c9b144494cc7fc51aa24fdadbf7', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/0o14pkcytujg1.png?width=216&crop=smart&auto=web... | |
Achieved 375ms voice-to-voice latency using local Nemotron-4 + Kokoro-82M (Bare Metal) | 12 | Hi everyone,
I’ve spent the last few months trying to build a Voice AI agent that doesn't feel like a walkie-talkie.
I started with the standard "Wrapper Stack" (Twilio → Vapi → GPT-4o → ElevenLabs), but I couldn't get the round-trip latency under **800ms-1200ms**. The network hops alone were killing t... | 2026-02-16T12:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r68xpl/achieved_375ms_voicetovoice_latency_using_local/ | AuraHost-1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r68xpl | false | null | t3_1r68xpl | /r/LocalLLaMA/comments/1r68xpl/achieved_375ms_voicetovoice_latency_using_local/ | false | false | self | 12 | null |
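The post above contrasts an 800-1200 ms hosted pipeline with a 375 ms local one. A stage-by-stage sum makes the gap easy to reason about; all per-stage figures below are illustrative assumptions, not measurements from the post.

```python
# Rough voice-agent latency budget. Every number here is an illustrative
# assumption; a cloud "wrapper stack" pays a network/vendor hop per stage,
# while a bare-metal local stack mostly pays compute.

cloud_stack = {
    "telephony ingress": 120,   # ms (assumed)
    "STT (hosted)": 250,
    "LLM TTFT (hosted)": 300,
    "TTS (hosted)": 200,
    "network hops": 150,
}

local_stack = {
    "STT (local)": 100,
    "LLM TTFT (local)": 150,
    "TTS first chunk (local)": 100,
    "loopback/audio I/O": 25,
}

def total_ms(stages):
    """Serial pipeline: stage latencies add; streaming overlap would reduce this."""
    return sum(stages.values())

print(f"cloud: ~{total_ms(cloud_stack)} ms")   # ~1020 ms, inside the 800-1200 ms band
print(f"local: ~{total_ms(local_stack)} ms")   # ~375 ms
```

The point of the sketch is structural: each hosted stage adds both its own compute time and a network hop, so collapsing the stack onto one machine removes entire terms from the sum rather than shaving percentages off each.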
Llama-cpp ROCm Prompt Processing +50-100% on Strix Halo / Ryzen AI Max | 1 | [Nemotron-3-Nano-30B-A3B-Q8\_0](https://preview.redd.it/1ci5i00dsujg1.png?width=719&format=png&auto=webp&s=93a084bbaea2abd73be2f269452b92e706702707)
Prompt Processing on Strix Halo (Ryzen AI Max) with ROCm got way faster for a lot of models in the last couple days when using llamacpp-rocm ( [https://github.com/lemonad... | 2026-02-16T12:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r68vrm/llamacpp_rocm_prompt_processing_50100_on_strix/ | Excellent_Jelly2788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r68vrm | false | null | t3_1r68vrm | /r/LocalLLaMA/comments/1r68vrm/llamacpp_rocm_prompt_processing_50100_on_strix/ | false | false | 1 | null | |
AI field is changing so quickly and there is so much to read.. | 16 | Does anyone else feel like they're drowning in AI content?
every single day there's a new model, new paper, new breakthrough. i open twitter and scroll for an hour. check reddit for another hour. and somehow i still feel like i learned nothing useful.
it's all just surface-level stuff. "new model drops!" okay cool but ... | 2026-02-16T12:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r68vle/ai_field_is_changing_so_quickly_and_there_is_so/ | amisra31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r68vle | false | null | t3_1r68vle | /r/LocalLLaMA/comments/1r68vle/ai_field_is_changing_so_quickly_and_there_is_so/ | false | false | self | 16 | null |
Solving the multi-user latency problem for Voice Agents (WebRTC + Server-side VAD) | 2 | We wanted to see if the Gemini Multimodal Live API could handle a group of people all talking at the same time, so we built a 'Mystery Narrator' setup to stress-test it. The biggest issue we ran into wasn't the model's intelligence – it was the coordination.
To get around this, we avoided the standard client-side imp... | 2026-02-16T12:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r68hwt/solving_the_multiuser_latency_problem_for_voice/ | carlievanilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r68hwt | false | null | t3_1r68hwt | /r/LocalLLaMA/comments/1r68hwt/solving_the_multiuser_latency_problem_for_voice/ | false | false | self | 2 | null |
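The post above is truncated before the implementation details, so here is one hedged sketch of what server-side coordination for multiple simultaneous speakers can look like: an energy-threshold VAD plus a "floor" lock, so only one speaker's audio is forwarded to the model at a time. The class name, threshold, and frame format are invented for illustration and are not from the post.

```python
class FloorControlVAD:
    """Server-side gate: energy-based VAD plus a single-speaker floor lock.
    All parameters and the frame format are illustrative assumptions."""

    def __init__(self, energy_threshold=0.02, release_frames=10):
        self.energy_threshold = energy_threshold
        self.release_frames = release_frames  # silent frames before the floor is freed
        self.holder = None                    # speaker currently holding the floor
        self.silence = 0

    def process(self, speaker_id, frame):
        """frame: list of float samples in [-1, 1]. Returns True if forwarded."""
        energy = sum(s * s for s in frame) / max(len(frame), 1)
        voiced = energy >= self.energy_threshold

        if self.holder is None and voiced:
            self.holder = speaker_id          # first voiced speaker grabs the floor
            self.silence = 0
        if speaker_id != self.holder:
            return False                      # other speakers suppressed server-side
        if voiced:
            self.silence = 0
            return True
        self.silence += 1
        if self.silence >= self.release_frames:
            self.holder = None                # release after sustained silence
        return False
```

Doing this on the server rather than the client is what makes group sessions tractable: the model only ever receives one coherent audio stream, regardless of how many participants are talking over each other.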
Multiple GPU servers vs one server with PCIe bifurcation and lots of GPUs connected? | 1 | Quick question for those who have built a multi-GPU setup, how is your experience with either of those approaches?
How much of a headache is there in connecting 6/12/24 GPUs to a single machine? Seems possible on paper (PCIe lanes and bifurcation adapters), but was it stable at 2 GPU/slot or 4 GPU/slot? Obviously requ... | 2026-02-16T12:15:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r682i6/multiple_gpu_servers_vs_one_server_with_pcie/ | thesuperbob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r682i6 | false | null | t3_1r682i6 | /r/LocalLLaMA/comments/1r682i6/multiple_gpu_servers_vs_one_server_with_pcie/ | false | false | self | 1 | null |
Strix halo vs 8x p100 hbm2 | 0 | 128gb ddr5 crucial (64x2) 5600 is for 1020 USD
If you load 30% of the weights on a 4090, with 24 GB VRAM, you get around 235 GB/s (total memory 132 GB)
That money can get you 6 P100s, at 96 GB VRAM
Include the same 4090, and you will have 120 GB of VRAM total, at better GB/s
P100s are a better choice.
But
On another note
If yo... | 2026-02-16T11:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r67jhj/strix_halo_vs_8x_p100_hbm2/ | johndoe73568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r67jhj | false | null | t3_1r67jhj | /r/LocalLLaMA/comments/1r67jhj/strix_halo_vs_8x_p100_hbm2/ | false | false | self | 0 | null |
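The GB/s figures in the post above can be sanity-checked. When weights are split between GPU VRAM and system RAM, a common first-order estimate of token-generation bandwidth is the weighted harmonic mean of the pool bandwidths, since each token reads every weight once. The spec-sheet numbers below are assumptions, and under them the 30/70 split comes out near 123 GB/s rather than the post's ~235 GB/s, which presumably rests on different assumptions.

```python
def effective_bandwidth(split):
    """split: list of (fraction_of_weights, bandwidth_gb_s) pairs.
    Each token reads every weight once, so read time adds per pool and the
    effective rate is the weighted harmonic mean of the pool bandwidths."""
    assert abs(sum(f for f, _ in split) - 1.0) < 1e-9
    return 1.0 / sum(f / bw for f, bw in split)

# Assumed spec-sheet numbers (not taken from the post):
RTX_4090 = 1008.0       # GB/s GDDR6X
DDR5_5600_DUAL = 89.6   # GB/s dual-channel system RAM
P100_HBM2 = 732.0       # GB/s per card

# 30% of weights in 24 GB of VRAM, 70% in system RAM:
print(effective_bandwidth([(0.3, RTX_4090), (0.7, DDR5_5600_DUAL)]))  # ~123 GB/s

# Fully inside P100 HBM2 (no slow RAM pool in the read path):
print(effective_bandwidth([(1.0, P100_HBM2)]))  # 732 GB/s
```

The harmonic mean is what makes partial offload so punishing: the slow pool dominates the sum even at a 70% share, which is the quantitative core of the post's argument for fitting everything in VRAM.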