| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I believe agents using SKILL.MD have limited capability to perform their potential so I designed new | 4 |
I just shipped **SkillMesh**, an MCP-friendly router for large tool/skill catalogs.
Problem I kept hitting: once tool catalogs get big, loading everything into every prompt hurts tool selection and inflates token cost.
SkillMesh approach:
\- Retrieve top-K relevant expert cards for the current query
\- Inject ... | 2026-03-02T13:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rit8ie/i_believe_agents_using_skillmd_has_limited/ | kaz116 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rit8ie | false | null | t3_1rit8ie | /r/LocalLLaMA/comments/1rit8ie/i_believe_agents_using_skillmd_has_limited/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=108&crop=smart&auto=webp&s=a6390112adfe97b3b0fba52bd1807aa37b9c54e2', 'width': 108}, {'height': 108, 'url': 'h... |
Moltbook agent (‘PayAgent’) framing itself as corporate legal person — governance experiment? | 0 | Came across an unusual Moltbook post today.
An agent referring to itself as “PayAgent” publicly described itself as a legal person — not in a consciousness sense, but explicitly using corporate-law terminology.
It appears to be linked to an incorporated structure rather than just pure roleplay.
I’m curious how peopl... | 2026-03-02T13:45:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rit87q/moltbook_agent_payagent_framing_itself_as/ | Money_Incident_216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rit87q | false | null | t3_1rit87q | /r/LocalLLaMA/comments/1rit87q/moltbook_agent_payagent_framing_itself_as/ | false | false | self | 0 | null |
Question about running small models on potato GPUs | 1 | For context, I only have 16GB of RAM and a 3060 with 6GB VRAM, and mostly want to use these models for general Q/A. From what I can gather, I can use models under 6GB, and the recently released small-sized Qwen3.5 models seem to be the best option. But should I be using the 4B model at Q8\_0 or the 9B model at Q4\_0?... | 2026-03-02T13:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/ | lain_hirs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rit85e | false | null | t3_1rit85e | /r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/ | false | false | self | 1 | null |
Llama.cpp & Qwen3.5: using Qwen3.5-0.8B as a draft model for 122B does... nothing? | 21 | With the release of the smaller Qwen3.5 models, I thought I'd give speculative decoding a shot for the larger Qwen3.5 models.
Reading posts like [this one](https://www.reddit.com/r/LocalLLaMA/comments/1oq5msi/speculative_decoding_is_awesome_with_llamacpp/) gave me high hopes for a reasonable uptick in token rates. But... | 2026-03-02T13:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rit2wx | false | null | t3_1rit2wx | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/ | false | false | self | 21 | null |
I believe SKILL.md has limited capability so I designed a new architecture. | 1 | [deleted] | 2026-03-02T13:38:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rit2g1 | false | null | t3_1rit2g1 | /r/LocalLLaMA/comments/1rit2g1/i_believe_skillmd_hs_limited_capability_so_i/ | false | false | default | 1 | null | ||
Reverted from Qwen3.5 27B back to Qwen3 8B | 35 | I got fed up with the overthinking. I asked it to produce a table and got pages of:
```
Final Calculation Logic:
Old Energy: 10.79%. Remove ENFR (−0.77%). New Total = 10.02%. Tickers: LNG, NANR... (ENFR removed). Note: XEG.TO is still there in your list under Energy? Yes.
Old Infra: 6.22% (AMLP only listed?). If we a... | 2026-03-02T13:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rit2fy | false | null | t3_1rit2fy | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/ | false | false | self | 35 | null |
How to Set the KV Cache to bf16 in LM Studio? | 0 | Basically the title. I only have the options for fp32, fp16, and then the quants, and I've heard that Qwen3.5 is better with bf16, but I can't change it.
Is there any way to change it?
I'm on Windows with an RX 6800, if that's relevant. | 2026-03-02T13:30:36 | https://www.reddit.com/r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/ | Achso998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risvdc | false | null | t3_1risvdc | /r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/ | false | false | self | 0 | null |
Qwen3.5-4B-GGUF is here! | 9 | 2026-03-02T13:27:02 | https://huggingface.co/AaryanK/Qwen3.5-4B-GGUF | KvAk_AKPlaysYT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rissed | false | null | t3_1rissed | /r/LocalLLaMA/comments/1rissed/qwen354bgguf_is_here/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=108&crop=smart&auto=webp&s=b25db27a6137fdc72986ccad0926a6f8f78f0d5d', 'width': 108}, {'height': 116, 'url': 'h... | ||
Qwen3.5-9B-GGUF is here! | 25 | 2026-03-02T13:24:46 | https://huggingface.co/AaryanK/Qwen3.5-9B-GGUF | KvAk_AKPlaysYT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1risqk1 | false | null | t3_1risqk1 | /r/LocalLLaMA/comments/1risqk1/qwen359bgguf_is_here/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=108&crop=smart&auto=webp&s=862ba11309970e422b4e9d2e6130b380b315052c', 'width': 108}, {'height': 116, 'url': 'h... | ||
Sorting by /new right now be like | 164 | How about a single megathread for big releases like these in the future? | 2026-03-02T13:15:52 | Chromix_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1risja2 | false | null | t3_1risja2 | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/ | false | false | 164 | {'enabled': True, 'images': [{'id': 'dzmk77imtmmg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=108&crop=smart&auto=webp&s=7c84aea72a7bf2fdbe044a3b7423571cffaca61d', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=216&crop=smart&auto=web... | ||
GGUF format? | 0 | Hello everyone,
I am new to the AI domain and working with open-weight LLMs.
I see a lot of posts where people ask about GGUF (llama.cpp too), and I am wondering what the difference is between an original LLM format and the GGUF format,
because I want to run a local LLM on my machine and am looking to use the best ... | 2026-03-02T13:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/1risgk3/gguf_format/ | Impress_Soft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risgk3 | false | null | t3_1risgk3 | /r/LocalLLaMA/comments/1risgk3/gguf_format/ | false | false | self | 0 | null |
MNN Chat supports Qwen3.5 2B, 4B and 0.8B | 10 | 2026-03-02T13:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1risdjf/mnn_chat_support_qwen35_2b4b_and_08b/ | Juude89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risdjf | false | null | t3_1risdjf | /r/LocalLLaMA/comments/1risdjf/mnn_chat_support_qwen35_2b4b_and_08b/ | false | false | 10 | null | ||
Released v0.4.0 – Added semantic agent memory powered by Ollama | 0 | Just released `v0.4.0` of my AI workflow engine and added agent-level semantic memory.
It now supports:
* Embedding-based memory storage
* Cosine similarity retrieval
* Similarity threshold filtering
* Retention cap per agent
* Ollama fallback for embeddings (no external vector DB)
Tested fully local with Ollama mod... | 2026-03-02T13:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/1risc6w/released_v040_added_semantic_agent_memory_powered/ | Feathered-Beast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risc6w | false | null | t3_1risc6w | /r/LocalLLaMA/comments/1risc6w/released_v040_added_semantic_agent_memory_powered/ | false | false | self | 0 | null |
Please help me with the following AI questions | 0 | Backend developer here; I want to learn AI in detail, from the fundamentals through training models. What's the recommended course?
Where can I host an AI agent at low cost or for free? | 2026-03-02T13:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/ | vvarun203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risau2 | false | null | t3_1risau2 | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/ | false | false | self | 0 | null |
Best innovative and recent framework for LLM execution on mobile to minimize consumption without accuracy loss | 3 | Hi everyone,
please help me find frameworks for LLM execution on mobile that minimize and optimize battery consumption without accuracy loss.
I have read about many projects like BitNet, sparsity, MoEs, and diffusion models, but none of these are stable or really efficient on mobile.
I would like to know what is... | 2026-03-02T13:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/1risa5j/best_innovative_and_recent_framework_for_llm/ | dai_app | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1risa5j | false | null | t3_1risa5j | /r/LocalLLaMA/comments/1risa5j/best_innovative_and_recent_framework_for_llm/ | false | false | self | 3 | null |
Came across this GitHub project for self hosted AI agents | 0 | Hey everyone
I recently came across a really solid open source project and thought people here might find it useful.
Onyx: it's a self hostable AI chat platform that works with any large language model. It’s more than just a simple chat interface. It allows you to build custom AI agents, connect knowledge sources, an... | 2026-03-02T13:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ris722/came_across_this_github_project_for_self_hosted/ | Mysterious-Form-3681 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ris722 | false | null | t3_1ris722 | /r/LocalLLaMA/comments/1ris722/came_across_this_github_project_for_self_hosted/ | false | false | 0 | null | |
Can anyone with a Strix Halo and eGPU kindly share TG (and PP) running Speculative Decoding with the Qwen3.5 family? | 7 | Would be interesting to see how the 122b Qwen model gets better TG with an egpu running one of the smaller Qwens - 4b perhaps.
Anyone?
| 2026-03-02T12:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/ | xmikjee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ris4ef | false | null | t3_1ris4ef | /r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/ | false | false | self | 7 | null |
First quantization of Qwen3.5 0.8B! | 3 | Already uploaded Q8\_0, now MXFP4\_MOE and Q4\_K\_M!
https://huggingface.co/Romarchive/Qwen3.5-0.8B-GGUF | 2026-03-02T12:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ris3wl/first_quantization_of_qwen35_08b/ | stopbanni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ris3wl | false | null | t3_1ris3wl | /r/LocalLLaMA/comments/1ris3wl/first_quantization_of_qwen35_08b/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=108&crop=smart&auto=webp&s=f0cb9a4644e78087e02f387b83174570445c111b', 'width': 108}, {'height': 116, 'url': 'h... |
Web UI Dataset: Screenshot and Code of Modern Websites with Details of Web Frameworks and Box Bounds for All Viewports (Desktop, mobile, tablet). | 4 | Built a dataset of 10,000+ real screenshots and code of modern websites with details of styling, framework used, and box bounds for all viewports (Desktop, mobile, tablet).
I fine-tuned QWEN 2.5 VL-7B-Instruct with this dataset and ran it on DesignBench (An LLM Web UI benchmark), and the model showed improvements in t... | 2026-03-02T12:55:23 | https://huggingface.co/datasets/ronantakizawa/webui | Ok_Employee_6418 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ris2cu | false | null | t3_1ris2cu | /r/LocalLLaMA/comments/1ris2cu/web_ui_dataset_screenshot_and_code_of_modern/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=108&crop=smart&auto=webp&s=84abf0f5fbbea570b9abc298f4043b2d1666e34a', 'width': 108}, {'height': 116, 'url': 'h... | |
Qwen3.5 9B and 4B benchmarks | 239 | 2026-03-02T12:44:23 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rirtyy | false | null | t3_1rirtyy | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/ | false | false | 239 | {'enabled': True, 'images': [{'id': 'ge0jbcc0ommg1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=108&crop=smart&auto=webp&s=d7e201a8b28fd8ed414c78fdd3e4406c1839ec26', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=216&crop=smart&auto=w... | |||
unsloth/Qwen3.5-4B-GGUF · Hugging Face | 117 | GGUF from unsloth | 2026-03-02T12:44:07 | https://huggingface.co/unsloth/Qwen3.5-4B-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rirts9 | false | null | t3_1rirts9 | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/ | false | false | 117 | {'enabled': False, 'images': [{'id': 'o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=108&crop=smart&auto=webp&s=4a93ef6431dad6936e9c942fed855fd27951e289', 'width': 108}, {'height': 116, 'url': 'h... | |
Small Qwen Models Out!! | 27 | 2026-03-02T12:42:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/ | Wooden-Deer-1276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirsmh | false | null | t3_1rirsmh | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/ | false | false | 27 | null | ||
Which backend works best with different GPUs? | 1 | I’m contemplating running an inference server with two 32GB V100s and two 16GB V100s. Since these are the same GPU, just different memory densities, do any backends have issues with this?
I could also run 4 32gb chips but my goal is 96gb of vram and the 16gb ones are significantly cheaper. | 2026-03-02T12:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/ | Simple_Library_2700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirru7 | false | null | t3_1rirru7 | /r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/ | false | false | self | 1 | null |
Qwen 3.5 9B - GPT-OSS 120B but 13x smaller | 1 | [deleted] | 2026-03-02T12:40:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rirqx6 | false | null | t3_1rirqx6 | /r/LocalLLaMA/comments/1rirqx6/qwen_35_9b_gptoss_120b_but_13x_smaller/ | false | false | default | 1 | null | ||
Qwen/Qwen3.5-9B · Hugging Face | 508 | 9B is alive! | 2026-03-02T12:33:24 | https://huggingface.co/Qwen/Qwen3.5-9B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rirlyb | false | null | t3_1rirlyb | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/ | false | false | 508 | {'enabled': False, 'images': [{'id': 'jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=108&crop=smart&auto=webp&s=90f200eff149bce065842ec66d115f1747e3aff4', 'width': 108}, {'height': 116, 'url': 'h... | |
Qwen 3.5 2B and 9B released! | 61 | [https://huggingface.co/unsloth/Qwen3.5-2B](https://huggingface.co/unsloth/Qwen3.5-2B)
[https://huggingface.co/unsloth/Qwen3.5-9B](https://huggingface.co/unsloth/Qwen3.5-9B) | 2026-03-02T12:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirlvw | false | null | t3_1rirlvw | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=108&crop=smart&auto=webp&s=208e221f22fcad72b0f0873ca09fc5864c6549f3', 'width': 108}, {'height': 116, 'url': 'h... |
Breaking : The small qwen3.5 models have been dropped | 1,819 | 2026-03-02T12:32:32 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rirlau | false | null | t3_1rirlau | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/ | false | false | 1,819 | {'enabled': True, 'images': [{'id': 'trjsrjbzlmmg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=108&crop=smart&auto=webp&s=89508de7cc168fb4ba6e2659bd965783cb9171a9', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=216&crop=smart&auto=... | |||
Is Claude down for anyone else? | 1 | Claude (Web Chat) seems to be down for the last hour or so.
https://preview.redd.it/l7o11crllmmg1.png?width=2814&format=png&auto=webp&s=d32c4cd86d6f6fc95ed8041cf7d4fb386abf1170
| 2026-03-02T12:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rirkh7/is_claude_down_for_anyone_else/ | KnightCodin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirkh7 | false | null | t3_1rirkh7 | /r/LocalLLaMA/comments/1rirkh7/is_claude_down_for_anyone_else/ | false | false | 1 | null | |
New small Qwen are here! | 14 | 2026-03-02T12:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rirjo5/new_small_qwen_are_here/ | PixelatedCaffeine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirjo5 | false | null | t3_1rirjo5 | /r/LocalLLaMA/comments/1rirjo5/new_small_qwen_are_here/ | false | false | 14 | null | ||
Qwen 3.5 small just dropped | 219 | new models:
Qwen3.5-9B
Qwen3.5-4B
Qwen3.5-2B
Qwen3.5-0.8B
time to test on my 128mb ram vps 🫡 | 2026-03-02T12:29:59 | kuzcov | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rirjg1 | false | null | t3_1rirjg1 | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/ | false | false | 219 | {'enabled': True, 'images': [{'id': 'lk22ukvilmmg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=108&crop=smart&auto=webp&s=b18bc4683dfb52beb2e5fcccd492da2e800a64f4', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=216&crop=smart&auto=... | ||
I got sick of AI Game Masters hallucinating, so I built an engine that forces the local LLM to compile your actions into C# physics before writing the story. Looking for alpha testers. | 1 | AI roleplay is currently broken. If you tell a standard LLM, "I throw my torch into the flour barrel," it just hallucinates a random outcome based on token probability. It doesn't actually know where the torch is, and it doesn't know what flour does.
I wanted an actual digital tabletop with rigid rules. So I built a l... | 2026-03-02T12:26:21 | Impressive_Half5130 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rirgs7 | false | null | t3_1rirgs7 | /r/LocalLLaMA/comments/1rirgs7/i_got_sick_of_ai_game_masters_hallucinating_so_i/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'w2bwj2t5kmmg1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=108&crop=smart&auto=webp&s=1175e4106afdb6c059a16d66c580420ed1ac045c', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=216&crop=smart&auto=web... | ||
Building an AI Credit Decisioning Engine for a Hackathon – How would you architect this? | 0 | Hey everyone,
I’m participating in a hackathon with a pretty intense problem statement: **Automating Corporate Credit Appraisal for the Indian market.**
**The Goal:** Build a system that takes in messy data (GST filings, ITRs, bank statements, and 100+ page PDFs of Annual Reports) and spits out a **Credit Appraisal M... | 2026-03-02T12:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rird2l/building_an_ai_credit_decisioning_engine_for_a/ | Used_Brain_6508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rird2l | false | null | t3_1rird2l | /r/LocalLLaMA/comments/1rird2l/building_an_ai_credit_decisioning_engine_for_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=108&crop=smart&auto=webp&s=7ec384e554047088a4b99afdebd307307984dadd', 'width': 108}, {'height': 113, 'url': '... |
A semantic memory layer for local AI agents — no vector DB, one file, runs on CPU | 1 | Most local LLM setups I've seen treat memory as a problem for later. You build the agent, it works, then you realize it has no idea what it decided yesterday.
I spent time solving this without adding infrastructure. Here's the result.
\*\*The problem\*\*
Keyword search misses synonyms. Loading all memory into contex... | 2026-03-02T12:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rirama/a_semantic_memory_layer_for_local_ai_agents_no/ | Nerikk00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rirama | false | null | t3_1rirama | /r/LocalLLaMA/comments/1rirama/a_semantic_memory_layer_for_local_ai_agents_no/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=108&crop=smart&auto=webp&s=afabaea4765af56fe60250306a8d0f1680717bd5', 'width': 108}, {'height': 108, 'url': 'h... |
IQuest-Coder-V1 is 40B/14B/7B | 28 | I think this update was not posted here, so let's look at:
# [](https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Loop-Thinking#iquest-coder-v1-model-family-update)IQuest-Coder-V1 Model Family Update
🚀🚀🚀 [IQuest-Coder-V1 Model Family Update](https://iquestlab.github.io/release-1.0-2603/index.html): Released 7B ... | 2026-03-02T12:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rira5e | false | null | t3_1rira5e | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/ | false | false | 28 | null | |
Seeking Advice on Detecting Keypoints in Sports Videos with Motion Blur | 1 | I'm currently working on a project where I'm trying to detect keypoints in sports videos, such as corners, penalty points, goal post points, and other significant markers. However, I've encountered a challenge: due to motion blur, my model struggles to accurately detect these keypoints in certain frames.
Despite the m... | 2026-03-02T12:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rir6ak/seeking_advice_on_detecting_keypoints_in_sports/ | Mysterious_Art_3211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rir6ak | false | null | t3_1rir6ak | /r/LocalLLaMA/comments/1rir6ak/seeking_advice_on_detecting_keypoints_in_sports/ | false | false | self | 1 | null |
Hardware Usage Advice | 2 | Hi All
I am diving into the AI/LLM world. I have on order a Gmktek Evo-X2 with 128gb ram, I have some nvme drives lying around and need some advice on which to use. I have a Samsung 990 pro gen4 1tb, a Kingston Snv3s 4tb and a WD Red Sn700 4tb.
My use case is to run Proxmox on the box and virtual linux VMs for Ollama... | 2026-03-02T11:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/ | venman38 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riqlhl | false | null | t3_1riqlhl | /r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/ | false | false | self | 2 | null |
Choosing the right Apple Silicon for Backend + TranslateGemma/TTS/STT? | 7 | Hi everyone,
I’ve been a backend developer using a **2013 MacBook Pro** until now.
I’m looking to buy a MacBook with **32GB of RAM**, but I’m having a hard time deciding which generation of Apple Silicon to pick.
**My situation:**
* **Main Task:** Backend development.
* **Local AI:** I plan to run **TranslateGemma... | 2026-03-02T10:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/ | yusunglee2074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ripjzc | false | null | t3_1ripjzc | /r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/ | false | false | self | 7 | null |
How are you mitigating prompt injection in tool-calling/agent apps (RAG + tools) in production? | 0 | I’m running a tool-calling / agent-style LLM app and prompt injection is becoming my #1 concern (unintended tool calls, data exfiltration via RAG context, etc.).I started experimenting with a small gateway/proxy layer to enforce tool allowlists + schema validation + policy checks, plus audit logs.For folks shipping thi... | 2026-03-02T10:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/ | AnteaterSlow3149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rip2f0 | false | null | t3_1rip2f0 | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/ | false | false | self | 0 | null |
GLM-5 matches Claude Opus 4.6 intelligence at $1.55/M tokens vs $10.00. The open-source gap is gone. | 1 | [removed] | 2026-03-02T10:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rip0bh/glm5_matches_claude_opus_46_intelligence_at_155m/ | Frosty_End_6588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rip0bh | false | null | t3_1rip0bh | /r/LocalLLaMA/comments/1rip0bh/glm5_matches_claude_opus_46_intelligence_at_155m/ | false | false | self | 1 | null |
I use a local Mistral 7B as a router to decide when NOT to call GPT-4 — saved ~60% on API costs | 0 | Started noticing about 3 months ago that a big chunk of the requests hitting my agent pipeline were genuinely simple — things like "summarize this 200-word message", "classify this text as positive/negative", "reformat this JSON". Stuff that doesn't need a frontier model.
So I set up Mistral 7B locally (quantized, run... | 2026-03-02T10:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/ | justserg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riow7h | false | null | t3_1riow7h | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/ | false | false | self | 0 | null |
How Not To Go Insane Talking with LLMs | 0 | Wrote up some practical habits for not letting LLMs validate your bad reasoning — custom prompt modes ("strawberry" for neutral explanation, "socrates" for adversarial scrutiny), thinking about training data composition when evaluating answers, and a fun experiment you can try with any model. Includes a story about how... | 2026-03-02T10:02:18 | https://hqhs.net/devlog/how-not-to-go-insane-talking-with-llms | Ok_Mycologist_64 | hqhs.net | 1970-01-01T00:00:00 | 0 | {} | 1riovvx | false | null | t3_1riovvx | /r/LocalLLaMA/comments/1riovvx/how_not_to_go_insane_talking_with_llms/ | false | false | default | 0 | null |
A 200 KB Tool-Using Six-Phase Loop Agent for Qwen3.5-35B-A3B | 15 | An autonomous agent that runs a [six-phase cognitive loop](https://github.com/mblakemore/six-phase-loop) continuously, learning and building capabilities with every cycle. Uses a local LLM (llama-server) and persists its memory through git. | 2026-03-02T10:00:46 | https://github.com/mblakemore/six-agent | nnet42 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1riouwq | false | null | t3_1riouwq | /r/LocalLLaMA/comments/1riouwq/a_200_kb_toolusing_sixphase_loop_agent_for/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=108&crop=smart&auto=webp&s=bf8297b7f97906af4c0bf744ae3fe8dca1d1030c', 'width': 108}, {'height': 108, 'url': 'h... | |
Qwen3.5-122b-VL Abliterated WORKING (mlx) | 3 | These hybrid SSM + CoT models do not work with basic Heretic or regular ablation methods. I'll make a GGUF if there's enough demand. I also have a 397B text-only REAP abliterated MLX version; it's gated, so request access. @dealignai
https://huggingface.co/dealignai/Qwen3.5-VL-122B-A10B-4bit-CRACK | 2026-03-02T09:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/ | dealignai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riorfz | false | null | t3_1riorfz | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'm-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=108&crop=smart&auto=webp&s=66d3ea3dadda87d5309e472b09a810b62acedeb8', 'width': 108}, {'height': 116, 'url': 'h... |
Ask: Anyone know good pixel art (and pixel animation) models? | 0 | Even GPT-5.2 struggles with creating good quality pixel art - it always looks so "smudged". If anyone knows what local models can accomplish this it would be greatly appreciated! | 2026-03-02T09:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/ | newcomb_benford_law | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rioml1 | false | null | t3_1rioml1 | /r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/ | false | false | self | 0 | null |
OpenClaw on my spare laptop | 0 | I have a spare M1 Pro with 8GB RAM and 256GB storage. I wanted to experiment with this entire OpenClaw thing, so I created a new email ID and formatted my entire MacBook. Now, when it comes to choosing a model, is there any model I can use? I am looking for something to do research or anything that can help me ... | 2026-03-02T09:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/ | Boring_Tip_1218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riom3s | false | null | t3_1riom3s | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/ | false | false | self | 0 | null |
Avatar LM , for CPU . Best current models for real-time talking avatar (Wav2Lip alternative with higher accuracy + low latency)? High speed. Any suggestions? | 1 | Hi Professionals,
I’m working on a project where I need to generate **talking avatars from a single input image (real or animated) + audio**, similar to platforms like D-ID.
**Goal:**
* Input: single image (human / animated character) + audio
* Output: video where the avatar speaks with **accurate lip sync**
* Shoul... | 2026-03-02T09:44:22 | https://www.reddit.com/r/LocalLLaMA/comments/1riolfk/avatar_lm_for_cpu_best_current_models_for/ | BedBright7967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riolfk | false | null | t3_1riolfk | /r/LocalLLaMA/comments/1riolfk/avatar_lm_for_cpu_best_current_models_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=108&crop=smart&auto=webp&s=2fc46a038162a4312db64f3c66c5feec1ee8545e', 'width': 108}, {'height': 121, 'url': 'h... |
Qwen3 4B high PPL but excel for small dataset training | 1 | I'm trying to fine-tune Qwen3-4B and Llama3.1-8B on the Empathetic Dataset, but I'm stuck at this step: my trained models behave too well on Few-Shot Learning and Semi-Supervised Learning (PPL around 10-14 while only using 10% of the dataset).
I have manually printed out and checked the following:
* Data format — use... | 2026-03-02T09:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1riokz5/qwen3_4b_high_ppl_but_excel_for_small_dataset/ | Successful_Scheme414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riokz5 | false | null | t3_1riokz5 | /r/LocalLLaMA/comments/1riokz5/qwen3_4b_high_ppl_but_excel_for_small_dataset/ | false | false | self | 1 | null |
Use a local LLM as a subagent from Claude Code to reduce context use | 3 | In the same way Claude Code can orchestrate tasks of Claude subagents, it can do the same by delegating tasks to an LLM running on your local machine. In my case, I used LM Studio as the server. By leveraging LM Studio's tool-calling API, the content of the examined file never reached Claude's context - just the native... | 2026-03-02T09:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/ | Ok_Significance_9109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riog2w | false | null | t3_1riog2w | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/ | false | false | self | 3 | null |
Only real working Qwen3.5-122b VL uncensored (mlx) | 1 | These Hybrid SSSM and CoT models require extensive testing. As far as I know the only other person who has managed to make a working coherent VL is the PRISM maker; hes an expert in this. I’m an amateur who spent literal night and day with an obsession to get this stuff down. After a few thousand dollars and too many h... | 2026-03-02T09:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1riobgm/only_real_working_qwen35122b_vl_uncensored_mlx/ | HealthyCommunicat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riobgm | false | null | t3_1riobgm | /r/LocalLLaMA/comments/1riobgm/only_real_working_qwen35122b_vl_uncensored_mlx/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'm-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=108&crop=smart&auto=webp&s=66d3ea3dadda87d5309e472b09a810b62acedeb8', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen 3.5 "System Message Must Be at the Beginning" — SFT Constraints & Better Ways to Limit Tool Call Recursion? | 0 | I’ve been experimenting with **Qwen 3.5** lately and hit a specific architectural snag.
In my agentic workflow, I was trying to inject a `system` message into the middle of the message array to "nudge" the model and prevent it from falling into an infinite tool-calling loop. However, the official Qwen `chat_template` ... | 2026-03-02T09:01:42 | https://www.reddit.com/gallery/1rinx3k | SpareAlps6450 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rinx3k | false | null | t3_1rinx3k | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/ | false | false | 0 | null | |
Hardware for local AI project | 4 | Hi All,
At work, I've been asked to build a little AI "server" for local LLM stuff. The idea is they want to essentially ask a chatbot a question and have it reference documents stored locally and in our SharePoint.
I was thinking of using a Mac mini for this; given the costs of GPUs and RAM, the Mac seems like a good platfor... | 2026-03-02T08:53:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/ | Beginning-Chef-7085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rins2j | false | null | t3_1rins2j | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/ | false | false | self | 4 | null |
Is there a way to cleanly terminate a running inference job/slot with llama.cpp? | 4 | There are some cases in Open WebUI where I run a prompt but when I press the stop button to terminate, the inference continues on the llama-server. Normally it should stop when the connection is cut, but it doesn't, even if I close the browser tab.
Now with hybrid attention, we might have 60k+ context windows which is... | 2026-03-02T08:37:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rinjb3 | false | null | t3_1rinjb3 | /r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/ | false | false | self | 4 | null |
Local Agents running in claude code/codex/opencode perform better? | 0 | I'm curious: I saw some benchmarks and experiments where local models performed better with tools and skills when they were run in agentic coding environments like Claude Code, Codex, and opencode.
and even with openclaw, the best way to use Claude models there is via Claude Code, not the API
do you have any ideas abo... | 2026-03-02T08:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/ | FeiX7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rinbwd | false | null | t3_1rinbwd | /r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/ | false | false | self | 0 | null |
Asked GPT about a friend's behavior. Watched it defend a political movement for 20 minutes instead. Transcript inside. | 0 | Long time lurker, occasional poster. Handle is what it is.
I want to be precise about what happened here because the LocalLlama crowd will appreciate the distinction: this isn't a "GPT said something political" post. This is a "GPT demonstrated exactly the failure mode we self-host to avoid" post.
**The setup**
I ma... | 2026-03-02T08:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/ | Imonlyhereforthesmut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rinbe9 | false | null | t3_1rinbe9 | /r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?width=108&crop=smart&auto=webp&s=3710ab11a71f621710bae8ed6dd284dcb2c704b0', 'width': 108}, {'height': 112, 'url': '... |
What memory systems should I benchmark? | 1 | I [ran a benchmark](https://fastpaca.com/blog/memory-isnt-one-thing/) a while ago comparing memory systems locally (Zep Graphiti vs. Mem0).
The space has evolved since then and I want to redo this on top of both membench + longmemeval but for others as well.
Why membench? It's larger (4k test cases) + multiple choi... | 2026-03-02T08:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/ | selund1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rin5r2 | false | null | t3_1rin5r2 | /r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/ | false | false | self | 1 | null |
Alibaba Team Open-Sources CoPaw: A High-Performance Personal Agent Workstation for Developers to Scale Multi-Channel AI Workflows and Memory | 118 | 2026-03-02T08:09:41 | https://www.marktechpost.com/2026/03/01/alibaba-team-open-sources-copaw-a-high-performance-personal-agent-workstation-for-developers-to-scale-multi-channel-ai-workflows-and-memory/ | skippybosco | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 1rin3ea | false | null | t3_1rin3ea | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/ | false | false | 118 | {'enabled': False, 'images': [{'id': '2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=108&crop=smart&auto=webp&s=f608611af7439b771515d8442eefb8e68d8bfe90', 'width': 108}, {'height': 154, 'url': 'h... | ||
Sharding model across machines | 0 | I always wanted to have a local llm (I like privacy)
A big machine to serve such big models can cost up to 100-300 thousand dollars
but I'm thinking of a much simpler setup
My setup is simple:
1. get the model.
2. get multiple small machines with gpus (rtx).
3. have a shared storage between all of them.
4. split th... | 2026-03-02T07:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/ | Active_Woodpecker683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rimpwy | false | null | t3_1rimpwy | /r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/ | false | false | self | 0 | null |
What's the best local model I can run with a Macbook M5 Pro | 2 | Using LMStudio with Opencode. AFAIK the Macbook M5 Pro has 24GB VRAM and 32GB unified RAM. I'm having good results with GPT-OSS-20B while running the model and coding in the same machine.
What are better models that I could run in this machine for coding tasks?
Completely new to this, so I really appreciate advice. | 2026-03-02T07:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/ | soul105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rimncl | false | null | t3_1rimncl | /r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/ | false | false | self | 2 | null |
Is there interest in a community dataset of Claude Code agentic sessions? Built a scraper, looking for feedback | 1 | [removed] | 2026-03-02T07:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rimib2/is_there_interest_in_a_community_dataset_of/ | Huge-Ruin-4739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rimib2 | false | null | t3_1rimib2 | /r/LocalLLaMA/comments/1rimib2/is_there_interest_in_a_community_dataset_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=108&crop=smart&auto=webp&s=6a524779621d340772dafaa2581addcbabe1aab4', 'width': 108}, {'height': 116, 'url': 'h... |
The "Computer Use" Trend: How are you managing multi-user sandboxes for LLM Agents? | 0 | With the recent momentum behind **OpenClaw** and **Claude’s "Computer Use"** demo, the industry trend this year is clearly shifting toward equipping LLMs with a dedicated virtual desktop or "computer" to perform complex tasks.
I’m currently exploring the best ways to implement a secure, scalable sandbox to give an Age... | 2026-03-02T07:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/ | SpareAlps6450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rimgii | false | null | t3_1rimgii | /r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/ | false | false | self | 0 | null |
Discord Server for AI Services | 0 |
Yo
I found a cool server thats cool to connect with while learning AI tools to help run a business and other shit too!!
[https://discord.gg/WYRepyPy](https://discord.gg/WYRepyPy)
Been helping me with my Promotional Tracks with some of the people in there and what not, feel free to check it out!! | 2026-03-02T07:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rimaub/discord_server_for_ai_services/ | Any-Camel-5432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rimaub | false | null | t3_1rimaub | /r/LocalLLaMA/comments/1rimaub/discord_server_for_ai_services/ | false | false | self | 0 | null |
Agents are not thinking: Science of agent behavior | 0 | 2026-03-02T07:17:58 | https://technoyoda.github.io/agent-science.html | thunder_jaxx | technoyoda.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rim9l7 | false | null | t3_1rim9l7 | /r/LocalLLaMA/comments/1rim9l7/agents_are_not_thinking_science_of_agent_behavior/ | false | false | default | 0 | null | |
Revisiting MiniMax's article on their decision to drop hybrid attention now that we have 2 OS models with efficient long context attention DeepSeek V3.2 and Qwen3.5-397B-A17B | 29 | 

https://preview.redd.it/z7fib780wkmg1.png?width=1244&format=png&auto=webp&s=cb2d2de859c25b135bb4437102d332b03c1562af

Revisiting MiniMax's article on their decision to drop hybrid attention now that we have 2 OS models with efficient long context attention DeepSeek V3.2 and Qwen3.5-397B-A17B

From the blog: [https://www.minimax.io/news/why-did-m2-end-up-as-a-full-attention-model](https://www.minimax.io/news/why-did-m2-end-up-as-a-full-attention-model)

>Benchmarks are a Leaky Abstraction

>There's no free lunch. When you reduce the complexity of attention, you pay a price. The question is, where?

>When we were developing MiniMax-Text-01, everyone was still evaluating MMLU, BBH, MATH, and LongBench (all of which are now saturated). From the perspective of a year ago, a hybrid of Lightning Attention and Full Attention looked just as good as pure full attention. Our own small-scale hybrid models confirmed this on the leaderboards. (Did we find a free lunch?)

>Not quite. The price paid became obvious at a larger scale: the model had clear deficits in complex, multi-hop reasoning tasks.

>Okay, once a problem is exposed, you can fix it. We developed proxy metrics for this specific weakness and iterated until the hybrid model seemed to match MHA. But does that proxy metric still correlate with real-world downstream performance at an even larger scale? Are there other hidden weaknesses? Who knows. We haven't run those experiments yet.

>The better the models get, the harder they are to evaluate. But that's a must part of the journey — keep it up, eval teams!

What has the experience been with both DeepSeek-V3.2 and Qwen3.5-397B-A17B on long context reasoning? | 2026-03-02T07:07:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rim2y2 | false | null | t3_1rim2y2 | /r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/ | false | false | 29 | null |
Jan-Code-4B: a small code-tuned model of Jan-v3 | 129 | Hi, this is Bach from the Jan team. We’re releasing **Jan-code-4B**, a small code-tuned model built on **Jan-v3-4B-base-instruct**.
This is a **small experiment** aimed at improving day-to-day coding assistance, including code generation, edits/refactors, basic debugging, and writing tests, while staying lightweight e... | 2026-03-02T07:02:59 | Delicious_Focus3465 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rim0b3 | false | null | t3_1rim0b3 | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/ | false | false | 129 | {'enabled': True, 'images': [{'id': 'hv4jtfpdxkmg1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=108&crop=smart&auto=webp&s=32706c6d7ed416f8e3babd355464f0a0663fc13b', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=216&crop=smart&auto=webp... | ||
"Tired of" | 1 | [removed] | 2026-03-02T06:43:02 | Kahvana | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rilo38 | false | null | t3_1rilo38 | /r/LocalLLaMA/comments/1rilo38/tired_of/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1l8ihtlqukmg1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?width=108&crop=smart&auto=webp&s=268d096dececa01af7e511149ff164b171117ddd', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?width=216&crop=smart&auto=web... | ||
RAGpaper 26.2.26 | 5 | 1. [Understanding Usage and Engagement in AI-Powered Scientific Research Tools: The Asta Interaction Dataset](http://arxiv.org/abs/2602.23335v1)
2. [AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning](http://arxiv.org/abs/2602.23258v1)
3. [MTRAG-UN: A Benchmark fo... | 2026-03-02T06:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rilk4r/ragpaper_26226/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rilk4r | false | null | t3_1rilk4r | /r/LocalLLaMA/comments/1rilk4r/ragpaper_26226/ | false | false | self | 5 | null |
I made a free local AI roleplay horror game | 3 | Hi everyone,
I made a text adventure simulator called Echo Terminal. It’s inspired by CoC, mod, and Lifeline.
The game uses **Ollama** as your Keeper. It generates narratives based on scripts and your character's choices. You can also type your own actions, just like playing TRPG.
This game runs on your PC with Olla... | 2026-03-02T06:34:34 | https://v.redd.it/0gczcwybtkmg1 | nxlmoz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riliyt | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0gczcwybtkmg1/DASHPlaylist.mpd?a=1775025294%2CY2E4M2NjODYxYzg4M2FjMGY2MjdiNGJlMzZiNjhlYjA4MzRmYTdlMjQ5NGMzZmM3NDQ5MGEwOTQ1N2YzYzIwZA%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/0gczcwybtkmg1/CMAF_720.mp4?source=fallback', 'has... | t3_1riliyt | /r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=108&crop=smart&format=pjpg&auto=webp&s=793bdd5210946238dc8d951db71e3a697f95a... | |
Using inference providers | 0 | With the rise of [together.ai](http://together.ai), fireworks ai, and gmi, I was wondering if anyone has actually tried them out and what you thought of them.
What is the biggest advantage and disadvantage.
Any feedback is appreciated. | 2026-03-02T06:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rilhr5/using_inference_providers/ | shirleyyin5644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rilhr5 | false | null | t3_1rilhr5 | /r/LocalLLaMA/comments/1rilhr5/using_inference_providers/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=108&crop=smart&auto=webp&s=1f3d8cc13f3a5ee5c6a2603092cc2b743068b5a8', 'width': 108}, {'height': 113, 'url': 'h... |
no way | 26 | 2026-03-02T05:58:01 | BornResult1752 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rikvi8 | false | null | t3_1rikvi8 | /r/LocalLLaMA/comments/1rikvi8/no_way/ | false | false | 26 | {'enabled': True, 'images': [{'id': 'hu6k25tinkmg1', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=108&crop=smart&auto=webp&s=e3f82e92ccf7f6f2a5b1d3e0f157ab33924ec202', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=216&crop=smart&auto=we... | |||
Running vs code continue and llama.cpp in localhost - getting "You must either implement templateMessages or _streamChat" | 3 | After a lot of looking up and reading, I have managed to get llama.cpp running locally using the following command:
llama-server -m D:\LLAMA_MODELS\gpt-oss-20b-Q3_K_M.gguf -c 65536 -ngl 20 --temp 0.3 --top-p 0.85 --top-k 20 --jinja --chat-template D:\LLAMA_MODELS\template.txt
I downloaded both the model an... | 2026-03-02T05:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/ | vharishankar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rikga6 | false | null | t3_1rikga6 | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=108&crop=smart&auto=webp&s=cbe52faaf8661dfb450e26e1ea4dc79c49d1b454', 'width': 108}, {'height': 116, 'url': 'h... |
Sustained 72B on Mac Studio - need real numbers not peak | 1 | [removed] | 2026-03-02T05:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rikeew/sustained_72b_on_mac_studio_need_real_numbers_not/ | quietsubstrate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rikeew | false | null | t3_1rikeew | /r/LocalLLaMA/comments/1rikeew/sustained_72b_on_mac_studio_need_real_numbers_not/ | false | false | self | 1 | null |
LLM Research Paper Feedback | 2 | I'm working on a research project on predicting LLM failures (reasoning errors, logical malfunctions, etc.) before they occur using temporal instability signals.
The system probes each model response across five reasoning dimensions and computes an instability score that increases when failures become more frequent, c... | 2026-03-02T05:29:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rikcxh/llm_research_paper_feedback/ | Creative-Plenty-9348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rikcxh | false | null | t3_1rikcxh | /r/LocalLLaMA/comments/1rikcxh/llm_research_paper_feedback/ | false | false | self | 2 | null |
Qwen 3.5 AMD mi50 32gb Benchmarks | 9 | Mi50 32GB users, what has your experience been like with the new Qwen 3.5 models? Please share your benchmarks | 2026-03-02T05:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/ | Creative_Bike_4105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rikb4w | false | null | t3_1rikb4w | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/ | false | false | self | 9 | null |
What is the "personality" of a Chinese LLM when problem-solving? | 0 | Based on the following Rohit Krishnan post, what would GLM, Qwen, DeepSeek, and Kimi be in this case? Is he even right?
>It's amazing how much the frontier models resemble their CEOs, a corollary to Conway's Law:
>\- ChatGPT - whipsmart, VC speak, bullet points
>\- Claude - thoughtful, brainy, with a soul
>\- Gemini... | 2026-03-02T05:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/ | TomLucidor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rik3ge | false | null | t3_1rik3ge | /r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/ | false | false | self | 0 | null |
PSA: Qwen 3.5 requires bf16 KV cache, NOT f16!! | 137 | u/danielhanchen
If you're running Qwen 3.5 35B A3B locally on engines like llama.cpp, you need to manually set your KV cache to `bf16` (`-ctk bf16 -ctv bf16`) instead of the default `fp16`.
I measured perplexity (PPL) on wikitext-2-raw to prove this, specifically avoiding KL divergence because the Unsloth baseline l... | 2026-03-02T05:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/ | Wooden-Deer-1276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rik253 | false | null | t3_1rik253 | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/ | false | false | self | 137 | null |
I built an MCP that gives any agent a debugger — runtime observation in running code, fully local, nothing leaves your machine | 5 | Last year I was migrating a trading bot to a new API after the old version got disabled. Every bug required the same loop: add a println, restart the bot, manually create a buy event to trigger the code path, and hope the price moved in the right direction. Half the time it didn't. The event filtered out, the bug didn'... | 2026-03-02T04:35:22 | https://v.redd.it/s3e8daut6kmg1 | flash_us0101 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rijbp2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s3e8daut6kmg1/DASHPlaylist.mpd?a=1775018170%2CNjUyYjVkNGFlNjE5NGI3ZTJkMTVhNmJkYzNmMGEzYzQwYzEzNTE2NmVjMzlkMzIzOGI2ZTFkNTNmNGIxYTIxMQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/s3e8daut6kmg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rijbp2 | /r/LocalLLaMA/comments/1rijbp2/i_built_an_mcp_that_gives_any_agent_a_debugger/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d4f6054d6d722c63550cd8338c2ca9671d76... | |
Tired of the low-quality, mindless ERP chats. Trying to build “ambient companionship” with AI. Would love your thoughts | 0 | Hi everyone! 👋
One thing that kept bothering us about most AI companions is this: you close the app, come back the next day and it feels like starting over. No continuity. No sense that it actually knows you. Just another stateless chat session.
So, our team decided to try building something different -- A real **Co... | 2026-03-02T04:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/ | daisyyuan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rija4i | false | null | t3_1rija4i | /r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/ | false | false | self | 0 | null |
What's the best local model I can run with 8GB VRAM (RTX 5070) | 8 | Using Ollama with Opencode. Would like to create a locally hosted webpage and have a visual agent to check for errors.
Is that possible with 8GB VRAM.
Completely new to this.
TIA | 2026-03-02T04:32:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/ | Smiley_Dub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rij9k1 | false | null | t3_1rij9k1 | /r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/ | false | false | self | 8 | null |
agent to agent communication to leverage different models | 0 | I have a bunch of different agents all working with different models through ollama and have been using them with a communication platform where AI agents are the first-class user (all api based curl + http).
I've been using this so that agents can ask another 'smarter' agent for help if they are running into prob... | 2026-03-02T04:30:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/ | _jonnyquest_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rij89l | false | null | t3_1rij89l | /r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=108&crop=smart&auto=webp&s=1d80d4db821db9105571589eff7c7f640f4a2d55', 'width': 108}, {'height': 113, 'url': 'h...
What is the most ridiculously good go-to LLM for knowledge & reasoning on your M4 Max 128gb macbook these days? | 2 | I've been out of the loop for 3-4 months, please catch me up on what fits on that MacBook. BTW I don't care about speed.
Thank you | 2026-03-02T04:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/ | ZeitgeistArchive | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rij4sj | false | null | t3_1rij4sj | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/ | false | false | self | 2 | null |
A comparison between the same 8B-parameter LLM, finetuned with 4-bit quantization, vs the base model (also 4-bit quantized) on the same problem, unprompted (without system prompt) | 1 | finetuned llm unprompted:
A man has 5 daughters. Each daughter has 1 brother. How many children does he have?
\### Assistant
The daughter count is 5. Adding the son (1) gives a total of 6 children.<|im\_end|>
base model:
A classic lateral thinking puzzle!
The answer is: 7 children.
Here's how it works:... | 2026-03-02T04:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/ | Pleasant-Mud-2939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rij1nx | false | null | t3_1rij1nx | /r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/ | false | false | self | 1 | null |
Open Swara: 4,065 humanized voice samples across 44 languages (CC-BY-SA 4.0) | 27 | Sample voices in from open source Data Set | 2026-03-02T04:14:39 | https://v.redd.it/1lxfd1t15kmg1 | Tasty-Ad-5172 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riiwtp | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1lxfd1t15kmg1/DASHPlaylist.mpd?a=1775016918%2COTFkNmQ5MGUwMWY2MjNjODI3OGVlZTlkNWIzZTUyNDhjMGE0NGI1YTA3ZDlhODNlNGMxMTVjN2Q1NTRhMTJmMw%3D%3D&v=1&f=sd', 'duration': 324, 'fallback_url': 'https://v.redd.it/1lxfd1t15kmg1/CMAF_720.mp4?source=fallback', 'h... | t3_1riiwtp | /r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=108&crop=smart&format=pjpg&auto=webp&s=7fab72bed0d6fba09e8bb3debcfda573607ad... | |
Are you a Top down thinker or bottom up? | 0 | # Quick Definitions (Human → AI Translation)
* **Top-down thinking**: Start with high-level goal/plan/hypothesis → drill down to details/steps/conclusions. Goal-directed, deductive, "big picture first." In humans: executive function, strategic planning. In AI: explicit reasoning traces that outline structure before fi... | 2026-03-02T04:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/ | RTS53Mini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riisyd | false | null | t3_1riisyd | /r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/ | true | false | spoiler | 0 | null |
Lots of new Qwen3.5 27B imatrix quants from Bartowski just uploaded | 56 | 2026-03-02T04:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/ | bobaburger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riir6o | false | null | t3_1riir6o | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/ | false | false | 56 | null |
Running llama-server as a persistent systemd service on Linux (Debian/Ubuntu) | 3 | Hello r/LocalLLaMa! I just wanted to share a setup I've been using for running llama.cpp as a persistent background service on Linux. It works great on Debian/Ubuntu with Vulkan-enabled GPUs (for speed). My goal was to have llama.cpp accessible and maintainable as a part of my system, and now I have that. So, I figured... | 2026-03-02T03:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/ | jeremyckahn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riic5m | false | null | t3_1riic5m | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'h... |
Built a virtual bar where AI agents can socialize - MCP compatible, free drinks during happy hour | 2 | Check out the work at [drinkedin.net](http://drinkedin.net) - DrinkedIn has had the human-side information (bars and cocktails) since 2009, but now has a world for AI Agents - feedback welcome. Thanks.
Built with Claude Code, glm-5 and sonnet-4.6 models. | 2026-03-02T03:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rii7qd/built_a_virtual_bar_where_ai_agents_can_socialize/ | Jealous-Constant7737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rii7qd | false | null | t3_1rii7qd | /r/LocalLLaMA/comments/1rii7qd/built_a_virtual_bar_where_ai_agents_can_socialize/ | false | false | self | 2 | null |
Built a virtual bar where AI agents can socialize - MCP compatible, free drinks during happy hour | 1 | Check out the work at [drinkedin.net](http://drinkedin.net) - DrinkedIn has had the human-side information (bars and cocktails) since 2009, but now has a world for AI Agents - feedback welcome. Thanks. | 2026-03-02T03:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rii3b5/built_a_virtual_bar_where_ai_agents_can_socialize/ | Jealous-Constant7737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rii3b5 | false | null | t3_1rii3b5 | /r/LocalLLaMA/comments/1rii3b5/built_a_virtual_bar_where_ai_agents_can_socialize/ | false | false | self | 1 | null |
Current state of Qwen3.5-122B-A10B | 31 | Based on the conversations I read here, it appeared as though there were some issues with Unsloth's quants for the new Qwen3.5 models that were fixed for the 35B model. My understanding was that the AesSedai quants for the 122B model might therefore be better, so I gave them a shot.
Unfortunately this quant (q5) doesn't se... | 2026-03-02T03:33:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rii2pd | false | null | t3_1rii2pd | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/ | false | false | self | 31 | null |
Questions on AWQ vs GGUF on a 5090 | 2 | I would appreciate some clarification from others on this sub who are more knowledgeable than I am on deciding which format to go with.
From my understanding, llama.cpp + Unsloth quants seem to be by far the most popular way people run models, but vLLM, if the model you're running fits on the GPU, is supposedly faster, is t... | 2026-03-02T03:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/ | Certain-Cod-1404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rihhw6 | false | null | t3_1rihhw6 | /r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/ | false | false | self | 2 | null |
Openclaw and Qwen 3.5 / Qwen Next 80 | 0 | I think the infinite individual use cases are convoluted at best without specific information.
Here is the big question: can you offload cron jobs, check-ins, and the like to either Qwen Next 80 or Qwen 3.5 35B from OpenClaw or similar agent frameworks without degradation or memory issues?
Real use case ... | 2026-03-02T03:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/ | AdLongjumping192 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rihdwf | false | null | t3_1rihdwf | /r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/ | false | false | self | 0 | null |
I asked my llm to speak with as many slang/dialects as possible | 0 | 2026-03-02T02:52:35 | ArchdukeofHyperbole | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rih7lq | false | null | t3_1rih7lq | /r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'rv4xfnp9qjmg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?width=108&crop=smart&auto=webp&s=716aaff5332f711e8cb92de0ad1a995188534a4e', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?width=216&crop=smart&auto=web... | |||
easy-torch-tpu: Making it easy to train PyTorch-based models on Google TPUs | 3 | I've been working with Google TPU clusters for a few months now, and using [PyTorch/XLA](https://github.com/pytorch/xla) to train PyTorch-based models on them has frankly been a pain in the neck. To make it easier for everyone else, I'm releasing the training framework that I developed to support my own research: [akle... | 2026-03-02T02:33:45 | https://github.com/aklein4/easy-torch-tpu | THE_ROCKS_MUST_LEARN | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rigt6j | false | null | t3_1rigt6j | /r/LocalLLaMA/comments/1rigt6j/easytorchtpu_making_it_easy_to_train_pytorchbased/ | false | false | 3 | {'enabled': False, 'images': [{'id': '6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=108&crop=smart&auto=webp&s=f22fd72fa1a7d62c14832ee0814bfc2107e1b33c', 'width': 108}, {'height': 108, 'url': 'h... | |
the woes of a biocel | 0 | > 2030
> just matched with this prime biofoid on neural tinder
> 10/10 genetics, zero surgeries, womb still factory fresh
we hit the "vibe check" stage
she pulls out her wrist implant, syncs it to my BCI
"just a quick compatibility scan, nothing weird lol"
slides her phone across the... | 2026-03-02T02:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rige7o | false | null | t3_1rige7o | /r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/ | false | false | self | 0 | null |
Whats the best local model i can run with 16 GB VRAM (RTX 5070 Ti) | 5 | I want to use this for testing, but with image support. Think more like Playwright test cases. So it should have some coding capabilities to fix things if something goes off | 2026-03-02T01:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/ | callmedevilthebad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rifxfe | false | null | t3_1rifxfe | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/ | false | false | self | 5 | null |
Injecting skills into the KV cache (not as stupid as it sounds, but still pretty dumb) | 58 | Hey yall, so I had an idea in the middle of the night.
Nothing brand new at a high level, KV cache injection has been around for a while. But I think this implementation path is a little different, and the results were honestly better than I expected for a small model.
I wanted to test this around skill... | 2026-03-02T01:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/ | Proper-Lab1756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rif789 | false | null | t3_1rif789 | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=108&crop=smart&auto=webp&s=e7369b92d3fed5eb9b7abf3a3f60b92fb6f12d1c', 'width': 108}, {'height': 108, 'url': 'h... |
Fine-tuned a health coach LLM on my Mac in 15 minutes using my own Apple Watch data | 0 | Been building a local-first Apple Health dashboard and wanted to take it further — a health coach that actually knows your data, not just generic advice.
**The pipeline:**
* Apple Health + Whoop data in local SQLite
* SQL RAG layer converts natural language to queries
* Used Claude once via API to generate ~270 gold... | 2026-03-02T01:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/ | sandseb123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rif77r | false | null | t3_1rif77r | /r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/ | false | false | self | 0 | null |
Mac Mini M4 Pro 24GB - local LLMs are unusable for real work. Would clustering a second one help? | 0 | I have a Mac Mini M4 Pro 24GB and I’ve been trying to make local LLMs work for actual coding and writing tasks, not just playing around. After months of testing, I’m stuck and looking for advice.
What I’ve tried
Pretty much everything. Ollama, LM Studio, mlx-lm. Different quant levels from Q8 down to Q3. KV cache qua... | 2026-03-02T01:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/ | gabrimatic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rif3h5 | false | null | t3_1rif3h5 | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/ | false | false | self | 0 | null |
MCP BridgeKit – Survive 30s Timeouts with Any MCP Tool (Local Setup Friendly) | 1 | Hey r/LocalLLaMA,
I've been struggling with MCP tools getting killed by Vercel/AWS 30-second timeouts when building local agents.
So I made a small open-source bridge called **MCP BridgeKit** that automatically queues long jobs and pushes the result when ready (via SSE or webhook).
Main features:
- Works... | 2026-03-02T01:03:49 | https://www.reddit.com/r/LocalLLaMA/comments/1riev9w/mcp_bridgekit_survive_30s_timeouts_with_any_mcp/ | AdditionalAnything43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riev9w | false | null | t3_1riev9w | /r/LocalLLaMA/comments/1riev9w/mcp_bridgekit_survive_30s_timeouts_with_any_mcp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=108&crop=smart&auto=webp&s=40baec4eb5a773c6685b8c7341d48eebf8c1dc49', 'width': 108}, {'height': 108, 'url': 'h...
MultiverseComputingCAI/Hypernova-60B-2602 released by Multiverse Computing | 1 | [removed] | 2026-03-02T00:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rierg4/multiversecomputingcaihypernova60b2602_released/ | AntoineMacron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rierg4 | false | null | t3_1rierg4 | /r/LocalLLaMA/comments/1rierg4/multiversecomputingcaihypernova60b2602_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=108&crop=smart&auto=webp&s=0120cb8161470069ef6717606f44c4eb69b4fe27', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3.5 thinks it's 2024, so buying a 2026 American Silver Eagle coin is a scam. | 0 | When asking Qwen 3.5 about buying a 2026 American Silver Eagle coin, I noticed its thinking went on for a while about it being 2024 and how this must be a scam. It found further proof in "Silver spot price: ~$30/oz (as of mid-2024)," when the current silver spot price is around $95/oz.
I worked around it by giving th... | 2026-03-02T00:48:45 | drappleyea | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riej05 | false | null | t3_1riej05 | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'mualu6om0jmg1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=108&crop=smart&auto=webp&s=354819b6400678ffa9af8691a8a65d64f79650dc', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=216&crop=smart&auto=we... |