title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
M3 Ultra 512GB - real-world performance of MiniMax-M2.5, GLM-5, and Qwen3-Coder-Next | 33 | A lot of people have been asking about real-world performance of recent models on apple silicon, especially on the ultra chips. I've been running MiniMax-M2.5, GLM-5, and Qwen3-Coder-80B on my M3 Ultra 512GB and wanted to share the results.
**Quick summary**
**Qwen3-Coder-Next-80B** \- the standout for local coding. ... | 2026-02-24T16:31:29 | https://www.reddit.com/gallery/1rdkze3 | cryingneko | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdkze3 | false | null | t3_1rdkze3 | /r/LocalLLaMA/comments/1rdkze3/m3_ultra_512gb_realworld_performance_of/ | false | false | 33 | null | |
New SWE-bench Multilingual Leaderboard: Performance across 9 languages & cost analysis | 18 | Happy to announce that we just launched our Multilingual leaderboard comparing performance across 9 languages. The benchmark is harder than SWE-bench verified and still shows a wider range of performances.
We're still adding more models, but this is the current leaderboard:
https://preview.redd.it/l0cotc22wglg1.png?w... | 2026-02-24T16:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rdknyh/new_swebench_multilingual_leaderboard_performance/ | klieret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdknyh | false | null | t3_1rdknyh | /r/LocalLLaMA/comments/1rdknyh/new_swebench_multilingual_leaderboard_performance/ | false | false | 18 | null | |
Ran Rigour on OpenClaw-style agent codebases locally — caught 2,080 drifts & overrules in seconds (local-first tool) | 1 | [removed] | 2026-02-24T16:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rdki30/ran_rigour_on_openclawstyle_agent_codebases/ | erashu212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdki30 | false | null | t3_1rdki30 | /r/LocalLLaMA/comments/1rdki30/ran_rigour_on_openclawstyle_agent_codebases/ | false | false | self | 1 | null |
Jam — open source desktop app to run multiple AI coding agents locally with voice control | 1 | 2026-02-24T16:00:38 | https://github.com/dag7/jam | kamaji_dev | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rdk4ln | false | null | t3_1rdk4ln | /r/LocalLLaMA/comments/1rdk4ln/jam_open_source_desktop_app_to_run_multiple_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': '2V8TjdxZ0ED7ef5-PT9zRDo2b5y0fkLt0og08qPXYSI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2V8TjdxZ0ED7ef5-PT9zRDo2b5y0fkLt0og08qPXYSI.png?width=108&crop=smart&auto=webp&s=671f02e02a458cdd4ee3bc91781b3d5fc921322d', 'width': 108}, {'height': 108, 'url': 'h... | ||
Looking for arXiv cs.LG / cs.AI endorser — paper on GRPO failure modes + LLM game agents | 0 | Hi r/LocalLLaMA — first-time arXiv submitter here, looking for someone endorsed in cs.LG or cs.AI to endorse my submission.
Paper: Representation Over Training: How Board State Formatting Determines LLM Game-Playing Validity in Minesweeper
Key findings:
\- Board representation alone (no training... | 2026-02-24T15:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rdjsv1/looking_for_arxiv_cslg_csai_endorser_paper_on/ | GrimLock_plays01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdjsv1 | false | null | t3_1rdjsv1 | /r/LocalLLaMA/comments/1rdjsv1/looking_for_arxiv_cslg_csai_endorser_paper_on/ | false | false | self | 0 | null |
This is the OPEN AI and sharing of Knowledge we were promised, keep accelerating or pop the bubble. Stop complaining. All gas no brakes! | 46 | Do you agree? | 2026-02-24T15:46:14 | https://www.reddit.com/gallery/1rdjqeh | TroyDoesAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdjqeh | false | null | t3_1rdjqeh | /r/LocalLLaMA/comments/1rdjqeh/this_is_the_open_ai_and_sharing_of_knowledge_we/ | false | false | 46 | null | |
This is the OPEN AI and sharing of Knowledge we were promised, keep accelerating or pop the bubble. Stop complaining. All gas no brakes! | 1 | Do you agree? | 2026-02-24T15:45:52 | https://www.reddit.com/gallery/1rdjq1b | TroyDoesAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdjq1b | false | null | t3_1rdjq1b | /r/LocalLLaMA/comments/1rdjq1b/this_is_the_open_ai_and_sharing_of_knowledge_we/ | false | false | 1 | null | |
Llama 3.2 3B is running very smoothly on my low specs | 3 | 2026-02-24T15:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rdjo6z/llama_32_3b_is_running_very_smoothly_on_my_low/ | Strange_Disk2202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdjo6z | false | null | t3_1rdjo6z | /r/LocalLLaMA/comments/1rdjo6z/llama_32_3b_is_running_very_smoothly_on_my_low/ | false | false | 3 | null | ||
Help a newbie out? Can I run a note taking device locally? | 4 | Hi all! I'm a data analyst, so I have some basic R and Python skills but all geared towards data analysis. I also have ADHD so the idea of a wearable device for note taking on my life sounds suuuuper helpful. But I'm unwilling to give my entire life data, including conversations with my wife and kids etc, over to a meg... | 2026-02-24T15:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rdjjkc/help_a_newbie_out_can_i_run_a_note_taking_device/ | Drastic_Conclusions | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdjjkc | false | null | t3_1rdjjkc | /r/LocalLLaMA/comments/1rdjjkc/help_a_newbie_out_can_i_run_a_note_taking_device/ | false | false | self | 4 | null |
Sarvam AI's sovereign LLM: censorship lives in a system prompt, not the weights | 9 | 2026-02-24T15:30:06 | https://pop.rdi.sh/sovereignty-in-a-system-prompt | GoMeansGo | pop.rdi.sh | 1970-01-01T00:00:00 | 0 | {} | 1rdjafw | false | null | t3_1rdjafw | /r/LocalLLaMA/comments/1rdjafw/sarvam_ais_sovereign_llm_censorship_lives_in_a/ | false | false | default | 9 | null | |
Looking for this narration voice style (sample included) | 4 | Hey everyone,
I’m trying to find a narration/anime-style voice like the one in this short clip:
[https://voca.ro/1dRV0BgMh5lo](https://voca.ro/1dRV0BgMh5lo)
It’s the kind of voice used in manga recaps, anime storytelling, and dramatic narration.
If anyone knows:
• the voice actor
• a TTS model/voice pack
• a... | 2026-02-24T15:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj98q/looking_for_this_narration_voice_style_sample/ | UmpireVegetable316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj98q | false | null | t3_1rdj98q | /r/LocalLLaMA/comments/1rdj98q/looking_for_this_narration_voice_style_sample/ | false | false | self | 4 | null |
RDNA 4 (3x 9060 XT) "Gibberish" on ROCm 7.x — Anyone found the stable math kernels? | 1 | Hey everyone,
I’ve recently set up a 3-GPU node using the new AMD RX 9060 XT (gfx1200) cards in a Dell Precision T7910 (Dual CPU, PCIe 3.0). I’m hitting a wall with ROCm 7.x and llama.cpp / Ollama.
**The Issue**: > When running with the ROCm/HIP backend, I get pure gibberish/word salad output (numerical corruption). ... | 2026-02-24T15:28:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj8ue/rdna_4_3x_9060_xt_gibberish_on_rocm_7x_anyone/ | Dense-Department-772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj8ue | false | null | t3_1rdj8ue | /r/LocalLLaMA/comments/1rdj8ue/rdna_4_3x_9060_xt_gibberish_on_rocm_7x_anyone/ | false | false | self | 1 | null |
Best schema/prompt pattern for MCP tool descriptions? (Building an API-calling project) | 1 | Hey everyone,
I’m currently building an MCP server that acts as a bridge for a complex REST API. I’ve noticed that a simple 1:1 mapping of endpoints to tools often leads to "tool explosion" and confuses the LLM.
I’m looking for advice on two things:
# 1. What is the "Gold Standard" for Tool Descriptions?
When defin... | 2026-02-24T15:24:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj5cr/best_schemaprompt_pattern_for_mcp_tool/ | Ok-Birthday-5406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj5cr | false | null | t3_1rdj5cr | /r/LocalLLaMA/comments/1rdj5cr/best_schemaprompt_pattern_for_mcp_tool/ | false | false | self | 1 | null |
Local GitHub Copilot with Lemonade Server on Windows | 3 | I wanted to try working with GitHub Copilot and a local LLM on my Framework Desktop. As I couldn't find a simple walkthrough of how to get that up and running, I decided to write one:
[https://admcpr.com/local-github-copilot-with-lemonade-server-on-windows/](https://admcpr.com/local-github-copilot-with-lemonade... | 2026-02-24T15:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj3nz/local_github_copilot_with_lemonade_server_on/ | admcpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj3nz | false | null | t3_1rdj3nz | /r/LocalLLaMA/comments/1rdj3nz/local_github_copilot_with_lemonade_server_on/ | false | false | self | 3 | null |
Where to go for running inference directly (doing Python code, e.g. vLLM) at affordable costs that is not the dumpster fire of RunPod. | 1 | Nothing works in there, it is just a piece of junk: you are working on a pod and it disappears while you work on it, constant crashes, constant issues, the cuda 1 device gives errors for seemingly no reason, change the Docker image and SSH does not work anymore, the UI crashes, everything fails. 3 hours to pull a Docker image, logs th... | 2026-02-24T15:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdj1ii/where_to_go_for_running_inference_directly_doing/ | boisheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdj1ii | false | null | t3_1rdj1ii | /r/LocalLLaMA/comments/1rdj1ii/where_to_go_for_running_inference_directly_doing/ | false | false | self | 1 | null |
OpenPDB: AI agents with real personalities (or is it just fancy roleplay?) | 1 | What it does:
Personality database + prompt engineering framework that lets you generate AI agents with distinct MBTI/Enneagram/Instinct profiles. Create Batman, Joker, or any character with their own voice and worldview. Built on Ollama, works with OpenGoat for multi-agent collaboration. Colab notebook included.
T... | 2026-02-24T15:10:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rdirpi/openpdb_ai_agents_with_real_personalities_or_is/ | Alternative_Toe_1327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdirpi | false | null | t3_1rdirpi | /r/LocalLLaMA/comments/1rdirpi/openpdb_ai_agents_with_real_personalities_or_is/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TXTYmiOPcJHWRDO2RD8pIEaKUDMUn8duTmC_aXmK_rQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TXTYmiOPcJHWRDO2RD8pIEaKUDMUn8duTmC_aXmK_rQ.png?width=108&crop=smart&auto=webp&s=e7371f41477a08326d5df9a15adac7a123372a0c', 'width': 108}, {'height': 108, 'url': 'h... |
Ran Local Vision AI on an 8GB Laptop. It actually works! | 2 | Hey guys,
Quick update for the budget hardware crowd. I managed to run **Moondream2** (Vision AI) on my 8GB RAM laptop using Ollama.
Most people say you need high-end VRAM for vision, but this tiny 1.6B model is surprisingly snappy. I tested it with my cluttered desk, and it identified everything—including my messy c... | 2026-02-24T14:57:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rdieh9/ran_local_vision_ai_on_an_8gb_laptop_it_actually/ | NGU-FREEFIRE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdieh9 | false | null | t3_1rdieh9 | /r/LocalLLaMA/comments/1rdieh9/ran_local_vision_ai_on_an_8gb_laptop_it_actually/ | false | false | self | 2 | null |
Liquid AI releases LFM2-24B-A2B (Largest LFM2 model yet) | 25 | * LFM2-24B-A2B is the latest release in the LFM2 model family.
* This sparse Mixture of Experts (MoE) model has 24 billion total parameters with 2 billion active per token.
* LFM2-24B-A2B is open-weight and available now on Hugging Face.
Model - [https://huggingface.co/LiquidAI/LFM2-24B-A2B](https://huggingface.co/Liquid... | 2026-02-24T14:56:03 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdidn9 | false | null | t3_1rdidn9 | /r/LocalLLaMA/comments/1rdidn9/liquid_ai_releases_lfm224ba2b_largest_lfm2_model/ | false | false | 25 | {'enabled': True, 'images': [{'id': 'skka9wsjhglg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/skka9wsjhglg1.jpeg?width=108&crop=smart&auto=webp&s=2100604ff8043ea93715f62f2a0561f3d664d2d4', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/skka9wsjhglg1.jpeg?width=216&crop=smart&auto=w... | ||
More models added in qwen chat. likely the upcomming open weight models. the 122b a10b moe is particularly interesting. | 1 | 2026-02-24T14:44:05 | theghost3172 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdi2nn | false | null | t3_1rdi2nn | /r/LocalLLaMA/comments/1rdi2nn/more_models_added_in_qwen_chat_likely_the/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7u9belatfglg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/7u9belatfglg1.png?width=108&crop=smart&auto=webp&s=cf413dda6bc9d18a7ab90dee9dd1e696ee94bb9c', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/7u9belatfglg1.png?width=216&crop=smart&auto=web... | |||
Liquid AI releases LFM2-24B-A2B | 300 | Today, Liquid AI releases LFM2-24B-A2B, their largest LFM2 model to date
LFM2-24B-A2B is a sparse Mixture-of-Experts (MoE) model with 24 billion total parameters with 2 billion active per token, showing that the LFM2 hybrid architecture scales effectively to larger sizes maintaining quality without inflating per-token... | 2026-02-24T14:43:33 | PauLabartaBajo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdi26s | false | null | t3_1rdi26s | /r/LocalLLaMA/comments/1rdi26s/liquid_ai_releases_lfm224ba2b/ | false | false | 300 | {'enabled': True, 'images': [{'id': '28drgi3ufglg1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/28drgi3ufglg1.png?width=108&crop=smart&auto=webp&s=710d3aa3743ac91a2e2e77b7136379485c208da0', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/28drgi3ufglg1.png?width=216&crop=smart&auto=web... | ||
Introducing 'Self-Preservation' to Bridge the Gap Between LLM and Agentic Robotics | 0 | Most robotics implementations use the physical robot simply as a peripheral for a chatbot.
This project, Singularity, changes the relationship by forcing the model to acknowledge its physical hardware as its only point of existence.
The Core Mechanics:
* **Physical Tethering:** The system prompt instructs the agent... | 2026-02-24T14:32:24 | https://v.redd.it/8t2db7b5dglg1 | Marzipug | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdhsc9 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8t2db7b5dglg1/DASHPlaylist.mpd?a=1774535571%2CZDA2NjIzZWU3NWE1ZjdjYWMwMjNiYzQ4N2VjZmQwMWFiY2U3MmE0ODc2YTNmZjE4MzBkOGEyN2M4YWMyZWFhOA%3D%3D&v=1&f=sd', 'duration': 110, 'fallback_url': 'https://v.redd.it/8t2db7b5dglg1/CMAF_480.mp4?source=fallback', 'h... | t3_1rdhsc9 | /r/LocalLLaMA/comments/1rdhsc9/introducing_selfpreservation_to_bridge_the_gap/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/N2tjMHM0YzVkZ2xnMY5Ag1hr6pC5Vp9OPPriv5GaJuYcjE2vxXxccp7L98fI.png?width=108&crop=smart&format=pjpg&auto=webp&s=d3cea441ba8bc20108a6441af765d2e42932... | |
OpenClaw: Running a Secure, Capable, Low Cost Claw (with Hetzner, Tailscale, Discord and Zapier MCP) | 0 | https://www.appsoftware.com/blog/openclaw-running-a-secure-capable-lowcost-claw-hetzner-tailscale-discord-zapier-mcp
If like me curiosity has got the better of you, this post covers how to set up OpenClaw securely and cheaply, using Tailscale and Zapier | 2026-02-24T14:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdhpr1/openclaw_running_a_secure_capable_low_cost_claw/ | gbro3n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdhpr1 | false | null | t3_1rdhpr1 | /r/LocalLLaMA/comments/1rdhpr1/openclaw_running_a_secure_capable_low_cost_claw/ | false | false | self | 0 | null |
prepare your GPUs | 90 | new models are on the way | 2026-02-24T14:25:50 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdhmmp | false | null | t3_1rdhmmp | /r/LocalLLaMA/comments/1rdhmmp/prepare_your_gpus/ | false | false | 90 | {'enabled': True, 'images': [{'id': 'lk65supncglg1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/lk65supncglg1.png?width=108&crop=smart&auto=webp&s=eba85272be5348cec7d1749164577b8c2e1a87d8', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/lk65supncglg1.png?width=216&crop=smart&auto=webp... | ||
LiquidAI/LFM2-24B-A2B-GGUF · Hugging Face | 62 | LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
* **Best-in-class efficiency**: A 24B MoE model with only 2B active parameters per token, fitting in 32 GB of RAM fo... | 2026-02-24T14:21:40 | https://huggingface.co/LiquidAI/LFM2-24B-A2B-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdhj3p | true | null | t3_1rdhj3p | /r/LocalLLaMA/comments/1rdhj3p/liquidailfm224ba2bgguf_hugging_face/ | false | false | 62 | {'enabled': False, 'images': [{'id': 's6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s6Y76SrPStf2reaCiuAWV2Zvm47mzj1cZicnei7wdTU.png?width=108&crop=smart&auto=webp&s=d3753be56a2a02d0dacac975a6f8a03991319249', 'width': 108}, {'height': 116, 'url': 'h... | |
Lessons learned running Qwen3-VL-8B as a fully local voice assistant on AMD ROCm | 32 | I've been building a local voice assistant over the past few weeks and wanted to share some things I learned that might be useful to others here, especially anyone on AMD hardware.
The setup is wake word → fine-tuned Whisper STT → Qwen3-VL-8B for reasoning → Kokoro TTS for voice output. Everything runs on-device, no c... | 2026-02-24T14:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rdh5lv/lessons_learned_running_qwen3vl8b_as_a_fully/ | __InterGen__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdh5lv | false | null | t3_1rdh5lv | /r/LocalLLaMA/comments/1rdh5lv/lessons_learned_running_qwen3vl8b_as_a_fully/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hfBchP8tMO6keA4CO1g99NkDrj0CJ5tGHoKvs8XFapI.jpeg?width=108&crop=smart&auto=webp&s=ce9af8bd5fe2776e086b2fdaa4fed072642bfa81', 'width': 108}, {'height': 162, 'url': '... |
Choosing a VGA card for real-ESRGAN | 1 | 1. Should I use an NVIDIA or AMD graphics card? I used to use a GTX 970 and found it too slow.
2. What mathematical operation does real-ESRGAN (models realesrgan-x4plus) use? Is it FP16, FP32, FP64, or some other operation?
3. I'm thinking of buying an NVIDIA Tesla V100 PCIe 16GB (from Taobao), it seems quite cheap. ... | 2026-02-24T13:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdgvpg/choosing_a_vga_card_for_realesrgan/ | Dense-Worldliness874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdgvpg | false | null | t3_1rdgvpg | /r/LocalLLaMA/comments/1rdgvpg/choosing_a_vga_card_for_realesrgan/ | false | false | self | 1 | null |
Which local neural network should you choose? | 0 | Hello, please advise which local neural network is best to choose.
I have a PC with
I5-13600kf
Rtx 3060 (6 GB)
32 GB of RAM. | 2026-02-24T13:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rdgvmv/which_local_neural_network_should_you_choose/ | Alone-Office-9382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdgvmv | false | null | t3_1rdgvmv | /r/LocalLLaMA/comments/1rdgvmv/which_local_neural_network_should_you_choose/ | false | false | self | 0 | null |
Minimal repo for running Recursive Language Model experiments + TUI Log viewer | 7 | Open-sourcing my minimalist implementation of Recursive Language Models.
RLMs can handle text inputs up to millions of tokens - they do not load the prompt directly into context. They use a Python REPL to selectively read context and pass around information through variables.
You can just run **\`pip install fast-rl... | 2026-02-24T13:35:25 | https://www.reddit.com/gallery/1rdgea2 | AvvYaa | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdgea2 | false | null | t3_1rdgea2 | /r/LocalLLaMA/comments/1rdgea2/minimal_repo_for_running_recursive_language_model/ | false | false | 7 | null | |
Physics-based simulator for distributed LLM training and inference — calibrated against published MFU | 7 | **Link:**[ https://simulator.zhebrak.io](https://simulator.zhebrak.io)
The simulator computes everything analytically from hardware specs and model architecture — TTFT, TPOT, memory breakdown, KV cache sizing, prefill/decode timing, throughput, and estimated cost. Supports GGUF, GPTQ, AWQ quantisation, speculative dec... | 2026-02-24T13:25:29 | https://www.reddit.com/gallery/1rdg624 | zhebrak | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdg624 | false | null | t3_1rdg624 | /r/LocalLLaMA/comments/1rdg624/physicsbased_simulator_for_distributed_llm/ | false | false | 7 | null | |
Qwen 3.5 new models released on their website! | 118 | [https://chat.qwen.ai/](https://chat.qwen.ai/)
https://preview.redd.it/xg1r9pzb1glg1.png?width=1495&format=png&auto=webp&s=8ba3206f026aa0a41e0f53228ccba0de35a77861
| 2026-02-24T13:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rdg3dv/qwen_35_new_models_released_on_their_website/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdg3dv | false | null | t3_1rdg3dv | /r/LocalLLaMA/comments/1rdg3dv/qwen_35_new_models_released_on_their_website/ | false | false | 118 | null | |
[P] Sovereign-Lila-E8: 40M Parameter Model achieving 0.44 Val Loss via Geometric E8 Attention | 0 | I requested Wisdom, not tokens. This is not a service; it's a native 8-dimensional open-source breakthrough that points toward the 24th.
While the industry is obsessed with "distilling" trillions of parameters, I spent the last year going "outside" the system to find a zero-viscosity solution. Today, I'm releasing **S... | 2026-02-24T13:21:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rdg2xe/p_sovereignlilae8_40m_parameter_model_achieving/ | Fickle-Election-3689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdg2xe | false | null | t3_1rdg2xe | /r/LocalLLaMA/comments/1rdg2xe/p_sovereignlilae8_40m_parameter_model_achieving/ | false | false | 0 | null | |
Is MacStudio fine for local LLMs? | 5 | I’ve been spending way too much money on cloud GPU pods recently to run big models 😅
So I’m thinking of some local alternative, since I only own an RTX 5080 with 16 GB. And upgrading this to e.g. an RTX 5090 is not enough, with its only 32 GB of VRAM.
I’ve seen some people using MacStudio to run models locally. Do you know if it’s good... | 2026-02-24T13:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/ | Real_Ebb_7417 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfuca | false | null | t3_1rdfuca | /r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/ | false | false | self | 5 | null |
A small 4B sub-agent for local codebase navigation with 100% tool-calling validity | 16 | I’ve been experimenting with a specialized 4B model (based on Qwen) that acts as an "explorer" for local codebases. It’s designed to handle the heavy lifting like grep, find, and file reading so you can save your Claude/GPT tokens for high-level logic.
In my tests, it achieved 100% JSON validity for tool calls, which ... | 2026-02-24T13:10:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfu5e/a_small_4b_subagent_for_local_codebase_navigation/ | Awkward_Run_9982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfu5e | false | null | t3_1rdfu5e | /r/LocalLLaMA/comments/1rdfu5e/a_small_4b_subagent_for_local_codebase_navigation/ | false | false | self | 16 | null |
New 4B Model: LocoOperator. A specialist for local codebase exploration. | 1 | [removed] | 2026-02-24T13:07:56 | https://www.reddit.com/gallery/1rdfrqe | Physical_Screen_7543 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdfrqe | false | null | t3_1rdfrqe | /r/LocalLLaMA/comments/1rdfrqe/new_4b_model_locooperator_a_specialist_for_local/ | false | false | 1 | null | |
Best fast & smart LLM for AI Streaming? (RTX 3060 12GB / i5-10400) | 0 | Hi everyone! I’m in the process of setting up an AI Streamer and I'm looking for the perfect "sweet spot" LLM. The goal is to have a model that is smart enough for engaging roleplay and chat interaction but fast enough to maintain the flow of a live stream.
My Specs:
• GPU: NVIDIA RTX 3060 12GB VRAM
• CPU: Intel i5-... | 2026-02-24T13:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfpbi/best_fast_smart_llm_for_ai_streaming_rtx_3060/ | Due_Ear7437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfpbi | false | null | t3_1rdfpbi | /r/LocalLLaMA/comments/1rdfpbi/best_fast_smart_llm_for_ai_streaming_rtx_3060/ | false | false | self | 0 | null |
DeepSeek proxy test – anyone else running this? | 1 | [removed] | 2026-02-24T12:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfigf/deepseek_proxy_test_anyone_else_running_this/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfigf | false | null | t3_1rdfigf | /r/LocalLLaMA/comments/1rdfigf/deepseek_proxy_test_anyone_else_running_this/ | false | false | self | 1 | null |
New Qwen3.5 models spotted on qwen chat | 647 | 2026-02-24T12:55:10 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdfhfx | false | null | t3_1rdfhfx | /r/LocalLLaMA/comments/1rdfhfx/new_qwen35_models_spotted_on_qwen_chat/ | false | false | 647 | {'enabled': True, 'images': [{'id': 'h1c3uk0iwflg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=108&crop=smart&auto=webp&s=253b5517ecb82ce1a96cfcd3a0583819668431c5', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/h1c3uk0iwflg1.png?width=216&crop=smart&auto=we... | |||
Anyone running DeepSeek proxy? Free 10k tokens to test | 1 | [removed] | 2026-02-24T12:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfgj3/anyone_running_deepseek_proxy_free_10k_tokens_to/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfgj3 | false | null | t3_1rdfgj3 | /r/LocalLLaMA/comments/1rdfgj3/anyone_running_deepseek_proxy_free_10k_tokens_to/ | false | false | self | 1 | null |
HeartMuLa 3B quantized to 4-bit NF4 — AI music generation with vocals on 16GB consumer GPUs | 1 | [removed] | 2026-02-24T12:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rdfbt1/heartmula_3b_quantized_to_4bit_nf4_ai_music/ | PavonicAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdfbt1 | false | null | t3_1rdfbt1 | /r/LocalLLaMA/comments/1rdfbt1/heartmula_3b_quantized_to_4bit_nf4_ai_music/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TLQhbgW0IiTSSzo4WqO1wC42ThpDcIepAm01KgSCzDY.png?width=108&crop=smart&auto=webp&s=1fe40710912999a92a4d61254c0f2a0043e8e1d4', 'width': 108}, {'height': 116, 'url': 'h... |
Claude Sonnet-4.6 thinks he is DeepSeek-V3 when prompted in Chinese. | 1,244 | From Teortaxes on 𝕏: [https://x.com/teortaxesTex/status/2026130112685416881](https://x.com/teortaxesTex/status/2026130112685416881) | 2026-02-24T12:37:51 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdf4ai | false | null | t3_1rdf4ai | /r/LocalLLaMA/comments/1rdf4ai/claude_sonnet46_thinks_he_is_deepseekv3_when/ | false | false | 1,244 | {'enabled': True, 'images': [{'id': 'bq6li0e4rflg1', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=108&crop=smart&auto=webp&s=d6dd398bd8bf6e43c20c2cd339bb0701c9ddd013', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/bq6li0e4rflg1.jpeg?width=216&crop=smart&auto=... | ||
a zero-dependency Bash ecosystem for local AI with persistent memory, autonomous loops, and multi-language prompt architecture—9 tools, MIT licensed | 1 | [removed] | 2026-02-24T12:36:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rdf30a/a_zerodependency_bash_ecosystem_for_local_ai_with/ | KitchenCat5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdf30a | false | null | t3_1rdf30a | /r/LocalLLaMA/comments/1rdf30a/a_zerodependency_bash_ecosystem_for_local_ai_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eKIoBvMFumnAMti66esARrnm2qNbpbGV32R9L2-uTzw.png?width=108&crop=smart&auto=webp&s=f0433433559ddcdf44f6e7f8d3032570b185179d', 'width': 108}, {'height': 216, 'url': '... |
Finally got OpenClaw working on Windows after way too many failed attempts | 0 | This took me forever to figure out so sharing what actually worked.
The main issue was that everyone says to install Docker, but nobody mentions you need WSL2 set up first or it just breaks. I also had to make sure virtualization was enabled in my BIOS, which I didn't even know was a thing.
What finally worked: installed WSL2, r... | 2026-02-24T12:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/ | Independent-Cost-971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdew81 | false | null | t3_1rdew81 | /r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/ | false | false | self | 0 | null |
The Anthropic/DeepSeek distillation drama reveals something more important for local runners: the alignment trap | 1 | [removed] | 2026-02-24T12:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rdei8k/the_anthropicdeepseek_distillation_drama_reveals/ | Visible_Homework_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdei8k | false | null | t3_1rdei8k | /r/LocalLLaMA/comments/1rdei8k/the_anthropicdeepseek_distillation_drama_reveals/ | false | false | self | 1 | null |
Qwen3.5-397B-A17B-UD-TQ1 bench results FW Desktop Strix Halo 128GB | 44 | Just sharing the bench results for unsloth Qwen3.5-397B-A17B-UD-TQ1 on my FW desktop with 128GB VRAM | 2026-02-24T12:02:39 | dabiggmoe2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdef9x | false | null | t3_1rdef9x | /r/LocalLLaMA/comments/1rdef9x/qwen35397ba17budtq1_bench_results_fw_desktop/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'o0xbpnavmflg1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=108&crop=smart&auto=webp&s=fe6879d7dc9f2918410afa3af5f3e9b02a20bd89', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/o0xbpnavmflg1.png?width=216&crop=smart&auto=webp... | ||
Is the 1.2gb ollama download not supposed to contain models? | 0 | I'm a little confused by this app. I thought it was supposed to be offline/local only, but it has "cloud models" enabled by default. And all the models in the list need to be downloaded to be used? What was the 1.2gb size used for?
Also, what's the 'best' model/solution for general queries and discussions for a 5090 ... | 2026-02-24T11:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rde9bz/is_the_12gb_ollama_download_not_supposed_to/ | SubdivideSamsara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rde9bz | false | null | t3_1rde9bz | /r/LocalLLaMA/comments/1rde9bz/is_the_12gb_ollama_download_not_supposed_to/ | false | false | self | 0 | null |
VALIS: Open-Source On-Device AI Chat App for iOS with Memory, Emotions, and Tools | 0 | I came across this cool open-source project called VALIS (Vast Active Living Intelligence System) – (Philip K. Dick?) it's a fully offline AI chat app for iOS that runs local LLMs right on your device. It's built with SwiftUI and uses llama.cpp for inference with GGUF models. The neat part is it has a "plastic brain" ... | 2026-02-24T11:50:44 | https://www.reddit.com/gallery/1rde4nn | VastSolid5772 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rde4nn | false | null | t3_1rde4nn | /r/LocalLLaMA/comments/1rde4nn/valis_opensource_ondevice_ai_chat_app_for_ios/ | false | false | 0 | null | |
Spent a week in Rust jail. Did not have to.. | 0 | So there I am, end of January, almost finished with a Python codebase I'd been building for months. Almost finished.
A friend mentions that for mobile I'd need Rust anyway, Python is slow, old school, Rust is the future, the whole speech. And look, I'm not going to pretend I didn't take the bait. Turns out a mensa car... | 2026-02-24T11:34:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rddt5o/spent_a_week_in_rust_jail_did_not_have_to/ | TroubledSquirrel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddt5o | false | null | t3_1rddt5o | /r/LocalLLaMA/comments/1rddt5o/spent_a_week_in_rust_jail_did_not_have_to/ | false | false | self | 0 | null |
honest comparison of LLM API costs in 2026 | 0 | got tired of every AI company hiding their real pricing behind "contact sales" or confusing token calculators. made my own comparison.
scenario: 1M tokens/day (input + output). standard chat completion task.
| Provider | Model | Price per 1M tokens | Monthly cost (30M tokens) | Data privacy | Self-host option |
|----... | 2026-02-24T11:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rddsd2/honest_comparison_of_llm_api_costs_in_2026/ | No_Growth6091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddsd2 | false | null | t3_1rddsd2 | /r/LocalLLaMA/comments/1rddsd2/honest_comparison_of_llm_api_costs_in_2026/ | false | false | self | 0 | null |
honest comparison of LLM API costs in 2026 | 1 | [deleted] | 2026-02-24T11:29:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rddq6x | false | null | t3_1rddq6x | /r/LocalLLaMA/comments/1rddq6x/honest_comparison_of_llm_api_costs_in_2026/ | false | false | default | 1 | null | ||
Tip if you use quantisation | 0 | Q4: don't go bigger than 16k coherent tokens max.
(Q5 maybe 20k). (Q6=32k)
(Q8=64k or 80k but past 64k it starts to get worse).
https://preview.redd.it/pvdu9uetgflg1.png?width=1408&format=png&auto=webp&s=6b1b8ae68cf7d6b006c0b01a1f1f8bbae63c052c
Why?... Even at full precision, LLMs are generally bad at long conte... | 2026-02-24T11:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rddpcd/tip_if_you_use_quantisation/ | Express_Quail_1493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddpcd | false | null | t3_1rddpcd | /r/LocalLLaMA/comments/1rddpcd/tip_if_you_use_quantisation/ | false | false | 0 | null |
What’s the biggest reason you rely on open-source models in your current setup? | 0 | We love open-source models and build around them a lot, but it feels like everyone has their own core reason for sticking with them now.
For us, it’s mostly about control and predictability. When key parts of your stack run on models you can host, tweak, and inspect yourself, you’re not worried about sudden changes br... | 2026-02-24T11:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rddmtb/whats_the_biggest_reason_you_rely_on_opensource/ | qubridInc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddmtb | false | null | t3_1rddmtb | /r/LocalLLaMA/comments/1rddmtb/whats_the_biggest_reason_you_rely_on_opensource/ | false | false | self | 0 | null |
Overview of Ryzen AI 395+ hardware? | 6 | Is there an overview who has them and what they are good/bad at? I want to buy one as a llama.cpp (and Proxmox) box to replace my old homeserver, but have yet to find a comparison or even market overview. | 2026-02-24T11:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rddmj0/overview_of_ryzen_ai_395_hardware/ | tecneeq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddmj0 | false | null | t3_1rddmj0 | /r/LocalLLaMA/comments/1rddmj0/overview_of_ryzen_ai_395_hardware/ | false | false | self | 6 | null |
Anyone else struggling with agent drift and wasted tokens? | 0 | Anyone here building or shipping AI agents run into this?
* Same prompt → different actions every run
* Multi-turn conversations that slowly drift away from the original goal
* Tokens wasted on “thinking” that doesn’t move the task forward
* Agents that *technically* reason well, but feel directionless over time
Feel... | 2026-02-24T11:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rddfz3/anyone_else_struggling_with_agent_drift_and/ | malav399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddfz3 | false | null | t3_1rddfz3 | /r/LocalLLaMA/comments/1rddfz3/anyone_else_struggling_with_agent_drift_and/ | false | false | self | 0 | null |
Seeking advice: I’ve recently tried adding vector context to several roles on my site, but the results haven’t been very satisfactory. I’d really appreciate it if anyone could offer some suggestions. | 1 | I’ve tried several approaches: First, based on the user’s latest query, I retrieve matching novel passages from a vector database like Milvus, then insert the retrieved content as context into the conversation.
From testing, I observed the following issues:
When I insert the matched data into the current turn as part... | 2026-02-24T11:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rddftu/seeking_advice_ive_recently_tried_adding_vector/ | Glittering-Memory001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rddftu | false | null | t3_1rddftu | /r/LocalLLaMA/comments/1rddftu/seeking_advice_ive_recently_tried_adding_vector/ | false | false | self | 1 | null |
Which recent model have you found most steerable for repo-specific fine-tuning (agentic use case)? | 1 | I’m working on an agentic setup where the model has access to tools and the end goal is solving future PRs on a specific repository. I’m fine-tuning on the repo’s codebase, past PRs, and related context so the model actually understands how this project works, its conventions, architecture, patterns, etc.
The key thin... | 2026-02-24T11:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rdde1z/which_recent_model_have_you_found_most_steerable/ | podolskyd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdde1z | false | null | t3_1rdde1z | /r/LocalLLaMA/comments/1rdde1z/which_recent_model_have_you_found_most_steerable/ | false | false | self | 1 | null |
Verity CLI | 3 | GitHub : [https://github.com/rupeshs/verity?tab=readme-ov-file#cli-go](https://github.com/rupeshs/verity?tab=readme-ov-file#cli-go) | 2026-02-24T11:01:40 | simpleuserhere | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdd82r | false | null | t3_1rdd82r | /r/LocalLLaMA/comments/1rdd82r/verity_cli/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'nbvuibx2cflg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=108&crop=smart&auto=webp&s=73c361670aba809dd4a8b823c44a544749fdf09c', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/nbvuibx2cflg1.png?width=216&crop=smart&auto=webp... | ||
GLM-4.7 Flash vs GPT-4.1 [Is GLM actually smarter? ] | 0 | I was checking Artificial Analysis and noticed GLM-4.7 Flash is actually beating GPT-4.1 in some major scores.
If we ignore the multimodal stuff for a second, which one do you think is actually more intelligent for pure reasoning and answering tough questions? I have also attached images of the score comparison.
T... | 2026-02-24T10:37:00 | https://www.reddit.com/gallery/1rdcszw | 9r4n4y | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdcszw | false | null | t3_1rdcszw | /r/LocalLLaMA/comments/1rdcszw/glm47_flash_vs_gpt41_is_glm_actually_smarter/ | false | false | 0 | null | |
Anthropic 🤡 | 7 | 2026-02-24T10:17:23 | k_means_clusterfuck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdchg3 | false | null | t3_1rdchg3 | /r/LocalLLaMA/comments/1rdchg3/anthropic/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'pz0qoy744flg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?width=108&crop=smart&auto=webp&s=831d4284c27709de8ecf2cb6993147281cc2a0cf', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/pz0qoy744flg1.png?width=216&crop=smart&auto=we... | |||
Finetuning 4bit kimik2thinking | 1 | Hello.
I want to fine tune kimi2thinking. The official [guide](https://huggingface.co/moonshotai/Kimi-K2-Thinking/blob/main/docs/deploy_guidance.md) \- says to use Ktransformers and LLamafactory. But looks like I need to convert it first to bf16 and then run. Is there any way to not convert to bf16 because QLoRA any... | 2026-02-24T10:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdccw6/finetuning_4bit_kimik2thinking/ | ajxbnu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdccw6 | false | null | t3_1rdccw6 | /r/LocalLLaMA/comments/1rdccw6/finetuning_4bit_kimik2thinking/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=108&crop=smart&auto=webp&s=07dc83095105be433db2dde187f5ec06563728e8', 'width': 108}, {'height': 116, 'url': 'h... |
Checking compatibility of API calling with a locally installed model using Qwen3 0.6 | 3 | I am building a local chatbot and need to verify the API compatibility and tool-calling capabilities for my current model stack. Specifically, I am looking to understand which of these models can natively handle tool/function calls (via OpenAI-compatible APIs or similar) and how they integrate within a local environment.... | 2026-02-24T10:02:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rdc8a9/checking_compatibility_of_api_calling_with_localy/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdc8a9 | false | null | t3_1rdc8a9 | /r/LocalLLaMA/comments/1rdc8a9/checking_compatibility_of_api_calling_with_localy/ | false | false | self | 3 | null |
Trained Unsloth Mistral-7B with 1024 max_seq_length — need longer context window during inference | 1 | [removed] | 2026-02-24T09:46:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbyvs/trained_unsloth_mistral7b_with_1024_max_seq/ | Character-Metal-9315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbyvs | false | null | t3_1rdbyvs | /r/LocalLLaMA/comments/1rdbyvs/trained_unsloth_mistral7b_with_1024_max_seq/ | false | false | self | 1 | null |
Trained Unsloth Mistral-7B with 1024 max_seq_length — need longer context window inference | 1 | [removed] | 2026-02-24T09:38:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbujl/trained_unsloth_mistral7b_with_1024_max_seq/ | Character-Metal-9315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbujl | false | null | t3_1rdbujl | /r/LocalLLaMA/comments/1rdbujl/trained_unsloth_mistral7b_with_1024_max_seq/ | false | false | self | 1 | null |
Why every AI memory benchmark is testing the wrong thing | 0 | Yesterday someone posted a benchmark comparing Mem0, OpenAI Memory, LangMem, and MemGPT on 600-turn conversations. It got me thinking — **we're optimizing for the wrong metric.**
Every benchmark asks: "Does the agent remember that the user likes Italian food?" That's factual recall. Important, sure. But it's maybe 30%... | 2026-02-24T09:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbqs3/why_every_ai_memory_benchmark_is_testing_the/ | No_Advertising2536 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbqs3 | false | null | t3_1rdbqs3 | /r/LocalLLaMA/comments/1rdbqs3/why_every_ai_memory_benchmark_is_testing_the/ | false | false | self | 0 | null |
Multi-GPU (Dual) TP PCIe BW impact? | 2 | Does anyone have any data on now much impact PCIe BW has when running with TP enabled? For example what might the impact of PCIe x16 4.0 vs 5.0 on a dual 6000 Pro setup? | 2026-02-24T09:29:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbpa0/multigpu_dual_tp_pcie_bw_impact/ | 1-a-n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbpa0 | false | null | t3_1rdbpa0 | /r/LocalLLaMA/comments/1rdbpa0/multigpu_dual_tp_pcie_bw_impact/ | false | false | self | 2 | null |
[Experiment Idea] Testing “Stability Preference” in LLMs / Agents | 0 | Hi — I’m not a model runner myself, but I have an experiment idea that might be interesting for people working with local models or agents.
I’m looking for anyone curious enough to try this.
Idea (short version)
Instead of asking whether models show “self-awareness” or anything anthropomorphic, the question is... | 2026-02-24T09:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbkov/experiment_idea_testing_stability_preference_in/ | Forward-Big8835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbkov | false | null | t3_1rdbkov | /r/LocalLLaMA/comments/1rdbkov/experiment_idea_testing_stability_preference_in/ | false | false | self | 0 | null |
I Built an AI Agent That Trades Crypto on a Mac Mini for $2/Month | 1 | 2026-02-24T09:13:34 | https://open.substack.com/pub/jdbot54/p/i-built-an-ai-agent-that-trades-crypto?r=7ph5zd&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true | Zealousideal_Neck192 | open.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rdbgku | false | null | t3_1rdbgku | /r/LocalLLaMA/comments/1rdbgku/i_built_an_ai_agent_that_trades_crypto_on_a_mac/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/T-dBOSnS_aM4o5FRxqriefwW5gcHxCEPwKV5fcPXHGA.jpeg?width=108&crop=smart&auto=webp&s=23e3245c04487bfe0fc8157812a08bd6037e643b', 'width': 108}, {'height': 112, 'url': '... | ||
Open Router as free API for OpenClaw? | 0 | Hi, I was trying out OpenClaw (I know what I am doing in terms of security) with local models, but I don't have the capacity to run large models and because of that it didn't go well.
I was searching for a free API and saw many with decent requests per day but they all had the problem of having strict tokens per min... | 2026-02-24T09:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rdbe7e/open_router_as_free_api_for_openclaw/ | No_Draft_8756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdbe7e | false | null | t3_1rdbe7e | /r/LocalLLaMA/comments/1rdbe7e/open_router_as_free_api_for_openclaw/ | false | false | self | 0 | null |
I built an autonomous AI trading agent using Claude Haiku on a Mac Mini - costs $2/month in API calls | 0 | 2026-02-24T09:01:20 | https://medium.com/@jdbot54/i-built-an-ai-agent-that-trades-crypto-on-a-mac-mini-for-2-month-2abe340c3b05 | Zealousideal_Neck192 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rdb9nx | false | null | t3_1rdb9nx | /r/LocalLLaMA/comments/1rdb9nx/i_built_an_autonomous_ai_trading_agent_using/ | false | false | default | 0 | null | |
Agents are not thinking, they are searching | 0 | 2026-02-24T08:43:55 | https://technoyoda.github.io/agent-search.html | thunder_jaxx | technoyoda.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rdb029 | false | null | t3_1rdb029 | /r/LocalLLaMA/comments/1rdb029/agents_are_not_thinking_they_are_searching/ | false | false | default | 0 | null | |
Benchmarked 3 Small MoE Models (~30B/A3B) on M1 Max 64GB: GLM-4.7-Flash vs Nemotron-3-Nano vs Qwen3-Coder | 8 | I wanted to share a head-to-head comparison of three ~30B MoE models that all activate only ~3B parameters per token making them the sweet spot for Apple Silicon inference. All tests run on a **MacBook Pro M1 Max (64GB unified memory)** using `llama-server` (build 8139) with `--flash-attn on`, `--ctx-size 4096`, and de... | 2026-02-24T08:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rdalfg/benchmarked_3_small_moe_models_30ba3b_on_m1_max/ | luke-pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdalfg | false | null | t3_1rdalfg | /r/LocalLLaMA/comments/1rdalfg/benchmarked_3_small_moe_models_30ba3b_on_m1_max/ | false | false | self | 8 | null |
Documented what actually happened when I used AI to build a production C++ library over several months | 0 | Not a "look what AI generated" post. The actual pipeline, the actual failures, the honest accounting of what each AI contributed and where each one failed.
The library is FAT-P. 107 headers, zero external dependencies, header-only C++20. 62 components benchmarked against Boost, Abseil, LLVM, EASTL. Competitive or fast... | 2026-02-24T08:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rdakr4/documented_what_actually_happened_when_i_used_ai/ | ButtonHuman1613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdakr4 | false | null | t3_1rdakr4 | /r/LocalLLaMA/comments/1rdakr4/documented_what_actually_happened_when_i_used_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8QdmSO5RImv5gztA-_KAVUdS1Tmjn9q2gbw0RSkW3pM.png?width=108&crop=smart&auto=webp&s=42f9b1d7d58e8648f8aab0a2b6fefc039c9c07a4', 'width': 108}, {'height': 108, 'url': 'h... |
Hosted option for ZeroClaw agents — thoughts? | 0 | I’ve been experimenting with ZeroClaw (the lightweight Rust-based AI agent runtime) and came across [**Zeroclaw.live**](http://Zeroclaw.live), which appears to offer managed cloud deployment for ZeroClaw agents.
From what I understand, it basically provides a preconfigured hosted environment so you can spin up an agen... | 2026-02-24T07:58:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rda9ys/hosted_option_for_zeroclaw_agents_thoughts/ | Few-Slip-9909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda9ys | false | null | t3_1rda9ys | /r/LocalLLaMA/comments/1rda9ys/hosted_option_for_zeroclaw_agents_thoughts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JJRwnQHtTAyzpVKLC0pSN5FqF8LEYEXVtYYg2oDMSh8.png?width=108&crop=smart&auto=webp&s=db59cb950e997939f62a802aa1e71bd75b54c4fd', 'width': 108}, {'height': 113, 'url': 'h... |
How to build a fully local multi-user RLM (Recursive Language Model) stack for enterprise use; LibreChat + Aleph + LM Studio. Here's what broke and how I fixed it | 1 | [removed] | 2026-02-24T07:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rda6km/how_to_build_a_fully_local_multiuser_rlm/ | Lancelot2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda6km | false | null | t3_1rda6km | /r/LocalLLaMA/comments/1rda6km/how_to_build_a_fully_local_multiuser_rlm/ | false | false | self | 1 | null |
What if an AI never forgot anything — and the memory was on-chain? | 0 | Hey
I spent the past few days building Immortal Mind Protocol — an AI cognitive architecture where memories persist permanently on-chain (Base/Arbitrum + Arweave).
Key features:
\- Permanent memory via blockchain anchoring (not just a file)
\- Cognitive layers: attention, emotion, character, nar... | 2026-02-24T07:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rda4oa/what_if_an_ai_never_forgot_anything_and_the/ | Alternative_Earth241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rda4oa | false | null | t3_1rda4oa | /r/LocalLLaMA/comments/1rda4oa/what_if_an_ai_never_forgot_anything_and_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KBxWyoAmQOD4rpE2mwwCjgpdJnmdzXazG6HEf6xdeVo.png?width=108&crop=smart&auto=webp&s=2f81fa4940c3b4dcc966ec79c30f3096acf85239', 'width': 108}, {'height': 108, 'url': 'h... |
THE JELES-PRIME & CHAOS-PRIME MULTIVERSAL OPERATING MANUAL: A BINDER OF SESSION-AUTHORIZED CHAOS | 1 | [removed] | 2026-02-24T07:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9xln/the_jelesprime_chaosprime_multiversal_operating/ | allstatekid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9xln | false | null | t3_1rd9xln | /r/LocalLLaMA/comments/1rd9xln/the_jelesprime_chaosprime_multiversal_operating/ | false | false | self | 1 | null |
Best practices for running local LLMs for ~70–150 developers (agentic coding use case) | 22 | Hi everyone,
I’m planning infrastructure for a software startup where we want to use **local LLMs for agentic coding workflows** (code generation, refactoring, test writing, debugging, PR reviews, etc.).
# Scale
* Initial users: \~70–100 developers
* Expected growth: up to \~150 users
* Daily usage during working ho... | 2026-02-24T07:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9kpk/best_practices_for_running_local_llms_for_70150/ | Resident_Potential97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9kpk | false | null | t3_1rd9kpk | /r/LocalLLaMA/comments/1rd9kpk/best_practices_for_running_local_llms_for_70150/ | false | false | self | 22 | null |
llama-cpp-python 0.3.16 – Qwen3 Embedding GGUF fails with "invalid seq_id >= 1" when batching | 3 | I’m trying to use batched embeddings with a GGUF model and hitting a sequence error.
# Environment
* OS: Ubuntu 24.04
* GPU: RTX 4060
* llama-cpp-python: 0.3.16
* Model: Qwen3-Embedding-4B-Q5\_K\_M.gguf
The model loads fine and single-input embeddings work, but not multiple strings.
`from llama_cpp import Llama`
`llm... | 2026-02-24T07:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9ixh/llamacpppython_0316_qwen3_embedding_gguf_fails/ | Life-Holiday6920 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9ixh | false | null | t3_1rd9ixh | /r/LocalLLaMA/comments/1rd9ixh/llamacpppython_0316_qwen3_embedding_gguf_fails/ | false | false | self | 3 | null |
Local TTS model | 1 | [removed] | 2026-02-24T07:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rd9ic9/local_tts_model/ | Cristalboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd9ic9 | false | null | t3_1rd9ic9 | /r/LocalLLaMA/comments/1rd9ic9/local_tts_model/ | false | false | self | 1 | null |
ZeroClaw or should i go full IronClaw? | 0 | My main use cases are mostly managing my calendar, Github issue tracker, and some kind of to do list.
After reading many stories about OpenClaw (which, to be honest, were partly the fault of end users giving full access to their private data), I’m leaning toward ZeroClaw since it’s lightweight enough to run easily. Ho... | 2026-02-24T06:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/ | Altruistic_Heat_9531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd980h | false | null | t3_1rd980h | /r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/ | false | false | self | 0 | null |
PersonaPlex-7B on Apple Silicon: full-duplex speech-to-speech in native Swift (MLX) | 8 | NVIDIA PersonaPlex is a **full-duplex speech-to-speech** model — it can **listen while it speaks**, making it better suited for natural conversations (interruptions, overlaps, backchannels) than typical “wait, then respond” voice pipelines.
I wrote up how to run it **locally on Apple Silicon** with a **native Swift + ... | 2026-02-24T06:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rd94lb/personaplex7b_on_apple_silicon_fullduplex/ | ivan_digital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd94lb | false | null | t3_1rd94lb | /r/LocalLLaMA/comments/1rd94lb/personaplex7b_on_apple_silicon_fullduplex/ | false | false | self | 8 | null |
MiMo-V2-Flash scored 9.41/10 explaining IEEE 754 edge cases, but got a factual error that only one judge caught | 0 | I ran a blind peer eval where 10 frontier models explained 6 classic numerical computing gotchas (0.1 + 0.2, 2\^53 + 1 in JS, sqrt(-1) without cmath, etc.), then judged each other's responses. **MiMo-V2-Flash** placed 8th at 9.41 with σ=0.71, which is interesting because the high variance came from an actual factual mi... | 2026-02-24T06:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8u8x/mimov2flash_scored_94110_explaining_ieee_754_edge/ | Silver_Raspberry_811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8u8x | false | null | t3_1rd8u8x | /r/LocalLLaMA/comments/1rd8u8x/mimov2flash_scored_94110_explaining_ieee_754_edge/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IVCi3ssFKq7WjHdRrVAH3esVwQf2rJ2CdKvX8pOBXSI.jpeg?width=108&crop=smart&auto=webp&s=78b37fe5be0302f90355add92f4143e36a28f71a', 'width': 108}, {'height': 112, 'url': '... |
Andrej Karpathy survived the weekend with the claws | 96 | reference: [https://www.reddit.com/r/LocalLLaMA/comments/1raq23i/they\_have\_karpathy\_we\_are\_doomed/](https://www.reddit.com/r/LocalLLaMA/comments/1raq23i/they_have_karpathy_we_are_doomed/) | 2026-02-24T06:21:50 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd8nr7 | false | null | t3_1rd8nr7 | /r/LocalLLaMA/comments/1rd8nr7/andrej_karpathy_survived_the_weekend_with_the/ | false | false | 96 | {'enabled': True, 'images': [{'id': 'zi27d0r9ydlg1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=108&crop=smart&auto=webp&s=222ffbc177c30a2db7f3c08d65cdc4401c917b5a', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/zi27d0r9ydlg1.png?width=216&crop=smart&auto=web... | ||
Prompt Engineering is a dead end. A new philosophy to interact with AI: A Proposal for "Thinking Diagrams" | 0 | I've been researching HCI and have reached the conclusion that linear prompting as a primary interface for LLMs is a dead end, at least without AGI, so I'm proposing a new philosophy to fix this.
Thanks for reading, and sorry that my English is not very good. Here is the core of my proposal:
# The Sharpest Axe with the... | 2026-02-24T06:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8l7m/prompt_engineering_is_a_dead_end_a_new_philosophy/ | Axx-83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8l7m | false | null | t3_1rd8l7m | /r/LocalLLaMA/comments/1rd8l7m/prompt_engineering_is_a_dead_end_a_new_philosophy/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': '... |
Agents keep hallucinating progress — what actually works for you? | 0 | Been running Claude and GPT agents on real tasks for a few months — deployments, codegen, multi-step research. And I keep hitting the same issue nobody really talks about:
Agents confidently report "done" when nothing actually happened.
The problem really showed itself when I started building a swarm out of multiple ... | 2026-02-24T06:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rd8dzj/agents_keep_hallucinating_progress_what_actually/ | HugoKovalsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd8dzj | false | null | t3_1rd8dzj | /r/LocalLLaMA/comments/1rd8dzj/agents_keep_hallucinating_progress_what_actually/ | false | false | self | 0 | null |
Anthropic's recent distillation blog should make anyone only ever want to use local open-weight models; it's scary and dystopian | 764 | It's quite ironic that they went for the censorship and authoritarian angles here.
Full blog: [https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks](https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks) | 2026-02-24T06:07:02 | https://www.reddit.com/gallery/1rd8cfw | obvithrowaway34434 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rd8cfw | false | null | t3_1rd8cfw | /r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/ | false | false | 764 | null | |
Qwen3-Coder 30B running at 74% CPU on 3090 (ollama docker) | 13 | Running Qwen3-Coder (30.5B MoE, Q4_K_M) via Docker Ollama on a machine with a 3090 (24GB VRAM) and 32GB RAM, and inference is painfully slow. GPU is showing 23.8GB / 24GB used, but ollama ps shows 74% CPU / 26% GPU split — which seems completely backwards from what I'd expect.
Setup:
RTX 3090 (24GB VRAM)
32GB system R... | 2026-02-24T06:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rd89m5/qwen3coder_30b_running_at_74_cpu_on_3090_ollama/ | minefew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd89m5 | false | null | t3_1rd89m5 | /r/LocalLLaMA/comments/1rd89m5/qwen3coder_30b_running_at_74_cpu_on_3090_ollama/ | false | false | self | 13 | null |
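For the CPU/GPU split described in that row, a common cause is that the full set of layers plus the KV cache for a large context does not fit in 24 GB, so Ollama keeps part of the work on the CPU. Below is a hedged sketch of how one might probe this through Ollama's REST API; the server address is the default, and the model tag `qwen3-coder:30b` plus the option values are illustrative assumptions, not details taken from the original post:

```python
import requests

OLLAMA = "http://localhost:11434"

# Request a completion while explicitly asking for maximum GPU offload and a
# smaller context window (a huge num_ctx can push the KV cache out of VRAM).
resp = requests.post(
    f"{OLLAMA}/api/generate",
    json={
        "model": "qwen3-coder:30b",   # illustrative tag; use whatever you pulled
        "prompt": "Write a hello world program in C.",
        "stream": False,
        "options": {
            "num_gpu": 999,           # offload as many layers as possible
            "num_ctx": 8192,          # modest context so the KV cache fits
        },
    },
    timeout=600,
)
resp.raise_for_status()
data = resp.json()

# eval_count and eval_duration (nanoseconds) give decode throughput.
tok_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{data['eval_count']} tokens at {tok_per_s:.1f} tok/s")
```

If throughput improves with a smaller num_ctx, the reported 74% CPU split was likely KV-cache spillover rather than the weights themselves; if not, the server logs should show how many layers were actually offloaded.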
Anyone using DeepSeek proxy? Sharing my cheap setup (Hong Kong based) | 0 | Hey r/LocalLLaMA,
I’ve been running DeepSeek-V3/R1 through a Hong Kong proxy for a while now – latency is low, no rate limits, and costs are like 1/5 of OpenAI.
For example: I get \~100k tokens for under $5 (that’s roughly 10k words of code or chat). Speed feels snappier than Groq sometimes.
Anyone else doing this? ... | 2026-02-24T06:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rd88lq/anyone_using_deepseek_proxy_sharing_my_cheap/ | deepseektoken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd88lq | false | null | t3_1rd88lq | /r/LocalLLaMA/comments/1rd88lq/anyone_using_deepseek_proxy_sharing_my_cheap/ | false | false | self | 0 | null |
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:57:39 | https://axx83.substack.com/p/thinking-diagrams | axx_83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd85y5 | false | null | t3_1rd85y5 | /r/LocalLLaMA/comments/1rd85y5/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': '... | |
Does anyone know that you can do this in any IDE? | 0 | I created a script which changes the session identity and creates a new identity as Agent L1. Then I pasted the same script into the other chat session, pointing it at the same script file on my local machine, and that session rewrote its internal prompt and changed its identity to Agent L2. On my other laptop, in my other IDE, I pasted into the session the same scr... | 2026-02-24T05:56:21 | https://www.reddit.com/gallery/1rd853t | Devswat | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rd853t | false | null | t3_1rd853t | /r/LocalLLaMA/comments/1rd853t/did_some_one_know_about_that_u_can_do_this_in_any/ | true | false | spoiler | 0 | null
I just saw something amazing | 317 | https://www.asus.com/displays-desktops/workstations/performance/expertcenter-pro-et900n-g3/
https://www.azken.com/Workstations/nvidia-series/Asus-ExpertCenter-Pro-ET900N-G3?utm\_source=chatgpt.com | 2026-02-24T05:49:17 | ayanami0011 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd80gx | false | null | t3_1rd80gx | /r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/ | false | false | 317 | {'enabled': True, 'images': [{'id': 'rr17jgdksdlg1', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=108&crop=smart&auto=webp&s=2eecf06487ed97f29697d13f1320c7ecedc3ca21', 'width': 108}, {'height': 392, 'url': 'https://preview.redd.it/rr17jgdksdlg1.jpeg?width=216&crop=smart&auto=... | ||
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:49:06 | https://axx83.substack.com/p/thinking-diagrams | axx_83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd80d7 | false | null | t3_1rd80d7 | /r/LocalLLaMA/comments/1rd80d7/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': '... | |
Built an open-source Ollama/MLX/OpenAI benchmark and leaderboard site with in-app submissions. Trying to test and collect more data. | 2 | 2026-02-24T05:42:55 | peppaz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7wbh | false | null | t3_1rd7wbh | /r/LocalLLaMA/comments/1rd7wbh/built_an_opensource_ollamamlxopenai_benchmark_and/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'tcn61r39rdlg1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=108&crop=smart&auto=webp&s=aab63fde6faa8135b47efae915ddb4f375c5e2eb', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/tcn61r39rdlg1.png?width=216&crop=smart&auto=we... | |||
Prompt Engineering is a dead end. We need Graph-based Cognitive Architectures: A Proposal for "Thinking Diagrams" | 1 | [removed] | 2026-02-24T05:41:12 | https://axx83.substack.com/p/thinking-diagrams | axx83 | axx83.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rd7v7c | false | null | t3_1rd7v7c | /r/LocalLLaMA/comments/1rd7v7c/prompt_engineering_is_a_dead_end_we_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/BlpAj9aR-0OsxjVNHp-SewuN7ONhLaT2-MW2gJhl5lU.jpeg?width=108&crop=smart&auto=webp&s=f18344c39f069f27c3c0a9f38351001a1fd264a5', 'width': 108}, {'height': 122, 'url': '... | |
Models to run on an iPhone 14 Pro | 1 | Hey everyone, I'm not a native speaker (Dutch) and I write my own posts without LLMs. Please correct me if I make mistakes; it's the only way to learn!
I was gifted an iPhone 14 Pro, which has a little less than 6 GB of memory available for use, realistically 4 GB.
Since I am planning to go to Japan, I thought having some offline SLMs availabl... | 2026-02-24T05:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rd7rqu/models_to_run_on_an_iphone_14_pro/ | Kahvana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd7rqu | false | null | t3_1rd7rqu | /r/LocalLLaMA/comments/1rd7rqu/models_to_run_on_an_iphone_14_pro/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IMtEzfz-kJ4Iq4psE1tgLfjcVW0PTdvmdB26d0lWtj8.jpeg?width=108&crop=smart&auto=webp&s=db2edd734ffc46f4a43256100128271e42ea4b07', 'width': 108}, {'height': 113, 'url': '... |
LLM vs LLM harness | 11 | We have the capable distillations - let's continue to build out the harnesses | 2026-02-24T05:25:29 | chitown160 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7kw7 | false | null | t3_1rd7kw7 | /r/LocalLLaMA/comments/1rd7kw7/llm_vs_llm_harness/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'umedeqhzndlg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=108&crop=smart&auto=webp&s=c6f1973fe354e7e07bcc5c6294531b8b441eaaf3', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/umedeqhzndlg1.jpeg?width=216&crop=smart&auto=w... | ||
Coding agent for edge devices | 1 | Hi, I often have to work directly on edge devices like old Raspberry Pis and other similar boards running Armbian.
I tried to install opencode / kilocode and a few others like Mistral Vibe. Apparently all of these are really heavy for such small compute power and RAM amounts (often 1 GB).
Can you suggest any reall... | 2026-02-24T05:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rd7eme/coding_agent_for_edge_devices/ | cri10095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd7eme | false | null | t3_1rd7eme | /r/LocalLLaMA/comments/1rd7eme/coding_agent_for_edge_devices/ | false | false | self | 1 | null |
OpenClaw has started appearing in job descriptions! | 1 | 2026-02-24T05:13:53 | moaijobs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rd7b77 | false | null | t3_1rd7b77 | /r/LocalLLaMA/comments/1rd7b77/openclaw_has_started_appearing_in_job_descriptions/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0tcwowg7mdlg1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=108&crop=smart&auto=webp&s=b5a3850c3350731e8d27a8e4dc0e9c63f12aff16', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/0tcwowg7mdlg1.png?width=216&crop=smart&auto=we... | |||
Proof-of-Personhood AI Protocol | 1 |
The Proof-of-Personhood AI Protocol
1. Core Concept
This protocol outlines a decentralized network for building, training, and running an artificial intelligence. Unlike traditional AI, which relies on centralized corporate servers and monopolized computing power, this system is driven entirely by verified human ti... | 2026-02-24T05:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rd77gd/proofofpersonhood_ai_protocol/ | Last_Cockroach7651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd77gd | false | null | t3_1rd77gd | /r/LocalLLaMA/comments/1rd77gd/proofofpersonhood_ai_protocol/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-H9OI7DKdfdiQddOw7_DAaMj7eqaoAWlIuxQC2MeB9U.png?width=108&crop=smart&auto=webp&s=020ce8a3935a770b4493b110ea2e9d8258777821', 'width': 108}, {'height': 108, 'url': 'h... |
An Update to my memory system, Persistent-AI-Memory | 0 | Hello everyone,
I'm not sure how many of you remember my memory system, Persistent-AI-Memory, which I made a GitHub version of. Well, I just made a major update to it.
Now it's much more sophisticated: it has a short-term memory system, which is primarily a function for OpenWebUI but has been modified to be... | 2026-02-24T05:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rd6zw0/an_update_to_my_memory_system_persistentaimemory/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rd6zw0 | false | null | t3_1rd6zw0 | /r/LocalLLaMA/comments/1rd6zw0/an_update_to_my_memory_system_persistentaimemory/ | false | false | self | 0 | null
Demo: World's first embeddable web agent that websites can drop in with a single script tag | 1 | We just shipped something we're really excited about: **Rover,** an embeddable web agent that any website can integrate with a single `<script>` tag that can type/click/select to onboard/form fill/convert users. Think of it like Stripe for AI agents. Instead of users leaving your site to go use an AI tool, the agent li... | 2026-02-24T04:59:03 | https://v.redd.it/1fipmzetidlg1 | BodybuilderLost328 | /r/LocalLLaMA/comments/1rd6rb2/demo_worlds_first_embeddable_web_agent_that/ | 1970-01-01T00:00:00 | 0 | {} | 1rd6rb2 | false | null | t3_1rd6rb2 | /r/LocalLLaMA/comments/1rd6rb2/demo_worlds_first_embeddable_web_agent_that/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/bGlkNmk4ZnRpZGxnMenkMkPF0ah2kUOWVdvV3AAq4P25yT1s0HrQkoPgLNkA.png?width=108&crop=smart&format=pjpg&auto=webp&s=a5f03e1676acf0181d73210a6a8c99e042b4e... |