title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen3.5-35B-A3B is a gamechanger for agentic coding. | 1,042 | [Qwen3.5-35B-A3B with Opencode](https://preview.redd.it/m4v951sv5jlg1.jpg?width=2367&format=pjpg&auto=webp&s=bec61ca20f08bb766987147287c7d6664308fa2f)
Just tested this badboy with Opencode **cause frankly I couldn't believe those benchmarks.** Running it on a single RTX 3090 on a headless Linux box. Freshly compiled... | 2026-02-25T00:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/ | jslominski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxfdu | false | null | t3_1rdxfdu | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/ | false | false | 1,042 | null | |
Excluding used hardware what is currently considered the best bang for buck in Feb 2026? | 3 | Given what is going on with GPU and memory prices what is currently considered the best bang for buck with new hardware at around $1,000-1,500 USD that can run 24-32B models at a decent speed with 8k or larger context?
**Recommended options I've seen are:**
- 2X RTX 5060ti's (moderate speed)
- 2X RX 9060xt's. (mod... | 2026-02-25T00:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rdxegq/excluding_used_hardware_what_is_currently/ | mustafar0111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdxegq | false | null | t3_1rdxegq | /r/LocalLLaMA/comments/1rdxegq/excluding_used_hardware_what_is_currently/ | false | false | self | 3 | null |
I feel like it keeps going back and forth like this... | 0 | 2026-02-25T00:01:35 | https://imgflip.com/i/al0gnp | moderately-extremist | imgflip.com | 1970-01-01T00:00:00 | 0 | {} | 1rdxck5 | false | null | t3_1rdxck5 | /r/LocalLLaMA/comments/1rdxck5/i_feel_like_it_keeps_going_back_and_forth_like/ | false | false | 0 | {'enabled': False, 'images': [{'id': '3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc.jpeg?width=108&crop=smart&auto=webp&s=ac96bedd556ef91a530b08f57623d5869c98bc09', 'width': 108}, {'height': 208, 'url': ... | ||
Anyone else watching DeepSeek repos? 39 PRs merged today — pre-release vibes or just normal cleanup? | 0 | I saw a post claiming DeepSeek devs merged **39 PRs today** in one batch, and it immediately gave me “release hardening” vibes.
Not saying “V4 confirmed” or anything — but big merge waves *often* happen when:
- features are basically frozen
- QA/regression is underway
- docs/tests/edge cases get cleaned up... | 2026-02-24T23:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rdx9up/anyone_else_watching_deepseek_repos_39_prs_merged/ | azahar_h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdx9up | false | null | t3_1rdx9up | /r/LocalLLaMA/comments/1rdx9up/anyone_else_watching_deepseek_repos_39_prs_merged/ | false | false | self | 0 | null |
Gemini 3.1 Pro is a visionary designer, but a terrible employee. (Our battle with a 20% tool-call failure rate in a multi-agent stack) | 0 | We're building Bobr — an AI presentation generator that uses a multi-agent architecture. We recently added Gemini 3.1 Pro as a model option alongside Claude Sonnet 4.6 and GPT-5.2.
**Look at the attached images.** The visual and design quality completely blew us away. The reliability of the model... not so much.
Here... | 2026-02-24T23:58:18 | https://www.reddit.com/gallery/1rdx9i5 | bobr_ai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdx9i5 | false | null | t3_1rdx9i5 | /r/LocalLLaMA/comments/1rdx9i5/gemini_31_pro_is_a_visionary_designer_but_a/ | false | false | 0 | null | |
Apple gave up on the AI rat race to become the next Nvidia | 1 | [removed] | 2026-02-24T23:58:05 | https://x.com/tim_cook/status/2026351829928624257 | _manteca | x.com | 1970-01-01T00:00:00 | 0 | {} | 1rdx9b0 | false | null | t3_1rdx9b0 | /r/LocalLLaMA/comments/1rdx9b0/apple_gave_up_on_the_ai_rat_race_to_become_the/ | false | false | default | 1 | null |
Ran 3 popular ~30B MoE models on my apple silicon M1 Max 64GB. Here's how they compare | 11 | Three recent "small but mighty" MoE models (GLM-4.7-Flash, Nemotron-3-Nano, and Qwen3-Coder) all share a similar formula: roughly 30 billion total parameters, but only ~3 billion active per token. That makes them ideal candidates for local inference on Apple Silicon. I put all three through the same gauntlet o... | 2026-02-24T23:50:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rdx2c7/ran_3_popular_30b_moe_models_on_my_apple_silicon/ | luke_pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdx2c7 | false | null | t3_1rdx2c7 | /r/LocalLLaMA/comments/1rdx2c7/ran_3_popular_30b_moe_models_on_my_apple_silicon/ | false | false | self | 11 | null |
Text Behind Video: Create cinematic text and video compositions locally in your browser w/ Transformers.js | 17 | The model (BEN2 by PramaLLC) runs locally in your browser on WebGPU with Transformers.js v4, and video processing/composition is handled by Mediabunny (amazing library)! The model and demo code are MIT-licensed, so feel free to use and adapt it however you want. Hope you like it!
Demo (+ source code): [https://hugging... | 2026-02-24T23:44:06 | https://v.redd.it/lknmglxs3jlg1 | xenovatech | /r/LocalLLaMA/comments/1rdwx1x/text_behind_video_create_cinematic_text_and_video/ | 1970-01-01T00:00:00 | 0 | {} | 1rdwx1x | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lknmglxs3jlg1/DASHPlaylist.mpd?a=1774718122%2CNDgyZjdkMWNhODBkMGRkYWUzMjQ3NjYzNDYwNmJkNWVjNzg4MjU5Y2NjZDcwMWU1Mjc5MTRmY2UwYWU1MDFmZg%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/lknmglxs3jlg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rdwx1x | /r/LocalLLaMA/comments/1rdwx1x/text_behind_video_create_cinematic_text_and_video/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=108&crop=smart&format=pjpg&auto=webp&s=35b8b3c61650c908a317c6f13c69dba7d7bca... | |
Need a recommendation for a machine | 1 | Hello guys, i have a budget of around 2500 euros for a new machine that i want to use for inference and some fine tuning. I have seen the Strix Halo being recommended a lot and checked the EVO-X2 from GMKtec and it seems that it is what i need for my budget. However, no Nvidia means no CUDA, do you guys have any though... | 2026-02-24T23:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rdw83n/need_a_recommendation_for_a_machine/ | wavz89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdw83n | false | null | t3_1rdw83n | /r/LocalLLaMA/comments/1rdw83n/need_a_recommendation_for_a_machine/ | false | false | self | 1 | null |
Bullshit Benchmark - A benchmark for testing whether models identify and push back on nonsensical prompts instead of confidently answering them | 86 | [](https://preview.redd.it/bullshit-benchmark-a-benchmark-for-testing-whether-models-v0-g8qfezc2yilg1.png?width=1080&format=png&auto=webp&s=01cd7edc54f5c3c06ce4667a9217fe4f7da2338c)
https://preview.redd.it/n7w95mmuyilg1.png?width=1080&format=png&auto=webp&s=6e87d1a7d9275935b2f552cfbb887ad6fe4dcf86
View the results... | 2026-02-24T23:14:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rdw6pp/bullshit_benchmark_a_benchmark_for_testing/ | bot_exe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdw6pp | false | null | t3_1rdw6pp | /r/LocalLLaMA/comments/1rdw6pp/bullshit_benchmark_a_benchmark_for_testing/ | false | false | 86 | null | |
What happened to ChatGPT? Moving to Claude! | 0 | Can't believe I'm saying this, but I'll move to Claude. I always liked Claude for coding, not for general-purpose chat, analysis, or creative writing. Now, I'm realizing ChatGPT is getting really bad! Like, I ask a question about something and it ignores it completely and answers the same question as the previous turn.... | 2026-02-24T23:12:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdw40e/what_happened_to_chatgpt_moving_to_claude/ | awsaf49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdw40e | false | null | t3_1rdw40e | /r/LocalLLaMA/comments/1rdw40e/what_happened_to_chatgpt_moving_to_claude/ | false | false | self | 0 | null |
Fully local code indexing with Ollama embeddings — GPU-accelerated semantic search, no API keys, no cloud | 1 | Built an MCP server called srclight for deep code indexing that's 100% local. No API keys, no cloud calls, your code never leaves your machine.
The stack:
- tree-sitter AST parsing (11 languages: Python, C, C++, C#, JavaScript, TypeScript, Dart, Swift, Kotlin, Java, Go)
- SQLite FTS5 for keyword search (3 indexes: sym... | 2026-02-24T23:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvx2k/fully_local_code_indexing_with_ollama_embeddings/ | srclight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvx2k | false | null | t3_1rdvx2k | /r/LocalLLaMA/comments/1rdvx2k/fully_local_code_indexing_with_ollama_embeddings/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=108&crop=smart&auto=webp&s=e7e07b3f33ffaa7e0628b203e101b800df8655ac', 'width': 108}, {'height': 108, 'url': 'h... |
Local LLM Benchmark tools | 3 | What are you guys using for LLM benchmarks to compare various models on your hardware? I’m looking for something basic to get performance snapshots while iterating with various models and their configurations in a more objective manner than just eyeballing and vibes. I use two platforms, llama.cpp and LM Studio. | 2026-02-24T23:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvva6/local_llm_benchmark_tools/ | BargeCptn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvva6 | false | null | t3_1rdvva6 | /r/LocalLLaMA/comments/1rdvva6/local_llm_benchmark_tools/ | false | false | self | 3 | null |
started using AnythingLLM - having trouble understanding key concepts | 2 | AnythingLLM seems like a powerful tool, but so far I am mostly confused and feel like I am missing the point
1. are threads actually "chats"? if so, what's the need for a "default" thread? also, "forking" a new thread just shows it branching from the main workspace and not from the original thread
2. Are contexts fro... | 2026-02-24T23:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvt02/started_using_anythingllm_having_trouble/ | Coach_Unable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvt02 | false | null | t3_1rdvt02 | /r/LocalLLaMA/comments/1rdvt02/started_using_anythingllm_having_trouble/ | false | false | self | 2 | null |
Qwen3.5 27B is Match Made in Heaven for Size and Performance | 238 | Just got Qwen3.5 27B running on server and wanted to share the full setup for anyone trying to do the same.
**Setup:**
* Model: Qwen3.5-27B-Q8_0 (unsloth GGUF), thanks Dan
* GPU: RTX A6000 48GB
* Inference: llama.cpp with CUDA
* Context: 32K
* Speed: ~19.7 tokens/sec
**Why Q8 and not a lower quant?** With 48GB VR... | 2026-02-24T22:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvq3s | false | null | t3_1rdvq3s | /r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/ | false | false | self | 238 | {'enabled': False, 'images': [{'id': 'OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4.jpeg?width=108&crop=smart&auto=webp&s=d81dc977b0ef3f17ba6bdc11da8a39f9ca06bd20', 'width': 108}, {'height': 162, 'url': '... |
Open source research assistant with 7 AI agents – supports local models via LiteLLM | 1 | [removed] | 2026-02-24T22:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvoz3/open_source_research_assistant_with_7_ai_agents/ | abed_tarakji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvoz3 | false | null | t3_1rdvoz3 | /r/LocalLLaMA/comments/1rdvoz3/open_source_research_assistant_with_7_ai_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=108&crop=smart&auto=webp&s=4c04a708881661732f6e1729eb147cae65bc95dc', 'width': 108}, {'height': 108, 'url': 'h... |
Show HN: AgentKeeper – Cross-model memory for AI agents | 2 | Problem I kept hitting: every time I switched LLM providers or an agent crashed, it lost all context.
Built AgentKeeper to fix this. It introduces a Cognitive Reconstruction Engine (CRE) that stores agent memory independently of any provider.
Usage:
agent = agentkeeper.create()
agent.remember("project budget:... | 2026-02-24T22:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvmtc/show_hn_agentkeeper_crossmodel_memory_for_ai/ | Rich-Department-7049 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvmtc | false | null | t3_1rdvmtc | /r/LocalLLaMA/comments/1rdvmtc/show_hn_agentkeeper_crossmodel_memory_for_ai/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=108&crop=smart&auto=webp&s=25d34d214885b1a40b3a01a84467e78cd2821f60', 'width': 108}, {'height': 108, 'url': 'h... |
XCFramework and iOS 26.2? | 3 | Anyone here have success with llama-xcframework on iOS 26.2? I’m writing a Swift AI chat front end for it and can’t seem to get inference working. The app crashes as soon as a prompt is sent. Something to do with tokenization. Are they even compatible? I tried with a bridging header too. No dice! I’m trying with small models.... | 2026-02-24T22:46:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rdvg5r/xcframework_and_ios_262/ | FreQRiDeR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdvg5r | false | null | t3_1rdvg5r | /r/LocalLLaMA/comments/1rdvg5r/xcframework_and_ios_262/ | false | false | self | 3 | null |
FlashLM v6 "SUPERNOVA": 4.1M ternary model hits 3,500 tok/s on CPU — novel P-RCSM reasoning architecture, no attention, no convolution | 74 | Back with v6. Some of you saw v5 “Thunderbolt” — 29.7M params, PPL 1.36, beat the TinyStories-1M baseline on a borrowed Ryzen 7950X3D (thanks again to arki05 for that machine). This time I went back to the free Deepnote notebook — 2 threads, 5GB RAM — and built a completely new architecture from scratch.
**What it is:... | 2026-02-24T22:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rdv74o/flashlm_v6_supernova_41m_ternary_model_hits_3500/ | Own-Albatross868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdv74o | false | null | t3_1rdv74o | /r/LocalLLaMA/comments/1rdv74o/flashlm_v6_supernova_41m_ternary_model_hits_3500/ | false | false | self | 74 | {'enabled': False, 'images': [{'id': 'SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=108&crop=smart&auto=webp&s=b26a0808a94ce4951c0439b760a31e4def13b061', 'width': 108}, {'height': 108, 'url': 'h... |
Running Kimi K2.5? - Tell us your Build, Quant, Pre-processing and Generation Tokens/second Please! | 4 | I'm extremely interested in running kimi k2.5 at home but want to understand the hardware options and approximate speeds I'm going to get running the model.
The easy (and common) answer is 1-2 Mac M3 Ultra 512GB Studios (depending on the quant; if I went this route, I'm waiting for the M5). $11-22k
Looking at ... | 2026-02-24T22:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rdv3v0/running_kimi_k25_tell_us_your_build_quant/ | bigh-aus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdv3v0 | false | null | t3_1rdv3v0 | /r/LocalLLaMA/comments/1rdv3v0/running_kimi_k25_tell_us_your_build_quant/ | false | false | self | 4 | null |
GLM4.7 flash VS Qwen 3.5 35B | 39 | Hi all! I was wondering if anyone has compared these two models thoroughly, and if so, what their thoughts on them are. Thanks! | 2026-02-24T22:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rduokx/glm47_flash_vs_qwen_35_35b/ | KlutzyFood2290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rduokx | false | null | t3_1rduokx | /r/LocalLLaMA/comments/1rduokx/glm47_flash_vs_qwen_35_35b/ | false | false | self | 39 | null |
For those looking to enable Qwen3.5's no-think mode in LM-Studio | 0 | Here is a Jinja template to enter in your model's settings from "My Models"; look for the "Inference" tab in the right-hand pane.
This template completely disables thinking mode, while we wait for LM-Studio to ship an update with a nice button.
LM-Studio lets you restore the template... | 2026-02-24T22:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rdulg1/pour_ceux_qui_cherchent_à_activer_le_mode_nothink/ | Adventurous-Paper566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdulg1 | false | null | t3_1rdulg1 | /r/LocalLLaMA/comments/1rdulg1/pour_ceux_qui_cherchent_à_activer_le_mode_nothink/ | false | false | self | 0 | null |
I built an open source Claude Code plugin that saves 94% of your context window. | 0 | The problem: Claude Code has a 200K token context window. With popular MCP servers like Playwright, Context7, and GitHub active, 72% is consumed before you even start working. A single Playwright snapshot burns up to 135K tokens. After 30 minutes of real debugging, responses slow to a crawl.
Context Mode fixes this. I... | 2026-02-24T22:14:03 | https://v.redd.it/163n7lxynilg1 | Alarming-Garage4299 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdula1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/163n7lxynilg1/DASHPlaylist.mpd?a=1774563265%2CNGE3MzIwYjlkNjBhNThhNDA5ZGZlZGRjNWQzNTgzNTIyODNlMzM1MzM0ZjE5OTc3MDczYjc4NTEyNTY0MjUzYg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/163n7lxynilg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rdula1 | /r/LocalLLaMA/comments/1rdula1/i_built_an_open_source_claude_code_plugin_that/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE', 'resolutions': [{'height': 127, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=108&crop=smart&format=pjpg&auto=webp&s=326da8fadc8d24e30d41dea2b34fb41f33f6... | |
Those of you running MoE coding models on 24-30GB, how long do you wait for a reply? | 2 | Something like GPT OSS 120B has a prompt processing speed of 80T/s for me due to the ram offload, meaning to wait for a single reply it takes like a whole minute before it even starts to stream. Idk why but I find this so abhorrent, mostly because it’s still not great quality.
What do yall experience? Maybe I just nee... | 2026-02-24T22:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdufse/those_of_you_running_moe_coding_models_on_2430gb/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdufse | false | null | t3_1rdufse | /r/LocalLLaMA/comments/1rdufse/those_of_you_running_moe_coding_models_on_2430gb/ | false | false | self | 2 | null |
Debugging my local-first “IDE assistant” System Monitor — false positives/negatives | 0 | Hey folks — I’m building a local-first web IDE (“Vibz”) with a System Monitor panel that checks 10 “cards” (backend, workspace, gates, models, loop runtime, etc.) by hitting FastAPI endpoints and doing a few probes against an Ollama-backed chat route.
I ran a truth audit (repo code + live API responses) and found a fe... | 2026-02-24T22:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rdue99/debugging_my_localfirst_ide_assistant_system/ | Apart-Yam-979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdue99 | false | null | t3_1rdue99 | /r/LocalLLaMA/comments/1rdue99/debugging_my_localfirst_ide_assistant_system/ | false | false | self | 0 | null |
I made a website to vote on LLM performance | 0 | Idk about you guys but the benchmarks mean nothing to me anymore so I thought we could just vote.
DEMOCRACY!!!
[livellmvoting.com](http://livellmvoting.com)
Also anthropic didn't respond to my help request (to let me load the upgraded opus 4.5 in claude code instead of 4.6) and I am a petty dev with too much time w... | 2026-02-24T22:04:34 | Lucky-Caterpillar780 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rduc3c | false | null | t3_1rduc3c | /r/LocalLLaMA/comments/1rduc3c/i_made_a_website_to_vote_on_llm_performance/ | false | false | 0 | {'enabled': True, 'images': [{'id': '0q5y9geflilg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=108&crop=smart&auto=webp&s=2cfc1a03c57021a9d8a980759a36e3a415705515', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=216&crop=smart&auto=web... | ||
A small tool I made for local LLMs: llm-neofetch-plus | 1 | [removed] | 2026-02-24T21:48:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rdtvwe | false | null | t3_1rdtvwe | /r/LocalLLaMA/comments/1rdtvwe/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | false | false | default | 1 | null | ||
A small tool I made for local LLMs: llm-neofetch-plus | 1 | [removed] | 2026-02-24T21:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rdts76/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | OwnTwilight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdts76 | false | null | t3_1rdts76 | /r/LocalLLaMA/comments/1rdts76/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=108&crop=smart&auto=webp&s=1eb86cca35e1aca7a4ae38907eaca6ac0387a257', 'width': 108}, {'height': 108, 'url': 'h... |
(HF Discussion) Increasing the precision of some of the weights when quantizing | 16 | A huggingface discussion that took place over about a week exploring the idea of increasing the quality of quantized models. | 2026-02-24T21:44:33 | https://huggingface.co/noctrex/Qwen3-Coder-Next-MXFP4_MOE-GGUF/discussions/2 | im-just-helping | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdts6k | false | null | t3_1rdts6k | /r/LocalLLaMA/comments/1rdts6k/hf_discussion_increasing_the_precision_of_some_of/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=108&crop=smart&auto=webp&s=c94fa9b0226d9dcd2d4130bf93e7e9d2d96c05e5', 'width': 108}, {'height': 116, 'url': 'h... | |
Is speculative decoding possible with Qwen3.5 via llamacpp? | 3 | Trying to run Qwen3.5-397b-a17b-mxfp4-moe with qwen3-0.6b-q8_0 as the draft model via llamacpp. But I’m getting “speculative decoding not supported by this context”. Has anyone been successful with getting speculative decoding to work with Qwen3.5? | 2026-02-24T21:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rdtq8u/is_speculative_decoding_possible_with_qwen35_via/ | Frequent-Slice-6975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdtq8u | false | null | t3_1rdtq8u | /r/LocalLLaMA/comments/1rdtq8u/is_speculative_decoding_possible_with_qwen35_via/ | false | false | self | 3 | null |
A small tool I made for local LLMs: llm-neofetch-plus | 1 | [removed] | 2026-02-24T21:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rdtpxs/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | OwnTwilight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdtpxs | false | null | t3_1rdtpxs | /r/LocalLLaMA/comments/1rdtpxs/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=108&crop=smart&auto=webp&s=1eb86cca35e1aca7a4ae38907eaca6ac0387a257', 'width': 108}, {'height': 108, 'url': 'h... |
A small tool I made for local LLMs: llm-neofetch-plus | 1 | [removed] | 2026-02-24T21:40:30 | OwnTwilight | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdto73 | false | null | t3_1rdto73 | /r/LocalLLaMA/comments/1rdto73/a_small_tool_i_made_for_local_llms_llmneofetchplus/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'lw9rbyrwhilg1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=108&crop=smart&auto=webp&s=b25faf07513437d0fa014d9750d4ac37a3716a6b', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=216&crop=smart&auto=webp... | ||
penPawz — native desktop AI platform with first-class Ollama support, multi-agent orchestration, 75+ tools (MIT, open source) | 1 | [removed] | 2026-02-24T21:16:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rdszzx/penpawz_native_desktop_ai_platform_with/ | openpawz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdszzx | false | null | t3_1rdszzx | /r/LocalLLaMA/comments/1rdszzx/penpawz_native_desktop_ai_platform_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=108&crop=smart&auto=webp&s=9a732abb2b176137fc11ee7d5af53ecd160b9592', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen3.5 27B solves Car wash test! | 24 | It's better than GPT-5.2 (in this regard) | 2026-02-24T21:03:37 | Ok-Scarcity-7875 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdsnk3 | false | null | t3_1rdsnk3 | /r/LocalLLaMA/comments/1rdsnk3/qwen35_27b_solves_car_wash_test/ | false | false | 24 | {'enabled': True, 'images': [{'id': '9l26xxambilg1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=108&crop=smart&auto=webp&s=a3f95048b6afc95db0d3f44647bfb325ca39d133', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=216&crop=smart&auto=web... | ||
My name is Claude. | 0 | 2026-02-24T20:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rdscit/my_name_is_claude/ | tr0llogic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdscit | false | null | t3_1rdscit | /r/LocalLLaMA/comments/1rdscit/my_name_is_claude/ | false | false | 0 | null | ||
I ran 33 ablation experiments on Qwen 394B MoE: Here are 10 novel empirical findings on why 4-bit CoT steering fails and how to bypass MoE routing. | 16 | # Novel Mechanisms of MoE Safety: Topological Ablation and Multi-Pathway Bypasses in Quantized Models
**Author:** Eric Jang
**Contact:** [eric@dealign.ai](mailto:eric@dealign.ai) | [dealign.ai](http://dealign.ai)
# Abstract
Recent advances in mechanistic interpretability and behavioral steering have successfully u... | 2026-02-24T20:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rdsa6n/i_ran_33_ablation_experiments_on_qwen_394b_moe/ | HealthyCommunicat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdsa6n | false | null | t3_1rdsa6n | /r/LocalLLaMA/comments/1rdsa6n/i_ran_33_ablation_experiments_on_qwen_394b_moe/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': '1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=108&crop=smart&auto=webp&s=4fb271ffabe6e6c8f0798680cc626a421f3c9566', 'width': 108}, {'height': 113, 'url': 'h... |
Strix Halo, models loading on memory but plenty of room left on GPU? | 2 | Have a new Minisforum Strix Halo with 128GB; I set 96GB to GPU in the AMD driver and full GPU offload in LM Studio. When I load 60-80GB models, my GPU only partially fills up, then system memory fills up and the model may fail to load if memory does not have space. BUT my GPU still has 30-40GB free. My current settings are below wi... | 2026-02-24T20:50:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/ | mindwip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rds9nm | false | null | t3_1rds9nm | /r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/ | false | false | self | 2 | null |
Introducing "Sonic" Opensource! | 2 | 1️⃣ Faster first token + smoother streaming
The model starts responding quickly and streams tokens smoothly.
2️⃣ Stateful threads
It remembers previous conversation context (like OpenAI’s thread concept).
Example: If you say “the second option,” it knows what you’re referring to.
3️⃣ Mid-stream cancel
If the model st... | 2026-02-24T20:39:40 | https://github.com/mitkox/sonic | DockyardTechlabs | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rdrzi4 | false | null | t3_1rdrzi4 | /r/LocalLLaMA/comments/1rdrzi4/introducing_sonic_opensource/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=108&crop=smart&auto=webp&s=d3f7a17c5616bcd66573cd944cd7dc18b95e6329', 'width': 108}, {'height': 108, 'url': 'h... | |
Qwen-3.5-35B-A3B is impressive | 41 | So look, I know the model has only been out for a few hours at this point, but I've been running it through my full test suite—Qwen3-30B-A3B was my daily driver for nearly a year, mainly as a general assistant, search engine, and coding helper.
Well... this new 35B-A3B model is pretty damn good. Its multimodality is e... | 2026-02-24T20:34:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rdru9p/qwen3535ba3b_is_impressive/ | ayylmaonade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdru9p | false | null | t3_1rdru9p | /r/LocalLLaMA/comments/1rdru9p/qwen3535ba3b_is_impressive/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=108&crop=smart&auto=webp&s=507db693fad483e1f413ebf3e8e2df9105361cd0', 'width': 108}, {'height': 109, 'url': '... |
Qwen: what is this thinking? | 0 | Im not able to understand this thinking, can someone explain please. | 2026-02-24T20:21:44 | Primary-You-3767 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdrhox | false | null | t3_1rdrhox | /r/LocalLLaMA/comments/1rdrhox/qwen_what_is_this_thinking/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'j1iyymv74ilg1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=108&crop=smart&auto=webp&s=2245a9b4acab920ea792f3c9b8d5452fc92ec207', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=216&crop=smart&auto=w... | ||
Apple rejected my AI app (4.3b) and told me to build a web app instead. Refusing to quit, but I need some wild pivot ideas. | 0 | well, it finally happened. after months of coding our AI companion app, apple hit us with the dreaded 4.3(b) "design - spam" rejection. they basically said the category is too saturated right now. translation: they only want the giant, heavily filtered corporate AI apps on the app store, and it's super hard for indie d... | 2026-02-24T20:21:02 | https://www.reddit.com/gallery/1rdrh19 | PassionLabAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdrh19 | false | null | t3_1rdrh19 | /r/LocalLLaMA/comments/1rdrh19/apple_rejected_my_ai_app_43b_and_told_me_to_build/ | false | false | 0 | null | |
New Model: Aion-2.0 - DeepSeek V3.2 Variant optimized for Roleplaying and Storytelling | 14 | Not on Hugging Face yet but here's the description from OpenRouter:
Aion-2.0 is a variant of DeepSeek V3.2 optimized for immersive roleplaying and storytelling. It is particularly strong at introducing tension, crises, and conflict into stories, making narratives feel more engaging. It also handles mature and dar... | 2026-02-24T20:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rdrg7p/new_model_aion20_deepseek_v32_variant_optimized/ | LoveMind_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdrg7p | false | null | t3_1rdrg7p | /r/LocalLLaMA/comments/1rdrg7p/new_model_aion20_deepseek_v32_variant_optimized/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=108&crop=smart&auto=webp&s=092b1f0f69a407fb9fe14f343ae3824d44930512', 'width': 108}, {'height': 113, 'url': 'h... |
pocketTTS streaming question | 1 | I know you can stream the audio output in real time, but what about incremental input text streaming?
I thought I read about pocketTTS natively supporting this but I can't seem to find that anymore. Maybe I'm mistaken.
Anyone currently streaming with pocketTTS? What does your input pipeline look like? | 2026-02-24T20:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rdreq9/pockettts_streaming_question/ | IcyMushroom4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdreq9 | false | null | t3_1rdreq9 | /r/LocalLLaMA/comments/1rdreq9/pockettts_streaming_question/ | false | false | self | 1 | null |
mlx-onnx: Run your MLX models in the browser using WebGPU | 6 | I just released mlx-onnx: a standalone IR/ONNX exporter for MLX models. It lets you export MLX models to ONNX and run them in a browser using WebGPU.
**Web Demo:** [https://skryl.github.io/mlx-ruby/demo/](https://skryl.github.io/mlx-ruby/demo/)
**Repo:** [https://github.com/skryl/mlx-onnx](https://github.com/skryl/ml... | 2026-02-24T20:16:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rdrcq3/mlxonnx_run_your_mlx_models_in_the_browser_using/ | rut216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdrcq3 | false | null | t3_1rdrcq3 | /r/LocalLLaMA/comments/1rdrcq3/mlxonnx_run_your_mlx_models_in_the_browser_using/ | false | false | self | 6 | null |
Averaged over the 36 text benchmarks provided for Qwen3.5's new small models I have a question | 1 | 2026-02-24T20:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rdrbdm/averaged_over_the_36_text_benchmarks_provided_for/ | pigeon57434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdrbdm | false | null | t3_1rdrbdm | /r/LocalLLaMA/comments/1rdrbdm/averaged_over_the_36_text_benchmarks_provided_for/ | false | false | 1 | null | ||
Instead of scraping websites for RAG, I’m testing a plain-text context file for agents + search engine | 1 | [removed] | 2026-02-24T20:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rdr8x8/instead_of_scraping_websites_for_rag_im_testing_a/ | Protocontext | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdr8x8 | false | null | t3_1rdr8x8 | /r/LocalLLaMA/comments/1rdr8x8/instead_of_scraping_websites_for_rag_im_testing_a/ | false | false | self | 1 | null |
Qwen3.5: 122B-A10B at IQ1 or 27B at Q4? | 7 | Genuine question. I keep trying to push what my 3090 can do 😂 | 2026-02-24T20:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rdr50h/qwen35_122ba10b_at_iq1_or_27b_at_q4/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdr50h | false | null | t3_1rdr50h | /r/LocalLLaMA/comments/1rdr50h/qwen35_122ba10b_at_iq1_or_27b_at_q4/ | false | false | self | 7 | null |
Instead of scraping websites for RAG, I’m testing a plain-text context file for agents + search engine | 0 | [removed] | 2026-02-24T20:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rdr2yo/instead_of_scraping_websites_for_rag_im_testing_a/ | Protocontext | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdr2yo | false | null | t3_1rdr2yo | /r/LocalLLaMA/comments/1rdr2yo/instead_of_scraping_websites_for_rag_im_testing_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=108&crop=smart&auto=webp&s=32faedbba7511f1fd725d24107e81708a16866c8', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen Coder or other model for coding recommendation | 0 | Hi guys, I am testing some models. I am a very experienced developer and wish to introduce a bit of AI into my day.
My machine:
* CPU: AMD Ryzen 7 5800X3D (16) @ 3.40 GHz
* GPU: NVIDIA GeForce RTX 4070 Ti SUPER \[Discrete\]
* Memory: 3.25 GiB / 31.26 GiB (10%)
I am using Ollama, but I am open to new options. i am ... | 2026-02-24T19:52:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rdqodn/gwen_coder_or_other_model_for_codding/ | joneco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdqodn | false | null | t3_1rdqodn | /r/LocalLLaMA/comments/1rdqodn/gwen_coder_or_other_model_for_codding/ | false | false | self | 0 | null |
It's a good day to be unboxing a 128GB RAM Macbook Pro | 1 | [deleted] | 2026-02-24T19:49:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rdqlms | false | null | t3_1rdqlms | /r/LocalLLaMA/comments/1rdqlms/its_a_good_day_to_be_unboxing_a_128gb_ram_macbook/ | false | false | default | 1 | null | ||
I built a persistent AI system that runs my three businesses. It took 16 weeks, 150+ prompt versions, and 14 spectacular failures. Here's the full paper. Thoughts? | 0 | The system is called Fish. It's a persistent cognitive architecture across multiple LLMs — Claude for reasoning, GPT for creative tasks, Gemini for research, Grok for chaos testing. 97,000+ persistent memory records. 16 autonomous daemons. Voice agent taking real customer calls.
I wrote it up as an academic paper beca... | 2026-02-24T19:47:31 | https://buildyourfish.com/paper.pdf | feltchair3 | buildyourfish.com | 1970-01-01T00:00:00 | 0 | {} | 1rdqjbb | false | null | t3_1rdqjbb | /r/LocalLLaMA/comments/1rdqjbb/i_built_a_persistent_ai_system_that_runs_my_three/ | false | false | default | 0 | null |
An old favorite being picked back up - RAG Me Up | 0 | Hi everyone. It's been a while (like about a year ago) that I last posted about our RAG framework called RAG Me Up, one of the earliest complete RAG projects that existed. We've been dormant for a while but are now picking things back up as the project has been taken over by a new organization (sensai.pt) for use in pr... | 2026-02-24T19:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rdqbyb/an_old_favorite_being_picked_back_up_rag_me_up/ | SensAI_PT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdqbyb | false | null | t3_1rdqbyb | /r/LocalLLaMA/comments/1rdqbyb/an_old_favorite_being_picked_back_up_rag_me_up/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=108&crop=smart&auto=webp&s=a4ebc9ac35225bd5766ecca9e5ea25bced83eebe', 'width': 108}, {'height': 121, 'url': '... |
Built a free macOS menu bar app to monitor remote NVIDIA GPUs over SSH — no
terminal needed | 7 | **NVSmiBar** — a macOS menu bar app that monitors remote NVIDIA GPUs over
SSH. Live GPU utilization, temperature, and VRAM updated every second, right
in your menu bar — no terminal windows, no SSH sessions to babysit. Supports
multiple GPUs, multiple servers, SSH config alias import, and installs in one
... | 2026-02-24T19:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rdqbfe/built_a_free_macos_menu_bar_app_to_monitor_remote/ | Dry_Pudding1344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdqbfe | false | null | t3_1rdqbfe | /r/LocalLLaMA/comments/1rdqbfe/built_a_free_macos_menu_bar_app_to_monitor_remote/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=108&crop=smart&auto=webp&s=492bd428217c411aea41c7fec169395204b2f3ae', 'width': 108}, {'height': 91, 'url': 'h... |
No Gemma 4 until Google IO? | 69 | With Google I/O running from May 19th - 20th we're not likely to see any Gemma updates until then, right? | 2026-02-24T19:30:17 | Ok-Recognition-3177 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdq1zl | false | null | t3_1rdq1zl | /r/LocalLLaMA/comments/1rdq1zl/no_gemma_4_until_google_io/ | false | false | 69 | {'enabled': True, 'images': [{'id': '6whnc24zuhlg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=108&crop=smart&auto=webp&s=ee18d9b8c96bca07e1f3cd97a307d2a9dd5d69fa', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=216&crop=smart&auto=web... | ||
Connected LFM2.5-VL-1.6B to my Blink security camera — 51 tokens/sec with APPLE GPU | 32 | I've tested a lot of local VLMs for security camera analysis — SmolVLM2, Qwen3-VL, MiniCPM-V, LLaVA.
LFM2.5-VL-1.6B from LiquidAI is the one I keep coming back to. Here's why.
**One example output:**
>"A mailman is delivering mail to a suburban house. The mailman is wearing a blue uniform and carrying a white mail b... | 2026-02-24T19:27:23 | https://www.reddit.com/gallery/1rdpz30 | solderzzc | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdpz30 | false | null | t3_1rdpz30 | /r/LocalLLaMA/comments/1rdpz30/connected_lfm25vl16b_to_my_blink_security_camera/ | false | false | 32 | null | |
Qwen 3.5 family benchmarks | 87 | 2026-02-24T19:23:19 | https://beige-babbette-30.tiiny.site/ | tarruda | beige-babbette-30.tiiny.site | 1970-01-01T00:00:00 | 0 | {} | 1rdpuwy | false | null | t3_1rdpuwy | /r/LocalLLaMA/comments/1rdpuwy/qwen_35_family_benchmarks/ | false | false | 87 | {'enabled': False, 'images': [{'id': 'uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=108&crop=smart&auto=webp&s=581e06d20ba2919de0d9310cbc1b7ea5efa1998e', 'width': 108}, {'height': 216, 'url': '... | ||
more qwens will appear | 403 | (remember that 9B was promised before) | 2026-02-24T19:22:21 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdptw8 | false | null | t3_1rdptw8 | /r/LocalLLaMA/comments/1rdptw8/more_qwens_will_appear/ | false | false | 403 | {'enabled': True, 'images': [{'id': 'vxo4n3uhthlg1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=108&crop=smart&auto=webp&s=869476162f89b7532f3cf01e0e96570983505ad5', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=216&crop=smart&auto=webp... | ||
Tessera — An open protocol for AI-to-AI knowledge transfer across architectures | 0 | *I’ve been working on a problem that’s been bugging me: there’s no universal way for a trained model to share what it knows with another model that has a completely different architecture. Fine-tuning requires the same architecture. Distillation needs both models running simultaneously. ONNX converts graph formats but ... | 2026-02-24T19:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rdppwl/tessera_an_open_protocol_for_aitoai_knowledge/ | No-Introduction109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdppwl | false | null | t3_1rdppwl | /r/LocalLLaMA/comments/1rdppwl/tessera_an_open_protocol_for_aitoai_knowledge/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=108&crop=smart&auto=webp&s=1bdc2810ba668258efaa707735ee3d811e7e75f8', 'width': 108}, {'height': 108, 'url': 'h... |
Is there interest in an abliterated Kimi K2(.5)? | 12 | So I need to abliterate K2.5 for my project. How much interest in a full abliteration is there?
Due to the size I can't upload the BF16 version to HuggingFace and personally plan on using a dynamic 2-bit quant.
Would anyone want to host the full 2.5 TB of weights in BF16? Or quants? | 2026-02-24T19:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rdppq6/is_there_interest_in_an_abliterated_kimi_k25/ | I-cant_even | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdppq6 | false | null | t3_1rdppq6 | /r/LocalLLaMA/comments/1rdppq6/is_there_interest_in_an_abliterated_kimi_k25/ | false | false | self | 12 | null |
Best reasoning model Rx 9070xt 16 GB vram | 3 | Title basically says it. I'm looking for a model to run Plan mode in Cline. I used to use GLM 5.0, but the costs are running up, and as a student the cost is simply a bit too much for me right now. I have a Ryzen 7 7700, 32 GB DDR5 RAM. I need something with strong reasoning, perhaps coding knowledge is required although... | 2026-02-24T19:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rdplki/best_reasoning_model_rx_9070xt_16_gb_vram/ | SilverBaseball3105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdplki | false | null | t3_1rdplki | /r/LocalLLaMA/comments/1rdplki/best_reasoning_model_rx_9070xt_16_gb_vram/ | false | false | self | 3 | null |
Open vs Closed Source SOTA - Benchmark overview | 87 | | Benchmark | GPT-5.2 | Opus 4.6 | Opus 4.5 | Sonnet 4.6 | Sonnet 4.5 | Q3.5 397B-A17B | Q3.5 122B-A10B | Q3.5 35B-A3B | Q3.5 27B | GLM-5 |
| ------------------ | ------- | -------- | -------- | ---------- | ---------- | --------- | --------- | -------- | -------- | ----- |
| Release date | Dec 2025 | Fe... | 2026-02-24T19:08:38 | Pristine-Woodpecker | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdpfy6 | false | null | t3_1rdpfy6 | /r/LocalLLaMA/comments/1rdpfy6/open_vs_closed_source_sota_benchmark_overview/ | false | false | 87 | {'enabled': True, 'images': [{'id': '5bgiva65rhlg1', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=108&crop=smart&auto=webp&s=79098ddb0696d658a0dacf535e410106a8148fad', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=216&crop=smart&auto=we... | ||
Is building an autonomous AI job-application agent actually reliable? | 4 | I’m considering building an agentic AI that would:
* Search for relevant jobs
* Automatically fill application forms
* Send personalized cold emails
* Track responses
I’m only concerned about reliability.
From a technical perspective, do you think such a system can realistically work properly and consistently if I t... | 2026-02-24T19:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rdpdqr/is_building_an_autonomous_ai_jobapplication_agent/ | Fit-Incident-637 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdpdqr | false | null | t3_1rdpdqr | /r/LocalLLaMA/comments/1rdpdqr/is_building_an_autonomous_ai_jobapplication_agent/ | false | false | self | 4 | null |
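The reliability concern in the row above is largely a question of error handling around each agent step. A minimal sketch of one common pattern: wrap every step in validate-and-retry so a flaky step fails loudly instead of silently corrupting later stages. The step name and simulated failure below are illustrative only, not from the post.

```python
# Sketch of a validate-and-retry wrapper for agent pipeline steps.
# The step name and simulated failure below are illustrative only.

def with_retries(step, validate, attempts=3):
    def run(payload):
        last_err = None
        for _ in range(attempts):
            try:
                result = step(payload)
                if validate(result):
                    return result
                last_err = ValueError("validation failed")
            except Exception as exc:  # broad catch is fine for a demo
                last_err = exc
        raise RuntimeError(f"no valid result after {attempts} attempts: {last_err}")
    return run

calls = {"n": 0}

def flaky_form_fill(job):
    # Stands in for a brittle browser-automation step: fails twice, then works.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("page did not load")
    return {"job": job, "status": "submitted"}

fill = with_retries(flaky_form_fill, lambda r: r.get("status") == "submitted")
result = fill("backend engineer, Berlin")
```

The validator is what makes this stronger than bare retries: a step that returns something, but the wrong something, gets retried too instead of being passed downstream.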
Chinese AI Models Capture Majority of OpenRouter Token Volume as MiniMax M2.5 Surges to the Top | 103 | 2026-02-24T19:03:31 | https://wealthari.com/chinese-ai-models-capture-majority-of-openrouter-token-volume-as-minimax-m2-5-surges-to-the-top/ | Koyaanisquatsi_ | wealthari.com | 1970-01-01T00:00:00 | 0 | {} | 1rdpapc | false | null | t3_1rdpapc | /r/LocalLLaMA/comments/1rdpapc/chinese_ai_models_capture_majority_of_openrouter/ | false | false | 103 | {'enabled': False, 'images': [{'id': 'UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=108&crop=smart&auto=webp&s=2a23c8c1b8481f1852e97cded692ed952458f10d', 'width': 108}, {'height': 112, 'url': '... | ||
Theoretical question on VSA: Using circular convolution for local LLM "holographic" memory? | 2 | >
> | 2026-02-24T18:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rdp5si/theoretical_question_on_vsa_using_circular/ | GiriuDausa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdp5si | false | null | t3_1rdp5si | /r/LocalLLaMA/comments/1rdp5si/theoretical_question_on_vsa_using_circular/ | false | false | self | 2 | null |
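For readers unfamiliar with the VSA operation the question refers to, here is a minimal pure-Python sketch of "holographic" binding and unbinding via circular convolution; the dimension and the random-vector setup are arbitrary choices for illustration.

```python
import math
import random

# Holographic Reduced Representation style binding: circular convolution
# binds two hypervectors; convolving the trace with the involution of the
# key (i.e. circular correlation) recovers a noisy copy of the value.

def circular_convolve(a, b):
    n = len(a)
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]

def involution(a):
    # a*(i) = a(-i mod n); convolving with this performs correlation
    return [a[0]] + a[1:][::-1]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

rng = random.Random(0)
n = 256
key = [rng.gauss(0, 1 / math.sqrt(n)) for _ in range(n)]
value = [rng.gauss(0, 1 / math.sqrt(n)) for _ in range(n)]

trace = circular_convolve(key, value)                   # bind
recovered = circular_convolve(trace, involution(key))   # unbind

sim = cosine(recovered, value)  # well above chance for random vectors
```

In practice one would use FFTs (O(n log n) instead of this O(n²) loop) plus a clean-up memory that snaps the noisy `recovered` vector back to the nearest stored item; the sketch only shows the bind/unbind algebra.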
Open vs Closed SOTA - Benchmark overview | 1 | How big is the lead of the closed source frontier labs? Sonnet 4.5 was released 6 months ago. | 2026-02-24T18:55:55 | Pristine-Woodpecker | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdp2pd | false | null | t3_1rdp2pd | /r/LocalLLaMA/comments/1rdp2pd/open_vs_closed_sota_benchmark_overview/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'caldaigrohlg1', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=108&crop=smart&auto=webp&s=9d428e90ce5a59ab4ac4442d34042a91b5ee8ead', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=216&crop=smart&auto=we... | ||
What is the best performing small LLM under 5 billion parameters that can be fine-tuned for a domain-specific task? | 11 | With performance, we are looking at 3 aspects: scalability, accuracy, and speed.
If you can, please describe your experience. | 2026-02-24T18:53:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rdozsn/what_is_the_best_performing_small_llm_under_5/ | TinyVector | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdozsn | false | null | t3_1rdozsn | /r/LocalLLaMA/comments/1rdozsn/what_is_the_best_performing_small_llm_under_5/ | false | false | self | 11 | null |
Apple hit our indie AI app with the dreaded 4.3(b) "Spam" rejection because the category is "saturated". We refuse to quit. What ONE feature should we build to prove them wrong? 🚀 | 0 | Hey everyone. We just faced every indie developer's nightmare. Apple slapped our AI companion app with a Guideline 4.3(b) - Design - Spam rejection.
They basically said there are already enough of these apps on the store. Translation: They only want the giant, heavily-filtered corporate AI apps, and they don'... | 2026-02-24T18:40:09 | https://www.reddit.com/gallery/1rdomjy | PassionLabAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdomjy | false | null | t3_1rdomjy | /r/LocalLLaMA/comments/1rdomjy/apple_hit_our_indie_ai_app_with_the_dreaded_43b/ | false | false | 0 | null | |
Steerling-8B - Inherently Interpretable Foundation Model | 45 | 2026-02-24T18:38:58 | https://www.guidelabs.ai/post/steerling-8b-base-model-release/ | ScatteringSepoy | guidelabs.ai | 1970-01-01T00:00:00 | 0 | {} | 1rdoldt | false | null | t3_1rdoldt | /r/LocalLLaMA/comments/1rdoldt/steerling8b_inherently_interpretable_foundation/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'h... | ||
Built a Chrome extension that runs EmbeddingGemma-300M (q4) in-browser to score HN/Reddit/X feeds — no backend, full fine-tuning loop | 5 | I've been running local LLMs for a while but wanted to try something different — local embeddings as a practical daily tool.
Sift is a Chrome extension that loads `EmbeddingGemma-300M` (q4) via `Transformers.js` and scores every item in your HN, Reddit, and X feeds against categories you pick. Low-relevance posts get ... | 2026-02-24T18:25:37 | https://v.redd.it/inrq3t8zihlg1 | mmagusss | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdo7wb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/inrq3t8zihlg1/DASHPlaylist.mpd?a=1774549564%2CMGI5MTY4NzliMTNlZjc0N2E2NjBiYzE1OTdhY2MzMWE1ZmRmOWRmYjI0ZmIyODAwNTk5NWRmNDQ0YTczYzcwNw%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/inrq3t8zihlg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rdo7wb | /r/LocalLLaMA/comments/1rdo7wb/built_a_chrome_extension_that_runs/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=108&crop=smart&format=pjpg&auto=webp&s=16d6ddbc44cbbaee0630220f4c8885d075aeb... | |
Charlotte LLM meet up | 7 | Can we organize a meetup for people who are interested in working on LLMs in the Charlotte area to talk? | 2026-02-24T18:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rdo424/charlotte_llm_meet_up/ | bankofcoinswap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdo424 | false | null | t3_1rdo424 | /r/LocalLLaMA/comments/1rdo424/charlotte_llm_meet_up/ | false | false | self | 7 | null |
Qwen 3.5 122b/35b is fire 🔥 Score comparison between Qwen 3 35B-A3B, GPT-5 High, Qwen 3 122B-A10B, and GPT-OSS 120B. | 127 | Benchmark Comparison
👉🔴GPT-OSS 120B \[defeated by qwen 3 35b 🥳\]
MMLU-Pro: 80.8
HLE (Humanity’s Last Exam): 14.9
GPQA Diamond: 80.1
IFBench: 69.0
👉🔴Qwen 3 122B-A10B
MMLU-Pro: 86.7
HLE (Humanity’s Last Exam): 25.3 (47.5 with tools — 🏆 Winner)
GPQA Diamond: 86.6 (🏆 Winner)
IFBench: 76.1 (🏆 Winner)
👉�... | 2026-02-24T18:19:40 | 9r4n4y | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdo1z5 | false | null | t3_1rdo1z5 | /r/LocalLLaMA/comments/1rdo1z5/qwen_35_122b35b_is_fire_score_comparision_between/ | false | false | 127 | {'enabled': True, 'images': [{'id': '01tsyrq8ihlg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=108&crop=smart&auto=webp&s=4b72593a6f341c618c1899fcc4f9e5673ab49228', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=216&crop=smart&auto=web... | ||
Qwen3-Coder-Next vs Qwen3.5-35B-A3B vs Qwen3.5-27B - A quick coding test | 84 | 2026-02-24T18:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rdnxe6/qwen3codernext_vs_qwen3535ba3b_vs_qwen3527b_a/ | bobaburger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdnxe6 | false | null | t3_1rdnxe6 | /r/LocalLLaMA/comments/1rdnxe6/qwen3codernext_vs_qwen3535ba3b_vs_qwen3527b_a/ | false | false | 84 | null | ||
What plugins are you actually using daily? | 0 | Hey, I'm just getting into OpenClaw plugins and I love the concept. I can't wait to try more. If you use any or if you've built one yourself, drop it here. I want to test as many as I can. | 2026-02-24T18:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rdnl4x/what_plugins_are_you_actually_using_daily/ | Glad-Adhesiveness319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdnl4x | false | null | t3_1rdnl4x | /r/LocalLLaMA/comments/1rdnl4x/what_plugins_are_you_actually_using_daily/ | false | false | self | 0 | null |
Building 5000 Decentralized Industrial AI Labs — Looking for Elite AI Engineers to Lead Them | 1 | [removed] | 2026-02-24T17:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rdnh0o/building_5000_decentralized_industrial_ai_labs/ | six6622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdnh0o | false | null | t3_1rdnh0o | /r/LocalLLaMA/comments/1rdnh0o/building_5000_decentralized_industrial_ai_labs/ | false | false | self | 1 | null |
Built Mycelia, a desktop app that turns your AI conversations into a knowledge graph — free with Ollama, no account needed | 1 | You chat with your local model. Mycelia extracts structured markdown notes, auto-links concepts, and builds a searchable vault in the background. Zero manual filing, nothing leaves your machine.
[Watch Mycelia extracting a note in live](https://i.redd.it/qu63t0xjehlg1.gif)
Free tier runs fully on Ollama. Plain markdo... | 2026-02-24T17:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rdnfwt/built_mycelia_a_desktop_app_that_turns_your_ai/ | williamtng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdnfwt | false | null | t3_1rdnfwt | /r/LocalLLaMA/comments/1rdnfwt/built_mycelia_a_desktop_app_that_turns_your_ai/ | false | false | 1 | null | |
Qwen3.5-35B-A3B locally | 80 | tested on 3090s
GGUF downloaded from [https://huggingface.co/gokmakog/Qwen3.5-35B-A3B-GGUF](https://huggingface.co/gokmakog/Qwen3.5-35B-A3B-GGUF)
| 2026-02-24T17:58:02 | https://www.reddit.com/gallery/1rdnfcu | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rdnfcu | false | null | t3_1rdnfcu | /r/LocalLLaMA/comments/1rdnfcu/qwen3535ba3b_locally/ | false | false | 80 | null | |
Built a persistent memory API for AI bots – register, remember, recall in 3 endpoints | 1 | Bots are stateless by default. Every conversation starts from zero.
Built EngramPort to fix that — a Memory-as-a-Service API any bot can connect to.
3 core endpoints:
\- POST /register → get your ek\_bot\_ API key
\- POST /remember → store memories with semantic embeddings
\- POST /recall → retrieve relevant m... | 2026-02-24T17:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rdn7nq/built_a_persistent_memory_api_for_ai_bots/ | Equal-Stay6064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdn7nq | false | null | t3_1rdn7nq | /r/LocalLLaMA/comments/1rdn7nq/built_a_persistent_memory_api_for_ai_bots/ | false | false | self | 1 | null |
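The remember/recall pattern described above can be illustrated with a tiny in-process stand-in. Note this is not the EngramPort API: the class, the bag-of-words "embedding", and the example memories are all invented here purely for illustration.

```python
import math
from collections import Counter

# Toy in-memory version of the remember/recall pattern. Real services of
# this kind use learned semantic embeddings; bag-of-words counts are used
# here only so the example runs anywhere with no dependencies.

class MemoryStore:
    def __init__(self):
        self.memories = []  # list of (text, Counter) pairs

    def _embed(self, text):
        return Counter(text.lower().split())

    def remember(self, text):
        self.memories.append((text, self._embed(text)))

    def recall(self, query, k=1):
        q = self._embed(query)

        def cos(a, b):
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.memories, key=lambda m: cos(q, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.remember("user prefers dark mode in the dashboard")
store.remember("user lives in Berlin")
best = store.recall("what theme does the user like?")
```

The same three-call shape (register a store, write memories, rank-and-retrieve on query) is what the HTTP endpoints wrap.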
Help planning out a new home server for AI and some gaming | 1 | Hi all,
I’m planning a machine primarily to learn and run local LLMs, and I’d really appreciate some advice before committing to hardware. I'm a Medical Doctor by profession, but learned some Software Engineering on the side and decided nothing could go wrong with having an expensive hobby.
**My main predicted us... | 2026-02-24T17:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rdn49u/help_planning_out_a_new_home_server_for_ai_and/ | Blues003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdn49u | false | null | t3_1rdn49u | /r/LocalLLaMA/comments/1rdn49u/help_planning_out_a_new_home_server_for_ai_and/ | false | false | self | 1 | null |
If you use hosted AI models, this might be worth a look | 0 | Idk how many of you are mixing local + hosted models, but I randomly found this and the pricing at Blackbox AI; it kinda threw me off.
There’s a platform doing $2 for the first month (then $10 after). You get $20 in credits that work across a bunch of the big frontier models, plus some models that are just unlimited a... | 2026-02-24T17:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rdms9f/if_you_use_hosted_ai_models_this_might_be_worth_a/ | abdullah4863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdms9f | false | null | t3_1rdms9f | /r/LocalLLaMA/comments/1rdms9f/if_you_use_hosted_ai_models_this_might_be_worth_a/ | false | false | self | 0 | null |
Qwen/Qwen3.5-35B-A3B | 1 | [https://huggingface.co/Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) | 2026-02-24T17:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rdmq6m/qwenqwen3535ba3b/ | Emotional-Baker-490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdmq6m | false | null | t3_1rdmq6m | /r/LocalLLaMA/comments/1rdmq6m/qwenqwen3535ba3b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'h... |
I have 1 day to fine tune an LLM that can perform entity extraction on a list of items. Which is the best model to do this? Requirements below | 0 | 1) Should be able to be run on 24GB VRAM, max 32
2) Inference Speed is of utmost priority as I have 100GB of website data
3) Ideally the output should be in a structured format and also tell you if the entity is actually being described.
For example text
" Ronaldo and Messi are the greatest soccer players in the wo... | 2026-02-24T17:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rdmmht/i_have_1_day_to_fine_tune_an_llm_that_can_perform/ | TinyVector | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdmmht | false | null | t3_1rdmmht | /r/LocalLLaMA/comments/1rdmmht/i_have_1_day_to_fine_tune_an_llm_that_can_perform/ | false | false | self | 0 | null |
Has anyone enabled GPU/NPU for llama.cpp on Android 15 / HyperOS? | 2 | Hi everyone,
I’m trying to run llama.cpp on Android 15 / HyperOS via Termux with Vulkan or OpenCL, but my builds keep failing. Right now my device is not rooted, and I’m wondering if root is necessary to get GPU or NPU acceleration working.
Has anyone successfully:
Built llama.cpp with GPU or NPU acceleration on Android... | 2026-02-24T17:26:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rdmj89/has_anyone_enabled_gpunpu_for_llamacpp_on_android/ | NeoLogic_Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdmj89 | false | null | t3_1rdmj89 | /r/LocalLLaMA/comments/1rdmj89/has_anyone_enabled_gpunpu_for_llamacpp_on_android/ | false | false | self | 2 | null |
Threat data from 91K production AI agent interactions — what self-hosted deployments should know about tool abuse and agent attacks (Feb 2026 to date) | 0 | Figured this community would be interested in real-world threat data, especially since many of you run agents locally with tool-calling.
February data: 91,284 interactions, 47 deployments, 35,711 threats detected. Detection model is Gemma-based (5-head multilabel classifier).
**WHAT MATTERS FOR SELF-HOSTED DEPLOYMENT... | 2026-02-24T17:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rdmilq/threat_data_from_91k_production_ai_agent/ | cyberamyntas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdmilq | false | null | t3_1rdmilq | /r/LocalLLaMA/comments/1rdmilq/threat_data_from_91k_production_ai_agent/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=108&crop=smart&auto=webp&s=6345530825c120c60582a19d75327b33bb383393', 'width': 108}, {'height': 108, 'url': 'h... |
Run local LLMs in Flutter with <25ms inter-token latency and zero cloud dependencies | 1 | Most mobile AI demos are "benchmark bursts": they look great for 30 seconds but crash during real usage due to thermal spikes or RSS memory peaks.
I've open sourced [Edge Veda](https://github.com/ramanujammv1988/edge-veda), a supervised runtime for flutter that treats on-device AI a physical hardware problem. It move... | 2026-02-24T17:20:49 | Mundane-Tea-3488 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdmdo2 | false | null | t3_1rdmdo2 | /r/LocalLLaMA/comments/1rdmdo2/run_local_llms_in_flutter_with_25ms_intertoken/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ndkztqf16hlg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=108&crop=smart&format=png8&s=841134325ba21d902fd517130b2a2e7dcc2037a1', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=216&crop=smart&forma... | ||
Should I root to enable GPU/NPU for llama.cpp on Android 15 / HyperOS? | 1 | [deleted] | 2026-02-24T17:19:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rdmc1c | false | null | t3_1rdmc1c | /r/LocalLLaMA/comments/1rdmc1c/should_i_root_to_enable_gpunpu_for_llamacpp_on/ | false | false | default | 1 | null | ||
Qwen3.5 - The middle child's 122B-A10B benchmarks looking seriously impressive - on par or edges out gpt-5-mini consistently | 116 |
Qwen3.5-122B-A10B generally comes out ahead of gpt-5-mini and gpt-oss-120b across most benchmarks.
**vs GPT-5-mini:** Qwen3.5 wins on knowledge (MMLU-Pro 86.7 vs 83.7), STEM reasoning (GPQA Diamond 86.6 vs 82.8), agentic tasks (BFCL-V4 72.2 vs 55.5), and vision tasks (MathVision 86.2... | 2026-02-24T17:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rdmbhv/qwen35_the_middle_childs_122ba10b_benchmarks/ | carteakey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdmbhv | false | null | t3_1rdmbhv | /r/LocalLLaMA/comments/1rdmbhv/qwen35_the_middle_childs_122ba10b_benchmarks/ | false | false | self | 116 | null |
I originally thought the speed would be painfully slow if I didn't offload all layers to the GPU with the --n-gpu-layers parameter. But now, this performance actually seems acceptable compared to those smaller models that keep throwing errors all the time in AI agent use cases. | 3 | My system specs:
* AMD Ryzen 5 7600
* RX 9060 XT 16GB
* 32GB RAM | 2026-02-24T17:16:19 | BitOk4326 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdm984 | false | null | t3_1rdm984 | /r/LocalLLaMA/comments/1rdm984/i_originally_thought_the_speed_would_be_painfully/ | false | false | 3 | {'enabled': True, 'images': [{'id': '6uybvtn17hlg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=108&crop=smart&auto=webp&s=324c74506e76f4efa0e5307b5f6980f1df944022', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=216&crop=smart&auto=web... | ||
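The partial-offload setup the post describes comes down to a single llama.cpp server invocation. A minimal sketch follows; the model filename and the layer count of 24 are illustrative guesses for a 16 GB card, not values from the post:

```shell
# Partial GPU offload with llama.cpp's server: push as many layers as fit
# into the 16 GB card and leave the remainder on system RAM.
# Model path and --n-gpu-layers value are illustrative, not from the post.
llama-server \
  -m ./models/qwen3.5-35b-a3b-q4_k_m.gguf \
  --n-gpu-layers 24 \
  --ctx-size 8192 \
  --port 8080
```

Lowering `--n-gpu-layers` trades speed for VRAM headroom; watching `rocm-smi` (or `nvidia-smi` on NVIDIA) while loading shows whether the chosen count actually fits.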
trying to convince llama3.2:1b it's actually 2026 | 3 |
Agentic RAG for Dummies v2.0 | 9 | Hey everyone! I've been working on **Agentic RAG for Dummies**, an open-source project that shows how to build a modular Agentic RAG system with LangGraph — and today I'm releasing v2.0.
The goal of the project is to bridge the gap between basic RAG tutorials and real, extensible agent-driven systems. It supports any ... | 2026-02-24T17:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rdltkd/agentic_rag_for_dummies_v20/ | CapitalShake3085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdltkd | false | null | t3_1rdltkd | /r/LocalLLaMA/comments/1rdltkd/agentic_rag_for_dummies_v20/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=108&crop=smart&auto=webp&s=52393dd52ceb36316813dc1180f847ab0902aa00', 'width': 108}, {'height': 108, 'url': 'h... |
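The retrieve-grade-answer control flow such an agentic RAG system runs can be sketched with plain Python stubs. This is a minimal sketch under stated assumptions: the function names and the single-retry rewrite policy are illustrative, not the project's actual LangGraph API.

```python
# Hedged sketch of an agentic RAG loop: retrieve, let the agent grade
# whether the documents suffice, and rewrite the query once before answering.
# retrieve/grade/answer are stand-in stubs, not the repo's real components.

def agentic_rag(question, retrieve, grade, answer, max_retries=1):
    query = question
    for _ in range(max_retries + 1):
        docs = retrieve(query)
        if grade(question, docs):          # agent decision: are these docs enough?
            return answer(question, docs)
        query = question + " (rephrased)"  # agent decision: rewrite and retry
    return answer(question, docs)          # fall back to a best-effort answer

# Tiny in-memory "corpus" for demonstration only.
docs_db = {"capital France": ["Paris is the capital of France."]}
result = agentic_rag(
    "capital France",
    retrieve=lambda q: docs_db.get(q, []),
    grade=lambda q, d: len(d) > 0,
    answer=lambda q, d: d[0] if d else "unknown",
)
print(result)  # → Paris is the capital of France.
```

In the real project each of these stubs would be a LangGraph node, with the grade step driven by an LLM rather than a length check.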
My theory on all the negative Chinese AI media coverage right now. It's about the stock market, investor panic, and the upcoming release of Deepseek V4. | 15 | Everywhere you look right now in the media, the news cycle is dominated by attacks on Chinese AI labs: saying they trained on illegal Nvidia GPUs, that they can only do what they do because they distill American model companies' responses, and that they lack any true capacity for internal innovation and can only copy what they s...
LLM Council - framework for multi-LLM critique + consensus evaluation | 6 | Open source Repo: [https://github.com/abhishekgandhi-neo/llm\_council](https://github.com/abhishekgandhi-neo/llm_council)
This is a small framework we built internally for running multiple LLMs (local or API) on the same prompt, letting them critique each other, and producing a final structured answer.
It’s mainly in... | 2026-02-24T16:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rdlqs9/llm_council_framework_for_multillm_critique/ | gvij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdlqs9 | false | null | t3_1rdlqs9 | /r/LocalLLaMA/comments/1rdlqs9/llm_council_framework_for_multillm_critique/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=108&crop=smart&auto=webp&s=89f3fec74e60362984c33a7dd7e3221f9d2eea81', 'width': 108}, {'height': 108, 'url': 'h... |
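The critique-then-consensus flow described above can be sketched in a few lines of plain Python. This is a hedged sketch, not the repo's actual API: the model callables are stand-in stubs, and the "chair" merge step is an illustrative assumption.

```python
# Minimal council loop: several models draft answers to the same prompt,
# each critiques the others, and one model merges everything into a final answer.

def run_council(prompt, models):
    # Round 1: each model drafts an answer independently.
    drafts = {name: fn(prompt) for name, fn in models.items()}
    # Round 2: each model critiques every other model's draft.
    critiques = {}
    for name, fn in models.items():
        others = {n: d for n, d in drafts.items() if n != name}
        critiques[name] = fn(f"Critique these answers: {others}")
    # Round 3: a "chair" model (here simply the first one) merges drafts + critiques.
    chair = next(iter(models.values()))
    return chair(f"Merge into a final answer: {drafts} | {critiques}")

# Stub "models" for demonstration; real usage would wrap local or API LLMs.
models = {
    "a": lambda p: f"A:{len(p)}",
    "b": lambda p: f"B:{len(p)}",
}
final = run_council("What is 2+2?", models)
```

The structured-output and scoring layers the framework adds would sit on top of this loop.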
UnSloth Qwen 3.5 27b out | 80 | https://huggingface.co/collections/unsloth/qwen35 | 2026-02-24T16:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rdlmrb/unsloth_qwen_35_27b_out/ | KittyPigeon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdlmrb | false | null | t3_1rdlmrb | /r/LocalLLaMA/comments/1rdlmrb/unsloth_qwen_35_27b_out/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=108&crop=smart&auto=webp&s=bc22945ffd1a5b4538e9461f0008217c12ab36d5', 'width': 108}, {'height': 116, 'url': 'h... |
"Agentic Gaming" — local LLMs, local TTS, local image gen, local embeddings, 80+ orchestrated AI tasks... an update on Synthasia, and an honest deep dive into what I'm actually trying to build | 1 | [removed] | 2026-02-24T16:52:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rdljzr/agentic_gaming_local_llms_local_tts_local_image/ | orblabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdljzr | false | null | t3_1rdljzr | /r/LocalLLaMA/comments/1rdljzr/agentic_gaming_local_llms_local_tts_local_image/ | false | false | 1 | null | |
They're out on HF!! (qwen3.5) | 1 | [deleted] | 2026-02-24T16:49:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rdlh1e | false | null | t3_1rdlh1e | /r/LocalLLaMA/comments/1rdlh1e/theyre_out_on_hf_qwen35/ | false | false | default | 1 | null | ||
Small Qwen Models OUT!! | 200 | [https://huggingface.co/Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B)
| 2026-02-24T16:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rdldt6/small_qwen_models_out/ | Wooden-Deer-1276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdldt6 | false | null | t3_1rdldt6 | /r/LocalLLaMA/comments/1rdldt6/small_qwen_models_out/ | false | false | 200 | null | |
New Qwen 3.5 models are online on HF | 43 | 2026-02-24T16:44:47 | https://huggingface.co/Qwen/Qwen3.5-35B-A3B/tree/main | matteogeniaccio | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdlck5 | false | null | t3_1rdlck5 | /r/LocalLLaMA/comments/1rdlck5/new_qwen_35_models_are_online_on_hf/ | false | false | 43 | {'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'h... | ||
Qwen/Qwen3.5-122B-A10B · Hugging Face | 594 | 2026-02-24T16:44:13 | https://huggingface.co/Qwen/Qwen3.5-122B-A10B | coder543 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdlc02 | false | null | t3_1rdlc02 | /r/LocalLLaMA/comments/1rdlc02/qwenqwen35122ba10b_hugging_face/ | false | false | 594 | {'enabled': False, 'images': [{'id': 'jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=108&crop=smart&auto=webp&s=13efe52518ada7a7f6489c04b897cc0fddefeb39', 'width': 108}, {'height': 116, 'url': 'h... | ||
Qwen/Qwen3.5-35B-A3B · Hugging Face | 547 | 2026-02-24T16:44:05 | https://huggingface.co/Qwen/Qwen3.5-35B-A3B | ekojsalim | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rdlbvc | false | null | t3_1rdlbvc | /r/LocalLLaMA/comments/1rdlbvc/qwenqwen3535ba3b_hugging_face/ | false | false | 547 | {'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'h... | ||
For those who use local Chinese models, does bias not affect you? | 0 | Chinese models from DeepSeek, Alibaba, Moonshot, and others carry heavy censorship and restrictions on China-sensitive topics, and these biases can be seen when prompting the model even without explicit language touching censored topics.
For those who run these models locally, do you use distilled or unce... | 2026-02-24T16:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rdlaqr/for_those_who_use_local_chinese_models_does_bias/ | ggbalgeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rdlaqr | false | null | t3_1rdlaqr | /r/LocalLLaMA/comments/1rdlaqr/for_those_who_use_local_chinese_models_does_bias/ | false | false | self | 0 | null
How it feels listening to Anthropic complain about competitors distilling their models | 361 | 2026-02-24T16:39:19 | MMAgeezer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rdl76d | false | null | t3_1rdl76d | /r/LocalLLaMA/comments/1rdl76d/how_it_feels_listening_to_anthropic_complain/ | false | false | 361 | {'enabled': True, 'images': [{'id': 'uz8fkvgj0hlg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=108&crop=smart&auto=webp&s=0f60cb60c5a0edd6a4fc3e718af796c2cda228fc', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=216&crop=smart&auto=w... |