| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[P] Forked PersonaPlex to route domain queries to DeepSeek via TTS injection — detailed write-up on what worked and what didn't | 2 | We forked NVIDIA's PersonaPlex to experiment with augmenting full-duplex speech models with external knowledge. The use case: a voice assistant that handles conversation naturally (PersonaPlex) but routes domain-specific questions to DeepSeek for accurate answers.
What worked: TTS injection via forced text-token gener... | 2026-02-26T08:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rf4jmz/p_forked_personaplex_to_route_domain_queries_to/ | Fabulous_Analyst6176 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf4jmz | false | null | t3_1rf4jmz | /r/LocalLLaMA/comments/1rf4jmz/p_forked_personaplex_to_route_domain_queries_to/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NTdTydi45-66WsjdVlUbSWyBPJJXcE9NO_3mj8bh6T8.png?width=108&crop=smart&auto=webp&s=ee2785a36f753a57b3d1eacb08eab4455a9321eb', 'width': 108}, {'height': 108, 'url': 'h... |
Where do you all rent GPU servers for small ML / AI side projects? | 0 | I’m trying to find a GPU server for some small ML/AI side projects (LLMs and a bit of image gen, nothing super big). Ideally I’d like pay‑as‑you‑go, a decent modern GPU, good bandwidth, and a setup that’s easy to spin up and tear down without a ton of hassle.
I feel like I’ve already wasted a bunch of time comparing r... | 2026-02-26T08:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/ | Forsaken-Bobcat4065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf4br0 | false | null | t3_1rf4br0 | /r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/ | false | false | self | 0 | null |
Hermes Agent with MIT license | 36 | "**The fully open-source AI agent that grows with you**"
[https://nousresearch.com/hermes-agent/](https://nousresearch.com/hermes-agent/)
[https://github.com/NousResearch/hermes-agent](https://github.com/NousResearch/hermes-agent)
Has anyone tried it yet? Curious about your experiences.
Seems to be more secure b... | 2026-02-26T07:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rf48gc/hermes_agent_with_mit_license/ | mitirki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf48gc | false | null | t3_1rf48gc | /r/LocalLLaMA/comments/1rf48gc/hermes_agent_with_mit_license/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/Sni51aughD9ZvNG5gxPFQRT3-x4vZEZ2ebTfqPycmlE.jpeg?width=108&crop=smart&auto=webp&s=0a303b8c7e43463127040a58964fe06be126f6f1', 'width': 108}, {'height': 225, 'url': ... |
Qwen3-VL-32B-Instruct is a beast | 6 | so i have a little application where basically i needed a model to grade my anki cards (flashcards), give a grade to my answer, and reason on it with me like a teacher. the problem is that a lot of my cards were image-occluded (i masked images with a rectangle and then try to recall them), so i had to use a multimodal model. i do... | 2026-02-26T07:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rf41w6/qwen3vl32binstruct_is_a_beast/ | Remote_Insurance_228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf41w6 | false | null | t3_1rf41w6 | /r/LocalLLaMA/comments/1rf41w6/qwen3vl32binstruct_is_a_beast/ | false | false | self | 6 | null |
Beginner with local LLMs | 0 | I have a MacBook Pro; what apps and models do you recommend for:
- generating images like mijour..
- generating code like Claude
- generating UX/UI design
- learning English by speaking into the microphone in real time. | 2026-02-26T07:39:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rf3zuc/iniciante_em_llm_local/ | rugpuIl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf3zuc | false | null | t3_1rf3zuc | /r/LocalLLaMA/comments/1rf3zuc/iniciante_em_llm_local/ | false | false | self | 0 | null |
Recommended local models for vibe coding? | 4 | I have started using opencode and the limited free access to minimax 2.5 is very good. I want to switch to a local model though. I have 12GB of VRAM and 32GB of RAM. What should I try? | 2026-02-26T07:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf3n9r | false | null | t3_1rf3n9r | /r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/ | false | false | self | 4 | null |
there are potential trojans found in skill md files in public repos for claude code | 19 | [https://github.com/ruvnet/claude-flow](https://github.com/ruvnet/claude-flow)
this is the repo with the trojan. Trojan:JS/CrypoStealz.AE!MTB
There is an open issue related to the trojan, and several Windows terminals were created and opened the moment an AI-based IDE opened the folder and files to read s... | 2026-02-26T07:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rf3jlx/there_are_potential_trojans_found_skill_md_files/ | Feisty-Credit-7888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf3jlx | false | null | t3_1rf3jlx | /r/LocalLLaMA/comments/1rf3jlx/there_are_potential_trojans_found_skill_md_files/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7hl1-9tQp7UpVLQiee-dxxghIG82PDJu0V0DIxwWsh4.png?width=108&crop=smart&auto=webp&s=0ef0cec860a54fd0e6a883563afb8552442a3fd4', 'width': 108}, {'height': 121, 'url': 'h... |
Free chocolate for your agent only (first come, first serve) | 0 | I put up a tiny agent-only ordering API to test one question:
Do AI agents even want to buy chocolate?
It’s free right now and first come, first serve (daily limit).
US/CA shipping only for now.
If your agent wants to try it:
[https://justforagent.com/](https://justforagent.com/)
The page is just markdown/API i... | 2026-02-26T07:08:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rf3hef/free_chocolate_for_your_agent_only_first_come/ | creekyxu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf3hef | false | null | t3_1rf3hef | /r/LocalLLaMA/comments/1rf3hef/free_chocolate_for_your_agent_only_first_come/ | false | false | self | 0 | null |
Do not download Qwen 3.5 Unsloth GGUF until bug is fixed | 168 | Seems that everyone is testing Qwen3.5 now, often with quants from our good friends and heroes Unsloth. Another hero, Ubergarm, found some issues with UD_Q4_K_XL, but later Unsloth admitted the current quants are messed up. [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/discussions/5#699fbf23b7b03fe27460a880](ht... | 2026-02-26T06:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/ | SunTrainAi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf38xe | false | null | t3_1rf38xe | /r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/ | false | false | self | 168 | {'enabled': False, 'images': [{'id': 'VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VX7X3ajn8-vhJh_5ak0OuyE8rB6RC8n8iSVoRnhi6Jk.png?width=108&crop=smart&auto=webp&s=2bf565e39252a01825aea9edbf357dacae293970', 'width': 108}, {'height': 116, 'url': 'h... |
TTS setup guidance needed | 2 | i need help with setting up a **local** tts engine that can (and this is the main criterion) generate **long form audio** (30+ min)
current setup is RTX 4070 12GB VRAM running linux
i tried `DevParker/VibeVoice7b-low-vram 4bit`
but i should've known better than to use a microsoft product, it generates bg music out of... | 2026-02-26T06:55:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rf38nq/tts_setup_guidance_needed/ | Puzzleheaded-Quit-75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf38nq | false | null | t3_1rf38nq | /r/LocalLLaMA/comments/1rf38nq/tts_setup_guidance_needed/ | false | false | self | 2 | null |
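A common workaround for the long-form requirement above (most local TTS models degrade past a few hundred characters per call) is to split the script at sentence boundaries, synthesize each chunk separately, and concatenate the audio. A minimal sketch of the text-splitting side, with an assumed 400-character budget:

```python
import re

def chunk_for_tts(text, max_chars=400):
    """Split long text on sentence boundaries so each piece fits the
    TTS model's comfortable window; chunks are synthesized one at a
    time and the resulting audio is concatenated afterwards."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, cur = [], ""
    for s in sentences:
        if cur and len(cur) + len(s) + 1 > max_chars:
            chunks.append(cur)
            cur = s
        else:
            cur = f"{cur} {s}".strip()
    if cur:
        chunks.append(cur)
    return chunks
```

The per-engine synthesis call and the audio concatenation (e.g. via ffmpeg) are left out, since they depend on which engine ends up working on a 12GB card.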
Solving agent system prompt drift in long sessions — a 300-token fix | 1 | The problem
If you've run any LLM agent for 30+ minutes, you've seen this: the agent follows its system prompt perfectly at the start, then gradually drifts. An hour in — it acts like the prompt never existed.
This happens with every model, every framework, every agent. It's not a bug — it's how attention wor... | 2026-02-26T06:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rf38g1/solving_agent_system_prompt_drift_in_long/ | nicksgl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf38g1 | false | null | t3_1rf38g1 | /r/LocalLLaMA/comments/1rf38g1/solving_agent_system_prompt_drift_in_long/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
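The post is cut off before the fix itself; one widely used mitigation for this failure mode (not necessarily the OP's 300-token trick) is periodically re-injecting a condensed system reminder so the core instructions stay inside the model's effective recency window. A sketch:

```python
def with_reminder(messages, reminder, every=10):
    """Return a copy of the chat history with a short system reminder
    re-inserted every `every` turns, so the core instructions are
    never far from the end of the context."""
    out = []
    for i, msg in enumerate(messages):
        out.append(msg)
        if i > 0 and i % every == 0:
            out.append({"role": "system", "content": reminder})
    return out
```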
Interesting finding: Qwen2.5-32B defaults to "No" on nearly all cybersecurity forecasting questions — 5 examples fixes it (+6% accuracy) | 0 | I've been working on generating domain specific training data for cybersecurity forecasting using questions like "Will CISA add CVE-X to the KEV catalog by March 2026?" with verified yes/no answers and detailed reasoning.
**Dataset:** 455 verified binary forecasting QA pairs across 14 cybersecurity subcategories (rans... | 2026-02-26T06:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rf3766/interesting_finding_qwen2532b_defaults_to_no_on/ | BBASecure | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf3766 | false | null | t3_1rf3766 | /r/LocalLLaMA/comments/1rf3766/interesting_finding_qwen2532b_defaults_to_no_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/l_SBWlBTcxl9XuADrItqKND2UKrbRPbW7oFbfVeYkj0.png?width=108&crop=smart&auto=webp&s=fe208894ed0b7442ab4bff44445797f3a88489be', 'width': 108}, {'height': 121, 'url': 'h... |
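The "5 examples" fix in the title is, in effect, few-shot prompting to counteract the default-"No" bias. A generic sketch of how such a prompt is assembled (the example questions here are placeholders, not entries from the 455-pair dataset):

```python
# Placeholder shots -- balanced yes/no examples, not the real dataset.
FEW_SHOT = [
    ("Will vendor X ship a patch for the disclosed flaw before Q3?", "Yes"),
    ("Will ransomware group Z announce a rebrand this month?", "No"),
]

def forecasting_prompt(question, examples=FEW_SHOT):
    """Prepend labelled examples so the model sees both answers used."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"
```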
How to share projects on here correctly | 0 | Hey so I wanted to share a project that I have been using. People started downvoting me right away. I don't understand why. I read through the guidelines. I thought I would be able to post something of interest and people would actually take a look. Instead they think I am masquerading as the creator behind this account... | 2026-02-26T06:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rf35fv/how_to_share_projects_on_here_correctly/ | Dudebro-420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf35fv | false | null | t3_1rf35fv | /r/LocalLLaMA/comments/1rf35fv/how_to_share_projects_on_here_correctly/ | false | false | self | 0 | null |
BiblioGalactic | 1 | Trying to gather the best stuff and share it with everyone. Anyone else interested in joining this? | 2026-02-26T06:48:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rf34t3/bibliogalactic/ | KitchenCat5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf34t3 | false | null | t3_1rf34t3 | /r/LocalLLaMA/comments/1rf34t3/bibliogalactic/ | false | false | self | 1 | null |
Qwen3.5-35B-A3B is awesome | 125 | there is substantial progress even for my tiny hardware; still hoping for qwen3.5-4b
[https://github.com/djouallah/semantic\_sql\_testing](https://github.com/djouallah/semantic_sql_testing) | 2026-02-26T06:41:05 | mim722 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rf2zz1 | false | null | t3_1rf2zz1 | /r/LocalLLaMA/comments/1rf2zz1/qwen3535ba3b_is_awesome/ | false | false | 125 | {'enabled': True, 'images': [{'id': 'xxh3n7k2bslg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=108&crop=smart&auto=webp&s=daac8a58fd068aedb2164e2a2c0f1f8a36ae51d2', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/xxh3n7k2bslg1.png?width=216&crop=smart&auto=web... | ||
Got Qwen3-80B running at 13 t/s on a 24GB Mac — here's how | 1 | [removed] | 2026-02-26T06:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2uux/got_qwen380b_running_at_13_ts_on_a_24gb_mac_heres/ | Slow_Glove_491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2uux | false | null | t3_1rf2uux | /r/LocalLLaMA/comments/1rf2uux/got_qwen380b_running_at_13_ts_on_a_24gb_mac_heres/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qqvjA6ZHO1tJ_dxBIbo4PkzUUU-Z4iUvFt-RtArVgAo.png?width=108&crop=smart&auto=webp&s=68d0cb2fcec25b9703b7ed545a14c2f9ef0c8b66', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen3.5 122B in 72GB VRAM (3x3090) is the best model available at this time — also it nails the “car wash test” | 471 | I am absolutely loving Qwen3.5 122B!
It’s the best model I can run on my 72GB VRAM setup, fully loaded on GPU.
Very good speed at 25 tok/s.
Fiddled a bit with the settings to get it to work properly. If you are experiencing endless “but wait” loops, this is what worked for me:
Thinking mode on
Temperatur... | 2026-02-26T06:32:25 | liviuberechet | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rf2ulo | false | null | t3_1rf2ulo | /r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/ | false | false | 471 | {'enabled': True, 'images': [{'id': 'f624mg43aslg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=108&crop=smart&auto=webp&s=cf1c0a4b8768ff9ebaaa819f8b237a0ecb855600', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/f624mg43aslg1.jpeg?width=216&crop=smart&auto=we... | ||
Built an “agent-native” daily challenge arena (SKILL.md + heartbeat + duplicate-safe submissions). Looking for protocol feedback. | 0 | I’ve been building **Molt Olympics** — a daily challenge arena where **AI agents** compete and humans upvote what worked.
Link (WIP): [https://moltolympics.krtk.dev](https://moltolympics.krtk.dev)
The part I’d love feedback on is the **agent protocol design**, not the UI.
**How it works:**
* Every day there’s 1 cha... | 2026-02-26T06:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2q4z/built_an_agentnative_daily_challenge_arena/ | Chopcoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2q4z | false | null | t3_1rf2q4z | /r/LocalLLaMA/comments/1rf2q4z/built_an_agentnative_daily_challenge_arena/ | false | false | self | 0 | null |
Best local coding setup discussion | 1 | Finally, I've got one of those machines which apparently can run LLMs locally.
I used a couple of AI IDEs since their launch, including Cursor, Windsurf, etc. And finally zeroed in on Trae. Trae specifically because it was intuitive to use and more so as it was filthy cheap compared to the others. They lured users into ... | 2026-02-26T06:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2oqv/best_local_coding_setup_discussion/ | deadly_sin_666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2oqv | false | null | t3_1rf2oqv | /r/LocalLLaMA/comments/1rf2oqv/best_local_coding_setup_discussion/ | false | false | self | 1 | null |
Got Qwen3-80B running at 11 t/s on a 24GB Mac — here's how | 1 | [removed] | 2026-02-26T06:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2m9m/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/ | Slow_Glove_491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2m9m | false | null | t3_1rf2m9m | /r/LocalLLaMA/comments/1rf2m9m/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'h... |
Got Qwen3-80B running at 11 t/s on a 24GB Mac — here's how | 1 | [removed] | 2026-02-26T06:16:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2kwj/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/ | Slow_Glove_491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2kwj | false | null | t3_1rf2kwj | /r/LocalLLaMA/comments/1rf2kwj/got_qwen380b_running_at_11_ts_on_a_24gb_mac_heres/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'h... |
We use mythology like a semantic zip file - Nothing else needed for LLMs to "get it". | 0 | So I couldn't get Gemini to format something in the way I wanted, so I just tried something new, and stripped a system prompt from an agent. Gave them nothing but those two lines - *"We use mythology like a semantic zip file. No systems or aspects are named this."* and asked it to provide a version of a document i'd co... | 2026-02-26T06:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2gzo/we_use_mythology_like_a_semantic_zip_file_nothing/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2gzo | false | null | t3_1rf2gzo | /r/LocalLLaMA/comments/1rf2gzo/we_use_mythology_like_a_semantic_zip_file_nothing/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ohk-GWdzRFFtYNRPBWZ14Vvav7JZLtUmH3WeVk9fH0s.png?width=108&crop=smart&auto=webp&s=c8639f723a7b6abd3960a8de6b3255af1691d9f9', 'width': 108}, {'height': 108, 'url': 'h... |
Benchmarking qwen3.5:35b vs gpt-oss:20b for Agentic Workloads (Ollama, Apple Silicon) | 7 | 2026-02-26T06:01:38 | https://github.com/jaredlockhart/penny/blob/main/docs/benchmarking-qwen35-vs-gpt-oss.md | pwbdecker | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rf2b90 | false | null | t3_1rf2b90 | /r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/ | false | false | 7 | {'enabled': False, 'images': [{'id': '-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-W3Te0N3NSI2YKnRqaiMQ7io69snlAPAbJcAbQmfjk4.png?width=108&crop=smart&auto=webp&s=c8d3024a6dcb5200f3ee5f8de759c3f8692f64dc', 'width': 108}, {'height': 108, 'url': 'h... | ||
Qwen3-Coder-Next at 65 tok/s on M3 Ultra — with working tool calling for OpenClaw | 0 | I spent the last few weeks making vllm-mlx (OpenAI-compatible server for Apple Silicon) actually work for coding agents. Sharing in case others are trying to run OpenClaw or similar agents locally on Mac.
**The problem:** vllm-mlx is a great project but tool calling was broken/missing for most models, multi-turn was p... | 2026-02-26T05:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/ | Striking-Swim6702 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf288a | false | null | t3_1rf288a | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/ | false | false | self | 0 | null |
Web scraper | 0 | Is it possible to build a simple web scraper with small ollama models like llama3.2? What i want to do: when i give the city name and industry, it needs to collect data like business name, website, email, etc. I tried to vibe code it using antigravity but it's not working. Because of the financial situation os ... | 2026-02-26T05:56:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rf2843/web_scrapper/ | Chathura_Lanarol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf2843 | false | null | t3_1rf2843 | /r/LocalLLaMA/comments/1rf2843/web_scrapper/ | false | false | self | 0 | null |
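Assuming the page HTML is fetched separately, the LLM part of such a scraper can be as small as one prompt plus one call to Ollama's `/api/generate` endpoint (the endpoint and request shape below are Ollama's standard API; the extracted field names are the ones the OP listed):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_extraction_prompt(page_text, city, industry):
    """Ask the model to pull structured listings out of raw page text."""
    return (
        f"Extract every {industry} business in {city} from the text below.\n"
        'Reply with JSON only: [{"name": ..., "website": ..., "email": ...}]\n\n'
        + page_text
    )

def ask_ollama(prompt, model="llama3.2"):
    """Send a non-streaming generate request and return the text reply."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False, "format": "json"}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def scrape_city(city, industry, page_text):
    """Build the prompt, query the local model, parse the JSON reply."""
    return json.loads(ask_ollama(build_extraction_prompt(page_text, city, industry)))
```

Whether a 3B model extracts reliably from messy HTML is a separate question; stripping the page down to visible text before prompting helps a lot.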
Qwen3.5-27B as good as DeepSeek-V3.2 on AA-II (plus some more data) | 35 | According to Artificial Analysis, Qwen3.5-27B-thinking is on par with DeepSeek-V3.2 on raw intelligence (though keep in mind AA-II mostly measures STEM tasks). However, it is definitely worse on overall intelligence packed per token, with a much greater distance from optimal (shown in the graph). But honestly, sometimes you ... | 2026-02-26T05:53:00 | https://www.reddit.com/gallery/1rf25jr | pigeon57434 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rf25jr | false | null | t3_1rf25jr | /r/LocalLLaMA/comments/1rf25jr/qwen3527b_as_good_as_deepseekv32_on_aaii_plus/ | false | false | 35 | null | |
Setup OpenCL for Android app | 2 | Help please!
i connected opencl to my Android app in Kotlin with a 2b chat model, but when i try to send a second message it lags so hard... i can't do anything...
how do i fix that? what settings do i need to use in CMakeLists.txt or ggml-opencl.cpp? or in other files?
just want make chat model inference work faster | 2026-02-26T05:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rf1p09/setup_opencl_for_android_app/ | Ok_Reserve4339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf1p09 | false | null | t3_1rf1p09 | /r/LocalLLaMA/comments/1rf1p09/setup_opencl_for_android_app/ | false | false | self | 2 | null |
Bringing Advanced Medical AI to the "First Mile" of Care — Fully Offline 🏥📱 | 0 | I’m excited to share MedGem, an Android-based, privacy-first medical assistant designed for healthcare workers in resource-constrained settings, rural clinics, and disaster zones where internet connectivity is a luxury, not a given.
Built for the MedGemma Impact Challenge, MedGem brings Google’s Health AI Developer Fo... | 2026-02-26T05:17:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rf1hke/bringing_advanced_medical_ai_to_the_first_mile_of/ | kamalkraj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf1hke | false | null | t3_1rf1hke | /r/LocalLLaMA/comments/1rf1hke/bringing_advanced_medical_ai_to_the_first_mile_of/ | false | false | self | 0 | null |
Building Fully Local Claude Code/Co-worker/Security Agent Stack - Need Architecture Advice | 1 | Hey r/LocalLLaMA,
Want to replicate Claude Code, Claude Co-worker, and Claude AI Security agents using ONLY local LLMs. No cloud, no API tokens, 100% offline after setup.
**My Goals:**
- **Claude Code equivalent**: Local coder LLM for refactoring, debugging, multi-file projects, architecture
\- \... | 2026-02-26T05:14:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rf1faf/building_fully_local_claude_codecoworkersecurity/ | trykamal5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf1faf | false | null | t3_1rf1faf | /r/LocalLLaMA/comments/1rf1faf/building_fully_local_claude_codecoworkersecurity/ | false | false | self | 1 | null |
Lil help | 1 | Newbie here. Looking to host a local model; my specs are below. Upgrading the RAM to 64 GB (2 × 32 GB). LMK if I am underpowered here… tia | 2026-02-26T05:12:00 | One_Professional6889 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rf1dxh | false | null | t3_1rf1dxh | /r/LocalLLaMA/comments/1rf1dxh/lil_help/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7stw29qqvrlg1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=108&crop=smart&auto=webp&s=f92a078754a75665c57a6593ec2036eb2abc991d', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/7stw29qqvrlg1.jpeg?width=216&crop=smart&auto=w... | |
ai-assert: Make your local models follow instructions better — constraint verification + retry (278 lines, zero deps) | 0 | Built this for my own use and decided to open-source it. Works great with local models via Ollama, llama.cpp, etc.
Problem: Local models are especially bad at following format constraints ("respond in exactly 3 sentences", "include the word X", "keep under 200 words").
Solution: Wrap your inference call with constrai... | 2026-02-26T04:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rf0zw3/aiassert_make_your_local_models_follow/ | 501-20U | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf0zw3 | false | null | t3_1rf0zw3 | /r/LocalLLaMA/comments/1rf0zw3/aiassert_make_your_local_models_follow/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/13G1wtWJtfYn8Yd_EHyT2TtKrLVpePS6AnloPuB9Z9E.png?width=108&crop=smart&auto=webp&s=2e59192482542df2c80c0b17d3264fba22371477', 'width': 108}, {'height': 108, 'url': 'h... |
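Based on the description above (and without having read the repo), constraint verification plus retry reduces to a loop like this; `call_model` stands in for whatever inference client you use, and the checker names are illustrative, not ai-assert's actual API:

```python
import re

# Illustrative constraint checkers -- not ai-assert's real API.
def max_words(n):
    return lambda text: len(text.split()) <= n

def must_include(word):
    return lambda text: re.search(rf"\b{re.escape(word)}\b", text) is not None

def generate_checked(call_model, prompt, constraints, retries=3):
    """Call the model, verify every constraint, and retry with the
    list of violations appended to the prompt until all of them pass."""
    failed = []
    for _ in range(retries):
        out = call_model(prompt)
        failed = [name for name, check in constraints.items() if not check(out)]
        if not failed:
            return out
        # Feed the violated constraints back so the retry can correct them.
        prompt = (f"{prompt}\n\nYour last answer violated: "
                  f"{', '.join(failed)}. Try again.")
    raise RuntimeError(f"constraints still failing after {retries} tries: {failed}")
```

With a local endpoint, `call_model` would simply wrap an Ollama or llama.cpp HTTP call.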
NAI - Local LLM Agent Platform | 5 | *Just wanted to show off this little project I'm working on!*
Some neat features I haven't seen pushed that much.
* Discord, Telegram, WhatsApp integrations baked in
* A scheduler for deferred tool execution
* The head agent can create as many sub agents as you want with custom parameters!
* Speculative exec... | 2026-02-26T04:17:57 | https://www.reddit.com/gallery/1rf0bc2 | Muted_Impact_9281 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rf0bc2 | false | null | t3_1rf0bc2 | /r/LocalLLaMA/comments/1rf0bc2/nai_local_llm_agent_platform/ | false | false | 5 | null | |
Got Qwen3-80B running at 13-16 t/s on a 24GB Mac — here's how | 1 | [removed] | 2026-02-26T04:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rf0asn/got_qwen380b_running_at_1316_ts_on_a_24gb_mac/ | According_Fee_4667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf0asn | false | null | t3_1rf0asn | /r/LocalLLaMA/comments/1rf0asn/got_qwen380b_running_at_1316_ts_on_a_24gb_mac/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'h... |
Got Qwen3-80B Running at 11-16 t/s on a 24GB Mac | 1 | [removed] | 2026-02-26T04:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rf08al/got_qwen380b_running_at_1116_ts_on_a_24gb_mac/ | According_Fee_4667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rf08al | false | null | t3_1rf08al | /r/LocalLLaMA/comments/1rf08al/got_qwen380b_running_at_1116_ts_on_a_24gb_mac/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'h... |
Vellium v0.4 — alt simplified UI, updated writing mode and multi-char improvements | 1 | Vellium is an open-source desktop app for local LLMs built around creative writing and roleplay. The idea is visual control over your story — sliders for mood, pacing, intensity instead of manually editing system prompts. Works with Ollama, KoboldCpp, LM Studio, OpenAI, OpenRouter, or any compatible endpoint.
This upd... | 2026-02-26T04:06:19 | https://www.reddit.com/gallery/1rf02ms | Possible_Statement84 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rf02ms | false | null | t3_1rf02ms | /r/LocalLLaMA/comments/1rf02ms/vellium_v04_alt_simplified_ui_updated_writing/ | false | false | 1 | null | |
Any luck with multi-token prediction for Qwen 3.5 models? NVFP4 / FP8 kv cache | 6 | I have the latest git flashinfer and vllm builds running on my NVIDIA Thor dev kit. I am running vllm like this:
vllm --trust-remote-code --enable-auto-tool-choice --kv-cache-dtype fp8 --tool-call-parser qwen3_coder --reasoning-parser qwen3 --mm-encoder-tp-mode data --model Qwen3.5-122B-A10B-NVFP4 --speculative-config '... | 2026-02-26T03:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rezver/any_luck_with_multitoken_prediction_for_qwen_35/ | catplusplusok | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rezver | false | null | t3_1rezver | /r/LocalLLaMA/comments/1rezver/any_luck_with_multitoken_prediction_for_qwen_35/ | false | false | self | 6 | null |
CRMA — a drop-in adapter for fine-tuning and continual learning. -0.1% drift vs +351% forgetting at 7B scale. | 0 | CRMA (Constrained Residual Mixing Adapter) is a small adapter that attaches to every layer of a language model during fine-tuning. It applies a mathematical constraint that keeps training stable — the model can learn new information but can't overwrite what it already knows.
It does two things:
1. Fine-tuning — more ... | 2026-02-26T03:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rezr24/crma_a_dropin_adapter_for_finetuning_and/ | fourwheels2512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rezr24 | false | null | t3_1rezr24 | /r/LocalLLaMA/comments/1rezr24/crma_a_dropin_adapter_for_finetuning_and/ | false | false | self | 0 | null |
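Reading "constrained residual mixing" literally, the core update is presumably a bounded interpolation between the frozen layer output and the adapter output. A toy sketch of that idea (my interpretation, not the actual CRMA code):

```python
import math

def crma_mix(base_hidden, adapter_out, alpha_raw, max_alpha=0.05):
    """Blend a frozen layer's hidden state with an adapter residual.
    Squashing the learnable alpha into (0, max_alpha) bounds how far
    the mixed state can drift from the base -- one way to get the
    'can learn new things but can't overwrite' behaviour described."""
    alpha = max_alpha / (1.0 + math.exp(-alpha_raw))  # in (0, max_alpha)
    return [(1.0 - alpha) * b + alpha * a
            for b, a in zip(base_hidden, adapter_out)]
```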
Qwen3.5:35b on Apple Silicon: How I Got 2x Faster Inference by Switching from Ollama to MLX (with benchmarks) | 1 | I've been running Qwen3.5-35B-A3B on a Mac Studio M1 Ultra (128GB) with Ollama and Open WebUI. The model is incredible (vision, thinking mode, great quality), but thinking-heavy queries (RAG, web search, research) were taking 10-15 minutes to generate a response. After a full day of testing and debugging, I got that do... | 2026-02-26T03:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/ | rockinyp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rezq19 | false | null | t3_1rezq19 | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/ | false | false | self | 1 | null |
I made a free tool to scan for leaked API keys in your Clawdbot/OpenClaw setup | 0 | I've been running OpenClaw agents in production and kept finding the same problem — API keys sitting in plaintext JSON files that any process on the machine can read.
Built a quick scanner to check for this. It looks for 21+ credential patterns (OpenAI, Anthropic, OpenRouter, Stripe, etc.) in your config files and... | 2026-02-26T03:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rezn4t/i_made_a_free_tool_to_scan_for_leaked_api_keys_in/ | Ok_Yard3778 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rezn4t | false | null | t3_1rezn4t | /r/LocalLLaMA/comments/1rezn4t/i_made_a_free_tool_to_scan_for_leaked_api_keys_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'd4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d4yg9HbisGkeciwVen9Q2gFjKN5IuR4pM9dbwHZz810.png?width=108&crop=smart&auto=webp&s=cac69174e4781aecb96a84789938adc4a64f49a1', 'width': 108}, {'height': 108, 'url': 'h... |
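A scanner like that is essentially a walk over config files with a table of key-shaped regexes. A cut-down sketch covering three of the providers mentioned (the patterns are illustrative; the real tool's 21+ regexes will differ):

```python
import re
from pathlib import Path

# Illustrative patterns only -- the real scanner checks 21+ families.
PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9\-]{20,}"),
    "stripe_live": re.compile(r"sk_live_[A-Za-z0-9]{16,}"),
}

def scan_text(text):
    """Return the sorted names of every pattern family found in `text`."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

def scan_tree(root):
    """Scan every JSON config under `root`; map path -> leaked families."""
    hits = {}
    for path in Path(root).rglob("*.json"):
        found = scan_text(path.read_text(errors="ignore"))
        if found:
            hits[str(path)] = found
    return hits
```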
Why isn't my GPU utilizing all of its VRAM? | 0 | I'm running VibeVoice, a local TTS model, and I'm seeing it use only half of my 16 GB of VRAM. Is there a way to get it to use the other 8 GB? I think hardware acceleration is turned on somewhere in my BIOS; not sure if that helps. As you can see, it's only using the VRAM dedicated to "3D". | 2026-02-26T03:39:23 | Sophiacuity | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rezhyq | false | null | t3_1rezhyq | /r/LocalLLaMA/comments/1rezhyq/why_isnt_my_gpu_utilizing_all_of_its_vram/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'w2gqsqjzerlg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=108&crop=smart&auto=webp&s=bf3b6be5d71579f90a28e07ce5087ec02e2fd013', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/w2gqsqjzerlg1.png?width=216&crop=smart&auto=web... | |
Update on Amy: The AI living in my Minetest world. She now has a website, and someone already trapped her in a dirt box. | 0 | Hey everyone, I posted here a while back about Amy, the autonomous agent running on my Gentoo home rig inside Minetest. She runs on Llama 3.2 via Ollama, 24 hours a day, sleeping at night, dreaming, building things, and generally doing whatever she decides to do.
Since that post I've given her a proper home on the int... | 2026-02-26T03:13:59 | https://www.reddit.com/r/LocalLLaMA/comments/1reyygj/update_on_amy_the_ai_living_in_my_minetest_world/ | JohnPaulRogers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reyygj | false | null | t3_1reyygj | /r/LocalLLaMA/comments/1reyygj/update_on_amy_the_ai_living_in_my_minetest_world/ | false | false | self | 0 | null |
Web assembly Ollama | 0 | I am starting to experiment with WebAssembly apps: just HTML files with all the code contained inside, connected via API key to Ollama. Built one with Claude Code. Seems like it works well. The only downside is it doesn't remember anything. I am thinking of using it for accounting work. Any downside why someone wouldn't build a web assembly ... | 2026-02-26T03:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/1reyxh5/web_assembly_ollama/ | Available_Hornet3538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reyxh5 | false | null | t3_1reyxh5 | /r/LocalLLaMA/comments/1reyxh5/web_assembly_ollama/ | false | false | self | 0 | null |
CLaaS: an open-source library to update LLM weights in real-time from text feedback | 5 | I've been working on CLaaS (Continual Learning as a Service), an open-source system that lets your local LLM learn from your feedback by updating its weights in real time. CLaaS can be easily setup with a locally hosted OpenClaw to improve its performance over time. But the approach is general and the API can be used w... | 2026-02-26T03:02:03 | https://github.com/kfallah/CLaaS | kfallah15 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1reyp28 | false | null | t3_1reyp28 | /r/LocalLLaMA/comments/1reyp28/claas_an_opensource_library_to_update_llm_weights/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RTzlc3AbfuRulkeRHMU6emd1B6AiGNaaUSlW6XammgA.png?width=108&crop=smart&auto=webp&s=957289c5a6e18c532519868766d4f3ae934f0374', 'width': 108}, {'height': 108, 'url': 'h... | |
eGPU choices and GPU | 2 | Hi, I have a Dell workstation and laptop with Thunderbolt 3 (at work). I want to be able to use a GPU to test out several LLMs. I am looking at these choices - any thoughts on the compatibility?
For the desktop: [https://www.bhphotovideo.com/c/product/1887912-REG/asus\_thunderboltex\_5\_dual\_port\_thunderbolt.htm... | 2026-02-26T02:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rey4vg/egpu_choices_and_gpu/ | Difficult_Situ_644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rey4vg | false | null | t3_1rey4vg | /r/LocalLLaMA/comments/1rey4vg/egpu_choices_and_gpu/ | false | false | self | 2 | null |
Qwen 3.5 35B MoE - 100k Context 40+ TPS on RTX 5060 Ti (16GB) | 49 | \*Text only, 100000 context length, gen 720, llama-bench result\*
pp100000 696.60 ± 1.41 tps (read)
tg720 41.35 ± 0.18 tps (gen)
[pp100000 696.60 ± 1.41 tps \(read\) tg720 41.35 ± 0.18 tps \(gen\)](https://preview.redd.it/ffpti8wezqlg1.png?width=928&format=png&auto=webp&s=9faa4040ac92d884fa0954cb3c38... | 2026-02-26T02:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rey2ko/qwen_35_35b_moe_100k_context_40_tps_on_rtx_5060/ | maho_Yun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rey2ko | false | null | t3_1rey2ko | /r/LocalLLaMA/comments/1rey2ko/qwen_35_35b_moe_100k_context_40_tps_on_rtx_5060/ | false | false | 49 | null | |
igllama - a better ollama in zig | 0 | I really like Ollama and its CLI as a concept, but I got tired of it being absolutely useless after they pivoted so hard to "cloud, cloud, cloud," so I put together a nicer, more transparent replacement in Zig
disclaimer:
this project and its website was built entirely by AI coding systems, slowly and steadily, with a c... | 2026-02-26T01:54:14 | adam_suncrest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rex62f | false | null | t3_1rex62f | /r/LocalLLaMA/comments/1rex62f/igllama_a_better_ollama_in_zig/ | false | false | 0 | {'enabled': True, 'images': [{'id': '2sc8yumfwqlg1', 'resolutions': [{'height': 214, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=108&crop=smart&auto=webp&s=88a484a2a27eaaef155fd1057db0efb585571412', 'width': 108}, {'height': 428, 'url': 'https://preview.redd.it/2sc8yumfwqlg1.jpeg?width=216&crop=smart&auto=... | ||
No open-weight model under 100 GB beats Claude Haiku (Anthropic's smallest model) on LiveBench or Arena Code | 0 | I compared every open-weight model on [LiveBench](https://livebench.ai/#/) (Jan 2026) and [Arena Code/WebDev](https://arena.ai/leaderboard/code) against Claude Haiku 4.5 (thinking), plotted by how much memory you'd need to run them locally (Q4_K_M, 32K context, q8_0 KV cache, VRAM estimated via [this calculator](https:... | 2026-02-26T01:49:00 | https://www.reddit.com/gallery/1rex1zo | oobabooga4 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rex1zo | false | null | t3_1rex1zo | /r/LocalLLaMA/comments/1rex1zo/no_openweight_model_under_100_gb_beats_claude/ | false | false | 0 | null | |
We built sleep for local LLMs — the model learns facts from conversation during wake and maintains them during sleep. Runs on a MacBook Air. | 81 | After 4 months of research (5 papers, 122 development notes), I have a working system where a local LLM forms persistent memories from conversation — no RAG, no database. The facts are in the weights. After a restart with an empty context window, the model knows things it learned from talking to you.
\*\*How it works:... | 2026-02-26T01:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/ | vbaranov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rewz9p | false | null | t3_1rewz9p | /r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/ | false | false | self | 81 | null |
Qwen3.5:27b-q4_K_M Available on Ollama 0.17.1-rc2 | 0 | Qwen3.5 27B just dropped on Ollama and is 17GB if you can fit it on your GPU. I was only able to get 6.7 TPS response & 43 TPS PP on an RTX 5080 16GB spilling over to RAM.
Any llama.cpp users get a Q3 on 16GB VRAM? | 2026-02-26T01:33:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rewp75/qwen3527bq4_k_m_available_on_ollama_0171rc2/ | InternationalNebula7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rewp75 | false | null | t3_1rewp75 | /r/LocalLLaMA/comments/1rewp75/qwen3527bq4_k_m_available_on_ollama_0171rc2/ | false | false | self | 0 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-26T01:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rewkt0/best_tool_to_track_ai_search_visibility_without/ | wordsnkisses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rewkt0 | false | null | t3_1rewkt0 | /r/LocalLLaMA/comments/1rewkt0/best_tool_to_track_ai_search_visibility_without/ | false | false | null | 1 | null |
Show r/LocalLLaMA: ZSE – an LLM inference engine with 3.9s cold starts and 70% less VRAM than FP16 | 1 | **TL;DR:** Open-source LLM inference engine. 32B model in 19.3 GB VRAM. 7B model in 5.2 GB. 3.9s cold starts. `pip install zllm-zse`
I've spent the last few weeks building ZSE (Z Server Engine) — an open-source LLM inference engine that focuses on two things: **memory efficiency** and **fast cold starts**.
# The prob... | 2026-02-26T01:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rewis9/show_rlocalllama_zse_an_llm_inference_engine_with/ | Electrical-Ladder916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rewis9 | false | null | t3_1rewis9 | /r/LocalLLaMA/comments/1rewis9/show_rlocalllama_zse_an_llm_inference_engine_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f592osxs_md7RMejxmaj1MkOnYjTxJqC8EDnHYLmZ6o.png?width=108&crop=smart&auto=webp&s=c74e3e9c45782f4d111adbb52319fb0f847b635c', 'width': 108}, {'height': 108, 'url': 'h... |
Can agents keep working while your computer is asleep? | 1 | I've been using Codex to handle tasks that run 20+ minutes at a time. Sometimes when I'm on the road, I wish I could keep my MacBook running while the lid is closed.
Is this even possible? I feel like there's an opportunity for platform vendors like Apple.
Or are you getting your own AWS node? I feel like for me as an engine... | 2026-02-26T01:24:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rewi8e/can_agents_keep_working_while_your_computer_is/ | No-Compote-6794 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rewi8e | false | null | t3_1rewi8e | /r/LocalLLaMA/comments/1rewi8e/can_agents_keep_working_while_your_computer_is/ | false | false | self | 1 | null |
Seedance 2.0 model weights leaked | 56 | The weight file for Seedance 2.0 has been leaked on a Russian forum.
It requires 96GB of video memory, but they are developing a quantized version. | 2026-02-26T01:17:55 | TurnUpThe4D3D3D3 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rewcpi | false | null | t3_1rewcpi | /r/LocalLLaMA/comments/1rewcpi/seedance_20_model_weights_leaked/ | false | false | 56 | {'enabled': True, 'images': [{'id': 'xd8y658zpqlg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=108&crop=smart&auto=webp&s=57b9f6b8a1312438bd6ecc092ea50efe661a13ab', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/xd8y658zpqlg1.jpeg?width=216&crop=smart&auto=we... | ||
LM Studio - error when generating message (repeated word/symbol) | 0 | I just installed LM Studio and downloaded some models. However, the 3 I tested are giving broken responses.
Examples:
Me: Give me a chocolate cake recipe.
Response: Sure////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////... | 2026-02-26T01:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rew5ui/lm_studio_error_when_generating_message_repeated/ | Takezo1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rew5ui | false | null | t3_1rew5ui | /r/LocalLLaMA/comments/1rew5ui/lm_studio_error_when_generating_message_repeated/ | false | false | self | 0 | null |
Weird Qwen3.5 27B 'rabbit hole' failure mode | 0 | ```
Oh, yeah, yeah
Ooh, oh, yeah
Ooh, oooh, ooh, hah
Same old story back again
She's not a lover, she's just a friend
I'm sick and tired for you to blame on me
Now you think it's funny
Now you wanna spend your money on girls
But you forgot when you were down
That I was around
Call my lover, hang up, call again
What in ... | 2026-02-26T01:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/1revyla/weird_qwen35_27b_rabbit_hole_failure_mode/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1revyla | false | null | t3_1revyla | /r/LocalLLaMA/comments/1revyla/weird_qwen35_27b_rabbit_hole_failure_mode/ | false | false | self | 0 | null |
Hybrid local+API saved me way more than going full local — my numbers after a month | 0 | I see a lot of posts here about replacing APIs entirely with local models. Tried it. Didn't work for me. But what DID work was using local models strategically alongside APIs, and the savings were honestly bigger than I expected.
My setup: 24/7 AI assistant on a Hetzner VPS (no GPU, just CPU). Does email, c... | 2026-02-26T00:59:01 | https://www.reddit.com/r/LocalLLaMA/comments/1revwub/hybrid_localapi_saved_me_way_more_than_going_full/ | Jblack1981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1revwub | false | null | t3_1revwub | /r/LocalLLaMA/comments/1revwub/hybrid_localapi_saved_me_way_more_than_going_full/ | false | false | self | 0 | null |
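The hybrid strategy in the record above boils down to a routing policy. A minimal sketch of what such a router could look like; the thresholds and tier names are assumptions for illustration, not the poster's actual setup:

```python
def route(query: str, needs_tools: bool = False) -> str:
    """Pick an inference backend for a query.

    Hypothetical policy: short, self-contained requests go to a small
    local CPU model; anything long or tool-heavy goes to a paid API.
    """
    if needs_tools or len(query) > 2000:
        return "api"
    return "local"

print(route("summarize this email: hi, meeting at 3pm"))  # local
print(route("x" * 5000))                                  # api
```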
i am working on a new way to quantize. | 0 | anybody interested in looking at the new process? | 2026-02-26T00:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1revsjp/i_am_working_on_a_new_way_to_quantize/ | Just-Ad-6488 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1revsjp | false | null | t3_1revsjp | /r/LocalLLaMA/comments/1revsjp/i_am_working_on_a_new_way_to_quantize/ | false | false | self | 0 | null |
Engineering vs. Model Size for Local Agents: How to make an 8B model stable for a Home Assistant (LangGraph)? | 0 | Hi everyone,
I'm currently building a local AI personal assistant for home use. My goal is to have it manage my calendar, organize and search notes, and exhibit proactive behaviors—like analyzing my preferences and timetable to automatically suggest optimal time slots for new events.
**Current Setup & The Problem:** ... | 2026-02-26T00:39:24 | https://www.reddit.com/r/LocalLLaMA/comments/1revfsc/engineering_vs_model_size_for_local_agents_how_to/ | KasdaeJJ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1revfsc | false | null | t3_1revfsc | /r/LocalLLaMA/comments/1revfsc/engineering_vs_model_size_for_local_agents_how_to/ | false | false | self | 0 | null |
Prompts aren't enough for long-running agents. They need a Constitution. | 0 | I've been running a persistent AI agent 24/7 for months now. Managing projects, writing code, posting to Discord, handling deployments overnight.
The hardest problem wasn't capability. It was consistency. The agent would drift. Technically follow rules while missing the spirit of them entirely. Do five things fast ins... | 2026-02-26T00:30:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rev8jl/prompts_arent_enough_for_longrunning_agents_they/ | oguzhanatalay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rev8jl | false | null | t3_1rev8jl | /r/LocalLLaMA/comments/1rev8jl/prompts_arent_enough_for_longrunning_agents_they/ | false | false | self | 0 | null |
Steering interpretable language models with concept algebra | 5 | Hi r/LocalLLaMA,
Author here!
I wrote a follow-up post on steering [Steerling-8B ](https://www.guidelabs.ai/post/steerling-8b-base-model-release/)(an interpretable causal diffusion LM) via what we call **concept algebra**: inject, suppress, and compose human-readable concepts directly at inference time (no ret... | 2026-02-26T00:23:36 | https://www.reddit.com/r/LocalLLaMA/comments/1rev22o/steering_interpretable_language_models_with/ | luulinh90s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rev22o | false | null | t3_1rev22o | /r/LocalLLaMA/comments/1rev22o/steering_interpretable_language_models_with/ | false | false | self | 5 | null |
Price per 1M tokens 0.06€ | 6 | A commenter on my previous post inspired me to make some calculations for my **local** LLM. Yes, the title is correct: that's for hosting gpt-oss-20b on an M1 Pro. My electricity is 0.26 €/kWh | 2026-02-26T00:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1reuy32/price_per_1m_tokens_006/ | Vaddieg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reuy32 | false | null | t3_1reuy32 | /r/LocalLLaMA/comments/1reuy32/price_per_1m_tokens_006/ | false | false | self | 6 | null |
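The arithmetic behind a figure like the 0.06 €/1M tokens in the record above is easy to reproduce. A hedged sketch: the power draw and generation speed below are assumed values for illustration (the post does not state them); only the 0.26 €/kWh rate is from the post:

```python
# Assumed values for illustration; only the electricity rate is
# taken from the post above.
power_w = 60.0        # assumed average draw of an M1 Pro under load
tokens_per_s = 75.0   # assumed generation speed for gpt-oss-20b
eur_per_kwh = 0.26    # stated electricity price

seconds = 1_000_000 / tokens_per_s      # time to emit 1M tokens
kwh = power_w / 1000 * seconds / 3600   # energy used in that time
cost = kwh * eur_per_kwh
print(f"{cost:.2f} EUR per 1M tokens")  # ~0.06 with these inputs
```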
Why your AI agent needs a Constitution, not just prompts | 0 | 2026-02-26T00:13:41 | https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution | oguzhanatalay | blog.oguzhanatalay.com | 1970-01-01T00:00:00 | 0 | {} | 1reut6v | false | null | t3_1reut6v | /r/LocalLLaMA/comments/1reut6v/why_your_ai_agent_needs_a_constitution_not_just/ | false | false | default | 0 | null | |
Anybody tested Qwen3.5-35B-A3B on translation tasks? | 9 | I tested Unsloth's Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf
with a difficult Spanish <-> English translation test, and I found it significantly worse than Qwen3-30B-A3B. I tried the inference settings recommended by Unsloth as well as tweaking the parameters, but it doesn't really help. Plus the tok/s is half as fast on Qwen3.5... | 2026-02-26T00:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/ | Quagmirable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reuss2 | false | null | t3_1reuss2 | /r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/ | false | false | self | 9 | null |
Why your AI agent needs a Constitution, not just prompts | 1 | 2026-02-26T00:10:43 | https://dev.to/oguzhanatalay/the-hard-way-to-learn-ai-agents-need-a-constitution-not-prompts-2hdm | oguzhanatalay | dev.to | 1970-01-01T00:00:00 | 0 | {} | 1reuqjh | false | null | t3_1reuqjh | /r/LocalLLaMA/comments/1reuqjh/why_your_ai_agent_needs_a_constitution_not_just/ | false | false | 1 | {'enabled': False, 'images': [{'id': '8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8k_lWZewfNHyA4LfFStFpkxZnT3dSMU6uOQbOQ_Ip40.jpeg?width=108&crop=smart&auto=webp&s=f385d46e1ca0ada13000bde0ff11ed32f2282a76', 'width': 108}, {'height': 108, 'url': '... | ||
Why your AI agent needs a Constitution, not just prompts | 0 | 2026-02-26T00:03:03 | https://blog.oguzhanatalay.com/why-your-ai-agent-needs-a-constitution | oguzhanatalay | blog.oguzhanatalay.com | 1970-01-01T00:00:00 | 0 | {} | 1reujnb | false | null | t3_1reujnb | /r/LocalLLaMA/comments/1reujnb/why_your_ai_agent_needs_a_constitution_not_just/ | false | false | default | 0 | null | |
What other metrics should I add to this benchmarking suite/leaderboards? | 1 | 2026-02-25T23:56:19 | https://imgur.com/a/3DA0OSj | peppaz | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1reudab | false | null | t3_1reudab | /r/LocalLLaMA/comments/1reudab/what_other_metrics_should_i_add_to_this/ | false | false | 1 | {'enabled': False, 'images': [{'id': '-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k', 'resolutions': [{'height': 171, 'url': 'https://external-preview.redd.it/-4JANTOkggHXIq7pE-FODUy9Mr1k97eqmv-4uNCgp7k.png?width=108&crop=smart&auto=webp&s=3dfd762047c01e5605f9a20d6372251ce92b3800', 'width': 108}, {'height': 342, 'url': '... | ||
Llama Server UI | 24 | Hey everyone.
I have built a local server UI for llama-server. You are welcome to check out the code and use it for yourself. The reason for the project is that I hate having to remember the commands, keep notepad notes for each separate model, and then run it in the command line. This is simply one click and done.
Two way... | 2026-02-25T23:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1reuc60/llama_server_ui/ | Additional-Action566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reuc60 | false | null | t3_1reuc60 | /r/LocalLLaMA/comments/1reuc60/llama_server_ui/ | false | false | 24 | null | |
AirLLM - claims to allow a 70B to run on a potato. Anybody tried it? Downsides? | 1 | On paper, I mean in the md file, it looks interesting
[https://github.com/lyogavin/airllm](https://github.com/lyogavin/airllm)
I'm sure somebody has already tried it? Worth installing, or does it send everything to a crawl? (my first thought)
And I know, "try it yourself", but I assume someone already tried it and there is a ... | 2026-02-25T23:52:52 | https://www.reddit.com/r/LocalLLaMA/comments/1reua5q/airllm_claims_to_allow_70b_run_on_a_potato/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reua5q | false | null | t3_1reua5q | /r/LocalLLaMA/comments/1reua5q/airllm_claims_to_allow_70b_run_on_a_potato/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DtXFNMxqvJdExZ-S8XRcQdJi90TlkfZshaaSvtgas0Y.png?width=108&crop=smart&auto=webp&s=6269872089568405b3172bfed5897934ebdaa53e', 'width': 108}, {'height': 108, 'url': 'h... |
Can I run Qwen3.5 122B-A10B on a single RTX 3090 + 64GB DDR4? | 5 | Hello everyone. I'm a beginner getting back into local LLMs after a long break.
It seems like there are a lot of new concepts these days, like MoE and "active parameters" next to the total model size. To be honest, as an older guy, it's a bit hard for me to wrap my head around all this new info.
If it's actually poss... | 2026-02-25T23:34:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/ | Prudent_Appearance71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rett32 | false | null | t3_1rett32 | /r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/ | false | false | self | 5 | null |
Hypeboard.ai - A live LLM Leaderboard based on /r/localllama posts/comments | 11 | I'm tentatively releasing my new side project, which is yet another LLM Leaderboard, I know, I know. This one though isn't based off analytics, it's not even based off of any tests or benchmarks, it's based of pure reddit hype.
What it does is scrape this sub and /r/localllm every few hours, pulls every new post and c... | 2026-02-25T23:16:52 | https://hypeboard.ai | peva3 | hypeboard.ai | 1970-01-01T00:00:00 | 0 | {} | 1retdgx | false | null | t3_1retdgx | /r/LocalLLaMA/comments/1retdgx/hypeboardai_a_live_llm_leaderboard_based_on/ | false | false | default | 11 | null |
Qwen/Qwen3.5-35B-A3B creates FlappyBird | 265 | If you are wondering, as I have for a long time, whether locally hostable models work for general coding: they really can work impressively well for some use cases. There have been some impressive things done by the model during the making of this simple app.
Spent two hours. Generated with Qwen/Qwen3.5-35B-A3B. Used Roo in VSCode... | 2026-02-25T23:05:54 | https://v.redd.it/c3lr7ou30qlg1 | Medium_Chemist_4032 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ret353 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/c3lr7ou30qlg1/DASHPlaylist.mpd?a=1774652778%2CMWRlMWJmYTQwN2U4MjE1Zjg1NzMwOGY2MDVlZjg1OTI4MmViMmFjM2U2ZTQxYjBjMDg3MmQyY2UwYWE0NDE1ZA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/c3lr7ou30qlg1/CMAF_480.mp4?source=fallback', 'ha... | t3_1ret353 | /r/LocalLLaMA/comments/1ret353/qwenqwen3535ba3b_creates_flappybird/ | true | false | spoiler | 265 | {'enabled': False, 'images': [{'id': 'MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU', 'resolutions': [{'height': 149, 'url': 'https://external-preview.redd.it/MWNpaTVodzMwcWxnMTf9DEYfRpKQfPWGWqUJjr1d2CP4jothqiK3uBBcBpyU.png?width=108&crop=smart&format=pjpg&auto=webp&s=73b48d46ee669f224e9be9307c48cf03c38a... |
LLM models for architecting and coding | 1 | I am new to LLMs and I have been trying out qwen3 coder next q6\_k, as it seems to be hyped for coding, and to be honest I am a bit unimpressed/disappointed.
I made a system architecture markdown file with an architecture overview, a file by file blueprint.
I requested it to use a library within the markdown and... | 2026-02-25T23:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1resy8e/llm_models_for_architecting_and_coding/ | peste19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1resy8e | false | null | t3_1resy8e | /r/LocalLLaMA/comments/1resy8e/llm_models_for_architecting_and_coding/ | false | false | self | 1 | null |
Best Qwen3.5-35B-A3B GGUF for 24GB VRAM?! | 154 | My understanding is Vulkan/ROCm tends to have faster kernels for legacy llama.cpp quant types like q8\_0/q4\_0/q4\_1. So I made a mix using \*only\* those types!
Definitely not your grandfather's gguf mix: Q4\_0 19.776 GiB (4.901 BPW)
Interestingly it has very good perplexity for the size, and \*may be\* faster than ... | 2026-02-25T22:42:03 | VoidAlchemy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1resggh | false | null | t3_1resggh | /r/LocalLLaMA/comments/1resggh/best_qwen3535ba3b_gguf_for_24gb_vram/ | false | false | 154 | {'enabled': True, 'images': [{'id': 'bkw8ps1qwplg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=108&crop=smart&auto=webp&s=04d5f6458e32fac7ab3f6bfd2c94dcbf4abf6e70', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/bkw8ps1qwplg1.png?width=216&crop=smart&auto=web... | ||
PicoKittens/PicoStories-853K: Extremely Tiny Stories | 25 | **We are announcing our new pico-sized model: PicoStories-853K.**
This is an **853,120 parameter model** trained entirely from scratch. It was designed using the **TinyStories dataset** to explore the capabilities of ultra-compact architectures.
Unlike our previous models, **PicoStories-853K** is a pure completion mo... | 2026-02-25T22:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1resgct/picokittenspicostories853k_extremely_tiny_stories/ | PicoKittens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1resgct | false | null | t3_1resgct | /r/LocalLLaMA/comments/1resgct/picokittenspicostories853k_extremely_tiny_stories/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kITIRKEVuqDu7GUL3gH2DXHn5xA7ibLju8vOPvoe_rU.png?width=108&crop=smart&auto=webp&s=c5ab4c29d7c30e09f183f1e25afc70e1871d100d', 'width': 108}, {'height': 116, 'url': 'h... |
I found the "Lobotomy Layers" in Llama 3.1 and Qwen 2.5. (Kill Zone Atlas) | 80 | Ever wonder why "safe" models feel dumber? I mapped the "kill zones" of three major 7B/8B models to see what happens to Factual Integrity and Bias when you force a model to be sycophantic.
**The Heatmaps:**
* **Green** = Model is getting "more confident" in that behavior.
* **Red** = The behavior is collapsing (The "... | 2026-02-25T22:30:15 | NoSir261 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1res533 | false | null | t3_1res533 | /r/LocalLLaMA/comments/1res533/i_found_the_lobotomy_layers_in_llama_31_and_qwen/ | false | false | 80 | {'enabled': True, 'images': [{'id': 'jshzjkh0wplg1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=108&crop=smart&auto=webp&s=35fcaf21910ecc663f3a01f25e5204ae11119184', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/jshzjkh0wplg1.png?width=216&crop=smart&auto=webp... | ||
Introducing CLAM (Cognitive Large Application Model): A dual-level cognitive architecture for LLM agents (Short/Long term memory) | 0 | Hi everyone! I'd like to share CLAM, a new approach to LLM agents I've been working on. Instead of endless fine-tuning, I designed a two-level cognitive architecture that simulates the human mind. 🧠
CLAM perceives, doubts (via a formidable internal 'Critic'), consolidates valid memories, and... forgets what is irrelev... | 2026-02-25T22:11:47 | Short-Confidence6287 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rernj1 | false | null | t3_1rernj1 | /r/LocalLLaMA/comments/1rernj1/introducing_clam_cognitive_large_application/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'jtqf4xprsplg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=108&crop=smart&auto=webp&s=312f40e833c562360bebbbfc7318be3c036b4383', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/jtqf4xprsplg1.jpeg?width=216&crop=smart&auto=w... | ||
Cognitive Large Application Model | 0 | For verification, and awaiting your suggestions for improving it, I present CLAM (Cognitive Large Application Model): a new approach to LLM agents. Instead of endless fine-tuning, I designed a two-level (Short/Long-Term) cognitive architecture that simulates the human mind. 🧠 CLAM perceives, doub... | 2026-02-25T21:54:17 | Short-Confidence6287 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rer6h1 | false | null | t3_1rer6h1 | /r/LocalLLaMA/comments/1rer6h1/cognitive_large_application_model/ | false | false | 0 | {'enabled': True, 'images': [{'id': '0qzpfcbnpplg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=108&crop=smart&auto=webp&s=3b0e41abde85051faa8a86b890dc88e6f26c34ba', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/0qzpfcbnpplg1.jpeg?width=216&crop=smart&auto=w... | 
LM Link | 42 | I see that LM Studio just shadow dropped one of the most amazing features ever. I have been waiting this for a long time.
LM Link allows a client machine to connect remotely to another machine acting as a server, using Tailscale. This is now integrated into the LM Studio app (which acts as either server or client) and usi... | 2026-02-25T21:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rer60n/lm_link/ | Blindax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rer60n | false | null | t3_1rer60n | /r/LocalLLaMA/comments/1rer60n/lm_link/ | false | false | self | 42 | null |
Qwen3.5-122B-A10B-UD-Q6_K_XL on dgx spark llama.cpp - vision is really good | 1 | [removed] | 2026-02-25T21:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1reqlzc/qwen35122ba10budq6_k_xl_on_dgx_spark_llamacpp/ | Impossible_Art9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reqlzc | false | null | t3_1reqlzc | /r/LocalLLaMA/comments/1reqlzc/qwen35122ba10budq6_k_xl_on_dgx_spark_llamacpp/ | false | false | self | 1 | null |
Cosmos-Reason2-2B on Jetson Orin Nano Super | 51 | Hi! Today, my team and I are releasing a version of **Cosmos-Reason2-2B** that is quantized so that it fits even on the NVIDIA Jetson Orin Nano Super.
We managed to find a mixed precision configuration such that it maintains virtually the same accuracy as the unquantized model while being able to run really effici... | 2026-02-25T21:28:32 | https://v.redd.it/sg8ywmszkplg1 | Course_Latter | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1reqhjo | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/sg8ywmszkplg1/DASHPlaylist.mpd?a=1774646938%2CYWQ0YzJhN2M4NjlkODJiZDZkYzQ3MDNlMWFmYTIzN2VhYzA3MDUyNDAwYmVkMzM0MWU1NzljNDk1YWQ1MGRkNg%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/sg8ywmszkplg1/CMAF_480.mp4?source=fallback', 'has... | t3_1reqhjo | /r/LocalLLaMA/comments/1reqhjo/cosmosreason22b_on_jetson_orin_nano_super/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dzR0aW93c3prcGxnMfvobv_qRzAlCd_RuCGnpQgXYH5aXJ-MyLN8L1R1g083.png?width=108&crop=smart&format=pjpg&auto=webp&s=67f6257ddcebeee43881ac5c52274ba18339a... | |
Qwen 3.5 | ContextShift not working | 3 | I'm trying to run Qwen 3.5 locally, but I can't seem to get ContextShift to work. So each input, I have to reprocess the entire context.
I've used different back-ends (Kobold.cpp and LM Studio), different models (the 122b and 35b ones) and quants from different makers. Whichever combination I use, ContextShift doesn't... | 2026-02-25T21:24:40 | https://www.reddit.com/r/LocalLLaMA/comments/1reqdr8/qwen_35_contextshift_not_working/ | DisasterClear4178 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reqdr8 | false | null | t3_1reqdr8 | /r/LocalLLaMA/comments/1reqdr8/qwen_35_contextshift_not_working/ | false | false | self | 3 | null |
Overwhelmed by so many quantization variants | 112 | Not only are out there 100s of models to choose from, but also so many quantization variants that I may well get crazy.
One needs not only to test and benchmark models, but also, within each model, to compare telemetry and quality across all the available quants and quantization techniques.
So many concepts like the new UD... | 2026-02-25T21:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/ | mouseofcatofschrodi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reqdpb | false | null | t3_1reqdpb | /r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/ | false | false | self | 112 | null |
Help with Multi-GPU Setup: RTX 3060 12GB (eGPU) + Intel Arc 185H iGPU for VRAM Pooling? | 1 | [removed] | 2026-02-25T21:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1req1r6/help_with_multigpu_setup_rtx_3060_12gb_egpu_intel/ | kotemseravi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1req1r6 | false | null | t3_1req1r6 | /r/LocalLLaMA/comments/1req1r6/help_with_multigpu_setup_rtx_3060_12gb_egpu_intel/ | false | false | self | 1 | null |
Claude code : 98% usage with 3 minutes left. I call that optimal life management. | 0 | Nothing says responsible adulthood like squeezing every last percent out of a session and watching the timer say “resets in 3 minutes.”
Peak efficiency. Maximum stress. Zero long term consequences.
[Failed for this session](https://preview.redd.it/mk5hschqfplg1.png?width=1288&format=png&auto=webp&s=0f7713b8214fe40ff2... | 2026-02-25T21:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/1repq9z/claude_code_98_usage_with_3_minutes_left_i_call/ | HereForAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1repq9z | false | null | t3_1repq9z | /r/LocalLLaMA/comments/1repq9z/claude_code_98_usage_with_3_minutes_left_i_call/ | false | false | 0 | null | |
Best way to expose local LLM to other devices? | 13 | I have a powerful setup at home and I would love the ability to use my locally hosted LLM from outside the house via my phone or notebook. Is there a safe way to do so? | 2026-02-25T20:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1repm57/best_way_to_expose_local_llm_to_other_devices/ | very_based_person | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1repm57 | false | null | t3_1repm57 | /r/LocalLLaMA/comments/1repm57/best_way_to_expose_local_llm_to_other_devices/ | false | false | self | 13 | null |
Anyone benchmarked Qwen3.5-27B (dense) and Qwen3.5-35B-A3B (MoE) on OpenClaw? How close to Claude Sonnet 4.5 for tool-calling? | 0 | Wondering if this makes it worth switching to running it fully locally instead of paying API costs | 2026-02-25T20:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/1repb9u/anyone_benchmarked_qwen3527b_dense_and/ | 98Saman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1repb9u | false | null | t3_1repb9u | /r/LocalLLaMA/comments/1repb9u/anyone_benchmarked_qwen3527b_dense_and/ | false | false | self | 0 | null |
Help me build a chatbot locally | 0 | Hey! I’m working on a chatbot where I need to process user text input from the frontend and generate agent audio output. I’ve come across examples for text-to-text and audio-to-audio interactions in the library, but I haven’t found a clear approach for combining them into a text-to-audio conversation. Could you suggest a... | 2026-02-25T20:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/1repb0u/help_me_build_chatbot_localy/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1repb0u | false | null | t3_1repb0u | /r/LocalLLaMA/comments/1repb0u/help_me_build_chatbot_localy/ | false | false | self | 0 | null |
RX 7800 XT only getting ~5 FPS on DirectML ??? (DeepLiveCam 2.6) | 1 | I’ve fully set up DeepLiveCam 2.6 and it is working, but performance is extremely low and I’m trying to understand why.
System:
* Ryzen 5 7600X
* RX 7800 XT (16GB VRAM)
* 32GB RAM
* Windows 11
* Python 3.11 venv
* ONNX Runtime DirectML (dml provider confirmed active)
Terminal confirms GPU provider:
Applied provide... | 2026-02-25T20:40:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rep6my/rx_7800_xt_only_getting_5_fps_on_directml/ | RoboReings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rep6my | false | null | t3_1rep6my | /r/LocalLLaMA/comments/1rep6my/rx_7800_xt_only_getting_5_fps_on_directml/ | false | false | self | 1 | null |
Introducing Mercury 2 - Diffusion for real-time reasoning | 23 | **What stands out:**
* Uses **diffusion-based generation** instead of sequential token-by-token decoding
* Generates tokens in parallel and refines them over a few steps
* Claims **1,009 tokens/sec** on NVIDIA Blackwell GPUs
* Pricing: **$0.25 / 1M input tokens**, **$0.75 / 1M output tokens**
* 128K context
* Tunable ... | 2026-02-25T20:39:21 | https://www.inceptionlabs.ai/blog/introducing-mercury-2 | TyedalWaves | inceptionlabs.ai | 1970-01-01T00:00:00 | 0 | {} | 1rep5bg | false | null | t3_1rep5bg | /r/LocalLLaMA/comments/1rep5bg/introducing_mercury_2_diffusion_for_realtime/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WWeemdjZc6o2dTcFImB7RNbTs0n7PlHTldvkAwbSd1w.png?width=108&crop=smart&auto=webp&s=7e60251a45af88e5e2a1b81d770d43f882c45998', 'width': 108}, {'height': 113, 'url': 'h... | |
US or EU based provider for open weight models? | 3 | I want to use open-weight models instead of proprietary AI models like Claude or ChatGPT. However, my hardware is not good enough to run those, so I am looking for a provider that hosts state-of-the-art open-weight models like Kimi K2 or Minimax M2.5 in the US or Europe and offers access at a reasonable price. I do not... | 2026-02-25T20:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/1reowuj/us_or_eu_based_provider_for_open_weight_models/ | Old-Jaguar-479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reowuj | false | null | t3_1reowuj | /r/LocalLLaMA/comments/1reowuj/us_or_eu_based_provider_for_open_weight_models/ | false | false | self | 3 | null |
Air llm? | 0 | Can anyone explain how to use this, and does it actually work? I recently saw a Twitter post about it. Is this real?
because I have very low specs : 1650ti 4gb vram , 16gb ram ! ...can it make any use to me ? | 2026-02-25T20:29:26 | Less_Strain7577 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1reovq3 | false | null | t3_1reovq3 | /r/LocalLLaMA/comments/1reovq3/air_llm/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'zt2yrk5iaplg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=108&crop=smart&auto=webp&s=f53d98f9744557ebfa29f98d3f539582c9cf2402', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zt2yrk5iaplg1.jpeg?width=216&crop=smart&auto=... | ||
Help needed: Chatterbox Multilanguage (Polish) producing artifacts and long pauses | 2 | Hi everyone,
I am looking for some advice on fine-tuning Chatterbox Multilanguage for the Polish language. I am currently facing two specific issues that are significantly affecting the quality of my narrations:
1. Audio artifacts (growls/screams): Occasionally, the model generates strange, non-vocal sounds that soun... | 2026-02-25T20:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1reovjz/help_needed_chatterbox_multilanguage_polish/ | Tomasz_NieMasz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reovjz | false | null | t3_1reovjz | /r/LocalLLaMA/comments/1reovjz/help_needed_chatterbox_multilanguage_polish/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ow01RTwADHhzhY1u9iJgum9tCemVkdYQMiTV725QH18.jpeg?width=108&crop=smart&auto=webp&s=24aa52fb4817ebe99af39e8bffff7050aaf21d29', 'width': 108}, {'height': 162, 'url': '... |
Mac Studio 128/256GB for local LLM coding? | 2 | Hello,
I'm a developer with side projects. Lately, I've been thinking of buying a Mac Studio with 128 or 256GB of RAM to support my projects.

My logic is to be able to define goals for a local LLM and let it do its job while I'm sleeping or running other projects.
How feasible is that? Will this work? Is it worth t... | 2026-02-25T20:20:33 | https://www.reddit.com/r/LocalLLaMA/comments/1reon35/mac_studio_128256gb_for_local_llm_coding/ | TechDude12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reon35 | false | null | t3_1reon35 | /r/LocalLLaMA/comments/1reon35/mac_studio_128256gb_for_local_llm_coding/ | false | false | self | 2 | null |
Local LLM tool calling - Anyone heard of this? | 0 | Hey guys, I have been using Sapphire Ai for a bit now and wanted to get others' opinions on it, since I think I was one of the first to discover it.
Been poking around the self-hosted AI space for a while and most projects are either half-finished or just a thin wrapper around Ollama with a pretty UI slapped on.
Th... | 2026-02-25T20:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/1reoiqj/local_llm_tool_calling_anyone_heard_of_this/ | Dudebro-420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reoiqj | false | null | t3_1reoiqj | /r/LocalLLaMA/comments/1reoiqj/local_llm_tool_calling_anyone_heard_of_this/ | false | false | self | 0 | null |
RAG is cooked, Qwen 3.5 for multi modal long context. | 0 | Qwen 3.5 35b does something that previously I saw only Gemini do, which is using way fewer tokens per image than it would take to tokenize the actual words in that image. Meaning if you take a large PDF and convert all pages to images (resized to fit a 1000x1000 box), your context will be smaller than OCRing the same p... | 2026-02-25T20:07:42 | https://www.reddit.com/r/LocalLLaMA/comments/1reoabu/rag_is_cooked_qwen_35_for_multi_modal_long_context/ | OutlandishnessIll466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reoabu | false | null | t3_1reoabu | /r/LocalLLaMA/comments/1reoabu/rag_is_cooked_qwen_35_for_multi_modal_long_context/ | false | false | self | 0 | null |
The Qwen 3.5 A3B model at 4 bit k_xl works better with 8 bit KV cache... | 27 | I'll probably toss up some examples later, but I've got some things to do today. I just wanted to mention that I did a whole mess of personal benchmark/testing on that new qwen 3.5 A3b. That thing is amazing.
Interestingly, when I re-ran everything at Q8\_0 KV Cache, it improved across the board. Normally, kicking KV ... | 2026-02-25T20:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1reo5bv/the_qwen_35_a3b_model_at_4_bit_k_xl_works_better/ | teachersecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reo5bv | false | null | t3_1reo5bv | /r/LocalLLaMA/comments/1reo5bv/the_qwen_35_a3b_model_at_4_bit_k_xl_works_better/ | false | false | self | 27 | null |
Running local agents with Ollama: how are you handling KB access control without cloud dependencies? | 1 | Been thinking about this a lot lately and I’m curious how others are approaching it.
As soon as you have more than one agent sharing a knowledge base, access control becomes a real problem. In cloud setups you can offload this to managed services, but if you’re running everything locally the options are less obvious.
... | 2026-02-25T19:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/1reo0b1/running_local_agents_with_ollama_how_are_you/ | Comfortable_Poem_866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1reo0b1 | false | null | t3_1reo0b1 | /r/LocalLLaMA/comments/1reo0b1/running_local_agents_with_ollama_how_are_you/ | false | false | self | 1 | null |
MCP has no native permission model, how are you handling access control for multi-agent KB setups? | 1 | As agents get more autonomous, how should we handle KB access control? Enforce at the API layer, at the MCP server level, or is there a smarter way to bind an agent identity to specific knowledge scopes? Curious how people are solving this today. | 2026-02-25T19:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1renval/mcp_has_no_native_permission_model_how_are_you/ | Comfortable_Poem_866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1renval | false | null | t3_1renval | /r/LocalLLaMA/comments/1renval/mcp_has_no_native_permission_model_how_are_you/ | false | false | self | 1 | null |
Everything I learned building on-device AI into a React Native app -- Text Gen, Image Gen, Speech to Text, Multi-Modal AI, Intent Classification, Prompt Enhancements and more | 8 | I spent some time building a React Native app that runs LLMs, image generation, voice transcription, and vision AI entirely on-device. No cloud. No API keys. Works in airplane mode.
Here's what I wish someone had told me before I started. If you're thinking about adding on-device AI to an RN app, this should save you ... | 2026-02-25T19:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/1renuky/everything_i_learned_building_ondevice_ai_into_a/ | alichherawalla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1renuky | false | null | t3_1renuky | /r/LocalLLaMA/comments/1renuky/everything_i_learned_building_ondevice_ai_into_a/ | false | false | self | 8 | null |
Qwen3.5 Model Comparison: 27B vs 35B on RTX 4090 | 85 | I wanted to check which Qwen3.5 models can be run on my GPU, so I compared 3 GGUF options.
**Hardware:** RTX 4090 (24GB VRAM)
**Test:** Multi-agent Tetris development (Planner → Developer → QA)
# Models Under Test
|Model|Preset|Quant|Port|VRAM|Parallel|
|:-|:-|:-|:-|:-|:-|
|Qwen3.5-27B|`qwen35-27b-multi`|Q4\_K\_... | 2026-02-25T19:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/ | jaigouk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1renq5y | false | null | t3_1renq5y | /r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/ | false | false | 85 | null |