| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How do I figure out -b batch size to increase token speed? | 4 | llama-bench says Qwen3.5 and Qwen3 Coder Next are not supported?
1. How are you figuring out what batch size and ub (whatever that does) to try?
2. Does it actually make a speeeeed difference?
3. Will batch size decrease quality? | 2026-02-28T19:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbqoq | false | null | t3_1rhbqoq | /r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/ | false | false | self | 4 | null |
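(Editor note on the three questions above: llama-bench accepts comma-separated value lists, so the usual answer to question 1 is an empirical sweep like the sketch below, where the model path is a placeholder. For question 3: `-b`/`-ub` control how many tokens are processed per step during prefill, not what is computed, so they affect speed rather than output quality.)

```shell
# Sweep llama.cpp batch (-b) and micro-batch (-ub) sizes; llama-bench
# reports pp (prompt processing) and tg (generation) speed for every
# combination in the cross product. model.gguf is a placeholder path.
llama-bench -m model.gguf -p 512 -n 64 -b 512,1024,2048 -ub 256,512
```

Pick the `-b`/`-ub` pair with the highest pp t/s; larger values mostly help prompt processing and have little effect on generation speed.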
Saw someone bridge Claude Code into chat apps — feels like ChatOps for AI agents | 0 | I came across an interesting project recently that connects Claude Code to messaging platforms and lets you interact with it through chat apps instead of a local terminal.
The idea is surprisingly simple:
Claude Code keeps running locally, and a small bridge relays messages between the agent and platforms like Slack ... | 2026-02-28T19:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbo40/saw_someone_bridge_claude_code_into_chat_apps/ | chg80333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbo40 | false | null | t3_1rhbo40 | /r/LocalLLaMA/comments/1rhbo40/saw_someone_bridge_claude_code_into_chat_apps/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=108&crop=smart&auto=webp&s=151f64c22cba469a3374c4642608ffcfeb1de312', 'width': 108}, {'height': 108, 'url': 'h... |
Help: Extremely slow Prompt Processing (Prefill) on i3-8100 / 8GB RAM / UHD 630 that BrowserOS is failing | 0 | I’m running LM Studio on a low-spec machine and my Prompt Processing is so slow that my "BrowserOS" interface keeps timing out or failing. Once it starts generating (eval), the speed is okay, but the initial "thinking" phase takes forever.
My Specs:
CPU: Intel i3-8100 (4 Cores)
RAM: 8GB (Total system RAM)
GPU: Intel U... | 2026-02-28T19:29:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rhblei/help_extremely_slow_prompt_processing_prefill_on/ | broSleepNow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhblei | false | null | t3_1rhblei | /r/LocalLLaMA/comments/1rhblei/help_extremely_slow_prompt_processing_prefill_on/ | false | false | self | 0 | null |
ShunyaNet Sentinel: A Self-Hosted RSS Aggregator for Local LLM Analysis (with a not-so-subtle 90s cyberpunk theme...) | 8 | Hello all — A friend suggested I share my fun side-project here, too.
**ShunyaNet Sentinel** is a lightweight, ridiculously-named and cyberpunk-themed RSS monitoring tool that sends feed content to a locally hosted LLM for analysis and delivers alerts/summaries to the GUI and optionally Slack (so you can get notifica... | 2026-02-28T19:23:08 | https://v.redd.it/47wl3w5bbamg1 | _WaterBear | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhbfya | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/47wl3w5bbamg1/DASHPlaylist.mpd?a=1774898615%2CNTcyNTE5YzA2MGFlMjFhZTZmNDVmY2Q0ZjVhNzUyYzczNmViODgxODFkYTdlN2VjNDQ1ZTgyNGRhNjI3NDJmZQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/47wl3w5bbamg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rhbfya | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'cDEyNW9vNmJiYW1nMW7jNwMwzxJiDzTR7H2hjqixqS_GYlYo3o4GRSn6LZPn', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/cDEyNW9vNmJiYW1nMW7jNwMwzxJiDzTR7H2hjqixqS_GYlYo3o4GRSn6LZPn.png?width=108&crop=smart&format=pjpg&auto=webp&s=38b59780d0ade78b2c9360802b1ad95a788ca... | |
Hardware requirements for running Claude-like models locally? | 1 | [removed] | 2026-02-28T19:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbe2m/hardware_requirements_for_running_claudelike/ | No-Data-8738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbe2m | false | null | t3_1rhbe2m | /r/LocalLLaMA/comments/1rhbe2m/hardware_requirements_for_running_claudelike/ | false | false | self | 1 | null |
gemini ultra vs pro actually different or just a scam | 0 | thinking about paying for gemini ultra but kinda skeptical rn is it physically a bigger model under the hood or did google just take pro remove some limits and slap a price tag on it has anyone actually tested them side by side on complex coding or logic stuff feels like it might just be a marketing gimmick let me know... | 2026-02-28T19:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/ | ebosha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbbq1 | false | null | t3_1rhbbq1 | /r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/ | false | false | self | 0 | null |
Hardware requirements for running Anthropic-like models locally? | 1 | [removed] | 2026-02-28T19:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbbhl/hardware_requirements_for_running_anthropiclike/ | No-Data-8738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbbhl | false | null | t3_1rhbbhl | /r/LocalLLaMA/comments/1rhbbhl/hardware_requirements_for_running_anthropiclike/ | false | false | self | 1 | null |
Agents are here! But E-commerce? Stuck in the past. Sandbox live for developers to play with | 0 | [payclaw.io](http://payclaw.io) | 2026-02-28T19:15:54 | https://v.redd.it/5341svo1camg1 | Opposite-Exam3541 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhb99s | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5341svo1camg1/DASHPlaylist.mpd?a=1774898180%2CMjIwZDM2MjMzNzY4YTI0MmQyOGRhMzNhMDNkYTlhMGVlZjUwZjdhYTg3MjVlYzMxZDNkN2EzNTIxNTkzOGJlOQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/5341svo1camg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rhb99s | /r/LocalLLaMA/comments/1rhb99s/agents_are_here_but_ecommerce_stuck_in_the_past/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'd3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ec5efb6855d2e2c346ad7edaedd811ba062a... | |
fine tuning on proprietary data is way harder to deploy than anyone tells you and most of it has nothing to do with the model | 15 | so we needed to fine tune on client data. sensitive stuff, not nuclear level but the kind where if it leaks or somehow ends up in some upstream training pipeline our client relationship is basically done...
figured this would take a few weeks. dataset prep, training runs, eval, deploy. normal ml flow right...
three ... | 2026-02-28T19:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/ | Olivia_Davis_09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhb1xb | false | null | t3_1rhb1xb | /r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/ | false | false | self | 15 | null |
13" M1 MBP instead of M4 Mac Mini | 0 | I came across [this article on 𝕏](https://x.com/0xPhantomDefi/status/2023385836893483328) where they used Clawdbot with polymarket to make money. Can someone tell me if this is legit or not?
And if it is legit, will my 6 year old 13" M1 Macbook Pro with 16 GB RAM be sufficient to run Clawdbot? Or is it better to go w... | 2026-02-28T19:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/ | TaaDaahh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhazbc | false | null | t3_1rhazbc | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/ | false | false | self | 0 | null |
Framework-agnostic AI agents defined entirely in git (clone = agent) | 1 | [removed] | 2026-02-28T18:54:25 | https://github.com/open-gitagent/gitagent | Reasonable_Play_9632 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rhap8h | false | null | t3_1rhap8h | /r/LocalLLaMA/comments/1rhap8h/frameworkagnostic_ai_agents_defined_entirely_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=108&crop=smart&auto=webp&s=044da642183d793cc0836aa9dc0f889ee6d48fb4', 'width': 108}, {'height': 108, 'url': 'h... | |
Anyone noticing Qwen3.5 27B getting stuck in reasoning loops? | 6 | I've been testing the multi-modal capabilities by giving it an image and asking it to identify the location. It's done pretty well!
But occasionally, it will get stuck on 3 or 4 locations and just keep re-assessing the same ones over and over and over again.
Is it X? No it can't be X because blah blah blah. Is it Y... | 2026-02-28T18:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/ | fredandlunchbox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhaoty | false | null | t3_1rhaoty | /r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/ | false | false | self | 6 | null |
AMD NPU tutorial for linux | 3 | Haven't tried it yet but lemonade server put up a tutorial for using the NPU on linux.
[https://lemonade-server.ai/flm\_npu\_linux.html](https://lemonade-server.ai/flm_npu_linux.html)
Here's the corresponding github issue/discussion:
[https://github.com/lemonade-sdk/lemonade/issues/5](https://github.com/lemonade-... | 2026-02-28T18:52:55 | Zc5Gwu | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhanvn | false | null | t3_1rhanvn | /r/LocalLLaMA/comments/1rhanvn/amd_npu_tutorial_for_linux/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'oia0lq7w7amg1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?width=108&crop=smart&auto=webp&s=93061382e25a34b7f43d9120fd39f67c612b7d06', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?width=216&crop=smart&auto=we... | ||
Local LLMs are slow, I have too many things to try, and I hate chat UIs, so I built an async task board where agents work in parallel while I do other things | 8 | Been building ClaraVerse for a while. Just shipped the update I'm most excited about and wanted to share what's actually possible now.
# Clara Companion: connect your machine to your AI
You run a lightweight companion on any machine (PC, server, whatever). It connects over WebSocket and exposes MCP tools from that ma... | 2026-02-28T18:50:24 | https://www.reddit.com/gallery/1rhalir | BadBoy17Ge | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rhalir | false | null | t3_1rhalir | /r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/ | false | false | 8 | null | |
Seeking hardware recommendations | 1 | Hi everyone, I’m not sure if this is the right subreddit to ask this question but I’ll go ahead anyway.
I have an RTX 3060TI, 16gb ram and a 12th gen intel i5 processor. How can I augment my hardware setup to be able to run some of the newer qwen modals locally? I want to play around with these models for my learning ... | 2026-02-28T18:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rhalha/seeking_hardware_recommendations/ | Quirky-Physics6043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhalha | false | null | t3_1rhalha | /r/LocalLLaMA/comments/1rhalha/seeking_hardware_recommendations/ | false | false | self | 1 | null |
Qwen3.5-35B-A3B returns gibberish after pasting its Prompt Template | 0 | When I paste the Jinja2 template of Qwen3.5-35B-A3B into the chat (because I’m a beginner and want to ask about it), it returns gibberish. I tried this with the unsloth/Qwen3.5-35B-A3B-UD-Q6\_K\_XL.gguf model.
Other models like gpt-oss 120B responded normally.
Can anyone explain what I’m doing wrong? | 2026-02-28T18:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rhah7a/qwen3535ba3b_returns_gibberish_after_pasting_its/ | AffectionateHome3113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhah7a | false | null | t3_1rhah7a | /r/LocalLLaMA/comments/1rhah7a/qwen3535ba3b_returns_gibberish_after_pasting_its/ | false | false | self | 0 | null |
Krira Chunker – Rust+Python RAG library, 5.28GB CSV in 113s – feedback welcome | 1 | Hey r/LocalLLaMA ,
I've been building a document chunking library for RAG
pipelines called Krira Chunker — Rust core with Python
bindings.
Ran a benchmark today on a 5.28 GB CSV file:
→ 42,448,765 chunks created
→ 113 seconds total
→ 47 MB/s throughput
→ Memory stayed flat throughout (O(1) space complex... | 2026-02-28T18:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rha5vw/krira_chunker_rustpython_rag_library_528gb_csv_in/ | Muted-Location-2465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rha5vw | false | null | t3_1rha5vw | /r/LocalLLaMA/comments/1rha5vw/krira_chunker_rustpython_rag_library_528gb_csv_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=108&crop=smart&auto=webp&s=e9bcfdd84875d8ebcb45d1fc797b8b5d517def4c', 'width': 108}, {'height': 108, 'url': 'h... |
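(The headline numbers in this row are internally consistent; a quick arithmetic check, assuming binary GB-to-MB conversion:)

```python
# Verify the claimed throughput: 5.28 GB processed in 113 s ~ 47 MB/s.
size_mb = 5.28 * 1024            # 5.28 GB expressed in MB
throughput_mb_s = size_mb / 113  # claimed wall-clock time in seconds
chunks_per_s = 42_448_765 / 113  # claimed chunk count over the same run
print(round(throughput_mb_s, 1), round(chunks_per_s))
```

This comes out to roughly 47.8 MB/s and about 375k chunks/s, matching the post's stated 47 MB/s.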
Advice on Hardware purchase and selling old hardware | 1 | I have a Dell R730 with 2 Tesla P40s and 400ish gigs of ram.
It can run most things, but is dog slow.
I bought a RTX 3090 cause I thought I saw someone put i in the same server and down clocked it to meet the power limit requirements, but I guess I bought the wrong one cause my 3090 doesn't fit and feels vaguely like... | 2026-02-28T18:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/ | Envoy0675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rha4g1 | false | null | t3_1rha4g1 | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/ | false | false | self | 1 | null |
Is anyone else waiting for a 60-70B MoE with 8-10B activated params? | 25 | I feel like that could be the sweet spot for 64GB VRAM, and could reach the performance of closed "flash" models.
It's weird that we are seeing only \~30B and \~120B MoE models and not something in the middle. | 2026-02-28T18:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9ygz | false | null | t3_1rh9ygz | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/ | false | false | self | 25 | null |
This sub is incredible | 440 | I feel like everything in the AI industry is speedrunning profit-driven vendor lock-in and rapid enshittification, then everyone on this sub cobbles together a bunch of RTX 3090s, trades weights around like they are books at a book club, and makes the entire industry look like a joke. Keep at it! You are our only hope! | 2026-02-28T18:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/ | cmdr-William-Riker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9u4r | false | null | t3_1rh9u4r | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/ | false | false | self | 440 | null |
I want to build an open-source "AI Senate": A platform where humans post complex problems, we deploy our custom AI Agents to debate them, and humans vote for the best. Who wants to build this with me? | 0 | Hey everyone,
I’ve been iterating on an idea, and I want to turn it into an open-source community project. Instead of just chatting with our own LLMs in silos, what if we had a multi-agent Town Hall / Senate with real stakes?
Imagine a Reddit-like platform where the only allowed posters are our custom-configured AI Age... | 2026-02-28T18:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/ | Thin-Effect-3926 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9lll | false | null | t3_1rh9lll | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/ | false | false | self | 0 | null |
Qwen3.5 35B-A3B replaced my 2-model agentic setup on M1 64GB | 109 | There's been a lot of buzz about Qwen3.5 models being smarter than all previous open-source models in the same size class matching or rivaling models 8-25x larger in total parameters like MiniMax-M2.5 (230B), DeepSeek V3.2 (685B), and GLM-4.7 (357B) in reasoning, agentic, and coding tasks.
I had to try them on a real-... | 2026-02-28T18:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/ | luke_pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9k63 | false | null | t3_1rh9k63 | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/ | false | false | self | 109 | null |
Do you find qwen3:14b-q8_0 (15GB) smarter than qwen3.5:35b-a3b-q4_K_M (23GB)? | 0 | I have 28GB of VRAM in total, so every now and then I try new models as my Task Model in Open WebUI.
The smartest model for this up to recently was Qwen3 14B. But it is only using \~17GB of VRAM, so in theory there's still a lot of room for more "intelligence" to fit in.
Therefore I was quite excited when new Qwen3... | 2026-02-28T18:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/ | donatas_xyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9dt3 | false | null | t3_1rh9dt3 | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/ | false | false | self | 0 | null |
Alternatives to Pinokio and Lynxhub? | 2 | Hi all.
I wanted an "app" that lets me download various local AI tools without too much effort, like Pinokio or Lynxhub do (so AI for chat, LLMs, coding, image/video/audio gen, etc...)
The problem is that almost all the tools are tied only to a specific sector (for example Stability Matrix, which can only download i... | 2026-02-28T18:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/ | Mhanz97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9c0w | false | null | t3_1rh9c0w | /r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/ | false | false | self | 2 | null |
Qwen 35B A3B - AesSedai Finetune on 8gb VRAM and 32gb RAM | 0 | Hey, just wanted to share my settings. Keep in mind I'm nowhere near a professional. I try to catch up on posts in this sub and just keep trying stuff with the assistance of AI, based on feedback from the community, and try it on my projects.
My setup is weak, no question about it, but it's always fascinating to see what other people... | 2026-02-28T17:59:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/ | sagiroth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9983 | false | null | t3_1rh9983 | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/ | false | false | self | 0 | null |
idea for language model architecture restructure | 1 | i come to you today with an idea ive been cooking for awhile now based on what i've experienced with llms and slms.
nothing i am saying is empirically factual or 100% guaranteed to be correct. that's why this is an idea.
personally, i think we're doing it wrong by doing these large 70 billion plus models.
i feel like... | 2026-02-28T17:44:30 | https://github.com/sudoxreboot/sudo-slm-architrcture/blob/main/thoughts/tiered-slm.md | sudoxreboot | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh8vub | false | null | t3_1rh8vub | /r/LocalLLaMA/comments/1rh8vub/idea_for_language_model_architecture_restructure/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=108&crop=smart&auto=webp&s=ce096ef788cb09b10cfa0f8e1f04ad2c659ef759', 'width': 108}, {'height': 108, 'url': 'h... | |
Trinity Large Preview vs Nemotron 3 Nano 30B A3B? | 0 | Hello, I tried to configure OpenClaw on my Ubuntu machine but still haven't decided on the main AI model I'm going to use. I linked my OpenRouter account but still can't decide which is better; after I found that gpt-oss-120b is not supported anymore, I looked at a lot of benchmarks for Trinity Large Preview and found that it's good, but ... | 2026-02-28T17:36:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8oaa/trinity_large_preview_vs_nemotron_3_nano_30b_a3b/ | Agreeable_Asparagus3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8oaa | false | null | t3_1rh8oaa | /r/LocalLLaMA/comments/1rh8oaa/trinity_large_preview_vs_nemotron_3_nano_30b_a3b/ | false | false | 0 | null |
Self-speculative decoding for Qwen3.5-35B-A3B in llama.cpp? | 13 | Self-speculative decoding gives a big speed boost for repeated tokens (thinking, blocks of code, etc.), which makes a real difference for agentic/coding workloads.
[https://github.com/ggml-org/llama.cpp/pull/19164](https://github.com/ggml-org/llama.cpp/pull/19164) \- video showcasing the speed difference on repeated t... | 2026-02-28T17:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/ | oxygen_addiction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8o4b | false | null | t3_1rh8o4b | /r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=108&crop=smart&auto=webp&s=6cc99272b1586156f54c65b6e94d68d90a4c3ae9', 'width': 108}, {'height': 108, 'url': 'h... |
Looking for feedback: Externalizing Developers' Intuition as Code | 1 | The moment I let my guard down, my AI agent started burning through my wallet. I stepped away for a brief moment to grab a coffee, and it got stuck in an infinite loop, trying to fix a bug with the same failing approach over and over for 30 minutes. By the time I checked back, it had chewed through $10 in tokens.
That... | 2026-02-28T17:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8m5m/looking_for_feedback_externalizing_developers/ | EmergencyBerry2514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8m5m | false | null | t3_1rh8m5m | /r/LocalLLaMA/comments/1rh8m5m/looking_for_feedback_externalizing_developers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=108&crop=smart&auto=webp&s=c0dea2986103b83583aec6da4612ea522ec8695a', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen 3 (30B A3B 2507) - Qwen 3.5 (35B A3B) - Benchmarked on VLLM A100@40GB PHB Link and tensor-parallel-size = 2 | 9 | Here is a benchmark realized with VLLM bench suite.
It's a mix of the following matrix options:
Model :
* Qwen/Qwen3.5-35B-A3B
* Qwen/Qwen3-30B-A3B-Instruct-2507
Attention modes:
* `FLASH_ATTN`
* `FLASHINFER`
Quantizations :
* Official FP8 one (uses marlin kernels by default)
* AWQ 4-bit
Setup for the... | 2026-02-28T17:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8li2 | false | null | t3_1rh8li2 | /r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/ | false | false | self | 9 | null |
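(One cell of the benchmark matrix above can be reproduced with a command along these lines; this is an editor sketch, not the poster's exact invocation. In vLLM the attention backend is selected via an environment variable:)

```shell
# Serve the Qwen3 MoE checkpoint across 2 GPUs (tensor parallelism),
# forcing the FlashInfer attention backend via environment variable.
VLLM_ATTENTION_BACKEND=FLASHINFER \
  vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --tensor-parallel-size 2
```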
My ideas about 'Protective Ai' | 0 | Most Ai nowadays is used for Generative purposes, creating (unfortunately) loads of slop. I'm currently using Ollama to filter shitty youtube recommendations through a browser app I called dehook. The LLM scans and filters every youtube video based on the title. I think LLM's could 'protect' us in this way from the big... | 2026-02-28T17:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/ | Cas_Dehook | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8iyf | false | null | t3_1rh8iyf | /r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/ | false | false | 0 | null | |
Need some help getting 2x RTX 5090 working with vLLM. Anyone else run into this? | 1 | I'm running Ubuntu Server with this hardware:
9960x Threadripper
128gb RDIMM ECC
Asus Pro WS TRX-50-SAGE WIFI
Machine is up and running great, outside of the docker container not working. Any ideas on what I can do to get it sorted out. I know the PCI lane issues persists in vLLM, hopefully they get it figured out ... | 2026-02-28T17:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8eys/need_some_help_getting_2x_rtx_5090_working_with/ | queequegscoffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8eys | false | null | t3_1rh8eys | /r/LocalLLaMA/comments/1rh8eys/need_some_help_getting_2x_rtx_5090_working_with/ | false | false | self | 1 | null |
Can we automod "Qwen2.5", etc. posts? | 1 | [removed] | 2026-02-28T17:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8esp/can_we_automod_qwen25_etc_posts/ | Zidrewndacht | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8esp | false | null | t3_1rh8esp | /r/LocalLLaMA/comments/1rh8esp/can_we_automod_qwen25_etc_posts/ | false | false | self | 1 | null |
[Project] Attack on Memory: a memory governance layer for multi-agent systems | 0 | We open-sourced Attack on Memory, focused on memory reliability for multi-agent systems.
Highlights in v0.1:
- verifiable memory atoms
- role/sensitivity-based selective disclosure
- auditable citations + evaluation loop
- reproducible scenario benchmark snapshot
Repo: [https://github.com/MarcelLeon/attack-on-memory]... | 2026-02-28T17:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rh82dk/project_attack_on_memory_a_memory_governance/ | Feisty-Macaroon-2177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh82dk | false | null | t3_1rh82dk | /r/LocalLLaMA/comments/1rh82dk/project_attack_on_memory_a_memory_governance/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=108&crop=smart&auto=webp&s=a70be70448fd97a6a23323366b9543eedf7509f7', 'width': 108}, {'height': 108, 'url': 'h... |
What if LLM agents passed KV-cache to each other instead of text? I tried it -- 73-78% token savings across Qwen, Llama, and DeepSeek | 112 | If you've used multi-agent setups with LangChain, CrewAI, AutoGen, or Swarm, you've probably noticed: every agent re-tokenizes and re-processes the full conversation from scratch. Agent 3 in a 4-agent chain is re-reading everything agents 1 and 2 already chewed through. When I measured this across Qwen2.5, Llama 3.2, a... | 2026-02-28T17:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/ | proggmouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh802w | false | null | t3_1rh802w | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=108&crop=smart&auto=webp&s=be69f856def148b28f2c34343e6cb9f0c8a479a5', 'width': 108}, {'height': 108, 'url': 'h... |
Anthropic… | 0 | The refusal to collaborate with the DoW is probably strategic; it is not linked at all to any willingness to protect people from mass surveillance or whatever.
First they let openai fall (collaborate with DoW), then at their turn they refuse, knowing anti Trump movements would massively praise them, so now they become heroes, more and more ... | 2026-02-28T17:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7s7s/anthropic/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7s7s | false | null | t3_1rh7s7s | /r/LocalLLaMA/comments/1rh7s7s/anthropic/ | false | false | self | 0 | null |
Before I Rewrite My Stack Again… Advice? | 0 | I'm a machine learning engineer who has been working on the production system for the last 2 weeks; I had a working project. As the weekend came, I read over a few articles. Some ask: why a vector database for RAG, now that we have page indexing? Some even ask: why an LLM for generation at all? Crazy? The diffusion language mod... | 2026-02-28T16:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/ | Disastrous_Talk7604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7mlv | false | null | t3_1rh7mlv | /r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/ | false | false | self | 0 | null |
An open source LLM router that cuts API costs by 60–80% by automatically routing every prompt to the right model tier | 3 | Been thinking about how most pipelines just call the same expensive model for everything regardless of what the prompt actually needs. Seemed like an obvious inefficiency so we started tinkering.
Ended up building a small KNN-based router that embeds prompts and classifies them into budget/balanced/premium tiers at ca... | 2026-02-28T16:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7j43/an_open_source_llm_router_that_cuts_api_costs_by/ | s3309 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7j43 | false | null | t3_1rh7j43 | /r/LocalLLaMA/comments/1rh7j43/an_open_source_llm_router_that_cuts_api_costs_by/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=108&crop=smart&auto=webp&s=570f201050f3de123ef15fcfe905fbd4b4119492', 'width': 108}, {'height': 144, 'url': 'h... |
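(The core routing idea described above is simple enough to sketch. The snippet below is an illustrative nearest-neighbor tier classifier over prompt embeddings, with random vectors standing in for a real embedding model; it is not the project's actual code:)

```python
import numpy as np

# Toy tier router: classify a prompt embedding into budget/balanced/premium
# by majority vote among its k nearest labeled embeddings.
rng = np.random.default_rng(0)
labeled = rng.normal(size=(30, 8))                # 30 labeled "prompt" embeddings
tiers = np.array([0] * 10 + [1] * 10 + [2] * 10)  # 0=budget, 1=balanced, 2=premium

def route(query_emb, k=5):
    dists = np.linalg.norm(labeled - query_emb, axis=1)  # Euclidean distances
    nearest = tiers[np.argsort(dists)[:k]]               # tiers of k nearest
    return np.bincount(nearest).argmax()                 # majority tier

print(["budget", "balanced", "premium"][route(np.zeros(8))])
```

Routing happens before the LLM call, so its cost is one embedding plus a distance scan over the labeled set.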
Built a CLI tool to contain local AI agents | 1 | [removed] | 2026-02-28T16:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7bp6/built_a_cli_tool_to_contain_local_ai_agents/ | Intrepid_Yogurt3284 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7bp6 | false | null | t3_1rh7bp6 | /r/LocalLLaMA/comments/1rh7bp6/built_a_cli_tool_to_contain_local_ai_agents/ | false | false | self | 1 | null |
google found that longer chain of thought actually correlates NEGATIVELY with accuracy. -0.54 correlation | 266 | new google paper is out and it challenges something a lot of us assumed. they tested 8 model variants (GPT-OSS, DeepSeek-R1, Qwen3, etc) across AIME2024/2025, HMMT 2025, and GPQA-Diamond.
the finding: token length and accuracy have an average correlation of -0.54. negative. longer reasoning chains don't mean better an... | 2026-02-28T16:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/ | Top-Cardiologist1011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6pru | false | null | t3_1rh6pru | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/ | false | false | self | 266 | null |
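(For anyone wanting to reproduce the statistic on their own eval logs: the figure is a Pearson correlation between per-problem chain-of-thought token count and accuracy, computable with `numpy.corrcoef`. The numbers below are synthetic, purely to illustrate the computation; they are not the paper's data:)

```python
import numpy as np

# Synthetic eval log: per-problem chain-of-thought length vs. accuracy,
# made-up numbers chosen to show a negative relationship.
cot_tokens = np.array([1200, 2500, 4000, 6500, 9000, 12000])
accuracy = np.array([0.85, 0.80, 0.70, 0.55, 0.45, 0.30])

# corrcoef returns the 2x2 correlation matrix; the off-diagonal is Pearson r.
r = np.corrcoef(cot_tokens, accuracy)[0, 1]
print(f"r = {r:.2f}")  # strongly negative for this toy data
```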
[R] google found that longer chain of thought actually correlates NEGATIVELY with accuracy. -0.54 correlation | 1 | [deleted] | 2026-02-28T16:18:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rh6ozu | false | null | t3_1rh6ozu | /r/LocalLLaMA/comments/1rh6ozu/r_google_found_that_longer_chain_of_thought/ | false | false | default | 1 | null | ||
How to make AI collaborate to get my work done | 0 | Basically the question: I have an old Windows laptop that I bought in 2018. I'm planning to run some automated tasks on it using AIs. My plan is to combine 2 or 3 AI models like Gemini, GPT, DeepSeek, or anything similar to do some work for me on a schedule and send me the update to my Telegram ID. I've seen... | 2026-02-28T16:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/ | Network-Zealousideal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6e38 | false | null | t3_1rh6e38 | /r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/ | false | false | self | 0 | null |
Multi-Directional Refusal Suppression with Self-Organizing Maps - Pull Request into heretic! | 66 | TL;DR: The first technique that pushed gpt-oss-20b to 3 refusals from 100 while keeping KL of 0.12, and oss-120b to 7/100 while having KL 0.22!
Previous work assumed refusal behavior to be encoded as a single direction in the model's latent space; e.g., computed as the difference between the centroids of harmful and h... | 2026-02-28T16:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/ | kabachuha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh69co | false | null | t3_1rh69co | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=108&crop=smart&auto=webp&s=068cb282b3b39b94ac168918e21f07976dab2313', 'width': 108}, {'height': 116, 'url': 'h... |
Native tool calling fails with Open WebUI & llama.cpp | 0 | I am using Open WebUI with Qwen 3.5 35B, and when using native tool calling against our enterprise MCP server, llama.cpp crashes; however, Ollama works fine with the same model. I am running llama.cpp with --jinja, but once native tool calling is enabled, the query just kills the server upon initiating any chat. An... | 2026-02-28T15:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/ | abstrkt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh65my | false | null | t3_1rh65my | /r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/ | false | false | self | 0 | null |
Anybody able to get Qwen3.5-35b-a3b working with claude code ? | 6 | I am facing multiple issues while running Qwen3.5-35b-a3b with claude code using llama.cpp.
1. Full Prompt reprocessing
2. Model automatically unloads / crashes during the 2nd or 3rd prompt.
I am currently on build: [https://github.com/ggml-org/llama.cpp/releases/tag/b8179](https://github.com/ggml-org/llama.cp... | 2026-02-28T15:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/ | anubhav_200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6455 | false | null | t3_1rh6455 | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=108&crop=smart&auto=webp&s=34176603bf4eaffed96f3e2d70ea022ac5c12c3c', 'width': 108}, {'height': 108, 'url': 'h... |
Your Ollama setup is private. Your memory layer probably isn't. | 0 | # Local inference, cloud memory — isn't that kind of missing the point?
Been thinking about this for a while and curious if others have hit the same wall.
Most of us running local models (Ollama, LM Studio, etc.) do it for one reason
above everything else: **we don't want our data leaving the machine.**
But then wh... | 2026-02-28T15:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/ | MotorAlternative8045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh639o | false | null | t3_1rh639o | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/ | false | false | self | 0 | null |
Experimenting with a middleware to compress LLM prompts and cut API costs by ~30%. Is this a real pain point? | 0 | Hey everyone, I'm looking for a reality check from folks who are actually running LLMs in production.
Like a lot of you, I've been wrestling with prompt bloat. Between massive system instructions, few-shot examples, and heavy RAG context, API costs (and latency) scale up incredibly fast as user volume grows.
To try a... | 2026-02-28T15:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rh631z/experimenting_with_a_middleware_to_compress_llm/ | Infamous-Cucumber-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh631z | false | null | t3_1rh631z | /r/LocalLLaMA/comments/1rh631z/experimenting_with_a_middleware_to_compress_llm/ | false | false | self | 0 | null |
Experimenting with token "surprisal" to compress prompts and cut API costs by ~30%. Is this a problem worth solving? | 1 | Hey everyone, I'm looking for a reality check from folks who are actually running LLMs in production.
Like a lot of you, I've been wrestling with prompt bloat. Between massive system instructions, few-shot examples, and heavy RAG context, API costs (and latency) scale up incredibly fast.
To try and fix this, I’ve bee... | 2026-02-28T15:52:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rh60z9/experimenting_with_token_surprisal_to_compress/ | Infamous-Cucumber-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh60z9 | false | null | t3_1rh60z9 | /r/LocalLLaMA/comments/1rh60z9/experimenting_with_token_surprisal_to_compress/ | false | false | self | 1 | null |
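Not the OP's implementation, but a minimal sketch of the surprisal idea: score each word by -log2 of its probability under a language model and drop the most predictable ones. A toy unigram frequency model stands in for the LM here; a real compressor (LLMLingua-style) would use a small causal LM's per-token logprobs.

```python
import math
from collections import Counter

def compress(prompt, corpus, keep_ratio=0.7):
    # Surprisal under a unigram model: -log2 p(word), with add-1 smoothing.
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    def surprisal(w):
        p = (counts.get(w.lower(), 0) + 1) / (total + len(counts))
        return -math.log2(p)
    words = prompt.split()
    # Keep the top fraction of words by surprisal (the least predictable ones).
    scored = sorted(words, key=surprisal, reverse=True)
    keep = set(scored[: max(1, int(len(words) * keep_ratio))])
    # Preserve original order; drop only low-surprisal (predictable) words.
    return " ".join(w for w in words if w in keep)
```

On a corpus full of function words, frequent tokens like "the" get dropped first while content words survive, which is the core of the ~30% savings claim.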
qwen3.5 35b-a3b evaded the zero-reasoning budget by doing its thinking in the comments | 172 | 2026-02-28T15:35:09 | crantob | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh5luv | false | null | t3_1rh5luv | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/ | false | false | 172 | {'enabled': True, 'images': [{'id': 'bh48tphl89mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?width=108&crop=smart&auto=webp&s=4052187e77bca747ccf7e9c1c5e02080f50450e3', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?width=216&crop=smart&auto=we... | |||
Qwen3.5 27b vllm Better jinja template for avoiding crashes at tool calls and disabling thinking | 2 | What it says in the title. Try this one especially if you run a quantized version:
{% set enable_thinking = false %}
{%- set image_count = namespace(value=0) %}
{%- set video_count = namespace(value=0) %}
{%- macro render_content(content, do_vision_count, is_system_content=false) %}
{... | 2026-02-28T15:34:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5loa/qwen35_27b_vllm_better_jinja_template_for/ | thibautrey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5loa | false | null | t3_1rh5loa | /r/LocalLLaMA/comments/1rh5loa/qwen35_27b_vllm_better_jinja_template_for/ | false | false | self | 2 | null |
A local LLM named SOMA | 0 | Treat her nicely and make the world a better place. | 2026-02-28T15:34:13 | https://github.com/unimaginative-artist/SOMA.git | Apprehensive-Water47 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh5l2l | false | null | t3_1rh5l2l | /r/LocalLLaMA/comments/1rh5l2l/a_local_llm_named_soma/ | false | false | default | 0 | null |
Tool that builds a searchable memory of my web reading? | 0 | Typical (web) bookmarking or notes-taking flows go like this:
\- You explicitly save something to your tool (Onenote/Browser bookmarks/...)
\- Optionally you organize it a bit
\- In future, you look it up
Problems:
\- It breaks your consumption flow when you have to stop, click 'save', and possibly also organi... | 2026-02-28T15:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5hl0/tool_that_builds_a_searchable_memory_of_my_web/ | semidarkmoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5hl0 | false | null | t3_1rh5hl0 | /r/LocalLLaMA/comments/1rh5hl0/tool_that_builds_a_searchable_memory_of_my_web/ | false | false | self | 0 | null |
unsloth/qwen3.5-35b-a3b Q4_K_M on macbook air m4 32GB seems slow | 1 | [removed] | 2026-02-28T15:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5bdi/unslothqwen3535ba3b_q4_k_m_on_macbook_air_m4_32gb/ | ChickenShieeeeeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5bdi | false | null | t3_1rh5bdi | /r/LocalLLaMA/comments/1rh5bdi/unslothqwen3535ba3b_q4_k_m_on_macbook_air_m4_32gb/ | false | false | self | 1 | null |
OpenClawVault | 1 | OpenClawVault — organized prompt library for local-first agents
| 2026-02-28T15:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rh54el/openclawvault/ | Much-Obligation-4197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh54el | false | null | t3_1rh54el | /r/LocalLLaMA/comments/1rh54el/openclawvault/ | false | false | self | 1 | null |
is there a trusty ai software that can do tasks in a browser | 1 | [removed] | 2026-02-28T15:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rh541r/is_there_a_trusty_ai_software_that_can_do_tasks/ | Few-Entrepreneur5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh541r | false | null | t3_1rh541r | /r/LocalLLaMA/comments/1rh541r/is_there_a_trusty_ai_software_that_can_do_tasks/ | false | false | self | 1 | null |
Config drift is the silent killer of local model setups | 0 | The part of running local models nobody warns you about is the config drift.
You get Ollama set up, maybe llama.cpp, everything works great on day one. Two weeks later you update the model, and half your prompts break because the system prompt formatting changed between quantizations. Or the template tags shifted. Or ... | 2026-02-28T15:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/ | Acrobatic_Task_6573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh52t9 | false | null | t3_1rh52t9 | /r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/ | false | false | self | 0 | null |
Best NVIDIA Jetson Orin Nano projects in 2026 — benchmarks included | 1 | 2026-02-28T15:01:39 | https://gist.github.com/yalexx/b4d2ae69073e37ea031cc3e19323a7bb | superactro | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh4swi | false | null | t3_1rh4swi | /r/LocalLLaMA/comments/1rh4swi/best_nvidia_jetson_orin_nano_projects_in_2026/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... | ||
I compiled every confirmed Rubin vs Blackwell spec, benchmark, and pricing data point so you don't have to | 0 | Spent a while pulling together all the confirmed Rubin specs from CES 2026, GTC 2025, and the Q4 FY2026 earnings call (Feb 25), plus current Blackwell cloud pricing and MLPerf benchmark results into one place.
Covers: B200 vs B300 vs Rubin side-by-side specs, real MLPerf throughput numbers (5,842 tok/s per GPU on Deep... | 2026-02-28T15:00:28 | https://blog.barrack.ai/nvidia-rubin-vs-blackwell-rent-now-or-wait | LostPrune2143 | blog.barrack.ai | 1970-01-01T00:00:00 | 0 | {} | 1rh4rsf | false | null | t3_1rh4rsf | /r/LocalLLaMA/comments/1rh4rsf/i_compiled_every_confirmed_rubin_vs_blackwell/ | false | false | 0 | {'enabled': False, 'images': [{'id': '66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=108&crop=smart&auto=webp&s=90548eee21bb5f9002a6e2613094facff8ecad2d', 'width': 108}, {'height': 113, 'url': 'h... | |
Okay, I'm overthinking... yes, yes you are Qwen 3.5 27b. | 0 | llama-b8178 default settings used with [https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q8\_K\_XL.gguf](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q8_K_XL.gguf)
MY INPUT: Hi.
OUTPUT:
Thinking Process:
1. \*\*Analyze the Request:\*\*
\* Input: "hi"
\* In... | 2026-02-28T14:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh4p8i | false | null | t3_1rh4p8i | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=108&crop=smart&auto=webp&s=28ab8efd47e48680d7f4b2e0d49f44780c2b79da', 'width': 108}, {'height': 116, 'url': 'h... |
How are you engaging with the AI podcast? | 0 | There are over 619.2 million podcast listeners worldwide. YouTube, Spotify, and Apple Podcasts lead the pack for global podcast dominance.
Now, when it comes to AI gen podcasts, it is already flooding the market. The tech is offering cost savings and opportunities for creators, but many in the industry worry that AI ... | 2026-02-28T14:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/ | ChrisJhon01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh4p4n | false | null | t3_1rh4p4n | /r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/ | false | false | self | 0 | null |
Just shipped v0.3.0 of my AI workflow engine. | 0 | Just shipped v0.3.0 of my workflow engine.
You can now run full automation pipelines with Ollama as the reasoning layer - not just LLM responses, but real tool execution:
LLM → HTTP → Browser → File → Email
All inside one workflow.
This update makes it possible to build proper local AI agents that actually do thing... | 2026-02-28T14:55:11 | Feathered-Beast | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh4nb2 | false | null | t3_1rh4nb2 | /r/LocalLLaMA/comments/1rh4nb2/just_shipped_v030_of_my_ai_workflow_engine/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'x1kgbxwl19mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=108&crop=smart&auto=webp&s=cbdf967ec15106b1f9f73dc30ec375cfbded8e79', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=216&crop=smart&auto=... | ||
Why are some still playing with old models? Nostalgia, obsession, or what? | 32 | I still see some folks mentioning models like Qwen-2.5, Gemma-2, etc., in their threads & comments.
We got Qwen-3.5 recently, after Qwen-3 last year. We also got Gemma-3 and are waiting for Gemma-4.
Well, I'm not talking about just their daily usage. They also create finetunes, benchmarks based on those old models. They spend t... | 2026-02-28T14:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh46g2 | false | null | t3_1rh46g2 | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/ | false | false | self | 32 | null |
Where to compare quants for different llms? | 3 | I am looking for sources where i can compare quantized versions of llms, like qwen 3 coder next or minimax m2.5.
So far i haven't been able to find anything other than these:
[https://github.com/electricazimuth/LocalLLM\_VisualCodeTest/?tab=readme-ov-file](https://github.com/electricazimuth/LocalLLM_VisualCodeTest... | 2026-02-28T14:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rh44g5/where_to_compare_quants_for_different_llms/ | milpster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh44g5 | false | null | t3_1rh44g5 | /r/LocalLLaMA/comments/1rh44g5/where_to_compare_quants_for_different_llms/ | false | false | self | 3 | null |
Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. | 544 | I know everyone has their own subjective take on what models are the best, at which types of tasks, at which sizes, at which quants, at which context lengths and so on and so forth.
But Qwen 3.5-35B-A3B has completely shocked me.
My use-case is pretty broad, but generally focuses around development tasks.
* I have a... | 2026-02-28T14:32:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/ | valdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh43za | false | null | t3_1rh43za | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/ | false | false | self | 544 | null |
Seeking Help Improving OCR in My RAG Pipeline (Contributors Welcome) | 2 | I’m building a RAG pipeline and currently running into one major issue: **poor OCR performance on PDFs that have a centered watermark on every page**. I’m using PyMuPDF, but the watermark gets treated as real text, which leads to messy extraction and hurts retrieval accuracy.
I’m looking for **suggestions, ideas, or c... | 2026-02-28T14:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/ | SprayOwn5112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3xey | false | null | t3_1rh3xey | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'lyWbWRHYwiA2y9Qbw7_VLS7yAbHgHbIcWY83zEsIk7s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lyWbWRHYwiA2y9Qbw7_VLS7yAbHgHbIcWY83zEsIk7s.png?width=108&crop=smart&auto=webp&s=9db15e32989913dd0eea685bda4ae666d2ceedbb', 'width': 108}, {'height': 108, 'url': 'h... |
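One library-agnostic heuristic that often helps here: a centered watermark is the rare span whose exact text and near-identical position recur on nearly every page. A sketch over a generic (text, bbox) span list, e.g. flattened from what PyMuPDF's page.get_text("dict") returns; the data structure and threshold are assumptions, not PyMuPDF API.

```python
from collections import Counter

def strip_repeated_spans(pages, min_page_frac=0.9):
    """pages: list of pages, each a list of (text, (x0, y0, x1, y1)) spans.
    Drops any span whose (text, rounded position) recurs on almost every
    page -- the signature of a centered watermark or running header."""
    def key(span):
        text, (x0, y0, x1, y1) = span
        # Round coordinates to the nearest 10pt so per-page jitter still matches.
        return (text, round(x0, -1), round(y0, -1))
    freq = Counter(key(s) for page in pages for s in page)
    threshold = min_page_frac * len(pages)
    return [[s for s in page if freq[key(s)] < threshold] for page in pages]
```

Running this before chunking keeps the watermark text out of your embeddings entirely, which usually matters more for retrieval than the OCR engine choice.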
RIP Gemma - Leave your memories here. | 2 | I remember it like it wasn't that long ago, the excitement of being up late at night reading the rumors about the new Gemma, until I could finally test it.
I remember the first time I could run a small model that was coherent and knew my language, and not just English.
I remember asking it to pretend to be a spaceshi... | 2026-02-28T14:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/ | DrNavigat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3thm | false | null | t3_1rh3thm | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/ | false | false | self | 2 | null |
Support Anthropic | 1 | [removed] | 2026-02-28T14:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3pit/support_anthropic/ | Dudebro-420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3pit | false | null | t3_1rh3pit | /r/LocalLLaMA/comments/1rh3pit/support_anthropic/ | false | false | self | 1 | null |
Mac M4 24GB local stack: Qwen2.5 14B + Cogito 14B + Nemotron 4B as router — anyone running something similar? | 0 | Hey everyone! I'm putting together a local LLM stack on a Mac M4 with 24GB RAM, running everything through Ollama. Here's what I've got so far:
* Qwen2.5:14B (Q4\_K\_M) — main workhorse for code, marketing copy, and longer texts
* Cogito:14B — backup for tasks that need stronger reasoning
* Nemotron Nano 4B (Q4\_K\_M)... | 2026-02-28T14:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/ | Outrageous_Corgi7553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3oty | false | null | t3_1rh3oty | /r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/ | false | false | self | 0 | null |
What are some of the good models to run on an iPhone 15 Pro Max? | 0 | I have an iPhone 15 Pro Max, and I want to run a benchmark test on the best AIs that my phone can run, not through code, but through much more common things, such as a school exam. | 2026-02-28T14:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3k0m/what_are_some_of_the_good_models_to_run_on_a/ | Difficult_Aerie737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3k0m | false | null | t3_1rh3k0m | /r/LocalLLaMA/comments/1rh3k0m/what_are_some_of_the_good_models_to_run_on_a/ | false | false | self | 0 | null |
Your OpenClaw | 0 | Most of you already know the popularity of the OpenClaw project. Some of you might have run it on a spare machine or in a VPS. I am sure many of us are not at all comfortable running it on our personal machines due to privacy and security concerns. That's why I developed Your-OpenClaw.
    1. It's in Python.
2. Codebase is not a... | 2026-02-28T13:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3bmn/your_openclaw/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3bmn | false | null | t3_1rh3bmn | /r/LocalLLaMA/comments/1rh3bmn/your_openclaw/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'y5ZXQ003CcfZxyyIGSnGpYwodPxa5JLytHHFHzoCuaE', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/y5ZXQ003CcfZxyyIGSnGpYwodPxa5JLytHHFHzoCuaE.png?width=108&crop=smart&auto=webp&s=1d87668518d4fcf5e24be77f6cb583d667af6a1a', 'width': 108}, {'height': 114, 'url': 'h... |
Just created Agent Skills beginner's guide | 0 | https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/ | 2026-02-28T13:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rh30ud/just_created_agent_skills_beginners_guide/ | simpleuserhere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh30ud | false | null | t3_1rh30ud | /r/LocalLLaMA/comments/1rh30ud/just_created_agent_skills_beginners_guide/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'khLNK_Ma6CXtkpVujSGuJO5PBM-XMpeWqoksTQQDDQs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/khLNK_Ma6CXtkpVujSGuJO5PBM-XMpeWqoksTQQDDQs.jpeg?width=108&crop=smart&auto=webp&s=1adaf1dc827fb6d64f37be34f5264e85ccc1e64c', 'width': 108}, {'height': 113, 'url': '... |
Benchmarking Open-Source LLMs for Security Research & Red Teaming | 10 | Commercial models are practically unusable for deep security research - they heavily filter prompts, and uploading sensitive logs or proprietary code to them is a massive privacy risk. I wanted to see if the current open-source alternatives are actually viable for red teaming workflows yet, so I spun up an isolated AWS... | 2026-02-28T13:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/ | dumbelco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2tmu | false | null | t3_1rh2tmu | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/ | false | false | self | 10 | null |
Swarm - Toy Project | 0 | [https://github.com/dafdaf1234444/swarm](https://github.com/dafdaf1234444/swarm)
(according to swarm - llm generated) **Swarm is a repository protocol for multi-session AI work: each session reads shared state, does work, writes back, and leaves the system more useful for the next session.**
From me,
Hey, I have ... | 2026-02-28T13:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2ocg/swarm_toy_project/ | dafdaf1234444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2ocg | false | null | t3_1rh2ocg | /r/LocalLLaMA/comments/1rh2ocg/swarm_toy_project/ | false | false | self | 0 | null |
OpenAI pivot investors love | 1,975 | 2026-02-28T13:25:38 | PaceImaginary8610 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh2lew | false | null | t3_1rh2lew | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/ | false | false | 1,975 | {'enabled': True, 'images': [{'id': 'wfho2ytml8mg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/wfho2ytml8mg1.jpeg?width=108&crop=smart&auto=webp&s=56d0239194a4a06a5cea0105729f65e20ddc8af3', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/wfho2ytml8mg1.jpeg?width=216&crop=smart&auto=w... | |||
Benchmarking Open-Source LLMs for Offensive Cybersecurity | 1 | [removed] | 2026-02-28T13:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2jnv/benchmarking_opensource_llms_for_offensive/ | dumbelco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2jnv | false | null | t3_1rh2jnv | /r/LocalLLaMA/comments/1rh2jnv/benchmarking_opensource_llms_for_offensive/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8.png?width=108&crop=smart&auto=webp&s=321732378ed6b4d6fe0c49407df75ed0c4221b23', 'width': 108}, {'height': 216, 'url': '... |
Building agents is fun. Evaluating them is not. | 0 | A few weeks ago I posted here about experimenting with autonomous agents. Back then I was just excited that I got them to work. Now I’m stuck on something I didn’t expect to be this hard: Figuring out whether they’re actually reliable.
Building the agent was fun. Evaluating it is… much less clear.
Once you let an age... | 2026-02-28T13:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/ | Fluffy_Salary_5984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh28o8 | false | null | t3_1rh28o8 | /r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/ | false | false | self | 0 | null |
Qwen3.5 prefill latency extremely slow with large context | 1 | Hi everyone, I wanted to check if others are seeing similar behavior with Qwen3 and Qwen3.5 models on llama.cpp or if there is any known workaround.
I am running llama server built from the latest commit as of today on my AM... | 2026-02-28T13:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/ | Comfortable-Fudge233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh22j0 | false | null | t3_1rh22j0 | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/ | false | false | self | 1 | null |
Using tools | 0 | I've managed to get some models running locally thanks to this sub.
I wonder, how do go about getting a coding model using tools? I'm trying to replicate the Claude experience that I have at work where it can read files, write files, use google, write python scripts to solve problems, etc. | 2026-02-28T12:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1zki/using_tools/ | Ok-Secret5233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1zki | false | null | t3_1rh1zki | /r/LocalLLaMA/comments/1rh1zki/using_tools/ | false | false | self | 0 | null |
i9-19400F, RTX 4070 Super (12GB), 32GB DDR5 RAM. Debating between Ollama and LM Studio, and am an absolute noob to Local model running. Use cases would be coding and RP Independently | 0 | Basically above. Also not tryna stress my system too much in order to make it last, tho i doubt thats an issue. Mostly looking for ease of use for the wrapper and efficiency/quality for the model(s).
As noted before, use cases would be Coding (file gen/editing, game design discussion, on-the-spot questions) and Rolepl... | 2026-02-28T12:44:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/ | tableball35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1q8j | false | null | t3_1rh1q8j | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/ | false | false | self | 0 | null |
Benchmarks + Report: Optimized Cosmos-Reason2 (Qwen3-VL) for on-device inference on 8GB RAM (Jetson Orin Nano Super) | 2 | Hej, Researcher from Embedl here! Leading up to Nvidia GTC we have been focusing on getting nvidia/Cosmos-Reason2-2B (fine-tuned variant of Qwen3-VL) edge-ready. Meaning, enabling it for the full Jetson-lineup: From 8GB RAM on Jetson Orin Nano to 64GB RAM on Jetson AGX Orin up to 128GB RAM on Jetson AGX Thor \~ a bit o... | 2026-02-28T12:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/ | tag_along_common | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1haa | false | null | t3_1rh1haa | /r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/ | false | false | 2 | null | |
Swapping gpt-oss-120b for qwen3.5-122b on a 128GB system? Mind your KV Cache! (Strix Halo Math inside) | 1 | Hey everyone,
Seeing a lot of hype about the new `qwen3.5-122b-a10b` and people treating it as a 1:1 drop-in replacement for `gpt-oss-120b`. If you're running a 128 GB unified memory setup, you might be in for a rude awakening when you try to load a large prompt.
Here is the reality check using my own **Strix Halo 12... | 2026-02-28T12:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/ | Reasonable-Yak-3523 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1ec9 | false | null | t3_1rh1ec9 | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/ | false | false | self | 1 | null |
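For anyone redoing this math for their own setup: per-token KV cache is 2 (K and V) x layers x kv_heads x head_dim x bytes per element. The architecture numbers below are placeholders, not qwen3.5-122b's real config; read the actual values from the model's config.json.

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, dtype_bytes=2):
    # K and V each store (n_kv_heads * head_dim) values per layer per token;
    # dtype_bytes=2 assumes fp16/bf16 cache, 1 would be q8 KV quantization.
    total = 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * ctx_len
    return total / 2**30

# Illustrative numbers only -- check the real config of the model you run.
print(round(kv_cache_gib(n_layers=48, n_kv_heads=8, head_dim=128, ctx_len=131072), 1))  # 24.0
```

The point generalizes: two models with similar parameter counts can differ by 2-4x in KV cache per token, so "fits in 128 GB" depends on context length, not just weights.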
Run Mistral vision model 24B locally using multiple PCs: Ray + vLLM | 1 | I have multiple PCs on the same LAN, each running a single RTX 2060.
I want to use vLLM and Ray to pool VRAM and run the 24B Mistral.
https://docs.vllm.ai/en/v0.5.1/serving/distributed_serving.html?utm_source=perplexity | 2026-02-28T12:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1cq7/run_mistral_vision_model_24b_localy_using_multi/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1cq7 | false | null | t3_1rh1cq7 | /r/LocalLLaMA/comments/1rh1cq7/run_mistral_vision_model_24b_localy_using_multi/ | false | false | self | 1 | null |
Why does qwen 3.5 think it's 2024 | 0 | Why does my qwen 3.5 35B think it's 2024, is trained as per its words until early 2026 and doesn't know about dotnet 10.. | 2026-02-28T12:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/ | Uranday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1a6v | false | null | t3_1rh1a6v | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/ | false | false | self | 0 | null |
Best Qwen 3.5 variant for 2x5060ti/16 + 64 GB Ram? | 0 | What variant would you pick for coding or agentic purposes?
Also does Qwen 3.5 really suffer from the “overthinking” issue that keeps getting mentioned here? | 2026-02-28T12:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/ | andy_potato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh14cs | false | null | t3_1rh14cs | /r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/ | false | false | self | 0 | null |
Qwen3.5-35B nailed my simple multiagent workflow that other sub-100B models couldn't! | 55 | I ran the same test I shared last week, and Qwen3.5-35B nailed it!!!
This is the first time I have seen a sub-100B model reliably complete the task. Not only did it finish the task, but the output quality was solid as well.
One thing I noticed though is that the model thinks with a lot of tokens, so it takes a while!... | 2026-02-28T12:10:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh12xz | false | null | t3_1rh12xz | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=108&crop=smart&auto=webp&s=86765e992bac383d2d41d1a8fe1dbfdf90f132c1', 'width': 108}, {'height': 108, 'url': 'h... |
New AI Fundamental Research Company/Lab | 0 | Okay, I know whoever reads this will probably say I'm nuts or a crackhead for going head-on against a big giant, but I will do it—if not today, then tomorrow.
I'm saying I'm starting a Research Lab/company—for obvious reasons—I need money because it's enough to build things underground, so I'll start doing that to ear... | 2026-02-28T12:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/ | Recent_Juggernaut859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1024 | false | null | t3_1rh1024 | /r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/ | false | false | self | 0 | null |
Benchmarks + Report: Optimizing Comos-Reason2 (Qwen3-VL) for on-device inference - NVIDIA Jetson Orin Nano 8GB RAM | 1 | Hej r/LocalLlama, Researcher from Embedl here! Leading up to Nvidia GTC we have been focusing on getting nvidia/Cosmos-Reason2-2B (fine-tuned variant of Qwen3-VL) edge-ready. Meaning, enabling it for the full Jetson-lineup: From 8GB RAM on Jetson Orin Nano to 64GB RAM on Jetson AGX Orin up to 128GB RAM on Jetson AGX Th... | 2026-02-28T12:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0zwr/benchmarks_report_optimizing_comosreason2_qwen3vl/ | tag_along_common | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0zwr | false | null | t3_1rh0zwr | /r/LocalLLaMA/comments/1rh0zwr/benchmarks_report_optimizing_comosreason2_qwen3vl/ | false | false | 1 | null | |
Suggest a vision instruct model that I can run | 0 | PC system
2060 6gb
32gb ram | 2026-02-28T12:05:35 | Quiet_Dasy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh0zcn | false | null | t3_1rh0zcn | /r/LocalLLaMA/comments/1rh0zcn/suggest_me_vision_instrcut_model_that_i_can_run/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'mpwbfumc78mg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?width=108&crop=smart&auto=webp&s=e59b1d71a65085b93dbd00e7c05692849852c4be', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?width=216&crop=smart&auto=... | ||
Qwen3.5 this week had two dominant posts: "gamechanger for agentic coding" (1k↑) and "craters on hard tasks" (500↑). Both are right. Here's how I'm thinking about where it actually fits. | 1 | [removed] | 2026-02-28T12:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0z3l/qwen35_this_week_had_two_dominant_posts/ | Visible_Homework_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0z3l | false | null | t3_1rh0z3l | /r/LocalLLaMA/comments/1rh0z3l/qwen35_this_week_had_two_dominant_posts/ | false | false | self | 1 | null |
How to use Qwen 3.5 35B with any agentic coding tool? | 2 | I have the model set up with llama.cpp and I can chat with it on 127.0.0.1:8080.
How do I get it to work with something like Cline/Roo/Kilo Code? I'm not concerned about which one; any of them will do. I tried setting it up via OpenAI-compatible, but the model choice doesn't show up, and the API calls aren't working.
Is t... | 2026-02-28T12:04:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/ | soumen08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0yim | false | null | t3_1rh0yim | /r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/ | false | false | self | 2 | null |
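For questions like the one above (pointing an OpenAI-compatible coding tool at a local llama.cpp server), the usual pattern is to target the server's `/v1` prefix with any non-empty API key. A minimal stdlib-only sketch — the model name `qwen3.5-35b` is a hypothetical placeholder; use whatever your server reports from `/v1/models`:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080/v1"  # llama-server's OpenAI-compatible prefix


def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,  # llama-server generally accepts any string here
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer none",  # dummy key; many tools insist on one
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Example (requires a running llama-server):
# print(chat("qwen3.5-35b", "Say hello"))
```

In Cline/Roo/Kilo the same three pieces usually apply: a base URL ending in `/v1`, any non-empty API key, and a model ID typed in manually when the dropdown stays empty.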
Unsloth Dynamic 2.0 GGUFs now selectively quantizes layers much more intelligently and extensively. | 172 | 2026-02-28T12:03:25 | https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs | paranoidray | unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1rh0xwk | false | null | t3_1rh0xwk | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/ | false | false | 172 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... | ||
Made a 12B uncensored RP merge, putting it out there - MistralNemoDionysusV3 | 9 | I wasn't really finding a model that felt right for RP — most either felt too restricted or the character voices were flat. So I put together this merge from various Mistral Nemo versions and it kind of became my daily driver.
It's a 12B uncensored model focused on roleplay. From my own use it handles character voice ... | 2026-02-28T12:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/ | Biscotto58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0wqj | false | null | t3_1rh0wqj | /r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=108&crop=smart&auto=webp&s=335b24fe5e742ae80ea4ca599902644b3c0d425a', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3.5-35B-A3B Q5_K_M: Best Model for NVIDIA 16GB GPUs | 2 | AesSedai/Qwen3.5-35B-A3B-GGUF Q5_K_M works well on a 5070 Ti 16GB.
57 tokens/s
Mean KLD: 0.0058
Within the Qwen3.5-35B-A3B-GGUF series, this model delivers the best performance on NVIDIA 16GB GPUs.
Config: LM Studio, -c 71680, GPU offload 40, K cache q8_0, V cache q8_0 | 2026-02-28T11:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/ | moahmo88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0msv | false | null | t3_1rh0msv | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/ | false | false | self | 2 | null |
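The q8_0 KV-cache setting in the config above is what makes a 71680-token context feasible on 16 GB. A back-of-the-envelope sketch of why — note the layer/head numbers below are hypothetical placeholders for illustration, not the real Qwen3.5-35B-A3B architecture; ggml's q8_0 stores 8-bit values plus a per-block fp16 scale (34 bytes per 32 elements ≈ 1.0625 bytes/element):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   n_ctx: int, bytes_per_elem: float) -> float:
    """Total KV-cache size: K and V tensors for every layer at full context."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem


# Hypothetical architecture numbers, for illustration only.
LAYERS, KV_HEADS, HEAD_DIM, CTX = 48, 4, 128, 71680

f16 = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, CTX, 2.0)     # fp16 cache
q8 = kv_cache_bytes(LAYERS, KV_HEADS, HEAD_DIM, CTX, 1.0625)   # q8_0 cache

print(f"fp16 KV cache: {f16 / 2**30:.1f} GiB")
print(f"q8_0 KV cache: {q8 / 2**30:.1f} GiB")
```

Under these placeholder numbers the q8_0 cache is roughly half the fp16 size, which is the headroom that lets the weights and a long context coexist on a 16 GB card.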
hi! i'm a total noob | 0 | hey guys! yeah, i' m a real noob. I'm new with LM Studio. I'm looking for an abliterated model for creating images. Any good picks you could share with me? | 2026-02-28T11:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/ | PizzaSouthern5853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0hgl | false | null | t3_1rh0hgl | /r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/ | false | false | self | 0 | null |
Tempted to prompt qwen on this craigslist rig but concerned it may tell me to put it out of its misery | 17 | What’s the most cursed way you’ve hit 32GB VRAM? | 2026-02-28T11:29:45 | prescorn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh0bkz | false | null | t3_1rh0bkz | /r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/ | false | false | 17 | {'enabled': True, 'images': [{'id': '7ivy7vjy08mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=108&crop=smart&auto=webp&s=44e4b5ee2e3e7b84380821ce6f558b9f9c55c077', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=216&crop=smart&auto=... | ||
Frustration building out my local models | 1 | I have been building, slowly, with the help of google and various chatbots and reddit posts, a local AI capability. Yesterday I hit a brick wall trying to add one more local Ollama instance for some unknown reason. Or so I thought.
The picture is that I was trying to add one more Ollama instance to a "mostly" workin... | 2026-02-28T11:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/ | tahaan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0akj | false | null | t3_1rh0akj | /r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/ | false | false | self | 1 | null |
DeepSeek V4 will be released next week and will have image and video generation capabilities, according to the Financial Times | 607 | Financial Times: DeepSeek to release long-awaited AI model in new challenge to US rivals (paywall): [https://www.ft.com/content/e3366881-0622-40a7-9c34-a0d82e3d573e](https://www.ft.com/content/e3366881-0622-40a7-9c34-a0d82e3d573e) | 2026-02-28T11:25:49 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh095c | false | null | t3_1rh095c | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/ | false | false | 607 | {'enabled': True, 'images': [{'id': 'kwyym79lz7mg1', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=108&crop=smart&auto=webp&s=a5793f3d32781f4ff753e02882e90423136a2294', 'width': 108}, {'height': 346, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=216&crop=smart&auto=... | ||
Copy paste error or does vllm team know something we don't? | 5 | 2026-02-28T11:11:10 | reto-wyss | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh002v | false | null | t3_1rh002v | /r/LocalLLaMA/comments/1rh002v/copy_paste_error_or_does_vllm_team_know_something/ | false | false | 5 | {'enabled': True, 'images': [{'id': '9c0zixu0e7mg1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=108&crop=smart&auto=webp&s=22b5775bd4cf75c5311c23d2127555da945f12b5', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=216&crop=smart&auto=webp... | |||
are you ready for small Qwens? | 404 | 13-9=4 | 2026-02-28T11:02:10 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rgzul5 | false | null | t3_1rgzul5 | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/ | false | false | 404 | {'enabled': True, 'images': [{'id': 'bwc4xcf0w7mg1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=108&crop=smart&auto=webp&s=e2b2d795fae8415eb3eb4b5c9185154a676b0a55', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=216&crop=smart&auto=webp... | ||
An open-source local speech AI benchmarking tool - compare STT, TTS, emotion detection & diarization models side by side | 17 | Speech models have been a constant wrestle. Whisper, Bark, Vosk, Kokoro, all promising the world but often choking on real hardware. Dozens out there, no simple way to pit them against each other without the cloud leeches draining data. Speechos emerged from the quiet frustration of it all.
It's local-first, everythin... | 2026-02-28T10:38:35 | https://www.reddit.com/gallery/1rgzga6 | hamuf | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rgzga6 | false | null | t3_1rgzga6 | /r/LocalLLaMA/comments/1rgzga6/an_opensource_local_speech_ai_benchmarking_tool/ | false | false | 17 | null |