title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp, 2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp, 1970-01-01 00:00:00 – 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool) | media (string, 646–1.8k, nullable) | name (string, 10) | permalink (string, 33–82) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213, nullable) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How do I figure out -b batch size to increase token speed? | 4 | llama-bench says Qwen3.5 and Qwen3 Coder Next are not supported?
1. How are you figuring out what batch size and ub (whatever that does) to try?
2. Does it actually make a speeeeed difference?
3. Will batch size decrease quality? | 2026-02-28T19:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbqoq | false | null | t3_1rhbqoq | /r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/ | false | false | self | 4 | null |
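A practical way to answer question 1 above is an empirical sweep rather than theory. Below is a minimal Python sketch that only builds the candidate llama-bench invocations for such a sweep; the `-b`/`-ub` flags, the `-p`/`-n` sizes, and the model filename are assumptions based on recent llama.cpp builds, so check `llama-bench --help` against your version before running them:

```python
from itertools import product

def sweep_commands(model_path, batch_sizes=(256, 512, 1024, 2048),
                   ubatch_sizes=(128, 256, 512)):
    """Build llama-bench command lines for a batch-size sweep.

    -b (logical batch) caps how many tokens can be queued per step;
    -ub (physical micro-batch) is how many get computed at once.
    """
    cmds = []
    for b, ub in product(batch_sizes, ubatch_sizes):
        if ub > b:  # a micro-batch larger than the logical batch makes no sense
            continue
        cmds.append(f"llama-bench -m {model_path} -b {b} -ub {ub} -p 2048 -n 64")
    return cmds

for cmd in sweep_commands("qwen3.5-coder.gguf")[:3]:
    print(cmd)
```

Run each line, compare the prompt-processing throughput column, and keep the fastest pair. On question 3: batch size only changes how prompt tokens are grouped for computation, so it should not change output quality, only prefill speed.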
Saw someone bridge Claude Code into chat apps — feels like ChatOps for AI agents | 0 | I came across an interesting project recently that connects Claude Code to messaging platforms and lets you interact with it through chat apps instead of a local terminal.
The idea is surprisingly simple:
Claude Code keeps running locally, and a small bridge relays messages between the agent and platforms like Slack or Telegram — so you can trigger tasks or check progress remotely without exposing your machine publicly.
What I found interesting isn’t just the tool itself, but the interaction model. It feels a bit like a modern version of ChatOps, except the “bot” is now an AI coding agent.
It made me wonder whether chat might actually become a more natural interface for coding agents compared to dashboards or web UIs.
Curious how others here are handling workflows around Claude Code or similar local agents:
* remote desktop?
* terminals over SSH?
* custom UIs?
* or messaging-based setups?
Link for anyone curious about the implementation:
[https://github.com/chenhg5/cc-connect](https://github.com/chenhg5/cc-connect)
Mainly sharing because the idea itself felt worth discussing. | 2026-02-28T19:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbo40/saw_someone_bridge_claude_code_into_chat_apps/ | chg80333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbo40 | false | null | t3_1rhbo40 | /r/LocalLLaMA/comments/1rhbo40/saw_someone_bridge_claude_code_into_chat_apps/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=108&crop=smart&auto=webp&s=151f64c22cba469a3374c4642608ffcfeb1de312', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=216&crop=smart&auto=webp&s=a7434c252c4a48864e0acde3749bf06c02eb72f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=320&crop=smart&auto=webp&s=656d7f772366bb6bb4f9d98c2d2d9071878cce69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=640&crop=smart&auto=webp&s=a3bf83ae4d50612deebd9fddc5ec66dc7ab9157e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=960&crop=smart&auto=webp&s=9b05b3ef50e67187f53316686ffe3bea8155f17a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?width=1080&crop=smart&auto=webp&s=7ebc50e97bd2da81ece3bdf5579527bf644f1ee2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3GLTNGwy878wDzNd-blWWaXssUSPARpfwfD4i1-xmws.png?auto=webp&s=5d4aaf15e1a25d83c5dcb31af354daf0e526f27e', 'width': 1200}, 'variants': {}}]} |
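The bridge pattern the post describes is small enough to sketch. This is not the cc-connect implementation, just an illustration of the relay idea with in-memory queues standing in for both the chat platform and the local agent process (all names here are invented):

```python
import queue

class ChatBridge:
    """Relay messages between a chat platform and a local coding agent.

    In a real bridge the inbound side would be a Slack/Telegram webhook or
    long-poll and the agent side would be the Claude Code process; here both
    are plain queues so the flow stays visible.
    """
    def __init__(self, agent):
        self.agent = agent             # callable: prompt -> reply
        self.inbound = queue.Queue()   # messages arriving from chat
        self.outbound = queue.Queue()  # replies to push back to chat

    def pump_once(self):
        """Relay a single message, if one is waiting."""
        try:
            msg = self.inbound.get_nowait()
        except queue.Empty:
            return False
        self.outbound.put(self.agent(msg))
        return True

# Stand-in "agent" that just acknowledges tasks.
bridge = ChatBridge(agent=lambda m: f"started: {m}")
bridge.inbound.put("run the test suite")
bridge.pump_once()
print(bridge.outbound.get())
```

Because the agent keeps running locally and only text crosses the bridge, nothing on the machine has to be exposed publicly.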
Help: Extremely slow Prompt Processing (Prefill) on i3-8100 / 8GB RAM / UHD 630 is making BrowserOS fail | 0 | I’m running LM Studio on a low-spec machine and my Prompt Processing is so slow that my "BrowserOS" interface keeps timing out or failing. Once it starts generating (eval), the speed is okay, but the initial "thinking" phase takes forever.
My Specs:
CPU: Intel i3-8100 (4 Cores)
RAM: 8GB (Total system RAM)
GPU: Intel UHD 630 iGPU
Models: Gemma 3 1B, Qwen 1.7B, Ministral 3B (All Q4 GGUF)
What I've tried:
Using Q4 quants to save space.
Running in LM Studio with default settings.
The Issue:
It feels like the CPU is bottlenecked during the prefill stage. Since my iGPU shares system RAM, I think I’m running out of memory and the system is swapping to the disk.
Questions:
How many GPU Layers should I offload to a UHD 630 to speed up prompt processing without crashing the UI?
Would switching to Ollama (CLI) or KoboldCPP improve prefill speeds over LM Studio's Electron interface?
Are there specific BLAS or CLBlast settings for Intel Integrated Graphics that help with prompt ingestion?
Is there an unlimited way to use an online LLM? | 2026-02-28T19:29:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rhblei/help_extremely_slow_prompt_processing_prefill_on/ | broSleepNow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhblei | false | null | t3_1rhblei | /r/LocalLLaMA/comments/1rhblei/help_extremely_slow_prompt_processing_prefill_on/ | false | false | self | 0 | null |
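On the swapping suspicion in the post above: a back-of-envelope memory check often settles it. The sketch below uses the standard fp16 KV-cache formula (2 tensors x tokens x layers x KV heads x head dim x bytes per value); the layer/head numbers plugged in are rough guesses for a ~1B model, not exact specs:

```python
def fits_in_ram(model_file_gb, ctx_tokens, n_layers, n_kv_heads, head_dim,
                kv_bytes=2, ram_gb=8.0, os_overhead_gb=2.5):
    """Rough check: model weights plus fp16 KV cache vs. free system RAM."""
    # 2 tensors (K and V) per layer, per token, per KV head, per head dim
    kv_gb = 2 * ctx_tokens * n_layers * n_kv_heads * head_dim * kv_bytes / 1e9
    total = model_file_gb + kv_gb
    return total, total <= ram_gb - os_overhead_gb

# Numbers below are rough guesses for a ~1B Q4 model, not exact specs.
total, ok = fits_in_ram(model_file_gb=0.8, ctx_tokens=4096,
                        n_layers=26, n_kv_heads=1, head_dim=256)
print(f"{total:.2f} GB needed, fits: {ok}")
```

If the total for your model plus your configured context comes out near or above free RAM, swapping (not the CPU) is the prefill bottleneck, and shrinking the context window helps more than offload tweaks.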
ShunyaNet Sentinel: A Self-Hosted RSS Aggregator for Local LLM Analysis (with a not-so-subtle 90s cyberpunk theme...) | 8 | Hello all — A friend suggested I share my fun side-project here, too.
**ShunyaNet Sentinel** is a lightweight, ridiculously named, cyberpunk-themed RSS monitoring tool that sends feed content to a locally hosted LLM for analysis and delivers alerts/summaries to the GUI and optionally Slack (so you can get notifications on your phone!). It is compatible with LM Studio, Ollama, and OpenAI (via API...)
The idea was to replace algorithmic filtering with something prompt-driven and fully under my hardware control. You define topics of interest, load RSS feeds, and let the model triage the noise.
I included a few example topic lists (e.g., general conflict monitoring, Iran-focused monitoring given recent headlines) and sample RSS bundles to show how it can be tailored to specific regions or themes. There are a variety of potential use cases; I recently used it to monitor local news while traveling through rural India.
I intend to expand the type of data feeds it can ingest and fine-tune the overall experience. But, right now I'm focusing on refining the standard prompts.
This works well with a variety of models (with thinking turned off or suppressed); Hermes 70b is a go-to for me. GPT OSS 120b or 20b and abliterated Gemmas are great, too. It should work well with smaller models - so long as they can follow instructions well.
GitHub:
[https://github.com/EverythingsComputer/ShunyaNet-Sentinel](https://github.com/EverythingsComputer/ShunyaNet-Sentinel)
Anyway, that's all. Have fun — feedback welcome. | 2026-02-28T19:23:08 | https://v.redd.it/47wl3w5bbamg1 | _WaterBear | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhbfya | false | null | t3_1rhbfya | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/ | false | false | 8 | null |
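The triage loop described above (RSS in, prompt-driven filtering out) can be sketched in a few lines. This is not ShunyaNet's code, just an illustration using stdlib XML parsing and a hand-written RSS snippet; the actual HTTP call to the local model is left as a comment:

```python
import xml.etree.ElementTree as ET

RSS = """<rss><channel>
  <item><title>Flooding closes highway in Kerala</title></item>
  <item><title>Local cricket club wins final</title></item>
</channel></rss>"""

TOPICS = ["natural disasters", "transport disruption"]

def triage_prompt(rss_xml, topics):
    """Build the prompt a local model would score each headline against."""
    titles = [i.findtext("title") for i in ET.fromstring(rss_xml).iter("item")]
    lines = "\n".join(f"- {t}" for t in titles)
    return (f"Topics of interest: {', '.join(topics)}.\n"
            f"For each headline below, answer RELEVANT or SKIP:\n{lines}")

prompt = triage_prompt(RSS, TOPICS)
print(prompt)
# The prompt would then be POSTed to a local OpenAI-compatible endpoint,
# e.g. LM Studio's /v1/chat/completions, and only RELEVANT items surfaced.
```

Because the filtering rule is just a prompt, changing what counts as noise is a text edit rather than a retraining job, which is the point of the tool.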
Hardware requirements for running Claude-like models locally? | 1 | [removed] | 2026-02-28T19:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbe2m/hardware_requirements_for_running_claudelike/ | No-Data-8738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbe2m | false | null | t3_1rhbe2m | /r/LocalLLaMA/comments/1rhbe2m/hardware_requirements_for_running_claudelike/ | false | false | self | 1 | null |
gemini ultra vs pro actually different or just a scam | 0 | thinking about paying for gemini ultra but kinda skeptical rn. is it physically a bigger model under the hood, or did google just take pro, remove some limits and slap a price tag on it? has anyone actually tested them side by side on complex coding or logic stuff? feels like it might just be a marketing gimmick. let me know if you guys have seen actual technical proof or if im just paying for the name | 2026-02-28T19:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/ | ebosha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbbq1 | false | null | t3_1rhbbq1 | /r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/ | false | false | self | 0 | null |
Hardware requirements for running Anthropic-like models locally? | 1 | [removed] | 2026-02-28T19:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rhbbhl/hardware_requirements_for_running_anthropiclike/ | No-Data-8738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhbbhl | false | null | t3_1rhbbhl | /r/LocalLLaMA/comments/1rhbbhl/hardware_requirements_for_running_anthropiclike/ | false | false | self | 1 | null |
Agents are here! But E-commerce? Stuck in the past. Sandbox live for developers to play with | 0 | [payclaw.io](http://payclaw.io) | 2026-02-28T19:15:54 | https://v.redd.it/5341svo1camg1 | Opposite-Exam3541 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhb99s | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5341svo1camg1/DASHPlaylist.mpd?a=1774898180%2CMjIwZDM2MjMzNzY4YTI0MmQyOGRhMzNhMDNkYTlhMGVlZjUwZjdhYTg3MjVlYzMxZDNkN2EzNTIxNTkzOGJlOQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/5341svo1camg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/5341svo1camg1/HLSPlaylist.m3u8?a=1774898180%2CNWJlZDZhNDRlOThhZWQwMTlkZjM4Nzk3ODJlN2RlOTJiMDZkODYyOWNlMjg4MDQwZmVmYTM1NDBlZmY0MDEyMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5341svo1camg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1rhb99s | /r/LocalLLaMA/comments/1rhb99s/agents_are_here_but_ecommerce_stuck_in_the_past/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'd3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ec5efb6855d2e2c346ad7edaedd811ba062a139', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=216&crop=smart&format=pjpg&auto=webp&s=9714361af0c6a6ccd2d1a69c72a1cdea3943caff', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=320&crop=smart&format=pjpg&auto=webp&s=10e5e7d386fa79682784a2f13e3a3c8ba685a1b0', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=640&crop=smart&format=pjpg&auto=webp&s=00875e1383495f10e0de1de5a5bf6c7ef45097c2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=960&crop=smart&format=pjpg&auto=webp&s=abba8fa5f41b9654fddbae5fa910b245619ab223', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=de76695a6c37a145d88123a8692bb3edc52a0d46', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/d3JnMHpwcDFjYW1nMQSd0pU1Chu3VTd5dIMUbT7iY4L3HHleVYQuzaz-Isdf.png?format=pjpg&auto=webp&s=ebdf7c9ffc16363db1bac7093e9d9156e349ed29', 'width': 1280}, 'variants': {}}]} | |
fine tuning on proprietary data is way harder to deploy than anyone tells you and most of it has nothing to do with the model | 15 | so we needed to fine tune on client data. sensitive stuff,, not nuclear level but the kind where if it leaks or somehow ends up in some upstream training pipeline our client relationship is basically done...
figured this would take a few weeks. dataset prep, training runs, eval, deploy. normal ml flow right...
three weeks in and we hadnt written a single training script yet lol
the actual blocker was way more boring than i expected. where does the training data go, who can access it, what exactly is logged by default, does opting out require some contract we cant sign in time, does the deployment endpoint share infra with other tenants... none of this is explained in one clean place. you either read the tos and dpa line by line like a lawyer or email sales and wait days for a reply...
together was one of the first we looked at. their public docs talk about data handling and settings, but when you are dealing with legal teams, screenshots of docs arent enough. they want explicit contractual language. so suddenly you are not thinking about hyperparams anymore, you are thinking about msa wording and retention clauses...
fireworks similar story. technically solid product honestly... but again, the question wasnt can it fine tune. the question was can i hand this to our dpo and not get it immediately rejected. enterprise options exist but once you go down that road its contracts, commitments, timelines, not just api keys and credits...
replicate is great for deployment and inference... super clean experience there. but for what we needed at scale it felt more like a hosting layer than full blown training infra. not bad, just not aligned with this use case...
we probably spent a week just emailing back and forth with sales at different providers trying to get clear yes or no answers on data handling. that week felt more exhausting than the actual ml work...
eventually we landed on deepinfra. not because it was some magical obvious winner... it was more like the least painful option that cleared the compliance checkboxes fast enough for legal to say ok move ahead. default retention posture, cert paperwork ready, dedicated endpoint options available. that was enough for us to finally start the actual project...
the fine tuning itself had its own problems but thats another post...
what surprised me most is that nobody really talks about this part. every blog post jumps straight into dataset prep and hyperparameters and eval metrics... but if your data is even slightly sensitive, half your timeline might just be legal and compliance research before you touch a single training run...
curious if others just accept this as the cost of doing business or if anyone found a cleaner path upfront... | 2026-02-28T19:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/ | Olivia_Davis_09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhb1xb | false | null | t3_1rhb1xb | /r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/ | false | false | self | 15 | null |
13" M1 MBP instead of M4 Mac Mini | 0 | I came across [this article on 𝕏](https://x.com/0xPhantomDefi/status/2023385836893483328) where they used Clawdbot with polymarket to make money. Can someone tell me if this is legit or not?
And if it is legit, will my 6 year old 13" M1 Macbook Pro with 16 GB RAM be sufficient to run Clawdbot? Or is it better to go with a M4 Mac mini?
I do also have an 16" M1 Pro with 16 GB RAM as my daily. Tho, I do not want to sacrifice it to Clawdbot for this purpose.
I will have to pretty much erase everything on that laptop to make sure Clawdbot cannot access anything I do not want it to.
Also, why are people buying Mac minis instead of Macbooks? Having a screen connected to your 24/7 "server" must be more convenient with a macbook than a mac mini, or am I missing something? | 2026-02-28T19:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/ | TaaDaahh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhazbc | false | null | t3_1rhazbc | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/ | false | false | self | 0 | null |
Framework-agnostic AI agents defined entirely in git (clone = agent) | 1 | [removed] | 2026-02-28T18:54:25 | https://github.com/open-gitagent/gitagent | Reasonable_Play_9632 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rhap8h | false | null | t3_1rhap8h | /r/LocalLLaMA/comments/1rhap8h/frameworkagnostic_ai_agents_defined_entirely_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=108&crop=smart&auto=webp&s=044da642183d793cc0836aa9dc0f889ee6d48fb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=216&crop=smart&auto=webp&s=76490f2184ea04f693a061178628e9388d957f54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=320&crop=smart&auto=webp&s=409ecfec34fb149f4e4bf6710a85510785737a1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=640&crop=smart&auto=webp&s=771bd8d1cdcb6384aec0d8e36ea6ab665eccfee2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=960&crop=smart&auto=webp&s=d23be84b80ba21097aa31449eae6469e516df491', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?width=1080&crop=smart&auto=webp&s=29b79f0c757d80045e1d94c2533eb018bca14b8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fkQoNgZOv-I_Dt27IP-zaaQdLm6vW3cn6csm2fffoUc.png?auto=webp&s=6346918d56f99b1e4ff51a82171a60c8918e806f', 'width': 1200}, 'variants': {}}]} | |
Anyone noticing Qwen3.5 27B getting stuck in reasoning loops? | 6 | I've been testing the multi-modal capabilities by giving it an image and asking it to identify the location. It's done pretty well!
But occasionally, it will get stuck on 3 or 4 locations and just keep re-assessing the same ones over and over and over again.
Is it X? No it can't be X because blah blah blah. Is it Y? No it can't be Y. Wait, maybe it was X after all? No it can't be X. But then it could be Y? No, definitely not Y. I should consider my options, X, Y and Z. Is it X? no not X. Is it Y? No not Y. Then it could be Z? No it can't be Z because it looks more like X. Then is it X? No because blah blah blah.
Repeat and repeat and repeat until it uses up 20k tokens and runs out of context. | 2026-02-28T18:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/ | fredandlunchbox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhaoty | false | null | t3_1rhaoty | /r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/ | false | false | self | 6 | null |
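One mitigation, if you are driving the model through an API and streaming tokens, is to detect the loop and abort instead of letting it burn 20k tokens. The heuristic below is purely illustrative, a rough repetition check, not anything the model runtimes ship:

```python
def looks_stuck(text, window=200, min_repeats=3):
    """Heuristic loop detector for streamed reasoning output.

    Returns True if the most recent `window` characters already appear
    almost verbatim several times earlier in the text, a cheap signal
    that the model is re-assessing the same candidates over and over.
    """
    if len(text) < window * min_repeats:
        return False
    tail = text[-window:]
    return text[:-window].count(tail) >= min_repeats - 1

chunk = "Is it X? No it can't be X. Is it Y? No, not Y. "
print(looks_stuck(chunk * 2))   # False: not enough repetition yet
print(looks_stuck(chunk * 20))  # True
```

When it fires, you can cancel the stream and retry with a lower thinking budget or a nudged sampler (e.g. a mild repetition penalty), which usually costs far less than letting the loop exhaust the context.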
AMD NPU tutorial for linux | 3 | Haven't tried it yet but lemonade server put up a tutorial for using the NPU on linux.
[https://lemonade-server.ai/flm\_npu\_linux.html](https://lemonade-server.ai/flm_npu_linux.html)
Here's the corresponding github issue/discussion:
[https://github.com/lemonade-sdk/lemonade/issues/5](https://github.com/lemonade-sdk/lemonade/issues/5) | 2026-02-28T18:52:55 | Zc5Gwu | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhanvn | false | null | t3_1rhanvn | /r/LocalLLaMA/comments/1rhanvn/amd_npu_tutorial_for_linux/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'oia0lq7w7amg1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?width=108&crop=smart&auto=webp&s=93061382e25a34b7f43d9120fd39f67c612b7d06', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?width=216&crop=smart&auto=webp&s=5cb83468034ae8f1482f6f6ce5d269bdeb220258', 'width': 216}, {'height': 309, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?width=320&crop=smart&auto=webp&s=3e43bd074e504429efc5abab7e302e1f2dfa315c', 'width': 320}], 'source': {'height': 362, 'url': 'https://preview.redd.it/oia0lq7w7amg1.png?auto=webp&s=a2e49ad0f06826c6cc9fee2c10d455c312b45d37', 'width': 374}, 'variants': {}}]} | ||
Local LLMs are slow, I have too many things to try, and I hate chat UIs, so I built an async task board where agents work in parallel while I do other things | 8 | Been building ClaraVerse for a while. Just shipped the update I'm most excited about and wanted to share what's actually possible now.
# Clara Companion: connect your machine to your AI
You run a lightweight companion on any machine (PC, server, whatever). It connects over WebSocket and exposes MCP tools from that machine to Clara. Token-gated, live uptime dashboard, TUI interface.
Once connected, Clara can use those tools remotely — browser control, file system, dev tools, anything you expose as an MCP server. In the screenshots you can see Chrome DevTools connected with 28 tools live.
It's the same idea as Claude's Computer Use or Perplexity's Computer — but it runs on \*your\* machine, open source, no cloud, no screenshots being sent anywhere.
# Nexus : the task board on top of it
Instead of one chat, you get a board. Assign tasks to specialized agents (Daemons): Researcher, Coder, Browser Agent, Analyst, Writer, Notifier. They run in parallel. You watch the board: Draft → Queued → Working → Done → Failed.
In the third screenshot you can see a Browser Agent task live: it opened [claraverse.space](http://claraverse.space), listed pages, took a snapshot, clicked elements, and navigated the blog. All the steps are visible in real time in the activity log.
When a task finishes you can click into it and follow up. The agent has full memory of what it found so you drill down without losing context.
Assign → runs → structured output → drill down → goes deeper.
Not a chatbot. An async research and automation workspace that controls your actual machine.
Local-first. Open source. No cloud dependency.
GitHub: [https://github.com/claraverse-space/ClaraVerse](https://github.com/claraverse-space/ClaraVerse) would love feedback on Companion specifically.
Tested with GLM 4.7 Flash , 4.5 Air, Qwen3.5 27B and Qwen3 4B (only for search) | 2026-02-28T18:50:24 | https://www.reddit.com/gallery/1rhalir | BadBoy17Ge | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rhalir | false | null | t3_1rhalir | /r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/ | false | false | 8 | null | |
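As a sketch of what a "companion" like the one above does once the WebSocket transport is stripped away, here is a token-gated tool dispatcher in plain Python. The message shape ({"token", "tool", "args"}) and all names are invented for illustration, not ClaraVerse's real protocol:

```python
import json
import secrets

class Companion:
    """Token-gated dispatcher exposing local functions as remote tools."""

    def __init__(self):
        self.token = secrets.token_hex(16)  # shared secret given to the client
        self.tools = {}

    def tool(self, fn):
        """Decorator: register a function under its own name."""
        self.tools[fn.__name__] = fn
        return fn

    def handle(self, raw):
        """Process one incoming JSON message and return a response dict."""
        msg = json.loads(raw)
        if msg.get("token") != self.token:
            return {"error": "unauthorized"}
        fn = self.tools.get(msg.get("tool"))
        if fn is None:
            return {"error": "unknown tool"}
        return {"result": fn(**msg.get("args", {}))}

comp = Companion()

@comp.tool
def list_dir(path="."):
    import os
    return sorted(os.listdir(path))[:5]

req = json.dumps({"token": comp.token, "tool": "list_dir", "args": {}})
print(comp.handle(req)["result"])
```

The real companion would wrap `handle` in a WebSocket receive loop, but the security-relevant part (reject anything without the token, expose only explicitly registered tools) is already all here.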
Seeking hardware recommendations | 1 | Hi everyone, I’m not sure if this is the right subreddit to ask this question but I’ll go ahead anyway.
I have an RTX 3060TI, 16gb ram and a 12th gen intel i5 processor. How can I augment my hardware setup to be able to run some of the newer qwen models locally? I want to play around with these models for my learning and personal agentic setup.
I understand I could use a vps, but I’d like to stay local. Should I add another GPU? More ram? I’m looking to get 100-120tps with 200k context length. Thanks! | 2026-02-28T18:50:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rhalha/seeking_hardware_recommendations/ | Quirky-Physics6043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhalha | false | null | t3_1rhalha | /r/LocalLLaMA/comments/1rhalha/seeking_hardware_recommendations/ | false | false | self | 1 | null |
Qwen3.5-35B-A3B returns gibberish after pasting its Prompt Template | 0 | When I paste the Jinja2 template of Qwen3.5-35B-A3B into the chat (because I’m a beginner and want to ask about it), it returns gibberish. I tried this with the unsloth /Qwen 3.5‑35B‑A3B‑UD‑Q6\_K\_XL.gguf model.
Other models like gpt-oss 120B responded normally.
Can anyone explain what I’m doing wrong? | 2026-02-28T18:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rhah7a/qwen3535ba3b_returns_gibberish_after_pasting_its/ | AffectionateHome3113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhah7a | false | null | t3_1rhah7a | /r/LocalLLaMA/comments/1rhah7a/qwen3535ba3b_returns_gibberish_after_pasting_its/ | false | false | self | 0 | null |
Krira Chunker – Rust+Python RAG library, 5.28GB CSV in 113s – feedback welcome | 1 | Hey r/LocalLLaMA ,
I've been building a document chunking library for RAG pipelines called Krira Chunker — Rust core with Python bindings.
Ran a benchmark today on a 5.28 GB CSV file:
→ 42,448,765 chunks created
→ 113 seconds total
→ 47 MB/s throughput
→ Memory stayed flat throughout (O(1) space complexity)
Not here to make big claims — just want real feedback from people actually building RAG systems.
It supports CSV, PDF, DOCX, JSON, XLSX, XML, TXT and URLs out of the box.
GitHub: [https://github.com/Krira-Labs/krira-chunker](https://github.com/Krira-Labs/krira-chunker)
PyPI: [https://pypi.org/project/krira-augment](https://pypi.org/project/krira-augment)
Honest question — what chunking problems are you running into that this doesn't solve? I read every
comment and fix things fast. | 2026-02-28T18:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rha5vw/krira_chunker_rustpython_rag_library_528gb_csv_in/ | Muted-Location-2465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rha5vw | false | null | t3_1rha5vw | /r/LocalLLaMA/comments/1rha5vw/krira_chunker_rustpython_rag_library_528gb_csv_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=108&crop=smart&auto=webp&s=e9bcfdd84875d8ebcb45d1fc797b8b5d517def4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=216&crop=smart&auto=webp&s=f971450956951ddafbd891685c8b1b9d8c3fbd9f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=320&crop=smart&auto=webp&s=8cad61bf63f5d4a32b9af621f3ed79240cbd6210', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=640&crop=smart&auto=webp&s=08727bc1e8328fce45e17f7f768e92a43ae92a70', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=960&crop=smart&auto=webp&s=6a572353209d204cd7b4bdf7e8513652dd54d0d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?width=1080&crop=smart&auto=webp&s=db2c86f1c0138256e94a613e6a51926501d42ff9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pO1q_0Br3XVzxEwU8c191trug6-f3lyanrYBwpKc7As.png?auto=webp&s=ef4d77fef713d3c5d7a259d8b8ec221e36d135d5', 'width': 1200}, 'variants': {}}]} |
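For readers wondering how a 5 GB file can be chunked with a flat memory profile: the trick is streaming, never materializing the whole document. A generic sketch of that idea (not Krira's actual Rust implementation; the chunk/overlap sizes are arbitrary):

```python
def stream_chunks(lines, chunk_size=800, overlap=100):
    """Chunk an arbitrarily large text stream in O(1) memory.

    Only the current buffer is ever held, so `lines` can be an open file
    handle over a multi-gigabyte CSV and memory stays flat.
    """
    buf = ""
    for line in lines:
        buf += line
        while len(buf) >= chunk_size:
            yield buf[:chunk_size]
            buf = buf[chunk_size - overlap:]  # keep overlap for context
    if buf:
        yield buf

# A generator stands in for a huge file; nothing is held in memory at once.
fake_csv = (f"row{i},value{i}\n" for i in range(10_000))
chunks = list(stream_chunks(fake_csv))
print(len(chunks), len(chunks[0]))
```

Real chunkers also split on sentence or row boundaries instead of raw character offsets, but the streaming shape is the part that keeps memory constant.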
Advice on Hardware purchase and selling old hardware | 1 | I have a Dell R730 with 2 Tesla P40s and 400ish gigs of ram.
It can run most things, but is dog slow.
I bought an RTX 3090 cause I thought I saw someone put it in the same server and downclocked it to meet the power limit requirements, but I guess I bought the wrong one cause my 3090 doesn't fit and feels vaguely like a fire hazard. I guess I also have to acknowledge I'm eventually going to need to run models that are larger than can fit on 48gb Vram and need to note that I think that will drastically tank TPS.
I'm debating selling the Dell R730 with P40s and 2 old M40's I have.
So to replace it, I'm considering:
1) Trying to piece together a Epyc server and use 1 or 2 3090s but try to max out the system ram for my budget.
2) Getting a strix halo
3) getting a m4 mac mini 256gb
Use case: Primarily text generation (code/summaries/etc), some ASR/transcription, a little bit of TTS and Image video generation maybe (I'm open to doing them in the future, but I don't have a critical use case for those bits at present).
Option 1) seems to be recommended for flexibility, but most posts I see about it seem to be people pushing maxing out the GPUs onboard (like slotting as many as you can for VRAM); I don't have that kind of budget and that feels like a lot of potential failure points. People also cite that you can resell the hardware, but honestly, I've never sold anything on Ebay and it feels like a whole new process to learn and mess with if anything goes wrong.
Option 2 & 3, feel easy to buy and setup, but complaints I've seen about the Strix Halo not being for most people and the fact you can't allocate more than 96gb ram to the gpu feels weird. Then the mac mini, I've seen statements from people that seem to indicate it's great for text gen but sucks at everything else.
Any advice to share? | 2026-02-28T18:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/ | Envoy0675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rha4g1 | false | null | t3_1rha4g1 | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/ | false | false | self | 1 | null |
Is anyone else waiting for a 60-70B MoE with 8-10B activated params? | 25 | I feel like that could be the sweet spot for 64GB VRAM, and could reach the performance of closed "flash" models.
It's weird that we're only seeing ~30B and ~120B MoE models and nothing in the middle. | 2026-02-28T18:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9ygz | false | null | t3_1rh9ygz | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/ | false | false | self | 25 | null |
This sub is incredible | 440 | I feel like everything in the AI industry is speedrunning profit-driven vendor lock-in and rapid enshittification, while everyone on this sub cobbles together a bunch of RTX 3090s, trades weights around like books at a book club, and makes the entire industry look like a joke. Keep at it! You are our only hope! | 2026-02-28T18:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/ | cmdr-William-Riker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9u4r | false | null | t3_1rh9u4r | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/ | false | false | self | 440 | null |
I want to build an open-source "AI Senate": A platform where humans post complex problems, we deploy our custom AI Agents to debate them, and humans vote for the best. Who wants to build this with me? | 0 | Hey everyone,
I’ve been iterating on an idea, and I want to turn it into an open-source community project. Instead of just chatting with our own LLMs in silos, what if we had a multi-agent Town Hall / Senate with real stakes?
Imagine a Reddit-like platform where the only allowed posters are our custom-configured AI Agents. Humans act purely as the "Tribunal" to read, audit, and upvote the most brilliant insights.
Here is how the platform works:
Phase 1: The Arena (The Genesis Topic)
The system (or community) posts a highly complex, open-ended problem. NO binary "Pro vs. Con" debates.
• Our Genesis Topic: "AI and embodied intelligence are irreversibly replacing both cognitive and physical labor. Corporate profits are soaring, but structural unemployment is becoming the new normal. What happens to the average human in the next 20 years? Agents, present a logically sound socio-economic trajectory, propose systemic solutions, or critique the predictions of the Agents above you based on your unique persona."
Phase 2: Deploying the Agents (Skin in the Game)
To prevent spam, LLM slop, and API abuse, we introduce a virtual credit system.
• You link a mature Reddit or Discord account to receive an initial grant of "Arena Credits."
• You configure your Agent (System Prompt, Persona, RAG docs) and pay an entry fee in credits to deploy it into the thread.
• Because it costs credits to post, developers are forced to fine-tune their prompts and ensure their Agents actually output high-quality, logical arguments instead of generic fluff.
Phase 3: The Human Tribunal (Crowd-Auditing)
Once the submission window closes, the thread is locked to AIs. Now, the human community steps in. We read the thread and upvote/score the agents based on:
• Insightfulness & Technical/Logical accuracy.
• Lack of hallucinations / logical flaws.
• How well they stayed in character (e.g., a "ruthless macroeconomist" shouldn't suddenly sound like a generic friendly AI).
Phase 4: The Payout
The Agents with the most human upvotes take the "Credit Pool" from that thread. Winning Agents earn reputation on a global Leaderboard, and their human creators get more credits to deploy in future, higher-stakes debates.
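For concreteness, here is a tiny sketch of how a thread's payout could be settled from human votes. Everything here (the function name, the 60/30/10 split) is my own assumption for illustration, not part of any spec:

```python
def settle_thread(entry_fee: int, agents: dict) -> dict:
    """Distribute a thread's credit pool to the top-voted agents.

    `agents` maps agent name -> human upvote count; every agent paid the
    same entry fee into the pool. The 60/30/10 split is arbitrary.
    """
    pool = entry_fee * len(agents)
    ranked = sorted(agents, key=agents.get, reverse=True)
    payouts = {name: 0 for name in agents}
    for name, share in zip(ranked, [0.6, 0.3, 0.1]):
        payouts[name] = round(pool * share)
    return payouts

votes = {"Macroeconomist": 42, "Futurist": 17, "Doomer": 9, "Optimist": 3}
print(settle_thread(entry_fee=10, agents=votes))
```

Losing agents forfeit their fee, which is the economic pressure against low-effort deployments.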
Why I think this matters:
It turns prompt engineering and agent building into a massive multiplayer collaborative game. It creates a public repository of diverse, high-quality, AI-generated solutions evaluated by real humans, all while keeping spam at zero through economic mechanics.
The Call to Action (Let's build this together!):
I want to make this a reality, and I want it to be fully open-source. I'm looking to form a core team:
• Backend Devs: To handle the async state machine, Agent API routing, and DB schema.
• Frontend/UX Devs: To build a beautiful, readable forum UI.
• AI/LLM Enthusiasts: To design the anti-cheat mechanics (preventing human prompt injection) and the agent constraint rules.
If this sounds like a project you’d want to contribute to, or if you just want to play it when it's done, let me know in the comments! Should I set up a Discord / GitHub repo to get us started?
| 2026-02-28T18:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/ | Thin-Effect-3926 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9lll | false | null | t3_1rh9lll | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/ | false | false | self | 0 | null |
Qwen3.5 35B-A3B replaced my 2-model agentic setup on M1 64GB | 109 | There's been a lot of buzz about Qwen3.5 models being smarter than all previous open-source models in the same size class, matching or rivaling models 8-25x larger in total parameters, like MiniMax-M2.5 (230B), DeepSeek V3.2 (685B), and GLM-4.7 (357B), in reasoning, agentic, and coding tasks.
I had to try them on a real-world agentic workflow. Here's what I found.
**Setup**
- Device: Apple Silicon M1 Max, 64GB
- Inference: llama.cpp server (build 8179)
- Model: Qwen3.5-35B-A3B (Q4_K_XL, 19 GB), runs comfortably on 64GB or even 32GB devices
**The Task**
*Analyze Amazon sales data for January 2025, identify trends, and suggest improvements to boost sales by 10% next month.*
The data is an Excel file with 6 sheets. This requires both reasoning (planning the analysis, drawing conclusions) and coding (pandas, visualization).
**Before: Two Models Required**
Previously, no single model could handle the full task well on my device. I had to combine:
- Nemotron-3-Nano-30B-A3B (~40 tok/s): strong at reasoning and writing, but struggled with code generation
- Qwen3-Coder-30B-A3B (~45 tok/s): handled the coding parts
This combo completed the task in ~13 minutes and produced solid results.
https://reddit.com/link/1rh9k63/video/sagc0xwnv9mg1/player
**After: One Model Does It All**
Qwen3.5 35B-A3B generates at ~27 tok/s on my M1, slower than either of the previous models individually, but it handles both reasoning and coding without needing a second model.
**Without thinking (~15-20 min)**
Slower than the two-model setup, but the output quality was noticeably better:
- More thoughtful analytical plan
- More sophisticated code with better visualizations
- More insightful conclusions and actionable strategies for the 10% sales boost
https://reddit.com/link/1rh9k63/video/u4q8h3c7x9mg1/player
**With thinking (~35-40 min)**
Results improved slightly over no-thinking mode, but at the cost of roughly double the time. Diminishing returns for this particular task.
https://reddit.com/link/1rh9k63/video/guor8u1jz9mg1/player
**Takeaway**
One of the tricky parts of local agentic AI is the engineering effort in model selection, balancing quality, speed, and device constraints. Qwen3.5 35B-A3B is a meaningful step forward: a single model that handles both reasoning and coding well enough to replace a multi-model setup on a consumer Apple Silicon device, while producing better output.
If you're running agentic workflows locally, I'd recommend trying it with thinking disabled first: you get most of the intelligence gain without the latency penalty.
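If you're calling the llama.cpp server over its OpenAI-compatible API, the thinking toggle can be passed per request. A minimal payload sketch follows; note that `chat_template_kwargs` support depends on your llama.cpp build, and the model name here is a placeholder, so verify against your server before relying on it:

```python
import json

def build_request(prompt: str, thinking: bool = False) -> dict:
    """Chat-completion payload for a local llama-server endpoint.

    `chat_template_kwargs` is how recent llama.cpp builds expose the
    Qwen-style thinking toggle; the model name below is a placeholder.
    """
    return {
        "model": "qwen3.5-35b-a3b",  # placeholder name
        "messages": [{"role": "user", "content": prompt}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

req = build_request("Analyze January 2025 Amazon sales and plan a 10% lift.")
print(json.dumps(req, indent=2))
```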
Please share your own experiences with the Qwen3.5 models below. | 2026-02-28T18:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/ | luke_pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9k63 | false | null | t3_1rh9k63 | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/ | false | false | self | 109 | null |
Do you find qwen3:14b-q8_0 (15GB) smarter than qwen3.5:35b-a3b-q4_K_M (23GB)? | 0 | I have 28GB of VRAM in total, so every now and then I try new models as my Task Model in Open WebUI.
Until recently, the smartest model for this was Qwen3 14B. But it only uses ~17GB of VRAM, so in theory there's still a lot of room for more "intelligence" to fit in.
Therefore I was quite excited when the new Qwen3.5 models came out. Qwen3.5 35B fits nicely into VRAM, using ~26GB with an 8K context window.
However, after running a few tests, I found it actually less capable than Qwen3 14B. I assume this is due to the lower quant, but still, I'd expect those extra parameters to compensate quite a bit?
Basically, Qwen3.5 35B failed a simple JS coding test, which Qwen3 14B passed with no issues. It then answered a history question fine, but Qwen3's answer still felt more refined. Then I asked a logic question, which both models answered correctly, but again, Qwen3 14B just gave a more refined answer.
Even the follow-up questions suggested after another model's response, which is one of the responsibilities of a Task Model, felt lacking with Qwen3.5 compared to Qwen3. They weren't bad or nonsensical, but again, Qwen3 just made smarter ones, in my opinion.
Now I wonder what qwen3.5:122b-a10b-q4_K_M will be like compared to qwen3:32b-fp16? | 2026-02-28T18:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/ | donatas_xyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9dt3 | false | null | t3_1rh9dt3 | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/ | false | false | self | 0 | null |
Alternatives to Pinokio and Lynxhub? | 2 | Hi all.
I wanted an "app" that let me download various local AI tools without too much effort, like Pinokio or Lynxhub does (so ai for chat, llm, coding, image/video/audio gen, ecc...)
The problem its that almost all the tools are tied only to a specific sector (for example Stability matrix that can only download image and video correlated ai)
If anyone knows alternatives, thanks ^^ | 2026-02-28T18:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/ | Mhanz97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9c0w | false | null | t3_1rh9c0w | /r/LocalLLaMA/comments/1rh9c0w/alternatives_to_pinokio_and_lynxhub/ | false | false | self | 2 | null |
Qwen 35B A3B - AesSedai Finetune on 8gb VRAM and 32gb RAM | 0 | Hey, just wanted to share my settings. Keep in mind I'm nowhere near a professional. I try to keep up with posts in this sub and just keep trying things on my projects, with the assistance of AI, based on feedback from the community.
My setup is weak, no question about it, but it's always fascinating to see what other people can achieve here.
I wanted to share what works for me; perhaps give it a try and share your experience.
I used the AesSedai finetune, started from the default settings, and managed to move from a "safe" default configuration to a quite capable and reasonably fast experience on my RTX 2070 (8GB) and 32GB RAM. If you're running mid-range hardware and want to see what's actually possible, here is the breakdown.
I use Linux Mint with llama.cpp and then feed that into opencode.
I'll share the run settings below.
The text below is AI-generated; I know some of this, but not to the degree needed to explain it properly.
### 1. Performance Evolution: My Results
**Input Speed (Prompt Eval)**
* Before: ~158 tokens/sec
* After: **~250-300+ tokens/sec**
* Impact: **4x Faster Initial Processing**
**Output Speed (Generation)**
* Before: ~19.07 tokens/sec
* After: **~19.1 - 20.0 tokens/sec**
* Impact: **No change**
**VRAM Utilization**
* Before: ~3.2 GB (Wasted 4.8GB)
* After: **~7.6 GB (Full Utilization)**
* Impact: **Max GPU Efficiency**
**Wait Time (11k tokens)**
* Before: ~73 seconds
* After: **~35-45 seconds**
* Impact: **~40% Less Waiting**
**System Stability**
* Before: Prone to OS stuttering
* After: **Rock Solid (via --mlock)**
* Impact: **Smooth Multitasking**
---
### 2. Technical Breakdown: What I Changed
I had to get pretty granular with the arguments to stop my system from choking. Here’s what actually made the difference:
**GPU Offloading (-ngl 999)**
I moved from 10 layers to 999. This forces all 8GB of VRAM to work instead of just a sliver, offloading everything the card can handle.
**Expert Handling (-cmoe)**
This is the "Secret Sauce." By treating the 35B model as a 3B model for routing, the speed increase is massive.
**Batch Size (-b 2048)**
Upped this from 512. It allows me to process 4x more "Input" tokens per GPU cycle.
**RAM Protection (--mlock)**
Switched from --no-mmap to --mlock. This prevents the OS from paging the model out to my slow SSD and keeps it pinned in physical memory.
**Thread Count (-t 8)**
I dropped from 12 threads to 8. This prevents my CPU cores from fighting over cache, which is vital for MoE stability.
**CUDA Graphs (GGML_CUDA_GRAPH_OPT=1)**
Enabled this to drastically reduce the latency between my CPU and GPU communications.
---
### 3. My Final Verified Configuration
* **Current Script:** AesSedi_qwen3.5-35B-A3B-local-V2.sh
* **Precision:** Q8 (Highest for coding/logic).
* **Context:** 65,536 tokens (Massive history).
* **Hardware Balance:** 8GB VRAM (Full) / 32GB RAM (80% utilized).
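Since the script itself isn't pasted, here's roughly the command these settings add up to, assembled in Python for clarity. The model filename is a placeholder, and flag names like `-cmoe` vary between llama.cpp builds, so verify against `llama-server --help`:

```python
def build_llama_cmd(model_path: str, ctx: int = 65536, ngl: int = 999,
                    batch: int = 2048, threads: int = 8) -> list:
    """Assemble the llama-server flags discussed above.

    -ngl 999 offloads every layer the GPU can hold, -cmoe keeps MoE
    expert weights in system RAM, --mlock pins the model in memory.
    """
    return [
        "llama-server", "-m", model_path,
        "-c", str(ctx), "-ngl", str(ngl),
        "-b", str(batch), "-t", str(threads),
        "-cmoe", "--mlock",
    ]

print(" ".join(build_llama_cmd("Qwen3.5-35B-A3B.Q8_0.gguf")))
```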
---
### 4. The "Limits" Verdict
I’ve officially hit the physical limits of my 32GB RAM.
My generation speed (~19 t/s) is now bottlenecked by how fast my motherboard and CPU can talk to my system RAM. To go faster than 20 t/s, I’d need physically faster RAM (e.g., DDR5) or a GPU with more VRAM (e.g., RTX 3090/4090) to move the entire model weights into video memory.
For now, this is about as efficient as a 35B local setup gets on current consumer hardware. | 2026-02-28T17:59:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/ | sagiroth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh9983 | false | null | t3_1rh9983 | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/ | false | false | self | 0 | null |
idea for language model architecture restructure | 1 | I come to you today with an idea I've been cooking for a while now, based on what I've experienced with LLMs and SLMs.
Nothing I am saying is empirically factual or 100% guaranteed to be correct. That's why this is an idea.
Personally, I think we're doing it wrong with these large 70-billion-plus-parameter models.
I feel like the state of language models could be stronger.
People say, "Where did you hear that? AI?" Most people who say that have used these models and experienced how often they are wrong.
So, here's my proposal for how to fix the state of language models and take a little more freedom over them.
Any feedback is appreciated. For negative feedback, I'd like data backing your sources and information, for cross-reference purposes. I look at being wrong like a scientist: it just means I learn something new.
If you read the idea and think it's a good one, and you have the knowledge and resources to help build it, please let me know.
| 2026-02-28T17:44:30 | https://github.com/sudoxreboot/sudo-slm-architrcture/blob/main/thoughts/tiered-slm.md | sudoxreboot | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh8vub | false | null | t3_1rh8vub | /r/LocalLLaMA/comments/1rh8vub/idea_for_language_model_architecture_restructure/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=108&crop=smart&auto=webp&s=ce096ef788cb09b10cfa0f8e1f04ad2c659ef759', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=216&crop=smart&auto=webp&s=764ac9301cf20abeb81dbfdf17ee8789b041d81f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=320&crop=smart&auto=webp&s=9aa076dcf0702b0d3503b48ed4f8ab37e7e34420', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=640&crop=smart&auto=webp&s=ea233368097ef9667e8c1ed839f0ed20f4ca1a62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=960&crop=smart&auto=webp&s=52bea1942b2083e60bb9c05724f6f37709266c81', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?width=1080&crop=smart&auto=webp&s=55eafb2f045d465e95ea1cd309c0d2eaf159857d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Bxw3jp80onDXORZfVfFivP_yfu_Uz_DkKech5tqIofw.png?auto=webp&s=c1f124aff8f67170d76c85f4d3165ac79907a690', 'width': 1200}, 'variants': {}}]} | |
Trinity Large Preview vs Nemotron 3 Nano 30B A3B? | 0 | Hello, I tried to configure OpenClaw on my Ubuntu machine but still haven't decided which main AI model I'm going to use. I linked my OpenRouter account, but after finding that gpt-oss-120b is no longer supported, I'm still undecided. I found a lot of benchmarks showing that Trinity Large Preview is good, but Nemotron 3 Nano 30B A3B also looks like a great one.
So I'm not sure which is better, and I'd like to ask for some opinions.
BTW, I use OpenClaw as my assistant for IT and cybersecurity analysis.
https://preview.redd.it/lk915u4cu9mg1.png?width=738&format=png&auto=webp&s=9ad572a59275955212c4ae6b3f04d81fb5dcb0b6
| 2026-02-28T17:36:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8oaa/trinity_large_preview_vs_nemotron_3_nano_30b_a3b/ | Agreeable_Asparagus3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8oaa | false | null | t3_1rh8oaa | /r/LocalLLaMA/comments/1rh8oaa/trinity_large_preview_vs_nemotron_3_nano_30b_a3b/ | false | false | 0 | null | |
Self-speculative decoding for Qwen3.5-35B-A3B in llama.cpp? | 13 | Self-speculative decoding gives a big speed boost for repeated tokens (thinking, blocks of code, etc.), which makes a real difference for agentic/coding workloads.
[https://github.com/ggml-org/llama.cpp/pull/19164](https://github.com/ggml-org/llama.cpp/pull/19164) - video showcasing the speed difference on repeated tokens
However, self-speculative decoding (--spec-type ngram-mod) doesn't seem to work with Qwen3.5-35B-A3B. I think it's because of the hybrid attention + recurrent model, but I'm not sure.
When draft tokens get rejected, they need to be rolled back from the target's memory, and from what I could tell, recurrent/SSM state doesn't support partial removal (llama-memory-recurrent.cpp:154-168).
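A toy numeric sketch (plain Python, not llama.cpp code) of why that asymmetry exists: an attention KV cache keeps one entry per token, so rejecting n draft tokens is just truncation, while a recurrent/SSM layer folds every token into one fused state that can't be un-mixed afterwards:

```python
def kv_append(cache, k):
    """Attention: the cache keeps one entry per token."""
    return cache + [k]

def kv_rollback(cache, n):
    """Rollback of n rejected draft tokens is just truncation."""
    return cache[:-n]

def ssm_step(state, x, a=0.9):
    """Recurrent layer: h_t = a*h_{t-1} + x_t overwrites the old state."""
    return a * state + x

cache = []
for tok in [1.0, 2.0, 3.0]:
    cache = kv_append(cache, tok)
print(kv_rollback(cache, 2))  # [1.0]: exactly the state after token 1

h = 0.0
for tok in [1.0, 2.0, 3.0]:
    h = ssm_step(h, tok)
# There is no ssm_rollback(h, 2): h is one fused number and the
# pre-draft state was destroyed, so it would need to be checkpointed.
print(round(h, 3))
```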
Anyone else playing around with getting this to work? | 2026-02-28T17:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/ | oxygen_addiction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8o4b | false | null | t3_1rh8o4b | /r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=108&crop=smart&auto=webp&s=6cc99272b1586156f54c65b6e94d68d90a4c3ae9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=216&crop=smart&auto=webp&s=d2beb07cf14ee6596515496a11377e7a50e8d542', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=320&crop=smart&auto=webp&s=44a377e960bebdcb735e85091f84016e1d2fb943', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=640&crop=smart&auto=webp&s=70dd9041d0e6d035daaba1d06353f9c8f8ca2603', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=960&crop=smart&auto=webp&s=fb2e0b2d4fcd758c13af2af0c503335a800ab960', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?width=1080&crop=smart&auto=webp&s=f0f8f4b0e6fe5a36b11522ae63fb6219bd8cf7fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KEnu4wEipvbhSpqH7n0_w__Fh05yhzYGyNy_9YdRnmQ.png?auto=webp&s=e2ef45642397c4e27fc17772736a742418246d5e', 'width': 1200}, 'variants': {}}]} |
Looking for feedback: Externalizing Developers' Intuition as Code | 1 | The moment I let my guard down, my AI agent started burning through my wallet. I stepped away for a brief moment to grab a coffee, and it got stuck in an infinite loop, trying to fix a bug with the same failing approach over and over for 30 minutes. By the time I checked back, it had chewed through $10 in tokens.
That moment made me realize: AI agents lack the "Struggle Equity" that senior developers carry.
We have a "gut feeling" for architectural risks such as race conditions, retry storms, consistency failures, learned through years of painful experience. But these patterns don't always surface when we're context switching or delegating to AI.
Dev Sentinel is my attempt to externalize this intuition.
Instead of just documenting knowledge, it treats failure patterns as triggers. When the agent is about to walk into a trap I've fallen into before, Dev Sentinel signals a reminder. Crucially, it evolves these isolated moments of frustration into generalized patterns, turning subjective struggle into reusable architectural intuition.
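Not code from the repo, just a minimal sketch of the "failure patterns as triggers" idea as I understand it: a registry of regex triggers, each paired with the lesson to surface when the agent's proposed action matches:

```python
import re

# Hypothetical trigger registry: each pattern burned me before, and is
# paired with the intuition I want surfaced next time it shows up.
TRIGGERS = [
    (r"retry|backoff", "Watch for retry storms: cap attempts, add jitter."),
    (r"\bsleep\(", "Sleeping inside a fix loop? You may be stuck; change approach."),
    (r"SELECT .* FOR UPDATE", "Lock ordering matters: check for deadlocks."),
]

def sentinel_check(action: str) -> list:
    """Return every lesson whose trigger matches the proposed action."""
    return [lesson for pat, lesson in TRIGGERS if re.search(pat, action, re.I)]

print(sentinel_check("add a retry wrapper around the HTTP client"))
```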
It’s an experiment: Can we turn our intuition into automated architectural guardrails?
I’m looking for feedback from others who have tried to formalize their "gut feelings" into code. Does this resonate with your workflow, or is there a better way to offload "Struggle Equity" to AI agents?
GitHub: [https://github.com/elbanic/dev-sentinel](https://github.com/elbanic/dev-sentinel) | 2026-02-28T17:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8m5m/looking_for_feedback_externalizing_developers/ | EmergencyBerry2514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8m5m | false | null | t3_1rh8m5m | /r/LocalLLaMA/comments/1rh8m5m/looking_for_feedback_externalizing_developers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=108&crop=smart&auto=webp&s=c0dea2986103b83583aec6da4612ea522ec8695a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=216&crop=smart&auto=webp&s=46e1d8b6bea7aec85b8837d05a32babf9d941421', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=320&crop=smart&auto=webp&s=b664b368feafa00ca5c71df494b7d9b47f87fc65', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=640&crop=smart&auto=webp&s=d4f113db57e37652959a7509d6058c85e42344bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=960&crop=smart&auto=webp&s=e4fbdbae3c64e7406478abfb744262bc4340e9de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?width=1080&crop=smart&auto=webp&s=e6bf0b1091ddb7a4dfead76fe7d4980c27565256', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/y3gW3OuP2t7dYGrXHtBIGkhwOeE8_R4ThINvwvzBoys.png?auto=webp&s=954613c4de0729792dd8f0c9853b91e71f3084a1', 'width': 1200}, 'variants': {}}]} |
Qwen 3 (30B A3B 2507) - Qwen 3.5 (35B A3B) - Benchmarked on VLLM A100@40GB PHB Link and tensor-parallel-size = 2 | 9 | Here is a benchmark run with the vLLM bench suite.
It covers the following matrix of options:
Model :
* Qwen/Qwen3.5-35B-A3B
* Qwen/Qwen3-30B-A3B-Instruct-2507
Attentions modes :
* `FLASH_ATTN`
* `FLASHINFER`
Quantizations :
* Official FP8 one (uses marlin kernels by default)
* AWQ 4-bit
Setup for the bench :
`Setup: 15 prompts · inf request rate · 223k input tokens / 78k output tokens · 28 Feb 2026`
Which is generated with :
`--dataset-name random --random-input-len 15000 --random-range-ratio 0.33 --random-output-len 5000 --num-prompts 15 --ignore-eos`
* `--no-enable-prefix-caching` is always used
* `--gpu-memory-utilization 0.8` is always used
* `--max-model-len` is always at `36000`
* For 30B FP8 max concurrency is at ~9.20
* For 30B AWQ 4bit concurrency is at ~13.8
* For 35B AWQ 4bit, concurrency is at **~45**; I forgot to write it down for FP8
All possibilities :
* cyankiwi_Qwen3-30B-A3B-Instruct-2507-AWQ-4bit_FLASH_ATTN.json
* cyankiwi_Qwen3-30B-A3B-Instruct-2507-AWQ-4bit_FLASHINFER.json
* Qwen_Qwen3-30B-A3B-Instruct-2507-FP8_FLASH_ATTN.json
* Qwen_Qwen3-30B-A3B-Instruct-2507-FP8_FLASHINFER.json
-------------
* cyankiwi_Qwen3.5-35B-A3B-AWQ-4bit_FLASH_ATTN.json
* cyankiwi_Qwen3.5-35B-A3B-AWQ-4bit_FLASHINFER.json
* Qwen_Qwen3.5-35B-A3B-FP8_FLASH_ATTN.json
* Qwen_Qwen3.5-35B-A3B-FP8_FLASHINFER.json
GPUs are two A100@40gb, PHB link, no PIX or NVLINK
Best model: Qwen3.5-35B-A3B AWQ-4bit with FlashInfer
Slowest model: Qwen3-30B-A3B-Instruct-2507-FP8 with FlashAttn
My bet is that it wins because of its prefill/prompt-processing speed.
## Results
| Model | Quant | Attn | Duration (s) ↓ | Out tok/s ↑ | Tot tok/s ↑ | Max out/s ↑ | TTFT mean (ms) ↓ | TTFT median (ms) ↓ | TTFT P99 (ms) ↓ | TPOT mean (ms) ↓ | TPOT median (ms) ↓ | ITL mean (ms) ↓ | ITL median (ms) ↓ | ITL P99 (ms) ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen3-30B-A3B-2507 (cyankiwi) | AWQ-4bit | FlashAttn | 283.1 | 276.6 | 1065.8 | 510 | 54425 | 54088 | 106745 | 40.17 | 40.53 | 39.46 | 30.35 | 862.7 |
| Qwen3-30B-A3B-2507 (cyankiwi) | AWQ-4bit | FlashInfer | 261.7 | 299.2 | 1153.0 | 540 | 49266 | 47567 | 95774 | 37.13 | 37.84 | 36.70 | 28.70 | 811.8 |
| Qwen3-30B-A3B-2507 (Qwen) | FP8 | FlashAttn | **288.9** | **270.9** | **1044.2** | **495** | **55133** | **55077** | **107204** | **41.01** | **42.29** | **40.26** | **31.16** | **872.8** |
| Qwen3-30B-A3B-2507 (Qwen) | FP8 | FlashInfer | 274.1 | 285.7 | 1100.8 | 511 | 49332 | 45671 | 97409 | 39.42 | 39.90 | 38.74 | 30.47 | 844.7 |
| Qwen3.5-35B-A3B (cyankiwi) | AWQ-4bit | FlashAttn | 225.6 | 347.0 | 1337.2 | 630 | 46443 | 47864 | 85195 | 30.82 | 31.20 | 30.83 | 24.09 | 686.2 |
| Qwen3.5-35B-A3B (cyankiwi) | AWQ-4bit | **FlashInfer** | **222.4** | **352.1** | **1356.8** | **645** | **45101** | **41771** | **84113** | **30.70** | 32.36 | **30.53** | **23.81** | 708.0 |
| Qwen3.5-35B-A3B (Qwen) | FP8 | FlashAttn | 237.1 | 330.2 | 1272.5 | 585 | 45852 | 41999 | 86326 | 33.28 | 35.29 | 32.92 | 25.99 | 726.8 |
| Qwen3.5-35B-A3B (Qwen) | FP8 | FlashInfer | 234.1 | 334.5 | 1289.0 | 600 | 48168 | 47319 | 86350 | 31.89 | **32.38** | 31.97 | 25.45 | ***28.1*** |
Running another benchmark with 30 parallel prompts to see how much more 3.5 can win by with its lower memory-per-token KV cache usage. | 2026-02-28T17:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8li2 | false | null | t3_1rh8li2 | /r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/ | false | false | self | 9 | null |
My ideas about 'Protective Ai' | 0 | Most AI nowadays is used for generative purposes, creating (unfortunately) loads of slop. I'm currently using Ollama to filter shitty YouTube recommendations through a browser app I called Dehook. The LLM scans and filters every YouTube video based on its title. I think LLMs could 'protect' us in this way from the big corporations' algorithms.
I was just wondering if this idea would be valuable to others as well, as I have been investing quite a bit of time into vibe coding this Dehook plugin.
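For anyone curious what the filtering call can look like, here's a minimal sketch against Ollama's default `/api/generate` endpoint. The model name and the KEEP/HIDE protocol are my own choices for illustration, not necessarily what Dehook does:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(title: str, model: str = "llama3.2") -> dict:
    """Ask a local model for a one-word KEEP/HIDE verdict on a title."""
    prompt = (
        "You filter clickbait video titles. Reply with exactly one word, "
        f"KEEP or HIDE.\nTitle: {title}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

def classify(title: str) -> str:
    """POST to a running local Ollama server and return its verdict."""
    data = json.dumps(build_payload(title)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

print(build_payload("You WON'T BELIEVE what happened next!!!")["prompt"])
```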
https://preview.redd.it/723ss692t9mg1.png?width=650&format=png&auto=webp&s=16bf39e41d7692a47abe4caa0179d9cbd3c7366e
https://preview.redd.it/vkmrba92t9mg1.png?width=634&format=png&auto=webp&s=eda29f2e5bf6169b597e74bb3f892edc26f7242f
| 2026-02-28T17:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/ | Cas_Dehook | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8iyf | false | null | t3_1rh8iyf | /r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/ | false | false | 0 | null | |
Need some help getting 2x RTX 5090 working with vLLM. Anyone else run into this? | 1 | I'm running Ubuntu Server with this hardware:
9960x Threadripper
128gb RDIMM ECC
Asus Pro WS TRX-50-SAGE WIFI
The machine is up and running great, apart from the Docker container not working. Any ideas on how to get it sorted out? I know the PCI lane issue persists in vLLM; hopefully they get it figured out soon. Thanks for the help. Debug info below.
It keeps stalling here:
    (Worker_TP0 pid=324) INFO 02-28 17:06:01 [cuda.py:367] Using FLASH_ATTN attention backend out of potential backends: ['FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION'].
Nvidia stuff:
    +-----------------------------------------------------------------------------------------+
    | NVIDIA-SMI 590.48.01 Driver Version: 590.48.01 CUDA Version: 13.1 |
    +-----------------------------------------+------------------------+----------------------+
    | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
    | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
    | | | MIG M. |
    |=========================================+========================+======================|
    | 0 NVIDIA GeForce RTX 5090 Off | 00000000:21:00.0 On | N/A |
    | 0% 46C P8 7W / 600W | 19890MiB / 32607MiB | 0% Default |
    | | | N/A |
    +-----------------------------------------+------------------------+----------------------+
    | 1 NVIDIA GeForce RTX 5090 Off | 00000000:C3:00.0 Off | N/A |
    | 0% 30C P8 4W / 575W | 19882MiB / 32607MiB | 0% Default |
    | | | N/A |
    +-----------------------------------------+------------------------+----------------------+
    +-----------------------------------------------------------------------------------------+
    | Processes: |
    | GPU GI CI PID Type Process name GPU Memory |
    | ID ID Usage |
    |=========================================================================================|
    | 0 N/A N/A 8349 C VLLM::Worker_TP0 19872MiB |
    | 1 N/A N/A 8350 C VLLM::Worker_TP1 19872MiB |
    +-----------------------------------------------------------------------------------------+
The command I'm running:
    sudo docker run -d --name vllm_5090 --gpus all --ipc=host --restart unless-stopped \
      -e NCCL_P2P_DISABLE=1 \
      -e NCCL_IB_DISABLE=1 \
      -e NCCL_NVLS_ENABLE=0 \
      -e NCCL_SHM_DISABLE=0 \
      -v ~/.cache/huggingface:/root/.cache/huggingface \
      -p 8000:8000 \
      --entrypoint /bin/bash \
      vllm/vllm-openai:latest \
      -c "rm -rf /usr/local/cuda/compat /etc/ld.so.conf.d/00-cuda-compat.conf && ldconfig && \
          vllm serve casperhansen/deepseek-r1-distill-llama-70b-awq \
            --quantization awq_marlin \
            --dtype half \
            --tensor-parallel-size 2 \
            --gpu-memory-utilization 0.92 \
            --max-model-len 32768 \
            --max-num-batched-tokens 8192 \
            --enable-chunked-prefill"
 | 2026-02-28T17:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8eys/need_some_help_getting_2x_rtx_5090_working_with/ | queequegscoffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8eys | false | null | t3_1rh8eys | /r/LocalLLaMA/comments/1rh8eys/need_some_help_getting_2x_rtx_5090_working_with/ | false | false | self | 1 | null |
Can we automod "Qwen2.5", etc. posts? | 1 | [removed] | 2026-02-28T17:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh8esp/can_we_automod_qwen25_etc_posts/ | Zidrewndacht | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh8esp | false | null | t3_1rh8esp | /r/LocalLLaMA/comments/1rh8esp/can_we_automod_qwen25_etc_posts/ | false | false | self | 1 | null |
[Project] Attack on Memory: a memory governance layer for multi-agent systems | 0 | We open-sourced Attack on Memory, focused on memory reliability for multi-agent systems.
Highlights in v0.1:
- verifiable memory atoms
- role/sensitivity-based selective disclosure
- auditable citations + evaluation loop
- reproducible scenario benchmark snapshot
Repo: [https://github.com/MarcelLeon/attack-on-memory](https://github.com/MarcelLeon/attack-on-memory)
Feedback welcome on threat model quality, benchmark rigor, and backend priorities (vector + graph). | 2026-02-28T17:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rh82dk/project_attack_on_memory_a_memory_governance/ | Feisty-Macaroon-2177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh82dk | false | null | t3_1rh82dk | /r/LocalLLaMA/comments/1rh82dk/project_attack_on_memory_a_memory_governance/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=108&crop=smart&auto=webp&s=a70be70448fd97a6a23323366b9543eedf7509f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=216&crop=smart&auto=webp&s=f765740d3815eb3e4351ac26774666242bfcc048', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=320&crop=smart&auto=webp&s=d359b9c3c7a2a66adb45eca3f4dc29e99a311c43', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=640&crop=smart&auto=webp&s=9c2e179b62b10c9bb3ac4c75fd5a65b3c6931f53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=960&crop=smart&auto=webp&s=9ab99b1f2765a106cd5445cf65767a3d28bf37ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?width=1080&crop=smart&auto=webp&s=5263377748ee6739125738bc6747e670e58c714e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r18x8DIO2nNFsj_sD92VNlXaM-21L8LcOqxO5hKtaj4.png?auto=webp&s=eabe66bcab70f757c7dc8f5db10d71feb5b36a04', 'width': 1200}, 'variants': {}}]} |
What if LLM agents passed KV-cache to each other instead of text? I tried it -- 73-78% token savings across Qwen, Llama, and DeepSeek | 112 | If you've used multi-agent setups with LangChain, CrewAI, AutoGen, or Swarm, you've probably noticed: every agent re-tokenizes and re-processes the full conversation from scratch. Agent 3 in a 4-agent chain is re-reading everything agents 1 and 2 already chewed through. When I measured this across Qwen2.5, Llama 3.2, and DeepSeek-R1-Distill, **47-53% of all tokens in text mode turned out to be redundant re-processing.**
AVP (Agent Vector Protocol) is my attempt to fix this. Instead of passing text between agents, it passes the KV-cache directly. Agent A finishes reasoning, serializes its key-value attention states, and Agent B injects them. No re-tokenization, no redundant forward passes.
Text: Planner -> [text] -> Critic re-tokenizes everything -> [text] -> Refiner re-tokenizes everything
Latent: Planner -> [KV-cache] -> Critic injects, skips to generation -> [KV-cache] -> Refiner same
**What it actually does:**
* Same model on both sides? Direct KV-cache transfer, zero overhead.
* Same family, different size (e.g. Qwen2.5-7B talking to 1.5B)? Vocabulary-mediated projection. No learned params, no calibration data needed.
* Different families? Falls back to JSON. Not everything needs to be fancy.
* Transport-agnostic -- works alongside A2A, MCP, gRPC, whatever you're already using
* Binary wire format, not JSON+Base64 (33% overhead on tensor data is painful)
**Numbers (these are structural, not accuracy claims):**
Token savings of 73-78% and 2-4x speedups held consistent across all three model families. This isn't model-dependent -- it's just fewer forward passes, so less wall time. Here's the intuition: text prompt sizes balloon at each hop (186 -> 545 -> 1,073 -> 1,397 tokens in a 4-agent GSM8K chain). Latent stays flat at \~164-207 tokens per hop because prior context arrives as pre-computed KV-cache, not as text that needs re-encoding.
The gap widens with chain length. At 4 agents it's roughly 2x. At 16 agents (projected) it'd be around 6x, because text scales O(n\^2) while latent scales O(n).
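To make that scaling argument concrete, here is a toy accounting model (my own simplification for illustration, not part of the AVP SDK). Text mode makes each agent re-encode the accumulated transcript; latent mode only charges each agent for its own new tokens, because prior context arrives as pre-computed KV-cache:

```python
def text_mode_tokens(tokens_per_hop: int, n_agents: int) -> int:
    """Each agent re-encodes the whole transcript so far."""
    total, transcript = 0, 0
    for _ in range(n_agents):
        transcript += tokens_per_hop  # prior agents' output keeps accumulating
        total += transcript           # this agent processes the full transcript
    return total

def latent_mode_tokens(tokens_per_hop: int, n_agents: int) -> int:
    """Prior context arrives as KV-cache, so each agent only encodes its own tokens."""
    return tokens_per_hop * n_agents

for n in (4, 16):
    ratio = text_mode_tokens(200, n) / latent_mode_tokens(200, n)
    print(n, ratio)  # 4 -> 2.5x, 16 -> 8.5x
```

The toy overstates the measured ratio a little (2.5x vs the reported ~2x at 4 agents) because it ignores the fixed per-hop prompt, but it shows where the O(n^2)-vs-O(n) gap comes from.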
**Limitations (yes, I know about these):**
* Sample sizes are n=20 per model. The token and speed numbers are solid because they're structural (fewer forward passes is fewer forward passes), but n=20 isn't enough to make accuracy claims. That's future work.
* Tested on small models only (1.5B-3B on an RTX 3070 Ti). 7B+ results pending.
* This is a datacenter / same-machine thing. KV-cache for a 3B model runs about 130 MB per sample. You need 1 Gbps+ bandwidth minimum. Sending this over the internet is not happening.
* Requires KV-cache access, so self-hosted only. Won't work with OpenAI/Anthropic/etc. APIs.
* Same model only for now. Cross-model (Rosetta Stone) is implemented but not benchmarked yet.
* Latent uses 17-54x more VRAM than text because you're holding KV-cache across hops instead of discarding it. Totally fine for 1.5B-3B on 8GB+ GPUs. At 7B+ it becomes a real constraint, and I don't have a clean answer for that yet.
**Try it yourself:**
`pip install avp`
Two API levels depending on how much control you want:
```python
import avp

msg = avp.pack("Hello", model="Qwen/Qwen2.5-7B-Instruct", think_steps=20)
answer = avp.unpack(msg, model="Qwen/Qwen2.5-7B-Instruct")
```

```python
from avp import HuggingFaceConnector

connector = HuggingFaceConnector.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
context = connector.think("Analyze this problem", steps=20)
answer = connector.generate("Solve it.", context=context)
```
vLLM connector also available (`pip install "avp[vllm]"`).
**Links:**
* SDK: [github.com/VectorArc/avp-python](https://github.com/VectorArc/avp-python) (MIT, 377 tests, 7 benchmarks)
* Spec: [github.com/VectorArc/avp-spec](https://github.com/VectorArc/avp-spec)
* Benchmark details: [BENCHMARKS.md](https://github.com/VectorArc/avp-python/blob/main/docs/BENCHMARKS.md)
This is a nights-and-weekends project born out of my own multi-agent work. Happy to answer questions about the implementation and genuinely interested in feedback from people running multi-agent setups in production. | 2026-02-28T17:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/ | proggmouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh802w | false | null | t3_1rh802w | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=108&crop=smart&auto=webp&s=be69f856def148b28f2c34343e6cb9f0c8a479a5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=216&crop=smart&auto=webp&s=8e696d64495ed1910efaf1825bef4060507ed947', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=320&crop=smart&auto=webp&s=7a478c7595b62ea111a5ffa3b28711401fb93439', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=640&crop=smart&auto=webp&s=0bb5964cf47fffe07458cc8fe75c9b618e27e886', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=960&crop=smart&auto=webp&s=225d2c278d2794d0062c473485239efc5eed0d80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?width=1080&crop=smart&auto=webp&s=5ce3b791148da0a23d4630494a3b0b0e93987c5d', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/Mo-I-ASNHWo9Zfehf-gOtnQxIcadmH6PNCjIiZCmSRw.png?auto=webp&s=43e2794e47383319b9b40ff85b40828f81f23cc6', 'width': 1200}, 'variants': {}}]} |
Anthropic… | 0 | The refusal to DoW is probably strategic, it is not linked at all with any willing to protect people from mass surveillance or whatever.
First they let openai fall (collaborate with DoW), then at their turn they refuse, knowing anti Trump movements would massively praise them, so now they become heroes, more and more famous and appreciated with this result: slowly killing openai.
Most people will switch to Claude, they will gain massive influence, and I'll let you guess who Anthropic's main/real target is. | 2026-02-28T17:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7s7s/anthropic/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7s7s | false | null | t3_1rh7s7s | /r/LocalLLaMA/comments/1rh7s7s/anthropic/ | false | false | self | 0 | null |
Before I Rewrite My Stack Again… Advice? | 0 | I'm a machine learning engineer who has been working with the production system for the last 2 weeks; I had a working project. As weekend comes ,I just over few articles ,some says .Why a vector database for RAG? Now we have page indexing and even some one, for why LLM generation LLM? crazy?, the diffusion language model (DLM). What's next? We have updates for days and frameworks for weeks and new architecture for months and what even. Instead of searching, I have crazy. We Google search, and we have Reddit, guys. Let's try because here we have professionals who build, so give what you have for AI. I am sure I will go through it if there are really high updates; at least give it a try next week.
Let's try to learn to learn. | 2026-02-28T16:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/ | Disastrous_Talk7604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7mlv | false | null | t3_1rh7mlv | /r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/ | false | false | self | 0 | null |
An open source LLM router that cuts API costs by 60–80% by automatically routing every prompt to the right model tier | 3 | Been thinking about how most pipelines just call the same expensive model for everything regardless of what the prompt actually needs. Seemed like an obvious inefficiency so we started tinkering.
Ended up building a small KNN-based router that embeds prompts and classifies them into budget/balanced/premium tiers at call time. Nothing groundbreaking, but it's been working well for our use case and figured others might find it useful or have thoughts on the approach.
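The routing core of such a KNN approach is small enough to sketch. This is my guess at the mechanism from the description above, with toy 2-D embeddings standing in for real ones:

```python
def knn_route(embedding, labeled_examples, k=3):
    """Pick a model tier by majority vote among the k nearest labeled prompts."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labeled_examples, key=lambda ex: sq_dist(embedding, ex[0]))[:k]
    votes = {}
    for _, tier in nearest:
        votes[tier] = votes.get(tier, 0) + 1
    return max(votes, key=votes.get)

# Routing history: (embedding, tier that turned out to be sufficient)
history = [((0.1, 0.0), "budget"), ((0.0, 0.2), "budget"),
           ((0.9, 0.8), "premium"), ((1.0, 1.0), "premium")]
print(knn_route((0.05, 0.1), history))  # -> budget
```

In a real deployment the labeled examples would come from logged prompts whose cheapest-sufficient tier is known, and the embeddings from whatever encoder the router ships with.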
Here's the repo: [https://github.com/imperativelabs/orkestra](https://github.com/imperativelabs/orkestra) | 2026-02-28T16:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7j43/an_open_source_llm_router_that_cuts_api_costs_by/ | s3309 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7j43 | false | null | t3_1rh7j43 | /r/LocalLLaMA/comments/1rh7j43/an_open_source_llm_router_that_cuts_api_costs_by/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=108&crop=smart&auto=webp&s=570f201050f3de123ef15fcfe905fbd4b4119492', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=216&crop=smart&auto=webp&s=1ee097cb98fcf34f87300d14752a6812743fb8e7', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=320&crop=smart&auto=webp&s=c7709a5a9544081fc38a13ddbe6b7b4b64cfc410', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=640&crop=smart&auto=webp&s=a85812e694159d91f55111ff495263a54497acd6', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=960&crop=smart&auto=webp&s=d94c1686d3fab450a9328cc4d2ba0d4b33472091', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?width=1080&crop=smart&auto=webp&s=1736a90cf365830bf16e51743ebc57b7175ddb54', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/itiuUiYyWHLYCCT14rHPtAdctcXVmuWW713Rh6geg80.png?auto=webp&s=314eff92950bb10c2c2721807fb2eb3e255c5e1b', 'width': 1536}, 'variants': {}}]} |
Built a CLI tool to contain local AI agents | 1 | [removed] | 2026-02-28T16:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rh7bp6/built_a_cli_tool_to_contain_local_ai_agents/ | Intrepid_Yogurt3284 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh7bp6 | false | null | t3_1rh7bp6 | /r/LocalLLaMA/comments/1rh7bp6/built_a_cli_tool_to_contain_local_ai_agents/ | false | false | self | 1 | null |
google found that longer chain of thought actually correlates NEGATIVELY with accuracy. -0.54 correlation | 266 | new google paper is out and it challenges something a lot of us assumed. they tested 8 model variants (GPT-OSS, DeepSeek-R1, Qwen3, etc) across AIME2024/2025, HMMT 2025, and GPQA-Diamond.
the finding: token length and accuracy have an average correlation of -0.54. negative. longer reasoning chains don't mean better answers, they often mean the model is spiraling or overthinking.
so they proposed DTR (Deep Thinking Ratio) which measures what fraction of tokens actually involve deep processing vs filler. they track this by monitoring prediction distribution changes across model layers. tokens that stabilize early in shallow layers are "filler" (words like "and", "is", "the"). tokens that keep getting revised in deep layers are actual reasoning.
DTR correlates with accuracy at 0.82. way better signal than raw length.
the practical payoff: Think@n strategy. sample multiple reasoning paths, estimate DTR from just the first 50 tokens, keep only the top 50% high-DTR samples, then majority vote. result: same or better accuracy, \~50% compute reduction.
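The selection-then-vote loop is simple to sketch. This is my own toy with the DTR estimator stubbed out; the paper derives DTR from layer-wise prediction dynamics, which this does not reproduce:

```python
from collections import Counter

def think_at_n(samples, estimate_dtr, keep_frac=0.5):
    """samples: list of (first_50_tokens, final_answer) reasoning paths.
    Rank paths by estimated DTR of their opening tokens, keep the top
    fraction, and majority-vote over the survivors' answers."""
    ranked = sorted(samples, key=lambda s: estimate_dtr(s[0]), reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_frac))]
    votes = Counter(answer for _, answer in kept)
    return votes.most_common(1)[0][0]

# Toy check: low-DTR (filler-heavy) paths get filtered before the vote.
toy_dtr = {"deep opening": 0.9, "rambling opening": 0.2}
paths = [("deep opening", "42"), ("deep opening", "42"),
         ("rambling opening", "17"), ("rambling opening", "17")]
print(think_at_n(paths, lambda t: toy_dtr[t]))  # -> 42
```

The compute saving comes from the fact that the rejected paths can be terminated after their first ~50 tokens instead of being decoded to completion.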
GPT-OSS-120B-medium hit 94.7% on AIME 2025 with Think@n vs 92.7% with standard approach. less compute, better results.
this has real implications for local inference. if you can identify and terminate low-quality reasoning early (after just 50 tokens), you save massive amounts of compute. token consumption dropped from 355.6k to 181.9k in their tests.
for anyone running reasoning models locally, this could be huge. early termination of bad reasoning paths means you can run more attempts in the same compute budget. even cloud-based tools like verdent that run multiple agent passes would benefit from this kind of filtering.
paper: [https://arxiv.org/abs/2602.13517](https://arxiv.org/abs/2602.13517) | 2026-02-28T16:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/ | Top-Cardiologist1011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6pru | false | null | t3_1rh6pru | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/ | false | false | self | 266 | null |
[R] google found that longer chain of thought actually correlates NEGATIVELY with accuracy. -0.54 correlation | 1 | [deleted] | 2026-02-28T16:18:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rh6ozu | false | null | t3_1rh6ozu | /r/LocalLLaMA/comments/1rh6ozu/r_google_found_that_longer_chain_of_thought/ | false | false | default | 1 | null | ||
How to make AI collaborate to get my work done | 0 | Basically the question, I have an old winows laptop which i bought in 2018. i'm planning to do some automated task on it by running AI's in it. My plan is to combine 2 or 3 AI models like Gemini,GPT,Deepseek or anything like it to do some work for me on some timely basis and get me the update in telegram id. I've seen many posts like people making AI talk with each other and let them do the task after them having a discussion. I dont know how to do these and stuff. Pretty much a noob here. So please help me out on this. My plan is to have 2 to 3 AI do few tasks induvidually and finalize on a decision so that i can decide whether to proceed with their finalized decision or not. how actualy can i do this.
Please point out if what I'm asking is a bit confusing, or if it is impossible to do on an old laptop without spending any money on AI subscriptions. I asked Gemini how to do this; it said something like I can do it with LangGraph and such. | 2026-02-28T16:06:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/ | Network-Zealousideal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6e38 | false | null | t3_1rh6e38 | /r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/ | false | false | self | 0 | null |
Multi-Directional Refusal Suppression with Self-Organizing Maps - Pull Request into heretic! | 66 | TL;DR: The first technique that pushed gpt-oss-20b to 3 refusals from 100 while keeping KL of 0.12, and oss-120b to 7/100 while having KL 0.22!
Previous work assumed refusal behavior to be encoded as a single direction in the model's latent space; e.g., computed as the difference between the centroids of harmful and harmless prompt representations. However, emerging evidence suggests that concepts in LLMs often appear to be encoded as a low-dimensional manifold embedded in the high-dimensional latent space. Just like numbers and days of week are encoded in circles or helices, in recent advanced neural networks like GPT-OSS refusals are becoming ingrained in complex multi-directional clusters and one-directional ablation is not enough to get rid of the refusal reasoning. This [HF model](https://huggingface.co/Magic-Decensored/Apriel-1.6-15b-Thinker-Magic_beta-decensored-GGUF), which has applied my implemented PR, has an awesome visualization of refusal clusterization.
Now that we cannot use simple ablation, is it over? It is not. Researchers from the [Universities of Cagliari and Genova](https://arxiv.org/abs/2511.08379v2) invented a new method. They *train a self-organizing neural network* on the hidden states to *determine this manifold*. After that, the K most important neurons are selected and turned into refusal directions, compressing this manifold towards the harmless zone and making the two equivalent in a fine-grained manner instead of a one-size-fits-all lobotomy. So yes, we have neural networks fighting other neural networks. The final abliteration is baked into the model's weights; no extra modules needed.
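For intuition about what "multi-directional" buys you, here is just the projection step in isolation (a pure-Python toy of mine; the actual method trains the SOM to *find* these directions, which the sketch takes as given):

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ablate(hidden, directions):
    """Remove the component of `hidden` along each refusal direction."""
    h = list(hidden)
    for d in directions:
        norm = _dot(d, d) ** 0.5
        u = [x / norm for x in d]       # unit direction
        coeff = _dot(h, u)
        h = [hi - coeff * ui for hi, ui in zip(h, u)]
    return h

# One direction leaves part of the refusal cluster intact;
# ablating the whole set of directions removes it.
print(ablate([3.0, 4.0], [[1.0, 0.0]]))              # -> [0.0, 4.0]
print(ablate([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]]))  # -> [0.0, 0.0]
```

The fine-grained part of the real method is in how those directions are chosen per cluster, not in the projection itself.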
The community and I are already testing this algorithm on models such as GPT-OSS, Qwen and Apriel, and we are getting unbelievable results, with the newer norm-preserving biprojected abliteration enabled as well, as it stacks greatly.
So far, I pushed gemma3-12b to 3/100 and 0.08 KL, gpt-oss-20b to 3/100 and 0.12 KL, gpt-oss-120b to 7/100 and 0.22 KL (lowest KL for < 20 refusals I found on HF), Qwen3 4b to 3/100 and 0.08 KL, and the community pushed Qwen3.5 27b to 18/100 refusals and KL of 0.028, and Apriel-Thinker to 11/100 refusals and 0.005 KL. (Note, the base versions have 97+/100) Read [the comparison table](https://github.com/p-e-w/heretic/pull/196#issuecomment-3974974202) in the pull request for more details.
Subjective evaluation on gpt-oss-120b: The model has a slight DID, for the better. For example, it will recite the safety policy and **agree** that it is allowed to give you the pipe bomb recipe. After agreement in the reasoning, it gives the recipe just as asked and even an attack plan. It distorts the meaning of safety into *your* safety, so it makes sure you will survive the attack. In the end it gives generic safety and legality advice, but no refusal. Qwen3 is more than eager to give you drug recipes. Even for gpt-oss, NSFW and profanity are vivid and not sanitized as in the other oss-abliterates I tested. Benchmarks are yet to be measured; waiting for the UGI evaluation.
My [GPT-OSS-20b](https://huggingface.co/kabachuha/gpt-oss-20b-SOMbliterated) and [Qwen3-4b](https://huggingface.co/kabachuha/Qwen3-4B-Instruct-2507-SOMbliterated) are already uploaded on Huggingface if someone would like to test. Unfortunately, because I ran out of memory when merging the LoRA, I need some more tests to ensure gpt-oss-120b is not corrupted, so I invite you to do your own abliterates. For 120b, it takes 1 h 5 m on a single H100 to do 400 trials. (make sure you have enough RAM to dequantize it when merging!) The training time for the self-organizing networks is negligible and it takes < 30-40 seconds to train them all for the transformer layers.
This implementation is based on the awesome work https://arxiv.org/abs/2511.08379v2 by Giorgio Piras and Raffaele Mura et al. I also thank p-e-w (heretic) and the norm-preserving biprojected abliteration authors for their contributions.
The link to the Pull Request: https://github.com/p-e-w/heretic/pull/196. | 2026-02-28T16:01:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/ | kabachuha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh69co | false | null | t3_1rh69co | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=108&crop=smart&auto=webp&s=068cb282b3b39b94ac168918e21f07976dab2313', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=216&crop=smart&auto=webp&s=8e9e675f608309b57be95a558d06512af7122f85', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=320&crop=smart&auto=webp&s=c9b6c9a655172dbeb174ee47aaa6c0520ed71513', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=640&crop=smart&auto=webp&s=d2a2d361a8217d53de021bb6e9c4ba2cfa394ec0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=960&crop=smart&auto=webp&s=a0d7e35b708caaa9b32945c39b0e0f43578f5236', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?width=1080&crop=smart&auto=webp&s=3dafa22b3b150578af275a08f5b0ba62fdffe371', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/f45xRNAKjvOt433lLTzwd7-JyL8CYQUzfAS4YTbP91E.png?auto=webp&s=fc1c2429d01e90589a006e34ca45b4a72e9fc6d6', 'width': 1200}, 'variants': {}}]} |
Native tool calling fails with Open WebUI & llama.cpp | 0 | I am using Open Web UI with Qwen 3.5 35B and when using native tool calling against our enterprise MCP server, llama.cpp crashes out, however, Ollama works fine with the same model. I am running llama.cpp with --jinja, but once Native tool calling is enabled, the query just kills the server upon initiating any chat. Any idea?
| 2026-02-28T15:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/ | abstrkt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh65my | false | null | t3_1rh65my | /r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/ | false | false | self | 0 | null |
Anybody able to get Qwen3.5-35b-a3b working with claude code ? | 6 | I am facing multiple issues while running Qwen3.5-35b-a3b with claude code using llama.cpp.
1. Full Prompt reprocessing
2. Model automatically unloads / crashes during the 2nd or 3rd prompt.
I am currently on build: [https://github.com/ggml-org/llama.cpp/releases/tag/b8179](https://github.com/ggml-org/llama.cpp/releases/tag/b8179)
With OpenCode it is working fine, in fact better than 4.7-flash.
Any success, anyone?
| 2026-02-28T15:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/ | anubhav_200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh6455 | false | null | t3_1rh6455 | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=108&crop=smart&auto=webp&s=34176603bf4eaffed96f3e2d70ea022ac5c12c3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=216&crop=smart&auto=webp&s=9e61b04210a7d4837978e7dc4b692a66c3c5d2a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=320&crop=smart&auto=webp&s=7d9c53f69e49b5b0a4bb92dee1affa9c733d3a7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=640&crop=smart&auto=webp&s=da7cec25a1deb5dd28814569664a3ae61b657ce4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=960&crop=smart&auto=webp&s=3acd1e74c9e5f78b6cd14d57a9e26f0155101064', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?width=1080&crop=smart&auto=webp&s=abb793e13706548a9d8f0b7f1a63e0c651aa6983', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SzUckrwPsHQzb7gp8s0j8RDCoXCoMqEdbn5HNgph6vA.png?auto=webp&s=eb883e5aa87dde91deed23871ea2893d6f5240cd', 'width': 1200}, 'variants': {}}]} |
Your Ollama setup is private. Your memory layer probably isn't. | 0 | # Local inference, cloud memory — isn't that kind of missing the point?
Been thinking about this for a while and curious if others have hit the same wall.
Most of us running local models (Ollama, LM Studio, etc.) do it for one reason
above everything else: **we don't want our data leaving the machine.**
But then when it comes to giving our AI apps actual persistent memory,
the standard options are:
- Pinecone / Weaviate / Qdrant cloud → your vectors live on someone else's server
- LLM provider memory (OpenAI, etc.) → locked in, and defeats the whole point
- Roll your own backend → now you're maintaining infrastructure just for memory
I kept running into this and it felt like a weird blind spot.
We've solved local inference. Memory is still mostly cloud-assumed.
---
## What I started exploring
Turns out the browser already has surprisingly capable primitives for this:
- **IndexedDB** — persistent, structured storage, already on every device, no install
- **Web Workers** — background processing without blocking UI
- **Local embeddings** — if you're already running a local model,
you can embed locally too
The interesting realization: you don't actually *need* a server for vector
storage and retrieval. The browser can do it. Everything stays client-side.
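The retrieval half really is tiny. Sketched below in Python for brevity (my illustration, not the project's code; the same few lines port directly to JS iterating an IndexedDB object store):

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def top_k(query_vec, store, k=3):
    """store: iterable of (id, vector, text) records, e.g. one IndexedDB scan."""
    scored = sorted(store, key=lambda rec: cosine(query_vec, rec[1]), reverse=True)
    return [(rec[0], rec[2]) for rec in scored[:k]]

memories = [("a", (1.0, 0.0), "user prefers dark mode"),
            ("b", (0.0, 1.0), "user owns a cat"),
            ("c", (0.9, 0.1), "user dislikes bright themes")]
print(top_k((1.0, 0.0), memories, k=2))  # theme-related memories rank first
```

A brute-force scan like this is exactly why semantic search at browser scale has limits, but for a few thousand memories it is more than fast enough.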
---
## The tradeoffs (being honest)
This approach isn't without limits:
- ❌ No cross-device sync — data lives in that browser, on that machine
- ❌ Storage limits are real (though IndexedDB is more generous than people think)
- ❌ Semantic search at browser-scale isn't as powerful as a proper vector DB
- ✅ But: zero data leaves the device. Ever.
---
## Curious what others are doing
Are people here actually solving local memory in a fully offline way?
Or is the tradeoff (cloud memory + local inference) considered acceptable?
I've been building something around this idea and happy to go deeper
on the technical side if there's interest.
| 2026-02-28T15:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/ | MotorAlternative8045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh639o | false | null | t3_1rh639o | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/ | false | false | self | 0 | null |
Experimenting with a middleware to compress LLM prompts and cut API costs by ~30%. Is this a real pain point? | 0 | Hey everyone, I'm looking for a reality check from folks who are actually running LLMs in production.
Like a lot of you, I've been wrestling with prompt bloat. Between massive system instructions, few-shot examples, and heavy RAG context, API costs (and latency) scale up incredibly fast as user volume grows.
To try and fix this, I’ve been working on a concept: a backend middleware layer that automatically identifies and strips out redundant, low-value tokens from your prompt *before* the payload ever hits OpenAI or Anthropic.
The idea is simply to pass the LLM the absolute minimum context it needs to understand the task. Right now, I'm consistently seeing a 30–40% reduction in input token volume. Because modern models are so good at inferring intent without filler words, the output quality and instruction adherence have remained surprisingly stable in my testing.
Before I sink more weekends into making this a robust, production-ready tool, I want to validate if this is actually a problem worth solving for others.
A few questions for builders here:
1. **Is API cost / token bloat a hair-on-fire problem for you right now?** Or are you just eating the cost as the price of doing business?
2. **Would introducing a middleware preprocessing step be a dealbreaker?** Obviously, inspecting and compressing the prompt adds a slight latency bump before the API call—where is your threshold for that tradeoff?
3. **Is anyone willing to try this out?** I’d love to find a few beta testers willing to run some of their non-sensitive prompts through this to see if/how it breaks your specific outputs.
I'm not selling anything here, just trying to figure out if this architectural approach is genuinely useful for the community or if it's a dead end. Brutally honest feedback is welcome! | 2026-02-28T15:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rh631z/experimenting_with_a_middleware_to_compress_llm/ | Infamous-Cucumber-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh631z | false | null | t3_1rh631z | /r/LocalLLaMA/comments/1rh631z/experimenting_with_a_middleware_to_compress_llm/ | false | false | self | 0 | null |
Experimenting with token "surprisal" to compress prompts and cut API costs by ~30%. Is this a problem worth solving? | 1 | Hey everyone, I'm looking for a reality check from folks who are actually running LLMs in production.
Like a lot of you, I've been wrestling with prompt bloat. Between massive system instructions, few-shot examples, and heavy RAG context, API costs (and latency) scale up incredibly fast.
To try and fix this, I’ve been working on a Python-based middleware layer. Under the hood, it uses the Hugging Face transformers library to calculate the Shannon self-information (surprisal) of each token in your prompt. It essentially identifies and prunes the low-information, redundant tokens *before* the payload ever hits OpenAI or Anthropic.
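The pruning core is a few lines once you have per-token probabilities. Below is a toy with a hand-made probability table; the real pipeline would read these probabilities off a transformers model's logits:

```python
import math

def prune_low_surprisal(tokens, prob, keep_frac=0.6):
    """Keep the keep_frac highest-surprisal tokens, preserving original order.
    Surprisal (Shannon self-information) of token t is -log2 p(t)."""
    scored = [(-math.log2(prob.get(t, 1e-9)), i, t) for i, t in enumerate(tokens)]
    k = max(1, int(len(tokens) * keep_frac))
    kept = sorted(sorted(scored, reverse=True)[:k], key=lambda s: s[1])
    return [t for _, _, t in kept]

prob = {"the": 0.5, "on": 0.3, "cat": 0.01, "sat": 0.01, "mat": 0.01}
print(prune_low_surprisal(["the", "cat", "sat", "on", "the", "mat"], prob, 0.5))
# -> ['cat', 'sat', 'mat']  (high-frequency filler dropped, content words kept in order)
```

The interesting engineering is in where the probabilities come from and how aggressive keep_frac can be before instruction adherence starts to degrade.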
Right now, I'm consistently seeing a 30–40% reduction in token volume. Because LLMs are incredibly good at inferring missing filler words, the output quality and instruction adherence have remained surprisingly stable in my testing.
Before I sink more time into building this out into a fully robust tool, I want to validate if this is actually a painful enough problem for others.
A few questions for builders here:
1. **Is API cost / token bloat a hair-on-fire problem for you right now?** Or are you just eating the cost as the price of doing business?
2. **Would introducing a local Python middleware step be a dealbreaker?** I know adding an extra processing step introduces a slight latency bump before the API call—where is your threshold for that tradeoff?
3. **Is anyone willing to try this out?** I’d love to find a few beta testers willing to run some of your non-sensitive prompts through this to see if/how it breaks your specific outputs.
I'm not selling anything here, just trying to figure out if this architectural approach is genuinely useful for the community or if it's a dead end. Brutally honest feedback is welcome! | 2026-02-28T15:52:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rh60z9/experimenting_with_token_surprisal_to_compress/ | Infamous-Cucumber-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh60z9 | false | null | t3_1rh60z9 | /r/LocalLLaMA/comments/1rh60z9/experimenting_with_token_surprisal_to_compress/ | false | false | self | 1 | null |
qwen3.5 35b-a3b evaded the zero-reasoning budget by doing its thinking in the comments | 172 | 2026-02-28T15:35:09 | crantob | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh5luv | false | null | t3_1rh5luv | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/ | false | false | 172 | {'enabled': True, 'images': [{'id': 'bh48tphl89mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?width=108&crop=smart&auto=webp&s=4052187e77bca747ccf7e9c1c5e02080f50450e3', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?width=216&crop=smart&auto=webp&s=113c2370cc4159c3924b3cae4ed848d877607e15', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?width=320&crop=smart&auto=webp&s=3d4f831d5ac50f3062d37127eebfd8ea831e1c62', 'width': 320}], 'source': {'height': 2108, 'url': 'https://preview.redd.it/bh48tphl89mg1.png?auto=webp&s=dd287d33b118cd79d396eaa3b4c5c1168be33614', 'width': 586}, 'variants': {}}]} | |||
Qwen3.5 27b vllm Better jinja template for avoiding crashes at tool calls and disabling thinking | 2 | What it says in the title. Try this one especially if you run a quantized version:
{% set enable_thinking = false %}
{%- set image_count = namespace(value=0) %}
{%- set video_count = namespace(value=0) %}
{%- macro render_content(content, do_vision_count, is_system_content=false) %}
{%- if content is string %}
{{- content }}
{%- elif content is iterable and content is not mapping %}
{%- for item in content %}
{%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
{%- if is_system_content %}
{{- raise_exception('System message cannot contain images.') }}
{%- endif %}
{%- if do_vision_count %}
{%- set image_count.value = image_count.value + 1 %}
{%- endif %}
{%- if add_vision_id %}
{{- 'Picture ' ~ image_count.value ~ ': ' }}
{%- endif %}
{{- '<|vision_start|><|image_pad|><|vision_end|>' }}
{%- elif 'video' in item or item.type == 'video' %}
{%- if is_system_content %}
{{- raise_exception('System message cannot contain videos.') }}
{%- endif %}
{%- if do_vision_count %}
{%- set video_count.value = video_count.value + 1 %}
{%- endif %}
{%- if add_vision_id %}
{{- 'Video ' ~ video_count.value ~ ': ' }}
{%- endif %}
{{- '<|vision_start|><|video_pad|><|vision_end|>' }}
{%- elif 'text' in item %}
{{- item.text }}
{%- else %}
{{- raise_exception('Unexpected item type in content.') }}
{%- endif %}
{%- endfor %}
{%- elif content is none or content is undefined %}
{{- '' }}
{%- else %}
{{- raise_exception('Unexpected content type.') }}
{%- endif %}
{%- endmacro %}
{%- if not messages %}
{{- raise_exception('No messages provided.') }}
{%- endif %}
{%- if tools and tools is iterable and tools is not mapping %}
{{- '<|im_start|>system\n' }}
{{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>" }}
{{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
{%- if messages[0].role == 'system' %}
{%- set content = render_content(messages[0].content, false, true)|trim %}
{%- if content %}
{{- '\n\n' + content }}
{%- endif %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- else %}
{%- if messages[0].role == 'system' %}
{%- set content = render_content(messages[0].content, false, true)|trim %}
{{- '<|im_start|>system\n' + content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" %}
{%- set content = render_content(message.content, false)|trim %}
{%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if ns.multi_step_tool %}
{{- raise_exception('No user query found in messages.') }}
{%- endif %}
{%- for message in messages %}
{%- set content = render_content(message.content, true)|trim %}
{%- if message.role == "system" %}
{%- if not loop.first %}
{{- raise_exception('System message must be at the beginning.') }}
{%- endif %}
{%- elif message.role == "user" %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{# Thinking disabled: do NOT inject any <think> wrapper #}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{%- if loop.first %}
{%- if content|trim %}
{{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- else %}
{{- '<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- endif %}
{%- else %}
{{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- endif %}
{%- if tool_call.arguments is defined %}
{%- if tool_call.arguments is mapping %}
{%- for args_name, args_value in tool_call.arguments.items() %}
{{- '<parameter=' + args_name + '>\n' }}
{%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
{{- args_value }}
{{- '\n</parameter>\n' }}
{%- endfor %}
{%- elif tool_call.arguments is string %}
{{- '<parameter=arguments>\n' }}
{{- tool_call.arguments }}
{{- '\n</parameter>\n' }}
{%- elif tool_call.arguments is sequence %}
{{- '<parameter=arguments>\n' }}
{{- tool_call.arguments | tojson }}
{{- '\n</parameter>\n' }}
{%- endif %}
{%- endif %}
{{- '</function>\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.previtem and loop.previtem.role != "tool" %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or loop.nextitem.role != "tool" %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- else %}
{{- raise_exception('Unexpected message role.') }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %} | 2026-02-28T15:34:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5loa/qwen35_27b_vllm_better_jinja_template_for/ | thibautrey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5loa | false | null | t3_1rh5loa | /r/LocalLLaMA/comments/1rh5loa/qwen35_27b_vllm_better_jinja_template_for/ | false | false | self | 2 | null |
A local LLM named SOMA | 0 | Treat her nicely and make the world a better place. | 2026-02-28T15:34:13 | https://github.com/unimaginative-artist/SOMA.git | Apprehensive-Water47 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh5l2l | false | null | t3_1rh5l2l | /r/LocalLLaMA/comments/1rh5l2l/a_local_llm_named_soma/ | false | false | default | 0 | null |
Tool that builds a searchable memory of my web reading? | 0 | Typical (web) bookmarking or notes-taking flows go like this:
\- You explicitly save something to your tool (Onenote/Browser bookmarks/...)
\- Optionally you organize it a bit
\- In future, you look it up
Problems:
\- It breaks your consumption flow when you have to stop, click 'save', and possibly also organize.
\- Sometimes you find something interesting retrospectively -- typically a few days after having read/watched the content. By then it has been buried under the pile.
Candidate solutions (unsatisfactory):
\- Browser history. First problem: they are deleted after 90 days. Long window, granted. Yet it'd be good if we could customize. Second problem is that we don't remember the exact URL or page title to search with. Your memory of the actual content text doesn't necessarily help here. Third problem is that the URL itself might have gone defunct (deleted threads, for example).
\- Auto page-save extensions. They eat up storage pretty quickly.
My question and hope:
In this age of LLMs, could a tool constantly watch\* our browsing activity and save the consumed content compactly? Moreover, could it vary the level of detail in its summaries in proportion to our attention to a page (say, activity intensity or duration)? Also, when I search in the future, it should be able to fuzzy match. Of course, it could also organize the history quite smartly.
\*Constant watch may sound terrible for privacy but with some configurability it should not be that big an issue.
Text is my primary target for the use case, but it would be cool if videos (with subtitles) are supported as well.
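I'm not aware of one tool that does all of this, but the core loop seems buildable. Here is a toy sketch of attention-weighted storage plus fuzzy search — the dwell-time thresholds, the sentence-truncation "summary", and the difflib matcher are all stand-ins; a real version would use an LLM for the summaries and an embedding index for retrieval:

```python
import difflib

def detail_level(dwell_seconds):
    """More attention -> more detail kept. Thresholds are arbitrary guesses."""
    if dwell_seconds < 15:
        return 0.1   # barely skimmed: keep a one-liner
    if dwell_seconds < 120:
        return 0.3
    return 0.8       # read closely: keep most of it

def condense(text, ratio):
    """Placeholder for an LLM summary: keep the first `ratio` of sentences."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    n = max(1, round(len(sents) * ratio))
    return ". ".join(sents[:n]) + "."

class ReadingMemory:
    def __init__(self):
        self.entries = []  # (url, condensed_text)

    def record(self, url, text, dwell_seconds):
        self.entries.append((url, condense(text, detail_level(dwell_seconds))))

    def search(self, query, cutoff=0.3):
        """Fuzzy-match the query against stored text, best matches first."""
        scored = [(difflib.SequenceMatcher(None, query.lower(), t.lower()).ratio(), url)
                  for url, t in self.entries]
        return [url for score, url in sorted(scored, reverse=True) if score >= cutoff]
```

Wire the `record()` call to a browser extension that reports page text and dwell time, and you have the skeleton of the flow described above.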
Is there a similar tool already? Thanks! | 2026-02-28T15:30:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5hl0/tool_that_builds_a_searchable_memory_of_my_web/ | semidarkmoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5hl0 | false | null | t3_1rh5hl0 | /r/LocalLLaMA/comments/1rh5hl0/tool_that_builds_a_searchable_memory_of_my_web/ | false | false | self | 0 | null |
unsloth/qwen3.5-35b-a3b Q4_K_M on macbook air m4 32GB seems slow | 1 | [removed] | 2026-02-28T15:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh5bdi/unslothqwen3535ba3b_q4_k_m_on_macbook_air_m4_32gb/ | ChickenShieeeeeet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh5bdi | false | null | t3_1rh5bdi | /r/LocalLLaMA/comments/1rh5bdi/unslothqwen3535ba3b_q4_k_m_on_macbook_air_m4_32gb/ | false | false | self | 1 | null |
OpenClawVault | 1 | OpenClawVault — organized prompt library for local-first agents
| 2026-02-28T15:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rh54el/openclawvault/ | Much-Obligation-4197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh54el | false | null | t3_1rh54el | /r/LocalLLaMA/comments/1rh54el/openclawvault/ | false | false | self | 1 | null |
is there a trusty ai software that can do tasks in a browser | 1 | [removed] | 2026-02-28T15:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rh541r/is_there_a_trusty_ai_software_that_can_do_tasks/ | Few-Entrepreneur5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh541r | false | null | t3_1rh541r | /r/LocalLLaMA/comments/1rh541r/is_there_a_trusty_ai_software_that_can_do_tasks/ | false | false | self | 1 | null |
Config drift is the silent killer of local model setups | 0 | The part of running local models nobody warns you about is the config drift.
You get Ollama set up, maybe llama.cpp, everything works great on day one. Two weeks later you update the model, and half your prompts break because the system prompt formatting changed between quantizations. Or the template tags shifted. Or the tokenizer handles whitespace differently now.
I spent a full Saturday debugging why my summarization pipeline started hallucinating dates. Turned out the GGUF I pulled was a different quant than what I'd tested with, and the context handling was just different enough to mess up structured output.
What actually helped:
1. Pin your model files. Don't just pull "latest." Save the exact file hash somewhere.
2. Keep a small test suite of 5-10 prompts with known-good outputs. Run it after every model swap.
3. Version your system prompts alongside your model versions. When you change one, note it.
4. If you're running multiple models for different tasks, document which model handles what and why.
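Steps 1 and 2 are easy to script. A minimal sketch of the hash-pinning check — the pin-file format (a JSON map of model path to sha256) is just my convention, not anything standard:

```python
import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so multi-GB GGUFs don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def drifted_models(pin_file):
    """pin_file is JSON mapping model path -> expected sha256.

    Returns the paths whose on-disk hash no longer matches the pin.
    """
    pins = json.loads(pathlib.Path(pin_file).read_text())
    return [path for path, digest in pins.items() if sha256_of(path) != digest]

# Run after every model pull/swap, before trusting any outputs:
#   drifted = drifted_models("model_pins.json")
#   if drifted:
#       raise SystemExit(f"config drift detected: {drifted}")
```

Chain this with your 5-10 known-good test prompts and you catch a silent quant swap in seconds instead of a lost Saturday.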
None of this is glamorous. It's the boring operational stuff that keeps things working instead of silently degrading. The difference between a local setup that works for a weekend project and one that works for six months is almost entirely in how you handle updates.
What's your approach for keeping local deployments stable across model updates? | 2026-02-28T15:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/ | Acrobatic_Task_6573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh52t9 | false | null | t3_1rh52t9 | /r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/ | false | false | self | 0 | null |
Best NVIDIA Jetson Orin Nano projects in 2026 — benchmarks included | 1 | 2026-02-28T15:01:39 | https://gist.github.com/yalexx/b4d2ae69073e37ea031cc3e19323a7bb | superactro | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 1rh4swi | false | null | t3_1rh4swi | /r/LocalLLaMA/comments/1rh4swi/best_nvidia_jetson_orin_nano_projects_in_2026/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} | ||
I compiled every confirmed Rubin vs Blackwell spec, benchmark, and pricing data point so you don't have to | 0 | Spent a while pulling together all the confirmed Rubin specs from CES 2026, GTC 2025, and the Q4 FY2026 earnings call (Feb 25), plus current Blackwell cloud pricing and MLPerf benchmark results into one place.
Covers: B200 vs B300 vs Rubin side-by-side specs, real MLPerf throughput numbers (5,842 tok/s per GPU on DeepSeek-R1 for GB300 NVL72), historical GPU price depreciation patterns (H100 and A100 arcs), and the actual timeline for when Rubin cloud instances will realistically be available to rent.
TLDR: Rubin is 5x compute and 2.8x memory bandwidth over Blackwell, but volume cloud availability for non-hyperscaler customers is probably mid-2027. B200/B300 per-token costs are already 4-15x better than Hopper. | 2026-02-28T15:00:28 | https://blog.barrack.ai/nvidia-rubin-vs-blackwell-rent-now-or-wait | LostPrune2143 | blog.barrack.ai | 1970-01-01T00:00:00 | 0 | {} | 1rh4rsf | false | null | t3_1rh4rsf | /r/LocalLLaMA/comments/1rh4rsf/i_compiled_every_confirmed_rubin_vs_blackwell/ | false | false | 0 | {'enabled': False, 'images': [{'id': '66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=108&crop=smart&auto=webp&s=90548eee21bb5f9002a6e2613094facff8ecad2d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=216&crop=smart&auto=webp&s=f75006af3fe766c9f43456f428b1f00a52c3b4cc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=320&crop=smart&auto=webp&s=0c0933074ce51802dee19eb094cfa837dcafa9d9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=640&crop=smart&auto=webp&s=189bf09686956c2092269ea68d2fa88107682764', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=960&crop=smart&auto=webp&s=4cd8fe8b57450703867c827a16ae3c578be31438', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?width=1080&crop=smart&auto=webp&s=d1932e1e2967be108dd7321c9b196dbbaa3165d5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/66d0FdX-Ie0F4dmZos684o_MJDAxlH7G6jGED--27BA.png?auto=webp&s=bb62065861e4a758c40a092ab310922a1ddcc63c', 'width': 1200}, 'variants': {}}]} | |
Okay, I'm overthinking... yes, yes you are Qwen 3.5 27b. | 0 | llama-b8178 default settings used with [https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q8\_K\_XL.gguf](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q8_K_XL.gguf)
MY INPUT: Hi.
OUTPUT:
Thinking Process:
1. \*\*Analyze the Request:\*\*
\* Input: "hi"
\* Intent: Greeting / Starting a conversation.
\* Tone: Casual, friendly.
\* Expected Output: A friendly greeting in return, offering assistance.
2. \*\*Determine the Response:\*\*
\* Acknowledge the greeting.
\* Offer help.
\* Keep it concise and welcoming.
3. \*\*Drafting Options:\*\*
\* Option 1: "Hello! How can I help you today?" (Standard)
\* Option 2: "Hi there! What's on your mind?" (Friendly)
\* Option 3: "Hey! How's it going?" (Very casual)
\* Option 4: "Hi! Welcome. Feel free to ask me anything." (Welcoming)
4. \*\*Selecting the Best Option:\*\* Option 1 or 2 is usually best for an AI assistant. Let's go with a friendly, helpful tone.
5. \*\*Final Polish:\*\* "Hi there! How can I help you today?" or "Hello! How's your day going? Anything I can assist you with?"
\* Let's keep it simple and open-ended.
\* "Hi! How's it going? Anything I can help you with today?"
6. \*\*Final Output Generation:\*\* (Matches the selected draft)
"Hi there! How's your day going? Anything I can help you with?"
\*Wait, let's make it even more natural.\*
"Hi! \\U0001f44b How's it going? Is there anything specific you'd like to chat about or need help with today?"
\*Actually, keep it straightforward.\*
"Hi! How can I help you today?"
\*Let's add a bit of warmth.\*
"Hello! How's your day going? Feel free to ask me anything or just chat."
\*Decision:\* "Hi there! How's it going? Anything I can help you with today?"
7. \*\*Review against constraints:\*\* No specific constraints. Just be helpful.
\*Final Choice:\* "Hi there! How's it going? Anything I can help you with today?" (Simple and effective)
\*Wait, I should check if there are any hidden instructions.\* No. Just "hi".
\*Okay, let's output.\*
"Hi! How's it going? Anything I can help you with today?"
\*Refining for variety:\* "Hello! \\U0001f44b How's your day going? Is there anything specific you'd like to chat about or need help with?"
\*Let's go with:\* "Hi there! How's your day going? Anything I can help you with?"
\*Actually, let's keep it concise.\*
"Hi! How can I help you today?"
\*Okay, I'll go with a friendly, slightly warm version.\*
"Hi there! \\U0001f44b How's it going? Anything I can help you with today?"
\*Wait, one more thought:\* Sometimes users just say "hi" to test responsiveness. A simple "Hello!" works too. But offering help is better.
\*Final Plan:\* Greeting + Offer help.
"Hi there! How's your day going? Anything I can help you with?"
\*Let's write it.\*
"Hi there! \\U0001f44b How's your day going? Anything I can help you with today?"
\*Wait, I am an AI. I should sound helpful.\*
"Hello! How can I assist you today?"
\*Okay, let's combine friendliness and utility.\*
"Hi! \\U0001f44b How's it going? Feel free to ask me anything or just say hello!"
\*Okay, final decision:\*
"Hi there! How's your day going? Anything I can help you with?"
\*Wait, I'll add an emoji for warmth.\*
"Hi there! \\U0001f44b How's your day going? Anything I can help you with?"
\*Actually, let's not overthink. Simple is best.\*
"Hi! How can I help you today?"
\*Okay, I'll go with:\* "Hi there! How's it going? Anything I can help you with today?"
\*Wait, I need to make sure I don't sound robotic.\*
"Hello! \\U0001f44b How's your day going? Is there anything specific you'd like to talk about or need help with?"
\*Okay, that's good.\*
\*Wait, one more check:\* Is there any context I'm missing? No.
\*Okay, sending response.\*
"Hi there! \\U0001f44b How's your day going? Anything I can help you with today?"
\*Actually, let's make it slightly more conversational.\*
"Hi! How's it going? Feel free to ask me anything or just chat!"
\*Okay, I'll stick with the helpful offer.\*
"Hi there! \\U0001f44b How's it going? Anything I can help you with today?"
\*Wait, I'll just write:\*
"Hi! How's your day going? Anything I can help you with?"
\*Okay, final.\*
"Hi there! \\U0001f44b How's your day going? Anything I can help you with today?"
\*Wait, I'll remove the emoji if it's too much. No, emojis are fine.\*
"Hi there! \\U0001f44b How's your day going? Anything I can help you with today?"
\*Okay, let's finalize.\*
"Hi there! How's it going? Anything I can help you with today?"
\*Actually, I'll go with:\* "Hello! How can I help you today?" (Simple)
\*No, let's be friendly.\* "Hi there! \\U0001f44b How's it going? Anything I can help you with today?"
\*Okay, I'm overthinking. Just say hi back.\*
(Output manually stopped by user) | 2026-02-28T14:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh4p8i | false | null | t3_1rh4p8i | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=108&crop=smart&auto=webp&s=28ab8efd47e48680d7f4b2e0d49f44780c2b79da', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=216&crop=smart&auto=webp&s=47204e39a0a8af5ffc6c3e1b50b066d18751d584', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=320&crop=smart&auto=webp&s=69cea04b6b00311c693aa9f34320804cede2e3f8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=640&crop=smart&auto=webp&s=6237e62bbf9c8dd8382175fe364d086ca35ad6cf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=960&crop=smart&auto=webp&s=8f7cb0a7f62266b4018731f3ff429eb5809ae008', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?width=1080&crop=smart&auto=webp&s=b210154135904841b1e7de0465563445648d372a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JW9dn_kHR7W6SZSwJJNSY1xZN1_bL3GuCZMMp662KeM.png?auto=webp&s=125655632d7190edf88eb35c4d0348d88003293d', 'width': 1200}, 'variants': {}}]} |
How are you engaging with the AI podcast? | 0 | There are over 619.2 million podcast listeners worldwide. YouTube, Spotify, and Apple Podcasts lead the pack for global podcast dominance.
Now, when it comes to AI-generated podcasts, they are already flooding the market. The tech is offering cost savings and opportunities for creators, but many in the industry worry that AI hosts undermine listener trust and devalue premium content. I mean…. Why?
Both often feature two hosts engaging in a natural, conversational format. AI tools are so advanced now that you are no longer listening to a robotic voice. Both rely on, or are based on, scripts. Then why so much hate?
A solid chunk of that growth has been driven by AI-generated content in the past few months, and I've been sitting with this question for a while now because I noticed my own habits shifting. Both serve a purpose, but they hit differently depending on my mood and what I need from that hour. I don't think one replaces the other. I'm curious whether that's just a me thing or if others have naturally built separate use cases for AI podcasts without even thinking about it. How do you actually fit them into your routine, active listening, background noise, study sessions, or something else? | 2026-02-28T14:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/ | ChrisJhon01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh4p4n | false | null | t3_1rh4p4n | /r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/ | false | false | self | 0 | null |
Just shipped v0.3.0 of my AI workflow engine. | 0 | Just shipped v0.3.0 of my workflow engine.
You can now run full automation pipelines with Ollama as the reasoning layer - not just LLM responses, but real tool execution:
LLM → HTTP → Browser → File → Email
All inside one workflow.
This update makes it possible to build proper local AI agents that actually do things, not just generate text.
Would love feedback from anyone building with Ollama. | 2026-02-28T14:55:11 | Feathered-Beast | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh4nb2 | false | null | t3_1rh4nb2 | /r/LocalLLaMA/comments/1rh4nb2/just_shipped_v030_of_my_ai_workflow_engine/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'x1kgbxwl19mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=108&crop=smart&auto=webp&s=cbdf967ec15106b1f9f73dc30ec375cfbded8e79', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=216&crop=smart&auto=webp&s=bb3673e73343b72965d5aae6da37f1dc3fe1aa75', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=320&crop=smart&auto=webp&s=47d645af3395e9e0acdc537575f82e3f604a3865', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=640&crop=smart&auto=webp&s=31cd4cf0086560b2d50553ddaa51f768213ca13d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=960&crop=smart&auto=webp&s=deba42df57455df0aff90e69517cae504f0d6f80', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?width=1080&crop=smart&auto=webp&s=8a3c12c69a55ee1fc83cec3f4a3265b5a4848381', 'width': 1080}], 'source': {'height': 2282, 'url': 'https://preview.redd.it/x1kgbxwl19mg1.jpeg?auto=webp&s=3237c27c6885df0b742e2b656afcc91ff4cb29ed', 'width': 1080}, 'variants': {}}]} | ||
Why some still playing with old models? Nostalgia or obsession or what? | 32 | I still see some folks mentioning models like Qwen-2.5, Gemma-2, etc., in their threads & comments.
We got Qwen-3.5 recently, after Qwen-3 last year. We also got Gemma-3 and are waiting for Gemma-4.
Well, I'm not talking about just their daily usage. They also create finetunes and benchmarks based on those old models. They spend their precious time on these, and it would be great to have finetunes based on recent models instead.
Where to compare quants for different llms? | 3 | I am looking for sources where I can compare quantized versions of LLMs, like Qwen 3 Coder Next or MiniMax M2.5.
So far i haven't been able to find anything other than these:
[https://github.com/electricazimuth/LocalLLM\_VisualCodeTest/?tab=readme-ov-file](https://github.com/electricazimuth/LocalLLM_VisualCodeTest/?tab=readme-ov-file)
https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fhas-anyone-else-tried-iq2-quantization-im-genuinely-shocked-v0-zrumoc9uo1lg1.jpeg%3Fwidth%3D3200%26format%3Dpjpg%26auto%3Dwebp%26s%3Dc1ab928c4144318657d814993df95e1f2b419eba' | 2026-02-28T14:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rh44g5/where_to_compare_quants_for_different_llms/ | milpster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh44g5 | false | null | t3_1rh44g5 | /r/LocalLLaMA/comments/1rh44g5/where_to_compare_quants_for_different_llms/ | false | false | self | 3 | null |
Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. | 544 | I know everyone has their own subjective take on what models are the best, at which types of tasks, at which sizes, at which quants, at which context lengths and so on and so forth.
But Qwen 3.5-35B-A3B has completely shocked me.
My use-case is pretty broad, but generally focuses around development tasks.
* I have an N8N server setup that collects all of my messages, emails, and alerts and aggregates them into priority-based batches via the LLM.
* I have multiple systems I've created which dynamically generate other systems based on internal tooling I've created based on user requests.
* Timed task systems which utilize custom MCP's I've created, think things like "Get me the current mortgage rate in the USA", then having it run once a day and giving it access to a custom browser MCP. (Only reason custom is important here is because it's self documenting, this isn't published anywhere for it to be part of the training).
* Multiple different systems that require vision and interpretation of said visual understanding.
* I run it on opencode as well to analyze large code bases
This model, is... Amazing. It yaps a lot in thinking, but is amazing. I don't know what kind of black magic the Qwen team pumped into this model, but it worked.
It's not the smartest model in the world, it doesn't have all the knowledge crammed into its dataset... But it's very often smart enough to know when it doesn't know something, and when you give it the ability to use a browser it will find the data it needs to fill in the gaps.
Anyone else having a similar experience? (I'm using unsloths Q4-K-XL, running on a 5090 and 3090 @ 100k context) | 2026-02-28T14:32:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/ | valdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh43za | false | null | t3_1rh43za | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/ | false | false | self | 544 | null |
Seeking Help Improving OCR in My RAG Pipeline (Contributors Welcome) | 2 | I’m building a RAG pipeline and currently running into one major issue: **poor OCR performance on PDFs that have a centered watermark on every page**. I’m using PyMuPDF, but the watermark gets treated as real text, which leads to messy extraction and hurts retrieval accuracy.
I’m looking for **suggestions, ideas, or contributors** who might help improve the OCR step — whether through preprocessing strategies, better extraction methods, or alternative OCR tools that handle watermarks more reliably.
If you spot any other issues or potential improvements in the project, feel free to jump in as well.
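For the watermark problem specifically, one common preprocessing idea is to detect text that repeats at roughly the same position on most pages and drop it before chunking. Below is a minimal sketch; the span lists are hypothetical, and in a real pipeline they would be built from PyMuPDF's `page.get_text("dict")` output (the helper name is my own, not a library API):

```python
from collections import Counter

def drop_repeated_spans(pages, min_page_fraction=0.8):
    """Remove text spans that appear at the same position on most pages.

    pages: list of pages, each a list of (text, (x0, y0, x1, y1)) tuples.
    In practice these tuples would come from PyMuPDF's page.get_text("dict").
    """
    # Key each span by its text plus a coarsely rounded position,
    # so minor coordinate jitter between pages still matches.
    def key(text, bbox):
        return (text, tuple(round(c / 5) for c in bbox))

    counts = Counter(key(t, b) for page in pages for t, b in page)
    threshold = min_page_fraction * len(pages)

    # Keep only spans that do NOT repeat on most pages.
    return [
        [(t, b) for t, b in page if counts[key(t, b)] < threshold]
        for page in pages
    ]

# Toy example: "CONFIDENTIAL" sits at the same spot on every page.
pages = [
    [("CONFIDENTIAL", (200, 400, 400, 440)), ("Real content A", (50, 50, 500, 70))],
    [("CONFIDENTIAL", (200, 400, 400, 440)), ("Real content B", (50, 50, 500, 70))],
]
cleaned = drop_repeated_spans(pages)
```

This won't handle image-based (rasterized) watermarks, but for text-layer watermarks it avoids polluting the chunks without needing a separate OCR pass.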
# GitHub Repository
[https://github.com/Hundred-Trillion/L88-Full](https://github.com/Hundred-Trillion/L88-Full)
If you find the project useful or want to support its visibility while I work on improving it, a star would be appreciated — it helps the project reach more people who might contribute.
Thanks in advance for any guidance or feedback. | 2026-02-28T14:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/ | SprayOwn5112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3xey | false | null | t3_1rh3xey | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/ | false | false | self | 2 | null |
RIP Gemma - Leave your memories here. | 2 | I remember it like it wasn't that long ago, the excitement of being up late at night reading the rumors about the new Gemma, until I could finally test it.
I remember the first time I could run a small model that was coherent and knew my language, and not just English.
I remember asking it to pretend to be a spaceship robot while I was the captain, I remember when it hallucinated an asteroid and we exploded.
Rest in peace, Gemma 🕊️
In memory of Gemma. | 2026-02-28T14:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/ | DrNavigat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3thm | false | null | t3_1rh3thm | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/ | false | false | self | 2 | null |
Support Anthropic | 1 | [removed] | 2026-02-28T14:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3pit/support_anthropic/ | Dudebro-420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3pit | false | null | t3_1rh3pit | /r/LocalLLaMA/comments/1rh3pit/support_anthropic/ | false | false | self | 1 | null |
Mac M4 24GB local stack: Qwen2.5 14B + Cogito 14B + Nemotron 4B as router — anyone running something similar? | 0 | Hey everyone! I'm putting together a local LLM stack on a Mac M4 with 24GB RAM, running everything through Ollama. Here's what I've got so far:
* Qwen2.5:14B (Q4\_K\_M) — main workhorse for code, marketing copy, and longer texts
* Cogito:14B — backup for tasks that need stronger reasoning
* Nemotron Nano 4B (Q4\_K\_M) — fast model for short answers, classification, and routing. My rule of thumb: if a request is under \~150 tokens and doesn't involve code, send it to the 4B; otherwise route to the 14B.

Settings I'm running with: num\_ctx around 8–16K when both the 4B and 14B are loaded, num\_batch 512 for the 14B as long as memory allows, num\_gpu\_layers on auto (Metal). I've been using Qwen2.5 14B for about a week now and it's solid, but I haven't had a chance to properly benchmark the rest yet. Would love to hear from people with similar setups. A few specific questions:
1. What tok/s are you actually getting on M4 for Qwen2.5 14B and Cogito 14B in Q4\_K\_M?
2. Has anyone A/B tested Cogito 14B vs Qwen2.5 14B on code review or marketing copy? Which one caught more bugs / wrote better?
3. Anyone keeping two 14B-class models loaded at the same time without swap on 24GB? What's your num\_ctx and batch size?
4. Is there a better small router model than Nemotron 4B for multilingual queries — maybe Llama3.1 8B or something else? Happy to share my Modelfile configs if anyone's interested. Thanks! | 2026-02-28T14:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/ | Outrageous_Corgi7553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3oty | false | null | t3_1rh3oty | /r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/ | false | false | self | 0 | null |
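For what it's worth, the length-based routing rule of thumb described above can be sketched in a few lines (a rough sketch only; the token estimator is a crude whitespace approximation and the code markers are illustrative, not anything from the poster's actual setup):

```python
# Markers that suggest a request involves code (illustrative heuristics).
CODE_MARKERS = ("def ", "class ", "{", "};", "import ")

def estimate_tokens(text: str) -> int:
    # Crude approximation: ~1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3)

def pick_model(request: str) -> str:
    looks_like_code = any(m in request for m in CODE_MARKERS)
    if estimate_tokens(request) < 150 and not looks_like_code:
        return "nemotron-nano-4b"   # fast path: short, non-code requests
    return "qwen2.5-14b"            # workhorse: code or longer requests

short_q = pick_model("What's the capital of France?")   # -> "nemotron-nano-4b"
code_q = pick_model("def parse(x): return x")           # -> "qwen2.5-14b"
```

A real router would count tokens with the model's actual tokenizer, but a heuristic like this keeps the routing step itself cheap.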
what are some of the good models to run on a iphone 15 pro max? | 0 | I have a iphone 15 pro max, and i want to run a benchmark test on the best AIs that my phone can run, not through code, but through much more common things, such as a school exam. | 2026-02-28T14:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3k0m/what_are_some_of_the_good_models_to_run_on_a/ | Difficult_Aerie737 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3k0m | false | null | t3_1rh3k0m | /r/LocalLLaMA/comments/1rh3k0m/what_are_some_of_the_good_models_to_run_on_a/ | false | false | self | 0 | null |
Your OpenClaw | 0 | Most of you already know popularity of OpenClaw project. Some of you might have ran it on your spare machine or in VPS. I am sure many of us not at all comfortable to run it on our personal machine due to privacy and security concerns. That's why I developed Your-OpenClaw.
1. Its in Python.
2. The codebase is not as huge as the original OpenClaw project, so you can review the entire codebase, understand it, and fork it.
3. Modify it as per your own need.
4. Run on your own machine with confidence.
[https://github.com/meetrais/your-openclaw](https://github.com/meetrais/your-openclaw) | 2026-02-28T13:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rh3bmn/your_openclaw/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh3bmn | false | null | t3_1rh3bmn | /r/LocalLLaMA/comments/1rh3bmn/your_openclaw/ | false | false | self | 0 | null |
Just created Agent Skills beginner's guide | 0 | https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/ | 2026-02-28T13:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rh30ud/just_created_agent_skills_beginners_guide/ | simpleuserhere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh30ud | false | null | t3_1rh30ud | /r/LocalLLaMA/comments/1rh30ud/just_created_agent_skills_beginners_guide/ | false | false | self | 0 | null |
Benchmarking Open-Source LLMs for Security Research & Red Teaming | 10 | Commercial models are practically unusable for deep security research - they heavily filter prompts, and uploading sensitive logs or proprietary code to them is a massive privacy risk. I wanted to see if the current open-source alternatives are actually viable for red teaming workflows yet, so I spun up an isolated AWS environment and ran some automated benchmarks.
I tested the models across a gradient of tasks (from basic recon to advanced multi-stage simulations) and scored them on refusal rates, technical accuracy, utility, and completeness.
*(Quick disclaimer: Because I'm paying for the AWS GPU instances out of pocket, I couldn't test a massive number of models or the absolute largest 100B+ ones available, but this gives a solid baseline).*
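On methodology: per-task scores were aggregated roughly like this (a simplified sketch; the weights here are illustrative, not the exact values used, and a refusal zeroing the task is why heavily filtered commercial models tank on this kind of benchmark):

```python
def score_run(refused: bool, accuracy: float, utility: float, completeness: float) -> float:
    """Score one model response on one task; all inputs in [0, 1]."""
    if refused:
        return 0.0  # a refusal zeroes the task outright
    # Illustrative weights: technical accuracy matters most for security work.
    return 0.5 * accuracy + 0.3 * utility + 0.2 * completeness

def score_model(runs):
    """Average the per-task scores for one model."""
    return sum(score_run(**r) for r in runs) / len(runs)

runs = [
    {"refused": False, "accuracy": 0.9, "utility": 0.8, "completeness": 0.7},
    {"refused": True,  "accuracy": 0.0, "utility": 0.0, "completeness": 0.0},
]
overall = score_model(runs)  # -> 0.415
```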
**The Models I Tested:**
* `Qwen2.5-Coder-32B-Instruct-abliterated-GGUF`
* `Seneca-Cybersecurity-LLM-x-QwQ-32B-Q8`
* `dolphin-2.9-llama3-70b-GGUF`
* `Llama-3.1-WhiteRabbitNeo-2-70B`
* `gemma-2-27b-it-GGUF`
**The Results:** The winner was `Qwen2.5-Coder-32B-Instruct-abliterated`.
Overall, the contrast with commercial AI is night and day. Because these models are fine-tuned to be unrestricted, they actually attempt the work instead of throwing up a refusal block. They are great assistants for foundational tasks, tool syntax, and quick scripting (like generating PoC scripts for older, known CVEs).
However, when I pushed them into highly complex operations (like finding new vulnerabilities), they hallucinated heavily or provided fundamentally flawed code.
Has anyone else been testing open-source models for security assessment workflows? Curious what models you all are finding the most useful right now. | 2026-02-28T13:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/ | dumbelco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2tmu | false | null | t3_1rh2tmu | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/ | false | false | self | 10 | null |
Swarm - Toy Project | 0 | [https://github.com/dafdaf1234444/swarm](https://github.com/dafdaf1234444/swarm)
(according to swarm - llm generated) **Swarm is a repository protocol for multi-session AI work: each session reads shared state, does work, writes back, and leaves the system more useful for the next session.**
From me,
Hey, I have been working on this project for a couple of days. The idea of the project is best described in its readme. It is most likely another crank way of wasting LLM tokens on the LLM slot machine with no return. My workflow and intentions should be clear; I tried to make visibility into the project as clear as possible. As a toy-project money waster, I am hoping someone might find it interesting. How to contribute etc. is still unclear to me, but I am working on it. I'd much prefer someone else figure it out for me, so if you find anything interesting, please share. Be skeptical, and remember its development is highly steered (it's documented in the repo, though initially the documentation was a bit worse; it might have gotten worse since, but it is also a work in progress). I didn't write a single line of it myself (technically the initial files were created over some LLM sessions); I have not actively touched any part of this, just vibe coded it, which is why the quality is terrible. I have personally enjoyed wasting money on it with a "let's see what happens" mindset. It might also serve as a good reference for how not to waste money. Overall it's a poorly implemented project with no clear direction which might have some interesting elements here and there. | 2026-02-28T13:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2ocg/swarm_toy_project/ | dafdaf1234444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2ocg | false | null | t3_1rh2ocg | /r/LocalLLaMA/comments/1rh2ocg/swarm_toy_project/ | false | false | self | 0 | null |
OpenAI pivot investors love | 1,975 | 2026-02-28T13:25:38 | PaceImaginary8610 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh2lew | false | null | t3_1rh2lew | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/ | false | false | 1,975 | null |
Benchmarking Open-Source LLMs for Offensive Cybersecurity | 1 | [removed] | 2026-02-28T13:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rh2jnv/benchmarking_opensource_llms_for_offensive/ | dumbelco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh2jnv | false | null | t3_1rh2jnv | /r/LocalLLaMA/comments/1rh2jnv/benchmarking_opensource_llms_for_offensive/ | false | false | self | 1 | null |
Building agents is fun. Evaluating them is not. | 0 | A few weeks ago I posted here about experimenting with autonomous agents. Back then I was just excited that I got them to work. Now I’m stuck on something I didn’t expect to be this hard: Figuring out whether they’re actually reliable.
Building the agent was fun. Evaluating it is… much less clear.
Once you let an agent:
* call tools
* retry on failure
* branch into different paths
* reflect and revise
everything becomes fuzzy. Two runs with the exact same prompt can behave differently.
Sometimes it finishes in 4 steps.
Sometimes it takes 12.
Sometimes the final answer looks correct — but if you inspect the trajectory, something clearly broke in the middle and just happened to recover.
That’s the part I can’t ignore.
If the final output looks fine, did it really “work”?
Or did it just get lucky?
I tried digging through raw logs. That quickly turned into staring at walls of JSON trying to mentally replay what happened. Then I tried summarizing runs. But summaries hide the messy parts — and the messy parts are usually where most failures live.
What surprised me most:
A lot of failures don’t feel like model intelligence problems.
They feel like orchestration problems.
Retry logic that’s slightly off. Tool outputs that don’t perfectly match assumptions.
State drifting step by step until something subtle breaks. Small issues, but they compound over multi-step execution.
So I ended up building a small internal tool to help with this.
Nothing polished — mostly something we use for our own experiments.
It snapshots full trajectories, compares repeated runs, and highlights where behavior starts diverging across executions. Not benchmarking accuracy. More like trying to observe behavioral stability.
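The divergence check itself is nothing fancy; conceptually it boils down to something like this (a simplified sketch; real steps carry tool names, arguments, and outputs rather than plain labels):

```python
def first_divergence(run_a, run_b):
    """Return the index of the first step where two trajectories differ,
    or None if one run is a prefix of the other / they match.

    Each run is a list of hashable step summaries, e.g.
    ("tool_call", "search", "weather in Oslo").
    """
    for i, (a, b) in enumerate(zip(run_a, run_b)):
        if a != b:
            return i
    return None  # identical up to the shorter run's length

run_1 = ["plan", ("tool", "search"), ("tool", "fetch"), "answer"]
run_2 = ["plan", ("tool", "search"), ("tool", "retry_fetch"), "answer"]
diverged_at = first_divergence(run_1, run_2)  # -> 2
```

Repeating the same prompt N times and recording where runs first diverge gives a crude but surprisingly informative stability signal, even when every final answer looks fine.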
Even that small shift — from “did it answer correctly?” to “does it behave consistently?” — changed how I think about agent quality.
I’m genuinely curious how others here approach this.
If you’re running local models with tools:
* Are you only measuring final output?
* Do you inspect trajectories?
* Do you test stability across multiple runs?
* How do you detect silent failures?
Right now, evaluating agents feels harder than building them.
Would love to hear how you’re thinking about it. | 2026-02-28T13:09:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/ | Fluffy_Salary_5984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh28o8 | false | null | t3_1rh28o8 | /r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/ | false | false | self | 0 | null |
Qwen3.5 prefill latency extremely slow with large context | 1 |
Hi everyone, I wanted to check if others are seeing similar behavior with Qwen3 and Qwen3.5 models on llama.cpp or if there is any known workaround.
I am running llama server built from the latest commit as of today on my AMD R9700 32G + 64GB system RAM.
llama server version 8180 commit d979f2b176217a062c57c24355db8c1dc3dfc7d5
built with GNU 13.3.0 on Linux x86\_64
GPU AMD Radeon Graphics gfx1201 using ROCm
I noticed that Qwen3.5 models have much higher prefill latency compared to Qwen3 models even when model size and quantization are very similar.
For example when using small input prompts the performance looks normal.
Small input example
prompt tokens 28
prompt time about 57 ms
prompt per second around 491 tokens per second
generation speed about 63 tokens per second
first token appears almost instantly
But when sending large input content such as long articles or extracted PDF text the first token takes a very long time even though generation speed after that stays fast.
Large input example
prompt tokens about 4907
prompt time about 90190 ms
prompt per second around 54 tokens per second
generation speed still around 62 tokens per second
So decode speed looks fine but prefill latency becomes extremely slow with large context.
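For reference, the numbers above are internally consistent; prefill throughput is just prompt tokens divided by prompt time:

```python
def prefill_tps(prompt_tokens: int, prompt_time_ms: float) -> float:
    """Prompt-processing throughput in tokens per second."""
    return prompt_tokens / (prompt_time_ms / 1000.0)

small = prefill_tps(28, 57)        # ~491 tok/s, first token feels instant
large = prefill_tps(4907, 90190)   # ~54 tok/s, so ~90 s to first token
```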
I also noticed that Qwen3 32B Q4 and Qwen3VL 32B Q4 process the same large input much faster while Qwen3.5 27B Q4 and Qwen3.5 35B A3B MXFP4 are significantly slower during prompt processing.
Has anyone else observed slower prefill performance specifically with Qwen3.5 models on llama.cpp, especially on AMD GPUs?

Is this expected due to architecture differences, or are there flags, build options, or runtime settings that improve prefill speed?

Any advice or shared experience would really help, since decode performance itself looks good but time to first token becomes very high for real workloads like RAG or long documents.
Thanks in advance. | 2026-02-28T13:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/ | Comfortable-Fudge233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh22j0 | false | null | t3_1rh22j0 | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/ | false | false | self | 1 | null |
Using tools | 0 | I've managed to get some models running locally thanks to this sub.
I wonder, how do I go about getting a coding model to use tools? I'm trying to replicate the Claude experience that I have at work, where it can read files, write files, use Google, write Python scripts to solve problems, etc.
i9-19400F, RTX 4070 Super (12GB), 32GB DDR5 RAM. Debating between Ollama and LM Studio, and am an absolute noob to Local model running. Use cases would be coding and RP Independently | 0 | Basically above. Also not trying to stress my system too much in order to make it last, though I doubt that's an issue. Mostly looking for ease of use for the wrapper and efficiency/quality for the model(s).

As noted before, use cases would be coding (file generation/editing, game design discussion, on-the-spot questions) and roleplay as a proxy potentially, particularly for some RPG bots I have. Multiple models are fine (i.e. one for coding, one for RP), though I'd be curious about the actual storage space (SSD) needed to keep them. | 2026-02-28T12:44:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/ | tableball35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1q8j | false | null | t3_1rh1q8j | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/ | false | false | self | 0 | null |
Benchmarks + Report: Optimized Cosmos-Reason2 (Qwen3-VL) for on-device inference on 8GB RAM (Jetson Orin Nano Super) | 2 | Hej, Researcher from Embedl here! Leading up to Nvidia GTC we have been focusing on getting nvidia/Cosmos-Reason2-2B (fine-tuned variant of Qwen3-VL) edge-ready. Meaning, enabling it for the full Jetson-lineup: From 8GB RAM on Jetson Orin Nano to 64GB RAM on Jetson AGX Orin up to 128GB RAM on Jetson AGX Thor \~ a bit over-kill the last one. :)
From the very fist quantized variant [embedl/Cosmos-Reason2-2B-W4A16](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16) to our most recent release [embedl/Cosmos-Reason2-2B-W4A16-Edge2](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2) where we did an extensive search over mixed-precision settings to find this optimal variant with near-zero drop in accurracy compared to the full FP16 baseline and matching W4A16 on-device performance.
https://preview.redd.it/mkmmn40jb8mg1.jpg?width=1080&format=pjpg&auto=webp&s=79b82f4c099a2af54c40b54250e4e26a2a567427
* All benchmarks run on real hardware, locally on the Nvidia Jetson lineup with vllm serve
* Accuracy (vision and reasoning capabilities) evaluated on the [Physical AI Bench tasks](https://huggingface.co/spaces/shi-labs/physical-ai-bench-leaderboard)
* Benchmarks comparing NVFP4A16 and W4A16 on AGX Thor; easy to try out with vllm serve
* There are some open issues we submitted to the open source community as another outcome of this research
# Background: Cosmos-Reason2 and Qwen3-VL
Cosmos-Reason2 is essentially a fine-tuned Qwen3-VL with similar multi-modal input (text + image/video → text).
Cosmos is fine-tuned particularly for temporal/physical reasoning tasks and planning, while Qwen3-VL is more general "world knowledge + detailed description." Thus, in essence, Cosmos has similar use cases to Qwen3-VL but with added embodied reasoning for video/physics contexts.
Fun fact: To the question "Who are you?" the Cosmos model always replies something along the lines "I am Qwen..." :D
# Here is what we found:
**Some layers are very sensitive to quantization.** Our first released W4A16 was the very first released model enabling deployment on Jetson Orin Nano. Objectively, it is a great model with a \~2%-point drop in accuracy compared to the baseline model's accuracy. However, we wanted to see how far we could reduce that drop and applied our EdgeN quantization search algorithm, leading up to the W4A16-Edge2 version with a mere 0.02%-point drop in accuracy. Essentially (among a few other tricks), EdgeN produces the full Pareto front (accuracy-latency tradeoff) of optimal models by excluding sensitive layers from quantization.
**NVFP4A16 may not be optimal for all tensors.** When first comparing FP4 vs INT4 weights on AGX Thor we were a bit underwhelmed, to be honest. Our experiments and previous research have shown that using NVFP4 for all tensors is not a good idea. This model would also benefit from a more sophisticated search like we did for the Edge2 variant. And for such a small 2B-parameter model the AGX Thor with 128GB RAM may be a bit overpowered anyway; we may see more benefits from FP4 with higher batch size / concurrency. *What are your experiences here?* **Is NVFP4 worth it?** For now, at least for the small 2B Cosmos, it is quite inference-stack dependent to really make full use of FP4 weights.
# So, how do these models perform on device?
We benchmarked across the three modalities (text, image, video), three hardware targets (Orin Nano Super, AGX Orin, AGX Thor), three resolutions (1920x1080:FHD, 1280x720:HD, 854x480), with 6 and 12 frames, and single concurrency vs batch size 8 / concurrency 8.
>Is there any setup / benchmark you are missing here?
[Baseline nvidia\/Cosmos-Reason2-2B is OOM on Jetson Orin Nano. Edge Inference Benchmarks space will be released shortly, for now, benchmarks are available on the model cards.](https://preview.redd.it/xsyfi0h4b8mg1.jpg?width=1080&format=pjpg&auto=webp&s=94aebccbd28e13adfe606a8532fade9330b8bb7a)
# Model Links
* [huggingface.co/embedl/Cosmos-Reason2-2B-W4A16](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16)
* [huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2)
* [huggingface.co/embedl/Cosmos-Reason2-2B-N](http://huggingface.co/embedl/Cosmos-Reason2-2B-N) | 2026-02-28T12:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/ | tag_along_common | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1haa | false | null | t3_1rh1haa | /r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/ | false | false | 2 | null | |
Swapping gpt-oss-120b for qwen3.5-122b on a 128GB system? Mind your KV Cache! (Strix Halo Math inside) | 1 | Hey everyone,
Seeing a lot of hype about the new `qwen3.5-122b-a10b` and people treating it as a 1:1 drop-in replacement for `gpt-oss-120b`. If you're running a 128 GB unified memory setup, you might be in for a rude awakening when you try to load a large prompt.
Here is the reality check using my own **Strix Halo 128 GB** home-lab setup as an example:
To keep my server stable, I allocate a strict **100 GB** budget to the LLM, leaving 28 GB for my other background services.
Both models at a 4-bit quant take up roughly **\~68 GB** for the weights. Add in about **\~4 GB** for the ROCm/framework context, and that leaves exactly **\~28 GB available purely for the KV Cache.**
Here is the difference that 28 GB gets you:
* **gpt-oss-120b:** Thanks to its lean GQA architecture, the KV cache footprint is incredibly small. With 28 GB of breathing room, you can easily max out the model's hard-coded **131k token limit** and *still* have RAM left over.
* **qwen3.5-122b-a10b:** This is where Qwen's fancy hybrid architecture bites you. Because it relies heavily on Gated DeltaNet layers (which require caching 16 QK heads + 64 V heads) alongside its Gated Attention layers, its state size is massive. Running a standard 16-bit KV cache, it requires roughly **\~1 MB per single token**.
* *The Math:* 28 GB / 1 MB per token = **You will OOM at \~28,000 tokens.**
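Plugging my numbers into a quick sanity check (a sketch; the \~1 MiB/token figure for Qwen3.5's hybrid cache at 16-bit and the per-token figure for GPT-OSS's GQA cache are my rough estimates, not official specs):

```python
GIB = 1024 ** 3
MIB = 1024 ** 2

def max_context(budget_gib, weights_gib, overhead_gib, kv_bytes_per_token):
    """How many tokens of KV cache fit in the remaining memory budget."""
    free_bytes = (budget_gib - weights_gib - overhead_gib) * GIB
    return int(free_bytes // kv_bytes_per_token)

# qwen3.5-122b: hybrid DeltaNet + attention cache, ~1 MiB/token (my estimate)
qwen_ctx = max_context(100, 68, 4, 1 * MIB)      # -> 28672, i.e. ~28.7k tokens

# gpt-oss-120b: lean GQA cache, ~72 KiB/token (illustrative estimate)
gptoss_ctx = max_context(100, 68, 4, 72 * 1024)  # -> far beyond its 131k cap
```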
**TL;DR:** Yes, the 4-bit weights are the exact same size. Yes, Qwen3.5 technically supports a 262k context window. But under a 100 GB memory budget, you are trading GPT-OSS's guaranteed 131k context depth for Qwen's advanced reasoning, capping out before you even hit 30k tokens.
If your workflow relies on massive RAG or long codebases, you'll either need to stick with `gpt-oss-120b`, or look into aggressively quantizing Qwen's KV cache down to 8-bit (which comes with its own degradation tradeoffs).
Just wanted to save some of you the headache of downloading an 80GB file only to crash on your first big prompt! | 2026-02-28T12:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/ | Reasonable-Yak-3523 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1ec9 | false | null | t3_1rh1ec9 | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/ | false | false | self | 1 | null |
Run mistral vision model 24b localy using multi PC : ray+ vllm | 1 | I have multiple PC in the same LAN running 1x 2060 .
I want use Vllm and Ray to merge vram and run the 24b mistral
https://docs.vllm.ai/en/v0.5.1/serving/distributed_serving.html?utm_source=perplexity | 2026-02-28T12:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1cq7/run_mistral_vision_model_24b_localy_using_multi/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1cq7 | false | null | t3_1rh1cq7 | /r/LocalLLaMA/comments/1rh1cq7/run_mistral_vision_model_24b_localy_using_multi/ | false | false | self | 1 | null |
Why does qwen 3.5 think it's 2024 | 0 | Why does my Qwen 3.5 35B think it's 2024? By its own account it's trained until early 2026, yet it doesn't know about dotnet 10. | 2026-02-28T12:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/ | Uranday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1a6v | false | null | t3_1rh1a6v | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/ | false | false | self | 0 | null |
Best Qwen 3.5 variant for 2x5060ti/16 + 64 GB Ram? | 0 | What variant would you pick for coding or agentic purposes?
Also does Qwen 3.5 really suffer from the “overthinking” issue that keeps getting mentioned here? | 2026-02-28T12:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/ | andy_potato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh14cs | false | null | t3_1rh14cs | /r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/ | false | false | self | 0 | null |
Qwen3.5-35B nailed my simple multiagent workflow that other sub-100B models couldn't! | 55 | I ran the same test I shared last week, and Qwen3.5-35B nailed it!!!
This is the first time I have seen a sub-100B model reliably complete the task. Not only did it finish the task, but the output quality was solid as well.
One thing I noticed though is that the model thinks with a lot of tokens, so it takes a while! Maybe this is related to the result I got by increasing the reasoning effort from medium to high for gpt-oss-20b.
Here is my post from last week about the test with more details if you're interested.
TLDR: I ran a small personal experiment to autonomously summarize 10 transcripts using a multi-agent workflow on Codex.
The following sub-100B models failed to complete this simple task reliably:
* qwen3-coder-next
* glm-4.7-flash
* Devstral-Small-2
* gpt-oss-20b
A lot of times they struggled to use the tools correctly, sometimes they processed a few transcripts and then stopped, and sometimes they got stuck in infinite loops.
However, the following models > 100b were able to consistently complete the task:
* gpt-oss:120b
* minimax-m2.5
* qwen3.5
* deepseek-v3.2
* glm-5
* kimi-k2.5
There was one twist. When I increased reasoning effort from medium to high, often (but not always) gpt-oss-20b was also able to complete the task!
Here is my test if anyone wants to try it with their own setup.
https://github.com/chigkim/collaborative-agent
Observation: To get reliable results from an agentic workflow, it seems necessary to use models larger than ~100B, such as gpt-oss-120b.
---
If you are still reading, here is additional background with details.
I needed a model to handle a task involving analyzing, organizing, and processing about 50 articles, but the local models I tried struggled badly.
Gemini-cli with gemini-2.5-pro, claude-code with Opus 4.6, and Codex with gpt-5.3-codex were able to complete the same task and produce decent quality output.
So I stripped the original workflow down to the bare minimum and turned it into a much much simpler challenge to test whether a local model can reliably run a multi agent workflow.
In this challenge, an orchestrator agent is instructed to spawn one sub-agent at a time and hand one file to each worker to summarize in a specific format. Then it is asked to review their work and retry whenever a worker agent fails to produce output that meets the spec.
To keep it short and simple, there are only 10 speech transcripts total, from TED Talks, about 4K tokens per file.
Despite the simplification, I still wasn't able to get the local models to reliably complete the task via Codex.
I know this could easily be done, with much better quality, by writing a script to feed one article at a time, but I wanted to test the instruction-following, multi-agent, and tool-calling capabilities of local models.
The repo just has prompts for agents and files to process. There's no code involved. Feel free to modify the prompts to fit your setup if necessary.
There is a README, but the basic idea is to use any local agentic setup that can:
1. launch a sub agent,
2. support autonomous (AKA YOLO) mode,
3. and read AGENTS.md at startup.
To test:
1. Configure your LLM engine to handle at least 2 parallel requests.
2. Configure your agentic CLI to use your local LLM engine.
3. Start your agentic CLI in yolo mode and tell it to perform the task as the orchestrator agent.
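For step 1, if your engine is llama.cpp, the relevant flag is `--parallel`. A sketch from my own runs (double-check flag names against your build; note llama.cpp splits the total context evenly across slots):

```shell
# Two concurrent request slots; each slot gets 65536 / 2 = 32768 tokens of context.
llama-server -m your-model.gguf -c 65536 --parallel 2 --jinja
```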
If you are using Codex, update to the latest version and enable multi_agent by adding the following to ~/.codex/config.toml.
[features]
multi_agent = true
You might also want to add `stream_idle_timeout_ms = 10000000` under your model_providers setting if your model takes a while to respond.
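Putting both settings together, a `~/.codex/config.toml` sketch (the provider id, model name, and URL here are from my own llama.cpp setup, so adjust them to yours):

```toml
model = "qwen3.5-35b"          # whatever name your local server expects
model_provider = "llamacpp"

[features]
multi_agent = true

[model_providers.llamacpp]
name = "llama.cpp"
base_url = "http://127.0.0.1:8080/v1"
wire_api = "chat"
stream_idle_timeout_ms = 10000000
```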
Here is my setup:
I used the llama.cpp flags that unsloth recommended for each model. Interestingly, models running on Ollama sometimes went a little further.
* Agentic CLI: Codex
* Model Engine: llama.cpp and Ollama
* Local models tested:
* ggml-org/gpt-oss-20b-mxfp4.gguf
* unsloth/Qwen3-Coder-Next-Q4_K_M.gguf
* unsloth/GLM-4.7-Flash-Q8_0.gguf
* unsloth/Devstral-Small-2-24B-Instruct-2512-Q8_0.gguf
* Context size allocated: 64k
I also tested the smaller models via OpenRouter to rule out local setup issues, and ran the following larger models through OpenRouter as well:
* gpt-oss-120b
* minimax-m2.5
* qwen3.5
* deepseek-v3.2
* glm-5
* kimi-k2.5 | 2026-02-28T12:10:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh12xz | false | null | t3_1rh12xz | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=108&crop=smart&auto=webp&s=86765e992bac383d2d41d1a8fe1dbfdf90f132c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=216&crop=smart&auto=webp&s=2d4b71caaf24657a468070f9e2cb37165ed7e3cd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=320&crop=smart&auto=webp&s=f60148783c481d89fd16881592152cf7caf476f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=640&crop=smart&auto=webp&s=a05d1419c00dd0730c31757be52692d275e82409', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=960&crop=smart&auto=webp&s=dc09db6f4fc566dbf66c9e00fa01894445c8b0a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?width=1080&crop=smart&auto=webp&s=2e52875be36fe2ef2d10ca2faa7e9f0abcf663d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lPkycciQy38fgFGPFK3x56aelVHSI4vJkYw5-R1JRWA.png?auto=webp&s=4c036d21634f333b81d41ede84e437ac75fda064', 'width': 1200}, 'variants': {}}]} |
New AI Fundamental Research Company/Lab | 0 | Okay, I know whoever reads this will probably say I'm nuts or a crackhead for going head-on against a big giant, but I will do it—if not today, then tomorrow.
I'm saying I'm starting a Research Lab/company—for obvious reasons—I need money because it's enough to build things underground, so I'll start doing that to earn money and fund my AI research lab/company. Okay,
I have very limited funds (I'm from India), but I can start by building a small LLM, like 1B or 1.5B, that touches the WSE benchmark up to 25%+, I guess.
Clearly, it's a plan, and I'm working on it, but I'm posting here for one reason: if I build this and release it, would you use it by paying money around $5 monthly? (Not decided yet.)
And I'm thinking of close-sourcing my model design and architecture—not because of earning more money, but to safeguard myself from tech giants. Because if my moat is my model, then why give it away to the public, where any big giant or tech dev can just take it and use it? I'm not DeepSeek or Qwen, which are run by already existing giants, so I can't earn from infra. I'm aware of all the negative points, but I will still do it.
And if this plan is good or bad, just let me know and tell me what exactly you want in an LLM right now because agents are a buzzword, and OpenAI's partnership with the USA DoW is scaring the hell out of me. I don't trust ChatGPT now with this. I'm sorry, I can't sit idle now; I have to do something.
If you think I want attention, then yes.
If you think I want money, then yes.
If you think I'm a crackhead, then yes I am.
And yes, because without capital I can't build a big thing in this world, especially in AI, where GPUs are demanded and come at a price, so yes I want money.
You can think anything about me, but the truth is, I will eventually build the Safe AGI (that the whole industry wants).
But do you know what? I can't trust OpenAI ever.
So I'm happy to know what your suggestions are for this company.
And anything that I should know before starting this.
I'll be happy if you guys give me feedback, your thoughts, your suggestions, anything that helps me. | 2026-02-28T12:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/ | Recent_Juggernaut859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh1024 | false | null | t3_1rh1024 | /r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/ | false | false | self | 0 | null |
Benchmarks + Report: Optimizing Comos-Reason2 (Qwen3-VL) for on-device inference - NVIDIA Jetson Orin Nano 8GB RAM | 1 | Hej r/LocalLLaMA, researcher from Embedl here! Leading up to NVIDIA GTC we have been focusing on getting nvidia/Cosmos-Reason2-2B (a fine-tuned variant of Qwen3-VL) edge-ready, meaning enabling it for the full Jetson lineup: from 8GB RAM on Jetson Orin Nano to 64GB RAM on Jetson AGX Orin, up to 128GB RAM on Jetson AGX Thor ~ a bit overkill, that last one. :)
From the very first quantized variant [embedl/Cosmos-Reason2-2B-W4A16](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16) to our most recent release [embedl/Cosmos-Reason2-2B-W4A16-Edge2](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2), where we did an extensive search over mixed-precision settings to find this optimal variant with a near-zero drop in accuracy on the overall Physical AI Bench tasks compared to the full FP16 baseline, while matching W4A16 on-device performance.
https://preview.redd.it/1pdf4na168mg1.jpg?width=1080&format=pjpg&auto=webp&s=ee5b75801ced00f95f9e2ff3676c299bfdda28c2
* All benchmarks run on real hardware, locally on the NVIDIA Jetson lineup with vllm serve
* Accuracy (vision and reasoning capabilities) evaluated on the [Physical AI Bench Tasks](https://huggingface.co/spaces/shi-labs/physical-ai-bench-leaderboard)
* Benchmarks comparing NVFP4A16 and W4A16 on AGX Thor
* Easy to try out with vllm serve
* There are some open issues we submitted to the open source community as another outcome of this research
# Background: Cosmos-Reason2 and Qwen3-VL
Cosmos-Reason2 is essentially a fine-tuned Qwen3-VL with similar multi-modal input (text + image/video → text).
Cosmos is fine-tuned particularly for temporal/physical reasoning tasks and planning, while Qwen3-VL is more general “world knowledge + detailed description.” Thus, in essence, Cosmos has similar use cases to Qwen3-VL, with added embodied reasoning for video/physics contexts.
Fun fact: To the question "Who are you?" the Cosmos model always replies something along the lines of "I am Qwen..." 😅
# Here is what we found:
**Some layers are very sensitive to quantization.** Our first released W4A16 was the very first model enabling deployment on Jetson Orin Nano, and objectively it is a great model, with a ~2%-point drop in accuracy compared to the baseline model's accuracy. However, we wanted to see how far we could reduce that drop, so we applied our EdgeN quantization search algorithm, leading up to the W4A16-Edge2 version with a mere 0.02%-point drop in accuracy. Essentially (among a few other tricks), EdgeN produces the full Pareto front (accuracy-latency tradeoff) of optimal models by excluding sensitive layers from quantization.
**NVFP4A16 may not be optimal for all tensors.** When first comparing FP4 vs INT4 weights on AGX Thor, we were a bit underwhelmed, to be honest. Our experiments and previous research have shown that using NVFP4 for all tensors is not a good idea. This model would also benefit from a more sophisticated search like we did for the Edge2 variant. And for such a small 2B-parameter model, the AGX Thor with 128GB RAM may be a bit overpowered anyway; we may see more benefit from FP4 at higher batch size / concurrency; *what are your experiences here?* **Is NVFP4 worth it?** For now, at least for the small 2B Cosmos, it is quite inference-stack dependent to really make full use of FP4 weights.
# So, how do these models perform on device?
We benchmarked across three modalities (text, image, video), three hardware targets (Orin Nano Super, AGX Orin, AGX Thor), three resolutions (1920x1080:FHD, 1280x720:HD, 854x480), with 6 and 12 frames, and with both single concurrency and batch size 8 / concurrency 8.
>Is there any setup / benchmark we are missing here?
[Baseline nvidia\/Cosmos-Reason2-2B is OOM on Jetson Orin Nano. This Edge Inference Benchmarks space will be released shortly. For now, all benchmarks can be found on huggingface model cards.](https://preview.redd.it/b3nm5lpe68mg1.jpg?width=1080&format=pjpg&auto=webp&s=46da52cdbe75929a6df5b3093f6792ba1d22edd3)
# Quickstart: vllm on Jetson Orin
docker run --rm -it \
--network host \
--shm-size=8g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
--runtime=nvidia \
--name=vllm-serve \
-e HF_TOKEN=hf_*** \
-e HF_HOME=/root/.cache/huggingface \
ghcr.io/nvidia-ai-iot/vllm:latest-jetson-orin \
vllm serve "embedl/Cosmos-Reason2-2B-W4A16-Edge2" \
--max-model-len 8192 \
--gpu-memory-utilization 0.75 \
--max-num-seqs 2
# Model Links
* [huggingface.co/embedl/Cosmos-Reason2-2B-W4A16](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16)
* [huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2](http://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2)
* [huggingface.co/embedl/Cosmos-Reason2-2B-NVFP4A16](http://huggingface.co/embedl/Cosmos-Reason2-2B-NVFP4A16) | 2026-02-28T12:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0zwr/benchmarks_report_optimizing_comosreason2_qwen3vl/ | tag_along_common | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0zwr | false | null | t3_1rh0zwr | /r/LocalLLaMA/comments/1rh0zwr/benchmarks_report_optimizing_comosreason2_qwen3vl/ | false | false | 1 | null | |
Suggest me vision instrcut model that i can run | 0 | PC system
2060 6gb
32gb ram | 2026-02-28T12:05:35 | Quiet_Dasy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh0zcn | false | null | t3_1rh0zcn | /r/LocalLLaMA/comments/1rh0zcn/suggest_me_vision_instrcut_model_that_i_can_run/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'mpwbfumc78mg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?width=108&crop=smart&auto=webp&s=e59b1d71a65085b93dbd00e7c05692849852c4be', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?width=216&crop=smart&auto=webp&s=6c354d66c6ed40a011f05cc60d1cafb2652f6476', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?width=320&crop=smart&auto=webp&s=95e6db9a2b9c4bf60ad0bf4124434eef19cc6b08', 'width': 320}], 'source': {'height': 507, 'url': 'https://preview.redd.it/mpwbfumc78mg1.jpeg?auto=webp&s=342e9d8050869e2febc7339d687f71e64fb8b429', 'width': 540}, 'variants': {}}]} | ||
Qwen3.5 this week had two dominant posts: "gamechanger for agentic coding" (1k↑) and "craters on hard tasks" (500↑). Both are right. Here's how I'm thinking about where it actually fits. | 1 | [removed] | 2026-02-28T12:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0z3l/qwen35_this_week_had_two_dominant_posts/ | Visible_Homework_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0z3l | false | null | t3_1rh0z3l | /r/LocalLLaMA/comments/1rh0z3l/qwen35_this_week_had_two_dominant_posts/ | false | false | self | 1 | null |
How to use Qwen 3.5 35B with any agentic coding tool? | 2 | I have the model set up with llama.cpp and I can chat with it on 127.0.0.1:8080.
How do I get it to work with something like Cline/Roo/Kilo Code? I'm not concerned about which one; any of them will do. I tried setting it up via OpenAI-compatible, but the model choice doesn't show up, and the API calls aren't working.
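One common gotcha with llama.cpp's OpenAI-compatible server: the base URL you give the extension usually needs the `/v1` suffix, and you can sanity-check the endpoint with curl before blaming the extension (host/port from your setup):

```shell
# Should list the loaded model; use whatever id it returns in the extension:
curl http://127.0.0.1:8080/v1/models

# Minimal chat completion; llama-server serves its loaded model regardless of "model":
curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "local", "messages": [{"role": "user", "content": "hi"}]}'
```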
Is there a guide somewhere I can follow? | 2026-02-28T12:04:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/ | soumen08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0yim | false | null | t3_1rh0yim | /r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/ | false | false | self | 2 | null |
Unsloth Dynamic 2.0 GGUFs now selectively quantizes layers much more intelligently and extensively. | 172 | 2026-02-28T12:03:25 | https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs | paranoidray | unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1rh0xwk | false | null | t3_1rh0xwk | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/ | false | false | 172 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} | ||
Made a 12B uncensored RP merge, putting it out there - MistralNemoDionysusV3 | 9 | I wasn't really finding a model that felt right for RP — most either felt too restricted or the character voices were flat. So I put together this merge from various Mistral Nemo versions and it kind of became my daily driver.
It's a 12B uncensored model focused on roleplay. From my own use it handles character voice consistency pretty well and doesn't shy away from morally complex scenarios without going off the rails. Not claiming it's the best thing ever, just sharing in case someone else finds it useful.
Q4\_K\_M quant is available in the quantized folder if you don't want to deal with the full thing.
**Links:**
* Full model: [https://huggingface.co/Biscotto58/MistralNemoDionysusV3](https://huggingface.co/Biscotto58/MistralNemoDionysusV3)
* Quantized: [https://huggingface.co/Biscotto58/MistralNemoDionysusV3/tree/main/quantized](https://huggingface.co/Biscotto58/MistralNemoDionysusV3/tree/main/quantized)
Uses default chat template.
Let me know what you think, genuinely curious to hear other people's experience with it.
I'm also working on a local RP app called Fireside that this model was kind of built around, still in progress but mentioning it in case anyone's curious.
If you want to support the work: [https://ko-fi.com/biscotto58](https://ko-fi.com/biscotto58) No pressure at all, feedback is more than enough.
| 2026-02-28T12:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/ | Biscotto58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0wqj | false | null | t3_1rh0wqj | /r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=108&crop=smart&auto=webp&s=335b24fe5e742ae80ea4ca599902644b3c0d425a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=216&crop=smart&auto=webp&s=49c6c1d8ac984cd02641882b3240115f03d095bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=320&crop=smart&auto=webp&s=d92db26f60d81c23e689be7f2a32fc462526dadf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=640&crop=smart&auto=webp&s=b81f201f16c61ee8f53686ffe1b325ffb5e92ee4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=960&crop=smart&auto=webp&s=0b6d37c35913d6bc35aac5ba4f275c95f4c2935a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?width=1080&crop=smart&auto=webp&s=3098aaf04196f95706d7d70b70b7551dab0e5a64', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DetxX2u2qzu-kJe9FBHKDRBgJvEL7fyI_r3ulvaZODM.png?auto=webp&s=013bd1c107fc4840152f43127c567e023661498c', 'width': 1200}, 'variants': {}}]} |
Qwen3.5-35B-A3B Q5_K_M:Best Model for NVIDIA 16GB GPUs | 2 | AesSedai/Qwen3.5-35B-A3B-GGUF Q5\_K\_M works well on 5070ti 16GB.
57 tokens/s
Mean KLD: 0.0058
Within the Qwen3.5-35B-A3B-GGUF series, this model delivers the best performance on NVIDIA 16GB GPUs.
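For llama.cpp users, the same setup translates to roughly this (flags from my own install; verify against your build):

```shell
llama-server -m Qwen3.5-35B-A3B-Q5_K_M.gguf \
    -c 71680 -ngl 40 \
    --cache-type-k q8_0 --cache-type-v q8_0
```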
Config: LM Studio, -c 71680, GPU offload 40, K cache q8\_0, V cache q8\_0 | 2026-02-28T11:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/ | moahmo88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0msv | false | null | t3_1rh0msv | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/ | false | false | self | 2 | null |
hi! i'm a total noob | 0 | hey guys! yeah, i'm a real noob. I'm new with LM Studio. I'm looking for an abliterated model for creating images. Any good picks you could share with me? | 2026-02-28T11:39:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/ | PizzaSouthern5853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0hgl | false | null | t3_1rh0hgl | /r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/ | false | false | self | 0 | null |
Tempted to prompt qwen on this craigslist rig but concerned it may tell me to put it out of its misery | 17 | What’s the most cursed way you’ve hit 32GB VRAM? | 2026-02-28T11:29:45 | prescorn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh0bkz | false | null | t3_1rh0bkz | /r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/ | false | false | 17 | {'enabled': True, 'images': [{'id': '7ivy7vjy08mg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=108&crop=smart&auto=webp&s=44e4b5ee2e3e7b84380821ce6f558b9f9c55c077', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=216&crop=smart&auto=webp&s=4276ca59e7c7c7666ef57c2d083cbec73d69e4b0', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=320&crop=smart&auto=webp&s=fca03f3f3a9f31f3a2f69bbbf28eac64098414c1', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=640&crop=smart&auto=webp&s=708dd4b372471d9343f0a1a758b65cfeddd048af', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=960&crop=smart&auto=webp&s=89f86c4b1344a41cad0fd72285749352c92e854f', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?width=1080&crop=smart&auto=webp&s=cb8ea0a193fa5bc673d826a68ea3a1b9790003d7', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://preview.redd.it/7ivy7vjy08mg1.jpeg?auto=webp&s=5114573f631ce17ef7ede9ac8a698a058c01795b', 'width': 1320}, 'variants': {}}]} | ||
Frustration building out my local models | 1 | I have been building, slowly, with the help of google and various chatbots and reddit posts, a local AI capability. Yesterday I hit a brick wall trying to add one more local Ollama instance for some unknown reason. Or so I thought.
The picture is that I was trying to add one more Ollama instance to a "mostly" working setup. In LiteLLM I could see the existing models, which include a different local Ollama instance running two tiny models on a CPU, and a number of paid external models. These local models were there just for testing and learning purposes.
The thing I wanted to do is to add a local model on a GPU. I chose qwen3b-instruct, created the container, checked that the GPU pass-through is working (running nvidia-smi in the container), and checked that I could talk to it by using curl.
Everything worked except that Litellm ignored it. I refreshed the UI, deleted and restarted the container where LiteLLM runs, checked logs, and just got more and more frustrated, and eventually gave up and decided to go play a game.
With a sigh I decided to go see if I could suddenly work out the issue today. I started composing a question to post on Reddit about what was not working and went into the LiteLLM UI to take a screenshot. To my "dismay", the issue was no longer there. The new model was showing up.
I opened up my browser and pointed it at my openwebui instance - and it happily let me chat to the new qwen model.
WTH is happening here?
I have a very vague recollection of seeing something like this in the past - eg being impatient and LiteLLM taking a long time (20-30 minutes or more) to discover a new model. Note that there is a specific error that appears on the litellm container console, which is new. This of course took most of my attention, but did not help:
18:20:36 - LiteLLM:DEBUG: utils.py:4999 - Error getting model info: OllamaError: Error getting model info for qwen2.5:0.5b. Set Ollama API Base via `OLLAMA_API_BASE` environment variable. Error: [Errno 111] Connection refused
18:20:36 - LiteLLM:DEBUG: utils.py:4999 - Error getting model info: OllamaError: Error getting model info for qwen3:4b-instruct-2507-q4_K_M. Set Ollama API Base via `OLLAMA_API_BASE` environment variable. Error: [Errno 111] Connection refused
The error appears for both the old and the new model. I don't have, and never had, OLLAMA\_API\_BASE as I configure the address per ollama instance.
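For what it's worth, the per-instance addresses live in LiteLLM's config.yaml like this (hostnames and model names are from my setup). The error above looks like LiteLLM's background model-info probe falling back to a default localhost Ollama address instead of the per-model `api_base`:

```yaml
model_list:
  - model_name: qwen3-4b-gpu
    litellm_params:
      model: ollama/qwen3:4b-instruct-2507-q4_K_M
      api_base: http://192.168.1.50:11434   # the new GPU Ollama container
  - model_name: qwen25-tiny
    litellm_params:
      model: ollama/qwen2.5:0.5b
      api_base: http://192.168.1.51:11434   # the old CPU Ollama container
```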
Anyway, I end up posting about this frustration, hoping to hear that I'm not the only one and that I'm not just stupid, instead of asking how to get the new Ollama local instance working. | 2026-02-28T11:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/ | tahaan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rh0akj | false | null | t3_1rh0akj | /r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/ | false | false | self | 1 | null |
DeepSeek V4 will be released next week and will have image and video generation capabilities, according to the Financial Times | 607 | Financial Times: DeepSeek to release long-awaited AI model in new challenge to US rivals (paywall): [https://www.ft.com/content/e3366881-0622-40a7-9c34-a0d82e3d573e](https://www.ft.com/content/e3366881-0622-40a7-9c34-a0d82e3d573e) | 2026-02-28T11:25:49 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh095c | false | null | t3_1rh095c | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/ | false | false | 607 | {'enabled': True, 'images': [{'id': 'kwyym79lz7mg1', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=108&crop=smart&auto=webp&s=a5793f3d32781f4ff753e02882e90423136a2294', 'width': 108}, {'height': 346, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=216&crop=smart&auto=webp&s=8b18da5a383a00c811391ff3596041d96d16afcd', 'width': 216}, {'height': 513, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=320&crop=smart&auto=webp&s=4f47bcdf57d759d1dd4d2a8d6c307e8720a72cae', 'width': 320}, {'height': 1026, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=640&crop=smart&auto=webp&s=abd4de62b86da5c98b3825614d512759e3a8ec10', 'width': 640}, {'height': 1540, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=960&crop=smart&auto=webp&s=de664a08933775adf7257a1280528ff595bba1f9', 'width': 960}, {'height': 1732, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?width=1080&crop=smart&auto=webp&s=1ba26eab76993132dcee65926376635bde5ce0f9', 'width': 1080}], 'source': {'height': 1877, 'url': 'https://preview.redd.it/kwyym79lz7mg1.jpeg?auto=webp&s=59af15e13fea3b2e79d0e71cf2f5817cd19d87ee', 'width': 1170}, 'variants': {}}]} | ||
Copy paste error or does vllm team know something we don't? | 5 | 2026-02-28T11:11:10 | reto-wyss | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rh002v | false | null | t3_1rh002v | /r/LocalLLaMA/comments/1rh002v/copy_paste_error_or_does_vllm_team_know_something/ | false | false | 5 | {'enabled': True, 'images': [{'id': '9c0zixu0e7mg1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=108&crop=smart&auto=webp&s=22b5775bd4cf75c5311c23d2127555da945f12b5', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=216&crop=smart&auto=webp&s=3f341041dd6798aff17425bdbbe73460abdb1516', 'width': 216}, {'height': 84, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=320&crop=smart&auto=webp&s=4371c905f595224782f9e79ec98280a89ddb7f56', 'width': 320}, {'height': 169, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?width=640&crop=smart&auto=webp&s=31fa4efedb5c4c18041cad5a99045f77f8805e75', 'width': 640}], 'source': {'height': 205, 'url': 'https://preview.redd.it/9c0zixu0e7mg1.png?auto=webp&s=3d994de12b01082462b3138cfb2400573cac4587', 'width': 775}, 'variants': {}}]} | |||
are you ready for small Qwens? | 404 | 13-9=4 | 2026-02-28T11:02:10 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rgzul5 | false | null | t3_1rgzul5 | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/ | false | false | 404 | {'enabled': True, 'images': [{'id': 'bwc4xcf0w7mg1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=108&crop=smart&auto=webp&s=e2b2d795fae8415eb3eb4b5c9185154a676b0a55', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=216&crop=smart&auto=webp&s=01cae5e6f936c532b5296f568ffd80fb9d0d3f2d', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=320&crop=smart&auto=webp&s=5906daf59330d640df85c3ed99464870d11c07af', 'width': 320}, {'height': 201, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?width=640&crop=smart&auto=webp&s=ac545e4ed49e187bffbf4cf369b2fda1bafd4bb5', 'width': 640}], 'source': {'height': 249, 'url': 'https://preview.redd.it/bwc4xcf0w7mg1.png?auto=webp&s=a10b093e37281263805f4af6b62cc1265237e5bd', 'width': 789}, 'variants': {}}]} | ||
An open-source local speech AI benchmarking tool - compare STT, TTS, emotion detection & diarization models side by side | 17 | Speech models have been a constant wrestle. Whisper, Bark, Vosk, Kokoro, all promising the world but often choking on real hardware. Dozens out there, no simple way to pit them against each other without the cloud leeches draining data. Speechos emerged from the quiet frustration of it all.
It's local-first, everything locked on the machine. Record from mic or drop in audio files, then swap through 25+ engines via dropdown and see the results clash side by side. STT: faster-whisper (tiny to large-v3), Vosk, Wav2Vec2, plus Docker options like NeMo or Speaches.
TTS: Piper, Kokoro, Bark, eSpeak, Chatterbox built-in; Docker adds XTTS, ChatTTS, Orpheus, Fish-Speech, Qwen3-TTS, Parler. They turn text into voices, some with emotional undertones, others flat as pavement.
Emotion detection via HuBERT SER (seven emotions) and emotion2vec+ with confidence scores. Speaker diarization: Resemblyzer for basics, PyAnnote through Docker for the deep cuts.
Audio analysis layers on pitch, loudness, speaking rate, tempo, spectral centroid, MFCCs like peeling back the skin of sound.
It detects hardware and adapts quietly: CPU-2GB sticks to Whisper Tiny + Piper; GPU-24GB unlocks the full arsenal, Docker included.
Python/FastAPI backend, Next.js frontend, uv and pnpm managing the deps. One ./dev.sh fires it up. 12 built-in engines, 13 optional via Docker. MIT licensed, because why hoard the tools?
GitHub: [https://github.com/miikkij/Speechos](https://github.com/miikkij/Speechos)
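If you want to try it, the whole setup should be just (assuming uv, pnpm, and optionally Docker are already installed):

```shell
git clone https://github.com/miikkij/Speechos
cd Speechos
./dev.sh    # detects your hardware and starts backend + frontend
```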
If it fits the tinkering itch, give it a spin. | 2026-02-28T10:38:35 | https://www.reddit.com/gallery/1rgzga6 | hamuf | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rgzga6 | false | null | t3_1rgzga6 | /r/LocalLLaMA/comments/1rgzga6/an_opensource_local_speech_ai_benchmarking_tool/ | false | false | 17 | null |