Dataset columns (type and observed range, as shown by the viewer):

| column | dtype | range / classes |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Qwen 3.5 27b: a testament to the transformer architecture
1
It's really good. I thought an early warning sign that the transformer architecture might have hard limits would be if these tiny models stopped being able to keep up with the large ones. And to some degree this seemed to be the case, at least at times. Between the Qwen3 2507 models and now, we didn't get much that strongly suggested otherwise. But Qwen 3.5 27B... damn! It's passing my reasoning and knowledge tests roughly at the level of R1 0528. Crazy. Makes me want to buy tech stocks... or a bunker. Fasten your seatbelt, the roller coaster is just getting started. Also, this model is ripe for finetunes! Qwen only lacks in personality.
2026-03-02T21:58:12
https://www.reddit.com/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj6m71
false
null
t3_1rj6m71
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/
false
false
self
1
null
Workstation for dev work + local LLMs — Tesla P40 vs MinisForum?
1
Building a new workstation primarily for programming/dev work. Since I'm investing in new hardware anyway, I figured why not set it up so I can also run and finetune LLMs locally. Option A: Custom build - 9900X, dual-GPU motherboard, 2x Tesla P40s off eBay. 48GB VRAM total (one of the cheapest solutions; I don't have the money to invest in expensive video cards). Option B: MinisForum MS-01 with the Ryzen AI Max+ PRO 395 - 128GB unified memory, compact, works as a proper workstation while also being capable of inference and smaller finetunes. The MinisForum is tempting as an all-in-one package. But this is first and foremost a work machine — I need it to be reliable day in, day out. My concern isn't really driver or software maturity, it's more about MinisForum as a company. How's their long-term support? Build quality? If something breaks in 2 years, am I on my own? With a custom build I can swap any part. Anyone here daily-driving a MinisForum for serious work? How's the experience been long-term? Also, are there any alternatives to the MinisForum available in Europe?
2026-03-02T21:54:47
https://www.reddit.com/r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/
marius-c-d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj6j0y
false
null
t3_1rj6j0y
/r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/
false
false
self
1
null
Qwen3.5 Base models for 122B and 27B?
1
Anyone heard anything about it? I see they dropped base weights for all the recent tiny models, as well as the 35B-A3B model, but I don't see any for the dense 27B or the larger sparse models. I'm wondering if maybe that was just an oversight? I would really like to get my grubby hands on the base 27B or the 122B, partly out of preference but largely because I want to do some experiments comparing instruction-tuned model performance against few-shot and many-shot template following on a base model. My hypothesis is that with a strong enough many-shot prompt, the base model might actually have *better* performance than the instruction-tuned variant. It was pretty well known in the Llama2 days that instruction tuning did degrade model output quality to some degree, but it was largely considered worth it in the context of much tighter context window limits. I think those limits are much less relevant with the massive windows we have today, and that the improvements in general model capabilities might make it possible to get the same output adherence with just in-context learning. And 27B dense and 122B sparse happen to be the upper limit of what my homelab can handle, so I would really like to test with those models if Qwen has plans to release the base variants.
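For anyone who wants to run the same comparison, many-shot prompting of a base model is just string assembly against a completion (not chat) endpoint. A minimal sketch; the endpoint, stop sequence, and toy examples are placeholders, not part of the experiment described above:

```python
import requests

# Hypothetical few-shot pairs; a real many-shot prompt would use dozens or hundreds.
EXAMPLES = [("Translate to French: cat", "chat"),
            ("Translate to French: dog", "chien")]

def many_shot_prompt(examples, query: str) -> str:
    """Assemble a plain-text many-shot prompt for a base (non-instruct) model."""
    shots = "\n\n".join(f"Input: {q}\nOutput: {a}" for q, a in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

# llama-server-style OpenAI-compatible completions endpoint (URL is an assumption).
r = requests.post("http://localhost:8080/v1/completions",
                  json={"prompt": many_shot_prompt(EXAMPLES, "Translate to French: bird"),
                        "max_tokens": 16, "stop": ["\nInput:"]})
print(r.json()["choices"][0]["text"].strip())
```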
2026-03-02T21:53:09
https://www.reddit.com/r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/
KallistiTMP
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj6hga
false
null
t3_1rj6hga
/r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/
false
false
self
1
null
Qwen’s latest model thinks it’s developed by Google.
1
I asked the new Qwen3.5-9B to identify itself. Here is the answer. https://preview.redd.it/wh1p96r5bpmg1.png?width=536&format=png&auto=webp&s=eecff7d086a9703c96c5635b1ad884e654b42b13
2026-03-02T21:40:22
https://www.reddit.com/r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/
never-been-here-nl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj65jl
false
null
t3_1rj65jl
/r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/
false
false
https://preview.redd.it/…fb8f233e60ff2cfd
1
null
Mix & Matching R9700s?
1
I've managed to pick up a Sapphire AI PRO Radeon AI Pro R9700 for my upgrade. Problem is, I've fallen afoul of Newegg's one-per-customer rule, so I can't easily get a second. Other suppliers are charging a mint for another Sapphire, which leads me to ask: 1 - I can't imagine any issues with using different partner models, but I feel I have to ask if anyone has any experience with that. 2 - With Asus, ASRock, and Gigabyte as my alternatives, is there one GPU that is better than the others? Cheers
2026-03-02T21:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1rj62hk/mix_matching_r9700s/
RottenPingu1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj62hk
false
null
t3_1rj62hk
/r/LocalLLaMA/comments/1rj62hk/mix_matching_r9700s/
false
false
self
1
null
Running Qwen3.5-0.8B on my 7-year-old Samsung S10E
0
Qwen just released their 0.8B model. So naturally, I had to try running it on my 7-year-old Samsung S10E. After some tinkering with llama.cpp, Termux, and a few missing C libraries... behold! A fully working AI model running locally on an old phone at 12 tokens per second. And btw, the model itself is far from a gimmick - it can actually hold a conversation and do some serious stuff. Mind. Blown.
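For anyone wanting to reproduce the tokens-per-second figure, a minimal sketch using llama-cpp-python (the GGUF file name is a placeholder; inside Termux the llama.cpp CLI reports equivalent stats):

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical quantized model file; any small GGUF works the same way.
llm = Llama(model_path="qwen3.5-0.8b-q4_k_m.gguf", n_ctx=2048)

t0 = time.time()
out = llm("Explain what a hash map is in two sentences.", max_tokens=128)
n = out["usage"]["completion_tokens"]  # tokens actually generated
print(f"{n / (time.time() - t0):.1f} tokens/s")
```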
2026-03-02T21:21:28
https://i.redd.it/mg9ixtw58pmg1.png
HighFlyingB1rd
i.redd.it
1970-01-01T00:00:00
0
{}
1rj5ngc
false
null
t3_1rj5ngc
/r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/
false
false
https://preview.redd.it/…80ee9c11c8e5c25e
0
{'images': [{'source': {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?auto=webp&s=30fb9a7da42c36ff2a9bf6a196552af418941905', 'width': 3790, 'height': 1728}, 'resolutions': [{'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=108&crop=smart&auto=webp&s=2b9005d27227dff202e1772b60bdcf56e2887f02', 'width': 108, 'height': 49}, {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=216&crop=smart&auto=webp&s=0dc52884995b6c284442e9ae18bf6a4f366180e2', 'width': 216, 'height': 98}, {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=320&crop=smart&auto=webp&s=e827edb651bceef504bde627bcb419d675f2a9c4', 'width': 320, 'height': 145}, {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=640&crop=smart&auto=webp&s=ea1f7fdad2dc55b7c1d69045f3ef504fc901c7e3', 'width': 640, 'height': 291}, {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=960&crop=smart&auto=webp&s=d29aa38582cc9a8db4dbeab017c0adc3d2308f49', 'width': 960, 'height': 437}, {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=1080&crop=smart&auto=webp&s=9591d8fb55a77577265c5e2eed2d5f566d11df43', 'width': 1080, 'height': 492}], 'variants': {}, 'id': 'mg9ixtw58pmg1'}], 'enabled': True}
Free image models that can run on 12gb VRAM?
1
I am kind of new to this, but what are some good models that I can run myself with 12GB of VRAM? I don't need 4K images, just something that can create realistic images at 1440p or lower quality.
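One common starting point in this VRAM class is SDXL at fp16 with CPU offload, which generates at its native ~1024px and can be upscaled afterwards. A minimal diffusers sketch; the model here is one option among several, not a recommendation from the thread:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM well under 12GB

image = pipe("photo of a lighthouse at dusk, realistic",
             height=1024, width=1024).images[0]
image.save("out.png")
```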
2026-03-02T21:10:39
https://www.reddit.com/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/
CarsonWentzGOAT1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj5czr
false
null
t3_1rj5czr
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/
false
false
self
1
null
Local LLM
1
Ah, so currently I am using Claude Opus 4.6 fast mode and getting lots of work done. I am uncomfortable with the centralization of the AI models and I am considering buying 2x RTX 6000 Blackwell GPUs. For the coding part I like the precision that Opus provides, but my monthly bill is over $700 this month. I have a lot of servers with 128GB - 1TB RAM and have a few ideas how to utilize the RTX 6000s. A local shop has them in stock for $13,500 CAD. My business is affiliate marketing, specifically managing large email newsletters. I don't think there will be much in the way of new cards coming out till late 2027. I think the main reason I want my own system is mostly experimentation. It would be interesting to run these cards on coding tasks 24 hours a day. Anyone want to share some input before I make this impulse buy?
2026-03-02T21:02:13
https://www.reddit.com/r/LocalLLaMA/comments/1rj54kw/local_llm/
Annual_Award1260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj54kw
false
null
t3_1rj54kw
/r/LocalLLaMA/comments/1rj54kw/local_llm/
false
false
self
1
null
StepFun releases 2 base models for Step 3.5 Flash
1
2026-03-02T20:57:43
https://x.com/StepFun_ai/status/2028551435290554450
tarruda
x.com
1970-01-01T00:00:00
0
{}
1rj4zy3
false
null
t3_1rj4zy3
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/
false
false
default
1
null
Best model for basic text-based tasks on RTX 3070
1
Which model should I use?
2026-03-02T20:53:28
https://www.reddit.com/r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/
freefireclashsquad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj4vwr
false
null
t3_1rj4vwr
/r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/
false
false
self
1
null
local llm test cases text and coding
1
Team, there are many benchmarks and tests that comparisons between models are based on. Where can I find those test cases to run on my local LLM? I would like to run a full suite of tests manually, or with automation if it exists, capture the results, and even measure pass/fail and reproduce published numbers. Where do I even start?
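Published suites like lm-evaluation-harness or EvalPlus are the usual starting points. Before adopting one, you can already get pass/fail numbers with a few lines of your own. A minimal sketch, assuming test cases live in a JSONL file of prompt/expected-substring pairs and a local OpenAI-compatible server (file name and URL are placeholders):

```python
import json, requests

URL = "http://localhost:8080/v1/chat/completions"  # llama-server / LM Studio style

def run_suite(path: str = "cases.jsonl"):
    """Each line: {"prompt": "...", "expect": "..."} — pass if expect appears in the answer."""
    passed = total = 0
    for line in open(path):
        case = json.loads(line)
        r = requests.post(URL, json={
            "messages": [{"role": "user", "content": case["prompt"]}],
            "temperature": 0.0,  # deterministic, so reruns are comparable
        })
        answer = r.json()["choices"][0]["message"]["content"]
        ok = case["expect"].lower() in answer.lower()
        passed += ok
        total += 1
        print(("PASS" if ok else "FAIL"), case["prompt"][:60])
    print(f"{passed}/{total} passed")

run_suite()
```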
2026-03-02T20:49:03
https://www.reddit.com/r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/
sunole123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj4rml
false
null
t3_1rj4rml
/r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/
false
false
self
1
null
Qwen3.5-2B on Android
1
So I ran a quick test of Qwen 3.5 2B on my Android device. First I started with some basic questions, which it was able to answer perfectly. Then I gave it an easy image to process, and it described the image very well, including text that I asked it to translate from the provided image. For the third run, I gave it a complex architecture diagram, and as you can see in the video, it was properly explaining that diagram to me until it stopped all of a sudden. Now, I am not sure what the issue could be here. I am using PocketPal AI for this test. Do you think it is due to the app being buggy, or did I hit the context size? And do you think I should keep my current model settings? I have listed my device and model settings below.

Device: Google Pixel 9 Pro (16 gigs of RAM)

PocketPal AI model settings:
- Context: 2048
- CPU threads: 6
- Max image tokens: 512
- Flash Attention: Off
- KV cache is F16 by default

Additional: It's my first time running an LLM locally on my Android device.
2026-03-02T20:44:57
https://v.redd.it/kyc0jcut1pmg1
Zealousideal-Check77
v.redd.it
1970-01-01T00:00:00
0
{}
1rj4nnq
false
{'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/kyc0jcut1pmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'width': 860, 'scrubber_media_url': 'https://v.redd.it/kyc0jcut1pmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/kyc0jcut1pmg1/DASHPlaylist.mpd?a=1775076350%2CMzZlMDhhNDU2YTM5MzhjYmE4NDg4ZTlmZmMyZjlkOWVmZTE3YTQ1MzU1YzUzMGVhOTRjZjRjNDY3NjAwNTY0Ng%3D%3D&v=1&f=sd', 'duration': 13, 'hls_url': 'https://v.redd.it/kyc0jcut1pmg1/HLSPlaylist.m3u8?a=1775076350%2CNGM0YzM0MTU2MjQwMTMxMGVjZTI4NTk2NWI4MDU3YzRjMGYwMmY2MzQ1M2IyOTQwZDQxYzgyNmZkNWU0OTg4OA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rj4nnq
/r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/
false
false
https://external-preview…fb03694e8e2244d9
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18.png?format=pjpg&auto=webp&s=1bf110aa5ec3f2687a7cbf3a53ef2fbe276e09cd', 'width': 322, 'height': 718}, 'resolutions': [{'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18.png?width=108&crop=smart&format=pjpg&auto=webp&s=b8caa5d212a6c656ee76c0b00ed8753b0d49181a', 'width': 108, 'height': 216}, {'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18.png?width=216&crop=smart&format=pjpg&auto=webp&s=dee5eec5e2a9131c2e4cb3c7794684a16d019e98', 'width': 216, 'height': 432}, {'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18.png?width=320&crop=smart&format=pjpg&auto=webp&s=8fd5bb7382ab7be08469804fc2dbf1dce697c47f', 'width': 320, 'height': 640}], 'variants': {}, 'id': 'd2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18'}], 'enabled': False}
Qwen3.5-35b-A3b Vision capabilities in llama.cpp
1
I haven't found any documentation or threads on this anywhere, but I'm not able to get vision capabilities working on the new qwen 3.5 models in llama.cpp. I know llama.cpp usually looks for an mmproj file, but my understanding is that the qwen 3.5 models integrate vision into the model itself. `image input is not supported - hint: if this is unexpected, you may need to provide the mmproj` Is it possible to get vision working with llama.cpp and these new qwen models? Or must I use vLLM or another alternative?
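For whenever vision does work on a given build, the client side goes through the standard OpenAI-compatible image message format that llama-server accepts once multimodal support is loaded. A minimal sketch; the URL and file name are placeholders:

```python
import base64, requests

# Encode a local image as a data URL, the format the chat endpoint expects.
img_b64 = base64.b64encode(open("diagram.png", "rb").read()).decode()

r = requests.post("http://localhost:8080/v1/chat/completions", json={
    "messages": [{"role": "user", "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
    ]}],
})
print(r.json()["choices"][0]["message"]["content"])
```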
2026-03-02T20:42:01
https://www.reddit.com/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/
No_Information9314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj4ktw
false
null
t3_1rj4ktw
/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/
false
false
self
1
null
Qwen3.5 on Off Grid!
1
[Qwen3.5 on Off Grid!](https://preview.redd.it/haui2t420pmg1.png?width=760&format=png&auto=webp&s=1f4e4ddb9aa34d309a49f477466ade8ced96a1c6) Qwen3.5 on Off Grid! These are exciting times. My bet on edge AI getting better seems to be paying off. If you haven't already, go check out Off Grid!
2026-03-02T20:35:55
https://www.reddit.com/r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/
alichherawalla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj4ee5
false
null
t3_1rj4ee5
/r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/
false
false
https://preview.redd.it/…358482ab567988ce
1
null
LM studio kv caching issue?
1
Hi, I've been trying out LM Studio's local API, but no matter what I do the KV cache just explodes. Each of my prompts adds 100MB of memory, and it's just NEVER purged. I must be missing some parameter to include in my requests? I'm using the '/v1/chat/completions' endpoint, which is supposed to be stateless, so I'm confused. Thanks.
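For anyone debugging the same thing: the endpoint really is stateless, so the client resends the full history each call, and memory that grows across independent requests like the one below points at the server's cache not being reused or freed. A minimal call for testing (port is LM Studio's default; the model name stands in for whatever is loaded):

```python
import requests

r = requests.post("http://localhost:1234/v1/chat/completions", json={
    "model": "qwen3.5-9b",  # placeholder: the model currently loaded in LM Studio
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 32,
})
print(r.json()["choices"][0]["message"]["content"])
# The server keeps no conversation state between calls like this, so steadily
# growing KV memory across separate requests suggests a caching bug or setting.
```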
2026-03-02T20:33:58
https://www.reddit.com/r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/
After-Operation2436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj4ck1
false
null
t3_1rj4ck1
/r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/
false
false
self
1
null
Coding Power Ranking 26.02
1
Hi all, We're back with a new Power Ranking, focused on coding, including the best local model we've ever tested by a wide margin. My analysis is here: [https://blog.brokk.ai/the-26-02-coding-power-ranking/](https://blog.brokk.ai/the-26-02-coding-power-ranking/)
2026-03-02T20:20:01
https://brokk.ai/power-ranking
mr_riptano
brokk.ai
1970-01-01T00:00:00
0
{}
1rj3yzz
false
null
t3_1rj3yzz
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/
false
false
default
1
null
I made a native macOS app for Qwen3-TTS — voice cloning, emotion presets, and voice design, all offline
1
Wanted to use Qwen3-TTS on my Mac without dealing with Python environments and terminal commands, so I built a SwiftUI app around it. Figured others might find it useful too. It does voice cloning from audio samples, has 9 emotion presets with 3 intensity levels, voice design from text descriptions, and saves your generation history locally. Runs entirely offline on Apple Silicon through MLX. Built on top of mlx-audio by Prince Canuma and the CLI work by kapi2800 — couldn't have done it without their work. The app bundles its own Python runtime so there's no setup — just download the DMG and go. GitHub: https://github.com/PowerBeef/QwenVoice Let me know what you think or if you have any questions!
2026-03-02T20:17:25
https://v.redd.it/092osyw6vomg1
PowerBeef
v.redd.it
1970-01-01T00:00:00
0
{}
1rj3wgy
false
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/092osyw6vomg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/092osyw6vomg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/092osyw6vomg1/DASHPlaylist.mpd?a=1775074662%2CMGMyMmY4ZjAxNDE5ZDJlYzk4NWJkNGE4Nzk4YTc5OGRkMTg3MmJhOTM2MjM4YTEzOTU0MTlkZDA2ZTRiOWVjYg%3D%3D&v=1&f=sd', 'duration': 25, 'hls_url': 'https://v.redd.it/092osyw6vomg1/HLSPlaylist.m3u8?a=1775074662%2CYTYxYjcyYTUyMDVhZGI4MzVjMzcyNjRjNmU2NWZhMjM2M2FiOGJmNDdlMjEwMWRmMzk0NDhjMWUwNTdlMzA1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rj3wgy
/r/LocalLLaMA/comments/1rj3wgy/i_made_a_native_macos_app_for_qwen3tts_voice/
false
false
https://external-preview…8b0484dd3ac70a9e
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?format=pjpg&auto=webp&s=bf52cbe19d41e93c3e0927dfb3a7b7afc4d92dce', 'width': 1280, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0c045b3d5a72bcbc1da3ce35631b0d8a8ab04ec', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=216&crop=smart&format=pjpg&auto=webp&s=54a567dccbfb8d57ecb77da88e685a0e42b56a57', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=320&crop=smart&format=pjpg&auto=webp&s=09e3ff9f7b385dbfeff647be51016bd3a0deb59f', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=640&crop=smart&format=pjpg&auto=webp&s=d90f364219492816eb175dcc80f7ba4fb2d6d02d', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=960&crop=smart&format=pjpg&auto=webp&s=1a4a8be88ba648175237e7a1fcffa6b477d7eca0', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=205137bc4ab024e41f493238a4009f1a1f1d16e1', 'width': 1080, 'height': 607}], 'variants': {}, 'id': 'ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf'}], 'enabled': False}
Beginner's Guide to LLM Quantization: How It Works
1
[removed]
2026-03-02T20:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1rj3ump/beginners_guide_to_llm_quantization_how_it_works/
Pure-Fruit2654
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3ump
false
null
t3_1rj3ump
/r/LocalLLaMA/comments/1rj3ump/beginners_guide_to_llm_quantization_how_it_works/
false
false
self
1
null
Any advice for using draft models with Qwen3.5 122b ?!
1
I have been using Qwen3.5 for a while now and it is absolutely amazing. However, I was wondering if anyone has tried to use any of the smaller models as drafts (including, of course, but not limited to, Qwen3.5 0.6b?! A perfect fit at, say, Q2; should be AWESOME!). Any advice or tips on that? Thanks
2026-03-02T20:09:43
https://www.reddit.com/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3oue
false
null
t3_1rj3oue
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/
false
false
self
1
null
Question regarding model parameters and memory usage
1
Why do Qwen 3.5 9B and Qwen 2.5 VL 7B need so much memory at high context lengths? They ask for around 25GB of memory for 131k context length, whereas GPT-OSS 20B needs only 16GB for the same context length despite having more than twice the parameters.
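A likely explanation is KV-cache layout rather than parameter count: cache size scales with 2 × layers × KV heads × head dim × context, and GPT-OSS is reported to use sliding-window attention on alternating layers, which caps most of its cache regardless of context length. A rough calculator; the architecture numbers below are illustrative assumptions, not confirmed configs:

```python
def kv_bytes(layers, kv_heads, head_dim, ctx, window=None, bytes_per=2):
    """KV cache size in bytes; if `window` is set, cache is capped at the window."""
    per_token = 2 * kv_heads * head_dim * bytes_per  # K and V, per layer per token
    tokens = min(ctx, window) if window else ctx
    return layers * per_token * tokens

ctx = 131_072
# Illustrative: a dense ~9B model with full attention on every layer.
dense = kv_bytes(layers=36, kv_heads=8, head_dim=128, ctx=ctx)
# Illustrative: half the layers full attention, half sliding-window (128 tokens).
mixed = kv_bytes(12, 8, 64, ctx) + kv_bytes(12, 8, 64, ctx, window=128)
print(f"full-attention model: {dense/1e9:.1f} GB, windowed model: {mixed/1e9:.2f} GB")
```

With these (assumed) shapes, the full-attention model needs roughly 19 GB of KV cache at 131k while the windowed one needs about 3 GB, which is the right order of magnitude for the gap in the question.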
2026-03-02T20:09:13
https://www.reddit.com/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/
IPC300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3ocy
false
null
t3_1rj3ocy
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/
false
false
self
1
null
I'm tired
1
I'm tired. I started getting interested in local models about 3-4 months ago. During that time, the GPT and Sonnet killers came out, at least that's how the hype went. Every time a new model came out, it seemed like, "This is it!" But later it turned out that "it's still not Sonnet." And so many questions. Backend settings, which are like magic or a combination accidentally thrown in a game of dice. I saw a dozen posts on Reddit about how someone was able to run a particular model and how many tokens it gave out. Why is it still such a mess? Models. Qwen rolls out qwen3 coder next — is that 3 or 3.5? Which model is better for agentic coding - next or 3.5? And so with each model, you have to download and check for a long time, look for the right settings to run, the right quantisation. We want to automate things with LLMs, but we spend days on end searching for and configuring the next Sonnet killer. As soon as you get the coveted 50 tokens per second and find the secret settings only from the trusted author with Q4_Best_Of_The_Best, the next day a new model will come out, even better and faster (benchmarks can't lie!). Just look at the graph: one model is slightly better than the other, but overall they look like two almost identical models, don't they? Looking at these graphs, it is hardly possible to say unequivocally that one model will cope with a task and the other will not, that one hallucinates and the other does not, that one keeps the context and follows instructions and the other does not. These are two equally good models, and the difference is in the details. I like that progress is advancing at a rapid pace, but I don't like that even the smartest people in the world still haven't managed to bring all this into a sensible, understandable form.
2026-03-02T20:05:09
https://i.redd.it/o68lr6fquomg1.jpeg
Fast_Thing_7949
i.redd.it
1970-01-01T00:00:00
0
{}
1rj3kfq
false
null
t3_1rj3kfq
/r/LocalLLaMA/comments/1rj3kfq/im_tired/
false
false
https://preview.redd.it/…b61145214a26c185
1
{'images': [{'source': {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?auto=webp&s=ed3db413e690c008376f0838cdcada0f48cf4e7c', 'width': 1074, 'height': 1138}, 'resolutions': [{'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=108&crop=smart&auto=webp&s=105d431def0574f4631e48e38838e0b593e10713', 'width': 108, 'height': 114}, {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=216&crop=smart&auto=webp&s=2a99a0f212459605adcd727ced4c3813bf918497', 'width': 216, 'height': 228}, {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=320&crop=smart&auto=webp&s=61322d9e7c69ef839081f9dbb6a06d93120ad3f9', 'width': 320, 'height': 339}, {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=640&crop=smart&auto=webp&s=79174e4691b1d74da48820c94d273ae3a2d4e6d2', 'width': 640, 'height': 678}, {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=960&crop=smart&auto=webp&s=f69d59c103ad06364c19b8cdbb04c095c4441176', 'width': 960, 'height': 1017}], 'variants': {}, 'id': 'o68lr6fquomg1'}], 'enabled': True}
Strix Halo NPU performance compared to GPU and CPU in Linux.
1
Thanks to this project: https://github.com/FastFlowLM/FastFlowLM There is now support for the Max+ 395 NPU under Linux for LLMs. Here are some quick numbers for oss-20b. **NPU - 20 watts** Average decoding speed: 19.4756 tokens/s Average prefill speed: 19.6274 tokens/s **GPU - 82 watts** [ Prompt: 411.1 t/s | Generation: 75.6 t/s ] (1st prompt) [ Prompt: 1643.2 t/s | Generation: 73.9 t/s ] (2nd prompt) **CPU - 84 watts** [ Prompt: 269.7 t/s | Generation: 36.6 t/s ] (first prompt) [ Prompt: 1101.6 t/s | Generation: 34.2 t/s ] (second prompt) While the NPU is slower (much slower for PP), it uses much less power: a quarter the power of the GPU or CPU. It would be perfect for running a small model for speculative decoding. Hopefully there will be support for the NPU in llama.cpp someday, now that the mechanics have been worked out in Linux. Notes: The FastFlowLM model is Q4_1. For some reason, Q4_1 on llama.cpp just outputs gibberish; I tried a couple of different quants. So I used the Q4_0 quant in llama.cpp instead. The performance of Q4_0 and Q4_1 seems to be about the same, even with the gibberish output in Q4_1. The FastFlowLM Q4_1 quant of oss-20b is about 2.5GB bigger than the Q4_0/1 quants for llama.cpp. I didn't use llama-bench because there is no llama-bench equivalent for FastFlowLM. To keep things as fair as possible, I used llama-cli.
2026-03-02T20:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3i8m
false
null
t3_1rj3i8m
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/
false
false
self
1
null
New update CMDAI 1.1.1beta
0
This is the largest update to CMDAI so far, introducing new modes! We've focused on enhancing usability and adding powerful tools for AI interaction. Please test thoroughly and report any bugs in the Issues section – your feedback is crucial!

**🔄 New Modes**
1. Code Mode: Uses the file generated by Plan Mode to create the app. This allows seamless code execution based on planned logic.
2. Plan Mode: Generates a detailed plan for Code Mode, helping structure complex tasks before implementation.

**✨ New Functions**
1. Real-Time Model Activity Visibility: Now you can see what the model is doing in real time (e.g., thinking, analyzing, etc.). This provides better transparency during operations.
2. Writing Area: Added a dedicated space for writing with the model.

**⌨️ Commands**
1. Slash Prefix Requirement: From now on, commands only work when prefixed with /. We're still adding more commands in upcoming updates, as not all are fully implemented yet. Sorry for the inconvenience!

**📦 Installation, Model Loading, and Code Execution**
1. Install CMDAI easily and load your GGUF models with simple terminal commands.
2. Enhanced code execution support for smoother integration with your workflows.

**🐞 Bug Reporting**
1. This major update may have some rough edges – please report any bugs or issues in the [GitHub Issues](https://github.com/Krzyzyk33/CMDAI/issues) section. Your reports help us improve!

Thank you for using CMDAI! Star the repo if you like it, and stay tuned for more updates. 🌟 Download the app from my GitHub repository: https://github.com/Krzyzyk33/CMDAI/releases/tag/v1.3.0
2026-03-02T20:01:00
https://www.reddit.com/r/LocalLLaMA/comments/1rj3g91/new_update_cmdai_111beta/
KRZYZYK33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3g91
false
null
t3_1rj3g91
/r/LocalLLaMA/comments/1rj3g91/new_update_cmdai_111beta/
false
false
self
0
{'images': [{'source': {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?auto=webp&s=d053a5ebcebbc17f44b97363f808b69f88005b0c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=108&crop=smart&auto=webp&s=8e2d14fc092c19f62dccc4c1433059e7fba27d77', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=216&crop=smart&auto=webp&s=3df7f6b7ebdaaf5ecf9f835b426b718ff03a8984', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=320&crop=smart&auto=webp&s=0abdd661c125b47b96272524f998fafa37222bd2', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=640&crop=smart&auto=webp&s=309a51e5ee34ec828d92f3d52d3ec492393145ed', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=960&crop=smart&auto=webp&s=1c9ddb453e980129f57ee6bfdd19074123fc4e8f', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=1080&crop=smart&auto=webp&s=e1dd89231fae243231e380e89e511212302ad288', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE'}], 'enabled': False}
Running a business on a 20W Jetson box — local AI for support, shipping, invoicing ($0 to $60K in 3 weeks)
1
2026-03-02T19:58:12
https://openclawhardware.dev/blog/2026-02-28-zero-to-60k-clawbox-running-its-own-business
superactro
openclawhardware.dev
1970-01-01T00:00:00
0
{}
1rj3das
false
null
t3_1rj3das
/r/LocalLLaMA/comments/1rj3das/running_a_business_on_a_20w_jetson_box_local_ai/
false
false
default
1
null
Why Qwen 3.5 27B?
1
Qwen 3.5 has 27B and 35B versions. I wonder why they chose these numbers. I mean, I could fit a 24B as a Q4 in my 16GB, but 27B is just a tiny bit too large for q4_k_m and I would have to go down to q3_k_m to fit it. 24B vs 27B shouldn't make that much of a difference, no? Compared to q4 vs q3.
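The back-of-the-envelope arithmetic supports this: a GGUF file is roughly parameters × bits-per-weight / 8. A quick sketch, with the bpw figures below as common approximations for the K-quants rather than exact values:

```python
def gguf_gb(params_b: float, bpw: float) -> float:
    """Rough GGUF file size in GB: parameters (billions) * bits-per-weight / 8."""
    return params_b * bpw / 8

# Approximate effective bits-per-weight for common K-quants.
for name, bpw in [("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    print(f"27B @ {name}: ~{gguf_gb(27, bpw):.1f} GB")
# ~16.2 GB at Q4_K_M (no room left for KV cache in 16GB) vs ~13.2 GB at Q3_K_M.
```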
2026-03-02T19:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3cku
false
null
t3_1rj3cku
/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/
false
false
self
1
null
Qwen3.5 397B vs 27B!
1
How are they so smart?? Does it translate to real-world usage? What have your experiences been? It's mind-blowing that, being 10x smaller, they are competing with the big dawgs.
2026-03-02T19:56:18
https://i.redd.it/yofidqrusomg1.png
SennVacan
i.redd.it
1970-01-01T00:00:00
0
{}
1rj3bh0
false
null
t3_1rj3bh0
/r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/
false
false
https://preview.redd.it/…b11fd563c07e1db4
1
{'images': [{'source': {'url': 'https://preview.redd.it/yofidqrusomg1.png?auto=webp&s=013b7918dad1ed3e45e28c46b8ff7eaca90aeb72', 'width': 175, 'height': 348}, 'resolutions': [{'url': 'https://preview.redd.it/yofidqrusomg1.png?width=108&crop=smart&auto=webp&s=bef6ffdfc47e52fa1df12e2b2750a2db330afe86', 'width': 108, 'height': 214}], 'variants': {}, 'id': 'yofidqrusomg1'}], 'enabled': True}
Qwen3.5-9b 4bit quant acting weird
1
Hi folks, I'm trying to run Qwen3.5-9b 4-bit quants with LM Studio (there are several options available), and first of all - they're really impressive so far! However, sometimes it gets stuck on the same thought over and over and never finishes the thinking process. So far this seems to be the case only with MLX quants, while GGUF works just fine. Does anyone else have the same problem, and are there any solutions? If you're curious about benchmarks: on an M1 Pro with 16GB of memory, I get about 15 tok/s with GGUF and 30 tok/s with MLX.
2026-03-02T19:55:48
https://www.reddit.com/r/LocalLLaMA/comments/1rj3ay3/qwen359b_4bit_quant_acting_weird/
Ok_Whole_5900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj3ay3
false
null
t3_1rj3ay3
/r/LocalLLaMA/comments/1rj3ay3/qwen359b_4bit_quant_acting_weird/
false
false
self
1
null
Intelligence density per GB is increasing and I expect 4o intelligence by end of year for small models.
1
With the release of the small 3.5 Qwen models, I realize that intelligence density is constantly increasing, and I expect 10-100x smarter local models by 2028. Elon said the AI community underestimates the potential from algorithms alone by 100x, and maybe sees ~10x smarter AI yearly overall. Yes, models are getting smarter and multimodal, but the trend is clear: we'll get insane models that run locally on smartphones. I've never seen such technical advancements happen so fast.
2026-03-02T19:54:44
https://www.reddit.com/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/
Traditional-Card6096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj39se
false
null
t3_1rj39se
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/
false
false
self
1
null
Any idea what is being used for these generations?
1
2026-03-02T19:47:06
https://v.redd.it/08vdwcyhromg1
C0C0Barbet
v.redd.it
1970-01-01T00:00:00
0
{}
1rj326g
false
{'reddit_video': {'bitrate_kbps': 1200, 'fallback_url': 'https://v.redd.it/08vdwcyhromg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'width': 480, 'scrubber_media_url': 'https://v.redd.it/08vdwcyhromg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/08vdwcyhromg1/DASHPlaylist.mpd?a=1775072853%2CZTMyYjNkY2U0OTk0NWM4Mjc4Yjg0Njk4MTRkMDBkNTFjNTM2NTM3OTc3ZGU1ODE1YzZlMTU5MjFhZjU1YjgxMw%3D%3D&v=1&f=sd', 'duration': 15, 'hls_url': 'https://v.redd.it/08vdwcyhromg1/HLSPlaylist.m3u8?a=1775072853%2CZGJmMWU2MTkwNjc1MWM1OTg5NzUzYmZlZGIzNGZiZWYwY2UxNjliMTZjNGYyMjcwNjlkOTY5MTIyNDU4YjE5ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rj326g
/r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/
false
false
https://external-preview…b82f5a966477f3ad
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?format=pjpg&auto=webp&s=ee7facb28037e7d95218a6a48eab7a9eff300d51', 'width': 952, 'height': 1693}, 'resolutions': [{'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?width=108&crop=smart&format=pjpg&auto=webp&s=31da80ff50d0b6f75239fd4a90da587a29337787', 'width': 108, 'height': 192}, {'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?width=216&crop=smart&format=pjpg&auto=webp&s=abf2f6ae1801c7885af05eac3077d937f29d4882', 'width': 216, 'height': 384}, {'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?width=320&crop=smart&format=pjpg&auto=webp&s=d2e79c8413fba592777338e4b9fbd9b098caca93', 'width': 320, 'height': 569}, {'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?width=640&crop=smart&format=pjpg&auto=webp&s=93686bb23fe80146c633e882a8810c264f8c6a4e', 'width': 640, 'height': 1138}], 'variants': {}, 'id': 'cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM'}], 'enabled': False}
You can monitor LoRA training quality without running eval — structural metrics track loss at r > 0.95
1
We've been running experiments on Mistral-7B LoRA fine-tuning and found something practically useful that I haven't seen discussed here.

**The short version:** metrics computed from the adapter weights alone (no data, no forward pass) correlate with eval loss at |r| > 0.95 during training. You can watch these instead of running eval, or at least run eval way less often.

**Why this matters for your training runs:** Each eval event in our Mistral-7B runs took 30-60 seconds (forward pass over the holdout set). Structural SVD on the LoRA matrices takes 1-2 seconds and doesn't touch your data at all. If you're running eval every 50 steps over a 1200-step run, that's 20+ minutes of pure eval overhead. Structural monitoring gives you continuous signal for a fraction of that cost. The metrics that track best: adapter Frobenius norm (total magnitude of the adapter update) and σ_max (largest singular value). Both are cheap to compute and require zero held-out data.

**Practical pattern:** run structural monitoring continuously, reduce your eval frequency by 4-5x, and trigger actual eval only when the structural metrics plateau or do something weird. You get the same safety with less overhead.

**This also helps if you're data-constrained.** If you're fine-tuning on a small proprietary dataset, splitting off a validation set hurts. Structural metrics let you monitor training quality without reserving any data for eval.

One-line integration with HuggingFace Trainer:

```python
from gradience_hf import GradienceCallback

callback = GradienceCallback(out_dir="./logs", structural_interval=10)
trainer = Trainer(..., callbacks=[callback])
```

Full writeup with the experimental details: [huggingface.co/blog/johntnanney/you-done-need-eval-lora](https://huggingface.co/blog/johntnanney/you-done-need-eval-lora)

`pip install gradience`
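The two headline metrics are straightforward to reproduce directly from an adapter checkpoint. A minimal sketch, assuming a PEFT-style state dict where each target module stores lora_A/lora_B weight matrices (exact key names vary by PEFT version; this is not the gradience implementation itself):

```python
import torch

def structural_metrics(adapter_state: dict) -> dict:
    """Frobenius norm and sigma_max of each LoRA update delta_W = B @ A."""
    metrics = {}
    for key, A in adapter_state.items():
        if "lora_A" not in key:
            continue
        B = adapter_state[key.replace("lora_A", "lora_B")]
        delta = B.float() @ A.float()  # effective weight update, shape (out, in)
        metrics[key] = {
            "frobenius": torch.linalg.norm(delta).item(),        # update magnitude
            "sigma_max": torch.linalg.svdvals(delta)[0].item(),  # top singular value
        }
    return metrics

# Usage with a hypothetical checkpoint path:
#   from safetensors.torch import load_file
#   print(structural_metrics(load_file("adapter_model.safetensors")))
```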
2026-03-02T19:42:53
https://www.reddit.com/r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/
Front-Structure2385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2y4n
false
null
t3_1rj2y4n
/r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?auto=webp&s=0645cd7dd6efd7f2abc41057014dd48eb710a52e', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=108&crop=smart&auto=webp&s=bb7fbdce3d9087d6fa707f92ab8e1679e28e41aa', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=216&crop=smart&auto=webp&s=2f125d1568c590d05c1aeb464eee502d17b7a3e7', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=320&crop=smart&auto=webp&s=d1866a809fe12ccacaf7236c9c269c9820843d13', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=640&crop=smart&auto=webp&s=65451fc6352f750d76251144dd98f32ea8ef47fe', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=960&crop=smart&auto=webp&s=4894b556c3af48ce6f527d9757e3dc0a2be0fb5e', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=1080&crop=smart&auto=webp&s=2f8595f76b4ecdcd76356973b67ae223719ac5d4', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY'}], 'enabled': False}
How to stop burning money on OpenClaw. What I learned from talking to 100+ users
1
OpenClaw is one of the fastest-growing open-source projects in recent history. 230,000 GitHub stars, 116,000 Discord members, 2 million visitors per week. All of that in two months. People are running personal AI agents on their Mac Minis and cloud servers. It works, and it is genuinely useful. Like any major shift in how we use technology, it comes with constraints. After speaking with over a hundred OpenClaw users, cost is the topic that comes up in almost every conversation. Someone sets up their agent, starts using it daily, and two weeks later discovers they have spent $254 on API tokens [1]. Another spent $800 in a month [2]. These are not power users pushing the limits. These are normal setups with normal usage.

# Where the money goes

Your agent sends every request to your primary model. A heartbeat check, a calendar lookup, a simple web search. If your primary model is Opus 4.6, all of it goes through the most expensive endpoint available. Your costs stack up from four main sources:

* **System context** - `SOUL.md` loads into the prompt on every call. Other bootstrap files like `AGENTS.md` contribute depending on what the agent needs. Even with memory pulled in through search rather than loaded raw, the base system context still adds up. On a typical setup, you are looking at thousands of tokens billed on every single request.
* **Conversation history** - Your history grows with every exchange. After a few hours of active use, a session can carry a large number of tokens. The entire history tags along with every new request.
* **Heartbeat checks** - The heartbeat runs in the background every 30 minutes by default. Each check is a full API call with all of the above included.
* **Model choice** - Without routing, every request is sent to a single primary model, whether the task is simple or complex. That prevents cost optimization. One user woke up to an unexpected $141 bill overnight because the heartbeat was hitting the wrong model [4].

Put all of this together on an unoptimized Opus setup and you can easily spend more per day than most people expect to pay in a month.

https://i.redd.it/2st2esqyoomg1.gif

# Use one agent with skills instead of many agents

This is the highest-impact change you can make and almost nobody talks about it. A lot of users build multi-agent setups. One agent for writing, one for research, one for coding, one to coordinate. Each agent runs as a separate instance with its own memory, its own context, and its own configuration files. Every handoff between agents burns tokens. Each agent adds its own fixed context overhead, so costs scale with every new instance you spin up. OpenClaw has a built-in alternative. A skill is a markdown file that gives your agent a new capability without creating a new instance. Same brain, same memory, same context. One user went from spending hundreds per week on a multi-agent setup to $90 per month with a single agent and a dozen skills [2]. The quality went up because context stopped getting lost between handoffs. Keep one main agent. Give it a skill for each type of work. Only spin up a sub-agent for background tasks that take several minutes and need to run in parallel.

# Route each task to the right model

The majority of what your agent does is simple. Status checks, message formatting, basic lookups. These do not need a frontier model. Only a small fraction of requests actually benefits from premium reasoning. Without routing, all of it hits your most expensive endpoint by default. One deployment tracked their costs before and after implementing routing and went from $150 per month to $35 [5]. Another went from $347 to $68 [6]. Smart routing tools can reduce costs by 70 percent on average [7]. OpenClaw does not ship with a built-in routing engine, so you need an external tool to make this work. Manifest handles this out of the box. It classifies each request and routes it to the right model automatically, so your heartbeats and simple lookups go to Haiku while complex reasoning still hits Opus. That alone cuts your bill dramatically without any manual config per task. If you prefer a DIY approach, you can set up multiple model configs or write a routing skill yourself, but it takes more effort to get right (a minimal sketch of the routing idea appears just before the sources).

https://i.redd.it/4t8hu6s2pomg1.gif

# Cache what does not change

Your SOUL.md, MEMORY.md, and system instructions are the same from one call to the next. Without caching, the provider processes all of those tokens from scratch on every single request. You pay full price every time for content that has not changed. Prompt caching is a capability on the provider side. Anthropic offers an explicit prompt caching mechanism with a documented TTL where cached reads cost significantly less than fresh processing [8]. Other providers handle caching differently or automatically, so the details depend on which model you are using. The point is the same: static tokens that hit warm cache cost less than tokens processed from scratch. This is where the heartbeat becomes relevant. If your heartbeat fires often enough to keep the provider's cache warm between calls, every check reuses the cached system context instead of reprocessing it from zero. Cache TTLs vary by provider and configuration. Anthropic's standard TTL is around 5 minutes, with longer windows available depending on the setup. Community members have found that aligning the heartbeat interval just under whichever TTL you are working with keeps the cache alive. Combine that with routing your heartbeat to a cheap model and each background check costs a fraction of what it would on a cold Opus call. The key principle is simple. Make sure your static content (system instructions, bootstrap files) sits at the beginning of your prompt and variable content comes at the end. That structure maximizes what the provider can cache. One user documented a drop from $720 to $72 per month primarily through this approach [9].

# Shrink your context window

Every message you send includes your full conversation history. After a few hours that history alone can cost more than the actual answer. Three things you can do about it. Start new conversations often. This is the easiest win. Instead of running one conversation for an entire day, start a fresh one every couple of hours. Your agent keeps its long-term memory across conversations but drops the accumulated back-and-forth. Context resets to your bootstrap files only. Clean up your SOUL.md. Everything in that file loads on every single call. If you have task-specific instructions sitting next to your personality rules, you are paying for all of it every time. Move the specialized parts into skills. They only load when the agent actually needs them. Optimize how memory loads into context. OpenClaw uses memory_search to pull relevant memories into your prompt, not the raw file. But the more memories accumulate over weeks of use, the more context those searches can return. Configuring the QMD backend and tuning what gets retrieved keeps that footprint tight [3]. Some community members have built structured memory layers on top of this and cut their base context to a fraction of what it used to be.

# Run a local model for the simple stuff

Running a model on your own hardware eliminates API costs for the tasks that do not need a cloud model. You pay for hardware once. After that, every inference is free. For heartbeats, classification, and routine lookups, local models are more than capable. The popular choice right now is Qwen 3 32B. On an RTX 4090 it runs at 40+ tokens per second [10]. A Mac Mini running 24/7 handles the lightweight workload while cloud models only get called for complex reasoning. Ollama makes the integration simple. Install, pull the model, point your OpenClaw config at the local endpoint for specific task types. It works through an OpenAI-compatible HTTP endpoint.

https://preview.redd.it/jtttl7t7pomg1.png?width=2322&format=png&auto=webp&s=7558fa39cb10b5c580c27639ce1cda0c9dd5c8ac

# Track your costs daily

Every user who cut their bill says the same thing. The fix was not a specific technique. It was seeing where the money went. Checking your bill once a month hides everything. You miss the day a cron job misfired. You miss the skill that routes to Opus when it should hit Haiku. Use an observability tool that shows you per-prompt, per-model cost breakdowns. When you can see exactly which request went to which model and what it cost, problems become obvious. The fixes usually take minutes once you see the data. Some routing tools offer real-time tracking with daily budgets and alerts so you catch problems before they compound. Your provider dashboard already tracks spending, but the granularity varies.

# Where to start

Start with visibility. Set up an observability tool so you can see which prompts cost what and which models they hit. You cannot optimize what you cannot measure. If you are running multiple agents, switch to one agent with skills. That is the highest return for the least effort. Route your heartbeat to a cheap model. This alone makes a noticeable difference on a 24/7 agent. Enable prompt caching. It takes minutes to set up. Keep your context lean. Clean up your SOUL.md, start new conversations regularly, and switch your memory to vector search. Add a local model if you have the hardware. It handles heartbeats and simple tasks at zero marginal cost. Based on what we've observed across multiple OpenClaw deployments, applying these changes can reduce monthly costs by a factor of five.
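As promised in the routing section, here is a minimal sketch of what a request router does. Everything in it (model names, proxy URL, keyword heuristic) is an illustrative assumption, not OpenClaw's or Manifest's actual API:

```python
import requests

# Hypothetical model identifiers behind an OpenAI-compatible proxy.
CHEAP, EXPENSIVE = "claude-haiku", "claude-opus"

def route(prompt: str) -> str:
    """Crude length/keyword heuristic: only hard requests go to the expensive model."""
    hard = any(k in prompt.lower() for k in ("refactor", "design", "prove", "plan"))
    return EXPENSIVE if hard or len(prompt) > 2000 else CHEAP

def complete(prompt: str) -> str:
    r = requests.post(
        "http://localhost:4000/v1/chat/completions",  # proxy URL is an assumption
        json={"model": route(prompt),
              "messages": [{"role": "user", "content": prompt}]},
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

# Heartbeats and lookups hit CHEAP; "plan the Q2 migration" would hit EXPENSIVE.
```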
# Sources

[1] Reddit user report — $254 spent in two weeks on normal OpenClaw usage. [r/OpenClaw](https://www.reddit.com/r/OpenClaw/)
[2] u/jordymaui — "I wasted 80 hours and $800 setting up OpenClaw." Went from hundreds/week on multi-agent to $90/month with one agent and skills. [X post, Feb 16 2026](https://x.com/jordymaui)
[3] OpenClaw official documentation — Token Use and Costs. Bootstrap file limits, memory search, QMD backend. [docs.openclaw.ai](https://docs.openclaw.ai/reference/token-use)
[4] u/desolat68 — "I woke up this morning to a 141 dollar bill because she was using Pro3 even after she told me she configged for flash." [OpenClaw GitHub Discussion #1949](https://github.com/openclaw/openclaw/discussions/1949#discussioncomment-15701481)
[5] ClawHosters optimization guide — Documented cost drop from $150/month to $35/month (77% reduction) through routing and caching. [ClawHosters](https://clawhosters.com/blog/posts/openclaw-token-costs-optimization)
[6] EastonDev performance guide — Documented cost drop from $347/month to $68/month (80% reduction) through combined optimizations. [EastonDev](https://eastondev.com/blog/en/posts/ai/20260205-openclaw-performance/)
[7] Internal data — 70% average cost reduction through [Manifest](https://github.com/mnfst/manifest) intelligent model routing across hundreds of OpenClaw deployments.
[8] Anthropic prompt caching pricing — Cache reads billed at a significantly lower rate than input tokens. [Anthropic docs](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
[9] Developer case study — Monthly costs dropped from $720 to $72 primarily through prompt caching implementation. [Medium](https://labeveryday.medium.com/prompt-caching-is-a-must-how-i-went-from-spending-720-to-72-monthly-on-api-costs-3086f3635d63)
[10] Qwen 3 32B benchmark — 40+ tokens/second on RTX 4090. Widely reported across community benchmarks. [Qwen docs](http://qwen.readthedocs.io/en/latest/getting_started/speed_benchmark.html)

*This article was originally published on* [Claw's Newsletter](https://clawsnewsletter.substack.com/p/how-to-stop-burning-money-on-openclaw)*. Subscribe for weekly posts on OpenClaw optimization.*
2026-03-02T19:37:03
https://www.reddit.com/r/LocalLLaMA/comments/1rj2s2y/how_to_stop_burning_money_on_openclaw_what_i/
stosssik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2s2y
false
null
t3_1rj2s2y
/r/LocalLLaMA/comments/1rj2s2y/how_to_stop_burning_money_on_openclaw_what_i/
false
false
https://preview.redd.it/…e3314a2dbca92699
1
null
New Qwen models for speculative decoding
1
Hey, has anyone successfully used the new Qwen models (0.8/2/4)B as draft models for speculative decoding? I benchmarked 122B and 397B using 0.8B, 2B, and 4B as draft models (tested 4B only with the 122B variant; 397B triggered OOM errors). However, I found no performance improvement in either prompt processing or token generation compared to the baseline (I didn't use llama-bench, just identical prompts). Is some PR not merged yet? Any success stories? I used an .ini file; all entries are similar:

```ini
version = 1

[*]
models-autoload = 0

[qwen3.5-397b-iq4-xs:thinking-coding-vision]
model = /mnt/ds1nfs/codellamaweights/qwen3.5-397b-iq4-xs-bartowski/Qwen_Qwen3.5-397B-A17B-IQ4_XS-00001-of-00006.gguf
c = 262144
temp = 0.6
top-p = 0.95
top-k = 20
min-p = 0.0
presence-penalty = 0.0
repeat-penalty = 1.0
cache-ram = 65536
fit-target = 1536
mmproj = /mnt/ds1nfs/codellamaweights/qwen3.5-397b-iq4-xs-bartowski/mmproj-Qwen_Qwen3.5-397B-A17B-f16.gguf
load-on-startup = false
md = /mnt/ds1nfs/codellamaweights/Qwen3.5-0.8B-UD-Q6_K_XL.gguf
ngld = 99
```

Hardware is dual A5000 / Epyc 9274F / 384GB of 4800 RAM. Just for reference, at 4k context (PP / TG in t/s): 122B: 279 / 41. 397B: 72 / 25.
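For a rough apples-to-apples number without llama-bench, one option is timing generation through the server's OpenAI-compatible endpoint with the draft model enabled and disabled. A sketch; the URL and prompt are placeholders, and it assumes the server reports a usage field (llama-server does):

```python
import time, requests

def tg_speed(prompt: str, n: int = 256,
             url: str = "http://localhost:8080/v1/completions") -> float:
    """Generation tokens/s, using the server-reported completion token count."""
    t0 = time.time()
    r = requests.post(url, json={"prompt": prompt, "max_tokens": n,
                                 "temperature": 0.0})  # greedy: deterministic compare
    r.raise_for_status()
    used = r.json()["usage"]["completion_tokens"]
    return used / (time.time() - t0)

# Run the same prompt with and without `md`/`ngld` set and compare the numbers.
print(f"{tg_speed('Write a binary search in Python.'):.1f} t/s")
```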
2026-03-02T19:36:22
https://www.reddit.com/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/
unbannedfornothing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2rec
false
null
t3_1rj2rec
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/
false
false
self
1
null
Is speculative decoding available with the Qwen 3.5 series?
1
Now that we have a series of dense models from 27B to 0.8B, I'm hoping that speculative decoding is on the menu again. The 27B model is great, but too slow. Now if I can just get some time to play with it...
2026-03-02T19:31:57
https://www.reddit.com/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/
PermanentLiminality
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2mzy
false
null
t3_1rj2mzy
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/
false
false
self
1
null
Qwen3.5 30B is Incredible for Local Deployment
1
I just tried out Qwen3.5 30B locally, and I am absolutely blown away by its performance! The model is incredibly powerful and runs smoothly even on local hardware. If you haven't tried it yet, I highly recommend giving it a go. It's a game-changer for local AI deployment!
2026-03-02T19:25:53
https://www.reddit.com/r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/
Marco_Ferreira43516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2gwf
false
null
t3_1rj2gwf
/r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/
false
false
self
1
null
Is LocalLLaMA for hate and malicious comments? - leave your comments
1
Is it normal on **LocalLLaMA** that comments under the perhaps naive, or not always wise, posts that sometimes appear here immediately turn into hate from some people? Yes, there are people who are resistant to knowledge, but you can just skip such posts. Unfortunately, some commenters put in the effort of reading only to make malicious comments. I haven't been here long; I often come across interesting things, and sometimes I even wrote something, but now I don't feel like I will post anything in the near future. It's like Linux groups, where if you're not a master of the terminal, you get a wave of hate. **It hasn't happened to me personally, but when I read malicious comments under some posts, I don't feel like posting anymore.** There are also people here who are new, don't know what's what, and can't connect the dots at first. They also deserve to learn something! **If you've had such experiences, share them in the comments below.** Places like LocalLLaMA (non-ideological, non-political) should be a place for everyone!
2026-03-02T19:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/
mossy_troll_84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj2fm9
false
null
t3_1rj2fm9
/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/
false
false
self
1
null
SpongeBob Art with Qwen 3.5 9b vs Opus 4.6
1
2026-03-02T19:23:03
https://i.redd.it/e58yrl38nomg1.jpeg
camracks
i.redd.it
1970-01-01T00:00:00
0
{}
1rj2e3j
false
null
t3_1rj2e3j
/r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/
false
false
https://preview.redd.it/…9f5c1552453038e6
1
{'images': [{'source': {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?auto=webp&s=4b24700e7498059a3008a94860c05acba9e0f93e', 'width': 1747, 'height': 1892}, 'resolutions': [{'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=108&crop=smart&auto=webp&s=cbd53e6d7107887c5735c8e474bf31b2e35400b2', 'width': 108, 'height': 116}, {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=216&crop=smart&auto=webp&s=231e302c2bcfa7fbaafcf3e52dda0b1642e120b3', 'width': 216, 'height': 233}, {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=320&crop=smart&auto=webp&s=dde605a10ff12b20aa67007a0a5f1114cfbe1d37', 'width': 320, 'height': 346}, {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=640&crop=smart&auto=webp&s=d2a7d8fa1cd2ad2d00f16afa78bc3ca8cc130f3f', 'width': 640, 'height': 693}, {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=960&crop=smart&auto=webp&s=587bbd1561423be8d1eb98e94f434eae006256cb', 'width': 960, 'height': 1039}, {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=1080&crop=smart&auto=webp&s=255e4b6ccab4d10181357320714e37fbf1ac6db5', 'width': 1080, 'height': 1169}], 'variants': {}, 'id': 'e58yrl38nomg1'}], 'enabled': True}
Open source tool for fine-tuning/evals now works with NVIDIA DGX Spark (if your lab has one)
1
For those of you who have an NVIDIA DGX Spark in your training setup, Transformer Lab just released native support for it. It’s a free, open source tool for running fine-tuning, training, and evals, and it replaces a fragmented landscape of scripts and tools. Transformer Lab handles environment setup while managing your entire training workflow: tracking runs, storing datasets/checkpoints, and coordinating compute. If nothing else, it can help you skip the hassle of setting up CUDA 13 and other ML libraries on your machine. Open source and free to use. Worth a look if you're using DGX hardware: [https://lab.cloud/docs/install/](https://lab.cloud/docs/install/) Appreciate feedback on how to make it more helpful. https://preview.redd.it/tk4jrwv1lomg1.png?width=2560&format=png&auto=webp&s=7af1a43a43625bbd2b6af8b25798f55a100d91ff
2026-03-02T19:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1rj21zm/open_source_tool_for_finetuningevals_now_works/
Historical-Potato128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj21zm
false
null
t3_1rj21zm
/r/LocalLLaMA/comments/1rj21zm/open_source_tool_for_finetuningevals_now_works/
false
false
https://preview.redd.it/…0483c8d5fa2daa3b
1
null
I got tired of AI agents crashing my GPU and having root access. So I wrote a Rust Kernel to schedule and secure them (It’s probably broken)
1
Hi everybody out there running local LLMs, I'm building a small, free **process manager/daemon** (ORE) for local AI agents. This has been brewing because I got extremely annoyed that running two agents (like OpenClaw or custom scripts) at the same time causes **Ollama/vLLM** to **OOM** crash my GPU. It won't be a massive, bloated framework but serves as an **OS kernel** for AI. It’s just a tiny daemon written in Rust that sits between your apps and your inference engine. What I've built so far:

* **The VRAM Semaphore:** A strict priority queue. If Agent A is generating, Agent B's request is queued. No more CUDA OOM crashes. (See the sketch below.)
* **Context Firewall:** Intercepts prompts at the syscall level. It scrubs PII (regex for emails/CCs) and uses **structural boundary enforcement** heuristics to block prompt injections before they reach the model.
* **App Manifests (.toml):** Agents must declare if they need network, file, or shell access. ORE enforces it.

I'm working on **Unix Domain Sockets** for IPC, specifically agent-to-agent swarms via vector pipes (embeddings) to minimize GPU compute.

The roadmap (the goal is to build a POSIX standard for AI infra):

* **KV-Cache Paging:** Pausing an idle agent, streaming its context from VRAM to an NVMe SSD, and resuming it later (virtual memory for AI).
* **LoRA Multiplexing:** Holding one base model in VRAM and dynamically hot-swapping 50MB adapter personalities per agent request.
* **Semantic File System:** A shared vector memory space via IPC so agents don't have to duplicate context.

If you are interested in low-level systems engineering, GPU memory management, or AI infrastructure in Rust, I'm just looking for suggestions or people who want to hack on the core scheduler with me. I'm still early in my systems journey and learning a lot while building this, so feedback is very welcome. It works on my machine. If it panics on yours, the Issue tracker is open but PRs speak louder than feature requests ;-)

GitHub: [https://github.com/Mahavishnu-K/ore-kernel](https://github.com/Mahavishnu-K/ore-kernel)

Discord (I'm hanging out in #dev-core): [https://discord.gg/ZdGYnwZe](https://discord.gg/ZdGYnwZe)

Mahavishnu-K
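And since people keep asking what the queuing actually looks like: here is a minimal sketch of the idea in Python (ORE itself is Rust, and these names are illustrative, not ORE's API). One permit means a second agent waits instead of firing a concurrent request that OOMs the GPU:

```python
# Minimal sketch of the "VRAM semaphore" idea -- one generation at a time.
import asyncio

gpu = asyncio.Semaphore(1)  # a single permit guards the inference engine

async def run_inference(agent: str, prompt: str) -> str:
    async with gpu:  # agent B awaits here while agent A is generating
        print(f"{agent} holds the GPU")
        await asyncio.sleep(0.1)  # stand-in for the actual Ollama/vLLM call
        return f"response to: {prompt}"

async def main():
    results = await asyncio.gather(
        run_inference("agent-a", "summarize repo"),
        run_inference("agent-b", "draft email"),
    )
    print(results)

asyncio.run(main())
```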
2026-03-02T19:01:35
https://www.reddit.com/r/LocalLLaMA/comments/1rj1sn9/i_got_tired_of_ai_agents_crashing_my_gpu_and/
InternationalSun5556
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1sn9
false
null
t3_1rj1sn9
/r/LocalLLaMA/comments/1rj1sn9/i_got_tired_of_ai_agents_crashing_my_gpu_and/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?auto=webp&s=ebff41f366ee68a3c2468ff62f0ed3e7f6eebbcf', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=108&crop=smart&auto=webp&s=3e71588294d1be500f0fb1ff320aaefa7ee143d9', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=216&crop=smart&auto=webp&s=dc4cdf55e0ffd8d027921ff2fc02ef33deba7048', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=320&crop=smart&auto=webp&s=f8c1197679790789d5c04365274605055c9ad69c', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=640&crop=smart&auto=webp&s=9f1f8e2fd2353e393e6952d5a02b7ff908b4c0c9', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=960&crop=smart&auto=webp&s=16fa9c920a1da3cff20ff51721d1d8446af42878', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=1080&crop=smart&auto=webp&s=7edaf5aeb50e781f78ad7cd2fd43b692dc982fac', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ'}], 'enabled': False}
AI agents don't have a context problem. They have a judgment problem.
1
I've been using AI agents and copilots daily for over a year and something keeps nagging me. These tools have access to my code, my docs, my conversations. But when they make a decision on my behalf - drafting a response, triaging an issue, suggesting an approach - it feels *off*. Not wrong exactly, but generic. Like a competent stranger did it instead of me. The agent has my data but not my judgment.

When product says "this is a small change," I know which ones will ripple through half the system. I've learned which monitoring alerts are noise and which mean something's actually on fire. When someone proposes a new dependency, I have a gut sense for which ones will become abandonware. These aren't things I can write in a prompt. They're reasoning patterns I've built over years of being wrong and learning from it. They shape every decision I make. None of it transfers.

The industry's answer is more context. More RAG, bigger context windows, pay for more tokens. But that's not how human expertise works. My decisions aren't better because I have more information - they're better because I've built reasoning patterns for which information to weigh and which to ignore. That's judgment, not context.

The memory tools that exist (Screenpipe, Rewind, etc.) are a step forward - they capture what I do. But they stop at *what*. I can look up that I switched approaches at 3 PM. The reasoning behind it is still in my head today -- but it won't be next month. No tool captures it before it fades, so it's lost permanently. Multiply that across every meaningful decision, every day, and you're leaking the most valuable part of your expertise: not what you did, but why.

So every time I work with an AI agent, I'm starting from scratch. It has my files but not my instincts. The more I delegate to agents, the more this gap matters - because they're making decisions in my name that don't reflect how I actually think.

**This is where I get stuck and want this community's brain:**

The problem seems clear to me: we need to capture not just *what* someone does, but *how they reason*, and make a local model learn that. Not preferences ("I liked output A over B"), but thinking traces - the chain of reasoning that led to a decision, the tradeoffs weighed, the instincts applied. And it needs to happen the same day, while the reasoning is still fresh - before memory decay turns a clear rationale into a vague "I think it was because..."

But how? Here's where I see hard open questions and I'm genuinely curious how people here would approach them:

**1. How do you even capture "reasoning" without making it a chore?** The richest data is when someone explains *why* they made a decision. But asking people to narrate their thinking all day is a non-starter. What's the minimum-friction way to extract reasoning traces from someone's workday? Periodic interviews? Prompted journaling? Passive inference from behavioral patterns? Something else entirely? Has anyone here tried approaches to this?

**2. Is fine-tuning the right approach, or is structured retrieval enough?** One path is: collect enough thinking traces and fine-tune a local model (LoRA etc.) to actually reason like you. Another path is: just store your past reasoning in a vector DB and retrieve similar situations at inference time. The first is deeper but harder. The second is simpler but maybe "good enough"? Where do people here see the tradeoff? Has anyone fine-tuned a model on personal data and seen meaningful behavioral shift?

**3. What's the right unit of "personal alignment"?** Companies do RLHF at population scale - millions of preferences shaping one model. Nobody's really doing it for one person. What would personal alignment even look like technically? Is it a LoRA adapter? A giant structured system prompt? A reward model trained on one person's preferences? A combination? What's most practical with current open source tooling?

**4. The creepiness problem — is it solvable or fatal?** A system that learns how you think requires observing what you do. That's inherently intimate. Is "fully local, fully open source, user controls everything" enough to make people comfortable? Or is the concept itself too uncomfortable regardless of implementation? I go back and forth on this - the individual upside could be massive, but the psychological barrier might make it dead on arrival.

**5. Where does this create the most value first?** I keep thinking about engineering - a senior dev's reasoning patterns captured and used to help onboard juniors, or to keep decision-making consistent across a team. But maybe there are better starting points. Where would *you* want an AI that actually thinks like you instead of thinking like a generic model with your files attached?

Not launching anything. Not selling anything. I'm a full-stack engineer trying to figure out if this is a tractable problem and what the best angle of attack would be. The local LLM community seems like the right group to stress-test this with. Would love to hear where you think I'm wrong, what I'm missing, or if anyone's already cracked part of this.
2026-03-02T19:01:17
https://www.reddit.com/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/
Illustrious-Bet6287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1sbq
false
null
t3_1rj1sbq
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/
false
false
self
1
null
GPU poor folks(<16gb) what’s your setup for coding ?
1
I’m on a 16GB M1, so I need to stick to \~9B models. I find Cline is too much for a model that size; I think the system prompt telling it how to navigate the project is too heavy. Is there anything like Cline but more lightweight, where I load one file at a time and it just focuses on code changes?
2026-03-02T18:56:34
https://www.reddit.com/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/
FearMyFear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1ni2
false
null
t3_1rj1ni2
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/
false
false
self
1
null
Which QWEN 3.5 model can i run on my laptop
1
I am confused about which model I can run and which Unsloth quant I can use. I have an Asus Zephyrus G15 with a Ryzen 9 5900HS with Radeon graphics, 16GB RAM, and an RTX 3060 laptop GPU with 6GB VRAM. Also, is there a way I can connect the local model to Antigravity? I’m analyzing large datasets and constantly have to tweak and test cases.
2026-03-02T18:51:22
https://www.reddit.com/r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/
dolo937
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1ifv
false
null
t3_1rj1ifv
/r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/
false
false
self
1
null
Are autonomous AI agents with wallet access actually a security risk, or am I overthinking this?
1
[removed]
2026-03-02T18:50:07
https://www.reddit.com/r/LocalLLaMA/comments/1rj1h6u/are_autonomous_ai_agents_with_wallet_access/
CraftyWriter2543
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1h6u
false
null
t3_1rj1h6u
/r/LocalLLaMA/comments/1rj1h6u/are_autonomous_ai_agents_with_wallet_access/
false
false
self
1
null
[llamacpp][LMstudio] Draft model settings for Qwen3.5 27b?
1
Hey, I'm trying to figure out the best draft model (speculative decoding) for `Qwen3.5-27b`. Using LM Studio, I downloaded `Qwen3.5-0.8B-Q8_0.gguf` but it doesn't show up in spec-decode options. Both my models were uploaded by `lmstudio-community`. The `27b` is a `q4_k_m`, while the smaller one is `q8`. Next, I tried:

```
./llama-server -m ~/.lmstudio/models/lmstudio-community/Qwen3.5-27B-GGUF/Qwen3.5-27B-Q4_K_M.gguf \
  -md ~/.lmstudio/models/lmstudio-community/Qwen3.5-0.8B-GGUF/Qwen3.5-0.8B-Q8_0.gguf -ngld 99
```

but saw no benefit; still getting the same token generation @ 7tps. Spec-decode with LM Studio is nice because it gives a good visualization of accepted draft tokens. Can anyone help me set it up?
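For reference, the draft-acceptance knobs I still need to sweep in llama-server look like this (a sketch; the values are starting guesses, and any gain depends entirely on the acceptance rate):

```
./llama-server -m <27B gguf> -md <0.8B gguf> -ngld 99 \
  --draft-max 16 --draft-min 1 --draft-p-min 0.75
```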
2026-03-02T18:47:00
https://www.reddit.com/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/
v01dm4n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1e35
false
null
t3_1rj1e35
/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/
false
false
self
1
null
Did someone managed to get speculative decoding working on Qwen3.5 models ?
1
[removed]
2026-03-02T18:44:07
https://www.reddit.com/r/LocalLLaMA/comments/1rj1b0w/did_someone_managed_to_get_speculative_decoding/
ArthurianX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj1b0w
false
null
t3_1rj1b0w
/r/LocalLLaMA/comments/1rj1b0w/did_someone_managed_to_get_speculative_decoding/
false
false
self
1
null
Built a local memory layer for AI agents where memories actually fade over time — works with any LLM, no cloud, no API keys
1
Most AI memory tools are basically just "save everything forever and search it." That breaks fast because stale, irrelevant context clutters every response. YourMemory works differently. Memories decay with time using the Ebbinghaus Forgetting Curve. The ones you keep coming back to stay strong. The ones you never reinforce quietly disappear. Just like real memory. Retrieval isn't just semantic search either. It's similarity × freshness. A memory from 2 months ago ranks lower than a recent one even if it's more topically relevant. It's not Claude specific. There's a REST API so any agent can use it — LangChain, AutoGPT, custom scripts, anything with HTTP. Claude Code gets native MCP tools (recall\_memory, store\_memory, update\_memory) but the backend is completely model agnostic. Stack: PostgreSQL + pgvector, Ollama (fully local embeddings), FastAPI. One command to run: `docker compose up` [https://github.com/sachitrafa/yourmemory](https://github.com/sachitrafa/yourmemory) Curious what the local-first crowd thinks. Open to harsh feedback.
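To make the ranking concrete, here's a minimal sketch of what similarity × freshness scoring looks like (illustrative only; the half-life constant and function names here are simplified stand-ins, not the exact internals):

```python
# Decay-weighted retrieval score: semantic similarity discounted by an
# Ebbinghaus-style exponential forgetting curve.
import math, time

def freshness(last_reinforced_ts: float, half_life_days: float = 14.0) -> float:
    """1.0 when just reinforced, halving every `half_life_days`."""
    age_days = (time.time() - last_reinforced_ts) / 86400
    return math.exp(-math.log(2) * age_days / half_life_days)

def score(similarity: float, last_reinforced_ts: float) -> float:
    # `similarity` would come from a pgvector cosine query in practice
    return similarity * freshness(last_reinforced_ts)

# A 2-month-old memory loses to a slightly less similar recent one:
old = score(0.90, time.time() - 60 * 86400)
new = score(0.85, time.time() - 1 * 86400)
print(old < new)  # True
```

Reinforcing a memory just resets its `last_reinforced_ts`, which is what keeps frequently recalled memories strong.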
2026-03-02T18:41:34
https://www.reddit.com/r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/
Sufficient_Sir_5414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj18h4
false
null
t3_1rj18h4
/r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/
false
false
self
1
null
How are you handling spending controls for your AI agents?
1
I've been looking into agents that make real purchases (booking flights, buying SaaS, etc.) and I'm surprised how few guardrails exist. OpenClaw has 190k stars and 5,400+ skills but the financial control story is basically "trust the agent" or "don't let it spend." For those running agents that interact with payment flows: * How do you prevent prompt injection from triggering unauthorized purchases? * Are you using virtual cards? Manual approval? Budget caps? * Would you want an external gateway that enforces limits the agent can't override? Curious what setups people have figured out.
2026-03-02T18:35:58
https://www.reddit.com/r/LocalLLaMA/comments/1rj12me/how_are_you_handling_spending_controls_for_your/
Professional_Cod9487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj12me
false
null
t3_1rj12me
/r/LocalLLaMA/comments/1rj12me/how_are_you_handling_spending_controls_for_your/
false
false
self
1
null
Parameter Configuration for Knowledge Distill to Qwen3.5 model.
1
Hi everyone, I’m trying to add a new reasoning skill to Qwen3.5-27B via LoRA fine-tuning, but I’m running into issues. The base model has very strong coding and reasoning abilities. However, after fine-tuning on my dataset, it seems to completely forget its general capabilities.

First setup:

• LoRA rank: 64
• LoRA alpha: 128
• Learning rate: 1e-4
• Dataset size: 3,000 samples
• Epochs: 1

This caused catastrophic forgetting — it lost its original abilities completely. It answers in the training dataset's response format, whatever the question is.

Second setup:

• LoRA rank: 16
• LoRA alpha: 32
• Learning rate: 1e-5
• Epochs: 1

With this configuration, the model seems to retain its original behavior, but for the trained task it never follows the specific reasoning steps in the dataset.

I’m trying to teach the model to correct its reasoning steps for a specific task without degrading its general abilities on any benchmark.

My questions:

1. Roughly how much data is typically needed to shift reasoning behavior for a specific task?
2. How should I think about choosing learning rate and LoRA rank for this?
3. What’s the best way to avoid catastrophic forgetting? Should I mix in general-domain data? If so, what data and in what proportion?
4. Is SFT with LoRA the correct way to do this?

Any advice or references would be greatly appreciated 🙏
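In case it helps responders: here's the kind of middle-ground config I'm considering next, written as an HF PEFT sketch (the values are guesses between my two setups, which is exactly what I'm asking about, not a verified recipe):

```python
# A middle ground between the rank-64/1e-4 run that forgot everything
# and the rank-16/1e-5 run that never learned the task.
from peft import LoraConfig

lora_cfg = LoraConfig(
    r=32,                  # between the 16 that undershot and the 64 that overfit
    lora_alpha=32,         # alpha == r keeps the effective scaling at 1.0
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
# Idea: pair with lr ~2e-5 (cosine decay), 2-3 epochs on the 3k samples,
# and mix in ~20-30% general instruction data against forgetting.
```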
2026-03-02T18:35:14
https://www.reddit.com/r/LocalLLaMA/comments/1rj11vb/parameter_configuration_for_knowledge_distill_to/
Mysterious_Art_3211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj11vb
false
null
t3_1rj11vb
/r/LocalLLaMA/comments/1rj11vb/parameter_configuration_for_knowledge_distill_to/
false
false
self
1
null
In search of getting started guide for Strix Halo
1
[removed]
2026-03-02T18:30:33
https://www.reddit.com/r/LocalLLaMA/comments/1rj0x6g/in_search_of_getting_started_guide_for_strix_halo/
WhatWouldVaderDo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj0x6g
false
null
t3_1rj0x6g
/r/LocalLLaMA/comments/1rj0x6g/in_search_of_getting_started_guide_for_strix_halo/
false
false
self
1
null
Why are people so quick to say closed frontier models are benchmaxxed while they gulp this down without a second thought?
1
I really want to understand these absurd benchmarks, of the Qwen models specifically.
2026-03-02T18:20:28
https://i.redd.it/4qqdcsy1comg1.jpeg
Independent-Ruin-376
i.redd.it
1970-01-01T00:00:00
0
{}
1rj0mxt
false
null
t3_1rj0mxt
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/
false
false
https://preview.redd.it/…f917179d69a4a347
1
{'images': [{'source': {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?auto=webp&s=c76c7822d107497834ac80a3e8987f41439be520', 'width': 1080, 'height': 1710}, 'resolutions': [{'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=108&crop=smart&auto=webp&s=d4e574fae911e8b05cefe010968489ad38c5eb6e', 'width': 108, 'height': 171}, {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=216&crop=smart&auto=webp&s=93706ab9dd4a6d1160f688ba7d2149aa8d54f1e7', 'width': 216, 'height': 342}, {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=320&crop=smart&auto=webp&s=ac21a2484a115d872df4fc58edb8933276504f90', 'width': 320, 'height': 506}, {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=640&crop=smart&auto=webp&s=448903f13dd0a6bf65df6a821c2152ddff61e3f2', 'width': 640, 'height': 1013}, {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=960&crop=smart&auto=webp&s=ee18426f5b811d5ec8b0b079bad25bed18b569c7', 'width': 960, 'height': 1520}, {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=1080&crop=smart&auto=webp&s=a46417b84603582e686889b4e6d1b994eb899ec3', 'width': 1080, 'height': 1710}], 'variants': {}, 'id': '4qqdcsy1comg1'}], 'enabled': True}
Qwen3.5 2b, 4b and 9b tested on Raspberry Pi5
1
Tested on Raspberry Pi 5, 8GB and 16GB variants (16GB with SSD), all with the vision encoder enabled, 16k context, and llama.cpp with some optimisations for ARM/Pi. Overall I'm impressed:

Qwen3.5-2b, 4-bit quant: I'm getting a constant **5-6t/s** on both Raspberries, time to first token is fast (a few seconds on short prompts), and it works great for image recognition etc. (takes up to 30 seconds to process a \~150kB image).

Qwen3.5-4b, 4-bit quant: **4-5t/s**. This one is a great choice for the 8GB Pi imo; preliminary results are much better than Qwen3-VL-4b.

Qwen3.5-9b: worse results than 2-bit quants of Qwen3.5 a3b, so this model doesn't make much sense for the Pi. Either go with 4-bit for the 8GB one or go with the MoE (a3b) for the 16GB one. On the 16GB Pi and a3b you can get up to 3.5t/s, which is great given how powerful this model is.
2026-03-02T18:19:34
https://v.redd.it/hzihay2laomg1
jslominski
v.redd.it
1970-01-01T00:00:00
0
{}
1rj0m27
false
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/hzihay2laomg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'width': 978, 'scrubber_media_url': 'https://v.redd.it/hzihay2laomg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/hzihay2laomg1/DASHPlaylist.mpd?a=1775067618%2COWNjYTFhNWU5Y2M5MGNiNjFlZjA0NWRjZDM2NzIwMTRiNjAyODQxNzY5NWYxODE4OGYyNGNkYWQwY2U0ZDY0YQ%3D%3D&v=1&f=sd', 'duration': 43, 'hls_url': 'https://v.redd.it/hzihay2laomg1/HLSPlaylist.m3u8?a=1775067618%2CNWZhYTVmZTI0YWI3ZWVmNWVhNTQ4ZDM1NjRjZmE2YWM2ODEwYWNiZTE2OGUxM2M0NmM4ZTdkM2FkYmQzODk2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rj0m27
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/
false
false
https://external-preview…764aac95883c0536
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?format=pjpg&auto=webp&s=5d223844688f948fdcc378eb59afc0048c64ac2c', 'width': 1264, 'height': 930}, 'resolutions': [{'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=108&crop=smart&format=pjpg&auto=webp&s=6fc29a4d989aa3bcb8495a4209efbd9f4b6407bb', 'width': 108, 'height': 79}, {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=216&crop=smart&format=pjpg&auto=webp&s=67701187988ac868084abed97502ff6095999d87', 'width': 216, 'height': 158}, {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=320&crop=smart&format=pjpg&auto=webp&s=da444f38a1179d485dd3038b96be7d575e72f831', 'width': 320, 'height': 235}, {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=640&crop=smart&format=pjpg&auto=webp&s=2245d3f40d40f16d22110c801b64c627ca00c375', 'width': 640, 'height': 470}, {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=960&crop=smart&format=pjpg&auto=webp&s=d42f66bfe3811ac68afb0bb5c5250c819ad02846', 'width': 960, 'height': 706}, {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7d439757f1b48faa6b7989bfe2dfeca75feb3f90', 'width': 1080, 'height': 794}], 'variants': {}, 'id': 'cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_'}], 'enabled': False}
Best Compatible & Suitable LocalLLM Model Suggestion
1
Hi dudes, I ran the three models shown below on a 5060 Ti (16GB VRAM), 5600X, and 32GB DDR4 RAM in LM Studio. You can see the settings in the attachment. Although I tried to keep the settings at the most ideal level possible (following Gemini's guidance), I get a very low tokens-per-second rate. Knowing this is related to insufficient VRAM, I would appreciate it if you could share your best advice and suggested settings for RAG & coding that would be most useful for my needs.

https://preview.redd.it/ssfximvj9omg1.png?width=457&format=png&auto=webp&s=4a8eb0034db69e70415a5d758aa4cd3e46b45bc3

https://preview.redd.it/ui00zj0aaomg1.png?width=740&format=png&auto=webp&s=6ffbba7f77ba3c6fe47ec1055527d811996faf49
2026-03-02T18:11:41
https://www.reddit.com/r/LocalLLaMA/comments/1rj0dyn/best_compatible_suitable_localllm_model_suggestion/
thesayk0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj0dyn
false
null
t3_1rj0dyn
/r/LocalLLaMA/comments/1rj0dyn/best_compatible_suitable_localllm_model_suggestion/
false
false
https://preview.redd.it/…11bde4b54580c111
1
null
Running LLMs on Huawei Ascend without rewriting every script that assumes CUDA
1
Been experimenting with running local LLMs on an Ascend 910B. The hardware is capable but the entire inference ecosystem (HuggingFace, vLLM, DeepSpeed) assumes torch.cuda everywhere. Every script dies immediately. Built a runtime shim that intercepts those calls and reroutes them to the NPU without touching the original code.

```
import ascend_compat
ascend_compat.activate()

# nothing else changes
model = model.cuda()  # routes to NPU
```

Also covers ROCm and Intel XPU with device routing. The LLM-specific part is the ecosystem patches for flash-attn, HuggingFace, and vLLM, since those have the most CUDA assumptions baked in. Has anyone here actually gotten vLLM or HuggingFace inference working on Ascend or ROCm without patching everything manually? Curious what the current state looks like for people running non-NVIDIA locally.

[https://github.com/JosephAhn23/cuda-morph](https://github.com/JosephAhn23/cuda-morph)
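For anyone curious what the interception pattern looks like at its simplest, here's a stripped-down toy version (the real shim covers far more entry points; this sketch assumes torch_npu has registered an "npu" device):

```python
# Toy version of the torch.cuda interception idea: swap out the .cuda()
# entry points so unmodified scripts land on the NPU instead of CUDA.
import torch

_orig_module_cuda = torch.nn.Module.cuda   # kept around in case we deactivate
_orig_tensor_cuda = torch.Tensor.cuda

def _module_cuda(self, device=None):
    return self.to("npu")  # assumes torch_npu registered the "npu" backend

def _tensor_cuda(self, device=None, non_blocking=False):
    return self.to("npu", non_blocking=non_blocking)

def activate():
    torch.nn.Module.cuda = _module_cuda
    torch.Tensor.cuda = _tensor_cuda
```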
2026-03-02T18:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/
AcanthocephalaNo2929
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj0dsf
false
null
t3_1rj0dsf
/r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?auto=webp&s=937965269bdb42ffe727ebbddb35c14d3e1ca72a', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=108&crop=smart&auto=webp&s=fffd6bc132cfa319c6b540ccdfba03f39e13afc3', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=216&crop=smart&auto=webp&s=6eb55e490b06df232f0ea196c5488ec2d037ad00', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=320&crop=smart&auto=webp&s=d2aa143423c362835beb04bcf4a39eaca9dce434', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=640&crop=smart&auto=webp&s=12c755dc85663b8dd0aee4d5f4dd2203be1028a7', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=960&crop=smart&auto=webp&s=bbb38987f015bc5639c93de18c6feff033a0d72e', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=1080&crop=smart&auto=webp&s=2030746b9e7c4570c339f40379ec5396edeaee2e', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM'}], 'enabled': False}
K2 (not 2.5) distillation - still worth it?..
1
I have been experimenting since November with trying to distill Kimi K2, known for its unique style. It has been a very uneven ride with loads of things learned, loads of infrastructure bugs filed (most fixed now), and some interesting results but nothing definitive. K2.5 is generally considered to have nerfed the style while increasing coding and agentic abilities. Moreover, the new Qwen3.5 wave is alleged to bring sheer power to smaller models that was not seen before. My question now is whether there still is an appetite for K2 distills mainly for the style/manners/etc., as opposed to the practical abilities on which the open source SOTA has moved on. And if the appetite does exist, what are the actual key points people might be interested in? The talking back? The nontrivial creative takes? Something else? I was mostly experimenting at the 1-2B scale (my one checkpoint published here got some VERY useful critical feedback). I understand the target that would interest most potential users here needs to be around the 30B mark, and I even have that target (Granite 4-h Small - Granite has a neutral original style so it takes very well to style distills; I tried Ministral 14B for a change, and it just outright resists). I just want to know whether there is any point in continuing the experiments, or maybe the new Qwens with some system prompting do all the "feisty nerding" local users want. (To make it clear, it's all a passion project. I don't expect to ever monetize anything. Just trying to gauge potential users/testers for the next step.)
2026-03-02T18:06:27
https://www.reddit.com/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/
ramendik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj08k1
false
null
t3_1rj08k1
/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/
false
false
self
1
null
Beginner's Guide to LLM Quantization: Run 70B Models on Your Gaming GPU
1
[removed]
2026-03-02T18:02:15
https://www.reddit.com/r/LocalLLaMA/comments/1rj048e/beginners_guide_to_llm_quantization_run_70b/
Actual_Wolf_2932
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj048e
false
null
t3_1rj048e
/r/LocalLLaMA/comments/1rj048e/beginners_guide_to_llm_quantization_run_70b/
false
false
self
1
null
What models to "understand" videos? (No transcripts)
1
There are apps like Get Poppy where you paste an Instagram Reel or YouTube link and they don’t just transcribe the audio — they also extract and understand the visual sequence of the video. This isn’t done with single 1-second frames, because that wouldn’t capture temporal context or visual continuity. It’s real video understanding. What models or techniques are they using to do this efficiently, and how are they making it profitable without paying premium rates like Gemini’s video pricing?
2026-03-02T17:56:22
https://www.reddit.com/r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/
jrhabana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rizy4r
false
null
t3_1rizy4r
/r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/
false
false
self
1
null
Speedup GLM on Strix Halo and llama.cpp
1
Hello! Would you have some tips / parameters for how to speed up the GLM models, especially pp, on Strix Halo and llama.cpp?

```
prompt eval time =    91.59 ms /   1 tokens (  91.59 ms per token,  10.92 tokens per second)
       eval time = 36265.55 ms / 426 tokens (  85.13 ms per token,  11.75 tokens per second)
```

In this case the tg is even faster than the pp 😱😭 The quality seems OK but they are so terribly slow.

My stack:

- ROCm nightlies
- Toolbox
- Headless Fedora, kernel 6.18.9
- Firmware from Jan 26

Thanks!
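For context, the knobs I keep seeing suggested for pp (and plan to test next; hedged, since gains seem to vary a lot by build) are larger batch/ubatch sizes plus flash attention:

```
./llama-server -m <glm gguf> -ngl 99 -fa on -b 4096 -ub 2048
```

(`-fa` takes on/off/auto on recent builds; older ones use it as a plain toggle.) Would love to hear what actually works on Strix Halo.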
2026-03-02T17:54:35
https://www.reddit.com/r/LocalLLaMA/comments/1rizw9u/speedup_glm_on_strix_halo_and_llamacpp/
Equivalent-Belt5489
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rizw9u
false
null
t3_1rizw9u
/r/LocalLLaMA/comments/1rizw9u/speedup_glm_on_strix_halo_and_llamacpp/
false
false
self
1
null
Running Qwen 3.5 0.8B locally in the browser on WebGPU w/ Transformers.js
2
Today, Qwen released their latest family of small multimodal models, Qwen 3.5 Small, available in a range of sizes (0.8B, 2B, 4B, and 9B parameters) and perfect for on-device applications. So, I built a demo running the smallest variant (0.8B) locally in the browser on WebGPU. The bottleneck is definitely the vision encoder, but I think it's pretty cool that it can run in the first place haha! Links for those interested: - Qwen 3.5 collection on Hugging Face: https://huggingface.co/collections/Qwen/qwen35 - Online WebGPU demo: https://huggingface.co/spaces/webml-community/Qwen3.5-0.8B-WebGPU
2026-03-02T17:46:44
https://v.redd.it/hta9o2i95omg1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1rizodv
false
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/hta9o2i95omg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/hta9o2i95omg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/hta9o2i95omg1/DASHPlaylist.mpd?a=1775065703%2CNjlkMzJhYTllNWZiNzQzZGQzMDViY2E1NDZlNzA3Njc0YWQ0ZjY5YjU5OTA2YTY0NjI5MTRlZWQ2MTMyOTA0Yw%3D%3D&v=1&f=sd', 'duration': 46, 'hls_url': 'https://v.redd.it/hta9o2i95omg1/HLSPlaylist.m3u8?a=1775065703%2CNmFjYmZiZWVjMDU3MjExMDM2N2FmOGRiZjE3NWJjNDdiNzFiMTAwOThiNDI2NzY2ZGQ0Mjg3MDJmOWNmZGQ5Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1rizodv
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/
false
false
https://external-preview…164c138e10047d5b
2
{'images': [{'source': {'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?format=pjpg&auto=webp&s=589398bbf124364395ad0c8ec041c6ea283ca0cd', 'width': 800, 'height': 800}, 'resolutions': [{'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?width=108&crop=smart&format=pjpg&auto=webp&s=df61cd93f23e94e6237dccf1c8950ea5e4f12527', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?width=216&crop=smart&format=pjpg&auto=webp&s=37a766a23565b66239f4664c02caa86c9b018d3e', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?width=320&crop=smart&format=pjpg&auto=webp&s=65ec70ff41736a7adf30bf672c7c0085edca21c2', 'width': 320, 'height': 320}, {'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b91c4db2bd69cfc5081038acfc8f7c440d6ba2a', 'width': 640, 'height': 640}], 'variants': {}, 'id': 'Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM'}], 'enabled': False}
Qwen 27B is a beast but not for agentic work.
1
After I tried it (even the base model), it really showed what it can do. I immediately fell in love. But after some time, the quality became too costly. Even though it shows great comprehension and can follow instructions well, it becomes unusable if I need it to work on a similar context with multiple queries: it recalculates every request even if the context is 90%+ identical between them. At longer context I might as well be using a bigger model with wider instructions in RAM, since recalculating wastes so much time. I found a reported bug on llama.cpp, but updating did not solve the issue for me. My assumption is that the context length outgrows what would fit on my hardware without SWA (sliding window attention), and the SWA cache is what forces the recalculation, but that is just my theory.
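If the SWA theory is right, one flag worth testing is llama-server's `--swa-full`, which keeps a full-size cache for the sliding-window layers so earlier context can be reused instead of reprocessed. The trade-off is more memory, which may be exactly what my hardware lacks (hedged: I haven't confirmed this fixes it):

```
./llama-server -m <qwen3.5-27b gguf> -ngl 99 --swa-full
```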
2026-03-02T17:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/
kaisurniwurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rizlkn
false
null
t3_1rizlkn
/r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/
false
false
self
1
null
qwen3.5-0.8b Released Today speed is insane 157TK/sec
1
https://reddit.com/link/1rizjco/video/395i9x2s4omg1/player I'm on an old machine: Ryzen 9 5950X, 64GB DDR4-3400, GeForce 3070. This is the basic, bare-minimum 0.8B model that came out today.
2026-03-02T17:41:41
https://www.reddit.com/r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/
PhotographerUSA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rizjco
false
null
t3_1rizjco
/r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/
false
false
self
1
null
Qwen3.5 9B (FP16) vs 27B (FP8) (have 64GB unified M1 Max memory)
1
[https://modelscope.cn/models/Qwen/Qwen3.5-9B](https://modelscope.cn/models/Qwen/Qwen3.5-9B) [https://modelscope.cn/models/Qwen/Qwen3.5-27B-FP8](https://modelscope.cn/models/Qwen/Qwen3.5-27B-FP8) These 2 models present the optimal sizes for use on a 64GB system. Are there any directly comparable results that we have? (Or am I missing something?) Also, dumb question, but the base 27B is FP16, right?
2026-03-02T17:32:26
https://www.reddit.com/r/LocalLLaMA/comments/1riz9zz/qwen35_9b_fp16_vs_27b_fp8_have_64gb_unified_m1/
weight_matrix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riz9zz
false
null
t3_1riz9zz
/r/LocalLLaMA/comments/1riz9zz/qwen35_9b_fp16_vs_27b_fp8_have_64gb_unified_m1/
false
false
self
1
null
What if a small AI decided what your LLM keeps in memory, instead of dumb heuristics throwing away tokens? I wrote a whitepaper, need a collaborator.
1
You load 100K tokens into your model. Behind the scenes, the KV-cache is either blowing up your VRAM or some heuristic is silently deleting tokens it thinks you don't need. Spoiler: it often deletes the wrong ones. **The problem with current approaches (H2O, ScissorHands, StreamingLLM):** they evict tokens based on past attention patterns. They literally cannot anticipate what the model will need next. And once a token is gone, it's gone. **Hippocampus** is a small SSM (200-500M params, about 4% overhead on a 7B model) that plugs into any frozen LLM and makes one simple decision for each chunk of context: **keep it or offload it.** No retraining of the base model. No compression. No synthetic tokens injected into the cache. The host model sees only real, unmodified KV-pairs, just fewer of them, because the controller filtered out what's not currently needed. What makes it different from just "smarter eviction": → **It knows what you asked.** The controller is conditioned on your prompt. If you ask "summarize chapter 3", it knows to keep chapter 3. → **It knows what the model is thinking.** It reads the host's hidden states during generation to track evolving needs. → **It doesn't permanently delete anything.** Evicted segments go to CPU RAM. If they become relevant later, they come back. → **It finds natural boundaries.** Learned semantic segmentation instead of chopping context into fixed windows. Concrete example: 100K context, 30% retention means your LLM runs attention on 30K tokens instead of 100K. Roughly 3.3x less compute per layer. And if the controller is unsure, it just keeps more. Worst case you're back to standard inference. I wrote a full whitepaper (12 pages, v0.3) covering architecture, training, complexity, experiments, and ablations. I have compute for the PoC. What I need is someone who's comfortable in PyTorch and knows Transformer internals to co-build the proof of concept. Initial validation on Qwen3-4B (int4) for fast iteration, then scaling to Qwen3-8B, Gemma 3 12B, and Llama 3.1 8B if results hold. 📄 Whitepaper: [https://www.notion.so/hippocampus\_whitepaper\_v3-317ea74dabf28043b682f9ab8b7a346c?source=copy\_link](https://www.notion.so/hippocampus_whitepaper_v3-317ea74dabf28043b682f9ab8b7a346c?source=copy_link) Discord : jaycekan
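To make the keep/offload loop concrete, here's a toy sketch of the decision described above (my own illustration, not the whitepaper's architecture; `controller`, `Segment`, and the 30% budget are all hypothetical stand-ins):

```python
# Toy sketch: score context segments against the prompt and the host's
# current hidden state, keep the top fraction on GPU, park the rest in CPU RAM.
from dataclasses import dataclass
import torch

gpu = "cuda" if torch.cuda.is_available() else "cpu"  # CPU fallback for the demo

@dataclass
class Segment:
    summary: torch.Tensor  # pooled representation of the segment
    kv: torch.Tensor       # stand-in for this segment's KV-cache slice

def filter_kv(segments, controller, query_emb, hidden, budget=0.3):
    scores = torch.tensor(
        [float(controller(query_emb, hidden, s.summary)) for s in segments]
    )
    k = max(1, int(len(segments) * budget))
    keep = set(scores.topk(k).indices.tolist())
    for i, s in enumerate(segments):
        # evicted segments move to CPU RAM, not deleted -- they can come back
        s.kv = s.kv.to(gpu if i in keep else "cpu")
    return [segments[i] for i in sorted(keep)]

# tiny smoke test with a dot-product "controller" in place of the SSM
segs = [Segment(torch.randn(8), torch.zeros(1)) for _ in range(10)]
ctrl = lambda q, h, summ: q @ summ
kept = filter_kv(segs, ctrl, torch.randn(8), None)
print(len(kept))  # 3 of 10 segments stay resident
```

The point is just the shape: the host model only ever attends over the real, unmodified KV-pairs that survive the filter, and anything parked in CPU RAM stays recoverable if it becomes relevant later.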
2026-03-02T17:30:35
https://www.reddit.com/r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/
Inside-Position-668
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riz852
false
null
t3_1riz852
/r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?auto=webp&s=e034aa19f3da14dd6602c7cc4d0d4b04e2f663b7', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=108&crop=smart&auto=webp&s=ebb60ecda42d2ab4f3a061e285665477c3789baf', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=216&crop=smart&auto=webp&s=f18ba6c4008d89ccae20a88936bb2d1a6d313d2a', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=320&crop=smart&auto=webp&s=3e7daf0d635215c3232a198af2bb766c214101f9', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=640&crop=smart&auto=webp&s=b49485158e566bc015ce56fe707ce521095787e8', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=960&crop=smart&auto=webp&s=83f04e625a59428f3f50d9d13ba15758fb030c44', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=1080&crop=smart&auto=webp&s=a33fb3cc10ece4c99f7426c165bfc7ee7a82bcd5', 'width': 1080, 'height': 567}], 'variants': {}, 'id': '_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE'}], 'enabled': False}
unsloth/Qwen3.5-9B-GGUF:Q8_0 failing on Ollama
1
I just installed unsloth/Qwen3.5-9B-GGUF:Q8\_0 via OpenWebUI using `ollama run` [hf.co/unsloth/Qwen3.5-9B-GGUF:Q8\_0](http://hf.co/unsloth/Qwen3.5-9B-GGUF:Q8_0), but now my requests are failing. This is the first time I am downloading from HF via OpenWebUI; I usually use models listed on the Ollama website.

`500: Ollama: 500, message='Internal Server Error', url='http://localhost:11434/api/chat'`

Thanks in advance for the help.
2026-03-02T17:29:51
https://www.reddit.com/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/
callmedevilthebad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riz7dv
false
null
t3_1riz7dv
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/
false
false
self
1
null
QWEN3.5: 397B-A17B 1-bit quantization (UD-TQ1_0) vs 27B 4-bit quantization (UD-Q4_K_XL)
1
I'm thinking of replacing my RTX 5090 FE with an RTX PRO 6000 if the former is better.
2026-03-02T17:22:58
https://www.reddit.com/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/
hurryman2212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riz0db
false
null
t3_1riz0db
/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/
false
false
self
1
null
Axe - a precision agentic coder. large codebases. zero bloat. terminal-native. precise retrieval. powerful inference. open-sourced.
1
we built axe because we were tired of coding tools optimized for demo videos instead of production codebases.

the core problem: most agents (including claude code, codex, etc.) take the brute force approach — dump everything into context and hope the LLM figures it out. that's fine for a 500-line side project. it falls apart completely when you're navigating a 100k+ line production codebase where a wrong change costs real downtime.

**what we built instead: axe-dig**

5-layer retrieval that extracts exactly what matters:

Layer 5: Program Dependence → "What affects line 42?"
Layer 4: Data Flow → "Where does this value go?"
Layer 3: Control Flow → "How complex is this?"
Layer 2: Call Graph → "Who calls this function?"
Layer 1: AST → "What functions exist?"

when you ask about a function you get: its signature, forward call graph (what it calls), backward call graph (who calls it), control flow complexity, data flow, and impact analysis.

the difference in token efficiency is pretty dramatic in practice:

|Scenario|Raw tokens|axe-dig tokens|Savings|
|:-|:-|:-|:-|
|Function + callees|21,271|175|99%|
|Codebase overview (26 files)|103,901|11,664|89%|
|Deep call chain (7 files)|53,474|2,667|95%|

important caveat: this isn't about being cheap on tokens. when you're tracing a complex bug through seven layers axe-dig will pull in 150k tokens if that's what correctness requires. the point is relevant tokens, not fewer tokens.

**why this matters especially for local**

this was actually the original design constraint. we run bodega — a local AI stack on apple silicon — and local LLMs have real limitations: slower prefill, smaller context windows, no cloud to throw money at. you can't afford to waste context on irrelevant code. precision retrieval wasn't a nice-to-have, it was a survival requirement. the result is it works well with both local and cloud models because precision benefits everyone.

**how does axe search**

traditional search finds syntax. axe-dig finds behavior.

```
# finds get_user_profile() because it calls redis.get() + redis.setex()
# with TTL parameters, called by functions doing expensive DB queries
# even though it doesn't mention "memoize" or "TTL" anywhere
chop semantic search "memoize expensive computations with TTL expiration"
```

every function gets embedded with signature, call graphs, complexity metrics, data flow patterns, and dependencies.

**shell integration**

`Ctrl+X` toggles between axe and your normal shell. no context switching, no juggling terminals.

**local model performance**

tested with our own `blackbird-she-doesnt-refuse-21b` running on M1 Max 64GB — subagent spawning, parallel task execution, full agentic workflows. precision retrieval is why even a local 21B can handle complex codebases without melting. and yeah, it works with closed-source llms too; just configure the yaml.

**what's coming**

* interactive codebase dashboard (dependency graphs, dead code detection, execution trace visualization)
* runtime execution tracing — see exact values that flowed through each function when a test fails
* monorepo factoring (been using this internally for weeks)
* language migration (Python → TS, JS → Go etc with semantic preservation not just transpilation)

**install**

```
uv pip install axe-cli
cd /path/to/your/project
axe
```

indexes your codebase on first run (30-60 seconds). instant after that.
open source: [https://github.com/SRSWTI/axe](https://github.com/SRSWTI/axe)

models on HF if you want to run the full local stack: [https://huggingface.co/srswti](https://huggingface.co/srswti); you can run these bodega models with the Bodega inference engine or on your mlx server as well.

happy to get into the axe-dig architecture, the approach, or how the call graph extraction works. ask anything.
2026-03-02T17:12:44
https://v.redd.it/ljdncgwnznmg1
EmbarrassedAsk2887
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/
1970-01-01T00:00:00
0
{}
1riypvk
false
null
t3_1riypvk
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/
false
false
https://external-preview…d31077e1591fd478
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?format=pjpg&auto=webp&s=be34c295d1909fe64b1958538a74b8ccd67d5dff', 'width': 2226, 'height': 1440}, 'resolutions': [{'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=108&crop=smart&format=pjpg&auto=webp&s=3d7bb4e1899d6e8d6942d0ba8929c97639b666aa', 'width': 108, 'height': 69}, {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=216&crop=smart&format=pjpg&auto=webp&s=44b8499a5619b1b5393a202f3b279f5551d35427', 'width': 216, 'height': 139}, {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=320&crop=smart&format=pjpg&auto=webp&s=8463d1dd6faafda91a521d8ac8f6ca61650d21b8', 'width': 320, 'height': 207}, {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=640&crop=smart&format=pjpg&auto=webp&s=d68a50d681a652f79410ab44d482d6bd3632e870', 'width': 640, 'height': 414}, {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=960&crop=smart&format=pjpg&auto=webp&s=44f8eb74690b6675650a7bedb01ae60006de4487', 'width': 960, 'height': 621}, {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c7356afd1f5354c221d2a880bc07702d6b94c4ff', 'width': 1080, 'height': 698}], 'variants': {}, 'id': 'enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL'}], 'enabled': False}
TP2 Framework Desktop cyankiwi/Qwen3.5-122B-A10B-AWQ-4bit llama-benchy results
1
# Qwen3.5-122B-A10B-AWQ-4bit Benchmark Results

Hardware: Framework Desktop motherboard, 128GB (per node)

Model: cyankiwi/Qwen3.5-122B-A10B-AWQ-4bit

Network: Mellanox ConnectX-3 MCX311A-XCAT CX311A 10GbE SFP+ over RoCE v1

# 1x Framework Desktop 128GB (TP1)

|Test|t/s (total)|t/s (req)|Peak t/s|Peak t/s (req)|TTFR (ms)|Est PPT (ms)|E2E TTFT (ms)|
|:-|:-|:-|:-|:-|:-|:-|:-|
|pp2048 (c1)|593.07 ± 15.42|593.07 ± 15.42|—|—|3,198.66 ± 65.24|3,196.34 ± 65.24|3,198.71 ± 65.25|
|tg32 (c1)|9.51 ± 0.04|9.51 ± 0.04|10.00 ± 0.00|10.00 ± 0.00|—|—|—|
|pp2048 (c2)|597.40 ± 30.29|344.19 ± 106.61|—|—|5,711.57 ± 1,142.57|5,709.25 ± 1,142.57|5,711.61 ± 1,142.57|
|tg32 (c2)|13.98 ± 3.62|7.50 ± 1.38|17.33 ± 0.94|8.67 ± 0.47|—|—|—|
|pp2048 (c4)|613.07 ± 4.59|223.44 ± 156.59|—|—|10,706.74 ± 3,334.80|10,704.43 ± 3,334.80|10,706.77 ± 3,334.79|
|tg32 (c4)|15.66 ± 9.65|5.87 ± 1.71|30.67 ± 3.77|7.67 ± 0.94|—|—|—|
|pp2048 @ d2048 (c1)|547.70 ± 2.21|547.70 ± 2.21|—|—|6,838.02 ± 193.75|6,835.70 ± 193.75|6,838.07 ± 193.76|
|tg32 @ d2048 (c1)|9.46 ± 0.01|9.46 ± 0.01|10.00 ± 0.00|10.00 ± 0.00|—|—|—|
|pp2048 @ d2048 (c2)|543.17 ± 6.82|312.42 ± 95.92|—|—|12,817.79 ± 2,543.78|12,815.48 ± 2,543.78|12,817.82 ± 2,543.77|
|tg32 @ d2048 (c2)|12.70 ± 4.78|7.10 ± 1.85|17.33 ± 0.94|8.67 ± 0.47|—|—|—|
|pp2048 @ d2048 (c4)|546.01 ± 2.97|211.20 ± 107.85|—|—|20,432.34 ± 6,554.08|20,430.02 ± 6,554.08|20,432.36 ± 6,554.07|
|tg32 @ d2048 (c4)|6.58 ± 1.23|3.85 ± 2.13|29.33 ± 1.89|7.33 ± 0.47|—|—|—|
|pp2048 @ d4096 (c1)|485.97 ± 2.88|485.97 ± 2.88|—|—|11,470.46 ± 187.57|11,468.15 ± 187.57|11,470.51 ± 187.57|
|tg32 @ d4096 (c1)|9.38 ± 0.01|9.38 ± 0.01|10.00 ± 0.00|10.00 ± 0.00|—|—|—|
|pp2048 @ d4096 (c2)|486.93 ± 1.82|361.95 ± 115.94|—|—|17,223.43 ± 5,679.67|17,221.11 ± 5,679.67|17,223.46 ± 5,679.66|
|tg32 @ d4096 (c2)|3.97 ± 0.02|4.64 ± 2.65|16.00 ± 0.00|8.00 ± 0.00|—|—|—|
|pp2048 @ d4096 (c4)|483.04 ± 3.34|201.72 ± 114.07|—|—|34,696.94 ± 12,975.95|34,694.63 ± 12,975.95|34,696.96 ± 12,975.94|
|tg32 @ d4096 (c4)|3.40 ± 0.23|3.55 ± 2.35|28.00 ± 0.00|7.00 ± 0.00|—|—|—|

# 2x Framework Desktop 128GB (TP2)

|Test|t/s (total)|t/s (req)|Peak t/s|Peak t/s (req)|TTFR (ms)|Est PPT (ms)|E2E TTFT (ms)|
|:-|:-|:-|:-|:-|:-|:-|:-|
|pp2048 (c1)|732.49 ± 5.98|732.49 ± 5.98|—|—|2,561.13 ± 64.18|2,559.70 ± 64.18|2,561.17 ± 64.18|
|tg32 (c1)|16.88 ± 0.08|16.88 ± 0.08|17.33 ± 0.47|17.33 ± 0.47|—|—|—|
|pp2048 (c2)|710.66 ± 18.74|535.16 ± 187.67|—|—|3,915.74 ± 1,309.20|3,914.31 ± 1,309.20|3,915.77 ± 1,309.19|
|tg32 (c2)|12.42 ± 1.07|9.57 ± 3.43|28.00 ± 0.00|14.00 ± 0.00|—|—|—|
|pp2048 (c4)|776.12 ± 6.35|354.32 ± 215.80|—|—|6,689.79 ± 2,569.70|6,688.36 ± 2,569.70|6,689.82 ± 2,569.69|
|tg32 (c4)|12.92 ± 0.22|7.14 ± 3.03|52.00 ± 0.00|13.00 ± 0.00|—|—|—|
|pp2048 @ d2048 (c1)|686.70 ± 0.91|686.70 ± 0.91|—|—|5,472.01 ± 105.02|5,470.58 ± 105.02|5,472.04 ± 105.02|
|tg32 @ d2048 (c1)|16.87 ± 0.02|16.87 ± 0.02|17.00 ± 0.00|17.00 ± 0.00|—|—|—|
|pp2048 @ d2048 (c2)|727.89 ± 2.58|424.89 ± 63.64|—|—|9,083.38 ± 1,295.27|9,081.95 ± 1,295.27|9,083.41 ± 1,295.26|
|tg32 @ d2048 (c2)|12.74 ± 0.13|10.03 ± 3.58|28.00 ± 0.00|14.00 ± 0.00|—|—|—|
|pp2048 @ d2048 (c4)|744.57 ± 0.62|295.20 ± 118.53|—|—|14,480.80 ± 4,734.42|14,479.36 ± 4,734.42|14,480.82 ± 4,734.42|
|tg32 @ d2048 (c4)|8.25 ± 0.05|5.68 ± 3.64|48.00 ± 0.00|12.08 ± 0.28|—|—|—|
|pp2048 @ d4096 (c1)|661.41 ± 10.10|661.41 ± 10.10|—|—|8,423.04 ± 176.56|8,421.61 ± 176.56|8,423.10 ± 176.59|
|tg32 @ d4096 (c1)|16.64 ± 0.04|16.64 ± 0.04|17.00 ± 0.00|17.00 ± 0.00|—|—|—|
|pp2048 @ d4096 (c2)|640.81 ± 23.80|405.65 ± 87.51|—|—|14,258.18 ± 3,057.93|14,256.75 ± 3,057.93|14,258.22 ± 3,057.94|
|tg32 @ d4096 (c2)|7.12 ± 0.54|7.72 ± 4.43|28.00 ± 0.00|14.00 ± 0.00|—|—|—|

A single Framework is marginally usable if you let it code overnight. For reference, llama.cpp: pp2048 (c1) 224.56 ± 5.16, tg32 (c1) 22.06 ± 0.63.
2026-03-02T17:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1riyp47/tp2_framework_desktop/
MirecX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riyp47
false
null
t3_1riyp47
/r/LocalLLaMA/comments/1riyp47/tp2_framework_desktop/
false
false
self
1
null
Access to DGX H200 — Looking for best model to perform Distillation
1
Hi all, I have temporary research access to a DGX H200 cluster and want to use the compute meaningfully rather than waste cycles on random fine-tunes.

My current thinking:

• Start from Llama 3.1 70B or Mixtral 8x7B as teacher
• Distill into 7B/8B deployable student models
• Focus on domain specialization (finance / Indian financial corpora)
• Possibly explore coding assistant fine-tuning or structured reasoning distillation

Constraints:

• I can run multi-GPU distributed training (DeepSpeed/FSDP)
• I can generate synthetic instruction datasets at scale
• I care about making useful local models; this is also hobby tuning

Questions:

1. What research directions are currently underexplored in open-weight distillation?
2. Is logit-level distillation still competitive vs DPO/RLHF pipelines?
3. Any recommendations for large-scale high-quality finance datasets (public + structured)?
4. What evaluation frameworks do you trust beyond MMLU/HellaSwag for domain models?
5. If you had H200-class compute for \~X weeks, what experiment would you run?

I’m especially interested in:

• Multi-teacher distillation
• Tool-augmented distillation
• Domain grounding without catastrophic forgetting

Would appreciate serious suggestions.
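For concreteness on question 2, this is the baseline logit-level objective I'd start from (a standard Hinton-style KD sketch, not a claim about the best recipe; temperature and blend weight are illustrative):

```python
# Logit-level distillation: blend soft-label KL (teacher -> student) with
# the usual cross-entropy on ground-truth tokens.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the unsoftened case
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * soft + (1 - alpha) * hard
```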
2026-03-02T17:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1riyktj/access_to_dgx_h200_looking_for_best_model_to/
No-Yam9526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riyktj
false
null
t3_1riyktj
/r/LocalLLaMA/comments/1riyktj/access_to_dgx_h200_looking_for_best_model_to/
false
false
self
1
null
So I have no knowledge of LLMs
1
[removed]
2026-03-02T17:06:35
https://www.reddit.com/r/LocalLLaMA/comments/1riyjpi/so_i_have_no_knowledge_of_llms/
machinegunnedburger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riyjpi
false
null
t3_1riyjpi
/r/LocalLLaMA/comments/1riyjpi/so_i_have_no_knowledge_of_llms/
false
false
self
1
null
I am using Qwen AI model for OpenClaw and I thought this was free and local so why do I keep getting this error message: API rate limit reached. Please try again later.
1
Please help; I am new to OpenClaw.
2026-03-02T17:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/
utsavsarkar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riyi54
false
null
t3_1riyi54
/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/
false
false
self
1
null
Qwen3.5 Model Series - Thinking On/OFF: Does it Matter?
2
Hi, I've been testing Qwen3.5 models ranging from 2B to 122B. All configurations used Unsloth with LM Studio exclusively. Quantization-wise, the 2B through 9B/4B variants run at Q8, while the 122B uses MXFP4. Here is a summary of my observations: **1. Smaller Models (2B – 9B)** * **Thinking Mode Impact:** Activating Thinking ON has a **significant positive impact** on these models. As parameter count decreases, so does reasoning speed; smaller models spend significantly more time in the thinking phase. * **Reasoning Traces:** When reading traces from the 9B and 4B models, I frequently find that they generate the correct answer early (often within the first few lines) but continue analyzing irrelevant paths unnecessarily. * *Example:* In the Car Wash test, both managed to recommend driving after exhausting multiple options despite arriving at the conclusion earlier in their internal trace. The 9B quickly identified this ("Standard logic: You usually need a car for self-service"), yet continued evaluating walking options until late in generation. The 4B took longer but eventually corrected itself; the 2B failed entirely with or without thinking mode assistance. * **Context Recall:** Enabling Thinking Mode drastically improves context retention. The Qwen3 8B and 4B Instruct variants appear superior here, preserving recall quality without excessive token costs if used judiciously. * *Recommendation:* For smaller models, **enable Thinking Mode** to improve reliability over speed. **2. Larger Models (27B+)** * **Thinking Mode Impact:** I observed **no significant improvements** when turning Thinking ON for these models. Their inherent reasoning is sufficient to arrive at correct answers immediately. This holds true even for context recall. * **Variable Behavior:** Depending on the problem, larger models might take longer on "easy" tasks while spending less time (or less depth) on difficult ones, suggesting an inconsistent pattern or overconfidence. There is no clear heuristic yet for when to force extended thinking. * *Recommendation:* Disable Thinking Mode. The models appear capable of solving most problems without assistance. What are your observations so far? Have you experienced any differences for coding tasks? What about deep research and internet search?
2026-03-02T17:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riyfg2
false
null
t3_1riyfg2
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/
false
false
self
2
null
lmao
1
2026-03-02T16:54:36
https://i.redd.it/oslpxh0nwnmg1.png
itsArmanJr
i.redd.it
1970-01-01T00:00:00
0
{}
1riy7cw
false
null
t3_1riy7cw
/r/LocalLLaMA/comments/1riy7cw/lmao/
false
false
https://preview.redd.it/…e7f6ba49b0212d3b
1
{'images': [{'source': {'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?auto=webp&s=538dff3fd34b289f3507e046b512ffcc741fe6a9', 'width': 865, 'height': 629}, 'resolutions': [{'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?width=108&crop=smart&auto=webp&s=744b4bc7e2a67a4f1d8cae0badbcfe0f08bf2645', 'width': 108, 'height': 78}, {'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?width=216&crop=smart&auto=webp&s=5e613caeb63567a85b62c2e0c319653294452b04', 'width': 216, 'height': 157}, {'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?width=320&crop=smart&auto=webp&s=4e0d24ef6f5d32ecdd9bd9325e64a22abdd409e9', 'width': 320, 'height': 232}, {'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?width=640&crop=smart&auto=webp&s=2ce213772ecb0d096a41e5385f2191a4b660d2c0', 'width': 640, 'height': 465}], 'variants': {}, 'id': 'oslpxh0nwnmg1'}], 'enabled': True}
Qwen 3.5 Non-thinking Mode Benchmarks?
1
Has anybody run, or know of, a benchmark on the performance of non-thinking vs thinking mode with the Qwen 3.5 series? Very interested to see how much is being sacrificed for instant responses, as I use the 27B dense, and thinking sometimes takes quite a while at ~20 tps on my 3090. I find the non-thinking responses pretty good too, but it really depends on the context.
2026-03-02T16:53:10
https://www.reddit.com/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/
Embarrassed_Soup_279
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riy5x6
false
null
t3_1riy5x6
/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/
false
false
self
1
null
the data centers are being built for mass surveillance. none of it is gonna be used to scale or bring agi. hell, llms are just function aggregators. they can't even calculate boy math.
1
nobody is talking about this but the compute-to-revenue ratio on hyperscaler infra makes zero sense if the use case is just "better chatbot." you don't build exaflop-scale data centers to run inference on people asking for recipe substitutions. the numbers only work if you're doing something fundamentally more data-hungry. behavioral profiling at population scale is the only thesis that fits the capex.

and the agi hype is cope. llms are interpolation engines. they compress a training distribution into a weighted graph of token co-occurrences and call it reasoning. the "emergent capabilities" everyone loses their mind over are just phase transitions in memorization hitting at scale. stress-test any of these models for five minutes and they fail in ways no general intelligence would. can't do basic arithmetic reliably. hallucinate because they're sampling from a probability distribution, not retrieving ground truth. and errors compound — each token conditioned on prior tokens means early drift gets amplified exponentially downstream. there's no internal world model correcting trajectory. stochastic parrots with a better publicist.

the surveillance angle is just more straightforward. llms are extraordinarily good at extracting semantic meaning and latent intent from unstructured text at scale. pair that with models trained on scraped internet data: your posts, your searches, your messages, and ngl you have a system that can classify political sentiment, predict behavior, and map social graphs with resolution that would've been science fiction ten years ago. the data centers aren't being built for your autocomplete. they're being built to index the contents of human cognition and make it queryable. agi is the cover story. profiling is the product.

so the question i kept coming back to was what the actual alternative looks like. what does it take to run a full ai stack, speech to speech, chat inference, a browser with a local search indexer that never phones home, music, notes, the whole thing, entirely on the hardware people already own? if you actually go deep on how the memory is laid out and where the inference headroom is that nobody is exploiting, the excuse falls apart completely. configurable backends for every inference pipeline, llm, audio, vision, pixel acceleration, each with its own dynamic resource allocation based on what you're actually doing at that moment. runs on 8gb and scales to 512gb. no cloud, no data center, no profile being built on the back of your conversations.

things you're currently paying subscriptions for, summarizing emails, writing code, having a conversation that actually sounds human, none of that requires a data center. the math works on the laptop in your bag. the surveillance economy depends on you believing local is impossible. it isn't. it just requires someone caring enough to actually go figure it out.

curious if anyone here has gone deep on this. i have worked on something similar with my research lab. I open sourced the coding agents and distributed task schedulers we use internally at [github.com/SRSWTI](https://github.com/SRSWTI), models are on [huggingface.co/srswti](https://huggingface.co/srswti), and there are demos up if you want to see the speech-to-speech engine and TTS running locally: [speech-to-speech](https://youtu.be/8vOErJln9I0) · [TTS example](https://youtu.be/3eIpgfqM8gU).
2026-03-02T16:52:26
https://www.reddit.com/r/LocalLLaMA/comments/1riy56h/the_data_centers_are_being_built_for_mass/
EmbarrassedAsk2887
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riy56h
false
null
t3_1riy56h
/r/LocalLLaMA/comments/1riy56h/the_data_centers_are_being_built_for_mass/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg.jpeg?auto=webp&s=23bde8732db9e27921532ecb811e619a854d3450', 'width': 280, 'height': 280}, 'resolutions': [{'url': 'https://external-preview.redd.it/sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg.jpeg?width=108&crop=smart&auto=webp&s=f6b31b6dc32884579c5371a8916f285c6c110ccb', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg.jpeg?width=216&crop=smart&auto=webp&s=37d9e44ee77d98cdb9346bdcbaf0094bde7c07f5', 'width': 216, 'height': 216}], 'variants': {}, 'id': 'sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg'}], 'enabled': False}
New to local llm, which model to use with a 4090?
1
Hey everyone, total newcomer to local LLMs here. Just set up Ollama on a 4090/14900K and want to run a local LLM for agentic coding like OpenClaw and vibe coding with Claude Code. Given the 24GB VRAM limit and that I'm still figuring out context management, which model gives the best "out of the box" experience?

QwQ-32B (Q4): Better reasoning/intelligence?
Qwen2.5-Coder-32B (Q4): Better for actual code generation/fast iteration?

And what should I set context length at, just the default 32k, or something else? These models were just suggestions I found quickly.
2026-03-02T16:32:52
https://www.reddit.com/r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/
azndkflush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rixlj6
false
null
t3_1rixlj6
/r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/
false
false
self
1
null
~40× speedup and 90% VRAM reduction on vLLMs compared to FlashAttention by exploiting Grouped Query Attention symmetries
1
LLMs suffer on long contexts; they're memory- and throughput-limited by the GPU. We solved this. I built a Triton kernel that beats FlashAttention decode: up to 40x the speed, with 84–90% VRAM reduction enabling 2.0–4.0x longer context windows on the same hardware.

[https://github.com/leochlon/mezzanine/tree/main/mezzanine/kernels](https://github.com/leochlon/mezzanine/tree/main/mezzanine/kernels)

Here is how it works: Our new pipeline, SyDecode, moves away from the standard "PACK + dense" attention backends that waste memory and cycles re-organizing data. Instead, it implements a paged-native decode that evaluates Grouped Query Attention (GQA) logic directly on physical block tables. By exploiting the GQA symmetry, reusing loaded Key/Value blocks across multiple query heads jointly, we've achieved massive performance leaps on the models we quickly benchmarked against:

- 33.4x speedup on Qwen3-30B-A3B.
- 28.8x speedup on TinyLlama-1.1B.
- 18.8x speedup on Mistral-7B.

This is part of massive ongoing work on our symmetry-exploiting Triton kernels, and the paper will be up soon; until then, enjoy using it! The GitHub link above includes the kernel + benchmarking script; it works with any HuggingFace model as far as I'm aware, but I need to do more testing. Would love your thoughts, guys.
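For intuition, here is a minimal PyTorch sketch of the GQA symmetry being exploited (not the actual Triton kernel): each K/V head is loaded once and shared across its whole group of query heads via one batched matmul, instead of being re-read once per query head. Shapes and the single-token decode setup are illustrative assumptions.

```python
import torch

def gqa_decode(q, k, v, n_kv_heads):
    # q: (n_q_heads, d) single-token decode query
    # k, v: (n_kv_heads, seq, d) cached keys/values
    n_q_heads, d = q.shape
    group = n_q_heads // n_kv_heads      # query heads sharing each KV head
    q = q.view(n_kv_heads, group, d)     # fold the group dimension
    # One batched matmul per KV head: K/V are read once and reused across
    # all `group` query heads, instead of once per query head.
    scores = torch.einsum("hgd,hsd->hgs", q, k) / d**0.5
    attn = scores.softmax(dim=-1)
    out = torch.einsum("hgs,hsd->hgd", attn, v)
    return out.reshape(n_q_heads, d)
```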
2026-03-02T16:29:01
https://i.redd.it/q091u99ernmg1.png
Upset-Presentation28
i.redd.it
1970-01-01T00:00:00
0
{}
1rixhj9
false
null
t3_1rixhj9
/r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/
false
false
https://preview.redd.it/…597e2530c556e77e
1
{'images': [{'source': {'url': 'https://preview.redd.it/q091u99ernmg1.png?auto=webp&s=ba9e7d3f2eefb4b222b303467a94b3d8e1cd161f', 'width': 3410, 'height': 1870}, 'resolutions': [{'url': 'https://preview.redd.it/q091u99ernmg1.png?width=108&crop=smart&auto=webp&s=187218b25e0ef07b806360ae8f82e35b5225ac6d', 'width': 108, 'height': 59}, {'url': 'https://preview.redd.it/q091u99ernmg1.png?width=216&crop=smart&auto=webp&s=e51a74516136de4a6668b12b82246d55afe95f45', 'width': 216, 'height': 118}, {'url': 'https://preview.redd.it/q091u99ernmg1.png?width=320&crop=smart&auto=webp&s=019e706517019c957d6e79d7ecd18894f8b7260d', 'width': 320, 'height': 175}, {'url': 'https://preview.redd.it/q091u99ernmg1.png?width=640&crop=smart&auto=webp&s=37dd49bbbb92714971395a222b29dc0da09131be', 'width': 640, 'height': 350}, {'url': 'https://preview.redd.it/q091u99ernmg1.png?width=960&crop=smart&auto=webp&s=98ff7b6479189598ebad91ca9978f8b43927b804', 'width': 960, 'height': 526}, {'url': 'https://preview.redd.it/q091u99ernmg1.png?width=1080&crop=smart&auto=webp&s=274ff886b980b9f57427b80cd4665776c5d8dc0b', 'width': 1080, 'height': 592}], 'variants': {}, 'id': 'q091u99ernmg1'}], 'enabled': True}
Qwen3.5-122B Heretic GGUFs
1
https://huggingface.co/mradermacher/Qwen3.5-122B-A10B-heretic-GGUF

Not my GGUFs, just thought they were worth sharing. No more refusals!
2026-03-02T16:28:35
https://www.reddit.com/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/
durden111111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rixh53
false
null
t3_1rixh53
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?auto=webp&s=4a2adaaee080e90a56ce7f8778a7e5f619ec4f8d', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=108&crop=smart&auto=webp&s=3ab3cec81d8c5738f4cd6c1a91d1e35bbab2a30d', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=216&crop=smart&auto=webp&s=fbaccda416ec7d9383bffd5144862ed103dfba7f', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=320&crop=smart&auto=webp&s=78d3bc1a2be12e2928217be83cc4bf4cff29fbb3', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=640&crop=smart&auto=webp&s=7e730177b7719cda6c16d3057c1655737f1ae0b9', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=960&crop=smart&auto=webp&s=bc881996c045c1035b39e413b5df931108532176', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=1080&crop=smart&auto=webp&s=ffed5ef1bfeb4a6e74f8122b2db3ced0867084bc', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs'}], 'enabled': False}
Is Qwen3.5-9B enough for Agentic Coding?
1
In the coding section, the 9B model beats Qwen3-30B-A3B on all items, and beats Qwen3-Next-80B and GPT-OSS-20B on a few items. It also stays in the same range as Qwen3-Next-80B and GPT-OSS-20B on a few others. (If Qwen releases a 14B model in the future, surely it would beat GPT-OSS-120B too.)

So, as mentioned in the title, is the 9B model enough for agentic coding with tools like Opencode/Cline/Roocode/Kilocode/etc., to make decent-size/level apps/websites/games? Q8 quant + 128K-256K context + Q8 KV cache.

I'm asking this question for my laptop (8GB VRAM + 32GB RAM), though I'm getting a new rig this month.
2026-03-02T16:09:47
https://i.redd.it/bxh90z4gjnmg1.png
pmttyji
i.redd.it
1970-01-01T00:00:00
0
{}
1riwy9w
false
null
t3_1riwy9w
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/
false
false
https://preview.redd.it/…25dff5f1e6a04a2c
1
{'images': [{'source': {'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?auto=webp&s=12c335cdf5cf5d29de8b1b1bdb737db82f6a9088', 'width': 606, 'height': 529}, 'resolutions': [{'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?width=108&crop=smart&auto=webp&s=3f49f139534785b678b150d5f1ae737d8acfe839', 'width': 108, 'height': 94}, {'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?width=216&crop=smart&auto=webp&s=12b75218e966579d42e9ed710b413e07ed86d0a2', 'width': 216, 'height': 188}, {'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?width=320&crop=smart&auto=webp&s=6697e9989a7aa320c4b6d93e51ce28fe0bca5e40', 'width': 320, 'height': 279}], 'variants': {}, 'id': 'bxh90z4gjnmg1'}], 'enabled': True}
PSA: LM Studio's parser silently breaks Qwen3.5 tool calling and reasoning: a year of connected bug reports
1
I love LM Studio, but there have been bugs over its life that have made it difficult for me to completely make the move to a 90:10 local-model reliance with frontier models as advisory only. This morning, I filed 3 critical bugs and pulled together a report that collects a lot of issues over the last ~year that seem to have been posted only in isolation. This helps me personally and I thought it might be of use to the community. It's not always the models' fault: even with heavy usage of open-weights models through LM Studio, I only just learned how systemic tool usage issues are in its server parser.

# LM Studio's parser has a cluster of interacting bugs that silently break tool calling, corrupt reasoning output, and make models look worse than they are

## The bugs

### 1. Parser scans inside `<think>` blocks for tool call patterns ([#1592](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1592))

When a reasoning model (Qwen3.5, DeepSeek-R1, etc.) thinks about tool calling syntax inside its `<think>` block, LM Studio's parser treats those prose mentions as actual tool call attempts. The model writes "some models use `<function=...>` syntax" as part of its reasoning, and the parser tries to execute it.

This creates a recursive trap: the model reasons about tool calls → parser finds tool-call-shaped tokens in thinking → parse fails → error fed back to model → model reasons about the failure → mentions more tool call syntax → repeat forever. The model literally cannot debug a tool calling issue because describing the problem reproduces it. One model explicitly said "I'm getting caught in a loop where my thoughts about tool calling syntax are being interpreted as actual tool call markers" — and that sentence itself triggered the parser.

This was first reported as [#453](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/453) in February 2025 — over a year ago, still open.

**Workaround:** Disable reasoning (`{%- set enable_thinking = false %}`). Instantly fixes it — 20+ consecutive tool calls succeed.

### 2. Registering a second MCP server breaks tool call parsing for the first ([#1593](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1593))

This one is clean and deterministic. Tested with lfm2-24b-a2b at temperature=0.0:

- **Only KG server active:** Model correctly calls `search_nodes`, parser recognizes `<|tool_call_start|>` tokens, tool executes, results returned. Works perfectly.
- **Add webfetch server (don't even call it):** Model emits `<|tool_call_start|>[web_search(...)]<|tool_call_end|>` as **raw text** in the chat. The special tokens are no longer recognized. The tool is never executed.

The mere *registration* of a second MCP server — without calling it — changes how the parser handles the first server's tool calls. Same model, same prompt, same target server. Single variable changed.

**Workaround:** Only register the MCP server you need for each task. Impractical for agentic workflows.

### 3. Server-side `reasoning_content` / `content` split produces empty responses that report success

This one affects everyone using reasoning models via the API, whether you're using tool calling or not.

We sent a simple prompt to Qwen3.5-35b-a3b via `/v1/chat/completions` asking it to list XML tags used for reasoning. The server returned:

```json
{
  "content": "",
  "reasoning_content": "[3099 tokens of detailed deliberation]",
  "finish_reason": "stop"
}
```

The model did extensive work — 3099 tokens of reasoning — but got caught in a deliberation loop inside `<think>` and never produced output in the `content` field. The server returned `finish_reason: "stop"` with empty content. **It reported success.**

This means:

- **Every eval harness** checking `finish_reason == "stop"` silently accepts empty responses
- **Every agentic framework** propagates empty strings downstream
- **Every user** sees a blank response and concludes the model is broken
- **The actual reasoning is trapped** in `reasoning_content` — the model did real work that nobody sees unless they explicitly check that field

**This is server-side, not a UI bug.** We confirmed by inspecting the raw API response and the LM Studio server log. The `reasoning_content` / `content` split happens before the response reaches any client. (A defensive client-side check is sketched at the end of this post.)

### The interaction between these bugs

These aren't independent issues. They form a compound failure:

1. Reasoning model thinks about tool calling → **Bug 1** fires, parser finds false positives in thinking block
2. Multiple MCP servers registered → **Bug 2** fires, parser can't handle the combined tool namespace
3. Model gets confused, loops in reasoning → **Bug 3** fires, empty content reported as success
4. User/framework sees empty response, retries → Back to step 1

The root cause is the same across all three: **the parser has no content-type model**. It doesn't distinguish reasoning content from tool calls from regular assistant text. It scans the entire output stream with pattern matching and has no concept of boundaries, quoting, or escaping. The `</think>` tag should be a firewall. It isn't.

## What's already filed

| Issue | Filed | Status | Age |
|---|---|---|---|
| [#453](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/453) — Tool call blocks inside `<think>` tags not ignored | Feb 2025 | Open | **13 months** |
| [#827](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/827) — Qwen3 thinking tags break tool parsing | Aug 2025 | `needs-investigation`, 0 comments | 7 months |
| [#942](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/942) — gpt-oss Harmony format parsing | Aug 2025 | Open | 7 months |
| [#1358](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1358) — LFM2.5 tool call failures | Jan 2026 | Open | 2 months |
| [#1528](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1528) — Parallel tool calls fail with GLM | Feb 2026 | Open | 2 weeks |
| [#1541](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1541) — First MCP call works, subsequent don't | Feb 2026 | Open | 10 days |
| [#1589](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1589) — Qwen3.5 think tags break JSON output | Today | Open | Hours |
| **[#1592](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1592)** — Parser scans inside thinking blocks | Today | Open | New |
| **[#1593](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1593)** — Multi-server registration breaks parsing | Today | Open | New |

Thirteen months of isolated reports, starting with #453 in February 2025. Each person hits one facet, files a bug, disables reasoning or drops to one MCP server, and moves on. Nobody connected them because most people run one model with one server.

## Why this matters

If you've evaluated a reasoning model in LM Studio and it "failed to respond" or "gave empty answers" — check `reasoning_content`. The model may have done real work that was trapped by the server-side parser. The model isn't broken. The server is reporting success on empty output.

If you've tried MCP tool calling and it "doesn't work reliably" — check how many servers are registered. The tools may work perfectly in isolation and fail purely because another server exists in the config.

If you've seen models "loop forever" on tool calling tasks — check if reasoning is enabled. The model may be stuck in the recursive trap where thinking about tool calls triggers the parser, which triggers errors, which triggers more thinking about tool calls.

These aren't model problems. They're infrastructure problems that make models look unreliable when they're actually working correctly behind a broken parser.

## Setup that exposed this

I run an agentic orchestration framework (LAS) with 5+ MCP servers, multiple models (Qwen3.5, gpt-oss-20b, LFM2.5), reasoning enabled, and sustained multi-turn tool calling loops. This configuration stress-tests every parser boundary simultaneously, which is how the interaction between bugs became visible. Most chat-only usage would only hit one bug at a time — if at all.

Models tested: qwen3.5-35b-a3b, qwen3.5-27b, lfm2-24b-a2b, gpt-oss-20b. The bugs are model-agnostic — they're in LM Studio's parser, not in the models.
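Until bug 3 is fixed, a minimal client-side guard against the empty-content failure looks something like this, assuming LM Studio's default port (1234); the model id is an arbitrary placeholder, so adapt to your client library.

```python
import requests

def chat(messages, model="qwen3.5-35b-a3b", base_url="http://localhost:1234/v1"):
    """Call an OpenAI-compatible endpoint and surface reasoning trapped in reasoning_content."""
    resp = requests.post(f"{base_url}/chat/completions",
                         json={"model": model, "messages": messages},
                         timeout=600)
    resp.raise_for_status()
    choice = resp.json()["choices"][0]
    msg = choice["message"]
    content = msg.get("content") or ""
    if choice.get("finish_reason") == "stop" and not content.strip():
        # Bug 3: empty content reported as success; the model's work may be here.
        reasoning = msg.get("reasoning_content") or ""
        raise RuntimeError(f"Empty content with finish_reason=stop; "
                           f"{len(reasoning)} chars trapped in reasoning_content")
    return content
```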
2026-03-02T15:52:55
https://www.reddit.com/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/
One-Cheesecake389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riwhcf
false
null
t3_1riwhcf
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?auto=webp&s=56844cb7df169048f36825ae568455ac55ff2164', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=108&crop=smart&auto=webp&s=02774e33c16c8ab31647045995a8361b5083f716', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=216&crop=smart&auto=webp&s=c0fd8a65f70f71991c38de6b1ba30edcabb3e5d0', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=320&crop=smart&auto=webp&s=874dab3d5c547f5c0096fa5a4b1bdc3b126b82c5', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=640&crop=smart&auto=webp&s=cec5a3ceba84bdfc4e9d2d73bc99ec45cccba5b0', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=960&crop=smart&auto=webp&s=7ddb86ebeff7661b3a68de4646c3e98618c3b0ef', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=1080&crop=smart&auto=webp&s=f47f5f4c177b6f688b9be1cc20fa9e6cd26eba2e', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE'}], 'enabled': False}
Speculative decoding with Qwen3.5, is it working for anyone?
1
Has anyone gotten speculative decoding with Qwen3.5 to work yet, using the 0.8B as the draft model? Here's my command and the result I've been getting:

/llama.cpp/build/bin/llama-server -m /.cache/llama.cpp/Qwen3.5-397B-A17B-MXFP4_MOE-00001-of-00006.gguf -md .cache/llama.cpp/Qwen3.5-0.8B-Q8_0.gguf -c 64000 -cd 64000

srv load_model: initializing slots, n_slots = 4
common_speculative_is_compat: the target context does not support partial sequence removal
srv load_model: speculative decoding not supported by this context
2026-03-02T15:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1riwd56/speculative_decoding_with_qwen35_is_it_working/
Frequent-Slice-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riwd56
false
null
t3_1riwd56
/r/LocalLLaMA/comments/1riwd56/speculative_decoding_with_qwen35_is_it_working/
false
false
self
1
null
MCP co-location: STDIO (4–9ms, single client) vs HTTP (remote, multi-client). When do you actually need the latter?
1
MCP servers use STDIO for local/co-located setups — the host spawns the server as a subprocess, JSON-RPC over stdin/stdout. No network, no TLS. Latency is \~4–9ms, but you only get one client. HTTP/StreamableHTTP lets you run MCP servers remotely with multi-client support, but adds network latency and auth complexity. Curious how people are choosing in practice. Are you sticking with STDIO for everything, or running HTTP for remote access / team usage? When did co-location stop being enough?
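For concreteness, here is a minimal sketch of what the STDIO transport amounts to in Python: spawn the server as a subprocess and exchange newline-delimited JSON-RPC over its stdin/stdout. The server command is hypothetical, and a real MCP client performs the `initialize` handshake before calling tools.

```python
import itertools, json, subprocess

# Spawn the MCP server as a subprocess (command is hypothetical).
proc = subprocess.Popen(
    ["python", "my_mcp_server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
_ids = itertools.count(1)

def rpc(method, params):
    """One JSON-RPC round trip over stdin/stdout (newline-delimited)."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# A real client sends `initialize` (and the `initialized` notification) first.
print(rpc("tools/list", {}))
```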
2026-03-02T15:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/
hack_the_developer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riw6kd
false
null
t3_1riw6kd
/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/
false
false
self
1
null
Just saw it on the last page refresh: Qwen quantized models are now on Ollama
1
Pulling 4B and 9B for myself. 0.8B there for cell phones.
2026-03-02T15:36:46
https://ollama.com/library/qwen3.5
PlainBread
ollama.com
1970-01-01T00:00:00
0
{}
1riw1ml
false
null
t3_1riw1ml
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/
false
false
https://external-preview…3b65a3c803957fab
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM'}], 'enabled': False}
Qwen 3.5 2B is an OCR beast
1
It can read text from all angles and qualities (from clear scans to potato phone pics) and supports structured output. Previously I was using Ministral 3B and it was good but needed some image pre-processing to rotate images correctly for good results. I will continue to test more. I tried Qwen 3.5 0.8B but for some reason, the MRZ at the bottom of Passport or ID documents throws it in a loop repeating <<<< characters. What is your experience so far?
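For anyone who wants to try the structured-output side, here is a minimal sketch against an OpenAI-compatible vision endpoint. The port, model id, and schema fields are assumptions for illustration, not Qwen specifics.

```python
import base64, json, requests

img_b64 = base64.b64encode(open("passport.jpg", "rb").read()).decode()
schema = {"type": "object",
          "properties": {"surname": {"type": "string"},
                         "document_number": {"type": "string"},
                         "mrz": {"type": "string"}},
          "required": ["surname", "document_number", "mrz"]}

resp = requests.post("http://localhost:1234/v1/chat/completions", json={
    "model": "qwen3.5-2b",  # assumed model id
    "messages": [{"role": "user", "content": [
        {"type": "text", "text": "Extract the fields from this ID document."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
    ]}],
    # Constrain the output to the JSON schema above.
    "response_format": {"type": "json_schema",
                        "json_schema": {"name": "id_doc", "schema": schema}},
})
print(json.loads(resp.json()["choices"][0]["message"]["content"]))
```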
2026-03-02T15:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/
deadman87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rivzcl
false
null
t3_1rivzcl
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/
false
false
self
1
null
Does Qwen3.5 4B support thinking?
1
Does Qwen3.5 4B support thinking? When testing 9B it thinks by default; with 4B it doesn't, and adding the following to my API call doesn't do anything. I'm using LM Studio.

'extra_body' => [
    "chat_template_kwargs" => ["enable_thinking" => true],
]
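For comparison, the equivalent call with the Python openai client looks like this; `extra_body` is a real parameter of that client, but whether LM Studio's server actually honors `chat_template_kwargs` (as some OpenAI-compatible servers do) is exactly the open question here. Model id and port are assumptions.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="qwen3.5-4b",  # assumed model id
    messages=[{"role": "user", "content": "Is 9.11 > 9.9?"}],
    # Forwarded verbatim into the request body by the openai client.
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(resp.choices[0].message)
```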
2026-03-02T15:22:40
https://www.reddit.com/r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/
IvnN7Commander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rivo6f
false
null
t3_1rivo6f
/r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/
false
false
self
1
null
Visualizing All Qwen 3.5 vs Qwen 3 Benchmarks
1
I averaged out the official scores from today’s and last week's release pages to get a quick look at how the new models stack up. * **Purple/Blue/Cyan:** New Qwen3.5 models * **Orange/Yellow:** Older Qwen3 models The choice of Qwen3 models is simply based on which ones Qwen included in their new comparisons. The bars are sorted in the same order as they are listed in the legend, so if the colors are too difficult to parse, you can just compare the positions. Some bars are missing for the smaller models because data wasn't provided for every category, but this should give you a general gist of the performance differences!
2026-03-02T15:10:24
https://i.redd.it/f6p9a7nibnmg1.png
Jobus_
i.redd.it
1970-01-01T00:00:00
0
{}
1rivckt
false
null
t3_1rivckt
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/
false
false
https://preview.redd.it/…0263b8387879141f
1
{'images': [{'source': {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?auto=webp&s=df929e45bdc827cb7368d875f253bf5b373513e8', 'width': 2243, 'height': 1035}, 'resolutions': [{'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=108&crop=smart&auto=webp&s=d9c86e7cec5d32e90d22b2ddbdacf3f7d1bc3c86', 'width': 108, 'height': 49}, {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=216&crop=smart&auto=webp&s=d09e611cf5753ce5dfeb86ec98d2fc936cff981b', 'width': 216, 'height': 99}, {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=320&crop=smart&auto=webp&s=dc9716711e5abb5a5432355cf3dee5ae34cfe6c1', 'width': 320, 'height': 147}, {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=640&crop=smart&auto=webp&s=8e8c85778b822ccddd28312d83400b4aeff4839f', 'width': 640, 'height': 295}, {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=960&crop=smart&auto=webp&s=c2eb5c40400ba7d6e1f650f01072fc48c12a7aaf', 'width': 960, 'height': 442}, {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=1080&crop=smart&auto=webp&s=ecf32bd15bd0d7059f4ae2b95b7b3fe1e5d882bb', 'width': 1080, 'height': 498}], 'variants': {}, 'id': 'f6p9a7nibnmg1'}], 'enabled': True}
What's Possible with Video Now?
1
I've been feeding Qwen VL one frame at a time (usually 1 fps) to analyze video. It works well. But I realized today that I don't know if I can just give it a video clip. Does that work? I run on a Mac, if that matters.
2026-03-02T15:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/
zipzag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riv5kc
false
null
t3_1riv5kc
/r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/
false
false
self
1
null
Qwen 3.5 2B on Android
1
App: https://github.com/Vali-98/ChatterUI/releases/tag/v0.8.9-beta9

Note that this pre-release is very experimental.

Hardware: Poco F5, Snapdragon 7 Gen 2

---

I've been excited for Qwen 3.5's release, but it seems to be much slower compared to other models of similar size, likely due to some architecture difference. That said, low-context testing on some general knowledge seems decent, especially considering its size.
2026-03-02T15:01:20
https://v.redd.it/yui76dticnmg1
----Val----
v.redd.it
1970-01-01T00:00:00
0
{}
1riv3wv
false
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/yui76dticnmg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/yui76dticnmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/yui76dticnmg1/DASHPlaylist.mpd?a=1775055718%2CNTZlY2RmNDI1ZTdkZDMyNmU1OGI4N2JiNDJiNDJkY2NhNWFmNzk2Mjk2ZDA4Mjk2NjZiYTM4MGJmZDM1ZmFjNQ%3D%3D&v=1&f=sd', 'duration': 82, 'hls_url': 'https://v.redd.it/yui76dticnmg1/HLSPlaylist.m3u8?a=1775055718%2CMDU0NjM0OGFjNTBmOTlkODk4YjI4N2Q3ZWM1NzM0NDE2M2RlZjJjMTY5NTFhMzVjOGE0YmVlNThkOWQ2ODA5Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1riv3wv
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/
false
false
https://external-preview…a91a9bd50b6388f2
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo.png?format=pjpg&auto=webp&s=6c7e56e2eeb1135c002b95712acc563399097def', 'width': 405, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo.png?width=108&crop=smart&format=pjpg&auto=webp&s=bd8b4efd03ed4c6ff13516dbdd94f96dd0500561', 'width': 108, 'height': 192}, {'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo.png?width=216&crop=smart&format=pjpg&auto=webp&s=d61cc1c1b1f9c9bf962ff5598d2548e72ae8e144', 'width': 216, 'height': 384}, {'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo.png?width=320&crop=smart&format=pjpg&auto=webp&s=9f7555f4a795fed8f756c147229ac8f61e65363a', 'width': 320, 'height': 568}], 'variants': {}, 'id': 'dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo'}], 'enabled': False}
Genuinely fascinating, but also kind of terrifying...
1
From time to time I run through my pen-test runbook against my media server hosted on a cloud VPS and harden what I can based on new CVEs that come out. This time I decided to take it a step further, using an OpenCode harness with the Qwen3.5-27B-Heretic-Q6_K model running via LM Studio — mainly to avoid refusals and have it execute commands for me (all isolated in a separate VPS). I had it run through my full runbook and it executed everything perfectly. On top of that, it highlighted attack vectors well beyond what I'd normally cover in my testing, which honestly both blew me away and frightened me a little.

I did something similar a good while back using an abliterated/heretic 120B OSS GPT model, and it was nowhere near as verbose and frightening. Qwen3.5 absolutely blew it out of the water — and fast too, running entirely within my GPU's VRAM.

This has further highlighted to me personally how scary fully unrestricted Claude/GPT models would be in the Pentagon's hands, considering how much more powerful they are... genuinely unsettling, especially with the recent news.
2026-03-02T14:56:01
https://www.reddit.com/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/
ImmenseFox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riuywe
false
null
t3_1riuywe
/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/
false
false
self
1
null
Is Qwen3.5 2B an instruct model?
1
I tried Qwen's new 2B model. It's very fast, and thinking output isn't showing up in the llama.cpp server.
2026-03-02T14:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/
NegotiationNo1504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riuwsw
false
null
t3_1riuwsw
/r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/
false
false
self
1
null
How can I enable Context Shifting in Llama Server?
1
```makefile
SEED := $(shell bash -c 'echo $$((RANDOM * 32768 + RANDOM))')
QWEN35="$(MODELS_PATH)/unsloth/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf"

FLAGS += --seed $(SEED)
FLAGS += --ctx-size 16384
FLAGS += --cont-batching
FLAGS += --context-shift
FLAGS += --host 0.0.0.0
FLAGS += --port 9596

serve-qwen35-rg:
	llama-server -m $(QWEN35) $(FLAGS) \
		--alias "QWEN35B" \
		--temp 1.0 \
		--top-p 0.95 \
		--top-k 20 \
		--min-p 0.00
```

Hi guys, sorry, I couldn't figure out how to enable context shifting in the llama.cpp server. Above is my config. I just built llama.cpp today with these two commands:

```
$> cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="89"
$> cmake --build build --config Release
```

GitHub says it is enabled by default, but when working in either the web UI or the opencode app, it gets stuck at the context limit. I don't know what I'm missing. I'd really appreciate some help.
2026-03-02T14:50:29
https://www.reddit.com/r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/
source-drifter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riuttn
false
null
t3_1riuttn
/r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/
false
false
self
1
null
how to fix endless looping with Qwen3.5?
1
It seems fine for coding-related stuff, but on anything general it struggles hard and starts looping.
2026-03-02T14:43:42
https://www.reddit.com/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/
Odd-Ordinary-5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riunee
false
null
t3_1riunee
/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/
false
false
self
1
null
AMD details Ryzen AI 400 desktop with up to 8 cores, Radeon 860M graphics
1
[https://www.tomshardware.com/pc-components/cpus/amd-details-ryzen-ai-400-desktop-with-up-to-8-cores-radeon-860m-graphics-apus-wont-be-available-as-boxed-units-only-in-oem-systems](https://www.tomshardware.com/pc-components/cpus/amd-details-ryzen-ai-400-desktop-with-up-to-8-cores-radeon-860m-graphics-apus-wont-be-available-as-boxed-units-only-in-oem-systems)
2026-03-02T14:28:01
https://www.reddit.com/r/LocalLLaMA/comments/1riu9gi/amd_details_ryzen_ai_400_desktop_with_up_to_8/
takuonline
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riu9gi
false
null
t3_1riu9gi
/r/LocalLLaMA/comments/1riu9gi/amd_details_ryzen_ai_400_desktop_with_up_to_8/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?auto=webp&s=565c537c193b179809ba9435dbc8508a0e56bfb1', 'width': 2391, 'height': 1345}, 'resolutions': [{'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=108&crop=smart&auto=webp&s=4117828c9427cb7afb6340e6b97981182af4b465', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=216&crop=smart&auto=webp&s=56bb81975d8ec4f57ae20c7210f6f4e3bb41e295', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=320&crop=smart&auto=webp&s=e2bdeefc9829d887b0e79ca148f498ffcc34f205', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=640&crop=smart&auto=webp&s=ec755db33e70ddbc52350a3d20140a9800c9423a', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=960&crop=smart&auto=webp&s=78ac5c5b0424519793e01deac00b2b41674d7bab', 'width': 960, 'height': 540}, {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=1080&crop=smart&auto=webp&s=816e7a6865c321314b0aebe5b49c543b04ead115', 'width': 1080, 'height': 607}], 'variants': {}, 'id': '3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU'}], 'enabled': False}
Schema-only AI for data analysis, or why your LLM doesn't need to see your data to query it
1
I've been using Ollama for something that I think is a genuinely good local LLM use case beyond chat.

The idea: for data analysis questions, the model only needs column names and types to generate SQL. You feed it the schema (and some stats), it writes the query, DuckDB-WASM executes it in the browser. The model never sees a row of data.

So if you have a CSV with customer_email, revenue, churn_date, the model gets only that metadata; you ask "which segments churned most last quarter", it writes the SQL, and DuckDB runs it locally. Done.

Works surprisingly well for aggregations, filtering, joins, window functions. Breaks down for anything requiring the model to read actual cell content (summarizing a notes column, etc).

I wrapped this into a browser tool at [queryveil.com](http://queryveil.com) (which supports Ollama and WebLLM for fully airgapped analysis, for FREE!). The DuckDB piece works offline without any AI at all. Wrote up a comparison of this vs ChatGPT ADA vs Jupyter here: [queryveil.com/blog/chatgpt-data-analysis-privacy-comparison](http://queryveil.com/blog/chatgpt-data-analysis-privacy-comparison)

The thing is, my laptop is kind of limited when it comes to inference speed, and using Ollama makes everything waaaay slower. If anyone with a powerful setup is interested in seeing how the AI analyst works, let me know, I'll be glad to hear some feedback!
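A minimal sketch of the same schema-only flow using DuckDB's Python API instead of DuckDB-WASM; `ask_local_llm` is a hypothetical stand-in for whatever Ollama call you use, and the file name and prompt wording are made up.

```python
import duckdb

con = duckdb.connect()
# Extract column names and types only; no rows are read into the prompt.
schema = con.sql("DESCRIBE SELECT * FROM 'customers.csv'").fetchall()
schema_text = "\n".join(f"{name} {dtype}" for name, dtype, *_ in schema)

prompt = (
    "Table 'customers.csv' has columns:\n" + schema_text +
    "\nWrite one DuckDB SQL query: which segments churned most last quarter?"
)
sql = ask_local_llm(prompt)   # hypothetical helper; the model never sees data
print(con.sql(sql).df())      # execution happens locally in DuckDB
```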
2026-03-02T14:20:03
https://www.reddit.com/r/LocalLLaMA/comments/1riu2ij/schemaonly_ai_for_data_analysis_or_why_your_llm/
United-Stress-1343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riu2ij
false
null
t3_1riu2ij
/r/LocalLLaMA/comments/1riu2ij/schemaonly_ai_for_data_analysis_or_why_your_llm/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?auto=webp&s=ec27acf5827079cecbf90f9b85b2b888b63c4018', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=108&crop=smart&auto=webp&s=0dc03ecb894d8e3deb510d5c0435e4bebcaf96d6', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=216&crop=smart&auto=webp&s=c100f6987dcafc9b9bdf9675db8f58bc5a5373d4', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=320&crop=smart&auto=webp&s=c804d2c047fdce802ef33a66970341abf4dc811d', 'width': 320, 'height': 168}, {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=640&crop=smart&auto=webp&s=b5831bf5ed8985040bda0e7cfced879242fda5e6', 'width': 640, 'height': 336}, {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=960&crop=smart&auto=webp&s=e14d70b63b4666bb1b3bc433c27b28a5e91bb33e', 'width': 960, 'height': 504}, {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=1080&crop=smart&auto=webp&s=6de0841f04103f5a022c4c56ecdba61c6f03160b', 'width': 1080, 'height': 567}], 'variants': {}, 'id': 'BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4'}], 'enabled': False}
A local “LLM session recorder command center” for all API/Codex/Code/ChatGPT sessions?
1
Hey, I'm looking for a tool that can sit in between (or kind of "on top of") all these different AI apps/clients/GUI wrappers and record my sessions outside of whatever app I'm using.

I keep bouncing between tools and backends, and it feels like a lot of really valuable prompts + model responses just disappear into random app histories (which are so scattered and fragmented that they lose their value), get lost when I switch setups, or never end up in a place I truly own. Meanwhile it sometimes feels like the only people consistently keeping that data are the big platforms.

I'd love something that keeps a local, permanent archive of every LLM invocation and response, ideally grouped into full sessions, in one place, maybe even in a standard open format, so I can actually search and reuse it later and keep it on my own drive. And honestly, down the line it'd be amazing if that personal dataset could be used to help train open-source models too.

Does something like this already exist? I'm pretty new to this area, so if there's an obvious solution I'm missing, I'd really appreciate a recommendation. I think such a tool should be made if it doesn't exist. We never know how much longer our chat histories will be available in the various apps like ChatGPT.

I know this group is about running models locally, but maybe this is an aspect of "local" that no one has explored yet. If we're not using local models, at least we could keep local copies of the sessions?
2026-03-02T14:19:26
https://www.reddit.com/r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/
dadaphl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riu1zd
false
null
t3_1riu1zd
/r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/
false
false
self
1
null
Why Voice is the Perfect Starting Point for On-Device AI
1
2026-03-02T14:19:04
https://izwiai.com/blog/why-voice-is-the-perfect-starting-point
zinyando
izwiai.com
1970-01-01T00:00:00
0
{}
1riu1nn
false
null
t3_1riu1nn
/r/LocalLLaMA/comments/1riu1nn/why_voice_is_the_perfect_starting_point_for/
false
false
default
1
null
TRAINING A 670B MODEL ON A GTX 1060 IS NOW REAL
1
[removed]
2026-03-02T14:14:24
https://www.reddit.com/r/LocalLLaMA/comments/1ritxkd/train_670b_model_on_gtx_1060_from_now_is_real/
Actual_Wolf_2932
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ritxkd
false
null
t3_1ritxkd
/r/LocalLLaMA/comments/1ritxkd/train_670b_model_on_gtx_1060_from_now_is_real/
false
false
self
1
null
OSS-120B beats all open models but one in new WeirdML Data Science benchmark
0
https://preview.redd.it/… GLM-5 beats it.
2026-03-02T14:07:08
https://www.reddit.com/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/
magnus-m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ritr5v
false
null
t3_1ritr5v
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/
false
false
https://preview.redd.it/…982f039097b481d1
0
null
Released: AI Cost Router — 100% local LLM router (Ollama)
0
If you’ve ever wanted an LLM router that: ✔ Costs $0 ✔ Runs fully offline ✔ Has clean config ✔ Works with TypeScript …then check this out: 👉 [https://github.com/shivadeore111-design/ai-cost-router](https://github.com/shivadeore111-design/ai-cost-router) Fully local, minimal, and ready for tinkering. I’d love your feedback! ⭐
2026-03-02T14:05:23
https://www.reddit.com/r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/
Suitable-Form8694
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ritplu
false
null
t3_1ritplu
/r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/
false
false
self
0
{'enabled': False, 'images': [{'id': 'SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=108&crop=smart&auto=webp&s=433ba03fb0400acdaa2c1fb742dd4e65cf5370ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=216&crop=smart&auto=webp&s=f08819437a3cd646aa39ad409eff89bbbd0a56e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=320&crop=smart&auto=webp&s=adb102b8df6a7d1bd64396d2fb02ac75bddcfb3c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=640&crop=smart&auto=webp&s=9a7167cccf7d2c535725bbd5dd39cfbd0783869f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=960&crop=smart&auto=webp&s=7d649a1e60d9a47f97670b287606429ebf3a8795', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=1080&crop=smart&auto=webp&s=9a1030fa4cf06222991f396ee824789d93e45cc8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?auto=webp&s=d8aec81e4152db3a4003c7a8f62d5dfa90068e1b', 'width': 1200}, 'variants': {}}]}
Qwen3.5-2B-GGUF is here!
1
2026-03-02T14:02:04
https://huggingface.co/AaryanK/Qwen3.5-2B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1ritmjb
false
null
t3_1ritmjb
/r/LocalLLaMA/comments/1ritmjb/qwen352bgguf_is_here/
false
false
https://external-preview…8ec5a4688f374c37
1
{'enabled': False, 'images': [{'id': 'UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=108&crop=smart&auto=webp&s=87f9fa4cefcccabbb3de1c2b4b107e2d0b6bbb48', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=216&crop=smart&auto=webp&s=3824088ef2d72b35c5b608cd9278968ce6c039c8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=320&crop=smart&auto=webp&s=aeab37d0f27b98b9f99261fa49d0e507a5c46a5f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=640&crop=smart&auto=webp&s=7f8bf996c60887baca7f5989eabe227f6266be16', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=960&crop=smart&auto=webp&s=72c139246fa61752adf1c22aff5b8b8f0e60a1e8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=1080&crop=smart&auto=webp&s=ae57e1b3cb1dbaa7f2fdc51d94ef1feee0c58de8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?auto=webp&s=93d93e6a7b93c1bd8d0d49ef91ffe6c56c6e9d50', 'width': 1200}, 'variants': {}}]}
Qwen3.5-0.8B-GGUF is here!
1
2026-03-02T14:01:23
https://huggingface.co/AaryanK/Qwen3.5-0.8B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1ritlux
false
null
t3_1ritlux
/r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/
false
false
https://external-preview…8c7a2bf48b9670c1
1
{'enabled': False, 'images': [{'id': 'h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=108&crop=smart&auto=webp&s=4589b16ec3b3b805409a5ff8005519ad51377718', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=216&crop=smart&auto=webp&s=f98790fb06a39ce412b0fb2d840139caa55db20d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=320&crop=smart&auto=webp&s=76beb61928aeeca4be09b7bddaaf1af2d87f0ddd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=640&crop=smart&auto=webp&s=1aab1a1237ef9da08add1daca13632511ea1de84', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=960&crop=smart&auto=webp&s=f08b3dd63c18403a072312ddf3ea3fe2b924fef5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=1080&crop=smart&auto=webp&s=3a441a2cfb621d0ad7ae552cab387ef0042e77cb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?auto=webp&s=d85c644282a8b5deff0a8d380caed36397aec867', 'width': 1200}, 'variants': {}}]}
PSA: unsloth Qwen3.5 9/4/2/0.8B Quants are out
0
The usual and UD quants are all here.
2026-03-02T13:53:19
https://huggingface.co/collections/unsloth/qwen35
mmkzero0
huggingface.co
1970-01-01T00:00:00
0
{}
1ritepj
false
null
t3_1ritepj
/r/LocalLLaMA/comments/1ritepj/psa_unsloth_qwen35_94208b_quants_are_out/
false
false
https://external-preview…ae4fca63eed28300
0
{'enabled': False, 'images': [{'id': 'dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=108&crop=smart&auto=webp&s=bc22945ffd1a5b4538e9461f0008217c12ab36d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=216&crop=smart&auto=webp&s=8424da8798c0aaa1cc507342283deec7ecab8102', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=320&crop=smart&auto=webp&s=bd0b841f63efa2bbabe550c13942eb8faa7dc3e5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=640&crop=smart&auto=webp&s=a0cabe321c52951854b9dbc3bdae8efaf50806ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=960&crop=smart&auto=webp&s=64216464a61550631721bc3d991b1aa3d2d44638', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=1080&crop=smart&auto=webp&s=f001bcc6b957a406d088f5dc2e14398d2a1b171d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?auto=webp&s=0a38314dbac4d5a970eb60097f005a25f9562b60', 'width': 1200}, 'variants': {}}]}
Improve Qwen3.5 Performance on Weak GPU
31
I'm running Qwen3.5-27B-Q2_K.gguf, Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf and Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf on my PC using llama.cpp and want to know if there are some tweaks I can do to improve the performance. Currently I'm getting:

- 54 t/s with Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf
- 15 t/s with Qwen3.5-27B-Q2_K.gguf
- 5 t/s with Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf

I'm using these commands:

llama-cli.exe -m "Qwen3.5-27B-Q2_K.gguf" -ngl 99 -t 6 -b 512 -ub 512 --flash-attn on --no-mmap -n -1 --reasoning-budget 0

llama-cli.exe -m "Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf" -ngl 99 -t 6 -b 512 -ub 512 --flash-attn on --no-mmap -n -1 --reasoning-budget 0

llama-cli.exe -m "Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf" -ngl 65 -c 4096 -t 6 -b 512 -ub 512 --flash-attn on --no-mmap -n -1 --cache-type-k q8_0 --cache-type-v q8_0 --reasoning-budget 0

My PC specs are: RTX 3060 12GB VRAM + 32GB RAM
2026-03-02T13:50:32
https://i.redd.it/apfbjikvzmmg1.png
MarketingGui
i.redd.it
1970-01-01T00:00:00
0
{}
1ritcfr
false
null
t3_1ritcfr
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/
false
false
https://preview.redd.it/…1b9ae2fffe55acb8
31
{'enabled': True, 'images': [{'id': 'apfbjikvzmmg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=108&crop=smart&auto=webp&s=c753b95b898529e65a254de91a56ab629aafba64', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=216&crop=smart&auto=webp&s=affd098ce1328216a855baa6164b19d90bc15560', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=320&crop=smart&auto=webp&s=8188c5b18119183941a0e9cf1a7e8b5f3d59991e', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=640&crop=smart&auto=webp&s=bec18c268c0dcaf7d3c583e54023f0799be80cdb', 'width': 640}], 'source': {'height': 495, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?auto=webp&s=c6026086b70b83d45c18f6b50a2ac6765cd8019b', 'width': 784}, 'variants': {}}]}