| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen 3.5 27b: a testament to the transformer architecture | 1 | It's really good. I thought an early warning sign that transformer architecture might have hard limits would be if these tiny models stopped being able to keep up with the large ones. And to some degree this seemed to be the case, at least at times. We didn't get much between the qwen3 2507 models and now that strongly... | 2026-03-02T21:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj6m71 | false | null | t3_1rj6m71 | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/ | false | false | self | 1 | null |
Workstation for dev work + local LLMs — Tesla P40 vs MinisForum? | 1 | Building a new workstation primarily for programming/dev work. Since I'm investing in new hardware anyway, figured why not set it up so I can also run and finetune LLMs locally.
Option A: Custom build - 9900X, dual-GPU motherboard, 2x Tesla P40s off eBay. 48GB VRAM total ( one of the cheapest solutions, don't have... | 2026-03-02T21:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/ | marius-c-d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj6j0y | false | null | t3_1rj6j0y | /r/LocalLLaMA/comments/1rj6j0y/workstation_for_dev_work_local_llms_tesla_p40_vs/ | false | false | self | 1 | null |
Qwen3.5 Base models for 122B and 27B? | 1 | Anyone heard anything about it? I see they dropped base weights for all the recent tiny models, as well as the 35B-A3B model, but don't see any for the dense 27B or larger sparse models. I'm wondering if maybe that was just an oversight?
I would really like to get my grubby hands on the base 27B or the 122B, partiall... | 2026-03-02T21:53:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/ | KallistiTMP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj6hga | false | null | t3_1rj6hga | /r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/ | false | false | self | 1 | null |
Qwen’s latest model thinks it’s developed by Google. | 1 | I asked the new Qwen3.5-9B to identify itself. Here is the answer.
https://preview.redd.it/wh1p96r5bpmg1.png?width=536&format=png&auto=webp&s=eecff7d086a9703c96c5635b1ad884e654b42b13
| 2026-03-02T21:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/ | never-been-here-nl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj65jl | false | null | t3_1rj65jl | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/ | false | false | 1 | null | |
Mix & Matching R9700s? | 1 | I've managed to pick up a Sapphire AI PRO Radeon AI Pro R9700 for my upgrade. Problem is, I've fallen afoul of Newegg's one-per-customer rule, so I can't easily get a second. Other suppliers are charging a mint for another Sapphire, which leads me to ask:
1- I can't imagine any issues with using different partner models but ... | 2026-03-02T21:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rj62hk/mix_matching_r9700s/ | RottenPingu1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj62hk | false | null | t3_1rj62hk | /r/LocalLLaMA/comments/1rj62hk/mix_matching_r9700s/ | false | false | self | 1 | null |
Running Qwen3.5-0.8B on my 7-year-old Samsung S10E | 0 | Qwen just released their 0.8B model.
So naturally, I had to try running it on my 7-year-old Samsung S10E.
After some tinkering with llama.cpp, Termux, and a few missing C libraries... behold!
A fully working AI model running locally on an old phone at 12 tokens per second. And btw, the model itself is far f... | 2026-03-02T21:21:28 | HighFlyingB1rd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj5ngc | false | null | t3_1rj5ngc | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/ | false | false | 0 | {'images': [{'source': {'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?auto=webp&s=30fb9a7da42c36ff2a9bf6a196552af418941905', 'width': 3790, 'height': 1728}, 'resolutions': [{'url': 'https://preview.redd.it/mg9ixtw58pmg1.png?width=108&crop=smart&auto=webp&s=2b9005d27227dff202e1772b60bdcf56e2887f02', 'width': 108, 'h... | ||
Free image models that can run on 12gb VRAM? | 1 | I am kind of new to this but what are some good models that I can run myself with 12gb of VRAM? I don't need 4k images but something that can create realistic images in 1440p or worse quality. | 2026-03-02T21:10:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/ | CarsonWentzGOAT1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj5czr | false | null | t3_1rj5czr | /r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/ | false | false | self | 1 | null |
Local LLM | 1 | Ah so currently I am using claude opus 4.6 fast mode and getting lots of work done. I am uncomfortable with the centralization of the AI models and I am considering buying 2x rtx 6000 blackwell gpus.
On the coding side, I like the precision that Opus provides, but my bill is over $700 this month. I have a lot of ser... | 2026-03-02T21:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rj54kw/local_llm/ | Annual_Award1260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj54kw | false | null | t3_1rj54kw | /r/LocalLLaMA/comments/1rj54kw/local_llm/ | false | false | self | 1 | null |
StepFun releases 2 base models for Step 3.5 Flash | 1 | 2026-03-02T20:57:43 | https://x.com/StepFun_ai/status/2028551435290554450 | tarruda | x.com | 1970-01-01T00:00:00 | 0 | {} | 1rj4zy3 | false | null | t3_1rj4zy3 | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/ | false | false | default | 1 | null | |
Best model for basic text-based tasks on RTX 3070 | 1 | Which model should I use? | 2026-03-02T20:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/ | freefireclashsquad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj4vwr | false | null | t3_1rj4vwr | /r/LocalLLaMA/comments/1rj4vwr/best_model_for_basic_text_based_rasks_on_rtx_3070/ | false | false | self | 1 | null |
local llm test cases text and coding | 1 | Team, there are many benchmarks and tests that comparisons between different models are based on.
Where can I find those test cases to run on my local LLM? I would like to run them manually, or via automation if it exists, to run a full suite of tests, capture the results, measure pass/fail, and duplicate them, where do I even... | 2026-03-02T20:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/ | sunole123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj4rml | false | null | t3_1rj4rml | /r/LocalLLaMA/comments/1rj4rml/local_llm_test_cases_text_and_coding/ | false | false | self | 1 | null |
Qwen3.5-2B on Android | 1 | So I ran a quick test of qwen 3.5 2B on my Android device.
First I started with some basic questions that it was able to answer perfectly.
Then I gave it an easy image to process, and it described the image very well, including text that I asked it to translate from the provided image.
As for the third run, I gave it a complex a... | 2026-03-02T20:44:57 | https://v.redd.it/kyc0jcut1pmg1 | Zealousideal-Check77 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj4nnq | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/kyc0jcut1pmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'width': 860, 'scrubber_media_url': 'https://v.redd.it/kyc0jcut1pmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/kyc0jcut1pmg1/DASHPlaylist.mpd?a=1775076350%2CMzZl... | t3_1rj4nnq | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWdMPKrgC6fL8I9ao2SNEU3BQC18.png?format=pjpg&auto=webp&s=1bf110aa5ec3f2687a7cbf3a53ef2fbe276e09cd', 'width': 322, 'height': 718}, 'resolutions': [{'url': 'https://external-preview.redd.it/d2hhMDZudXQxcG1nMbrZDj1CpDKfgJQaqWd... | |
Qwen3.5-35b-A3b Vision capabilities in llama.cpp | 1 | I haven't found any documentation or threads on this anywhere, but I'm not able to get vision capabilities working on the new qwen 3.5 models in llama.cpp. I know llama.cpp usually looks for an mmproj file, but my understanding is that the qwen 3.5 models integrate vision into the model itself.
`image input is not sup... | 2026-03-02T20:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/ | No_Information9314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj4ktw | false | null | t3_1rj4ktw | /r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/ | false | false | self | 1 | null |
Qwen3.5 on Off Grid! | 1 | [Qwen3.5 on Off Grid!](https://preview.redd.it/haui2t420pmg1.png?width=760&format=png&auto=webp&s=1f4e4ddb9aa34d309a49f477466ade8ced96a1c6)
Qwen3.5 is on Off Grid! These are exciting times. My bet on edge AI getting better seems to be paying off.
If you haven't already, go check out Off Grid! | 2026-03-02T20:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/ | alichherawalla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj4ee5 | false | null | t3_1rj4ee5 | /r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/ | false | false | 1 | null |
LM studio kv caching issue? | 1 | Hi,
I've been trying out LM Studio's local API, but no matter what I do the KV cache just explodes. Each of my prompts adds 100MB of memory, and it's just NEVER purged?
I must be missing some parameter to include in my requests?
I'm using the '/v1/chat/completions' endpoint, being stateless, I'm so confused.
... | 2026-03-02T20:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/ | After-Operation2436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj4ck1 | false | null | t3_1rj4ck1 | /r/LocalLLaMA/comments/1rj4ck1/lm_studio_kv_caching_issue/ | false | false | self | 1 | null |
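For reference, LM Studio's `/v1/chat/completions` endpoint is OpenAI-compatible and stateless on the client side, so growth like this is server-side KV caching rather than anything in the request. A minimal sketch of such a stateless call, assuming LM Studio's default local port (1234) and a hypothetical model id:

```python
# Minimal stateless call to LM Studio's OpenAI-compatible server.
# Port 1234 is LM Studio's default; the model id is a placeholder.
# Each request resends the full history, so any memory growth is
# server-side KV reuse, not client state.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen3.5-9b",  # hypothetical id; use whatever is loaded
        "messages": [
            {"role": "user", "content": "Hello, are you stateless?"}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```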
Coding Power Ranking 26.02 | 1 | Hi all,
We're back with a new Power Ranking, focused on coding, including the best local model we've ever tested by a wide margin. My analysis is here: [https://blog.brokk.ai/the-26-02-coding-power-ranking/](https://blog.brokk.ai/the-26-02-coding-power-ranking/) | 2026-03-02T20:20:01 | https://brokk.ai/power-ranking | mr_riptano | brokk.ai | 1970-01-01T00:00:00 | 0 | {} | 1rj3yzz | false | null | t3_1rj3yzz | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/ | false | false | default | 1 | null |
I made a native macOS app for Qwen3-TTS — voice cloning, emotion presets, and voice design, all offline | 1 | Wanted to use Qwen3-TTS on my Mac without dealing with Python environments and terminal commands, so I built a SwiftUI app around it. Figured others might find it useful too.
It does voice cloning from audio samples, has 9 emotion presets with 3 intensity levels, voice design from text descriptions, and saves your gen... | 2026-03-02T20:17:25 | https://v.redd.it/092osyw6vomg1 | PowerBeef | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj3wgy | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/092osyw6vomg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/092osyw6vomg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/092osyw6vomg1/DASHPlaylist.mpd?a=1775074662%2CMGMyM... | t3_1rj3wgy | /r/LocalLLaMA/comments/1rj3wgy/i_made_a_native_macos_app_for_qwen3tts_voice/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagPzBbxb2eGLh9EdSQHg_VlqejrSf.png?format=pjpg&auto=webp&s=bf52cbe19d41e93c3e0927dfb3a7b7afc4d92dce', 'width': 1280, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/ZzlkeXQ3eDZ2b21nMV7DJq4cU73e-4xagP... | |
Beginner's Guide to LLM Quantization: How It Works | 1 | [removed] | 2026-03-02T20:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3ump/beginners_guide_to_llm_quantization_how_it_works/ | Pure-Fruit2654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3ump | false | null | t3_1rj3ump | /r/LocalLLaMA/comments/1rj3ump/beginners_guide_to_llm_quantization_how_it_works/ | false | false | self | 1 | null |
Any advice for using draft models with Qwen3.5 122b?! | 1 | I have been using Qwen3.5 for a while now and it is absolutely amazing. However, I was wondering if anyone has tried using any of the smaller models as drafts (including, of course, but not limited to, the Qwen3.5 0.8b?! A perfect fit at, say, Q2; should be AWESOME!)
Any advice or tips on that ? Thanks | 2026-03-02T20:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3oue | false | null | t3_1rj3oue | /r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/ | false | false | self | 1 | null |
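For reference, llama.cpp drives speculative decoding through the `-md`/`--model-draft` flag family. A minimal launch sketch (Python wrapper around the CLI); the file paths are placeholders, and flag names match recent llama.cpp builds, so verify against `llama-server --help` for your version:

```python
# Sketch: launch llama-server with a small draft model for speculative
# decoding. Paths are placeholders; flags match recent llama.cpp builds.
import subprocess

subprocess.run([
    "llama-server",
    "-m",  "Qwen3.5-122B-A10B-Q4_K_M.gguf",  # target model (placeholder)
    "-md", "Qwen3.5-0.8B-Q8_0.gguf",         # draft model (placeholder)
    "--draft-max", "16",   # max tokens drafted per verification step
    "--draft-min", "1",    # min tokens to draft before verifying
    "-ngl", "99",          # offload target layers to GPU
    "-ngld", "99",         # offload draft layers to GPU
    "-c", "32768",
])
```

Note that llama.cpp requires the draft and target vocabularies to be compatible, which several posts below report as the sticking point with this series.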
Question regarding model parameters and memory usage | 1 | Why do Qwen 3.5 9B and Qwen 2.5 VL 7B need so much memory at high context lengths? They ask for around 25GB of memory at 131k context length, whereas GPT-OSS-20B needs only 16GB of memory for the same context length despite having more than twice the parameters. | 2026-03-02T20:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/ | IPC300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3ocy | false | null | t3_1rj3ocy | /r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/ | false | false | self | 1 | null |
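The likely answer is KV-cache geometry: cache size scales with layers × KV heads × head dim × context length, not with parameter count, and designs with aggressive GQA or sliding-window layers keep that product small. A back-of-envelope sketch with illustrative hyperparameters (guesses, not the models' real configs):

```python
# Back-of-envelope KV-cache size. Hyperparameters below are illustrative
# guesses, not the real configs; the point is that cache size scales with
# layers x kv_heads x head_dim x context, not with total parameter count.
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx, bytes_per=2):
    # 2x for K and V; fp16/bf16 = 2 bytes per element
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per / 1024**3

ctx = 131_072
print(kv_cache_gib(40, 8, 128, ctx))  # dense-model-style config: ~20 GiB
print(kv_cache_gib(24, 8, 64, ctx))   # fewer layers, smaller heads: ~6 GiB
```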
I'm tired | 1 | I'm tired.
I started getting interested in local models about 3-4 months ago. During that time, the GPT and Sonnet killers came out, at least that's how the hype went. Every time a new model came out, it seemed like, "This is it!" But later it turned out that "it's still not Sonnet."
And so many questions. Backend ... | 2026-03-02T20:05:09 | Fast_Thing_7949 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj3kfq | false | null | t3_1rj3kfq | /r/LocalLLaMA/comments/1rj3kfq/im_tired/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?auto=webp&s=ed3db413e690c008376f0838cdcada0f48cf4e7c', 'width': 1074, 'height': 1138}, 'resolutions': [{'url': 'https://preview.redd.it/o68lr6fquomg1.jpeg?width=108&crop=smart&auto=webp&s=105d431def0574f4631e48e38838e0b593e10713', 'width': 108, ... | ||
Strix Halo NPU performance compared to GPU and CPU in Linux. | 1 | Thanks to this project.
https://github.com/FastFlowLM/FastFlowLM
There is now support for the Max+ 395 NPU under Linux for LLMs. Here are some quick numbers for oss-20b.
**NPU - 20 watts**
Average decoding speed: 19.4756 tokens/s
Average prefill speed: 19.6274 tokens/s
**GPU - 82 watts**
[ Prompt: 4... | 2026-03-02T20:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3i8m | false | null | t3_1rj3i8m | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/ | false | false | self | 1 | null |
New update CMDAI 1.1.1beta | 0 | This is the largest update to CMDAI so far, introducing new modes! We've focused on enhancing usability and adding powerful tools for AI interaction. Please test thoroughly and report any bugs in the Issues section – your feedback is crucial!
**🔄 New Modes**
1. Code Mode: Uses the file generated by Plan Mode ... | 2026-03-02T20:01:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3g91/new_update_cmdai_111beta/ | KRZYZYK33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3g91 | false | null | t3_1rj3g91 | /r/LocalLLaMA/comments/1rj3g91/new_update_cmdai_111beta/ | false | false | self | 0 | {'images': [{'source': {'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?auto=webp&s=d053a5ebcebbc17f44b97363f808b69f88005b0c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/kT4f2DVg7ppNCNWJJMFxGQ6X0iKQQdFLkFUijHi3-wE.png?width=108&crop=... |
Running a business on a 20W Jetson box — local AI for support, shipping, invoicing ($0 to $60K in 3 weeks) | 1 | 2026-03-02T19:58:12 | https://openclawhardware.dev/blog/2026-02-28-zero-to-60k-clawbox-running-its-own-business | superactro | openclawhardware.dev | 1970-01-01T00:00:00 | 0 | {} | 1rj3das | false | null | t3_1rj3das | /r/LocalLLaMA/comments/1rj3das/running_a_business_on_a_20w_jetson_box_local_ai/ | false | false | default | 1 | null | |
Why Qwen 3.5 27B? | 1 | Qwen 3.5 has 27B and 35B versions. I wonder why they chose these numbers. I mean, I could fit a 24B as a Q4 in my 16GB but 27B is just a tiny bit too large for q4\_k\_m and I would have to go down to q3\_k\_m to fit it. 24B vs 27B shouldn't make that much of a difference, no? Compared to q4 vs q3. | 2026-03-02T19:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/ | dreamyrhodes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3cku | false | null | t3_1rj3cku | /r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/ | false | false | self | 1 | null |
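For the curious, the fit really is marginal; a rough weight-only estimate using approximate average bits-per-weight for llama.cpp k-quants (real files vary slightly, and KV cache plus activations come on top):

```python
# Rough weight-only footprint: params * bits-per-weight / 8.
# The bpw values are approximate averages for llama.cpp k-quants.
def weights_gib(params_b, bpw):
    return params_b * 1e9 * bpw / 8 / 1024**3

for params in (24, 27):
    for name, bpw in (("Q4_K_M", 4.85), ("Q3_K_M", 3.91)):
        print(f"{params}B {name}: {weights_gib(params, bpw):.1f} GiB")
# 27B Q4_K_M lands around 15.3 GiB of weights alone, which is why it
# just misses fitting in 16GB once context is added.
```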
Qwen3.5 397B vs 27B! | 1 | How are they so smart?? Does it translate to real-world usage? What have your experiences been? It's mind-blowing that, being 10x smaller, they compete with the big dawgs | 2026-03-02T19:56:18 | SennVacan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj3bh0 | false | null | t3_1rj3bh0 | /r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/ | false | false |  1 | {'images': [{'source': {'url': 'https://preview.redd.it/yofidqrusomg1.png?auto=webp&s=013b7918dad1ed3e45e28c46b8ff7eaca90aeb72', 'width': 175, 'height': 348}, 'resolutions': [{'url': 'https://preview.redd.it/yofidqrusomg1.png?width=108&crop=smart&auto=webp&s=bef6ffdfc47e52fa1df12e2b2750a2db330afe86', 'width': 108, 'hei...
Qwen3.5-9b 4bit quant acting weird | 1 | Hi folks,
I'm trying to run Qwen3.5-9b 4 bit quants with LM Studio (there are several options available), and first of all - they're really impressive so far!
However, sometimes it gets stuck on the same thought over and over and never finishes the thinking process. So far this seems to be only the case with MLX quan... | 2026-03-02T19:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rj3ay3/qwen359b_4bit_quant_acting_weird/ | Ok_Whole_5900 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj3ay3 | false | null | t3_1rj3ay3 | /r/LocalLLaMA/comments/1rj3ay3/qwen359b_4bit_quant_acting_weird/ | false | false | self | 1 | null |
Intelligence density per GB is increasing and I expect 4o intelligence by end of year for small models. | 1 | With the release of small 3.5 Qwen models, I realize that intelligence density is constantly increasing and I expect 10-100x smarter models for local models by 2028.
Elon said the AI community underestimates the potential from algorithms alone by 100x, and maybe sees \~10x smarter AI yearly overall.
Yes models are g... | 2026-03-02T19:54:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/ | Traditional-Card6096 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj39se | false | null | t3_1rj39se | /r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/ | false | false | self | 1 | null |
Any idea what is being used for these generations? | 1 | 2026-03-02T19:47:06 | https://v.redd.it/08vdwcyhromg1 | C0C0Barbet | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj326g | false | {'reddit_video': {'bitrate_kbps': 1200, 'fallback_url': 'https://v.redd.it/08vdwcyhromg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'width': 480, 'scrubber_media_url': 'https://v.redd.it/08vdwcyhromg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/08vdwcyhromg1/DASHPlaylist.mpd?a=1775072853%2CZTMyYj... | t3_1rj326g | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwReSEEg3uewqxNw-7zWM5Brju1uM.png?format=pjpg&auto=webp&s=ee7facb28037e7d95218a6a48eab7a9eff300d51', 'width': 952, 'height': 1693}, 'resolutions': [{'url': 'https://external-preview.redd.it/cjg4Zmw4MWlyb21nMb8lQHzrm7qtp6tdwR... | ||
You can monitor LoRA training quality without running eval — structural metrics track loss at r > 0.95 | 1 | We've been running experiments on Mistral-7B LoRA fine-tuning and found something practically useful that I haven't seen discussed here.
**The short version:** metrics computed from the adapter weights alone (no data, no forward pass) correlate with eval loss at |r| > 0.95 during training. You can watch these instead ... | 2026-03-02T19:42:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/ | Front-Structure2385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2y4n | false | null | t3_1rj2y4n | /r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?auto=webp&s=0645cd7dd6efd7f2abc41057014dd48eb710a52e', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/tcIbwRfVfs51JcFSIq1Rwxx9BS-IZcR0rWxXm1OUZZY.png?width=108&crop=... |
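The post doesn't spell out which structural metrics it tracks, but here is a sketch of the kind of data-free statistic computable from a LoRA adapter alone, e.g. the spectral norm and an entropy-based effective rank of each update B·A (no forward pass, no eval data):

```python
# Sketch of data-free adapter metrics of the kind the post describes
# (the exact metrics it tracks aren't specified).
import torch

def lora_stats(A: torch.Tensor, B: torch.Tensor):
    delta = (B @ A).float()              # low-rank update, shape (out, in)
    s = torch.linalg.svdvals(delta)      # singular values, descending
    p = (s / s.sum()).clamp_min(1e-12)   # normalized spectrum
    eff_rank = torch.exp(-(p * p.log()).sum())  # exp(spectral entropy)
    return {"spectral_norm": s[0].item(), "eff_rank": eff_rank.item()}

# Toy rank-16 adapter with LoRA-style shapes (A: r x in, B: out x r)
A = 0.01 * torch.randn(16, 4096)
B = 0.01 * torch.randn(4096, 16)
print(lora_stats(A, B))                  # eff_rank will be close to 16
```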
How to stop burning money on OpenClaw. What I learned from talking to 100+ users | 1 | OpenClaw is one of the fastest-growing open-source projects in recent history. 230,000 GitHub stars, 116,000 Discord members, 2 million visitors per week. All of that in two months. People are running personal AI agents on their Mac Minis and cloud servers. It works, and it is genuinely useful.
Like any major shift in... | 2026-03-02T19:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2s2y/how_to_stop_burning_money_on_openclaw_what_i/ | stosssik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2s2y | false | null | t3_1rj2s2y | /r/LocalLLaMA/comments/1rj2s2y/how_to_stop_burning_money_on_openclaw_what_i/ | false | false | 1 | null | |
New Qwen models for speculative decoding | 1 | Hey, has anyone successfully used the new Qwen models (0.8/2/4)B as draft models for speculative decoding? I benchmarked 122B and 397B using 0.8B, 2B, and 4B as draft models (tested 4B only with the 122B variant; 397B triggered OOM errors). However, I found no performance improvement for either prompt processing or to... | 2026-03-02T19:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/ | unbannedfornothing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2rec | false | null | t3_1rj2rec | /r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/ | false | false | self | 1 | null |
Is speculative decoding available with the Qwen 3.5 series? | 1 | Now that we have a series of dense models from 27B to 0.8B, I'm hoping that speculative decoding is on the menu again. The 27B model is great, but too slow.
Now if I can just get some time to play with it... | 2026-03-02T19:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/ | PermanentLiminality | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2mzy | false | null | t3_1rj2mzy | /r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/ | false | false | self | 1 | null |
Qwen3.5 30B is Incredible for Local Deployment | 1 | I just tried out Qwen3.5 30B locally, and I am absolutely blown away by its performance! The model is incredibly powerful and runs smoothly even on local hardware. If you haven't tried it yet, I highly recommend giving it a go. It's a game-changer for local AI deployment! | 2026-03-02T19:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/ | Marco_Ferreira43516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2gwf | false | null | t3_1rj2gwf | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/ | false | false | self | 1 | null |
Is LocalLLaMA for hate and malicious comments? - leave your comments | 1 | Is it normal on **LocalLLaMA** that the perhaps naive posts that sometimes appear here, or the posts that are not always wise, immediately draw hate from some people?
Yes, there are people who are resistant to knowledge, but you can just skip such posts. Unfortunately, those who comment usually need to efford ... | 2026-03-02T19:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/ | mossy_troll_84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj2fm9 | false | null | t3_1rj2fm9 | /r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/ | false | false | self | 1 | null |
SpongeBob Art with Qwen 3.5 9b vs Opus 4.6 | 1 | 2026-03-02T19:23:03 | camracks | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj2e3j | false | null | t3_1rj2e3j | /r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?auto=webp&s=4b24700e7498059a3008a94860c05acba9e0f93e', 'width': 1747, 'height': 1892}, 'resolutions': [{'url': 'https://preview.redd.it/e58yrl38nomg1.jpeg?width=108&crop=smart&auto=webp&s=cbd53e6d7107887c5735c8e474bf31b2e35400b2', 'width': 108, ... | |||
Open source tool for fine-tuning/evals now works with NVIDIA DGX Spark (if your lab has one) | 1 | For those of you that have an NVIDIA DGX Spark in your training setup, Transformer Lab just released native support for it.
It’s a free, open source tool for running fine-tuning, training, and evals and replaces a fragmented landscape of scripts and tools.
Transformer Lab handles environment setup while managing ... | 2026-03-02T19:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rj21zm/open_source_tool_for_finetuningevals_now_works/ | Historical-Potato128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj21zm | false | null | t3_1rj21zm | /r/LocalLLaMA/comments/1rj21zm/open_source_tool_for_finetuningevals_now_works/ | false | false | 1 | null | |
I got tired of AI agents crashing my GPU and having root access. So I wrote a Rust Kernel to schedule and secure them (It’s probably broken) | 1 | Hi everybody out there running local LLMs,
I'm doing a small, free **process manager/daemon** (ORE) for local AI agents. This has been brewing because I got extremely annoyed that running two agents (like OpenClaw or custom scripts) at the same time causes **Ollama/vLLM** to **OOM** crash my GPU.
It won't be a m... | 2026-03-02T19:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1sn9/i_got_tired_of_ai_agents_crashing_my_gpu_and/ | InternationalSun5556 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1sn9 | false | null | t3_1rj1sn9 | /r/LocalLLaMA/comments/1rj1sn9/i_got_tired_of_ai_agents_crashing_my_gpu_and/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?auto=webp&s=ebff41f366ee68a3c2468ff62f0ed3e7f6eebbcf', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/V2LVTOtOYiFm8jMhvm6A4TEzqmiCrADKpw4gYA-SsvQ.png?width=108&crop=... |
AI agents don't have a context problem. They have a judgment problem. | 1 | I've been using AI agents and copilots daily for over a year and something keeps nagging me.
These tools have access to my code, my docs, my conversations. But when they make a decision on my behalf - drafting a response, triaging an issue, suggesting an approach - it feels *off*. Not wrong exactly, but generic. Like ... | 2026-03-02T19:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/ | Illustrious-Bet6287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1sbq | false | null | t3_1rj1sbq | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/ | false | false | self | 1 | null |
GPU poor folks(<16gb) what's your setup for coding? | 1 | I'm on a 16gb M1, so I need to stick to \~9B models. I find Cline is too much for a model that size; I think the system prompt telling it how to navigate the project is too much.
Is there anything that's like Cline but more lightweight, where I load one file at a time, and it just focuses on code changes? | 2026-03-02T18:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/ | FearMyFear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1ni2 | false | null | t3_1rj1ni2 | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/ | false | false | self | 1 | null |
Which QWEN 3.5 model can I run on my laptop | 1 | I am confused about which model I can run and which Unsloth quant I can use. I have an Asus Zephyrus G15 with a Ryzen 9 5900HS with Radeon graphics, 16GB RAM, and an RTX 3060 laptop GPU with 6GB VRAM.
Also, is there a way I can connect the local model to Antigravity? I'm analyzing large datasets and constantly have to tweak and test ca... | 2026-03-02T18:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/ | dolo937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1ifv | false | null | t3_1rj1ifv | /r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/ | false | false | self | 1 | null |
Are autonomous AI agents with wallet access actually a security risk, or am I overthinking this? | 1 | [removed] | 2026-03-02T18:50:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1h6u/are_autonomous_ai_agents_with_wallet_access/ | CraftyWriter2543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1h6u | false | null | t3_1rj1h6u | /r/LocalLLaMA/comments/1rj1h6u/are_autonomous_ai_agents_with_wallet_access/ | false | false | self | 1 | null |
[llamacpp][LMstudio] Draft model settings for Qwen3.5 27b? | 1 | Hey, I'm trying to figure out the best draft model (speculative decoding) for `Qwen3.5-27b`.
Using LMstudio, I downloaded `Qwen3.5-0.8B-Q8_0.gguf` but it doesn't show up in spec-decode options. Both my models were uploaded by `lmstudio-community`. The `27b` is a `q4_k_m`, while smaller one is `q8`.
Next, I tried using:
... | 2026-03-02T18:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/ | v01dm4n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1e35 | false | null | t3_1rj1e35 | /r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/ | false | false | self | 1 | null |
Did anyone manage to get speculative decoding working on Qwen3.5 models? | 1 | [removed] | 2026-03-02T18:44:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rj1b0w/did_someone_managed_to_get_speculative_decoding/ | ArthurianX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj1b0w | false | null | t3_1rj1b0w | /r/LocalLLaMA/comments/1rj1b0w/did_someone_managed_to_get_speculative_decoding/ | false | false | self | 1 | null |
Built a local memory layer for AI agents where memories actually fade over time — works with any LLM, no cloud, no API keys | 1 | Most AI memory tools are basically just "save everything forever and search it."
That breaks fast because stale, irrelevant context clutters every response.
YourMemory works differently. Memories decay with time using the Ebbinghaus Forgetting Curve. The ones you keep coming back to stay strong.
The ones you never... | 2026-03-02T18:41:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/ | Sufficient_Sir_5414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj18h4 | false | null | t3_1rj18h4 | /r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/ | false | false | self | 1 | null |
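For anyone curious how such decay can work, a minimal sketch assuming the textbook Ebbinghaus retention curve R = exp(-t/S), where each recall boosts the stability S; the parameter values are illustrative, not YourMemory's actual ones:

```python
# Ebbinghaus-style decay with reinforcement: retention falls off as
# exp(-t/S), and every recall multiplies the stability S, so frequently
# used memories fade slower. Constants here are illustrative.
import math, time

class Memory:
    def __init__(self, text, stability=86_400.0):  # ~1-day time constant
        self.text, self.stability = text, stability
        self.last_access = time.time()

    def retention(self, now=None):
        t = (now or time.time()) - self.last_access
        return math.exp(-t / self.stability)

    def recall(self):
        self.stability *= 1.5        # spaced-repetition-style boost
        self.last_access = time.time()
        return self.text

m = Memory("user prefers dark mode")
print(m.retention())                 # ~1.0 right after creation
```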
How are you handling spending controls for your AI agents? | 1 | I've been looking into agents that make real purchases (booking flights, buying SaaS, etc.) and I'm surprised how few guardrails exist. OpenClaw has 190k stars and 5,400+ skills but the financial control story is basically "trust the agent" or "don't let it spend."
For those running agents that interact with payment f... | 2026-03-02T18:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rj12me/how_are_you_handling_spending_controls_for_your/ | Professional_Cod9487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj12me | false | null | t3_1rj12me | /r/LocalLLaMA/comments/1rj12me/how_are_you_handling_spending_controls_for_your/ | false | false | self | 1 | null |
Parameter Configuration for Knowledge Distill to Qwen3.5 model. | 1 | Hi everyone,
I’m trying to add a new reasoning skill to Qwen3.5-27B via LoRA fine-tuning, but I’m running into issues.
The base model has very strong coding and reasoning abilities. However, after fine-tuning on my dataset, it seems to completely forget its general capabilities.
First setup:
• LoRA rank: 64
• Lo... | 2026-03-02T18:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rj11vb/parameter_configuration_for_knowledge_distill_to/ | Mysterious_Art_3211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj11vb | false | null | t3_1rj11vb | /r/LocalLLaMA/comments/1rj11vb/parameter_configuration_for_knowledge_distill_to/ | false | false | self | 1 | null |
In search of getting started guide for Strix Halo | 1 | [removed] | 2026-03-02T18:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rj0x6g/in_search_of_getting_started_guide_for_strix_halo/ | WhatWouldVaderDo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj0x6g | false | null | t3_1rj0x6g | /r/LocalLLaMA/comments/1rj0x6g/in_search_of_getting_started_guide_for_strix_halo/ | false | false | self | 1 | null |
Why are people so quick to say Closed frontiers are benchmaxxed while they gulp this without any second thought? | 1 | Really wanna know these absurd benchmarks of qwen models specifically | 2026-03-02T18:20:28 | Independent-Ruin-376 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj0mxt | false | null | t3_1rj0mxt | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?auto=webp&s=c76c7822d107497834ac80a3e8987f41439be520', 'width': 1080, 'height': 1710}, 'resolutions': [{'url': 'https://preview.redd.it/4qqdcsy1comg1.jpeg?width=108&crop=smart&auto=webp&s=d4e574fae911e8b05cefe010968489ad38c5eb6e', 'width': 108, ... | ||
Qwen3.5 2b, 4b and 9b tested on Raspberry Pi5 | 1 | Tested on Raspberry Pi 5, 8GB and 16GB variants (the 16GB with an SSD), all with the vision encoder enabled, 16k context, and llama.cpp with some optimisations for ARM/Pi.
Overall I'm impressed:
Qwen3.5-2b 4 bit quant: I'm getting constant **5-6t/s** on both raspberries, time to first token is fast (few seconds on short prompts), ... | 2026-03-02T18:19:34 | https://v.redd.it/hzihay2laomg1 | jslominski | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rj0m27 | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/hzihay2laomg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'width': 978, 'scrubber_media_url': 'https://v.redd.it/hzihay2laomg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/hzihay2laomg1/DASHPlaylist.mpd?a=1775067618%2COWNjY... | t3_1rj0m27 | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNgsKxze3i5tQ83sRwhip-10VRMr_.png?format=pjpg&auto=webp&s=5d223844688f948fdcc378eb59afc0048c64ac2c', 'width': 1264, 'height': 930}, 'resolutions': [{'url': 'https://external-preview.redd.it/cTkyeHB5Mmxhb21nMfHhW0n7ZOIJe3LcNg... | |
Best Compatible & Suitable LocalLLM Model Suggestion | 1 | Hi dudes,
I ran the three models shown below on my
5060 Ti 16 GB VRAM - 5600X - 32 GB DDR4 RAM machine, in LM Studio.
You can see the settings in the attachment.
Although I tried to keep the settings at the most ideal level possible (following Gemini's guidance), I have a very low token per second rate. Knowing thi... | 2026-03-02T18:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rj0dyn/best_compatible_suitable_localllm_model_suggestion/ | thesayk0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj0dyn | false | null | t3_1rj0dyn | /r/LocalLLaMA/comments/1rj0dyn/best_compatible_suitable_localllm_model_suggestion/ | false | false | 1 | null | |
**Running LLMs on Huawei Ascend without rewriting every script that assumes CUDA** | 1 | Been experimenting with running local LLMs on an Ascend 910B. The hardware is capable but the entire inference ecosystem, HuggingFace, vLLM, DeepSpeed, assumes torch.cuda everywhere. Every script dies immediately.
Built a runtime shim that intercepts those calls and reroutes them to the NPU without touching the orig... | 2026-03-02T18:11:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/ | AcanthocephalaNo2929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj0dsf | false | null | t3_1rj0dsf | /r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?auto=webp&s=937965269bdb42ffe727ebbddb35c14d3e1ca72a', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/bakpxELAni3sAwS7MYG4HptdT1qV5qITP7gGuANEOrM.png?width=108&crop=... |
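A toy version of the interception idea (not the author's actual shim): reroute the `torch.cuda` surface so CUDA-hardcoded code lands on Ascend's `npu` device. It assumes the `torch_npu` plugin is installed and exposes the usual `torch.npu` namespace:

```python
# Toy shim: patch torch.cuda entry points to Ascend's "npu" device.
# Assumes the torch_npu plugin is installed; real shims patch many
# more entry points (streams, events, memory stats, ...).
import torch
import torch_npu  # Ascend PyTorch plugin (assumed installed)

torch.cuda.is_available = lambda: torch.npu.is_available()
torch.cuda.device_count = lambda: torch.npu.device_count()

def _cuda_to_npu(self, device=None, **kwargs):
    # Tensor.cuda(...) now lands on npu:N instead of cuda:N
    return self.to("npu" if device is None else f"npu:{device}")

torch.Tensor.cuda = _cuda_to_npu

x = torch.randn(2, 2).cuda()  # silently placed on npu:0
print(x.device)
```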
K2 (not 2.5) distillation - still worth it?.. | 1 | I have been experimenting since November with trying to distill Kimi K2, known for its unique style. Had a very uneven ride with loads of things learned, loads of infrastructure bugs filed (most fixed now), and some interesting results but nothing definitive.
K2.5 is generally considered to have nerfed the style while... | 2026-03-02T18:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/ | ramendik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj08k1 | false | null | t3_1rj08k1 | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/ | false | false | self | 1 | null |
Beginner's Guide to LLM Quantization: Run 70B Models on Your Gaming GPU | 1 | [removed] | 2026-03-02T18:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rj048e/beginners_guide_to_llm_quantization_run_70b/ | Actual_Wolf_2932 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rj048e | false | null | t3_1rj048e | /r/LocalLLaMA/comments/1rj048e/beginners_guide_to_llm_quantization_run_70b/ | false | false | self | 1 | null |
What models to "understand" videos? (No transcripts) | 1 | There are apps like Get Poppy where you paste an Instagram Reel or YouTube link and they don’t just transcribe the audio — they also extract and understand the visual sequence of the video.
This isn’t done with single 1-second frames, because that wouldn’t capture temporal context or visual continuity. It’s real video... | 2026-03-02T17:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/ | jrhabana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rizy4r | false | null | t3_1rizy4r | /r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/ | false | false | self | 1 | null |
Speedup GLM on Strix Halo and llama.cpp | 1 | Hello!
Do you have any tips / parameters for the GLM models on how to speed them up, especially prompt processing (pp), on Strix Halo with llama.cpp?
prompt eval time = 91.59 ms / 1 tokens ( 91.59 ms per token, 10.92 tokens per second)
eval time = 36265.55 ms / 426 tokens ( 85.13 ms per token, 11... | 2026-03-02T17:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rizw9u/speedup_glm_on_strix_halo_and_llamacpp/ | Equivalent-Belt5489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rizw9u | false | null | t3_1rizw9u | /r/LocalLLaMA/comments/1rizw9u/speedup_glm_on_strix_halo_and_llamacpp/ | false | false | self | 1 | null |
Running Qwen 3.5 0.8B locally in the browser on WebGPU w/ Transformers.js | 2 | Today, Qwen released their latest family of small multimodal models, Qwen 3.5 Small, available in a range of sizes (0.8B, 2B, 4B, and 9B parameters) and perfect for on-device applications. So, I built a demo running the smallest variant (0.8B) locally in the browser on WebGPU. The bottleneck is definitely the vision en... | 2026-03-02T17:46:44 | https://v.redd.it/hta9o2i95omg1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rizodv | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/hta9o2i95omg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/hta9o2i95omg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/hta9o2i95omg1/DASHPlaylist.mpd?a=1775065703%2CNjlkMz... | t3_1rizodv | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/ | false | false | 2 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7zJWShDhnYRgbWYJJKHggNlhZlM.png?format=pjpg&auto=webp&s=589398bbf124364395ad0c8ec041c6ea283ca0cd', 'width': 800, 'height': 800}, 'resolutions': [{'url': 'https://external-preview.redd.it/Y3Y3ejI3aTk1b21nMdWbvGcxCx2ye2tGU7z... | |
Qwen 27B is a beast but not for agentic work. | 1 | After I tried it, even the base model, it really showed what it can do. I immediately fell in love.
But after some time, the quality became too costly. Even though it shows great comprehension and can follow instructions well, it becomes unusable if I need it to work on similar context across multiple queries.
It recalcula... | 2026-03-02T17:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rizlkn | false | null | t3_1rizlkn | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/ | false | false | self | 1 | null |
qwen3.5-0.8b Released Today speed is insane 157TK/sec | 1 | https://reddit.com/link/1rizjco/video/395i9x2s4omg1/player
I'm on an old machine: Ryzen 9 5950X, 64GB DDR4-3400, GeForce 3070. This is the basic, bare-minimum 0.8B model that came out today. | 2026-03-02T17:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/ | PhotographerUSA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rizjco | false | null | t3_1rizjco | /r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/ | false | false | self | 1 | null |
Qwen3.5 9B (FP16) vs 27B (FP8) (have 64GB unified M1 Max memory) | 1 | [https://modelscope.cn/models/Qwen/Qwen3.5-9B](https://modelscope.cn/models/Qwen/Qwen3.5-9B)
[https://modelscope.cn/models/Qwen/Qwen3.5-27B-FP8](https://modelscope.cn/models/Qwen/Qwen3.5-27B-FP8)
These 2 models present the optimal size for using alongside a 64GB system.
Are there any directly comparable results that... | 2026-03-02T17:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1riz9zz/qwen35_9b_fp16_vs_27b_fp8_have_64gb_unified_m1/ | weight_matrix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riz9zz | false | null | t3_1riz9zz | /r/LocalLLaMA/comments/1riz9zz/qwen35_9b_fp16_vs_27b_fp8_have_64gb_unified_m1/ | false | false | self | 1 | null |
What if a small AI decided what your LLM keeps in memory, instead of dumb heuristics throwing away tokens? I wrote a whitepaper, need a collaborator. | 1 | You load 100K tokens into your model. Behind the scenes, the KV-cache is either blowing up your VRAM or some heuristic is silently deleting tokens it thinks you don't need. Spoiler: it often deletes the wrong ones.
**The problem with current approaches (H2O, ScissorHands, StreamingLLM):** they evict tokens based on pa... | 2026-03-02T17:30:35 | https://www.reddit.com/r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/ | Inside-Position-668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riz852 | false | null | t3_1riz852 | /r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?auto=webp&s=e034aa19f3da14dd6602c7cc4d0d4b04e2f663b7', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/_hq-5N9hpSAsuZRM6VSvLJfC81r6JPf5Fm6OTc3zsiE.png?width=108&crop=... |
unsloth/Qwen3.5-9B-GGUF:Q8_0 failing on Ollama | 1 | I just installed unsloth/Qwen3.5-9B-GGUF:Q8\_0 via openwebui using `ollama run` [hf.co/unsloth/Qwen3.5-9B-GGUF:Q8\_0](http://hf.co/unsloth/Qwen3.5-9B-GGUF:Q8_0)
But now my requests are failing. This is the first time I am downloading from HF via OpenWebUI; I usually use models listed on the Ollama website.
`500: Oll... | 2026-03-02T17:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/ | callmedevilthebad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riz7dv | false | null | t3_1riz7dv | /r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/ | false | false | self | 1 | null |
QWEN3.5: 397B-A17B 1-bit quantization (UD-TQ1_0) vs 27B 4-bit quantization (UD-Q4_K_XL) | 1 | I'm thinking of replacing my RTX 5090 FE with an RTX PRO 6000 if the former is better. | 2026-03-02T17:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/ | hurryman2212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riz0db | false | null | t3_1riz0db | /r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/ | false | false | self | 1 | null |
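A rough footprint comparison for that decision, using ballpark effective bits-per-weight for the Unsloth dynamic quants (assumed values, not measurements):

```python
# Ballpark effective bits-per-weight: TQ1_0-style dynamic quants land
# around ~1.8 bpw, Q4_K_XL around ~4.9 bpw (assumed values).
def gib(params_b, bpw):
    return params_b * 1e9 * bpw / 8 / 1024**3

print(f"397B-A17B @ ~1.8 bpw: {gib(397, 1.8):.0f} GiB, 17B active per token")
print(f"27B dense @ ~4.9 bpw: {gib(27, 4.9):.0f} GiB, 27B active per token")
```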
Axe - a precision agentic coder. large codebases. zero bloat. terminal-native. precise retrieval. powerful inference. open-sourced. | 1 | we built axe because we were tired of coding tools optimized for demo videos instead of production codebases.
the core problem: most agents (including claude code, codex, etc.) take the brute force approach — dump everything into context and hope the LLM figures it out. that's fine for a 500-line side project. it fall... | 2026-03-02T17:12:44 | https://v.redd.it/ljdncgwnznmg1 | EmbarrassedAsk2887 | /r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/ | 1970-01-01T00:00:00 | 0 | {} | 1riypvk | false | null | t3_1riypvk | /r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JLLtWbxvhOhgb1ORRy-2MYxqyZ3AL.png?format=pjpg&auto=webp&s=be34c295d1909fe64b1958538a74b8ccd67d5dff', 'width': 2226, 'height': 1440}, 'resolutions': [{'url': 'https://external-preview.redd.it/enY1MDRyd256bm1nMacmzVR93n6b8e7JL... | |
TP2 Framework Desktop cyankiwi/Qwen3.5-122B-A10B-AWQ-4bit llama-benchy results | 1 | # Motherboard 128GB
# Qwen3.5-122B-A10B-AWQ-4bit Benchmark Results
Model: cyankiwi/Qwen3.5-122B-A10B-AWQ-4bit
Network: Mellanox ConnectX-3 MCX311A-XCAT CX311A 10GbE SFP+ over RoCE v1
# 1x Framework Desktop 128GB (TP1)
|Test|t/s (total)|t/s (req)|Peak t/s|Peak t/s (req)|TTFR (ms)|Est PPT (ms)|E2E TTFT (ms)|
|:-|:... | 2026-03-02T17:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/1riyp47/tp2_framework_desktop/ | MirecX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riyp47 | false | null | t3_1riyp47 | /r/LocalLLaMA/comments/1riyp47/tp2_framework_desktop/ | false | false | self | 1 | null |
Access to DGX H200 — Looking for best model to perform Distillation | 1 | Hi all,
I have temporary research access to a DGX H200 cluster and want to use the compute meaningfully rather than waste cycles on random fine-tunes.
My current thinking:
• Start from Llama 3.1 70B or Mixtral 8x7B as teacher
• Distill into 7B/8B deployable student models
• Focus on domain specialization (finan... | 2026-03-02T17:07:40 | https://www.reddit.com/r/LocalLLaMA/comments/1riyktj/access_to_dgx_h200_looking_for_best_model_to/ | No-Yam9526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riyktj | false | null | t3_1riyktj | /r/LocalLLaMA/comments/1riyktj/access_to_dgx_h200_looking_for_best_model_to/ | false | false | self | 1 | null |
So I have no knowledge of LLMs | 1 | [removed] | 2026-03-02T17:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1riyjpi/so_i_have_no_knowledge_of_llms/ | machinegunnedburger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riyjpi | false | null | t3_1riyjpi | /r/LocalLLaMA/comments/1riyjpi/so_i_have_no_knowledge_of_llms/ | false | false | self | 1 | null |
I am using a Qwen AI model for OpenClaw, and I thought this was free and local, so why do I keep getting this error message: "API rate limit reached. Please try again later." | 1 | Please help, I am new to OpenClaw | 2026-03-02T17:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/ | utsavsarkar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riyi54 | false | null | t3_1riyi54 | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/ | false | false | self | 1 | null |
Qwen3.5 Model Series - Thinking On/OFF: Does it Matter? | 2 | Hi, I've been testing Qwen3.5 models ranging from 2B to 122B. All configurations used Unsloth quants with LM Studio exclusively. Quantization-wise, the 2B through 9B variants run at Q8, while the 122B uses MXFP4.
Here is a summary of my observations:
**1. Smaller Models (2B – 9B)**
* **Thinking Mode Impact:** Activating... | 2026-03-02T17:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riyfg2 | false | null | t3_1riyfg2 | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/ | false | false | self | 2 | null |
lmao | 1 | 2026-03-02T16:54:36 | itsArmanJr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riy7cw | false | null | t3_1riy7cw | /r/LocalLLaMA/comments/1riy7cw/lmao/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?auto=webp&s=538dff3fd34b289f3507e046b512ffcc741fe6a9', 'width': 865, 'height': 629}, 'resolutions': [{'url': 'https://preview.redd.it/oslpxh0nwnmg1.png?width=108&crop=smart&auto=webp&s=744b4bc7e2a67a4f1d8cae0badbcfe0f08bf2645', 'width': 108, 'hei... | |||
Qwen 3.5 Non-thinking Mode Benchmarks? | 1 | Has anybody had the chance to test, or know of a benchmark on, the performance of non-thinking vs thinking mode with the Qwen 3.5 series? Very interested to see how much is being sacrificed for instant responses, as I use 27B dense, and thinking takes quite a while sometimes at \~20tps on my 3090. I find the non-thinking responses p... | 2026-03-02T16:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/ | Embarrassed_Soup_279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riy5x6 | false | null | t3_1riy5x6 | /r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/ | false | false | self | 1 | null |
the data centers are being built for mass surveillance. none of it is gonna be used to scale or bring agi. hell, llms are just function aggregators. they cant even calculate boy math. | 1 | nobody is talking about this but the compute-to-revenue ratio on hyperscaler infra makes zero sense if the use case is just "better chatbot." you don't build exaflop-scale data centers to run inference on people asking for recipe substitutions. the numbers only work if you're doing something fundamentally more data-hun... | 2026-03-02T16:52:26 | https://www.reddit.com/r/LocalLLaMA/comments/1riy56h/the_data_centers_are_being_built_for_mass/ | EmbarrassedAsk2887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riy56h | false | null | t3_1riy56h | /r/LocalLLaMA/comments/1riy56h/the_data_centers_are_being_built_for_mass/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg.jpeg?auto=webp&s=23bde8732db9e27921532ecb811e619a854d3450', 'width': 280, 'height': 280}, 'resolutions': [{'url': 'https://external-preview.redd.it/sTMyflae6NqJn3BaLIEKMamYH_3a81n4XCq4uqu9hzg.jpeg?width=108&crop... |
New to local llm, which model to use with a 4090? | 1 | Hey everyone, total newcomer to local LLMs here.
Just set up Ollama on a 4090/14900K and want to run a local LLM for agentic coding like OpenClaw and vibe coding with Claude Code.
Given the 24GB VRAM limit and that I’m still figuring out context management, which model gives the best "out of the box" experience?
... | 2026-03-02T16:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/ | azndkflush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rixlj6 | false | null | t3_1rixlj6 | /r/LocalLLaMA/comments/1rixlj6/new_to_local_llm_which_model_to_use_with_a_4090/ | false | false | self | 1 | null |
~40× speedup and 90% VRAM reduction on vLLMs compared to FlashAttention by exploiting Grouped Query Attention symmetries | 1 | LLMs suffer on long contexts; they're memory- and throughput-limited on the GPU. We solved this. I built a Triton kernel that beats FlashAttention decode: up to 40x the speed, and 84%-90% VRAM reduction enabling 2.0-4.0x longer context windows on the same hardware.
[https://github.com/leochlon/mezzanine/tree/main/mezzani... | 2026-03-02T16:29:01 | Upset-Presentation28 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rixhj9 | false | null | t3_1rixhj9 | /r/LocalLLaMA/comments/1rixhj9/40_speedup_and_90_vram_reduction_on_vllms/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/q091u99ernmg1.png?auto=webp&s=ba9e7d3f2eefb4b222b303467a94b3d8e1cd161f', 'width': 3410, 'height': 1870}, 'resolutions': [{'url': 'https://preview.redd.it/q091u99ernmg1.png?width=108&crop=smart&auto=webp&s=187218b25e0ef07b806360ae8f82e35b5225ac6d', 'width': 108, 'h... | ||
Qwen3.5-122B Heretic GGUFs | 1 | https://huggingface.co/mradermacher/Qwen3.5-122B-A10B-heretic-GGUF
Not my ggufs just thought it's worth sharing. No more refusals! | 2026-03-02T16:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/ | durden111111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rixh53 | false | null | t3_1rixh53 | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?auto=webp&s=4a2adaaee080e90a56ce7f8778a7e5f619ec4f8d', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/DR87IEReTm1bBTwwh6gwsIJMMVh5zZ_ShzXeVfkyNKs.png?width=108&crop=... |
Is Qwen3.5-9B enough for Agentic Coding? | 1 | On the coding section, the 9B model beats Qwen3-30B-A3B on all items, and beats Qwen3-Next-80B and GPT-OSS-20B on a few items. It also maintains numbers in the same range as Qwen3-Next-80B and GPT-OSS-20B on a few items.
(If Qwen releases a 14B model in the future, surely it would beat GPT-OSS-120B too.)
So as mentioned in the title, Is 9B model is en... | 2026-03-02T16:09:47 | pmttyji | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riwy9w | false | null | t3_1riwy9w | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?auto=webp&s=12c335cdf5cf5d29de8b1b1bdb737db82f6a9088', 'width': 606, 'height': 529}, 'resolutions': [{'url': 'https://preview.redd.it/bxh90z4gjnmg1.png?width=108&crop=smart&auto=webp&s=3f49f139534785b678b150d5f1ae737d8acfe839', 'width': 108, 'hei... | ||
PSA: LM Studio's parser silently breaks Qwen3.5 tool calling and reasoning: a year of connected bug reports | 1 | I love LM Studio, but there have been bugs over its life that have made it difficult for me to completely make the move to a 90:10 local model reliance with frontier models as advisory only. This morning, I filed 3 critical bugs and pulled together a report that collects a lot of issues over the last \~year that seem t... | 2026-03-02T15:52:55 | https://www.reddit.com/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/ | One-Cheesecake389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riwhcf | false | null | t3_1riwhcf | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?auto=webp&s=56844cb7df169048f36825ae568455ac55ff2164', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/hMOBTNJ34I4GprLayq1KhLoKH6s3wV5ZdZV6dPZM1WE.png?width=108&crop=... |
Speculative decoding with Qwen3.5, is it working for anyone? | 1 | Has anyone gotten speculative decoding with the Qwen3.5 0.8B draft model to work yet? Here’s my command and the result I’ve been getting
/llama.cpp/build/bin/llama-server -m /.cache/llama.cpp/Qwen3.5-397B-A17B-MXFP4_MOE-00001-of-00006.gguf -md .cache/llama.cpp/Qwen3.5-0.8B-Q8_0.gguf -c 64000 -cd 64000
srv load_model: in... | 2026-03-02T15:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/1riwd56/speculative_decoding_with_qwen35_is_it_working/ | Frequent-Slice-6975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riwd56 | false | null | t3_1riwd56 | /r/LocalLLaMA/comments/1riwd56/speculative_decoding_with_qwen35_is_it_working/ | false | false | self | 1 | null
MCP co-location: STDIO (4–9ms, single client) vs HTTP (remote, multi-client). When do you actually need the latter? | 1 | MCP servers use STDIO for local/co-located setups — the host spawns the server as a subprocess, JSON-RPC over stdin/stdout. No network, no TLS. Latency is ~4–9ms, but you only get one client.
HTTP/StreamableHTTP lets you run MCP servers remotely with multi-client support, but adds network latency and auth complexity.... | 2026-03-02T15:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/ | hack_the_developer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riw6kd | false | null | t3_1riw6kd | /r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/ | false | false | self | 1 | null |
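To make the co-location trade-off concrete: the STDIO transport really is just a spawned subprocess exchanging newline-delimited JSON-RPC 2.0 messages over its pipes, which is why latency stays in the single-digit milliseconds. A minimal host-side sketch in Python — `my_mcp_server.py` is a placeholder for whatever server you run, and the `protocolVersion` string is an assumption (check the MCP spec for the version your server expects):

```python
import json
import subprocess

# Spawn the MCP server as a child process; its stdin/stdout are the transport.
proc = subprocess.Popen(
    ["python", "my_mcp_server.py"],  # placeholder server command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def rpc(method, params, msg_id=1):
    # One newline-delimited JSON-RPC 2.0 request/response round trip.
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

print(rpc("initialize", {
    "protocolVersion": "2025-06-18",  # assumed; use the version your server expects
    "capabilities": {},
    "clientInfo": {"name": "demo-host", "version": "0.1"},
}))
```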
Just saw it on the last page refresh: Qwen quantized models are now on Ollama | 1 | Pulling 4B and 9B for myself. 0.8B there for cell phones. | 2026-03-02T15:36:46 | https://ollama.com/library/qwen3.5 | PlainBread | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1riw1ml | false | null | t3_1riw1ml | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=... | |
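A quick sanity check once the pull finishes, using the official `ollama` Python client (`pip install ollama`); the `qwen3.5:4b` tag is taken from the library page above and may differ on your install:

```python
import ollama  # talks to the local Ollama daemon on its default port

resp = ollama.chat(
    model="qwen3.5:4b",  # assumed tag; run `ollama list` to confirm
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp["message"]["content"])
```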
Qwen 3.5 2B is an OCR beast | 1 | It can read text from all angles and qualities (from clear scans to potato phone pics) and supports structured output.
Previously I was using Ministral 3B and it was good but needed some image pre-processing to rotate images correctly for good results. I will continue to test more.
I tried Qwen 3.5 0.8B but for some... | 2026-03-02T15:34:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/ | deadman87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rivzcl | false | null | t3_1rivzcl | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/ | false | false | self | 1 | null |
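If you want to reproduce this kind of OCR run, any OpenAI-compatible local server with vision support (llama.cpp's llama-server, LM Studio, vLLM, ...) accepts images as base64 data URLs. A minimal sketch — the port, model id, and `receipt.jpg` are placeholders for your own setup:

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

with open("receipt.jpg", "rb") as f:  # placeholder image
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="qwen3.5-2b",  # assumed id; depends on your server config
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe all text in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```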
Does Qwen3.5 4B support thinking? | 1 | Does Qwen3.5 4B support thinking? When testing 9B it thinks by default; with 4B it doesn't, and adding the following to my API call doesn't do anything. I'm using LM Studio
'extra_body' => [
    "chat_template_kwargs" => ["enable_thinking" => true],
] | 2026-03-02T15:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/ | IvnN7Commander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rivo6f | false | null | t3_1rivo6f | /r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/ | false | false | self | 1 | null
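For reference, the same request via the OpenAI Python client looks like the sketch below; `extra_body` fields are passed through to the server verbatim, so if thinking still doesn't toggle, the likely culprit is the server not forwarding `chat_template_kwargs` into the chat template rather than the request itself. The base URL and model id are assumptions for a local LM Studio instance:

```python
from openai import OpenAI

# LM Studio's default local endpoint; adjust if yours differs.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen3.5-4b",  # assumed id; check your server's /v1/models
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    # Forwarded as-is in the request body; the server must honor it.
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(resp.choices[0].message.content)
```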
Visualizing All Qwen 3.5 vs Qwen 3 Benchmarks | 1 | I averaged out the official scores from today’s and last week's release pages to get a quick look at how the new models stack up.
* **Purple/Blue/Cyan:** New Qwen3.5 models
* **Orange/Yellow:** Older Qwen3 models
The choice of Qwen3 models is simply based on which ones Qwen included in their new comparisons.
The bar... | 2026-03-02T15:10:24 | Jobus_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rivckt | false | null | t3_1rivckt | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?auto=webp&s=df929e45bdc827cb7368d875f253bf5b373513e8', 'width': 2243, 'height': 1035}, 'resolutions': [{'url': 'https://preview.redd.it/f6p9a7nibnmg1.png?width=108&crop=smart&auto=webp&s=d9c86e7cec5d32e90d22b2ddbdacf3f7d1bc3c86', 'width': 108, 'h... | ||
What's Possible with Video Now? | 1 | I've been feeding Qwen VL one frame at a time (usually 1 fps) to analyze video. Works well. But I realized today that I don't know if I can just give it a video clip. Does that work? I run on a Mac, if that matters.
Qwen 3.5 2B on Android | 1 | App: https://github.com/Vali-98/ChatterUI/releases/tag/v0.8.9-beta9
Note that this pre-release is very experimental.
Hardware: Poco F5, Snapdragon 7 Gen 2
---
Ive been excited for Qwen 3.5's release, but it seems to be much slower compared to other models of similar size, likely due to some architecture differenc... | 2026-03-02T15:01:20 | https://v.redd.it/yui76dticnmg1 | ----Val---- | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1riv3wv | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/yui76dticnmg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'width': 720, 'scrubber_media_url': 'https://v.redd.it/yui76dticnmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/yui76dticnmg1/DASHPlaylist.mpd?a=1775055718%2CNTZlY... | t3_1riv3wv | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzmuk6NTSggrTTf3ldLQWhAXaYHo.png?format=pjpg&auto=webp&s=6c7e56e2eeb1135c002b95712acc563399097def', 'width': 405, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/dDFmZ3Rndmljbm1nMfgnKODLGXrl9BlVJzm... | |
Genuinely fascinating, but also kind of terrifying... | 1 | From time to time I run through my pen-test runbook against my media server hosted on a cloud VPS and harden what I can based on new CVEs that come out.
This time I decided to take it a step further, using an OpenCode harness with the Qwen3.5-27B-Heretic-Q6_K model running via LM Studio — mainly to avoid refusals and have it... | 2026-03-02T14:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/ | ImmenseFox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riuywe | false | null | t3_1riuywe | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/ | false | false | self | 1 | null
Is Qwen3.5 2B an instruct model? | 1 | I tried Qwen's new 2B model; it's very fast, and the thinking output is not showing in the llama.cpp server | 2026-03-02T14:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/ | NegotiationNo1504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riuwsw | false | null | t3_1riuwsw | /r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/ | false | false | self | 1 | null
How can I enable Context Shifting in Llama Server? | 1 | ```makefile
# Compose a ~30-bit seed from two 15-bit $RANDOM draws
SEED := $(shell bash -c 'echo $$((RANDOM * 32768 + RANDOM))')
QWEN35="$(MODELS_PATH)/unsloth/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf"
FLAGS += --seed $(SEED)
FLAGS += --ctx-size 16384
FLAGS += --cont-batching   # continuous batching across requests
FLAGS += --context-shift   # opt-in in recent llama.cpp builds; older builds had it on by default with --no-context-shift to disable
FLAGS += --host 0.0.0.0... | 2026-03-02T14:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/ | source-drifter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riuttn | false | null | t3_1riuttn | /r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/ | false | false | self | 1 | null |
how to fix endless looping with Qwen3.5? | 1 | It seems fine for coding-related tasks, but on anything general it struggles hard and starts looping | 2026-03-02T14:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riunee | false | null | t3_1riunee | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/ | false | false | self | 1 | null
AMD details Ryzen AI 400 desktop with up to 8 cores, Radeon 860M graphics | 1 | [https://www.tomshardware.com/pc-components/cpus/amd-details-ryzen-ai-400-desktop-with-up-to-8-cores-radeon-860m-graphics-apus-wont-be-available-as-boxed-units-only-in-oem-systems](https://www.tomshardware.com/pc-components/cpus/amd-details-ryzen-ai-400-desktop-with-up-to-8-cores-radeon-860m-graphics-apus-wont-be-avail... | 2026-03-02T14:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/1riu9gi/amd_details_ryzen_ai_400_desktop_with_up_to_8/ | takuonline | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riu9gi | false | null | t3_1riu9gi | /r/LocalLLaMA/comments/1riu9gi/amd_details_ryzen_ai_400_desktop_with_up_to_8/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?auto=webp&s=565c537c193b179809ba9435dbc8508a0e56bfb1', 'width': 2391, 'height': 1345}, 'resolutions': [{'url': 'https://external-preview.redd.it/3LO49iOXaUZbcZh8LCa1iQPUAoGCNsu5y0us7844AkU.png?width=108&crop... |
Schema-only AI for data analysis, or why your LLM doesn't need to see your data to query it | 1 | I've been using Ollama for something that I think is a genuinely good local LLM use case beyond chat.
The idea: for data analysis questions, the model only needs column names and types to generate SQL. You feed it the schema (and some stats), it writes the query, DuckDB-WASM executes it in the browser. The model never... | 2026-03-02T14:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1riu2ij/schemaonly_ai_for_data_analysis_or_why_your_llm/ | United-Stress-1343 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riu2ij | false | null | t3_1riu2ij | /r/LocalLLaMA/comments/1riu2ij/schemaonly_ai_for_data_analysis_or_why_your_llm/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?auto=webp&s=ec27acf5827079cecbf90f9b85b2b888b63c4018', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/BV_K9Cl5Hy_Z14dxQJAejcNj9e5UO99bxGVxsShkAT4.png?width=108&crop=... |
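The post runs DuckDB-WASM in the browser, but the same schema-only loop is easy to sketch server-side. Everything below (endpoint, model tag, `sales.csv`, the naive fence-stripping) is illustrative, and in practice you would validate or sandbox the generated SQL before executing it:

```python
import json
import duckdb
from openai import OpenAI

# Any OpenAI-compatible local endpoint works; this assumes Ollama's.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

con = duckdb.connect()
con.execute("CREATE TABLE sales AS SELECT * FROM 'sales.csv'")  # placeholder data

# Only column names/types leave the process -- the rows never reach the model.
schema = con.execute("DESCRIBE sales").fetchall()
prompt = (
    f"Table 'sales' has columns: {json.dumps(schema, default=str)}\n"
    "Write one DuckDB SQL query answering: total revenue per month. "
    "Return only the SQL, no explanation."
)

raw = client.chat.completions.create(
    model="qwen3.5:9b",  # assumed tag
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

sql = raw.strip()
if sql.startswith("```"):  # naive markdown-fence stripping
    sql = sql.strip("`").removeprefix("sql").strip()

print(con.execute(sql).fetchdf())  # executed locally; needs pandas installed
```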
A local “LLM session recorder command center” for all API/Codex/Code/ChatGPT sessions? | 1 | Hey, I’m looking for a tool that can sit in between (or kind of “on top of”) all these different AI apps/clients/GUI wrappers and record my sessions outside of whatever app I’m using.
I keep bouncing between tools and backends, and it feels like a lot of really valuable prompts + model responses just disappear into ra... | 2026-03-02T14:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/ | dadaphl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1riu1zd | false | null | t3_1riu1zd | /r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/ | false | false | self | 1 | null |
Why Voice is the Perfect Starting Point for On-Device AI | 1 | 2026-03-02T14:19:04 | https://izwiai.com/blog/why-voice-is-the-perfect-starting-point | zinyando | izwiai.com | 1970-01-01T00:00:00 | 0 | {} | 1riu1nn | false | null | t3_1riu1nn | /r/LocalLLaMA/comments/1riu1nn/why_voice_is_the_perfect_starting_point_for/ | false | false | default | 1 | null | |
TRAINING A 670B MODEL ON A GTX 1060 IS NOW REAL | 1 | [removed] | 2026-03-02T14:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ritxkd/train_670b_model_on_gtx_1060_from_now_is_real/ | Actual_Wolf_2932 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ritxkd | false | null | t3_1ritxkd | /r/LocalLLaMA/comments/1ritxkd/train_670b_model_on_gtx_1060_from_now_is_real/ | false | false | self | 1 | null
OSS-120B beats all open models but one in new WeirdML Data Science benchmark | 0 | 2026-03-02T14:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/ | magnus-m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ritr5v | false | null | t3_1ritr5v | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/ | false | false | 0 | null | ||
Released: AI Cost Router — 100% local LLM router (Ollama) | 0 | If you’ve ever wanted an LLM router that:
✔ Costs $0
✔ Runs fully offline
✔ Has clean config
✔ Works with TypeScript
…then check this out:
👉 [https://github.com/shivadeore111-design/ai-cost-router](https://github.com/shivadeore111-design/ai-cost-router)
Fully local, minimal, and ready for tinkering... | 2026-03-02T14:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/ | Suitable-Form8694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ritplu | false | null | t3_1ritplu | /r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SkKn9uhopQsEZyvdEQvBV2h-tut_OEq7RSy68HoVRf8.png?width=108&crop=smart&auto=webp&s=433ba03fb0400acdaa2c1fb742dd4e65cf5370ba', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen3.5-2B-GGUF is here! | 1 | 2026-03-02T14:02:04 | https://huggingface.co/AaryanK/Qwen3.5-2B-GGUF | KvAk_AKPlaysYT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ritmjb | false | null | t3_1ritmjb | /r/LocalLLaMA/comments/1ritmjb/qwen352bgguf_is_here/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UTBnxlhv5svqlV78qwWGneyv6o2N_hOeL_T5SUD2u0c.png?width=108&crop=smart&auto=webp&s=87f9fa4cefcccabbb3de1c2b4b107e2d0b6bbb48', 'width': 108}, {'height': 116, 'url': 'h... | ||
Qwen3.5-0.8B-GGUF is here! | 1 | 2026-03-02T14:01:23 | https://huggingface.co/AaryanK/Qwen3.5-0.8B-GGUF | KvAk_AKPlaysYT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ritlux | false | null | t3_1ritlux | /r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/h0Py-Ta_vsofHW6viKSDiZdB4beO4_yqSl3RkFQZbUI.png?width=108&crop=smart&auto=webp&s=4589b16ec3b3b805409a5ff8005519ad51377718', 'width': 108}, {'height': 116, 'url': 'h... | ||
PSA: unsloth Qwen3.5 9/4/2/0.8B Quants are out | 0 | The usual and UD quants all here | 2026-03-02T13:53:19 | https://huggingface.co/collections/unsloth/qwen35 | mmkzero0 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ritepj | false | null | t3_1ritepj | /r/LocalLLaMA/comments/1ritepj/psa_unsloth_qwen35_94208b_quants_are_out/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=108&crop=smart&auto=webp&s=bc22945ffd1a5b4538e9461f0008217c12ab36d5', 'width': 108}, {'height': 116, 'url': 'h... | |
Improve Qwen3.5 Performance on Weak GPU | 31 | I'm running Qwen3.5-27B-Q2_K.gguf, Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf and Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf on my PC using llama.cpp and want to know if there are some tweaks I can do to improve the performance.
Currently I'm getting:
- 54 t/s with the Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf
- 15 t/s with the Qwen3... | 2026-03-02T13:50:32 | MarketingGui | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ritcfr | false | null | t3_1ritcfr | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/ | false | false | 31 | {'enabled': True, 'images': [{'id': 'apfbjikvzmmg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=108&crop=smart&auto=webp&s=c753b95b898529e65a254de91a56ab629aafba64', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/apfbjikvzmmg1.png?width=216&crop=smart&auto=web...