name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8al8rl
Thank you so much for sharing! With this 35B local beast, I am getting over 100 tps even at bs=1. With the webdev tool and scheduler plugin, this is an agent for real!
1
0
2026-03-02T20:46:11
Heavy_Buyer
false
null
0
o8al8rl
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o8al8rl/
false
1
t1_o8al3br
Which 70b quantized model are you using?
1
0
2026-03-02T20:45:27
Frequent_Project_718
false
null
0
o8al3br
false
/r/LocalLLaMA/comments/18f6sae/got_myself_a_4way_rtx_4090_rig_for_local_llm/o8al3br/
false
1
t1_o8al12y
If you run the model through the hf flag, it resolves the mmproj for you; if you're running from cache, you have to also pass the mmproj, which is in the cache as well.
1
0
2026-03-02T20:45:09
OakShortbow
false
null
0
o8al12y
false
/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/o8al12y/
false
1
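For reference, a minimal llama.cpp sketch of the two invocations described in the comment above; the repo, quant tag, and file paths here are placeholders, not details taken from the thread:

```bash
# Option 1: load from Hugging Face via -hf, which also resolves the mmproj automatically
llama-server -hf unsloth/Qwen3.5-9B-GGUF:Q4_K_XL

# Option 2: run from local/cached files and pass the vision projector explicitly
llama-server \
  --model  ~/.cache/llama.cpp/Qwen3.5-9B-Q4_K_XL.gguf \
  --mmproj ~/.cache/llama.cpp/mmproj-BF16.gguf
```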
t1_o8al0j8
Qwen 3.5 models have a draft model included, but in the case of 122B I found that it actually makes it slower; perhaps it's not optimized yet, or 122B is already quite fast. But for other models, for example qwen3.5-27B, the included draft model makes it faster.
1
0
2026-03-02T20:45:05
ortegaalfredo
false
null
0
o8al0j8
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8al0j8/
false
1
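As a point of reference for the draft-model discussion above, here is a minimal llama.cpp speculative-decoding sketch; the file names and draft size are illustrative, and the exact flag names can vary between llama.cpp versions:

```bash
# Speculative decoding: large target model plus a small separate draft model
llama-server \
  --model Qwen3.5-122B-A10B-Q4_K_M.gguf \
  --model-draft Qwen3.5-2B-Q8_0.gguf \
  --gpu-layers 99 --gpu-layers-draft 99 \
  --draft-max 16 --draft-min 1
```

Whether this helps depends on how often the draft's tokens are accepted; for an already fast MoE target the overhead can outweigh the gain, which matches the commenter's observation.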
t1_o8akrf8
You misunderstood. You still very much need the mmproj file.
1
0
2026-03-02T20:43:50
SarcasticBaka
false
null
0
o8akrf8
false
/r/LocalLLaMA/comments/1rj4ktw/qwen3535ba3b_vision_capabilties_in_llamacpp/o8akrf8/
false
1
t1_o8akpac
How’d you get NPU support working on Linux? I thought the drivers still weren’t public from AMD. For gpt-oss-20b, you definitely shouldn’t be using a Q4_0 quant. Use the native MXFP4. FastFlowLM has some benchmarks, and with a less powerful computer they were seeing 450+ PP, which seems more in-line with what I’ve observed on Windows with my laptop. Are you sure you’re using the NPU? The PP and TG numbers being so close is suspicious. The TG seems to be right about what they were measuring.
1
0
2026-03-02T20:43:34
EffectiveCeilingFan
false
null
0
o8akpac
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8akpac/
false
1
t1_o8akoyz
Seems interesting, hope it’ll be good. Any advice for a 4070 Super?
1
0
2026-03-02T20:43:31
tableball35
false
null
0
o8akoyz
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8akoyz/
false
1
t1_o8akkz5
custom trained classifier models are so dead
1
0
2026-03-02T20:42:59
Western_Objective209
false
null
0
o8akkz5
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8akkz5/
false
1
t1_o8akkyb
Nice to see NPU data like this! i do wonder how much optimization in software can improve those token rates in the future
1
0
2026-03-02T20:42:58
smwaqas89
false
null
0
o8akkyb
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8akkyb/
false
1
t1_o8akjz6
Maybe if you joined two years ago or so you could experience it like this. Now there is a million people here, lots of them mean and snarky. Shit's crazy, beyond control.
1
0
2026-03-02T20:42:51
fairydreaming
false
null
0
o8akjz6
false
/r/LocalLLaMA/comments/1riy7cw/lmao/o8akjz6/
false
1
t1_o8akgvb
Thanks for the feedback, I really appreciate it!
1
0
2026-03-02T20:42:26
Pr0tuberanz
false
null
0
o8akgvb
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8akgvb/
false
1
t1_o8akeap
That is the non-reasoning opus 4.6
1
0
2026-03-02T20:42:05
Potential_Block4598
false
null
0
o8akeap
false
/r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/o8akeap/
false
1
t1_o8ak9q2
I’m sure I was already getting close to that speed anyway. What were you getting without using 2b?
1
0
2026-03-02T20:41:27
And-Bee
false
null
0
o8ak9q2
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ak9q2/
false
1
t1_o8ak98r
Yeah, but can it be that good in a 5GB file? We'll see. DDR prices are so high because the memory manufacturers are booked for years for datacenter AI chips.
1
0
2026-03-02T20:41:23
Traditional-Card6096
false
null
0
o8ak98r
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8ak98r/
false
1
t1_o8ak983
It just does abliteration automatically: it has a bunch of prompts, and it checks the layers as the model refuses those prompts, a lot like dynamic quants from unsloth. The layers that light up and say no all have their influence lowered bit by bit until the model stops throwing refusals. It isn't a finetune; it's way easier than that.
1
0
2026-03-02T20:41:23
ArtfulGenie69
false
null
0
o8ak983
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8ak983/
false
1
t1_o8ak4fi
[removed]
1
0
2026-03-02T20:40:44
[deleted]
true
null
0
o8ak4fi
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8ak4fi/
false
1
t1_o8ak2xk
I feel like when I tried it I was getting 5 tok/sec, whereas I get 50+ on MLX models like OSS 120B (macOS).
1
0
2026-03-02T20:40:32
Virtamancer
false
null
0
o8ak2xk
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ak2xk/
false
1
t1_o8ajx38
Can still speculate.
1
0
2026-03-02T20:39:44
sourceholder
false
null
0
o8ajx38
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ajx38/
false
1
t1_o8ajwp9
Me too. Now I’m not only GPU poor but also money poor.
1
0
2026-03-02T20:39:40
ul90
false
null
0
o8ajwp9
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ajwp9/
false
1
t1_o8aju9g
Flashinfer + tensor parallel, mtp disabled.
1
0
2026-03-02T20:39:20
Nepherpitu
false
null
0
o8aju9g
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8aju9g/
false
1
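A rough sketch of what "Flashinfer + tensor parallel, mtp disabled" might look like as a vLLM launch; the model name, TP degree, and context length below are assumptions, not details given by the commenter:

```bash
# Tensor parallel over 4 GPUs with the FlashInfer attention backend;
# no speculative/MTP config is passed, so multi-token prediction stays off
VLLM_ATTENTION_BACKEND=FLASHINFER \
vllm serve Qwen/Qwen3.5-122B-A10B \
  --tensor-parallel-size 4 \
  --max-model-len 131072
```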
t1_o8ajtis
ok, will delete.
1
0
2026-03-02T20:39:14
alichherawalla
false
null
0
o8ajtis
false
/r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/o8ajtis/
false
1
t1_o8ajnfq
This is not the appropriate forum for advertising your own project
1
0
2026-03-02T20:38:25
BumbleSlob
false
null
0
o8ajnfq
false
/r/LocalLLaMA/comments/1rj4ee5/qwen35_on_off_grid/o8ajnfq/
false
1
t1_o8ajlu7
I'm sorry. On Linux, I use Llama-Swap, and on Windows, I use Ollama. Here is my Llama-Swap configuration, if it's useful to you:

```
"qwen3.5-4b-q8":
  description: "Qwen3.5 4B VL (Q8)"
  filters:
    stripParams: "temperature, top_k, top_p, repeat_penalty, min_p, presence_penalty, chat_template_kwargs"
    setParamsByID:
      "${MODEL_ID}-thinking":
        temperature: 0.6
        top_p: 0.95
        presence_penalty: 0.0
        chat_template_kwargs:
          enable_thinking: true
      "${MODEL_ID}-instruct":
        temperature: 1.0
        top_p: 0.95
        presence_penalty: 1.5
        chat_template_kwargs:
          enable_thinking: false
  metadata:
    model: "https://huggingface.co/unsloth/Qwen3.5-4B-GGUF/resolve/main/Qwen3.5-4B-Q8_0.gguf?download=true"
    mmproj: "https://huggingface.co/unsloth/Qwen3.5-4B-GGUF/resolve/main/mmproj-BF16.gguf?download=true"
  ttl: 900
  cmd: |
    ${server_cmd}
    --model ${models_dir}/unsloth/Qwen3.5-4B-Q8_0.gguf
    --mmproj ${models_dir}/unsloth/Qwen3.5-4B-Q8_0.mmproj.gguf
    --ctx-size 262144
    --n-predict 32768
    --top-p 0.95
    --top-k 20
    --min-p 0.0
    --temp 1.0
    --repeat-penalty 1
    --image-min-tokens 1024
```
1
0
2026-03-02T20:38:12
vk3r
false
null
0
o8ajlu7
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8ajlu7/
false
1
t1_o8ajlsf
Any eta on instruct?
1
0
2026-03-02T20:38:11
Busy-Chemistry7747
false
null
0
o8ajlsf
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ajlsf/
false
1
t1_o8ajioz
Multi-token prediction. Basically the same as EAGLE-3 speculative decoding. I am currently training one for MiniMax M2.5.
1
0
2026-03-02T20:37:46
getfitdotus
false
null
0
o8ajioz
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ajioz/
false
1
t1_o8ajbx2
Yeah, got it, thank you. I will try with the dense 27b model and share results ASAP. Thanks again.
1
0
2026-03-02T20:36:50
Potential_Block4598
false
null
0
o8ajbx2
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ajbx2/
false
1
t1_o8aj8nu
What is mtp?!
1
0
2026-03-02T20:36:23
Potential_Block4598
false
null
0
o8aj8nu
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8aj8nu/
false
1
t1_o8aj6xz
Yeah, totally agree. Just tried it and it is not that great (I think incomparable). I will try with the 27B model though, since it is a “dense” model and allegedly slightly better on some benchmarks (thanks, MoEs!).
1
0
2026-03-02T20:36:08
Potential_Block4598
false
null
0
o8aj6xz
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8aj6xz/
false
1
t1_o8aj63i
[removed]
1
0
2026-03-02T20:36:02
[deleted]
true
null
0
o8aj63i
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8aj63i/
false
1
t1_o8aj66b
Yeah, totally agree. Just tried it and it is not that great (I think incomparable). I will try with the 27B model though, since it is a “dense” model and allegedly slightly better on some benchmarks (thanks, MoEs!).
1
0
2026-03-02T20:36:02
Potential_Block4598
false
null
0
o8aj66b
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8aj66b/
false
1
t1_o8aj2mk
How does it even work? Oo
1
0
2026-03-02T20:35:32
raiffuvar
false
null
0
o8aj2mk
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8aj2mk/
false
1
t1_o8aj2gd
Same here 32tkps same quant and rtx 2070 too! More than usable tbh if you ignore cloud models.
1
0
2026-03-02T20:35:30
sagiroth
false
null
0
o8aj2gd
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aj2gd/
false
1
t1_o8aj0kt
How is that? I was going to run the smaller models as draft models. Could you explain more please (and I don’t mean self-speculation here tbh)?
1
0
2026-03-02T20:35:15
Potential_Block4598
false
null
0
o8aj0kt
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8aj0kt/
false
1
t1_o8aiyk3
hahahaha
1
0
2026-03-02T20:34:58
EmbarrassedAsk2887
false
null
0
o8aiyk3
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aiyk3/
false
1
t1_o8aiyg7
I just tried out the 4b with a playwright mcp and a search interface and it did amazingly well. I've not found a really useful 4b model before. It's doing great as the brains of my home assistant install right now. Turned off thinking and it's very snappy, even on an amd gpu, getting 3000+ pp and 113 t/s. Using parrot instead of whisper in the stack, and this feels as responsive as Alexa; it can answer basic questions and has done decently at home assistant device control in my initial testing. The entire qwen 3.5 release has really been impressive so far.
1
0
2026-03-02T20:34:57
ravage382
false
null
0
o8aiyg7
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8aiyg7/
false
1
t1_o8aiy7z
I think it's very much capable of doing it with Qwen
1
0
2026-03-02T20:34:55
sagiroth
false
null
0
o8aiy7z
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aiy7z/
false
1
t1_o8aivjv
I just tried, 28 tps with 2B draft and ctx 32768 on 3090, llama.cpp (q4_k_m both 27b and 2b)
1
0
2026-03-02T20:34:33
Hougasej
false
null
0
o8aivjv
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8aivjv/
false
1
t1_o8aiv3t
Thanks! This is amazing and works with qwen3.5-9b. Is there a way to auto load a model on startup of llama-swap u/No-Statement-0001 ?

config.yaml:

```
includeAliasesInList: true
models:
  "Qwen":
    # This is the command llama-swap will use to spin up llama.cpp in the background.
    cmd: >
      llama-server --port ${PORT} --host 127.0.0.1
      --model /models/Qwen.gguf --mmproj /models/mmproj-BF16.gguf
      --image-min-tokens 1024 --n-gpu-layers 99 --threads 4
      --ctx-size 16576 --flash-attn on --parallel 1 --batch-size 4096
      --no-mmap --logit-bias 151645+1 -r "<|im_end|>" -n 2048
    filters:
      # Strip incoming parameters from your chat UI to enforce our optimal mode-specific settings
      stripParams: "temperature, top_p, top_k, min_p, presence_penalty, repeat_penalty"
      setParamsByID:
        # Virtual Model 1: Standard Thinking Mode
        "${MODEL_ID}:thinking":
          chat_template_kwargs:
            enable_thinking: true
          temperature: 1.0
          top_p: 0.95
          top_k: 20
          min_p: 0.0
          presence_penalty: 1.5
          repeat_penalty: 1.0
        # Virtual Model 2: Instruct Mode (No Thinking)
        "${MODEL_ID}:instruct":
          chat_template_kwargs:
            enable_thinking: false
          temperature: 0.7
          top_p: 0.8
          top_k: 20
          min_p: 0.0
          presence_penalty: 1.5
          repeat_penalty: 1.0
```

docker-compose:

```
version: '3.8'
services:
  llama-swap:
    image: ghcr.io/mostlygeek/llama-swap:cuda
    container_name: llama-swap-qwen35
    restart: unless-stopped
    ports:
      - "8880:8080"   # Maps Host 8880 to Container 8080
    volumes:
      - /mnt/AI/models/qwen35/9b:/models
      # Mount the config file into the container
      - /mnt/AI/models/config.yaml:/app/config.yaml
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    # Instruct llama-swap to run using our config file
    command: --config /app/config.yaml --listen 0.0.0.0:8080
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
1
0
2026-03-02T20:34:29
andy2na
false
null
0
o8aiv3t
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o8aiv3t/
false
1
t1_o8aiu82
Here's my experience briefly this morning: With an RTX 4070 (12gb VRAM), on Windows, in llama.cpp webui I'm getting ~56 t/s. `Qwen3.5-9B-UD-Q4_K_XL` uses ~8gb VRAM at 32k context; I can definitely go longer! running on llama.cpp with the unsloth guide: https://unsloth.ai/docs/models/qwen3.5#how-to-enable-or-disable-reasoning-and-thinking __________ here's my example powershell command to run (need `llama-server` for thinking, otherwise you can use `llama-cli` without thinking): ``` .\llama-server.exe -m ".\models\Qwen3.5-9B-UD-Q4_K_XL.gguf" -ngl 99 --ctx-size 32768 --temp 1.0 --top-p 0.95 --top-k 20 --min-p 0.00 --port 8080 --chat-template-kwargs '{"enable_thinking":true}' ```
1
0
2026-03-02T20:34:22
huffalump1
false
null
0
o8aiu82
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8aiu82/
false
1
t1_o8ais1k
8GB VRAM, 32GB RAM. For side projects: Gemini, Kimi, GitHub Copilot, whatever is trendy. Locally: Qwen 3.5 35B A3B (Q4_K_M) at 64k context and 32 tk/s output (62 tk/s read).
1
0
2026-03-02T20:34:04
sagiroth
false
null
0
o8ais1k
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ais1k/
false
1
t1_o8aiqb2
hahahahah tell me about it. Vibe coders are the ones leading engineers into misery by undermining what we actually do. They don't know whether engineering is just building websites, or whether serving something to millions of users and maintaining it over a long period of time is what's actually called engineering. Btw, here is the open source coding agent we released as well -- it can be used with closed-source LLMs too, or local ones. We built it specifically for large codebases and not for greenfield projects. [https://github.com/SRSWTI/axe](https://github.com/SRSWTI/axe)
1
0
2026-03-02T20:33:51
EmbarrassedAsk2887
false
null
0
o8aiqb2
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8aiqb2/
false
1
t1_o8aimxc
What are you guys using these models for on your phones? Genuinely curious about possibilities :)
1
0
2026-03-02T20:33:21
valkiii
false
null
0
o8aimxc
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8aimxc/
false
1
t1_o8aimcb
Yeah, I did not refer to that heuristic because I've heard it's outdated. Now, I still don't expect an 80B A3B model to perform like an 80B dense model, but I am surprised it barely seems better than the 30B A3B one.
1
0
2026-03-02T20:33:16
z_latent
false
null
0
o8aimcb
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8aimcb/
false
1
t1_o8ailkp
Yes they're both english words.
1
0
2026-03-02T20:33:10
ImmenseFox
false
null
0
o8ailkp
false
/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8ailkp/
false
1
t1_o8aikym
When I was building out my home workstation (Dual 3090s) I would test potential cards by bringing a test bench or spare PC and AC power with me (my truck has 120V AC or you can bring a battery/inverter) to test with at whatever random location the sale was happening at. I would plugin the GPU and make sure it can post, had the correct details in GPU-Z, and could run inference or a game for a min or two without crapping out. I would ask if the seller was okay with on-location testing beforehand to save time/grief. If someone doesn't want me to test their GPU its either because a) they know its broken, or b) they're afraid I'll break it testing. Either way I just say thank you and move on to the next card. I never, ever, ever, ever trusted a word anyone told me about how the GPU ran or how it was just working yesterday when they pulled it from their PC, etc. etc.
1
0
2026-03-02T20:33:05
RedKnightRG
false
null
0
o8aikym
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8aikym/
false
1
t1_o8aikj2
> start using axe

At first I thought you were being mean to OP, made me giggle haha
1
0
2026-03-02T20:33:02
Xantrk
false
null
0
o8aikj2
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aikj2/
false
1
t1_o8aik8y
I don't know if it's a language barrier or what. I want to see real numbers you collected, not marketing. This video is useless
1
0
2026-03-02T20:32:59
JamesEvoAI
false
null
0
o8aik8y
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8aik8y/
false
1
t1_o8aifey
Hehe, so funny. The only way the ablit models can be bad is that they don't refuse anymore. Like there is no resistance to the user, but that's just in chat and can maybe be told in a system prompt to be more aggressive or whatever. In a more agent oriented system where they are a writing staff or something they can still follow directions and because they are willing to write anything you can get some really wild stuff. 
1
0
2026-03-02T20:32:19
ArtfulGenie69
false
null
0
o8aifey
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8aifey/
false
1
t1_o8aib17
Then no need for this post either.
1
0
2026-03-02T20:31:43
DinoAmino
false
null
0
o8aib17
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8aib17/
false
1
t1_o8ai7qw
I look forward to seeing these claims tested side by side. I know I'm coming off as hostile but I do genuinely want competition in this space. Especially if it draws attention away from Ollama. That said I also have an extremely sensitive bullshit meter as everyone and their dog is out here vibe coding "the next best thing"
1
0
2026-03-02T20:31:16
JamesEvoAI
false
null
0
o8ai7qw
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8ai7qw/
false
1
t1_o8ai7eu
definitely, i was shocked to see that on artificial analysis website
1
0
2026-03-02T20:31:14
SennVacan
false
null
0
o8ai7eu
false
/r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/o8ai7eu/
false
1
t1_o8ai4ma
I hope I am doing something wrong that can be corrected, but I've been using larger updated quants and tweaked all the params, and the new ones will still think for 8 minutes if the question is mildly complex
1
0
2026-03-02T20:30:51
segfawlt
false
null
0
o8ai4ma
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8ai4ma/
false
1
t1_o8ai2jm
the retrieval approach is probably the right starting point. fine-tuning on personal data has a slow feedback loop and mistakes get baked in before you notice, whereas with retrieval you can actually inspect and fix what is being pulled
1
0
2026-03-02T20:30:34
BC_MARO
false
null
0
o8ai2jm
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8ai2jm/
false
1
t1_o8ai0q8
glm-ocr is supposed to be used together with paddle-layout. TL;DR: clone [https://github.com/zai-org/GLM-OCR](https://github.com/zai-org/GLM-OCR) and use their SDK (glmocr parse).
1
0
2026-03-02T20:30:20
adam444555
false
null
0
o8ai0q8
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8ai0q8/
false
1
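A hedged sketch of the workflow the comment above points to; the install steps and CLI arguments are assumptions based only on the comment's mention of `glmocr parse`, so check the GLM-OCR README for the real commands:

```bash
# Assumed setup; verify against https://github.com/zai-org/GLM-OCR
git clone https://github.com/zai-org/GLM-OCR
cd GLM-OCR
pip install -e .               # install the glmocr SDK/CLI (assumed)
glmocr parse my_document.pdf   # the parse command mentioned in the comment
```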
t1_o8ahzqh
Yes please, I would love to have feedback. Face value is the most important thing we aim for. For example, here is one YT video to show its speed and tool calling capabilities. Just made a quick vid to upload: [https://youtu.be/_Dc7VZXL7xo](https://youtu.be/_Dc7VZXL7xo)
1
0
2026-03-02T20:30:12
EmbarrassedAsk2887
false
null
0
o8ahzqh
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8ahzqh/
false
1
t1_o8ahz6b
slop on the x axis, OP's poop output on the y axis.
1
0
2026-03-02T20:30:08
One-Employment3759
false
null
0
o8ahz6b
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8ahz6b/
false
1
t1_o8ahqq2
I'll second Aider here. It's your best bet. That being said, I think your machine is a bit short of real viability for local coding. Maybe try Qwen3-30B-Coder at IQ2?
1
0
2026-03-02T20:29:00
Tai9ch
false
null
0
o8ahqq2
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ahqq2/
false
1
t1_o8ahq02
I'll give it a try if you suggest it's great. I am certain 9b will be good given Qwen 3 line up for dense models was solid.
1
0
2026-03-02T20:28:54
ScoreUnique
false
null
0
o8ahq02
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8ahq02/
false
1
t1_o8ahogl
So qwen3.5 4b failed by a lot, so I didn't bother to try 2b and 0.8b. Also tried using 0.8b for speculative decoding but found no speedup so far. Might work exclusively for 27b.
1
0
2026-03-02T20:28:42
Windowsideplant
false
null
0
o8ahogl
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8ahogl/
false
1
t1_o8ahnki
You're probably thinking about the Qwen next coder version. The one in the benchmarks was released ages ago.
1
0
2026-03-02T20:28:35
axiomatix
false
null
0
o8ahnki
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8ahnki/
false
1
t1_o8ahmyu
"frightening" "terrifying" Are you OK?
1
0
2026-03-02T20:28:30
Dry_Yam_4597
false
null
0
o8ahmyu
false
/r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8ahmyu/
false
1
t1_o8ahmxi
> their centanario model which beats the other 20b models out of proportions on BENCHMRKS Can you provide data to back that claim up? I don't care about culture, I care about numbers and real world performance. So far all I've gotten is a lot of words and no data.
1
0
2026-03-02T20:28:30
JamesEvoAI
false
null
0
o8ahmxi
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8ahmxi/
false
1
t1_o8ahm5c
4o wasn't that good, we already have multiple models under 150B that beat it including models that can run on a Mac Mini. The models are already here, the hardware shortages are why we can't run them. It's absurd that DDR prices keep going up
1
0
2026-03-02T20:28:24
TokenRingAI
false
null
0
o8ahm5c
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8ahm5c/
false
1
t1_o8ahl47
Judging by your screenshot, Opus 4.6 is at the Gemini 3 Flash level, seriously? If it were only in the Vision test, I might believe it, but in real-world tasks, the difference is like night and day.
1
0
2026-03-02T20:28:15
DenZNK
false
null
0
o8ahl47
false
/r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/o8ahl47/
false
1
t1_o8ahjpj
No idea on llama.cpp, but in production serving software (vLLM / SGLang) it works great and can double tk/s.
1
0
2026-03-02T20:28:04
getfitdotus
false
null
0
o8ahjpj
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ahjpj/
false
1
t1_o8ahjcm
no 14b ?
1
0
2026-03-02T20:28:01
celsowm
false
null
0
o8ahjcm
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ahjcm/
false
1
t1_o8ahil5
Since 3.5 uses MoE, drafting doesn't make that much sense
1
0
2026-03-02T20:27:55
TechnicSonik
false
null
0
o8ahil5
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ahil5/
false
1
t1_o8ahcdk
I'm using a `--models-preset` file, which has the following *assorted* entries - use at your own risk - I don't recall if they all work. (the `ts = 47,48` is because I have two 24GB GPUs, and GPU0 usually has a ~0.5GB of VRAM taken by a whisper model.) I have probably used a few other settings, but I also did give this a try with vLLM last night - the rates were better, **but** it still had the large delay between the end of prompt processing and the beginning of apparent generation.

```
[Qwen3.5-27B-heretic-20k-ctx:Q4_K_M]
ctx-size = 20000
model = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.Q4_K_M.gguf
mmproj = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.mmproj-Q8_0.gguf

[Qwen3.5-27B-heretic-20k-ctx-ts2:Q4_K_M]
ctx-size = 20000
model = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.Q4_K_M.gguf
mmproj = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.mmproj-Q8_0.gguf
ts = 47,48

[Qwen3.5-27B-heretic-32k-ctx-nothink-tuned:Q4_K_M]
ctx-size = 32768
model = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.Q4_K_M.gguf
mmproj = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.mmproj-Q8_0.gguf
temp = 0.7
top-p = 0.8
top-k = 20
min-p = 0
presence-penalty = 1.5
repeat-penalty = 1
chat-template-kwargs = "{\"enable_thinking\": \"false\"}"
reasoning-budget = 0

[Qwen3.5-27B-heretic-32k-ctx-think-tuned:Q4_K_M]
ctx-size = 32768
model = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.Q4_K_M.gguf
mmproj = /home/myusername/.cache/llama.cpp/mradermacher_Qwen3.5-27B-heretic-GGUF_Qwen3.5-27B-heretic.mmproj-Q8_0.gguf
temp = 1.0
top-p = 0.95
top-k = 20
min-p = 0
presence-penalty = 1.5
repeat-penalty = 1
```
1
0
2026-03-02T20:27:05
overand
false
null
0
o8ahcdk
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8ahcdk/
false
1
t1_o8ahc7n
MTP is a built-in draft model.
1
0
2026-03-02T20:27:04
getfitdotus
false
null
0
o8ahc7n
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8ahc7n/
false
1
t1_o8ahbek
On Ollama and LM Studio, used as chat, it's super fast, but through Roo Code or Claude Code (launched through Ollama) it's just so slow, and it just gives up halfway through a response fairly often.
1
0
2026-03-02T20:26:57
JoeyJoeC
false
null
0
o8ahbek
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ahbek/
false
1
t1_o8ah9q4
Not quite. I tried a one-shot e-commerce website with basic item listing, item details, basket, checkout. A3B performed much better.
1
0
2026-03-02T20:26:44
sagiroth
false
null
0
o8ah9q4
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ah9q4/
false
1
t1_o8ah5or
Is making your own benchmark particularly time-consuming?
1
0
2026-03-02T20:26:10
CodProfessional3712
false
null
0
o8ah5or
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8ah5or/
false
1
t1_o8ah556
They're not going to be running the models at q3. More likely the consideration is what memory the 8/16-bit versions fit with maximum context.
1
0
2026-03-02T20:26:06
Middle_Bullfrog_6173
false
null
0
o8ah556
false
/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/o8ah556/
false
1
t1_o8ah3hj
You didn't answer my question though. Do you have quantitative evaluations on the performance of your model? I understand your claim that other models are benchmaxing, does that mean I should take your models quality at face value with *no supporting data*?
1
0
2026-03-02T20:25:53
JamesEvoAI
false
null
0
o8ah3hj
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8ah3hj/
false
1
t1_o8ah2sy
I'm using LMStudio and its memory estimator shows these memory requirements. Currently running Qwen 3.5 9B with only 30k context length and it already takes around 11.5GB VRAM. How do I configure it correctly? Also I'm using the UD Q4_K_XL quant by unsloth.
1
0
2026-03-02T20:25:47
IPC300
false
null
0
o8ah2sy
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8ah2sy/
false
1
t1_o8ah2b4
[removed]
1
0
2026-03-02T20:25:43
[deleted]
true
null
0
o8ah2b4
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8ah2b4/
false
1
t1_o8ah0q1
LOL, do you actually have a clue what Bodega is? They have more culture and tech than any other AI lab. Because of them I was able to run a full-blown file indexer, all LOCALLY, built with their centanario model, which beats the other 20b models out of proportion on BENCHMARKS and runs phenomenally on my 32gb m3
1
0
2026-03-02T20:25:30
drip_lord007
false
null
0
o8ah0q1
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8ah0q1/
false
1
t1_o8agwfu
You gotta be doing something wrong. I have 24gb pooled and I can get the first token within a few seconds with qwen3.5-27b
1
0
2026-03-02T20:24:55
fulgencio_batista
false
null
0
o8agwfu
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8agwfu/
false
1
t1_o8agvhs
Yesterday, I was chatting about a situation with qwen3.5-397b. The law has changed, and it refused to believe that it's 2026 and the law has changed. It refused to answer and said I need to talk to a lawyer and CPA since I might be trying to commit fraud. That's why some of us want these models, nothing to do with roleplay. Lots of things, legal, medical, computing, politics, religion get censored...
1
0
2026-03-02T20:24:47
MotokoAGI
false
null
0
o8agvhs
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8agvhs/
false
1
t1_o8agv5h
I'm using LMStudio and its memory estimator shows these memory requirements. Currently running Qwen 3.5 9B with only 30k context length and it already takes around 11.5GB VRAM. How do I configure it correctly?
1
0
2026-03-02T20:24:44
IPC300
false
null
0
o8agv5h
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8agv5h/
false
1
t1_o8agv1r
Maybe you're already reaching the limits of what local models can do; Qwen remains Qwen, it's not Gemini 3 Pro.
1
0
2026-03-02T20:24:43
SamLeCoyote_Fix_1
false
null
0
o8agv1r
false
/r/LocalLLaMA/comments/1r9be56/i_ran_a_forensic_audit_on_my_local_ai_assistant/o8agv1r/
false
1
t1_o8agu4v
It's slop, you're replying to a spambot
1
0
2026-03-02T20:24:36
huffalump1
false
null
0
o8agu4v
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8agu4v/
false
1
t1_o8agsm7
45 minutes for 200 pages is rough. honestly on 18gb ram you are going to be pretty limited with the heavier vlm based models because they eat memory fast especially on multi page docs. the ones that fit in that footprint tend to sacrifice accuracy for speed or the other way around. if you want to stay local, something like marker (pip install marker-pdf torch) is probably your best bet for that hardware. it runs on cpu fine and it is way faster than glmocr while still giving decent quality markdown output. for a 200 page doc you are probably looking at a few minutes instead of 45. the quality on tables is not perfect but for general text and structure it holds up pretty well. the tricky part with local on a macbook is that you are always going to be making tradeoffs between speed and accuracy. if accuracy matters more than running locally, an api based approach will crush anything you can run on 18gb in both speed and quality. just depends on what matters more for your use case.
1
0
2026-03-02T20:24:23
Ok-Potential-333
false
null
0
o8agsm7
false
/r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/o8agsm7/
false
1
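For reference, a minimal sketch of the marker route suggested above; entry-point names and options differ between marker-pdf versions, so treat these commands as assumptions to verify against its docs:

```bash
# Install as suggested in the comment
pip install marker-pdf torch

# Convert a single PDF to markdown (entry point name may vary by version)
marker_single my_200_page_doc.pdf --output_dir ./out

# Or convert a whole folder of PDFs
marker ./pdfs --output_dir ./out
```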
t1_o8agqp2
Speculative decoding isn’t nearly as useful for MoE models. Also, as far as I know, the Qwen3.5 models have a form of multi-token prediction built-in, although I don’t think it’s working yet in the most recent llama.cpp.
1
0
2026-03-02T20:24:07
EffectiveCeilingFan
false
null
0
o8agqp2
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8agqp2/
false
1
t1_o8agoct
context + kv cache depends on model architecture. While there is some relationship with model size, there's also a lot of variability from model to model. For example, Qwen3-Coder-Next (an 80B model) needs just 10 GB for 128k, while MiniMax-M2.5 (a 229B model) needs over 100 GB for the same 128k. Less than 3x the number of parameters, but over 10x the VRAM required for context.
1
0
2026-03-02T20:23:48
suicidaleggroll
false
null
0
o8agoct
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8agoct/
false
1
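As a back-of-the-envelope check on why this varies so much between architectures, here is the standard fp16 KV-cache estimate for plain (non-MLA) attention; the layer/head/dimension numbers below are made up for illustration, not taken from either model mentioned above:

```bash
# KV cache bytes ≈ 2 (K and V) * n_layers * n_kv_heads * head_dim * ctx_len * 2 bytes (fp16)
layers=48; kv_heads=8; head_dim=128; ctx=131072
bytes=$(( 2 * layers * kv_heads * head_dim * ctx * 2 ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB for ${ctx} tokens"   # ~24 GiB with these numbers
```

Fewer KV heads (more aggressive GQA) or MLA-style compression shrinks this directly, which is why parameter count alone is a poor predictor of context memory.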
t1_o8agk2e
i'm frustrated with the new models. try to prompt them with just: hello. they will overthink reeeeally hard
1
0
2026-03-02T20:23:13
asraniel
false
null
0
o8agk2e
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8agk2e/
false
1
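If the overthinking itself is the problem, the thread's own configs show two llama.cpp-side switches for it; a minimal sketch (the model path is a placeholder):

```bash
# Disable thinking entirely via the chat template...
llama-server -m Qwen3.5-9B-Q4_K_XL.gguf \
  --chat-template-kwargs '{"enable_thinking": false}'

# ...or cap it with a zero reasoning budget (the flag used in the preset file earlier in the thread)
llama-server -m Qwen3.5-9B-Q4_K_XL.gguf --reasoning-budget 0
```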
t1_o8agiib
Which is really weird because basically _nobody_ asked about speculative decoding with Qwen3. The sudden interest - 4 posts about it today alone - is pretty odd, yeah.
1
0
2026-03-02T20:23:01
DinoAmino
false
null
0
o8agiib
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8agiib/
false
1
t1_o8agft7
LMAO, you seriously haven’t tried Bodega yet? It has more culture and tech than these AI labs will ever do
1
0
2026-03-02T20:22:39
drip_lord007
false
null
0
o8agft7
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8agft7/
false
1
t1_o8agdta
Yeah, that works! But I still question why unsloth turned it off in their template. Thinking is **enabled** by default in the original Qwen files.
1
0
2026-03-02T20:22:22
thejoyofcraig
false
null
0
o8agdta
false
/r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8agdta/
false
1
t1_o8agcc4
For work stuff, I almost always have to do some cleanup afterwards, but I use the planning stage to get the overall shape of things roughly right. I don't have different types of projects at work really, we're a small team working on a data visualisation app. I usually actually just hardly give any spec, and just say broadly what I want, let the model explore the code base itself to figure out the details. Then if there are things I don't like in there, I iterate. I find it's better to let the model understand the code itself than me to try to explain everything and possibly have misunderstandings.
1
0
2026-03-02T20:22:10
-dysangel-
false
null
0
o8agcc4
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8agcc4/
false
1
t1_o8agbf2
Probably Kling
1
0
2026-03-02T20:22:03
nerdlord420
false
null
0
o8agbf2
false
/r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8agbf2/
false
1
t1_o8ag7tu
Which is especially ironic since everything we're doing here is built on free information sharing... Everything from the models, oss frameworks, tips and techniques, etc. Then someone uses allll of this free&open knowledge to do something insignificant and then would rather be snarky than just say what they're doing. Yes, you don't have to post and share what you did. But it takes just as much effort to be an asshole as it does to be helpful
1
0
2026-03-02T20:21:32
huffalump1
false
null
0
o8ag7tu
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ag7tu/
false
1
t1_o8ag6of
beautiful
1
0
2026-03-02T20:21:23
LackingAGoodName
false
null
0
o8ag6of
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ag6of/
false
1
t1_o8ag5wz
a thing of beauty. 2026 is the year on-device AI explodes
1
0
2026-03-02T20:21:17
ElectricalBar7464
false
null
0
o8ag5wz
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8ag5wz/
false
1
t1_o8ag56j
Do you know any such program? I would try it; I have an RTX 4070 for now.
1
0
2026-03-02T20:21:11
Deep-Island5895
false
null
0
o8ag56j
false
/r/LocalLLaMA/comments/1ranbod/best_local_software_for_realtime_deepfakes_face/o8ag56j/
false
1
t1_o8ag19t
It's 16GB VRAM as well. The 27B model was also a Q3 quant, which should've fit in VRAM; the 35B-A3B is loads better (not that 21 tk/s is great), so I will stick with that.
1
0
2026-03-02T20:20:38
mrstrangedude
false
null
0
o8ag19t
false
/r/LocalLLaMA/comments/1q0mg6w/how_is_running_local_ai_models_on_amd_gpus_today/o8ag19t/
false
1
t1_o8afxo7
>>> hello

Thinking...
Hmm, the user just said “hello” with a simple lowercase. Okay, this is probably the first interaction, so they might be testing the waters or just greeting casually. The tone seems neutral—no urgency or specific requests yet. They might be new to chatting with AI assistants or just starting a general conversation. Since they didn’t provide any context, I should keep my response friendly but open-ended to encourage them to share more if they want to. I’ll match their casual tone with a warm greeting and a prompt to keep the conversation flowing. The smiley emoji feels appropriate here to convey approachability. Maybe they’ll respond with something specific, or this could be the start of a longer interaction. No need to dive deep yet—just set a positive tone and leave the door open for whatever they need.
...done thinking.

Hello! 😊 How can I help you today?
1
0
2026-03-02T20:20:09
CrewIndependent6042
false
null
0
o8afxo7
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o8afxo7/
false
1
t1_o8afv0q
that would be fantastic!
1
0
2026-03-02T20:19:47
milkipedia
false
null
0
o8afv0q
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8afv0q/
false
1
t1_o8afqy1
Qwen 3.5 is using standard Multi-Head Attention (MHA), which means high KV head counts, and it keeps large hidden dimensions. It stores the KV cache in fp16 by default. And in VL models, it may allocate extra buffers for vision embeddings.
1
0
2026-03-02T20:19:14
Old-Satisfaction-420
false
null
0
o8afqy1
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8afqy1/
false
1
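One practical knob that follows from the fp16-by-default point above: llama.cpp can store the KV cache quantized instead. A minimal sketch, with the model path as a placeholder and q8_0 as one common choice:

```bash
# Roughly halve KV-cache memory vs fp16 by quantizing K and V to q8_0
llama-server -m Qwen3.5-9B-Q4_K_XL.gguf \
  --ctx-size 32768 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --flash-attn on
```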
t1_o8afpod
But would 24B at q4 be so much worse than 27B at q3?
1
0
2026-03-02T20:19:04
dreamyrhodes
false
null
0
o8afpod
false
/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/o8afpod/
false
1