name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o893obg
iirc the "next" ones were more of a preview of the newer architecture coming soon, and was trained on less total tokens for a shorter amount of time to get the preview out quicker.
1
0
2026-03-02T16:30:00
tvall_
false
null
0
o893obg
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893obg/
false
1
t1_o893l3o
Correct. If you're using the -ngl 99 flag, it will already draw straight from the memory-mapped file into VRAM. Using the --no-mmap flag will just make it run slower by moving the model into system memory before VRAM.
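For readers who drive llama.cpp from Python rather than the CLI, a minimal llama-cpp-python sketch of the same knobs (the GGUF filename is a placeholder, not a real release artifact):

```python
# Sketch of the CLI flags discussed above, via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-9b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,   # like -ngl 99: offload all layers to VRAM
    use_mmap=True,     # default: weights read straight from the mapped file
    # use_mmap=False,  # like --no-mmap: copy everything into system RAM first
)
out = llm("Hello", max_tokens=8)
print(out["choices"][0]["text"])
```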
1
0
2026-03-02T16:29:35
RG_Fusion
false
null
0
o893l3o
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o893l3o/
false
1
t1_o893l2y
There is an influx of new users who ask the same redundant questions on a daily basis and seem to fundamentally fail to grasp the nature of the tool they are using. Be self-sufficient and don't waste other people's time when visiting a highly regarded community of experts. I don't understand what is so difficult about that concept. r/Llamapettingzoo should be a thing.
1
0
2026-03-02T16:29:35
Impossible-Glass-487
false
null
0
o893l2y
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893l2y/
false
1
t1_o893hpv
3-next was a preview of the 3.5 architecture. It was essentially an undertrained model with a ton of architectural innovations, meant as a preview of the 3.5 family and a way for implementations to add and validate support for the new architecture.
1
0
2026-03-02T16:29:07
spaceman_
false
null
0
o893hpv
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893hpv/
false
1
t1_o893gzb
That's nice. Which dataset did you use to train the model? If it's customized, how did you prepare it? Is it for multiple coding languages?
1
0
2026-03-02T16:29:01
Fantastic_Quiet1838
false
null
0
o893gzb
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o893gzb/
false
1
t1_o893gmk
Honestly, after testing GPT OSS models against anything else that fits into 64GB of VRAM, I'm not all that surprised. Until Qwen 3.5 122B came out, it was the best-performing model for my uses, and on some tasks it still beats Qwen 3.5 122B (complex PowerShell scripts are one example). Whatever OpenAI used to train that model needs to be replicated by others. If someone could release a 240b A10b model using whatever magic QAT sauce OSS 120B had, plus maybe swapping MXFP4 for INT4+AutoRound for higher accuracy, we would have something really great.
1
0
2026-03-02T16:28:58
gusbags
false
null
0
o893gmk
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o893gmk/
false
1
t1_o893cm9
I can confirm that I'm hitting above 3000 t/s prefill for a dual RTX 4090 setup on the current vLLM nightly build with pretty much the same configuration. Decode is roughly in the 100-130 t/s range. I did not run any rigorous benchmarks, so take this with a grain of salt.
1
0
2026-03-02T16:28:26
Sufficient-Rent6078
false
null
0
o893cm9
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o893cm9/
false
1
t1_o893893
They were preparing for the next architecture/models, not really something polished enough to be production-ready.
1
0
2026-03-02T16:27:50
lasizoillo
false
null
0
o893893
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o893893/
false
1
t1_o89372g
These are exactly my thoughts about everyone here. Let's mute this sub, block each other, and cry, I guess?
1
0
2026-03-02T16:27:41
HyperWinX
false
null
0
o89372g
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o89372g/
false
1
t1_o8931dw
Oh no, another case of AI-induced psychosis...
1
0
2026-03-02T16:26:56
yoomiii
false
null
0
o8931dw
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8931dw/
false
1
t1_o8930qb
Haha, what an asshole. I bet you also go into repos and respond to bugs with "I fixed it" without explaining how for the people who find it later.
1
0
2026-03-02T16:26:51
ImproveYourMeatSack
false
null
0
o8930qb
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8930qb/
false
1
t1_o892zeh
Local deployment is a pain to get right. I just use Lurvessa.com now because it is the best available for zero filters. It handles everything without the annoying restrictions you get elsewhere. Way easier than messing with local hardware.
1
0
2026-03-02T16:26:41
Extension_Doubt_8866
false
null
0
o892zeh
false
/r/LocalLLaMA/comments/1nbcsvk/looking_for_uncensored_unfiltered_ai_models_for/o892zeh/
false
1
t1_o892xk8
I think the next was a "beta test" for the 3.5 version. It uses the same architecture.
1
0
2026-03-02T16:26:26
JsThiago5
false
null
0
o892xk8
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o892xk8/
false
1
t1_o892vfl
No problem :)
1
0
2026-03-02T16:26:09
Dyssun
false
null
0
o892vfl
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o892vfl/
false
1
t1_o892uii
Qwen3-Coder-Next is also missing, u/Jobus_
1
0
2026-03-02T16:26:02
pmttyji
false
null
0
o892uii
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o892uii/
false
1
t1_o892qb1
Does unchecking the reasoning content in a reasoning block work as a temporary fix for the non-MCP issues? I think I've been noticing these issues but thought it was something wrong with my LangGraph.
1
0
2026-03-02T16:25:28
nicholas_the_furious
false
null
0
o892qb1
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o892qb1/
false
1
t1_o892nm9
It works differently: people who are neutral got all their attention drawn to testing Qwen, so those models won't get as many views or likes as they could, which will push them down in search/feed rankings and limit their reach significantly.
1
0
2026-03-02T16:25:07
No-Refrigerator-1672
false
null
0
o892nm9
false
/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o892nm9/
false
1
t1_o892mau
r/LocalLLaMA folk would rather point at the cloud, as if human interactions are inferior, than type "Just open the extensions tab and grab the extension A and extension B I use".
1
0
2026-03-02T16:24:56
FriskyFennecFox
false
null
0
o892mau
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o892mau/
false
1
t1_o892i8m
The whole '3, 3-next, 3.5' naming thing isn't my favorite. Why "next"?
1
0
2026-03-02T16:24:24
overand
false
null
0
o892i8m
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o892i8m/
false
1
t1_o892ba1
BF16 cache causes blue screens for me. Very unstable. I'm running straight from llama.cpp on a 5090 and 96GB RAM.
1
0
2026-03-02T16:23:28
durden111111
false
null
0
o892ba1
false
/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o892ba1/
false
1
t1_o8928sm
Which inference engine, and what parameters? Paste the full command line, ideally. Qwen3.5 works really well on llama.cpp as of ~3 days ago; there should be no looping unless you have a broken GGUF, are running old software, or are calling it with the wrong parameters.
1
0
2026-03-02T16:23:09
RadiantHueOfBeige
false
null
0
o8928sm
false
/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o8928sm/
false
1
t1_o8926m3
roo, cline, kilo code
1
0
2026-03-02T16:22:51
kayteee1995
false
null
0
o8926m3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8926m3/
false
1
t1_o89216c
Is there a tool to sit in front and apply these params? Saw it mentioned in another thread, but neglected to write it down
1
0
2026-03-02T16:22:08
winkler1
false
null
0
o89216c
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o89216c/
false
1
t1_o891y2p
Thanks for the reply. So it only matters when using MoE models that don't fit into VRAM and need to be offloaded to RAM, right?
1
0
2026-03-02T16:21:43
Dr4x_
false
null
0
o891y2p
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o891y2p/
false
1
t1_o891t4p
No
1
0
2026-03-02T16:21:04
Septerium
false
null
0
o891t4p
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o891t4p/
false
1
t1_o891osz
Can it OCR hand-drawn comic-book lettering? I'm thinking here about auto-translation of comics which have relatively unusual and/or dynamic lettering.
1
0
2026-03-02T16:20:31
optimisticalish
false
null
0
o891osz
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o891osz/
false
1
t1_o891o8x
Yes
1
0
2026-03-02T16:20:26
boinkmaster360
false
null
0
o891o8x
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o891o8x/
false
1
t1_o891o9e
Has anyone done a coding benchmark against qwen3-coder-next and these new models? And the qwen3.5 variants? I've been looking for one to answer that question the lazy way until I can get the time to test with real scenarios.
1
0
2026-03-02T16:20:26
cmdr-William-Riker
false
null
0
o891o9e
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o891o9e/
false
1
t1_o891j0v
With a custom harness, the 3.0-4b is able to handle simpler tasks like "Analyze my system logs".
1
0
2026-03-02T16:19:44
piexil
false
null
0
o891j0v
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o891j0v/
false
1
t1_o891h7z
Haha, my bad. I honestly tried, and clearly failed.
1
0
2026-03-02T16:19:30
Jobus_
false
null
0
o891h7z
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o891h7z/
false
1
t1_o891fu0
Geez. This sub has been thoroughly saturated with ignorance.
1
0
2026-03-02T16:19:19
DinoAmino
false
null
0
o891fu0
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o891fu0/
false
1
t1_o891d3r
I have no intention of posting "results" but you can try it for yourself
1
0
2026-03-02T16:18:57
Impossible-Glass-487
false
null
0
o891d3r
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o891d3r/
false
1
t1_o891acl
Why don't you try putting this question into a cloud model? It will explain the entire thing in much greater detail than I will here.
1
0
2026-03-02T16:18:35
Impossible-Glass-487
false
null
0
o891acl
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o891acl/
false
1
t1_o891933
I'm having a pretty hard time believing these outperform Next 80B
1
0
2026-03-02T16:18:26
SpicyWangz
false
null
0
o891933
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o891933/
false
1
t1_o8917xm
Lucky you: https://www.reddit.com/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/
1
0
2026-03-02T16:18:17
ProfessionalSpend589
false
null
0
o8917xm
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8917xm/
false
1
t1_o89168d
Have you tried Hunyuan OCR? How does it compare?
1
0
2026-03-02T16:18:03
Present-Ad-8531
false
null
0
o89168d
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89168d/
false
1
t1_o8913pi
Totally agree. Benchmarks are a fun directional guide, but I never take them as gospel. Looking at some unofficial benchmarks, like the [UGI Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard), Qwen3-235B-A22B does beat Qwen3.5-35B-A3B in both NatInt (natural intelligence) and especially Writing, by a wide margin. It seems official benchmarks often over-index on specific logic/math tasks where the new architectures shine, but miss the 'feel' of the larger models.
1
0
2026-03-02T16:17:43
Jobus_
false
null
0
o8913pi
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8913pi/
false
1
t1_o8913ew
Benchmark wins are real but they don't capture the production constraint. For agentic coding loops running 24/7 — code review agents, CI/CD fixers, autonomous test writers — the bottleneck isn't model quality, it's infra reliability. A 9B model on a shared laptop dies when the screen locks. What's your setup for keeping the agent process alive between sessions? That's where most of the failure modes live in practice.
1
0
2026-03-02T16:17:41
BreizhNode
false
null
0
o8913ew
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8913ew/
false
1
t1_o89126b
I use tiny models with my LYRN system; because of my context management, a tiny model can be fairly smart. It just needs enough reasoning to understand the structure I give it, and these particular tiny models are very good. I also do a lot of edge-device testing for a satellite grant that we hope gets approved in the next year or so.
1
0
2026-03-02T16:17:31
PayBetter
false
null
0
o89126b
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89126b/
false
1
t1_o8911vs
I can run it in quant 4. That is my go-to model these days.
1
0
2026-03-02T16:17:29
ProfessionalSpend589
false
null
0
o8911vs
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8911vs/
false
1
t1_o8910j4
https://preview.redd.it/…7a25f880f424c580
1
0
2026-03-02T16:17:19
MaddesJG
false
null
0
o8910j4
false
/r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o8910j4/
false
1
t1_o891009
GLM-OCR loses for me when it comes to layouts. The Qwens can reproduce tables and formatting in Markdown.
1
0
2026-03-02T16:17:15
Pjotrs
false
null
0
o891009
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o891009/
false
1
t1_o890wu1
Waiting for results
1
0
2026-03-02T16:16:50
NigaTroubles
false
null
0
o890wu1
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o890wu1/
false
1
t1_o890swd
I think this time they had a valid reason, since they added vision to all the models. I don't know about previous generations, though.
1
0
2026-03-02T16:16:19
SpicyWangz
false
null
0
o890swd
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o890swd/
false
1
t1_o890sma
27B punching way above its weight. It has no right to be this good.
1
0
2026-03-02T16:16:17
dhtp2018
false
null
0
o890sma
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o890sma/
false
1
t1_o890mg1
Hello there, I'm from the future, where the Orion O6 is available - and so are a bunch of other boards/systems with the Cix P1, such as the Orion O6N, Orange Pi 6 Plus, and MINISFORUM MS-R1 (I think there was even a laptop, but I don't remember the name). For LLMs, Radxa seems to have updated the docs a few times since I started following the board; the latest iteration is to use KleidiAI CPU optimizations: [https://docs.radxa.com/en/orion/o6/app-development/artificial-intelligence/llama-cpp](https://docs.radxa.com/en/orion/o6/app-development/artificial-intelligence/llama-cpp) Still, CLIP and Stable Diffusion can be run on the NPU; it's demoed here: [https://youtu.be/GDDTN421Zl8](https://youtu.be/GDDTN421Zl8) The mainlining of the drivers continues, but I would not hold my breath for it to be completed soon. It's mostly usable as-is, though.
1
0
2026-03-02T16:15:27
Routine-Example927
false
null
0
o890mg1
false
/r/LocalLLaMA/comments/1hqi2tn/interesting_arm_hardware_on_the_horizon_radxa/o890mg1/
false
1
t1_o890kzn
By default, llama.cpp will only load parameters into RAM when they are required for generating a token. With large MoEs, this means most of the model won't load right away. This can result in latency and stuttering. --no-mmap just tells llama.cpp to load all the weights into RAM right from the start. Your start-up will take longer but things should run smoother.
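A toy illustration of the tradeoff described here, using Python's mmap module rather than llama.cpp itself (the file path is a placeholder for any large file):

```python
# Toy demonstration of mmap vs. full read, mirroring the default vs
# --no-mmap behavior described above.
import mmap

PATH = "model.gguf"  # placeholder

# mmap (llama.cpp default): pages fault in only when touched, so startup
# is instant, but the first touch of each region can stutter.
with open(PATH, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = m[:16]  # only these pages actually get loaded
    m.close()

# --no-mmap equivalent: read the whole file into RAM up front, paying the
# full load cost at startup so later access is smooth.
with open(PATH, "rb") as f:
    everything = f.read()
```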
1
0
2026-03-02T16:15:16
RG_Fusion
false
null
0
o890kzn
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o890kzn/
false
1
t1_o890k6a
Which extensions and how would you do this?
1
0
2026-03-02T16:15:09
Androck101
false
null
0
o890k6a
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o890k6a/
false
1
t1_o890jkm
Exactly, this is why on-disk model size doesn't mean shit.
1
0
2026-03-02T16:15:04
HyperWinX
false
null
0
o890jkm
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o890jkm/
false
1
t1_o890iaj
I'm on 8 GB VRAM and 32 GB RAM, getting 30 t/s on a model way better than gpt-oss 20B (I think at least on par with the 120B).
1
0
2026-03-02T16:14:54
R_Duncan
false
null
0
o890iaj
false
/r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o890iaj/
false
1
t1_o890evv
Is your ChatterUI AI on the Android app store? Just wondering because I couldn't spot it.
1
0
2026-03-02T16:14:27
kindofbluetrains
false
null
0
o890evv
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o890evv/
false
1
t1_o8908bb
Shameless self-promotion, but I was running into a similar dilemma. Additionally, not everything is available as streamable, even when it was just exposing a remote API. You can check out [https://github.com/mcpambassador](https://github.com/mcpambassador) if it's any help. I needed to centralize some of the sprawl. It also just got annoying setting up the same MCPs over and over when I wanted to try some new tool. I have been working through the architecture for a while, and just published everything a few days ago.
1
0
2026-03-02T16:13:35
OGF3
false
null
0
o8908bb
false
/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/o8908bb/
false
1
t1_o8906kb
Have you tried GLM-OCR? That really impressed me. Before that, the best local was Qwen3-VL-8B (plus Paddle, but that's not a simple model like Qwen).
1
0
2026-03-02T16:13:21
danihend
false
null
0
o8906kb
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8906kb/
false
1
t1_o8902uq
Benchmarks provided by model creators are run on what they released. They run them on the full fp16 safetensors, except gpt-oss, which was only released in MXFP4.
1
0
2026-03-02T16:12:51
DinoAmino
false
null
0
o8902uq
false
/r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o8902uq/
false
1
t1_o8902jy
Holy shit really.
1
0
2026-03-02T16:12:49
Present-Ad-8531
false
null
0
o8902jy
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8902jy/
false
1
t1_o8901x8
Does anyone else have the issue with these models (regardless of size/quant) where they cut themselves off before finishing when running them through an agent? I tried turning the max token output up in Kobold, which seemed to fix it running in-browser, but no dice for Cline. I like Ooba because at least I know the parameters I choose in the UI are reflected in the local API, but not sure if that's also true for Kobold.
1
0
2026-03-02T16:12:44
EuphoricPenguin22
false
null
0
o8901x8
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8901x8/
false
1
t1_o8901bd
I am about to load it onto some antigravity extensions and find out
1
0
2026-03-02T16:12:39
Impossible-Glass-487
false
null
0
o8901bd
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8901bd/
false
1
t1_o8900jd
There was a repetition bug? I used qwen3 vl 4b for ocr just fine
1
0
2026-03-02T16:12:33
Velocita84
false
null
0
o8900jd
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8900jd/
false
1
t1_o88zz69
Sad there’s no 14B tbh
1
0
2026-03-02T16:12:22
arman-d0e
false
null
0
o88zz69
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88zz69/
false
1
t1_o88zz7z
4B Q4 runs well on the iPhone.
1
0
2026-03-02T16:12:22
Confusion_Senior
false
null
0
o88zz7z
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88zz7z/
false
1
t1_o88zyom
I just benchmarked some models last week with a couple of V100 and they worked fine
1
0
2026-03-02T16:12:18
Careless-Travel-650
false
null
0
o88zyom
false
/r/LocalLLaMA/comments/1p3d34y/inspired_by_a_recent_post_a_list_of_the_cheapest/o88zyom/
false
1
t1_o88zy7v
Thank you
1
0
2026-03-02T16:12:14
Justify_87
false
null
0
o88zy7v
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88zy7v/
false
1
t1_o88zt05
The Qwen3.5 models are vision models. There is no separate Vision and Non Vision in Qwen 3.5
1
0
2026-03-02T16:11:32
deadman87
false
null
0
o88zt05
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88zt05/
false
1
t1_o88zrwj
They already have vision
1
0
2026-03-02T16:11:23
Velocita84
false
null
0
o88zrwj
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88zrwj/
false
1
t1_o88zogh
All qwens 3.5 have vision.
1
0
2026-03-02T16:10:56
RadiantHueOfBeige
false
null
0
o88zogh
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88zogh/
false
1
t1_o88znqp
I encountered the repetition bug in 0.8B. 2B is good so far.
1
0
2026-03-02T16:10:50
deadman87
false
null
0
o88znqp
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88znqp/
false
1
t1_o88zn94
Came across this as I was wondering the same - plus Grok 2 / 2.5 is apparently out on Hugging Face. Grok-2: 270B total, 115B activated. Kimi K2.5 is 1T with 32B active - so Grok 2 has almost 4x the active size of K2.5, so I would expect about 1/4 the speed that people are reporting.
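The back-of-envelope arithmetic behind that estimate, assuming decode speed scales roughly inversely with active parameter count (a simplification; memory bandwidth and implementation also matter):

```python
# Active-parameter ratio -> rough decode-speed estimate.
grok2_active = 115  # billions, Grok-2
k25_active = 32     # billions, Kimi K2.5
ratio = grok2_active / k25_active
print(f"~{ratio:.1f}x more active params -> roughly 1/{round(ratio)} the speed")
# ~3.6x more active params -> roughly 1/4 the speed
```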
1
0
2026-03-02T16:10:46
bigh-aus
false
null
0
o88zn94
false
/r/LocalLLaMA/comments/1jwfahl/have_anyone_tried_running_grok_on_mac_studio/o88zn94/
false
1
t1_o88zf05
That's surprising, as it works on the few Snapdragon devices I have. I'll shoot a DM.
1
0
2026-03-02T16:09:40
----Val----
false
null
0
o88zf05
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88zf05/
false
1
t1_o88zcg0
Which comments? In LM Studio there is no option for bf16.
1
0
2026-03-02T16:09:19
Achso998
false
null
0
o88zcg0
false
/r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/o88zcg0/
false
1
t1_o88z84j
Here we can also see why benchmarks are not very useful anymore. I have a hard time believing that Q3.5 35B A3B is better than Q3 235B A22B, yet here it shows as better in every test.
1
0
2026-03-02T16:08:44
tmvr
false
null
0
o88z84j
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88z84j/
false
1
t1_o88z7gi
Dumb question: there isn't gonna be a qwen 3.5 VL?
1
0
2026-03-02T16:08:39
Justify_87
false
null
0
o88z7gi
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88z7gi/
false
1
t1_o88z29v
Ooh yeah, some pattern texture would have been a good idea. Didn't think of that. Unfortunately, Reddit doesn't let me edit the image once it's posted. I mainly put this together for a quick personal reference and figured I'd share, but I'll definitely keep the pattern idea in mind for next time.
1
0
2026-03-02T16:07:58
Jobus_
false
null
0
o88z29v
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88z29v/
false
1
t1_o88z13w
This is a super important issue. I would love to go back to the napster / limewire / bearshare / bittorrent days for open models. Open + centralized is a nice moat. But open + decentralized is goat. P2P models lfg
1
0
2026-03-02T16:07:48
clocksmith
false
null
0
o88z13w
false
/r/LocalLLaMA/comments/1ozo2v8/do_we_rely_too_much_on_huggingface_do_you_think/o88z13w/
false
1
t1_o88z0ma
With these settings, even if you download the Q4_K_M and set the context to 32k, it will still be fine.
1
0
2026-03-02T16:07:45
Beneficial-Good660
false
null
0
o88z0ma
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88z0ma/
false
1
t1_o88z0ev
Some more info: Handy complains at start-up that it can't find any sound module (pcm_oss, pulseaudio, or jack). I also get a crash (coredump) when I ask the larger, slower model. In the logfile, I can see references that Haswell Vulkan support is incomplete; I don't know if this is a red herring or something serious. HELP!
1
0
2026-03-02T16:07:43
Gullible_Home_7492
false
null
0
o88z0ev
false
/r/LocalLLaMA/comments/1ldvosh/handy_a_simple_opensource_offline_speechtotext/o88z0ev/
false
1
t1_o88yydb
This has been hiding behind bugs I've been trying to sort out in Continue as well as my own agentic scaffold project for months. I'm happy to share.
1
0
2026-03-02T16:07:27
One-Cheesecake389
false
null
0
o88yydb
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o88yydb/
false
1
t1_o88yooz
Then set LCCP's port to 11434. Or adapt your ecosystem to use LCCP/chat completions if it's using ollama's weird-ass API.
1
0
2026-03-02T16:06:11
Velocita84
false
null
0
o88yooz
false
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88yooz/
false
1
t1_o88yopn
tbh the 0.0014 improvement seems pretty much within noise level... would be cool to see this tested on actual reasoning tasks where people report the looping issues
1
0
2026-03-02T16:06:11
papertrailml
false
null
0
o88yopn
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o88yopn/
false
1
t1_o88ykjg
The recursion trap is a perfect Heisenbug — the model literally cannot describe the bug without reproducing it. Thanks for connecting a year of isolated reports into one place.
2
0
2026-03-02T16:05:38
theagentledger
false
null
0
o88ykjg
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o88ykjg/
false
2
t1_o88yjz9
Awesome stuff! Unfortunately the ChatterUI 0.8.9 beta is currently crashing for me on Samsung S25 Ultra (Android 16) when trying to import the model file. Would it be helpful to get the crash logs? (got them already in a file via adb) If so, feel free to DM me.
1
0
2026-03-02T16:05:33
l_eo_
false
null
0
o88yjz9
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88yjz9/
false
1
t1_o88yj8k
tbh these small models are perfect for routing tasks... been using similar sized ones to classify user intent before hitting the big model and it works surprisingly well. way faster than sending everything to 27b
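A minimal sketch of that routing pattern against an OpenAI-compatible local server; the endpoint, model names, and intent labels are all assumptions for illustration:

```python
# Intent-routing sketch: a tiny model classifies first, the big model
# only handles the requests that need it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def route(user_msg: str) -> str:
    """Return the model alias that should handle this message."""
    # Ask the small model for a one-word intent label.
    label = client.chat.completions.create(
        model="qwen3.5-2b",  # hypothetical small router model
        messages=[
            {"role": "system",
             "content": "Classify the user's intent as exactly one word: "
                        "chat, code, or search."},
            {"role": "user", "content": user_msg},
        ],
        max_tokens=3,
        temperature=0.0,
    ).choices[0].message.content.strip().lower()
    # Only 'code' gets dispatched to the expensive 27B model.
    return "qwen3.5-27b" if label == "code" else "qwen3.5-2b"

print(route("Write a quicksort in Rust"))
```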
1
0
2026-03-02T16:05:27
papertrailml
false
null
0
o88yj8k
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88yj8k/
false
1
t1_o88yij3
llama.cpp also uses an OpenAI-compatible API, just like ollama, so you can make llama-server serve at that port and everything will be exactly the same. BTW, ollama uses llama.cpp in the backend; you are losing too much performance and control for no reason by using ollama.
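A minimal sketch, assuming llama-server was started listening on port 11434 with a model alias of "qwen3.5-9b" (both assumptions); any OpenAI-compatible client then works unchanged:

```python
# Point a standard OpenAI client at llama-server on ollama's usual port.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
reply = client.chat.completions.create(
    model="qwen3.5-9b",  # hypothetical served alias
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```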
1
0
2026-03-02T16:05:21
theghost3172
false
null
0
o88yij3
false
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88yij3/
false
1
t1_o88y8xs
9b will be a huge disappointment for those who accept these benchmarks at face value and a great tool for the rest.
1
0
2026-03-02T16:04:05
Big_Mix_4044
false
null
0
o88y8xs
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88y8xs/
false
1
t1_o88xyom
That’s a crazy high top_k value and your temp is off, too. Bro above in a different comment t provided the correct values; perhaps they’ll help.
1
0
2026-03-02T16:02:42
__JockY__
false
null
0
o88xyom
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88xyom/
false
1
t1_o88xrbw
Looks so good... but it scores very low in reasoning and coding benchmarks, as well as instruction following, compared to gpt-oss. I guess I'll have to wait for the coder and instruct models; I hoped the base model was better at it.
1
0
2026-03-02T16:01:42
guesdo
false
null
0
o88xrbw
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88xrbw/
false
1
t1_o88xo4r
Thanks!
1
0
2026-03-02T16:01:17
oxygen_addiction
false
null
0
o88xo4r
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o88xo4r/
false
1
t1_o88xmf5
* Non-thinking mode for text tasks: `temperature=1.0, top_p=1.00, top_k=20, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0`
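A sketch of passing those values through an OpenAI-compatible endpoint; top_k, min_p, and repetition_penalty are not part of the official OpenAI schema, so they ride along in extra_body (key names vary by server, e.g. llama.cpp expects repeat_penalty; the endpoint and model name are placeholders):

```python
# Apply the non-thinking-mode sampler values quoted above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="qwen3.5-27b",  # placeholder
    messages=[{"role": "user", "content": "Summarize mmap in one line."}],
    temperature=1.0,
    top_p=1.0,
    presence_penalty=2.0,
    # Non-standard sampler keys pass through the request body as-is.
    extra_body={"top_k": 20, "min_p": 0.0, "repetition_penalty": 1.0},
)
print(resp.choices[0].message.content)
```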
1
0
2026-03-02T16:01:02
OrdinaryTransition57
false
null
0
o88xmf5
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88xmf5/
false
1
t1_o88xkp2
I see. I'm using LM Studio.
1
0
2026-03-02T16:00:48
kayteee1995
false
null
0
o88xkp2
false
/r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o88xkp2/
false
1
t1_o88xk24
LM Studio, defaults. I just downloaded, haven't changed anything. 58-59 tokens per second, unsloth Qwen3.5 9B Q4_K_S. https://preview.redd.it/uxhcjwyrmnmg1.png?width=924&format=png&auto=webp&s=b75b4fde52c9f048c72394d524725086de35c6ec
1
0
2026-03-02T16:00:43
JollyJoker3
false
null
0
o88xk24
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88xk24/
false
1
t1_o88xfny
Thanks for the heads up. I could see complexity reaching a point where not having the required parameters to support it produces a collapse of functionality. I'll see how it goes and if I have to regress to 3, it is what it is.
1
0
2026-03-02T16:00:07
PlainBread
false
null
0
o88xfny
false
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88xfny/
false
1
t1_o88xc7b
I don't think Termux supports GPU acceleration, IIRC.
1
0
2026-03-02T15:59:39
_yustaguy_
false
null
0
o88xc7b
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88xc7b/
false
1
t1_o88x4sj
No. It doesn't work with my ecosystem. My ecosystem pings ollama at 11434.
1
0
2026-03-02T15:58:40
PlainBread
false
null
0
o88x4sj
false
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88x4sj/
false
1
t1_o88wxts
9B is hacking for sure...
1
0
2026-03-02T15:57:45
KvAk_AKPlaysYT
false
null
0
o88wxts
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88wxts/
false
1
t1_o88wxv6
They definitely did, but I only included the models that Qwen featured in their official comparison charts for this 3.5 release. I didn't want to start mixing in different benchmark sources to keep it consistent.
1
0
2026-03-02T15:57:45
Jobus_
false
null
0
o88wxv6
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88wxv6/
false
1
t1_o88wxg8
DOA. Does anyone actually use Jan models?...
1
0
2026-03-02T15:57:42
rm-rf-rm
false
null
0
o88wxg8
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o88wxg8/
false
1
t1_o88wqwa
This is awesome! I have a question about training. You mentioned that > First compile takes ~20-40ms. Cache hits are effectively free. This matters for inference (compile once, run forever) but creates challenges for training, where weights change every step. I cannot understand why the weights changing every step is a problem for training. I know that the numerical values change, but are the tensor shapes/memory locations still static? Or does the Apple compiler require that the values also be static? I was thinking it just takes the tensor descriptor (shape, stride, address), like NVIDIA GPU TMA.
1
0
2026-03-02T15:56:49
Own-Performance-1900
false
null
0
o88wqwa
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o88wqwa/
false
1
t1_o88wo4q
I would answer but you would just downvote and disagree.
1
0
2026-03-02T15:56:27
PlainBread
false
null
0
o88wo4q
false
/r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88wo4q/
false
1
t1_o88wn9w
I wonder if it will work fine with picoclaw.
1
0
2026-03-02T15:56:20
uncanny-agent
false
null
0
o88wn9w
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88wn9w/
false
1
t1_o88wm2k
Hi, if you are the dev, any chance you could implement a mode where the app runs as a server? I really want a backend that works with GPU or NPU, but I want to use SillyTavern as the front end. It is just that much better than all the things I tried, and I have all my stuff there.
1
0
2026-03-02T15:56:11
weener69420
false
null
0
o88wm2k
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88wm2k/
false
1
t1_o88wljz
Awful colouring (sorry). Can't you change/edit it to add slashed patterns or some sort of distinguisher?
1
0
2026-03-02T15:56:07
frosticecold
false
null
0
o88wljz
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88wljz/
false
1