name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8afmwt
Can someone check my understanding? MoE models like A3B route each word or token through the active parameters most relevant to the query, but this inherently means only a subset of the reasoning capability gets used, so dense models may produce better results. Additionally, the quant level matters too: a full-precision model may be limited by parameter count but runs each inference at the highest precision, vs a larger model that's been quantized lower, which can be "smarter" at the cost of accuracy. Is the above fully accurate?
1
0
2026-03-02T20:18:41
OriginalPlayerHater
false
null
0
o8afmwt
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8afmwt/
false
1
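The MoE routing the comment above describes can be sketched as a toy top-k gate (a minimal illustration with made-up logits and expert counts, not Qwen's actual implementation):

```python
import math

def top_k_route(gate_logits, k=2):
    """Toy MoE gate: pick the k experts with the highest logits and
    renormalize their softmax weights. Only these 'active' experts
    run for this token; the rest of the parameters are skipped."""
    exps = [math.exp(x) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# 8 hypothetical experts, 2 active per token (A3B-style sparsity)
weights = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
```

Whether skipping the inactive experts loses "reasoning capability" is exactly the dense-vs-MoE tradeoff the commenter is asking about; the gate is trained so that the skipped experts would have contributed little for that token.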
t1_o8afk7z
It's a generally decent agent and has solid STEM knowledge for its size.
1
0
2026-03-02T20:18:19
jeremyckahn
false
null
0
o8afk7z
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o8afk7z/
false
1
t1_o8aff5y
You may have the wrong configuration. I have full context (262,144), with unquantized KV cache using the Q4 quantized model, and it is using 13 GB of VRAM.
1
0
2026-03-02T20:17:38
vk3r
false
null
0
o8aff5y
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8aff5y/
false
1
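The VRAM figure in the comment above comes down to simple arithmetic; here is a back-of-the-envelope KV-cache estimate (the layer/head numbers below are hypothetical placeholders, not the actual model config):

```python
def kv_cache_bytes(ctx_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Unquantized KV cache size: 2 tensors (K and V) per layer,
    per KV head, per token, at bytes_per_elem precision (2 = f16/bf16)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# hypothetical GQA config at the full 262,144-token context, f16 cache
gib = kv_cache_bytes(262_144, n_layers=48, n_kv_heads=4, head_dim=128) / 2**30
```

The point is that grouped-query attention (few KV heads) is what makes a 262k unquantized cache plausible at all; multiply `n_kv_heads` up to a full-attention count and the same context would not fit.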
t1_o8afcvz
for sure james! there is a reason why we open sourced the model instead of benchmaxxing like other ai labs on benchmarks which don't even relate to real-life tasks. we will be releasing the gguf weights soon as well :)
1
0
2026-03-02T20:17:20
EmbarrassedAsk2887
false
null
0
o8afcvz
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8afcvz/
false
1
t1_o8afcof
This related PR implements speculative decoding without a draft model (ngram-cache|ngram-simple|ngram-map-k|ngram-map-k4v|ngram-mod); draft-model decoding should've been working out of the box (apparently I was wrong about that), at least I used Qwen's 0.5B with QwQ.
1
0
2026-03-02T20:17:18
unbannedfornothing
false
null
0
o8afcof
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8afcof/
false
1
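The no-draft-model variants named above all build on the same prompt-lookup idea; a minimal sketch with toy token IDs (not the actual llama.cpp implementation):

```python
def ngram_draft(tokens, n=3, k=4):
    """Prompt-lookup drafting: find the most recent earlier occurrence
    of the last n tokens and propose the k tokens that followed it.
    The target model then verifies all k proposals in a single batch,
    accepting the longest matching prefix."""
    key = tuple(tokens[-n:])
    for i in range(len(tokens) - n - 1, -1, -1):
        if tuple(tokens[i:i + n]) == key:
            return tokens[i + n:i + n + k]
    return []

# the suffix (5, 6, 7) also appeared at the start, so draft what followed it
draft = ngram_draft([5, 6, 7, 1, 2, 3, 9, 9, 5, 6, 7], n=3, k=4)
```

This is why n-gram speculation helps most on repetitive output (code, edits quoting the prompt) and does nothing on novel text, whereas a small draft model helps more uniformly.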
t1_o8afcjj
what all do you do successfully with gpt-oss20b?
1
0
2026-03-02T20:17:17
swimmer434
false
null
0
o8afcjj
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o8afcjj/
false
1
t1_o8afaur
context sizes are related to the architecture of the models, not their parameter sizes
1
0
2026-03-02T20:17:03
ikaganacar
false
null
0
o8afaur
false
/r/LocalLLaMA/comments/1rj3ocy/question_regarding_model_parameters_and_memory/o8afaur/
false
1
t1_o8af6ii
Sheesh that's impressive and also way over my head. I'm a math guy but I code up simulations from time to time and like to play with Gemini cli for whole projects. I also have a Mac Ultra with 128GB of unified ram on my network (which I got for CPU heavy research and had the budget to be greedy with ram). I just have no idea how to get into local LLM agentic coding to leverage the thing. Where do I go to learn this stuff, and get started? Best I've managed is to run a few models via mlx (seems to work better than ollama) and expose the API on my local network, and I use open webui to chat with them. But even that took a lot of help from Gemini to figure out.
1
0
2026-03-02T20:16:28
AerosolHubris
false
null
0
o8af6ii
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o8af6ii/
false
1
t1_o8af53i
I'm struggling with 24GB. Even running the Qwen 3.5 9B model, it takes like 3 minutes to first token.
1
0
2026-03-02T20:16:16
JoeyJoeC
false
null
0
o8af53i
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8af53i/
false
1
t1_o8af230
How are people doing coding with these small models? I can't even get sonnet or codex to get things right half the time.
1
0
2026-03-02T20:15:52
cosmicr
false
null
0
o8af230
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8af230/
false
1
t1_o8af15u
do you wanna know? for example their lm link, which limits users to 2 devices per user? we can scale up to 160 devices per user, even more. our mlx engine perf is far better than their bloated mlx engine. ill release the benchmarks by this week. we just started marketing about bodega a few days back, getting the assets ready for it. lm studio doesnt support a multi-model loading registry without jeopardizing a bunch of the overheads it spawns in its electron app. loading time of a base 20b model is 25 seconds; in bodega its 8. our prefilling stage takes 1/10th of the time it takes lm studio for comparable models. we have introduced speculative decoding as well: instead of generating one token at a time with a massive "target" model (which is bottlenecked by loading the large model's weights into unified memory on apple silicon, or gpu memory, over and over), the engine simultaneously runs a much smaller, faster "draft" model. we have prompt caching as well, which is basic enough but lm studio doesnt provide it. we give support for your heterogeneous devices to juice out the bodega inference engine as well in upcoming updates. for those reading, im the creator of bodega and its a real recommendation
1
0
2026-03-02T20:15:44
EmbarrassedAsk2887
false
null
0
o8af15u
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8af15u/
false
1
t1_o8af0vk
aider does exactly this — you add files manually with `/add`, it never tries to map the whole repo. pair it with qwen2.5-coder-7b Q8 on MLX (~8GB, leaves headroom) and it's actually usable for single-file edits. the cline system prompt is ~2k tokens before you've typed a word, which is brutal when your model starts degrading past 60% of an 8k context. the problem isn't 9B models, it's that every popular coding tool was designed assuming 128k context and a model that doesn't fall apart at 6k.
1
0
2026-03-02T20:15:42
tom_mathews
false
null
0
o8af0vk
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8af0vk/
false
1
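The context-budget point above is easy to quantify; a small sketch using the commenter's own numbers (the 60% degradation threshold is their estimate, not a measured constant):

```python
def context_budget(ctx_limit=8192, system=2000, degrade_frac=0.60):
    """Tokens left for code and conversation before the model enters
    its degradation zone: usable window minus the fixed system prompt.
    Defaults mirror the comment: ~2k-token system prompt, quality
    dropping past 60% of an 8k context."""
    usable = int(ctx_limit * degrade_frac)
    return usable - system

remaining = context_budget()  # tokens actually available for work
```

With a 2k system prompt eating into a ~4.9k usable window, only about 2.9k tokens remain, which is why heavyweight agent prompts and small-context models combine so badly.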
t1_o8af0t9
Probably mainly comes down to which sizes work well for the architecture and which sizes are useful. E.g. powers of two for various model dimensions that happen to fall on the optimal scaling curve. Since the 35B specifically is their flash model in chat, it's probably also chosen to optimize inference in whatever setup they run it in.
1
0
2026-03-02T20:15:41
Middle_Bullfrog_6173
false
null
0
o8af0t9
false
/r/LocalLLaMA/comments/1rj3cku/why_qwen_35_27b/o8af0t9/
false
1
t1_o8af0us
a100x2 at 1 vm here. feel free to share your experience
1
0
2026-03-02T20:15:41
slava_smirnov
false
null
0
o8af0us
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8af0us/
false
1
t1_o8aezf7
I haven't used them enough to judge quality of output yet, but I do observe that the amount of tokens spent on thinking is excessive, and I've already seen runaway thinking processes a couple of times with both the 122B and 35B versions. Maybe the quants are too lobotomized, who knows. I will try to cap thinking budgets with these, if possible.
1
0
2026-03-02T20:15:30
milkipedia
false
null
0
o8aezf7
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8aezf7/
false
1
t1_o8aeye4
I believe these Qwen models effectively have speculative decoding baked in so it may mean running your own is duplicative
1
0
2026-03-02T20:15:22
BumbleSlob
false
null
0
o8aeye4
false
/r/LocalLLaMA/comments/1rj3oue/any_advice_for_using_draft_models_with_qwen35_122b/o8aeye4/
false
1
t1_o8aexhj
Benchmarks are only an approximation, a way to measure what's basically unmeasurable. That's why there are so many, and why they are so untrustworthy: companies target the benchmarks. You need to create your own bench for your own uses, never publish it, and use that. It's the only way to know.
1
0
2026-03-02T20:15:15
ortegaalfredo
false
null
0
o8aexhj
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8aexhj/
false
1
t1_o8aevic
There are narratives being pushed for strategic reasons, but I don't think it's fair to lump all tech companies into the same bucket. Take NVIDIA as an example. Their core objective is very straightforward: sell AI chips globally, including to China, and stay ahead technologically. That alone doesn't make it a national security threat or anything, but the framing changes when competitors with different business models, like Anthropic, push a different narrative that conveniently aligns with sidelining hardware leaders. That narrative then bleeds into policy and slows or blocks approvals. So yes, there's real narrative manipulation happening, but it's selective. Some companies benefit from painting others as threats, especially when it shifts market leadership in their favor, and some just simply don't. People just need to learn how to discern this, especially online.
1
0
2026-03-02T20:15:00
Educational_Ease367
false
null
0
o8aevic
false
/r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/o8aevic/
false
1
t1_o8aetpo
Feel free to put ANY labels there, I'm not kidding!
1
0
2026-03-02T20:14:45
Fast_Thing_7949
false
null
0
o8aetpo
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8aetpo/
false
1
t1_o8aenes
Do what I did. Download a model or two and put it through some tests.  My experience with long texts is that you should explicitly tell it to provide VERBATIM text, clear context and start over for each page, otherwise the LLMs tend to remember older pages and hallucinate in the middle of your current page. Just my 2 cents
1
0
2026-03-02T20:13:53
deadman87
false
null
0
o8aenes
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8aenes/
false
1
t1_o8aehyc
toooop
1
0
2026-03-02T20:13:08
Salty_Painting6184
false
null
0
o8aehyc
false
/r/LocalLLaMA/comments/1r99yda/pack_it_up_guys_open_weight_ai_models_running/o8aehyc/
false
1
t1_o8aeh19
"The ship is slightly to the left, wearing a grey eye patch."
1
0
2026-03-02T20:13:01
Tastetrykker
false
null
0
o8aeh19
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8aeh19/
false
1
t1_o8aegbq
well, there is a big wish/need for it. Hope LM Studio will allow MTP sooner rather than later...
1
0
2026-03-02T20:12:55
mouseofcatofschrodi
false
null
0
o8aegbq
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8aegbq/
false
1
t1_o8aeeeq
Oh it does? I've never tried that model, but I generally haven't liked the writing style of any of the Qwen3 models for tasks that call for a more human feel, so I guess I shouldn't be surprised. I think Qwen3.5 does far better general prose; it feels a lot less AI-sloppy. Have you tried Qwen3.5-122B-A10B? If so, how do you feel about it in comparison?
1
0
2026-03-02T20:12:39
Jobus_
false
null
0
o8aeeeq
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8aeeeq/
false
1
t1_o8aedod
As someone who has a 5090 but hasn't done much with local AI since 2 years ago, what's the meta for it? Which models should I be looking to run?
1
0
2026-03-02T20:12:33
megacewl
false
null
0
o8aedod
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8aedod/
false
1
t1_o8aed22
> I started getting interested in local models about 3-4 months ago and > even the smartest people in the world still haven't managed to bring all this into ... aren't you contradicting yourself?
1
0
2026-03-02T20:12:29
anzzax
false
null
0
o8aed22
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8aed22/
false
1
t1_o8aebyt
Also because llama.cpp basically has a non-working tensor-parallel inference.
1
0
2026-03-02T20:12:19
ortegaalfredo
false
null
0
o8aebyt
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8aebyt/
false
1
t1_o8adsl6
The only downside of vLLM for us casuals is that the power consumption is just way higher. It never goes idle. So a lot of my video cards end up sitting at a steady 100 watts with vLLM, when with llama-server they idle regularly.
1
0
2026-03-02T20:09:42
Ok-Ad-8976
false
null
0
o8adsl6
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8adsl6/
false
1
t1_o8admpp
I'm trying to do this, but I keep hitting error after error. I'm on ZorinOS using an AMD GPU; before, it ran via Vulkan.
1
0
2026-03-02T20:08:54
Numerous_Sandwich_62
false
null
0
o8admpp
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8admpp/
false
1
t1_o8adkpt
Hell yeaaaa just tried it out... Surprisingly 2B_q8 is much faster on my phone than 0.8B BF16
1
0
2026-03-02T20:08:38
Zealousideal-Check77
false
null
0
o8adkpt
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8adkpt/
false
1
t1_o8adjnb
Yes there is need for comments, because you need to label your graph. That is the first rule of graphs.
1
0
2026-03-02T20:08:29
One-Employment3759
false
null
0
o8adjnb
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8adjnb/
false
1
t1_o8adip8
That is good bro that is good
1
0
2026-03-02T20:08:22
Potential_Block4598
false
null
0
o8adip8
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8adip8/
false
1
t1_o8adeil
that makes sense. how long did it take you to get to a workflow where the planning sessions actually reflected your standards? and does it hold up when you switch to a different type of project?
1
0
2026-03-02T20:07:47
Illustrious-Bet6287
false
null
0
o8adeil
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8adeil/
false
1
t1_o8addph
https://preview.redd.it/… app. Well done.
1
0
2026-03-02T20:07:41
LegacyRemaster
false
null
0
o8addph
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8addph/
false
1
t1_o8add15
Thanks. "Prompt Template" was hidden by default in my LM Studio; right-clicking inside the sidebar area and enabling it fixed the issue.
1
0
2026-03-02T20:07:35
DarkArtsMastery
false
null
0
o8add15
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8add15/
false
1
t1_o8adbew
I read the review of the capabilities of the [Kimi K2.5](https://texnologia.net/kimi-k2-5-review-multimodal-agents-swarm-orchestration-coding-me-256k-context-odigos-gia-developers/2026/01) AI model, and I have to admit it is considered one of the top five in the world of its kind, but it is practically impossible to run on your personal computer. A shame, because it is quite good and actually fairly close to the other proprietary models (gemini, chatgpt, claude), which are also very expensive for heavy use.
1
0
2026-03-02T20:07:22
seriani
false
null
0
o8adbew
false
/r/LocalLLaMA/comments/1qw5uh0/finetuning_kimi_k25/o8adbew/
false
1
t1_o8ad4yk
By the way, the two models on the chart are qwen3.5 35b-a3b and opus 4.5. I think there is no need for comments here.
1
0
2026-03-02T20:06:29
Fast_Thing_7949
false
null
0
o8ad4yk
false
/r/LocalLLaMA/comments/1rj3kfq/im_tired/o8ad4yk/
false
1
t1_o8acyr6
On the 35b-a3b-fp8 models I’ve found that non-thinking fails the carwash test, while thinking passes. I think that’s a significant improvement. The downside is almost 10x the token usage (on my prompts) for thinking compared to non-thinking so use sparingly.
1
0
2026-03-02T20:05:38
Operation_Fluffy
false
null
0
o8acyr6
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8acyr6/
false
1
t1_o8acuy4
I will post a guide later. I think many people would be interested.
1
0
2026-03-02T20:05:07
Iory1998
false
null
0
o8acuy4
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8acuy4/
false
1
t1_o8acuqc
LMStudio is generally good enough for the kind of people who wouldn't want/know how to use llama.cpp. Saying it's "pretty bad at everything" is a wildly unqualified exaggeration lol. What does Bodega offer that would make it superior for that crowd?
1
0
2026-03-02T20:05:05
JamesEvoAI
false
null
0
o8acuqc
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8acuqc/
false
1
t1_o8act4v
the explanation is that this benchmark is not very good
1
0
2026-03-02T20:04:51
koushd
false
null
0
o8act4v
false
/r/LocalLLaMA/comments/1rj3bh0/qwen35_397b_vs_27b/o8act4v/
false
1
t1_o8acr45
Yeah this makes sense, especially the tradeoff part.
1
0
2026-03-02T20:04:35
MedicineTop5805
false
null
0
o8acr45
false
/r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/o8acr45/
false
1
t1_o8aco82
I think I will create a guide on how to do that and post it for everyone.
1
0
2026-03-02T20:04:11
Iory1998
false
null
0
o8aco82
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8aco82/
false
1
t1_o8aciux
As will retirement
1
0
2026-03-02T20:03:28
themoregames
false
null
0
o8aciux
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8aciux/
false
1
t1_o8acd18
imagine thinking Elon knows anything past what the ketamine tells him 
1
0
2026-03-02T20:02:41
HopePupal
false
null
0
o8acd18
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8acd18/
false
1
t1_o8ac8tt
Have you tried using React Native ExecuTorch, if so what are your opinions about it?
1
0
2026-03-02T20:02:07
MutedCommission4236
false
null
0
o8ac8tt
false
/r/LocalLLaMA/comments/1renuky/everything_i_learned_building_ondevice_ai_into_a/o8ac8tt/
false
1
t1_o8ac6lo
He had those questions and instead of annoying people on the internet with moronic questions, he used google or chatbots...
1
0
2026-03-02T20:01:48
AdventurousFly4909
false
null
0
o8ac6lo
false
/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/o8ac6lo/
false
1
t1_o8ac678
The front end is reasonably nice, but even their direct compatibility with the most popular models is totally flaky! I downloaded the unsloth Qwen 3.5 and they can't handle the default tool integration code that works with every other system; their front end claims vision works with the eye icon, but vision, in fact, does not work. I had to download their worse-performing, higher-memory lmstudio-community flavor of the model. Their product is literally a model download and model running tool; easy and broad compatibility should be the point!
1
0
2026-03-02T20:01:45
doomdayx
false
null
0
o8ac678
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8ac678/
false
1
t1_o8ac55j
this post was written with an LLM and you can tell. that's literally the problem i'm describing. it has my ideas but not my voice. if the agent actually understood how i think and write, you wouldn't have noticed.
1
0
2026-03-02T20:01:36
Illustrious-Bet6287
false
null
0
o8ac55j
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8ac55j/
false
1
t1_o8ac0z8
Cool, can we get 379B also?
1
0
2026-03-02T20:01:03
pieonmyjesutildomine
false
null
0
o8ac0z8
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ac0z8/
false
1
t1_o8ac06m
IMO the effort so far has been on making agents that are smart enough to convert natural language into working code. I think we're basically there. The next step is to instil better engineering practices and architectural thinking. I guess it's a similar journey to the one that we're all on: LLMs are on the tail end of "make it work" and are now starting to venture into "make it good". >These aren't things I can write in a prompt This is why I always try to plan things out with the models first and give feedback if something seems off. Even human devs can have very different ideas of what right looks like. I think at some point we're going to realise the issue is still communication and specs. We'll be able to crank out the code almost instantaneously, but we still need to spend some time beforehand thinking about the real spec.
1
0
2026-03-02T20:00:56
-dysangel-
false
null
0
o8ac06m
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8ac06m/
false
1
t1_o8abzrh
I wasted a lot of time with both. Only R1 435B or Maverick full params too.
1
0
2026-03-02T20:00:53
H4UnT3R_CZ
false
null
0
o8abzrh
false
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o8abzrh/
false
1
t1_o8abpqy
and you knew everything from your 1st day in this world and never asked such questions or posted anything like that in the past, right?
1
0
2026-03-02T19:59:31
mossy_troll_84
false
null
0
o8abpqy
false
/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/o8abpqy/
false
1
t1_o8abp4b
We sure need more shades of blue hahahaha
1
0
2026-03-02T19:59:26
iScreem1
false
null
0
o8abp4b
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8abp4b/
false
1
t1_o8abms7
Anthropic and Google and a few others have found that it doesn't really help. They have papers on this. Well, Google has a paper; Anthropic released a blog post. What you're observing is the rate of error, variance, and bias, which can correlate with coherence and objective reasoning paths. Larger models generalize better than smaller models, but both actually suffer from the same issues for varied distributions. So scale doesn't actually solve the problem. There have been a lot of studies suggesting that scaling is more of an S-curve, which is why improvements diminish after a certain point. One interesting post here recently found that Google surveyed some performance loss from long reasoning budgets. I haven't looked into it yet. I've been taking some personal time to figure out what I'm gonna do next, but I need a clear head, which means I need to take a beat for a while. Maybe someone else who understands this more deeply can fill in the gaps.
1
0
2026-03-02T19:59:08
teleprint-me
false
null
0
o8abms7
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8abms7/
false
1
t1_o8abmrf
so many abusers, crooks, and egotists amongst the big names in that movement.
1
0
2026-03-02T19:59:07
ouroborosborealis
false
null
0
o8abmrf
false
/r/LocalLLaMA/comments/1rcseh1/fun_fact_anthropic_has_never_opensourced_any_llms/o8abmrf/
false
1
t1_o8abfll
sounds much better now. what was the issue?
1
0
2026-03-02T19:58:11
ElectricalBar7464
false
null
0
o8abfll
false
/r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o8abfll/
false
1
t1_o8abb1j
https://preview.redd.it/…a3285b39f943f8
1
0
2026-03-02T19:57:35
UltrMgns
false
null
0
o8abb1j
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8abb1j/
false
1
t1_o8ab80f
🤔
1
0
2026-03-02T19:57:10
C0C0Barbet
false
null
0
o8ab80f
false
/r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8ab80f/
false
1
t1_o8ab70v
A credit card with an api key
1
0
2026-03-02T19:57:03
claythearc
false
null
0
o8ab70v
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8ab70v/
false
1
t1_o8ab0pa
Look for settings for MTP. Multi token prediction.
1
0
2026-03-02T19:56:11
TaiMaiShu-71
false
null
0
o8ab0pa
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8ab0pa/
false
1
t1_o8aaxjo
I think they serve as bases for fine tuning or merging to make those types of models.
1
0
2026-03-02T19:55:46
Daniel_H212
false
null
0
o8aaxjo
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8aaxjo/
false
1
t1_o8aaxkf
I agree that LLMs are not mind-reading machines. But a team member who's worked with me for 6 months would understand without me explaining every time. They learn through observation; it should be possible with LLMs too.
1
0
2026-03-02T19:55:46
Illustrious-Bet6287
false
null
0
o8aaxkf
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8aaxkf/
false
1
t1_o8aat9c
I was playing with the 35B vs Coder Next, as I can't fit enough context in VRAM, so I'm leaking to system RAM for both. Short story: Coder Next takes more RAM / will have less context for the same quant, and 35B is about 30% faster, but Coder with no thinking has the same or better results than the 35B with thinking on, so it feels better. For my 16GB VRAM / 64GB RAM system, I think Next is better. If you only have 32 GB RAM, 3.5 35B isn't much of a downgrade.
1
0
2026-03-02T19:55:12
sine120
false
null
0
o8aat9c
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8aat9c/
false
1
t1_o8aat4y
I like to use 0.8b
1
0
2026-03-02T19:55:11
shoonee_balavolka
false
null
0
o8aat4y
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8aat4y/
false
1
t1_o8aaso9
chinese models
1
0
2026-03-02T19:55:07
MotokoAGI
false
null
0
o8aaso9
false
/r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8aaso9/
false
1
t1_o8aalxu
Under "Prompt Template," you can add `{% set enable_thinking = true %}` to the top of the Jinja template.
1
0
2026-03-02T19:54:13
Mental-Inference
false
null
0
o8aalxu
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8aalxu/
false
1
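The effect of that `{% set enable_thinking = true %}` line can be mimicked in plain Python; a toy stand-in for a Qwen-style chat template (the token strings are illustrative, not LM Studio's actual code):

```python
def render_chat(messages, enable_thinking=True):
    """Toy chat template: when thinking is disabled, pre-fill an empty
    <think></think> block in the assistant turn so the model skips its
    reasoning phase, mirroring how Qwen-style Jinja templates use the
    enable_thinking flag."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")
    if not enable_thinking:
        out.append("<think>\n\n</think>\n")
    return "".join(out)

prompt = render_chat([{"role": "user", "content": "hi"}], enable_thinking=False)
```

In other words, the flag doesn't change the weights at all; it only changes what the prompt pre-fills, which is why a one-line template edit is enough to toggle thinking.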
t1_o8aajf4
nah nah look for something on lm studio somebody probably has something for you. just try lm studio
1
0
2026-03-02T19:53:53
IndependenceFlat4181
false
null
0
o8aajf4
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8aajf4/
false
1
t1_o8aafbk
Can you supply samples of those two files?
1
0
2026-03-02T19:53:21
ElectronicProgram
false
null
0
o8aafbk
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8aafbk/
false
1
t1_o8aac1x
you're right, i used an LLM to write the post to convey my thoughts clearly. on the actual point though, i context engineer my prompts daily and it works for the straightforward stuff. but from my experience there's a ceiling. some judgment i just can't put into words clearly enough to prompt, no matter how much i refine it.
1
0
2026-03-02T19:52:56
Illustrious-Bet6287
false
null
0
o8aac1x
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8aac1x/
false
1
t1_o8aaaqw
Speculative decoding is built into the larger models already.
1
0
2026-03-02T19:52:46
TaiMaiShu-71
false
null
0
o8aaaqw
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8aaaqw/
false
1
t1_o8aa9c0
[https://github.com/p-e-w/heretic](https://github.com/p-e-w/heretic)
1
0
2026-03-02T19:52:35
JamesEvoAI
false
null
0
o8aa9c0
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8aa9c0/
false
1
t1_o8aa7ld
https://github.com/ggml-org/llama.cpp/issues/20039
1
0
2026-03-02T19:52:21
coder543
false
null
0
o8aa7ld
false
/r/LocalLLaMA/comments/1rj2rec/new_qwen_models_for_speculative_decoding/o8aa7ld/
false
1
t1_o8aa3al
[removed]
1
0
2026-03-02T19:51:46
[deleted]
true
null
0
o8aa3al
false
/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/o8aa3al/
false
1
t1_o8a9w9s
qwen 235b also has the worst feel of a larger model that I have tried. Feels like 4o distilled.
1
0
2026-03-02T19:50:50
nomorebuttsplz
false
null
0
o8a9w9s
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a9w9s/
false
1
t1_o8a9v8t
if you did not bother to waste 10 minutes of your precious time trying to find an answer to your question, and instead posted a thread with that question wanting us to spend our precious time answering for the 1000th time what was answered 999 times already, then you will get some deserved hate. same as if you have vibecoded some 20k lines of code and advertise it here as an "enterprise solution", "production ready", "90% memory savings", "new paradigm" or whatever, without even checking whether that hallucinated code works at all.
1
0
2026-03-02T19:50:42
MelodicRecognition7
false
null
0
o8a9v8t
false
/r/LocalLLaMA/comments/1rj2fm9/is_localllama_for_hate_and_malicious_comments/o8a9v8t/
false
1
t1_o8a9uwj
It happens sometimes. We need something to stop thinking when it takes too long, either by injecting the end-of-thinking token or by restarting generation. Let's hope they can fix that for Qwen 4.
1
0
2026-03-02T19:50:39
DanielWe
false
null
0
o8a9uwj
false
/r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a9uwj/
false
1
t1_o8a9syq
As somebody who was lucky enough to source an RTX 5090, I have to say local LLM coding is still lagging far behind because of total VRAM constraints. I would say if you have less than 48GB of unified RAM, you're 1000% better off getting a subscription if you value your time. Qwen3-Coder-Next 80B is the lowest-tier model I'd be willing to run locally; mostly everything below that is currently obsolete IMO... waiting for more efficient future models for local work.
1
0
2026-03-02T19:50:23
Wild-File-5926
false
null
0
o8a9syq
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a9syq/
false
1
t1_o8a9rnj
Why would it be relevant in any way to their business model what local-only ERP users run on their potato rigs lol.
1
0
2026-03-02T19:50:13
Exodus124
false
null
0
o8a9rnj
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8a9rnj/
false
1
t1_o8a9rmg
And in the app interface?
1
0
2026-03-02T19:50:12
Mashic
false
null
0
o8a9rmg
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8a9rmg/
false
1
t1_o8a9n9q
Isn't it a draft model decoding on a gpt-oss? [https://www.snowflake.com/en/engineering-blog/faster-gpt-oss-reasoning-arctic-inference/](https://www.snowflake.com/en/engineering-blog/faster-gpt-oss-reasoning-arctic-inference/)
1
0
2026-03-02T19:49:38
wolframko
false
null
0
o8a9n9q
false
/r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o8a9n9q/
false
1
t1_o8a9kyd
it should be in pairs of similar size
1
0
2026-03-02T19:49:19
nomorebuttsplz
false
null
0
o8a9kyd
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a9kyd/
false
1
t1_o8a9jv0
how tf u get $8000 in runpod credits lol
1
0
2026-03-02T19:49:11
jfreee23
false
null
0
o8a9jv0
false
/r/LocalLLaMA/comments/1qqarn1/i_have_8000_runpod_credits_which_model_should_i/o8a9jv0/
false
1
t1_o8a9dci
Either 35B A3B or 9B. 35B A3B q4\_k\_m by bartowski fits nicely and has great quality. Runs at around 18 token/s. As for the 9B, Q3\_K\_XL should fit in VRAM.
1
0
2026-03-02T19:48:19
dampflokfreund
false
null
0
o8a9dci
false
/r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a9dci/
false
1
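The fits-in-VRAM claims in the comment above follow from a rough size estimate; a sketch (the ~4.85 bits/weight figure for Q4_K_M is an approximate community number, and real GGUF files add metadata and mixed-precision tensors on top):

```python
def gguf_size_gib(params_b, bits_per_weight):
    """Rough quantized-model footprint: parameter count times average
    bits per weight. Treat the result as a lower-bound estimate, since
    real GGUF files carry extra metadata and some higher-precision
    tensors (embeddings, output head)."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

q4 = gguf_size_gib(35, 4.85)  # 35B at Q4_K_M-ish average bit width
```

At roughly 19-20 GiB of weights, a 35B Q4_K_M leaves little headroom on a 24GB card once the KV cache is added, which is why the smaller 9B at Q3 is the fallback for fitting everything in VRAM.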
t1_o8a9ccw
Why, oh why is this post written by an LLM? It would help if you had examples, because this feels like a stream of consciousness about friction points without actually outlining a legitimate problem. Can you edit and restate? These are not mind-reading machines. Would a team member understand what you were asking? That could be one way to refine your thinking around this. This could very well be a case of a very experienced developer who hasn't learnt how to communicate with their LLMs. I'm not slighting you, because I've had to adopt different communication patterns (context engineering) when using different model CLI tools.
1
0
2026-03-02T19:48:11
SvenVargHimmel
false
null
0
o8a9ccw
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a9ccw/
false
1
t1_o8a94z0
This reads to me like Qwen3.5 9B is benchmaxxed to within an inch of its LLM life. A Qwen3.5 9B model dunking on or matching Qwen3-Next-80B-A3B everywhere, the model that literally came out 9 weeks ago, *from the same lab/company?* I hope I am wrong, but this smells a bit like Llama 4....
1
0
2026-03-02T19:47:11
Mechanical_Number
false
null
0
o8a94z0
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8a94z0/
false
1
t1_o8a949y
Yes, think=true/false
1
0
2026-03-02T19:47:06
ultars
false
null
0
o8a949y
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8a949y/
false
1
t1_o8a8ug3
https://preview.redd.it/…431ad8b27289ef
1
0
2026-03-02T19:45:46
UltrMgns
false
null
0
o8a8ug3
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a8ug3/
false
1
t1_o8a8te8
Large claims require large evidence. If you've validated the claims you're making surely you can just provide some benchmarks and testing methodology? Why is the burden on me to download your model and setup a test harness to run evals just to validate that you're not just talking shit? There's too many people making schizo posts and BS claims to spend the time validating all of them. I'd love for this to be true!
1
0
2026-03-02T19:45:38
JamesEvoAI
false
null
0
o8a8te8
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8a8te8/
false
1
t1_o8a8rw6
FP8 versions are 99% identical to FP16. There is no reason to run models in more than 8bit even though you may have the resources to.
1
0
2026-03-02T19:45:26
Deep-Vermicelli-4591
false
null
0
o8a8rw6
false
/r/LocalLLaMA/comments/1riz9zz/qwen35_9b_fp16_vs_27b_fp8_have_64gb_unified_m1/o8a8rw6/
false
1
t1_o8a8r0x
But 9b active parameters > 3b
1
0
2026-03-02T19:45:19
def_not_jose
false
null
0
o8a8r0x
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a8r0x/
false
1
t1_o8a8qq0
GPU poor??? I prefer the term "temporarily embarrassed future RTX5090 owner" But I use claude and gemini because my local models arent going to code better than me. I do use qwen 4b in my workflows - usually for cleaning dirty data and standardizing it. Going to try to run the new 3.5 9B on my gtx 1080 when I get home. wish me luck.
1
0
2026-03-02T19:45:16
Wise-Comb8596
false
null
0
o8a8qq0
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a8qq0/
false
1
t1_o8a8lvl
Sigh. I hate custom licenses.
1
0
2026-03-02T19:44:36
silenceimpaired
false
null
0
o8a8lvl
false
/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o8a8lvl/
false
1
t1_o8a8j51
Thanks for the feedback, does it also apply to agentic tasks?
1
0
2026-03-02T19:44:13
Zhelgadis
false
null
0
o8a8j51
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8a8j51/
false
1
t1_o8a8g04
Yup. Plenty of posts saying the same thing. Over and over. But thanks for stopping by to add another.
1
0
2026-03-02T19:43:48
DinoAmino
false
null
0
o8a8g04
false
/r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8a8g04/
false
1
t1_o8a88ul
Ban this dummy
1
0
2026-03-02T19:42:50
RonJonBoviAkaRonJovi
false
null
0
o8a88ul
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8a88ul/
false
1
t1_o8a7wns
I have the same GPU and 32GB RAM. Qwen3.5 9B seems to be the best choice for us. i was getting ~30tps, which is good enough.
1
0
2026-03-02T19:41:11
Deep-Vermicelli-4591
false
null
0
o8a7wns
false
/r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a7wns/
false
1
t1_o8a7rym
Rtx 9070xt, 16gb vram, 32gb ram. I5 12400f
1
0
2026-03-02T19:40:33
Suitable_Currency440
false
null
0
o8a7rym
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a7rym/
false
1
t1_o8a7qi2
Third post today about spec decoding in Qwen.
1
0
2026-03-02T19:40:21
DinoAmino
false
null
0
o8a7qi2
false
/r/LocalLLaMA/comments/1rj2mzy/is_speculative_decoding_available_with_the_qwen/o8a7qi2/
false
1
t1_o8a7ngi
Is there an fp8 version anywhere?
1
0
2026-03-02T19:39:57
Glum-Traffic-7203
false
null
0
o8a7ngi
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a7ngi/
false
1