name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o888g0s | I would love to know the answer too.<br>When I tried using a draft model (other) my TG fell around 2 times lower. So I just bought a GPU (which is still not part of the system, because of some incompatibilities, but I tested it in another PC and it worked). | 1 | 0 | 2026-03-02T13:49:26 | ProfessionalSpend589 | false | null | 0 | o888g0s | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o888g0s/ | false | 1 |
t1_o888ey1 | Isn't this just out of range for 16GB? It's 19ish GB, so you'd still have to use a quant, and in my experience smaller models quanted tend to fall into repetition a lot easier. | 2 | 0 | 2026-03-02T13:49:15 | DragonfruitIll660 | false | null | 0 | o888ey1 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o888ey1/ | false | 2 |
t1_o888btl | Probably the 2B and the 4B will get to that level, but of course it will lack the world knowledge that the 70B had. | 13 | 0 | 2026-03-02T13:48:45 | SystematicKarma | false | null | 0 | o888btl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o888btl/ | false | 13 |
t1_o88865p | There is a variety of factors, I hope my reading-along in GitHub PRs etc. is accurate:<br>1. MoEs don't have draft model support, at least not with a smaller draft model like that.<br>2. Qwen3Next architecture doesn't have speculative decoding support in general, because of linear attention.<br>3. It won't have draft model compatibility whe... | 10 | 0 | 2026-03-02T13:47:50 | MaxKruse96 | false | null | 0 | o88865p | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88865p/ | false | 10 |
t1_o8884yy | I always hear that, but I've never seen a single phone with more than 2GB free at any time. Or are you telling me this model eats less than that? | 3 | 0 | 2026-03-02T13:47:38 | Devatator_ | false | null | 0 | o8884yy | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o8884yy/ | false | 3 |
t1_o8884zq | How can we use them? They are not visible in Ollama. | -4 | 1 | 2026-03-02T13:47:38 | migrated-human | false | null | 0 | o8884zq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8884zq/ | false | -4 |
t1_o8881ol | Oh hell yes. Perfect for my laptop | 3 | 0 | 2026-03-02T13:47:07 | brickout | false | null | 0 | o8881ol | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8881ol/ | false | 3 |
t1_o887xq6 | Woohoo. Anyone know what's the best to run on my 3090? | 4 | 0 | 2026-03-02T13:46:29 | inigid | false | null | 0 | o887xq6 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o887xq6/ | false | 4 |
t1_o887vzi | There's no (indexed) GGUFs yet, I just made a Q8\_0 locally real quick | 1 | 0 | 2026-03-02T13:46:12 | spaceman_ | false | null | 0 | o887vzi | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o887vzi/ | false | 1 |
t1_o887vpq | The real question! | 1 | 0 | 2026-03-02T13:46:09 | danigoncalves | false | null | 0 | o887vpq | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o887vpq/ | false | 1 |
t1_o887uqw | Try<br>`--chat-template-kwargs '{"enable_thinking": false}'`<br>Worked for me | 11 | 0 | 2026-03-02T13:46:00 | panic_in_the_galaxy | false | null | 0 | o887uqw | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o887uqw/ | false | 11 |
t1_o887u2n | I had a worse problem with Qwen3.5 0.8B: it didn't think at all. (Q8\_0) | 6 | 0 | 2026-03-02T13:45:53 | stopbanni | false | null | 0 | o887u2n | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o887u2n/ | false | 6 |
t1_o887rjv | This is with reasoning off and reasoning budget 0. It now does this 'thinking' in the main output. | -2 | 1 | 2026-03-02T13:45:29 | DeltaSqueezer | false | null | 0 | o887rjv | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o887rjv/ | false | -2 |
t1_o887n29 | 9B is getting about 13-14 tok/s on a base M4, not bad! | 3 | 0 | 2026-03-02T13:44:45 | Aliryth | false | null | 0 | o887n29 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o887n29/ | false | 3 |
t1_o887j1s | didnt work for me using llama-cpp latest | 4 | 0 | 2026-03-02T13:44:05 | jadbox | false | null | 0 | o887j1s | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o887j1s/ | false | 4 |
t1_o887hb2 | Would have to test if the benchmarks hold up | 4 | 0 | 2026-03-02T13:43:48 | SennVacan | false | null | 0 | o887hb2 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o887hb2/ | false | 4 |
t1_o887h76 | If this is true, then this is groundbreaking and may well pop that AI bubble we're having right now. | -2 | 1 | 2026-03-02T13:43:47 | dadidutdut | false | null | 0 | o887h76 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o887h76/ | false | -2 |
t1_o887gvr | does downloading openclaw automatically star the project? | 1 | 0 | 2026-03-02T13:43:44 | ResearchScience2000 | false | null | 0 | o887gvr | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o887gvr/ | false | 1 |
t1_o887d2k | Can I use one of these for intellij code completion? | 1 | 0 | 2026-03-02T13:43:07 | sieskei | false | null | 0 | o887d2k | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o887d2k/ | false | 1 |
t1_o887bhc | I'm impressed with GPT OSS hanging in as much as it has | 52 | 0 | 2026-03-02T13:42:52 | Piyh | false | null | 0 | o887bhc | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o887bhc/ | false | 52 |
t1_o8878mh | I'm not seeing the smaller models listed as compatible in LM Studio. Odd | -2 | 0 | 2026-03-02T13:42:24 | -dysangel- | false | null | 0 | o8878mh | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o8878mh/ | false | -2 |
t1_o887611 | Oh right, sure, sizes are just perfect to play NEW updated version of Waidrin.<br>https://preview.redd.it/bytie1paymmg1.png?width=856&format=png&auto=webp&s=75f210f8ec355f96fb538e943a98deb86a367bf0 | 1 | 0 | 2026-03-02T13:41:59 | -Ellary- | false | null | 0 | o887611 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o887611/ | false | 1 |
t1_o88754w | Time to p-e-w them!! | 1 | 0 | 2026-03-02T13:41:50 | Friendly-Gur-3289 | false | null | 0 | o88754w | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88754w/ | false | 1 |
t1_o8873hp | yesterday I read that ollama has issues with qwen3.5, is this true? | 2 | 0 | 2026-03-02T13:41:34 | jacek2023 | false | null | 0 | o8873hp | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8873hp/ | false | 2 |
t1_o886ya1 | You can turn off reasoning for Qwen3.5 models if you want, when using llama.cpp:<br>`--chat-template-kwargs "{\"enable_thinking\": false}"` | 18 | 0 | 2026-03-02T13:40:43 | spaceman_ | false | null | 0 | o886ya1 | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o886ya1/ | false | 18 |
t1_o886wxh | sqrt(30*3) ~= 9.48 | 4 | 0 | 2026-03-02T13:40:29 | x0wl | false | null | 0 | o886wxh | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o886wxh/ | false | 4 |
t1_o886ws8 | "So, what is the key idea behind knowledge distillation? It enables to transfer knowledge from larger model, called teacher, to smaller one, called student. This process allows smaller models to inherit the strong capabilities of larger ones, avoiding the need for training from scratch and making powerful models more a... | 5 | 0 | 2026-03-02T13:40:28 | Cane_P | false | null | 0 | o886ws8 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o886ws8/ | false | 5 |
t1_o886wnp | I've been waiting to try these with open code. Any ideas if they will be good? | 1 | 0 | 2026-03-02T13:40:26 | Upstairs-Sky-5290 | false | null | 0 | o886wnp | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o886wnp/ | false | 1 |
t1_o886wjx | Qwen 3.5 9B and 4B also just came out as well. They are probably going to be very good for their small sizes. Qwen3 4B Thinking 2507 was really good for tool calls too and runs really fast on a 5070 Ti. | 1 | 0 | 2026-03-02T13:40:25 | Guilty_Rooster_6708 | false | null | 0 | o886wjx | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o886wjx/ | false | 1 |
t1_o886twm | Nice, have to give it a try. | 1 | 0 | 2026-03-02T13:40:00 | stuckyfeet | false | null | 0 | o886twm | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886twm/ | false | 1 |
t1_o886tit | it worked for me just now:<br>`ollama --version`<br>`ollama version is 0.17.4`<br>`ollama pull qwen3.5:35b` | 1 | 0 | 2026-03-02T13:39:56 | donbowman | false | null | 0 | o886tit | false | /r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/o886tit/ | false | 1 |
t1_o886rxp | Already done by Qwen; the thinking part of the 3.5 models is extremely synthetic and token-saving compared to others, e.g. GLM. | 1 | 0 | 2026-03-02T13:39:40 | R_Duncan | false | null | 0 | o886rxp | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o886rxp/ | false | 1 |
t1_o886rv5 | What is the usual timeline on quants? A few days? | 2 | 0 | 2026-03-02T13:39:39 | bedofhoses | false | null | 0 | o886rv5 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886rv5/ | false | 2 |
t1_o886qq3 | read Hannah Arendt, that's basically the cycle of Revolution she describes | 1 | 0 | 2026-03-02T13:39:28 | Odenhobler | false | null | 0 | o886qq3 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o886qq3/ | false | 1 |
t1_o886nsy | So will this model work with less VRAM than the previous 7B model?<br>It seems like the KV cache will be much smaller because of the choice of linear attention in spots instead of full attention? | 3 | 0 | 2026-03-02T13:38:59 | bedofhoses | false | null | 0 | o886nsy | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886nsy/ | false | 3 |
t1_o886mwi | if you can run q8 then just run the q8 basic version | 8 | 0 | 2026-03-02T13:38:50 | Odd-Ordinary-5922 | false | null | 0 | o886mwi | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886mwi/ | false | 8 |
t1_o886iec | What happened to the 176 TOPS ? | 1 | 0 | 2026-03-02T13:38:05 | jlsilicon9 | false | null | 0 | o886iec | false | /r/LocalLLaMA/comments/1q0ny4i/orange_pi_unveils_ai_station_with_ascend_310_and/o886iec/ | false | 1 |
t1_o886hp3 | Hi,<br>I have a quite similar config and the same model (Qwen3 Coder Next) (Ryzen 9950X + 64GB system RAM + 16GB RTX 5070 Ti on Windows 10) to load in LM Studio.<br>Question: I switched off the options "Keep Model in Memory" and "Try mmap()" to use less system RAM when I load some layers in VRAM.<br>Now I see it frees system r... | 1 | 0 | 2026-03-02T13:37:58 | MiserableAd8890 | false | null | 0 | o886hp3 | false | /r/LocalLLaMA/comments/1ra9bns/need_help_optimizing_lm_studio_settings_for_to/o886hp3/ | false | 1 |
t1_o886h3i | How can you run that on Linux? Ollama doesn't support it yet. | 2 | 0 | 2026-03-02T13:37:52 | MrMrsPotts | false | null | 0 | o886h3i | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o886h3i/ | false | 2 |
t1_o886dm8 | Thanks. But what are the actual technical differences? Is there anything to do with speed or accuracy or anything like that between them? | 3 | 0 | 2026-03-02T13:37:17 | mintybadgerme | false | null | 0 | o886dm8 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886dm8/ | false | 3 |
t1_o886cvo | ollama doesn't seem to support unsloth quants yet | 4 | 0 | 2026-03-02T13:37:10 | nkjnjknkjn9999 | false | null | 0 | o886cvo | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o886cvo/ | false | 4 |
t1_o886cq4 | According to my benchmarks, there is no improvement related to the latest firmware.<br>Using Vulkan, I have higher PP and lower TG. I have the "-fa on" flag.<br>firmware 20251111<br>Kernel 6.18.12<br>llama.cpp b8146<br>\| model \| test \| t/s \| peak t/s \|<br>\|:------------------\|----------------:\|... | 1 | 0 | 2026-03-02T13:37:09 | PhilippeEiffel | false | null | 0 | o886cq4 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o886cq4/ | false | 1 |
t1_o886bbf | Pretty cool they got ultra-small models for mobile use.<br>Though it's funny that models around the size of GPT-2 are considered small nowadays.<br>I remember when that model was new, two billion parameters seemed massive. Now it's tiny compared to the GLMs, the Minimaxes and other Kimis. | 48 | 0 | 2026-03-02T13:36:54 | Firepal64 | false | null | 0 | o886bbf | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o886bbf/ | false | 48 |
t1_o8863z5 | GPUs. FP8 and FP4 kind of help speed things along, so try for ones that have them. At least 16GB of VRAM; you can also do split inference now on certain models like Wan. | 1 | 0 | 2026-03-02T13:35:42 | a_beautiful_rhind | false | null | 0 | o8863z5 | false | /r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/o8863z5/ | false | 1 |
t1_o885zhq | is it already supported by llama.cpp? | 8 | 0 | 2026-03-02T13:34:57 | quilso | false | null | 0 | o885zhq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o885zhq/ | false | 8 |
t1_o885y7q | While you are correct, a single counterexample is also strong. I tried a highly detailed task at Q8 KV that a model under test completely failed at, switched to f16 KV and got much better results. So in at least one case it mattered a great deal, which is all that is needed to disprove the blanket statement "Q8 KV cach... | 5 | 0 | 2026-03-02T13:34:45 | ElectronSpiderwort | false | null | 0 | o885y7q | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o885y7q/ | false | 5 |
t1_o885von | I like this idea a lot. I need to learn more about how this physics sandbox works in particular. Joining the discord | 1 | 0 | 2026-03-02T13:34:20 | No_Pirate_8204 | false | null | 0 | o885von | false | /r/LocalLLaMA/comments/1rirgs7/i_got_sick_of_ai_game_masters_hallucinating_so_i/o885von/ | false | 1 |
t1_o885utx | Select the model that you want, click Quantized, and select unsloth/qwen3.5 (the version that you want). | 1 | 0 | 2026-03-02T13:34:12 | NegotiationNo1504 | false | null | 0 | o885utx | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o885utx/ | false | 1 |
t1_o885s7v | But won’t the VPS eventually cost me more? | 1 | 0 | 2026-03-02T13:33:46 | Boring_Tip_1218 | false | null | 0 | o885s7v | false | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/o885s7v/ | false | 1 |
t1_o885rb4 | Hey man, this is an English-language sub... | 8 | 0 | 2026-03-02T13:33:36 | dugavo | false | null | 0 | o885rb4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o885rb4/ | false | 8 |
t1_o885qfr | If there are GGUFs, it's supported. | 5 | 0 | 2026-03-02T13:33:28 | Stepfunction | false | null | 0 | o885qfr | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o885qfr/ | false | 5 |
t1_o885po8 | Why did you choose kilo over opencode? | 1 | 0 | 2026-03-02T13:33:20 | MrMrsPotts | false | null | 0 | o885po8 | false | /r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/o885po8/ | false | 1 |
t1_o885n2y | Blessed Qwen dev team out there. Their model is always all-round open model. | 2 | 0 | 2026-03-02T13:32:55 | Weary_Long3409 | false | null | 0 | o885n2y | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o885n2y/ | false | 2 |
t1_o885hsn | Using Qwen 3.5. Kilo Code is a common combo for me. | 2 | 0 | 2026-03-02T13:32:01 | alokin_09 | false | null | 0 | o885hsn | false | /r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/o885hsn/ | false | 2 |
t1_o885guy | unsloth ones are out too! lovely work from these ppl | 1 | 0 | 2026-03-02T13:31:51 | FunConversation7257 | false | null | 0 | o885guy | false | /r/LocalLLaMA/comments/1risqk1/qwen359bgguf_is_here/o885guy/ | false | 1 |
t1_o885f4p | oh,, | 0 | 0 | 2026-03-02T13:31:35 | EffectBrief1480 | false | null | 0 | o885f4p | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o885f4p/ | false | 0 |
t1_o885e1h | maybe it's because they increased token embeddings? | 3 | 0 | 2026-03-02T13:31:23 | alppawack | false | null | 0 | o885e1h | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o885e1h/ | false | 3 |
t1_o885dil | good | 1 | 0 | 2026-03-02T13:31:18 | EffectBrief1480 | false | null | 0 | o885dil | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o885dil/ | false | 1 |
t1_o885apg | OMG | 1 | 0 | 2026-03-02T13:30:50 | EffectBrief1480 | false | null | 0 | o885apg | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o885apg/ | false | 1 |
t1_o8859ha | great... | 1 | 0 | 2026-03-02T13:30:38 | EffectBrief1480 | false | null | 0 | o8859ha | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8859ha/ | false | 1 |
t1_o8858a6 | You think it's best to get a mac mini pro, mac studio, or waiting till Apple releases something else? My use cases span from building apps, reading / creating images and also speech | 1 | 0 | 2026-03-02T13:30:26 | cyahahn | false | null | 0 | o8858a6 | false | /r/LocalLLaMA/comments/1p2lqi7/are_any_of_the_m_series_mac_macbooks_and_mac/o8858a6/ | false | 1 |
t1_o8857re | oh.. | 1 | 0 | 2026-03-02T13:30:20 | EffectBrief1480 | false | null | 0 | o8857re | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8857re/ | false | 1 |
t1_o8855vh | wow.. | 1 | 0 | 2026-03-02T13:30:02 | EffectBrief1480 | false | null | 0 | o8855vh | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8855vh/ | false | 1 |
t1_o885565 | They're days old dude | 20 | 0 | 2026-03-02T13:29:55 | braydon125 | false | null | 0 | o885565 | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o885565/ | false | 20 |
t1_o88522v | yeah it still takes ages to think for simple tasks | 1 | 0 | 2026-03-02T13:29:24 | Odd-Ordinary-5922 | false | null | 0 | o88522v | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88522v/ | false | 1 |
t1_o8851ph | I'll take a look here. Thank you! | 1 | 0 | 2026-03-02T13:29:21 | AppealThink1733 | false | null | 0 | o8851ph | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o8851ph/ | false | 1 |
t1_o8850ww | [removed] | 1 | 0 | 2026-03-02T13:29:13 | [deleted] | true | null | 0 | o8850ww | false | /r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o8850ww/ | false | 1 |
t1_o884ztd | A 4B model that just upgrades the brilliant 4B-2507 is good. A 9B that outperforms gpt-oss-120b is insane. Well done. | 10 | 0 | 2026-03-02T13:29:01 | dkeiz | false | null | 0 | o884ztd | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o884ztd/ | false | 10 |
t1_o884yxg | It is a good supplement to a stronger non-vision model. | 3 | 0 | 2026-03-02T13:28:52 | SillyLilBear | false | null | 0 | o884yxg | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o884yxg/ | false | 3 |
t1_o884wnn | [deleted] | 1 | 0 | 2026-03-02T13:28:30 | [deleted] | true | null | 0 | o884wnn | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o884wnn/ | false | 1 |
t1_o884vfc | Wow, that sounds amazing if accurate. This doesn't just benefit potato users, but anyone who wants to locally run highly autonomous pipelines nearly 24/7. | 51 | 0 | 2026-03-02T13:28:17 | sonicnerd14 | false | null | 0 | o884vfc | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o884vfc/ | false | 51 |
t1_o884j3z | can someone please tell me why aren't there many finetunes of these Qwen models? No writing models, no RP, not many **abliterated models** | -12 | 0 | 2026-03-02T13:26:15 | Beautiful_Egg6188 | false | null | 0 | o884j3z | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o884j3z/ | false | -12 |
t1_o884iby | Fair point and worth remembering. I generally take benchmarks with a grain of salt, but yeah: doubly so for this team. | 0 | 0 | 2026-03-02T13:26:07 | dinerburgeryum | false | null | 0 | o884iby | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o884iby/ | false | 0 |
t1_o884hs5 | Kilo Code | 1 | 0 | 2026-03-02T13:26:02 | alokin_09 | false | null | 0 | o884hs5 | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o884hs5/ | false | 1 |
t1_o884him | Yes I was also expecting tiny MoE, looks like 35B-A3B is fast enough but some people are struggling (I don't know why, because it works even on my 12GB setup) | 1 | 0 | 2026-03-02T13:25:59 | jacek2023 | false | null | 0 | o884him | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o884him/ | false | 1 |
t1_o884fbf | Rightfully so | 19 | 0 | 2026-03-02T13:25:37 | -Cubie- | false | null | 0 | o884fbf | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o884fbf/ | false | 19 |
t1_o884bnj | Awww sadly no MoE models to compete with my Qwen3.5 35B A3B | 4 | 0 | 2026-03-02T13:25:00 | soyalemujica | false | null | 0 | o884bnj | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o884bnj/ | false | 4 |
t1_o884a03 | Test both or just listen to your heart | 11 | 0 | 2026-03-02T13:24:44 | jacek2023 | false | null | 0 | o884a03 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o884a03/ | false | 11 |
t1_o8848ad | Try the new **v1.1.1** version. | 1 | 0 | 2026-03-02T13:24:26 | TwilightEncoder | false | null | 0 | o8848ad | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o8848ad/ | false | 1 |
t1_o8847uj | https://huggingface.co/collections/unsloth/qwen35 | 17 | 0 | 2026-03-02T13:24:22 | cenderis | false | null | 0 | o8847uj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8847uj/ | false | 17 |
t1_o8847ar | how does it compare to the old 14b? | 4 | 0 | 2026-03-02T13:24:16 | celsowm | false | null | 0 | o8847ar | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8847ar/ | false | 4 |
t1_o8842nz | FINALLY | 1 | 0 | 2026-03-02T13:23:29 | NegotiationNo1504 | false | null | 0 | o8842nz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8842nz/ | false | 1 |
t1_o8842qb | Qwen3.5-9B-Q8_0 or Qwen3.5-9B-UD-Q8_K_XL?<br>Which is best for 16GB VRAM? | 7 | 0 | 2026-03-02T13:23:29 | mintybadgerme | false | null | 0 | o8842qb | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8842qb/ | false | 7 |
t1_o8841bf | 32GB VRAM (2x5060TI 16GB), 96GB RAM (3600MHZ DDR4), Ryzen 5 3600 CPU | 1 | 0 | 2026-03-02T13:23:15 | zp-87 | false | null | 0 | o8841bf | false | /r/LocalLLaMA/comments/1pw6qvw/running_a_local_llm_for_development_minimum/o8841bf/ | false | 1 |
t1_o883vj1 | When you say "expensive", what do you mean? | 1 | 0 | 2026-03-02T13:22:17 | cyahahn | false | null | 0 | o883vj1 | false | /r/LocalLLaMA/comments/1p2lqi7/are_any_of_the_m_series_mac_macbooks_and_mac/o883vj1/ | false | 1 |
t1_o883uts | It's already usable. | 2 | 0 | 2026-03-02T13:22:10 | YourNightmar31 | false | null | 0 | o883uts | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o883uts/ | false | 2 |
t1_o883t6y | Hello | 1 | 0 | 2026-03-02T13:21:54 | Professional_Big7308 | false | null | 0 | o883t6y | false | /r/LocalLLaMA/comments/18s587r/what_are_the_best_free_uncensored_local_ai_for/o883t6y/ | false | 1 |
t1_o883r6q | There are thinking-mode benchmarks, but I wonder about instruct benchmarks.<br>I hope it will be better than the 2507 instruct models. | 2 | 0 | 2026-03-02T13:21:34 | IAMk10 | false | null | 0 | o883r6q | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o883r6q/ | false | 2 |
t1_o883poa | Yes, it didn’t work right for me. Would just stop thinking. Probably PEBKAC. | 17 | 0 | 2026-03-02T13:21:19 | Potential-Bet-1111 | false | null | 0 | o883poa | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o883poa/ | false | 17 |
t1_o883kcf | First there were pt and bin files. Pretty big and slow. Then some smart fellas came with the ggml format. Smaller, faster. Then some other smart fellas came with a better version of ggml they called gguf, even faster and better. Because it supports more quantization, has metadata and is not specific to a certain infere... | 1 | 0 | 2026-03-02T13:20:26 | ostroia | false | null | 0 | o883kcf | false | /r/LocalLLaMA/comments/1risgk3/gguf_format/o883kcf/ | false | 1 |
t1_o883jye | 27b is more eloquent, clearly a bit smarter, and benches better.<br>35-A3B is visibly worse when used. You’ll see it loop more, make more simple mistakes, etc.<br>That said, the A3B model is much, much faster, which means it can often get you a similar or potentially better result in the same or less time. | 11 | 0 | 2026-03-02T13:20:22 | teachersecret | false | null | 0 | o883jye | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o883jye/ | false | 11 |
t1_o883fn7 | what do you suggest for video/image stuff? | 2 | 0 | 2026-03-02T13:19:38 | cyahahn | false | null | 0 | o883fn7 | false | /r/LocalLLaMA/comments/1rdfuca/is_macstudio_fine_for_local_llms/o883fn7/ | false | 2 |
t1_o883ery | You have my permission ;) | 7 | 0 | 2026-03-02T13:19:29 | jacek2023 | false | null | 0 | o883ery | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o883ery/ | false | 7 |
t1_o883eas | I can run 25B quantized on my 4080. | 12 | 0 | 2026-03-02T13:19:24 | ianitic | false | null | 0 | o883eas | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o883eas/ | false | 12 |
t1_o883dpi | New version just released, try that one! (remember to grab the fresh image too)<br>Btw, the way to help me fix problems like these is to copy the system logs from the bottom of the left column on the session tab and send them to me here (or, even better, create an issue on GitHub). | 1 | 0 | 2026-03-02T13:19:18 | TwilightEncoder | false | null | 0 | o883dpi | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o883dpi/ | false | 1 |
t1_o883dql | Believe it or not, but the vast majority of people still sit at <8GB VRAM. MoE models exist, but something like this can achieve better speed while still being capable. If it includes reasoning and decent tool calling, then even better. The future is to make the models smaller while keeping the intelligence high, not other ... | 14 | 0 | 2026-03-02T13:19:18 | sagiroth | false | null | 0 | o883dql | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o883dql/ | false | 14 |
t1_o883b7a | Dense is definitely better if you can fit it into vram in a fast card. MoE is a good tradeoff if you need to use CPU or hybrid | 1 | 0 | 2026-03-02T13:18:53 | AnomalyNexus | false | null | 0 | o883b7a | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o883b7a/ | false | 1 |
t1_o883agl | I'm also waiting | 1 | 0 | 2026-03-02T13:18:46 | assemblu | false | null | 0 | o883agl | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o883agl/ | false | 1 |
t1_o8839dl | Can I use 0.8B qwen3.5 as Draft Model for qwen3.5 35b ? | 4 | 0 | 2026-03-02T13:18:35 | Life-Screen-9923 | false | null | 0 | o8839dl | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8839dl/ | false | 4 |
t1_o8839ah | Create a training set with one of the large models -> finetune one of the small ones on that -> faster, cheaper | 2 | 0 | 2026-03-02T13:18:34 | reto-wyss | false | null | 0 | o8839ah | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8839ah/ | false | 2 |
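Several comments in this table reference llama.cpp flags (disabling thinking via `--chat-template-kwargs`, and pairing a small draft model for speculative decoding). A minimal sketch of both invocations, assuming hypothetical GGUF filenames (`Qwen3.5-9B-Q8_0.gguf`, `Qwen3.5-0.8B-Q8_0.gguf` are placeholders, not real paths):

```shell
# Disable thinking mode for a Qwen3.5 model, per the --chat-template-kwargs
# suggestion in the thread (single quotes avoid escaping the inner JSON):
llama-server -m Qwen3.5-9B-Q8_0.gguf \
  --chat-template-kwargs '{"enable_thinking": false}'

# Speculative decoding with a small draft model via -md/--model-draft.
# Note: per the comments above, this may not work for MoE or
# Qwen3Next-based checkpoints, which lack draft-model support.
llama-server -m Qwen3.5-9B-Q8_0.gguf \
  -md Qwen3.5-0.8B-Q8_0.gguf \
  --draft-max 16 --draft-min 1
```

Whether the draft pairing actually speeds things up depends on tokenizer compatibility and acceptance rate; the thread reports it can also slow TG down when the draft model is a poor match.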