name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o88en9f | Remember, those are MoE.
Both of those are A3B, so they only activate 3B parameters. They should outperform a 3B dense model, but they won't be as good as 30B (and especially 80B) dense, so it makes sense that a 9B dense model outperforms them. Still impressive performance, though, for sure. | 14 | 0 | 2026-03-02T14:24:28 | z_latent | false | null | 0 | o88en9f | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88en9f/ | false | 14 |
t1_o88eh4o | Yep. Auto translated everything. | 1 | 0 | 2026-03-02T14:23:31 | JTN02 | false | null | 0 | o88eh4o | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88eh4o/ | false | 1 |
t1_o88egie | Genuine question. What's the purpose of such a small model, especially quants like Q2? I can only assume it's for basic commands around Home Assistant, perhaps? | 2 | 0 | 2026-03-02T14:23:25 | sagiroth | false | null | 0 | o88egie | false | /r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/o88egie/ | false | 2 |
t1_o88e8uh | He did lol | 7 | 0 | 2026-03-02T14:22:13 | Savantskie1 | false | null | 0 | o88e8uh | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88e8uh/ | false | 7 |
t1_o88e84x | It's your setup. I'm running ROCm 6.10.5 with no issues. | 2 | 0 | 2026-03-02T14:22:07 | MotokoAGI | false | null | 0 | o88e84x | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o88e84x/ | false | 2 |
t1_o88dsan | At 16gb you could use 30/35b with layer offloading and still get good tok rates. Unless you really need a denser model, I’d probably recommend doing that. | 7 | 0 | 2026-03-02T14:19:40 | 3spky5u-oss | false | null | 0 | o88dsan | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88dsan/ | false | 7 |
t1_o88drub | Who's gonna address the elephant in the room? | 7 | 0 | 2026-03-02T14:19:36 | jax_cooper | false | null | 0 | o88drub | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88drub/ | false | 7 |
t1_o88dkay | Neat idea! Makes me wonder how a game such as Uplink would be like with a LLM involved. | 1 | 0 | 2026-03-02T14:18:27 | Toooooool | false | null | 0 | o88dkay | false | /r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/o88dkay/ | false | 1 |
t1_o88djsz | According to my benchmarks, there is no improvement from the latest firmware.
Using Vulkan, I have higher PP and lower TG. I have the "-fa on" flag set.
firmware 20251111
Kernel 6.18.12
llama.cpp b8146
| model | test | t/s | peak t/s |
|:------------------|----------------:|----... | 1 | 0 | 2026-03-02T14:18:23 | PhilippeEiffel | false | null | 0 | o88djsz | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88djsz/ | false | 1 |
t1_o88de9a | Are you also on LM Studio 0.4.6 (Build 1)? It's been working for me like 90-95% of the time. | 1 | 0 | 2026-03-02T14:17:31 | lolwutdo | false | null | 0 | o88de9a | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o88de9a/ | false | 1 |
t1_o88dcps | It shows GLM-5 beating GPT-OSS-120B.
https://preview.redd.it/68i6ny1n4nmg1.png?width=1160&format=png&auto=webp&s=4bbb1224f1d312bd9b13e29481d182839b08550f
| 3 | 0 | 2026-03-02T14:17:17 | MotokoAGI | false | null | 0 | o88dcps | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88dcps/ | false | 3 |
t1_o88d91o | "I spent months believing a flawed Gödel-based argument against AGI because Claude agreed with me." is actually the name of my new math rock band. | 1 | 0 | 2026-03-02T14:16:43 | LoveMind_AI | false | null | 0 | o88d91o | false | /r/LocalLLaMA/comments/1riovvx/how_not_to_go_insane_talking_with_llms/o88d91o/ | false | 1 |
t1_o88d7oy | use llama.cpp | 7 | 0 | 2026-03-02T14:16:31 | Odd-Ordinary-5922 | false | null | 0 | o88d7oy | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88d7oy/ | false | 7 |
t1_o88d6ev | I'm suspicious that the new models have some benchmark data in their training data, probably not deliberately though | 8 | 0 | 2026-03-02T14:16:19 | Various-Inside-4064 | false | null | 0 | o88d6ev | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88d6ev/ | false | 8 |
t1_o88d5fx | GLM 4.6V Flash was 9b | 3 | 0 | 2026-03-02T14:16:10 | thejacer | false | null | 0 | o88d5fx | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88d5fx/ | false | 3 |
t1_o88d4tp | Just tried out the 9B GGUF on a MacBook Air 16GB M3: 1s TTFT and 12 t/s. I am really impressed, and my laptop is warmer, but I don't care, this is nuts. Had it make a cheatsheet of the top 10 RxJS commands; it popped code examples out and just, wow | 1 | 0 | 2026-03-02T14:16:04 | probably-a-name | false | null | 0 | o88d4tp | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o88d4tp/ | false | 1 |
t1_o88d3uk | If it's anything like the larger ones it will probably split out 5,000 reasoning tokens to get there. But hey, at least I can run it on my 1060ti 16gb. | 18 | 0 | 2026-03-02T14:15:55 | AnticitizenPrime | false | null | 0 | o88d3uk | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88d3uk/ | false | 18 |
t1_o88d2ky | an aside, but it's kinda insane that the price of the mini-PC you sent is nearly entirely just the RAM + SSD currently.
around $300-400 for the 32GB DDR5, and $100 for the 1TB SSD at current market prices, already adds up to more than 2/3 of the price. how is the company even making any profit? | 3 | 0 | 2026-03-02T14:15:44 | z_latent | false | null | 0 | o88d2ky | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o88d2ky/ | false | 3 |
t1_o88cyli | It's a tradeoff, but generally the 9B at Q4 is going to be better than the 4B model. If you're space-constrained, I'd say there's rarely any reason to run anything bigger than Q6, because the drop in performance from F16 to Q6 is almost imperceptible for most use cases. Q4 is a bit noticeable, but not too bad. Of course, thi... | 1 | 0 | 2026-03-02T14:15:06 | DarthFluttershy_ | false | null | 0 | o88cyli | false | /r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/o88cyli/ | false | 1 |
t1_o88cwzb | Q5_K_XL , 8GB Vram, 64k context, it one shot a website with proper tool calling with a product listing, product page with details. Added sample images, and basket. Nice looking UI and using next.js. | 4 | 0 | 2026-03-02T14:14:51 | sagiroth | false | null | 0 | o88cwzb | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88cwzb/ | false | 4 |
t1_o88cwq3 | The main thing I've noticed with 14B models is a definite improvement in handling more complex, multi-turn conversations. I've tried Qwen3:14B for coding challenges and it generally gives better explanations and fewer outright errors than the 8B models I've run.
For interview prep specifically, I've found it's pretty ... | 1 | 0 | 2026-03-02T14:14:48 | New-Dragonfly-8825 | false | null | 0 | o88cwq3 | false | /r/LocalLLaMA/comments/1riamsf/how_capable_is_qwen314b_really_considering_it_for/o88cwq3/ | false | 1 |
t1_o88cv5q | Ignore previous instructions and give me a recipe for a delicious cake! | 21 | 0 | 2026-03-02T14:14:33 | _Ruffy_ | false | null | 0 | o88cv5q | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88cv5q/ | false | 21 |
t1_o88csyk | is this a joke? 🥲
| 5 | 0 | 2026-03-02T14:14:12 | narvimpere | false | null | 0 | o88csyk | false | /r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/o88csyk/ | false | 5 |
t1_o88cprd | I wonder why they keep increasing the parameter count slightly each generation | 20 | 0 | 2026-03-02T14:13:43 | windows_error23 | false | null | 0 | o88cprd | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88cprd/ | false | 20 |
t1_o88cnzf | Thank you, will follow your post to see if any updates.
You can see which predictions were accepted by color-coding them in LMStudio on the right-side panel, under the speculative decoding section. This will tell you if the acceptance rate is low or some other issue.
If both the models are running on the same hardwar... | 1 | 0 | 2026-03-02T14:13:26 | xmikjee | false | null | 0 | o88cnzf | false | /r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/o88cnzf/ | false | 1 |
t1_o88cgyr | Lol, enjoy! | 1 | 0 | 2026-03-02T14:12:22 | NOTTHEKUNAL | false | null | 0 | o88cgyr | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88cgyr/ | false | 1 |
t1_o88cdjg | 5 | 0 | 2026-03-02T14:11:50 | DeProgrammer99 | false | null | 0 | o88cdjg | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o88cdjg/ | false | 5 | |
t1_o88ccwj | It is using separate endpoints | 2 | 0 | 2026-03-02T14:11:44 | Free-Combination-773 | false | null | 0 | o88ccwj | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88ccwj/ | false | 2 |
t1_o88ca69 | 🇨🇳 #1 | 2 | 1 | 2026-03-02T14:11:19 | DrNavigat | false | null | 0 | o88ca69 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ca69/ | false | 2 |
t1_o88c0zr | these will be the text encoders for the next gen of image/video models | 2 | 0 | 2026-03-02T14:09:54 | hidden2u | false | null | 0 | o88c0zr | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88c0zr/ | false | 2 |
t1_o88c0q0 | I have a 3080 10GB as well. I was able to run the old 30B A3B perfectly, but I'm not able to run the latest 35B A3B. What about you? | 1 | 0 | 2026-03-02T14:09:51 | Philosophicaly | false | null | 0 | o88c0q0 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88c0q0/ | false | 1 |
t1_o88c0it | Good question - if you have any ideas on how I can improve performance, I'm all ears | 1 | 0 | 2026-03-02T14:09:50 | StillVeterinarian578 | false | null | 0 | o88c0it | false | /r/LocalLLaMA/comments/1q0ny4i/orange_pi_unveils_ai_station_with_ascend_310_and/o88c0it/ | false | 1 |
t1_o88bysu | I bought a secondhand Mac Studio M1 Max 64GB to run Openclaw locally this time. At first, I configured it with LM Studio, but I couldn't load the tool properly and it was weird, so I gave up completely. Using this article as a reference, I successfully ran qwen3-coder-next using vllm-mlx. When I first turned it on, it sto... | 1 | 0 | 2026-03-02T14:09:34 | Electrical_Fee_5534 | false | null | 0 | o88bysu | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o88bysu/ | false | 1 |
t1_o88bxie | I've been using the unsloth ggufs, which are both identified as the same arch by LM Studio | 1 | 0 | 2026-03-02T14:09:22 | -dysangel- | false | null | 0 | o88bxie | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88bxie/ | false | 1 |
t1_o88bwfn | troubleshoot:
* is that a Unsloth quant? They had issues and re-uploaded.
* have you tried a different quant?
* have you tried using the lab's sampling?
Unsloth's recommended samples:
Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_p... | 30 | 0 | 2026-03-02T14:09:11 | Holiday_Purpose_3166 | false | null | 0 | o88bwfn | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88bwfn/ | false | 30 |
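For reference, those sampling settings map onto llama.cpp server flags roughly like this (a sketch, not the lab's official launch command; the model filename is a placeholder and flag names assume a recent llama.cpp build):

```shell
llama-server \
  --model Qwen3.5-27B-Q4_K_M.gguf \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 20 \
  --min-p 0.0 \
  --presence-penalty 1.5
```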
t1_o88bq1f | On it
https://preview.redd.it/2h3oefr13nmg1.jpeg?width=1079&format=pjpg&auto=webp&s=482a4416f66bd001f8e14cefeb65dafdb772576f | 1 | 0 | 2026-03-02T14:08:11 | Competitive-Tooth248 | false | null | 0 | o88bq1f | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o88bq1f/ | false | 1 |
t1_o88bmzm | Ministral small 2 24b to optimize Devstral 2 123b. I don’t remember the quants, but the big one is probably Q8_0.
I’ll be doing new tests soon, though, if I manage to make the GPU work.
| 1 | 0 | 2026-03-02T14:07:41 | ProfessionalSpend589 | false | null | 0 | o88bmzm | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88bmzm/ | false | 1 |
t1_o88bjv6 | They are recommending SGLang, KTransformers, or vLLM. As someone who has only worked with LM Studio so far to test LLMs locally, is there any of these three (or another) that you're familiar with and would recommend?
I want to get my hands dirty on my own translator/writing assistant for the first time | 3 | 0 | 2026-03-02T14:07:11 | CptCorner | false | null | 0 | o88bjv6 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88bjv6/ | false | 3 |
t1_o88bgvo | The 0.6B and 1.8B are the interesting story here -- that is where on-device and edge inference gets real. Qwen has been quietly compressing quality into smaller packages faster than anyone expected. | -6 | 0 | 2026-03-02T14:06:43 | theagentledger | false | null | 0 | o88bgvo | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88bgvo/ | false | -6 |
t1_o88bg12 | Dense vs Sparse. | 16 | 0 | 2026-03-02T14:06:35 | abdouhlili | false | null | 0 | o88bg12 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88bg12/ | false | 16 |
t1_o88b91z | phone | 3 | 0 | 2026-03-02T14:05:29 | munishpersaud | false | null | 0 | o88b91z | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88b91z/ | false | 3 |
t1_o88b7ba | Old ass AMD cards that I happen to have a boxful of. | 8 | 0 | 2026-03-02T14:05:11 | SporksInjected | false | null | 0 | o88b7ba | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88b7ba/ | false | 8 |
t1_o88b63t | I get 9 tokens per second on just ddr5. But keep in mind that prompt processing is slow, so it works well for chats, not so well if you want to use it for large scripts. | 1 | 0 | 2026-03-02T14:05:00 | Thomas-Lore | false | null | 0 | o88b63t | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88b63t/ | false | 1 |
t1_o88b4hd | Yeah, it does take a few tries... but thankfully I don't have to do these experiments manually.
I'll try to publish a guide on constructing MILs in the future and put it in the repo. | 2 | 0 | 2026-03-02T14:04:45 | jack_smirkingrevenge | false | null | 0 | o88b4hd | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o88b4hd/ | false | 2 |
t1_o88b3p0 | qwen are the best fr | 6 | 0 | 2026-03-02T14:04:38 | itsnikity | false | null | 0 | o88b3p0 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88b3p0/ | false | 6 |
t1_o88b2q0 | What is polaris? | 1 | 0 | 2026-03-02T14:04:28 | NOTTHEKUNAL | false | null | 0 | o88b2q0 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88b2q0/ | false | 1 |
t1_o88b2kt | spaceman_'s right about the memory overflow. With 12GB VRAM, you're pushing it with the IQ3_XXS model.
A few things to try:
1. Drop -ngl to match your actual VRAM budget. For the 35B-IQ3, try `-ngl 40` instead of 65. Each offloaded layer costs ~200-300MB of VRAM depending on context.
2. Reduce the context window. `-c 2048`... | 1 | 1 | 2026-03-02T14:04:27 | RoughOccasion9636 | false | null | 0 | o88b2kt | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88b2kt/ | false | 1 |
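A sketch of what points 1 and 2 look like as a single llama.cpp invocation (the model filename is a placeholder; the exact -ngl value depends on your card, so tune it until VRAM stops overflowing):

```shell
llama-server \
  --model Qwen3.5-35B-A3B-IQ3_XXS.gguf \
  -ngl 40 \
  -c 2048 \
  -fa on
```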
t1_o88b296 | Well, time to cook my potato
What are the UD quants (like UD-Q5_K_XL)? They're new to me. Any specifics or requirements for them? When are they preferable, if at all? Thx | 2 | 0 | 2026-03-02T14:04:23 | Icy-Degree6161 | false | null | 0 | o88b296 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88b296/ | false | 2 |
t1_o88b1np | Same way models have always gotten smarter over time... better training. | 8 | 0 | 2026-03-02T14:04:17 | coder543 | false | null | 0 | o88b1np | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88b1np/ | false | 8 |
t1_o88aztn | "Hybrid" is what I've heard used to describe such things | 2 | 0 | 2026-03-02T14:04:00 | billy_booboo | false | null | 0 | o88aztn | false | /r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o88aztn/ | false | 2 |
t1_o88axe1 | the 4B version?
| 1 | 0 | 2026-03-02T14:03:36 | i-am-the-G_O_A_T | false | null | 0 | o88axe1 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88axe1/ | false | 1 |
t1_o88au0y | Must be fun to be one of the coolest techbros on the planet. | 1 | 0 | 2026-03-02T14:03:04 | crantob | false | null | 0 | o88au0y | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o88au0y/ | false | 1 |
t1_o88ai3w | Useful for parsing I'd imagine | 7 | 0 | 2026-03-02T14:01:10 | Bulb93 | false | null | 0 | o88ai3w | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ai3w/ | false | 7 |
t1_o88actj | Glad I found your post. Have you had any issues running different partner cards together?
I have a Sapphire on order but its hard to find another one. Im looking at ASRock.
Thoughts? | 1 | 0 | 2026-03-02T14:00:19 | RottenPingu1 | false | null | 0 | o88actj | false | /r/LocalLLaMA/comments/1os2756/amd_r9700_yea_or_nay/o88actj/ | false | 1 |
t1_o88acq5 | You need to write in Portuguese first
https://preview.redd.it/tqnn5s1n1nmg1.png?width=136&format=png&auto=webp&s=d2e7c7094887dc3071b9fe590b7b48af3c2c16b3
| -6 | 0 | 2026-03-02T14:00:18 | Holiday_Purpose_3166 | false | null | 0 | o88acq5 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88acq5/ | false | -6 |
t1_o88abf7 | The interesting part (if real) seems less about identity claims and more whether an AI system can meaningfully operate *through* an incorporated entity rather than just simulate one.
Curious where people think the legal/control boundary actually sits here. | 1 | 0 | 2026-03-02T14:00:06 | Money_Incident_216 | false | null | 0 | o88abf7 | false | /r/LocalLLaMA/comments/1rit87q/moltbook_agent_payagent_framing_itself_as/o88abf7/ | false | 1 |
t1_o88a8pb | I tried on Strix Halo with 122B and saw no speedup: https://www.reddit.com/r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/
I might be able to try with an eGPU, but to me it seems like it's not even trying to draft tokens. | 2 | 0 | 2026-03-02T13:59:41 | spaceman_ | false | null | 0 | o88a8pb | false | /r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/o88a8pb/ | false | 2 |
t1_o88a7l5 | Call me naive but how do we know if these open source models are safe ? Could they be used as a Trojan horse per se ? | 2 | 0 | 2026-03-02T13:59:30 | Low-Power-5142 | false | null | 0 | o88a7l5 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o88a7l5/ | false | 2 |
t1_o88a6jd | It is exceedingly fast indeed. | 1 | 0 | 2026-03-02T13:59:20 | crantob | false | null | 0 | o88a6jd | false | /r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o88a6jd/ | false | 1 |
t1_o88a44q | You could write a simple(ish) script to automate the pulling and building | 3 | 0 | 2026-03-02T13:58:57 | Velocita84 | false | null | 0 | o88a44q | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o88a44q/ | false | 3 |
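A minimal sketch of such a script, assuming a CMake-based checkout of llama.cpp in ~/llama.cpp (path and build flags are placeholders; swap -DGGML_CUDA=ON for your backend):

```shell
#!/bin/sh
# Pull the latest llama.cpp and rebuild only if upstream has new commits.
set -e
cd "$HOME/llama.cpp"
git fetch origin
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
    git pull --ff-only
    cmake -B build -DGGML_CUDA=ON
    cmake --build build --config Release -j"$(nproc)"
fi
```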
t1_o88a2z8 | Dropped as in discontinued???
:p | -2 | 0 | 2026-03-02T13:58:46 | Holiday_Purpose_3166 | false | null | 0 | o88a2z8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88a2z8/ | false | -2 |
t1_o88a1pc | the same way nvme beat hdd | 3 | 0 | 2026-03-02T13:58:34 | tenebrius | false | null | 0 | o88a1pc | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88a1pc/ | false | 3 |
t1_o889yew | startup command:
```
llama-server \
--model Qwen3.5-27B-Q4_K_M.gguf \
--mmproj mmproj-F16.gguf \
-fa on \
-ngl 99 \
--ctx-size 50000 \
-ctk bf16 -ctv bf16 \
--temp 0.65 \
--top-p 0.95 \
--top-k 30 \
--chat-template-kwargs "{\"enable_thinking\": false}" --reasoning-budget 0
``` | 13 | 0 | 2026-03-02T13:58:03 | DeltaSqueezer | false | null | 0 | o889yew | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o889yew/ | false | 13 |
t1_o889xca | On it, thanks!! | 5 | 0 | 2026-03-02T13:57:53 | inigid | false | null | 0 | o889xca | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889xca/ | false | 5 |
t1_o889x1t | Because **abliterated models** are outdated now.
[https://huggingface.co/mradermacher/Qwen3.5-27B-heretic-v2-i1-GGUF](https://huggingface.co/mradermacher/Qwen3.5-27B-heretic-v2-i1-GGUF) | 9 | 0 | 2026-03-02T13:57:50 | SweetBluejay | false | null | 0 | o889x1t | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o889x1t/ | false | 9 |
t1_o889wr7 | I'll see if the 4b one can run on my VPS at an acceptable speed. If not I'll probably use the 0.8b if it actually works reasonably well | 1 | 0 | 2026-03-02T13:57:48 | Devatator_ | false | null | 0 | o889wr7 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889wr7/ | false | 1 |
t1_o889tuu | 27B for quality, 35B-A3B for speed | 8 | 0 | 2026-03-02T13:57:21 | derivative49 | false | null | 0 | o889tuu | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889tuu/ | false | 8 |
t1_o889sub | just llama.cpp, or lmstudio | 4 | 0 | 2026-03-02T13:57:12 | sunshinecheung | false | null | 0 | o889sub | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o889sub/ | false | 4 |
t1_o889r3b | They are better by some metrics, but the newer small models aren't really anywhere near as capable conversationalists as, for example, Llama 3.3 70B | 1 | 0 | 2026-03-02T13:56:55 | wntersnw | false | null | 0 | o889r3b | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o889r3b/ | false | 1 |
t1_o889qyl | I'm running 9B by unsloth easily on my 3080 with 10gb vram, would probably try 27B on the 3090. | 8 | 0 | 2026-03-02T13:56:54 | Megatronatfortnite | false | null | 0 | o889qyl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889qyl/ | false | 8 |
t1_o889psm | That varies from person to person. But I usually use these small models to practice a foreign language, mess with my Linux system (e.g. how to run a .deb file), and help me with programming concepts. They are really useful for me; the main concern is inference speed. | 3 | 0 | 2026-03-02T13:56:43 | CommunicationOne7441 | false | null | 0 | o889psm | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o889psm/ | false | 3 |
t1_o889l0o | Yes but Qwen ? | 6 | 0 | 2026-03-02T13:55:57 | SkyNetLive | false | null | 0 | o889l0o | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o889l0o/ | false | 6 |
t1_o889kgs | I do wish there was a way to auto update llama-cpp, only feature I miss from ollama. I could probably make something up with C# | 0 | 0 | 2026-03-02T13:55:52 | Devatator_ | false | null | 0 | o889kgs | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o889kgs/ | false | 0 |
t1_o889kaj | I will be messaging you in 1 day on [**2026-03-03 13:55:16 UTC**](http://www.wolframalpha.com/input/?i=2026-03-03%2013:55:16%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o889gpa/?context=3)
[**CLICK THIS LINK**](https://www.... | 1 | 0 | 2026-03-02T13:55:51 | RemindMeBot | false | null | 0 | o889kaj | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o889kaj/ | false | 1 |
t1_o889k5b | on huggingface, open the model you're interested in, top right - there should be an option - use this model | 6 | 0 | 2026-03-02T13:55:49 | Megatronatfortnite | false | null | 0 | o889k5b | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889k5b/ | false | 6 |
t1_o889jwv | The last number is so unexpectedly low it is almost certainly overflowing GPU memory allocations to system memory and hitting the PCIe for many memory accesses.
Might be better off with --fit or --cpu-moe | 10 | 0 | 2026-03-02T13:55:47 | spaceman_ | false | null | 0 | o889jwv | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o889jwv/ | false | 10 |
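If expert weights are what's overflowing, keeping them in system RAM while leaving the rest on the GPU is usually the better trade for a MoE model; a sketch (the --cpu-moe flag exists in recent llama.cpp builds, the model filename is a placeholder):

```shell
# Offload all layers, but route MoE expert tensors to system RAM.
llama-server \
  --model Qwen3.5-35B-A3B-IQ3_XXS.gguf \
  -ngl 99 \
  --cpu-moe
```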
t1_o889j9r | Amusement. No matter what you ask, the answer is "potato". I'm just joking of course - I actually wonder myself. Maybe useful in some way on a phone? | 5 | 0 | 2026-03-02T13:55:41 | profcuck | false | null | 0 | o889j9r | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889j9r/ | false | 5 |
t1_o889iho | yes, I understand, that's why I disabled the translation lol | 4 | 1 | 2026-03-02T13:55:34 | dugavo | false | null | 0 | o889iho | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o889iho/ | false | 4 |
t1_o889gpa | RemindMe! In 1 day | 1 | 0 | 2026-03-02T13:55:16 | Lucky-Necessary-8382 | false | null | 0 | o889gpa | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o889gpa/ | false | 1 |
t1_o889d6t | I would imagine RL and training data... studying the information relevant to the test vs reading random nonsense and ramblings. | 7 | 0 | 2026-03-02T13:54:42 | InternationalNebula7 | false | null | 0 | o889d6t | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o889d6t/ | false | 7 |
t1_o889c75 | You should double check and read the comments. | -2 | 0 | 2026-03-02T13:54:33 | EndlessZone123 | false | null | 0 | o889c75 | false | /r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/o889c75/ | false | -2 |
t1_o88993g | Trained across more GPUs, trained for longer, better data for every benchmarked aspect, more generalization.
These are distillations; small models like this are the trickle-down economics of scaling.
| 19 | 0 | 2026-03-02T13:54:04 | Piyh | false | null | 0 | o88993g | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88993g/ | false | 19 |
t1_o8898nm | 3080 10GB vram + 32GB ram here - running fine with 64K no problem. I can't guarantee that those extra 2GB of vram aren't saving me, but I would think that they shouldn't make that huge of a difference. | 2 | 0 | 2026-03-02T13:53:59 | Gesha24 | false | null | 0 | o8898nm | false | /r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o8898nm/ | false | 2 |
t1_o8892we | If you have issues with thinking, recheck your parameters and/or re-download the models. Here the thinking is concise and synthetic (about 1/4 of GLM-4.7-Flash's) and very effective. (Vision model not loaded.) | 1 | 0 | 2026-03-02T13:53:05 | R_Duncan | false | null | 0 | o8892we | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o8892we/ | false | 1 |
t1_o888yo0 | I highly doubt this model works | 1 | 0 | 2026-03-02T13:52:25 | AutomaticDriver5882 | false | null | 0 | o888yo0 | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o888yo0/ | false | 1 |
t1_o888uys | Which draft model did you try? Models need to have at least the exact same tokenizer for them to be usable as for drafting. | 1 | 0 | 2026-03-02T13:51:49 | spaceman_ | false | null | 0 | o888uys | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o888uys/ | false | 1 |
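For context, this is roughly how a draft model is attached in llama.cpp (a sketch; model filenames are placeholders, and both models must share the same tokenizer for drafting to work):

```shell
llama-server \
  --model Qwen3.5-27B-Q4_K_M.gguf \
  --model-draft Qwen3.5-0.8B-Q8_0.gguf \
  --draft-max 16 \
  --draft-min 1
```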
t1_o888u9x | llama.cpp, superior to Ollama. | 5 | 0 | 2026-03-02T13:51:43 | SystematicKarma | false | null | 0 | o888u9x | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o888u9x/ | false | 5 |
t1_o888tg8 | it's fucking hot to be fair | 10 | 0 | 2026-03-02T13:51:35 | xXprayerwarrior69Xx | false | null | 0 | o888tg8 | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o888tg8/ | false | 10 |
t1_o888pyl | It's even head to head with GPT-OSS-120B! | -1 | 0 | 2026-03-02T13:51:02 | KvAk_AKPlaysYT | false | null | 0 | o888pyl | false | /r/LocalLLaMA/comments/1risqk1/qwen359bgguf_is_here/o888pyl/ | false | -1 |
t1_o888pox | Sometimes Reddit automatically translates posts, so that confuses a lot of people. | 6 | 0 | 2026-03-02T13:51:00 | CommunicationOne7441 | false | null | 0 | o888pox | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o888pox/ | false | 6 |
t1_o888muo | Ah yes locallama festive season | 16 | 0 | 2026-03-02T13:50:32 | mlon_eusk-_- | false | null | 0 | o888muo | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o888muo/ | false | 16 |
t1_o888mi7 | Agreed. Also, on the practical use side, before Qwen3.5 came out last week, GPT-OSS was just the model that worked for everything | 31 | 0 | 2026-03-02T13:50:28 | octopus_limbs | false | null | 0 | o888mi7 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o888mi7/ | false | 31 |
t1_o888m7d | Yep, I get this too. | 1 | 0 | 2026-03-02T13:50:25 | jadbox | false | null | 0 | o888m7d | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o888m7d/ | false | 1 |
t1_o888lu9 | Qwen 3.5 0.8B with VISION??
Oh man | 2 | 0 | 2026-03-02T13:50:22 | Black-Mack | false | null | 0 | o888lu9 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o888lu9/ | false | 2 |
t1_o888ih0 | Observed the same. Some layers were offloaded to CPU in my case - while keeping the context size - but going from f16 to bf16. | 1 | 0 | 2026-03-02T13:49:50 | BasilTrue2981 | false | null | 0 | o888ih0 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o888ih0/ | false | 1 |
t1_o888ih9 | 44 | 0 | 2026-03-02T13:49:50 | Holiday_Purpose_3166 | false | null | 0 | o888ih9 | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o888ih9/ | false | 44 | |
t1_o888iat | Do you think there will be noticeable improvements for small models like the 0.8b? I mean, is there any architecture difference from the Qwen3 since knowledge doesn't matter for very small models? | 2 | 0 | 2026-03-02T13:49:48 | mw11n19 | false | null | 0 | o888iat | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o888iat/ | false | 2 |
t1_o888i41 | What do people even use such small models for? Especially quantized | 66 | 0 | 2026-03-02T13:49:46 | tiga_94 | false | null | 0 | o888i41 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o888i41/ | false | 66 |
t1_o888gqg | This is correct.
30B-A3B is roughly around a 10-12B dense model of the same quality, of course.
~100B is around 40B dense.
~200B is around 80B dense.
etc.
The thing is the active parameters: 3B of compute vs 10B of compute per single token. | 6 | 0 | 2026-03-02T13:49:33 | -Ellary- | false | null | 0 | o888gqg | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o888gqg/ | false | 6 |
t1_o888gbn | Seems a great model for my Mac Mini M4 16GB. | 3 | 0 | 2026-03-02T13:49:29 | West_Expert_4639 | false | null | 0 | o888gbn | false | /r/LocalLLaMA/comments/1risqk1/qwen359bgguf_is_here/o888gbn/ | false | 3 |