name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o88kkwv | Play with the repetition settings:
--repeat-last-n N last n tokens to consider for penalize (default: 64, 0 = disabled, -1
--repeat-penalty N penalize repeat sequence of tokens (default: 1.00, 1.0 = disabled)
--presence-penalty N repeat alpha pr... | 2 | 0 | 2026-03-02T14:56:26 | spaceman_ | false | null | 0 | o88kkwv | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o88kkwv/ | false | 2 |
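The flags quoted in this comment are real llama.cpp sampling options; a minimal sketch of a run that tightens them to suppress looping, assuming a hypothetical model path and illustrative (untuned) values:

```shell
# Hypothetical model path; values are illustrative starting points, not tuned.
# --repeat-last-n widens the penalized window from the default 64;
# --repeat-penalty 1.05 applies a mild penalty (1.0 = disabled);
# --presence-penalty pushes against tokens already present in the window.
llama-server --model ./Qwen3.5-9B-Q4_K_M.gguf \
  --repeat-last-n 256 --repeat-penalty 1.05 --presence-penalty 0.5
```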
t1_o88kkdd | Hah sysadmin privilege is real, that's a great find | 1 | 0 | 2026-03-02T14:56:21 | Western_Objective209 | false | null | 0 | o88kkdd | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o88kkdd/ | false | 1 |
t1_o88ki0z | In my experience it's quite low EQ but pretty good at instruction following and reasoning. Lazy af for Information Extraction though. | 2 | 0 | 2026-03-02T14:56:02 | mtmttuan | false | null | 0 | o88ki0z | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88ki0z/ | false | 2 |
t1_o88khy9 | You can check out these prebuilt containers and build on top of them if they don't quite suit your needs.
https://github.com/ggml-org/llama.cpp/blob/master/docs/docker.md | 2 | 0 | 2026-03-02T14:56:01 | chensium | false | null | 0 | o88khy9 | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o88khy9/ | false | 2 |
t1_o88khkb | Ok it's definitely something on my end then, appreciate it! | 1 | 0 | 2026-03-02T14:55:56 | sleepy_roger | false | null | 0 | o88khkb | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o88khkb/ | false | 1 |
t1_o88khjf | Add `--chat-template-kwargs '{"enable_thinking": true}'` | 3 | 0 | 2026-03-02T14:55:56 | xenydactyl | false | null | 0 | o88khjf | false | /r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/o88khjf/ | false | 3 |
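Note that `--chat-template-kwargs` only takes effect when the Jinja chat template is enabled with `--jinja`; a hedged sketch with a hypothetical model path:

```shell
# Requires --jinja so the chat template (and its kwargs) are applied.
llama-server --model ./Qwen3.5-2B-Q8_0.gguf --jinja \
  --chat-template-kwargs '{"enable_thinking": true}'
```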
t1_o88kfit | Ok then it's definitely something with my settings I'm guessing. Thank you! I was thinking there's no way it couldn't do this. | 2 | 0 | 2026-03-02T14:55:38 | sleepy_roger | false | null | 0 | o88kfit | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o88kfit/ | false | 2 |
t1_o88kc9w | You must not have tried the models yourself or seen any practical use-case videos. 3.5 27b is a tier above the 3-series versions. 27b is equivalent to some of the mini or flash models from OpenAI and Google in some tasks. They punch far above their weight, and have much more use than just playing around with. 0.8b theoretically se... | 15 | 0 | 2026-03-02T14:55:10 | sonicnerd14 | false | null | 0 | o88kc9w | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88kc9w/ | false | 15 |
t1_o88ka2f | What is no-mmap? | 3 | 0 | 2026-03-02T14:54:51 | Dr4x_ | false | null | 0 | o88ka2f | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88ka2f/ | false | 3 |
t1_o88k911 | what else could they do? the benchmark numbers used here are after adjustment, no? Not the initial one when their environment was broken. | 3 | 0 | 2026-03-02T14:54:42 | FunConversation7257 | false | null | 0 | o88k911 | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o88k911/ | false | 3 |
t1_o88k2ng | Somebody run swebench with the 9b | 2 | 0 | 2026-03-02T14:53:47 | True_Requirement_891 | false | null | 0 | o88k2ng | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88k2ng/ | false | 2 |
t1_o88k0wl | Is there a way to test these models, like a Hugging Face space or something? | 1 | 0 | 2026-03-02T14:53:32 | Confident-Aerie-6222 | false | null | 0 | o88k0wl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88k0wl/ | false | 1 |
t1_o88jy4h | Would also like to see qwen 3.5 35b3a and 122b5a in comparison. I've been using gpt oss 120b for the past 6 months as my daily local model, with qwen 3vl for images. I now switched from qwen 3vl to qwen3.5 35b3a and find myself using qwen 3.5 as the new daily over gpt oss 120b | 5 | 0 | 2026-03-02T14:53:07 | RepresentativeFun28 | false | null | 0 | o88jy4h | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88jy4h/ | false | 5 |
t1_o88jn36 | There's a whole herd of them. But the important part is to look at what exactly is measured there. | 5 | 0 | 2026-03-02T14:51:30 | Prudent-Ad4509 | false | null | 0 | o88jn36 | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88jn36/ | false | 5 |
t1_o88jjby | yes, tested 2G and 4G on CPU | 5 | 0 | 2026-03-02T14:50:57 | cosmoschtroumpf | false | null | 0 | o88jjby | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88jjby/ | false | 5 |
t1_o88jgbr | Alibaba is extra confusing as they both benchmax AND deliver amazing models. You always need to feel them out | 15 | 0 | 2026-03-02T14:50:31 | ForsookComparison | false | null | 0 | o88jgbr | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88jgbr/ | false | 15 |
t1_o88jfnp | Make sure your KV cache is set to bf16. Also try other quants - some quants can cause looping more often | 2 | 0 | 2026-03-02T14:50:26 | fulgencio_batista | false | null | 0 | o88jfnp | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o88jfnp/ | false | 2 |
t1_o88je4x | the Irony of OpenClaw is that I had this argument months ago with [claude.ai](http://claude.ai) that as coded its not so much an Ai as an almost useful tool that uses the operator to do the job its paid to do. I asked Claude to monitor a news event locally. Just scrape the local news feeds and create a daily update, N... | 1 | 0 | 2026-03-02T14:50:13 | smoke99999 | false | null | 0 | o88je4x | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o88je4x/ | false | 1 |
t1_o88jcss | Nope! That was true for the previous Qwen 3 VL where the vision portion was "hacked onto" the normal text versions. | 1 | 0 | 2026-03-02T14:50:01 | KvAk_AKPlaysYT | false | null | 0 | o88jcss | false | /r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/o88jcss/ | false | 1 |
t1_o88jch9 | [removed] | 1 | 0 | 2026-03-02T14:49:59 | [deleted] | true | null | 0 | o88jch9 | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88jch9/ | false | 1 |
t1_o88j89l | even 8b and 27b are barely useful, but with a big enough context I can see some use cases, and 0.8b quantized sounds like a nonsense generator to me | -12 | 0 | 2026-03-02T14:49:22 | tiga_94 | false | null | 0 | o88j89l | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88j89l/ | false | -12 |
t1_o88j6v1 | Sure, but I wouldn't call 122B "small"? | 2 | 0 | 2026-03-02T14:49:10 | spaceman_ | false | null | 0 | o88j6v1 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88j6v1/ | false | 2 |
t1_o88j6wz | I used an IPA-to-Whisper model, but its results were not accurate. The problem is that the topic of my graduation thesis is TTS and STT models, and my supervisor wants the application to be focused on learning English. So far, I have not found a suitable STT model.
I tried Wav2Vec2, but the issue is that it outputs th... | 1 | 0 | 2026-03-02T14:49:10 | jihad-Ad4063 | false | null | 0 | o88j6wz | false | /r/LocalLLaMA/comments/1prmjt3/best_speechtotext_in_2025/o88j6wz/ | false | 1 |
t1_o88j4uu | It appears that the instruct models show no significant improvement, with the 2B and 0.8B versions delivering results as follows:
| | Qwen3-4B-2507 | Qwen3-1.7B | Qwen3.5-2B | Qwen3.5-0.8B |
| :--- | :--- | :--- | :--- | :--- |
| **Non-Thinking Mode** |
| MMLU-Pro | 69.6 | 40.2 | 55.3 | 29.7 |
| MMLU-Redux |... | 1 | 0 | 2026-03-02T14:48:53 | Status_Ranger7984 | false | null | 0 | o88j4uu | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88j4uu/ | false | 1 |
t1_o88j4g0 | Qwen 3.5 models in gguf dont need mmproj file for vision ? | 1 | 0 | 2026-03-02T14:48:50 | Ok_Reserve4339 | false | null | 0 | o88j4g0 | false | /r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/o88j4g0/ | false | 1 |
t1_o88j3sp | Are you running ROCm or Vulkan? When did you last build llama.cpp and what were the CMake flags? | 2 | 0 | 2026-03-02T14:48:43 | spaceman_ | false | null | 0 | o88j3sp | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88j3sp/ | false | 2 |
t1_o88j1yv | This model one-shotted my binary tree inversion benchmark that both 27B and 35B struggled with. Incredible! | 2 | 0 | 2026-03-02T14:48:28 | jeremyckahn | false | null | 0 | o88j1yv | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88j1yv/ | false | 2 |
t1_o88izga | I thought you were adding a second line as a rebuttal! Maybe I misunderstood!
But anyway I agree they are getting better and my suspicions again like maybe benchmaxxed! | 5 | 0 | 2026-03-02T14:48:06 | Various-Inside-4064 | false | null | 0 | o88izga | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88izga/ | false | 5 |
t1_o88iyh8 | | 146 | 0 | 2026-03-02T14:47:57 | syxa | false | null | 0 | o88iyh8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88iyh8/ | false | 146 |
t1_o88ixc4 | yeah that is a more fair comparison | 0 | 0 | 2026-03-02T14:47:47 | magnus-m | false | null | 0 | o88ixc4 | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88ixc4/ | false | 0 |
t1_o88ithj | 9 GB for 8-bit quants + something for KV cache, so yes, it fits. But 4b would be so much faster. | 7 | 0 | 2026-03-02T14:47:14 | dkeiz | false | null | 0 | o88ithj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ithj/ | false | 7 |
t1_o88intk | It's purely testing at the moment; if we can run a small model on CPU against a small data set, we can test it out.
Another one of our sites has it, but they use a tool called XWiki; we likely won't be using that system.
But I agree with the above, we will test first on a small set of local dat, however, it ... | 1 | 0 | 2026-03-02T14:46:24 | Beginning-Chef-7085 | false | null | 0 | o88intk | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o88intk/ | false | 1 |
t1_o88igrk | Am I the only one who thinks these charts are fucking hard to read? | 33 | 0 | 2026-03-02T14:45:22 | peyloride | false | null | 0 | o88igrk | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88igrk/ | false | 33 |
t1_o88ierm | Is it available for ollama?
Are they better than qwen2.5-coder at coding? | 4 | 0 | 2026-03-02T14:45:04 | Mashic | false | null | 0 | o88ierm | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ierm/ | false | 4 |
t1_o88iej4 | small MoE is kind of useless with draft model due to compute limited | 0 | 0 | 2026-03-02T14:45:02 | shing3232 | false | null | 0 | o88iej4 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88iej4/ | false | 0 |
t1_o88icdx | Models are originally released as weights and configs in different files inside one directory, and they can be run by transformers or vllm which are both python based inference engines. GGUF packages everything into one file that can be run by llama.cpp. transformers is a low level inference engine, vllm is meant for s... | 1 | 0 | 2026-03-02T14:44:43 | Velocita84 | false | null | 0 | o88icdx | false | /r/LocalLLaMA/comments/1risgk3/gguf_format/o88icdx/ | false | 1 |
t1_o88iajt | GLM 5 at Q4 is ~300-350GB, not 1.5TB. GPT OSS 120b at FP16 is ~240GB, not 65GB. | -7 | 0 | 2026-03-02T14:44:27 | HyperWinX | false | null | 0 | o88iajt | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88iajt/ | false | -7 |
t1_o88iacu | Ooh! Nice!!! | 1 | 0 | 2026-03-02T14:44:25 | Sticking_to_Decaf | false | null | 0 | o88iacu | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88iacu/ | false | 1 |
t1_o88i9q3 | it's much faster in my setup | 1 | 0 | 2026-03-02T14:44:20 | Educational_Sun_8813 | false | null | 0 | o88i9q3 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88i9q3/ | false | 1 |
t1_o88i69b | These smaller models are far more capable than before. 8b VL was nearly as good as some bigger models for computer-use tasks. I'd imagine this variant with VL integrated into one model will fare even better. You can use it for agentic tasks that require taking actions, but maybe not for high-intelligence tasks such as... | 14 | 0 | 2026-03-02T14:43:49 | sonicnerd14 | false | null | 0 | o88i69b | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88i69b/ | false | 14 |
t1_o88i5bg | Yeah, fine for home, but not if you want to build something others will rely on. If you don't have confidence that your queries are returning the best possible matches, consistently, every single time, then you don't have a product; you have a toy or a simple PoC. | 1 | 0 | 2026-03-02T14:43:40 | Low-Opening25 | false | null | 0 | o88i5bg | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o88i5bg/ | false | 1 |
t1_o88i4ma | Simply speaking, what's the IMAP of an LLM/AI/AE harness? What's the Thunderbird? | 1 | 0 | 2026-03-02T14:43:34 | dadaphl | false | null | 0 | o88i4ma | false | /r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/o88i4ma/ | false | 1 |
t1_o88i3i0 | yes, it's prompt processing, so faster better | 1 | 0 | 2026-03-02T14:43:24 | Educational_Sun_8813 | false | null | 0 | o88i3i0 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88i3i0/ | false | 1 |
t1_o88i2jl | What size are the Q4_K_M and Q6_K_M? I'm so happy that Qwen released 0.8b and 2b models! | 2 | 0 | 2026-03-02T14:43:16 | Ok_Reserve4339 | false | null | 0 | o88i2jl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88i2jl/ | false | 2 |
t1_o88hzun | Native MTP is not supported by llama.cpp (yet) | 5 | 0 | 2026-03-02T14:42:51 | spaceman_ | false | null | 0 | o88hzun | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88hzun/ | false | 5 |
t1_o88hwg8 | By running your own specific tests.
35b a3b is good when you have a CPU-only setup or 4-6 GB VRAM.
9b dense is good when you have 8-12 GB VRAM. | 3 | 0 | 2026-03-02T14:42:20 | -Ellary- | false | null | 0 | o88hwg8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88hwg8/ | false | 3 |
t1_o88huj8 | I don't have LM Studio though. | 1 | 0 | 2026-03-02T14:42:03 | spaceman_ | false | null | 0 | o88huj8 | false | /r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/o88huj8/ | false | 1 |
t1_o88hqpr | Yes | 1 | 0 | 2026-03-02T14:41:29 | jacek2023 | false | null | 0 | o88hqpr | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88hqpr/ | false | 1 |
t1_o88hp3q | Uninstall Ollama | 7 | 0 | 2026-03-02T14:41:15 | jacek2023 | false | null | 0 | o88hp3q | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88hp3q/ | false | 7 |
t1_o88hp2o | What kind of workflow are you doing that with? What hardware? What model flavor?
If you don't mind a quick write up. I'm curious what such a thing actually looks like in practice. | 3 | 0 | 2026-03-02T14:41:14 | ImpressiveSuperfluit | false | null | 0 | o88hp2o | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o88hp2o/ | false | 3 |
t1_o88hnsn | [removed] | 1 | 0 | 2026-03-02T14:41:03 | [deleted] | true | null | 0 | o88hnsn | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88hnsn/ | false | 1 |
t1_o88hkwl | You forgot --n-cpu-moe | 7 | 0 | 2026-03-02T14:40:37 | jacek2023 | false | null | 0 | o88hkwl | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88hkwl/ | false | 7 |
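`--n-cpu-moe` is a real llama.cpp flag that keeps the expert (MoE) tensors of the first N layers in system RAM while the rest goes to the GPU; a sketch with a hypothetical model path and an illustrative N:

```shell
# Offload everything to GPU except the expert tensors of the first
# 20 layers; adjust 20 up or down until the model fits in VRAM.
llama-server --model ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -ngl 99 --n-cpu-moe 20
```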
t1_o88hkle | Will 9b fit into a 6gb or 12gb gpu? | 4 | 0 | 2026-03-02T14:40:34 | redonculous | false | null | 0 | o88hkle | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88hkle/ | false | 4 |
t1_o88hj9d | Did I say you did? | 2 | 0 | 2026-03-02T14:40:22 | coder543 | false | null | 0 | o88hj9d | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88hj9d/ | false | 2 |
t1_o88hfkq | ugh ollama fails to work with it for now
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35' | -2 | 0 | 2026-03-02T14:39:49 | Simon_Ackles | false | null | 0 | o88hfkq | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88hfkq/ | false | -2 |
t1_o88helq | What you need is proper quantization. Many will tell you that offloading to CPU is best, but when you run it you'll find the whole system hangs and even moving the mouse becomes annoying.
Here's the thing: you have limited space, and the good news is you can load the model on the GPU, and 8 GB is enough
for resilient workfl... | 1 | 0 | 2026-03-02T14:39:40 | Intelligent-School64 | false | null | 0 | o88helq | false | /r/LocalLLaMA/comments/1rf1dxh/lil_help/o88helq/ | false | 1 |
t1_o88h740 | How does one choose between the qwen 3.5 9b dense and the qwen 3.5 35b a3b? | 1 | 0 | 2026-03-02T14:38:33 | KittyPigeon | false | null | 0 | o88h740 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88h740/ | false | 1 |
t1_o88h6sn | To be honest, I'm not 100% exactly sure what I'm looking for. I guess it's a MITM, but not so much focused on debugging, but more on archiving local copies of all of your chat sessions and a good UX for more normal users. A little bit like when you have a Gmail account, but you're also using an IMAP client to have loca... | 1 | 0 | 2026-03-02T14:38:29 | dadaphl | false | null | 0 | o88h6sn | false | /r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/o88h6sn/ | false | 1 |
t1_o88h3mq | No embedder :( | 2 | 0 | 2026-03-02T14:38:01 | crewone | false | null | 0 | o88h3mq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88h3mq/ | false | 2 |
t1_o88h2vb | Wonderful. Thank you. | 1 | 0 | 2026-03-02T14:37:55 | jamaalwakamaal | false | null | 0 | o88h2vb | false | /r/LocalLLaMA/comments/1risdjf/mnn_chat_support_qwen35_2b4b_and_08b/o88h2vb/ | false | 1 |
t1_o88gymj | If you want to host your own model, the best option to start, imho, is the Mac Mini. Spend the extra money and get one with maxed-out RAM. Now here is why I say this: you WILL outgrow this if you continue down this route, but it gives you everything you need to learn, and it has great resale value. once youve spend e... | 1 | 0 | 2026-03-02T14:37:16 | Electrical_Ninja3805 | false | null | 0 | o88gymj | false | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/o88gymj/ | false | 1 |
t1_o88gvik | I'm on 16GB and couldn't get the 27b-a3b fitted. | 1 | 0 | 2026-03-02T14:36:48 | virtualworker | false | null | 0 | o88gvik | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gvik/ | false | 1 |
t1_o88gttp | I have a 3060 12GB and 24 GB ddr4 ram, could I run a 35b model? | 2 | 0 | 2026-03-02T14:36:33 | FuegoFlamingo | false | null | 0 | o88gttp | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88gttp/ | false | 2 |
t1_o88gtjq | Is the new Qwen 3.5 9B better than Step3 10B? | 2 | 0 | 2026-03-02T14:36:30 | Urseelo | false | null | 0 | o88gtjq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gtjq/ | false | 2 |
t1_o88gs1r | Pro tip: adjust your prompt template to turn off thinking, and set temperature to about 0.45; don't go any lower. These 3.5 variants appear to have the same problem with thinking as some of the previous qwen3 versions did. They tend to overthink and talk themselves out of correct solutions. I noticed that at least in visio... | 98 | 0 | 2026-03-02T14:36:17 | sonicnerd14 | false | null | 0 | o88gs1r | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gs1r/ | false | 98 |
t1_o88grlw | Unsloth slothn'd, so; minutes.
https://huggingface.co/unsloth/Qwen3.5-9B-GGUF/tree/main | 6 | 0 | 2026-03-02T14:36:12 | GirthusThiccus | false | null | 0 | o88grlw | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88grlw/ | false | 6 |
t1_o88gppt | What is the calculation for the amount of GB of memory needed per B parameters? I know there are other factors but the “general rule” is? | 1 | 0 | 2026-03-02T14:35:55 | Lastb0isct | false | null | 0 | o88gppt | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gppt/ | false | 1 |
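A common rule of thumb: weight memory ≈ parameter count (in billions) × bytes per weight for the quant, plus headroom for KV cache and activations. The arithmetic, with approximate (assumed) bytes-per-weight averages:

```shell
# Weight memory ~ params (B) x bytes per weight.
# Assumed averages: ~0.57 B/weight for Q4_K_M, ~1.06 for Q8_0, 2.0 for FP16.
# KV cache and runtime overhead come on top of these figures.
for bpw in 0.57 1.06 2.0; do
  awk -v b=9 -v w="$bpw" \
    'BEGIN { printf "9B model at %.2f bytes/weight: ~%.1f GB\n", w, b*w }'
done
```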
t1_o88gph9 | Did I say they are not getting better at all? | -6 | 0 | 2026-03-02T14:35:53 | Various-Inside-4064 | false | null | 0 | o88gph9 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88gph9/ | false | -6 |
t1_o88gp6k | Embedded devices like RPi | 35 | 0 | 2026-03-02T14:35:50 | stopbanni | false | null | 0 | o88gp6k | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gp6k/ | false | 35 |
t1_o88gnqx | And here you are on Reddit crying on a 2 year old post.
Why aren't you out there advancing society, champ? Get off Reddit, and help your community! | 1 | 0 | 2026-03-02T14:35:37 | unpleasant_wrecker | false | null | 0 | o88gnqx | false | /r/LocalLLaMA/comments/1dj1kyy/i_built_the_dumbest_ai_imaginable_tinyllama/o88gnqx/ | false | 1 |
t1_o88gnj8 | The 4B and 9B aren't popping up yet in the HF search in SmolChat on my phone, though they're popping up in LM studio on my laptop.
I'm excited to try them on both. If LM studio needs an update for it, I'm assuming SmolChat does too? | 3 | 0 | 2026-03-02T14:35:35 | PhlarnogularMaqulezi | false | null | 0 | o88gnj8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88gnj8/ | false | 3 |
t1_o88gn0z | Speculative decoding is built into these models in the form of multi token prediction (all Qwen 3.5 models based on their HF model cards). It does not work in GGUF land. GGUF needs to implement MTP support. | 6 | 0 | 2026-03-02T14:35:30 | this-just_in | false | null | 0 | o88gn0z | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88gn0z/ | false | 6 |
t1_o88gmvi | But the trend is for smaller models to become smarter and surpass older, larger models. Now it's time to test them. | 5 | 0 | 2026-03-02T14:35:29 | AppealThink1733 | false | null | 0 | o88gmvi | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88gmvi/ | false | 5 |
t1_o88gm0t | Same issue | 1 | 0 | 2026-03-02T14:35:21 | AdministrationOk3584 | false | null | 0 | o88gm0t | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88gm0t/ | false | 1 |
t1_o88gkvc | Tell this to the big labs then. Evals are not unit tests, they are signals of change over time | 1 | 0 | 2026-03-02T14:35:10 | Electronic-Grab7297 | false | null | 0 | o88gkvc | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o88gkvc/ | false | 1 |
t1_o88gfae | It makes no sense. Please explain your idea in more detail.
| 3 | 0 | 2026-03-02T14:34:18 | Flimsy_Leadership_81 | false | null | 0 | o88gfae | false | /r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/o88gfae/ | false | 3 |
t1_o88g9cn | They always have. They're still getting better regardless. | 6 | 0 | 2026-03-02T14:33:24 | coder543 | false | null | 0 | o88g9cn | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88g9cn/ | false | 6 |
t1_o88g8p6 | do you put some of the layers on regular ram? | 1 | 0 | 2026-03-02T14:33:18 | xFloaty | false | null | 0 | o88g8p6 | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o88g8p6/ | false | 1 |
t1_o88g3ju | thank you
m1 max 32 $1600, m3 max 48 $2800 in my country | 1 | 0 | 2026-03-02T14:32:30 | yusunglee2074 | false | null | 0 | o88g3ju | false | /r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/o88g3ju/ | false | 1 |
t1_o88g1ot | Yes, I opened an issue: https://github.com/ggml-org/llama.cpp/issues/20039
It is currently disabled.
Specdec with a draft model won't help you with the MoE models, but it would help with the 27B model. | 16 | 0 | 2026-03-02T14:32:13 | coder543 | false | null | 0 | o88g1ot | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88g1ot/ | false | 16 |
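Since llama.cpp lacks MTP, pairing a dense target with a small draft model is the available form of speculative decoding; a hedged sketch with hypothetical model paths and illustrative limits:

```shell
# Dense 27B target, 0.8B draft; the server verifies up to --draft-max
# speculated tokens per step and falls back to normal decoding if rejected.
llama-server --model ./Qwen3.5-27B-Q4_K_M.gguf \
  --model-draft ./Qwen3.5-0.8B-Q8_0.gguf \
  --draft-max 16 --draft-min 1
```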
t1_o88g0wo | What do you mean? MoE 30b A3b = 9-10b\~ dense. This is how moe works. | 8 | 0 | 2026-03-02T14:32:06 | -Ellary- | false | null | 0 | o88g0wo | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88g0wo/ | false | 8 |
t1_o88fx3i | qwen3.5? 👀 | 9 | 0 | 2026-03-02T14:31:29 | magnus-m | false | null | 0 | o88fx3i | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88fx3i/ | false | 9 |
t1_o88fvlz | Actually wait... yeah kinda. Been rebuilding some internal tools lately and literally just ripped out like three layers of LangChain garbage that was mostly error-handling for dumb models.
My turning point was when I realized Claude 3.5 could use our JIRA API more reliably *with a plain OpenAPI spec* than my fancy age... | 1 | 0 | 2026-03-02T14:31:16 | Popular_Gene8353 | false | null | 0 | o88fvlz | false | /r/LocalLLaMA/comments/1qwwfvu/the_best_ai_architecture_in_2026_is_no/o88fvlz/ | false | 1 |
t1_o88ftom | I'm not yet sure how 9b performs at agentic tasks, but in general conversation it feels kinda dumb and confused. | 34 | 0 | 2026-03-02T14:30:58 | Big_Mix_4044 | false | null | 0 | o88ftom | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ftom/ | false | 34 |
t1_o88ftlq | 9B would be better, but 4B is very close behind it.
I am still perplexed that 4B runs at 30-40 tps on my old laptop GPU (RTX 2060, 6 GB VRAM), describes images accurately and does an SEO analysis with puppeteer in Roo Code automatically. Of course, coding, too.
Qwen was always honest with the benchmarks. Now compare the ... | 3 | 0 | 2026-03-02T14:30:57 | AppealSame4367 | false | null | 0 | o88ftlq | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88ftlq/ | false | 3 |
t1_o88fsmq | So you are looking for a kind of Wireshark for LLMs? It's a good idea for agent orchestrators. I may include it in my project [LightPhon.com](http://LightPhon.com). So what would you like to have? An app that collects all your LLM input and output on your server? So, like a MITM? | 1 | 0 | 2026-03-02T14:30:48 | Flimsy_Leadership_81 | false | null | 0 | o88fsmq | false | /r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/o88fsmq/ | false | 1 |
t1_o88fseu | Just curious, what are the smaller models good for? The only practical usage I've found so far was using a small model to autocomplete code while typing. | 1 | 0 | 2026-03-02T14:30:46 | Easy_Werewolf7903 | false | null | 0 | o88fseu | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88fseu/ | false | 1 |
t1_o88fsar | Those are quants algorithmically created by a system developed by Unsloth that automatically analyzes which layers are the ones where quantization hurts the most, so some layers might be in Q2 while others might be in Q6 or Q8, but the overall average is still around Q4, although in theory with less degradatio... | 8 | 0 | 2026-03-02T14:30:45 | cibernox | false | null | 0 | o88fsar | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88fsar/ | false | 8 |
t1_o88frtz | I looked a lot at that codebase and mention it in the acknowledgments :) I found some bugs there with the streaming, but I'm not sure how much they affect the perceived audio quality | 1 | 0 | 2026-03-02T14:30:41 | Routine-Berry-2828 | false | null | 0 | o88frtz | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o88frtz/ | false | 1 |
t1_o88fr2m | Trying 0.8b with 35b or 27b, it actually won't even attempt it, as if they aren't compatible.
I'm also still trying to find the performance; I must be at less than 50% performance on AMD, whereas the Nvidia folks seem to be rocket speed. | 1 | 0 | 2026-03-02T14:30:34 | sleepingsysadmin | false | null | 0 | o88fr2m | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88fr2m/ | false | 1 |
t1_o88fqxq | I don't know what it is about Qwen3.5, I was thinking of posting in this sub to ask. At least for me, it seems to be very poorly suited for partial GPU offload.
When I run both the 27b and 35b versions (~4bpw quants) on my PC with 64GB RAM and 16GB VRAM, the GPU does almost nothing and the CPU is also underutilized. ... | 1 | 0 | 2026-03-02T14:30:32 | Count_Rugens_Finger | false | null | 0 | o88fqxq | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o88fqxq/ | false | 1 |
t1_o88fqki | I tried Qwen3.5-4B-Q4\_K\_M using PocketPal on iPhone 17 Pro and ran the benchmark:
https://preview.redd.it/mq9e6oo07nmg1.png?width=1205&format=png&auto=webp&s=f2cb0fbbcc86c0e39fcf5bbd46f5aad92cf4fdbe
| 5 | 0 | 2026-03-02T14:30:29 | groosha | false | null | 0 | o88fqki | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o88fqki/ | false | 5 |
t1_o88fq96 | Come to think of it, I think it might be difficult to rotate those three. Thank you. | 1 | 0 | 2026-03-02T14:30:26 | yusunglee2074 | false | null | 0 | o88fq96 | false | /r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/o88fq96/ | false | 1 |
t1_o88fjnd | Qwen 3.5 9b vs Gemma4 9b benchmarks when :) | 3 | 0 | 2026-03-02T14:29:26 | Icy-Degree6161 | false | null | 0 | o88fjnd | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o88fjnd/ | false | 3 |
t1_o88fcbh | They were the best models in their range for 6 months. As much as we despise closed companies, they still have the edge. | 13 | 0 | 2026-03-02T14:28:18 | No_Swimming6548 | false | null | 0 | o88fcbh | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88fcbh/ | false | 13 |
t1_o88f4pp | Sorry, having a faster prefill is nice, but the bottleneck in my use cases is always decode/token-generation tok/s. You have Qwen3.5-35B-A3B "Q4" doing 15.0 t/s on your hardware, while mine, which is much weaker, is at 30 t/s. | 1 | 0 | 2026-03-02T14:27:08 | R_Duncan | false | null | 0 | o88f4pp | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o88f4pp/ | false | 1 |
t1_o88f2db | .\llama-server.exe --model Distil\Qwen3.5-35B-A3B-MXFP4_MOE.gguf --alias Qwen3.5-35B-A3B-MXFP4 --mmproj \Distil\MMorj\mmproj-Qwen35bA3-BF16.gguf --flash-attn on -c 32000 --n-predict 32000 --jinja --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.00 --threads 6 --fit on --no-mmap | 3 | 0 | 2026-03-02T14:26:47 | Beneficial-Good660 | false | null | 0 | o88f2db | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88f2db/ | false | 3 |
t1_o88exh4 | I just rebuilt llama-server and used qwen3 4b, and now it doesn't even recognize the prompt-cache param anymore.
There might be an error in the latest llama.cpp code | 1 | 0 | 2026-03-02T14:26:02 | AppealSame4367 | false | null | 0 | o88exh4 | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o88exh4/ | false | 1 |
t1_o88ex5u | which quant? | 1 | 0 | 2026-03-02T14:25:59 | MotokoAGI | false | null | 0 | o88ex5u | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o88ex5u/ | false | 1 |
t1_o88ewu2 | yes, a 1.5 TB model that only barely outperforms a 65 GB model | -5 | 0 | 2026-03-02T14:25:56 | magnus-m | false | null | 0 | o88ewu2 | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88ewu2/ | false | -5 |