name | body | score | controversiality | created | author | collapsed | edited | gilded | id | locked | permalink | stickied | ups |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o88qlr1 | Missing the 397B... | 1 | 0 | 2026-03-02T15:27:02 | rm-rf-rm | false | null | 0 | o88qlr1 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88qlr1/ | false | 1 |
t1_o88qkh1 | I would go with setting up Skyvern for my invoice retrieval workflow; it works pretty well. | 1 | 0 | 2026-03-02T15:26:51 | Consistent_Papaya901 | false | null | 0 | o88qkh1 | false | /r/LocalLLaMA/comments/1r3ovso/whats_the_workflow_for_browser_automation_in_2026/o88qkh1/ | false | 1 |
t1_o88qhyb | I’m doing a bunch of local work this weekend and it’s so much faster than everything else for the quality I’m seeing. 200t/s on my 5090. | 1 | 0 | 2026-03-02T15:26:30 | fredandlunchbox | false | null | 0 | o88qhyb | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88qhyb/ | false | 1 |
t1_o88qhl1 | Of course random Redditor knows more than these researchers [https://blogs.cisco.com/ai/open-model-vulnerability-analysis](https://blogs.cisco.com/ai/open-model-vulnerability-analysis) | 1 | 0 | 2026-03-02T15:26:27 | Glad_Middle9240 | false | null | 0 | o88qhl1 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88qhl1/ | false | 1 |
t1_o88qaf4 | Disappointed by lack of Wolfram Language knowledge in 2B and 4B. Qwen3-VL was much better. | 1 | 0 | 2026-03-02T15:25:28 | sergeysi | false | null | 0 | o88qaf4 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88qaf4/ | false | 1 |
t1_o88q4vc | It does; it is LM Studio's fault.<br>I just used it with thinking in the PocketPal app on iOS. | 1 | 0 | 2026-03-02T15:24:41 | Confusion_Senior | false | null | 0 | o88q4vc | false | /r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/o88q4vc/ | false | 1 |
t1_o88q4xj | All it does is loop and loop and think and think, even with just a “hi”. I cannot for the life of me get it to stop. Using the Unsloth Q8_K on LM Studio. | 1 | 0 | 2026-03-02T15:24:41 | d4mations | false | null | 0 | o88q4xj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88q4xj/ | false | 1 |
t1_o88q3tq | Do you think there will be noticeable improvements for small models like the 0.8b? I mean, is there any text related architecture difference from the Qwen3 since knowledge doesn't matter for very small models? | 1 | 0 | 2026-03-02T15:24:32 | mw11n19 | false | null | 0 | o88q3tq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88q3tq/ | false | 1 |
t1_o88q3ht | If you are running 35B with MTP you don't need any draft model | 1 | 0 | 2026-03-02T15:24:30 | AloneSYD | false | null | 0 | o88q3ht | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88q3ht/ | false | 1 |
t1_o88q0qn | Oh no, it wasn't a typo. I was actually talking about llama 2 7b. It was an old model and one of the few ones (along with the mistral 7b) which I could run on my hardware. | 1 | 0 | 2026-03-02T15:24:07 | itsdigimon | false | null | 0 | o88q0qn | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88q0qn/ | false | 1 |
t1_o88q0eh | How do you find it, intelligence wise? I'd love to one day have a local model on mobile that I can use reliably. | 1 | 0 | 2026-03-02T15:24:04 | Monkey_1505 | false | null | 0 | o88q0eh | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88q0eh/ | false | 1 |
t1_o88pzab | I have a setup similar to yours; a few questions:<br>My CPU is a 9800X3D and my GPU is a 4080 SUPER 16GB, and I have 64GB of RAM, so we're pretty close.<br>The question is, why are you using such a large context?<br>In my case, a 64k context works wonderfully up to a certain point, then I start having problems with C... | 1 | 0 | 2026-03-02T15:23:55 | EixaFinite | false | null | 0 | o88pzab | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o88pzab/ | false | 1 |
t1_o88pyeg | WoW | 1 | 0 | 2026-03-02T15:23:47 | fab_space | false | null | 0 | o88pyeg | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88pyeg/ | false | 1 |
t1_o88py2h | Alright, thank you for the insight! | 1 | 0 | 2026-03-02T15:23:45 | lain_hirs | false | null | 0 | o88py2h | false | /r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/o88py2h/ | false | 1 |
t1_o88pxrv | Great, thanks!<br>Would have been nice to see them grouped by category. | 1 | 0 | 2026-03-02T15:23:42 | l_eo_ | false | null | 0 | o88pxrv | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88pxrv/ | false | 1 |
t1_o88pvrf | A very narrow-minded take. You may not be aware, but models now call tools. Tools can execute code and take other actions.<br>I can't remember the last time I interacted with a model that didn't have access to a Python interpreter. Is it too much of a stretch for your imagination that behavior to misuse these tools could be ... | 1 | 0 | 2026-03-02T15:23:25 | Glad_Middle9240 | false | null | 0 | o88pvrf | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88pvrf/ | false | 1 |
t1_o88puk2 | But some weapons are astronomically more dangerous than others, which is why militaries use them.<br>No military is equipping its soldiers with pencils to use as weapons. | 1 | 0 | 2026-03-02T15:23:15 | AnOnlineHandle | false | null | 0 | o88puk2 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o88puk2/ | false | 1 |
t1_o88ptrk | Why so big, and both heavily compressed models? Better to use the newly released Qwen3.5-4B-Q4\_0. | 1 | 0 | 2026-03-02T15:23:09 | stopbanni | false | null | 0 | o88ptrk | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88ptrk/ | false | 1 |
t1_o88pr2o | Yes, I already turned it off and confirmed in the llama-server startup:<br>init: chat template, example_format: '<\|im_start\|>system<br>You are a helpful assistant<\|im_end\|><br><\|im_start\|>user<br>Hello<\|im_end\|><br><\|im_start\|>assistant<br>Hi there<\|im_end\|><br><\|im_start\|>user<br>How are you?<\|im_end\|><br><\|im_start\|>assistant<br><think><br></thin... | 6 | 0 | 2026-03-02T15:22:47 | DeltaSqueezer | false | null | 0 | o88pr2o | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88pr2o/ | false | 6 |
t1_o88pnhw | benchmaxxing on benchmark questions | 3 | 0 | 2026-03-02T15:22:16 | --Spaci-- | false | null | 0 | o88pnhw | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88pnhw/ | false | 3 |
t1_o88pn6q | nice | 1 | 0 | 2026-03-02T15:22:14 | huffalump1 | false | null | 0 | o88pn6q | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o88pn6q/ | false | 1 |
t1_o88pmv7 | Hey everyone, the new models just dropped and they are absolutely CRACKED.<br>"... Like an egg?" | 4 | 0 | 2026-03-02T15:22:11 | lookwatchlistenplay | false | null | 0 | o88pmv7 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88pmv7/ | false | 4 |
t1_o88pmk9 | Why are people on this sub so surprised by how good the Qwen3.5 models are? lol, this should be a massively accel sub. | 3 | 0 | 2026-03-02T15:22:08 | pigeon57434 | false | null | 0 | o88pmk9 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88pmk9/ | false | 3 |
t1_o88pj2j | Man, at that size it's pretty much feasible to run the VLM on individual security cameras themselves, if you like. Fully local object detection / notifications / etc.<br>I'm very curious how good it is for this use case; I enjoyed experimenting with gemini 2.5 flash lite for camera monitoring, and [the 9B and 4B models b... | 1 | 0 | 2026-03-02T15:21:39 | huffalump1 | false | null | 0 | o88pj2j | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88pj2j/ | false | 1 |
t1_o88pfvh | It's the ChatterUI guy! Props for such a great app! I use it almost every day with local models :) | 32 | 0 | 2026-03-02T15:21:12 | KvAk_AKPlaysYT | false | null | 0 | o88pfvh | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88pfvh/ | false | 32 |
t1_o88p74v | And faster. But you still need some extra GB for context. | 3 | 0 | 2026-03-02T15:19:59 | dkeiz | false | null | 0 | o88p74v | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88p74v/ | false | 3 |
t1_o88p5lz | "malicious AI models execute embedded unauthorized code"<br>I repeat, that is pure nonsense. Models can not execute code. They don't contain code. They are not programs. | 1 | 0 | 2026-03-02T15:19:47 | q-admin007 | false | null | 0 | o88p5lz | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88p5lz/ | false | 1 |
t1_o88ovun | This is a mess of conspiracy theories that doesn't survive basic scrutiny.<br>You can learn how to code via a thousand and one tutorials online. There's no software developer 'they' cabal keeping knowledge from you. There's more knowledge today than there's ever been.<br>There's plenty of innovation. There's not much 'pr... | 1 | 0 | 2026-03-02T15:18:25 | Aphid_red | false | null | 0 | o88ovun | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o88ovun/ | false | 1 |
t1_o88ou72 | Nice, the 2B or 4B could replace the Qwen2.5 1.5B Coder model I'm currently running for autocompletion in Continue.dev. I hope they will be a lot more intelligent at similar speeds. | 1 | 0 | 2026-03-02T15:18:12 | KingKoro | false | null | 0 | o88ou72 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o88ou72/ | false | 1 |
t1_o88ot6r | Right now you should look into qwen3.5-27b. You can fit a Q6\_K quantisation into VRAM with some context. Should be zippy and it's right now the best general purpose model there is at that size. Example:<br>[https://huggingface.co/mradermacher/Qwen3.5-27B-GGUF?show\_file\_info=Qwen3.5-27B.Q6\_K.gguf](https://huggingface.... | 2 | 0 | 2026-03-02T15:18:03 | q-admin007 | false | null | 0 | o88ot6r | false | /r/LocalLLaMA/comments/1rg4rtg/i_have_a_5090_with_64gb_system_ram_is_there_a/o88ot6r/ | false | 2 |
t1_o88ory1 | 9B is insane. I just loaded the unsloth Q4\_K\_M on an RTX 3070 using LM Studio, and I can't believe this is the answer of a 9B model: it analysed the image and described it in detail at 46 t/s.<br>https://preview.redd.it/vp1yvn1gfnmg1.png?width=592&format=png&auto=webp&s=82ae8e96cfe84ba0211a8bab2268c9622e480bf6 | 1 | 0 | 2026-03-02T15:17:53 | TheMagic2311 | false | null | 0 | o88ory1 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88ory1/ | false | 1 |
t1_o88onnj | The 9B and 4B models are more like gemini 2.5 flash lite and gpt-5 nano according to the benchmarks, though: https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3.5/Figures/qwen3.5_small_size_score.png<br>Still crazy that they're competitive with much larger models like gpt-oss-120b and 20b. | 2 | 0 | 2026-03-02T15:17:17 | huffalump1 | false | null | 0 | o88onnj | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88onnj/ | false | 2 |
t1_o88olqx | Qwen3-coder tool templates are broken across all major repos. If you fix them it is a decent model. | 1 | 0 | 2026-03-02T15:17:01 | Realistic-Elephant-6 | false | null | 0 | o88olqx | false | /r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o88olqx/ | false | 1 |
t1_o88ohpt | Apparently the required tensors (e.g. blk.0.ssm_in.weight) are missing from the GGUF, which makes them unusable in Ollama’s current GGUF loader. | 2 | 0 | 2026-03-02T15:16:28 | can_dry | false | null | 0 | o88ohpt | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88ohpt/ | false | 2 |
t1_o88og47 | I’ve tried 9B and it is useless!! All it does is loop and loop and think and think, even with just a “hi”. I cannot for the life of me get it to stop. Using the Unsloth Q8_K. | 2 | 0 | 2026-03-02T15:16:14 | d4mations | false | null | 0 | o88og47 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88og47/ | false | 2 |
t1_o88oasf | yep, already:<br>https://huggingface.co/unsloth/Qwen3.5-9B-GGUF<br>https://huggingface.co/unsloth/Qwen3.5-4B-GGUF<br>etc. | 1 | 0 | 2026-03-02T15:15:29 | huffalump1 | false | null | 0 | o88oasf | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88oasf/ | false | 1 |
t1_o88oae6 | But why these specific models over others? | 1 | 0 | 2026-03-02T15:15:26 | amejin | false | null | 0 | o88oae6 | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o88oae6/ | false | 1 |
t1_o88o9az | Ollama is unsuited for agentic coding because it has no prefix caching, and its default quants are often broken, especially in regard to tool use. | 1 | 0 | 2026-03-02T15:15:17 | Realistic-Elephant-6 | false | null | 0 | o88o9az | false | /r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o88o9az/ | false | 1 |
t1_o88o7v3 | Quick | 2 | 0 | 2026-03-02T15:15:05 | MoffKalast | false | null | 0 | o88o7v3 | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o88o7v3/ | false | 2 |
t1_o88o4o0 | Yeah, there’s a good reason Anthropic had two requirements in their TOS. (They don’t want their code to be used for mass surveillance or fully autonomous killbots.)<br>There’s also a good reason the pentagon threw a hissy fit over those two rules. (They want mass surveillance and fully autonomous killbots.) | 14 | 0 | 2026-03-02T15:14:38 | n8mo | false | null | 0 | o88o4o0 | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o88o4o0/ | false | 14 |
t1_o88o12o | Qwen is very specific in regards to system prompting and cannot figure out how to be creative. | 1 | 0 | 2026-03-02T15:14:08 | CooperDK | false | null | 0 | o88o12o | false | /r/LocalLLaMA/comments/1pkndwc/chat_bots_up_to_24b/o88o12o/ | false | 1 |
t1_o88o14v | He isn't wrong...<br>From their perspective, they don't really care about open source, as it isn't 'free'. Sure, the model is, but the inference isn't, and the folks that are actually capable of and interested in running those large unquantized models are insignificant at the moment compared to the folks that don't want to d... | 1 | 0 | 2026-03-02T15:14:08 | Cergorach | false | null | 0 | o88o14v | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o88o14v/ | false | 1 |
t1_o88o0je | Is the 9B good for anyone? Doesn't seem that great to me. I was trying to write a small story and various things were logically inconsistent. Haven't tried it for coding. | 1 | 0 | 2026-03-02T15:14:03 | funny_lyfe | false | null | 0 | o88o0je | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88o0je/ | false | 1 |
t1_o88nvnn | Gemma 3 is a reasoning model. You can enable it. | 1 | 0 | 2026-03-02T15:13:21 | CooperDK | false | null | 0 | o88nvnn | false | /r/LocalLLaMA/comments/1pkndwc/chat_bots_up_to_24b/o88nvnn/ | false | 1 |
t1_o88np2u | wow really? | 4 | 0 | 2026-03-02T15:12:27 | gondoravenis | false | null | 0 | o88np2u | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88np2u/ | false | 4 |
t1_o88nly2 | OK, understood. Sounds useful in the long term. | 1 | 0 | 2026-03-02T15:12:00 | Flimsy_Leadership_81 | false | null | 0 | o88nly2 | false | /r/LocalLLaMA/comments/1riu1zd/a_local_llm_session_recorder_command_center_for/o88nly2/ | false | 1 |
t1_o88nlr9 | I'm confused. Should I switch my KV Cache? | 2 | 0 | 2026-03-02T15:11:58 | StardockEngineer | false | null | 0 | o88nlr9 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o88nlr9/ | false | 2 |
t1_o88nl0n | Well, small Qwen3.5 models have just been released. Also, I would not run Qwen3 or Qwen3.5 with 4-bit quantization unless mxfp4 or nvfp4. I run my 27B with Q8 (a bit slow, I know). | 3 | 0 | 2026-03-02T15:11:52 | Prudent-Ad4509 | false | null | 0 | o88nl0n | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88nl0n/ | false | 3 |
t1_o88nijb | Okay I will try it thanks a lot guys | 1 | 0 | 2026-03-02T15:11:31 | NegotiationNo1504 | false | null | 0 | o88nijb | false | /r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/o88nijb/ | false | 1 |
t1_o88ng7e | go with an open source model loaded in the cloud with known low restrictions or easy jailbreaking that doesn't keep changing because the model is not changing. | 2 | 0 | 2026-03-02T15:11:11 | ayawnimouse | false | null | 0 | o88ng7e | false | /r/LocalLLaMA/comments/1n3cfi8/how_close_can_i_get_close_to_chatgpt5_full_with/o88ng7e/ | false | 2 |
t1_o88neqe | Probably to show even greater improvement over their previous generation's counterparts. | 6 | 0 | 2026-03-02T15:10:59 | Constandinoskalifo | false | null | 0 | o88neqe | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88neqe/ | false | 6 |
t1_o88nddq | yes, perhaps something like this: https://huggingface.co/DavidAU/OpenAi-GPT-oss-20b-HERETIC-uncensored-NEO-Imatrix-gguf<br>(I googled it) | 1 | 0 | 2026-03-02T15:10:48 | huffalump1 | false | null | 0 | o88nddq | false | /r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o88nddq/ | false | 1 |
t1_o88naid | 9b | 1 | 0 | 2026-03-02T15:10:24 | Skyline34rGt | false | null | 0 | o88naid | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88naid/ | false | 1 |
t1_o88n87a | Qnow | 2 | 0 | 2026-03-02T15:10:04 | Pille5 | false | null | 0 | o88n87a | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o88n87a/ | false | 2 |
t1_o88n1wc | PocketPal is still in active development:<br>https://github.com/a-ghorbani/pocketpal-ai<br>You can also get it from the app store, it just hasn't updated for Qwen 3.5 yet. | 26 | 0 | 2026-03-02T15:09:11 | ----Val---- | false | null | 0 | o88n1wc | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88n1wc/ | false | 26 |
t1_o88n05q | Good visual performance, too - I liked experimenting with Gemini 2.5 Flash Lite for things like monitoring cameras or extracting info because it was fast+cheap+good enough, and this beats it yet runs locally.<br>I haven't tried Qwen3.5-9B or 4B yet though, just looking at charts and commenting on reddit (like everyone lo... | 3 | 0 | 2026-03-02T15:08:56 | huffalump1 | false | null | 0 | o88n05q | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88n05q/ | false | 3 |
t1_o88myl6 | [removed] | 1 | 0 | 2026-03-02T15:08:42 | [deleted] | true | null | 0 | o88myl6 | false | /r/LocalLLaMA/comments/1riuttn/how_can_i_enable_context_shifting_in_llama_server/o88myl6/ | false | 1 |
t1_o88mwh4 | Q4_K_M | 1 | 0 | 2026-03-02T15:08:24 | 1ncehost | false | null | 0 | o88mwh4 | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o88mwh4/ | false | 1 |
t1_o88mvij | No, it's just in an environment where it processes a lot of JSON. I've been comparing Coder to 35B and 122B, both with thinking turned off. Not a lot of difference so far except that Coder may be faster. Perhaps removing the think block is non-optimal. | 2 | 0 | 2026-03-02T15:08:16 | zipzag | false | null | 0 | o88mvij | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o88mvij/ | false | 2 |
t1_o88mtdy | Highly recommend LFM2.5 1.2B. It blows my mind how good it is. | 18 | 0 | 2026-03-02T15:07:58 | _raydeStar | false | null | 0 | o88mtdy | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88mtdy/ | false | 18 |
t1_o88msqa | Yes but that eats all your VRAM and part of system RAM, right? I want space for at least 50k tokens | 2 | 0 | 2026-03-02T15:07:53 | ansibleloop | false | null | 0 | o88msqa | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88msqa/ | false | 2 |
t1_o88mrp0 | Llama cpp | 1 | 0 | 2026-03-02T15:07:44 | 1ncehost | false | null | 0 | o88mrp0 | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o88mrp0/ | false | 1 |
t1_o88mos7 | I tried ROCm the day of the Qwen3.5 release, LM Studio the day after, and then the latest and greatest Vulkan this morning. Every single one is right about the same speed, and switches make essentially no difference.<br>No cmake flags; I downloaded their copy. | 1 | 0 | 2026-03-02T15:07:20 | sleepingsysadmin | false | null | 0 | o88mos7 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88mos7/ | false | 1 |
t1_o88mnzh | The last time I used an Android app for demos, it was MyPocketPal - does anybody know of any recent replacement? | 10 | 0 | 2026-03-02T15:07:13 | Medium_Chemist_4032 | false | null | 0 | o88mnzh | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88mnzh/ | false | 10 |
t1_o88mjdk | Qwen3.5 4B vs Qwen3.5 9B 4-bit version - which is better? | 1 | 0 | 2026-03-02T15:06:34 | Significant-Pay-6476 | false | null | 0 | o88mjdk | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88mjdk/ | false | 1 |
t1_o88mhxi | According to the model's card, Thinking is disabled by default on the 2b model, but can be reenabled. | 2 | 0 | 2026-03-02T15:06:21 | AyraWinla | false | null | 0 | o88mhxi | false | /r/LocalLLaMA/comments/1riuwsw/is_qwen35_2b_is_instruct/o88mhxi/ | false | 2 |
t1_o88mho4 | Great question! Small models (0.8B-4B) are perfect for: 1. **Agentic routing** - Use them as "footsoldiers" to classify incoming requests and route to the right tools/MCP hooks 2. **Embedded devices** - Raspberry Pi, old phones, IoT devices 3. **Low-latency tasks** - Classification, intent detection, quick ... | 1 | 0 | 2026-03-02T15:06:19 | SquareDazzling7981 | false | null | 0 | o88mho4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88mho4/ | false | 1 |
t1_o88mfft | Still testing it, but I'm also curious about others' experience. If you make a new topic for it, please link back here as well. | 7 | 0 | 2026-03-02T15:05:59 | -_Apollo-_ | false | null | 0 | o88mfft | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88mfft/ | false | 7 |
t1_o88mcvx | thinking about it, it would require a bench of GLM 5 at Q4 to compare it to OSS-120B, which would be nice to have. | -3 | 1 | 2026-03-02T15:05:37 | magnus-m | false | null | 0 | o88mcvx | false | /r/LocalLLaMA/comments/1ritr5v/oss120b_beats_all_open_models_but_one_in_new/o88mcvx/ | false | -3 |
t1_o88maiv | Hey I'm also rocking a 12GB GPU but can't find those details... would you mind sending me a link or briefly explaining here? thanks so much | 3 | 0 | 2026-03-02T15:05:17 | anthonybustamante | false | null | 0 | o88maiv | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88maiv/ | false | 3 |
t1_o88m9g7 | Surprisingly, it doesn’t code better than qwen3 4b 2507 on LCBv6 | 5 | 0 | 2026-03-02T15:05:08 | pgrijpink | false | null | 0 | o88m9g7 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88m9g7/ | false | 5 |
t1_o88m8e3 | One of the benefits of this architecture is the much smaller KV cache. Or that's my understanding at least. | 5 | 0 | 2026-03-02T15:04:59 | bedofhoses | false | null | 0 | o88m8e3 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88m8e3/ | false | 5 |
t1_o88m6y9 | I use it in Raycast for quick AI; handy. | 0 | 0 | 2026-03-02T15:04:46 | Simon_Ackles | false | null | 0 | o88m6y9 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88m6y9/ | false | 0 |
t1_o88m496 | Was that a typo, or is Llama 2 7B better than current models? Did you mean 70B? | 2 | 0 | 2026-03-02T15:04:23 | asraniel | false | null | 0 | o88m496 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88m496/ | false | 2 |
t1_o88m0to | oh yeah I forgot people use LLMs to do this kind of stuff, like define a category for something even if only 90% accurate, makes sense to use a low latency small model if the accuracy suffices | 44 | 0 | 2026-03-02T15:03:54 | tiga_94 | false | null | 0 | o88m0to | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88m0to/ | false | 44 |
t1_o88m0nb | For some reason setting the -ctk and -ctv to both bf16 makes it so the prompt processing only happens on my cpu and is extremely slow.. Do you have that issue as well? | 5 | 0 | 2026-03-02T15:03:53 | Odd-Ordinary-5922 | false | null | 0 | o88m0nb | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o88m0nb/ | false | 5 |
t1_o88lyma | Uh, every trained model is an instruct, basically. The "base" model I think you refer to is the pt (pretrain). They always have basic features. | 1 | 0 | 2026-03-02T15:03:35 | CooperDK | false | null | 0 | o88lyma | false | /r/LocalLLaMA/comments/1kn6mic/qwen_25_vs_qwen_3_vs_gemma_3_real_world_base/o88lyma/ | false | 1 |
t1_o88lxfy | RAG seems to fail even for big models like Gemini Pro 3.1 or ChatGPT :( it only samples a few points and gets overwhelmed easily. Doesn't use the entire context. I've always had that problem with RAG unless I'm literally "searching for something". | 1 | 0 | 2026-03-02T15:03:25 | ZeitgeistArchive | false | null | 0 | o88lxfy | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o88lxfy/ | false | 1 |
t1_o88lw7o | Damn. That's where I was hoping it improved. Are you comparing it to a large LLM or previous similar models like qwen 3 8b? | 5 | 0 | 2026-03-02T15:03:15 | bedofhoses | false | null | 0 | o88lw7o | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88lw7o/ | false | 5 |
t1_o88ltky | I’ve tried 9B and it is useless!! All it does is loop and loop and think and think, even with just a “hi”. I cannot for the life of me get it to stop. Using the Unsloth Q8_K. | 1 | 0 | 2026-03-02T15:02:52 | d4mations | false | null | 0 | o88ltky | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o88ltky/ | false | 1 |
t1_o88lsmw | Qwen is EXCELLENT at following PRECISE instructions. But it sucks at being creative. | 1 | 0 | 2026-03-02T15:02:44 | CooperDK | false | null | 0 | o88lsmw | false | /r/LocalLLaMA/comments/1kn6mic/qwen_25_vs_qwen_3_vs_gemma_3_real_world_base/o88lsmw/ | false | 1 |
t1_o88lq9b | Just tell me: are you feeding the datasets of 2500 as-is?<br>What I mean is, are the datasets structured well?<br>Also, even if they are structured well, giving instructions alone is not enough; you have to train on certain aspects as well.<br>Also, are you aware of what biases your base model (Llama 3 8B) is picking up?<br>Also, you mentioned you are using... | 1 | 0 | 2026-03-02T15:02:23 | Intelligent-School64 | false | null | 0 | o88lq9b | false | /r/LocalLLaMA/comments/1r22myf/help_finetuning_llama38b_for_lowresource_language/o88lq9b/ | false | 1 |
t1_o88lnme | Because thinking mode is disabled by default for small models. | 11 | 0 | 2026-03-02T15:02:00 | Negative-Magazine174 | false | null | 0 | o88lnme | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88lnme/ | false | 11 |
t1_o88lmw3 | running, thank you | 1 | 0 | 2026-03-02T15:01:53 | ZeitgeistArchive | false | null | 0 | o88lmw3 | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o88lmw3/ | false | 1 |
t1_o88lhxn | Man I need a new phone | 1 | 0 | 2026-03-02T15:01:10 | Devatator_ | false | null | 0 | o88lhxn | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o88lhxn/ | false | 1 |
t1_o88l85b | yep, more info in the guide from unsloth: https://unsloth.ai/docs/models/qwen3.5<br>and their gguf benchmarks: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks | 2 | 0 | 2026-03-02T14:59:46 | huffalump1 | false | null | 0 | o88l85b | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88l85b/ | false | 2 |
t1_o88l68l | Yea, unless your pp is already big enough | 1 | 0 | 2026-03-02T14:59:30 | HyperWinX | false | null | 0 | o88l68l | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88l68l/ | false | 1 |
t1_o88l5cv | Drafting for larger model for example. Although 2b version might be better for that. | 4 | 0 | 2026-03-02T14:59:22 | mtmttuan | false | null | 0 | o88l5cv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88l5cv/ | false | 4 |
t1_o88l3nq | [deleted] | 1 | 0 | 2026-03-02T14:59:08 | [deleted] | true | null | 0 | o88l3nq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88l3nq/ | false | 1 |
t1_o88l27y | I only use up to 4000 tokens | 1 | 0 | 2026-03-02T14:58:55 | ZeitgeistArchive | false | null | 0 | o88l27y | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o88l27y/ | false | 1 |
t1_o88l23s | They're showing the corrected number here, though? It was just SWE-Bench Verified, and the broken score was 81.4, but this shows 76.2. | 10 | 0 | 2026-03-02T14:58:54 | DeProgrammer99 | false | null | 0 | o88l23s | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o88l23s/ | false | 10 |
t1_o88l0pl | In my knowledge and reasoning tests it fails against Q3_K_M Qwen 235B :( | 1 | 0 | 2026-03-02T14:58:42 | ZeitgeistArchive | false | null | 0 | o88l0pl | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o88l0pl/ | false | 1 |
t1_o88kyr9 | 10A is small | 2 | 0 | 2026-03-02T14:58:25 | shing3232 | false | null | 0 | o88kyr9 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88kyr9/ | false | 2 |
t1_o88kwgl | Test using OpenRouter or some old laptop before you commit $ and effort.<br>I work with AI every day professionally and I found local LLMs not very reliable nor robust for professional use. I have a beefy server for home lab and games (128GB), so it wasn't much to set up a PoC; I built the full monty including model fi... | 1 | 0 | 2026-03-02T14:58:06 | Low-Opening25 | false | null | 0 | o88kwgl | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o88kwgl/ | false | 1 |
t1_o88kw6m | I created a 'footsoldier' logic for a tiny LLM to parse: 'classify this chat as a chat, web\_call, logic\_problem' sort of thing. It's quick, responds within a few hundred ms, and protects agents from making the wrong calls all the time (i.e. routing a chat message to a web call).<br>It gets really hard when there are... | 137 | 0 | 2026-03-02T14:58:04 | _raydeStar | false | null | 0 | o88kw6m | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88kw6m/ | false | 137 |
t1_o88ksja | Home assistant, some IoT maybe, or some small automation tasks for a homelab. It's small enough for a phone; would like having one as a self-hosted assistant on Graphene, ngl. | 2 | 0 | 2026-03-02T14:57:33 | mell1suga | false | null | 0 | o88ksja | false | /r/LocalLLaMA/comments/1ritlux/qwen3508bgguf_is_here/o88ksja/ | false | 2 |
t1_o88kqwb | It's better than the worst paid models of openai and google. Don't see the "pop that AI bubble" anywhere from the benchmark. | 6 | 0 | 2026-03-02T14:57:18 | mtmttuan | false | null | 0 | o88kqwb | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88kqwb/ | false | 6 |
t1_o88kotv | Copyright, Muricans. | 1 | 0 | 2026-03-02T14:57:00 | Metalmaxm | false | null | 0 | o88kotv | false | /r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o88kotv/ | false | 1 |
t1_o88knb8 | Wait until Deepseek arrives and the tidal wave of threads for that will clear everything else out. | 2 | 0 | 2026-03-02T14:56:47 | TurnOffAutoCorrect | false | null | 0 | o88knb8 | false | /r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/o88knb8/ | false | 2 |
t1_o88kn1u | a good one | 1 | 0 | 2026-03-02T14:56:45 | laphilosophia | false | null | 0 | o88kn1u | false | /r/LocalLLaMA/comments/1ritplu/released_ai_cost_router_100_local_llm_router/o88kn1u/ | false | 1 |