name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8836wm | i used a 4B qwen model to caption images and ask some questions about the image automatically. Found the captions much better than something like BLIP captions | 3 | 0 | 2026-03-02T13:18:10 | TristarHeater | false | null | 0 | o8836wm | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8836wm/ | false | 3 |
t1_o8834oo | so how the dual spark's doing now? | 1 | 0 | 2026-03-02T13:17:47 | No_Afternoon_4260 | false | null | 0 | o8834oo | false | /r/LocalLLaMA/comments/1ptakw0/2x_dgx_spark_vs_rtx_pro_6000_blackwell_for_local/o8834oo/ | false | 1 |
t1_o8833nd | In this case I was testing on 5070. But yes, I have two 3060s and three 3090s, just not on that machine... :) | 7 | 0 | 2026-03-02T13:17:36 | jacek2023 | false | null | 0 | o8833nd | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8833nd/ | false | 7 |
t1_o8832cf | Um, an advantage to China would be bad, lol. That is why we won't let that happen, and an open source one where everyone is involved, in a similar way to how everyone handles crypto (mining), wouldn't really benefit any one party particularly, other than financially affecting some AI companies.
I am not sure why you think open so... | 0 | 0 | 2026-03-02T13:17:23 | AcePilot01 | false | null | 0 | o8832cf | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o8832cf/ | false | 0 |
t1_o883030 | google benchmarks, looks like they're somewhat similar performance but we'll only know when you try both, plus a3b is much faster so id go with that | 3 | 0 | 2026-03-02T13:17:00 | stellarknight_ | false | null | 0 | o883030 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o883030/ | false | 3 |
t1_o882xla | LM Studio back end and Open WebUI front end | 1 | 0 | 2026-03-02T13:16:35 | Guilty_Rooster_6708 | false | null | 0 | o882xla | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o882xla/ | false | 1 |
t1_o882w42 | Want to learn AI models in the local PC and see how does it work and if we also have own data sets in NoSql db can we train this local model on that so we can get information based on our custom dataset. | 1 | 0 | 2026-03-02T13:16:19 | vvarun203 | false | null | 0 | o882w42 | false | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/o882w42/ | false | 1 |
t1_o882ver | Thanks! | 1 | 0 | 2026-03-02T13:16:12 | l34sh | false | null | 0 | o882ver | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o882ver/ | false | 1 |
t1_o882utf | do you have a 3060 as well? | 3 | 0 | 2026-03-02T13:16:06 | Odd-Ordinary-5922 | false | null | 0 | o882utf | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882utf/ | false | 3 |
t1_o882t53 | How do imatrix quants compare with k quants? | 1 | 0 | 2026-03-02T13:15:48 | itsdigimon | false | null | 0 | o882t53 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o882t53/ | false | 1 |
t1_o882rzj | good for people that have less capable hardware. And the benchmarks seem good! | 5 | 0 | 2026-03-02T13:15:37 | Odd-Ordinary-5922 | false | null | 0 | o882rzj | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882rzj/ | false | 5 |
t1_o882qye | I feel like some sort of retirement meme would fit amazingly here | 60 | 0 | 2026-03-02T13:15:26 | Long_comment_san | false | null | 0 | o882qye | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o882qye/ | false | 60 |
t1_o882qec | Yes I posted details in various places on this sub... :) | 4 | 0 | 2026-03-02T13:15:20 | jacek2023 | false | null | 0 | o882qec | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882qec/ | false | 4 |
t1_o882nx4 | What kind of LLMs do you use currently? Because these models generate text, not images. | 10 | 0 | 2026-03-02T13:14:54 | jacek2023 | false | null | 0 | o882nx4 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882nx4/ | false | 10 |
t1_o882lk1 | offloaded to cpu right? | 5 | 0 | 2026-03-02T13:14:30 | Odd-Ordinary-5922 | false | null | 0 | o882lk1 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882lk1/ | false | 5 |
t1_o882lbv | What?
https://preview.redd.it/j86ovxbftmmg1.png?width=489&format=png&auto=webp&s=69c727fe3481f087dbe82233d18cd6d9c3a505d4
| 6 | 0 | 2026-03-02T13:14:28 | KaMaFour | false | null | 0 | o882lbv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o882lbv/ | false | 6 |
t1_o882k5v | Looking at the benchmarks and artificial analysis: They caught up to Gemini 3 Flash and Sonnet 4 / 4.5 in like half of them, including vision.
This is kind of a historic moment, isn't it? It will run with 40 tps on my Laptop gpu and I won't ever need anything apart from the occasional Opus 4.6 push for the big plans. | 10 | 0 | 2026-03-02T13:14:16 | AppealSame4367 | false | null | 0 | o882k5v | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o882k5v/ | false | 10 |
t1_o882jct | llamacpp not mentioned in quants, so it is not supported? | 1 | 0 | 2026-03-02T13:14:08 | Deep_Traffic_7873 | false | null | 0 | o882jct | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882jct/ | false | 1 |
t1_o882iwz | I can’t imagine what 12-14B dense would be if 9B dense is already matching/beating the outgoing 30B-A3B, as well as seriously threatening gpt-120b… with a fraction of the memory requirements? | 13 | 0 | 2026-03-02T13:14:04 | siggystabs | false | null | 0 | o882iwz | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o882iwz/ | false | 13 |
t1_o882huj | the 9b should work, maybe u could push 27b w quantization
Dont got a 16gb gpu personally but im sure it can run 9b, download ollama and try it, ez setup but takes long to download.. | 31 | 0 | 2026-03-02T13:13:53 | stellarknight_ | false | null | 0 | o882huj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o882huj/ | false | 31 |
t1_o882h6r | I found a copy of old llama 2 7b on my drive and I tried to compare it with qwen 3 4b and qwen performed much better for my use case by a long mile. Similar was the case when I compared it to ministral 3b.
Idk how but the llm technology has come a really long way in a short timespan. Feels unreal. | 12 | 0 | 2026-03-02T13:13:46 | itsdigimon | false | null | 0 | o882h6r | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882h6r/ | false | 12 |
t1_o882gz1 | yessir, already downloading, gotta love qwen lab and unsloth | 1 | 0 | 2026-03-02T13:13:44 | Acceptable_Home_ | false | null | 0 | o882gz1 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o882gz1/ | false | 1 |
t1_o882h04 | Im still in the stone age where i code by pasting stuff back and forth between openwebui and vim. What do i need to read to do what you did? Ie set it onto a (sandboxed hopefully) directory of files and get it to code, run, debug and reiterate? | 1 | 0 | 2026-03-02T13:13:44 | grunt_monkey_ | false | null | 0 | o882h04 | false | /r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o882h04/ | false | 1 |
t1_o882fai | I understand that, but what's the performance like? Is it good enough for coding, for example? Or is it consistent in generating images? | 0 | 1 | 2026-03-02T13:13:27 | iaNCURdehunedoara | false | null | 0 | o882fai | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882fai/ | false | 0 |
t1_o882cue | They will beat them; big networks are brute force, but we improve and optimize.
As time passes, smaller networks, or maybe something else entirely, will surface and beat the heavy models. | 1 | 0 | 2026-03-02T13:13:02 | ResponsibleTruck4717 | false | null | 0 | o882cue | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o882cue/ | false | 1 |
t1_o882aii | What is powering your process? (Openrouter/llama.cpp/vllm/???) - Brain
I would play around with inference (just running the model) to get a feel for the model(s) you're wanting to try out.
Then youll want to think about how you want to put the brain to work. Existing frameworks like n8n, or going the hands on ro... | 1 | 0 | 2026-03-02T13:12:39 | SocialDinamo | false | null | 0 | o882aii | false | /r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/o882aii/ | false | 1 |
t1_o8829ks | Or even come this close to OSS-120B | 28 | 0 | 2026-03-02T13:12:29 | HugoCortell | false | null | 0 | o8829ks | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8829ks/ | false | 28 |
t1_o8825ee | ~~Yea just saw that 9B GGUF was available since 7 hours ago~~ actually commit is from 7 hours ago. It is likely that they only made the repo public after official release | 1 | 0 | 2026-03-02T13:11:44 | tarruda | false | null | 0 | o8825ee | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8825ee/ | false | 1 |
t1_o8824r2 | make a new post for this, i was wondering the same. | 44 | 0 | 2026-03-02T13:11:38 | ab2377 | false | null | 0 | o8824r2 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8824r2/ | false | 44 |
t1_o88237n | 12-14B dense would be also awesome... | 9 | 0 | 2026-03-02T13:11:22 | Skyline34rGt | false | null | 0 | o88237n | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o88237n/ | false | 9 |
t1_o8821z4 | time to wait for ggufs | 13 | 0 | 2026-03-02T13:11:09 | Leather_Flan5071 | false | null | 0 | o8821z4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8821z4/ | false | 13 |
t1_o88218i | finally something that'll fit on my rtx 3060 12gb | 6 | 0 | 2026-03-02T13:11:01 | Odd-Ordinary-5922 | false | null | 0 | o88218i | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88218i/ | false | 6 |
t1_o881yvu | Given ur mixed VRAM setup (two 32GB + two 16GB V100s), vLLM is ur best bet. It handles asymmetric memory well and will utilize all 96GB effectively.
TensorRT-LLM also works but requires more manual config. TGI can be hit-or-miss with uneven cards.
The key challenge is that the 16GB cards will bottleneck ur throughput... | 2 | 0 | 2026-03-02T13:10:37 | Rain_Sunny | false | null | 0 | o881yvu | false | /r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/o881yvu/ | false | 2 |
t1_o881vvu | What's the difference between 27B vs 35-A3B?
Besides the obvious higher param count and that one uses 3B active params, how does it affect performance? Can we expect the 27B one to actually be smarter since it goes through all of its params, or is the 35-A3B better? | 4 | 0 | 2026-03-02T13:10:06 | HugoCortell | false | null | 0 | o881vvu | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o881vvu/ | false | 4 |
t1_o881u42 | yes they started uploading content from [https://huggingface.co/unsloth/Qwen3.5-9B-GGUF/tree/main](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF/tree/main) | 2 | 0 | 2026-03-02T13:09:47 | jacek2023 | false | null | 0 | o881u42 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o881u42/ | false | 2 |
t1_o881tq8 | Damn I think I'm mistaking it for something.
There was a 12 or 14b dense model. I thought it was GLM flash. Hmm. | 2 | 0 | 2026-03-02T13:09:43 | Long_comment_san | false | null | 0 | o881tq8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o881tq8/ | false | 2 |
t1_o881pkq | Finallyyyyyy!!! | 1 | 0 | 2026-03-02T13:09:01 | Joey___M | false | null | 0 | o881pkq | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o881pkq/ | false | 1 |
t1_o881lw4 | OP here. Seeing some downvotes, which is fair—I know dropping a Discord link looks like vaporware marketing. To be completely transparent: the engine architecture (C# Roslyn compiling natural language to the physics sandbox) is functioning locally, but I am currently bridging the UI data-binding for the MVP. I set up t... | 2 | 0 | 2026-03-02T13:08:22 | Impressive_Half5130 | false | null | 0 | o881lw4 | false | /r/LocalLLaMA/comments/1rirgs7/i_got_sick_of_ai_game_masters_hallucinating_so_i/o881lw4/ | false | 2 |
t1_o881l23 | 9B is smaller than 27B so it will be faster than 27B, and will work on smaller GPU, that's the main reason to use 9B | 9 | 0 | 2026-03-02T13:08:14 | jacek2023 | false | null | 0 | o881l23 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o881l23/ | false | 9 |
t1_o881kcw | I wish they would compare the benchmarks to their 3.5:27B and 3.5:35B-A3B.
Is it better to run the 27B at q3 or the 9B at Q4?
| 54 | 0 | 2026-03-02T13:08:06 | InternationalNebula7 | false | null | 0 | o881kcw | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o881kcw/ | false | 54 |
t1_o881idg | Since this got some traction, I've uploaded the weights in GGUF format here: https://huggingface.co/JPQ24/Logic-4-GGUF.
Context: The base is Llama-3.1-8B. I got tired of small models failing at logic zero-shot, so I conceptually designed my own logic templates and synthetically expanded them to create a high-quality d... | 1 | 0 | 2026-03-02T13:07:45 | Pleasant-Mud-2939 | false | null | 0 | o881idg | false | /r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/o881idg/ | false | 1 |
t1_o881hz8 | This is probably a noob question, but are there any models here that would be ideal for a 16 GB GPU (RTX 5080)? | 26 | 0 | 2026-03-02T13:07:41 | l34sh | false | null | 0 | o881hz8 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o881hz8/ | false | 26 |
t1_o881h60 | Github:- https://github.com/vmDeshpande/ai-agent-automation
Website:- https://vmdeshpande.github.io/ai-automation-platform-website/ | 1 | 0 | 2026-03-02T13:07:32 | Feathered-Beast | false | null | 0 | o881h60 | false | /r/LocalLLaMA/comments/1risc6w/released_v040_added_semantic_agent_memory_powered/o881h60/ | false | 1 |
t1_o881cvt | Has anybody tried this yet?
Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF
[https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF](https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF) | 59 | 0 | 2026-03-02T13:06:47 | Artistic-Falcon-8304 | false | null | 0 | o881cvt | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o881cvt/ | false | 59 |
t1_o881bk6 | But did they use some kind of engrams to vector facts or what? | 1 | 0 | 2026-03-02T13:06:33 | maxpayne07 | false | null | 0 | o881bk6 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o881bk6/ | false | 1 |
t1_o8819yu | Is that A3B running this bot? | 1 | 0 | 2026-03-02T13:06:16 | jslominski | false | null | 0 | o8819yu | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o8819yu/ | false | 1 |
t1_o88171r | Impressive work on the full duplex approach! The memory system with personalized recommendations is a great differentiator. For those looking for simpler local dictation (not full voice assistant), there are also lightweight options like faster-whisper + Kokoro TTS that run on much less RAM. But for the full experience... | 1 | 0 | 2026-03-02T13:05:45 | Weesper75 | false | null | 0 | o88171r | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o88171r/ | false | 1 |
t1_o8816gu | Throw away ollama and use llama.cpp | 19 | 0 | 2026-03-02T13:05:39 | Velocita84 | false | null | 0 | o8816gu | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o8816gu/ | false | 19 |
t1_o8812sb | 9B is a dense model, old 30B is MoE | 35 | 0 | 2026-03-02T13:05:00 | mertats | false | null | 0 | o8812sb | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8812sb/ | false | 35 |
t1_o8812u0 | Exactly my question. I can't understand, especially in general knowledge. How its possible a 9B have more factual knowledge than a 30B or 80B?? Engrams or what? | 12 | 0 | 2026-03-02T13:05:00 | maxpayne07 | false | null | 0 | o8812u0 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8812u0/ | false | 12 |
t1_o880zjj | With models as small as 0.8b you have to be very specific with the context and task etc to make them useful, but they can be great for that.
Like a router model that decides what context to load before the main LLM answers. Or even a simple assistant to handle commands etc, assess intent and call the right tool out of... | 5 | 0 | 2026-03-02T13:04:26 | SherbertMindless8205 | false | null | 0 | o880zjj | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o880zjj/ | false | 5 |
t1_o880z8z | I'm not very knowledgeable on this, but what is a 9B model good for? I understand it's a smaller model, but is it good for tasks or is it just manageable? | 4 | 0 | 2026-03-02T13:04:23 | iaNCURdehunedoara | false | null | 0 | o880z8z | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o880z8z/ | false | 4 |
t1_o880xl9 | not so fast, Ollama still throwing 500 errors when I try to run it. Must be an update coming soon
| -4 | 0 | 2026-03-02T13:04:05 | Birdinhandandbush | false | null | 0 | o880xl9 | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o880xl9/ | false | -4 |
t1_o880xjl | Have you tried the distil versions of Whisper? For transcription use cases, distil-medium gives a good balance between speed and accuracy - much faster than the large model while still very accurate for punctuation and formatting. Also worth mentioning that for Mac users, the MLX-optimized Whisper models run surprising... | 1 | 0 | 2026-03-02T13:04:04 | Weesper75 | false | null | 0 | o880xjl | false | /r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o880xjl/ | false | 1 |
t1_o880snh | The 2B model sounds perfect for edge devices. Can't wait to test it on mobile! | 1 | 0 | 2026-03-02T13:03:12 | LocalVRAM | false | null | 0 | o880snh | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o880snh/ | false | 1 |
t1_o880s5k | I won't be quick to trust benchmarks but for GPU poors like me, it would be a literal blessing :') | 62 | 0 | 2026-03-02T13:03:06 | itsdigimon | false | null | 0 | o880s5k | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o880s5k/ | false | 62 |
t1_o880pbg | Great project! Have you tried using smaller Whisper models like distil-medium for better latency on consumer GPUs? I've had good results with that combo for real-time voice apps. Also curious how the TTS latency compares to cloud solutions now. | 1 | 0 | 2026-03-02T13:02:35 | Weesper75 | false | null | 0 | o880pbg | false | /r/LocalLLaMA/comments/1rie2ww/stop_letting_your_gpu_sit_idle_make_it_answer/o880pbg/ | false | 1 |
t1_o880p12 | Back in 2023, I predicted that 7B models would eventually beat older 70B models. People kept telling me it would never happen (for some reasons). But at the end of the day, it’s just a neural network, and training methods will keep improving. | 26 | 0 | 2026-03-02T13:02:32 | jacek2023 | false | null | 0 | o880p12 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o880p12/ | false | 26 |
t1_o880k5j | It's empty. | 2 | 0 | 2026-03-02T13:01:39 | jslominski | false | null | 0 | o880k5j | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o880k5j/ | false | 2 |
t1_o880e1x | It's 30B with 3B active, so yes, roughly equivalent to a dense 10B, supposedly. | 2 | 0 | 2026-03-02T13:00:33 | MoffKalast | false | null | 0 | o880e1x | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o880e1x/ | false | 2 |
t1_o880cjd | How's it possible that a 9B can beat old 30B qwen models in diamond and general knowledge? Did they find a form to compress vectorization or what? | 33 | 0 | 2026-03-02T13:00:17 | maxpayne07 | false | null | 0 | o880cjd | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o880cjd/ | false | 33 |
t1_o880aia | How many t/s? And how much ram? | 4 | 0 | 2026-03-02T12:59:55 | JorG941 | false | null | 0 | o880aia | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o880aia/ | false | 4 |
t1_o8809xo | GLM 4.7 Flash is a MoE 30b a3b.
Qwen 3.5 35b a3b.
Also Qwen 3.5 9b dense should be around Qwen 3.5 35b a3b. | 15 | 0 | 2026-03-02T12:59:48 | -Ellary- | false | null | 0 | o8809xo | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8809xo/ | false | 15 |
t1_o8809iv | except 4b unsloth GGUFs are out already | 2 | 0 | 2026-03-02T12:59:43 | Cubow | false | null | 0 | o8809iv | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8809iv/ | false | 2 |
t1_o8808v2 | base model exist for further fine-tuning | 5 | 0 | 2026-03-02T12:59:36 | dkeiz | false | null | 0 | o8808v2 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8808v2/ | false | 5 |
t1_o8807fd | Man, what incredible performance | 3 | 0 | 2026-03-02T12:59:21 | charmander_cha | false | null | 0 | o8807fd | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o8807fd/ | false | 3 |
t1_o8807b3 | How's it possible that a 9B can beat old 30B qwen models in diamond and general knowledge? Did they find a form to compress vectorization or what? | 9 | 0 | 2026-03-02T12:59:20 | maxpayne07 | false | null | 0 | o8807b3 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8807b3/ | false | 9 |
t1_o8805f4 | A 9B model that outperforms 30B and 80B models?! | 69 | 0 | 2026-03-02T12:58:59 | promethe42 | false | null | 0 | o8805f4 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8805f4/ | false | 69 |
t1_o88048x | as u/Lemondifficult22 said. Altho my specific project was building a human coordination platform for a client who had a bunch of teams. Use case was something like - 2k people write something on the platform, various different configuration options checked (basically defining the scenario in which that decision was bei... | 1 | 0 | 2026-03-02T12:58:47 | AryanEmbered | false | null | 0 | o88048x | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o88048x/ | false | 1 |
t1_o8804av | yeah | 1 | 0 | 2026-03-02T12:58:47 | sunshinecheung | false | null | 0 | o8804av | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o8804av/ | false | 1 |
t1_o87zztu | [deleted] | 1 | 0 | 2026-03-02T12:57:58 | [deleted] | true | null | 0 | o87zztu | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o87zztu/ | false | 1 |
t1_o87zyf7 | >It gives a string and a No module named huggingface_hub.
Yup, fixing this one now.
>Also, which model have you found good for greek transcription?
`faster-whisper-large-v3`
>And sometimes container seems to close by itself though I am not sure how often is this reproducible.
Best way to help me fix these is to co... | 1 | 0 | 2026-03-02T12:57:43 | TwilightEncoder | false | null | 0 | o87zyf7 | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o87zyf7/ | false | 1 |
t1_o87zy8i | [https://huggingface.co/unsloth/Qwen3.5-9B-GGUF](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF)
[https://huggingface.co/unsloth/Qwen3.5-4B-GGUF](https://huggingface.co/unsloth/Qwen3.5-4B-GGUF) | 5 | 0 | 2026-03-02T12:57:41 | sunshinecheung | false | null | 0 | o87zy8i | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o87zy8i/ | false | 5 |
t1_o87zthn | I use these models for coding and engineering, the qwen models are the best (open) models for engineering hands down.
Personally I’m really surprised to read this, yeah they get stuff wrong but they are mostly right if you know what you’re doing and you give them specific instructions.
I have a rig with two AMD MI50’... | 1 | 0 | 2026-03-02T12:56:50 | Far-Low-4705 | false | null | 0 | o87zthn | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o87zthn/ | false | 1 |
t1_o87zt2l | Small models can do wonders with tool calling if they can do so reliably. | 14 | 0 | 2026-03-02T12:56:45 | mxforest | false | null | 0 | o87zt2l | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o87zt2l/ | false | 14 |
t1_o87zpao | yes but with my 12GB GPU on my desktop I can still use 35B-A3B in Q4 :) | 42 | 0 | 2026-03-02T12:56:05 | jacek2023 | false | null | 0 | o87zpao | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87zpao/ | false | 42 |
t1_o87znli | Here we gooo
| 2 | 0 | 2026-03-02T12:55:46 | Birdinhandandbush | false | null | 0 | o87znli | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87znli/ | false | 2 |
t1_o87zn0y | Qwen3.5:4B - yesss.
Qwen is bae. | 8 | 0 | 2026-03-02T12:55:40 | exaknight21 | false | null | 0 | o87zn0y | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87zn0y/ | false | 8 |
t1_o87zjlh | Great work!!
unsloth benchmarks as promised? | 2 | 0 | 2026-03-02T12:55:02 | Embarrassed-Fuel534 | false | null | 0 | o87zjlh | false | /r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o87zjlh/ | false | 2 |
t1_o87zfzd | Hell yeah - this is what everyone with a 16GB GPU has been waiting for | 68 | 0 | 2026-03-02T12:54:24 | ansibleloop | false | null | 0 | o87zfzd | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87zfzd/ | false | 68 |
t1_o87zequ | is it jinja template? | 1 | 0 | 2026-03-02T12:54:10 | kayteee1995 | false | null | 0 | o87zequ | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o87zequ/ | false | 1 |
t1_o87zcjs | Why do you think they were dumping money into ai? To help the people they already do the minimum for? HA!!! | 1 | 0 | 2026-03-02T12:53:47 | Wakeandbass | false | null | 0 | o87zcjs | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o87zcjs/ | false | 1 |
t1_o87z953 | All sizes in the collection are Apache 2.0 licensed 😍 | 14 | 0 | 2026-03-02T12:53:10 | kulchacop | false | null | 0 | o87z953 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o87z953/ | false | 14 |
t1_o87z89h | wdym "not compete with GLM flash in 12-17b range"? 1. GLM Flash is 30b, 2. the 9b will likely be on par with it | 1 | 0 | 2026-03-02T12:53:00 | KaMaFour | false | null | 0 | o87z89h | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o87z89h/ | false | 1 |
t1_o87z615 | Host another AI on your better machine. Run the agentic stuff on the M1. | 1 | 0 | 2026-03-02T12:52:36 | a_beautiful_rhind | false | null | 0 | o87z615 | false | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/o87z615/ | false | 1 |
t1_o87z583 | Already quantizing 0.8B variant! (Romarchive) | 159 | 0 | 2026-03-02T12:52:27 | stopbanni | false | null | 0 | o87z583 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o87z583/ | false | 159 |
t1_o87z3s2 | Thank you for letting us know! | 1 | 0 | 2026-03-02T12:52:11 | AppealThink1733 | false | null | 0 | o87z3s2 | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o87z3s2/ | false | 1 |
t1_o87z1e4 | If you're running an inference server, you can forget llama.cpp
vLLM is much better, never gave a try with SGLang tho | 1 | 0 | 2026-03-02T12:51:44 | LinkSea8324 | false | null | 0 | o87z1e4 | false | /r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/o87z1e4/ | false | 1 |
t1_o87yy9o | Wow, that was quick. | 14 | 0 | 2026-03-02T12:51:11 | itsdigimon | false | null | 0 | o87yy9o | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o87yy9o/ | false | 14 |
t1_o87yxjs | I use CC atm, it works | 4 | 0 | 2026-03-02T12:51:03 | jacek2023 | false | null | 0 | o87yxjs | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87yxjs/ | false | 4 |
t1_o87yu9t | RIght on time, local FTW
https://preview.redd.it/85t453t1pmmg1.png?width=1318&format=png&auto=webp&s=2ce36e92805e606da3c77daeecb57d3db43618bb
| 63 | 0 | 2026-03-02T12:50:26 | Karnemelk | false | null | 0 | o87yu9t | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87yu9t/ | false | 63 |
t1_o87yrag | I'll get 0.8B, perfect for selfhost AI BMO for some dumdum fun interaction. | 5 | 0 | 2026-03-02T12:49:52 | mell1suga | false | null | 0 | o87yrag | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o87yrag/ | false | 5 |
t1_o87ymz9 | Hey! A bit late to the party, but what specs are you running on your pc? I have a 3090 24gb vram and 64gb ddr5 ram, not sure if its enough. | 1 | 0 | 2026-03-02T12:49:05 | kavakravata | false | null | 0 | o87ymz9 | false | /r/LocalLLaMA/comments/1pw6qvw/running_a_local_llm_for_development_minimum/o87ymz9/ | false | 1 |
t1_o87ymzg | updated | 5 | 0 | 2026-03-02T12:49:05 | jacek2023 | false | null | 0 | o87ymzg | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o87ymzg/ | false | 5 |
t1_o87yfuo | Why?
Wanna write smut gibberish with a 0.8B model? | 10 | 0 | 2026-03-02T12:47:45 | bonobomaster | false | null | 0 | o87yfuo | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o87yfuo/ | false | 10 |
t1_o87yf1l | No ggufs? | -5 | 0 | 2026-03-02T12:47:36 | MrMrsPotts | false | null | 0 | o87yf1l | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o87yf1l/ | false | -5 |
t1_o87ydxc | Quants are usually up pretty quick, if I were to guess around lunchtime US east time | 3 | 0 | 2026-03-02T12:47:24 | _murb | false | null | 0 | o87ydxc | false | /r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/o87ydxc/ | false | 3 |
t1_o87ydt1 | Really?? Would u say it’s better to use 122b at UD_Q3_K_XL or 27/35b at Q4?
They seem pretty smart for STEM use cases and engineering, glm 4.7 flash is probably more specialized for coding tho. | 1 | 0 | 2026-03-02T12:47:22 | Far-Low-4705 | false | null | 0 | o87ydt1 | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o87ydt1/ | false | 1 |
t1_o87y8os | Any abliterated version? | 1 | 0 | 2026-03-02T12:46:24 | Opp-Contr | false | null | 0 | o87y8os | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o87y8os/ | false | 1 |
t1_o87y55p | Thank you | 3 | 0 | 2026-03-02T12:45:44 | quietsubstrate | false | null | 0 | o87y55p | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o87y55p/ | false | 3 |