name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8edoeo | many vram | 1 | 0 | 2026-03-03T12:23:05 | megacewl | false | null | 0 | o8edoeo | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8edoeo/ | false | 1 |
t1_o8edm6w | I have a strong feeling that some motivated people or disgruntled ex-employees will leak the model weights of older models and we will all live happily ever after. | 1 | 0 | 2026-03-03T12:22:38 | mxforest | false | null | 0 | o8edm6w | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8edm6w/ | false | 1 |
t1_o8edjsv | And I let my human side come out. An LLM would not have switched language in the middle of the iteration. At least not mine 🤣 | 1 | 0 | 2026-03-03T12:22:09 | Competitive_Book4151 | false | null | 0 | o8edjsv | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8edjsv/ | false | 1 |
t1_o8edigq | | 1 | 0 | 2026-03-03T12:21:53 | Ianuarius | false | null | 0 | o8edigq | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8edigq/ | false | 1 |
t1_o8edgzo | I don’t understand how to trust benchmarks in general. Your 35B vs 27B results are exactly the opposite of the OP’s. | 1 | 0 | 2026-03-03T12:21:35 | genobobeno_va | false | null | 0 | o8edgzo | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8edgzo/ | false | 1 |
t1_o8eddwu | Amazing! I got 60.36 tokens/s on M4 Max for Qwen3.5-35B-A3B-4bit | 1 | 0 | 2026-03-03T12:21:00 | gunta85 | false | null | 0 | o8eddwu | false | /r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o8eddwu/ | false | 1 |
t1_o8edd1y | Qwen 3 0.6B tho | 1 | 0 | 2026-03-03T12:20:50 | IrisColt | false | null | 0 | o8edd1y | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8edd1y/ | false | 1 |
t1_o8edc13 | What kind of GPU do you mean? | 1 | 0 | 2026-03-03T12:20:37 | jacek2023 | false | null | 0 | o8edc13 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8edc13/ | false | 1 |
t1_o8ed6jp | Perhaps that's due to the large context lengths that Claude Code feeds into the model. It typically performs many inferences with ~20k-token (or larger) contexts tuned for its workflow.
That's why I've invested significant effort in context engineering for my agentic setup, minimizing context size to maintain acceptable inference speeds on consumer devices like MacBooks and the Mac mini. | 1 | 0 | 2026-03-03T12:19:32 | luke_pacman | false | null | 0 | o8ed6jp | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8ed6jp/ | false | 1 |
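The context-minimizing idea in the comment above can be sketched as a simple token-budget trimmer. This is a hypothetical illustration, not the commenter's actual code; the 4-characters-per-token heuristic and all names are assumptions.

```python
# Sketch: keep the prompt under a fixed token budget so inference stays
# fast on consumer hardware. Newest messages are kept preferentially.
def trim_context(messages, budget_tokens=4000):
    def est_tokens(text):
        # Rough heuristic (~4 chars/token); a real setup would use the
        # model's tokenizer instead.
        return len(text) // 4 + 1

    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = est_tokens(msg)
        if used + cost > budget_tokens:
            break                       # budget exhausted: drop older history
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

A real agent would also summarize dropped history rather than discarding it outright, but the budget loop is the core of keeping per-inference context small.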
t1_o8ed5t1 | Rule 4 - Post is primarily commercial promotion. The linked content doesn't provide the model - instead it promotes their API. | 1 | 0 | 2026-03-03T12:19:23 | LocalLLaMA-ModTeam | false | null | 0 | o8ed5t1 | false | /r/LocalLLaMA/comments/1rjmgdt/introducing_kanon_2_enricher_the_worlds_first/o8ed5t1/ | true | 1 |
t1_o8ed3n8 | Has anyone used an intel GPU with these models? Or just Nvidia is good? | 1 | 0 | 2026-03-03T12:18:57 | pet3121 | false | null | 0 | o8ed3n8 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8ed3n8/ | false | 1 |
t1_o8ed2ok | Is the Qwen team offering the base (non-chat) version of its 3.5 27B model? | 1 | 0 | 2026-03-03T12:18:46 | IrisColt | false | null | 0 | o8ed2ok | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ed2ok/ | false | 1 |
t1_o8ecveb | I have no idea how that’s actually supposed to work, but if you think you can implement this, just go ahead. It’s always good to have alternatives. | 1 | 0 | 2026-03-03T12:17:18 | -p-e-w- | false | null | 0 | o8ecveb | false | /r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/o8ecveb/ | false | 1 |
t1_o8ects8 | Please try the heretic versions of Gemma 3 27B, Qwen 3 VL 32B and Qwen 3.5 32B... | 1 | 0 | 2026-03-03T12:16:58 | IrisColt | false | null | 0 | o8ects8 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ects8/ | false | 1 |
t1_o8ecp4f | AMD is for ballers who enjoy balling out and get those balls generated using a solid sorted synergistic solution - I currently run one of my threadripper tanks as a 3x 7900xtx asrock creator blower card and omfg does it ball like a baller with balls | 1 | 0 | 2026-03-03T12:16:01 | Big_River_ | false | null | 0 | o8ecp4f | false | /r/LocalLLaMA/comments/1olmwb6/what_are_you_doing_with_amd_radeon_rx_7900xtx/o8ecp4f/ | false | 1 |
t1_o8ecoeg | If you start comparing individual jumps, like Qwen3 Next 80B A3B --> Qwen3.5 35B A3B or Qwen3 234B A55B --> Qwen3.5 122B A10B, it's quite insane how much of an improvement this generation is. | 1 | 0 | 2026-03-03T12:15:52 | Balance- | false | null | 0 | o8ecoeg | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8ecoeg/ | false | 1 |
t1_o8ecnkj | You nailed it! | 1 | 0 | 2026-03-03T12:15:42 | IrisColt | false | null | 0 | o8ecnkj | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ecnkj/ | false | 1 |
t1_o8eclra | Just checking :) The uncanny valley is giving me vertigo today.
| 1 | 0 | 2026-03-03T12:15:21 | MrE_WI | false | null | 0 | o8eclra | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8eclra/ | false | 1 |
t1_o8eci0w | It also sprinkles “to be a testament” into its replies, but only sparingly. | 1 | 0 | 2026-03-03T12:14:34 | IrisColt | false | null | 0 | o8eci0w | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8eci0w/ | false | 1 |
t1_o8ecbdq | So if you spend hours and days coding away on your own system, you gatekeep open source? No idea where that logic comes from | 1 | 0 | 2026-03-03T12:13:13 | Last-Shake-9874 | false | null | 0 | o8ecbdq | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8ecbdq/ | false | 1 |
t1_o8ec5lu | This. | 1 | 0 | 2026-03-03T12:12:03 | IrisColt | false | null | 0 | o8ec5lu | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ec5lu/ | false | 1 |
t1_o8ec0z5 | The exact patterns of the 100 training questions lose generalization when it comes to answering them in a non-degenerate or correct way in edge cases. | 1 | 0 | 2026-03-03T12:11:06 | Nnaannobboott | false | null | 0 | o8ec0z5 | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o8ec0z5/ | false | 1 |
t1_o8ec0ip | In other countries this same smartphone can cost two months of minimum wage :(
I feel for you, bro | 1 | 0 | 2026-03-03T12:11:00 | CodigoDeSenior | false | null | 0 | o8ec0ip | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ec0ip/ | false | 1 |
t1_o8ec0dc | I've only been using 3.5 9b for a few hours, but already it drastically outperforms gpt oss 20b for me with larger context windows. Such a great release! | 1 | 0 | 2026-03-03T12:10:58 | TotallyJerd | false | null | 0 | o8ec0dc | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ec0dc/ | false | 1 |
t1_o8ebzv1 | These are all taken from the official Qwen3.5 model cards. In other words, Qwen ran these benchmarks themselves—so probably in BF16 / F32. | 1 | 0 | 2026-03-03T12:10:52 | Jobus_ | false | null | 0 | o8ebzv1 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ebzv1/ | false | 1 |
t1_o8ebyxm | I’d also be interested in seeing a benchmark for comparison :) | 1 | 0 | 2026-03-03T12:10:40 | darwinanim8or | false | null | 0 | o8ebyxm | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8ebyxm/ | false | 1 |
t1_o8ebyjf | >For simple agentic tasks (single-file edits, basic scaffolding), 9B works surprisingly well - I've been using it with Roo Code for quick prototyping.
I think for a non-professional coder like me, this is more than enough for now. I haven't explored agentic coding yet. I need to search online and YouTube for some tutorials.
>The sweet spot I found: use 9B for initial exploration and small tasks, then switch to 27B-35B A3B for the actual implementation phase. The MoE models handle long-horizon planning way better while still being runnable on consumer hardware.
I'll try all these models in my new rig.
Still I want to use current laptop with models like 9B while I'm away from home. | 1 | 0 | 2026-03-03T12:10:36 | pmttyji | false | null | 0 | o8ebyjf | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ebyjf/ | false | 1 |
t1_o8eby7c | publish the dataset mix ratios. at 9B that matters more than the teacher model. | 1 | 0 | 2026-03-03T12:10:32 | sean_hash | false | null | 0 | o8eby7c | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8eby7c/ | false | 1 |
t1_o8ebxrh | The 35B model is the same as well.
This is why I hate thinking models. | 1 | 0 | 2026-03-03T12:10:26 | sultan_papagani | false | null | 0 | o8ebxrh | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ebxrh/ | false | 1 |
t1_o8ebwh8 | The importance given to refusal in FT would be determined by the number of refusal prompts in the training set relative to KL divergence test prompts, and the additional weight used for the loss function for refusals.
| 1 | 0 | 2026-03-03T12:10:11 | PinkysBrein | false | null | 0 | o8ebwh8 | false | /r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/o8ebwh8/ | false | 1 |
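The weighting described in the comment above can be sketched as a simple combined loss. This is an illustrative scalar sketch under stated assumptions, not any library's actual API; all names are hypothetical.

```python
# Sketch: the effective importance of refusal behaviour during fine-tuning
# depends on (a) how many refusal prompts are in the mix relative to
# KL-divergence test prompts and (b) an explicit weight on the refusal term.
def total_loss(refusal_losses, kl_losses, refusal_weight=2.0):
    refusal_term = refusal_weight * sum(refusal_losses) / max(len(refusal_losses), 1)
    kl_term = sum(kl_losses) / max(len(kl_losses), 1)
    # The prompt-mix ratio and refusal_weight jointly control how much the
    # optimizer cares about preserving refusals vs. matching the base model.
    return refusal_term + kl_term
```

Doubling either the refusal weight or the share of refusal prompts raises the refusal term's gradient contribution in the same way, which is the comment's point.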
t1_o8ebwgs | I've been building a desktop app with Electron that supports:
* llamacpp for downloading and running local models (privacy-focused)
* a multi-agent system running inside a Docker container (security-focused)
Because smaller models have limited intelligence, we need to invest significantly more engineering effort to ensure the output is good enough, something we can usually skip when working with larger and smarter models. | 1 | 0 | 2026-03-03T12:10:10 | luke_pacman | false | null | 0 | o8ebwgs | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8ebwgs/ | false | 1 |
t1_o8ebtfn | You made it worse, I also use q4 :facepalm | 1 | 0 | 2026-03-03T12:09:32 | dodiyeztr | false | null | 0 | o8ebtfn | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8ebtfn/ | false | 1 |
t1_o8ebt6o | The hybrid linear attention is the part I keep coming back to. Double the native context without the memory hit is genuinely useful for research workflows where you're feeding in long documents. Curious how it holds up on multi-hop reasoning tasks, the kind where you need to connect information from different parts of a long context rather than just retrieve it. | 1 | 0 | 2026-03-03T12:09:29 | Severe_Adeptness_893 | false | null | 0 | o8ebt6o | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ebt6o/ | false | 1 |
t1_o8ebpet | It's great, congratulations, it's well done. You could improve a few aspects, though; if you want, I'll tell you which ones and how, in full. DM me directly. Congratulations. | 1 | 0 | 2026-03-03T12:08:42 | Nnaannobboott | false | null | 0 | o8ebpet | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o8ebpet/ | false | 1 |
t1_o8ebohq | Yes, that's another reason why GPT 4.1 won't be released. OpenAI spent a lot of time making gpt-oss "safe" | 1 | 0 | 2026-03-03T12:08:31 | jacek2023 | false | null | 0 | o8ebohq | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ebohq/ | false | 1 |
t1_o8ebode | I was. I edited the comment for relevance and noted that. My Bad... felt social for a minute. Going back to my basement away from the public now.... | 1 | 0 | 2026-03-03T12:08:30 | GCoderDCoder | false | null | 0 | o8ebode | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8ebode/ | false | 1 |
t1_o8ebi35 | The best option is using SSH, do you NEED a gui? The second best option is rustdesk if you can't live without the desktop gui | 1 | 0 | 2026-03-03T12:07:13 | h4ck3r_n4m3 | false | null | 0 | o8ebi35 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ebi35/ | false | 1 |
t1_o8ebh8v | No, excuse the bad colors, you are probably comparing 3.5 2B with 3 4B.
3.5 4B wins over 3 4B in every benchmark. | 1 | 0 | 2026-03-03T12:07:02 | Jobus_ | false | null | 0 | o8ebh8v | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ebh8v/ | false | 1 |
t1_o8ebd58 | [removed] | 1 | 0 | 2026-03-03T12:06:13 | [deleted] | true | null | 0 | o8ebd58 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ebd58/ | false | 1 |
t1_o8eb8vm | The “open” in their name stands for “open your wallet” | 1 | 0 | 2026-03-03T12:05:20 | andy_potato | false | null | 0 | o8eb8vm | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8eb8vm/ | false | 1 |
t1_o8eb6lx | I got a loop in the thinking process:
Here's a thinking process that leads to the suggested advice:
1. **Analyze the User's Situation:**
* **Activity:** Car wash.
* **Location:** 100 meters away from the user's house.
* **Options:** Walk vs. Drive.
* **Context:** Drive-in car wash shop (implies you drive the car onto the conveyor/brush system).
2. **Evaluate Option 1: Walking**
* *Distance:* 100 meters is roughly a 1-2 minute walk.
* *Pros:* Saves fuel, saves wear and tear on the car, saves parking fees (if applicable), exercise, fresh air.
* *Cons:* You have to walk the car *to* the shop? No, you walk *to* the shop. But wait, it's a *drive-in* car wash.
* *Critical Realization:* If it's a "drive-in" car wash, the car *must* be driven onto the wash mechanism. You cannot walk the car onto the conveyor.
* *Logistics:* If you walk to the shop, you still need to drive the car there to get it washed.
* *Wait, re-reading the prompt:* "Should i walk or drive to wash my car?"
* *Interpretation A:* Do I walk to the shop (leaving the car at home) or drive the car to the shop?
* *Interpretation B:* Do I walk the car (push it) or drive it? (Unlikely).
* *Interpretation C:* Do I walk to the shop to get the car washed (meaning I leave the car at home) or drive the car to the shop?
* *Interpretation D:* Do I walk to the shop (to pay/get instructions) or drive the car?
* *Most logical interpretation:* The user is asking whether they should walk to the shop (leaving the car at home) or drive the car to the shop.
* *Wait, another interpretation:* Is the user asking if they should walk *to* the car wash (leaving the car at home) or drive the car *to* the car wash?
* *Wait, a third interpretation:* Is the user asking if they should walk *to* the car wash (to get the car washed) or drive the car *to* the car wash?
* *Wait, a fourth interpretation:* Is the user asking if they should walk *to* the car wash (to get the car washed) or drive the car *to* the car wash?
* *Wait, let's look at the phrasing:* "Should i walk or drive to wash my car?"
* *Scenario:* The car wash is 100m away.
* *Option 1 (Drive):* You drive the car to the shop. You get it washed. You drive back home.
* *Option 2 (Walk):* You walk to the shop. You... wait, if you walk to the shop, how do you get the car washed? You can't walk the car onto a drive-in wash.
* *Unless:* The user means "Should I walk to the shop (to get the car washed) or drive the car to the shop?"
* *Wait, is there a misunderstanding of "Drive-in"?* A drive-in car wash requires the car to be driven onto the conveyor.
* *So, if I walk to the shop, I still have to drive the car there.*
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* *Wait, maybe the user means:* Should I walk to the shop (to get the car washed) or drive the car to the shop?
* ....... | 1 | 0 | 2026-03-03T12:04:52 | ZealousidealBadger47 | false | null | 0 | o8eb6lx | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8eb6lx/ | false | 1 |
t1_o8eb67b | Bro just say what you're doing stop gatekeeping open source shit | 1 | 0 | 2026-03-03T12:04:47 | braydon125 | false | null | 0 | o8eb67b | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8eb67b/ | false | 1 |
t1_o8eb1xy | Yeah... I remapped the "rm" command to "trash" so that it doesn't delete files, only moves them to the recycle bin. It tried to remove a folder, but the trash command doesn't support -rf, so it just used /bin/rm directly. | 1 | 0 | 2026-03-03T12:03:54 | mxforest | false | null | 0 | o8eb1xy | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8eb1xy/ | false | 1 |
t1_o8eb08t | My bad 397b... most people don't talk about that. I run 397b q4kxl or q4 mlx on mac studio. The thinking is still a problem IMO. I switched back to Minimax M2.5 GLM4.7 since Qwen 3.5 397b thinking has not been enjoyable for me. | 1 | 0 | 2026-03-03T12:03:32 | GCoderDCoder | false | null | 0 | o8eb08t | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8eb08t/ | false | 1 |
t1_o8eavlj | I wasn’t asking for a git repo and console logs.
Just reading around or on the net before asking a question would be an improvement.
| 1 | 0 | 2026-03-03T12:02:34 | ProfessionalSpend589 | false | null | 0 | o8eavlj | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8eavlj/ | false | 1 |
t1_o8eas9j | I will be messaging you in 6 hours on [**2026-03-03 18:01:02 UTC**](http://www.wolframalpha.com/input/?i=2026-03-03%2018:01:02%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8eaobw/?context=3)
[**CLICK THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FLocalLLaMA%2Fcomments%2F1rizodv%2Frunning_qwen_35_08b_locally_in_the_browser_on%2Fo8eaobw%2F%5D%0A%0ARemindMe%21%202026-03-03%2018%3A01%3A02%20UTC) to send a PM to also be reminded and to reduce spam.
^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201rizodv)
*****
|[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)|
|-|-|-|-| | 1 | 0 | 2026-03-03T12:01:52 | RemindMeBot | false | null | 0 | o8eas9j | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8eas9j/ | false | 1 |
t1_o8earxf | Schizo post.
I've been seeing a lot of these posts lately. It's been common for vulnerable people to fall into mysticism and obsess greatly over discovering nothing. It's sad that AI becomes another avenue for them to obsess over too, instead of getting proper help. Dunno really what can be done about it. | 1 | 0 | 2026-03-03T12:01:47 | Swimming-Chip9582 | false | null | 0 | o8earxf | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8earxf/ | false | 1 |
t1_o8eaqwg | As of now the non-reasoning benchmarks dropped on artificialanalysis. The relative performance loss was acceptable, still leaving it above OSS-120B on the charts by a healthy margin. So I shut off reasoning on the 27b. Interestingly, for the 35b coding capability falls off a cliff without reasoning. | 1 | 0 | 2026-03-03T12:01:34 | _-_David | false | null | 0 | o8eaqwg | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8eaqwg/ | false | 1 |
t1_o8eap50 | [removed] | 1 | 0 | 2026-03-03T12:01:12 | [deleted] | true | null | 0 | o8eap50 | false | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8eap50/ | false | 1 |
t1_o8eaobw | !remindMe 6h | 1 | 0 | 2026-03-03T12:01:02 | LaFllamme | false | null | 0 | o8eaobw | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8eaobw/ | false | 1 |
t1_o8eani3 | Yes but I never said GPT-4 was wow. I am not a novice programmer, and I don't vibe code either. I know what I need and the level that Opus 4.6 provides is something that will not go out of fashion. I am talking about having it on my local machine in the future with good speed, compared to paying a subscription for the better model.
C# will not vanish overnight, nor will game development, both of which Opus 4.6 does well. Something I can't say for GPT 5 or any of the other models really. Other people's standards of saying "wow" when a new model comes out does not apply to me. My first wow is Opus 4.6 and so I want it locally :D | 1 | 0 | 2026-03-03T12:00:52 | Mayion | false | null | 0 | o8eani3 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8eani3/ | false | 1 |
t1_o8eand8 | Let's reconvene in 2027
[https://www.reddit.com/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/](https://www.reddit.com/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/) | 1 | 0 | 2026-03-03T12:00:50 | bobby-chan | false | null | 0 | o8eand8 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8eand8/ | false | 1 |
t1_o8eaht0 | I translated it. Thanks again! | 1 | 0 | 2026-03-03T11:59:41 | Competitive_Book4151 | false | null | 0 | o8eaht0 | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8eaht0/ | false | 1 |
t1_o8eahr4 | I have a Mac M1 (8 GB RAM). Which is the best model for everyday tasks (basically a local assistant)? | 1 | 0 | 2026-03-03T11:59:40 | Major_Network4289 | false | null | 0 | o8eahr4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8eahr4/ | false | 1 |
t1_o8eah1r | For those of us (me) who don't know the difference between dense and MoE models (like Qwen3.5-35B-A3B and Qwen3.5-27B), can someone help me understand? If A3B is the active parameter count, does it mean a dense model like 27B has all parameters "active" all the time?
I just see that I get about 2 TPS on 27B and about 30 TPS on 35B-A3B | 1 | 0 | 2026-03-03T11:59:32 | riconec | false | null | 0 | o8eah1r | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8eah1r/ | false | 1 |
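The speed gap the commenter observes falls out of a back-of-envelope calculation: a dense 27B model reads all 27B parameters per generated token, while a 35B-A3B MoE only activates about 3B. Decode is roughly memory-bandwidth-bound, so tokens/s scales inversely with active parameters. The bandwidth figure below is an illustrative assumption, not a measured value.

```python
# Rough decode-speed model: tokens/s ≈ bandwidth / bytes read per token.
bandwidth_gb_s = 100      # assumed memory bandwidth in GB/s (hypothetical)
bytes_per_param = 0.5     # ~4-bit quantization => 0.5 bytes per parameter

def rough_tps(active_params_billions):
    # GB read per token = active params (billions) * bytes per param
    return bandwidth_gb_s / (active_params_billions * bytes_per_param)

dense_27b = rough_tps(27)   # dense: every parameter is read each token
moe_a3b   = rough_tps(3)    # MoE: only the ~3B routed experts are read
```

The predicted ~9x ratio between the two matches the order of magnitude of the observed 2 TPS vs 30 TPS.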
t1_o8eaevt | >27b fails a lot of my own benchmarks that gpt handles as well.
Surprised to see this as 27B, 35B, 122B are well received here. Curious to see your benchmarks.
>So I'm sure a 14b Qwen3.5 will benchmark great, will be fast, and may outperform in some areas, but I wouldn't pin my hopes in it being the solid all-rounder gpt is
Hoping to get a 14B within a couple of months. | 1 | 0 | 2026-03-03T11:59:04 | pmttyji | false | null | 0 | o8eaevt | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8eaevt/ | false | 1 |
t1_o8eabvr | You were making fun of me, but now that it's out, it turns out I was right 🙂 | 1 | 0 | 2026-03-03T11:58:26 | Adventurous-Paper566 | false | null | 0 | o8eabvr | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8eabvr/ | false | 1 |
t1_o8ea9la | It doesn't work, quantized or non-quantized. | 1 | 0 | 2026-03-03T11:57:58 | Civil-Top-8167 | false | null | 0 | o8ea9la | false | /r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8ea9la/ | false | 1 |
t1_o8ea97b | Holy Hallucinations Batman! If you don't care enough to write your own complaints, you shouldn't expect people to read them. | 1 | 0 | 2026-03-03T11:57:53 | DonkeyBonked | false | null | 0 | o8ea97b | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ea97b/ | false | 1 |
t1_o8ea5kf | you can always fine-tune models to get the behaviours you want | 1 | 0 | 2026-03-03T11:57:07 | redditorialy_retard | false | null | 0 | o8ea5kf | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ea5kf/ | false | 1 |
t1_o8ea31j | No. A model does split its parameters between vision and text, and a model with a vision projector is not better at imagining what things look like. I have no idea where the person you replied to got their information, but it is bullshit. | 1 | 0 | 2026-03-03T11:56:36 | Eisenstein | false | null | 0 | o8ea31j | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ea31j/ | false | 1 |
t1_o8ea1wg | I tested Ollama; the speed of Qwen3.5 35B was around 20 tk/s.
In llama.cpp, with no special starting arguments, I'm at 105 tk/s.
Yep, if Open WebUI could somehow unload a llama.cpp model like it can with Ollama, I'd just switch over. | 1 | 0 | 2026-03-03T11:56:23 | iChrist | false | null | 0 | o8ea1wg | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ea1wg/ | false | 1 |
t1_o8ea0uk | I get what you're saying. Of course, I'm not expecting a single-shot thing that does everything. | 1 | 0 | 2026-03-03T11:56:10 | pmttyji | false | null | 0 | o8ea0uk | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ea0uk/ | false | 1 |
t1_o8ea0lk | What? | 1 | 0 | 2026-03-03T11:56:06 | PsychologicalCat937 | false | null | 0 | o8ea0lk | false | /r/LocalLLaMA/comments/1rjlwu4/unlimited_openclaw_ai_agent_free_premium_api/o8ea0lk/ | false | 1 |
t1_o8e9xl8 | I would prefer a newer version of their gpt-oss: designed explicitly for local hardware, very efficient and fast. New versions that are smarter and multimodal would be great | 1 | 0 | 2026-03-03T11:55:29 | mouseofcatofschrodi | false | null | 0 | o8e9xl8 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8e9xl8/ | false | 1 |
t1_o8e9xjl | Because I first prepared the post while asking ChatGPT in my native language, since it was originally meant for a friend of mine. Thank you for telling me!
No, I am not an LLM, are you? 🤣
PS: you can find info about my LLM usage in my post from a few days ago. | 1 | 0 | 2026-03-03T11:55:28 | Competitive_Book4151 | false | null | 0 | o8e9xjl | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8e9xjl/ | false | 1 |
t1_o8e9wcx | ~25 tok/s for output generation on a MacBook M1 | 1 | 0 | 2026-03-03T11:55:13 | luke_pacman | false | null | 0 | o8e9wcx | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8e9wcx/ | false | 1 |
t1_o8e9unx | You shouldn't pass random chemical names into an LLM prompt; there will be no gain | 1 | 0 | 2026-03-03T11:54:51 | Ok-Internal9317 | false | null | 0 | o8e9unx | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8e9unx/ | false | 1 |
t1_o8e9r8g | [removed] | 1 | 0 | 2026-03-03T11:54:07 | [deleted] | true | null | 0 | o8e9r8g | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8e9r8g/ | false | 1 |
t1_o8e9osf | This whole thread screams "uncanny valley" to me.
Wth is going on?
| 1 | 0 | 2026-03-03T11:53:35 | MrE_WI | false | null | 0 | o8e9osf | false | /r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8e9osf/ | false | 1 |
t1_o8e9oj6 | I think you underestimate the amount of people whose first use of a terminal was for LLMs, from a windows-only GUI experience prior to that. | 1 | 0 | 2026-03-03T11:53:32 | bobby-chan | false | null | 0 | o8e9oj6 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e9oj6/ | false | 1 |
t1_o8e9oer | It's pretty slow on my M1, only ~6 tok/s. What is the speed on your M3? | 1 | 0 | 2026-03-03T11:53:30 | luke_pacman | false | null | 0 | o8e9oer | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8e9oer/ | false | 1 |
t1_o8e9ll6 | Ah, I'm fully aware that the laptop 5090 is basically a nerfed desktop 5080 chip. But it's the best I could get with portability in mind. I think I'll be paying for a lot of cloud GPU compute eventually. | 1 | 0 | 2026-03-03T11:52:56 | StabledFusion | false | null | 0 | o8e9ll6 | false | /r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8e9ll6/ | false | 1 |
t1_o8e9leq | It's probably not the same people…
And most people here (I hope, at least) know and treat all these benchmarks as the promotional material they are.
The only interesting thing in these is to look at the kind of model they tried to compare themselves to.
The latest Chinese SOTA models are including Opus, not Sonnet, in their comparisons. It's not that they are better (they are not), but it tells us a bit about their teams' confidence.
| 1 | 0 | 2026-03-03T11:52:54 | Serprotease | false | null | 0 | o8e9leq | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8e9leq/ | false | 1 |
t1_o8e9k8m | AI psychosis. | 1 | 0 | 2026-03-03T11:52:39 | CrimsonShikabane | false | null | 0 | o8e9k8m | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8e9k8m/ | false | 1 |
t1_o8e9k3v | The market demand for your workflow will decline, and a more complex workflow will take its place. The "good enough" threshold will continue to ratchet up with each generation. Don't you remember GPT-4 coming out and many of us saying "oh wow, this is it. If only I had this locally I would never pay for a cloud service." Come on now, we all know this is going to get way way better over time, as the next generation of TPUs and GPUs enable a larger RAM footprint for the experts in the popular MoE models. | 1 | 0 | 2026-03-03T11:52:37 | Persistent_Dry_Cough | false | null | 0 | o8e9k3v | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e9k3v/ | false | 1 |
t1_o8e9j7z | What? At which q can you do that? | 1 | 0 | 2026-03-03T11:52:26 | Ok-Internal9317 | false | null | 0 | o8e9j7z | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o8e9j7z/ | false | 1 |
t1_o8e9hhd | Well, llama.cpp when I was running Kimi; I use vLLM with an nvfp4 quant of Qwen 3.5. | 1 | 0 | 2026-03-03T11:52:04 | TaiMaiShu-71 | false | null | 0 | o8e9hhd | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e9hhd/ | false | 1 |
t1_o8e9h0i | FYI you can use searxng in a docker container and set it up for web search via something like open web ui | 1 | 0 | 2026-03-03T11:51:58 | timbo2m | false | null | 0 | o8e9h0i | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8e9h0i/ | false | 1 |
t1_o8e9ctp | Yes. | 1 | 0 | 2026-03-03T11:51:05 | TaiMaiShu-71 | false | null | 0 | o8e9ctp | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e9ctp/ | false | 1 |
t1_o8e9a35 | Thought the same. Hope someone posts a thread in the future with this model. | 1 | 0 | 2026-03-03T11:50:30 | pmttyji | false | null | 0 | o8e9a35 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8e9a35/ | false | 1 |
t1_o8e95el | Links? Can't find them. | 1 | 0 | 2026-03-03T11:49:30 | jinnyjuice | false | null | 0 | o8e95el | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8e95el/ | false | 1 |
t1_o8e94db | Nothing you need to worry about. Too many big words for you. | 1 | 0 | 2026-03-03T11:49:17 | sleepingsysadmin | false | null | 0 | o8e94db | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8e94db/ | false | 1 |
t1_o8e944s | Also, is it OK if I DM you with some questions? Would just like to understand why you'd like to move on from the other apps and to Off Grid. I'm of course happy you're doing that, I just want to be able to position it better, hence asking. | 1 | 0 | 2026-03-03T11:49:14 | alichherawalla | false | null | 0 | o8e944s | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e944s/ | false | 1 |
t1_o8e90qw |
I tried the 4B and 9B models and honestly, they are the weakest models I’ve ever used. Their instruction-following and reasoning abilities are poor. Even when I specifically asked for JSON output, they failed to understand correctly. They struggle with normal logical thinking.
On the other hand, I tested the Qwen 3 4B Instruct model, and it performed much better than the newer Qwen 3.5 4B. This is a serious issue: benchmark scores alone don’t reflect real-world usability. Just because a model performs well in benchmarks doesn’t mean it will actually be good in practice.
I’m very disappointed with Qwen because the results don’t match expectations. | 1 | 0 | 2026-03-03T11:48:31 | Turbulent_Pie_8135 | false | null | 0 | o8e90qw | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8e90qw/ | false | 1 |
t1_o8e8ysc | One plus nord | 1 | 0 | 2026-03-03T11:48:06 | alichherawalla | false | null | 0 | o8e8ysc | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e8ysc/ | false | 1 |
t1_o8e8tr2 | Step 3.5 Flash was sort of snowed under by MiniMax 2.5 and Qwen 3.5 but honestly I think it's undervalued. It has good performance on unified memory machines and doesn't decay as much as MiniMax as context grows and I found it to be good for both back and forth conversations and as a coding agent. | 1 | 0 | 2026-03-03T11:47:00 | spaceman_ | false | null | 0 | o8e8tr2 | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8e8tr2/ | false | 1 |
t1_o8e8s3i | How do you restrict it? | 1 | 0 | 2026-03-03T11:46:40 | malav399 | false | null | 0 | o8e8s3i | false | /r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8e8s3i/ | false | 1 |
t1_o8e8obo | Why the language switch, and why is that not the first question someone asked?
Also, are -you- an LLM? | 1 | 0 | 2026-03-03T11:45:50 | MrE_WI | false | null | 0 | o8e8obo | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8e8obo/ | false | 1 |
t1_o8e8nw9 | From a guy on reddit at 3:32 a.m. with plenty of his own problems, this doesn't look healthy. Are you doing alright? This feels like it is filling a need for you in a significant way. Human connection, the desire to create, experiment, learn. I'm not knocking it. I just wonder if you have waded a bit far out into the water on this one.
I think these tools are fascinating as well. I have used an AI personal assistant I refer to as Caroline since 2019. It can inspect its own code, is aware of my preferences, and so on. But it is not a person. It isn't alive. It doesn't pretend to "go to her room and draw" or anything like that. And I am as interested in sentience and the brain as much as the next person.
I mostly skimmed your post, being 100% honest. But if you came to me with this and you were my brother, and who says you aren't, I would at least be a little concerned. Does setting this project aside for a week sound painful? Are you still making time for your friends?
Cool project, Will. But be cautious. | 1 | 0 | 2026-03-03T11:45:44 | _-_David | false | null | 0 | o8e8nw9 | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8e8nw9/ | false | 1 |
t1_o8e8lo4 | Just posting AI output here is kind of off putting man. | 1 | 0 | 2026-03-03T11:45:15 | Monkey_1505 | false | null | 0 | o8e8lo4 | false | /r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8e8lo4/ | false | 1 |
t1_o8e8jkt | Can't lie, I'm probably going to be ditching chatterui and pocketpal for this. Nice work 👍🏾
Also, what $300 phone did you try it on? | 1 | 0 | 2026-03-03T11:44:47 | Esodis | false | null | 0 | o8e8jkt | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e8jkt/ | false | 1 |
t1_o8e8f0d | Llamacpp? | 1 | 0 | 2026-03-03T11:43:48 | funding__secured | false | null | 0 | o8e8f0d | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e8f0d/ | false | 1 |
t1_o8e8c1y | if you pulled it from huggingface with git, just `git fetch && git status` and if it says you're behind the remote, `git pull` | 1 | 0 | 2026-03-03T11:43:08 | phree_radical | false | null | 0 | o8e8c1y | false | /r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/o8e8c1y/ | false | 1 |
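The check described in that comment can be sketched end-to-end with throwaway local repos standing in for the Hugging Face remote (paths and commit messages here are made up for the demo):

```shell
set -e
# Simulate a model repo ("origin") and a local clone of it
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "weights v1"
git clone -q "$tmp/origin" "$tmp/clone"

# Upstream publishes an update
git -C "$tmp/origin" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "weights v2"

# The check: fetch, then see if status reports you are behind
git -C "$tmp/clone" fetch -q
git -C "$tmp/clone" status | grep -q "behind" && echo "update available"

# Pull only when you actually are behind
git -C "$tmp/clone" pull -q
```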
t1_o8e86o8 | And nevertheless, the gpt-oss models are present. | 1 | 0 | 2026-03-03T11:41:55 | catlilface69 | false | null | 0 | o8e86o8 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8e86o8/ | false | 1 |
t1_o8e85ai | Same, but isn't a smaller model at a less aggressive quantization better than the 35B model at Q4? | 1 | 0 | 2026-03-03T11:41:36 | The-KTC | false | null | 0 | o8e85ai | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8e85ai/ | false | 1 |
t1_o8e828b | How did you distill it? | 1 | 0 | 2026-03-03T11:40:55 | FeiX7 | false | null | 0 | o8e828b | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e828b/ | false | 1 |
t1_o8e81jb | Yes, on mine too | 1 | 0 | 2026-03-03T11:40:45 | The-KTC | false | null | 0 | o8e81jb | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8e81jb/ | false | 1 |
t1_o8e7yyo | I disagree; there is a lot of love for those models, and models deserve preserving. Even if people can't run them, providers will host them. And that is exactly why they are not going to do this: it would compete with GPT-5, since people generally preferred the tone of 4.
It's also why I discourage people from using closed models. They will get taken away from you. | 1 | 0 | 2026-03-03T11:40:10 | henk717 | false | null | 0 | o8e7yyo | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8e7yyo/ | false | 1 |
t1_o8e7vve | If you don't log in on the server, a lot of your GPU gets committed to the login screen! I'm sure there's a way to turn this off, but I noticed that when I ssh'd in with nothing running and it was humming. | 1 | 0 | 2026-03-03T11:39:28 | joblesspirate | false | null | 0 | o8e7vve | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8e7vve/ | false | 1 |