name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8bomw7 | yeah it's not worth using qwen in cloud. GLM5 and K2.5 are better and about 2/3rd the price of the big qwen. | 1 | 0 | 2026-03-03T00:10:52 | llama-impersonator | false | null | 0 | o8bomw7 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bomw7/ | false | 1 |
t1_o8bohes | How do you do that? I thought it had to be set when the model is loaded from the template? | 1 | 0 | 2026-03-03T00:09:58 | Zc5Gwu | false | null | 0 | o8bohes | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8bohes/ | false | 1 |
t1_o8bofui | [removed] | 1 | 0 | 2026-03-03T00:09:43 | [deleted] | true | null | 0 | o8bofui | false | /r/LocalLLaMA/comments/1naqv29/anyone_know_if_there_any_other_uncensored_models/o8bofui/ | false | 1 |
t1_o8boen3 | From what I've seen, a lot of the recent Qwen models, like [Qwen3.5](https://huggingface.co/Qwen/Qwen3.5-27B), [Qwen3-Omni](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Thinking), and [Qwen3-VL](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Thinking) are supposed to do this, though I haven't been able to test video myself. The main issue I'm running into is a lot of the interfaces like LM Studio and Ollama don't have native video input support. Not sure about llama.cpp. There are ways to get around it by doing the whole "image-batch" method, but that doesn't sound like what you are looking for. For what its worth though, Qwen3.5 has been pretty impressive with image descriptions and does a decent job understanding image sequences.
So while I don't have an easy solution for it, the models that are out at least are supposed to work with video. For me at least, its just a matter of waiting until interfaces catch up and allow video inputs. | 1 | 0 | 2026-03-03T00:09:31 | TriggerHappy842 | false | null | 0 | o8boen3 | false | /r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/o8boen3/ | false | 1 |
t1_o8bodcn | Cannot share currently as it's code for work, and it's pretty sloppy currently tbh.
I had Claude write a custom harness. Opencode, etc. have way too long of a system prompt. My system prompt is only a couple hundred tokens.
Rather than expose all tools to the LLM, the harness uses heuristics to analyze the user's requests and intelligently feed it tools. It also feeds in a "list_all" tool. There's an "ephemeral" message system which regularly analyzes the LLM's output and feeds in things as well: "you should use this tool", "you are trying this tool too many times, try something else", etc.
I found the small models understood what tools to use but failed to call them, usually because of malformed JSON, so I added coalescing and a fallback to simple key-value matching in the tool calls, rather than erroring. This seemed to fix the issue.
I also have a knowledge base system which contains its own internal documents, and also reads all system man pages. It then uses a simple TF-IDF RAG system to provide a search function the model is able to freely call.
My system prompt uses a CoT style prompt that emphasizes these tools. | 1 | 0 | 2026-03-03T00:09:18 | piexil | false | null | 0 | o8bodcn | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bodcn/ | false | 1 |
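The TF-IDF knowledge-base search mentioned in the comment above can be sketched in a few lines. This is not the commenter's actual harness (they did not share it); the document names, contents, and the `search`/`tfidf_index` helpers are illustrative assumptions, using a plain bag-of-words TF-IDF with no external dependencies.

```python
# Minimal TF-IDF retrieval sketch (illustrative, not the commenter's code).
import math
from collections import Counter

def tfidf_index(docs):
    """Build per-document term frequencies and corpus-wide IDF weights."""
    tfs = [Counter(text.lower().split()) for text in docs.values()]
    n = len(tfs)
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())  # count each term once per document
    idf = {term: math.log(n / count) + 1.0 for term, count in df.items()}
    return list(docs.keys()), tfs, idf

def search(query, names, tfs, idf, top_k=3):
    """Score documents by the summed TF-IDF weight of the query terms."""
    terms = query.lower().split()
    scores = []
    for name, tf in zip(names, tfs):
        score = sum(tf[t] * idf.get(t, 0.0) for t in terms)
        scores.append((score, name))
    scores.sort(reverse=True)
    return [name for score, name in scores[:top_k] if score > 0]

# Hypothetical corpus: man-page summaries plus an internal note.
docs = {
    "man_tar": "tar archives files extract create compress archive",
    "man_grep": "grep search pattern lines files regular expression",
    "notes_deploy": "deploy service restart systemd unit files",
}
names, tfs, idf = tfidf_index(docs)
print(search("search files for a pattern", names, tfs, idf))
```

Exposed to the model as a single tool, this gives small models a cheap, deterministic lookup over man pages without needing embeddings.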
t1_o8bo781 | Which app are you using? | 1 | 0 | 2026-03-03T00:08:17 | RIP26770 | false | null | 0 | o8bo781 | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8bo781/ | false | 1 |
t1_o8bo4q0 | [removed] | 1 | 0 | 2026-03-03T00:07:52 | [deleted] | true | null | 0 | o8bo4q0 | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8bo4q0/ | false | 1 |
t1_o8bo3nd | I agree with you, totally unusable experience, don't get why everyone praises it so much. Maybe some tweaking would help, but out of the box it takes around 1 minute just to answer "hi", which is nonsense. | 1 | 0 | 2026-03-03T00:07:42 | Specialist-Chain-369 | false | null | 0 | o8bo3nd | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bo3nd/ | false | 1 |
t1_o8bo3hs | Numbers are for people who are trying to build things in production, not hypebeasts. Mad at myself for sticking around this long for what is obviously just a bunch of bullshit | 1 | 0 | 2026-03-03T00:07:40 | JamesEvoAI | false | null | 0 | o8bo3hs | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8bo3hs/ | false | 1 |
t1_o8bo0kf | Backup your data. The AI worms are coming. | 1 | 0 | 2026-03-03T00:07:11 | fullouterjoin | false | null | 0 | o8bo0kf | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8bo0kf/ | false | 1 |
t1_o8bnufy | How are people running the GGUF versions of these? Textgen and Ollama don't seem to work for me and throw errors about wrong architecture. | 1 | 0 | 2026-03-03T00:06:11 | SubjectBridge | false | null | 0 | o8bnufy | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bnufy/ | false | 1 |
t1_o8bnpqp | the 3.5 35-A3B is incredible overall, works very well with agentic tasks. I have even used opencode to test; it doesn't match the results of frontier models, but it worked and finished the task | 1 | 0 | 2026-03-03T00:05:26 | yay-iviss | false | null | 0 | o8bnpqp | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bnpqp/ | false | 1 |
t1_o8bnpnq | as someone with 32GB RAM and 12GB VRAM, I'm gutted that Qwen 3.5 27b is like 5 tk/s | 1 | 0 | 2026-03-03T00:05:25 | philmarcracken | false | null | 0 | o8bnpnq | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bnpnq/ | false | 1 |
t1_o8bnoxd | Unlike Google, Anthropic, and OpenAI, they don't have access to infinite GPUs/TPUs because of export controls. | 1 | 0 | 2026-03-03T00:05:18 | JamesEvoAI | false | null | 0 | o8bnoxd | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bnoxd/ | false | 1 |
t1_o8bnmvy | You're right, I missed that part. | 1 | 0 | 2026-03-03T00:04:59 | thejoyofcraig | false | null | 0 | o8bnmvy | false | /r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o8bnmvy/ | false | 1 |
t1_o8bnlcb | Thanks, that's actually useful. Though disabling thinking is exactly what I mentioned in the original post — the quality drop is significant enough that it underperforms the older qwen3:4b-instruct. So the choice is basically: too slow with thinking, too dumb without it. | 1 | 0 | 2026-03-03T00:04:43 | CapitalShake3085 | false | null | 0 | o8bnlcb | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bnlcb/ | false | 1 |
t1_o8bnkyu | I've managed to get 27b 8Q into the loop today with mostly default settings for the (freshly built) llama-server. I guess I'll have to tune penalties. | 1 | 0 | 2026-03-03T00:04:39 | Prudent-Ad4509 | false | null | 0 | o8bnkyu | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8bnkyu/ | false | 1 |
t1_o8bnk6t | You're not getting the question.
The 30B MoE should deliver more world knowledge than the 9B, irrespective of dense or sparse. | 1 | 0 | 2026-03-03T00:04:32 | crantob | false | null | 0 | o8bnk6t | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bnk6t/ | false | 1 |
t1_o8bnj7x | its been available on the api for a few days now: [https://developers.openai.com/api/docs/models/gpt-5.3-codex](https://developers.openai.com/api/docs/models/gpt-5.3-codex) | 1 | 0 | 2026-03-03T00:04:23 | itsjase | false | null | 0 | o8bnj7x | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bnj7x/ | false | 1 |
t1_o8bni3g | Please post content like that on youtube so we can share it; it's worth showing to people who have no idea what local LLMs are. Most youtube content about LLMs is total shit. | 1 | 0 | 2026-03-03T00:04:12 | jacek2023 | false | null | 0 | o8bni3g | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8bni3g/ | false | 1 |
t1_o8bnbnv | Same result :)
https://preview.redd.it/nprbzko61qmg1.png?width=808&format=png&auto=webp&s=15fc04785934d0c65486f75bb7c7ac1217e20396
| 1 | 0 | 2026-03-03T00:03:09 | CapitalShake3085 | false | null | 0 | o8bnbnv | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bnbnv/ | false | 1 |
t1_o8bn7o9 | Does this just run on GPU or can I run this on the CPU? | 1 | 0 | 2026-03-03T00:02:30 | TinFoilHat_69 | false | null | 0 | o8bn7o9 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8bn7o9/ | false | 1 |
t1_o8bn4tc | what do you mean by "still had the default sampler"?
What sampler should be used, and how? | 1 | 0 | 2026-03-03T00:02:03 | meganoob1337 | false | null | 0 | o8bn4tc | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8bn4tc/ | false | 1 |
t1_o8bmy3k | Are there any speed gains due to rust? Except probably load times :) | 1 | 0 | 2026-03-03T00:00:59 | Gregory-Wolf | false | null | 0 | o8bmy3k | false | /r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/o8bmy3k/ | false | 1 |
t1_o8bmqrw | That's close to Gemini 3 Flash pricing, which is a way better model. | 1 | 0 | 2026-03-02T23:59:51 | baseketball | false | null | 0 | o8bmqrw | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bmqrw/ | false | 1 |
t1_o8bmk6e | I'm still waiting for nanbeige to answer my 2nd question -- one hour later.
Literally one hour. | 1 | 0 | 2026-03-02T23:58:46 | crantob | false | null | 0 | o8bmk6e | false | /r/LocalLLaMA/comments/1rb61og/nanbeige_41_is_the_best_small_llm_it_crush_qwen_4b/o8bmk6e/ | false | 1 |
t1_o8bmfy3 | i still dont get what this is | 1 | 0 | 2026-03-02T23:58:07 | ClimateBoss | false | null | 0 | o8bmfy3 | false | /r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8bmfy3/ | false | 1 |
t1_o8bmcjj | > Reasoning capability is dominated by active parameters and intermediate state, not world knowledge.
>
> Therefore in the context of reasoning heavy benchmarks, the comparison between Qwen3.5-9B and Qwen3-30B-A3B is more like a 9B vs a 3B model, not 9B vs 30B.
| 1 | 0 | 2026-03-02T23:57:34 | crantob | false | null | 0 | o8bmcjj | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bmcjj/ | false | 1 |
t1_o8bmc4l | "Give me an appropriate reply to the greeting 'Hi'. Be friendly and concise." | 1 | 0 | 2026-03-02T23:57:30 | UndecidedLee | false | null | 0 | o8bmc4l | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bmc4l/ | false | 1 |
t1_o8bmb7n | Yeah, not really looking for my own hardware to tell me what I can and can't do with it. | 1 | 0 | 2026-03-02T23:57:21 | sine120 | false | null | 0 | o8bmb7n | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8bmb7n/ | false | 1 |
t1_o8bm9ke | Idk what is so hard for the people complaining here. It's not hard to follow which model is which, because they all share the same position in all benchmarks. | 1 | 0 | 2026-03-02T23:57:06 | _VirtualCosmos_ | false | null | 0 | o8bm9ke | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bm9ke/ | false | 1 |
t1_o8bm9ag | Reasoning capability is dominated by active parameters and intermediate state, not world knowledge.
Therefore in the context of reasoning heavy benchmarks, the comparison between Qwen3.5-9B and Qwen3-30B-A3B is more like a 9B vs a 3B model, not 9B vs 30B. | 1 | 0 | 2026-03-02T23:57:03 | crantob | false | null | 0 | o8bm9ag | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bm9ag/ | false | 1 |
t1_o8bm56g | it is not a conversational model.
But you can disable thinking and set the temperature to 0.45, as said here: [https://www.reddit.com/r/LocalLLaMA/comments/1rirlau/comment/o88gs1r/](https://www.reddit.com/r/LocalLLaMA/comments/1rirlau/comment/o88gs1r/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) | 1 | 0 | 2026-03-02T23:56:24 | yay-iviss | false | null | 0 | o8bm56g | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bm56g/ | false | 1 |
t1_o8bm3p1 | Me too, maybe they've said why somewhere?
I've been testing 9b with thinking enabled and so far I haven't had any issues.
| 1 | 0 | 2026-03-02T23:56:10 | neil_555 | false | null | 0 | o8bm3p1 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8bm3p1/ | false | 1 |
t1_o8blskj | What would be your top 3 models for what you're describing? | 1 | 0 | 2026-03-02T23:54:22 | trusty20 | false | null | 0 | o8blskj | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8blskj/ | false | 1 |
t1_o8bloun | Exactly, for structured tasks that don’t need fast responses this is perfectly fine. Regarding the latency, just look in the table; it’s about 10-15x slower. I didn’t yet measure tokens/sec properly, but with small contexts, simple chats, and the websearch that I did measure, I got around 35 tok/sec. Now you have given me the idea to add tok/sec to the benchmark and test all the Qwen models that fit on my M3. I am really curious about the small ones that have now been released. | 1 | 0 | 2026-03-02T23:53:46 | Beautiful-Honeydew10 | false | null | 0 | o8bloun | false | /r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/o8bloun/ | false | 1 |
t1_o8blj11 | It seems to me, and it makes sense, that the small models think more to stabilize. They are trying to catch up to the bigger models, so they need more time to reach that quality. It's trained to think longer so it can be more coherent. That's what it seems to me. | 1 | 0 | 2026-03-02T23:52:49 | Lucis_unbra | false | null | 0 | o8blj11 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8blj11/ | false | 1 |
t1_o8ble0v | egg | 1 | 0 | 2026-03-02T23:52:01 | freehuntx | false | null | 0 | o8ble0v | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ble0v/ | false | 1 |
t1_o8bld9i | > GPT-5.3 Codex is untested because it is not yet available in the API | 1 | 0 | 2026-03-02T23:51:53 | mr_riptano | false | null | 0 | o8bld9i | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bld9i/ | false | 1 |
t1_o8bl9uk | This is consistent with an "AI assistant" persona which is the best you can hope for unless you put the name you want in the context | 1 | 0 | 2026-03-02T23:51:20 | phree_radical | false | null | 0 | o8bl9uk | false | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8bl9uk/ | false | 1 |
t1_o8bl5rm | 5.3 codex? | 1 | 0 | 2026-03-02T23:50:41 | itsjase | false | null | 0 | o8bl5rm | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8bl5rm/ | false | 1 |
t1_o8bl3qv | I found qwen3.5 27b to be the first model I'm comfortable with one-shotting minor features, with unit and integration tests, in a timely manner (under 15 minutes) | 1 | 0 | 2026-03-02T23:50:22 | super_pretzel | false | null | 0 | o8bl3qv | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8bl3qv/ | false | 1 |
t1_o8bl07n | same experience here with text to SQL :( | 1 | 0 | 2026-03-02T23:49:48 | mim722 | false | null | 0 | o8bl07n | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8bl07n/ | false | 1 |
t1_o8bkvld | The prompt was 'hi'. Two letters. How would you make that more explicit? | 1 | 0 | 2026-03-02T23:49:03 | CapitalShake3085 | false | null | 0 | o8bkvld | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bkvld/ | false | 1 |
t1_o8bkmyd | Humm understood.
I'm thinking about upgrading my setup, but I live in Brazil and the taxes on electronics have grown a lot, so I'm waiting until I visit Europe in September to buy a new GPU.
Thanks for the tip bro | 1 | 0 | 2026-03-02T23:47:40 | MarketingGui | false | null | 0 | o8bkmyd | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8bkmyd/ | false | 1 |
t1_o8bkkyw | [removed] | 1 | 0 | 2026-03-02T23:47:21 | [deleted] | true | null | 0 | o8bkkyw | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bkkyw/ | false | 1 |
t1_o8bkauc | Could you perhaps release the opposite kind of model for local users with what knowledge you have about safety? It's funny, but I really want a model that doesn't tell me what to think, or dictate morals. | 1 | 0 | 2026-03-02T23:45:42 | Anduin1357 | false | null | 0 | o8bkauc | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8bkauc/ | false | 1 |
t1_o8bkasi | my observations are: I don't have time for thinking, and I don't enjoy reading the thought process either; never have on any thinking model. I also never really felt the difference was worth the time spent, as it's pretty easy to just create a better prompt and get a better answer. That being said, some of the time the new 3.5 series seems to think in a stateful way, like prioritizations and such; this does seem to help and is usually short and sweet, but the chance of it going off on a thinking tangent means I still keep all of them with thinking off. | 1 | 0 | 2026-03-02T23:45:41 | Lesser-than | false | null | 0 | o8bkasi | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8bkasi/ | false | 1 |
t1_o8bk04j | It's very possible this is the better option for my coding domain.
But I won't be sure until I do my own finetune.
Thanks much for sharing your work. | 1 | 0 | 2026-03-02T23:43:57 | crantob | false | null | 0 | o8bk04j | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o8bk04j/ | false | 1 |
t1_o8bjwg0 | How do you define distillation? I always assumed parent/teacher training where the student was trained off the logits of the parent. It seems most just mean fine tuning off some output. | 1 | 0 | 2026-03-02T23:43:21 | silenceimpaired | false | null | 0 | o8bjwg0 | false | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8bjwg0/ | false | 1 |
t1_o8bjtjr | The hi-overthink happens even with the full model.
To cut the hi-overthink you need to be more explicit in the prompt, describing what you want. Otherwise, it will open "concurrent" hypotheses and collide them until one rises victorious. The problem with the hi-overthink is that there are a lot of possibilities when you give it fewer words.
Don't think of an AI model like a person. Think of it as a text calculator dealing with an equation with 27 billion variables, where only one variable is known. Then the user asks: what is the answer?
With reasoning disabled it will take the most probable answer.
Hi. | 1 | 0 | 2026-03-02T23:42:53 | Turbulent_Pin7635 | false | null | 0 | o8bjtjr | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bjtjr/ | false | 1 |
t1_o8bjs6e | NB2 on "Old Cartoons" mode | 1 | 0 | 2026-03-02T23:42:40 | vintage2019 | false | null | 0 | o8bjs6e | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bjs6e/ | false | 1 |
t1_o8bjrw5 | I don't remember, sorry. The most likely scenario is that I asked the model to build a config for me. I have found that putting unsupported parameters in there breaks OpenCode, but I haven't done the level of testing you did to find out whether those parameters are actually being effective. | 1 | 0 | 2026-03-02T23:42:37 | paulgear | false | null | 0 | o8bjrw5 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8bjrw5/ | false | 1 |
t1_o8bjlav | I would rather avoid the proxy, as it's just adding more complications to the stack. I was hopeful I could specify this directly somehow, but maybe there's nothing to do but add the proxy (llama-swap most likely) | 1 | 0 | 2026-03-02T23:41:32 | stoystore | false | null | 0 | o8bjlav | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bjlav/ | false | 1 |
t1_o8bjkf4 | Depends what you are looking for in your use case. I find conversations with finetunes from https://huggingface.co/SicariusSicariiStuff to work well conversationally, some are nsfw, some are just for creative writing. I use iq4_xs or iq3_s/iq3_xs for 12B parameter models and q4_k_m when possible. I use lm studio and normally set context to 12k-16k tokens and turn on flash attention and k/v cache quantization to q4_0 which can help fit 12B parameters with little difference to quality.
EsotericSage12b is another good model in this range. Very knowledgeable, and noticeably more uncensored than most first-party models.
I expect finetunes of qwen3.5-9b to be quite good, but it only released a few days ago and it will take a bit more time before they get tuned and quantized.
If you're looking for a good knowledgeable model that happens to be uncensored, I would wait for heretic uncensored qwen3.5-9b models. I find abliterated models to hurt overall quality slightly, heretic models are also uncensored, but undergo a different process to uncensor them.
https://huggingface.co/TheDrummer also has good uncensored finetunes that are good conversationally but most of them are large in size. This includes finetunes of Gemma2-9b but this has a max context window of 8K without turning on other settings. | 1 | 0 | 2026-03-02T23:41:24 | ddeerrtt5 | false | null | 0 | o8bjkf4 | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8bjkf4/ | false | 1 |
t1_o8bjf94 | There are things hard to capture in benchmarks -- a sense of creativity, a deep generality that lets the model extend to totally novel tasks, a real sense of the world with clear expectations for how things might play out -- and you don't get there by climbing any amount of benchmarks.
You get there with a *lot* of parameters, and a lot of data, and a lot of compute. But the benchmark results you'd get from that model can be had more cheaply by targeting the benchmarks directly -- yes, including for an entire modern eval suite checking 80+ task categories. | 1 | 0 | 2026-03-02T23:40:34 | -main | false | null | 0 | o8bjf94 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bjf94/ | false | 1 |
t1_o8bjc8g | Thank you so much! :) | 1 | 0 | 2026-03-02T23:40:04 | Lord_Curtis | false | null | 0 | o8bjc8g | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8bjc8g/ | false | 1 |
t1_o8bjau0 | Very very interesting! Have been seeing a bunch of stuff about Qwen3.5 too. Mind catching me up on the general timeline since then as well, or at least the other "good" models that existed/were used/are still in people's workflows before qwen3.5? Any length of explanation is appreciated! | 1 | 0 | 2026-03-02T23:39:50 | megacewl | false | null | 0 | o8bjau0 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8bjau0/ | false | 1 |
t1_o8bjav7 | not surprised at all for extraction and classification tasks. these are basically pattern matching problems where the model size matters way less than people think — a well-quantized 35B with good training data can absolutely match or beat a cloud model that’s also probably running some internal distillation anyway.
the real test would be multi-step reasoning or tool use where the gap between local and frontier models still shows up. but for the kind of structured tasks you tested, running local is basically free money at this point. no rate limits, no API costs, full control over the pipeline.
curious about latency though — what kind of tokens/sec are you getting on the M3 Max with the 35B? | 1 | 0 | 2026-03-02T23:39:50 | Exact_Guarantee4695 | false | null | 0 | o8bjav7 | false | /r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/o8bjav7/ | false | 1 |
t1_o8biq2r | Few good options for used 3090s:
**r/hardwareswap** — Best prices I've seen, plus you can verify seller history. Usually $650-800 depending on condition. Post a [W] (want to buy) and you'll get offers.
**eBay** — More selection but prices slightly higher ($750-900). Look for sellers with good feedback. Filter by "Buy It Now" and check completed listings to see what they're actually selling for.
**Facebook Marketplace** — Hit or miss but you can sometimes find local deals and test before buying. Great for Minnesota since you can avoid shipping.
**Tips:**
- FE (Founders Edition) cards run hotter but fit in more cases
- EVGA/ASUS cards are generally safer bets for used
- Ask if it was used for mining — not a dealbreaker but good to know
- Make sure they have the original box/receipt if possible (helps with RMA if needed)
- Budget ~$1400-1600 total for two decent condition cards
For local LLM work, 24GB VRAM each is the sweet spot. Two 3090s will handle most 70B models with quantization. Good luck! | 1 | 0 | 2026-03-02T23:36:25 | Effective_Growth_514 | false | null | 0 | o8biq2r | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8biq2r/ | false | 1 |
t1_o8bimw5 | That's a clip! It's my favorite Air Bud sequel... Hard to pick just one, tho... | 1 | 0 | 2026-03-02T23:35:54 | Due-Function-4877 | false | null | 0 | o8bimw5 | false | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8bimw5/ | false | 1 |
t1_o8bilq7 | Hey, I'm running an 8GB card as well. 9B should be no problem. You can try koboldcpp and switch on autofit for easy setup; download a GGUF file from unsloth on huggingface that fits in memory while leaving some room for context.
Then just start kobold and it's ready :) | 1 | 0 | 2026-03-02T23:35:42 | DaikonProfessional58 | false | null | 0 | o8bilq7 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bilq7/ | false | 1 |
t1_o8biljz | Better acting than 90% of Hollywood. :) | 1 | 0 | 2026-03-02T23:35:41 | KS-Wolf-1978 | false | null | 0 | o8biljz | false | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8biljz/ | false | 1 |
t1_o8bik2j | that one makes sense, kind of. Maybe for everyone who has a small camera on their robot, which is almost no one. | 1 | 0 | 2026-03-02T23:35:26 | Space__Whiskey | false | null | 0 | o8bik2j | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bik2j/ | false | 1 |
t1_o8bijxk | If you are just changing these params, then you can just change it at the request level. Or if easier, stick a proxy in the middle which presents 2 different models/endpoints. | 1 | 0 | 2026-03-02T23:35:25 | DeltaSqueezer | false | null | 0 | o8bijxk | false | /r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bijxk/ | false | 1 |
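The "change it at the request level" suggestion above can be sketched as follows: llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint accepts sampling parameters such as `temperature` and `top_p` per request, so one loaded model can serve two different "presets" without a proxy. The preset names and values here are illustrative assumptions, not anyone's recommended settings.

```python
# Sketch: per-request sampling presets for an OpenAI-compatible endpoint.
# (Preset values are made up for illustration.)
import json

PRESETS = {
    "thinking": {"temperature": 0.6, "top_p": 0.95},
    "instruct": {"temperature": 0.45, "top_p": 0.8},
}

def build_request(prompt, preset):
    """Merge a named sampling preset into a chat-completions payload."""
    body = {"messages": [{"role": "user", "content": prompt}]}
    body.update(PRESETS[preset])  # sampling params ride along per request
    return json.dumps(body)

# The same server URL is used for both; only the payload differs.
print(build_request("hi", "instruct"))
```

The client would POST this body to the server's `/v1/chat/completions` URL; no model reload or second endpoint is needed.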
t1_o8big0p | [removed] | 1 | 0 | 2026-03-02T23:34:46 | [deleted] | true | null | 0 | o8big0p | false | /r/LocalLLaMA/comments/1jhiail/uncensored_image_generator/o8big0p/ | false | 1 |
t1_o8bider | I mean, maybe? But we don't do GPU + CPU (given we have enough VRAM), that should be even easier than GPU + NPU | 1 | 0 | 2026-03-02T23:34:21 | uti24 | false | null | 0 | o8bider | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bider/ | false | 1 |
t1_o8bi8gv | Holy shit, StepFun is certified based AF. | 1 | 0 | 2026-03-02T23:33:34 | Kamal965 | false | null | 0 | o8bi8gv | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8bi8gv/ | false | 1 |
t1_o8bi5br | if your CPU doesn't hit 100% usage, that means the bottleneck is the memory, and changing CPU is not going to help.
My CPU has 6 cores and I barely hit 50% when I have the AI running on CPU only. I got like 25 tk/s with DDR4 2800 MHz on Qwen 3 30B-A3B 2507.
If I upgraded my CPU to one with 16 cores, I would get the same 25 tk/s, maybe one token more. The problem is still the RAM speed.
With my current 6-core CPU, if my RAM speed were 5600 MHz instead of 2800 MHz, I would get 50 tk/s... but that's DDR5 speed, which would require changing my CPU and buying overpriced DDR5 (thanks to sam altman and openai).
I would upgrade my CPU only if I got a quad-channel one (which doubles the bandwidth and the inference I can milk from every stick of RAM, compared to a dual-channel CPU), fast DDR5 (8000 MHz), and at least 128GB. Which at current prices is not possible, or healthy.
My recommendation would be to save the money for an opportunity. During summer there was a price crisis when nvidia reduced the price of its GPUs, and that caused a lot of stock to flood the market (hoarders that panicked when GPUs dropped in price). The side effect was that other GPUs also sank in price, like the MI50. I bought 3 at that moment, at 108 euros each. Now they are 3 times that price. That's the kind of deal you need to upgrade your setup for AI. | 1 | 0 | 2026-03-02T23:33:03 | brahh85 | false | null | 0 | o8bi5br | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o8bi5br/ | false | 1 |
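A back-of-envelope check of the bandwidth argument in the comment above. The rule of thumb (decode speed is roughly peak memory bandwidth divided by bytes read per token, where a MoE like Qwen3 30B-A3B only reads its ~3B active parameters) and the ~0.55 bytes/param figure for a Q4-class quant are my assumptions, not the commenter's numbers.

```python
# Rough sanity check: is 25 tk/s at DDR4-2800 consistent with being
# memory-bandwidth-bound? (Formula and 0.55 bytes/param are estimates.)
def dual_channel_bw_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Peak DRAM bandwidth in GB/s: transfer rate x channels x 8-byte bus."""
    return mt_per_s * channels * bus_bytes / 1000

def est_tok_per_s(bw_gbs, active_params_b, bytes_per_param=0.55):
    """Decode rate if every token must stream all active weights once."""
    return bw_gbs / (active_params_b * bytes_per_param)

for rate in (2800, 5600):
    bw = dual_channel_bw_gbs(rate)
    print(f"DDR @ {rate} MT/s: {bw:.1f} GB/s -> ~{est_tok_per_s(bw, 3):.0f} tok/s")
```

The estimate lands near the reported 25 tk/s for DDR4-2800 and doubles with the transfer rate, matching the comment's claim that RAM speed, not core count, sets the ceiling.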
t1_o8bi4j0 | Ironically, it's not actually using a full transformer architecture; 75% of the layers are using Gated DeltaNet linear attention. | 1 | 0 | 2026-03-02T23:32:55 | victory_and_death | false | null | 0 | o8bi4j0 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bi4j0/ | false | 1 |
t1_o8bi3uh | Could be useful for classification of some documents... | 1 | 0 | 2026-03-02T23:32:49 | RvierDotFr | false | null | 0 | o8bi3uh | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bi3uh/ | false | 1 |
t1_o8bi0gn | Hey, thanks, I'll try that. There are not many things it must do, it has to check if some other tools did the job right. It has to compare some results. | 1 | 0 | 2026-03-02T23:32:17 | Petroale | false | null | 0 | o8bi0gn | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bi0gn/ | false | 1 |
t1_o8bhv4s | Cool build aside of the nvidia gpu ;)
I have an amd gpu but cachy os works really well for me, so maybe check that out. | 1 | 0 | 2026-03-02T23:31:25 | TerminalNoop | false | null | 0 | o8bhv4s | false | /r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/o8bhv4s/ | false | 1 |
t1_o8bhmzv | [removed] | 1 | 0 | 2026-03-02T23:30:06 | [deleted] | true | null | 0 | o8bhmzv | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bhmzv/ | false | 1 |
t1_o8bhl54 | For sure something in your settings. I'm even q4 in kv cache, using lmstudio and it could find a single note in 72 others of my obsidian notes using obsidian cli. Pm? I can share my settings so far | 1 | 0 | 2026-03-02T23:29:47 | Suitable_Currency440 | false | null | 0 | o8bhl54 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bhl54/ | false | 1 |
t1_o8bhh2m | That kind of abandon is what led 4o to cause AI psychosis 😂😂.
Maybe we don't want that | 1 | 0 | 2026-03-02T23:29:08 | National_Meeting_749 | false | null | 0 | o8bhh2m | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bhh2m/ | false | 1 |
t1_o8bh66u | WOW I actually forgot to mention..... I cap at 260W............ | 1 | 0 | 2026-03-02T23:27:26 | JohnTheNerd3 | false | null | 0 | o8bh66u | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8bh66u/ | false | 1 |
t1_o8bh0tv | this model looks like it's a little too small for a macbook air m4 24gb of ram, right? but the 27 and 30B versions seem too heavy | 1 | 0 | 2026-03-02T23:26:36 | murkomarko | false | null | 0 | o8bh0tv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bh0tv/ | false | 1 |
t1_o8bh0dx | Oh, wow. Them releasing most of their pipeline is huge for OSS. Bravo StepFun team! | 1 | 0 | 2026-03-02T23:26:32 | oxygen_addiction | false | null | 0 | o8bh0dx | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8bh0dx/ | false | 1 |
t1_o8bgz6h | Counterintuitively it feels like you could push the 2b to a higher quant and end up going faster, because the full model would have to correct it less often. | 1 | 0 | 2026-03-02T23:26:20 | QuestionMarker | false | null | 0 | o8bgz6h | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bgz6h/ | false | 1 |
t1_o8bgtsf | > at the time
It's been like two months lol
But yeah the last few Gemini Flash revisions have been quite good. | 1 | 0 | 2026-03-02T23:25:31 | the__storm | false | null | 0 | o8bgtsf | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bgtsf/ | false | 1 |
t1_o8bgl9j | I think you got the colors mixed up (understandably) - the 9B is almost as good as the 35B-A3B, not the 122. | 1 | 0 | 2026-03-02T23:24:11 | the__storm | false | null | 0 | o8bgl9j | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bgl9j/ | false | 1 |
t1_o8bgjdb | Tried it on a Samsung M52 and it is pretty fast, 11 tokens per second with Q8, but too dumb imo. Tried talking in my native language to see if it would understand me but it seems like English is its primary language. Don't know any use I have for it. Downloading qwen3.5-4B right now, it was pretty good on PC, will see if it is on phone too. | 1 | 0 | 2026-03-02T23:23:53 | camekans | false | null | 0 | o8bgjdb | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8bgjdb/ | false | 1 |
t1_o8bggls | yeah, i was gonna say, that's extremely impressive for a 9b model, it looks like it is super usable for a lot of actual use cases and doing real work.
Especially for agentic stuff, maybe not hard coding, but as an assistant it looks like it could be very useful | 1 | 0 | 2026-03-02T23:23:27 | Far-Low-4705 | false | null | 0 | o8bggls | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bggls/ | false | 1 |
t1_o8bgfcr | mass surveillance => smaller, dumber model can do it
fully autonomous killbots => using (agentic/reasoning) LLMs? | 1 | 0 | 2026-03-02T23:23:15 | Negative-Web8619 | false | null | 0 | o8bgfcr | false | /r/LocalLLaMA/comments/1riuywe/genuinely_fascinating_but_also_kind_of_terrifying/o8bgfcr/ | false | 1 |
t1_o8bge6i | Why don’t you rent a couple gpus on a cloud service before you splash the cash, pay by the hour. There are lots of posts with recommendations more broadly on Reddit. Get Claude to find them :-) | 1 | 0 | 2026-03-02T23:23:05 | johnerp | false | null | 0 | o8bge6i | false | /r/LocalLLaMA/comments/1rj54kw/local_llm/o8bge6i/ | false | 1 |
t1_o8bgaqa | Fine tune some small models to complete specific tasks? I believe in low footprint models and did exactly that. | 1 | 0 | 2026-03-02T23:22:32 | ciarandeceol1 | false | null | 0 | o8bgaqa | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8bgaqa/ | false | 1 |
t1_o8bg2yc | Well, you could set up a local llm that helps you do all that. It's not hard to set up something that has websearch capabilities.
If you're new to this space, I'd recommend you get started with Ollama ([https://docs.ollama.com/quickstart](https://docs.ollama.com/quickstart)). You can even start some agents directly from it ([https://docs.ollama.com/integrations](https://docs.ollama.com/integrations)), or you can go with Open WebUI ([https://docs.openwebui.com/getting-started/quick-start](https://docs.openwebui.com/getting-started/quick-start)).
Then on Open WebUI you can add something like SearXNG. This page has some info on it [https://docs.openwebui.com/troubleshooting/web-search](https://docs.openwebui.com/troubleshooting/web-search)
And welcome!! | 1 | 0 | 2026-03-02T23:21:20 | Di_Vante | false | null | 0 | o8bg2yc | false | /r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/o8bg2yc/ | false | 1 |
t1_o8bfyi1 | what are you trying to do with it that you're getting blocked on? It's super new, I imagine there will be uncensors and stuff but I'm new to the scene so no clue how long that takes or if some models make it more difficult | 1 | 0 | 2026-03-02T23:20:39 | Jordanthecomeback | false | null | 0 | o8bfyi1 | false | /r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/o8bfyi1/ | false | 1 |
t1_o8bfxrl | right? i'll try to replicate tonight if i get a chance. things don't go faster when you give them more work to do… | 1 | 0 | 2026-03-02T23:20:32 | HopePupal | false | null | 0 | o8bfxrl | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bfxrl/ | false | 1 |
t1_o8bfuyh | Qwen3.5-27B **IQ3** vs Qwen-3.5 35B-A3M **Q4\_K\_M** | 1 | 0 | 2026-03-02T23:20:05 | Negative-Web8619 | false | null | 0 | o8bfuyh | false | /r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/o8bfuyh/ | false | 1 |
t1_o8bfr8s | you can fit 35B in the pi? | 1 | 0 | 2026-03-02T23:19:31 | nunodonato | false | null | 0 | o8bfr8s | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8bfr8s/ | false | 1 |
t1_o8bfph8 | I agree AI is bullshit | 1 | 0 | 2026-03-02T23:19:15 | BornRoom257 | false | null | 0 | o8bfph8 | false | /r/LocalLLaMA/comments/1nu6kjc/hot_take_all_coding_tools_are_bullsht/o8bfph8/ | false | 1 |
t1_o8bfmot | My arguments in favor of local inference & training boil down to privacy, portability, reproducibility, customization, censorship/behavior control, freedom, availability, latency, fixed costs, and personal growth from learning this stuff top to bottom in a home lab and constant tinkering that costs nothing but my time, a few kilowatts of power, and maybe wear & tear on my hardware.
My counter-argument is that I derestricted gpt-oss-20b using Heretic in an hour, for $8, on a RunPod H100 cluster while it took 11 hours on my Mac.
If you value your time, just rent appropriately-sized gear 😊 | 1 | 0 | 2026-03-02T23:18:49 | txgsync | false | null | 0 | o8bfmot | false | /r/LocalLLaMA/comments/1p2lqi7/are_any_of_the_m_series_mac_macbooks_and_mac/o8bfmot/ | false | 1 |
t1_o8bf14g | There is no Qwen3.5 30B.
There’s a 27B and a 35B A3B, which one are you talking about? | 1 | 0 | 2026-03-02T23:15:30 | __JockY__ | false | null | 0 | o8bf14g | false | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8bf14g/ | false | 1 |
t1_o8bezbx | do you think there's much of a difference between running mlx vs gguf on apple silicon m4 and beyond? | 1 | 0 | 2026-03-02T23:15:13 | murkomarko | false | null | 0 | o8bezbx | false | /r/LocalLLaMA/comments/1ksw070/genuine_question_why_are_the_unsloth_ggufs_more/o8bezbx/ | false | 1 |
t1_o8bevcf | Are the Qwen3.5 4B benchmark results achieved with reasoning enabled? I'm comparing it against Qwen3 4B 2507 Instruct and it actually seems less capable when reasoning is disabled (with it enabled, it becomes too slow) — curious if reasoning mode makes a significant difference. | 1 | 0 | 2026-03-02T23:14:37 | CapitalShake3085 | false | null | 0 | o8bevcf | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8bevcf/ | false | 1 |
t1_o8beuxp | Mlx4121a ACAT, 25gb DAC between all the nodes | 1 | 0 | 2026-03-02T23:14:33 | braydon125 | false | null | 0 | o8beuxp | false | /r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8beuxp/ | false | 1 |
t1_o8belck | You might be able to get it working, but you would probably need to break down the tasks first. You could try using the free versions (if you don't have paid ones) of Claude/ChatGPT/Gemini for that, and then feed qwen task by task | 1 | 0 | 2026-03-02T23:13:05 | Di_Vante | false | null | 0 | o8belck | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8belck/ | false | 1 |
t1_o8be8dx | Ohhh yesss, funny that you mention Qwen, their newest 3.5 models came out today. Even a 4b and a 9b. I heard they're supposed to be damn good!!!! 😵💫💜🤭 | 1 | 0 | 2026-03-02T23:11:05 | DertekAn | false | null | 0 | o8be8dx | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o8be8dx/ | false | 1 |