name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8a7niw | I think you can get there, but it's not easy. I spent months developing methods and tools to help solve these problems for myself. They work very well, but again .. it took months of work to get there.
At some point it will become more natural, but we're not quite there yet. | 1 | 0 | 2026-03-02T19:39:57 | Total-Context64 | false | null | 0 | o8a7niw | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a7niw/ | false | 1 |
t1_o8a7ksf | We’ve started deploying this already and have been very impressed. Big step up from qwen3 | 1 | 0 | 2026-03-02T19:39:35 | Glum-Traffic-7203 | false | null | 0 | o8a7ksf | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8a7ksf/ | false | 1 |
t1_o8a7ki5 | K2 had two versions, the original one and 0905. I think 0905 still preserved style quite well while also giving boost to an intelligence. It still great non-thinking model (I use IQ4 quants).
Later, K2 Thinking was clearly specialized more in coding than creative writing. K2.5 pushes things further, both in coding and agentic capabilities, as well as vision... but writing style took a hit unfortunately. It is possible they did not nerf it on purpose, just the side effect of how it was trained and what was prioritized.
If you go with the distilling, I suggest releasing at least some intermediate checkpoints as you go. This could help you to get feedback early if you are moving in the right direction, by allowing others compare your small model against full model on creative writing prompts that you did not think of, which may help you to get independent confirmation if your distill model actually manages to generalize the style. | 1 | 0 | 2026-03-02T19:39:33 | Lissanro | false | null | 0 | o8a7ki5 | false | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8a7ki5/ | false | 1 |
t1_o8a7b2r | Qwen3-next-coder instead of qwen3-next-80b-A3B-thinking. | 1 | 0 | 2026-03-02T19:38:16 | stankmut | false | null | 0 | o8a7b2r | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a7b2r/ | false | 1 |
t1_o8a78vm | I'll take the 5090 if you don't use it | 1 | 0 | 2026-03-02T19:37:59 | Negative-Web8619 | false | null | 0 | o8a78vm | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a78vm/ | false | 1 |
t1_o8a76nj | I made an IRC bot yesterday with Nemotron. Told it it pretend it's a 90s IRC haxxor script kiddie. Then I asked it if it "hacked the system" - told me it can't answer that question. So there's that. | 1 | 0 | 2026-03-02T19:37:41 | Dry_Yam_4597 | false | null | 0 | o8a76nj | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8a76nj/ | false | 1 |
t1_o8a722c | I have download the Qwen_Qwen3.5-27B-Q6_K_L.gguf from Bartowski and I cannot use a draft whatever I try. I tried 4B, 2B, manual put the models in the same folder but at the end draft doesn't work at all. | 1 | 0 | 2026-03-02T19:37:04 | Jasmin_Black | false | null | 0 | o8a722c | false | /r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8a722c/ | false | 1 |
t1_o8a71jk | I find 90% accurate tagging to sometimes be better than what I get out of my team lol | 1 | 0 | 2026-03-02T19:37:00 | Chris266 | false | null | 0 | o8a71jk | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a71jk/ | false | 1 |
t1_o8a70k3 | any recommmendations for llama-server setup? | 1 | 0 | 2026-03-02T19:36:52 | rm-rf-rm | false | null | 0 | o8a70k3 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8a70k3/ | false | 1 |
t1_o8a6zai | i would suggest base unsloth docs to get started with | 1 | 0 | 2026-03-02T19:36:42 | EmbarrassedAsk2887 | false | null | 0 | o8a6zai | false | /r/LocalLLaMA/comments/1rj11vb/parameter_configuration_for_knowledge_distill_to/o8a6zai/ | false | 1 |
t1_o8a6v4m | Could you Explain a Little More in depth about the Prefil | 1 | 0 | 2026-03-02T19:36:08 | Weak_Ad9730 | false | null | 0 | o8a6v4m | false | /r/LocalLLaMA/comments/1pjcwhk/nsfw_uncensored_image_to_descriptions_caption/o8a6v4m/ | false | 1 |
t1_o8a6rv7 | I use llama.cpp with raycast (slightly more work in setting up the providers.yml, but worth it to be out of ollama's enshittification) | 1 | 0 | 2026-03-02T19:35:41 | rm-rf-rm | false | null | 0 | o8a6rv7 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8a6rv7/ | false | 1 |
t1_o8a6p7c | There is a Qwen3 30B and Qwen3.5 35B. U talking about dense 27B model that close to 30B? | 1 | 0 | 2026-03-02T19:35:19 | powerade-trader | false | null | 0 | o8a6p7c | false | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8a6p7c/ | false | 1 |
t1_o8a6nzv | agreed on context windows. instructions, memory and guardrails get you pretty far for the obvious stuff. but i think there's a layer of judgment that's hard to capture as static rules. agents built on stock LLMs don't understand you, you end up constantly updating the system prompt through trial and error. there should be a more natural way to train an agent on how you think. | 1 | 0 | 2026-03-02T19:35:09 | Illustrious-Bet6287 | false | null | 0 | o8a6nzv | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a6nzv/ | false | 1 |
t1_o8a6nec | I've spotted a mistake in the last comment: total amount of memory must be MORE than the file size of the model.
so for 16 GB RAM and 6 GB VRAM your maximum is <14B models in 8 bit quant or <28B models in 4 bit quant, if you want models to run fast then they should be MoE type, <5B active parameters in 8bit quant or <10B active parameters in 4 bit quant. | 1 | 0 | 2026-03-02T19:35:04 | MelodicRecognition7 | false | null | 0 | o8a6nec | false | /r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a6nec/ | false | 1 |
t1_o8a6jpg | I wouldn’t consider 24 GB of VRAM a potato | 1 | 0 | 2026-03-02T19:34:34 | TurnUpThe4D3D3D3 | false | null | 0 | o8a6jpg | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a6jpg/ | false | 1 |
t1_o8a6jgx | disagree. thinking affect the quality alot for 27B and 35B. in my recent tests I tried translation of poet and some complex texts. thinking was dramatically increasing quality of output in the targeted language.
my tests applied both on unsloth Q4 and Bartowski Q4 and 2 more different quantizations. all shows exactly same behavior
I found that 35B moe with thinking is the best balance of speed and quality and much better than 27B with no thinking | 1 | 0 | 2026-03-02T19:34:32 | Familiar_Injury_4177 | false | null | 0 | o8a6jgx | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a6jgx/ | false | 1 |
t1_o8a6g7w | I would say it’s the 27b model not the 9b model which is competing with the 122b | 1 | 0 | 2026-03-02T19:34:05 | Mysterious-Panic-325 | false | null | 0 | o8a6g7w | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a6g7w/ | false | 1 |
t1_o8a6dd6 | That's exactly what I had in mind lol
Well, the average nerdy guy like us :) | 1 | 0 | 2026-03-02T19:33:41 | Much-Researcher6135 | false | null | 0 | o8a6dd6 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a6dd6/ | false | 1 |
t1_o8a66ya | As I've already said, for this case there are specialized finetunes. "Heretic" and other types of abliterated models won't deliver equal quality due to lack of specific finetuning. | 1 | 0 | 2026-03-02T19:32:47 | No-Refrigerator-1672 | false | null | 0 | o8a66ya | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8a66ya/ | false | 1 |
t1_o8a65cr | How did they manage to pack that much intelligence into 9B and 4B? Amazing work by the Qwen team! | 1 | 0 | 2026-03-02T19:32:34 | TurnUpThe4D3D3D3 | false | null | 0 | o8a65cr | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a65cr/ | false | 1 |
t1_o8a63m6 | Can't be bothered. The whole point of ollama is thst it just works... Other models work fine. | 1 | 0 | 2026-03-02T19:32:20 | Noiselexer | false | null | 0 | o8a63m6 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a63m6/ | false | 1 |
t1_o8a5t2c | I did not get the chance to try this one yet.
The issue is not related to running the 9B model, the issue is that the model does not perform well with cline when it comes to navigating the project.
| 1 | 0 | 2026-03-02T19:30:53 | FearMyFear | false | null | 0 | o8a5t2c | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a5t2c/ | false | 1 |
t1_o8a5sz4 | Colorblind people
https://preview.redd.it/z373cchloomg1.jpeg?width=1125&format=pjpg&auto=webp&s=f3e9ba5c29e21e5276ca03e112604718034abd57 | 1 | 0 | 2026-03-02T19:30:52 | TurnUpThe4D3D3D3 | false | null | 0 | o8a5sz4 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a5sz4/ | false | 1 |
t1_o8a5sth | I’d say it’s not possible at all if you want to generate code that actually works. | 1 | 0 | 2026-03-02T19:30:51 | Shoddy_Bed3240 | false | null | 0 | o8a5sth | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a5sth/ | false | 1 |
t1_o8a5si5 | I worked out how to enable thinking, just put the line below as the first thing in the Jinja template ...
{%- set enable\_thinking = true %} | 1 | 0 | 2026-03-02T19:30:48 | neil_555 | false | null | 0 | o8a5si5 | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8a5si5/ | false | 1 |
t1_o8a5sfa | How do you turn thinking off in LM Studio? I am using the
--chat-template-kwargs "{\"enable_thinking\": $THINKING}"
flag in Llama.cpp to control it with unsloth's quants | 1 | 0 | 2026-03-02T19:30:47 | sine120 | false | null | 0 | o8a5sfa | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a5sfa/ | false | 1 |
t1_o8a5s27 | For speculative decoding to work properly in llama.cpp, you need: 1) The draft model must be much smaller than the target model (0.8B is good for 27B), 2) Make sure both models are in the same quantization format family, 3) Use the -ngld parameter to set number of draft tokens (try -ngld 5 instead of 99), 4) The draft model needs to be loaded with -md flag pointing to the draft GGUF file. Also, LMStudio has known issues with spec-decode - sometimes the model doesn't show up in the dropdown even when correctly downloaded. Try using llama.cpp directly with the CLI instead of LMStudio for spec-decode. | 1 | 0 | 2026-03-02T19:30:44 | Pure-Fruit2654 | false | null | 0 | o8a5s27 | false | /r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8a5s27/ | false | 1 |
t1_o8a5p8l | I have a gaming laptop with 8gb rtx2070 and 65gb ram running nobara linux (redhat). I've been qwen3 35b a3 q4 and it runs at a 'usable' speed. | 1 | 0 | 2026-03-02T19:30:22 | je11eebean | false | null | 0 | o8a5p8l | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a5p8l/ | false | 1 |
t1_o8a5g68 | Because in this game we know they're all benchmaxxed, it's just one of them is clearly better benchmaxxed than the other.
That said, in my experience so far, Qwen3.5-9B does punch above its weight. | 1 | 0 | 2026-03-02T19:29:08 | Creepy-Bell-4527 | false | null | 0 | o8a5g68 | false | /r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8a5g68/ | false | 1 |
t1_o8a5cv4 | I'll be testing the 9B tonight, but the 27B has been very impressive. I just wish I could fit more context. | 1 | 0 | 2026-03-02T19:28:42 | sine120 | false | null | 0 | o8a5cv4 | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8a5cv4/ | false | 1 |
t1_o8a59gk | [https://www.reddit.com/r/LocalLLaMA/comments/1rit2fy/comment/o89lsu1/](https://www.reddit.com/r/LocalLLaMA/comments/1rit2fy/comment/o89lsu1/)
| 1 | 0 | 2026-03-02T19:28:15 | Negative-Web8619 | false | null | 0 | o8a59gk | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a59gk/ | false | 1 |
t1_o8a56vg | Very well, so far. Speed on a 9070 XT is 30/100 for the dense and MoE, and it's very coherent. My basic coding tasks were no issue. I wish I could fit the Q4_K_XL quants but for the speeds, the IQ3 is very usable. | 1 | 0 | 2026-03-02T19:27:53 | sine120 | false | null | 0 | o8a56vg | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8a56vg/ | false | 1 |
t1_o8a540b | yeah prompt optimization helps for the obvious stuff. but the nuances you're talking about, that's exactly the problem. I can't fully rely on it and trust it with static prompts. | 1 | 0 | 2026-03-02T19:27:30 | Illustrious-Bet6287 | false | null | 0 | o8a540b | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a540b/ | false | 1 |
t1_o8a5196 | [removed] | 1 | 0 | 2026-03-02T19:27:08 | [deleted] | true | null | 0 | o8a5196 | false | /r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/o8a5196/ | false | 1 |
t1_o8a50yq | What your preferred use case? Do you chat with or use it via Coding agent? | 1 | 0 | 2026-03-02T19:27:05 | Medium-Technology-79 | false | null | 0 | o8a50yq | false | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8a50yq/ | false | 1 |
t1_o8a4y2e | There is the easy way and the hard way. The easy way is to just download the models from the LM Studio model manager. Download the models with the 3 icons for Vision, Tools call, and Reasoning. You need the reasoning icon for the reasoning tag to work.
The hard way is to create a new folder at .cache\\lm-studio\\hub\\models\\qwen, and include 2 important files: manifest.json and model.yaml. | 1 | 0 | 2026-03-02T19:26:42 | Iory1998 | false | null | 0 | o8a4y2e | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a4y2e/ | false | 1 |
t1_o8a4xul | bro | 1 | 0 | 2026-03-02T19:26:40 | EmbarrassedAsk2887 | false | null | 0 | o8a4xul | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o8a4xul/ | false | 1 |
t1_o8a4vc1 | Even the 9B has issues of thinking loops. | 1 | 0 | 2026-03-02T19:26:20 | Iory1998 | false | null | 0 | o8a4vc1 | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a4vc1/ | false | 1 |
t1_o8a4ua6 | Does llamacpp support that MTP setting that VLLMs has because supposedly these Qwen models have the drafting built in? Although I have to say that it only helps if running in a tensor parallel mode, at least from my testing on VLLM. | 1 | 0 | 2026-03-02T19:26:11 | Ok-Ad-8976 | false | null | 0 | o8a4ua6 | false | /r/LocalLLaMA/comments/1rj1e35/llamacpplmstudio_draft_model_settings_for_qwen35/o8a4ua6/ | false | 1 |
t1_o8a4sga | This model is truly amazing! It's blowing me away with how well it performs for local deployment. Highly recommend giving it a try! | 1 | 0 | 2026-03-02T19:25:57 | Marco_Ferreira43516 | false | null | 0 | o8a4sga | false | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8a4sga/ | false | 1 |
t1_o8a4s84 | Idk, just ollama... | 1 | 0 | 2026-03-02T19:25:55 | Noiselexer | false | null | 0 | o8a4s84 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a4s84/ | false | 1 |
t1_o8a4s09 | It definitely thinks a lot. | 1 | 0 | 2026-03-02T19:25:53 | Iory1998 | false | null | 0 | o8a4s09 | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a4s09/ | false | 1 |
t1_o8a4qws | Or even easier, llama.cpp in router mode. | 1 | 0 | 2026-03-02T19:25:44 | DD3Boh | false | null | 0 | o8a4qws | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8a4qws/ | false | 1 |
t1_o8a4otu | The same prompt in qwen3.5-35b-a3b and qwen3.5-9b with an off-the-radar test, which always works with all the models I test, and \`qwen3.5-9b\` generated much better code than qwen3.5-35b-a3b. Basically, the prompt is to create an app in TypeScript.
Only one test so far, but it looks very promising for its size. | 1 | 0 | 2026-03-02T19:25:28 | JumpyAbies | false | null | 0 | o8a4otu | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8a4otu/ | false | 1 |
t1_o8a4m6w | Thanks a bunch!! So - what would you be most interested in, regarding "what it does"?
The reason I am asking is because the data for the distill is basically "give K2 a bunch of prompts, get its answers, train on the pairs". I have a basic set, kinda balanced and also system-prompted for "brief and punchy" - but I need to expand that and I need to uinderstand in what direction to expand. | 1 | 0 | 2026-03-02T19:25:06 | ramendik | false | null | 0 | o8a4m6w | false | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8a4m6w/ | false | 1 |
t1_o8a4lpp | Quantized Qwen3.5 9B would be a good starting point and keep plenty of VRAM available for a decent size context window (something like [this](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF?show_file_info=Qwen3.5-9B-IQ4_XS.gguf))
[Qwen3.5 35B A3B](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF?show_file_info=Qwen3.5-35B-A3B-UD-Q4_K_M.gguf) would be another great choice, but can be trickier to set up. It's a different architecture (MoE) and larger, so it will use all your VRAM and spill over into RAM/CPU. Dense (non-MoE) models get incredibly slow when you do this, but MoE models manage this much better.
I would avoid the new Qwen 27B with that amount of VRAM given the alternatives. (You're probably looking at 2-5 tokens per second with 27B vs 40+ with the 9B or 35B) | 1 | 0 | 2026-03-02T19:25:02 | 1842 | false | null | 0 | o8a4lpp | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a4lpp/ | false | 1 |
t1_o8a4kr5 | Vision encoder is always the WebGPU bottleneck — try q4 GGUF via llama.cpp WASM instead; better throughput, same browser, no VRAM thrashing. | 1 | 0 | 2026-03-02T19:24:54 | tom_mathews | false | null | 0 | o8a4kr5 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8a4kr5/ | false | 1 |
t1_o8a4gdp | How come the 27B model is so good?? | 1 | 0 | 2026-03-02T19:24:19 | QileHQ | false | null | 0 | o8a4gdp | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a4gdp/ | false | 1 |
t1_o8a4gao | ah yes, beings from beneath the ocean. | 1 | 0 | 2026-03-02T19:24:18 | Asleep-Ingenuity-481 | false | null | 0 | o8a4gao | false | /r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/o8a4gao/ | false | 1 |
t1_o8a4gcw | recommended settings...? | 1 | 0 | 2026-03-02T19:24:18 | Negative-Web8619 | false | null | 0 | o8a4gcw | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a4gcw/ | false | 1 |
t1_o8a4eli | Ran into this exact failure mode in prod — system prompt with explicit date injection fixed it, but the model still hallucinated changelogs from "last month." | 1 | 0 | 2026-03-02T19:24:04 | tom_mathews | false | null | 0 | o8a4eli | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o8a4eli/ | false | 1 |
t1_o8a4be2 | I got my opencode into loop in agent mode. Kind of qwen and OC sending input to each other indefinitrly. Can opencode be targeted to modify one file only, without pulling anything else to context? Just by natural text instruction? | 1 | 0 | 2026-03-02T19:23:38 | Ambitious-Sense-7773 | false | null | 0 | o8a4be2 | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o8a4be2/ | false | 1 |
t1_o8a4ahy | 8GB VRAM won't fit Q8 9B — that's ~9.5GB ngl. Drop to Q4_K_M (~5.5GB) or wait for your new rig iirc. | 1 | 0 | 2026-03-02T19:23:31 | tom_mathews | false | null | 0 | o8a4ahy | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a4ahy/ | false | 1 |
t1_o8a45xt | they literally used -- instead of Halbgeviertstrich | 1 | 0 | 2026-03-02T19:22:54 | Negative-Web8619 | false | null | 0 | o8a45xt | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a45xt/ | false | 1 |
t1_o8a43fb | U r goated bro 🙏 | 1 | 0 | 2026-03-02T19:22:34 | random_boy8654 | false | null | 0 | o8a43fb | false | /r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a43fb/ | false | 1 |
t1_o8a41d7 | I tried making SpongeBob in HTML with the 9b model VS Opus 4.6, same simple prompts
https://preview.redd.it/f64egjm0nomg1.jpeg?width=1747&format=pjpg&auto=webp&s=d6cc51a2927f2bb1b3975896ff5eeb7489e28045
The results are interesting but I think it has a lot of potential. | 1 | 0 | 2026-03-02T19:22:17 | camracks | false | null | 0 | o8a41d7 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a41d7/ | false | 1 |
t1_o8a4099 | 35b fits into your RAM, it's more about how fast it should be / how hard the tasks are | 1 | 0 | 2026-03-02T19:22:07 | Negative-Web8619 | false | null | 0 | o8a4099 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a4099/ | false | 1 |
t1_o8a3y2i | "Technically correct but contextually wrong" is way better than how I described it. Yes, the retrieval-first approach makes sense. Your concern about it being more of an organizational problem is valid, but the upsides on a personal level are too big to ignore. | 1 | 0 | 2026-03-02T19:21:50 | Illustrious-Bet6287 | false | null | 0 | o8a3y2i | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a3y2i/ | false | 1 |
t1_o8a3per | Oh: On the knowledge and info front, I know there's ton of info out there. The point is you and I can distill it. But over half of the country cannot. That's a big red dildo of a flag if I've ever seen one...
Hence the attention span comment and the stats on literacy. The problem is that the information scape is fully corrupted and almost entirely noise where very few people have the context and necessary breadth of knowledge to distill the signals.
That is something that gets engineered over decades. | 1 | 0 | 2026-03-02T19:20:39 | brownman19 | false | null | 0 | o8a3per | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8a3per/ | false | 1 |
t1_o8a3pbn | I would be really curious to see how 27B Q3 compares to 9B Q8 | 1 | 0 | 2026-03-02T19:20:39 | TurnUpThe4D3D3D3 | false | null | 0 | o8a3pbn | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8a3pbn/ | false | 1 |
t1_o8a3nmh | yeah, that judgment gap is a big deal. I have seen AI miss context in crucial situations. it is like when you are debugging itsthe nuances that count. have you tried optimizing prompts to guide their outputs better? just a thought | 1 | 0 | 2026-03-02T19:20:25 | smwaqas89 | false | null | 0 | o8a3nmh | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a3nmh/ | false | 1 |
t1_o8a3k0i | Is it just me or this 4B is a lot slower than Qwen 3 2507 4B? | 1 | 0 | 2026-03-02T19:19:56 | SlaveZelda | false | null | 0 | o8a3k0i | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8a3k0i/ | false | 1 |
t1_o8a3jwt | Nice article!
Here's a video that can really help people understand how Temperature, Top P and Top K works using a simple example and visuals.
The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w) | 1 | 0 | 2026-03-02T19:19:55 | Significant-Pitch-22 | false | null | 0 | o8a3jwt | false | /r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/o8a3jwt/ | false | 1 |
t1_o8a3ikg | Check the usual suspects soon | 1 | 0 | 2026-03-02T19:19:46 | Icy-Degree6161 | false | null | 0 | o8a3ikg | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o8a3ikg/ | false | 1 |
t1_o8a3459 | Lol wdym | 1 | 0 | 2026-03-02T19:17:48 | utsavsarkar | false | null | 0 | o8a3459 | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o8a3459/ | false | 1 |
t1_o8a347a | Here's a video that can really help you understand how Temperature, Top P and Top K works using a simple example and visuals.
The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same. Once you understand the concepts, you can properly select values for your specific usecase: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w) | 1 | 0 | 2026-03-02T19:17:48 | Significant-Pitch-22 | false | null | 0 | o8a347a | false | /r/LocalLLaMA/comments/1m7k50u/recommended_settings_temperature_topk_topp_minp/o8a347a/ | false | 1 |
t1_o8a3203 | I think about it this way, If the closed models train on 1T parameters (just to make the math easier) this is 0.90% as much training. What percent of that was coding? I haven't seen these to be great with coding unless someone trains it on coding after it comes out. They are great for sum stuff and you may get by with some basic coding but... | 1 | 0 | 2026-03-02T19:17:30 | Psychological_Ad8426 | false | null | 0 | o8a3203 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a3203/ | false | 1 |
t1_o8a2x8a | yes
idk, I wonder that also about qwen-coder-next | 1 | 0 | 2026-03-02T19:16:51 | Negative-Web8619 | false | null | 0 | o8a2x8a | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a2x8a/ | false | 1 |
t1_o8a2thl | One request: Compare Qwen3-instruct-4B-2507 agains Qwen3.5-4B with **thinking disabled**. If not we're not sure if we're comparing the equivalent thing. | 1 | 0 | 2026-03-02T19:16:19 | cibernox | false | null | 0 | o8a2thl | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a2thl/ | false | 1 |
t1_o8a2qvm | try using the bodega-raptor-90M, load it rn and try it yourself james. istg. | 1 | 0 | 2026-03-02T19:15:59 | EmbarrassedAsk2887 | false | null | 0 | o8a2qvm | false | /r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8a2qvm/ | false | 1 |
t1_o8a2jrz | I have a 5080 and I ran the 35B:
docker run --gpus all -p 8080:8080 -v /path/to/Models:/models ghcr.io/ggml-org/llama.cpp:server-cuda -m /models/Qwen3.5-35B-A3B-MXFP4_MOE --port 8080 --host 0.0.0.0 | 1 | 0 | 2026-03-02T19:15:01 | iamapizza | false | null | 0 | o8a2jrz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a2jrz/ | false | 1 |
t1_o8a2iir | 27B definitely needs thinking on to manage long context retrieval. With NoLiMa at 32k it drops from about 85% to 30% (from memory, I've posted the exact figures recently) | 1 | 0 | 2026-03-02T19:14:51 | thigger | false | null | 0 | o8a2iir | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a2iir/ | false | 1 |
t1_o8a2fcd | Computer vision. Like, you could identify objects in a small camera image (think robotics, roomba, pet feeder) | 1 | 0 | 2026-03-02T19:14:26 | 1731799517 | false | null | 0 | o8a2fcd | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a2fcd/ | false | 1 |
t1_o8a2ern | Here's a video that can really help you understand how Temperature, Top P and Top K works using a simple example and visuals.
The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w) | 1 | 0 | 2026-03-02T19:14:21 | Significant-Pitch-22 | false | null | 0 | o8a2ern | false | /r/LocalLLaMA/comments/157djvv/confused_about_temperature_top_k_top_p_repetition/o8a2ern/ | false | 1 |
t1_o8a2dxp | Q3 seems precarious. I would be worried about inaccuracies at that quant. How’s it working for you? | 1 | 0 | 2026-03-02T19:14:14 | TurnUpThe4D3D3D3 | false | null | 0 | o8a2dxp | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8a2dxp/ | false | 1 |
t1_o8a2dmm | You can use the rocm version instead of cuda, it should be as fast. And use a higher quant for 4b, Q6\_K.
Or in your case, just use Qwen3.5-9B, you have the VRAM for it. | 1 | 0 | 2026-03-02T19:14:12 | AppealSame4367 | false | null | 0 | o8a2dmm | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a2dmm/ | false | 1 |
t1_o8a2ceu | I think judgement can be influenced by instruction, memory, and guard rails. Larger context windows only help with providing more information to an agent all at once, it doesn't really address an agent's judgement (unless it's so small that you can't provide the relevant guidance). | 1 | 0 | 2026-03-02T19:14:02 | Total-Context64 | false | null | 0 | o8a2ceu | false | /r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o8a2ceu/ | false | 1 |
t1_o8a2b0q | https://old.reddit.com/r/LocalLLaMA/comments/1rg0pv6/how_can_i_determine_how_much_vram_each_model_uses/o7o1lpp/
https://old.reddit.com/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82wms6/
https://old.reddit.com/r/LocalLLaMA/comments/1ri42ee/help_finding_best_for_my_specs/o83kpzr/ | 1 | 0 | 2026-03-02T19:13:51 | MelodicRecognition7 | false | null | 0 | o8a2b0q | false | /r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a2b0q/ | false | 1 |
t1_o8a2aiz | So how much vram do i need for 35b-a3b and 27b
Also how powerful setup for 122b-a10b? :D | 1 | 0 | 2026-03-02T19:13:47 | HCLB_ | false | null | 0 | o8a2aiz | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a2aiz/ | false | 1 |
t1_o8a29zc | that sucks | 1 | 0 | 2026-03-02T19:13:43 | Negative-Web8619 | false | null | 0 | o8a29zc | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a29zc/ | false | 1 |
t1_o8a271y | > I mean we do have models even starting from 90M to 0.9b, which are amazing at tool calling and long context horizon tasks.
The word "amazing" here is beyond exaggeration. A 3B is barely useful for any serious tool calling, you can't seriously expect that a 0.9B to be useful. | 1 | 0 | 2026-03-02T19:13:19 | JamesEvoAI | false | null | 0 | o8a271y | false | /r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8a271y/ | false | 1 |
t1_o8a268x | Man, I hate when it does that, completely ignoring the config file. It's honestly part of the reason I've been using NyxPortal (nyxportal d0t c0m) a lot more, just to avoid that exact kind of headache with the output. | 1 | 0 | 2026-03-02T19:13:13 | Defro777 | false | null | 0 | o8a268x | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8a268x/ | false | 1 |
t1_o8a24ut | [removed] | 1 | 0 | 2026-03-02T19:13:02 | [deleted] | true | null | 0 | o8a24ut | false | /r/LocalLLaMA/comments/1rj1ifv/which_qwen_35_model_can_i_run_on_my_laptop/o8a24ut/ | false | 1 |
t1_o8a23yr | I appreciate you actually reading and detailing your reasoning out.
I was definitely reductionist, but in this case I think you are being so as well by bundling everything I said into purely grand conspiracy without the nuance (on the non Purdue pharma stuff).
The patterns are clear as day now and mounds of evidence seem to support the worst case scenario...the system that led to that doesn't just magically appear overnight. It's been going on for a long time.
\---
Human nature dictates behaviors and humans collaborate, make deals, partner up when meeting with one another.
The world's foreign leaders and elites who were peddling kids and at parties with one another also orchestrating major decisions that have lasting impacts on systems they control, for their own personal gain - the factors that lead to both behaviors are fully linked and causal. We know the stats around behaviorally disordered people at the top. We know the stats around deviant tendencies of behaviorally disordered people. We know the stats (and mechanisms) by which asymmetric control always destroys the underlying systems.
That's where I see the major mental block. It seems "unfathomable" to many but it's just...the most probable case. We can look at reality as the **realizability** of this. Ie the proof is in the pudding.
\---
Look around a bit. Americans cannot read. Addiction and health crises are on the rise. Mental health problems and dementia are on the rise. Attention spans are a fraction of what they were before. Car thefts are at all time highs. Anyone who has option between min wage job and being a criminal today has a much easier choice choosing the latter. It's not even close. This decay isn't "normal ebbs and flows".
How can 54% of Americans even understand a conspiracy staring at them? Look at the stats. Majority American adults cannot understand reality for what it is. The world is far too complex to get by with 6th grade level comprehension.
https://preview.redd.it/kqhgesyrbomg1.png?width=1784&format=png&auto=webp&s=e2cd576aa1c37e90f642991335229941cd045dc5
[https://www.nationalgeographic.com/health/article/attention-spans-shrinking-how-to-regain](https://www.nationalgeographic.com/health/article/attention-spans-shrinking-how-to-regain)
\----
I am totally with you on discovery side. I keep up with most of materials science advancements, photonics, progress on mrna cancer vaccines, protein folding, etc when I can. Huge nerd about all that still since I've been in "inventor" mode for a couple years now.
Happy to discuss privately if you are interested in learning more about what I am up to! | 1 | 0 | 2026-03-02T19:12:54 | brownman19 | false | null | 0 | o8a23yr | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8a23yr/ | false | 1 |
t1_o8a23bc | Here's a video that can really help you understand how Temperature, Top P and Top K works using a simple example and visuals.
The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w) | 1 | 0 | 2026-03-02T19:12:49 | Significant-Pitch-22 | false | null | 0 | o8a23bc | false | /r/LocalLLaMA/comments/1dxut1u/is_my_understanding_of_temperature_top_p_max/o8a23bc/ | false | 1 |
t1_o8a1zfq | start using axe, its local ai first lightweight ide, and ofcourse it made sure it works super with low speced macbooks as well :
[https://github.com/SRSWTI/axe](https://github.com/SRSWTI/axe) | 1 | 0 | 2026-03-02T19:12:18 | EmbarrassedAsk2887 | false | null | 0 | o8a1zfq | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a1zfq/ | false | 1 |
t1_o8a1wxq | The porn industry drives all techological progress. Its like NASA. | 1 | 0 | 2026-03-02T19:11:59 | blamestross | false | null | 0 | o8a1wxq | false | /r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o8a1wxq/ | false | 1 |
t1_o8a1wp0 | i think those small models are perfect for Local Ingame RPG AI -> limit the scope of knowledge, only needs to answer it the speed of human speech | 1 | 0 | 2026-03-02T19:11:57 | InviteEnough8771 | false | null | 0 | o8a1wp0 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a1wp0/ | false | 1 |
t1_o8a1ur0 | Why is qwen coder next 80b not there? Everybody sleeping on it... | 1 | 0 | 2026-03-02T19:11:41 | camwasrule | false | null | 0 | o8a1ur0 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8a1ur0/ | false | 1 |
t1_o8a1sux | That really depends on the probability distribution of the results because if there is significant gap between high probability and low probability result set then increasing the temperature won't affect the result much.
Here's a video that can really help you understand how Temperature, Top P and Top K works using a simple example and visuals.
The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w) | 1 | 0 | 2026-03-02T19:11:25 | Significant-Pitch-22 | false | null | 0 | o8a1sux | false | /r/LocalLLaMA/comments/1q92nc9/question_about_topk_topp_temperature_intuition/o8a1sux/ | false | 1 |
t1_o8a1rmz | How do you enable thinking in LM Studio? | 1 | 0 | 2026-03-02T19:11:15 | DarkArtsMastery | false | null | 0 | o8a1rmz | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8a1rmz/ | false | 1 |
t1_o8a1lxz | Thanks, I didn't know which mmproj file to get so I'll try some of the files from here: https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/tree/main | 1 | 0 | 2026-03-02T19:10:29 | iamapizza | false | null | 0 | o8a1lxz | false | /r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o8a1lxz/ | false | 1 |
t1_o8a1jvq | unsloth's jinja disables thinking by default, see the stickied comment here: https://old.reddit.com/r/unsloth/comments/1risuzs/qwen35_small_models_out_now/
I'm not sure why that decision was made. The benchmarks all refer to the thinking versions, so if you're expecting that performance and download the unsloth quants, you may be frustrated. | 1 | 0 | 2026-03-02T19:10:12 | thejoyofcraig | false | null | 0 | o8a1jvq | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o8a1jvq/ | false | 1 |
t1_o8a1h2a | bruh | 1 | 0 | 2026-03-02T19:09:50 | EmbarrassedAsk2887 | false | null | 0 | o8a1h2a | false | /r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o8a1h2a/ | false | 1 |
t1_o8a1eo9 | So the average guy when spoken to by a woman | 1 | 0 | 2026-03-02T19:09:31 | Dartister | false | null | 0 | o8a1eo9 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a1eo9/ | false | 1 |
t1_o8a1d38 | It's possible with some swap allocation and limitation
`llama-server -hf unsloth/Qwen3.5-9B-GGUF:UD-Q4_K_XL --alias "Qwen3.5-9B" -c 16384 --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.00` | 1 | 0 | 2026-03-02T19:09:18 | vrmorgue | false | null | 0 | o8a1d38 | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8a1d38/ | false | 1 |
t1_o8a1au8 | That's like 15 year old SoC if I recall correctly, I wouldn't bother. You'll get abysmal performance if you manage to make it work somehow. RP4 4GB might be doable. | 1 | 0 | 2026-03-02T19:09:00 | jslominski | false | null | 0 | o8a1au8 | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8a1au8/ | false | 1 |
t1_o8a1apv | Can you elaborate a little/share link to a repo? I tried using some local LLMs earlier as a routing layer or request deconstructors (into structured JSONs) before calling expensive LLMs, but the instruction following seemed rather poor across the board (Phi 4, Qwen, Gemma etc.; tried a lot of models in the 8B range) | 1 | 0 | 2026-03-02T19:08:59 | i4858i | false | null | 0 | o8a1apv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a1apv/ | false | 1 |
t1_o8a19gx | I will say, I haven’t tried Qwen (although I probably should given I run a very beefy MBP) but there are really solid options out there for cheap, agent-capable models these days. $10/mo sub to Minimax’s coding plan has been pretty nice to have for my little toy projects. | 1 | 0 | 2026-03-02T19:08:49 | porkyminch | false | null | 0 | o8a19gx | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a19gx/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.