name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7vjc28 | I'm pretty sure the US gov doesn't use Chinese open source. And I wouldn't be surprised if they add all open-source models to the list of things that any gov contractor is not allowed to use. | 1 | 0 | 2026-02-28T13:26:20 | DrDisintegrator | false | null | 0 | o7vjc28 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vjc28/ | false | 1 |
t1_o7vj9p1 | At 32GB you can run the 27B dense at 4-bit. | 12 | 0 | 2026-02-28T13:25:54 | silenceimpaired | false | null | 0 | o7vj9p1 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vj9p1/ | false | 12 |
t1_o7vj9f7 | Considering renting out a 5090 OC GPU on [vast.ai](http://vast.ai)
| 1 | 0 | 2026-02-28T13:25:52 | Martin-_-Garrix | false | null | 0 | o7vj9f7 | false | /r/LocalLLaMA/comments/1ajnhs1/renting_gpu_time_vast_ai_is_much_more_expensive/o7vj9f7/ | false | 1 |
t1_o7vj8d4 | I think both are great. I can't run the 27b with full context on the 3090, while the 122b can. The 122b should have more knowledge, but maybe the 27b beats it on reasoning. Both are about the same speed for me (~20 T/s TG and 500-800 T/s PP): 27B fully on the 3090, and 122B with MoE offload across the 3090 and 96GB DDR5. | 1 | 0 | 2026-02-28T13:25:40 | chris_0611 | false | null | 0 | o7vj8d4 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vj8d4/ | false | 1 |
t1_o7vj6bx | If you don't do it already, you can also compress the context with `-ctk` and `-ctv` (the `--cache-type-k` / `--cache-type-v` parameters). The default is f16, but most of the time q8_0 is enough, and you can use a higher value for `ctk` and a lower one for `ctv`. | 1 | 0 | 2026-02-28T13:25:19 | Educational_Sun_8813 | false | null | 0 | o7vj6bx | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vj6bx/ | false | 1 |
t1_o7vj4nf | No problem, but nobody is ready for the revolution the 4B is (once again) going to be 🤣
With the openclaw hype this model may well become the most popular one 🤣 | 0 | 0 | 2026-02-28T13:25:01 | Adventurous-Paper566 | false | null | 0 | o7vj4nf | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vj4nf/ | false | 0 |
t1_o7vj3bm | You're screwing something up in this argument. llama.cpp says that 256k context needs this:
[37427] llama_kv_cache: Vulkan0 KV buffer size = 6144.00 MiB
[37427] llama_kv_cache: size = 6144.00 MiB (262144 cells, 12 layers, 4/1 seqs), K (f16): 3072.00 MiB, V (f16): 3072.00 MiB
[37427] llama_memory_recu... | 2 | 0 | 2026-02-28T13:24:47 | audioen | false | null | 0 | o7vj3bm | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vj3bm/ | false | 2 |
t1_o7vj2z9 | > a “multimodal” model with picture, video, and text-generating functions. | 0 | 1 | 2026-02-28T13:24:43 | Chilangosta | false | null | 0 | o7vj2z9 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vj2z9/ | false | 0 |
t1_o7vj24s | ♬ Don’t dream it’s over… | 1 | 0 | 2026-02-28T13:24:34 | silenceimpaired | false | null | 0 | o7vj24s | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vj24s/ | false | 1 |
t1_o7vivj2 | This is helpful, thanks! I just realized I'm not sure I understand the difference between undervolting and power limiting. Maybe I actually meant power limiting. I remember reading posts from people capping the power draw to 200 watts with single-digit percentage performance loss in inference. | 1 | 0 | 2026-02-28T13:23:24 | doesitoffendyou | false | null | 0 | o7vivj2 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vivj2/ | false | 1 |
t1_o7viqso | That says co-authored, not solely authored. | 2 | 0 | 2026-02-28T13:22:33 | StardockEngineer | false | null | 0 | o7viqso | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7viqso/ | false | 2 |
t1_o7viqod | I wasn't talking about the quantization but about the difference between a MoE and a dense model. Even the 27b Qwen is "smarter" than the 35b MoE and requires less VRAM. You sacrifice intelligence for speed with a MoE. | 8 | 0 | 2026-02-28T13:22:32 | Safe_Sky7358 | false | null | 0 | o7viqod | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7viqod/ | false | 8 |
t1_o7vips3 | May your pillow always be exactly the right temperature and as fluffy as you like!
Thanks! | 2 | 0 | 2026-02-28T13:22:22 | bonobomaster | false | null | 0 | o7vips3 | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7vips3/ | false | 2 |
t1_o7vipd0 | Have you compared 122B against the dense 27b? The 27b beat 122b in some benchmarks. If you compared, and prefer 122b, what are you using it for? Trying to get a pulse on real world use so I’m asking a few people. | 2 | 0 | 2026-02-28T13:22:18 | silenceimpaired | false | null | 0 | o7vipd0 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vipd0/ | false | 2 |
t1_o7vip6g | Q5 on 2× 48GB and 96k context. So, not sure. | 1 | 0 | 2026-02-28T13:22:16 | sjoerdmaessen | false | null | 0 | o7vip6g | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vip6g/ | false | 1 |
t1_o7vimbc | Everyone's been saying V4 is coming for months now lol. But if it actually ships with native image gen and not just routing to a separate model... that's huge for open source. The closed labs have been gatekeeping multimodal generation for way too long. | 13 | 0 | 2026-02-28T13:21:45 | RobertLigthart | false | null | 0 | o7vimbc | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vimbc/ | false | 13 |
t1_o7vike3 | Have you compared 122B against the dense 27b? The 27b beat 122b in some benchmarks. If you compared, and prefer 122b, what are you using it for? | 1 | 0 | 2026-02-28T13:21:24 | silenceimpaired | false | null | 0 | o7vike3 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vike3/ | false | 1 |
t1_o7vijcd | This post smells like slop | 3 | 0 | 2026-02-28T13:21:13 | Zyj | false | null | 0 | o7vijcd | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vijcd/ | false | 3 |
t1_o7vifw9 | If the logit likelihoods are available from the other model, then the training is likely attempting to match the model's predictions to the target logit likelihoods on all tokens at once. This is also how you do distillation: you're basically training a model to mimic another model. | 2 | 0 | 2026-02-28T13:20:36 | audioen | false | null | 0 | o7vifw9 | false | /r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/o7vifw9/ | false | 2 |
t1_o7vif5b | That is exactly what I am looking for. Thank you!!! | 2 | 0 | 2026-02-28T13:20:28 | felixlovesml | false | null | 0 | o7vif5b | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7vif5b/ | false | 2 |
t1_o7vic11 | Try getting Gemini Flash to generate an image in Google AI Studio ;) | 1 | 0 | 2026-02-28T13:19:55 | Adventurous-Paper566 | false | null | 0 | o7vic11 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vic11/ | false | 1 |
t1_o7viabl | where is stepfun | 1 | 0 | 2026-02-28T13:19:36 | HeftyAeon | false | null | 0 | o7viabl | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7viabl/ | false | 1 |
t1_o7vi6cd | Qwen is the replacement for the late Gemma. | 9 | 0 | 2026-02-28T13:18:53 | DrNavigat | false | null | 0 | o7vi6cd | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vi6cd/ | false | 9 |
t1_o7vhsyx | 9B is nothing to sneeze at; given the progress made on 3.5, it should be very good for those who don't have the RAM to run the MoE.
On an older gaming build, for example an RTX 3060 12GB + 16GB of DDR4. | 3 | 1 | 2026-02-28T13:16:27 | Adventurous-Paper566 | false | null | 0 | o7vhsyx | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vhsyx/ | false | 3 |
t1_o7vhr0h | 122b on a 96gb triple gpu rig. At least that's the goal, not working yet. | 1 | 0 | 2026-02-28T13:16:05 | WiseassWolfOfYoitsu | false | null | 0 | o7vhr0h | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vhr0h/ | false | 1 |
t1_o7vhq7l | I'd just be happy if it uses engram and we can offload a good part of the model to disk with no inference speed cost. | 11 | 0 | 2026-02-28T13:15:56 | HeftyAeon | false | null | 0 | o7vhq7l | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vhq7l/ | false | 11 |
t1_o7vhls9 | how much system ram? | 1 | 0 | 2026-02-28T13:15:08 | Old-Sherbert-4495 | false | null | 0 | o7vhls9 | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vhls9/ | false | 1 |
t1_o7vhlbb | That is hilariously incorrect. Even the q4_k_m has about 96% similarity to the original, and q8 is around 99%. | -3 | 1 | 2026-02-28T13:15:03 | CooperDK | false | null | 0 | o7vhlbb | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vhlbb/ | false | -3 |
t1_o7vhl1r | Your setup is messed up. This basic ask is trivial for the model.
https://preview.redd.it/lze76t8qj8mg1.png?width=818&format=png&auto=webp&s=1a27132662306507c623b1a8034d84b865f0f1ef
| 3 | 0 | 2026-02-28T13:15:00 | audioen | false | null | 0 | o7vhl1r | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7vhl1r/ | false | 3 |
t1_o7vhj00 | >Here is a simple human sounding Reddit draft written naturally without formatting or symbols:
Son 😭 | 2 | 0 | 2026-02-28T13:14:38 | Velocita84 | false | null | 0 | o7vhj00 | false | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/o7vhj00/ | false | 2 |
t1_o7vhgwf | Based on actual llama.cpp numbers, Qwen3.5-122B-A10B uses about 30% *less* memory per token of KV (+ recurrent) cache than GPT-OSS-120B.
https://preview.redd.it/yh03phmlj8mg1.png?width=876&format=png&auto=webp&s=a454093e1c5c98ca19a1d6a4060a4303630aaa09 | 2 | 0 | 2026-02-28T13:14:14 | DeProgrammer99 | false | null | 0 | o7vhgwf | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vhgwf/ | false | 2 |
t1_o7vhcwl | That's a fair point actually. You're right that MCP tool definitions get sent in context on every call, so 30 tools with schemas is still a chunk of tokens per message. The tradeoff is structured reliability vs token efficiency.
The grep approach you're describing is basically RAG over the API spec, which is clever for t... | 2 | 0 | 2026-02-28T13:13:31 | Beautiful-Dream-168 | false | null | 0 | o7vhcwl | false | /r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7vhcwl/ | false | 2 |
t1_o7vhbh4 | I hope it has faster inference speed for us GPU destitute. It doesn't matter much to me if the model is SOTA but runs at 4 t/s on my hardware. An inference speed akin to LFM2/2.5 would be a dream, even more so with MLA for lengthier context.
One can only dream. | 3 | 0 | 2026-02-28T13:13:15 | Rique_Belt | false | null | 0 | o7vhbh4 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vhbh4/ | false | 3 |
t1_o7vhajv | Then you have another issue, because LM Studio actually runs through a llama.cpp library... | 1 | 0 | 2026-02-28T13:13:05 | CooperDK | false | null | 0 | o7vhajv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vhajv/ | false | 1 |
t1_o7vh3l2 | Thanks | 1 | 0 | 2026-02-28T13:11:47 | Comfortable-Fudge233 | false | null | 0 | o7vh3l2 | false | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/o7vh3l2/ | false | 1 |
t1_o7vh2ok |  | -2 | 0 | 2026-02-28T13:11:38 | xanduonc | false | null | 0 | o7vh2ok | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vh2ok/ | false | -2 |
t1_o7vh1a5 | Use KoboldCpp or LM Studio. | 2 | 0 | 2026-02-28T13:11:22 | CooperDK | false | null | 0 | o7vh1a5 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vh1a5/ | false | 2 |
t1_o7vgzft | Yes, it will be native, but the llama.cpp implementations will use mmproj files. It's close to 5B. | 1 | 0 | 2026-02-28T13:11:02 | SystematicKarma | false | null | 0 | o7vgzft | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgzft/ | false | 1 |
t1_o7vgz48 | Using agents. | 3 | 0 | 2026-02-28T13:10:58 | Zyj | false | null | 0 | o7vgz48 | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vgz48/ | false | 3 |
t1_o7vgxne | [deleted] | 1 | 0 | 2026-02-28T13:10:42 | [deleted] | true | null | 0 | o7vgxne | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vgxne/ | false | 1 |
t1_o7vgubi | Qwen3 also has 0.6 and 2 as far as I remember | 5 | 0 | 2026-02-28T13:10:05 | CooperDK | false | null | 0 | o7vgubi | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgubi/ | false | 5 |
t1_o7vgrhy | What are your problems? | 1 | 0 | 2026-02-28T13:09:33 | jacek2023 | false | null | 0 | o7vgrhy | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgrhy/ | false | 1 |
t1_o7vgqj4 | 996 is not legal anymore in China. | 2 | 0 | 2026-02-28T13:09:22 | dugavo | false | null | 0 | o7vgqj4 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgqj4/ | false | 2 |
t1_o7vgmrc | Hard to say... Bartowski's 122B Q4 seems to match benchmark results well across a bunch of tests, and it runs fine on my system with 130k Q8 context: 480 t/s PP with 22 t/s writes.
Versus the 27B Q5, which seems to give diminished results compared to 27B FP8/Q8, and Q5 is already 350 t/s PP with 12 t/s writes on my machine... Granted I ... | 1 | 0 | 2026-02-28T13:08:39 | Dundell | false | null | 0 | o7vgmrc | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vgmrc/ | false | 1 |
t1_o7vgmk3 | For Qwen3-VL there are quite a few reported issues of the model repeating itself. I remember seeing an official statement/note from the Qwen team with recommended generation settings for different scenarios.
For Cosmos there is not much data from the community on this as the community is much smaller.
We encountered i... | 1 | 0 | 2026-02-28T13:08:37 | tag_along_common | false | null | 0 | o7vgmk3 | false | /r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/o7vgmk3/ | false | 1 |
t1_o7vglcb | Not ready 🤣 | 1 | 0 | 2026-02-28T13:08:24 | Adventurous-Paper566 | false | null | 0 | o7vglcb | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vglcb/ | false | 1 |
t1_o7vgh17 | Yeah I agree, I always advise using "vs studio 16 2019" when running into issues with versions, for as long as that still works. Claude and Gemini will take them through it most of the time. | 1 | 0 | 2026-02-28T13:07:34 | BrightRestaurant5401 | false | null | 0 | o7vgh17 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgh17/ | false | 1 |
t1_o7vgfjx | You could try running the 120b Qwen 3.5 as a Q4_K_XL from Unsloth with your 64+24, since that uses 70+ GB plus context. But the difference is small, because the 120b is a 10b-expert model, so most times the output would be a bit faster but not qualitatively better. I'd stick to the 27b model for now if I were you. | 2 | 0 | 2026-02-28T13:07:18 | getmevodka | false | null | 0 | o7vgfjx | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vgfjx/ | false | 2 |
t1_o7vgdmj | … a native VL 4b? | -3 | 1 | 2026-02-28T13:06:56 | DistanceSolar1449 | false | null | 0 | o7vgdmj | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vgdmj/ | false | -3 |
t1_o7vg4ee | I bet they go back to Qwen2.5 targets like 7B instead of 8B. Make the 9 a mini coder. 👍 | 0 | 0 | 2026-02-28T13:05:12 | neuralnomad | false | null | 0 | o7vg4ee | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vg4ee/ | false | 0 |
t1_o7vg3nw | AesSedai. | 0 | 0 | 2026-02-28T13:05:04 | moahmo88 | false | null | 0 | o7vg3nw | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vg3nw/ | false | 0 |
t1_o7vg1yq | The issue is known and will likely be resolved very soon.
[https://www.reddit.com/r/LocalLLaMA/comments/1rgek4m/comment/o7ucx6b/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1rgek4m/comment/o7ucx6b/?utm_source=share&utm_... | 2 | 0 | 2026-02-28T13:04:44 | Adventurous-Paper566 | false | null | 0 | o7vg1yq | false | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/o7vg1yq/ | false | 2 |
t1_o7vg0ez | Ohhh! Can you give me the detailed procedure to get rid of this? I'm very new here and I don't know anything about this! It would be very helpful if you helped me set these things up! | 1 | 0 | 2026-02-28T13:04:26 | Less_Strain7577 | false | null | 0 | o7vg0ez | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vg0ez/ | false | 1 |
t1_o7vfvzd | Image+txt+video isn't EVERYTHING, there's still pure audio (music, speech, sfx) | 9 | 0 | 2026-02-28T13:03:36 | Silver-Champion-4846 | false | null | 0 | o7vfvzd | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vfvzd/ | false | 9 |
t1_o7vfv56 | I see you have tested the model. The scores are weirdly low; I just tested the arithmetic questions and the model answered all of them correctly. I quanted the model myself without imatrix. May I know if you got your model from Unsloth? Try Bartowski's! | 1 | 0 | 2026-02-28T13:03:27 | Tccybo | false | null | 0 | o7vfv56 | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7vfv56/ | false | 1 |
t1_o7vftmy | You can keep calling it the Department of Defense. The president cannot unilaterally change the name of the agency. | -2 | 0 | 2026-02-28T13:03:10 | buecker02 | false | null | 0 | o7vftmy | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vftmy/ | false | -2 |
t1_o7vfsh9 | Anything other than political stance should be fine, since the Chinese models are just distilled from the US models. | 1 | 0 | 2026-02-28T13:02:57 | albertgao | false | null | 0 | o7vfsh9 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7vfsh9/ | false | 1 |
t1_o7vfroi | You can add the following prompt into the Prompt Template – template (Jinja):
System: Always use the current date from external sources. Do not rely on your internal knowledge about the year.
| 1 | 0 | 2026-02-28T13:02:48 | moahmo88 | false | null | 0 | o7vfroi | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vfroi/ | false | 1 |
t1_o7vfr9a | Great project! For those looking for something even more lightweight for Mac dictation (no GPU needed), there are simpler options now. The local-only approach is really the way to go for privacy. | 0 | 0 | 2026-02-28T13:02:42 | Weesper75 | false | null | 0 | o7vfr9a | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o7vfr9a/ | false | 0 |
t1_o7vfppg | lmao | 1 | 0 | 2026-02-28T13:02:25 | j_osb | false | null | 0 | o7vfppg | false | /r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/o7vfppg/ | false | 1 |
t1_o7vfo0b | The Chinese models are distilled from the US models. If the US models were not a thing, you would not have Chinese models at all in the end. | 1 | 0 | 2026-02-28T13:02:05 | albertgao | false | null | 0 | o7vfo0b | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7vfo0b/ | false | 1 |
t1_o7vfmzf | Like 16 at q4 | 5 | 0 | 2026-02-28T13:01:53 | thebadslime | false | null | 0 | o7vfmzf | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vfmzf/ | false | 5 |
t1_o7vfmhz | Do you work for them? | -4 | 0 | 2026-02-28T13:01:48 | ambassadortim | false | null | 0 | o7vfmhz | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vfmhz/ | false | -4 |
t1_o7vfjx6 | Just install homebrew | 0 | 0 | 2026-02-28T13:01:19 | StardockEngineer | false | null | 0 | o7vfjx6 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vfjx6/ | false | 0 |
t1_o7vfj2q | Qwen's next arch should have a very small KV size, compared even to MLA. | 4 | 0 | 2026-02-28T13:01:09 | shing3232 | false | null | 0 | o7vfj2q | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vfj2q/ | false | 4 |
t1_o7vfian | Which inference framework are you using, or did you just generate the post with a random LLM? At least the issue you mentioned does not exist with llama.cpp or vLLM. You can load 35B-A3B-Q4 at 128K with ~30GB RAM+VRAM, or load 27B-FP8 with >500k KV cache with 80GB VRAM and vLLM. | 8 | 0 | 2026-02-28T13:01:01 | lly0571 | false | null | 0 | o7vfian | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vfian/ | false | 8 |
t1_o7vfhy9 | V3? What about 2 and 1? Or not worth mentioning? | 1 | 0 | 2026-02-28T13:00:57 | Silver-Champion-4846 | false | null | 0 | o7vfhy9 | false | /r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/o7vfhy9/ | false | 1 |
t1_o7vfhnh | I've had issues in the past too, and qwen3.5:27b is the first that works for me in Codex.
It's noticeably slower than OpenAI models though (I use it on a Mac), but the quality is surprisingly good - it handles everything pretty well and feels very usable for real tasks.
Based on some benchmarks, it's the strongest lo... | 2 | 0 | 2026-02-28T13:00:53 | shuravi108 | false | null | 0 | o7vfhnh | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vfhnh/ | false | 2 |
t1_o7vfcyz | Yes you can. I am running it with full context in LMStudio with the Vulkan backend. | 1 | 0 | 2026-02-28T13:00:00 | dsartori | false | null | 0 | o7vfcyz | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vfcyz/ | false | 1 |
t1_o7vfcva | Do we know if unsloth are also updating their quants for Qwen3.5 397B? Or is it only the smaller variants that are being updated? | 7 | 0 | 2026-02-28T12:59:59 | twack3r | false | null | 0 | o7vfcva | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vfcva/ | false | 7 |
t1_o7vfatc | Nano Banana Pro might've been something else, but Nano Banana 2 is Gemini 3.1 Flash generating according to Google | 11 | 0 | 2026-02-28T12:59:35 | mgostIH | false | null | 0 | o7vfatc | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vfatc/ | false | 11 |
t1_o7vf9ku | devstral is a great local model, tested in action | 2 | 0 | 2026-02-28T12:59:21 | Educational_Sun_8813 | false | null | 0 | o7vf9ku | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vf9ku/ | false | 2 |
t1_o7vf538 | What is the Unsloth respin? | 1 | 0 | 2026-02-28T12:58:27 | DertekAn | false | null | 0 | o7vf538 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vf538/ | false | 1 |
t1_o7vf0sv | It's all bubble dynamics. Any bit of bearishness from an authority like a first world government or central bank can tank their financial resources for further borrowing. | 1 | 0 | 2026-02-28T12:57:37 | PlainBread | false | null | 0 | o7vf0sv | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vf0sv/ | false | 1 |
t1_o7vezx9 | You can't run 122B-A3B in Q4 with max context on a Strix Halo?
On a 3090 24GB + 96GB DDR5, I'm running 122B-A3B in Q5 with 256k context, even with batching 4096 (which takes extra VRAM) to increase PP. 20 T/s TG and 500 T/s PP. And it uses far from all of the DDR5, something like 80GB.
Sad times for Strix Halo. | 3 | 0 | 2026-02-28T12:57:27 | chris_0611 | false | null | 0 | o7vezx9 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vezx9/ | false | 3 |
t1_o7vezix | I got all 256k context in my RTX Pro with 96GB. Seems there is some disconnect. | 14 | 0 | 2026-02-28T12:57:22 | StardockEngineer | false | null | 0 | o7vezix | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vezix/ | false | 14 |
t1_o7vew1v | Hitting 20 tok/s here on Ryzen AI Max, Qwen3.5-122B-A10B-Q4_K_M (Proxmox LXC) | 1 | 0 | 2026-02-28T12:56:40 | lenne0816 | false | null | 0 | o7vew1v | false | /r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o7vew1v/ | false | 1 |
t1_o7vetiz | Question: realistically, what would you say an RTX 5070 Ti would get for speeds? On the GitHub you said the RTX 4060 got speeds of 2,905ms. I'm trying to find a custom-voice TTS that'd get the lowest possible latency for realtime talking. | 1 | 0 | 2026-02-28T12:56:10 | According_Talk_3047 | false | null | 0 | o7vetiz | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o7vetiz/ | false | 1 |
t1_o7ver7f | Qwen 3.5 9b vs Qwen 3.5 35b A3b
https://preview.redd.it/we17zwhag8mg1.png?width=735&format=png&auto=webp&s=b4af902da036c38e3510fddcf5193d5808e16ad4 | 1 | 1 | 2026-02-28T12:55:42 | -Ellary- | false | null | 0 | o7ver7f | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7ver7f/ | false | 1 |
t1_o7vepu3 | Soon TM | 4 | 0 | 2026-02-28T12:55:26 | silenceimpaired | false | null | 0 | o7vepu3 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vepu3/ | false | 4 |
t1_o7vektr | > even though it *loves* to "Wait..." over and over again even after it's already copied out its entire input.
That's what scares me.
Qwen 3 thinking was okay.
2507 thinking is overthinking and I don't want to see that again. | 7 | 0 | 2026-02-28T12:54:26 | LinkSea8324 | false | null | 0 | o7vektr | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vektr/ | false | 7 |
t1_o7veih0 | You're thinking of DeepSeek (like V3/R1), which uses MLA to natively compress the KV cache!
Qwen 3.5 actually went a different route: it uses a hybrid architecture (Gated Delta Networks) that requires a much larger KV memory footprint per token than standard models to maintain its reasoning performance. | -3 | 0 | 2026-02-28T12:53:57 | Reasonable-Yak-3523 | false | null | 0 | o7veih0 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7veih0/ | false | -3 |
t1_o7veg9b | Glad it's working well for you. For the offloading question — since this is a MoE model, you actually have more options than just basic layer splitting. llama.cpp has --override-tensor to control exactly which tensors go to GPU vs CPU, and --n-cpu-moe for offloading expert weights specifically to CPU while keeping atte... | 1 | 0 | 2026-02-28T12:53:31 | melanov85 | false | null | 0 | o7veg9b | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7veg9b/ | false | 1 |
t1_o7vefej | So you don't ever watch the "thinking" process? It has changed in all major models to deal with anachronisms. But you'd have to pay attention to such things, and you'd rather laugh at people who do. Disappointing, and strange. | -5 | 0 | 2026-02-28T12:53:21 | brickout | false | null | 0 | o7vefej | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vefej/ | false | -5 |
t1_o7vedfr | I think the military wants to buy something they can use as they want, without needing to negotiate.
If the military buys a gun, they want to own it and do whatever they need with it, not read a EULA before firing every shot.
If you give them asterisks they know you'll be trouble, so it's best to cut ties ASAP. Even i... | 0 | 0 | 2026-02-28T12:52:58 | FullOf_Bad_Ideas | false | null | 0 | o7vedfr | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vedfr/ | false | 0 |
t1_o7vea3d | my guess is they're using openclaw to promote openclaw | 1 | 0 | 2026-02-28T12:52:19 | malkovichmusic | false | null | 0 | o7vea3d | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7vea3d/ | false | 1 |
t1_o7ve7ap | Back to classic vanilla tech. | 14 | 0 | 2026-02-28T12:51:45 | -Ellary- | false | null | 0 | o7ve7ap | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7ve7ap/ | false | 14 |
t1_o7vdz9m | It's not well written | 0 | 0 | 2026-02-28T12:50:11 | Plane_Assumption_937 | false | null | 0 | o7vdz9m | false | /r/LocalLLaMA/comments/1rcd7nr/openclaw_vs_zeroclaw_vs_nullclaw_for_agentic/o7vdz9m/ | false | 0 |
t1_o7vdyzp | In fact, this model appeared on https://open.bigmodel.cn/pricing (the Chinese site of z.ai) as early as the release of GLM-5. But there isn't any information about it. | 1 | 0 | 2026-02-28T12:50:08 | ravi_2233 | false | null | 0 | o7vdyzp | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7vdyzp/ | false | 1 |
t1_o7vdyox | Appears to run very fast, but it's useless as it is for agentic coding in OpenCode; I haven't found a way to turn off thinking and make it run tool calls. | 1 | 0 | 2026-02-28T12:50:04 | Both_Mix3836 | false | null | 0 | o7vdyox | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o7vdyox/ | false | 1 |
t1_o7vdxfz | Figuring out how to work through it.
Lol, they have just understood that something like the date is merely injected into the system prompt or obtained via tooling, rather than expecting static weights to know it. | 8 | 0 | 2026-02-28T12:49:50 | goldlord44 | false | null | 0 | o7vdxfz | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vdxfz/ | false | 8 |
t1_o7vdwuh | What's the issue with compiling on SUSE? | 3 | 0 | 2026-02-28T12:49:43 | jacek2023 | false | null | 0 | o7vdwuh | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vdwuh/ | false | 3 |
t1_o7vdt2d | Q5_K_M from unsloth? | 3 | 0 | 2026-02-28T12:48:59 | Significant_Fig_7581 | false | null | 0 | o7vdt2d | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vdt2d/ | false | 3 |
t1_o7vdoa4 | can we do it on our own? | 2 | 0 | 2026-02-28T12:48:01 | Alarmed_Wind_4035 | false | null | 0 | o7vdoa4 | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vdoa4/ | false | 2 |
t1_o7vdjvl | Underlying this question is an assumption that you can only effectively run a 7b model on your hardware (because split GPU/CPU inferencing is way too slow).
This may not be the case for much longer - RabbitLLM (on GitHub) is a new fork of an older tool AirLLM which aims to let you run 70B models on an 8GB GPU.
The re... | 2 | 0 | 2026-02-28T12:47:09 | Protopia | false | null | 0 | o7vdjvl | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7vdjvl/ | false | 2 |
t1_o7vdglg | The demand for kvcache even at f16 is laughably small for the Qwen3.5 series. I have 272GiB VRAM (6000, 5090 and 3 nvlinked 3090 pairs), all 4 bit quants of 397 fit in that envelope and the NL one performs the best. As in, I cannot see any reason to go for a higher quant or cloud, the quality of the output is the same. | 2 | 0 | 2026-02-28T12:46:31 | twack3r | false | null | 0 | o7vdglg | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vdglg/ | false | 2 |
t1_o7vde8d | I'm using Qwen 35B-A3B at Q6 with a 168k context on a single 5060 Ti, and I've already said goodbye to GLM 4.7 Flash. | 4 | 0 | 2026-02-28T12:46:03 | RaDDaKKa | false | null | 0 | o7vde8d | false | /r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o7vde8d/ | false | 4 |
t1_o7vdbn6 | The observability problem is real — and there's a security angle most people miss.
When you do get logging working, check what your agent is actually capturing. Tool outputs, command results, API responses — if any of those contain credentials (and they will), they're now sitting in your debug logs in plaintext.
I've... | 1 | 0 | 2026-02-28T12:45:31 | Ok_Yard3778 | false | null | 0 | o7vdbn6 | false | /r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o7vdbn6/ | false | 1 |
t1_o7vdao3 | UD-Q4_K_XL | 0 | 0 | 2026-02-28T12:45:19 | Reasonable-Yak-3523 | false | null | 0 | o7vdao3 | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vdao3/ | false | 0 |
t1_o7vda56 | It's a lot dumber than a dense model as well :( | -1 | 0 | 2026-02-28T12:45:13 | Safe_Sky7358 | false | null | 0 | o7vda56 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vda56/ | false | -1 |