name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7vd9xt | I keep getting this error with Qwen but not with the much smaller GLM-4.7-Flash. qwen/qwen3-coder-next-80B-Q6_K
Error: The write tool was called with invalid arguments: [
{
"expected": "string",
"code": "invalid_type",
"path": [
"content"
],
"message": "Invalid... | 1 | 0 | 2026-02-28T12:45:10 | Content_Impact1507 | false | null | 0 | o7vd9xt | false | /r/LocalLLaMA/comments/1rdnxe6/qwen3codernext_vs_qwen3535ba3b_vs_qwen3527b_a/o7vd9xt/ | false | 1 |
t1_o7vd9nc | 122b 5bit vllm mlx
I’m in love | 1 | 0 | 2026-02-28T12:45:07 | Thump604 | false | null | 0 | o7vd9nc | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vd9nc/ | false | 1 |
t1_o7vd6p5 | Thanks!
5090 is too expensive and too powerful - I would have to buy a new UPS for it.
I went with a cheaper AMD option. Slower, but less power hungry. | 5 | 0 | 2026-02-28T12:44:31 | ProfessionalSpend589 | false | null | 0 | o7vd6p5 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vd6p5/ | false | 5 |
t1_o7vd626 | Hope this release shakes the market like last time. Just expecting tiny price down of GPUs for short time at least. | 14 | 0 | 2026-02-28T12:44:23 | pmttyji | false | null | 0 | o7vd626 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vd626/ | false | 14 |
t1_o7vd3gz | If you use lm studio make sure to deactivate mmap and system memory offload and activate full vram use options then. | 2 | 0 | 2026-02-28T12:43:52 | getmevodka | false | null | 0 | o7vd3gz | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vd3gz/ | false | 2 |
t1_o7vd13v | We'll see about that img/video gen | 7 | 0 | 2026-02-28T12:43:24 | No_Afternoon_4260 | false | null | 0 | o7vd13v | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vd13v/ | false | 7 |
t1_o7vcwny | Anthropic are the ones with the most potential for evil IMO.
They are deeply anti open, against any form of information sharing, let alone weights, and the most likely to try to get open models banned.
Also, the fact that they are effective altruists, and this quote describes them well:
“Of all tyrannies, a tyranny ... | 4 | 0 | 2026-02-28T12:42:30 | ThisGonBHard | false | null | 0 | o7vcwny | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vcwny/ | false | 4 |
t1_o7vct8f | This Next Week really never ends... | 12 | 0 | 2026-02-28T12:41:47 | Kirigaya_Mitsuru | false | null | 0 | o7vct8f | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vct8f/ | false | 12 |
t1_o7vcshk | Both can be true. Maybe the 2026 stuff is partial. Especially if we believe Americans who say Chinese models are distilling American ones. Maybe it's 2024 data with select 2026 top ups | 3 | 0 | 2026-02-28T12:41:38 | Hector_Rvkp | false | null | 0 | o7vcshk | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vcshk/ | false | 3 |
t1_o7vcsdu | Isn’t this a simple solution? You always have this configuration in your system in terms of which model to use per client per task, so you don’t have vendor lock in. If the client doesn’t like Chinese models, you have loads of options, you don’t need to trap yourself in open source models, just use Gemini Pro 3 for exa... | 1 | 0 | 2026-02-28T12:41:37 | albertgao | false | null | 0 | o7vcsdu | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7vcsdu/ | false | 1 |
t1_o7vcn19 | If it was anyone else saying this you would be right, but the FT is usually right about this stuff, albeit not normally in this area. | 14 | 0 | 2026-02-28T12:40:31 | Logical_Look8541 | false | null | 0 | o7vcn19 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vcn19/ | false | 14 |
t1_o7vcjys | Isn't the whole appeal of Qwen 3.5, at least the 400B model, that it's compressing KV cache natively, resulting in much, much higher context windows for a given RAM allowance? Did I imagine that? | 11 | 0 | 2026-02-28T12:39:53 | Hector_Rvkp | false | null | 0 | o7vcjys | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vcjys/ | false | 11 |
t1_o7vcehp | How do you run your self-iterative loop? I'm using https://github.com/darrenhinde/OpenAgentsControl but it still a very hands-on approach. I'm looking for a more small model oriented solution, every other scaffold has failed me besides this. | 1 | 0 | 2026-02-28T12:38:44 | DefNattyBoii | false | null | 0 | o7vcehp | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vcehp/ | false | 1 |
t1_o7vcdqe | Qwen3VL is an amazing model. But it suffers from one problem I am unable to resolve which is repetition. Did you encounter it? Even repetition penalty does not fix it. | 2 | 0 | 2026-02-28T12:38:34 | scousi | false | null | 0 | o7vcdqe | false | /r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/o7vcdqe/ | false | 2 |
t1_o7vccx9 | R markdown in R studio will allow you to generate figures from live code in markdown documents, so you shouldn't lose that.
The big advantages of R are all the vast number of stats packages, and its visual data DNA. If you want publication-ready graphs, R is your friend. | 1 | 0 | 2026-02-28T12:38:24 | llmentry | false | null | 0 | o7vccx9 | false | /r/LocalLLaMA/comments/1rg3da6/are_there_any_particular_offline_models_i_could/o7vccx9/ | false | 1 |
t1_o7vcbjz | How much do they pay you guys to astroturf OpenCode?
OpenCode is the worst of 20 different options. Multiple people here all casually pretending to daily drive it is absurd. | -5 | 0 | 2026-02-28T12:38:07 | beijinghouse | false | null | 0 | o7vcbjz | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vcbjz/ | false | -5 |
t1_o7vc9v2 | No. You are thinking of the New York Times. Financial Times is about the best paper there is for accuracy, they are also one of the few news groups that actually makes a profit and doesn't need a 'sugar daddy' to keep them afloat. | 26 | 0 | 2026-02-28T12:37:46 | Logical_Look8541 | false | null | 0 | o7vc9v2 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vc9v2/ | false | 26 |
t1_o7vbvpe | Ya only in their staff picked models. It's a yaml config file which exposes the button, but even if you replicate that it does not expose the button for other quants, idk what voodoo shiit they did with it. | 4 | 0 | 2026-02-28T12:34:47 | FORNAX_460 | false | null | 0 | o7vbvpe | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7vbvpe/ | false | 4 |
t1_o7vbuep | Thank you!! | 2 | 0 | 2026-02-28T12:34:30 | blksunr1z1ng | false | null | 0 | o7vbuep | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7vbuep/ | false | 2 |
t1_o7vbu2c | Did anyone get tool working better for home assistant? I had to switch back to GLM-4.7 flash because tool use was so mangled. Will give Qwen 3.5 35B another shot. | 1 | 0 | 2026-02-28T12:34:26 | InternationalNebula7 | false | null | 0 | o7vbu2c | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vbu2c/ | false | 1 |
t1_o7vbtrc | 122B 4bit quant on a single spark runs quite well, goodbye gpt oss! | 1 | 0 | 2026-02-28T12:34:22 | yondercode | false | null | 0 | o7vbtrc | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vbtrc/ | false | 1 |
t1_o7vbruo | What quant? | 2 | 0 | 2026-02-28T12:33:58 | legit_split_ | false | null | 0 | o7vbruo | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vbruo/ | false | 2 |
t1_o7vbnj8 | Sub ten tokens per second is going to make sense in very, very specific use cases. For almost everyone, it just sucks. For someone who is looking to BUY hardware, it would be extremely misguided. | 1 | 0 | 2026-02-28T12:33:03 | Hector_Rvkp | false | null | 0 | o7vbnj8 | false | /r/LocalLLaMA/comments/1rg47i3/say_i_want_my_own_claude/o7vbnj8/ | false | 1 |
t1_o7vblre | Yes let's go 😁 | 2 | 0 | 2026-02-28T12:32:41 | Dreifach-M | false | null | 0 | o7vblre | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vblre/ | false | 2 |
t1_o7vblaq | Hegseth on X: "I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the De... | 3 | 0 | 2026-02-28T12:32:35 | _tresmil_ | false | null | 0 | o7vblaq | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vblaq/ | false | 3 |
t1_o7vbka1 | it's funny because as an advertiser image/video/music gen is core part of my workflow | 6 | 0 | 2026-02-28T12:32:22 | ivari | false | null | 0 | o7vbka1 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vbka1/ | false | 6 |
t1_o7vbg75 | True | 1 | 0 | 2026-02-28T12:31:29 | Psyko38 | false | null | 0 | o7vbg75 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vbg75/ | false | 1 |
t1_o7vbbfk | I think I understand. You’re running forced alignment with the audio and the ASR text, then using heuristics to screen disfluencies…?
I still don’t get how you can check if the pronunciation is correct though. You will get the compounded error rate of CrisperWhisper + the acoustic model.
If a person has a strong acce... | 1 | 0 | 2026-02-28T12:30:28 | str8it | false | null | 0 | o7vbbfk | false | /r/LocalLLaMA/comments/1r7j7kb/the_guy_that_won_the_nvidia_hackathon_and_an/o7vbbfk/ | false | 1 |
t1_o7vb9mg | I noticed a lot of European intellectuals can barely hold a conversation with normal people in teams and classrooms in the real world. It makes their brilliance useless for long term projects where they fail miserably because they don't have any communication skills. | -2 | 0 | 2026-02-28T12:30:06 | Elite_Crew | false | null | 0 | o7vb9mg | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7vb9mg/ | false | -2 |
t1_o7vb6yb | Unsloth. Figures. I'm not sure why anyone uses them TBH. | 1 | 0 | 2026-02-28T12:29:32 | Monkey_1505 | false | null | 0 | o7vb6yb | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7vb6yb/ | false | 1 |
t1_o7vb2vv | Correct, but it's a lot faster when it offloads into the ram compared to a dense model. | 8 | 0 | 2026-02-28T12:28:40 | SystematicKarma | false | null | 0 | o7vb2vv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vb2vv/ | false | 8 |
t1_o7vb2ti | Go apple then, like Emmanuel Macron would add, for sure. 128gb is already very, very capable, and will continue to be. If you want to stretch your budget, you won't regret 256gb either. I wouldn't get more because the bandwidth becomes insufficient / it stops making sense. If you can, play tricks, cheap out on storage ... | 2 | 0 | 2026-02-28T12:28:39 | Hector_Rvkp | false | null | 0 | o7vb2ti | false | /r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7vb2ti/ | false | 2 |
t1_o7vb2ee | I use the Linux bin from the release page because my local build broke. There's a Linux dist there, but I use it on fedora | 1 | 0 | 2026-02-28T12:28:33 | Effective_Head_5020 | false | null | 0 | o7vb2ee | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vb2ee/ | false | 1 |
t1_o7vb1i3 | I was a windows first, Ubuntu occasionally kind of user for over a decade (windows user since the mid 90s) switched to cachyos in November of 2022 when I was trying to get Stable Diffusion 1.5 to run and CUDA was being frustrating on both windows and ubuntu. I deleted my windows backup partition a few months later and ... | 1 | 0 | 2026-02-28T12:28:22 | giblesnot | false | null | 0 | o7vb1i3 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vb1i3/ | false | 1 |
t1_o7vaz8g | Yea was wondering the same, when asking stuff it often says things like "as of 2024"...
When asking it about the data it says trained with data from 2026. | 1 | 0 | 2026-02-28T12:27:54 | T3KO | false | null | 0 | o7vaz8g | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vaz8g/ | false | 1 |
t1_o7vaz10 | No, it doesn't fit into VRAM, we have to have the system RAM with it. | 3 | 0 | 2026-02-28T12:27:51 | Psyko38 | false | null | 0 | o7vaz10 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vaz10/ | false | 3 |
t1_o7vawxx | chronology problems have plagued LLMs since the start. it depends on which part of its training data it has chosen to focus on. try again tomorrow with a fresh instance and you'll get a different answer. but the modern large models are figuring out how to work through it.
I'm surprised you're surprised. | 4 | 0 | 2026-02-28T12:27:25 | brickout | false | null | 0 | o7vawxx | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vawxx/ | false | 4 |
t1_o7vavn4 | Sure buddy. Third time's the charm | 2 | 0 | 2026-02-28T12:27:08 | I-am_Sleepy | false | null | 0 | o7vavn4 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vavn4/ | false | 2 |
t1_o7vasna | Lots of flavors. I like Linux mint as well. It's nice in general, I use the cinnamon version. | 1 | 0 | 2026-02-28T12:26:29 | ArtfulGenie69 | false | null | 0 | o7vasna | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vasna/ | false | 1 |
t1_o7vaqo4 | It is good for me! I tried it, though it is using 95% of my 16 gb RAM. The outputs are worth it, and one question: can we divide the load to the GPU? Or does it automatically have the ability to load on both CPU and GPU?
NOTE: THE RESULTS ARE BETTER AFTER SOME CHANGES IN PARAMETERS TO GET THE PRECISION | 1 | 0 | 2026-02-28T12:26:03 | Less_Strain7577 | false | null | 0 | o7vaqo4 | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vaqo4/ | false | 1 |
t1_o7vanzb | the tool call reliability pattern you're seeing tracks - 100B+ seems to be the inflection point where models can actually maintain state across a multi-hop tool sequence without losing the thread. curious whether the orchestrator or the workers were the bigger failure point in the smaller models. | 10 | 0 | 2026-02-28T12:25:29 | BC_MARO | false | null | 0 | o7vanzb | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7vanzb/ | false | 10 |
t1_o7vakc3 | Can someone explain in ELI5? (or maybe 18 :)) | 1 | 0 | 2026-02-28T12:24:42 | Uranday | false | null | 0 | o7vakc3 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7vakc3/ | false | 1 |
t1_o7vai4q | | 3 | 0 | 2026-02-28T12:24:14 | snapo84 | false | null | 0 | o7vai4q | false | /r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o7vai4q/ | false | 3 |
t1_o7vaf70 | What models from ByteDance Seed have you tested? | 2 | 0 | 2026-02-28T12:23:36 | abdouhlili | false | null | 0 | o7vaf70 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vaf70/ | false | 2 |
t1_o7vadq0 | For me it is the first time I'm dipping my toes. So I can only compare qwen 3.5 in lm studio vs Llama.cpp. | 2 | 0 | 2026-02-28T12:23:17 | Uranday | false | null | 0 | o7vadq0 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vadq0/ | false | 2 |
t1_o7vacbi | Can you expand on this? I use a Mistral Agent as my Linux-go-to. | 1 | 0 | 2026-02-28T12:23:00 | sendmebirds | false | null | 0 | o7vacbi | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vacbi/ | false | 1 |
t1_o7vac6t | if *that* is their limit I now wonder what they have already been doing for the Trump administration | 6 | 0 | 2026-02-28T12:22:58 | t_krett | false | null | 0 | o7vac6t | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vac6t/ | false | 6 |
t1_o7vabn6 | I agree that video/image generation are not useful, but a multimodal with vision is good for agentic coding as it is able to get UI feedback and iterate on it. | 5 | 0 | 2026-02-28T12:22:51 | tarruda | false | null | 0 | o7vabn6 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vabn6/ | false | 5 |
t1_o7va917 | Search Firefox recent privacy backlash in Gemini. | 1 | 0 | 2026-02-28T12:22:16 | Hector_Rvkp | false | null | 0 | o7va917 | false | /r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/o7va917/ | false | 1 |
t1_o7va5qp | hopefully they also provide a qwen3.5 based multimodal embedding model 🎉 | 10 | 0 | 2026-02-28T12:21:34 | meganoob1337 | false | null | 0 | o7va5qp | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7va5qp/ | false | 10 |
t1_o7v9z1h | With 6GB of VRAM you can definitely run Qwen3-VL-4B at lower quants (Q4). If you want a bigger quant, look into LFM2-VL-3B. Quite a bit smaller, so you could use Q6 or something. There's also SmolVLM2-2.2B, which is useable at Q8 on a 6GB GPU. Alternatively, there are some finetunes of GPT-OSS 20B with Vision capabilit... | 1 | 0 | 2026-02-28T12:20:07 | Skitzenator | false | null | 0 | o7v9z1h | false | /r/LocalLLaMA/comments/1rh0zcn/suggest_me_vision_instrcut_model_that_i_can_run/o7v9z1h/ | false | 1 |
t1_o7v9xjw | No, that's a US citizen trying to pay for a higher education. | 15 | 0 | 2026-02-28T12:19:48 | Turbulent_Pin7635 | false | null | 0 | o7v9xjw | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9xjw/ | false | 15 |
t1_o7v9wel | Nice, thank you! | 1 | 0 | 2026-02-28T12:19:32 | Thrumpwart | false | null | 0 | o7v9wel | false | /r/LocalLLaMA/comments/1rgbwwh/lora_training_vs_fft_what_do_i_need_to_know/o7v9wel/ | false | 1 |
t1_o7v9w12 | https://www.anthropic.com/news/statement-department-of-war | 1 | 0 | 2026-02-28T12:19:27 | eposnix | false | null | 0 | o7v9w12 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v9w12/ | false | 1 |
t1_o7v9vsv | qwen3.5-27b-claude-4.6-opus-reasoning-distilled
I'm really impressed with this particular variant for collaboratively writing fiction (not just trying to one-shot it). Collaborative meaning I give it a list of writing style rules and a story outline for reference, then prompt it point-by-point with snippets from the o... | 1 | 0 | 2026-02-28T12:19:24 | novelide | false | null | 0 | o7v9vsv | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v9vsv/ | false | 1 |
t1_o7v9sqv | I'm at your point right now. Still tuning, not yet vision working.
Next step programing tools. If I get it working I post it here, but I also will follow this post. | 2 | 0 | 2026-02-28T12:18:45 | Uranday | false | null | 0 | o7v9sqv | false | /r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/o7v9sqv/ | false | 2 |
t1_o7v9smh | Gemini absolutely suck for coding though. | 1 | 0 | 2026-02-28T12:18:43 | DurianDiscriminat3r | false | null | 0 | o7v9smh | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v9smh/ | false | 1 |
t1_o7v9qym | Respectfully disagree. I’ve been running q4 Qwen3-14b MLX on 16gb Mac Air M4. There’s sometimes slight lag in other apps but it works surprisingly well. | 1 | 0 | 2026-02-28T12:18:22 | jarec707 | false | null | 0 | o7v9qym | false | /r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o7v9qym/ | false | 1 |
t1_o7v9qfg | Sure, but they didn't support giving up their tech for autonomous killing machines. Yep, the bar is set there... | 6 | 0 | 2026-02-28T12:18:15 | Any_Fox5126 | false | null | 0 | o7v9qfg | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v9qfg/ | false | 6 |
t1_o7v9q4r | No need, just download already compiled version from here: https://github.com/ggml-org/llama.cpp/releases/tag/b8180 | 5 | 0 | 2026-02-28T12:18:11 | hackiv | false | null | 0 | o7v9q4r | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9q4r/ | false | 5 |
t1_o7v9pjp | Isn't this an article from last year, that has only been recently updated to include a comment about Qwen 3.5? | 25 | 0 | 2026-02-28T12:18:04 | Egoz3ntrum | false | null | 0 | o7v9pjp | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7v9pjp/ | false | 25 |
t1_o7v9p9t | Really? Across multiple models, or an isolated incident? I ask because I see llama.cpp updates fairly regularly in LM Studio. As long as the versions of llama.cpp are the same, I wouldn't imagine there would be a huge difference | 1 | 0 | 2026-02-28T12:18:01 | _-_David | false | null | 0 | o7v9p9t | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9p9t/ | false | 1 |
t1_o7v9og6 | This article was recently updated to showcase the new Qwen3.5 GGUF benchmarks which we did here, which shows Unsloth's performing consistently low for GGUFs: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel1... | 50 | 0 | 2026-02-28T12:17:51 | yoracale | false | null | 0 | o7v9og6 | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7v9og6/ | false | 50 |
t1_o7v9ob4 | By talk you mean gifts or bribes then maybe | 0 | 0 | 2026-02-28T12:17:49 | DurianDiscriminat3r | false | null | 0 | o7v9ob4 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v9ob4/ | false | 0 |
t1_o7v9n6c | Just... use docker? | 8 | 0 | 2026-02-28T12:17:34 | dodiyeztr | false | null | 0 | o7v9n6c | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9n6c/ | false | 8 |
t1_o7v9ma3 | yeah same tbh. i kept blaming my prompts but ngl qwen3.5 just... stays on task in a way previous models didn't. been running it with opencode and it'll grind through like 3-4 chained tasks without going off the rails. feels less like fighting the model and more like actually delegating | 1 | 0 | 2026-02-28T12:17:22 | salmenus | false | null | 0 | o7v9ma3 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v9ma3/ | false | 1 |
t1_o7v9l09 | I’m surprised at how slow it is because nemotron and qwen 30b are much better. Although the reasoning and output was quite good.
I noticed that 5060ti 16GB are below £400 again, so may pick up two of those for the poor man’s LLM rig. I use CoPilot and Gemini for serious work but would like to do some RAG and home aut... | 1 | 0 | 2026-02-28T12:17:05 | Ornery-Turnip-8035 | false | null | 0 | o7v9l09 | false | /r/LocalLLaMA/comments/1r1c7ct/no_gpu_club_how_many_of_you_do_use_local_llms/o7v9l09/ | false | 1 |
t1_o7v9kxl | It should recommend you repost to r/CrackheadCraigslist | 2 | 0 | 2026-02-28T12:17:04 | JVBass75 | false | null | 0 | o7v9kxl | false | /r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o7v9kxl/ | false | 2 |
t1_o7v9jxw | [deleted] | 1 | 0 | 2026-02-28T12:16:51 | [deleted] | true | null | 0 | o7v9jxw | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7v9jxw/ | false | 1 |
t1_o7v9jsu | Did anything help you get it deployed? GPT couldn’t seem to help me get the GPU connected. | 1 | 0 | 2026-02-28T12:16:50 | SoMuchLasagna | false | null | 0 | o7v9jsu | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v9jsu/ | false | 1 |
t1_o7v9ifc | So you know how opencode compares to codex with qwen? | 0 | 0 | 2026-02-28T12:16:32 | Uranday | false | null | 0 | o7v9ifc | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9ifc/ | false | 0 |
t1_o7v9gjv | I am going to submit a PR my dear :) | 1 | 0 | 2026-02-28T12:16:07 | fab_space | false | null | 0 | o7v9gjv | false | /r/LocalLLaMA/comments/1q9hu43/i_built_an_endtoend_local_llm_finetuning_gui_for/o7v9gjv/ | false | 1 |
t1_o7v9eni | I started there. It was slow, so with chatgpt got Llama.cpp working... Blazing fast. | 1 | 0 | 2026-02-28T12:15:42 | Uranday | false | null | 0 | o7v9eni | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v9eni/ | false | 1 |
t1_o7v9baf | They used to generate images using tool calls, but nowadays, most of the image is generated by the model itself in the case of gpt-image. No idea what Nano-Banana actually is though, it's marketed as if it's a separate model, but it's also often called Gemini image, so maybe it's a variant of the LLM tuned for better i... | 9 | 0 | 2026-02-28T12:14:58 | paperbenni | false | null | 0 | o7v9baf | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v9baf/ | false | 9 |
t1_o7v977g | qwen3-4b-instruct-2507
how much ram do you have btw? | 2 | 0 | 2026-02-28T12:14:03 | nunodonato | false | null | 0 | o7v977g | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7v977g/ | false | 2 |
t1_o7v929y | I would suggest none of these and just ask you to use Qwen3.5 35BA3B with experts offloaded to cpu if vram is your constraint or a quantized version of 27B dense if fitting in vram is your concern. Or wait for the upcoming smaller 3.5 releases. These models are much better at vision than any previous iterations. | 3 | 0 | 2026-02-28T12:12:56 | lacerating_aura | false | null | 0 | o7v929y | false | /r/LocalLLaMA/comments/1rh0zcn/suggest_me_vision_instrcut_model_that_i_can_run/o7v929y/ | false | 3 |
t1_o7v8z7q | I couldn't care less about image/video
I need cheap and fast for agentic/coding capabilities
I'd like something that understands my project and constantly iterate on it at light speed
Anything else is a waste of resources for gooners
Usage & Limits & Downgrade all because of the furries doing RP and other weird sh... | -6 | 1 | 2026-02-28T12:12:15 | Ambitious-Call-7565 | false | null | 0 | o7v8z7q | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v8z7q/ | false | -6 |
t1_o7v8yuq | yes 4090 - so 24gb.. (and 64 system, but i guess offloading wouldn't offer benefits also not having a qwen3.5 model size that would offer benefits then) | 2 | 0 | 2026-02-28T12:12:10 | vogelvogelvogelvogel | false | null | 0 | o7v8yuq | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v8yuq/ | false | 2 |
t1_o7v8yf2 | No I haven't. I've seen some posts about it. Have you? If so, what do you think about the model? | 1 | 0 | 2026-02-28T12:12:05 | melanov85 | false | null | 0 | o7v8yf2 | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7v8yf2/ | false | 1 |
t1_o7v8wef | Congratulations, is this 5090 or something else? | 3 | 0 | 2026-02-28T12:11:37 | jacek2023 | false | null | 0 | o7v8wef | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v8wef/ | false | 3 |
t1_o7v8sxz | Yep, ready!
I bought a new GPU with 32GB VRAM today and will hook it in a couple of hours.
I’ll finally be able to play with small dense models at decent speeds :)
| 14 | 0 | 2026-02-28T12:10:50 | ProfessionalSpend589 | false | null | 0 | o7v8sxz | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v8sxz/ | false | 14 |
t1_o7v8rtf | ChatGPT ahahahahh ^)))) Yeah u alone! | 3 | 0 | 2026-02-28T12:10:34 | Sunbait | false | null | 0 | o7v8rtf | false | /r/LocalLLaMA/comments/1re0zus/mercury_2_diffusion_model_speed_is_insane_if/o7v8rtf/ | false | 3 |
t1_o7v8rim | If you report next week every week, you will get it right at some point. I believe in you. | 40 | 0 | 2026-02-28T12:10:30 | nullmove | false | null | 0 | o7v8rim | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v8rim/ | false | 40 |
t1_o7v8ox2 | You can try [CLIO](https://github.com/SyntheticAutonomicMind/CLIO):
clio --new
: /api set provider llama.cpp
: /api key llama
: Hello World! | 1 | 0 | 2026-02-28T12:09:56 | Total-Context64 | false | null | 0 | o7v8ox2 | false | /r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/o7v8ox2/ | false | 1 |
t1_o7v8ox3 | you still need a custom machine to do that though | 1 | 0 | 2026-02-28T12:09:56 | Western_Objective209 | false | null | 0 | o7v8ox3 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v8ox3/ | false | 1 |
t1_o7v8m8s | It shouldn't affect the 397B one that much; we're going to reupload the 3 variants and update you guys hopefully soon. Rather, the more important fix is the tool-calling fix, which is why we have to reupload.
For now, you are free to use any other quant if you'd like. | 2 | 0 | 2026-02-28T12:09:21 | yoracale | false | null | 0 | o7v8m8s | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7v8m8s/ | false | 2 |
t1_o7v8lpe | It shouldn't affect the 397B one that much; we're going to reupload the 3 variants and update you guys hopefully soon. Rather, the more important fix is the tool-calling fix, which is why we have to reupload.
For now, you are free to use any other quant if you'd like. | 2 | 0 | 2026-02-28T12:09:14 | yoracale | false | null | 0 | o7v8lpe | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7v8lpe/ | false | 2 |
t1_o7v8lqi | You can also just download github releases or build from source. | 13 | 0 | 2026-02-28T12:09:14 | spaceman_ | false | null | 0 | o7v8lqi | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v8lqi/ | false | 13 |
t1_o7v8ize | Does this mean, it would be a good idea to re-encode all models? | 16 | 0 | 2026-02-28T12:08:38 | paranoidray | false | null | 0 | o7v8ize | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7v8ize/ | false | 16 |
t1_o7v8iot | In my day it was shitposters on 4chan. Back when our model parameter sizes were measured with M's not B's, and outputs that read like a dementia patient having a fever dream were deciphered like messages from an oracle. | 1 | 0 | 2026-02-28T12:08:34 | TheActualDonKnotts | false | null | 0 | o7v8iot | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v8iot/ | false | 1 |
t1_o7v8ik5 | This would be a really double-edged sword situation. IF it is to be believed that their model will be an omni, it'll be nearly impossible for the community in general to make finetunes of it. Which is a BIG part of the image/video gen community. There are many reasons for fine tuning and LoRA creation and a Trillion plus mo... | 7 | 0 | 2026-02-28T12:08:32 | lacerating_aura | false | null | 0 | o7v8ik5 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v8ik5/ | false | 7 |
t1_o7v8i50 | Trump wants AI for mass-surveillance of Americans and Hegseth wants autonomous weapons. Anthropic says that goes against their rules. | 1 | 0 | 2026-02-28T12:08:26 | SinnerP | false | null | 0 | o7v8i50 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v8i50/ | false | 1 |
t1_o7v8a3s | LM studio has a button on the sidebar to disable thinking for me. Only when the official model is loaded though, other peoples quants dont show the button. | 6 | 0 | 2026-02-28T12:06:39 | Uncle___Marty | false | null | 0 | o7v8a3s | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7v8a3s/ | false | 6 |
t1_o7v897r | It seems fine. I'm running some real life tests later today. | 2 | 0 | 2026-02-28T12:06:26 | schnauzergambit | false | null | 0 | o7v897r | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7v897r/ | false | 2 |
t1_o7v898g | Be that as it may, I don't remember Qwen (or other Chinese) models being released on weekends.
If they are adding new models to the collection days in advance, it would be sooner than is typical, but not inconceivable. | 2 | 0 | 2026-02-28T12:06:26 | rerri | false | null | 0 | o7v898g | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v898g/ | false | 2 |
t1_o7v880c | No kidding? How is it handling the harness? I really am comfortable in Codex and I'd use it over OpenCode, but I've had vague issues with models in the past I can't really recall. And with the 27b dense this strong, I am psyched about the small dense models. Not only as a speculative decoding model for the 27b, but sup... | 1 | 0 | 2026-02-28T12:06:09 | _-_David | false | null | 0 | o7v880c | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v880c/ | false | 1 |
t1_o7v86kq | I can't. I'm on openSUSE Tumbleweed. The only pre-build version I've found is in the distro repos and that one is outdated. | -3 | 1 | 2026-02-28T12:05:49 | LosEagle | false | null | 0 | o7v86kq | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v86kq/ | false | -3 |
t1_o7v85wc | Would be funny if as a result Anthropic is forced to buy GPUs from China, thus strengthening them even further | 6 | 0 | 2026-02-28T12:05:40 | takutekato | false | null | 0 | o7v85wc | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v85wc/ | false | 6 |
t1_o7v82lz | Chinese models are mostly distilled Claude. | -2 | 0 | 2026-02-28T12:04:54 | aftersox | false | null | 0 | o7v82lz | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v82lz/ | false | -2 |
t1_o7v80xu | Same people who run deepseek 3. | 1 | 0 | 2026-02-28T12:04:31 | -Ellary- | false | null | 0 | o7v80xu | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v80xu/ | false | 1 |
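Each row above follows the schema in the table header (comment id, body, score, timestamp, author, permalink, …). As a minimal sketch of how a chunk like this can be queried, the snippet below builds a small pandas DataFrame from three scores that appear in the rows above and pulls out the top-scored comment; a real workflow would load the full dataset split instead of inlining rows.

```python
import pandas as pd

# Three (id, score, author) triples copied from rows in the table above.
# In practice you would read the whole split, e.g. from Parquet or JSONL.
rows = [
    {"id": "o7v8rim", "score": 40, "author": "nullmove"},
    {"id": "o7v9og6", "score": 50, "author": "yoracale"},
    {"id": "o7vcbjz", "score": -5, "author": "beijinghouse"},
]
df = pd.DataFrame(rows)

# Highest-scored comment in this sample:
top = df.sort_values("score", ascending=False).iloc[0]
print(top["id"], top["score"])  # → o7v9og6 50
```

The same `sort_values` pattern extends to any of the other columns (e.g. filtering on `controversiality` or sorting by `created`).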