name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8i209j | 10000% | 1 | 0 | 2026-03-03T23:27:06 | Borkato | false | null | 0 | o8i209j | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i209j/ | false | 1 |
t1_o8i1uez | Tried it out, works surprisingly well, wow! | 1 | 0 | 2026-03-03T23:26:10 | Interesting-Town-433 | false | null | 0 | o8i1uez | false | /r/LocalLLaMA/comments/1r2ehfv/anyone_have_qwen_image_edit_working_reliably_in/o8i1uez/ | false | 1 |
t1_o8i1rm1 | r/hardwareswap is another good sub for deals if you feel like you can trust the redditor. I buy/sell GPUs etc. there | 1 | 0 | 2026-03-03T23:25:44 | tat_tvam_asshole | false | null | 0 | o8i1rm1 | false | /r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/o8i1rm1/ | false | 1 |
t1_o8i1qqq | The real joke is the "Thought for 43 seconds" | 1 | 0 | 2026-03-03T23:25:36 | segfawlt | false | null | 0 | o8i1qqq | false | /r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/o8i1qqq/ | false | 1 |
t1_o8i1pza | This isn’t helpful at all but I was too scared to buy used so I ended up paying $1100 for a new 3090 from Amazon. I just feel like if I’m going to dump $800-1000 I might as well have the protections Amazon offers 🤷 | 1 | 0 | 2026-03-03T23:25:29 | Borkato | false | null | 0 | o8i1pza | false | /r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/o8i1pza/ | false | 1 |
t1_o8i1jko | Oh what really? Does that stop it from outputting the tags? Where do I put that? | 1 | 0 | 2026-03-03T23:24:29 | Borkato | false | null | 0 | o8i1jko | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i1jko/ | false | 1 |
t1_o8i1hot | Can anyone explain the situation? I’m still confused on what happened | 1 | 0 | 2026-03-03T23:24:11 | Ok_Brain_2376 | false | null | 0 | o8i1hot | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i1hot/ | false | 1 |
t1_o8i1dbt | qwen3.5 35b should be able to run okay-ish with most experts on CPU. Give it a go with llama.cpp, and try fit-ctx 40000 first and adjust according to speed. (I'm running fine on a 12 GB VRAM + 32 GB RAM combo with 35-40 tk/s, so you should be in 20-30 tk/s territory with 100k context) | 1 | 0 | 2026-03-03T23:23:31 | Xantrk | false | null | 0 | o8i1dbt | false | /r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/o8i1dbt/ | false | 1 |
t1_o8i1cm1 | IIRC, models tend to really only put weight on the top of the context pile. | 1 | 0 | 2026-03-03T23:23:24 | 3spky5u-oss | false | null | 0 | o8i1cm1 | false | /r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/o8i1cm1/ | false | 1 |
t1_o8i18ul | And set enable_thinking to false. | 1 | 0 | 2026-03-03T23:22:48 | schnauzergambit | false | null | 0 | o8i18ul | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i18ul/ | false | 1 |
t1_o8i10li | This person is an intern but their boss also stepped down | 1 | 0 | 2026-03-03T23:21:30 | eli_pizza | false | null | 0 | o8i10li | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i10li/ | false | 1 |
t1_o8i0z32 | Wait is it still happening | 1 | 0 | 2026-03-03T23:21:16 | Borkato | false | null | 0 | o8i0z32 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8i0z32/ | false | 1 |
t1_o8i0te4 | Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.* | 1 | 0 | 2026-03-03T23:20:22 | WithoutReason1729 | false | null | 0 | o8i0te4 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i0te4/ | true | 1 |
t1_o8i0st2 | Are you perhaps confusing Junyang Lin with Kaixin LI? (And Qwen with Gwen) | 1 | 0 | 2026-03-03T23:20:17 | eli_pizza | false | null | 0 | o8i0st2 | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i0st2/ | false | 1 |
t1_o8i0suq | Smaller weights will always be (to some degree) subpar compared to the gigantic, often trillion-parameter cloud models though. | 1 | 0 | 2026-03-03T23:20:17 | citrusalex | false | null | 0 | o8i0suq | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i0suq/ | false | 1 |
t1_o8i0ovg | CPU-only LLM setups are actually more common than people think. I’ve seen people run 7B–13B models with llama.cpp on 64–128GB RAM machines. It’s obviously much slower than GPU, but for experimenting it can still be surprisingly usable.
What I’ve noticed though is that many people start CPU-only and eventually switch to remote GPUs instead of buying hardware. GPUs are expensive, and for a lot of workloads they end up idle most of the time. Renting GPU only when you need it often makes more sense.
Curious what models people here are running on CPU and what speeds you’re getting. | 1 | 0 | 2026-03-03T23:19:40 | Much_Marionberry3981 | false | null | 0 | o8i0ovg | false | /r/LocalLLaMA/comments/1r1c7ct/no_gpu_club_how_many_of_you_do_use_local_llms/o8i0ovg/ | false | 1 |
t1_o8i0naw | You're absolutely right! | 1 | 0 | 2026-03-03T23:19:25 | CheatCodesOfLife | false | null | 0 | o8i0naw | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8i0naw/ | false | 1 |
t1_o8i0l4i | Doing WHAT to its parameters?!? 😳 | 1 | 0 | 2026-03-03T23:19:05 | Borkato | false | null | 0 | o8i0l4i | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i0l4i/ | false | 1 |
t1_o8i0joa | You can turn its thinking off entirely with --reasoning-budget 0 on llama.cpp | 1 | 0 | 2026-03-03T23:18:52 | Borkato | false | null | 0 | o8i0joa | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i0joa/ | false | 1 |
t1_o8i0hul | I've always been fascinated by communication and it's in large part why I find LLMs so interesting. There's just something amazing about seeing something so fundamentally human removed from the context it's always been in, removed from consciousness, and run on a different infrastructure than our brains and with different rules but still viable in a way. | 1 | 0 | 2026-03-03T23:18:35 | toothpastespiders | false | null | 0 | o8i0hul | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i0hul/ | false | 1 |
t1_o8i0hml | Read "Human Action". You have been misled. | 1 | 0 | 2026-03-03T23:18:33 | crantob | false | null | 0 | o8i0hml | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8i0hml/ | false | 1 |
t1_o8i08hx | "Hello everyone, I have made a GUI for ChatGPT that lets you make folders for your chats, would you use it?" | 1 | 0 | 2026-03-03T23:17:10 | trusty20 | false | null | 0 | o8i08hx | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i08hx/ | false | 1 |
t1_o8i08dl | What if I want to run it using llama.cpp? Where can I download the mmproj file? Since llama.cpp still needs an mmproj file for vision | 1 | 0 | 2026-03-03T23:17:09 | juandann | false | null | 0 | o8i08dl | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8i08dl/ | false | 1 |
t1_o8i07ko | taking that as a compliment 🙏 | 1 | 0 | 2026-03-03T23:17:02 | supermalvo | false | null | 0 | o8i07ko | false | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/o8i07ko/ | false | 1 |
t1_o8i06ot | From Unsloth, I've tried Q4_K_XL, UD-Q6_K_XL, and Q8_0. I've also tried a Q6 from Bartowski, if I remember correctly. They all suffer from the same issue.
I'm using a Tesla M40 with dual Xeon 2697A-V4, with llama.cpp version 8148, but I'll update llama.cpp again, as it seems to have had a lot of updates since last week.
Using f32 for KV-cache helps alleviate the issue, but it doesn't go away completely; I don't know too much about this stuff, so I've asked Claude and Gemini about it and they both say that it looks like some sort of KV-cache corruption.
I don't see this issue with any of the other Qwen3.5 models though.
I also just use plain chat with the model. | 1 | 0 | 2026-03-03T23:16:54 | plopperzzz | false | null | 0 | o8i06ot | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8i06ot/ | false | 1 |
t1_o8i052r | Essentially through reddit posts and meta ads. That's the strategy. | 1 | 0 | 2026-03-03T23:16:39 | supermalvo | false | null | 0 | o8i052r | false | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/o8i052r/ | false | 1 |
t1_o8hzzgm | Are you going to have support for Qwen3.5-9B on Mac? | 1 | 0 | 2026-03-03T23:15:49 | Xorita | false | null | 0 | o8hzzgm | false | /r/LocalLLaMA/comments/1r3wgi3/what_is_best_mac_app_store_alternative_to/o8hzzgm/ | false | 1 |
t1_o8hzryi | monster | 1 | 0 | 2026-03-03T23:14:40 | Complainer_Official | false | null | 0 | o8hzryi | false | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/o8hzryi/ | false | 1 |
t1_o8hzqyl | I have, but I don’t see much of a difference!! What tasks are you trying that make a huge difference? | 1 | 0 | 2026-03-03T23:14:31 | Borkato | false | null | 0 | o8hzqyl | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8hzqyl/ | false | 1 |
t1_o8hzkfn | Put down the crack pipe, lilbro. | 1 | 0 | 2026-03-03T23:13:31 | RuthlessCriticismAll | false | null | 0 | o8hzkfn | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hzkfn/ | false | 1 |
t1_o8hzh3j | have you tried the 27B dense model yet? makes 35B-A3B look dumb. (but it's slower, of course.) | 1 | 0 | 2026-03-03T23:13:01 | HopePupal | false | null | 0 | o8hzh3j | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8hzh3j/ | false | 1 |
t1_o8hzec2 | Thanks everyone! | 1 | 0 | 2026-03-03T23:12:36 | johnnyApplePRNG | false | null | 0 | o8hzec2 | false | /r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/o8hzec2/ | false | 1 |
t1_o8hzc82 | 4x faster prompt processing on M5 is wild. Studio-quality local inference on a laptop is basically already here — the bottleneck is just context window and VRAM ceiling now. | 1 | 0 | 2026-03-03T23:12:17 | theagentledger | false | null | 0 | o8hzc82 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hzc82/ | false | 1 |
t1_o8hz9j2 | Not dead, just unpopular. The irony is you need raw base models to experiment with alignment and fine-tuning — but nobody ships them anymore because instruct is where the downloads go. | 1 | 0 | 2026-03-03T23:11:54 | theagentledger | false | null | 0 | o8hz9j2 | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8hz9j2/ | false | 1 |
t1_o8hyysf | haha appreciate it | 1 | 0 | 2026-03-03T23:10:14 | theagentledger | false | null | 0 | o8hyysf | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hyysf/ | false | 1 |
t1_o8hyp1m | Yes, but only if the underlying model has a permissive license; when I was checking the text-to-3D models a while ago, the frontier options had very limiting licenses! | 1 | 0 | 2026-03-03T23:08:45 | FriskyFennecFox | false | null | 0 | o8hyp1m | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hyp1m/ | false | 1 |
t1_o8hyoxy | Read "Human Action" to understand human action. | 1 | 0 | 2026-03-03T23:08:44 | crantob | false | null | 0 | o8hyoxy | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hyoxy/ | false | 1 |
t1_o8hynzf | Nah, some of them understand the temporal context between the frames, rather than models like Qwen which receive a single frame and don't "understand" what's happening between each of the frames. So, to answer the original question, it's the "screenshot" version. | 1 | 0 | 2026-03-03T23:08:36 | Uncle___Marty | false | null | 0 | o8hynzf | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8hynzf/ | false | 1 |
t1_o8hyo02 | As a qwen user, this really is a kick to the balls.
Forced to resign makes me really question where qwen is heading. | 1 | 0 | 2026-03-03T23:08:36 | mshelbz | false | null | 0 | o8hyo02 | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hyo02/ | false | 1 |
t1_o8hykm9 | I would like a feature to ignore every redditor that downvoted this. | 1 | 0 | 2026-03-03T23:08:06 | crantob | false | null | 0 | o8hykm9 | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hykm9/ | false | 1 |
t1_o8hyjco | I have yet to try them!! I’m still using 35B-A3B. How do the small sizes compare?! | 1 | 0 | 2026-03-03T23:07:54 | Borkato | false | null | 0 | o8hyjco | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8hyjco/ | false | 1 |
t1_o8hyi0x | Actually, although this removed the error, the model still isn't thinking. Even if I also put the true in between escaped \". | 1 | 0 | 2026-03-03T23:07:42 | WowSkaro | false | null | 0 | o8hyi0x | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8hyi0x/ | false | 1 |
t1_o8hyfmt | Truly an end of an era! Always used Qwens locally and some Mistrals. Hopeful for something new! | 1 | 0 | 2026-03-03T23:07:20 | GodComplecs | false | null | 0 | o8hyfmt | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hyfmt/ | false | 1 |
t1_o8hye9w | Is this real-time error propagation? In another post it was mentioned that he was just an intern at Qwen, interesting. | 1 | 0 | 2026-03-03T23:07:08 | hustla17 | false | null | 0 | o8hye9w | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hye9w/ | false | 1 |
t1_o8hycwt | It's always been a fraud. They'll happily make models that kill or censor you but freak out when they see the word "penis".
You are just kinda jailbreaking tho. | 1 | 0 | 2026-03-03T23:06:56 | a_beautiful_rhind | false | null | 0 | o8hycwt | false | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/o8hycwt/ | false | 1 |
t1_o8hy9yp | busted. please don't tell anyone | 1 | 0 | 2026-03-03T23:06:29 | theagentledger | false | null | 0 | o8hy9yp | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8hy9yp/ | false | 1 |
t1_o8hy8yb | Bro theres like a million of these hacks. Even models can be made to never refuse with Heretic etc. | 1 | 0 | 2026-03-03T23:06:20 | GodComplecs | false | null | 0 | o8hy8yb | false | /r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/o8hy8yb/ | false | 1 |
t1_o8hy7r0 | sorry, my safety training kicked in. also that's a suspiciously low-effort jailbreak for a 0.8B model lol | 1 | 0 | 2026-03-03T23:06:09 | theagentledger | false | null | 0 | o8hy7r0 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8hy7r0/ | false | 1 |
t1_o8hy6js | i deployed it on rtx 5090 and using it with opencode, bye bye claude code lool | 1 | 0 | 2026-03-03T23:05:58 | arnav_m_ | false | null | 0 | o8hy6js | false | /r/LocalLLaMA/comments/1rj2gwf/qwen35_30b_is_incredible_for_local_deployment/o8hy6js/ | false | 1 |
t1_o8hy3w1 | Can you compare the UX with using a sandbox? I just run everything in bubblewrap so that the agent has read-only access to parts of my system, toolchains, documentation, related projects etc. | 1 | 0 | 2026-03-03T23:05:34 | RadiantHueOfBeige | false | null | 0 | o8hy3w1 | false | /r/LocalLLaMA/comments/1rjrofr/code_container_safely_run_opencodecodexcc_with/o8hy3w1/ | false | 1 |
t1_o8hy29a | Probably because the other half isn't any better.
When you can't tell when business ends and government begins, it doesn't matter who you vote for. | 1 | 0 | 2026-03-03T23:05:19 | InsideAria | false | null | 0 | o8hy29a | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o8hy29a/ | false | 1 |
t1_o8hy23v | Hey, I can help, but have a question - how much cold boot time is manageable to you? I know the team at dcompute[dot]cloud, which might be happy to sponsor a T4 | 1 | 0 | 2026-03-03T23:05:18 | arnav_m_ | false | null | 0 | o8hy23v | false | /r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/o8hy23v/ | false | 1 |
t1_o8hy0ab | ohh ty for the stats. | 1 | 0 | 2026-03-03T23:05:01 | sexytimeforwife | false | null | 0 | o8hy0ab | false | /r/LocalLLaMA/comments/1r79dcd/qwen_35_397b_is_strong_one/o8hy0ab/ | false | 1 |
t1_o8hxw63 | Is this only for AMD boards? I don't even see MCIO plugs on Xeon v4 | 1 | 0 | 2026-03-03T23:04:24 | ClimateBoss | false | null | 0 | o8hxw63 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8hxw63/ | false | 1 |
t1_o8hxvby | Yes, 3.5 is a pretty big leap it would seem.
I can’t get over how good the small models are, 0.8b, 2b, 4b and 9b. | 1 | 0 | 2026-03-03T23:04:16 | 3spky5u-oss | false | null | 0 | o8hxvby | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8hxvby/ | false | 1 |
t1_o8hxtow | Chipping away at lower entry level jobs. Reimagining work, studies, etc. | 1 | 0 | 2026-03-03T23:04:01 | Prigozhin2023 | false | null | 0 | o8hxtow | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8hxtow/ | false | 1 |
t1_o8hxt5p | This actually makes sense. Discovery is the real problem right now, not building the tools. If you can solve distribution for creators, that is real value. Curious how you plan to drive traffic in the early days. | 1 | 0 | 2026-03-03T23:03:57 | Wooden-Term-1102 | false | null | 0 | o8hxt5p | false | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/o8hxt5p/ | false | 1 |
t1_o8hxo35 | Something went wrong here, I took the non-thinking score. With thinking it's way closer to the top. | 1 | 0 | 2026-03-03T23:03:11 | Balance- | false | null | 0 | o8hxo35 | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8hxo35/ | false | 1 |
t1_o8hxnmv | Yes, that sovereignty you refer to is called *property rights*, which are DEAD in EUSSR | 1 | 0 | 2026-03-03T23:03:07 | crantob | false | null | 0 | o8hxnmv | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hxnmv/ | false | 1 |
t1_o8hxk13 | Great job, that is something that was needed.
I tried using the Hugging Face space and the cloning was good quality (I used a 12s 16kHz wav file), but the Portuguese in the language choice is from Portugal, not pt-BR. I will try to clone your repo and use it on top of KVOICEWALK, which is an open-source voice mixer made for Kokoro that tries to create a new voice similar to the input audio (a kind of cloning) by using a merge of the Kokoro voices.
Probably using your system on top of KVOICEWALK will create a true cloned voice experience.
For those of you curious about it: [https://github.com/RobViren/kvoicewalk](https://github.com/RobViren/kvoicewalk) | 1 | 0 | 2026-03-03T23:02:35 | flavio_geo | false | null | 0 | o8hxk13 | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8hxk13/ | false | 1 |
t1_o8hxjvj | thanks for this, looking forward to 9B | 1 | 0 | 2026-03-03T23:02:33 | LackingAGoodName | false | null | 0 | o8hxjvj | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hxjvj/ | false | 1 |
t1_o8hxibz | Absolutely I would hell yeah | 1 | 0 | 2026-03-03T23:02:20 | DalekCoffee | false | null | 0 | o8hxibz | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hxibz/ | false | 1 |
t1_o8hxi4j | You don't have to remove it, you just lessen the power of the layer that is activating when you receive the refusal. | 1 | 0 | 2026-03-03T23:02:18 | ArtfulGenie69 | false | null | 0 | o8hxi4j | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hxi4j/ | false | 1 |
t1_o8hxh49 | It's quite a quick model. Currently I'm using the managed deployment by the [dcompute.cloud](http://dcompute.cloud) team; I have replaced my Claude Code usage entirely with opencode + this model and it seems to be doing quite fine | 1 | 0 | 2026-03-03T23:02:09 | arnav_m_ | false | null | 0 | o8hxh49 | false | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8hxh49/ | false | 1 |
t1_o8hxgym | >What should I do?
Did you do what the last part of the message tells you to? | 1 | 0 | 2026-03-03T23:02:08 | scorp123_CH | false | null | 0 | o8hxgym | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8hxgym/ | false | 1 |
t1_o8hxgwl | Good to know! | 1 | 0 | 2026-03-03T23:02:07 | dark-light92 | false | null | 0 | o8hxgwl | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8hxgwl/ | false | 1 |
t1_o8hxgay | The risk is to large part due to *regime uncertainty*.
Interventionist policy manufacturing run amok makes genuine (private, market) investment impossible.
RIP EUROPE | 1 | 0 | 2026-03-03T23:02:02 | crantob | false | null | 0 | o8hxgay | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hxgay/ | false | 1 |
t1_o8hxfsz | Legend | 1 | 0 | 2026-03-03T23:01:57 | Significant_Fig_7581 | false | null | 0 | o8hxfsz | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hxfsz/ | false | 1 |
t1_o8hxesm | If I have a 3080 10GB what should I try running the 27b? Also what's FP8 stand for? | 1 | 0 | 2026-03-03T23:01:48 | techperson1234 | false | null | 0 | o8hxesm | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8hxesm/ | false | 1 |
t1_o8hx9k2 | Qwen was one of the only ones delivering small models 9B and below, fuck me. | 1 | 0 | 2026-03-03T23:01:02 | Barubiri | false | null | 0 | o8hx9k2 | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hx9k2/ | false | 1 |
t1_o8hx9gq | [removed] | 1 | 0 | 2026-03-03T23:01:01 | [deleted] | true | null | 0 | o8hx9gq | false | /r/LocalLLaMA/comments/18aj0un/a_model_i_discovered_today_onii1313b_made_for/o8hx9gq/ | false | 1 |
t1_o8hx69f | [deleted] | 1 | 0 | 2026-03-03T23:00:33 | [deleted] | true | null | 0 | o8hx69f | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8hx69f/ | false | 1 |
t1_o8hx447 | Not that impressive on AA to be honest. On LM Arena it fares a bit better (see other comment below).
https://preview.redd.it/qvoh6wdsuwmg1.png?width=3266&format=png&auto=webp&s=727ae1828b339e9ccc15539e74210cd684031e64 | 1 | 0 | 2026-03-03T23:00:14 | Balance- | false | null | 0 | o8hx447 | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8hx447/ | false | 1 |
t1_o8hx16g | Thanks to green energy idiocy, among many other treasonous acts of our misleadership, Europe is a dead man walking. | 1 | 0 | 2026-03-03T22:59:48 | crantob | false | null | 0 | o8hx16g | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hx16g/ | false | 1 |
t1_o8hwz2t | Do you plan to improve your framework and support Qwen 3.5 in the future? | 1 | 0 | 2026-03-03T22:59:30 | Agile_Tangelo6815 | false | null | 0 | o8hwz2t | false | /r/LocalLLaMA/comments/1qssxhx/research_vllmmlx_on_apple_silicon_achieves_21_to/o8hwz2t/ | false | 1 |
t1_o8hwyjx | Not my experience. huihui releases are great and I have had no issues even with ablits back when we had 70b llama models like deepseek r1 distilled. Also lots of other models. I don't usually use Heretic because just about anyone does them; I trust that huihui on huggingface does a pretty good job usually. Also, if you notice, some of the kind authors of the ablits and heretics give you a KL distance from the original source model. That should help you tell how bad they are. | 1 | 0 | 2026-03-03T22:59:25 | ArtfulGenie69 | false | null | 0 | o8hwyjx | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hwyjx/ | false | 1 |
t1_o8hwxxy | When I rendered the first version the other Qwen3.5 scores weren't available. Now they are:
https://preview.redd.it/m9tr9wjkuwmg1.png?width=3270&format=png&auto=webp&s=004583f696ed4b2ca34b6d70a99638e04804a7f0 | 1 | 0 | 2026-03-03T22:59:20 | Balance- | false | null | 0 | o8hwxxy | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8hwxxy/ | false | 1 |
t1_o8hwrfe | Thanks | 1 | 0 | 2026-03-03T22:58:21 | SectionCrazy5107 | false | null | 0 | o8hwrfe | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8hwrfe/ | false | 1 |
t1_o8hwqnl | Thanks, deployed lmdeploy too, not successful yet. | 1 | 0 | 2026-03-03T22:58:15 | SectionCrazy5107 | false | null | 0 | o8hwqnl | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8hwqnl/ | false | 1 |
t1_o8hwoos | EXACTLY same point its stuck for me too. | 1 | 0 | 2026-03-03T22:57:58 | SectionCrazy5107 | false | null | 0 | o8hwoos | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8hwoos/ | false | 1 |
t1_o8hwolt | No breakthroughs come from pissing away money and resources on ideologically-driven false-science megaprojects like green energy. | 1 | 0 | 2026-03-03T22:57:57 | crantob | false | null | 0 | o8hwolt | false | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8hwolt/ | false | 1 |
t1_o8hwlv0 | Okay, it's because I am using Windows. So I had to use " " as the outer delimiters and escape the " inside with \, now it works! | 1 | 0 | 2026-03-03T22:57:33 | WowSkaro | false | null | 0 | o8hwlv0 | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8hwlv0/ | false | 1 |
t1_o8hwc9o | StyleTTS2, on which Kokoro is based, *supports zero-shot cloning* (https://github.com/yl4579/StyleTTS2). Kokoro is a slightly stripped-down version with cloning removed. Why not just use StyleTTS2? Pretrained models are very good quality. Maybe just behind Kokoro, which was trained on a better (partly synthetic) dataset. | 1 | 0 | 2026-03-03T22:56:10 | geneing | false | null | 0 | o8hwc9o | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8hwc9o/ | false | 1 |
t1_o8hwccq | Vax status? | 1 | 0 | 2026-03-03T22:56:10 | dtdisapointingresult | false | null | 0 | o8hwccq | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hwccq/ | false | 1 |
t1_o8hwb1e | How good is Ebay's protection against fraud? Say I buy a card and it turns out to be broken, do I get my money back? Sorry I never really used eBay before. | 1 | 0 | 2026-03-03T22:55:59 | Easy_Werewolf7903 | false | null | 0 | o8hwb1e | false | /r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/o8hwb1e/ | false | 1 |
t1_o8hwaq7 | Nice work - I looked at the Orange Pi Plus but was put off by the reports of bad support on the software side. How did you find getting them both set up - any annoying bugs or missing info, or was it fairly plain sailing? Also, have you happened to measure the watts they pull when running inference over a long run? | 1 | 0 | 2026-03-03T22:55:56 | bbMnty8 | false | null | 0 | o8hwaq7 | false | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8hwaq7/ | false | 1 |
t1_o8hw7n5 | This post is too sad to be a troll.
99% is white girls; they always want to be men | 1 | 0 | 2026-03-03T22:55:30 | Powerful_Evening5495 | false | null | 0 | o8hw7n5 | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8hw7n5/ | false | 1 |
t1_o8hw5lt | It doesn't work, I get a parse error as a: syntax error while parsing value - invalid literal; last read: '''
usage:
--chat-template-kwargs STRING sets additional params for the json template parser, must be a valid json object string, e.g. '{"key1":"value1","key2":"value2"}'
(env: LLAMA_CHAT_TEMPLATE_KWARGS)
And I tried putting true in between " ", changing the outside ' ' to double quotes, removing the space after the :, and a lot of other permutations, and nothing worked. | 1 | 0 | 2026-03-03T22:55:12 | WowSkaro | false | null | 0 | o8hw5lt | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8hw5lt/ | false | 1 |
t1_o8hw44c | My understanding from those videos was that MoE models scaled less well than dense models (even after fixing configurations), which makes sense. But they did all scale, and all scaled almost linearly on prompt processing/prefill. Only when using RDMA over TB5, though. | 1 | 0 | 2026-03-03T22:54:58 | ahjorth | false | null | 0 | o8hw44c | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hw44c/ | false | 1 |
t1_o8hw2n3 | Good luck on your future, Legend❤️ | 1 | 0 | 2026-03-03T22:54:46 | Significant_Fig_7581 | false | null | 0 | o8hw2n3 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hw2n3/ | false | 1 |
t1_o8hvxuu | Weird…which quant size? What hardware? Latest llama.cpp?
What I tested was the Q4_K_M quant, which barely fits into my system with 64GB RAM and 16GB VRAM. Surprisingly it still ran at 12 t/s when context was completely empty. Looked coherent. Didn't try tool calls though, just plain chat. | 1 | 0 | 2026-03-03T22:54:04 | Danmoreng | false | null | 0 | o8hvxuu | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8hvxuu/ | false | 1 |
t1_o8hvwkp | >90% of AI models will run slow on your hardware | 1 | 0 | 2026-03-03T22:53:53 | mr_zerolith | false | null | 0 | o8hvwkp | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8hvwkp/ | false | 1 |
t1_o8hvvs5 | Also known as 3.5 Plus?
That's the best overall:
https://preview.redd.it/8oz69k7qtwmg1.png?width=1930&format=png&auto=webp&s=56f997da22f15a45c0ef262aafddd397895d9ed7
| 1 | 0 | 2026-03-03T22:53:46 | XCSme | false | null | 0 | o8hvvs5 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8hvvs5/ | false | 1 |
t1_o8hvuft | That helps...sounds like Peta governs the trigger, while reasoning lives in traces/logs. The gap I’m trying to understand is whether logs alone are enough when you need to prove a decision path under dispute or audit.
Curious if you’ve seen anyone treat pre-trigger reasoning as something that needs to be sealed rather than just observed. | 1 | 0 | 2026-03-03T22:53:35 | Ok-Telephone2163 | false | null | 0 | o8hvuft | false | /r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/o8hvuft/ | false | 1 |
t1_o8hvqek | The memory bandwidth jump is the part that matters most for local inference — M5 Max at 614 GB/s is what actually determines how fast tokens generate, not the TOPS numbers. The M4 Max is already running Qwen3.5 27B at a usable pace for everyday dev work via MLX. If M5 Max delivers a genuine 2× bandwidth increase on top of that, you're looking at 27B feeling more like a fast API call than a local model — low enough latency that you stop mentally accounting for the wait.
The interesting threshold isn't speed per se, it's whether local gets fast enough that you stop routing anything to a hosted endpoint. For most coding tasks (autocomplete, refactor, explain) the M4 Max is already close. M5 Max probably crosses it. | 1 | 0 | 2026-03-03T22:53:00 | canyoncreativestudio | false | null | 0 | o8hvqek | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8hvqek/ | false | 1 |
t1_o8hvq14 | [deleted] | 1 | 0 | 2026-03-03T22:52:56 | [deleted] | true | null | 0 | o8hvq14 | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hvq14/ | false | 1 |
t1_o8hvmeu | That is really cool on 200 Euro hardware. Thanks for the inspiration. I hope NPU next! | 1 | 0 | 2026-03-03T22:52:24 | pauljdavis | false | null | 0 | o8hvmeu | false | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8hvmeu/ | false | 1 |
t1_o8hvjba |  | 1 | 0 | 2026-03-03T22:51:58 | lubezki | false | null | 0 | o8hvjba | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8hvjba/ | false | 1 |
t1_o8hvdjk | Can you help me? I'm trying to install the latest version and every single time I get an error during installation saying “Failed to create virtual environment: process exited with code 1”
What should I do? | 1 | 0 | 2026-03-03T22:51:07 | lubezki | false | null | 0 | o8hvdjk | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8hvdjk/ | false | 1 |
t1_o8hvbbt | I also made some benchmarks, you can filter for specific models, e.g. just Qwen:
[https://aibenchy.com/charts/?models=qwen-qwen3-5-plus-02-15-medium%2Cqwen-qwen3-5-27b-medium%2Cqwen-qwen3-5-122b-a10b-medium%2Cqwen-qwen3-5-35b-a3b-medium%2Cqwen-qwen3-5-flash-02-23-medium%2Cqwen-qwen3-coder-next-medium](https://aibenchy.com/charts/?models=qwen-qwen3-5-plus-02-15-medium%2Cqwen-qwen3-5-27b-medium%2Cqwen-qwen3-5-122b-a10b-medium%2Cqwen-qwen3-5-35b-a3b-medium%2Cqwen-qwen3-5-flash-02-23-medium%2Cqwen-qwen3-coder-next-medium)
https://preview.redd.it/q09yvjr6twmg1.png?width=837&format=png&auto=webp&s=c1866cda7dce7a6f1eafd0d6fd03149939158981
| 1 | 0 | 2026-03-03T22:50:48 | XCSme | false | null | 0 | o8hvbbt | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8hvbbt/ | false | 1 |