name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o80r7bm | We are training our replacements. | 1 | 0 | 2026-03-01T08:06:02 | GoFigYourself | false | null | 0 | o80r7bm | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80r7bm/ | false | 1 |
t1_o80r693 | Freedom.
Watch this video to understand how much Freedom you have in USA right now
[https://youtu.be/BvLz1bI2sXU](https://youtu.be/BvLz1bI2sXU)
The same more or less applies across Europe right now, especially in UK.
Enjoy.
You cannot have Democracy without Freedom. And since you do not have Freedom any m... | 1 | 0 | 2026-03-01T08:05:45 | ImportancePitiful795 | false | null | 0 | o80r693 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o80r693/ | false | 1 |
t1_o80r55x | I mentioned it because it was just one possible cause of failure. The entire message makes this clear that I am not assuming the cause.
Early inference with new model families is rife with this sort of thing, and in unsloth's case that has happened with more than one model family, including I believe qwen 3.5 ear... | 1 | 0 | 2026-03-01T08:05:29 | Monkey_1505 | false | null | 0 | o80r55x | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80r55x/ | false | 1 |
t1_o80r3xh | >So now I'm rethinking my entire strategy and pulling it back really to just a hobby for myself and not focusing on the family's need.
The good news is that there is no need to pull back. It has always been just a hobby for yourself, you just didn't know it. | 1 | 0 | 2026-03-01T08:05:09 | tmvr | false | null | 0 | o80r3xh | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80r3xh/ | false | 1 |
t1_o80r1vu | There are new models coming from Mistral but they seem incremental (Devstral 2.1) and it sounds like no small models are planned until at least May. | 1 | 0 | 2026-03-01T08:04:36 | spaceman_ | false | null | 0 | o80r1vu | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o80r1vu/ | false | 1 |
t1_o80r0ry | That’s awesome.. thanks for the link | 2 | 0 | 2026-03-01T08:04:19 | Maleficent-Ad5999 | false | null | 0 | o80r0ry | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o80r0ry/ | false | 2 |
t1_o80qz06 | As pointed out here, Qwen3.5 35B A3B could be a very nice one. It has reasoning traces, so responses will feel laggy, but that's what makes the model more capable. It has visual capability if you decide to add the plug-in.
GPT-OSS-20B will be the fastest and has 3 reasoning options. It's ok for light agentic work but... | 2 | 0 | 2026-03-01T08:03:51 | Holiday_Purpose_3166 | false | null | 0 | o80qz06 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80qz06/ | false | 2 |
t1_o80qxzw | Bad intel, bad targeting, a bomb or missile going off-course, all of which point to *not* using AI to make life/death decisions. | 31 | 0 | 2026-03-01T08:03:35 | SkyFeistyLlama8 | false | null | 0 | o80qxzw | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80qxzw/ | false | 31 |
t1_o80qwiv | Does it really matter who pulls the trigger when the intertwined empire seeks to destroy the lives of children in every way?
They just need more victims and increased radicalization on all sides to make sure their wars will continue, creating distractions from what they do closer to home. | 5 | 0 | 2026-03-01T08:03:12 | hum_ma | false | null | 0 | o80qwiv | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80qwiv/ | false | 5 |
t1_o80qtwg | "You are an autonomous AI that's very good at picking human targets for liquidation.
Now use markdown lists and headers to create a kill list..." | 1 | 0 | 2026-03-01T08:02:31 | SkyFeistyLlama8 | false | null | 0 | o80qtwg | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80qtwg/ | false | 1 |
t1_o80qtl0 | I don’t get it, you mention you’ve been inspired by https://github.com/code-yeongyu/oh-my-opencode but the do plugins too? | 3 | 0 | 2026-03-01T08:02:26 | riceinmybelly | false | null | 0 | o80qtl0 | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80qtl0/ | false | 3 |
t1_o80qtjz | [motherfucker, thats called an OS!!!](https://youtu.be/jgYYOUC10aM?si=IIydmMdbV0aoKNv1) | 1 | 0 | 2026-03-01T08:02:25 | howardhus | false | null | 0 | o80qtjz | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80qtjz/ | false | 1 |
t1_o80qqh5 | haters? guys pointing the obvious..
you „can“ put lots of things into UEFI but if you rebuild drivers, disk access, libraries access:
at that point….
[motherfucker, thats called an OS!!!](https://youtu.be/jgYYOUC10aM?si=IIydmMdbV0aoKNv1) | 5 | 0 | 2026-03-01T08:01:39 | howardhus | false | null | 0 | o80qqh5 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80qqh5/ | false | 5 |
t1_o80qmu8 | Please don’t downvote me, given the name of this sub, but I think yes, **if** you are constrained by cash.
The electricity cost alone to run A3B at speed for a whole month, let’s say 4 to 6 hours a day, will be a lot more than $10, on top of the hardware costs. You also **WILL** be spending more on hardware while doin... | 14 | 0 | 2026-03-01T08:00:41 | jslominski | false | null | 0 | o80qmu8 | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80qmu8/ | false | 14 |
t1_o80qm9v | Could ask Haiku to save tokens. | 1 | 0 | 2026-03-01T08:00:32 | ConfusedSimon | false | null | 0 | o80qm9v | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80qm9v/ | false | 1 |
t1_o80ql9p | Probably need to address the elephant in the room in the post: does your family use and want to use AI in the first place?
I'm sure your family realized how important the project was to you which was why they agreed when you suggested the project. But the problem is, it is likely they didn't want it themselves and, af... | 3 | 0 | 2026-03-01T08:00:17 | Intrepid-Second6936 | false | null | 0 | o80ql9p | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ql9p/ | false | 3 |
t1_o80qk7r | Well, you *did* write: “Unsloth. Figures. I’m not sure why anyone uses them TBH,” which kind of insinuates the problem is on Unsloth’s side?
I was just trying to add more context since you might not have seen OP’s earlier comment that other quant providers ran into the same issue too which your opening message read li... | 1 | 0 | 2026-03-01T08:00:01 | yoracale | false | null | 0 | o80qk7r | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80qk7r/ | false | 1 |
t1_o80qigr | 35B A3B is 35B of world data inside a small 3B brain. The post says that even if it beats GPT in intelligence, it's still not even close to its knowledge, so I said it has 4TB of knowledge with a tiny 3B brain | 1 | 0 | 2026-03-01T07:59:33 | KURD_1_STAN | false | null | 0 | o80qigr | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o80qigr/ | false | 1 |
t1_o80qdhm | There's a command line flag you can add
--reasoning-budget 0 iirc | 1 | 0 | 2026-03-01T07:58:16 | ArchdukeofHyperbole | false | null | 0 | o80qdhm | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80qdhm/ | false | 1 |
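The flag recalled above ("iirc") can be wired into a launcher sketch like this one; verify the flag name against your llama.cpp build, since it is quoted from memory:

```python
# Sketch: assemble a llama-server command line with thinking disabled via
# the --reasoning-budget flag mentioned above (0 = no reasoning tokens).
# The model filename is a placeholder.
def server_args(model_path: str, reasoning_budget: int = 0) -> list[str]:
    args = ["llama-server", "-m", model_path]
    if reasoning_budget is not None:
        args += ["--reasoning-budget", str(reasoning_budget)]
    return args

print(server_args("qwen3.5-27b.gguf"))
```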
t1_o80qass | thanks Man. I have been searching for it since yesterday. | 1 | 0 | 2026-03-01T07:57:35 | SearchTricky7875 | false | null | 0 | o80qass | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o80qass/ | false | 1 |
t1_o80q9ij | Any company that does business with them, now is in danger of falling foul of the US government. From AWS to a small contractor. If you use Anthropic you will face sanction..... that will pretty much mean their business is over (hence they're going to court)..... it's the same designation used to bar Chinese tech from ... | 1 | 0 | 2026-03-01T07:57:15 | FrostyParking | false | null | 0 | o80q9ij | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o80q9ij/ | false | 1 |
t1_o80q8fe | We did an update and UD now is much better for MoE models as well: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/) | 2 | 0 | 2026-03-01T07:56:58 | yoracale | false | null | 0 | o80q8fe | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80q8fe/ | false | 2 |
t1_o80q6sj | It only affected Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL and no other quant.
Also, if you didn't see, we did an update so all should now be fixed, and UD now is much better: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/commen... | 3 | 0 | 2026-03-01T07:56:33 | yoracale | false | null | 0 | o80q6sj | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80q6sj/ | false | 3 |
t1_o80q66u | They also said this, right under my reply, "Api version is normal. It seems like I am the only one with broken qwen3.5 so bad" suggesting it's not the model but something on his end. Which roughly matches what I said, no? | 1 | 0 | 2026-03-01T07:56:24 | Monkey_1505 | false | null | 0 | o80q66u | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80q66u/ | false | 1 |
t1_o80q5lf | I'll try it. Which one did you use? This one? Qwen3.5-35B-A3B-UD-Q8\_K\_XL.gguf | 1 | 0 | 2026-03-01T07:56:15 | alsolh | false | null | 0 | o80q5lf | false | /r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o80q5lf/ | false | 1 |
t1_o80q4ak | Honestly, anything medical or financial. I've caught myself rewording health-related questions to be vague before sending to ChatGPT, which kind of defeats the purpose.
Also internal company/work stuff — code reviews with proprietary logic, brainstorming business strategy, drafting emails about sensitive HR situations... | 2 | 0 | 2026-03-01T07:55:55 | WhizKid_dev | false | null | 0 | o80q4ak | false | /r/LocalLLaMA/comments/1rb062y/have_you_ever_hesitated_before_typing_something/o80q4ak/ | false | 2 |
t1_o80q37s | It only affected Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL and no other quant.
Also if you didn't see we did an update so all should now be fixed: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3... | 1 | 0 | 2026-03-01T07:55:38 | yoracale | false | null | 0 | o80q37s | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80q37s/ | false | 1 |
t1_o80q2lb | It only affected Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL and no other quant.
Also if you didn't see we did an update: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_ben... | 2 | 0 | 2026-03-01T07:55:28 | yoracale | false | null | 0 | o80q2lb | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80q2lb/ | false | 2 |
t1_o80q2c1 | Did you do a product market fit analysis -before- spending a fortune on hardware and invest hundreds of hours on the project? I’m guessing no | 3 | 0 | 2026-03-01T07:55:25 | jtackman | false | null | 0 | o80q2c1 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80q2c1/ | false | 3 |
t1_o80pyq8 | We tested them here: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/) | 1 | 0 | 2026-03-01T07:54:28 | yoracale | false | null | 0 | o80pyq8 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80pyq8/ | false | 1 |
t1_o80py8v | Did you test our new updated Q4\_K\_XL ones? [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/) | 1 | 0 | 2026-03-01T07:54:21 | yoracale | false | null | 0 | o80py8v | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80py8v/ | false | 1 |
t1_o80ptjw | Why are there so many posts about this? Bot farms are directed to highlight this. But why? | 4 | 0 | 2026-03-01T07:53:07 | Caderent | false | null | 0 | o80ptjw | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80ptjw/ | false | 4 |
t1_o80pta2 | What prompt would you use for that? I'm testing out Qwen3.5-397B on a 1bit quant just for fun, so I'm trying to see how it goes :P | 1 | 0 | 2026-03-01T07:53:03 | thaddeusk | false | null | 0 | o80pta2 | false | /r/LocalLLaMA/comments/1rdy4ko/qwen35_vs_qwen3codernext_impressions/o80pta2/ | false | 1 |
t1_o80psln | I explained the process under this thread:
[https://www.reddit.com/r/LocalLLaMA/comments/1rdze5p/comment/o7ggasf/?context=1](https://www.reddit.com/r/LocalLLaMA/comments/1rdze5p/comment/o7ggasf/?context=1) | 9 | 0 | 2026-03-01T07:52:52 | hieuphamduy | false | null | 0 | o80psln | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80psln/ | false | 9 |
t1_o80ps9u | You can do better. Im on 8gb vram and 32ram and i get 32tkps output and 62 read | 2 | 0 | 2026-03-01T07:52:47 | sagiroth | false | null | 0 | o80ps9u | false | /r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o80ps9u/ | false | 2 |
t1_o80pqe1 | [removed] | 1 | 0 | 2026-03-01T07:52:18 | [deleted] | true | null | 0 | o80pqe1 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80pqe1/ | false | 1 |
t1_o80pnl7 | Thanks for this.
I like the idea of having two models with these strengths. Last year I was able to run GPT-OSS-120B and GPT-OSS-20B side by side for same reason as the big model had brains, but agentic work reality was not always there with the small model.
The inconvenience for me is the extra work switching models... | 2 | 0 | 2026-03-01T07:51:34 | Holiday_Purpose_3166 | false | null | 0 | o80pnl7 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80pnl7/ | false | 2 |
t1_o80pnn1 | ohh THAT's what jinja is
tysm! | 6 | 0 | 2026-03-01T07:51:34 | PermitNo8107 | false | null | 0 | o80pnn1 | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80pnn1/ | false | 6 |
t1_o80pmpl | hate to say it
but if you wanted to keep them using it you will have to do unethical things much like any corporate
stuff like reading the chat and optimizing for it | 1 | 0 | 2026-03-01T07:51:20 | Mandus_Therion | false | null | 0 | o80pmpl | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80pmpl/ | false | 1 |
t1_o80pmkb | they are reuploading the ggufs right now, only the UD variants are available rn | 1 | 0 | 2026-03-01T07:51:18 | No_War_8891 | false | null | 0 | o80pmkb | false | /r/LocalLLaMA/comments/1rei65v/qwen3535ba3b_quantization_quality_speed/o80pmkb/ | false | 1 |
t1_o80pgfb | OP just said they tried other quant providers and not just Unsloth and the same issue happened thus it wasn't related to the Unsloth quants... | 1 | 0 | 2026-03-01T07:49:40 | yoracale | false | null | 0 | o80pgfb | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80pgfb/ | false | 1 |
t1_o80pcix | You were using the Q4\_0 which was fine. The only quants that needed to be changed were Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL.
Also we did an update: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen... | 1 | 0 | 2026-03-01T07:48:39 | yoracale | false | null | 0 | o80pcix | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80pcix/ | false | 1 |
t1_o80p9nj | OP was using the Q4\_0 which was fine. The only quants that needed to be changed were Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL. Also OP just said they tried other providers and the same issue happened, thus it wasn't related to the Unsloth quants.
Also if you didn't see we did an update: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen... | 1 | 0 | 2026-03-01T07:47:54 | yoracale | false | null | 0 | o80p9nj | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80p9nj/ | false | 1 |
t1_o80p8tf | Your hobby is your hobby, you are not entitled to recognition. I think pretty much every creator goes through this. At least you didn't spend year learning a musical instrument, with same outcome | 1 | 0 | 2026-03-01T07:47:41 | def_not_jose | false | null | 0 | o80p8tf | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80p8tf/ | false | 1 |
t1_o80p6m5 | | 7 | 0 | 2026-03-01T07:47:06 | itsjase | false | null | 0 | o80p6m5 | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80p6m5/ | false | 7 |
t1_o80p3uc | I've been trying to put together something very similar using OpenWebUI and llama-swap. I'm currently using whisper.cpp for transcription and llama.cpp / vllm for text generation models. Can you please tell me what you're using for image gen and TTS if you have that set up? I know OpenWebUI has native comfyui i... | 2 | 0 | 2026-03-01T07:46:22 | SarcasticBaka | false | null | 0 | o80p3uc | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80p3uc/ | false | 2 |
t1_o80p3dj | if it runs in your servers it is not "privacy :)" | 1 | 0 | 2026-03-01T07:46:15 | Euphoric_North_745 | false | null | 0 | o80p3dj | false | /r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/o80p3dj/ | false | 1 |
t1_o80p0wg | Hella cool! | 1 | 0 | 2026-03-01T07:45:36 | bytefactory | false | null | 0 | o80p0wg | false | /r/LocalLLaMA/comments/1rhjcvo/what_im_doing_locally_develping_an_mcp_to_attach/o80p0wg/ | false | 1 |
t1_o80oze8 | This is likely specific to fp8 (not 8bit in general) and things may change further down the road. Give it a few days or weeks. | 1 | 0 | 2026-03-01T07:45:13 | Prudent-Ad4509 | false | null | 0 | o80oze8 | false | /r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o80oze8/ | false | 1 |
t1_o80oxv5 | Decode doesn't scale. If you don't care about that, you can run a p(n):1 prefill/decode cluster. | 1 | 0 | 2026-03-01T07:44:49 | b4d6d5d9dcf1 | false | null | 0 | o80oxv5 | false | /r/LocalLLaMA/comments/1qn02w8/i_put_an_rtx_pro_4000_blackwell_sff_in_my_mss1/o80oxv5/ | false | 1 |
t1_o80oxc8 | the problem is that most instances do not support format=json | 3 | 0 | 2026-03-01T07:44:40 | DocWolle | false | null | 0 | o80oxc8 | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o80oxc8/ | false | 3 |
t1_o80osrt | yes, 27b is quite slow, how are you running it, using vllm or sglang? | 1 | 0 | 2026-03-01T07:43:29 | SearchTricky7875 | false | null | 0 | o80osrt | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o80osrt/ | false | 1 |
t1_o80opeo | The original chat templates kept crashing inference with Opencode and Openclaw.
I vibed a few chat templates until I had a stable one and deployed publicly here https://github.com/wonderfuldestruction/devstral-small-2-template-fix
I did ping Unsloth and they were aware, but I never checked if they fixed it since we... | 1 | 0 | 2026-03-01T07:42:37 | Holiday_Purpose_3166 | false | null | 0 | o80opeo | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80opeo/ | false | 1 |
t1_o80oo2a | > A2
this is almost the worst GPU you could have bought, even P40 is better. | 1 | 0 | 2026-03-01T07:42:15 | MelodicRecognition7 | false | null | 0 | o80oo2a | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o80oo2a/ | false | 1 |
t1_o80omso | OP basically has 0 emotional intelligence. He never considered that other people might not share his interests and spent lots of time and effort on something that his family doesn't care about. Frankly, 99%+ of people outside of this sub probably don't even know it's possible to run "AI" locally or what the benefits ar... | 4 | 0 | 2026-03-01T07:41:56 | userax | false | null | 0 | o80omso | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80omso/ | false | 4 |
t1_o80om8g | Thanks, I hope you smiled in the smile corner. ;-) | 1 | 0 | 2026-03-01T07:41:47 | fairydreaming | false | null | 0 | o80om8g | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o80om8g/ | false | 1 |
t1_o80okdw | That's interesting. Well, I didn't hit any issues with Q8 yet, but if we do go with 16-bit it will have a hard time loading that entire 261k context, even on a 5090. Since I only have a single graphics card, I am at around 27 GB VRAM usage with 261k context at Q8 and the model at Q4 currently. If the task i... | 1 | 0 | 2026-03-01T07:41:18 | Key_Pace_9755 | false | null | 0 | o80okdw | false | /r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o80okdw/ | false | 1 |
t1_o80og08 | I don't have experience with Chinese in particular but my experience from multi-lingual stuff with lower resource languages is that when the output is English you just want the largest model feasible and MoE is fine. But when you want output in another language the dense models tend to be relatively better.
Not sure ... | 9 | 0 | 2026-03-01T07:40:10 | Middle_Bullfrog_6173 | false | null | 0 | o80og08 | false | /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o80og08/ | false | 9 |
t1_o80oc1f | scuma addict | 1 | 0 | 2026-03-01T07:39:08 | Rheumi | false | null | 0 | o80oc1f | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o80oc1f/ | false | 1 |
t1_o80obvq | I am trying to use Qwen/Qwen3.5-27B with vLLM ***v0.16.0 (latest)***, but I'm getting an error saying vLLM doesn't support Qwen 3.5 27B (error below). Hosting with SGLang works, but vLLM could be faster. Anyone able to run it with vLLM? I upgraded both vllm and transformers. Error--
Value error, ... | 1 | 0 | 2026-03-01T07:39:05 | SearchTricky7875 | false | null | 0 | o80obvq | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o80obvq/ | false | 1 |
t1_o80o9rz | add {%- set enable_thinking = false %} at the top of the jinja | 24 | 0 | 2026-03-01T07:38:32 | tat_tvam_asshole | false | null | 0 | o80o9rz | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80o9rz/ | false | 24 |
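The one-line template patch above can be applied mechanically; this sketch prepends the override to a chat template's text and is idempotent (the template content shown is a placeholder, not the real Qwen template):

```python
# Sketch: prepend the Jinja override from the comment above so every render
# has enable_thinking forced to false. Works on the template text; writing
# it back to the model's template file is left to the caller.
OVERRIDE = "{%- set enable_thinking = false %}\n"

def disable_thinking(template_text: str) -> str:
    # Skip if the template already forces the variable off (idempotent).
    if "set enable_thinking = false" in template_text:
        return template_text
    return OVERRIDE + template_text
```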
t1_o80nzu0 | lol | 4 | 0 | 2026-03-01T07:35:56 | Electrical_Ninja3805 | false | null | 0 | o80nzu0 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80nzu0/ | false | 4 |
t1_o80nzky | [removed] | 1 | 0 | 2026-03-01T07:35:52 | [deleted] | true | null | 0 | o80nzky | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80nzky/ | false | 1 |
t1_o80nwqr | Why the K KL model? I remember this reddit post where the 4KM was leading king | 1 | 0 | 2026-03-01T07:35:09 | soyalemujica | false | null | 0 | o80nwqr | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80nwqr/ | false | 1 |
t1_o80nvib | u/ilintar Any near-future possibility to include this on mainline? Good to have MOE at this size. | 2 | 0 | 2026-03-01T07:34:50 | pmttyji | false | null | 0 | o80nvib | false | /r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o80nvib/ | false | 2 |
t1_o80noll | oooh, i see. thanks! why are the outputs different from the official demo:
[https://huggingface.co/spaces/KittenML/KittenTTS-Demo](https://huggingface.co/spaces/KittenML/KittenTTS-Demo)
Are you using the old model? | 1 | 0 | 2026-03-01T07:33:02 | ElectricalBar7464 | false | null | 0 | o80noll | false | /r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o80noll/ | false | 1 |
t1_o80nlkm | [removed] | 1 | 0 | 2026-03-01T07:32:14 | [deleted] | true | null | 0 | o80nlkm | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o80nlkm/ | false | 1 |
t1_o80njzz | That’s not automatically sure to work this way. The model itself needs to be built in a way that allows it to shift tokens | 1 | 0 | 2026-03-01T07:31:50 | thibautrey | false | null | 0 | o80njzz | false | /r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o80njzz/ | false | 1 |
t1_o80nis2 | Claude can’t even reconcile a 100 line bank statement without human intervention and they gave it guns?
Bravo! | 3 | 0 | 2026-03-01T07:31:31 | AIFocusedAcc | false | null | 0 | o80nis2 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80nis2/ | false | 3 |
t1_o80ncg1 | I very much agree with you. I remembered this when I was writing my conclusions and had Qwen3.5 27B doing a small job.
I have timestamp logs and I'll see if I can script to gather their timings to pinpoint finish times for the suite.
Good catch. | 2 | 0 | 2026-03-01T07:29:52 | Holiday_Purpose_3166 | false | null | 0 | o80ncg1 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o80ncg1/ | false | 2 |
t1_o80n6qx | >and Gemma 3 27B 4 bit for translations, and although the translations aren't perfect, they're decent enough to be usable.
Did you try their recent [translategemma models](https://huggingface.co/collections/google/translategemma)?
BTW glad to hear Qwen3.5-27B scores well on translation too. | 15 | 0 | 2026-03-01T07:28:20 | pmttyji | false | null | 0 | o80n6qx | false | /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o80n6qx/ | false | 15 |
t1_o80n3r9 | yes, it's on there, check "advanced" | 1 | 0 | 2026-03-01T07:27:33 | HatEducational9965 | false | null | 0 | o80n3r9 | false | /r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o80n3r9/ | false | 1 |
t1_o80n3bv | Maybe start with organizing your thoughts. You're all over the place here. | 1 | 0 | 2026-03-01T07:27:26 | Budget-Juggernaut-68 | false | null | 0 | o80n3bv | false | /r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/o80n3bv/ | false | 1 |
t1_o80n1n6 | > Why would I build this?
Hard flex for any CV | 10 | 0 | 2026-03-01T07:26:59 | Hood-Boy | false | null | 0 | o80n1n6 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80n1n6/ | false | 10 |
t1_o80mv7i | > Sources familiar with the matter confirmed
Aka the usual "trust me bro" | 9 | 0 | 2026-03-01T07:25:19 | Icy-Degree6161 | false | null | 0 | o80mv7i | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80mv7i/ | false | 9 |
t1_o80mshz | Hi I need some help in doing this. I am also fine tuning the Qwen 3.5 with my domain specific data even for tool calling part. But struggling to achieve the latency on CPU | 1 | 0 | 2026-03-01T07:24:37 | nerdy-oged | false | null | 0 | o80mshz | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80mshz/ | false | 1 |
t1_o80mp8g | Is the -t 20 parameter model-dependent or more based on your specific CPU? | 1 | 0 | 2026-03-01T07:23:46 | No_War_8891 | false | null | 0 | o80mp8g | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80mp8g/ | false | 1 |
t1_o80mogd | Thanks | 1 | 0 | 2026-03-01T07:23:34 | fredconex | false | null | 0 | o80mogd | false | /r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o80mogd/ | false | 1 |
t1_o80ml1i | This one is better, only 7B. https://huggingface.co/tencent/HY-MT1.5-7B-GGUF | 7 | 0 | 2026-03-01T07:22:41 | SpeedyHare | false | null | 0 | o80ml1i | false | /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o80ml1i/ | false | 7 |
t1_o80mkdc | I currently have an issue with qwen 3.5 where pre-processing the KV cache happens every time. You can see this in the server tab in openclaw.
Not sure if nemotron or glm have the same problem. Def play around inside LM studio and play with context sizes. Try loading up the context and seeing if the same thing happens... | 2 | 0 | 2026-03-01T07:22:31 | starwaves1 | false | null | 0 | o80mkdc | false | /r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o80mkdc/ | false | 2 |
t1_o80mhsv | Lmao | 6 | 0 | 2026-03-01T07:21:51 | Hood-Boy | false | null | 0 | o80mhsv | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80mhsv/ | false | 6 |
t1_o80mex9 | try a3b models!
They use less weights at a time. | 1 | 0 | 2026-03-01T07:21:07 | starwaves1 | false | null | 0 | o80mex9 | false | /r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o80mex9/ | false | 1 |
t1_o80men5 | Since your actual problem is a 3090 running hot, the distro matters less than your NVIDIA driver setup and power limit configuration. `nvidia-smi -pl 300` (or lower) is your equivalent of undervolting — I run my 3090 at 280W for inference and it dropped temps 12-15C with maybe 5% throughput loss on llama.cpp. No Afterb... | 1 | 0 | 2026-03-01T07:21:02 | tom_mathews | false | null | 0 | o80men5 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o80men5/ | false | 1 |
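The power-cap advice above (`nvidia-smi -pl 300`) can be scripted; a sketch that builds the command, with a rough sanity bound on the wattage (the bounds are an assumption, not NVIDIA's spec, and the command needs root and resets on reboot unless persistence mode is enabled with `nvidia-smi -pm 1`):

```python
# Sketch: assemble the nvidia-smi power-limit command from the comment above.
# The watt bounds are a rough sanity check for a 3090, not official limits.
def power_limit_cmd(watts: int, gpu: int = 0) -> list[str]:
    if not 100 <= watts <= 450:
        raise ValueError(f"suspicious power limit: {watts} W")
    return ["nvidia-smi", "-i", str(gpu), "-pl", str(watts)]

print(power_limit_cmd(280))
```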
t1_o80mde7 | are you referring to печенеги and половцы? | 4 | 0 | 2026-03-01T07:20:43 | MelodicRecognition7 | false | null | 0 | o80mde7 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80mde7/ | false | 4 |
t1_o80mcej | They are obviously using it to process data not to control rockets.
Most likely use case: stream a fuckton of surveillance data and summarize and cross reference it in a RAG system and use that to query the intelligence data.
"At what time Khameinei is going home?"
"According to the data he usually goes home by 3pm... | 89 | 0 | 2026-03-01T07:20:26 | yopla | false | null | 0 | o80mcej | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80mcej/ | false | 89 |
t1_o80mapv | It will be supply chain risk after current contract termination, so they are good now. No risks so far. /s | 1 | 0 | 2026-03-01T07:20:01 | Weary_Bee_7957 | false | null | 0 | o80mapv | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80mapv/ | false | 1 |
t1_o80masa | Lololololololololol, I knew Dario was doing Dario | 1 | 0 | 2026-03-01T07:20:01 | MonitorAway2394 | false | null | 0 | o80masa | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80masa/ | false | 1 |
t1_o80m9t8 | Imagine that you work your ass off and your own flesh and blood don't appreciate it. You obviously warned them. | 1 | 0 | 2026-03-01T07:19:47 | Relative_Trouble_555 | false | null | 0 | o80m9t8 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80m9t8/ | false | 1 |
t1_o80m878 | the qwen xxx a3b models will be what you want. They have very few active parameters and context at a time. I noticed hallucinations and incorrectness when alot of context/weights were outside vram though. The real solution, sadly, is to buy more vram | 1 | 0 | 2026-03-01T07:19:22 | starwaves1 | false | null | 0 | o80m878 | false | /r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o80m878/ | false | 1 |
t1_o80m7rw | See https://github.com/p-e-w/heretic/issues/190 | 1 | 0 | 2026-03-01T07:19:15 | beneath_steel_sky | false | null | 0 | o80m7rw | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o80m7rw/ | false | 1 |
t1_o80m7iw | It's a very good idea and it's working very well!
| 1 | 0 | 2026-03-01T07:19:11 | EfficientDistrict998 | false | null | 0 | o80m7iw | false | /r/LocalLLaMA/comments/1q9bj5j/i_made_a_website_to_turn_any_confusing_ui_into_a/o80m7iw/ | false | 1 |
t1_o80m6vq | Actually there was a regression on bandwidth for NCCL, but most of these numbers were benchmarked prior to the bandwidth drop from 24GB/s to 16GB/s | 2 | 0 | 2026-03-01T07:19:01 | raphaelamorim | false | null | 0 | o80m6vq | false | /r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o80m6vq/ | false | 2 |
t1_o80m2sq | Yeah, my first Gigabyte 3090 I didn’t cap and let it run at 350W.
It fried after 2 months of LLM use when I started spamming concurrent requests through ollama.
The second one I got was an FE 3090 and I power-capped it at 300W. It never reaches above 65 degrees now and I run it all day every day, request after request. S... | 1 | 0 | 2026-03-01T07:17:59 | ParamedicAble225 | false | null | 0 | o80m2sq | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80m2sq/ | false | 1 |
t1_o80lyxy | Try Roo Code plugin for vscode instead. They also have a cli now. Cline didn't work with llama.cpp for me. | 1 | 0 | 2026-03-01T07:17:01 | Independent_Pear4908 | false | null | 0 | o80lyxy | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o80lyxy/ | false | 1 |
t1_o80lllq | https://www.reddit.com/r/LocalLLaMA/s/5Vk2XMAeB2
Apparently, fp8 kv cache is a no go.
Qwen3.5 model family is very sensitive to what is quantized and how, so the previous author recommendation to keep cache at 16bit makes sense if enough vram is available. | 1 | 0 | 2026-03-01T07:13:37 | Prudent-Ad4509 | false | null | 0 | o80lllq | false | /r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o80lllq/ | false | 1 |
t1_o80ljsf | You'll get about 37tps for 64k context, and 39tps for 32k context on a 4090 with the q4 XL quant from unsloth
llama-server --host 0.0.0.0 --port 8080 -hf unsloth/Qwen3.5-27B-GGUF:UD-Q4_K_XL --min-p 0.0 --presence-penalty 0.0 --repeat-penalty 1.0 --temp 0.6 --top-p 0.95 --top-k 20 --ctx-size 32768 | 1 | 0 | 2026-03-01T07:13:09 | timbo2m | false | null | 0 | o80ljsf | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80ljsf/ | false | 1 |
t1_o80licr | Bureaucracy is slow at implementing things. It will probably take another 3 weeks for them to even invalidate API keys. | 1 | 0 | 2026-03-01T07:12:47 | Budget-Juggernaut-68 | false | null | 0 | o80licr | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80licr/ | false | 1 |
t1_o80lf65 | My guess? Either their custom large vision models or custom text models for sniffing through logs and identifying intelligence. Keep in mind that what they offer to the general public is completely different from what they offer to agencies. | 22 | 0 | 2026-03-01T07:11:59 | xseson23 | false | null | 0 | o80lf65 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80lf65/ | false | 22 |
t1_o80laxj | She uses this to sort out quite complex technical problems in her field. I can do software and networking, but psychology is above my head :)) | 2 | 0 | 2026-03-01T07:10:55 | o0genesis0o | false | null | 0 | o80laxj | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o80laxj/ | false | 2 |