name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7zvdsx | Rule 3 - Removing yet another artificial analysis index screengrab post of Qwen3.5 | 1 | 0 | 2026-03-01T03:48:48 | LocalLLaMA-ModTeam | false | null | 0 | o7zvdsx | true | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zvdsx/ | true | 1 |
t1_o7zv89m | All the best things are worth fighting for. | 1 | 0 | 2026-03-01T03:47:46 | Apart-Yam-979 | false | null | 0 | o7zv89m | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7zv89m/ | false | 1 |
t1_o7zv793 | The quality might be close, but the speed locally for me is nowhere near Flash. And I’m running a quantized model. Earlier it thought for 13 minutes before answering a very detailed reply. Flash would’ve been done in 20 seconds | 1 | 0 | 2026-03-01T03:47:34 | Fear_ltself | false | null | 0 | o7zv793 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zv793/ | false | 1 |
t1_o7zv5bs | u/Presstabstart u/cookieGaboo24 Thanks for the great suggestions to use Qwen3.5, a relatively new MoE model!
I am able to run the following config, and it works great and is super fast
`llama-cli -m AppData\Local\llama.cpp\unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q4_K_M.gguf -c 200000 -ngl 99 --n-cpu-moe 30 ... | 1 | 0 | 2026-03-01T03:47:13 | iLoveWaffle5 | false | null | 0 | o7zv5bs | false | /r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o7zv5bs/ | false | 1 |
t1_o7zv1h3 | Never! | 1 | 0 | 2026-03-01T03:46:30 | ubrtnk | false | null | 0 | o7zv1h3 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zv1h3/ | false | 1 |
t1_o7zuz03 | I think they just want the easiest one, which is what I was trying to strive for - not sure how many times we got an answer from an Alexa contributor that was wrong...never deterred from going back and trying Alexa again. | 2 | 0 | 2026-03-01T03:46:03 | ubrtnk | false | null | 0 | o7zuz03 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zuz03/ | false | 2 |
t1_o7zuwdy | Regarding this particular user buy-in issue, from your description it feels like the privacy issues you worry about are not shared by your family.
Personally I accepted years ago that big tech (and Google in particular) were likely to end up knowing more about my day to day life than I know about myself. I have an And... | 2 | 0 | 2026-03-01T03:45:35 | Protopia | false | null | 0 | o7zuwdy | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zuwdy/ | false | 2 |
t1_o7zuum7 | Sounds awesome, tbh. I’m too old for you to adopt me, though. | 0 | 0 | 2026-03-01T03:45:15 | boy-detective | false | null | 0 | o7zuum7 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zuum7/ | false | 0 |
t1_o7zus64 | pretty cool 👍 | 1 | 0 | 2026-03-01T03:44:49 | ab2377 | false | null | 0 | o7zus64 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zus64/ | false | 1 |
t1_o7zurh5 | Step 1) Figure out how GitHub really works
Seriously though, if I could package it up, it wouldn't be bad; I just never learned to code. Don't even have any vibe-code setup (IDE etc.). I can just put blocks together. | 1 | 0 | 2026-03-01T03:44:41 | ubrtnk | false | null | 0 | o7zurh5 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zurh5/ | false | 1 |
t1_o7zulmb | '**Check out the recent YouTube video from Country Boy Computers with test data.**' | 1 | 0 | 2026-03-01T03:43:38 | tony10000 | false | null | 0 | o7zulmb | false | /r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/o7zulmb/ | false | 1 |
t1_o7zuh0l | I thought that's what OpenClaw is? | 1 | 0 | 2026-03-01T03:42:46 | ubrtnk | false | null | 0 | o7zuh0l | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zuh0l/ | false | 1 |
t1_o7zufcz | I thought it would be easier than just bombarding them with a bunch of questions lol | 0 | 1 | 2026-03-01T03:42:28 | ubrtnk | false | null | 0 | o7zufcz | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zufcz/ | false | 0 |
t1_o7zublj | Why would they use it? Your family probably doesn't care as much about privacy and just want the smartest one | 4 | 0 | 2026-03-01T03:41:45 | six1123 | false | null | 0 | o7zublj | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zublj/ | false | 4 |
t1_o7zu76m | The gap in world knowledge and general intelligence is massive and not well-represented in benchmarks. | 1 | 0 | 2026-03-01T03:40:55 | NNN_Throwaway2 | false | null | 0 | o7zu76m | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zu76m/ | false | 1 |
t1_o7zu5vo | You sent a survey..? To your wife and kids? | 22 | 0 | 2026-03-01T03:40:40 | TracerBulletX | false | null | 0 | o7zu5vo | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zu5vo/ | false | 22 |
t1_o7ztsj8 | Under what settings, use cases, params, context? If test models and pass a deep multistep reasoning test and both always come out with the same output it’s all good to me. | 1 | 0 | 2026-03-01T03:38:10 | Dontdoitagain69 | false | null | 0 | o7ztsj8 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7ztsj8/ | false | 1 |
t1_o7ztrzf | Hi tony, I'm in the same situation: I literally became addicted to energy efficiency because where I'm from they strangle you with electricity bills. I'm strongly interested in the B50 for its VRAM and power consumption. Use cases? Various, really; I'm into many topics. Right now I'm building automations to translate video games, and ... | 1 | 0 | 2026-03-01T03:38:04 | Unlucky_Post4391 | false | null | 0 | o7ztrzf | false | /r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/o7ztrzf/ | false | 1 |
t1_o7ztl14 | I like to do that when programming too, like making a plan for how to do stuff, but I delete it :D
So maybe it also found that devs do something similar in some comments... | 3 | 0 | 2026-03-01T03:36:47 | -InformalBanana- | false | null | 0 | o7ztl14 | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7ztl14/ | false | 3 |
t1_o7zt7m3 | Buttstrapping | 28 | 0 | 2026-03-01T03:34:16 | philmarcracken | false | null | 0 | o7zt7m3 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zt7m3/ | false | 28 |
t1_o7zsxla | I don't use macOS for local LLMs. There is the @ivanfioravanti X account I follow; he regularly benchmarks performance of local LLMs on Apple Silicon. | 0 | 0 | 2026-03-01T03:32:23 | No-Simple8447 | false | null | 0 | o7zsxla | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7zsxla/ | false | 0 |
t1_o7zsvjy | Again, I don’t know enough about it, but I would say remote; first of all, without drivers and with maybe limited devices it would be too slow to run anything. Second, I don’t think AI can write it from scratch; drivers for similar hardware usually exist and need to be adjusted for the model to work correctly, so it’s not rea... | 1 | 0 | 2026-03-01T03:32:01 | sooodooo | false | null | 0 | o7zsvjy | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zsvjy/ | false | 1 |
t1_o7zsrzv | > but it's the first **breakout** instance of a program that does that
Emphasis added. Sure, other tools could do stuff like this earlier. But something had to end up being the first one to catch the public interest and this was the one that did it.
> It is not revolutionary, it’s reckless and irresponsible.
And ma... | 1 | 0 | 2026-03-01T03:31:21 | FaceDeer | false | null | 0 | o7zsrzv | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7zsrzv/ | false | 1 |
t1_o7zsot9 | r/confidentlyincorrect | 2 | 0 | 2026-03-01T03:30:46 | PeachScary413 | false | null | 0 | o7zsot9 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zsot9/ | false | 2 |
t1_o7zso3b | Well, when you learn a model and how you can get better info out of it than from new models, you kind of stick to it because it works. You can build a game using an old Phi model if you know how to code and how to construct a solid input prompt and structured output. You can always feed back to the same model for verification. I have a ... | 1 | 0 | 2026-03-01T03:30:37 | Dontdoitagain69 | false | null | 0 | o7zso3b | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7zso3b/ | false | 1 |
t1_o7zsj1t | nvfp4? | 0 | 0 | 2026-03-01T03:29:41 | -InformalBanana- | false | null | 0 | o7zsj1t | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7zsj1t/ | false | 0 |
t1_o7zsi7y | How many tokens per second to expect | 1 | 0 | 2026-03-01T03:29:31 | Secret_Forsaken | false | null | 0 | o7zsi7y | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7zsi7y/ | false | 1 |
t1_o7zshpb | >I’ll put the repo in the comments for anyone who wants to take a look.
Okay.
>Telegram as communication channel.
I would prefer not, but that's me. | 3 | 0 | 2026-03-01T03:29:25 | SM8085 | false | null | 0 | o7zshpb | false | /r/LocalLLaMA/comments/1rhkfek/built_a_localfirst_ai_agent_for_my_own_setup/o7zshpb/ | false | 3 |
t1_o7zse0l | He absolutely could just have the UEFI load his binary into memory and execute it like it would any other OS... why not?
Operating systems are not made from magical memory allocation fairy dust; they are just binaries like anything else when it comes down to it. | 3 | 0 | 2026-03-01T03:28:44 | PeachScary413 | false | null | 0 | o7zse0l | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zse0l/ | false | 3 |
t1_o7zsb2d | [deleted] | 1 | 0 | 2026-03-01T03:28:11 | [deleted] | true | null | 0 | o7zsb2d | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zsb2d/ | false | 1 |
t1_o7zs78o | On my 64GB MacBook Pro, I've added both the 27B and 35B to my model collection. | 1 | 0 | 2026-03-01T03:27:28 | Murgatroyd314 | false | null | 0 | o7zs78o | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7zs78o/ | false | 1 |
t1_o7zs0ln | Yep. I think UEFI is the right layer of abstraction. The question is does it make sense to manually bring up network to load the ai remotely and then let it figure out everything else. Or does it make sense to find/build a local ai that can write boot/rom/driver code and let it figure out everything else. Lots of avenu... | 1 | 0 | 2026-03-01T03:26:14 | Stunning_Mast2001 | false | null | 0 | o7zs0ln | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zs0ln/ | false | 1 |
t1_o7zs08i | Yes, 3/4 of the comments would have been complaining, but if this UI doesn't deserve some grimy ass electronic music what are we even doing at this point?
Just this week fashionable government says "damned your ethical considerations, murderbots look ready to me", cue the soundtrack homie, this UI goes ultra-hard ... | 3 | 0 | 2026-03-01T03:26:10 | natufian | false | null | 0 | o7zs08i | false | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o7zs08i/ | false | 3 |
t1_o7zru43 | Qwen 3.5 Plus was a pretty bad model. Is this supposed to be better than that? | 1 | 0 | 2026-03-01T03:25:00 | real_serviceloom | false | null | 0 | o7zru43 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zru43/ | false | 1 |
t1_o7zrs59 | Very cool thing you’ve done, but never rely on your wife and kids to be interested. I use AI for EVERYTHING and my wife uses it like four times a week. My kids care even less. Professionals have woken up. The rest of society thinks this is still a trend like crypto was a trend.
Also, you did this for you. Your family ... | 1 | 0 | 2026-03-01T03:24:39 | Current-Ticket4214 | false | null | 0 | o7zrs59 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zrs59/ | false | 1 |
t1_o7zrr36 | Those two models can't do latent communication out of the box with AVP unfortunately. Same-family same-tokenizer pairs (e.g. Qwen3-4B and Qwen3-32B) would work. | 3 | 0 | 2026-03-01T03:24:27 | proggmouse | false | null | 0 | o7zrr36 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zrr36/ | false | 3 |
t1_o7zrny6 | I feel your pain brother. We should all put our failed experiments together in a single platform and just let them go for it | 1 | 0 | 2026-03-01T03:23:53 | kevinallen | false | null | 0 | o7zrny6 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zrny6/ | false | 1 |
t1_o7zrng6 | ? | 1 | 0 | 2026-03-01T03:23:48 | daeron-blackFyr | false | null | 0 | o7zrng6 | false | /r/LocalLLaMA/comments/1rguhz9/project_sota_toolkit_drop_3_distill_the_flow/o7zrng6/ | false | 1 |
t1_o7zrjt2 | lmstudio 0.4.6 fixed this issue. | 1 | 0 | 2026-03-01T03:23:07 | One-Pass3382 | false | null | 0 | o7zrjt2 | false | /r/LocalLLaMA/comments/1rdyia7/trouble_with_qwen_35_with_lmstudio/o7zrjt2/ | false | 1 |
t1_o7zrh7m | Probably 4 bit quants | 1 | 0 | 2026-03-01T03:22:38 | mrpogiface | false | null | 0 | o7zrh7m | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7zrh7m/ | false | 1 |
t1_o7zrgwt | AI was actually able to look at the assembly code just using my local dev tools (honestly don’t know how, but it did it on its own), but it kept getting stuck on a key memory address and a final reset command. So I had to insist we use a decompiler to better understand the function names (it kept insisting the disassembl... | 1 | 0 | 2026-03-01T03:22:34 | Stunning_Mast2001 | false | null | 0 | o7zrgwt | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zrgwt/ | false | 1 |
t1_o7zrbq6 | Thats how it is | 1 | 0 | 2026-03-01T03:21:39 | Ok_Technology_5962 | false | null | 0 | o7zrbq6 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zrbq6/ | false | 1 |
t1_o7zr953 | Oh ok, I didn't know. | 1 | 0 | 2026-03-01T03:21:10 | robertpro01 | false | null | 0 | o7zr953 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zr953/ | false | 1 |
t1_o7zr91c | I will be messaging you in 3 months on [**2026-06-01 03:20:30 UTC**](http://www.wolframalpha.com/input/?i=2026-06-01%2003:20:30%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7zr5in/?context=3)
[**CLICK ... | 1 | 0 | 2026-03-01T03:21:09 | RemindMeBot | false | null | 0 | o7zr91c | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7zr91c/ | false | 1 |
t1_o7zr5in | !remindme 3 months | 1 | 0 | 2026-03-01T03:20:30 | robertpro01 | false | null | 0 | o7zr5in | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7zr5in/ | false | 1 |
t1_o7zr45a | I'm not writing it in HolyC, so not exactly. | 2 | 0 | 2026-03-01T03:20:15 | Electrical_Ninja3805 | false | null | 0 | o7zr45a | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zr45a/ | false | 2 |
t1_o7zr40m | Why is everyone in this thread having a stroke? | 3 | 0 | 2026-03-01T03:20:13 | Recoil42 | false | null | 0 | o7zr40m | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zr40m/ | false | 3 |
t1_o7zqy1p | ...I need to figure out how to get everything that the AI says to end with "I have spoken" | 5 | 0 | 2026-03-01T03:19:08 | ubrtnk | false | null | 0 | o7zqy1p | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zqy1p/ | false | 5 |
t1_o7zqwux | You don't need an OS, you need a kernel, and by my estimate if I pulled in a Linux kernel it would be about 5-10 MB, so it's not outside the realm of possibility. I'm just more interested in getting this along as far as I can. | 1 | 0 | 2026-03-01T03:18:55 | Electrical_Ninja3805 | false | null | 0 | o7zqwux | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zqwux/ | false | 1 |
t1_o7zqqs3 | Well, I am a huge Lord of the Rings fan so I 100% get that aspect of it lol - thanks | 5 | 0 | 2026-03-01T03:17:50 | ubrtnk | false | null | 0 | o7zqqs3 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zqqs3/ | false | 5 |
t1_o7zqp86 | Qwen 3.5 27b is getting close | 1 | 0 | 2026-03-01T03:17:32 | Ok_Technology_5962 | false | null | 0 | o7zqp86 | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zqp86/ | false | 1 |
t1_o7zqnlm | This is the way! | 2 | 0 | 2026-03-01T03:17:15 | l_eo_ | false | null | 0 | o7zqnlm | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zqnlm/ | false | 2 |
t1_o7zqmdl | For 3 prompts? | 1 | 0 | 2026-03-01T03:17:02 | Ok_Technology_5962 | false | null | 0 | o7zqmdl | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zqmdl/ | false | 1 |
t1_o7zqj12 | Temple OS 2.0 AI bugaloo. | 2 | 0 | 2026-03-01T03:16:26 | mantafloppy | false | null | 0 | o7zqj12 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zqj12/ | false | 2 |
t1_o7zqieu | I am also facing the unloading issue with llama.cpp. | 1 | 0 | 2026-03-01T03:16:19 | anubhav_200 | false | null | 0 | o7zqieu | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7zqieu/ | false | 1 |
t1_o7zqg0d | "What you see is all there is"
>WYSIATI is a cognitive bias where individuals make decisions based solely on the information immediately available to them, often overlooking the possibility of missing or incomplete data.
Don't let it get to you. You will always get a spectrum of responses.
I think you tried to do ... | 5 | 0 | 2026-03-01T03:15:54 | l_eo_ | false | null | 0 | o7zqg0d | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zqg0d/ | false | 5 |
t1_o7zqam8 | I've successfully built tons of software used by more than 3k people, being conservative; I create things for enterprise with full compliance, KPIs and full impact analysis; people ask me the hardest questions about AI, that's literally my job... however, I was naive too, built complete advanced RAG systems +... | 1 | 0 | 2026-03-01T03:14:56 | ZestRocket | false | null | 0 | o7zqam8 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zqam8/ | false | 1 |
t1_o7zq5rh | As my context keeps growing from 20k to 30k to 40k, with every bump the prompt processing happens from 0 to 20k, 0 to 30k, 0 to 40k. This happens only when using the model via Claude Code; with opencode I am not facing this issue. So it seems like the way CC manages prompts is very different from the way opencode does. | 1 | 0 | 2026-03-01T03:14:04 | anubhav_200 | false | null | 0 | o7zq5rh | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7zq5rh/ | false | 1 |
t1_o7zq3j9 | It’s an interesting idea. But an OS with drivers gives you access to modern GPUs. Something virtually impossible without a driver provided by the manufacturer.
The overhead of an OS is minimal. The amount of optimisations you have to do to make it run without an OS are so much that by the time you’re done you’ll be 10... | 1 | 0 | 2026-03-01T03:13:41 | ChibaCityFunk | false | null | 0 | o7zq3j9 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zq3j9/ | false | 1 |
t1_o7zq36a | Hey all! We are currently beta testing this feature with the FLM team. Join the lemonade discord and head to the flm-linux-npu channel if you want to join the party.
We’ll also do an announcement here when the stable release is out. | 5 | 0 | 2026-03-01T03:13:37 | jfowers_amd | false | null | 0 | o7zq36a | false | /r/LocalLLaMA/comments/1rhanvn/amd_npu_tutorial_for_linux/o7zq36a/ | false | 5 |
t1_o7zpxnt | Good question. The full KV-cache approach makes the most sense when the task is short enough that context doesn't blow up (like 2-4 agent hops, not a 50-turn conversation). Though I'm exploring options in order to make it better in that direction.
For longer workflows where context does grow substantially, you're righ... | 1 | 0 | 2026-03-01T03:12:38 | proggmouse | false | null | 0 | o7zpxnt | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zpxnt/ | false | 1 |
t1_o7zpuop | Gemini 3 Thinking is free. | 1 | 0 | 2026-03-01T03:12:06 | Recoil42 | false | null | 0 | o7zpuop | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zpuop/ | false | 1 |
t1_o7zptr9 | Thank you! You are amazing! | 1 | 0 | 2026-03-01T03:11:56 | CATLLM | false | null | 0 | o7zptr9 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zptr9/ | false | 1 |
t1_o7zpt39 |  | 7 | 0 | 2026-03-01T03:11:49 | ubrtnk | false | null | 0 | o7zpt39 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zpt39/ | false | 7 |
t1_o7zpscm | Yes? | 1 | 0 | 2026-03-01T03:11:42 | Recoil42 | false | null | 0 | o7zpscm | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zpscm/ | false | 1 |
t1_o7zps26 | This reminds me of the time in the 1980s when I worked for IBM, and as a corporation they were adopting a "quality circle" approach. My manager decided we would try this and we spent weeks and weeks having meetings to brainstorm possible areas for quality improvements and picking one and then coming up with a proposed ... | 2 | 0 | 2026-03-01T03:11:39 | Protopia | false | null | 0 | o7zps26 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zps26/ | false | 2 |
t1_o7zpron | Better than older models yes but still has a ways to go.
My current goal model is one that can one shot a simple browser game where you’re a DVD logo on a screen and you have one minute to try to hit the corners as many times as possible within a minute, with a few other mechanics for scoring and fairness. Older model... | 1 | 0 | 2026-03-01T03:11:35 | silvertricl0ps | false | null | 0 | o7zpron | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zpron/ | false | 1 |
t1_o7zpq1g | Well, the real thing that's incorrect is that it's for surveillance. Battle droid hive brain is more likely. | 1 | 0 | 2026-03-01T03:11:17 | BusRevolutionary9893 | false | null | 0 | o7zpq1g | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7zpq1g/ | false | 1 |
t1_o7zph9l | Did you find any solution to this? I'm having a similar issue where it doesn't remember the name I set in soul.md. It didn't even ask me to set its name on first TUI launch. I'm using Qwen3 8B. Every session is a fresh chat | 1 | 0 | 2026-03-01T03:09:46 | pro_potato96 | false | null | 0 | o7zph9l | false | /r/LocalLLaMA/comments/1qv6892/help_setting_local_ollama_models_with_openclaw/o7zph9l/ | false | 1 |
t1_o7zph6w | > image and video generation capabilities
An excellent claim to make if your goal is to coax disappointment in a modality that has historically destabilized peoples' trust in the glorious US AI Industrial Complex. | 1 | 0 | 2026-03-01T03:09:45 | ElementNumber6 | false | null | 0 | o7zph6w | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7zph6w/ | false | 1 |
t1_o7zph3l | One thing you can work on is your emotional stance towards the situation and reframe your perspective.
There might be a misalignment between the effort you pour into the project and the expectations that effort creates for you about needing it to be seen, so I understand why that stings.
But ... | 1 | 0 | 2026-03-01T03:09:44 | l_eo_ | false | null | 0 | o7zph3l | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zph3l/ | false | 1 |
t1_o7zpesu | bruh, I meant to be sarcastic; if you're seriously replying like that I think you need to detox a bit and get some real life for a moment. Those C-suites running AI companies have a tendency to not know what users actually need, which to a smaller extent also applies to you. | 6 | 0 | 2026-03-01T03:09:20 | AffectionateBowl1633 | false | null | 0 | o7zpesu | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zpesu/ | false | 6 |
t1_o7zp9cr | Several people are lol | 0 | 1 | 2026-03-01T03:08:23 | ubrtnk | false | null | 0 | o7zp9cr | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zp9cr/ | false | 0 |
t1_o7zp2qk | Do you really think it is possible? | 1 | 0 | 2026-03-01T03:07:15 | robberviet | false | null | 0 | o7zp2qk | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zp2qk/ | false | 1 |
t1_o7zp15u | i can't even figure out what it does | 1 | 0 | 2026-03-01T03:07:00 | tridentgum | false | null | 0 | o7zp15u | false | /r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o7zp15u/ | false | 1 |
t1_o7zp0ql | Python/Django fullstack (no fancy front-end) | 2 | 0 | 2026-03-01T03:06:55 | robertpro01 | false | null | 0 | o7zp0ql | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zp0ql/ | false | 2 |
t1_o7zoyr8 | reminds me of a story my dad used to tell about a friend at school. he wanted to show his love for his then girlfriend.
he was a computer guy and she didn't have anything to do with computers back then. she only used her mum's to write emails once in a while (we're talking dark ages here)
he built her a pc and spent like ... | 50 | 0 | 2026-03-01T03:06:35 | howardhus | false | null | 0 | o7zoyr8 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zoyr8/ | false | 50 |
t1_o7zorkf | Real men use sentence transformers and PyTorch 😆 | 1 | 0 | 2026-03-01T03:05:18 | Academic_Track_2765 | false | null | 0 | o7zorkf | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7zorkf/ | false | 1 |
t1_o7zoraw | That's what I'm doing now - HA Voice -> Local OpenAI LLM custom integration (allows usage of llama.cpp + the MCP usage) and I use HA's STT and an OpenAI-compatible TTS custom integration. TTS is a Chatterbox server deployed on a 5060Ti with continuity in the voices between OWUI model usage and Home Assistant Voice.
I also... | 1 | 0 | 2026-03-01T03:05:15 | ubrtnk | false | null | 0 | o7zoraw | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zoraw/ | false | 1 |
t1_o7zon41 | lol fair — apparently caring about ONNX is the Voight-Kampff test of 2026 😂 | 1 | 0 | 2026-03-01T03:04:32 | theagentledger | false | null | 0 | o7zon41 | false | /r/LocalLLaMA/comments/1rc59ze/qwen3s_most_underrated_feature_voice_embeddings/o7zon41/ | false | 1 |
t1_o7zomdc | \> Technically true but not every conversation is about STDs
Speak for yourself... It's my go to small talk subject and a great icebreaker at parties ("raise your hand if you've ever contracted \_\_\_")
It gets people talking 100% of the time, but the perplexing part is that they never want to talk *to me*. I can't ... | 14 | 0 | 2026-03-01T03:04:24 | redoubt515 | false | null | 0 | o7zomdc | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zomdc/ | false | 14 |
t1_o7zok8w | I'm not sure what this comment means | 1 | 0 | 2026-03-01T03:04:01 | iMakeSense | false | null | 0 | o7zok8w | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zok8w/ | false | 1 |
t1_o7zoj27 | Oh well, that's not a good time for me to see this | 1 | 0 | 2026-03-01T03:03:48 | Possible-Basis-6623 | false | null | 0 | o7zoj27 | false | /r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/o7zoj27/ | false | 1 |
t1_o7zohdx | Makes sense — once you have the metrics the 7B accuracy cliff should be much easier to characterize. | 2 | 0 | 2026-03-01T03:03:30 | theagentledger | false | null | 0 | o7zohdx | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zohdx/ | false | 2 |
t1_o7zoh6p | IT'S FREE AGAINST FREE | 2 | 0 | 2026-03-01T03:03:28 | 9gxa05s8fa8sh | false | null | 0 | o7zoh6p | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zoh6p/ | false | 2 |
t1_o7zogbh | I was comparing the Q5 27b to the Q8 35b3a. The 35b3a at Q4 indeed hits over 100 tokens per second | 1 | 0 | 2026-03-01T03:03:19 | Southern-Chain-6485 | false | null | 0 | o7zogbh | false | /r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o7zogbh/ | false | 1 |
t1_o7zod83 | How does mos run in macOS? | 1 | 0 | 2026-03-01T03:02:47 | entimuscl | false | null | 0 | o7zod83 | false | /r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o7zod83/ | false | 1 |
t1_o7zoczt | Excited to see where that lands — even a minimal projection layer might be enough to smooth out the representation shift between hops. | 2 | 0 | 2026-03-01T03:02:45 | theagentledger | false | null | 0 | o7zoczt | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zoczt/ | false | 2 |
t1_o7zocrh | I think the issue I'm having is that the "cached prompts" are overloading the dev server and causing it to crash. Anyone know how to disable this in LM Studio so that we don't constantly keep KV state cache per prompt over the API?? | 1 | 0 | 2026-03-01T03:02:42 | TheyCallMeDozer | false | null | 0 | o7zocrh | false | /r/LocalLLaMA/comments/1rhjc4x/lmstudio_model_unloads_between_requests_channel/o7zocrh/ | false | 1 |
t1_o7zoco2 | enjoy the process man...
keep building, improving, and being interested in what you do.
understanding people, users, humans in general is easy for some but usually hard for most.
| 2 | 0 | 2026-03-01T03:02:41 | LeatherRub7248 | false | null | 0 | o7zoco2 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zoco2/ | false | 2 |
t1_o7zoadj | You can set Ollama's keep-alive to, I think, -1 and that'll keep models running indefinitely.
As far as HA to Google, I think you might be able to go thru Music Assistant and that might give you media\_entity devices you can use. Music assistant has native support | 2 | 0 | 2026-03-01T03:02:17 | ubrtnk | false | null | 0 | o7zoadj | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zoadj/ | false | 2 |
t1_o7zo8ps | I agree that if you are going to do 1., then this is a great way to do it.
However, the general consensus appears to be to not to let the context grow substantially, nor to let it grow to the point that the AI compacts it automatically (or instruct the AI to compact) because AIs are not generally good at selectin... | 1 | 0 | 2026-03-01T03:01:59 | Protopia | false | null | 0 | o7zo8ps | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zo8ps/ | false | 1 |
t1_o7zo6bh | You don't need any framework for this.
What you need is llama.cpp directly or via Lemonade Server (to help with ROCm stuff) so that you can run an LLM on your 7900 XTX.
After that, you download a model. Whichever modern and not-too-stupid model is fine. I would say try Qwen 3 4B VL and give it as large a context as possi... | 1 | 0 | 2026-03-01T03:01:34 | o0genesis0o | false | null | 0 | o7zo6bh | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o7zo6bh/ | false | 1 |
t1_o7zo4hs | The model is a bit cooked. I'm using the 35B MoE and it will infinite think given some reasoning task and thinking enabled | 1 | 0 | 2026-03-01T03:01:15 | bestsniperNAxoxo | false | null | 0 | o7zo4hs | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7zo4hs/ | false | 1 |
t1_o7zo3zs | I think this is really cool. I'm working with a local system that runs Qwen3.5 35B and Qwen3 4B and I think you might have just saved me a ton of tokens. | 2 | 0 | 2026-03-01T03:01:10 | plaintxt | false | null | 0 | o7zo3zs | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zo3zs/ | false | 2 |
t1_o7zo2k6 | If your wife will let you, just replace the Alexa (or Google) device with a Home Assistant Voice Preview Edition and Home Assistant. Between Parakeet v3, Kokoro, and a decent \~8B GGUF, you can run that with a 3060, and they'll use it for timers; and if you throw in Tailscale and the Home Assistant app, (shopping) lists ... | 1 | 0 | 2026-03-01T03:00:55 | Pedalnomica | false | null | 0 | o7zo2k6 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zo2k6/ | false | 1 |
t1_o7znzzr | what do you build? what tech stack? | 1 | 0 | 2026-03-01T03:00:28 | No-Simple8447 | false | null | 0 | o7znzzr | false | /r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7znzzr/ | false | 1 |
t1_o7znvxe | That motherfucker the only person on this planet with that fade. | 1 | 0 | 2026-03-01T02:59:47 | TheDailySpank | false | null | 0 | o7znvxe | false | /r/LocalLLaMA/comments/1rhltfq/dm_i_got_this_type/o7znvxe/ | false | 1 |
t1_o7znksg | Awesome! I’m running much lighter hardware than you: Ollama and Edge TTS but found they unload themselves after 5 minutes of inactivity 😆
I have HA installed & now need a way to interface from HA to the Google speakers around my house.
Any tips there would be gold! | 1 | 0 | 2026-03-01T02:57:56 | redonculous | false | null | 0 | o7znksg | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7znksg/ | false | 1 |
t1_o7znfwv | lol
Lmao even | 1 | 0 | 2026-03-01T02:57:07 | Wompie | false | null | 0 | o7znfwv | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7znfwv/ | false | 1 |