name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7zb6j9 | I think you're looking for recognition from the wrong people; it's like showing a football match to someone who doesn't like football. Better to share it with a community of people who actually know the subject, because non-technical people have no idea of the effort and dedication involved, and that just leaves you open to being told... | 1 | 0 | 2026-03-01T01:42:14 | West-Affect-4832 | false | null | 0 | o7zb6j9 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zb6j9/ | false | 1 |
t1_o7zb60q | lol -- that's so janky.<br>I told it to ground with internet and it seemed to take that, pulling up settings for me. Then I found the unsloth page and it was close enough to be acceptable. | 2 | 0 | 2026-03-01T01:42:08 | _raydeStar | false | null | 0 | o7zb60q | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7zb60q/ | false | 2 |
t1_o7zb4pz | Incredible LLM for the processing power it requires. I have been using it over the last few days and it’s definitely my go-to. | 2 | 0 | 2026-03-01T01:41:55 | Which_Grand8160 | false | null | 0 | o7zb4pz | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7zb4pz/ | false | 2 |
t1_o7zb0d9 | True - the Alexa thing still bothers me a bit so I'll probably still maintain the HA Voice Assistant stuff. I haven't found a better alternative - I tried checking the [Josh.ai](http://Josh.ai) path, but I think that's for professional installers only. | 2 | 0 | 2026-03-01T01:41:10 | ubrtnk | false | null | 0 | o7zb0d9 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zb0d9/ | false | 2 |
t1_o7zau0p | 😂😂😂 | 2 | 0 | 2026-03-01T01:40:06 | Which_Grand8160 | false | null | 0 | o7zau0p | false | /r/LocalLLaMA/comments/1qvq0xe/bashing_ollama_isnt_just_a_pleasure_its_a_duty/o7zau0p/ | false | 2 |
t1_o7zatgc | I try to always use folks, but sometimes forget and fall back on guys. Hard to adjust your language in your 40’s, but it’s worth it to try IMHO. | 8 | 0 | 2026-03-01T01:40:00 | HomsarWasRight | false | null | 0 | o7zatgc | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zatgc/ | false | 8 |
t1_o7zardr | Don’t take it personally. Most are afraid to use AI and there is an insane amount of competition out there. Really it comes down to what problem you’re trying to solve… | 1 | 0 | 2026-03-01T01:39:39 | Which_Grand8160 | false | null | 0 | o7zardr | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zardr/ | false | 1 |
t1_o7zape5 | so i'm hearing you might have 4090s for sale… just kidding. that's how it goes sometimes. it does suck when someone you're close to doesn't get your hobby, but it sounds like you tried to do your best by your family and you didn't do anything wrong, it's just not something they're excited about.<br>my wife's also an engi... | 16 | 0 | 2026-03-01T01:39:18 | HopePupal | false | null | 0 | o7zape5 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zape5/ | false | 16 |
t1_o7zadx0 | Only it's true and not funny. I hate OpenAI | 2 | 0 | 2026-03-01T01:37:18 | wrines | false | null | 0 | o7zadx0 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7zadx0/ | false | 2 |
t1_o7zaasg | Minimum 32, recommended 48 and above. | 2 | 0 | 2026-03-01T01:36:46 | JLeonsarmiento | false | null | 0 | o7zaasg | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7zaasg/ | false | 2 |
t1_o7zaaac | You shared a screenshot of a text msg only from your wife, and made your post focused around that text msg. Do you not see her, every single day? Along with your children, who you maybe would have asked once or twice throughout the last few months “hey family, how do you like this thing I built which seems to have take... | 22 | 0 | 2026-03-01T01:36:40 | 2016YamR6 | false | null | 0 | o7zaaac | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zaaac/ | false | 22 |
t1_o7za8q4 | I did, which is how I got on the OWUI vs LibreChat or others from mid-last year. We all were using ChatGPT, in some capacity or another, so it was a familiar interface. Coupled with usage of Alexa, I thought I had all the primary interface types covered. | 9 | 0 | 2026-03-01T01:36:24 | ubrtnk | false | null | 0 | o7za8q4 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7za8q4/ | false | 9 |
t1_o7za4m7 | Scaling law applies to computing not people. Reddit is a perfect example. The scale of compute required to serve the crowd is much larger than before but the quality of the content and comments has decreased significantly since the masses arrived. No amount of compute will fix that. | 1 | 0 | 2026-03-01T01:35:42 | LocoMod | false | null | 0 | o7za4m7 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7za4m7/ | false | 1 |
t1_o7z9zab | Mate, you’re competing against Anthropic, OpenAI, etc., which all have SotA models, apps, and integrations. Do NOT take it personally.<br>Build it and use it yourself. If others use it, great, but do it for you. | 63 | 0 | 2026-03-01T01:34:47 | Ok-Contest-5856 | false | null | 0 | o7z9zab | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9zab/ | false | 63 |
t1_o7z9yvm | Your service offers you more privacy, but it offers the rest of your family less! Do you think your wife or kids would feel safer discussing discreet STD testing with a server you manage or with ChatGPT?<br>So for your family, there is no upside to it. The results are a bit worse than Gemini's or ChatGPT's, and they are... | 637 | 0 | 2026-03-01T01:34:43 | nihilistic_ant | false | null | 0 | o7z9yvm | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9yvm/ | false | 637 |
t1_o7z9wnj | lol yea all good - 22 years married and counting. Reliance is unavoidable going this long. I handle the tech, the pool and the big house maintenance things, she handles the finances - I'd be lost without her on that - but then again, she is an accountant and I am in IT | 14 | 0 | 2026-03-01T01:34:19 | ubrtnk | false | null | 0 | o7z9wnj | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9wnj/ | false | 14 |
t1_o7z9o8r | Yeah a lightweight adapter is exactly the direction I want to explore. I made some progress there but still in prototyping stage. | 1 | 0 | 2026-03-01T01:32:53 | proggmouse | false | null | 0 | o7z9o8r | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7z9o8r/ | false | 1 |
t1_o7z9lop | I texted the kids because they're older and have their own lives and things going on, not always around. Why is that nuts? | 7 | 0 | 2026-03-01T01:32:26 | ubrtnk | false | null | 0 | o7z9lop | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9lop/ | false | 7 |
t1_o7z9jls | Yup. Nobody cares unless it solves a real problem. | 47 | 0 | 2026-03-01T01:32:04 | Budget-Juggernaut-68 | false | null | 0 | o7z9jls | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9jls/ | false | 47 |
t1_o7z9fzb | To be honest, Local AGI is an agentic app I've been building for my own use so I don't need to give up my data to any third party. Happy to share it if you're interested in beta testing and giving feedback. | 1 | 0 | 2026-03-01T01:31:26 | luke_pacman | false | null | 0 | o7z9fzb | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z9fzb/ | false | 1 |
t1_o7z9ev8 | 35B-A3B Q4 would be your best choice for speed/performance with that little VRAM. | 3 | 0 | 2026-03-01T01:31:15 | oxygen_addiction | false | null | 0 | o7z9ev8 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7z9ev8/ | false | 3 |
t1_o7z9elc | Yea I have the mandatory things - no ads, Technitium redundant DNS, Plex etc. I think the only thing that the wife REALLY uses that's extra is Mealie, only because she kept losing her cookbooks lol. | 5 | 0 | 2026-03-01T01:31:12 | ubrtnk | false | null | 0 | o7z9elc | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9elc/ | false | 5 |
t1_o7z9clg | Maybe it’s not about the software and it’s about you? Not to overreach, but please just reassure us your marriage is okay haha.<br>I can also delete this comment for being irrelevant, although this is one possible reason someone might not want to become overly reliant on another person. | 7 | 0 | 2026-03-01T01:30:51 | intrepidlemon | false | null | 0 | o7z9clg | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9clg/ | false | 7 |
t1_o7z9blo | You texted your family to get their opinion..?<br>Bro, is this a real family or an AI family you are talking about? Either way you sound kind of nutters | 36 | 0 | 2026-03-01T01:30:41 | 2016YamR6 | false | null | 0 | o7z9blo | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z9blo/ | false | 36 |
t1_o7z9aye | Was it the K_XL quant? Those might have been the ones with issues. | 2 | 0 | 2026-03-01T01:30:34 | Zc5Gwu | false | null | 0 | o7z9aye | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7z9aye/ | false | 2 |
t1_o7z97dm | Qwen | 0 | 0 | 2026-03-01T01:29:57 | Whyme-__- | false | null | 0 | o7z97dm | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7z97dm/ | false | 0 |
t1_o7z940m | Thanks and yea I agree - I was trying to get the user requirements and that's what kicked this whole thing off lol.<br>Hey maybe I can just get OpenClaw to do everything for me ;)<br>I think I might take a step back and compartmentalize what's needed vs not and go from there. Boil it down to an MVP and see what's left that... | 6 | 0 | 2026-03-01T01:29:23 | ubrtnk | false | null | 0 | o7z940m | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z940m/ | false | 6 |
t1_o7z92fi | I don't think there's anything worth running locally tbh | 0 | 0 | 2026-03-01T01:29:07 | Western_Objective209 | false | null | 0 | o7z92fi | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7z92fi/ | false | 0 |
t1_o7z9288 | That's not what scaling law says | 1 | 0 | 2026-03-01T01:29:04 | Firepal64 | false | null | 0 | o7z9288 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7z9288/ | false | 1 |
t1_o7z8zy1 | My family enjoys having no ads and 5 layers of recursive DNS at home :)<br>The DNS part they don't understand, but they do like having no ads. I had to add the DNS failovers because it would always hurt if DNS went down.<br>I gave my kids OpenWebUI access but ended up connecting their accounts to OpenRouter because my stu... | 7 | 0 | 2026-03-01T01:28:41 | Ok-Ad-8976 | false | null | 0 | o7z8zy1 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z8zy1/ | false | 7 |
t1_o7z8zn5 | Too much for a noob, llama.cpp would've given me a heart attack if i had started with it | 1 | 0 | 2026-03-01T01:28:38 | Velocita84 | false | null | 0 | o7z8zn5 | false | /r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7z8zn5/ | false | 1 |
t1_o7z8rt7 | Way to add nothing to the conversation but a “be careful”. You’re the type of person who, when Bitcoin came out, probably shunned it - and when you heard others shun it, shunned it again.<br>New tech always has issues and risks. But looking at and learning how we can use it to benefit is far more productive than just shitting on it. | 1 | 0 | 2026-03-01T01:27:18 | Vegetable_Belt_9542 | false | null | 0 | o7z8rt7 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7z8rt7/ | false | 1 |
t1_o7z8qf9 | Yeah, that's the way I usually go too. New models often need time for teams like llama.cpp and Unsloth to keep improving and fixing bugs before we have a reliable version to stick with. I've re-downloaded the Unsloth quants a couple of times already due to bug-fix releases.<br>I think there's still room for speed improvem... | 1 | 0 | 2026-03-01T01:27:04 | luke_pacman | false | null | 0 | o7z8qf9 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z8qf9/ | false | 1 |
t1_o7z8pkb | Why the hell not? This is better than most of the projects that get posted here. Looks fun. | 2 | 0 | 2026-03-01T01:26:56 | IllllIIlIllIllllIIIl | false | null | 0 | o7z8pkb | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z8pkb/ | false | 2 |
t1_o7z8pkm | It's a data collection tool as far as i see it. | 1 | 0 | 2026-03-01T01:26:56 | FleetingSpaceMan | false | null | 0 | o7z8pkm | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7z8pkm/ | false | 1 |
t1_o7z8lee | I even told it I downloaded a new release of qwen 3 before 3.5 was out and pointed to the model page. It looked it up and was like “amazing, cool, yay”. And I asked for some help configuring it and it goes “this is a bleeding edge model with bugs and architectures that aren’t supported in the current runtimes. Just use... | 8 | 0 | 2026-03-01T01:26:12 | dan-lash | false | null | 0 | o7z8lee | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7z8lee/ | false | 8 |
t1_o7z8l77 | [removed] | 1 | 0 | 2026-03-01T01:26:10 | [deleted] | true | null | 0 | o7z8l77 | false | /r/LocalLLaMA/comments/1qoty38/kimi_k25_costs_almost_10_of_what_opus_costs_at_a/o7z8l77/ | false | 1 |
t1_o7z8fel | Check out K2-V2 by LLM360. It's a 72B dense with a 512K context limit, and so far I've been really impressed by it. | 1 | 0 | 2026-03-01T01:25:10 | ttkciar | false | null | 0 | o7z8fel | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7z8fel/ | false | 1 |
t1_o7z87zv | Thanks. I have been considering getting the HP ZBook Ultra G1a 14” Ryzen AI Max+ PRO with 128GB ram and 1TB SSD. Just not sure whether to get it or an M4 MBP with 128GB RAM. | 1 | 0 | 2026-03-01T01:23:53 | Ygobyebye | false | null | 0 | o7z87zv | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7z87zv/ | false | 1 |
t1_o7z82qm | I’m actively working on better benchmark metrics that could shine some light on the accuracy drop.<br>The results are also a bit hand-wavy due to the small sample size. | 1 | 0 | 2026-03-01T01:22:57 | proggmouse | false | null | 0 | o7z82qm | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7z82qm/ | false | 1 |
t1_o7z81q8 | This is a great usage of local models. As someone who just built a translation app: nice work! | 1 | 0 | 2026-03-01T01:22:46 | InvertedVantage | false | null | 0 | o7z81q8 | false | /r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/o7z81q8/ | false | 1 |
t1_o7z7tkz | Just keep going. From 1996-2004 my side project was a web browser in C. I updated it to the latest HTML specs once a year and had an install base of just me. I learned more from that side project than any other I ever did. | 2 | 0 | 2026-03-01T01:21:22 | DorianGre | false | null | 0 | o7z7tkz | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z7tkz/ | false | 2 |
t1_o7z7ri6 | You sound like a right tool, my condolences to your family 💀 But in all seriousness, it's a privacy thing - non-techy ppl don't like giving people they know direct access to themselves | 1 | 1 | 2026-03-01T01:21:01 | ZodiacKiller20 | false | null | 0 | o7z7ri6 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z7ri6/ | false | 1 |
t1_o7z7omz | Multimodal? No, not that thing about generating things beyond text. Is it omnimodal?<br>Multimodal means it can read multimedia files; omnimodal means it can create them. | 1 | 0 | 2026-03-01T01:20:31 | Samy_Horny | false | null | 0 | o7z7omz | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7z7omz/ | false | 1 |
t1_o7z7lls | At the end of the day, Microsoft has pulled off a masterclass move that nobody saw coming... | 1 | 0 | 2026-03-01T01:20:00 | Psychological-Sun744 | false | null | 0 | o7z7lls | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7z7lls/ | false | 1 |
t1_o7z7e8a | Hi, although I agree that this is the wrong sub for this question, it is also the most reasonable sub of all, with the most enthusiasts and professionals hanging out and sharing their opinions. Here's my honest take (pardon me for any typos, I will focus on the content, not the language):<br>1. I disagree wit... | 1 | 0 | 2026-03-01T01:18:44 | BeyondTheBlackBox | true | null | 0 | o7z7e8a | false | /r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7z7e8a/ | false | 1 |
t1_o7z744b | Nice breakdown 👍 | 2 | 0 | 2026-03-01T01:17:02 | sandseb123 | false | null | 0 | o7z744b | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7z744b/ | false | 2 |
t1_o7z6xkz | If it gives you joy, keep doing it.<br>Personally, I worked quite hard on homelab stuff so that we can self-host things, including podcasts that my partner usually listens to. And I set up a VPN so that she can access the server easily. And I deal with DNS so that she does not need to see IP addresses. And I integrate SSO as w... | 69 | 0 | 2026-03-01T01:15:56 | o0genesis0o | false | null | 0 | o7z6xkz | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7z6xkz/ | false | 69 |
t1_o7z6u3u | ik_llama.cpp will be the fastest; you can build it on your system with optimized kernels. https://github.com/ikawrakow/ik_llama.cpp | 2 | 0 | 2026-03-01T01:15:22 | someone383726 | false | null | 0 | o7z6u3u | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7z6u3u/ | false | 2 |
t1_o7z6tdm | I use Q4-K-XL GGUF quant version by Unsloth. | 4 | 0 | 2026-03-01T01:15:15 | luke_pacman | false | null | 0 | o7z6tdm | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z6tdm/ | false | 4 |
t1_o7z6bgq | I run the agentic setup on my MacBook M1 with 64GB unified memory, but it can comfortably run on any 32GB Apple Silicon device. For 24GB Macs, you can use a smaller quant (e.g. IQ3, which is only ~14GB). | 1 | 0 | 2026-03-01T01:12:11 | luke_pacman | false | null | 0 | o7z6bgq | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z6bgq/ | false | 1 |
t1_o7z65v9 | Please don't make posts about your own PR. Let it float to the top; it will if it's good and not bullshit. | -5 | 0 | 2026-03-01T01:11:14 | Xamanthas | false | null | 0 | o7z65v9 | false | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o7z65v9/ | false | -5 |
t1_o7z5yqx | I vaguely remember some tests on those models where the loss only starts below Q8. The difference at Q8 was below the margin of error.<br>I really need to start saving those links. | 1 | 0 | 2026-03-01T01:10:01 | Prudent-Ad4509 | false | null | 0 | o7z5yqx | false | /r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7z5yqx/ | false | 1 |
t1_o7z5ui5 | >found mostly unmaintained projects.
To be fair to them, SearXNG hasn't changed that much, so their MCPs shouldn't need to change that much.
The only feature I've thought about adding is the ability to have the MCP send a page off to an openAI compatible API with a query from the agent to have the subagent deal with ... | 6 | 0 | 2026-03-01T01:09:18 | SM8085 | false | null | 0 | o7z5ui5 | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o7z5ui5/ | false | 6 |
t1_o7z5prf | Debian isn't going to be your pick for speed; that's your choice for stability, i.e., a server running one service that you don't want to touch for 5 years.<br>You're going to want the newest kernel, newest driver, and if you really want it to go as fast as possible, you want to compile it from sourc... | 3 | 0 | 2026-03-01T01:08:29 | arades | false | null | 0 | o7z5prf | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z5prf/ | false | 3 |
t1_o7z5p29 | Look into tailscale and something like https://oneuptime.com/blog/post/2026-01-27-ollama-docker/view | 2 | 0 | 2026-03-01T01:08:22 | lundrog | false | null | 0 | o7z5p29 | false | /r/LocalLLaMA/comments/1rhj2pj/whats_the_current_local_containerized_setup_look/o7z5p29/ | false | 2 |
t1_o7z5jcp | There is a song about that. It’s gonna be big 🤯💥 | 1 | 0 | 2026-03-01T01:07:22 | Puzzleheaded-Nail814 | false | null | 0 | o7z5jcp | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7z5jcp/ | false | 1 |
t1_o7z5i7t | One of the very nice interfaces is open-webui; of course you want either a VPN for your family, or to set up a proper public IP + domain and reverse proxy to it.<br>open-webui can talk to practically any AI runner, or even to multiple of them (ollama or anything OpenAI-compatible). | 2 | 0 | 2026-03-01T01:07:11 | p_235615 | false | null | 0 | o7z5i7t | false | /r/LocalLLaMA/comments/1rhj2pj/whats_the_current_local_containerized_setup_look/o7z5i7t/ | false | 2 |
t1_o7z5i1w | Disclaimer: English is not my native language, so I am using AI to format my response.<br>Hey! Honestly, I hadn't heard of Omi until your comment, but I just checked it out—continuous audio capture on an open-source wearable is a wild engineering challenge. And yes, the ESP32-S3 audio pipeline is a uniquely cruel form of torture.<br>... | 2 | 0 | 2026-03-01T01:07:09 | dkrusko | false | null | 0 | o7z5i1w | false | /r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/o7z5i1w/ | false | 2 |
t1_o7z5bxq | Claude helped me get llama.cpp going. I have three models and just turn one on as I want to use it. Gemma 2 27, Qwen 3.5 AB something, and I think a Qwen 2.5 coder. Working with the 3.5 right now. | 1 | 0 | 2026-03-01T01:06:06 | SoMuchLasagna | false | null | 0 | o7z5bxq | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7z5bxq/ | false | 1 |
t1_o7z56z9 | but let's make sure people with potato setups are happy too | 3 | 0 | 2026-03-01T01:05:16 | jacek2023 | false | null | 0 | o7z56z9 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7z56z9/ | false | 3 |
t1_o7z54zs | I have an MCP server that queries public SearXNG instances, though I might've broken something in the last update and forgotten to fix it :/<br>[https://github.com/pwilkin/mcp-searxng-public/issues/5](https://github.com/pwilkin/mcp-searxng-public/issues/5) | 3 | 0 | 2026-03-01T01:04:56 | ilintar | false | null | 0 | o7z54zs | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o7z54zs/ | false | 3 |
t1_o7z4z43 | Loop closed. | 2 | 0 | 2026-03-01T01:03:56 | Infinite-Anxiety-105 | false | null | 0 | o7z4z43 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7z4z43/ | false | 2 |
t1_o7z4y6z | But why do you even use 35B or 27B MoE models at FP8 with RTX Pro 6000? With 96GB VRAM it seems it’s way better to use larger models at Q6 or even MXFP4/NVFP4 or AWQ quants instead, right? Or it’s some specific case where you really need full FP8 inference precision? | 4 | 0 | 2026-03-01T01:03:47 | voyager256 | false | null | 0 | o7z4y6z | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7z4y6z/ | false | 4 |
t1_o7z4vos | ONNX interest as AI detection is honestly a better heuristic than most. I'll neither confirm nor deny, but props for the creativity. | 1 | 0 | 2026-03-01T01:03:22 | theagentledger | false | null | 0 | o7z4vos | false | /r/LocalLLaMA/comments/1rc59ze/qwen3s_most_underrated_feature_voice_embeddings/o7z4vos/ | false | 1 |
t1_o7z4tus | 2-3x from speculative decode on top of these prefill numbers would be genuinely wild — looking forward to those benchmarks. | 2 | 0 | 2026-03-01T01:03:04 | theagentledger | false | null | 0 | o7z4tus | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7z4tus/ | false | 2 |
t1_o7z4t45 | I opted for llama cpp about 6 months ago since it supported API server mode, which MLX didn't have back then. I believe MLX supports server mode by now, but is it mature? | 1 | 0 | 2026-03-01T01:02:56 | luke_pacman | false | null | 0 | o7z4t45 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z4t45/ | false | 1 |
t1_o7z4sz6 | It’s literally not the first breakout program to do that; literally every other agentic framework works that way. I know people who have such frameworks built, with multiple MCP servers/skills written, and use them for work. All OpenClaw had was viral marketing and being stupid/brave/cunning enough to connect it to apps no... | 1 | 0 | 2026-03-01T01:02:55 | Woke_TWC | false | null | 0 | o7z4sz6 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7z4sz6/ | false | 1 |
t1_o7z4qse | The 7B accuracy drop is the interesting part — curious if it's attention pattern mismatch or just capacity limits showing up under the pressure of two merged caches. | 1 | 0 | 2026-03-01T01:02:33 | theagentledger | false | null | 0 | o7z4qse | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7z4qse/ | false | 1 |
t1_o7z4plw | Skip lmstudio and go to llama.cpp. | 1 | 0 | 2026-03-01T01:02:21 | chibop1 | false | null | 0 | o7z4plw | false | /r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7z4plw/ | false | 1 |
t1_o7z4ef8 | basic maths | 1 | 0 | 2026-03-01T01:00:29 | NaymmmYT | false | null | 0 | o7z4ef8 | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7z4ef8/ | false | 1 |
t1_o7z463m | Yup. Do not run a government like a business | 1 | 0 | 2026-03-01T00:59:09 | Bireus | false | null | 0 | o7z463m | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7z463m/ | false | 1 |
t1_o7z45jl | Just wait for the smaller Qwens 3.5 that will release soon. | 1 | 0 | 2026-03-01T00:59:03 | Elusive_Spoon | false | null | 0 | o7z45jl | false | /r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7z45jl/ | false | 1 |
t1_o7z3y77 | o god i wish. if that was the case this would have full hardware acceleration and gpu support by now. this is built so close to the processor that linux documentation and source helps, but it's not even close to being something that can just be wired in. | 3 | 0 | 2026-03-01T00:57:50 | Electrical_Ninja3805 | false | null | 0 | o7z3y77 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z3y77/ | false | 3 |
t1_o7z3xyb | I think that total score against end-to-end runtime might be a more fair comparison given that some models think a lot more than others on the same problems. | 2 | 0 | 2026-03-01T00:57:47 | Zc5Gwu | false | null | 0 | o7z3xyb | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7z3xyb/ | false | 2 |
t1_o7z3sap | What do you think loads your kernel into ram lol? | 2 | 0 | 2026-03-01T00:56:50 | PeachScary413 | false | null | 0 | o7z3sap | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z3sap/ | false | 2 |
t1_o7z3j7b | i literally can't support cuda with this. not without years of work wiring everything up from scratch and probably still failing. the issue is nvidia has gone out of their way to make sure you can never do anything gpu compute oriented outside of their supported hardware stack. it's kind of a bummer. once this is finishe... | 5 | 0 | 2026-03-01T00:55:19 | Electrical_Ninja3805 | false | null | 0 | o7z3j7b | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z3j7b/ | false | 5 |
t1_o7z3gm9 | $17 for a local voice bridge is wild. we've been working on something similar with Omi (omi.me) — open source wearable that does continuous audio capture and pipes it to local or cloud LLMs for transcription and context extraction. the ESP32-S3 audio pipeline pain is real, especially the memory fragmentation when you'r... | 1 | 0 | 2026-03-01T00:54:53 | Deep_Ad1959 | false | null | 0 | o7z3gm9 | false | /r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/o7z3gm9/ | false | 1 |
t1_o7z3gmn | ai bot, stop spamming reddit and recommending old models | 1 | 0 | 2026-03-01T00:54:53 | Cultured_Alien | false | null | 0 | o7z3gmn | false | /r/LocalLLaMA/comments/1rcbold/what_are_some_top_ocr_models_that_can_deal_with/o7z3gmn/ | false | 1 |
t1_o7z3gcq | Honestly there isn't really a way to do this within your specs. Either you use online research, or you use local RAG with vector search on a downloaded dump of Wikipedia or something, but computing the embeddings will require quite a lot of horsepower (I believe Wikipedia dumps are ~10B tokens (?)). In both cases, I ... | 2 | 0 | 2026-03-01T00:54:50 | Hefty_Acanthaceae348 | false | null | 0 | o7z3gcq | false | /r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7z3gcq/ | false | 2 |
t1_o7z3cv1 | yeah, i've been building an agentic app focused on running real-world tasks on consumer-grade hardware so we do not need to give up our data to any third parties. | 3 | 0 | 2026-03-01T00:54:15 | luke_pacman | false | null | 0 | o7z3cv1 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z3cv1/ | false | 3 |
t1_o7z3avd | this is badass, but which parts did you use AI for? making sense of the decomp? | 1 | 0 | 2026-03-01T00:53:55 | HopePupal | false | null | 0 | o7z3avd | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z3avd/ | false | 1 |
t1_o7z32su | Honestly an ai can just crawl through linux docs and integrate just what you need. The future is now baby | 2 | 0 | 2026-03-01T00:52:36 | Neptun0 | false | null | 0 | o7z32su | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z32su/ | false | 2 |
t1_o7z30ex | Cool! We've gone from running an AI inside an OS, to an AI becoming the OS itself. | 1 | 0 | 2026-03-01T00:52:12 | c64z86 | false | null | 0 | o7z30ex | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z30ex/ | false | 1 |
t1_o7z2zf1 | i'm gay and not a guy so this actually worked out pretty well for me but OP got lucky | 29 | 0 | 2026-03-01T00:52:03 | HopePupal | false | null | 0 | o7z2zf1 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z2zf1/ | false | 29 |
t1_o7z2tps | This is such an amazingly cool idea, but if you are aiming to support CUDA... I advise not doing this at all, and instead pivoting to trimming a Linux distribution down to only what's needed to load the NVIDIA driver, CUDA acceleration, and the LLM stuff. | 3 | 0 | 2026-03-01T00:51:06 | valdev | false | null | 0 | o7z2tps | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z2tps/ | false | 3 |
t1_o7z2sfq | Interesting. By what metrics? I have not seen a domain where someone prefers Grok, Kimi, or Qwen over GLM 5 right now unless you count pricing. | 3 | 0 | 2026-03-01T00:50:54 | TheRealGentlefox | false | null | 0 | o7z2sfq | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7z2sfq/ | false | 3 |
t1_o7z2scp | no. but i have been programming microcontrollers for years. i have spent years developing on marlin firmware. never anything i release. all business side project stuff. i used to run a 3d printing print-to-order shop and have designed my own printer and firmware. though i never released them. just what i needed to use for ... | 2 | 0 | 2026-03-01T00:50:53 | Electrical_Ninja3805 | false | null | 0 | o7z2scp | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z2scp/ | false | 2 |
t1_o7z2p1d | Wtf | 2 | 0 | 2026-03-01T00:50:20 | murkomarko | false | null | 0 | o7z2p1d | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7z2p1d/ | false | 2 |
t1_o7z2f2u | 3090s? I wish!
I'm stunned by how well my old laptop's 6GB RTX 2060 does with careful tuning. I'm able to run three 7B-8B models at the same time: one on the GPU and two on the CPU (Ryzen 7 4800H, 8c/16t, 32 GB). All under Windows 11. | 4 | 0 | 2026-03-01T00:48:43 | IAmBobC | false | null | 0 | o7z2f2u | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7z2f2u/ | false | 4 |
t1_o7z28sf | Could be the loader you are running, or whatever backend it uses. If there's a GitHub repo for whatever is ultimately doing the inference, you could report the issue there, referencing the API versus the version you loaded. | 1 | 0 | 2026-03-01T00:47:44 | Monkey_1505 | false | null | 0 | o7z28sf | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7z28sf/ | false | 1 |
t1_o7z27dz | dude that's really cool well done. just out of curiosity, do you work with UEFI or other embedded stuff at your day job? | 1 | 0 | 2026-03-01T00:47:30 | HopePupal | false | null | 0 | o7z27dz | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z27dz/ | false | 1 |
t1_o7z24d4 | Well it runs in EFI | 1 | 0 | 2026-03-01T00:47:01 | Kenavru | false | null | 0 | o7z24d4 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z24d4/ | false | 1 |
t1_o7z1z9p | As far as I know, with llama.cpp we can toggle thinking on or off per-request, but there's no way to set a token budget for reasoning effort (e.g. "think for at most 500 tokens"), it's all or nothing. | 3 | 0 | 2026-03-01T00:46:13 | luke_pacman | false | null | 0 | o7z1z9p | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7z1z9p/ | false | 3 |
t1_o7z1z42 | At least the Chinese paid them for the service. | 1 | 0 | 2026-03-01T00:46:11 | Cheema42 | false | null | 0 | o7z1z42 | false | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o7z1z42/ | false | 1 |
t1_o7z1u80 | Sell it and get a strix halo | -1 | 0 | 2026-03-01T00:45:25 | rorowhat | false | null | 0 | o7z1u80 | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7z1u80/ | false | -1 |
t1_o7z1prv | what a lie. it didn't boot "directly" into an LLM interface; you had to intervene twice at loaders before it went, and it was very indirect getting there. | -5 | 0 | 2026-03-01T00:44:43 | nntb | false | null | 0 | o7z1prv | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z1prv/ | false | -5 |
t1_o7z1dij | I'd suggest something small like Qwen 4B to test device performance. Be sure to use Q4_0 models, and if you want to experiment, the beta builds have some NPU acceleration for Qualcomm devices. | 3 | 0 | 2026-03-01T00:42:45 | ----Val---- | false | null | 0 | o7z1dij | false | /r/LocalLLaMA/comments/1rhikjv/does_anyone_know_about_this_app/o7z1dij/ | false | 3 |
t1_o7z19cf | As others said... wrong sub, but what worked for me when I was in your position was: Follow your heart. It sounds like anime advice but it's true. Follow the money and your heart. If you envision yourself making an app and it fills you with excitement, it motivates you and you really wanna make it exist... then yeah, g... | 1 | 0 | 2026-03-01T00:42:06 | IngenuityMotor2106 | false | null | 0 | o7z19cf | false | /r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7z19cf/ | false | 1 |
t1_o7z18tr | This is... an AI advertising an AI. We really have reached the end of the internet. How sad. | 6 | 0 | 2026-03-01T00:42:02 | Pretty_Challenge_634 | false | null | 0 | o7z18tr | false | /r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7z18tr/ | false | 6 |
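One comment above (o7z1z9p) notes that llama.cpp lets you toggle thinking on or off per-request but offers no reasoning token budget. A minimal sketch of what that per-request toggle looks like against a local llama-server's OpenAI-compatible endpoint, assuming a Qwen3-style chat template that honors `enable_thinking` via `chat_template_kwargs` (the model name and port here are placeholders):

```python
import json

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a /v1/chat/completions payload for a local llama-server.

    `chat_template_kwargs` is forwarded to the model's chat template;
    for Qwen3-style templates, `enable_thinking` switches the reasoning
    block on or off. Note there is no field for a reasoning *token
    budget* -- the toggle is all-or-nothing, as the comment says.
    """
    return {
        "model": "qwen3",  # placeholder; llama-server serves whatever was loaded
        "messages": [{"role": "user", "content": prompt}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

# Build the payload only; POST it to e.g. http://localhost:8080/v1/chat/completions
payload = build_request("Summarize this file.", thinking=False)
print(json.dumps(payload, indent=2))
```

Sending the same payload with `thinking=True` re-enables the reasoning block for that one request, which is what makes the toggle per-request rather than a server-wide setting.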