name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7zncoy
it is actually an excellent local model, excellent at agentic tasks if you have a gpu with enough vram like a 3090 or 4090 and the like. you won't need those free models with opencode or kilo if you set it up, for many tasks. makes me wonder what we will have in feb 2027.
2
0
2026-03-01T02:56:34
ab2377
false
null
0
o7zncoy
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zncoy/
false
2
t1_o7znas4
Please test the A3B and A17B as well!
7
0
2026-03-01T02:56:14
rm-rf-rm
false
null
0
o7znas4
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7znas4/
false
7
t1_o7znae1
I thought there was a problem, or I thought I was solving a problem that everyone cared about.
1
0
2026-03-01T02:56:10
ubrtnk
false
null
0
o7znae1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7znae1/
false
1
t1_o7zn6w0
OWUI's interface is much better than CoPilot's - I was just complaining about this yesterday - it seems like CoPilot's web GUI has two different scroll bars that can overlap each other. That's the only experience I have - we don't have any other functions of CoPilot enabled yet at work and I don't use it at home.
0
0
2026-03-01T02:55:34
ubrtnk
false
null
0
o7zn6w0
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zn6w0/
false
0
t1_o7zn40b
I'm so weirded out that I had to scroll down so far to get this sentiment. But yeah u/ubrtnk are you doing alright? This kinda reads like you're spiraling or maybe you don't quite get people? I almost thought it was a bot post until I saw your screenshot.
15
0
2026-03-01T02:55:05
iMakeSense
false
null
0
o7zn40b
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zn40b/
false
15
t1_o7zn26x
Well, 3.5 122b does a very good job in coding tasks; for me it's on par with or even better than Gemini 3 flash preview
1
0
2026-03-01T02:54:46
robertpro01
false
null
0
o7zn26x
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zn26x/
false
1
t1_o7zn1am
Quick question: Unsloth recently published a fix for the official Qwen 3.5 tool-calling chat template (specifically fixing the Jinja |items loop crashing on tool_call.arguments). Do you recommend that I manually override the Jinja template in my inference engine for now, or do you have any plans to update the gguf?
1
0
2026-03-01T02:54:38
lannistersstark
false
null
0
o7zn1am
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7zn1am/
false
1
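For reference, the manual override route mentioned above can be sketched with llama.cpp's llama-server, which accepts an external Jinja chat template; the template filename below is a hypothetical placeholder for wherever the fixed template was saved:

```shell
# Sketch: override the GGUF's embedded chat template with a corrected Jinja file.
# Model and template paths are placeholders, not confirmed filenames.
./llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --jinja \
  --chat-template-file ./qwen3.5-toolcall-fixed.jinja
```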
t1_o7zmz2f
It’s currently in a private GitHub repo while I stabilize the architecture and clean up the structure. I’m planning to make parts of it public once the core orchestration and memory system are properly documented. Right now the system is centered around a “Brain” orchestrator coordinating LLM inference, persistent memo...
0
0
2026-03-01T02:54:15
WoodpeckerEastern629
false
null
0
o7zmz2f
false
/r/LocalLLaMA/comments/1rhl73t/exploring_a_modular_cognitive_architecture_for_a/o7zmz2f/
false
0
t1_o7zmy4y
lol I appreciate it. My family growing up wasn't really techie either. I will say though, my best trick for making a better end user platform was identifying the auxiliary components needed and making sure those were always running and available. Example: GPT-OSS:20B is the default model in OWUI AND the OpenAI integra...
3
0
2026-03-01T02:54:06
ubrtnk
false
null
0
o7zmy4y
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zmy4y/
false
3
t1_o7zmy1x
Rule 3
1
0
2026-03-01T02:54:05
LocalLLaMA-ModTeam
false
null
0
o7zmy1x
true
/r/LocalLLaMA/comments/1rhkw7m/peace/o7zmy1x/
true
1
t1_o7zmvg7
Can you use the reasoning version for free?
1
0
2026-03-01T02:53:38
robertpro01
false
null
0
o7zmvg7
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zmvg7/
false
1
t1_o7zmrfq
It's mostly bots, yeah. I report a bunch here.
3
0
2026-03-01T02:52:57
webheadVR
false
null
0
o7zmrfq
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7zmrfq/
false
3
t1_o7zmqhd
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
version: 8179 (ecbcb7ea9)
built with MSVC 19.40.33811.0 for x64

There's nothing in the command prompt prior to failure -- it just ends abruptly. Shows up in the windows app logs a...
1
0
2026-03-01T02:52:48
abstrkt
false
null
0
o7zmqhd
false
/r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/o7zmqhd/
false
1
t1_o7zmqce
it's too good at agentic tasks.
1
0
2026-03-01T02:52:47
ab2377
false
null
0
o7zmqce
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zmqce/
false
1
t1_o7zmpoj
I've never disabled thinking so fast in my life. I asked it a simple question and I'm not joking it was stuck in a "but wait" loop for 10 fucking minutes to give me the answer it actually "thought of" in the first minute of the thinking process.
2
0
2026-03-01T02:52:40
ArkCoon
false
null
0
o7zmpoj
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7zmpoj/
false
2
t1_o7zmpmo
this is why I was careful to say "for my tasks". if I had a lot more vram, my quality of life with kimi2.5 would have surely been better and it would have been able to complete the task...
1
0
2026-03-01T02:52:39
Tema_Art_7777
false
null
0
o7zmpmo
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zmpmo/
false
1
t1_o7zmf3v
I have an AMD Ryzen AI 350 with 64 GB, followed the guide, and it works perfectly. You just need to install this modified package for this processor: [https://gitlab.com/-/snippets/5962547](https://gitlab.com/-/snippets/5962547) I haven't managed to get lemonade-server working with FLM, but it does work correctly with flm ser...
1
0
2026-03-01T02:50:52
samuelmesa
false
null
0
o7zmf3v
false
/r/LocalLLaMA/comments/1rhanvn/amd_npu_tutorial_for_linux/o7zmf3v/
false
1
t1_o7zmey9
This reminds me of the time I started learning Linux. I wanted everyone to use it, I tried to convince them and help them, but they just didn't care; they just want a device that's ready to use and that's it. Local LLMs are for nerds bro.
1
0
2026-03-01T02:50:50
robertpro01
false
null
0
o7zmey9
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zmey9/
false
1
t1_o7zmeby
Sounds like you engineered a solution to a problem that didn’t really exist.
3
0
2026-03-01T02:50:44
buddroyce
false
null
0
o7zmeby
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zmeby/
false
3
t1_o7zm964
Is this the same feeling Satya Nadella feels right now? 
19
0
2026-03-01T02:49:52
AffectionateBowl1633
false
null
0
o7zm964
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zm964/
false
19
t1_o7zm8sg
That's a VERY truthful statement - my wife has never understood my pc gaming stuff or my music studio stuff or really this. She reads, and does plant stuff... not too different from a Hobbit - and she is very short.
1
0
2026-03-01T02:49:48
ubrtnk
false
null
0
o7zm8sg
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zm8sg/
false
1
t1_o7zm2g7
a kernel is a program that manages hardware and provides abstractions for other programs to run on top of it. that's it. scheduler, memory manager, driver model, syscall interface -- that's what makes a kernel a kernel. my app doesn't do any of that. there's no scheduler because there's only one program running -- mine. there...
7
0
2026-03-01T02:48:42
Electrical_Ninja3805
false
null
0
o7zm2g7
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zm2g7/
false
7
t1_o7zm17h
It's fine. My missus' point is "what's the point", since she already pays for Crunchyroll, which streams to TV easily. And she has iCloud. And she uses my Apple Music sub. So my tech just makes things harder, not easier.
16
0
2026-03-01T02:48:29
o0genesis0o
false
null
0
o7zm17h
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zm17h/
false
16
t1_o7zlxco
Honestly that's what it's probably going to mainly continue to exist as - Jarvis, turn on X; Jarvis, play Taylor Swift in the back yard; Jarvis, what's the temperature outside.
8
0
2026-03-01T02:47:50
ubrtnk
false
null
0
o7zlxco
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zlxco/
false
8
t1_o7zlwqu
you have underestimated the hostility towards machine learning.
0
0
2026-03-01T02:47:44
someexgoogler
false
null
0
o7zlwqu
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zlwqu/
false
0
t1_o7zlwp0
You are reading a lot into this. You can't know the context, and I don't think you sound very friendly.

> Either way you sound kind of nutters
-2
1
2026-03-01T02:47:43
l_eo_
false
null
0
o7zlwp0
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zlwp0/
false
-2
t1_o7zloi7
We've had Alexas in the house for a while, since the first portable Bluetooth speaker version. At one point, almost every room had one. TBH, they're still the most prevalent single device, followed closely by Sonos - I haven't tackled the ESP32 config to output HA Voice Preview Edition to Sonos media entities yet. ...
1
0
2026-03-01T02:46:17
ubrtnk
false
null
0
o7zloi7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zloi7/
false
1
t1_o7zlndm
Whatever qwen provides for free with their oauth token. I bootstrapped her obsidian vault from my skeletal version with system prompts and scripts and skills built in. The tool calls and reasoning chain freaked her out at the beginning, but she got used to it quickly. I also added a local speech to text model for her. ...
1
0
2026-03-01T02:46:06
o0genesis0o
false
null
0
o7zlndm
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7zlndm/
false
1
t1_o7zlmo7
Tiny? I've had success fine-tuning 4b, 7b, and 14b models for tasks!
1
0
2026-03-01T02:45:58
ciarandeceol1
false
null
0
o7zlmo7
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zlmo7/
false
1
t1_o7zljeh
What's the purpose of your BF2 Card? How are you utilizing it with this setup?
1
0
2026-03-01T02:45:25
phreak9i6
false
null
0
o7zljeh
false
/r/LocalLLaMA/comments/1quznwr/dgx_cluster_my_small_footprint_low_power_ai_system/o7zljeh/
false
1
t1_o7zlgwz
I think Claude or codex could likely do this right now.
2
0
2026-03-01T02:44:59
Double_Sherbert3326
false
null
0
o7zlgwz
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zlgwz/
false
2
t1_o7zlgm4
I totally get it as someone who's written software for friends and family. It worked when the alternative was hours of data entry drudgery, but not so well when it didn't take much time and was a time killer in the first place.
2
0
2026-03-01T02:44:56
Ylsid
false
null
0
o7zlgm4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zlgm4/
false
2
t1_o7zl920
If it makes you feel any better, I would kill to have someone in my family building cool shit like this. But I'd also advocate that you do it because it provides *you value, or enjoyment, or learning.* One unfortunate reality with tech hobbies is that the people closest to you often don't really understand them...
7
0
2026-03-01T02:43:39
redoubt515
false
null
0
o7zl920
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zl920/
false
7
t1_o7zl7lr
lol I haven't been full time end user facing in over 15 years. AI and these types of end user services have never been my job - the closest thing to end user services that I've ever done was VDI - nothing louder than an unhappy user trying to log into their virtual desktop when it takes a couple of minutes to build the pr...
2
0
2026-03-01T02:43:24
ubrtnk
false
null
0
o7zl7lr
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zl7lr/
false
2
t1_o7zl7h0
Sadly, it didn't pass the test, see my other reply on common bugs -- that was taken directly from my test with 35ba3b.
3
0
2026-03-01T02:43:22
derekp7
false
null
0
o7zl7h0
false
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o7zl7h0/
false
3
t1_o7zl44p
👍 🙂 nice
1
0
2026-03-01T02:42:49
NegotiationNo1504
false
null
0
o7zl44p
false
/r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o7zl44p/
false
1
t1_o7zkzh7
I used the prompt "Create a single page web app scientific RPN calculator". The typical result on most models is a Picasso keyboard (that is, the keys are somewhat jumbled -- they look right in the HTML but they wrap wrong in the browser). For example, 7 8 9 / Ent 4 5 6 x 1 2 3 - 0 . +- ...
2
0
2026-03-01T02:42:00
derekp7
false
null
0
o7zkzh7
false
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o7zkzh7/
false
2
t1_o7zktjb
I was trying to be sarcastic. As others said, there is not going to be much improvement.
0
0
2026-03-01T02:41:01
Agile_Cicada_1523
false
null
0
o7zktjb
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zktjb/
false
0
t1_o7zkrcb
I personally have never used Alexa or Siri or Google Assistant, partly because of privacy concerns, but also I don't like the voice interface. Do or did the people in your family use Alexa regularly? Maybe they'd like a Telegram or WhatsApp chatbot more? For starters they wouldn't need to be around a particular applia...
1
0
2026-03-01T02:40:39
muyuu
false
null
0
o7zkrcb
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zkrcb/
false
1
t1_o7zkr3s
If you’re using a couple minutes of extra time as a limiting factor for intelligence, then you’re actually wasting your time at this point; that’s debt you’re unaware of. Set up your system properly.
1
0
2026-03-01T02:40:36
vinigrae
false
null
0
o7zkr3s
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7zkr3s/
false
1
t1_o7zkp85
ok so you must’ve not known what you were talking about when saying this

> nobody is running those locally
1
0
2026-03-01T02:40:17
megacewl
false
null
0
o7zkp85
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7zkp85/
false
1
t1_o7zkozw
Hey I’d love you to adopt me! Or at least tell me all of the tips and tricks you’ve learned in building your family AI! Where do I sign up for your newsletter?
1
0
2026-03-01T02:40:15
redonculous
false
null
0
o7zkozw
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zkozw/
false
1
t1_o7zkot0
and what talks to that hardware, handles memory, and manages processes? I'll give you a hint: it starts with k
-4
0
2026-03-01T02:40:13
CondiMesmer
false
null
0
o7zkot0
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zkot0/
false
-4
t1_o7zkoi4
How did you get KV cache working in llama.cpp for 3.5? I had issues because it is also a vision model and KV cache doesn't work for it, so I had to revert to 3.0 models. 3.5 recomputes context on every request or crashes. Any tips?
1
0
2026-03-01T02:40:10
aparamonov
false
null
0
o7zkoi4
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7zkoi4/
false
1
t1_o7zkkrc
This happened to me when I built my Plex server to host all my movies and shows at home. No one used it but me. I just kept watching my media and left them alone. Slowly but surely they started asking me to find older movies and shows since it wasn’t on streaming services. I said sure, I’ll add it to the Plex serve...
2
0
2026-03-01T02:39:32
dengar69
false
null
0
o7zkkrc
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zkkrc/
false
2
t1_o7zkjao
What do you thinks allocates memory to actually store any information 
-1
0
2026-03-01T02:39:17
CondiMesmer
false
null
0
o7zkjao
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zkjao/
false
-1
t1_o7zkj5h
not true, i am gpu poor
3
0
2026-03-01T02:39:15
sunshinecheung
false
null
0
o7zkj5h
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zkj5h/
false
3
t1_o7zkg7n
btw, they should train their own models
1
0
2026-03-01T02:38:45
sunshinecheung
false
null
0
o7zkg7n
false
/r/LocalLLaMA/comments/1rhkw7m/peace/o7zkg7n/
false
1
t1_o7zkg2n
[removed]
1
0
2026-03-01T02:38:44
[deleted]
true
null
0
o7zkg2n
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zkg2n/
false
1
t1_o7zkcd7
They can use siri/chatgpt...
1
0
2026-03-01T02:38:06
sunshinecheung
false
null
0
o7zkcd7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zkcd7/
false
1
t1_o7zkb2w
tailscale that sucker.
3
0
2026-03-01T02:37:52
ShengrenR
false
null
0
o7zkb2w
false
/r/LocalLLaMA/comments/1rhkw7m/peace/o7zkb2w/
false
3
t1_o7zk62h
The only reason we know is because we attribute a weight to memory based on emotion. Imagine throwing out a random line from a 5.5-rated movie on IMDb instead.
1
1
2026-03-01T02:37:02
Negative_Scarcity315
false
null
0
o7zk62h
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7zk62h/
false
1
t1_o7zk3tn
I built something extremely similar. Have 98 tools running on my home desktop, gpt-oss:20b is the main agentic driver. It has HA integration as well, can control lights and devices and TV as well. Can search YouTube app and put in shows on TV, can do web search, image to text, image gen, image to video, create music wi...
1
0
2026-03-01T02:36:39
BobbyNeedsANewBoat
false
null
0
o7zk3tn
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zk3tn/
false
1
t1_o7zk38f
[removed]
1
0
2026-03-01T02:36:33
[deleted]
true
null
0
o7zk38f
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zk38f/
false
1
t1_o7zjyy4
Write something that makes it run on any architecture. Maybe make a package control system so other people can contribute their own hardware specs. Name it something like LLM Inference, oNly Uefi eXecutable. It has a catchy acronym I don't think anyone has used yet. You can call it Linux!
-3
0
2026-03-01T02:35:50
TldrDev
false
null
0
o7zjyy4
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zjyy4/
false
-3
t1_o7zjwyx
1. You sound like a tinkerer vs someone motivated to solve a real market need. (That's totally fine, that's what hobbies are for, but just be honest with yourself about it.) 2. People don't know what they want, so don't bother asking them. They will tell you 'I want this' and more than half the time they'll try it and then...
3
0
2026-03-01T02:35:29
LeatherRub7248
false
null
0
o7zjwyx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zjwyx/
false
3
t1_o7zjrwo
I agree w this sentiment. At one point I asked family and friends if I could do this for their benefit and for free, and the real issue was that it was perceived that I would have their data personally. It didn’t matter what I said, they prefer Sam.
156
0
2026-03-01T02:34:38
klenen
false
null
0
o7zjrwo
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zjrwo/
false
156
t1_o7zjph9
On a side note: I am building two new apps. One is an AI Brain / AI Mentor kind of thing, similar to Delphi AI but your own private one. It learns and grows with you the more knowledge you feed it, and you can calibrate it as much as you like to get it fine-tuned. The other is more of an AI knowledge base product....
1
0
2026-03-01T02:34:13
crxssrazr93
false
null
0
o7zjph9
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zjph9/
false
1
t1_o7zji5k
Same
1
0
2026-03-01T02:32:57
Sure_Explorer_6698
false
null
0
o7zji5k
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7zji5k/
false
1
t1_o7zjhgx
I am proud of the KBs I built for my music studio stuff. It does help me get more out of my gear, because who reads the manuals when there are so many buttons to push?
1
0
2026-03-01T02:32:51
ubrtnk
false
null
0
o7zjhgx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zjhgx/
false
1
t1_o7zjgx8
I'm doing this on what I'd call a minimum-spec AI rig. i5-8500, 48 GB RAM, 12 GB RTX 3060. It's just my experience that anything smaller than about 20B parameters makes category errors and eventually becomes incoherent or loops back on itself or just loses the plot so badly that a new conversation becomes the only way ...
1
0
2026-03-01T02:32:45
MushroomCharacter411
false
null
0
o7zjgx8
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7zjgx8/
false
1
t1_o7zjf1s
This is a good meme
1
0
2026-03-01T02:32:25
PraxisOG
false
null
0
o7zjf1s
false
/r/LocalLLaMA/comments/1rhkw7m/peace/o7zjf1s/
false
1
t1_o7zje01
I actually tried PaddleVL-OCR, but it didn’t help much and just ended up being more resource-heavy.
1
0
2026-03-01T02:32:14
SprayOwn5112
false
null
0
o7zje01
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7zje01/
false
1
t1_o7zjcnz
Yeah, fully agree with everything you said. Higher RAM bandwidth on non-Apple machines will only come with chip makers mandating soldered high-speed RAM as a minimum requirement, like how Qualcomm uses LP-DDR5x at 8448 MT/s for Snapdragon X and a 9300 MT/s variant for Snapdragon X2. Users don't know and don't care and ...
2
0
2026-03-01T02:32:01
SkyFeistyLlama8
false
null
0
o7zjcnz
false
/r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/o7zjcnz/
false
2
t1_o7zj96m
The best advice I ever got was: keep building. Eventually it’ll find its purpose. Some of the most innovative inventions happened by accident.
1
0
2026-03-01T02:31:25
Which_Grand8160
false
null
0
o7zj96m
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zj96m/
false
1
t1_o7zj7o3
That's pretty much the last 2 episodes of Season 3 of Picard lol.
1
0
2026-03-01T02:31:09
ubrtnk
false
null
0
o7zj7o3
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zj7o3/
false
1
t1_o7zj6wk
I'm trying to build it on Windows, we'll see if it works. The docs stated Fedora in one section. If anyone wants to try:

cd llama.cpp
git fetch origin pull/19493/head:spec-checkpointing
git checkout spec-checkpointing
5
0
2026-03-01T02:31:01
fragment_me
false
null
0
o7zj6wk
false
/r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/o7zj6wk/
false
5
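For anyone following along, a minimal sketch of compiling the checked-out branch above with llama.cpp's standard CMake flow (CUDA is an assumption here; swap the -D flag for your backend):

```shell
# After the git fetch/checkout above, from inside the llama.cpp directory:
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -t llama-server
```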
t1_o7zj12i
Yeah these benchmarks are really misleading. Even the biggest qwen model is not even close to flash.
8
0
2026-03-01T02:30:01
kbt
false
null
0
o7zj12i
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zj12i/
false
8
t1_o7ziyb3
What tools do you use/recommend?
1
0
2026-03-01T02:29:32
Virtual-Listen4507
false
null
0
o7ziyb3
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7ziyb3/
false
1
t1_o7ziv3p
Lmao for real 🤣
1
0
2026-03-01T02:29:00
crxssrazr93
false
null
0
o7ziv3p
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7ziv3p/
false
1
t1_o7zirg4
Your 27B dense observation is actually really valuable - it confirms KV q8_0 is NOT necessarily free on dense models; I should add that caveat. For MoE models like Qwen3.5-35B-A3B it's still free because of the SSM hybrid architecture, but users shouldn't blindly apply it to dense models.
1
0
2026-03-01T02:28:23
gaztrab
false
null
0
o7zirg4
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zirg4/
false
1
t1_o7ziqxs
That sounds cool! Got it documented somewhere?
1
0
2026-03-01T02:28:18
dat_oldie_you_like
false
null
0
o7ziqxs
false
/r/LocalLLaMA/comments/1rhl73t/exploring_a_modular_cognitive_architecture_for_a/o7ziqxs/
false
1
t1_o7ziqx6
I have a single 3090. But I forgot that SSD and RAM cost a shit ton so now I have a glorified rock
1
0
2026-03-01T02:28:18
redditorialy_retard
false
null
0
o7ziqx6
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ziqx6/
false
1
t1_o7zipcm
No hurt taken, I get it - I never look at the logs of the few chats that are in there. But I get it, there are always logs. Flip it back though: it is the same as if I turned on text message monitoring or the iOS equivalent on one of the kids' phones - sometimes, as a parent, monitoring IS needed - I would rather be the monitor ...
1
0
2026-03-01T02:28:02
ubrtnk
false
null
0
o7zipcm
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zipcm/
false
1
t1_o7zilkw
More usable doesn't mean better in quality of output - maybe quality of life for you, since you don't have to wait. But as someone that has driven the heck out of kimi k2.5, you can't even compare 27b or 35b
2
0
2026-03-01T02:27:23
MotokoAGI
false
null
0
o7zilkw
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7zilkw/
false
2
t1_o7zikl7
My data shows PP-512 = 1390 t/s without batch flags vs ~1532 with -b 4096 -ub 4096, but TG drops from 74.7 to 48.3. The middle ground -ub 1024 -b 2048 gives PP +22% with only TG -3.5%, which could be worth it for prompt-heavy workflows. I'm adding PP columns to our benchmark comparison tool to make this more transpare...
2
0
2026-03-01T02:27:13
gaztrab
false
null
0
o7zikl7
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zikl7/
false
2
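The middle-ground batch settings described above can be reproduced with llama.cpp's llama-bench (the model path is a placeholder):

```shell
# Compare the -ub 1024 / -b 2048 middle ground: PP-512 prompt processing
# plus 128 tokens of generation, to see the PP/TG trade-off on your hardware.
llama-bench -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -b 2048 -ub 1024 -p 512 -n 128
```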
t1_o7zik99
I have a local SearXNG instance and am looking to first connect it to the Jan chat client. It will then be used in a local coding agent setup next.
2
0
2026-03-01T02:27:10
SteppenAxolotl
false
null
0
o7zik99
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o7zik99/
false
2
t1_o7zik8p
I appreciate this sub's postings compared to a lot of the hype trains you see about LLM trends on other parts of the internet. It has been very valuable in evaluating local models.
1
0
2026-03-01T02:27:09
mugacariya
false
null
0
o7zik8p
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zik8p/
false
1
t1_o7zih41
Speaking of Picard, they want the Enterprise computer *and* Data so the ship can be crewed by a dozen people.
1
0
2026-03-01T02:26:38
SkyFeistyLlama8
false
null
0
o7zih41
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zih41/
false
1
t1_o7zic8l
Thank you very much !
1
0
2026-03-01T02:25:50
orblabs
false
null
0
o7zic8l
false
/r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/o7zic8l/
false
1
t1_o7zic8x
Please clear out your desk.  But leave the magic laptop.
5
0
2026-03-01T02:25:50
PmMeSmileyFacesO_O
false
null
0
o7zic8x
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zic8x/
false
5
t1_o7zic0f
It is weird how people in here are talking about how you can't compete with professional services. This assumes your family wants some AI/LLM. Are they using any AI? In any context? You can't create demand where there isn't any. Maybe you mentioned this in the post and I missed it.
1
0
2026-03-01T02:25:48
geneusutwerk
false
null
0
o7zic0f
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zic0f/
false
1
t1_o7ziabv
4TB a4b
6
0
2026-03-01T02:25:30
KURD_1_STAN
false
null
0
o7ziabv
false
/r/LocalLLaMA/comments/1rhkgo8/qwen_35_35b_a3b_is_better_than_freetier_chatgpt/o7ziabv/
false
6
t1_o7zi7tu
Sonnet 4.0 was released in May 2025. We have a bunch of open models that are better than Sonnet 4.0 now, with Qwen3.5 27B being the closest replacement, since it is comparable in quality, speed, and context size. A single 3090 will do. Meanwhile, Qwen3.5 397B A17B is also a decent replacement for Opus 4.0 / 4.1, again ...
1
0
2026-03-01T02:25:04
notdba
false
null
0
o7zi7tu
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7zi7tu/
false
1
t1_o7zi6xw
GLM5 is just sloppy sometimes. It's good at Agentic coding but way worse than the others as a general purpose model. With that level of specialization and >700B params it *should* blow the rest totally out of the water in coding, but it just doesn't. The silly mistakes still happen.
1
0
2026-03-01T02:24:55
ForsookComparison
false
null
0
o7zi6xw
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7zi6xw/
false
1
t1_o7zhyym
Thank you for your kind words. And yes! We tested AesSedai Q4_K_M in our experiments. Results:

| Quant | PPL | KLD | Same-top-p | TG (tok/s) |
|-------|-----|-----|------------|------------|
| bartowski Q4_K_M | 6.6688 | 0.0286 | 92.46% | ~74 |
| AesSedai Q4_...
1
0
2026-03-01T02:23:34
gaztrab
false
null
0
o7zhyym
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zhyym/
false
1
t1_o7zhyln
Thanks for the insight. I didn't know SearXNG existed prior to this morning.
2
0
2026-03-01T02:23:31
SteppenAxolotl
false
null
0
o7zhyln
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o7zhyln/
false
2
t1_o7zhxna
I want it on my phone as well. Servers are best for this purpose.
6
0
2026-03-01T02:23:21
Intrepid-Self-3578
false
null
0
o7zhxna
false
/r/LocalLLaMA/comments/1rhkw7m/peace/o7zhxna/
false
6
t1_o7zhv30
Yes! We tested AesSedai Q4_K_M in our experiments. Results:

| Quant | PPL | KLD | Same-top-p | TG (tok/s) |
|-------|-----|-----|------------|------------|
| bartowski Q4_K_M | 6.6688 | 0.0286 | 92.46% | ~74 |
| AesSedai Q4_K_M | 6.3949 | 0.0095 | 95.74...
1
0
2026-03-01T02:22:55
gaztrab
false
null
0
o7zhv30
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zhv30/
false
1
t1_o7zhura
Sounds like you need to get control of your local models lol. Stress is when we care about something outside our control. If you just want an llm that speaks your language and plays Star Trek, I have good news for you! That type of thing is one afternoon away with the help of unsloth & the assistance of something like cod...
2
0
2026-03-01T02:22:52
Revolutionalredstone
false
null
0
o7zhura
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7zhura/
false
2
t1_o7zhq0v
For AMD/ROCm or Vulkan: --fit on doesn't work well (2.4x slower on ROCm per one user, 2.5x on Vulkan). Use manual offload instead:

./llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -c 65536 -ngl 999 --n-cpu-moe 24 \
  -fa on -t 20 --no-mmap --jinja \
  -ctk q8_0 -ctv q8_0

The key flag is --n-cpu-moe 24 — ...
1
0
2026-03-01T02:22:03
gaztrab
false
null
0
o7zhq0v
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zhq0v/
false
1
t1_o7zhlwm
Very interesting. Will test this out.
1
0
2026-03-01T02:21:21
Icy_Upstairs_7328
false
null
0
o7zhlwm
false
/r/LocalLLaMA/comments/1rhl0ro/i_fed_an_ai_50_hours_of_my_own_podcasts_it/o7zhlwm/
false
1
t1_o7zhkjm
Not a boring question at all! The exact same config works for 5060 Ti 16GB:

./llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -c 65536 --fit on -fa on -t 20 --no-mmap --jinja \
  -ctk q8_0 -ctv q8_0

You should expect around 50-55 tok/s instead of 74 — the difference is purely memory bandwidth (460 vs 960 GB...
1
0
2026-03-01T02:21:07
gaztrab
false
null
0
o7zhkjm
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zhkjm/
false
1
t1_o7zhkgy
No worries man, your post is awesome
1
0
2026-03-01T02:21:06
drstrangelove80
false
null
0
o7zhkgy
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zhkgy/
false
1
t1_o7zhhsq
If one of my parents had done that when I was a teen, I'd be concerned about whether he would see the cringe I'd put there. (If I wasn't tech literate) I would probably see it as mega invasive and controlling (sorry to hurt you OP...). My dad did something similar, monitoring my screen in real time when I was a teen, and it made me freak...
21
0
2026-03-01T02:20:40
Briskfall
false
null
0
o7zhhsq
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zhhsq/
false
21
t1_o7zhhtg
Is AesSedai_Qwen3.5-35B-A3B-IQ4_XS really 16GB? The one I found linked below has two files totaling about 28GB.  https://huggingface.co/AesSedai/Qwen3.5-35B-A3B-GGUF/tree/main/IQ4_XS
1
0
2026-03-01T02:20:40
ArchdukeofHyperbole
false
null
0
o7zhhtg
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7zhhtg/
false
1
t1_o7zhfc4
How dare you give them a solution to the exact problem they have
1
0
2026-03-01T02:20:14
ubrtnk
false
null
0
o7zhfc4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zhfc4/
false
1
t1_o7zhex5
Tested it! Do NOT use --no-kv-offload — it absolutely tanks generation speed. On my 5080: 16.1 tok/s with it vs 42.7 tok/s without (that's -63%). The KV cache on GPU is tiny for this model (only 10 KV cache layers because of the hybrid SSM architecture), so offloading it to RAM saves almost no VRAM but destroys perform...
2
0
2026-03-01T02:20:10
gaztrab
false
null
0
o7zhex5
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zhex5/
false
2
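The with/without comparison above can be reproduced directly with llama-bench, which accepts comma-separated values to sweep a flag (model path is a placeholder):

```shell
# Benchmark generation with KV cache kept on GPU (0) vs forced to host RAM (1)
# via -nkvo / --no-kv-offload, to see the speed penalty on your own hardware.
llama-bench -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf -nkvo 0,1 -n 128
```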
t1_o7zhdfz
**Thanks — that helps clarify things. So if I understand correctly, you're suggesting that I skip traditional OCR entirely and let a vision LLM (like Qwen-VL) read the text directly from page images, as long as I downscale them enough to stay under the 16k visual patch limit.** **I didn't realize Qwen3 VL 4B could run...
1
0
2026-03-01T02:19:55
SprayOwn5112
false
null
0
o7zhdfz
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7zhdfz/
false
1
t1_o7zh9p1
Gotcha!
1
0
2026-03-01T02:19:17
gaztrab
false
null
0
o7zh9p1
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7zh9p1/
false
1