name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o803pjn
I have to say I’ve been testing Qwen3.5-Plus on alibaba cloud and I’m really impressed with it
1
0
2026-03-01T04:47:45
lolxd__
false
null
0
o803pjn
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o803pjn/
false
1
t1_o803om0
Maybe this will help; it looks like the user aessedai is pretty good at quanting. https://www.reddit.com/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/
3
0
2026-03-01T04:47:34
ArtfulGenie69
false
null
0
o803om0
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o803om0/
false
3
t1_o803l96
Don’t forget there is a firmware issue that Nvidia acknowledged that has the bandwidth reduced for multi-spark clusters right now. Once Nvidia patches this, numbers will improve across the board for DGX Spark clusters.
1
0
2026-03-01T04:46:54
OWilson90
false
null
0
o803l96
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o803l96/
false
1
t1_o803js0
A business cares about their business. Crazy.
1
0
2026-03-01T04:46:36
Acrobatic-Employer38
false
null
0
o803js0
false
/r/LocalLLaMA/comments/1qd8vpj/claude_code_or_opencode_which_one_do_you_use_and/o803js0/
false
1
t1_o803b7w
Because Gemini is free and offers faster and much better capability in video generation, image generation, music generation, responses, etc. Why would you invent a "better" mousetrap for everyone? Just invent local AI apps that you can make use of and customize to your workflow to increase your own productivi...
1
0
2026-03-01T04:44:54
Euphoric_Emotion5397
false
null
0
o803b7w
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o803b7w/
false
1
t1_o803acs
Ohh just saw you have both 5090 and 3090 so q8 would work.
1
0
2026-03-01T04:44:43
hay-yo
false
null
0
o803acs
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o803acs/
false
1
t1_o80383d
For me Air was the perfect size and ratio of active parameters. But it's also a strong enough model that I feel like I could be happy with it for years even if there's no followup.
1
0
2026-03-01T04:44:16
toothpastespiders
false
null
0
o80383d
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o80383d/
false
1
t1_o802z70
Don't tell my leadership that lol.
1
0
2026-03-01T04:42:31
ubrtnk
false
null
0
o802z70
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o802z70/
false
1
t1_o802xg5
Qwen's GitHub cookbook has working function_calling examples; start there and cross-reference with the tokenizer template for your specific model. The formatting requirements are the whole trick.
2
0
2026-03-01T04:42:11
BC_MARO
false
null
0
o802xg5
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o802xg5/
false
2
t1_o802w7f
The only insights I have are what OWUI has built in (which was a recent addition), and if I combed thru the Dream Machine logs I could speculate about device usage. Ain't nobody got time for that
1
0
2026-03-01T04:41:57
ubrtnk
false
null
0
o802w7f
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o802w7f/
false
1
t1_o802u7m
Why is Linux 7.0 needed for amdxdna? amdxdna has been in the kernel for a year now. Is there something important being implemented in 7.0?
1
0
2026-03-01T04:41:34
Mr-I17
false
null
0
o802u7m
false
/r/LocalLLaMA/comments/1rhanvn/amd_npu_tutorial_for_linux/o802u7m/
false
1
t1_o802tu3
And? There don't seem to be pricing comparison websites for serverless, and providers like runpod charge twice as much.
1
0
2026-03-01T04:41:29
chastieplups
false
null
0
o802tu3
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o802tu3/
false
1
t1_o802oy1
You do realize how important data centers are these days, right? I would be full force into data centers if I knew enough to be on that path.
1
0
2026-03-01T04:40:30
Current-Ticket4214
false
null
0
o802oy1
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o802oy1/
false
1
t1_o802nk1
Basically Claude as the spec writer and reviewer - local models doing the heavy token churn. This way I can drop off Claude Pro Max.
2
0
2026-03-01T04:40:14
alphatrad
false
null
0
o802nk1
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o802nk1/
false
2
t1_o802nk4
It’s a similar situation in my home..
1
0
2026-03-01T04:40:14
leonbollerup
false
null
0
o802nk4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o802nk4/
false
1
t1_o802n54
AI insights ?
1
0
2026-03-01T04:40:09
Spara-Extreme
false
null
0
o802n54
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o802n54/
false
1
t1_o802ipr
The "summarise at every turn but keep the full detail" pattern is basically what a lot of production agent systems come together on – MCP + structured memory + focused context. AVP doesn't conflict with that approach. It's more about the mechanics of how context gets passed between agents, not what gets passed. You c...
1
0
2026-03-01T04:39:16
proggmouse
false
null
0
o802ipr
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o802ipr/
false
1
t1_o802gsl
Other way around, NSFW text is easier to get out of corpo models than NSFW images. Not sure what's going on with qwen other than speculating that qwen image 2.0 will be Flux.2 levels of censored if not more.
2
0
2026-03-01T04:38:53
Spara-Extreme
false
null
0
o802gsl
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o802gsl/
false
2
t1_o802ccj
Look here [https://unsloth.ai/docs/models/qwen3.5#qwen3.5-27b](https://unsloth.ai/docs/models/qwen3.5#qwen3.5-27b)
1
0
2026-03-01T04:37:59
DemmieMora
false
null
0
o802ccj
false
/r/LocalLLaMA/comments/1rfej6k/qwen_35_family_comparison_by_artificialanalysisai/o802ccj/
false
1
t1_o8025lq
https://preview.redd.it/…innocent peoples
1
0
2026-03-01T04:36:39
AlanTuringReborn
false
null
0
o8025lq
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8025lq/
false
1
t1_o801z4q
[removed]
1
0
2026-03-01T04:35:20
[deleted]
true
null
0
o801z4q
false
/r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o801z4q/
false
1
t1_o801wbp
I really think the most AI they're using is the AI that appears at the top of google searches now. We had some hope for Apple Intelligence when it was first announced but that was a massive turd.
1
0
2026-03-01T04:34:46
ubrtnk
false
null
0
o801wbp
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o801wbp/
false
1
t1_o801ty9
Do you have examples of such tool calls at hand? Is it a good idea to try and figure it out on my own?
2
0
2026-03-01T04:34:18
Protheu5
false
null
0
o801ty9
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o801ty9/
false
2
t1_o801pcv
You know what? Now I'll thank you even harder!
2
0
2026-03-01T04:33:22
Protheu5
false
null
0
o801pcv
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o801pcv/
false
2
t1_o801o3k
I think that might be why Perplexity might be my favorite of the closed source options - it's like a better internet-RAG system that answers my questions but then gives me the docs where the answers came from. If I had to spend money on only one (and I don't spend any right now), it would be on Perplexity
1
0
2026-03-01T04:33:07
ubrtnk
false
null
0
o801o3k
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o801o3k/
false
1
t1_o801mgy
Are they using some corpo model or just not using AI at all?
2
0
2026-03-01T04:32:48
Spara-Extreme
false
null
0
o801mgy
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o801mgy/
false
2
t1_o801gaw
🤌
1
0
2026-03-01T04:31:33
Head_Bananana
false
null
0
o801gaw
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o801gaw/
false
1
t1_o801e1q
Why would one use this when there are so many others *and* LLMs are now good enough that they can often figure stuff out without being polluted by memory that eats context?
1
0
2026-03-01T04:31:06
MrRandom04
false
null
0
o801e1q
false
/r/LocalLLaMA/comments/1rhn8eo/built_an_open_source_mcp_server_for_ai_coding/o801e1q/
false
1
t1_o801b87
Beep boop. System Error. The cake is a lie.
1
0
2026-03-01T04:30:32
LocoMod
false
null
0
o801b87
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o801b87/
false
1
t1_o8019g1
No they aren't. Deepseek will release, it'll be amazing, all US AI stocks will tank even more for a month, and then with the next Gemini and Veo update everyone will have forgotten about it. Just like last time.
1
0
2026-03-01T04:30:11
Spara-Extreme
false
null
0
o8019g1
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o8019g1/
false
1
t1_o80192f
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-03-01T04:30:07
WithoutReason1729
false
null
0
o80192f
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80192f/
true
1
t1_o80152t
Is this where I say that most people don't really use AI? Plus the vast majority that do just treat it like Wikipedia or a Google search. A lot of functionality is for organizing things that most people don't really do anyway, or their lives feel simple enough that they don't need an AI assistant as the middle man...
1
0
2026-03-01T04:29:19
Massive-Question-550
false
null
0
o80152t
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80152t/
false
1
t1_o80114r
>- ik_llama.cpp is best for hybrid CPU+GPU inference if you have fairly modern hardware.  I gave ik_llama.cpp a try and no it doesn't. I went from an insane 25.7 Tokens/s to 25.7 Tokens/s and lost vision capabilities for whatever reason. Maybe asking for a Flappy Bird clone wasn't a good benchmark though, maybe Qwe...
1
0
2026-03-01T04:28:30
FatheredPuma81
false
null
0
o80114r
false
/r/LocalLLaMA/comments/1qfcg4h/need_to_know_more_about_less_known_engines_ik/o80114r/
false
1
t1_o800z48
Haha I love that, accountability. People think you can only use it to run your next startup idea. No, I need accountability.
1
0
2026-03-01T04:28:05
Citywidehomie
false
null
0
o800z48
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o800z48/
false
1
t1_o800x7z
SearXNG is a solid choice - no API keys, multiple engines, and hitting /search?format=json gives you clean structured output that is way easier to parse in an MCP wrapper than HTML scraping.
3
0
2026-03-01T04:27:42
BC_MARO
false
null
0
o800x7z
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o800x7z/
false
3
t1_o800vrx
The fix has been merged to be included in next version. https://github.com/ollama/ollama/pull/14517
2
0
2026-03-01T04:27:24
chibop1
false
null
0
o800vrx
false
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o800vrx/
false
2
t1_o800vcz
Qwen3 series is still finicky with tool calling - make sure you're using the chat template that includes tool_call blocks, not just the base template. openwebui's MCP bridge sometimes also needs an explicit system prompt reminding the model it has tools available.
1
0
2026-03-01T04:27:20
BC_MARO
false
null
0
o800vcz
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o800vcz/
false
1
t1_o800mtl
But a well calibrated OLED is SOOOO GOOD!!!!
2
0
2026-03-01T04:25:37
ubrtnk
false
null
0
o800mtl
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o800mtl/
false
2
t1_o800k11
Thanks - it's on my list of things to do. As my company moves more and more into the cloud, my 20 years as the datacenter guy are slowly sunsetting, so I need to add some new skills
1
0
2026-03-01T04:25:03
ubrtnk
false
null
0
o800k11
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o800k11/
false
1
t1_o7zzzbi
It's your hobby, it isn't other people's hobby. Even if it is practical and consumes you. People are simply thinking about other things. I like my TV to be adjusted right; other people don't seem to care. I just let them be.
9
0
2026-03-01T04:20:54
Head_Bananana
false
null
0
o7zzzbi
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzzbi/
false
9
t1_o7zzxay
GitHub is just a hosting service for git, a version control system. You create a directory (folder) with a group of documents and you use git to communicate with a remote repository that holds a copy of that directory and its documents. So first thing, describe the full system. All of its components. All of the connections. Then invento...
1
0
2026-03-01T04:20:30
Current-Ticket4214
false
null
0
o7zzxay
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzxay/
false
1
t1_o7zzw95
>Qwen3.5 35B-A3B generates at ~27 tok/s on my M1 Qwen3.5-35B-A3B-UD-Q3_K_XL at 100+ tok/s on an RTX 4090 24GB
3
0
2026-03-01T04:20:17
SteppenAxolotl
false
null
0
o7zzw95
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7zzw95/
false
3
t1_o7zzu06
Use my AI...or the terrorists win
2
0
2026-03-01T04:19:51
ubrtnk
false
null
0
o7zzu06
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzu06/
false
2
t1_o7zzsg5
Does it do anything they couldn't do, if they could be bothered? Are you engaging with them (offering hugs, etc, rather than being a weird tech dude) that would help them engage with your kind of niche (but kinda cute) interest? Could you have just spent 20 grand on other stuff, that is more useful to your family and ...
17
0
2026-03-01T04:19:32
Sambojin1
false
null
0
o7zzsg5
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzsg5/
false
17
t1_o7zzose
ABACUS? Here I am drawing on walls with charcoal from my cooking fire.
5
0
2026-03-01T04:18:48
Kirito_Uchiha
false
null
0
o7zzose
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7zzose/
false
5
t1_o7zzo1s
This is a cool way to use LLMs. Is any of what it's found truly new, or is it more a matter of evidence to help figure out which existing theories are stronger?
1
0
2026-03-01T04:18:39
Murgatroyd314
false
null
0
o7zzo1s
false
/r/LocalLLaMA/comments/1rhkwzn/from_gpt_wrapper_to_autonomous_oss_prs_apachenasa/o7zzo1s/
false
1
t1_o7zzny4
Of the 3 components of persuasion, you're lacking pathos. Customers won't use your product unless they're emotionally convinced to do so. To achieve that, you need to appeal to their feelings, such as by clearly showing how your product solves their real pain points. You can't simply shove the product and a manual in...
2
0
2026-03-01T04:18:37
elitePopcorn
false
null
0
o7zzny4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzny4/
false
2
t1_o7zzlzu
The running joke when Alexa first came out was "15 years ago, everyone was like hell no, I'm not letting some listening speaker into my house, that's the NSA spying on us - now it's like hey NSA, what's the best pancake recipe". Alexa came out right around the time Snowden did his big whistleblower report. But you're co...
27
0
2026-03-01T04:18:14
ubrtnk
false
null
0
o7zzlzu
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzlzu/
false
27
t1_o7zzldv
with the settings that you have suggested, I am still getting the same issue of full prompt reprocessing (only in claude code): --------LOGS------ slot update_slots: id 0 | task 88 | n_past = 15125, slot.prompt.tokens.size() = 19287, seq_id = 0, pos_min = 19286, n_swa = 1 slot update_slots: id 0 | task 88 ...
1
0
2026-03-01T04:18:06
anubhav_200
false
null
0
o7zzldv
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7zzldv/
false
1
t1_o7zzi37
I did not. I've probably spent closer to $1500. The OAuth for Anthropic works well. The $200/mo plan will give you an enormous amount of tokens. Stepfun 3.5 flash has a free model on openrouter and also fits in 128gb. I will say the $1500ish I've spent has been worth it and its value compounds the more I build wit...
1
0
2026-03-01T04:17:27
No_Mango7658
false
null
0
o7zzi37
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7zzi37/
false
1
t1_o7zzcw4
They don't know how to value the knowledge and effort you have put into this :-(
1
0
2026-03-01T04:16:24
palinko
false
null
0
o7zzcw4
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zzcw4/
false
1
t1_o7zz86d
CUDA >= 13.0 does not support Pascal cards. The latest version available for them is 12.8.1. It was a typo on my behalf.
1
0
2026-03-01T04:15:28
Organic-Thought8662
false
null
0
o7zz86d
false
/r/LocalLLaMA/comments/1r5sgow/llamacpp_takes_forever_to_load_model_from_ssd/o7zz86d/
false
1
t1_o7zz7zx
1000% definitely. I made a Facebook story of a couple photo together with a Nano Banana generated hot AI girlfriend going to an NFL game in Boston. People who haven't seen me in 15 years dropped love emojis. * I set story privacy to "Friends: Except:"
1
0
2026-03-01T04:15:26
Snoo_64233
false
null
0
o7zz7zx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zz7zx/
false
1
t1_o7zz5yz
Ha - I tried to offer my other family members to use it as well, they also had zero interest.
2
0
2026-03-01T04:15:02
ubrtnk
false
null
0
o7zz5yz
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zz5yz/
false
2
t1_o7zz5vo
FYI, fewer "filler" words / penalizing of bridging words is clearly implemented for o3 (which leaked into actual output, making its tone somewhat edgy) and Gemini 3 Pro (you can actually see it by asking for explicit CoT, as Google allows that; they avoided style leakage into actual output) but not 2.5 Pro (verbose). I though...
1
0
2026-03-01T04:15:01
NandaVegg
false
null
0
o7zz5vo
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7zz5vo/
false
1
t1_o7zz5dg
Yah qwen 3.5 is the best one I've tried. On a 24GB RTX 3090, running it in vLLM, I'm getting a 100k context window. Is it as strong as ChatGPT? Not by far. But if I give it explicit plans to follow, it does a decent job with some cleanup work for me.
1
0
2026-03-01T04:14:55
rateddurr
false
null
0
o7zz5dg
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7zz5dg/
false
1
t1_o7zz4x1
I used the mlx version
1
0
2026-03-01T04:14:50
BitXorBit
false
null
0
o7zz4x1
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7zz4x1/
false
1
t1_o7zz2vo
Same
2
0
2026-03-01T04:14:26
StardockEngineer
false
null
0
o7zz2vo
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zz2vo/
false
2
t1_o7zyzge
Nothing will happen to Anthropic. AI is already so closely guarded or blocked in the military. The posts were absolutely unnecessary. All that's happened is that the DoD and govt agencies will not be allowed to use Anthropic at work or in any work capacity. That's it. Y'all are blowing this up for nothing.
1
0
2026-03-01T04:13:46
floppypancakes4u
false
null
0
o7zyzge
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7zyzge/
false
1
t1_o7zyrxs
I don't, never bothered to look. The llama-swap documentation is pretty complete as far as setting it up goes, and there are a lot of sources for tuning llama.cpp. llama.cpp tuning can get pretty complicated, but even if you skip all of the optimizations and just use the basic --n-gpu-layers and --n-cpu-moe you'll ge...
1
0
2026-03-01T04:12:17
suicidaleggroll
false
null
0
o7zyrxs
false
/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o7zyrxs/
false
1
t1_o7zyrgr
No, you don't need to convert your dataset in any way. Just hack the prepare dataset file to work with your trees and you're good to go. The current version of prepare dataset and encoder are designed for language modelling but you can obviously change it :)
1
0
2026-03-01T04:12:11
SrijSriv211
false
null
0
o7zyrgr
false
/r/LocalLLaMA/comments/1qym566/i_trained_a_18m_params_model_from_scratch_on_a/o7zyrgr/
false
1
t1_o7zy4tw
As an intellectual challenge, I think it's cool, but the effort is enormous. You'll have to write file systems, network infrastructure, CUDA support, etc. A Linux kernel isn't a bottleneck for an AI model to run. Imagine how many new architectures are released all the time and you'll need to support them. In the end,...
1
0
2026-03-01T04:07:47
JumpyAbies
false
null
0
o7zy4tw
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zy4tw/
false
1
t1_o7zy1dz
No problem!
1
0
2026-03-01T04:07:05
eribob
false
null
0
o7zy1dz
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7zy1dz/
false
1
t1_o7zxzbn
Yep, I just was sitting here reading at a coffee shop, I gave my OC a note to record in my Obsidian notes and it added “hey, it’s 7:40, did you get the reminder I sent at 7 to do your workout?” Ain’t no calendar gonna do that for me.
1
0
2026-03-01T04:06:41
CalligrapherPlane731
false
null
0
o7zxzbn
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7zxzbn/
false
1
t1_o7zxuxb
I've really optimized this model to run with mlx. It runs faster than Python mlx. Fully Open source. I'm getting about 115-120 tok/sec on M3 Ultra and 70-75 toks/sec on M4 Pro. Around 20 on M1 Pro. [https://github.com/scouzi1966/maclocal-api](https://github.com/scouzi1966/maclocal-api) afm-next is the nightly branc...
2
0
2026-03-01T04:05:50
scousi
false
null
0
o7zxuxb
false
/r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o7zxuxb/
false
2
t1_o7zxuoq
https://preview.redd.it/…dbe98f5cf7af7db5
1
0
2026-03-01T04:05:48
kosantosbik
false
null
0
o7zxuoq
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zxuoq/
false
1
t1_o7zxulj
Bro, I feel you. If you are thinking of changing families, I'm available; I can play brother or other roles.
0
0
2026-03-01T04:05:47
palinko
false
null
0
o7zxulj
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zxulj/
false
0
t1_o7zxtew
One of us
2
0
2026-03-01T04:05:33
jthree2001
false
null
0
o7zxtew
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zxtew/
false
2
t1_o7zxtfr
Not every employer is like this. For example, at my employer, I decide what we do and how we do it. And I definitely think security first, as this is the whole global tax department: 60+ countries and a billion+ in taxes paid every year. I'm not going to cheap out.
1
0
2026-03-01T04:05:33
whyyoudidit
false
null
0
o7zxtfr
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7zxtfr/
false
1
t1_o7zxnad
"Hiding in plain sight. My data is safe among many other 100 million data points. I am not standing out by using Gemini or ChatGPT even if the data is to leak tomorrow. But with a niche group of people, I am a giant white elephant in the room." That above is how one of my coworkers put it when I discussed about th...
101
0
2026-03-01T04:04:22
Snoo_64233
false
null
0
o7zxnad
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zxnad/
false
101
t1_o7zxgxo
Think of motifs like tags: when the user says "diet" and "jogging" and "weight", those get logged, so when they later mention "bananas" or "running" the model doesn't start talking about crops in South America or Boston Marathon results; it knows what the user's objectives are. Silly and simple example.
2
0
2026-03-01T04:03:08
CivilMonk6384
false
null
0
o7zxgxo
false
/r/LocalLLaMA/comments/1rhmye0/trying_to_improve_my_memory_system_any_notes/o7zxgxo/
false
2
t1_o7zxfsa
I already said you're welcome... sheesh! 😆 no problem. glad it's working.
1
0
2026-03-01T04:02:55
Xp_12
false
null
0
o7zxfsa
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o7zxfsa/
false
1
t1_o7zxflk
That’s so cool.
1
0
2026-03-01T04:02:53
Bird_ee
false
null
0
o7zxflk
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zxflk/
false
1
t1_o7zxdl5
there are a bunch of these ASIC companies popping up.
1
0
2026-03-01T04:02:29
Electrical_Ninja3805
false
null
0
o7zxdl5
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zxdl5/
false
1
t1_o7zxbc3
fuck yea
1
0
2026-03-01T04:02:03
sipjca
false
null
0
o7zxbc3
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zxbc3/
false
1
t1_o7zxaqx
You can share with me 😭❤️
6
0
2026-03-01T04:01:56
iamrob15
false
null
0
o7zxaqx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zxaqx/
false
6
t1_o7zx8pj
That worked! Thanks.
2
0
2026-03-01T04:01:32
Demodude123
false
null
0
o7zx8pj
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o7zx8pj/
false
2
t1_o7zx8cy
looking forward to it...........
1
0
2026-03-01T04:01:28
TheInfiniteUniverse_
false
null
0
o7zx8cy
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7zx8cy/
false
1
t1_o7zx7da
122B Q6_K_XL is the sweet spot for those who have 128GB of unified memory (Ryzen AI Max+ 395 systems or DGX Spark). And 27B Q8 for 5090 owners. Don't know about Mac though.
1
0
2026-03-01T04:01:16
Mr-I17
false
null
0
o7zx7da
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7zx7da/
false
1
t1_o7zx5t0
So, a combination of some functionality shortfalls and resistance to change. (It doesn't have to be all or nothing either - e.g. keep Snapchat and ditch Google.) So fix the issues, add some unique functions, and use it yourself every day until they can't miss how useful and fun it would be to use themselves. But stop...
1
0
2026-03-01T04:00:58
Protopia
false
null
0
o7zx5t0
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zx5t0/
false
1
t1_o7zx0b1
This is a really clean implementation. The interesting problem at this stage isn’t storage or clustering, it’s truth maintenance over time. One thing to ask is: how does it handle contradictions when consolidation merges episodes that disagree? Silent merges are where long-running memory systems start drifting. Also mi...
2
0
2026-03-01T03:59:55
CivilMonk6384
false
null
0
o7zx0b1
false
/r/LocalLLaMA/comments/1rhmye0/trying_to_improve_my_memory_system_any_notes/o7zx0b1/
false
2
t1_o7zwszo
I guess maybe they think I'd rather be vulnerable to strangers I'll never meet than to the people I know. Isn't that how it works: overshare things on the internet but share nothing with family irl 😭😭😭
48
0
2026-03-01T03:58:30
Exciting-Mall192
false
null
0
o7zwszo
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zwszo/
false
48
t1_o7zwr4g
Taalas: Hold my beer [https://taalas.com/](https://taalas.com/)
1
0
2026-03-01T03:58:09
apunker
false
null
0
o7zwr4g
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7zwr4g/
false
1
t1_o7zwp05
Have you tried the Qwen3.5 27B? It's supposedly better for agentic workflows, which is what I'm most curious about. Waiting for the updated/fixed versions to be uploaded by Unsloth... only the large param and 35B were recently updated.
1
0
2026-03-01T03:57:45
saucedy
false
null
0
o7zwp05
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7zwp05/
false
1
t1_o7zwl3l
how do i use this on llama.cpp?
0
0
2026-03-01T03:57:00
ClimateBoss
false
null
0
o7zwl3l
false
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7zwl3l/
false
0
t1_o7zwisa
This is the "touch grass" thing, dude. It's easy to spend time on this sub and other tech-focused subs, listen to tech podcasts, watch tech CEO interviews, and delude yourself into thinking that most people want to use AI and are interested in it. Your everyday human being just doesn't care that much and rolls their eye...
10
0
2026-03-01T03:56:33
vwin90
false
null
0
o7zwisa
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zwisa/
false
10
t1_o7zwgjm
My own thoughts are more along the following lines (but I am far from an expert and have yet to actually try to put this into action)... 1. Provided the context has all the information the AI needs, or the AI has tools like MCP to get what it needs, the smaller and more focused the context, the better the quality. ...
1
0
2026-03-01T03:56:08
Protopia
false
null
0
o7zwgjm
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7zwgjm/
false
1
t1_o7zwgc3
I use mostly Claude/GPT now. I tried the local LLM thing for a bit, but I was spending more time setting up and troubleshooting issues than actually creating anything, so I switched to closed models. But the idea holds no matter what models you're using.
1
0
2026-03-01T03:56:06
Techngro
false
null
0
o7zwgc3
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7zwgc3/
false
1
t1_o7zwezs
Rule 4
1
0
2026-03-01T03:55:50
LocalLLaMA-ModTeam
false
null
0
o7zwezs
true
/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7zwezs/
true
1
t1_o7zwdcy
Rule 4
1
0
2026-03-01T03:55:31
LocalLLaMA-ModTeam
false
null
0
o7zwdcy
true
/r/LocalLLaMA/comments/1rgkwnh/theos_opensource_dualengine_dialectical_reasoning/o7zwdcy/
true
1
t1_o7zwcmi
How does it compare to Qwen3.5 for **coding**?
3
0
2026-03-01T03:55:22
ClimateBoss
false
null
0
o7zwcmi
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o7zwcmi/
false
3
t1_o7zwba3
Rule 3. And likely an LLM bot
1
0
2026-03-01T03:55:06
LocalLLaMA-ModTeam
false
null
0
o7zwba3
true
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7zwba3/
true
1
t1_o7zw9mq
I see a lot of people hating on you, but let me be honest. I have major ADHD and I work in sales. I write things down and still forget. I don’t get quotes to clients fast enough. People just don’t get it, all these apps don’t help. What I want is to wake up and hear, “Hey, you’ve got this to handle today.” I want to t...
4
0
2026-03-01T03:54:48
Citywidehomie
false
null
0
o7zw9mq
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7zw9mq/
false
4
t1_o7zw471
LLM generated content spam
1
0
2026-03-01T03:53:47
LocalLLaMA-ModTeam
false
null
0
o7zw471
true
/r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/o7zw471/
true
1
t1_o7zvxxe
I definitely agree with you that the level of concern has shifted. But it's like any time I would bring up any issue, there's immediate agreement that X is very much a problem, and then as soon as dinner is over and they're back to their lives, that problem is less important than the Snapchat streak. Honestly, I think its...
1
0
2026-03-01T03:52:35
ubrtnk
false
null
0
o7zvxxe
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o7zvxxe/
false
1
t1_o7zvwx0
Rule 3
1
0
2026-03-01T03:52:23
LocalLLaMA-ModTeam
false
null
0
o7zvwx0
true
/r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/o7zvwx0/
true
1
t1_o7zvt4q
This post has been marked as spam.
1
0
2026-03-01T03:51:41
LocalLLaMA-ModTeam
false
null
0
o7zvt4q
true
/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/o7zvt4q/
true
1
t1_o7zvnha
go to admin panel, find your models page, click the arrow to expand advanced parameters on that model, set tool calling to native instead of default. you're welcome.
5
0
2026-03-01T03:50:38
Xp_12
false
null
0
o7zvnha
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o7zvnha/
false
5
t1_o7zvmaq
Rule 4
1
0
2026-03-01T03:50:25
LocalLLaMA-ModTeam
false
null
0
o7zvmaq
true
/r/LocalLLaMA/comments/1rhbo40/saw_someone_bridge_claude_code_into_chat_apps/o7zvmaq/
true
1
t1_o7zvh9f
I had already seen it! But I was hoping to back it up with that "list" from other users, to see whether it really fits my workflow, or maybe go for something second-hand with less VRAM instead.
1
0
2026-03-01T03:49:27
Unlucky_Post4391
false
null
0
o7zvh9f
false
/r/LocalLLaMA/comments/1ok6w8r/i_bought_the_intel_arc_b50_to_use_with_lm_studio/o7zvh9f/
false
1