name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o823v5v
yes. Use the local model supported in the documentation. [https://docs.openclaw.ai/providers/ollama](https://docs.openclaw.ai/providers/ollama)
1
0
2026-03-01T14:40:02
CollectionKey2320
false
null
0
o823v5v
false
/r/LocalLLaMA/comments/1qv6892/help_setting_local_ollama_models_with_openclaw/o823v5v/
false
1
t1_o823lrj
Thanks so much for the reply. I'll check that model out. Appreciate it
2
0
2026-03-01T14:38:34
l0nedigit
false
null
0
o823lrj
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o823lrj/
false
2
t1_o823kfz
They have to log in. That’s the problem. Nobody logs into Alexa or Google Home or most of the other platforms. You connect it to your account and add family members. It learns their voice and done.
1
0
2026-03-01T14:38:21
CantankerousOrder
false
null
0
o823kfz
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o823kfz/
false
1
t1_o823k5r
no I meant if something ever happened to me, 95% of everything in my closet would go away and they could just run the internet off the ISP modem/wifi.
2
0
2026-03-01T14:38:19
ubrtnk
false
null
0
o823k5r
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o823k5r/
false
2
t1_o823hw5
You can still see the RAG and RLM versions in the development branches of the same repository - I am still working on them. The main branch is a simple markdown version which satisfies most use cases.
1
0
2026-03-01T14:37:58
the-ai-scientist
false
null
0
o823hw5
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o823hw5/
false
1
t1_o823hit
I'm one of the maintainers of Discourse, the open source forum software. We calculate embeddings for all topics in all forums we host (many millions of posts every month across tens of thousands of instances), which then power a myriad of features like - showing related topics at the end of a topic - semantic search, in...
8
0
2026-03-01T14:37:54
xfalcox
false
null
0
o823hit
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o823hit/
false
8
t1_o823dr4
I'm one of the maintainers of Discourse, the open source forum software. We calculate embeddings for all topics in all forums we host (many millions of posts every month across tens of thousands of instances), which then power a myriad of features like - showing related topics at the end of a topic - semantic search, in...
12
0
2026-03-01T14:37:19
xfalcox
false
null
0
o823dr4
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o823dr4/
false
12
t1_o823btr
For now but there are active dev branches that implement RAG and RLM
1
0
2026-03-01T14:37:00
the-ai-scientist
false
null
0
o823btr
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o823btr/
false
1
t1_o8239de
You'd have to explain it.
1
0
2026-03-01T14:36:38
a_beautiful_rhind
false
null
0
o8239de
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8239de/
false
1
t1_o823629
Brilliant work. I'll integrate it.
1
0
2026-03-01T14:36:06
AurumDaemonHD
false
null
0
o823629
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o823629/
false
1
t1_o8230bb
yeah the amount of times it starts a response with "No, ..." and then says something stupid trying to argue with me was getting out of hand
4
0
2026-03-01T14:35:14
Western_Objective209
false
null
0
o8230bb
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8230bb/
false
4
t1_o822v75
Tinygrad?
18
0
2026-03-01T14:34:26
I-am_Sleepy
false
null
0
o822v75
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o822v75/
false
18
t1_o822re5
No, ditched Windows a while ago. How much VRAM is the 6800? My 9070 XT 16GB with a Q3 quant shoved into VRAM gets 30 tkps. Dense models need to be 100% in VRAM.
1
0
2026-03-01T14:33:50
sine120
false
null
0
o822re5
false
/r/LocalLLaMA/comments/1q0mg6w/how_is_running_local_ai_models_on_amd_gpus_today/o822re5/
false
1
t1_o822kht
this is sick. the fact that ANE has 38 TFLOPS of INT8 but Apple basically pretends it doesn't exist for training is so frustrating. I've got an M2 Pro and always wondered if there was a way to tap into the NPU beyond CoreML inference. how stable is the training loop? like does the ANE ever just silently corrupt gradie...
1
0
2026-03-01T14:32:44
BP041
false
null
0
o822kht
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o822kht/
false
1
t1_o822idw
It has the dunning-kruger effect, for sure.
0
0
2026-03-01T14:32:24
ayylmaonade
false
null
0
o822idw
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o822idw/
false
0
t1_o822gn8
Yeah, no idea why this happens. But you definitely did the right thing - data privacy is no joke. It's a pity non-techy people don't understand what that means. Your setup and project sounds very interesting. The same thing happened to me - family needed a software for managing quantities of goods in shop and selling...
1
0
2026-03-01T14:32:08
KneelB4S8n
false
null
0
o822gn8
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o822gn8/
false
1
t1_o822ge2
China has different corporate objectives/policies probably
3
0
2026-03-01T14:32:05
TomLucidor
false
null
0
o822ge2
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o822ge2/
false
3
t1_o822fn3
Yes :) there's an ongoing dev branch on the same repo that I'm working on which will implement RLM
2
0
2026-03-01T14:31:59
the-ai-scientist
false
null
0
o822fn3
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o822fn3/
false
2
t1_o822fgp
Can you say how it behaves in games? I recently came across it on ПДД myself. It's not some basement-built China unit? Is yours still alive?
1
0
2026-03-01T14:31:57
Flat_Time8541
false
null
0
o822fgp
false
/r/LocalLLaMA/comments/1np9rav/my_second_modified_3080_20gb_from_china_for_local/o822fgp/
false
1
t1_o822b2o
Afaik multiplication is often done using the quantized values, and scaling the final floating point precision based on the block's scale factor is possible. For example, GGML multiplies a q4_0 tensor and a q8_0 tensor together like this, where x points to the 4-bit tensor and y points to the 8-bit tensor. There's doz...
2
0
2026-03-01T14:31:16
audioen
false
null
0
o822b2o
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o822b2o/
false
2
t1_o822amt
Could easily be done, and would gladly, if they requested.
1
0
2026-03-01T14:31:12
ubrtnk
false
null
0
o822amt
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o822amt/
false
1
t1_o822a2p
Why is politics-posting, and totally unrelated to local models, being allowed here? Meanwhile it seems like all my posts here get insta-removed.
5
0
2026-03-01T14:31:07
Virtamancer
false
null
0
o822a2p
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o822a2p/
false
5
t1_o8227zd
> Increase to full power, type rocm-smi --help then search for performance and use the cmd to set to high. Should observe 120w on power using rocm-smi. My device doesn't do 120 or 130W. Full power is 70W on my laptop. > Then, into bios, allocate the full 96gb to the gpu. This removes a handshake bottleneck and you ...
1
0
2026-03-01T14:30:47
spaceman_
false
null
0
o8227zd
false
/r/LocalLLaMA/comments/1re9h4r/some_qwen35_benchmarks_on_strix_halo_llamacpp/o8227zd/
false
1
t1_o822808
[removed]
1
0
2026-03-01T14:30:47
[deleted]
true
null
0
o822808
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o822808/
false
1
t1_o8227e7
How soon??
2
0
2026-03-01T14:30:41
MrMrsPotts
false
null
0
o8227e7
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o8227e7/
false
2
t1_o8226rw
Same. I used to occasionally use larger cloud models for more complex queries before Qwen 3.5 came out, and my god, it's the most patronizing, "I know better than you" type personality I've ever encountered in an AI. And it doesn't seem to serve anybody well - all those people who developed unhealthy "relationships" wi...
1
0
2026-03-01T14:30:36
ayylmaonade
false
null
0
o8226rw
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8226rw/
false
1
t1_o821vux
Polymarket? Lol
1
1
2026-03-01T14:28:53
l0nedigit
false
null
0
o821vux
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o821vux/
false
1
t1_o821rue
Care to expand your use case? Currently exploring falkordb for memory and was contemplating running qdrant alongside for vectorized searching. Using the graph to model repo and service relationships and qdrant from code/files. Current hardware is an a6000 and 3090. Running only qwen3 coder next Q4 from unsloth.
4
0
2026-03-01T14:28:14
l0nedigit
false
null
0
o821rue
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o821rue/
false
4
t1_o821rg2
That's amazing. Very thorough. It's interesting that the 27b performs similarly to qwen3 coder at the same size. Thanks for sharing.
2
0
2026-03-01T14:28:10
Zc5Gwu
false
null
0
o821rg2
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o821rg2/
false
2
t1_o821n5o
This is just storing a context window to disk… it’s not true memory. If you’re in python already, implement this https://arxiv.org/abs/2512.24601v1
3
0
2026-03-01T14:27:29
croninsiglos
false
null
0
o821n5o
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o821n5o/
false
3
t1_o821n6y
I did communicate - all the time. Every time I added a new feature, or made a big change like OAuth with GMail, I shared. Every time I got excited or a new model came out that I was testing, I shared.
1
0
2026-03-01T14:27:29
ubrtnk
false
null
0
o821n6y
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o821n6y/
false
1
t1_o821mvc
interesting!
1
0
2026-03-01T14:27:26
Billysm23
false
null
0
o821mvc
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o821mvc/
false
1
t1_o821jkf
I’ve dealt with this so much professionally lol, it’s okay to build things no one uses, it’s part of the territory
1
0
2026-03-01T14:26:54
Afraid-Donke420
false
null
0
o821jkf
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o821jkf/
false
1
t1_o821ixh
[removed]
1
0
2026-03-01T14:26:48
[deleted]
true
null
0
o821ixh
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o821ixh/
false
1
t1_o821gh7
No, it's mostly related to an MXFP4 issue, see: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/comment/o7x7jdv/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/comment/o7x7jdv/) Also, we just updated the Unsloth ones to now have much lower ppl + tool-calling fixes.
2
0
2026-03-01T14:26:25
yoracale
false
null
0
o821gh7
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o821gh7/
false
2
t1_o821drx
Yes inference, thanks.
1
0
2026-03-01T14:25:58
Grand-Stranger-2923
false
null
0
o821drx
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o821drx/
false
1
t1_o8219xd
can you talk more about this?
1
0
2026-03-01T14:25:22
braydon125
false
null
0
o8219xd
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o8219xd/
false
1
t1_o8219ci
What's your use case for embeddings model? Is this something like RAG?
3
0
2026-03-01T14:25:17
jacek2023
false
null
0
o8219ci
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o8219ci/
false
3
t1_o8212cd
Hopefully the new smaller model is followed by a new embeddings model too. Their current qwen3 embedding model is awesome.
16
0
2026-03-01T14:24:09
xfalcox
false
null
0
o8212cd
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o8212cd/
false
16
t1_o82119p
The 9B model would fit in that along with a 1M context window.
4
0
2026-03-01T14:23:58
Deep-Vermicelli-4591
false
null
0
o82119p
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82119p/
false
4
t1_o8210mt
It all comes down to a mixture of things. For me, unsloths Q4 coder has been amazing. But I have rules (agents.md) to act as guides, have adjusted the temp just slightly lower than recommended for a bit more predictable results and had clear guidance on what to do in prompts (and example code for context). Not to ment...
2
0
2026-03-01T14:23:52
l0nedigit
false
null
0
o8210mt
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8210mt/
false
2
t1_o8210jp
The MXFP4 issue only affected 3 Qwen3.5 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL and now they're all fixed. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template issues. (no...
1
0
2026-03-01T14:23:51
yoracale
false
null
0
o8210jp
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o8210jp/
false
1
t1_o820wm7
How did you get months (1+ years?) deep into a project “for your family” without communicating with them daily or at least most days—or even basically ever—about it…?
1
0
2026-03-01T14:23:14
Virtamancer
false
null
0
o820wm7
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o820wm7/
false
1
t1_o820q8s
The MXFP4 issue for Unsloth only affected 3 Qwen3.5 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL and now they're all fixed. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template...
2
0
2026-03-01T14:22:12
yoracale
false
null
0
o820q8s
false
/r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o820q8s/
false
2
t1_o820naj
They were also using computers 🤌
1
0
2026-03-01T14:21:43
Kuarto
false
null
0
o820naj
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o820naj/
false
1
t1_o820k3a
The MXFP4 issue for Unsloth only affected 3 Qwen3.5 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template issues. (not the chat tem...
1
0
2026-03-01T14:21:13
yoracale
false
null
0
o820k3a
false
/r/LocalLLaMA/comments/1reuss2/anybody_tested_qwen3535ba3b_on_translation_tasks/o820k3a/
false
1
t1_o820j9c
Not in my case, it was decommissioned servers from a few of our customers when they migrated to cloud setups, so the cost was taking them and making sure that the storage media was handled securely. Then I took the RAM from several to put in the one with the best two CPUs to use as a personal server for selfhosting.
2
0
2026-03-01T14:21:05
Luvirin_Weby
false
null
0
o820j9c
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o820j9c/
false
2
t1_o820it1
are we talking about training or inference only? since if i am not mistaken int does not have autograd. Also if inference, you might as well just use the lower precision if the hardware supports it, and use fake for fallback in dequant -> matmul -> eject
1
0
2026-03-01T14:21:00
Altruistic_Heat_9531
false
null
0
o820it1
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o820it1/
false
1
t1_o820ii0
>Large language models (LLMs) have demonstrated impressive reasoning capabilities by scaling test-time compute via long Chain-of-Thought (CoT). However, recent findings suggest that raw token counts are unreliable proxies for reasoning quality: increased generation length does not consistently correlate with accuracy a...
2
0
2026-03-01T14:20:57
Cool-Chemical-5629
false
null
0
o820ii0
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o820ii0/
false
2
t1_o820fbd
Using the Unsloth Q4K_K_XL.gguf and ROCm 7.2 I get the following: prompt eval time = 7628.38 ms / 7973 tokens (0.96 ms per token, 1045.18 tokens per second); eval time = 81120.58 ms / 5081 tokens (15.97 ms per token, 62.64 tokens per second); total time = 88748.97 ms / 13054 tokens. Us...
1
0
2026-03-01T14:20:27
Thrumpwart
false
null
0
o820fbd
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o820fbd/
false
1
t1_o8208je
Would be cool if we got a 0.6B that could be used for speculative decoding on the 122B or 397B model.
13
0
2026-03-01T14:19:22
spaceman_
false
null
0
o8208je
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o8208je/
false
13
t1_o8204um
> (Epstein stuff basically confirms most of 1989 to today, including software as a field, was almost entirely manufactured so the cabal could keep a few generations of control on the information scape). What?
9
0
2026-03-01T14:18:46
AnOnlineHandle
false
null
0
o8204um
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8204um/
false
9
t1_o8203ty
That's the next step: creating an orchestrator process and exposing statistics or thinking. The orchestrator would call the AI to make decisions, instead of just letting an llm loop itself and run against the game directly. For now, exposing the engine internals in a generic manner was very hard to get running in a ...
1
0
2026-03-01T14:18:36
frosticecold
false
null
0
o8203ty
false
/r/LocalLLaMA/comments/1rhjcvo/what_im_doing_locally_develping_an_mcp_to_attach/o8203ty/
false
1
t1_o8202kp
You’re trolling. You have been replied to twice.
1
0
2026-03-01T14:18:23
CantankerousOrder
false
null
0
o8202kp
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8202kp/
false
1
t1_o8200xm
That's a start. The other part of it is having a coding harness like roocode, kilo, or cline, as well as having well-defined system prompts activating only the parameters you need. I find llama to be very slow, Qwen 3 coder to be alright, and gpt-oss-20b to be very fast and reliable, provided you don't do zero shot ...
1
0
2026-03-01T14:18:07
false79
false
null
0
o8200xm
false
/r/LocalLLaMA/comments/1rf3n9r/recommended_local_models_for_vibe_coding/o8200xm/
false
1
t1_o82004l
Well (not really kidding) you just have to ask the right questions. Personally I have built a context over time with Claude code, things I care about, how to write good benchmarks. Then I just took it from there knowing what I know about the ANE. How to access it? Via coreml - great, what does CoreML do? It calls a bunch ...
28
0
2026-03-01T14:18:00
jack_smirkingrevenge
false
null
0
o82004l
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82004l/
false
28
t1_o81zzmg
The Q4_KM quant was fully fine. The MXFP4 issue only affected 3 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template issues. (not...
2
0
2026-03-01T14:17:55
yoracale
false
null
0
o81zzmg
false
/r/LocalLLaMA/comments/1rf2zz1/qwen3535ba3b_is_awesome/o81zzmg/
false
2
t1_o81zzds
It would be amazing if 9B was even close to GLM 4.5 Air / 4.7 Flash. 🤞🏻
5
0
2026-03-01T14:17:52
Spitfire1900
false
null
0
o81zzds
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81zzds/
false
5
t1_o81zz9m
The Q4_KM quant was fully fine. The MXFP4 issue only affected 3 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template issues. (not...
1
0
2026-03-01T14:17:51
yoracale
false
null
0
o81zz9m
false
/r/LocalLLaMA/comments/1rf2zz1/qwen3535ba3b_is_awesome/o81zz9m/
false
1
t1_o81zyp6
The Q4_KM quant was fully fine. The MXFP4 issue only affected 3 quants - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using any other quant or any quant Q5 or above, you were completely in the clear - so it's not related to the issue. We did have to update all of them with tool-calling chat template issues. (not...
2
0
2026-03-01T14:17:45
yoracale
false
null
0
o81zyp6
false
/r/LocalLLaMA/comments/1rf2zz1/qwen3535ba3b_is_awesome/o81zyp6/
false
2
t1_o81zxrc
Made more sense to add to post. Updated with better details too.
1
0
2026-03-01T14:17:37
Holiday_Purpose_3166
false
null
0
o81zxrc
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81zxrc/
false
1
t1_o81zxj5
Because at the end of the day that isn't what it's trained to do. I don't know why people on the internet seem to think any LLM can just do whatever some random person comes up with. That's not how it works. Now, could they take Claude and after a few years at DARPA could it control weapons systems? Yes. But in 2026...
2
1
2026-03-01T14:17:34
illicITparameters
false
null
0
o81zxj5
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81zxj5/
false
2
t1_o81zvd3
Thanks good point, I edited my post.
1
0
2026-03-01T14:17:13
Grand-Stranger-2923
false
null
0
o81zvd3
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o81zvd3/
false
1
t1_o81zsle
https://preview.redd.it/…60ad3bc2b38abd
1
0
2026-03-01T14:16:47
Holiday_Purpose_3166
false
null
0
o81zsle
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81zsle/
false
1
t1_o81zqr5
To a certain extent there is no pure prompting solution that will solve this within just the system prompt. Speaking more generally, instructions work best when they are positive and imperative (you should always) and when they are presented alongside examples of acceptable output. Repeating the information multiple ...
3
0
2026-03-01T14:16:30
NNN_Throwaway2
false
null
0
o81zqr5
false
/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o81zqr5/
false
3
t1_o81zl9l
> They feed it mountains of intelligence - everything from geological data to troop movements to weapon system specs to satellite images to dossiers on individuals and more, then query that. Do you have a source for this? Otherwise I’m calling it fake news
2
0
2026-03-01T14:15:36
chill1217
false
null
0
o81zl9l
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81zl9l/
false
2
t1_o81zfqi
Bad bot
1
0
2026-03-01T14:14:43
robertpro01
false
null
0
o81zfqi
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81zfqi/
false
1
t1_o81zf5v
FYI the quant issues are now fixed and previously didn't affect any quants except 3 - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using Q5 or above, you were completely in the clear. However, we did have to update all of them with tool-calling chat template issues. (note the chat template issue was prevalent in t...
1
0
2026-03-01T14:14:38
yoracale
false
null
0
o81zf5v
false
/r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o81zf5v/
false
1
t1_o81zeq5
FYI the quant issues are now fixed and previously didn't affect any quants except 3 - Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using Q5 or above, you were completely in the clear. However, we did have to update all of them with tool-calling chat template issues. (note the chat template issue was prevalent in t...
1
0
2026-03-01T14:14:34
yoracale
false
null
0
o81zeq5
false
/r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o81zeq5/
false
1
t1_o81zdj4
Dude.... It's fucking reddit. I am being an ass on purpose. Yikes. Have a good life.
1
0
2026-03-01T14:14:22
CoralBliss
false
null
0
o81zdj4
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81zdj4/
false
1
t1_o81zbsb
Nope
5
0
2026-03-01T14:14:06
TacGibs
false
null
0
o81zbsb
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81zbsb/
false
5
t1_o81z9ld
Clearly it's AI generated...
1
0
2026-03-01T14:13:45
robertpro01
false
null
0
o81z9ld
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81z9ld/
false
1
t1_o81z92s
Ah ok that's fair - I had no clue, I just assumed from your announcement that it was specific to your release as I didn't hear anything from anyone else. My bad! Thanks
2
0
2026-03-01T14:13:39
trusty20
false
null
0
o81z92s
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81z92s/
false
2
t1_o81z61c
Lmao I’ve literally just been straightforward and asking for facts and sources the whole time. Meanwhile you call me rude and then an asshole?! I’m not bringing emotions into this at all, I’m just looking for the truth!
2
0
2026-03-01T14:13:11
chill1217
false
null
0
o81z61c
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81z61c/
false
2
t1_o81z43h
I addressed this below. You didn’t need to post it twice.
0
0
2026-03-01T14:12:52
CantankerousOrder
false
null
0
o81z43h
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81z43h/
false
0
t1_o81yx4k
Hi Bro! Try it. [https://github.com/leonoxo/qwen3-tts-vllm-omni.git](https://github.com/leonoxo/qwen3-tts-vllm-omni.git)
1
0
2026-03-01T14:11:44
Familiar_Doctor9754
false
null
0
o81yx4k
false
/r/LocalLLaMA/comments/1r14yyv/qwen_3_tts_is_streaming_even_working/o81yx4k/
false
1
t1_o81yum0
Not sure, qwen3-coder is giving me less smart results?
8
0
2026-03-01T14:11:21
soyalemujica
false
null
0
o81yum0
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o81yum0/
false
8
t1_o81yufs
That's not how tensor quantization works. Having a uniform per-tensor quantization scale would be atrociously imprecise.
3
0
2026-03-01T14:11:19
ilintar
false
null
0
o81yufs
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o81yufs/
false
3
t1_o81ytc2
I've updated the whole post. I've included scoring and time completion, since I believe both are important, as throughput doesn't seem to reflect better intelligence. Nice catch.
3
0
2026-03-01T14:11:09
Holiday_Purpose_3166
false
null
0
o81ytc2
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81ytc2/
false
3
t1_o81ynw9
Thanks bro, it looks great, I'll try it.
1
0
2026-03-01T14:10:17
Dazzling_Equipment_9
false
null
0
o81ynw9
false
/r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/o81ynw9/
false
1
t1_o81yn58
Right now it doesn't, but I have a solution coming up with proper RAG in v1 and RLM in v2, which will be more immune to super long files
-1
0
2026-03-01T14:10:10
the-ai-scientist
false
null
0
o81yn58
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81yn58/
false
-1
t1_o81ykf0
https://preview.redd.it/gwnm80gkxfmg1.png?width=957&format=png&auto=webp&s=e25e9409ddc965a4f08fae2b225193f5097749cc 100% vibe coded -> [backuprestore/GGUF-BF16-Checker: scan gguf for bf16 tensors, that causes slowdowns on strix halo with vulcan](https://github.com/backuprestore/GGUF-BF16-Checker/tree/main)
2
0
2026-03-01T14:09:44
Lost_Eye7852
false
null
0
o81ykf0
false
/r/LocalLLaMA/comments/1r0b7p8/free_strix_halo_performance/o81ykf0/
false
2
t1_o81yir7
You're kind of an asshole. Nothing will be good enough for you...clearly. Must be lonely being so right all the time when everyone else is so wrong 😢
0
0
2026-03-01T14:09:28
CoralBliss
false
null
0
o81yir7
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81yir7/
false
0
t1_o81ygyc
I wonder how well that would perform; which would be better: finetuning both models on the same task, or finetuning the smaller model on the big model's responses?
3
0
2026-03-01T14:09:11
wektor420
false
null
0
o81ygyc
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81ygyc/
false
3
t1_o81yfpm
Interesting. I suppose I can try. The post was updated with other models and different charts.
1
0
2026-03-01T14:08:59
Holiday_Purpose_3166
false
null
0
o81yfpm
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81yfpm/
false
1
t1_o81yev5
Thanks so much for OP u/Holiday_Purpose_3166 for sharing your results with the community!!
2
0
2026-03-01T14:08:51
yoracale
false
null
0
o81yev5
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81yev5/
false
2
t1_o81ydzf
I found the ministral 14b model to be ideal. Fits nice on 16gb vram but also room for context.
1
0
2026-03-01T14:08:43
Malfun_Eddie
false
null
0
o81ydzf
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81ydzf/
false
1
t1_o81ybzm
I just re-read - you’re correct. It is ISOLATED and that’s different. While it’s still effectively segmented from the Claude we use and the intent remains the same, the distinction is important and I will update the comment accordingly.
-1
0
2026-03-01T14:08:25
CantankerousOrder
false
null
0
o81ybzm
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81ybzm/
false
-1
t1_o81y8mg
This is literally an ai slop blog post from 16 hours ago, not a reputable source
1
0
2026-03-01T14:07:52
chill1217
false
null
0
o81y8mg
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81y8mg/
false
1
t1_o81y2ja
This is incorrect, the fixes for the chat template were not Unsloth's but a universal chat template issue that affects all uploads, regardless of provider. It also affects non-GGUFs like safetensors. Secondly, the MXFP4 issue was only for the Qwen3.5 models and only for 3 variants: Q2_K_XL, Q3_K_XL and Q...
3
0
2026-03-01T14:06:52
yoracale
false
null
0
o81y2ja
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81y2ja/
false
3
t1_o81xynp
thanks! i just saw the update. very good work, as always
1
0
2026-03-01T14:06:15
Live-Crab3086
false
null
0
o81xynp
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o81xynp/
false
1
t1_o81xtxb
Yeah, that sounds about right, and having 512gb of RAM is a significant investment
1
0
2026-03-01T14:05:29
Western_Objective209
false
null
0
o81xtxb
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o81xtxb/
false
1
t1_o81xphl
Because the RAM requirements to run kimi k2.5 locally or another comparable model are around 1 TB. To run it efficiently you need a cluster of data center GPUs, even for a single user
1
0
2026-03-01T14:04:46
Western_Objective209
false
null
0
o81xphl
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o81xphl/
false
1
t1_o81xl9k
honestly this is clean for small-scale use. append-only markdown beats juggling vector DBs when your context fits in a prompt anyway. the "just search and traversal" critique misses that most memory systems are overkill. if you're running 8k-32k context models locally, loading SOUL.md + MEMORY.md directly is way fast...
1
0
2026-03-01T14:04:04
RoughOccasion9636
false
null
0
o81xl9k
false
/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o81xl9k/
false
1
t1_o81x7w1
[removed]
1
0
2026-03-01T14:01:50
[deleted]
true
null
0
o81x7w1
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81x7w1/
false
1
t1_o81x4ru
Btw not a bot just smart asf
1
0
2026-03-01T14:01:18
Gabriel-granata
false
null
0
o81x4ru
false
/r/LocalLLaMA/comments/1rhww3y/deterministic_supervisory_control_layer_for_llm/o81x4ru/
false
1
t1_o81x2l4
2B and 9B confirmed
10
0
2026-03-01T14:00:57
Deep-Vermicelli-4591
false
null
0
o81x2l4
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81x2l4/
false
10
t1_o81wvaj
Can you quote from the article where it says Claude is air-gapped or that the DoD is feeding it troop movement/individual dossiers?
1
0
2026-03-01T13:59:44
chill1217
false
null
0
o81wvaj
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81wvaj/
false
1
t1_o81wuok
Interesting experiment! I've been thinking about similar architectures. What's been the biggest hurdle in getting the models to reliably use the tools you've exposed via MCP? Is it prompt engineering, model limitations, or something else entirely? I've found that fine-tuning can sometimes help bridge that gap, an...
1
0
2026-03-01T13:59:38
ikosuave
false
null
0
o81wuok
false
/r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81wuok/
false
1
t1_o81wrs0
I've wrestled with similar issues getting LLMs to behave in specific ways. LM Studio's UI can be limiting sometimes. You might be able to achieve what you want by editing the model's `config.json` file directly if you can locate it within LM Studio's model directory, but honestly, for full control, moving to a more b...
2
0
2026-03-01T13:59:08
ikosuave
false
null
0
o81wrs0
false
/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o81wrs0/
false
2