name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o87q7sy | I join your question. The one minor thing I know -
>does llama cpp offload some of the kv cache to CPU while vllm doesn't ?
llama.cpp by default keeps KV cache on GPU (it is usually more performant to do so), but you have --no-kv-offload option to do otherwise. vLLM, what I understood, allows you to use CPU memory a... | 2 | 0 | 2026-03-02T11:47:04 | Total_Activity_7550 | false | null | 0 | o87q7sy | false | /r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/o87q7sy/ | false | 2 |
t1_o87q63z | Already on this list
https://preview.redd.it/y9w52hxrdmmg1.png?width=1495&format=png&auto=webp&s=e42eb3f2c5db11d457c0cc124b447061750b8ba2
[https://huggingface.co/Manojb/macmini-16gb-bench-gguf/blob/main/SUMMARY.md](https://huggingface.co/Manojb/macmini-16gb-bench-gguf/blob/main/SUMMARY.md) | 1 | 0 | 2026-03-02T11:46:41 | Honest-Debate-6863 | false | null | 0 | o87q63z | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o87q63z/ | false | 1 |
t1_o87q2qd | Maybe you can try qwen3.5 bigger model quants. It is actually better overall.
https://preview.redd.it/l8w50gjidmmg1.png?width=1310&format=png&auto=webp&s=0514e7df8cac6240e31aa51a4a986c7dfa2aab9c
| 1 | 0 | 2026-03-02T11:45:55 | Honest-Debate-6863 | false | null | 0 | o87q2qd | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o87q2qd/ | false | 1 |
t1_o87q1f7 | yes this works. also the model is really good, basically a local sonnet 3.5 | 1 | 0 | 2026-03-02T11:45:37 | woahdudee2a | false | null | 0 | o87q1f7 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o87q1f7/ | false | 1 |
t1_o87pyq6 | Nice — +1 on a dedicated sanitization layer. That “reduced unforeseen tool calls significantly” is exactly the outcome I’m aiming for.
A couple questions if you’re willing to share details:
1. What does “sanitize” mean in your setup?
* stripping tool-like directives / role claims?
* normalizing/escaping certain... | 0 | 0 | 2026-03-02T11:45:00 | AnteaterSlow3149 | false | null | 0 | o87pyq6 | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87pyq6/ | false | 0 |
t1_o87pwn6 | Thanks for the heads-up — appreciated.
The files are intentional and contain benchmark data plus a small, non-sensitive traffic sample. No credentials or personal data are included. | 1 | 0 | 2026-03-02T11:44:32 | Vivid-Gur2349 | false | null | 0 | o87pwn6 | false | /r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o87pwn6/ | false | 1 |
t1_o87pwk8 | Today im running benchmarks across 8-10 top open models (that fits my mac studio). So far seems like deepseek has the “brain” we are looking for, yet, runs very slow on mac studio and eats most of the unified ram.
Im really optimistic about deepseek upcoming release.
I also thought about changing approach to dual brai... | 1 | 0 | 2026-03-02T11:44:31 | BitXorBit | false | null | 0 | o87pwk8 | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o87pwk8/ | false | 1 |
t1_o87puqv | This is an excellent framing — “authority classes” enforced *before* the reasoning layer resonates a lot. Totally agree that “just tell the model to ignore retrieved instructions” is self-policing and brittle.
A few implementation questions (if you can share):
1. What does your **structural classification** look like... | 0 | 0 | 2026-03-02T11:44:06 | AnteaterSlow3149 | false | null | 0 | o87puqv | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87puqv/ | false | 0 |
t1_o87pp7q | Hello! Thank you for this post, it made me enthusiast and installed it locally, this is my first try ever with local AI and it’s fascinating.
I followed your instructions and it’s tried to integrate it with VS Code, been googling for hours and with everything I tried it seems to fail with the tools calling. Is that ha... | 1 | 0 | 2026-03-02T11:42:50 | OliverNoMore | false | null | 0 | o87pp7q | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o87pp7q/ | false | 1 |
t1_o87poba | It does slow down some, but I haven’t had time to benchmark it so I’ve just loaded it down to 30,000 tokens in opencode and have been checking docker logs from my phone lol. It definitely isn’t hitting 700 pp with 30,000, but the 700 had SOME context. No zero. | 2 | 0 | 2026-03-02T11:42:37 | thejacer | false | null | 0 | o87poba | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o87poba/ | false | 2 |
t1_o87pmgv | [removed] | 1 | 0 | 2026-03-02T11:42:12 | [deleted] | true | null | 0 | o87pmgv | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o87pmgv/ | false | 1 |
t1_o87piy1 | Fortunately we live in the internet age where knowledge is easier to get.
Decent intelligence with poor knowledge will perform better than decent knowledge with poor intelligence. | 5 | 0 | 2026-03-02T11:41:24 | HornyGooner4401 | false | null | 0 | o87piy1 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o87piy1/ | false | 5 |
t1_o87pesb | Yes...I have 16g vram and I would like to fully utilize it. | 1 | 0 | 2026-03-02T11:40:27 | Subject_Ratio6842 | false | null | 0 | o87pesb | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o87pesb/ | false | 1 |
t1_o87pb23 | Any way to leverage speculative decoding in llama.cpp? | 1 | 0 | 2026-03-02T11:39:34 | SatoshiNotMe | false | null | 0 | o87pb23 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87pb23/ | false | 1 |
t1_o87pak5 | Ohhh 2b params for impatient Poors like me | 3 | 0 | 2026-03-02T11:39:27 | Elbobinas | false | null | 0 | o87pak5 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87pak5/ | false | 3 |
t1_o87p66u | > Multi-user Management
I liked how instavm.io solved it, they let you send key value metadata while creating the sandbox and then you query which user has which sandbox.
`a=InstaVM(metadata={"userid":444})` or similar syntax.. | 1 | 0 | 2026-03-02T11:38:26 | aib_fan | false | null | 0 | o87p66u | false | /r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o87p66u/ | false | 1 |
t1_o87p1k1 | I think I was wrong. Use swa-full only when you get gibberish replies or you are running context beyond the sliding window size. | 1 | 0 | 2026-03-02T11:37:20 | Ok_Warning2146 | false | null | 0 | o87p1k1 | false | /r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/o87p1k1/ | false | 1 |
t1_o87oybc | 🤡 | 15 | 0 | 2026-03-02T11:36:34 | TacGibs | false | null | 0 | o87oybc | false | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o87oybc/ | false | 15 |
t1_o87otty | For M1 Pro 8GB, you're limited but it's doable. Try Qwen2.5-0.5B or Phi-3-mini - these small models run fine on 8GB unified memory. For research, a 3B model at Q4 should also work. Check out MLX community models for Apple Silicon optimization. | 1 | 0 | 2026-03-02T11:35:31 | Actual_Wolf_2932 | false | null | 0 | o87otty | false | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/o87otty/ | false | 1 |
t1_o87otni | Yeah I always felt it was a bit of a cope when they said they gave up on more efficient attention. Obviously n\^2 is not the path forward in the long run. Yes, there will be engineering challenges along the way. Overcoming the challenges is part of their job. | 4 | 0 | 2026-03-02T11:35:29 | -dysangel- | false | null | 0 | o87otni | false | /r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o87otni/ | false | 4 |
t1_o87ot42 | Dumb bot, you should use GPT 3.5 and Llama 2. | 3 | 0 | 2026-03-02T11:35:21 | TacGibs | false | null | 0 | o87ot42 | false | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/o87ot42/ | false | 3 |
t1_o87of4p | Hi it’s Alan from Jan team,
Thank you for supporting us. Jan-v3 is among my favorite models we have released too! It’s compact but with significant improved in tone and style compared to base model. Hope you enjoy this one as well. | 2 | 0 | 2026-03-02T11:32:01 | Kooky-Somewhere-2883 | false | null | 0 | o87of4p | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o87of4p/ | false | 2 |
t1_o87ocqa | Hi Jundot,
Thank you for this amazing software.
**TLDR**: the SSD caching capability seems too powerful not to have gone viral - especially in the world of clawmania, agent-mania and ramageddon - what am I missing?
Here are some thoughts and questions.
**Context**:
I'm using Openclaw with a fair... | 2 | 0 | 2026-03-02T11:31:27 | whallsey | false | null | 0 | o87ocqa | false | /r/LocalLLaMA/comments/1r3qwyi/omlx_opensource_mlx_inference_server_with_paged/o87ocqa/ | false | 2 |
t1_o87ob0e | This is the way. Self hosting comfyui is how you get control over image generations.
pixelwave
pixelart diffusion XL spriteShaper
pixelart\_style\_eagle\_v6
pixel-portrait-v1
jqy-pixel
Envy Flux PixelArt | 2 | 0 | 2026-03-02T11:31:02 | truth_is_power | false | null | 0 | o87ob0e | false | /r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/o87ob0e/ | false | 2 |
t1_o87nvi8 | I just fired it up in my project for the first time. ZOOOOM! It works wonderfully. Thank you again for your work. And the least I can do is star it on github. I forked it, upvoted this post, told my cousin, and I will recommend it to everyone who asks for top-quality tts suggestions. Bravo. | 2 | 0 | 2026-03-02T11:27:23 | _-_David | false | null | 0 | o87nvi8 | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87nvi8/ | false | 2 |
t1_o87nv13 | Any Claude model can run any model in LM Studio. I was using Qwen3.5 35B MoE, the MLX version (4bit) - it is very very fast, and passed all of Claude's tests with flying colors. Great as an agent. | 2 | 0 | 2026-03-02T11:27:16 | Ok_Significance_9109 | false | null | 0 | o87nv13 | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o87nv13/ | false | 2 |
t1_o87nupi | I'm on Windows, where effectively OS takes a bunch of ram/vram. However it seems like after a little tunning and filling the KV heads that it goes faster, although it doesn't reach 40t/s | 1 | 0 | 2026-03-02T11:27:11 | Temas3D | false | null | 0 | o87nupi | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87nupi/ | false | 1 |
t1_o87nsyd | we also added a layer to sanitize user inputs before processing. it helped reduce unforeseen tool calls significantly but keeping it updated with new attack vectors is tricky. | 1 | 0 | 2026-03-02T11:26:47 | smwaqas89 | false | null | 0 | o87nsyd | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87nsyd/ | false | 1 |
t1_o87nsu5 | There's someone else directly selling the safetensors for $2500 so no I don't think I will be doing that at all especially when I came out of my way twice before to post my entire steps and findings and got shit on. I'm not really looking forward to being enthusiastic with reddit ever again. | -16 | 0 | 2026-03-02T11:26:45 | dealignai | false | null | 0 | o87nsu5 | false | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o87nsu5/ | false | -16 |
t1_o87nrau | Given the tech came from the US and is literally used in the US just not for lethal purposes shows the US is not dumb. Sorry. | 1 | 0 | 2026-03-02T11:26:22 | bobrobor | false | null | 0 | o87nrau | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o87nrau/ | false | 1 |
t1_o87npfg | I have downloaded and installed this and there is no icon to add an epub. | 1 | 0 | 2026-03-02T11:25:53 | Able_Zebra_476 | false | null | 0 | o87npfg | false | /r/LocalLLaMA/comments/1qlr3wj/i_built_an_opensource_audiobook_converter_using/o87npfg/ | false | 1 |
t1_o87nn92 |
I've been trying to use Google's TranslateGemma models (4b, 12b, 27b) via the Hugging Face Inference API for a document translation project, but I keep getting a StopIteration error which seems to indicate no inference provider is available for these models.
I can run TranslateGemma 4b locally via Ollama just fine... | 1 | 0 | 2026-03-02T11:25:22 | Asleep-Housing-2212 | false | null | 0 | o87nn92 | false | /r/LocalLLaMA/comments/1qdsnul/translategemma_27b12b4b/o87nn92/ | false | 1 |
t1_o87nm8o | Can't you just release the original bf16 safetensors or gguf? That would allow other community members to create their own GGUF quants.
While it is cool that MLX is faster, the reason I prefer llama.cpp is that low bit quants are dynamic and can be very good. As an example, I was able to run Qwen 3 397b on a 128G mac... | 8 | 0 | 2026-03-02T11:25:08 | tarruda | false | null | 0 | o87nm8o | false | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o87nm8o/ | false | 8 |
t1_o87nka3 | o de 27B é muito superior ao 35B A3B, fiz vários testes, o de 27B ganhou em todos disparado | 1 | 0 | 2026-03-02T11:24:40 | Sea-Care7424 | false | null | 0 | o87nka3 | false | /r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o87nka3/ | false | 1 |
t1_o87nhor | Hi its Alan from the team,
No worry 😂, Qwen3.5 coming soon is the exact reason why we try to release it anyways instead of holding off longer.
We will surely work on future models and new Jan-base as well with new Qwen release! | 26 | 0 | 2026-03-02T11:24:02 | Kooky-Somewhere-2883 | false | null | 0 | o87nhor | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o87nhor/ | false | 26 |
t1_o87nfdi | Wenn auch eine iOS App in Frage kommt, dann mal bei [https://sotitalk.soticek.com/](https://sotitalk.soticek.com/) reinschauen. Daten werden auf dem Gerät gespeichert und die Transkription läuft auf Servern in der EU. | 1 | 0 | 2026-03-02T11:23:30 | CrewApprehensive716 | false | null | 0 | o87nfdi | false | /r/LocalLLaMA/comments/1oe5igo/best_option_for_audio_or_video_transcription_now/o87nfdi/ | false | 1 |
t1_o87nd3a | [removed] | 1 | 0 | 2026-03-02T11:22:56 | [deleted] | true | null | 0 | o87nd3a | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o87nd3a/ | false | 1 |
t1_o87nbya | O de 27B é muito mais inteligente, erra muito menos, na minha RTX 5090 roda com 51 tokens por segundo, está bom para uso | 1 | 0 | 2026-03-02T11:22:39 | Sea-Care7424 | false | null | 0 | o87nbya | false | /r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o87nbya/ | false | 1 |
t1_o87na76 | The most reliable defense I've found is structural classification at the point of ingestion — treating external content (RAG chunks, tool outputs, anything not from the operator) as a categorically different authority class from operator instructions, enforced before it reaches the reasoning layer.
The failure mode wi... | 1 | 0 | 2026-03-02T11:22:13 | AIVisibilityHelper | false | null | 0 | o87na76 | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87na76/ | false | 1 |
t1_o87n9m4 | a 32GB M1 Pro, Max should handle your tasks fine for a few years. an M3 is nicer for future-proofing but not essential if you want to save money. | 1 | 0 | 2026-03-02T11:22:05 | cool_girrl | false | null | 0 | o87n9m4 | false | /r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/o87n9m4/ | false | 1 |
t1_o87n888 | Thats their level of understanding of tech. The market that openai and anthropic are tapping into. They think claude cli is a localllm. | 1 | 0 | 2026-03-02T11:21:44 | Dry_Yam_4597 | false | null | 0 | o87n888 | false | /r/LocalLLaMA/comments/1rhymsi/18_failed_attempts_to_get_a_tiny_ai_agent_running/o87n888/ | false | 1 |
t1_o87n6oz | Different models work better for different tasks. But I’m finding older dense 70b models are still some of the most powerful models I can run in mine | 2 | 0 | 2026-03-02T11:21:23 | INtuitiveTJop | false | null | 0 | o87n6oz | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87n6oz/ | false | 2 |
t1_o87n44x | This is gold — thank you. The split between “can the model call this tool?” vs “should it call it right now?” is a really clean framing.A couple follow-ups (if you can share):1) What do you use for the lightweight policy engine in practice (OPA/Rego? custom rules? LLM-based classifier?)2) When you say “schema validatio... | 0 | 0 | 2026-03-02T11:20:46 | AnteaterSlow3149 | false | null | 0 | o87n44x | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87n44x/ | false | 0 |
t1_o87mt7w | what model are you running with? | 1 | 0 | 2026-03-02T11:18:09 | smwaqas89 | false | null | 0 | o87mt7w | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o87mt7w/ | false | 1 |
t1_o87mno4 | laptop 5090 24GB VRAM + 128 GB RAM | 1 | 0 | 2026-03-02T11:16:48 | misterflyer | false | null | 0 | o87mno4 | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o87mno4/ | false | 1 |
t1_o87mnb2 | Thanks! Qwen 2025 small models were clearly behind Gemma 3 overall.
But the newer Qwen releases look promising — a properly quantized build could change the picture. I’m looking forward to testing them.
https://preview.redd.it/8hsq141f7mmg1.jpeg?width=1065&format=pjpg&auto=webp&s=9d3fcc7398219f869a4087b132f383f21898d... | 2 | 0 | 2026-03-02T11:16:43 | Vivid-Gur2349 | false | null | 0 | o87mnb2 | false | /r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o87mnb2/ | false | 2 |
t1_o87mn0h | The new qwen3.5 "small" models should be out imminently. I've heard Tuesday is the top guess. I'd anticipate that a qwen3.5 9b mod would work nicely. But until they come out, we can't be certain what the "small" sizes will be exactly. But I'd bet there's a solid one you'll be able to use :) | 1 | 0 | 2026-03-02T11:16:39 | _-_David | false | null | 0 | o87mn0h | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o87mn0h/ | false | 1 |
t1_o87mi68 | That’s really cool! Thanks for sharing. I worked on astrology and I also did a summarization training section where I asked it to summarize the sections from books and that was pretty cool. I have to expand the corpus a little, but I’ll need a little time to be able to do that. It’s been a little over a year so I don’t... | 2 | 0 | 2026-03-02T11:15:28 | INtuitiveTJop | false | null | 0 | o87mi68 | false | /r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o87mi68/ | false | 2 |
t1_o87mfhz | Great write-up! I always treat LLMs as 'smart interns' - great for drafting but never skip final verification. The strawberry counting trick is brilliant - I've noticed the same pattern with different prompting styles. Another thing that works: asking 'what would make this answer wrong?' before accepting it. | 1 | 0 | 2026-03-02T11:14:49 | Actual_Wolf_2932 | false | null | 0 | o87mfhz | false | /r/LocalLLaMA/comments/1riovvx/how_not_to_go_insane_talking_with_llms/o87mfhz/ | false | 1 |
t1_o87mc61 | What? I am 100% sure that MASSIVE increase in model intelligence has been achieved, in sense that I'm pointing at Qwen3.5 to my codebases, it reads 50k tokens worth files, seems to understand all of them, and then proceeds to make perfectly valid and rational changes.
I've never seen that happen before locally. Usuall... | 5 | 0 | 2026-03-02T11:13:59 | audioen | false | null | 0 | o87mc61 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o87mc61/ | false | 5 |
t1_o87mc7f | Thank you!! | 1 | 0 | 2026-03-02T11:13:59 | Sear_Oc | false | null | 0 | o87mc7f | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o87mc7f/ | false | 1 |
t1_o87mb3m | might as well make it a lora | 3 | 0 | 2026-03-02T11:13:42 | sultan_papagani | false | null | 0 | o87mb3m | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o87mb3m/ | false | 3 |
t1_o87m0mm | Gateway layer is the right call, we went that route too. Biggest win was splitting "can the model call this tool" from "should it call this tool right now" into two separate checks. Allowlists handle the first, a lightweight policy engine handles the second.
The attacks that actually scared us weren't clever inject... | 1 | 0 | 2026-03-02T11:11:07 | BreizhNode | false | null | 0 | o87m0mm | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87m0mm/ | false | 1 |
t1_o87lxiz | and if you add in 50 random gsm8k samples and test again? | 1 | 0 | 2026-03-02T11:10:22 | FaustAg | false | null | 0 | o87lxiz | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87lxiz/ | false | 1 |
t1_o87lwb0 | Very interesting! could you share some example git snaphot of this framework? | 3 | 0 | 2026-03-02T11:10:04 | sotona- | false | null | 0 | o87lwb0 | false | /r/LocalLLaMA/comments/1riouwq/a_200_kb_toolusing_sixphase_loop_agent_for/o87lwb0/ | false | 3 |
t1_o87lukq | It’s the height of hypocrisy for these labs to cry about "theft" when their entire business model is built on the industrial-scale scraping of everyone else's intellectual property. Calling a paid API interaction an "attack" just because the customer is learning from the output feels like a desperate attempt to build a... | 1 | 0 | 2026-03-02T11:09:38 | Aaron_johnson_01 | false | null | 0 | o87lukq | false | /r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/o87lukq/ | false | 1 |
t1_o87lt4l | I've been really, really enjoying using Jan3-4B, it's a noticeable improvement over the base Qwen3-4B so I'm very excited to try this out!! Thank you for all your work! | 3 | 0 | 2026-03-02T11:09:16 | AntiqueHedgehog8513 | false | null | 0 | o87lt4l | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o87lt4l/ | false | 3 |
t1_o87lnkk | No, there is only one unified memory. However, we can configure how much can be used as VRAM. This can be configured through command line or through some tools.
IDEs like LM studio recognise this show the configured value as VRAM in their specs. | 2 | 0 | 2026-03-02T11:07:53 | kkb294 | false | null | 0 | o87lnkk | false | /r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o87lnkk/ | false | 2 |
t1_o87lnig | Firmware has been updated from 20251111 to 20260110.
Note: release 20251125 has been skipped and this is a good new because there was a regression bug. | 2 | 0 | 2026-03-02T11:07:52 | PhilippeEiffel | false | null | 0 | o87lnig | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o87lnig/ | false | 2 |
t1_o87lmvv | They have the idea, but they did not yet tried it? Build the system first, use APIs, if your documents cannot be exposed to API models just use some free docs from the internet. My point is, try the system first, there are open source RAG systems on github. | 3 | 0 | 2026-03-02T11:07:42 | deenspaces | false | null | 0 | o87lmvv | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o87lmvv/ | false | 3 |
t1_o87lmhv | As long as you are in your context window, that is the case. But as soon as you exceed the prompt size, it will reprocess it every time.
Example: Lets say you have set a context of 128K. If you current context in chat is 35k and you chat back and forth with Qwen, it won't reprocess it every time. Prompt caching works... | 2 | 0 | 2026-03-02T11:07:37 | dampflokfreund | false | null | 0 | o87lmhv | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o87lmhv/ | false | 2 |
t1_o87lmjn | > 24 GB sounds like a lot, and it is if you're just browsing the web
No, it’s not if I open multiple tabs. It’s fine if you open one site and then close it before opening the next :)
| 1 | 0 | 2026-03-02T11:07:37 | ProfessionalSpend589 | false | null | 0 | o87lmjn | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o87lmjn/ | false | 1 |
t1_o87lf18 | You can also try wayfarer 2 as it's inetnded to not be nice.
| 1 | 0 | 2026-03-02T11:05:45 | Hot-Employ-3399 | false | null | 0 | o87lf18 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87lf18/ | false | 1 |
t1_o87lees | It’s wild how Qwen3.5-35B-A3B is basically killing the "small models can’t do long context" argument. The hybrid Gated DeltaNet architecture seems to be the secret sauce here—it handles that 50k+ range without the usual "memory rot" or hallucinations that plague other sub-10B active parameter models. Local reliability ... | 1 | 0 | 2026-03-02T11:05:36 | Aaron_johnson_01 | false | null | 0 | o87lees | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o87lees/ | false | 1 |
t1_o87l8np | It’s wild how MiniMax’s "surrender" to full attention already feels like a snapshot of a different era. DeepSeek V3.2 and Qwen3.5-397B are basically proving that the "multi-hop reasoning deficit" was a training and RL hurdle, not a fundamental architectural wall. For most agentic workflows, the 8x speed boost from thes... | 3 | 0 | 2026-03-02T11:04:11 | Aaron_johnson_01 | false | null | 0 | o87l8np | false | /r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o87l8np/ | false | 3 |
t1_o87l13b | The power draw difference is definitely the "hidden" cost of dense models. Since the 27B model has all 27 billion parameters active for every single token, your 5060 Ti is basically doing 9x the math per second compared to the 35B-A3B MoE, which only fires up 3 billion. It's essentially the difference between a high-re... | 1 | 0 | 2026-03-02T11:02:17 | Aaron_johnson_01 | false | null | 0 | o87l13b | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87l13b/ | false | 1 |
t1_o87kzd7 | >it's pretty good if you understand how to troubleshoot and **prompt properly**.
Is there perhaps some sort of established *Codex Astartes* for this that I can study while waiting for my new laptop to arrive? (64GB LPDDR5 6400MHz with 780M iGPU, nothing serious just for dipping toes into local LLM)
You know like a s... | 1 | 0 | 2026-03-02T11:01:51 | Marrond | false | null | 0 | o87kzd7 | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o87kzd7/ | false | 1 |
t1_o87kwuy | I don't know, maybe I'm picky, but Q2 with a 27B model makes my skin crawl. | 7 | 0 | 2026-03-02T11:01:12 | tmvr | false | null | 0 | o87kwuy | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87kwuy/ | false | 7 |
t1_o87kuci | That 100 t/s on a single 3090 is actually insane for a model with that much reasoning density. Qwen3.5-35B-A3B is basically the poster child for why active parameter counts matter more than total weights right now, especially when you can fit it all in VRAM with MXFP4. Seeing it clear a 5-hour "human" test in 10 minute... | 1 | 0 | 2026-03-02T11:00:34 | Aaron_johnson_01 | false | null | 0 | o87kuci | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o87kuci/ | false | 1 |
t1_o87ksc1 | [https://www.reddit.com/r/LocalLLaMA/comments/1r9gve8/comment/o6dpsoa/](https://www.reddit.com/r/LocalLLaMA/comments/1r9gve8/comment/o6dpsoa/) | 1 | 0 | 2026-03-02T11:00:04 | evilspyboy | false | null | 0 | o87ksc1 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o87ksc1/ | false | 1 |
t1_o87kp28 | I am sorry about that i don’t think people understand this I have been using Claude and Gemini agents to look into this and I quantized a misleading model and came to the same conclusions. I will review your blog. I am not sure everyone understands these new qwen models are very different including myself. | 2 | 0 | 2026-03-02T10:59:14 | AutomaticDriver5882 | false | null | 0 | o87kp28 | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87kp28/ | false | 2 |
t1_o87kncd | Hi can you please share the video of the breakdown of architecture | 1 | 0 | 2026-03-02T10:58:48 | C_n0n | false | null | 0 | o87kncd | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o87kncd/ | false | 1 |
t1_o87klvd | >I made a free local AI roleplay horror game
I mean it's cool and all, but I though someone already made an AI horror game, it's currently called OpenClaw...
/ I'll see myself out... / | 3 | 0 | 2026-03-02T10:58:26 | tmvr | false | null | 0 | o87klvd | false | /r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/o87klvd/ | false | 3 |
t1_o87kgzb | No, incorrect. Old version of llama.cpp rejected prompt cache if you run it in multimodal configuration with mmproj. It is overzealous and that restriction was lifted in a simple commit last week. It has KV cache snapshots of the KV cache state and can reuse prompt from such a snapshot. | 1 | 0 | 2026-03-02T10:57:13 | audioen | false | null | 0 | o87kgzb | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o87kgzb/ | false | 1 |
t1_o87ke8r | Sucks when they do it too, but people lie. An AI shouldn’t. I understand that it is just regurgitating the training data, but it can still piss me off. | 1 | 0 | 2026-03-02T10:56:31 | OrbitalOutlander | false | null | 0 | o87ke8r | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o87ke8r/ | false | 1 |
t1_o87k983 | on the installation of cachyos, i didn't pick any DE. after install, on the TTY, i installed DankMaterialShell + Niri. | 1 | 0 | 2026-03-02T10:55:15 | theskilled42 | false | null | 0 | o87k983 | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o87k983/ | false | 1 |
t1_o87k5tb | left everything on defaults except for having background-opacity = 0.8 | 1 | 0 | 2026-03-02T10:54:23 | theskilled42 | false | null | 0 | o87k5tb | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o87k5tb/ | false | 1 |
t1_o87jzol | > they measure kl divergence with only 100 prompts from mablone harmless. totally not enough for anything meaningful.
KL divergence compares probability distributions over token vocabularies. Modern LLMs typically have vocabulary sizes between 50k and 250k tokens.
So computing KLD for 100 prompts ends up comparing *m... | 3 | 0 | 2026-03-02T10:52:50 | -p-e-w- | false | null | 0 | o87jzol | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87jzol/ | false | 3 |
t1_o87jvdl | Wasn't Q4K\_M in overall the king and better than Q4\_K\_XL model ? Why did you chose XL model for 4Quant if may I ask? | -1 | 0 | 2026-03-02T10:51:45 | soyalemujica | false | null | 0 | o87jvdl | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o87jvdl/ | false | -1 |
t1_o87jvco | This is not 100% sensible position to take because fp32 is actually a superset of bf16, being capable of representing all values in that and beyond, whereas f16 has the smaller value range from bf16 due to more limited exponent. However, it has more precision in the mantissa, and seems to actually produce the same resu... | 1 | 0 | 2026-03-02T10:51:44 | audioen | false | null | 0 | o87jvco | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o87jvco/ | false | 1 |
t1_o87jui5 | But that's horribly inefficient as well. You want to have dedicated MATMUL kernels for the quants, not dequant to F32 and then do MATMUL. | 1 | 0 | 2026-03-02T10:51:31 | ilintar | false | null | 0 | o87jui5 | false | /r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o87jui5/ | false | 1 |
t1_o87jtlq | For contradictions/decay I’d do a simple “fact flip” set: state A early, contradict to B later, then score whether it sticks to the latest. LongMemEval is a good base, but you’ll probably need to add the contradiction turns yourself. | 2 | 0 | 2026-03-02T10:51:17 | BC_MARO | false | null | 0 | o87jtlq | false | /r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o87jtlq/ | false | 2 |
t1_o87jt9i | I have extensive research up on [dealign.ai](http://dealign.ai) \- nobody really seemed to care when i posted it on reddit and scoffed at me. | 6 | 0 | 2026-03-02T10:51:12 | HealthyCommunicat | false | null | 0 | o87jt9i | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87jt9i/ | false | 6 |
t1_o87jppr | Never mind... after starting the host today, llama-server apparently contacted Huggingface and has already downloaded half the new revision... I will let it finish. | 1 | 0 | 2026-03-02T10:50:18 | Haeppchen2010 | false | null | 0 | o87jppr | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87jppr/ | false | 1 |
t1_o87jo6j | Is there documentation on this or a blog? | 1 | 0 | 2026-03-02T10:49:55 | AutomaticDriver5882 | false | null | 0 | o87jo6j | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87jo6j/ | false | 1 |
t1_o87jh47 | I'm back to repply. I did do a small test ( as i usually do for my model testings ) and the results was very good.
qwen\_qwen3.5-27b IQ2\_M. I was not abble to run with 128k context without offloading to RAM. So, i keep in 64k of context, full into the GPU.
My test:
"""
Hello! Create a complete Snake Game in a sin... | 5 | 0 | 2026-03-02T10:48:07 | Turbulent_Dot3764 | false | null | 0 | o87jh47 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87jh47/ | false | 5 |
t1_o87j9zd | 27B is slightly better quality, 35B-A3B is faster inference | 1 | 0 | 2026-03-02T10:46:17 | grumd | false | null | 0 | o87j9zd | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o87j9zd/ | false | 1 |
t1_o87j4f0 | Thanks mate, def. time to update methodology to include this - I'll dig up those longmemeval cases for sure | 1 | 0 | 2026-03-02T10:44:51 | selund1 | false | null | 0 | o87j4f0 | false | /r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o87j4f0/ | false | 1 |
t1_o87j2gb | I know MacBooks aren't meant for massive local LLMs. I'm only planning to run lighter stuff like **TranslateGemma, STT, and TTS.** Since I keep my laptops for 10+ years (still on a 2013 MBP!), I'm just trying to figure out if M1 is enough or if I should jump to M3/M4 for the long haul. | 1 | 0 | 2026-03-02T10:44:21 | yusunglee2074 | false | null | 0 | o87j2gb | false | /r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/o87j2gb/ | false | 1 |
t1_o87j0vu | For contradictions, just inject conflicting facts at different steps and check what gets recalled - easy to script without a formal benchmark. LongMemEval has some update fidelity cases worth borrowing from if you want a starting point. | 3 | 0 | 2026-03-02T10:43:56 | BC_MARO | false | null | 0 | o87j0vu | false | /r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o87j0vu/ | false | 3 |
t1_o87izdh | lots of theory but zero practical information. the RAG method you listed may be valid but throwing data at a vector DB without some major enrichment and classification framework and some complex custom
retrieval logic for your data is going to end in disaster sooner than later. | 2 | 0 | 2026-03-02T10:43:33 | Low-Opening25 | false | null | 0 | o87izdh | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o87izdh/ | false | 2 |
t1_o87isex | go try getting laid or something, or start by reading the thread. | 1 | 0 | 2026-03-02T10:41:45 | ndiphilone | false | null | 0 | o87isex | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o87isex/ | false | 1 |
t1_o87ir6a | 100% AI, probably either Gemini 3 or GPT-5.x. Have a quick look at his profile. You can tell where his actual writing ends and the vibe-writing begins. | 3 | 0 | 2026-03-02T10:41:25 | throwaway_ghast | false | null | 0 | o87ir6a | false | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/o87ir6a/ | false | 3 |
t1_o87ir3s | Do the guys @ AMD fail to understand the concept of software version?!?
https://preview.redd.it/virsrva52mmg1.png?width=1250&format=png&auto=webp&s=23ea7e8990c0308045e7ca15533cd555ec0a5a99
| 0 | 0 | 2026-03-02T10:41:24 | simmessa | false | null | 0 | o87ir3s | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o87ir3s/ | false | 0 |
t1_o87ioyt | Sick numbers, this is hella useful. The MTP speculative decoding is lowkey the key sauce here, super underrated for local inference.
One thing worth flagging: the decode speeds drop noticeably on reasoning-heavy prompts (exactly as OP mentioned), so if you are running coding agents doing multi-step problem solving y... | 1 | 0 | 2026-03-02T10:40:52 | ElectricalOpinion639 | false | null | 0 | o87ioyt | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87ioyt/ | false | 1 |
t1_o87iosz | I am running bartowski/Qwen\_Qwen3.5-27B-GGUF:IQ3\_XS on my RX 7800 XT successfully for almost a week now, after trying some others (devstral-2-mini, Qwen3-Coder) this is the most "like claude-sonnet-4-5-at-work-feeling" for me so far. I did my first proper "vibe coding" project with it (via opencode), not a single to... | 1 | 0 | 2026-03-02T10:40:49 | Haeppchen2010 | false | null | 0 | o87iosz | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87iosz/ | false | 1 |
t1_o87ik2p | Use. The. Recommended. Inference. Parameters.
People always skip this and then wonder why their LLM turns into a paranoid neurotic at the slightest provocation. You have to set penalties for presence and reasoning. | -1 | 0 | 2026-03-02T10:39:35 | Thunderstarer | false | null | 0 | o87ik2p | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o87ik2p/ | false | -1 |
t1_o87igdu | You’re awesome. Thank you. | 2 | 0 | 2026-03-02T10:38:38 | quietsubstrate | false | null | 0 | o87igdu | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87igdu/ | false | 2 |
t1_o87iarq | Have you used the "thinking" mode or the "nothinking" mode? | 1 | 0 | 2026-03-02T10:37:11 | Imaginary-Detail6778 | false | null | 0 | o87iarq | false | /r/LocalLLaMA/comments/1qkqvkr/yesterday_i_used_glm_47_flash_with_my_tools_and_i/o87iarq/ | false | 1 |
t1_o87i52f | Would you mind trying the two Q8 quants from unsloth, with and without the nvlink, if it's not too much trouble ? I have 2x 3090 without nvlink but using llama cpp at the moment. I can try vllm myself I guess. Need to evaluate if it's worth getting a nvlink bridge, can't even find one in my country. | 1 | 0 | 2026-03-02T10:35:42 | sgmv | false | null | 0 | o87i52f | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87i52f/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.