name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o87i0v1 | There is a consolidate phase that does the post-action review. | 2 | 0 | 2026-03-02T10:34:38 | nnet42 | false | null | 0 | o87i0v1 | false | /r/LocalLLaMA/comments/1riouwq/a_200_kb_toolusing_sixphase_loop_agent_for/o87i0v1/ | false | 2 |
t1_o87hzta | Running the same unsloth model at 64k context via llama.cpp server. I prefer having the thinking on, and though I haven't seen the benchmarks I am loving this model a lot for my needs.
The first response takes a shit load of tokens and thinking, even to respond to basic 'hi', but post that it is quite fast. I started ... | 2 | 0 | 2026-03-02T10:34:21 | Silent_Man_100 | false | null | 0 | o87hzta | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o87hzta/ | false | 2 |
t1_o87hzm7 | It comes with speculative decoding builtin. | 3 | 0 | 2026-03-02T10:34:18 | RnRau | false | null | 0 | o87hzm7 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87hzm7/ | false | 3 |
t1_o87hzh1 | I am also building a model where the model has to judge on an outbound call if on the receiver end it's an IVR or a human, first I tried building a text based model which will take first 5 seconds(about 3 sentences of the call) and transcribe them and make judgement. But it was not performing well as it learned the pat... | 1 | 0 | 2026-03-02T10:34:16 | AurBtaKaisaHai | false | null | 0 | o87hzh1 | false | /r/LocalLLaMA/comments/1re6enq/seeking_productiongrade_opensource_llm_for/o87hzh1/ | false | 1 |
t1_o87hw6a | After using it for a few days, I've noticed the following issues:
1. Markdown output has some problems—line breaks before and after Markdown symbols are missing, likely because the line breaks were stripped.
2. Emoji output is broken; all emojis display as ��� ���.
This is my command
vllm-mlx serve mlx-communit... | 1 | 0 | 2026-03-02T10:33:24 | AdPast3 | false | null | 0 | o87hw6a | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o87hw6a/ | false | 1 |
t1_o87hswh | Yes | 1 | 0 | 2026-03-02T10:32:32 | RnRau | false | null | 0 | o87hswh | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87hswh/ | false | 1 |
t1_o87hlly | That's a good idea too! | 0 | 0 | 2026-03-02T10:30:36 | jeremyckahn | false | null | 0 | o87hlly | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o87hlly/ | false | 0 |
t1_o87hhgu | They don’t | 5 | 0 | 2026-03-02T10:29:30 | Polite_Jello_377 | false | null | 0 | o87hhgu | false | /r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o87hhgu/ | false | 5 |
t1_o87h7kr | Just added a "HowTo" comment. Claude did the heavy lifting, but that's essentially what was created. | 1 | 0 | 2026-03-02T10:26:54 | Ok_Significance_9109 | false | null | 0 | o87h7kr | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o87h7kr/ | false | 1 |
t1_o87h73a | please test: Cognee, Memori, memU! | 2 | 0 | 2026-03-02T10:26:46 | Embarrassed_Soup_279 | false | null | 0 | o87h73a | false | /r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/o87h73a/ | false | 2 |
t1_o87h4x5 | What's stopping you from editing the template, removing the check, loading the model with that, and seeing how it performs? These templates are more like suggestions rather than strict rules, because it's all just text under the hood. | 2 | 0 | 2026-03-02T10:26:13 | aeqri | false | null | 0 | o87h4x5 | false | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o87h4x5/ | false | 2 |
t1_o87h4ou | How to do this:
**1. LM Studio setup**
\- Install LM Studio, load a model (any tool-calling capable one)
\- Enable the local server (Settings → Local Server)
\- Add {%- set enable\_thinking = false %} to the top of the jinja template in the model card
**2.** **The** **two** **Python** **scripts:**
[https://g... | 2 | 0 | 2026-03-02T10:26:09 | Ok_Significance_9109 | false | null | 0 | o87h4ou | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o87h4ou/ | false | 2 |
t1_o87h38p | The code looks so clean and easy to understand.
Being a developer myself, Would love to play with it more. | 1 | 0 | 2026-03-02T10:25:46 | FortiCore | false | null | 0 | o87h38p | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o87h38p/ | false | 1 |
t1_o87h0ur | **Add Openrouter provider to copaw** :
[https://www.reddit.com/r/copaw/comments/1riorgu/comment/o87es3g/](https://www.reddit.com/r/copaw/comments/1riorgu/comment/o87es3g/) | 3 | 0 | 2026-03-02T10:25:07 | FortiCore | false | null | 0 | o87h0ur | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o87h0ur/ | false | 3 |
t1_o87gtt5 | The LFM models by liquid AI are great, they have a 1b model that you can fine tune easily on a free colab gpu, that's probably your best bet.
You can also enforce structured outputs with langchain and pydantic if you really want to make sure it's correct.
Good luck ! | 1 | 0 | 2026-03-02T10:23:12 | Certain-Cod-1404 | false | null | 0 | o87gtt5 | false | /r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o87gtt5/ | false | 1 |
t1_o87gt4e | does it still show this? | 1 | 0 | 2026-03-02T10:23:01 | salary_pending | false | null | 0 | o87gt4e | false | /r/LocalLLaMA/comments/1rdf4ai/claude_sonnet46_thinks_he_is_deepseekv3_when/o87gt4e/ | false | 1 |
t1_o87grlk | Well, I also have Radeon drivers 26.2.2 but btw they're from a different date ?!? 17/2 :/
https://preview.redd.it/wumv81fqylmg1.png?width=333&format=png&auto=webp&s=c816c0979f849117ca9e1400daa1fc89a5aef469
| 1 | 0 | 2026-03-02T10:22:36 | simmessa | false | null | 0 | o87grlk | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o87grlk/ | false | 1 |
t1_o87gmjs | Gateway layer is definitely the right approach. We've been running something similar in production for months.
The main attacks we've seen:
- RAG context injection where user data contains malicious instructions
- Tool chaining exploits where one tool's output gets fed into another in unexpected ways
- Parameter injec... | 0 | 0 | 2026-03-02T10:21:15 | UnderstandingOwn4448 | false | null | 0 | o87gmjs | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o87gmjs/ | false | 0 |
t1_o87gkag | best quantization for 16GB Q4 is this:
[https://huggingface.co/sokann/Qwen3.5-27B-GGUF-4.165bpw](https://huggingface.co/sokann/Qwen3.5-27B-GGUF-4.165bpw)
Here, with 18k of context, it does 39 t/s and with 22k around 25 t/s. | 1 | 0 | 2026-03-02T10:20:38 | naxneri | false | null | 0 | o87gkag | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87gkag/ | false | 1 |
t1_o87gjku | \+1
I could use those vllm args as well <3 | 4 | 0 | 2026-03-02T10:20:27 | UltrMgns | false | null | 0 | o87gjku | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o87gjku/ | false | 4 |
t1_o87gira | [removed] | 1 | 0 | 2026-03-02T10:20:13 | [deleted] | true | null | 0 | o87gira | false | /r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o87gira/ | false | 1 |
t1_o87gheq | This is a really interesting approach! The six-phase cognitive loop is clever - I've been experimenting with similar architectures but using a simpler two-phase approach (think + execute). Have you considered adding a reflection phase where the agent reviews its own output before committing? Would love to see benchmark... | -1 | 0 | 2026-03-02T10:19:52 | Actual_Wolf_2932 | false | null | 0 | o87gheq | false | /r/LocalLLaMA/comments/1riouwq/a_200_kb_toolusing_sixphase_loop_agent_for/o87gheq/ | false | -1 |
t1_o87ge9c | It's not true though because the AI allows every single person to be uniquely targeted. | 1 | 0 | 2026-03-02T10:19:00 | rorykoehler | false | null | 0 | o87ge9c | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o87ge9c/ | false | 1 |
t1_o87ge9n | the sandbox isolation layer (E2B, Firecracker) handles the per-agent security boundary well. the part that gets missed is where the sandbox itself runs: if you're launching it from a laptop or a shared dev machine, cold starts get worse as load increases. a dedicated VPS as the sandbox host keeps spin-up times consiste... | 1 | 0 | 2026-03-02T10:19:00 | BreizhNode | false | null | 0 | o87ge9n | false | /r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o87ge9n/ | false | 1 |
t1_o87gcgu | What GPU | 1 | 0 | 2026-03-02T10:18:31 | Lefty_Pencil | false | null | 0 | o87gcgu | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87gcgu/ | false | 1 |
t1_o87g8vz | its my favorite sub on reddit - by far! | 1 | 0 | 2026-03-02T10:17:35 | reneil1337 | false | null | 0 | o87g8vz | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o87g8vz/ | false | 1 |
t1_o87g8oo | [https://github.com/rekuenkdr/Qwen3-TTS-streaming](https://github.com/rekuenkdr/Qwen3-TTS-streaming) with this one of 50 series i get first chunk in one second with a 5060ti. ill have to try | 1 | 0 | 2026-03-02T10:17:32 | Ok_Milk1045 | false | null | 0 | o87g8oo | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87g8oo/ | false | 1 |
t1_o87g5as | the routing layer approach is smart. the part that gets tricky in production is when the laptop goes to sleep between agent calls and the router is unavailable. running the Mistral router on an always-on VPS sidesteps that and keeps the 60% saving consistent | 0 | 0 | 2026-03-02T10:16:37 | BreizhNode | false | null | 0 | o87g5as | false | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/o87g5as/ | false | 0 |
t1_o87g4r0 | I'm trying to understand precisely what you did. I'm rephrasing what I understood, please tell me if I'm wrong:
You're embedding the markdown, do a mean-polling \[1\] to reduce dimension (which is a fairly standard context-length-extension method). And then to compensate for the loss of information due to the mean-pol... | 1 | 0 | 2026-03-02T10:16:29 | phhusson | false | null | 0 | o87g4r0 | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o87g4r0/ | false | 1 |
t1_o87g2a1 | wot xD, how are you running it? if you create an issue or paste a trace I might be able to fix it :) | 1 | 0 | 2026-03-02T10:15:49 | futterneid | false | null | 0 | o87g2a1 | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87g2a1/ | false | 1 |
t1_o87g0w2 | [removed] | 1 | 0 | 2026-03-02T10:15:28 | [deleted] | true | null | 0 | o87g0w2 | false | /r/LocalLLaMA/comments/1rip0bh/glm5_matches_claude_opus_46_intelligence_at_155m/o87g0w2/ | false | 1 |
t1_o87fzvr | Are you trying to buy a GPU or do you have access to it? If you have access to it, you can just run the benchmark and know :) For the 4060, 2905 ms is without optimizations! With optimizations it's 460ms. I would expect the 5070 to be between 4060 and 4090, so between 460ms and 174ms. Fairly fast in my opinion | 1 | 0 | 2026-03-02T10:15:12 | futterneid | false | null | 0 | o87fzvr | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87fzvr/ | false | 1 |
t1_o87fzsu | AI bot or repost bot, call it. | 8 | 0 | 2026-03-02T10:15:10 | KaroYadgar | false | null | 0 | o87fzsu | false | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/o87fzsu/ | false | 8 |
t1_o87fy3b | better performance on niche tasks. | 1 | 0 | 2026-03-02T10:14:42 | Deep-Vermicelli-4591 | false | null | 0 | o87fy3b | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o87fy3b/ | false | 1 |
t1_o87fuid | 8GB unified is workable but tight. Qwen3.5-7B at Q4 quantization fits (~4.5GB), leaving just enough for the OS.
for research tasks needing longer context, the model starts swapping above ~2K tokens which kills latency. if you find local performance frustrating, pointing OpenClaw at a remote VPS with 16-24GB RAM is a c... | 1 | 0 | 2026-03-02T10:13:44 | BreizhNode | false | null | 0 | o87fuid | false | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/o87fuid/ | false | 1 |
t1_o87fss5 | That's interesting feedback KeyToAll! The dependency is "torch>=2.1", so I'm not sure why it's resolving to the wrong CUDA for your system.
Re Blackwell, Yes, it should work. I tested it and so far no users have really complained beyond small install quirks like here above.
Re flashAttn: Are you sure it wasn't using... | 1 | 0 | 2026-03-02T10:13:17 | futterneid | false | null | 0 | o87fss5 | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87fss5/ | false | 1 |
t1_o87fp8l | Thanks for the link. | 1 | 0 | 2026-03-02T10:12:19 | ProfessionalSpend589 | false | null | 0 | o87fp8l | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o87fp8l/ | false | 1 |
t1_o87fkr5 | When it comes to multi turn conversation (casual, RP etc) nothing really beats Llama 3 70B based models or in smaller size Gemma3 27B / Mistral small based models - out of which only Mistral small updates, L3 and Gemma3 are old. Okay, the very large MoE >300B probably do, but those are hard to run locally.
Or in other... | 1 | 0 | 2026-03-02T10:11:08 | Mart-McUH | false | null | 0 | o87fkr5 | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o87fkr5/ | false | 1 |
t1_o87fd4j | why is multiple user better to have --swa-full = true ?
| 1 | 0 | 2026-03-02T10:09:08 | chrisoutwright | false | null | 0 | o87fd4j | false | /r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/o87fd4j/ | false | 1 |
t1_o87fayf | Thank you! Worked hard on it :) | 1 | 0 | 2026-03-02T10:08:34 | futterneid | false | null | 0 | o87fayf | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87fayf/ | false | 1 |
t1_o87fa6x | That being said, I think the Mac is made for MoE's. gpt-oss:120b runs at over 70 tokens per second, even on large context. qwen3.5-122b-a10b at about 40-45. | 1 | 0 | 2026-03-02T10:08:22 | waescher | false | null | 0 | o87fa6x | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87fa6x/ | false | 1 |
t1_o87f9ep | yeah, I know. I call that RTF even thought people call it RTFx, I can't shake it xD I used to teach sound digital signal processing in a university 10 years ago and RTF followed the convention I use, but then people changed the convention and now I'm old :(
https://preview.redd.it/f3grz8m6wlmg1.png?width=825&format=pn... | 1 | 0 | 2026-03-02T10:08:09 | futterneid | false | null | 0 | o87f9ep | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87f9ep/ | false | 1 |
t1_o87f5z4 | Amazing. On Solair AI, qwen3 4B is the best model i could test. But it could be faster, can’t wait to test 3.5 | 1 | 0 | 2026-03-02T10:07:14 | Traditional-Card6096 | false | null | 0 | o87f5z4 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o87f5z4/ | false | 1 |
t1_o87f3hd | [deleted] | 1 | 0 | 2026-03-02T10:06:36 | [deleted] | true | null | 0 | o87f3hd | false | /r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/o87f3hd/ | false | 1 |
t1_o87f2ok | Do these models support FIM? | 5 | 0 | 2026-03-02T10:06:23 | Ill-Fishing-1451 | false | null | 0 | o87f2ok | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o87f2ok/ | false | 5 |
t1_o87f2k4 | You're funny David, I hope you enjoy the project! If you find issues, feel free to post them :) Also, if you know other people that would benefit from the project, if it gets a bit more diffusion I would be able to invest more time on it. Stars and such help :)
Hector, I think you should be able to implement something... | 1 | 0 | 2026-03-02T10:06:21 | futterneid | false | null | 0 | o87f2k4 | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o87f2k4/ | false | 1 |
t1_o87f1t3 | Did you already check out the REAP? [https://huggingface.co/mradermacher/MiniMax-M2.5-REAP-172B-A10B-i1-GGUF](https://huggingface.co/mradermacher/MiniMax-M2.5-REAP-172B-A10B-i1-GGUF) | 1 | 0 | 2026-03-02T10:06:09 | Equivalent-Belt5489 | false | null | 0 | o87f1t3 | false | /r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/o87f1t3/ | false | 1 |
t1_o87f1gn | [removed] | 1 | 0 | 2026-03-02T10:06:04 | [deleted] | true | null | 0 | o87f1gn | false | /r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/o87f1gn/ | false | 1 |
t1_o87ewla | One of my favorite Douglas Adams quotes form Last chance to see
>*I have a well-deserved reputation for being something of a gadget freak, and am rarely happier than when spending an entire day programming my computer to perform automatically a task that would otherwise take me a good ten seconds to do by hand. Ten s... | 1 | 0 | 2026-03-02T10:04:46 | Grant_Son | false | null | 0 | o87ewla | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o87ewla/ | false | 1 |
t1_o87et20 | Okay, I went with [https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit](https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit) (MLX). It runs okay on 10 tokens per second.
I then increased context to 128000 and ran [https://github.com/awaescher/llmaid](https://github.com/awaescher/llmaid) against some files for ab... | 1 | 0 | 2026-03-02T10:03:50 | waescher | false | null | 0 | o87et20 | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87et20/ | false | 1 |
t1_o87ertv | knowing the Chinese and seeing their passion, like with the desert changing to useful land again, I have no doubts, that they will crush Nvidia to mashed patato in the future. They will fund enormous amounts of money with help of goverment. Because china is only dependant on itself. Always attacked by others. alw... | 1 | 0 | 2026-03-02T10:03:30 | games-and-chocolate | false | null | 0 | o87ertv | false | /r/LocalLLaMA/comments/1n46ify/finally_china_entering_the_gpu_market_to_destroy/o87ertv/ | false | 1 |
t1_o87en32 | Creative writing performance?
I'd take inspiration from those prompts - https://github.com/EQ-bench/creative-writing-bench/blob/main/data/creative_writing_prompts_v3.json
And use a model that scores well on creative writing benchmark. http://eqbench.com/creative_writing.html | 1 | 0 | 2026-03-02T10:02:13 | FullOf_Bad_Ideas | false | null | 0 | o87en32 | false | /r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o87en32/ | false | 1 |
t1_o87eh2z | Retained Even prompt processing at 30k context? That is impressive!
I’m getting on my 2x P40s 300pp and 34tg. But after about 30k context my pp slows to 120pp.
At 100k context it’s barely useable.
Man I wish I got in on those Mi50 32gb deals mid way through last year. | 1 | 0 | 2026-03-02T10:00:35 | tehinterwebs56 | false | null | 0 | o87eh2z | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o87eh2z/ | false | 1 |
t1_o87e9b7 | So, I am not a mac user. Still:
I have been playing with qwen 3.5 a bit. (27b and 35b). I find it 'overthinking' the simple stuff. To the point that I (at the moment) have reverted to gpt-oss-120b with medium thinking. It still find it a solid model.
I know there are tricks to disable thinking for qwen 3.5. Did not h... | 2 | 0 | 2026-03-02T09:58:32 | ethertype | false | null | 0 | o87e9b7 | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87e9b7/ | false | 2 |
t1_o87e6vu | Fuck off | 3 | 0 | 2026-03-02T09:57:54 | Velocita84 | false | null | 0 | o87e6vu | false | /r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/o87e6vu/ | false | 3 |
t1_o87e4b2 | what gpu are you running on? | 1 | 0 | 2026-03-02T09:57:13 | xFloaty | false | null | 0 | o87e4b2 | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o87e4b2/ | false | 1 |
t1_o87e0z0 | Do a faiss on the documents. Each document is vectorized. And when you do a search, that search finds similar vectors. Results are semantically similar to the meaning of your question. You can feed the results to an llm or however you wanna do that. Basically, look up RAG.
It's something my 7 year old gaming laptop ... | 1 | 0 | 2026-03-02T09:56:19 | ArchdukeofHyperbole | false | null | 0 | o87e0z0 | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o87e0z0/ | false | 1 |
t1_o87dz4m | How did you do this? I was thinking of doing the same by patching the ReadFile etc. tools in Claude code to use a local model (to make it faster). | 1 | 0 | 2026-03-02T09:55:49 | DeltaSqueezer | false | null | 0 | o87dz4m | false | /r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/o87dz4m/ | false | 1 |
t1_o87dv7r | Infinite intelligence with zero knowledge can do nothing.
Infinite knowledge with zero intelligence can do nothing.
You need good balance of both. | 4 | 0 | 2026-03-02T09:54:47 | Mart-McUH | false | null | 0 | o87dv7r | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o87dv7r/ | false | 4 |
t1_o87dq1e | iQ2M? And about quality? What is you use case ? I also have a 5060 ti 16gb. What can I expect? | 2 | 0 | 2026-03-02T09:53:24 | Turbulent_Dot3764 | false | null | 0 | o87dq1e | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87dq1e/ | false | 2 |
t1_o87dllc | "even gpt 5.2" bruh
Go to civitai.com and find pixelart finetune, wan is good for anim out of the box but requires some post processing if you are unsatisfied with the result, maybe there is pixelart wan finetune | 7 | 0 | 2026-03-02T09:52:12 | justicecurcian | false | null | 0 | o87dllc | false | /r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/o87dllc/ | false | 7 |
t1_o87dien | Yep. I get around there with mine @ 100k context mxfp4. | 1 | 0 | 2026-03-02T09:51:21 | Xp_12 | false | null | 0 | o87dien | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o87dien/ | false | 1 |
t1_o87dgl8 | I would have preferred a joke about LLMs or something. Not this juvenile thing.
Everything requires time and money (or equivalent goods).
| 1 | 0 | 2026-03-02T09:50:52 | ProfessionalSpend589 | false | null | 0 | o87dgl8 | false | /r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o87dgl8/ | false | 1 |
t1_o87daf9 | yeah the agentic environment makes a massive difference. its not just about the model weights, its about how the system around the model structures the task. claude code basically gives the model a well-designed loop with tool access, error recovery, and context management built in. when you give a local model the same... | 2 | 0 | 2026-03-02T09:49:13 | Friendly-Ask6895 | false | null | 0 | o87daf9 | false | /r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/o87daf9/ | false | 2 |
t1_o87d4i2 | Haha, sure, and then they come with stupid answer out of the blue no human with elementary logic would do. Like recently chatgpt thought for some time when I asked it to produce kind of regular expression (retrieve profile) with specific criteria, it was not trivial but not too difficult.
It thought and thought and th... | 1 | 0 | 2026-03-02T09:47:37 | Mart-McUH | false | null | 0 | o87d4i2 | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o87d4i2/ | false | 1 |
t1_o87d3ck | What cachyos is that o.o | 1 | 0 | 2026-03-02T09:47:19 | Sear_Oc | false | null | 0 | o87d3ck | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o87d3ck/ | false | 1 |
t1_o87d1z3 | I can take a picture later today, I have one of the cards in an improvised setup because I'm rebuilding my inference rig from an ancient dual xeon to a threadripper 1920x. Still old but I got it for free. | 1 | 0 | 2026-03-02T09:46:57 | MaddesJG | false | null | 0 | o87d1z3 | false | /r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o87d1z3/ | false | 1 |
t1_o87czv9 | It could be. Maybe it continues with service worker in the background or something and I need to kill the whole browser or at least the service worker.
The problem is if someone else is tying up the server, I have to terminate on the server side. | 1 | 0 | 2026-03-02T09:46:22 | DeltaSqueezer | false | null | 0 | o87czv9 | false | /r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/o87czv9/ | false | 1 |
t1_o87cvux | I have a 5090 and I'm down to run the notebook for you and give you the results or if you're confortable with that you can use colab, the free tier gives you access to a 16 gb GPU if I remember | 2 | 0 | 2026-03-02T09:45:18 | Certain-Cod-1404 | false | null | 0 | o87cvux | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o87cvux/ | false | 2 |
t1_o87cv4p | Dual 5060ti usually gives me around 80 t/s for models like Qwen 3.5/35b | 1 | 0 | 2026-03-02T09:45:07 | andy_potato | false | null | 0 | o87cv4p | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o87cv4p/ | false | 1 |
t1_o87cg1d | Maybe wait a bit until unsloth fixes the upload. Currently, it gets stuck in loops above a certain context size. | 3 | 0 | 2026-03-02T09:40:59 | Zc5Gwu | false | null | 0 | o87cg1d | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o87cg1d/ | false | 3 |
t1_o87cdbe | Looks amazing! When I try to record I keep getting the "it needs to download 10 GB AI models, wait 10 minutes" I've been trying for a couple of days and having it open all the time. Have no prior experience coding really so I am on deep waters :/ just getting started now. | 1 | 0 | 2026-03-02T09:40:15 | Few_Tie1860 | false | null | 0 | o87cdbe | false | /r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o87cdbe/ | false | 1 |
t1_o87cbyc | You can try to find the first instance of the failed toolcall attempt in the conversation, cut off anything after it, and insert the toolcall\_response of "This tool returned too many errors and encountered a loop. Let the user know". or something like that. The LLM will see that response and act accordingly. | 1 | 0 | 2026-03-02T09:39:53 | MaxKruse96 | false | null | 0 | o87cbyc | false | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o87cbyc/ | false | 1 |
t1_o87cbdt | If your project is large and complex (eg like medical information system) and running stable say for 5 years incorporating new requirements (both from users/clients as well as legislation, which is very unpredictable) as well as fixing unexpected runtime problems (not all of them are caused by bugs but often because of... | 1 | 0 | 2026-03-02T09:39:44 | Mart-McUH | false | null | 0 | o87cbdt | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o87cbdt/ | false | 1 |
t1_o87c2iv | Can someone enlighten me on why specifically they're excited for fine tuning? | 1 | 0 | 2026-03-02T09:37:18 | amejin | false | null | 0 | o87c2iv | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o87c2iv/ | false | 1 |
t1_o87blh6 | So, he does not care if model is open source/weights, he only cares if model is better. That says it all, he does not care about open source/weights at all.
My answer to Dario: Control. I do not want you to control the model I use. Privacy too, that could be solved in cloud, it would require trust though, of which the... | 1 | 0 | 2026-03-02T09:32:33 | Mart-McUH | false | null | 0 | o87blh6 | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o87blh6/ | false | 1 |
t1_o87bicc | yes | 3 | 0 | 2026-03-02T09:31:42 | Odd-Ordinary-5922 | false | null | 0 | o87bicc | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o87bicc/ | false | 3 |
t1_o87bdcj | How many people are going to be using it concurrently, what's your budget and do you have a specific size/class of models you're aiming for? | 6 | 0 | 2026-03-02T09:30:19 | Certain-Cod-1404 | false | null | 0 | o87bdcj | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o87bdcj/ | false | 6 |
t1_o87b1x6 | Huh, so I’m not the only one who found the 27b, but in quant Q8_0 (from Unsloth) to be slow on my GPU (Radeon R9700).
But I’m satisfied with the speed of Mistral Small 2 24b in quant Q8_0 for chat. I haven’t tested it fully, though. | 2 | 0 | 2026-03-02T09:27:07 | ProfessionalSpend589 | false | null | 0 | o87b1x6 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o87b1x6/ | false | 2 |
t1_o87az1o | That's probably because vLLM contains official implementation for qwen-next architecture.
Llama.cpp implementation comes community contributions and is probably not as optimized yet. | 3 | 0 | 2026-03-02T09:26:19 | tarruda | false | null | 0 | o87az1o | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87az1o/ | false | 3 |
t1_o87awmx | Later we'll get additional codetune models(based on 3.5 models) from them. | 12 | 0 | 2026-03-02T09:25:39 | pmttyji | false | null | 0 | o87awmx | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o87awmx/ | false | 12 |
t1_o87avax | You’d be better off with control vector or doing soft-prompting. | 1 | 0 | 2026-03-02T09:25:17 | sergeant113 | false | null | 0 | o87avax | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o87avax/ | false | 1 |
t1_o87at1z | 2x mi50 16gb
Qwen-3.5-35B-A3B-GGUF Q4_K_M -c 132k -ctk q8_0 ctv q8_0 -fa on ub=96 b=2048 PP: ~201 tok/s TG: ~42.1 tok/s. That was with sending full context prompt no cache.
I pushed it to 196k context full prompt and it dropped down to about 33 tok/s but PP stayed about the same.
Neither of those were measuring ... | 2 | 0 | 2026-03-02T09:24:39 | Varstael | false | null | 0 | o87at1z | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o87at1z/ | false | 2 |
t1_o87amo6 | Just started using Open WebUI and I've experienced it too.
Surprised me since LM Studio doesn't behave this way.
Even when a job is done processing on the frontend, the GPU continues for a little while. Looks like it's an issue with Open WebUI and not llama.cpp but please feel free to correct me. | 1 | 0 | 2026-03-02T09:22:53 | Monad_Maya | false | null | 0 | o87amo6 | false | /r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/o87amo6/ | false | 1 |
t1_o87ajtt | [removed] | 1 | 0 | 2026-03-02T09:22:07 | [deleted] | true | null | 0 | o87ajtt | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o87ajtt/ | false | 1 |
t1_o87aj6z | Heretic and classic ablation does not work on Hybrid SSSM, with CoT making it harder as it is. Search up my uploads and research @dealignai on hf. | 2 | 0 | 2026-03-02T09:21:56 | HealthyCommunicat | false | null | 0 | o87aj6z | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87aj6z/ | false | 2 |
t1_o87ailv | What general purpose tasks does it fail at for you? It works well for me and does well on every private benchmark I know of. Not trying to "win" or anything, genuinely curious as a GLM lover. | 1 | 0 | 2026-03-02T09:21:46 | TheRealGentlefox | false | null | 0 | o87ailv | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o87ailv/ | false | 1 |
t1_o87agt6 | Heretic and classic abliteration does not work for these hybrid sssm + CoT models. As far as I know, me and another person who makes the PRISM models are the only ones to get the 122b abliterated, and I’m the only one so far who has a working coherent 397b reap. This took literally days of no sleep and a few thousand d... | 6 | 0 | 2026-03-02T09:21:15 | HealthyCommunicat | false | null | 0 | o87agt6 | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o87agt6/ | false | 6 |
t1_o87a7gn | What was the smallest open-source model you tried this with that gave high accuracy? | 1 | 0 | 2026-03-02T09:18:42 | d41_fpflabs | false | null | 0 | o87a7gn | false | /r/LocalLLaMA/comments/1qz2fra/i_benchmarked_672_return_json_only_calls_strict/o87a7gn/ | false | 1 |
t1_o87a6c9 | Using self-compiled llama.cpp can increase token speed by 3-4 times. | 1 | 0 | 2026-03-02T09:18:23 | Smart-Cap-2216 | false | null | 0 | o87a6c9 | false | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o87a6c9/ | false | 1 |
t1_o87a4jf | I fit everything into VRAM in llama.cpp and it still is 20 times slower. 60 tok/s llama.cpp vs 1500 tok/s vllm, 40 queries. | 4 | 0 | 2026-03-02T09:17:53 | ortegaalfredo | false | null | 0 | o87a4jf | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o87a4jf/ | false | 4 |
t1_o87a1z0 | The sandbox itself is only half the problem. Even inside a locked down container the agent can still make API calls or exfiltrate data through legitimate channels if the prompt gets hijacked. I have been pairing container isolation with runtime behavioral monitoring that watches what the agent actually does with its to... | 1 | 0 | 2026-03-02T09:17:10 | thecanonicalmg | false | null | 0 | o87a1z0 | false | /r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/o87a1z0/ | false | 1 |
t1_o87a1gc | Okay, thank you. However, in Qwen3, there is no restriction on the system, and "Sorry, that didn't work" is inserted into the role of tool answer or user. I want the LLM to be aware that it should no longer make tool calls, but still provide a response to the existing content. | 1 | 0 | 2026-03-02T09:17:02 | SpareAlps6450 | false | null | 0 | o87a1gc | false | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o87a1gc/ | false | 1 |
t1_o879y80 | [removed] | 1 | 0 | 2026-03-02T09:16:09 | [deleted] | true | null | 0 | o879y80 | false | /r/LocalLLaMA/comments/1njkm96/any_good_voice_dubbing_software_for_audiovideo/o879y80/ | false | 1 |
t1_o879nx3 | I'm glad it helped!
The reason why it makes such a big performance difference is because it makes sure to load all the active parameters on the GPU. This takes advantage of the MoE architecture.
llama.cpp has launch parameters to configure that as well ("--cpu-moe" and "--n-cpu-moe N")
There is also the "--fit" para... | 1 | 0 | 2026-03-02T09:13:16 | kke12 | false | null | 0 | o879nx3 | false | /r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o879nx3/ | false | 1 |
t1_o879mgt | **First look at copaw**
[https://www.reddit.com/r/copaw/comments/1rinb9h/first\_look\_at\_copaw\_opensource\_personal\_ai/](https://www.reddit.com/r/copaw/comments/1rinb9h/first_look_at_copaw_opensource_personal_ai/)
Still misses
\- Mutli agent set up out of the box \[ should be possible with agent scope, but no ... | 7 | 0 | 2026-03-02T09:12:52 | FortiCore | false | null | 0 | o879mgt | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o879mgt/ | false | 7 |
t1_o879ly4 | you are opening pandora box of disappointment and wasted time and money | 20 | 0 | 2026-03-02T09:12:44 | Low-Opening25 | false | null | 0 | o879ly4 | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o879ly4/ | false | 20 |
t1_o879kwj | Almost every model only supports system prompt as the first field... System prompts are the prefix, then user<->assistant<->tools "ping-pong"
Just keep track in your code of how many times in a row a specific tool was called, with what args. If it keeps using the same tool with the same args and errors X times, hard-a... | 1 | 0 | 2026-03-02T09:12:26 | MaxKruse96 | false | null | 0 | o879kwj | false | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o879kwj/ | false | 1 |
t1_o879c53 | Depends on how much money they have, but any M3 Max (64GB contiguous memory or more), or DGX Spark with 128GB (or their cheaper equivalents) will let you run Qwen3.5 with 30B or 122B (depending on memory). This model plus RAG (retrieval of your local documents) plus some UI (e.g. Open WebUI) should be sufficient. | 0 | 0 | 2026-03-02T09:09:57 | breksyt | false | null | 0 | o879c53 | false | /r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/o879c53/ | false | 0 |
t1_o879blr | soory,I don't quite understand what you mean. Currently, I am using LLM for function calls, but when encountering difficult problems, the LLM might make multiple tool calls and still fail to solve the issue, potentially leading to infinite tool calls. | 1 | 0 | 2026-03-02T09:09:48 | SpareAlps6450 | false | null | 0 | o879blr | false | /r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/o879blr/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.