name | body | score | controversiality | created | author | collapsed | edited | gilded | id | locked | permalink | stickied | ups |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7z1333 | Even the Shisa AI fine-tunes? | 1 | 0 | 2026-03-01T00:41:07 | TheLegendOfKitty123 | false | null | 0 | o7z1333 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7z1333/ | false | 1 |
t1_o7z12xn | Using the IQ4_KSS quant (4.245 bpw) from https://huggingface.co/ubergarm/Qwen3-Coder-Next-GGUF, I can get 1395 t/s prefill on ik_llama.cpp, with a 3090 eGPU connected via oculink (PCIe 4.0 x4), using a batch size of 16384 and a prompt that consists of 16324 tokens. | 1 | 0 | 2026-03-01T00:41:06 | notdba | false | null | 0 | o7z12xn | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7z12xn/ | false | 1 |
t1_o7z11fu | Yes, which is why we built Maple AI: full AI chat with privacy. | 1 | 0 | 2026-03-01T00:40:52 | marks_ftw | false | null | 0 | o7z11fu | false | /r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/o7z11fu/ | false | 1 |
t1_o7z0rcr | Thanks for your response. May I try your app? I found it on GitHub. What would be the right model to use for it? | 1 | 0 | 2026-03-01T00:39:15 | shit_99 | false | null | 0 | o7z0rcr | false | /r/LocalLLaMA/comments/1rhikjv/does_anyone_know_about_this_app/o7z0rcr/ | false | 1 |
t1_o7z0m1p | Maple AI | 1 | 0 | 2026-03-01T00:38:24 | marks_ftw | false | null | 0 | o7z0m1p | false | /r/LocalLLaMA/comments/1px83ji/best_api_providers_for_data_privacey_if_you_cant/o7z0m1p/ | false | 1 |
t1_o7z0f95 | No -- gemma 3 is amazing. It's just a year old. | 1 | 0 | 2026-03-01T00:37:18 | _raydeStar | false | null | 0 | o7z0f95 | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7z0f95/ | false | 1 |
t1_o7z0elo | Any tutorial or blog post you would suggest for making this transition? | 1 | 0 | 2026-03-01T00:37:12 | nborwankar | false | null | 0 | o7z0elo | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o7z0elo/ | false | 1 |
t1_o7z088y | >Unless you just end up writing your own OS that does all of that, and at that point you'd be better off running Gentoo with a customized kernel and just the strict packages required to load and run models.
Or, you know, stock Debian. 😅 | 2 | 0 | 2026-03-01T00:36:11 | corruptboomerang | false | null | 0 | o7z088y | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z088y/ | false | 2 |
t1_o7z02gm | Here is why I call total bullshit on these posts where people claim their agent was so intelligent that it went out, taught itself something in the background, anticipated the end user's wishes, and just figured it out on its own.
A simple request to openclaw:
"every 30 seconds I want you to post "openclaw agents... | 1 | 0 | 2026-03-01T00:35:15 | Superb_Situation9623 | false | null | 0 | o7z02gm | false | /r/LocalLLaMA/comments/1rbjxpv/i_think_openclaw_is_overhyped_just_use_skills/o7z02gm/ | false | 1 |
t1_o7z00kk | That's very cool | 1 | 0 | 2026-03-01T00:34:57 | bartskol | false | null | 0 | o7z00kk | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7z00kk/ | false | 1 |
t1_o7yzwmi | Hey guys,
So here's another iteration and redesign (once again). I've tried so many different ways to present the models in a manner that lets us have a lot of models and still keep things somewhat organized; the best layout I found was this:
- Moved taskbar/dock to left
- Redesigned downloads menu
- New mode... | 4 | 0 | 2026-03-01T00:34:18 | fredconex | false | null | 0 | o7yzwmi | false | /r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o7yzwmi/ | false | 4 |
t1_o7yzway | Just took a quick nap. Woke up and realized this is exactly what I've been looking for. Gonna go mess around with it for a bit. | 1 | 0 | 2026-03-01T00:34:15 | Thin-Effect-3926 | false | null | 0 | o7yzway | false | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7yzway/ | false | 1 |
t1_o7yzu4z | At the moment this is pure x86. Nothing else will run it | 6 | 0 | 2026-03-01T00:33:54 | Electrical_Ninja3805 | false | null | 0 | o7yzu4z | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yzu4z/ | false | 6 |
t1_o7yztvb | Not really getting what the use case for the 27B or even the 122B model is. All the benchmarks point to the 35B model being nearly as good as either of them, but it's quite a bit faster.
Of course, real-world usage could prove the 35B version is incapable of one-shotting some prompts that the 27B or 122B can one-shot. | 1 | 0 | 2026-03-01T00:33:52 | ga239577 | false | null | 0 | o7yztvb | false | /r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/o7yztvb/ | false | 1 |
t1_o7yzlmv | That's interesting. | 1 | 0 | 2026-03-01T00:32:32 | Iory1998 | false | null | 0 | o7yzlmv | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yzlmv/ | false | 1 |
t1_o7yzkvi | This is literally a binary running directly on hardware. There is no kernel, just a UEFI binary running in ring 0 with full hardware access. | 6 | 0 | 2026-03-01T00:32:25 | Electrical_Ninja3805 | false | null | 0 | o7yzkvi | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yzkvi/ | false | 6 |
t1_o7yzhbb | What the heck is going on? Why is the US turning hostile against Anthropic? Just because they refused to support the military industry? As any privately owned company should. I might get on Anthropic's side for once! | 1 | 0 | 2026-03-01T00:31:50 | FriskyFennecFox | false | null | 0 | o7yzhbb | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yzhbb/ | false | 1 |
t1_o7yzh27 | In 1984, they were developing new speak so you couldn't think thought crimes anymore. Maybe we can develop a language that prevents these issues better | 2 | 0 | 2026-03-01T00:31:47 | OldHamburger7923 | false | null | 0 | o7yzh27 | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7yzh27/ | false | 2 |
t1_o7yzbxt | Other than the RAM saving, and the nightmare of writing everything from scratch? No... this is purely stripping things down to the bare essentials to see if I can. At the end of the day, to get things like GPU support I am likely better off adding something like Tiny Core to make that happen, which will likely be added in ... | 12 | 0 | 2026-03-01T00:30:57 | Electrical_Ninja3805 | false | null | 0 | o7yzbxt | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yzbxt/ | false | 12 |
t1_o7yz8w8 | I'll be trying it today. The dense one should be smarter than the MoE one. I saw that the intelligence index benchmarked by an independent team scored 42 for the dense model, matching much bigger models like MiniMax-M2.5 (230B), DeepSeek V3.2 (685B), and GLM-4.7 (357B).
But to comfortably run my agentic setup on a co... | 8 | 0 | 2026-03-01T00:30:27 | luke_pacman | false | null | 0 | o7yz8w8 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yz8w8/ | false | 8 |
t1_o7yyz1v | hasn't worked for me yet. I tried the full template on the website and just appending "Output function calls as JSON" on two different quantizations, including q6. all calls failed. I'm using LMStudio on Windows. I really like this model, the speed and quality of outputs are very decent. (the speed is fantastic). Is th... | 1 | 0 | 2026-03-01T00:28:50 | FigZestyclose7787 | false | null | 0 | o7yyz1v | false | /r/LocalLLaMA/comments/1rdi26s/liquid_ai_releases_lfm224ba2b/o7yyz1v/ | false | 1 |
t1_o7yyytj | i'd consider Strix Halo systems as well; if you don't get lucky enough for the 128 GB Mac Studio, they're a cheaper (but lower bandwidth) way to get 128 GB of unified memory. my GMKtec EVO-X2 is definitely louder than a Studio at full burn but i only notice it because it's on top of my desk instead of underneath. | 1 | 0 | 2026-03-01T00:28:48 | HopePupal | false | null | 0 | o7yyytj | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7yyytj/ | false | 1 |
t1_o7yyy14 | What architectures is this not compatible with? Apple Silicon? Legacy hardware from the 90s? I know it's running on a laptop that seems to be Coffee Lake era, so I'm not quite sure about the compatibility | 1 | 0 | 2026-03-01T00:28:40 | TinFoilHat_69 | false | null | 0 | o7yyy14 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yyy14/ | false | 1 |
t1_o7yyvi2 | You can probably use something like [pdfminer.six](https://pypi.org/project/pdfminer.six/) to extract the text from the PDF and then send it to the bot in Python. Then catch the output for what the bot thinks should be the name, etc.
If you have to do vision analysis then you would have to look at vision models like t... | 1 | 0 | 2026-03-01T00:28:15 | SM8085 | false | null | 0 | o7yyvi2 | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o7yyvi2/ | false | 1 |
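A minimal sketch of this suggestion in Python, assuming a text-based PDF and a local OpenAI-compatible endpoint (the file name, port, and prompts are illustrative):

```python
from pdfminer.high_level import extract_text
import requests

# Pull the raw text layer out of the PDF (no OCR; text-based PDFs only).
text = extract_text("document.pdf")

# Ask a local OpenAI-compatible server (llama.cpp's llama-server, LM Studio,
# etc.) to pick out the fields of interest.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",
        "messages": [
            {"role": "system", "content": "Extract the document's title, author, and date as JSON."},
            {"role": "user", "content": text[:8000]},  # stay within the context budget
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```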
t1_o7yys7l | Tokenization is the longest part? No it’s not. Unless you’re saying something else or confusing tokenization with prefill | 1 | 0 | 2026-03-01T00:27:43 | StardockEngineer | false | null | 0 | o7yys7l | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yys7l/ | false | 1 |
t1_o7yyqni | You can't not have a kernel lol. Also I don't see this being any faster. | -5 | 0 | 2026-03-01T00:27:28 | CondiMesmer | false | null | 0 | o7yyqni | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yyqni/ | false | -5 |
t1_o7yyppv | I access my server using Tailscale VPN; nothing is exposed to a public IP | 1 | 0 | 2026-03-01T00:27:19 | allpowerfulee | false | null | 0 | o7yyppv | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o7yyppv/ | false | 1 |
t1_o7yyolf | I am still waiting for R2.
R1 introduced CoT and MoE architecture and everyone immediately copied DeepSeek. | 1 | 0 | 2026-03-01T00:27:08 | mlhher | false | null | 0 | o7yyolf | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7yyolf/ | false | 1 |
t1_o7yynyy | next prbly assclaw | 1 | 0 | 2026-03-01T00:27:02 | Grindora | false | null | 0 | o7yynyy | false | /r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o7yynyy/ | false | 1 |
t1_o7yyez3 | The planning vs execution split you're seeing between Qwen and Devstral maps to what I've noticed too: Qwen documents heavily and over-engineers given free rein, Devstral just ships. For repo-level agentic work, Devstral's Mistral lineage means better instruction-following under constrained context.
The real test is ... | 1 | 0 | 2026-03-01T00:25:35 | Joozio | false | null | 0 | o7yyez3 | false | /r/LocalLLaMA/comments/1rg41ss/qwen35_27b_vs_devstral_small_2_nextjs_solidity/o7yyez3/ | false | 1 |
t1_o7yyciu | Local models don't have security risks unless you expose them to internet. I recommend llama.cpp instead of ollama. If you use agentic frameworks, some of them send telemetry, use open source ones and turn them off. If you give the model computer access, sandbox it so that it doesn't mess with your computer. (It does i... | 5 | 0 | 2026-03-01T00:25:11 | Several-Tax31 | false | null | 0 | o7yyciu | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o7yyciu/ | false | 5 |
t1_o7yybto | Wait, Gemma 3 is bad now? I still use it since it's such a good model for its size.
The ones I use most are Gemma 3, Qwen3 (haven't upgraded to 3.5 yet), and GLM 4.7 Flash | 0 | 0 | 2026-03-01T00:25:05 | redditorialy_retard | false | null | 0 | o7yybto | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7yybto/ | false | 0 |
t1_o7yyai8 | the risk_level field is a good start, but having a policy layer decide reversibility instead of just prompting the agent helps a lot in prod - peta (peta.io) is building exactly this for MCP: policy-based approvals and audit trail on top of tool calls. | 2 | 0 | 2026-03-01T00:24:52 | BC_MARO | false | null | 0 | o7yyai8 | false | /r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o7yyai8/ | false | 2 |
t1_o7yy9zk | That is hilarious! They'll try anything to save compute, haha | 1 | 0 | 2026-03-01T00:24:47 | Revolutionalredstone | false | null | 0 | o7yy9zk | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yy9zk/ | false | 1 |
t1_o7yy9vm | It's a UEFI app written in C; it boots directly into an inference engine, no OS, no kernel. The ML runtime is called Foundry, my own from-scratch tensor/inference library written in pure C with zero deps. | 7 | 0 | 2026-03-01T00:24:46 | Electrical_Ninja3805 | false | null | 0 | o7yy9vm | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yy9vm/ | false | 7 |
t1_o7yy6r2 | env vars in your claude profile.
```
"env": {
    "ANTHROPIC_API_KEY": "na",
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:4444",
    "ANTHROPIC_MODEL": "Qwen3.5-35B-A3B-MXFP4_MOE",
    "ANTHROPIC_SMALL_FAST_MODEL": "Qwen3.5-35B-A3B-MXFP4_MOE"
}
``` | 1 | 0 | 2026-03-01T00:24:16 | Diecron | false | null | 0 | o7yy6r2 | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yy6r2/ | false | 1 |
t1_o7yy4hi | Super cool! | 1 | 0 | 2026-03-01T00:23:54 | sdfgeoff | false | null | 0 | o7yy4hi | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yy4hi/ | false | 1 |
t1_o7yy0fw | Sam Altman has made a deal with the USA Gov:
[https://x.com/sama/status/2027578508042723599](https://x.com/sama/status/2027578508042723599) | 1 | 0 | 2026-03-01T00:23:15 | ViperAICSO | false | null | 0 | o7yy0fw | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7yy0fw/ | false | 1 |
t1_o7yy0cu | I would be *thrilled* if these features made it into the official application somehow - or at least as "relatively easy to setup and not super hacky addons" | 1 | 0 | 2026-03-01T00:23:14 | overand | false | null | 0 | o7yy0cu | false | /r/LocalLLaMA/comments/1q9by7w/preview_logprobs_in_open_webui/o7yy0cu/ | false | 1 |
t1_o7yxwt0 | I would consider a proper open source solution like PocketPal.
I also have my own app, ChatterUI, but it's more RP-focused. | 7 | 0 | 2026-03-01T00:22:39 | ----Val---- | false | null | 0 | o7yxwt0 | false | /r/LocalLLaMA/comments/1rhikjv/does_anyone_know_about_this_app/o7yxwt0/ | false | 7 |
t1_o7yxvp4 | A2A + MCP for local orchestration is underexplored territory. What I found useful: separate the planning agent from the execution agents early, otherwise uncensored models drift on long chains. Also worth scoping what "autonomous" means - full autonomy vs human-in-loop on irreversible actions.
Wrote up patterns from ... | 1 | 0 | 2026-03-01T00:22:28 | Joozio | false | null | 0 | o7yxvp4 | false | /r/LocalLLaMA/comments/1rhfrvi/has_anyone_built_a_fully_local_autonomous_agent/o7yxvp4/ | false | 1 |
t1_o7yxsqz | No, it's the longest part on a short response sometimes. | 2 | 0 | 2026-03-01T00:21:59 | aseichter2007 | false | null | 0 | o7yxsqz | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yxsqz/ | false | 2 |
t1_o7yxnpt | I suspect that virtually ALL labs have distilled their models at some point on DeepSeek. For some I will not name it is blatantly obvious if you deeply analyzed the thinking process of R1 (and R1-0528). | 2 | 0 | 2026-03-01T00:21:10 | mlhher | false | null | 0 | o7yxnpt | false | /r/LocalLLaMA/comments/1rcvimv/distillation_when_you_do_it_training_when_we_do_it/o7yxnpt/ | false | 2 |
t1_o7yxmpg | Install Google Antigravity on your laptop (there are other options too, this one has decent free quota and not super expensive if you need more). Ask it to create a toy version of your app. Say if the app is to annotate a video with drawing and share it online, ask for an app to just annotate using colored brushes and ... | 1 | 0 | 2026-03-01T00:21:00 | catplusplusok | false | null | 0 | o7yxmpg | false | /r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7yxmpg/ | false | 1 |
t1_o7yxm5t | Don’t quantize the context. It’s already 1/3 of the normal size with their deltanet attention. It’s also more sensitive to quantization. Just saying | 2 | 0 | 2026-03-01T00:20:55 | Pixer--- | false | null | 0 | o7yxm5t | false | /r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7yxm5t/ | false | 2 |
t1_o7yxklf | I'm more excited about the mid-size Qwens, 27B and 35B, now that both are starting to appear in "heretic" and "abliterated" form. | 1 | 0 | 2026-03-01T00:20:39 | MushroomCharacter411 | false | null | 0 | o7yxklf | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yxklf/ | false | 1 |
t1_o7yxj7l | Are there any performance benefits running something like that instead of something like Tiny Core Linux? | 4 | 0 | 2026-03-01T00:20:26 | Pkittens | false | null | 0 | o7yxj7l | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yxj7l/ | false | 4 |
t1_o7yxj3v | Thanks for the response! | 1 | 0 | 2026-03-01T00:20:25 | allpowerfulee | false | null | 0 | o7yxj3v | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o7yxj3v/ | false | 1 |
t1_o7yxdcw | 1. Also applies to Anthropic.
2. Qwen3.5 doesn't have /think or /nothink and injection is a problem for every LLM.
3. As it says, applies to everything.
Don't believe everything these tools say just because they say it well. | 3 | 0 | 2026-03-01T00:19:29 | Diecron | false | null | 0 | o7yxdcw | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o7yxdcw/ | false | 3 |
t1_o7yx9yw | ᵠʷᵉⁿ | 0 | 0 | 2026-03-01T00:18:56 | MushroomCharacter411 | false | null | 0 | o7yx9yw | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yx9yw/ | false | 0 |
t1_o7yx7wu | Ah, my bad. Don't know why I am being upvoted, though. Still, this particular instance does not seem to be very accurate, I think; and sadly this is what has become of the internet and all of media ever since LLMs. And as a fellow LLM enthusiast too, I don't want to live in a world of slop. Fake news was already a big issue ... | 1 | 0 | 2026-03-01T00:18:37 | demon_itizer | false | null | 0 | o7yx7wu | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7yx7wu/ | false | 1 |
t1_o7yx6q1 | Not sure how LM Studio implements it, but the web in general is hostile to web fetch. My current workaround is spoofing the user agent when fetching, and that helped me get content from static sites that are not too strict. (See the screenshot of the implementation I mentioned.)
If we don't use computer vision and computer... | 3 | 0 | 2026-03-01T00:18:24 | o0genesis0o | false | null | 0 | o7yx6q1 | false | /r/LocalLLaMA/comments/1rhdzrc/local_llm_agents_blocked_everywhere/o7yx6q1/ | false | 3 |
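A minimal sketch of the user-agent workaround with Python's requests; the UA string is just one example of a browser-like agent:

```python
import requests

# Many static sites return 403 to the default "python-requests/x.y" agent
# but accept a browser-like one.
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36"
    )
}

def fetch(url: str) -> str:
    resp = requests.get(url, headers=HEADERS, timeout=15)
    resp.raise_for_status()  # still fails on sites with real bot protection
    return resp.text
```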
t1_o7yx6j4 | I guess you wanted to write "guys". You can also use "folks". | 21 | 0 | 2026-03-01T00:18:22 | markole | false | null | 0 | o7yx6j4 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yx6j4/ | false | 21 |
t1_o7yx5eh | Yes, hopefully. I don't exactly have people throwing money at me to build it, so it will happen when I get around to it. | 6 | 0 | 2026-03-01T00:18:11 | Electrical_Ninja3805 | false | null | 0 | o7yx5eh | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yx5eh/ | false | 6 |
t1_o7yx4s2 | Whatever you do I would suggest not to use Ollama. Even the closed source LM Studio is significantly better for your learning experience than Ollama. | 1 | 0 | 2026-03-01T00:18:05 | mlhher | false | null | 0 | o7yx4s2 | false | /r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7yx4s2/ | false | 1 |
t1_o7yx4q8 | While Ollama might be “easier” when first starting, it’s also *significantly* slower and less reliable than llama.cpp. llama-swap takes a little reading to figure out, but is much more reliable, flexible, and faster. Anyone who started with Ollama should make the effort to ditch it after no more than a week or two on... | 3 | 0 | 2026-03-01T00:18:04 | suicidaleggroll | false | null | 0 | o7yx4q8 | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o7yx4q8/ | false | 3 |
t1_o7yx4i5 | Claude Code's agentic loop sends tool call chains with tight latency expectations. A 35B-A3B at Q4 on a single local machine will stall at inference time - the model isn't the problem, throughput is.
Try LiteLLM as a proxy between Ollama and Claude Code: it lets you tune timeouts per tool call. Also disable extended ... | 3 | 0 | 2026-03-01T00:18:02 | Joozio | false | null | 0 | o7yx4i5 | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yx4i5/ | false | 3 |
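A minimal sketch of that routing idea using LiteLLM's Python SDK rather than the proxy; the model tag, port, and timeout values here are assumptions to tune, not recommendations:

```python
import litellm

# Route a request to a local Ollama model with an explicit per-call timeout,
# so slow Q4 prefill surfaces as a retryable error instead of a silent hang.
resp = litellm.completion(
    model="ollama/qwen3.5-35b-a3b",          # hypothetical local model tag
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "List the files changed in this diff."}],
    timeout=120,      # seconds; generous for Q4 prefill on a single machine
    num_retries=1,
)
print(resp.choices[0].message.content)
```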
t1_o7yx2py | Please do, and please tell me your conclusion after your journey. From what I see, Qwen's latest models beat everything else at the same model size. | 1 | 0 | 2026-03-01T00:17:45 | Zundrium | false | null | 0 | o7yx2py | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7yx2py/ | false | 1 |
t1_o7ywzal | I genuinely love this model. It seems as competent (when it isn't tripped up) as Qwen3 Coder Next, but at less than half the size.
Though it's important to note that it is significantly easier to trip up and confuse than Qwen3 Coder Next, which is a simple result of the "mere" 35B vs the 80B.
Then again for what its ... | 1 | 0 | 2026-03-01T00:17:11 | mlhher | false | null | 0 | o7ywzal | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ywzal/ | false | 1 |
t1_o7ywyeq | What architecture is this | 2 | 0 | 2026-03-01T00:17:02 | TinFoilHat_69 | false | null | 0 | o7ywyeq | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ywyeq/ | false | 2 |
t1_o7ywwnu | Pen and paper is nice but I prefer to do all my matmul with a computer powered entirely via hand cranks.
God my arm hurts - but once that first token comes in next month it'll all be worth it.
https://preview.redd.it/ispn16ustbmg1.jpeg?width=400&format=pjpg&auto=webp&s=c9d6cb749073c8945bce232dd10b74d42e07d179 | 15 | 0 | 2026-03-01T00:16:45 | RoyalCities | false | null | 0 | o7ywwnu | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ywwnu/ | false | 15 |
t1_o7ywso1 | I'm using LangGraph for orchestration, so the workflow defines which model handles each step. Outputs from previous steps are fed back into context for the model to decide what to do next, though this requires some context engineering to keep things tight and avoid quality/speed degradation from overly long contexts, e... | 5 | 0 | 2026-03-01T00:16:06 | luke_pacman | false | null | 0 | o7ywso1 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7ywso1/ | false | 5 |
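A minimal sketch of that kind of LangGraph workflow, where each step feeds only its distilled output forward (the endpoint, model name, node names, and prompts are illustrative):

```python
from typing import TypedDict

import requests
from langgraph.graph import StateGraph, START, END

def call_llm(prompt: str) -> str:
    # Hypothetical local OpenAI-compatible endpoint (llama.cpp, LM Studio, etc.).
    r = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={"model": "local", "messages": [{"role": "user", "content": prompt}]},
    )
    return r.json()["choices"][0]["message"]["content"]

class State(TypedDict):
    task: str
    plan: str
    result: str

def plan_step(state: State) -> dict:
    # The planner's output is the only thing carried forward.
    return {"plan": call_llm(f"Plan the steps for: {state['task']}")}

def execute_step(state: State) -> dict:
    # The executor sees the plan, not the full planning transcript,
    # which keeps the context tight.
    return {"result": call_llm(f"Execute this plan:\n{state['plan']}")}

graph = StateGraph(State)
graph.add_node("plan", plan_step)
graph.add_node("execute", execute_step)
graph.add_edge(START, "plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", END)
app = graph.compile()
print(app.invoke({"task": "summarize the repo", "plan": "", "result": ""}))
```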
t1_o7ywnxf | This is the right mental model - pause at boundaries, not everywhere. The real challenge is defining what counts as irreversible at runtime. File edits? Reversible. Emails sent? Not. External API calls that charge money? Definitely. Most agent frameworks lump these together.
Your risk_level field is a good start but... | 2 | 0 | 2026-03-01T00:15:19 | Joozio | false | null | 0 | o7ywnxf | false | /r/LocalLLaMA/comments/1rhgzvs/built_a_lightweight_approval_api_for_llm_agents/o7ywnxf/ | false | 2 |
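A minimal sketch of such a policy layer, with hypothetical tool names; the point is that reversibility is decided by a table the agent cannot edit, not by the agent's own self-reported risk_level:

```python
from enum import Enum

class Risk(Enum):
    REVERSIBLE = "reversible"        # e.g. file edits inside the workspace
    IRREVERSIBLE = "irreversible"    # e.g. emails, anything observed externally
    COSTLY = "costly"                # e.g. API calls that charge money

# Policy keyed on tool name (hypothetical tools), not on agent claims.
POLICY = {
    "write_file": Risk.REVERSIBLE,
    "send_email": Risk.IRREVERSIBLE,
    "charge_card": Risk.COSTLY,
}

def needs_approval(tool_name: str) -> bool:
    # Unknown tools fail closed and require a human.
    risk = POLICY.get(tool_name, Risk.IRREVERSIBLE)
    return risk is not Risk.REVERSIBLE
```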
t1_o7ywm13 | Yup, absolutely love OpenCode, they have come a long way in the past couple of months.. you can also check out pi, which is the agent under the hood for OpenClaw | 1 | 0 | 2026-03-01T00:15:01 | Dismal_Bit_9879 | false | null | 0 | o7ywm13 | false | /r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/o7ywm13/ | false | 1 |
t1_o7ywk84 | details are in the paper - https://arxiv.org/abs/2503.21758
maybe something new came out since then, but it's massively cheaper than SD-like architectures | 3 | 0 | 2026-03-01T00:14:42 | FullOf_Bad_Ideas | false | null | 0 | o7ywk84 | false | /r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o7ywk84/ | false | 3 |
t1_o7ywi3s | An OS where the LLM is the interface? | 3 | 0 | 2026-03-01T00:14:21 | Emotional-Dust-1367 | false | null | 0 | o7ywi3s | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ywi3s/ | false | 3 |
t1_o7ywfn3 | Maybe a RAG with a small language model like teapot.ai | 1 | 0 | 2026-03-01T00:13:57 | No-Concern-8832 | false | null | 0 | o7ywfn3 | false | /r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7ywfn3/ | false | 1 |
t1_o7yw92b | Perfect, thanks for the response. I grabbed the latest llama.cpp and it's chattering endlessly now. Maybe KCPP just needs an update.
I went through a similar headache a while back where GLM4 32B would generate gibberish... but only when loaded into 2+ GPUs and only with flash attention enabled, so that's why I was ask... | 1 | 0 | 2026-03-01T00:12:51 | GraybeardTheIrate | false | null | 0 | o7yw92b | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7yw92b/ | false | 1 |
t1_o7yw8jz | Yes I used unsloth. I'm currently improving on the benchmark tests and incorporating more complex tasks. I will try out Bartowski's model. | 2 | 0 | 2026-03-01T00:12:46 | do_u_think_im_spooky | false | null | 0 | o7yw8jz | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7yw8jz/ | false | 2 |
t1_o7yw0bp | After tests it appears to be closer to around 70-80. | 1 | 0 | 2026-03-01T00:11:23 | do_u_think_im_spooky | false | null | 0 | o7yw0bp | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7yw0bp/ | false | 1 |
t1_o7yvxwd | The rot is real but the distribution is lopsided. In communities where people use AI as a serious tool - not to generate content, but to accelerate thinking - quality has gone up, not down. The rot is concentrated in places where the bar was already low and the incentive was volume.
A signal filter is the right techn... | 1 | 0 | 2026-03-01T00:10:59 | Joozio | false | null | 0 | o7yvxwd | false | /r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yvxwd/ | false | 1 |
t1_o7yvl64 | Wait. Isn't decoding and encoding tokens extremely cheap anyway? | 0 | 0 | 2026-03-01T00:08:53 | StardockEngineer | false | null | 0 | o7yvl64 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yvl64/ | false | 0 |
t1_o7yv8nw | New forum avatar pic discovered | 5 | 0 | 2026-03-01T00:06:47 | Miserable-Dare5090 | false | null | 0 | o7yv8nw | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yv8nw/ | false | 5 |
t1_o7yv6d0 | I bought my partner the 16GB one for her work, so naturally I would borrow to see what the latest apple M series can do. Tested the following with MLX backend in LM studio:
- GLM 4.7V: it doesn't run too great, and the machine gets hot quickly. But it works. It feels like running Gemma 3 27B on my main rig with a 4060 Ti.
... | 3 | 0 | 2026-03-01T00:06:25 | o0genesis0o | false | null | 0 | o7yv6d0 | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7yv6d0/ | false | 3 |
t1_o7yv2fr | Ahhhhh I see. Thanks. | 1 | 0 | 2026-03-01T00:05:45 | StardockEngineer | false | null | 0 | o7yv2fr | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yv2fr/ | false | 1 |
t1_o7yux70 | Delete llama.cpp and try vLLM | -1 | 0 | 2026-03-01T00:04:55 | Signal_Ad657 | false | null | 0 | o7yux70 | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yux70/ | false | -1 |
t1_o7yuvkc | [deleted] | 1 | 0 | 2026-03-01T00:04:39 | [deleted] | true | null | 0 | o7yuvkc | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yuvkc/ | false | 1 |
t1_o7yuu7k | But not for scamming customers by serving Sonnet as Opus 4.6 at Q2. No one should get fooled by their scam on Max subscriptions. | 1 | 0 | 2026-03-01T00:04:26 | Maximum-Wishbone5616 | false | null | 0 | o7yuu7k | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yuu7k/ | false | 1 |
t1_o7yusuq | The fact that OpenAI is acquiring OpenClaw proves exactly how desperate Scam Altman is for fresh, non-circular cash.
OpenClaw's codebase is absolutely pure vibecoded slop. The only reason it gained traction is either maliciousness or sheer incompetence. It's ridiculously easy to abuse.
The Chinese web is ... | 3 | 0 | 2026-03-01T00:04:13 | mlhher | false | null | 0 | o7yusuq | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yusuq/ | false | 3 |
t1_o7yuq2j | And why this particular model and its architecture? | 2 | 0 | 2026-03-01T00:03:45 | zemondza | false | null | 0 | o7yuq2j | false | /r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o7yuq2j/ | false | 2 |
t1_o7yupgr | > When the next AI trains on those 10,000 bad articles instead of the one original paper the knowledge about CRISPR becomes corrupted
2 points here:
1. You're describing pre-training, where volume comes first. However, it is not literally "download the whole internet" (there are actually many filters for what data ge... | 1 | 0 | 2026-03-01T00:03:40 | Economy_Cabinet_7719 | false | null | 0 | o7yupgr | false | /r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yupgr/ | false | 1 |
t1_o7yuo7w | Been lurking on this sub for some time now. It really does shine above all others in its space.
At least a couple times a week in this sub I'll see someone post something interesting or really useful buried in a comment thread while doomscrolling that forces me to switch to my computer and try it out. It reminds me I ... | 4 | 0 | 2026-03-01T00:03:28 | infectoid | false | null | 0 | o7yuo7w | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yuo7w/ | false | 4 |
t1_o7yunku | qwen 3.5 35b a3b or 27b | 6 | 0 | 2026-03-01T00:03:21 | No-Simple8447 | false | null | 0 | o7yunku | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o7yunku/ | false | 6 |
t1_o7yul00 | No one wants those scammers in EU. | -1 | 0 | 2026-03-01T00:02:55 | Maximum-Wishbone5616 | false | null | 0 | o7yul00 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yul00/ | false | -1 |
t1_o7yuher | Because China cares. | 0 | 0 | 2026-03-01T00:02:18 | Maximum-Wishbone5616 | false | null | 0 | o7yuher | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yuher/ | false | 0 |
t1_o7yugr9 | Try also: Q2_K_XL (unsloth), Q3_K_S or IQ2_M (from bartowski). IQ2_XXS quants are slow on some platforms, including CPU inference (depends on CPU model).
Also: -fit on (and remove ngl and cpu moe).
No guarantees, but worth a shot. | 1 | 0 | 2026-03-01T00:02:12 | xandep | false | null | 0 | o7yugr9 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yugr9/ | false | 1 |
t1_o7yuew1 | Was your testing specifically on 3.5 35B-A3B or another model? | 1 | 0 | 2026-03-01T00:01:53 | jdchmiel | false | null | 0 | o7yuew1 | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7yuew1/ | false | 1 |
t1_o7yudko | Google found it, lol. Shit's been obvious since that feature came out. | 2 | 0 | 2026-03-01T00:01:40 | JeddyH | false | null | 0 | o7yudko | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7yudko/ | false | 2 |
t1_o7yucm7 | 0. Ollama -> if you have no idea and you want to kick-start
1. LM Studio -> indicates which models are going to work on your hardware before you download.
2. Jan.ai
3. Msty
4. Cherry Studio | 1 | 0 | 2026-03-01T00:01:30 | No-Mountain3817 | false | null | 0 | o7yucm7 | false | /r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7yucm7/ | false | 1 |
t1_o7yuafs | Definitely interested in seeing how this goes. The censorship on the new qwen 3.5 is crushing. I use this stuff mostly for text to image prompts and did one for stereotypical redditors with fedoras. Refused. Given that the qwen image models were given a rating as the 2nd to least censored image model, it's rather odd ... | 5 | 0 | 2026-03-01T00:01:09 | Hoodfu | false | null | 0 | o7yuafs | false | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o7yuafs/ | false | 5 |
t1_o7yu8k5 | Can't wait for French support! | 1 | 0 | 2026-03-01T00:00:50 | Adventurous-Paper566 | false | null | 0 | o7yu8k5 | false | /r/LocalLLaMA/comments/1r4sivv/kanitts2_opensource_400m_tts_model_with_voice/o7yu8k5/ | false | 1 |
t1_o7yu8mo | I may have misunderstood what you have done, but from your comments it seems that the system effectively functions as a single LLM with a long context. It is first told "to act like an Agent A." It thinks for a certain number of steps. And then, without changing the internal state of the model, it is told "to act like ... | 4 | 0 | 2026-03-01T00:00:50 | Origin_of_Mind | false | null | 0 | o7yu8mo | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yu8mo/ | false | 4 |
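Under that reading, "passing KV-cache" reduces to reusing one model's past_key_values across role switches. A minimal sketch with Hugging Face transformers, using GPT-2 as a stand-in for the actual model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "Agent A" turn: prefill once and keep the KV-cache.
a_ids = tok("You are Agent A. Plan the task.", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(a_ids, use_cache=True)
past = out.past_key_values

# "Agent B" turn: only the new instruction tokens are processed; the cache
# already holds everything Agent A saw, so nothing is re-encoded. The model's
# internal state is untouched; only the framing text changes.
b_ids = tok(" Now act as Agent B and execute the plan.", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(b_ids, past_key_values=past, use_cache=True)
print(out.logits.shape)  # logits for just the new tokens
```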
t1_o7yu827 | I run my 3090 at 250W and it's a noticeable slowdown, but not extreme. Dropping power further wasn't worth it for me (much more performance dropoff), but power is cheap where I live.
Can you get an AI Max 395 with 128GB for your budget? I've not kept up on their pricing.
The 3090s will for sure be faster and leave yo... | 1 | 0 | 2026-03-01T00:00:45 | 12bitmisfit | false | null | 0 | o7yu827 | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7yu827/ | false | 1 |
t1_o7yu4gd | This. Ask questions to AI first, when you get stuck after asking AI and trying to figure it out ask here. Good people here. | 1 | 0 | 2026-03-01T00:00:09 | Signal_Ad657 | false | null | 0 | o7yu4gd | false | /r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7yu4gd/ | false | 1 |
t1_o7yu44b | I assume all the sub-agents' system prompts are different, right? How does passing KV-cache work in this case? | 10 | 0 | 2026-03-01T00:00:06 | Budget-Juggernaut-68 | false | null | 0 | o7yu44b | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yu44b/ | false | 10 |
t1_o7yu17u | I just spent the past 3 days trying to probe the WiFi hardware by hand. I think he truly could be onto something, but someone would have to train an AI to do it. | 4 | 0 | 2026-02-28T23:59:37 | Electrical_Ninja3805 | false | null | 0 | o7yu17u | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yu17u/ | false | 4 |
t1_o7ytygk | this result is similar to my experience with qw 3 coder next vs qw3.5 27b. qw3 coder next q8 eclipses the qw3.5 27b in all of my tests in both quality and performance | 4 | 0 | 2026-02-28T23:59:10 | anhphamfmr | false | null | 0 | o7ytygk | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7ytygk/ | false | 4 |
t1_o7ytwzx | I knew tech was done when factory floor managers became scrum masters and people with 3-month courses could get a job in the field. First signs started showing when a university degree meant you "knew" stuff. It showed that tech was no longer a field for creative, playful minds but simply a code-spewing assembly job w... | 1 | 0 | 2026-02-28T23:58:55 | Dry_Yam_4597 | false | null | 0 | o7ytwzx | false | /r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ytwzx/ | false | 1 |
t1_o7ytwrl | I think you might've had some issues with your 35b-a3b run.
I'm getting 30 tokens/sec with the 27b at Q5_K_M on my 7900 XTX while getting 102 tokens/sec on AesSedai's quantization of the 35b-a3b model at Q4_K_M.
It's not an order of magnitude but I think it's very significant depending on if you want to rap... | 1 | 0 | 2026-02-28T23:58:53 | Intrepid-Second6936 | false | null | 0 | o7ytwrl | false | /r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o7ytwrl/ | false | 1 |
t1_o7ytugz | Welcome among us! | 1 | 0 | 2026-02-28T23:58:30 | Adventurous-Paper566 | false | null | 0 | o7ytugz | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ytugz/ | false | 1 |