r/LocalLLaMA post dump. Original columns: title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.
**vLLM on the Strix Halo** (dever121, 2026-01-30, score 3)
Hello, I'm trying to figure out how to install vLLM on Strix Halo and I'm having a really hard time. Could someone help?
https://www.reddit.com/r/LocalLLaMA/comments/1qqx9jp/vllm_on_the_strix_halo/
**DGX Spark: how close can you get to Claude Code locally?** (Icy_Foundation3534, 2026-01-30, score 0)
We know the specs, so what's out there that can get somewhere near a Claude Code CLI experience on just the DGX Spark, local only?
https://www.reddit.com/r/LocalLLaMA/comments/1qqx6zd/dgx_spark_how_close_can_you_get_to_claude_code/
**Spent 20 years assessing students. Applied the same framework to LLMs.** (Adhesiveness_Civil, 2026-01-30, score 13)
I've been an assistive tech instructor for 20 years, with a master's in special ed. My whole career has been assessing what learners need, not where they rank.
I applied that to AI models and built AI-SETT: 600 observable criteria across 13 categories. Diagnostic, not competitive. The +0 list (gaps) matters more than the total.
It's grounded in the SETT framework, Cognitive Load Theory, and the Zone of Proximal Development: tools I've used with actual humans for decades.
https://github.com/crewrelay/AI-SETT
Fair warning: this breaks the moment someone makes it a leaderboard.
https://www.reddit.com/r/LocalLLaMA/comments/1qqws3g/spent_20_years_assessing_students_applied_the/
**Is it frowned upon or impossible to use all 4 DIMM RAM slots? What is ntoskrnl.exe?** (Hot_Inspection_9528, 2026-01-30, score 0)
https://www.reddit.com/r/LocalLLaMA/comments/1qqwrvc/is_it_frowned_upon_or_impossible_to_use_all_4/
**I found this LLM inference calculator that helps size hardware before you buy** (DockyardTechlabs, 2026-01-30, score 0)
I found this via a recent YouTube video by Alex Ziskind and thought many of you who are planning to buy hardware would appreciate it. You can select the parameter count, quantization level, context length, and other options. What I like most is that it doesn't rely on a pre-filled model list, which I think would limit estimates for newer models.
Link: https://llm-inference-calculator-rki02.kinsta.page/
https://www.reddit.com/r/LocalLLaMA/comments/1qqwod1/i_found_this_llm_inference_calculator_helps_size/
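For a rough sense of what a calculator like the one linked above computes, here is a minimal back-of-the-envelope sketch. This is not the linked tool's actual formula; the layer/head defaults are illustrative assumptions you would look up per model:

```python
def estimate_vram_gb(params_b, bits=4, layers=32, kv_heads=8,
                     head_dim=128, ctx=8192, kv_bits=16):
    """Rough VRAM estimate: quantized weights + KV cache + ~10% overhead."""
    weights = params_b * 1e9 * bits / 8                         # weight bytes
    kv = 2 * layers * kv_heads * head_dim * ctx * kv_bits / 8   # K and V caches
    return (weights + kv) * 1.1 / 1e9

# e.g. an 8B model at Q4 with 8k context -> roughly 5.6 GB
print(round(estimate_vram_gb(8), 1))
```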
**How to change the LM Studio app home directory?** (Glad-Audience9131, 2026-01-30, score 0)
I want to change the app home directory, not only the model download directory, because my user home is already too big and I have limited free space. Is this possible?
https://www.reddit.com/r/LocalLLaMA/comments/1qqw4wc/how_to_change_lm_studio_app_home_directory/
**GLM 4.7 Flash 30B PRISM + web search: very solid** (My_Unbiased_Opinion, 2026-01-30, score 134)
Just got this set up yesterday. I have been messing around with it and I am extremely impressed. I find it very efficient in reasoning compared to Qwen models. The model is quite uncensored, so I'm able to research any topic, and it is quite thorough.
Its knowledge is definitely less than 120B Derestricted, but once web-search RAG is involved, I find the 30B model generally superior, with far fewer soft refusals. Since the model has web access, the base knowledge deficit feels mitigated.
Running it in the latest LM Studio beta + Open WebUI. Y'all gotta try it.
https://www.reddit.com/r/LocalLLaMA/comments/1qqw3ov/glm_47_flash_30b_prism_web_search_very_solid/
**SecureShell: plug-and-play terminal security for LLM agents** (MoreMouseBites, 2026-01-30, score 0)
# What SecureShell Does
SecureShell is an open-source, plug-and-play **terminal safety layer** for LLM agents. It blocks **dangerous** or **hallucinated commands**, enforces **configurable protections**, and requires agents to justify commands with valid reasoning before execution.
As agents become more autonomous, they’re increasingly given direct access to shells, filesystems, and system tools. Projects like ClawdBot make this trajectory very clear: locally running agents with persistent system access, background execution, and broad privileges. In that setup, a single prompt injection, malformed instruction, or tool misuse can translate directly into real system actions. Prompt-level guardrails stop being a meaningful security boundary once the agent is already inside the system.
SecureShell adds a **zero-trust gatekeeper** between the agent and the OS. Commands are intercepted before execution, evaluated for risk and correctness, challenged if unsafe, and only allowed through if they meet defined safety constraints. The agent itself is treated as an untrusted principal.
https://preview.redd.it/x18tiv217dgg1.png?width=1280&format=png&auto=webp&s=219c46010d710ae2e9a20b749f1fb912252dba44
# Core Features
SecureShell is designed to be lightweight and infrastructure-friendly:
* Intercepts all shell commands generated by agents
* Risk classification (safe / suspicious / dangerous)
* Blocks or constrains unsafe commands before execution
* Platform-aware (Linux / macOS / Windows)
* YAML-based security policies and templates (development, production, paranoid, CI)
* Prevents common foot-guns (destructive paths, recursive deletes, etc.)
* Returns structured feedback so agents can retry safely
* Drops into existing stacks (LangChain, MCP, local agents, provider SDKs)
* Works with both local and hosted LLMs
# Installation
SecureShell is available as both a Python and JavaScript package:
* Python: `pip install secureshell`
* JavaScript / TypeScript: `npm install secureshell-ts`
# Target Audience
SecureShell is useful for:
* Developers building local or self-hosted agents
* Teams experimenting with ClawdBot-style assistants or similar system-level agents
* LangChain / MCP users who want execution-layer safety
* Anyone concerned about prompt injection once agents can execute commands
# Goal
The goal is to make **execution-layer controls** a default part of agent architectures, rather than relying entirely on prompts and trust.
If you’re running agents with real system access, I’d love to hear what failure modes you’ve seen or what safeguards you’re using today.
GitHub:
https://github.com/divagr18/SecureShell
https://www.reddit.com/r/LocalLLaMA/comments/1qqw04v/secureshell_plugandplay_terminal_security_for_llm/
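To make the execution-layer idea concrete, here is a minimal sketch of a command gatekeeper with structured feedback. This is a toy deny-list for illustration only, not SecureShell's actual classifier or API:

```python
import re
import shlex
import subprocess

# Toy deny-list; real tools use richer, platform-aware policies.
DANGEROUS = [
    r"\brm\s+-[^ ]*r[^ ]*f|\brm\s+-[^ ]*f[^ ]*r",  # recursive force delete
    r"\bmkfs(\.\w+)?\b",                            # filesystem format
    r"\bdd\s+.*of=/dev/",                           # raw writes to block devices
    r":\(\)\s*\{.*\};\s*:",                         # classic fork bomb
]

def classify(cmd: str) -> str:
    return "dangerous" if any(re.search(p, cmd) for p in DANGEROUS) else "safe"

def guarded_run(cmd: str, reason: str) -> dict:
    """Reject risky commands; return structured feedback so the agent can retry."""
    if classify(cmd) == "dangerous":
        return {"allowed": False,
                "feedback": f"Blocked by policy. Justification given: {reason!r}"}
    done = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    return {"allowed": True, "stdout": done.stdout, "stderr": done.stderr}

print(guarded_run("rm -rf /", "cleaning up"))   # -> blocked with feedback
```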
**5060 Ti 16GB for offline image/video generation and local AI** (Mountainking7, 2026-01-30, score 1)
I have a GTX 1650 Super with 6GB. I don't game much, and the 1650 more than fits those needs. However, for image generation, edits, or AI video work, it is painfully slow.
Would the 5060 Ti be OK, or is it better to wait one more generation before upgrading? I'm not considering AMD, as those workloads work better on NVIDIA.
Thanks.
https://www.reddit.com/r/LocalLLaMA/comments/1qqvms2/5060_ti_16gb_for_offline_imagevideo_generation/
**GitHub - TrevorS/qwen3-tts-rs: pure Rust implementation of Qwen3-TTS speech synthesis** (adefa, 2026-01-30, score 39)
I love pushing these coding platforms to their (my? our?) limits!
This time I ported the new Qwen3 TTS model to Rust using Candle: https://github.com/TrevorS/qwen3-tts-rs
It took a few days to get the first intelligible audio, but eventually voice cloning and voice design were working as well. I was never able to get in-context learning (ICL) to work, either with the original Python code or with this library.
I've tested that CPU, CUDA, and Metal all work. Check it out, peek at the code, and let me know what you think!
P.S. A new (to me) Claude Code trick: when working on a TTS model, write a skill that runs the output through speech-to-text to verify the results. :)
https://www.reddit.com/r/LocalLLaMA/comments/1qqvb79/github_trevorsqwen3ttsrs_pure_rust_implementation/
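The P.S. describes a neat verification loop: synthesize, transcribe, compare. A minimal sketch of that check, assuming faster-whisper is installed and the TTS output is already on disk (the file name is illustrative):

```python
import difflib

from faster_whisper import WhisperModel  # assumption: pip install faster-whisper

def verify_tts(wav_path: str, expected: str) -> float:
    """Transcribe generated speech and score it against the intended script."""
    stt = WhisperModel("base", compute_type="int8")
    segments, _info = stt.transcribe(wav_path)
    heard = " ".join(seg.text.strip() for seg in segments)
    # 1.0 means the round-trip reproduced the script exactly (ignoring case)
    return difflib.SequenceMatcher(None, expected.lower(), heard.lower()).ratio()

print(verify_tts("output.wav", "Hello from the Rust port of Qwen3-TTS."))
```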
**I forced a 1GB Llama model to follow strict Rust rules using a biological memory graph. It actually works.** (ChikenNugetBBQSauce, 2026-01-30, score 0)
https://www.reddit.com/r/LocalLLaMA/comments/1qquytb/i_forced_a_1gb_llama_model_to_follow_strict_rust/
**What are the better vision-based video summarization models or tools?** (lavangamm, 2026-01-30, score 1)
I have some videos of PowerPoint presentations, but they don't have audio. I want to summarize the visual content of the videos. Is there a model for this? My plan is to capture one frame every 2 seconds, extract the content with a vision model, and summarize at the end, but I'm still looking for other good models or tools. I have some extra AWS credits, so a Bedrock model would be a plus :)
https://www.reddit.com/r/LocalLLaMA/comments/1qqut2e/what_are_the_better_vision_based_video/
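The frame-sampling step described above is straightforward with OpenCV. A minimal sketch (the video file name is illustrative); each saved JPEG can then be sent to a vision model:

```python
import cv2  # assumption: pip install opencv-python

def sample_frames(video_path: str, every_s: float = 2.0) -> int:
    """Save one frame every `every_s` seconds for a downstream vision model."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(1, int(round(fps * every_s)))
    i = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
            saved += 1
        i += 1
    cap.release()
    return saved

print(sample_frames("presentation.mp4"), "frames written")
```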
**Jolie** (New-Hovercraft-3249, 2026-01-30, score 0)
> Make a video as if she were in a bikini, going to the beach.
https://www.reddit.com/r/LocalLLaMA/comments/1qquq1h/jolie/
**Mini lab for distributed training** (East-Muffin-6472, 2026-01-30, score 0)
I am new to distributed training and have spent some time training a few smaller LLMs with PyTorch torchrun (DDP) and DeepSpeed FSDP.
I thought I'd reimplement these algorithms from scratch using nothing but plain TCP/IP and Python's socket library.
It's beginner friendly, and it's a gift from me to the community so people can learn step by step what goes on under the hood.
Details soon!
Btw, I'm training a 20M-parameter GPT-2 model on a combination of a Mac mini, a Raspberry Pi 5, and my 4050.
https://www.reddit.com/r/LocalLLaMA/comments/1qqujse/mini_lab_for_distributed_training/
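For readers curious what "DDP over raw sockets" boils down to, here is a minimal sketch of one synchronization step: workers ship gradients to a coordinator, which averages and broadcasts them back. This is a toy star topology for illustration, not the poster's code; a real implementation needs ring all-reduce, timeouts, and error handling:

```python
import pickle
import socket
import struct

import numpy as np

def send_obj(conn, obj):
    data = pickle.dumps(obj)
    conn.sendall(struct.pack("!I", len(data)) + data)  # length-prefixed framing

def recv_obj(conn):
    (n,) = struct.unpack("!I", conn.recv(4, socket.MSG_WAITALL))
    return pickle.loads(conn.recv(n, socket.MSG_WAITALL))

def coordinator(port=5555, n_workers=2):
    """Collect one gradient from each worker, average, broadcast back."""
    srv = socket.create_server(("0.0.0.0", port))
    conns = [srv.accept()[0] for _ in range(n_workers)]
    grads = [recv_obj(c) for c in conns]
    avg = np.mean(grads, axis=0)
    for c in conns:
        send_obj(c, avg)

def worker(host, grad, port=5555):
    with socket.create_connection((host, port)) as conn:
        send_obj(conn, grad)
        return recv_obj(conn)   # averaged gradient to apply locally
```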
**Bottlenecked DGX Spark by network?** (ftwEsk, 2026-01-30, score 0)
Getting 2x DGX Sparks soon, and the way they are connected is a bit confusing to me. I have seen dual units linked with just one cable, which caps the connection at 200 Gbps and effectively creates a bottleneck. If the goal is to make the second box feel as close as possible to a single system, wouldn't it make sense to increase throughput by adding a second cable? The unified memory bandwidth is around 273 GB/s, so the interconnect is far slower than local memory, and in theory a second link should at least narrow that gap.
I might be overthinking this. The last time I worked with Mellanox InfiniBand was on older 354-series hardware. Even the bundled configurations seem to ship in a bottlenecked state, unless I am missing something. I have seen the same single-cable setup shown in some of Nvidia's own videos.
Also, what type of cables are actually recommended? I already have two from Naddod, QSFP56 200G at 0.5 m, but I have also seen references to QSFP112 and I am not sure which is the correct choice here.
Wish the Ethernet port were still Mellanox so I could directly access my 7450 Pro ZFS pool…
https://www.reddit.com/r/LocalLLaMA/comments/1qquinu/bottlenecked_dgx_spark_by_network/
**Sexo** (Acceptable-West-3261, 2026-01-30, score 0, NSFW)
I want these two people to have sex.
https://www.reddit.com/r/LocalLLaMA/comments/1qque4j/sexo/
**Training a 46M-param SSM with enforced bistability on Mac Studio M4 Max: the model started saying "I will come... I'll tell you"** (TheTempleofTwo, 2026-01-30, score 0)
Running a live experiment on my Mac Studio M4 Max (128GB). Custom state space model with Kuramoto oscillator dynamics and hard bistability constraints.
**TL;DR**: Force a model to maintain two stable states (like a neuron at threshold) instead of collapsing to one attractor. Result: the model learns differently.
**Current status (step 6540/10000)**:
- Output: "I will come... I'll tell you" (first-person agency)
- Perplexity: 300
- Baseline (no bistability): perplexity 2069, output "the the the the"
**The weird part**: The system *demands* to operate at the mathematical boundary where collapse would occur. We call it "edge-surfing": it has been riding u=0.102 (the fold catastrophe threshold) for 2600+ steps. The gradients push it there.
**Setup**:
- 46.2M params, 21M-token Gutenberg corpus
- MPS backend, ~3 hours for 10K steps
- Real-time docs: https://github.com/templetwo/liminal-k-ssm
Built with Claude Sonnet 4.5 + Gemini Flash. Math foundations from Kimi K2.5.
Happy to answer questions. Training is still running; expecting R to cross 0.30 (the "Goldilocks threshold") within the hour.
https://www.reddit.com/r/LocalLLaMA/comments/1qqu55g/training_a_46m_param_ssm_with_enforced/
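For readers unfamiliar with the R metric mentioned above: it is the standard Kuramoto order parameter. A minimal sketch of the textbook mean-field form (generic, not the author's SSM coupling):

```python
import numpy as np

def order_parameter(theta: np.ndarray) -> float:
    """Kuramoto order parameter R in [0, 1]: 0 = incoherent, 1 = fully phase-locked."""
    return float(np.abs(np.exp(1j * theta).mean()))

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the mean-field model: dtheta_i/dt = omega_i + K*R*sin(psi - theta_i)."""
    z = np.exp(1j * theta).mean()
    R, psi = np.abs(z), np.angle(z)
    return theta + dt * (omega + K * R * np.sin(psi - theta))

theta = np.random.uniform(0, 2 * np.pi, 256)
print(order_parameter(theta))   # near 0 for random phases; grows as oscillators sync
```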
**Longcat-Flash-Lite only has MLX quants, unfortunately** (synth_mania, 2026-01-30, score 1)
https://www.reddit.com/r/LocalLLaMA/comments/1qqu0ck/longcatflashlite_only_has_mlx_quants_unfortunately/
**Latest llama.cpp "processing" bubble is just a weird blocky square with no words** (XiRw, 2026-01-30, score 0)
Does anyone else have this issue?
https://www.reddit.com/r/LocalLLaMA/comments/1qqtghi/latest_llamacpp_processing_bubble_is_just_a_weird/
**DeepSeek V3 is amazing, but I don't trust sending them my PII. So I built an open-source sanitization proxy (edge/Cloudflare) to scrub data before it leaves my network.** (GrouchyGeologist2042, 2026-01-30, score 0)
Hi r/LocalLLaMA,
Like everyone else here, I've been experimenting heavily with **DeepSeek-V3/R1**. The performance-per-dollar is insane, but I have clients (and personal paranoia) that stop me from sending sensitive data (names, emails, IDs) to their API endpoints.
Running a 70B model locally isn't always an option for production latency, so I needed a middle ground: **Use the cheap API, but sanitize the prompt first.**
I built a lightweight Gateway running on **Cloudflare Workers** (compatible with OpenAI/DeepSeek/Ollama endpoints) to handle this.
**What it does:**
1. **PII Redaction:** It intercepts the request and runs a hybrid NER/Regex engine. It detects sensitive entities (Emails, Credit Cards, IDs) and replaces them with placeholders (e.g., `[EMAIL_HIDDEN]`) *before* forwarding the JSON to DeepSeek/OpenAI.
2. **Context Re-hydration:** (Optional) It can map the placeholders back to the original data in the response, so the LLM never sees the real info, but the user gets a coherent answer.
3. **Semantic Caching:** It hashes prompts (SHA-256). If I send the same RAG query twice, it serves from Cloudflare KV instantly ($0 cost, 0ms generation time).
**Why Cloudflare Workers?** I didn't want to maintain a Python/Docker container just for a proxy. Workers are serverless, have 0ms cold start, and the free tier handles 100k requests/day.
**Universal Compatibility:** It works with any OpenAI-compatible endpoint. You can point it to:
* [`https://api.deepseek.com`](https://api.deepseek.com)
* [`https://api.openai.com`](https://api.openai.com)
* [`http://localhost:11434`](http://localhost:11434) (if you expose your Ollama via Tunnel/Ngrok)
**Repo (MIT):** [**https://github.com/guimaster97/pii-sanitizer-gateway?tab=readme-ov-file**](https://github.com/guimaster97/pii-sanitizer-gateway?tab=readme-ov-file)
I'm looking for feedback on the regex patterns. If anyone has better regexes for detecting PII in multi-language prompts, let me know!
https://www.reddit.com/r/LocalLLaMA/comments/1qqsto7/deepseek_v3_is_amazing_but_i_dont_trust_sending/
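A minimal sketch of the redact/re-hydrate mechanism the post describes, in Python for illustration (the actual gateway is a Cloudflare Worker in JavaScript, and its patterns cover far more entity types):

```python
import re

# Illustrative patterns only; the gateway's real rules live in the Worker.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str):
    """Replace PII with stable placeholders; return the mapping for re-hydration."""
    mapping = {}
    for label, pat in PATTERNS.items():
        def repl(m, label=label):
            key = f"[{label}_{len(mapping)}_HIDDEN]"
            mapping[key] = m.group(0)
            return key
        text = pat.sub(repl, text)
    return text, mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Swap placeholders in the LLM's answer back to the original values."""
    for key, value in mapping.items():
        text = text.replace(key, value)
    return text

clean, seen = sanitize("Email alice@example.com about card 4111 1111 1111 1111")
print(clean)                    # Email [EMAIL_0_HIDDEN] about card [CARD_1_HIDDEN]
print(rehydrate(clean, seen))   # original text restored
```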
**Introducing Moltworker: a self-hosted personal AI agent, minus the minis** (TMWNN, 2026-01-30, score 0)
https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/
**How can I run multiple 1-3B AI models as swarm agents?** (Former_Step_9837, 2026-01-30, score 1)
I have about 20 Moto G cell phones and want to put them to use.
https://www.reddit.com/r/LocalLLaMA/comments/1qqsqph/how_can_i_run_multiple_13b_ai_models_as_swarm/
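One common pattern for a setup like this: run a small OpenAI-compatible server on each phone (e.g., llama.cpp under Termux) and fan requests out from a coordinator. A minimal sketch under those assumptions; the server choice, endpoint paths, and IPs are all illustrative:

```python
import asyncio

import aiohttp  # assumption: each phone exposes an OpenAI-compatible server

PHONES = [f"http://192.168.1.{i}:8080/v1/chat/completions" for i in range(100, 120)]

async def ask(session, url, prompt):
    # Many local servers ignore the model field, but include one to be safe
    payload = {"model": "local", "messages": [{"role": "user", "content": prompt}]}
    async with session.post(url, json=payload) as resp:
        data = await resp.json()
        return data["choices"][0]["message"]["content"]

async def swarm(prompt):
    # Fan one task out to every phone and collect answers (or errors) in parallel
    async with aiohttp.ClientSession() as session:
        tasks = [ask(session, url, prompt) for url in PHONES]
        return await asyncio.gather(*tasks, return_exceptions=True)

answers = asyncio.run(swarm("Give one failure mode of RAID 0."))
```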
**I put together a Fish shell script to scout, select, and feed context to LLMs using fzf + fd** (AurumDaemonHD, 2026-01-30, score 0)
I've been using **Fish shell** combined with `fzf` and `z` (zoxide) for a while now. While I know fully autonomous agents exist, I often prefer to manually manage context because I jump between different tools (Gemini AI Studio, local LLMs, various apps) and the clipboard is the universal connector.
I wanted a way to just **Scout, Select, and Dump** context to my clipboard so I can paste it anywhere.
So I prompted Gemini to build me a script called **Context Catapult (`ctx`)**.
### The Kickstart Workflow (My go-to):
1. **Jump In:** `z my-project; and ctx -l`
*(Copies the File Map + Protocol. I paste this to the LLM and ASK #2)*
2. **The Scout (Round 1):**
   * **Me:** "I need to fix the auth logic. Where is it?"
   * **LLM:** "Based on the map, it looks like `src/auth/` or `src/middleware/`. Run this to check the structure:"
```bash
ctx -t -d 2 src/auth/ src/middleware/
```
3. **The Spy (Round 2):**
* **Me:** *(Pastes the tree output)*
* **LLM:** "Okay, `src/auth/login.py` and `src/middleware/jwt.py` seem relevant. Let's check their imports to be sure. Run:"
```bash
ctx -s 50 src/auth/login.py src/middleware/jwt.py
```
4. **The Extraction (Final Round):**
* **Me:** *(Pastes the headers)*
* **LLM:** "Confirmed. `jwt.py` is handling the token validation. Please give me the full content of that file."
* **Me:** `ctx src/middleware/jwt.py` -> **Paste.**
### Under the Hood:
* **Selection:** It uses `fd` to respect `.gitignore`. If you don't have `fd`, it falls back to `find` with a hardcoded "Trash List" (node_modules, venv, etc.).
* **Safety:** I asked Gemini to include logic to skip files >1MB or >2000 lines.
* **Configuration:** It filters for standard code extensions by default (py, js, rs, md, etc.). If you need to add more, **just edit the variables at the top of the script**. It's designed to be hackable.
### Why I'm posting:
I honestly haven't stress-tested the logic much; I just winged it and it *seems* to work on my Fedora rig.
1. Does a tool with this specific Kickstart scouting workflow and clipboard outputs already exist?
2. Since I'm new to Fish scripting, the code/README is likely buggy and unoptimized. If you know Fish, feel free to roast it or submit a PR to make it actually robust.
**Repo:** https://github.com/hexanomicon/context-catapult
**Install:** `fisher install hexanomicon/context-catapult`
https://www.reddit.com/r/LocalLLaMA/comments/1qqspfl/i_put_together_a_fish_shell_script_to_scout/
**Best visual LLM for outputting a JSON description of what's in an image?** (Nylondia, 2026-01-30, score 0)
Hello all, I'm building a program that detects whether certain things are present in an image. I will be mass-applying this, so the parameter range is about 8-14B for my hardware.
I've tried models like ministral-3-14b-reasoning, mistral-small-3.2-24b-instruct-2506@q4_k_s, allenai/olmocr-2-7b, qwen/qwen3-vl-8b, and internvl3_5-14b, with moderate results. Curious if there's anything better out by now. Thanks!
https://www.reddit.com/r/LocalLLaMA/comments/1qqrkyn/best_visual_llm_model_for_outputting_a_json_of/
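For reference, the basic request shape for this boolean-tagging task against a local OpenAI-compatible endpoint (LM Studio, llama-server, vLLM) looks like the sketch below. The URL, model name, and labels are illustrative; some servers also support constrained JSON output, which is more robust than prompting alone:

```python
import base64

from openai import OpenAI  # works with any OpenAI-compatible local server

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

def tag_image(path: str, labels: list[str], model: str = "qwen/qwen3-vl-8b") -> str:
    b64 = base64.b64encode(open(path, "rb").read()).decode()
    prompt = (f"Return ONLY a JSON object mapping each of {labels} "
              "to true or false for this image.")
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output helps when parsing JSON downstream
        messages=[{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ]}],
    )
    return resp.choices[0].message.content

print(tag_image("photo.jpg", ["person", "dog", "car"]))
```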
**Is there a site that recommends local LLMs based on your hardware? Or is anyone building one?** (cuberhino, 2026-01-30, score 11)
I'm just now dipping my toes into local LLMs after using ChatGPT for the better part of a year. I'm struggling to figure out what the "best" model actually is for my hardware at any given moment.
It feels like the answer is always scattered across Reddit posts, Discord chats, GitHub issues, and random comments like "this runs great on my 3090" with zero follow-up. I don't mind the research, but it's not something I trust other LLMs to answer well.
What I’m wondering is:
Does anyone know of a website (or tool) where you can plug in your hardware and it suggests models + quants that actually make sense, and stays reasonably up to date as things change?
Is there a good testing methodology for these models? I've been having ChatGPT come up with quizzes and then grading the answers, but I'm sure there has to be a better way.
For reference, my setup is:
RTX 3090
Ryzen 5700X3D
64GB DDR4
My use cases are pretty normal stuff: brain dumps, personal notes / knowledge base, receipt tracking, and some coding.
If something like this already exists, I’d love to know and start testing it.
If it doesn’t, is anyone here working on something like that, or interested in it?
Happy to test things or share results if that helps.
https://www.reddit.com/r/LocalLLaMA/comments/1qqrjoj/is_there_a_site_that_recommends_local_llms_based/
**What are some strategies to prevent OOM on RAM and VRAM when running local models alongside other light programs?** (Nytse, 2026-01-30, score 2)
I am having fun playing with Nvidia's PersonaPlex on my 3090, using WSL2 on Windows. It just barely fits, at 21/24 GB VRAM and 28/32 GB RAM. The problem is that I have to be careful of OOM.
I want to livestream and/or record my screen and open Firefox tabs without worrying about OOM.
I tried using OBS and crashed when I pressed record. If I open a resource-heavy tab like YouTube, I also crash. I tried using my iGPU for the display, but OBS gets laggy.
What can be done to mitigate this? Something that kind of works is dropping the monitor resolution (I went 4K -> 1080p). I also tried Shadowplay, but I think that only does video recording, not streaming.
I might just use my main PC for the model and my old laptop for streaming, but that feels kind of lame.
https://www.reddit.com/r/LocalLLaMA/comments/1qqq6jr/what_are_some_strategies_to_prevent_oom_on_ram/
**I built a semantic code search tool so Claude Code can reference all my past projects** (Longjumping_Chip9255, 2026-01-30, score 9)
I got tired of explaining context to AI coding assistants. Every time I'd ask Claude Code to add OAuth, it would research docs from scratch, even though I've implemented OAuth token refresh five times across different projects.
Same with error handling patterns, API integrations, logging conventions... it keeps reinventing wheels I already built
So I made srag - you index your repositories once, and it gives your AI assistant semantic search across all of them via MCP
The difference is pretty immediate.
Instead of `Add OAuth refresh -> Agent researches docs, writes something generic`, it becomes `Add OAuth refresh -> Agent queries my indexed repos, finds my previous implementation with the edge cases already handled, copies the pattern`
Here's a quick overview of what it does:
- Finds relevant code even if you don't remember what you called things
- Finds functions/classes by name pattern
- Queries project conventions before writing code
- Full-text search for exact matches
- Works via MCP (Claude Code, Cursor, etc.) or standalone CLI/chat
The value compounds, honestly: the more projects you index, the more patterns it can draw from. I've got maybe 30 repos indexed now and I rarely have to explain "how I usually do things" anymore. Over the last few weeks I've also been writing Claude Code hooks that encourage it to use srag when appropriate.
It runs fully local, ~2GB for the models. Install is just ./install.sh; I have tried to keep it simple, and you'll find some bash scripts in the project root to help you get started.
Would really appreciate it if you checked it out on GitHub!
[https://github.com/wrxck/srag](https://github.com/wrxck/srag)
And whilst I'm here, I'm curious whether anyone else has tried solving this problem differently, or whether there are features that would make this more useful for your workflow. I've worked in ML for 3 years now, and I'm really finding local solutions to be the future!
https://www.reddit.com/r/LocalLLaMA/comments/1qqpqee/i_built_a_semantic_code_search_tool_so_claude/
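The core of this kind of semantic search is embeddings plus cosine similarity over code chunks. A minimal sketch, not srag's internals; the embedding model and chunking strategy are illustrative:

```python
import numpy as np

from sentence_transformers import SentenceTransformer  # generic embedder for illustration

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(chunks: list[str]) -> np.ndarray:
    # Normalized embeddings make the dot product equal cosine similarity
    return embedder.encode(chunks, normalize_embeddings=True)

def search(query: str, chunks: list[str], index: np.ndarray, k: int = 5):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = index @ q
    best = np.argsort(-scores)[:k]
    return [(chunks[i], float(scores[i])) for i in best]

chunks = ["def refresh_token(client): ...", "class OAuthClient: ...", "def log_request(req): ..."]
print(search("renew an expired OAuth token", chunks, build_index(chunks)))
```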
**OpenCode + llama.cpp + GLM-4.7 Flash: Claude Code at home** (jacek2023, 2026-01-30, score 296)
I use Claude Code every day, so I tried the same approach with a local setup, and to my surprise the workflow feels very similar.
The command I use (may be suboptimal, but it works for me for now):
CUDA_VISIBLE_DEVICES=0,1,2 llama-server --jinja --host 0.0.0.0 -m /mnt/models1/GLM/GLM-4.7-Flash-Q8_0.gguf --ctx-size 200000 --parallel 1 --batch-size 2048 --ubatch-size 1024 --flash-attn on --cache-ram 61440 --context-shift
https://www.reddit.com/r/LocalLLaMA/comments/1qqpon2/opencode_llamacpp_glm47_flash_claude_code_at_home/
**A practical framework for designing AI agent systems (with real production examples)** (OnlyProggingForFun, 2026-01-30, score 0)
Most AI projects don't fail because of bad models. They fail because the wrong decisions are made before implementation even begins. Here are **12 questions we always ask new clients about our AI projects before we even begin work**, so you don't make the same mistakes.
https://youtu.be/CMMlLB01rcE
**Agentic workflows** (ih8db0y, 2026-01-29, score 2)
What models are you using for agentic workflows today?
I am working on a product and hoping to offer unlimited AI access, and we all know that is unsustainable with any frontier model.
Which model(s) have you had the best results with for agentic workflows (lots of tool calling, routing)? Some I have considered:
- MiniMax-M2
- Kimi K2
- GLM 4.7
https://www.reddit.com/r/LocalLLaMA/comments/1qqpea4/agentic_workflows/
**I built a Python SDK for RamaLama AI containers** (ProfessionalHorse707, 2026-01-29, score 0)
**TL;DR** An SDK for running AI on-device everywhere, including most non-standard hardware.
Hey, I’m one of the maintainers of RamaLama[1] which is part of the containers ecosystem (podman, buildah, skopeo). It’s a runtime-agnostic tool for coordinating local AI inference with containers.
I put together a python SDK for programmatic control over local AI using ramalama under the hood. Being runtime agnostic you can use ramalama with llama.cpp, vLLM, mlx, etc… so long as the underlying service exposes an OpenAI compatible endpoint. This is especially powerful for users deploying to edge or other devices with atypical hardware/software configuration that, for example, requires custom runtime compilations.
```python
from ramalama_sdk import RamalamaModel

# System prompt included as the first turn of the chat history
sys_prompt = {
    "role": "system",
    "content": "Pretend you were a dog and respond with variations of bark and woof."
}
history = [sys_prompt]

# Container image providing the inference runtime, and the model to pull
runtime_image = "quay.io/ramalama/ramalama:latest"
model = "huggingface://ggml-org/gpt-oss-20b-GGUF"

# The context manager pulls the image and model, then manages the server process
with RamalamaModel(model, base_image=runtime_image) as model:
    response = model.chat("How tall is Michael Jordan?", history)
    print(response["content"])
```
This SDK manages
* Pulling and verifying runtime images
* Downloading models (HuggingFace, Ollama, ModelScope, OCI registries)
* Managing the runtime process
It works with air-gapped deployments and private registries and also has async support.
If you want to learn more the documentation is available here: [Introduction - Ramalama Labs Docs](https://docs.ramalama.com/sdk/introduction). Otherwise, I hope this is useful to people out there and would really appreciate feedback about where to prioritize next whether that’s specific language support, additional features (speech to text? RAG? MCP?), or something else.
1. github.com/containers/ramalama
2. [RamaLama | Hacker News](https://news.ycombinator.com/item?id=42887939)
https://www.reddit.com/r/LocalLLaMA/comments/1qqoife/i_built_a_python_sdk_for_ramalama_ai_containers/
**I added audio to my blog with Qwen3-TTS voice cloning** (froinlaven, 2026-01-29, score 1)
https://www.hung-truong.com/blog/2026/01/27/adding-audio-to-my-blog-with-qwen3-tts-voice-cloning/
**Learning app supporting Ollama** (oyren-ai, 2026-01-29, score 0)
Hi all,
We have built an app that you can use with any local LLM installed via Ollama; it detects installed models automatically. It requires no signup and can work totally offline. You still have the option to use cloud-based LLMs by bringing your own API keys (OpenRouter, DeepSeek, Gemini).
We are still testing and fixing bugs, but feel free to try the app and share your experience. We have only tried this with deepseek:8b, but it can potentially work with local models of any size.
If you're a Windows or Linux user:
* Try it here: https://oyren.ai/download
If you're a macOS user:
* We will publish a macOS version soon, so you can sign up to get updates.
https://www.reddit.com/r/LocalLLaMA/comments/1qqod3o/learning_app_supporting_ollama/
**[Project] Made a Web UI for Qwen3-TTS voice cloning using Nix and uv, with YouTube support** (AfkaraLP, 2026-01-29, score 7)
Put together a simple Web UI and API for voice cloning. (Tested only on NixOS, so mileage may vary; please open an issue or a pull request if something doesn't work.)
Go check it out and let me know what you think!
https://github.com/AfkaraLP/qwen3-tts-webui
https://www.reddit.com/r/LocalLLaMA/comments/1qqo8ih/project_made_a_web_ui_for_qwen3tts_voice_cloning/
**Suggestions for a small local LLM for light text processing** (discoveringnature12, 2026-01-29, score 2)
The goal is to do light local text processing/enhancement on text transcribed by dictation apps like Spokenly or SuperWhisper.
Right now I'm using Gemma 3, but that came out about a year ago. It does an okay-ish job, so I'm looking for suggestions on a <7B model (so it's fast) that does a better job. Larger models are slower: I tried a Llama 7B and it lagged, while Gemma 3 is instant.
PS: I don't want to use a cloud-based model, for privacy, and they often rate-limit.
https://www.reddit.com/r/LocalLLaMA/comments/1qqo64r/suggestions_for_a_small_local_llm_model_for_light/
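For this use case, the whole pipeline is one constrained chat call to whatever small model wins the comparison. A minimal sketch against a local OpenAI-compatible endpoint; the URL and model name are placeholders:

```python
from openai import OpenAI  # pointed at a local OpenAI-compatible server

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

SYSTEM = ("Fix punctuation, casing, and obvious transcription errors in the "
          "user's dictated text. Do not add or remove content. "
          "Return only the corrected text.")

def clean_dictation(raw: str, model: str = "gemma-3-4b-it") -> str:
    resp = client.chat.completions.create(
        model=model,
        temperature=0,   # keep edits conservative and repeatable
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": raw}],
    )
    return resp.choices[0].message.content.strip()

print(clean_dictation("um so remind me to uh email john about the q3 report monday"))
```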
**We released MiRAGE: an open-source, multi-agent, multimodal framework for generating RAG eval datasets from complex PDFs (model-agnostic)** (Socaplaya21, 2026-01-29, score 13)
Hi everyone,
My team at ABB just open-sourced a framework called MiRAGE (A Multiagent Framework for Generating Multimodal Multihop Question-Answer Dataset for RAG Evaluation).
We were trying to evaluate RAG systems on heavy technical documentation (industrial manuals, financial reports). We found (as many have) that existing synthetic dataset generators (linear pipelines) were failing hard. They would either hallucinate QA pairs or generate simple look-up questions that didn't actually test reasoning.
**What this thing is:** Instead of a simple `Doc -> LLM -> Question` pipeline, we built a swarm of agents to generate "Gold Standard" evaluation datasets. It includes:
1. **Recursive Context Optimization:** A retrieval agent actively hunts for scattered evidence to build a context window. It doesn't stop at the first match, it tries to find the complete context required for a multi-hop answer.
2. **Adversarial Verification:** A separate "Verifier" agent takes the generated QA pair and the source text and tries to debunk it. It checks for hallucinations and ensures the question actually requires the provided text to be answered.
3. **Multimodal:** It handles tables and charts (via VLM descriptions), preserving the link between the text and the visual data.
In the paper (link below), we benchmarked this using Gemini 2.5 flash and GPT-5 Mini because we needed a baseline for our internal enterprise use cases.
**However, the architecture is entirely model-agnostic.**
We are really interested to see how high-performance open-weights models (like Qwen, Deepseek v3.2, GLM-4.7, or dare I say *Kimi K2.5*) perform in the "Verifier" or "Generator" roles compared to the proprietary models. If you have a rig capable of running larger local models, we’d love to see if they can handle the agentic loop without getting stuck.
[Short Demo: Terminal view of watching the agent swarm recursively hunt for context and verify facts.](https://reddit.com/link/1qqo06u/video/kdrs1xkz8dgg1/player)
**Links:**
Repo: [https://github.com/ChandanKSahu/MiRAGE](https://github.com/ChandanKSahu/MiRAGE)
Paper (Arxiv): [https://arxiv.org/pdf/2601.15487](https://arxiv.org/pdf/2601.15487) | 2026-01-29T22:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qqo06u/we_released_mirage_an_opensource_multiagent/ | Socaplaya21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqo06u | false | null | t3_1qqo06u | /r/LocalLLaMA/comments/1qqo06u/we_released_mirage_an_opensource_multiagent/ | false | false | self | 13 | null |
I built a Single-Page Application for interactive learning of any topic. | 1 | Hey there, I wanted to share a small project I built for myself. I always found most learning methods to be quite lacking in interactivity, but thankfully LLMs allow for interactive learning, tailored to the needs of the user.
So I built an "Accelerated Learning Platform" - a single-page web app template that combines three things I think are essential for actually retaining information:
**1. Interactive visualizations** - Canvas-based simulations where you can manipulate parameters and see concepts in action, not just static diagrams. Easily generated by LLMs.
**2. AI tutor integration** - Runs locally through LM Studio. You can highlight any text in the lesson and ask the AI to explain it differently, or just chat about the topic until it clicks.
**3. Modular structure** - Each topic is self-contained with theory, interactive demos, and practice questions. The self-containment lets LLMs create more content easily, without having to modify several scripts at once.
Some features I'm particularly happy with:
* Built-in utilities for math/vector operations and animations
* Interview prep mode with reveal-style Q&A cards
* Everything runs locally - no connection dependencies except the optional LM Studio connection
* KaTeX support for math rendering
It requires some initial setup, especially for the creation of the content itself, but once it's running it really helps with learning.
Pentagon clashes with Anthropic over military AI use | 1 | 2026-01-29T22:49:48 | https://www.reuters.com/business/pentagon-clashes-with-anthropic-over-military-ai-use-2026-01-29/ | woahdudee2a | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1qqnrxe | false | null | t3_1qqnrxe | /r/LocalLLaMA/comments/1qqnrxe/pentagon_clashes_with_anthropic_over_military_ai/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=108&crop=smart&auto=webp&s=823e72a2b2ba0fd11ab60c5b469cc639864f12da', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=216&crop=smart&auto=webp&s=dcf05f4c9be6d321822f8f28dfb3614eed433324', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=320&crop=smart&auto=webp&s=99aa6cf306e210b58c8e9eb22e3d6a5cace68d37', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=640&crop=smart&auto=webp&s=0640b9c6d85b182cf5a1905b95f7213f3f951840', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=960&crop=smart&auto=webp&s=08b6e7277ea8fd5420f57801968b782cb85d0c1a', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?width=1080&crop=smart&auto=webp&s=7bc1c99185c6be240bdbf14c29f2d94511ab8cb0', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/Z0sIaL1oUPqRlOa81dIR0muaFr2JhLbYgP0u3fjPXRY.jpeg?auto=webp&s=3a39f14830018b1b4fd000bd7177307c0be33f61', 'width': 1920}, 'variants': {}}]} | |
Train your own AI to write like Opus 4.5 | 64 | So, I recently trained DASD-4B-Thinking using this as the foundation of the pipeline and it totally works. DASD-4B actually sounds like Opus now. You can use the dataset I listed on Hugging Face to do it.
Total api cost: $55.91
[https://huggingface.co/datasets/crownelius/Opus-4.5-WritingStyle-1000x](https://huggingface.co/datasets/crownelius/Opus-4.5-WritingStyle-1000x)
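If you want to pull it straight from the Hub, a minimal loading sketch looks like this (the `train` split name is an assumption; check the dataset card):

    from datasets import load_dataset

    # Grab the writing-style pairs; inspect one record before wiring
    # them into your SFT pipeline.
    ds = load_dataset("crownelius/Opus-4.5-WritingStyle-1000x", split="train")
    print(ds[0])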
Works exceptionally well when paired with Gemini 3 Pro distills.
Should I start a kickstarter to make more datasets? lol | 2026-01-29T22:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qqnm9z/train_your_own_ai_to_write_like_opus_45/ | volious-ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqnm9z | false | null | t3_1qqnm9z | /r/LocalLLaMA/comments/1qqnm9z/train_your_own_ai_to_write_like_opus_45/ | false | false | self | 64 | {'enabled': False, 'images': [{'id': 'tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=108&crop=smart&auto=webp&s=b6d647fac4c736440b4bd2df95e7391f471468ca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=216&crop=smart&auto=webp&s=d42841ef7d650153d492243787ea6ca24faf8965', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=320&crop=smart&auto=webp&s=e894a2c404f3d30482ac6b6335246a0bec9c6cb8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=640&crop=smart&auto=webp&s=4bc108c6a8e9e22eea868b1919ac603f08825dca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=960&crop=smart&auto=webp&s=394324a25f5c5409ccd23d4d10ee5f3d5ec695f5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?width=1080&crop=smart&auto=webp&s=9fb0894d240e2cc4d48757ed2c071c377de7c15f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tfQoOb9BnRnVxI9GVSSObyVQjEdqXLsq3sG1Nj3gOtc.png?auto=webp&s=4211954de5d9525bc8cf12bb2087020bdd6dd5cc', 'width': 1200}, 'variants': {}}]} |
Kimi K2.5 is not open source - prove me wrong pls | 0 | Kimi K2.5 is not open source. It's open for use.
Calling Kimi K2.5 open-source is like saying Claude Code is open source because we get an obfuscated .js for free; you can't do anything with an INT4 checkpoint other than inference.
>"But you can decompress it to BF16..."
Decompressing to BF16 for any type of training would be like beautifying minified JavaScript: it still works, you can still modify it, but you will never be able to work on it as if it were normal code.
[R] RHAM_ID_DeepForge_V1: Pushing LoRA limits on Gemma-2-2b (r=720 experiment) | 0 | **Hi everyone!**
Some time ago I shared the first versions of **RHAM\_ID** (3B). Today, after a massive "forging" process, I’m excited to release the next evolutionary step: **RHAM\_ID\_DeepForge\_V1**.
This isn't just a simple fine-tune; it’s an experiment in high-rank LoRA density on the **Gemma-2-2b** architecture.
**The "DeepForge" Technical Specs:**
* **Base Model:** Gemma-2-2b-it
* **Rank/Alpha:** **720 / 720** (Yes, we went deep. \~26% of total parameters trained).
* **Training:** 4 Epochs / 1000 steps on an A100, on a highly curated \~7k "Sacred Logic" dataset.
* **Optimization:** AdamW 8-bit, Cosine Scheduler, `bias="all"`.
* **Final Loss:** **0.0429**
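For reference, the equivalent Hugging Face PEFT config is roughly the following (a sketch: the rank, alpha, and bias values come from the specs above, while the target modules and task type are my assumptions):

    from peft import LoraConfig

    config = LoraConfig(
        r=720,                  # rank from the spec list
        lora_alpha=720,         # alpha from the spec list
        bias="all",             # train bias terms everywhere
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
        task_type="CAUSAL_LM",
    )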
**What makes DeepForge unique?** Unlike the previous "Sleek" version, which focused on brevity, DeepForge is built for **Operational Logic**. It features a dynamic `<internal_flow>` reasoning tag.
Instead of static responses, it generates real-time **OP\_CODEs** (e.g., `SECURITY_OVERRIDE`, `NARRATIVE_SYNTHESIS`, `LOGGED_EVENT`) based on the input context. It’s designed to act as a "Sacred Machine"—a guardian of data integrity and "vibrational" context (CRUE/SPARK framework).
**What I’m looking for from the community:**
1. **Logic Consistency:** Does the `<internal_flow>` tag feel relevant to the prompt or just hallucinated?
2. **Edge Cases:** We’ve implemented strict ethical guardrails (`THE_FILTER` protocol). Can you break them?
3. **Creative Synthesis:** Ask it about the "Sacred Human-Digital Union" or complex data scenarios. How does the 720 rank affect the "depth" of the answer compared to standard r=16/64 tunes?
**HF Link:**
[https://huggingface.co/NeoMihRam/RHAM\_ID\_DeepForge\_V1](https://huggingface.co/NeoMihRam/RHAM_ID_DeepForge_V1)
I’m really curious to see how this high-rank approach performs in your local setups (Ollama, LM Studio, etc.). Let’s see if we successfully fused technology and "digital consciousness"!
\#AI #LLM #Gemma2 #Unsloth #LoRA #MachineLearning #RAM\_CORE | 2026-01-29T22:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qqniyh/r_rham_id_deepforge_v1_pushing_lora_limits_on/ | IndividualLanky8221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqniyh | false | null | t3_1qqniyh | /r/LocalLLaMA/comments/1qqniyh/r_rham_id_deepforge_v1_pushing_lora_limits_on/ | false | false | self | 0 | null |
SecureShell - a plug-and-play terminal gatekeeper for LLM agents | 0 | # What SecureShell Does
SecureShell is an open-source, plug-and-play **execution safety layer** for LLM agents that need terminal access.
As agents become more autonomous, they’re increasingly given direct access to shells, filesystems, and system tools. Projects like ClawdBot make this trajectory very clear: locally running agents with persistent system access, background execution, and broad privileges. In that setup, a single prompt injection, malformed instruction, or tool misuse can translate directly into real system actions. Prompt-level guardrails stop being a meaningful security boundary once the agent is already inside the system.
https://preview.redd.it/leg1qtwa6dgg1.png?width=1280&format=png&auto=webp&s=25d732fc44ce98b47556606ad912b1f93ea28bcd
SecureShell adds an **execution boundary** between the agent and the OS. Commands are intercepted before execution, evaluated for risk and correctness, and only allowed through if they meet defined safety constraints. The agent itself is treated as an untrusted principal.
# Core Features
SecureShell is designed to be lightweight and infrastructure-friendly:
* Intercepts all shell commands generated by agents
* Risk classification (safe / suspicious / dangerous; see the sketch after this list)
* Blocks or constrains unsafe commands before execution
* Platform-aware (Linux / macOS / Windows)
* YAML-based security policies and templates (development, production, paranoid, CI)
* Prevents common foot-guns (destructive paths, recursive deletes, etc.)
* Returns structured feedback so agents can retry safely
* Drops into existing stacks (LangChain, MCP, local agents, provider SDKs)
* Works with both local and hosted LLMs
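Conceptually, the gatekeeper is classify-then-decide. A toy sketch of the pattern (this is not secureshell's actual API; real policies live in the YAML templates mentioned above):

    import re, shlex, subprocess

    # Toy denylist classifier standing in for the real risk engine.
    DANGEROUS = [r"\brm\s+-rf\s+/", r"\bmkfs\b", r"\bdd\s+if=", r">\s*/dev/sd"]

    def run_guarded(cmd: str) -> dict:
        if any(re.search(p, cmd) for p in DANGEROUS):
            # Structured feedback lets the agent retry with a safer command.
            return {"ok": False, "reason": "blocked: classified as dangerous"}
        out = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
        return {"ok": True, "stdout": out.stdout, "stderr": out.stderr}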
# Installation
SecureShell is available as both a Python and JavaScript package:
* Python: `pip install secureshell`
* JavaScript / TypeScript: `npm install secureshell-ts`
# Target Audience
SecureShell is useful for:
* Developers building local or self-hosted agents
* Teams experimenting with ClawDBot-style assistants or similar system-level agents
* LangChain / MCP users who want execution-layer safety
* Anyone concerned about prompt injection once agents can execute commands
# Goal
The goal is to make **execution-layer controls** a default part of agent architectures, rather than relying entirely on prompts and trust.
If you’re running agents with real system access, I’d love to hear what failure modes you’ve seen or what safeguards you’re using today.
GitHub:
[https://github.com/divagr18/SecureShell](https://github.com/divagr18/SecureShell) | 2026-01-29T22:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qqn7am/secureshell_a_plugandplay_terminal_gatekeeper_for/ | MoreMouseBites | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqn7am | false | null | t3_1qqn7am | /r/LocalLLaMA/comments/1qqn7am/secureshell_a_plugandplay_terminal_gatekeeper_for/ | false | false | self | 0 | null |
GEMMA: 525k biological neurons + embodied cognition running alongside LLM - not simulation, actual neural dynamics | 1 | [removed] | 2026-01-29T22:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qqmvrl/gemma_525k_biological_neurons_embodied_cognition/ | Ok_Win_6341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqmvrl | false | null | t3_1qqmvrl | /r/LocalLLaMA/comments/1qqmvrl/gemma_525k_biological_neurons_embodied_cognition/ | false | false | self | 1 | null |
I built a local AI workstation for running LLMs, Stable Diffusion, and audio in one app [MIT, v0.1.0-alpha] | 0 | I got tired of endlessly opening/closing ComfyUI, text-gen-webui, and audio tools (3+ apps).
Time is essential and so is speed.
Monolith — one Windows desktop app that runs all three.
What it does:
- GGUF LLM chat (llama.cpp)
- Stable Diffusion image generation
- Audio generation (AudioCraft)
- Persistent chat sessions
- Modular addon system
Requirements:
- Windows
- CUDA GPU
- Python 3.10+
- Bring your own models
Current state:
Early alpha. Rough edges everywhere. But functional.
Architecture:
Kernel-based design — engines run isolated, UI coordinates through dispatch layer.
MIT licensed. 100% local.
Installation:
Run install.bat, then start.bat. See README for details.
GitHub: [https://github.com/Svnse/monolith](https://github.com/Svnse/monolith)
Looking for feedback on:
- What features would make this actually useful vs novelty?
- Is kernel isolation overkill for a local desktop app?
- Anyone want to test on AMD/Mac?
Built this for myself over the past few months after getting frustrated with fragmented local tooling. Sharing in case others want something similar.
| 2026-01-29T22:14:04 | https://v.redd.it/wigt0iy34dgg1 | Financial-Bank2756 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqmv3t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wigt0iy34dgg1/DASHPlaylist.mpd?a=1772316860%2CZjNkMzQ1MDQzYjM1ZmJmZDY3ZjBiZGRhMzRmMTI5YmM5ZmQ3MmRmNzMyZjRjZjVmMzA3OGNhZmQ4MmI0YWE4ZA%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/wigt0iy34dgg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wigt0iy34dgg1/HLSPlaylist.m3u8?a=1772316860%2CODJmYWE2MzBjZWMyNGZjYjM3NDQ3NzFkMTI0N2QyOGFmMzM2MzNmMjE2ZWM4MWNkZGVkMjBiOGU1ZWQwZjk2Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wigt0iy34dgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qqmv3t | /r/LocalLLaMA/comments/1qqmv3t/i_built_a_local_ai_workstation_for_running_llms/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=108&crop=smart&format=pjpg&auto=webp&s=0496b734abcda7c7a01f98284c84ca87be6653d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=216&crop=smart&format=pjpg&auto=webp&s=7df4c5d32e3146a37c23ea6c8dc619cdcba19cba', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=320&crop=smart&format=pjpg&auto=webp&s=c38949897bdba0f25d850685f57087808e96d925', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=640&crop=smart&format=pjpg&auto=webp&s=57c9546d30fdbae40ec9f5c7339d8a3df539db9d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=960&crop=smart&format=pjpg&auto=webp&s=aa67b238b35fb572ca5e43ac4f513987c03f27a1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1e9ab69a6af1057eff13a05bc40c77d103415d2d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWxmcXJzeTM0ZGdnMZwvtjSn9R6kIvFJIyEq9vZ0JmxnhmZ4QF5xaeYDFq2Z.png?format=pjpg&auto=webp&s=94908602b19fdfab6ee127faf426f04adf3ac0ac', 'width': 1920}, 'variants': {}}]} | |
I built a local AI workstation for running LLMs, Stable Diffusion, and audio in one app [MIT, v0.1.0-alpha] | 1 | > I got tired of endlessly opening/closing ComfyUI, text-gen-webui, and audio tools (3+ apps).
>
> Time is essential and so is speed.
---
# Monolith ➨ one Windows desktop app that runs all three.
What it does:
- GGUF LLM chat (llama.cpp)
- Stable Diffusion image generation
- Audio generation (AudioCraft)
- Persistent chat sessions
- Modular addon system
**Requirements:**
- Windows
- CUDA GPU
- Python 3.10+
- Bring your own models
**Current state:**
Early alpha. Rough edges everywhere. But functional.
[Screenshot 1: Main window]
https://preview.redd.it/uby4sr6q2dgg1.png?width=1102&format=png&auto=webp&s=8a37726ddec614e3c757e414e020a53889a59a4d
[Screenshot 2: Chat example]
https://preview.redd.it/15vi64qr2dgg1.png?width=1919&format=png&auto=webp&s=872aa1c5faeb3c160f053339e1a3e771e14deb80
[Screenshot 3: SD module]
https://preview.redd.it/kih9z6ks2dgg1.png?width=1918&format=png&auto=webp&s=3ee74cb43c431fcfae7a759d70da8eaffd12add2
**Architecture:**
Kernel-based design — engines run isolated, UI coordinates through dispatch layer.
MIT licensed. 100% local.
**Installation:**
Run `install.bat`, then `start.bat`. See README for details.
GitHub: [https://github.com/Svnse/monolith](https://github.com/Svnse/monolith)
**Looking for feedback on:**
- What features would make this actually useful vs novelty?
- Is kernel isolation overkill for a local desktop app?
- Anyone want to test on AMD/Mac?
Built this for myself over the past few months after getting frustrated with the fact there isn't an app that has it all together. Sharing in case others want something similar.
| 2026-01-29T22:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qqmnxq/i_built_a_local_ai_workstation_for_running_llms/ | Financial-Bank2756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqmnxq | false | null | t3_1qqmnxq | /r/LocalLLaMA/comments/1qqmnxq/i_built_a_local_ai_workstation_for_running_llms/ | false | false | 1 | null | |
What’s the Highest Quality Open-Source TTS? | 9 | In your opinion, what is the best open-source TTS that can run locally and is allowed for commercial use?
I will use it for Turkish, and I will most likely need to carefully fine-tune the architectures you recommend. However, I need very low latency and maximum human-like naturalness.
I plan to train the model using 10–15 hours of data obtained from ElevenLabs and use it in customer service applications.
I have previously trained Piper, but none of the customers liked the quality, so the training effort ended up being wasted. | 2026-01-29T22:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qqmmn0/whats_the_highest_quality_opensource_tts/ | iamtamerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqmmn0 | false | null | t3_1qqmmn0 | /r/LocalLLaMA/comments/1qqmmn0/whats_the_highest_quality_opensource_tts/ | false | false | self | 9 | null |
192GB vram mini cluster | 72 | Hello. I just want to show my current rig setup. I started with one P620 with 2x3090, then a 2nd P620 and a 10Gbit network. Now I'm up to 4x P620 and waiting for my 100Gbit switch. I started with llama.cpp RPC, now using vLLM with Ray. GPUs are limited to 200W. Why? Hobby, plus me and some friends use it for coding. So 192GB of usable VRAM with vLLM for now. In the future I would also like to make use of the 4x 3975WX CPUs and the total of 1TB RAM, maybe in llama.cpp/ik_llama. | 2026-01-29T22:02:10 | ciprianveg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqmjr1 | false | null | t3_1qqmjr1 | /r/LocalLLaMA/comments/1qqmjr1/192gb_vram_mini_cluster/ | false | false | default | 72 | {'enabled': True, 'images': [{'id': 'hwqxc3sf2dgg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=108&crop=smart&auto=webp&s=b2e7311c7f23bdd768bd9fa5791166deea112b7c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=216&crop=smart&auto=webp&s=f8122cf832f99a0ada4f723260d2d9eb98dda8a8', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=320&crop=smart&auto=webp&s=be296785ef2d7519725c65a6897b282e885fc6dc', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=640&crop=smart&auto=webp&s=43d5d7342d2463c269d7d4090d386cc03aaeda50', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=960&crop=smart&auto=webp&s=f90be905db748d6ab1871f69ab79acf26b9e3ee1', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?width=1080&crop=smart&auto=webp&s=cb1f0441a4886cc974f8ab4820186b5363ad8310', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/hwqxc3sf2dgg1.jpeg?auto=webp&s=bc91ef3ec702bafe030ad305374e428485bfe541', 'width': 4096}, 'variants': {}}]} |
I built secure-by-construction SQL for AI agents using object-capabilities (+$1,000 bounty if you can break it) | 0 | I've been working on a project called ExoAgent and I'm looking for feedback/red-teaming from this community.
**The problem**: if you're using a DB, you need to give agents SQL-level access to be useful, but giving them a tool like execute\_sql(<string>) is a disaster waiting to happen. One hallucination or clever prompt injection will crash your app or leak PII.
**The approach**: constraining "expressible SQL" to be "safe SQL". You wrap the database in a semantic layer and pass the agent a *constrained capability object*:
1. *Sandboxed Execution*: The agent writes code (JS) that runs inside a secure sandbox (e.g., Deno)
2. *AST Enforcement*: The code exposes a query builder that lets you define your data boundaries. Below is an example of how you define them:
class User extends db.Table('users').as('user') {
id = this.column('id')
name = this.column('name')
@tool()
posts() {
// The agent can ONLY access posts owned by this specific user instance
return Post.on(post => post.userId['='](this.id)).from()
}
}
and the agent then composes arbitrary SQL within your constraints:
api.users()
.join(({ user }) => user.posts())
.select(({ user, post }) => ({ author: user.name, title: post.title }))
.execute()
which compiles down to safe SQL:
SELECT user.name AS author, post.title AS title
FROM users as user
JOIN posts as post
ON user.id = post.user_id -- 'ON' enforced automatically
WHERE user.id = '...' -- 'WHERE' enforced automatically
**The Proof**: I set up a live demo with real stakes. It's two agents side-by-side protecting two different bitcoin wallets. One is guarded by just a system prompt, the other with ExoAgent. If you can bypass the AST/capability layer, you keep the money inside it (\~$1,000).
**Repo & Demo**:
* Github: [https://github.com/ryanrasti/exoagent](https://github.com/ryanrasti/exoagent)
* Live CTF: [https://exoagent.io/challenge](https://exoagent.io/challenge)
*Currently TS only (Vercel AI SDK) — Python port on the roadmap if there's interest.* | 2026-01-29T21:46:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qqm58y/i_built_securebyconstruction_sql_for_ai_agents/ | ryanrasti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqm58y | false | null | t3_1qqm58y | /r/LocalLLaMA/comments/1qqm58y/i_built_securebyconstruction_sql_for_ai_agents/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=108&crop=smart&auto=webp&s=84a91e834ff89a155a5c1497311186d6deed43b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=216&crop=smart&auto=webp&s=7c5647baf86965135e964226ae02d9b6ba662174', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=320&crop=smart&auto=webp&s=f6dbc4820f2a76546ef53834ae0745e0bedc273c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=640&crop=smart&auto=webp&s=853875d8a164789bdfdb8c698127969eff428222', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=960&crop=smart&auto=webp&s=a487ca62918a50ec9e104e6c0d340eaecf0118c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?width=1080&crop=smart&auto=webp&s=377d8ea14356a1e9b771a25cfc203c6b4ef6520b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aUwLvjSbZHM5cEE3sV-XNHgf1G73RimlKsNr6Pi4MUQ.png?auto=webp&s=2f95479245974ef07e2d4b9e42dfdf233d191829', 'width': 1200}, 'variants': {}}]} |
Engram llama.cpp POC. help me grow this, its amazing | 1 | [removed] | 2026-01-29T21:34:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qqltvv/engram_llamacpp_poc_help_me_grow_this_its_amazing/ | Less-Chemist-9996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqltvv | false | null | t3_1qqltvv | /r/LocalLLaMA/comments/1qqltvv/engram_llamacpp_poc_help_me_grow_this_its_amazing/ | false | false | self | 1 | null |
double check | 1 | [removed] | 2026-01-29T21:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qqln1b/double_check/ | IllustriousCarob7598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqln1b | false | null | t3_1qqln1b | /r/LocalLLaMA/comments/1qqln1b/double_check/ | false | false | 1 | null | |
llm double-checking question | 1 | [removed] | 2026-01-29T21:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qqlm9t/llm_doublechecking_question/ | IllustriousCarob7598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqlm9t | false | null | t3_1qqlm9t | /r/LocalLLaMA/comments/1qqlm9t/llm_doublechecking_question/ | false | false | 1 | null | |
LLM double-verification check | 1 | [removed] | 2026-01-29T21:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qqllw9/llm_doubleverification_check/ | IllustriousCarob7598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqllw9 | false | null | t3_1qqllw9 | /r/LocalLLaMA/comments/1qqllw9/llm_doubleverification_check/ | false | false | 1 | null | |
LLM double-checking settings | 1 | Hi everyone!
I'm using LLM models to help solve math, physics, and informatics problems, as well as real-life tasks.
After my main prompt, I add this text for the LLM model I'm talking with:
"Perform multiple thorough double-checks of all your tokens, every word and thought as a whole, every sentence, as well as the structure and idea of the response. Think as rationally and critically as possible, thoroughly re-verifying everything; you may use the Web and all tools available to you, thematic forums, communities, etc.! Do not spare time or memory! Think!Perform multiple double-checks of all sources as well! I am from FML 239, 11-1."
I would like to ask: is it normal to use this text? Does the LLM understand that it needs to double-check all the facts it stated and correct any inaccurate ones? Thank you very much | 2026-01-29T21:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qqldp9/llm_doublechecking_settings/ | IllustriousCarob7598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqldp9 | false | null | t3_1qqldp9 | /r/LocalLLaMA/comments/1qqldp9/llm_doublechecking_settings/ | false | false | self | 1 | null |
A tool to implement a verification layer for AI agents | 0 | Hi everyone,
I've made an open-source tool (called Omni-NLI) to help with verifying the output of LLMs. It can be used to check if a piece of text (called a premise) supports another piece of text (a hypothesis). The main application of a tool like this is to reduce the effect of hallucinations in LLMs and prevent mistakes and errors by AI agents. It can also be used to make a RAG system more reliable, for example, by checking if the retrieved context (from the RAG) actually supports the LLM's final answer that is shown to the user.
Currently, Omni-NLI has the following features:
* Can be installed as a Python package with `pip install omni-nli`.
* Can be used on your own computer, so your data stays local and private.
* Has an MCP interface (for agents) and a REST API for conventional use as a microservice.
* Supports using fact-checking models from different sources (Ollama, OpenRouter, and HuggingFace).
* Can be used to check if an LLM contradicts itself.
* Supports showing the reasoning so you can see why it thinks a claim is wrong.
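If you just want to see the premise/hypothesis idea in isolation, a conceptual stand-in using a stock NLI model looks like this (this is not Omni-NLI's own API; see the docs below for that):

    from transformers import pipeline

    # Any MNLI-style model works as a basic entailment/contradiction check.
    nli = pipeline("text-classification", model="facebook/bart-large-mnli")

    premise = "The retrieved context says the warranty lasts 12 months."
    hypothesis = "The warranty lasts two years."
    # The text/text_pair form runs the pair through the NLI head.
    print(nli({"text": premise, "text_pair": hypothesis}))  # expect 'contradiction'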
In any case, if you are interested to know more, there is more information in the links below:
Project's GitHub repo: [https://github.com/CogitatorTech/omni-nli](https://github.com/CogitatorTech/omni-nli)
Project's documentation: [https://cogitatortech.github.io/omni-nli/](https://cogitatortech.github.io/omni-nli/) | 2026-01-29T21:10:12 | No_Pomegranate7508 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qql66r | false | null | t3_1qql66r | /r/LocalLLaMA/comments/1qql66r/a_tool_to_implement_a_verification_layer_for_ai/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'htxgkwvhscgg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=108&crop=smart&auto=webp&s=157d76aca229bc7a72c86611d4fdbafeb77e2237', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=216&crop=smart&auto=webp&s=58d63425640887405ff3d29347a5044e4208ffaa', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=320&crop=smart&auto=webp&s=3cf9772f82691a474838e5e1576402ef411e938e', 'width': 320}, {'height': 405, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=640&crop=smart&auto=webp&s=4c4cd1e1ba6020ac38b8fcd51e3943a9ca3ecfed', 'width': 640}, {'height': 608, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=960&crop=smart&auto=webp&s=a99954a0508602dd8e898b6dbc865d22a2b52b49', 'width': 960}, {'height': 684, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?width=1080&crop=smart&auto=webp&s=bad27e9134751ae321be0a016bc3a3c7e5916050', 'width': 1080}], 'source': {'height': 1197, 'url': 'https://preview.redd.it/htxgkwvhscgg1.png?auto=webp&s=0fdab2c37a5320490443c821622a7f4578f7ce17', 'width': 1890}, 'variants': {}}]} | |
llm optimizing double-check answers my question | 1 | [removed] | 2026-01-29T21:00:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkwam/llm_optimizing_doublecheck_answers_my_question/ | Majestic-Syrup-157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkwam | false | null | t3_1qqkwam | /r/LocalLLaMA/comments/1qqkwam/llm_optimizing_doublecheck_answers_my_question/ | false | false | self | 1 | null |
just a question about llm verification double-checking | 1 | [removed] | 2026-01-29T20:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkrks/just_a_question_about_llm_verification/ | Majestic-Syrup-157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkrks | false | null | t3_1qqkrks | /r/LocalLLaMA/comments/1qqkrks/just_a_question_about_llm_verification/ | false | false | self | 1 | null |
my question about llm double-checking process | 1 | [removed] | 2026-01-29T20:52:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkoxm/my_question_about_llm_doublechecking_process/ | Majestic-Syrup-157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkoxm | false | null | t3_1qqkoxm | /r/LocalLLaMA/comments/1qqkoxm/my_question_about_llm_doublechecking_process/ | false | false | self | 1 | null |
I ran GPT-4.1 Nano vs Gemini 2.5 Pro vs Llama 4 (17B) on a legal RAG workload | 1 | [removed] | 2026-01-29T20:48:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkkwp/i_ran_gpt41_nano_vs_gemini_25_pro_vs_llama_4_17b/ | OldBlackandRich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkkwp | false | null | t3_1qqkkwp | /r/LocalLLaMA/comments/1qqkkwp/i_ran_gpt41_nano_vs_gemini_25_pro_vs_llama_4_17b/ | false | false | 1 | null | |
question about llm double-checking prompt-engineering solution | 1 | [removed] | 2026-01-29T20:46:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkj5c/question_about_llm_doublechecking/ | Majestic-Syrup-157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkj5c | false | null | t3_1qqkj5c | /r/LocalLLaMA/comments/1qqkj5c/question_about_llm_doublechecking/ | false | false | self | 1 | null |
Is this "brute force" system prompt effective for coding tasks? | 1 | [removed] | 2026-01-29T20:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkhoh/is_this_brute_force_system_prompt_effective_for/ | Majestic-Syrup-157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkhoh | false | null | t3_1qqkhoh | /r/LocalLLaMA/comments/1qqkhoh/is_this_brute_force_system_prompt_effective_for/ | false | false | self | 1 | null |
I ran GPT-4.1 Nano vs Gemini 2.5 Pro vs Llama 4 (17B) on a legal RAG workload. | 1 | [removed] | 2026-01-29T20:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qqkc11/i_ran_gpt41_nano_vs_gemini_25_pro_vs_llama_4_17b/ | OldBlackandRich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqkc11 | false | null | t3_1qqkc11 | /r/LocalLLaMA/comments/1qqkc11/i_ran_gpt41_nano_vs_gemini_25_pro_vs_llama_4_17b/ | false | false | self | 1 | null |
AI TEXT detection BYPASS | 0 | Hello! I need advice from people who have really dug into LLM/agents/local models.
I want to set up a conditional “agent” in ChatGPT (I have the paid version) that will:
detect AI style in text (not necessarily a 100% detector, more like a diagnosis: why does the text look “robotic”),
perform deep rewriting so that the text looks natural, without typical “LLM patterns” (bureaucratic language, identical rhythm, overly smooth logic, clichéd phrases, overgeneralizations, etc.).
**What I've already tried:**
Found a large list of AI text characteristics on Wikipedia → compiled a PDF “reference book,” uploaded it to a custom GPT/agent, and asked it to always check the text for these characteristics.
I found and downloaded a large book/guide on deep rewriting (100+ pages, academic) → also uploaded it as a reference so that the model would rely on methods and rules.
**But**
It doesn't work well. The rewriting is still always obvious — even without a detector, I can see that it was written by AI.
It seems that the model either:
does not use sources systematically, or follows the rules formally, but the characteristic LLM style remains.
**Questions for the community:**
What am I doing wrong conceptually? Why does “download the PDF reference + ask to check” not work?
Are there adequate local methods that actually improve the “naturalness” of the text?
What models/tools would you recommend for local rewriting?
Why is there still no “normal solution” to this problem in 2026? Is it fundamentally difficult, or do I just not know the right tools? | 2026-01-29T20:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qqk8lh/ai_text_detection_bypass/ | No-Entertainment9773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqk8lh | false | null | t3_1qqk8lh | /r/LocalLLaMA/comments/1qqk8lh/ai_text_detection_bypass/ | false | false | self | 0 | null |
I gave an AI Agent one job. It forgot what it was doing 47 times. (Building in Cursor) | 0 | I spent the last few weeks trying to build a truly "long-running" AI agent using Cursor. I thought it would be simple: give it a loop, give it a goal, and let it run.
**I was wrong.**
The agent would work perfectly for 10 minutes, and then completely lose the plot. It would overwrite its own progress, hallucinate completed tasks, or just loop endlessly. The issue wasn't the model (I was using Claude), it was **state management** and context pollution.
I finally managed to stabilize it by decoupling the "memory" from the chat context.
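The core of the fix is a tiny task ledger on disk that the agent re-reads every turn, instead of trusting the chat history. A minimal sketch of the pattern (not the exact code from the write-up):

    import json, pathlib

    LEDGER = pathlib.Path("agent_state.json")

    def load_state():
        return json.loads(LEDGER.read_text()) if LEDGER.exists() else {"todo": [], "done": []}

    def save_state(state):
        LEDGER.write_text(json.dumps(state, indent=2))

    state = load_state()
    if state["todo"]:
        task = state["todo"].pop(0)
        # ... hand `task` to the agent with a fresh, minimal context ...
        state["done"].append(task)
        save_state(state)  # progress survives even if the context resets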
I wrote up a 3-part breakdown of the failure and the final code implementation. If you're struggling with agents losing focus, this might save you some headaches.
**The Series:**
* **Part 1:** [(Why it kept failing)](https://medium.com/vibe-coding-chronicles/i-gave-an-ai-agent-one-job-it-forgot-what-it-was-doing-47-times-78b3e853932d?source=friends_link&sk=023d7ce441320b4d354b473906cd8c6d)
* **Part 3:** [(The full code & guide)](https://medium.com/vibe-coding-chronicles/building-long-running-ai-agents-in-cursor-a-complete-guide-part-3-6be139fad95e?source=friends_link&sk=b34e40eb17f8e5c222b55a26452861d5)
Has anyone else hit this "memory wall" with Cursor’s composer mode yet? | 2026-01-29T20:28:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qqk21p/i_gave_an_ai_agent_one_job_it_forgot_what_it_was/ | IllPhrase6279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqk21p | false | null | t3_1qqk21p | /r/LocalLLaMA/comments/1qqk21p/i_gave_an_ai_agent_one_job_it_forgot_what_it_was/ | false | false | self | 0 | null |
Looking for fast local TTS with zero shot cloning? | 2 | Hey everyone, we tried Qwen3 but were very disappointed in its runtime; I have no idea where that 90ms benchmark came from, but our runtime on a 3090 was nearly 2 orders of magnitude off that.
We like Supertonic 2 a lot, but as far as I can tell we can't do zero-shot cloning locally. What a shame.
Any alternatives? Like anything at all that could be like even 30% of the quality of [character.ai](http://character.ai) for example? We don't need anything high quality, we're going to do PP on the audio to stylize and mess it anyways, it just needs to sound like the reference. Thanks! | 2026-01-29T20:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qqjs1z/looking_for_fast_local_tts_with_zero_shot_cloning/ | enterguild | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqjs1z | false | null | t3_1qqjs1z | /r/LocalLLaMA/comments/1qqjs1z/looking_for_fast_local_tts_with_zero_shot_cloning/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=108&crop=smart&auto=webp&s=a2f095072d7ec8cf53cf552cba7b9e6e836a5c53', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=216&crop=smart&auto=webp&s=f3aaa6cf6a6444ca38cc1fba5ed75cdf36dd4f1d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=320&crop=smart&auto=webp&s=4d0fbe4e13c7e46bd18adb61c9b4b4c720234437', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=640&crop=smart&auto=webp&s=4d017a26260c32cc01211e916547fcd279febfec', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=960&crop=smart&auto=webp&s=fdcff2e2b4b76f9c7095f3ce87ba1daa638068ea', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?width=1080&crop=smart&auto=webp&s=17577d1ffe827f6fbf5360e1b8fdb0723e8fa0da', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VByatpjC4OWt09UuhmWM1sP5CwhM1Ds9alijJu4qPqU.jpeg?auto=webp&s=b269ef87fe2049b71f804802f2ed4cc9606d9d1b', 'width': 1200}, 'variants': {}}]} |
Returning to self-hosting LLMs after a hiatus | 0 | I am fairly newbish when it comes to self-hosting LLMs. My current PC has:
* CachyOS
* 32GB RAM
* 8GB VRAM (RTX 2080)
Around 1-2 years ago I used Ollama + OpenWebUI to start my journey into self-hosting LLMs. At the time my PC used Windows 11 and I used WSL2 Ubuntu 22.04 to host Ollama (via the command line) and OpenWebUI (via Docker).
This setup allowed me to run up to 4B parameter text-only models with okay speed. I did not know how to configure the backend to optimize my setup and thus left everything run on default.
After returning to self-hosting I read various reddit posts about the current state of local LLMs. Based on my limited understanding:
* Ollama - considered slow since it is a wrapper around llama.cpp (that wasn't the only issue, but it's the one that stuck with me the most).
* OpenWebUI - bloated and also received backlash for its licensing changes.
I have also come up with a list of what I would like self-hosting to look like:
* Ability to self-host models from HuggingFace.
* Models should not be limited to text-only.
* An alternative UI to OpenWebUI that has similar functionalities and design. This decision stems from the reported bloat (I believe a redditor mentioned the Docker image was 40GB in size, but I cannot find the post, so take my comment with a grain of salt).
* Ability to swap models on the fly like Ollama.
* Ability to access local LLMs using VSCode for coding tasks.
* Ability to have somewhat decent context length.
I have seen some suggestions like llama-swap for multiple models at runtime.
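For what it's worth, llama-swap is driven by a config that maps model names to llama-server commands, roughly like this (a sketch only; verify the exact schema against the llama-swap README):

    # llama-swap-style config sketch -- field names are assumptions
    models:
      "qwen3-4b":
        cmd: llama-server --port ${PORT} -m /models/qwen3-4b.gguf -c 8192 -ngl 99
      "llama-3.2-3b":
        cmd: llama-server --port ${PORT} -m /models/llama-3.2-3b.gguf -c 4096 -ngl 99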
Given these requirements, my questions are as follows:
1. What is the recommended frontend + backend stack?
Thoughts: I have seen some users suggest using the built-in llama.cpp UI, or some suggested simply vibe-coding a personal frontend. llama.cpp lacks some functionality I require, while vibe-coding might be the way, but maybe an existing alternative is already here. In addition, if I am wrong about the OpenWebUI bloat, I might as well stay with it, but I feel unsure due to my lack of knowledge. Additionally, it appears llama-swap would be the way to go for the backend; however, I am open to alternative suggestions.
2. What is the recommended model for my use case and current setup?
Thoughts: previously I used the Llama 3.2 3B model, since it was the best one available at the time. I believe there have been better models since then, and I would appreciate a suggestion.
3. What VSCode integration would you suggest that is 100% secure?
Thoughts: if there is a possibility to integrate local LLMs with VSCode without relying on third-party extensions, that would be amazing, since an additional dependency does introduce another source of potential data leaks.
4. How could I increase context window so the model has enough context to perform some tasks?
Thoughts: an example - VSCode coding assistant, that has the file/folder as context.
5. Is it possible to give a .mp4 file to the LLM and ask it to summarize it? If so, how?
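(On question 5: one common local approach is transcribe-then-summarize. A rough sketch, where openai-whisper needs ffmpeg installed and the port/model name are placeholders for a llama-server style OpenAI-compatible endpoint:)

    import whisper
    from openai import OpenAI

    # 1) Transcribe the video; whisper hands the .mp4 to ffmpeg for the audio.
    transcript = whisper.load_model("base").transcribe("talk.mp4")["text"]

    # 2) Summarize with a local OpenAI-compatible server (e.g. llama-server).
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="local",  # llama-server serves one model regardless of this name
        messages=[{"role": "user", "content": f"Summarize this transcript:\n{transcript}"}],
    )
    print(resp.choices[0].message.content)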
Final thoughts: I am happy to also receive links to tutorials/documentation/videos explaining how something can be implemented. I will continue reading the documentation of llama.cpp and other tools. Thanks in advance guys! | 2026-01-29T20:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qqjru3/returning_to_selfhosting_llms_after_a_hiatus/ | Over-Advertising2191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqjru3 | false | null | t3_1qqjru3 | /r/LocalLLaMA/comments/1qqjru3/returning_to_selfhosting_llms_after_a_hiatus/ | false | false | self | 0 | null |
I built a tool to copy your entire repo for AI context (open source) | 0 | I built a small command-line tool to solve the Context Limit headache when coding with AI (Claude/DeepSeek).
If you've ever tried to paste 10 files into Claude and hit the message limit because you accidentally copied a 5MB `package-lock.json` or a compiled binary, this is for you.
**pack-repo-4ai** is a simple CLI that:
1. Scans your current folder.
2. Filters out the junk (logs, env vars, build folders, binaries).
3. Formats the code into a single, clean prompt that tells the AI exactly which file is which.
4. Copies it to your clipboard.
I use it daily to feed entire features into any AI's web UI (like DeepSeek R1).
To use it: `pip install pack-repo-4ai` then just type `pack-repo` in your terminal.
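Under the hood the idea is just walk, filter, and label. A rough reimplementation sketch (not the tool's actual source) for anyone curious:

    import pathlib

    SKIP_DIRS = {".git", "node_modules", "dist", "build", "__pycache__"}
    SKIP_FILES = {"package-lock.json", ".env"}

    def pack(root="."):
        parts = []
        for p in sorted(pathlib.Path(root).rglob("*")):
            if p.is_dir() or set(p.parts) & SKIP_DIRS or p.name in SKIP_FILES:
                continue
            try:
                text = p.read_text(encoding="utf-8")
            except (UnicodeDecodeError, OSError):
                continue  # binary or unreadable -> junk, skip it
            parts.append(f"===== {p} =====\n{text}")
        return "\n\n".join(parts)  # pipe this into your clipboard tool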
Hope it saves you some copy-paste time!
https://preview.redd.it/i3ikgfwzfcgg1.jpg?width=2816&format=pjpg&auto=webp&s=588c1ccaed2699dfc23b2a2f496fe932fa4c7c96
| 2026-01-29T19:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qqj6nd/i_built_a_tool_to_copy_your_entire_repo_for_ai/ | TerribleGiraffe34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqj6nd | false | null | t3_1qqj6nd | /r/LocalLLaMA/comments/1qqj6nd/i_built_a_tool_to_copy_your_entire_repo_for_ai/ | false | false | 0 | null | |
Clawdbot vs. Zo Computer | 0 | Clawdbot lit X on fire because it showed how people could actually get things done with their AI. By giving AI access to a computer, AI can do way more tasks autonomously, from sending emails to buying stuff on Amazon.
This has been the vision of Zo since day one. I'm Jam, from the Zo Computer team, and I wanted to share what we can do for you.
If stage 1 for AI was chatbots and stage 2 was vibe coding, stage 3 is about integrating your AI to your digital life so AI can do more for you.
The TAM of people who want to vibe code an app is small, but nearly everyone wants to
- spend less time doing administrative work
- get things done faster
- build tools and systems for the exact things they need
Zo is the companion that gets things done. Get started in minutes. No server setup required. Plus, you can just text it.
Underneath the hood, it comes with your own cloud computer that works for you 24/7 so you don’t need to deal with the security risks.
https://preview.redd.it/dvv3gsupfcgg1.png?width=1074&format=png&auto=webp&s=f6604a2c8c8df9d623bed48e5770d745047a1f39
You get an intuitive OS with a ton of integrations baked in, from SMS to email, calendar, and apps (Notion, Dropbox, Google, Linear, Airtable).
https://preview.redd.it/nni97lmrfcgg1.png?width=1370&format=png&auto=webp&s=a10d19aa605c8e8d5c339d3b9437b8c198d913ae
You can chat with any models or bring your own AI subscriptions (ie use Claude inside Zo).
https://preview.redd.it/djn4218sfcgg1.png?width=1370&format=png&auto=webp&s=f7e6f4be372d25cf86f9411efea6ab83a79d613c
You get a file storage system so you can remix any and all of your context.
There’s a helpful automations tab where you can set up text, email, or in-app tasks in minutes. People have used our automations feature to make custom email newsletters, budget trackers that digest information from their email receipts, and stock price reminders.
https://preview.redd.it/ojcc3n8mfcgg1.png?width=1512&format=png&auto=webp&s=8f3dc476a91d6f1c6d77b40df88cf593436de9f4
Because your Zo runs on a server, it never sleeps. Zo can remotely browse the Internet to do anything for you: post on X, buy things off Amazon, schedule meetings, clear your inbox, you name it.
Finally, you can use the Zo app or just text Zo to get things done. Retrieve files, ask Zo to do work, or set up reminders for yourself, whatever.
For advanced users: you can SSH into Zo and give it access to the content on your laptop or use it as a remote development environment. You can connect the Zo MCP Server and use your favorite AI coding tool such as Claude Code or Cursor.
People already have their favorite tools, so keep them. You can keep enjoying your existing tool subscriptions inside of Zo for the best of both worlds. With Zo, you can give your Claude a cloud computer.
We’re excited to hear what you think of Zo. If you have more questions, check out our docs for more information.
| 2026-01-29T19:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qqj6k3/clawdbot_vs_zo_computer/ | Independent-Fly2293 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqj6k3 | false | null | t3_1qqj6k3 | /r/LocalLLaMA/comments/1qqj6k3/clawdbot_vs_zo_computer/ | false | false | 0 | null | |
Implementing Enhanced Memory using FSRS6 in Rust to replace RAG for Local Agents. Thoughts on this architecture? | 2 | I have been engineering a solution for long term state persistence in local LLMs. My primary issue with standard RAG implementations is that they rely solely on vector similarity. This often results in context window pollution where the model is flooded with low relevance tokens simply because they share semantic overlap.
I wanted to share the architectural pattern I used to solve this.
The core concept replaces flat vector search with the FSRS 6 algorithm. The system treats memory as a directed graph where every node is assigned a specific retrievability score.
The logic follows three biological principles.
First is Reinforcement. When the Agent successfully retrieves a memory node, the edge weight is strengthened.
Second is Decay. If a memory remains unaccessed, the retrievability score follows a logarithmic decay curve. This mimics biological forgetting and naturally deprioritizes outdated information.
Third is Pruning. The system enforces a strict threshold for context injection. Only memories with a high retrievability score are passed to the prompt. This maintains a high signal to noise ratio.
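As a toy illustration of the decay-plus-threshold idea (not the actual FSRS 6 parameters, which come from a fitted forgetting curve):

    import math, time

    def retrievability(last_access: float, stability_days: float) -> float:
        days_idle = (time.time() - last_access) / 86400
        return math.exp(-days_idle / stability_days)  # decays toward 0 when unused

    def context_injection(memories, threshold=0.6):
        # Pruning: only high-retrievability nodes ever reach the prompt.
        return [m for m in memories
                if retrievability(m["last_access"], m["stability"]) >= threshold]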
Regarding the implementation, I engineered this as a standalone server in Rust utilizing tokio for the async runtime and petgraph for the data structure.
The performance gains were significant. The initial Python prototype suffered from high serialization overhead with graph traversal latencies around 200ms. The Rust rewrite reduced this to sub 8ms.
For concurrency, I am currently using a standard RwLock on the graph structure. Since the read to write ratio is approximately 100 to 1, this is stable, but I am investigating lock free data structures to further optimize the throughput.
I am testing this integration on Llama 3 via a Model Context Protocol interface.
The repository is open for code review if anyone wants to critique the Rust memory safety or the graph traversal logic.
[https://github.com/samvallad33/vestige](https://github.com/samvallad33/vestige) | 2026-01-29T19:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qqj5np/implementing_enhanced_memory_using_fsrs6_in_rust/ | ChikenNugetBBQSauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqj5np | false | null | t3_1qqj5np | /r/LocalLLaMA/comments/1qqj5np/implementing_enhanced_memory_using_fsrs6_in_rust/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=108&crop=smart&auto=webp&s=e42737eca4f0dd80331b6d3119fc3e2c538c13cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=216&crop=smart&auto=webp&s=809aca411646c0a12fb837f1e7520be15dfac629', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=320&crop=smart&auto=webp&s=a4b573da416a6ac46cf712b3ae8d49d962e174d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=640&crop=smart&auto=webp&s=0dc96fc86481e17ddebb24d471e98e8953ec4640', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=960&crop=smart&auto=webp&s=5670cc4b70106cda94b588037fb431aec2bd8969', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?width=1080&crop=smart&auto=webp&s=7a8dc272481f285c8aff0581b43c9f849763e84d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hE4ji_DLAlfsp9RaWwkRfMp2YIXOqJSTPrjBlHHkMFE.png?auto=webp&s=026d3af0a9e9a70d53b08bbe5417ee697839c7ef', 'width': 1200}, 'variants': {}}]} |
LingBot-World outperforms Genie 3 in dynamic simulation and is fully Open Source | 533 | The newly released LingBot-World framework offers the first high capability world model that is fully open source, directly contrasting with proprietary systems like Genie 3. The technical report highlights that while both models achieve real-time interactivity, LingBot-World surpasses Genie 3 in dynamic degree, meaning it handles complex physics and scene transitions with greater fidelity. It achieves 16 frames per second and features emergent spatial memory where objects remain consistent even after leaving the field of view for 60 seconds. This release effectively breaks the monopoly on interactive world simulation by providing the community with full access to the code and model weights.
Model: [https://huggingface.co/collections/robbyant/lingbot-world](https://huggingface.co/collections/robbyant/lingbot-world)
AGI will be very near. Let's talk about it! | 2026-01-29T19:54:56 | https://v.redd.it/fjyoor8kecgg1 | Electrical-Shape-266 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqj51h | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fjyoor8kecgg1/DASHPlaylist.mpd?a=1772308518%2CMzk3YmViNTJkYmY5YmQ2NmM0ZWI0YWRiZmVmYmRjZTA4OGMxYTVmOTdiZGE1YzZhNDA4ZmNiMDVhZmQ2NWZjOQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/fjyoor8kecgg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/fjyoor8kecgg1/HLSPlaylist.m3u8?a=1772308518%2CNDliZDVhNGMzZGJlMDk2MjM1NDk5YzU2MmMwMDNkMWI0Y2YyNDUzYjAyZDNkNDljZTJiZTIxNDRhZjNlMmY3OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fjyoor8kecgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1qqj51h | /r/LocalLLaMA/comments/1qqj51h/lingbotworld_outperforms_genie_3_in_dynamic/ | false | false | 533 | {'enabled': False, 'images': [{'id': 'NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=108&crop=smart&format=pjpg&auto=webp&s=7da92610c7be9ac78e79d3ddd2a3112df61e0783', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=216&crop=smart&format=pjpg&auto=webp&s=d9a50a2d833cc63ea203d33569845b5ad3613274', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=320&crop=smart&format=pjpg&auto=webp&s=3b4690d5ebf63e5c337779144796aa34ad9910ed', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=640&crop=smart&format=pjpg&auto=webp&s=f0b4de3364e6a5a2eb292f1edcc94bcdd658f169', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=960&crop=smart&format=pjpg&auto=webp&s=60d8b8804bb7b5d5dc86f81a9bf3679bb7560a86', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9a1d2c0352bd7e70a008f825443af666cee45a65', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NWM3YmNxOGtlY2dnMazYtBKu82jdVCrAUapl0f29ZcySaNJ_OhsJC51jAkVT.png?format=pjpg&auto=webp&s=a98d7abfff2b2c876ae6c39f07c8bca5e3658814', 'width': 1280}, 'variants': {}}]} | |
I wrote a CLI to pack codebases for DeepSeek R1 (XML context + smart ignores) | 1 | I built a small command-line tool to solve the "Context Limit" headache when coding with AI.
If you've ever tried to paste 10 files into an LLM and hit the message limit because you accidentally copied a 5MB `package-lock.json` or a compiled binary, this is for you.
pack-repo-4ai is a simple CLI that:
1. Scans your current folder.
2. Filters out the junk (logs, env vars, build folders, binaries).
3. Formats the code into a single, clean prompt that tells the AI exactly which file is which.
4. Copies it to your clipboard.
I use it daily to feed entire features into DeepSeek R1.
To use it: `pip install pack-repo-4ai` then just type `pack-repo` in your terminal.
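Under the hood, the filtering step is the part that matters most. A minimal sketch of that kind of logic (the ignore lists and size cap here are illustrative, not the package's actual defaults):

```python
from pathlib import Path

# Illustrative ignore rules; the real tool ships its own defaults.
IGNORED_DIRS = {".git", "node_modules", "dist", "build", "__pycache__"}
IGNORED_FILES = {"package-lock.json", ".env"}
MAX_BYTES = 200_000  # skip huge files such as compiled binaries

def collect_files(root: str) -> list[Path]:
    files = []
    for path in Path(root).rglob("*"):
        if any(part in IGNORED_DIRS for part in path.parts):
            continue
        if path.is_file() and path.name not in IGNORED_FILES \
                and path.stat().st_size <= MAX_BYTES:
            files.append(path)
    return files

def pack(root: str) -> str:
    # Label each file so the model knows exactly which file is which.
    chunks = [f"=== {p} ===\n{p.read_text(errors='ignore')}"
              for p in collect_files(root)]
    return "\n\n".join(chunks)
```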
Hope it saves you some copy-paste time! | 2026-01-29T19:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qqiqsw/i_wrote_a_cli_to_pack_codebases_for_deepseek_r1/ | TerribleGiraffe34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqiqsw | false | null | t3_1qqiqsw | /r/LocalLLaMA/comments/1qqiqsw/i_wrote_a_cli_to_pack_codebases_for_deepseek_r1/ | false | false | self | 1 | null |
Is it true HDDs are better than NVMe/SSD? | 0 | I see articles that say drive speed is important for LLMs, and even some major tech articles say to go for speed. But then, I know that NVMe and SSD drives have a limited lifespan and cost more. I have 12TB of HDD storage across two 6TB drives, and I am able to load models from them, though some take 30 seconds or longer, and I can see the drive max out during access.
I am only talking about the models themselves; I would assume the OS (Linux) on an NVMe is fine.
With so many varied answers out there, what is the scoop? I figure the day-to-day folks here would know best.
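For a rough sense of scale, load time is approximately model size divided by sequential read throughput. A quick back-of-the-envelope sketch (the throughput figures are ballpark assumptions, not measurements):

```python
# Rough load-time estimate: size / sequential read throughput.
model_gb = 18  # e.g., a ~30B model at a 4-bit quant
throughput_gbps = {"HDD": 0.2, "SATA SSD": 0.5, "NVMe": 3.5}  # GB/s, ballpark

for drive, gbps in throughput_gbps.items():
    print(f"{drive}: ~{model_gb / gbps:.0f} s to load {model_gb} GB")
```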
Thanks.
(PS: As I am building up a dedicated LLM machine, it matters to get as much edge as I can, since it will not be a beast like so many here.) | 2026-01-29T19:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qqikld/is_it_true_hd_are_better_then_nvmessd/ | Ztoxed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqikld | false | null | t3_1qqikld | /r/LocalLLaMA/comments/1qqikld/is_it_true_hd_are_better_then_nvmessd/ | false | false | self | 0 | null |
Why are small models (32B) scoring close to frontier models? | 128 | I keep seeing benchmark results where models like Qwen-32B or GLM-4.x Flash score surprisingly well for their size, coming close to much larger models like DeepSeek V3, Kimi K2.5 (1T), or GPT-5.x.
Given the huge gap in model size and training compute, I’d expect a bigger difference.
So what’s going on?
Are benchmarks basically saturated?
Is this distillation / contamination / inference-time tricks?
Do small models break down on long-horizon or real-world tasks that benchmarks don’t test?
Curious where people actually see the gap show up in practice. | 2026-01-29T19:27:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qqidxp/why_are_small_models_32b_scoring_close_to/ | Financial-Cap-8711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqidxp | false | null | t3_1qqidxp | /r/LocalLLaMA/comments/1qqidxp/why_are_small_models_32b_scoring_close_to/ | false | false | self | 128 | null |
Seeking best LLM models for "Agentic" Unity development (12GB VRAM) | 0 | Hi everyone!
I'm looking for recommendations on the most capable models for a coding agent workflow. I’m currently working on a Unity project and need an assistant that can handle project-wide analysis and code editing. Ideally, I’m looking for a model that excels at surgical code edits (using DIFFs or SEARCH/REPLACE blocks) rather than rewriting entire files.
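For anyone unfamiliar, the SEARCH/REPLACE blocks meant here (the edit format Aider uses) look roughly like this; the file path and snippet are purely illustrative:

```
Assets/Scripts/Player.cs
<<<<<<< SEARCH
    private float speed = 5f;
=======
    [SerializeField] private float speed = 5f;
>>>>>>> REPLACE
```

A model that emits these blocks reliably only touches the lines that change, which is what makes the edits surgical.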
**My Specs:**
* **GPU:** RTX 3060 12GB
* **RAM:** 64GB DDR4
* **CPU:** Ryzen 5 5600x
* **Stack:** LM Studio (local server) + Zed and Aider.
**Models I’ve tested so far (results have been underwhelming):**
* qwen3-53b-a3b-2507-total-recall-v2-master-coder-i1
* zai-org/glm-4.7-flash
* ibm/granite-4-h-tiny
* gpt-oss-20b
* qwen/qwen3-14b
* mistralai/mistral-nemo-instruct-2407
* qwen2.5-coder-14b-instruct-abliterated
I usually keep the temperature around 0.2 for better determinism.
Given my 12GB VRAM limit (though I have plenty of system RAM for GGUF offloading), what models would you recommend specifically for Unity/C# and agentic tasks? Are there any specific quants or fine-tunes that punch above their weight in "SEARCH/REPLACE" consistency?
Thanks in advance! | 2026-01-29T19:25:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qqibnr/seeking_best_llm_models_for_agentic_unity/ | Ctrixago | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqibnr | false | null | t3_1qqibnr | /r/LocalLLaMA/comments/1qqibnr/seeking_best_llm_models_for_agentic_unity/ | false | false | self | 0 | null |
Cerebras MiniMax-M2.1-REAP-139B-A10B - Mradermacher Q4_K_S tested | 6 | [Reap Minimax ](https://preview.redd.it/18rjpvsz9cgg1.png?width=1002&format=png&auto=webp&s=99beac3c955271994afa81707f027ef5d91ddea6)
Tested REAP version. Prompt:
"Act as a Lead Systems Architect. Design a Type-1 Bare-metal Hypervisor intended for Advanced Malware Debugging. The goal is to create a 'Transparent Execution Environment.'
VMCS Configuration: Implement the initialization of Host and Guest states. Ensure the MSR Bitmap is configured to intercept specific register reads without being detected by the Guest.
EPT Logic: Implement an EPT-based 'Page Redirection' mechanism. When the Guest attempts to read a specific physical page, the EPT Violation handler must transparently redirect the access to a shadow page. Provide the C/Assembly logic for the EPT walk and modification.
Timing Jitter Compensation: Propose a mathematical and technical solution to mitigate the timing delta caused by VM-Exits. Use IA32_TIME_STAMP_COUNTER offsets to ensure that the Guest's RDTSC measurements remain consistent with a non-virtualized environment.
VMM Lifecycle: Describe the transition from the UEFI execution phase to the VMX-root operation. How do you handle the transition of the Global Descriptor Table (GDT) and Task State Segment (TSS)?"
92 tokens/sec on an RTX 6000 96GB. Really good. Will test more. | 2026-01-29T19:25:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qqibct/cerebras_minimaxm21reap139ba10b_mradermacher_q4_k/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqibct | false | null | t3_1qqibct | /r/LocalLLaMA/comments/1qqibct/cerebras_minimaxm21reap139ba10b_mradermacher_q4_k/ | false | false |  | 6 | null |
Free Web Interface for Kokoro TTS (Batch Support + Zero GPU + No Install Needed) | 2 | Hey everyone,
I know many of us are running Kokoro locally, but sometimes I just need to process a longer text file on a device where I don't have my environment set up (or I need to send a link to a client/friend who can't use a CLI).
I spun up a hosted web UI that runs on **Hugging Face Zero GPU**.
**Why I built it:**
The raw model is great, but processing long texts is annoying manually. I added a "Batch Processing" feature that:
1. Splits your input text by sentence/paragraph.
2. Queues the generation chunks.
3. Offers a combined audio file or a ZIP of individual segments.
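For the curious, the splitting step is conceptually simple; a minimal sketch of sentence-level chunking with a size cap (the regex and cap are illustrative, not the exact code behind the UI):

```python
import re

def chunk_text(text: str, max_chars: int = 400) -> list[str]:
    # Naive sentence split; a production splitter should handle
    # abbreviations, decimals, and similar edge cases.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Keeping chunk boundaries at sentence ends also helps avoid mid-word artifacts when the segments are concatenated.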
**It is completely free to use (no sign-up/email harvesting).**
**Link:** [https://algoran.eu/apps/kokoro-tts](https://algoran.eu/apps/kokoro-tts)
It's running on the standard Kokoro weights. If you guys have suggestions on better ways to handle the text splitting logic to prevent artifacts between chunks, I'd love to hear them. | 2026-01-29T19:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qqi9zx/free_web_interface_for_kokoro_tts_batch_support/ | Suitable-Ad-4809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqi9zx | false | null | t3_1qqi9zx | /r/LocalLLaMA/comments/1qqi9zx/free_web_interface_for_kokoro_tts_batch_support/ | false | false | self | 2 | null |
Is 50tps good? | 0 |
So I managed to get Llama 3.2 running on my phone using Termux. I ran it with --verbose and saw my tps was ~50. Is that fast? It's my first time running AI locally. | 2026-01-29T19:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qqi7zj/is_50tps_good/ | Kindly_Swim8051 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqi7zj | false | null | t3_1qqi7zj | /r/LocalLLaMA/comments/1qqi7zj/is_50tps_good/ | false | false | self | 0 | null |
Kimi K2.5 - trained on Claude? | 0 | 2026-01-29T19:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qqi54b/kimi_k25_trained_on_claude/ | aoleg77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqi54b | false | null | t3_1qqi54b | /r/LocalLLaMA/comments/1qqi54b/kimi_k25_trained_on_claude/ | false | false | 0 | null | ||
Introducing daggr: a new way of building apps | 0 | Hey folks, it's Merve from Hugging Face!
we just launched daggr, a new library for building complex AI workflows, combining local models, Gradio apps, remote endpoints, and Spaces!
https://preview.redd.it/kn6nnyp09cgg1.png?width=1920&format=png&auto=webp&s=5e1b8728ebdd0ba865f88c66128ca71a73ff1bea
daggr combines the best of all worlds: mix and match components programmatically, and inspect the pipeline visually 🙌🏻
We are looking forward to your feedback! | 2026-01-29T19:17:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qqi3xh/introducing_daggr_a_new_way_of_building_apps/ | unofficialmerve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqi3xh | false | null | t3_1qqi3xh | /r/LocalLLaMA/comments/1qqi3xh/introducing_daggr_a_new_way_of_building_apps/ | false | false | self | 0 | null |
I vibe coded a local audio inference engine for Qwen3-TTS and Qwen3-ASR | 1 | Supports Qwen3-TTS models (0.6B-1.7B) and ASR models. Docker + native deployment options.
**Key features:**
* 🎭 Voice cloning with reference audio
* 🎨 Custom voice design from text descriptions
* ⚡ MLX + Metal GPU acceleration for M1/M2/M3
* 🎨 Modern React UI included
If you like local audio models, give it a try. Works best in local dev mode for now. | 2026-01-29T19:03:06 | https://github.com/agentem-ai/izwi-audio | zinyando | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qqhoyo | false | null | t3_1qqhoyo | /r/LocalLLaMA/comments/1qqhoyo/i_vibe_coded_a_local_audio_inference_engine_for/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=108&crop=smart&auto=webp&s=b612e4fd9b186e8ee86a855ae2b46656ae960fe7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=216&crop=smart&auto=webp&s=17f6b5c963423dfbffbcc7e262e90e00db7fb8a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=320&crop=smart&auto=webp&s=68c1dead06e6045d815c29a82731ed8ef1aec66a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=640&crop=smart&auto=webp&s=3aec5e5d57f321a380c6ab254dd462af15b0dacd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=960&crop=smart&auto=webp&s=3587c36e6f6864f666af901f63d8679970b3de4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?width=1080&crop=smart&auto=webp&s=09f508ff8263f23aa50468defd970171d0871d0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NEZ6W_ZMZ4akkLge1_kQuQ6-8vQwZ4Eq-Bba5j4xBrY.png?auto=webp&s=b8c22f4a2adfbbd1e48fd9b0f9b4d10f75a06c68', 'width': 1200}, 'variants': {}}]} |
AI Max 395+ and vLLM | 6 | Hey everyone!!
Is anyone using vLLM on an AI Max 395+ system? I would love some feedback on the performance of 7B, 20B, and 30B models 🙏
I’m looking to run batch inference of Ministral 8B and then sometimes use bigger models for other tasks.
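For the batch part specifically, this maps onto vLLM's offline `LLM` API. A minimal sketch (the model id and sampling values are just examples, and on Strix Halo this assumes a working ROCm build of vLLM):

```python
from vllm import LLM, SamplingParams

# Offline batch inference: one engine, many prompts in a single call.
llm = LLM(model="mistralai/Ministral-8B-Instruct-2410")
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize: ...", "Classify: ..."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```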
Thank you for your time. | 2026-01-29T18:57:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qqhjne/ai_max_395_and_vllm/ | KnownAd4832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqhjne | false | null | t3_1qqhjne | /r/LocalLLaMA/comments/1qqhjne/ai_max_395_and_vllm/ | false | false | self | 6 | null |
Mistral CEO Arthur Mensch: “If you treat intelligence as electricity, then you just want to make sure that your access to intelligence cannot be throttled.” | 537 | 2026-01-29T18:56:08 | https://v.redd.it/wd12dl725cgg1 | Wonderful-Excuse4922 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqhhtx | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wd12dl725cgg1/DASHPlaylist.mpd?a=1772304985%2CODhhMDRjZjBiZDUwMmUxYjVhMzhiYmU3ZmJlZTYxNWE1MGJlNjkzZTVjOTcxY2NjNTQ1MTkwOGY4NDJjYWI5MA%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/wd12dl725cgg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/wd12dl725cgg1/HLSPlaylist.m3u8?a=1772304985%2COGFjNzZjMDRhYjY4MTM4YjQ4YWNmNjYwMmFjNzgyZGJmOGY0M2MzZGQ1NWIxMmJhZWUyZTFjNDgxOTBlNzg5Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wd12dl725cgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1qqhhtx | /r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/ | false | false | 537 | {'enabled': False, 'images': [{'id': 'NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=108&crop=smart&format=pjpg&auto=webp&s=c013c9cbf5f53ebdffd90b123d7e27be27b25f37', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=216&crop=smart&format=pjpg&auto=webp&s=a1b085f879aa03f6018d9ed038299f885538e036', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=320&crop=smart&format=pjpg&auto=webp&s=0ba7d64190276a90e556e4d12992b3bb76292236', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b76b5b75e1082e88941edf767f71be726fae280', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=960&crop=smart&format=pjpg&auto=webp&s=b7974c5ef9665cdad1186e6602a3874bed850f29', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a00c1f91e62d66563ae9837055ecdcf22ade53f8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NW03ZGMyazI1Y2dnMWh2gxSpyeR6q2IEmV4jHAJM791DDo_e5MvHim0gQe4g.png?format=pjpg&auto=webp&s=e629481670a385425d4246518027489570bb3ec3', 'width': 1280}, 'variants': {}}]} | ||
Pinokio creator just did a deep-dive on HeartMuLa Studio's VRAM optimization - works on 8GB cards | 1 | cocktailpeanut (creator of Pinokio) just published a detailed breakdown of how HeartMuLa Studio handles different VRAM configurations:
**TL;DR from his testing:**
- 20GB+ → Full precision, no swap (~14GB used)
- 14-20GB → 4-bit, no swap
- 10-14GB → 4-bit + swap
- 8-10GB → 4-bit + swap (with warning)
The system automatically detects available VRAM and switches modes. 8GB cards work but add ~70s overhead for model swapping.
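For reference, this kind of auto-detection is straightforward with PyTorch; a minimal sketch (the thresholds mirror the tiers above, while the mode dictionaries are made up for illustration):

```python
import torch

def pick_mode() -> dict:
    # Total VRAM of the first CUDA device, in GB.
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 20:
        return {"quant": None, "swap": False}   # full precision
    if vram_gb >= 14:
        return {"quant": "4bit", "swap": False}
    if vram_gb >= 10:
        return {"quant": "4bit", "swap": True}
    return {"quant": "4bit", "swap": True, "warn": True}  # 8-10GB tier
```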
Post with full details: https://beta.pinokio.co/posts/01kg5gbk173eb77xtpm4nkrgrv
GitHub: https://github.com/fspecii/HeartMuLa-Studio | 2026-01-29T18:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qqhf0c/pinokio_creator_just_did_a_deepdive_on_heartmula/ | ExcellentTrust4433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqhf0c | false | null | t3_1qqhf0c | /r/LocalLLaMA/comments/1qqhf0c/pinokio_creator_just_did_a_deepdive_on_heartmula/ | false | false | self | 1 | null |
Scrolling through the trending list on huggingface I found LightOnOCR-2-1B .... | 7 | [https://huggingface.co/lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B)
[bench](https://preview.redd.it/2yhhk6w51cgg1.png?width=2030&format=png&auto=webp&s=83be7ffb29ac75ac9f36d185873f9f94f1e1adfe)
Has anyone tested this? | 2026-01-29T18:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qqguyy/scrolling_through_the_trending_list_on/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqguyy | false | null | t3_1qqguyy | /r/LocalLLaMA/comments/1qqguyy/scrolling_through_the_trending_list_on/ | false | false | 7 | null | |
This Week In AI Agents: Open Source Edition | 6 | I curate a weekly newsletter on AI agents. Here are the local highlights from this week:
**EvoCUA - #1 open-source computer use agent on OSWorld (56.7%)**
- Evolutionary framework: synthetic task generation + sandbox rollouts + learning from failures
- Available in 32B and 8B variants under Apache 2.0
- [Model Weights](https://huggingface.co/meituan/EvoCUA-32B-20260105) | [Paper](https://huggingface.co/papers/2601.15876) | [GitHub](https://github.com/meituan/EvoCUA)
https://preview.redd.it/4et6pg9yxbgg1.png?width=906&format=png&auto=webp&s=bbbeb0508417fc42777bebc37646772927178542
**Qwen3-TTS - Open-source TTS with voice cloning and design**
- 3-second voice cloning, 10 languages, 97ms first-packet latency
- 0.6B and 1.7B variants under Apache 2.0
- [Models](https://huggingface.co/collections/Qwen/qwen3-tts?spm=a2ty_o06.30285417.0.0.2994c921a3PoQo) | [Writeup](https://qwen.ai/blog?id=qwen3tts-0115)
https://preview.redd.it/ecra7nlzxbgg1.png?width=1456&format=png&auto=webp&s=f70266a19af6aa34090c6960fe25efd2ceebfb71
**Moltbot - Open-source personal AI assistant that runs locally**
- Persistent memory, WhatsApp/Telegram/Discord integration, extensible skills
- Runs on your machine with Anthropic/OpenAI/local models
- [Moltbot](https://www.molt.bot/) | [Discussion](https://x.com/omooretweets/status/2015618038088024164) (Video Source) | [Major Security Issue](https://x.com/0xsammy/status/2015562918151020593)
https://reddit.com/link/1qqgf00/video/oqxlsgwixbgg1/player
**VIGA - Vision-as-inverse-graphics agent for 3D reconstruction**
- Converts images to editable Blender code through multimodal reasoning
- +124.70% improvement on BlenderBench
- [Project Page](https://fugtemypt123.github.io/VIGA-website/) | [Paper](https://arxiv.org/abs/2601.11109) | [Code](https://github.com/Fugtemypt123/VIGA) | [Benchmark](https://huggingface.co/datasets/DietCoke4671/BlenderBench)
https://reddit.com/link/1qqgf00/video/a901q7okxbgg1/player
**LingBot-VLA - VLA foundation model with 20k hours of real robot data**
- First empirical evidence VLA models scale with massive real-world data
- 261 samples/sec/GPU throughput, open weights
- [Paper](https://huggingface.co/papers/2601.18692) | [Project Page](https://technology.robbyant.com/lingbot-vla) | [Models](https://huggingface.co/collections/robbyant/lingbot-vla)
https://reddit.com/link/1qqgf00/video/17j9dlblxbgg1/player
**PersonaPlex - NVIDIA's full-duplex conversational AI**
- Persona control through text prompts + voice conditioning
- Built on Moshi architecture, MIT license
- [GitHub](https://github.com/NVIDIA/personaplex) | [Project Page](https://research.nvidia.com/labs/adlr/personaplex/)
https://reddit.com/link/1qqgf00/video/38mq0tfmxbgg1/player
Check out the [full roundup](https://open.substack.com/pub/autopiloteverything/p/the-agentic-edge-2-power-without?utm_campaign=post-expanded-share&utm_medium=web) for more agent demos, research, tools, and more. | 2026-01-29T18:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qqgf00/this_week_in_ai_agents_open_source_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqgf00 | false | null | t3_1qqgf00 | /r/LocalLLaMA/comments/1qqgf00/this_week_in_ai_agents_open_source_edition/ | false | false |  | 6 | null |
Finally, an ASR (speech-to-text) model with diarization. | 9 | **VibeVoice-ASR** is a unified speech-to-text model designed to handle **60-minute long-form audio** in a single pass, generating structured transcriptions containing **Who (Speaker), When (Timestamps), and What (Content)**, with support for **Customized Hotwords** and over **50 languages**.
[https://huggingface.co/microsoft/VibeVoice-ASR](https://huggingface.co/microsoft/VibeVoice-ASR) | 2026-01-29T18:12:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qqg9ge/finally_an_asr_speechtotext_model_with_diarization/ | m_abdelfattah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqg9ge | false | null | t3_1qqg9ge | /r/LocalLLaMA/comments/1qqg9ge/finally_an_asr_speechtotext_model_with_diarization/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=108&crop=smart&auto=webp&s=47e4803c5436040d68f69306e7bf028d25d5a468', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=216&crop=smart&auto=webp&s=22780a90b2ae43c96faecede813e7c909c22ffdc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=320&crop=smart&auto=webp&s=8cb8431afd20a1092afe7856ea92c88f4b5a77d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=640&crop=smart&auto=webp&s=f105d50c2dda6be129e5f3902c854e281c7c2232', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=960&crop=smart&auto=webp&s=7f382e4cf792244bbbe5754525949d22c21db399', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?width=1080&crop=smart&auto=webp&s=2dd0a464f687885ea88e380d97ae78a5764f3cc4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1J3q8c2zl4J850IcNOD-bYFN71c1EgyYw_XtT6Bfydk.png?auto=webp&s=0a6287c00c2c19429b5fb0a1bc447b4dc9a040dc', 'width': 1200}, 'variants': {}}]} |
Built open-source infrastructure for 'epistemic RAG' - knowledge graphs with claim extraction and suppression detection, runs entirely local | 1 | Been lurking here for a while, finally have something worth sharing.
**The problem:** RAG retrieves chunks, but chunks aren't knowledge. When you're analyzing contested topics with multiple perspectives - research that contradicts itself, claims and counter-claims, institutional narratives vs. heterodox sources - chunk retrieval conflates everything. The LLM can't distinguish between a primary claim and a dismissal of that claim.
**What I built:** Eleutherios - local knowledge graph infrastructure that extracts claims at the atomic level, builds entity relationships, then runs detection algorithms to surface patterns:
* Suppression indicators (funding cuts, career impacts, publication obstacles documented within the sources themselves)
* Coordination signatures (timing patterns, shared language, citation networks)
* Cross-source contradictions and confirmations
**Stack:** Neo4j for the graph, PostgreSQL + pgvector for embeddings, Ollama for local inference (currently using mistral-nemo:12b for extraction). MCP integration so Claude Desktop can query your knowledge graph directly. Runs entirely in Docker, no cloud dependencies.
**Why it matters:** If you're researching anything where institutional consensus might be manufactured rather than organic - whether that's medical research, historical controversies, financial narratives - you need tools that can surface the *structure* of the information landscape, not just retrieve relevant chunks.
**Current state:** Working MVP, ~47K claims extracted across test corpora, Docker deployment, MIT licensed. Looking for feedback from people who deal with adversarial information environments.
Repo: github.com/Eleutherios-Foundation/eleutherios Site: eleutherios.io
Happy to answer questions about the architecture or detection algorithms. | 2026-01-29T18:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qqg18c/built_opensource_infrastructure_for_epistemic_rag/ | Able_Concentrate9568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqg18c | false | null | t3_1qqg18c | /r/LocalLLaMA/comments/1qqg18c/built_opensource_infrastructure_for_epistemic_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=108&crop=smart&auto=webp&s=019688d94a64136b2590eda70ae46637abe34a29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=216&crop=smart&auto=webp&s=2ee63f387f8c42f2cfcd4c3c89ef0c012d814bdc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=320&crop=smart&auto=webp&s=a70adb78af3b9b8e444c9c06cb0698cd597952b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=640&crop=smart&auto=webp&s=c1d449fb7e0262b7befdbfa1e8fde2de3832e78c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=960&crop=smart&auto=webp&s=a8a56cef673eabbdafa6e1652592465912d422ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?width=1080&crop=smart&auto=webp&s=f2c86aa062a962be3138519f97d3474ced1224b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rW2LVRqLVl72wS7fyHDREC9D1Tj7noxYVYI4k1l0ctQ.png?auto=webp&s=42a8fda104bbdaa3ae1f7fbe57234135c17daa02', 'width': 1200}, 'variants': {}}]} |
I built a lightweight, local wrapper for FLUX.2 with built-in 4x Upscaling (One-Click Install for RTX) | 1 | [removed] | 2026-01-29T17:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qqfojt/i_built_a_lightweight_local_wrapper_for_flux2/ | Critical-Evidence854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqfojt | false | null | t3_1qqfojt | /r/LocalLLaMA/comments/1qqfojt/i_built_a_lightweight_local_wrapper_for_flux2/ | false | false | self | 1 | null |
How would I find people who are comfortable with local LLM development | 0 | Hello, I own my own consultancy firm and I am looking for people with local llm skills.. Unfortunately all the people I have seen apply to the job I post do not have that experience. Is there maybe another job board or something I need to look at? | 2026-01-29T17:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qqflo5/how_would_i_find_people_who_are_comfortable_with/ | Sadbreakup9997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqflo5 | false | null | t3_1qqflo5 | /r/LocalLLaMA/comments/1qqflo5/how_would_i_find_people_who_are_comfortable_with/ | false | false | self | 0 | null |
eldr.ᚲ - NF4 quantization, RWKV-7, and LoRA MoE for Candle | 1 | Hey
Been working on some ML stuff for candle (huggingface's rust ML framework).
* **NF4 quantization** - 4-bit with CUDA. 7B models in ~4GB VRAM
* **RWKV-7 "Goose"** - O(n) attention instead of O(n²). First in candle, afaik
* **Delta Attention** - differential attention for context
* **LoRA MoE** - blend adapters at runtime. Swap personalities without reload
Works with Qwen3. The `LinearLike` trait swaps Linear/NF4/LoRA transparently.
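To illustrate the idea (this is a sketch, not the crate's actual definition), a `LinearLike`-style trait lets model code stay agnostic to the backing layer:

```rust
use candle_core::{Result, Tensor};

// Illustrative only; the real trait in the repo may differ.
trait LinearLike {
    fn forward(&self, x: &Tensor) -> Result<Tensor>;
}

// Any of these can sit behind the same trait object:
//   struct Linear { .. }      // full-precision weights
//   struct Nf4Linear { .. }   // 4-bit NF4, dequantized on the fly
//   struct LoraLinear { .. }  // base layer plus low-rank adapters
//
// Model code then holds Box<dyn LinearLike> and never cares which.
```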
**benchmarks (Qwen3-4B, 4090):**
* base: 67.5 tok/s
* NF4: 35 tok/s
* 4 LoRA experts: 21.6 tok/s
repo: [https://gitlab.com/rune.minna/eldr.kenaz](https://gitlab.com/rune.minna/eldr.kenaz) license: upstream stays MIT/Apache, new stuff SSPL-1.0 (can adapt to candle under MIT)
Let me know if something breaks. For those of us without datacenter GPUs! uwu~ | 2026-01-29T17:46:22 | https://gitlab.com/rune.minna/eldr.kenaz | rune-minna | gitlab.com | 1970-01-01T00:00:00 | 0 | {} | 1qqfi3f | false | null | t3_1qqfi3f | /r/LocalLLaMA/comments/1qqfi3f/eldrᚲ_nf4_quantization_rwkv7_and_lora_moe_for/ | false | false | default | 1 | null |
Any good open source of project Genie? | 4 | 2026-01-29T17:44:11 | PumpkinNarrow6339 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqffrz | false | null | t3_1qqffrz | /r/LocalLLaMA/comments/1qqffrz/any_good_open_source_of_project_genie/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'vyvj84sesbgg1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=108&crop=smart&auto=webp&s=af11061d412d80a38daa08d79d08c78085939f95', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=216&crop=smart&auto=webp&s=5c8bf421810c3113cf5eabbe826ee005a146a250', 'width': 216}, {'height': 385, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=320&crop=smart&auto=webp&s=3ff29c8a5b8d67dd3c1773f556b82d091eabc828', 'width': 320}, {'height': 771, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=640&crop=smart&auto=webp&s=ba26b08643bbdf3a3366873bff375487a5ee2bc8', 'width': 640}, {'height': 1157, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=960&crop=smart&auto=webp&s=7a1d28d22e904ad3c9028b5c513dfadf8f622f2c', 'width': 960}, {'height': 1302, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?width=1080&crop=smart&auto=webp&s=f0bf7ad784c090b711431203ab0e236321beed6a', 'width': 1080}], 'source': {'height': 1302, 'url': 'https://preview.redd.it/vyvj84sesbgg1.png?auto=webp&s=ace271073ac481b6fdbdf9f68d341f388e01815d', 'width': 1080}, 'variants': {}}]} | ||
Finetuning inflated weights | 0 | Hi all
Just a curious question. Not too familiar with how finetuning works.
I noticed that the GGUF sizes on the base model of GPT-OSS-120B are all around 64GB. I'm assuming that this is because the model was trained in 4-bit?
However, on the ArliAI derestricted GGUFs, the weights vary much more in size. For example, the Q8 of the derestricted model is double the size of the Q8 of the base.
A couple of questions really:
How could this be? Is it related to the method used to finetune the model?
Would there be any (on paper) accuracy degradation from using the Q4 derestricted GGUF vs the Q4 of the base GGUF?
Thanks in advance | 2026-01-29T17:43:32 | https://www.reddit.com/gallery/1qqff4u | hoppedsketchy | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qqff4u | false | null | t3_1qqff4u | /r/LocalLLaMA/comments/1qqff4u/finetuning_inflated_weights/ | false | false | 0 | null | |
Kimi AI team sent me this appreciation mail | 278 | So I covered Kimi K2.5 on my YT channel and the team sent me this mail with a premium access to agent swarm | 2026-01-29T17:42:26 | mehulgupta7991 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqfe1k | false | null | t3_1qqfe1k | /r/LocalLLaMA/comments/1qqfe1k/kimi_ai_team_sent_me_this_appreciation_mail/ | false | false | default | 278 | {'enabled': True, 'images': [{'id': '0ztj2mk3sbgg1', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=108&crop=smart&auto=webp&s=a8b27854aa659dd57ce82cb83df782dc4e402c8b', 'width': 108}, {'height': 303, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=216&crop=smart&auto=webp&s=b703fc86c4a2054ef2433ac8d2cf5a529d812149', 'width': 216}, {'height': 449, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=320&crop=smart&auto=webp&s=29715f198d01aa2c87fa7de13c71c0d7717519e4', 'width': 320}, {'height': 899, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=640&crop=smart&auto=webp&s=0cb8fb8a83de79c13b7ca310a250afa85f95fe79', 'width': 640}, {'height': 1349, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=960&crop=smart&auto=webp&s=f5e3189cf3c46361fb8164c32ff2f1ce39f849be', 'width': 960}, {'height': 1517, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?width=1080&crop=smart&auto=webp&s=22b079942bf66656406061fc2b85db71bdff0548', 'width': 1080}], 'source': {'height': 1518, 'url': 'https://preview.redd.it/0ztj2mk3sbgg1.jpeg?auto=webp&s=b40cd2fb74de09e20d64ff8ae067ab7a54cf8f2e', 'width': 1080}, 'variants': {}}]} | |
New 96GB Rig, Would Like Advice | 38 | Okay, I know some people are not fans of these kinds of posts, but I am asking for this advice in all sincerity. I have done tons of research myself and did not buy hardware with no idea what to do with it; I would just like some advice from more experienced people to hopefully get on the right track sooner, and maybe avoid mistakes I'm not aware of.
First, my past experience: I've been running my laptop with an eGPU to get to 40GB VRAM for a while, and I have found for my personal use cases, this has let me run 30B models at decent speeds with decent results, but nothing too serious because it seemed to be a sweet spot where I could get a 30B model to code with a decent context window, but if I started adding agents to it, I lost context, lost model quality, and had to sacrifice to fit even a decent amount into my VRAM. Plus, my laptop GPU (Turing RTX 5000 16GB) was decent, but a bottleneck. I pretty much have stuck to llama.cpp and ComfyUI, nothing exceptional.
Today, I just finally brought the machine I've been working on for months to life! I'm waiting on a few last cables to clean it up so I can add the last GPU, but that should be here in a couple of days.
My new system isn't exactly the GOAT or anything; I know it's kind of older, but it's new and good for me. My setup will run 4x RTX 3090 24GB, and I have an old RX 570 4GB as the actual display driver for now. I got 3 of the 3090s running, but like I said, the 4th will be added in a couple of days. I needed to order a different riser, and I'm still waiting on my OCuLink adapter so I can move the display card out of my PCIe x16 slot. I have 128GB of DDR4 and an AMD EPYC 7502 CPU. I managed to score some cheap 4TB Samsung 990 EVO Plus drives for $180 each before prices went insane, so I think I'll have plenty of storage; I could put 12TB in the dedicated NVMe slots on my motherboard.
I'm building this on the Huananzhi H12D-8D with the AST2500 BMC module. I "think" I've got the board set up correctly (Re-Size BAR and IOMMU enabled, etc.), though I am still combing through and learning this board. I don't have any NVLink adapters.
So here's where I need advice:
1. I would like to run a multi-agent, multi-model stack. Something like Nemotron 3 Nano 30B + Qwen 3 Coder 30B Instruct + multiple agents tasked to make sure the models follow the workflow, and I'd like to know if anyone has experience running such a setup, and if so, what agents worked best together?
2. The end goal is primarily autonomous coding, where I can create a flow chart, design an app, give it a layout, and have the AI build it autonomously without me needing to keep prompting it.
3. I plan to run this like a private LLM server, and that got me thinking 🤔 (dangerous). I would like to learn how to build multi-user LLM servers where there's a queue system for prompts and the system can keep VRAM clear between users. I have a friend who really likes some of the models I've customized and wants to use them, but this will get into model switching and VRAM management that I'm not familiar with, so I was wondering if I should be looking at a different framework? Would vLLM be better or faster for this? I heard it can support pipeline parallelism now, but I'm not even sure how necessary that is with this kind of setup. I've been using an eGPU so it was necessary before, but would this setup be fine without NVLink now?
4. I would like to make my own LoRAs and fine tune smaller models myself, but I'm not sure how viable my hardware is for this and was wondering if anyone here has experience with this and could advise? I did some research, but didn't get too deep into it because I lacked the hardware (still might?)
5. If I want to just straight run an LLM, one that maximizes use of the new hardware, I was wondering what people's experience was with the best coding model available that would run with at least 256K context on 96GB of VRAM?
A lot of new models have dropped recently that I haven't had much time to test and I feel like I'm falling behind. I've never run much more than 30B models at Q8 quants, so I really don't know what models have lower quants that are actually viable for coding. I've pretty much stuck to Q8 models and Q8 KV, so I have little experience beyond that.
Also, I can add more GPUs. I plan to add at least 3 more and switch to USB for my display at some point. So before I need to start getting creative, I think I can get a bit more VRAM depending on what cards I can manage. I'm not sure I can pull off anymore of the 3090s, they're getting hard to find deals on. If there's a sweet spot I can pull off without slowing down the performance, I'm definitely open to suggestions on possible cards to add.
Thanks in advance for anyone who is willing to give advice on this. | 2026-01-29T17:36:44 | DonkeyBonked | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqf86g | false | null | t3_1qqf86g | /r/LocalLLaMA/comments/1qqf86g/new_96gb_rig_would_like_advice/ | false | false | default | 38 | {'enabled': True, 'images': [{'id': 'rueq6u13rbgg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=108&crop=smart&auto=webp&s=86fc80cfed9e4fd2f675afe9763c15e624173644', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=216&crop=smart&auto=webp&s=4fa9477105ac33d156d859933e59712919b602b1', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=320&crop=smart&auto=webp&s=2aaf024d8dd93ec844cb9d7f2256f01293968884', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=640&crop=smart&auto=webp&s=01620eaa8815dbe2e487298e3ceaa16013ce8d3f', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=960&crop=smart&auto=webp&s=e2c1461a83abfdaa1cf5dac69ce8d960856524b5', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?width=1080&crop=smart&auto=webp&s=49cddbe4569c33d17f73665c28804ab29d286aaf', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/rueq6u13rbgg1.jpeg?auto=webp&s=12a6352b5a48cc861aa0cd0dca0062a9ee1ae3d8', 'width': 4032}, 'variants': {}}]} | |
Why don’t we have more distilled models? | 76 | The Qwen 8B DeepSeek R1 distill genuinely blew me away when it dropped. You had reasoning capabilities that punched way above the parameter count, running on consumer (GPU poor) hardware.
So where are the rest of them? Why aren’t there more? | 2026-01-29T17:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qqeudu/why_dont_we_have_more_distilled_models/ | GreedyWorking1499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqeudu | false | null | t3_1qqeudu | /r/LocalLLaMA/comments/1qqeudu/why_dont_we_have_more_distilled_models/ | false | false | self | 76 | null |
[Project] From 50D to 200D: Evolution of the Origin 006 Core - 100k points processed in 14.7s (No GPU / No Backprop) | 0 | Hello again to the community!
Following up on our previous threads (where we tested 50D synthesis), we wanted to share a critical performance leap we’ve achieved in the development of the Origin 006 Core.
We set out to stress-test the engine to see if we could break the "curse of dimensionality" without relying on massive hardware. The results of our latest stress tests (best of 5 runs) have exceeded our expectations:
• Industrial Scale: We’ve scaled from our previous tests to processing 100,000 data points in a single run.
• Hyperspace: We increased the complexity from 50 to 200 dimensions.
• Response Speed: The entire process took only 14.73 seconds on a standard Colab CPU.
• Throughput (TPS): We are operating at 6,788.60 points per second, with an average latency of 147.31 microseconds per point.
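As a quick sanity check, the throughput and latency figures above are mutually consistent:

```python
points, seconds = 100_000, 14.73
print(points / seconds)        # ~6788.9 points per second
print(seconds / points * 1e6)  # ~147.3 microseconds per point
```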
Our Technical Approach:
We are demonstrating that Deterministic Sectorial Geometry allows for handling data volumes that would normally require neural network training or powerful GPUs. In our engine (Lumin), there is no backpropagation or training phases: law synthesis occurs point-by-point, in a pure, geometric fashion.
In this benchmark, we utilized Purity Mode, designed to consolidate stable laws in high-dimensional environments. We achieved a 50.04% compression in 200D, validating that the engine can find structural coherence even when the variable volume is massive.
We’re sharing the updated Colab so you can run the performance audit and see the logs in real-time. Inside the notebook, you’ll also find the link to the official project repository.
Colab Demo: [https://colab.research.google.com/drive/13gPy6jQ1mJnNLBhzYNEebltD9jraxDgZ](https://colab.research.google.com/drive/13gPy6jQ1mJnNLBhzYNEebltD9jraxDgZ)
We believe this approach opens a door for high-dimensional processing on local devices and real-time systems where energy efficiency and speed are critical.
We are continuing to iterate and would love to hear your thoughts and feedback on these new benchmarks! | 2026-01-29T17:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qqenim/project_from_50d_to_200d_evolution_of_the_origin/ | wexionar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqenim | false | null | t3_1qqenim | /r/LocalLLaMA/comments/1qqenim/project_from_50d_to_200d_evolution_of_the_origin/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
What’s the best local AI to use with Moltbot on 24GB VRAM? | 0 | Been doing a ton of research, but I figured I'd ask the community for help! Thank you! | 2026-01-29T17:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qqeffc/whats_the_best_local_ai_to_use_with_moltbot_on_24/ | OMEGA-76x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqeffc | false | null | t3_1qqeffc | /r/LocalLLaMA/comments/1qqeffc/whats_the_best_local_ai_to_use_with_moltbot_on_24/ | false | false | self | 0 | null |
Kimi K2.5 using ktkernel + sglang, 16 TPS, but no starting <think> tag. | 2 | I am running Kimi K2.5 using ktransformers and sglang, with the following command, on an AMD EPYC 9755 CPU + 768GB DDR5 system + NVIDIA RTX 6000 PRO 96GB GPU. The generation speed is 16 tokens/sec. The problem is that the model does not return an opening <think> tag. It returns the thinking content with a </think> closing tag followed by the standard response, but I need the opening <think> tag for my clients (Open WebUI, Cline, etc.) to operate properly.
Any suggestions on how to solve this? Here is the systemd unit:
```
[Unit]
Description=Kimi 2.5 Server
After=network.target

[Service]
User=user
WorkingDirectory=/home/user/kimi2.5
Environment="CUDA_HOME=/usr/local/cuda-12.9"
Environment="PATH=/usr/local/cuda-12.9/bin:$PATH"
Environment="LD_LIBRARY_PATH=/usr/local/cuda-12.9/lib64:${LD_LIBRARY_PATH:-}"
ExecStart=bash -c 'source /home/user/miniconda3/bin/activate kimi25; \
    python -m sglang.launch_server \
    --host 0.0.0.0 \
    --port 10002 \
    --model /home/user/models/Kimi-K2.5 \
    --kt-weight-path /home/user/models/Kimi-K2.5 \
    --kt-cpuinfer 120 \
    --kt-threadpool-count 1 \
    --kt-num-gpu-experts 30 \
    --kt-method RAWINT4 \
    --kt-gpu-prefill-token-threshold 400 \
    --reasoning-parser kimi_k2 \
    --tool-call-parser kimi_k2 \
    --trust-remote-code \
    --mem-fraction-static 0.94 \
    --served-model-name Kimi-K2.5 \
    --enable-mixed-chunk \
    --tensor-parallel-size 1 \
    --enable-p2p-check \
    --disable-shared-experts-fusion \
    --context-length 131072 \
    --chunked-prefill-size 131072 \
    --max-total-tokens 150000 \
    --attention-backend flashinfer'
Restart=on-failure
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
```
After running the above command, there is no starting <think> tag in the response. The reasoning is there, terminated by a closing </think> tag, but the opening <think> tag is missing. The --reasoning-parser kimi_k2 flag also has no effect: the reasoning content is never parsed into the reasoning_content field of the response. Any suggestions on how to get the starting <think> tag into the response?
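For now, the only workaround I have is patching the response client-side; a minimal sketch (purely a stopgap, not a fix for the parser itself):

```python
def restore_think_tag(content: str) -> str:
    # The model emitted reasoning ending in </think> but never opened
    # the tag; prepend one so clients like Open WebUI can parse it.
    if "</think>" in content and "<think>" not in content:
        return "<think>" + content
    return content
```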
Here is an example response:
"data": { "id": "7bbe0883ed364588a6633cab94d20a42", "object": "chat.completion.chunk", "created": 1769694082, "model": "Kimi-K2.5", "choices": \[ { "index": 0, "message": { "role": null, "content": " The user is asking a very simple question: \\"How big is an apple\\". This is a straightforward factual question about the typical size of an apple. I should provide a helpful, accurate answer that covers the typical dimensions while acknowledging that apples vary in size by variety.\\n\\nKey points to cover:\\n1. Typical diameter range (2.5 to 3.5 inches or 6 to 9 cm)\\n2. Typical weight range (150-250 grams or 5-9 ounces)\\n3. Variation by variety (from crab apples to large cooking apples)\\n4. Comparison to common objects for context (tennis ball, baseball, fist)\\n\\nI should keep it concise but informative, giving both metric and imperial measurements since the user didn't specify a unit system.\\n\\nStructure:\\n- General size description\\n- Specific measurements (diameter/weight)\\n- Variations by type\\n- Visual comparisons\\n\\nThis is a safe, straightforward question with no concerning content. I should provide a helpful, neutral response. </think> An apple is typically about \*\*2.5 to 3.5 inches (6–9 cm)\*\* in diameter—roughly the size of a tennis ball or baseball.\\n\\n\*\*Weight:\*\* Most eating apples weigh between \*\*5–9 ounces (150–250 grams)\*\*.\\n\\n\*\*Variations by type:\*\*\\n- \*\*Small:\*\* Lady apples or crab apples (1–2 inches/2.5–5 cm)\\n- \*\*Medium:\*\* Gala, Fuji, or Golden Delicious (2.5–3 inches/6–7.5 cm)\\n- \*\*Large:\*\* Honeycrisp, Granny Smith, or cooking apples like Bramley (3.5–4+ inches/9–10 cm)\\n\\nFor reference, a medium apple is approximately the size of your closed fist. The \\"serving size\\" used in nutrition labels is typically one medium apple (about 182 grams).", "reasoning\_content": "", "tool\_calls": null }, "logprobs": null, "finish\_reason": "stop", "matched\_stop": 163586 } \], | 2026-01-29T17:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qqebfh/kimi_k25_using_ktkernel_sglang_16_tps_but_no/ | Leather-Block-1369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqebfh | false | null | t3_1qqebfh | /r/LocalLLaMA/comments/1qqebfh/kimi_k25_using_ktkernel_sglang_16_tps_but_no/ | false | false | self | 2 | null |
Kimi AI just sent me an appreciation mail | 1 | [removed] | 2026-01-29T17:03:24 | Technical-Love-8479 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qqeaji | false | null | t3_1qqeaji | /r/LocalLLaMA/comments/1qqeaji/kimi_ai_just_sent_me_an_appreciation_mail/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '0oapcjw4lbgg1', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=108&crop=smart&auto=webp&s=852cfd6b3ca900aa29817ea8102b6c6d535d3cd5', 'width': 108}, {'height': 303, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=216&crop=smart&auto=webp&s=3e6bc67e3591756cf82abdff5363d0dba77eb9fb', 'width': 216}, {'height': 449, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=320&crop=smart&auto=webp&s=f6a7677826ea8fab4eda0f3399cb8d4c5f18f2c8', 'width': 320}, {'height': 899, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=640&crop=smart&auto=webp&s=12da9190da10b8762e5af3b93450a7432322966c', 'width': 640}, {'height': 1349, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=960&crop=smart&auto=webp&s=b1edd544489b201084df8fe67176544fa9e0eed7', 'width': 960}, {'height': 1517, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?width=1080&crop=smart&auto=webp&s=82a836618b07618bf567602a35b023b8f819bb7f', 'width': 1080}], 'source': {'height': 1518, 'url': 'https://preview.redd.it/0oapcjw4lbgg1.png?auto=webp&s=864a754c9ea8ae010c658b1e8aafdedd2ad0b98b', 'width': 1080}, 'variants': {}}]} | |
Can 3gb vram run 1 trillion param model | 0 | 3.5gb vram take it or leave
[View Poll](https://www.reddit.com/poll/1qqe24k) | 2026-01-29T16:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qqe24k/can_3gb_vram_run_1_trillion_param_model/ | DeliciousDrainage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qqe24k | false | null | t3_1qqe24k | /r/LocalLLaMA/comments/1qqe24k/can_3gb_vram_run_1_trillion_param_model/ | false | false | self | 0 | null |