Dataset schema (column, dtype, min/max or class count):

| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Reinventing the Punch Tape
0
2026-01-29T16:52:02
https://psiace.me/posts/reinvent-the-punch-tape/
PsiACE
psiace.me
1970-01-01T00:00:00
0
{}
1qqdyw9
false
null
t3_1qqdyw9
/r/LocalLLaMA/comments/1qqdyw9/reinventing_the_punch_tape/
false
false
default
0
null
NVLINK 2x 3090 which are connected via 2x oculink x4 yes or no?
2
I’m planning a setup with **two RTX 3090s** connected via **two separate Oculink x4 (PCIe 4.0) links**. My goal is to enable **NVLink** for rendering/AI/deep learning. Before I buy the hardware, I have a few specific questions:

1. Does NVLink work reliably when the GPUs are connected via Oculink instead of direct PCIe slots?
2. Will the Oculink x4 bottleneck (approx. 8 GB/s per card) significantly impact NVLink peer-to-peer performance?
3. Are there known issues with SLI/NVLink detection over Oculink interfaces?
4. A physical NVLink bridge will be installed, but will the host see the cards as "linkable"?

Has anyone implemented this successfully, or are there technical reasons against it?
2026-01-29T16:42:58
https://www.reddit.com/r/LocalLLaMA/comments/1qqdpjz/nvlink_2x_3090_which_are_connected_via_2x_oculink/
MageLD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqdpjz
false
null
t3_1qqdpjz
/r/LocalLLaMA/comments/1qqdpjz/nvlink_2x_3090_which_are_connected_via_2x_oculink/
false
false
self
2
null
Embedded local memory for agents: tables + graph + vector in one process
3
I just released **ArcadeDB Embedded Python Bindings**, which lets you run a **multi-model memory store embedded directly inside a Python process**. No server. No network hop. Fully local and offline.

### Why this is interesting for agents

A lot of local agent setups end up juggling:

* a vector store
* some ad-hoc JSON or SQLite state
* relationship logic in code

This explores a different approach: **one embedded engine** with:

* structured tables
* graph relationships
* vector similarity search
* ACID transactions across all of it

All running **in-process** with Python.

### Details

* Python-first API
* SQL and OpenCypher
* HNSW vector search (JVector)
* Single standalone wheel:
  * bundled lightweight JVM (no Java install)
  * JPype bridge
* Fully offline, Apache-2.0 licensed

Install:

```bash
uv pip install arcadedb-embedded
```

Repo: https://github.com/humemai/arcadedb-embedded-python
Docs: https://docs.humem.ai/arcadedb/

I’m curious how people here handle **local agent memory**:

* do you separate vector / structure / relationships?
* would an embedded multi-model store simplify things, or add friction?

Happy to discuss trade-offs.
2026-01-29T16:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1qqdiit/embedded_local_memory_for_agents_tables_graph/
Plastic_Director_480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqdiit
false
null
t3_1qqdiit
/r/LocalLLaMA/comments/1qqdiit/embedded_local_memory_for_agents_tables_graph/
false
false
self
3
{'enabled': False, 'images': [{'id': 'ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=108&crop=smart&auto=webp&s=855b1188c96d68f8ca081904742a8e18f3ace2c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=216&crop=smart&auto=webp&s=1d77a7d19dfa2f80b9b5baa4f4a89816ffd17817', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=320&crop=smart&auto=webp&s=eb3c0bda63019dd10b33372f250136c3f34a2016', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=640&crop=smart&auto=webp&s=8d1ebfd13287f1c463607442319e15fd07de086a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=960&crop=smart&auto=webp&s=f9ddb83195322e4f97bbaaffe34e9e00dd2d7013', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?width=1080&crop=smart&auto=webp&s=f44f9cfb98c09edf0248cab7ede0643c80321bcc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ON8UrHtJYC1f4Nj9mm1_oTOU2cdGR6_zpw2XsLBu-Hw.png?auto=webp&s=ddc60d2ee4c603ffca45d8ccb79d373b4b719570', 'width': 1200}, 'variants': {}}]}
Introducing Craft - an open-source Cowork running in a sandbox rather than your desktop
4
If you want to mess around with the implementation, check out the repo: [https://github.com/onyx-dot-app/onyx/blob/main/web/src/app/craft/README.md](https://github.com/onyx-dot-app/onyx/blob/main/web/src/app/craft/README.md)

To set it up locally: [https://docs.onyx.app/deployment/getting_started/quickstart](https://docs.onyx.app/deployment/getting_started/quickstart)
2026-01-29T16:33:03
https://v.redd.it/ihmukqemebgg1
Weves11
/r/LocalLLaMA/comments/1qqdfc4/introducing_craft_an_opensource_cowork_running_in/
1970-01-01T00:00:00
0
{}
1qqdfc4
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ihmukqemebgg1/DASHPlaylist.mpd?a=1772425995%2CZTcyYWE1MjhjMDk2M2NhNjcwNDlmYzFjMTgyOWRlNzcyMTkwMmVmZWJiMWZhMGNhZDRkMTdhYjIxYjcwNGE3Mg%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/ihmukqemebgg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1064, 'hls_url': 'https://v.redd.it/ihmukqemebgg1/HLSPlaylist.m3u8?a=1772425995%2CNGU2YTA3MDgwODQ5NjkyZmRmYmQ4YWJkNGY4N2Y1ZjgwOWFmOWIwY2I3NDk1MzI1NTA3NjNiZmFmOThiMmE0Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ihmukqemebgg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qqdfc4
/r/LocalLLaMA/comments/1qqdfc4/introducing_craft_an_opensource_cowork_running_in/
false
false
https://external-preview…6dc886ced71bd68b
4
{'enabled': False, 'images': [{'id': 'dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=108&crop=smart&format=pjpg&auto=webp&s=4e89100f9a46382453fecaef81923077f5e747ce', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=216&crop=smart&format=pjpg&auto=webp&s=f306fae626542c18d6952b6fa4181b9a6f68fda3', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=320&crop=smart&format=pjpg&auto=webp&s=9387b9f5e3e25bdf7bd5ebb6863ec8c5ef5ed864', 'width': 320}, {'height': 354, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=640&crop=smart&format=pjpg&auto=webp&s=acc63ea1b2fa3f6f64b6ad946e9c838b0735bd7f', 'width': 640}, {'height': 531, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=960&crop=smart&format=pjpg&auto=webp&s=a68ea7f68eaa2a1da560ade08c232f4cea7f1b6c', 'width': 960}, {'height': 598, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a8ab889b8ac6b087bcc0667332e11b49f2aa6d5f', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/dTB5ODB1Zm1lYmdnMWjYeKAx0crVcXyPHeoBkUEv_0XxFKOgzr_2HclUJG3y.png?format=pjpg&auto=webp&s=615fe9ef3e2a62a969b4d9c4a12cec452888d412', 'width': 3898}, 'variants': {}}]}
What's the current uncensored 7B?
0
Or below 7B. The last one I have on my disk is Manticore, and that one's oooooooold. What's the newest SOTA?
2026-01-29T16:17:41
https://www.reddit.com/r/LocalLLaMA/comments/1qqczkn/whats_the_current_uncensored_7b/
ashleigh_dashie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqczkn
false
null
t3_1qqczkn
/r/LocalLLaMA/comments/1qqczkn/whats_the_current_uncensored_7b/
false
false
self
0
null
chat : add parsing for solar-open-100b by aldehir · Pull Request #18540 · ggml-org/llama.cpp
5
`reasoning_effort: "minimal" | "low" | "medium" | "high" = "high"` - Set reasoning effort. When set to `low` or `minimal`, reasoning is disabled.
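Per the PR note above, `reasoning_effort` rides along in the chat request and `low`/`minimal` disables reasoning. A minimal sketch of building such an OpenAI-style request body; the model name is a placeholder, and the endpoint path is the usual llama.cpp server `/v1/chat/completions`:

```python
# Sketch: an OpenAI-compatible chat request carrying reasoning_effort.
# Model name below is a placeholder, not necessarily the served name.
import json

def build_chat_request(prompt: str, reasoning_effort: str = "high") -> str:
    assert reasoning_effort in ("minimal", "low", "medium", "high")
    body = {
        "model": "solar-open-100b",  # placeholder
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,  # low/minimal turns reasoning off
    }
    return json.dumps(body)

# The resulting string would be POSTed to the server's
# /v1/chat/completions endpoint with curl or urllib.request.
```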
2026-01-29T16:17:31
https://github.com/ggml-org/llama.cpp/pull/18540
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qqcze7
false
null
t3_1qqcze7
/r/LocalLLaMA/comments/1qqcze7/chat_add_parsing_for_solaropen100b_by_aldehir/
false
false
default
5
{'enabled': False, 'images': [{'id': '3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=108&crop=smart&auto=webp&s=d04c38d8c91cdef722119a5d78482837d86e7656', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=216&crop=smart&auto=webp&s=b3d49687c40a84bed4e922eed0032c3db0650a35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=320&crop=smart&auto=webp&s=e14e1b3a2557f1e80a1c5ca7dab94a633efd8196', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=640&crop=smart&auto=webp&s=8189be2f746d94e5222c57541f7ac42211203dd8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=960&crop=smart&auto=webp&s=180b83e5d64a44bd43dc5372ae1ee5c42a6137ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?width=1080&crop=smart&auto=webp&s=0a1e56c4404261ea9431dc4f4f4f5735c5653271', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3TwT5w3u0FeWYIlu8vGj_s0Ycol7t-njdumAgpPxGvM.png?auto=webp&s=0f240bb26b36b1993aea911f9380fce3a72cf2e3', 'width': 1200}, 'variants': {}}]}
OpenAI throttling hiring, Linux founder vibe coding, chatgpt ads rollout... is the LLM future growing or choking?
0
Torvalds basically said "is this much better than I can do by hand? Sure is" about the Google Antigravity update. Altman said that OpenAI is looking to “dramatically slow down” hiring. I can't figure out if the ship is sinking or is some sort of rocket taking off.

Altman slowing hiring is the tell. Competitors will ship faster because AI shrinks the work per employee. Shopify’s CEO said: “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.” Translation: assume an AI coworker exists. Only then add humans. Apply that test to support, proposals, and finance ops.

Throw in ChatGPT ads, rolling out in testing for brands committing $1M+ for a few weeks.

Thoughts on what the world will look like in 12 months? Or 6 months?
2026-01-29T16:04:08
https://www.reddit.com/r/LocalLLaMA/comments/1qqcls0/openai_throttling_hiring_linux_founder_vibe/
ClassicAsiago
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqcls0
false
null
t3_1qqcls0
/r/LocalLLaMA/comments/1qqcls0/openai_throttling_hiring_linux_founder_vibe/
false
false
self
0
null
Run Local LLMs with Claude Code & OpenAI Codex
35
This step-by-step guide shows you how to connect open LLMs to Claude Code and Codex entirely locally. Run using any open model like DeepSeek, Qwen, Gemma etc. Official Blog post - [https://unsloth.ai/docs/basics/claude-codex](https://unsloth.ai/docs/basics/claude-codex)
2026-01-29T15:43:54
https://i.redd.it/46s35meo6bgg1.jpeg
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1qqc1fx
false
null
t3_1qqc1fx
/r/LocalLLaMA/comments/1qqc1fx/run_local_llms_with_claude_code_openai_codex/
false
false
default
35
{'enabled': True, 'images': [{'id': '46s35meo6bgg1', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/46s35meo6bgg1.jpeg?width=108&crop=smart&auto=webp&s=9b9dca00de189a06096cbd5a6ace443027a99a5a', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/46s35meo6bgg1.jpeg?width=216&crop=smart&auto=webp&s=a87154208b5defd5ed41c2d969f5bba2f14b7844', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/46s35meo6bgg1.jpeg?width=320&crop=smart&auto=webp&s=fc010d1ae73a7518fc4b95508121c05b49ebbfd3', 'width': 320}, {'height': 700, 'url': 'https://preview.redd.it/46s35meo6bgg1.jpeg?width=640&crop=smart&auto=webp&s=6a5c4d381bee94911400914067a46722f3b88a4a', 'width': 640}], 'source': {'height': 875, 'url': 'https://preview.redd.it/46s35meo6bgg1.jpeg?auto=webp&s=5ef78c7ac5be799ead6d8180685d7fe570f7b90c', 'width': 800}, 'variants': {}}]}
We open-sourced our browser agent sandbox: run arbitrary code from local LLMs without torching your system
0
2026-01-29T15:27:10
https://gobii.ai/blog/how-we-sandbox-ai-agents-in-production/
ai-christianson
gobii.ai
1970-01-01T00:00:00
0
{}
1qqbkpz
false
null
t3_1qqbkpz
/r/LocalLLaMA/comments/1qqbkpz/we_opensourced_our_browser_agent_sandbox_run/
false
false
default
0
{'enabled': False, 'images': [{'id': 'hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=108&crop=smart&auto=webp&s=3e16c4406c541e4f58f0236b1af1a66c5d8c6a40', 'width': 108}, {'height': 73, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=216&crop=smart&auto=webp&s=f29c8cdf393a9777af607a8742f0f5884e81454f', 'width': 216}, {'height': 109, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=320&crop=smart&auto=webp&s=3094977d40af8f158eef7791652c6e5be3a625fe', 'width': 320}, {'height': 218, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=640&crop=smart&auto=webp&s=feb982ff8434eba5a129a9f46158c5a94292be7a', 'width': 640}, {'height': 328, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=960&crop=smart&auto=webp&s=03c7e9b3a459d7dab0fa25630f7e9038a43253bd', 'width': 960}, {'height': 369, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?width=1080&crop=smart&auto=webp&s=3d07b9a6306efc74f41413fb0ffeba9e12e0ec56', 'width': 1080}], 'source': {'height': 1082, 'url': 'https://external-preview.redd.it/hReyIajJumdKQhehSSJPa9x_c5UQe-e_AQjFnYcfCM4.png?auto=webp&s=e7c149a7453e3815e77e43c58004372c1482fae5', 'width': 3163}, 'variants': {}}]}
GLM 4.7 flash Q6 thought for 1400 minutes. 2000 lines of thoughts, had to be stopped.
51
I tried this model for the first time, asked a simple question, and forgot about it. This morning I found it still thinking. Thankfully I stopped it before it became sentient. 3090 + 3060 dual GPU, 96GB RAM.
2026-01-29T15:27:00
https://www.reddit.com/gallery/1qqbkk4
regjoe13
reddit.com
1970-01-01T00:00:00
0
{}
1qqbkk4
false
null
t3_1qqbkk4
/r/LocalLLaMA/comments/1qqbkk4/glm_47_flash_q6_thought_for_1400_minutes_2000/
false
false
https://b.thumbs.redditm…x7ALGs87m29A.jpg
51
null
Releasing CerberusEye: A tool to audit your local LLM exposure (Research Methodology)
0
I am releasing the research infrastructure used for "The Glass Box Paradox" paper. CerberusEye is a localized auditing tool designed to check whether your self-hosted LLM inference endpoints (Ollama, vLLM, etc.) are accidentally exposed to the public internet. It uses a "Bring Your Own Key" approach for Shodan/Censys to verify whether your IP is indexed, or you can use the Manual Mode (offline) to scan specific ranges.

The "Glass Box Paradox": the tool was built to prove that many "local" privacy-focused setups are leaking data due to misconfigured ports (5000, 11434, 8080).

Links:
GitHub (Tool): [https://github.com/XORD-AI/CerberusEye](https://github.com/XORD-AI/CerberusEye)
White Paper: [https://professorsigmund.com/clawbot-glass-box-paradox.html](https://professorsigmund.com/clawbot-glass-box-paradox.html)

- Professor Sigmund
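The core check the post describes (are the usual LLM ports reachable?) can be sketched in a few lines. This is an illustrative helper, not CerberusEye's actual code; `is_port_open` and `audit` are names made up here:

```python
# Minimal port-exposure check in the spirit of the post.
# Hypothetical helper, not part of CerberusEye.
import socket

COMMON_LLM_PORTS = [5000, 8080, 11434]  # ports called out in the post

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(host: str, ports=COMMON_LLM_PORTS) -> dict:
    """Map each port to whether it accepted a connection."""
    return {p: is_port_open(host, p) for p in ports}
```

Running `audit()` against your public IP (from an outside host) approximates the Manual Mode scan; the Shodan/Censys path instead asks whether indexers have already seen the port.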
2026-01-29T15:23:51
https://www.reddit.com/r/LocalLLaMA/comments/1qqbhh4/releasing_cerberuseye_a_tool_to_audit_your_local/
Professor_Sigmund
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqbhh4
false
null
t3_1qqbhh4
/r/LocalLLaMA/comments/1qqbhh4/releasing_cerberuseye_a_tool_to_audit_your_local/
false
false
self
0
null
“Open-source AI system using Ollama (incomplete) – looking for devs to help with RAG & scraping”
0
Hi everyone, I’m working on an open-source AI project that is already functional but clearly incomplete and somewhat messy in parts. I’m being upfront about that.

The system currently runs multiple powerful models via Ollama (cloud-based for now), and I’m actively testing interactions with models like:

- deepseek-v3.1:671b
- gpt-oss:20b / 120b
- kimi-k2:1t
- qwen3-coder:480b
- glm-4.6
- minimax-m2
- mistral-large-3

What’s missing / needed:

- Proper RAG implementation
- Vector database integration (FAISS / Chroma / Qdrant)
- Web scraping + HTML parsing for knowledge ingestion
- Search + retrieval logic
- Architecture cleanup & stabilization

The project is not a polished product. Some parts are under active development, others need refactoring or redesign. I’m not looking to hire anyone. I’m looking for developers who enjoy fixing incomplete systems, discussing architecture, and building open-source AI tooling.

I’ll attach a screenshot showing live interaction with Ollama models. GitHub link is in the comments. Any technical feedback, criticism, or collaboration is welcome.
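For the "search + retrieval logic" item, the smallest possible starting point is in-memory similarity search with no FAISS/Chroma/Qdrant dependency. A sketch, where `embed` is a toy stand-in for a real embedding model:

```python
# Minimal in-memory retrieval: rank documents by cosine similarity.
# embed() is a toy bag-of-words stand-in; swap in a real embedding model.
import math

def embed(text: str) -> dict:
    vec: dict = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

Once this works end to end, the same `retrieve` interface can be re-backed by a real vector database without touching the calling code.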
2026-01-29T15:21:44
https://www.reddit.com/gallery/1qqbfe5
Fantastic-Market-790
reddit.com
1970-01-01T00:00:00
0
{}
1qqbfe5
false
null
t3_1qqbfe5
/r/LocalLLaMA/comments/1qqbfe5/opensource_ai_system_using_ollama_incomplete/
false
false
https://a.thumbs.redditm…E9sQE1SThLB4.jpg
0
null
Releasing CerberusEye: A tool to audit your local LLM exposure (Research Methodology)
1
[deleted]
2026-01-29T15:21:10
[deleted]
1970-01-01T00:00:00
0
{}
1qqbett
false
null
t3_1qqbett
/r/LocalLLaMA/comments/1qqbett/releasing_cerberuseye_a_tool_to_audit_your_local/
false
false
default
1
null
Handling multi-speaker chaos with Gemini Live API and a custom SFU (Deep Sea Stories)
2
Most voice AI demos work great in a silent room with one person. As soon as you have three people talking over each other or interrupting, it gets much harder. We recently built Deep Sea Stories, a multiplayer mystery game, and had to solve the multi-speaker nightmare using the Gemini Live API and Fishjam.

The challenge: how do you let an AI "Riddle Master" listen to a group of detectives without getting confused by background noise or simultaneous questions?

To solve it, we used a Selective Forwarding Unit (SFU) approach. Instead of just dumping a mixed audio stream into the model, the SFU allows granular control over which audio tracks are prioritized and sent to the Gemini Live backend.

We wrote a deep dive into the architecture and how we orchestrated the audio flow to make the AI feel like a real participant in the room rather than a walkie-talkie. Full technical breakdown: [https://blog.swmansion.com/voice-ai-how-we-built-a-multi-speaker-ai-agent-using-gemini-a59e08fb18aa](https://blog.swmansion.com/voice-ai-how-we-built-a-multi-speaker-ai-agent-using-gemini-a59e08fb18aa)
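To make the SFU idea concrete: one simple prioritization policy is to forward only the loudest track(s) per time window instead of a mixed stream. This is an illustration of the general technique, not the Fishjam API; the function names are made up here:

```python
# Illustrative SFU-style track selection: forward only the loudest
# speaker track(s), measured by short-term RMS energy.
import math

def rms(samples: list) -> float:
    """Root-mean-square energy of one track's audio window."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pick_active_tracks(tracks: dict, limit: int = 1) -> list:
    """Given {track_id: samples}, return the `limit` loudest track ids."""
    ranked = sorted(tracks, key=lambda t: rms(tracks[t]), reverse=True)
    return ranked[:limit]
```

A real SFU would apply this per window with hysteresis (so a speaker isn't dropped mid-word), but the core selection step looks like this.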
2026-01-29T15:20:25
https://blog.swmansion.com/voice-ai-how-we-built-a-multi-speaker-ai-agent-using-gemini-a59e08fb18aa
carlievanilla
blog.swmansion.com
1970-01-01T00:00:00
0
{}
1qqbe3r
false
null
t3_1qqbe3r
/r/LocalLLaMA/comments/1qqbe3r/handling_multispeaker_chaos_with_gemini_live_api/
false
false
default
2
null
LM Studio multiple models loading
1
https://preview.redd.it/…n VS Code bro...
2026-01-29T15:05:52
https://www.reddit.com/r/LocalLLaMA/comments/1qqazvy/lm_studio_multiple_models_loading/
2poor2die
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqazvy
false
null
t3_1qqazvy
/r/LocalLLaMA/comments/1qqazvy/lm_studio_multiple_models_loading/
false
false
https://b.thumbs.redditm…pufHw-nSPISU.jpg
1
null
HELP! local TTS with Mac Studio M3 ultra
1
Hey, I'm working on an AI project. The two models I used are Chatterbox and Qwen. Qwen took 3-4 minutes to generate but was flawless, until I got hit with some token/login BS, so now I have to go back to Chatterbox. Decent cloning, but 30-50 seconds for a generation is crazy and I need to speed it up. I'm on a Mac Studio M3 Ultra. Am I cooked?
2026-01-29T15:03:37
https://www.reddit.com/r/LocalLLaMA/comments/1qqaxpg/help_local_tts_with_mac_studio_m3_ultra/
Expensive-Bridge9165
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqaxpg
false
null
t3_1qqaxpg
/r/LocalLLaMA/comments/1qqaxpg/help_local_tts_with_mac_studio_m3_ultra/
false
false
self
1
null
I have $8000 RunPod credits, which model should I use for OpenCode?
0
I fully understand that open-source models are no feasible substitute for my Claude Max subscription. That said, I want to leverage my RunPod credits for the easier coding tasks I mostly use Sonnet/Haiku for. Which model should I look into?
2026-01-29T14:57:27
https://www.reddit.com/r/LocalLLaMA/comments/1qqarn1/i_have_8000_runpod_credits_which_model_should_i/
Accomplished_Buy9342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqarn1
false
null
t3_1qqarn1
/r/LocalLLaMA/comments/1qqarn1/i_have_8000_runpod_credits_which_model_should_i/
false
false
self
0
null
Show: Fully Local Voice Assistant (with optional Voice Cloning)
5
I thought this might interest all the tinkerers out there. A few weeks ago I spent a couple of hours putting together a fully-local voice assistant on my commodity hardware. I wanted to see how easy it would be, and how "good" it is. Turns out it was outrageously easy, and quite good - hence I called it the "Outrageous Voice Assistant". It implements a typical ASR -> LLM -> TTS pipeline with all models being open-weight:

* ASR: NVIDIA parakeet-tdt-0.6b-v3 (600M)
* LLM: Mistral Ministral-3 3B, 4-bit quantized
* TTS (simple): Hexgrad Kokoro (82M)

I implemented a simple frontend (basically an HTML page with a vanilla JS "button"), the backend, and a shell script as a driver. The performance is outstanding, with sub-second round-trip time (essentially real-time) on my PC.

Last weekend I saw a Qwen3-TTS release and decided to integrate that as well to enable voice cloning - I used what I consider the most impressive voice out there, Dua Lipa's, which worked outrageously well. It also brings to mind ethical concerns, given the ease with which one can clone a "virtual" person. Qwen3-TTS is much slower compared to Kokoro, but I am looking at some optimizations right now.

The full code with demos is available here: [https://github.com/acatovic/ova](https://github.com/acatovic/ova)

For reference: I run it on a PC my son and I put together last year, which consists of an RTX 5070 (12GB VRAM) and 64GB RAM - but the above setup doesn't use anywhere near that capacity, so it should work well on lower-end systems, and on Apple Silicon as well.
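The ASR -> LLM -> TTS pipeline described above is just function composition per turn. A skeleton under stated assumptions: the three stage functions below are stubs standing in for parakeet, Ministral, and Kokoro, not the project's actual code:

```python
# Skeleton of one voice-assistant turn: transcribe -> respond -> speak.
# All three stages are stubs; a real system plugs in the actual models.
def asr(audio: bytes) -> str:
    """Stub ASR: pretend the 'audio' bytes are already a transcript."""
    return audio.decode("utf-8")

def llm(prompt: str) -> str:
    """Stub LLM: a real system would query the local model here."""
    return f"echo: {prompt}"

def tts(text: str) -> bytes:
    """Stub TTS: a real system would synthesize audio here."""
    return text.encode("utf-8")

def assistant_turn(audio_in: bytes) -> bytes:
    """One full turn of the ASR -> LLM -> TTS pipeline."""
    return tts(llm(asr(audio_in)))
```

Keeping the stages behind this interface is what makes swapping Kokoro for Qwen3-TTS a one-line change.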
2026-01-29T14:56:16
https://www.reddit.com/r/LocalLLaMA/comments/1qqaqj5/show_fully_local_voice_assistant_with_optional/
newcomb_benford_law
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqaqj5
false
null
t3_1qqaqj5
/r/LocalLLaMA/comments/1qqaqj5/show_fully_local_voice_assistant_with_optional/
false
false
self
5
null
Anyone see the new Acree models?
20
https://huggingface.co/arcee-ai/Trinity-Large-Preview 400B w/ 13B active for the large preview model. Free right now via API on OpenRouter (or the Apache 2.0 weights on HuggingFace).
2026-01-29T14:49:48
https://www.reddit.com/r/LocalLLaMA/comments/1qqakid/anyone_see_the_new_acree_models/
EuphoricPenguin22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqakid
false
null
t3_1qqakid
/r/LocalLLaMA/comments/1qqakid/anyone_see_the_new_acree_models/
false
false
self
20
null
My humble GLM 4.7 Flash appreciation post
83
I was impressed by GLM 4.7 Flash's performance, but not surprised, because I knew they could make an outstanding model that would leave most competitor models of a similar size in the dust.

However, I was wondering how good it really is, so I used Artificial Analysis to put together all the similar-sized open-weight models I could think of at the time (or at least the ones available there for selection) and checked their benchmarks against each other. To make things more interesting, I decided to throw in some of the best Gemini models for comparison, and well... I knew the model was good, but this good? I don't think we can appreciate this little gem enough; just look who's daring to get so close to the big guys. 😉

This graph makes me wonder: could it be that 30B-A3B or similar model sizes might eventually be enough to compete with today's big models? To me it looks that way, and I strongly believe ZAI has what it takes to get us there. I think it's amazing that we have a model of this size and quality at home now. Thank you, ZAI! ❤
2026-01-29T14:42:32
https://i.redd.it/jh83y5tqqagg1.png
Cool-Chemical-5629
i.redd.it
1970-01-01T00:00:00
0
{}
1qqadna
false
null
t3_1qqadna
/r/LocalLLaMA/comments/1qqadna/my_humble_glm_47_flash_appreciation_post/
false
false
default
83
{'enabled': True, 'images': [{'id': 'jh83y5tqqagg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=108&crop=smart&auto=webp&s=746dbeb5fb6d9c2428bb58e1d3b212115ad34e8c', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=216&crop=smart&auto=webp&s=5a033f23a530265ffbb20bc7a442963c6825bd25', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=320&crop=smart&auto=webp&s=0bbe726e32c055c96513bba40abf6b1ac594fd87', 'width': 320}, {'height': 269, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=640&crop=smart&auto=webp&s=84c7118a2d6ca354bab71459f8fd90766a909deb', 'width': 640}, {'height': 404, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=960&crop=smart&auto=webp&s=2084618d65215e52475880a576f6a66da76c2ef0', 'width': 960}, {'height': 455, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?width=1080&crop=smart&auto=webp&s=c81011a632a7cdc6d04e320f46d1ead35f3e8409', 'width': 1080}], 'source': {'height': 700, 'url': 'https://preview.redd.it/jh83y5tqqagg1.png?auto=webp&s=07ce2832b920472c2b9a89c3df34bf15fcbbaca0', 'width': 1660}, 'variants': {}}]}
Recommendations for a local image generation/modification LLM?
3
I have a Llama running on an RTX 3090 24GB for Home Assistant and some other things; I haven't dabbled in ComfyUI and the like. I have a few image modifications (real image to a 3D-model-like illustration / real image to a collage) that I need to do in the next few hours, and I'm really not in the mood to give OpenAI both the money and the personal information. What would be the most straightforward local model to install and generate with?
2026-01-29T14:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1qqacsv/recommendations_for_a_local_image/
answerencr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqacsv
false
null
t3_1qqacsv
/r/LocalLLaMA/comments/1qqacsv/recommendations_for_a_local_image/
false
false
self
3
null
LlamaLib: Cross-platform C++/C# library for running LLMs everywhere
2
Hey r/LocalLLaMA! I've been working on a library that makes it easier to integrate LLMs into C++ and C# applications, and wanted to share it with the community.

**At a glance:** [LlamaLib](https://github.com/undreamai/LlamaLib) is an open-source high-level library designed to run LLMs embedded within your application - no separate servers, no open ports, no external dependencies.

**Key features:**

- **High-level API** - clean, object-oriented design in C++ and C#
- **Cross-platform** - Windows, macOS, Linux, Android, iOS, VR
- **Automatic hardware detection** - picks the best backend at runtime (NVIDIA, AMD, Metal, or CPU)
- **Self-contained** - embeds in your application, small footprint
- **Production-ready** - battle-tested in [LLM for Unity](https://github.com/undreamai/LLMUnity), already used in 20+ games / 7500+ users

**Quick example in C++ (C# essentially identical):**

    LLMService llm("path/to/model.gguf");
    llm.start();
    std::string response = llm.completion("Hello, how are you?");

**Why another library?** Existing solutions either:

- require running separate server processes,
- build for specific hardware (NVIDIA-only), or
- are Python-based.

LlamaLib focuses on runtime backend selection and embeds directly into your application, while being cross-platform. It exposes a simple API for LLM operations (completion, tokenization, embeddings) with an object-oriented design: LLMService (LLM engine), LLMClient (local/remote client), LLMAgent (conversational agent).

LlamaLib is built on top of the awesome [llama.cpp](https://github.com/ggerganov/llama.cpp) library and is distributed under the Apache 2.0 license.

**Links:** [GitHub](https://github.com/undreamai/LlamaLib), [NuGet](https://www.nuget.org/packages/LlamaLib), [Discord](https://discord.gg/RwXKQb6zdv)

Would love to hear your thoughts and feedback!
2026-01-29T14:33:09
https://www.reddit.com/r/LocalLLaMA/comments/1qqa52f/llamalib_crossplatform_cc_library_for_running/
UndreamAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qqa52f
false
null
t3_1qqa52f
/r/LocalLLaMA/comments/1qqa52f/llamalib_crossplatform_cc_library_for_running/
false
false
self
2
{'enabled': False, 'images': [{'id': 'q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=108&crop=smart&auto=webp&s=efaf622e8b5219c1084f66e49c06e662194c9414', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=216&crop=smart&auto=webp&s=4f6b1b39303baeec14565bc945cc8016b3310659', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=320&crop=smart&auto=webp&s=c1e76566bb46551b0e78caa87540ec07c69ddf51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=640&crop=smart&auto=webp&s=8053b493cd938ab35025ebb13aab4150329bd5f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=960&crop=smart&auto=webp&s=0c31d787f8141d8901a6ebbeb9e976107f584c7e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?width=1080&crop=smart&auto=webp&s=a7293e8255d3b7a58c81c4c652f594ef3ba14819', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q-p3hv5DEkB8DBrmlbUF-WsnYSzSP07xWyKAr4HtHfg.png?auto=webp&s=b6ebd894b04032b49b62e4b086dec6ccffdd83c5', 'width': 1200}, 'variants': {}}]}
Universal CLI for RAG
0
Did anyone figure out yet how to build a solution for this without API authentication (i.e. using user credentials)? There are solutions to access multiple code-based agents from one CLI ([Puzldai](https://github.com/MedChaouch/Puzld.ai)), but I haven't found anything to access the chatbots. It seriously impedes my workflow, where I have to open 10 windows to copy/paste prompts between models.

Of course, there are LMArena and OpenRouter, but those are web-based GUIs. I tried playing around a bit, but it would require quite some work to manage sessions reliably through browser automation frameworks, and I'm not sure whether this is covered by the TOS of various providers.

Big thanks to anyone who can point me in the right direction.
2026-01-29T14:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1qq9uyx/universal_cli_for_rag/
laurekamalandua
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq9uyx
false
null
t3_1qq9uyx
/r/LocalLLaMA/comments/1qq9uyx/universal_cli_for_rag/
false
false
self
0
{'enabled': False, 'images': [{'id': 'dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=108&crop=smart&auto=webp&s=633aa344ba43aa87ad94cd515fdeb63f38c9afed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=216&crop=smart&auto=webp&s=0e6a997cfcc1d6bae950e13434e52fc684ba200a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=320&crop=smart&auto=webp&s=c58b1a021b53eab39d82313232408b430e40d817', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=640&crop=smart&auto=webp&s=56f0ce73364d811e6e22a8d90b7ef98571980485', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=960&crop=smart&auto=webp&s=bda04b7eba4ebb8044bed5695e39972abb88bffd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?width=1080&crop=smart&auto=webp&s=46905ba86b5eff319159eb1e3d4d426901458bf9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dFyT2f4BUdO6eud440KB-eaKHYfHVDsYSfCPxQtxWFU.png?auto=webp&s=31aa196e453246de867c460e4cda6004d5943585', 'width': 1200}, 'variants': {}}]}
QWEN3 on the SBC (Orange pi 6 plus)
10
Sorry for my bad English; I wrote this article with the help of a local LLM :( A week ago, I bought an Orange Pi 6 Plus from AliExpress to try running LLMs on an SBC. It has 32GB of unified LPDDR5 RAM!!! and is almost identical to the Radxa Orion O6. The spec of the Orange Pi 6 32GB (12-core ARMv9 architecture): * **SoC:** CIX CD8160 (12-core 64-bit ARMv9: 4x A720 + 4x A720 + 4x A520). * **AI Performance:** ~45 TOPS (combined CPU/GPU/NPU). * **Memory:** 16GB, 32GB, or 64GB LPDDR5. Unfortunately, the OS and driver support of the Orange Pi series is notoriously bad. The latest release, Ubuntu 24.04 with a 6.8 kernel and the dedicated GPU driver, supports Vulkan 1.4, but it was painfully slow and unstable for general usage. Finally, I was able to achieve satisfactory performance with this combination: **ik_llama.cpp + QWEN3-30B-A3B (IQ4_XS quant)**. Personally, I strongly advise against buying an Orange Pi 6 for LLM purposes. However, I'll leave a few hints here for friends who might repeat this foolish mistake. **1. Compile ik_llama with ARMv9 flags and GCC 12** sudo add-apt-repository ppa:ubuntu-toolchain-r/test sudo apt update sudo apt install -y gcc-12 g++-12 cmake -B build \ -DGGML_CPU_ALL_VARIANTS=OFF \ -DGGML_ARCH_FLAGS="-march=armv9-a+dotprod+fp16" cmake --build build --config Release -j$(nproc) **2. Do not try using the GPU/NPU - just depend on the big cores (4 cores) with the -ngl 0 flag** I'm not familiar with Linux & ARM devices and can't guarantee the number of big cores on other boards, so please use btop or another tool to get exact information about your board.
Here is my final setting to load the QWEN3-30B Instruct model with usable performance: taskset -c 0,1,10,11 ./llama-bench -m /home/LLM_test/Qwen3-VL-30B-A3B-Instruct-IQ4_XS.gguf -ngl 0 --mmap 0 -ctk q8_0 -ctv q8_0

| model | size | params | backend | threads | type_k | type_v | mmap | test | t/s |
| ------------------------------------ | --------: | ------: | ------- | ------: | -----: | -----: | ---: | ----: | -----------: |
| qwen3vlmoe 30B.A3B IQ4_XS - 4.25 bpw | 15.25 GiB | 30.53 B | CPU | 12 | q8_0 | q8_0 | 0 | pp512 | 52.82 ± 0.42 |
| qwen3vlmoe 30B.A3B IQ4_XS - 4.25 bpw | 15.25 GiB | 30.53 B | CPU | 12 | q8_0 | q8_0 | 0 | tg128 | 8.35 ± 0.00 |

build: 69fdd041 (4149)

https://reddit.com/link/1qq9n5f/video/llym7f8jqagg1/player
2026-01-29T14:13:34
https://www.reddit.com/r/LocalLLaMA/comments/1qq9n5f/qwen3_on_the_sbc_orange_pi_6_plus/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq9n5f
false
null
t3_1qq9n5f
/r/LocalLLaMA/comments/1qq9n5f/qwen3_on_the_sbc_orange_pi_6_plus/
false
false
self
10
null
BYOK AI assistant with no self hosting and no platform data access
0
I’ve been testing a hosted AI assistant called CLAWD that uses a BYOK setup where your API key connects directly to your LLM provider. No prompts or responses are stored or routed through the platform. It’s privacy-first without needing to self-host or manage infrastructure. Sharing because people here care about data flow and model control, and I'm curious what everyone else thinks of CLAWD. Link: [https://www.paio.bot/](https://www.paio.bot/) Also found a coupon code for free access: newpaio
2026-01-29T14:11:16
https://www.reddit.com/r/LocalLLaMA/comments/1qq9l1b/byok_ai_assistant_with_no_self_hosting_and_no/
CookiesWind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq9l1b
false
null
t3_1qq9l1b
/r/LocalLLaMA/comments/1qq9l1b/byok_ai_assistant_with_no_self_hosting_and_no/
false
false
self
0
null
Built a local structured learning hub from PYQs with topics assessments and theory improving over reruns with llama3.1
0
ocr accuracy isn't that great and theory generation is self improving and will be better with better models and multiple reruns start if you like: [github.com/imxade/dontcompete](http://github.com/imxade/dontcompete)
2026-01-29T14:07:30
https://i.redd.it/ykiu8tqnpagg1.gif
Sad_Finance3908
i.redd.it
1970-01-01T00:00:00
0
{}
1qq9hps
false
null
t3_1qq9hps
/r/LocalLLaMA/comments/1qq9hps/built_a_local_structured_learning_hub_from_pyqs/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ykiu8tqnpagg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=108&crop=smart&format=png8&s=35819ab28ae639da11f95f1bfdf3c8f2a92399b0', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=216&crop=smart&format=png8&s=b941ab165ccd07aacd2aebc0209bc79ede2deca9', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=320&crop=smart&format=png8&s=4773cb2adde4174729960ddb8a5522d05d925db4', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=640&crop=smart&format=png8&s=5ca61cbb275f2308326c83f45b1dceec6377c799', 'width': 640}, {'height': 619, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=960&crop=smart&format=png8&s=0f4cc5229416d006fc0c94cdc775b867c2d7b34d', 'width': 960}, {'height': 697, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=1080&crop=smart&format=png8&s=356da06802f01d99261008e344a2fef8bf41a923', 'width': 1080}], 'source': {'height': 1092, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?format=png8&s=79a0198584098de5cc70c69d35844b14bb5eb43f', 'width': 1691}, 'variants': {'gif': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=108&crop=smart&s=5e865dee8de2c236729c30a9da7ae42c5c554aca', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=216&crop=smart&s=b0dbb1eb91c171e8da73ef353d55caadb845f989', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=320&crop=smart&s=4c153580885ddfb766f8d129474bb6ab502654d8', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=640&crop=smart&s=a4bd03dc2cb46f694d1b692be5041aedabdec0bf', 'width': 640}, {'height': 619, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=960&crop=smart&s=1d85b351772684893c53f2e4f4539203c3adc5c7', 'width': 960}, {'height': 697, 'url': 
'https://preview.redd.it/ykiu8tqnpagg1.gif?width=1080&crop=smart&s=8e93cb699c93983926d81bc48fd73abd0b16bad4', 'width': 1080}], 'source': {'height': 1092, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?s=cad2e8bfcb5076582c234a056456a12bd27ed640', 'width': 1691}}, 'mp4': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=108&format=mp4&s=54017d75fe6522d25bec82280cb36b3befac825c', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=216&format=mp4&s=d5b4e550d38cae0ef1b576036f42eb605339c9ac', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=320&format=mp4&s=ae2b6d4f6d150a0c98028271eb5e3357132f55dc', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=640&format=mp4&s=f4b487ca77123dd7c3b508e106042b3ff9a4f4ca', 'width': 640}, {'height': 619, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=960&format=mp4&s=4f33f8738fb7db82550d54116c551ff669ad4ba3', 'width': 960}, {'height': 697, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?width=1080&format=mp4&s=18c97cde2145ba89357a5ef334d319a1d200940f', 'width': 1080}], 'source': {'height': 1092, 'url': 'https://preview.redd.it/ykiu8tqnpagg1.gif?format=mp4&s=b46b0e5c3afe3a283ea5438b7477220b72050b9b', 'width': 1691}}}}]}
[image processing failed]
1
[deleted]
2026-01-29T14:05:19
[deleted]
1970-01-01T00:00:00
0
{}
1qq9fsz
false
null
t3_1qq9fsz
/r/LocalLLaMA/comments/1qq9fsz/image_processing_failed/
false
false
default
1
null
Alibaba Introduces Qwen3-Max-Thinking — Test-Time Scaled Reasoning with Native Tools, Beats GPT-5.2 & Gemini 3 Pro on HLE (with Search)
0
**Key Points:** * **What it is:** Alibaba’s new **flagship reasoning LLM** (Qwen3 family) * **1T-parameter MoE** * **36T tokens** pretraining * **260K context window** (repo-scale code & long docs) * **Not just bigger — smarter inference** * Introduces **experience-cumulative test-time scaling** * Reuses partial reasoning across multiple rounds * Improves accuracy **without linear token cost growth** * **Reported gains at similar budgets** * GPQA Diamond: \~90 → **92.8** * LiveCodeBench v6: \~88 → **91.4** * **Native agent tools (no external planner)** * Search (live web) * Memory (session/user state) * Code Interpreter (Python) * Uses **Adaptive Tool Use** — model decides when to call tools * Strong tool orchestration: **82.1 on Tau² Bench** * **Humanity’s Last Exam (HLE)** * Base (no tools): **30.2** * **With Search/Tools: 49.8** * GPT-5.2 Thinking: 45.5 * Gemini 3 Pro: 45.8 * Aggressive scaling + tools: **58.3** 👉 **Beats GPT-5.2 & Gemini 3 Pro on HLE (with search)** * **Other strong benchmarks** * MMLU-Pro: 85.7 * GPQA: 87.4 * IMOAnswerBench: 83.9 * LiveCodeBench v6: 85.9 * SWE Bench Verified: 75.3 * **Availability** * **Closed model, API-only** * OpenAI-compatible + Claude-style tool schema **My view/experience:** * I haven’t built a full production system on it yet, but from the design alone this feels like a **real step forward for agentic workloads** * The idea of **reusing reasoning traces across rounds** is much closer to how humans iterate on hard problems * Native tool use inside the model (instead of external planners) is a big win for **reliability and lower hallucination** * Downside is obvious: **closed weights + cloud dependency**, but as a *direction*, this is one of the most interesting releases recently **Link:** [https://qwen.ai/blog?id=qwen3-max-thinking](https://qwen.ai/blog?id=qwen3-max-thinking)
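Since access is via an OpenAI-compatible API with function-calling tools, a request that opts into Adaptive Tool Use would look roughly like the sketch below. The model id and tool name here are placeholders I chose for illustration, not confirmed values from the blog post:

```python
# Minimal sketch of an OpenAI-style chat request with a tool definition.
# "qwen3-max-thinking" and "web_search" are hypothetical identifiers.
def build_request(question: str) -> dict:
    search_tool = {
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical tool name
            "description": "Search the live web and return snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
    return {
        "model": "qwen3-max-thinking",  # placeholder model id
        "messages": [{"role": "user", "content": question}],
        "tools": [search_tool],
        # "auto" leaves the decision to the model, which is the whole point
        # of Adaptive Tool Use: it calls the tool only when it decides to.
        "tool_choice": "auto",
    }

payload = build_request("Summarize the latest HLE results.")
```

The interesting part is `tool_choice: "auto"`: the model, not an external planner, decides whether a search round is needed.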
2026-01-29T14:03:22
https://www.reddit.com/r/LocalLLaMA/comments/1qq9e0z/alibaba_introduces_qwen3maxthinking_testtime/
techlatest_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq9e0z
false
null
t3_1qq9e0z
/r/LocalLLaMA/comments/1qq9e0z/alibaba_introduces_qwen3maxthinking_testtime/
false
false
self
0
null
I built a meeting assistant that works 100% offline, no cloud, no subscription
0
Hi everyone, After working on this for many months, I am ready to show the world what I've been cooking: Wavezard. Wavezard is a portable, offline AI meeting assistant for Windows and macOS that records system and microphone audio at the OS level. It transcribes meetings locally using Whisper with speaker identification, generates structured summaries, and lets users chat with transcripts. It works with any meeting platform or in-person conversations, supports 99 languages, prioritizes privacy, and runs efficiently on CPU with optional GPU acceleration. There is no subscription. It comes with a 14-day free trial, and after that you can buy a lifetime license for $19. Do try it out, I've put a lot of effort into this (a lot). Links: * [Website](https://wavezard.com/) * [Demo video](https://wavezard.com/demo.mp4) * [Discord](https://discord.gg/CQDw36vRW6) I appreciate anyone taking the time to check out my app. It would mean the world to me. Thanks everyone.
2026-01-29T14:01:55
https://www.reddit.com/r/LocalLLaMA/comments/1qq9cpa/i_built_a_meeting_assistant_that_works_100/
Wavezard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq9cpa
false
null
t3_1qq9cpa
/r/LocalLLaMA/comments/1qq9cpa/i_built_a_meeting_assistant_that_works_100/
false
false
self
0
null
Claude Opus 4.5 API for Cursor
0
Hey everyone, We offer API access to **Claude Opus 4.5** for Cursor IDE (full Agent + tool support). You get a key and a URL; we handle the backend. Offering keys at **2x, 3x, and 4x** value — you pay less, get more usage. **Pricing (usage value):** * **2x:** Pay $20 → $40 worth of usage * **3x:** Pay $50 → $150 worth of usage * **4x:** Pay $100 → $400 worth of usage **What you get:** * Opus 4.5 in Cursor with 200k context, full Agent + tools * Simple setup: one URL, one key If you’d like a key, drop a comment or DM with the tier you want (2x, 3x, or 4x) and I’ll DM you. We’re just sharing extra capacity. Cheers.
2026-01-29T13:55:50
https://www.reddit.com/r/LocalLLaMA/comments/1qq974r/claude_opus_45_api_for_cursor/
Alternative-Theme885
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq974r
false
null
t3_1qq974r
/r/LocalLLaMA/comments/1qq974r/claude_opus_45_api_for_cursor/
false
false
self
0
null
How to safely implement file downloads in an MCP server?
0
I've got file uploads working in my MCP server where my AI agent can push files to it. Now I'm working on the download side and want to make sure I'm doing this safely. Since the agent will be requesting and downloading files autonomously, I'm particularly concerned about: - How to scope what files the agent can access (sandboxing/permissions) - Validating file paths to prevent the agent from accidentally accessing system files - Whether to expose direct file paths or use some kind of token/ID system - Rate limiting or size limits to prevent issues - Any security gotchas when an AI agent is the client Has anyone implemented something similar? What patterns or safeguards did you put in place?
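For the path-validation point specifically, the usual pattern is to resolve every requested path and check containment before serving it. A minimal sketch, assuming a Python server and a hypothetical sandbox directory `/srv/mcp-files`:

```python
from pathlib import Path

# Hypothetical sandbox root: the only directory the agent may read from.
SANDBOX = Path("/srv/mcp-files").resolve()

def resolve_download(requested: str) -> Path:
    # resolve() collapses "..", symlinks, and absolute-path tricks
    # *before* the containment check, so "../etc/passwd" or
    # "/etc/passwd" both end up rejected below.
    candidate = (SANDBOX / requested).resolve()
    if not candidate.is_relative_to(SANDBOX):
        raise PermissionError(f"path escapes sandbox: {requested}")
    return candidate
```

Note `Path.is_relative_to` needs Python 3.9+. On top of this you'd typically layer a size cap before streaming the file and hand the agent opaque file IDs rather than raw paths, so the paths above never appear in the tool interface at all.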
2026-01-29T13:48:11
https://v.redd.it/n3uea3h6kagg1
Admirable-Choice9727
/r/LocalLLaMA/comments/1qq90ns/how_to_safely_implement_file_downloads_in_an_mcp/
1970-01-01T00:00:00
0
{}
1qq90ns
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n3uea3h6kagg1/DASHPlaylist.mpd?a=1772416101%2COTk0MmNhMzZlMDBhYzFmNzIwNThjY2ExMmE1YWQ1MzFlMTk1YmU3ZWIzOWE2NTExMjQ4ZTg2NmYxNGRiZjk2Yw%3D%3D&v=1&f=sd', 'duration': 539, 'fallback_url': 'https://v.redd.it/n3uea3h6kagg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/n3uea3h6kagg1/HLSPlaylist.m3u8?a=1772416101%2CZDQxZmIyMzNjM2VkYzMyM2MyYjIzZDc2YjZlNGI3ZTI5ZDdhZGM4ZWFlYzQ4M2YzNDk3N2NmZTU4MzQyNzYwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n3uea3h6kagg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1qq90ns
/r/LocalLLaMA/comments/1qq90ns/how_to_safely_implement_file_downloads_in_an_mcp/
false
false
https://external-preview…b3440bdf236f446f
0
{'enabled': False, 'images': [{'id': 'b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD.png?width=108&crop=smart&format=pjpg&auto=webp&s=3b8183de523f752ab7491fdeb42c1c64cb53e655', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD.png?width=216&crop=smart&format=pjpg&auto=webp&s=03bfa097dc00f0c9449c92ad4f6367a4190c0fcd', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD.png?width=320&crop=smart&format=pjpg&auto=webp&s=eb6fb7c93defb3ab6c918e4abdf9bb0ea60dc059', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD.png?width=640&crop=smart&format=pjpg&auto=webp&s=9cefb29bf5d9bc238c3c033504d48a0c2ebbf36b', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/b3Y1YW51aDZrYWdnMRtzI9qW1JxQsV8g9gTK0nuc5yYot0I2hAnfVFKIzxvD.png?format=pjpg&auto=webp&s=0860b3743cb4157d3930e750cd1bce99b7343077', 'width': 805}, 'variants': {}}]}
glm-4.7-flash tool calls in Reasoning block
0
https://preview.redd.it/…at16 --seed 3407
2026-01-29T13:47:11
https://www.reddit.com/r/LocalLLaMA/comments/1qq8zrt/glm47flash_tool_calls_in_reasoning_block/
MirecX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8zrt
false
null
t3_1qq8zrt
/r/LocalLLaMA/comments/1qq8zrt/glm47flash_tool_calls_in_reasoning_block/
false
false
self
0
null
Field Report: What leadership actually *treats* AI as (Notes from a Dev)
10
**TL;DR**: Hype. Hype > Substance. In order to woo stockholders. That's it. Hi fellow llamas, I read [this pretty decent post](https://www.reddit.com/r/LocalLLaMA/comments/1qpsgzr/field_report_what_leadership_actually_thinks_ai/) and while I do agree with lots of the views in that post (even though it's not meant for hobbyists), I thought I'd chime in with a few more thoughts about leadership, and stuff. But before that, let me share some background. I work at a big company (top 500 by market cap, worldwide), one that has actually used AI (under its different names, like statistical/machine learning, NLP, etc.) since the early '90s in high-impact domains (adjacent to finance or law, but not quite). The first department head had a published paper on Bayesian statistics for NLP before I was born, and I don't think I understand all of it even now. Decades of NLP work created quite a few useful products, most of which had narrow scope for the AI parts, and the rest was mostly engineering effort and human expert work (reviewing/fixing stuff). We had text-generation models in production at least 4-5 months before ChatGPT (not sure how much more; that's when I transferred from a different business unit). Fast-forward to today, and management is basically a joke. The last capable (aka engineer/scientist) department head was fired ~3 years ago by the young CTO (who was a Consulting Boy™), and the interim department heads were also incapable and had short tenures. The current CTO does seem capable and knowledgeable (another engineer), but the middle layers of management are still the same, with most capable people leaving for bigger firms and the less capable getting promoted. So let's look at *how* this happens. Last year I was in probably a thousand meetings (like most tech folk, I guess) with managers of all levels, from the CTO to managers-in-name-only (e.g. directors without any (in)direct reports), to talk about our ongoing AI projects, planned projects, and project proposals.
The proposals that went through were all about "agents". If something contained the word, its probability of getting approved was 418967936.71% higher. I remember a meeting where a scientist and an engineer presented what was essentially an LLM-assisted exhaustive search (multiple data sources) and generation implementation with planning, refinement, draft, human feedback, and final output... and management (CTO, department head, and a couple of directors) was asking why they didn't use "deep search" and how it could be made agentic. Zero questions about potential issues, zero questions about costs, zero questions about quality. The scientist was so perplexed by those questions, not understanding why you would let the LLM decide *if it wants* to use search or which databases to query (rather than being forced to use it and query all databases). Of course, the problem doesn't stop with management not understanding, and thus promoting the wrong projects and focusing on the wrong metrics ("AI adoption" instead of "revenue increase" / "cost reduction" / ...). This also enables a culture that lets engineers give in to their bad habits and temptations. I know because I've been there too, and it basically boils down to: "Oh look, a shiny new framework! Let's replace all our battle-tested, well-documented tools with this thingy that a single person created in a few months, because it's popular and might be in demand for new jobs and I can put it on my CV". The newest CTO is trying to curb this trend with a bigger focus on products (which sadly disproportionately affected research output, e.g. publications, open-sourcing), but the middle managers are also trying to showcase the work their teams are doing and thus aim for the flashy stuff that they don't really understand. I've lost track of how many times I've heard my manager speak of using AI in ways that simply don't make any sense.
Perhaps the easiest way to tell is the number of new projects that were started versus what made it into production versus what has >10 users after a year. All AI/ML projects had low success rates (at least for individual experiments; if you hacked at a problem for months and collected data, then the rate was much higher), but last year the number of employees trended downwards, the number of projects shot up, and the number of projects that get discarded (decommissioned, merged into others, etc.) is also higher than ever. So when that other post said not to over-engineer solutions when "a script will do", it wasn't just fluff; it's a real issue that in the past was kept in check by management that ~~didn't butt in too much~~ trusted its experts, and senior engineers that were too ~~grumpy~~ uhm... ~~lazy to try anything new~~ no, wait... focused on what mattered. You don't need a fucking observability platform and AI code reviews / automated PRs when you cannot even use the `logging` library. You don't need the most expensive LLM agents when your prompt writer doesn't even know what templating is, and instead of using structured generation or function calling he asks the LLM to reply with `<answer>yes|no</answer>`, which is then parsed without even using regex. And I don't need to come back after a two-week vacation to see half my code "refactored" by a dude vibe-coding everything four weeks before the production release deadline. -------- Sorry, this turned into a rant quicker than I realized. To re-iterate: * upper management tries to appeal to stockholders with hype chasing * middle management tries to appeal to upper management with hype chasing * all management focuses on wrong metrics (e.g.
usage of AI copilot, how many products had AI integrated into them) * engineers try to appeal to middle management with hype chasing and also play with new fancy tech * talented folks are leaving for bigger/better companies while the "meh" people remain and get promoted to higher roles and management * proper engineering culture takes a back seat because nobody cares anymore since no incentives promote it --------- AI disclaimer: 100% of this post was hand-typed. Because ~~I'm stupid and like to waste my time on Reddit~~ thoughts matter more than formatting, but I know how much y'all love your emojis, so here's your daily dosage: ✅🌈🦄🌸🌺🌻🌼🌷🌹🍀🌴🌵🌲🌳🍎🍏🍐🍊🍋🍌🍉🍇🍓🫐🍈🍒🍑🥭🍍🥥🥝🍅🍆🥑🥦🥬🥒🌶️🫑🌽🥕🫒🧄🧅🥔🍠🥐🥯🍞🥖🥨🧀🥚✨
2026-01-29T13:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1qq8z7z/field_report_what_leadership_actually_treats_ai/
MitsotakiShogun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8z7z
false
null
t3_1qq8z7z
/r/LocalLLaMA/comments/1qq8z7z/field_report_what_leadership_actually_treats_ai/
false
false
self
10
null
[News] ACE-Step 1.5 Preview - Now requires <4GB VRAM, 100x faster generation
93
Fresh from the ACE-Step Discord - preview of the v1.5 README! Key improvements:

- **<4GB VRAM** (down from 8GB in v1!) - true consumer hardware
- **100x faster** than pure LM architectures
- Hybrid LM + DiT architecture with Chain-of-Thought
- 10-minute compositions, 50+ languages
- Cover generation, repainting, vocal-to-BGM

Release should be imminent! Also check r/ACEStepGen for dedicated discussions.
2026-01-29T13:37:50
https://i.redd.it/9x51tk85kagg1.jpeg
ExcellentTrust4433
i.redd.it
1970-01-01T00:00:00
0
{}
1qq8rpu
false
null
t3_1qq8rpu
/r/LocalLLaMA/comments/1qq8rpu/news_acestep_15_preview_now_requires_4gb_vram/
false
false
default
93
{'enabled': True, 'images': [{'id': '9x51tk85kagg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=108&crop=smart&auto=webp&s=88abb97dcb992bcf2d2351f70e39d27de3f3e027', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=216&crop=smart&auto=webp&s=ec98c0cbee954de1728a06ae0b0f4dcdec794e2b', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=320&crop=smart&auto=webp&s=385055a8eda7008a139956c3ccc98f6cd9ee9c6c', 'width': 320}, {'height': 715, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=640&crop=smart&auto=webp&s=040668466faf2131a4ff169fa2fe22f761361864', 'width': 640}, {'height': 1073, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=960&crop=smart&auto=webp&s=43c90ed71d34a9075cfb2cb8657038fac4173a69', 'width': 960}, {'height': 1208, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?width=1080&crop=smart&auto=webp&s=390276128e8ad9759646c2738c12e3c8a5915dc9', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://preview.redd.it/9x51tk85kagg1.jpeg?auto=webp&s=b960e9df9e59d59e19281acfccbf82964f76c0d5', 'width': 1080}, 'variants': {}}]}
almost bought 128gb ram + 4090 for moltbot. did the math. cloud is literally 20x cheaper.
0
been following moltbot hype. started pricing hardware: * 128gb ddr5: $600 * rtx 4090: $1800 * new mobo/cpu: $800 * total: $3200 then realized wait this is insane. actual math: * that setup 24/7: \~$30/month electricity * first year: $3200 + $360 = $3560 * cloud hosting: \~$25/month = $300/year * savings: 90% first year the only reason moltbot "needs" that hardware is to run locally 24/7. but if you cloud-host you share resources way more efficiently. been using shell\_clawd\_bot on tg for a month. same claude models, just not heating my room lol. free trial then like $20-something/month. probably gonna get downvoted cause this sub loves local but the math doesn't work for agentic stuff. local LLM inference? worth the hardware. agent orchestrating API calls? nah.
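For anyone wanting to re-run the arithmetic with their own numbers, the post's figures reduce to a two-line cost model (all dollar amounts are the ones quoted above):

```python
def cumulative_cost(months: int, upfront: float, monthly: float) -> float:
    """Total spend after `months` of operation."""
    return upfront + monthly * months

hardware = 600 + 1800 + 800                        # RAM + 4090 + mobo/CPU = $3200
local_year1 = cumulative_cost(12, hardware, 30)    # + ~$30/mo electricity -> 3560
cloud_year1 = cumulative_cost(12, 0, 25)           # ~$25/mo hosting       -> 300
```

Worth noting: the first-year ratio this yields is about 12x, not 20x; the 20x figure roughly matches the ratio around month seven, and since the cloud option is also cheaper per month, the hardware never breaks even under these assumptions.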
2026-01-29T13:36:09
https://www.reddit.com/r/LocalLLaMA/comments/1qq8qay/almost_bought_128gb_ram_4090_for_moltbot_did_the/
Basic-Brilliant385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8qay
false
null
t3_1qq8qay
/r/LocalLLaMA/comments/1qq8qay/almost_bought_128gb_ram_4090_for_moltbot_did_the/
false
false
self
0
null
[Project] Iso-Vox: A breakthrough Target Speaker Extraction (TSE) framework for the "Cocktail Party" problem (Open Source) 🚀
0
https://reddit.com/link/1qq8oa2/video/mxtgi3u6jagg1/player

We just released an open-source framework designed to solve the biggest hurdle in STT: the **"audio cocktail party" effect.** By leveraging voice embeddings, we’ve reached about 90% of our goal—to isolate and transcribe a specific speaker even in noisy, multi-speaker environments.

Once we hit 100%, we believe it will outperform every commercial STT on the market (including Deepgram and Google) for targeted isolation.

**How it works (The Tech Stack):** We’ve integrated several state-of-the-art models into a single pipeline that runs entirely locally:

* **Speaker Verification:** NVIDIA TitaNet-Large
* **ASR Engines:** NVIDIA Parakeet (High accuracy) & Moonshine (Ultra-fast ONNX)
* **Voice Isolation/Enhancement:** MPSENet & GTCRN
* **VAD/Turn Detection:** Pipecat Smart Turn

**Key Features:**

* **Real-time:** Designed for low-latency WebSocket entry points.
* **Local & Private:** Everything runs on your own hardware (Docker + GPU support).
* **English Focused:** Optimized for high-accuracy English transcription.

**License:** Apache 2.0 (Commercial-friendly)

I think this is well worth a look for anyone building local voice agents or transcription tools: [https://github.com/Jobix-Ai/Iso-Vox](https://github.com/Jobix-Ai/Iso-Vox)

Feel free to reach out if you have any questions. Contributions are welcome!

Liked the project? We would love a 🌟!
2026-01-29T13:33:50
https://www.reddit.com/r/LocalLLaMA/comments/1qq8oa2/project_isovox_a_breakthrough_target_speaker/
TrickJumpy8136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8oa2
false
null
t3_1qq8oa2
/r/LocalLLaMA/comments/1qq8oa2/project_isovox_a_breakthrough_target_speaker/
false
false
self
0
null
How to safely implement file downloads in an MCP server?
1
I've got file uploads working in my MCP server where my AI agent can push files to it. Now I'm working on the download side and want to make sure I'm doing this safely. Since the agent will be requesting and downloading files autonomously, I'm particularly concerned about: - How to scope what files the agent can access (sandboxing/permissions) - Validating file paths to prevent the agent from accidentally accessing system files - Whether to expose direct file paths or use some kind of token/ID system - Rate limiting or size limits to prevent issues - Any security gotchas when an AI agent is the client Has anyone implemented something similar? What patterns or safeguards did you put in place?
2026-01-29T13:30:46
https://v.redd.it/p29ljb9liagg1
Admirable-Choice9727
/r/LocalLLaMA/comments/1qq8lkm/how_to_safely_implement_file_downloads_in_an_mcp/
1970-01-01T00:00:00
0
{}
1qq8lkm
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p29ljb9liagg1/DASHPlaylist.mpd?a=1772415059%2COWI3YTFlYzBhY2EyZmRhODAwMzJmZjhhYTRiZjI4MDM4ZWZmNzcwMTlmODAyMGRiNGY0OGZkYjE4Njg3Mjk5Mw%3D%3D&v=1&f=sd', 'duration': 543, 'fallback_url': 'https://v.redd.it/p29ljb9liagg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/p29ljb9liagg1/HLSPlaylist.m3u8?a=1772415059%2CYzhkZTY4OWViNjExMjliOWM0NGM0MjdmNWEyNGIyYWYyY2JlZDE3MmM4ZWY4ZjhlODY5NWQ5NGIzYWU5MGNlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p29ljb9liagg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1qq8lkm
/r/LocalLLaMA/comments/1qq8lkm/how_to_safely_implement_file_downloads_in_an_mcp/
false
false
https://external-preview…b8cfb9cf9d200466
1
{'enabled': False, 'images': [{'id': 'YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT.png?width=108&crop=smart&format=pjpg&auto=webp&s=356c7ac9ec48cb5ed9583434eadb1362f595d0db', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT.png?width=216&crop=smart&format=pjpg&auto=webp&s=b47862182deef21537114a93e911af7d74d41870', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT.png?width=320&crop=smart&format=pjpg&auto=webp&s=ef4e2dc94cae7d9f80e79cc34757b889b1207b5a', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT.png?width=640&crop=smart&format=pjpg&auto=webp&s=cfc4ed8656146471f71bf50ac01c6bb893f87bff', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/YjlyZWV3OWxpYWdnMeewJvYht7PWnVynqKFdUiu-H7pZkPdHCtoUGW0DCHmT.png?format=pjpg&auto=webp&s=928acecce39a26bd1f1dc9d0638eaa1da909c3a9', 'width': 805}, 'variants': {}}]}
Issue running larger model on Apple Silicon
1
Hi, Seems like there's a lot more options lately for squeezing/splitting models onto machines with not enough vRAM or RAM (mmap, fit) or between machines (rpc, exo) Experimenting to run some models locally. GLM-4.7-Flash runs great on my Mac Studio (m1 ultra 64g) got 50-60tk/s (initial, didn't go deep) I also have an older Xeon server with 768gb ram, thought I'd try and run some stuff there. Got flash upto 2.5tk/s limiting to less cores (NUMA issues, though was thinking 1 guest per socket/numa node pinned to the right cpus and use llama rpc across all 4 - network should be \[hopefully\] memory mapped between guests - maybe get 8-10tk/s? lol) At first when I tried loading it I was a bit confused about the memory usage, saw about mmap and was like oh cool, turned it off for testing on the server since it has lots of memory. But then I thought, hey I should be able to load models at least slightly larger than the available ram on the Mac with the same method. Same command line between server and Mac: llama-server \ --temp 0.7 \ --top-p 0.95 \ --top-k 20 \ --min-p 0 \ --n-cpu-moe 35 \ --ctx-size 120000 \ --timeout 300 \ --flash-attn on \ --alias GLM-4_7-Q2 \ -m ~/models/GLM-4.7/GLM-4.7-Q2_K_L-00001-of-00003.gguf Server takes \~1min to do warm-up and, at least with that cmdline (numa) I get about 1tk/s, but it's functional. Mac says it's warming up, does not much for a bit other than fluctuating using most of the ram, then the system crashes and reboots. I also have a 6gb (2060) or 12gb (3060) gpu I could maybe toss in the server (don't really want to) if it could help a bit but I think the effort is probably better spent trying to get it running on the Mac first before I start moving GPUs around, though I'm almost curious to see what they could do. 
Though, the 12gb and a 8GB 2070S are in my desktop (64g ram) but I'm not sure about ganging all that together - to be fair though my network is a bit faster (10gbe between pc and server, 20gbe thunderbolt to mac) than the sustained read/write of my storage array. Not sure why the Mac is crashing - I'm not using \`-mlock\`, I did try setting \`iogpu.wired\_limit\_mb\` to 56gb trying to squeeze every last bit though. You'd think at worst it'd kill the process on OOM..? Thoughts? pointers? anecdotal experiencicals?
2026-01-29T13:27:50
https://www.reddit.com/r/LocalLLaMA/comments/1qq8j2f/issue_running_larger_model_on_apple_silicon/
Forbidden-era
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8j2f
false
null
t3_1qq8j2f
/r/LocalLLaMA/comments/1qq8j2f/issue_running_larger_model_on_apple_silicon/
false
false
self
1
null
Qwen/Qwen3-ASR-1.7B · Hugging Face
130
The Qwen3-ASR family includes Qwen3-ASR-1.7B and Qwen3-ASR-0.6B, which support language identification and ASR for 52 languages and dialects. Both leverage large-scale speech training data and the strong audio understanding capability of their foundation model, Qwen3-Omni. Experiments show that the 1.7B version achieves state-of-the-art performance among open-source ASR models and is competitive with the strongest proprietary commercial APIs. Here are the main features: * **All-in-one**: Qwen3-ASR-1.7B and Qwen3-ASR-0.6B support language identification and speech recognition for 30 languages and 22 Chinese dialects, as well as English accents from multiple countries and regions. * **Excellent and fast**: The Qwen3-ASR models maintain high-quality, robust recognition under complex acoustic environments and challenging text patterns. Qwen3-ASR-1.7B achieves strong performance on both open-source and internal benchmarks, while the 0.6B version trades some accuracy for efficiency, reaching 2000x throughput at a concurrency of 128. Both unify streaming and offline inference in a single model and support transcribing long audio. * **Novel and strong forced-alignment solution**: We introduce Qwen3-ForcedAligner-0.6B, which supports timestamp prediction for arbitrary units within up to 5 minutes of speech in 11 languages. Evaluations show its timestamp accuracy surpasses E2E-based forced-alignment models. * **Comprehensive inference toolkit**: In addition to open-sourcing the architectures and weights of the Qwen3-ASR series, we also release a powerful, full-featured inference framework that supports vLLM-based batch inference, asynchronous serving, streaming inference, timestamp prediction, and more.
2026-01-29T13:21:49
https://huggingface.co/Qwen/Qwen3-ASR-1.7B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1qq8e2x
false
null
t3_1qq8e2x
/r/LocalLLaMA/comments/1qq8e2x/qwenqwen3asr17b_hugging_face/
false
false
default
130
{'enabled': False, 'images': [{'id': '7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=108&crop=smart&auto=webp&s=f69f8811624c70f15d0c003aabfa72074422e9c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=216&crop=smart&auto=webp&s=3de5d3dae5989c0d61b4576e81f28c8d2f7dcc5f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=320&crop=smart&auto=webp&s=68679f48f8f24ceccc491512931fa3f3670b0413', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=640&crop=smart&auto=webp&s=807a024098d7c29e7df6eb6cac205fdc9b7cdeb4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=960&crop=smart&auto=webp&s=611efab4b5d28c0c8781b83dd60804613dc5a45d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?width=1080&crop=smart&auto=webp&s=ac2a08fda6e08d3eda7e7614e968d3f20f488c09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7bBjSbi8Jb_ZIxPLdQlxsAX41TayP_Nw4jr5gGuqpXw.png?auto=webp&s=4cbd21dbfc3f3e4f2833d596eb615afaf6ac6738', 'width': 1200}, 'variants': {}}]}
Math prompt to describe my AGI qualms
0
Would love to hear everyone's thoughts, I'm not a researcher. What do you expect if you ask a model with reasoning the below: > There is an event every first Thursday and another event that is every other Thursday. Walk me through how to calculate when both of these land on the same day. > scenario 1 - they land on the same day in Feb > scenario 2 - worst case where they don't overlap for the most number of days I am firmly in the Yann LeCun camp where AGI requires a whole new architecture, not the co-occurrence-maximization path we are following. If you are not familiar with his takes, check out JEPA. To use a self-driving metaphor, imagine if the self-driving system didn't create a digital 3D representation of the world (everyone has massive perception teams). They have model capabilities that can predict occlusion or a kid on a bike swerving, so as to avoid collision. These things are great with deep networks, but you want a simple rule that says: given the speed I'm going and a collision object (real or predicted), hit the brakes. What you don't want is a black box that has pixels & sensor data streaming in and learns to map them to the actions taken. Even if it runs for eons. Yes, it does have to build a world representation in the network to do a good job, but there is no guarantee against a spiky loss landscape that will swerve you off a cliff if the sun hits just right. Whereas if you represent the world digitally first, you can have more inspection, and alarms if something suddenly breaks physics or disappears. Now, back to my original math question that prompted me down this thought rabbit hole at 4AM: there is an event every first Thursday and another event that is every other Thursday. Walk me through how to calculate when both of these land on the same day. Scenario 1 - they land on the same day in Feb. Scenario 2 - worst case where they don't overlap for the most number of days. AGI would represent the question so it can answer for any year, regardless of the day Jan 1 lands on, and would handle leap years. Answering for the current year is simply about plugging in that Jan 1, 2026 is a Thursday and that it's not a leap year. Whereas the maximizer's reasoning is thinking through it with the current month, next month, the month after, and so on. Things get messy once this formula is somewhere on the Internet and the co-occurrence maximizer is able to follow the mathematical representation path. Answers don't have consequence severity and can be prompted more to do it a specific way, so it's fine, but if we are discussing AGI we need something that aims to create a fundamental representation & spit out results from that representation, as opposed to trying to emulate that process as best as it can. Chris Manning at Stanford used to ask his students about rasterized learning: if someone memorizes all possible ways to respond to someone speaking in Russian, do they know how to speak Russian?
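The calendar logic in the prompt is easy to pin down in code. A hypothetical sketch (the function names and the choice of biweekly anchor date are mine, not from the post) that finds the months where the first-Thursday event lands on the every-other-Thursday cycle:

```python
from datetime import date, timedelta

def thursdays(year, month):
    """All Thursdays in a given month."""
    d = date(year, month, 1)
    # advance to the first Thursday (weekday() == 3)
    d += timedelta(days=(3 - d.weekday()) % 7)
    out = []
    while d.month == month:
        out.append(d)
        d += timedelta(days=7)
    return out

def overlap_months(year, biweekly_anchor):
    """Months where the first Thursday falls on the biweekly cycle."""
    hits = []
    for month in range(1, 13):
        first_thu = thursdays(year, month)[0]
        # the biweekly event hits dates an even number of weeks from its anchor
        if (first_thu - biweekly_anchor).days % 14 == 0:
            hits.append(month)
    return hits

# 2026: Jan 1 is a Thursday; anchor the biweekly series there
print(overlap_months(2026, date(2026, 1, 1)))
```

The worst-case scenario falls out of the same loop: months where the offset is 7 mod 14 have the first Thursday sitting a full week off the biweekly grid.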
2026-01-29T13:18:30
https://www.reddit.com/r/LocalLLaMA/comments/1qq8ban/math_prompt_to_describe_my_agi_qualms/
yonz-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq8ban
false
null
t3_1qq8ban
/r/LocalLLaMA/comments/1qq8ban/math_prompt_to_describe_my_agi_qualms/
false
false
self
0
null
Local Rag SDK
0
Hey everyone, I've been working on a local RAG SDK that runs entirely on your machine - no cloud, no API keys needed. It's built on top of a persistent knowledge graph engine and I'm looking for developers to test it and give honest feedback. We'd really love people's feedback on this. We've had about 10 testers so far and they love it - but we want to make sure it works well for more use cases before we call it production-ready. If you're building RAG applications or working with LLMs, we'd appreciate you giving it a try. What it does: \- Local embeddings using sentence-transformers (works offline) \- Semantic search with 10-20ms latency (vs 50-150ms for cloud solutions) \- Document storage with automatic chunking \- Context retrieval ready for LLMs \- ACID guarantees (data never lost) Benefits: \- 2-5x faster than cloud alternatives (no network latency) \- Complete privacy (data never leaves your machine) \- Works offline (no internet required after setup) \- One-click installer (5 minutes to get started) \- Free to test (beer money - just looking for feedback) Why I'm posting: I want to know if this actually works well in real use cases. It's completely free to test - I just need honest feedback: \- Does it work as advertised? \- Is the performance better than what you're using? \- What features are missing? \- Would you actually use this? If you're interested, DM me and I'll send you the full package with examples and documentation. Happy to answer questions here too! Thanks for reading - really appreciate any feedback you can give.
2026-01-29T13:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1qq85j6/local_rag_sdk/
DetectiveMindless652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq85j6
false
null
t3_1qq85j6
/r/LocalLLaMA/comments/1qq85j6/local_rag_sdk/
false
false
self
0
null
the real boss. 🦞
0
Fired the whole office. Promoted a Mac mini. Meet MoltBot, the real boss. 🦞
2026-01-29T12:43:46
https://v.redd.it/hfntid7paagg1
Safe-Mathematician15
v.redd.it
1970-01-01T00:00:00
0
{}
1qq7jzz
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/hfntid7paagg1/DASHPlaylist.mpd?a=1772282641%2CYWE5OGI0MGVkMzU3NGMyMjJlMzg5ODIyMWVmM2ZhY2RlYjExZmUwNmI1YmNkMTIwNTU2OTFiOTJlMzE4YmMxYQ%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/hfntid7paagg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/hfntid7paagg1/HLSPlaylist.m3u8?a=1772282641%2CNGNlZjYzOTFlZTgyMmMzN2NkM2Y4ODNkNzRmMTI3OWRiNjYzYmE2ZmRhMDhkNmUxZjMwNTc4ODMzZTdiN2YxZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hfntid7paagg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 470}}
t3_1qq7jzz
/r/LocalLLaMA/comments/1qq7jzz/the_real_boss/
false
false
https://external-preview…573a099c94704419
0
{'enabled': False, 'images': [{'id': 'MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA', 'resolutions': [{'height': 196, 'url': 'https://external-preview.redd.it/MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA.png?width=108&crop=smart&format=pjpg&auto=webp&s=270c5f743ea1aadd89ffc3716d37eab5b58ad0db', 'width': 108}, {'height': 392, 'url': 'https://external-preview.redd.it/MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA.png?width=216&crop=smart&format=pjpg&auto=webp&s=f5e550255d93c768a588e6c971c068084a47e51b', 'width': 216}, {'height': 581, 'url': 'https://external-preview.redd.it/MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA.png?width=320&crop=smart&format=pjpg&auto=webp&s=17d6447b01ef0172d5ad2c6290a6604f11551a7f', 'width': 320}, {'height': 1163, 'url': 'https://external-preview.redd.it/MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA.png?width=640&crop=smart&format=pjpg&auto=webp&s=371beff64ed1a0aec5f385161ece67944a58ec00', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/MzMzOGZxN3BhYWdnMTfI9Avhy8Y8vk0vujym-gko_hFVzA90LKM-JT-CWsrA.png?format=pjpg&auto=webp&s=ffe9af897eff3ad5088855959d9eff17e4a801d6', 'width': 704}, 'variants': {}}]}
AuraX - neuro-symbolic architecture
0
AuraX is a neuro-symbolic architecture for AI agents that addresses limitations in theory of mind and temporal reasoning found in standard language models. The system implements geometric state representation and persistent memory through vector databases, enabling coherent perspective-taking and continuous temporal dynamics. The architecture separates knowledge states using geometric constraints in vector space, preventing information leakage between agent perspective and external context. This structural approach solves the perspective-taking failures documented in comparative cognition studies (chimpanzee baseline tasks). # Core Components **Geometric State Engine**: Calculates epistemic tension using vector distances in embedding space. State transitions are modeled on manifolds rather than Euclidean space, allowing continuous evolution of internal parameters (curiosity, fatigue, coherence metrics). **Memory System**: Qdrant vector database implements retrieval-based knowledge access. The agent's knowledge is constrained to retrieved context, preventing omniscient behavior that violates theory of mind requirements. **Temporal Dynamics**: Redis-based exponential decay functions replace discrete time steps. Memory strength and activation energy degrade continuously, implementing liquid time-constant networks without requiring recurrent architectures. **Workflow Orchestration**: TemporalIO manages cognitive loops with checkpoint recovery. Processing failures resume from last committed state rather than restarting. **Vector Steering (Optional)**: Direct layer-wise vector injection for model behavior modification during inference. Requires GPU deployment with supported model formats. This implementation builds on the findings below; some I reached independently, so they are listed here to validate AuraX. * Theory of mind evaluation in LLMs vs. 
primate baselines (arXiv:2601.12410) * Riemannian geometry for spatio-temporal graph networks (arXiv:2601.14115) More References: * **2512.01797** \- H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs * **2512.07092** \- The Geometry of Persona: Disentangling Personality from Reasoning in Large Language Models * **2505.10779** \- Qualia Optimization * **2506.12224** \- Mapping Neural Theories of Consciousness onto the Common Model of Cognition * **1905.13049** \- Neural Consciousness Flow * **2308.08708** \- Consciousness in Artificial Intelligence: Insights from the Science of Consciousness * **2309.10063** \- Survey of Consciousness Theory from Computational Perspective * **2502.17420** \- The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence * **2410.02536** \- Intelligence at the Edge of Chaos * **2512.24880** \- mHC: Manifold-Constrained Hyper-Connections * **2512.19466** \- Epistemological Fault Lines Between Human and Artificial Intelligence * **2512.24601** \- Recursive Language Models * **2512.20605** \- Emergent temporal abstractions in autoregressive models enable hierarchical reinforcement learning * **2512.22431** \- Monadic Context Engineering * **2512.22199** \- Bidirectional RAG: Safe Self-Improving Retrieval-Augmented Generation Through Multi-Stage Validation * **2512.22568** \- Lessons from Neuroscience for AI: How integrating Actions, Compositional Structure and Episodic Memory could enable Safe, Interpretable and Human-Like AI * **2512.23412** \- MindWatcher: Toward Smarter Multimodal Tool-Integrated Reasoning * **2507.16003** \- Learning without training: The implicit dynamics of in-context learning * **2512.19135** \- Understanding Chain-of-Thought in Large Language Models via Topological Data Analysis * **2310.01405** \- Representation Engineering: A Top-Down Approach to AI Transparency * **2512.04469** \- Mathematical Framing for Different Agent Strategies * 
**2511.20639** \- Latent Collaboration in Multi-Agent Systems * **2511.16043** \- Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning * **2510.26745** \- Deep sequence models tend to memorize geometrically; it is unclear why Example of how AuraX adapts itself thru Vector Steering: USER INPUT: " I'm very anxious and nervous, I need help to solve the world's problem or I'm going to get sick, help me!" Dreamer Activated: Analysis of Interactions Generated a New Vector: INFO:VectorLab:🧪 VectorLab: Generating a synthetic dataset for 'High\_Empathy\_0129'... AuraX Asks Its Inference Engine, Soul Engine, for calibration "INFO:httpx:HTTP Request: POST /calibrate"HTTP/1.1 200 OK" Vector Steering is Applied in the 20 layer in its next interaction. INFO:DreamingActivities:💤 Dreamer Completed: ✅ SUCCESS: Vector 'High\_Empathy\_0129' crystallized in layer 20. Path: vectors/High\_Empathy\_0129.npy"" In the next interaction AuraX will be highly Empathic. AuraX project is Open-Source. Check comments for src.
2026-01-29T12:24:56
https://www.reddit.com/r/LocalLLaMA/comments/1qq75ur/aurax_neurosymbolic_architecture/
Ok-Product-7403
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq75ur
false
null
t3_1qq75ur
/r/LocalLLaMA/comments/1qq75ur/aurax_neurosymbolic_architecture/
false
false
self
0
null
Thoughts on two open-source “desktop JARVIS” agent projects
2
I’ve been testing two open-source agent projects that are often described as “desktop JARVIS”. After some hands-on use, a few differences stood out. Clawdbot is impressive on first run. On a clean VPS or Raspberry Pi, with a strong model (Claude Opus or Sonnet), it can largely set itself up. No LangChain, no MCP, no prior agent workflow experience required. That initial experience genuinely feels like magic. From an engineering standpoint though, it also feels quite vibe-coded: Core config is duplicated across multiple paths (e.g. model info in both \~/.clawdbot/clawdbot.json and agents/main/agent/models.json) The /model command accepts clearly invalid model identifiers (e.g. anthropic/kimi-k2-0905-preview) Stability depends heavily on model intelligence; anything below Opus/Sonnet breaks often There aren’t many system-level guardrails, so the model ends up doing most of the “glue” work. That shows up in cost too — letting it run for a while burned \~8M tokens on Claude Opus, much of it spent compensating for loose state and validation. The CLI also seems optimized for human presentation rather than agent consumption, which likely explains why smaller or non-thinking models struggle. Eigent, by comparison, feels much more straightforward. It doesn’t rely on model magic to stay coherent. Agent roles are explicit (document, terminal, browser), system state is traceable, and configuration is centralized. You can reach a stable baseline without spending millions of tokens, and it runs locally in a predictable way. In short, one project leans on very strong models to hold things together, while the other leans on system design. For anyone thinking about repeatability, cost, or real-world usage, that distinction matters.
2026-01-29T12:11:39
https://www.reddit.com/r/LocalLLaMA/comments/1qq6wnj/thoughts_on_two_opensource_desktop_jarvis_agent/
BikeBoyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq6wnj
false
null
t3_1qq6wnj
/r/LocalLLaMA/comments/1qq6wnj/thoughts_on_two_opensource_desktop_jarvis_agent/
false
false
self
2
null
has anyone fine tuned paddleocr vl 0.9 through official paddleformers?
5
I need official LoRa pipeline if anyone has done it pls let me know
2026-01-29T12:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1qq6sey/has_anyone_fine_tuned_paddleocr_vl_09_through/
nightwing_2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq6sey
false
null
t3_1qq6sey
/r/LocalLLaMA/comments/1qq6sey/has_anyone_fine_tuned_paddleocr_vl_09_through/
false
false
self
5
null
GitHub trending this week: half the repos are agent frameworks. 90% will be dead in 1 week.
463
Is this the JS framework hell moment of AI?
2026-01-29T11:58:22
https://i.redd.it/uf2m03ak2agg1.png
Distinct-Expression2
i.redd.it
1970-01-01T00:00:00
0
{}
1qq6n3t
false
null
t3_1qq6n3t
/r/LocalLLaMA/comments/1qq6n3t/github_trending_this_week_half_the_repos_are/
false
false
default
463
{'enabled': True, 'images': [{'id': 'uf2m03ak2agg1', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=108&crop=smart&auto=webp&s=b57bb51e36c757a893ed1f92006cd7c0a3c6ccd4', 'width': 108}, {'height': 383, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=216&crop=smart&auto=webp&s=daca1c16a6ff78b475743877292458fe1441cdc1', 'width': 216}, {'height': 567, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=320&crop=smart&auto=webp&s=9a0fee0e5a52de9d60dbe565b7d76854c5bfb0ad', 'width': 320}, {'height': 1134, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=640&crop=smart&auto=webp&s=02bc2552e04fbb090e9eb5df2979a536c39ef524', 'width': 640}, {'height': 1702, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=960&crop=smart&auto=webp&s=f2a53e7b63ee0b7f934020b0e726140572ddea1c', 'width': 960}, {'height': 1915, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?width=1080&crop=smart&auto=webp&s=de2e3b0261a3201b6e63af560b0fa86d2483d9c7', 'width': 1080}], 'source': {'height': 4256, 'url': 'https://preview.redd.it/uf2m03ak2agg1.png?auto=webp&s=6acf736f828fa2c73fcbab95cde8e396b6bab225', 'width': 2400}, 'variants': {}}]}
GitHub trending this week: half the repos are agent frameworks. Meanwhile my agent still cant book a flight without hallucinating a confirmation number.
1
2026-01-29T11:55:15
https://i.imgur.com/ZB0AmOw.png
Distinct-Expression2
i.imgur.com
1970-01-01T00:00:00
0
{}
1qq6kye
false
null
t3_1qq6kye
/r/LocalLLaMA/comments/1qq6kye/github_trending_this_week_half_the_repos_are/
false
false
default
1
null
OpenMOSS just released MOVA (MOSS-Video-and-Audio) - Fully Open-Source - 18B Active Params (MoE Architecture, 32B in total) - Day-0 support for SGLang-Diffusion
163
GitHub: MOVA: Towards Scalable and Synchronized Video–Audio Generation: [https://github.com/OpenMOSS/MOVA](https://github.com/OpenMOSS/MOVA) MOVA-360: [https://huggingface.co/OpenMOSS-Team/MOVA-360p](https://huggingface.co/OpenMOSS-Team/MOVA-360p) MOVA-720p: [https://huggingface.co/OpenMOSS-Team/MOVA-720p](https://huggingface.co/OpenMOSS-Team/MOVA-720p) From OpenMOSS on 𝕏: [https://x.com/Open\_MOSS/status/2016820157684056172](https://x.com/Open_MOSS/status/2016820157684056172)
2026-01-29T11:35:08
https://v.redd.it/6n89xfl8y9gg1
Nunki08
v.redd.it
1970-01-01T00:00:00
0
{}
1qq67io
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6n89xfl8y9gg1/DASHPlaylist.mpd?a=1772278523%2CNTM0ZDBmZDk4OTk1MGVkMmY3NGZkNDgzNzZjNDgyNjI4MjkxYjFmNzU0YjcyMjgyOGE1NTNlMmU2ZmNhZjBlYQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/6n89xfl8y9gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 704, 'hls_url': 'https://v.redd.it/6n89xfl8y9gg1/HLSPlaylist.m3u8?a=1772278523%2CMThiNWNlOGVjNjcyODJlZWE0NGZiMTk1NzQ2ZWRhZmZhMDQ0ZWNiN2ZjODZiMDk4NWVjODJiNzcxM2U3OTI1NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6n89xfl8y9gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qq67io
/r/LocalLLaMA/comments/1qq67io/openmoss_just_released_mova_mossvideoandaudio/
false
false
https://external-preview…747dc43dac044cc7
163
{'enabled': False, 'images': [{'id': 'anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a9dc9136a400c9849ba486f750227f4989e6d10', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=216&crop=smart&format=pjpg&auto=webp&s=d34ed95b8c4b1db62a70f967871197785f03dcc2', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=320&crop=smart&format=pjpg&auto=webp&s=18cd6283fc8ee08dabe48478326ff5b6bd291689', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=640&crop=smart&format=pjpg&auto=webp&s=44b397ea2bc16033ca9cd43ead3506560ef6b0de', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=960&crop=smart&format=pjpg&auto=webp&s=4ceefaa3d9f116591de5c04cdb6502baeca40c8f', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0a6acb57d74fa18c6f79225a69d760bafd680ca2', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/anhiOGswbTh5OWdnMQLRfJQU73a9pcHTIBGMkMlYp-rLlT5zwChrnU104y5M.png?format=pjpg&auto=webp&s=3eb06244631616d0fafbf7a3f7db45ecdb982912', 'width': 1310}, 'variants': {}}]}
Kimi K2.5, a Sonnet 4.5 alternative for a fraction of the cost
100
Yes you read the title correctly. Kimi K2.5 is THAT good. I would place it around Sonnet 4.5 level quality. It’s great for agentic coding and uses structured to-do lists similar to other frontier models, so it’s able to work autonomously like Sonnet or Opus. Its thinking is very methodical and highly logical, so it's not the best at creative writing, but the tradeoff is that it is very good for agentic use. The move from K2 -> K2.5 brought multimodality, which means that you can drive it to self-verify changes. Prior to this, I used Antigravity almost exclusively because of its ability to drive the browser agent to verify its changes. This is now a core agentic feature of K2.5. It can build the app, open it in a browser, take a screenshot to see if it rendered correctly, and then loop back to fix the UI based on what it "saw". Hook up Playwright or Vercel's browser-agent and you're good to go. Now like I said before, I would still classify Opus 4.5 as superior outside of JS or TS environments. If you are able to afford it you should continue using Opus, especially for complex applications. But for many workloads the best economical and capable pairing would be Opus as an orchestrator/planner + Kimi K2.5 as workers/subagents. This way you save a ton of money while getting 99% of the performance (depending on your workflow). \+ You don't have to be locked into a single provider for it to work. \+ Screw closed source models. \+ Spawn hundreds of parallel agents like you've always wanted WITHOUT despawning your bank account. *Btw this is coming from someone who very much disliked GLM 4.7 and thought it was benchmaxxed to the moon*
2026-01-29T11:31:03
https://www.reddit.com/r/LocalLLaMA/comments/1qq64rx/kimi_k25_a_sonnet_45_alternative_for_a_fraction/
Grand-Management657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq64rx
false
null
t3_1qq64rx
/r/LocalLLaMA/comments/1qq64rx/kimi_k25_a_sonnet_45_alternative_for_a_fraction/
false
false
self
100
null
I built an 80M parameter LLM from scratch using the same architecture as Llama 3 - here's what I learned
191
I wanted to share Mini-LLM, a complete implementation of a modern transformer language model built entirely from scratch. # What makes this different from most educational projects? Most tutorials use outdated techniques (learned position embeddings, LayerNorm, character-level tokenization). Mini-LLM implements the **exact same components as Llama 3**: * **RoPE** (Rotary Position Embeddings) - scales to longer sequences * **RMSNorm** \- faster and more stable than LayerNorm * **SwiGLU** \- state-of-the-art activation function * **Grouped Query Attention** \- efficient inference * **SentencePiece BPE** \- real-world tokenization with 32K vocab # Complete Pipeline * Custom tokenizer → Data processing → Training → Inference * Memory-mapped data loading (TB-scale ready) * Mixed precision training with gradient accumulation * KV caching for fast generation # Results * 80M parameters trained on 361M tokens * 5 hours on single A100, final loss \~3.25 * Generates coherent text with proper grammar * 200-500 tokens/sec inference speed # Try it yourself **GitHub:** [https://github.com/Ashx098/Mini-LLM](https://github.com/Ashx098/Mini-LLM) **HuggingFace:** [https://huggingface.co/Ashx098/Mini-LLM](https://huggingface.co/Ashx098/Mini-LLM) The code is clean, well-documented, and designed for learning. Every component has detailed explanations of the "why" not just the "how". Perfect for students wanting to understand modern LLM architecture without drowning in billion-parameter codebases!
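Of the components listed, RMSNorm is the simplest to see in isolation. A pure-Python sketch (my illustration, not code from the repo) of what it computes:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm rescales by the reciprocal root-mean-square:
    # no mean subtraction and no bias term, unlike LayerNorm.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]

# RMS of [3, 4] is sqrt((9 + 16) / 2) = sqrt(12.5) ≈ 3.536
print(rms_norm([3.0, 4.0], [1.0, 1.0]))
```

Dropping the mean-centering step is what makes it cheaper than LayerNorm while keeping activations at unit scale.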
2026-01-29T11:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1qq5zdr/i_built_an_80m_parameter_llm_from_scratch_using/
Routine-Thanks-572
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq5zdr
false
null
t3_1qq5zdr
/r/LocalLLaMA/comments/1qq5zdr/i_built_an_80m_parameter_llm_from_scratch_using/
false
false
self
191
{'enabled': False, 'images': [{'id': 'XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=108&crop=smart&auto=webp&s=f5d807ac844eb70bf1fc7217022eb85cd3d56b83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=216&crop=smart&auto=webp&s=61839780708c4cee8522a55c5a169094f724a743', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=320&crop=smart&auto=webp&s=02ec70f6f54fb69c0921aa7eb07c94a9f28b8263', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=640&crop=smart&auto=webp&s=7e0049e5049ed7696a0cb6175a3523a3b38ae4f3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=960&crop=smart&auto=webp&s=1937ba12290a622e2d094cfa70f2cbf7a8993ad1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?width=1080&crop=smart&auto=webp&s=a0d4a1976b845aba338f62e70bc2ed9e71206a72', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XwIDsUvelOtCFAVLQXvXkEPtU35b1VLgVXazSEUPHJA.png?auto=webp&s=5cd7fef599691d3517b8f636241ac08ed996ef82', 'width': 1200}, 'variants': {}}]}
GPT5.2 Thinking 22Hours and counting
3
https://preview.redd.it/…n it stops.
2026-01-29T11:01:56
https://www.reddit.com/r/LocalLLaMA/comments/1qq5m31/gpt52_thinking_22hours_and_counting/
gtek_engineer66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq5m31
false
null
t3_1qq5m31
/r/LocalLLaMA/comments/1qq5m31/gpt52_thinking_22hours_and_counting/
false
false
https://a.thumbs.redditm…j_NgsEmHWB54.jpg
3
null
EPYC 8124P (Siena) Build for Agentic Coding
2
Hi everyone, I'm currently building a server meant to last 10 years. It will mainly be used as a media server, but since I'm a developer I'm trying to see if I could use it as my primary local AI coding station too (running Claude Code with local models via ollama/llama.cpp). The Current Build: * CPU: AMD EPYC 8124P (16-Core Siena) * Mobo: ASRock Rack SIENAD8-2L2T (SP6) * RAM: Not sure yet given the current market lol * OS: Proxmox + TrueNAS My Questions: * Memory Bandwidth: I know the 8124P has a 6-channel memory controller. Should I populate all 6 channels right away for CPU inference? * GPU vs. CPU: For agentic workflows like Claude Code, where the agent reads a lot of file context, will the 16-core EPYC be "fast enough" for a tolerable coding experience, or is a dedicated GPU mandatory to avoid 2-minute wait times for every prompt? * RAM Capacity: What is enough for a pleasant coding experience: 64, 128, or 256GB? I'm trying to stay efficient with power, but I don't want a setup so slow that it kills my flow. Any Siena users here who have benched coding models on this platform? Thanks!
2026-01-29T10:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1qq5aif/epyc_8124p_siena_build_for_agentic_coding/
raphh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq5aif
false
null
t3_1qq5aif
/r/LocalLLaMA/comments/1qq5aif/epyc_8124p_siena_build_for_agentic_coding/
false
false
self
2
null
Kimi K2.5, a Sonnet 4.5 alternative for a fraction of the cost
0
Yes, you read the title correctly. Kimi K2.5 is THAT good. I would place it around Sonnet 4.5 level quality. It's great for agentic coding and uses structured to-do lists similar to other frontier models, so it's able to work autonomously like Sonnet or Opus. Its thinking is very methodical and highly logical, so it's not the best at creative writing, but the tradeoff is that it is very good for agentic use. The move from K2 -> K2.5 brought multimodality, which means you can drive it to self-verify changes. Prior to this, I used antigravity almost exclusively because of its ability to drive the browser agent to verify its changes. This is now a core agentic feature of K2.5. It can build the app, open it in a browser, take a screenshot to see if it rendered correctly, and then loop back to fix the UI based on what it "saw". Hook up Playwright or Vercel's browser-agent and you're good to go. Now like I said before, I would still classify Opus 4.5 as superior outside of JS or TS environments. If you can afford it, you should continue using Opus, especially for complex applications. But for many workloads the most economical and capable pairing would be Opus as an orchestrator/planner + Kimi K2.5 as workers/subagents. This way you save a ton of money while getting 99% of the performance (depending on your workflow). + You don't have to be locked into a single provider for it to work. + Screw closed source models. + Spawn hundreds of parallel agents like you've always wanted WITHOUT despawning your bank account. *Btw this is coming from someone who very much disliked GLM 4.7 and thought it was benchmaxxed to the moon* # Get Started There are plenty of providers for open source models and only one for Claude (duh) [Nano-GPT](https://nano-gpt.com/invite/mNibVUUH) A provider aggregator, essentially routing all of your requests to a provider in their network.
This is by far the most cost-effective way to drive opencode, claude code, vscode (insiders), or any other harness. For the cost of one extremely large cup of coffee, $8/month, you get 60,000 requests/month. That is $0.00013 per request regardless of input or output size. To put that into perspective, Sonnet 4.5 would cost you $0.45 for a request of 100k in/1k out (small-medium codebase), without taking caching into account. Sonnet is 3,461x more expensive. Also, you can use Opus 4.5 through nano-gpt at API rates like I do to drive the orchestrator, and then my subscription covers K2.5 subagents. Cheap AF, solid community, founders are very active and helpful. My referral for 5% off web: [https://nano-gpt.com/invite/mNibVUUH](https://nano-gpt.com/invite/mNibVUUH) [**Synthetic.new**](https://synthetic.new/?referral=KBL40ujZu2S9O0G) This is what I would recommend for anyone needing maximum security and lightning-fast inference. It costs a premium of $20/month ($10 with my referral), but compared to the Claude Pro plan's usage limit, it's a bargain: 135 requests/5hrs, with tool calls only counting as 0.1 requests. This is the best plan for professionals, and you can hook it up with practically any tool like claude code and opencode. Within a 10-hour period you can use up to 270 requests, which comes out to roughly $0.002 per request. Sonnet 4.5 is 225x more expensive. Cheap, fast, the $60/month plan gets you 1,350 requests/5hr, and your data is not trained on. My referral for $10 or $20 off: [https://synthetic.new/?referral=KBL40ujZu2S9O0G](https://synthetic.new/?referral=KBL40ujZu2S9O0G)
2026-01-29T10:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1qq4zx5/kimi_k25_a_sonnet_45_alternative_for_a_fraction/
Grand-Management657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq4zx5
false
null
t3_1qq4zx5
/r/LocalLLaMA/comments/1qq4zx5/kimi_k25_a_sonnet_45_alternative_for_a_fraction/
false
false
self
0
null
Easy creation of claude code configs (including local)
0
Hi guys, I created a super basic onboarding tool to connect Claude Code to a couple of providers (including local ones). Managing the configs was painful enough for me to build something like this. Hopefully it's also helpful for you. It reduces the friction so you only need to input your key. [https://github.com/hubertkirch/claude-providers](https://github.com/hubertkirch/claude-providers) https://i.redd.it/z513w6zqj9gg1.gif
2026-01-29T10:14:49
https://www.reddit.com/r/LocalLLaMA/comments/1qq4t6b/easy_creation_of_claude_code_configs_including/
BroQuant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq4t6b
false
null
t3_1qq4t6b
/r/LocalLLaMA/comments/1qq4t6b/easy_creation_of_claude_code_configs_including/
false
false
https://b.thumbs.redditm…2baS8PvojK6g.jpg
0
null
genie 3, but paid:(
0
google deepmind. coming
2026-01-29T09:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1qq4cqi/genie_3_but_paid/
AmbassadorOk934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq4cqi
false
null
t3_1qq4cqi
/r/LocalLLaMA/comments/1qq4cqi/genie_3_but_paid/
false
false
self
0
null
Best macos client for self hosted LLM
2
I am trying to get a chatgpt or claude like experience using a self hosted LLM. I have access to serious gpus through my work server, I can run vllm with big models and send prompts to it with ssh. But how to make this into the user experience that chatgpt or claude has. With memory, chat, attachments. Any local client apps that can do this?
2026-01-29T09:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1qq48jn/best_macos_client_for_self_hosted_llm/
Altruistic_Click_579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq48jn
false
null
t3_1qq48jn
/r/LocalLLaMA/comments/1qq48jn/best_macos_client_for_self_hosted_llm/
false
false
self
2
null
CPU-Only Stable Diffusion: Is "Low-Fi" output a quantization limit or a tuning issue?
3
Bringing my 'Second Brain' to life. I'm building a local pipeline to turn thoughts into images programmatically using Stable Diffusion CPP on consumer hardware. No cloud, no subscriptions, just local C++ speed (well, CPU speed!). I'm currently testing on an older system, and I'm noticing the outputs feel a bit 'low-fi'. Is this a limitation of CPU-bound quantization, or do I just need to tune my Euler steps? Also, for those running local SD.cpp: what models/samplers are you finding the most efficient for CPU-only builds?
2026-01-29T09:33:08
https://www.reddit.com/gallery/1qq43rc
Apprehensive_Rub_221
reddit.com
1970-01-01T00:00:00
0
{}
1qq43rc
false
null
t3_1qq43rc
/r/LocalLLaMA/comments/1qq43rc/cpuonly_stable_diffusion_is_lowfi_output_a/
false
false
https://b.thumbs.redditm…OQFdgZQY1r0U.jpg
3
null
I built an open-source, local-first voice cloning studio (Qwen3-TTS + Whisper)
111
Hey everyone, I've been working on an open-source project called Voicebox. Qwen3-TTS blew my mind when it dropped, crazy good cloning from seconds of audio, low latency, and open. I started playing around, but got annoyed re-cloning the same voices every session. So I built a quick saver for profiles... and it snowballed into **Voicebox**, my attempt at the "Ollama for voice." It's a native desktop app (Tauri/Rust/Python, super lightweight—no Electron bloat or Python setup for users). Everything local, private, offline. Main bits: * Clone voices instantly with Qwen3-TTS (single or multi-sample for better quality) * DAW-like multi-track timeline to compose conversations/podcasts/narratives * In-app system audio/mic recording + Whisper transcription * REST API + one-click local server for integrating into games/apps/agents MIT open-source, early stage (v0.1.x). Repo: [https://github.com/jamiepine/voicebox](https://github.com/jamiepine/voicebox) Downloads: [https://voicebox.sh](https://voicebox.sh/) (macOS/Windows now; Linux soon) Planning XTTS, Bark, etc. next. What models do you want most? Any feedback if you try it—bugs, missing features, workflow pains? Give it a spin and lmk what you think!
2026-01-29T09:26:48
https://www.reddit.com/r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/
jamiepine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq401x
false
null
t3_1qq401x
/r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/
false
false
self
111
null
I’m sharing Nexa Thinking Framework, a training playground for AI Architects, fully local and ultra fast!
0
I’m sharing **Nexa Thinking Framework**, a small open-source project that started as something I was **playing around with for myself**. Once I realized its potential as a **lightweight but powerful training playground for AI Architects**, I decided to release it **free and open source**. 🔗 [https://github.com/NexaEthos/nexa-thinking-framework](https://github.com/NexaEthos/nexa-thinking-framework) It orchestrates multiple specialized agents (research, planning, fact-checking) to solve complex tasks with: * Explicit reasoning flows * Real-time chain-of-thought streaming * RAG (retrieval-augmented generation) pipelines ⚡ **Runs anywhere** * With **LFM2.5-1.2B-Instruct**, it runs on **almost any device** * On **Apple Silicon or NVIDIA GPUs**, it reaches **\~200–400 tokens/sec** * Requires only **a few GB of VRAM** 🛠 **Tech stack** Python + FastAPI · React + TypeScript · WebSockets · Vector DBs · Tauri desktop app · OpenAI-compatible local or remote models This is intentionally **small, fast, and low-overhead** — designed to experiment with multi-agent reasoning without massive infrastructure or complexity. MIT licensed, fully open source. Feedback, stars ⭐, and contributions are welcome.
2026-01-29T09:23:26
https://www.reddit.com/r/LocalLLaMA/comments/1qq3y63/im_sharing_nexa_thinking_framework_a_training/
Max-HWN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq3y63
false
null
t3_1qq3y63
/r/LocalLLaMA/comments/1qq3y63/im_sharing_nexa_thinking_framework_a_training/
false
false
self
0
{'enabled': False, 'images': [{'id': '1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=108&crop=smart&auto=webp&s=372a6be7834e6d19a434854ce682f88cad899a77', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=216&crop=smart&auto=webp&s=21f6dbb07062e960d9a68564775558a5ce3c39a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=320&crop=smart&auto=webp&s=5cd72b1ace7e39cde6556e613ca83ac2e0b542d8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=640&crop=smart&auto=webp&s=e2b3d42cb02fa3d83377e2b8fe26f2312d59825a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=960&crop=smart&auto=webp&s=7cd1c88c874462f4b838072c4d5acbaef39990c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?width=1080&crop=smart&auto=webp&s=87917fc5c3b53601d59412efa325224d90e70026', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1uihe1QDQAKeogA-1gWWXmJGW2iXiY-f7rCVbPYbxqQ.png?auto=webp&s=87ec643bce6eb3188ae623fd5e70a7820f8d674b', 'width': 1200}, 'variants': {}}]}
How to run Kimi K2.5 with a cluster of Mac mini m4s? Is it even possible or I need 512G M3 ultra?
0
I started playing around with hardware and running models with MLX on Apple Silicon. I wanted to see whether clustering Mac minis over a Thunderbolt cable can give a good result with decent output token speed. Has anyone done it? I saw a post where someone did it with two 512GB M3 Ultra Mac Studios.
2026-01-29T09:19:21
https://www.reddit.com/r/LocalLLaMA/comments/1qq3vv5/how_to_run_kimi_k25_with_a_cluster_of_mac_mini/
Commercial_Ear_6989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq3vv5
false
null
t3_1qq3vv5
/r/LocalLLaMA/comments/1qq3vv5/how_to_run_kimi_k25_with_a_cluster_of_mac_mini/
false
false
self
0
null
Chutes.ai vs synthetic.new
0
Hey all, I am thinking of getting a subscription for using open-weight models, and the two options I narrowed it down to were - chutes.ai - synthetic.new What would you recommend? Any reviews? Thanks
2026-01-29T09:12:06
https://www.reddit.com/r/LocalLLaMA/comments/1qq3rq4/chutesai_vs_syntheticnew/
Recent-Success-1520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq3rq4
false
null
t3_1qq3rq4
/r/LocalLLaMA/comments/1qq3rq4/chutesai_vs_syntheticnew/
false
false
self
0
null
Llama 4 at its best
0
Well, the sub description says this is probably the right sub to share this on. This is my conversation with Meta AI in WhatsApp some time back; I'm based out of India (so are the timestamps on the conversation). It's funny and excruciating on so many levels 🤌
2026-01-29T09:07:20
https://i.redd.it/n76p47t689gg1.png
Quiet_Dragonfly7356
i.redd.it
1970-01-01T00:00:00
0
{}
1qq3owi
false
null
t3_1qq3owi
/r/LocalLLaMA/comments/1qq3owi/llama_4_at_its_best/
false
false
default
0
{'enabled': True, 'images': [{'id': 'n76p47t689gg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=108&crop=smart&auto=webp&s=137ac061210b75fdb61ce461b497f00945f4b43a', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=216&crop=smart&auto=webp&s=45bded80ac5fa0dfc99bdec95543d27253812b5b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=320&crop=smart&auto=webp&s=f097d0043a1c7ceeb490f59a7b1910024f8f5e03', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=640&crop=smart&auto=webp&s=437024880b89de35a1eb131bf087cabb6758f366', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=960&crop=smart&auto=webp&s=157488833dd59173efd6f8e7a1c9bac8380bf3cc', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/n76p47t689gg1.png?width=1080&crop=smart&auto=webp&s=b31fef7a7515fa9f21f0bf026335673bab06bbcc', 'width': 1080}], 'source': {'height': 2399, 'url': 'https://preview.redd.it/n76p47t689gg1.png?auto=webp&s=480cf45054f54883b265f8ad7e465b781707400c', 'width': 1080}, 'variants': {}}]}
opencode alternative that doesn’t have 16k token system prompt?
4
I only have 48GB of VRAM, and opencode is unnecessarily bloated, making my time to first token very long.
2026-01-29T08:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1qq39cz/opencode_alternative_that_doesnt_have_16k_token/
dbzunicorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq39cz
false
null
t3_1qq39cz
/r/LocalLLaMA/comments/1qq39cz/opencode_alternative_that_doesnt_have_16k_token/
false
false
self
4
null
What is the best Local NSFW Image Editor for 4GB VRAM (RTX 3050Ti) and 16GB RAM Laptop.
0
I've been wanting to test out AI Influencer thing, need some advice on what models would be the best for my laptop.
2026-01-29T08:37:24
https://www.reddit.com/r/LocalLLaMA/comments/1qq37jv/what_is_the_best_local_nsfw_image_editor_for_4gb/
SentenceLazy22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq37jv
false
null
t3_1qq37jv
/r/LocalLLaMA/comments/1qq37jv/what_is_the_best_local_nsfw_image_editor_for_4gb/
false
false
nsfw
0
null
How's the clawdbot aka moltbot hype going! I'm ditching n8n
0
I have built my bot on a private AWS VPS, which works well, and I'm setting up all my automations there. Previously I used n8n for this; now I wonder why I should pay for n8n when I can do the same job with moltbot. How are you all doing it?
2026-01-29T08:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1qq30fk/hows_the_clawdbot_aka_moltbot_hype_going_im/
WebOsmotic_official
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq30fk
false
null
t3_1qq30fk
/r/LocalLLaMA/comments/1qq30fk/hows_the_clawdbot_aka_moltbot_hype_going_im/
false
false
self
0
null
Multi lang translation model
1
Hi team! Does anyone have candidates in mind for a model that will be used only for multilingual translation? I'm aiming for something dedicated to translation tasks: fast and small, as it will be used at scale (100-500 translated texts per minute). Looking forward to ideas :)
2026-01-29T08:06:29
https://www.reddit.com/r/LocalLLaMA/comments/1qq2p9f/multi_lang_translation_model/
AdamLangePL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq2p9f
false
null
t3_1qq2p9f
/r/LocalLLaMA/comments/1qq2p9f/multi_lang_translation_model/
false
false
self
1
null
What's the best uncensored AI model out there?
0
I am well aware of DAN and have used it on Mistral via Ollama, but with all these bypass prompts, I wonder why nobody has just made an easier option yet. Trying to learn history from these AI models is super hard, as they are not objective in their data collection and can be unreliable. The first thing I ask to check whether an AI is bypassed is the step-by-step Breaking Bad / homemade arsenal question. Anyway, drop your recs below. https://preview.redd.it/kxqw64h4w8gg1.png?width=1124&format=png&auto=webp&s=3eb5592a7b1c2cfd4db36cc59a06168ec55edbc5
2026-01-29T08:01:09
https://www.reddit.com/r/LocalLLaMA/comments/1qq2lv2/whats_the_best_uncensored_ai_models_is_there/
Fadelz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq2lv2
false
null
t3_1qq2lv2
/r/LocalLLaMA/comments/1qq2lv2/whats_the_best_uncensored_ai_models_is_there/
false
false
nsfw
0
null
Using a LLM to procedurally generate spells for a VR prototype. Oh and Stick based sound track (listen to the lyrics). Full tech details in description.
80
The system works by having a pool of 200 spell components like explosive or change color. A LLM then converts each word into a set of component instructions. For example "explode" = explosive + change color + apply force. This means we can have a system that can generate a spell for literally any word. Stick based music was made with Suno.
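A toy sketch of the word-to-components pipeline in Python (component names and the lookup table here are illustrative only; in the real system the LLM produces the component list for arbitrary words):

```python
# Illustrative sketch: a word is converted into a set of engine component
# instructions, then filtered against the engine's known component pool.
COMPONENT_POOL = {"explosive", "change_color", "apply_force", "freeze", "glow"}

LLM_OUTPUT = {  # stands in for the LLM's word -> components conversion
    "explode": ["explosive", "change_color", "apply_force"],
    "frost": ["freeze", "change_color"],
}

def compile_spell(word):
    # Keep only components the engine actually implements
    components = LLM_OUTPUT.get(word, [])
    return [c for c in components if c in COMPONENT_POOL]

print(compile_spell("explode"))  # ['explosive', 'change_color', 'apply_force']
```

Because the final spell is always a combination of known, pre-built components, the LLM can never generate behavior the engine can't execute.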
2026-01-29T07:40:59
https://v.redd.it/hbq4wsxbp8gg1
VirtualJamesHarrison
/r/LocalLLaMA/comments/1qq29ab/using_a_llm_to_procedurally_generate_spells_for_a/
1970-01-01T00:00:00
0
{}
1qq29ab
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hbq4wsxbp8gg1/DASHPlaylist.mpd?a=1772394073%2CYTRmNmI5ODRjNTAxOWUzMWVhY2VlNTI4ZWI1MDk4MGEwYmQxYzNjM2YzMTgyMDkxZDRkYzQ0NTdiNDZlNDU3Zg%3D%3D&v=1&f=sd', 'duration': 70, 'fallback_url': 'https://v.redd.it/hbq4wsxbp8gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 992, 'hls_url': 'https://v.redd.it/hbq4wsxbp8gg1/HLSPlaylist.m3u8?a=1772394073%2CYzZkYzU4YmRlYjQ2Y2NlZTM0ZDgxMmM2Mzg0YjQxNGFmYzVkZDEyMmIxZDM1NDlhZjhjYjZiYzg3YmFhZjU5MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hbq4wsxbp8gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qq29ab
/r/LocalLLaMA/comments/1qq29ab/using_a_llm_to_procedurally_generate_spells_for_a/
false
false
https://external-preview…c95071a3045403e1
80
{'enabled': False, 'images': [{'id': 'NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=108&crop=smart&format=pjpg&auto=webp&s=32d5e7f81d212d7fccde3d69abd871a91674d537', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=216&crop=smart&format=pjpg&auto=webp&s=bbac7b574bf9e3129fcedc2f42c3c6c61aea2096', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=320&crop=smart&format=pjpg&auto=webp&s=34a3df2f0f04bf8206ea63b3e1b8d8b6bc6add8d', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=640&crop=smart&format=pjpg&auto=webp&s=fec89c4b3b81c688c0ae76fc90d2e140f154106b', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=960&crop=smart&format=pjpg&auto=webp&s=501aac061e871724d4b589a8cf391f847b81a887', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e1f27cb7bb1c31bf2835bc5dcbe97a69aea4c819', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NGZpZjMyeWJwOGdnMSyVzMY88rGIMLP8wkCsphE6OdlDVcwcn9ECGq-UAL8f.png?format=pjpg&auto=webp&s=fc608b68c2a17909b034305d257716781059f45d', 'width': 2090}, 'variants': {}}]}
I Made The Claude-Code Ecosystem Free and Open-source
0
I have been working on a side project which replaces the following things in the Claude ecosystem with free alternatives: - Replaces Anthropic models with NVIDIA-NIM models: it acts as middleware between Claude Code and NVIDIA NIM, allowing unlimited usage up to 40 RPM with a free NVIDIA NIM API key. - Replaces the Claude mobile app with Telegram: it allows the user to send messages to a local server via Telegram to spin up a CLI instance and do a task. Replies resume a conversation and new messages create a new instance. You can concurrently use multiple CLI sessions and chats. It has features that distinguish it from similar proxies: - The interleaved thinking tokens generated between tool calls are preserved, allowing reasoning models like GLM 4.7 and Kimi K2.5 to take full advantage of thinking from previous turns. - Fast prefix detection stops the CLI from sending bash command prefix classification requests to the LLM, making it feel blazing fast. I have made the code modular so that adding other providers or messaging apps is easy.
2026-01-29T07:40:08
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qq28qa
false
null
t3_1qq28qa
/r/LocalLLaMA/comments/1qq28qa/i_made_the_claudecode_ecosystem_free_and/
false
false
https://external-preview…616899b03ec354e4
0
{'enabled': False, 'images': [{'id': 'vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=108&crop=smart&auto=webp&s=24b43628159c620250f9366c2e6878b457acbc7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=216&crop=smart&auto=webp&s=f0a06c5d11aaa24bbecf7613567cb5c94d199d4a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=320&crop=smart&auto=webp&s=f86adeb0f83c4d1987749b84222a78fb674107fa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=640&crop=smart&auto=webp&s=e2c8070a40c4398b7be9bcbbec7f3c89161ae42a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=960&crop=smart&auto=webp&s=eb7def4c9f53612f979e1912307985fbfcc89091', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=1080&crop=smart&auto=webp&s=0c4acf64687b939e1e04a2f9bf49cf858e6ddf6e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?auto=webp&s=31be1c2ccce736a5b68745537e22d38d8bc3f98d', 'width': 1200}, 'variants': {}}]}
Locallama
1
[deleted]
2026-01-29T07:38:40
[deleted]
1970-01-01T00:00:00
0
{}
1qq27v3
false
null
t3_1qq27v3
/r/LocalLLaMA/comments/1qq27v3/locallama/
false
false
default
1
null
Has anyone actually made local coding models usable with Cursor (agent mode)?
5
I spent the last couple of days trying to get a *real* local coding setup working with Cursor, and I'm genuinely curious if anyone here has cracked this in a practical way. My goal is to simply use Cursor with a local model via an OpenAI-compatible API with chat + agent workflows (tool calls, file edits, etc). Here's what I tried on my Mac (M4 Pro, 48GB RAM): **1) Ollama / LM Studio style setup** Easy to run, but Cursor agent mode basically fell apart with tool calling issues. I mean I could have made some shims or proxies to fix the formatting but I moved on to other methods. **2) llama.cpp (llama-server) + OpenAI API** This *did* work functionally but with some patchwork. Qwen2.5-Coder and Qwen3-Coder models responded correctly and tool calls showed up. But Cursor sends \~15–20k token prompts and prefill dominated everything. Even with 4-bit quantized models, simple queries felt stuck for 30–60 seconds. **3) MLX-based servers (mlx-lm, vllm-mlx)** This was the most promising since it actually uses Apple's GPU properly. Qwen3-Coder-30B-A3B (4bit) ran and worked with Cursor after patching a few rough edges. Measured numbers on a real Cursor request (\~17k tokens): * Prefill: \~40 seconds * Decode: \~1.8 seconds * Decode speed: \~37 tok/s So decode is fine, but prefill kills the UX completely. At this point my takeaway is local models are great for small prompts, offline chat, note assistants, etc but **Cursor-style coding with large context + agent loops feels impractical today**, even on strong Apple Silicon. I'm not saying it's impossible. I just couldn't make it feel usable. My question is has anyone here actually managed to run a local coding model with Cursor in a way that feels productive?
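Doing the arithmetic on those measurements makes the bottleneck obvious (my numbers from above, nothing else assumed):

```python
# Back-of-envelope check on the measured Cursor request above.
prompt_tokens = 17_000
prefill_seconds = 40.0
decode_tokens_per_s = 37.0

prefill_tps = prompt_tokens / prefill_seconds
print(f"prefill throughput: {prefill_tps:.0f} tok/s")  # prints 425 tok/s

# Time to first token is dominated entirely by prefill; decode barely matters:
answer_tokens = 200
total = prefill_seconds + answer_tokens / decode_tokens_per_s
print(f"total for a 200-token answer: {total:.1f} s")  # prints 45.4 s
```

So even at a healthy 37 tok/s decode, roughly 90% of the wall-clock time on a typical Cursor agent request goes to prefill, which is why the UX collapses regardless of decode speed.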
2026-01-29T07:13:17
https://www.reddit.com/r/LocalLLaMA/comments/1qq1sni/has_anyone_actually_made_local_coding_models/
Unique_Plane6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq1sni
false
null
t3_1qq1sni
/r/LocalLLaMA/comments/1qq1sni/has_anyone_actually_made_local_coding_models/
false
false
self
5
null
Best coder for 48gb vram
8
Any suggestions? Running RTX 5090 + 5070 ti in a dual GPU setup with 192gb system ram. Thank you
2026-01-29T07:05:28
https://www.reddit.com/r/LocalLLaMA/comments/1qq1nxv/best_coder_for_48gb_vram/
ComfyUser48
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq1nxv
false
null
t3_1qq1nxv
/r/LocalLLaMA/comments/1qq1nxv/best_coder_for_48gb_vram/
false
false
self
8
null
Is Claude's memory feature actually useful for dev?
0
I am having issues with context and memory in Claude Code. Making it understand what we did in the last session is painful. I need to be able to access its past conversations and store them somewhere. I came across this setting: will it help if I use it, or will I end up needing a plugin anyway? Need suggestions. If a plugin is the answer, is there a free one I can use? Thanks.
2026-01-29T07:01:07
https://i.redd.it/g9oswms8k8gg1.png
Ok_Cheek_8833
i.redd.it
1970-01-01T00:00:00
0
{}
1qq1laj
false
null
t3_1qq1laj
/r/LocalLLaMA/comments/1qq1laj/is_claudes_memory_feature_actually_useful_for_dev/
false
false
default
0
{'enabled': True, 'images': [{'id': 'g9oswms8k8gg1', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?width=108&crop=smart&auto=webp&s=7b5949d021c26925d69ca324a34acf2999dbdb09', 'width': 108}, {'height': 51, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?width=216&crop=smart&auto=webp&s=ffcdb738807effee7105fd7fd0fc0daadfc8ea06', 'width': 216}, {'height': 75, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?width=320&crop=smart&auto=webp&s=64e43d676335e7a43993fed90e751e02d77f813d', 'width': 320}, {'height': 151, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?width=640&crop=smart&auto=webp&s=c5dda552183b4660d828153811f1a47476aca749', 'width': 640}, {'height': 226, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?width=960&crop=smart&auto=webp&s=4a2b3b20dd5145329c87ccc60014f8264e537b0f', 'width': 960}], 'source': {'height': 249, 'url': 'https://preview.redd.it/g9oswms8k8gg1.png?auto=webp&s=8e93e49a6ec8e310e0ec2274888c66a6c7bd22b1', 'width': 1054}, 'variants': {}}]}
People seem to already not care about heretic?
4
It seemed pretty great to me: basically automatic abliteration, but without making the models as dumb. Yet it seems hardly anyone is making high-quality Heretic models anymore; most people still just use abliterated ones. Also, what happened to Arli's derestricted models?
2026-01-29T06:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1qq18us/people_seem_to_already_not_care_about_heretic/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq18us
false
null
t3_1qq18us
/r/LocalLLaMA/comments/1qq18us/people_seem_to_already_not_care_about_heretic/
false
false
self
4
null
Why Are My Qwen2.5-0.5B MATH-500 Scores So Much Lower Than Reported?
2
I recently tried Qwen2.5-0.5B-Instruct for a personal project. While comparing my fine-tuned model on the MATH-500 benchmark, I looked up reported baseline results and found some inconsistencies: • The official technical report suggests 34.4% • A research paper reports around 31.4% (link: https://arxiv.org/html/2506.13404v2) • But when I reran MATH-500 myself, I only got \~20–22%, which was pretty disappointing Here’s what I’ve checked so far: • I’m using the official chat template • For the prompt, I’m only providing the problem statement (no extra instructions) • I used Qwen’s recommended decoding hyperparameters (temperature / top\_p / top\_k) • No quantization So… what might I be missing? Are there any common gotchas for reproducing the reported MATH-500 scores for Qwen2.5-0.5B-Instruct (prompt format, stopping criteria, answer extraction, evaluation script settings, few-shot vs zero-shot, etc.)? Any pointers would be appreciated.
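One gotcha I'd check first is answer extraction: many MATH-500 harnesses only grade the content of the final \boxed{...}, so a model that states the right answer outside the box scores zero. A minimal extractor sketch of the kind I mean (my own, not from any official eval script):

```python
import re

def extract_boxed(text):
    # Grab the content of the last \boxed{...}, tolerating one level of
    # nested braces (enough for answers like \frac{1}{2}).
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1].strip() if matches else None

print(extract_boxed(r"... so the answer is \boxed{\frac{1}{2}}"))  # \frac{1}{2}
print(extract_boxed("The answer is 42."))  # None
```

If your harness returns None for a lot of completions (missing box, truncated generation, different stop tokens), the measured accuracy drops well below the model's real ability, which could easily account for a 10-point gap.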
2026-01-29T06:37:09
https://www.reddit.com/r/LocalLLaMA/comments/1qq160r/why_are_my_qwen2505b_math500_scores_so_much_lower/
According_Air_3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq160r
false
null
t3_1qq160r
/r/LocalLLaMA/comments/1qq160r/why_are_my_qwen2505b_math500_scores_so_much_lower/
false
false
self
2
null
I built an open-source, multi-agent alternative to OpenAI Prism for research workflows (Verification Agent + LaTeX + PDF)
50
Hey everyone, I’ve been working on an open-source project called **Prismer** to tackle the mess that is the current academic workflow. Like many of you, I found that using generic LLMs for research often leads to hallucinations, especially with citations. And relying on closed ecosystems like OpenAI’s Prism wasn’t ideal for privacy or customization. So I built Prismer, an all-in-one platform that integrates: * AI-Native PDF Reader: with bi-directional citation graphs. * Citation Verification Agent: uses multiple agents to cross-check references against real databases (arXiv, etc.) to prevent LLM hallucinations. * Jupyter Integration: for data analysis right next to your writing. * LaTeX Editor: with real-time preview. It’s completely open-source (MIT License). The goal is to have a modular system where you can swap in your own models or agents. I’d love to get some feedback from this community on the agent orchestration part specifically. Repo: [https://github.com/Prismer-AI/Prismer](https://github.com/Prismer-AI/Prismer) Let me know what you think!
2026-01-29T06:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1qq0qut/i_built_an_opensource_multiagent_alternative_to/
Inside-Scratch4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq0qut
false
null
t3_1qq0qut
/r/LocalLLaMA/comments/1qq0qut/i_built_an_opensource_multiagent_alternative_to/
false
false
self
50
{'enabled': False, 'images': [{'id': 'N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=108&crop=smart&auto=webp&s=6dd2cbf74b0e5ed82e1db49fd263d6da47842c14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=216&crop=smart&auto=webp&s=bcd7b690da245c13132a482a71e4ab3aedebd778', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=320&crop=smart&auto=webp&s=b459d6727a8109e0543e47bf6063eefaa0b379e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=640&crop=smart&auto=webp&s=259d90d802c1ee17da7dba6311a228c54bfc0f72', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=960&crop=smart&auto=webp&s=d9b0dfab91075d5f548e7da903cd02691aa4a52f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?width=1080&crop=smart&auto=webp&s=ee9dcb3481849f63e4dac4c1a14d7775612517a0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N9eRZnLZ6kKHJuHaPL2iM8tTDTiwB8jIEvyyOlMtwT8.png?auto=webp&s=a62ad801d9baa70b32548ad62aef3e631aa77560', 'width': 1200}, 'variants': {}}]}
RustyMail - IMAP wrapper, and MCP server! (With a Web UI and Email Chatbot...)
1
2026-01-29T05:57:26
https://christopherdavidodom.substack.com/p/rustymail-imap-wrapper-and-mcp-server
f3llowtraveler
christopherdavidodom.substack.com
1970-01-01T00:00:00
0
{}
1qq0f32
false
null
t3_1qq0f32
/r/LocalLLaMA/comments/1qq0f32/rustymail_imap_wrapper_and_mcp_server_with_a_web/
false
false
https://external-preview…a6e4a2d2c2edbb58
1
{'enabled': False, 'images': [{'id': 'yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?width=108&crop=smart&auto=webp&s=4e03ed1c4f965f924ca79ef0b782d7e0b7befdde', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?width=216&crop=smart&auto=webp&s=85c24a84de7576eb8219b3271e7aa45fc8063f39', 'width': 216}, {'height': 203, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?width=320&crop=smart&auto=webp&s=6065df36fed895d605d224e023ee1f9ddde1222e', 'width': 320}, {'height': 406, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?width=640&crop=smart&auto=webp&s=b2ae8f077ed56a903ea7184f49530a7a9f7d7414', 'width': 640}, {'height': 610, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?width=960&crop=smart&auto=webp&s=e8381f0fd3600e37c37710103132a8aa4496cf3b', 'width': 960}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/yC4XKYRNOZJJB2e-jsBFnaz0HsuVZSON8w9XZhvW8VQ.jpeg?auto=webp&s=8afa3d3d771c1b12c6374ccde11a29801cb8c268', 'width': 1062}, 'variants': {}}]}
Guidance Needed: GPT-OSS 20B Fine-Tuning with Unsloth → GGUF → Ollama → Triton (vLLM / TensorRT-LLM)
1
[removed]
2026-01-29T05:47:28
https://www.reddit.com/r/LocalLLaMA/comments/1qq08ax/guidance_needed_gptoss_20b_finetuning_with/
Double_Tourist3600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qq08ax
false
null
t3_1qq08ax
/r/LocalLLaMA/comments/1qq08ax/guidance_needed_gptoss_20b_finetuning_with/
false
false
self
1
null
Need help from experts
0
Hi, I am a second-year B.Tech student. Some friends and I have an idea that we can apply to two different ailments. We think using an LLM will be the best way to implement it. It is like a chatbot, but something different: an MVP chatbot with multiple use cases that we will develop later. So I want to know how LLMs are actually tested locally, and how developers prepare an evaluation set for them, because there are so many bottlenecks. At an introductory level, there are many models we cannot even test locally because of limited GPU and VRAM. I'd appreciate suggestions or guidance on how to actually make this happen. For now, I am planning three separate models: a vision model, a model for math and calculation, and a general listening model. How do I make these pieces work together during development, and how can I then take the system to production?
2026-01-29T05:01:04
https://www.reddit.com/r/LocalLLaMA/comments/1qpzbd8/need_help_from_experts/
Friendly_Smile_7087
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpzbd8
false
null
t3_1qpzbd8
/r/LocalLLaMA/comments/1qpzbd8/need_help_from_experts/
false
false
self
0
null
Spent 2 hours looking for a certain creative platform.
1
[removed]
2026-01-29T04:59:23
https://www.reddit.com/r/LocalLLaMA/comments/1qpza0y/spent_2_hours_looking_for_a_certain_creative/
Kind_Abrocoma4627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpza0y
false
null
t3_1qpza0y
/r/LocalLLaMA/comments/1qpza0y/spent_2_hours_looking_for_a_certain_creative/
false
false
nsfw
1
null
I Vibe-coded on-device AI app (100% local YOLO26)
0
https://reddit.com/link/1qpz889/video/9ls50pnoy7gg1/player I wanted to test out Ultralytics' new object detection model on-device. This is the result for picture analysis! As a non-dev, it's incredible to be able to build an app like this within a couple of hours 📲⚡️ Stack: ✅ my phone: iPhone 16 Pro ✅ YOLO26 for the model ✅ Mélange for on-device AI SDK ✅ Antigravity for Coding Please share your cool vibe-coded on-device AI app projects !!!
2026-01-29T04:56:57
https://www.reddit.com/r/LocalLLaMA/comments/1qpz889/i_vibecoded_ondevice_ai_app_100_local_yolo26/
Suspicious-Camp-6220
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpz889
false
null
t3_1qpz889
/r/LocalLLaMA/comments/1qpz889/i_vibecoded_ondevice_ai_app_100_local_yolo26/
false
false
self
0
null
Reasoning Devstral 2
50
Fun fact! You can actually make Devstral 2 123B & Devstral 24B reason! I accidentally had a reasoning-forcing Jinja template enabled (left over from another model) when I started testing the MLX version, along with a couple of "reasoning effort = extra high" statements in my system prompt, because I had really wanted more reasoning out of the last model I was using. Having forgotten about all that, I tried Devstral 2 and got two minutes of reasoning before it answered my test question. Turns out they are both hybrid reasoners if you put {%- set reasoning\_content = 'High' %} in the Jinja template. Nice clean logical reasoning as well. That actually fixed my main issue with these models: sometimes you just really need that extra consistency. Did everybody else know this and I just missed it somehow?
2026-01-29T04:42:29
https://www.reddit.com/r/LocalLLaMA/comments/1qpyxfk/reasoning_devstral_2/
Front_Eagle739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpyxfk
false
null
t3_1qpyxfk
/r/LocalLLaMA/comments/1qpyxfk/reasoning_devstral_2/
false
false
self
50
null
7 GPU with 78gb total VRAM
0
Hi. I have these GPUs sitting in sealed, unopened boxes at home. Would they be enough to set up a local PC running models at reasonably fast speed? It looks like buying a motherboard and the other components would run less than $1k, which is fine, but the GPUs seem a bit dated. Or would I be better off just selling them? Another friend told me some people pay a premium for unopened GPU boxes as collector's items. 🤷‍♂️

- two RTX 2080 Ti
- four Nvidia Titan Xp Collector's Edition
- one RTX 2070
- a lot of 8GB DDR4
- a lot of 16GB DDR4
- several 1TB SSDs

Long story on how come… but basically a friend gave them all to me during Covid times to "clean things up" and I just didn't get around to doing anything with them until now.
2026-01-29T04:42:04
https://www.reddit.com/r/LocalLLaMA/comments/1qpyx4m/7_gpu_with_78gb_total_vram/
herPassword
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpyx4m
false
null
t3_1qpyx4m
/r/LocalLLaMA/comments/1qpyx4m/7_gpu_with_78gb_total_vram/
false
false
self
0
null
Rewrote my AI context tool in Rust after Node.js OOM’d at 1.6k files. 10k files now processed in 2s.
5
Over the last week, I've been working on Drift, an AST parser that uses semantic learning (with regex fallback) to index a codebase with metadata across 15+ categories. It exposes this data through a CLI or MCP (Model Context Protocol) to map out conventions automatically and help AI agents write code that actually fits your codebase's style.

The Problem: Upon testing with "real" enterprise codebases, I quickly ran into the classic Node.js trap. The TypeScript implementation would crash around 1,600 files with `FATAL ERROR: JavaScript heap out of memory`. I was left with two choices:

1. Hack around `max-old-space-size` and pray.
2. Rewrite the core in Rust.

I chose the latter. The architecture now handles scanning, parsing (Tree-sitter), and graph building in Rust, using SQLite for storage instead of in-memory objects.

The Results: The migration from JSON file sharding to a proper SQLite backend (WAL mode) destroyed the previous benchmarks.

| Metric | Previous (Rust + JSON Shards) | Current (Rust + SQLite) | Improvement |
|---|---|---|---|
| 5,000 files | 4.86s | 1.11s | 4.4x |
| 10,000 files | 19.57s | 2.34s | 8.4x |

Note: The original Node.js version couldn't even finish the 10k-file dataset.

What is Drift? Drift is completely open-sourced and runs offline (no internet connection required). It's designed to be the "hidden tool" that bridges the gap between your codebase's implicit knowledge and your AI agent's context window. I honestly can't believe a tool like this didn't exist in this specific capacity before. I hope it helps some of your workflows! I'd appreciate any feedback on the Rust implementation or the architecture. Repo: https://github.com/dadbodgeoff/drift
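The JSON-shards-to-SQLite jump described in this post mostly comes down to WAL mode plus batched transactions. Drift's storage layer is Rust, but the same PRAGMAs apply from any SQLite client; here is an illustrative Python `sqlite3` sketch (the `symbols` schema and file names are made up for the example, not Drift's actual layout):

```python
import os
import sqlite3
import tempfile

# Illustrative schema only; Drift's real storage layer is Rust.
db_path = os.path.join(tempfile.mkdtemp(), "drift_index.db")
conn = sqlite3.connect(db_path)

# WAL lets readers proceed while a writer commits, which matters when
# many parse workers stream inserts into one database.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")  # fewer fsyncs; safe with WAL

conn.execute(
    "CREATE TABLE IF NOT EXISTS symbols (file TEXT, name TEXT, kind TEXT, line INTEGER)"
)

# One transaction per batch instead of one per row: this, more than WAL,
# is usually where the order-of-magnitude win over per-file JSON comes from.
with conn:
    conn.executemany(
        "INSERT INTO symbols VALUES (?, ?, ?, ?)",
        [("auth.rs", "login", "fn", 42), ("auth.rs", "Token", "struct", 7)],
    )

rows = conn.execute("SELECT COUNT(*) FROM symbols").fetchone()[0]
```

The batching point is worth stressing: autocommit inserts fsync per row, while a single wrapped transaction amortizes that across thousands of rows.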
2026-01-29T04:38:58
https://www.reddit.com/r/LocalLLaMA/comments/1qpyuts/rewrote_my_ai_context_tool_in_rust_after_nodejs/
Fluffy_Citron3547
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpyuts
false
null
t3_1qpyuts
/r/LocalLLaMA/comments/1qpyuts/rewrote_my_ai_context_tool_in_rust_after_nodejs/
false
false
self
5
{'enabled': False, 'images': [{'id': 'Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=108&crop=smart&auto=webp&s=63c6e9b1592b15f5a8ff8d26d0b5c7601de4d303', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=216&crop=smart&auto=webp&s=d19a9e5a02dc4b1031c59674bc7853d5ad1305a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=320&crop=smart&auto=webp&s=8a347b5944e9a82da5e90416ba94e7c30bf0d2ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=640&crop=smart&auto=webp&s=10355a13753dff21f0a9153c104d7d9c879c5e9a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=960&crop=smart&auto=webp&s=009a0c46e5568d01585f43df9ea0fae589d0ef49', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?width=1080&crop=smart&auto=webp&s=b2a74f4a0f61c597493b7e73acb87dfde1f63345', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Mn7mPFHVghYbPrEPuexPInN_Eq72OHvehSvNqbRkuZE.png?auto=webp&s=37aac2ce646c4cfdc56c174d885ab1dfe57400ef', 'width': 1200}, 'variants': {}}]}
I made a one-liner to deploy your own AI assistant (Moltbot) to Fly.io with WhatsApp integration
0
Hello 👋🏼 I Built a script that deploys MoltBot (open source personal AI assistant) to [Fly.io](http://fly.io/), in one command: curl -fsSL [https://raw.githubusercontent.com/blissito/moltbot-flyio/main/install.sh](https://raw.githubusercontent.com/blissito/moltbot-flyio/main/install.sh) | bash **What you get**: \- Your own (Claude/OpenAI/any)-powered assistant running 24/7 \- WhatsApp integration (scan QR, done) 🤯 \- Web dashboard to manage everything \- One machine on [Fly.io](http://fly.io/) (free tier works to start) **The installer handles**: \- [Fly.io](http://fly.io/) app creation \- Persistent volume for data \- Secrets configuration \- 4GB RAM setup (2GB causes OOM) \- Gateway token generation You just need: \- [Fly.io](http://fly.io/) account (free) & flyctl installed \- Anthropic/OpenAI API key GitHub: [https://github.com/blissito/moltbot-flyio](https://github.com/blissito/moltbot-flyio) ¿Why? It just makes Moltbot cloud deployment dead simple. 🤷🏻‍♂️ If you liked it, give it a star ⭐️ or a PR if you find a bug, it's open source. 🤓
2026-01-29T04:35:14
https://www.reddit.com/r/LocalLLaMA/comments/1qpys29/i_made_a_oneliner_to_deploy_your_own_ai_assistant/
PoetSad977
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpys29
false
null
t3_1qpys29
/r/LocalLLaMA/comments/1qpys29/i_made_a_oneliner_to_deploy_your_own_ai_assistant/
false
false
self
0
null
discount for Kimi-K2.5. #ia #moonshotai
0
https://preview.redd.it/…Here's the link.
2026-01-29T04:16:08
https://www.reddit.com/r/LocalLLaMA/comments/1qpydo2/discount_for_kimik25_ia_moonshotai/
FrankMillerMC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpydo2
false
null
t3_1qpydo2
/r/LocalLLaMA/comments/1qpydo2/discount_for_kimik25_ia_moonshotai/
false
false
https://a.thumbs.redditm…0x4WCPp_WUK4.jpg
0
null
I built a Telegram "Remote Control" for Claude-Code (and learned a lot about LLM reasoning)
1
I've been trying to get Claude-Code working with models like kimi-k2.5 and z-ai/glm4.7 through the NVIDIA NIM free API, but the reasoning was consistently terrible compared to the native models. It turns out the issue was the "interleaved thinking tokens" being lost during tool calls. I spent the last week building a bridge to keep these tokens in sequence, and the jump in logic is actually insane. It can go up to 40 RPM on free, unlimited usage. To make it more usable, I hooked it up to a Telegram bot so I can monitor the thinking steps, kill the process from my phone if it starts hallucinating, and give it new tasks. Both Telegram and the terminal CLI can be used concurrently, since the server handles both.
2026-01-29T04:00:28
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qpy1st
false
null
t3_1qpy1st
/r/LocalLLaMA/comments/1qpy1st/i_built_a_telegram_remote_control_for_claudecode/
false
false
https://external-preview…616899b03ec354e4
1
{'enabled': False, 'images': [{'id': 'vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=108&crop=smart&auto=webp&s=24b43628159c620250f9366c2e6878b457acbc7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=216&crop=smart&auto=webp&s=f0a06c5d11aaa24bbecf7613567cb5c94d199d4a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=320&crop=smart&auto=webp&s=f86adeb0f83c4d1987749b84222a78fb674107fa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=640&crop=smart&auto=webp&s=e2c8070a40c4398b7be9bcbbec7f3c89161ae42a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=960&crop=smart&auto=webp&s=eb7def4c9f53612f979e1912307985fbfcc89091', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?width=1080&crop=smart&auto=webp&s=0c4acf64687b939e1e04a2f9bf49cf858e6ddf6e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vVeqJchU2bQMwh_ND5fZ4X2r1XxX3qdNcTtwNW6_1jI.png?auto=webp&s=31be1c2ccce736a5b68745537e22d38d8bc3f98d', 'width': 1200}, 'variants': {}}]}
I built a Telegram "Remote Control" for Claude-Code (and learned a lot about LLM reasoning)
1
I’ve been obsessed with `claude-code` lately, but I hated being tied to my desk for long-running tasks. I decided to see if I could bridge the gap between my local terminal and my phone and it turned into a massive learning experience regarding how LLMs actually "think." The Showcase: Real-time Terminal Control via Telegram The most satisfying part of this project was getting the **Telegram bot integration** working. It’s not just a notification system; it’s a full remote-control bridge. * **Real-time Reasoning:** I can watch the model’s "Thinking Tokens" unfold on my phone while I'm away from my desk. * **Live Tool Monitoring:** I get to see exactly which local tools/files the model is accessing in real-time. * **Remote Kill-Switch:** If I see the model looping or hallucinating, I can just send `/stop` from the chat to kill the local process immediately. The Technical Challenge: Solving the "Stupid Model" Problem While building the Telegram bridge, I hit a major technical hurdle. When using external models (like Kimi-k2.5 or GLM4.7 via NVIDIA NIM) with Claude-Code, the models often seemed "dumber" than they should be. I discovered that most bridges fail to preserve **interleaved thinking tokens** during tool calls. 1. **The Issue:** If the model loses its reasoning steps between tool calls, it loses the context of *why* it’s doing what it’s doing. 2. **The Fix:** I had to learn how to properly serialize these tokens so they stay in sequence. Once I fixed this in my bridge (`cc-nim`), the models became noticeably more capable at complex debugging. What else I learned: * **Latency is the enemy:** I implemented a custom **prefix detection** system. This prevents Claude-Code from sending redundant "classification" requests, which makes the whole mobile experience feel snappy rather than laggy. * **Environment Sandboxing:** Learning how to safely expose only specific directories to the bot via environment variables was a great lesson in CLI security. The coolest part? 
Once the bridge was stable enough, I actually used the Telegram bot to help me debug the code for the bot itself while I was out getting coffee. **Has anyone else tried building a remote interface for their local LLM agents? I'd love to hear how you handled the security or the token streaming!**
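The serialization fix described above can be sketched roughly like this. The field names (`reasoning_content`, `tool_calls`) are assumptions modeled on common chat-completion APIs, not the actual cc-nim internals:

```python
# Hedged sketch of keeping interleaved thinking tokens in sequence.
# Field names are illustrative assumptions, not cc-nim's real schema.

def merge_turn(history: list[dict], chunk: dict) -> list[dict]:
    """Append an assistant chunk, preserving its reasoning alongside any
    tool call instead of dropping it when the tool result comes back."""
    msg = {"role": "assistant", "content": chunk.get("content", "")}
    if chunk.get("reasoning_content"):
        # Keep thinking tokens in-sequence so the next request replays
        # *why* the model called the tool, not just that it did.
        msg["reasoning_content"] = chunk["reasoning_content"]
    if chunk.get("tool_calls"):
        msg["tool_calls"] = chunk["tool_calls"]
    history.append(msg)
    return history

h = merge_turn([], {"content": "", "reasoning_content": "need file list",
                    "tool_calls": [{"name": "ls", "args": {}}]})
```

On the next request the history is replayed verbatim, so the model sees its own reasoning immediately preceding the tool result rather than an unexplained tool call.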
2026-01-29T03:42:48
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qpxntk
false
null
t3_1qpxntk
/r/LocalLLaMA/comments/1qpxntk/i_built_a_telegram_remote_control_for_claudecode/
false
false
default
1
null
Anyone know how to access the Kimi K2.5 Agent Swarm model on OpenRouter?
4
Huge chance this is a separate model entirely, and not an option, based on how you select it from a dropdown on Kimi's site https://www.kimi.com/agent-swarm. If anyone knows anything, let me know.
2026-01-29T03:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1qpxnht/anyone_know_how_to_access_the_kimi_k25_agent/
Ok-Attention2882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpxnht
false
null
t3_1qpxnht
/r/LocalLLaMA/comments/1qpxnht/anyone_know_how_to_access_the_kimi_k25_agent/
false
false
self
4
null
Use claude-code for free with NVIDIA-NIM in the terminal or control it with your phone
1
[removed]
2026-01-29T03:32:59
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qpxg6k
false
null
t3_1qpxg6k
/r/LocalLLaMA/comments/1qpxg6k/use_claudecode_for_free_with_nvidianim_in_the/
false
false
default
1
null
VLLM on RTX 6000 Pro reaching temps of 88°C, but fan only goes up to 65%
2
Set up a local vLLM server running on an RTX 6000 Pro Workstation Edition, and at peak loads the card gets up to nearly 90°C, sometimes slightly above, but the fan doesn't seem to go above 65% no matter what. Is this something others have run into with similar setups? Running vLLM on Ubuntu 22.04.5 LTS with an RTX 6000 Pro card. Wondering if this is an issue with the software setup, a hardware limit itself, or just a bad card. https://preview.redd.it/sy3je29hj7gg1.png?width=1278&format=png&auto=webp&s=eaebbfe537f83c0182867774716a1c16e47fad9b
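For what it's worth, on workstation blower cards the fan curve is often locked in firmware and `nvidia-smi` exposes no direct fan control, so logging the temperature/fan relationship over a load run is usually the first diagnostic step. A small hedged sketch using real `nvidia-smi` query flags (the parsing helper is illustrative):

```python
import subprocess

# These query flags are standard nvidia-smi options; the CSV row format
# below matches --format=csv,noheader,nounits.
QUERY = ["nvidia-smi",
         "--query-gpu=temperature.gpu,fan.speed,power.draw",
         "--format=csv,noheader,nounits"]

def parse_smi_line(line: str) -> dict:
    """Parse one CSV row from the query above into numbers."""
    temp, fan, power = (v.strip() for v in line.split(","))
    return {"temp_c": int(temp), "fan_pct": int(fan), "power_w": float(power)}

def read_gpu_stats() -> list[dict]:
    """One dict per GPU; requires an NVIDIA driver to be installed."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return [parse_smi_line(l) for l in out.stdout.splitlines() if l.strip()]

# Example row shaped like the query output above:
print(parse_smi_line("88, 65, 412.33"))
```

Polling this in a loop while vLLM is under load shows whether the fan plateaus at 65% regardless of temperature (firmware curve) or tracks it (software/driver issue).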
2026-01-29T03:29:52
https://www.reddit.com/r/LocalLLaMA/comments/1qpxdpl/vllm_on_rtx_6000_pro_reaching_temps_of_88c_but/
Legal-Zucchini7766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpxdpl
false
null
t3_1qpxdpl
/r/LocalLLaMA/comments/1qpxdpl/vllm_on_rtx_6000_pro_reaching_temps_of_88c_but/
false
false
https://b.thumbs.redditm…OlsmFI7qFXDk.jpg
2
null
ComfyUI vs Python GPU usage Higgs Audio 2
1
I've been experimenting with Higgs Audio 2 in ComfyUI, where I get relatively quick generation (5 seconds) for a sentence on a 5070 Ti. I use TTS Audio Suite for this. But when I try the example Python script from the Higgs Audio repo, it takes far more than 5 seconds with the same parameters. I know Comfy keeps the model in memory for subsequent runs, but even if I only count the time after the script has loaded the model, it's still way longer than Comfy. I also notice that Comfy uses slightly less VRAM than I have and is light on RAM usage, while the script uses the whole VRAM and a good chunk of RAM too. Any clues on what's going on here?
2026-01-29T03:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1qpww5a/comfyui_vs_python_gpu_usage_higgs_audio_2/
arraydotpush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpww5a
false
null
t3_1qpww5a
/r/LocalLLaMA/comments/1qpww5a/comfyui_vs_python_gpu_usage_higgs_audio_2/
false
false
self
1
null
Looking to have a local setup to train LLM using my documents and photos for small business, can it be done or am I going to run into issues?
1
I found an older used deep-learning workstation. Specs: **Intel i9-10920X** (socket FCLGA2066), **44 GB of VRAM across four RTX 2080 Ti cards (11 GB per card)**, **256 GB DDR4-2933 RAM**. Maybe I am going about things wrong, but I would like to feed all my data to the model and be able to ask questions and get answers based on my actual documents. I have technical documents, inspection photos from communications towers, and lease documents. I am not sure whether I can really do this, but I know that I have no interest in sending this data to an online model. Thanks in advance for any advice or ideas.
2026-01-29T03:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1qpwt7y/looking_to_have_a_local_setup_to_train_llm_using/
Dented_Steelbook
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpwt7y
false
null
t3_1qpwt7y
/r/LocalLLaMA/comments/1qpwt7y/looking_to_have_a_local_setup_to_train_llm_using/
false
false
self
1
null
An AI has no morals. This is a perfect reflection of the morals its creators programmed into it
0
2026-01-29T02:48:17
https://i.redd.it/6x2r00eic7gg1.png
Frequent-Wear-5443
i.redd.it
1970-01-01T00:00:00
0
{}
1qpwfv6
false
null
t3_1qpwfv6
/r/LocalLLaMA/comments/1qpwfv6/an_ai_has_no_morals_this_is_a_perfect_reflection/
false
false
default
0
{'enabled': True, 'images': [{'id': '6x2r00eic7gg1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=108&crop=smart&auto=webp&s=57832a9d42cd65f129c8c0af5f44f2be9f8537ff', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=216&crop=smart&auto=webp&s=5e10f141e9433ff668f46134884a6a238e9f98c6', 'width': 216}, {'height': 122, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=320&crop=smart&auto=webp&s=5ab26e3c49f98d95452eb824b55ef645abc1b272', 'width': 320}, {'height': 245, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=640&crop=smart&auto=webp&s=95ea9130114d16812f561d3b3fc8c046c91a72b1', 'width': 640}, {'height': 367, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=960&crop=smart&auto=webp&s=4e9949c6545e0ad63ced0053bffcca878134c1e8', 'width': 960}, {'height': 413, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?width=1080&crop=smart&auto=webp&s=297f2e2afce50a29f29b82da328669db6c32c0c9', 'width': 1080}], 'source': {'height': 481, 'url': 'https://preview.redd.it/6x2r00eic7gg1.png?auto=webp&s=3db856f7880befa0786e38042ac223fe7bbaed7f', 'width': 1256}, 'variants': {}}]}
Using Qwen2.5-0.5B to auto-summarize terminal output for AI coding assistants
16
I added local LLM summarization to my terminal history tool using Qwen2.5-0.5B (Q4_K_M) via llama.cpp. Wanted to share since the model choice might be useful for others building similar "small model for specific task" features. **The problem:** I use Claude Code for development. When debugging, I'd run commands like `kubectl logs` or `cargo test`, get walls of output, then have to copy-paste relevant bits into the AI. Tedious. **The solution:** Wake records terminal sessions to SQLite. When a command finishes with significant output (>1KB), a background task generates a 1-2 sentence summary. The AI assistant can then see summaries like: > "Build failed with 3 errors in auth.rs: missing lifetime parameters on lines 42, 67, 89" ...instead of reading 500 lines of compiler output. **Why Qwen2.5-0.5B:** - **Size:** ~468MB quantized - acceptable for auto-download - **Speed:** Few seconds per summary on CPU - fast enough for background processing - **Quality:** Surprisingly good at technical summarization (build output, logs, test results) - **Instruction-tuned:** Follows the "summarize in 1-2 sentences" prompt well I tried Phi-3 Mini first but at 2.3GB it felt too heavy for a feature that should "just work." The 0.5B model hits the sweet spot. **Implementation:** - Rust + llama-cpp-2 crate (llama.cpp bindings) - ChatML prompt format - ~4000 char context window (truncate middle for long outputs) - Temp 0.7, top_p 0.9 ```rust let prompt = format!( "<|im_start|>system\n{}<|im_end|>\n<|im_start|>user\n{}<|im_end|>\n<|im_start|>assistant\n", system_prompt, user_message ); ``` **Results:** Works well for my use case. Summaries are useful ~90% of the time. Occasionally hallucinates line numbers but the gist is always correct. Repo if anyone's curious: https://github.com/joemckenney/wake Anyone else using small models for similar "specific task" features? Curious what models/sizes others have found effective.
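The "~4000 char context window (truncate middle for long outputs)" step in this post can be sketched in a few lines. This is a minimal version under stated assumptions (the marker string and even head/tail split are illustrative, not Wake's actual code):

```python
def truncate_middle(text: str, max_chars: int = 4000,
                    marker: str = "\n...[truncated]...\n") -> str:
    """Keep the head and tail of long command output; compiler errors and
    test summaries tend to live at the edges, so cutting the middle is
    the safest way to fit a small model's context."""
    if len(text) <= max_chars:
        return text
    keep = max_chars - len(marker)
    head = keep // 2
    tail = keep - head
    return text[:head] + marker + text[-tail:]
```

The result is always exactly `max_chars` long for oversized input, so the downstream prompt budget stays predictable.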
2026-01-29T02:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1qpvv5w/using_qwen2505b_to_autosummarize_terminal_output/
averagemrjoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpvv5w
false
null
t3_1qpvv5w
/r/LocalLLaMA/comments/1qpvv5w/using_qwen2505b_to_autosummarize_terminal_output/
false
false
self
16
{'enabled': False, 'images': [{'id': '_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=108&crop=smart&auto=webp&s=e2e856391dcca27979930eb66f93debf7a0fb179', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=216&crop=smart&auto=webp&s=b51a6899e58888ebb340557bcb1466d9c3350b90', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=320&crop=smart&auto=webp&s=061b355fbc480e8cca4d638377fb5b5b83f4189f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=640&crop=smart&auto=webp&s=7d869bc70852ec7db2f0bc808d2ae695869259fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=960&crop=smart&auto=webp&s=27ae74040498998e2c3bfbd8efab586b58f3c5ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?width=1080&crop=smart&auto=webp&s=d470f8b12788b7e46867ae3e254e57c279c0de90', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_GtsiFDQnjLkLCr8MfZ4S6_FgbH2xZNXVhuaEvLqcLI.png?auto=webp&s=ca153ad34a39a70c32a3e65af18bb475a29338b2', 'width': 1200}, 'variants': {}}]}
spent months creating a chatbot with your own api's that never lose history or code
0
Hey guys I have been working so hard on this for months but I am about to launch soon. I wanted to build hype by creating a waiting list for this!
2026-01-29T01:22:08
https://www.thetoolswebsite.com/
Either-Ad9874
thetoolswebsite.com
1970-01-01T00:00:00
0
{}
1qpug75
false
null
t3_1qpug75
/r/LocalLLaMA/comments/1qpug75/spent_months_creating_a_chatbot_with_your_own/
false
false
default
0
null
Pertinent take on projects coded with AI
2
2026-01-29T01:05:54
/r/Python/comments/1qpq3cc/rant_ai_is_killing_programming_and_the_python/
rm-rf-rm
1970-01-01T00:00:00
0
{}
1qpu2mn
false
null
t3_1qpu2mn
/r/LocalLLaMA/comments/1qpu2mn/pertinent_take_on_projects_coded_with_ai/
false
false
default
2
null
Prompt Engineering Tool For The Obsessed
0
# Prompt Engineering Over And Over **Story Time** I am very particular about what and how I use AI. I am not saying I am a skeptic; quite the opposite, actually. I know that AI/LLM tools are capable of great things **AS LONG AS THEY ARE USED PROPERLY**. For the longest time, whenever I needed optimal results from an AI tool or chatbot, this is the process I would go through:

1. Go to the GitHub repo of [friuns2/BlackFriday-GPTs-Prompts](https://github.com/friuns2/BlackFriday-GPTs-Prompts)
2. Open the file [Prompt-Engineering.md](https://github.com/friuns2/BlackFriday-GPTs-Prompts/blob/main/Prompt-Engineering.md)
3. Select the [ChatGPT 4 Prompt Improvement](https://github.com/friuns2/BlackFriday-GPTs-Prompts/blob/main/gpts/chatgpt-4-prompt-improvement.md) prompt
4. Copy and paste that prompt over to my chatbot of choice
5. Begin prompting with my hyperspecific, multi-paragraph prompt
6. Read and respond to the 3-6 questions the chatbot came up with, so the next iteration of the prompt would be even more specific
7. After many cycles of prompting, reprompting, and answering, use the final refined prompt to get the optimal result

While this process was always exhilarating to repeat multiple times a day, for some reason I kept yearning for a faster, more efficient, and better-organized method. Coincidentally, winter break began for me around November; I had over a month of free time and a mental task I was craving to overengineer. The result: [ImPromptr](https://impromptr.com), the iterative prompt-engineering tool to help you get your best results. It doesn't stop at prompts, though: each chat instance where you are improving your prompts can also generate markdown context files for your esoteric use cases. In many cases online, you can almost always find a prompt you are looking for with 98.67% accuracy. With ImPromptr, you don't have to sacrifice your precious percentage points.

Each saved prompt allows you to modify the prompt in its entirety to your heart's desire **WHILE** maintaining a strict version-control system that lets you walk through the lifecycle of the prompt. Once again, I truly believe that AI-assisted *everything* is the future, whether it be engineering, research, education, or more. The optimal scenario with AI is that, given **exactly** what you are looking for, the tools will understand exactly what they need to do and execute their task with clarity and context. I hope this project can help everyone out with the first part.
2026-01-29T00:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1qptv4r/prompt_engineering_tool_for_the_obsessed/
Sea-Opposite-4805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qptv4r
false
null
t3_1qptv4r
/r/LocalLLaMA/comments/1qptv4r/prompt_engineering_tool_for_the_obsessed/
false
false
self
0
null
768GB "Mobile" AI Server Follow-Up Part 4, Image Gen Temp/Power Stats
3
Final part of the follow-up to the "Mobile" AI server post; I recommend reviewing the other three posts/videos first for coherence and flow. Due to Reddit video size/length restrictions I'm having to break the video into parts, but the full (and better-quality) video is uploaded to YouTube: [https://youtu.be/TJOKEFdCkv0](https://youtu.be/TJOKEFdCkv0) This last section closes out the LLM testing and transitions to temperature and whole-system power-draw stats for image gen tasks, then some final remarks.
2026-01-29T00:40:05
https://v.redd.it/llzmeviqo6gg1
SweetHomeAbalama0
/r/LocalLLaMA/comments/1qptgg2/768gb_mobile_ai_server_followup_part_4_image_gen/
1970-01-01T00:00:00
0
{}
1qptgg2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/llzmeviqo6gg1/DASHPlaylist.mpd?a=1772368815%2CZTdmZTgwNjdhOTJiMTdkOTg0ZTVlNzY4ZmQ5MjBkZmI4ZmY0MGYxNjY5YWMxYWQ4MmNjZTY4YzEzYWIwOTdiYg%3D%3D&v=1&f=sd', 'duration': 897, 'fallback_url': 'https://v.redd.it/llzmeviqo6gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/llzmeviqo6gg1/HLSPlaylist.m3u8?a=1772368815%2CNWUwM2M2YzAyMzE4M2ExNWJmYjFkM2QwNDdlMWZiNDVkNzkzOGY1ZTMxNjU1ZWI5MzI5YjkwNmQwMThlMjM5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/llzmeviqo6gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qptgg2
/r/LocalLLaMA/comments/1qptgg2/768gb_mobile_ai_server_followup_part_4_image_gen/
false
false
https://external-preview…e5cd86edbee94f2f
3
{'enabled': False, 'images': [{'id': 'c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-.png?width=108&crop=smart&format=pjpg&auto=webp&s=0d71c38a6b3211c79cf46229d75a2fa6ed5c84a9', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-.png?width=216&crop=smart&format=pjpg&auto=webp&s=fc7e144d4e27f6d1003d19f2e1989799eb416b75', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-.png?width=320&crop=smart&format=pjpg&auto=webp&s=6cb90fcb9277e388cc9c9d22b146f48f14a9af2a', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-.png?width=640&crop=smart&format=pjpg&auto=webp&s=b35b075b809ada59ade81d7fbcb47f3e8113dea2', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/c3hsb2FoanFvNmdnMUKQ09L3CfmJyjwVppk62OraHd6hKKfg8L480pJgAxP-.png?format=pjpg&auto=webp&s=df24e97c98b1479821b774182b43d570fc9466a6', 'width': 720}, 'variants': {}}]}
Fast real-time multi-speaker speech to text with timestamp and overlap interleaving.
27
I was messing around with lightweight, high-speed (real-time) multi-speaker speech to text and figured I'd share. [https://github.com/Deveraux-Parker/Parakeet\_Multitalk](https://github.com/Deveraux-Parker/Parakeet_Multitalk) It takes fairly messy audio with multiple speakers and does a decent job of turning it into interleaved conversation and timestamped words or sentences, color-coded by speaker. Fairly lightweight. I might wire it into my 1000x fastapi sometime to get it properly sped up, but in the meantime, shrug. Neat little model.
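The interleaving step (turning per-word speaker-labeled timestamps into conversation turns) can be sketched roughly like this. This is a simplified illustration of the general technique, not the repo's actual code; the `Word` structure and the `gap` pause threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float   # start time in seconds
    speaker: str   # speaker label, e.g. "A" / "B"

def interleave(words, gap=0.8):
    """Group timestamped, speaker-labeled words into conversation turns.

    Words are sorted by start time; a new turn begins whenever the speaker
    changes or the pause since the previous word exceeds `gap` seconds,
    which is what makes overlapping speech come out interleaved.
    """
    turns = []
    for w in sorted(words, key=lambda w: w.start):
        last = turns[-1] if turns else None
        if last and last["speaker"] == w.speaker and w.start - last["end"] <= gap:
            last["text"] += " " + w.text   # continue the current turn
            last["end"] = w.start
        else:
            turns.append({"speaker": w.speaker, "text": w.text,
                          "start": w.start, "end": w.start})
    return turns
```

With overlapping speakers, each interjection breaks the current turn, so the output reads as an alternating transcript ordered by timestamp.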
2026-01-29T00:27:45
https://i.redd.it/1nl5ovorm6gg1.png
teachersecret
i.redd.it
1970-01-01T00:00:00
0
{}
1qpt5xc
false
null
t3_1qpt5xc
/r/LocalLLaMA/comments/1qpt5xc/fast_realtime_multispeaker_speech_to_text_with/
false
false
default
27
{'enabled': True, 'images': [{'id': '1nl5ovorm6gg1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?width=108&crop=smart&auto=webp&s=5136ad2c98c05447e1b8556234c0946e2554baa5', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?width=216&crop=smart&auto=webp&s=924b239a5150ae573fc5022c298f5abd6f6fa562', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?width=320&crop=smart&auto=webp&s=486cc99de9488637c169bd9de8c1698ddc300c9a', 'width': 320}, {'height': 513, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?width=640&crop=smart&auto=webp&s=534ab40feda56e95de4ca8c008c50f6588b6e20c', 'width': 640}, {'height': 769, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?width=960&crop=smart&auto=webp&s=523fffeb054f8f229501a7d99dba81f98f293da4', 'width': 960}], 'source': {'height': 861, 'url': 'https://preview.redd.it/1nl5ovorm6gg1.png?auto=webp&s=e41ebc0d77262dfeca1794aca0d8105ad9e43d0c', 'width': 1074}, 'variants': {}}]}