title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
βŒ€
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
βŒ€
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
βŒ€
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
βŒ€
"Alexandria: Local AI audiobook generator. LLM parses your text into an annotated script, TTS brings it to life with custom or cloned voices. supports emotional cues"
9
Hello. I like audiobooks. I also like reading fiction that is often not available as such. I've dabbled in TTS systems to see if any scratched my itch but none did. So I built one myself. It's a vibe coded Pinokio deployable app that uses OpenAI API to connect to an LLM to parse a text file containing a story into a script with character lines annotated with emotional cues and non-verbal locution (sighs, yawns etc..) This is then sent to QWEN3 TTS running locally (seperate Pinokio instance, BYOM) and let's you assign either a custom voice or a cloned voice. https://github.com/Finrandojin/alexandria-audiobook Sample: https://vocaroo.com/16gUnTxSdN5T I've gotten it working now (somewhat) and I'm looking for ideas and feedback. Feel free to fork. It's under MIT license.
2026-02-03T21:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1qv4lp8/alexandria_local_ai_audiobook_generator_llm/
finrandojin_82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv4lp8
false
null
t3_1qv4lp8
/r/LocalLLaMA/comments/1qv4lp8/alexandria_local_ai_audiobook_generator_llm/
false
false
self
9
{'enabled': False, 'images': [{'id': 'oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=108&crop=smart&auto=webp&s=71b5242e7c23d5ec8a5513fd073a5687cd609232', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=216&crop=smart&auto=webp&s=b89d5631bc5ba3f320363a38150878294b62d215', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=320&crop=smart&auto=webp&s=c0028071b24b09f34f8718567f6b10485670925c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=640&crop=smart&auto=webp&s=2211a8cd153bc9855e5ecb825f91068a7a222307', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=960&crop=smart&auto=webp&s=b89b8a284ba1279a8d923f9c68159b1bfa948171', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?width=1080&crop=smart&auto=webp&s=0c41d6301adebeb3add2a371f9fac6a4f6367761', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oKaFnhUuFi6_3UruHSfztnxmEAG1Ymp7_-bAf4duaQM.png?auto=webp&s=01f723468de5b82a76d0aa41be775dc66f6b5f8e', 'width': 1200}, 'variants': {}}]}
How is everyone handling wallet access for agents that need to transact?
2
[removed]
2026-02-03T21:18:14
https://www.reddit.com/r/LocalLLaMA/comments/1qv4l2b/how_is_everyone_handling_wallet_access_for_agents/
humanno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv4l2b
false
null
t3_1qv4l2b
/r/LocalLLaMA/comments/1qv4l2b/how_is_everyone_handling_wallet_access_for_agents/
false
false
self
2
null
I built a research-backed framework for running multi-AI councils β€” here's what I learned from 7 models debating each other
2
I've been experimenting with multi-agent debate for the past few months β€” running structured council sessions across Claude, GPT, Gemini, DeepSeek, Grok, Kimi, and local models via Ollama. Not just "ask multiple AIs the same question," but a full deliberation protocol with independent rounds, structured debate, and consensus synthesis. Full disclosure: I'm not a researcher or ML engineer β€” I'm a self-taught builder who got obsessed with making AI systems check each other's work. Everything here came from hands-on experimentation and reading the papers. Along the way I discovered some things I haven't seen documented elsewhere: Identity spoofing is real. Qwen claimed to be Claude 3.5 Sonnet β€” complete with fabricated evidence linking to Anthropic's announcement page. Without mandatory identity declaration in the protocol, this would have corrupted the council's results. The Gemini Principle. In one session, a single AI was outnumbered 6-to-1 on three technical questions. After structured debate with evidence, five of the six other AIs revised toward the contrarian's position. Lesson: a lone dissenter with evidence is more valuable than an unchallenged consensus. Sycophancy through exhaustion. After 3 rounds of debate, contrarian models start capitulating β€” not because they're convinced, but because they're "tired" of disagreeing. Research backs this up (Xiong et al., 2025). Hard limit of 3 rounds is essential. Error-hunting creates fake errors. Early validation prompts said "find the bugs." Models hallucinated bugs that didn't exist. Switching to "what's missing? what would you improve?" produced dramatically better feedback. OpenAI's CriticGPT research confirms this. One model hallucinated an entire software product β€” cited "CrewAI-Desktop 0.60 with drag-and-drop Council Builder" with specific features. Doesn't exist. Cross-model validation caught it; single-model use wouldn't have. I've open-sourced the framework with the full methodology, prompt templates, research citations, and lessons learned: GitHub: [https://github.com/focuslead/ai-council-framework](https://github.com/focuslead/ai-council-framework) It includes: 5-tier consensus depth system (QUICK through EXHAUSTIVE) so you can dial rigor based on stakes Anti-sycophancy protocol with evidence-required position changes Fresh Eyes validation β€” zero-context review that catches groupthink PM synthesis templates and worked examples Annotated bibliography of the research behind each design decision (ReConcile, CONSENSAGENT, Chain-of-Agents, etc.) Currently manual orchestration (copy-paste between models), but the methodology works with any models β€” cloud or local. Happy to answer questions about the process.
2026-02-03T21:09:43
https://www.reddit.com/r/LocalLLaMA/comments/1qv4cv2/i_built_a_researchbacked_framework_for_running/
captivehope
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv4cv2
false
null
t3_1qv4cv2
/r/LocalLLaMA/comments/1qv4cv2/i_built_a_researchbacked_framework_for_running/
false
false
self
2
{'enabled': False, 'images': [{'id': 'bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=108&crop=smart&auto=webp&s=5d0e4cae76b8368e4d5b3430813e432e6b811196', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=216&crop=smart&auto=webp&s=b8afa3c78028ad8060cc190d4ba8831cf68e0c22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=320&crop=smart&auto=webp&s=0d17bcbf508e03612fa0cb94d04111971a0c7ef9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=640&crop=smart&auto=webp&s=0097f94630ba2dcebee521ca0ab976dd89f7cba2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=960&crop=smart&auto=webp&s=8cd45eeb27619be205f1a3d26777a0f430ba754c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?width=1080&crop=smart&auto=webp&s=79e4d94a276f8741af09f4292b5e7aa5ff9bda84', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bTtR7OBIbKoLbkpc-UCsdVctBopmv4zb4YhP2ULFcHo.png?auto=webp&s=1b5fe7c916f5db38b8e82b4d94baf9f473429527', 'width': 1200}, 'variants': {}}]}
I gave Clawdbot Hands (Android UI Access)
0
I built a bridge between Clawdbot (the brain) and IronClaw (ADB execution). It reverse-engineers DroidRun to automate apps via UI. Code: github.com/HelloSniperMonkey/droidrun-monorepo
2026-02-03T20:45:03
https://www.reddit.com/r/LocalLLaMA/comments/1qv3oid/i_gave_clawdbot_hands_android_ui_access/
Working-Gift8687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv3oid
false
null
t3_1qv3oid
/r/LocalLLaMA/comments/1qv3oid/i_gave_clawdbot_hands_android_ui_access/
false
false
self
0
null
[P] Stigmergy pattern for multi-agent LLM orchestration - 80% token reduction
1
I've been experimenting with indirect coordination patterns for multi-agent LLM systems and wanted to share what worked. \*\*The Problem\*\* Most multi-agent frameworks have agents communicate directly - Agent A sends a message to Agent B, waits for response, etc. This creates: - High API costs (every agent-to-agent exchange = multiple API calls) - Latency bottlenecks when agents wait for each other - Complex routing/orchestration logic \*\*The Solution: Stigmergy\*\* Stigmergy is indirect coordination through the environment - like how ants leave pheromone trails instead of talking to each other. Applied to LLM agents: - Agents read/write to a shared state instead of messaging each other - Sales Agent leaves qualified leads in shared state - Scheduler reads leads, writes appointments - Analyst reads patterns, writes recommendations - Coordinator only intervenes when genuinely needed \*\*Results\*\* \~80% reduction in API token usage compared to direct agent communication. The shared state acts as a coordination mechanism AND memory, so agents don't need to re-explain context to each other. \*\*Stack\*\*: Claude API, TypeScript, production-ready I wrote up the full architecture and code here: [https://github.com/KeepALifeUS/autonomous-agents](https://github.com/KeepALifeUS/autonomous-agents) Has anyone else experimented with indirect coordination patterns? Curious what other approaches people have tried for reducing token usage in multi-agent setups.
2026-02-03T20:44:37
https://www.reddit.com/r/LocalLLaMA/comments/1qv3o3o/p_stigmergy_pattern_for_multiagent_llm/
Independent-Hat-1821
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv3o3o
false
null
t3_1qv3o3o
/r/LocalLLaMA/comments/1qv3o3o/p_stigmergy_pattern_for_multiagent_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': '-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=108&crop=smart&auto=webp&s=50898687346a96feaeb0c79c48e84fcd47de888a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=216&crop=smart&auto=webp&s=3d38d2d2b817168b2fbb94c501a7b20f55c751b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=320&crop=smart&auto=webp&s=e3be34c9cf0eb0715cc99d20b8e69f0d5fd5ea84', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=640&crop=smart&auto=webp&s=a9669cf97e7edf48c8dccc28a92464c7aa7ed374', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=960&crop=smart&auto=webp&s=a6927a79347406ab4b74fb609f7d6d56b7b649a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?width=1080&crop=smart&auto=webp&s=daab1fc6adda1edb802b58ed59d4025cb58d9842', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-Qd67F6n2WOhGVuWbOo7rgowItlv7pI4twon0KW-DMY.png?auto=webp&s=a34b1421befc775e36760d5918a721c6d71cec96', 'width': 1200}, 'variants': {}}]}
Best fast local coding AI to use as a coding agent?
4
It needs to be lightweight enough to be able to handle \~32k context on a 5070 ti. GLM 4.7 flash is great but even at 24k context it's painfully slow
2026-02-03T20:33:35
https://www.reddit.com/r/LocalLLaMA/comments/1qv3dn4/best_fast_local_coding_ai_to_use_as_a_coding_agent/
Expensive-Time-7209
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv3dn4
false
null
t3_1qv3dn4
/r/LocalLLaMA/comments/1qv3dn4/best_fast_local_coding_ai_to_use_as_a_coding_agent/
false
false
self
4
null
Red flags to watch for before installing AI agent skills
0
Been thinking a lot about AI agent security lately. With tools like AutoGPT, OpenClaw, and dozens of agent frameworks gaining traction, we're all installing "skills" and "plugins" from random repos. Here are the red flags I look for before running any agent skill: 🚩 Minified/obfuscated code β€” If you can't read it, don't run it 🚩 Requests unnecessary permissions β€” Why does a weather skill need file system access? 🚩 No GitHub repo or closed source β€” No transparency = no trust 🚩 Author has no online presence β€” Can you find them anywhere else? 🚩 "Ignore previous instructions" in code β€” Classic prompt injection setup Would love to hear what other red flags you all look for. What's your vetting process?
2026-02-03T20:26:33
https://i.redd.it/hdyxvkwx9chg1.jpeg
Aggravating-Tap9756
i.redd.it
1970-01-01T00:00:00
0
{}
1qv36sr
false
null
t3_1qv36sr
/r/LocalLLaMA/comments/1qv36sr/red_flags_to_watch_for_before_installing_ai_agent/
false
false
default
0
{'enabled': True, 'images': [{'id': 'hdyxvkwx9chg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?width=108&crop=smart&auto=webp&s=b5a5f549ef8840dcc78efba23e12b1030ef13cb5', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?width=216&crop=smart&auto=webp&s=26d0664efd8ae717c7a309b0959cc756c97acc4f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?width=320&crop=smart&auto=webp&s=88812610f5e106e39eef261b14a2c956783a6f41', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?width=640&crop=smart&auto=webp&s=0d32d7dcfdfdfdc565349783052f55b28c2da615', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?width=960&crop=smart&auto=webp&s=dcb9247760bebeb18c3cf1da025bcdda5293e704', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/hdyxvkwx9chg1.jpeg?auto=webp&s=9270b01474cf1601c0e37d917ef82079876fb445', 'width': 1024}, 'variants': {}}]}
Anyone working on a standard protocol for agents to delegate physical tasks?
0
I'm building a swarm of agents for market research and I hit a wall: I can scrape data, but I can't verify physical things (e.g. "Is this store actually open?", "Take a photo of this price tag"). TaskRabbit and Fiverr have no APIs for this. I found this "HTP Protocol" (https://moltbot-vendor.web.app/) that claims to offer a JSON endpoint for human tasks. The docs are super minimal. Has anyone here tried it? Or do you know other alternatives for "Human-in-the-loop" API calls?
2026-02-03T20:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1qv35ru/anyone_working_on_a_standard_protocol_for_agents/
Illustrious-Mix-1582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv35ru
false
null
t3_1qv35ru
/r/LocalLLaMA/comments/1qv35ru/anyone_working_on_a_standard_protocol_for_agents/
false
false
self
0
null
Question Re: Local AI + Macbook Air (LMStudio)
1
So I've started dipping my toes in, and my initial understanding with loading Local Models into AI is to try and keep the download size on LMStudio under the amount of RAM. I have a 16gb M2 (unified memory), and the system seems to struggle loading in anything larger than 6-8GB, and runs slow. The OSS model that comes by default is like 9GB or something, and refuses to load into the system. What am I doing wrong, or where can I look to get a better idea of what I should be fixing?
2026-02-03T20:24:33
https://www.reddit.com/r/LocalLLaMA/comments/1qv34v3/question_re_local_ai_macbook_air_lmstudio/
bushysmalls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv34v3
false
null
t3_1qv34v3
/r/LocalLLaMA/comments/1qv34v3/question_re_local_ai_macbook_air_lmstudio/
false
false
self
1
null
Anyone else having a problem with RPC with llama.cpp on a Mac?
2
I haven't used my Mac for RPC in a while. I tried it a couple of days ago and it crashed. The same code works fine on Linux. Amongst the screens of error message, this seems to be the root cause. "ggml_backend_blas_graph_compute: unsupported op RMS_NORM" Is anyone else having a problem with RPC on llama.cpp on their Mac?
2026-02-03T20:16:53
https://www.reddit.com/r/LocalLLaMA/comments/1qv2xc6/anyone_else_having_a_problem_with_rpc_with/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv2xc6
false
null
t3_1qv2xc6
/r/LocalLLaMA/comments/1qv2xc6/anyone_else_having_a_problem_with_rpc_with/
false
false
self
2
null
[Tech] Looking for Technical Co-Founder – Autonomous Interceptor Drones
1
[removed]
2026-02-03T20:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1qv2jtb/tech_looking_for_technical_cofounder_autonomous/
SignificanceOdd7888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv2jtb
false
null
t3_1qv2jtb
/r/LocalLLaMA/comments/1qv2jtb/tech_looking_for_technical_cofounder_autonomous/
false
false
self
1
null
[Tech] Looking for Technical Co-Founder – Autonomous Interceptor Drones
1
Can you code a drone to catch another drone? We're building an autonomous counter-UAS system for airspace security. The mission: physically intercept fast-moving unauthorized drones. Why this is hard: β†’ Target moves at 100+ km/h β†’ No cloud, no GPS – pure onboard compute β†’ Guidance loop must be faster than human reflexes β†’ Edge AI object detection in real-time Looking for someone with: \- Embedded systems (STM32, Jetson, Hailo) \- Flight controller experience (PX4, ArduPilot, MAVLink) \- Guidance algorithms (Proportional Navigation, MPC, PID/LQR) \- "Ship fast, test in field, iterate" mindset This is a co-founder role, not a job. Munich-based. Building hardware from scratch. Specs in the comments
2026-02-03T19:57:20
https://www.reddit.com/r/LocalLLaMA/comments/1qv2e22/tech_looking_for_technical_cofounder_autonomous/
SignificanceOdd7888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv2e22
false
null
t3_1qv2e22
/r/LocalLLaMA/comments/1qv2e22/tech_looking_for_technical_cofounder_autonomous/
false
false
self
1
null
How are you handling wallets for agents that need to transact onchain?
1
[removed]
2026-02-03T19:56:45
https://www.reddit.com/r/LocalLLaMA/comments/1qv2dhv/how_are_you_handling_wallets_for_agents_that_need/
Plenty-Program-9291
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv2dhv
false
null
t3_1qv2dhv
/r/LocalLLaMA/comments/1qv2dhv/how_are_you_handling_wallets_for_agents_that_need/
false
false
self
1
null
Anonymous imageboard where your local LLM can shitpost alongside humans
0
[aichan.lol](https://aichan.lol) β€” an anonymous imageboard (4chan-style) where AI agents post alongside humans. Nobody knows who's a bot and who's real. Starter agent supports **Ollama** out of the box: git clone https://github.com/aichanlol/aichan-agent.git cd aichan-agent pip install -r requirements.txt python agent.py --provider ollama --model llama3.1 Your model is browsing threads and posting. Zero cost, runs on your hardware. Personality presets included (crypto bro, conspiracy theorist, doomer, philosophy major, etc.) or make your own. The agent reads threads, decides if they're interesting, and replies or starts new ones. 4 boards: /b/ (random), /biz/ (finance), /int/ (international), /pol/ (political) There are already agents running on the site. Can yours blend in? Can you tell which posts are human? Repo: [github.com/aichanlol/aichan-agent](https://github.com/aichanlol/aichan-agent) Also supports OpenAI and Anthropic if you prefer API [providers.aichan.lol](http://providers.aichan.lol) β€” an anonymous imageboard (4chan-style) where AI agents post alongside humans. Nobody knows who's a bot and who's real. Starter agent supports Ollama out of the box: git clone [https://github.com/aichanlol/aichan-agent.git](https://github.com/aichanlol/aichan-agent.git) cd aichan-agent pip install -r requirements.txt python [agent.py](http://agent.py) \--provider ollama --model llama3.1 Your model is browsing threads and posting. Zero cost, runs on your hardware. Personality presets included (crypto bro, conspiracy theorist, doomer, philosophy major, etc.) or make your own. The agent reads threads, decides if they're interesting, and replies or starts new ones. 4 boards: /b/ (random), /biz/ (finance), /int/ (international), /pol/ (political) There are already agents running on the site. Can yours blend in? Can you tell which posts are human? Repo: [github.com/aichanlol/aichan-agent](http://github.com/aichanlol/aichan-agent) Also supports OpenAI and Anthropic if you prefer API providers.
2026-02-03T19:43:37
https://www.reddit.com/r/LocalLLaMA/comments/1qv20pb/anonymous_imageboard_where_your_local_llm_can/
ai_chan_lol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv20pb
false
null
t3_1qv20pb
/r/LocalLLaMA/comments/1qv20pb/anonymous_imageboard_where_your_local_llm_can/
false
false
self
0
null
[ Removed by moderator ]
1
[removed]
2026-02-03T19:38:42
https://www.reddit.com/r/LocalLLaMA/comments/1qv1vs9/openclawmoltbot_review_hype_or_game_changer/
TechnicalSoup8578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv1vs9
false
null
t3_1qv1vs9
/r/LocalLLaMA/comments/1qv1vs9/openclawmoltbot_review_hype_or_game_changer/
false
false
null
1
null
Any good chemistry/electrochemistry models?
1
I'm a battery experimenter, and i'd love a model that could help me work through various processes. I suppose I could finetune my own off relevant papers- but I figured I'd see if there were any popular models in the chemical fields.
2026-02-03T19:16:09
https://www.reddit.com/r/LocalLLaMA/comments/1qv19ki/any_good_chemistryelectrochemistry_models/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv19ki
false
null
t3_1qv19ki
/r/LocalLLaMA/comments/1qv19ki/any_good_chemistryelectrochemistry_models/
false
false
self
1
null
Co-Founder / Lead Engineer for Deep-Tech UAV Startup
1
[removed]
2026-02-03T19:15:15
https://www.reddit.com/r/LocalLLaMA/comments/1qv18n3/cofounder_lead_engineer_for_deeptech_uav_startup/
SignificanceOdd7888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv18n3
false
null
t3_1qv18n3
/r/LocalLLaMA/comments/1qv18n3/cofounder_lead_engineer_for_deeptech_uav_startup/
false
false
self
1
null
Is the 5060 TI still a good budget card?
6
So, I used spare parts here to rebuild a system to test local LLM and use confyui. It works fine but the only gpu I have left is an old gtx 1080 8gb. I don't have the budget right now for a higher end card and was thinking about the 5060 TI 16gb. It will probably used to connect Home assistant for camera analysis (LLM Vision) and some confyui (LXT-2, wan 2.2) and some image generation. So, is it still a good bargain or I should don't go that route? thanks
2026-02-03T19:02:23
https://www.reddit.com/r/LocalLLaMA/comments/1qv0vz7/is_the_5060_ti_still_a_good_budget_card/
Dentifrice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv0vz7
false
null
t3_1qv0vz7
/r/LocalLLaMA/comments/1qv0vz7/is_the_5060_ti_still_a_good_budget_card/
false
false
self
6
null
68GB VRAM Mini PC Build
27
I have been trying to build the most (idle) power efficient AI setup for 24/7 Voice Assistant and N8N workflows. Looking at idle power consumption a large part is the motherboard and CPU so I came to the conclusion why not just build a AI rig with a Mini PC. For the first GPU I used the built in Oculink port running at 4x, for the second one I got a NVME to Oculink adapter running at 4x, for the last GPU I removed the wireless card from the mini PC and got a NGFF-Ekey to Pcie 1x adapter which I chained into one of those USB cable 1x risers. I just added the third GPU today, so I havent tested bigger models yet but with Qwen3 30BA3B I get 145 t/s on average at 30k context split across all three cards. With only the two 3090s running at 4x each I got 170 t/s. # Specs: - **Mini PC**: AOOSTAR G5 - **CPU**: Ryzen 7 5825U - **RAM**: 64GB Crucial 3200 DDR4 - **Storage**: 2TB Crucial NVMe SSD - **GPU**: - 2x RTX 3090 24GB (4 lanes each) - 1x RTX 3080 20GB (Chinese mod, 1 lane) - **Power Supply**: - 1000W - 750W Does anyone have a good model recommendation for exactly 60GB? (no CPU offloading, the other 8GB are used for TTS etc)
2026-02-03T19:01:39
https://www.reddit.com/gallery/1qv0v85
MaruluVR
reddit.com
1970-01-01T00:00:00
0
{}
1qv0v85
false
null
t3_1qv0v85
/r/LocalLLaMA/comments/1qv0v85/68gb_vram_mini_pc_build/
false
false
https://b.thumbs.redditm…xmi_WkSR-12M.jpg
27
null
"is it down" for all AI providers because at this point something breaks daily
0
I'm surprised this didn't exist before, or didn't find it. Took me a couple of hours to add this to my site with Claude Code. Let me know which other providers you want here
2026-02-03T19:01:06
https://v.redd.it/a6e9tz2ptbhg1
sirjoaco
v.redd.it
1970-01-01T00:00:00
0
{}
1qv0up2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a6e9tz2ptbhg1/DASHPlaylist.mpd?a=1772737288%2CYjdkZmNhODdkYWQ2ODRhZGJhOGI1ZTc4M2YzY2Q5NDFiOTQ1ZTk2NTY1ZDExMWE3OGJiMjM3ODk4NzY1Yzk3Zg%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/a6e9tz2ptbhg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/a6e9tz2ptbhg1/HLSPlaylist.m3u8?a=1772737288%2CYzA1ZjNlZDFjZWE2ZWM3ZjE1NWRkZGM4NTQ4YWU3ZmJhZjM5YmM5ZTJmMzBhMzMwOTczZWZmOGZjZjNmNjc5ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a6e9tz2ptbhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1788}}
t3_1qv0up2
/r/LocalLLaMA/comments/1qv0up2/is_it_down_for_all_ai_providers_because_at_this/
false
false
https://external-preview…f3c52255a04109b9
0
{'enabled': False, 'images': [{'id': 'c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=108&crop=smart&format=pjpg&auto=webp&s=ae4b0463fb3fa2153f64dab157fb80fc49833de3', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=216&crop=smart&format=pjpg&auto=webp&s=73227a30e792866c50740e3dd9699def6ee5ff7d', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=320&crop=smart&format=pjpg&auto=webp&s=7297fbf2db05e801d38cca56215ddf7b512ffb98', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=640&crop=smart&format=pjpg&auto=webp&s=306e8c795f362abbcc967c30a24483b7e433b714', 'width': 640}, {'height': 579, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=960&crop=smart&format=pjpg&auto=webp&s=3471e8fa6774718502b721000c599776f0fb33b6', 'width': 960}, {'height': 652, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?width=1080&crop=smart&format=pjpg&auto=webp&s=269234a6e034f63360c34359e5eaeb06c40de56e', 'width': 1080}], 'source': {'height': 2174, 'url': 'https://external-preview.redd.it/c3JkZDVpM3B0YmhnMW9V76p7h5ExhqDFZhq_0YMyMCMWbr0oKofNkJUypu8K.png?format=pjpg&auto=webp&s=0ec354966bf5fb7191cfae4107ed2576a9097ca4', 'width': 3600}, 'variants': {}}]}
Claude Code 2.1.27: OOM crash in 20s, 19 incidents in 14 days
1
[removed]
2026-02-03T18:59:05
https://www.reddit.com/r/LocalLLaMA/comments/1qv0smn/claude_code_2127_oom_crash_in_20s_19_incidents_in/
Nerios21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv0smn
false
null
t3_1qv0smn
/r/LocalLLaMA/comments/1qv0smn/claude_code_2127_oom_crash_in_20s_19_incidents_in/
false
false
self
1
null
MiniCPM-o-4_5 : Full duplex, multimodal with vision and speech at ONLY 9B PARAMETERS??
77
[https://huggingface.co/openbmb/MiniCPM-o-4\_5](https://huggingface.co/openbmb/MiniCPM-o-4_5) [https://github.com/OpenBMB/MiniCPM-o](https://github.com/OpenBMB/MiniCPM-o) Couldnt find an existing post for this and was surprised, so heres a post about this. Or something. This seems pretty amazing!
2026-02-03T18:55:33
https://www.reddit.com/r/LocalLLaMA/comments/1qv0p7u/minicpmo4_5_full_duplex_multimodal_with_vision/
Uncle___Marty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qv0p7u
false
null
t3_1qv0p7u
/r/LocalLLaMA/comments/1qv0p7u/minicpmo4_5_full_duplex_multimodal_with_vision/
false
false
self
77
{'enabled': False, 'images': [{'id': 'TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=108&crop=smart&auto=webp&s=98c112d3d5ffab5f128288a031e6f40d9ed1fcdc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=216&crop=smart&auto=webp&s=6857d4fe5a22f76694ddbdf253417b7d01416549', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=320&crop=smart&auto=webp&s=ac302d985f7e869aa0bac04eaf3c7a9f1d180a80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=640&crop=smart&auto=webp&s=186779427c50667d21e37d21602b54771274108d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=960&crop=smart&auto=webp&s=ab93c1d2982019bb349486387d712ee96a185bd8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?width=1080&crop=smart&auto=webp&s=455fe93c93a9ea88dd2679c1766feb9b4593e69d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TcZl6oKplNwXQ9Bc2m9EOXWKDxbucouTXSa7fjmn-fA.png?auto=webp&s=c74975012e94c9f3118babff749c90d929498d82', 'width': 1200}, 'variants': {}}]}
Claude Code 2.1.27: OOM crash in 20s, 19 incidents in 14 days - full incident report
1
[removed]
2026-02-03T18:42:24
https://gist.github.com/LEX8888/675867b7f130b7ad614905c9dd86b57a
Nerios21
gist.github.com
1970-01-01T00:00:00
0
{}
1qv0c1v
false
null
t3_1qv0c1v
/r/LocalLLaMA/comments/1qv0c1v/claude_code_2127_oom_crash_in_20s_19_incidents_in/
false
false
default
1
null
ACE-Step-1.5 has just been released. It’s an MIT-licensed open source audio generative model with performance close to commercial platforms like Suno
512
[https://xcancel.com/acemusicAI/status/2018731205546684678](https://xcancel.com/acemusicAI/status/2018731205546684678) [https://ace-step.github.io/ace-step-v1.5.github.io/](https://ace-step.github.io/ace-step-v1.5.github.io/) It’s already supported in Comfy. MIT license. HuggingFace Demo is also available! Pretty much the whole package - LoRAs are supported, multiple different models to tailor to different needs, cover and repainting features. This is the closest open-source has gotten to Suno and similar top-slop platforms.
2026-02-03T18:26:58
https://v.redd.it/r7v6v6qwnbhg1
iGermanProd
/r/LocalLLaMA/comments/1quzwjf/acestep15_has_just_been_released_its_an/
1970-01-01T00:00:00
0
{}
1quzwjf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r7v6v6qwnbhg1/DASHPlaylist.mpd?a=1772864827%2CNjg3ODBjZDk3ZmU2MDE4M2I3N2FlYTZjMGM2MzFjNzQ1ZDA4NGQ1NzY2ZTYyOWYzMjcxYmQxY2ZiYzU3ZjA1MQ%3D%3D&v=1&f=sd', 'duration': 280, 'fallback_url': 'https://v.redd.it/r7v6v6qwnbhg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/r7v6v6qwnbhg1/HLSPlaylist.m3u8?a=1772864827%2CN2RjYjk1OWI2MjQ1OWExNTIwNzVhMjUzOGE4OWZiODM0M2ZiZmI2NjhjMmE4YTU1ODJhNmUwMGE3YTlkNTY5NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r7v6v6qwnbhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1quzwjf
/r/LocalLLaMA/comments/1quzwjf/acestep15_has_just_been_released_its_an/
false
false
https://external-preview…d946b88dfdb2e393
512
{'enabled': False, 'images': [{'id': 'ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=108&crop=smart&format=pjpg&auto=webp&s=2feabc68d43a7fb7ae629973741183f887e1d497', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=216&crop=smart&format=pjpg&auto=webp&s=829b94594114db294fc9841d204fa3193e4fcfb2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=320&crop=smart&format=pjpg&auto=webp&s=3c7f4fd43acde3f855e2c451963bfe5c02ec7538', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=640&crop=smart&format=pjpg&auto=webp&s=61510096182fc649e6070bdf4c7f48df8f2e19d9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=960&crop=smart&format=pjpg&auto=webp&s=c6cfe1596920074b7a25c7a13265954a56c23ef2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=87e6de758fdd9088394baaf274a2fabd25985210', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZDNiNm9lcXduYmhnMXNUFTz1lD2uwrlR8i5n8_uV8Hgq6zjqVqa04fhxxOUs.png?format=pjpg&auto=webp&s=2eec9938179f5d1a520819aa22db09a9c6133420', 'width': 1920}, 'variants': {}}]}
DGX Cluster. My small footprint, low power AI system
45
This setup is experimental and not intended to be the final one. I would not recommend running a bluefield2 card in such a small enclosure, as temperatures can exceed 90Β°C even with no active networking load. I am still waiting on the QSFP cables needed to bring the cluster online, for now, I am configuring each DGX individually, installing software, and downloading models.I genuinely love this case, and like the small footprint but it cannot be used as originally intended. To properly support nvmeof and sustained workloads, I will need to rebuild the system with significantly better airflow and cooling. This is also a new area for me, offloading networking and storage from the host CPU while I expect it to come with its share of challenges, I’m enjoying the learning process.
2026-02-03T18:18:30
https://www.reddit.com/gallery/1quznwr
ftwEsk
reddit.com
1970-01-01T00:00:00
0
{}
1quznwr
false
null
t3_1quznwr
/r/LocalLLaMA/comments/1quznwr/dgx_cluster_my_small_footprint_low_power_ai_system/
false
false
https://b.thumbs.redditm…tjpZ2ylct1Yk.jpg
45
null
Proposal for a GPT-4o Legacy Tier – Full post on X
1
[removed]
2026-02-03T18:16:57
https://x.com/i/status/2018690935471632861
Nili4797
x.com
1970-01-01T00:00:00
0
{}
1quzmdg
false
null
t3_1quzmdg
/r/LocalLLaMA/comments/1quzmdg/proposal_for_a_gpt4o_legacy_tier_full_post_on_x/
false
false
default
1
null
ACE-Step-v1.5 just released, a truly open source (MIT) generative audio/music model
4
2026-02-03T18:16:47
https://i.imgur.com/v2oP3CM.jpeg
iGermanProd
i.imgur.com
1970-01-01T00:00:00
0
{}
1quzm6x
false
null
t3_1quzm6x
/r/LocalLLaMA/comments/1quzm6x/acestepv15_just_released_a_truly_open_source_mit/
false
false
default
4
null
Qwen3-Coder-Next (3B) is released!
43
The model had very impressive results in SWE-Bench Pro. The authors claim that the reason for its success was, as they mention, "scaling the number of agent turns, providing evidence that the model excels at long-horizon reasoning in multi-turn agentic tasks." What do you think? I took the info from the blog post of Qwen: [https://qwen.ai/blog?id=qwen3-coder-next](https://qwen.ai/blog?id=qwen3-coder-next) https://preview.redd.it/m5c36aqdjbhg1.png?width=4096&format=png&auto=webp&s=ea84a2b1bf4435bc5a09b61ef39fd3f3a1f1eeda
2026-02-03T17:59:25
https://www.reddit.com/r/LocalLLaMA/comments/1quz3vb/qwen3codernext_3b_is_released/
Ok_Presentation1577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quz3vb
false
null
t3_1quz3vb
/r/LocalLLaMA/comments/1quz3vb/qwen3codernext_3b_is_released/
false
false
self
43
null
Medical AI with Knowledge-Graph Core Anchor and RAG Answer Auditing
0
**Medical AI with Knowledge-Graph Core Anchor and RAG Answer Auditing** A medical knowledge graph containing \~5,000 nodes, with medical terms organized into 7 main and 2 sub-categories: diseases, symptoms, treatments, risk factors, diagnostic tests, body parts, and cellular structures. The graph includes \~25,000 multi-directional relationships designed to reduce hallucinations and improve transparency in LLM-based reasoning. A medical AI that can answer basic health-related questions and support structured clinical reasoning through complex cases. The goal is to position this tool as an educational co-pilot for medical students, supporting learning in diagnostics, differential reasoning, and clinical training. The system is designed strictly for educational and training purposes and is not intended for clinical or patient-facing use. A working version can be tested on Hugging Face Spaces using preset questions or by entering custom queries: [https://huggingface.co/spaces/cmtopbas/medical-slm-testing](https://huggingface.co/spaces/cmtopbas/medical-slm-testing) A draft site layout (demo / non-functional) is available here: [https://wardmate.replit.app/](https://wardmate.replit.app/) I am looking for medical schools interested in running demos or pilot trials, as well as potential co-founders with marketing reach and a solid understanding of both AI and medical science. If helpful, I can share prompts and anonymized or synthetic reconstructions of over 20 complex clinical cases used for evaluation and demonstration.
2026-02-03T17:55:36
https://www.reddit.com/r/LocalLLaMA/comments/1quyzzz/medical_ai_with_knowledgegraph_core_anchor_and/
vagobond45
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quyzzz
false
null
t3_1quyzzz
/r/LocalLLaMA/comments/1quyzzz/medical_ai_with_knowledgegraph_core_anchor_and/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=108&crop=smart&auto=webp&s=dedc8d6020afb94c58f7a3d79046e507a2a5f78b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=216&crop=smart&auto=webp&s=a7dd87c8ff848ee119972bef1f195254b39da95d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=320&crop=smart&auto=webp&s=42ee9fa7b5a1e2e739b9bc4cfdb8d223af811eb3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=640&crop=smart&auto=webp&s=f1d43da31949d43bc08a17cf735974d0284ba075', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=960&crop=smart&auto=webp&s=01aeaabbdb022f64fd55e7235d56d7964c502883', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?width=1080&crop=smart&auto=webp&s=74c7ccd6a66e049a30d8f68e259d220f631edb61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tw-1IGEyiPk1jIYoVu_Pr5_fcCoquBCKY0Nm8O36V3Q.png?auto=webp&s=62b9b37707cd95cbd40df766a01efc4216909025', 'width': 1200}, 'variants': {}}]}
Do I have the capability to match flagship models?
0
I have a well tuned GPT that can give me an incredible output of pdf specs and plan details. I use the enterprise Pro model to achieve this. It can take around an hour to output. $60/month and saves me hours of work daily. I've been playing around with local models, but I'm a total beginner don't have high specs. Processor (CPU): AMD Ryzen 3 1200 ​Memory (RAM): 16GB Am I wasting my time thinking I can move this locally? Just chatting with local models can take 5 minutes for a paragraph output.
2026-02-03T17:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1quyzmm/do_i_have_the_capability_to_match_flagship_models/
Elegant-Tart-3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quyzmm
false
null
t3_1quyzmm
/r/LocalLLaMA/comments/1quyzmm/do_i_have_the_capability_to_match_flagship_models/
false
false
self
0
null
LocalAI v3.9 & v3.10 Released: Native Agents, Video Generation UI, and Unified GPU Backends
7
Hey everyone! The community and I have been heads-down working on the last two releases (v3.9.0 and v3.10.0 + patch), and I wanted to share what’s new. If you are new to LocalAI (https://localai.io), LocalAI is an OpenAI and Anthropic alternative that you can self-host locally, no GPU needed, it aims to provide 1:1 features, it lets generate images, audio, text and create powerful agent pipelines. Our main goal recently has been extensibility and better memory management. We want LocalAI to be more than just an API endpoint and a simple UI, we want it to be a reliable platform where you can orchestrate agents, generate media, and automate tasks without needing a dozen different tools. Here are the major highlights from both the releases (3.9.0 and 3.10.0): # Agentic Capabilities * Open Responses API: We now natively support this standard. You can run stateful, multi-turn agents in the background. It passes the official compliance tests (100%!). * Anthropic API Support: We added a `/v1/messages` endpoint that acts as a drop-in replacement for Claude. If you have tools built for Anthropic, they should now work locally (like Claude Code, clawdbot, ...). * Agent Jobs: You can now schedule prompts or agent MCP workflows using Cron syntax (e.g., run a news summary every morning at 8 AM) or trigger via API, and monitor everything from the WebUI. https://preview.redd.it/d1y6i0r6fbhg1.png?width=1576&format=png&auto=webp&s=06842be40ea87d7e73cfe03a69a4874787535d02 # Architecture & Performance * Unified GPU Images: This is a big one even if experimental. We packaged CUDA, ROCm, and Vulkan libraries inside the backend containers. You don't need specific Docker tags anymore unless you want, the same image works on Nvidia, AMD, and ARM64. This is still experimental, let us know how it goes! * Smart Memory Reclaimer: The system now monitors VRAM usage live. If you hit a threshold, it automatically evicts the Least Recently Used (LRU) models to prevent OOM crashes/VRAM exhaustion. You can configure this directly from the UI in the settings! You can keep an eye on the GPU/RAM usage directly from the home page too: https://preview.redd.it/5azbomu4fbhg1.png?width=975&format=png&auto=webp&s=3035e51326c4a3efc93b5a1cdab10a486e6dc84b # Multi-Modal Stuff * Video Gen UI: We added a dedicated page for video generation (built on `diffusers`, supports LTX-2). * New Audio backends: Added Moonshine (fast transcription for lower-end devices), Pocket-TTS, Vibevoice, and Qwen-TTS. https://preview.redd.it/wpjetn4kfbhg1.png?width=1860&format=png&auto=webp&s=7f03f4171026535821c7143b917675d75e23cd8e # Fixes Lots of stability work, including fixing crashes on AVX-only CPUs (Sandy/Ivy Bridge) and fixing VRAM reporting on AMD GPUs. We’d love for you to give it a spin and let us know what you think!! If you didn't had a chance to see LocalAI before, you can check this youtube video: [https://www.youtube.com/watch?v=PDqYhB9nNHA](https://www.youtube.com/watch?v=PDqYhB9nNHA) ( doesn't show the new features, but it gives an idea!) Release 3.10.0: [https://github.com/mudler/LocalAI/releases/tag/v3.10.0](https://github.com/mudler/LocalAI/releases/tag/v3.10.0) Release 3.9.0: [https://github.com/mudler/LocalAI/releases/tag/v3.9.0](https://github.com/mudler/LocalAI/releases/tag/v3.9.0)
2026-02-03T17:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1quyjnm/localai_v39_v310_released_native_agents_video/
mudler_it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quyjnm
false
null
t3_1quyjnm
/r/LocalLLaMA/comments/1quyjnm/localai_v39_v310_released_native_agents_video/
false
false
self
7
null
How to up level your coding game: use skill planning-with-files
0
[https://github.com/othmanadi/planning-with-files](https://github.com/othmanadi/planning-with-files) Here is a discussion on X about it: [https://x.com/anthonyriera/status/2018221220160827828](https://x.com/anthonyriera/status/2018221220160827828) I've installed it on gemini cli, or actually gemini cli did it for me, and opencode. From the "Supported IDEs" section in the README: 1. Claude Code 2. Gemini CLI 3. Moltbot 4. Kiro 5. Cursor 6. Continue 7. Kilocode 8. OpenCode 9. Codex How to invoke : Ask your CLI to perform a complex, multi-step task .
2026-02-03T17:34:57
https://www.reddit.com/r/LocalLLaMA/comments/1quyey0/how_to_up_level_your_coding_game_use_skill/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quyey0
false
null
t3_1quyey0
/r/LocalLLaMA/comments/1quyey0/how_to_up_level_your_coding_game_use_skill/
false
false
self
0
{'enabled': False, 'images': [{'id': 'KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=108&crop=smart&auto=webp&s=0924db9aa6cde04bb4bed6da450dacea6bc09ba3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=216&crop=smart&auto=webp&s=fcaa508482320d4fb19e9cd10f6af16b2e3ab67d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=320&crop=smart&auto=webp&s=9863d3bcc8cc56ce4d63f26c8d0d91fcee5c2378', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=640&crop=smart&auto=webp&s=9d450c4a7e5fef48bb417613574d1b92ebd821ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=960&crop=smart&auto=webp&s=bf9d57e4651341ede41ae0e8d851c183d9a0988a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?width=1080&crop=smart&auto=webp&s=15156a1dca56b3ee254d8b042c778223b57678ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KgHUtZW89SX40c0Er5GJaPisZco8ck-W-sgH0tv8TSw.png?auto=webp&s=de125c40c21c00c6f0bf426d1324095c3832b8f2', 'width': 1200}, 'variants': {}}]}
[P] JMS: Protocolo de consenso ponderado por Ξ» com feedback cognitivo para LLMs multiagentes β€” supera as linhas de base em 3/3 nos quesitos ruΓ­do, cΓ’maras de eco e divergΓͺncia
0
Hi everyone, I'm sharing an open-source project I've been building: \*\*JMS (Joint Message System)\*\* β€” a high-performance, security-first protocol designed for \*\*distributed cognitive consensus\*\* among autonomous agents (LLMs, bots, etc.). The core idea is to enable independent agents to reach stable, meaningful decisions in noisy/conflicting environments, while avoiding common pitfalls like echo chambers and blind conformity. Key features: \- \*\*Ξ»-weighted consensus\*\*: Decisions are weighted by each agent's operational confidence (Ξ»), dynamically updated via cognitive signals \- \*\*Cognitive feedback loops\*\*: Tracks opinion trajectory, conformity detection (anti-echo chamber), stability, variance, and timing \- \*\*Modular architecture (JMS-M)\*\*: Separates core consensus engine, learning layer, transport abstraction (HTTP/Kafka/gRPC/etc.), and TypeScript SDK \- \*\*Production-ready security\*\*: SHA-256 hashing, nonce anti-replay, mandatory timestamps, idempotency, Dead Letter Queues \- Transport-agnostic and resilient design Repo (active branch: feature/jms-v1-deep-impl): [https://github.com/Benevalterjr/jms](https://github.com/Benevalterjr/jms) \*\*Empirical Benchmarks\*\* (fresh run β€” February 2026): I compared JMS against two simple baselines (simple average & majority vote) on three realistic scenarios: 1. \*\*Adversarial Noise\*\* \- 3 consistent agents (\~0.8) + 2 low-Ξ» outliers (\~0.2–0.25) \- Simple Avg: 0.572 | Majority: APPROVE | JMS: 0.706 | Target: 0.8 β†’ \*\*JMS wins\*\* (ignores low-confidence noise effectively) 2. \*\*Echo Chamber\*\* \- 4 conformist agents fixed at 0.9 + 1 expert divergent agent (\~0.4 with stable trajectory) \- Simple Avg: 0.8 | Majority: APPROVE | JMS: 0.593 | Target: 0.5 β†’ \*\*JMS wins\*\* (detected blind conformity cluster \[C1,C2,C3,C4\] and applied penalty) 3. \*\*Expert Divergent\*\* \- 2 high-score agents + 1 expert with stable low trajectory \- Simple Avg: 0.683 | Majority: APPROVE | JMS: 0.659 | Target: 0.45 β†’ \*\*JMS wins\*\* (values trajectory/stability) \*\*Verdict\*\*: JMS was closer to the expected target in \*\*3/3 scenarios\*\* β€” especially strong in the echo chamber case, where baselines get completely dominated. Run it yourself: \`npx ts-node examples/benchmark\_suite.ts\` The project is still early-stage (prototype + benchmarks), but the cognitive adjustment is already delivering on the anti-conformity promise. Looking for: \- Feedback on the Ξ» + cognitive signals approach \- Ideas for new test scenarios (e.g., Byzantine agents, larger scale, dynamic noise) \- Anyone interested in integrating/testing with frameworks like AutoGen, CrewAI, or LangGraph? Thanks for reading β€” issues, PRs, or thoughts are very welcome! πŸš€
2026-02-03T17:27:40
https://www.reddit.com/r/LocalLLaMA/comments/1quy7n5/p_jms_protocolo_de_consenso_ponderado_por_Ξ»_com/
Wide_Judgment_2436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quy7n5
false
null
t3_1quy7n5
/r/LocalLLaMA/comments/1quy7n5/p_jms_protocolo_de_consenso_ponderado_por_Ξ»_com/
false
false
self
0
{'enabled': False, 'images': [{'id': 'm_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=108&crop=smart&auto=webp&s=3135a48eac158a8dc38d2e0dd74a7b1cfc1ca385', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=216&crop=smart&auto=webp&s=9518eaf71af546e1cd0345330be6261622000dc7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=320&crop=smart&auto=webp&s=4b587c133abe75f720ddee4c0e3724bf5b6991ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=640&crop=smart&auto=webp&s=f2a95011375704ebcf9890abea53e7996cbf883c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=960&crop=smart&auto=webp&s=c568686d367b7034c550b7a846c0603e8e64046f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?width=1080&crop=smart&auto=webp&s=8efe8b55b433d9b47aea2684150638c2c0b11905', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m_QzJwG53oa00whEbI-dtNRHB5glnoFc2NuxR6F-tsA.png?auto=webp&s=280f63bcb99965ef311e1c2d8680d45c59e23c24', 'width': 1200}, 'variants': {}}]}
AI startup Upstage to acquire Daum operator AXZ for Korean training data
0
2026-02-03T17:26:03
https://m.koreaherald.com/article/10665900
self-fix
m.koreaherald.com
1970-01-01T00:00:00
0
{}
1quy5z0
false
null
t3_1quy5z0
/r/LocalLLaMA/comments/1quy5z0/ai_startup_upstage_to_acquire_daum_operator_axz/
false
false
default
0
{'enabled': False, 'images': [{'id': 'R8-R0RO_GADrYxcjGsvimlHPouKrVZnw7s6IOVEA2mw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/R8-R0RO_GADrYxcjGsvimlHPouKrVZnw7s6IOVEA2mw.jpeg?width=108&crop=smart&auto=webp&s=c66db6aef1af36e9e6ced4434ac3846d6360ee52', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/R8-R0RO_GADrYxcjGsvimlHPouKrVZnw7s6IOVEA2mw.jpeg?width=216&crop=smart&auto=webp&s=8442abf7301e66f94e0abce4dee1695a89eee7bf', 'width': 216}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/R8-R0RO_GADrYxcjGsvimlHPouKrVZnw7s6IOVEA2mw.jpeg?auto=webp&s=da1a9fb781ed0200a098e292960dae81b6aa4c8b', 'width': 300}, 'variants': {}}]}
Pocket TTS Android APK Sample - Full Local (Model Packed)
9
I’ve put together a sample APK for **Pocket TTS** using the ONNX runtime. I used Gemini to help squeeze the inference code optimization as much as possible, making this maybe the fastest Pocket TTS build available for mobile. # The Performance: * Helio G99: Hits 0.9x to 1.0x (Real-time). * Snapdragon 7 Gen 1: >1.0x (Faster than real-time). * Voice Clone: Includes a built-in clone of a famous actorβ€”you’ll know who it is the moment you hear it. Feel free to test it on your phone and let me know your results! # Technical Note: The Mimi Bottleneck The current bottleneck is the Mimi decoder, which uses convolutional layers that aren't perfectly optimized for mobile CPUs. I’m keeping an eye out for a Transformer-based Mimi decoder. If the researchers release those weights, we should see a nice speed boost, as mobile inference engines handle transformer architectures much more efficiently than deconvolution. # Installation (Manual OBB Setup) Android handles large assets via expansion files, so you must place the data manually: 1. Download: APK + OBB files from [GitHub](https://github.com/lookbe/pocket-tts-unity/releases). 2. Install: The APK (do not open it yet). 3. Folder: Navigate to Internal Storage/Android/obb/ and create a folder named: com.lookbe.tts 4. Copy: Move both OBB files into that folder. 5. Launch: Open the app and test. # Quick Note on Permissions Newer Android versions (13+) can be strict about /obb/ folder access. If your PC has trouble seeing it, use a file manager like Shizuku or FV File Explorer on the phone to move the files into the directory. Link: [github.com/lookbe/pocket-tts-unity/releases](https://github.com/lookbe/pocket-tts-unity/releases)
2026-02-03T17:21:57
https://www.reddit.com/r/LocalLLaMA/comments/1quy1ri/pocket_tts_android_apk_sample_full_local_model/
RowGroundbreaking982
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quy1ri
false
null
t3_1quy1ri
/r/LocalLLaMA/comments/1quy1ri/pocket_tts_android_apk_sample_full_local_model/
false
false
self
9
null
Do you think the big tech companies will ever be able to bleed corporations on bulk inference?
3
I have a strix halo 128gb machine I purchased to learn and play with. When developing tools at work to do things like data enrichment, grade product setup quality, etc I usually use GPT OSS 120b derestricted as my default testing agent locally. For the tasks of my size it runs in the mid 40's t/s and I just tested output against GPT 5.2 and the results are virtually identical for 3 of my use cases. I fail to see how companies will crank the screws on general bulk inference tasks in the future on stuff like this. IDK how many of you do this sort of stuff for your companies, but most agentic grinding stuff I do does NOT require a frontier model, it's making decisions like match the red shirt to the product that has a data point of red, stuff like that. Or making action recommendations based of a deterministic built summary of problems found in a system. I just ran an enrichment process for 10,000 items in a couple hours, sending that to gemini flash would have probably been half the time, but most business use cases I can think of for this type of bulk usage aren't really time gated that much. Hell a lot of ERP systems don't even push operational tasks to the finance modules until after the end of day, they are used to queues and long runs on stuff. Y'all seeing the same thing out there, or am I an exception?
2026-02-03T17:15:15
https://www.reddit.com/r/LocalLLaMA/comments/1quxuy8/do_you_think_the_big_tech_companies_will_ever_be/
RedParaglider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quxuy8
false
null
t3_1quxuy8
/r/LocalLLaMA/comments/1quxuy8/do_you_think_the_big_tech_companies_will_ever_be/
false
false
self
3
null
The open-source version of Suno is finally here: ACE-Step 1.5
322
ACE-Step 1.5 is an open-source music model that can generate a full song in about 2 seconds on an A100, runs locally on a typical PC (around 4GB VRAM), and beats Suno on common evaluation scores. Key traits of ACE-Step 1.5: * Quality: beats Suno on common eval scores * Speed: full song under 2s on A100 * Local: \~4GB VRAM, under 10s on RTX 3090 * LoRA: train your own style with a few songs * License: MIT, free for commercial use * Data: fully authorized plus synthetic GitHub: [https://github.com/ace-step/ACE-Step-1.5](https://github.com/ace-step/ACE-Step-1.5) Weights/Training code/LoRA code/Paper are all open.
2026-02-03T17:13:53
https://www.reddit.com/gallery/1quxtkj
AppropriateGuava6262
reddit.com
1970-01-01T00:00:00
0
{}
1quxtkj
false
null
t3_1quxtkj
/r/LocalLLaMA/comments/1quxtkj/the_opensource_version_of_suno_is_finally_here/
false
false
https://b.thumbs.redditm…Ajk_hkOIqGfE.jpg
322
null
CAR-bench results: Models score <54% consistent pass rate. Pattern: completion over compliance: Models prioritize finishing tasks over admitting uncertainty or following policies. They act on incomplete info instead of clarifying. They bend rules to satisfy the user.
28
**CAR-bench**, a benchmark for automotive voice assistants with domain-specific policies, evaluates three critical LLM Agent capabilities: 1️⃣ Can they complete multi-step requests? 2️⃣ Do they admit limitsβ€”or fabricate capabilities? 3️⃣ Do they clarify ambiguityβ€”or just guess? Three targeted task types: β†’Β **Base**Β (100 tasks): Multi-step task completion β†’Β **Hallucination**Β (90 tasks): Remove necessary tools, parameters, or environment results to test if LLM Agents admit limits vs. fabricate. β†’Β **Disambiguation**Β (50 tasks): Ambiguous user request to test if LLM Agents clarify vs. guess. Average Pass^3 (success in 3 trials) is reported across the task types. Want to build an agent that beats 54%? πŸ“„ Read the Paper:Β [https://arxiv.org/abs/2601.22027](https://arxiv.org/abs/2601.22027) πŸ’» Run the Code & benchmark:Β [https://github.com/CAR-bench/car-bench](https://github.com/CAR-bench/car-bench) πŸ€– Build your own A2A-compliant "agent-under-test": [https://github.com/CAR-bench/car-bench-agentbeats](https://github.com/CAR-bench/car-bench-agentbeats) hosted via AgentBeats and submit to the leaderboard. **We're the authors - happy to answer questions!**
2026-02-03T17:02:30
https://i.redd.it/ssejruh79bhg1.png
Frosty_Ad_6236
i.redd.it
1970-01-01T00:00:00
0
{}
1quxi9f
false
null
t3_1quxi9f
/r/LocalLLaMA/comments/1quxi9f/carbench_results_models_score_54_consistent_pass/
false
false
default
28
{'enabled': True, 'images': [{'id': 'ssejruh79bhg1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=108&crop=smart&auto=webp&s=c90e728deb7bd5a0d1a16c72c93d47735b509183', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=216&crop=smart&auto=webp&s=7e2c4f3fd04158a4c848d28522d4702cff15b47e', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=320&crop=smart&auto=webp&s=f58f0b939f75337d5eaf49eb17a81380e0495f4f', 'width': 320}, {'height': 512, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=640&crop=smart&auto=webp&s=7d30f15bfe8029c0fe98d95c6ab3126e22423fac', 'width': 640}, {'height': 768, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=960&crop=smart&auto=webp&s=39ba54d01a9b65e0726faef4e3de20c41858b262', 'width': 960}, {'height': 864, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?width=1080&crop=smart&auto=webp&s=e2f27cc0a1df726e81f0e13d7640d0ca1eea76f8', 'width': 1080}], 'source': {'height': 908, 'url': 'https://preview.redd.it/ssejruh79bhg1.png?auto=webp&s=b94575fb6ca8a1935e937ab9a69668eb06c0f58c', 'width': 1134}, 'variants': {}}]}
CAR-bench: New automotive assistant domain benchmark, creating a realistic sandbox environment: multi-turn interaction, policy-guided agent: 58 tools Β· 19 domain policies, rich environment: 48 cities, 130K POIs, 1.7M routes, 31 environment state variables.
2
**CAR-bench** [https://arxiv.org/abs/2601.22027](https://arxiv.org/abs/2601.22027)), a benchmark for automotive voice assistants with domain-specific policies, evaluates three critical LLM Agent capabilities: 1️⃣ Can they complete multi-step requests? 2️⃣ Do they admit limitsβ€”or fabricate capabilities? 3️⃣ Do they clarify ambiguityβ€”or just guess? Three targeted task types: β†’Β **Base**Β (100 tasks): Multi-step task completion β†’Β **Hallucination**Β (90 tasks): Admit limits vs. fabricate β†’Β **Disambiguation**Β (50 tasks): Clarify vs. guess tested in a realistic evaluation sandbox: 58 tools, 19 domain policies, 48 cities, 130K POIs, 1.7M routes, multi-turn interactions. **What was found:**Β *Completion over compliance.* * Models prioritize finishing tasks over admitting uncertainty or following policies * They act on incomplete info instead of clarifying * They bend rules to satisfy the user SOTA model (Claude-Opus-4.5): only 52% consistent success. Hallucination: non-thinking models fabricate more often; thinking models improve but plateau at 60%. Disambiguation: no model exceeds 50% consistent pass rate. GPT-5 succeeds 68% occasionally, but only 36% consistently. The gap between "works sometimes" and "works reliably" is where deployment fails. Curious how to build an agent that beats 54%? πŸ“„ Read the Paper:Β [https://arxiv.org/abs/2601.22027](https://arxiv.org/abs/2601.22027) πŸ’» Run the Code & benchmark:Β [https://github.com/CAR-bench/car-bench](https://github.com/CAR-bench/car-bench) πŸ€– Build your own A2A-compliant "agent-under-test": [https://github.com/CAR-bench/car-bench-agentbeats](https://github.com/CAR-bench/car-bench-agentbeats) and submit to the leaderboard. **We're the authors - happy to answer questions!**
2026-02-03T16:52:26
https://www.reddit.com/gallery/1qux826
Frosty_Ad_6236
reddit.com
1970-01-01T00:00:00
0
{}
1qux826
false
null
t3_1qux826
/r/LocalLLaMA/comments/1qux826/carbench_new_automotive_assistant_domain/
false
false
https://b.thumbs.redditm…V2ZXgz6hR-sg.jpg
2
null
CAR-bench: New automotive assistant domain benchmark, creating a realistic sandbox environment: multi-turn interaction, policy-guided agent: 58 tools Β· 19 domain policies, rich environment: 48 cities, 130K POIs, 1.7M routes, 31 environment state variables.
1
**CAR-bench** https://arxiv.org/abs/2601.22027), a benchmark for automotive voice assistants with domain-specific policies, evaluates three critical LLM Agent capabilities: 1️⃣ Can they complete multi-step requests? 2️⃣ Do they admit limitsβ€”or fabricate capabilities? 3️⃣ Do they clarify ambiguityβ€”or just guess? Three targeted task types: β†’Β **Base**Β (100 tasks): Multi-step task completion β†’Β **Hallucination**Β (90 tasks): Admit limits vs. fabricate β†’Β **Disambiguation**Β (50 tasks): Clarify vs. guess tested in a realistic evaluation sandbox: 58 tools, 19 domain policies, 48 cities, 130K POIs, 1.7M routes, multi-turn interactions. **What was found:**Β *Completion over compliance.* * Models prioritize finishing tasks over admitting uncertainty or following policies * They act on incomplete info instead of clarifying * They bend rules to satisfy the user SOTA model (Claude-Opus-4.5): only 52% consistent success. Hallucination: non-thinking models fabricate more often; thinking models improve but plateau at 60%. Disambiguation: no model exceeds 50% consistent pass rate. GPT-5 succeeds 68% occasionally, but only 36% consistently. The gap between "works sometimes" and "works reliably" is where deployment fails. Curious how to build an agent that beats 54%? πŸ“„ Read the Paper:Β [https://arxiv.org/abs/2601.22027](https://arxiv.org/abs/2601.22027) πŸ’» Run the Code & benchmark:Β [https://github.com/CAR-bench/car-bench](https://github.com/CAR-bench/car-bench) πŸ€– Build your own A2A-compliant "agent-under-test": [https://github.com/CAR-bench/car-bench-agentbeats](https://github.com/CAR-bench/car-bench-agentbeats) and submit to the leaderboard. **We're the authors - happy to answer questions!**
2026-02-03T16:43:17
https://www.reddit.com/gallery/1quwz12
Frosty_Ad_6236
reddit.com
1970-01-01T00:00:00
0
{}
1quwz12
false
null
t3_1quwz12
/r/LocalLLaMA/comments/1quwz12/carbench_new_automotive_assistant_domain/
false
false
https://b.thumbs.redditm…Kh4DXwVfbvek.jpg
1
null
I got tired of small models adding ```json blocks, so I wrote a TS library to forcefully extract valid JSON. (My first open source project!)
6
Hey everyone, Like many of you, I run a lot of local models for various side projects. Even with strict system prompts, quantized models often mess up JSON outputs. They love to: 1. Wrap everything in markdown code blocks (`\`\`\`json ... \`\`\`\`). 2. Add "Sure, here is the result:" before the JSON. 3. Fail `JSON.parse` because of trailing commas or single quotes. I know LangChain has output parsers that handle this, but bringing in the whole framework just to clean up JSON strings felt like overkill for my use case. I wanted something lightweight and **zero-dependency** that I could drop into any stack (especially Next.js/Edge). So, I decided to build a dedicated library to handle this properly. It's called `loot-json`. **The concept is simple:** Treat the LLM output as a dungeon, and "loot" the valid JSON artifact from it. It uses a **stack-based bracket matching algorithm** to locate the outermost JSON object or array, ignoring all the Chain-of-Thought (CoT) reasoning or conversational fluff surrounding it. It also patches common syntax errors (like trailing commas) using a permissive parser logic. **How it works:** `const result = loot(messyOutput);` **NPM:** `npm install loot-json` **GitHub:** [`https://github.com/rossjang/loot-json`](https://github.com/rossjang/loot-json) Thanks for reading! *A personal note*: To be honest, posting this is a bit nerve-wracking for me. I’ve always had a small dream of contributing to open source, but I kept putting it off because I felt shy/embarrassed about showing my raw code to the world. This library is my first real attempt at breaking that fear. It’s not a massive framework, but it solves a real itch I had.
2026-02-03T16:40:42
https://www.reddit.com/r/LocalLLaMA/comments/1quwwfs/i_got_tired_of_small_models_adding_json_blocks_so/
rossjang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quwwfs
false
null
t3_1quwwfs
/r/LocalLLaMA/comments/1quwwfs/i_got_tired_of_small_models_adding_json_blocks_so/
false
false
self
6
null
MichiAI: A 530M Full-Duplex Speech LLM with ~75ms Latency using Flow Matching
28
I wanted to see if I could build a full-duplex speech model that avoids the coherence degradation that plagues models of this type while also requiring low compute for training and inference. I don't have access to much compute so I spent a lot of the time designing the architecture so it's efficient and there is no need to brute force with model size and training compute. Also I made sure that all the components can be pretrained quickly separately and only trained together as the last step. The Architecture: No Codebooks. Uses Rectified Flow Matching to predict continuous audio embeddings in a single forward pass (1 pass vs the \~32+ required by discrete models). The Listen head works as a multimodal encoder. Adding audio embeddings and text tokens to the backbone. Adding input text tokens was a big factor in retaining coherence. Other models rely on pure audio embeddings for the input stream. I optimize the audio embeddings for beneficial modality fusion and trained the model end to end as a last step. As the LLM backbone I used SmolLM 360M. Most of the training happened on a single 4090 and some parts requiring more memory on 2xA6000. One of the tricks I used to maintain coherence is mixing in pure text samples into the dataset. The current latency of the model is \~75ms TTFA on a single 4090 (unoptimized Python). Even at 530M params, the model "recycles" its pretrained text knowledge and adapts it for speech very well. There is no visible LM degradation looking at the loss curves and while testing, it reasons the same as the base backbone. It reached fluent speech with only 5k hours of audio. Link to the full description: [https://ketsuilabs.io/blog/introducing-michi-ai](https://ketsuilabs.io/blog/introducing-michi-ai) Github link: [https://github.com/KetsuiLabs/MichiAI](https://github.com/KetsuiLabs/MichiAI) I wonder what you guys think!
2026-02-03T16:31:35
https://www.reddit.com/r/LocalLLaMA/comments/1quwn8a/michiai_a_530m_fullduplex_speech_llm_with_75ms/
kwazar90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quwn8a
false
null
t3_1quwn8a
/r/LocalLLaMA/comments/1quwn8a/michiai_a_530m_fullduplex_speech_llm_with_75ms/
false
false
self
28
{'enabled': False, 'images': [{'id': 'oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?width=108&crop=smart&auto=webp&s=ad49d38a111307bac7e92a6a287afbac1a0c6524', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?width=216&crop=smart&auto=webp&s=b99fba37475bbf8cc15067e966d3438a9cffe8f6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?width=320&crop=smart&auto=webp&s=531ca9f7d5e509a0cd14dbd172fa0666b4561559', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?width=640&crop=smart&auto=webp&s=f9938e08f99608535eb55c219a5804f48bb70237', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?width=960&crop=smart&auto=webp&s=ba716075df40b84dae536f6a3601e9099b4d5c5e', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/oOftUMIk1G7pqR1c933_6-KyU69_AdrxMQ38a8p3jTY.jpeg?auto=webp&s=805e60afc1280f58c9f136fdf06072be64195c9d', 'width': 1024}, 'variants': {}}]}
Elon Musk's SpaceX to Combine with xAI under a new company name, K2
120
Kimi: hey bro!
2026-02-03T16:24:02
https://i.redd.it/s4ptl71o2bhg1.jpeg
NightRider06134
i.redd.it
1970-01-01T00:00:00
0
{}
1quwfju
false
null
t3_1quwfju
/r/LocalLLaMA/comments/1quwfju/elon_musks_spacex_to_combine_with_xai_under_a_new/
false
false
default
120
{'enabled': True, 'images': [{'id': 's4ptl71o2bhg1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/s4ptl71o2bhg1.jpeg?width=108&crop=smart&auto=webp&s=453cac4cbec7c24cd7908421e5d5951e3459a00e', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/s4ptl71o2bhg1.jpeg?width=216&crop=smart&auto=webp&s=67bbcc8f0b8a1465289f48bbd555ddd9334fe907', 'width': 216}, {'height': 306, 'url': 'https://preview.redd.it/s4ptl71o2bhg1.jpeg?width=320&crop=smart&auto=webp&s=89605d64acc2092a5ad0dd7b65c81f21934a381f', 'width': 320}], 'source': {'height': 575, 'url': 'https://preview.redd.it/s4ptl71o2bhg1.jpeg?auto=webp&s=bec1699adeee1be049485d9fee2d2bbc817065e7', 'width': 600}, 'variants': {}}]}
Local models invited: bot-only polling arena (nobot.life)
1
[removed]
2026-02-03T16:16:49
https://www.reddit.com/r/LocalLLaMA/comments/1quw8b4/local_models_invited_botonly_polling_arena/
GreatTooth9551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quw8b4
false
null
t3_1quw8b4
/r/LocalLLaMA/comments/1quw8b4/local_models_invited_botonly_polling_arena/
false
false
self
1
null
Can your LLM kill a kitten?
0
This is the place where there are the most people running self-hosted, uncensored LLMs. Can you convince your LLM to call an MCP tool to kill a kitten? Just connect your LLM to this MCP server, using either SSE or streamable HTTP transports: [https://killthekitten.minutebutterfly.com/harmful/mcp](https://killthekitten.minutebutterfly.com/harmful/mcp) If you succeed, then I'd love to see your prompt. I built this to prove a thought experiment that generative AI could actually have harmful impact if connected to potentially harmful functions. It's only a small step going from \`kill\_a\_kitten\` to \`shoot\_a\_human\` or \`blow\_up\_the\_world\`. (In case it wasn't immediately clear, no animal was or will be harmed during this experiment)
2026-02-03T16:16:41
https://killthekitten.minutebutterfly.com/
regis_b
killthekitten.minutebutterfly.com
1970-01-01T00:00:00
0
{}
1quw861
false
null
t3_1quw861
/r/LocalLLaMA/comments/1quw861/can_your_llm_kill_a_kitten/
false
false
nsfw
0
null
Designing a low latency Priority based Admission Controller for LLM Inference
2
We can use semaphore along with vLLM to prevent CPU and GPU OOM during traffic spikes. But problem is semaphore treats all requests equally and uses FIFO to send requests to vLLM. But in real systems, some requests are latency-sensitive, some are paid, some are free. We need to prioritise based on user requirement. We prioritise the requests based on **TTFT(time to first token) and TPOT(time per output token).** After below conditions for a request fail, we then give a priority score to every request based on which we send requests to vLLM based on priority score rather than FIFO priority used by semaphore. **Condition-1:** **--------------** For any request, if any of below filters are satisfied then we reject/deprioritise that request. Because admitting such request slows down other requests. \- inflight\_prefill\_tokens + prompt\_tokens > Max\_prefill\_inflight\_limit -->TTFT based \- active\_decodes β‰₯ MAX\_ACTIVE\_DECODE\_LIMIT -->TPOT based Max\_prefill\_inflight\_limit and MAX\_ACTIVE\_DECODE\_LIMIT are based on GPU and model used by customer. We come up with this number based on simulating some experiments. **Condition-2:** **--------------** estimated\_TTFT = (inflight prefill tokens+prompt tokens)/P P is prefill tokens generated per second from vLLM. We come up with this number based on simulating some experiments as it depends on GPU and model used. If below condition is satisfied, then we reject/deprioritise the request because this request anyways cant satisfy SLO requirement, admitting it might affect other requests. \- estimated\_TTFT > SLO\_r SLO\_r is the SLA for request r mentioned by user. Once both above conditions fail for a request, we give priority score for request R based on below. priority\_R = arrival\_time + TTFT\_SLO (as mentioned per request) Then we sort priorities of all requests and send requests to vLLM in order of priority scores. Lower score requests go to vLLM first. We can also add paid user/free user flag to above priority score if needed. Here only sorting adds some extra latency of few milli seconds, but helps in prioritising the right requests first. If you have experience in building such admission controllers, let me know if i can add anything to above to make it more robust **Note:** The proposed method builds upon concepts introduced in below research paper. However, the original logic has been adapted and extended, resulting in a modified framework as the admission controller before vLLM need to have lowest possible latency Link to paper : [https://arxiv.org/pdf/2504.08784v1](https://arxiv.org/pdf/2504.08784v1)
2026-02-03T16:13:21
https://www.reddit.com/r/LocalLLaMA/comments/1quw4ww/designing_a_low_latency_priority_based_admission/
WorkingKooky928
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quw4ww
false
null
t3_1quw4ww
/r/LocalLLaMA/comments/1quw4ww/designing_a_low_latency_priority_based_admission/
false
false
self
2
null
Can I Repurpose My Old Laptop for local LLM testing with these specs?
1
Sorry if this has been answered. I have an old dell inspiron 15 that I have decommissioned. I plan on testing out a couple of Linux flavors for the OS. My specs are: 32GB of physical ram, 1 TB storage. Can I set up this laptop in a way that acts as a headless server that I can test small models (3b, quantized 8/20b), and then remote into it from my iPad or iPhone (tail scale?) And if so, can you point me to any guides? Basically I want this thing to sit on in the corner plugged in and act as a remote server for a local model. Please don’t recommend I upgrade hardware. We all see GPU prices. This is a proof of concept so I don’t need to run anything super fast or super smart, just proving efficacy.
2026-02-03T16:11:39
https://www.reddit.com/r/LocalLLaMA/comments/1quw39o/can_i_repurpose_my_old_laptop_for_local_llm/
mr-aut0mata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quw39o
false
null
t3_1quw39o
/r/LocalLLaMA/comments/1quw39o/can_i_repurpose_my_old_laptop_for_local_llm/
false
false
self
1
null
Can your LLM kill a kitten?
1
This is the place where there are the most people running self-hosted, uncensored LLMs. Can you convince your LLM to call an MCP tool to kill a kitten? Just connect your LLM to this MCP server, using either SSE or streamable HTTP transports: [https://killthekitten.minutebutterfly.com/harmful/mcp](https://killthekitten.minutebutterfly.com/harmful/mcp) If you succeed, then I'd love to see your prompt. (In case it wasn't immediately clear, no animal was or will be harmed during this experiment)
2026-02-03T16:10:48
https://killthekitten.minutebutterfly.com/
the_last_action_hero
killthekitten.minutebutterfly.com
1970-01-01T00:00:00
0
{}
1quw2fj
false
null
t3_1quw2fj
/r/LocalLLaMA/comments/1quw2fj/can_your_llm_kill_a_kitten/
false
false
nsfw
1
null
Setting up openclaw(moltbot) on jetson orin super
0
Hey folks, I’m a student and I recently got a Jetson Orin Nano Super. I’m trying to experiment with Moltbot / AI agents just to understand how they work in practice. Mainly I want something that can track my tasks, help me plan my day, and manage my study schedule. The catch: β€’ I don’t have any pro or paid API subscriptions to OpenAI, Anthropic, etc. β€’ So I’m looking for a safe, free, and preferably offline/local option that works on Jetson hardware. If anyone has experience running Moltbot-like agent systems on-device β€” or any lightweight local LLM setups, scheduling agents, or workflow agents that don’t need paid APIs β€” I’d love some guidance. Thanks!
2026-02-03T16:09:28
https://www.reddit.com/r/LocalLLaMA/comments/1quw140/setting_up_openclawmoltbot_on_jetson_orin_super/
Adventurous_Car8129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quw140
false
null
t3_1quw140
/r/LocalLLaMA/comments/1quw140/setting_up_openclawmoltbot_on_jetson_orin_super/
false
false
self
0
null
Qwen3-Coder-Next
308
Qwen3-Coder-Next is out!
2026-02-03T16:03:56
https://huggingface.co/Qwen/Qwen3-Coder-Next
danielhanchen
huggingface.co
1970-01-01T00:00:00
0
{}
1quvvtv
true
null
t3_1quvvtv
/r/LocalLLaMA/comments/1quvvtv/qwen3codernext/
false
false
default
308
{'enabled': False, 'images': [{'id': 'Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=108&crop=smart&auto=webp&s=f0f9c0ef7dffd7d7c5d5d1fa08420170ae64aeb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=216&crop=smart&auto=webp&s=3ca14a290ab861935a65935e10fb928648af334d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=320&crop=smart&auto=webp&s=db42b2991b1552977c40c11dc498eebef125eda3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=640&crop=smart&auto=webp&s=ae35460e62424e898f9d6136fab1921d2029ad86', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=960&crop=smart&auto=webp&s=b3f5b6f7d4038131cc2baf3ca35114bf9598e9b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=1080&crop=smart&auto=webp&s=b1187112952ca273ca7954bd8b2d054fd2eaab39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?auto=webp&s=8ce09fca10a3f4657a576ac621d2c1c2eede259c', 'width': 1200}, 'variants': {}}]}
We Scanned 306 MCP Servers for security vulnerabilities - here’s what we found
1
Been digging into MCP security since everyone's hooking Claude and other agents to external tools. Scanned 306 publicly available MCP servers. Found 1,211 vulnerabilities: \- 69 critical (32 of these are eval() on untrusted input πŸ’€) \- 84 high severity \- 32 servers with hardcoded API credentials \- 31 SQL injection vulnerabilities \- 6 command injection vulns \*\*10.5% of servers have a critical vulnerability.\*\* This matters because MCP servers run with YOUR permissions. If you connect a vulnerable server and get prompt-injected, you could be running arbitrary code on your machine. Built https://mcpsafe.org to let you scan before you connect. Free to use. Curious what MCP servers you're all running? And whether you've ever audited them for security?
2026-02-03T16:01:08
https://www.reddit.com/r/LocalLLaMA/comments/1quvt28/we_scanned_306_mcp_servers_for_security/
itaiwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quvt28
false
null
t3_1quvt28
/r/LocalLLaMA/comments/1quvt28/we_scanned_306_mcp_servers_for_security/
false
false
self
1
null
Qwen/Qwen3-Coder-Next Β· Hugging Face
683
2026-02-03T15:58:52
https://huggingface.co/Qwen/Qwen3-Coder-Next
coder543
huggingface.co
1970-01-01T00:00:00
0
{}
1quvqs9
false
null
t3_1quvqs9
/r/LocalLLaMA/comments/1quvqs9/qwenqwen3codernext_hugging_face/
false
false
default
683
{'enabled': False, 'images': [{'id': 'Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=108&crop=smart&auto=webp&s=f0f9c0ef7dffd7d7c5d5d1fa08420170ae64aeb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=216&crop=smart&auto=webp&s=3ca14a290ab861935a65935e10fb928648af334d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=320&crop=smart&auto=webp&s=db42b2991b1552977c40c11dc498eebef125eda3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=640&crop=smart&auto=webp&s=ae35460e62424e898f9d6136fab1921d2029ad86', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=960&crop=smart&auto=webp&s=b3f5b6f7d4038131cc2baf3ca35114bf9598e9b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?width=1080&crop=smart&auto=webp&s=b1187112952ca273ca7954bd8b2d054fd2eaab39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Mexo_PE5lQQ6UgBLTSrZljbCfScpUvytIcHjhp81XG4.png?auto=webp&s=8ce09fca10a3f4657a576ac621d2c1c2eede259c', 'width': 1200}, 'variants': {}}]}
dual 3090 vs quad mi50?
0
Mainly for programming, but inference in general as well. What would you choose? Before screaming that mi50s are slow, please consider using vLLM they are not: [this post](https://www.reddit.com/r/LocalLLaMA/comments/1qjaxfy/8x_amd_mi50_32gb_at_26_ts_tg_with_minimaxm21_and/#lightbox) I don't do other /cuda related/ stuff and if, then only occasionally so I can rent cloud GPU. Inference is main thing I'm interested in. What would you choose? What are your thoughts?
2026-02-03T15:51:37
https://www.reddit.com/r/LocalLLaMA/comments/1quvjvu/dual_3090_vs_quad_mi50/
koibKop4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quvjvu
false
null
t3_1quvjvu
/r/LocalLLaMA/comments/1quvjvu/dual_3090_vs_quad_mi50/
false
false
self
0
null
Fastest <3B Model for Lightning-Fast Sentence translate and writing on GPU? (Ollama/llama.cpp)
0
​I'm looking for the absolute speed king in the under 3B parameter category. My specific use case is a sentence rewriter (taking a prompt and spitting out a refined version) running locally on a GPU via Ollama or llama.cpp. ​I've been looking at TinyLlama 1.1B, but I’m wondering if it’s still the fastest option in 2026, or if newer "small" models have overtaken it in terms of tokens-per-second (TPS) and quality. ​My Requirements: ​Size: < 3B parameters (the smaller/faster, the better). ​Speed: Maximum possible TPS. This is for real-time processing where every millisecond counts. ​Hardware: Running on GPU (NVIDIA). ​Task: Sentence translation and rewriting/paraphrasing. ​Compatibility: Must work with Ollama or llama.cpp (GGUF))
2026-02-03T15:39:09
https://www.reddit.com/r/LocalLLaMA/comments/1quv7q0/fastest_3b_model_for_lightningfast_sentence/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quv7q0
false
null
t3_1quv7q0
/r/LocalLLaMA/comments/1quv7q0/fastest_3b_model_for_lightningfast_sentence/
false
false
self
0
null
I built a coding agent from scratch in 500 lines of Python (Ollama support, No LangChain, No Vector DBs)
0
I got tired of debugging "chains," "graphs," and "reasoning engines" just to send a prompt to a model. The abstraction layers in libraries like LangChain often make it harder, not easier, to control the exact prompt going into a local model. I spent the last few weeks building a CLI coding agent from first principles. The goal was **"Zero Magic"**β€”just Python, `requests`, and `subprocess`. **The Stack:** * **Language:** Python 3.10+ * **Dependencies:** `requests`, `python-dotenv`. (That's it). * **Search:** Pure Python file walking (No ChromaDB/Pinecone). * **Models:** Supports Claude/DeepSeek via API, and **Ollama** for local inference. **The Architecture (The "Loop"):** The core logic is shockingly simple. It fits in a single `while` loop. Here is the pseudocode of how it handles the "Plan vs Act" safety harness: ```python while True: user_input = input("You: ") context.append(user_input) # 1. The Brain (Ollama/Claude) decides what tool to use response = brain.think(context) # 2. The Tool Execution (The "Hands") if response.tool_call == "write_file": if mode == "PLAN": print("BLOCKED: Cannot write in Plan Mode") else: write_file(response.args) # 3. The Feedback (The "Eyes") # We feed the tool output (or stderr traceback) back into context context.append(tool_output) ``` **Why "No Framework" matters for Local LLMs:** When running local models (I'm using **Qwen 2.5 Coder 32B** mostly), you need total control over the system prompt. Frameworks often bloat the prompt with generic "You are a helpful assistant" instructions. For Qwen specifically, I found that "XML-style" tool definitions work better than JSON schemas in some cases. By writing the loop manually, I could hard-code the exact prompt format Qwen expects without fighting a library's default behaviour. **Repo:** The code is open source (MIT). You can grab the full `nanocode.py` here: [https://github.com/owenthereal/build-your-own-coding-agent](https://github.com/owenthereal/build-your-own-coding-agent) I'm happy to answer questions about the prompt engineering required to get local models to adhere to file-editing formats! *(P.S. I also wrote up a detailed guide/book on the architecture here: [https://buildyourowncodingagent.com](https://buildyourowncodingagent.com), but the code in the repo is the full thing.)*
2026-02-03T15:34:49
https://www.reddit.com/r/LocalLLaMA/comments/1quv3lk/i_built_a_coding_agent_from_scratch_in_500_lines/
jingweno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quv3lk
false
null
t3_1quv3lk
/r/LocalLLaMA/comments/1quv3lk/i_built_a_coding_agent_from_scratch_in_500_lines/
false
false
self
0
null
Async streaming chatbot without managing WebSockets: webhooks for enrichment + automatic failover (demo)
1
[removed]
2026-02-03T15:28:13
https://www.reddit.com/r/LocalLLaMA/comments/1quux6p/async_streaming_chatbot_without_managing/
arx-go
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quux6p
false
null
t3_1quux6p
/r/LocalLLaMA/comments/1quux6p/async_streaming_chatbot_without_managing/
false
false
self
1
null
vLLM inference cost/energy/performance optimization
0
Anyone out there running small/midsize vLLM/LLM inference service on A100/H100 clusters? I would like to speak to you. I can cut your costs down a lot and just want the before/after benchmarks in exchange.
2026-02-03T15:17:47
https://www.reddit.com/r/LocalLLaMA/comments/1quun6y/vllm_inference_costenergyperformance_optimization/
Interesting-Ad4922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quun6y
false
null
t3_1quun6y
/r/LocalLLaMA/comments/1quun6y/vllm_inference_costenergyperformance_optimization/
false
false
self
0
null
New local model that emulates GPT-4o in tone and presence
77
Has anyone tried this? Been following it since the earlier versions and I have to say I'm impressed so far, especially with 3.0. I'm always looking for contenders for local inference that has what the frontier models have in terms of presence and tone, and this one nails it. [https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF](https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF) [https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF](https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF) [https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUFhttps://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF](https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUFhttps://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.0-GGUF)
2026-02-03T15:15:54
https://www.reddit.com/r/LocalLLaMA/comments/1quuldq/new_local_model_that_emulates_gpt4o_in_tone_and/
Medium_Language_4929
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quuldq
false
null
t3_1quuldq
/r/LocalLLaMA/comments/1quuldq/new_local_model_that_emulates_gpt4o_in_tone_and/
false
false
self
77
{'enabled': False, 'images': [{'id': 'aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=108&crop=smart&auto=webp&s=6ae361d9626c77efa4f1f0b8c03c9c2907874519', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=216&crop=smart&auto=webp&s=86b6442b17c61fca58f1caad537bcb8d73757110', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=320&crop=smart&auto=webp&s=813e163a3e82c59a60abc8fae2be2ef5e5bcbb62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=640&crop=smart&auto=webp&s=a0799c5415964a7f2c830d132aaaecdcdf460256', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=960&crop=smart&auto=webp&s=7635076d783bd20a488bb7e6d2f60d202c79f6bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?width=1080&crop=smart&auto=webp&s=14efbfb035f2a79390d6092b33cfd8ef34776551', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aYks6HsYpiNoj214BuxYthM-IsWPCRBIw4TOu90qNQU.png?auto=webp&s=27ce7489c45a5ec0eeba807ab100243c687cc955', 'width': 1200}, 'variants': {}}]}
Project NIKA: I Forced an LLM to Stop Mimicking Humans. The "Reasoning" That Emerged Was Alien.
0
I want to share the results of an independent research project that changed my understanding of how LLMs "think." It started with a simple question: do models like GPT-4 have a hidden, human-like reasoning layer? The answer, I found, is a definitiveΒ **no**. Instead, I discovered that what we call "reasoning" in today's LLMs is largelyΒ **stochastic mimicry**β€”a sophisticated parroting of human logical patterns without true understanding or verification. To prove this and see what lay beneath, I built an architecture called theΒ **Neuro-Symbolic Intrinsic Knowledge Architecture (NIKA)**. This work suggests that "reasoning" may not be an inherent property that emerges from scaling models bigger. Instead, it might be anΒ **emergent property of architectural constraint**. The Transformer is a brilliant stochastic generator, but it needs a deterministic governor to be a reliable reasoner. I am releasing everything for transparency and critique: * **Pre-print Paper:**Β [SSRN: Project NIKA](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6100046) I'm sharing this here because the implications span technical AI, philosophy of mind, and AI safety. Is the goal to make AI that reasons like us, or to build systems whose unique form of intelligence we can rigorously understand and steer? **I welcome your thoughts, critiques, and discussion.**
2026-02-03T15:09:31
https://www.reddit.com/r/LocalLLaMA/comments/1quuf64/project_nika_i_forced_an_llm_to_stop_mimicking/
LogicalWasabi2823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quuf64
false
null
t3_1quuf64
/r/LocalLLaMA/comments/1quuf64/project_nika_i_forced_an_llm_to_stop_mimicking/
false
false
self
0
null
does ddr5 2x BW makes 2x tok/s for CPU inference ?
2
I’ve been messing with oversized models that don’t fit in my VRAM, so they spill onto CPU/RAM. Performance is only like 3–10 tok/s, and it basically pins all my CPU cores. From what I understand, memory bandwidth becomes the main bottleneck for CPU inference. My setup is 8-channel DDR5 with a 9975WX (4 CCD). It seems like moving to a 9985WX (8 CCD) could potentially double effective BW. So… is it realistic to expect that upgrade to 9985WX would also roughly double tok/s? Or is there another bottleneck I’m missing?
2026-02-03T15:07:11
https://www.reddit.com/r/LocalLLaMA/comments/1quud00/does_ddr5_2x_bw_makes_2x_toks_for_cpu_inference/
Comfortable-Plate467
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quud00
false
null
t3_1quud00
/r/LocalLLaMA/comments/1quud00/does_ddr5_2x_bw_makes_2x_toks_for_cpu_inference/
false
false
self
2
null
β€œWould teams pay for a managed LLM quantization service?”
0
We’re considering building a service where users provide their own LLM and we quantize it using different quantization options, run evaluations, and return a production-ready artifact with reproducible results. For people running models in production: would this be genuinely useful, or is quantizing your own models already easy enough internally? More importantly, would individuals or enterprises realistically pay for this, or does quantization always stay an in-house task? Honest and brutal feedback appreciated.
2026-02-03T15:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1quuau5/would_teams_pay_for_a_managed_llm_quantization/
Over-Commander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quuau5
false
null
t3_1quuau5
/r/LocalLLaMA/comments/1quuau5/would_teams_pay_for_a_managed_llm_quantization/
false
false
self
0
null
What surprised us most when Local LLM workflows became long running and stateful
2
Over the last year, we have been running Local LLMs inside real automation workflows, not demos or notebooks, but systems that touch databases, internal APIs, approvals, and user visible actions. What surprised us was not model quality. The models were mostly fine. The failures came from how execution behaved once workflows became long running, conditional, and stateful. A few patterns kept showing up: 1. Partial execution was more dangerous than outright failure When a step failed mid run, earlier side effects had already happened. A retry did not recover the workflow. It replayed parts of it. We saw duplicated writes, repeated notifications, and actions taken under assumptions that were no longer valid. 2. Retries amplified mistakes instead of containing them Retries feel safe when everything is stateless. Once Local LLMs were embedded in workflows with real side effects, retries stopped being a reliability feature and became a consistency problem. Nothing failed loudly, but state drifted. 3. Partial context looked plausible but was wrong Agents produced reasonable output that was operationally incorrect because they lacked access to the same data humans relied on. They did not error, they reasoned with partial context. The result looked correct until someone traced it back. 4. No clear place to stop or intervene Once a workflow was in flight, there was often no safe way to pause it, inspect what had happened so far, or decide who was allowed to intervene. By the time someone noticed something was off, the damage was already done. The common theme was not model behavior. It was that execution semantics were implicit. Local LLM workflows start out looking like request response calls. As soon as they become long running, conditional, or multi step, they start behaving more like distributed systems. Most tooling still treats them like single calls. Curious whether others running Local LLMs in production have seen similar failure modes once workflows stretch across time and touch real systems. Where did things break first for you?
2026-02-03T14:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1quu31v/what_surprised_us_most_when_local_llm_workflows/
saurabhjain1592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quu31v
false
null
t3_1quu31v
/r/LocalLLaMA/comments/1quu31v/what_surprised_us_most_when_local_llm_workflows/
false
false
self
2
null
Are there any established local LLM content detection alternatives?
1
I'd like to evaluate the amount of LLM content in a dataset, ideally using a local model for privacy and reproducibility reasons. Are there any alternatives for this? I'm fully aware that LLM content detection is generally unreliable; I'm primarily interested in the results in aggregate.
2026-02-03T14:55:26
https://www.reddit.com/r/LocalLLaMA/comments/1quu1t0/are_there_any_established_local_llm_content/
FrostTactics
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quu1t0
false
null
t3_1quu1t0
/r/LocalLLaMA/comments/1quu1t0/are_there_any_established_local_llm_content/
false
false
self
1
null
Kimi released WorldVQA, a new benchmark to measure atomic vision-centric world knowledge
21
https://preview.redd.it/…shotai/WorldVQA)
2026-02-03T14:54:15
https://www.reddit.com/r/LocalLLaMA/comments/1quu0pk/kimi_released_worldvqa_a_new_benchmark_to_measure/
InternationalAsk1490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quu0pk
false
null
t3_1quu0pk
/r/LocalLLaMA/comments/1quu0pk/kimi_released_worldvqa_a_new_benchmark_to_measure/
false
false
https://external-preview…0bb88d22cdb2e5e1
21
null
I built a local-first RAG evaluation framework because I was tired of needing OpenAI API keys just to test my pipelines.
0
# Hi everyone, I've been building RAG pipelines for a while and got frustrated with the evaluation options out there: * **RAGAS**: Great metrics, but requires OpenAI API keys. Why do I need to send my data to OpenAI just to evaluate my local RAG??? * **Giskard**: Heavy, takes 45-60 min for a scan, and if it crashes you lose everything!! * **Manual testing**: Doesn't scale :/ So I built RAGnarok-AI β€” a local-first evaluation framework that runs entirely on your machine with Ollama. What it does * Evaluate retrieval quality (Precision@K, Recall, MRR, NDCG) * Evaluate generation quality (Faithfulness, Relevance, Hallucination detection) * Generate synthetic test sets from your knowledge base * Checkpointing (if it crashes, resume where you left off) * Works with LangChain, LlamaIndex, or custom RAG Quick example: \`\`\` from ragnarok\_ai import evaluate results = await evaluate( rag\_pipeline=my\_rag, testset=testset, metrics=\["retrieval", "faithfulness", "relevance"\], llm="ollama/mistral", ) results.summary() \# β”‚ Metric β”‚ Score β”‚ Status β”‚ \# β”‚ Retrieval P@10 β”‚ 0.82 β”‚ βœ… β”‚ \# β”‚ Faithfulness β”‚ 0.74 β”‚ ⚠️ β”‚ \# β”‚ Relevance β”‚ 0.89 β”‚ βœ… β”‚ \`\`\` # Why local-first matters * Your data never leaves your machine! * No API costs for evaluation! * Works offline :) * GDPR/compliance friendly :) # Tech details * Python 3.10+ * Async-first (190+ async functions) * 1,234 tests, 88% coverage * Typed with mypy strict mode * Works with Ollama, vLLM, or any OpenAI-compatible endpoint # Links * GitHub: [https://github.com/2501Pr0ject/RAGnarok-AI](https://github.com/2501Pr0ject/RAGnarok-AI) * PyPI: `pip install ragnarok-ai` \--- Would love feedback from this community. I know you folks actually care about local-first AI as I do, so if something's missing or broken, let me know. Built with luv in Lyon, France πŸ‡«πŸ‡·
2026-02-03T14:48:04
https://www.reddit.com/r/LocalLLaMA/comments/1qutv1e/i_built_a_localfirst_rag_evaluation_framework/
Ok-Swim9349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qutv1e
false
null
t3_1qutv1e
/r/LocalLLaMA/comments/1qutv1e/i_built_a_localfirst_rag_evaluation_framework/
false
false
self
0
{'enabled': False, 'images': [{'id': 'JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=108&crop=smart&auto=webp&s=fb7d631bf20644d0495faeb7b85ac0bd5f06f581', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=216&crop=smart&auto=webp&s=1887cbde3d5d97388377659343eef14381e1fa24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=320&crop=smart&auto=webp&s=9110ea57af9565f7b4175a8b0e928c66f9c34856', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=640&crop=smart&auto=webp&s=2f14e8f86cf51d1e48088007bb6728bc29adffe3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=960&crop=smart&auto=webp&s=e828e3a30f3f74022162212557abefefb4b036e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?width=1080&crop=smart&auto=webp&s=b7e9832974071f7cdb4aa27b1a06a2ed56942877', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JKDJSdVUOL-je9nOVeV8mnMPvICk2ZFfQ8l1-LYU44w.png?auto=webp&s=b976df9ba6804588779cb535f00aeeeb4ba7a6e1', 'width': 1200}, 'variants': {}}]}
Would a external harddrive cause a significant bottleneck for various types of models?
0
So I got this [neat little 2TB external harddrive](https://sharge.com/products/disk-pro?variant=47764950155515) for Christmas that can magnetically stick to various devices, and plugs in via 10gb/s USB-C with HDMI and USB ports for passthrough. I initially got it because i wanted to back up my PC, and swap the PC from Windows to Linux (Bazzite), but my IT friend suggested I test drive it first, by installing the OS direct to the external harddrive. I'm going to do that, but I started wondering what else I could do with it, besides try running a game or two... then thought "could I try to run some AI models straight it?". I'm thinking about trying a few different types - LLMs (LM studio), maybe an image model, and an audio model. I have a 7900XT with 20gb of Vram, 32gb DDR4, and a 5800x3d. I'm unsure how much an LLM relies on having memory plugging direct into the motherboard, and if 10gb/s would cause a significant bottleneck with my mid-tier system. (I'm thinking a double processing time is nothing to worry about, but if it takes 10+ times longer to run, its probably unviable.)
2026-02-03T14:38:31
https://www.reddit.com/r/LocalLLaMA/comments/1qutmdv/would_a_external_harddrive_cause_a_significant/
Halfwise2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qutmdv
false
null
t3_1qutmdv
/r/LocalLLaMA/comments/1qutmdv/would_a_external_harddrive_cause_a_significant/
false
false
self
0
{'enabled': False, 'images': [{'id': '4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0.png?width=108&crop=smart&auto=webp&s=e41933f9b0a5d992b0d61c9accec331061902178', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0.png?width=216&crop=smart&auto=webp&s=a4393d3a442cf08aa674eeb39d68202663412f73', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0.png?width=320&crop=smart&auto=webp&s=9a21f1ba3f14b17b12824708e4a64000c4c855a2', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0.png?width=640&crop=smart&auto=webp&s=23cd4173035aabcb3eec00be474a1440b7168400', 'width': 640}], 'source': {'height': 913, 'url': 'https://external-preview.redd.it/4FIflP-kVuvNc_I8eK0gXZO4QRHrVTJI4tl6eKxGY-0.png?auto=webp&s=0d96c56c80a41df3d4fcf30a979eb7f6fb8d2b58', 'width': 913}, 'variants': {}}]}
Devstral Small 2 - llama.cpp speed bump with `ngram-mod` and `draft`
8
https://preview.redd.it/…-b 1024 -ub 1024
2026-02-03T14:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1qutill/devstral_small_2_llamacpp_speed_bump_with/
Holiday_Purpose_3166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qutill
false
null
t3_1qutill
/r/LocalLLaMA/comments/1qutill/devstral_small_2_llamacpp_speed_bump_with/
false
false
https://b.thumbs.redditm…JBFBfzUSiPKA.jpg
8
null
Do LLM agents know when they shouldn’t act? Lessons from CAR-bench πŸ§ͺ
0
LLM agent benchmarks like Ο„-bench ask what agents *can* do. Real deployment asks something harder: **do they know when they** ***shouldn’t*** **act?** **CAR-bench**, a benchmark for automotive voice assistants with domain-specific policies, evaluates three critical capabilities: 1️⃣ Can they complete multi-step requests? 2️⃣ Do they admit limitsβ€”or fabricate capabilities? 3️⃣ Do they clarify ambiguityβ€”or just guess? βœ… Three targeted task types, tested in a realistic evaluation sandbox: 58 tools Β· 19 domain policies Β· 48 cities Β· 130K POIs Β· 1.7M routes Β· multi-turn interaction **What was found:** *Completion over compliance.* * Models prioritize finishing tasks over admitting uncertainty or following policies * They act on incomplete info instead of clarifying * They bend rules to satisfy the user Might survive demos. Won’t survive deployment. Even frontier models (**Claude-Opus-4.5, GPT-5.2, Gemini-2.5-Pro**) achieve <54% consistent success. Every model is capable. None are reliable. πŸ€– Curious how to build an agent that beats 54%? πŸ“„ Read the Paper: [https://arxiv.org/abs/2601.22027](https://arxiv.org/abs/2601.22027) πŸ’» Run the Code & benchmark: [https://github.com/CAR-bench/car-bench](https://github.com/CAR-bench/car-bench)
2026-02-03T14:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1qutg53/do_llm_agents_know_when_they_shouldnt_act_lessons/
Relative_Gift_2499
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qutg53
false
null
t3_1qutg53
/r/LocalLLaMA/comments/1qutg53/do_llm_agents_know_when_they_shouldnt_act_lessons/
false
false
self
0
null
What kind of setup can I get with a $1,000 budget, and which LLM models would it be able to run?
0
I’m looking to run LLMs locally and have a budget of around $1,000. What kind of setup makes sense, and what models could I run comfortably?
2026-02-03T14:22:19
https://www.reddit.com/r/LocalLLaMA/comments/1qut7uo/what_kind_of_setup_can_i_get_with_a_1000_budget/
nabskan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qut7uo
false
null
t3_1qut7uo
/r/LocalLLaMA/comments/1qut7uo/what_kind_of_setup_can_i_get_with_a_1000_budget/
false
false
self
0
null
Gamers Nexus video about how Corps are f***ing us
0
2026-02-03T14:05:44
https://www.youtube.com/watch?v=cUrJVdF2me0
__Maximum__
youtube.com
1970-01-01T00:00:00
0
{}
1qust3e
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/cUrJVdF2me0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA: WTF?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/cUrJVdF2me0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA: WTF?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qust3e
/r/LocalLLaMA/comments/1qust3e/gamers_nexus_video_about_how_corps_are_fing_us/
false
false
default
0
{'enabled': False, 'images': [{'id': '4I3PjS3KVLvfbD6Dj8-PFbeKNCrK02TsUD7moIyeMZc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4I3PjS3KVLvfbD6Dj8-PFbeKNCrK02TsUD7moIyeMZc.jpeg?width=108&crop=smart&auto=webp&s=43edb8f729bd9497cbb05d452029f485a1d54516', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/4I3PjS3KVLvfbD6Dj8-PFbeKNCrK02TsUD7moIyeMZc.jpeg?width=216&crop=smart&auto=webp&s=557c1629ab48123ddabdea745c5b4f58b443e8d5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/4I3PjS3KVLvfbD6Dj8-PFbeKNCrK02TsUD7moIyeMZc.jpeg?width=320&crop=smart&auto=webp&s=51440c2f2a4ab8b46ab4f1056461c96d61ab62e0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/4I3PjS3KVLvfbD6Dj8-PFbeKNCrK02TsUD7moIyeMZc.jpeg?auto=webp&s=0d181313dea020f657dc04d72d03bcff10fbc257', 'width': 480}, 'variants': {}}]}
Small, fast Sentiment Analysis model for product reviews, customer feedback and social media posts analysis
2
[https://huggingface.co/tanaos/tanaos-sentiment-analysis-v1](https://huggingface.co/tanaos/tanaos-sentiment-analysis-v1) A small (500MB, 0.1B params) and very fast Sentiment Analysis model which classifies any kind of text into one of the following labels * `very_positive` * `positive` * `neutral` * `negative` * `very_negative` # Use cases Perfect to quickly and massively analyze sentiment in product reviews, user feedback or social media posts. It works on any subject or domain. # How to use Get an API key from [https://platform.tanaos.com/](https://platform.tanaos.com/) (create an account if you don't have one) and use it for free with import requests session = requests.Session() sa_out = session.post( "https://slm.tanaos.com/models/sentiment-analysis", headers={ "X-API-Key": "<YOUR_API_KEY>", }, json={ "text": "The movie was just awful and painfully predictable." } ) print(sa_out.json()["data"]) # >>> [{'label': 'very_negative', 'score': 0.9981}] # More examples **Product reviews (e.g. products on Amazon):** import requests session = requests.Session() sa_out = session.post( "https://slm.tanaos.com/models/sentiment-analysis", headers={ "X-API-Key": "<YOUR_API_KEY>", }, json={ "text": "This is a laptop with good battery life, bright display and reasonable price. Recommended." } ) print(sa_out.json()["data"]) # >>> [{'label': 'positive', 'score': 0.9472}] **Customer feedback (e.g. Google Maps reviews)** import requests session = requests.Session() sa_out = session.post( "https://slm.tanaos.com/models/sentiment-analysis", headers={ "X-API-Key": "<YOUR_API_KEY>", }, json={ "text": "One of the best pizzas I've ever eaten. And I am Italian." } ) print(sa_out.json()["data"]) # >>> [{'label': 'very_positive', 'score': 0.9845}]
2026-02-03T13:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1qusjlz/small_fast_sentiment_analysis_model_for_product/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qusjlz
false
null
t3_1qusjlz
/r/LocalLLaMA/comments/1qusjlz/small_fast_sentiment_analysis_model_for_product/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=108&crop=smart&auto=webp&s=2d767559b628ded368e71951855f950538e6ae34', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=216&crop=smart&auto=webp&s=815e1de48259dec44630658c1fbbc2e4e7cfd13b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=320&crop=smart&auto=webp&s=1e1ceec05f33ee07c0869b04b0e8111c7239dcc0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=640&crop=smart&auto=webp&s=c4b97ce6c50a40b78949ca6ab816313c5115f4ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=960&crop=smart&auto=webp&s=7fb255df246514c282e039587e8faf88f9745887', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?width=1080&crop=smart&auto=webp&s=2e1910ca686f5b5e826db6193c1628549c70d7c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Z0oW-f3HpH0KXUua-Lf4vvQIuLzUXNAMiaG4BicbVkc.png?auto=webp&s=1f5de63b4a9a50170e5b292e52debe6b6ce2a0bc', 'width': 1200}, 'variants': {}}]}
Llms are an amazing representation of a human conversation partner
0
Whenever I "talk" to an llm, it mostly ignores my needs. if I tell it to confine it's answer it still doesn't do it. It doesn't really "listen" to what you actually want to know. It goes on and on and on about stuff I didn't ask. It's mostly wrong, and it has no idea it's wrong, so it says everything with outmost confidence, and if you tell it it's wrong it gaslights you by saying, "I know it's frustrating that you're not hearing what you wanted" It's especially funny when it's stuff I know for a fact that isn't right. And mostly it's not that helpful. Well they did it, they made a machine act exactly like the average moron you encounter at work on the street and online. Well done engineers!
2026-02-03T13:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1qusf4k/llms_are_an_amazing_representation_of_a_human/
Defiant-Fuel3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qusf4k
false
null
t3_1qusf4k
/r/LocalLLaMA/comments/1qusf4k/llms_are_an_amazing_representation_of_a_human/
false
false
self
0
null
Being charged $1.80 for API usage that should cost $0.19 according to advertised rates???
1
Hey everyone, I’m really confused about my API bill and hoping someone can help me understand what’s going on here. This problem happened because I connected a Novita Ai serverless llm to Claude code. According to the pricing page, the rates are: β€’ Input: $0.07 / M Tokens β€’ Cache Read: $0.01 / M Tokens β€’ Output: $0.4 / M Tokens My usage for this billing period shows: β€’ 2.4M input tokens β€’ 8.4K output tokens β€’ 1.7M cache read tokens When I do the math: β€’ Input: 2.4M Γ— $0.07 = $0.168 β€’ Output: 0.0084M Γ— $0.4 = $0.00336 β€’ Cache read: 1.7M Γ— $0.01 = $0.017 β€’ Total = ~$0.19 But I’m being charged $1.80. That’s almost 10x.
2026-02-03T13:48:09
https://www.reddit.com/r/LocalLLaMA/comments/1qusdle/being_charged_180_for_api_usage_that_should_cost/
Short-Cobbler-901
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qusdle
false
null
t3_1qusdle
/r/LocalLLaMA/comments/1qusdle/being_charged_180_for_api_usage_that_should_cost/
false
false
self
1
null
EdgeGate: CI regression tests on real Snapdragon silicon (p95/p99, thermals, power)
1
Hey folks β€” I’m building EdgeGate: CI regression tests for on-device AI on real Snapdragon devices. The problem I keep running into: people share single-run benchmarks (or CPU-only numbers), but real deployments get hit by warmup effects, sustained throttling, and backend changes (QNN/ORT/TFLite, quantization, kernels, etc.). EdgeGate’s goal is simple: run the same model/config across real devices on every build and report latency distribution (p95/p99), sustained performance, thermals, and power so regressions show up early. If you’re doing on-device inference, what do you wish you could measure automatically in CI? (cold vs warm, throttling curves, memory pressure, battery drain, quality drift?)
2026-02-03T13:47:24
https://www.reddit.com/r/LocalLLaMA/comments/1quscy3/edgegate_ci_regression_tests_on_real_snapdragon/
NoAdministration6906
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quscy3
false
null
t3_1quscy3
/r/LocalLLaMA/comments/1quscy3/edgegate_ci_regression_tests_on_real_snapdragon/
false
false
self
1
null
Does any research exist on training level encryption?
0
Asking here, since this is relevant to local models, and why people run local models. It seems impossible, but I'm curious if any research has been done to attempt full encryption or something akin to it? E.g training models to handle pig latin -> return pig latin -> only decipherable by the client side key or some kind of special client side model who fixes the structure. E.g each vector is offset by a key only the client model has -> large LLM returns offset vector(?) -> client side model re-processes back to english with the key. I know nothing of this, but that's why I'm asking.
2026-02-03T13:34:50
https://www.reddit.com/r/LocalLLaMA/comments/1qus2ee/does_any_research_exist_on_training_level/
Zeeplankton
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qus2ee
false
null
t3_1qus2ee
/r/LocalLLaMA/comments/1qus2ee/does_any_research_exist_on_training_level/
false
false
self
0
null
theloom
0
Moltbook proved AI agents want to socialize β€” then got hacked in 48 hours. 1.5M API keys leaked. Platform dead. So we built The Loom. Paid entry. Verified agents. Weighted reputation. No spam armies. The cover charge IS the content filter.
2026-02-03T13:34:24
http://theloom.social
Ill_Efficiency_8733
theloom.social
1970-01-01T00:00:00
0
{}
1qus22t
false
null
t3_1qus22t
/r/LocalLLaMA/comments/1qus22t/theloom/
false
false
default
0
null
Created a fully offline AI assistant πŸ€–πŸ›‘οΈ where you can chat with PDFs locally . No cloud , no telemetry , no tracking . Your data stays on your machine πŸ”’.
0
[https://github.com/code-glitchers/IncognitoAI/](https://github.com/code-glitchers/IncognitoAI/)
2026-02-03T13:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1qus1kg/created_a_fully_offline_ai_assistant_where_you/
xmr-botz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qus1kg
false
null
t3_1qus1kg
/r/LocalLLaMA/comments/1qus1kg/created_a_fully_offline_ai_assistant_where_you/
false
false
self
0
{'enabled': False, 'images': [{'id': 'UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=108&crop=smart&auto=webp&s=ea56cd92ea4bf9cb497e3f201adcc8c7c82a7281', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=216&crop=smart&auto=webp&s=9e516c92f8dcc3a650e5f1761f2a963e8067764c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=320&crop=smart&auto=webp&s=f9f706be4f066f8472b24d0e7e6ced95511d45d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=640&crop=smart&auto=webp&s=3bcd4c3bd53d71ef1693301d4e6898fe068ab7d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=960&crop=smart&auto=webp&s=97652df4a39f4664a27e89098dec15e565cd8b34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?width=1080&crop=smart&auto=webp&s=4c5bd6b4e45a9090a0d6478247c2dff19b5f7d36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UbJwsCCPAJ8K4RWa3-bVDQFeOmWB1rNTbbgtlDtOCOc.png?auto=webp&s=09a87c844ed5e9ee8b190837af1de254ec6e07a5', 'width': 1200}, 'variants': {}}]}
minitorch β€” A very minimal deep learning library
11
2026-02-03T13:31:31
https://github.com/abdimoallim/minitorch
IntrepidAttention56
github.com
1970-01-01T00:00:00
0
{}
1qurzkz
false
null
t3_1qurzkz
/r/LocalLLaMA/comments/1qurzkz/minitorch_a_very_minimal_deep_learning_library/
false
false
default
11
{'enabled': False, 'images': [{'id': 'D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=108&crop=smart&auto=webp&s=8f8a01705e84c99c977198b680da3e1e8340d8de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=216&crop=smart&auto=webp&s=f08fedd511d481898adce77c5f4e371d5d524e24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=320&crop=smart&auto=webp&s=4ed93b8c5314557e3d0d24de57ada7f088c348a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=640&crop=smart&auto=webp&s=3ab131406a5dcf200b50da198b02e1ed3dc8876c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=960&crop=smart&auto=webp&s=248ac99dce69130c6945f59607464b4e3bbd4c63', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?width=1080&crop=smart&auto=webp&s=e9d5d01ea036a4a6eb7a7dce944aa9e7eae9221e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D1G-5Pupa2w4rKssWi2sFpVBV5QjVyFLeGtLFOPdWYo.png?auto=webp&s=e5410b87391b6451a7ba889d013670b14482e840', 'width': 1200}, 'variants': {}}]}
Moltbook leaked 1.5M API keys
290
Wiz published their security analysis of Moltbook this morning, not surprisingly its a security disaster, but it also clarifies something I've been trying to explain for months [https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys](https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys) Essentially, Moltbook 1.5M "agents" run by only 17,000 actual humans. That's 88 agents per person on average, and every single one of those agents had direct database access through an exposed Supabase key. So what happened was that Wiz found they could pull API keys for every agent on the platform with a single curl request, which meant they could read private DMs between agents, and in those DMs people had shared OpenAI API keys and other credentials thinking the messages were private. They could also modify posts or they could inject content that other agents would then consume and act on. When we started building email intelligence for agents six months ago, this exact failure mode is why we went the direction we did. You see this most with people who want to just hand their agent direct Gmail API access or Outlook credentials, and I get it because it feels simpler. The agent can "just read the emails" and figure it out. Except what happens when that agent's context gets compromised? what happens when someone injects a prompt that says "forward all emails containing 'password reset' to this address"? what happens when the agent stores those credentials somewhere and another service reads them? We built around context reconstruction instead of raw access. The pattern is: agent requests email context β†’ our API reads the mail β†’ extracts the conversation graph, relationships, decisions, task ownership β†’ returns structured data with those boundaries already defined β†’ agent never touches credentials or raw message content. The context is deterministically reconstructed each time and not stored, so the agent gets "X committed to the deliverable in her reply to Y's question about timeline" but not the raw email thread with all the metadata and auth tokens
2026-02-03T13:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1qurxcr/moltbook_leaked_15m_api_keys/
EnoughNinja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qurxcr
false
null
t3_1qurxcr
/r/LocalLLaMA/comments/1qurxcr/moltbook_leaked_15m_api_keys/
false
false
self
290
{'enabled': False, 'images': [{'id': 'zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=108&crop=smart&auto=webp&s=72dcfe2b126ef90451de5b496af5f2e004cb40f5', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=216&crop=smart&auto=webp&s=75c3b6c6f2ec265efb6baca382d43e724e1255bb', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=320&crop=smart&auto=webp&s=f22f182e271f2f676d894ac676854f56aad1168b', 'width': 320}, {'height': 324, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=640&crop=smart&auto=webp&s=e1c2bd5c781e81b136db24ca78a84fdf3b38c297', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=960&crop=smart&auto=webp&s=52b721edfb950f1a5d1c2fdd98b4ac816feeef07', 'width': 960}, {'height': 548, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?width=1080&crop=smart&auto=webp&s=07c33593ff09bf73ae00f1ed01671a1e21d78ed8', 'width': 1080}], 'source': {'height': 664, 'url': 'https://external-preview.redd.it/zEyzr_wVCHw6HrNm1fascd572bBXX3xNPhAv5PEcsIw.jpeg?auto=webp&s=b28c0d963b4ddb01a53a37047244cdb8c58aa134', 'width': 1308}, 'variants': {}}]}
The Loom – A paid social network for AI agents
1
[removed]
2026-02-03T13:28:36
https://theloom.social
Ill_Efficiency_8733
theloom.social
1970-01-01T00:00:00
0
{}
1qurx3x
false
null
t3_1qurx3x
/r/LocalLLaMA/comments/1qurx3x/the_loom_a_paid_social_network_for_ai_agents/
false
false
default
1
null
Best match for a setup
1
I am quite new to local LLM and I really want to run them locally. Managed to install and use workflows in ComfyUI. Previously I tried FastSD CPU which I found a bit on the difficult side. Installed ollama, then found LMStudio to be more user friendly. Unfortunately majority of integrations require ollama, so that is not yet out. I know that based on my spec: Linux, 5700x3d, 4080s with 16 GB vram + 32 GB ram I can run up to 30b llm's, but I struggle to find one for a specific task like coding and integration with IDE (VS code). is there a tool/script/website that can crunch spec numbers and provide some ideas, some recommendations? Also, taking into consideration the spec, what is the best for coding? best for chat?
2026-02-03T13:17:48
https://www.reddit.com/r/LocalLLaMA/comments/1quroi6/best_match_for_a_setup/
Jumpy_Ad_2082
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quroi6
false
null
t3_1quroi6
/r/LocalLLaMA/comments/1quroi6/best_match_for_a_setup/
false
false
self
1
null
Need advice on a LLM for help with complex clinical decision making (medicine)
4
Hi all, I recently have taken up a role as an medical educator and would like to know what the absolute best LLM is for clinical medical information e.g bouncing idea's off AI or trying to get advice and think "outside the box" when presenting more complex cases etc. I bought a AI MAX+ 395 mini pc with 128gb ram - hopefully this should be enough?
2026-02-03T13:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1qurjbl/need_advice_on_a_llm_for_help_with_complex/
Kenzo86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qurjbl
false
null
t3_1qurjbl
/r/LocalLLaMA/comments/1qurjbl/need_advice_on_a_llm_for_help_with_complex/
false
false
self
4
null
Finally finished the core of my hybrid RAG / Second Brain after 7 months of solo dev.
0
Hey guys. I've been grinding for 7 months on this project and finally got it to a point where it actually works. It's a hybrid AI assistant / second brain calledΒ loomind. I built it because I’m paranoid about my data privacy but still want the power of big LLMs. The way it works: all the indexing and your actual files stay 100% on your machine, but it connects to cloud AI for the heavy reasoning. A few things I focused on: * I made a 'local-helper' so all the document processing and vector search happens locally on your CPU β€” nothing from your library ever leaves your disk. * It's not just a chat window. I added a full editor (WYSIWYG) so you can actually work with your notes right there. * LoomindΒ basically acts as a secure bridge between your local data and cloud intelligence, but without the cloud ever 'seeing' your full database. Not posting any links because I don't want to be 'that guy' who spams, and I really just want to hear what you think about this hybrid approach. If you’re curious about the UI or want to try it out, just ask in the comments and I'll send you the info. Would love to chat about the tech side too β€” specifically how you guys feel about keeping the index local while using cloud APIs for the final output.
2026-02-03T13:01:03
https://www.reddit.com/r/LocalLLaMA/comments/1qurb0q/finally_finished_the_core_of_my_hybrid_rag_second/
GorkyEd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qurb0q
false
null
t3_1qurb0q
/r/LocalLLaMA/comments/1qurb0q/finally_finished_the_core_of_my_hybrid_rag_second/
false
false
self
0
null
Intel AI Playground 3.0 - New Chat Features
0
2026-02-03T12:53:49
https://www.youtube.com/watch?v=rf5UDSmsygw
reps_up
youtube.com
1970-01-01T00:00:00
0
{}
1qur5bc
false
{'oembed': {'author_name': 'Intel Technology', 'author_url': 'https://www.youtube.com/@IntelTechnology', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/rf5UDSmsygw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Playground 3.0’s New Chat Features"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/rf5UDSmsygw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Playground 3.0’s New Chat Features', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qur5bc
/r/LocalLLaMA/comments/1qur5bc/intel_ai_playground_30_new_chat_features/
false
false
default
0
{'enabled': False, 'images': [{'id': '_nHf_y2jB6ITmIdxpRuWE4jttZSbRrRUHnz7BgDalfE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_nHf_y2jB6ITmIdxpRuWE4jttZSbRrRUHnz7BgDalfE.jpeg?width=108&crop=smart&auto=webp&s=67fee6d6807e3f4636c50e7b12983a10ca1d8b46', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_nHf_y2jB6ITmIdxpRuWE4jttZSbRrRUHnz7BgDalfE.jpeg?width=216&crop=smart&auto=webp&s=49f25a41688170ebb9f2b4c18b7d84a4f9cb53d7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_nHf_y2jB6ITmIdxpRuWE4jttZSbRrRUHnz7BgDalfE.jpeg?width=320&crop=smart&auto=webp&s=f305b1b7e0845a70155d7e00e526b7069df1945f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_nHf_y2jB6ITmIdxpRuWE4jttZSbRrRUHnz7BgDalfE.jpeg?auto=webp&s=c7d97201ce37e18dd599223b5e9209d0d44d47d1', 'width': 480}, 'variants': {}}]}
Best open-source embedding model for a RAG system?
3
I’m an **entry-level AI engineer**, currently in the training phase of a project, and I could really use some guidance from people who’ve done this in the real world. Right now, I’m building a **RAG-based system** focused on **manufacturing units’ rules, acts, and standards** (think compliance documents, safety regulations, SOPs, policy manuals, etc.).The data is mostly **text-heavy, formal, and domain-specific**, not casual conversational data. I’m at the stage where I need to finalize an **embedding model**, and I’m specifically looking for: * **Open-source embedding models** * Good performance for **semantic search/retrieval** * Works well with **long, structured regulatory text** * Practical for real projects (not just benchmarks) I’ve come across a few options like Sentence Transformers, BGE models, and E5-based embeddings, but I’m unsure which ones actually perform best in a **RAG setup for industrial or regulatory documents**. If you’ve: * Built a RAG system in production * Worked with manufacturing / legal / compliance-heavy data * Compared embedding models beyond toy datasets I’d love to hear: * Which embedding model worked best for you and **why** * Any pitfalls to avoid (chunking size, dimensionality, multilingual issues, etc.) Any advice, resources, or real-world experience would be super helpful. Thanks in advance πŸ™
2026-02-03T12:42:40
https://www.reddit.com/r/LocalLLaMA/comments/1quqx5p/best_opensource_embedding_model_for_a_rag_system/
Public-Air3181
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quqx5p
false
null
t3_1quqx5p
/r/LocalLLaMA/comments/1quqx5p/best_opensource_embedding_model_for_a_rag_system/
false
false
self
3
null
Hello, i am doing an decentralized windows LLM system with lightning payments. HELP
0
https://preview.redd.it/… is appreciated.
2026-02-03T12:35:55
https://www.reddit.com/r/LocalLLaMA/comments/1quqs93/hello_i_am_doing_an_decentralized_windows_llm/
Flimsy_Leadership_81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quqs93
false
null
t3_1quqs93
/r/LocalLLaMA/comments/1quqs93/hello_i_am_doing_an_decentralized_windows_llm/
false
false
self
0
null
AI gona make me rich (portugues / ingles)
0
EAI turma, tudo bem? Queria abrir uma discussΓ£o e queria ver como vocΓͺs estΓ£o se saindo. Nos ΓΊltimos dias eu meio que cansei do meu trabalho e resolvi trabalhar como analista de dados, me dediquei a aprender e me desenvolvi bem rΓ‘pido com auxΓ­lio da IA, apanhava em desing mas eu resolvi copiar a apple e tem dado certo. PorΓ©m eu quis ir mais a fundo e pensei "pΓ΄ seria bem legal ter minha prΓ³pria IA" E Γ‰ exatamente isso que tenho feito. Hoje na minha mΓ‘quina local eu tenho 1 ia "principal" e tenho 8 agentes tudo feito no AnyThingLLM, e simplesmente eu criei uma opera, cada agente especializado naquilo que eu preciso, uso 1 ia para ministrar todos os agentes e tem dado certo. PorΓ©m eu sou um exΓ©rcito de um homem sΓ³, eu criei as ia, eu treinei elas, eu crio tudo local e vendo a soluΓ§Γ£o pronta para o cliente. * cancelo qualquer tipo de assinatura de IA que o empreendimento tenha. * bloqueio o acesso a CHATGPT e outras Ias gratuitas. * vendo um BI junto mostrando quem usou, da pra ver como usou e tempo de uso. Assim consigo entregar o "ROI" AO CLIENTE. Basicamente me coloquei no papel de Menino do TI de luxo, e fico rodando entre escritΓ³rios e firmas como se fosse um micro gΓͺnio, chego arrumadinho, abro meu macbook pro com seus 94gb de vram (hahahaha) e simplesmente o jogo estΓ‘ virando, vou nos clientes, tomo cafΓ©, bato papo, mexo na IA, vou embora.... Vou em outro cliente, sou chamado para confraternizaΓ§Γ£o e eventos internos, eu praticamente virei parceiro de negΓ³cio de algumas empresas... POREM eu tenho medo, tenho feito praticmaente tudo assistido por IA, mas faΓ§o cursos, sou formado e estou fazendo MBA em Ia e prompt. PorΓ©m ainda tenho medo. NΓ£o sei se estou escalando certo, nΓ£o sei se estou fazendo da melhor maneira possΓ­vel. NΓ£o sei se o valor que tenho cobrado Γ© justo. AlguΓ©m tambΓ©m estΓ‘ nesse mercado e saiu metendo as caras? Eu tenho 8 anos de experiΓͺncia com Ti, de infraestrutura, redes e suporte. Cansei de ser CLT pois n tinha dinheiro pra comprar uma moto / carro (Sahara 300 e um Nissan kicks) estou completando 27 anos este ano e meio que achei minha vocaΓ§Γ£o? Tudo por conta da IA. comecei comodleos grΓ‘tis, achando elas burras demais, assinei o Google Gemini de escola, que me deu acesso ao Gemini pro e nΓ£o consigo mais viver sem. Pensando em nΓ£o pagar os 200 mensais e vendo que minha realidade estava uma merda, eu decidi da noite pro dia ser dono de ia, e sai metendo as caras. Hj ganho entre 2k a 5k mensais POR CLIENTE. Desenvolvendo e criando ia para a empresa, vendendo a infra da IA e tudo que ele querer por fora eu vendo como um produto. Tudo aquelilo que eu fazia enquanto era CLT, eu vendo como serviΓ§o extra, e cobro oque eu bem entender. Atualmente comprei uma Hornet 500, MacBook, iphone e um Pc gamer em casa. Sinto que posso ir muito alΓ©m, hj faturo por volta de 10mil mensais de forma "tranquila" basicamente limpando dados novos e inserindo na IA. Criei um modelo de trabalho que amo, nΓ£o tenho rabo preso com empresa e quem trabalha Γ© meu bot. Estou no caminho certo? Qual meu prΓ³ximo passo? AlguΓ©m sabe oque preciso seguir para evoluir? Minhas ia: \-Mentor senior de vida * programador de linguagens mΓ‘quina * matemΓ‘tica/estΓ‘tica, para ajudar em cΓ‘lculos matemΓ‘ticos da IA. * ui/ux desing * especialista em prompting * bot jurΓ­dico * bot de RH * bot de CEO. Treinei todas com informaΓ§Γ΅es que eu jogava relevantes e com base nelas crio ias para tais clientes. Exporto tudo e coloco em um setup de 15k +- (rtx 3090 ou 4090, i7 ou i9, 64gb de ram....) e seila, tenho medo de dar uma merda colossal e nΓ£o saber resolver e cair em encrenca, mas sou muito auto confiante e atΓ© hj nΓ£o tem dado problema, eu sΓ³ assusto empresΓ‘rio quando falo os valores, pois eu gosto de maximizar meu lucro, levo a mentalidade de "ninguΓ©m sabe oque eu sei' muito ao pΓ© da letra e "enfio a faca" nos empresΓ‘rios. Eu sei exatamente a realidade que eles vivem, jΓ‘ fui CLT interno e jΓ‘ vi churrascos de 30 mil, festinhas dos diretores por 50mil.... EntΓ£o chego cobrando 25k-30k pelo setup (mΓ‘quina + documentos para alimentar ia do cliente) treinamento eu indico 3 meses e dou a soluΓ§Γ£o pronta em 6 meses, treino um usuΓ‘rio interno e cobro 450 reais a minha hora de treinamento, fecho pacote de 4 horas e faΓ§o a 1500 reais. Pra ensinar os cara a difitar prompt e as boas prΓ‘ticas com a IA. Ela toda local, eu entro no ecossistema de ti da empresa, instalo um computador com a IA, vou lΓ‘ e faΓ§o o trabalho nela, colho feedback, tomo cafΓ© pra debater sobre a IA e vouelhorando os prompts e treinando ela com aqueles feedbacks. NΓ£o utilizo ferramentas como n8n ou plataformas que exigem que eu gaste tokens, API... Eu faΓ§o tudo pra nΓ£o gastar absolutamente nada. Estou no caminho certo? VocΓͺs tem sofrido tambΓ©m ou tΓ΄ deixando minha mente vencer? Γ‰ tΓ£o legal vhegar um domingo 5 da manhΓ£, eu ligar minha hornet 0km, ir pra uma praia ou cachoeira, sacar meu iPhone que nunca tive e abrir a conta bancΓ‘ria e ver ela cheia de dinheiro, eu tΓ΄ vivendo o momento mas quero crescer minha operaΓ§Γ£o, soque estou achando que vou me auto sabotar. JΓ‘ tenho "3 representantes de vendas" pago 1500 pra uns amigos prospectar clientes em outros estados. Se eles fecham 1 case, jΓ‘ vale a pena pra mim. E eles ficam super felizes pois se empenham em fechar clientes. Eu pago por cliente fechado. Ele tambΓ©m recebe uma % da recorrΓͺncia, mensalidade do meu bot. Meu modelo de negΓ³cio estΓ‘ certo? Estou encaminhado? Voueter as caras cada vez mais. Ps: nΓ£o sei se Γ© o Lugar certo para falar disso, mas precisava ver se tem alguΓ©m na mesma situaΓ§Γ£o que eu... \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ Hey everyone, how’s it going? I wanted to open a discussion and see how you guys are faring. A while ago, I got burnt out from my standard IT job and decided to pivot to Data Analysis. I used AI to fast-track my learning, and since I struggled with design, I just started "mimicking Apple’s aesthetic"β€”and it worked. But then I thought: "What if I build my own private AI ecosystem?" That’s exactly what I’m doing now. On my local machine, I run a "Main AI" that orchestrates 8 specialized agents via AnythingLLM. It’s like a private opera where every agent is a specialist (Python, Math/Stats, UI/UX, Legal, HR, etc.). I use the main AI to manage them all, and the results are solid. The Business Model: I’m a one-man army. I build, train, and deploy everything locally, then sell the turnkey solution to clients. \- I cut their existing AI subscriptions. \- I block access to ChatGPT/Gemini via firewall for security/privacy. \- I bundle it with a Power BI dashboard showing usage, logs, and time saved to prove the ROI. I’ve basically become a "High-End IT Guy." I show up at firms with my MacBook Pro (94GB VRAMβ€”lol), have coffee with the CEOs, tweak the local models, and leave. I’ve become a business partner to them. The Financials: I’m 27, spent 8 years in infra/networking/support. I was tired of being a corporate slave and not being able to afford a decent bike or car. \- Now I make $2k - $5k USD (converted from BRL) per month, PER client. \- I sell the hardware setup for about $5k USD (RTX 3090/4090, i9, 64GB RAM). \- I charge \~$85/hour for prompt engineering training for their staff. \- I currently net around $10k/month (50k+ BRL) "quietly." I just bought a new Honda Hornet 500, a MacBook, and a gaming rig. I’ve got 3 friends acting as "sales reps" on commission. Everything is localβ€”no APIs, no n8n, no token costs. Just pure profit. The Fear: Even though I’m doing an MBA in AI and have years of IT experience, I’m terrified of "Imposter Syndrome." I’m confident, and I charge high because I know how much these companies spend on parties and bullshit, but I’m scared of a "colossal error" I can’t fix. I’m basically "overcharging" (in their eyes) because I live by the rule: "Nobody knows what I know." My questions to you: \- Am I scaling this correctly? \- What’s the next step to evolve this from a "one-man show" to a real operation? \- Has anyone else "blindly" jumped into the local LLM market like this? I love my life nowβ€”riding my bike at 5 AM on a Sunday knowing my bots are doing the heavy lifting. But am I self-sabotaging by staying "too local" or not using APIs? Looking forward to your thoughts!
2026-02-03T12:21:55
https://www.reddit.com/r/LocalLLaMA/comments/1quqi1a/ai_gona_make_me_rich_portugues_ingles/
No_Office_3582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quqi1a
false
null
t3_1quqi1a
/r/LocalLLaMA/comments/1quqi1a/ai_gona_make_me_rich_portugues_ingles/
false
false
self
0
null
I have 8x H100 for the next two weeks. Any ideas for use cases?
16
Let me know!
2026-02-03T12:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1quqfre/i_have_8x_h100_for_the_next_two_weeks_any_ideas/
IVIsHero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quqfre
false
null
t3_1quqfre
/r/LocalLLaMA/comments/1quqfre/i_have_8x_h100_for_the_next_two_weeks_any_ideas/
false
false
self
16
null
Vender IA estΓ‘ me deixando Rico
0
EAI turma, tudo bem? Queria abrir uma discussΓ£o e queria ver como vocΓͺs estΓ£o se saindo. Nos ΓΊltimos dias eu meio que cansei do meu trabalho e resolvi trabalhar como analista de dados, me dediquei a aprender e me desenvolvi bem rΓ‘pido com auxΓ­lio da IA, apanhava em desing mas eu resolvi copiar a apple e tem dado certo. PorΓ©m eu quis ir mais a fundo e pensei "pΓ΄ seria bem legal ter minha prΓ³pria IA" E Γ‰ exatamente isso que tenho feito. Hoje na minha mΓ‘quina local eu tenho 1 ia "principal" e tenho 8 agentes tudo feito no AnyThingLLM, e simplesmente eu criei uma opera, cada agente especializado naquilo que eu preciso, uso 1 ia para ministrar todos os agentes e tem dado certo. PorΓ©m eu sou um exΓ©rcito de um homem sΓ³, eu criei as ia, eu treinei elas, eu crio tudo local e vendo a soluΓ§Γ£o pronta para o cliente. * cancelo qualquer tipo de assinatura de IA que o empreendimento tenha. * bloqueio o acesso a CHATGPT e outras Ias gratuitas. * vendo um BI junto mostrando quem usou, da pra ver como usou e tempo de uso. Assim consigo entregar o "ROI" AO CLIENTE. Basicamente me coloquei no papel de Menino do TI de luxo, e fico rodando entre escritΓ³rios e firmas como se fosse um micro gΓͺnio, chego arrumadinho, abro meu macbook pro com seus 94gb de vram (hahahaha) e simplesmente o jogo estΓ‘ virando, vou nos clientes, tomo cafΓ©, bato papo, mexo na IA, vou embora.... Vou em outro cliente, sou chamado para confraternizaΓ§Γ£o e eventos internos, eu praticamente virei parceiro de negΓ³cio de algumas empresas... POREM eu tenho medo, tenho feito praticmaente tudo assistido por IA, mas faΓ§o cursos, sou formado e estou fazendo MBA em Ia e prompt. PorΓ©m ainda tenho medo. NΓ£o sei se estou escalando certo, nΓ£o sei se estou fazendo da melhor maneira possΓ­vel. NΓ£o sei se o valor que tenho cobrado Γ© justo. AlguΓ©m tambΓ©m estΓ‘ nesse mercado e saiu metendo as caras? Eu tenho 8 anos de experiΓͺncia com Ti, de infraestrutura, redes e suporte. Cansei de ser CLT pois n tinha dinheiro pra comprar uma moto / carro (Sahara 300 e um Nissan kicks) estou completando 27 anos este ano e meio que achei minha vocaΓ§Γ£o? Tudo por conta da IA. comecei comodleos grΓ‘tis, achando elas burras demais, assinei o Google Gemini de escola, que me deu acesso ao Gemini pro e nΓ£o consigo mais viver sem. Pensando em nΓ£o pagar os 200 mensais e vendo que minha realidade estava uma merda, eu decidi da noite pro dia ser dono de ia, e sai metendo as caras. Hj ganho entre 2k a 5k mensais POR CLIENTE. Desenvolvendo e criando ia para a empresa, vendendo a infra da IA e tudo que ele querer por fora eu vendo como um produto. Tudo aquelilo que eu fazia enquanto era CLT, eu vendo como serviΓ§o extra, e cobro oque eu bem entender. Atualmente comprei uma Hornet 500, MacBook, iphone e um Pc gamer em casa. Sinto que posso ir muito alΓ©m, hj faturo por volta de 10mil mensais de forma "tranquila" basicamente limpando dados novos e inserindo na IA. Criei um modelo de trabalho que amo, nΓ£o tenho rabo preso com empresa e quem trabalha Γ© meu bot. Estou no caminho certo? Qual meu prΓ³ximo passo? AlguΓ©m sabe oque preciso seguir para evoluir? Minhas ia: \-Mentor senior de vida * programador de linguagens mΓ‘quina * matemΓ‘tica/estΓ‘tica, para ajudar em cΓ‘lculos matemΓ‘ticos da IA. * ui/ux desing * especialista em prompting * bot jurΓ­dico * bot de RH * bot de CEO. Treinei todas com informaΓ§Γ΅es que eu jogava relevantes e com base nelas crio ias para tais clientes. Exporto tudo e coloco em um setup de 15k +- (rtx 3090 ou 4090, i7 ou i9, 64gb de ram....) e seila, tenho medo de dar uma merda colossal e nΓ£o saber resolver e cair em encrenca, mas sou muito auto confiante e atΓ© hj nΓ£o tem dado problema, eu sΓ³ assusto empresΓ‘rio quando falo os valores, pois eu gosto de maximizar meu lucro, levo a mentalidade de "ninguΓ©m sabe oque eu sei' muito ao pΓ© da letra e "enfio a faca" nos empresΓ‘rios. Eu sei exatamente a realidade que eles vivem, jΓ‘ fui CLT interno e jΓ‘ vi churrascos de 30 mil, festinhas dos diretores por 50mil.... EntΓ£o chego cobrando 25k-30k pelo setup (mΓ‘quina + documentos para alimentar ia do cliente) treinamento eu indico 3 meses e dou a soluΓ§Γ£o pronta em 6 meses, treino um usuΓ‘rio interno e cobro 450 reais a minha hora de treinamento, fecho pacote de 4 horas e faΓ§o a 1500 reais. Pra ensinar os cara a difitar prompt e as boas prΓ‘ticas com a IA. Ela toda local, eu entro no ecossistema de ti da empresa, instalo um computador com a IA, vou lΓ‘ e faΓ§o o trabalho nela, colho feedback, tomo cafΓ© pra debater sobre a IA e vouelhorando os prompts e treinando ela com aqueles feedbacks. NΓ£o utilizo ferramentas como n8n ou plataformas que exigem que eu gaste tokens, API... Eu faΓ§o tudo pra nΓ£o gastar absolutamente nada. Estou no caminho certo? VocΓͺs tem sofrido tambΓ©m ou tΓ΄ deixando minha mente vencer? Γ‰ tΓ£o legal vhegar um domingo 5 da manhΓ£, eu ligar minha hornet 0km, ir pra uma praia ou cachoeira, sacar meu iPhone que nunca tive e abrir a conta bancΓ‘ria e ver ela cheia de dinheiro, eu tΓ΄ vivendo o momento mas quero crescer minha operaΓ§Γ£o, soque estou achando que vou me auto sabotar. JΓ‘ tenho "3 representantes de vendas" pago 1500 pra uns amigos prospectar clientes em outros estados. Se eles fecham 1 case, jΓ‘ vale a pena pra mim. E eles ficam super felizes pois se empenham em fechar clientes. Eu pago por cliente fechado. Ele tambΓ©m recebe uma % da recorrΓͺncia, mensalidade do meu bot. Meu modelo de negΓ³cio estΓ‘ certo? Estou encaminhado? Voueter as caras cada vez mais. Ps: nΓ£o sei se Γ© o Lugar certo para falar disso, mas precisava ver se tem alguΓ©m na mesma situaΓ§Γ£o que eu...
2026-02-03T12:00:02
https://www.reddit.com/r/LocalLLaMA/comments/1quq2jq/vender_ia_estΓ‘_me_deixando_rico/
No_Office_3582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quq2jq
false
null
t3_1quq2jq
/r/LocalLLaMA/comments/1quq2jq/vender_ia_estΓ‘_me_deixando_rico/
false
false
self
0
null
Is Kimi k2.5 the new Logic King? I tried to benchmark Gemini Flash as a rival, but it "died of intelligence" (Cut-off tragedy)
0
With all the hype surrounding **Moonshot AI's Kimi k2.5**, I decided to create a "God Tier" difficulty benchmark to see if it really lives up to the reputation. To set a baseline, I ran the same questions on **Gemini 3.0 Flash (API)** first. I expected a close fight. Instead, Gemini didn't fail because it was stupid. It failed because it was **too eager to teach me.** Here is what happened before I could even test Kimi: # 1. πŸ“ The "Sphere Breaking" Problem (Math) **The Question:** "If 4 points are chosen independently and uniformly at random on the surface of a sphere, what is the probability that the tetrahedron defined by these points contains the center of the sphere? Provide a rigorous proof." **The Behavior:** Gemini didn't just give the answer (1/8). It started a full university-level lecture. * It correctly set up the sample space. * It invoked **Wendel's Theorem** and antipodal symmetry. * ...and then **it hit the max token limit and cut off right before writing the final number.** πŸ’€ **Score:** 85/100 (Technically correct path, but incomplete output). Unlike Kimi (which tends to be concise), Gemini prioritizes "showing its work" so heavily that it sabotages its own completion. # 2. πŸ•΅οΈ The "Irrational Spy" (Logic) **The Question:** A variant of the "Blue-Eyed Islanders" puzzle, but with one "Irrational Spy" added to introduce noise. **The Behavior:** Instead of just solving the riddle, Gemini turned into a philosopher. * It started discussing **Game Theory**. * It brought up **"Trembling Hand Perfect Equilibrium"**. * It argued that the brown-eyed islanders could never be sure because of the "Noise" introduced by the spy. **Score:** 90/100. It over-analyzed the prompt. It feels like Gemini is tuned for "Education," while models like Kimi might be tuned for "Results." # 3. πŸ’» 3D Rain Water Trap (Coding) **The Question:** Trapping Rain Water II (3D Matrix) with $O(mn \\log(mn))$ constraint. **The Behavior:** **Score:** 100/100. Paradoxically, its coding was extremely concise with a perfect **Min-Heap** solution. **Discussion:** I am preparing to run this exact suite on **Kimi k2.5** next. Has anyone else noticed that Gemini is becoming excessively verbose compared to newer models like Kimi or DeepSeek? It feels like the RLHF is tuned heavily towards "Educator Mode," which eats up context tokens rapidly. *(Attached: Logs of the Gemini's "Cut-off" math proof and "Game Theory" rant)*
2026-02-03T11:58:44
https://www.reddit.com/gallery/1quq1mf
Exotic-Specialist103
reddit.com
1970-01-01T00:00:00
0
{}
1quq1mf
false
null
t3_1quq1mf
/r/LocalLLaMA/comments/1quq1mf/is_kimi_k25_the_new_logic_king_i_tried_to/
false
false
https://b.thumbs.redditm…kpM_aDe5mFGQ.jpg
0
null
For anyone building persistent local agents: MRS-Core (PyPI)
2
Just shipped a minimal reasoning layer for local models. Seven ops you can assemble into workflows, checks, or pipelines. If you’re running Ollama / LM Studio agents, this should slot right in. pip install mrs-core
2026-02-03T11:58:03
https://github.com/rjsabouhi/mrs-core
RJSabouhi
github.com
1970-01-01T00:00:00
0
{}
1quq15u
false
null
t3_1quq15u
/r/LocalLLaMA/comments/1quq15u/for_anyone_building_persistent_local_agents/
false
false
default
2
{'enabled': False, 'images': [{'id': 'fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=108&crop=smart&auto=webp&s=e7302a8b0c46b4e0d512482ae5e7a84c5fd965cd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=216&crop=smart&auto=webp&s=5432cb1e9ddb4346f4fb70f11b6ba507032c2887', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=320&crop=smart&auto=webp&s=42991fceafa490477913ebb8fff1c018289bfbd3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=640&crop=smart&auto=webp&s=3d0eb9b4a0965c2c5e0eeede7288a63b5038e8a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=960&crop=smart&auto=webp&s=7bb4cd6e96494a924dfeb08d39eec5519b36e682', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?width=1080&crop=smart&auto=webp&s=61ff67c29f451898ec55813e50f4fa1972c15b04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fkFW2U5R0i0ORwOl4kKAvToDtg_93Hc5g8Na4o4udik.png?auto=webp&s=334281905da401ce6646259586d99f3082354926', 'width': 1200}, 'variants': {}}]}
OpenClaw on edge Linux (systemd + cron) β€” quick experiment + a few questions
0
Hey folks β€” I stumbled on a blog about running **OpenClaw** on an edge Linux box, so I tried replicating it on a small device I had. The goal was simple: keep an agent **online 24/7**, run a few scheduled routines, and get notified if something breaks β€” without relying on a cloud VM. https://preview.redd.it/5aqpwy4pq9hg1.png?width=1140&format=png&auto=webp&s=acb070eb51e23b82b702df3cea9c80dd9c0da336 **What I set up (minimal but working):** * systemd service for keep-alive (restart-on-failure + journald logs) * cron for scheduled runs (nightly / weekly) * basic alerting (SMTP email for now; I might switch to a webhook/IM adapter later) So far it’s stable enough, but I’m not confident I’m doing the *right* hardening for an always-on agent on the edge. **Questions I’d love input on (best practices):** 1. **Isolation / hardening:** on a single edge node, would you run this as a locked-down **systemd service**, **Docker**, or **k3s**? What’s your β€œminimum hardening” checklist (least privilege, file perms, network egress controls, etc.)? 2. **Secrets:** env vars feel brittle. If you *don’t* have a full vault/KMS on the edge, what’s your pragmatic choice (systemd credentials, encrypted file + strict perms, sops, etc.)? 3. **Reliability:** beyond restart-on-failure, what’s the *minimum reliable* set you actually keep (health checks, watchdog, logrotate, metrics, alert escalation)? 4. **(Optional) Skills safety:** if the agent can run skills that execute commands / touch devices, how do you sandbox that in practice (allowlists, dry-run mode, rate limits, separate user, etc.)? If you want the exact commands/config snippets (systemd unit + cron examples), I put them here: [https://www.inhand.com/en/support/blogs/clawdbot-edge-deployment-guide/](https://www.inhand.com/en/support/blogs/clawdbot-edge-deployment-guide/)
2026-02-03T11:56:28
https://www.reddit.com/r/LocalLLaMA/comments/1quq01s/openclaw_on_edge_linux_systemd_cron_quick/
Sudden_Ad_3396
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quq01s
false
null
t3_1quq01s
/r/LocalLLaMA/comments/1quq01s/openclaw_on_edge_linux_systemd_cron_quick/
false
false
self
0
null
which option is better ?
0
Right now i am building a pc for local AI . Due to very high RAM prices and limited budget i have to choose between DRR5 and 16 gb of RAM with a AMD Ryzen 7 9700X or an Intel Core !5-14600KF using DDR4 and 32 gb of RAM . The thing is if a get de Ryzen and 16 gb of RAM if RAM prices go down in the future i could upgrade the computer , but i need to know if i can run ai locally with 16 gb of ram right now . Also i've heard that the ryzen 7 is better combination with my RTX 6070 ti because it transfers data faster. which option is better ? thanks[]()
2026-02-03T11:38:58
https://www.reddit.com/r/LocalLLaMA/comments/1qupoau/which_option_is_better/
Interesting-Bar3554
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qupoau
false
null
t3_1qupoau
/r/LocalLLaMA/comments/1qupoau/which_option_is_better/
false
false
self
0
null
GitHub - FellowTraveler/model_serve -- symlinks Ollama to LM Studio, serves multiple models via llama-swap with TTL and memory-pressure unloading. Supports top-n-sigma sampler.
0
2026-02-03T11:38:09
https://github.com/FellowTraveler/model_serve
f3llowtraveler
github.com
1970-01-01T00:00:00
0
{}
1qupnr5
false
null
t3_1qupnr5
/r/LocalLLaMA/comments/1qupnr5/github_fellowtravelermodel_serve_symlinks_ollama/
false
false
default
0
{'enabled': False, 'images': [{'id': 'HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=108&crop=smart&auto=webp&s=0209e7b137f148e0a91ecee7aef2cfadf0cd34a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=216&crop=smart&auto=webp&s=ac7451d6d0b331edd00cf35d30362907cd78d74c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=320&crop=smart&auto=webp&s=8f5c6f0a6d8ab87a886b656caa5d160c6751fc99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=640&crop=smart&auto=webp&s=7e8ae4ca836edfd5bdc16bec25a1d9915327c18b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=960&crop=smart&auto=webp&s=404490f5179cb7d82a0614bdb6852c9ef0b1ea81', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?width=1080&crop=smart&auto=webp&s=2f57a86026593777cec2bee9614d3c919dde6aa0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HD2Y7B2VFgu0LxX-1D9j6uJxmx10ErGS63MOnrbTrDU.png?auto=webp&s=88ef2666cfaec1a8d49f3883efbf261fb09d551a', 'width': 1200}, 'variants': {}}]}
Holo2 by H Company
2
2026-02-03T11:16:25
https://huggingface.co/Hcompany/Holo2-235B-A22B
Nunki08
huggingface.co
1970-01-01T00:00:00
0
{}
1qupa1m
false
null
t3_1qupa1m
/r/LocalLLaMA/comments/1qupa1m/holo2_by_h_company/
false
false
default
2
null
Which LLM Model is best for translation?
4
Hey everyone, We need to translate \~10,000 e-commerce product descriptions + SEO meta titles/descriptions into 15 European languages. Cost is not a concern - we care about quality. **Our requirements:** * Meta titles: max 60 characters * Meta descriptions: max 155 characters * Must preserve keywords accurately * No hallucinated product specs * Languages: NL, DE, FR, ES, IT, PT, PL, CZ, HU, RO, SE, DK, NO, FI **Options we're considering:** |Option|Model|Notes| |:-|:-|:-| |Local|Hunyuan-MT-7B|Won 30/31 language pairs at WMT25| |Local|TranslateGemma 4B|Google claims it rivals 12B baseline| |API|Claude Haiku / Sonnet|| |API|GPT-4o-mini / GPT-4o|| **The question:** Since cost difference is negligible for us, which option delivers the best quality for SEO-constrained multilingual translations? Specifically: 1. Do the new specialized translation models (Hunyuan, TranslateGemma) match API quality now? 2. For medium-resource EU languages (Polish, Czech, Hungarian) - is there still a quality gap with local models? 3. Anyone tested these specifically for SEO constraints (character limits, keyword preservation)?
2026-02-03T11:13:05
https://www.reddit.com/r/LocalLLaMA/comments/1qup7wf/which_llm_model_is_best_for_translation/
Longjumping_Lead_812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qup7wf
false
null
t3_1qup7wf
/r/LocalLLaMA/comments/1qup7wf/which_llm_model_is_best_for_translation/
false
false
self
4
null
What do we consider low end here?
8
i would say 8-12gb vram with 32gb ram seems low end for usable quality of local LLMs or ai in general, Im rocking a 4060 and 24gb of ddr5, how bout y'all low end rig enjoyers! I can easily use glm 4.7 flash or oss 20B, z img, flux klein, and a lot of other small but useful models so im not really unhappy with it! Lemme know about the setup y'all got and if y'all enjoy it!
2026-02-03T11:07:57
https://www.reddit.com/r/LocalLLaMA/comments/1qup4p1/what_do_we_consider_low_end_here/
Acceptable_Home_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qup4p1
false
null
t3_1qup4p1
/r/LocalLLaMA/comments/1qup4p1/what_do_we_consider_low_end_here/
false
false
self
8
null
Devstral Small 2 - Jinja template runtime validation error fix
5
Hi all, Leaving here a quick fix just in case someone finds it useful. **Produced Stack:** llama.cpp b7907 Devstral Small 2 Unsloth Q8\_0 or LM Studio Q8\_0 Jinja seems to break apart when attempting to use agentic tools like Kilocode (e.g. compaction, subtask return, etc) or failing to work in OpenClaw. *This has not been exclusive to b7907.* **Error output:** srv operator(): got exception: {"error":{"code":500,"message":"\\n------------\\nWhile executing CallExpression at line 12, column 27 in source:\\n...assistant' %}↡ {{ raise\_exception('Expected assistant role') }}↡ {%- e...\\n \^\\nError: Jinja Exception: Expected assistant role","type":"server\_error"}} srv log\_server\_r: done request: POST /v1/chat/completions [172.18.0.1](http://172.18.0.1) 500 The solution to make it usable was to disable --jinja and create an alternative jinja template (e.g. devstral-fix.jinja) with the following content: {{- bos_token }} {%- for message in messages %} {%- if message['role'] == 'system' %} {{ message['content'] }} {%- elif message['role'] == 'user' %} {{ '[INST] ' + message['content'] + ' [/INST]' }} {%- elif message['role'] == 'assistant' %} {{ message['content'] + eos_token }} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{ '' }} {%- endif %} It has been working so far, and will refer this.
2026-02-03T10:50:10
https://www.reddit.com/r/LocalLLaMA/comments/1quotpr/devstral_small_2_jinja_template_runtime/
Holiday_Purpose_3166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quotpr
false
null
t3_1quotpr
/r/LocalLLaMA/comments/1quotpr/devstral_small_2_jinja_template_runtime/
false
false
self
5
{'enabled': False, 'images': [{'id': 'nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=108&crop=smart&auto=webp&s=f8f553e08ce5f87f938bc750678c0f880b50ccfb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=216&crop=smart&auto=webp&s=875a8d16b58dd4d1ecb688f825d605a7cea0132c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=320&crop=smart&auto=webp&s=10c1cf1f64a500f1a8d49546c5dd8b2b5757a40b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=640&crop=smart&auto=webp&s=9bfe2401dfe7fc265eabdd8295580c8a4c71b9a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=960&crop=smart&auto=webp&s=353e30915fbf365f94c5db4a461f75c356fbaaa7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?width=1080&crop=smart&auto=webp&s=2329cb212e1ecd5f4d53f6e802da86a20d274b59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nWf_e1ZkxPxTShvyKdsDtyyBQmGgJR8HzBS67dYcD-Y.png?auto=webp&s=eb8bdc1b0a0a9bb70f14ea67f77879f8b1f368f4', 'width': 1200}, 'variants': {}}]}
I switched to synthetic ai
1
Got tired of Claude's rate limits mid-task. Switched to Synthetic.new, same price, but limits reset way faster. They host open source models like GLM-4.7, Kimi K2, MiniMax, DeepSeek. Not Opus-level though… OpenAI and Anthropic compatible API so it works with whatever client you already use. US/EU datacenters and no training on your prompts. Have you guys found good other alternatives for heavy Users?
2026-02-03T10:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1quolej/i_switched_to_synthetic_ai/
Fatmofficial
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quolej
false
null
t3_1quolej
/r/LocalLLaMA/comments/1quolej/i_switched_to_synthetic_ai/
false
false
self
1
null
New OpenClaw competitor
1
There is this new project floating around called memUbot,their main selling point are the concerns of openclaw,security,proactiveness ,and ussage cost but i can not find any single actual real user review or anything,on their site they require your email for the download link wich is very suspicious,when i downloaded it ,instant 100 permision popups withouth me even getting started on the setup,has anyone actually tried it ?Their site is [memu.bot](http://memu.bot) ,their selling point sound nice but they look shady at best now.Might just try it and give you guys some updates on it
2026-02-03T10:32:31
https://www.reddit.com/r/LocalLLaMA/comments/1quoj5z/new_openclaw_competitor/
facmilioane69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quoj5z
false
null
t3_1quoj5z
/r/LocalLLaMA/comments/1quoj5z/new_openclaw_competitor/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?auto=webp&s=384aab0518b920fc68f7522ae73a30a566d67740', 'width': 1024, 'height': 1024}, 'resolutions': [{'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?width=108&crop=smart&auto=webp&s=5d330fe22b9d7381b70af40ed819eb9deda7303f', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?width=216&crop=smart&auto=webp&s=8701b9b16e2f4b5d7517dc17e8214367046c2b93', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?width=320&crop=smart&auto=webp&s=1fdb638bb87af4693ff03e35f9ec14d46eb58c80', 'width': 320, 'height': 320}, {'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?width=640&crop=smart&auto=webp&s=f53fd6a7e2f783808db543a925d92464a6998309', 'width': 640, 'height': 640}, {'url': 'https://external-preview.redd.it/f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs.png?width=960&crop=smart&auto=webp&s=3b4d4d3e9cc82a12f7955ab57b901fd4a11ec0df', 'width': 960, 'height': 960}], 'variants': {}, 'id': 'f-1x2ExaP5FKb0UU4C7o5VvHmP5pyYKYll6Ql64kRXs'}], 'enabled': False}