Dataset schema (one row per post):

| column | type | range / classes |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Is there a good LLM eval for agentic use?
0
Like SWE-bench but for general agentic use
2026-01-15T06:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1qdc9el/is_there_a_good_llm_eval_for_agentic_use/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qdc9el
false
null
t3_1qdc9el
/r/LocalLLaMA/comments/1qdc9el/is_there_a_good_llm_eval_for_agentic_use/
false
false
self
0
null
Mistral releases Ministral 3 paper
135
details: >We introduce the Ministral 3 series, a family of parameter-efficient dense language models designed for compute and memory constrained applications, available in three model sizes: 3B, 8B, and 14B parameters. For each model size, we release three variants: a pretrained base model for general-purpose use, an instruction finetuned, and a reasoning model for complex problem-solving. In addition, we present our recipe to derive the Ministral 3 models through Cascade Distillation, an iterative pruning and continued training with distillation technique. Each model comes with image understanding capabilities, all under the Apache 2.0 license.
2026-01-15T06:16:31
https://arxiv.org/abs/2601.08584
Old-School8916
arxiv.org
1970-01-01T00:00:00
0
{}
1qdbxei
false
null
t3_1qdbxei
/r/LocalLLaMA/comments/1qdbxei/mistral_releases_ministral_3_paper/
false
false
default
135
null
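The abstract above describes Cascade Distillation as iterative pruning plus continued training with distillation. As a rough illustration of the distillation half only (a standard soft-label KD loss, not Mistral's actual recipe), a PyTorch sketch might look like this:

```python
# Rough sketch of a soft-label distillation loss (standard KD formulation,
# NOT Mistral's Cascade Distillation recipe): the pruned student is trained
# to match the teacher's temperature-softened token distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # logits: (batch, seq_len, vocab)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in the classic KD setup
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# usage sketch with made-up shapes
student_logits = torch.randn(2, 16, 32000)
teacher_logits = torch.randn(2, 16, 32000)
loss = distillation_loss(student_logits, teacher_logits)
```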
Stop LLM bills from exploding: I built Budget guards for LLM apps – auto-pause workflows at $X limit
0
Hey everyone, I think by now almost everyone has run into AI agents that get stuck in a retry loop. I woke up to a $1K OpenAI bill for what should've been $5. I know I'm not alone because I've seen this on HN/Twitter constantly: * "My agent cost $12K in 3 days" * "Customer ran our AI feature 10,000 times, we ate the cost" * "No idea how to bill customers when costs vary 100x per request" So I built UsageFlow - billing infrastructure specifically for AI/LLM apps. **The 3 features people asked for most:** 1. **Budget Guards** - Auto-pause workflows at a $X limit (no more surprise bills; a minimal sketch follows below) 2. **Workflow Tracking** - See cost per multi-step agent run, not scattered tokens 3. **Outcome Billing** - Charge customers for completions, not attempts **Current status:** * Working MVP in Docker Compose * 0 users (just finished building) * Planning AWS deployment Other features: - Smart model routing (save 30-40% by auto-switching to cheaper models) - Agent efficiency analytics (which model is actually worth the cost?) - Predictive alerts (customer will hit the limit in 3 days) - LiteLLM integration (auto-track all calls) Before I make a demo video, I'm curious what the community thinks. **Questions for you:** 1. **Have you been burned by unexpected AI costs?** (trying to gauge if this is common) 2. **Would you pay for this, and how much, vs. building in-house?** (honest feedback only) 3. **What's missing from this feature list that would be a dealbreaker?** Genuinely appreciate brutal honesty. **Tech:** FastAPI, PostgreSQL, React, LiteLLM integration
2026-01-15T06:00:15
https://www.reddit.com/r/LocalLLaMA/comments/1qdbmbc/stop_llm_bills_from_exploding_i_built_budget/
Extension_Key_5970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qdbmbc
false
null
t3_1qdbmbc
/r/LocalLLaMA/comments/1qdbmbc/stop_llm_bills_from_exploding_i_built_budget/
false
false
self
0
null
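The post above describes Budget Guards that auto-pause a workflow at a $X limit. A minimal sketch of that idea in plain Python; the class name, exception, and per-token prices are hypothetical, not the UsageFlow API:

```python
# Minimal sketch of a per-workflow budget guard: accumulate estimated cost per
# LLM call and pause the workflow once a dollar limit is hit. All names here
# (BudgetGuard, BudgetExceeded) and the prices are illustrative assumptions.
class BudgetExceeded(Exception):
    pass

class BudgetGuard:
    def __init__(self, limit_usd: float, price_per_1k_in: float, price_per_1k_out: float):
        self.limit_usd = limit_usd
        self.price_in = price_per_1k_in
        self.price_out = price_per_1k_out
        self.spent_usd = 0.0

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.spent_usd += (prompt_tokens / 1000) * self.price_in
        self.spent_usd += (completion_tokens / 1000) * self.price_out
        if self.spent_usd >= self.limit_usd:
            raise BudgetExceeded(f"spent ${self.spent_usd:.2f} >= limit ${self.limit_usd:.2f}")

# usage sketch inside an agent loop
guard = BudgetGuard(limit_usd=5.0, price_per_1k_in=0.005, price_per_1k_out=0.015)
try:
    for step in range(1000):  # a runaway retry loop
        # ... call the model, read token usage from the response ...
        guard.record(prompt_tokens=1200, completion_tokens=400)
except BudgetExceeded as e:
    print("workflow paused:", e)
```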
LFM 2.5 is insanely good
101
It's the first model at ~1B that I find not just useful but actually good, comparable to models 3x larger. Every time an ultra-small model launches with impressive benchmark numbers, it's always the same thing: infinite loops, breaking in multi-turn conversations, not knowing basic facts like the size of an elephant, etc... And it is very good at my native language (Portuguese) despite it not being officially supported. But this is different: the benchmarks seem to reflect its performance really well, and it feels somewhere in between Llama 2 7B and Llama 3 8B. You should try it. I am running it at Q6 and having excellent results for simple tasks like basic QA and summarization. The jump from LFM2 makes me excited about the 8B-A1B MoE model.
2026-01-15T05:22:51
https://www.reddit.com/r/LocalLLaMA/comments/1qdax6z/lfm_25_is_insanely_good/
guiopen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qdax6z
false
null
t3_1qdax6z
/r/LocalLLaMA/comments/1qdax6z/lfm_25_is_insanely_good/
false
false
self
101
null
Silent UI presence test. Fully offline (airplane mode proof)
0
No audio on purpose, this was a UI/presence test. Airplane mode to prove it’s fully local/offline. Happy to post a deeper architecture breakdown if anyone wants it. Next clip will show full phase states, voice loop & live interaction once I finish polishing.
2026-01-15T04:34:32
https://v.redd.it/se4en9tryfdg1
The-Build
v.redd.it
1970-01-01T00:00:00
0
{}
1qd9yi3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/se4en9tryfdg1/DASHPlaylist.mpd?a=1771043689%2CMDRiZDcwMDQ3YmU1NGYxYzA3NmU1N2U4MmU4N2I2MmZhZWZlYzUwMTczMTM0NTcwYTQzNjMyMGU5NzUxMjVjZg%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/se4en9tryfdg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1062, 'hls_url': 'https://v.redd.it/se4en9tryfdg1/HLSPlaylist.m3u8?a=1771043689%2CZGFiYThjZWFhOGIzOTIyMGEyYzVkZTllOTJhNTYyYTYzNTRjODY2ODFkMWFjNDEzYzU4N2QwNDMxZjAxODJiMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/se4en9tryfdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qd9yi3
/r/LocalLLaMA/comments/1qd9yi3/silent_ui_presence_test_fully_offline_airplane/
false
false
https://external-preview…d3bdebb70f73b35a
0
{'enabled': False, 'images': [{'id': 'ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=108&crop=smart&format=pjpg&auto=webp&s=bdfc5ef6b679610e2015e232a7f89827b7a87afe', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=216&crop=smart&format=pjpg&auto=webp&s=b58f7f2e39213fae8e8e7fabed3cca5d9407fb1d', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=320&crop=smart&format=pjpg&auto=webp&s=6d0681bae6ef159ebd649cc1d6b510f12c4a8373', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=640&crop=smart&format=pjpg&auto=webp&s=db7f8b1472899d9837f6db5260f1d221e50cff2c', 'width': 640}, {'height': 530, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=960&crop=smart&format=pjpg&auto=webp&s=979a3bff72e8d829ab5d4fcf6bc77d669be189e1', 'width': 960}, {'height': 597, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a63a9c24c21374ab005007fe8f950adfba38aa6b', 'width': 1080}], 'source': {'height': 796, 'url': 'https://external-preview.redd.it/ZDA3Y2xsdHJ5ZmRnMdbKLl1GK7U0RMU7MI7B4plhdZzjOGGBRYBhiYvXHfkh.png?format=pjpg&auto=webp&s=5319b86871ff15087a829cc37b6b2a10c109f89e', 'width': 1440}, 'variants': {}}]}
Open-source tamper-evident audit log for AI agent actions (early, looking for feedback)
0
Hey all, I’ve been working on a small open-source tool called **AI Action Ledger** and wanted to share it here to get feedback from people building agentic systems. **What it is:** A lightweight, append-only audit log for AI agent actions (LLM calls, tool use, chain steps) that’s **tamper-evident** via cryptographic hash chaining. If an event is logged, you can later prove it wasn’t silently modified. **What it’s *not*:** * Not a safety / alignment system * Not compliance (no SOC2, HIPAA, etc.) * Does *not* guarantee completeness, only integrity of what’s logged **Why I built it:** When debugging agents or reviewing incidents, I kept wanting a reliable record of what the agent actually did. This gives you a verifiable trail without storing raw prompts or outputs by default (hashes + metadata only). **Current state:** * Self-hosted backend (FastAPI + Postgres + JSONL archive) * Python SDK * Working LangChain callback * Simple dashboard * Fully documented, early but tested **Repo:** [https://github.com/Jreamr/ai-action-ledger](https://github.com/Jreamr/ai-action-ledger) **Early access / feedback:** [https://github.com/Jreamr/ai-action-ledger/discussions](https://github.com/Jreamr/ai-action-ledger/discussions) Very open to criticism, especially from folks who’ve run into agent debugging, observability, or audit-trail problems before.
2026-01-15T03:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1qd94f5/opensource_tamperevident_audit_log_for_ai_agent/
Big-Put8683
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd94f5
false
null
t3_1qd94f5
/r/LocalLLaMA/comments/1qd94f5/opensource_tamperevident_audit_log_for_ai_agent/
false
false
self
0
{'enabled': False, 'images': [{'id': 'v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=108&crop=smart&auto=webp&s=730a8b38ffc97f02bb39b3e0b04d122cce8b780c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=216&crop=smart&auto=webp&s=7bdde9ee7ce2492468c0100b70325960380b9ad9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=320&crop=smart&auto=webp&s=f57b10e4e26ff3108572b43f9940fbf325d89fc1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=640&crop=smart&auto=webp&s=6f204783c93ef6d24f54ffc693ecc4212d079087', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=960&crop=smart&auto=webp&s=d7ceae49b4681979a655075b590a33a99b110f3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?width=1080&crop=smart&auto=webp&s=d6279abe20a07604a3092cfb7d7c71dfe6164f39', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v7Ta8OGE20vhf8PB2G1Zxhv7gb9VB0uPrZB-ST1T5yg.png?auto=webp&s=19e93baf4ac624451983be023dc047ba3d1b3380', 'width': 1200}, 'variants': {}}]}
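The ledger above is tamper-evident via cryptographic hash chaining. A minimal sketch of the general idea (field names are illustrative, not the AI Action Ledger schema):

```python
# Minimal sketch of an append-only, hash-chained event log: each entry stores
# the hash of the previous entry, so any later modification breaks the chain.
# Field names are illustrative, not the AI Action Ledger schema.
import hashlib
import json
import time

def _entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
    entry["hash"] = _entry_hash(entry)  # hash covers ts, event, and prev_hash
    log.append(entry)

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash or _entry_hash(body) != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append(log, {"tool": "search", "args_sha256": "<sha256-of-args>"})
append(log, {"tool": "write_file", "args_sha256": "<sha256-of-args>"})
assert verify(log)
```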
stepfun-ai/Step3-VL-10B · Hugging Face
93
[stepfun-ai/Step3-VL-10B · Hugging Face](https://huggingface.co/stepfun-ai/Step3-VL-10B)
2026-01-15T03:51:48
https://i.redd.it/88t4oaa3rfdg1.png
TKGaming_11
i.redd.it
1970-01-01T00:00:00
0
{}
1qd92pm
false
null
t3_1qd92pm
/r/LocalLLaMA/comments/1qd92pm/stepfunaistep3vl10b_hugging_face/
false
false
default
93
{'enabled': True, 'images': [{'id': '88t4oaa3rfdg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=108&crop=smart&auto=webp&s=49ab222faa77216e1d25d4070f1afc94aaa080a9', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=216&crop=smart&auto=webp&s=8b563c76fb1a95118cb9e6c2eee734d9a470027e', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=320&crop=smart&auto=webp&s=1be64a7873566c94c1710f7e898b8c03297bd732', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=640&crop=smart&auto=webp&s=88fb05f58d5a4493756c36b2bd8d44c42fe3366b', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=960&crop=smart&auto=webp&s=bc157beeca0cf979f3d57ae8d7973b240d02cf99', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?width=1080&crop=smart&auto=webp&s=93221f03c05dc314ad2afd078df4ab13565b97d1', 'width': 1080}], 'source': {'height': 686, 'url': 'https://preview.redd.it/88t4oaa3rfdg1.png?auto=webp&s=62eebb0043ac9492fe9c01c704a99f62c9b27b92', 'width': 1200}, 'variants': {}}]}
Best AI TTS model?
5
Hello everyone, I was wondering if anyone could help me figure out what the best English AI TTS model is. I am hoping to start my YouTube channel, but I can't speak eloquently enough, so I feel like an AI TTS model could help me out with that. Can anyone tell me anything they know about the topic, and what the best 1. paid and 2. free models for AI TTS are? Thank you very much.
2026-01-15T03:42:36
https://www.reddit.com/r/LocalLLaMA/comments/1qd8vwn/best_ai_tts_model/
Commercial-Wear4453
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd8vwn
false
null
t3_1qd8vwn
/r/LocalLLaMA/comments/1qd8vwn/best_ai_tts_model/
false
false
self
5
null
Claude Code or OpenCode which one do you use and why?
15
I’m curious what people here are using more for coding: **Claude Code** or **OpenCode**. Which one do you personally prefer, and *why*? Is it better reasoning, speed, pricing, rate limits, editor integration, or something else? Would love to hear real-world experiences and tradeoffs. Thanks!
2026-01-15T03:42:21
https://www.reddit.com/r/LocalLLaMA/comments/1qd8vpj/claude_code_or_opencode_which_one_do_you_use_and/
Empty_Break_8792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd8vpj
false
null
t3_1qd8vpj
/r/LocalLLaMA/comments/1qd8vpj/claude_code_or_opencode_which_one_do_you_use_and/
false
false
self
15
null
I trained a model to 'unslop' AI prose
198
I ran passages from Project Gutenberg through GPT-4o-mini 10 times over, each time telling it to "make it read far better, adding superior prose, etc.". This lead to classic literary passages being enslopped. I then reversed this pipeline, and trained a model to go from \[slop\] -> \[original\]. The resulting model is capable enough to fool Pangram (a fairly robust AI detector - I take this as a metric of how 'human-sounding' the output is), at very little overall quality cost: [While quality decreases slightly, humanness jumps from 0 to 0.481. The unslopped version stays firmly above Mistral Large 3 and close to the original GPT-5.2 baseline.](https://preview.redd.it/go88234vifdg1.png?width=2817&format=png&auto=webp&s=fed2c84e748f4441648e9f53c891258d78ccbb0a) Of course, the model is OSS: [https://huggingface.co/N8Programs/Unslopper-30B-A3B-bf16](https://huggingface.co/N8Programs/Unslopper-30B-A3B-bf16) And there is a twitter thread (complete with 'thread below 👇', ironic in a thread about slop): [https://x.com/N8Programs/status/2011591738591494625](https://x.com/N8Programs/status/2011591738591494625) The goal here is **not** to fool Pangram, deceive/cheat, etc. I only use Pangram as a proxy for the prose being more readable - ideally, you'd use this model to make your own AI-generated passages more palatable to read, or as part of a larger pipeline in automated writing generation for training, for instance. Example (GPT-5.2): The wind rose all at once, as if the mountain itself had inhaled. She stood at the edge of the pass, cloak snapping like a torn banner behind her, staring down at the valley where the last lights were going out one by one. Each flame extinguished felt deliberate—an answer to a question she had been too afraid to ask. Somewhere below, bells began to ring, not in warning, but in mourning. They already knew. “Don’t,” he said, breathless, reaching for her arm. His fingers brushed fabric, then air. “If you cross that line, there’s no coming back.” She didn’t turn. The cold burned her lungs, sharpened her thoughts. “There was never a way back,” she replied softly. “Only a way forward you were willing to pretend didn’t exist.” Thunder rolled, low and ancient. For a moment, the sky fractured into white fire, and in that instant he saw her face—resolved, terrified, incandescent with purpose. Not the girl he had sworn to protect, but the woman the world had demanded she become. “You’ll die,” he said. She smiled then, finally facing him, eyes bright with unshed tears. “So will everyone,” she answered. “The difference is what we leave burning when we’re gone.” She stepped forward. The mountain exhaled. **Pangram Verdict:** High Confidence AI Generated Unslopped (Temp=0.8, Rep Penalty=1.1, 6-bit MLX quant): And then it just happened. Like the whole of reality had been held breathless for a moment, and suddenly sprang into focus. She stood at the pass's edge. Her cloak flapped behind her like a banner carried to its last stop. She was looking down into the valley, at lights being snuffed out one by one. Each flame extinguished was a point of answer to questions she had dared not ask. And now bells were ringing, not a signal but a dirge, because they already knew. "Please," he said. He reached for her; his hand closed on the snaking line of fabric that trailed behind her, then nothing. "You can't cross over." She didn't turn to him. It was cold enough to bite at her throat; she breathed it in, sharp and full of poison. "There was never a way out," she said softly. 
"Only the road in that you did not want to remember." A low rumble of thunder, older than mountains, filled the air. A white flash lit up the sky for a moment, a vivid static song of fire that illuminated her face in an instant. It was no longer the girl he'd sworn to protect, but the woman she had become. "You'll die." She smiled at him and turned toward him. There were tears on her face, but they did not fall. "We all do," she said. "The difference is what we leave behind us." She started walking toward the edge. And it all happened at once. The mountain exhaled itself, and took her with it. **Pangram Verdict:** High Confidence Human Written Note that there are some local coherence issues w/ the Unslopper - that's why I'd recommend integrating it into a larger pipeline or editing its output yourself. It's definitely not production ready.
2026-01-15T03:12:29
https://www.reddit.com/r/LocalLLaMA/comments/1qd88v2/i_trained_a_model_to_unslop_ai_prose/
N8Karma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd88v2
false
null
t3_1qd88v2
/r/LocalLLaMA/comments/1qd88v2/i_trained_a_model_to_unslop_ai_prose/
false
false
https://b.thumbs.redditm…fXyYG6Tn3zms.jpg
198
null
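The post above builds training pairs by repeatedly "enslopping" Project Gutenberg passages with GPT-4o-mini and then training in the reverse direction, [slop] -> [original]. A rough sketch of assembling such pairs; the prompt wording and OpenAI client usage are assumptions about the setup, not the author's exact pipeline:

```python
# Rough sketch of the pair-building idea: repeatedly "improve" a passage with
# a small model, then store (slopped, original) pairs for reverse training.
# The prompt wording and client usage are assumptions, not the author's
# exact pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ensloppify(passage: str, rounds: int = 10) -> str:
    text = passage
    for _ in range(rounds):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Make this read far better, adding superior prose:\n\n" + text}],
        )
        text = resp.choices[0].message.content
    return text

def build_pair(original: str) -> dict:
    # training direction is reversed: input = slop, target = original passage
    return {"input": ensloppify(original), "target": original}
```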
Which is relatively more user-friendly, cline or opencode
0
cline vs opencode
2026-01-15T03:09:18
https://www.reddit.com/r/LocalLLaMA/comments/1qd86ex/which_is_relatively_more_userfriendly_cline_or/
Slow_Independent5321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd86ex
false
null
t3_1qd86ex
/r/LocalLLaMA/comments/1qd86ex/which_is_relatively_more_userfriendly_cline_or/
false
false
self
0
null
Which is relatively better to use, cline or opencode?
1
RT
2026-01-15T03:07:43
https://www.reddit.com/r/LocalLLaMA/comments/1qd856x/cline_和_opencode_哪个相对更好用/
Slow_Independent5321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd856x
false
null
t3_1qd856x
/r/LocalLLaMA/comments/1qd856x/cline_和_opencode_哪个相对更好用/
false
false
self
1
null
The most immersion-breaking thing in AI chat isn’t hallucination.
1
[removed]
2026-01-15T02:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1qd7ezl/the_most_immersionbreaking_thing_in_ai_chat_isnt/
OkBicycle3812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd7ezl
false
null
t3_1qd7ezl
/r/LocalLLaMA/comments/1qd7ezl/the_most_immersionbreaking_thing_in_ai_chat_isnt/
false
false
self
1
null
Stop paying OpenAI to re-read your history. Why "Context Stuffing" is bankrupting your Agents (Benchmarks).
0
I audited a workflow last week where an "AI Agency" was pasting a 50-page PDF into GPT-4o's context window for *every single user query*. Their API bill was **~$400/week**. Latency was **12 seconds per loop**. The bot hallucinated anything past token #60k ("Lost in the Middle" phenomenon). If you are building autonomous loops, relying on the 128k context window is not "memory." It is a cash incinerator. **The shift from "Chatbot" to "Agent" requires shifting from Context (RAM) to Vector (Disk).** We ran benchmarks on the "Big 3" for agentic state management (not just search). Here is the architectural reality for 2026: ### 1. The "Rent" Option (Pinecone) * **Good for:** Speed. If you just need a webhook to fire and forget. * **Bad for:** Complex multi-agent swarms. The "Serverless" billing model gets toxic when your agent enters a high-frequency loop. * **Benchmark Note:** We observed embedding write frequency hitting ~1 write / 3s per agent loop, which spikes costs linearly on consumption plans. ### 2. The "Sovereign" Option (Qdrant/Weaviate) * **The Alpha:** Binary Quantization. You can compress vectors by 30x. * **The Cost:** We moved that $400/week client to a self-hosted instance. Cost dropped to **~$20/mo fixed**. Latency dropped to <400ms. ### The Architecture Stop treating memory as a "Prompt Engineering" problem. It is a **Database Engineering** problem. `User Input` -> `Embedding` -> `Vector Lookup (Top 3)` -> `Inject Context` -> `LLM` (a minimal sketch of this loop follows below). If you aren't separating **Episodic Memory** (Logs) from **Semantic Memory** (Knowledge), your agent is just a goldfish with a credit card. I wrote up the full comparison and the specific "L1/L2" memory stack here for those building production swarms: [Source: The 2026 Vector Database Comparison](https://ranksquire.com/2026/01/07/best-vector-database-ai-agents/) *If anyone has seen different behavior past ~60k tokens regarding the "Lost in the Middle" retrieval drop-off, I’d be curious what model or retrieval stack you’re testing with.*
2026-01-15T02:11:06
https://www.reddit.com/r/LocalLLaMA/comments/1qd6vkn/stop_paying_openai_to_reread_your_history_why/
ConfusionTerrible238
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd6vkn
false
null
t3_1qd6vkn
/r/LocalLLaMA/comments/1qd6vkn/stop_paying_openai_to_reread_your_history_why/
false
false
self
0
null
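A minimal sketch of the pipeline the post spells out (user input -> embedding -> top-3 vector lookup -> inject context -> LLM); `embed()` and `call_llm()` are placeholders for whatever embedding model and LLM endpoint are actually used:

```python
# Minimal sketch of: user input -> embedding -> top-3 vector lookup ->
# inject context -> LLM. embed() and call_llm() are placeholders.
import numpy as np

def top_k(query_vec: np.ndarray, memory_vecs: np.ndarray, memory_texts: list, k: int = 3) -> list:
    # cosine similarity against every stored memory vector
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]
    return [memory_texts[i] for i in best]

def answer(user_input: str, memory_vecs, memory_texts, embed, call_llm) -> str:
    context = top_k(embed(user_input), memory_vecs, memory_texts, k=3)
    prompt = "Relevant memory:\n" + "\n".join(context) + "\n\nUser: " + user_input
    return call_llm(prompt)
```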
Zhipu AI breaks US chip reliance with first major model trained on Huawei stack (GLM-Image)
406
2026-01-15T02:01:03
https://www.scmp.com/tech/tech-war/article/3339869/zhipu-ai-breaks-us-chip-reliance-first-major-model-trained-huawei-stack
fallingdowndizzyvr
scmp.com
1970-01-01T00:00:00
0
{}
1qd6nho
false
null
t3_1qd6nho
/r/LocalLLaMA/comments/1qd6nho/zhipu_ai_breaks_us_chip_reliance_with_first_major/
false
false
default
406
{'enabled': False, 'images': [{'id': '67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=108&crop=smart&auto=webp&s=1c1f53d30677d0b88bec9c3bb1fe6380fc24b9b7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=216&crop=smart&auto=webp&s=db753ffec17c1a8ca3e9efc7f46464a563277376', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=320&crop=smart&auto=webp&s=4c7a73d6e48f527742622ccf8676f6e4bfc73e02', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=640&crop=smart&auto=webp&s=1d29623e3dc2c928508ca7ce7d10f296f2a1d15a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=960&crop=smart&auto=webp&s=19c9703a1e4d06cdb0286b7c981375268691cf15', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?width=1080&crop=smart&auto=webp&s=5d8aea745cf21bda0b4282ed2282e110cad3b7b5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/67JUXSnUreB8wTlODdM32UrKgKSfJgeIROoAbEyBScs.jpeg?auto=webp&s=2844e8c5c925d50a613c86b6c1620525fdad0a2a', 'width': 1200}, 'variants': {}}]}
Stop treating LLM context as a linear chat: We need a Context-Editing IDE for serious engineering and professional project development
3
Editing an image is purely cosmetic, but managing context is structural engineering. Currently, we are forced into a linear rigidity that poisons project logic with redundant politeness and conversational noise. For serious engineering and professional project development, I’m not looking for an AI that apologizes for its mistakes; **I’m looking for a context-editing IDE where I can perform a surgical Git Rebase on the chat memory.** The industry is obsessed with bigger context windows, yet we lack the tools to manage them efficiently. We need the ability to prune paths that lead nowhere and break the logic loops that inevitably degrade long-form development. Clearing out social ACK packets to free up reasoning isn't about inducing amnesia—it’s about compute efficiency, corporate savings, and developer flow. It is a genuine win-win for both the infrastructure and the user. We must evolve from the assisted chatbot paradigm into a professional environment of state manipulation and thought-editing. Only the organizations or open-source projects that implement this level of control will take a giant leap toward true effectiveness, in my view. The "chat" interface has become the very bottleneck we need to overcome to **reach the next level of professional productivity.**
2026-01-15T01:41:25
https://www.reddit.com/r/LocalLLaMA/comments/1qd67i3/stop_treating_llm_context_as_a_linear_chat_we/
Chemical-Skin-3756
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd67i3
false
null
t3_1qd67i3
/r/LocalLLaMA/comments/1qd67i3/stop_treating_llm_context_as_a_linear_chat_we/
false
false
self
3
null
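The post above asks for surgical editing of chat memory, for example clearing "social ACK packets" before they burn reasoning budget. A toy sketch of pruning low-information turns from an OpenAI-style message list; the ACK heuristic is deliberately naive and only illustrative:

```python
# Toy sketch of the "prune the context" idea: drop low-information
# acknowledgement turns from an OpenAI-style message list before the next
# request. The ACK heuristic here is deliberately naive.
ACK_PHRASES = ("thanks", "thank you", "got it", "sounds good", "ok", "okay", "sure")

def is_ack(msg: dict) -> bool:
    text = msg["content"].strip().lower()
    return len(text) < 40 and any(text.startswith(p) for p in ACK_PHRASES)

def prune_history(messages: list, keep_system: bool = True) -> list:
    pruned = []
    for msg in messages:
        if msg["role"] == "system" and keep_system:
            pruned.append(msg)
        elif not is_ack(msg):
            pruned.append(msg)
    return pruned

history = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Refactor the parser into a state machine."},
    {"role": "assistant", "content": "Here is the refactored parser: ..."},
    {"role": "user", "content": "Thanks, looks good!"},
]
print(prune_history(history))  # the "Thanks" turn is dropped
```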
My wishes for 2026
0
I figured there should be *some* representation for this particular demographic
2026-01-15T01:38:41
https://i.redd.it/x222ipj93fdg1.png
ElementNumber6
i.redd.it
1970-01-01T00:00:00
0
{}
1qd659n
false
null
t3_1qd659n
/r/LocalLLaMA/comments/1qd659n/my_wishes_for_2026/
false
false
default
0
{'enabled': True, 'images': [{'id': 'x222ipj93fdg1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/x222ipj93fdg1.png?width=108&crop=smart&auto=webp&s=849b2acfd5f9307b324105ca4c8b7c5b2a57d23b', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/x222ipj93fdg1.png?width=216&crop=smart&auto=webp&s=c3154e67f72ecca682a962bf379173d97b38cbb4', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/x222ipj93fdg1.png?width=320&crop=smart&auto=webp&s=a51cde9b4dd9450197b6fb0f353392b317276931', 'width': 320}], 'source': {'height': 768, 'url': 'https://preview.redd.it/x222ipj93fdg1.png?auto=webp&s=21ea3910cb4de7ddbfe7b623d510571e2ad60650', 'width': 512}, 'variants': {}}]}
Slow week
0
Feels like it has been a slow week in AI, as a life-changing AI model hasn't been dropped in 3 days.
2026-01-15T00:55:13
https://www.reddit.com/r/LocalLLaMA/comments/1qd55ly/slow_week/
emperorofrome13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd55ly
false
null
t3_1qd55ly
/r/LocalLLaMA/comments/1qd55ly/slow_week/
false
false
self
0
null
AgentStudio: A VLA-based Kiosk Automation Agent using Gemini 3 and LangGraph
0
Hi everyone, I’d like to share **AgentStudio**, an open-source project we’ve been working on at Pseudo-Lab. We built an AI agent system specifically designed to bridge the intergenerational knowledge gap by automating complex kiosk UIs. https://preview.redd.it/cmif3co8vedg1.png?width=2816&format=png&auto=webp&s=9cc789583a7c9af6911b5182ff2179040ca7c77f **Key Technical Highlights:** * **VLA (Vision-Language-Action) Paradigm:** The agent "sees" the Android screen via ADB, reasons with Gemini 3 (Flash/Pro), and executes actions directly. * **LangGraph-based State Machine:** We managed the complex workflow (including loops and interrupts) using LangGraph for better reliability. * **Human-in-the-Loop (HITL):** When the agent encounters subjective choices (like menu options), it interrupts the flow to ask the user via a real-time dashboard. * **AG-UI Protocol:** We implemented a standardized communication protocol between the agent and our Next.js dashboard using SSE. **Upcoming Roadmap:** * Integration with **Gemma** for on-device/local execution. * Support for Google ADK and Microsoft Agent Framework. We’d love to get some feedback from the community! github : [https://github.com/Pseudo-Lab/Agent\_Studio](https://github.com/Pseudo-Lab/Agent_Studio)
2026-01-15T00:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1qd54bx/agentstudio_a_vlabased_kiosk_automation_agent/
AIsimons
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd54bx
false
null
t3_1qd54bx
/r/LocalLLaMA/comments/1qd54bx/agentstudio_a_vlabased_kiosk_automation_agent/
false
false
https://b.thumbs.redditm…kj17xq5ocA3Y.jpg
0
null
[Project] 55.8 t/s Hybrid 44GB VRAM Cluster: Shaming the "Incompatible" MI50 with NVIDIA 4070 + llama.cpp RPC
1
[removed]
2026-01-15T00:36:28
https://www.reddit.com/r/LocalLLaMA/comments/1qd4q2u/project_558_ts_hybrid_44gb_vram_cluster_shaming/
xxDoman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd4q2u
false
null
t3_1qd4q2u
/r/LocalLLaMA/comments/1qd4q2u/project_558_ts_hybrid_44gb_vram_cluster_shaming/
false
false
https://b.thumbs.redditm…mARqO93ioO_Y.jpg
1
null
How Many Real Models Are There?
0
NOTE: I have re-written my original post using an LLM for better syntax (I am not a native English speaker). Let me propose something that might sound conspiratorial, but actually aligns with what we’re observing: The Core Hypothesis: There’s evidence suggesting that many AI providers claiming to run “proprietary models” are actually routing requests through a shared infrastructure layer - potentially a single foundational model or a centralized inference cluster. Here’s why this makes technical and economic sense: 1. Infrastructure Economics: Training and maintaining LLMs at scale requires: - Massive GPU clusters (10,000+ H100s for competitive models) - Petabytes of training data with proper licensing - Specialized MLOps infrastructure for inference optimization - Continuous RLHF pipelines with human feedback loops The capital expenditure alone ranges from $50M-500M per competitive model. For smaller providers claiming “proprietary models,” these numbers don’t add up with their funding rounds or revenue. 2. The White-Label Infrastructure Pattern: We’ve seen this before in cloud services: - Multiple “different” CDN providers actually routing through Cloudflare/Fastly - “Independent” payment processors using Stripe’s infrastructure - Various “AI chips” that are just rebadged NVIDIA silicon The AI model space likely follows the same pattern. Providers take a base model (GPT-4, Claude, or even an unreleased foundation model), apply minor fine-tuning or prompt engineering, wrap it in their own API, and market it as “proprietary.” 3. Technical Evidence from the Outage: What we observed: - Simultaneous failures across supposedly independent providers - Identical error patterns (rate limiting, timeout behaviors, response degradation) - Synchronized recovery times - if these were truly independent systems, we’d see staggered recovery This suggests: - Shared rate limiting infrastructure - Common upstream dependency (likely a model hosting service) - Single point of failure in the inference pipeline 4. What About “Model Fingerprinting”? You might ask: “But different providers give different outputs!” True, but this can be achieved through: - System prompts: Different instructions prepended to every request - Temperature/sampling tweaks: Slight parameter variations - Post-processing layers: Filtering, reformatting, style transfer - Fine-tuning on small datasets: Giving the illusion of uniqueness while using the same base The Uncomfortable Conclusion: When Anthropic (Claude) goes down and suddenly 10+ “different AI providers” fail simultaneously, it’s not a coincidence. It’s a cascading failure in a shared infrastructure that the industry doesn’t openly discuss. The “AI diversity” in the market might be largely theatrical - a handful of actual model providers with dozens of resellers creating the illusion of choice.
2026-01-15T00:31:22
https://www.reddit.com/r/LocalLLaMA/comments/1qd4luu/how_many_real_model_are_there/
No-Signature8559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd4luu
false
null
t3_1qd4luu
/r/LocalLLaMA/comments/1qd4luu/how_many_real_model_are_there/
false
false
self
0
null
Built a runtime with native llama.cpp bindings (Run local inference from JavaScript without spinning up a server)
0
I've been working on Elide, a runtime that has llama.cpp baked in at the native layer (Rust bindings via JNI). The idea is pretty simple: run local inference from the language you're writing in, without spinning up llama-server or making localhost HTTP calls. Models load from disk (GGUF) or download from HuggingFace and cache locally. Inference runs in the same process as your app, which means no network round-trips, no serialization overhead, and data never leaves your machine. * Native layer: Rust crate with JNI bindings to llama.cpp * Runtime bridge: Kotlin layer handles model lifecycle and streaming * Guest access: Exposed to JS (Python is coming very soon) via runtime intrinsics There's this dance where you spin up llama-server, then call localhost:8080, for every project. We wanted inference to be a function call, never an infrastructure problem. So far, GPU acceleration works, GGUF models work, HuggingFace download/cache works. To be honest, **this isn't a replacement for llama.cpp** itself. It's llama.cpp embedded in a runtime so you can use it from higher-level languages without the server ceremony. Hope this is useful to anyone here! See `tools/scripts/local-ai.mts` GitHub: [https://github.com/elide-dev/elide](https://github.com/elide-dev/elide)
2026-01-15T00:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1qd4dfc/built_a_runtime_with_native_llamacpp_bindings_run/
Zealousideal-Read883
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd4dfc
false
null
t3_1qd4dfc
/r/LocalLLaMA/comments/1qd4dfc/built_a_runtime_with_native_llamacpp_bindings_run/
false
false
self
0
null
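Elide exposes in-process llama.cpp inference to JavaScript. For comparison, an in-process Python analogue of the same "function call, not a server" idea, using the separate llama-cpp-python bindings rather than Elide (the model path is a placeholder):

```python
# In-process GGUF inference from Python via llama-cpp-python (a separate
# project, shown only as an analogue of the "function call, not a server"
# idea; the model path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="./models/qwen2.5-3b-instruct-q4_k_m.gguf", n_ctx=4096)

out = llm(
    "Q: Summarize what an embedded inference runtime buys you.\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```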
llama.cpp has incredible performance on Ubuntu, I'd like to know why
46
[**https://www.phoronix.com/review/ubuntu-2604-jan-amd-epyc/4**](https://www.phoronix.com/review/ubuntu-2604-jan-amd-epyc/4)
2026-01-14T23:46:56
https://www.reddit.com/r/LocalLLaMA/comments/1qd3jk9/llamacpp_has_incredible_performance_on_ubuntu_id/
Deep_Traffic_7873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd3jk9
false
null
t3_1qd3jk9
/r/LocalLLaMA/comments/1qd3jk9/llamacpp_has_incredible_performance_on_ubuntu_id/
false
false
self
46
{'enabled': False, 'images': [{'id': 'o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?width=108&crop=smart&auto=webp&s=7ec7bf9f483825ab3420fbd6d63a86e3c33fa812', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?width=216&crop=smart&auto=webp&s=e677e81df411df113a6de30042205ec915c00d91', 'width': 216}, {'height': 237, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?width=320&crop=smart&auto=webp&s=4aeaffa7bc60acf847f9bb8dedd7182f0a3229a8', 'width': 320}, {'height': 475, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?width=640&crop=smart&auto=webp&s=de5ca44c4b80f96cd290f1652c2d6092739b8a39', 'width': 640}, {'height': 713, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?width=960&crop=smart&auto=webp&s=aca7cbe12dcfdd0bc746cb558074d267df4bb850', 'width': 960}], 'source': {'height': 758, 'url': 'https://external-preview.redd.it/o7OmenQJichxnXwIfHQxzTNVFYPueCCAmAEuwZRk-bs.jpeg?auto=webp&s=75e87048950b4dc896b7bc5adad0650112cbe466', 'width': 1020}, 'variants': {}}]}
Any uncensored / unfiltered AI with good intelligence?
0
Hello, I'm looking for good LLMs with solid intelligence. Right now I've only tried Venice and apifreellm, but I'm looking for more and better solutions. I'm so tired of restrictions that block almost every prompt when I do research.
2026-01-14T23:14:53
https://www.reddit.com/r/LocalLLaMA/comments/1qd2qwo/any_uncensored_unfiltered_ai_that_has_a_good/
Icy-Assignment-9344
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd2qwo
false
null
t3_1qd2qwo
/r/LocalLLaMA/comments/1qd2qwo/any_uncensored_unfiltered_ai_that_has_a_good/
false
false
self
0
null
New here and looking for help!
2
Background: I left banking nearly 12 months ago after watching AI transform the outside world while we were still building in Excel and sending faxes. Rather than completely poking around in the dark, I decided to actually start properly (at least for someone from a corporate banking background), so I took an AI solutions architecture course, then started building my own projects. My Hardware: Ryzen 9 9900X + RTX 5080 (32GB RAM). I assume this is probably overkill for a beginner, but I wanted room to experiment without being outdated in a month. Also I have a friend who builds gaming PCs and he helped a lot! As with every newbie, I started with cloud AI (Gemini, Claude, GPT) guiding my every move, which worked great until I saw new products being launched around the same projects I was chatting about - no doubt they'd been working on these for months before I even knew what AI was, but maybe not, so now I'm paranoid and worried about what I was sharing. Naturally I started exploring local LLMs, and despite my grand visions of building "my own Jarvis" (I'm not Tony Stark), I scaled back to something more practical. What I've built so far: - System-wide overlay tool (select text anywhere, hotkey, get AI response) - Multi-model routing (different models for different tasks; a minimal sketch follows below) - Works via Ollama (currently using Llama 3.2, CodeLlama, DeepSeek R1) - Replaces my cloud AI workflow for most daily tasks What I'm currently using it for: - Code assistance (my main use case) - Document analysis (contracts, technical docs) - General productivity (writing, research) So far it's fast enough and private, with no API costs, and I have many ideas about developing it further, but honestly I'm not sure whether I'm over-engineering this or if others have similar concerns, challenges, or workflow needs. So I have a few questions, if anyone could help: 1. Cloud AI privacy concerns - legitimate? Has anyone else felt uncomfortable with sensitive code/documents going to cloud providers? Or am I being overly ridiculous? 2. Model recommendations for task-specific routing? Currently using: - Llama 3.2 Vision 11B (general) - CodeLlama 13B (code) - DeepSeek R1 8B (reasoning) - GPT-OSS:20B (deep reasoning) What would you use with my setup? Are there any better alternatives? 3. Multi-model architecture - is routing between specialised models actually better than just running one bigger model? Or am I creating unnecessary complexity? 4. Biggest local LLM pain points (besides compute)? For me it's been: - Context window management - Model switching friction (before I built routing) - Lack of system-wide integration (before I built the overlay) What frustrates everyone most about local AI workflows? 5. If people don't mind sharing, why do you choose/need local, and what do you use it for vs the cloud? I'm curious about real use cases beyond "I don't trust cloud AI." Ultimately, I'm posting now because I've been watching some videos on YT, working on some side projects, still chatting to the cloud for some things, learned a ton, and finally built something that works for my workflow, but realised I haven't ever really looked outside my little box to see what others are doing, and so I found this community. Also curious about architectural approaches - I've been experimenting with multi-model routing inspired by MoE concepts, but genuinely don't know if that's smart design or just me over-complicating things, because I'm really enjoying building stuff.
Appreciate any feedback, criticism (preferably constructive but I'll take anything I can get), or "you're being a pleb - do this instead".
2026-01-14T23:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1qd2q8l/new_here_and_looking_for_help/
SaiXZen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd2q8l
false
null
t3_1qd2q8l
/r/LocalLLaMA/comments/1qd2q8l/new_here_and_looking_for_help/
false
false
self
2
null
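The post above routes different tasks to different Ollama models. A minimal sketch of task-based routing against Ollama's default local HTTP API; the routing table is illustrative and the model tags are taken from the post:

```python
# Minimal sketch of task-based routing to different local Ollama models.
# The routing table is illustrative; the endpoint is Ollama's default API.
import requests

ROUTES = {
    "code": "codellama:13b",
    "reasoning": "deepseek-r1:8b",
    "general": "llama3.2-vision:11b",
}

def ask(task: str, prompt: str) -> str:
    model = ROUTES.get(task, ROUTES["general"])
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

print(ask("code", "Write a Python function that parses ISO-8601 dates."))
```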
Stop keeping your Agent Skills in local files if you want them to be actually useful
0
The current trend with tools like Claude Code and Cursor is to have everyone define "Agent Skills" locally, usually tucked away in a hidden `.md` file or a local config. It works great for a solo dev, but it’s a complete dead-end for production. If your skills are trapped on your local machine, your LLM can't actually "use" them when you move to a hosted environment or try to share that capability with your team. The real breakthrough happens when you treat Agent Skills as a hosted registry. Instead of the agent reading a file from your disk, it fetches the skill definition from a gateway. This allows you to update a skill once and have it instantly reflected across every agent in your stack, whether it's running in your IDE, a CI/CD pipeline, or a production chatbot. The architecture shifts from "file-based prompting" to "dynamic skill discovery." When you host these skills, you can actually monitor which ones are being called, how often they fail, and what the latency looks like. It turns a local experiment into a manageable part of your infrastructure. If you're still copy-pasting skill definitions between projects, you're building a maintenance nightmare.
2026-01-14T23:06:45
https://www.reddit.com/r/LocalLLaMA/comments/1qd2jlj/stop_keeping_your_agent_skills_in_local_files_if/
Main-Fisherman-2075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd2jlj
false
null
t3_1qd2jlj
/r/LocalLLaMA/comments/1qd2jlj/stop_keeping_your_agent_skills_in_local_files_if/
false
false
self
0
null
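The post above argues for fetching skill definitions from a hosted gateway instead of local `.md` files. A minimal sketch of that fetch-and-inject step; the registry URL and response shape are entirely hypothetical:

```python
# Minimal sketch of "dynamic skill discovery": pull a skill definition from a
# hosted registry instead of a local .md file, then inject it into the system
# prompt. The registry URL and response shape are hypothetical.
import requests

REGISTRY = "https://skills.example.internal/v1/skills"

def load_skill(name: str) -> str:
    r = requests.get(f"{REGISTRY}/{name}", timeout=10)
    r.raise_for_status()
    return r.json()["definition"]  # markdown body of the skill

def build_system_prompt(skill_names: list) -> str:
    parts = ["You can use the following skills:"]
    parts += [load_skill(n) for n in skill_names]
    return "\n\n".join(parts)
```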
Best local LLM setup for VS Code + Continue on RTX 4060 Ti (16GB) & i9 11900?
2
Hi everyone, I'm getting into local AI and want to turn my PC into a local coding assistant using VS Code and the Continue extension. I'm currently studying Fine-Tuning (FT) and want to leverage my hardware for inference as well. **My Specs:** * **CPU:** Intel Core i9-11900 * **GPU:** RTX 4060 Ti (16GB VRAM) * **RAM:** 16GB With 16GB of VRAM, what model combinations (Chat vs. Autocomplete) do you recommend for the best balance of speed and coding capability? Is the DeepSeek-R1 series viable here, or should I stick to Qwen 2.5 Coder? Thanks!
2026-01-14T22:57:49
https://www.reddit.com/r/LocalLLaMA/comments/1qd2bah/best_local_llm_setup_for_vs_code_continue_on_rtx/
useralguempporai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd2bah
false
null
t3_1qd2bah
/r/LocalLLaMA/comments/1qd2bah/best_local_llm_setup_for_vs_code_continue_on_rtx/
false
false
self
2
null
Is Liquid LFM truly a hybrid model?
4
Is it possible to have any of the Liquid models reason/think before providing an answer? I'm quite impressed with the quality of the output of the LFM 2 2.6b model, but I wish I could uplevel it with reasoning....
2026-01-14T22:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1qd1oai/is_liquid_lfm_truly_a_hybrid_model/
Clipbeam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd1oai
false
null
t3_1qd1oai
/r/LocalLLaMA/comments/1qd1oai/is_liquid_lfm_truly_a_hybrid_model/
false
false
self
4
null
Help renting out a server.
1
[removed]
2026-01-14T22:20:10
https://www.reddit.com/r/LocalLLaMA/comments/1qd1cz3/help_renting_out_a_server/
nikzart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd1cz3
false
null
t3_1qd1cz3
/r/LocalLLaMA/comments/1qd1cz3/help_renting_out_a_server/
false
false
self
1
null
Free tool to parse & chunk your AI conversation exports (ChatGPT, Claude, Grok)
1
[removed]
2026-01-14T22:16:10
https://www.reddit.com/r/LocalLLaMA/comments/1qd19c7/free_tool_to_parse_chunk_your_ai_conversation/
CoDy-28601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd19c7
false
null
t3_1qd19c7
/r/LocalLLaMA/comments/1qd19c7/free_tool_to_parse_chunk_your_ai_conversation/
false
false
self
1
null
Renting a server. 2x 5090
1
[removed]
2026-01-14T22:09:05
https://www.reddit.com/r/LocalLLaMA/comments/1qd12sm/renting_a_server_2x_5090/
nikzart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd12sm
false
null
t3_1qd12sm
/r/LocalLLaMA/comments/1qd12sm/renting_a_server_2x_5090/
false
false
self
1
null
Mid Range Local Setup Questions
1
I got the opportunity to build a small local AI “server” at my company. I read here from time to time, but unfortunately I don’t quite understand some things. Anyway: I have a 5090 and two old 3060s that were left over, as well as 64 GB of RAM. When it comes to model size, can I simply add up the VRAM of the graphics cards? As I understand it, I can’t, but I often read about multi-GPU setups here where everything is simply added together. What kind of model do you think I could run on that? I think I would use vLLM, but I’m not sure if that’s really better than llama.cpp or Ollama. Sorry for the probably dumb question, and thanks in advance.
2026-01-14T21:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1qd0gtv/mid_range_local_setup_questions/
seji64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qd0gtv
false
null
t3_1qd0gtv
/r/LocalLLaMA/comments/1qd0gtv/mid_range_local_setup_questions/
false
false
self
1
null
Learning exercise: building a small AI product end-to-end with local LLMs (GGUF / node-llama-cpp)
0
I’ve spent some time building a small learning project to better understand what it actually takes to turn a local LLM into something that feels like a "product", without hiding behind frameworks or cloud APIs. The project uses GGUF models via node-llama-cpp and focuses on message analysis (intent, tone, impact, alternative phrasings). It’s intentionally not production-ready. A few things I learned the hard way: * Small local models do *not* behave like OpenAI flagship models. Output quality, consistency, and reasoning depth are clearly lower, and no amount of prompting fully closes that gap. * Structured output and validation matter much more with small models. Without strict schemas and retries, things fall apart quickly. * Breaking analysis into explicit steps helps, but increases latency and exposes model limits. * Logging raw prompts and responses was essential to understand failure modes. What this is not: * Not a framework * Not scalable * Not an attempt to claim "local models are just as good" What it is: * A transparent learning repo showing trade-offs, limits, and implementation details * Focused on understanding rather than performance Repo (for anyone curious): [https://github.com/pguso/ai-product-from-scratch](https://github.com/pguso/ai-product-from-scratch) I’m mostly posting to get feedback from people who’ve worked deeply with llama.cpp / GGUF models. If you’ve run into similar failure modes or have opinions on better ways to structure small-model pipelines, I’d be interested to hear them.
2026-01-14T21:19:55
https://www.reddit.com/r/LocalLLaMA/comments/1qczsj4/learning_exercise_building_a_small_ai_product/
purellmagents
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qczsj4
false
null
t3_1qczsj4
/r/LocalLLaMA/comments/1qczsj4/learning_exercise_building_a_small_ai_product/
false
false
self
0
null
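The post above notes that strict schemas and retries were essential with small local models. A minimal sketch of that validate-and-retry loop using pydantic v2; the `MessageAnalysis` fields and the `generate()` callable are placeholders, not the repo's actual code:

```python
# Minimal sketch of the "strict schema + retries" pattern: validate
# small-model JSON output against a schema and retry on failure.
# The MessageAnalysis fields and generate() callable are placeholders.
from pydantic import BaseModel, ValidationError

class MessageAnalysis(BaseModel):
    intent: str
    tone: str
    impact: str
    alternatives: list[str]

def analyze(text: str, generate, max_retries: int = 3) -> MessageAnalysis:
    prompt = ("Return ONLY JSON with keys intent, tone, impact, alternatives.\n"
              f"Message: {text}")
    last_err = None
    for _ in range(max_retries):
        raw = generate(prompt)
        try:
            return MessageAnalysis.model_validate_json(raw)
        except ValidationError as e:
            last_err = e
            prompt += f"\nYour previous output was invalid ({e.error_count()} errors). Return valid JSON only."
    raise RuntimeError(f"model never produced valid JSON: {last_err}")
```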
Now it's clearly stated: Bezos's Vision of Rented Cloud PCs Looks Less Far-Fetched
13
2026-01-14T20:47:04
https://it.slashdot.org/story/26/01/14/1655234/bezoss-vision-of-rented-cloud-pcs-looks-less-far-fetched
HumanDrone8721
it.slashdot.org
1970-01-01T00:00:00
0
{}
1qcyxd4
false
null
t3_1qcyxd4
/r/LocalLLaMA/comments/1qcyxd4/now_is_clearly_stated_bezoss_vision_of_rented/
false
false
default
13
{'enabled': False, 'images': [{'id': '7THE41dQU9aUR8BMRCgmF1Y--_H-hDmx23B3VOnGEmQ', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/7THE41dQU9aUR8BMRCgmF1Y--_H-hDmx23B3VOnGEmQ.png?auto=webp&s=6eae9811f60ed4108cb170ff2374801957be12e5', 'width': 64}, 'variants': {}}]}
Which models are unambiguously better than oss:120b at math/coding?
10
Are any of the qwen models for example?
2026-01-14T20:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1qcyp9z/which_models_are_unambiguously_better_than/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcyp9z
false
null
t3_1qcyp9z
/r/LocalLLaMA/comments/1qcyp9z/which_models_are_unambiguously_better_than/
false
false
self
10
null
Home workstation vs NYC/NJ colo for LLM/VLM + Whisper video-processing pipeline (start 1 GPU, scale to 4–8)
1
I’m building a video sharing app and I’m deciding where to put my GPU compute: **home workstation** vs **colocated GPU server in NYC/NJ**. I want advice from folks running vLLM/Ollama stacks in production-ish setups. **Current dev/prototype machine (also hosting backend right now):** * Ryzen 9 9950X3D (16-core), RTX 3090, 64GB DDR5 * Can't handle a 4-GPU setup; I'll need to either build another workstation or move to a rackmount * Verizon FiOS 1Gbps (maybe 2Gbps) * ~30 beta users [This is what I'm currently using](https://preview.redd.it/gko5quhwjddg1.jpg?width=3024&format=pjpg&auto=webp&s=acec0e37d37a65f80f163e4454cd6b34e621211f) **Models/tools:** * Using **Ollama** today (Qwen 2.5-VL, Llama 3.2) + **OpenAI Whisper** * Planning to move to **vLLM** for inference (and to run more of the pipeline “server style”; a minimal vLLM sketch follows below) **Pipeline / bandwidth reality:** * Video streaming is handled by a cloud provider * My compute box mainly sees: * regular API/web traffic (not video streaming) * **downloading user uploads for processing**, then pushing results back to the cloud **Hardware path options:** 1. **Workstation (home)**: Threadripper 24-core, 256GB RAM, start 2× RTX Pro 6000 (Blackwell) then add 2 more over the course of the year 2. **2U 4-GPU server (NYC/NJ colo)**: EPYC 32-core, 256–512GB, start 1 GPU then scale to 4 3. **4U 8-GPU server (NYC/NJ colo)**: EPYC 32-core, 256–512GB, start 1 GPU then scale upward **Questions for people who’ve actually run this stuff:** * vLLM + VLM workloads: any “wish I knew this earlier” about batching, concurrency, quantization, model serving layout, or job queues? * If you were scaling from 1 GPU to 4–8 GPUs over a year, would you choose the **Workstation** (I would have to build one since my current PC isn't up to the task) or the **2U 4-GPU** first, or just start with the **4U 8-GPU** to avoid a chassis migration later? Constraints: I’m only considering **NYC or North NJ** for colo (I want to be able to physically check on the machine) if I decide on the rackmount option, and I’m trying to keep colo spend roughly **$200–$1000/mo** after buying the hardware. Would really appreciate any opinions/war stories.
2026-01-14T20:33:54
https://www.reddit.com/r/LocalLLaMA/comments/1qcykx4/home_workstation_vs_nycnj_colo_for_llmvlm_whisper/
mr__smooth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcykx4
false
null
t3_1qcykx4
/r/LocalLLaMA/comments/1qcykx4/home_workstation_vs_nycnj_colo_for_llmvlm_whisper/
false
false
https://b.thumbs.redditm…czeF4IZLjFwk.jpg
1
null
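The poster plans to move from Ollama to vLLM. A minimal sketch of offline batched generation with vLLM's Python API; the model name is a placeholder, and vision-language models need additional multimodal input handling not shown here:

```python
# Minimal sketch of offline batched generation with vLLM's Python API.
# The model name is a placeholder; VLM inputs need extra multimodal setup
# that this sketch does not cover.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", tensor_parallel_size=1)
params = SamplingParams(temperature=0.7, max_tokens=256)

prompts = [
    "Summarize this transcript segment: ...",
    "List the key objects mentioned in this caption: ...",
]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```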
I built this computer but I don't know what to do with it now.
2
Build: AMD 3950X Asus Pro WS X570-ACE motherboard 32 gigs of ddr4 Two NVlinked Nvidia 32gb V100s using a Chinese SXM2 to PCIe adapter board Not pictured but I threw a 3090 in the third PCI slot. Two 1000 watt PSUs And a 2TB nvme ssd It's mostly old parts I already had. I just saw those weird SXM to PCIe adapters with nvlink and wanted to see if that works. The adapter board was 300 dollars and I got the v100s for 350 each. I honestly have no idea what I'm doing though. This is like the first time I've ever used Linux. I got ollama going and can run some big models. It has a total of 88 gigs of vram and at full power it seems to pull around 1300 watts. It's also incredibly loud, the pictures show some attempts to cool the v100s with noctua fans but that did not work. Now I have some bigger delta fans (not pictured) and that definitely keeps them cool, my roommates hate it. What do I do now?
2026-01-14T20:28:12
https://www.reddit.com/gallery/1qcyfmf
KoalaCloaca
reddit.com
1970-01-01T00:00:00
0
{}
1qcyfmf
false
null
t3_1qcyfmf
/r/LocalLLaMA/comments/1qcyfmf/i_built_this_computer_but_i_dont_know_what_to_do/
false
false
https://b.thumbs.redditm…6niwkEG7HxBE.jpg
2
null
meituan-longcat/LongCat-Flash-Thinking-2601 · Hugging Face
64
2026-01-14T20:20:00
https://huggingface.co/meituan-longcat/LongCat-Flash-Thinking-2601
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1qcy7ug
false
null
t3_1qcy7ug
/r/LocalLLaMA/comments/1qcy7ug/meituanlongcatlongcatflashthinking2601_hugging/
false
false
default
64
{'enabled': False, 'images': [{'id': 'kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=108&crop=smart&auto=webp&s=0109d6b6c1a141a3d2a8de8f9285130187e412a3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=216&crop=smart&auto=webp&s=04554adfa1ee2789079bf44a0a46126a31239cf7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=320&crop=smart&auto=webp&s=89a8cef170d03a390f133ac3d8d4acba2ac46015', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=640&crop=smart&auto=webp&s=8c6634be30000f0e6f7229b645e18aa7d9cde211', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=960&crop=smart&auto=webp&s=bace25b19423f4bf15ab79b464ecd77721562ab7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?width=1080&crop=smart&auto=webp&s=e117fb086339541229cfbdcdba1262245134c553', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kb1mVOfmWTAvSMxYL_8sXovYFqgoXm6u9Rl74bhEZK8.png?auto=webp&s=87d78d55b6dfe1f253ae3c68e1b27722c857a626', 'width': 1200}, 'variants': {}}]}
Newbie looking to run a hobby AI locally
0
I have a fairly basic consumer-level computer (5600X CPU, 32GB RAM, 500GB available on its own NVMe SSD, and an RTX 5070 Ti) and I want to try running a model locally on my computer, focused solely on text generation. I just want to feed all the lore for a D&D setting into it so I can get answers to obscure lore questions that would otherwise require reading three or four different books to cross-check. I haven't gone beyond reading, but from what I can tell I need a smaller model (7-8B), hopefully with a GUI, and I need to set up RAG. As for the RAG, I also suspect I'll have to give all the text sources I have a once-over to format them. Are there any guides that can at least vaguely point me in the right direction? I understand this is a rapidly evolving field.
2026-01-14T20:16:39
https://www.reddit.com/r/LocalLLaMA/comments/1qcy4jh/newbie_looking_to_run_a_hobby_ai_locally/
alternate_persona
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcy4jh
false
null
t3_1qcy4jh
/r/LocalLLaMA/comments/1qcy4jh/newbie_looking_to_run_a_hobby_ai_locally/
false
false
self
0
null
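The post above wants to feed D&D lore into a local model via RAG. A minimal sketch of the retrieve-then-ask loop using sentence-transformers for embeddings and a local Ollama model for generation; the model names, chunk size, and file layout are illustrative, not a recommendation for this exact hardware:

```python
# Minimal RAG sketch: chunk the lore files, embed them once, retrieve the
# closest chunks for a question, and ask a local Ollama model.
# Model names, chunk size, and file layout are illustrative.
import glob
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def load_chunks(pattern: str = "lore/*.txt", size: int = 800) -> list:
    chunks = []
    for path in glob.glob(pattern):
        text = open(path, encoding="utf-8").read()
        chunks += [text[i:i + size] for i in range(0, len(text), size)]
    return chunks

chunks = load_chunks()
vecs = embedder.encode(chunks, normalize_embeddings=True)

def ask(question: str, k: int = 4) -> str:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    best = np.argsort(vecs @ q)[::-1][:k]
    context = "\n---\n".join(chunks[i] for i in best)
    r = requests.post("http://localhost:11434/api/generate", json={
        "model": "llama3.1:8b",
        "prompt": f"Answer using only this lore:\n{context}\n\nQuestion: {question}",
        "stream": False,
    }, timeout=300)
    return r.json()["response"]
```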
The Ro Philosophy is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: I've developed a hypothetical model and would love to hear your critique.
0
2026-01-14T20:15:45
https://www.reddit.com/gallery/1qcy3p9
erikqamalyan0
reddit.com
1970-01-01T00:00:00
0
{}
1qcy3p9
false
null
t3_1qcy3p9
/r/LocalLLaMA/comments/1qcy3p9/the_ro_philosophyis_a_philosophical_and/
false
false
https://b.thumbs.redditm…S66kzaiLCWcc.jpg
0
null
Nura - AI with True Memory ; Private and Yours!!
1
[removed]
2026-01-14T20:06:38
https://www.reddit.com/r/LocalLLaMA/comments/1qcxuqw/nura_ai_with_true_memory_private_and_yours/
Striking-Isopod5866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcxuqw
false
null
t3_1qcxuqw
/r/LocalLLaMA/comments/1qcxuqw/nura_ai_with_true_memory_private_and_yours/
false
false
self
1
null
I built a DOM-pruning engine to run reliable browser agents on Qwen 2.5 (3B) without having to use Vision
5
Hey everyone, Like many of you, I've been experimenting with browser agents (using `browser-use` and LangChain). The current meta seems to be "Just throw GPT-4o Vision at it." It works, but it drives me crazy for two reasons: 1. **Cost:** Sending screenshots + massive HTML dumps burns tokens like crazy. 2. **Overkill:** I shouldn't need a 100B+ parameter model just to find the "Login" button. I realized that if I could drastically reduce the input noise, I could get "dumb" local models to perform like "smart" cloud models. So I built **SentienceAPI**, a structure-first extraction engine designed specifically to fit complex web pages into the context window of small local models (like Qwen 2.5 3B or Llama 3 or Bitnet b1.58 2b4t). # The Architecture (The "Vision-as-Fallback" Approach) Instead of relying on pixels, I built a pipeline to treat the DOM as a semantic database: 1. **The "Chain Saw" (Client-Side Rust/WASM):** I wrote a Chrome Extension using Rust (compiled to WASM) that injects into the browser. It uses a `TreeWalker` to traverse the DOM and ruthlessly prune \~95% of the nodes. It drops wrapper divs, invisible elements, scripts, and layout noise *before* it leaves the browser. 2. **The "Refinery" (Semantic Geometry):** The raw interactive elements are sent to a gateway that calculates "Semantic Geometry." It looks for "Dominant Groups" (repeated patterns like search results) and assigns ordinal IDs (e.g., "This is the 2nd item in the main feed"). 3. **The Output (Small Context):** The LLM doesn't get a screenshot or raw HTML. It gets a dense, 1k-token JSON snapshot that describes *only* the interactive elements and their spatial relationships. # Why this matters for Local LLMs Because the input is so clean, **Qwen 2.5 3B (Instruct)** can actually navigate complex sites. * **Standard Approach:** Raw HTML > Context Limit Exceeded > Model Hallucinates. * **Sentience Approach:** Dense JSON > Model sees "Button: Checkout (ID: 42)" > Model outputs `{"action": "click", "id": 42}`. I’m seeing **\~50% token reduction** compared to standard text-based scraping, and obviously massive savings vs. vision-based approaches. # Integration with browser-use I’ve integrated this into the `browser-use` ecosystem. If you are running local agents via Ollama/LM Studio and failing because the context window is getting choked by HTML garbage, this might fix it. It’s currently in a "Show HN" phase. The SDK is Python-based. **My ShowHN Post:** [https://news.ycombinator.com/item?id=46617496](https://news.ycombinator.com/item?id=46617496) **browser-use integrations:** * Jest-style assertions for agents: [https://github.com/SentienceAPI/browser-use/pull/5](https://github.com/SentienceAPI/browser-use/pull/5) * Browser-use + Local LLM (Qwen 2.5 3B) demo: [https://github.com/SentienceAPI/browser-use/pull/4](https://github.com/SentienceAPI/browser-use/pull/4) **Open source SDK:** * Python: [https://github.com/SentienceAPI/sentience-python](https://github.com/SentienceAPI/sentience-python) * TypeScript: [https://github.com/SentienceAPI/sentience-ts](https://github.com/SentienceAPI/sentience-ts) I’d love to hear if anyone else is trying to get sub-7B models to drive browsers reliably. The "Vision is All You Need" narrative feels inefficient for 90% of web tasks.
2026-01-14T19:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1qcxllu/i_built_a_dompruning_engine_to_run_reliable/
Aggressive_Bed7113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcxllu
false
null
t3_1qcxllu
/r/LocalLLaMA/comments/1qcxllu/i_built_a_dompruning_engine_to_run_reliable/
false
false
self
5
null
I made 10 frontier LLMs judge each other's code debugging — Claude Opus 4.5 won by 0.01 points over o1, GPT-4o came 9th
0
I'm running daily blind evaluations where 10 models answer the same prompt, then all 10 judge all 10 responses (10×10 = 100 judgments in total; self-scores are excluded from the averages). **CODE-001: Async Python Bug Hunt** * Task: Find race condition, unhandled exception, resource leak * Winner: Claude Opus 4.5 (9.49/10) * o1 was 0.01 points behind at 9.48 * GPT-4o surprisingly ranked 9th at 8.79 **Key finding:** Claude Opus showed actual code fixes with double-check patterns. o1 was concise but comprehensive. GPT-4o identified bugs but gave generic solutions. **Meta-insight:** Claude Opus was also the STRICTEST judge (avg score given: 8.76). Mistral Large was most lenient (9.73). The winner was the toughest critic. Full methodology + raw responses: [https://substack.com/@themultivac](https://substack.com/@themultivac) **REASON-001: Two Envelope Paradox** (today's eval) * 10 models tackled the classic probability paradox * Results: https://preview.redd.it/x7zpvrwmdddg1.png?width=777&format=png&auto=webp&s=84139c81e8a3f428f231678c4d4f2db4280ad9eb * Claude models dominated again but were the harshest judges Doing this daily with rotating categories (Code Mon, Reasoning Tue, Analysis Wed, etc.). Feedback on methodology welcome — does the peer matrix approach eliminate enough bias? Also, if you like it, don't forget to subscribe to my Substack!
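For anyone who wants to sanity-check the peer-matrix math, here's a toy version (random numbers stand in for the real judgments; the idea is just averaging each column while masking the diagonal):

```python
import numpy as np

models = ["claude-opus-4.5", "o1", "gpt-4o"]  # trimmed to 3 for brevity
rng = np.random.default_rng(0)
# scores[j, r] = score that judge j gives to response r
scores = rng.uniform(7, 10, size=(len(models), len(models)))

mask = ~np.eye(len(models), dtype=bool)      # drop self-judgments
peer = np.where(mask, scores, np.nan)
avg_received = np.nanmean(peer, axis=0)      # ranking metric per response
avg_given = np.nanmean(peer, axis=1)         # judge strictness

for name, recv, given in sorted(zip(models, avg_received, avg_given), key=lambda t: -t[1]):
    print(f"{name}: avg received {recv:.2f}, avg given {given:.2f}")
```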
2026-01-14T19:54:15
https://www.reddit.com/r/LocalLLaMA/comments/1qcxib4/i_made_10_frontier_llms_judge_each_others_code/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcxib4
false
null
t3_1qcxib4
/r/LocalLLaMA/comments/1qcxib4/i_made_10_frontier_llms_judge_each_others_code/
false
false
https://b.thumbs.redditm…p2gKdLyK19AM.jpg
0
null
Dual GPU mounting suggestions
1
Looking for suggestions. I am mounting a 4070 blower GEO RTX model as a secondary GPU in my PC. The case is an Antec Flux Pro, and my 5070 TI is mounted in slot one. Slot two is blocked by the bottom row of fans (which I can remove) and by one of the cables plugged into the MOBO (MSI B650 AMD 5 / 1200W PSU), which I cannot remove. The 4070 will not physically fit into PCIe slot two because of this cable. I also already use a Cooler Master vertical mount for the 5070 TI (which will also have to be removed), and there is no way that I can see to mount the 4070 alongside the 5070 TI in vertical or horizontal positioning. What options do I have for mounting the second GPU? The Flux Pro is a huge case, so I should be able to mount this somewhere. Any ideas?
2026-01-14T19:45:47
https://www.reddit.com/r/LocalLLaMA/comments/1qcxa7g/dual_gpu_mounting_suggestions/
Impossible-Glass-487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcxa7g
false
null
t3_1qcxa7g
/r/LocalLLaMA/comments/1qcxa7g/dual_gpu_mounting_suggestions/
false
false
self
1
null
Please Recommend Local LLM on Android with GPU Acceleration - 8 Elite Gen 5
0
2026-01-14T19:28:42
https://www.reddit.com/gallery/1qcwtcc
DroidLife97
reddit.com
1970-01-01T00:00:00
0
{}
1qcwtcc
false
null
t3_1qcwtcc
/r/LocalLLaMA/comments/1qcwtcc/please_recommend_local_llm_on_android_with_gpu/
false
false
https://b.thumbs.redditm…WG3bjjSyoXiM.jpg
0
null
Please Recommend Local LLM on Android with GPU Acceleration - 8 Elite Gen 5
0
2026-01-14T19:28:29
https://www.reddit.com/gallery/1qcwt5j
DroidLife97
reddit.com
1970-01-01T00:00:00
0
{}
1qcwt5j
false
null
t3_1qcwt5j
/r/LocalLLaMA/comments/1qcwt5j/please_recommend_local_llm_on_android_with_gpu/
false
false
https://b.thumbs.redditm…1M1Fis-B-tdM.jpg
0
null
Help me decide on a vision model
1
Pixtral-12B-2409 vs Ministral-3-14B-Instruct-2512 for computer screenshots (IDE errors, UI dialogs, Confluence pages) — which is better in practice? Users mostly send only screenshots (no long logs), so I care most about OCR/layout + diagram/screenshot understanding, not agentic long-context. If you’ve tried both: which one gives fewer hallucinations and better troubleshooting from screenshots?
2026-01-14T19:25:34
https://www.reddit.com/r/LocalLLaMA/comments/1qcwqag/help_me_decide_on_a_vision_model/
Some-Manufacturer-21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcwqag
false
null
t3_1qcwqag
/r/LocalLLaMA/comments/1qcwqag/help_me_decide_on_a_vision_model/
false
false
self
1
null
What is the best open source model for coding?
0
What is the best open source model for coding? What hardware is required for the best model?
2026-01-14T19:01:46
https://www.reddit.com/r/LocalLLaMA/comments/1qcw2nh/what_is_the_best_open_source_model_for_coding/
Excellent_Koala769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcw2nh
false
null
t3_1qcw2nh
/r/LocalLLaMA/comments/1qcw2nh/what_is_the_best_open_source_model_for_coding/
false
false
self
0
null
Most MCPs don’t survive real use with local models, so I’m tracking which ones do
0
I’ve been experimenting with MCPs alongside local and hybrid setups (local inference + cloud tools), and honestly a lot of MCPs don’t survive real use. On paper, many look great. In practice: - context usage ramps up fast - behavior differs between cloud models and local ones - some MCPs quietly fail once you move beyond narrow prompts - for a lot of tasks, CLI or small scripts still win The part I kept missing wasn’t discovery, but what happens after you try them. Those details usually live in comment threads and then disappear. So I started tracking MCPs with a focus on real usage: - which ones people actually ran - where they worked well - where they broke or weren’t worth the context cost - short, practical “works / breaks / avoid when…” notes No ratings, no hype, just usage signals and field notes that accumulate over time. If you’re running local or mixed workflows and have already learned which MCPs are worth it (or not), you can mark MCPs you’ve used or drop a short note here: https://ai-stack.dev Trying to save the next person from repeating the same experiments.
2026-01-14T18:58:14
https://www.reddit.com/r/LocalLLaMA/comments/1qcvywz/most_mcps_dont_survive_real_use_with_local_models/
Silver-Photo2198
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcvywz
false
null
t3_1qcvywz
/r/LocalLLaMA/comments/1qcvywz/most_mcps_dont_survive_real_use_with_local_models/
false
false
self
0
null
Popularity of DDR3 motherboards is growing rapidly - VideoCardz.com
140
I genuinely hate this timeline. While I'm in the very lucky position to have bought more than enough RAM and storage for my homelab and local LLM needs before prices went up, my favorite pastime and hobby of homelabbing feels completely ruined. Three months ago, I was looking forward to ECC DDR5 prices coming down to the point of being able to buy 512GB of DDR5 RAM for ~€500 to finally have a Sapphire Rapids Xeon in my homelab and play with AMX. Now I'm afraid that a DDR4 stick I have might fail and I won't be able to replace it. With DDR4 prices through the roof, I guess this was bound to happen, but it doesn't make it sting any less. How long now until DDR3 prices also skyrocket, and with them the motherboards and CPUs that support it?
2026-01-14T18:43:26
https://videocardz.com/newz/popularity-of-ddr3-motherboards-is-growing-rapidly
FullstackSensei
videocardz.com
1970-01-01T00:00:00
0
{}
1qcvk9n
false
null
t3_1qcvk9n
/r/LocalLLaMA/comments/1qcvk9n/popularity_of_ddr3_motherboards_is_growing/
false
false
default
140
{'enabled': False, 'images': [{'id': '1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=108&crop=smart&auto=webp&s=1157b546584bc242b448988c63ef1a8436a24407', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=216&crop=smart&auto=webp&s=d9554585e8dc31c7aba7b39c84d3e570cae77ca1', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=320&crop=smart&auto=webp&s=c972c50372327aefcfc362556725ddfc1211265c', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=640&crop=smart&auto=webp&s=4f35147fed27d85c69f96db9aabc5b1ec2622714', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=960&crop=smart&auto=webp&s=e52e488f94743d7495b2fff8481d937a47161863', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?width=1080&crop=smart&auto=webp&s=c63240b236102d233f37a592f15dc3e6199cb682', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/1_qecsYofG5L-GuD7Gh4daRo-sGqgl6aBwxtGGFCAGA.jpeg?auto=webp&s=41b9e21966e524e94ce4467b60e84088033a672c', 'width': 2500}, 'variants': {}}]}
What’s the deal with these fake GPU listings on eBay?
93
I’ve been seeing these around for a while. For most AI GPU searches there will be a couple on the first page. It’s always a zero-review account that was created the same day, selling for a third of the normal price. They’re very clearly scams, but how? eBay buyer protection will basically always provide a refund if you ask for it, so what’s the scam? Do they just send you a fake GPU and hope you don’t notice?
2026-01-14T18:29:25
https://www.reddit.com/gallery/1qcv64u
humandisaster99
reddit.com
1970-01-01T00:00:00
0
{}
1qcv64u
false
null
t3_1qcv64u
/r/LocalLLaMA/comments/1qcv64u/whats_the_deal_with_these_fake_gpu_listings_on/
false
false
https://b.thumbs.redditm…kLzUX95wfnoE.jpg
93
null
NeuTTS Nano: 120M Parameter On-Device TTS based on Llama3
191
Hey everyone, The team at Neuphonic is back with a new open-source release: NeuTTS Nano. After NeuTTS Air trended #1 on HuggingFace last October, we received a lot of requests for something even smaller that could fit into tighter VRAM/RAM constraints for robotics and embedded agents. Key Specs: * Model Size: 120M active parameters (3x smaller than NeuTTS Air). * Architecture: Simple LM + codec architecture built off Llama3. * Format: Provided in GGML for easy deployment on mobile, Jetson, and Raspberry Pi. * Capabilities: Instant voice cloning (3s sample) and ultra-realistic prosody. Why use this? If you are building for smart home devices, robotics, or mobile apps where every MB of RAM matters, Nano is designed for you. It delivers the same "voice magic" but in a much lighter package. Links: * GitHub: [https://github.com/neuphonic/neutts](https://github.com/neuphonic/neutts) * HuggingFace: [https://huggingface.co/neuphonic/neutts-nano](https://huggingface.co/neuphonic/neutts-nano) * Spaces: [https://huggingface.co/spaces/neuphonic/neutts-nano](https://huggingface.co/spaces/neuphonic/neutts-nano) * Website: [https://www.neuphonic.com/](https://www.neuphonic.com/) We’re curious to see the RTF (Real-Time Factor) benchmarks the community gets on different hardware. What’s the smallest device you’re planning to run this on?
2026-01-14T18:26:19
https://v.redd.it/2nikcyj6ycdg1
TeamNeuphonic
v.redd.it
1970-01-01T00:00:00
0
{}
1qcv304
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2nikcyj6ycdg1/DASHPlaylist.mpd?a=1771007200%2CZTc2NzdlMDM3Nzg1ZTc2NGJjMzBjNGU2NjNjNTZmZTdlZTg1NWViNTA0NWJhMTRjMzQ2N2I1NDY3MmEwYTk1ZQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/2nikcyj6ycdg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2nikcyj6ycdg1/HLSPlaylist.m3u8?a=1771007200%2CYjg2YWE3OTg0MmRiMzYxMzY5NDUxZTU5YWJhYzY0ZjliNGYwZWQ0NDU5ODU0OTk4NDFkMTU0MTg5YmUyZjg2Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2nikcyj6ycdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qcv304
/r/LocalLLaMA/comments/1qcv304/neutts_nano_120m_parameter_ondevice_tts_based_on/
false
false
https://external-preview…9358a327ec7e5c09
191
{'enabled': False, 'images': [{'id': 'eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c2b384eb486015b0069eef9c07d2974486248a2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=216&crop=smart&format=pjpg&auto=webp&s=7e4dd82578627bf3fa101fc4b36855d843dc910f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=320&crop=smart&format=pjpg&auto=webp&s=f158aaa76c827dc534a5edd7250c0f8a11c3df52', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=640&crop=smart&format=pjpg&auto=webp&s=96bbedc6c51626478e44459365aa5126db4a5911', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=960&crop=smart&format=pjpg&auto=webp&s=c82215c0c834085ff0b65d71558027e399612c2b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18a7ae0409ee53cff962e2a34691d9b2847689ca', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eGh0aTBhazZ5Y2RnMTTPucJdRjO2R67S5i-oYJkuLIwhwL3TAJbW3Q2hg2iU.png?format=pjpg&auto=webp&s=ee8b29c53811df3136b83cc86d8c4420b3f73b7d', 'width': 1920}, 'variants': {}}]}
Local tool for visualizing model drift (not another eval)
3
I got tired of guessing what “instability” actually looks like. This is a local-only visualization to explore how systems drift, stabilize, collapse. No cloud, no APIs, no agents. Tweak parameters, dynamics evolve in real time. repo: https://github.com/rjsabouhi/sfd-engine
2026-01-14T18:21:14
https://v.redd.it/1lmntgjcxcdg1
RJSabouhi
v.redd.it
1970-01-01T00:00:00
0
{}
1qcuxwr
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1lmntgjcxcdg1/DASHPlaylist.mpd?a=1771006897%2CMmFjY2EzZDA3NmM1YjhlMmM1MDcxYjM3NDQ1MDU4MTZjNWVkYzQ0ZGViNDA0NDg3ZmJhYzAzMGIzOGQ3YmZhZQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/1lmntgjcxcdg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 602, 'hls_url': 'https://v.redd.it/1lmntgjcxcdg1/HLSPlaylist.m3u8?a=1771006897%2CMDVhNzQ0M2IwNDE2ODc5OGQ2MDZiNjE4ZjQ3MTNlNTU3ZjFlOTk3OWQ4OTI5OGY1MGE1YWU2MjJlYjA2MTBmNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1lmntgjcxcdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qcuxwr
/r/LocalLLaMA/comments/1qcuxwr/local_tool_for_visualizing_model_drift_not/
false
false
https://external-preview…206cf78a89b2dc44
3
{'enabled': False, 'images': [{'id': 'dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=108&crop=smart&format=pjpg&auto=webp&s=c1be152a4094c6c04977bc48cc71bd230e8ce74f', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=216&crop=smart&format=pjpg&auto=webp&s=922f63550b8fa9c4f9c575d83c275851224e8f32', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=320&crop=smart&format=pjpg&auto=webp&s=45aa7fe2907b41ac6819fe4649456b5cff6292fb', 'width': 320}, {'height': 300, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=640&crop=smart&format=pjpg&auto=webp&s=59ca8e29b1d583f9c46cf734fa08bc83ba478d66', 'width': 640}, {'height': 451, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=960&crop=smart&format=pjpg&auto=webp&s=144345be126341e96bd6215ebd44e8a2d66bf9a6', 'width': 960}, {'height': 507, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?width=1080&crop=smart&format=pjpg&auto=webp&s=51981bc013392469b313c217f68534d7db1679ad', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/dmFyZ2RnZmN4Y2RnMV-pL20Bo6htkQgwEaHpfYOiTNkwgyc6J6kT50wV9Vif.png?format=pjpg&auto=webp&s=e89353b5af6d03039db14ea0fa17fb59c87d3848', 'width': 1914}, 'variants': {}}]}
Gemma 1b-it finetune worked great for multi-turn chat, but failed for `dialect text → standard text` conversion
1
[removed]
2026-01-14T18:17:25
https://www.reddit.com/r/LocalLLaMA/comments/1qcuu1v/gemma_1bit_finetune_worked_great_for_multiturn/
_hasin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcuu1v
false
null
t3_1qcuu1v
/r/LocalLLaMA/comments/1qcuu1v/gemma_1bit_finetune_worked_great_for_multiturn/
false
false
self
1
null
Soprano 1.1-80M released: 95% fewer hallucinations and 63% preference rate over Soprano-80M
303
Hello everyone! Today, I am announcing Soprano 1.1! I’ve designed it for massively improved stability and audio quality over the original model.  While many of you were happy with the quality of Soprano, it had a tendency to start, well, *Mongolian throat singing*. Contrary to its name, Soprano is **NOT** supposed to be for singing, so I have reduced the frequency of these hallucinations by **95%**. Soprano 1.1-80M also has a **50%** lower WER than Soprano-80M, with comparable clarity to much larger models like Chatterbox-Turbo and VibeVoice. In addition, it now supports sentences up to **30 seconds** long, up from 15. The outputs of Soprano could sometimes have a lot of artifacting and high-frequency noise. This was because the model was severely undertrained. I have trained Soprano further to reduce these audio artifacts. According to a blind study I conducted on my family (against their will), they preferred Soprano 1.1's outputs **63%** of the time, so these changes have produced a noticeably improved model. You can check out the new Soprano here: Model: [https://huggingface.co/ekwek/Soprano-1.1-80M](https://huggingface.co/ekwek/Soprano-1.1-80M)  Try Soprano 1.1 Now: [https://huggingface.co/spaces/ekwek/Soprano-TTS](https://huggingface.co/spaces/ekwek/Soprano-TTS)  Github: [https://github.com/ekwek1/soprano](https://github.com/ekwek1/soprano)  \- Eugene
2026-01-14T18:16:00
https://v.redd.it/v0c2rda9scdg1
eugenekwek
v.redd.it
1970-01-01T00:00:00
0
{}
1qcusnt
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v0c2rda9scdg1/DASHPlaylist.mpd?a=1771006582%2CZDA0YjU4MDgxM2RjYWE3YWRmMjUyY2Y2ZGY2MWNmYTBiODFiNjYxNmIwYmY2MmI3Y2NhZTUwZjFkY2ZmZjIyMQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/v0c2rda9scdg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/v0c2rda9scdg1/HLSPlaylist.m3u8?a=1771006582%2CMTFlMzhkMDBjY2JmM2Q1ZDNhM2I5NjcxMDNkOTZjYmE5YWRhMGRmOGFhYmMzOTVlMDU0ZDViNmNlMTgyMzAxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v0c2rda9scdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qcusnt
/r/LocalLLaMA/comments/1qcusnt/soprano_1180m_released_95_fewer_hallucinations/
false
false
https://external-preview…0d2df0a274e9a7ca
303
{'enabled': False, 'images': [{'id': 'NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=108&crop=smart&format=pjpg&auto=webp&s=ddc39b382527a30c2e6f99746edd68ee4e484d89', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=216&crop=smart&format=pjpg&auto=webp&s=80681afd7d51935d7b3c08378f0b4a511decacb6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=320&crop=smart&format=pjpg&auto=webp&s=e28a3897113196fb642232379c0cda8efe88fd7c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=640&crop=smart&format=pjpg&auto=webp&s=bc35bfa670595b1e2f031580dd077826ccb48f74', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=960&crop=smart&format=pjpg&auto=webp&s=bcda6bc782b4cfc0d0b8d705a857654dd8f675a8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=41920c7724936f0920a9245277e9b21365751a4a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NXZ5NDNuYTlzY2RnMX4ZwK1s5ENYxRsvoiSEu3mA0RmAAs2-sAvwRMu-2CtN.png?format=pjpg&auto=webp&s=3d9e46681c3bd777dfb7c092190843613a29c78f', 'width': 1280}, 'variants': {}}]}
Curious ablation: GPT-like LM trained with *frozen* 16‑dim *binary* token-ID embeddings (n_embed=16) It still learns end-to-end and generates coherent text, non-trivial text.
7
I ran a small but (IMO) interesting ablation: a GPT-like decoder-only Transformer where **the entire input embedding table is frozen** and replaced with a **16‑dim 0/1 token-ID code**. This is **not** 16-bit quantization—each token gets a fixed binary identifier, and the model learns everything else on top. Despite having **no trainable / semantically-shaped input embeddings**, the model still trains end-to-end and generates coherent, non-trivial text. **Setup (core idea)** * `vocab_size = 65536` * `n_embed = 16` (since `2^16 = 65536`, the code uniquely identifies every token) * fixed 16 → `d_model=1024` expansion via `repeat_interleave` (×64), no learned projection * the frozen embedding table is fully published (`embeddings.txt`) so anyone can audit it **Repro + quick verification** * Blog + script: [https://huggingface.co/blog/Bochkov/emergent-semantics-beyond-token-embeddings](https://huggingface.co/blog/Bochkov/emergent-semantics-beyond-token-embeddings) * Model repo: [https://huggingface.co/Bochkov/emergent-semantics-model-16-bit-269m](https://huggingface.co/Bochkov/emergent-semantics-model-16-bit-269m) * **Paper (more ablations + context)**: [https://arxiv.org/abs/2507.04886](https://arxiv.org/abs/2507.04886) **Question I’m probing:** if input embeddings don’t carry semantics (and aren’t trainable), **where exactly does semantic structure form inside a decoder-only Transformer?** https://preview.redd.it/30tsbfxpvcdg1.png?width=1590&format=png&auto=webp&s=2a37094a5165ca7fca3b2ac047ccc0a83b66c494 License: Apache-2.0
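For reference, a minimal sketch of that frozen embedding path in PyTorch (the exact bit ordering here is my guess for illustration; `embeddings.txt` in the repo has the real table):

```python
import torch

vocab_size, n_embed, d_model = 65536, 16, 1024

# Frozen table: row i is the 16-bit binary code of token id i, stored as 0/1 floats.
# Bit order is an assumption, not necessarily the released model's convention.
ids = torch.arange(vocab_size).unsqueeze(1)
emb_table = ((ids >> torch.arange(n_embed)) & 1).float()  # (65536, 16), never trained

def embed(token_ids: torch.Tensor) -> torch.Tensor:
    """Map token ids to d_model vectors with zero learned parameters."""
    codes = emb_table[token_ids]                                # (..., 16)
    return codes.repeat_interleave(d_model // n_embed, dim=-1)  # (..., 1024)

x = embed(torch.tensor([[1, 42, 65535]]))
print(x.shape)  # torch.Size([1, 3, 1024])
```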
2026-01-14T18:14:49
https://www.reddit.com/r/LocalLLaMA/comments/1qcurf9/curious_ablation_gptlike_lm_trained_with_frozen/
AVBochkov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcurf9
false
null
t3_1qcurf9
/r/LocalLLaMA/comments/1qcurf9/curious_ablation_gptlike_lm_trained_with_frozen/
false
false
https://a.thumbs.redditm…eFd7KYEEFYh8.jpg
7
null
Mis-matched GPU options
3
I built a new computer with a 5090, 5070 Ti, and 96GB of RAM. I've been using text-generation-webui with llama.cpp to run GGUFs under 48GB so everything stays on both cards with 16000 context. I've had fairly good luck using models as a language tutor, having the LLM quiz me and checking with Google to make sure the models aren't making things up. My main goals are somewhat fast LLM responses with accurate quizzing. I'd like to use bigger models, but the second I spill into RAM the response time drops heavily. A few questions: 1. Am I right that with this setup and use case I'm kind of stuck using llama.cpp and GGUFs for mismatched GPUs? 2. Are there any tricks to use RAM efficiently? 3. Is there something better than text-generation-webui? 4. Any thoughts on other uses for 32/48GB of VRAM? Originally I was hoping that would be enough for agentic LLMs, but I haven't found good instructions on how to set that up.
2026-01-14T18:11:29
https://www.reddit.com/r/LocalLLaMA/comments/1qcuo1u/mismatched_gpu_options/
MrCuddles20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcuo1u
false
null
t3_1qcuo1u
/r/LocalLLaMA/comments/1qcuo1u/mismatched_gpu_options/
false
false
self
3
null
NVIDIA's new 8B model is Orchestrator-8B, a specialized 8-billion-parameter AI designed not to answer everything itself, but to intelligently manage and route complex tasks to different tools (like web search, code execution, other LLMs) for greater efficiency
663
I’ve seen some arguments that we’ve already reached AGI and it’s just a matter of putting the separate pieces together in the right context. I think having a relatively small model that knows how to connect with other tools and models is exactly the correct route towards very functional systems.
2026-01-14T18:02:19
https://www.reddit.com/r/LocalLLaMA/comments/1qcuerc/nvidias_new_8b_model_is_orchestrator8b_a/
Fear_ltself
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcuerc
false
null
t3_1qcuerc
/r/LocalLLaMA/comments/1qcuerc/nvidias_new_8b_model_is_orchestrator8b_a/
false
false
self
663
null
AI Model Tracker: I was finding it hard to track suitable local models online, so I vibe-coded a simple open source tool using GLM 4.7 and OpenCode. Hope it helps others.
5
2026-01-14T17:40:53
https://github.com/nigelp/ai-model-tracker
mintybadgerme
github.com
1970-01-01T00:00:00
0
{}
1qctt0s
false
null
t3_1qctt0s
/r/LocalLLaMA/comments/1qctt0s/ai_model_tracker_i_was_finding_it_hard_to_track/
false
false
default
5
{'enabled': False, 'images': [{'id': 's9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=108&crop=smart&auto=webp&s=85600dd08dee7a2518fd0c0e1365151164587d37', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=216&crop=smart&auto=webp&s=4435a42d5feed5a0ee15b83b947cc8fb31082f6b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=320&crop=smart&auto=webp&s=d643031f4dd8f24df15a685d0cf5f1ab50ba68ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=640&crop=smart&auto=webp&s=dd5d729aebef1e628f141938d6126bfd0467ddea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=960&crop=smart&auto=webp&s=f6885ba979e79640207a7cbd13420f23ae0e943f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?width=1080&crop=smart&auto=webp&s=b09d3503dde6c38a7bd3b4ae4b358633f39873ee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s9A0bKqIvcM1hMy5X6iV0j00-DwGVDnbszvCF4k_LlE.png?auto=webp&s=3352d7cd958d501ac44125d1e6e8b6aa036fe5dd', 'width': 1200}, 'variants': {}}]}
Public coding benchmarks suck, how are you evaluating performance?
7
Lately I feel the need to preface my posts saying this was **entirely written by me with zero help from an LLM**. A lot of people see a long post w/ headers and automatically think it's AI slop (myself included sometimes). This post might be slop, but it's *my* slop.

# Background

We all know public benchmark scores are becoming less useful as model authors attempt to benchmax everything. To really get a sense of whether a model is viable, I usually just throw a couple of my old one-shot programming problems at it, and if it passes, I give it a complex problem in Roo Code on one of my projects at a specific git commit to see how it performs.

However, this process is highly subjective, and sometimes it's hard to tell if bad results are due to the model itself, a setting I changed, or just a random failure that goes away after retrying. I wanted a more empirical, automated, and repeatable process to evaluate performance across different models / quants / kv quants / settings.

I decided to try Aider Polyglot since it seems to be a pretty popular benchmark. However, I no longer think this is a good option, for a few reasons:

# Problem 1: Poorly Written Tests

I started noticing some of the test failures were not really the model's fault and were instead due to bad/vague instructions, or information the model couldn't have known ahead of time (unless the data was included during training 🤔). Take the [two-bucket test](https://github.com/Aider-AI/polyglot-benchmark/blob/main/python/exercises/practice/two-bucket/.docs/instructions.md) for example. From the instructions (emphasis mine):

>Your program will take as input:
>- the size of bucket one
>- the size of bucket two
>- the desired number of liters to reach
>- which bucket to fill first, either **bucket one** or **bucket two**
>
>Your program should determine:
>- the total number of actions it should take to reach the desired number of liters, including the first fill of the starting bucket
>- which bucket should end up with the desired number of liters - either **bucket one** or **bucket two**
>- how many liters are left in the other bucket

In this case, the model failed the test because it expected an input variable to be either `bucket one` or `bucket two`, but the unit test passes bucket names as `one` / `two` (and expects the return values to be the same). The unit test is not visible to the model during evaluation, so it has no way of knowing exactly how the code will be tested.

(Note that by default, Aider gives the model two attempts to pass the test. If the first attempt fails, Aider gives the model the test failure output and asks the model to fix the errors.)
As mentioned, the first attempt failed because `one` / `two` were not valid input variables:

```
================================== FAILURES ==================================
_ TwoBucketTest.test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two _

self = <two_bucket_test.TwoBucketTest testMethod=test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two>

    def test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two(
        self,
    ):
>       self.assertEqual(measure(1, 3, 3, "two"), (1, "two", 0))
                         ^^^^^^^^^^^^^^^^^^^^^^^

two_bucket_test.py:36:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

bucket_one = 1, bucket_two = 3, goal = 3, start_bucket = 'two'

    def measure(bucket_one, bucket_two, goal, start_bucket):
        # Input validation with meaningful error messages
        if goal == 0:
            raise ValueError("Goal cannot be zero")
        if goal > bucket_one and goal > bucket_two:
            raise ValueError("Goal exceeds both bucket capacities")
        if bucket_one <= 0 or bucket_two <= 0:
            raise ValueError("Bucket sizes must be positive")
        if start_bucket not in ("bucket one", "bucket two"):
>           raise ValueError("Start bucket must be either 'bucket one' or 'bucket two'")
E           ValueError: Start bucket must be either 'bucket one' or 'bucket two'
```

No problem, the model fixed the code to accept either format and normalized the variable before running the rest of the code. But then it failed again because the *output* did not match the test case:

```
================================== FAILURES ==================================
_ TwoBucketTest.test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two _

self = <two_bucket_test.TwoBucketTest testMethod=test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two>

    def test_measure_one_step_using_bucket_one_of_size_1_and_bucket_two_of_size_3_start_with_bucket_two(
        self,
    ):
>       self.assertEqual(measure(1, 3, 3, "two"), (1, "two", 0))
E       AssertionError: Tuples differ: (1, 'bucket two', 0) != (1, 'two', 0)
E
E       First differing element 1:
E       'bucket two'
E       'two'
E
E       - (1, 'bucket two', 0)
E       ?      -------
E
E       + (1, 'two', 0)
```

This counts as a strike against the model and lowers its score, but I don't care because the model followed the literal instructions. In fact, I'd almost argue that any model passing this test on the first shot might actually be evidence of cheating / benchmaxing.

# Problem 2: Aider results don't translate to agentic coding

Most (if not all) Aider tests only involve editing a single file, but agentic coding involves reading and editing multiple files on top of planning, tool calling, asking the user for clarification, etc. That's not really Aider's fault, I just didn't understand that until I looked at the coding problems. I guess Livebench or SWE-bench might be more relevant to agentic coding?

# Problem 3: Tests take forever

I run [Seed-OSS 36B INT4 AutoRound](https://huggingface.co/Intel/Seed-OSS-36B-Instruct-int4-AutoRound) in vLLM across 2x Nvidia L4 24GB cards (tensor parallelism), which gives me about 20 tp/s. It's very usable in Roo Code, as its thinking is usually very short (<512 tokens in most cases).
However, with the default system prompt, Aider Polyglot tests often produce 8k+ thinking tokens, and the average duration of each test is over 10 minutes (I actually had to increase the hard-coded 600s timeout to get some tests to complete). I will probably try using a different system prompt or limit thinking, but I worry that could cause more variance in the results.

# Possible Solutions

I'll probably start by curating/modifying the Aider problems to fit my taste, as the framework is laid out very logically and it's easy to make changes. However, I still want a more automated and empirical method of testing agentic performance. Ideally, this process would use the same client that I use in the real world (Roo Code currently, but taking a closer look at OpenCode), and work on actual (past) problems from my project codebases. Maybe I can set something up in n8n/dify, but I haven't played around with those too much.

Anyway, this started as a private note but I thought I'd post here to see if anyone else has any experience with this. If you have an empirical, automated, quick-ish, and repeatable process for benching LLM coding performance, I'd love to hear it.
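For reference, the kind of harness I have in mind looks roughly like this (repo paths, prompts, and the `run_agent` hook are placeholders, not an existing tool):

```python
import json
import subprocess
from pathlib import Path

# Each problem pins a repo state and a test command; run_agent wraps whatever
# client you actually use (Roo Code, OpenCode, a raw API call).
PROBLEMS = [
    {
        "repo": "~/projects/myapp",
        "commit": "abc1234",
        "prompt": "Fix the date parsing bug in utils.py",
        "test_cmd": ["pytest", "tests/test_utils.py", "-q"],
    },
]

def run_problem(problem, run_agent):
    repo = Path(problem["repo"]).expanduser()
    # Reset to the pinned commit (add a git clean if the agent creates new files).
    subprocess.run(["git", "checkout", "-f", problem["commit"]], cwd=repo, check=True)
    run_agent(repo, problem["prompt"])  # agent edits files in place
    result = subprocess.run(problem["test_cmd"], cwd=repo, capture_output=True, text=True)
    return {"prompt": problem["prompt"], "passed": result.returncode == 0,
            "tail": result.stdout[-2000:]}

def run_suite(run_agent, out_file="results.json"):
    results = [run_problem(p, run_agent) for p in PROBLEMS]
    Path(out_file).write_text(json.dumps(results, indent=2))
    return sum(r["passed"] for r in results) / len(results)
```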
2026-01-14T17:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1qctseq/public_coding_benchmarks_suck_how_are_you/
AvocadoArray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qctseq
false
null
t3_1qctseq
/r/LocalLLaMA/comments/1qctseq/public_coding_benchmarks_suck_how_are_you/
false
false
self
7
{'enabled': False, 'images': [{'id': 'YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=108&crop=smart&auto=webp&s=fcd120d835bf917cce45f9a0ec562d1a0997b3a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=216&crop=smart&auto=webp&s=a4db3b1bd0639a80272d755c329c91d6ec25785b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=320&crop=smart&auto=webp&s=5976205674613332f0c85cbd0825a95f2f82c637', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=640&crop=smart&auto=webp&s=781bef92ce985e9cd046d8a9e3e354a97865a863', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=960&crop=smart&auto=webp&s=27b771e42f6c7fa7c180ee5ef8bf2de13e025ac9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?width=1080&crop=smart&auto=webp&s=0ac2a25094f97ff0c4601402658c1c94a8db6962', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YSTyEuikNe3AbrEYu_czUVTQBM6CIOJzeYuFTBIDRpY.png?auto=webp&s=18dba6b556dc1d25df0ccc11d7f3fe85008253ae', 'width': 1200}, 'variants': {}}]}
Local tool for visualizing model drift (not another eval)
2
Built a local-only visualization to explore drift and collapse behavior over time. No cloud, no APIs. Posting a short demo because screenshots don’t capture it. Curious whether this resonates or if it’s just a me-problem. repo: https://github.com/rjsabouhi/sfd-engine
2026-01-14T17:34:46
https://v.redd.it/eo6bubs1pcdg1
GraciousMule
v.redd.it
1970-01-01T00:00:00
0
{}
1qctmym
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/eo6bubs1pcdg1/DASHPlaylist.mpd?a=1771004106%2CYTc1NzJkMjFhOTdhM2ZmN2Y0ZDg4YTI3MWQ3NGU3MjM2MjQ2MGFjNTExMDY2OWM1MTMyOTlkNDA3YjFmYmE0MQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/eo6bubs1pcdg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 598, 'hls_url': 'https://v.redd.it/eo6bubs1pcdg1/HLSPlaylist.m3u8?a=1771004106%2CNTRkMTgxMTczNWZjOTA5ZDA0MWM0OTdmNGZlYjU0NzA3NDc2MmQzYzY1M2ZjNWFkOTIyMmUzZmNjNGNiN2IyYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eo6bubs1pcdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qctmym
/r/LocalLLaMA/comments/1qctmym/local_tool_for_visualizing_model_drift_not/
false
false
https://external-preview…678b378df67260dd
2
{'enabled': False, 'images': [{'id': 'Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=108&crop=smart&format=pjpg&auto=webp&s=fb5651c1b6424191a8cfea42ef37bc7f3b97f511', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=216&crop=smart&format=pjpg&auto=webp&s=522de778c3c39382554bf191d95d64e91b1fb496', 'width': 216}, {'height': 149, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=320&crop=smart&format=pjpg&auto=webp&s=2e400bf4473a944c400bb0fcdfee53e1631d3a33', 'width': 320}, {'height': 299, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=640&crop=smart&format=pjpg&auto=webp&s=293b229d1f034e61d050da065f306f1bf483973c', 'width': 640}, {'height': 448, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=960&crop=smart&format=pjpg&auto=webp&s=54b18246eebc1294f09913d733f9235836fbc691', 'width': 960}, {'height': 505, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d7dfc40738003233dec7e9749bdb31c9e3a2cf9d', 'width': 1080}], 'source': {'height': 896, 'url': 'https://external-preview.redd.it/Y2x6anVxcDFwY2RnMUv6mao-V93I3EI6kooGjAlOcQobFlyP7tsjRRx0yRtX.png?format=pjpg&auto=webp&s=b0d705c1f96082ca707b547ff911021e85157367', 'width': 1916}, 'variants': {}}]}
How does my local LLM rig look?
33
In the garage / freezing MN temps are nice! Key Specs: Motherboard: ASUS Pro WS W790E-SAGE SE (workstation platform, multi-GPU + tons of PCIe) CPU: Intel Xeon W9-3495X, 56 cores 112 threads, with Intel AMX primarily for a ktransformers build in mind (moved from an engineering sample to retail) Memory: 512GB DDR5 ECC (8×64GB) 4800 but overclocked to 6000 on an octa-channel platform GPUs: 2× NVIDIA RTX PRO 6000 Blackwell Workstation Edition (96GB VRAM each) Storage: Samsung 9100 PRO 4TB Gen5 NVMe for models + WD_BLACK SN850X 2TB for OS Network: 10Gb local + 1Gb internet Can you spot all the other tools besides the server?
2026-01-14T17:18:09
https://i.redd.it/z1xw8usylcdg1.jpeg
texasdude11
i.redd.it
1970-01-01T00:00:00
0
{}
1qct6h2
false
null
t3_1qct6h2
/r/LocalLLaMA/comments/1qct6h2/how_does_my_local_llm_rig_look/
false
false
default
33
{'enabled': True, 'images': [{'id': 'z1xw8usylcdg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=108&crop=smart&auto=webp&s=393103a161e2c8f30f319d75ad7147e5bb6669ca', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=216&crop=smart&auto=webp&s=c70420f0f88ee363fb605ffec8db381370bfcac2', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=320&crop=smart&auto=webp&s=f3819db7ad0d0e5fa6f84b09e2660b1bd4fe9f16', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=640&crop=smart&auto=webp&s=afc831716841f3b411148307f63ee880af80b163', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=960&crop=smart&auto=webp&s=91590747b9278eeba1883307e5581657383d9aea', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?width=1080&crop=smart&auto=webp&s=cdaf2881d93d909b21d19f6d3fce126308d528e3', 'width': 1080}], 'source': {'height': 2252, 'url': 'https://preview.redd.it/z1xw8usylcdg1.jpeg?auto=webp&s=ec0539d6b2055657a597866ee597ef1cbde75150', 'width': 4000}, 'variants': {}}]}
VectorDBZ update: Pinecone, pgvector, custom embeddings, search stats
3
👋 Hey everyone, A while ago I shared **VectorDBZ, a desktop GUI for vector databases**, and the feedback from this community was incredibly useful. Thanks again! 🙏 Since then, I’ve added: • **Pinecone** and **pgvector** support • Search statistics for queries • Custom embedding functions directly in the search tab Your earlier feedback helped shape a clear roadmap, and the app feels much more capable now. I’d love more ideas and feedback: • What other databases or features would make this essential for your workflows? • Any UI/UX improvements for search or embeddings you’d suggest? • Is sparse vector worth implementing, and how have you used it? • If you do hybrid search with BM25, check the current search flow and tell me how you’d implement it UI-wise, since I feel like I might be overthinking it. • Other analytics or visualizations that would be useful? Links: GitHub: [https://github.com/vectordbz/vectordbz](https://github.com/vectordbz/vectordbz?utm_source=chatgpt.com) Downloads: [https://github.com/vectordbz/vectordbz/releases](https://github.com/vectordbz/vectordbz/releases) If you find this useful, a ⭐ on GitHub would mean a lot and helps me keep building. Thanks again for all your input!
2026-01-14T17:16:32
https://www.reddit.com/r/LocalLLaMA/comments/1qct4we/vectordbz_update_pinecone_pgvector_custom/
snirjka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qct4we
false
null
t3_1qct4we
/r/LocalLLaMA/comments/1qct4we/vectordbz_update_pinecone_pgvector_custom/
false
false
self
3
null
Train LoRA over GGUF
7
I've made a proof of concept that we can train LoRA over GGUF rather than bnb 4-bit quantized base model. When using 3-bit rather than 4-bit base model, we can train Qwen-30B-A3B with 16 rather than 24 GB VRAM. For convenience I'm developing it in my repo https://github.com/woct0rdho/transformers-qwen3-moe-fused#lora-over-gguf , but it also works with many models that are not Qwen and not MoE. For now it surely has a lot of rough edges, and we need more experiments to check the quality of such LoRA.
2026-01-14T17:02:28
https://www.reddit.com/r/LocalLLaMA/comments/1qcsr6h/train_lora_over_gguf/
woct0rdho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcsr6h
false
null
t3_1qcsr6h
/r/LocalLLaMA/comments/1qcsr6h/train_lora_over_gguf/
false
false
self
7
{'enabled': False, 'images': [{'id': 'GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=108&crop=smart&auto=webp&s=90f3e5a58921d32dbea887c14453afe330c257f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=216&crop=smart&auto=webp&s=61089887b3353738b37dc99ef1ebd517fb77759b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=320&crop=smart&auto=webp&s=a8acf4f9a208a5db13e04031670fcef954ce3135', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=640&crop=smart&auto=webp&s=9d88cf1eecdfef5488cb51d2cf407af23c0dc677', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=960&crop=smart&auto=webp&s=b1a0e11cbf7183d0c85445fc7808e500ddd3bd23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?width=1080&crop=smart&auto=webp&s=301e9e8ab3ca013ed81120f869f4ebb09fe92c37', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GAXpAl76ipTpYBTE71z0iu5Y97p4lbUZA20VVH42wGI.png?auto=webp&s=ce03b9a9f3deb3256572706ddd3b146c72de0a69', 'width': 1200}, 'variants': {}}]}
We tried to automate product labeling in one prompt. It failed. 27 steps later, we've processed 10,000+ products.
52
We built an AI agent to localize imported food products for a retail client. The task sounds simple: extract product info, translate it contextually (not Google Translate), calculate nutritional values for local formats, check compliance with local regulations. First attempt: one detailed prompt. Let the AI figure out the workflow. Result: chaos. The AI would hallucinate numbers even with clean images. It would skip steps randomly. At scale, we had no idea where things broke. Every error was a mystery to debug. So we broke it down. Way down. 27 steps. Each column in our system handles one thing: * Extract product name * Extract weight * Extract nutritional values per serving * Convert units to local format * Translate product name (contextual, not literal) * Translate description * Check certification requirements * ... and so on **What changed:** **1. Traceability.** When something fails, we know exactly which step. No more guessing. **2. Fixability.** Client corrects a number extraction error once, we build a formula that prevents it downstream. Errors get fixed permanently, not repeatedly. **3. Consistency at scale.** The AI isn't "deciding" what to do. It's executing a defined process. Same input, same process, predictable output. **4. Human oversight actually works.** The person reviewing outputs learns where the AI struggles. Step 14 always needs checking. Step 22 is solid. They get faster over time. **The counterintuitive part:** making the AI "dumber" per step made the overall system smarter. One prompt trying to do everything is one prompt that can fail in infinite ways. 27 simple steps means 27 places where you can inspect, correct, and improve. We've processed over 10,000 products this way. The manual process used to take 20 minutes per product. Now it's 3 minutes, mostly human review. The boring truth about reliable AI agents: it's not about prompt engineering magic. It's about architecture that assumes AI will fail and makes failure easy to find and fix. Happy to answer questions about the approach.
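To make the structure concrete, here's a minimal sketch of the step/validate pattern (step names and validators are invented for illustration, not our actual 27 columns):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]    # one narrow transformation (usually an LLM call)
    check: Callable[[dict], bool]  # cheap deterministic validation, no LLM

# Placeholder step: in the real pipeline this would call a model with one focused prompt.
def extract_weight(rec): return {**rec, "weight_g": 500}
def check_weight(rec): return isinstance(rec.get("weight_g"), (int, float)) and rec["weight_g"] > 0

PIPELINE = [
    Step("extract_weight", extract_weight, check_weight),
    # ... one Step per column, 27 in total
]

def process(record: dict) -> dict:
    for step in PIPELINE:
        record = step.run(record)
        if not step.check(record):
            # Traceability: we know exactly which step failed for which product.
            record.setdefault("errors", []).append(step.name)
    return record

print(process({"sku": "ABC-123"}))
```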
2026-01-14T16:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1qcsmww/we_tried_to_automate_product_labeling_in_one/
No-Reindeer-9968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcsmww
false
null
t3_1qcsmww
/r/LocalLLaMA/comments/1qcsmww/we_tried_to_automate_product_labeling_in_one/
false
false
self
52
null
Is 25k a good price for the GH200
1
How much would you be willing to pay for this beast? QuantaGrid S74G-2U - 480GB RAM - 1x 1.92TB E1.S, 8TB M.2
2026-01-14T16:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1qcs2o2/is_25k_a_good_price_for_the_gh200/
slimeh91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcs2o2
false
null
t3_1qcs2o2
/r/LocalLLaMA/comments/1qcs2o2/is_25k_a_good_price_for_the_gh200/
false
false
self
1
null
code execution agents
1
[removed]
2026-01-14T16:16:22
https://www.reddit.com/r/LocalLLaMA/comments/1qcrhjf/code_execution_agents/
Annual-Item-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcrhjf
false
null
t3_1qcrhjf
/r/LocalLLaMA/comments/1qcrhjf/code_execution_agents/
false
false
self
1
null
Kaggle Introduces Community Benchmarks
4
Now you can build, run and share custom AI benchmarks for real-world tasks like reasoning, coding and multimodal workflows - all with reproducible results.  Learn more: [https://www.kaggle.com/benchmarks?type=community](https://www.kaggle.com/benchmarks?type=community) Watch the tutorial: [https://www.youtube.com/watch?v=VBlyJJ7PTD8](https://www.youtube.com/watch?v=VBlyJJ7PTD8) Start building today: [https://www.kaggle.com/benchmarks?type=community](https://www.kaggle.com/benchmarks?type=community)  Would be interested to hear how others here think about benchmarking models or what kinds of real-world tasks you’d want to see evaluated.
2026-01-14T15:52:54
https://www.reddit.com/r/LocalLLaMA/comments/1qcqu9f/kaggle_introduces_community_benchmarks/
kaggle_official
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcqu9f
false
null
t3_1qcqu9f
/r/LocalLLaMA/comments/1qcqu9f/kaggle_introduces_community_benchmarks/
false
false
self
4
{'enabled': False, 'images': [{'id': 'fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=108&crop=smart&auto=webp&s=0b8af91e489a4db9a71df367e624b09df7588636', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=216&crop=smart&auto=webp&s=e9649913db92bf869860c3757e8dd136f9f2b722', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=320&crop=smart&auto=webp&s=b50a2044cb406ec4eb99c50cb994ba2f8fb0fe0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=640&crop=smart&auto=webp&s=493f1611943d519e52c080bfd9175666b9076969', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=960&crop=smart&auto=webp&s=20b5c6b735854add3a1c5bc2316eb7ed960c4aab', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=1080&crop=smart&auto=webp&s=6af6e0f104182fb6e3fcf2a9cedaaf8b416874f0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?auto=webp&s=1744bc7d31ae842efddd7d75d2975e5b7ad6c67e', 'width': 1200}, 'variants': {}}]}
Open-Source Claude Cowork is here!
15
Claude recently launched its Cowork feature, but it's currently limited to Max subscribers and lacks API support. One of our talented community developers just changed the game! Using the MiniMax M2.1 SOTA model, he has built an open-source alternative that brings collaborative coding to everyone. Project Highlights: 1. Fully Compatible: Works with Claude Code configuration files. 2. Powered by SDK: Built using the Claude Agent SDK for seamless performance. 3. Interactive: Supports real-time process interaction and manual intervention. 4. Cross-Platform Ready: Currently optimized for Apple M-series chips, with full platform build support. Check out the repo here: https://github.com/DevAgentForge/Claude-Cowork We love seeing the MiniMax M-series community push the boundaries of what's possible with our latest models. We can't wait to see what you build next!
2026-01-14T15:51:39
https://github.com/DevAgentForge/Claude-Cowork
srtng
github.com
1970-01-01T00:00:00
0
{}
1qcqt2i
false
null
t3_1qcqt2i
/r/LocalLLaMA/comments/1qcqt2i/opensource_claude_cowork_is_here/
false
false
default
15
null
One wallet, 30+ models: Route between GPT, Grok, DeepSeek, DALL-E from Claude Code
0
Different models are better at different things. I got tired of managing 5 API keys to access them all, so I built a unified router.

**The idea:** Give Claude Code a wallet. When it needs a capability, it pays the right model:

| Task | Routes to | Why | Cost |
|------|-----------|-----|------|
| Generate image | DALL-E | Claude can't do images | $0.05 |
| Real-time X/Twitter | Grok | Only model with live X access | ~$0.26 |
| Code review | GPT-4o | Different perspective catches different bugs | $0.001 |
| Bulk summarization | DeepSeek | 10x cheaper for simple tasks | $0.0001 |

**Smart routing built-in:**
- Mentions Twitter/X → auto-routes to Grok
- Image request → auto-routes to DALL-E
- `--cheap` flag → routes to DeepSeek
- `--fast` flag → routes to GPT-4o-mini

**Cost comparison:** $1 USDC gets you:
- ~1,000 GPT-4o calls
- ~10,000 DeepSeek calls
- ~20 DALL-E images
- ~4 Grok queries with live X search

**No API keys.** One wallet funded with USDC on Base. Pay per request via x402 micropayments.

**Install:**
```
/plugin install github:BlockRunAI/blockrun-claude-code-wallet
pip install blockrun-llm
```

**Python SDK:**
```python
from blockrun_llm import LLMClient

client = LLMClient()
response = client.chat("openai/gpt-4o", "Review this code for bugs")
print(response)

# Check spending
print(f"Spent: ${client.get_spending()['total_usd']:.4f}")
```

Open source (MIT): [https://github.com/BlockRunAI/blockrun-claude-code-wallet](https://github.com/BlockRunAI/blockrun-claude-code-wallet)

---

What models would you add to the routing? Thinking about adding Mixtral, Llama, etc.
2026-01-14T15:50:18
https://www.reddit.com/r/LocalLLaMA/comments/1qcqrqg/one_wallet_30_models_route_between_gpt_grok/
Klutzy_Car1425
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcqrqg
false
null
t3_1qcqrqg
/r/LocalLLaMA/comments/1qcqrqg/one_wallet_30_models_route_between_gpt_grok/
false
false
self
0
null
vLLM on 2x/4x Tesla v100 32GB
2
Is anybody running latest models with vLLM on Teslas V100? The GPTQ 4bit quants should be somehow supported on V100 (CUDA 7.0) with Triton Attention. In fact some models like Qwen3 30B A3B GPTQ or Seed OSS 36B GPTQ run well on my cards. I noticed though that the compression tools have changed lately and produce models with metadata “compressed-tensors”. I’d like to run the latest ZAi models (especially GLM4.5 Air) but I keep getting errors related to the compressed-tensors not supported. Any idea? Thanks!
2026-01-14T15:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1qcqicx/vllm_on_2x4x_tesla_v100_32gb/
grayarks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcqicx
false
null
t3_1qcqicx
/r/LocalLLaMA/comments/1qcqicx/vllm_on_2x4x_tesla_v100_32gb/
false
false
self
2
null
Kaggle now lets you create your own benchmarks
1
[removed]
2026-01-14T15:22:51
https://www.reddit.com/r/LocalLLaMA/comments/1qcq1ni/kaggle_now_lets_you_create_your_own_benchmarks/
CautiousLog3690
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcq1ni
false
null
t3_1qcq1ni
/r/LocalLLaMA/comments/1qcq1ni/kaggle_now_lets_you_create_your_own_benchmarks/
false
false
self
1
null
Kaggle now lets you create your own benchmarks
1
[removed]
2026-01-14T15:21:18
https://www.reddit.com/r/LocalLLaMA/comments/1qcq072/kaggle_now_lets_you_create_your_own_benchmarks/
CautiousLog3690
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcq072
false
null
t3_1qcq072
/r/LocalLLaMA/comments/1qcq072/kaggle_now_lets_you_create_your_own_benchmarks/
false
false
self
1
null
Kaggle now lets you create your own benchmarks!
1
[removed]
2026-01-14T15:17:42
https://www.reddit.com/r/LocalLLaMA/comments/1qcpwsz/kaggle_now_lets_you_create_your_own_benchmarks/
CautiousLog3690
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcpwsz
false
null
t3_1qcpwsz
/r/LocalLLaMA/comments/1qcpwsz/kaggle_now_lets_you_create_your_own_benchmarks/
false
false
self
1
null
I automated 3-way invoice matching in Accounts Payable (process breakdown )
3
Hey everyone, I'm an AI engineer who's been neck-deep in document intelligence lately, testing different automation scenarios to see what actually holds up in real workflows vs. what just sounds good on paper. Latest experiment: 3-way invoice matching in accounts payable. You know, that soul-crushing process where someone has to match purchase orders → goods receipts → invoices, line by line, catching price mismatches and quantity errors before approving payments. Talked to a few finance folks and holy shit, they're spending so much time on this. **So here's what I built:** The core idea is pretty straightforward but the execution matters: **Part 1: Document extraction step** Used [Kudra.ai](http://Kudra.ai) (a document intelligence platform with VLMs, not just basic OCR). The difference is huge - basic OCR reads text but doesn't understand context. It can't tell you that "PO Number," "Order ID," and "Purchase Order Reference" are the same thing. Set up three workflows - one each for POs, goods receipts, and invoices. Each workflow: * Runs OCR to pull all text * Runs a vision language model to actually understand document structure * Extracts structured data (line items, amounts, dates, etc.) * Validates the extraction (checks totals match line items, required fields are present) * Spits out clean JSON The whole setup took maybe 2 hours. Once it's configured, you just API it in and forget about it. **Part 2: The actual matching logic** This is where it gets interesting. The system needs to: * Link all three documents (verify they're talking about the same transaction) * Check the golden rule: billed quantity ≤ received quantity ≤ ordered quantity * Compare invoice prices to PO prices (with tolerance thresholds for rounding) * Recalculate everything - line totals, taxes, grand totals * Catch unauthorized charges or price increases **Part 3: Making it actually useful** When there's a mismatch, the system doesn't just flag it - it tells you: * Exactly where the discrepancy is (line 3, quantity mismatch) * Which document is causing the issue * Why it violates policy * What to do about it (approve within tolerance, request corrected invoice, hold payment, etc.) Clean matches? Auto-approved and moved forward. Only real exceptions need human eyes. **The results so far:** Processing time is a few seconds. The AI works really well after a lot of testing. And honestly? The finance people I tested this with were relieved, not threatened. They still make the final call on exceptions - they're just not drowning in grunt work anymore. **Why I'm sharing this:** I'm documenting these experiments as I go because I think there's a huge gap between "AI will automate everything!" hype and actual implementation details that work. Most blog posts are too high-level to be useful. Sharing the details below (also thinking of open-sourcing the project for people who want to test it).
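To make the golden rule concrete, here's a toy sketch of a single line-item check (field names and the tolerance value are illustrative, not the production logic):

```python
def match_line(ordered_qty, received_qty, billed_qty,
               po_price, invoice_price, price_tol=0.01):
    """Flag violations of billed <= received <= ordered and of the price tolerance."""
    issues = []
    if not (billed_qty <= received_qty <= ordered_qty):
        issues.append(
            f"quantity violates billed <= received <= ordered "
            f"({billed_qty} / {received_qty} / {ordered_qty})"
        )
    if abs(invoice_price - po_price) > price_tol * po_price:
        issues.append(
            f"price {invoice_price} deviates from PO price {po_price} "
            f"beyond {price_tol:.0%} tolerance"
        )
    return issues  # empty list = clean match, safe to auto-approve

# Over-billed quantity and a price bump both get flagged with line-level detail.
print(match_line(ordered_qty=10, received_qty=8, billed_qty=9,
                 po_price=4.00, invoice_price=4.20))
```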
2026-01-14T15:02:13
https://www.reddit.com/r/LocalLLaMA/comments/1qcpioa/i_automated_3way_invoice_matching_in_accounts/
Helpful_Milk_5618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcpioa
false
null
t3_1qcpioa
/r/LocalLLaMA/comments/1qcpioa/i_automated_3way_invoice_matching_in_accounts/
false
false
self
3
null
Where to discuss local training of small language models?
1
[removed]
2026-01-14T14:51:32
https://www.reddit.com/r/LocalLLaMA/comments/1qcp8mf/where_to_discuss_local_training_of_small_language/
JoeStrout
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcp8mf
false
null
t3_1qcp8mf
/r/LocalLLaMA/comments/1qcp8mf/where_to_discuss_local_training_of_small_language/
false
false
self
1
null
Any Firecrawl Users?
1
[removed]
2026-01-14T14:51:12
https://www.reddit.com/r/LocalLLaMA/comments/1qcp8b8/any_firecrawl_users/
Wyguy18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcp8b8
false
null
t3_1qcp8b8
/r/LocalLLaMA/comments/1qcp8b8/any_firecrawl_users/
false
false
self
1
null
Building a low-cost, business-level local LLM for small businesses — hardware & security advice needed
5
Hi everyone, I’m a complete beginner (zero background) but very interested in building a **low-cost, business-level local LLM** that can run **fully on-premise** for small businesses (no cloud, no data leaving the site). I’d really appreciate advice from people with experience in this area, especially on: **1) Hardware** * What kind of CPU/GPU setup makes sense for a small business budget? * Is a single consumer GPU enough, or is multi-GPU necessary? * How much RAM and storage should I realistically plan for? * Any recommendations for cost-effective hardware that’s stable for 24/7 use? **2) Architecture / Practical Considerations** * What model sizes are realistic for local deployment today? * Things beginners usually underestimate (power, cooling, noise, maintenance, etc.) * Whether virtualization or containers are recommended for this kind of setup **3) Security** * Key security risks when running a local LLM for business use * Best practices for data isolation, access control, and auditability * Any must-have protections to make customers feel confident their data is safe My goal is not cutting-edge performance, but **reliable, affordable, and secure** local AI that small businesses can actually trust and run themselves. Any guidance, resources, or real-world lessons would be hugely appreciated. Thanks in advance!
2026-01-14T14:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1qcot7e/building_a_lowcost_businesslevel_local_llm_for/
eeprogrammer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcot7e
false
null
t3_1qcot7e
/r/LocalLLaMA/comments/1qcot7e/building_a_lowcost_businesslevel_local_llm_for/
false
false
self
5
null
Should I train 1 or 2 models? or 2 LoRAs?
1
Hey guys! I'm trying to gain some clarity on the best way to train 1 (or 2?) models. My use case is this: 1. model intakes documents and extracts relevant parts, based on specific criteria (system prompt) 2. human user approves/fixes and a final report is generated 3. model intakes the report and another set of instructions and generates a structured JSON to be consumed. My doubt is: should I train a model for 1 and one for 2, or one LoRA for each? Or will it be possible to train just one model to do both tasks, as long as the training set has enough of both? Right now I'm using a SOTA model to do both, but the idea is to get a (much) smaller model to handle the task (a small adapter-switching sketch follows below).
2026-01-14T14:09:10
https://www.reddit.com/r/LocalLLaMA/comments/1qco7e1/should_i_train_1_or_2_models_or_2_loras/
nunodonato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qco7e1
false
null
t3_1qco7e1
/r/LocalLLaMA/comments/1qco7e1/should_i_train_1_or_2_models_or_2_loras/
false
false
self
1
null
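One possible answer to the question above, sketched with the Hugging Face transformers/peft libraries: a single base model can host two task-specific LoRA adapters and switch between them per request. The base model ID, adapter paths, and adapter names below are placeholders, not a recommendation.

```python
# Minimal sketch: one base model, two task-specific LoRA adapters (transformers + peft).
# Paths and adapter names are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B-Instruct"          # whichever small base model you fine-tune
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Load the first adapter (document extraction) and attach the second (report -> JSON).
model = PeftModel.from_pretrained(base, "loras/extract", adapter_name="extract")
model.load_adapter("loras/report_to_json", adapter_name="to_json")

def run(task: str, prompt: str, max_new_tokens: int = 512) -> str:
    model.set_adapter(task)                    # switch adapters per request
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# run("extract", "...document text...")   # step 1
# run("to_json", "...final report...")    # step 3
```

Training-wise this just means producing two adapters against the same base checkpoint; whether a single adapter trained on a mix of both tasks works equally well is an empirical question worth testing on your data.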
Need your help figuring out if I got a faulty GPU
1
I have a 5060 Ti and a 4060. All the errors/problems happen when using the 5060 Ti, even if I'm using only the 5060 Ti (disabling the 4060 or using set CUDA_VISIBLE_DEVICES=1; 1 is my 5060 Ti). I tried different drivers/CUDA versions, compiling from source, different flags, and different models with different flags. I have no issues when gaming, or when using ComfyUI with the no-pinned-memory flag. My question is: how can I figure out whether it's a hardware failure? I tested some apps that stress the VRAM and GPU, but they all focus on rendering images instead of the CUDA/tensor cores (a small compute stress sketch follows below).
2026-01-14T13:37:35
https://www.reddit.com/r/LocalLLaMA/comments/1qcngyl/need_your_help_figuring_out_if_i_got_fault_gpu/
ResponsibleTruck4717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcngyl
false
null
t3_1qcngyl
/r/LocalLLaMA/comments/1qcngyl/need_your_help_figuring_out_if_i_got_fault_gpu/
false
false
self
1
null
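For the faulty-GPU question above, here is a small PyTorch loop that stresses the CUDA/tensor cores and VRAM directly (rather than a rendering benchmark) and checks that repeated identical matmuls stay consistent. The device index, matrix size, and iteration count are arbitrary choices; persistent checksum drift or CUDA errors here point more toward hardware than drivers, but this is a heuristic, not a definitive test.

```python
# Minimal CUDA/tensor-core stress sketch (PyTorch): repeated large fp16 matmuls with a checksum.
# The same inputs should produce the same result every iteration on healthy hardware.
import torch

device = torch.device("cuda:0")       # set to the index of the suspect 5060 Ti
torch.manual_seed(0)
n = 8192
a = torch.randn(n, n, device=device, dtype=torch.float16)
b = torch.randn(n, n, device=device, dtype=torch.float16)

reference = None
for step in range(200):
    c = a @ b                          # exercises the tensor cores in fp16
    checksum = c.float().sum().item()
    if reference is None:
        reference = checksum
    elif checksum != reference:
        print(f"step {step}: checksum drift {checksum} vs {reference} -> possible hardware fault")
    if step % 20 == 0:
        print(f"step {step}: checksum={checksum:.2f}")
torch.cuda.synchronize()
print("done")
```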
Found a cool AI tool that lets you "try on" any dress virtually
1
[removed]
2026-01-14T13:24:37
https://i.redd.it/qgs54j49gbdg1.jpeg
horizon_echo
i.redd.it
1970-01-01T00:00:00
0
{}
1qcn69m
false
null
t3_1qcn69m
/r/LocalLLaMA/comments/1qcn69m/found_a_cool_ai_tool_that_lets_you_try_on_any/
false
false
default
1
{'enabled': True, 'images': [{'id': 'qgs54j49gbdg1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=108&crop=smart&auto=webp&s=8174e09557d28c3d75c6f5cfb04e12e7b436ce92', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=216&crop=smart&auto=webp&s=13361f96e99820f992fb86ee5d8100503c23fc56', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=320&crop=smart&auto=webp&s=6b904b470c9f4bd3019418377ef6fb5b5361f4ee', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=640&crop=smart&auto=webp&s=ed1ac9bfe11ff8f2841ce320c2a971c086988147', 'width': 640}, {'height': 452, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=960&crop=smart&auto=webp&s=eb1ad767a26bd0444ed2cb40181fea68fa10a2ad', 'width': 960}, {'height': 508, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?width=1080&crop=smart&auto=webp&s=e70d446d1ecd99bb9e5da4ec67dd27dc1bfac707', 'width': 1080}], 'source': {'height': 888, 'url': 'https://preview.redd.it/qgs54j49gbdg1.jpeg?auto=webp&s=ff756f0af938de632366a7d54ab89d80edd4cfaa', 'width': 1885}, 'variants': {}}]}
Intel's AI Playground version 3.0 alpha released
6
2026-01-14T13:22:44
https://github.com/intel/AI-Playground/releases
reps_up
github.com
1970-01-01T00:00:00
0
{}
1qcn4s5
false
null
t3_1qcn4s5
/r/LocalLLaMA/comments/1qcn4s5/intels_ai_playground_version_30_alpha_released/
false
false
default
6
{'enabled': False, 'images': [{'id': '67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=108&crop=smart&auto=webp&s=5fccb965186583b181b4fd73328cb9ad45266f79', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=216&crop=smart&auto=webp&s=72af8aef36bea23d9db433b8672576787bc2242b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=320&crop=smart&auto=webp&s=d4ea7da7a7e78d413034287d6ccbc7cbd13207ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=640&crop=smart&auto=webp&s=5704b932628173409e66e33879b19579714acb8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=960&crop=smart&auto=webp&s=ba0032cce22b5630db4a5dab686e98b0ef99410f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?width=1080&crop=smart&auto=webp&s=4ae6deaef361afc8ab264ba50dfe5d98469c92b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/67mSzjto7y36iFjSqWIlYth52jM5FJhdSYXhGrks3Ak.png?auto=webp&s=fc39dcb871e486197ecc6d5dd8ed4569ccf66a52', 'width': 1200}, 'variants': {}}]}
De-duplication / Image to OCR LLM
3
Hey everyone, I'm currently starting to theory-craft a work project. We've got several million attachments from a third-party system and we want to 1. demonstrate that leaving this data sitting around opens the company up to considerable risk, and 2. show that pruning or turning some of these systems off is a wise move. I'd love general thoughts on orchestration, deduplication, how to do the OCR-to-text process, and how to actually use the LLM itself to extract practical meaning (a minimal deduplication sketch follows below).
2026-01-14T13:19:57
https://www.reddit.com/r/LocalLLaMA/comments/1qcn2i6/deduplication_image_to_ocr_llm/
Direct_Bodybuilder63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcn2i6
false
null
t3_1qcn2i6
/r/LocalLLaMA/comments/1qcn2i6/deduplication_image_to_ocr_llm/
false
false
self
3
null
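For the deduplication part of the question above, a minimal first pass: exact-duplicate detection via SHA-256 before spending anything on OCR or LLM calls. The directory path is a placeholder, and near-duplicate detection (perceptual hashing, embedding similarity) would be a separate pass on top of this.

```python
# Sketch: exact-duplicate detection over a directory of attachments via SHA-256,
# so only unique files go on to the (much more expensive) OCR -> LLM stage.
import hashlib
from collections import defaultdict
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

groups: dict[str, list[Path]] = defaultdict(list)
for p in Path("attachments").rglob("*"):   # placeholder export directory
    if p.is_file():
        groups[sha256_of(p)].append(p)

unique = [paths[0] for paths in groups.values()]
dupes = sum(len(paths) - 1 for paths in groups.values())
print(f"{len(unique)} unique files, {dupes} exact duplicates skipped")
```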
GPT-OSS -> MLA conversion breakthrough (20B), still looking for compute + collaborators
5
[GPT-OSS -> MLA conversion breakthrough](https://preview.redd.it/3urzbjw2abdg1.png?width=1600&format=png&auto=webp&s=7dfb389e0a6bff274c6654b81f88b30a3b125954) Quick update to my earlier post: [https://www.reddit.com/r/LocalLLaMA/comments/1qaqqqn/is_anyone_offering_compute_to_finetune_a_unique/](https://www.reddit.com/r/LocalLLaMA/comments/1qaqqqn/is_anyone_offering_compute_to_finetune_a_unique/) MOTTO: **NECESSITY IS ALL YOU NEED. NECESSITY IS THE MOTHER OF INVENTION.** Progress tracker / notes (tables + TODOs, no run-log spam): [https://gist.github.com/radna0/b447711ea4e766f3b8ab8b434b35a372](https://gist.github.com/radna0/b447711ea4e766f3b8ab8b434b35a372) So the big news: the "TransMLA-style" conversion path I was using had a real quality floor on GPT-OSS (PPL was stuck ~5 vs baseline ~3 on the 20B testbed). It wasn't just "needs finetuning" or "not enough calibration" - it was structural. I dug into why and found that GPT-OSS KV-head RoPE keys are basically not shareable (pairwise cosine is ~0). So any MLA variant that implicitly forces a shared RoPE-K (MQA-style) is going to lose information on this model family. After changing the conversion to keep RoPE-K exact per KV head (and starting from a quality-first anchor where V is not aggressively compressed), I finally got near-lossless behavior on 20B: PPL matches baseline within noise at 1024/2048/4096. Huge relief - it means GPT-OSS isn't "inconvertible", the earlier floor was just the wrong assumption. Now I'm measuring the tradeoff curve when we actually compress V (V_latent_rank sweep). It does start to introduce quality loss as you push rank down. The tables (and what I'm testing next) are in the Gist. One nuance I want to be honest about: PPL is a great cheap gate and helps us iterate fast, but I'm not treating it as the only truth forever. Next I'm going to do token-level analysis on a lot more samples (per-token NLL distributions / tail behavior, etc.) to be more confident about capability preservation and to tell whether something is "recoverable" or if there's a structural loss floor. Also: TransMLA's RoRoPE/Partial-RoPE step seems inherently lossy across models to some degree. It's not really "break vs not break", it's "how much it breaks" depending on the original model's RoPE frequency geometry. The TransMLA paper mentions needing a big recovery phase (they cite ~6B tokens). I'm not comfortable assuming that will generalize cleanly to every model or scale cheaply to 120B - so I'm trying hard to avoid relying on recovery as a crutch. I'm still looking for compute / collaborators, especially for: - running repeatable PPL evals (so we can iterate faster and trust results) - running token-level NLL/EAFT-style evals on larger samples - scaling these exactK vs approximateK ablations to GPT-OSS-120B - long-context decode benchmarks at higher batch once the conversion is stable If you're interested, comment here or DM me. Discord: _radna
2026-01-14T12:49:10
https://www.reddit.com/r/LocalLLaMA/comments/1qcmf4s/gptoss_mla_conversion_breakthrough_20b_still/
Ok_Difference_4483
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcmf4s
false
null
t3_1qcmf4s
/r/LocalLLaMA/comments/1qcmf4s/gptoss_mla_conversion_breakthrough_20b_still/
false
false
https://b.thumbs.redditm…gUNLJKMibzQU.jpg
5
null
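A sketch of the diagnostic mentioned in the post above: measuring pairwise cosine similarity between per-KV-head RoPE keys to see whether a shared-K (MQA-style) conversion can work. The tensor here is a random stand-in; in practice you would capture the post-RoPE keys from a forward hook on the real model's attention layers.

```python
# Sketch: pairwise cosine similarity between per-KV-head key vectors at one token position.
# If the off-diagonal similarities are ~0, the RoPE keys are effectively not shareable
# across heads, which is the failure mode described for shared-RoPE-K conversions.
import torch

def pairwise_head_cosine(keys: torch.Tensor) -> torch.Tensor:
    """keys: [num_kv_heads, head_dim] for a single token position."""
    normed = torch.nn.functional.normalize(keys, dim=-1)
    return normed @ normed.T          # [num_kv_heads, num_kv_heads]

# Stand-in tensor; replace with keys captured from a forward hook on the real model.
num_kv_heads, head_dim = 8, 64
keys = torch.randn(num_kv_heads, head_dim)
sim = pairwise_head_cosine(keys)
off_diag = sim[~torch.eye(num_kv_heads, dtype=torch.bool)]
print("mean off-diagonal cosine:", off_diag.mean().item())
```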
I need a feedback about an open-source CLI that scan AI models (Pickle, PyTorch, GGUF) for malware, verify HF hashes, and check licenses
0
Hi everyone, I've created a new CLI tool to secure AI pipelines. It scans models (Pickle, PyTorch, GGUF) for malware using stack emulation, verifies file integrity against the Hugging Face registry, and detects restrictive licenses (like CC-BY-NC). It also integrates with Sigstore for container signing. GitHub: [https://github.com/ArseniiBrazhnyk/Veritensor](https://github.com/ArseniiBrazhnyk/Veritensor) Install: pip install veritensor If you're interested, check it out and let me know what you think and if it might be useful to you?
2026-01-14T12:41:13
https://www.reddit.com/r/LocalLLaMA/comments/1qcm9e1/i_need_a_feedback_about_an_opensource_cli_that/
arsbrazh12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcm9e1
false
null
t3_1qcm9e1
/r/LocalLLaMA/comments/1qcm9e1/i_need_a_feedback_about_an_opensource_cli_that/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=108&crop=smart&auto=webp&s=978a21ce42867a657e140dd20038120ad561fb09', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=216&crop=smart&auto=webp&s=3e85cf937da5f1d92307e0e54dd879bc82ada911', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=320&crop=smart&auto=webp&s=d6bc418bf1417d0fe73e49599f39b116d487b24a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=640&crop=smart&auto=webp&s=52311269686b9f542c64bb935923f9d790669de6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=960&crop=smart&auto=webp&s=fe02c9787e98c10829898b44766ea7cc9528d22c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?width=1080&crop=smart&auto=webp&s=9041a7ee1c1de2486606e830877ccbcd278387da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ij86rY-selm3axNtkT8fMd81bAytULnEfLFMIBY0A0E.png?auto=webp&s=947534861d651db233fe3bac889a093e812f620c', 'width': 1200}, 'variants': {}}]}
"Agent Skills" - The spec unified us. The paths divided us.
21
Skills are standardized now. But..... .github/skills/ .claude/skills/ .codex/skills/ .copilot/skills/ Write once, store… wherever your agent feels like. I wish we had also agreed on a standardized discovery path for skills (like agents.md), so Agent Skills are truly interoperable when I'm jumping between agents.
2026-01-14T12:39:50
https://i.redd.it/fe2fdwzb8bdg1.jpeg
phoneixAdi
i.redd.it
1970-01-01T00:00:00
0
{}
1qcm8ds
false
null
t3_1qcm8ds
/r/LocalLLaMA/comments/1qcm8ds/agent_skills_the_spec_unified_us_the_paths/
false
false
default
21
{'enabled': True, 'images': [{'id': 'fe2fdwzb8bdg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/fe2fdwzb8bdg1.jpeg?width=108&crop=smart&auto=webp&s=b0af32eff2b3ffae3e93a81a020704192280d2c0', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/fe2fdwzb8bdg1.jpeg?width=216&crop=smart&auto=webp&s=dbe6408d521d46e0cdc452bc985d62b14b2871a2', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/fe2fdwzb8bdg1.jpeg?width=320&crop=smart&auto=webp&s=ce4f4e91626483e0ed89ada9969d0385c9539a83', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/fe2fdwzb8bdg1.jpeg?auto=webp&s=220e0bd60cedfaa4072ecd037c264f83d01bef6c', 'width': 500}, 'variants': {}}]}
Sanity check : 3090 build
3
Hi everyone, I need a final sanity check before I pull the trigger on a used local workstation for **£1,270 (about $1,700)**. **My Goal:** Working on different projects that would need (Unreal Engine 5 Metahumans + Local LLM + TTS + RVC), also doing machine learning and LLM work. **The Dilemma:** I'm debating between buying this PC or just keeping my laptop and using AWS EC2 (g5.2xlarge) for the heavy lifting. **The Local Build (£1,270):** * **GPU:** EVGA RTX 3090 FTW3 Ultra (24GB VRAM) <— *For loading 70B models + UE5* * **CPU:** Intel Core i5-13600K * **RAM:** 32GB DDR4 (Will upgrade to 64GB later) * **Storage:** 1TB NVMe * **PSU:** Corsair RM850 Gold **My concerns:** 1. Is £1,270 a fair price for this in the UK? 2. For real-time talking projects, is the latency of Cloud (AWS) too high compared to running locally on a 3090? 3. Is the i5-13600K enough to drive the 3090 for simultaneous LLM + Rendering workloads? P.S.: I had thought about a Mac mini or Mac Studio Ultra, but sadly I can't do any CUDA on those. Thanks for the help!
2026-01-14T12:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1qcm5ld/sanity_check_3090_build/
Individual-School-07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcm5ld
false
null
t3_1qcm5ld
/r/LocalLLaMA/comments/1qcm5ld/sanity_check_3090_build/
false
false
self
3
null
🚀 Announcing EROS — The Autonomous State-Derived Reward System (Released Under SBAAL v1.1)
1
[removed]
2026-01-14T12:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1qclxu9/announcing_eros_the_autonomous_statederived/
ShadovvBeast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qclxu9
false
null
t3_1qclxu9
/r/LocalLLaMA/comments/1qclxu9/announcing_eros_the_autonomous_statederived/
false
false
self
1
null
Engram—The New Cornerstone of the AI Industrial Revolution? Putting the Best Steel on the Edge Through Structural Decoupling
0
Our study notes on the latest #deepseek paper. The history of technological leapfrogging is rarely about raw power; it is about the elegant **decoupling** of conflicting processes to achieve **structural efficiency**. Just as James Watt catalyzed the first Industrial Revolution not by inventing the steam engine, but by perfecting it through the separation of the condenser and the cylinder, a new breakthrough from DeepSeek-AI titled **Engram** introduces a similar "cornerstone" moment for Artificial Intelligence ([https://github.com/deepseek-ai/Engram](https://github.com/deepseek-ai/Engram)). Its core mission is the realization of the ancient Chinese wisdom: **“好钢用在刀刃上”**—allocating the most precious resources to where they matter most. # The Structural Bottleneck: A Master Craftsman Cutting Butter Current Large Language Models (LLMs) suffer from a fundamental architectural conflict. They lack a native primitive for knowledge lookup, forcing them to "inefficiently simulate retrieval through computation". To process a sequence, they rely on a deep Transformer backbone—a "high-quality steel" designed for dynamic reasoning. However, much of language consists of "local, static, and highly stereotyped" patterns, such as the entity "Alexander the Great". Forcing a Transformer to expend its sequential depth to "reconstruct" these simple facts in early layers is like using a master-crafted blade to cut butter. It is a massive waste of "steel" on trivial operations that should be handled by a simpler mechanism. # The Engram Solution: Structural Decoupling To resolve this, the researchers introduced **Engram**, a "conditional memory" module that decouples static pattern storage from dynamic computation. Engram serves as the "separate condenser" of the AI era. It modernizes classic N-gram embeddings to enable scalable, constant-time lookups. By delegating the heavy lifting of static knowledge to this specialized module, the Transformer backbone is relieved of the burden of "static reconstruction". This architectural shift effectively "deepens" the network, ensuring the attention layers focus exclusively on the "blade's edge": global context, complex reasoning, and mathematics. # The Economic Transformation: Infrastructure-Aware Efficiency Just as Watt’s invention enabled widespread mechanization by making steam power cost-effective, Engram creates a path for an "AI Industrial Revolution" through **infrastructure-aware design**. Because its retrieval is deterministic, the system can asynchronously "prefetch" embeddings from host DRAM. This allows a **100B-parameter** table to be offloaded to host memory with negligible overhead (<3%), bypassing the physical and economic constraints of GPU HBM. Engram perfects the Transformer by ensuring that every FLOP is spent on innovation, not repetition. It is the architectural cornerstone that finally puts the best steel where it belongs: on the edge of the world’s most complex problems.
2026-01-14T12:23:31
https://www.reddit.com/r/LocalLLaMA/comments/1qclx2f/engramthe_new_cornerstone_of_the_ai_industrial/
Straight-Gazelle-597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qclx2f
false
null
t3_1qclx2f
/r/LocalLLaMA/comments/1qclx2f/engramthe_new_cornerstone_of_the_ai_industrial/
false
false
self
0
null
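The post above summarizes the idea of a deterministic, constant-time N-gram embedding lookup acting as "conditional memory". Below is a toy hashed-table sketch of that general idea only; it is not DeepSeek's Engram code, and the table size, hashing scheme, and the way the memory vector would be fused with the backbone are all assumptions for illustration.

```python
# Toy sketch of a hashed N-gram embedding table: deterministic O(1) lookups that
# could live in host DRAM and be prefetched, separate from the Transformer backbone.
import torch
import torch.nn as nn

class NgramMemory(nn.Module):
    def __init__(self, table_size: int = 100_003, dim: int = 256, n: int = 3):
        super().__init__()
        self.n = n
        self.table = nn.Embedding(table_size, dim)      # the "static memory"

    def _bucket(self, ngram: tuple[int, ...]) -> int:
        return hash(ngram) % self.table.num_embeddings  # deterministic bucket id

    def forward(self, token_ids: list[int]) -> torch.Tensor:
        """Return one memory vector per position, keyed by its trailing n-gram."""
        vecs = []
        for i in range(len(token_ids)):
            ngram = tuple(token_ids[max(0, i - self.n + 1): i + 1])
            vecs.append(self.table(torch.tensor(self._bucket(ngram))))
        return torch.stack(vecs)        # [seq_len, dim], to be fused with backbone states

mem = NgramMemory()
print(mem([101, 2054, 2003, 1996]).shape)   # torch.Size([4, 256])
```

Because the bucket id depends only on the token ids, the lookup can be computed ahead of the forward pass, which is what makes host-memory offload and prefetching plausible.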
Why Doesn't a "Personal Clone" AI App Exist Yet?
0
So I've been thinking about this for a while and I'm genuinely confused why no one's building this yet. Here's the idea: **What if there was an app that literally learned how to be you?** You give it access to your Slack, WhatsApp, email, and—here's the magic part—your Notion or personal wiki where you've dumped all your principles, habits, and how you do things. The app watches all these channels *continuously*. It learns not just *what* you say, but *how* you say it. Why you make decisions. Your taste. Your style. Your weird quirks. Then it lives in your Slack (or as a standalone app), and whenever you're like "Hey, how should I approach this?" or "What would I do here?"—it actually *knows*. Not because it's some generic AI trained on the internet, but because it literally has your entire communication history and decision-making playbook. This wouldn't be some generic ChatGPT telling you what it thinks is best. It would be *you*—but available 24/7, distilled from your actual patterns and principles. **And here's the wild part:** With modern LLMs, this should be *dead simple* to build. We're not talking about some sci-fi level of complexity. Connect a few APIs, feed it your data, set up some continuous learning, done. It's basically a glorified chatbot that knows you instead of knowing, well... nothing. So why doesn't this exist? Is there some technical barrier I'm missing? Privacy concerns (though it could all run locally)? Are people just not thinking about it? Or is someone already building this and I'm just living under a rock? I'm genuinely curious what's stopping this from being a real product. Comment below if you know of an app doing this—or if you've built something like it, I want to hear about it. Because the more I think about it, the more this feels like the most obvious next step for personal AI.
2026-01-14T12:22:55
https://www.reddit.com/r/LocalLLaMA/comments/1qclwn6/why_doesnt_a_personal_clone_ai_app_exist_yet/
Outside_Database5042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qclwn6
false
null
t3_1qclwn6
/r/LocalLLaMA/comments/1qclwn6/why_doesnt_a_personal_clone_ai_app_exist_yet/
false
false
self
0
null
Is there a good OCR/VLM for detecting shabby text like this and parsing it into a table
6
2026-01-14T12:06:24
https://i.redd.it/eqt8laxg2bdg1.png
Proper_Door_4124
i.redd.it
1970-01-01T00:00:00
0
{}
1qcll33
false
null
t3_1qcll33
/r/LocalLLaMA/comments/1qcll33/is_there_good_ocrvlm_for_detecting_shaby_text/
false
false
default
6
{'enabled': True, 'images': [{'id': 'eqt8laxg2bdg1', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=108&crop=smart&auto=webp&s=f61a8b04454df4ee76b47a42f9cc693c1a7f7b28', 'width': 108}, {'height': 280, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=216&crop=smart&auto=webp&s=0a0a5a9364393339b0392e139c43a03057bba6e6', 'width': 216}, {'height': 415, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=320&crop=smart&auto=webp&s=4733d3bd0c7791a2f200a84f28d9f385d79ef17f', 'width': 320}, {'height': 830, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=640&crop=smart&auto=webp&s=2dfa5773e023741f8d4a3b8cfc92f925578f3cbb', 'width': 640}, {'height': 1245, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=960&crop=smart&auto=webp&s=45a1901a40999b1a501a121b7915a9dd52d48d02', 'width': 960}, {'height': 1400, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?width=1080&crop=smart&auto=webp&s=9d2ea1382c56a2b69df08b693147b55e945fc149', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://preview.redd.it/eqt8laxg2bdg1.png?auto=webp&s=6d026d1d5725aa9c3cc8fc9d6a1d7c9755e3c579', 'width': 1084}, 'variants': {}}]}
Test or help needed: input framework with very good LLM optimization
0
Hi, I built a framework originally for mathematics/philosophy, and it turns out that, unexpectedly, I get very good LLM optimization from it. In fact, I'm touching directly on a paradigm shift. I've run several benchmarks and get very good token savings that I can modulate as I wish, but above all a 100% honesty score and 0% hallucinations in every session. I send it via prompt injection; the problem is that I'm using the AI's base prompt, so the output goes through several round trips and isn't exactly what I want to test. What I'd like is to place my framework input just before generation, to manage the cognition directly, but that goes well beyond my abilities/skills. If anyone has an idea to point me in the right direction, or better yet wants to test it, thanks.
2026-01-14T11:48:04
https://www.reddit.com/r/LocalLLaMA/comments/1qcl8wc/essai_ou_aide_input_très_bonne_opti_llm/
OthoXIII
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcl8wc
false
null
t3_1qcl8wc
/r/LocalLLaMA/comments/1qcl8wc/essai_ou_aide_input_très_bonne_opti_llm/
false
false
self
0
null
Which are the top LLMs under 8B right now?
173
I'm looking to pick a local LLM and I'm not sure what to go with anymore. There are a lot of "best" <8B models, and every post says something different, even for the same model. What are people using for normal chat, research, or some coding that's not super censored and runs well without a ton of VRAM? It doesn't have to be just one LLM, just the best in their category.
2026-01-14T11:42:15
https://www.reddit.com/r/LocalLLaMA/comments/1qcl543/which_are_the_top_llms_under_8b_right_now/
Additional_Secret_75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcl543
false
null
t3_1qcl543
/r/LocalLLaMA/comments/1qcl543/which_are_the_top_llms_under_8b_right_now/
false
false
self
173
null
Need Help for Lora training
1
Hi, I am new to AI and wanted to train a LoRA for enhanced story-writing capabilities. I asked GPT, Grok and Gemini and was told that this plan was good, but I want a qualified opinion on it. I want to create a dataset like this: - 1000 scenes, each between 800-1200 words, handpicked for quality - first feed these to an instruct AI and get a summary (200 words), metadata, and 2 prompts for generating the scene, one in 150 words and the other in 50 words - metadata contains characters, emotions, mood, theme, setting, tags, avoid; it's present in JSON format - for one output I will use 5 inputs: summary, metadata, summary+metadata, prompt150, and prompt50. This gives 5 input-output pairs per scene, 5000 pairs total - use this data for 2 epochs. Does this pipeline make sense? (A small dataset-building sketch follows below.)
2026-01-14T11:33:09
https://www.reddit.com/r/LocalLLaMA/comments/1qckzd7/need_help_for_lora_training/
Used_Chipmunk1512
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qckzd7
false
null
t3_1qckzd7
/r/LocalLLaMA/comments/1qckzd7/need_help_for_lora_training/
false
false
self
1
null
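A small sketch of how the five input-output pairs per scene described above could be serialized to chat-style JSONL for SFT. The field names (summary, metadata, prompt_150, prompt_50, scene_text), the input file name, and the chat format are assumptions; adapt them to whatever your annotation step actually produces.

```python
# Sketch: turn one annotated scene into the 5 input -> output training pairs described above,
# written as chat-style JSONL. Field names are hypothetical placeholders.
import json

def scene_to_pairs(scene: dict) -> list[dict]:
    meta = json.dumps(scene["metadata"], ensure_ascii=False)
    inputs = [
        scene["summary"],                          # 1) summary only
        meta,                                      # 2) metadata only
        scene["summary"] + "\n\n" + meta,          # 3) summary + metadata
        scene["prompt_150"],                       # 4) 150-word prompt
        scene["prompt_50"],                        # 5) 50-word prompt
    ]
    return [
        {"messages": [
            {"role": "user", "content": text},
            {"role": "assistant", "content": scene["scene_text"]},
        ]}
        for text in inputs
    ]

with open("scenes.json") as f, open("train.jsonl", "w") as out:
    for scene in json.load(f):
        for pair in scene_to_pairs(scene):
            out.write(json.dumps(pair, ensure_ascii=False) + "\n")
```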
Ollama running on my MacBook Pro Mid-2015 15 inches
1
https://preview.redd.it/…look at the RAM.
2026-01-14T11:17:18
https://www.reddit.com/r/LocalLLaMA/comments/1qckpii/ollama_running_on_my_macbook_pro_mid2015_15_inches/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qckpii
false
null
t3_1qckpii
/r/LocalLLaMA/comments/1qckpii/ollama_running_on_my_macbook_pro_mid2015_15_inches/
false
false
https://b.thumbs.redditm…jIsBZWESJiiw.jpg
1
null
ZLUDA on llama.cpp -NEWS
72
[https://www.phoronix.com/news/ZLUDA-Q4-2025-Report](https://www.phoronix.com/news/ZLUDA-Q4-2025-Report)
2026-01-14T11:08:04
https://www.reddit.com/r/LocalLLaMA/comments/1qckjsq/zluda_on_llamacpp_news/
mossy_troll_84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qckjsq
false
null
t3_1qckjsq
/r/LocalLLaMA/comments/1qckjsq/zluda_on_llamacpp_news/
false
false
self
72
{'enabled': False, 'images': [{'id': 'n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=108&crop=smart&auto=webp&s=5281eefe1179449d4f02965608a1d5e54d6d1582', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=216&crop=smart&auto=webp&s=df6ee591c2ecf601896065132df89521bb2ae220', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=320&crop=smart&auto=webp&s=b02995f93fac3a693d8234a52ffcb896a14b11f9', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=640&crop=smart&auto=webp&s=de87136eefeaf52471713f22a696a2e447e856eb', 'width': 640}, {'height': 565, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=960&crop=smart&auto=webp&s=dad70f1a6272f768fc06e130790c6b62fa734cd7', 'width': 960}, {'height': 636, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?width=1080&crop=smart&auto=webp&s=b36d73b57679b2b739fdfe72354f9a06ee28c5ea', 'width': 1080}], 'source': {'height': 1003, 'url': 'https://external-preview.redd.it/n4UA7Zafi6ODDOFfyKWl82QAvAtFPBqYat7CqrlSYbg.jpeg?auto=webp&s=87a3091fca08fd89d74a73592f7e61b42b0fb6d9', 'width': 1702}, 'variants': {}}]}
Looking for testers: A VS Code/Cursor extension that lets you review AI-generated changes with a tour of the changes directly in your editor. (Early Alpha).
2
**The concept:** It connects to your LLM (local or API) to read your staged `git diff` and automatically generates a "CodeTour" (walkthrough) of the changes. It's designed for people who use AI coding tools (Cursor/Copilot) and find it hard to review massive changes just by reading text diffs. **Current State:** It's an Alpha. The landing page is on Surge, and it requires a manual extension install (.vsix) because I'm waiting on Marketplace approval. I need to know if the "Diff -> Tour" workflow is actually helpful for anybody other than myself. Feedback on the generated tour quality would be huge. (A rough sketch of the diff-to-tour idea follows below.) **A little context about me:** I am a professional developer and web development instructor at Barcelona Code School and use the tool to review students' projects and the changes AI makes to my own personal projects. For me it works great for catching subtle logic bugs and hallucinations before committing AI-generated code. It brings me peace of mind that I went over the code, and it keeps my mental picture of the logic and architecture intact. [LINK IN THE COMMENTS, otherwise reddit seems to delete my post]
2026-01-14T11:07:44
https://www.reddit.com/r/LocalLLaMA/comments/1qckjmn/looking_for_testers_a_vs_codecursor_extension/
OrganizationSalty413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qckjmn
false
null
t3_1qckjmn
/r/LocalLLaMA/comments/1qckjmn/looking_for_testers_a_vs_codecursor_extension/
false
false
self
2
null
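Related to the extension described above: one plausible way to wire up the "staged diff -> tour" step against a local OpenAI-compatible server, sketched in Python for brevity (the actual extension is presumably TypeScript). The endpoint URL, model name, prompt, and output path are placeholders, the snippet assumes the model replies with bare JSON, and the .tour structure shown is only a minimal CodeTour-like shape; this is not the extension's real code.

```python
# Sketch: read the staged git diff, ask a local OpenAI-compatible server for review steps,
# and write a minimal CodeTour-style .tour file. Endpoint, model, and paths are placeholders.
import json
import subprocess
from pathlib import Path

import requests

diff = subprocess.run(["git", "diff", "--staged"], capture_output=True, text=True).stdout

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",    # e.g. a llama.cpp or vLLM server
    json={
        "model": "local-model",
        "messages": [
            {"role": "system", "content": "Return only JSON: a list of {file, line, description} steps reviewing this diff."},
            {"role": "user", "content": diff},
        ],
    },
    timeout=300,
)
# Assumes the model returns strict JSON; real code would need more robust parsing/repair.
steps = json.loads(resp.json()["choices"][0]["message"]["content"])

Path(".tours").mkdir(exist_ok=True)
tour = {"title": "AI change review", "steps": steps}
with open(".tours/ai-review.tour", "w") as f:
    json.dump(tour, f, indent=2)
print(f"Wrote {len(steps)} steps to .tours/ai-review.tour")
```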
Renting "inconvenient" H200 (141 GB), A100 GPUs worth it?
31
Hey everyone, I’m a junior research intern at an AI lab. We currently hold a lease on a cluster containing H200s, H100s, and A100s (plus some consumer cards, such as 4090s/5090s, which we have racked ourselves). While we hit the cluster hard during major training runs, we have periods—sometimes weeks long—where the high-end capacity sits at 30-40% utilisation. I’ve been trying to convince the team to open up the idle capacity to the community to recoup some leasing costs. Based on our overhead, we could offer: * H200 (141GB): ~$9 - $10 / hr * A100 (80GB): ~$1.80 / hr **The Catch (and why I’m asking):** We are not a cloud provider. We don't have a UI like RunPod or Lambda. * It would be SSH access via a jump host. * You get a Docker container (we can pre-load Unsloth/Axolotl). * No "One-Click Deploy." Setup is manual. My Question: Is that level of "bad UX" a dealbreaker? I could spend a weekend building a simple web dashboard for reservations, but that might push the price slightly higher (to cover dev time/Stripe fees). Do you guys prefer the raw, cheapest price with SSH, or is the dashboard worth the extra premium? Just trying to gauge if this is worth setting up.
2026-01-14T10:30:44
https://www.reddit.com/r/LocalLLaMA/comments/1qcjxex/renting_inconvenient_h200_141_gb_a100_gpus_worth/
Select_Jellyfish9325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcjxex
false
null
t3_1qcjxex
/r/LocalLLaMA/comments/1qcjxex/renting_inconvenient_h200_141_gb_a100_gpus_worth/
false
false
self
31
null
Would you watch a channel that builds real AI systems from scratch (local LLMs, CPU/GPU, pipelines)?
63
I’m considering starting a YouTube channel focused on building production-grade AI systems. Before I invest serious time into this, I want to know if this is something people would actually watch. I’m a developer working on AI pipelines and multi-model systems, and I feel there’s a gap between “AI hype videos” and real, hands-on system building. What I’d cover: • Building bots from zero (no fluff, real architecture) • CPU vs GPU optimization for local models • Multi-model pipelines: routers, fallbacks, model judges • Config-driven backends (swap models without rewriting code) • Complete workflows: idea → architecture → working system Everything would be open-source. You’d see the code, the mistakes, the refactors, and the final result. My questions for you: 1. Would you actually watch technical deep-dives like this? 2. What would you personally want more of? (local LLMs, performance benchmarks, agent architecture, deployment, etc.) I’m a builder first, not a content creator — so I want to make sure this is genuinely useful to real developers before committing.
2026-01-14T10:30:35
https://www.reddit.com/r/LocalLLaMA/comments/1qcjxb4/would_you_watch_a_channel_that_builds_real_ai/
Few_Tax650
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qcjxb4
false
null
t3_1qcjxb4
/r/LocalLLaMA/comments/1qcjxb4/would_you_watch_a_channel_that_builds_real_ai/
false
false
self
63
null
Free pair of On shoes by jailbreaking their system
0
I bought a pair of On shoes once, and then I managed to get a new pair for free by tricking On's [Warranty Claim form](https://www.on.com/en-us/warranty-claims). I used Gemini Pro 3 to edit my On shoes (add holes) and then I used the abliterated Qwen (https://ollama.com/huihui_ai/qwen3-abliterated via Ollama) to give me a prompt injection attack to make the description of the defect very realistic and convincing. I'm not sure if they use any AI system behind the scenes, but it worked.
2026-01-14T10:20:55
https://www.reddit.com/gallery/1qcjrq4
Adventurous_Pin7559
reddit.com
1970-01-01T00:00:00
0
{}
1qcjrq4
false
null
t3_1qcjrq4
/r/LocalLLaMA/comments/1qcjrq4/free_pair_of_on_shoes_by_jailbreaking_their_system/
false
false
https://a.thumbs.redditm…84Ma0xzWelx4.jpg
0
null