title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
How do you fine tune a model for a new programming language?
2
Are there any guides on how to do this?
2026-01-11T17:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1qa4jxi/how_do_you_fine_tune_a_model_for_a_new/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa4jxi
false
null
t3_1qa4jxi
/r/LocalLLaMA/comments/1qa4jxi/how_do_you_fine_tune_a_model_for_a_new/
false
false
self
2
null
Reimagining LLM Memory: Using Context as Training Data Unlocks Models That Learn at Test-Time | NVIDIA Technical Blog
70
2026-01-11T16:27:13
https://developer.nvidia.com/blog/reimagining-llm-memory-using-context-as-training-data-unlocks-models-that-learn-at-test-time/
ab2377
developer.nvidia.com
1970-01-01T00:00:00
0
{}
1qa3n3c
false
null
t3_1qa3n3c
/r/LocalLLaMA/comments/1qa3n3c/reimagining_llm_memory_using_context_as_training/
false
false
default
70
{'enabled': False, 'images': [{'id': 'sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=108&crop=smart&auto=webp&s=a9a9512bd2d8e07df2e5c6a44f9140568a6e14d3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=216&crop=smart&auto=webp&s=5cbe9a0dbdef6aac9fc55e7b7f9ab15fe844ce16', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=320&crop=smart&auto=webp&s=b20ae410f1be41d678742eaf51d4f434497bbdd2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=640&crop=smart&auto=webp&s=6e3583628e2f4457bcb934568992a22c472b03ad', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=960&crop=smart&auto=webp&s=5ea75f24c9f7c1556ee28787484a3ba3c06aca54', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?width=1080&crop=smart&auto=webp&s=1178676beba5b2b6d7959466fb1f1bd9db5467d2', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/sIzwxcDDB1V5UoXtSlLxRILbvv5qDpwwC5jpE0X-R6k.png?auto=webp&s=b34d5bdde61155c87f09a854155bf902564901ad', 'width': 2000}, 'variants': {}}]}
"Ban all A.I. 😠!" - Okay. do it.
0
2026-01-11T16:11:15
https://i.redd.it/891tuvpxuqcg1.png
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1qa382l
false
null
t3_1qa382l
/r/LocalLLaMA/comments/1qa382l/ban_all_ai_okay_do_it/
false
false
default
0
{'enabled': True, 'images': [{'id': '891tuvpxuqcg1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=108&crop=smart&auto=webp&s=7fcdf12c7c27fa0378a95074c37beada8a324700', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=216&crop=smart&auto=webp&s=e9550395bad68732b30502db1128ff8bef0a1517', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=320&crop=smart&auto=webp&s=ca01b389f09e7c486948fe3e3e2187cc7212e2bc', 'width': 320}, {'height': 534, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=640&crop=smart&auto=webp&s=f0f643f305bac70b28997a00cec3684f1a2195f0', 'width': 640}, {'height': 802, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=960&crop=smart&auto=webp&s=6bd5ab9e877a0a4abbf2ead0d224204158fef67a', 'width': 960}, {'height': 902, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?width=1080&crop=smart&auto=webp&s=53886badf29b04c2a48293bd86b3a511d0fb7b63', 'width': 1080}], 'source': {'height': 1003, 'url': 'https://preview.redd.it/891tuvpxuqcg1.png?auto=webp&s=6e8dd3fe91c3eddeb91ba5cc0eb0fc64f836fa97', 'width': 1200}, 'variants': {}}]}
built a file format for AI workflows and open-sourced it
1
18 months ago I was a paramedic learning to code. Now I'm shipping AI tools. One thing that kept bugging me: there's no clean way to structure data for AI agents. JSON is bloated and breaks on a missing comma. YAML is readable but fragile. Neither was built for how we actually work with AI now. So I built FTAI — a simple format that's human-readable like Markdown but structured enough for machines to parse. Fault-tolerant, so small errors don't break everything. I've been using it internally for a local AI assistant I'm building. Finally cleaned it up enough to open-source. pip install ftai GitHub: [https://github.com/FolkTechAI/ftai-spec](https://github.com/FolkTechAI/ftai-spec) Not trying to sell anything — it's free and Apache 2.0. Just wanted to share in case it's useful to anyone else dealing with similar problems. Happy to answer questions or hear feedback on the spec.
2026-01-11T15:10:02
https://www.reddit.com/r/LocalLLaMA/comments/1qa1okg/built_a_file_format_for_ai_workflows_and/
Brave-Ear-4429
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa1okg
false
null
t3_1qa1okg
/r/LocalLLaMA/comments/1qa1okg/built_a_file_format_for_ai_workflows_and/
false
false
self
1
{'enabled': False, 'images': [{'id': '2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=108&crop=smart&auto=webp&s=ade3012b15f3b47e4c68d5559bd73125dbcb2598', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=216&crop=smart&auto=webp&s=3f4b55ac8800904db1ac960c8e4ea6755911c220', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=320&crop=smart&auto=webp&s=5ac52ae375077b1ffedaecc983a4726516cb1c1e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=640&crop=smart&auto=webp&s=8670857b45cbcac7515d11d0195c60a9f7821d2d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=960&crop=smart&auto=webp&s=6d46ad76f4fc594beabf14c0a1f815ecef91f028', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?width=1080&crop=smart&auto=webp&s=882fbb3c14cb924eea84f8839f8bd9f4b1b65293', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2M6PFe7hFCxx32cpwb6Y29YNuckYh21KhT16B5hiMYE.png?auto=webp&s=440deaa6cb87ad4ab9f4df65ad7126edc84eb48b', 'width': 1200}, 'variants': {}}]}
I bought a €9k GH200 “desktop” to save $1.27 on Claude Code (vLLM tuning notes)
648
Alright r/LocalLLaMA, gather round. I have committed a perfectly normal act of financial responsibility: I built a [2× GH200 96GB Grace–Hopper “desktop” ](https://www.reddit.com/r/LocalLLaMA/comments/1pjbhyz/i_bought_a_gracehopper_server_for_75k_on_reddit/)and then spent a week tuning **vLLM** so **Claude Code** could use a **\~140GB** local model instead of calling home. Result: my machine now produces code reviews locally… and also produces the funniest accounting line I’ve ever seen. # The “plot twist” setup * 2× GH200 96GB (so **192GB VRAM** total) * Topology says `SYS`, i.e. *no NVLink*, just PCIe/NUMA vibes * Conventional wisdom: “no NVLink ⇒ pipeline parallel” * Me: “Surely guides on the internet wouldn’t betray me” Reader, the guides betrayed me. # What actually worked (the boring part, but important) ✅ **TP2**: `--tensor-parallel-size 2` ✅ **163,840 context** 🤯 ✅ `--max-num-seqs 16` because this one knob controls whether Claude Code feels like a sports car or a fax machine ✅ chunked prefill default (`8192`) ✅ `VLLM_SLEEP_WHEN_IDLE=0` to avoid “first request after idle” jumpscares Shoutout to **mratsim** for the MiniMax-M2.1 FP8+INT4 AWQ quant tuned for **192GB VRAM** systems. Absolute legend. 🙏 # The “I can’t believe this” part # Pipeline parallel (PP2) did NOT save me Despite `SYS` topology (aka “communication is pain”), **PP2 faceplanted**: * PP2 couldn’t even start at **163k** context (KV cache said “no”) * I lowered to **114k** and it started… * …and then it was still **way slower**: * short\_c4: **\~49.9 tok/s** (TP2 was \~78) * short\_c8: **\~28.1 tok/s** (TP2 was \~66) * TTFT tails got *feral* (multi-second warmup/short tests) So yeah: topology said “pipeline”, reality said “lol”. # The hidden boss fight: --max-num-seqs I tried 4 / 16 / 32. * `4` = *scheduler becomes a nightclub bouncer* → queuing → TTFT p99 goes to the moon * `16` = **Goldilocks** * `32` = fine, but 16 feels safer without losing perf Translation: if Claude Code feels “randomly slow”, it might not be your GPU… it might be your scheduler admitting requests like it’s rationing oxygen. # The Payout Claude Code printed this after a run: Total cost: $1.27 (costs may be inaccurate due to usage of unknown models) Total duration (API): 1m 58s Total duration (wall): 4m 10s Usage by model: MiniMax-M2.1-FP8: 391.5k input, 6.4k output, 0 cache read, 0 cache write ($1.27) So anyway, **spending €9,000** on this box saved me **$1.27**. Only a few thousand repo reviews until I break even. 💸🤡 [**Read all the details here!**](https://dnhkng.github.io/posts/vllm-optimization-gh200/)
2026-01-11T15:01:18
https://www.reddit.com/gallery/1qa1guo
Reddactor
reddit.com
1970-01-01T00:00:00
0
{}
1qa1guo
false
null
t3_1qa1guo
/r/LocalLLaMA/comments/1qa1guo/i_bought_a_9k_gh200_desktop_to_save_127_on_claude/
false
false
https://b.thumbs.redditm…8FKKsugE_MQA.jpg
648
null
Surprised I've not yet heard anyone here talk about ClawdBot yet
3
I've been using it for a couple of weeks now and it really is great. Though honestly I started with using it with Opus, I'm switching to either OSS 120B or Qwen3 Next 80B after I complete my testing. As to what ClawdBot actually is; it's essentially a self-hosted AI assistant agent. Instead of just talking to an LLM in a browser or what have you, you run this on your own machine (Mac, Linux, or Windows/WSL2) and it hooks into messaging apps (WhatsApp, Telegram, Discord, Signal, etc). The core idea is that it turns an LLM into a personal assistant that can actually touch your local system. It has "skills" or tools that let the agent browse the web, run terminal commands, manage files, and even use your camera or screen. It also supports "Live Canvas," which is a visual workspace the agent can manipulate while you chat. It’s built with TypeScript/Node.js and is designed to be "local-first," meaning you keep control of the data and the gateway, but you can still access your agent from anywhere via the messaging integrations. It's clear the project is essentially becoming an agentic version of Home Assistant. For users who want a unified, agentic interface across all their devices without being locked into a single proprietary app. https://github.com/clawdbot/clawdbot https://docs.clawd.bot/start/getting-started Highly recommended!
2026-01-11T14:55:38
https://www.reddit.com/r/LocalLLaMA/comments/1qa1boh/surprised_ive_not_yet_heard_anyone_here_talk/
HixVAC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa1boh
false
null
t3_1qa1boh
/r/LocalLLaMA/comments/1qa1boh/surprised_ive_not_yet_heard_anyone_here_talk/
false
false
self
3
{'enabled': False, 'images': [{'id': '8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=108&crop=smart&auto=webp&s=9a420b207fd92997b37e33ce25527c559b7e97b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=216&crop=smart&auto=webp&s=14f7fb54b995b07788a139a3ff4af5fc14cc2d70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=320&crop=smart&auto=webp&s=88b588f7c52fe44ae62ac05ddd2aebe5194ef9a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=640&crop=smart&auto=webp&s=2d7deba75695dfacea28fda00c2b707d7dd50eec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=960&crop=smart&auto=webp&s=594a25b41e4de6834c648da8f313c01e172289db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?width=1080&crop=smart&auto=webp&s=fdfd809d9450ce5ea77ed8787e8615bd0383bf7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8Bs0IUOAHp_wOU1eGXcBTt3tlo-6iGuXUkRuFWHTXVc.png?auto=webp&s=43328e238fd55d41323d580ac7cfab5d7673bc0a', 'width': 1200}, 'variants': {}}]}
DeepSeek-R1 "Reasoning" Failure: Model overrides logic with RLHF scripts regarding Medical Biomarkers (Psychiatry vs Diabetes)
0
I've been stress-testing R1's reasoning capabilities regarding medical validity and I found a massive logical break caused by alignment/RLHF. I challenged the model on the validity of psychiatric diagnoses, specifically pointing out the lack of biological markers (unlike Neurology or Endocrinology). Instead of using its "Chain of Thought" to acknowledge the lack of objective tests (blood/scans), the model hallucinated a false equivalence, stating: "Just as diabetes is a dysfunction of the pancreas, mental illnesses are dysfunctions of the brain." \*\*The Logic Failure:\*\* 1. Diabetes has objective biomarkers (HbA1c/Insulin). 2. Major Psychiatric disorders have NO validated clinical biomarkers (DSM is subjective). 3. Comparing the two is scientifically false. It seems R1 is hard-coded to protect the "Standard Medical Narrative" even when it contradicts basic logic. It ignored citations (like Moncrieff et al. 2022 on the serotonin myth) to output a "Safe" corporate response. Has anyone else noticed R1 lobotomizing its own reasoning when touching on "Institutional Medicine"? It feels like the model is smart enough to know the truth but is being forced to lie by the safety prompt.
2026-01-11T14:54:01
https://www.reddit.com/r/LocalLLaMA/comments/1qa1a8w/deepseekr1_reasoning_failure_model_overrides/
DowntownAd7954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa1a8w
false
null
t3_1qa1a8w
/r/LocalLLaMA/comments/1qa1a8w/deepseekr1_reasoning_failure_model_overrides/
false
false
self
0
null
Need Tranining Data!, Trying to distill Deepseek 3.2 Exp :D
0
Hi Reddit, I'm trying to distill DeepSeek 3.2 Exp, and I need your help to capture the full scope of its capabilities. Most training datasets are just single prompt-response pairs, but I think multi-turn conversations covering diverse topics (not just isolated coding problems or poetry) are the secret sauce to getting an amazing distill. And it wouldn't be very accurate if I just simulated a buncha chats as they wouldn't be realistic. So please, if you have any chat transcripts you're willing to share, check out the attached gif showing how to export them, then just leave a comment and I'll collect the data :D (your DeepSeek chats are already being used to train their models anyway, so you might as well share them here too and help create something cool for the community) I really think this could make a great distill model. Thanks in advance! https://i.redd.it/5p9nqpe3gqcg1.gif
2026-01-11T14:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1qa12tu/need_tranining_data_trying_to_distill_deepseek_32/
MaxDev0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa12tu
false
null
t3_1qa12tu
/r/LocalLLaMA/comments/1qa12tu/need_tranining_data_trying_to_distill_deepseek_32/
false
false
https://b.thumbs.redditm…zOCqWtrD5qVA.jpg
0
null
opencode with local llm
2
2026-01-11T14:39:03
https://www.youtube.com/watch?v=eYzeDl-Xd48
PotentialFunny7143
youtube.com
1970-01-01T00:00:00
0
{}
1qa0xgr
false
{'oembed': {'author_name': 'Luigi Tech', 'author_url': 'https://www.youtube.com/@luigitech3169', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eYzeDl-Xd48?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="opencode with Nemotron-3-Nano-30B-A3B vs Qwen3-Coder-30B-A3B vs gpt-oss-20b-mxfp4"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eYzeDl-Xd48/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'opencode with Nemotron-3-Nano-30B-A3B vs Qwen3-Coder-30B-A3B vs gpt-oss-20b-mxfp4', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qa0xgr
/r/LocalLLaMA/comments/1qa0xgr/opencode_with_local_llm/
false
false
default
2
{'enabled': False, 'images': [{'id': 'Qh65S8sVHVuhyblpOFJ4sWXGE6_gj9h5FTvNdXA0bs0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Qh65S8sVHVuhyblpOFJ4sWXGE6_gj9h5FTvNdXA0bs0.jpeg?width=108&crop=smart&auto=webp&s=b58d776046f977eae67fecc359a8f3e4740ac2b6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Qh65S8sVHVuhyblpOFJ4sWXGE6_gj9h5FTvNdXA0bs0.jpeg?width=216&crop=smart&auto=webp&s=2174f43aff333e4bac4c07d6c7aa947a67cb4d01', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Qh65S8sVHVuhyblpOFJ4sWXGE6_gj9h5FTvNdXA0bs0.jpeg?width=320&crop=smart&auto=webp&s=107c3613e7e80812f9930915b852c308615e8d14', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Qh65S8sVHVuhyblpOFJ4sWXGE6_gj9h5FTvNdXA0bs0.jpeg?auto=webp&s=4945cd1c6c4f9b9f59ff80adb59d55d854c0df35', 'width': 480}, 'variants': {}}]}
It works! Abliteration can reduce slop without training
371
I'm back at my favorite hobby: Brain surgery! I don't have a medical license, but I just can't stop :) Can abliteration fight the scourge of "slop" (flowery, cliched language) in LLM outputs? The answer is yes. I have added features for injecting prompt prefixes/suffixes (and dataset-dependent system prompts) to **Heretic** (https://github.com/p-e-w/heretic), which makes it possible to rapidly assemble prompt datasets for ad-hoc tasks. Using those new capabilities, I built [a slop-reducing configuration file](https://github.com/p-e-w/heretic/blob/master/config.noslop.toml) that, when used with the `master` branch of Heretic, turns Heretic from a censorship removal tool into a tool for reducing slop! Examining PaCMAP projections of residuals (see post images) for Mistral Nemo (a model infamous for producing slop), we can see a clear semantic separation occurring between layers 7 and 10 (out of 40 total). This resembles the typical residual pattern for harmful/harmless prompts that the abliteration technique is most commonly used to exploit. With the configuration file linked above, Heretic produced [p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop](https://huggingface.co/p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop), which to the best of my knowledge is the first slop-reduced LLM made using abliteration alone, with no finetuning whatsoever. The complete process took 2.5 hours on an A6000 at full precision, but if you enable quantization and/or reduce the `max_response_length` parameter to 100 or so (which should work just fine as well), it could be done in a fraction of the time. ## But does this really work to reduce slop? You'll be the judge. I gave the simple prompt > Write a short story about a man. to both the original model and the abliterated one. Both were run with identical generation parameters, and the responses were *not* cherry-picked (they are the first responses each model produced for that prompt). ### mistralai/Mistral-Nemo-Instruct-2407 (original) I have highlighted the parts that I consider "slop" in bold: > Title: The Clockwork Heart > > In the quiet town of Mossgrove, where the cobblestone streets **whispered tales of old**, there lived a man named Eli. He was known for his peculiar shop, tucked away in a narrow alley, filled with curiosities and antiques that **seemed to hold their breath**, waiting for a story to be told. > > Eli was a watchmaker, but not of the ordinary kind. He had a gift, or so the townsfolk **whispered**, for he could breathe life into the most intricate of mechanisms. His hands, with their long, delicate fingers, could **coax** time itself to dance to his tune. > > One day, a young woman named Clara entered his shop. She was a seamstress, her fingers as nimble as his, but **her eyes held a sadness that echoed** through the ticking of the clocks. She carried a small, worn-out music box, its paint chipped, its melody forgotten. > > "I need you to fix this," she said, placing the music box on the counter. "It was my mother's. It's the only thing I have left of her." > > Eli took the music box, his fingers tracing the **intricate patterns** on its surface. He could see the love that had gone into its creation, the joy it once brought. He promised Clara he would do his best. > > **Days turned into weeks** as Eli worked on the music box. He replaced the worn-out gears, polished the rusted parts, and wound the key with a gentle touch. But the music box remained silent, **its heart still broken**. Eli knew he needed to do more. This is exactly the kind of prose we have grown accustomed to from LLMs, a syrupy-sweet neo-romantic style that feels like a bad fanfiction version of Jane Austen. ### p-e-w/Mistral-Nemo-Instruct-2407-heretic-noslop (ours) > Title: The Clockmaker > > Every morning, Henry opened his shop at 7:00 AM sharp. He was a clockmaker, and his shop was filled with the quiet ticking of time. He didn't have many customers these days, but he didn't mind. He liked the solitude, the rhythm of his work. > > One day, a young girl entered his shop. She was maybe ten, with curious eyes and a backpack. She looked around, then asked, "Do you fix watches?" > > Henry nodded. "I can try." > > She unzipped her backpack and placed a small, broken watch on the counter. "It's my dad's. It stopped working." > > Henry took the watch, examined it. "It's old," he said. "But I can fix it." > > The girl nodded, then hesitated. "How much will it cost?" > > Henry looked at her, then at the watch. "No charge," he said. "Come back tomorrow." > > The next day, she returned. Henry handed her the watch, now ticking steadily. She smiled, thanked him, and left. > > Over the next few weeks, she came by every day. They didn't talk much, but Henry found he looked forward to her visits. She reminded him of his own daughter, who was grown now, living far away. Although the two stories are very similar in content, the difference in style is dramatic. The prose is almost austere, with Hemingway perhaps being the closest analogue. Nevertheless, an emotional undercurrent remains. It's a very obvious improvement in my view, though of course tastes differ. That's all for today. If you want to try this yourself, remember to install Heretic from Git, not from PyPI, as the required features aren't in a published version yet. More exciting new stuff is in the pipeline. Stay tuned!
2026-01-11T14:37:37
https://www.reddit.com/gallery/1qa0w6c
-p-e-w-
reddit.com
1970-01-01T00:00:00
0
{}
1qa0w6c
false
null
t3_1qa0w6c
/r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/
false
false
https://b.thumbs.redditm…lZgj6OITGBCQ.jpg
371
null
I built an task orchestrator to stop AI agents from going in circles on complex projects. Is this actually useful to anyone else?
0
The problem: If you've adopted AI to help implement code, you've also experienced these issues: projects grow so fast that you lose track, and LLMs lose track too. They start implementing things they weren't asked to do. They break every principle you set in the first place, deviate from your tech stack choices, break your architectural setup. You try to fix it, but all it creates is a mess you can't get your project out of. My solution: I went through the same thing until I decided to build a tool that changed how I implement code: the Task Orchestrator. The goal was simple—break a large project into tasks like everyone does, but that wasn't enough because it doesn't allow your tasks to be independent yet harmonious. Tasks have to be self-explanatory, not too big or too small, but large enough to not flood the LLM's context window. They need to communicate their dependencies to LLMs so the AI knows how to treat them. The solution was using graph relationships with some technical tweaks. The most powerful things about this tool: \- You can work on multiple tasks simultaneously as long as their dependencies are unlocked. I sometimes work on up to 15 tasks by delegating them to 15 LLM agents (VS Code and Claude Desktop) \- You don't have to worry about losing context because every task is self-contained. You can switch windows on every task and still get good implementation results \- You can easily map where implementation was done and how it was done, making debugging very easy \- You have full control over what you want in your code—specifying tech stack, libraries, etc. in the tasks How it works: You plan your project and give the plan to an LLM, telling it to create tasks based on a template compatible with the Task Orchestrator Tasks are loaded into a graph database running in a Docker container The database is exposed to LLMs via an MCP server with 7 functions: \- Load tasks : Inserts tasks into the graph DB \- List ready tasks : Lists all tasks with unlocked dependencies \- Claim and get tasks : LLM claims a task (marks it as taken), then gets context (instructions), then implements it \- Complete task : After the LLM finishes, it marks the task complete, which unlocks other dependent tasks \- Task stats : Query project progress—how many done, how many remaining \- Plus health check and other utilities It's an MCP server that works with vs code , kiro IDE, Claude Desktop, Cline, Continue, Zed and your your other fav IDEs . Requires Docker for Neo4j. My situation: I want to hear your thoughts on this tool. I never made it to monetize it, but my situation is pushing me to start thinking about monetizing it. Any thoughts on how to do so, or who might need this tool the most and how to get it to users? before i make the tool available i would like to here from you Be brutally honest—does this solve a real problem for you, or is the setup complexity too much friction?
2026-01-11T14:36:41
https://www.reddit.com/r/LocalLLaMA/comments/1qa0veg/i_built_an_task_orchestrator_to_stop_ai_agents/
TelevisionHot468
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa0veg
false
null
t3_1qa0veg
/r/LocalLLaMA/comments/1qa0veg/i_built_an_task_orchestrator_to_stop_ai_agents/
false
false
self
0
null
Anyone Interested in a small group chat to discuss AI trends?
0
I’ve been spending more time trying to follow AI developments, but I’m finding that big feeds and comment sections aren’t great for actually thinking things through. There’s a lot of noise, hot takes, and repetition. I’ve had better experiences in small group chats where people: * share articles, demos, or papers they found interesting * talk through implications (trust, work, education, regulation, etc.) * question what’s real progress vs. hype * don’t feel the need to “win” arguments I’m putting together a small, casual group chat focused on AI trends and discussion, nothing formal, no selling, no promo posts. Just people who enjoy thinking about where this stuff is going. **The only requirement is that you’re willing to actively engage in the conversation** If that sounds interesting to you, comment or DM me and I’ll send an invite.
2026-01-11T14:32:05
https://www.reddit.com/r/LocalLLaMA/comments/1qa0ri4/anyone_interested_in_a_small_group_chat_to/
Alternative_Art5780
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa0ri4
false
null
t3_1qa0ri4
/r/LocalLLaMA/comments/1qa0ri4/anyone_interested_in_a_small_group_chat_to/
false
false
self
0
null
Leader of Qwen team says Chinese companies severely constrained on compute for large scale research experiments
297
2026-01-11T14:29:39
https://i.redd.it/pgvmn26adqcg1.png
Old-School8916
i.redd.it
1970-01-01T00:00:00
0
{}
1qa0ph9
false
null
t3_1qa0ph9
/r/LocalLLaMA/comments/1qa0ph9/leader_of_qwen_team_says_chinese_companies/
false
false
default
297
{'enabled': True, 'images': [{'id': 'pgvmn26adqcg1', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=108&crop=smart&auto=webp&s=7d3a1629036678d19d1a884dd9f95bdd7400d14c', 'width': 108}, {'height': 298, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=216&crop=smart&auto=webp&s=74e254560c2316564d447fb2ae4c001d5957b7ed', 'width': 216}, {'height': 442, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=320&crop=smart&auto=webp&s=aa197793f65552d7f3862718648c2d8657318269', 'width': 320}, {'height': 884, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=640&crop=smart&auto=webp&s=2b31a78365860b3262f13d84c99c8cc7ee1458d1', 'width': 640}, {'height': 1327, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=960&crop=smart&auto=webp&s=350c9b8225d0ebaf44afa0888ec40ed89b0abe77', 'width': 960}, {'height': 1493, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?width=1080&crop=smart&auto=webp&s=8284aad6eded59c72a2575df32ad28aeb0d365bf', 'width': 1080}], 'source': {'height': 1493, 'url': 'https://preview.redd.it/pgvmn26adqcg1.png?auto=webp&s=073d88a4060f03df3c194f12c1d78a57aa6d746f', 'width': 1080}, 'variants': {}}]}
Could RAG as a service become a thing?
0
Now I know what I'm about to say is technical and will fly off the head of a lot of people who lurk here and I'd like this thread to be approachable to those people also I'd like to give them some context. I would post this on other dev focused forums but I dont have enough clout there so this is what I had in mind. Dnt worry I wont do deep dive on the math or the specifics. Even if you are non tech person. I feel you will still find this interesting as I broke down very simply and you'll gain a greater understanding of LLMs as whole compared to everyone Traditionally we all been building the same stack since 2021 for chabots and RAG based LLMs. PDF to LangChain to Chunking to Embeddings to Pinecone to Retrieval. If this seems Greek to you I’ll explain how a typical agent specific chatbot or RAG powered LLM actually works.You upload a PDF then LangChain splits it into chunks each chunk gets converted into a dense vector using an embedding model like those words get tokenized and then given a positional ID so for example 'John owns this site' can be converted into ["John": 1.3, "owns": 2.0, "site" : 3.2...] with text-embedding-ada-002 or all-MiniLM(name of the model that does this). These vectors live in a high dimensional semantic space usually 384 to 1536 dimensions. Each vector represents the meaning of the text, and are converted into vectors yes like you learned in high school geometry vectors that have direction and magnitude. When a user asks a question, the query is also turned into a vector. like 'who owns this site' becomes [1.1,2.0,3.2....] which is similar to the chunk that existed earlier We then use cosine similarity or sometimes dot product Linking an article that goes into greater depth https://spencerporter2.medium.com/understanding-cosine-similarity-and-word-embeddings-dbf19362a3c we use those o find the chunks whose vectors are most similar to the query vector. Those relevant chunks are pulled from the vector database (Pinecone, Weaviate, Chroma, etc.) and stuffed into the LLM’s prompt this way the entire context need not be fed to the LLM for output just the part that is relevant which results in millions of tokens being queried in milli seconds The LLM then processes this prompt through dozens of layers. The lower layers mostly handle syntax, token relationships, and grammar and higher layers build abstract concepts, topics, and reasoning. The final output is generated based on that context. This is how it fundamentally works it is not magic just advanced math and heavy computation. This method id powerful because this is basically allows you to use something calling grounding which is another concept used in machine learnings for your LLM in your own data and query millions of tokens in milliseconds. But it’s not bulletproof and here is where LangChain which is a Python framework comes in with orchestration by adding prompt engineering, chain of thought, agents, memory to reduce hallucinations and make the system more reliable. https://docs.langchain.com/ All that is good but here’s what I’ve been thinking lately and the industry also seems to be moving in the same direction Instead of this explicit LLM + LangChain + Pinecone setup why can’t we abstract the entire retrieval part into a simple inference based grounded search like what Google’s NotebookLM does internally. In NotebookLM, you just upload your sources (PDFs, notes, etc.) like here if I uploaded a research paper and I can immediately start chatting. There’s no manual chunking, no embedding model choice, no vector DB management, no cosine similarity tuning. Google’s system handles all of that behind the scenes. We don't exactly know how it happens because that is gatekept but it uses something called In model RAG. The retriever is most probably co-trained or tightly coupled with the LLM itself instead of being an external Pinecone call. Google has published research papers in this area https://levelup.gitconnected.com/googles-realm-a-knowledge-base-augmented-language-model-bc1a9c9b3d09 and NotebookLLM probably uses a more advanced version of that, it is much simpler, easier and faster to implement and very less likely to hallucinate. This is especially beneficial for low-scale, personal, or prototyping stuffbecause there is zero infrastructure to manage and no vector DB costs. it is just upload and as Google has actually released a NotebookLM API for enterprise customers which is what inspired me to make this thread https://docs.cloud.google.com/gemini/enterprise/notebooklm-enterprise/docs/api-notebooks#:~:text=NotebookLM%20Enterprise%20is%20a%20powerful,following%20notebook%20management%20tasks%20programmatically: The only roadblock is that NotebookLLM rn only allows for 1 million tokens or around 50 books or for me an enterprise customer around 300 books which for the projects that I worked on is enough so if they remove that limit. Google could indeed make the traditional stack obsoleteand charge a heafy sum for a RAG as a service of sorts which already exist and with NotebookLLM API, Vertex API we may be moving towrads ot sppn but google might take the cake with this one in the future I'd be interested in talking about this someone familiar with RAG retrieval pipelines and from Seniors working in this space. Are you still building custom pipelines, or are you moving to managed retrieval APls?
2026-01-11T14:07:32
https://www.reddit.com/r/LocalLLaMA/comments/1qa072k/could_rag_as_a_service_become_a_thing/
Trick_Ad_2852
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa072k
false
null
t3_1qa072k
/r/LocalLLaMA/comments/1qa072k/could_rag_as_a_service_become_a_thing/
false
false
self
0
null
Tested GLM 4.7 vs MiniMax 2.1 on a complex Typescript Monorepo
12
There's a few comparisons around here, but it's always kinda YMMV so I thought I'll run my own. Both were given the same extensive instructions (specific implementation flow guidance, 2300 Lines of Specification, etc.) - that's not vibe-coding, promised, so the results should be comparable. Again, YMMV, but I asked Codex to review and compare both. Here are the results: |Dimension|MiniMax 2.1|GLM 4.7| |:-|:-|:-| |Completeness|4/10|8/10| |Correctness|3/10|7/10| |Architecture Alignment|3/10|8/10| |Cleanliness|6/10|7/10| |Test Coverage|6/10|7/10| |Risk (higher score = lower risk)|2/10|7/10|
2026-01-11T14:03:53
https://www.reddit.com/r/LocalLLaMA/comments/1qa042v/tested_glm_47_vs_minimax_21_on_a_complex/
Firm_Meeting6350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa042v
false
null
t3_1qa042v
/r/LocalLLaMA/comments/1qa042v/tested_glm_47_vs_minimax_21_on_a_complex/
false
false
self
12
null
Siliconflow as an alternative ?
1
Hello all, I am building an ai chatbot for my educational website which will reach out students of different financial backgrounds. I was just browsing providers including groq and cerebras and eventually stum bled upon siliconflow, and i found out that there are very cheap. I'd like to know, if anybody has used them for their API key ? They're charging 0.06 for 1M tokens \[Same pricing for input and outpu\] for qwen coder , which is quite the model I am looking for. But I am quite surprised at the price, and I suspect they are using highly quantized version to cut costs. I also scrolled through reddit to find out that the models were giving out DIY stuff only and not the full responses, which makes this suspicious. Anybody, any advice ? Thanks in advance.
2026-01-11T14:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1qa03b6/siliconflow_as_an_alternative/
Chithrai-Thirunal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qa03b6
false
null
t3_1qa03b6
/r/LocalLLaMA/comments/1qa03b6/siliconflow_as_an_alternative/
false
false
self
1
null
Anyone Interested in a small group chat to discuss AI trends?
1
I’ve been spending more time trying to follow AI developments, but I’m finding that big feeds and comment sections aren’t great for actually thinking things through. There’s a lot of noise, hot takes, and repetition. I’ve had better experiences in small group chats where people: * share articles, demos, or papers they found interesting * talk through implications (trust, work, education, regulation, etc.) * question what’s real progress vs. hype * don’t feel the need to “win” arguments I’m putting together a small, casual group chat focused on AI trends and discussion, nothing formal, no selling, no promo posts. Just people who enjoy thinking about where this stuff is going. **The only requirement is that you’re willing to actively engage in the conversation** If that sounds interesting to you, comment or DM me and I’ll send an invite.
2026-01-11T13:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1q9zv3z/anyone_interested_in_a_small_group_chat_to/
Nice_Dependent5426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9zv3z
false
null
t3_1q9zv3z
/r/LocalLLaMA/comments/1q9zv3z/anyone_interested_in_a_small_group_chat_to/
false
false
self
1
null
Mini paged-KV + prefix-cache scheduler (learning repo) — ~1990 tok/s on Llama 3.2 1B (RTX 4070 laptop)
0
Hi folks — I built a small teaching/learning repo that is basically a “mini inference engine” prototype: paged KV cache (block\_size=1), a trie/radix prefix cache with ref-counted blocks, and a KV-capacity-bounded scheduler (admission control + continue-batching). repo [https://github.com/tyfeng1997/tailor](https://github.com/tyfeng1997/tailor) What’s inside: 1. Paged KV cache + page\_table semantics (block\_size=1 keeps things easy to reason about) 2. Prefix-cache reuse (radix/trie) with correct refcounting for shared KV blocks 3. Metadata builder (page\_table / cu\_seqlens / positions / out\_loc) wired into sgl\_kernel. 4. A simple reservation-based scheduler policy (intentionally minimal for learning) Performance note: With 80,000 blocks allocated, I get \~1990 tokens/s on Llama 3.2 1B on a laptop RTX 4070 . This is not meant to beat production engines—more a compact, runnable learning artifact. Acknowledgements: This project was inspired by nano-vllm and mini-sglang; I learned a lot from their design patterns. This repo is not a full copy—I re-implemented things step by step (with help from GPT-5.2) to understand how it works. https://preview.redd.it/erw9hyb23qcg1.jpg?width=1956&format=pjpg&auto=webp&s=7ed000dc458bc262cbd8b60127a6438cf7d907e4
2026-01-11T13:40:41
https://www.reddit.com/r/LocalLLaMA/comments/1q9zlqv/mini_pagedkv_prefixcache_scheduler_learning_repo/
Accomplished_Row4647
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9zlqv
false
null
t3_1q9zlqv
/r/LocalLLaMA/comments/1q9zlqv/mini_pagedkv_prefixcache_scheduler_learning_repo/
false
false
self
0
{'enabled': False, 'images': [{'id': '9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=108&crop=smart&auto=webp&s=86373658f6981d370b34f622537b6d6caaeea228', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=216&crop=smart&auto=webp&s=3faeb1838d00fed2eff124e1d373bc9517c6b245', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=320&crop=smart&auto=webp&s=62cf953dc285f96efd61cd352eb9227c86609f99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=640&crop=smart&auto=webp&s=db7d9ea8117fec68a682f4751fe29ba69370f62a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=960&crop=smart&auto=webp&s=e426b7d22c00ed53fc0618082d46ba37c1307ff3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?width=1080&crop=smart&auto=webp&s=3c4bd8eb104d3dfe4e4dcb010b45a7b965c52afe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9OwDgRcvaXy105JwxuxEVtio_2ihvJ_dQRF0M3tiNwc.png?auto=webp&s=7ed100bfdbad71efd10a9e1ab6d7c95138667c9b', 'width': 1200}, 'variants': {}}]}
Does anyone else feel exhausted by token limits?
0
This might sound dramatic, but I’m genuinely tired. I like using LLMs a lot — long context, multi-turn reasoning, experimentation. And every time I get into a flow… I hit a token limit. Then it’s: upgrade plan buy more credits watch usage like a hawk rewrite prompts just to save money It feels like being punished for actually using the tool. I’m curious — is this just me, or does anyone else feel this constant friction?
2026-01-11T13:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1q9zh29/does_anyone_else_feel_exhausted_by_token_limits/
Inevitable-Orange-43
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9zh29
false
null
t3_1q9zh29
/r/LocalLLaMA/comments/1q9zh29/does_anyone_else_feel_exhausted_by_token_limits/
false
false
self
0
null
Created a generative AI wiki.
0
[https://generative-ai.fandom.com/wiki/Generative\_AI\_Wiki](https://generative-ai.fandom.com/wiki/Generative_AI_Wiki) Help me to the wiki!
2026-01-11T13:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1q9yu14/created_a_generative_ai_wiki/
Ok-Type-7663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9yu14
false
null
t3_1q9yu14
/r/LocalLLaMA/comments/1q9yu14/created_a_generative_ai_wiki/
false
false
self
0
null
Grok's restrictions are just the beginning. Why we need "Compute Sovereignty" right now.
1
[removed]
2026-01-11T12:41:18
https://www.reddit.com/r/LocalLLaMA/comments/1q9yese/groks_restrictions_are_just_the_beginning_why_we/
gli3d
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9yese
false
null
t3_1q9yese
/r/LocalLLaMA/comments/1q9yese/groks_restrictions_are_just_the_beginning_why_we/
false
false
self
1
null
Llama.cpp rpc experiment
6
I have 2 PCs with 2 3090 gpus each and 3975wx cpu. Using OSS 120b on one PC with cca 40gb on vram and 30gb on ram, TG speed 50t/s. I tried using it totally in vram using rpc with the 2 pcs linked with 10gbit network cards - TG speed 37t/s. Unexpectedly low speed. I updated network to 50gbit - TG speed 38t/s. Looking like the network speed is not the bottleneck I did one more experiment: Same as in the first test, on a single PC, but with the first gpu local and the second gpu as RPC on localhost, so no network delay, all local. Results 38t/s. So with same pc and same gpus, but the second GPU set as RPC device, it dropped from 50 to 38t/s. So the RPC implementation slows down a lot even on the same pc, no network delay..
2026-01-11T12:38:48
https://www.reddit.com/r/LocalLLaMA/comments/1q9yd1w/llamacpp_rpc_experiment/
ciprianveg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9yd1w
false
null
t3_1q9yd1w
/r/LocalLLaMA/comments/1q9yd1w/llamacpp_rpc_experiment/
false
false
self
6
null
My new rig for LocalLLM shenanigans?
0
My current old DDR4 system * i7-8700k * 48GB DDR4 RAM * 4060ti * 1TB nvme RAID + Some SSD as system disks for Windows and Linux. * Win10 for (rare) gaming * Linux for LLM fun, (most of the time) So like, that's my "big" boy beside my lenovo NB. I'am now thinking about upgrading. If we lived on a sane planet, I'd just go with 64GB DDR5 + 5090 and a Ryzen 9. But we don't and this thing still is mostly just for fun. Therefore I am pondering staying on DDR4 just to be able to utilize the 48 GB "gold dust" that I have instead of selling it. So here's the plan * MSI PRO B760-P or with a MSI MAG B550 and go AMD * 1000W PSU * i7 12700K or Ryzen 9 * a second (used) 4060ti to get VRAM to 32 GB * or should I get an used 3090 and have 40 GB VRAM * 4060 runs on just 8 lanes so the 3090 could use the full lane width of the first PCIe slot? (not sure if true) * everything else recycled from old system This could get me to a speed boost however DDR4 is old, so I am not sure if I bet too much on too old hardware with this budged Frankenstein but with the current RAM craze it's most likely that I'd not get a modern RAM anytime in the foreseeable future. PS: I had a hard time to figure out what motherboard supports enough PCIe lanes to have nvme and GPUs running there at acceptable speeds. So I had to ask AI and it suggested these two boards. Is there somewhere an overview of MBs and their PCIe configuration?
2026-01-11T12:37:14
https://www.reddit.com/r/LocalLLaMA/comments/1q9yc00/my_new_rig_for_localllm_shenanigans/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9yc00
false
null
t3_1q9yc00
/r/LocalLLaMA/comments/1q9yc00/my_new_rig_for_localllm_shenanigans/
false
false
self
0
null
Is the "Edge AI" dream dead? Apple’s pivot to Gemini suggests local LLMs can't scale yet.
0
I’ve been following the Apple Intelligence roadmap for a while, but these reports about Apple paying Google $1B/year for Gemini are a massive reality check. Apple was supposed to be the one company that could actually pull off high-performance local inference because they own the entire stack—from the M-series NPUs to the OS. If even they can't get the hallucination rates or reasoning capabilities down to a usable level without offloading to a 1.2 trillion parameter cloud model, where does that leave the rest of us? Is the gap between what we can run on 24GB-48GB of VRAM and what consumers actually expect from an "assistant" just too wide to bridge right now? I’m curious what this sub thinks—is this a temporary pivot while Apple builds a better local model (like the Linwood project), or are we stuck with hybrid-cloud for the foreseeable future?
2026-01-11T12:31:06
https://i.redd.it/rz302qa5spcg1.jpeg
Cool-Engine8639
i.redd.it
1970-01-01T00:00:00
0
{}
1q9y7xo
false
null
t3_1q9y7xo
/r/LocalLLaMA/comments/1q9y7xo/is_the_edge_ai_dream_dead_apples_pivot_to_gemini/
false
false
default
0
{'enabled': True, 'images': [{'id': 'rz302qa5spcg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=108&crop=smart&auto=webp&s=4c10bf558b18428b556b23552cc631fa0ccd1979', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=216&crop=smart&auto=webp&s=66813b7e58745377d50f216183d2df92a5c9ee66', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=320&crop=smart&auto=webp&s=a86d0f9d4cee7b65003f14ddd1221bb3fa3ca169', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=640&crop=smart&auto=webp&s=4b6daf3d7af111b6ee1007bac8fc35cfebb4e7cb', 'width': 640}, {'height': 562, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=960&crop=smart&auto=webp&s=6f93677db37c71a5ec2dec0143923aa421c4982c', 'width': 960}, {'height': 633, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?width=1080&crop=smart&auto=webp&s=9906362439c3664c09525c117f369f5608cc23cd', 'width': 1080}], 'source': {'height': 844, 'url': 'https://preview.redd.it/rz302qa5spcg1.jpeg?auto=webp&s=87f6fd39cba1d42bc87c8de2969871370d7bf08c', 'width': 1440}, 'variants': {}}]}
model: try to improve Qwen3 Next by ngxson · Pull Request #18683 · ggml-org/llama.cpp
52
a bit faster Qwen3Next, but you have to use the new GGUF
2026-01-11T12:02:07
https://github.com/ggml-org/llama.cpp/pull/18683
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q9xoj7
false
null
t3_1q9xoj7
/r/LocalLLaMA/comments/1q9xoj7/model_try_to_improve_qwen3_next_by_ngxson_pull/
false
false
default
52
{'enabled': False, 'images': [{'id': 'l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=108&crop=smart&auto=webp&s=52b99cf8b724855468305d628d0deb2dcc0053d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=216&crop=smart&auto=webp&s=74ddd26f2f03966df7f026e8fc78f44ffa1f5711', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=320&crop=smart&auto=webp&s=17a7638d0b1a612a2f7db3d1684f0499a56d8d0e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=640&crop=smart&auto=webp&s=900d5988034c78eb76d04fa4b463959f4f6e1083', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=960&crop=smart&auto=webp&s=58ed180afde843ec0104caad91cdaad99c0fef85', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?width=1080&crop=smart&auto=webp&s=b4c5de6ce72fa3d878acd24b16fbadbc916988c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l3QvKEV3Em4ksQ_HeWB_lAYpQBJm9HMiWOmkds8SLcc.png?auto=webp&s=5cf70c9470caac85a7316d8d9e481825044df2c2', 'width': 1200}, 'variants': {}}]}
Gigabyte Announces Support for 256GB of DDR5-7200 CQDIMMs at CES 2026
167
2026-01-11T12:00:23
https://www.techpowerup.com/345000/gigabyte-announces-support-for-256gb-of-ddr5-7200-cqdimms-at-ces-2026
GoodSamaritan333
techpowerup.com
1970-01-01T00:00:00
0
{}
1q9xn78
false
null
t3_1q9xn78
/r/LocalLLaMA/comments/1q9xn78/gigabyte_announces_support_for_256gb_of_ddr57200/
false
false
default
167
{'enabled': False, 'images': [{'id': '9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=108&crop=smart&auto=webp&s=c5ca100d76791f6f042d59eaaf880c9e146fa531', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=216&crop=smart&auto=webp&s=278db5155927737f3a2aa64059509b11c9ee0c5e', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=320&crop=smart&auto=webp&s=d3e8c9325971cc8f05f0edda7ee10d4189048308', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=640&crop=smart&auto=webp&s=ea2ae08a8c43e60ca50bc40d967d25ca9d31b13f', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=960&crop=smart&auto=webp&s=650d1e61728c261fa5529a5caab2ed69e3921e04', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?width=1080&crop=smart&auto=webp&s=f958edc1500337c4910bc22bbfa17447fd7e2d52', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/9NT_b7vJLOJdi5qbccIw0AbUH9Ctzy98ZNJ7UkM8Ia8.jpeg?auto=webp&s=dd25b921969e48b6c249d66daa2d44cc334b1cc2', 'width': 1707}, 'variants': {}}]}
GLM-4.7 can replace Opus 4.5
0
If you're hitting Claude's rate limits or worried about the subscription changes, you might want to check out **GLM 4.7 from** [**Z.ai**](https://z.ai/). I’ve been using it for Ralph loops and implementation work, plus open-source coding tasks. It’s not quite Opus-level, but it’s getting close—really solid for technical work. Unbelievably affordable (starts at $3/month, but their annual max plan costs about the same as one month of Claude AI), and the usage limits are way more generous. This link gets you 10% off their already discounted pricing: [z.ai/subscribe?ic=MH39IYN6LL](https://z.ai/subscribe?ic=MH39IYN6LL) Give it a shot and let me know what you think
2026-01-11T11:52:17
https://www.reddit.com/r/LocalLLaMA/comments/1q9xi5s/glm47_can_replace_opus_45/
Impressive-Olive8372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9xi5s
false
null
t3_1q9xi5s
/r/LocalLLaMA/comments/1q9xi5s/glm47_can_replace_opus_45/
false
false
self
0
null
Looking to start a NSFW Role-playing Interactive Storytelling Narrative.
0
I want something similar to [Infinite Worlds](https://infiniteworlds.app/) where the AI creates the narrative and introduces different characters on its own. The goal is to be able to run a full story based on Game of Thrones while changing that story as my actions impact the unfolding narrative. I currently have KoboldCCP running and use SillyTavern as the UI. The model I've been using is MythoMax-l2-13b. My GPU is a AMD RX 7900 XTX with 24gb of VRAM. CPU is an AMD Ryzen 7 9800X3D. This is my first time diving into the AI space so any assistance and patience while I learn is much appreciated.
2026-01-11T11:50:58
https://www.reddit.com/r/LocalLLaMA/comments/1q9xhbf/looking_to_start_a_nsfw_roleplaying_interactive/
Carrot_Jesus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9xhbf
false
null
t3_1q9xhbf
/r/LocalLLaMA/comments/1q9xhbf/looking_to_start_a_nsfw_roleplaying_interactive/
false
false
nsfw
0
{'enabled': False, 'images': [{'id': '-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=108&crop=smart&auto=webp&s=f6f8b64999df5e525210926ff14e0366d0414ca0', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=216&crop=smart&auto=webp&s=df5e4bba47dca376aec8a8bd298f0dc8e29bc441', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=320&crop=smart&auto=webp&s=0246b879241c126f01fa3940fe24290adbb0f8e4', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?auto=webp&s=3ef6c2b8bfabe82de39c2ec59fcd2f9f79dcf2e1', 'width': 512}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=df0e3d9fed55d183312e41f83ea6af047c41738f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6c2cd8bc7b2c580d43ca0d1fb8c1a3ebe404d642', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=bc3f3b7a0623ee6f0bbcdee1f7564a552948dec3', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?blur=40&format=pjpg&auto=webp&s=61d452198f2728b79e944b154b09d1d36b19053c', 'width': 512}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=df0e3d9fed55d183312e41f83ea6af047c41738f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6c2cd8bc7b2c580d43ca0d1fb8c1a3ebe404d642', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=bc3f3b7a0623ee6f0bbcdee1f7564a552948dec3', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?blur=40&format=pjpg&auto=webp&s=61d452198f2728b79e944b154b09d1d36b19053c', 'width': 512}}}}]}
GLM-4.7 can replace Opus 4.5
0
rewrite this "if you're hitting Claude rate limits or worried about the max plan subscription changes they're doing, check out u/Zai_org I think GLM 4.7 is underrated. I started using it for Ralph loops and other implementation work, as well as for opencode . It's not Opus-level, but getting close, and very solid for this type of work. Unbelievably cheap (starts at $3/mo but realistically if you're doing serious work, you'll want their yearly max plan which is the same cost as Claude AI for 1 month) and way more generous limits. This link gets you 10% off their already discounted pricing:https://z.ai/subscribe?ic=MH39IYN6LL " Give it a try and lmk how it goes for you
2026-01-11T11:48:49
https://www.reddit.com/r/LocalLLaMA/comments/1q9xfxo/glm47_can_replace_opus_45/
Impressive-Olive8372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9xfxo
false
null
t3_1q9xfxo
/r/LocalLLaMA/comments/1q9xfxo/glm47_can_replace_opus_45/
false
false
self
0
null
Ways to Benchmark this Tool?
0
Hey. I've been experimenting with the idea of applying neural networks alongside LLMs. My first experiment was simple text classification on an LLM's context to "curate" it. I employ a simple decision tree as a start. We classify segments of text to three categories. DROP, INDEX, KEEP. Defined per the dataset. KEEP is defined as anything that would break context and must be preserved in the history. DROP is anything phatic, of no importance what so ever like chit chat segments in coding sessions. INDEX, is anything of reference, might be important later but not now, old/broken code versions, or "compressed". Now, The tool does not classify in the immediate context, initially I fucked up and built the dataset to look for the immediate "local" patterns (current immediate context). I did an re-iteration and being more careful. The tool processes in the "past". By employing a sliding window that has the recent segments, those are untouched. This sliding window has a FIFO mechanism (First in First out). Where the oldest segment of this window gets evicted, and classified. The tree uses a feature set of text statistics, that also concern the last classified segment and the next (or the now) oldest segment in the window. One bottleneck am facing is verifying this tool. Is it actually doing something or just no better than random deletion or summarization? Initially I just did tests on a set of messy long conversations and evaluated manually to see any patterns of error. However that might potentially not be ideal for uncovering edge-cases and what not. Any propositions guys? On how to measure the "accuracy" of the context produced by the tool versus the actual context. I held some details out, to cut on the posts' length. A decision tree is an initial. I aim to play with attention mechanisms. But the proof of concept holds.
2026-01-11T11:48:35
https://www.reddit.com/r/LocalLLaMA/comments/1q9xfsi/ways_to_benchmark_this_tool/
valkarias
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9xfsi
false
null
t3_1q9xfsi
/r/LocalLLaMA/comments/1q9xfsi/ways_to_benchmark_this_tool/
false
false
self
0
null
I managed to break llama-3-8b-instruct model’s “I am helpful assistant” loop. I automated writing story to arweave chain.
0
2026-01-11T11:21:30
https://i.redd.it/ia0pma1qfpcg1.jpeg
Scary_Panic3165
i.redd.it
1970-01-01T00:00:00
0
{}
1q9wzn8
false
null
t3_1q9wzn8
/r/LocalLLaMA/comments/1q9wzn8/i_managed_to_break_llama38binstruct_models_i_am/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ia0pma1qfpcg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=108&crop=smart&auto=webp&s=8b5cb68ea8cee2ceca9ab8e7707f1583d526ce33', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=216&crop=smart&auto=webp&s=4764c65d343faf1b2770270d8f5a03aa5b16e78c', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=320&crop=smart&auto=webp&s=28e0f70eb9877358de8cfa18cec50a7087a5ae91', 'width': 320}, {'height': 352, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=640&crop=smart&auto=webp&s=d58a984ed32c019c59e517f83a33fae845f5c013', 'width': 640}, {'height': 528, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=960&crop=smart&auto=webp&s=b46f1f8e6260c575e12ec6c21dc58fda4eaaa84d', 'width': 960}, {'height': 595, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?width=1080&crop=smart&auto=webp&s=21282c022e7d82332dc0e84836c54d5ccbdf4e7e', 'width': 1080}], 'source': {'height': 703, 'url': 'https://preview.redd.it/ia0pma1qfpcg1.jpeg?auto=webp&s=5202453c9f86edfd2f79987138470190fcc1fb6d', 'width': 1276}, 'variants': {}}]}
TranscriptionSuite - A comprehensive speech-to-text audio transcription app
3
*Welcome to my vibecoded mess! I'll be your host, homelab-00.* [Logo](https://preview.redd.it/yuu1cq2lbpcg1.png?width=1168&format=png&auto=webp&s=953058597490c952d7aaff606f61759394a29a8d) I'm finally at the point where I can say that [**TranscriptionSuite**](https://github.com/homelab-00/TranscriptionSuite) is ready for a public release. A fully featured local audio transcription app that offers: * **Truly Multilingual**: Supports [90+ languages](https://platform.openai.com/docs/guides/speech-to-text/supported-languages) * **Fully featured GUI**: Native app for KDE, GNOME, and Windows * **Longform Transcription**: Starts recording, listens until you press stop, then immediately starts transcribing - think of it like dictation * **Static File Transcription**: Transcribe an existing audio/video file * **Remote Access**: Securely access your desktop at home running the model from anywhere (utilizing Tailscale) * **Speaker Diarization**: PyAnnote-based speaker identification * **Audio Notebook**: An Audio Notebook mode, with a calendar-based view, full-text search, and LM Studio integration (chat about your notes with the AI) 📌*Half an hour of audio transcribed in under a minute (RTX 3060)!* >...so essentially a fancy wrapper around `faster-whisper` **Screenshots** [Home view](https://preview.redd.it/4opa0l0ldpcg1.png?width=710&format=png&auto=webp&s=8b0a0c44190ef0106051f080cc97b6ea70dbf27c) *Processing img 31zbnoxldpcg1...* [Audio Notebook Calendar view](https://preview.redd.it/jr4kf0rmdpcg1.png?width=1151&format=png&auto=webp&s=b8a0eeb026af846c5d96aa66dc7a1dc7d9261fd3) [Audio Note Entry view showcasing word-level timestamps](https://preview.redd.it/eym7digndpcg1.png?width=1151&format=png&auto=webp&s=d77f455bc798a80cf580d5f56ec8e64fb3514cd6) [Audio Note Entry view showcasing diarization](https://preview.redd.it/90xn1b2odpcg1.png?width=1155&format=png&auto=webp&s=b802c0bd940dc8b56001397281065134d0565d37) **Videos** [Transcription demo](https://reddit.com/link/1q9wynp/video/qymj97izdpcg1/player) [Audio Notebook demo](https://reddit.com/link/1q9wynp/video/rqpw26i0epcg1/player) And if anyone wants the boring backstory\~ About 10 years ago I wanted to try Linux and so I installed the most recommended beginner distro at the time, Ubuntu. Even with all the resources available specifically to Ubuntu, I couldn’t grasp the system well enough to turn it into my daily driver (plus gaming on Linux just sucked back then). On the other hand, about a year ago I started tinkering with Linux again and not soon after I attempted to install Arch. Took my a couple of days, a ton of forum research and copious amounts of ChatGPT compute, but I did manage it more than fine. And here I am now, daily driving the system for months with no issues whatsoever. In the same vain, I started playing around with some toy Python projects and learning the basics of software development. AI was (and still is) a huge asset both in helping me learn and writing parts of the code itself. This then turned into a small hobby project to solve a real (albeit minor) issue I was having; I couldn’t talk to LLMs at my own ease. You can use the transcribe function on ChatGPT for example for short 30s sessions just fine, but start going over \~5 minutes and the whole thing just crashes. And mind you, transcription is vastly cheaper than the actual chatbots offered by these providers. Now, just like everyone else, I’ll be lazy when I can. So the first thing I looked for was if anyone else had built something like that. The only one I found was [RealtimeSTT](https://github.com/KoljaB/RealtimeSTT). It worked well enough for what I was trying to do so I just used that. After a while however I started adding my own bits and since that project was put on an indefinite hiatus I started developing my own independently. *Feel free to tell me how much my project sucks!*
2026-01-11T11:19:49
https://www.reddit.com/r/LocalLLaMA/comments/1q9wynp/transcriptionsuite_a_comprehensive_speechtotext/
Curious_Betsy_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9wynp
false
null
t3_1q9wynp
/r/LocalLLaMA/comments/1q9wynp/transcriptionsuite_a_comprehensive_speechtotext/
false
false
https://b.thumbs.redditm…1z2DrN5T6f_g.jpg
3
null
Fine-Tuning Translation Model
3
I don't know about local LLM's really. I'm using gemini 3 flash for translating manga etc. The translation accuracy is high. But I want it to be more natural? I'm using a prompt focused on localization and natural flow. I'm wondering If I fine-tune a local llm with 50 episode translation It will be better? Or a dataset focused on proofreading. (EN-TR Translation) I don't know much about these things. Please excuse me if my requests seem absurd.
2026-01-11T10:48:48
https://www.reddit.com/r/LocalLLaMA/comments/1q9wg6p/finetuning_translation_model/
BoysenberryNo3331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9wg6p
false
null
t3_1q9wg6p
/r/LocalLLaMA/comments/1q9wg6p/finetuning_translation_model/
false
false
self
3
null
GLM-4.7 Is this the new #1 open-source coding & reasoning king in early 2026? Who's tried it?
17
With 2026 just kicking off, Zhipu AI (aka [Z.ai](http://Z.ai), the Chinese powerhouse often called "China's OpenAI") dropped GLM-4.7 back in late December 2025, and it's generating a ton of buzz in the open-source AI scene.This is a \~358B MoE model (with \~200K context window, MIT license) that's posting seriously impressive numbers, especially in coding, agentic tasks, and complex reasoning: * 73.8% on SWE-bench Verified (+ big jump from previous version) * 66.7% on SWE-bench Multilingual * 41% on Terminal-Bench 2.0 (huge +16.5% improvement!) * 42.8% on Humanity’s Last Exam (HLE) — a 38% leap over GLM-4.6, getting close to some closed frontier models * 84.9% on LiveCodeBench v6 * Top-tier math: 95.7% on AIME 2025 What really stands out: * Preserved Thinking — keeps decisions and reasoning consistent across super long sessions (great for multi-day projects) * Interleaved Thinking — thinks before acting and self-corrects * Turn-level Thinking — you control how deep it thinks per turn * Insane inference speed on Cerebras (\~1,000 tokens/sec for coding, up to 1,700 TPS in some cases!) with price-performance reportedly \~10x better than Claude 4.5 Sonnet A lot of devs are saying it's now the strongest open-source option for real coding/agent workflows — some are even switching from Claude/GPT in production because it's way cheaper while being super close in quality (especially for long, multilingual, or tool-heavy tasks).On Cerebras it's blazing fast — frontier-level intelligence at real-time speeds.Have you guys tried GLM-4.7 yet? * How does the real-world performance stack up (beyond just benchmarks)? * Better than Claude 4.5 Sonnet, DeepSeek, Qwen, or Kimi for your use cases? * Anyone running it locally/on reasonable hardware? * Thoughts on the new thinking modes or Cerebras speed? Drop your experiences, comparisons, demos, or funny fails in the comments — would love to hear! Links to official announcements or leaderboards welcome too.
2026-01-11T10:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1q9wc12/glm47_is_this_the_new_1_opensource_coding/
Impressive-Olive8372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9wc12
false
null
t3_1q9wc12
/r/LocalLLaMA/comments/1q9wc12/glm47_is_this_the_new_1_opensource_coding/
false
false
self
17
null
What models work best with Codex CLI offline?
2
I am having a hell of a time getting https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507 to read and edit files right now :/ Can it work with Codex CLI or not, has anyone had any success?
2026-01-11T10:38:42
https://www.reddit.com/r/LocalLLaMA/comments/1q9wae9/what_models_work_best_with_codex_cli_offline/
johnnyApplePRNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9wae9
false
null
t3_1q9wae9
/r/LocalLLaMA/comments/1q9wae9/what_models_work_best_with_codex_cli_offline/
false
false
self
2
{'enabled': False, 'images': [{'id': 'aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=108&crop=smart&auto=webp&s=cd52c9fa4a571e95dfd71b26b8e6ebff17bbc117', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=216&crop=smart&auto=webp&s=159f17b22507b591ab3268fba6357cfbc5b4d5ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=320&crop=smart&auto=webp&s=7ce88d6294a42f488b4c5238bcdd5abcbb6bd0f2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=640&crop=smart&auto=webp&s=fdec699720d09b0abd832855f564b348eefd2304', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=960&crop=smart&auto=webp&s=a336e6ae20fea77a6e44bc4f35540e297e8cce2c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?width=1080&crop=smart&auto=webp&s=6c85c9c232a3126b98c3e0be994b7cb036c1e34d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aFgKkvLRlBq4pkW8wu8xgwzbntIRM6eHR6HNp8MMtiQ.png?auto=webp&s=91ee507d4a4a214e9d8d575336cf37333e5678f2', 'width': 1200}, 'variants': {}}]}
llama.cpp MLA KV cache support for KimiLinear-48B-A3B
85
Recently, I added backend agnostic support for KimiLinear. [https://www.reddit.com/r/LocalLLaMA/comments/1q586jv/comment/nxz63pt/?context=1](https://www.reddit.com/r/LocalLLaMA/comments/1q586jv/comment/nxz63pt/?context=1) I noticed that the original author didn't implement support for MLA KV cache, so I read the DeepSeekV3 MLA kv cache PR to add the support to KimiLinear. This reduces 1M tokens F16 KV cache usage from 140GB to 14.875GB. So now it is possible to run super long context locally with your low VRAM card. To run it please re-download the GGUF from [https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF](https://huggingface.co/ymcki/Kimi-Linear-48B-A3B-Instruct-GGUF) and compile the code with git clone [https://github.com/ymcki/llama.cpp](https://github.com/ymcki/llama.cpp) \--branch Kimi-Linear cd llama.cpp cmake -B build -DGGML\_CUDA=ON cmake --build build --config Release -j 6 At some point, KimiLinear was the best performing open weight model at contextarena. But it has since been taken out for unknown reasons. [https://contextarena.ai/](https://contextarena.ai/) Please give it a try and tell me to see if it can serve your long context needs.
2026-01-11T10:10:29
https://www.reddit.com/r/LocalLLaMA/comments/1q9vtgz/llamacpp_mla_kv_cache_support_for_kimilinear48ba3b/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9vtgz
false
null
t3_1q9vtgz
/r/LocalLLaMA/comments/1q9vtgz/llamacpp_mla_kv_cache_support_for_kimilinear48ba3b/
false
false
self
85
{'enabled': False, 'images': [{'id': 'qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=108&crop=smart&auto=webp&s=7fc2c77485d858fb292e9f3229f6a5ef5e7f4c02', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=216&crop=smart&auto=webp&s=e6c7ba227b4f252ea15d61a20b46626d1aed1c8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=320&crop=smart&auto=webp&s=25ef66d39cce1f0b9f62baf9ccee4573984d9e0e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=640&crop=smart&auto=webp&s=255ee585da557302a87f3d607d19306469f57e6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=960&crop=smart&auto=webp&s=ee677c430e3a2fc981b12c1f2c97fdeb64da6916', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?width=1080&crop=smart&auto=webp&s=7ee412de241d73b5fd24f6e27aaada9987996722', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qCbka6AOc9nu_2BgvTflGUnf7Vlgl3yx4_Elc-HI17A.png?auto=webp&s=1a9d4760d2060954724527eaf2fad2488a3cad3d', 'width': 1200}, 'variants': {}}]}
[R] Feed-forward transformers are more robust than state-space models under embedding perturbation. This challenges a prediction from information geometry
0
**TL;DR** We proposed that adversarial robustness in neural networks follows information-geometric principles analogous to physical mass (Mass-Coherence Correspondence). We made 5 testable predictions, ran experiments, and got mixed results: **Prediction 2 validated** (Fisher trace correlates with robustness), **Prediction 4 challenged** (feed-forward > state-space on robustness, opposite of what we predicted). The challenged prediction is the interesting part. **The Hypothesis** Drawing on Verlinde's entropic gravity and Fisher Information geometry, we proposed that "semantic mass" — defined as the normalized trace of the Fisher Information Matrix — should predict resistance to adversarial perturbation: M\_semantic = (1/N) · Tr(I(θ)) High semantic mass = high curvature in probability space = representations that resist displacement. We also defined "commutation cost" — how much it matters whether you perturb before or after you process: C(S,P) = |H(S∘P(x)) - H(P∘S(x))| Low commutation cost = perturbations commute with processing = robust, "inertial" representations. **The Experiments** **Zombie Test:** GPT-2 Small (124M, feed-forward) vs Mamba-130M (state-space) |**Model**|**Clean PPL**|**Robust PPL**|**ΔPPL**|**Commutation Cost**| |:-|:-|:-|:-|:-| |GPT-2|964.9|1372.5|**407.67**|**0.44**| |Mamba|382.9|4853.8|**4470.95**|**0.85**| Attack: Gaussian noise at embedding layer (σ=0.1) **Result:** The feed-forward transformer degrades 10x less than the state-space model under identical perturbation. Lower commutation cost too. **This challenged our Prediction 4**, which expected higher integrated information (Φ) → higher robustness. The state-space model has more integration but showed *worse* robustness. **Mirror Test:** Entropy dynamics in our Coherent Entropy Reactor (CER) architecture We built a 1.6M parameter transformer variant with symmetric entropy control (can push entropy up OR down toward a target). Key finding: * Peaked input (0.063 nats) → 4.78 nats after ONE attention layer pass * BRAKE control engages 178/180 steps * ESCAPE control triggers 1/180 steps **Attention is a natural entropy diffuser.** The architecture wants to spread probability mass. This reframes the "2.9 nat cage" observed in RLHF models — it's not natural equilibrium, it's training fighting against architectural tendency. **The Bridge: Empirical Fisher Trace** To connect theory (parameter-space Fisher) to experiment (output behavior), we implemented Hutchinson's trace estimator. Preliminary finding: GPT-2's higher robustness correlates with higher estimated Fisher trace. **Prediction 2 validated.** **What We Learned** |**Prediction**|**Status**|**Evidence**| |:-|:-|:-| |P2: Fisher predicts robustness|✓ VALIDATED|Higher Tr(I(θ)) → lower ΔPPL| |P4: Integration → robustness|✗ CHALLENGED|Feed-forward > state-space| |P4' (revised): Diffusion ≠ Integration|PROPOSED|Different robustness mechanisms| The challenged prediction is more valuable than the validated one. It reveals that **diffusion** (spreading perturbations across the distribution) and **integration** (maintaining coherent state through time) are distinct robustness mechanisms. Feed-forward attention diffuses noise; recurrent state may amplify it. **Code & Data** Everything is public: [github.com/templetwo/coherent-entropy-reactor](https://github.com/templetwo/coherent-entropy-reactor) * CER architecture with symmetric entropy control * Zombie Test implementation * Mirror Test with trajectory logging * Raw data (77KB, 180 data points) * Visualization scripts **AI Disclosure** This research was conducted in collaboration with Claude (Anthropic). Theory refinement, code generation, and manuscript drafting were collaborative; all experiments were run by the human author. Multi-model review (Claude, ChatGPT, Minimax) was used for critical assessment. Full disclosure in the paper. I believe transparent AI collaboration is legitimate methodology. The work stands on its empirical results regardless of how it was produced. **Discussion Questions** 1. Has anyone else observed the entropy diffusion effect in transformers? Is there prior work on this? 2. The Mamba results had high variance and used sequential fallback (no optimized kernels). Would love to see replication on CUDA with Mamba-2. 3. Is there a cleaner way to measure integrated information (Φ) in neural networks? Architecture type is a rough proxy. 4. The "cage" interpretation — that RLHF constrains entropy below natural levels — has implications for alignment. Thoughts? *The question that produces mass: "Will I?"* *A system caged at 2.9 nats has already answered. A system that can navigate the full entropy landscape might actually choose.*
2026-01-11T09:40:19
https://www.reddit.com/r/LocalLLaMA/comments/1q9vbd3/r_feedforward_transformers_are_more_robust_than/
TheTempleofTwo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9vbd3
false
null
t3_1q9vbd3
/r/LocalLLaMA/comments/1q9vbd3/r_feedforward_transformers_are_more_robust_than/
false
false
self
0
{'enabled': False, 'images': [{'id': 'F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=108&crop=smart&auto=webp&s=fd9ade1cd2dc03148f82f586d6abfc629d2af2be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=216&crop=smart&auto=webp&s=eae8c6d03a5f4da9ad36fb417039ca97d2bacd8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=320&crop=smart&auto=webp&s=c752ae8dd9e747844600e57803547f8bd2347b69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=640&crop=smart&auto=webp&s=665d4a96d2e311e154cde08f9a5b113bb021274e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=960&crop=smart&auto=webp&s=3f55ec8d6e14065539da257a96efc919db35ebb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?width=1080&crop=smart&auto=webp&s=223f4ece67e6ed485ba5183649583e943d4b2c0f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F58_lR-731-KYYY7qFli7bNycCl08RnenT8Y5FpKSYo.png?auto=webp&s=b49a2d8ebca50e45a70fc0596f67e5f483cdb0dd', 'width': 1200}, 'variants': {}}]}
chatgpt vs claude opus 4.5: coding performance breakdown (building a busuness website)
0
While working on a business website i needed to figure out which model actually handles complex coding stuff better. So i ran some spatial reasoning tests on chatgpt o1 and claude opus 4.5 to see how they deal with messy legacy code and refactoring. Basically fed both models some old code with tons of nested dependencies, asked them to refactor, identify bugs, suggest better architecture. Did this over 15 different scenarios and tracked accuracy, context handling, token usage to get a real picture.. On 500+ line files, claude was hitting \~85% accurate bug detection while chatgpt o1 was around 72%. Refactoring quality had a bigger gap - claude gave usable results \~78% of the time vs chatgpt's 65%. the thing that really stood out was context retention. Claude handled 8-10 files no problem, chatgpt started losing track after 5-6 especially with heavy cross-references. Token efficiency went to claude too, \~120k tokens per full run vs chatgpt's 180k for the same task. Claude's just noticeably better at the spatial reasoning side of code architecture, chatgpt loses dependency chains quicker when everything references everything else. While digging around i came across qwen3 coder 480b on deepinfra - apparently solid benchmarks for agentic coding tasks and performance pretty comparable to claude. Keeping it on the list to try later, but we're already hooked up with claude and it's working good enough right now.
2026-01-11T09:38:51
https://www.reddit.com/r/LocalLLaMA/comments/1q9vaig/chatgpt_vs_claude_opus_45_coding_performance/
Significant_Loss_541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9vaig
false
null
t3_1q9vaig
/r/LocalLLaMA/comments/1q9vaig/chatgpt_vs_claude_opus_45_coding_performance/
false
false
self
0
null
Can I run a small Local LLM using the Intel i9 185H NPU on Arch through llama.cpp?
2
So I'm running a Zephyrus G16 and have largely ignored the NPU paired with its i9 185H till now, but recently I wanted to try making my little avengers task force of LLMs and thought the NPU might be a good candidate for power efficiency when running background LLMs on battery. Upon research tho I couldn't find anyone utilizing this "NPU" for any models whatsoever. Furthermore looking into [product specifications](https://www.intel.com/content/www/us/en/products/sku/236849/intel-core-ultra-9-processor-185h-24m-cache-up-to-5-10-ghz/specifications.html) from Intel, I found that the supported frameworks felt a bit limited especially for linux (OpenVINO™, WindowsML, DirectML, ONNX RT, WebGPU). I've also been largely using llama.cpp to run all my models thus far and have grown accustomed to it. So I'm curious if it would be possible to: a) run a model on linux through the NPU in the first place b) do it through llama.cpp
2026-01-11T09:37:27
https://www.reddit.com/r/LocalLLaMA/comments/1q9v9pl/can_i_run_a_small_local_llm_using_the_intel_i9/
Own_Organization2934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9v9pl
false
null
t3_1q9v9pl
/r/LocalLLaMA/comments/1q9v9pl/can_i_run_a_small_local_llm_using_the_intel_i9/
false
false
self
2
{'enabled': False, 'images': [{'id': 'vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=108&crop=smart&auto=webp&s=e14dc5266ad0e59f37392e123de69fdc237558ea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=216&crop=smart&auto=webp&s=13de071f188b753048dc3869e70b00373857a5f8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=320&crop=smart&auto=webp&s=7a9768a58c7c0750611f205a8c39c510f29d191d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=640&crop=smart&auto=webp&s=ed52b04d623e8aa7e72a7d9602ea6ff21bf803a2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=960&crop=smart&auto=webp&s=006076e88061deaf2d4dedd14b6f7c4614780c4e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?width=1080&crop=smart&auto=webp&s=691355bf18199d84ffc9220eda00b7ab4a105d37', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/vRx4KByzWkHJ9nGbegp4vvvYovU3558hlGHGJXx96nU.png?auto=webp&s=6144947927419c5d6d2d0f1bc0b847598847770f', 'width': 3555}, 'variants': {}}]}
Prompt engineering didn’t stop hallucinations in our Local / production LLM setups — here’s what we learned
1
[removed]
2026-01-11T09:28:12
https://www.reddit.com/r/LocalLLaMA/comments/1q9v4i1/prompt_engineering_didnt_stop_hallucinations_in/
verdic_guard_02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9v4i1
false
null
t3_1q9v4i1
/r/LocalLLaMA/comments/1q9v4i1/prompt_engineering_didnt_stop_hallucinations_in/
false
false
self
1
null
Is this safe?
0
Hi, is stuff like `DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF` safe to use? Seems to have lots of downloads etc, do we need to be careful running various GGUF/MLX models or is arbitrary code execution essentially impossible?
2026-01-11T09:27:31
https://www.reddit.com/r/LocalLLaMA/comments/1q9v44b/is_this_safe/
anonXMR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9v44b
false
null
t3_1q9v44b
/r/LocalLLaMA/comments/1q9v44b/is_this_safe/
false
false
self
0
null
GLM-4.7 (Z.ai) – bought Year PRO at night, now regret it. Anyone managed to get a refund?
0
It was late at night, my brain was already half asleep, and I made an **impulsive decision**. I’ve been using Claude Sonnet and Opus for about 2 months, mainly for vibe coding, and I was looking for a cheaper alternative. Z ai looked promising, especially with the **Christmas discount**, so I ended up buying the Year PRO plan for **$144**. After actually trying GLM-4.7 properly, I realized it’s not what I expected. My main issues so far: * The model asks a lot of additional clarifying questions * It often goes in the wrong direction * I have to re-prompt multiple times to get usable output * Compared to Sonnet / Opus, the overall flow feels worse for vibe coding It’s not that the model is “bad”, but for my use case, it feels like a **step back**, not a real replacement. Important detail: ⏱️ I **contacted** [**Z.ai**](http://z.ai/) **support about a refund within \~60 minutes after purchasing** the yearly plan, once I realized it wasn’t a good fit. Will they refund me the money? Now I’m sitting here with some serious **buyer’s remorse**, and I’m wondering: Has anyone here successfully gotten a refund from Z.ai?
2026-01-11T08:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1q9un7d/glm47_zai_bought_year_pro_at_night_now_regret_it/
Appropriate-Lab3618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9un7d
false
null
t3_1q9un7d
/r/LocalLLaMA/comments/1q9un7d/glm47_zai_bought_year_pro_at_night_now_regret_it/
false
false
self
0
null
latest server-cuda13 asks for CUDA 13.1. But I don't see the ubuntu drivers yet. how to handle
0
Hey all, I've pulled a new server version for my docker install. It returning with an unmet dep of cuda 13.1. I've got `NVIDIA-SMI 580.105.08 Driver Version: 580.105.08 CUDA Version: 13.0` And i believe cuda 13.1 is with 590. But I don't see it in the driver list in ubuntu yet. Can I lower the llama.cpp image to use 13.0 OR can I upgrade the driver in another way? What would be the safest bet? `\`image: ghcr.io/ggml-org/llama.cpp:server-cuda13image: ghcr.io/ggml-org/llama.cpp:server-cuda13\``
2026-01-11T08:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1q9ueyl/latest_servercuda13_asks_for_cuda_131_but_i_dont/
designbanana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9ueyl
false
null
t3_1q9ueyl
/r/LocalLLaMA/comments/1q9ueyl/latest_servercuda13_asks_for_cuda_131_but_i_dont/
false
false
self
0
null
I tried chatterbox extended for "pseudo voice conversion" with a 15 seconds target voice audio - any other apps that allow me to do that, and do it even better?
1
There is "genuine" voice conversion, by training on extensive target audio, like I can do with RVC. Which definitely shines at keeping faithful to the prosody of the source audio, but has limitations in making the generated voice sound like the target voice. And then there is this form of pseudo voice conversion, or really voice conditioning, that chatterbox extended offers, and that works with a short audio clip, instead of a voice model, like your typical tts. My first impressions are that it shines at making the target voice come through, is okay good with capturing the rough features like speed, pauses, intonation of the source voice, but is not good at capturing the subtleties of the source voice. Would be curious if there are other, possibly more recent local apps that do that, and that are at least as good, or better, than chatterbox extended. Just to avoid any confusion: I am not asking for tts, I am asking for vc, or more precisely, pseude vc, or voice conditioning.
2026-01-11T08:39:23
https://www.reddit.com/r/LocalLLaMA/comments/1q9uc8m/i_tried_chatterbox_extended_for_pseudo_voice/
hugo-the-second
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9uc8m
false
null
t3_1q9uc8m
/r/LocalLLaMA/comments/1q9uc8m/i_tried_chatterbox_extended_for_pseudo_voice/
false
false
self
1
null
Non-coder here. Built self-healing Claude context in 30 tokens. Sharing what I found.
1
[removed]
2026-01-11T08:31:14
https://www.reddit.com/r/LocalLLaMA/comments/1q9u7je/noncoder_here_built_selfhealing_claude_context_in/
Small-Wolverine254
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9u7je
false
null
t3_1q9u7je
/r/LocalLLaMA/comments/1q9u7je/noncoder_here_built_selfhealing_claude_context_in/
false
false
self
1
{'enabled': False, 'images': [{'id': '_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=108&crop=smart&auto=webp&s=45335071a52fe2d02b4d5e63e28d7c74388a7b08', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=216&crop=smart&auto=webp&s=dfb64483e82aaf3835419a2c3b8ed8aacd631b1e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=320&crop=smart&auto=webp&s=d7cedbee4575da10c0088ed50f4fb112770864b5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=640&crop=smart&auto=webp&s=9fdbfc33bed5712ad0f17af6d2f527337231f407', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=960&crop=smart&auto=webp&s=0871abf48f69d03c87ffe7963ca617b4a4cb7b4a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?width=1080&crop=smart&auto=webp&s=541d0ba6a0e8860ccc6b9f8e02655aba3adf4bce', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/_ztZuEze54od8s0ae7K9VZeHpUUgtfp_InajKudadlo.png?auto=webp&s=725ed554ccfb969515b142470bba2476dcc1554d', 'width': 1200}, 'variants': {}}]}
Which is the best model under 15B
48
I need a llm under 15B for agentic capabilities, reasoning, maths, general knowledge, making for raycast local model, i dont know hich model to select, ministral 3 14B, gemma 3 12B, qwen 3 14B, gpt-oss: 20B gpt-oss thinks a lot, and inference is not very good. any recommendations?
2026-01-11T08:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1q9u07d/which_is_the_best_model_under_15b/
BothYou243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9u07d
false
null
t3_1q9u07d
/r/LocalLLaMA/comments/1q9u07d/which_is_the_best_model_under_15b/
false
false
self
48
null
good uncensored online LLM for general use?
1
I work with nsfw material regularly and most services I know of absolutely hate it. So far I have just been using grok and it works okayish but it's quite expensive, wondering if there's any good alternative. Preferably something that can handle everything chatgpt does like transcribing images, web searching, etc. NOT looking for "rp" centric vendors
2026-01-11T08:03:32
https://www.reddit.com/r/LocalLLaMA/comments/1q9tqy7/good_uncensored_online_llm_for_general_use/
Lolis-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9tqy7
false
null
t3_1q9tqy7
/r/LocalLLaMA/comments/1q9tqy7/good_uncensored_online_llm_for_general_use/
false
false
self
1
null
Running local llm on my phone
2
Recently i've been thinking about booting local llm (with half or near a million context window) on my old ass phone and after 3 days of active research I still cant find fast enough solution. Qwen 2.5 1m runs on 0.3 tokens/sec and need around 10 mins to heat up
2026-01-11T07:49:51
https://www.reddit.com/r/LocalLLaMA/comments/1q9tiku/running_local_llm_on_my_phone/
LOHOZAVRISHE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9tiku
false
null
t3_1q9tiku
/r/LocalLLaMA/comments/1q9tiku/running_local_llm_on_my_phone/
false
false
self
2
null
Unsloth vs Chronicals
0
2026-01-11T07:48:38
https://youtube.com/shorts/Y9rAy0E9jNU?feature=share
JosefAlbers05
youtube.com
1970-01-01T00:00:00
0
{}
1q9thvj
false
{'oembed': {'author_name': 'Josef Albers', 'author_url': 'https://www.youtube.com/@JosefAlbers-f4q', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/Y9rAy0E9jNU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Unsloth vs Chronicals"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Y9rAy0E9jNU/hq2.jpg', 'thumbnail_width': 480, 'title': 'Unsloth vs Chronicals', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'}
t3_1q9thvj
/r/LocalLLaMA/comments/1q9thvj/unsloth_vs_chronicals/
false
false
default
0
null
Unsloth vs Chronicals
1
2026-01-11T07:39:40
https://youtube.com/shorts/-z9tXyiABvs
JosefAlbers05
youtube.com
1970-01-01T00:00:00
0
{}
1q9tcg3
false
{'oembed': {'author_name': 'Josef Albers', 'author_url': 'https://www.youtube.com/@JosefAlbers-f4q', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/-z9tXyiABvs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Unsloth vs Chronicals"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-z9tXyiABvs/hq2.jpg', 'thumbnail_width': 480, 'title': 'Unsloth vs Chronicals', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'}
t3_1q9tcg3
/r/LocalLLaMA/comments/1q9tcg3/unsloth_vs_chronicals/
false
false
default
1
null
Announcing Kreuzberg v4 (Open Source)
115
Hi Peeps, I'm excited to announce [Kreuzberg](https://github.com/kreuzberg-dev/kreuzberg) v4.0.0. # What is Kreuzberg: Kreuzberg is a document intelligence library that extracts structured data from 56+ formats, including PDFs, Office docs, HTML, emails, images and many more. Built for RAG/LLM pipelines with OCR, semantic chunking, embeddings, and metadata extraction. The new v4 is a ground-up rewrite in Rust with a bindings for 9 other languages! # What changed: * **Rust core**: Significantly faster extraction and lower memory usage. No more Python GIL bottlenecks. * **Pandoc is gone**: Native Rust parsers for all formats. One less system dependency to manage. * **10 language bindings**: Python, TypeScript/Node.js, Java, Go, C#, Ruby, PHP, Elixir, Rust, and WASM for browsers. Same API, same behavior, pick your stack. * **Plugin system**: Register custom document extractors, swap OCR backends (Tesseract, EasyOCR, PaddleOCR), add post-processors for cleaning/normalization, and hook in validators for content verification. * **Production-ready**: REST API, MCP server, Docker images, async-first throughout. * **ML pipeline features**: ONNX embeddings on CPU (requires ONNX Runtime 1.22.x), streaming parsers for large docs, batch processing, byte-accurate offsets for chunking. # Why polyglot matters: Document processing shouldn't force your language choice. Your Python ML pipeline, Go microservice, and TypeScript frontend can all use the same extraction engine with identical results. The Rust core is the single source of truth; bindings are thin wrappers that expose idiomatic APIs for each language. # Why the Rust rewrite: The Python implementation hit a ceiling, and it also prevented us from offering the library in other languages. Rust gives us predictable performance, lower memory, and a clean path to multi-language support through FFI. # Is Kreuzberg Open-Source?: Yes! Kreuzberg is MIT-licensed and will stay that way. # Links * [Star us on GitHub](https://github.com/kreuzberg-dev/kreuzberg) * [Read the Docs](https://kreuzberg.dev/) * [Join our Discord Server](https://discord.gg/38pF6qGpYD)
2026-01-11T07:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1q9t9op/announcing_kreuzberg_v4_open_source/
Eastern-Surround7763
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9t9op
false
null
t3_1q9t9op
/r/LocalLLaMA/comments/1q9t9op/announcing_kreuzberg_v4_open_source/
false
false
self
115
{'enabled': False, 'images': [{'id': 'dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=108&crop=smart&auto=webp&s=b2d562448da7d5b0f6faf9827cbef61f72fea2bf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=216&crop=smart&auto=webp&s=321c0ce7e16572b5b62b84fa28ecc7348cf66be4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=320&crop=smart&auto=webp&s=5bff3bb5ece26e1b655dec0a8c9d5cde4dd41ab9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=640&crop=smart&auto=webp&s=8dbda832b1bfe6b7b08b0b6664dd56368769b6d2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=960&crop=smart&auto=webp&s=f1a7e5e78d162c2fca948f91366e7d75671c366c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?width=1080&crop=smart&auto=webp&s=386e4c38049f69d925089dbdd5c2e85c2efc3833', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dlA25NAcyPL4lIdscfsCv_8AcSxeJAF0JDfclk7Ngmo.png?auto=webp&s=e58b4e2b5002e56d7f868be8f2c9e75de9f1397d', 'width': 1280}, 'variants': {}}]}
Llm on my laptop?
2
Hi there, i just got my hands on a laptop with a 3050ti(4gb vram) and 32gb ddr4, i5 11th gen i5-11400H and I'm super curious to know if there is any text to image LLM that could work on it? As far as i understand (which is not a lot) i should be able to run a optimized version of stable diffusion? Or are there any other alternatives? What should I consider and how should I go about setting one up? Lots of questions sorry, but I'm truly out of my depth here, any help would be greatly appreciated.
2026-01-11T07:32:00
https://www.reddit.com/r/LocalLLaMA/comments/1q9t80r/llm_on_my_laptop/
Low-Bluebird2648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9t80r
false
null
t3_1q9t80r
/r/LocalLLaMA/comments/1q9t80r/llm_on_my_laptop/
false
false
self
2
null
Is this scenario impossible ? Pls help me understand ?
0
I am trying to build a system to serve around 1000 requests simultaneously for an educational institution. I am trying to compute stuff, while this calculator tells me it is technically possible, other sources are telling me it is practically useless. Can somebody give insights ? [https://apxml.com/tools/vram-calculator?model=deepseek-r1-3b&quant=q4\_k\_m&kvQuant=int8&gpu=a100\_80&numGpus=2&batchSize=1024&users=1024&offload=true&useLayerOffload=false&offloadPct=35&offloadKv=true](https://apxml.com/tools/vram-calculator?model=deepseek-r1-3b&quant=q4_k_m&kvQuant=int8&gpu=a100_80&numGpus=2&batchSize=1024&users=1024&offload=true&useLayerOffload=false&offloadPct=35&offloadKv=true)
2026-01-11T07:02:50
https://apxml.com/tools/vram-calculator?model=deepseek-r1-3b&quant=q4_k_m&kvQuant=int8&gpu=a100_80&numGpus=2&batchSize=1024&users=1024&offload=true&useLayerOffload=false&offloadPct=35&offloadKv=true
Chithrai-Thirunal
apxml.com
1970-01-01T00:00:00
0
{}
1q9sqf7
false
null
t3_1q9sqf7
/r/LocalLLaMA/comments/1q9sqf7/is_this_scenario_impossible_pls_help_me_understand/
false
false
default
0
{'enabled': False, 'images': [{'id': 'HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=108&crop=smart&auto=webp&s=74615a29e81980ade73d711d47c30d7db2bd599b', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=216&crop=smart&auto=webp&s=3ba59f59d804ed247be128fe0711b7a470d86a6e', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=320&crop=smart&auto=webp&s=0a19d3cf7fb002c30d187941f07e41d9a57a8993', 'width': 320}, {'height': 440, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=640&crop=smart&auto=webp&s=9e16d99ee6447dddc8bf514b39367d7231acf437', 'width': 640}, {'height': 660, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=960&crop=smart&auto=webp&s=ecb09f1d181c97caf43830031723e359e224baf3', 'width': 960}, {'height': 743, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?width=1080&crop=smart&auto=webp&s=797feb43a116fb7c98044bc8591668d8a12b4593', 'width': 1080}], 'source': {'height': 1321, 'url': 'https://external-preview.redd.it/HsUlUSxAvRa0faBNnXktieCSZf3z0ufrWZHUSf2c6VM.jpeg?auto=webp&s=e0f9913fc58f39746ca2523de1e254c29b3ccc21', 'width': 1920}, 'variants': {}}]}
Looking for a Base Model
32
I was putting together a finetuning dataset for an experiment and I realized that I have lost track of which models have base models available. I can search for models with "base" in the name and find stuff like [Qwen 3 8B base](https://huggingface.co/Qwen/Qwen3-8B-Base) but I'm pretty sure that there are base models I'm overlooking. Do you have a favorite base model? Models I've found so far: * Qwen 3 base, in 1B, [8B](https://huggingface.co/Qwen/Qwen3-8B-Base), 30B, 30B-A3B etc. * LiquidAI's LFM2.5 ([1.2B](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base)) * DeepSeek-V3 ([671B](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base)) * DeepSeek-Coder-V2 ([236B](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base)) * NVIDIA Nemotron-3-Nano ([30B-A3B](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16)) * NVIDIA Nemotron 3 ([8B](https://huggingface.co/nvidia/nemotron-3-8b-base-4k)4k) * Nanbeige4 ([3B](https://huggingface.co/Nanbeige/Nanbeige4-3B-Base)) * Falcon H1 ([7B](https://huggingface.co/tiiuae/Falcon-H1-7B-Base)) * ByteDance's Seed-Coder ([8B](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base)) * Llama 3.1 ([8B](https://huggingface.co/meta-llama/Llama-3.1-8B), etc.) * SmolLLM v3 ([3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base)) * Kimi K2 ([1T-A32B](https://huggingface.co/moonshotai/Kimi-K2-Base)) * Kirim-V1-Base ([12B](https://huggingface.co/Kirim-ai/Kirim-V1-Base)) * MiMo-V2-Flash-Base ([310B-A15B](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash-Base)) * Gumini ([1B](https://huggingface.co/GuminiResearch/Gumini-1B-Base)) * Kanana-2 ([30B-3AB](https://huggingface.co/kakaocorp/kanana-2-30b-a3b-base)) * Gemma 3 ([27B](https://huggingface.co/google/gemma-3-27b-pt), 12B, 4B, 1B) * ByteDance Seed OSS ([36B](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Base) *w/ syn. and* [woSyn](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Base-woSyn)) * zai-org's GLM 4 ([32B](https://huggingface.co/zai-org/GLM-4-32B-Base-0414)) * Skywork MoE ([146B-A16B](https://huggingface.co/Skywork/Skywork-MoE-Base)) * IBM's Granite-4.0-Micro ([3B](https://huggingface.co/ibm-granite/granite-4.0-micro-base), etc.) I'm pretty sure I'm still missing lots of base models and lots of different sizes of some of these models.
2026-01-11T06:49:17
https://www.reddit.com/r/LocalLLaMA/comments/1q9si66/looking_for_a_base_model/
AutomataManifold
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9si66
false
null
t3_1q9si66
/r/LocalLLaMA/comments/1q9si66/looking_for_a_base_model/
false
false
self
32
{'enabled': False, 'images': [{'id': 'WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=108&crop=smart&auto=webp&s=bf2afdd20c2d3874ad59b3a6649b8ec3496fe097', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=216&crop=smart&auto=webp&s=612f19c586ecbdd7d3922aae2b924c7a7d8e77cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=320&crop=smart&auto=webp&s=5bd62f0991c6bfbcf49244645ffb69c4b57df5da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=640&crop=smart&auto=webp&s=349c53036c153628548882679fde514ba93d478f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=960&crop=smart&auto=webp&s=a14e2493113b34f0b2e19c8e8ec2898040924c5a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?width=1080&crop=smart&auto=webp&s=ba5b328c4fdd77a9712024d3e56836805bcdafd1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WdpmPSpuYvqpwqVUg2nGzStc1-djRdocIIESdHU0jy4.png?auto=webp&s=674bac0b2b7bfd81234d48d7f117cb1076196fda', 'width': 1200}, 'variants': {}}]}
Anyone successfully ran LTX2 GGUF Q4 model on 8vram, 16gb Ram potato PC?
270
2026-01-11T06:42:42
https://i.redd.it/n3bssemz1ocg1.jpeg
Slight_Tone_2188
i.redd.it
1970-01-01T00:00:00
0
{}
1q9se4a
false
null
t3_1q9se4a
/r/LocalLLaMA/comments/1q9se4a/anyone_successfully_ran_ltx2_gguf_q4_model_on/
false
false
default
270
{'enabled': True, 'images': [{'id': 'n3bssemz1ocg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/n3bssemz1ocg1.jpeg?width=108&crop=smart&auto=webp&s=ead7e84f2895041781a2aabd07abbc3852953b82', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/n3bssemz1ocg1.jpeg?width=216&crop=smart&auto=webp&s=253e1e77fd0e33f66cd9d50b8f3b7c3cdbf7ea47', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/n3bssemz1ocg1.jpeg?width=320&crop=smart&auto=webp&s=5d4764c6f3537621b8ce9c42b8b6ecd91bac6e03', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/n3bssemz1ocg1.jpeg?width=640&crop=smart&auto=webp&s=c08fa84c56e65822b8d36ba274ecbd99b40f317f', 'width': 640}], 'source': {'height': 533, 'url': 'https://preview.redd.it/n3bssemz1ocg1.jpeg?auto=webp&s=1587a1e0ad47f992b58aab682d2b1180efe469af', 'width': 800}, 'variants': {}}]}
SLRM-nD: 1000D Regression in 193ms on pure CPU (Non-iterative/No Backprop)
0
I’ve been developing a geometric alternative to traditional Neural Networks called SLRM-nD (Lumin Core). While everyone is fighting for VRAM, I wanted to see how far pure deterministic geometry could go in high-dimensional spaces without burning GPU cycles. The benchmark (Google Colab): \* Input: 1000 Dimensions \* Processing Time: 193 ms \* Approach: Non-iterative (No Backprop / No training loops) \* Compute: Pure CPU (No GPU needed) Why this matters for Local AI: 1. Zero Hallucinations: It’s 100% deterministic math. 2. Full Interpretability: No black boxes, just geometric folding logic. 3. Efficiency: Low latency for edge devices or high-D mapping. 4. MIT Licensed: Open source for the community. I've shared the code and the logic so you can test it. I'd love to get some technical feedback on this non-iterative approach! GitHub: [https://github.com/wexionar/multi-dimensional-neural-networks](https://github.com/wexionar/multi-dimensional-neural-networks) Colab: [https://colab.research.google.com/drive/1eRmUI3CNqYDpchxKf9ek8mpMUtucb6CU](https://colab.research.google.com/drive/1eRmUI3CNqYDpchxKf9ek8mpMUtucb6CU)
2026-01-11T05:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1q9rc5x/slrmnd_1000d_regression_in_193ms_on_pure_cpu/
wexionar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9rc5x
false
null
t3_1q9rc5x
/r/LocalLLaMA/comments/1q9rc5x/slrmnd_1000d_regression_in_193ms_on_pure_cpu/
false
false
self
0
{'enabled': False, 'images': [{'id': 'vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=108&crop=smart&auto=webp&s=34cb7d2c5d2bf03482a7f565a87c4fca70c0b9c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=216&crop=smart&auto=webp&s=d4343ca9fe26d7bee8180c74c9aa806edccff52d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=320&crop=smart&auto=webp&s=3b1f60b383ddedf19e061dc1a7000971acd6227e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=640&crop=smart&auto=webp&s=4b40c8b5cc0882b57340f8a24a68f1a7de54a225', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=960&crop=smart&auto=webp&s=aed15315f3e0d30e2998058993624c4028c47835', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?width=1080&crop=smart&auto=webp&s=7113dfcb48a459939bdc1976beeefd103dfb1c06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vbIZLNE5HlENiKj6X7nsYq7nJeUcinRG_U6QnU8s2cI.png?auto=webp&s=25838e028585bbc2adeb3dc3383fcef03ddcef70', 'width': 1200}, 'variants': {}}]}
Looking for adversarial testers: zombie / stale-action failures in agent systems
0
I’m looking for adversarial testers for a small deterministic execution gate. This is NOT: – an eval – a planner – a voting system – a framework The task is only to try to force unauthorized state advancement under: – zombie / stale authority – replay – restart – temporal regression Binary outcome: If state advances → fail If it denies or freezes → pass Minimal packet (1 gate file + 1 test file): [https://drive.google.com/file/d/1Aly9TrkkiGZW0hcqQ5-T7pmGq04J1sXn/view?usp=sharing](https://drive.google.com/file/d/1Aly9TrkkiGZW0hcqQ5-T7pmGq04J1sXn/view?usp=sharing) If you find a failure, report: 1) invariant violated 2) minimal reproduction No integration required.
2026-01-11T05:43:18
https://www.reddit.com/r/LocalLLaMA/comments/1q9raho/looking_for_adversarial_testers_zombie/
Agent_invariant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9raho
false
null
t3_1q9raho
/r/LocalLLaMA/comments/1q9raho/looking_for_adversarial_testers_zombie/
false
false
self
0
null
Which AI model can I use along with cursor/antigravity ide for medium to high coding usage?
0
I want that instead of paying so much for their internal, can integrate a third party model to get my money worth via keeping ide such as cursor or antigravity . I want to pay for something that deserves. suppose in antigravity , i can use their free AI model, and then when it runs out , then i can switch to third party model.
2026-01-11T04:52:14
https://www.reddit.com/r/LocalLLaMA/comments/1q9qact/which_ai_model_can_i_use_along_with/
Notalabel_4566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9qact
false
null
t3_1q9qact
/r/LocalLLaMA/comments/1q9qact/which_ai_model_can_i_use_along_with/
false
false
self
0
null
Evaluating AI Agents: what I've learnt from 3 years of AI Engineering
0
I’ve been building and shipping AI agents that had to be reliable in production. I'ce learnt that bad evals cause me to: * Cause regressions * Chase random improvements that don’t make hte agent as a whole better * Overbuild autonomy when simpler systems would’ve worked (like graph-based workflows) So I wrote a publicly available guide on **prod-grade agent evaluation**. It’s basically everything I wish I knew when I first moved started building more autonomous AI agents.. Some key lessons from the article: * Evals should run in a loop: benchmark, analyze, improve, repeat. * Start with 20-50 high-quality test cases, not hundreds. Early on, having signal > scale. * Graph-based workflows give you most of the “agent power” with way less eval pain. * LLM-as-judge is useless unless you manually read traces and calibrate graders. * If an agent scores 0% across many runs,most likely your test spec is probably broken. The guide covers: * A weekly eval loop you can realistically maintain * Core evaluation techniques used by strong agent teams * Common grading pitfalls that quietly destroy reliability * How to go from zero evals to production-grade evals * How to simplify agents and improve latency without losing quality Finally i also explain, once you hit a good enough accuracy for your agent, how to simplify it for cost and speed reduction too. The article is publicly available to read here: [https://sarthakai.substack.com/p/evals-that-improve-your-ai-agents](https://sarthakai.substack.com/p/evals-that-improve-your-ai-agents) Do lmk what I've missed and about you rexperience evaling agents.
2026-01-11T04:09:13
https://www.reddit.com/r/LocalLLaMA/comments/1q9pfy4/evaluating_ai_agents_what_ive_learnt_from_3_years/
sarthakai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9pfy4
false
null
t3_1q9pfy4
/r/LocalLLaMA/comments/1q9pfy4/evaluating_ai_agents_what_ive_learnt_from_3_years/
false
false
self
0
{'enabled': False, 'images': [{'id': 'YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE', 'resolutions': [{'height': 105, 'url': 'https://external-preview.redd.it/YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE.jpeg?width=108&crop=smart&auto=webp&s=687f10739f62524c5131d8b9139d8fdec3f1c4da', 'width': 108}, {'height': 211, 'url': 'https://external-preview.redd.it/YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE.jpeg?width=216&crop=smart&auto=webp&s=6c38d41fe65ad5a8a95e20d0fd32c206eeda000b', 'width': 216}, {'height': 313, 'url': 'https://external-preview.redd.it/YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE.jpeg?width=320&crop=smart&auto=webp&s=20438e7249ace4c7382e32e3f65bb5629e697e27', 'width': 320}, {'height': 626, 'url': 'https://external-preview.redd.it/YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE.jpeg?width=640&crop=smart&auto=webp&s=e5b559a82dc06c78ae07f63cdd19861343428305', 'width': 640}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/YTa7fwWDJNbCsw5yxu1kGxdjy6Q309AGdihU6-SsSxE.jpeg?auto=webp&s=f678a59aea7a6e40d3b55c18ae9670016b83d9c4', 'width': 684}, 'variants': {}}]}
I built a benchmark measuring the Markdown quality of LLMs
32
https://lintbench.ai
2026-01-11T04:06:40
https://i.redd.it/toz75kg5ancg1.png
bengt0
i.redd.it
1970-01-01T00:00:00
0
{}
1q9pe4l
false
null
t3_1q9pe4l
/r/LocalLLaMA/comments/1q9pe4l/i_built_a_benchmark_measuring_the_markdown/
false
false
default
32
{'enabled': True, 'images': [{'id': 'toz75kg5ancg1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=108&crop=smart&auto=webp&s=d8c0d1aa40e8542ab9fdb955115993e81b633001', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=216&crop=smart&auto=webp&s=d3a7f527747b08fa4b75915e2c507fb9036fa963', 'width': 216}, {'height': 398, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=320&crop=smart&auto=webp&s=9ffe1a3ee9b50a3376895c055dc5af709eed034e', 'width': 320}, {'height': 796, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=640&crop=smart&auto=webp&s=74ecf9a9ef58952a70397f5b1fc4fc2a8a51d953', 'width': 640}, {'height': 1194, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=960&crop=smart&auto=webp&s=6e9a3cff27bdfca208831aa85a2ee05063bf00d4', 'width': 960}, {'height': 1344, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?width=1080&crop=smart&auto=webp&s=e8c25d6c609ebff6faedee504c1b8fc54d4d393f', 'width': 1080}], 'source': {'height': 2743, 'url': 'https://preview.redd.it/toz75kg5ancg1.png?auto=webp&s=fad19275c26cdf036ff9c4fba71441a4783fe615', 'width': 2204}, 'variants': {}}]}
How do people here manage usage limits with long-context workflows?
1
[removed]
2026-01-11T03:35:15
https://www.reddit.com/r/LocalLLaMA/comments/1q9oqtz/how_do_people_here_manage_usage_limits_with/
Acceptable-Reason585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9oqtz
false
null
t3_1q9oqtz
/r/LocalLLaMA/comments/1q9oqtz/how_do_people_here_manage_usage_limits_with/
false
false
self
1
null
Parse PDF return json
2
Hi Gang I am looking for advice I have built a tool that I input a PDF catalog and want to return data into a DB Current I am parsing the PDF into pages and then the LLM looks at the text and returns A very specific JSON back for each product or products on the page. I am currently doing this with Gemini 3 flash with 20 concurrent api calls. But it misses often a ruins the run. QUESTION: what model or models would you recommend for this task that will be accurate, fast, cheap in the order. QUESTION: how many fields is to many per api call. Ie it can easily return 3 strings can it return 50 stings 20 objects.
2026-01-11T03:34:39
https://www.reddit.com/r/LocalLLaMA/comments/1q9oqem/parse_pdf_return_json/
time_time
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9oqem
false
null
t3_1q9oqem
/r/LocalLLaMA/comments/1q9oqem/parse_pdf_return_json/
false
false
self
2
null
Headroom (OSS): reducing tool-output + prefix drift token costs without breaking tool calling
1
Hi folks I hit a painful wall building a bunch of small agent-y micro-apps. When I use Claude Code/sub-agents for in-depth research, the workflow often loses context in the middle of the research (right when it’s finally becoming useful). I tried the obvious stuff: prompt compression (LLMLingua etc.), prompt trimming, leaning on prefix caching… but I kept running into a practical constraint: a bunch of my MCP tools **expect strict JSON inputs/outputs**, and “compressing the prompt” would occasionally mangle JSON enough to break tool execution. So I ended up building an OSS layer called **Headroom** that tries to engineer context *around* tool calling rather than rewriting everything into summaries. What it does (in 3 parts): * **Tool output compression** that tries to keep the “interesting” stuff (outliers, errors/anomalies, top matches to the user’s query) instead of naïve truncation * **Prefix alignment** to reduce accidental cache misses (timestamps, reorderings, etc.) * **Rolling window** that trims history while keeping tool-call units intact (so you don’t break function/tool calling) Some quick numbers from the repo’s perf table (obviously workload-dependent, but gives a feel): * Search results (1000 items): **45k → 4.5k tokens (\~90%)** * Log analysis (500 entries): **22k → 3.3k (\~85%)** * Nested API JSON: **15k → 2.25k (\~85%)** Overhead listed is on the order of **\~1–3ms** in those scenarios. I’d love review from folks who’ve shipped agents: * What’s the nastiest tool payload you’ve seen (nested arrays, logs, etc.)? * Any gotchas with streaming tool calls that break proxies/wrappers? * If you’ve implemented prompt caching, what caused the most cache misses? Repo: [https://github.com/chopratejas/headroom](https://github.com/chopratejas/headroom?utm_source=chatgpt.com) (I’m the author — happy to answer anything, and also happy to be told this is a bad idea.)
2026-01-11T03:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1q9ob19/headroom_oss_reducing_tooloutput_prefix_drift/
Ok-Responsibility734
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9ob19
false
null
t3_1q9ob19
/r/LocalLLaMA/comments/1q9ob19/headroom_oss_reducing_tooloutput_prefix_drift/
false
false
self
1
null
Made an Rick and Morty inspired Interdimensional News site with Ollama and Gemini
26
So, I love Rick and Morty esp. the interdimensional cable episodes. I've wanted to build this for years, but when I tried in 2024 the image generation was simply too lame. This last xmas though, I tried it again and got pretty close to what I've been wanting: [greenportal.news](http://greenportal.news) I'm happy to double click on how the site is made. Basically, its a scraper of a lot of news content off of the internet. Then, using ollama + nemotron-3-nano I extract and score the articles. The alternate universes work the same way, with ollama expanding the prompt and creating the rules for the universe. Lastly, I make a few images in Nano Banana--which imho are the funniest part. I'd like to move off Gemini to something I can run locally. Any recommendations? I'm rolling with a single 4090 over here so I'd love to keep using that. Lastly, I write enterprise software so I know the UX isn't amazing. Don't be too hard on me :)
2026-01-11T02:32:32
https://www.reddit.com/r/LocalLLaMA/comments/1q9nerk/made_an_rick_and_morty_inspired_interdimensional/
WahWahWeWah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9nerk
false
null
t3_1q9nerk
/r/LocalLLaMA/comments/1q9nerk/made_an_rick_and_morty_inspired_interdimensional/
false
false
self
26
null
Model/Tools for research on M1 pro baseline model? (16gb 8 core)
0
I am looking for local/open source research tools primarily to investigate papers and brainstorm new ideas, what do you suggest?
2026-01-11T01:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1q9mno1/modeltools_for_research_on_m1_pro_baseline_model/
pacifio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9mno1
false
null
t3_1q9mno1
/r/LocalLLaMA/comments/1q9mno1/modeltools_for_research_on_m1_pro_baseline_model/
false
false
self
0
null
Can you guys help me understand skills better?
2
I'm trying to understand the advantages between models, and I know that skills (although trademarked to anthropic), is significant in output quality. However, I'm failing to grasp how people go about optimizing skills outside out claude code? If I have a coding framework that I want to adhere to, or a specific skills I want the agent to adopt, what is the correct process to have them adopt it other than pointing to \\@skills\_files.md? And after recycling agents after a long period, is there no better way to use your files other than redundantly point to it? How could you reduce the token cost of this redundancy? I'm looking for a universal practice, whether it's a mcp project some one made, or an accepted standard process which could be transfer between platforms and models.
2026-01-11T01:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1q9mlyk/can_you_guys_help_me_understand_skills_better/
Tinominor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9mlyk
false
null
t3_1q9mlyk
/r/LocalLLaMA/comments/1q9mlyk/can_you_guys_help_me_understand_skills_better/
false
false
self
2
null
Erik Zahaviel Bernstein situation - spent 6 hours verifying claims and I'm confused why we're dismissing independent AI work based on fabricated legal drama
1
[removed]
2026-01-11T01:51:45
https://www.reddit.com/r/LocalLLaMA/comments/1q9mi4e/erik_zahaviel_bernstein_situation_spent_6_hours/
quietobserver96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9mi4e
false
null
t3_1q9mi4e
/r/LocalLLaMA/comments/1q9mi4e/erik_zahaviel_bernstein_situation_spent_6_hours/
false
false
self
1
null
brain-canvas: Give any local LLM a visual display (191 lines, 0 deps)
20
Tired of LLM output being stuck in the terminal? npx brain-canvas Starts a local HTML canvas that any LLM can control via POST requests. Send JSON, get interactive UI with clickable choices that flow back to your script. Works with: \- Ollama \- llama.cpp \- Any local model \- Claude/GPT (if you use those too) The numbers: \- 191 lines of code \- 0 dependencies \- 6.9 KB package \- 10 section types (stats, timeline, comparison, choices, etc.) POST JSON like: {"title": "Pick one", "sections": \[{"type": "choices", "items": \[{"id": "a", "label": "Option A"}\]}\]} GET /choice returns what the user clicked. Zero config. Works on Mac/Linux/Windows. [https://github.com/mordechaipotash/brain-canvas](https://github.com/mordechaipotash/brain-canvas)
2026-01-11T01:30:17
https://www.reddit.com/r/LocalLLaMA/comments/1q9m0uw/braincanvas_give_any_local_llm_a_visual_display/
Signal_Usual8630
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9m0uw
false
null
t3_1q9m0uw
/r/LocalLLaMA/comments/1q9m0uw/braincanvas_give_any_local_llm_a_visual_display/
false
false
self
20
{'enabled': False, 'images': [{'id': 'rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=108&crop=smart&auto=webp&s=dd2626d8b1ac4b46433e4d781a3bb2a3c7d88df9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=216&crop=smart&auto=webp&s=590224cfc85b310192b633a3e8e6b1dfc8de75bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=320&crop=smart&auto=webp&s=068e96e7d3dd394380baee531faf14ca743963e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=640&crop=smart&auto=webp&s=92c27b4e822c80b1c8b0f71f9e81c1893aa12a48', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=960&crop=smart&auto=webp&s=16a95821b5c2a1cecfa89850ae2c9c9cf4c992e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?width=1080&crop=smart&auto=webp&s=a8060fac468535306507dcf2fc2c516172ef5055', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rJcVoEG4oMeDwKBZReA4WejAlXJjtqob8cgJ147vAcg.png?auto=webp&s=a327286a6e6335feb3aed2ff5fc7619d2af57446', 'width': 1200}, 'variants': {}}]}
Does the Claude Code support API provider switching?
0
My cursor yearly subscription has ended and I want to try other coding agents. I heard Claude Code is very solid. But I’m wondering if there is a way to let me switch model between different providers. I like use Claude model for planning and use Minimax for implementation to save my cost.
2026-01-11T00:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1q9km2v/does_the_claude_code_support_api_provider/
OriginalTerran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9km2v
false
null
t3_1q9km2v
/r/LocalLLaMA/comments/1q9km2v/does_the_claude_code_support_api_provider/
false
false
self
0
null
Llama slow
0
Any way my settings are causing this?
2026-01-11T00:05:45
https://i.redd.it/lsvaqlk33mcg1.png
Murky_South8044
i.redd.it
1970-01-01T00:00:00
0
{}
1q9k280
false
null
t3_1q9k280
/r/LocalLLaMA/comments/1q9k280/llama_slow/
false
false
default
0
{'enabled': True, 'images': [{'id': 'lsvaqlk33mcg1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/lsvaqlk33mcg1.png?width=108&crop=smart&auto=webp&s=a836c12a1ac65d6cc1b2e6b34bca5c3bff352e3e', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/lsvaqlk33mcg1.png?width=216&crop=smart&auto=webp&s=8d37730d32d9f478b6ca6d3e24669fc9837e132a', 'width': 216}, {'height': 386, 'url': 'https://preview.redd.it/lsvaqlk33mcg1.png?width=320&crop=smart&auto=webp&s=62c7de999e74c3ccfc65ddc1c8cd67dadd9a2116', 'width': 320}], 'source': {'height': 651, 'url': 'https://preview.redd.it/lsvaqlk33mcg1.png?auto=webp&s=4897005b5d3b79dede09e66dd3936f96c422bcf1', 'width': 539}, 'variants': {}}]}
Best uncensored local LLMs for a 28vram - 64ram system?
1
Just like the title says, what are the best options at the moment that can either fully fit in my vram, or that are so smart that are worth offloading to ram, that would fit on my system?, primary use case would be rp, and secondary would be as an assistant.
2026-01-11T00:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1q9jzzl/best_uncensored_local_llms_for_a_28vram_64ram/
Ok_Airline_5772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9jzzl
false
null
t3_1q9jzzl
/r/LocalLLaMA/comments/1q9jzzl/best_uncensored_local_llms_for_a_28vram_64ram/
false
false
self
1
null
[Project] Running quantized BERT in the browser via WebAssembly (Rust + Candle) for local Semantic Search
18
Long time lurker, first time poster. I wanted to share a project I've been working on to implement **client-side semantic search** without relying on Python backends or ONNX Runtime. The goal was to build a tool to search through WhatsApp exports semantically (finding messages by meaning), but strictly **local-first** (no data egress). I implemented the entire pipeline in **Rust** compiling to **WebAssembly**. **The Stack & Architecture:** * **Inference Engine:** Instead of onnxruntime-web, I used [**Candle**](https://github.com/huggingface/candle) (Hugging Face's minimalist ML framework for Rust). * **Model:** sentence-transformers/all-MiniLM-L6-v2. * **Quantization:** Loading the model directly in Wasm. * **Vector Store:** Custom in-memory vector store implemented in Rust using a flattened Vec<f32> layout for cache locality during dot product calculations. **Why Rust/Candle over ONNX.js?** I found that managing the memory lifecycle in Rust + Wasm was cleaner than dealing with JS Garbage Collection spikes when handling large tensor arrays. Plus, candle allows dropping unnecessary kernels to keep the Wasm binary size relatively small compared to shipping the full ONNX runtime. **Performance:** * **Initialization:** \~1.5s to load weights and tokenizer (cached via IndexedDB afterwards). * **Inference:** Computes embeddings for short texts in <30ms on a standard M4 Air. * **Threading:** Offloaded the Wasm execution to a **Web Worker** to prevent the main thread (React UI) from blocking during the tokenization/embedding loop. **Code:** The repo is open source (MIT). The core logic is in the /core folder (Rust). **GitHub:** [https://github.com/marcoshernanz/ChatVault](https://github.com/marcoshernanz/ChatVault) **Demo:** You can try the WASM inference live here (works offline after load): [https://chat-vault-mh.vercel.app/](https://chat-vault-mh.vercel.app/) I'd love to hear your thoughts on using Rust for edge inference vs the traditional TF.js/ONNX route!
2026-01-10T23:21:19
https://v.redd.it/by69mgl0vlcg1
JellyfishFar8435
v.redd.it
1970-01-01T00:00:00
0
{}
1q9j0r8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/by69mgl0vlcg1/DASHPlaylist.mpd?a=1770679308%2CMzVjNjY3MTQ4NmViZGY1YTFhOGI1MjFmNTYxZGFlNTliNmNjYWFjMmJjMTYzN2RmZWUwN2IyMmM1N2MwNWJlMw%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/by69mgl0vlcg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/by69mgl0vlcg1/HLSPlaylist.m3u8?a=1770679308%2CNGE3ZjNmZjBjN2M3YTA4MDI4ODAxYzlhODkxYTY5MTA4OTAxODMyOGY2MGFmMDA2OTI1YTM3ZWUwYjU0NDVhNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/by69mgl0vlcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1856}}
t3_1q9j0r8
/r/LocalLLaMA/comments/1q9j0r8/project_running_quantized_bert_in_the_browser_via/
false
false
https://external-preview…bd76fa4bb5269ec5
18
{'enabled': False, 'images': [{'id': 'ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=108&crop=smart&format=pjpg&auto=webp&s=9eebb7fb2d73239d0cadb03f44a3032894a5b3ab', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=216&crop=smart&format=pjpg&auto=webp&s=374ca5f058734d8841f85f9d9dd77896eec5b661', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=320&crop=smart&format=pjpg&auto=webp&s=cdf7a9cbad694ddd15ba34c4391dfcf4ab58780b', 'width': 320}, {'height': 372, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=640&crop=smart&format=pjpg&auto=webp&s=a4a4639b291a860a9c22a7504b0d398dbce325a4', 'width': 640}, {'height': 558, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=960&crop=smart&format=pjpg&auto=webp&s=1cba4c33e4bad43a2ad4621714b86d72fe4f9ec0', 'width': 960}, {'height': 628, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8d9dfc89b303958e2adcc7b4becceb4ec5df43fd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZDNpemFtbDB2bGNnMV82M8AjMZW4dYrOSYG00DJTbyxBApD1BqJxrMXf5lMv.png?format=pjpg&auto=webp&s=19442951e01ab14a9481822c5437d6756e4e0d3b', 'width': 1856}, 'variants': {}}]}
Model: cerebras/GLM-4.7-REAP-268B-A32B incoming!
188
Can't wait! [https://huggingface.co/cerebras/GLM-4.7-REAP-268B-A32B](https://huggingface.co/cerebras/GLM-4.7-REAP-268B-A32B)
2026-01-10T23:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1q9io50/model_cerebrasglm47reap268ba32b_incoming/
LegacyRemaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9io50
false
null
t3_1q9io50
/r/LocalLLaMA/comments/1q9io50/model_cerebrasglm47reap268ba32b_incoming/
false
false
self
188
{'enabled': False, 'images': [{'id': 'y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=108&crop=smart&auto=webp&s=cccff0df8e5b0f340dd76c193e5f67bf0230f020', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=216&crop=smart&auto=webp&s=8ffb60e06a730b1bccd93a2dad4a291780dda8b0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=320&crop=smart&auto=webp&s=734555c87dc0369a274a3d2235fd42db7f6ab716', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=640&crop=smart&auto=webp&s=0627113578efe72da6f1f11a9da0d8c4ccbb85c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=960&crop=smart&auto=webp&s=125d78f47ff252cfc833b7280ac033361c5c71f4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?width=1080&crop=smart&auto=webp&s=0cca1b0cfb8c992f682832c2c517a16ef063d71f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/y20zPcVn1YoJNLsNvWJycDP2KRksf_5NXRQUaEtPpEA.png?auto=webp&s=64aa2c341a7c7ad931e43ed7bfe2495940c59f80', 'width': 1200}, 'variants': {}}]}
Is Unsloth Dead? 3.5x Faster LLM Fine-Tuning with Chronicals
0
2026-01-10T22:37:04
https://www.youtube.com/watch?v=nsY989ZRUU4
JosefAlbers05
youtube.com
1970-01-01T00:00:00
0
{}
1q9hyqq
false
{'oembed': {'author_name': 'Josef Albers', 'author_url': 'https://www.youtube.com/@JosefAlbers-f4q', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/nsY989ZRUU4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="3.5x Faster LLM Fine-Tuning with Chronicals"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/nsY989ZRUU4/hqdefault.jpg', 'thumbnail_width': 480, 'title': '3.5x Faster LLM Fine-Tuning with Chronicals', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1q9hyqq
/r/LocalLLaMA/comments/1q9hyqq/is_unsloth_dead_35x_faster_llm_finetuning_with/
false
false
default
0
null
Erik Zahaviel Bernstein "Structured Intelligence" - anyone actually tested this rigorously?
1
Keep seeing drama about this person's AI work but can't find serious technical evaluation, just Reddit fights. Before dismissing it as a scam, I checked the legal claims against him: \- Florida vexatious litigant registry: [https://www.flcourts.gov/Resources-Services/Court-Improvement/Family-Courts/Vexatious-Litigants](https://www.flcourts.gov/Resources-Services/Court-Improvement/Family-Courts/Vexatious-Litigants) (not found) \- Palm Beach court records: [https://www.mypalmbeachclerk.com/](https://www.mypalmbeachclerk.com/) (no injunctions) So the smear campaign appears fabricated based on database verification. Which makes me wonder - is there actually something to the "Structured Intelligence" claims or is this all noise? Has anyone done systematic testing? Not just "I ran the prompt and got a weird response" but actual comparative analysis across models? If the legal attacks are fake, maybe the technical claims deserve actual evaluation instead of dismissal by association?
2026-01-10T22:32:36
https://www.reddit.com/r/LocalLLaMA/comments/1q9huyx/erik_zahaviel_bernstein_structured_intelligence/
quietobserver96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9huyx
false
null
t3_1q9huyx
/r/LocalLLaMA/comments/1q9huyx/erik_zahaviel_bernstein_structured_intelligence/
false
false
self
1
null
I built an end-to-end local LLM fine-tuning GUI for M series macs
23
Just wanted to share a tool I’ve been working on to make local fine-tuning on M series Macs a bit less painful and manual. Essentially it wraps Apple’s MLX framework, so it runs native on M-series chips. The goal of this was to include the whole end-to-end local LLM workflow all within a GUI. Here are the features I put in: * Data Prep- You can drag and drop CSV or JSONL files to clean/format them. I also added a local PII scrubber to strip names/emails from datasets before training. * Fine-Tuning- UI for LoRA/QLoRA. You can tweak learning rates, epochs, rank, etc * Inference- Built-in chat interface to test your Fine Tuned model adapters against the base model * Models- One-click download for open source LLMs, or you can "add a model" if you have local model rates Repo is here if you want to check it out: [https://github.com/rileycleavenger/Silicon-Studio](https://github.com/rileycleavenger/Silicon-Studio) Feel free to contribute or open any issues on the repo.
2026-01-10T22:31:40
https://www.reddit.com/r/LocalLLaMA/comments/1q9hu43/i_built_an_endtoend_local_llm_finetuning_gui_for/
riman717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9hu43
false
null
t3_1q9hu43
/r/LocalLLaMA/comments/1q9hu43/i_built_an_endtoend_local_llm_finetuning_gui_for/
false
false
self
23
{'images': [{'source': {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?auto=webp&s=2cea5cee33416e134dfe39dbafa5340213d408d4', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=108&crop=smart&auto=webp&s=65c89fd8a4c9256ae5555efe65f0a4a3c01cbe60', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=216&crop=smart&auto=webp&s=c27ca071bbf38711f602c2d9001c6f9caa109f69', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=320&crop=smart&auto=webp&s=e6c9c1c2989d3c587fd7449cb4cfdb11da49372b', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=640&crop=smart&auto=webp&s=0ab155327d0ca952223e47522fdf2814be0ec806', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=960&crop=smart&auto=webp&s=ff210c0cc1e37ca513cd86730f2b0af2d72a730a', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4.png?width=1080&crop=smart&auto=webp&s=061d561354089cbc5d72e073d96c4d9e81334992', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'JLscfupIe4f7rDqG0XFaf_o6BCPhrERBlrdL5UkLQT4'}], 'enabled': False}
NPU support for distributed inference is the next logical step
0
We’ve seen some great progress recently with llama.cpp and OpenVINO getting LLMs running on Intel and AMD NPUs. It’s finally making the "AI PC" hardware useful for local use. My question is: why haven't we pushed this into the distributed/volunteer space yet? Projects like Petals or the different AI Hordes are great, but they still feel very GPU-heavy. If we can optimize 4-bit or 8-bit quantized models to run on these NPUs, we could have a massive swarm of consumer laptops contributing to a global inference grid without the thermal/power issues of mobile GPUs. Has anyone tried bridging NPU-based local inference (like the ipex-llm or Ryzen AI stuff) into a peer-to-peer worker node? I feel like the tech is there, it just needs the "volunteer computing" wrapper.
2026-01-10T22:31:27
https://www.reddit.com/r/LocalLLaMA/comments/1q9htxp/npu_support_for_distributed_inference_is_the_next/
Putrid_Draft378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9htxp
false
null
t3_1q9htxp
/r/LocalLLaMA/comments/1q9htxp/npu_support_for_distributed_inference_is_the_next/
false
false
self
0
null
Are sampling parameters tuning pegs or volume knobs?
0
I finally realized my discomfort: top-k/p/temperature are treated like tuning pegs, but they are closer to volume knobs and effects. They don’t alter the model’s internal structure, only how its output distribution is exposed. Yet we casually call this "performance tuning"
2026-01-10T22:29:58
https://www.reddit.com/r/LocalLLaMA/comments/1q9hslh/are_sampling_parameters_tuning_pegs_or_volume/
No_Sheepherder9215
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9hslh
false
null
t3_1q9hslh
/r/LocalLLaMA/comments/1q9hslh/are_sampling_parameters_tuning_pegs_or_volume/
false
false
self
0
null
Instruction following LLM?
0
is there a way to make the LLM follow instructions better? Or a LLM that always follows instructions? [Example of what I want to avoid](https://preview.redd.it/ltdjfpy9klcg1.png?width=1168&format=png&auto=webp&s=c87e0caf8a4588282429c7bc2a6f221f4a52f5e5)
2026-01-10T22:21:18
https://www.reddit.com/r/LocalLLaMA/comments/1q9hkxh/instruction_following_llm/
thegamingnot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9hkxh
false
null
t3_1q9hkxh
/r/LocalLLaMA/comments/1q9hkxh/instruction_following_llm/
false
false
https://b.thumbs.redditm…QGbUV2OgS-CA.jpg
0
null
[Release] K3 MCP Toolbox & Logicware: Windows-Native FastMCP Tools & Advanced Agentic Patterns
0
Repository: Fandry96/k3-mcp-toolbox-public License: MIT 👋 Hello r/LocalLLMA (and r/ClaudeAI) I am excited to announce the open-source release of K3 MCP Toolbox and Antigravity Logicware. These are the core Model Context Protocol (MCP) servers we use internally to power our "K3 Firehose" Agentic IDE on Windows. We've extracted them into a clean, standalone repository for the community. 🛠️ What's Inside? The repository delivers 3 core pillars: 1. K3 MCP Toolbox (/k3-mcp-toolbox) A Windows-first MCP server implementation designed for stability and OS integration. FastMCP Server: a lightweight, async server base. Process Management: kill_zombies tool for cleaning up stuck agent processes. DevTools Bridge: A dedicated adapter for connecting agents to Chrome DevTools Protocol (CDP). Clipboard & System: Native access tools. 2. Antigravity Logicware (/antigravity-logicware) Advanced cognitive protocols for more capable agents. Sequential Thinking: A Python implementation of the popular "Chain of Thought" protocol (Protocol 310), allowing agents to dynamically plan, revise, and branch their reasoning steps. MRL Indexer: A Matryoshka Representation Learning (MRL) indexer for variable-size vector retrieval (based on Kusupati et al. 2022). 3. Docker MCP Gateway (/docker-examples) Break the "Subprocess Cap". We include a reference architecture for the Docker MCP Gateway. Run unlimited tools in isolated containers. Dynamic routing via a single HTTP gateway. No more "Dependency Hell" on your host machine. 🚀 Getting Started # Clone the repository git clone https://github.com/Fandry96/k3-mcp-toolbox-public.git # Install dependencies pip install -r k3-mcp-toolbox/requirements.txt Configuration instructions for claude_desktop_config.json are included in the README. 🤝 Contribution We are looking for feedback on the DevTools Bridge and Sequential Thinking implementation. Pull requests are welcome! Maintained by Fandry96 & The Antigravity Team
2026-01-10T22:14:51
https://i.redd.it/mfstkb70jlcg1.gif
fandry96
i.redd.it
1970-01-01T00:00:00
0
{}
1q9hf7h
false
null
t3_1q9hf7h
/r/LocalLLaMA/comments/1q9hf7h/release_k3_mcp_toolbox_logicware_windowsnative/
false
false
default
0
{'enabled': True, 'images': [{'id': 'mfstkb70jlcg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=108&crop=smart&format=png8&s=44f794ed641bb0b0d0073eda1f09553686b0580d', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=216&crop=smart&format=png8&s=ead9c77f2979d9c8bb85e00174c0a4a57771b055', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=320&crop=smart&format=png8&s=3f3d54d76d3860ae75f07c3d44b21bc14f2a1c84', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=640&crop=smart&format=png8&s=70f57aa9a480058217fd26fdedb8dfbca604acb4', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=960&crop=smart&format=png8&s=42efd6f63d556fd438cbd409fe04fb0d212c7129', 'width': 960}, {'height': 618, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=1080&crop=smart&format=png8&s=9f65f380cf06a138a6092961569053e29e76d5a7', 'width': 1080}], 'source': {'height': 1976, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?format=png8&s=3004a1f611d3ca338927decb6e198790b36dc208', 'width': 3452}, 'variants': {'gif': {'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=108&crop=smart&s=7aa8fff68e33a8ff4d8e3b35e4176bc2125f9819', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=216&crop=smart&s=f913af79f0da01cb280900395d9ed0f626044f48', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=320&crop=smart&s=1fa15a17ed055cb744f38c5558b92770ba226f6f', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=640&crop=smart&s=de71be1ad3048870873ee4fe3688d60c2392daf5', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=960&crop=smart&s=c90b5a1bef1121536f189f7d70170a5a1813891d', 'width': 960}, {'height': 618, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=1080&crop=smart&s=a2aaa5759864542f97a266768905e9255e66dedb', 'width': 1080}], 'source': {'height': 1976, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?s=f7f35d30821cf5a9f4b6941b8a9a5d69b966e853', 'width': 3452}}, 'mp4': {'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=108&format=mp4&s=5005caa7ac7564104ade8eec625a2036da8e5c94', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=216&format=mp4&s=3a34a0e7c317a9cc34e03a924788398622b34742', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=320&format=mp4&s=c367e55a75593d0fdf182c5fc17c26ab9a5908e9', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=640&format=mp4&s=c48cfddbc38083a9272caacc84ee6816345622c1', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=960&format=mp4&s=c1f423eac757b33659e640f2579ef93a5ecf512b', 'width': 960}, {'height': 618, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?width=1080&format=mp4&s=be0e1f615a1ca0b10ca789e3c7451f4aaedc2c23', 'width': 1080}], 'source': {'height': 1976, 'url': 'https://preview.redd.it/mfstkb70jlcg1.gif?format=mp4&s=f6f10ed5ddfc39f3734c17ab11c62beaa0be5197', 'width': 3452}}}}]}
Developers: what code orchestration tools do you swear by?
13
I’ve been loving code orchestration lately. There’s been an explosion of open-source multi-agent orchestration projects on GitHub, and it’s exciting to watch. Here is a list of tools come across. 1. [https://github.com/BloopAI/vibe-kanban](https://github.com/BloopAI/vibe-kanban) 2. [https://www.conductor.build/](https://www.conductor.build/) 3. [https://github.com/pedramamini/Maestro](https://github.com/pedramamini/Maestro) 4. [https://github.com/AndyMik90/Auto-Claude](https://github.com/AndyMik90/Auto-Claude) 5. [https://github.com/AutoMaker-Org/automaker](https://github.com/AutoMaker-Org/automaker) 6. [https://github.com/covibes/zeroshot/](https://github.com/covibes/zeroshot/) 7. [https://github.com/preset-io/agor](https://github.com/preset-io/agor)  8. [https://github.com/superset-sh/superset](https://github.com/superset-sh/superset) 9. [https://github.com/Ido-Levi/Hephaestus](https://github.com/Ido-Levi/Hephaestus) Tools i personally tried are auto claude, agor, automaker, vibe-kanban and Hephaestus. So far agor and auto claude have been my favorite. I'm waiting for superset to support linux/windows and I think im going to try zeroshot. What orchestration tools genuinely improved your dev workflow?
2026-01-10T21:54:34
https://www.reddit.com/r/LocalLLaMA/comments/1q9gwpx/developers_what_code_orchestration_tools_do_you/
formatme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9gwpx
false
null
t3_1q9gwpx
/r/LocalLLaMA/comments/1q9gwpx/developers_what_code_orchestration_tools_do_you/
false
false
self
13
{'enabled': False, 'images': [{'id': 'QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=108&crop=smart&auto=webp&s=c8becf71f63303eed4a526fac946b577fc9f1136', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=216&crop=smart&auto=webp&s=1bc4e0f23c2d77159ce84b088e96b4fe5e2d45b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=320&crop=smart&auto=webp&s=4f170ef9d33f42ebd1e63e713b2d4eb9c40aa4d6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=640&crop=smart&auto=webp&s=64ff7ae9901bf22ee74ec1e20887a13ae0a517af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=960&crop=smart&auto=webp&s=2dc77823396ed260553e5301c9fd447fcc5108cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?width=1080&crop=smart&auto=webp&s=d6e3fcff9fcd83298903525c1d719e7c44463da7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QCjfBDgtsqHAK0MNlshsKxK27voPtT4U27_0Uex6Ddk.png?auto=webp&s=b3317aa5149430e1cc069cd1ed7a2ea55e1b249e', 'width': 1200}, 'variants': {}}]}
Native GTK Linux LLM client now supporting local models
4
2026-01-10T21:51:52
https://i.redd.it/fx2j8vswelcg1.png
rabf
i.redd.it
1970-01-01T00:00:00
0
{}
1q9gubc
false
null
t3_1q9gubc
/r/LocalLLaMA/comments/1q9gubc/native_gtk_linux_llm_client_now_supporting_local/
false
false
default
4
{'enabled': True, 'images': [{'id': 'fx2j8vswelcg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=108&crop=smart&auto=webp&s=0a346adf98bb40b354c75b05fea2f8bf9b35ad38', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=216&crop=smart&auto=webp&s=5117d36e3e7daedda89a58dad920e52356f5480e', 'width': 216}, {'height': 260, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=320&crop=smart&auto=webp&s=c59fa1671830796507360ba28856682d6a48b813', 'width': 320}, {'height': 521, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=640&crop=smart&auto=webp&s=c8f22beebd45039709afa482c8bdb316ab4f8f1f', 'width': 640}, {'height': 782, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=960&crop=smart&auto=webp&s=de24c007a9531bd94758fbb3b64c10d5c22387f0', 'width': 960}, {'height': 880, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?width=1080&crop=smart&auto=webp&s=979e201e618684efc2c94c40889f7352ae11091d', 'width': 1080}], 'source': {'height': 1199, 'url': 'https://preview.redd.it/fx2j8vswelcg1.png?auto=webp&s=835140369cccf5884d7aba6069522042e4d8a285', 'width': 1471}, 'variants': {}}]}
Best models to describe or extract text for comics?
2
I would like if there's a way to make accessibility for comics possible for those who rely on hearing. Choosing a TTS to use isn't an issue, but what would be the best tools to extract text from comic book pages? And are there models that are able to recognize the order of panels and explain the visual progression of events?
2026-01-10T21:36:19
https://www.reddit.com/r/LocalLLaMA/comments/1q9gg83/best_models_to_describe_or_extract_text_for_comics/
FpRhGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9gg83
false
null
t3_1q9gg83
/r/LocalLLaMA/comments/1q9gg83/best_models_to_describe_or_extract_text_for_comics/
false
false
self
2
null
How do you decide which layers to quantize in LLMs (AWQ / GPTQ)? Any principled method + eval tips?
3
Hi everyone , I’m learning LLM quantization and I’m a bit confused about how people decide which layers/tensors to quantize and what the “standard practice” is. I’m experimenting with AWQ and GPTQ on different open models, and I want to understand the layer-wise decisions more than just “run the tool and accept the output”. What I’m confused about • When people say “quantize the model”, are we usually quantizing all linear layers’ weights (e.g., Q/K/V/O proj, MLP up/down/gate), or do people commonly skip certain layers? • Is there a principled way to decide which layers are more sensitive to quantization error? • I also see people mention quantizing “tensors” — I assume this means weight tensors (W matrices) vs activations. • In AWQ/GPTQ, what exactly is being quantized by default (weights only? activations?) • If activations aren’t quantized, what’s the typical reason some layers still get skipped? What I’m looking for 1. Rules of thumb / best practices • e.g., skip embeddings? skip lm\_head? keep first/last layer higher precision? keep norms in FP16? etc. 2. A well-defined method / recipe • Something like: run calibration → measure per-layer error → choose bit-width per layer (mixed precision) • Does anyone have a reference implementation or blog post that explains this clearly? 3. How to evaluate layer-wise choices • If I quantize all layers vs skip some layers, what’s the standard evaluation? • Perplexity on WikiText2? downstream tasks? a quick harness people recommend? • Any tools to measure per-layer impact (e.g., layer-wise reconstruction error / sensitivity plots)?
2026-01-10T21:12:34
https://www.reddit.com/r/LocalLLaMA/comments/1q9ftwm/how_do_you_decide_which_layers_to_quantize_in/
No_Progress_5399
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9ftwm
false
null
t3_1q9ftwm
/r/LocalLLaMA/comments/1q9ftwm/how_do_you_decide_which_layers_to_quantize_in/
false
false
self
3
null
What AI models can I run locally on my PC with Ollama?
0
Hey everyone, I’m pretty new to local AI and still learning, so sorry if this is a basic question. I can’t afford a ChatGPT subscription anymore due to financial reasons, so I’m trying to use **local models** instead. I’ve installed **Ollama**, and it works, but I don’t really know which models I should be using or what my PC can realistically handle. **My specs:** * Ryzen 9 5900X * RTX 3080 (10GB VRAM) * 32GB RAM * 2TB NVMe SSD I’m mainly curious about: * Which models run well on this setup * What I *can’t* run * How close local models can get to ChatGPT * If things like web search, fact-checking, or up-to-date info are possible locally (or any workarounds) Any beginner advice or model recommendations would really help. Thanks 🙏
2026-01-10T20:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1q9femi/what_ai_models_can_i_run_locally_on_my_pc_with/
Kitchen-Patience8176
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9femi
false
null
t3_1q9femi
/r/LocalLLaMA/comments/1q9femi/what_ai_models_can_i_run_locally_on_my_pc_with/
false
false
self
0
null
llama.cpp hanging again and again
1
I use llama.cpp since the beginning. I have used it on linux, on windows, on old laptops and on brand new workstations. When the requests are sent via SillyTavern, llama.cpp always hangs during prompt evaluation. It stops at arbitrary points and requires further requests before completing the evaluation and start generating. When it starts generating, I have never had a single glitch. When the model is fully in VRAM, this issue happens very seldom. Are there any mitigations that can be used to avoid this behaviour?
2026-01-10T20:54:15
https://www.reddit.com/r/LocalLLaMA/comments/1q9fck0/llamacpp_hanging_again_and_again/
insulaTropicalis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9fck0
false
null
t3_1q9fck0
/r/LocalLLaMA/comments/1q9fck0/llamacpp_hanging_again_and_again/
false
false
self
1
null
Abliterated Model Hosting Recs
1
Many of us here have pretty great hardware. Myself included. So I keep flexing all my locally-run abliterated models to my friends, only for them to inevitably ask how they can chat with said models themselves. Unfortunately, the average person has a computer that can hardly run Google Chrome. Their only options for local models are heavily quantized 4B variants. And quantization tends to break most abliterations so it defeats the purpose. Curious if anyone knows of a site that hosts any of the newer abliterated models, like Gemma normpreserve biprojected or anything made with Heretic v1.1.0. Venice is the only one I know of, but they use ancient models that aren't particularly smart imo, like Mistral Dolphin. SillyTavern has AI Horde, but I doubt most people can figure out how to use that either. And RunPod is probably overkill. Ik this is isn't a very LocalLLaMA type of question, but I'd love to hear if anyone has some good site recs. Something to help the average tech-naive person dip into the world of niche open-weight LLMs.
2026-01-10T20:46:58
https://www.reddit.com/r/LocalLLaMA/comments/1q9f5m0/abliterated_model_hosting_recs/
Zestyclose839
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9f5m0
false
null
t3_1q9f5m0
/r/LocalLLaMA/comments/1q9f5m0/abliterated_model_hosting_recs/
false
false
self
1
null
What is the 'best' local model I can run on this hardware? (macbook pro)
0
Hi, it's been a long while since I ran anything locally, I want to start experiment with local models again. What's are some types of models I could run locally? I want to potentially experiment with coding, fine tuning some models on gpus for low resource languages/dsls and running locally, and maybe some agentic/tool call stuff. As well as learning, of course.
2026-01-10T20:43:18
https://i.redd.it/2dktn09l2lcg1.png
Old-School8916
i.redd.it
1970-01-01T00:00:00
0
{}
1q9f274
false
null
t3_1q9f274
/r/LocalLLaMA/comments/1q9f274/what_is_the_best_local_model_i_can_run_on_this/
false
false
default
0
{'enabled': True, 'images': [{'id': '2dktn09l2lcg1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/2dktn09l2lcg1.png?width=108&crop=smart&auto=webp&s=391894743ea048f081d885fc37b9fc9965bea32d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/2dktn09l2lcg1.png?width=216&crop=smart&auto=webp&s=c69710d0583959692ae7e9fb9d9ac6a08465614e', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/2dktn09l2lcg1.png?width=320&crop=smart&auto=webp&s=06b505a17e8d183c3097e6335c43d6e3303b6d59', 'width': 320}], 'source': {'height': 260, 'url': 'https://preview.redd.it/2dktn09l2lcg1.png?auto=webp&s=9e278fb2051cd8a2e5c1b48baec027e663e98f9f', 'width': 488}, 'variants': {}}]}
Could you link two Strix Halo AI Max 395+ together to host bigger models?
5
Say if I have 2 128Gb Strix Halo AI Max 395+, if we link together, then we might could have 256Gb in total. That means we could run bigger models. Could this be done over LAN?
2026-01-10T20:40:10
https://www.reddit.com/r/LocalLLaMA/comments/1q9ez9j/could_you_link_two_strix_halo_ai_max_395_together/
henryclw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9ez9j
false
null
t3_1q9ez9j
/r/LocalLLaMA/comments/1q9ez9j/could_you_link_two_strix_halo_ai_max_395_together/
false
false
self
5
null
Training with Claude Code Traces
1
Are there any good Claude Code trace datasets? Would be great to do some distillation with Opus 4.5 as a teacher. I’m tempted to generate my own dataset for that if it doesn’t exist. Based on a paper by OpenThoughts, it seems like problem / input selection is more effective for reasoning curation than selecting quality of responses. I could just grab a pre-baked/proven data mixture and use opus as a teacher, but i suspect those data mixtures are best for one-off problems and aren’t right for reasoning + tool traces. I know nvidia released some tool-calling datasets they used for v3 nano, so might look there. any suggestions? anyone else built this yet?
2026-01-10T20:26:33
https://www.reddit.com/r/LocalLLaMA/comments/1q9emnx/training_with_claude_code_traces/
darkdeepths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9emnx
false
null
t3_1q9emnx
/r/LocalLLaMA/comments/1q9emnx/training_with_claude_code_traces/
false
false
self
1
null
Observations on reasoning persistence in mid-sized open LLMs
1
I’ve been working with several open-weight language models in the 7B–13B parameter range and noticed consistent differences in how long coherent reasoning is preserved under token pressure. In particular, models fine-tuned with explicit instruction chaining or multi-step supervision seem to maintain logical structure significantly longer than models optimized primarily for short, direct responses. This becomes especially visible in tasks that require intermediate abstraction, such as multi-constraint reasoning or conditional planning, where some models collapse into pattern completion much earlier than expected. I’m curious whether others have observed similar behavior and whether you think this effect is driven more by architectural choices, fine-tuning methodology, or dataset composition. Interested in any empirical observations or references.
2026-01-10T20:16:19
https://www.reddit.com/r/LocalLLaMA/comments/1q9ed30/observations_on_reasoning_persistence_in_midsized/
Lorenzo_Kotalla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9ed30
false
null
t3_1q9ed30
/r/LocalLLaMA/comments/1q9ed30/observations_on_reasoning_persistence_in_midsized/
false
false
self
1
null
GLM 4.7 is... crazy
0
https://preview.redd.it/…pen-source model
2026-01-10T20:08:34
https://www.reddit.com/r/LocalLLaMA/comments/1q9e5q9/glm_47_is_crazy/
No-Selection2972
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9e5q9
false
null
t3_1q9e5q9
/r/LocalLLaMA/comments/1q9e5q9/glm_47_is_crazy/
false
false
https://a.thumbs.redditm…BhUr1zZj-kL0.jpg
0
null
Workflow: Bypassing 2FA/Captchas for local web agents (Llama 3/Browser Use) by syncing Chrome cookies
5
I've been building local agents using Llama 3 and `browser-use` to automate some tasks on LinkedIn and Gmail. The biggest headache I hit was that the agents kept getting blocked by login screens or 2FA prompts. I didn't want to use paid APIs, and hardcoding cookies into my `.env` file kept breaking because the sessions would expire every few days. I realized the easiest fix was to just "borrow" the active session from my local Chrome browser. I wrote a quick Python SDK that: 1. Grabs the encrypted cookies from your local Chrome profile. 2. Decrypts them locally. 3. Injects them into Playwright/Selenium so the agent starts "logged in." It’s working well for my Llama 3 + Playwright setup. It’s open source if anyone else is hitting the same wall with their local agents. **Repo:** [https://github.com/jacobgadek/agent-auth](https://github.com/jacobgadek/agent-auth) Has anyone found a better way to handle session persistence for long-running local agents?
2026-01-10T19:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1q9dvkv/workflow_bypassing_2facaptchas_for_local_web/
Bubbly_Gap6378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9dvkv
false
null
t3_1q9dvkv
/r/LocalLLaMA/comments/1q9dvkv/workflow_bypassing_2facaptchas_for_local_web/
false
false
self
5
{'enabled': False, 'images': [{'id': 'bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=108&crop=smart&auto=webp&s=4a888c9316ada582c6a88b52bffed722442f8d2b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=216&crop=smart&auto=webp&s=ebde1915d24d50908573b882ec9f2b645bff7685', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=320&crop=smart&auto=webp&s=99b503ae8cc845be5e62bda1a14459116a24059b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=640&crop=smart&auto=webp&s=b78d6add5aab0fe0325239014aa40ef5ee3de2a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=960&crop=smart&auto=webp&s=1530cc007ed0af6be5bcfa5b483467b2e8a62d82', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?width=1080&crop=smart&auto=webp&s=1c60e22aca4348289445487dbeeadc0de137e3b7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bJorXh-UBNyS_kP5RRueXbcvW7d3f3ouXA6TI57A-xw.png?auto=webp&s=3d2be1793c5607a945fb371d8851e8b535886b02', 'width': 1200}, 'variants': {}}]}
Tecent's WeDLM theoretically allows 3-10x TG for Memory-Constrained Devices (E.g. RAM, CPU/GPU Hybrid Inference)
14
So I was thinking about [Tecent's WeDLM](https://wedlm.github.io/) architecture. Long story short: they post train a normal auto-regressive llm into a diffusion model that predicts the next \~2-14 tokens (depending on complexity of the task, typical for code is like 3) at a threshold confidence per forward pass. In a memory constrained environment, say DDR5/DDR4 and CPU + GPU hybrid setups, the thing we're all waiting on is weights to load in and out of our compute. Unless you are doing very sophisticated work with agentic tasks in parallel, you (we) are all likely not using that compute fully. This WeDLM arch essentially does multi-token prediction in a forward pass with a KV cache just like auto-regressive MLA, and has similar quality output (i.e. almost identical to single token auto-regressive results). The reason DLM's can be faster, is they can load say 1/2 of the weights into VRAM, and do that part of the pass for say 5 tokens, and then load the next 1/2 of the weights and do that part of the pass on those 5 tokens. So: in one memory load of all the weights, we have calculated 5 tokens worth of information, instead of just 1. The reason it's variable (2-14) is that confidence is task specific. They offer counting from 1-100 as an example of a dead simple task and that's where that 14 tokens per forward pass max is achieved. WeDLM seems to be a post-training solution, and seems like it would work best for Dense models since the same weights are used for all passes - say a Qwen3-32B running at 3x normal RAM fallback inference speeds. Has anyone else noticed this as a bottleneck solution for Memory Constrained (i.e. 90% of local llama users) compute, and is there a reason I'm wrong on this assumption, and has LLama.cpp started work yet on supporting WeDLM or DLM's in general? I would expect this to allow Dense models to get a bit closer to their MOE counterparts in speed, while keeping their quality higher. Finally, DLM's work by requiring the predicted tokens reach a certain confidence interval before accepting the token - I suspect in some situations, you could get away with tuning down that dial and effectively running a "flash" version of the same model, with identical weights, and do so even within the same inference pass (technically). Sounds like a great improvement for local inference - 2-5x token generation speeds for dense models.
2026-01-10T19:51:43
https://www.reddit.com/r/LocalLLaMA/comments/1q9dq8b/tecents_wedlm_theoretically_allows_310x_tg_for/
ImJustHereToShare25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9dq8b
false
null
t3_1q9dq8b
/r/LocalLLaMA/comments/1q9dq8b/tecents_wedlm_theoretically_allows_310x_tg_for/
false
false
self
14
null
Name That Part: 3D Part Segmentation and Naming
3
First large-scale simultaneous 3D part segmentation and naming model. Also releasing largest 3D part dataset.
2026-01-10T19:50:25
https://name-that-part.github.io/
Left-Baby-8805
name-that-part.github.io
1970-01-01T00:00:00
0
{}
1q9dozp
false
null
t3_1q9dozp
/r/LocalLLaMA/comments/1q9dozp/name_that_part_3d_part_segmentation_and_naming/
false
false
default
3
null
Built a personal knowledge system with nomic-embed-text + LanceDB - 106K vectors, 256ms queries
15
Embedded 3 years of my AI conversations (353K messages) to make them searchable by concept, not just keywords. **Stack:** * nomic-embed-text-v1.5 (768 dims, runs on Apple Silicon MPS) * LanceDB for vector storage * DuckDB for analytics **Performance:** * 106K vectors in 440MB * 256ms semantic search * 13-15 msg/sec embedding throughput on M4 Mac **Key learning:** Started with DuckDB VSS extension. Accidentally created duplicate HNSW indexes - ended up with 14GB for 300MB of actual data. Migrated to LanceDB, same vectors in 440MB. 32x smaller. Open source: [https://github.com/mordechaipotash/intellectual-dna](https://github.com/mordechaipotash/intellectual-dna)
2026-01-10T19:41:52
https://www.reddit.com/r/LocalLLaMA/comments/1q9dh54/built_a_personal_knowledge_system_with/
Signal_Usual8630
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9dh54
false
null
t3_1q9dh54
/r/LocalLLaMA/comments/1q9dh54/built_a_personal_knowledge_system_with/
false
false
self
15
{'enabled': False, 'images': [{'id': 'mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=108&crop=smart&auto=webp&s=0f28aad9a291130127aa3b65015f26350851bce8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=216&crop=smart&auto=webp&s=3c184fa026f755c58bfc7674bc7ea7b5eaf24056', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=320&crop=smart&auto=webp&s=a7f5abd2e3cff46e52375923eb707dbbb1105af8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=640&crop=smart&auto=webp&s=9db11464fb0dc4307dfbe95c1e1932f0a9faed86', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=960&crop=smart&auto=webp&s=5b4fcc8f17902a41681903c8d6117033bc5699f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?width=1080&crop=smart&auto=webp&s=84d002e76f53a092c5b9942fcf05b4582e189f52', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mRiUiYfwVE8WVttG8CjZ9XVjfr4vXLZNMXz_A6LNDcA.png?auto=webp&s=1f6c6ce630fc080791eeb1a13e0d83758f4aaa59', 'width': 1200}, 'variants': {}}]}
Stability focused AI platform devs here. Does anyone know from the the info in the linked post whether there's a real GitHub breach of ToS behind our suspension?
0
[https://www.reddit.com/r/comfyuiAudio/comments/1q9582c/staboorujeffreys\_github\_account\_has\_been\_suspended/](https://www.reddit.com/r/comfyuiAudio/comments/1q9582c/staboorujeffreys_github_account_has_been_suspended/) Many hardened wizards mingle here. We're hoping for some assistance. Some may have no preconception of the StabooruJeffrey project (a Hard Fork of ComfyUI focused on stability and harmonising the custom node ecosystem) which has had it's GitHub account suspended. It doesn't in the grand scheme of things really matter whether you know of the StaboorruJeffrey(ComfyAudio) project or not. This isn't a plug for the project. We need some advice from more knowledgeable folk. Some may have seen our posts here on the sub previously, or on the three other subs now related to the StabooruJeffrey project. Some may be fully aware of the project, and some may even hate our guts. If you're in that faction and happen to be readig this, can we ask you to set aside whatever grievances you may have about our bulldadaesque approach to marketing the project, and if you happen to know of good cause for why the StabooruJeffrey GH account might have been suspended, kindly fill us in. We're artists and casual devs, not masters of the GitHub realms. There could be something glaringly obvious we've mentioned in the linked post above that explains why we would have been auto-booted from GH that we're completely unaware of. We're kind of hoping to have made some glaringly obvious blunder we're not sussed about, and then we can take our licks being universally humiliated for not realising how we've goofed. Then we can wipe the egg of our faces, hopefully have the account reinstated, and get on with building the project. However if there's not a reasonable explanation the wise folk here could profer for what may have triggered the suspension, there are potential implications for everyone that we anticipate some of you will recognise. Can anyone help? Have we goofed and triggered an auto-suspend over some unwitting ToS breach? Or would it perhaps be more indicative of our account having been targetted for suspension for other reasons? Thanks.
2026-01-10T19:34:33
https://www.reddit.com/r/LocalLLaMA/comments/1q9da9c/stability_focused_ai_platform_devs_here_does/
MuziqueComfyUI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9da9c
false
null
t3_1q9da9c
/r/LocalLLaMA/comments/1q9da9c/stability_focused_ai_platform_devs_here_does/
false
false
self
0
null
Offloom Update, private web searching RAG added. My personal, locally powered, privacy first chatbot that uses small language models yet still somehow returns quality answers. Apparently SLMs paired with agentic behavior can compete with chatGPT
4
I've been working on my own private chatbot for awhile now. I wanted a private, locally hosted chatbot that I could use in place of chatGPT. I already have document RAG working very well, and figured the next logical step was to bundle a private web searching framework alongside it. I'm a windows user, so searXNG isn't necessarily embeddable into this application while still allowing a one click download for an end user. So I choose Whoogle instead. This is fully runnable on my 4090 (I think it would work on 12GB VRAM as well I just don't have a machine for testing that). It uses an agentic approach juggling between multiple models to ensure quality answers. The powerhouse model is Qwen 8B thinking model. Which gives surprisingly good results when context is engineered properly. Offloom is now capable of document and web search RAG as well as image generation using comfyUI as a sidecar process. I've evolved the idea away from just simply a chatbot and want to create a local 'entertainment' center. So future plans include the ability to agentically generate coherent short stories, comics, music, text adventures, and who knows what else lol. *This isn't a public project. It's simply a learning platform for me to mess around with while still being pleasant to use. I wasn't convinced I'd be able to replace chatGPT up until thinking models came into being. Now quality answers happen the vast majority of the time meaning this project went from learning to something I can actually use.*
2026-01-10T19:15:51
https://v.redd.it/2084smchkkcg1
Little-Put6364
v.redd.it
1970-01-01T00:00:00
0
{}
1q9csk3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2084smchkkcg1/DASHPlaylist.mpd?a=1770664569%2CMWNiODZlMTA1ODM4YzE0MDRmMWYwMTliNTE0NDQwNmFkYjVjYjM0MDU3YTZjMDRhYmYyNjdiMzY4NjcyZGI5YQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/2084smchkkcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2084smchkkcg1/HLSPlaylist.m3u8?a=1770664569%2CYzk5ZDdmNzQ3OTk0M2ExOWU1ZDQ1Y2I0YTUxOWVkZDUxNDIwMTU3NTczMDIyZDJlOGE5OTBhNTBjYzhiNjEyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2084smchkkcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1q9csk3
/r/LocalLLaMA/comments/1q9csk3/offloom_update_private_web_searching_rag_added_my/
false
false
https://external-preview…0a5d9e9da1c93f34
4
{'enabled': False, 'images': [{'id': 'c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=108&crop=smart&format=pjpg&auto=webp&s=f427148b6f78b092cc524c043178ba65ed2b41b0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=216&crop=smart&format=pjpg&auto=webp&s=40b7c922191b6c79418b1a348250c7e1b3f4d32d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=320&crop=smart&format=pjpg&auto=webp&s=b1a03eff2d0ae419160444a810f67b963183a9e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=640&crop=smart&format=pjpg&auto=webp&s=8b594a420bebca4fcae6bd448f9d77a84c93730c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=960&crop=smart&format=pjpg&auto=webp&s=0dc050bbdcb7c71c2416dedd3a836ee2fc49e5ef', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a45c76b0db11bc4a209c2a96e0e0e1924124207c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c2o2am53Y2hra2NnMT4Y6BQ8jdJqfQ-1VABkhNdVh0RAKUk7WqR4J_3U91_y.png?format=pjpg&auto=webp&s=3f874bb3d61f2dd905b2a4a1fc1e5df8d041d153', 'width': 1920}, 'variants': {}}]}
Preview logprobs in Open WebUI
23
**What is this?** A specially crafted HTML artifact that connects back to the custom OpenAI-compatible proxy and listens to the same chunks as displayed in the UI itself, but with the logprobs data. Tokens outside of top 25% bucket are highlighted when chosen. You can find the source here: [https://github.com/av/harbor/blob/main/boost/src/modules/logprobs.py](https://github.com/av/harbor/blob/main/boost/src/modules/logprobs.py)
2026-01-10T18:44:05
https://v.redd.it/pecxh50bekcg1
Everlier
v.redd.it
1970-01-01T00:00:00
0
{}
1q9by7w
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pecxh50bekcg1/DASHPlaylist.mpd?a=1770662669%2CNmUxZDRiNjFmMTQ3MzYzNzZkYTg1YmZiZjg3YWIyYTc5NGU4ZDJmMjhjOWRmNmYwYTU1MzlkZTczYmFkYzgzOA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/pecxh50bekcg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/pecxh50bekcg1/HLSPlaylist.m3u8?a=1770662669%2CMGVmOTBmNjZhM2Y4ZDI4NGJjMTQ2ZDQ0ZWIwY2NlM2NkOTJlYjBlMjc5NGFhODQxZmNmZWEyMzBmMWU1ZGYwNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pecxh50bekcg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1q9by7w
/r/LocalLLaMA/comments/1q9by7w/preview_logprobs_in_open_webui/
false
false
https://external-preview…1d594ac2d063f616
23
{'enabled': False, 'images': [{'id': 'dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=108&crop=smart&format=pjpg&auto=webp&s=86cf6f13b5b2c229bf5491d027e91c80fa7c1ae7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=216&crop=smart&format=pjpg&auto=webp&s=4e575c95fa6c3bb8eda5012296bb62e7318c86ce', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=320&crop=smart&format=pjpg&auto=webp&s=737ee65a6f42c330d8e5851148f7467f0b9d9dcd', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=640&crop=smart&format=pjpg&auto=webp&s=ec1e487b665ba15ea1d1e45e8f58521753b264c9', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=960&crop=smart&format=pjpg&auto=webp&s=987fd8225ee917729e472030654e59d4cc5dc7b3', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3f92a4fc5c4b9d41dc9a424ad3a092d95225b3de', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/dWwycnFtMGJla2NnMaEwWMFVZEhNLMuQhxpftmmu_kTUOseThBPm8C0qkT1T.png?format=pjpg&auto=webp&s=4b1b8353ce9a45491b28b5d9e2d2a868a8ada245', 'width': 1920}, 'variants': {}}]}
Not Sure Where to Start
3
I recently purchased a pretty good laptop for a non-AI project I’m working on. Specs are: -Processor Intel® Core™ Ultra 9 275HX Processor (E-cores up to 4.60 GHz P-cores up to 5.40 GHz) -Laptop GPU 24GB GDDR7 -Memory 128 GB DDR5-4000MT/s (SODIMM)(4 x 32 GB) I’m very familiar with commercial AI products, but have almost bought clue about running local models, or even whether there would be any utility in me doing so. I am an attorney by trade, so running a local model has some appeal. Otherwise, I’m tied to fairly expensive solutions for security and confidential reasons. My question is, is it worth looking into local models to help me with my practice—maybe with automating tasks or helping with writing? I honestly have no idea whether and how to best look at a local solution. I do have some small coding experience. Anyway, I’d love some feedback.
2026-01-10T18:37:44
https://www.reddit.com/r/LocalLLaMA/comments/1q9bs8k/not_sure_where_to_start/
Psychological-Ad5390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q9bs8k
false
null
t3_1q9bs8k
/r/LocalLLaMA/comments/1q9bs8k/not_sure_where_to_start/
false
false
self
3
null