| Column | Type | Range / lengths |
|:-|:-|:-|
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–41.5k |
| created | timestamp[ns] (date) | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] (date) | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 values |
| id | string | lengths 7–7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | lengths 10–10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
Why you shouldn't trust AI
0
Maybe I'm asking too much of a model that isn't AGI, but large language models are not called factual models, and for good reason. One use case for an AI chatbot is pairing it with RAG, which works like your own search engine (you could also just use a real search engine). With RAG you build your own dataset of questions and answers and have the language model read from it. Remember, the AI is only as good as the RAG dataset: garbage in, garbage out. You can also bake data into the LLM itself with LoRA, which is essentially fine-tuning, but that means learning Python and gets too technical to be worth it.
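To make the "let the model read your own Q/A dataset" idea concrete, here is a minimal RAG sketch. The `qa_pairs` data, the lexical-overlap scoring heuristic, and the `ask_llm` placeholder are illustrative assumptions, not anything from the original post.

```python
# Minimal RAG sketch: retrieve the closest Q/A pairs, then let the LLM read them.
# The dataset, scoring heuristic, and ask_llm() stub are illustrative assumptions.

def score(query: str, text: str) -> int:
    """Crude lexical overlap; a real setup would typically use embeddings."""
    q_words = set(query.lower().split())
    return len(q_words & set(text.lower().split()))

def retrieve(query: str, qa_pairs: list[tuple[str, str]], k: int = 3):
    """Return the k question/answer pairs that best match the query."""
    return sorted(qa_pairs, key=lambda qa: score(query, qa[0] + " " + qa[1]), reverse=True)[:k]

def build_prompt(query: str, qa_pairs: list[tuple[str, str]]) -> str:
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in retrieve(query, qa_pairs))
    # Garbage in, garbage out: the answer can only be as good as this context.
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {query}\nAnswer:"

# ask_llm(build_prompt(query, qa_pairs)) would call whatever local or hosted model you use.
```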
2026-01-29T00:17:32
https://i.redd.it/ox8kmpwhl6gg1.png
Eventual-Conguar7292
i.redd.it
1970-01-01T00:00:00
0
{}
1qpsx01
false
null
t3_1qpsx01
/r/LocalLLaMA/comments/1qpsx01/why_you_shoudnt_trust_ai/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ox8kmpwhl6gg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=108&crop=smart&auto=webp&s=8364cd5975cdba16541ef2e4214d9a67802e408c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=216&crop=smart&auto=webp&s=8ae767a9150e00524822b5fc0ee065aea861a178', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=320&crop=smart&auto=webp&s=179ef98e89e57a5672b91a9653b0c417c587593f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=640&crop=smart&auto=webp&s=8550ce39a3dd362c927528be66765397a9ff2caf', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=960&crop=smart&auto=webp&s=1b8a51953204d12fcbe4052bd35d00d72beaeeb6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?width=1080&crop=smart&auto=webp&s=3e5e848a32fd3238ddc7c63f35a5b038d1e6d2c7', 'width': 1080}], 'source': {'height': 5811, 'url': 'https://preview.redd.it/ox8kmpwhl6gg1.png?auto=webp&s=7ee734e77bcd08771a64e5775127354efe64e351', 'width': 1571}, 'variants': {}}]}
768GB "Mobile" AI Server Follow-Up Part 3, Temp/Power Draw Stats & LLM Benchmark
1
Part 3 follow-up post to the "Mobile" AI server build. Due to Reddit video size/length restrictions I'm having to break the video up into parts, but the full (and better-quality) video is uploaded to YouTube: [https://youtu.be/TJOKEFdCkv0](https://youtu.be/TJOKEFdCkv0) This part gets into actual temperature and power-draw pulse checks at idle and during inference workloads. Unfortunately, because of how I had to downsize quality to meet Reddit's video size requirements, visibility in the screen-recording section of this post's video is pretty poor. The YouTube upload is better quality and should have these numbers more legible; just be sure playback quality is set to 1080p/HD.
2026-01-29T00:12:14
https://v.redd.it/hdqou3cuj6gg1
SweetHomeAbalama0
/r/LocalLLaMA/comments/1qpssdj/768gb_mobile_ai_server_followup_part_3_temppower/
1970-01-01T00:00:00
0
{}
1qpssdj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/hdqou3cuj6gg1/DASHPlaylist.mpd?a=1772367144%2CN2QyMDMwZmVmZTRmMGRiYThhOTc5ZTZiYzI1OWYzMzg3Y2FlODVmMTdmZjdlZjZlNWUyZmUxY2Q4YjNjYzk1OQ%3D%3D&v=1&f=sd', 'duration': 899, 'fallback_url': 'https://v.redd.it/hdqou3cuj6gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/hdqou3cuj6gg1/HLSPlaylist.m3u8?a=1772367144%2COWQ1NGVhYjY5MjdiNTc3ODZjZjEzMmRlNDJhNzU2YWIzNWVmOGJhOTc1MWUxMTYwYzk3Njg1MDQzNjI4MDMyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hdqou3cuj6gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qpssdj
/r/LocalLLaMA/comments/1qpssdj/768gb_mobile_ai_server_followup_part_3_temppower/
false
false
default
1
{'enabled': False, 'images': [{'id': 'N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d67d12af387ba1bf57e8f142212324af804f481', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq.png?width=216&crop=smart&format=pjpg&auto=webp&s=d5dbaae971ce82b0da99a287433d1573fc7b3262', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq.png?width=320&crop=smart&format=pjpg&auto=webp&s=d4f2d2bfdf17ef90efa2fd55f6acfcac9a0d1aec', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq.png?width=640&crop=smart&format=pjpg&auto=webp&s=d280de28743cd1990746d1711deba4dfda57f7e2', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/N3JtbHl4YnVqNmdnMQ7B6IWXkhw7GzGXseCHGrW5rZdLGQiJobqUZJNsEDGq.png?format=pjpg&auto=webp&s=11eff40c2cdb42dae401e9e0b33e29338261cba8', 'width': 720}, 'variants': {}}]}
What upgrades do you recommend to run the most advanced models while keeping the same motherboard?
0
Current setup:

* CPU: Ryzen 5 5600
* Motherboard: Gigabyte B550 AORUS Elite AX V2
* GPU: RX 6600
* RAM: 16 GB DDR4
* PSU: Corsair RM850e
* Case: Lian Li Lancool 216

I can currently run 7B models flawlessly. 13B works but is so slow it's practically unusable. My goal is some combination of RAM + GPU upgrades that gets me running 70B comfortably, but I'll settle for 30B. I really have no interest in swapping out my motherboard at this time, so that's my hard limit. If you were me, what upgrades would you do to max out my motherboard's capability for my use case?
2026-01-29T00:10:15
https://www.reddit.com/r/LocalLLaMA/comments/1qpsqnh/what_upgrades_do_you_recommend_to_run_the_most/
Upstairs_Hold_374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpsqnh
false
null
t3_1qpsqnh
/r/LocalLLaMA/comments/1qpsqnh/what_upgrades_do_you_recommend_to_run_the_most/
false
false
self
0
null
Media bias analysis: legal-trained open model beats Claude and Gemini in blind peer eval
2
Running daily blind peer evaluations. Day 34.

Today's task: analyze two news articles covering the same event (5,000 layoffs) with opposite framings. One says "industry crisis," the other says "strategic AI pivot." Models had to separate facts from spin and identify what information would settle the dispute.

Results: https://preview.redd.it/7ekhalakj6gg1.png?width=1000&format=png&auto=webp&s=ca55eccd8b01db467596116e1314aded317e9552

The legal fine-tune winning makes sense when you think about it. Media bias analysis is basically case analysis: what's in evidence vs. what's interpretation, and how the same facts support different arguments. That's legal training 101.

DeepSeek came last, but the interesting part is the variance: a standard deviation of 1.48 vs. 0.26 for the winner, with scores ranging from 5.70 to 9.80 depending on the judge. Some models loved the response, others hated it. Inconsistency is its own signal.

Open models are competitive here. GPT-OSS-120B variants took the top two spots. Not everything needs a $20/month subscription. [themultivac.substack.com](http://themultivac.substack.com)
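A quick sketch of the spread statistic quoted above. The individual judge scores here are made-up placeholders (only the 5.70–9.80 range comes from the post), shown just to illustrate how the per-judge standard deviation is computed.

```python
# Hypothetical judge scores for one response (only the 5.70-9.80 range is from the post).
import statistics

judge_scores = [5.70, 7.10, 8.40, 9.20, 9.80]

mean = statistics.mean(judge_scores)
stdev = statistics.pstdev(judge_scores)  # population std dev across judges

# A low stdev (e.g. ~0.26 for the winner) means the judges broadly agree;
# a high one (e.g. ~1.48) means the score depends heavily on which judge you ask.
print(f"mean={mean:.2f}, stdev={stdev:.2f}")
```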
2026-01-29T00:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1qpsp4n/media_bias_analysis_legaltrained_open_model_beats/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpsp4n
false
null
t3_1qpsp4n
/r/LocalLLaMA/comments/1qpsp4n/media_bias_analysis_legaltrained_open_model_beats/
false
false
https://b.thumbs.redditm…F_wfUO3L09Yc.jpg
2
null
Field Report: What leadership actually thinks AI is (Notes from a Director)
29
Hi builders, I'm an IT Director for a global org, and I just spent two hours in a 2026 goal-planning meeting with the leadership team. Naturally, the main goal for this year is "Integrating AI." There has been a lot of investment in AI over the last year, and now the board wants a return.

But here is the surprising observation from the room: most people cannot distinguish between "Automation" and "AI." They use the terms interchangeably.

The Shift: Automation in IT has been hot since 2010 (DevOps/Agile), but back then there was massive resistance because people were terrified of automating their roles away. The vibe is different now. People are embracing "AI," but they have a misconception about the skill set. They think "upskilling" just means getting better at prompt engineering.

My Advice to Builders: If you are building solutions for the enterprise, keep it simple. Don't over-engineer a complex neural network when a deterministic script will do.

* Most "agents" today are just fancy workflows.
* You can build a solid workflow in Power Automate, and most corporate stakeholders will look at it and see "AGI."

Don't let the hype distract you from the fact that business logic still wins over "vibe coding." Just wanted to share this reality check from the trenches. Keep building.
2026-01-28T23:59:27
https://www.reddit.com/r/LocalLLaMA/comments/1qpsgzr/field_report_what_leadership_actually_thinks_ai/
forevergeeks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpsgzr
false
null
t3_1qpsgzr
/r/LocalLLaMA/comments/1qpsgzr/field_report_what_leadership_actually_thinks_ai/
false
false
self
29
null
Moonshot AI Releases Kimi K2.5: An Open Source Visual Agentic Intelligence Model with Native Swarm Execution
0
Kimi K2.5 looks pretty powerful! Check out their blog with comparisons and stats: [https://www.kimi.com/blog/kimi-k2-5.html](https://www.kimi.com/blog/kimi-k2-5.html) Has anyone tried it yet, and if so, what results did you see?
2026-01-28T23:54:01
https://huggingface.co/moonshotai/Kimi-K2.5
Danksyy
huggingface.co
1970-01-01T00:00:00
0
{}
1qpsc9q
false
null
t3_1qpsc9q
/r/LocalLLaMA/comments/1qpsc9q/moonshot_ai_releases_kimi_k25_an_open_source/
false
false
default
0
{'enabled': False, 'images': [{'id': 'ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=108&crop=smart&auto=webp&s=6e7384dbd97fabd73bc9cea9728941cdf21d7f5d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=216&crop=smart&auto=webp&s=897a39c720060b999041c7834ad7e5e54a77525e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=320&crop=smart&auto=webp&s=3aae6ac115025d15d12e8d37a937a2150c74e3f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=640&crop=smart&auto=webp&s=32903a0ad22a0b7dba0ddfb30f1670480f676d1c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=960&crop=smart&auto=webp&s=b4831d561b031c995b748078a8cf1726e47ab4cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?width=1080&crop=smart&auto=webp&s=a29663619e646d7cdb2a88c72583b023b6cc278a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZeVTbidHtLUurK6_HGGn6W9aaL373m8FpmleXI1NksQ.png?auto=webp&s=9d26497f1115eeb50ff5db8a4adc1e0bee35e1c7', 'width': 1200}, 'variants': {}}]}
My phone beat my Mac on local LLM latency... benchmarked mobile NPU vs M4 Max GPU
1
Didn't expect a phone chip to beat my Mac on time-to-first-token, but here we are.

I've been fighting slow TTFT on long-context local inference for months. RAG workflows with \~1K tokens of context meant staring at a blank screen for too long before getting any response. I assumed I needed beefier hardware to get usable latency.

Ran some benchmarks on Llama 3.2-3B 4-bit:

|Hardware|Prefill speed|TTFT (\~1K context)|
|:-|:-|:-|
|M4 Max GPU|\~1,037 tok/s|\~965 ms|
|Snapdragon 8 Elite NPU|\~1,786 tok/s|\~560 ms|

**1.7× faster time-to-first-token on a phone chip.**

The key insight: prefill speed directly determines latency. When your RAG pipeline injects 1,000 tokens of context, faster prefill = faster first response. That's the metric that makes or breaks UX. I used NexaML's Hexagon optimization to get there; the NPU architecture handles the prefill phase more efficiently than a general-purpose GPU.

The implication: RAG and agentic workflows that felt laptop-only now have **sub-second latency on mobile**. Anyone else optimizing for TTFT on mobile?
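For anyone who wants to reproduce this kind of number on their own setup, here is a minimal sketch of measuring TTFT against any OpenAI-compatible local server (llama.cpp server, LM Studio, etc.). The endpoint URL, model name, and filler prompt are placeholder assumptions, not the setup used in the post.

```python
# Minimal TTFT measurement against an OpenAI-compatible local endpoint.
# The base_url, model name, and filler context are placeholder assumptions.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

long_context = "lorem ipsum " * 500  # stand-in for ~1K tokens of RAG context

start = time.perf_counter()
stream = client.chat.completions.create(
    model="llama-3.2-3b-instruct",
    messages=[{"role": "user", "content": long_context + "\n\nSummarize the above."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        ttft = time.perf_counter() - start  # time until the first generated token arrives
        print(f"TTFT: {ttft * 1000:.0f} ms")
        break
```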
2026-01-28T23:53:37
https://www.reddit.com/r/LocalLLaMA/comments/1qpsbxj/my_phone_won_on_local_llm_latency_over_mac/
Main_Try_8068
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpsbxj
false
null
t3_1qpsbxj
/r/LocalLLaMA/comments/1qpsbxj/my_phone_won_on_local_llm_latency_over_mac/
false
false
self
1
null
Local MCP server for private docs - lightweight alternative to heavy RAG setups
1
So I've been going down this rabbit hole for a while now. You know that feeling when you're using Claude or ChatGPT and you're like "dude, I literally have a 50-page internal doc that answers this, why can't you just READ IT?"

But then your options are:

* Copy-paste everything into the chat (hits token limits instantly)
* Upload to some third-party RAG service (lol no thanks, not with company IP)
* Use existing MCP servers (spent 2 hours setting one up, it's slow as hell, returns way too much garbage)

I got frustrated enough that I just made my own thing: a pure knowledge MCP server. I think it will be the answer. Maybe we can call it Context9 or something. It runs locally and is stupid simple to set up.

**Some scenarios where I've actually used it:**

**Coding at work:** We have this internal API that's documented in a janky Confluence. I pointed Context9 at it. Now when I ask Cursor "how do I authenticate users in our system?" it actually pulls from OUR docs, not Stack Overflow from 2019.

**Taking an online course:** The professor uploaded like 15 PDFs. I dumped them in. Now I can ask "what did the lecture say about X?" and it quotes the actual slide deck. Saves me from Ctrl+F-ing through 200 pages.

**Small biz stuff:** My friend runs a tiny startup. They have process docs for customer support. New hires can now ask the AI "what's our refund policy?" and it pulls the real answer, not generic advice.

**Research:** If you're reading PDFs you've forgotten, you can finally ask "how does paper A's methodology compare to paper B?" and get answers based on YOUR specific papers.

**Why I think this might be useful:** Everything stays on your machine. No uploads unless you want to (there's optional GitHub integration for teams, but that's it). It's fast because it doesn't do all that vector-database embedding bullshit. It just finds the relevant doc and hands it over. Works with whatever MCP client you're using - Claude Desktop, Cursor, etc. Open source, so you can see exactly what it's doing.

**Plus:** Don't use it for massive enterprise knowledge bases with millions of docs. That's not what this is for.

Anyway, the GitHub link is here: [https://github.com/Prism-Shadow/context9](https://github.com/Prism-Shadow/context9)

Honestly just curious if anyone else has been hitting the same wall. Like, what are you doing when you need AI to reference your specific stuff? Are you just raw-dogging it with copy-paste? Using something else that works better? Also open to "your tool is pointless because X already exists" - genuinely want to know if I'm reinventing the wheel.

https://preview.redd.it/8bpxgadrf6gg1.png?width=1600&format=png&auto=webp&s=1bc8b2e78cb76f5400393cb12f8f471f284af963
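The "no vector database, just find the relevant doc and hand it over" approach can be sketched roughly as below. This is an illustrative guess at the idea, not Context9's actual implementation; the file layout and keyword-overlap scoring are assumptions.

```python
# Illustrative sketch of "find the relevant doc and hand it over" (not Context9's actual code).
from pathlib import Path

def find_relevant_docs(query: str, docs_dir: str, k: int = 2) -> list[str]:
    """Rank plain-text/markdown files by naive keyword overlap and return the top k."""
    q_words = set(query.lower().split())
    scored = []
    for path in Path(docs_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, path, text))
    scored.sort(key=lambda item: item[0], reverse=True)
    # Hand whole documents back to the MCP client instead of embedding chunks.
    return [text for _, _, text in scored[:k]]
```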
2026-01-28T23:46:19
https://www.reddit.com/r/LocalLLaMA/comments/1qps5mb/local_mcp_server_for_private_docs_lightweight/
Prismshadow_AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qps5mb
false
null
t3_1qps5mb
/r/LocalLLaMA/comments/1qps5mb/local_mcp_server_for_private_docs_lightweight/
false
false
https://b.thumbs.redditm…mNbaUBBosPPw.jpg
1
null
Reverse Engineering a $500M Mystery: From HashHop to Memory-Augmented Language Models
1
2026-01-28T23:42:52
https://huggingface.co/blog/codelion/reverse-engineering-magic-hashhop
asankhs
huggingface.co
1970-01-01T00:00:00
0
{}
1qps2jp
false
null
t3_1qps2jp
/r/LocalLLaMA/comments/1qps2jp/reverse_engineering_a_500m_mystery_from_hashhop/
false
false
https://external-preview…033151e664976929
1
{'enabled': False, 'images': [{'id': 'EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=108&crop=smart&auto=webp&s=3f7c8fa391d9f7f4458d0c80cc29d7b3147056ea', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=216&crop=smart&auto=webp&s=2fb8c5565c5d09e086bfd2275009823bb7ea0300', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=320&crop=smart&auto=webp&s=ff4197ccd5a57dbda7a9721f00503db7b29a794b', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=640&crop=smart&auto=webp&s=1dd81440f9c5fe7f21f6783337e3dd754c804e2a', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=960&crop=smart&auto=webp&s=833acc36e50512670bb8b31412db211d9bf3e9e9', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?width=1080&crop=smart&auto=webp&s=20b6dd011595284fb2c1c6a25638eb77ca91b3d0', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/EyRwmekx_uVt17sPtBXjpuOBTqwnB5skDNDBy3a8p94.jpeg?auto=webp&s=f66449caa09f2b18f159d1f9a03093efd011187d', 'width': 1910}, 'variants': {}}]}
768GB "Mobile" AI Server Follow-Up Part 2, The Potential of the W200
6
Part 2 follow-up post to the "Mobile" AI server build. Due to Reddit video size/length restrictions I'm having to break the video up into parts, but the full (and better-quality) video is uploaded to YouTube: [https://youtu.be/TJOKEFdCkv0](https://youtu.be/TJOKEFdCkv0) This section highlights and goes into more detail on the main intent of the original post, which was not to showcase my hardware setup in particular, but to bring attention to the W200 chassis and the potential it may have with some modifications. Following sections will include actual LLM/image-gen benchmarks as well as datapoints on temperature and power draw. If someone out there really is crazy enough to try putting together a 1TB combined-VRAM unit with this thing, please let me know; if I can't be a part of it, I'd at least like to follow along to see how it goes.
2026-01-28T23:42:39
https://v.redd.it/ox3tmfvwc6gg1
SweetHomeAbalama0
/r/LocalLLaMA/comments/1qps2cn/786gb_mobile_ai_server_followup_part_2_the/
1970-01-01T00:00:00
0
{}
1qps2cn
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ox3tmfvwc6gg1/DASHPlaylist.mpd?a=1772365366%2CMjAwZmMwNzVkODAwMzk4YTE0YzI3OGNiMTQxZWFjYzRlZDNjODFlZTkyYzMyODgwOTZjYTVhOTA5Nzg1NjM4MA%3D%3D&v=1&f=sd', 'duration': 884, 'fallback_url': 'https://v.redd.it/ox3tmfvwc6gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ox3tmfvwc6gg1/HLSPlaylist.m3u8?a=1772365366%2CMWZmZjhlMjkxZmYyZTQzOTE0M2RmZWM1ZjFmYTJlMjVjNmY3ZjM2MzYxMjg5YmIwNTQ1MmNmM2Y3NDM5ZWZlZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ox3tmfvwc6gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qps2cn
/r/LocalLLaMA/comments/1qps2cn/786gb_mobile_ai_server_followup_part_2_the/
false
false
default
6
{'enabled': False, 'images': [{'id': 'cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F.png?width=108&crop=smart&format=pjpg&auto=webp&s=487d95fca72d5872b3192b50e6941ff242c208da', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F.png?width=216&crop=smart&format=pjpg&auto=webp&s=ffc35202d2b973240e6337536e92432a66847584', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F.png?width=320&crop=smart&format=pjpg&auto=webp&s=7d30f017ce5e1344f1c87f87231e01a86199d1c8', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F.png?width=640&crop=smart&format=pjpg&auto=webp&s=f89531ba511e24d5dd43a84894c05c90ea2be910', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/cXlqcjVsdndjNmdnMTlI7CfLjgqQahAJ4b2NYrqe_8OEbmNXYRYvEqq4eU5F.png?format=pjpg&auto=webp&s=65126f8f699e6505e626600c60533a98229666f6', 'width': 720}, 'variants': {}}]}
Qwen 3-VL vision model
3
Hey, I'm building my girlfriend a modal app so she can improve her handwriting. She wants to get really good at cursive. I was curious whether I could make it really good with Qwen, a fine-tuned Qwen, or another open-source model. I want to be able to upload an image, and the model should nitpick things like "Your 't' cross is too high for this modern cursive style; bring it down to x-height plus a small overshoot." Is Qwen the best bet? Are there other models that won't require me to fine-tune anything, so I can just prompt-engineer? Any help would be awesome.
2026-01-28T23:12:47
https://www.reddit.com/r/LocalLLaMA/comments/1qprbmq/qwen_3vl_vision_model/
PriorCompote1452
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qprbmq
false
null
t3_1qprbmq
/r/LocalLLaMA/comments/1qprbmq/qwen_3vl_vision_model/
false
false
self
3
null
Drastic performance degradation of glm-4.7-flash on LM Studio
0
Hi, I have been trying to make the glm-4.7-flash model work properly on LM Studio. However, I keep seeing a sharp drop in token-generation speed after each query (56 tk/s -> 37 tk/s -> 27 tk/s). I know there was some issue with the implementation of the deepseek2 architecture (which this model uses) in llama.cpp, and apparently you need to re-download the model and run it on the latest llama.cpp release, but I am not sure whether the official model and the llama.cpp release bundled with LM Studio have been updated with that change. Is anyone else experiencing the same issue?
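One way to check whether the slowdown is reproducible outside the chat UI is to hit LM Studio's local OpenAI-compatible server with the same prompt several times and log tokens/s per run. The port, model identifier, and prompt below are assumptions, and the tokens/s figure relies on the usage field that OpenAI-compatible endpoints typically return.

```python
# Repeatedly query LM Studio's local OpenAI-compatible server and log generation speed.
# Port, model id, and prompt are assumptions; tokens/s uses the usage field the server returns.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

for i in range(5):
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="glm-4.7-flash",
        messages=[{"role": "user", "content": "Explain KV-cache reuse in two paragraphs."}],
        max_tokens=256,
    )
    elapsed = time.perf_counter() - start
    tokens = resp.usage.completion_tokens
    print(f"run {i}: {tokens / elapsed:.1f} tok/s")  # a steady decline suggests cache/state buildup
```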
2026-01-28T23:08:08
https://www.reddit.com/r/LocalLLaMA/comments/1qpr7d9/drastic_performance_degradation_of_glm47flash_on/
hieuphamduy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpr7d9
false
null
t3_1qpr7d9
/r/LocalLLaMA/comments/1qpr7d9/drastic_performance_degradation_of_glm47flash_on/
false
false
self
0
null
Proof that you’re really rich
6
Not sponsored by Corsair
2026-01-28T23:02:22
https://i.redd.it/2acbris986gg1.jpeg
Hot_Inspection_9528
i.redd.it
1970-01-01T00:00:00
0
{}
1qpr28t
false
null
t3_1qpr28t
/r/LocalLLaMA/comments/1qpr28t/proof_that_youre_really_rich/
false
false
default
6
{'enabled': True, 'images': [{'id': '2acbris986gg1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=108&crop=smart&auto=webp&s=5402d5ea5744cc251524837f7892af79a435e109', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=216&crop=smart&auto=webp&s=0821f92eeb954a79649f89eb35461876bfe2762b', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=320&crop=smart&auto=webp&s=9a6e5d2131aec9d4341aa2b1655e28fb38bc78b8', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=640&crop=smart&auto=webp&s=51dedf9574ffff1a8201441106237e1e6b65fc48', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=960&crop=smart&auto=webp&s=6b3d53a85485e26166c742f6b208d5ad7b85a01a', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?width=1080&crop=smart&auto=webp&s=ee0f5a6f7d91ce8fd1f8bd1b4b8d8a08a19f1a6b', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/2acbris986gg1.jpeg?auto=webp&s=93a6568ce0339e976427f4efb133d7404c17dfea', 'width': 3024}, 'variants': {}}]}
768GB "Mobile" AI Server Follow-Up Part 1, Look Inside
147
Hey y'all,

The post I made about the AI server got a lot of buzz, so I decided to do a follow-up with some video on the project. Because of Reddit's video upload restrictions, I'll have to upload the videos in separate posts with slightly different focuses, but I've uploaded the full (and higher-quality) version to YouTube. Taking the video from 1080p to 720p to meet Reddit's size requirements hurt visibility on the screen recording in one of the later parts, so I'll leave a link to the full video here for convenience; the other parts should get posted shortly. [https://youtu.be/TJOKEFdCkv0](https://youtu.be/TJOKEFdCkv0)

This part primarily focuses on background context: how we came to the W200 in the first place, what it solved for us, and a look inside the unit.

Spec summary: 512GB DDR4, 256GB VRAM (8x3090 + 2x5090), 64-core Threadripper Pro 3995WX. Case: Core W200.

I appreciate all of the comments and responses on the last post. I've never done anything like this before, so I apologize if things are not more polished; attention normally isn't my thing, and while the volume of feedback was a little overwhelming, the interest was very encouraging.

It seems like every other day we see builds posted here composed of top-of-the-line enterprise hardware with sunk costs reaching tens of thousands of dollars, so I think it makes a difference to highlight what's possible with a little ingenuity, consumer-grade components, and a relatively "realistic" budget (in this case, around \~$17k USD). Keep this figure in mind when comparing cost:value against those other workstations and their specs, performance, and creative potential, because I think it illustrates that effective AI hosting can be more than just throwing money at the problem. Whether someone is working with $100 or $100k, focusing on innovative problem solving, pushing optimization limits, and seeing what's possible with what's currently available is an order of magnitude more exciting than a squeaky-clean $50,000 supercomputer built from specialized hardware that very few people will ever see in person, posted by someone asking the same question asked since the dawn of time: "what should I do with this?"

Ultimately, the interest in experimentation and trying new approaches is what keeps this hobby (local AI) alive and relevant, and IMO it will be our best counterbalance to the complications that closed-model AI companies impose as we move forward.

Questions welcome. Enjoy!
2026-01-28T22:44:03
https://v.redd.it/trvmg2cpp5gg1
SweetHomeAbalama0
/r/LocalLLaMA/comments/1qpqlfj/768gb_mobile_ai_server_followup_part_1_look_inside/
1970-01-01T00:00:00
0
{}
1qpqlfj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/trvmg2cpp5gg1/DASHPlaylist.mpd?a=1772361852%2CNGQ0Mzg0MWM2YjBkZWE1YzIwZjNhOTQ1YTMwMTIxODE4MDA5NmNkNDgzODE2YmI5ZGI3YjJkZmQ0ZmFmOTI5Zg%3D%3D&v=1&f=sd', 'duration': 890, 'fallback_url': 'https://v.redd.it/trvmg2cpp5gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/trvmg2cpp5gg1/HLSPlaylist.m3u8?a=1772361852%2CYjlhZmMzZTdhYzE0MmZmYzZmYjliZjgzZDU5ZDdjNGI0MWQxOTBlMjAyMmY1MzYxYTk4M2FjN2RkOWJlMDE2OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/trvmg2cpp5gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qpqlfj
/r/LocalLLaMA/comments/1qpqlfj/768gb_mobile_ai_server_followup_part_1_look_inside/
false
false
default
147
{'enabled': False, 'images': [{'id': 'b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=a54b1f5e051caee0d632ec385b0a6323167cc379', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=4c402ca55d93a4103784488c2b3acb06809f84a3', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=ec17eb47f4bfad74fc67778e2850fb746c5719f4', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=dcccbe1da545522d784f5a3749a7424c574bf565', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/b3IzbXRvY3BwNWdnMU2mkuU7oHD8qNQyskshMOF3z-mD-pRqGUpyoVD2VUXQ.png?format=pjpg&auto=webp&s=aac1dce1665bf923a1b41e83eb52176cfca50af6', 'width': 720}, 'variants': {}}]}
How to checkpoint on unified memory (training)?
6
**Does anyone know how to solve this?**

I'm on a DGX Spark doing LoRA BF16 on [nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16) using [NeMo AutoModel](https://github.com/NVIDIA-NeMo/Automodel), but I can fit only 1 batch, as 2 batches OOMs. I can train the model fine on 2 batches with about 18 GiB of headroom, but when it tries to checkpoint, memory spikes and it goes OOM.

What I don't get is: if the checkpoint is already in memory, why would a unified-memory system need to allocate more memory to store what's already in memory? On non-unified systems I guess that's needed, since the checkpoint goes VRAM -> CPU -> RAM -> CPU -> SSD, but on unified memory it could go RAM -> CPU -> SSD, or am I missing something? Is it doing some extra computation/compression on checkpoint? Is this a NeMo AutoModel limitation, a kernel limitation, an algorithm limitation, or do I just have the wrong settings?

What do you experience when training on DGX Spark, Strix Halo, Mac, or another unified-memory system? Is this behavior also observed on dedicated-GPU systems (does it spike RAM or VRAM)?

https://preview.redd.it/8i5rxfuw26gg1.png?width=1894&format=png&auto=webp&s=b69e8e5ba16be463e1632c261f547bacc7631c3f

I'm crying having to see such bad GPU usage... too much potential being wasted, in my view. On 1 batch I'm getting about 450 tps, while on 2 batches I was at about 680 tps during training, until the OOM.
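One plausible explanation (an assumption, not a confirmed diagnosis of NeMo AutoModel's save path): many frameworks first materialize a full copy of the state dict (detached, contiguous, sometimes gathered or moved) before serializing, so even on unified memory the save can transiently add roughly another model's worth of allocation. A generic PyTorch sketch of the two paths:

```python
# Generic PyTorch illustration (not NeMo AutoModel's actual save path) of why a
# checkpoint can transiently double memory: many save paths materialize a full
# copy of the weights before writing.
import torch
import torch.nn as nn

model = nn.Linear(4096, 4096, dtype=torch.bfloat16)  # stand-in for the trained weights

# Path A: copy every tensor first -> extra allocation roughly equal to model size.
copied_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
torch.save(copied_state, "checkpoint_copy.pt")

# Path B: serialize the live state_dict directly -> little extra allocation,
# but the weights must not change while the file is being written.
torch.save(model.state_dict(), "checkpoint_direct.pt")
```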
2026-01-28T22:43:16
https://www.reddit.com/r/LocalLLaMA/comments/1qpqkp3/how_to_checkpoint_on_unified_memory_training/
Lorelabbestia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpqkp3
false
null
t3_1qpqkp3
/r/LocalLLaMA/comments/1qpqkp3/how_to_checkpoint_on_unified_memory_training/
false
false
https://a.thumbs.redditm…piv-p_FF_hE4.jpg
6
null
Our command line tool to transpile TTS Models from Python to C++
25
We're a small (semi-stealth) team that's been working on a tool to rewrite AI inference code from Python to C++ (similar to llama.cpp, whisper.cpp, and so on). Today, we're launching `muna transpile`.

It takes a Python function and generates a self-contained, header-only C++ library and a corresponding `CMakeLists.txt` file. It pulls in required libraries automatically (e.g. llama.cpp, onnxruntime, mlx, and so on). You can then use it to build and ship an application or library.

The video above shows us transpiling, compiling, and running Kokoro-TTS on Apple Silicon (compile times may vary 😅). We're working on support for Qwen3-TTS next, then we'll look at LLMs like gpt-oss-20b.

If you have a model (or pipeline of models) that you've proved out in Python but want to run at speed (or ramp up), please try it out! Note that this is free and freely usable: your Python source code goes in, and it's still your source code when it comes out (just converted to C++). We're working on building more stuff on top of this, so we're using this as an opportunity to expand support for different kinds of AI models.

Try it out and let me know what you think:

    # Run this in Terminal
    $ pip install muna && muna transpile https://github.com/muna-ai/muna-predictors/blob/main/text-to-speech/kokoro.py --trust-remote-code --install-deps

Source code for the CLI is [here](https://github.com/muna-ai/muna-py), but the actual transpilation logic is not yet open-source.
2026-01-28T22:36:56
https://v.redd.it/xt3gvhgl16gg1
Historical_Pen6499
v.redd.it
1970-01-01T00:00:00
0
{}
1qpqet8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xt3gvhgl16gg1/DASHPlaylist.mpd?a=1772231833%2CYzg3NzRhODQ4MmMxNmQ5MGQwYzExMzVmYWJlOWRmMzI5ZmVhM2M3MmE3NDI1Nzc5MjhmZDk1NmE3YmMyM2NmYg%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/xt3gvhgl16gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1054, 'hls_url': 'https://v.redd.it/xt3gvhgl16gg1/HLSPlaylist.m3u8?a=1772231833%2CMDE5YWMxZTljN2ViNjAwMzg0ZTViNzMzYmYzOTgzOTVkNDU3NzliZjY2Y2JjMTg1YjBkMzM3MmI1NDJmNDYzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xt3gvhgl16gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qpqet8
/r/LocalLLaMA/comments/1qpqet8/our_command_line_tool_to_transpile_tts_models/
false
false
https://external-preview…975f6e6f14f2cc00
25
{'enabled': False, 'images': [{'id': 'dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=108&crop=smart&format=pjpg&auto=webp&s=bce5e52bdabaea263549267169b80be464bd87da', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=216&crop=smart&format=pjpg&auto=webp&s=3d8d655239f4d599c1fb244871c834cda1540287', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=320&crop=smart&format=pjpg&auto=webp&s=8e3e28148f70c61d09f7f7d95e2f12f0f9302549', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=640&crop=smart&format=pjpg&auto=webp&s=16acfbe8c84a2bdc9b489086ea36fb21521f7cfd', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=960&crop=smart&format=pjpg&auto=webp&s=086de7e4a946caf989f0c7d2f77237a0a321e91e', 'width': 960}, {'height': 592, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b9ae098731089e2c6dfc11accefe0f8ef856283a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dG5oM211ZmwxNmdnMfJOw4PcdggKLWcq08o9ybIUHfTYyE5NGjv3XmJod7px.png?format=pjpg&auto=webp&s=a9318b8b8d3d06704006e3dcbe71a00be711cda7', 'width': 1968}, 'variants': {}}]}
Options for 5060 ti
2
Hi there! A more hardware-focused question. I recently bought a 5060 Ti for gaming, but I want to get into running local LLMs; my main focus would be inference speed and being able to run big (but realistically good) models. I read online that 3090s are where it's at, but obviously I don't have one. Should I buy a second card (a 3090, or another 5060 Ti) to increase VRAM, or just stick with what I've got? I also have a spare 2060, but I've read that adding an older GPU will limit my speeds to that GPU, so I suspect it's better to sell that one and buy a newer card? Or should I look into upgrading other parts like CPU, RAM (please don't say this), or my motherboard, as I'm still on AM4... Thanks for the help! Have a nice day :)
2026-01-28T22:27:47
https://www.reddit.com/r/LocalLLaMA/comments/1qpq67c/options_for_5060_ti/
Murtsdurt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpq67c
false
null
t3_1qpq67c
/r/LocalLLaMA/comments/1qpq67c/options_for_5060_ti/
false
false
self
2
null
Assistant_Pepe_8B, 1-M context, zero slop
105
> This is a project that was a long time in the making, because I wanted to get it right. I'm still not fully satisfied, as there are some rough corners to sand, but for now this will do.

The goal was to **maximize shitpostness** along with **helpfulness**, without glazing the user for every retarded idea. Not an easy needle to thread. This amphibious AI has learned the ways of /g/ and speaks **fluent brainrot**, but it will also help you out with just about anything you'll need, and won't be ashamed to roast you while at it.

For those who remember [Oni\_Mitsubishi\_12B](https://huggingface.co/SicariusSicariiStuff/Oni_Mitsubishi_12B) - it was **so overtly toxic** that it made me worry at first (only to quickly be verified as not even that uncensored). I could do better. So now I did. This model is a **significant refinement** of the idea, with a cleaned dataset, better curation, and much more intelligence (also **one million tokens of context**, theoretically). It is much less (overtly) toxic and much smarter, while also being very helpful (and imo much funnier too, because the skies are blue due to the chemtrails and the Neuralink that feeds this simulation).

# But why?

It's now late **January 2026**, open source is crushing closed frontier ([Kimi K2.5](https://huggingface.co/moonshotai/Kimi-K2.5) was recently released, **1T** params that **beat frontier models**), but has anyone released a **helpful shitposting AI yet?** Yeah, didn't think so. If it **shitposts too hard**, it is often not that **helpful**; if it's **helpful enough**, the **shitposting ability is often lacking**. You just couldn't win. **Until now**.

Oh, and **no system prompt is needed**. Just don't let it get stuck in a greentext loop. I might have overcooked the frog a tad too fast in the pot for this one.

P.S. It writes **HILARIOUS STORIES**, nothing like a typical AI assistant; see the examples below for details.

# TL;DR

* **Top-tier shitposting**: absolutely unhinged, funny, and witty. Sometimes cringe too; nothing is perfect.
* **Helpful!** Will actually get shit done.
* Will **100% roast you** for being dumb, thanks to a subtle **negativity bias infusion**. Very **refreshing!** 🤌
* **Deep insights** (when it doesn't delve into absolutely unhinged conspiracy theories about how the water makes the frogs gay).
* Built on my [UltraLong-1M-Instruct\_Abliterated](https://huggingface.co/SicariusSicariiStuff/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct_Abliterated) model, so you can fulfill your dream of a **million-token-long** shitpost.
* Say goodbye to **GPT-isms** and hello to **truly creative stories!**
* Ships code.
* Inclusive towards amphibians.

[https://huggingface.co/SicariusSicariiStuff/Assistant\_Pepe\_8B](https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B)
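A minimal loading sketch for trying the model, assuming it loads with the standard transformers auto classes like its Llama-3.1-based parent; the sampling settings and prompt are arbitrary placeholders, not the author's recommended ones.

```python
# Minimal sketch: assumes the model loads with standard transformers auto classes,
# like its Llama-3.1-based parent. Sampling settings are arbitrary placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Assistant_Pepe_8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Roast my plan to train a 1T model on a single 3060."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```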
2026-01-28T22:03:49
https://www.reddit.com/r/LocalLLaMA/comments/1qppjo4/assistant_pepe_8b_1m_context_zero_slop/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qppjo4
false
null
t3_1qppjo4
/r/LocalLLaMA/comments/1qppjo4/assistant_pepe_8b_1m_context_zero_slop/
false
false
self
105
{'enabled': False, 'images': [{'id': 'pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=108&crop=smart&auto=webp&s=9e70a2b55a68ae25463df21d794f100151d2001f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=216&crop=smart&auto=webp&s=0ab43a675134080ace32646dc1d00d9780d911e2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=320&crop=smart&auto=webp&s=539894c30e2964c30b0569397891c76fc2c99692', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=640&crop=smart&auto=webp&s=80992309b2dbe54270269268f04aacb5735a2460', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=960&crop=smart&auto=webp&s=126e186c63202894719b81f15f12898207281cee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?width=1080&crop=smart&auto=webp&s=472fe9c82947064ca48180be41192a33bbd3ace8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pSVZnx94eMPjsS534OtAJMkp7HIYEOenxOEGp6QdSTY.png?auto=webp&s=211cfad9186f65197e9b800e09dddabf6f6258fe', 'width': 1200}, 'variants': {}}]}
DGX Spark (128GB Unified) vs. RTX 5090 Build
3
Hey everyone, I've been racking my brain for weeks trying to decide on the right hardware for my next build.

My use case:

* Local LLMs: I want to run 70B models (maybe even 120B) with decent performance.
* Image & video gen: heavy workflows in ComfyUI (Stable Diffusion / Flux / SVD).
* 24/7 server: running web servers and automation scripts around the clock.
* Remote only: access will be strictly via SSH or Remote Desktop.

The constraints:

* No Macs: I need to stay within the NVIDIA/CUDA ecosystem.
* NVIDIA-only: it has to be a green-team solution for compatibility reasons.

The candidates:

1. NVIDIA DGX Spark (Founders Edition / GB10):
   * Pros: 128 GB unified memory (fits almost any model), compact, quiet, and very power-efficient (140-240 W). Full DGX OS support.
   * Cons: only 273 GB/s memory bandwidth (LPDDR5X) and significantly fewer CUDA cores than a 5090.
2. Workstation with 1x (or later 2x) RTX 5090:
   * Pros: massive raw compute power, \~1.8 TB/s bandwidth (GDDR7). Perfect for fast image/video generation.
   * Cons: only 32 GB VRAM (70B models won't fit on a single card without heavy quantization), high power draw (600 W+), and louder cooling for 24/7 operation.

My main concern: the low bandwidth of the Spark (273 GB/s) really worries me; I'm afraid it will make LLM inference feel sluggish (low tokens per second). On the other hand, the VRAM limit of a single 5090 is a major bottleneck for larger models unless I go multi-GPU immediately. (See the rough bandwidth math sketched below.)

What would you choose? Would you prioritize the massive memory capacity of the Spark despite the slower speeds, or go for the performance beast that is the 5090 and deal with quantization/splitting? Thanks, guys.
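A back-of-envelope sketch of why the bandwidth number dominates decode speed: memory-bandwidth-bound generation is roughly bandwidth divided by the bytes read per token. All figures below are rough assumptions for a ~70B model at ~4-bit quantization, not measured numbers.

```python
# Rough, assumption-laden estimate: decode tok/s ~= memory bandwidth / bytes read per token.
# A ~70B-parameter model at ~4.5 bits/weight reads roughly 40 GB of weights per token.
weights_bytes = 70e9 * 4.5 / 8  # ~39 GB per token (ignores KV-cache traffic)

for name, bandwidth in [("DGX Spark (273 GB/s)", 273e9), ("RTX 5090 (1.8 TB/s)", 1.8e12)]:
    print(f"{name}: ~{bandwidth / weights_bytes:.0f} tok/s upper bound")

# Caveat: the 5090's 32 GB VRAM can't actually hold a ~40 GB quantized 70B model,
# so in practice it needs offloading, a second card, or heavier quantization.
```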
2026-01-28T21:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1qppea0/dgx_spark_128gb_unified_vs_rtx_5090_build/
Professional-Safe894
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qppea0
false
null
t3_1qppea0
/r/LocalLLaMA/comments/1qppea0/dgx_spark_128gb_unified_vs_rtx_5090_build/
false
false
self
3
null
Radeon 9070XT vs 7900XTX for LLM Inference (paired with a 7800XT)
2
\[Not looking at NVIDIA GPUs\]

I already have a 7800 XT and 32 GB of DDR4 RAM. I'm trying to buy another GPU and upgrade my motherboard + case (for cooling and fitting 2 GPUs). I also own a Bosgame M5 (AMD Ryzen AI Max+ 395, Strix Halo) machine. It's been pretty good for MoE models and I intend to keep using it, but I also feel like upgrading my system because GPU prices don't seem like they're going to come down anytime soon.

I wanted to get an R9700, but I'm in Singapore and it doesn't seem like there will be easy availability here anytime soon; even if it does arrive, prices will likely be inflated. I was about to pull the trigger on the 9070 XT, but then did some more research and realized that I can buy a new/used 7900 XTX without much hassle (a bit pricier, but it seems worth it for the extra VRAM).

My questions are:

1. Is it a bad idea to try to pair either of these cards with the 7800 XT (sunk cost fallacy?)
2. If the main goal is to run some MoE + dense models (e.g. Devstral or Qwen-Image), VRAM matters more, hence the 7900 XTX is the better choice, right?
3. Does PCIe x8/x8 matter? I.e. do I need to be very careful about motherboard choice? I was looking at an MSI MAG B550 Tomahawk due to the slot spacing (so cooling doesn't become a massive issue). Is it fine if the second GPU is on a PCIe 3.0 x4 slot?
4. Should I look into a PCIe riser and mount one of the GPUs vertically?

If there are any suggestions on pairing a GPU with the Strix Halo via the NVMe slot (I haven't seen much of this being done), I'm open to hearing whether that would be an option for me and how best to do it. Thanks in advance.
2026-01-28T21:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1qpp8ud/radeon_9070xt_vs_7900xtx_for_llm_inference_paired/
kahlil29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpp8ud
false
null
t3_1qpp8ud
/r/LocalLLaMA/comments/1qpp8ud/radeon_9070xt_vs_7900xtx_for_llm_inference_paired/
false
false
self
2
null
Advice wanted: designing robust LLM inference loops with tools
0
Hey folks 👋

I'm an AI engineer working on a Python library for agent-to-agent communication and orchestration in my spare time (https://github.com/nMaroulis/protolink). The project is mainly a learning vehicle for me to go deeper into topics like A2A task delegation, agent orchestration, and deterministic LLM inference loops with tool usage and reasoning.

Right now I'm focused on the LLM inference loop, and I'd really appreciate some feedback from people who've tackled similar problems.

Current approach. At a high level:

* An agent receives a task.
* If the task requires LLM reasoning, the agent invokes LLM.infer(...).
* infer() runs a multi-step, bounded inference loop.
* The model is instructed (via a strict prompt + JSON contract) to return exactly one of:
  * final → user-facing output, terminate the loop
  * tool\_call → runtime executes a tool and feeds the result back
  * agent\_call → delegate to another agent (not implemented yet)

The loop itself is provider-agnostic. Each LLM subclass (e.g. OpenAI, Anthropic, Ollama) implements its own \_on\_tool\_call hook to inject tool results back into history in a provider-compliant way, since tool semantics differ significantly across APIs.

The problem: in practice, I often hit infinite tool-call loops:

* The model repeatedly requests the same tool
* Even after the tool result has been injected back into context
* The loop never converges to final

I'm already enforcing:

* Strict JSON output validation
* A maximum step limit
* External (runtime-only) tool execution

...but the behavior still shows up often enough that it feels like an architectural issue rather than just prompt tuning.

What I'm looking for. I'd love input on things like:

* Patterns to reliably prevent repeated tool calls
* Whether people explicitly track tool-call state / tool saturation
* How much logic you push into the prompt vs. the runtime
* Whether you allow the model to "see" prior tool calls explicitly, or abstract them
* Any hard-won lessons from production agent loops

I'm also genuinely curious how big libraries (e.g. LangChain) model or observe inference loops, tool usage, and retries internally, especially around detecting non-converging behavior.

Any thoughts, critiques, or references would be hugely appreciated 🙏 Happy to share code snippets if that helps.
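One common pattern for the repeated-tool-call problem is to deduplicate on a (tool, arguments) fingerprint inside the runtime and short-circuit the repeat, feeding the cached result plus a warning back to the model. A minimal sketch under assumed names: `call_llm`, `run_tool`, and the JSON contract fields are placeholders derived from the post's description, not protolink's actual API.

```python
# Sketch of a bounded loop that detects repeated (tool, args) calls and
# feeds a cached result + warning back instead of re-executing the tool.
# call_llm() and run_tool() are placeholder hooks, not protolink's actual API.
import json

def infer(task: str, call_llm, run_tool, max_steps: int = 8) -> str:
    history = [{"role": "user", "content": task}]
    seen: dict[str, str] = {}  # fingerprint -> cached tool result

    for _ in range(max_steps):
        step = json.loads(call_llm(history))  # expected: {"type": "final"|"tool_call", ...}
        if step["type"] == "final":
            return step["content"]

        fingerprint = step["tool"] + "|" + json.dumps(step.get("args", {}), sort_keys=True)
        if fingerprint in seen:
            # Don't re-run: surface the repetition so the model is pushed toward "final".
            result = f"[already called with these arguments] {seen[fingerprint]}"
        else:
            result = run_tool(step["tool"], step.get("args", {}))
            seen[fingerprint] = result

        history.append({"role": "tool", "content": result})

    return "Stopped: step limit reached without a final answer."
```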
2026-01-28T21:46:48
https://www.reddit.com/r/LocalLLaMA/comments/1qpp393/advice_wanted_designing_robust_llm_inference/
sheik66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpp393
false
null
t3_1qpp393
/r/LocalLLaMA/comments/1qpp393/advice_wanted_designing_robust_llm_inference/
false
false
self
0
{'enabled': False, 'images': [{'id': 'EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=108&crop=smart&auto=webp&s=c4be544ee0942406ccd05b5fdc0e009a897d73f5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=216&crop=smart&auto=webp&s=eefa453374717575b3b17e5d856c663249660f58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=320&crop=smart&auto=webp&s=e8872636616aa83ec93f721065c09be52f31c1bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=640&crop=smart&auto=webp&s=594e1fa5f96f31163b6414fa21d6bc49092b1c38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=960&crop=smart&auto=webp&s=5e56d00eb1d9613314d4cb6288cdf3fa8064aff6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?width=1080&crop=smart&auto=webp&s=b119f015ce2732003cd1d6848fd475b8761dad4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EeuMhj5tlQWcm8p9hoiniuyPzK8Try6s2Ms09U7xDCI.png?auto=webp&s=6973cdc76bf53a282a576e2183633762446bdb19', 'width': 1200}, 'variants': {}}]}
Building an open-source, zero-server Code Intelligence Engine
10
Hi guys, I'm building GitNexus, an open-source Code Intelligence Engine that works fully client-side, in-browser. There has been a lot of progress since I last posted.

Repo: [https://github.com/abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) (a ⭐ would help so much, you have no idea!!)
Try: [https://gitnexus.vercel.app/](https://gitnexus.vercel.app/)

It creates a knowledge graph from GitHub repos and exposes an agent with specially designed tools, plus MCP support. The idea is to solve the project-wide context issue in tools like Cursor, Claude Code, etc. and to have a shared code-intelligence layer for multiple agents. It provides a reliable way to retrieve full context, which is important for codebase audits, blast-radius detection of code changes, and deep architectural understanding of the codebase for both humans and LLMs. (Ever hit the issue where Cursor updates some part of the codebase but fails to adapt other dependent functions around it? This should solve it.)

**I tested it using Cursor through MCP. Even without the impact tool and the LLM enrichment feature, the Haiku 4.5 model was able to produce better architecture documentation than Opus 4.5 without MCP on the PyBamm repo (a complex battery-modelling repo).** Opus 4.5 was asked to go into as much detail as possible, while Haiku had a simple prompt asking it to explain the architecture. The output files were compared in a ChatGPT 5.2 chat: [https://chatgpt.com/share/697a7a2c-9524-8009-8112-32b83c6c9fe4](https://chatgpt.com/share/697a7a2c-9524-8009-8112-32b83c6c9fe4) (I know it's not a good enough benchmark, but it's still promising.)

Quick tech jargon:

- Everything, including the DB engine and the embeddings model, runs in-browser, client-side.
- The project architecture flowchart you can see in the video is generated without an LLM during repo ingestion, so it is reliable.
- Creates clusters (using the Leiden algorithm) and process maps during ingestion.
- It has all the usual tools like grep, semantic search, etc., but heavily enhanced using the process maps and clusters. This makes the tools themselves smart, so a lot of the decisions the LLM had to make to retrieve context are offloaded into the tools, making it much more reliable even with non-SOTA models.

**What I need help with:**

- To convert it into an actually useful product, do you think I should make it a CLI tool that keeps track of local code changes and updates the graph?
- Is there some way to get free API credits or sponsorship so that I can test GitNexus with multiple providers?
- Any insights into enterprise code problems like security audits, dead-code detection, or other potential use cases I can tune GitNexus for?

Any cool idea or suggestion helps a lot. The comments on the previous post helped a LOT, thanks.
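For readers unfamiliar with the clustering step, community detection over a code graph looks roughly like this. The sketch uses networkx's built-in Louvain method as a stand-in for the Leiden algorithm GitNexus uses, and the toy call graph is entirely made up.

```python
# Community detection over a toy call graph. Uses networkx's Louvain as a
# stand-in for the Leiden algorithm GitNexus uses; the graph below is made up.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("auth.login", "auth.hash_pw"), ("auth.login", "db.get_user"),
    ("db.get_user", "db.connect"), ("api.handler", "auth.login"),
    ("billing.charge", "billing.invoice"), ("billing.invoice", "db.connect"),
])

clusters = nx.community.louvain_communities(G, seed=42)
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")  # each cluster ~ one architectural module
```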
2026-01-28T21:44:35
https://v.redd.it/b3d93edpq5gg1
DeathShot7777
v.redd.it
1970-01-01T00:00:00
0
{}
1qpp13g
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b3d93edpq5gg1/DASHPlaylist.mpd?a=1772228697%2CYzYwNzI1ZjE4YmYyMTQyNzUwNDI0MjQ4ZDAwMWZiMGU0ODJlNjk1MDEyN2FhOGMxYzUxMzcwYzhhNWIxNDYyMQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/b3d93edpq5gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/b3d93edpq5gg1/HLSPlaylist.m3u8?a=1772228697%2CZDk1ZWIxYTFkYWI1OTJhZjNhNGRiOWEwNWZlNWVjZGM3YTcxYzViM2FlNjM0YjlmMmViZDg0ODRiYmYwNzliMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b3d93edpq5gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qpp13g
/r/LocalLLaMA/comments/1qpp13g/building_opensource_zero_server_code_intelligence/
false
false
https://external-preview…3cb1bdedaa0c63b1
10
{'enabled': False, 'images': [{'id': 'c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=108&crop=smart&format=pjpg&auto=webp&s=c812fe5e3f699b8636858ca0b811fa2f94a1cf0b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=216&crop=smart&format=pjpg&auto=webp&s=866c713125f09d92dd4d8e19387b6d3e45b47fa1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=320&crop=smart&format=pjpg&auto=webp&s=2662dd280b1bcfec8d7b2192533c5dd7392c3018', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=640&crop=smart&format=pjpg&auto=webp&s=d3533650f6d33845f2c1f5fd769c6b530d700dcb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=960&crop=smart&format=pjpg&auto=webp&s=37fdf1d6ae7fd4f31a3213e5846f23f92b1e862b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?width=1080&crop=smart&format=pjpg&auto=webp&s=32cbd527ed8b68e27a84d8f9b6f7db24a6e7c43d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c251Yzk2ZXBxNWdnMX8xZ7UL_WAflwTq0BqeDmN95WRo5Ajh0fAkuzEXaT9M.png?format=pjpg&auto=webp&s=1e1edc3bb18ad46231e992e2adde783cd1723404', 'width': 1920}, 'variants': {}}]}
Olmo/Bolmo: Why is remote code needed?
8
When I went to try Bolmo-1B in vLLM, I got a message saying I need to enable 'trust remote code.' Which code? For what purpose? This should be explained in the model card, or preferably the requisite functionality should just be a PR into vLLM rather than allowing arbitrary code execution.
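For context, the flag in question tells vLLM (via Hugging Face transformers) to download and execute the Python modeling/tokenizer code shipped inside the model repo instead of only using classes built into the libraries, which is exactly why it warrants scrutiny. A minimal sketch of where it appears; the exact repo id below is an assumption, the rest is generic vLLM usage.

```python
# trust_remote_code=True lets vLLM/transformers run the custom modeling code
# shipped inside the model repo (arbitrary Python), instead of only built-in classes.
from vllm import LLM, SamplingParams

llm = LLM(model="allenai/Bolmo-1B", trust_remote_code=True)  # exact repo id assumed
out = llm.generate(["The capital of France is"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```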
2026-01-28T21:24:44
https://www.reddit.com/r/LocalLLaMA/comments/1qpoi1y/olmobolmo_why_is_remote_code_needed/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpoi1y
false
null
t3_1qpoi1y
/r/LocalLLaMA/comments/1qpoi1y/olmobolmo_why_is_remote_code_needed/
false
false
self
8
null
10M vectors on a single device (iPhone / Mac mini). CPU-only. Persistent.
0
Most "AI agent" demos focus on models and prompts. The part that breaks first in real usage is neither. It's the **data layer**.

If you actually run agents locally, memory explodes fast:

* 1 screenshot / second
* \~10 embeddings per frame
* ≈ **300M vectors / year** if continuous
* Even **1 hour / day** → **\~10M vectors annually**

That's not a toy index anymore. That's a **real database problem**. We asked a systems question almost a year ago, and now we've tested it.

**Setup**

* Scale: **10M vectors (\~40GB)**
* Devices: iPhone 16 Pro, Mac mini-class systems, consumer laptops
* Execution: **CPU-only**
* Access pattern: concurrent reads + updates
* Storage: fully persistent (not an ephemeral cache)

**Result**

* Stable operation at 10M vectors
* \~25-30 ms query latency (4 threads)
* No cloud, no GPU, no "fits-in-RAM" assumptions

This wasn't a benchmark for leaderboards. It was validating whether the **architecture itself holds** under real constraints. The key takeaway isn't "10M is big." It's this: most local-agent stacks today implicitly assume that

* memory is small,
* data is transient,
* or everything eventually syncs to the cloud.

Those assumptions don't survive long-lived agents. As agents move closer to users (PCs, phones, edge devices), **data gravity stops being a metaphor and starts becoming physics.**

Happy to discuss system design tradeoffs, failure modes, or what breaks first when you push past 10M.
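The volume and storage arithmetic behind those numbers, as a quick sketch; the bytes-per-vector figure is simply back-solved from the post's 10M ≈ 40 GB claim, not a stated spec.

```python
# Back-of-envelope math behind the post's numbers.
SECONDS_PER_HOUR = 3600

frames_per_year_continuous = 1 * SECONDS_PER_HOUR * 24 * 365   # 1 screenshot/second
vectors_per_year_continuous = frames_per_year_continuous * 10  # ~10 embeddings/frame
vectors_per_year_1h_per_day = 1 * SECONDS_PER_HOUR * 365 * 10

print(f"continuous: ~{vectors_per_year_continuous / 1e6:.0f}M vectors/year")  # ~315M
print(f"1 hour/day: ~{vectors_per_year_1h_per_day / 1e6:.1f}M vectors/year")  # ~13M

# Storage: 10M vectors ~ 40 GB  =>  ~4 KB/vector (back-solved from the post, not a spec).
bytes_per_vector = 40e9 / 10e6
print(f"~{bytes_per_vector / 1024:.1f} KiB per vector")
```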
2026-01-28T21:07:58
https://i.redd.it/ip3x04whn5gg1.png
DueKitchen3102
i.redd.it
1970-01-01T00:00:00
0
{}
1qpo1ua
false
null
t3_1qpo1ua
/r/LocalLLaMA/comments/1qpo1ua/10m_vectors_on_a_single_device_iphone_mac_mini/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ip3x04whn5gg1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/ip3x04whn5gg1.png?width=108&crop=smart&auto=webp&s=ebaa461cf6ab9eae7615de1cff7d4cb7926dfa1b', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/ip3x04whn5gg1.png?width=216&crop=smart&auto=webp&s=a5a634060a43781d9a037d6675877d6a257c847b', 'width': 216}, {'height': 113, 'url': 'https://preview.redd.it/ip3x04whn5gg1.png?width=320&crop=smart&auto=webp&s=0fa7cee5eb4e035bc0669742e2ab27c826b5e6aa', 'width': 320}, {'height': 226, 'url': 'https://preview.redd.it/ip3x04whn5gg1.png?width=640&crop=smart&auto=webp&s=df157e549b55f710b772ced45675c5eb17471560', 'width': 640}], 'source': {'height': 311, 'url': 'https://preview.redd.it/ip3x04whn5gg1.png?auto=webp&s=0566f2e27126b6c1f2ded5318bf47e5515f7a7c5', 'width': 877}, 'variants': {}}]}
What’s the difference between LLaMA Omni and MOSHI? (training, data, interruption, structure)
1
Hi! I’m new to this and trying to understand the real differences between LLaMA Omni and MOSHI. Could someone explain, in simple terms: How each model is trained (high-level overview)? The main dataset differences they use? How MOSHI’s interruption works (what it is and why it matters)? The model structure / architecture differences between them? What the main practical differences are for real-time speech or conversation? Beginner explanations would really help. Thanks!
2026-01-28T21:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1qpnx4u/whats_the_difference_between_llama_omni_and_moshi/
Adept_Lawyer_4592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpnx4u
false
null
t3_1qpnx4u
/r/LocalLLaMA/comments/1qpnx4u/whats_the_difference_between_llama_omni_and_moshi/
false
false
self
1
null
Is reasoning in ML and LLM architectures decomposable into a small set of reusable computational primitives?
0
Or is it inherently a tangled, non-factorizable process?
2026-01-28T20:56:18
https://www.reddit.com/r/LocalLLaMA/comments/1qpnqbi/is_reasoning_in_ml_and_llm_architectures/
RJSabouhi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpnqbi
false
null
t3_1qpnqbi
/r/LocalLLaMA/comments/1qpnqbi/is_reasoning_in_ml_and_llm_architectures/
false
false
self
0
null
I built a "Cognitive OS" cloning framework for my daughters. Then I realized its potential to solve a major challenge in the AI industry.
0
Hi everyone, I’ve been an **application engineer for 20 years**. My journey with the **MET (Mind-Engine Transplant) Framework** began with a deeply personal crisis: the sudden death of a relative. Terrified of leaving my daughters alone one day, I started building a way to leave behind my "Thinking OS"—a digital twin that could joke, hesitate, and offer guidance using my authentic cognitive patterns. [MET workflow](https://preview.redd.it/gqc1lq3cj5gg1.png?width=574&format=png&auto=webp&s=714905f6550540067146734c68b9e6b05ad408fe) I realized this "Mind-Engine Transplant" could serve 4 strategic pillars for our AI future: 1. **Digitizing Tacit Knowledge** 2. **Legacy Reconstruction** 3. **Digital Leader Simulation** 4. **High-Precision Persona Marketing** I've open-sourced the framework on GitHub. I would be honored to get feedback from fellow developers. **GitHub: ​**[https://github.com/Hirofumi-I/MET-Cognitive-OS-Cloning.git](https://github.com/Hirofumi-I/MET-Cognitive-OS-Cloning.git)
2026-01-28T20:48:05
https://www.reddit.com/r/LocalLLaMA/comments/1qpnidt/i_built_a_cognitive_os_cloning_framework_for_my/
dicon_2525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpnidt
false
null
t3_1qpnidt
/r/LocalLLaMA/comments/1qpnidt/i_built_a_cognitive_os_cloning_framework_for_my/
false
false
https://b.thumbs.redditm…bi11mbKF6Fas.jpg
0
null
AMD Strix Halo GMTEK 128GB Unified ROCKS!
102
I've been running a MAX+ 395 as my daily workstation — the unified memory architecture is a game-changer for AI/ML workloads. Being able to allocate 96GB+ to the GPU without the PCIe bottleneck makes local LLM inference practical. DeepSeek 70B runs at ~12 tokens/s, gpt-oss is faster still, and ComfyUI with LTX2 does 12 s/it. This is a game changer... no quants, no hassle. If you need help, check out my GitHub, where I have step-by-step guides: [https://github.com/bkpaine1](https://github.com/bkpaine1). I also have some ComfyUI nodes for AMD and walkthroughs to get this beast cranking!
2026-01-28T20:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/
MSBStudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpneiq
false
null
t3_1qpneiq
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/
false
false
self
102
{'enabled': False, 'images': [{'id': 'G4NAObhnqEkMVyLQfc3yTQuhudAFT8-juN4A0UiQjXk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/G4NAObhnqEkMVyLQfc3yTQuhudAFT8-juN4A0UiQjXk.png?width=108&crop=smart&auto=webp&s=db36c00dd5a48155cd9c0efa6ad5edcf5061f370', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/G4NAObhnqEkMVyLQfc3yTQuhudAFT8-juN4A0UiQjXk.png?width=216&crop=smart&auto=webp&s=2dbe9b72924ecff34807675ac7da97d630363c8e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/G4NAObhnqEkMVyLQfc3yTQuhudAFT8-juN4A0UiQjXk.png?width=320&crop=smart&auto=webp&s=7dde3c3da325e4deaa00684477a12f76121eb45c', 'width': 320}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/G4NAObhnqEkMVyLQfc3yTQuhudAFT8-juN4A0UiQjXk.png?auto=webp&s=1aeb4beb5acb8c235e6f3de8495b764d37718512', 'width': 420}, 'variants': {}}]}
[LM Studio] - GLM 4.7 Flash MLX 4bit stuck in loops vs Q4_K_S
1
Hi everyone, I have a MacBook Air with an M4 chip. For performance reasons, I prefer to use the MLX 4-bit version of GLM 4.7 Flash. When using LM Studio and connecting it to Cline, however, the MLX 4-bit version gets stuck in loops, whereas the Q4\_K\_S version does not but is much slower. I have updated LM Studio to the latest version, including the latest runtimes. I am using the lm-studio-community version. Does anyone know what to do here? I am also following all the recommended settings in terms of temperature, top-k, min-k, and removing the repeat penalty.
2026-01-28T20:41:52
https://www.reddit.com/r/LocalLLaMA/comments/1qpnci8/lm_studio_glm_47_flash_mlx_4bit_stuck_in_loops_vs/
ChickenShieeeeeet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpnci8
false
null
t3_1qpnci8
/r/LocalLLaMA/comments/1qpnci8/lm_studio_glm_47_flash_mlx_4bit_stuck_in_loops_vs/
false
false
self
1
null
What ide works best for Kimi 2.5 code?
2
I subscribed to Kimi. Can I integrate Kimi Code via VS Code/Cursor? If so, how?
2026-01-28T20:41:49
https://www.reddit.com/r/LocalLLaMA/comments/1qpncg7/what_ide_works_best_for_kimi_25_code/
Otherwise-Finish-174
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpncg7
false
null
t3_1qpncg7
/r/LocalLLaMA/comments/1qpncg7/what_ide_works_best_for_kimi_25_code/
false
false
self
2
null
AnythingLLM "Fetch failed" when importing gguf file
1
I am having a really strange issue with AnythingLLM on my Mac and I am hoping someone has a fix that I haven't tried yet. I am trying to import the new Qwen 3 Next and Gemma 3 27B models using the single .gguf files. My Mac has 64GB of RAM, so it is definitely not a memory issue. When I start the import process I can see SSD reads begin; about 30 seconds later writes begin and the CPU jumps to over 50 percent for about 10 seconds, like it is doing something, but then it just stops and gives me a Fetch failed error. The weirdest part is that smaller models like the 0.6m embedding one import perfectly fine, but these larger ones just won't budge. To save everyone some time, I have already tried basically every standard fix I could find. I gave the app Full Disk Access and made sure the folder permissions weren't locked. I tried shortening the filenames to something really simple and even tried the import process with my Wi-Fi turned off to see if it was some weird network check. I also tried manually moving the files into the blobs folder, but AnythingLLM just deletes them as soon as I restart the app. By the way, embedding models work fine.
2026-01-28T20:35:41
https://www.reddit.com/r/LocalLLaMA/comments/1qpn6hd/anythingllm_fetch_failed_when_importing_gguf_file/
mr-KSA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpn6hd
false
null
t3_1qpn6hd
/r/LocalLLaMA/comments/1qpn6hd/anythingllm_fetch_failed_when_importing_gguf_file/
false
false
self
1
null
LAD-A2A: How AI agents find each other on local networks
1
AI agents are getting really good at doing things, but they're completely blind to their physical surroundings. If you walk into a hotel and you have an AI assistant (like the Chatgpt mobile app), it has no idea there may be a concierge agent on the network that could help you book a spa, check breakfast times, or request late checkout. Same thing at offices, hospitals, cruise ships. The agents are there, but there's no way to discover them. A2A (Google's agent-to-agent protocol) handles how agents talk to each other. MCP handles how agents use tools. But neither answers a basic question: how do you find agents in the first place? So I built LAD-A2A, a simple discovery protocol. When you connect to a Wi-Fi, your agent can automatically find what's available using mDNS (like how AirDrop finds nearby devices) or a standard HTTP endpoint. The spec is intentionally minimal. I didn't want to reinvent A2A or create another complex standard. LAD-A2A just handles discovery, then hands off to A2A for actual communication. Open source, Apache 2.0. Includes a working Python implementation you can run to see it in action. Curious what people think!
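As a rough illustration of the discovery half (not the actual LAD-A2A spec; the service type string and record fields here are assumptions, so check the repo for the real names), an mDNS browse with the python-zeroconf library might look like this:

```python
# Minimal mDNS discovery sketch using python-zeroconf. The service type
# "_a2a._tcp.local." is an assumption for illustration, not necessarily
# what the LAD-A2A spec actually defines.
import time
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class AgentListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found agent: {name} at {info.parsed_addresses()[0]}:{info.port}")

    def remove_service(self, zc, type_, name):
        print(f"agent gone: {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_a2a._tcp.local.", AgentListener())
time.sleep(5)   # browse the local network for a few seconds
zc.close()
```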
2026-01-28T20:32:48
https://www.reddit.com/r/LocalLLaMA/comments/1qpn3r6/lada2a_how_ai_agents_find_each_other_on_local/
franzvill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpn3r6
false
null
t3_1qpn3r6
/r/LocalLLaMA/comments/1qpn3r6/lada2a_how_ai_agents_find_each_other_on_local/
false
false
self
1
null
3 days of blind peer evaluations: DeepSeek V3.2 beats closed models on code parsing—full 10×10 matrix results
0
Running a project called The Multivac. Daily AI evaluations, 33 days straight now. The setup: models judge each other's outputs blind—they don't know whose response they're scoring. 1100+ judgments across 20+ models. https://preview.redd.it/fq23xgt5h5gg1.png?width=837&format=png&auto=webp&s=bbbf771ca0e9a57692b9bf2c6910e2c8c40e48e8 DeepSeek V3.2 took Nested JSON Parser with 9.39. Beat Claude, GPT variants, Gemini. Not cherry-picked, just what fell out of the matrix. Thing I keep seeing: task-specific competence varies way more than "frontier model" branding suggests. Claude Opus 4.5 got 7.42 on Instruction Following Under Constraint. Same model got 9.49 on Async Bug Hunt. Two point spread on the same model depending on task. I know the obvious gap here—open-weight representation is thin because I'm working through APIs. If anyone's running local inference and wants to contribute responses to evaluation prompts, genuinely interested in figuring that out. Want to get Qwen, Llama 3.3, Mixtral into Phase 3. What else should be in there? [themultivac.substack.com](http://themultivac.substack.com)
2026-01-28T20:31:03
https://www.reddit.com/r/LocalLLaMA/comments/1qpn234/3_days_of_blind_peer_evaluations_deepseek_v32/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpn234
false
null
t3_1qpn234
/r/LocalLLaMA/comments/1qpn234/3_days_of_blind_peer_evaluations_deepseek_v32/
false
false
https://b.thumbs.redditm…GKTL9tp4IvXk.jpg
0
null
“How many R’s are in strawberry?” across a few models
0
Ran the same task across a few models. How many R’s are in “strawberry”. Some get it right, some don’t. Some change between runs.
2026-01-28T20:23:51
https://v.redd.it/dlxqcr9ub5gg1
Rent_South
v.redd.it
1970-01-01T00:00:00
0
{}
1qpmv2i
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dlxqcr9ub5gg1/DASHPlaylist.mpd?a=1772223848%2CODMzNDM0ZGNjMzdiNmQ0YjY0ZWVmMjY4NTdkNWU2MzkyZTZjOTg3ZGIxYTRkMmY5Yzc0ZTllZmUxNGRkN2ZlYw%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/dlxqcr9ub5gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/dlxqcr9ub5gg1/HLSPlaylist.m3u8?a=1772223848%2CY2ZhOWYyYWU5YmE1MDM4N2JkZjBjZTMzYjRmZDI0NThjMjQxZTk4OGNhYzUzYjM0YjJkNmQ1NDVlYjc5NWUwNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dlxqcr9ub5gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1806}}
t3_1qpmv2i
/r/LocalLLaMA/comments/1qpmv2i/how_many_rs_are_in_strawberry_across_a_few_models/
false
false
https://external-preview…b0143dc1885fadff
0
{'enabled': False, 'images': [{'id': 'cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=108&crop=smart&format=pjpg&auto=webp&s=8884019bbbd9e214e9ca1237bd813758e0393525', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=216&crop=smart&format=pjpg&auto=webp&s=0cfd661e97c7101eede7cf170deb09c367d8f9e3', 'width': 216}, {'height': 191, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=320&crop=smart&format=pjpg&auto=webp&s=02204c9da9fae714840024431abd0b6954f43b5f', 'width': 320}, {'height': 382, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=640&crop=smart&format=pjpg&auto=webp&s=dae727df2f5e2b2c2ebfd4c73b7ad5abf7bb72a3', 'width': 640}, {'height': 574, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=960&crop=smart&format=pjpg&auto=webp&s=c659726f7c37c7f476144d942f30205d34885bb2', 'width': 960}, {'height': 645, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=67d8b7575122afede65ce8e315cc9f8bdff4919d', 'width': 1080}], 'source': {'height': 1722, 'url': 'https://external-preview.redd.it/cGo4Z3ptYnViNWdnMT08YCalcN-PbklogAvNxt1ApKWOOoFwXmmiI3AlCNHx.png?format=pjpg&auto=webp&s=dffdfddd53430d5bd2bd0f20e5ada773c133773d', 'width': 2880}, 'variants': {}}]}
LAD-A2A: How AI agents find each other on local networks
1
[removed]
2026-01-28T20:04:14
https://www.reddit.com/r/LocalLLaMA/comments/1qpmbsn/lada2a_how_ai_agents_find_each_other_on_local/
franzvill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpmbsn
false
null
t3_1qpmbsn
/r/LocalLLaMA/comments/1qpmbsn/lada2a_how_ai_agents_find_each_other_on_local/
false
false
self
1
null
I just got my Dell DGX Spark GB10 that I won from the hackathon!
172
Please don't mind the breadcrumbs... But they pretty much overnighted the Dell DGX Spark GB10. I think the first thing I am going to try to do is figure out how to get a robot arm to do some sort of shape matching using transfer learning to stick particular shapes in the correct holes. I think that might be easy enough? (I am naive because I haven't done transfer learning or physical AI yet.) I also want to try using LTX and see if it can recreate the ending of How I Met Your Mother or Game of Thrones (if it is able to do that). It might honestly be difficult because I haven't worked with vision models other than image creation using Fal.ai. I wonder if this machine can handle it. Otherwise, I am going to keep hammering at figuring out better ways of solving the Social Determinants of Health problem. There are a lot of correlations that I wasn't able to completely finish within the limited amount of time. For example: crime, lack of parks, and food insecurity increase chronic disease risk because people do not feel safe leaving their homes to exercise or walk, and oftentimes default to junk food as there are no other culturally sensitive alternatives, leading to obesity and higher cardiovascular risk. It would also be great if my AI agents could go through research papers and identify some of the most crucial ones that I can at least bake into the platform as a baseline that might be affecting other cities. Also, since I have a 4 TB SSD, I can potentially add the data from a bunch of different cities and start doing some pattern matching/correlation detection between this generally siloed data and see if I could suggest specific campaigns for the cities that would help underrepresented people get better access to care. One of my passions (and I know this sounds really nerdy) is to create really good multi-turn evaluation harnesses that can use Process Supervised Reward Models to better train complex AI agents and self-heal. If anyone has advice on any of this I would love to hear it.
2026-01-28T20:03:20
https://i.redd.it/af2u39y6a5gg1.jpeg
brandon-i
i.redd.it
1970-01-01T00:00:00
0
{}
1qpmay0
false
null
t3_1qpmay0
/r/LocalLLaMA/comments/1qpmay0/i_just_got_my_dell_dgx_spark_gb10_that_i_won_from/
false
false
default
172
{'enabled': True, 'images': [{'id': 'af2u39y6a5gg1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=108&crop=smart&auto=webp&s=c42fe65853069f68a2327c6cd3fe1ba458a0904a', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=216&crop=smart&auto=webp&s=c1e83e87029f2fe2d929d85fc2292c2eb67e0c09', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=320&crop=smart&auto=webp&s=c756cb1c3f1665bbc08aaeac38b7986a1de4f539', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=640&crop=smart&auto=webp&s=d772153c2ea4d039aff7359c493d9106f7a82aae', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=960&crop=smart&auto=webp&s=4de52d227368578865da44a7a26ab5d2ae087ebd', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?width=1080&crop=smart&auto=webp&s=e0e5506d11c607af57c65370d93d44c8389dbfd6', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/af2u39y6a5gg1.jpeg?auto=webp&s=aed4ec945835f4ebbedf8e3f29d7882def4fc95e', 'width': 3213}, 'variants': {}}]}
ByteDance-Seed/Stable-DiffCoder-8B-Instruct · Hugging Face
68
Diffusion text/coding models are finally trickling in!
2026-01-28T19:56:52
https://huggingface.co/ByteDance-Seed/Stable-DiffCoder-8B-Instruct
FullstackSensei
huggingface.co
1970-01-01T00:00:00
0
{}
1qpm48y
false
null
t3_1qpm48y
/r/LocalLLaMA/comments/1qpm48y/bytedanceseedstablediffcoder8binstruct_hugging/
false
false
default
68
{'enabled': False, 'images': [{'id': 'NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=108&crop=smart&auto=webp&s=0ba31334985a700cff7eabd1e3601cc8081fbcca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=216&crop=smart&auto=webp&s=ea39658b1154851d6b77a9777e48ae8f9a815529', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=320&crop=smart&auto=webp&s=72436cd678714e5f9863049e8980b9a8c41fef24', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=640&crop=smart&auto=webp&s=8ee5c6bc1687d34fcebba3988d582836917613a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=960&crop=smart&auto=webp&s=8db85c02206e73e6405372325073a9e78d8bd53a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?width=1080&crop=smart&auto=webp&s=93bd152c0def4b2c3cfd5314ee1724fd3ce577f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NHXAooAa4KDswUx_-kPck6PlutEBvd6amAoPNHHGh0s.png?auto=webp&s=1417703412e09211cf67259526dee1963d65edd0', 'width': 1200}, 'variants': {}}]}
Running 4+ GPUs - how are you handling cooling?
5
*Curious about setups with 4-8 GPUs in a single system or small cluster. Air cooling working okay?* *Anyone gone liquid? What density/wattage before things got uncomfortable?*
2026-01-28T19:49:45
https://www.reddit.com/r/LocalLLaMA/comments/1qplx5f/running_4_gpus_how_are_you_handling_cooling/
OkParking9422
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qplx5f
false
null
t3_1qplx5f
/r/LocalLLaMA/comments/1qplx5f/running_4_gpus_how_are_you_handling_cooling/
false
false
self
5
null
I'm building a Dungeon Master AI for fun. It’s open source and I’d love some help/feedback.
1
Hey everyone, I've been working on a project called DungeonMasterAI for a while now. I’m a huge D&D fan and I’ve always wanted a tool that could actually help with world-building and running sessions without feeling like a generic chatbot. Just to be clear: I’m not making any money off this. No subscriptions, no "pro" versions. It’s a pure passion project because I love coding and I love TTRPGs. The project is currently at a stage where it works, but it could be so much better. I’m looking for anyone who wants to nerd out on this with me—whether you're a dev, a writer, or just someone who plays a lot of D&D and has ideas for the logic behind encounters and NPC behavior. What I’ve got so far: - Created a nice frontend. - Implemented images with the Pollinations AI API. - Implemented Gemini/Gemma as the LLM for the backend, with context windows that remember events and drive the story. - Text-to-speech with high-level Kokoro. What I need help with: - Refining the prompts/logic. - Improving context windows to prevent the AI from making choices outside of context. - Any type of idea is welcome, as is general feedback from people who actually play the game. Everything is open source on GitHub: [https://github.com/pPyrius/DungeonMasterAI](https://github.com/pPyrius/DungeonMasterAI) If you find this interesting, feel free to check the repo, open an issue, or just roast my code in the comments. Any help is appreciated! Cheers! https://preview.redd.it/2lajmj7p75gg1.png?width=1564&format=png&auto=webp&s=6dba1e0c40d12df68a6b93a92d68d2f30004ced1
2026-01-28T19:38:55
https://www.reddit.com/r/LocalLLaMA/comments/1qplmbt/im_building_a_dungeon_master_ai_for_fun_its_open/
user667711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qplmbt
false
null
t3_1qplmbt
/r/LocalLLaMA/comments/1qplmbt/im_building_a_dungeon_master_ai_for_fun_its_open/
false
false
https://b.thumbs.redditm…EHge7jZcG8Yo.jpg
1
null
ACE-Step 1.5 dropping in days - "Commercial grade OSS music gen" with quality between Suno v4.5 and v5 (8GB VRAM)
77
For those who haven't been following the AI music generation space, ACE-Step is about to have its "Stable Diffusion moment." ## What's Happening According to [@realmrfakename on X](https://x.com/realmrfakename/status/2016274138701476040) (7K+ views), ACE-Step 1.5 is coming in days with early access already rolling out. **Key claims:** - Quality "somewhere between Suno v4.5 and v5" - "Far better than HeartMuLa or DiffRhythm" - "We finally have commercial grade OSS music gen" ## Why This Matters for Local AI **ACE-Step v1** already runs on **8GB VRAM** with CPU offload. It's a 3.5B parameter model that generates full songs with vocals + instrumentals + lyrics in 19 languages. **Speed:** 4 minutes of music in ~20 seconds on A100, ~1.7s on RTX 4090. If v1.5 delivers on the quality claims while keeping the same hardware requirements, this could be huge for: - Local music generation without cloud dependencies - LoRA fine-tuning for custom voices/styles - Integration into creative workflows ## Links - [GitHub](https://github.com/ace-step/ACE-Step) - [HuggingFace](https://huggingface.co/ACE-Step/ACE-Step-v1-3.5B) - [Demo Space](https://huggingface.co/spaces/ACE-Step/ACE-Step) - [Technical Report](https://arxiv.org/abs/2506.00045) Also created r/ACEStepGen for dedicated discussions if anyone's interested. Anyone here tried the current v1? Curious about real-world experiences with quality and inference speed.
2026-01-28T19:38:05
https://www.reddit.com/r/LocalLLaMA/comments/1qpllhm/acestep_15_dropping_in_days_commercial_grade_oss/
ExcellentTrust4433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpllhm
false
null
t3_1qpllhm
/r/LocalLLaMA/comments/1qpllhm/acestep_15_dropping_in_days_commercial_grade_oss/
false
false
self
77
null
Any good model that runs on 1 GB-1.5 GB of RAM?
0
Any good model that runs on a potato device with 1 GB-1.5 GB of RAM? Sorry for asking about 256 MB of RAM earlier.
2026-01-28T19:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1qplg6e/any_good_model_that_runs_on_1_gb15_gb_of_ram/
Ok-Type-7663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qplg6e
false
null
t3_1qplg6e
/r/LocalLLaMA/comments/1qplg6e/any_good_model_that_runs_on_1_gb15_gb_of_ram/
false
false
self
0
null
Theorizer by AllenAI: Local, grounded scientific theory generation
6
AllenAI just released Theorizer, a multi-LLM system for producing novel theories grounded in a corpus of scientific papers. It's all local; give it a clone and try it out! Blog: [https://allenai.org/blog/theorizer](https://allenai.org/blog/theorizer) Code: [https://github.com/allenai/asta-theorizer](https://github.com/allenai/asta-theorizer) Technical report: [https://arxiv.org/abs/2601.16282](https://arxiv.org/abs/2601.16282)
2026-01-28T19:30:49
https://allenai.org/blog/theorizer
Unstable_Llama
allenai.org
1970-01-01T00:00:00
0
{}
1qple8b
false
null
t3_1qple8b
/r/LocalLLaMA/comments/1qple8b/theorizer_by_allenai_local_grounded_scientific/
false
false
default
6
{'enabled': False, 'images': [{'id': 'ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=108&crop=smart&auto=webp&s=01ab30dd7046f323493d38aa9e2fe7768285cab1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=216&crop=smart&auto=webp&s=a4dd71697f28275dfbcc2a303b6bcb5218c6fdc9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=320&crop=smart&auto=webp&s=d0912c0c1a7bb8ee6954305731a5fb384d157c94', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=640&crop=smart&auto=webp&s=a179c2be1e6aaed528c675af75dc1b57347fad96', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=960&crop=smart&auto=webp&s=122416aed2e58d623aa012a3dc1bad2c961374b4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?width=1080&crop=smart&auto=webp&s=173c5e6d83ae5816e9d5ae9e202bdbd25ec5a9e2', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/ZlRqczjoQiIqffI1tQ9FD8JfzM6ak7D5leYcEJLqrPY.png?auto=webp&s=9cc1e4082702431f45d2d2bc10cdd9a25e1fb561', 'width': 2133}, 'variants': {}}]}
My First Rig
6
So I was just looking to see how cheap I could make a little box that can run some smaller models and I came up with this. It’s an old E5 Xeon with 10 cores, 32GB of DDR3 RAM, Chinese salvage X79 mobo, 500GB Patriot NVMe, and a 16GB P100. The grand total, not including fans and zip ties I had laying around (lol), was about $400. I’m running Rocky 9 headlessly and Ollama inside a Podman container. Everything seems to be running pretty smooth. I can hit my little models on the network using the API, and it’s pretty responsive. ChatGPT helped me get some things figured out with Podman. It really wanted me to run Ubuntu 22.04 and Docker, but I just couldn’t bring myself to run crusty ol 22.04. Plus Cockpit seems to run better on Red Hat distros. Next order of business is probably getting my GPU cooling in a more reliable (non zip tied) place.
2026-01-28T19:26:49
https://i.redd.it/ugxf51ft55gg1.jpeg
randofreak
i.redd.it
1970-01-01T00:00:00
0
{}
1qpla42
false
null
t3_1qpla42
/r/LocalLLaMA/comments/1qpla42/my_first_rig/
false
false
default
6
{'enabled': True, 'images': [{'id': 'ugxf51ft55gg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=108&crop=smart&auto=webp&s=b8b3193f26fe8b3ec04b1f961a7ec4d093429c5c', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=216&crop=smart&auto=webp&s=3a01b55e292472a7b82fa0184e478ac25c089bc5', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=320&crop=smart&auto=webp&s=35554ddc1fe12eb97ccdb40be315b5e7b7647faa', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=640&crop=smart&auto=webp&s=a5fb33530e647c4d9a1209c06839bbbce6e9b019', 'width': 640}, {'height': 719, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=960&crop=smart&auto=webp&s=503fed9a78b619c7f11cf815988c63dcf712da42', 'width': 960}, {'height': 809, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?width=1080&crop=smart&auto=webp&s=c36f575e4cd48ed1df96f1f83c63debcc669ddd3', 'width': 1080}], 'source': {'height': 2805, 'url': 'https://preview.redd.it/ugxf51ft55gg1.jpeg?auto=webp&s=125dfe10fb99fcccc6ecd9a9f3a88be5a70de47d', 'width': 3741}, 'variants': {}}]}
Built a low-cost Search API for my RAG projects ($0.25/1k reqs) because SERP APIs were too expensive. Looking for feedback/testers.
1
[removed]
2026-01-28T19:21:20
https://www.reddit.com/r/LocalLLaMA/comments/1qpl4k1/built_a_lowcost_search_api_for_my_rag_projects/
Maleficent_Money24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpl4k1
false
null
t3_1qpl4k1
/r/LocalLLaMA/comments/1qpl4k1/built_a_lowcost_search_api_for_my_rag_projects/
false
false
self
1
null
Any good model that even runs on a 256 MB of RAM device?
0
Well, any good model that even runs on a 256 MB of RAM potato device?
2026-01-28T19:15:30
https://www.reddit.com/r/LocalLLaMA/comments/1qpkyoi/any_good_model_that_even_runs_on_a_256_mb_of_ram/
Ok-Type-7663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpkyoi
false
null
t3_1qpkyoi
/r/LocalLLaMA/comments/1qpkyoi/any_good_model_that_even_runs_on_a_256_mb_of_ram/
false
false
self
0
null
I built a filesystem-based "Stigmergy" protocol to give local agents persistent memory on files using xattr (Python + VS Code Extension)
3
I've been working with autonomous coding agents (using Claude Code, Cursor, and Antigravity) for a while now. One frustration I kept running into was that agents are **stateless**. Once a chat session ends, the agent "forgets" why a specific architectural decision was made, or that I specifically told it *not* to touch the legacy auth module. Comments in code pollute the source. Context windows are expensive (and finite). So, I took inspiration from nature (ants) and built **PheroPath**. **GitHub:** https://github.com/starpig1129/PheroPath **VS Code Marketplace:** https://marketplace.visualstudio.com/items?itemName=pheropath.pheropath&ssr=false#overview What is it? PheroPath is an open-source protocol that implements **Stigmergy** (indirect coordination through the environment). It allows AI agents (and humans) to read/write invisible signals—"pheromones"—directly onto file paths. Instead of keeping state in a vector DB or a chat log, **the state lives on the file system itself.** How it works? * **Core (Python):** Uses **Extended Attributes** (`xattr`) on Linux/macOS to store metadata (JSON payloads) attached to files without modifying the file content (hash remains the same!). * **Windows Fallback:** Uses a sidecar JSON strategy since NTFS ADS is tricky to manage cross-platform. * **Visualization (VS Code):** I built an extension that visualizes these hidden signals. If an agent marks a file as `DANGER`, it turns **Red** in your file explorer. *(This is a 10s clip showing an agent finding a bug and marking the file instantly)* ![Demo GIF](https://raw.githubusercontent.com/starpig1129/PheroPath/main/demo.gif) Why I built this? I want to enable **Asynchronous Human-in-the-loop workflows**. 1. My local agent runs a nightly audit. 2. It finds a potential race condition but isn't confident enough to fix it. 3. It marks the file with `TODO` and a note. 4. Next morning, I open VS Code, see the yellow file, read the agent's note via tooltip, and fix it. It's MIT licensed and I'd love to hear what you think. Could this be useful for your agentic workflows?
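For anyone curious what the xattr mechanism looks like at the OS level, here is a minimal sketch using the Python standard library on Linux (`os.setxattr`/`os.getxattr` are Linux-only; macOS needs the third-party xattr package and Windows uses the sidecar fallback described above). The attribute name and payload shape are illustrative, not PheroPath's exact schema:

```python
# Sketch of the xattr idea on Linux with the stdlib. The attribute name
# "user.pheropath" and the payload fields are placeholders for illustration.
import json
import os

def mark(path: str, signal: str, note: str) -> None:
    payload = json.dumps({"signal": signal, "note": note}).encode()
    os.setxattr(path, "user.pheropath", payload)   # file content and hash stay untouched

def read_mark(path: str):
    try:
        return json.loads(os.getxattr(path, "user.pheropath"))
    except OSError:
        return None   # no pheromone on this file

open("auth.py", "a").close()   # ensure the demo file exists
mark("auth.py", "DANGER", "possible race condition, do not auto-refactor")
print(read_mark("auth.py"))
```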
2026-01-28T19:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1qpks4q/i_built_a_filesystembased_stigmergy_protocol_to/
Expensive-Rub3117
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpks4q
false
null
t3_1qpks4q
/r/LocalLLaMA/comments/1qpks4q/i_built_a_filesystembased_stigmergy_protocol_to/
false
false
self
3
null
Memory architecture that actually works for AI companions - lessons from production
0
Built an AI companion app and wanted to share what worked for persistent memory. The naive approach (stuff full history into context) doesn't scale. Here's what landed after iteration: **Semantic retrieval, not full history.** Generate embeddings for conversation summaries, then retrieve a few relevant memories based on the current message. A well-prompted smaller model with relevant context can beat a bigger model drowning in irrelevant history. **Character-scoped memory.** Each character maintains their own memory store. Memories from Character A don't bleed into Character B. Critical for anyone working with multiple characters. **Summarize aggressively.** Every X messages, generate a summary focused on character development and plot points. Classify importance and skip storing low-value memories. Cap retrieved memories at X chars each - more than that and the model gets confused. **Context structure matters.** Order that worked: character definition > retrieved semantic memories > current session summary > recent messages. Character grounding needs to stay visible to the model throughout. Built this into [https://anyconversation.com](https://anyconversation.com) if you want to see it working. Happy to answer questions.
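A minimal sketch of the retrieval step described above, assuming you already have embedding vectors for stored memories and for the current message (the store layout and cosine scoring here are illustrative, not the app's actual code):

```python
# Minimal sketch of "retrieve a few relevant memories" with character scoping.
# query_vec comes from whatever embedding model you use; each stored memory is
# a dict with a precomputed "vec" and its summary "text".
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(memory_store, character_id, query_vec, k=3, max_chars=300):
    memories = memory_store.get(character_id, [])            # scoped: no bleed between characters
    scored = sorted(memories, key=lambda m: cosine(m["vec"], query_vec), reverse=True)
    return [m["text"][:max_chars] for m in scored[:k]]        # cap length so the model isn't flooded

# Context is then assembled in the order the post found to work best:
# character definition > retrieved memories > session summary > recent messages.
```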
2026-01-28T19:02:24
https://www.reddit.com/r/LocalLLaMA/comments/1qpkl8l/memory_architecture_that_actually_works_for_ai/
Pretty-Increase-7128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpkl8l
false
null
t3_1qpkl8l
/r/LocalLLaMA/comments/1qpkl8l/memory_architecture_that_actually_works_for_ai/
false
false
self
0
null
Introducing LlamaAgents Builder - vibe-code document workflows through natural language
0
Yes - this is a real, first-party LlamaIndex launch, not random third-party hype. Announced as a natural-language “agent/workflow generator” inside LlamaCloud that: * turns your description of a *document workflow* (classify → route → extract → validate, etc.) into an actual LlamaIndex Workflows Python project, * pushes the generated code to your GitHub repo, and * can deploy it to LlamaCloud with an API + simple web UI. Can they be run locally? Yes - the agents/workflows can run locally, but the “Builder” part (the “idea → GitHub repo → deploy” UI) is cloud-first and part of LlamaCloud. Blog post: [https://www.llamaindex.ai/blog/llamaagents-builder-idea-to-deployed-agent-in-minutes](https://www.llamaindex.ai/blog/llamaagents-builder-idea-to-deployed-agent-in-minutes)
2026-01-28T19:00:39
https://i.redd.it/f0gfzhsd05gg1.jpeg
etherd0t
i.redd.it
1970-01-01T00:00:00
0
{}
1qpkjb0
false
null
t3_1qpkjb0
/r/LocalLLaMA/comments/1qpkjb0/introducing_llamaagents_builder_vibecode_document/
false
false
default
0
{'enabled': True, 'images': [{'id': 'f0gfzhsd05gg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/f0gfzhsd05gg1.jpeg?width=108&crop=smart&auto=webp&s=15b150ac57e3d429a6f0043f8a6aaafd6d892542', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/f0gfzhsd05gg1.jpeg?width=216&crop=smart&auto=webp&s=8095d3afe097ac1680ab9099aa9fbdbf08556937', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/f0gfzhsd05gg1.jpeg?width=320&crop=smart&auto=webp&s=d5451b383aefb9cde7909cb14b82305c232a4f89', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/f0gfzhsd05gg1.jpeg?width=640&crop=smart&auto=webp&s=b75d181f276dd5757eb5cfffcc6a1d4864e38ea9', 'width': 640}], 'source': {'height': 677, 'url': 'https://preview.redd.it/f0gfzhsd05gg1.jpeg?auto=webp&s=5e24180097ecde4c4a9cb7a4261eb3aa4298a010', 'width': 721}, 'variants': {}}]}
Image generation is now available alongside LLMs and Whisper in Lemonade v9.2
55
We're on a mission to make local generative AI supremely easy for users and devs. Today, Lemonade has taken a big step by introducing image generation into our unified local API. This means our one-click installer gets you LLMs, Whisper, and Stable Diffusion and makes them all available on the same base URL. We'll use these capabilities to build local apps and agents that are more powerful and natural to interact with. What would a unified multi-modal server help you build? Load models: ``` lemonade-server run SD-Turbo lemonade-server run Whisper-Large-v3 lemonade-server run GLM-4.7-Flash-GGUF ``` Endpoints: ``` /api/v1/images/generations /api/v1/audio/transcriptions /api/v1/chat/completions ``` Today is just the beginning, introducing the fundamental capability and enabling the endpoints. Future work to enable multi-modal local AI apps includes: - Add Z-Image and other SOTA models to `images/generations`. - Add ROCm, Vulkan, and AMD NPU builds for `images/generations` and `audio/transcriptions`. - Streaming input support for `audio/transcriptions`. - Introduce a text-to-speech endpoint. If you like what we're doing, please support the project with a star on the [lemonade GitHub](https://github.com/lemonade-sdk/lemonade) and come hang out with us on [Discord](https://discord.gg/5xXzkMu8Zk)! PS. as always huge thanks to the maintainers of llama.cpp, stablediffusion.cpp, whisper.cpp, and the other tools lemonade builds on.
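A quick sketch of what talking to all three endpoints from one base URL could look like, assuming the default local port and OpenAI-style request shapes (both are assumptions; verify them against the Lemonade docs):

```python
# Sketch of hitting the three endpoints from one base URL. The port (8000) and
# the OpenAI-style payload shapes are assumptions, not confirmed by the post.
import requests

BASE = "http://localhost:8000/api/v1"

chat = requests.post(f"{BASE}/chat/completions", json={
    "model": "GLM-4.7-Flash-GGUF",
    "messages": [{"role": "user", "content": "Describe a sunset in one line."}],
}).json()

image = requests.post(f"{BASE}/images/generations", json={
    "model": "SD-Turbo",
    "prompt": chat["choices"][0]["message"]["content"],
}).json()

with open("clip.wav", "rb") as f:
    transcript = requests.post(f"{BASE}/audio/transcriptions",
                               files={"file": f},
                               data={"model": "Whisper-Large-v3"}).json()
```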
2026-01-28T18:56:13
https://i.redd.it/sbfse0xez4gg1.png
jfowers_amd
i.redd.it
1970-01-01T00:00:00
0
{}
1qpkem2
false
null
t3_1qpkem2
/r/LocalLLaMA/comments/1qpkem2/image_generation_is_now_available_alongside_llms/
false
false
default
55
{'enabled': True, 'images': [{'id': 'sbfse0xez4gg1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=108&crop=smart&auto=webp&s=db773d6c63a459d07515eaa5410c66e3b204691a', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=216&crop=smart&auto=webp&s=edde0c101cc18d75433c5db4910ad54847bc8a66', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=320&crop=smart&auto=webp&s=a1423091a7626c93e047f0639bda759745ea5acb', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=640&crop=smart&auto=webp&s=344e447c355af53401fc4c3ee2b177014b36bc61', 'width': 640}, {'height': 855, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=960&crop=smart&auto=webp&s=cbd1d8ab68e5805b0bbba99bb9c6b1a21095f748', 'width': 960}, {'height': 962, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?width=1080&crop=smart&auto=webp&s=10efa05294f7ef4bcd6d854b3b156324fcbeda16', 'width': 1080}], 'source': {'height': 1140, 'url': 'https://preview.redd.it/sbfse0xez4gg1.png?auto=webp&s=4a543a97aed2741994eaa18b7f63d51f020227de', 'width': 1279}, 'variants': {}}]}
Running Kimi K2.5 at 24 token/s with 2 x 512GB M3 Ultra Mac Studios
59
https://preview.redd.it/…3b So Cooooool!
2026-01-28T18:53:08
https://www.reddit.com/r/LocalLLaMA/comments/1qpkbb4/running_kimi_k25_at_24_tokens_with_2_x_512gb_m3/
Zestyclose_Slip_6467
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpkbb4
false
null
t3_1qpkbb4
/r/LocalLLaMA/comments/1qpkbb4/running_kimi_k25_at_24_tokens_with_2_x_512gb_m3/
false
false
https://b.thumbs.redditm…d1K0acMpLc_w.jpg
59
null
[Project] Porting Chronos Bolt (Time-Series) and T5 to Mobile NPU - Implementation shared
7
2026-01-28T18:48:44
https://v.redd.it/sk7v4jwzy4gg1
Glittering-Topic-822
/r/LocalLLaMA/comments/1qpk6qx/project_porting_chronos_bolt_timeseries_and_t5_to/
1970-01-01T00:00:00
0
{}
1qpk6qx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sk7v4jwzy4gg1/DASHPlaylist.mpd?a=1772347730%2CNWMyYWIwZmY5ODRiNjkyNTM3ODE2MjZiOTYxOTY1MmVhYjc1MWM1NzJiNzhhNzRkY2JlZWNhNjFlMDQ4ZDJjZQ%3D%3D&v=1&f=sd', 'duration': 91, 'fallback_url': 'https://v.redd.it/sk7v4jwzy4gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/sk7v4jwzy4gg1/HLSPlaylist.m3u8?a=1772347730%2CODk1NTNkYTZmYWRhYjBhZTIzYWVlMWQzMzc4NzE0ZmU0MmU1ODk5NjI3MjJlMmZjYjNiODI2MzkxOTYxMWJmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sk7v4jwzy4gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1qpk6qx
/r/LocalLLaMA/comments/1qpk6qx/project_porting_chronos_bolt_timeseries_and_t5_to/
false
false
https://external-preview…e0c99dc4ec29b344
7
{'enabled': False, 'images': [{'id': 'eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=108&crop=smart&format=pjpg&auto=webp&s=927a7305cdc3a9e7b4764e0c40f960e95dc454bc', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=216&crop=smart&format=pjpg&auto=webp&s=445bae589f4a3bb3042e670dcdc806a3c7671805', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=320&crop=smart&format=pjpg&auto=webp&s=1b5afd09a7600773d2b78583d04d5dd341e92fac', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=640&crop=smart&format=pjpg&auto=webp&s=085a732eafe073514a82e08bb143944bd5e73937', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=960&crop=smart&format=pjpg&auto=webp&s=947e0b6bf4b2d21a1a3d9602a74dc646ca8c9ad8', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=00afac3166ece8f2be624dda63fa67ec529238c5', 'width': 1080}], 'source': {'height': 3118, 'url': 'https://external-preview.redd.it/eHF0dzF3d3p5NGdnMR2zQb3k3X6Yz_6BtK7gYs3rfGbJWmJS66OpOcqEtSSx.png?format=pjpg&auto=webp&s=cb4a7fc6b2f058b467aa6a30d4132f9580a36395', 'width': 1440}, 'variants': {}}]}
[Project] Porting Chronos Bolt (Time-Series) and T5 to Mobile NPU - Implementation shared
1
[removed]
2026-01-28T18:45:59
https://v.redd.it/kdi04cq4y4gg1
Unusual-Bag4487
/r/LocalLLaMA/comments/1qpk3wx/project_porting_chronos_bolt_timeseries_and_t5_to/
1970-01-01T00:00:00
0
{}
1qpk3wx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kdi04cq4y4gg1/DASHPlaylist.mpd?a=1772347570%2CYmExYzQ0YmRjNjIzNjYxOGMyMDZjMThlMjdjYjFhZTQ0MGVkOTdjOTU3YmZlNTc3MTNiMTI5OThkOTlhY2ZkMg%3D%3D&v=1&f=sd', 'duration': 91, 'fallback_url': 'https://v.redd.it/kdi04cq4y4gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/kdi04cq4y4gg1/HLSPlaylist.m3u8?a=1772347570%2CMjE1ZmQ4MjRiMDhkOWM2MjZlMmYxMzA3YTJjZTdlZDVlZjlkZDFkYzlmZDgwM2RjNjIwNGI3ZDBhMDE1Zjk1NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kdi04cq4y4gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1qpk3wx
/r/LocalLLaMA/comments/1qpk3wx/project_porting_chronos_bolt_timeseries_and_t5_to/
false
false
https://external-preview…63edd51954148923
1
{'enabled': False, 'images': [{'id': 'NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=108&crop=smart&format=pjpg&auto=webp&s=189e1f64b263a9a93a6fef5bc6af09f5da48bf7d', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=216&crop=smart&format=pjpg&auto=webp&s=5be1439d7d0b3eaa412c426f90e4876bcb713707', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=320&crop=smart&format=pjpg&auto=webp&s=f1899759486cbb84513db3e4c0985a392237d345', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=640&crop=smart&format=pjpg&auto=webp&s=63a20c588418f4f35616209a39d825ae312fd1a2', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=960&crop=smart&format=pjpg&auto=webp&s=ff21bd15a262f937a58299dadcffe0dd0549a625', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e8f24e775caafde8038714fcc681efe66cd5b64f', 'width': 1080}], 'source': {'height': 3118, 'url': 'https://external-preview.redd.it/NWRzOTk3dTR5NGdnMSyQ4OHnDI-VwBjK8AUvJHMf6VdS5tEf9sAy_Md_310-.png?format=pjpg&auto=webp&s=2645f8f234d2278e8f6f7b258bfac47bc54dc364', 'width': 1440}, 'variants': {}}]}
Best tool for generating/editing with local-ai presentations?
1
[removed]
2026-01-28T18:41:50
https://www.reddit.com/r/LocalLLaMA/comments/1qpjzn4/best_tool_for_generatingediting_with_localai/
UnderstandingAny4075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpjzn4
false
null
t3_1qpjzn4
/r/LocalLLaMA/comments/1qpjzn4/best_tool_for_generatingediting_with_localai/
false
false
self
1
null
Running Chronos Bolt (Time-series) and Hybrid Transformers on Mobile NPU - Open Source SDK
1
[removed]
2026-01-28T18:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1qpju26/running_chronos_bolt_timeseries_and_hybrid/
Unusual-Bag4487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpju26
false
null
t3_1qpju26
/r/LocalLLaMA/comments/1qpju26/running_chronos_bolt_timeseries_and_hybrid/
false
false
self
1
null
Add self‑speculative decoding (no draft model required) by srogmann · Pull Request #18471 · ggml-org/llama.cpp
56
tl;dr: potential **t/s boost** for all (non-reasoning) models This looks really interesting, but needs more investigation. Speculative decoding uses a smaller draft model to speed up a bigger one. **Self-speculative decoding** uses no extra model at all, the model is helping itself. It only speeds up certain workloads with a lot of repetition, should be especially useful for coding and refactoring tasks.
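As a rough illustration of why repetition-heavy workloads benefit, here is a draft-model-free drafting sketch in the spirit of prompt-lookup decoding: propose draft tokens by copying what followed an earlier occurrence of the current suffix, then let the real model verify them. The actual mechanism in the llama.cpp PR may differ in detail:

```python
# Rough illustration of draft-model-free ("self") speculation. The function only
# shows the drafting half; the model still verifies the proposed tokens.
def draft_from_context(tokens, ngram=3, max_draft=8):
    if len(tokens) < ngram:
        return []
    suffix = tokens[-ngram:]
    # search earlier occurrences of the suffix, most recent first
    for i in range(len(tokens) - ngram - 1, -1, -1):
        if tokens[i:i + ngram] == suffix:
            return tokens[i + ngram:i + ngram + max_draft]   # copy the continuation as the draft
    return []

ctx = "def add ( a , b ) : return a + b def sub ( a , b ) :".split()
print(draft_from_context(ctx))   # -> ['return', 'a', '+', ...] copied from the earlier definition
```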
2026-01-28T18:19:15
https://github.com/ggml-org/llama.cpp/pull/18471
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1qpjc4a
false
null
t3_1qpjc4a
/r/LocalLLaMA/comments/1qpjc4a/add_selfspeculative_decoding_no_draft_model/
false
false
default
56
null
Options regarding 3rd gpu for Inference
1
Hey everyone, so I recently bought a 5090 to replace one of my two 3090s (I do a fair amount of gaming) and am now considering future options of whether to sell or keep the orphaned card. A few questions for anyone with prior experience: What sort of improvement is there going from 48 -> 56 -> 80 GB of VRAM? I'm most familiar with the old 70b dense models with tabby api at 4.5 bpw (tensor parallel) and don't anticipate much of an uplift at 56 aside from a bpw increase; however, 80 could open up the much more interesting 120b models. That being said, I'm wondering if using OCuLink (M.2 Gen 4 conversion) would provide suitable bandwidth and still maintain "ok" tp/s at larger contexts (3090s are slow on exl3, so I'm used to being around 8-10 for reference). Can I still use tensor parallel with 3 cards? How does mixing and matching the two cards work? The 3090 can't do fp8, so does inference speed default to the slowest denominator and I just have a 3090 with extra VRAM? (With tensor parallel on or off?) Other considerations: 32 GB RAM (the system was only recently upgraded to DDR5), so offloading is less than ideal even with a 9950X3D until prices normalize, but I'm open to MoE suggestions. Not really interested in getting a mining frame due to spatial limitations for now. Thanks for reading!
2026-01-28T18:16:31
https://www.reddit.com/r/LocalLLaMA/comments/1qpj9bv/options_regarding_3rd_gpu_for_inference/
Darc78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpj9bv
false
null
t3_1qpj9bv
/r/LocalLLaMA/comments/1qpj9bv/options_regarding_3rd_gpu_for_inference/
false
false
self
1
null
does running locally actually protect you or are we kidding ourselves?
0
I've been running models locally specifically because I don't want my stuff used for training, but I keep wondering if I'm missing something. Like, even with local setups, aren't the base models already trained on scraped data? And if you're fine-tuning on your own data, that data still exists on your machine, which could get compromised anyway. Starting to feel like the only real "privacy" is just not using AI at all. Someone tell me I'm being paranoid please.
2026-01-28T18:15:56
https://www.reddit.com/r/LocalLLaMA/comments/1qpj8q7/does_running_locally_actually_protect_you_or_are/
pennyco2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpj8q7
false
null
t3_1qpj8q7
/r/LocalLLaMA/comments/1qpj8q7/does_running_locally_actually_protect_you_or_are/
false
false
self
0
null
Wave - AI native , All-in-one Terminal
0
Terminals traditionally render text, are fast, and give developers a productivity boost with their fast keyboard workflows. But almost all terminals support only text-based workflows. What if the terminal could support all other media types and also the most common uses on a computer: - Open any file type (markdown, pdf, image, video, audio, etc.) in your terminal - browse the web in your terminal / search the web from your command line, - use a file explorer in your terminal, - chat with your favorite hosted / local AI model - without sacrificing the speed and utility of a fast terminal The terminal is called “Wave”. I tried it out and I’m impressed. It’s open source and also has the user's privacy at its heart. Give it a try. [WaveTerm.dev](waveterm.dev) If you aren’t convinced, here’s a video I recorded to convince you. I bet you’ll install it before you finish watching the full video 😉 [Wave - Ultimate Terminal Upgrade](https://youtu.be/_sDJBosDznI) PS - I’m not affiliated with the project. Just sharing a cool terminal I found to be a productivity powerhouse. PPS - No AI was used/harmed in writing this post. The impressive writing style and the typos are all mine. 🙂
2026-01-28T18:12:49
https://youtu.be/_sDJBosDznI
NoobMLDude
youtu.be
1970-01-01T00:00:00
0
{}
1qpj5gl
false
{'oembed': {'author_name': 'Noob ML Dude', 'author_url': 'https://www.youtube.com/@NoobMLDude', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/_sDJBosDznI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Wave - Ultimate Terminal Upgrade - Complete Setup and Usage Guide"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/_sDJBosDznI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Wave - Ultimate Terminal Upgrade - Complete Setup and Usage Guide', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qpj5gl
/r/LocalLLaMA/comments/1qpj5gl/wave_ai_native_allinone_terminal/
false
false
default
0
{'enabled': False, 'images': [{'id': '50uvsj5Sg51xLE2xiEkk0NSqCPBY7VjHBaoqUlMskT4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/50uvsj5Sg51xLE2xiEkk0NSqCPBY7VjHBaoqUlMskT4.jpeg?width=108&crop=smart&auto=webp&s=7613b56cb635f9930e8dd107721b72e33df30bb4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/50uvsj5Sg51xLE2xiEkk0NSqCPBY7VjHBaoqUlMskT4.jpeg?width=216&crop=smart&auto=webp&s=874afeb17f5b4da4344663876acf2dd31eeb0e5e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/50uvsj5Sg51xLE2xiEkk0NSqCPBY7VjHBaoqUlMskT4.jpeg?width=320&crop=smart&auto=webp&s=5d33c9073bf1cb52fe8d63d8f50be98af678bb90', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/50uvsj5Sg51xLE2xiEkk0NSqCPBY7VjHBaoqUlMskT4.jpeg?auto=webp&s=65d1439fe84b210d191623752d576a0ab9562d6e', 'width': 480}, 'variants': {}}]}
Introducing LM Studio 0.4.0
135
Testing out the Parallel setting; the default is 4, I tried 2, I tried 40. Overall no change at all in performance for me. I haven't changed unified KV cache; it's on by default and seems to be fine. The new UI moved the runtimes into settings, but they are hidden unless you enable developer mode in settings.
2026-01-28T18:08:04
https://lmstudio.ai/blog/0.4.0
sleepingsysadmin
lmstudio.ai
1970-01-01T00:00:00
0
{}
1qpj0i1
false
null
t3_1qpj0i1
/r/LocalLLaMA/comments/1qpj0i1/introducing_lm_studio_040/
false
false
default
135
{'enabled': False, 'images': [{'id': 'ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=108&crop=smart&auto=webp&s=600f0d6c57b5c14d44aa789c3fb4ad88d9b36e53', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=216&crop=smart&auto=webp&s=c8fb92cf203a5a48698f4e6aed83283fce43e285', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=320&crop=smart&auto=webp&s=ab88262b2ade36702fd138d35230d8609099d88b', 'width': 320}, {'height': 371, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=640&crop=smart&auto=webp&s=a3acf1e85d0e36eae10cc58480d7a5498c2ec4e4', 'width': 640}, {'height': 556, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=960&crop=smart&auto=webp&s=9e032731216e1d4f21b7d84ece913fb3997fdcb3', 'width': 960}, {'height': 626, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?width=1080&crop=smart&auto=webp&s=42e069e87874eb46caa9de9c39d0b9cb90fadef5', 'width': 1080}], 'source': {'height': 1447, 'url': 'https://external-preview.redd.it/ONJ57JrCEXk7COEJSakptLDGIozjYb7vtKbjnKraCHs.png?auto=webp&s=e30b0afbbf7c94a1f0b13bb6942c9d2cefb58eed', 'width': 2495}, 'variants': {}}]}
Overwhelmed trying to replace APIs, suggestions where to start?
0
I’m not trying to ask how to replace Claude Code, but rather how to troubleshoot. I’ve got a good system going telling APIs how to help with my coding projects, but switching to local has been a disappointment. I know I’m doing something wrong, but it’s become overwhelming trying to figure out how to get the basics going. ## What's currently working I am making a homelab and have a repo that I have Codex/ChatGPT write Ansible playbooks for. The API can essentially take over coding for me - I primarily identify problems, delineate goals, and then make it build code. Then I tell it smoke tests, run them and test output, and then repeat until my homelab’s Infrastructure as Code is solidified. This is a great process, but it’s fully dependent on Codex right now because it can do so much. ## The problem I’d like to move to the point that I get it all done by local LLMs, even if I have to do far more work than I’m doing now, making more code, more rigid smoke tests and parameters, etc. While not the most complex thing, building a homelab has proved too much for the local models I’ve tried. For example - I used the Qwen 3-coder instruct 30b flavor and asked it to analyze my repo and tell me its purpose. It could barely read my readme.md. Codex can identify which markdown file is important, look at the rest of the code and correlate tasks to the readme files, and make recommendations for what tasks to tackle next. It can give nuanced explanations of potential security problems. It can create Ansible playbooks based on general requests ("Create a docker container with X program using this port and add that to the list of current programs".) Now, there are a dozen things I’m probably doing wrong. I’m probably using the wrong quant of Qwen coder, probably using an incomplete prompt, probably asking it too much, and maybe a dozen other things. I’m fully aware I’m going about it the wrong way, but I’m feeling like I don’t know where to start. Since this industry moves so fast, is there a place I can go to understand what the current best practice is? Like, should I download a sample project and try to get specific LLMs to do specific tasks as a standard? (Say, downloading a sample broken game and having it analyze, diagnose and fix a bug?) Is there a FAQ or guide for the best practice for which models do diagnosis, small task coding, reading code, etc.? I apologize if this is the wrong place for this, but I’m not entirely sure where to go. For background: I’m a semi-experienced coder (former game dev, now physician academic) and I’ve got two computers with 64gb RAM and a 16gb vRAM graphics card each (one is AMD and the other Nvidia, so unfortunately I can’t combine ‘em. It also sounds like 128gb with a 16gb card is not useful, since I’m always choked by the vRAM anyways). I plan on using n8n and some sort of AI model to assign tasks to multiple VMs so that the right models do inference vs. coding vs. smoke tests, etc. I’m familiar with those pieces, but LLMs are still new to me.
2026-01-28T18:04:01
https://www.reddit.com/r/LocalLLaMA/comments/1qpiw8q/overwhelmed_trying_to_replace_apis_suggestions/
cniinc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpiw8q
false
null
t3_1qpiw8q
/r/LocalLLaMA/comments/1qpiw8q/overwhelmed_trying_to_replace_apis_suggestions/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=108&crop=smart&auto=webp&s=f9a672e7497ccba688a80a1e48ab7e6c4d3f033f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=216&crop=smart&auto=webp&s=0ce35e124c4031d65622515b8b991724f957b02a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=320&crop=smart&auto=webp&s=1b67f3f64cfb89751182eb49cadd3998fb65c16e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=640&crop=smart&auto=webp&s=3d1e1bf596de8da3ad5d3095659a5122588e42e2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=960&crop=smart&auto=webp&s=1b9f68f0ee652bc1edb0ebacaac8cb8476d7577c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?width=1080&crop=smart&auto=webp&s=4e1f99f9eff4b546514424ff966233274f9d0704', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nATt2JFuTGqsmzmgq0loBX3PKLI-foNIrURqpfWlXz4.png?auto=webp&s=0ad0d67ddb48fc06081d55d653c70df2c44d22b2', 'width': 1200}, 'variants': {}}]}
[R] Open-sourcing an unfinished research project: A Self-Organizing, Graph-Based Alternative to Transformers (Looking for feedback or continuation)
8
Hi everyone, I'm sharing a research project I worked on over a long period but had to pause due to personal reasons. Rather than letting it sit idle, I wanted to open it up to the community, either for technical feedback and critique, or for anyone interested in continuing or experimenting with it. The main project is called Self-Organizing State Model (SOSM): https://github.com/PlanetDestroyyer/Self-Organizing-State-Model At a high level, the goal was to explore an alternative to standard Transformer attention by: • Using graph-based routing instead of dense attention • Separating semantic representation and temporal pattern learning • Introducing a hierarchical credit/attribution mechanism for better interpretability The core system is modular and depends on a few supporting components: Semantic representation module (MU) https://github.com/PlanetDestroyyer/MU Temporal pattern learner (TEMPORAL) https://github.com/PlanetDestroyyer/TEMPORAL Hierarchical / K-1 self-learning mechanism https://github.com/PlanetDestroyyer/self-learning-k-1 I'm honestly not sure how valuable or novel this work is - that's exactly why I'm posting it here. If nothing else, I'd really appreciate constructive criticism, architectural feedback, or pointers to related work that overlaps with these ideas. If someone finds parts of it useful (or wants to take it further, refactor it, or formalize it into a paper), they're more than welcome to do so. The project is open-source, and I'm happy to answer questions or clarify intent where needed. Thanks for taking a look. Summary: This work explores a language model architecture based on structured semantics rather than unstructured embeddings. Instead of positional encodings, a temporal learning module is used to model sequence progression and context flow. A K-1 hierarchical system is introduced to provide interpretability, enabling analysis of how a token is predicted and which components, states, or nodes contribute to that prediction. Most importantly, rather than comparing every token with all others (as in full self-attention), the model uses a graph-based connection mechanism that restricts computation to only the most relevant or necessary tokens, enabling selective reasoning and improved efficiency. (Claude Code was used to write the code.)
2026-01-28T17:58:44
https://www.reddit.com/r/LocalLLaMA/comments/1qpiqho/r_opensourcing_an_unfinished_research_project_a/
WriedGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpiqho
false
null
t3_1qpiqho
/r/LocalLLaMA/comments/1qpiqho/r_opensourcing_an_unfinished_research_project_a/
false
false
self
8
{'enabled': False, 'images': [{'id': '2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=108&crop=smart&auto=webp&s=ea3839729912b320eb8caedecbc065e979dd2a57', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=216&crop=smart&auto=webp&s=a34eacc4ce06604e85a5ad188fd69870a5d7ac47', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=320&crop=smart&auto=webp&s=73f13f9ba685750ec7ba7540ee3aae718ae3ddc5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=640&crop=smart&auto=webp&s=10a2b8a98e2e4943d66f390e5c651ce413040d89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=960&crop=smart&auto=webp&s=1b277316778ace4a682e671f2c26b5bbdd9ed248', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?width=1080&crop=smart&auto=webp&s=ce59fbdb17b921abef783195e1b46d1a3d44eff6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2fVyMlK_6ub7j9yL9TPBZt5OlXJfLSRkNUsMIMCbbsU.png?auto=webp&s=e5b28537dfe65faf88c16f636428453c61699c7e', 'width': 1200}, 'variants': {}}]}
Your agents don’t have an intelligence problem, they have a context problem. I built Drift (Open Source) to fix that.
0
Hi everyone! Drift has surpassed 3,000 rpm downloads, 350+ stars and 500 clones in the last 7 days… I'm blown away and just wanna say THANK YOU. Let's jump into V1 of Drift: benchmarks vs. a baseline grep/file search. We tested this on 8 prompts of different complexity levels, having the agent explore the tool calls and answer our questions as quickly and in as much detail as possible. Drift uses AST-based semantic parsing to index metadata from your codebase conventions, then exposes it via CLI or MCP. Drift does the parsing and indexing with 0 outward calls; all data stays local (fully open sourced, GitHub is linked below) and it works completely offline - no connection required. What makes Drift work so well is the AST parsing combined with a semantic-learning and regex hybrid fallback, which ensures we're able to extract and index codebases into 15 different categories with over 400 pattern detectors. We currently support 8 languages and are continuing to fortify the supported frameworks as well. Drift excels in thoroughness, quantification, depth and efficiency. It also provides security insight and blast-radius call-graph analysis that grep and bash can't handle on their own. The issue is clear: our agents have a context limitation, not an intelligence issue. To have agents write code that fits your conventions instead of hallucinating, we need to bridge that gap, and Drift gives you that capability. Check it out here: https://github.com/dadbodgeoff/drift and a 48-page wiki that teaches you everything about Drift and how it works, nothing hidden! https://github.com/dadbodgeoff/drift/wiki Thanks for all the love and I hope you enjoy the video *trying to get better at this*!
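For illustration only - a minimal sketch of the AST-based indexing idea described above, using Python's ast module. Drift's real parser covers 8 languages and 400+ pattern detectors, so nothing below is its actual implementation:

```python
import ast

SOURCE = '''
def fetch_user(user_id: int) -> dict:
    return {"id": user_id}

class UserRepository:
    def save(self, user): ...
'''

# Walk the syntax tree and record naming/structure conventions instead of grepping raw text.
index = {"functions": [], "classes": []}
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.FunctionDef):
        index["functions"].append(node.name)
    elif isinstance(node, ast.ClassDef):
        index["classes"].append(node.name)

print(index)  # {'functions': ['fetch_user', 'save'], 'classes': ['UserRepository']}
```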
2026-01-28T17:52:38
https://v.redd.it/yx6vb640p4gg1
Fluffy_Citron3547
/r/LocalLLaMA/comments/1qpijco/your_agents_dont_have_an_intelligence_problem/
1970-01-01T00:00:00
0
{}
1qpijco
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yx6vb640p4gg1/DASHPlaylist.mpd?a=1772344364%2CZDIwOGM0ZWQyOWU4N2Q4ZTUyZDg0MzlmODRjNGJmZDczZmQwODA3ZGFjMGQzOGRiYjdjMGRiNTc3ZTcwOWVmYQ%3D%3D&v=1&f=sd', 'duration': 205, 'fallback_url': 'https://v.redd.it/yx6vb640p4gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yx6vb640p4gg1/HLSPlaylist.m3u8?a=1772344364%2CZTdlYjdiNzA2ODViYmQ2Yjk0OWJiMmJjNGE4Y2Q5MjJiMmNjNjIwODRjMDBiZWM3ZGY4MDBkZjRmZWQ3ZmFkYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yx6vb640p4gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1572}}
t3_1qpijco
/r/LocalLLaMA/comments/1qpijco/your_agents_dont_have_an_intelligence_problem/
false
false
https://external-preview…583a58329c3428a2
0
{'enabled': False, 'images': [{'id': 'amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=108&crop=smart&format=pjpg&auto=webp&s=f96bc9ccf8aef30fea1928c76068661105b7366a', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=216&crop=smart&format=pjpg&auto=webp&s=10f848597f153993cab675547d526e9d6d92862b', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=320&crop=smart&format=pjpg&auto=webp&s=56f1662e4e3463a13bb317631ac3902dc24889ea', 'width': 320}, {'height': 439, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=640&crop=smart&format=pjpg&auto=webp&s=68638dd0cc5f1cdd5447ec2093d4c19bd96b1780', 'width': 640}, {'height': 659, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=960&crop=smart&format=pjpg&auto=webp&s=a76b3f8e537bbb222e1e1a25b604e06db28a26da', 'width': 960}, {'height': 741, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f4bb2cf4933d8bedf1ec3809dc3413f62a389fec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/amt2a3V3MjBwNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?format=pjpg&auto=webp&s=cdc18fd0b3c6d09d6fd8fd9335d9dc34a4c115c7', 'width': 1572}, 'variants': {}}]}
Your agents don’t have an intelligence problem, they have a context problem. I built Drift (Open Source) to fix that
1
Hi everyone! Drift has surpassed 3,000 rpm downloads, 350+ stars and 500 clones in the last 7 days… I'm blown away and just wanna say THANK YOU. Let's jump into V1 of Drift: benchmarks vs. a baseline grep/file search. We tested this on 8 prompts of different complexity levels, having the agent explore the tool calls and answer our questions as quickly and in as much detail as possible. Drift uses AST-based semantic parsing to index metadata from your codebase conventions, then exposes it via CLI or MCP. Drift does the parsing and indexing with 0 outward calls; all data stays local (fully open sourced, GitHub is linked below) and it works completely offline - no connection required. What makes Drift work so well is the AST parsing combined with a semantic-learning and regex hybrid fallback, which ensures we're able to extract and index codebases into 15 different categories with over 400 pattern detectors. We currently support 8 languages and are continuing to fortify the supported frameworks as well. Drift excels in thoroughness, quantification, depth and efficiency. It also provides security insight and blast-radius call-graph analysis that grep and bash can't handle on their own. The issue is clear: our agents have a context limitation, not an intelligence issue. To have agents write code that fits your conventions instead of hallucinating, we need to bridge that gap, and Drift gives you that capability. Check it out here: https://github.com/dadbodgeoff/drift and a 48-page wiki that teaches you everything about Drift and how it works, nothing hidden! Thanks for all the love and I hope you enjoy the video *trying to get better at this*!
2026-01-28T17:45:49
https://v.redd.it/702vqnasn4gg1
Fluffy_Citron3547
/r/LocalLLaMA/comments/1qpic8a/your_agents_dont_have_an_intelligence_problem/
1970-01-01T00:00:00
0
{}
1qpic8a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/702vqnasn4gg1/DASHPlaylist.mpd?a=1772343959%2CMWQwYzRkMmExZDk5OTU1MzA3NzNmMzFjMjYxMDhhOWU2Zjk4ODlkMWI5NTZjMDMwNTNlMDE0Njg3ZWYxY2UxOQ%3D%3D&v=1&f=sd', 'duration': 205, 'fallback_url': 'https://v.redd.it/702vqnasn4gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/702vqnasn4gg1/HLSPlaylist.m3u8?a=1772343959%2CMGY0ODEzZjgxYWVmZjk4ZjlkODVkYThmYWYxNWFmZTk2NDM4NDdkODcxNzBlMTZmNmJkMjE3YWI1NzQ0MTVlZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/702vqnasn4gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1572}}
t3_1qpic8a
/r/LocalLLaMA/comments/1qpic8a/your_agents_dont_have_an_intelligence_problem/
false
false
https://external-preview…80ed3c56ecc88b6e
1
{'enabled': False, 'images': [{'id': 'cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=108&crop=smart&format=pjpg&auto=webp&s=b142ce41736ea1726e24b5cd5e6a916998648109', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=216&crop=smart&format=pjpg&auto=webp&s=5c1d4875e7a1078dc9b3af85acec3599f745cbcf', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=320&crop=smart&format=pjpg&auto=webp&s=57bd5157ec1217f1921e6d6fe09b8d35fc9119cd', 'width': 320}, {'height': 439, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=640&crop=smart&format=pjpg&auto=webp&s=c8cafd251b637131446be7e76bbd7ab4b1f39a29', 'width': 640}, {'height': 659, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=960&crop=smart&format=pjpg&auto=webp&s=57cb3f6661568dc4ef26728a4dee1c60530ac354', 'width': 960}, {'height': 741, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=017db3b796770207e057c0396a11de3461b02d02', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cWVlOW02OXNuNGdnMaPwpg2N9QNbfGHWtpi7ri0iXLXlU2u3nK_trA52WUXa.png?format=pjpg&auto=webp&s=4a3b22b5f311e6d51eaafd36d17ab64212cd3370', 'width': 1572}, 'variants': {}}]}
meituan-longcat/LongCat-Flash-Lite
100
2026-01-28T17:42:14
https://huggingface.co/meituan-longcat/LongCat-Flash-Lite
windows_error23
huggingface.co
1970-01-01T00:00:00
0
{}
1qpi8d4
false
null
t3_1qpi8d4
/r/LocalLLaMA/comments/1qpi8d4/meituanlongcatlongcatflashlite/
false
false
default
100
{'enabled': False, 'images': [{'id': 'SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=108&crop=smart&auto=webp&s=a5fa473d0451230c3b342b571675c2be6a3f228f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=216&crop=smart&auto=webp&s=1716d76379803138913cdfa197bb7a7526fce72d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=320&crop=smart&auto=webp&s=632b71ed98cc3b5b3277aae4932ce4e987bd51ab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=640&crop=smart&auto=webp&s=bfbc45df0f4f6a1d3dd5267993dbfc464f761f81', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=960&crop=smart&auto=webp&s=0b9e080b5a098dc768ed8cadfd654a50215218c7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?width=1080&crop=smart&auto=webp&s=a54f0dc35b4a0c80d647c870375b9d8caa7052e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SP0mBN3iaWZga0fzz80Atyk8zYq_GP0yB1tyV6wtLlw.png?auto=webp&s=cc8f17a834f1ea6b407a9f399750e50bb2efccd6', 'width': 1200}, 'variants': {}}]}
Local vibe coding tools?
1
I have got to a point where I have developed the backend and now I am doing the frontend, but doing the frontend is tedious. I wrote some frontend code that I wish the AI could look at and then write more frontend code based on what I already wrote. Of course I want everything to run locally on my machine, a B580 with 32GB system RAM. As of now I use qwen2.5-coder:14b for autocompletion, and I want to take one more step up in productivity. Any recommendations for that?
2026-01-28T17:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1qpi01f/local_vibe_coding_tools/
WizardlyBump17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpi01f
false
null
t3_1qpi01f
/r/LocalLLaMA/comments/1qpi01f/local_vibe_coding_tools/
false
false
self
1
null
[Release] BitMamba-2-1B: I trained a 1.58-bit Mamba-2 model from scratch on 150B tokens (Runs on CPU @ 50+ tok/s)
108
Hey everyone! I’ve been working on scaling efficient architectures and just released **BitMamba-2**, a hybrid model combining **Mamba-2 SSM with BitNet 1.58-bit quantization.** The goal was to prove that ternary scaling laws hold up even for SSMs, and to enable decent inference on legacy hardware/edge devices without heavy GPUs. **Key Specs:** * **Architecture:** Mamba-2 + BitNet b1.58 (Ternary weights {-1, 0, 1}) * **Training:** Trained from scratch on 150B tokens (FineWeb-Edu, Cosmopedia, Stack-Dedup) using Google TPU v6e-8. * **Performance:** The 1B model beats the 255M baseline significantly, validating the scaling laws (You can check the loss curves in the repo). I wrote a custom C++ inference engine for this. On a consumer **Intel Core i3-12100F (CPU only)**, I'm getting: * **BitMamba-2-1B:** \~53 tokens/sec (621 MB RAM) * **BitMamba-2-255M:** \~146 tokens/sec (252 MB RAM) It’s fully open-source (Apache/MIT). I’d love for you guys to test it and let me know what you think about the generation quality vs. pure transformers. **Links:** * **Paper (Zenodo):** [https://zenodo.org/records/18394665](https://zenodo.org/records/18394665) * **Hugging Face (Weights):** [https://huggingface.co/Zhayr1/BitMamba-2-1B](https://huggingface.co/Zhayr1/BitMamba-2-1B) * **GitHub (JAX Code):** [https://github.com/Zhayr1/BitMamba-2](https://github.com/Zhayr1/BitMamba-2) * **GitHub (C++ Inference):** [https://github.com/Zhayr1/bitmamba.cpp](https://github.com/Zhayr1/bitmamba.cpp) Let me know if you have questions about the training dynamics or the C++ implementation.
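As a rough illustration of the 1.58-bit idea, here is a NumPy sketch of BitNet b1.58-style absmean quantization to {-1, 0, 1}. This is an assumption based on the BitNet b1.58 paper, not the actual BitMamba-2 JAX code, which may differ in detail:

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-5):
    # Per-tensor absmean scale, then round-and-clip weights to {-1, 0, 1}.
    scale = np.abs(w).mean() + eps
    w_ternary = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_ternary, scale  # dequantize as w_ternary * scale

# Quick check: quantize a random weight matrix and look at the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
q, s = ternary_quantize(w)
print("unique values:", np.unique(q), "mse:", float(((q * s - w) ** 2).mean()))
```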
2026-01-28T17:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1qphkd8/release_bitmamba21b_i_trained_a_158bit_mamba2/
Positive-Violinist90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qphkd8
false
null
t3_1qphkd8
/r/LocalLLaMA/comments/1qphkd8/release_bitmamba21b_i_trained_a_158bit_mamba2/
false
false
self
108
null
MiMo V2 Flash & Kimi K2.5: How Chinese Models Are Democratizing AI
11
For years, the AI narrative has been simple: OpenAI, Google, and Anthropic build the best models, everyone else catches up. You pay premium API prices, accept their terms, and hope your data stays private. That narrative is breaking down. Fast. In the past few weeks, two Chinese labs dropped open-weight models that rival—and in some cases beat—the best from Silicon Valley. Xiaomi's MiMo V2 Flash and Moonshot AI's Kimi K2.5 aren't just catching up. They're reshaping what "accessible AI" actually means. https://onllm.dev/blog/2-mimo-v2-flash-kimi-k25-democratizing
2026-01-28T16:52:06
https://onllm.dev/blog/2-mimo-v2-flash-kimi-k25-democratizing
prakersh
onllm.dev
1970-01-01T00:00:00
0
{}
1qpgrlg
false
null
t3_1qpgrlg
/r/LocalLLaMA/comments/1qpgrlg/mimo_v2_flash_kimi_k25_how_chinese_models_are/
false
false
default
11
null
Which Local model is best for Clawdebot in a low end laptop ;(
0
Specifications: Nvidia GeForce GTX GPU, Ryzen 5 5000 series CPU, 512 GB SSD
2026-01-28T16:33:17
https://www.reddit.com/r/LocalLLaMA/comments/1qpg8as/which_local_model_is_best_for_clawdebot_in_a_low/
No-Mess-8224
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpg8as
false
null
t3_1qpg8as
/r/LocalLLaMA/comments/1qpg8as/which_local_model_is_best_for_clawdebot_in_a_low/
false
false
self
0
null
The Mystery of Position 193: I Found a Weird Outlier in Gemma 3's Vision Tokens 🔍
22
This is a follow-up to my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1ompw8z/vision_language_i_decoded_vlm_tokens_to_see_what/) about unembedding VLM image tokens ("Vision = Language: I Decoded VLM Tokens to See What AI 'Sees' 🔬"). I've been digging deeper into how Gemma 3 uses its 256 image token "budget" and found something I can't fully explain. **The core finding:** One token position out of 256 is doing something completely different from the rest. Position 193 is the outlier in 95% of images, and whatever it encodes appears to be meaningful. # Background: The 256 Token Budget Gemma 3's vision tower outputs 256 soft tokens that get fed to the language model. I've been thinking about this as a "budget" – 256 slots to encode visual information in a way the language model understands. This raises natural questions: How are these slots actually used? Are certain positions more meaningful than others? Is information distributed evenly or specialized by position? So I went looking for weird token positions. Position 193 jumped out immediately. # Method: Finding Outliers I processed 10,000 images from Open Images V7 through Gemma 3's vision tower and stored all the embeddings (10K images × 256 positions × 2560 dimensions). **Step 1: Within-image similarity** For each image, I computed a 256×256 cosine similarity matrix between all token positions. Then I averaged across all 10K images. If there's structure that isn't content-specific, it should emerge in the average. https://preview.redd.it/tc59qo3x84gg1.png?width=969&format=png&auto=webp&s=0e984025d1f936b84e3cd4e502ca538885449a2d Position 193 shows up as the darkest line – it's dissimilar to everything else. https://preview.redd.it/2dkwru8y84gg1.png?width=1184&format=png&auto=webp&s=dd0f1dd301c462cd3d6136ed192de35addd8b74c 193 being so dissimilar to the other slots tells us that it is encoding unique information. **Step 2: Which position is the outlier?** For each image, I found which position had the lowest mean similarity to all other positions. Results: |Position|% of images as outlier| |:-|:-| |193|95.3| |48|1.1| |223|0.9| |14|0.2| |192|0.2| Position 193 is the outlier in almost every image! **Step 3: Is it rotation-invariant?** If 193 encodes something about image content or spatial position, rotating the image should change which position is the outlier. I tested this across multiple images at 0°, 90°, 180°, 270° rotations. Result: For the images where 193 is the outlier at 0°, 193 remains the outlier regardless of rotation. Whatever it encodes isn't tied to spatial location in the image. **Step 4: Cross-image consistency** Here's where it gets interesting. If 193 is dissimilar to other positions within an image, but encodes the same semantic thing across images, then position 193 embeddings should be highly similar to each other across different images. That's exactly what I found. Position 193 has 0.91 cross-image similarity – much higher than other positions. This suggests 193 encodes consistent meta-information rather than image-specific content. https://preview.redd.it/7sitccj194gg1.png?width=1184&format=png&auto=webp&s=b1f66b579f596f1d322fa109fa3ffcf120e0ee8f Interestingly, this is more or less a mirror of the first plot. # Trying to Interpret It **Unembedding:** I computed the centroid of position 193 embeddings and projected it through the language head. Result: maps to space token with very low probability. Not interpretable this way. 
**Zero-out ablation:** What if we just zero out position 193 before it reaches the language model? Surprisingly, nothing breaks. The model still answers questions correctly. **Directional steering:** Inspired by the Golden Gate Claude work, I tried flipping the direction of position 193 (α = -1). This breaks things in interesting ways – the model can still see the image but seems to lose the ability to answer questions about it coherently. |Intervention|Effect| |:-|:-| |Zero out|No noticeable change| |Flip direction|Model sees image but responses become incoherent| # The Mystery Remains Position 193 is: * Dissimilar to other positions within images * Consistent across images * Rotation-invariant * Not interpretable via unembedding * Safe to zero out * Breaks things when flipped Everything points to it encoding something meaningful. But I haven't been able to cleanly interpret what that is. If anyone has ideas on what 193 might encode or how to investigate further, I'd love to hear them. And if anyone has connections to the Gemma team – they might have an answer, or at least find this interesting. I'd love to get this in front of them. Feel free to reach out! # Want to Explore More? * Video Explainer: ["Dissecting Gemma 3 Image Tokenization: The Mystery of 193"](https://youtu.be/3FMOknkH9XM) * [GitHub repo with notebooks](https://github.com/jacob-danner/dissecting-vlm) (all experiments are reproducible) * Previous Video Explainer: ["Dissecting Vision Language Models: How AI Sees"](https://youtu.be/NpWP-hOq6II)
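For concreteness, a minimal PyTorch sketch of the zero-out / direction-flip interventions described above, applied to the 256×2560 soft-token tensor before it reaches the language model. The function name and calling convention are illustrative, not the notebook code from the repo:

```python
import torch

def intervene(image_tokens: torch.Tensor, pos: int = 193, alpha: float = 0.0) -> torch.Tensor:
    # image_tokens: [256, 2560] vision-tower output for one image.
    # alpha = 0.0 reproduces the zero-out ablation; alpha = -1.0 flips the direction.
    tokens = image_tokens.clone()
    tokens[pos] = alpha * tokens[pos]
    return tokens

# Example with fake embeddings, just to show the call shape.
soft_tokens = torch.randn(256, 2560)
zeroed = intervene(soft_tokens, alpha=0.0)
flipped = intervene(soft_tokens, alpha=-1.0)
```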
2026-01-28T16:30:00
https://www.reddit.com/r/LocalLLaMA/comments/1qpg4ty/the_mystery_of_position_193_i_found_a_weird/
ComputeVoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpg4ty
false
null
t3_1qpg4ty
/r/LocalLLaMA/comments/1qpg4ty/the_mystery_of_position_193_i_found_a_weird/
false
false
https://b.thumbs.redditm…KWKeO6xUMPOA.jpg
22
null
Convert Charts & Tables to Knowledge Graphs in Minutes | Vision RAG Tuto...
1
Struggling to extract data from complex charts and tables? Stop relying on broken OCR. In this video, I reveal how to use Vision-Native RAG to turn messy PDFs into structured Knowledge Graphs using Llama 3.2 Vision. Traditional RAG pipelines fail when they meet complex tables or charts. Optical Character Recognition (OCR) just produces a mess of text. Today, we are exploring VeritasGraph, a powerful new tool that uses Multimodal AI to "see" documents exactly like a human does. We will walk through the entire pipeline: ingesting a financial report, bypassing OCR, extracting hierarchical data, and visualizing the connections in a stunning Knowledge Graph. 👇 Resources & Code mentioned in this video: 🔗 GitHub Repo (VeritasGraph): [https://github.com/bibinprathap/VeritasGraph](https://github.com/bibinprathap/VeritasGraph)
2026-01-28T16:26:14
https://youtube.com/watch?v=8fz8RWgL04Y&si=MduDYnb__kK9bGIp
BitterHouse8234
youtube.com
1970-01-01T00:00:00
0
{}
1qpg0zh
false
{'oembed': {'author_name': 'BIBIN PRATHAP', 'author_url': 'https://www.youtube.com/@bibinprathap8175', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/8fz8RWgL04Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Convert Charts &amp; Tables to Knowledge Graphs in Minutes | Vision RAG Tutorial"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/8fz8RWgL04Y/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Convert Charts & Tables to Knowledge Graphs in Minutes | Vision RAG Tutorial', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qpg0zh
/r/LocalLLaMA/comments/1qpg0zh/convert_charts_tables_to_knowledge_graphs_in/
false
false
default
1
{'enabled': False, 'images': [{'id': '6cl2A2vb4grCUIrIh5_PxwsH1sahuR7Bqn3coaCJMj8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6cl2A2vb4grCUIrIh5_PxwsH1sahuR7Bqn3coaCJMj8.jpeg?width=108&crop=smart&auto=webp&s=e5d1e0a46b327a71caf0d79ea60849d7006a3377', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6cl2A2vb4grCUIrIh5_PxwsH1sahuR7Bqn3coaCJMj8.jpeg?width=216&crop=smart&auto=webp&s=be849cbab6517b8416dca95204e1bc3fd5837359', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6cl2A2vb4grCUIrIh5_PxwsH1sahuR7Bqn3coaCJMj8.jpeg?width=320&crop=smart&auto=webp&s=ab785a64e2e422d915c040b0cdae00454917c519', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6cl2A2vb4grCUIrIh5_PxwsH1sahuR7Bqn3coaCJMj8.jpeg?auto=webp&s=5537ebd200077b517aacf416bb3d02b874a530a7', 'width': 480}, 'variants': {}}]}
Run Kimi K2.5 Locally
412
Kimi-K2.5 achieves SOTA performance in vision, coding, agentic and chat tasks. The 1T parameter hybrid reasoning model requires 600GB of disk space, while the quantized **Unsloth Dynamic 1.8-bit** version reduces this to **240GB (-60% size).** **Model:** [**Kimi-K2.5-GGUF**](https://huggingface.co/unsloth/Kimi-K2.5-GGUF) **Official Guide:** [**https://unsloth.ai/docs/models/kimi-k2.5**](https://unsloth.ai/docs/models/kimi-k2.5)
2026-01-28T16:17:45
https://i.redd.it/rxqfj5os74gg1.jpeg
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1qpfse6
false
null
t3_1qpfse6
/r/LocalLLaMA/comments/1qpfse6/run_kimi_k25_locally/
false
false
default
412
{'enabled': True, 'images': [{'id': 'rxqfj5os74gg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/rxqfj5os74gg1.jpeg?width=108&crop=smart&auto=webp&s=c21489c6035eea0189beda8a7deece5917780c37', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/rxqfj5os74gg1.jpeg?width=216&crop=smart&auto=webp&s=7fdb5c600038c272968661c997850604a730fd3d', 'width': 216}, {'height': 344, 'url': 'https://preview.redd.it/rxqfj5os74gg1.jpeg?width=320&crop=smart&auto=webp&s=74c585ef35541856c64836fbdf8ab93ec0404dfd', 'width': 320}, {'height': 689, 'url': 'https://preview.redd.it/rxqfj5os74gg1.jpeg?width=640&crop=smart&auto=webp&s=2606f30079a77f14bb28c31413be651c092abaa9', 'width': 640}], 'source': {'height': 862, 'url': 'https://preview.redd.it/rxqfj5os74gg1.jpeg?auto=webp&s=1d7d7d0be20f254d014c8f62eb500cb947df19bb', 'width': 800}, 'variants': {}}]}
ReAct agents vs Function Calling: when does each pattern actually make sense in production?
0
After 6 months building a production job-application agent, here's what I've learned about when to use ReAct-style planning vs plain function calling: ## Function Calling = Your Default For any workflow where the steps are predictable: - Parse job description → classify company segment (B2B/B2C) → generate tailored pitch → submit - Cost: ~$0.02 per run (2-3 model calls, structured JSON outputs) - Latency: 3-5 seconds end-to-end - Reliability: High - easy to unit test each function ## ReAct = Your Escape Hatch Only use ReAct when the agent genuinely needs to *plan*: - Mixed B2B/B2C companies where segment isn't obvious - Missing data requiring multi-step research (LinkedIn → Crunchbase → company website) - Ambiguous job posts that need clarification before proceeding Cost: ~$0.08-0.12 per run (5-10+ model calls with reasoning traces) Latency: 10-30+ seconds Reliability: Lower - harder to predict what the agent will do ## The Hybrid Pattern That Works ``` IF confidence_score > 0.7: use_function_calling() # fast path, 90-95% of traffic ELSE: use_react_agent() # slow path, handles edge cases ``` This gives you: - Predictable costs for most runs - "Smart" behavior when it matters - Clear separation for monitoring and debugging ## Why This Matters At 10K applications/month: - Pure function calling: ~$200/month inference - Pure ReAct: ~$800-1200/month inference - Hybrid: ~$250-300/month The economics force architectural discipline. ReAct is powerful but expensive - treat it like a cache miss, not the default code path. Anyone else finding similar patterns in production agents?
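Expanding the pseudocode above into a runnable shape - a hedged Python sketch of the hybrid router. The 0.7 threshold comes from the post, but the function bodies and the confidence source are placeholder assumptions, not the author's implementation:

```python
CONFIDENCE_THRESHOLD = 0.7  # value from the post; tune per workload

def function_calling_pipeline(job_post: str) -> str:
    # Placeholder for the deterministic parse -> classify -> pitch flow.
    return f"[fast path] tailored pitch for: {job_post[:40]}"

def react_agent(job_post: str) -> str:
    # Placeholder for the multi-step plan/act/observe loop.
    return f"[slow path] researched pitch for: {job_post[:40]}"

def route(job_post: str, confidence: float) -> str:
    # Cheap structured pipeline by default, ReAct only as the escape hatch.
    if confidence > CONFIDENCE_THRESHOLD:
        return function_calling_pipeline(job_post)
    return react_agent(job_post)

print(route("Senior B2B sales engineer at an infra startup", confidence=0.82))
print(route("Ambiguous mixed B2B/B2C growth role", confidence=0.41))
```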
2026-01-28T16:16:24
https://www.reddit.com/r/LocalLLaMA/comments/1qpfr0c/react_agents_vs_function_calling_when_does_each/
KitchenSomew
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpfr0c
false
null
t3_1qpfr0c
/r/LocalLLaMA/comments/1qpfr0c/react_agents_vs_function_calling_when_does_each/
false
false
self
0
null
South Korea's SK Hynix to establish a special ‘AI Company’ in the U.S.
0
2026-01-28T16:11:29
https://www.cnbc.com/2026/01/28/sk-hynix-ai-company-us.html
self-fix
cnbc.com
1970-01-01T00:00:00
0
{}
1qpflx7
false
null
t3_1qpflx7
/r/LocalLLaMA/comments/1qpflx7/south_koreas_sk_hynix_to_establish_a_special_ai/
false
false
default
0
null
Korea to allow companies to freely use government-owned works to train AI
16
2026-01-28T16:05:25
https://koreajoongangdaily.joins.com/news/2026-01-28/business/tech/Korea-to-allow-companies-to-freely-use-governmentowned-works-to-train-AI/2510756
self-fix
koreajoongangdaily.joins.com
1970-01-01T00:00:00
0
{}
1qpffuf
false
null
t3_1qpffuf
/r/LocalLLaMA/comments/1qpffuf/korea_to_allow_companies_to_freely_use/
false
false
default
16
{'enabled': False, 'images': [{'id': 'PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=108&crop=smart&auto=webp&s=691bb764b951088541117fdf8cd2f40ca2f1168f', 'width': 108}, {'height': 150, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=216&crop=smart&auto=webp&s=5237d64c6ac08d4a311f96fade152fc31f390a7f', 'width': 216}, {'height': 222, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=320&crop=smart&auto=webp&s=7926beeda1efd3e3c4cf71170755cb791d909029', 'width': 320}, {'height': 444, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=640&crop=smart&auto=webp&s=3db51d9bb96901caf8400100d0fbc9517e28cacc', 'width': 640}, {'height': 666, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=960&crop=smart&auto=webp&s=3e71f1d58d9116b2bc1a610c9f9fa508f8f50f8a', 'width': 960}, {'height': 750, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?width=1080&crop=smart&auto=webp&s=952459b0c8fb5754d19b95188d78fef73d2673c1', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/PA1mjIPPUx8N5cvBSwSlowNfj_bf9v1FbsLvKDLIWLQ.jpeg?auto=webp&s=1a6a4d2616b8a911eb7a624c3739e9b307d93aa7', 'width': 1440}, 'variants': {}}]}
I built a WebGPU deep learning framework from scratch to run Qwen3 & Whisper completely in the browser (PyTorch-aligned, Zero Python, pure js without onnxruntime)
1
[removed]
2026-01-28T15:58:03
https://www.reddit.com/r/LocalLLaMA/comments/1qpf80k/i_built_a_webgpu_deep_learning_framework_from/
HotKaleidoscope9154
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpf80k
false
null
t3_1qpf80k
/r/LocalLLaMA/comments/1qpf80k/i_built_a_webgpu_deep_learning_framework_from/
false
false
self
1
null
Universal DeepSeek OCR 2: CPU, MPS, CUDA Support
8
DeepSeek OCR 2 is out, but it only supports CUDA out of the box. Thus, I've updated it to run on CPU & MPS as well, so you can run it locally on your laptop or mac. Sample code is also available at: [https://github.com/Dogacel/Universal-DeepSeek-OCR-2](https://github.com/Dogacel/Universal-DeepSeek-OCR-2)
2026-01-28T15:57:29
https://huggingface.co/Dogacel/Universal-DeepSeek-OCR-2/
Dogacel
huggingface.co
1970-01-01T00:00:00
0
{}
1qpf7f8
false
null
t3_1qpf7f8
/r/LocalLLaMA/comments/1qpf7f8/universal_deepseek_ocr_2_cpu_mps_cuda_support/
false
false
default
8
{'enabled': False, 'images': [{'id': 'SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=108&crop=smart&auto=webp&s=669c2194a9ac9a499e99685d7d39cb902b8896e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=216&crop=smart&auto=webp&s=38f102e7a312e273210f849367d547b75023da23', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=320&crop=smart&auto=webp&s=d3afd3c0c12cdef2276fdfa16fb6d15113db5a25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=640&crop=smart&auto=webp&s=e8726ac5bdab16ad049ccd5e9727656f1b5c8dce', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=960&crop=smart&auto=webp&s=f0658e6bf76ab4f55f408d6bd896286b45b3c537', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?width=1080&crop=smart&auto=webp&s=f640742e266a65c2b1c8c6a8544edd93ceef0892', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SBRAaLNmL_q-arTsMz5JUDwYiSYhc7xi6pm2IqhuNUE.png?auto=webp&s=c6c5f4ca6b23fd83725c1857611a1c6bc245eac9', 'width': 1200}, 'variants': {}}]}
SK hynix wins over two-thirds of Nvidia's HBM orders for AI platform this year
0
What does it mean? Well, one thing looks certain: we won't get affordable memory for quite a while.
2026-01-28T15:57:15
https://en.yna.co.kr/view/AEN20260128002800320
Zyj
en.yna.co.kr
1970-01-01T00:00:00
0
{}
1qpf75x
false
null
t3_1qpf75x
/r/LocalLLaMA/comments/1qpf75x/sk_hynix_wins_over_twothirds_of_nvidias_hbm/
false
false
default
0
{'enabled': False, 'images': [{'id': 'FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=108&crop=smart&auto=webp&s=49f92089473291758b109905dcddd6f4ef388f41', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=216&crop=smart&auto=webp&s=7a05fb4c93ce1624dff9e4179906beca1bc0e903', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=320&crop=smart&auto=webp&s=ab795382e81b6c55807f4ba1fa30bd9dc0c22705', 'width': 320}, {'height': 466, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=640&crop=smart&auto=webp&s=f361fc7d9874585e94f83aba462b4ee613facd83', 'width': 640}, {'height': 699, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=960&crop=smart&auto=webp&s=08eb39509f9b242e00c7df4757122d154135f025', 'width': 960}, {'height': 786, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?width=1080&crop=smart&auto=webp&s=a5f9cc5cfd1c29c9f99188f17d465100fd9ad9fa', 'width': 1080}], 'source': {'height': 874, 'url': 'https://external-preview.redd.it/FVDSqbf4aOuqY4rDU2XnfX3vFelQscgyOq59aWkanVc.jpeg?auto=webp&s=a38d77a1fa4ed4b366152809bee988ca4a1e5a9d', 'width': 1200}, 'variants': {}}]}
2018 vs 2026 reality check
0
2026-01-28T15:55:12
https://i.redd.it/oaudjody34gg1.png
Academic-Local-7530
i.redd.it
1970-01-01T00:00:00
0
{}
1qpf50w
false
null
t3_1qpf50w
/r/LocalLLaMA/comments/1qpf50w/2018_vs_2026_reality_check/
false
false
default
0
{'enabled': True, 'images': [{'id': 'oaudjody34gg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/oaudjody34gg1.png?width=108&crop=smart&auto=webp&s=7692538c430dcfe0df892ef1375ddcdc5ddc11df', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/oaudjody34gg1.png?width=216&crop=smart&auto=webp&s=5a5b08fa6f0b75173505011afc660975757b3707', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/oaudjody34gg1.png?width=320&crop=smart&auto=webp&s=c1967b744979cfa76843745f11e431677802c3e3', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/oaudjody34gg1.png?width=640&crop=smart&auto=webp&s=0805aefc829c492e30fff7ea58618d7f7f502c4b', 'width': 640}], 'source': {'height': 600, 'url': 'https://preview.redd.it/oaudjody34gg1.png?auto=webp&s=96cfd050f863c464e2d718080d66049cf51cef80', 'width': 640}, 'variants': {}}]}
nosy: CLI to summarize various types of content
2
I’m the author of **nosy**. I’m posting for feedback/discussion, not as a link drop. I often want a repeatable way to turn “a URL or file” into clean text and then a summary, regardless of format. So I built a small CLI that: * Accepts **URLs or local files** * Fetches via **HTTP GET** or **headless browser** (for pages that need JS) * Auto-selects a text extractor by **MIME type / extension** * Extracts from **HTML, PDF, Office docs (via pandoc), audio/video (via Whisper transcription)**, etc. * Summarizes with **multiple LLM providers** (OpenAI / Anthropic / Gemini / …) * Lets you customize tone/structure via **Handlebars templates** * Has shell **tab completion** (zsh/bash/fish)
2026-01-28T15:53:52
https://github.com/ynqa/nosy
aqny
github.com
1970-01-01T00:00:00
0
{}
1qpf3np
false
null
t3_1qpf3np
/r/LocalLLaMA/comments/1qpf3np/nosy_cli_to_summarize_various_types_of_content/
false
false
default
2
{'enabled': False, 'images': [{'id': 'rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=108&crop=smart&auto=webp&s=16f5ec549ee64960b8cc799241e5c99db99a891c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=216&crop=smart&auto=webp&s=10faee8c649fd65cad23f53acede26ae5736e774', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=320&crop=smart&auto=webp&s=154c1e7fccf98f54a20519890c30541fac4756cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=640&crop=smart&auto=webp&s=3b6fc5caa22233d6c07e63a3744922b704ad8577', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=960&crop=smart&auto=webp&s=935e7e6ddd5a2e52cbb54e7d8de890422925cc74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?width=1080&crop=smart&auto=webp&s=fa5ee099fc98942c4250992805c19f546ef9588b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rA9Q1S4OYwKSmxu6XBJuFnEJwIjr2YX_5W2NY7LTjLs.png?auto=webp&s=b862c30ca6bb5c18c8a15af6a304cac370b5e00c', 'width': 1200}, 'variants': {}}]}
2018 vs 2026 Humour
1
2026-01-28T15:50:08
https://i.redd.it/lgxjb0r434gg1.png
Academic-Local-7530
i.redd.it
1970-01-01T00:00:00
0
{}
1qpf007
false
null
t3_1qpf007
/r/LocalLLaMA/comments/1qpf007/2018_vs_2026_humour/
false
false
default
1
{'enabled': True, 'images': [{'id': 'lgxjb0r434gg1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/lgxjb0r434gg1.png?width=108&crop=smart&auto=webp&s=bfbca2724e78814f728896ee15671955f5689500', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/lgxjb0r434gg1.png?width=216&crop=smart&auto=webp&s=c5e8ed1ca9a5304a9a2b021fed29a6ab07a19a69', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/lgxjb0r434gg1.png?width=320&crop=smart&auto=webp&s=d3bbca429fe10b027a9c06f0333eb4aa27a408bd', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/lgxjb0r434gg1.png?width=640&crop=smart&auto=webp&s=2a58ef7110e54ac41399968331bd1b819f656836', 'width': 640}], 'source': {'height': 600, 'url': 'https://preview.redd.it/lgxjb0r434gg1.png?auto=webp&s=ceafd31cd778443767b92019ae93afddab2475ce', 'width': 640}, 'variants': {}}]}
mistral.rs 0.7.0: New CLI with built-in UI, auto-quantization tuner, configuration files, MCP server, and tons of new models
12
Hey everyone! Just released [mistral.rs](http://mistral.rs) v0.7.0, and it is the biggest update yet. GitHub: [https://github.com/EricLBuehler/mistral.rs](https://github.com/EricLBuehler/mistral.rs) Docs: [https://ericlbuehler.github.io/mistral.rs/](https://ericlbuehler.github.io/mistral.rs/) Here are the highlights: **New CLI: mistralrs-cli** The new CLI is a complete overhaul: * **Built-in chat UI:** no need for a separate frontend * **OpenAI-compatible server:** drop-in replacement for existing tooling * **MCP client support:** connect to Model Context Protocol tools natively * **Auto-quantization tuner:** find the best quantization level to fit your model on your hardware, then spit out a TOML config you can reuse. * **TOML configuration files:** standardize your setups, share configs, version control them. * **Multi-model serving:** Load multiple models, including across different modalities (e.g. vision + embedding models all in the same server). * **Model hotswapping:** Dynamically load/unload models from memory. **Performance** * **Prefix Caching for PagedAttention:** huge speedups for multi-turn conversations and RAG workflows by reusing KV cache for shared prefixes * **Fused CUDA kernels:** new GEMV, GLU, and blockwise FP8 kernels for significant speedups on NVIDIA GPUs * **Metal:** Optimizations and stability improvements **New Models** * **Text:** GLM-4, GLM-4.7 Flash, Granite Hybrid MoE, GPT-OSS, SmolLM3, Ministral 3 * **Vision:** Gemma 3n, Qwen 3 VL, Qwen 3 VL MoE * **Embedding:** Qwen 3 Embedding, Embedding Gemma [Demo of the new features of the CLI.](https://reddit.com/link/1qpexx4/video/3aa2ozsp24gg1/player)
2026-01-28T15:48:02
https://www.reddit.com/r/LocalLLaMA/comments/1qpexx4/mistralrs_070_new_cli_with_builtin_ui/
EricBuehler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpexx4
false
null
t3_1qpexx4
/r/LocalLLaMA/comments/1qpexx4/mistralrs_070_new_cli_with_builtin_ui/
false
false
self
12
null
AMA With Kimi, The Open-source Frontier Lab Behind Kimi K2.5 Model
244
Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) Today we are having **Kimi**, the research lab behind the **Kimi** **K2.5**. We’re excited to have them open up and answer your questions directly. Our participants today: * [u/ComfortableAsk4494](https://www.reddit.com/user/ComfortableAsk4494/) * [u/zxytim](https://www.reddit.com/user/zxytim/) * [u/ppwwyyxx](https://www.reddit.com/user/ppwwyyxx/) **The AMA will run from 8 AM – 11 AM PST, with the Kimi team continuing to follow up on questions over the next 24 hours.**
2026-01-28T15:46:40
https://www.reddit.com/r/LocalLLaMA/comments/1qpewj7/ama_with_kimi_the_opensource_frontier_lab_behind/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpewj7
false
null
t3_1qpewj7
/r/LocalLLaMA/comments/1qpewj7/ama_with_kimi_the_opensource_frontier_lab_behind/
false
true
self
244
null
Caching embedding outputs made my codebase indexing 7.6x faster - Part 2
1
More details on a demo video I shared recently here - [https://www.reddit.com/r/LocalLLaMA/comments/1qp7vl7/caching\_embedding\_outputs\_made\_my\_codebase/](https://www.reddit.com/r/LocalLLaMA/comments/1qp7vl7/caching_embedding_outputs_made_my_codebase/) I spent the last week accidentally convincing myself that caching embeddings actually matters. A lot more than I expected. I was working with around 111k embeddings, 1024-dim vectors. Nothing exotic. Single GPU, RTX 5090, power limit at 450W. Before caching, every full run took about 7 minutes 53 seconds of sustained GPU compute. Same inputs every time. Same outputs. And I kept recomputing them because that was just the default pipeline. After caching the embeddings, the same workload finishes in about 62 seconds. On cache hits the GPU basically does nothing. Most of that time is I/O and request handling. That works out to roughly a 7.6x speedup, but honestly the bigger thing is realizing how much GPU time was being wasted for no reason. I did waste time going in the wrong direction first. I assumed KV cache would help, since that is what everyone talks about with LLM inference. I even allocated an absurd amount of RAM for hierarchical KV cache. At one point I had a process reserving 245GB of host memory. Turns out embeddings do not really benefit from KV cache the way text generation does. SGLang hicache is doing the right thing, but it caches intermediate KV states, not final embedding outputs. That part was on me. After a few days of poking around and reading logs, it became obvious that the simplest solution was also the correct one. Just cache the final embedding vectors. I ended up using a small proof of concept someone shared here [https://github.com/joe32140/tei-qdrant-cache](https://github.com/joe32140/tei-qdrant-cache) It took about a day of tweaks and testing, but the difference was immediate. First run, cache miss, about 8 minutes of GPU compute. Subsequent runs, cache hit, about 62 seconds total. Per request latency stayed in the 150 to 400ms range, but the GPU stayed mostly idle. Nothing here is particularly clever. You pay GPU and electricity once to generate embeddings for large context inputs. If those inputs do not change and you recompute them every time, you are just burning money and power. Persist the outputs and reuse them. I think we spend a lot of time chasing model improvements and kernel optimizations, but sometimes the biggest win is just not doing the same work again. Genuinely curious how many people here are caching embeddings or prefill in production versus recomputing them every run.
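A minimal sketch of the "cache the final embedding vectors" approach, keying on a hash of the exact input text. The SQLite backend and the stand-in model call are assumptions for illustration, not how tei-qdrant-cache is implemented:

```python
import hashlib
import sqlite3
import numpy as np

db = sqlite3.connect("embedding_cache.db")
db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, vec BLOB)")

def embed_uncached(text: str) -> np.ndarray:
    # Stand-in for the real embedding server call (1024-dim, as in the post).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(1024).astype(np.float32)

def embed(text: str) -> np.ndarray:
    key = hashlib.sha256(text.encode()).hexdigest()
    row = db.execute("SELECT vec FROM cache WHERE key = ?", (key,)).fetchone()
    if row is not None:                       # cache hit: no GPU work at all
        return np.frombuffer(row[0], dtype=np.float32)
    vec = embed_uncached(text)                # cache miss: pay compute once
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (key, vec.tobytes()))
    db.commit()
    return vec

print(embed("def hello(): ...")[:4])  # a second call with the same text is a cache hit
```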
2026-01-28T15:32:57
https://v.redd.it/fd3owfxiz3gg1
Emergency_Fuel_2988
v.redd.it
1970-01-01T00:00:00
0
{}
1qpej60
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fd3owfxiz3gg1/DASHPlaylist.mpd?a=1772206394%2CODFmODIyOWVkZDY3MmU2ZTVmZDllZmM0MGY3ZmMxNDgwY2E2MmQ5YTFhMTBhODhjMWI1NzlmNzNmZTljNWYzMw%3D%3D&v=1&f=sd', 'duration': 144, 'fallback_url': 'https://v.redd.it/fd3owfxiz3gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 808, 'hls_url': 'https://v.redd.it/fd3owfxiz3gg1/HLSPlaylist.m3u8?a=1772206394%2CMTUxNTk4YWQ3MGE5OGExZjY5NDM3MWIzMGUwYjMxNWQ2NTZlNzVlOWYxZGFiZTY2NTgzMzk3YjMwMThmZWE3NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fd3owfxiz3gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qpej60
/r/LocalLLaMA/comments/1qpej60/caching_embedding_outputs_made_my_codebase/
false
false
https://external-preview…93e70a72349743f0
1
{'enabled': False, 'images': [{'id': 'MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=108&crop=smart&format=pjpg&auto=webp&s=44bf0f1ab4ee2a2b91838067f47cdbf21cfbe625', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=216&crop=smart&format=pjpg&auto=webp&s=3c00b00c97f136eeb263058adce9cde896de5d91', 'width': 216}, {'height': 358, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=320&crop=smart&format=pjpg&auto=webp&s=9ffda20abe0e3cd38360ff29f629b90371b438fb', 'width': 320}, {'height': 717, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=640&crop=smart&format=pjpg&auto=webp&s=2151dd41cf9f9affe6f685eb72c2d34dc382b176', 'width': 640}, {'height': 1076, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?width=960&crop=smart&format=pjpg&auto=webp&s=bfac2383ce99d100f546976e47ce9ab5b66ddf38', 'width': 960}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/MGNmdGVoMWp6M2dnMRfUtc9ieZvVElseim3ja2I4SOalP4eaNdEaZKie4W7I.png?format=pjpg&auto=webp&s=3c192480364032618c6f023a7d55c0e4b5dcd25b', 'width': 990}, 'variants': {}}]}
Has anyone set up local LLM + Vertex AI Search?
3
I think one of the big features Gemini has is its grounding with Google feature. While I think having a local tool do the searching would be nice, I don't think there is currently a usable decentralized database of web contents that can be easily used. As far as I understand, Vertex AI Search is basically the grounding with Google part but you can use it with any LLM like one you have running at home, but without the heavy lifting of having to load up hundreds of webpages and deal with bot detection and such. Has anyone set up a simple solution that lets you use Vertex AI search for something like Qwen3 VL 4b to get long context grounded results on a 16 GB GPU? Even better would be if there was some decentralized cached database on a Blockchain or something. Or maybe something like native YaCY integration.
2026-01-28T15:28:51
https://www.reddit.com/r/LocalLLaMA/comments/1qpef76/has_anyone_set_up_local_llm_vertex_ai_search/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpef76
false
null
t3_1qpef76
/r/LocalLLaMA/comments/1qpef76/has_anyone_set_up_local_llm_vertex_ai_search/
false
false
self
3
null
I built a self-hosted API server for Android automation using LLMs
0
Wanted to automate my Android phone with natural language commands, so I built a Docker-based HTTP API server that wraps https://github.com/droidrun/droidrun. What it does: - Send a goal like "open WhatsApp and message Mom" via API - Server queues the task and uses an LLM to figure out the steps - Supports Claude, ChatGPT, Gemini, DeepSeek, and Ollama (local models) Quick start: docker run -d --name droidrun -p 8000:8000 -e DROIDRUN_SERVER_KEY="change-me" ghcr.io/8ff/droidrunnerd:latest Then just POST your task: curl -X POST http://localhost:8000/run -d '{"goal":"open settings and enable dark mode"}' GitHub: https://github.com/8ff/droidrunnerd Happy to answer questions!
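The same POST as the curl example, from Python. How the server key is passed is not shown in the post, so this sketch omits auth and assumes the server answers with JSON:

```python
import requests

resp = requests.post(
    "http://localhost:8000/run",
    json={"goal": "open settings and enable dark mode"},
    timeout=30,
)
print(resp.status_code, resp.json())
```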
2026-01-28T15:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1qpe6mg/i_built_a_selfhosted_api_server_for_android/
8ffChief
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpe6mg
false
null
t3_1qpe6mg
/r/LocalLLaMA/comments/1qpe6mg/i_built_a_selfhosted_api_server_for_android/
false
false
self
0
{'enabled': False, 'images': [{'id': 'gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=108&crop=smart&auto=webp&s=2a6dbf6530318c61430917e5cbf9a06aee6d0513', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=216&crop=smart&auto=webp&s=f83265e267e013ca52f0d9c74117862338b63c9d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=320&crop=smart&auto=webp&s=fcf50bdc13cd310cc65e7ad9b6158b00bc3bc1e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=640&crop=smart&auto=webp&s=2b1fb8d37e06920b53b08d9cc134fb9de4b6b079', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=960&crop=smart&auto=webp&s=dcb15aef08ef9864cc86abd2161ed4a16411e245', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?width=1080&crop=smart&auto=webp&s=57733e7d5e74dfbc9269672afcc4f6cced749c07', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gQ8ITyG-fXfyOpaE7llXg5ExCPJ0k88LykjSZSy5g9Y.png?auto=webp&s=3652adf292c8f2079a96026da67cb4983413922c', 'width': 1200}, 'variants': {}}]}
Does anyone want to combine forces to run local SOTA models?
7
Hello all, I am long time enterprise IT/datacenter guy, and I have free reign of a datacenter facility here in Ohio. (and free power too, shhh) Ive been doing alot of genAI and MLOps/InferenceOps stuff over the last few years, and over this time I have accumulated a hodge podge of GPUs and high end servers - but I was stuck with 70b models, oss120b, and awq quants etc - because all my GPUs were disparate and in different systems. I want to change that in 2026 I have the capital to buy 2, maybe 3 blackwells right now - but as many of you know 4-6 is the sweet spot for these larger local sota models. I bought two 4u AI geared Gigabyte servers that can fit 4-5 blackwells in each, and I was wondering if anyone wants to go in on blackwells if we can split the use of their time or collaborate. I have a couple of projects that would really benefit from cheap local inference, so we could even combine forces on my current ventures if its something that is of interest to you. Maybe you already have some blackwells? Id be glad to host them. I'm hoping to get the b6000 server edition maxq, but workstation would suffice. I know it would require alot of trust etc, but we can connect and share LinkedIn and have a call or even meet in person. Just throwing this out there in case anyone might be interested! DM or comment, thank you!
2026-01-28T15:16:08
https://www.reddit.com/r/LocalLLaMA/comments/1qpe2zk/does_anyone_want_to_combine_forces_to_run_local/
ahgroseclose
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpe2zk
false
null
t3_1qpe2zk
/r/LocalLLaMA/comments/1qpe2zk/does_anyone_want_to_combine_forces_to_run_local/
false
false
self
7
null
FASHN VTON v1.5: Apache-2.0 virtual try-on model, runs on consumer GPUs (~8GB VRAM), ~1B params
79
We just open-sourced FASHN VTON v1.5, a virtual try-on model that generates photorealistic images of people wearing garments. We've been running this as a production API for the past year, and now we're releasing the weights and inference code under Apache-2.0.

# Why we're releasing this

Most open-source VTON models are either research prototypes that require significant engineering to deploy, or they're locked behind restrictive licenses. As state-of-the-art capabilities consolidate into massive generalist models, we think there's value in releasing focused, efficient models that researchers and developers can actually own, study, and extend commercially.

We also want to demonstrate that competitive results in this domain don't require massive compute budgets. Total training cost was in the $5-10k range on rented A100s.

This follows our [human parser release](https://www.reddit.com/r/MachineLearning/comments/1qax221/p_opensourcing_a_human_parsing_model_trained_on/) from a couple weeks ago.

# Specs

* **Parameters:** 972M
* **Architecture:** Custom MMDiT
* **VRAM:** ~8GB minimum
* **Hardware:** Runs on consumer GPUs (RTX 30xx/40xx)
* **Latency:** ~5 seconds on H100
* **License:** Apache-2.0 (fully permissive, commercial use allowed)

# Technical highlights

**Pixel-space operation:** Unlike most diffusion models that work in VAE latent space, we operate directly on RGB pixels. This avoids lossy VAE encoding/decoding that can blur fine garment details like textures, patterns, and text.

**Maskless inference:** No segmentation mask required on the target person. The model learns where clothing boundaries should be rather than being told.

# Links

* **GitHub:** [fashn-AI/fashn-vton-1.5](https://github.com/fashn-AI/fashn-vton-1.5)
* **HuggingFace:** [fashn-ai/fashn-vton-1.5](https://huggingface.co/fashn-ai/fashn-vton-1.5)
* **Project page:** [fashn.ai/research/vton-1-5](https://fashn.ai/research/vton-1-5)

# Quick example

    from fashn_vton import TryOnPipeline
    from PIL import Image

    pipeline = TryOnPipeline(weights_dir="./weights")

    person = Image.open("person.jpg").convert("RGB")
    garment = Image.open("garment.jpg").convert("RGB")

    result = pipeline(
        person_image=person,
        garment_image=garment,
        category="tops",
    )
    result.images[0].save("output.png")

# Coming soon

* **HuggingFace Space:** Online demo
* **Technical paper:** Architecture decisions, training methodology, and design rationale

Happy to answer questions about running this locally or the implementation.
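A quick usage sketch (not part of the official docs; it just loops the exact call from the quick example, with placeholder file names and category): the pipeline object should be reusable across calls, so trying one person photo against a folder of garments looks like this.

```python
# Sketch: reuse one TryOnPipeline across many garments.
# Only the API from the quick example above is assumed; the "garments"
# folder, file names, and the "tops" category are placeholders.
from pathlib import Path

from fashn_vton import TryOnPipeline
from PIL import Image

pipeline = TryOnPipeline(weights_dir="./weights")
person = Image.open("person.jpg").convert("RGB")

for garment_path in sorted(Path("garments").glob("*.jpg")):
    garment = Image.open(garment_path).convert("RGB")
    result = pipeline(
        person_image=person,
        garment_image=garment,
        category="tops",
    )
    result.images[0].save(f"tryon_{garment_path.stem}.png")
```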
2026-01-28T14:59:46
https://v.redd.it/x689lnt0t3gg1
JYP_Scouter
v.redd.it
1970-01-01T00:00:00
0
{}
1qpdn1t
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x689lnt0t3gg1/DASHPlaylist.mpd?a=1772204405%2CY2ExYjFjNjViMWFhMWY2NGQ4M2E1ZTgxMzE1Mjk5ZDEzMjc0YzVmMzc5NGEzYmJlOGYwNGRkNDA1ZWE0MTIyZQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/x689lnt0t3gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 930, 'hls_url': 'https://v.redd.it/x689lnt0t3gg1/HLSPlaylist.m3u8?a=1772204405%2CNDE3Nzg2NmRiYjZjZDQ2MGMwZjI4ZDQzZmQ1YzNjN2MyNzYxMjYzOGFjYjNkMTdlMzcyYWU0OWVhZDFmYzhkNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x689lnt0t3gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qpdn1t
/r/LocalLLaMA/comments/1qpdn1t/fashn_vton_v15_apache20_virtual_tryon_model_runs/
false
false
https://external-preview…e145fb1754b88a2b
79
{'enabled': False, 'images': [{'id': 'dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8e8b6c6c8c688ee0c94d58a6151648ff45c8d18', 'width': 108}, {'height': 104, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=216&crop=smart&format=pjpg&auto=webp&s=141a6795680c41cf847670270ec855155a0dcd57', 'width': 216}, {'height': 154, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=320&crop=smart&format=pjpg&auto=webp&s=5934e67c36ad287c35039e65fb19e9920de59619', 'width': 320}, {'height': 309, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=640&crop=smart&format=pjpg&auto=webp&s=96832ed5acf856d51d70cf41c88c977f1b0be843', 'width': 640}, {'height': 464, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=960&crop=smart&format=pjpg&auto=webp&s=c0bf466420d1c7caa0706d639c7e6a24e68ad423', 'width': 960}, {'height': 522, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f251304a34b26ba71fb1a68593a2071e4502471d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dzB2eTU3dTB0M2dnMQAcCp4XjN-WlzBmxKbhH4j6EHXSl-nuFCETM3brGZzN.png?format=pjpg&auto=webp&s=5d9bb47cd4abf9478c885f4fea04837092b62e5b', 'width': 2232}, 'variants': {}}]}
Backup those models, because of calls for regulations
92
[‘Humanity needs to wake up’ to AI threats, Anthropic CEO says](https://www.euronews.com/next/2026/01/28/humanity-needs-to-wake-up-to-ai-threats-anthropic-ceo-says)

> Dario Amodei, the CEO of Anthropic, says that humanity needs to regulate the use of AI,…
2026-01-28T14:50:06
https://www.reddit.com/r/LocalLLaMA/comments/1qpde5g/backup_those_models_because_of_calls_for/
ProfessionalSpend589
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpde5g
false
null
t3_1qpde5g
/r/LocalLLaMA/comments/1qpde5g/backup_those_models_because_of_calls_for/
false
false
self
92
null
1TB Model into a Single H200
2
2026-01-28T14:46:38
https://lmsys.org/blog/2026-01-26-int4-qat/?utm_source=tldrai
westsunset
lmsys.org
1970-01-01T00:00:00
0
{}
1qpdax9
false
null
t3_1qpdax9
/r/LocalLLaMA/comments/1qpdax9/1tb_modl_into_a_single_h200/
false
false
default
2
null
My workflow using Gemini Pro (for images), Grok & Qwen to "direct" AI video trailers (16k views on r/aivideo), 0 budget, 100% free
0
I recently posted a concept trailer for *Aegon's Conquest* that did well on r/aivideo. I wanted to share how I used LLMs to maintain visual consistency, which is usually the hardest part of AI video.

**The Problem:** Directing "action" (like dragon fire) usually results in hallucinations.

**The Fix:** I used Gemini to break down the scene into "shot lists" before feeding them into the video generator.

Below is a sneak peek from an upcoming scene I'm making just to push myself, after generating the image in Gemini and the video-gen prompt in Grok.

**Prompt Structure:** Cinematic close-up of Anya Taylor-Joy as Rhaenys Targaryen, side profile view. She stands perfectly still against a stone wall with a flickering torch in the background. Her expression is calm and intelligent. She slowly raises her eyes from looking down at a table to looking forward at a character off-screen, with a subtle, knowing arch of one eyebrow. Warm firelight illuminates her face, sharp focus, 8k, high fidelity, photorealistic style.

Visual Anchor: "Stands perfectly still" (prevents body warping).

Action: "Raises eyes... subtle arch of eyebrow" (matches the questioning tone of the dialogue without distorting the mouth/jaw).

Lighting: Matches your uploaded reference (warm torchlight).

Performance Note for the Voiceover:

Tone: Observant, dry, and slightly rhetorical. She already knows the answer; she is just vocalizing the obvious to break the tension.

Delivery: "I take it..." (slight pause as she looks up) "...the Storm King declined our generous offer?" (ending on a distinct upward inflection).

**Result:** https://grok.com/imagine/post/bb9c16fe-91c9-4574-a6c1-e3fbca1380b3?source=post-page&platform=web

Happy to answer questions about the prompting workflow!
2026-01-28T14:44:15
https://v.redd.it/9t3y0xivq3gg1
Nihal_raj30
/r/LocalLLaMA/comments/1qpd8nx/my_workflow_using_gemini_pro_for_images_grok_qwen/
1970-01-01T00:00:00
0
{}
1qpd8nx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9t3y0xivq3gg1/DASHPlaylist.mpd?a=1772333065%2CZjIzNWVjMzgxYWI2MjY5M2EwZDI0YjM4MGVlNmM3ZmQ5YjUyZWZmMjI2MWNjYmNiYmVlMjI2OTYwYWIzYWVkMw%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/9t3y0xivq3gg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/9t3y0xivq3gg1/HLSPlaylist.m3u8?a=1772333065%2CMjRkM2U5MjU0MWM1ODFiNzI2NmI2ZDc0ODYzYWYyMGUyMGU4ZmM0MDRmODE2MTJlZmRhZTVmYjdkNWM1Y2I0MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9t3y0xivq3gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qpd8nx
/r/LocalLLaMA/comments/1qpd8nx/my_workflow_using_gemini_pro_for_images_grok_qwen/
false
false
https://external-preview…3c02229736f44698
0
{'enabled': False, 'images': [{'id': 'aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ee7a77813e0a895541e6f4f715d8c9c0de67a28', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=216&crop=smart&format=pjpg&auto=webp&s=5089becf3e311ae74f6184e5f2213d4ddb47acc3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=320&crop=smart&format=pjpg&auto=webp&s=e235527d94abe0b1494009acc938b033b3bd3269', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=640&crop=smart&format=pjpg&auto=webp&s=646ada2efcf167c2ef15c13845331da4154f3e53', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=960&crop=smart&format=pjpg&auto=webp&s=afe24d9bb7dc9053912250cdafbf295b90119f61', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?width=1080&crop=smart&format=pjpg&auto=webp&s=32f0e4139261854f5705f45379a1f8e7bd7c5e3a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWhhbXduanZxM2dnMQX3kf7r19YxLYlxr8gwPM4kJm-St5nylm8BKME-L469.png?format=pjpg&auto=webp&s=5869c623b3753dd65caecd0d514c7ee2bd32ad22', 'width': 1920}, 'variants': {}}]}
SudoAgent - Self-hosted authorization and audit logging for AI agents
1
Running AI agents that can modify your self-hosted infrastructure? You probably want authorization + audit trails.

SudoAgent is an open-source library (Apache 2.0) that wraps agent tool calls with:

* Policy enforcement (allow/deny/approve)
* Approval workflows
* Tamper-evident ledger (JSONL or SQLite)
* Verification CLI

Self-hosted by design:

* No external services required
* JSONL files or SQLite (your choice)
* Optional cryptographic signing
* Full control over your audit data

Built for scenarios like:

* Agent can restart services → require approval for production
* Agent can modify configs → log all changes with hash chain
* Agent can delete backups → hard deny unless explicitly approved

Just released v2. Docs are comprehensive, setup is <5 minutes.

GitHub: [lemnk/Sudo-agent: A runtime authorization layer for LLM tool calls policy, approval, audit logs.](https://github.com/lemnk/Sudo-agent)
2026-01-28T14:38:47
https://www.reddit.com/r/LocalLLaMA/comments/1qpd3hc/sudoagent_selfhosted_authorization_and_audit/
No_Loan5230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpd3hc
false
null
t3_1qpd3hc
/r/LocalLLaMA/comments/1qpd3hc/sudoagent_selfhosted_authorization_and_audit/
false
false
self
1
null
Help setting up local AI
1
Hey there!

I'm using a laptop with 96GB of RAM and an 8TB SSD (WD_BLACK SN850X), paired with an Nvidia 5070 Laptop GPU and an AMD Ryzen AI 9 HX370 CPU.

I want to set up a local AI to help out with coding and event-driven state handling (logic debugging, basically), and I've been doing a lot of research to see what I could use. The conclusion I came to is:

* GPT-OSS-20B, quantized at 4-bit (hope I'm using the terminology correctly), as the primary model
* Qwen-Coder-32B, quantized at 4-bit, as a secondary model for longer code debugging
* Setting up RAG with full documentation on the event-driven state handling language that I want to work with (Osiris), for much faster inference (once again, hopefully I'm using the terminology properly)

I realize there would be a lot of offloading onto the CPU, but I don't think I'd have much issue with it... Can anyone with specs similar to mine tell me what their experience with these models is? Are they "fast enough" (just a few seconds per prompt) and effective?

The idea would be to run them completely offline after setting up about 20 GB of RAG, accessing the internet on occasion if there is information missing.

Would you recommend another model for my specs and needs?
2026-01-28T14:31:20
https://www.reddit.com/r/LocalLLaMA/comments/1qpcwnd/help_setting_up_local_ai/
MobTalon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpcwnd
false
null
t3_1qpcwnd
/r/LocalLLaMA/comments/1qpcwnd/help_setting_up_local_ai/
false
false
self
1
null
Mac mini for work
1
I'm curious if anyone is using a local LLM server in their workplace. Our dev team manager wants to purchase a machine that we'd all use for development. The twist is that our CIO has already approved licenses for GitHub Copilot (Pro). Can anyone justify building, maintaining, and using a local LLM server over a paid Copilot subscription? Security/privacy is the only advantage I'm seeing - but Copilot has already been approved...
2026-01-28T14:24:58
https://www.reddit.com/r/LocalLLaMA/comments/1qpcqv0/mac_mini_for_work/
ngless13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpcqv0
false
null
t3_1qpcqv0
/r/LocalLLaMA/comments/1qpcqv0/mac_mini_for_work/
false
false
self
1
null
Self hosted Kimi K2.5 (no content) issue
9
Hey guys, I am hosting Kimi K2.5 on an 8x H200 node. I am getting really nice speed and output, but I am having the following issues:

1. I'm observing `(no content)` in responses from the model, appearing randomly between text, reasoning, and tool calls when running the model with vLLM. I observed similar behavior with sglang, where tool calls themselves did not work at all.

{"role":"assistant","content":[{"type":"text","text":"(no content)"},{"type":"tool_use","id":"functions.Read:2","name":"Read","input":{"file_path":"/Users/pratik.narola/workspace/opencode/README.md"}},{"type":"tool_use","id":"functions.Bash:3","name":"Bash","input":{"command":"ls -la /Users/pratik.narola/workspace/opencode/packages","description":"List packages directory structure"}},{"type":"tool_use","id":"functions.Read:4","name":"Read","input":{"file_path":"/Users/pratik.narola/workspace/opencode/package.json"},"cache_control":{"type":"ephemeral"}}]},

2. With opencode, it's breaking completely. The tool call parser (and possibly the chat template) seems to be what's breaking: markers like <|tool_calls_section_begin|> and <|tool_calls_section_end|> are leaking into the final content output.

I am running vLLM with

--tool-call-parser kimi_k2 \
--reasoning-parser kimi_k2 \

Please let me know if you have experienced anything like this or have any suggestions or ideas for me to try out. Thanks.
2026-01-28T14:11:25
https://www.reddit.com/r/LocalLLaMA/comments/1qpce5w/self_hosted_kimi_k25_no_content_issue/
pratiknarola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpce5w
false
null
t3_1qpce5w
/r/LocalLLaMA/comments/1qpce5w/self_hosted_kimi_k25_no_content_issue/
false
false
self
9
null
Should data centers be required to include emergency shutdown mechanisms as we have with nuclear power plants?
30
2026-01-28T14:10:43
https://v.redd.it/xvo60r3cl3gg1
FinnFarrow
v.redd.it
1970-01-01T00:00:00
0
{}
1qpcdjg
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xvo60r3cl3gg1/DASHPlaylist.mpd?a=1772201459%2CZjk5MzA3YjE0YWZmNmMxZGZhYTJlZjJlMGIzYjFlMTRiYTA2ZTNiMDllNDZkMmRkNDYxY2EwYzFhYzA0ZDhmNQ%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/xvo60r3cl3gg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/xvo60r3cl3gg1/HLSPlaylist.m3u8?a=1772201459%2CYTRkNTZkNWU0ZjgzOGQ3MTEzZTNhNDU4YzFkNjkzYzUzMTlmYTNiYTdjNTRkY2U4NDA3ZjRlZDY4YjQ0ZmViYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xvo60r3cl3gg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1qpcdjg
/r/LocalLLaMA/comments/1qpcdjg/should_data_centers_be_required_to_include/
false
false
https://external-preview…fa710a1b1350326a
30
{'enabled': False, 'images': [{'id': 'dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF.png?width=108&crop=smart&format=pjpg&auto=webp&s=fff4f4e94ffebb9f493d403d18f93af8519f42fe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF.png?width=216&crop=smart&format=pjpg&auto=webp&s=50ebb0eb9126ccb57ecf87ffc1ec757bed923e9c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF.png?width=320&crop=smart&format=pjpg&auto=webp&s=90a1b364b10e6157ba9114975c774f22a997ec89', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF.png?width=640&crop=smart&format=pjpg&auto=webp&s=98de57c028718e5be85746311ed4402776d0e229', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dnQ2eHBuOWNsM2dnMbsCN6d3L6_HXnnRFCSSTAWqyWY23lOHKBvxCRMllsYF.png?format=pjpg&auto=webp&s=36643e9709773fd9e147618c0b5e055dd7f63e7c', 'width': 720}, 'variants': {}}]}
Made WebLLM downloads resumable - connection dies at 80%, picks up from 80%
0
Running Phi-3-mini in the browser via WebLLM. Works great until you're 1.8GB into a 2.3GB download and your connection hiccups. Back to zero. Browsers just don't have resumable verified downloads. So I built it. Chunks get verified and saved as they download. Connection drops? Page refresh? Close laptop? Resumes from last verified chunk. ```ts import { preloadVerifiedModel } from '@verifyfetch/webllm'; await preloadVerifiedModel('Phi-3-mini-4k-instruct-q4f16_1-MLC', { manifestUrl: '/vf.manifest.json', onProgress: ({ file, percent, resumed }) => { console.log(`${file}: ${percent}%${resumed ? ' (resumed)' : ''}`); } }); // MLCEngine finds verified files in its cache const engine = new MLCEngine(); await engine.reload('Phi-3-mini-4k-instruct-q4f16_1-MLC'); ``` Uses WebLLM's cache structure (`webllm/model`, etc.) so MLCEngine finds the files without any changes. https://github.com/hamzaydia/verifyfetch/tree/main/packages/webllm Anyone else running into this? Curious if there's interest in something like this upstream.
2026-01-28T14:07:20
https://www.reddit.com/r/LocalLLaMA/comments/1qpcag4/made_webllm_downloads_resumable_connection_dies/
aginext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpcag4
false
null
t3_1qpcag4
/r/LocalLLaMA/comments/1qpcag4/made_webllm_downloads_resumable_connection_dies/
false
false
self
0
null
LM Studio cloud models client
1
Hey guys, as a big fan of local inference, I constantly had to run two separate apps: one for local inference, LM Studio (sometimes Ollama), and a cloud client like Jan or Apollo. While I like both of them, I always wondered how I could use LM Studio to handle local and cloud simultaneously. It's less about resource allocation and more about productivity, really.

After doing some research, I found out that the LM Studio team built a simple client to do just that. I forked it and tuned it to my power-user needs. What was added compared to the bare-bones original:

* any OpenAI-compatible API is supported
* sampling parameter support
* reasoning for thinking models
* setting a system prompt

If you're interested in trying it out, find it at: https://lmstudio.ai/gdmka/openai-compat-endpoint

The general README is here: https://github.com/gdmka/openai-compat-endpoint

I hope you will find it useful.

https://preview.redd.it/uyauzk6fj3gg1.png?width=3584&format=png&auto=webp&s=e76fffd2b3b9fddfadff9471b5e81edc20f5b2c3
2026-01-28T14:04:09
https://www.reddit.com/r/LocalLLaMA/comments/1qpc7nk/lm_studio_cloud_models_client/
gdmkaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpc7nk
false
null
t3_1qpc7nk
/r/LocalLLaMA/comments/1qpc7nk/lm_studio_cloud_models_client/
false
false
https://a.thumbs.redditm…8d1N_Dgav-F0.jpg
1
null
What does Gemini know about you?
0
[What LLMs know about you](https://preview.redd.it/bj0j1je3k3gg1.png?width=1080&format=png&auto=webp&s=1adf3c15bc14a88af56ecde0e142dd5376e25ead)

I built a workflow to test what Gemini "knows" about people without giving it any context or tools. I tested myself against the major AI models:

* ChatGPT: Declined (privacy guardrails)
* Claude: Declined (privacy guardrails)
* Gemini: Gave me a full biography - founder of Flink (€1B unicorn), ex-McKinsey consultant... lol

Plot twist: I've never worked at any of these places. I'm building a different startup called Needle app. But weirdly, Gemini got the vibe right - German entrepreneur, startup scene, operations-focused. It pattern-matched me into a "neighboring reality" with similar characteristics.

The interesting part: it seems to pull from real people's data. The bio it gave me matches someone else's actual LinkedIn profile almost exactly.

I made the workflow public if anyone wants to test themselves: https://needle.app/workflow-templates/what-ai-knows-about-you

Curious what results others get. Does Gemini invent biographies for you too? Or does it decline like the other models?
2026-01-28T14:03:53
https://www.reddit.com/r/LocalLLaMA/comments/1qpc7eh/what_does_gemini_know_about_you/
jannemansonh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpc7eh
false
null
t3_1qpc7eh
/r/LocalLLaMA/comments/1qpc7eh/what_does_gemini_know_about_you/
false
false
https://b.thumbs.redditm…2S9lTNvLkYPY.jpg
0
null
Multi-stream constraint failure in Gemini Pro (Logic vs. Creativity)
0
I’m hitting a consistent bug where Gemini Pro fails at symbolic logic. In a 4-part SATB music task, the model cannot maintain "Parallel Check" constraints. It can handle the "creative" side (choosing chords), but it fails the "mathematical" side (checking for illegal parallel movements between independent streams).

Even when prompted with the error, the model hallucinates a fix that breaks a different fundamental rule (like voice-stacking). This suggests a weakness in how LLMs process overlapping constraints in real time.

Are there better prompting techniques for "hard-rule" compliance, or is this a fundamental limit of current transformer architectures?
2026-01-28T13:41:52
https://www.reddit.com/r/LocalLLaMA/comments/1qpbnwt/multistream_constraint_failure_in_gemini_pro/
Putrid_Draft378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpbnwt
false
null
t3_1qpbnwt
/r/LocalLLaMA/comments/1qpbnwt/multistream_constraint_failure_in_gemini_pro/
false
false
self
0
null
[R] Pushing Llama 3.1 8B further: My experiments with 800k specialized tokens and the impact of Context Length
1
Hi everyone,

I’ve spent the last few weeks running different training tests on Llama 3.1 8B Instruct, and I wanted to share a specific "checkpoint" (I call it Model E) that feels like a real success.

I should start by saying **I’m not a coder or a specialist in this field.** I’m an enthusiast who spends a lot of time "under the hood" of these models, learning as I go. My training technique is pretty basic, but it has taught me two very important lessons that I think the local LLM community will find interesting:

1. **Dataset prep is everything.** It’s not about the quantity of the data, but the "density" and structure.
2. **Context Length (MAX_LENGTH) is the secret sauce.** In my experience, setting a value of **3096** was the turning point where the model’s reasoning actually started to stabilize and surpass the base model.

# The Experiment

I used a technique I call **STO (Specialized Task Optimization)**. The idea is to stop the model from just "predicting the next word" and force it to "explain the logic."

I only used **800,000 specialized synthetic tokens** for this run. I actually have a dataset of 300 million tokens ready, but training on that scale is currently beyond my hardware and my current technical skills. However, seeing what just 800k tokens did to an 8B model is eye-opening.

# The Results (Subjective vs. Objective)

According to my internal testing, the "IQ" of this model feels significantly higher than the base 8B; personally, it feels like a **20-30 point jump** in how it handles complex instructions.

In my evaluations (ARC, MMLU, Hellaswag), it consistently outperformed the base Llama 3.1 8B Instruct, especially in **ARC Challenge (Logic)**, where it hit **53.6%**.

**But here is the catch:** I am biased. I built this, so of course I want it to be good. That’s why I’m sharing it here. I want you guys to run your own evals, poke holes in it, and tell me where it fails.

# Why this matters for us

The goal is to see if we can make an 8B model think and reason like a 70B model. If we can do that, it means anyone with a normal home computer can run a highly "intelligent" agent without needing a cluster of A100s.

# Links

If you want to test it out, I’ve uploaded both the full weights and the GGUFs (Ollama ready):

* **Full Weights/LoRA:** [AiAsistent/Llama-3.1-8B-Instruct-STO-Master](https://huggingface.co/AiAsistent/Llama-3.1-8B-Instruct-STO-Master)
* **GGUF Version:** [AiAsistent/Llama-3.1-8B-Instruct-STO-Master-GGUF](https://huggingface.co/AiAsistent/Llama-3.1-8B-Instruct-STO-Master-GGUF)
* **Ollama Command:** ollama run aiasistentworld/Llama-3.1-8B-Instruct-STO-Master

I’m still learning, and this is just another test out of the 100 I have planned. If you decide to give it a spin, please let me know your thoughts, especially on where it struggles.

**Settings used for the run:**

* **Method:** STO (Private technique)
* **CTX:** 3096
* **Data:** 800k Synthetic Tokens (Grade 20)

Looking forward to your feedback!
2026-01-28T13:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1qpbn1a/r_pushing_llama_31_8b_further_my_experiments_with/
AlexHardy08
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qpbn1a
false
null
t3_1qpbn1a
/r/LocalLLaMA/comments/1qpbn1a/r_pushing_llama_31_8b_further_my_experiments_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=108&crop=smart&auto=webp&s=e44e70bd903328d6e1a2bbd9ef0d00258f4047e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=216&crop=smart&auto=webp&s=1ab74e9cc87c8d5a5a968649d1ff3d525d929222', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=320&crop=smart&auto=webp&s=d6daa2ebbef6ef68f2f3b57584346c0faa11f490', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=640&crop=smart&auto=webp&s=910b8fda21fb2406b90b3d79f8f116b1eef3f464', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=960&crop=smart&auto=webp&s=afe7fd298ace530866f05f99e8f54ae47ceffcce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?width=1080&crop=smart&auto=webp&s=7da0ea070651252b800a26b54d9da14fba92a054', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SHXx0N5aMKhjJWvRkml5hywpOdSZCssG6CTpdPOlGH4.png?auto=webp&s=4bc043b2be2b05a91284eff54d063c6527402d93', 'width': 1200}, 'variants': {}}]}