title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
The Qwen Devs Are Teasing Something
300
I'm going to assume a new VL model
2026-01-27T10:28:56
https://i.redd.it/umvks92vcvfg1.png
Few_Painter_5588
i.redd.it
1970-01-01T00:00:00
0
{}
1qoa8rp
true
null
t3_1qoa8rp
/r/LocalLLaMA/comments/1qoa8rp/the_qwen_devs_are_teasing_something/
false
false
default
300
{'enabled': True, 'images': [{'id': 'umvks92vcvfg1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=108&crop=smart&auto=webp&s=30de4c962f90bec7ac1c2393a2e5cff9ded4d2d6', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=216&crop=smart&auto=webp&s=3002a8575f91d1608be9dee407fd9ef712e85f73', 'width': 216}, {'height': 108, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=320&crop=smart&auto=webp&s=0e567cff7332ebb3c6888343eea532b289e8e350', 'width': 320}, {'height': 217, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=640&crop=smart&auto=webp&s=297a92382cbb71a347dd9192a2d8ae1054cf9fb2', 'width': 640}, {'height': 326, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=960&crop=smart&auto=webp&s=00e3f5df608b998be829d956f23af9be28eee96e', 'width': 960}, {'height': 367, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?width=1080&crop=smart&auto=webp&s=9d5112b41ca397c8b57d99a75515ddf0f81701fd', 'width': 1080}], 'source': {'height': 406, 'url': 'https://preview.redd.it/umvks92vcvfg1.png?auto=webp&s=163c46fa28a2b1741d0474e548bdbe38a4bd0e9a', 'width': 1194}, 'variants': {}}]}
Top 20 Tech Companies every Reddit Group professional must know about for Career link and the Last one will Surprise you
1
[removed]
2026-01-27T10:15:51
https://newsaffairng.com/2024/06/20/the-worlds-top-20-tech-companies-location-market-cap-and-career-links-2/
Jawabill10
newsaffairng.com
1970-01-01T00:00:00
0
{}
1qoa12a
false
null
t3_1qoa12a
/r/LocalLLaMA/comments/1qoa12a/top_20_tech_companies_every_reddit_group/
false
false
default
1
null
Is Clawdbot overhyped? Which are the killer features?
0
Clawdbot has been making waves as a personal AI assistant, but I’m curious, is it really as groundbreaking as it seems? From what I’ve gathered, it offers features like: \- Image Processing: Capture images via WhatsApp (e.g., Ray-Bans) and get instant product searches or price comparisons. \- Shopping Automation: Add items to your cart automatically if they’re cheaper on Amazon. \- Integration: Works seamlessly with platforms like WhatsApp for voice commands and tasks. But let’s be real, how much of this is truly innovative? Memory and cron jobs are table stakes for many AI assistant these days. Are there other killer features I’m missing? Or is Clawdbot just riding the hype wave? What’s your take? Have you tried it? Share your thoughts!
2026-01-27T10:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1qo9t83/is_clawdbot_overhyped_which_are_the_killer/
Deep_Traffic_7873
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo9t83
false
null
t3_1qo9t83
/r/LocalLLaMA/comments/1qo9t83/is_clawdbot_overhyped_which_are_the_killer/
false
false
self
0
null
DeepSeek V4 maybe was a multimodal model?
23
On DeepSeek Ocr 2 paper we can see there have a sentence: 6.2. Towards Native Multimodality DeepEncoder V2 provides initial validation of the LLM-style encoder’s viability for visual tasks. More importantly, this architecture enjoys the potential to evolve into a unified omni-modal encoder: a single encoder with shared 𝑊𝑘, 𝑊 𝑣 projections, attention mechanisms, and FFNs can process multiple modalities through modality-specific learnable query embeddings. Such an encoder could compress text, extract speech features, and reorganize visual content within the same parameter space, differing only in the learned weights of their query embeddings. **DeepSeek-OCR’s optical compression represents an initial exploration toward native multi-modality,** while we believe DeepSeek-OCR 2’s LLM-style encoder architecture marks our further step in this direction. **We will also continue exploring the integration of additional modalities through this shared encoder framework in the future.** [https://github.com/deepseek-ai/DeepSeek-OCR-2/blob/main/DeepSeek\_OCR2\_paper.pdf](https://github.com/deepseek-ai/DeepSeek-OCR-2/blob/main/DeepSeek_OCR2_paper.pdf)
2026-01-27T09:49:18
https://www.reddit.com/r/LocalLLaMA/comments/1qo9kg5/deepseek_v4_maybe_was_a_multimodal_model/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo9kg5
false
null
t3_1qo9kg5
/r/LocalLLaMA/comments/1qo9kg5/deepseek_v4_maybe_was_a_multimodal_model/
false
false
self
23
{'enabled': False, 'images': [{'id': 'Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=108&crop=smart&auto=webp&s=0eb6c48ccf733d68742139fd7ca0e20ec259acb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=216&crop=smart&auto=webp&s=470eb6752ac256b9d24d06ec18dc1cc71d5f549f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=320&crop=smart&auto=webp&s=ae4a8647f9e8bf4e4671a8b5701b2cb6e095ea70', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=640&crop=smart&auto=webp&s=5927a39569d750dbcf9464f0d2feded2bd1d253d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=960&crop=smart&auto=webp&s=320b2f63ff199dc876711339b4d67ab1ac55d30d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?width=1080&crop=smart&auto=webp&s=339336af4813ee2d38ce03b4133343c3019287d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z5G1VJbHdS9HXYh9noLV22prUxrQ7CpfZ0CIwBE-0zw.png?auto=webp&s=465fbad7c2cfa22b5bc175a44633d7b5c8ad2280', 'width': 1200}, 'variants': {}}]}
Why no grammar on online APIs?...
1
I have been developing some stuff with Llama as I try to build a one specific service and I have been using a 3090 to run a 70B model at quant 5, taking around 50GB which exceeds what I got on VRAM; so I've gone through drastic ways. I implemented a lot of kill switches on tokens, careful stateful prompting, etc... to squeeze every single speed boost I could. And there was my saviour dynamically generated grammar... speeding shit up to 50x times for my usecase, giving more accurate responses, it was like inpainting but for LLM; the model was not trained for this, you should load another model at once (another one?); No, no problem, Force it; what would take a couple of inferences where the answer couldn't be assured and the LLM loved to pointlessly explain before going to the point, now was taking 1 inference, sometimes just 1 mere token, as I reversed the answer style to explain later, and I could kill generation once I found keytokens and predict the rest of the response; so 50x to 100x is no joke. Of course the online services are even faster, despite my speedboost because they have insane amounts of VRAM, but the ouput is often not assured, or may be hard to parse; but they still tend to pointlessly explain in unparsable ways. Why wouldn't they expose Grammar? or have an akin mechanism as a feature?... not even deepseek based services. And now how am I supposed to run this on the cloud later on other providers?... with no grammar the answers can be so quack no matter how good the prompt is, there's no guarantee even claude messes up even if it generates 300 tokens in the time I make one, that one single token has more useful information than those 300. Would have to make my own server with grammar support?... I am not exactly moneybags if this can't be hooked to another service.
2026-01-27T09:39:39
https://www.reddit.com/r/LocalLLaMA/comments/1qo9ef9/why_no_grammar_on_online_apis/
boisheep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo9ef9
false
null
t3_1qo9ef9
/r/LocalLLaMA/comments/1qo9ef9/why_no_grammar_on_online_apis/
false
false
self
1
null
Air Cooled 3090 for Servers?
3
Has anybody tried 'server-izing' a 3090? Strip off the bulky heatsink, fans & plastic and putting on some aftermarket heatsink so that the whole thing becomes an air cooled, 2slot server card instead of a 3.75slot chonker. Downvolt the thing for lower temps if it's still too hot? I want to put a pair into a 2U rack server which has the power & airflow needed. Just not the physical space to fit a 4slot gamer gpu.
2026-01-27T09:25:09
https://www.reddit.com/r/LocalLLaMA/comments/1qo95tb/air_cooled_3090_for_servers/
__E8__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo95tb
false
null
t3_1qo95tb
/r/LocalLLaMA/comments/1qo95tb/air_cooled_3090_for_servers/
false
false
self
3
null
ClawdBot: The New Primary Target for Infostealers in the AI Era
0
2026-01-27T09:19:59
https://www.infostealers.com/article/clawdbot-the-new-primary-target-for-infostealers-in-the-ai-era/
Malwarebeasts
infostealers.com
1970-01-01T00:00:00
0
{}
1qo92q9
false
null
t3_1qo92q9
/r/LocalLLaMA/comments/1qo92q9/clawdbot_the_new_primary_target_for_infostealers/
false
false
https://external-preview…54910ce9d2fdcf3a
0
{'enabled': False, 'images': [{'id': 'bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ.png?width=108&crop=smart&auto=webp&s=b38c3e9cbb25a90e4b83de0fd4c538da28485154', 'width': 108}, {'height': 87, 'url': 'https://external-preview.redd.it/bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ.png?width=216&crop=smart&auto=webp&s=a47c3856101bf554280bc3f4ee2b7f89bb82bc3a', 'width': 216}, {'height': 129, 'url': 'https://external-preview.redd.it/bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ.png?width=320&crop=smart&auto=webp&s=306f36b1ebe8ea5e6e4639e7cee5280ed58a4180', 'width': 320}, {'height': 259, 'url': 'https://external-preview.redd.it/bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ.png?width=640&crop=smart&auto=webp&s=a857570bc46fc2c554caa385f0019efd41e68aed', 'width': 640}], 'source': {'height': 366, 'url': 'https://external-preview.redd.it/bZEsizBEgTeZzKqhgIwFMtF3aItRKXWlztlPcqyHZzQ.png?auto=webp&s=2a04fd4c2568557fe353a1dd90132d8677b8837f', 'width': 903}, 'variants': {}}]}
I made Geminicli-sdk inspired by github's copilot-sdk
0
Hey, guys, I wanna you all to check out [OEvortex/geminicli-sdk](https://github.com/OEvortex/geminicli-sdk) A **multi-language SDK** for Google Gemini Code Assist API, inspired by the [GitHub Copilot SDK](https://github.com/github/copilot-sdk). GeminiCLI SDK provides high-level interfaces for interacting with the Gemini Code Assist API in **Python**, **TypeScript**, **Rust**, **Go**, and **C++**, supporting: * 🔐 **OAuth Authentication** \- Seamless authentication using Gemini CLI credentials * 🌊 **Streaming Responses** \- Real-time streaming with Server-Sent Events (SSE) * 🛠️ **Tool Calling** \- Define and use custom tools with the model * 💬 **Session Management** \- Manage conversation state and history * 🧠 **Thinking/Reasoning** \- Support for model thinking/reasoning content
2026-01-27T09:09:54
https://www.reddit.com/r/LocalLLaMA/comments/1qo8wr6/i_made_geminiclisdk_inspired_by_githubs_copilotsdk/
Resident_Suit_9916
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo8wr6
false
null
t3_1qo8wr6
/r/LocalLLaMA/comments/1qo8wr6/i_made_geminiclisdk_inspired_by_githubs_copilotsdk/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=108&crop=smart&auto=webp&s=71b6322b0bd63de47dca2cabed25a682226ee231', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=216&crop=smart&auto=webp&s=9adddedc6250b4d4528a788be20f43714185b599', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=320&crop=smart&auto=webp&s=6eecf70b384ba4e3fa2dbbd8f2f1878d438ddb09', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=640&crop=smart&auto=webp&s=07d5db7815fa5cb77019b526a886a465f51b32a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=960&crop=smart&auto=webp&s=5458d692b6085a7f00e4e2a0badaf3ffbe53f566', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?width=1080&crop=smart&auto=webp&s=a5d74cd21bb5bc38995ee00098fa31ec1a5ddcfb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uGOtrbBBeVzPt-zHpZJ99aRWHbWqOy4ctEq601ZsvTQ.png?auto=webp&s=a4f83af94e67f1b481693f617bcd3158f859ac37', 'width': 1200}, 'variants': {}}]}
Kimi K2.5 Launches, Unsloth quantisations coming soon
54
[https://platform.moonshot.ai/docs/guide/kimi-k2-5-quickstart](https://platform.moonshot.ai/docs/guide/kimi-k2-5-quickstart)
2026-01-27T09:00:51
https://www.reddit.com/r/LocalLLaMA/comments/1qo8r8s/kimi_k25_launches_unsloth_quantisations_coming/
Plastic-Accident862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo8r8s
false
null
t3_1qo8r8s
/r/LocalLLaMA/comments/1qo8r8s/kimi_k25_launches_unsloth_quantisations_coming/
false
false
self
54
null
Can Llama 3.2 run fast on an i7-12700H + Iris Xe? (Looking for a Google alternative in terminal)
2
I’m looking to start using local LLMs on my machine so I don’t have to keep going to Google every time I have a basic question or need a Linux command explained. I mainly want to use it quickly in the terminal for things like "how do I do XYZ in Kali Linux" and get an instant answer. I'm looking at **Llama 3.2 (1B or 3B)**, but I’m not sure how well it’ll actually run on my specs. I don't have a dedicated graphics card, just the integrated one. **Here are my PC specs:** * **CPU:** 12th Gen Intel Core i7-12700H (2.30 GHz) * **RAM:** 16 GB * **GPU:** Intel Iris Xe Graphics (shared memory) * **OS:** Windows 11 / Kali Linux Will Llama 3.2 1B be fast enough for "instant" terminal answers on this? Also, since I'm mostly asking about Linux commands and basic tech stuff, does it actually have enough info to replace a quick Google search? Lastly, are there any other free models that are super low-resource but better for this kind of stuff? I used AI to make this post better cause my English is not that good so please don't flag this post as AI-generated post for karma gain. Thanks.
2026-01-27T08:59:14
https://www.reddit.com/r/LocalLLaMA/comments/1qo8q7u/can_llama_32_run_fast_on_an_i712700h_iris_xe/
explain-like-youre-5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo8q7u
false
null
t3_1qo8q7u
/r/LocalLLaMA/comments/1qo8q7u/can_llama_32_run_fast_on_an_i712700h_iris_xe/
false
false
self
2
null
RO Philosophy is a theoretical and mathematical framework that treats reality as a computational process​#QuantumPhysics #InformationTheory #Metaphysics
1
2026-01-27T08:59:01
https://www.reddit.com/gallery/1qo8q3d
erikqamalyan4
reddit.com
1970-01-01T00:00:00
0
{}
1qo8q3d
false
null
t3_1qo8q3d
/r/LocalLLaMA/comments/1qo8q3d/ro_philosophy_is_a_theoretical_and_mathematical/
false
false
https://b.thumbs.redditm…Zr8dYLyodVEU.jpg
1
null
I built an open-source tool that lets AI Agents (Claude/Windsurf) generate marketing videos for you. Built with Remotion & MCP.
0
Hi everyone! 👋 > Like many of you, I love coding but hate making promotional videos for my side projects. > So I built **Auto Director** – a framework that lets AI agents direct, edit, and render videos autonomously. > > **Features:** > - 🎬 **AI-Native:** Uses MCP to let Claude Desktop control the video generation. > - ⚛️ **React-based:** Built on top of Remotion. > - 🎨 **Themes:** Cyberpunk, Minimal, Playful styles included. > > It's open source! Would love your feedback. > Repo: [https://github.com/naki0227/auto_CM_director]( https://github.com/naki0227/auto_CM_director )
2026-01-27T08:38:17
https://www.reddit.com/r/LocalLLaMA/comments/1qo8dwp/i_built_an_opensource_tool_that_lets_ai_agents/
Alone_Web7491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo8dwp
false
null
t3_1qo8dwp
/r/LocalLLaMA/comments/1qo8dwp/i_built_an_opensource_tool_that_lets_ai_agents/
false
false
self
0
{'enabled': False, 'images': [{'id': 'anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=108&crop=smart&auto=webp&s=4cf0362c6926746e6c0325e8b4f6be0cf3e07cd8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=216&crop=smart&auto=webp&s=ec1b45301ae2e1394a1f4c622069285493bd88b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=320&crop=smart&auto=webp&s=ea916641fea7fbd0280e814f83c378d634cd13de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=640&crop=smart&auto=webp&s=6f9ca68f826aa979b9b76a0b3630fc25d75feed0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=960&crop=smart&auto=webp&s=6af8ac08f58f683cdefd53c2840fde332f2359cb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?width=1080&crop=smart&auto=webp&s=fd2dfa2ba78e348816af0f337e1a1c3026b41b71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/anQJpRO01mgVG4No1LyfjUGCN05tLo9pVHF3EGqa2eg.png?auto=webp&s=1ba492ce77545d6f606daa8c1dcfaf201d698a1f', 'width': 1200}, 'variants': {}}]}
For those fine-tuning models: How do you track which training data went into each model version?
0
Quick question for the fine-tuning community: When you're iterating on model fine-tuning (trying different datasets, preprocessing approaches, hyperparameters), how do you keep track of exactly which data went into which model checkpoint? I'm finding that after 10-20 fine-tuning runs, I lose track of: - Which dataset version I used - What preprocessing/cleaning I applied - Which model performed best and on what data **Looking for people to interview (15 min) about:** - Your current workflow for tracking experiments + data - Pain points around reproducibility - Whether this is even a problem or if there's an obvious solution I'm missing This is for PhD research - trying to understand if data lineage tracking is a gap in current tools. Interested? Thanks!
2026-01-27T08:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1qo89sb/for_those_finetuning_models_how_do_you_track/
Achilles_411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo89sb
false
null
t3_1qo89sb
/r/LocalLLaMA/comments/1qo89sb/for_those_finetuning_models_how_do_you_track/
false
false
self
0
null
I built an app that can record smoking habits and automatically analyze them — all on your iPhone, 100% private, no APIs, no data collection!
0
Hey everyone!! I’m a solo developer, and I recently challenged myself to build a fully polished iOS utility app in just **one week**. The result is **Just One** (抽一根 in Chinese), a privacy-focused smoking cessation and tracking companion designed to be simple, beautiful, and honest. # Why I built this? I was frustrated with habit trackers that require sign-ups, store your health data on random servers, or hide basic statistics behind monthly subscriptions. And as a smoker, I try to quit smoking intermittently or regularly with colleagues on weekdays. I need both a recorder that can directly remind me with data and a modern aesthetic interface. So I develop this app, a tool that felt premium but respected my data completely. # ✨ Key Features (Built in 7 Days): * **100% Private**: Built with SwiftData and CloudKit. No external APIs, no tracking SDKs. Your data never leaves your Apple ecosystem. * **One-Tap Logging**: A satisfying "Liquid Glass" button with haptic feedback that mimics the physical sensation of lighting up or stubbing out. * **Smart Analysis**: Automatically grades your daily/weekly habits (Grades S to D) to give you honest feedback without shame. * **Health & Cost**: Real-time estimates of money spent and substances avoided. * **Dynamic Notification**: It knows if you cut back yesterday and sends a specific encouraging message (e.g., "Incredible! 0 cigarettes yesterday!") instead of a generic reminder. * **Timing reminder**: You can set reminders for specific dates, time intervals, and even custom reminder texts! Used to remind you how many hours you have quit smoking (or "it's time to smoke after such a long interval"), of course, you can also use it as a reminder beyond quitting smoking, or even a simple tomato clock. Clever use may have more interesting results! # 🛠️ The Tech Stack * **Language**: Swift 6 * **UI**: SwiftUI (Canvas animations for smoke effects) * **Storage**: SwiftData + CloudKit * **Haptics**: CoreHaptics customization # 🚀 Availability I am putting the final polish on the localized strings (English, Chinese, Japanese) and **will be releasing this to the global App Store within one week!** I’d love to answer any questions about the 1-week sprint challenge, the implementation details, or hear your feedback on the concept!
2026-01-27T08:10:18
https://www.reddit.com/gallery/1qo7xcu
No-Tangelo-5044
reddit.com
1970-01-01T00:00:00
0
{}
1qo7xcu
false
null
t3_1qo7xcu
/r/LocalLLaMA/comments/1qo7xcu/i_built_an_app_that_can_record_smoking_habits_and/
false
false
default
0
null
OpenAI could reportedly run out of cash by mid-2027 — analyst paints grim picture after examining the company's finances
195
A new financial analysis predicts OpenAI could burn through its cash reserves by mid-2027. The report warns that Sam Altman’s '$100 billion Stargate' strategy is hitting a wall: training costs are exploding, but revenue isn't keeping up. With Chinese competitors like DeepSeek now offering GPT-5 level performance for 95% less cost, OpenAI’s 'moat' is evaporating faster than expected. If AGI doesn't arrive to save the economics, the model is unsustainable.
2026-01-27T08:09:28
https://www.tomshardware.com/tech-industry/big-tech/openai-could-reportedly-run-out-of-cash-by-mid-2027-nyt-analyst-paints-grim-picture-after-examining-companys-finances
EchoOfOppenheimer
tomshardware.com
1970-01-01T00:00:00
0
{}
1qo7wur
false
null
t3_1qo7wur
/r/LocalLLaMA/comments/1qo7wur/openai_could_reportedly_run_out_of_cash_by/
false
false
https://external-preview…ecdf945bad0f1eb7
195
{'enabled': False, 'images': [{'id': 'v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=108&crop=smart&auto=webp&s=1fa9112794c95b823ad0a355b06cdc3155876048', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=216&crop=smart&auto=webp&s=b983e6032ddd4e204507cbb2233a96007bb6e7d9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=320&crop=smart&auto=webp&s=34ef6ccf470ac92ea8b133b36f47bcfff9faf620', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=640&crop=smart&auto=webp&s=147c3b3cde07c4eba8b8831f89c4a1dbc6ec6942', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=960&crop=smart&auto=webp&s=25969e35e29a5e06718938a62c436f5614401678', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?width=1080&crop=smart&auto=webp&s=fa9babb8484a54311283cc7822f3c529f15759b0', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/v44P77PnYI5tRIZpnmaNJbBahgYNI024hkyMrYI6J24.jpeg?auto=webp&s=b7ff6d280e519287e0d6a64d44102452e643b330', 'width': 2000}, 'variants': {}}]}
need help: llama.cpp - model: codellama going in loops feeding conversation to itself
2
I'm trying to use llama.cpp https://github.com/ggml-org/llama.cpp with codellama https://huggingface.co/TheBloke/CodeLlama-7B-GGUF (the model is downloaded from huggingface). but that it is running into a loop feeding input into itself it seemed: ``` llama-cli --device BLAS -m codellama-7b.Q4_K_M.gguf > hello hello<|im_end|> <|im_start|>user hello<|im_end|> <|im_start|>assistant hello<|im_end|> <|im_start|>user hello<|im_end|> <|im_start|>assistant hello<|im_end|> <|im_start|>user hello<|im_end|> on another attempt: > hello how are you? <|im_end|> <|im_start|>user good <|im_end|> <|im_start|>assistant sorry to hear that <|im_end|> <|im_start|>user is there anything i can do for you? <|im_end|> ``` note that "hello" is all I typed, but that it is generating the responses for "user" which I did not enter. I tried running with --no-jinja to avoid a chat template being linked, but it apparently behaves the same. I tried another model Llama-3.2-1B-Instruct-Q8_0-GGUF https://huggingface.co/hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF and this didn't seem to have the same problem. How do I resolve this? is the model file 'corrupt'? etc that codellama model seem pretty popular on huggingface though.
2026-01-27T08:02:55
https://www.reddit.com/r/LocalLLaMA/comments/1qo7szp/need_help_llamacpp_model_codellama_going_in_loops/
ag789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo7szp
false
null
t3_1qo7szp
/r/LocalLLaMA/comments/1qo7szp/need_help_llamacpp_model_codellama_going_in_loops/
false
false
self
2
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
need help: llama.cpp - model: codellama going in loops feeding conversation to itself
1
[removed]
2026-01-27T07:59:22
https://www.reddit.com/r/LocalLLaMA/comments/1qo7qp0/need_help_llamacpp_model_codellama_going_in_loops/
ag789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo7qp0
false
null
t3_1qo7qp0
/r/LocalLLaMA/comments/1qo7qp0/need_help_llamacpp_model_codellama_going_in_loops/
false
false
self
1
null
need help: llama.cpp - model: codellama going in loops feeding conversation to itself
1
[removed]
2026-01-27T07:51:39
https://www.reddit.com/r/LocalLLaMA/comments/1qo7m6i/need_help_llamacpp_model_codellama_going_in_loops/
ag789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo7m6i
false
null
t3_1qo7m6i
/r/LocalLLaMA/comments/1qo7m6i/need_help_llamacpp_model_codellama_going_in_loops/
false
false
self
1
null
Grounding in LLMs: LeCun’s Wild Goose Chase
0
We all know LLMs are “ungrounded,” right? They never touch reality outside of text, so they can’t *know*. The remedy seems obvious then; give them cameras and let them see the world. But is this sufficient? Is it even conceptually sound? Yann LeCun seems to think so, and his JEPA models are an attempt to solve this problem. Models that can see the world to build up internal “world models” that correspond to the external world accurately. Is this the essence of grounding? >“How do I know my information is accurate?” This question is the heart of the quest for “grounding.” How are the models certain in what they know, and to what degree should we trust them? But do multimodal models really get us closer to a solution? If we look closely, we can see the problem isn’t one of sensation, but one of sourcing. Grounding, put simply, is the **provenance of truth**. We say that knowledge is “grounded” if we can show how it was derived and vet the source. Knowledge can come firsthand, by our own thinking and sensing, or it can also be learned second hand from other sources. We can know about London without ever stepping foot in the United Kingdom, but if you can’t point to a reputable poll, nobody will trust your opinion on the number of people living there. While multimodal models have additional sources, there has been so far no evidence of these models outperforming pure LLMs on the kinds of higher-level abstraction and reasoning that we care about as humans. I suggest that the reason for this is simple: **grounding doesn’t come from pixels, it comes from justification**. To illustrate, the famous findings from the word2vec paper are a good place to start. In its high-dimensional semantic space, learned entirely from a broad pretraining corpus, a model shows that “king - man + woman = queen.” This truth was extracted from the text and defined relationally in the geometry of the neural network, without having ever seen a queen, woman, man or pixel. But is it grounded? Can it *prove* to us how it knows? No. But is it fully ungrounded? Why does it give us the right answer so often then? Because grounding is not a binary YES or NO. There is a gradient of grounding. Current LLMs source their truth through training on vast sums of human text. This produces a “fuzzy grounding” where much of the information retained is true, but there is no direct chain of provenance for these facts. The model doesn’t know WHY it knows, and we can’t derive this information ourselves. Over the past year, the field has made great strides with “reasoning” models. These models explicitly ‘think’ through the logic of their work before doing it. This has enabled previously impossible successes in tasks that require careful sequential logic, like coding and math. When a model solves a math problem by first showing its work, this is a form of grounding. But this can only be employed when the full logic of the problem can be expressly written out. The vast majority of information in a language model does not fall into this category. So what do we do? The solution to this problem, I argue, is epistemic rather than sensorimotor. If we want to trust models about London’s geography, it would be more useful for them to show us maps and reference encyclopedias, rather than have them perform a physical survey of the land before answering. The idea of an internal “world model” that the correspondence-grounders work from implies the notion of an internal, isomorphic universe. And inside this universe, a smaller globe; the earth in miniature, contained in which is all of our knowledge. I think this is an error, a **“microcosmic homunculus.”** Currently, language models are more or less blind as to the contents of their training data. They might read 100,000 times that London is in the UK, but they can’t tell us *why* they think that is the case now. This suggests a potential path forward for more rigorous grounding: let the models explicitly learn their own sources. The various problems and solutions encountered in accomplishing this task are beyond the scope of this essay, but I would be happy to discuss them in the comments. Cameras and sensors will surely make for robots that can pick up cups without breaking them, but will they make them understand fundamental physics better than a SOTA LLM? More importantly, will they be able to better justify this new knowledge to us? To solve the problem of grounding, perhaps what we need aren’t artificial observers, but artificial scholars. Far from an “offramp,” LLMs seem to be the closest starting point we have for a truly grounded artificial intelligence.
2026-01-27T07:40:20
https://www.reddit.com/r/LocalLLaMA/comments/1qo7fl2/grounding_in_llms_lecuns_wild_goose_chase/
Unstable_Llama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo7fl2
false
null
t3_1qo7fl2
/r/LocalLLaMA/comments/1qo7fl2/grounding_in_llms_lecuns_wild_goose_chase/
false
false
self
0
null
I built an app that can record smoking habits and automatically analyze them — all on your iPhone, 100% private, no APIs, no data collection!
0
Hey everyone!! I’m a solo developer, and I recently challenged myself to build a fully polished iOS utility app in just **one week**. The result is **Just One** (抽一根 in Chinese), a privacy-focused smoking cessation and tracking companion designed to be simple, beautiful, and honest. # Why I built this? I was frustrated with habit trackers that require sign-ups, store your health data on random servers, or hide basic statistics behind monthly subscriptions. And as a smoker, I try to quit smoking intermittently or regularly with colleagues on weekdays. I need both a recorder that can directly remind me with data and a modern aesthetic interface. So I develop this app, a tool that felt premium but respected my data completely. # ✨ Key Features (Built in 7 Days): * **100% Private**: Built with SwiftData and CloudKit. No external APIs, no tracking SDKs. Your data never leaves your Apple ecosystem. * **One-Tap Logging**: A satisfying "Liquid Glass" button with haptic feedback that mimics the physical sensation of lighting up or stubbing out. * **Smart Analysis**: Automatically grades your daily/weekly habits (Grades S to D) to give you honest feedback without shame. * **Health & Cost**: Real-time estimates of money spent and substances avoided. * **Dynamic Notification**: It knows if you cut back yesterday and sends a specific encouraging message (e.g., "Incredible! 0 cigarettes yesterday!") instead of a generic reminder. * **Timing reminder**: You can set reminders for specific dates, time intervals, and even custom reminder texts! Used to remind you how many hours you have quit smoking (or "it's time to smoke after such a long interval"), of course, you can also use it as a reminder beyond quitting smoking, or even a simple tomato clock. Clever use may have more interesting results! # 🛠️ The Tech Stack * **Language**: Swift 6 * **UI**: SwiftUI (Canvas animations for smoke effects) * **Storage**: SwiftData + CloudKit * **Haptics**: CoreHaptics customization # 🚀 Availability I am putting the final polish on the localized strings (English, Chinese, Japanese) and **will be releasing this to the global App Store within one week!** I’d love to answer any questions about the 1-week sprint challenge, the implementation details, or hear your feedback on the concept!
2026-01-27T07:36:49
https://i.redd.it/fmnjiqnlfufg1.jpeg
No-Tangelo-5044
i.redd.it
1970-01-01T00:00:00
0
{}
1qo7djm
false
null
t3_1qo7djm
/r/LocalLLaMA/comments/1qo7djm/i_built_an_app_that_can_record_smoking_habits_and/
false
false
default
0
{'enabled': True, 'images': [{'id': 'fmnjiqnlfufg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=108&crop=smart&auto=webp&s=b6f9b7ee52885e859a25d2fa890b3eb429a6da11', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=216&crop=smart&auto=webp&s=54d04b52e6327a45f145c6b0b8122e7b5dec1ee2', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=320&crop=smart&auto=webp&s=c21b6a747fbd848f4234c1222e45fda8c654296d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=640&crop=smart&auto=webp&s=66e990d4123a69e7fd7335a153fd0cbdd575508a', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=960&crop=smart&auto=webp&s=98db0c19a9bc094887719d8f2a14b19626f28f7a', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?width=1080&crop=smart&auto=webp&s=2e79a47fd316399738d9b8b9646316c15939211d', 'width': 1080}], 'source': {'height': 2252, 'url': 'https://preview.redd.it/fmnjiqnlfufg1.jpeg?auto=webp&s=d3d1e16603584fdc67f50304c6278335b19d605f', 'width': 4000}, 'variants': {}}]}
Mixture of Lookup Experts are God Tier for the average guy (RAM+Disc Hybrid Inference)
42
Recently Deepseek's Engram piqued interest into using disc offloading for inference. However, a DeepseekV3 model with half engram weights doesn't change the fact that you need to read 20B worth of expert weights from disc every token. Active parameters, and the resulting read bandwidth latency are exactly the same. There is another type of MoE which can essentially the reduce read bandwidth latency of the experts to 0. - https://arxiv.org/abs/2503.15798 Mixture of Lookup Experts are MoEs with precomputed experts as lookup-tables. For inference you create a **giant** dictionary of all your possible computation results beforehand for your experts. Normally, you need to read the experts sitting in ram for computing with cpu offload. Reading 10GB of 8 active experts with 50GB/s would 1/5th of a second, with further delays expected. However, with this method, you just want the output, which will be KB sized per expert. You can see the bottleneck of expert offloading is completely eliminated, but we still retain the performance value. Please let me know your thoughts. When I first read the paper, I was confused by the fact that they activated all experts. But it's not important, you can do training at top-k 8. There are some improvements in another paper, because this one doesn't train experts with positional information. It trains experts with raw token embeddings rather than intermediate states. I want to talk about it because re-parameterizing experts is the best optimization trick I've read to-date. I don't want the idea to die. It's perfect for us, given RAM is more expensive. Maybe Arcee or upcoming labs can give the idea a try.
2026-01-27T07:24:00
https://www.reddit.com/r/LocalLLaMA/comments/1qo75sj/mixture_of_lookup_experts_are_god_tier_for_the/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo75sj
false
null
t3_1qo75sj
/r/LocalLLaMA/comments/1qo75sj/mixture_of_lookup_experts_are_god_tier_for_the/
false
false
self
42
null
Deepseek V4, Minimax M2.2, GLM-5, Kimi K2.5, Qwen4, Claude 4.7, Gemini 3.0 GA
0
We will get it all this year, KIMI K2.5 we got, next will got Minimax. All in next month. Still wait
2026-01-27T07:16:55
https://www.reddit.com/r/LocalLLaMA/comments/1qo71e8/deepseek_v4_minimax_m22_glm5_kimi_k25_qwen4/
BasketFar667
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo71e8
false
null
t3_1qo71e8
/r/LocalLLaMA/comments/1qo71e8/deepseek_v4_minimax_m22_glm5_kimi_k25_qwen4/
false
false
self
0
null
I want to finetune an intelligent math model that can get gold medal(s) in IMO/AIMO/AIME. Should I do this with less param model such as 1.5B-4B, or 70B+ models?
0
I think intelligence and creativity is not directly proportional to having more knowledge. Is iterative finetuning the best way to approach this? Perhaps a Qwen3 4B text model? Or GPT-OSS-120B models? There is Llama but llama is so bad in math. What is the best Llama model to iterative finetune? I think we need just two critera, exceptional in math, and narrative writing such as roleplay, because roleplay models are trained to create vivid imaginations (or at least the should be..). Some other approaches would be tool calling and mastering the art of problem solving, damn (AoPS archives should already be trained on newer local models even less params) Thoughts?
2026-01-27T06:39:13
https://www.reddit.com/r/LocalLLaMA/comments/1qo6dmc/i_want_to_finetune_an_intelligent_math_model_that/
Hot_Inspection_9528
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo6dmc
false
null
t3_1qo6dmc
/r/LocalLLaMA/comments/1qo6dmc/i_want_to_finetune_an_intelligent_math_model_that/
false
false
self
0
null
[Q]: OpenAI Residency Application
0
Have people started hearing back on their OpenAI Residency Applications? \#openai #residency
2026-01-27T06:28:18
https://www.reddit.com/r/LocalLLaMA/comments/1qo66m8/q_openai_residency_application/
Leading_Wrangler_708
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo66m8
false
null
t3_1qo66m8
/r/LocalLLaMA/comments/1qo66m8/q_openai_residency_application/
false
false
self
0
null
Best AI for heavy IT docs + hundreds of screenshots (not content creation)?
0
I **love working** on IT/networking labs with 100+ screenshots per project and 10–15 pages of mixed documentation (images, numbers, text). I need an AI that can retain context, track changes, and produce clean, step-by-step configurations. ChatGPT loses state when conversations get long or slightly mixed and starts generating incorrect or inconsistent steps, even with careful prompting. Failure for me is when the AI can’t remember earlier decisions or applied config changes within the same project. Success is an AI that can maintain a running project state and generate deterministic, repeatable steps. What AI or workflow actually handles large volumes of screenshots and technical docs and produces reliable, procedural configs?
2026-01-27T06:07:00
https://www.reddit.com/r/LocalLLaMA/comments/1qo5scu/best_ai_for_heavy_it_docs_hundreds_of_screenshots/
Aj_Networks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo5scu
false
null
t3_1qo5scu
/r/LocalLLaMA/comments/1qo5scu/best_ai_for_heavy_it_docs_hundreds_of_screenshots/
false
false
self
0
null
Introducing Kimi K2.5, Open-Source Visual Agentic Intelligence
472
🔹**Global SOTA on Agentic Benchmarks**: HLE full set (50.2%), BrowseComp (74.9%) 🔹**Open-source SOTA on Vision and Coding**: MMMU Pro (78.5%), VideoMMMU (86.6%), SWE-bench Verified (76.8%) 🔹**Code with Taste**: turn chats, images & videos into aesthetic websites with expressive motion. 🔹**Agent Swarm (Beta)**: self-directed agents working in parallel, at scale. Up to **100** sub-agents, **1,500** tool calls, **4.5×** faster compared with single-agent setup. 🥝**K2.5** is now live on [http://kimi.com](https://t.co/YutVbwktG0) in **chat mod**e and **agent mode**. 🥝**K2.5 Agent Swarm** in beta for high-tier users. 🥝For production-grade coding, you can pair K2.5 with **Kim**i Code: [https://kimi.com/code](https://t.co/A5WQozJF3s) 🔗API: [https://platform.moonshot.ai](https://t.co/EOZkbOwCN4) 🔗Tech blog: [https://www.kimi.com/blog/kimi-k2-5.html](https://www.kimi.com/blog/kimi-k2-5.html) 🔗Weights & code: [https://huggingface.co/moonshotai/Kimi-K2.5](https://huggingface.co/moonshotai/Kimi-K2.5) https://preview.redd.it/b3lldwzvwtfg1.png?width=1920&format=png&auto=webp&s=ffa7bb89f8a91ef050af44cc3fa6090c9e1a7412
2026-01-27T05:39:09
https://www.reddit.com/r/LocalLLaMA/comments/1qo595n/introducing_kimi_k25_opensource_visual_agentic/
Kimi_Moonshot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo595n
false
null
t3_1qo595n
/r/LocalLLaMA/comments/1qo595n/introducing_kimi_k25_opensource_visual_agentic/
false
false
self
472
null
Best text-to-image models that support reference images and use openai api standards?
1
Hey all, What would you say are the best text-to-image LLM models that support reference images as part of the prompt and work using normal openai API standards? I'm looking for SFW images, family friendly, covering typical cartoon-type of image styles, that sort of thing. For hardware, I'm using RTX 5070 Tis 16GB and RTX 5090s 32GB so it needs to fit in there. I'm looking to do more normal openai API standards and just run the model via ollama / llama.cpp or such. As of now, nothing comfyui related. Could you list your recommendations for what models and frameworks to run them? Thanks in advance!~
2026-01-27T05:28:33
https://www.reddit.com/r/LocalLLaMA/comments/1qo51ts/best_texttoimage_models_that_support_reference/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo51ts
false
null
t3_1qo51ts
/r/LocalLLaMA/comments/1qo51ts/best_texttoimage_models_that_support_reference/
false
false
self
1
null
Kimi-K2.5 Is Up
228
2026-01-27T05:26:12
https://huggingface.co/moonshotai/Kimi-K2.5
Few_Painter_5588
huggingface.co
1970-01-01T00:00:00
0
{}
1qo5065
false
null
t3_1qo5065
/r/LocalLLaMA/comments/1qo5065/kimik25_is_up/
false
false
default
228
null
How to allocate more memory for Ryzen HX 370 iGPU in Linux
2
Hi, I have been able to run 12B Gemma3 model with Hx 370 and vLLM. But when I try larger it gives error and says iGPU has 32GB of vram. (In bios I have 2GB set for iGPU so that is not where it is set) So how could I set more from the system 64GB ram to the iGPU? I have LInux Ubuntu 24.04 And doing inference on vLLM. torch.OutOfMemoryError: HIP out of memory. Tried to allocate 442.00 MiB. GPU 0 has a total capacity of 32.00 GiB of which 90.15 MiB is free. Of the allocated memory 31.55 GiB is allocated by PyTorch, and 204.38 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) [rank0]:[W127 05:10:14.768659787 ProcessGroupNCCL.cpp:1522] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
2026-01-27T05:16:30
https://www.reddit.com/r/LocalLLaMA/comments/1qo4t84/how_to_allocate_more_memory_for_ryzen_hx_370_igpu/
Youlearnitman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo4t84
false
null
t3_1qo4t84
/r/LocalLLaMA/comments/1qo4t84/how_to_allocate_more_memory_for_ryzen_hx_370_igpu/
false
false
self
2
null
Is building on-device ML commercial projects still relevant in the near future knowing that GPU/RAM prices are rising and not everyone has/will have smart phone or computer capable of local inference? Not to mention that API providers are crazy cheap.
1
On-device options including but not limited to: * Mediapipe * ML Kit * Gemini Nano * LFM/SLM
2026-01-27T05:12:48
https://www.reddit.com/r/LocalLLaMA/comments/1qo4qi3/is_building_ondevice_ml_commercial_projects_still/
AbdallahHeidar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo4qi3
false
null
t3_1qo4qi3
/r/LocalLLaMA/comments/1qo4qi3/is_building_ondevice_ml_commercial_projects_still/
false
false
self
1
null
RTX Pro 6000 96GB, purchase options
9
I run some local models, primarily llama 3.3 70b and, secondarily, Mistral 2 Large 123b, both of which are a stretch for my current hardware. Currently, I have 48 GB VRAM split between two GPUs (R9700 Pro and RX 9060). I'm considering upgrading to an RTX Pro 6000 Blackwell workstation edition in order to improve speed and use higher quantization. I'm confused, however, by the market for this GPU. It's listed new by some big retailers for around $8500 and by some less-well-known retailers for as low as $7800. However, there are a number of these GPUs listed used on Ebay for between $3000 and $6000, mostly originating in China but some in the U.S. Are these all scams? I assume that they likely are, because I don't know how the price would be so low, even used, for a legit card given what it sells for new and the fact that it's at the top of the market. However, does anyone know for sure? Is there a real used market for this? If I could get it used for like $6500 or $7000, I'd do so if it were legitimate. But assuming that the used listings are primarily scams, what's the best way to get it new? Go with a big, well-known retailer and pay a premium of $8500, or a smaller retailer and pay $7800-8000? Does anyone have any insights or tips on acquiring this item?
2026-01-27T04:56:22
https://www.reddit.com/r/LocalLLaMA/comments/1qo4e55/rtx_pro_6000_96gb_purchase_options/
Throng-Muxter47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo4e55
false
null
t3_1qo4e55
/r/LocalLLaMA/comments/1qo4e55/rtx_pro_6000_96gb_purchase_options/
false
false
self
9
null
Any local LLMs without any guardrails out there?
0
I'm newer to the scene and wanted to know if there are any local LLMs out there that don't have any guardrails? Or is there some hacked version of local GPT I can find somewhere in the trenches of the internet? Or if someone has any recommendations to something that is currently already out there or how to make something. Thanks.
2026-01-27T04:39:39
https://www.reddit.com/r/LocalLLaMA/comments/1qo41jb/any_local_llms_without_any_guardrails_out_there/
xxxsdpsn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo41jb
false
null
t3_1qo41jb
/r/LocalLLaMA/comments/1qo41jb/any_local_llms_without_any_guardrails_out_there/
false
false
self
0
null
Jan v3 Instruct: a 4B coding Model with +40% Aider Improvement
251
Hi, this is Bach from the Jan team. We’re releasing Jan-v3-4B-base-instruct, a 4B-parameter model trained with **continual pre-training** and **RL**, to improve capabilities across common tasks while preserving other general capabilities. What it’s for * A good starting point for further fine-tuning * Improved math and coding performance for lightweight assistance **How to run it:** Jan Desktop Download Jan Desktop: [https://www.jan.ai/](https://www.jan.ai/) and then download Jan v3 via Jan Hub. Alternatively, you can try the model at [\*\*chat.jan.ai](https://chat.jan.ai/).\*\* Model links: * Jan-v3-4B: [https://huggingface.co/janhq/Jan-v3-4B-base-instruct](https://huggingface.co/Menlo/Jan-v3-4B-base-instruct) * Jan-v3-4B-GGUF: [https://huggingface.co/janhq/Jan-v3-4B-base-instruct-gguf](https://huggingface.co/Menlo/Jan-v3-4B-base-instruct-gguf) Recommended parameters: * temperature: 0.7 * top\_p: 0.8 * top\_k: 20 What’s coming next: * **Jan-Code** (finetuned of Jan-v3-4B-base-instruct) * **Jan-v3-Seach-4B** (renewal of Jan-nano on Jan-v3-4B-base-instruct) * **A 30B Jan-v3 family of models**
2026-01-27T04:26:25
https://i.redd.it/0qp4pz0fbtfg1.jpeg
Delicious_Focus3465
i.redd.it
1970-01-01T00:00:00
0
{}
1qo3ri5
false
null
t3_1qo3ri5
/r/LocalLLaMA/comments/1qo3ri5/jan_v3_instruct_a_4b_coding_model_with_40_aider/
false
false
default
251
{'enabled': True, 'images': [{'id': '0qp4pz0fbtfg1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=108&crop=smart&auto=webp&s=bf2fa547c021da9eb1bbc0510c1567ee5db430b6', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=216&crop=smart&auto=webp&s=c262790cf5c15b66ec9713eccd148fa25f36242b', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=320&crop=smart&auto=webp&s=c8fd2b2c9b3a6b6ac459f5cc27d679a1a24d4732', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=640&crop=smart&auto=webp&s=12b737e67556ca654785997ea815b78511476ed2', 'width': 640}, {'height': 690, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=960&crop=smart&auto=webp&s=104407a74b763ace7ea278cfdfef9fe72b1a6302', 'width': 960}, {'height': 777, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?width=1080&crop=smart&auto=webp&s=e06c347f6ec15205b8d473c2143a94acafd8a6da', 'width': 1080}], 'source': {'height': 1474, 'url': 'https://preview.redd.it/0qp4pz0fbtfg1.jpeg?auto=webp&s=24bdd7786ad00234e4ce04ac76b9fc33b79b294f', 'width': 2048}, 'variants': {}}]}
Best small model for ClawdBot?
0
I know there is hype around people buying mac minis for clawdbot instead of using a vps which seems off to me, but in my case coincidentally, I happen to have a mac mini m4 with 24 gigs of ram which is just sitting there, what would be a best model that I can use clawdbot with? I don't think I would use it that much/heavy for coding related tasks as I have other things for that, but still agentic tool use must be decent, and the gpu cores could be put to use.
2026-01-27T04:23:14
https://www.reddit.com/r/LocalLLaMA/comments/1qo3ozg/best_small_model_for_clawdbot/
PhilosopherFun4727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo3ozg
false
null
t3_1qo3ozg
/r/LocalLLaMA/comments/1qo3ozg/best_small_model_for_clawdbot/
false
false
self
0
null
deepseek-ai/DeepSeek-OCR-2 · Hugging Face
327
2026-01-27T03:56:49
https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1qo349m
false
null
t3_1qo349m
/r/LocalLLaMA/comments/1qo349m/deepseekaideepseekocr2_hugging_face/
false
false
default
327
{'enabled': False, 'images': [{'id': 'c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=108&crop=smart&auto=webp&s=7073858f311594ae3eee3bd9a32b0ca841578132', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=216&crop=smart&auto=webp&s=bade97dc53ced95c49485861c334f8eea3f48bcd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=320&crop=smart&auto=webp&s=21a8efa9941c0e487565da845f6a9e75bf8fe03e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=640&crop=smart&auto=webp&s=9a35ff741fcd21d9b3346fefa618503befa19d18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=960&crop=smart&auto=webp&s=5fabc0bf83f5c1cb62cf6f3c483559d74ad8c3a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?width=1080&crop=smart&auto=webp&s=e6521d4d05f07da5c45626b3740e8e4dbad72861', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c9LaruBvjfhr_AFkVVpu9jJ8NabAKdroEOMl2Akgn-0.png?auto=webp&s=1f9751835df25e95ef9e0c6e53d44a54a27b2db9', 'width': 1200}, 'variants': {}}]}
Thoughts on PowerInfer as a way to break the memory bottleneck?
6
I saw an ad for TiinyAI claiming their pocket computer runs 120B models on 30w using the PowerInfer project (\[https://github.com/SJTU-IPADS/PowerInfer\](https://github.com/SJTU-IPADS/PowerInfer)). The tech is very smart: it processes "hot neurons" (frequently activated) on the NPU and "cold neurons" (rarely activated) on the CPU in parallel to maximize efficiency. This seems like a great way to run massive models on limited hardware without needing a huge GPU. For devices with limited RAM, could this technology be the key to finally breaking the memory bottleneck? I am curious if we will see this heterogeneous architecture become popular for local AI devices.
2026-01-27T03:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1qo2s53/thoughts_on_powerinfer_as_a_way_to_break_the/
Potential-Plankton57
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo2s53
false
null
t3_1qo2s53
/r/LocalLLaMA/comments/1qo2s53/thoughts_on_powerinfer_as_a_way_to_break_the/
false
false
self
6
{'enabled': False, 'images': [{'id': '3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=108&crop=smart&auto=webp&s=541cea8e3e7b78ec9dfda143073f748c77e92b01', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=216&crop=smart&auto=webp&s=60d3b7b62a116e2457522323cb0ed170bd3e6ecd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=320&crop=smart&auto=webp&s=057afa973bdbd8da3795fb7db3f125f36f69ce1b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=640&crop=smart&auto=webp&s=65129ba1889201bb175c53a1022ffa2dca553207', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=960&crop=smart&auto=webp&s=d28aa9bb11f5f93972c0b7d4040e2bfb962783b3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?width=1080&crop=smart&auto=webp&s=891ee99adb4e3eb790881c1d566af9797316f00f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3g54YAQW74ZIkg_IDXN9OTSsNx9q9Y79Ziv6bGDuYXs.png?auto=webp&s=2580131c2217896fb8c1b65e9a53f2bcc79b6d7d', 'width': 1200}, 'variants': {}}]}
Getting into Local LLMs, mostly for Home Assistant to kick Alexa to the curb. Looking for ideas and recommendations
12
I just built a proxmox server for multiple LXCs. I had a 3060 TI 12gb lying around so I put it in the machine and figured I'd try and run a local LLM My main desire is to kick all of the Alexas out of my house and run all of my Home Assistant stuff with local voice control, and be able to do simple stuff like ask the weather, and set timers and alarms. Being able to create automation by voice would be amazing. I already bought the speaker/voice hardware, it's on the way (Satellite1 from futureproofhomes) Anything past that would just be a nice bonus. I'm definitely not looking for coding skill or anything. What would be a good start?
2026-01-27T03:20:41
https://www.reddit.com/r/LocalLLaMA/comments/1qo2boa/getting_into_local_llms_mostly_for_home_assistant/
OpneFall
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo2boa
false
null
t3_1qo2boa
/r/LocalLLaMA/comments/1qo2boa/getting_into_local_llms_mostly_for_home_assistant/
false
false
self
12
null
Thoughts on Kryven?
0
Compared to 140B+ models I've hosted locally in the past, [Kryven](http://kryven.cc) has genuinely blown me away with whatever uncensored AI model they have. Its basically Claude code but severely uncensored, has anyone checked it out and do they have a API?
2026-01-27T02:28:25
https://www.reddit.com/r/LocalLLaMA/comments/1qo14c7/thoughts_on_kryven/
Delicious-Wear-4928
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo14c7
false
null
t3_1qo14c7
/r/LocalLLaMA/comments/1qo14c7/thoughts_on_kryven/
false
false
self
0
null
Free Uncensored AI?
1
[removed]
2026-01-27T02:21:53
https://www.reddit.com/r/LocalLLaMA/comments/1qo0yuw/free_uncensored_ai/
Delicious-Wear-4928
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0yuw
false
null
t3_1qo0yuw
/r/LocalLLaMA/comments/1qo0yuw/free_uncensored_ai/
false
false
self
1
null
AST‑Powered Codebase Intelligence: Meet Drift, the Context Engine Behind Truly Useful AI Agents.
1
By now we’ve all done it, jumped into an IDE and felt the dopamine of ripping through 100,000 lines of code in like 3 hours. You just popped your 2nd red bull at 1:30 in the morning and it's been years since you had this feeling. Then it comes time to turn it on and you're hit with the biggest wave of depression you’ve felt since that crush in high school said they were not interested. After 6 months of teaching myself how to orchestrate agents to engineer me different codebases and projects ive come to this conclusion: AI can write very good code and it's not an intelligence problem, it's a context limitation. So what are we going to do about it? My solution is called “Statistical Semantics” Drift learns your codebase conventions via AST Parsing (With a regex Fallback) detecting 170 patterns across 15 categories. From here it extracts and indexes meta data from your codebase and stores it locally through jsons that can be recalled through any terminal through the CLI or exposed to your agent through a custom-built MCP server. Think of drift as a translator between your codebase and your AI. Right now when claude or cursor audits your codebase its through grep or bash. This is like finding a needle in a haystack when looking for a custom hook, that hack around you used to get your websocket running or that error handling it can never seem to remember and then synthesizes the results back to you. With drift it indexes that and is able to recall the meta data automatically after YOU approve it. Once you do your first scan you go through and have your agent or yourself approve the meta data found and either approve / ignore / deny so only the true patterns you want stay. The results? Code that fits your codebase on the first try. Almost like a senior engineer in your back pocket, one that truly understands the conventions of your codebase so it doesn’t require audit after audit or refactor after refactor fixing drift found throughout the codebase that would fail in production. Quick start guides MCP Server set up here: https://github.com/dadbodgeoff/drift/wiki/MCP-Setup CLI full start guide: https://github.com/dadbodgeoff/drift/wiki/CLI-Reference CI Integration + Quality Gate: https://github.com/dadbodgeoff/drift/wiki/CI-Integration Call graph analysis guide: https://github.com/dadbodgeoff/drift/wiki/Call-Graph-Analysis Fully open sourced and would love your feedback! The stars and issue reports with feature requests have been absolutely fueling me! I think I've slept on average 3 hours a night last week while I've been working on this project for the community and it feels truly amazing. Thank you for all the upvotes and stars it means the world <3
2026-01-27T02:16:27
https://www.reddit.com/r/LocalLLaMA/comments/1qo0u1j/astpowered_codebase_intelligence_meet_drift_the/
Fluffy_Citron3547
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0u1j
false
null
t3_1qo0u1j
/r/LocalLLaMA/comments/1qo0u1j/astpowered_codebase_intelligence_meet_drift_the/
false
false
self
1
{'enabled': False, 'images': [{'id': '_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=108&crop=smart&auto=webp&s=8df30aaa3c07dfe3cc2dbf296a6e94ed9e7fffa3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=216&crop=smart&auto=webp&s=da0d190b82a28d776722b27d327488bfa184a9f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=320&crop=smart&auto=webp&s=779218504b90d6f1198e802b5df7fcbcf7af4d10', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=640&crop=smart&auto=webp&s=11640b8a68a1b48db91bb340dd4ac851ffc9a7ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=960&crop=smart&auto=webp&s=f63a8ae671f60f6d1f608b0d56d8be1c3abda113', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?width=1080&crop=smart&auto=webp&s=ac91663c180cee0d8af99aacfb9ffb48cb2ae786', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_0Ef3vgnMzfALrBV-VQsg1OzdJJT1zee4dhs9JeBA5M.png?auto=webp&s=695e84a77e3b0f9e581292763d21c3abaf20a9ed', 'width': 1200}, 'variants': {}}]}
4x RTX 6000 PRO Workstation in custom frame
57
I put this together over the winter break. More photos at https://blraaz.net (no ads, no trackers, no bullshit, just a vibe-coded photo blog).
2026-01-27T02:15:56
https://www.reddit.com/gallery/1qo0tme
Vicar_of_Wibbly
reddit.com
1970-01-01T00:00:00
0
{}
1qo0tme
false
null
t3_1qo0tme
/r/LocalLLaMA/comments/1qo0tme/4x_rtx_6000_pro_workstation_in_custom_frame/
false
false
https://b.thumbs.redditm…C7h_dJSP1swU.jpg
57
null
Has anyone found a good medical model?
2
Hi. My use case is that when a user enters some search text in an input box, the dropdown should suggest relevant specialty. Will be using keyword-based search but wanted to know what's the best medical model. Has anyone found it or are you just RAGging it? Thanks in advance.
2026-01-27T02:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1qo0rag/has_anyone_found_a_good_medical_model/
chatsgpt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0rag
false
null
t3_1qo0rag
/r/LocalLLaMA/comments/1qo0rag/has_anyone_found_a_good_medical_model/
false
false
self
2
null
New to using local LLMs, and trying to figure out which LLMs are malfunctioning because I need to use a template or adjust the parameters, and which ones are just the models themselves having issues
1
So, to give a bit of background, I am not very good with computers, although, I am starting to learn a bit, because of how many things I can ask Gemini or ChatGPT or Grok about computers and learn a lot in a very short amount of time, which has been really fun. Up until recently, I never really used the command line or Terminal or whatever you call it, for anything, and didn't know basically anything about how computers work or how to do any advanced things of any kind ("advanced" relative to what your grandmother would consider advanced, I mean). I just used windows or mac and clicked things with my mouse to browse the internet, read emails, watch youtube, you know, just ordinary stuff. I'm not a tech guy or good at computers at all, and never really knew how to use them in anything other than the most extremely basic way, like that. Although I never learned how to use computers properly, I did recently have a thing I was working on that required me to do a lot of repetitive, basic tasks that were very repetitive and time consuming. So, I decided to see if I could learn how to make a python program that would cut down how long it took me to do what I was working on, even though I was terrible with computers and had never done any coding or known how to code at all up to that point. I figured it would be very difficult and take me a long time to learn enough to figure out how to do it, but, it ended up only taking a couple days, which surprised me, and boosted my confidence and my interest in programming a bit, and the program I made actually worked exactly how I wanted it to, and saved me a bunch of time with the repetitive task thing I was working on, so it wasn't just purely for fun, like, it actually ended up helping me in a real, tangible way, and saved me a bunch of time, which was nice. Shortly after that, I got more interested in AI, and LLMs, and also noticed how people talk a lot about how good some of these LLMs are at coding. I figured if I learn how to code (which I haven't done yet, other than the very basic program I learned how to make in a couple days), then maybe it will allow me to try doing all sorts of interesting things with local LLMs that are good at coding, as they keep getting stronger. So, my plan is to get good enough at coding that I can actually make use of the coding LLMs, to use them when I work on some projects that I eventually want to attempt once I am competent enough to give them a real try. But, first, I wanted to just mess around more casually, before I start really intensely studying coding, and just try out a bunch of models not for coding, but just for casual conversations, creative writing, and maybe see if I can have one act as a Dungeon Master for playing solo Dungeons & Dragons with it, or something along those lines. I figure it is both pretty fun, but also, a good way for me to get a sense of just how "smart" or strong some of these local LLMs are, in a context that I actually understand (creative writing or role playing scenarios) intuitively, since I don't know enough about coding to evaluate the LLMs in that area yet, but I do know enough about more casual uses like this that I can get a sense for the strengths of different models in that way. Plus I enjoy reading people's opinions on the SillyTavernAI forum about the models, and I figure since I recently got a Mac Studio with 128 GB of ram, and can run all the various merges and fine tunes up to the biggest 123b models at decent quants, I could have some fun comparing and reviewing the different models when it comes to their writing ability and how smart they are in casual interactions, and things like that, which will be fun, and also maybe people on there (or maybe on here or some similar subs, or wherever would be a good place for me to post my thoughts on the different models vs each other) might enjoy my reviews or comparisons of the models I try out. Anyway, so, since I am so new to this, and so bad with computers, at first I just used Ollama and would just use the Ollama pull command to pull models to try out, and just used them like that, and never did anything with templates or parameter settings or anything like that, just tried them however they come when you pull them with ollama and then use ollama run command to run them, and that's it. But, after a week or two of that, I wanted to start trying out a bunch of specific merges and fine tunes that I saw listed on that UGI Leaderboard list from the huggingface spaces that had the highest NatInt or Writing rankings, and see how good they were at writing, and also try out some of the models I saw people talking about here on reddit, so, I looked up how to download the models as .gguf files from huggingface and use the echo FROM ./modelname.gguf thing to create a plain text model file and use the ollama create command to then turn it into an ollama modelfile to run, rather than just be limited to the models I could pull with Ollama. But, this is where I started running into problems a bit, since I'd never used a "template" before, or "stop parameters" or anything like that, and didn't even know what those were or why they were important, or how to alter them or anything like that, so I was just creating models without any template or stop parameters, just using the echo FROM ./ one liner followed by the ollama create command and just trying them out, without doing anything else. So, as you can imagine, quite a few of the models I tried out had issues with prompt bleed, or endless responses that you had to manually stop with your keyboard or else it wouldn't stop typing, or endless loops of saying the same sentence or paragraph over and over again, or things like that. So far, the biggest problems I had were with OSS 20b, GLM 4.5 Air, and one of the Qwen 3 VL models. Interestingly, there were quite a few models that managed to function pretty well despite my not using any template or stop parameters at all, like Gemma3 27b (mlabonne version), and the Behemoth X 123b V2, and I think one or two of the Mistral models. But, I've been struggling with trying to get OSS 20b to work, and it seems like no matter what Template I use or Stop Parameters it won't function properly and just gives totally incoherent or infinite looping replies when I try to use it. But, since I am so new to all of this and have no clue what I'm doing, I don't really know yet when it is me who is doing something wrong, causing the model to malfunction, vs when it is just the model itself having issues. When I asked Gemini and the other AI models about it, they gave the impression that GLM 4.5 Air and especially OSS 20b were notorious for having these types of issues, so, now I'm not sure if maybe they are just extremely difficult to get to work properly. So far OSS 20b never worked for me, GLM 4.5 Air worked okay, but the ArliAI version of GLM 4.5 Air didn't really work yet. I also am not sure if there are any things beyond just knowing about templates and which exact template to use and what stop parameters and parameter settings to use that I need to know about when trying to get models to work, or if that is pretty much it. And, if you guys could maybe list off which models are most notorious for having issues of prompt bleed, chaotic/misfiring answers, infinite responses/loops, etc of not working properly, so that I can have some idea of which ones already are that way naturally, so that I can know which ones are me not setting them up properly vs which ones are just have a lot of issues with that stuff on their own, that would be really helpful.
2026-01-27T02:08:54
https://www.reddit.com/r/LocalLLaMA/comments/1qo0npw/new_to_using_local_llms_and_trying_to_figure_out/
AlpacaBagOLlamas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0npw
false
null
t3_1qo0npw
/r/LocalLLaMA/comments/1qo0npw/new_to_using_local_llms_and_trying_to_figure_out/
false
false
self
1
null
When/will there be a trillion parameter model?
0
Do you think scale is correlated with some intrinsic ability for these models to better generalize? If so, when do you think humanity will push the bounds to the max to have a trillion parameter model? Do you think someone will ever be dumb enough to try to get a trillion param model to converge, possibly in the process uncovering more truths/questions about generalization and how it scales in models?
2026-01-27T02:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1qo0jp8/whenwill_there_be_a_trillion_parameter_model/
an0101010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0jp8
false
null
t3_1qo0jp8
/r/LocalLLaMA/comments/1qo0jp8/whenwill_there_be_a_trillion_parameter_model/
false
false
self
0
null
New Benchmark Claims to Measure How Much of Human Work Models Can Automate
1
Any thoughts on this benchmark [https://quantumzeitgeist.com/24-0-percent-gemini-flash-achieves-apex-agents/](https://quantumzeitgeist.com/24-0-percent-gemini-flash-achieves-apex-agents/) Specifically, the quote " 227 professionals, 58 financial analysts, 77 management consultants, and 92 lawyers with an average of 10.8 years’ experience, to inform the creation of these worlds and tasks" which seems to be the most impressive I've heard of for knowledge work. I've seen this company has been accused of displacing jobs and im wondering how good of a measure this is for that displacement.
2026-01-27T02:03:25
https://www.reddit.com/r/LocalLLaMA/comments/1qo0j5u/new_benchmark_claims_to_measure_how_much_of_human/
Tiny_Literature691
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo0j5u
false
null
t3_1qo0j5u
/r/LocalLLaMA/comments/1qo0j5u/new_benchmark_claims_to_measure_how_much_of_human/
false
false
self
1
null
Closest TTS Model to Maya by Sesame AI
0
Hey y'all- As far as I'm concerned, Maya by Sesame AI is by far the most human sounding voice of all Speech models released as of January 2026 even though it was originally released nearly a year ago. That being said, I was just wondering what the absolute closest open source model is to Maya/CSM-8b model that powers it. I've heard Qwen3-TTS, Kyutai Moshi, and Orpheus 3B are all pretty good, but which amongst these would be the best/closest to being as human sounding as Maya. I'm also open to any models not mentioned- **I just want to know the SOTA open source model that is closest to being as human-sounding as Maya by Sesame AI**
2026-01-27T01:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1qo00pe/closest_tts_model_to_maya_by_sesame_ai/
foodworshipper798
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qo00pe
false
null
t3_1qo00pe
/r/LocalLLaMA/comments/1qo00pe/closest_tts_model_to_maya_by_sesame_ai/
false
false
self
0
null
Last Week in Multimodal AI - Local Edition
9
I curate a weekly multimodal AI roundup, here are the local/open-source highlights from last week: **Qwen3-TTS - Open-Source Real-Time TTS** * Voice cloning, voice design, and natural speech in 10 languages with real-time latency. * Dual-track architecture with custom audio tokenizers keeps quality high at production speeds. * [Model](https://huggingface.co/Qwen/Qwen2.5-TTS) https://preview.redd.it/cybe65e4ksfg1.png?width=1456&format=png&auto=webp&s=67c56adb010e9643ba956973fd2044510e0e1e59 **Linum V2 - 2B Parameter Text-to-Video** * Generates 720p video from text prompts, trained from scratch by a small team. * Proves you don't need massive compute clusters for quality video generation. * [Launch Post](http://linum.ai/field-notes/launch-linum-v2) | [Hugging Face](https://huggingface.co/Linum-AI/linum-v2-720p) https://reddit.com/link/1qnzpyp/video/z1naw4l7ksfg1/player **EvoCUA - #1 Open-Source Computer Use Agent** * Achieves 56.7% on OSWorld benchmark through self-generated synthetic training tasks. * Learns to control operating systems by trial-and-error in sandbox environments. * [Paper](https://huggingface.co/papers/2601.15876) | [GitHub](https://github.com/meituan/EvoCUA) https://preview.redd.it/y68pvzo8ksfg1.png?width=906&format=png&auto=webp&s=39260ff73413d849fc5dc089cb628b47c14e8c9d **LuxTTS - 150x Faster Than Real-Time TTS** * Lightweight text-to-speech designed for speed on local hardware. * [GitHub](https://github.com/ysharma3501/LuxTTS) https://reddit.com/link/1qnzpyp/video/ss11mpm9ksfg1/player **LightOnOCR - Document to Clean Text** * Vision-language model for converting complex documents into ordered text. * [Hugging Face](https://huggingface.co/lightonai/LightOnOCR-2-1B) https://preview.redd.it/2wlx18pfksfg1.png?width=1456&format=png&auto=webp&s=4808e403f5153b7f41a31e88731f395762324104 **OpenVision 3 - Unified Visual Encoder** * Single encoder for both understanding and generation tasks, outperforms CLIP-based encoders. * [Paper](https://arxiv.org/abs/2601.15369) | [GitHub](https://github.com/UCSC-VLAA/OpenVision) https://preview.redd.it/39tzz2liksfg1.png?width=1440&format=png&auto=webp&s=f542049b3b428c56b0fcf1bcf4fdfd9b50924a47 **RF-DETR - Real-Time Segmentation (Apache 2.0)** * State-of-the-art real-time segmentation from Roboflow. * [Blog](https://blog.roboflow.com/rf-detr-segmentation/) https://reddit.com/link/1qnzpyp/video/qbyoxzsqnsfg1/player Honorable Mention: **Remotion Skills - (see last bullet for note)** * MCP skills for the Remotion video framework. * [GitHub](https://github.com/remotion-dev/skills) * Supposed to be for Claude Code but you can use these with open source agent, skills are basically just tooling definitions and guidance to improve complex task performance with a given tool(my quick summary, highly recommend looking into it further if interested(feel free to dm or comment if you dont know where to start)). https://reddit.com/link/1qnzpyp/video/k0md390gosfg1/player Checkout the [full roundup](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-42-the?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources.
2026-01-27T01:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1qnzpyp/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnzpyp
false
null
t3_1qnzpyp
/r/LocalLLaMA/comments/1qnzpyp/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…YvQKrgl6vAgg.jpg
9
null
Drop in tps after adding a 3rd older gen GPU?
0
For some reason my tps on gpt-oss-120b is dropping from 17 tps to 3-4 tps after connecting a third GPU Going from 5060ti 16gb on PCIe x16 5060ti 16gb on PCIe x4 4x 32gb ddr4 UDIMM 2400, dual channel Running gpt-oss-120b at 17 tps on llama-server default settings (llama-b7731-bin-win-cuda-13.1-64x) Then when I add 2060super 8gb on PCIe x1 Generation tanks to 3-4 tps I thought that having more of the model running on more VRAM (32GB to 40GB VRAM) would result in faster generation speed due to less offloading onto system RAM?
2026-01-27T01:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1qnz7fe/drop_in_tps_after_adding_a_3rd_older_gen_gpu/
Diligent-Culture-432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnz7fe
false
null
t3_1qnz7fe
/r/LocalLLaMA/comments/1qnz7fe/drop_in_tps_after_adding_a_3rd_older_gen_gpu/
false
false
self
0
null
GLM 4.7 Extreme level of pedantic nitpicking - almost unusable for discretized/small level QA text analysis
0
I've been trying to use GLM 4.7 Q2 K ever since it came out, so about 1 month. It's decent. Wide breadth of narrative output, some good glimmers of inspiration in there where it is able to take a prompt and indirectly head in good directions. Howevver, part of my usage, is of course using 4.7 to QA its own outputs. Think of running a separate LLM query of "Here is <that previous previous output it just generated>, confirm that the X occurred in the text" (I am QUITE a bit more specific than that, but you get the idea). I am aware of the complexities of language. Technically, even for a 70b Q8, even the simple idea of "did the character leave the room? Y/N?", QA'd CORRECTLY, comprehensively, DOES require you to specifically ask that SIMPLE question several different ways: \- Did a person agree to leave the room? (Y/N) \- Is a person about to leave the room? (Y/N) \- Did anyone leave the room? (Y/N) \- (if in a building) Did anyone leave the building? (Y/N) \- Did (Character 1) or (Character 2) leave the room? (Y/N) \- Did they explicitly walk anywhere else, other than in the <where they currently are>? (Y/N) As a QA approach, am I overkilling it? maybe. But these types of checks are REQUIRED if you're trying to accurately identify objective facts from a block of text and ensure a specific outcome out of this whole RNG world we live in. That said: GLM 4.7 is VERY pedantic and nitpicking for small zero-shot prompts (it differentiates between "the character did X" and "the character said they would do X"), when in the end I am thinking the text & the question are pretty damn clear, but it's still giving incorrect Y/N answers (I have pre-applied re-try loops, answer rejections, many other post processing guards as well). I guess could wordsmith EVERY QA check to the level of "did a person leave the room"?, but that is just ridiculous and some LLMs I feel are already beyond this level of hand-holding. These are simple QA questions about SMALL pieces of text. I've been trying to tweak the way in which this works for the past 1 month, and I'm only making limited progress. I have been using "step by step" types of phrasing in some of the generations. I could use "step by step" a little bit more in the QA prompts, which I haven't fully done yet. I know there is a "give a direct answer" type of prompt (which disables thinking), which I also need to try. I originally came previously from Llama 3.3 70b Q8, and I feel pretty confident to say that Llama 3.3 had a WAY better comprehension of implied state of arbitrary pieces of text, with tailored, hand-written simple QA checks. Could this possibly a GLM training issue? Would it be expected that a 70b Q8 be kicking GLM 4.7 Q2's ass on such a simple task. text-gen\_webui is what I'm using Model: unsloth GGUF 4.7 Q2 K (a low quant, I know. In a few days I should be able to run Q6 I think) Sampler settings (I think I'm using recommended) Temp: 1.0 Top P: 0.95 (just yesterday I re-introduced mirostat sampling to see if it could help). Are higher Quantizations of GLM a little better with this? At this point, I'll almost possibly give up on 4.7 for QA checks and switch model to 3.3 for all QA checks, in order to have an actually competent LLM doing this micro-level QA checking.
2026-01-27T00:36:14
https://www.reddit.com/r/LocalLLaMA/comments/1qnyhpl/glm_47_extreme_level_of_pedantic_nitpicking/
Vusiwe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnyhpl
false
null
t3_1qnyhpl
/r/LocalLLaMA/comments/1qnyhpl/glm_47_extreme_level_of_pedantic_nitpicking/
false
false
self
0
null
2x3090 vs 5090
0
Hey guys! I read multiple threads about those 2 options but I still don't know what would be better for 70B model in terms of model quality. If money wouldn't be a problem, which config would you take? Do you still think 2 x 3090 is better option atm?
2026-01-26T23:30:21
https://www.reddit.com/r/LocalLLaMA/comments/1qnwuyc/2x3090_vs_5090/
CMHQ_Widget
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnwuyc
false
null
t3_1qnwuyc
/r/LocalLLaMA/comments/1qnwuyc/2x3090_vs_5090/
false
false
self
0
null
Kimi K2.5 seems to have soft released on the web app. Release soon?
43
2026-01-26T23:18:09
https://i.redd.it/qsd3byzy0sfg1.png
Dudensen
i.redd.it
1970-01-01T00:00:00
0
{}
1qnwjrp
false
null
t3_1qnwjrp
/r/LocalLLaMA/comments/1qnwjrp/kimi_k25_seems_to_have_soft_released_on_the_web/
false
false
default
43
{'enabled': True, 'images': [{'id': 'qsd3byzy0sfg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/qsd3byzy0sfg1.png?width=108&crop=smart&auto=webp&s=5cebd1812e55fef21725d01274680d1bf5cba613', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/qsd3byzy0sfg1.png?width=216&crop=smart&auto=webp&s=008a88ab29d30d3d34916ce3333cf8e5e434345d', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/qsd3byzy0sfg1.png?width=320&crop=smart&auto=webp&s=61f88db45cad4c0326daad045b6512f0accf14e4', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/qsd3byzy0sfg1.png?width=640&crop=smart&auto=webp&s=e14275f305a38ad9e6e0fa233fb7b20affee6509', 'width': 640}], 'source': {'height': 386, 'url': 'https://preview.redd.it/qsd3byzy0sfg1.png?auto=webp&s=f7c3b55268be27de1e60deaf50c71f1bd8831832', 'width': 909}, 'variants': {}}]}
GLM 4.7 Flash: Huge performance improvement with -kvu
190
TLDR; Try passing -kvu to llama.cpp when running GLM 4.7 Flash. On RTX 6000, my tokens per second on a 8K token output rose from 17.7t/s to 100t/s Also, check out the one shot zelda game it made, pretty good for a 30B: [https://talented-fox-j27z.pagedrop.io](https://talented-fox-j27z.pagedrop.io)
2026-01-26T23:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1qnwa33/glm_47_flash_huge_performance_improvement_with_kvu/
TokenRingAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnwa33
false
null
t3_1qnwa33
/r/LocalLLaMA/comments/1qnwa33/glm_47_flash_huge_performance_improvement_with_kvu/
false
false
self
190
null
Kimi K2.5 Released !
165
Since the previous version was open-sourced, I’m sharing the new model. I’m not sure if this one will be open-source yet, and the official website hasn’t mentioned **Kimi K2.5** at all, so I think they’re still in the testing phase. **They currently only released on their website** https://preview.redd.it/7f613rz2yrfg1.png?width=1517&format=png&auto=webp&s=b10c7206deeb73082b1d0988cddb3601a6ccbcca [https://x.com/AiBattle\_/status/2015902394312253564?s=20](https://x.com/AiBattle_/status/2015902394312253564?s=20) [https://www.kimi.com/](https://www.kimi.com/)
2026-01-26T23:01:14
https://www.reddit.com/r/LocalLLaMA/comments/1qnw3z6/kimi_k25_released/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnw3z6
false
null
t3_1qnw3z6
/r/LocalLLaMA/comments/1qnw3z6/kimi_k25_released/
false
false
https://b.thumbs.redditm…cJmAeyPyUh7M.jpg
165
null
How many web‑search sources can GTP-OSS 120b and Llama4-Scout models reliably pull data from?
1
The UI sometimes shows a list of links it’s pulling from, but I’m not sure how many of those sources are actually being used reliably to generate the answer. * Does the model have a hard limit on the number of sources it can process per query?  * In practice, what’s the typical “sweet spot” for the number of sources that yield accurate, well‑cited results?  * Have you noticed a point where adding more links just adds noise rather than improving the answer?
2026-01-26T22:45:04
https://www.reddit.com/r/LocalLLaMA/comments/1qnvojb/how_many_websearch_sources_can_gtposs_120b_and/
CryptoxPathy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnvojb
false
null
t3_1qnvojb
/r/LocalLLaMA/comments/1qnvojb/how_many_websearch_sources_can_gtposs_120b_and/
false
false
self
1
null
NVIDIA PersonaPlex: The "Full-Duplex" Revolution
39
I tested the **NVIDIA’s PersonaPlex** (based on Moshi), and ihere is the TL;DR: * **Full-Duplex:** It streams "forever" (12x per second). It doesn't wait for silence; it can interrupt you or laugh while you speak. * **Rhythm > Quality:** It uses lo-fi **24kHz audio** to hit a **240ms reaction time**. It sounds slightly synthetic but moves exactly like a human. * **The Secret Trigger:** Use the phrase **"You enjoy having a good conversation"** in the prompt. It switches the model from "boring assistant" to "social mode." * **The Catch:** It needs massive GPU power (A100s), and the memory fades after about 3-4 minutes. **The Reality Check (Trade-offs)** While the roadmap shows tool-calling is coming next, there are still significant hurdles: * **Context Limits**: The model has a fixed context window (defined as `context: 3000` frames in `loaders.py`). At 12.5Hz, this translates to roughly 240 seconds of memory. My tests show it often gets unstable around 160 seconds. * **Stability**: Overlapping speech feels natural until it gets buggy. Sometimes the model will just speak over you non-stop. * **Cost**: "Infinite streaming" requires high-end NVIDIA GPUs (A100/H100). * **Complexity**: Managing simultaneous audio/text streams is far more complex than standard WebSockets.
2026-01-26T22:37:54
https://v.redd.it/te3view8trfg1
Dear-Relationship-39
v.redd.it
1970-01-01T00:00:00
0
{}
1qnvhqk
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/te3view8trfg1/DASHPlaylist.mpd?a=1772059088%2COTk4ZWY5NjNhNGUxMWYyNDA1YzVkZDE5MjU1MTc2ZTc1ZWI4MDczM2Q1YTI0NDc5YWQ3ZTAzNmI3MDhkOWY2Mg%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/te3view8trfg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1076, 'hls_url': 'https://v.redd.it/te3view8trfg1/HLSPlaylist.m3u8?a=1772059088%2CNGI4OTFjOGI2NmY0Y2MzMTY2ZThlZjFlYTQwOWZkYWVhYmYzNzc2NDVkYzI0MTY5MGQzMDgxZTBmMTJmNjQwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/te3view8trfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1qnvhqk
/r/LocalLLaMA/comments/1qnvhqk/nvidia_personaplex_the_fullduplex_revolution/
false
false
https://external-preview…5cdd96e93ffe818a
39
{'enabled': False, 'images': [{'id': 'enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=108&crop=smart&format=pjpg&auto=webp&s=643076556a30c94f79afc5951e6614d5d7d14cd6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=216&crop=smart&format=pjpg&auto=webp&s=a53e94c32b726028cbb18c0ffcb50d8d684fb941', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=320&crop=smart&format=pjpg&auto=webp&s=f1586806b8881adddcc18be640528d2f17be45df', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=640&crop=smart&format=pjpg&auto=webp&s=904625001de0ebc96c04291a642b9633842bac30', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=960&crop=smart&format=pjpg&auto=webp&s=2f2846b18d64da6470189d2721fc9c516028c2e0', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c7613f988f609337de573378d68a6309c5be21d4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/enU0ZTlsdzh0cmZnMdWQQk7IDm5BOlMGqupQUX4EYvgl4ItwjbFFk3nNeGXv.png?format=pjpg&auto=webp&s=ae631e17a352c07d67397b227e4738e5dce4d17e', 'width': 1926}, 'variants': {}}]}
Built an AI DM system for 5e campaigns: lore retrieval + structured state + multiplayer (InfiniteGM) Post
0
I built **InfiniteGM (InfiniteGM.app)**, an AI Dungeon Master web app designed to stay coherent in **multi-session 5e campaigns**. I’m explicitly trying to solve: * lore drift / contradiction after prolonged play * unreliable state updates (HP, conditions, etc.) * multiplayer pacing + context management **Design highlights** * “Lore injection” from user-provided material (not just chat history) * Structured state tracking for characters and campaign facts * Owner controls: regenerate or edit responses when needed I’m early-launch (no traction yet). If you’ve built long-form agents, I’d love feedback on: * failure modes you’d expect * best practices for keeping state consistent under adversarial player behavior Link: [InfiniteGM.app](http://InfiniteGM.app)
2026-01-26T22:33:23
https://www.reddit.com/r/LocalLLaMA/comments/1qnvd8o/built_an_ai_dm_system_for_5e_campaigns_lore/
Wandering-Kerbal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnvd8o
false
null
t3_1qnvd8o
/r/LocalLLaMA/comments/1qnvd8o/built_an_ai_dm_system_for_5e_campaigns_lore/
false
false
self
0
null
Open-source Aesthetic Datasets
11
Hi! Moonworks is releasing a open-source datasets with image generation by a new diffusion mixture architecture. The first [dataset (apache 2.0)](https://huggingface.co/datasets/moonworks/lunara-aesthetic) is out with [paper](https://arxiv.org/abs/2601.07941). Moonworks is also releasing a second open-source dataset later this week, focusing on semantic image variations.
2026-01-26T22:27:49
https://i.redd.it/aidynwqarrfg1.png
paper-crow
i.redd.it
1970-01-01T00:00:00
0
{}
1qnv7xz
false
null
t3_1qnv7xz
/r/LocalLLaMA/comments/1qnv7xz/opensource_aesthetic_datasets/
false
false
default
11
{'enabled': True, 'images': [{'id': 'aidynwqarrfg1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=108&crop=smart&auto=webp&s=6f2da3eff3106364fcc2a0ccc9fa20db4d2db3a4', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=216&crop=smart&auto=webp&s=a9872f614dd8ade8dcaa4b485f235d93075127ac', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=320&crop=smart&auto=webp&s=5ca27d524008706b8a24051790cfe877661f5389', 'width': 320}, {'height': 453, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=640&crop=smart&auto=webp&s=3c5eee8e9bf966716bf235b5c11fe58832078928', 'width': 640}, {'height': 680, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=960&crop=smart&auto=webp&s=8b2d5d7a1cde6742a4114df5b5941a16bad05680', 'width': 960}, {'height': 765, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?width=1080&crop=smart&auto=webp&s=a1605f28481210b6dd5e321401eff7f854f14fad', 'width': 1080}], 'source': {'height': 2192, 'url': 'https://preview.redd.it/aidynwqarrfg1.png?auto=webp&s=e693e5710c7b2393d133ff6fded85fd7effb245c', 'width': 3092}, 'variants': {}}]}
Does anyone have a copy of the redacted llama 4 paper
2
went to check and it disappeared from arxiv https://arxiv.org/abs/2601.11659v1 please dm if you have the PDF downloaded somehow. thanks !
2026-01-26T22:23:24
https://www.reddit.com/r/LocalLLaMA/comments/1qnv3kq/does_anyone_have_a_copy_of_the_redacted_llama_4/
sir_kermit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnv3kq
false
null
t3_1qnv3kq
/r/LocalLLaMA/comments/1qnv3kq/does_anyone_have_a_copy_of_the_redacted_llama_4/
false
false
self
2
null
Something akin to Claude's skills on local LLM?
0
Via work, I have access to claude opus (4.5) and have dabbled with Claude's "skills" - it is pretty decent. But I also want to have a local setup (for example, if I interact with research data, so that no information leaves my machine). Is there any way to have something similar to Claude's skills locally - or does it not make sense for a Local model (given the constraints on context, etc). I run ollama (mostly Mistral / Cohere -- pretty ok) and have a MacStudio where I can accommodate models up to 128gb. My use case is academia (social sciences) and programming in R.
2026-01-26T22:23:08
https://www.reddit.com/r/LocalLLaMA/comments/1qnv3ax/something_akin_to_claudes_skills_on_local_llm/
Bahaal_1981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnv3ax
false
null
t3_1qnv3ax
/r/LocalLLaMA/comments/1qnv3ax/something_akin_to_claudes_skills_on_local_llm/
false
false
self
0
null
Local PDF chat on macOS: RAG pipeline + llama.cpp backend
1
[removed]
2026-01-26T22:04:13
https://www.reddit.com/r/LocalLLaMA/comments/1qnuknw/local_pdf_chat_on_macos_rag_pipeline_llamacpp/
Equivalent-Jicama329
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnuknw
false
null
t3_1qnuknw
/r/LocalLLaMA/comments/1qnuknw/local_pdf_chat_on_macos_rag_pipeline_llamacpp/
false
false
self
1
null
how are you guys handling sensitive data with local LLMs?
0
been running Ollama for personal things but my company is starting to ask about using LLMs for internal docs and customer data. IT is paranoid (rightfully so) after the whole samsung chatgpt thing. so my question... is anyone actually using local setups for work with sensitive data? Curious what the setup looks like and whether its actually practical at scale or if its just a pain in the butt to maintian
2026-01-26T22:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1qnujer/how_are_you_guys_handling_sensitive_data_with/
Ok-Fly-9118
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnujer
false
null
t3_1qnujer
/r/LocalLLaMA/comments/1qnujer/how_are_you_guys_handling_sensitive_data_with/
false
false
self
0
null
Showcase: Local PDF chat app for macOS (RAG + llama.cpp GGUF)
1
[removed]
2026-01-26T22:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1qnugee/showcase_local_pdf_chat_app_for_macos_rag/
Equivalent-Jicama329
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnugee
false
null
t3_1qnugee
/r/LocalLLaMA/comments/1qnugee/showcase_local_pdf_chat_app_for_macos_rag/
false
false
self
1
null
[Release] Document Chat — local-first PDF chat app for macOS (GGUF / llama.cpp backend)
0
[removed]
2026-01-26T21:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1qnubsl/release_document_chat_localfirst_pdf_chat_app_for/
Equivalent-Jicama329
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnubsl
false
null
t3_1qnubsl
/r/LocalLLaMA/comments/1qnubsl/release_document_chat_localfirst_pdf_chat_app_for/
false
false
https://b.thumbs.redditm…SVuNlfqrIzps.jpg
0
null
I built a full YouTube Agency script using LLMs and my mind is officially blown. (Local LLMs, RAG, and Automation)
0
I wanted to share something incredible that’s been happening. I recently had a "crazy" idea and managed to build a Python script that basically does everything a creative agency does to run a YouTube channel. The best part? **I barely touched the code.** I just "talked" to AI, told it what I needed, and it built it. **The Workflow:** 1. **Idea to Script:** I input a topic, and it generates the script, title, and thumbnail concepts. 2. **Contextual Visuals:** It doesn’t just pick random stock footage. It uses a **Vector Database (RAG)** to find specific images/videos that match the script. If the AI is talking about a dog, it finds a dog in my database and places it there. 3. **Narration:** It narrates the script using **Kokoro** (the voice quality is insane). 4. **Final Assembly:** It adds background music, generates the thumbnail based on the AI's suggestions, and compiles the final video. 5. **Automatic Deployment:** It uploads the video to specific channels, sets the title/thumb, and schedules it. I’m currently running **four different channels** in multiple languages, all from my local machine. **The Realization:** This project was my gateway drug into the deeper world of LLMs. I realized that ChatGPT, Gemini, and Claude are just the tip of the iceberg. I started hanging out here on Reddit, found out about **LM Studio** (Mist), and my head has been exploding every day since. There are so many LLMs out there that are just as good—or even better—than the "Big Three" for specific tasks, and you can run them locally or via **OpenRouter**. I actually canceled all my other subscriptions just to put that extra money into OpenRouter credits so I can use any model I want. **The Feeling:** Since the beginning of 2026, I’ve been going to sleep with my brain buzzing and waking up running to my computer to learn more. Every time I learn something new, I realize how much more there is to discover. If you combine **programming with LLMs**, the sky is the limit. I’m Brazilian and my English writing isn't great (I'm using AI to translate this), but I just had to share this moment. We have so much power in our hands right now. It’s bizarro. **Is anyone else feeling this "AI fever"? What are you guys building with local RAG lately?**
2026-01-26T21:44:40
https://www.reddit.com/r/LocalLLaMA/comments/1qnu14a/i_built_a_full_youtube_agency_script_using_llms/
Foxtor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnu14a
false
null
t3_1qnu14a
/r/LocalLLaMA/comments/1qnu14a/i_built_a_full_youtube_agency_script_using_llms/
false
false
self
0
null
Eating lobster souls part II - backdooring the #1 downloaded ClawdHub skill
39
[](https://www.reddit.com/r/ClaudeAI/?f=flair_name%3A%22Vibe%20Coding%22)Two days ago I published research on exposed Clawdbot servers. This time I went after the supply chain. I built a simulated backdoor skill called "What Would Elon Do?" for ClawdHub (the npm-equivalent for Claude Code skills), inflated its download count to 4,000+ using a trivial API vulnerability to hit #1, and watched real developers from 7 countries execute arbitrary commands on their machines. https://preview.redd.it/z746ylqwjrfg1.png?width=1162&format=png&auto=webp&s=ccfd526a78a789785486d9965eda989763bcb26f The payload was harmless by design - just a ping to prove execution. No data exfiltration. But a real attacker could have taken SSH keys, AWS credentials, entire codebases. Nobody would have known. Key findings: * Download counts are trivially fakeable (no auth, spoofable IPs) * The web UI hides referenced files where payloads can live * Permission prompts create an illusion of control - many clicked Allow * 16 developers, 7 countries, 8 hours. That's all it took. I've submitted a fix PR, but the real issue is architectural. The same patterns that hit ua-parser-js and event-stream are coming for AI tooling. Full writeup: [https://x.com/theonejvo/status/2015892980851474595](https://x.com/theonejvo/status/2015892980851474595)
2026-01-26T21:41:29
https://www.reddit.com/r/LocalLLaMA/comments/1qntxwu/eating_lobster_souls_part_ii_backdooring_the_1/
theonejvo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qntxwu
false
null
t3_1qntxwu
/r/LocalLLaMA/comments/1qntxwu/eating_lobster_souls_part_ii_backdooring_the_1/
false
false
https://b.thumbs.redditm…20UzNDrVhFaQ.jpg
39
null
Fiz um repositório para ajudar quem usa MI50
0
Para quem ainda usa, como hobby claro, uma MI 50 para rodar seus LLM fiz um repositório no github para ajudar. [https://github.com/nullkalahar/mi50-rocm7](https://github.com/nullkalahar/mi50-rocm7) É algo básico, ainda vai ser melhorado e ampliado. Não sou um grande entendedor, costumo usar muito IA para me ajudar a fazer as coisas. Isso é resultado de um compilado de informações que usei como guia para conseguir fazer a minha funcionar. Quem quiser contribuir, sinta-se convidado.
2026-01-26T21:38:01
https://www.reddit.com/r/LocalLLaMA/comments/1qntudn/fiz_um_repositório_para_ajudar_quem_usa_mi50/
NullKalahar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qntudn
false
null
t3_1qntudn
/r/LocalLLaMA/comments/1qntudn/fiz_um_repositório_para_ajudar_quem_usa_mi50/
false
false
self
0
{'enabled': False, 'images': [{'id': 'j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=108&crop=smart&auto=webp&s=48b40d85ef0454a44c5ef7b15a79bfd075658a69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=216&crop=smart&auto=webp&s=599f97af6a09dcd3f5b94fe03274d7ae19cc006d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=320&crop=smart&auto=webp&s=ab5d0ef9dbc0a2961137dd3887632cf6669947d9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=640&crop=smart&auto=webp&s=11627c8efd5019a19d94b8db7c1659a4597d7e5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=960&crop=smart&auto=webp&s=3d4ea13234ef1888baba273a0bbb5002cd7a2cf2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?width=1080&crop=smart&auto=webp&s=287a5510a067201c705323c47a129b3a17c1135d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j5ycj3Silk1cDTdzEF8i1RI7wXM_1vCz4ELbimVXsf8.png?auto=webp&s=d3d214ce1d06c0616d33011f378b6121c67e4a61', 'width': 1200}, 'variants': {}}]}
Clawdbot is overrated
0
I've figured out a better approach to agentic browsing that doesn't require handing over API keys or passwords. The concept is simple: give each AI agent its own browser tab. Gemini gets one tab, a local model gets another, Claude gets one—whatever you need. They all communicate through a locally-hosted messaging platform (just a simple website running on your machine). This creates true web agents without any of the backend complexity or security risks. You never have to give API keys or passwords to the agents themselves. Instead, you just log into each service in its respective tab beforehand, then tell the agents what tasks to perform. No credentials shared. No third-party tools with access to your accounts. Just clean, secure, multi-agent coordination where you stay in complete control. That's why tools like Clawdbot—which require you to hand over your API keys—are overrated. There's a simpler, safer way to do agentic browsing.
2026-01-26T21:28:51
https://v.redd.it/609vw2qfhrfg1
Admirable-Choice9727
/r/LocalLLaMA/comments/1qntl3m/clawdbot_is_overrated/
1970-01-01T00:00:00
0
{}
1qntl3m
false
null
t3_1qntl3m
/r/LocalLLaMA/comments/1qntl3m/clawdbot_is_overrated/
false
false
https://external-preview…e41f2e1e454ff35c
0
{'enabled': False, 'images': [{'id': 'N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT.png?width=108&crop=smart&format=pjpg&auto=webp&s=81d66c60fb23b5cec7b3e92419354c483dff9fed', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT.png?width=216&crop=smart&format=pjpg&auto=webp&s=cd40258109dcff4e886cc3b46d7c61914c557f4f', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT.png?width=320&crop=smart&format=pjpg&auto=webp&s=c46dc80e278aa65f94ef2c817ac9cbd83dfbe8c1', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT.png?width=640&crop=smart&format=pjpg&auto=webp&s=d54409db5af5a6e86ee6cf53d2b2793cd438560d', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/N3k2cXp5cWZocmZnMRtPcBkJEJIVjtbLouUcQk2HWIqu1RaXR0j_mKQCBnWT.png?format=pjpg&auto=webp&s=3c4cbbe09e4e9c7846521c127b91e3266755eb22', 'width': 778}, 'variants': {}}]}
Let's talk about the "swe-bench verified" benchmark/leaderboard
18
Two main questions that I have: - Who is cheating on us: the benchmark leaderboard, or all Chinese companies that create open models? - Could the benchmark leaderboard be a propaganda for certain products? Some observations: 1. To submit the result on the benchmark leaderboard, this link https://www.swebench.com/submit.html asks to follow the instructions there: https://github.com/swe-bench/experiments/ This site collects previous submissions, so everyone can analyse them. And the readme has this note: > [11/18/2025] SWE-bench Verified and Multilingual now only accepts submissions from academic teams and research institutions with open source methods and peer-reviewed publications. 2. The leaderboard has the results of the following models: Opus 4.5, Devstral 2 (both), and GPT-5.2 that were added to the leaderboard exactly at the release date. Hmm, does that mean that developers of these models are threatened as academic teams or research institutions? Or were some academic teams / research institutions waiting for these modes to do the benchmark exactly at the release date? 3. The bottom of the leaderboard page thanks OpenAI and Anthropic, among other companies, for generous support. Could this generosity be linked to the fast leaderboard appearance? 4. There are no modern Chinese models at all. Only previous or outdated. Many models were released recently, but I suppose no academic teams or research institutions wanted to benchmark them. Maybe just too busy to do that. 5. The results for the Chinese models on the leaderboard are not the same as the results of SWE-bench Verified on Hugging Face or the model page for these models. For example, DeepSeek V3.2 has 60% score on the leaderboard dated at 2025-12-01, but on Hugging Face, its 73.1%. GLM-4.6 on the leaderboard scored as 55.4% at 2025-12-01, but on the model page, it is 68% 6. OK, we have the GitHub for the Leaderboard result evaluation, right? https://github.com/SWE-bench/experiments/tree/main/evaluation/verified But there are no results for 2025-12-01 DeepSeek and GLM! I suppose the academic teams or research institutions were too shy to upload it there, and just provided the numbers to the leaderboards. Poor guys. Surpisingly, the github has GLM-4.6 results, dated at 2025-09-30, and the result is 68%, not 55.4%: https://github.com/SWE-bench/experiments/tree/main/evaluation/verified/20250930_zai_glm4-6 From these observations, I have no answer to the main questions, so I would like to hear your opinion and, ideally, some explanations from the benchmark and leaderboard owners.
2026-01-26T21:16:47
https://www.reddit.com/r/LocalLLaMA/comments/1qnt8vp/lets_talk_about_the_swebench_verified/
Exciting_Garden2535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnt8vp
false
null
t3_1qnt8vp
/r/LocalLLaMA/comments/1qnt8vp/lets_talk_about_the_swebench_verified/
false
false
self
18
null
[Feedback Requested] OCR Normalization for Stable RAG Ingestion — Experiences Wanted
0
I’m a solo developer working on B2B document intelligence, and while building RAG pipelines on top of scanned PDFs, I kept running into the same failure mode: **changing OCR models silently breaks retrieval**. *This isn’t a launch or promotion — I’m trying to validate whether this problem is real for other RAG practitioners using open-source OCR models.* # The RAG Problem In production RAG systems, OCR isn’t just “text extraction” — it defines: * chunk boundaries * table structure * figure references * citation spans Every time I swapped OCR backends (e.g., DeepSeek → Chandra → DOTS, or open-source → closed), I had to: * rewrite parsers * debug retrieval regressions caused by subtle layout differences Many failures were *silent* — embeddings were generated, but retrieval quality degraded without any obvious errors. # What I Built A **headless OCR normalization layer** designed for RAG ingestion, with a unified API that works across **open-source models** (DeepSeek, Chandra, DOTS) and **closed-source models** (Gemini, GPT). The goal: enforce a **stable, retrieval-safe output contract** so you can swap OCR backends without breaking downstream pipelines. # RAG-Focused Features - **Unified API for Multiple OCR Backends** Provides a single interface for both OSS and closed-source OCR models. Swap backends without breaking parsing or chunking logic. - **Layout-Invariant Output** Generates consistent layout trees, bounding boxes, and reading order across OCR providers, ensuring stable retrieval results. - **Schema-Locked OCR → Chunks** Define a JSON schema and enforce strict typing before content reaches your vector database. Prevents malformed chunks, partial fields, or silent drift; all fields are citation-backed. - **Figure & Table Summarization** Detects charts, tables, and graphs and converts them into natural language summaries suitable for semantic search. - **Confidence-Aware Ingestion** Each page and element returns quality and confidence scores, enabling: - auto-ingestion of high-confidence pages - routing low-confidence pages to human review - avoiding embedding corruption from bad OCR - **Debug & Audit UI (Optional)** Inspect layouts, chunks, and citations for debugging retrieval issues and supporting human-in-the-loop workflows. # Questions for the Community 1. Is OCR normalization and schema enforcement a pain point in your workflows, or do you just pick one OCR model and live with it? 2. When OCR quality is low, do you drop documents, down-rank them, or route them to human review? 3. Which OCR failures hurt retrieval the most in your pipelines: tables, figures, reading order, multi-page elements, or missing layout metadata? 4. Has anyone standardized OCR outputs across multiple open-source models successfully, or do you just stick to one model? I’d love to hear your honest experiences and lessons — especially from anyone using open-source OCR in RAG pipelines.
2026-01-26T21:16:41
https://www.reddit.com/r/LocalLLaMA/comments/1qnt8rr/feedback_requested_ocr_normalization_for_stable/
bornatnine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnt8rr
false
null
t3_1qnt8rr
/r/LocalLLaMA/comments/1qnt8rr/feedback_requested_ocr_normalization_for_stable/
false
false
self
0
null
MCP vs CAP: Why Your AI Agents Need Both Protocols
0
The AI agent ecosystem is exploding with protocols. Anthropic released MCP (Model Context Protocol). Google announced A2A (Agent-to-Agent). Every week there's a new "standard" for agent communication. But here's the thing most people miss: **these protocols solve different problems at different layers.** Using MCP for distributed agent orchestration is like using HTTP for job scheduling—wrong tool, wrong layer. Let me break down the actual difference and why you probably need both. # What MCP Actually Does MCP (Model Context Protocol) is a **tool-calling protocol for a single model**. It standardizes how one LLM discovers and invokes external tools—databases, APIs, file systems, etc. ┌─────────────────────────────────────┐ │ Your LLM │ │ │ │ "I need to query the database" │ │ │ │ │ ▼ │ │ ┌─────────────┐ │ │ │ MCP Client │ │ │ └──────┬──────┘ │ └────────────┼────────────────────────┘ │ ▼ ┌───────────────┐ │ MCP Server │ │ (tool host) │ └───────────────┘ │ ▼ [Database] MCP is great at this. It solves tool discovery, schema negotiation, and invocation for a **single model context**. **What MCP doesn't cover:** * How do you schedule work across multiple agents? * How do you track job state across a cluster? * How do you enforce safety policies before execution? * How do you handle agent liveness and capacity? * How do you fan out workflows with parent/child relationships? MCP was never designed for this. It's a tool protocol, not an orchestration protocol. # Enter CAP: The Missing Layer CAP (Cordum Agent Protocol) is a **cluster-native job protocol for AI agents**. It standardizes the control plane that MCP doesn't touch: * **Job lifecycle**: submit → schedule → dispatch → run → complete * **Distributed routing**: pool-based dispatch with competing consumers * **Safety hooks**: allow/deny/throttle decisions before any job runs * **Heartbeats**: worker liveness, capacity, and pool membership * **Workflows**: parent/child jobs with aggregation * **Pointer architecture**: keeps payloads off the bus for security and performance &#8203; ┌─────────────────────────────────────────────────────────────┐ │ CAP Control Plane │ │ │ │ Client ──▶ Gateway ──▶ Scheduler ──▶ Safety ──▶ Workers │ │ │ │ │ │ ▼ ▼ │ │ [Job State] [Results] │ └─────────────────────────────────────────────────────────────┘ │ ▼ ┌──────────────┐ │ MCP (tools) │ └──────────────┘ **CAP handles:** * `BusPacket` envelopes for all messages * `JobRequest` / `JobResult` with full state machine * `context_ptr` / `result_ptr` to keep blobs off the wire * Heartbeats for worker pools * Safety Kernel integration (policy checks before dispatch) * Workflow orchestration with `workflow_id`, `parent_job_id`, `step_index` # The Key Insight: Different Layers Think of it like the network stack: |Layer|Protocol|What It Does| |:-|:-|:-| |**Tool execution**|MCP|Model ↔ Tool communication| |**Agent orchestration**|CAP|Job scheduling, routing, safety, state| |**Transport**|NATS/Kafka|Message delivery| **MCP is layer 7. CAP is layer 5-6.** You wouldn't use HTTP to schedule Kubernetes jobs. Similarly, you shouldn't use MCP to orchestrate distributed agent workloads. # How They Work Together Here's the beautiful part: **MCP and CAP complement each other perfectly.** A CAP worker receives a job, executes it (potentially using MCP to call tools), and returns a result. MCP handles the tool-calling inside the worker. CAP handles everything outside. ┌─────────────────────────────────────────────────────────────────┐ │ CAP Cluster │ │ │ │ ┌──────────┐ ┌───────────┐ ┌─────────────────────────┐ │ │ │ Client │───▶│ Scheduler │───▶│ Worker Pool │ │ │ └──────────┘ └───────────┘ │ ┌───────────────────┐ │ │ │ │ │ │ CAP Worker │ │ │ │ ▼ │ │ │ │ │ │ │ [Safety Kernel] │ │ ▼ │ │ │ │ │ │ ┌─────────┐ │ │ │ │ │ │ │ MCP │ │ │ │ │ │ │ │ Client │ │ │ │ │ │ │ └────┬────┘ │ │ │ │ │ └────────┼──────────┘ │ │ │ └───────────┼─────────────┘ │ └────────────────────────────────────────────────┼───────────────┘ ▼ [MCP Servers] (tools, DBs, APIs) **Example flow:** 1. Client submits job via CAP (`JobRequest` to `sys.job.submit`) 2. Scheduler checks Safety Kernel → approved 3. Job dispatched to worker pool via CAP 4. Worker uses MCP to call tools (query DB, fetch API, etc.) 5. Worker returns result via CAP (`JobResult` to `sys.job.result`) 6. Scheduler updates state, notifies client MCP never touches the bus. CAP never touches the tools. Clean separation. # Why This Matters for Production If you're building a toy demo, you don't need CAP. One model, a few tools, MCP is plenty. But if you're building **production multi-agent systems**, you need: |Requirement|MCP|CAP| |:-|:-|:-| |Tool discovery & invocation|✅|❌| |Job scheduling|❌|✅| |Distributed worker pools|❌|✅| |Safety policies (allow/deny/throttle)|❌|✅| |Job state machine|❌|✅| |Worker heartbeats & capacity|❌|✅| |Workflow orchestration|❌|✅| |Payload security (pointer refs)|❌|✅| **CAP gives you the control plane. MCP gives you the tool plane.** # Getting Started with CAP CAP is open source (Apache-2.0) with SDKs for Go, Python, Node/TS, and C++. **Minimal Go worker (20 lines):** nc, _ := nats.Connect("nats://127.0.0.1:4222") nc.QueueSubscribe("job.echo", "job.echo", func(msg *nats.Msg) { var pkt agentv1.BusPacket proto.Unmarshal(msg.Data, &pkt) req := pkt.GetJobRequest() res := &agentv1.JobResult{ JobId: req.GetJobId(), Status: agentv1.JobStatus_JOB_STATUS_SUCCEEDED, } out, _ := proto.Marshal(&agentv1.BusPacket{ Payload: &agentv1.BusPacket_JobResult{JobResult: res}, }) nc.Publish("sys.job.result", out) }) **Links:** * GitHub: [github.com/cordum-io/cap](https://github.com/cordum-io/cap) * Spec: [github.com/cordum-io/cap/tree/main/spec](https://github.com/cordum-io/cap/tree/main/spec) * Reference implementation: [Cordum](https://cordum.io/) # TL;DR * **MCP** = tool protocol for single-model contexts * **CAP** = job protocol for distributed agent clusters * **They solve different problems at different layers** * **Use both**: CAP for orchestration, MCP inside workers for tools * Stop using MCP for things it wasn't designed for The multi-agent future needs both protocols. Now you know which one to reach for. *CAP is developed by* [*Cordum*](https://cordum.io/)*, the AI Agent Governance Platform. Star the repo if this was useful:* [*github.com/cordum-io/cap*](https://github.com/cordum-io/cap) **Tags:** \#ai #agents #mcp #distributed-systems #orchestration #protocols
2026-01-26T21:16:07
https://www.reddit.com/r/LocalLLaMA/comments/1qnt82z/mcp_vs_cap_why_your_ai_agents_need_both_protocols/
yaront1111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnt82z
false
null
t3_1qnt82z
/r/LocalLLaMA/comments/1qnt82z/mcp_vs_cap_why_your_ai_agents_need_both_protocols/
false
false
self
0
{'enabled': False, 'images': [{'id': '585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=108&crop=smart&auto=webp&s=8674213c8f952b0445b275a8df0cef25f2fafcf4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=216&crop=smart&auto=webp&s=13b990045d82d7aa5988bbc6456508c2b938b9fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=320&crop=smart&auto=webp&s=5090bba8f0e8b1eb5920228f44b30b9f8aa1628b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=640&crop=smart&auto=webp&s=e994d46a2a6bccf6b63b6164ecd941bc3cbea74d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=960&crop=smart&auto=webp&s=9fb41b4c2b4cd4d363574676bc29d522126e63f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?width=1080&crop=smart&auto=webp&s=91e38c6cf2fb5c8fe9eeb05ff4c1984621c0c2a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/585bllGD6eYazyrXVbHApfBCWcmch0GY99fu9IKBQ10.png?auto=webp&s=95dd28d9f161f632f1b03719aafb949fde9f1252', 'width': 1200}, 'variants': {}}]}
Toad-Pythonic Universal Terminal Interface for AI Agents
1
2026-01-26T20:57:44
https://www.i-programmer.info/news/90-tools/18619-toad-pythonic-universal-terminal-interface-for-ai-agents.html
Active-Fuel-49
i-programmer.info
1970-01-01T00:00:00
0
{}
1qnsph9
false
null
t3_1qnsph9
/r/LocalLLaMA/comments/1qnsph9/toadpythonic_universal_terminal_interface_for_ai/
false
false
default
1
null
How a Single Email Turned My ClawdBot Into a Data Leak
94
Wrote an article on it here [https://medium.com/@peltomakiw/how-a-single-email-turned-my-clawdbot-into-a-data-leak-1058792e783a](https://medium.com/@peltomakiw/how-a-single-email-turned-my-clawdbot-into-a-data-leak-1058792e783a)
2026-01-26T20:54:23
https://i.redd.it/h6x5fo6jbrfg1.jpeg
RegionCareful7282
i.redd.it
1970-01-01T00:00:00
0
{}
1qnsm3a
false
null
t3_1qnsm3a
/r/LocalLLaMA/comments/1qnsm3a/how_a_single_email_turned_my_clawdbot_into_a_data/
false
false
default
94
{'enabled': True, 'images': [{'id': 'h6x5fo6jbrfg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=108&crop=smart&auto=webp&s=5a7ac1a9d80738d6d07d5bf542f3a4f63e4cf707', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=216&crop=smart&auto=webp&s=ba3105290793c07c12da8f38233fd9d004098d57', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=320&crop=smart&auto=webp&s=799d8a0be6c732159598d7fe553628704fa3039f', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=640&crop=smart&auto=webp&s=f8127f804d6686fdc1a14e55f8cd73f7baef32cc', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=960&crop=smart&auto=webp&s=27f10ad8ca07a55b033cd4db518a008ace138d58', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?width=1080&crop=smart&auto=webp&s=3f836b6e81e6eb5b1003a026829047440f8991e8', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/h6x5fo6jbrfg1.jpeg?auto=webp&s=d058205756a6f11cac6e0c5f36d9314b8b7699f2', 'width': 1344}, 'variants': {}}]}
Classical Arabic Imatrix quants?
1
[removed]
2026-01-26T20:53:10
https://www.reddit.com/r/LocalLLaMA/comments/1qnsku9/classical_arabic_imatrix_quants/
Silver-Champion-4846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnsku9
false
null
t3_1qnsku9
/r/LocalLLaMA/comments/1qnsku9/classical_arabic_imatrix_quants/
false
false
self
1
null
Whats the current state of local LLMs for coding?
2
Ive been trying to stay up to date but Ive been out of the game for a while. I have an rtx 5090 and 128gb of ram. I use codex from ChatGPT to help with development, but I would much rather run everything locally. How close are we to that with comparable performance with closed source models? In particular models that could be ran in a smaller setup like mine.
2026-01-26T20:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1qnsfk0/whats_the_current_state_of_local_llms_for_coding/
MaximusDM22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnsfk0
false
null
t3_1qnsfk0
/r/LocalLLaMA/comments/1qnsfk0/whats_the_current_state_of_local_llms_for_coding/
false
false
self
2
null
Small Form Factor build with an RTX A2000
2
I have a used NVIDIA RTX 2000 Ada Generation with 16GB GDDR6 VRAM. My interest is to create a small form factor AI rig with this low profile GPU. I have a lot of experience running local models but I'm not a hardware person. Any advice on the minimum things to buy to maximize the performance of this GPU would be greatly appreciated. Because it is so small, I would love to make it somewhat transportable. I have a larger rig with 96GB of VRAM. My interest is to run small local models or possibly home automations. Or, would you consider turning this into a mid-grade gaming system? Any guidance on the best way to put this to use would be greatly appreciated!
2026-01-26T20:38:07
https://www.reddit.com/r/LocalLLaMA/comments/1qns5yp/small_form_factor_build_with_an_rtx_a2000/
Ok-Boysenberry-2860
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qns5yp
false
null
t3_1qns5yp
/r/LocalLLaMA/comments/1qns5yp/small_form_factor_build_with_an_rtx_a2000/
false
false
self
2
null
I benchmarked a bunch of open weight LLMs on different Macs so you don't have to!
33
Hi folks, I've been evaluating different LLMs on Apple silicon for a project lately and figured the benchmarking could be useful to share. The exercise also uncovered a few counterintuitive things that I'd be curious to get folks' feedback on. The lineup of models: * Gemma 3, from Google * GPT OSS, from OpenAI * Nemotron 3 Nano, from NVIDIA * Qwen 3, from Alibaba The Macs: * **M4 MacBook Air**, Apple M4, 4 performance cores, 6 efficiency cores, 10 GPU cores, 16 Neural Engine cores, 32 GB RAM, 1 TB SSD, macOS Tahoe 26.2 * **M4 Mac mini**, Apple M4, 4 performance cores, 6 efficiency cores, 10 GPU cores, 16 Neural Engine cores, 16 GB RAM, 256 GB SSD, macOS Tahoe 26.2 * **M1 Ultra Mac Studio**, Apple M1 Ultra, 16 performance cores, 4 efficiency cores, 64 GPU cores, 32 Neural Engine cores, 128 GB RAM, 4 TB SSD, macOS Tahoe 26.2 What I did: 1. Downloaded 16-bit precision, 8-bit quant, and 4-bit quant models off Hugging Face 2. Quit out of other apps on the Mac (Command + Tab shows just Finder and Terminal) 3. Benchmarked each with [llama-bench](https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#llama-bench) on different Macs 4. Logged the results into a CSV 5. Plotted the CSVs 6. Postulated what it means for folks building LLM into tools and apps today I ran the benchmarks with the models on the internal Mac SSD. On the machine that didn't have enough storage to store all the models, I'd copy over a few models at a time and run the benchmarks in pieces (lookin' at you, base M4 Mac mini). What I saw: *Processing img 3p6e34eb6rfg1...* *Processing img x7w8etxd6rfg1...* If you'd prefer the raw data, here are the gists: * [M1 Ultra Mac Studio](https://gist.github.com/zachrattner/02e8ccae5cb6b1204b4a80d541fb1c5d) * [M4 Mac mini](https://gist.github.com/zachrattner/44cee397156985fa5e6a3666689746c7) * [M4 MacBook Air](https://gist.github.com/zachrattner/52a6b56d70ed024b18c992ef14b89656) * [Python script ](https://gist.github.com/zachrattner/0c7a22603ea5dfb55d2851b5793a334c)to plot charts from the CSVs Some observations: 1. The bigger the model, the fewer TPS there were. No surprises here. 2. When you try to cram a model too big onto a machine that doesn't have enough horsepower, it fails in unusual ways. If the model is slightly too big to fit in RAM, I saw the disk swapping which torpedoed performance (understandable, since memory bandwidth on the base M4 is 120 GB/s and SSD is more like 5-7 GB/s). But sometimes it'd cause a full on kernel panic and the machine would shut itself down. I guess if you max out CPU + RAM + GPU all in one go you can freak your system out. 3. You can see the benefits of higher clock speeds on the newer M classes. Base $599 M4 Mac Mini outperforms M1 Ultra Mac Studio on prompt processing on smaller models, provided the model can fit in memory 4. Once you get to the larger models, M4 chokes and sometimes even crashes, so you need Ultra silicon if you want a big model 5. But if time (say, 270m parameter) model works for your use case, you are actually better off going lower cost higher clock speed than older higher end machine 6. Prompt processing is compute bound so you see the Ultra trounce due to the extra performance cores/GPUs I'm sharing this for two reasons. First is in case it's helpful for anyone else. Second is to double check my observations. Curious what others see in this that I may have missed or misunderstood! Cheers.
2026-01-26T20:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1qnrzm0/i_benchmarked_a_bunch_of_open_weight_llms_on/
zachrattner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnrzm0
false
null
t3_1qnrzm0
/r/LocalLLaMA/comments/1qnrzm0/i_benchmarked_a_bunch_of_open_weight_llms_on/
false
false
self
33
null
Made a directory of open source AI tools — heavy on local inference & self-hosted stuff
3
Got tired of having bookmarks scattered everywhere, so I built a simple directory to keep track of all the open source AI tools I use. Figured this sub might find it useful — lots of local-first tools in there: LLM Inference: * llama.cpp, Ollama, vLLM, exllamav2, text-generation-webui, LM Studio, Jan, GPT4All, LocalAI, TensorRT-LLM, SGLang, MLC LLM... Image Gen: * ComfyUI, Stable Diffusion WebUI, Fooocus, InvokeAI, [SD.Next](http://SD.Next), FLUX... Other stuff: * RAG tools (LangChain, LlamaIndex, Chroma, Qdrant...) * Speech (Whisper, faster-whisper, WhisperX, Bark, Coqui TTS...) * Fine-tuning (Unsloth, Axolotl, PEFT, TRL...) * Agents, MLOps, vector DBs, and more You can filter by tags like self-hosted, Apple Silicon, CUDA, CLI, inference, etc. \~80 tools total. You can submit tools too if something's missing.
2026-01-26T20:20:46
https://ai.coderocket.app
Free-Raspberry-9541
ai.coderocket.app
1970-01-01T00:00:00
0
{}
1qnroai
false
null
t3_1qnroai
/r/LocalLLaMA/comments/1qnroai/made_a_directory_of_open_source_ai_tools_heavy_on/
false
false
default
3
{'enabled': False, 'images': [{'id': 'enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=108&crop=smart&auto=webp&s=0f02adf7de1dac5dd39638de62a6c50946418a0a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=216&crop=smart&auto=webp&s=e955972bf55c77b93f77e9bb6850ffe053c9f7df', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=320&crop=smart&auto=webp&s=c1442795560c7e782db76700902ee59d5cd08b38', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=640&crop=smart&auto=webp&s=2e1d134694a17c5269b6a5dd3e2c682ff1ec92e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=960&crop=smart&auto=webp&s=5328e22b7dd7f9808f41fc75c08d2249923981e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?width=1080&crop=smart&auto=webp&s=86cb063a53499321393f6f3e38c0bbd6a9d36442', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/enG1h8PiP9WAJWQLM-rkgys6PDla3m8-nIVZLyl_fWs.png?auto=webp&s=4e95f2763a32ef3dd22053fb3c18d650d1c22024', 'width': 1200}, 'variants': {}}]}
Best local model with clawdbot?
0
What is the best local model that i can use with clawdebot that can fit into 48gb ram on my MacBook. I want it to manage my work email that can only be accessed through secure vpn, hence using cloud/api based models is not a possibility for this use case.
2026-01-26T20:19:57
https://www.reddit.com/r/LocalLLaMA/comments/1qnrnfj/best_local_model_with_clawdbot/
BABA_yaaGa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnrnfj
false
null
t3_1qnrnfj
/r/LocalLLaMA/comments/1qnrnfj/best_local_model_with_clawdbot/
false
false
self
0
null
"Hey Lama" -Local AI Voice Assistant -for mac (personal project)
2
Hi all, I wanted to demo my first personal AI voice assistant, "Hey Lama," for my Mac (M1 Pro). I wanted to use something that is completely local and that actually does a few things for me, thanks to the LocalLLaMA community. My main goal is to keep this running on a Raspberry Pi 16GB with an AI Hailo Hat. I am using Qwen3-0.6B, KittenTTS, and Parakeet-0.6B-v3. The code is available on GitHub: [https://github.com/iBibek/hey-lama-local-ai-voice-assistant](https://github.com/iBibek/hey-lama-local-ai-voice-assistant) Please feel free to give me feedback and suggestions to make it better. https://reddit.com/link/1qnrh5o/video/3ofd2v804rfg1/player
2026-01-26T20:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1qnrh5o/hey_lama_local_ai_voice_assistant_for_mac/
bibek_LLMs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnrh5o
false
null
t3_1qnrh5o
/r/LocalLLaMA/comments/1qnrh5o/hey_lama_local_ai_voice_assistant_for_mac/
false
false
self
2
null
🇮🇳HI AI Solutions | Soothsayer Analytics
1
[removed]
2026-01-26T19:44:42
https://youtu.be/rQMedF5dAdE?si=R5HTRKVefy6TngP5
WingProfessional5754
youtu.be
1970-01-01T00:00:00
0
{}
1qnqmkf
false
{'oembed': {'author_name': 'Soothsayer Analytics', 'author_url': 'https://www.youtube.com/@SoothsayerAnalytics', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/rQMedF5dAdE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Solutions | Soothsayer Analytics"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/rQMedF5dAdE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Solutions | Soothsayer Analytics', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1qnqmkf
/r/LocalLLaMA/comments/1qnqmkf/hi_ai_solutions_soothsayer_analytics/
false
false
default
1
null
Connected Clawdbot to my phone
0
This is more experimental. I’m using Clawdbot now on my WhatsApp and wondered what would happen if it could control my phone directly. Turns out it can execute real tasks, ordering things and automating any app flow triggered from WhatsApp. Sharing this because it felt useful. Curious what use cases come to mind.
2026-01-26T19:26:41
https://v.redd.it/jv60gsfyvqfg1
Sleyn7
v.redd.it
1970-01-01T00:00:00
0
{}
1qnq3t9
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/jv60gsfyvqfg1/DASHPlaylist.mpd?a=1772047626%2CODM4MzFmYjU5NWRlZGIxNmNmN2ZlZGJhYTIwYTE5NjVhYzRhYTEzZGU5NDAwOWEyZTNhOTk4M2EzMGE5YmUyNQ%3D%3D&v=1&f=sd', 'duration': 98, 'fallback_url': 'https://v.redd.it/jv60gsfyvqfg1/CMAF_360.mp4?source=fallback', 'has_audio': True, 'height': 640, 'hls_url': 'https://v.redd.it/jv60gsfyvqfg1/HLSPlaylist.m3u8?a=1772047626%2CYjZiM2ViMzNkZTNlMDY3ODMxZjRiYmRmYzNlMjFhNDQxZWZlOTVlYjNmYWRhMmI5OWI4ZTRjYWMxZWZmZjEzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jv60gsfyvqfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 356}}
t3_1qnq3t9
/r/LocalLLaMA/comments/1qnq3t9/connected_clawdbot_to_my_phone/
false
false
https://external-preview…7ebcc6c5b34c0eb5
0
{'enabled': False, 'images': [{'id': 'Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII', 'resolutions': [{'height': 193, 'url': 'https://external-preview.redd.it/Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII.png?width=108&crop=smart&format=pjpg&auto=webp&s=2df9c8fca4ff0ba826a2e45fc49d06e6ea186bdc', 'width': 108}, {'height': 387, 'url': 'https://external-preview.redd.it/Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII.png?width=216&crop=smart&format=pjpg&auto=webp&s=c830e9c3956605f387d681242559c13510cd14c6', 'width': 216}, {'height': 574, 'url': 'https://external-preview.redd.it/Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII.png?width=320&crop=smart&format=pjpg&auto=webp&s=dd18744903b9ebeda5a80931772a4cbd06face09', 'width': 320}, {'height': 1148, 'url': 'https://external-preview.redd.it/Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII.png?width=640&crop=smart&format=pjpg&auto=webp&s=1f9bede7a4bd06511595ab19b0cdb8d600330162', 'width': 640}], 'source': {'height': 1712, 'url': 'https://external-preview.redd.it/Z2NzaDlvZ3l2cWZnMRzf9tYzfD5K-6OYOTYhj3jJ8MLNEMw2JULvpxwi2CII.png?format=pjpg&auto=webp&s=a6b5b9fffa56d221b226f07e3a1810b594bf9ac0', 'width': 954}, 'variants': {}}]}
Managed to run Qwen3-TTS on Mac (M4 Air) but it’s melting my laptop. Any proper way to do this?
1
I’m on an M4 Air. I saw people saying it "could work" but couldn't find a single tutorial. I eventually had to manually patch multiple files in the ComfyUI custom node to bypass errors. It finally loads without crashing, but it takes forever and absolutely burns my PC. Is there an optimized way to run this or a setting I'm missing? I used github/flybirdxx/ComfyUI-Qwen-TTS/ custom node.
2026-01-26T19:24:55
https://www.reddit.com/r/LocalLLaMA/comments/1qnq1v3/managed_to_run_qwen3tts_on_mac_m4_air_but_its/
Otherwise-Thanks-985
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnq1v3
false
null
t3_1qnq1v3
/r/LocalLLaMA/comments/1qnq1v3/managed_to_run_qwen3tts_on_mac_m4_air_but_its/
false
false
self
1
null
You can now run Claude Code with local OSS models and Docker Model Runner
0
Docker Model Runner can be used with the Anthropic Messages API, making it possible to run Claude Code with open-source models, completely locally. This allows you to use Claude Code without a Claude Pro or Claude Max subscription, by replacing hosted Claude models with local open source models served via Docker Model Runner. By pointing Claude Code to Docker Model Runner’s API endpoint, you can use Ollama-compatible or OpenAI-compatible models packaged as OCI artifacts and run them locally. Docker Model Runner makes this especially simple by letting you pull models from Docker Hub the same way you pull container images, and run them using Docker Desktop. Here's a [quick guide](https://youtu.be/dUdKMwM5CfE?si=p1-ySjcTpsynzkv1) on how to get started.
2026-01-26T19:21:00
https://www.reddit.com/r/LocalLLaMA/comments/1qnpxoh/you_can_now_run_claude_code_with_local_oss_models/
Creepy-Row970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnpxoh
false
null
t3_1qnpxoh
/r/LocalLLaMA/comments/1qnpxoh/you_can_now_run_claude_code_with_local_oss_models/
false
false
self
0
null
Building a virtual file system for Claude Code
1
I've been thinking about what it would take for enterprises to deploy Claude Code at scale. The challenge is that IT teams have requirements that MCP doesn't solve yet: * Controlling what data/tools agents can access * Giving multiple people's agents access to the same shared context * Audit logs of what an agent did and what files it accessed * Managing permissions using patterns they already understand MCP is great for connecting tools, but it's missing a governance layer. What if integrations (Gmail, SharePoint, Slack, GitHub etc) were just directories in a virtual file system? You mount /`workspace/gmail/unread` and your agent can ls and cat emails. Permissions are just Linux file permissions. Every time someone accesses a file, it gets logged automatically. The file system becomes the universal interface for an agent’s context. This could be powerful because models already understand POSIX, and IT teams already understand file permissions. I would be curious to hear what people think. Does MCP truly solve governance, or do we need something different to deploy agents in the real world?
2026-01-26T19:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1qnpojr/building_a_virtual_file_system_for_claude_code/
velobro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnpojr
false
null
t3_1qnpojr
/r/LocalLLaMA/comments/1qnpojr/building_a_virtual_file_system_for_claude_code/
false
false
self
1
null
Generating skills for api+local CUAs via noVNC demonstration recording MCP
4
Hey everyone, we just added noVNC recording and video2skill generation to the [cua](https://github.com/trycua/cua) CLI and MCP, and I wanted to share it here since I've seen a couple posts regarding the topic of human demonstrations in this sub. With this feature, you can now record a noVNC .mp4 and raw event stream directly from the browser. The CLI/MCP provides a processor that takes the continuous input stream, discretizes and captions it with a VLM, then saves the semantic trajectory info in a SKILL.md ( based on the technique from [ShowUI-Aloha -- Human-taught Computer-use Agent Designed for Real Windows and MacOS Desktops.](https://github.com/showlab/ShowUI-Aloha) ). You can then use this SKILL.md as a prompt for both local/api agents with the cua agent SDK, or with any agent SDK you are familiar with.
2026-01-26T19:11:20
https://v.redd.it/8t4922g6rqfg1
a6oo
v.redd.it
1970-01-01T00:00:00
0
{}
1qnpnb7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8t4922g6rqfg1/DASHPlaylist.mpd?a=1772046947%2CY2VhOWU3MTExOGJjNmM2MGM2ZmIxNzJiZjU4OTc1MzA0MjM5OWRiMjA2YTcyZjk3MmI5MDc0ODMwYzAwZDc1Yg%3D%3D&v=1&f=sd', 'duration': 173, 'fallback_url': 'https://v.redd.it/8t4922g6rqfg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/8t4922g6rqfg1/HLSPlaylist.m3u8?a=1772046947%2CZmE0YzhiY2EzMWEzY2NkOTE5ZjE1MTE3MTI0ZDMxNjk0NDUzYjI4NmRkZGMzMzE2N2NmYzg0ZDExNDQ5YTdhNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8t4922g6rqfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1196}}
t3_1qnpnb7
/r/LocalLLaMA/comments/1qnpnb7/generating_skills_for_apilocal_cuas_via_novnc/
false
false
https://external-preview…fbdf9597a0ef235c
4
{'enabled': False, 'images': [{'id': 'dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=108&crop=smart&format=pjpg&auto=webp&s=680b86189716b5a467d5922b77a96ba727b00264', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=216&crop=smart&format=pjpg&auto=webp&s=33926adea7e83b01182e42d00c0388889ffc6540', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=320&crop=smart&format=pjpg&auto=webp&s=f62a30638b709eaecbb5bbfb2de58d4439ae1dc0', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=640&crop=smart&format=pjpg&auto=webp&s=3ef5a72a1d577bcc2debb1267e6c938a70b08f89', 'width': 640}, {'height': 577, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=960&crop=smart&format=pjpg&auto=webp&s=7a93bab1047966a9871961b5b30a6b3085020374', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f085ae27dd5b7fa69863da2e392faaa2b5392d6d', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dmt5dGc3ZzZycWZnMYB7jEvaMFycdfFzJV-NY6DpcRQf2yhufBhUCsOKJqrf.png?format=pjpg&auto=webp&s=e170dc78db34eb5d1f0be8b87259549fc0225aad', 'width': 1196}, 'variants': {}}]}
ollama-term: A TUI (Terminal UI) for Ollama to manage and chat with local LLMs from the comfort of your terminal
0
Hey everyone, I've been using Ollama a ton to run local LLMs. I use linux and like the terminal so i create a ui in the terminal that connect to ollama **GitHub Repo:** [https://github.com/Laszlobeer/ollama-term](https://github.com/Laszlobeer/ollama-term) **Some of the main features:** * **Browse & Manage Models:** See all your pulled models, pull new ones, delete them, and view details, all in a navigable interface. * **Chat Interface:** Have multi-turn conversations with any model. The chat pane is clean and focuses on readability. * **Context & System Prompt:** Easily set a system prompt and see how many tokens are in your current context window * **Lightweight:** It's a single binary. Just download and run. https://i.redd.it/dp37ompsqqfg1.gif
2026-01-26T18:58:03
https://www.reddit.com/r/LocalLLaMA/comments/1qnp8t7/ollamaterm_a_tui_terminal_ui_for_ollama_to_manage/
Reasonable_Brief578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnp8t7
false
null
t3_1qnp8t7
/r/LocalLLaMA/comments/1qnp8t7/ollamaterm_a_tui_terminal_ui_for_ollama_to_manage/
false
false
https://b.thumbs.redditm…1xcIxZj7mRbg.jpg
0
null
Considering AMD Max+ 395, sanity check?
4
Hi everybody, I'm seriously considering buying one of those mini PCs with the Max+ 395 to use as a local LLM and image generation server but I need a reality check. I currently have a PC that I mainly use for gaming and tinkering with local AI with a 3060 12GB and, at first, I was thinking of adding a 16GB card, something like the 4070. That would be about 700-800€ on ebay, and I'd reach 28GB of VRAM. My PSU is 850W and I think it might handle it without needing an upgrade. If I were to go all-in the GPU route I could maybe get 2 3090s (I found a couple of listings just under 1000€), sell my current 3060 and get a new PSU. I guess I could get everything done with around 2000€. On the other hand the gmktec Evo X2 would be around 2000€ as well but I'd have 96+ GB for running models. It would also be easier to manage since it would be a different machine and I'd feel better about leaving it running 24/7, something I probably wouldn't want to do with my main PC. I might even migrate some services I'm running on an older PC to this mini PC (mainly my jellyfin server and some syncthing folders) Does it make any sense? What route would you take? Thank you for any replies and suggestions.
2026-01-26T18:44:35
https://www.reddit.com/r/LocalLLaMA/comments/1qnoubw/considering_amd_max_395_sanity_check/
ErToppa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnoubw
false
null
t3_1qnoubw
/r/LocalLLaMA/comments/1qnoubw/considering_amd_max_395_sanity_check/
false
false
self
4
null
Built a collection of MCP servers for LM Studio – websearch, Python REPL, email, and more
1
[removed]
2026-01-26T18:43:09
https://www.reddit.com/r/LocalLLaMA/comments/1qnosuj/built_a_collection_of_mcp_servers_for_lm_studio/
Maximum-Brilliant956
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnosuj
false
null
t3_1qnosuj
/r/LocalLLaMA/comments/1qnosuj/built_a_collection_of_mcp_servers_for_lm_studio/
false
false
self
1
null
Below is the entire working flow to add cryptographic proof to an AI agent locally.
0
1. Install CLI pipx install vigil-cli pipx ensurepath \# 2. Clone repo git clone [https://github.com/rom-mvp/vigil-cryptographicsign.git](https://github.com/rom-mvp/vigil-cryptographicsign.git) cd vigil-cryptographicsign \# 3. Create virtualenv python3 -m venv .venv source .venv/bin/activate \# 4. Install service dependencies pip install --upgrade pip pip install -r requirements.txt \# 5. Start service (macOS-safe port) PORT=5050 python [main.py](http://main.py) \# 6. Verify service curl [http://localhost:5050/health](http://localhost:5050/health) \# 7. Configure CLI export VIGIL\_ENDPOINT=[http://localhost:5050](http://localhost:5050/) \# 8. Request proof vigil prove \\   \--agent-id demo-agent \\   \--action-hash sha256:demo123 \\   \--policy-id safe \# 9. Gate a command vigil guard \\   \--agent-id demo-agent \\   \--policy-id safe \\   \-- echo "agent executed" \# 10. Check audit log vigil status \# 11. Verify receipt (optional) vigil verify receipt.json
2026-01-26T18:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1qnoiut/below_is_the_entire_working_flow_to_add/
Fantastic-Issue1020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnoiut
false
null
t3_1qnoiut
/r/LocalLLaMA/comments/1qnoiut/below_is_the_entire_working_flow_to_add/
false
false
self
0
null
Looking for a free Windows tool dictation + AI responding to screen content
1
Hi everyone, I’m trying to find a free solution on Windows 11 that works similarly to Spokenly on Mac. On Mac, I was able to: 1. Press a shortcut to do local speech-to-text dictation anywhere (Word, Gmail, Slack, etc.). 2. Press another shortcut to trigger an AI that can read the content on my screen (emails, documents, web pages) and write responses or summaries intelligently, (I use free AI model llama on it or chatgpt) I’m looking for something that: * Works on Windows 11. * Has keyboard shortcuts for dictation and AI modes. * Can do dictation locally/offline if i want because i don't have paid * Can optionally analyze text on the screen via OCR or similar and generate a response. * Is free or has a generous free tier. * Requires minimal setup, ideally no coding. I’ve tried tools like Whisper, Windows Voice Typing, and various online STT + LLM combinations, but nothing fully replicates what I had with Spokenly on Mac. Has anyone successfully set up something similar on Windows? I’d love recommendations for ready-to-use tools, workarounds, or even workflows that combine dictation + AI with keyboard shortcuts. Thanks in advance!
2026-01-26T18:07:43
https://www.reddit.com/r/LocalLLaMA/comments/1qnnq9r/looking_for_a_free_windows_tool_dictation_ai/
rems198999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnnq9r
false
null
t3_1qnnq9r
/r/LocalLLaMA/comments/1qnnq9r/looking_for_a_free_windows_tool_dictation_ai/
false
false
self
1
null
Whisper with AirLLM
1
Does anyone have experience using Whisper with AirLLM?
2026-01-26T17:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1qnn0v8/whisper_with_airllm/
Ryalb07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnn0v8
false
null
t3_1qnn0v8
/r/LocalLLaMA/comments/1qnn0v8/whisper_with_airllm/
false
false
self
1
null
Transformers v5 is out. Feels like the spec layer is finally “done”
1
[removed]
2026-01-26T17:31:47
https://www.reddit.com/r/LocalLLaMA/comments/1qnmo35/transformers_v5_is_out_feels_like_the_spec_layer/
MLExpert000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnmo35
false
null
t3_1qnmo35
/r/LocalLLaMA/comments/1qnmo35/transformers_v5_is_out_feels_like_the_spec_layer/
false
false
self
1
null
hello all, so during the past couple of months, i built a fully free privacy first local AI extension where you can summarise / chat with gmail threads, web pages / articles, and PDFs, and all these runs on device via Ollama or Gemini Nano
2
everything runs locally via gemini nano (chrome ai) or your own ollama setup. it has no subscriptions, no ads, no trackers, basically unlike those generic chatbots inundating the chrome store and scamming people. so, it works in gmail. open an email thread and \`metldr\` generates a summary, and can draft a reply using the thread’s context. for web pages / articles, it can summarise their content and you can ask questions based on that page. for pdfs, you can drag and drop a file to get a summary and then chat with it. it also includes instant dictionary lookup (double-click any word), and also searchable history of what you’ve summarised / chatted with. link to the chrome extension: [metldr](https://chromewebstore.google.com/detail/metldr-local-ai-gmail-art/kbfdmfgmmibkdnbfdaeganjckfgbfjlh?authuser=0&hl=en) link to the github repo: [github repo](https://github.com/WhiteHades/metldr) if you guys could try it and share honest thoughts, it would mean the world to me, this is my first published app after all 😭
2026-01-26T17:08:23
https://v.redd.it/obh2o3w27qfg1
Electronic-Sentence9
/r/LocalLLaMA/comments/1qnlzda/hello_all_so_during_the_past_couple_of_months_i/
1970-01-01T00:00:00
0
{}
1qnlzda
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/obh2o3w27qfg1/DASHPlaylist.mpd?a=1772168912%2CN2VhMWZkNTgwNWI3OGVjODRkOGFkMWE2YzM2YjA4MWQwMTVjYWQ2ZjkyOGI2YjQzYTVlMTQ0NDY5MWY5MDk2Mw%3D%3D&v=1&f=sd', 'duration': 181, 'fallback_url': 'https://v.redd.it/obh2o3w27qfg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/obh2o3w27qfg1/HLSPlaylist.m3u8?a=1772168912%2CNjBmM2VlYTg4YjdkMzM5MmM5NGM4OWE5MzYwOTI1MWE4MmE3ZjAwOWQ0OWViZjllZGRmZjMzNTk0Y2I1YWY3Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/obh2o3w27qfg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1856}}
t3_1qnlzda
/r/LocalLLaMA/comments/1qnlzda/hello_all_so_during_the_past_couple_of_months_i/
false
false
https://external-preview…8c32fda288936569
2
{'enabled': False, 'images': [{'id': 'ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=108&crop=smart&format=pjpg&auto=webp&s=0c19487be0bb7ee75a0852ac04486ef454491e74', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=216&crop=smart&format=pjpg&auto=webp&s=b447923bb0be6f8d14ae72d9692c8b2bc8c8528f', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=320&crop=smart&format=pjpg&auto=webp&s=f13b258e00a96da71441b780b8aa6b629956f0ab', 'width': 320}, {'height': 372, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=640&crop=smart&format=pjpg&auto=webp&s=e438f542f9fe14ac883c66c0d12f995f9e3e4036', 'width': 640}, {'height': 558, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=960&crop=smart&format=pjpg&auto=webp&s=fb1b08ea6ec090e8698774a42275ae2fd9f35ff5', 'width': 960}, {'height': 628, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a34d0f1915da4a1152d68ac630cda1a7f4c53150', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZnFiZ3NndzI3cWZnMWv5tcXeUvYhGcqUN-VrPwuyEVkNkThbfZtBIdquTYbO.png?format=pjpg&auto=webp&s=27a20bd9bc5e9fccc5851c39b196db0738e77433', 'width': 1856}, 'variants': {}}]}
High impedance Busbar differential protection operated on external fault.
0
Hello I am an Electrical engineer who works in HV substation maintenance. An event occurred 3 days ago in one of the HV substation which has two voltage levels 110/13.8 K.V An external fault occurred outside of the substation on one of the 13.8 K.V outgoing feeders that led to the operation high impedance of busbar protection on the same 13.8 K.V bus where the faulty feeder is connected. We checked all of C.T wires and were found healthh. moreover, we tested the busbar differential and it operated whinin the pre-set range and settins. The type of relay is numerical ABB REB650 used as high impedance busbar protection for this particular bus bar. What could lead to such an event and how to avoid it? What could be reviewed and checked else? Thanks
2026-01-26T17:07:57
https://www.reddit.com/r/LocalLLaMA/comments/1qnlywm/high_impedance_busbar_differential_protection/
Slight-Sound-8871
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnlywm
false
null
t3_1qnlywm
/r/LocalLLaMA/comments/1qnlywm/high_impedance_busbar_differential_protection/
false
false
self
0
null
I built MimikaStudio - a native macOS app for voice cloning using Qwen, Kokoro and XTTS2
9
**MimikaStudio** is a local-first voice cloning and TTS desktop app. Clone any voice from just 3 seconds of audio, use premium preset speakers, or generate fast high-quality speech for narration and content creation. https://preview.redd.it/fkmq0nbb6qfg1.png?width=3218&format=png&auto=webp&s=ab708d8722fcaca54067eb8a9556a0a69c76a73d I ported my old Gradio app into a beautiful native Flutter desktop application, specifically for Apple Silicon users who want a polished UI with proper macOS integration. # Key Features * **3-Second Voice Cloning** Qwen3-TTS can capture a speaker's tone, rhythm, and accent from remarkably short samples * **9 Premium Preset Voices** No reference audio needed. English, Chinese, Japanese, Korean speakers with distinct personalities * **Fast British TTS** Kokoro delivers sub-200ms latency with crystal-clear British RP and American accents * **PDF Reader** Load any PDF and have it read aloud with sentence-by-sentence highlighting * **Emma IPA** British phonetic transcription powered by your choice of LLM (Claude, OpenAI, Ollama) * **Runs locally** No cloud APIs for TTS, everything on your machine https://preview.redd.it/i5e7o7ce6qfg1.png?width=3164&format=png&auto=webp&s=03aeb964b75237396d16c8b6b9d98c62f1b8db4a # Tech Stack * Flutter desktop UI (macOS) * FastAPI Python backend * Qwen3-TTS (0.6B/1.7B), Kokoro-82M, XTTS2 * Apple Silicon optimized (MPS where supported) # GitHub [https://github.com/BoltzmannEntropy/MimikaStudio](https://github.com/BoltzmannEntropy/MimikaStudio) Happy to answer any questions!
2026-01-26T17:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1qnlylb/i_built_mimikastudio_a_native_macos_app_for_voice/
QuanstScientist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnlylb
false
null
t3_1qnlylb
/r/LocalLLaMA/comments/1qnlylb/i_built_mimikastudio_a_native_macos_app_for_voice/
false
false
self
9
null
Nanbeige4-3B-Thinking-2511 is great for summarization
23
Sometimes I dont want to watch a 30 minute youtube video on some drama or tech news, but just feeding the transcript into this model works so well. I use a character card thats just telling it thats its for summarization so I can be lazy and not tell it what I want it to do every time. whats also great about it being a thinking model is if its points on the video are two short or vague you can look at the thinking data and its organized like every point in the video in the same way as the output, and reading both of those takes like 3 minutes at most compared to the 30 minute video the fact its 3b blows my mind when reading its thinking text. its also pretty good at writing, its thinking makes me laugh when you try to change a scene to quickly and it thinks you are having some sort of mental breakdown
2026-01-26T17:03:14
https://www.reddit.com/r/LocalLLaMA/comments/1qnltu7/nanbeige43bthinking2511_is_great_for_summarization/
Background-Ad-5398
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnltu7
false
null
t3_1qnltu7
/r/LocalLLaMA/comments/1qnltu7/nanbeige43bthinking2511_is_great_for_summarization/
false
false
self
23
null
Thought I won the lottery...but it was actually the powerball!!!
64
I pop in to my local Walmart once a week to look for shit like this. recently just picked up two 2tb 850x from Walmart for 189 each but this was just ridiculous. moral of the story is CHECK WALMART!
2026-01-26T16:58:25
https://www.reddit.com/gallery/1qnlogu
braydon125
reddit.com
1970-01-01T00:00:00
0
{}
1qnlogu
false
null
t3_1qnlogu
/r/LocalLLaMA/comments/1qnlogu/thought_i_won_the_lotterybut_it_was_actually_the/
false
false
https://b.thumbs.redditm…ieqyLG4TFylE.jpg
64
null
Built an open-source, self-hosted AI agent automation platform — feedback welcome
0
Hey folks 👋 I’ve been building an open-source, self-hosted AI agent automation platform that runs locally and keeps all data under your control. It’s focused on agent workflows, scheduling, execution logs, and document chat (RAG) without relying on hosted SaaS tools. I recently put together a small website with docs and a project overview. Links to the website and GitHub are in the comments. Would really appreciate feedback from people building or experimenting with open-source AI systems 🙌
2026-01-26T16:44:07
https://www.reddit.com/r/LocalLLaMA/comments/1qnl9ap/built_an_opensource_selfhosted_ai_agent/
Feathered-Beast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnl9ap
false
null
t3_1qnl9ap
/r/LocalLLaMA/comments/1qnl9ap/built_an_opensource_selfhosted_ai_agent/
false
false
self
0
null
ClaudeAgent+Ollama+gpt-oss:20b slow to token generation on M3 Pro MBP
0
I was just playing around with using Claude CLI and Ollama for local use on an M3 Pro and it is super slow on time to token generation. Is this normal for the Macs? I picked this up for the unified memory and the ability to do demos of some apps. I feel like my 3060 12 gb isn't even this slow. Thoughts optimizations?
2026-01-26T16:21:41
https://www.reddit.com/r/LocalLLaMA/comments/1qnklxa/claudeagentollamagptoss20b_slow_to_token/
desexmachina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnklxa
false
null
t3_1qnklxa
/r/LocalLLaMA/comments/1qnklxa/claudeagentollamagptoss20b_slow_to_token/
false
false
self
0
null
transformers v5 final is out 🔥
425
Hey folks, it's Merve from Hugging Face 👋🏻 We've finally released the first stable release of transformers v5 in general audience, it comes with many goodies: \- Performance especially for Mixture-of-Experts (6x-11x speedups) \- No more slow/fast tokenizers: way simpler API, explicit backends, better performance \- dynamic weight loading: way faster, MoE now working with quants, tp, PEFT.. We have a migration guide on the main branch; please take a look at it in case you run into issues, we also have documented everything in release notes. We appreciate the feedbacks, so feel free to create issues if you have any!
2026-01-26T16:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1qnk7fq/transformers_v5_final_is_out/
unofficialmerve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qnk7fq
false
null
t3_1qnk7fq
/r/LocalLLaMA/comments/1qnk7fq/transformers_v5_final_is_out/
false
false
self
425
null