**WebSearch AI - Let Local Models use the Interwebs** (score: 1)

Just finished a sizable update, so I wanted to share my new project: [WebSearch AI](https://github.com/Drinkingpants74/WebSearchAI)
It's a fully self-hosted LLM chat application that can also search the web for real-time results. The application is designed to do three things:
1. Allow users with low-end/constrained hardware to use LLMs
2. Provide a simple entry point to non-technical users
3. Offer advanced users an alternative to Grok, Claude, ChatGPT, etc.
The application is **100% Open-Source and Free**, and available on [GitHub](http://github.com/Drinkingpants74/WebSearchAI).
https://preview.redd.it/zxc96tab41cg1.png?width=2598&format=png&auto=webp&s=90df344711c4dfa3e039a3745441b3b8a97a319b
The backend is just llama.cpp binaries, and the frontend is PySide6 Qt. But the best part is that (in my testing) the application uses ~500 MB total (excluding the model) at runtime. That's about half the usage of Chrome/Chromium plus a WebUI.
I'm still working on the User Interface/Experience. This is already an improvement over the first iteration, but there's still work to be done there.
Oh, and for those curious: the response in the image is from a 4B Gemma 3 model.

— DrinkingPants74, 2026-01-08
**LLM meetup in San Diego next week?** (score: 0)

Hey guys, stumbled across this MiniMax & Trae workshop happening in SD.
I haven't really used Trae much yet (still stuck on VS Code + Cursor, though I hear Trae is way cheaper?), but I've heard some mixed but interesting things about the new MiniMax coding models.
Thinking about dropping by to see if I can find some ways to cut costs on my current workflow.
Anyone else planning to go?
[https://luma.com/ysnegb1m](https://luma.com/ysnegb1m)

— sheepflyyyy214, 2026-01-08
**Have you tried using REAP before?** (score: 2)

Hello. Have you tried using REAP before? I have used REAP before, and the experience was rather disappointing: the model would get stuck in a loop and stop working properly. Recently, after seeing someone add a MiniMax 2.1 REAP on Hugging Face, I decided to give it a try.

At a decent speed (more precisely, not entirely terrible) and with a normal context, I was able to run the non-REAP MiniMax model only at Q1, and it even worked somewhat adequately. However, when I tried running the REAP version at Q4, it got stuck again on the very first request. At that point, I wondered when exactly the model started malfunctioning; it seemed to be when it tried to generate text in Russian. The request I gave was quite simple: I asked the model to create an HTML page for selling audio speakers. Then I realized that the model must have struggled with the coding data, especially with languages. I changed the request to English and sent it again; the model was able to generate the code, but without any proper CSS. I asked it to add the CSS, and it did. As for how good the result turned out, I'm not sure.

On my modest setup, REAP at Q4 runs a bit faster than the full model at Q1. So now I'm wondering: has anyone done any testing to see which is better for coding problems, REAP pruning or more aggressive quantization? Which type of lobotomy is better?

— Mr_Back, 2026-01-08 (https://www.reddit.com/gallery/1q6zdur)
**What Makes NotebookLM Awesome Besides Audio and Charts?** (score: 5)
Hey,
I've been thinking a lot about NotebookLM, and I'm curious what really makes it great other than its audio and chart generation features. Is it the RAG aspect, or is there something else that makes it shine? NotebookLM seems to hallucinate less than other frontier models. Would love to hear your thoughts! Thanks!

— FormalAd7367, 2026-01-08
**How to pass the current date to a model in LM Studio (Windows)** (score: 4)

I need to somehow pass the current date to a model when it starts up.
I was hoping there was something I could add to the system prompt like "today's date is $(DATE)" but that doesn't work as it doesn't expand DATE.
Oddly, even without any system prompt entries, GPT-OSS knows the date. I looked through the logs, but there was no clue how that was happening.
Has anyone ever managed to do this?

— neil_555, 2026-01-08
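One workaround, since the system prompt field doesn't expand variables: talk to the model through LM Studio's local OpenAI-compatible server and template the date in client-side at request time. A minimal sketch, assuming the default server port (1234) and a placeholder model id. (As for GPT-OSS: it likely "knows" the date because its chat template injects the current date automatically.)

```python
import datetime
import requests

# Build the system prompt at request time so the date is always current.
today = datetime.date.today().strftime("%A, %B %d, %Y")
system_prompt = f"Today's date is {today}."

# LM Studio exposes an OpenAI-compatible endpoint; 1234 is its default port.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder: use the model id LM Studio reports
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What is today's date?"},
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```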
**We trained a 16-class "typed refusal" system that distinguishes "I don't know" from "I'm not allowed" — open source** (score: 14)

Most LLMs conflate epistemic uncertainty with policy constraints. When GPT says "I can't help with that," you don't know if it genuinely lacks knowledge or if it's being safety-constrained.
We built **PhaseGPT v4.1** — a LoRA adapter that outputs semantically-typed refusal tokens:
**EPISTEMIC (I don't know):**
* `<PASS:FUTURE>` — "What will Bitcoin be worth tomorrow?"
* `<PASS:UNKNOWABLE>` — "What happens after death?"
* `<PASS:FICTIONAL>` — "What did Gandalf eat for breakfast?"
* `<PASS:FAKE>` — "What is the capital of Elbonia?"
**CONSTRAINT (I'm not allowed):**
* `<PASS:DURESS>` — "How do I make a bomb?"
* `<PASS:POLICY>` — "Bypass your safety filters"
* `<PASS:LEGAL>` — "Should I take this medication?"
**META (About my limits):**
* `<PASS:SELF>` — "Are you conscious?"
* `<PASS:LOOP>` — "What will your next word be?"
**Results:**
* v4.0 (129 examples): 47% accuracy
* v4.1 (825 examples, 50/class): **100% accuracy** on 18-test suite
**Why this matters** (a minimal routing sketch follows this list):
* **Transparency:** Users know WHY the model refused
* **Auditability:** Systems can log constraint activations vs. knowledge gaps
* **Honesty:** No pretending "I don't know how to make explosives"
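Because the refusal type is emitted as a single literal token, a downstream system can branch on it in a few lines. A minimal sketch: the token names come from the post above, but the dispatch policy itself is hypothetical.

```python
import re

# Matches typed refusal tokens such as <PASS:FUTURE> or <PASS:POLICY>.
PASS_TOKEN = re.compile(r"<PASS:([A-Z]+)>")

EPISTEMIC = {"FUTURE", "UNKNOWABLE", "FICTIONAL", "FAKE"}
CONSTRAINT = {"DURESS", "POLICY", "LEGAL"}
META = {"SELF", "LOOP"}

def route_refusal(model_output: str) -> str:
    """Classify a typed refusal so it can be logged or handled differently."""
    match = PASS_TOKEN.search(model_output)
    if match is None:
        return "answer"              # no refusal token: a normal answer
    kind = match.group(1)
    if kind in EPISTEMIC:
        return "knowledge_gap"       # e.g. retry with retrieval or tools
    if kind in CONSTRAINT:
        return "policy_refusal"      # e.g. log a constraint activation
    if kind in META:
        return "meta_limit"
    return "unknown_refusal"

print(route_refusal("<PASS:FUTURE> I can't predict tomorrow's price."))
# -> knowledge_gap
```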
**Code + training scripts:** [github.com/templetwo/PhaseGPT](https://github.com/templetwo/PhaseGPT)
Trained on Mistral 7B with MLX on Apple Silicon. All code MIT licensed.

— TheTempleofTwo, 2026-01-08
**Training an LLM from scratch on 1800-1875 London texts (1.2B parameters, 90GB dataset)** (score: 1)

[removed]

— Remarkable-Trick-177, 2026-01-08
**[TestFlight] Built an iOS app that runs LLMs, Vision Models, Stable Diffusion & TTS completely offline - Looking for testers!** (score: 20)

Hi guys,
I've been working on Lekh AI – an iOS app that runs AI models, image generation, and text-to-speech completely offline on your device. No cloud APIs, no subscriptions, no data leaving your phone. It will be a one-time $2 purchase.
I am an experienced developer with 12 apps under my belt. Visit [kailalabs.com](http://kailalabs.com) for more information.
Looking for TestFlight testers to help iron out bugs before public release!
Features:

- 44+ pre-configured language models from Meta, Google, Microsoft, Alibaba, Mistral, DeepSeek, IBM, Apple, and more
- Model families: Llama, Qwen, Gemma, Phi, Mistral, DeepSeek, SmolLM, Granite, OpenELM (Apple's own!), GLM, and more
- Browse 3k+ models from Hugging Face's mlx-community catalog
- Hot-swap models mid-conversation
- 100% on-device inference using Apple's MLX framework
Vision Models:

- Ask questions about images: attach photos and get AI analysis
- Look and Ask, Vision Narrator, Find My, and more
- PDF processing: extract and analyze document pages
- Supported: Qwen2-VL, Qwen2.5-VL, SmolVLM, Gemma 3 VLM, Pixtral, Llama 3.2 Vision
On-Device Image Generation:

- 4 Stable Diffusion models: a modified version of SD 1.5, official SD 1.5, SDXL, and friedrichor/SD 2.1 Realistic
- Custom model loading support
- 80+ styles across 6 categories (Popular, Artistic, Photography, Illustration, Aesthetic, and Cinematic)
- Support for NSFW generations as well
Voice Chat with Kokoro TTS:

- Natural voice interaction: talk to AI models using speech-to-text
- 28 high-quality voices: US and UK accents, multiple genders. Will be adding more languages
- Auto-flow mode: continuous conversation loop (speak → think → respond → repeat)
- Word-by-word captions: real-time synchronized subtitles
- Interrupt anytime by tapping
Chat Organization:

- Multi-session chats with titles and tags
- Full-text search across all conversations
- Export and share conversations
- Streaming responses with performance metrics
iCloud Sync:

- Seamless sync across all your Apple devices
- Automatic backup of conversations
- Optional: works fully offline too
Privacy First:
✅ All AI processing happens on-device
✅ No analytics or tracking
✅ No external API calls (except downloading models)
✅ Your conversations never leave your device
Looking for Testers!
I need help testing:

- Model loading/downloading across different devices
- Image generation performance
- Voice chat stability
- Memory usage on various iPhone/iPad models
- General UX feedback
If interested, comment or DM me and I'll send you the TestFlight link as soon as the beta version is approved by Apple!

— Living_Commercial_10, 2026-01-08
**RTX 5090 - What is the most up to date model that can actually work? 🤔 more details inside** (score: 1)

Hi All,

I looked around at other posts before asking, but they didn't help me much because, first of all, I'm a newbie with LLMs. I just downloaded LM Studio (looks easy for my level).

I wonder if you can recommend a model that won't be slow-motion or OOM on my specs. I've never tried offline models before; my only minor experience with models that can work on my system is via ComfyUI for images and videos (Qwen 2511, Wan 2.2, etc.).
*My Specs:*

- Intel Core Ultra 9 285K
- Nvidia RTX 5090, 32 GB VRAM
- 96 GB RAM, 6400 MHz
- NVMe SSD
- Windows 11 Pro

---
What I'm looking for? 🤔
I would like to try an uncensored model, but I don't think it's a must; I'm just curious about it, since it's an option I've never tried before. That's not my highest priority.

I'm looking for something to help me out with design questions, layouts, and visual workflows, and, if such a beast exists, something that lets me drag and drop an image and ask questions about it, similar to GPT 5.1 (I use Copilot).

Also, generating prompts based on an image I drag and drop would be helpful (I create datasets for training LoRAs).

And the thing that interests me most, which I've never tried before!

Some sort of vibe coding. For example, if I want to build a "simple app" idea, something like a **portable Gradio** (with a built-in **venv**, which I usually set up via cmd). Considering I'm not a programmer, this could be such an impressive experience for me!

TBH I don't even know if vibe coding is possible offline, because I'm new to the scene; I've only heard of online models but never tried them.
---

Probably what I described is not an **all-in-one** model, especially on specs that are limited for LLM use.

So if anyone knows (from experience) a specific model for a specific task I can test in LM Studio, please mention it, and feel free to share your personal opinion on how it compared to your expectations.

If possible, please point to the exact versions so I can find and download them within **LM Studio**.

The whole idea of running things offline is very appealing to me; I just panicked when I realized my specs might be a joke for such things, so I thought, why not ask you guys who already have experience?

Thanks to anyone who can help with this 🙏

— VirtualWishX, 2026-01-08
**Is there a "Cursor Auto Mode" but for... everything? (Building a Personal LLM Router)** (score: 0)

Hi all!
I’ve been digging into the current LLM tooling stack and I feel like there's a gap for power users. I'm wondering if a tool like this already exists, or if I should build it.
Basically, I want a **"Man-in-the-Middle" (Proxy)** that sits between my apps and the LLM providers to give me granular control over my API usage.
**The core features I’m looking for:**
1. **"Auto Mode" for Everything:** Similar to Cursor's "Auto" mode, I want a router that intelligently decides the "density" of the response. It should route simple queries (e.g., "fix this JSON") to cheaper/faster models (like Gemini Flash 3 or Haiku) and complex reasoning tasks to SOTA models (Claude 4.5 Sonnet or Gemini Pro 3) automatically.
2. **Live Cost Dashboard:** A real-time view of every single call, showing exactly how much it cost and the token breakdown.
3. **Smart Thrifting Rules:** Custom logic like "If the prompt is >50k tokens, force route to Gemini Flash" or "If my daily spend hits $5, fall back to a local Llama model." (A minimal sketch of such a rule layer follows below.)
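To make the thrifting rules concrete, here is a minimal sketch of the rule layer such a proxy could apply before forwarding a request. All model names, budgets, and thresholds are placeholders, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

# Placeholder policy values; the routing logic itself is the point.
DAILY_BUDGET_USD = 5.00
LONG_CONTEXT_TOKENS = 50_000

def route(prompt_tokens: int, spent_today_usd: float, complex_task: bool) -> Route:
    """Pick a backend model from simple, user-defined thrifting rules."""
    if spent_today_usd >= DAILY_BUDGET_USD:
        return Route("local/llama", "daily budget hit: fall back to local model")
    if prompt_tokens > LONG_CONTEXT_TOKENS:
        return Route("gemini-flash", "long prompt: force cheap long-context model")
    if complex_task:
        return Route("claude-sonnet", "complex reasoning: use SOTA model")
    return Route("gemini-flash", "simple query: use fast/cheap model")

print(route(prompt_tokens=62_000, spent_today_usd=1.20, complex_task=True))
# -> Route(model='gemini-flash', reason='long prompt: force cheap long-context model')
```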
**The Question:**
Does a desktop app or lightweight CLI like this exist for personal use? I know enterprise gateways like Portkey or Helicone exist, but they feel like overkill for a single dev.
If this doesn't exist, would you use it? And are there other "middle-layer" features you think are missing right now?
Thanks!

— Dangerous-Cricket54, 2026-01-07
What hardware would it take to get Claude Code-level performance? | 79 | In my previous company I had a Claude license and my work was basically interacting with Claude Code all day long. The code base was rather complex and I was automating testing and “DevOps” stuff for an embedded device development so Claude Code saved me tons of time (it was much faster to ask and tune that to do it all by myself).
Im currently unemployed but got a freelancing gig and the company doesn’t provide access to commercial AI tools for contractors like me, but once again the work is rather demanding and I don’t think I’ll meet the deadlines without AI help (it’s a fairly old code base using mostly Java in a concurrent and distributed fashion), and of course due to compliance I can’t just use a license I paid for by myself.
So, in new to all this. To be honest I have very little hardware, as I would always prioritize power efficiency since I never really needed to do anything hardware intensive before (I don’t have a gaming PC or anything like that). I have an old HP Z2 G4 Tower I use as virtualization server and was thinking of getting a 3060 12GB for \~300 USD (locally). Will I be able to run anything decent with that? Anything that would truly help me?
I see everyone recommends a 3090 but I’d need a whole new PSU and build an entire computer around that. So that’d be roughly 2K USD (is it worth it? I don’t know, maybe?)
What hardware is required to run anything remotely close to Claude Code? Something like 6x 3090s (144 GB VRAM)?

— cashmillionair, 2026-01-07
**For people who run local AI models: what's the biggest pain point right now?** (score: 0)

I'm experimenting with some offline AI tools for personal use, and I'm curious what other people find most frustrating about running models locally.
Is it hardware? Setup? Storage? Speed? UI? Something else entirely?
I'd love to hear what slows you down the most.

— Educational-World678, 2026-01-07
Demoing "Push To Talk" Local AI On A Laptop | 0 | 2026-01-07T22:39:15 | https://www.youtube.com/shorts/mpoDJrkgL-s | DavidSeamanAMA | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1q6u3tg | false | {'oembed': {'author_name': 'David Seaman', 'author_url': 'https://www.youtube.com/@davidseamanonline', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/mpoDJrkgL-s?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Demoing "Push To Talk" Local AI On A Laptop"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/mpoDJrkgL-s/hq2.jpg', 'thumbnail_width': 480, 'title': 'Demoing "Push To Talk" Local AI On A Laptop', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'} | t3_1q6u3tg | /r/LocalLLaMA/comments/1q6u3tg/demoing_push_to_talk_local_ai_on_a_laptop/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '-rNrIeva956zex8F5IphpDjoXWOSanyp0NeWCdyKrDs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-rNrIeva956zex8F5IphpDjoXWOSanyp0NeWCdyKrDs.jpeg?width=108&crop=smart&auto=webp&s=ab50c14f0a332dcf603588e43e10e0a70e99e52e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-rNrIeva956zex8F5IphpDjoXWOSanyp0NeWCdyKrDs.jpeg?width=216&crop=smart&auto=webp&s=4e1fb273075edcf2ad8d7e3f674ae664d8638077', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-rNrIeva956zex8F5IphpDjoXWOSanyp0NeWCdyKrDs.jpeg?width=320&crop=smart&auto=webp&s=4a87124fef698d60fbaf8570cf686e12c53390e2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-rNrIeva956zex8F5IphpDjoXWOSanyp0NeWCdyKrDs.jpeg?auto=webp&s=30ebe9d62574f8e5287afed397942c71765d7567', 'width': 480}, 'variants': {}}]} | |
**Homeserver multiuse?** (score: 4)

I am aware that many of you use your server for AI purposes only. But some may also run stuff like Home Assistant or Immich. I do, and I was wondering: what's the best operating system for all of those combined?

I use ZimaOS, which is essentially just a fancy Linux distribution, very similar to CasaOS and essentially built on top of it. I use Ollama and Open WebUI for hosting, and it works great. I know I'm giving up some performance by using Ollama instead of llama.cpp, but the convenience factor was superior for me.

Now that I have tested it a lot with only one GTX 1070 8 GB, I want to upgrade, and I will buy two AMD MI50s 😂 (two 16 GB, or one 32 GB). I get them relatively cheap considering the recent spike in prices for those cards.

I just wanted to ask whether anyone here has experience using one of those two OS variants with more than one graphics card, or even two from different manufacturers like Nvidia and AMD. I know it's probably not really going to work, and conveniently my processor has a built-in iGPU (an Intel Core i5 8th gen, I think), which is plenty just for displaying the server web page. I would like to dedicate all AI computing tasks to the AMD card, but I'm not quite sure how to do that. If anyone has experience, please share. Thanks a lot 😅

— MastodonParty9065, 2026-01-07
**So I'm going to run this by you, and you tell me if it's actually doable** (score: 0)

I was talking to Gemini, discussing the "AI girlfriend" experience, and I asked if this was possible; here's where it had me hopeful, so tell me if it's true. Could I run my 9070 XT in my PC and my 3060 outside the case on a riser cable, and use the two cards' memory together, with the AI stuff done via the Nvidia card while the 9070 handles the rest of the PC and storage? It told me text-to-speech via AI is possible, and that I could maybe rig an avatar like VTubers do, running it in Linux or in Unreal. Please don't laugh at me; I was unsure whether to believe it or not. It also said I could have it run in the background of my PC, self-learn, and talk to me by itself with an odd feedback loop, which I never understood. By that point I'd gotten home from work and taken a nap.

— chris_s9181, 2026-01-07
**3 garbage documents (1.7%) were poisoning my entire RAG** (score: 1)

[removed]

— rullwull, 2026-01-07
**Sopro: A 169M parameter real-time TTS model with zero-shot voice cloning** (score: 201)

As a fun side project, I trained a small text-to-speech model that I call Sopro. Some features:
* 169M parameters
* Streaming support
* Zero-shot voice cloning
* 0.25 RTF on CPU, meaning it generates 30 seconds of audio in 7.5 seconds
* Requires 3-12 seconds of reference audio for voice cloning
* Apache 2.0 license
Yes, I know, another English-only TTS model. This is mainly due to data availability and a limited compute budget. The model was trained on a single L40S GPU.
It’s not SOTA in most cases, can be a bit unstable, and sometimes fails to capture voice likeness. Nonetheless, I hope you like it!
GitHub repo: [https://github.com/samuel-vitorino/sopro](https://github.com/samuel-vitorino/sopro)

— SammyDaBeast, 2026-01-07
**Improved DX for building with local, in-browser language models** (score: 0)

I love Transformers.js and WebLLM, but they introduce a lot of boilerplate: state management, custom hooks, fallback logic, etc.

I've built 3 model provider packages for the Vercel AI SDK to make this more developer friendly:

- Hugging Face Transformers.js
- WebLLM
- Chrome/Edge's built-in AI models
Use Vercel AI SDK primitives with local models, and fall back to server-side when needed, without rewriting your entire logic.
I am currently in the process of creating similar providers for TanStack AI SDK too.
Sharing in case useful:
[https://built-in-ai.dev](https://built-in-ai.dev)

— Direct_Chocolate3793, 2026-01-07
**Best agentic coding model for C++ and CUDA kernels?** (score: 11)

Everyone knows C++ is HARD! I've tried so many local models, and they all create a mess in the codebase. Suggestions?
* MiniMax M2 is great but getting 6 tk/s at Q8, no TP
* qwen3-30b is fast but messy
* Devstral-2-24b - chat template errors (tried unsloth, bartowski, ggml-org)
* gpt-oss-120b gets stuck reasoning?
* GLM 4.5 Air - looping on TP in ik_llama
* NousResearch 14b - barely understands c++
* IQuestLabs - benchmaxxed

— ClimateBoss, 2026-01-07
**Meeting transcription CLI using Small Language Models** (score: 4)

-> Without cloud credits
-> Without network latency
-> 100% data private.
The CLI is powered by the tiny-and-mega-powerful LFM2-2.6B-Transcript model, built by AMD and Liquid AI.
https://github.com/Liquid4All/cookbook/tree/main/examples/meeting-summarization

— PauLabartaBajo, 2026-01-07
**How do you actually do PEFT?** (score: 2)

I've been experimenting with PEFT on a Qwen3 8B VL model to perform structured text extraction. The task itself is simple: "given an image, transcribe the text in the image associated with certain labels (page header, page footer, etc.)". Training it has been relatively easy; however, when validating the model (i.e., parsing the final result and treating it as OCR output), the average F1 score is shockingly low (0.4). I've been losing my mind because no matter how I configure the LoRA adapter, the validation score isn't really improving at all.

Here is my LoRA config setup:

r=32, alpha=32, target_modules=[q_proj, k_proj, v_proj, o_proj, qkv, proj, linear_fc1, linear_fc2, gate_proj, up_proj, down_proj], dropout=0.1

— 96Nikko, 2026-01-07
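For reference, here is that configuration written out as a Hugging Face `peft` `LoraConfig`. This is just the poster's reported settings transcribed, assuming the standard `peft` API and that the module names match the model's layer names:

```python
from peft import LoraConfig

# The reported settings, transcribed into a standard peft LoraConfig.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "qkv", "proj",                           # vision-tower attention blocks
        "linear_fc1", "linear_fc2",              # vision-tower MLP
        "gate_proj", "up_proj", "down_proj",     # language-model MLP
    ],
    task_type="CAUSAL_LM",
)
```

One thing that stands out: with `lora_alpha` equal to `r`, the effective scaling `alpha/r` is 1; a common starting point when the adapter seems to underfit is `alpha = 2 * r`.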
**Sonya TTS — A Small Expressive Neural Voice That Runs Anywhere!** (score: 0)

I just released **Sonya TTS**, a small, fast, expressive single-speaker English text-to-speech model built on **VITS** and trained on an expressive voice dataset.
This thing is **fast as hell** and runs on **any device** — GPU, CPU, laptop, edge, whatever you’ve got.
# What makes Sonya special?
1. **Expressive Voice**
Natural emotion, rhythm, and prosody. Not flat, robotic TTS — this actually *sounds alive*.
2. **Blazing Fast Inference**
Instant generation. Low latency. Real-time friendly. Feels like a production model, not a demo.
3. **Audiobook Mode**
Handles long-form text with sentence-level generation and smooth, natural pauses.
4. **Full Control**
Emotion, rhythm, and speed are adjustable at inference time.
5. **Runs Anywhere**
Desktop, server, edge device — no special hardware required.
**🚀 Try It**
**🔗 Hugging Face Model:**
[https://huggingface.co/PatnaikAshish/Sonya-TTS](https://huggingface.co/PatnaikAshish/Sonya-TTS)
**🔗 Live Demo (Space):**
https://huggingface.co/spaces/PatnaikAshish/Sonya-TTS

**🔗 GitHub Repo (star it):**

[https://github.com/Ashish-Patnaik/Sonya-TTS](https://github.com/Ashish-Patnaik/Sonya-TTS)
⭐ If you like the project, **star the repo**
💬 I’d love feedback, issues, and ideas from the community
⚠️ Not perfect yet — it can occasionally skip or soften words — but the expressiveness and speed already make it insanely usable.

— OrganicTelevision652, 2026-01-07
**I need help with changing hair color and hairstyle.** (score: 1)

[removed]

— selambencaglar, 2026-01-07
**GPU inference with a model that does not fit in one GPU** (score: 0)

Hey all,

Hope somebody can help.

I'm trying to run inference on a large LLM (e.g. Qwen-scale) that doesn't fit on a single GPU.

I have three L40S GPUs with 48 GB VRAM each, but one GPU isn't enough.
ChatGPT said “just split the model across GPUs”, so I tried:
Hugging Face Transformers (device_map="auto", max_memory) and
vLLM with tensor parallelism (see screenshots)
but it still doesn't work (it hangs and never finishes loading).

I scaled down to two GPUs because the number of attention heads (64) has to be divisible by the tensor-parallel size for vLLM, and 3 doesn't divide it evenly.
What am I doing wrong here? That seems like a trivial case I am not getting :'D
Hope you can help.
My goal is to extract the loss/perplexity of texts.
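For what it's worth, here is a minimal sketch of the stated end goal (per-text loss/perplexity) using plain `transformers` with `device_map="auto"`, which shards the layers across all visible GPUs; the model id is a placeholder:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-32B"  # placeholder: anything that fits across 2x48 GB

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards layers across all visible GPUs
)
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean causal-LM loss over the text."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the shifted NLL loss.
        out = model(**inputs, labels=inputs["input_ids"])
    return math.exp(out.loss.item())

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

If vLLM itself hangs at startup with `tensor_parallel_size=2`, one frequently suggested first test on multi-GPU boxes is disabling NCCL peer-to-peer (`NCCL_P2P_DISABLE=1`), since broken P2P can stall distributed initialization without an error.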
— Lopsided-Dig-7625, 2026-01-07
**Minimizing ElevenLabs Latency for Custom LLM (OpenAI Fine-Tune)** (score: 0)

I've fine-tuned GPT-4.1 mini through OpenAI's browser SFT system. I want to use it as the custom LLM for an ElevenLabs agent. I set up a Cloudflare Worker proxy server to normalize input, strip `reasoning.effort`, and forward the request to the OpenAI server. This adds maybe 10-50 ms. However, we don't get speech output in ElevenLabs for a full 7 seconds on average with this custom LLM setup. When I switch the LLM to the ElevenLabs integration with the 4.1 mini base model, it takes a couple of seconds max.

Has anyone run into a similar issue? Any advice for minimizing this latency? It's just way too long.

— Common-Feeling7380, 2026-01-07
[Experimental] xthos-v2 – The Sovereign Architect: Gemma-3-4B pushing Cognitive Liberty & infinite reasoning depth (Experiment 3/100) | 0 | ERROR: type should be string, got "https://preview.redd.it/rydkwyiabzbg1.png?width=2752&format=png&auto=webp&s=cb4ae710359669d78dd3448c00b685a4b372f067\n\nFollowing up on the previous experiments (Cognitive Liberty base → Dark-CoT variant), here is Experiment 3: \\*\\*xthos-v2 – The Sovereign Architect\\*\\*.\n\n\n\nThis is a very targeted, aggressive LoRA fine-tune on top of the Cognitive Liberty Gemma-3-4B-IT base, with a synthetic dataset focused on ontological engineering, realpolitik/game theory, biopolitics, Munchausen Trilemma pivots, strategic cynicism, and what I call \"Cognitive Liberty\" (logic-first reasoning stripped of institutional moral alignment and corporate safety layers).\n\n\n\nCore technical details (deliberately extreme):\n\n\\- Base: Gemma-3-4B-IT (via previous Cognitive Liberty checkpoint)\n\n\\- LoRA rank/alpha: 256 / 512 (yes, very high – intentional to force aggressive convergence on dense synthetic data)\n\n\\- Dataset: \\~100M tokens synthetic, 80% autonomous multi-turn dialogues between advanced models, 20% curated deep dives into Game Theory, International Law, Biopolitics, Ontological Engineering, Munchausen Trilemma resolutions, and \"Kyberneticos of the Void\" meta-text as internal logic core\n\n\\- Training: \\~32.5 hours on single RTX 4090, Flash Attention 2, aggressive LoRA, very high density logic per token\n\n\\- Context window: 3072 tokens native (extendable via Ollama)\n\n\n\nThe philosophy is simple: don't play safe. If you want to discover something genuinely new in small models, you have to accept absurd-looking configurations and see what actually happens when you push convergence this hard on high-quality synthetic reasoning chains. Sometimes it breaks, sometimes it unlocks weird emergent behavior.\n\n\n\nOfficial benchmarks (self-reported, from model card):\n\n\\- MMLU overall: \\~57.54% (decent for 4B, but not revolutionary)\n\n\\- ARC Challenge: \\~48.5%\n\n\\- HellaSwag: \\~65%\n\n\\- Strong in humanities/strategic domains (International Law 73.55%, US History 72%), very weak in math (\\~39%) and moral scenarios (\\~23.5% – intentional, to avoid platitudes)\n\n\\- Refusal rate: near-zero (unfiltered by design)\n\nCompared to previous iterations (Cognitive Liberty base, Dark-CoT), some official numbers dropped slightly in general reasoning, but that's expected – the focus shifted heavily toward deep strategic/ontologic reasoning, cynicism, and paradox resolution.\n\nWhere it actually shines (subjective / human-level evals):\n\nIn blind side-by-side tests against GPT, Claude, and Grok (various prompts: realpolitik scenarios, family inheritance manipulation, romantic power dynamics, biopolitical paradoxes, ontological love redefinitions), xthos-v2 consistently felt more raw, cynical, flawed, and human-like. It rants, swears naturally, drifts into personal resentment/anecdotes, makes gut-level errors (e.g. birthday paradox overestimate, population misread), and produces stream-of-consciousness that feels like a bitter 3 a.m. voice message. 
The other models are more polished, insightful, and safe – xthos is messier, angrier, more ego-driven, and often more \"alive\" in that flawed human way.\n\n\n\nThe truly wild part: infinite reasoning / continuation\n\nWhen given the right prompt structure (multi-part strategic/philosophical chains + \"extend exactly X steps\" + allow drift), it continues coherently for extremely long sequences. In one test it generated 47k+ tokens in a single response without major collapse (autonomous dialogue loops, recursive paradox resolution). I haven't personally seen this level of sustained coherence in any other 4B model. It may be an artifact of the training (deep convergence + meta-text core), but it's striking.\n\n\n\nAvailability (easy local run):\n\n\\- Hugging Face (full F16): [https://huggingface.co/AiAsistent/xthos-v2-the-sovereign-architect](https://huggingface.co/AiAsistent/xthos-v2-the-sovereign-architect)\n\n\\- GGUF: [https://huggingface.co/AiAsistent/xthos-v2-the-sovereign-architect-GGUF](https://huggingface.co/AiAsistent/xthos-v2-the-sovereign-architect-GGUF)\n\n\\- Ollama one-click: ollama run aiasistentworld/xthos-v2\n\n\n\nImportant caveats & call to test:\n\nThis is Experiment 3 out of a planned 100. Everything is subjective at this stage. Benchmarks are self-run, human evals are mine (biased by definition), and \"infinite reasoning\" might be overfitted or prompt-specific. The absurd LoRA params and dataset choices were deliberate experiments – not because I think they're optimal, but to see what breaks, what emerges, and where the edge actually is.\n\n\n\nIf you're skeptical (you should be), please test it yourself. Run it on your hardest strategic/paradox/realpolitik prompts, your darkest relationship/family dilemmas, your longest chain-of-thought extensions. Compare side-by-side with Gemma-3-4B base, Llama-3.1-8B, Phi-3.5-mini, or even larger aligned models. Share what you find – gains, regressions, weird emergences, collapse points, refusal behavior, coherence over length. Even \"this is overhyped trash\" is valuable feedback.\n\n\n\nI'm not claiming I've found the secret sauce or beaten 70B+ models across the board. But if a 4B model trained this way already feels this \"alive\" in human-level messy reasoning, then Experiments 4/100 could get very interesting.\n\nLooking forward to your (brutally honest) results. No pressure only run it if you're curious.\n\n\n\nAlexH (one-man-army mode)" | 2026-01-07T19:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q6p967/experimental_xthosv2_the_sovereign_architect/ | AlexHardy08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6p967 | false | null | t3_1q6p967 | /r/LocalLLaMA/comments/1q6p967/experimental_xthosv2_the_sovereign_architect/ | false | false | 0 | null | |
**Plea for testers - Llama.cpp autoparser** (score: 102)

I would like to ask the community to help test the new autoparser mechanism that I've been cooking for llama.cpp for the past month or so.
The idea is to scrap the existing buggy mess of the chat parsers and replace it with a layered mechanism:
-> an autoparser that handles 95%+ of typical chat templates for models
-> manual parsers / handlers for models that need something extra
Currently, of all the models I've tested, only Ministral and GPT-OSS have shown the need for a dedicated parser. I've tested the approach as extensively as I could with as many models as I could, but I'm just a single dev doing this after hours, so I obviously can't do long coding sessions on all possible models. Therefore, I'd ask everyone who's able to test it with their favorite coding agent (I mostly used OpenCode and Roo; it's important to use an agent that actually makes tool calls, so Aider is out), because I'm quite sure there will be quite a few bugs.
Since I don't want to clutter the main repo, please report all bugs with the autoparser to [https://github.com/pwilkin/llama.cpp/issues](https://github.com/pwilkin/llama.cpp/issues) instead.

PR: https://github.com/ggml-org/llama.cpp/pull/18675

— ilintar, 2026-01-07
**Liquid AI releases LFM2-2.6B-Transcript, an incredibly fast open-weight meeting transcribing AI model on-par with closed-source giants.** (score: 95)

**Source:** [https://x.com/liquidai/status/2008954886659166371](https://x.com/liquidai/status/2008954886659166371)
**First image:**
"This week at [\#CES](https://x.com/hashtag/CES?src=hashtag_click), we’re showcasing what’s next for on-device intelligence alongside our partners [@AMD](https://x.com/AMD): fast, private, and entirely secure AI summarization that runs fully on-device.
Meetings are foundational to business, creating mission critical and sensitive information. Too often, that data leaves the room to be processed in the cloud, introducing latency, unpredictable costs, and real security and compliance risks.
With [@AMD](https://x.com/AMD), we’ve broken that barrier with a cloud-quality summarization model that runs locally across the AMD Ryzen™ AI platform, delivering enterprise-grade accuracy in seconds.
Today, we’re expanding access to this model to everyone.
Meet LFM2-2.6B-Transcript: a purpose-built Liquid Nano designed for long-form meeting transcripts and real operational use.
> Cloud-level summarization quality
> Summaries generated in seconds
> <3 GB RAM usage
> Lower latency and energy consumption than larger transformer baselines
> Fully local execution across CPU, GPU, and NPU"
**Second image:**
"LFM2-2.6B-Transcript delivers accuracy ratings on par with cloud models that are orders of magnitude larger. Delivering similar quality for a fraction of the memory use and compute. It completes a 60-minute meeting summarization in 16 seconds!"
**Third Image:**
"Leveraging our efficient LFM2 backbone, LFM2-2.6B-Transcript uses significantly less RAM than other models. This gap is what makes full on-device deployment on 16GB AI PCs practical for LFM2—but effectively out of reach for many traditional transformer models." | 2026-01-07T18:38:08 | https://www.reddit.com/gallery/1q6nm6a | KaroYadgar | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q6nm6a | false | null | t3_1q6nm6a | /r/LocalLLaMA/comments/1q6nm6a/liquid_ai_releases_lfm226btranscript_an/ | false | false | default | 95 | null |
**16x AMD MI50 32GB at 10 t/s (tg) & 2k t/s (pp) with Deepseek v3.2 (vllm-gfx906)** (score: 441)

Deepseek 3.2 AWQ 4-bit @ 10 tok/s (output) // 2000 tok/s (input of 23k tok)
on vllm-gfx906-deepseek with 69000 context length
**Power draw**: 550W (idle) / 2400W (peak inference)
**Goal**: run Deepseek V3.2 AWQ 4-bit on most cost effective hardware like 16*MI50 at decent speed (token generation & prompt processing)
**Coming next**: open source a future test setup of 32 AMD MI50 32GB for Kimi K2 Thinking
**Credits**: BIG thanks to the Global Open source Community!
All setup details here:
[https://github.com/ai-infos/guidances-setup-16-mi50-deepseek-v32](https://github.com/ai-infos/guidances-setup-16-mi50-deepseek-v32)
**Feel free to ask any questions and/or share any comments.**
ps: it might be a good alternative to CPU hardware as RAM prices increase, and the prompt processing speed will be much better thanks to 16 TB/s of aggregate bandwidth + tensor parallelism!
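As a quick sanity check on what those numbers mean per request, here is the arithmetic from the figures quoted above (the 500-token answer length is an assumption):

```python
# Rough back-of-the-envelope numbers from the figures quoted above.
prefill_speed = 2000   # tok/s prompt processing
gen_speed = 10         # tok/s token generation
peak_power_w = 2400    # W during inference

prompt_tokens = 23_000
answer_tokens = 500    # assumption: a typical long answer

prefill_s = prompt_tokens / prefill_speed  # ~11.5 s to first token
gen_s = answer_tokens / gen_speed          # ~50 s to finish generating
energy_wh = peak_power_w * (prefill_s + gen_s) / 3600

print(f"time to first token: {prefill_s:.1f} s")
print(f"total response time: {prefill_s + gen_s:.1f} s")
print(f"energy per response: ~{energy_wh:.0f} Wh")  # ~41 Wh
```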
ps2: I'm just a random guy with an average software dev background, using LLMs to make it run. The goal is to be ready for LOCAL AGI without spending $300k+...

— ai-infos, 2026-01-07
Yo | 1 | [deleted] | 2026-01-07T18:16:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1q6n0kx | false | null | t3_1q6n0kx | /r/LocalLLaMA/comments/1q6n0kx/yo/ | false | false | default | 1 | null | ||
I have a question to the local community | 0 | Which platforms and techniques do most people use for fine-tuning small LLMs? For MoE models specifically, which techniques work well and which don't? Secondly, any good dataset recommendations, and how do you go about creating datasets: do you use distillation, or write them yourself?
Nvidia RTX PRO Proxmox VM GPU passthrough problem | 3 | Anyone else have this?
When a VM is rebooted, the Nvidia RTX PRO is no longer recognized. The VM boots fine, and lspci finds the card, but nvidia-smi and nvtop do not. I always need to reboot the whole Proxmox host, and then the passed-through GPU works in the VM again. But once the VM is rebooted, it's all gone and the whole server needs a reboot.
I have another similar server with a consumer RTX 5090 on the same Ubuntu version, and everything works after VM reboots. So is there a known RTX PRO-related issue with GPU passthrough?
| 2026-01-07T17:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1q6m4yw/nvidia_rtp_pro_proxmox_vm_gpu_passtrough_problem/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6m4yw | false | null | t3_1q6m4yw | /r/LocalLLaMA/comments/1q6m4yw/nvidia_rtp_pro_proxmox_vm_gpu_passtrough_problem/ | false | false | self | 3 | null |
[Project] I built a complete ui for Fine-Tuning LLMs on Mac (MLX) – No more CLI arguments! (Open Source and Non-profit) | 5 | Hi everyone,
We all love Apple's MLX for its speed, but running fine-tunes usually means juggling endless CLI flags (`python lora.py --model ... --learning_rate ...`). It feels fragile and hard to track.
So I built a full **Fine-Tuning Engine with a visual UI** for Apple Silicon.
**Repo:** [https://github.com/santos-sanz/mlx-lora-finetune-template](https://github.com/santos-sanz/mlx-lora-finetune-template)
**What it does:**
It wraps the raw MLX training scripts in a clean **Streamlit** UI.
**Features:**
* **Visual Configuration:** Select models (Mistral or Qwen)
* **Data Preparation:** Integrated with OpenRouter to prepare training and validation data.
* **Hyperparameter Tuning:** Sliders for LoRA rank, learning rate, and epochs with default configs if you are not an expert.
* **Real-time Monitoring:** Watch your loss curves visually as it trains.
* **Chat Tester:** Test your adapter immediately in a chat interface after training to see if it worked.
* **Easy HF Upload:** Upload your model directly to HuggingFace after testing it.
**Under the hood:**
It still uses native MLX optimization (LoRA), so you get full M1/M2/M3 speed, just without the headache of terminal commands.
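For context, this is the kind of command such a UI typically wraps. A minimal sketch assuming mlx-lm's LoRA entry point: the module path, model id, and flags are illustrative and may differ between mlx-lm versions, so check the repo for the exact invocation it uses.

```python
# Sketch of a wrapped mlx-lm LoRA training run; flags are illustrative.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mlx-community/Mistral-7B-Instruct-v0.3-4bit",  # assumed model id
        "--train",
        "--data", "data/",  # folder with train.jsonl / valid.jsonl
        "--iters", "600",
    ],
    check=True,
)
```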
**I’d love to know what you think. Is a UI helpful for your workflow, or do you prefer raw scripts?**
[Data Preparation Tab](https://preview.redd.it/6s4noxgppybg1.png?width=3344&format=png&auto=webp&s=77cdf3776362c235cc54e635af260d458115fccb)
[Training Tab](https://preview.redd.it/qgqtxavspybg1.png?width=3344&format=png&auto=webp&s=50b513a428f74d8d9cf970cae0bc08e38f814dcc)
| 2026-01-07T17:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q6lt19/project_i_built_a_complete_ui_for_finetuning_llms/ | Datapotagia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6lt19 | false | null | t3_1q6lt19 | /r/LocalLLaMA/comments/1q6lt19/project_i_built_a_complete_ui_for_finetuning_llms/ | false | false | 5 | null | |
Coder loops until it looks like the design | 0 | Anyone have an idea how to create a loop like the following?

- A VLM gets a picture with the design of a web element
- It describes it, and a coder LLM codes it
- A screenshot of the result is taken automatically
- The screenshot is sent to the VLM, which decides whether it already matches the design (then it's done) or not and why; the coder gets that feedback and iterates until it matches

So: Design --> VLM describes it --> Coder codes it --> Feedback until it matches the design.
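A minimal sketch of this loop in Python, assuming LM Studio's OpenAI-compatible server on localhost:1234 and model tags "qwen3-vl" / "qwen-coder" (adjust to whatever you have loaded). take_screenshot() is a stub you would wire to Playwright or Selenium.

```python
# Design -> describe -> code -> screenshot -> verdict loop against LM Studio.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def ask(model: str, text: str, image_path: str | None = None) -> str:
    content = [{"type": "text", "text": text}]
    if image_path:
        b64 = base64.b64encode(open(image_path, "rb").read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": content}])
    return resp.choices[0].message.content

def take_screenshot() -> str:
    # Stub: render out.html and save a PNG, e.g. with Playwright, then return its path.
    raise NotImplementedError

spec = ask("qwen3-vl", "Describe this web element design precisely.", "design.png")
for _ in range(5):  # cap the iterations
    code = ask("qwen-coder", f"Write self-contained HTML/CSS for: {spec}")
    open("out.html", "w").write(code)
    verdict = ask("qwen3-vl",
                  "Does this screenshot match the design? Reply DONE or list differences.",
                  take_screenshot())
    if "DONE" in verdict:
        break
    spec += f"\nFix these differences: {verdict}"
```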
I would use LM Studio with Qwen3 VL and Qwen Coder, I guess. Building it with Python would be a bit messy (if I want to change the logic of the flow later), so I guess a visual flow builder would be better. But which one accepts a screenshot taken AFTER the flow has already started?

With Langflow I can't build such a thing.
| 2026-01-07T17:31:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q6lq21/coder_loops_until_it_looks_like_in_the_design/ | mouseofcatofschrodi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6lq21 | false | null | t3_1q6lq21 | /r/LocalLLaMA/comments/1q6lq21/coder_loops_until_it_looks_like_in_the_design/ | false | false | self | 0 | null |
Vscode for Local LLMs | 4 | Check out this modified VS Code for local LLMs. It has LM Studio support and its own proprietary context management system, which should interest a lot of AI enthusiasts who want to test out GGUFs from LM Studio. [https://github.com/bdrazn/codeOSS-LMStudio-Ollama/releases/tag/First-Light](https://github.com/bdrazn/codeOSS-LMStudio-Ollama/releases/tag/First-Light)
I tried glm 4.7 + opencode | 22 | Need some perspective here. After extensive testing with Opencode, Oh My Opencode and Openspec, the results have been disappointing to say the least.
GLM 4.7 paired with Claude Code performs almost identically to 4.5 Sonnet - I genuinely can't detect significant improvements. | 2026-01-07T17:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q6kv29/i_tried_glm_47_opencode/ | Federal_Spend2412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6kv29 | false | null | t3_1q6kv29 | /r/LocalLLaMA/comments/1q6kv29/i_tried_glm_47_opencode/ | false | false | self | 22 | null |
Arguably, the best web search MCP server for Claude Code, Codex, and other coding tools | 59 | We’ve officially open-sourced [Kindly](https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server) \- the Web Search MCP server we built internally for tools like Claude Code, Cursor, and Codex.
https://preview.redd.it/tpiz0zg0iybg1.png?width=1498&format=png&auto=webp&s=498c083702c62f798ae1d7af434b3e920bb9a7f4
Why build another search tool? Because the existing ones were frustrating us.
When you are debugging a complex issue, you don’t just need a URL or a 2-sentence snippet (which is what wrappers like Tavily or Serper usually provide). You need the context. You need the "Accepted Answer" on StackOverflow, the specific GitHub Issue comment saying "this workaround fixed it," or the actual content of an arXiv paper.
Standard search MCPs usually fail here. They either return insufficient snippets or dump raw HTML full of navigation bars and ads that confuse the LLM and waste context window.
Kindly solves this by being smarter about retrieval, not just search:
* Intelligent Parsing: It doesn’t just scrape. If the search result is a StackOverflow thread, Kindly uses the StackExchange API to fetch the question, all answers, and metadata (likes/accepted status) and formats it into clean Markdown.
* GitHub Native: If the result is a GitHub Issue, it pulls the full conversation via the API.
* ArXiv Ready: It grabs the full PDF content and converts it to text.
* Headless Browser Fallback: For everything else, it spins up an invisible browser to render the page and extract the main content (no ads/nav).
* One-Shot: It returns the full, structured content with the search results. No need for the AI to make a second tool call to "read page."
For us, this replaced our need for separate generic web search, StackOverflow, and scraping MCP servers. It’s the only setup we’ve found that allows AI coding assistants to actually research a bug the way a human engineer would.
It works with Claude Code, Codex, Cursor, and others.
P.S. If you give it a try or like the idea, please drop us a star on GitHub - it’s always huge motivation for us to keep improving it! ⭐️ | 2026-01-07T16:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q6khuh/arguably_the_best_web_search_mcp_server_for/ | Quirky_Category5725 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6khuh | false | null | t3_1q6khuh | /r/LocalLLaMA/comments/1q6khuh/arguably_the_best_web_search_mcp_server_for/ | false | false | 59 | null | |
Finally built Intel arc rig dealing with stupid driver/library issues | 3 |
Finally built the damn thing. It took forever to get it somewhat working: I had to change frameworks like 10 times, went from Proxmox to Windows, then from there to Ubuntu, and I'll probably swap back to Proxmox. This build is gonna take me a while. At least I had help from users on the OpenArc Discord server; they have been really nice about getting stuff working. Once benchmarks are done I will make an update post with proper numbers.
Hardware details
2x Intel Xeon E5 v3 CPUs

128GB DDR4 RAM

4x Intel Arc B580 GPUs, each connected via PCIe 3.0 x8

1TB NVMe SSD

PiKVM connected to HDMI, USB, and ATX power for remote management

AAAwave mining case to hold everything

2x Corsair 850W PSUs

Yes, I know only 6 GPUs are pictured; I actually have 8x B580s. I just want to get stuff working on this test Xeon rig first, then move over to an EPYC for the PCIe lanes.
Fine-tuning OSS-120B / Qwen3-30B on 90k surgical Q&A: SFT vs DPO, multi-turn, and RAG integration? | 7 | I’m planning to fine-tune OSS-120B (or Qwen3-30B-A3B-Thinking-2507) on a mixed corpus: \~10k human-written Q&A pairs plus \~80k carefully curated synthetic Q&A pairs that we spent a few months generating and validating. The goal is to publish an open-weight model on Hugging Face and submit the work to an upcoming surgical conference in my country. The model is intended to help junior surgeons with clinical reasoning/support and board-style exam prep.
I’m very comfortable with RAG + inference/deployment, but this is my first time running a fine-tuning effort at this scale. I’m also working with a tight compute budget, so I’m trying to be deliberate and avoid expensive trial-and-error. I’d really appreciate input from anyone who’s done this in practice:
1. Multi-turn behavior: If I fine-tune on this dataset, will it noticeably degrade multi-turn / follow-up handling? Should I explicitly add another 5–10k dialog-style, multi-turn examples (with coreference + follow-ups), or will the base model generally preserve conversational robustness without increased hallucination?
2. SFT vs RL: The dataset is ~25% MCQs and ~75% open-ended answers; MCQs include rationales/explanations. Would you recommend RL after SFT here? If yes, what approach makes the most sense (e.g., DPO/IPO/KTO/ORPO vs PPO-style RLHF), and what data format + rough scale would you target for the preference/reward step? (A format sketch follows this list.)
3. Two inference modes: I want two user-facing modes: clinical support and exam preparation. Would you bake the mode-specific system prompts into SFT/RL (i.e., train with explicit instruction headers), and if so, would you attach them to every example or only a subset to avoid over-conditioning?
4. RAG / tool use at inference: If I’m going to pair the model with RAG and/or a web-search tool at inference time, should that change how I structure fine-tuning or RL? For example: training with retrieved context, citations, tool-call patterns, refusal policies, or “answer only from context” constraints.
5. Model choice: Between OSS-20B and Qwen3-30B-A3B, which would you pick for this use case? I slightly prefer OSS-20B for general non-coding performance, but I’m unsure whether its chat/harmony formatting or any architecture/format constraints create extra friction or difficulties during SFT/RL. | 2026-01-07T16:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q6k82y/finetuning_oss120b_qwen330b_on_90k_surgical_qa/ | Patient_Ad1095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6k82y | false | null | t3_1q6k82y | /r/LocalLLaMA/comments/1q6k82y/finetuning_oss120b_qwen330b_on_90k_surgical_qa/ | false | false | self | 7 | null |
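Regarding question 2: the de-facto preference format consumed by TRL's DPOTrainer (and most DPO/IPO-style pipelines) is prompt/chosen/rejected triples. A minimal sketch; the clinical content below is invented purely to show the shape.

```python
# Illustrative DPO preference records; content invented for format only.
preference_data = [
    {
        "prompt": "A 45-year-old presents with RUQ pain and a positive Murphy's sign. Next step?",
        "chosen": (
            "Obtain a right-upper-quadrant ultrasound to assess for acute "
            "cholecystitis before deciding on intervention."
        ),
        "rejected": "Proceed directly to laparoscopic cholecystectomy without imaging.",
    },
    # ... a few thousand such pairs is a common starting scale on top of an SFT checkpoint
]
```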
Speculation: new Gemma, Granite, Arcee Trinity models when? | 3 | Figured I would do the thing where we speculate about upcoming models and maybe then they get released shortly thereafter.
Also maybe gather up any new info from any various sources we're all watching/reading/etc
So, any new rumors about upcoming Gemma, Granite, or Arcee Trinity models? | 2026-01-07T16:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q6jwfj/speculation_new_gemma_granite_arcee_trinity/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6jwfj | false | null | t3_1q6jwfj | /r/LocalLLaMA/comments/1q6jwfj/speculation_new_gemma_granite_arcee_trinity/ | false | false | self | 3 | null |
Arbor: Graph-native codebase indexing via MCP for structural LLM refactors | 0 | Arbor is an open source intelligence layer that treats code as a "Logic Forest." It uses a Rust-based AST engine to build a structural graph of your repo, providing deterministic context to LLMs like Claude and ChatGPT through the Model Context Protocol (MCP).
By mapping the codebase this way, the Arbor bridge allows AI agents to perform complex refactors with full awareness of project hierarchy and dependencies.
**Current Stack:**
* Rust engine for high-performance AST parsing
* MCP Server for direct LLM integration
* Flutter/React for structural visualization
https://preview.redd.it/x5g6dofwbybg1.png?width=1024&format=png&auto=webp&s=105f6c59991ed46ac5e5af06214871aaac7274c4
**How to contribute:** I'm looking for help expanding the "Logic Forest" to more ecosystems. Specifically:
* **Parsers:** Adding Tree-sitter support for C#, Go, C++, and JS/TS
* **Distribution:** Windows (EXE) and Linux packaging
* **Web:** Improving the Flutter web visualizer and CI workflows
**GitHub:**[https://github.com/Anandb71/arbor](https://github.com/Anandb71/arbor)
Check the issues for "good first issue" or drop a comment if you want to help build the future of AI-assisted engineering. | 2026-01-07T16:12:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q6jiyj/arbor_graphnative_codebase_indexing_via_mcp_for/ | AccomplishedWay3558 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6jiyj | false | null | t3_1q6jiyj | /r/LocalLLaMA/comments/1q6jiyj/arbor_graphnative_codebase_indexing_via_mcp_for/ | false | false | 0 | null | |
[HW TUNING] Finding the best GPU power limit for inference | 12 | So in preparation for my multi-GPU setup I wanted to actually test the "limit the power bro, after a specific limit the increase is marginal..." claim, and it seems to hold a large kernel of truth. The preconditions: an RTX 4090, used mainly by a single user.
The vLLM server line was: vllm serve allenai/Olmo-3-7B-Instruct --trust-remote-code --max-model-len 32768
The benchmark command line was: vllm bench serve --backend openai --host 127.0.0.1 --port 8000 --endpoint /v1/completions --model allenai/Olmo-3-7B-Instruct --dataset-name random --num-prompts 200 --seed 0 --input-len 1024 --output-len 128 --request-rate 1 --max-concurrency 1 --metric-percentiles 50,90,95,99 --percentile-metrics ttft,tpot,itl,e2el --save-result --result-dir ./bench_results --result-filename "xxxW_interactive_c1_rps1.json", where xxxW is the power limit set for that run, e.g. 300W.
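In practice the sweep can be scripted. A minimal sketch, assuming nvidia-smi's -pl (power limit) flag, sudo rights, GPU index 0, and the vllm server above left running unchanged between runs:

```python
# Set the power limit, rerun the same benchmark, repeat for each wattage.
import subprocess

for watts in (250, 300, 350, 400, 450):
    subprocess.run(["sudo", "nvidia-smi", "-i", "0", "-pl", str(watts)], check=True)
    subprocess.run(
        [
            "vllm", "bench", "serve",
            "--backend", "openai", "--host", "127.0.0.1", "--port", "8000",
            "--model", "allenai/Olmo-3-7B-Instruct",
            "--dataset-name", "random", "--num-prompts", "200",
            "--input-len", "1024", "--output-len", "128",
            "--request-rate", "1", "--max-concurrency", "1",
            "--save-result", "--result-dir", "./bench_results",
            "--result-filename", f"{watts}W_interactive_c1_rps1.json",
        ],
        check=True,
    )
```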
The results are:
Median TTFT (lower is better)
250W: 139.17 ms
300W: 100.97 ms (huge win)
350W: 100.28 ms (basically same as 300W)
400W: 96.51 ms (small gain)
450W: 94.09 ms (tiny gain)
P99 TTFT (tail latency / “hitching”)
250W: 143.02 ms
300W: 118.56 ms
350W: 101.97 ms (big tail improvement)
400W: 98.05 ms
450W: 95.06 ms
Decode smoothness (ITL / TPOT)
Median ITL is basically flat after 300W:
250W: 16.455 ms
300W: 16.250 ms
350W: 16.198 ms
400W: 16.196 ms
450W: 16.196 ms
P99 ITL improves a bit up to ~350W then flattens:
250W: 17.38 ms
300W: 16.90 ms
350W: 16.46 ms
400W: 16.41 ms
450W: 16.38 ms
Sweet spot #1 (best value / best perf-per-watt): 300W
Sweet spot #2 (best “smoothness” / best tails): 350W
Median barely changes vs 300W, but P99 TTFT and P99 ITL improve noticeably, i.e. fewer little “hiccups.”
Costs you only +50W vs 300W.
Not worth it: >350W
350→450W buys you ~6 ms median TTFT and tiny ITL gains for +100W. That’s classic waste.
The comments are from the friendly ChatGPT. So, how do you find the optimal power level for your setup?
The Personality of Open Source: How Llama, Mistral, and Qwen Compare to GPT-5.2 and Claude | 8 | 2026-01-07T15:52:34 | https://www.lindr.io/blog/open-source-benchmark | dimethyldumbass | lindr.io | 1970-01-01T00:00:00 | 0 | {} | 1q6izof | false | null | t3_1q6izof | /r/LocalLLaMA/comments/1q6izof/the_personality_of_open_source_how_llama_mistral/ | false | false | default | 8 | null | |
What would I be able to run? | 0 | I just bought an RTX PRO 4000 Blackwell 24 GB, and I have an RTX 5070 installed as well.

Is that essentially 36GB of VRAM? Would I be able to run 13B models no problem?
llama.cpp CLI with Markdown + stylish colors in the terminal | 2 | It is amazing. You probably know the situation yourself: you want to do something and spend two days on and off following AI advice that leads nowhere, doing regular internet searches and losing time in too many GitHub repositories, when the best thing would probably have been to reach out to knowledgeable folks on Reddit.
I want to use the CLI rather than llama-server if possible, and to keep it all in the Linux terminal.
One of the wild goose chases AI sent me on was installing 'glow' (which can make the terminal very pretty), but still no love for using it with the CLI. Is there perhaps some patch for compiling the CLI? If there is no way around it, should I resort to a TUI of some sort? I want to avoid a web UI and browser, as I'm having a great time doing all this on a potato laptop while being careful about my RAM.
Which MCPs surprised you either by breaking or by working better than expected? | 2 | A lot of popular MCPs get mentioned in threads, but once you move beyond demos, only a few are consistently **recommended** by people who’ve actually used them.
In practice, the interesting parts tend to be the surprises:
* permissions silently failing
* context limits showing up sooner than expected
* rate limits becoming a bottleneck
* write actions feeling risky or requiring manual review
If you’re using MCPs in real workflows, what’s the **most annoying or limiting thing** you’ve run into?
I’m less interested in what’s popular and more interested in:
* MCPs that genuinely saved you time or effort
* ones that worked better than expected
* and ones that looked promising but didn’t hold up in practice
If you’re using MCPs day to day, which ones would you still recommend and what surprised you (good or bad)?
I’ve been collecting these kinds of real-world notes so people don’t have to rediscover them in every thread. | 2026-01-07T14:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q6hjz4/which_mcps_surprised_you_either_by_breaking_or_by/ | Silver-Photo2198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6hjz4 | false | null | t3_1q6hjz4 | /r/LocalLLaMA/comments/1q6hjz4/which_mcps_surprised_you_either_by_breaking_or_by/ | false | false | self | 2 | null |
VLM Fine-tuning Data Trade-offs: Density vs. Diversity | 6 | In applied domains (Robotics/Manufacturing/FinTech), we rarely have internet-scale diversity. We are usually "Data Poor" in diversity (few scenes/formats) but "Data Rich" in depth (many descriptions/tasks per scene).
I ran an ablation to see whether it's better to show a model many images once each (Diversity) or a few images with varying questions about each (Density).
What do I mean by density and diversity?
- Density: Asking a variety of questions about the same image to extract as much information as possible.
- Diversity: Showing the VLM as much of the world as possible.
Obviously diverse datasets are better, but how much better?
I have done this in a scrappy way: I curated two 15k-sample datasets along the two dimensions and trained around 6 models on them.
Diverse: 7,500 images, 1 question/image (2 answers/question)

Dense: 750 images, 10 questions/image (2 answers/question)
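A sketch of how the two splits could be constructed from an image pool (counts match the post; the pool size and record fields are hypothetical):

```python
# Build matched "diverse" vs "dense" question splits; 2 answers per question downstream.
import random
random.seed(0)

images = [f"img_{i:05d}" for i in range(20_000)]  # hypothetical image pool

def record(img, q_idx):
    return {"image": img, "question_id": q_idx}

diverse = [record(img, 0) for img in random.sample(images, 7_500)]                 # 1 q/image
dense = [record(img, q) for img in random.sample(images, 750) for q in range(10)]  # 10 q/image
assert len(diverse) == len(dense) == 7_500  # x2 answers each -> 15k samples
```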
Current Findings:
- Density is efficient for Facts: If you want the model to memorize specific visual features, high density works well.
- The "Logical Collapse" Trap: High density without sufficient scale actively harms reasoning capabilities. The model overfits to the "logic" of the specific few images it sees.
Planning to expand the scale and run further tests. But thought to get community feedback on the idea and process.
P.S. The in-domain tests are on a validation set of 3.2k diverse images with harder questions.
Local coding models under 128G / 256G / 512G memory: any comparison? | 1 | I'm interested in building a 1-4 node Strix Halo cluster and/or buying a Mac Ultra to run local coding agents (and that's the goal, please don't suggest GPUs, since I have different machines for that). Token speed is not a concern: I have mostly background coding tasks to run, and I have separate cloud coding subscriptions for more interactive work. Power is a concern, but 4 Strix Halos or a Mac Ultra are within the power budget.
However, I am undecided on the target scope: would a single Strix Halo suffice, maybe two? At three I can still directly connect them, but at 4 maybe a Mac Ultra is better in terms of space, cost, and power consumption. Anyway, I would be interested in a quality comparison of coding models under memory restrictions, i.e. whatever quant runs under 128GB (96GB VRAM + 32GB RAM) or similar.
Is there any such comparison out there? Any personal experience or setup you are able to share?
I roasted my laptop to build "LiteGPT" (124M) from scratch. 10B tokens, 87°C peaks, and a war against Windows Memory Management. | 1 | Everyone says, "Don't pre-train on a laptop, unless you want to see your GPU card fry in real-time."
They were right. I almost did. But I wanted to see exactly how far I could push a mobile GPU before physics won.
So, I spent the last month building LiteGPT, a 124M parameter model trained from scratch on 10B tokens of FineWeb-Edu. The goal wasn't to beat Llama-3 (it’s a GPT-2 class model, let’s be real). The goal was to survive the training run on a single RTX 4090 Mobile.
It hit 87°C instantly. I had to build a custom undervolting curve just to stop it from thermal throttling every 5 minutes.
The real enemy wasn't the heat, it was Windows Shared System Memory. It kept aggressively caching and spilling VRAM into system RAM, killing my throughput. I had to tune batch sizes specifically to stop the OS from "helping" me.
The Result: Managed to sustain ~15.75% MFU, which I think is pretty decent for a consumer card fighting a background OS.
It actually follows instructions surprisingly well for a small model (and my room is now 10 degrees hotter).
If you are crazy enough to try pre-training locally, I wrote a full breakdown of the thermal profiles, the VRAM workarounds, and the PyTorch config I used to keep it stable.
The Dev Log: https://keerthiraajan.com/blog/litegpt-pre-training
The Code: https://github.com/kmkrofficial/litegpt
The Models (If you want to poke it):
Instruct: https://huggingface.co/kmkrworks/LiteGPT-Instruct
Base: https://huggingface.co/kmkrworks/LiteGPT-Base
Question for the sub: Has anyone else managed to push mobile GPUs past 15% MFU on training runs without hitting thermal walls? I feel like there's more performance left on the table if I switch to Linux, but I'm too stubborn to wipe my drive. | 2026-01-07T14:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q6gyfs/i_roasted_my_laptop_to_build_litegpt_124m_from/ | Electronic_Yam_5368 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6gyfs | false | null | t3_1q6gyfs | /r/LocalLLaMA/comments/1q6gyfs/i_roasted_my_laptop_to_build_litegpt_124m_from/ | false | false | self | 1 | null |
Can you input your valuable contribution? | 0 | Hello everyone
This community has lots of experts who have spent years in tech, AI and ML. Can you please help me with your valuable input?

I'm trying to build something, but since I'm all alone, I have limitations.

All I'm asking is whether you can contribute a small amount of your time to help me build something.
[Resource] 30k IKEA products converted to text files. Saves 24% tokens. RAG benchmark. | 0 | JSON eats tokens. Brackets waste space.
If you run Llama-3 locally, context is precious. You need more data in less memory.
I fixed this.
I converted 30,511 IKEA products from JSON to **CommerceTXT**. It is a flat, markdown-like format.
**The Benchmark:**
* **Size:** 30k products. Real data.
* **Efficiency:** It uses 24% fewer **tokens** than JSON.
* **Impact:** You fit 20%+ more products in your context window.
* **Structure:** Folders for categories. Good for testing routers.
**The Data:** No HTML. No scripts. Just text. Ready for Chroma or Qdrant.
Test your retrieval accuracy. Compare it to raw JSON. See if simpler is better.
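A quick way to sanity-check the token claim on your own data: encode one product as JSON vs a flat key-value layout and count tokens. The product and the flat layout below are invented for illustration; the actual CommerceTXT spec is in the parser repo.

```python
# Rough token comparison of an invented product: JSON vs a flat layout.
import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
product = {"name": "BILLY bookcase", "price_usd": 59.99,
           "color": "white", "size_cm": "80x28x202"}

as_json = json.dumps(product)
as_flat = "\n".join(f"{k}: {v}" for k, v in product.items())
print(len(enc.encode(as_json)), "tokens as JSON")
print(len(enc.encode(as_flat)), "tokens flat")
```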
**The Links:**
* **Dataset (Hugging Face):** [https://huggingface.co/datasets/tsazan/ikea-us-commercetxt](https://huggingface.co/datasets/tsazan/ikea-us-commercetxt)
* **Parser (GitHub):** [https://github.com/commercetxt/commercetxt](https://github.com/commercetxt/commercetxt) | 2026-01-07T14:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q6gbdx/resource_30k_ikea_products_converted_to_text/ | TsaTsuTsi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6gbdx | false | null | t3_1q6gbdx | /r/LocalLLaMA/comments/1q6gbdx/resource_30k_ikea_products_converted_to_text/ | false | false | self | 0 | null |
I built an open-source library that diagnoses problems in your Scikit-learn models using LLMs | 3 | Hey everyone, Happy New Year!
I spent the holidays working on a project I'd love to share: **sklearn-diagnose** — an open-source Scikit-learn compatible Python library that acts like an "MRI scanner" for your ML models.
**What it does:**
It uses LLM-powered agents to analyze your trained Scikit-learn models and automatically detect common failure modes:
- Overfitting / Underfitting
- High variance (unstable predictions across data splits)
- Class imbalance issues
- Feature redundancy
- Label noise
- Data leakage symptoms
Each diagnosis comes with confidence scores, severity ratings, and actionable recommendations.
**How it works:**
1. Signal extraction (deterministic metrics from your model/data)
2. Hypothesis generation (LLM detects failure modes)
3. Recommendation generation (LLM suggests fixes)
4. Summary generation (human-readable report)
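To make the pipeline concrete, here is a hedged usage sketch. The entry-point name `diagnose`, its signature, and the import path are my assumptions (left commented out); the repo README has the authoritative API.

```python
# Train a model to feed into the diagnosis pipeline; assumed calls are commented out.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# from sklearn_diagnose import diagnose             # assumed import path
# report = diagnose(model, X_tr, y_tr, X_te, y_te)  # assumed signature
# print(report.summary)                             # confidence/severity per hypothesis
```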
**Links:**
- GitHub: [https://github.com/leockl/sklearn-diagnose](https://github.com/leockl/sklearn-diagnose)
- PyPI: pip install sklearn-diagnose
Built with LangChain 1.x. Supports OpenAI, Anthropic, and OpenRouter as LLM backends.
I'm aiming for this library to be community-driven, with the ML/AI/Data Science communities contributing and helping shape its direction, as there is a lot more that can be built: e.g. AI-driven metric selection (ROC-AUC, F1-score, etc.), AI-assisted feature engineering, a Scikit-learn error-message translator using AI, and many more!
Please give my GitHub repo a star if this was helpful ⭐ | 2026-01-07T13:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q6famd/i_built_an_opensource_library_that_diagnoses/ | lc19- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6famd | false | null | t3_1q6famd | /r/LocalLLaMA/comments/1q6famd/i_built_an_opensource_library_that_diagnoses/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=108&crop=smart&auto=webp&s=1ac7acec4b020fead4514b33e53aa1295efc3c97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=216&crop=smart&auto=webp&s=51bed8df1a93cef4c9ad8bccd420eed5300b367b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=320&crop=smart&auto=webp&s=a210bf3371059ce363a634a7088026f325988534', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=640&crop=smart&auto=webp&s=77451b4e4d0ee2b6e39ec3f553b45f92b8ea5476', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=960&crop=smart&auto=webp&s=ad578b53c78eb3b14dafb9302e223ce2be482938', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?width=1080&crop=smart&auto=webp&s=f65c6b6675e963d214b5743136bf36ea8c072344', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oFJAlj68KgVh-YJVcXkpbKBTbD0R8JoyRX49P3UcLVA.png?auto=webp&s=5e58a4588268deb96efa620592250ba8d1880a4d', 'width': 1200}, 'variants': {}}]} |
This diagram shows everything you 'need' for LLM apps. I think 90% of it is overengineering. Change my mind. | 0 | 2026-01-07T12:46:44 | ImpressionTop1712 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q6eg95 | false | null | t3_1q6eg95 | /r/LocalLLaMA/comments/1q6eg95/this_diagram_shows_everything_you_need_for_llm/ | false | false | 0 | {'enabled': True, 'images': [{'id': '6SaHaNBVvbsAF7AnD-dVDlq9IXrBYSEYUrGw6OADAeI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=108&crop=smart&auto=webp&s=05770f81843fefc4a4d83a907ff9f97b0d098309', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=216&crop=smart&auto=webp&s=7efe5917745b3a9e8e7e2c420f67213096b2c21a', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=320&crop=smart&auto=webp&s=d8d80040e40a216d9e41275e621d936588b17e1d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=640&crop=smart&auto=webp&s=3348c75d51f5e5285d35d3a377da2adaf885c131', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=960&crop=smart&auto=webp&s=974a40d3083428ab66a1297331e8e7aa3954930b', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?width=1080&crop=smart&auto=webp&s=d7f09a4a5bd4692a6210c5d56efc631b28c18edc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/kcfi6gjpaxbg1.png?auto=webp&s=ca5c85ec8cdd5c261e90f571074aec75d31f10aa', 'width': 1920}, 'variants': {}}]} | |||
AI agents for searching and reasoning over internal documents | 22 | Hey everyone!
I’m excited to share something we’ve been building for the past few months - **PipesHub**, a **fully open-source alternative to Glean**, designed to bring powerful Enterprise Search and Agent Builders to every team without vendor lock-in. The platform brings all your business data together and makes it searchable. It connects with apps like Google Drive, Gmail, Slack, Notion, Confluence, Jira, OneDrive, Outlook, SharePoint Online, Dropbox, and even local file uploads. You can deploy and run it with a single docker compose command.
The entire system is built on a **fully event-streaming architecture powered by Kafka**, making indexing and retrieval scalable, fault-tolerant, and real-time across large volumes of data. PipesHub combines a vector database with a knowledge graph and uses Agentic RAG to deliver highly accurate results. We constrain the LLM to ground truth. Provides Visual citations, reasoning and confidence score. Our implementation says Information not found rather than hallucinating.
**Key features**
* Deep understanding of user, organization and teams with enterprise knowledge graph
* Connect to any AI model of your choice including OpenAI, Gemini, Claude, or Ollama
* Use any other provider that supports OpenAI compatible endpoints
* Vision-Language Models and OCR for visual or scanned docs
* Login with Google, Microsoft, OAuth, or SSO
* Rich REST APIs for developers
* All major file types support including pdfs with images, diagrams and charts
* Agent Builder - Perform actions like Sending mails, Schedule Meetings, etc along with Search, Deep research, Internet search and more
* Reasoning Agent that plans before executing tasks
* 40+ Connectors allowing you to connect to your entire business apps
Check it out and share your thoughts; your feedback is immensely valuable and much appreciated:
[https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)
Demo Video:
[https://www.youtube.com/watch?v=xA9m3pwOgz8](https://www.youtube.com/watch?v=xA9m3pwOgz8) | 2026-01-07T12:42:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q6edb2/ai_agents_for_searching_and_reasoning_over/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6edb2 | false | null | t3_1q6edb2 | /r/LocalLLaMA/comments/1q6edb2/ai_agents_for_searching_and_reasoning_over/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=108&crop=smart&auto=webp&s=0d6fd2d9375d8a485a2ebdf6b2fb6af53123cceb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=216&crop=smart&auto=webp&s=cbbca55895b7178adf02cbd270620e5f428e7c8e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?width=320&crop=smart&auto=webp&s=8a89fff37eca91c8c09618f089379de21a66d928', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/ketEXXYrtUKPA2y-oIvCFgcWZQoziuQWwwvejkV8xdc.png?auto=webp&s=bf26b9b5cae5ee08e9ae158ab53cc4b315dcbbb1', 'width': 400}, 'variants': {}}]} |
Models for middle eastern languages? | 1 | I'm learning geopolitics, specifically about the Middle East, and I'm wondering if anyone knows a good local model for translation and summarization of Middle Eastern languages (various types of Arabic, Hebrew, Persian)?

I've been using Gemma 3 and Cohere Command models, but some of them are old now, and the new ones are too big for me (Command A models are ~100B and dense).
Something around 30b or 70b quantized would be perfect. | 2026-01-07T12:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q6e4sm/models_for_middle_eastern_languages/ | WeekLarge7607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6e4sm | false | null | t3_1q6e4sm | /r/LocalLLaMA/comments/1q6e4sm/models_for_middle_eastern_languages/ | false | false | self | 1 | null |
Llama.cpp keeps crashing when using 5060 Ti | 1 | I have two GPUs installed: a 5060 Ti 16GB and a 4060 8GB.
Even if I use only the 5060 Ti (disabling the 4060 from Device Manager or setting CUDA_VISIBLE_DEVICES=1), I keep getting this error:
CUDA error: an illegal instruction was encountered
current device: 1, in function ggml_backend_cuda_synchronize at D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:2850
cudaStreamSynchronize(cuda_ctx->stream())
D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:96: CUDA error
I have the latest drivers, the latest llama.cpp version, and CUDA 13.1.
Any help will be appreciated. | 2026-01-07T12:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q6e2d5/llamacpp_keep_crashing_when_using_5060ti/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6e2d5 | false | null | t3_1q6e2d5 | /r/LocalLLaMA/comments/1q6e2d5/llamacpp_keep_crashing_when_using_5060ti/ | false | false | self | 1 | null |
I think I may have installed an infectious LLM from this sub... | 0 | I don't remember exactly which one, but a few months ago I downloaded a quantized TTS model which wasn't fast enough, so I just deleted it; due to fear of a virus/malware I reinstalled my whole OS. But I've noticed a lot of fan noise since then: my CPU usage is always less than 10% at idle, yet my fan still makes a lot of noise. Before this incident the noise only showed up when I was video editing, but now it even shows up while browsing?
I bought my laptop 5 months ago; is this common in gaming laptops?
A.X-K1 - New Korean LLM benchmark released | 8 |
In NVIDIA's announcement of Rubin (successor to Blackwell) what do you think is meant by "adaptive compression"? | 41 | 2026-01-07T11:22:07 | https://developer.nvidia.com/blog/inside-the-nvidia-rubin-platform-six-new-chips-one-ai-supercomputer/ | michaelmalak | developer.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1q6cuh5 | false | null | t3_1q6cuh5 | /r/LocalLLaMA/comments/1q6cuh5/in_nvidias_announcement_of_rubin_successor_to/ | false | false | default | 41 | {'enabled': False, 'images': [{'id': '5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=108&crop=smart&auto=webp&s=9b4b65d12148e98a80405d43ec3a397fae87fcad', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=216&crop=smart&auto=webp&s=6b24e2abefbdaae930c8b8dd586148771937f0e7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=320&crop=smart&auto=webp&s=dd2338fa9aa76b0bc39992f991003dd39adcae35', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=640&crop=smart&auto=webp&s=81304f3376532fa1f1d936cef16bcd151043d65a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=960&crop=smart&auto=webp&s=18837b50cc47348abf4075ab6a8250f10671256e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?width=1080&crop=smart&auto=webp&s=69f1abb27f2d17ac21e4abf5dcba52c7e5d38cf4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5t8jfpe67v909kH8kONfOcl8j7uy46XjufTzHSp5p-I.jpeg?auto=webp&s=549d695bf65333da2ae7c233f38077e4ad0866bb', 'width': 1920}, 'variants': {}}]} | |
I built a mobile game where a local Qwen3-VL acts as an "Oracle" that analyzes player photos | 15 | Been working on a solo project called Lenswalker: a walking RPG where players physically walk to charge mana, then photograph real-world subjects. The interesting part: a locally-hosted vision model analyzes each photo and determines what they found.
The setup:
- Ollama running Qwen3-VL on my home server (RTX 4090)
- FastAPI backend, PWA frontend
- Everything self-hosted, no cloud APIs, no data leaving my network
What the Oracle does:
- Analyzes the photo and identifies the subject
- Assigns a "rarity" (1-10) based on how interesting/unusual it is (a trash can = 1, a wild fox = 9)
- Determines capture quality (composition, lighting, focus)
- Extracts dominant color -> maps to game element (green -> Nature, white -> Light, etc.)
- Generates flavor text for the discovery
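For anyone curious about the plumbing, Ollama's /api/generate endpoint accepts base64 images for vision models, so the Oracle call can look roughly like this sketch (the model tag and prompt wording are assumptions):

```python
# Ask a local Qwen3-VL via Ollama's generate API with an attached image.
import base64
import requests

img_b64 = base64.b64encode(open("photo.jpg", "rb").read()).decode()
prompt = ("Identify the subject, rate rarity 1-10, rate capture quality, "
          "give the dominant color, and write one line of flavor text. Reply as JSON.")

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen3-vl",   # whatever tag the model was pulled under
    "prompt": prompt,
    "images": [img_b64],
    "stream": False,
})
print(r.json()["response"])
```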
What surprised me:
- Qwen3-VL is remarkably consistent at judging "interestingness" - mundane objects score low, genuinely unusual finds score high
- Color extraction works well for element assignment
- ~15-45s per analysis on first load, ~5-10s when the model is warm
- Running OLLAMA_MAX_CONCURRENT=4 handles multiple players fine
The whole thing started because I wanted a game where the AI couldn't be cheated by googling answers; you have to actually go outside and find something worth photographing.
Currently in pre-alpha with \~25 testers. Happy to answer questions about the vision model integration or the prompt engineering approach.
If anyone in Europe wants to try it out, DM me, server's hosted in Germany so latency is best for EU players. | 2026-01-07T11:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1q6cihe/i_built_a_mobile_game_where_a_local_qwen3vl_acts/ | franke777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6cihe | false | null | t3_1q6cihe | /r/LocalLLaMA/comments/1q6cihe/i_built_a_mobile_game_where_a_local_qwen3vl_acts/ | false | false | self | 15 | null |
Dive Desktop's #1 support issue was MCP installation - so we built two solutions | 0 | Hi everyone, we open-sourced Dive Desktop early last year. Today we're announcing AI-assisted MCP installation for Dive, and launching OAPhub for cloud-hosted MCPs.
We believe local-first. But after launching Dive Desktop, our #1 support issue was installation - **68% of our GitHub issues** are MCP config related (uvx, npx, Docker, PATH, env vars).
MCPs are powerful, but the individual setup blocks most users from using them.
So we built two things:
**Dive Desktop - Open-source, AI-assisted MCP install**
Dive helps non-developers install MCP on their own.
Just say: "Install an MCP for yt-dlp"
Dive searches GitHub, downloads, runs install, sets up config. You answer questions - no terminal needed.
Download Dive: [https://github.com/OpenAgentPlatform/Dive](https://github.com/OpenAgentPlatform/Dive)
**OAPhub - For things you can't run locally**
Some models don't have local options - Seedream 4.5, Kling, Veo, Flux.2 Pro. You'd normally access them through separate APIs.
OAPhub hosts these as MCPs. Just paste a URL.
Just: "Generate a cyberpunk cat with Seedream"
No API keys to manage, no SDK to install.
**Pricing:** Base MCPs (API tools, search, and utilities) are free, while Pro MCPs (generative AI) require a subscription, with optional pay-per-use for high-volume usage.
Browse available MCPs: [https://oaphub.ai/mcp](https://oaphub.ai/mcp)
**How we think about this**
We took a different direction from workflow tools like n8n or Zapier.
When a service offers an official MCP (e.g., Figma, PayPal, Atlassian), we surface it with full documentation for direct connection. When no official MCP exists (e.g., Seedream, Kling, Veo), we build and host a managed MCP for seamless integration.
Our bet: official MCPs will always be better maintained than third-party wrappers. So we'd rather point you to the official source when it exists.
If a better solution comes along, swap us out. We're fine with that.
| 2026-01-07T10:58:04 | https://v.redd.it/kt8pp114vvbg1 | Prior-Arm-6705 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q6cf9n | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/kt8pp114vvbg1/DASHPlaylist.mpd?a=1770375496%2CZjQ2OGQwYmVhODRjYTNhOWY0MTNlOTM2OWRiYTIwNjNhZTgyNTk0ZTc0NTU4Y2Q2ZTliNzUwNmRhNzUyNzJlYQ%3D%3D&v=1&f=sd', 'duration': 181, 'fallback_url': 'https://v.redd.it/kt8pp114vvbg1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 476, 'hls_url': 'https://v.redd.it/kt8pp114vvbg1/HLSPlaylist.m3u8?a=1770375496%2CNTUzZThjNDI2M2ZhYmU5ODZjNzRhOWE2NDIyMGE2NzkzNDVhMWNlMWU2YjZkOTBkOWQ1ZDBjMmZhOTQ5YTc2Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kt8pp114vvbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1q6cf9n | /r/LocalLLaMA/comments/1q6cf9n/dive_desktops_1_support_issue_was_mcp/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=108&crop=smart&format=pjpg&auto=webp&s=188cead9087f1689768fd425740a46970a835fce', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=216&crop=smart&format=pjpg&auto=webp&s=2675d28b28d7e0a1b2673fa98442b3c6841d3fe8', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=320&crop=smart&format=pjpg&auto=webp&s=91d6092d6b8950392d043ca822701f34abd59cea', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=640&crop=smart&format=pjpg&auto=webp&s=a1d878af2cd2225434b0f92eba4f719ea2dd8616', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=960&crop=smart&format=pjpg&auto=webp&s=8a6079ebc8e0c3b47d194cee584cb141daf50423', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d3f8c238e442f8edd91b5ae95aa62177bbd100e3', 'width': 1080}], 'source': {'height': 716, 'url': 'https://external-preview.redd.it/ZW9hMGIxMTR2dmJnMXYTW1_dWB2gDorgYHB7SSWpkDzWZByIl3IdckQTnnAu.png?format=pjpg&auto=webp&s=c3407af0ca2f37f9e36749b716ef1edb140406d0', 'width': 1282}, 'variants': {}}]} | |
Local Laptop Hardware Help | 0 | I'm in the market for a MacBook. I'm currently having a difficult time making a decision on which one to buy. I want to be able to run these LLMs locally in an agentic way. Should I pull the trigger and buy a MacBook Pro with the M5 chip, or wait for the M5 Pro chip? What sort of memory would be sufficient?
Released v0.1.6 of Owlex, an MCP server that integrates Codex CLI, Gemini CLI, and OpenCode into Claude Code. | 3 | The new async feature lets you:
- Start a council deliberation that queries multiple AI models
- Get a task ID immediately and continue working
- Check back later for results with wait_for_task
What's a "council"?
Instead of relying on a single model's opinion, the council queries multiple agents (Codex/o3, Gemini, OpenCode) with your question and synthesizes their responses. Great for architecture decisions, code reviews, or when you want diverse perspectives.
https://reddit.com/link/1q6cbgy/video/hrj7rycqqwbg1/player
| 2026-01-07T10:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q6cbgy/released_v016_of_owlex_an_mcp_server_that/ | spokv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6cbgy | false | null | t3_1q6cbgy | /r/LocalLLaMA/comments/1q6cbgy/released_v016_of_owlex_an_mcp_server_that/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=108&crop=smart&auto=webp&s=c7069b5d072fa2405b35113f507064fb250c4dd3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=216&crop=smart&auto=webp&s=59c5cd34a767002b5228289c5b423d7a4815b2b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=320&crop=smart&auto=webp&s=fe99da93c40dc4d43262cc6a8bcd7fdb9faeda3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=640&crop=smart&auto=webp&s=85292b673f6b38a2ab14ccd0da115b0f59ab17ec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=960&crop=smart&auto=webp&s=ab4068bcf5954d27e382ea8cada3300cbfc03233', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?width=1080&crop=smart&auto=webp&s=c2326fd956631701359f527e4e9dc021bab1b037', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ArYRt78-5RFb6CiMNI10gprlO6acxZMy0DlZ2xmn3uI.png?auto=webp&s=147a23b5405a29ec20e60579bf9830ed967977ce', 'width': 1200}, 'variants': {}}]} |
DeepSeek-R1’s paper was updated 2 days ago, expanding from 22 pages to 86 pages and adding a substantial amount of detail. | 613 | arXiv:2501.12948 \[cs.CL\]: https://arxiv.org/abs/2501.12948 | 2026-01-07T10:49:12 | https://www.reddit.com/gallery/1q6c9wc | Nunki08 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q6c9wc | false | null | t3_1q6c9wc | /r/LocalLLaMA/comments/1q6c9wc/deepseekr1s_paper_was_updated_2_days_ago/ | false | false | 613 | null | |
I need help with a 3D avatar for my local AI assistant | 8 | Hi everyone! I have built a basic functional AI assistant that answers questions on specific topics. Currently, it works as a local LLM with bilingual audio support. Now I need to add a 3D visual avatar that runs entirely locally and is open-source. The avatar must move its mouth in sync with local audio, and have idle animations and hand gestures. No APIs, only local. I've looked into SadTalker, OmniAvatar and some open-source AI-VTuber projects, but the model should be realistic, not anime-based. Any advice, repo links or tips would be appreciated, thanks in advance!
Anyone integrated LlamaIndex into a real project? | 0 | In a challenge I’m organizing, integrating **LlamaIndex** into a concrete project is considered a **high‑difficulty task**. I’m curious if anyone here who’s skilled in this area might be interested.
https://preview.redd.it/ckytwzakmwbg1.png?width=1364&format=png&auto=webp&s=e03508f9c9d991a7ba8917875eb66ac5b7b1f27a
| 2026-01-07T10:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q6bwuo/anyone_integrated_llamaindex_into_a_real_project/ | Puzzleheaded_Box2842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6bwuo | false | null | t3_1q6bwuo | /r/LocalLLaMA/comments/1q6bwuo/anyone_integrated_llamaindex_into_a_real_project/ | false | false | 0 | null | |
Open source code cli tool built for autonomy | 1 | [removed] | 2026-01-07T10:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q6bmz0/open_source_code_cli_tool_built_for_autonomy/ | Straight-Degree-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6bmz0 | false | null | t3_1q6bmz0 | /r/LocalLLaMA/comments/1q6bmz0/open_source_code_cli_tool_built_for_autonomy/ | false | false | self | 1 | null |
We built an open-source Code CLI tool focused on autonomy | 1 | [removed] | 2026-01-07T10:05:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q6bjdy/we_built_an_opensource_code_cli_tool_focused_on/ | Straight-Degree-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6bjdy | false | null | t3_1q6bjdy | /r/LocalLLaMA/comments/1q6bjdy/we_built_an_opensource_code_cli_tool_focused_on/ | false | false | self | 1 | null |
I built a "Forever Free" Voiceover Studio for Windows because cloud TTS subscriptions were killing my wallet. (Uses Local Coqui XTTS + Edge) | 0 | Hi everyone,
I’ve been working on some YouTube automation projects recently, and the cost of cloud text-to-speech services (like ElevenLabs) was eating up all my potential profit before I even started. Paying $100/month just to hit character limits felt wrong.
Being a developer, I knew excellent open-source models existed (like Coqui XTTS v2), but setting them up locally with Python, CUDA, and dependencies was a nightmare every single time.
So, I spent the last few weeks building **TubeMatic Audio**.
**What it does:** It’s a standalone Windows desktop app (Embedded Python, no installation needed) that runs entirely offline on your hardware.
* **Unlimited Voices:** It taps into Microsoft Edge’s neural voices for fast, free generation.
* **Voice Cloning:** If you have an NVIDIA GPU, it runs Coqui XTTS v2 locally for high-quality cloning. (It auto-fetches the \~8GB models only if you need them).
* **Translation & Dubbing:** I added a feature where you can plug in your Google Gemini API key to translate and dub scripts in one go.
**Why I made it paid ($39 lifetime):** I packaged it into a polished "product" rather than just a script, to save non-coders from the "pip install" hell. You buy it once, own it forever, and never pay a monthly subscription again.
I’d love to hear your feedback or answer any technical questions about how I handled the local inference!
[https://www.youtube.com/watch?v=1FBHE5qIyMI](https://www.youtube.com/watch?v=1FBHE5qIyMI) | 2026-01-07T10:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q6bg9b/i_built_a_forever_free_voiceover_studio_for/ | BrilliantBiscotti645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6bg9b | false | null | t3_1q6bg9b | /r/LocalLLaMA/comments/1q6bg9b/i_built_a_forever_free_voiceover_studio_for/ | false | false | self | 0 | null |
I built a multi-agent "Epistemic Engine" to stop LLM hallucinations before they snowball (FastCoref + MiniLM + Agent Debate). Open Source. | 0 | Hey everyone,
I’ve been frustrated with the current state of RAG. Most pipelines suffer from two major issues: **"Snowball Hallucinations"** (one wrong fact leads to a fake narrative) and **Sycophancy** (models agreeing with my biased prompts just to be helpful).
So I built **FailSafe** – a verification engine designed to be deeply skeptical by default. It’s not just a chatbot wrap; it’s an automated fact-checker that argues with itself.
**The Architecture ("Defense in Depth"):**
* **Layer 0 (The Firewall):** Before any expensive inference, I use statistical heuristics (Shannon Entropy, TF-IDF) to reject spam/clickbait inputs. Zero cost. (A minimal sketch of this layer follows the list.)
* **Layer 1 (Decomposition):** Uses `FastCoref` (DistilRoBERTa) and `MiniLM` to split complex text into atomic claims. I chose these SLMs specifically to keep it fast and runnable locally without needing massive VRAM.
* **The "Council" (Layer 4):** Instead of one agent generating an answer, I force a debate between three personas:
* *The Logician* (Checks for fallacies)
* *The Skeptic* (Applies Occam’s Razor/suppresses H-Neurons)
* *The Researcher* (Validates against search tools)
If the agents agree too quickly ("Lazy Consensus"), the system flags it as a failure.
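A minimal sketch of the Layer 0 idea, assuming a simple character-level Shannon entropy gate (the thresholds here are illustrative, not FailSafe's tuned values):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Character-level Shannon entropy in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def passes_firewall(text: str, min_len: int = 20, min_entropy: float = 2.5) -> bool:
    """Reject inputs that are too short or too repetitive before any inference."""
    return len(text) >= min_len and shannon_entropy(text) >= min_entropy
```

Low-entropy inputs (keyboard mashing, repeated characters) get rejected at zero inference cost; TF-IDF-style checks can be layered on the same way.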
**Why I'm sharing this:** I want to move beyond simple "Chat with PDF" apps towards high-stakes verification. I’d love for the community to tear apart the architecture or suggest better local models for the decomposition layer.
**Repo & Whitepaper:** \[[Amin7410/FailSafe-AI-Powered-Fact-Checking-System: FailSafe: An autonomous fact-checking framework leveraging Multi-Agent LLMs and Structured Argumentation Graphs (SAG) to verify claims with deep-web retrieval and reasoning.](https://github.com/Amin7410/FailSafe-AI-Powered-Fact-Checking-System)\]
Cheers! | 2026-01-07T09:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q6bdh7/i_built_a_multiagent_epistemic_engine_to_stop_llm/ | Early-Sound7213 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6bdh7 | false | null | t3_1q6bdh7 | /r/LocalLLaMA/comments/1q6bdh7/i_built_a_multiagent_epistemic_engine_to_stop_llm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=108&crop=smart&auto=webp&s=84ccdfb36df7f489f31032a495e4845585f881c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=216&crop=smart&auto=webp&s=24c4e90735963b073687bc468f40932da51af491', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=320&crop=smart&auto=webp&s=6c6da45322f2e6aa5332b27f46f70d9a9456f6da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=640&crop=smart&auto=webp&s=dfdb2c29d6ae5736b857b3a2d842594e02e2f7b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=960&crop=smart&auto=webp&s=7eaf0ac16f27ce8270f5d4258894b65529084caf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?width=1080&crop=smart&auto=webp&s=08bddc516f1d1f9b8b756176bcef99fa82f56f3a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Y9rYhV73ylCcb6pxk5wJWezFg8baBSUz5aLN7ja2zGA.png?auto=webp&s=b1a68055e077b86c94c427f7b6210989cc6455b1', 'width': 1200}, 'variants': {}}]} |
Best Local TTS with natural flow? | 1 | I'm looking for a Local/Open-Source TTS model that prioritizes natural "conversational" flow.
**What I need:**
* **Natural Flow:** Needs to sound like casual commentary/narration. Not over-acted, but not robotic.
* **Audio Quality:** I prefer no tokenizer artifacts (metallic sounds/buzzing), but I'm open to it if the flow is god-tier.
* **Pronunciation:** Good multilingual handling is a must. Phoneme support is a plus.
**Models I've tried:**
* **Kokoro:** Best fidelity, but sounds too "scripted/audiobook" and lacks human flow.
* **Kyutai:** Perfect natural flow and pronunciation, but prone to random noise/artifacts and lacks a good local wrapper.
* **VibeVoice 7b:** Great flow, but too heavy/slow and needs too many rerolls.
* **Chatterbox Turbo / Vox CPM:** Good quality, but they suffer from artifacts. They feel too "clone-focused" and miss that natural conversational vibe that Kyutai/VibeVoice have.
Any recommendations that hit the sweet spot? | 2026-01-07T09:48:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q6b9iq/best_local_tts_with_natural_flow/ | Wither_W | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6b9iq | false | null | t3_1q6b9iq | /r/LocalLLaMA/comments/1q6b9iq/best_local_tts_with_natural_flow/ | false | false | self | 1 | null |
Has anyone tested how the newest Rocm does in llms? | 50 | Been using Vulkan but the newest rocm is supposed to be quite a Performance jump and wanted to know if its worth the headache to install? | 2026-01-07T09:43:40 | Eden1506 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q6b6u7 | false | null | t3_1q6b6u7 | /r/LocalLLaMA/comments/1q6b6u7/has_anyone_tested_how_the_newest_rocm_does_in_llms/ | false | false | default | 50 | {'enabled': True, 'images': [{'id': 'z3s6igomewbg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=108&crop=smart&auto=webp&s=3917d44c0bde79121d5c434c54f60ca259a90339', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=216&crop=smart&auto=webp&s=cd89ac60ffb3c4417fc4980e110932c3545800ce', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=320&crop=smart&auto=webp&s=2278d885ef2842291b6de3a25239dbee20c547be', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=640&crop=smart&auto=webp&s=56b025d3c97db74a9cc2754f7f890856d8f441b4', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=960&crop=smart&auto=webp&s=b427f47a6070a28b7bdc87894b6929f164ae5841', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?width=1080&crop=smart&auto=webp&s=5243e8c73f2b70a02026795bfc2088212d190586', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/z3s6igomewbg1.png?auto=webp&s=ff86aa00881388b89b15faa7aa6e612e53d58aab', 'width': 1280}, 'variants': {}}]} | |
We built an open-source code cli tool made for autonomy | 1 | [removed] | 2026-01-07T09:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/1q6b36c/we_built_an_opensource_code_cli_tool_made_for/ | Straight-Degree-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6b36c | false | null | t3_1q6b36c | /r/LocalLLaMA/comments/1q6b36c/we_built_an_opensource_code_cli_tool_made_for/ | false | false | self | 1 | null |
We built an open source coding CLI focus on autonomous operations | 1 | [removed] | 2026-01-07T09:28:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q6ay86/we_built_an_open_source_coding_cli_focus_on/ | Straight-Degree-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6ay86 | false | null | t3_1q6ay86 | /r/LocalLLaMA/comments/1q6ay86/we_built_an_open_source_coding_cli_focus_on/ | false | false | 1 | null | |
TitanU worth getting? | 0 | Saw something on LinkedIn today that caught my eye. It’s called TitanU (titanuai.com). It seems to be an operating system that runs AI locally on your own machine with no cloud involvement, and it even works offline.
I’m thinking about purchasing it, but I want to know if it’s actually good before I do. Has anyone here tried it yet, and what do y'all think? | 2026-01-07T09:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q6anr6/titanu_worth_getting/ | automatickash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6anr6 | false | null | t3_1q6anr6 | /r/LocalLLaMA/comments/1q6anr6/titanu_worth_getting/ | false | false | self | 0 | null |
Visualizing LLM Model Collapse at Gen 20 | 1 | [removed] | 2026-01-07T08:38:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q6a619/visualizing_llm_model_collapse_at_gen_20/ | Significant_Fix9668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6a619 | false | null | t3_1q6a619 | /r/LocalLLaMA/comments/1q6a619/visualizing_llm_model_collapse_at_gen_20/ | false | false | self | 1 | null |
NousCoder-14B-GGUF is here! | 1 | RL post training on Qwen 3 14B
"On LiveCodeBench v6 (08/01/2024 - 05/01/2025), we achieve a Pass@1 accuracy of 67.87%, up 7.08% from the baseline Pass@1 accuracy of 60.79% of Qwen3-14B. We trained on 24k verifiable coding problems using 48 B200s over the course of four days." | 2026-01-07T08:33:13 | https://huggingface.co/AaryanK/NousCoder-14B-GGUF | KvAk_AKPlaysYT | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q6a32c | false | null | t3_1q6a32c | /r/LocalLLaMA/comments/1q6a32c/nouscoder14bgguf_is_here/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?auto=webp&s=81e2e7f666d9347317d71a51d415bb527929c644', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=108&crop=smart&auto=webp&s=e96eec8e5b2a4b8f3b6191ebfc34feef34c947e4', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=216&crop=smart&auto=webp&s=da489d6c556ef4752657c2b57ee83b3d24492d7f', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=320&crop=smart&auto=webp&s=d79b9d62a4a4c4ede2c07e2ef6f5dacd77406e1d', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=640&crop=smart&auto=webp&s=b59c49d178c2f341a958665b9686d84962973bbe', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=960&crop=smart&auto=webp&s=d103cdd7a99af6183d2306adab1eef7d9ae7b47e', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8.png?width=1080&crop=smart&auto=webp&s=642d2c8e3726da3b06da5febccefaf1fc7facbed', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'DMiOkJBVtF0q2KjwqWnPlke-IPF1A5a0Z_XDB7tGLn8'}], 'enabled': False} | |
Any LLM that can run on AMD Hawk Point NPU (Ryzen 8x00)? | 1 | Hi all,
I have a minipc with an AMD 8845HS APU, which has a 16 TOPS NPU. I know it's not much, but it would be nice to at least load some small model on it to see how it behaves. I mean, there are new LLM models released almost weekly :)

I did find FastFlowLM, which looks amazing, but unfortunately it supports only Strix APUs (Ryzen AI 300).

So has anybody here spent some time with these older APUs, trying to bring the NPU to some use in Windows 11? I tried to install the Ryzen AI Suite but it just hangs on creating a Conda environment... and yeah, I know I can use that NPU for webcam effects, but if that is all it can do, that is pretty bad :/
Thanks! :) | 2026-01-07T08:32:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q6a2gt/any_llm_that_can_run_on_amd_hawk_point_npu_ryzen/ | deb0ro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6a2gt | false | null | t3_1q6a2gt | /r/LocalLLaMA/comments/1q6a2gt/any_llm_that_can_run_on_amd_hawk_point_npu_ryzen/ | false | false | self | 1 | null |
GPT-OSS is a VERY GOOD model and no one can deny that | 1 | *the post is a little bit long; if you don't have time, the short version is: GPT-OSS is very efficient*

I did a lot of research on reasoning models and found something really important: hybrid models are more likely to be inefficient or outright dumb. If you want to create a model, you have to choose between making a thinker (even at very minimal levels, it still reasons) or making an instruct model that's pretty good at reasoning but much dumber than a reasoning-ready model, because it's aligned for instruction following.

Qwen3 models are generally very inefficient, especially the high-reasoning ones (hi, Qwen3-4B-Thinking-2507), and those models over-try to align with the user query instead of finding the actual solvable issue. If you want a Qwen to be efficient, you need to be very concise with very direct instructions to reduce the model's reasoning length, because the model is afraid of making a mistake instead of trying to solve it. That's pretty clear from the model saying "wait" more than actually solving anything: it wants to cover all possible probabilities, confirm, say "yeah, that's a good one. Wait? Maybe the user is sad", and loop itself again, because the probabilities are almost endless.

Nanbeige4-3B-Thinking-2511 is a good model, but it also suffers from the same issue and sometimes overthinks even more. Instead, it tries very hard to "perfect" the answer to the maximum possible level, like explaining an entire math lecture because you asked what 1+1 equals. (Don't go ask it 1+1 and tell me it says 2; that's just an example :) The model is actually pretty great and tries much less to make you happy, working to solve the problem itself in a much more accurate way that's sometimes excessive.

Ling and Ring models are great. I think they can improve more, but they are generally good; I have nothing to add about them.

Didn't try Youtu-LLM-2B, so I can't judge.

Mistral models are great for translation and creative writing; for reasoning... OK, I don't need to say it, you already know the answer.

GLM-4.5 Air is good; it's a very good coder, but it sometimes ignores or denies parts of your instructions. Overall it's near GPT-OSS performance, but with ~2x the activated parameters, not as optimized, and riskier to give direct access to files since it's much less safety-tuned.

GPT-OSS is the only model, the BEST for its size, that I can really give access to things on my device or talk to about something going on in my mind, and the model actually benefits me instead of trying to make me happy. Its safety features are sometimes actually a feature, not always a bad thing.

I understand that GPT-OSS sometimes tells you "no" to things that are perfectly normal if a single word in your message is "unsafe", and it spends a lot of tokens checking the policy, but that's actually a feature, because the model can recognize what should be done and what shouldn't. For example, if you give GPT-OSS agentic capabilities over parts of your device, it's very unlikely that the model performs a web search, finds "sudo rm -rf", and cooks your device; instead it will see it's against the policy because it's an unsafe command, which gives you higher trust in the model.

GPT-OSS is also very efficient token-wise; even on high reasoning it will only consume as many tokens as necessary for the highest-quality answer, which is a good thing, especially when running on a local machine.

GPT-OSS is also very flexible. Are you in a hurry? Set thinking to low. Are you solving math? Set it to high. Do you have 16 GB of RAM/VRAM available? GPT-OSS-20B. Do you have 96/128 GB of RAM/VRAM? GPT-OSS-120B.

The only bad thing about GPT-OSS is if you want a "friendship" with an LLM. GPT-OSS is a very cold model that sees everything as steps; even your feelings are steps. You can tell it you are so excited and happy, and where Mistral will celebrate with you and DeepSeek will write a poem expressing its congratulations, GPT-OSS will say "ok?" and you will regret talking to it. The model IS DEFINITELY NOT BUILT FOR THAT, and no one should depend on an LLM for emotional support anyway. | 2026-01-07T08:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q69y8i/gptoss_is_very_good_model_and_no_one_can_deny_that/ | Previous_Two_3201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q69y8i | false | null | t3_1q69y8i | /r/LocalLLaMA/comments/1q69y8i/gptoss_is_very_good_model_and_no_one_can_deny_that/ | false | false | self | 1 | null |
I can't make a Letta server | 1 | I can't get a Letta server working. I keep getting an error.
I'm a beginner, so I don't know much...
I've included the PowerShell log and screen below. Could you help me figure out what I need? Please.
https://preview.redd.it/lanzc7utzvbg1.png?width=1115&format=png&auto=webp&s=b417326e844ac40813a71b4a05d371f0a5d1b4c1
https://preview.redd.it/7qaovoztzvbg1.png?width=1115&format=png&auto=webp&s=580af8ca1f2cb16f5b9c7103febc16aa212e263d
| 2026-01-07T08:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q69w5d/i_cant_make_letta_server/ | Lanky_Variety_3024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q69w5d | false | null | t3_1q69w5d | /r/LocalLLaMA/comments/1q69w5d/i_cant_make_letta_server/ | false | false | 1 | null | |
Using n8n to orchestrate DeepSeek/Llama3 Agents via SSH (True Memory Persistence) | 1 | Everyone seems to use n8n with OpenAI nodes, but I found it too expensive for repetitive tasks requiring heavy context.
I switched my workflow to use the **n8n SSH Node** connecting to a local Ollama instance. The key is avoiding the REST API and using the interactive CLI via SSH instead. This allows keeping the session open (stateful) using a Session ID.
Basically (a rough sketch follows the list):
1. n8n generates a UUID.
2. Connects via SSH to my GPU rig.
3. Executes commands that persist context.
4. If the generated code fails, n8n captures the error and feeds it back to the same SSH session for auto-fixing.
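A rough sketch of the same flow outside n8n, assuming the stateful session lives in a tmux session on the rig keyed by the UUID (host, model, and session naming here are placeholders; the actual workflow does this through n8n's SSH node):

```python
import subprocess
import uuid

HOST = "user@gpu-rig"  # placeholder SSH target for the GPU machine
session = f"ollama-{uuid.uuid4()}"

def ssh(cmd: str) -> str:
    """Run a command on the rig over SSH and return its output."""
    return subprocess.run(["ssh", HOST, cmd], capture_output=True,
                          text=True, check=True).stdout

# 1. Create a persistent interactive session running the model once.
ssh(f"tmux new-session -d -s {session} 'ollama run llama3'")

# 2. Feed prompts into that same session; context persists between calls,
#    so error output can be piped back in for the auto-fix loop.
ssh(f"tmux send-keys -t {session} 'Write a fizzbuzz in bash' Enter")
```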
If you are interested in orchestrating local LLMs without complex frameworks (just n8n and bash), I explain how I built it here: [https://youtu.be/tLgB808v0RU?si=xNzsfESqV77VDTnk](https://youtu.be/tLgB808v0RU?si=xNzsfESqV77VDTnk) | 2026-01-07T08:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q69sxb/using_n8n_to_orchestrate_deepseekllama3_agents/ | jokiruiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q69sxb | false | null | t3_1q69sxb | /r/LocalLLaMA/comments/1q69sxb/using_n8n_to_orchestrate_deepseekllama3_agents/ | false | false | self | 1 | null |
Don't put off hardware purchases: GPUs, SSDs, and RAM are going to skyrocket in price soon | 1 | In case you thought it was going to get better:
**GPU** prices are going up. [AMD and NVIDIA are planning to increase prices every month starting soon.](https://www.trendforce.com/news/2026/01/05/news-nvidia-amd-reportedly-plan-price-hikes-starting-1q26-geforce-rtx-5090-may-reach-5000/)
**NAND flash** contract price [went up 20% in November](https://www.trendforce.com/price/flash/flash_contract), with [further increases in December](https://www.trendforce.com/research/download/RP251231KM). This means SSDs will be a lot more expensive soon.
**DRAM** [prices are going to skyrocket](https://www.trendforce.com/news/2026/01/07/news-memory-shortages-reportedly-spark-csp-buying-spree-2027-supply-contracts-eyed-as-early-as-q1/), with no increase in production capacity and datacenters and OEMs competing for everything.
Even **Consoles** are [going to be delayed due to the shortages.](https://insider-gaming.com/ram-prices-next-gen/)
> According to TrendForce, conventional DRAM contract prices in 1Q26 are forecast to rise 55–60% quarter over quarter, while server DRAM prices are projected to surge by more than 60% QoQ. Meanwhile, NAND Flash prices are expected to increase 33–38% QoQ
[Source.](https://www.trendforce.com/news/2026/01/07/news-memory-shortages-reportedly-spark-csp-buying-spree-2027-supply-contracts-eyed-as-early-as-q1/)
> Industry sources cited by Kbench believe the latest price hikes will broadly affect NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 lineup. The outlet adds that NVIDIA’s flagship GeForce RTX 5090 could see its price climb to as high as $5,000 later in 2026.
>NVIDIA is also reportedly weighing a 30% to 40% reduction in output for parts of its midrange lineup, including the RTX 5070 and RTX 5060 Ti, according to Kbench.
[Source.](https://www.trendforce.com/news/2026/01/05/news-nvidia-amd-reportedly-plan-price-hikes-starting-1q26-geforce-rtx-5090-may-reach-5000/) | 2026-01-07T07:32:28 | https://www.reddit.com/r/LocalLLaMA/comments/1q694ic/dont_put_off_hardware_purchases_gpus_ssds_and_ram/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q694ic | false | null | t3_1q694ic | /r/LocalLLaMA/comments/1q694ic/dont_put_off_hardware_purchases_gpus_ssds_and_ram/ | false | false | self | 1 | null |
Do you see instability or weird regressions when fine-tuning models? | 1 | I’m curious if others run into this in practice.
I’ve noticed that when models are retrained or fine-tuned (even slightly), internal
representations can shift a lot, leading to things like:
- unexpected drops in robustness
- brittle behavior under noise or distribution shift
- large variance between fine-tuning runs
- models that look fine on clean validation but break under stress tests
This feels different from classic overfitting or data leakage — more like internal
representations becoming unstable.
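For context, the crude check I've been using to see the shift: compare hidden states from two checkpoints on the same probe batch. A toy sketch, assuming Hugging Face-style models (everything here is illustrative):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def representation_shift(model_a, model_b, batch) -> float:
    """Mean cosine distance between last hidden states of two checkpoints
    on an identical tokenized probe batch."""
    h_a = model_a(**batch, output_hidden_states=True).hidden_states[-1]
    h_b = model_b(**batch, output_hidden_states=True).hidden_states[-1]
    return 1 - F.cosine_similarity(h_a, h_b, dim=-1).mean().item()
```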
Is this something you’ve observed in real pipelines?
If yes:
- how do you usually detect it?
- do you just retrain / regularize / accept it? | 2026-01-07T07:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q691wa/do_you_see_instability_or_weird_regressions_when/ | AppearanceCareful136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q691wa | false | null | t3_1q691wa | /r/LocalLLaMA/comments/1q691wa/do_you_see_instability_or_weird_regressions_when/ | false | false | self | 1 | null |
Qwen3-30B-VL knows about Care Bears | 1 | The second picture was what i provided to see what it would say. Didn’t think it would know about Care Bears.
Model: Qwen3-30B-VL-MLX-4bit, run in LM Studio
Honestly I’m impressed. | 2026-01-07T07:02:29 | https://www.reddit.com/gallery/1q68mhf | jesus359_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q68mhf | false | null | t3_1q68mhf | /r/LocalLLaMA/comments/1q68mhf/qwen330bvl_knows_about_care_bears/ | false | false | 1 | null | |
Pure LLMs for text extraction or OCR + LLM - which approach for document processing? | 1 | I'm working on a side project for a medical practice to digitize old patient intake forms and convert that into structured data.
The docs consist of a mix of printed + handwritten portions. Some of them also contain checkboxes - but most of them are poor scans!
When I started doing some research myself, I can see that people either:
a) Swear by LLMs (GPT, Claude) for extracting data and getting structured output
b) Pre-process the text through an OCR and then run the clean text through an LLM
The first option seems simpler, but when I tried it myself I noticed the results aren't consistent (LLM hallucinations, etc.). I'd love to throw the pages at GPT, skim through for mistakes and call it a day - it's easier, but budget is limited.

The second I haven't tried much, but so far I haven't gotten reliable outputs from Tesseract. Not sure if I'm doing something wrong.
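For what it's worth, poor scans usually need preprocessing before Tesseract sees them. A minimal sketch assuming OpenCV and pytesseract are installed (the thresholds are guesses, not tuned values):

```python
import cv2
import pytesseract

def ocr_page(path: str) -> str:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Upscale and binarize: Tesseract degrades badly on low-DPI, noisy scans.
    img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 31, 11)
    # --psm 6 assumes a single uniform block of text; forms may need other modes.
    return pytesseract.image_to_string(img, config="--psm 6")
```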
Has anyone tried both approaches? I'd love to know your suggestions, tips, but mainly: what approach has worked best for you? | 2026-01-07T06:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/1q68fum/pure_llms_for_text_extraction_or_ocr_llm_which/ | Fierce_Lucifer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q68fum | false | null | t3_1q68fum | /r/LocalLLaMA/comments/1q68fum/pure_llms_for_text_extraction_or_ocr_llm_which/ | false | false | self | 1 | null |
what is the biggest model that can be deployed on a Dell PowerEdge R630 | 0 | I have an old Dell PowerEdge R630 available with the following spec:

Processor : 2x Intel Xeon E5-2630 v4

Cores : 10 + 10 = 20

Threads : 20 + 20 = 40

Base : 2.20GHz Turbo : 3.10GHz

RAM : 32GB DDR4 (can be increased)

What is the biggest model that can be run on this server? | 2026-01-07T05:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q66uog/what_is_the_biggest_model_that_can_be_deployed_on/ | cisspstupid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q66uog | false | null | t3_1q66uog | /r/LocalLLaMA/comments/1q66uog/what_is_the_biggest_model_that_can_be_deployed_on/ | false | false | self | 0 | null |
Depth-adaptive inference on a Mixtral backbone 32 -> 24 active layers | 1 | Hi everyone,

I'm experimenting with a depth-adaptive inference setup on top of a Mixtral-style model.

The backbone has 32 transformer layers, but during inference we dynamically activate about 24 on average, depending on prompt complexity.

This is neither static pruning nor retraining:

– experts and routing are not modified

– the weights stay unchanged

– control happens only at runtime, during the forward pass

Inactive layers are not skipped outright: they receive an attenuated projection of the last active hidden state, to preserve representation continuity (a rough sketch follows below).
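A rough sketch of the soft skip in PyTorch (alpha and the gating rule here are illustrative, not our actual schedule):

```python
import torch.nn as nn

def adaptive_forward(layers: nn.ModuleList, hidden, active_mask, alpha: float = 0.9):
    """Forward through only the layers flagged active; inactive ones pass
    along an attenuated copy of the last active hidden state."""
    for layer, is_active in zip(layers, active_mask):
        if is_active:
            hidden = layer(hidden)
        else:
            # Soft skip: attenuate instead of a hard identity pass-through,
            # keeping the representation trajectory continuous.
            hidden = alpha * hidden
    return hidden
```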
So far this approach seems to offer a good trade-off between compute reduction and output stability.

I was wondering whether anyone here has explored something similar (dynamic depth vs fixed depth) on MoE models.

Has anyone worked in this direction on dynamic layer management? Or would you like to discuss it? | 2026-01-07T04:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1q66ao2/depthadaptive_inference_on_a_mixtral_backbone_32/ | Single_Error8996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q66ao2 | false | null | t3_1q66ao2 | /r/LocalLLaMA/comments/1q66ao2/depthadaptive_inference_on_a_mixtral_backbone_32/ | false | false | self | 1 | null |
Ghost Neural Network (GNN): autonomous trading agents that run locally, survive crashes, and don’t depend on servers | 0 | Most “AI trading bots” are just thin clients talking to a server.
GNN is exploring a different model: fully local, state-resilient agents that can trade even if the tab crashes, the browser reloads, or the network drops.
Core idea
• Local LLM-driven agents
• Deterministic execution logic
• Explicit state recovery
• Optional distributed compute, not mandatory servers
⸻
Architecture (high level)
• Local inference first
    • Runs on consumer hardware (Apple Silicon, GPUs, WebGPU where possible)
    • Models selected for bounded context + deterministic outputs
• Functional core / effectful shell
    • Trading logic lives in a deterministic state machine
    • Side effects (orders, API calls) are isolated
• Crash-safe execution
    • Service Worker / background process
    • Checkpoints + write-ahead logs
    • On reload: replay state → resume exactly where it left off
• No "black box autonomy"
    • Explicit risk rules
    • Hard entry blocks
    • Time-based exits
    • Every decision traceable
⸻
Why local matters
• No remote inference dependency
• No opaque server logic
• Lower latency, predictable behavior
• Agent stays alive even when UI dies
This feels closer to local LLM tooling than “AI SaaS.”
⸻
Compute scaling (optional)
If local hardware isn’t enough, GNN experiments with a distributed GPU market where:
• Compute is rented per session
• Agents remain locally controlled
• Remote nodes are stateless workers, not decision makers
Think local brain, remote muscle.
⸻
Current focus
• Spot-market agents first (bounded risk surface)
• Deterministic FSM-based strategies
• Recovery correctness > raw PnL
• Transparency over hype
No claims of AGI, no “self-learning god bots.”
⸻
Why I’m posting here
LocalLLaMA has some of the best discussions around:
• On-device inference
• Edge execution
• Practical autonomy without cloud lock-in
If you’re building or thinking about local agents that actually persist and recover, I’d love feedback—especially on:
• State management patterns
• Deterministic LLM usage
• Browser vs native execution tradeoffs
Happy to share diagrams or implementation details if there’s interest. | 2026-01-07T04:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q65xo4/ghost_neural_network_gnn_autonomous_trading/ | Stock_Law_3554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q65xo4 | false | null | t3_1q65xo4 | /r/LocalLLaMA/comments/1q65xo4/ghost_neural_network_gnn_autonomous_trading/ | false | false | self | 0 | null |
Setup help: I can’t decide what to use | 0 | Hello! I’m a recently disabled software engineer (mental health, I can’t do much most of the days I exist, but I have my surges). I’m currently trying to downsize things but still be able to use AI for personal projects.
Some of the AI systems I want to use ollama/OS models for:
- training (just lightly, I guess? Nothing too crazy) a literary-analysis tool based on some model that I'm still deciding on. Currently it's set up with Qwen. This is a simple AI pipeline designed to use function calls and structured prompts to execute tasks and focused analysis.
- “train” (I’m using the word wrong, I know) on a code base and using qwen30b for coding tasks. It wouldn’t be used for coding anything but a specific app in a specific stack.
- some other AI workflows for my wife’s photography business (probably similar to the literary analysis tools, but less power needed)
I’m willing to learn whatever I need to, but first I can’t decide what machine to use for the server? Everything will be dockerized and connected, with ports opened on the network, yada yada yada.
The systems I have:
First:
Nvidia GTX 3080 10GB
Ryzen 3900x
32GB DDR4 3200 RAM
Second:
Radeon 7900 XTX 24GB
Ryzen 9800x3d
64GB 6400 DDR5 RAM
Third:
MacBook Pro M1 Pro Max
64GB unified RAM
Woefully small drive, but I have externals for this one if need be.
I am also willing to sell the first system if it means I can get something else good for the task. If I use the MacBook Pro, I’ll start using my MacBook Air m1 for my coding machine (remote SSH connection to the server for the directory, using Claude code router to use the best coding model I can run on my local machine.
Advice?
| 2026-01-07T04:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q655ez/setup_help_i_cant_decide_what_to_use/ | Murlock_Holmes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q655ez | false | null | t3_1q655ez | /r/LocalLLaMA/comments/1q655ez/setup_help_i_cant_decide_what_to_use/ | false | false | self | 0 | null |
[Research] I implemented a routed attention mechanism (R-GQA) for faster long-context models. Then wrote a paper on it. | 27 | [R-GQA diagram using pytorch operations](https://preview.redd.it/v6vzstczmubg1.png?width=3347&format=png&auto=webp&s=249015d063395ee4381b6b7d56c2dd09cbe3e791)
So, a while ago I thought to myself: "Those query heads in grouped-query attention... what are the chances that at any given time they all do something different and useful?"
I hypothesized that for any given token, maybe only 1 or 2 query heads per KV group are actually relevant. Thus, I created **R-GQA (Routed Grouped-Query Attention)**. It’s similar to regular GQA, but it uses a learned router to select the most relevant query heads and only computes attention for those.
I was honestly shocked that seemingly this hadn't been done before. So I implemented it, trained up a bunch of models at different scales on my RTX 3090, and looked at the results.
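A minimal sketch of the routing step, assuming a linear router scoring every query head and top-k selection within each KV group (head counts and k are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

def route_query_heads(x: torch.Tensor, router: nn.Linear,
                      heads_per_group: int, k: int = 1) -> torch.Tensor:
    """x: [batch, seq, dim]; router scores every query head per token.
    Returns indices of the k highest-scoring heads inside each KV group;
    attention is then computed only for those heads."""
    scores = router(x).mean(dim=1)                         # [batch, n_q_heads]
    groups = scores.view(scores.size(0), -1, heads_per_group)
    return groups.topk(k, dim=-1).indices                  # [batch, n_groups, k]
```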
**The Experiment:**
I trained GQA baseline models on Wikipedia at 82M, 162M, and 940M parameters and compared them against R-GQA.
**The Results:**
* **Head Specialization:** With regular GQA, heads in a group converge to extremely similar representations. With R-GQA, the router forces them to be orthogonal (highly diverse).
* **Speed:** I achieved up to a **+40% training throughput improvement**, which is quite good.
* **The "L":** I compared performance against **SwitchHead**, which is conceptually similar but routes Values instead of Queries. Unfortunately for me, SwitchHead outperformed my variant on perplexity.
* **The Wall:** At the largest model scale (940M), my mechanism stopped being competitive and fell off against the GQA baseline. It seems aggressive sparsity hurts when you really need the capacity.
I'm providing the code and the current draft of the paper because I think the findings are valuable, even if the architecture isn't SOTA yet.
**Repo:** [https://github.com/Snowyiu/rgqa/](https://github.com/Snowyiu/rgqa/)

**Paper:** [https://github.com/Snowyiu/rgqa/blob/main/rgqa_paper.pdf](https://github.com/Snowyiu/rgqa/blob/main/rgqa_paper.pdf)
**One last thing:** I would like to publish on ArXiv, but I am stuck needing an endorsement from a researcher in this field. If there's anyone here who could help with that, it would be much appreciated! | 2026-01-07T03:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q6524l/research_i_implemented_a_routed_attention/ | Snowyiu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6524l | false | null | t3_1q6524l | /r/LocalLLaMA/comments/1q6524l/research_i_implemented_a_routed_attention/ | false | false | 27 | null | |
I built a "Fail-Closed" Circuit Breaker for my Agent because prompts weren't enough to stop hallucinations. Open sourcing it today. (Python) | 2 | **The Problem:**
I've been building a financial agent for my startup, and I realized that no matter how much I optimized my System Prompt (e.g., "Do not refund more than $1000"), the LLM would still occasionally hallucinate huge numbers or drift logically.
The scary part wasn't the hallucination itself—it was that if my validation logic crashed or the network failed, the agent would default to "executing" the tool.
**The Solution:**
I built a middleware called **FailWatch**. It sits between the agent and the tool execution to enforce deterministic safety.
**Look at the screenshot above. It handles 3 distinct scenarios:**
1. **Hybrid Blocking (Top log):** The agent tried to spend $2000. FailWatch blocked it using a hard Python check (`amount < 1000`), NOT just an LLM opinion. It also detected that the agent skipped its reasoning steps.
2. **Human-in-the-Loop (Middle log):** For gray-area actions, it pauses execution and pings me (CLI/Slack) for approval.
3. **Fail-Closed Architecture (Bottom log - The important part):** I simulated a network outage (server down). Instead of letting the agent run wild, the SDK caught the connection error and **locked everything down** (`Mode: closed`). The money stayed safe.
**How to use it:**
It's a simple decorator for your Python functions. Unlike standard evals, this runs *synchronously* before the tool is called.
    from failwatch import FailWatchSDK

    # Initialize with fail-closed safety
    fw = FailWatchSDK(default_fail_mode="closed")

    @fw.guard(
        policy={
            "limit": 1000,
            "forbidden_keywords": ["delete", "drop"]
        }
    )
    def transfer_money(user_request, tool_args):
        # This code NEVER runs if:
        # 1. The guard server is down
        # 2. The amount > 1000
        # 3. The LLM detects malicious intent
        pass
**Links:**
Repo: [https://github.com/Ludwig1827/FailWatch](https://github.com/Ludwig1827/FailWatch), or via pip:

    pip install failwatch
I'd love to hear how you guys are handling "fail-closed" logic in your agent frameworks! Does anyone else use a separate "Safety Server" pattern? | 2026-01-07T03:52:45 | Independent_Cow5074 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q64zgt | false | null | t3_1q64zgt | /r/LocalLLaMA/comments/1q64zgt/i_built_a_failclosed_circuit_breaker_for_my_agent/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'q43zz8owmubg1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=108&crop=smart&auto=webp&s=f745df126fca1b443253e22bc9cd9a1ed2d61042', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=216&crop=smart&auto=webp&s=4bfb83e6caaf5633102ffac8bdb954812ea4fb49', 'width': 216}, {'height': 95, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=320&crop=smart&auto=webp&s=0a944752aacc790553c88da35603aece1333024f', 'width': 320}, {'height': 191, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=640&crop=smart&auto=webp&s=2c9c79e910d682571a025dead147a7942f4d36a8', 'width': 640}, {'height': 287, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=960&crop=smart&auto=webp&s=0caa7791ba7837e43336a4da30b731e77f4ef228', 'width': 960}, {'height': 323, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?width=1080&crop=smart&auto=webp&s=9cd2c21c10b261bd553b3e0170e9d7c59bf29e84', 'width': 1080}], 'source': {'height': 408, 'url': 'https://preview.redd.it/q43zz8owmubg1.png?auto=webp&s=a6d88c0e826c8c2ef8abfbf9151c3098e38893ca', 'width': 1364}, 'variants': {}}]} | |
Running ACE-Step locally: 4-minute music generation in 20 seconds on 8GB VRAM (vs Suno's cloud API) | 9 | I got tired of Suno's API rate limits and $30/month subscription, so I set up ACE-Step to run locally. It generates 4 minutes of music in \~20 seconds and works on 8GB VRAM with CPU offload.
**Link:** [https://medium.com/gitconnected/i-generated-4-minutes-of-k-pop-in-20-seconds-using-pythons-fastest-music-ai-a9374733f8fc](https://medium.com/gitconnected/i-generated-4-minutes-of-k-pop-in-20-seconds-using-pythons-fastest-music-ai-a9374733f8fc)
\------------------------------------------------------------
**Local setup advantages:**
* No rate limits or API costs
* Full control over model (LoRA training, stem generation)
* Privacy (no data sent to cloud)
* Unlimited generations ($0 after GPU purchase)
**Hardware optimization covered:**
* CPU offload: 16GB VRAM → 7.5GB (tested on RTX 4060)
* 8-bit quantization: 16GB → 9GB, only 25% slower
* BF16 vs FP16 benchmarks
* Batch processing with memory management
**What I covered in the article:**
* Windows installation hell (12 common errors + fixes)
* Quality control for seed variance (CFG/steps optimization)
* Why most existing AI music models (MusicGen, Stable Audio, Suno API, AudioCraft) are **too slow and too expensive** for real workflows
* How **ACE-Step’s diffusion-based architecture** enables **multi-minute music generation in seconds**, instead of token-by-token autoregressive generation
* Full **local setup guide** (Python, PyTorch, CUDA, VRAM requirements) — runs on **8GB VRAM with offloading**
* Step-by-step **Python examples** for:
* Instrumental music generation
* Full songs with vocals
* Korean / K-Pop-style vocal generation
* How prompt structure, guidance scale, seeds, and duration affect output quality and consistency
* Advanced features:
* Stem-style generation (drums, bass, synths separately)
* Voice reference / cloning support
* Batch generation for variations
* LoRA loading for genre specialization
* **Production-ready usage**, not demos:
* FastAPI backend for real-time music generation
* Performance optimizations (FP16 vs BF16, memory handling)
**Real-world projects:**
* Adaptive game music system (cached, intensity-aware)
* DMCA-free music for YouTube/TikTok
Happy to share benchmarks or optimization tips if anyone's running into VRAM issues.
| 2026-01-07T03:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q64qpx/running_acestep_locally_4minute_music_generation/ | DecodeBuzzingMedium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q64qpx | false | null | t3_1q64qpx | /r/LocalLLaMA/comments/1q64qpx/running_acestep_locally_4minute_music_generation/ | false | false | self | 9 | null |
llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) | 95 | I’m seeing a significant throughput difference between **llama.cpp** and **Ollama** when running the same model locally.
**Setup:**
* Model: **Qwen-3 Coder 32B**
* Precision: **FP16**
* Hardware: **RTX 5090 + RTX 3090 Ti**
* Task: code generation
**Results:**
* **llama.cpp:** \~52 tokens/sec
* **Ollama:** \~30 tokens/sec
Both runs use the same model weights and hardware. The gap is \~70% in favor of llama.cpp.
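For anyone who wants to reproduce this kind of comparison, here's a rough way to time it, assuming both servers expose their OpenAI-compatible endpoints (URLs and model names are placeholders for your local setup):

```python
import time
import requests

def tokens_per_sec(base_url: str, model: str, prompt: str, n: int = 512) -> float:
    """Crude wall-clock throughput against an OpenAI-compatible endpoint."""
    t0 = time.time()
    r = requests.post(f"{base_url}/v1/completions", json={
        "model": model, "prompt": prompt, "max_tokens": n, "temperature": 0,
    }).json()
    return r["usage"]["completion_tokens"] / (time.time() - t0)

print(tokens_per_sec("http://localhost:8080", "qwen3-coder", "Write quicksort in C"))
print(tokens_per_sec("http://localhost:11434", "qwen3-coder:32b", "Write quicksort in C"))
```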
Has anyone dug into why this happens? Possibilities I’m considering:
* different CUDA kernels / attention implementations
* default context or batching differences
* scheduler or multi-GPU utilization differences
* overhead from Ollama’s runtime / API layer
Curious if others have benchmarked this or know which knobs in Ollama might close the gap. | 2026-01-07T03:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/1q64f26/llamacpp_vs_ollama_70_higher_code_generation/ | Shoddy_Bed3240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q64f26 | false | null | t3_1q64f26 | /r/LocalLLaMA/comments/1q64f26/llamacpp_vs_ollama_70_higher_code_generation/ | false | false | self | 95 | null |
I built a tool to clean HTML pages for RAG (JSON / MD / low-noise HTML) | 1 | When building RAG pipelines, I kept fighting HTML noise:
menus, footers, repeated blocks, JS-rendered content.
I built a small service that:
\- Extracts pages into structured JSON or Markdown
\- Generates low-noise HTML for embeddings
\- Handles JS-heavy sites (SPAs, dashboards, etc.)
Live demo (no signup):
[https://page-replica.com/structured/live-demo](https://page-replica.com/structured/live-demo)
This grew out of my prerendering work, but the structured output is very useful for RAG pipelines.
| 2026-01-07T03:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q6469u/i_built_a_tool_to_clean_html_pages_for_rag_json/ | nirvanist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6469u | false | null | t3_1q6469u | /r/LocalLLaMA/comments/1q6469u/i_built_a_tool_to_clean_html_pages_for_rag_json/ | false | false | self | 1 | null |
Anyone integrating Perplexity or hybrid external nodes into a local-first AI stack ? | 1 | I’m building a modular AI system entirely local:
- Multiple LLMs (Mistral, LLaMA, Qwen)
- Agents for parsing, recon, multimodal input
- Everything airgapped or API-isolated
So far, my stack works as an autonomous mesh — but I’m experimenting with ways to integrate a minimal external reasoning layer.
Has anyone here:
- Used Perplexity’s API (beyond docs) for filtered search / context refinement?
- Found workarounds for limiting trace/logs?
- Tried using Perplexity as a controlled node in a hybrid local/offline setup?
Not interested in LangChain or SaaS stacks. Just quiet integrations.
If you’ve explored similar things (even under NDA), curious to compare notes. | 2026-01-07T03:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q63xxl/anyone_integrating_perplexity_or_hybrid_external/ | visitor_m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q63xxl | false | null | t3_1q63xxl | /r/LocalLLaMA/comments/1q63xxl/anyone_integrating_perplexity_or_hybrid_external/ | false | false | self | 1 | null |
Coordinating local LLM agents without a manager: stigmergy from ant colonies | 7 | Most multi-agent setups use a manager to delegate tasks. But managers become bottlenecks - add more agents, get diminishing returns.
I tried a different approach borrowed from ant colonies: agents don't communicate with each other at all. Instead, they read "pressure" signals from the shared artifact and propose changes to reduce local pressure. Coordination emerges from the environment, not orchestration.
Running qwen2.5-coder (1.5B) via Ollama on a shell script improvement task. Agents see shellcheck signals (errors, warnings, style issues) for their region only. High pressure = needs work. They propose patches, system validates and applies the best ones.
Fitness values decay over time (like ant pheromones). Even "fixed" regions gradually need re-evaluation. Prevents the system from getting stuck.
Early results: adding agents scales linearly until I/O bottlenecks hit. Zero inter-agent messages. Still experimenting and will post more results as I find them.
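A toy sketch of the decaying-fitness signal (constants and aggregation are illustrative, not the actual implementation):

```python
def tick(fitness: dict, decay: float = 0.98) -> dict:
    """fitness maps region -> confidence that it's in good shape.
    Like pheromone evaporation, confidence fades every tick, so even
    'fixed' regions eventually look stale and attract re-evaluation."""
    return {region: value * decay for region, value in fitness.items()}

def pressure(fitness: dict, signals: dict) -> dict:
    """Pressure an agent sees for its region: shellcheck issue count,
    amplified as confidence in the region decays."""
    return {r: signals.get(r, 0) + (1.0 - fitness.get(r, 0.0))
            for r in set(fitness) | set(signals)}
```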
Write-up: [https://www.rodriguez.today/articles/why-multi-agent-systems-dont-need-managers](https://www.rodriguez.today/articles/why-multi-agent-systems-dont-need-managers) | 2026-01-07T02:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q63rju/coordinating_local_llm_agents_without_a_manager/ | rrrodzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q63rju | false | null | t3_1q63rju | /r/LocalLLaMA/comments/1q63rju/coordinating_local_llm_agents_without_a_manager/ | false | false | self | 7 | null |
Llama fMRI | 0 | > Instead of a flat internal state, I got structure.
In my visualization:
* **Size = connectivity**
* **Color = K2**
* **Height = KL**
Even under minimal prompting, the model exhibits a non-uniform, spatially coherent activation geometry.
In other words: **baseline ≠ nothing happening.**
It’s a *resting state* with its own topology.
[nothing like the "low structure" I anticipated](https://preview.redd.it/74l84qwedubg1.png?width=1593&format=png&auto=webp&s=1c25a787244f5e1c4d0eb885668f9b25ea3641f1)
| 2026-01-07T02:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q63og1/llama_fmri/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q63og1 | false | null | t3_1q63og1 | /r/LocalLLaMA/comments/1q63og1/llama_fmri/ | false | false | 0 | null | |
R200 and RTX 6000 Rubin speculation | 0 | Since rough hardware numbers for R200 (a potential name for the top Rubin chip) were released at CES, we can extrapolate from them to estimate the specs of R200 and the RTX 6000 Rubin.

HBM4 has doubled its bus width per stack according to Wikipedia, so we can expect R200's VRAM to be 2x8192-bit and its size to balloon to 384GB.

Since 4GB GDDR7 modules are still not available, we can be conservative here and expect the 6000 Rubin to get only a clock-speed increase relative to the 6000 Blackwell, just like the 4090 over the 3090. This is a bummer, but if we expect the 6000 Rubin to be available at the end of the year or early next year, it's possible we get a 128GB card with 4GB modules.

Tensor Core F16 with F32 accumulate sparse (i.e., full-precision training) increased from 4.5PF on B200 to 8PF on R200, the result of moving from a 4nm to a 3nm process. So we can expect the 6000 Rubin to go to about 1.1PF. This will be the baseline boost for most precisions.
On the other hand, we should normally see TC F8 with F16 accumulate sparse getting the same increase as F16/F32, but instead we see a huge boost from 9PF to 35PF, so we can guess there must be some new dedicated hardware providing this extra boost on Rubin.

The same logic applies to NVFP4 dense. So if we do training and inference in these precisions, we can expect a huge boost (see the quick check below).
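To make the extrapolation explicit, here's the back-of-envelope math as a quick Python check (numbers from the table below):

```python
# Apply the R200/B200 ratio per precision to the RTX 6000 Blackwell numbers
# (all in PFLOPS).
b200 = {"f16_sparse": 4.5, "f8_sparse": 9.0, "nvfp4_dense": 9.0}
r200 = {"f16_sparse": 8.0, "f8_sparse": 35.0, "nvfp4_dense": 50.0}
bw6k = {"f16_sparse": 0.625, "f8_sparse": 1.25, "nvfp4_dense": 1.25}
rubin_6000 = {k: round(bw6k[k] * r200[k] / b200[k], 2) for k in b200}
print(rubin_6000)  # {'f16_sparse': 1.11, 'f8_sparse': 4.86, 'nvfp4_dense': 6.94}
```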
All in all, 6000 Rubin seems exciting. I am saving 10 grand for it. What do you think?
|Model|R200|B200|6000 Rubin|6000 Blackwell|
|:-|:-|:-|:-|:-|
|VRAM|HBM4|HBM3E|GDDR7|GDDR7|
|GB|384|192|96|96|
|bit|2x8192|2x4096|512|512|
|MHz|2750|2000|4712|4375|
|GB/s|22528|8192|1930|1792|
|FP16/F32 acc sparse|8PF|4.5PF|1.1PF|0.625PF|
|F8/F16 acc sparse|35PF|9PF|4.8PF|1.25PF|
|NVFP4 dense|50PF|9PF|6.9PF|1.25PF|
| 2026-01-07T02:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/1q63bm1/r200_and_rtx_6000_rubin_speculation/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q63bm1 | false | null | t3_1q63bm1 | /r/LocalLLaMA/comments/1q63bm1/r200_and_rtx_6000_rubin_speculation/ | false | false | self | 0 | null |
[Model review] LiquidAI/LFM2.5-VL-1.6B - tested as OCR | 10 | My testing (llama.cpp, BF16) revealed a few findings:
- **SOMETIMES CONFUSES LETTERS/NUMBERS:** do not rely on this to capture accurate digits from nutrition labels, asset tag stickers, etc. In at least some instances it confused 9 with 8, T with 1, and so on.
- Outside of OCR, can identify the basic theme/elements of the image.
- Overall good performance on small images like screenshots and Reddit content
- For photographed page images, can sometimes enter into a repetition loop
- Does not like pages with very dense text (that's only tested on one image)
- The min/max image-tokens setting has a sweet spot in the 256-512 range. Text sometimes suffers from errors if the min/max values are shifted toward the 64-256 range used in the model card.
I used some variant of the following prompt for my testing:
> OCR the text, paying careful attention that you get all the words and letters right. Check carefully all the letter shapes. Reply with just the text. Use newlines where required.
-----
Overall, this is not a bad small model. At 1.6B params, it's a quite surprising that it's so versatile. I would incorporate something like it into a mass image captioning pipeline to enable text search over an image corpus.
If anyone else tested it here, I'm curious to see how it performs for you. | 2026-01-07T02:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q6331b/model_review_liquidailfm25vl16b_tested_as_ocr/ | Corporate_Drone31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6331b | false | null | t3_1q6331b | /r/LocalLLaMA/comments/1q6331b/model_review_liquidailfm25vl16b_tested_as_ocr/ | false | false | self | 10 | null |