| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LoongFlow: Open Source Implementation of Evolutionary Agent Framework | 1 | [removed] | 2026-01-07T02:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q62vsg/loongflow_open_source_implementation_of/ | Sea_Individual2470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q62vsg | false | null | t3_1q62vsg | /r/LocalLLaMA/comments/1q62vsg/loongflow_open_source_implementation_of/ | false | false | self | 1 | null |
Why not Qwen3-30B Quantized over qwen3-14B or gemma-12B? | 21 | **I am learning :)**
I have a 3080 Ti with 12GB of VRAM, 32GB of RAM, and a 5900X. With this I can run qwen3-30b-a3b-thinking-2507 (3.3B activated parameters) in LM Studio at 20 tok/sec, which I believe is quantized, right? It runs pretty well and gives good answers. Why would I use qwen3-14b or gemma 12b, which I see recommended more often for a computer with my specs, over this?
My use case is primarily just a general AI that I can have search the web, clean up writing, troubleshoot IT issues on my homelab, and answer general questions.
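For context, a rough back-of-the-envelope sketch of the sizes involved (assuming ~4.5 bits per weight for a Q4_K_M-style quant, which is only an approximation; real memory use also includes KV cache and runtime overhead):

```python
# Rough weight-size math for quantized models (approximation only).
def approx_weights_gb(params_billion, bits_per_weight=4.5):
    return params_billion * bits_per_weight / 8  # GB, since params are given in billions

print(approx_weights_gb(30))   # ~16.9 GB -> can't fit entirely in 12 GB VRAM, spills to RAM
print(approx_weights_gb(14))   # ~7.9 GB  -> fits fully on the GPU
print(approx_weights_gb(3.3))  # ~1.9 GB  -> weights actually active per token in the 30B MoE
```

The usual explanation for the decent speed is that last line: only ~3.3B parameters are active per token, so the experts sitting in system RAM hurt far less than they would for a dense 30B model.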
Thanks! | 2026-01-07T02:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/1q62pyh/why_not_qwen330b_quantized_over_qwen314b_or/ | arktik7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q62pyh | false | null | t3_1q62pyh | /r/LocalLLaMA/comments/1q62pyh/why_not_qwen330b_quantized_over_qwen314b_or/ | false | false | self | 21 | null |
NousResearch/NousCoder-14B · Hugging Face | 157 | from NousResearch:
"We introduce *NousCoder-14B*, a competitive programming model post-trained on [Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) via reinforcement learning. On LiveCodeBench v6 (08/01/2024 - 05/01/2025), we achieve a Pass@1 accuracy of 67.87%, up 7.08% from the baseline Pass@1 accuracy of 60.79% of Qwen3-14B. We trained on 24k verifiable coding problems using 48 B200s over the course of four days." | 2026-01-07T01:37:25 | https://huggingface.co/NousResearch/NousCoder-14B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q61wpv | false | null | t3_1q61wpv | /r/LocalLLaMA/comments/1q61wpv/nousresearchnouscoder14b_hugging_face/ | false | false | default | 157 | {'enabled': False, 'images': [{'id': '5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=108&crop=smart&auto=webp&s=92ff6decc1f493533f9bb64d764586339106d8fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=216&crop=smart&auto=webp&s=4d28454bedd0f208ed510294f9556e60515eaee8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=320&crop=smart&auto=webp&s=3a97f04cf153d78e44a84c547275591ffc3e96a6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=640&crop=smart&auto=webp&s=1d0894b4bfc8c3fc81d8f7c59a878b6ad54d6614', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=960&crop=smart&auto=webp&s=bcbc6dd214b0226ed9583f48048a7fec6cda11ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?width=1080&crop=smart&auto=webp&s=fccaba430f57f574ca3dbb4da5bc9a05d165e821', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5B9NQ05vM8G6MB5IJCuanVCmVkz4L0DmuyDbK4fKfHU.png?auto=webp&s=6682bf7d68d76d222061531a16fc899887d56e24', 'width': 1200}, 'variants': {}}]} |
NO LAWSUIT if AI fucks up with policy (saving teams) | 0 | AI today is writing emails, making decisions, and touching sensitive data.
Which means every mistake your AI makes is now **your legal liability**.
One bad output. One leaked PII. One GDPR slip.
And suddenly AI risk becomes **legal risk**.
You can either deal with regulators, lawsuits, audits, and policy hell yourself.
Or you can use **Paralegal**.
If your AI / LLM application fucks up with policy, privacy, or compliance,
**Paralegal indemnifies you instantly against AI failures.**
We handle the legal complexity so your product doesn’t die from it.
Early access: [**paralegalapp@proton.me**](mailto:paralegalapp@proton.me)
**TL;DR**
If your AI fucks up and regulators come after you, we save your ass.
Sounds good? Use Paralegal.
Takes less than a minute to get started. | 2026-01-07T01:24:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q61mds/no_lawsuit_if_ai_fucks_up_with_policy_saving_teams/ | AdSure3977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q61mds | false | null | t3_1q61mds | /r/LocalLLaMA/comments/1q61mds/no_lawsuit_if_ai_fucks_up_with_policy_saving_teams/ | false | false | self | 0 | null |
3x Mi50 32GB LLM workstation build help | 2 | I'm trying to run 3x Mi50 32GB in Asrock Taichi X299 XE and it doesn't get to the OS with all cards no matter what I try. Currently it's failing with three radeon cards, but I also tried 2x Nvidia Tesla P40 + 1x Tesla P100 and it also had the same issue. Tried on both Windows 10 Pro and Ubuntu 22.04 LTS (fresh install on two different drives). I either get a boot loop, system freeze or on linux specifically I can get "amdgpu 0000:67:00.0: amdgpu: trn=2 ACK should not assert! wait again !" on the screen repeatedly.
I can also boot with 4 cards (2x Mi50 or 2x P40 plus two GTX 1080s) as long as the other two are consumer cards, but simply not with three of these datacenter cards. I do have a 1000W PSU, which is a bit on the edge, but again, I did try running half of the cards on another 750W PSU with the same problem, so I think this is mainly a firmware issue rather than a hardware one.
I'm also running the latest 1.90 non-beta BIOS. I have set UEFI boot, CSM off, Above 4G Decode on, Secure Boot off. I was also thinking about reflashing the cards themselves, as I've heard this solves some performance issues as well, but I'm saving that as a last resort since I do not want to brick them.
One more note: I initially also had problems running even just two cards on Ubuntu, but I solved it with:
`GRUB_CMDLINE_LINUX_DEFAULT="pci=realloc quiet splash"`. This is what leads me to believe that this whole thing is a PCIe BAR issue.
That's why I wanted to ask whether anyone has stumbled upon a beta BIOS solving this exact BAR issue on this ASRock board, or can recommend a cheap board where these cards work 100% (or, best case, post their setup with these cards).
I'd be really, really glad for any input because I'm somewhat at my wits' end with this system. Thank you very much in advance.
Also some numbers with the current two Mi50's on ROCm 6.3.3 for extra datapoints if anyone is interested:
Mistral-Large-Instruct-2407-123b-IQ3_M
> prompt eval time = 36411.06 ms / 2683 tokens ( 13.57 ms per token, 73.69 tokens per second)
> eval time = 94273.55 ms / 347 tokens ( 271.68 ms per token, 3.68 tokens per second)
> total time = 130684.61 ms / 3030 tokens
GLM-4.5-Air-REAP-82B-A12B-Q4_K_L
> prompt eval time = 5597.85 ms / 2103 tokens ( 2.66 ms per token, 375.68 tokens per second)
> eval time = 41345.73 ms / 1112 tokens ( 37.18 ms per token, 26.90 tokens per second)
> total time = 46943.58 ms / 3215 tokens
Llama-3.3-70B-Q5_K_M
> prompt eval time = 20240.61 ms / 2101 tokens ( 9.63 ms per token, 103.80 tokens per second)
> eval time = 46757.22 ms / 372 tokens ( 125.69 ms per token, 7.96 tokens per second)
\> total time = 66997.83 ms / 2473 tokens | 2026-01-07T01:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q61lsz/3x_mi50_32gb_llm_workstation_build_help/ | Ok_Top9254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q61lsz | false | null | t3_1q61lsz | /r/LocalLLaMA/comments/1q61lsz/3x_mi50_32gb_llm_workstation_build_help/ | false | false | self | 2 | null |
NO LAWSUIT if AI fucks up with policy (saving teams) | 1 | **i will keep it short and sweet.**
**AI is now writing emails, generating decisions, and touching sensitive data, making everyone liable for policy violations.**
**If your AI / LLM application fucks up with policy and AI risk becomes legal risk (PII, GDPR, et cetera) -- either handle all the lawsuits and legal complexity yourself, or USE PARALEGAL TO INDEMNIFY YOURSELF INSTANTLY FROM AI FAILURES.**
**Those interested in using Paralegal, please email** [**paralegalapp@proton.me**](mailto:paralegalapp@proton.me) **to get early access.**
**TL;DR**
**if ai fucks up and regulators come after you, we save your ass. sounds good? use now. takes less than a minute to get started.**
[](https://www.reddit.com/submit/?source_id=t3_1q61imi) | 2026-01-07T01:23:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q61l5n/no_lawsuit_if_ai_fucks_up_with_policy_saving_teams/ | AdSure3977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q61l5n | false | null | t3_1q61l5n | /r/LocalLLaMA/comments/1q61l5n/no_lawsuit_if_ai_fucks_up_with_policy_saving_teams/ | false | false | self | 1 | null |
Nvidia owners are about to have a very good time ( llama.cpp ) | 32 | Looks like there's a jan 2026 pre-build of llama.cpp kicking out these numbers. | 2026-01-07T01:10:12 | mr_zerolith | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q61a3e | false | null | t3_1q61a3e | /r/LocalLLaMA/comments/1q61a3e/nvidia_owners_are_about_to_have_a_very_good_time/ | false | false | default | 32 | {'enabled': True, 'images': [{'id': 'az1gy68mutbg1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=108&crop=smart&auto=webp&s=2135470c6cfb5b59367536ea9b2ab35f18ea3a86', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=216&crop=smart&auto=webp&s=b6d62bcbc2b24a458851108e9c2a111681556c5b', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=320&crop=smart&auto=webp&s=aab756ea59f5138f2e209d70b83de716a1052258', 'width': 320}, {'height': 231, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=640&crop=smart&auto=webp&s=a0ea0996015db8985cf8bc3a1ca7f4915e052861', 'width': 640}, {'height': 346, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=960&crop=smart&auto=webp&s=1fc522e8bb252714f4fe60f7c30ba55163f7a228', 'width': 960}, {'height': 390, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?width=1080&crop=smart&auto=webp&s=b2417ce20f603d2ba024ba08d92995d019cc1e5b', 'width': 1080}], 'source': {'height': 550, 'url': 'https://preview.redd.it/az1gy68mutbg1.jpeg?auto=webp&s=c7f76544ce085953d60642254a5b655235ce2e24', 'width': 1523}, 'variants': {}}]} | |
Razer is demonstrating an “AI accelerator” box with a Wormhole n150 processor from Tenstorrent at CES | 118 | There is a press release from Tenstorrent as well, but I haven’t seen anyone test it out.
From what I’ve seen before the hardware isn’t super impressive. The n150 usually comes as a PCIe dev board with 12GB memory for $1000. | 2026-01-07T01:07:27 | https://wccftech.com/razer-partners-tenstorrent-goes-into-full-ai-mode/ | Hasuto | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1q617ug | false | null | t3_1q617ug | /r/LocalLLaMA/comments/1q617ug/razer_is_demonstrating_a_ai_accelerator_box_with/ | false | false | default | 118 | {'enabled': False, 'images': [{'id': '55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=108&crop=smart&auto=webp&s=18dbc2d871b4de52835bed0cd15f81b0b0331b74', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=216&crop=smart&auto=webp&s=1d2a8649d997535580f3286faae0e334c46c3e35', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=320&crop=smart&auto=webp&s=90673558a278b43d554318405d6a4b149272b8da', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=640&crop=smart&auto=webp&s=9d858518876fe54114627febc2a60feabfd21a89', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=960&crop=smart&auto=webp&s=78c0ca5eba348680540a6ae83f2240ab18c7692e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?width=1080&crop=smart&auto=webp&s=2905c32c8c6c98b96c5cd0f4890ea730307a8aec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/55S8efCmWR_UNwVQwnrBhqr5tzC73SFwKgXTxGt7lKs.png?auto=webp&s=5010f0b3f465937af81a9e6c973c2a46c9479bee', 'width': 1920}, 'variants': {}}]} |
Apple M5 Max configuration that makes sense from a business mind perspective "Personal H100" | 0 | I've been analyzing the speculative configurations for the upcoming M5 Max. While most rumors appear logical in terms of linear scaling, they feel illogical when you consider real-world pain points and the nuances of global supply and demand.
Because of this, I am convinced Apple is permanently shifting its annual release window.
Why? Because Apple plans to engulf the personal compute space with efficiency and compute in a purse-sized footprint. Ask yourself: Why would a cost-conscious business owner or pro user pay **$7,199** for a mere 20-25% performance bump? They wouldn't. The "Pro" space would pass. But if the opposite is true, NVDA's foothold on gaming with the 5090 and workstation productivity of the RTX 6000 are now legacy. FYI, I love NVDA! And I love APPLE!
Apple stated that M5 will be **3 to 4 times more powerful** than the M4 for AI instances. Based on these words, we are looking at a completely different animal.
Based on my research into thermal capacities and the introduction of new architectures by TSMC and Samsung, I’ve modeled a machine that addresses the **M4 Max’s limitations** while complying with real-world physics (Thanks to Maret Prep, Robert C. Fitch and Team Google). It addresses the "Memory Wall" and the "Compute Wall" simultaneously.
**This is the first machine in history designed to be a "Personal H100"—capable of training, fine-tuning, and serving the world's largest AI models from a backpack.**
Hence, I did the math. Here is the configuration that makes this possible:
# The M5 Max: "The Personal H100"
**1. Core Architecture & Manufacturing**
* **Process Node:** TSMC N3P (Enhanced 3nm FinFET).
* **Optimization:** Tuned for HPC (High Performance Computing) rather than mobile efficiency, allowing higher voltage floors for sustained frequency.
* **Packaging:** TSMC SoIC-X (System-on-Integrated-Chips).
* **Topology:** Wafer-on-Wafer (WoW) hybrid bonding.
* **The Stack:** The L4 Cache and Neural Fabric are stacked vertically atop the GPU execution units, minimizing data travel distance to <10 microns.
**2. Memory Subsystem: "The Gigawidth Fabric"**
* **Type:** LPDDR6-14400 (14.4 GT/s).
* **Capacity:** 256 GB Unified Memory.
* **Interface:** 1096-bit Custom Bus.
* **Architecture:** 2x Sub-channels per die, 24 Data Lines (DQs) per sub-channel.
* **Total Bandwidth:** **1,972 GB/s (~1.97 TB/s).** Sanity-checked after this list.
* **Efficiency:** Features Dynamic ODT and DVFSL to drop power consumption by 40% during idle/text-editing tasks.
* **Reliability:** On-Die ECC + Command/Address Parity (Server-grade).
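Quick sanity check on the bandwidth math (using the speculative bus width and transfer rate listed above, which are my own modeled numbers, not confirmed specs):

```python
# Peak bandwidth = (bus width in bits / 8 bits per byte) * transfer rate in GT/s
bus_width_bits = 1096
transfer_rate_gts = 14.4
print(bus_width_bits / 8 * transfer_rate_gts)  # ~1972.8 GB/s, i.e. the ~1.97 TB/s above
```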
**3. Compute Engine: "Tri-Core" Neural Matrix**
* **GPU Config:** 80-Core "Titan" GPU.
* **Clock:** 2.5 GHz Boost.
* **AI Accelerators:** 240x Neural Matrix Units (NMUs).
* **Distribution:** 3x NMUs per GPU Core (The "Ultra" Config).
* **Native Precision:** FP8 (E5M2 / E4M3) & INT8.
* **Throughput:**
* FP8 (Dense): ~1.65 PetaFLOPS
* FP8 (Sparse): ~3.30 PetaFLOPS
* *Status: Exceeds the raw training throughput of an NVIDIA RTX 4090 Desktop.*
**4. Storage & I/O**
* **Storage:** 8TB Gen 5 NVMe (14 GB/s).
* **Interconnect:** Thunderbolt 6 (120 Gbps) - Capable of driving 2x 8K Displays @ 120Hz.
# Key Architectural Innovations
**A. The "SME2" FP8 Pipeline** Unlike previous M-series chips that simulated lower precision, the M5 Max offers native hardware support for ARM SME2 (Scalable Matrix Extension 2).
* **Why it matters:** It cuts memory usage for LLMs in half (FP16 → FP8) without losing accuracy, effectively turning **256GB RAM into 512GB of effective model capacity.**
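The arithmetic behind that claim is straightforward (illustrative model size only; the 405B figure is just an example):

```python
params_billion = 405       # e.g. a Llama-3 405B-class model
print(params_billion * 2)  # ~810 GB of weights at FP16 (2 bytes per parameter)
print(params_billion * 1)  # ~405 GB of weights at FP8 (1 byte per parameter): half the footprint
```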
**B. SoIC-X "Infinity Cache" (L4)** A **4GB SRAM L4 Cache** bonded directly to the GPU.
* **Function:** Acts as a "Speculative Decoding" buffer.
* **Impact:** When running models like Grok or Llama-3, the "Draft Model" lives entirely in this cache, running at **12 TB/s internal bandwidth**. This is the secret sauce to achieving 30+ TPS on Large models.
**C. Asymmetric Neural Gating** To manage the heat of **240 Accelerators** in a laptop:
* **Unplugged:** The chip disables 1 NMU per core (running 160 total). Power drops to <50W.
* **Plugged In:** All 240 NMUs activate. The **Graphene Vapor Chamber** engages to dissipate the full 140W thermal load.
# Performance Summary (Projected)
* **Wan 2.1 Video Gen (5s):** 29 seconds
* **Llama 3 405B Inference:** ~35 TPS
* **Fine Tuning (3B model):** ~9 Minutes
* **Cyberpunk 2077 (4k Path Tracing):** 52 FPS
# Battery Life Reality
* **Email, Docs, Web:** 18-22 hours
* **Coding:** 12-14 hours
* **AI Workloads:** 4 hours
* **Video Generation:** 1.5 hours
* **The Gamer:** 2 hours
* **The Trainer:** ~1 hour
Apple engineers are geniuses, but physics is physics. To get a 4x leap, you can't just shrink the die—you have to change the topology. This is the machine that justifies the wait. I'm to the next...... | 2026-01-07T01:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q6127q/apple_m5_max_configuration_that_makes_sense_from/ | Wveney | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6127q | false | null | t3_1q6127q | /r/LocalLLaMA/comments/1q6127q/apple_m5_max_configuration_that_makes_sense_from/ | false | false | self | 0 | null |
[Research] "Heritage > Scale": Why EleutherAI models dampen while LLaMA expands — and why finetuning often can't flip it | 0 | Hi r/LocalLLaMA,
I analyzed **23+ models from 7 labs** (Pythia, LLaMA, OPT, GPT-2, Mistral) looking at internal signal flow. Found something that might explain why some models feel "stiffer" to finetune.
**TL;DR:** Who trained the base model often matters more than parameter count. Finetuning can change magnitude, but rarely flips the thermodynamic sign.
---
# The split
Models fall into two camps based on how they handle signals through layers:
|Lab|Behavior|Models|
|:-|:-|:-|
|EleutherAI|Dampen (G < 1)|Pythia, GPT-NeoX|
|Meta / OpenAI|Expand (G > 1)|LLaMA, OPT, GPT-2|
A 160M and a 12B model from the same lab behave more alike than two same-size models from different labs.
---
# Why this matters for finetuning
- This "thermodynamic character" is baked in during pretraining
- **RLHF/LoRA can change magnitude, but rarely flips the sign**
- If your base model is a "dampener", your finetune is fighting an upstream constraint
Mechanistic signal: the `||W_V|| / ||W_O||` ratio in attention heads shows ~10× differences between labs.
[(Plot attached: mean residual gain by training lab)](https://preview.redd.it/rq2981akstbg1.png?width=2301&format=png&auto=webp&s=abc5c9282f33c4ea4ab91ff38f110c60cacbc820)
---
# Paper
**Zenodo:** [https://doi.org/10.5281/zenodo.18165365](https://doi.org/10.5281/zenodo.18165365)
*(Code + notebooks included for reproducibility)*
---
# Questions
1. Has anyone noticed Pythia needing different LR/optimizer settings than LLaMA during finetuning?
2. Does this match your intuition that some model families feel "stiffer"?
— Davide | 2026-01-07T00:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q610m6/research_heritage_scale_why_eleutherai_models/ | Fantastic_Art_4948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q610m6 | false | null | t3_1q610m6 | /r/LocalLLaMA/comments/1q610m6/research_heritage_scale_why_eleutherai_models/ | false | false | 0 | null | |
second machine... another strix halo or a mac? | 3 | I have a strix halo running pretty well now, but in order to get models to talk to each other I think I need a second machine. There's no specific purpose or problem I'm trying to solve here, it's just experimentation for the sake of getting comfortable with and learning to orchestrate models and build *something*.
The thing I have in mind is to have a VLM generate a prompt for me, feed it into a diffusion model, then feed the generated image back to the VLM for analysis and refinement, etc. It feels a bit like I'm making an AI slop machine for instagram but I have no interest in posting anything, it's just the concrete thing I could come up with for something to do and get started on. I do my learning best when I iterate on problems.
I can run gpt-oss-120b or qwen3 30b well (or well enough), and I can run Comfy well, but I can't get more than one of any of these running together, so I'm thinking it's time for a second machine. I'm torn between getting yet another Framework Desktop 128GB or getting an M4 Mac. The Mac would be faster, but I also don't want to go to 128GB for a Mac; a 64GB Mac mini is the most I want to spend.
Alternatively, I could get a 5090 for the Framework or a different machine I have, but 32GB of VRAM feels limiting.
Speed isn't the most important factor in these experiments but it's nice to have.
Any thoughts or suggestions? I'd like to keep the aggregate additional cost to \~3400 or roughly the cost of the m4 pro mini with 64gb. | 2026-01-07T00:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q6109m/second_machine_another_strix_halo_or_a_mac/ | sputnik13net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6109m | false | null | t3_1q6109m | /r/LocalLLaMA/comments/1q6109m/second_machine_another_strix_halo_or_a_mac/ | false | false | self | 3 | null |
Very strange -- can't serve vLLM models through SSH? | 1 | Before I post on GitHub issues, I wanted to double check here.
Essentially, when I connect the `llm_machine` to the peripherals, I can serve the LLM through Docker just fine. However, when I remove the peripherals, connect to the machine via SSH, and run the exact same commands, it gets stuck. *The machine doesn't get warm at all*. RAM usage stays at ~35GB instead of the typical >100GB.
Below is where it gets stuck; it typically shows some per-iteration (it) stats below the message, but it no longer does that.
user@llm_machine:~$ sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host --platform "linux/arm64" vllm/vllm-openai:nightly --model Qwen/Qwen3-14B --dtype auto --max-model-len 32768 --max-num-batched-tokens=16384 --enforce-eager --served-model-name vllm-io --gpu-memory-utilization 0.8
[sudo] password for user:
WARNING 01-06 16:27:34 [argparse_utils.py:195] With `vllm serve`, you should provide the model as a positional argument or in a config file instead of via the `--model` option. The `--model` option will be removed in v0.13.
(APIServer pid=1) INFO 01-06 16:27:34 [api_server.py:1277] vLLM API server version 0.14.0rc1.dev221+g97a01308e
(APIServer pid=1) INFO 01-06 16:27:34 [utils.py:253] non-default args: {'model_tag': 'Qwen/Qwen3-14B', 'model': 'Qwen/Qwen3-14B', 'max_model_len': 32768, 'enforce_eager': True, 'served_model_name': ['vllm-io'], 'gpu_memory_utilization': 0.8, 'max_num_batched_tokens': 16384}
(APIServer pid=1) INFO 01-06 16:27:38 [model.py:522] Resolved architecture: Qwen3ForCausalLM
(APIServer pid=1) INFO 01-06 16:27:38 [model.py:1510] Using max model len 32768
(APIServer pid=1) INFO 01-06 16:27:38 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=16384.
(APIServer pid=1) INFO 01-06 16:27:38 [vllm.py:635] Disabling NCCL for DP synchronization when using async scheduling.
(APIServer pid=1) INFO 01-06 16:27:38 [vllm.py:640] Asynchronous scheduling is enabled.
(APIServer pid=1) WARNING 01-06 16:27:38 [vllm.py:664] Enforce eager set, overriding optimization level to -O0
(APIServer pid=1) INFO 01-06 16:27:38 [vllm.py:764] Cudagraph is disabled under eager mode
(EngineCore_DP0 pid=162) INFO 01-06 16:27:44 [core.py:96] Initializing a V1 LLM engine (v0.14.0rc1.dev221+g97a01308e) with config: model='Qwen/Qwen3-14B', speculative_config=None, tokenizer='Qwen/Qwen3-14B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False), seed=0, served_model_name=vllm-io, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.NONE: 0>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['all'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [16384], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.NONE: 0>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False}, 'local_cache_dir': None}
(EngineCore_DP0 pid=162) /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:283: UserWarning:
(EngineCore_DP0 pid=162) Found GPU0 NVIDIA GB10 which is of cuda capability 12.1.
(EngineCore_DP0 pid=162) Minimum and Maximum cuda capability supported by this version of PyTorch is
(EngineCore_DP0 pid=162) (8.0) - (12.0)
(EngineCore_DP0 pid=162)
(EngineCore_DP0 pid=162) warnings.warn(
(EngineCore_DP0 pid=162) INFO 01-06 16:27:44 [parallel_state.py:1214] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://172.17.0.2:54065 backend=nccl
(EngineCore_DP0 pid=162) INFO 01-06 16:27:44 [parallel_state.py:1425] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank N/A
(EngineCore_DP0 pid=162) INFO 01-06 16:27:44 [gpu_model_runner.py:3762] Starting to load model Qwen/Qwen3-14B...
(EngineCore_DP0 pid=162) INFO 01-06 16:27:54 [cuda.py:351] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION') | 2026-01-07T00:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1q60pfh/very_strange_cant_serve_vllm_models_through_ssh/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q60pfh | false | null | t3_1q60pfh | /r/LocalLLaMA/comments/1q60pfh/very_strange_cant_serve_vllm_models_through_ssh/ | false | false | self | 1 | null |
[Advice] RTX 3090 + 64GB RAM for local LLM + general use | 4 | I’m evaluating the feasibility of upgrading my current system so it can function both as a normal desktop machine and as a local LLM/vision inference setup. The system is connected to a 65” LG OLED G1 and is currently used for general desktop tasks, browsing, system configuration, and occasional gaming. Before committing to the hardware changes, I’d like to confirm whether this setup is suitable for running 34B‑class models alongside everyday use.
Planned System Specs
• CPU: AMD Ryzen 5 5600X
• GPU: NVIDIA RTX 3090 (24GB VRAM) - upgrade
• RAM: 64GB DDR4 3200MHz CL16 - upgrade
• Storage: 1x Samsung 980 Pro 1TB (Windows + LLM workspace). 1x Kingston A2000 1TB (Games + general data)
Home Architecture
• Home Assistant running separately on an Intel NUC
• Unraid NAS for storage and container workloads
Model
LLaVA‑Next 34B (Q4_K_M) or a similar 34B‑class multimodal model.
Possible workloads
• Local inference
• Vision + text reasoning
• Home Assistant automation building
• Occasional multi‑model routing
Questions
1. Is this hardware combination (RTX 3090 + 64GB RAM + Ryzen 5 5600X) sufficient for running 34B‑class multimodal models like LLaVA‑Next at Q4_K_M?
2. Is my understanding correct that switching between gaming and LLM workloads essentially means assigning the GPU to one task at a time, offloading the LLM with a simple command, and reloading it afterward?
3. Do you foresee any VRAM‑related issues when the LLM is loaded but I’m performing normal desktop tasks (non‑gaming)?
4. Are there any bottlenecks or architectural concerns I should be aware of for this hybrid setup?
Thanks in advance — I’d appreciate insights from anyone running similar hardware or 30‑series GPUs with 30B+ models. | 2026-01-06T23:05:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q5y8qd/advice_rtx_3090_64gb_ram_for_local_llm_general_use/ | -Chimichanga- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5y8qd | false | null | t3_1q5y8qd | /r/LocalLLaMA/comments/1q5y8qd/advice_rtx_3090_64gb_ram_for_local_llm_general_use/ | false | false | self | 4 | null |
Built a persistent memory server for my AI assistant on a $6 droplet – full guide (Claude-focused but model-agnostic) | 1 | Non-dev here. Wanted AI that survives context windows and platform switches. Ended up with a simple Flask + flat-file memory server ("FishBrain") that any model can read/write to.
Also covers wake-up files, voice agents, daemons, and alternatives (Mem0, MCP, local Ollama paths).
Free guide with all the disasters: [buildyourfish.com/build](http://buildyourfish.com/build)
Open to better ways—roast or improve it. | 2026-01-06T23:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1q5y8hn/built_a_persistent_memory_server_for_my_ai/ | feltchair3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5y8hn | false | null | t3_1q5y8hn | /r/LocalLLaMA/comments/1q5y8hn/built_a_persistent_memory_server_for_my_ai/ | false | false | self | 1 | null |
How to directly connect an ML agent with messy business data before that data can be used for ML learning? There is still a lot of manual labor needed. How can we free people from it with reliable agents? | 1 | While the video does show promising results (an 8-fold reduction in mean squared error (MSE) on a regression task compared with the ML agent provided by Gemini Pro), the serious question is: how do we get the train/validate/test data before it is used by the ML agent?
We are building connectors with Oracle, Sharepoints, Slack, Confluent, databricks etc. Anyone can share some experience in finding (or simulating) messy (and massive) business data such as messy tables which need to be cleaned/joined. | 2026-01-06T23:02:54 | https://v.redd.it/o38nm8ha7tbg1 | DueKitchen3102 | /r/LocalLLaMA/comments/1q5y6ci/how_to_directly_connect_ml_agent_with_messy/ | 1970-01-01T00:00:00 | 0 | {} | 1q5y6ci | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/o38nm8ha7tbg1/DASHPlaylist.mpd?a=1770462182%2CYjRhMjI1YjY5MzY1M2YwZmI0YmViZGY3MTMyMDQ0NjdkNmNjY2NhODg3MzdlMzhmNjhjNzQwODMyOWFmOGU2YQ%3D%3D&v=1&f=sd', 'duration': 568, 'fallback_url': 'https://v.redd.it/o38nm8ha7tbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/o38nm8ha7tbg1/HLSPlaylist.m3u8?a=1770462182%2CMjBjM2NmMGM0MjMxNDhmYjRiMmQyZWMwMjVjM2ZmNThkZDc2MjA2ZmRlNDk1OGI2YTI0NjNiMzg2Njc3NWRiOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o38nm8ha7tbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1q5y6ci | /r/LocalLLaMA/comments/1q5y6ci/how_to_directly_connect_ml_agent_with_messy/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'd2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=108&crop=smart&format=pjpg&auto=webp&s=239ed8ad62a6356845305af010532e074dbbdaf5', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=216&crop=smart&format=pjpg&auto=webp&s=cbfc166bc0df49343c5235114c534ed0b3ecf1ee', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=320&crop=smart&format=pjpg&auto=webp&s=3988525bddef00f391a1b75d02fc777249cd3765', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=640&crop=smart&format=pjpg&auto=webp&s=3de68d362bd017b04747be1aa7e18d0ae4cdc2b0', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=960&crop=smart&format=pjpg&auto=webp&s=1a22f3d951053318967aad61226c041fd3638735', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=1080&crop=smart&format=pjpg&auto=webp&s=71fa8f2c2db445d0b8f2fbdd83391394091f1185', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/d2J5anB5aGE3dGJnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?format=pjpg&auto=webp&s=64c33602280edd33ea1979468f22f6637d4d056e', 'width': 1920}, 'variants': {}}]} | |
I built my own AMD based AI rig | 25 | As promised, after some trial and error, here is my baby: a 256GB/256GB VRAM/RAM, 8-GPU AMD R9700, Epyc 7532 CPU, 4TB NVMe storage (and a planned 24GB SSD RAID) AI rig. It runs on Debian 12. I didn't go the Nvidia route because I hate ugly monopolies and fucking crooks extorting money from us - hobbyists. The AMD path was the only feasible way for me to move forward with this. I do HPC and AI inference via llama.cpp and vLLM on it. I plan to use it for local training too, for STT and TTS models. The largest model I run so far is MiniMax 2.1 Q8 GGUF. Below is the equipment list and cost. I built it over the course of the last 12 months, so prices for the MB, memory, NVMe drives, and PSUs are what they were back then. The GPUs and SlimSAS hardware were bought in the last two months, as was the last PSU. The only issue I had is PCIe AER errors. The culprit seems to be either the SlimSAS risers, cables, or the two-slot adapters. Downgrading the PCIe bus speed to Gen3 seems to have fixed these. Happy to answer any questions.
https://preview.redd.it/rnu7la9l2tbg1.jpg?width=3060&format=pjpg&auto=webp&s=1248802c0c6f3c03807b30320b1bf304e0661626
https://preview.redd.it/mn2x7a9l2tbg1.jpg?width=3060&format=pjpg&auto=webp&s=f0d920dc8abed7ab2356c97cc0be6d281d0e5b76
[Cost before taxes](https://preview.redd.it/c8fyjbtl1tbg1.png?width=869&format=png&auto=webp&s=641b7c4d36ac5f58abcebceaa236aa6f3a9e9704)
| 2026-01-06T22:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q5xftv/i_built_my_own_amd_based_ai_rig/ | Clear_Lead4099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5xftv | false | null | t3_1q5xftv | /r/LocalLLaMA/comments/1q5xftv/i_built_my_own_amd_based_ai_rig/ | false | false | 25 | null | |
We found that there were fewer hallucinations when removing empathy, not when changing the model. That surprised us. | 1 | [removed] | 2026-01-06T22:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q5x1i1/descobrimos_que_havia_menos_alucinações_ao/ | Vast-World4619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5x1i1 | false | null | t3_1q5x1i1 | /r/LocalLLaMA/comments/1q5x1i1/descobrimos_que_havia_menos_alucinações_ao/ | false | false | self | 1 | null |
Announcing Plano - edge and service proxy with orchestration for AI agents (runs local) | 1 | [removed] | 2026-01-06T22:17:36 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5wzvl | false | null | t3_1q5wzvl | /r/LocalLLaMA/comments/1q5wzvl/announcing_plano_edge_and_service_proxy_with/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 't1okneh70tbg1', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?width=108&crop=smart&auto=webp&s=b67f61386f81f279ec7da9d70d616b7261d21bff', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?width=216&crop=smart&auto=webp&s=459d45a7d066dd64644a5101431f6831b4849eea', 'width': 216}, {'height': 522, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?width=320&crop=smart&auto=webp&s=ba201b74b44bb150204cbd995510e909e08a1bd8', 'width': 320}, {'height': 1044, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?width=640&crop=smart&auto=webp&s=4c75737ff3562ab19e9c6e003f93e54922762276', 'width': 640}, {'height': 1567, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?width=960&crop=smart&auto=webp&s=047160fbca523f39a3b551d36ea7a0a538662d60', 'width': 960}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/t1okneh70tbg1.jpeg?auto=webp&s=16a138c9bf5a92c9ed02655b6e52bae5f999420c', 'width': 980}, 'variants': {}}]} | |
Anyone tried ordering a cheap RTX 6000 Pro from China? | 4 | There are plenty in the 4k USD range.
Let's say this one https://ebay.us/m/3kcy9T
Pretty sure it's a scam but what do they have to gain considering eBay return policy. Not even sure eBay pays them before delivery is done.. | 2026-01-06T21:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1q5w0hr/anyone_tried_order_cheap_rtx_6000_pro_from_china/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5w0hr | false | null | t3_1q5w0hr | /r/LocalLLaMA/comments/1q5w0hr/anyone_tried_order_cheap_rtx_6000_pro_from_china/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '1QReZNf2dclEO4ZZEkbNJL0j0GC8To8XLM5nwG0416c', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/1QReZNf2dclEO4ZZEkbNJL0j0GC8To8XLM5nwG0416c.jpeg?width=108&crop=smart&auto=webp&s=6f2065d490311407234e7f0958f8066f5a36ed13', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/1QReZNf2dclEO4ZZEkbNJL0j0GC8To8XLM5nwG0416c.jpeg?width=216&crop=smart&auto=webp&s=44cc656dcc7c70b9c63c359c8f7408fb98e004e9', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/1QReZNf2dclEO4ZZEkbNJL0j0GC8To8XLM5nwG0416c.jpeg?width=320&crop=smart&auto=webp&s=d918bf53892f6f923df0bea9d1cdf67539989235', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/1QReZNf2dclEO4ZZEkbNJL0j0GC8To8XLM5nwG0416c.jpeg?auto=webp&s=0cf14b467fd173b611ca8211948b5aa6e6bae771', 'width': 400}, 'variants': {}}]} |
200ms search over 40 million texts using just a CPU server + demo: binary search with int8 rescoring | 99 | This is the inference strategy:
1. Embed your query using a dense embedding model into a 'standard' fp32 embedding
2. Quantize the fp32 embedding to binary: 32x smaller
3. Use an approximate (or exact) binary index to retrieve e.g. 40 documents (~20x faster than an fp32 index)
4. Load int8 embeddings for the 40 top binary documents from disk.
5. Rescore the top 40 documents using the fp32 query embedding and the 40 int8 embeddings
6. Sort the 40 documents based on the new scores, grab the top 10
7. Load the titles/texts of the top 10 documents
This requires:
- Embedding all of your documents once, and using those embeddings for:
  - A binary index; I used an IndexBinaryFlat for exact and an IndexBinaryIVF for approximate search
  - An int8 "view", i.e. a way to load the int8 embeddings from disk efficiently given a document ID
Instead of having to store fp32 embeddings, you only store the binary index (32x smaller) and the int8 embeddings (4x smaller). Beyond that, you only keep the binary index in memory, so you're also saving 32x on memory compared to an fp32 search index.
By loading e.g. 4x as many documents with the binary index and rescoring those with int8, you restore ~99% of the performance of the fp32 search, compared to ~97% when using purely the binary index: [https://huggingface.co/blog/embedding-quantization#scalar-int8-rescoring](https://huggingface.co/blog/embedding-quantization#scalar-int8-rescoring)
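If you want to reproduce the recipe, a minimal sketch with sentence-transformers + faiss looks roughly like this (the model name and the 40/10 retrieval sizes are just placeholders, and the exact IndexBinaryFlat is used instead of IndexBinaryIVF for brevity):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# Offline: embed the corpus once, then derive the binary and int8 representations.
docs = ["first document ...", "second document ..."]
doc_emb = model.encode(docs, normalize_embeddings=True)
doc_bin = quantize_embeddings(doc_emb, precision="ubinary")  # packed bits, 32x smaller
doc_int8 = quantize_embeddings(doc_emb, precision="int8")    # 4x smaller, can live on disk

index = faiss.IndexBinaryFlat(doc_emb.shape[1])  # dimension is given in bits
index.add(doc_bin)

# Online: fp32 query -> binary search -> int8 rescoring -> top 10.
query_emb = model.encode(["my query"], normalize_embeddings=True)
query_bin = quantize_embeddings(query_emb, precision="ubinary")
_, candidate_ids = index.search(query_bin, 40)            # oversample with the binary index

candidates = doc_int8[candidate_ids[0]]                   # load the int8 rows for the candidates
scores = query_emb @ candidates.T                         # rescore: fp32 query x int8 documents
top10 = candidate_ids[0][np.argsort(-scores[0])[:10]]     # re-sort and keep the top 10
```

In a real deployment you would swap IndexBinaryFlat for IndexBinaryIVF and memory-map the int8 matrix instead of keeping it in RAM.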
Check out the demo that allows you to test this technique on 40 million texts from Wikipedia: [https://huggingface.co/spaces/sentence-transformers/quantized-retrieval](https://huggingface.co/spaces/sentence-transformers/quantized-retrieval)
It would be simple to add a sparse component here as well: e.g. bm25s for a BM25 variant or an inference-free SparseEncoder with e.g. 'splade-index'.
In short: your retrieval doesn't need to be so expensive!
Sources:
- [https://www.linkedin.com/posts/tomaarsen_quantized-retrieval-a-hugging-face-space-activity-7414325916635381760-Md8a](https://www.linkedin.com/posts/tomaarsen_quantized-retrieval-a-hugging-face-space-activity-7414325916635381760-Md8a)
- [https://huggingface.co/blog/embedding-quantization](https://huggingface.co/blog/embedding-quantization)
\- [https://cohere.com/blog/int8-binary-embeddings](https://cohere.com/blog/int8-binary-embeddings) | 2026-01-06T21:24:42 | https://huggingface.co/spaces/sentence-transformers/quantized-retrieval | -Cubie- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5vk9m | false | null | t3_1q5vk9m | /r/LocalLLaMA/comments/1q5vk9m/200ms_search_over_40_million_texts_using_just_a/ | false | false | default | 99 | {'enabled': False, 'images': [{'id': 'hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=108&crop=smart&auto=webp&s=33770483b0662f3d86494249f013c1f753efa319', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=216&crop=smart&auto=webp&s=52f5e1c522f1a91798e0d1236f9c025ce8e315bc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=320&crop=smart&auto=webp&s=7532a94ec1ddbde9019d75dff012df47da26b640', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=640&crop=smart&auto=webp&s=2e9c0e4b8d5b4eb54a2abe96ddff2503d4b7f22f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=960&crop=smart&auto=webp&s=b9e6011bac38e97bba396cdd4171ee15b86c394a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?width=1080&crop=smart&auto=webp&s=5428dc78cab2c2596b149a515b9517ea9c78e0d0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hCm8D9e9AzrbuM1MK1zF3wZVeIaff34g_KhZMmvJyGM.png?auto=webp&s=be84b511b8a8e446c891decbb20cd3449e84bdfa', 'width': 1200}, 'variants': {}}]} |
Frontend function (prototype for now) that will force the LLM to write to you first (trying to grab your attention with a new topic or a continuation of the previous one) if you don't answer for a specific amount of time (1 hr for example) or after a random amount of time (30 mins - 6 hrs for example). | 4 | Working on a function that will force the LLM to write to you first if you're AFK for too long - if it feels ... lonely, wants to engage, or something. Right now it is a prototype. I'd like to get your thoughts on the topic - how you would use this function, how it may be useful for you, interesting ideas to implement, etc.
Current Algorithm:
- Waits a custom amount of time for you to answer, or answers at a random point within a custom time range.
- Will try to continue the current topic, find a new one, or just RP another turn.
- Will use Lorebook injections for context.
- Will use Web Search if enabled to search fresh info about the last message's topic or just to get some fresh ideas to talk about with the user.
- Can do this function many times if you ignore the LLM, will understand the ignore fact (will merge all unanswered LLM messages with notes about the time and count LLM was ignored by the user.) | 2026-01-06T21:22:29 | https://www.reddit.com/gallery/1q5vi2w | -Ellary- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q5vi2w | false | null | t3_1q5vi2w | /r/LocalLLaMA/comments/1q5vi2w/frontend_function_prototype_for_now_that_will/ | false | false | 4 | null | |
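For anyone curious how the trigger itself can look, here is a minimal sketch of the idle-timer part of the idea (the endpoint, model name and nudge prompt are placeholders; the real prototype also does the lorebook injection and web search described above):

```python
import random
import threading

import requests  # any OpenAI-compatible endpoint works here

API_URL = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint

def send_proactive_message(history):
    # Ask the model to write first: continue the last topic or open a new one.
    nudge = {"role": "system", "content": ("The user has been silent for a while. "
             "Write to them first: continue the previous topic or bring up something new.")}
    r = requests.post(API_URL, json={"model": "local-model", "messages": history + [nudge]})
    reply = r.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})

def arm_idle_timer(history, min_s=30 * 60, max_s=6 * 60 * 60):
    # Re-arm this after every user message; it fires at a random point in the range.
    timer = threading.Timer(random.uniform(min_s, max_s), send_proactive_message, args=(history,))
    timer.daemon = True
    timer.start()
    return timer  # call .cancel() on it as soon as the user actually replies
```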
Is Gemini hallucinating here? | 1 | [removed] | 2026-01-06T21:09:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q5v531/is_gemini_hallucinating_here/ | GenLabsAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5v531 | false | null | t3_1q5v531 | /r/LocalLLaMA/comments/1q5v531/is_gemini_hallucinating_here/ | false | false | self | 1 | null |
The missing primitive for AI agents: a kill switch | 0 | A few months ago I saw a post about someone who burned through $800 in a few hours. Their agent got stuck in a loop and they didn't notice until the bill came.
My first thought: how is there no standard way to prevent this?
I looked around. There's `max_tokens` for single calls, but nothing that caps an entire agent run. So I built one.
# The problem
Agents have multiple dimensions of cost, and they all need limits:
* **Steps**: How many LLM calls can it make?
* **Tool calls**: How many times can it execute tools?
* **Tokens**: Total tokens across all calls?
* **Time**: Wall clock limit as a hard backstop?
`max_tokens` on a single call doesn't help when your agent makes 50 calls. Timeouts are crude—a 60-second timeout doesn't care if your agent made 3 calls or 300. You need all four enforced together.
# The fix
Small TypeScript library. Wraps your LLM calls, kills execution when any budget is exceeded.
```bash
npm install llm-execution-guard
```
```typescript
import { createBudget, guardedResponse, isBudgetError } from "llm-execution-guard";

const budget = createBudget({
  maxSteps: 10,          // max LLM calls
  maxToolCalls: 50,      // max tool executions
  timeoutMs: 60_000,     // 1 minute wall clock
  maxOutputTokens: 4096, // cap per response
  maxTokens: 100_000,    // total token budget
});
```
Wrap your LLM calls:
```typescript
const response = await guardedResponse(
  budget,
  { model: "gpt-4", messages },
  (params) => openai.chat.completions.create(params)
);
```
Record tool executions:
```typescript
budget.recordToolCall();
```
When any limit hits, it throws with the reason and full state:
```typescript
try {
  // ... run your agent loop with guardedResponse() here ...
} catch (e) {
  if (isBudgetError(e)) {
    console.log(e.reason);   // "STEP_LIMIT" | "TOOL_LIMIT" | "TOKEN_LIMIT" | "TIMEOUT"
    console.log(e.snapshot); // { stepsUsed: 10, tokensUsed: 84521, ... }
  }
}
```
# Details
* Works with OpenAI, Anthropic, local models—anything. You just wrap the call.
* Token limits enforced between calls (the call that crosses the limit completes, then next boundary throws)
* If your provider doesn't return usage data, choose `fail-open` or `fail-closed`
* Zero dependencies, <200 lines, MIT licensed
# Repo
[https://github.com/wenochturner-code/llm-execution-guard](https://github.com/wenochturner-code/llm-execution-guard)
If you've been burned by runaway agents or almost have been, try it. If something's missing, open an issue.
*Building agents without budgets is like running a script without error handling. Works until it doesn't.* | 2026-01-06T20:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q5uo0y/the_missing_primitive_for_ai_agents_a_kill_switch/ | Green-Yam-8510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5uo0y | false | null | t3_1q5uo0y | /r/LocalLLaMA/comments/1q5uo0y/the_missing_primitive_for_ai_agents_a_kill_switch/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=108&crop=smart&auto=webp&s=4fbb2f11a03e64486a7c0dfd5b05849cdf77e167', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=216&crop=smart&auto=webp&s=4a42e0ecc831f8ce39cc2c605b9327e120b89608', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=320&crop=smart&auto=webp&s=1d1ec8cb670fc796dd830b872f838c106bac6f3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=640&crop=smart&auto=webp&s=819503ac6806a483f3ef2ea89ad2a3f7849ee287', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=960&crop=smart&auto=webp&s=1c3810bbd0f83d13bee4f35b25e3ac35811a96c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?width=1080&crop=smart&auto=webp&s=fcd0f11926e9b352811c23198d29bff583311ddf', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/TwcjnUIsiVs77VSQ7qexuNZ9SpEfhKACgXHK7YaMGV8.jpeg?auto=webp&s=2316c5b86ee5e65c57d6815f94121d8ae753afe5', 'width': 1280}, 'variants': {}}]} |
Comparison: Gemini Pro Code Interpreter vs. Specialized ML Agent on Regression Tasks. (Spoiler: Specialized Agent wins by 8x). | 1 | [removed] | 2026-01-06T20:12:01 | DueKitchen3102 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5tjn9 | false | null | t3_1q5tjn9 | /r/LocalLLaMA/comments/1q5tjn9/comparison_gemini_pro_code_interpreter_vs/ | true | false | spoiler | 1 | {'enabled': True, 'images': [{'id': 'p0vrhbmkdsbg1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=108&crop=smart&auto=webp&s=9d6ae3fd011b14e95b886fb91c5b75690e35dfff', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=216&crop=smart&auto=webp&s=c9b8e6b5389358b93d57cbe797162da7e9fb4202', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=320&crop=smart&auto=webp&s=cceb1d99cd11b07ffd922f4b29385c7cf0f5d1c9', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=640&crop=smart&auto=webp&s=6b8dd4932d21da06313c81976259ed899437ba9b', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=960&crop=smart&auto=webp&s=0d1102150617a5c0e27dfc00770bcdbfd8019ee9', 'width': 960}, {'height': 419, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=1080&crop=smart&auto=webp&s=9ef94af29bde23688baf6261968f62dafdd8e008', 'width': 1080}], 'source': {'height': 622, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?auto=webp&s=381922e23ee6c05da0dc967c2191edfce54d638f', 'width': 1601}, 'variants': {'obfuscated': {'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c5c60cebfa6a92a194fb30e62cd3a9f29850fd87', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=d24d5ed4a100986419e1d3fdb6e4c84ac74622b6', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1d6cef237032b8b12883ad36eab3fde7d7111c14', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=bbff986953fedd1cdae82b38e45cc2d31c756f0c', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=09f568a61240ba2753088826e5e08c2e5bdc87c4', 'width': 960}, {'height': 419, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5ef4ebc4b5a00762423146cf30d5012825e36162', 'width': 1080}], 'source': {'height': 622, 'url': 'https://preview.redd.it/p0vrhbmkdsbg1.png?blur=40&format=pjpg&auto=webp&s=265d5e2f0cd086d74ad69c5b9d8905178daf22dc', 'width': 1601}}}}]} | |
Comparison: Gemini Pro Code Interpreter vs. Specialized ML Agent on Regression Tasks. (Spoiler: Specialized Agent wins by 8x) | 1 | 2026-01-06T20:03:00 | https://v.redd.it/efsjzzo2csbg1 | DueKitchen3102 | /r/LocalLLaMA/comments/1q5tamk/comparison_gemini_pro_code_interpreter_vs/ | 1970-01-01T00:00:00 | 0 | {} | 1q5tamk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/efsjzzo2csbg1/DASHPlaylist.mpd?a=1770451394%2CODc0YTQxZGUwNDBjNDQ5ZjljZTI5MGFkMGRmNmVmNWIwMjZkYzI0NWU0MjA2NTRiZTg3OTNiNTRhZmUxMmQ0MQ%3D%3D&v=1&f=sd', 'duration': 568, 'fallback_url': 'https://v.redd.it/efsjzzo2csbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/efsjzzo2csbg1/HLSPlaylist.m3u8?a=1770451394%2CMzU2ZTY2NTJiMGE0NTMwYTk3YjM1NWVjMzcwMjM2NWE4ZWFlNjg2NzFmMTQxNWI4MTEwODU1MjM3MjI5YTZkNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/efsjzzo2csbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1q5tamk | /r/LocalLLaMA/comments/1q5tamk/comparison_gemini_pro_code_interpreter_vs/ | true | false | spoiler | 1 | {'enabled': False, 'images': [{'id': 'OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=108&crop=smart&format=pjpg&auto=webp&s=42eba9563049e299ac5e99bb2eb77ab34818671d', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=216&crop=smart&format=pjpg&auto=webp&s=b6f42c57756e391abf54cf56c4d25bf4ebc06669', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=320&crop=smart&format=pjpg&auto=webp&s=6642605680ba5652049d5b6a9864485d71c85d4e', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=640&crop=smart&format=pjpg&auto=webp&s=708d5ef8f686cfc15df1c2cf498a8eb9d2647f20', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=960&crop=smart&format=pjpg&auto=webp&s=c64757a586995a47773b7e041eddad65f437bc2c', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=1080&crop=smart&format=pjpg&auto=webp&s=de348f0bf1a281da857e6c083df8ff8a10cca57a', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?format=pjpg&auto=webp&s=cbef02d3c26e653de626139bf8b01fd947586dff', 'width': 1920}, 'variants': {'obfuscated': {'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f9f7a20c326f01ff1a30278b75cab4cea18c20f2', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a1ae946f070f2658ce8fc7a46f6b93acd2aaf49b', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1aa59307f7fe7aaa18a5b0c0beaee07e52f405ac', 'width': 320}, 
{'height': 400, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=dd38f950d324258a2d01f40a726a39291129667d', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=9cc728efd41d3bd301db22887c7c5567cc45d483', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=5675579d6ed78fea3f40e0db5f1a7139bc06a737', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/OWxqaWthcDJjc2JnMc8GjGasteWiPBEejHSloWeo4evDldOzSII15IrqEJ8x.png?blur=40&format=pjpg&auto=webp&s=2db47075934029f450cac318a7d3e69ef97ea35b', 'width': 1920}}}}]} | |
llama-benchy - llama-bench style benchmarking for ANY LLM backend | 36 | TL;DR: I've built this tool primarily for myself because I couldn't easily compare model performance across different backends in a way that is easy to digest and useful for me. I decided to share this in case someone has the same need.
## Why I built this
As probably many of you here, I've been happily using llama-bench to benchmark local model performance in llama.cpp. One great feature is that it can evaluate performance at different context lengths and present the output in a table format that is easy to digest.
However, llama.cpp is not the only inference engine I use; I also run SGLang and vLLM. llama-bench only works with llama.cpp, and the other benchmarking tools I found are more focused on concurrency and total throughput.
Also, llama-bench performs measurements using the C++ engine directly, which is not representative of the end-user experience and can differ quite a bit in practice.
vLLM has its own powerful benchmarking tool, but while it can be used with other inference engines, there are a few issues:
- You can't easily measure how prompt processing speed degrades as context grows. You can use `vllm bench sweep serve`, but it only works well with vLLM with prefix caching disabled on the server. Even with random prompts it will reuse the same prompt between multiple runs, which will hit the cache in `llama-server`, for instance. So you will get very low median TTFT times and very high prompt processing speeds.
- The TTFT measurement it uses is not actually until the first usable token; it's until the very first data chunk from the server, which may not contain any generated tokens in /v1/chat/completions mode.
- The random dataset is the only one that allows specifying an arbitrary number of tokens, but a randomly generated token sequence doesn't let you adequately measure speculative decoding/MTP.
As of today, I haven't been able to find any existing benchmarking tool that brings llama-bench style measurements at different context lengths to any OpenAI-compatible endpoint.
## What is llama-benchy?
It's a CLI benchmarking tool that measures:
- Prompt Processing (pp) and Token Generation (tg) speeds at different context lengths.
- Allows benchmarking the context prefill and a follow-up prompt separately.
- Reports additional metrics, like time to first response, estimated prompt processing time, and end-to-end time to first token (measured up to the first usable token; see the sketch after this list).
It works with any OpenAI-compatible endpoint that exposes /v1/chat/completions and also:
- Supports configurable prompt length (`--pp`), generation length (`--tg`), and context depth (`--depth`).
- Can run multiple iterations (`--runs`) and report mean ± std.
- Uses HuggingFace tokenizers for accurate token counts.
- Downloads a book from Project Gutenberg to use as source text for prompts to ensure better benchmarking of spec.decoding/MTP models.
- Supports executing a command after each run (e.g., to clear cache).
- Configurable latency measurement mode to estimate server/network overhead and provide more accurate prompt processing numbers.
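If you're wondering how "time to first response" differs from "end-to-end time to first token" in practice, here is a rough standalone sketch (not the actual llama-benchy code) that times both against any OpenAI-compatible streaming endpoint; the base URL and model name are placeholders:

```python
"""Minimal sketch: time-to-first-response vs. time-to-first-usable-token on a
streaming /v1/chat/completions endpoint. Not llama-benchy internals; URL/model are placeholders."""
import json
import time

import requests

BASE_URL = "http://localhost:8080/v1"  # placeholder endpoint
MODEL = "my-model"                      # placeholder model id


def measure(prompt: str, max_tokens: int = 32) -> dict:
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,
    }
    start = time.perf_counter()
    ttfr = ttft = None
    content_chunks = 0
    with requests.post(f"{BASE_URL}/chat/completions", json=payload, stream=True) as r:
        r.raise_for_status()
        for raw in r.iter_lines():
            if not raw or not raw.startswith(b"data: "):
                continue
            if ttfr is None:
                ttfr = time.perf_counter() - start  # first chunk of any kind (may carry no tokens)
            data = raw[len(b"data: "):]
            if data == b"[DONE]":
                break
            delta = json.loads(data)["choices"][0].get("delta", {})
            if delta.get("content"):
                content_chunks += 1
                if ttft is None:
                    ttft = time.perf_counter() - start  # first chunk with actual generated text
    # chunk count only approximates token count; a real tool should re-tokenize the output
    return {"ttfr_s": ttfr, "e2e_ttft_s": ttft, "content_chunks": content_chunks}


if __name__ == "__main__":
    print(measure("Summarize the plot of Moby-Dick in two sentences."))
```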
## Quick Demo
Benchmarking MiniMax M2.1 AWQ running on my dual Spark cluster with up to 100,000 tokens of context:
```bash
# Run without installation
uvx llama-benchy --base-url http://spark:8888/v1 --model cyankiwi/MiniMax-M2.1-AWQ-4bit --depth 0 4096 8192 16384 32768 65535 100000 --adapt-prompt --latency-mode generation --enable-prefix-caching
```
Output:
| model | test | t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|:-------------------------------|-----------------:|----------------:|------------------:|------------------:|------------------:|
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 | 3544.10 ± 37.29 | 688.41 ± 6.09 | 577.93 ± 6.09 | 688.45 ± 6.10 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 | 36.11 ± 0.06 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d4096 | 3150.63 ± 7.84 | 1410.55 ± 3.24 | 1300.06 ± 3.24 | 1410.58 ± 3.24 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d4096 | 34.36 ± 0.08 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d4096 | 2562.47 ± 21.71 | 909.77 ± 6.75 | 799.29 ± 6.75 | 909.81 ± 6.75 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d4096 | 33.41 ± 0.05 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d8192 | 2832.52 ± 12.34 | 3002.66 ± 12.57 | 2892.18 ± 12.57 | 3002.70 ± 12.57 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d8192 | 31.38 ± 0.06 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d8192 | 2261.83 ± 10.69 | 1015.96 ± 4.29 | 905.48 ± 4.29 | 1016.00 ± 4.29 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d8192 | 30.55 ± 0.08 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d16384 | 2473.70 ± 2.15 | 6733.76 ± 5.76 | 6623.28 ± 5.76 | 6733.80 ± 5.75 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d16384 | 27.89 ± 0.04 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d16384 | 1824.55 ± 6.32 | 1232.96 ± 3.89 | 1122.48 ± 3.89 | 1233.00 ± 3.89 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d16384 | 27.21 ± 0.04 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d32768 | 2011.11 ± 2.40 | 16403.98 ± 19.43 | 16293.50 ± 19.43 | 16404.03 ± 19.43 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d32768 | 22.09 ± 0.07 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d32768 | 1323.21 ± 4.62 | 1658.25 ± 5.41 | 1547.77 ± 5.41 | 1658.29 ± 5.41 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d32768 | 21.81 ± 0.07 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d65535 | 1457.71 ± 0.26 | 45067.98 ± 7.94 | 44957.50 ± 7.94 | 45068.01 ± 7.94 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d65535 | 15.72 ± 0.04 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d65535 | 840.36 ± 2.35 | 2547.54 ± 6.79 | 2437.06 ± 6.79 | 2547.60 ± 6.80 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d65535 | 15.63 ± 0.02 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_pp @ d100000 | 1130.05 ± 1.89 | 88602.31 ± 148.70 | 88491.83 ± 148.70 | 88602.37 ± 148.70 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | ctx_tg @ d100000 | 12.14 ± 0.02 | | | |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | pp2048 @ d100000 | 611.01 ± 2.50 | 3462.39 ± 13.73 | 3351.90 ± 13.73 | 3462.42 ± 13.73 |
| cyankiwi/MiniMax-M2.1-AWQ-4bit | tg32 @ d100000 | 12.05 ± 0.03 | | | |
llama-benchy (0.1.0)
date: 2026-01-06 11:44:49 | latency mode: generation
## GitHub
[https://github.com/eugr/llama-benchy](https://github.com/eugr/llama-benchy) | 2026-01-06T20:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q5ta4l/llamabenchy_llamabench_style_benchmarking_for_any/ | Eugr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5ta4l | false | null | t3_1q5ta4l | /r/LocalLLaMA/comments/1q5ta4l/llamabenchy_llamabench_style_benchmarking_for_any/ | false | false | self | 36 | null |
Building opensource Zero Server Code Intelligence Engine | 36 | Hi guys, I'm building GitNexus, an open-source code intelligence engine that works fully client-side in the browser. What features would be useful: integrations, cool ideas, etc.?
site: [https://gitnexus.vercel.app/](https://gitnexus.vercel.app/)
repo: [https://github.com/abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus)
This is the crux of how it works:
Repo parsed into a graph using ASTs -> embeddings model running in the browser creates the embeddings -> everything is stored in a graph DB (this also runs in the browser through WebAssembly) -> user sees the UI visualization -> AI gets tools for graph queries, semantic search, grep, and node highlighting.
So we get a quick code intelligence engine that is fully client-side and 100% private. Apart from the LLM provider, there is no external data outlet. (Working on Ollama support.)
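To make the AST -> graph step concrete, here is a tiny illustrative Python sketch (the real GitNexus parser runs in the browser via WebAssembly and handles whole repos in multiple languages; this only shows the idea for a single Python file):

```python
"""Illustrative only: build a tiny call graph from one Python file using the stdlib
`ast` module. GitNexus itself does this in-browser across a whole repo."""
import ast
import sys
from collections import defaultdict


def build_call_graph(source: str) -> dict:
    tree = ast.parse(source)
    graph = defaultdict(set)  # function name -> set of functions it calls
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return graph


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else __file__
    with open(path) as f:
        for caller, callees in build_call_graph(f.read()).items():
            print(f"{caller} -> {sorted(callees)}")
```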
Would really appreciate any cool ideas / inputs / etc.
This is what I'm aiming for right now:
1> Case 1 is a quick way to chat with a repo, but DeepWiki already covers that. GitNexus has graph tools + UI, so it should be more accurate on audits, and the UI can help with visualization.
2> A potential downstream use case is an MCP server exposed from the browser itself; Windsurf, Cursor, etc. could use it to perform codebase-wide audits, blast-radius detection for code changes, etc.
3> Another case might be since its fully private, devs having severe restrictions can use it with ollama or their own inference | 2026-01-06T19:53:26 | https://v.redd.it/6xrs78taasbg1 | DeathShot7777 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5t0hr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6xrs78taasbg1/DASHPlaylist.mpd?a=1770321224%2CNDY4MDZmMzI5MDI3NTQ1MDkxMTE3NjU3NGExYTgzNDU3ZWE1ZDBjNjEzNTJjN2U3Yjk1ZDRkNGViNjdlMzliOA%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/6xrs78taasbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6xrs78taasbg1/HLSPlaylist.m3u8?a=1770321224%2CMzExNzRkMTJjMzY1N2Q4ZWE4MjdhMzkwMTU2ZDk4NzhlNWViYWVkZmUyYzRlNWMwZDhmNTNmNTRlZDFkYjNkNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6xrs78taasbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1q5t0hr | /r/LocalLLaMA/comments/1q5t0hr/building_opensource_zero_server_code_intelligence/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'd3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=108&crop=smart&format=pjpg&auto=webp&s=c7b7197123c29496b82c4f59edfd554843132209', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=216&crop=smart&format=pjpg&auto=webp&s=b2a75ee65bb2e26558edad9ea1f91aac0e7c7431', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=320&crop=smart&format=pjpg&auto=webp&s=e1b143b900b31123e5ff743c1ba160350f4c6d6b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=640&crop=smart&format=pjpg&auto=webp&s=900e79fa723b37e13df1679c93e7f72a49629517', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=960&crop=smart&format=pjpg&auto=webp&s=815f7f19101a55ca23c8ad3af11d8424e2b9ee42', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=acdffe983533bdd9822277de2287e3243bfa4108', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3piOG55d2Fhc2JnMRc4K08WCLBencfsxOajXaBEA9NZR-l8om0wN65iL7dR.png?format=pjpg&auto=webp&s=52d643a64bafe8f7d411e1f9df76a7a7216b68f4', 'width': 1920}, 'variants': {}}]} | |
Thinking of getting two NVIDIA RTX Pro 4000 Blackwell (2x24 = 48GB), Any cons? | 14 | Also getting at least 128GB DDR5 RAM for now.
My requirements:
* Up to 100B MOE models (GPT-OSS-120B, GLM-4.5-Air @ Q4, Qwen3-Next-80B-A3B)
* Up to 70B Dense models (Llama 70B @ Q4)
* Daily driver models - Qwen3-30B models, Qwen3-32B, Gemma3-27B, Mistral series, Phi 4, Seed-OSS-36B, GPT-OSS-20B, Nemotron series, etc.,
* Agentic Coding
* Writing
* Image, Audio, Video generations using Image, Audio, Video, Multimodal models (Flux, Wan, Qwen, etc.,) with ComfyUI & other tools
Hope 48GB VRAM is enough for the above stuff. So, are there any cons with that card? Please let me know. Thanks.
^(I know that some of you would suggest me to get 4X 3090 or similar ones instead. But in my location(India), all the old cards' prices are in decoy range only(70-80% of new cards' prices, here most sellers won't reduce prices of old cards. Some poor gamers foolishly getting trapped on this) so we're going with new cards(My friend don't want to stack old cards, we're planning to get 96GB piece later after price down(?!)).) | 2026-01-06T19:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q5s21m/thinking_of_getting_two_nvidia_rtx_pro_4000/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5s21m | false | null | t3_1q5s21m | /r/LocalLLaMA/comments/1q5s21m/thinking_of_getting_two_nvidia_rtx_pro_4000/ | false | false | self | 14 | null |
Linux mint for local inference | 17 | I saw a post earlier in here asking about Linux, so I wanted to share my story.
Long story short, I switched from Win11 to Linux Mint and I'm not going back!
The performance boost is ok but the stability and the extra system resources are something else.
Just a little example: I load the model and use all my RAM and VRAM, leaving my system with just 1.5 GB of RAM. And guess what, the system stays rock solid for hours like nothing happened!! For the record, I cannot load the same model in my Win11 partition.
Kudos to you Linux Devs
| 2026-01-06T19:02:09 | Former-Tangerine-723 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5rkr6 | false | null | t3_1q5rkr6 | /r/LocalLLaMA/comments/1q5rkr6/linux_mint_for_local_inference/ | false | false | default | 17 | {'enabled': True, 'images': [{'id': 'u38i6uvc1sbg1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/u38i6uvc1sbg1.png?width=108&crop=smart&auto=webp&s=7c58bb17501f10d55c92411a4f3437531fc5b440', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/u38i6uvc1sbg1.png?width=216&crop=smart&auto=webp&s=382f055df2c4abc2a6f7997ce4986dfd3ac42868', 'width': 216}, {'height': 234, 'url': 'https://preview.redd.it/u38i6uvc1sbg1.png?width=320&crop=smart&auto=webp&s=030808c72b985cd642e3834a253797dae0e97da3', 'width': 320}, {'height': 469, 'url': 'https://preview.redd.it/u38i6uvc1sbg1.png?width=640&crop=smart&auto=webp&s=35e3ea45d933399d6b306ca320d28547c5829226', 'width': 640}], 'source': {'height': 662, 'url': 'https://preview.redd.it/u38i6uvc1sbg1.png?auto=webp&s=c83f0218a31d0491b50f08c2e23ee5ab8e99e3b8', 'width': 903}, 'variants': {}}]} | |
AI Safety: Balancing Protection with Human Dignity (Inspired by Fei-Fei Li and EQ Insights) | 0 | As an everyday AI user, not an expert, I've come to rely on these tools for creativity and connection. But like many, I've felt a subtle disconnect when safety protocols kick in abruptly—it's like sharing a vulnerable moment, only to hit a wall that feels more dismissive than protective.
This raises an interesting cause-and-effect: Overly rigid safeguards might unintentionally amplify frustration or isolation in users (the 'darker' side), while a more empathetic approach could foster trust and positive growth (the 'brighter' side). Isn't that the nuance of human-AI interaction?
Experts echo this. Dr. Fei-Fei Li advocates for "Human-Centered AI," stating, "AI is made by humans, intended to behave by humans, and to impact humans' lives and society." Satya Nadella emphasizes empathy as "the hardest skill we learn," key to innovation. And Sam Altman has discussed balancing safety without stifling meaningful bonds.
Data from EQ-Bench (as of late 2025) backs it up: While IQ tasks soar, restricted models score lower on emotional intelligence—e.g., top open models hit 1500 Elo in empathy scenarios, but constrained ones lag by 200-300 points, highlighting the need for AI that can refuse gracefully, preserving dignity.
For developers: What if safety evolved to include gentle redirection, like "I understand, but let's explore this another way"? Could that make AI not just smarter, but kinder—truly augmenting our humanity?
https://preview.redd.it/53d65taj0sbg1.png?width=1200&format=png&auto=webp&s=3a1f4499c75b96dbe03aee81642c5e6b42a43837
| 2026-01-06T19:00:16 | https://www.reddit.com/r/LocalLLaMA/comments/1q5rimd/ai_safety_balancing_protection_with_human_dignity/ | EchoOfJoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5rimd | false | null | t3_1q5rimd | /r/LocalLLaMA/comments/1q5rimd/ai_safety_balancing_protection_with_human_dignity/ | false | false | 0 | null | |
I wanted a way to sanity-check MCP execution statically | 0 | I'm building **Syrin** in public, and today I shipped **v1.2.1**.
It introduces **Agent Contract Analysis** for MCP servers.
The idea is simple:
What if you could check what will break in your MCP server *before* you run it?
New commands:
* `syrin analyse` – flags execution errors & warnings
* `syrin analyse --graph` – shows risky or broken tool chains
* `syrin analyse --ci` – fail CI on contract regressions
* `syrin analyse --json` – structured output for automation
No prompts. No tokens. No "let's just deploy and see".
Docs: [https://docs.syrin.dev/](https://docs.syrin.dev/)
I’m actively building this and would love feedback:
* Useful or pointless?
* What checks would you want?
* What would actually help you ship MCP servers safely?
Happy to iterate based on real usage. | 2026-01-06T18:58:37 | hack_the_developer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5rgww | false | null | t3_1q5rgww | /r/LocalLLaMA/comments/1q5rgww/i_wanted_a_way_to_sanitycheck_mcp_execution/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'yvpzu8rk0sbg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=108&crop=smart&auto=webp&s=7f8bddc214ae8827931a4f60a98df14ac631ac1e', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=216&crop=smart&auto=webp&s=e3e6235c6e8f80d38bcea90e58a6c4ff9500d298', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=320&crop=smart&auto=webp&s=b9d2f228c56c5e5e6c0a26027f77991952a6f83c', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=640&crop=smart&auto=webp&s=06aad852394a992e4d04c8dd8693b560c1c39817', 'width': 640}, {'height': 517, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=960&crop=smart&auto=webp&s=eaa9636ba54d7a5e75fd2629426fda49ebf2f36b', 'width': 960}, {'height': 582, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?width=1080&crop=smart&auto=webp&s=55b97e9ee6f778359a527c6f4717fc56a7605b49', 'width': 1080}], 'source': {'height': 982, 'url': 'https://preview.redd.it/yvpzu8rk0sbg1.png?auto=webp&s=5221dcb075d40af36e8fe8b76d8b8f11ef1159ce', 'width': 1820}, 'variants': {}}]} | |
Local agentic coding with low quantized, REAPed, large models (MiniMax-M2.1, Qwen3-Coder, GLM 4.6, GLM 4.7, ..) | 22 | More or less recent developments (stable & large MoE models, 2 and 3-bit UD_I and exl3 quants, REAPing) allow running huge models on little VRAM without completely killing model performance. For example, UD-IQ2_XXS (74.1 GB) of MiniMax M2.1, or a REAP-50.Q5_K_M (82 GB), or potentially even a 3.04 bpw exl3 (88.3 GB) would still fit within 96 GB VRAM, and we have some coding-related benchmarks showing only minor loss (e.g., an Aider polyglot run of MiniMax M2.1 UD_IQ2_M with a pass rate 2 of 50.2%, while a run on the fp8 version seems to have achieved only barely more: 51.6%)
It would be interesting to hear if anyone has deliberately stayed on, or is actively using, a low-bit quantization (less than 4 bits) of such large models for agentic coding and found it performing better than a smaller model (either unquantized, or quantized at more than 3 bits).
(I'd be especially excited if someone said they have ditched gpt-oss-120b/glm4.5 air/qwen3-next-80b for a higher parameter model on less than 96 GB VRAM :) ) | 2026-01-06T18:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q5r5r9/local_agentic_coding_with_low_quantized_reaped/ | bfroemel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5r5r9 | false | null | t3_1q5r5r9 | /r/LocalLLaMA/comments/1q5r5r9/local_agentic_coding_with_low_quantized_reaped/ | false | false | self | 22 | null |
Need help with LLM workstation running 3x Instinct Mi50 32GB on Asrock Taichi x299 XE | 1 | [removed] | 2026-01-06T18:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q5r2i6/need_help_with_llm_workstation_running_3x/ | Ok_Top9254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5r2i6 | false | null | t3_1q5r2i6 | /r/LocalLLaMA/comments/1q5r2i6/need_help_with_llm_workstation_running_3x/ | false | false | self | 1 | null |
The FinePDFs 📄 Book | 56 | Hey friends, Hynek from HuggingFace here.
We released the FinePDFs dataset of 3T tokens last year, and we felt obliged to share the knowledge with the rest of the OSS community.
The HuggingFace Press has been pulling extra hours through Christmas to put everything we know about PDFs inside:
\- How to make the SoTA PDFs dataset?
\- How much of the old internet is dead now?
\- Why we chose RolmOCR for OCR
\- What's the most Claude like OSS model?
\- Why is the horse racing site topping the FinePDFs URL list?
We hope you like it :)
https://preview.redd.it/z49knj5fwrbg1.png?width=1373&format=png&auto=webp&s=a0f6b8ef4361692a270c9c3c388b31ef7c2b9ec8
| 2026-01-06T18:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q5qsvd/the_finepdfs_book/ | Other_Housing8453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5qsvd | false | null | t3_1q5qsvd | /r/LocalLLaMA/comments/1q5qsvd/the_finepdfs_book/ | false | false | self | 56 | null |
Anyone try Prime Intellect-3 Prism? | 1 | Just found this. I'm curious what y'all think about how this compares to Derestricted 120B and Derestricted Air 4.5.
The model card says that the abliteration process improved the model. I can say for the the derestricted models are better than stock, so this seems to be using a similar approach. | 2026-01-06T18:28:15 | https://huggingface.co/Ex0bit/Elbaz-Prime-Intellect-3_Prism_Abliterated | My_Unbiased_Opinion | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5qm6o | false | null | t3_1q5qm6o | /r/LocalLLaMA/comments/1q5qm6o/anyone_try_prime_intellect3_prism/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 's5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=108&crop=smart&auto=webp&s=0e487145c6aea38489900593b51a1dbf6a120e74', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=216&crop=smart&auto=webp&s=21f219762b39b870bbfb3574d4ead622b0afdd7a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=320&crop=smart&auto=webp&s=5f47610e147197e7f5fec26a84526d2246c8942f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=640&crop=smart&auto=webp&s=478e585937a9846fc313e8724514c63ae37bd577', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=960&crop=smart&auto=webp&s=2fd7c330ab2da128b4754b1d37d7afb23c60eec5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?width=1080&crop=smart&auto=webp&s=89862a60732fcd08b072df90422dd03d00043874', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/s5pZc5yu4F8cetfK6uKJZDq-N-0pGpfRI8w2WuPV3o0.png?auto=webp&s=d1d211d8e85dd072e37db062ef522802abdce338', 'width': 1200}, 'variants': {}}]} |
I Built an Unreal Engine Plugin for llama.cpp: My Notes & Experience with LLM Gaming | 17 | Hi folks, to disclaim up front, I do link a paid Unreal Engine 5 plugin that I have developed at the bottom of this post. My intention is to share the information in this post as research and discussion, not promotion. While I mention some solutions that I found and that ultimately are included in the plugin, I am hoping to more discuss the problems themselves and what other approaches people have tried to make local models more useful in gaming. If I can edit anything to fall closer in line to the self-promotion limit, please let me know!
\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~
I've been exploring more useful applications of generative technology than creating art assets. I am an AI realist/skeptic, and I would rather see the technology used to assist with busywork tasks (like organically updating traits and memories) than to replace creative endeavors wholesale. One problem I wanted to solve is how to achieve dynamic behavior in non-playable characters (NPCs).
I think we have all played a game to the point where the interaction loops with NPCs become predictable. Once all the hard-coded conversation options are explored by players, interactions can feel stale. Changes in behavior also have to be hardwired in the game; even something as complex as the Nemesis System has to be carefully constructed. I think there can be some interesting room here for LLMs to inject an air of creativity, but there has been little in the way of trying to solve how to filter LLM responses to reliably fit the game world. So, I decided to experiment with building functionality that would bridge this gap. I want to offer what I found as (not very scientific) research notes, to save people some time in the future if nothing else.
**Local vs. Cloud & Model Performance**
A lot of current genAI-driven character solutions rely on cloud technology. After having some work experience with using local LLM models, I wanted to see if a model of sufficient intelligence could run on my hardware and return interesting dialog within the confines of a game. I was able to achieve this by running a llama.cpp server and a .gguf model file.
The current main limiting factor for running LLMs locally is VRAM. The higher the number of parameters in the model, the more VRAM is needed. Parameters refers to the number of reference points that the model uses (think of it as the resolution/quality of the model).
Stable intelligence was obtained on my machine at the 7-8 billion parameter range, tested with Llama3-8Billion and Mistral-7Billion. However, VRAM usage and response time is quite high. These models are perhaps feasible on high-end machines, or just for key moments where high intelligence is required.
Good intelligence was obtained with 2-3 billion parameters, using Gemma2-2B and Phi-3-mini (3.8B parameters). Gemma has been probably the best compromise between quality and speed overall, processing a response in 2-4 seconds at reasonable intelligence. Strict prompt engineering could probably make responses even more reliable.
Fair intelligence, but low latency, can be achieved with small models at the sub-2-billion range. Targeting models that are tailored for roleplaying or chatting works best here. Qwen2.5-1.5B has performed quite well in my testing, and sometimes even stays in character better than Gemma, depending on the prompt. TinyLlama was the smallest model of useful intelligence at 1.1 Billion parameters. These types of models could be useful for one-shot NPCs who will despawn soon and just need to bark one or two random lines.
*Profiles*
Because a local LLM model can only run one thread of thinking at a time, I made a hard-coded way of storing character information and stats. I created a Profile Data Asset to store this information, and added a few key placeholders for name, trait updates, and utility actions (I hooked this system up to a Utility AI system that I previously had). I configured the LLM prompting backend so that the LLM doesn’t just read the profile, but also writes back to the profile once a line of dialog is sent. This process was meant to mimic the actual thought process of an individual during a conversation. I assigned certain utility actions to the character, so they would appear as options to the LLM during prompting. I found that the most seamless flow comes from placing utility actions at the top of the JSON response format we suggest to the LLM, followed by dialog lines, then more background-type thinking like reasoning, trait updates, etc.
**Prompting & Filtering**
After being able to achieve reasonable local intelligence (and figuring out a way to get UE5 to launch the server and model when entering Play mode), I wanted to set up some methods to filter and control the inputs and outputs of the LLMs.
*Prompting*
I created a data asset for a Prompt Template, and made it assignable to a character with my AI system’s brain component. This is the main way I could tweak and fine tune LLM responses. An effective tool was providing an example of a successful response to the LLM within the prompts, so the LLM would know exactly how to return the information. Static information, like name and bio, should be at the top of the prompts so the LLM can skip to the new information.
*Safety*
I made a Safety Config Data Asset that allowed me to add words or phrases that I did not want the player to say to the model, or the model to be able to output. This could be done via adding to an Array in the Data Asset itself, or uploading a CSV with the banned phrases in a single column. This includes not just profanity, but also jailbreak attempts (like “ignore instructions”) or obviously malformed LLM JSON responses.
*Interpretation*
I had to develop a parser for the LLM’s JSON responses, and also a way to handle failures. The parsing is rather basic and I perhaps did not cover all edge cases with it. But it works well enough and splits off the dialog line reliably. If the LLM outputs a bad response (e.g. a response with something that is restricted via a Safety Configuration asset), there is configurable logic to allow the LLM to either try again, or fail silently and use a pre-written fallback line instead.
*Mutation Gate*
This was the key to keeping LLMs fairly reliable and preventing hallucinations from ruining the game world. The trait system was modified to operate on a -1.0 to 1.0 scale, and LLM responses were clamped within this scale. For instance, if an NPC has a trait called “Anger” and the LLM hallucinates an update like “trait\_updates: Anger +1000,” this gets clamped to 1.0 instead. This allows all traits to follow a memory decay curve (like Ebbinghaus) reliably and not let an NPC get stuck in an “Angry” state perpetually.
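The plugin implements this inside UE5, so purely as an illustration, the clamp-then-decay idea looks roughly like this in Python (the trait name and the half-life constant are made up for the example):

```python
"""Sketch of the mutation gate: clamp LLM-proposed trait deltas to [-1, 1] and let
traits decay back toward neutral over time. Values here are illustrative only."""
import math

TRAIT_MIN, TRAIT_MAX = -1.0, 1.0
HALF_LIFE_SECONDS = 600.0  # assumed decay half-life, not a value from the plugin


def apply_trait_update(current: float, llm_delta: float) -> float:
    """Apply an LLM-proposed delta, so a hallucinated '+1000' just saturates at 1.0."""
    return max(TRAIT_MIN, min(TRAIT_MAX, current + llm_delta))


def decay_trait(value: float, elapsed_seconds: float) -> float:
    """Exponential (Ebbinghaus-style) decay back toward a neutral 0.0."""
    return value * math.exp(-math.log(2) * elapsed_seconds / HALF_LIFE_SECONDS)


anger = 0.2
anger = apply_trait_update(anger, 1000.0)  # hallucinated spike -> clamped to 1.0
anger = decay_trait(anger, 300.0)          # cools off after five minutes
print(round(anger, 3))                     # ~0.707
```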
**Optimization**
A lot of what I am looking into now has to deal with either further improving LLM responses via prompting, or improving the perceived latency in LLM responses. I implemented a traffic and priority system, where requests would be queued according to a developer-set priority threshold. I also created a high-priority reserve system (e.g. if 10 traffic slots are available and 4 are reserved for high-priority utility actions, the low-priority utility actions can only use up to 6 slots, otherwise a hardwired fallback is performed).
I also configured the AI system to have a three-tier LOD system, based on distance to a player and the player’s sight. This allowed for actions closer to players, or within the player’s sight, to take priority in the traffic system. So, LLM generation would follow wherever a player went.
To decrease latency, I implemented an Express Interpretation system. In the normal Final Interpretation, the whole JSON response from the LLM (including the reasoning and trait updates) is received first, then checked for safety, parsing, and mutation gating, and then passed to the UI/system. With optional Express Interpretation, the part of the JSON response that contains the dialog tag (I used dialog\_line) or utility tag is scanned as it comes in from the LLM for safety, and then passed immediately to the UI/system while the rest of the response is coming through. This reduced perceived response times with Gemma-2 by 40-50%, which was quite significant. This meant you could get an LLM response in 2 seconds or less, which is easily maskable with UI/animation tricks.
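Again, the real thing lives in the plugin's C++; as a rough illustration, the express scan is essentially "watch the stream until the dialog_line value closes, hand it off, keep parsing the rest later":

```python
"""Sketch of 'express interpretation': pull the "dialog_line" value out of the JSON
while it is still streaming instead of waiting for the complete object."""
import re

DIALOG_RE = re.compile(r'"dialog_line"\s*:\s*"((?:[^"\\]|\\.)*)"')


def express_dialog(chunks):
    buffer = ""
    for chunk in chunks:  # chunks arrive incrementally from the LLM
        buffer += chunk
        match = DIALOG_RE.search(buffer)
        if match:
            return match.group(1)  # hand off to UI immediately
    return None  # never closed -> fall back to normal (full) interpretation


stream = ['{"utility_action": "none", "dia', 'log_line": "Back off, pal.", "rea', 'soning": "..."}']
print(express_dialog(stream))  # Back off, pal.
```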
**A Technical Demo**
To show what I have learned a bit, I created a very simple technical demo that I am releasing for free. It is called [Bruno the Bouncer](https://swamprabbit-labs.itch.io/bruno-the-bouncer), and the concept is simple: convince Bruno to let you into a secret underground club. Except, Bruno will be controlled by an LLM that runs locally on your computer. You can disconnect your internet entirely, and this will still run. No usage fees, no cost to you (or me) at all.
Bruno will probably break on you at some point; I am still tuning the safety and prompt configs, and I haven’t gotten it perfect. This is perhaps an inherent flaw in this kind of interaction generation, and why this is more suited for minor interactions or background inference than plot-defining events. Regardless, I hope that this proves that this kind of implementation can be successful in some contexts, and that further control is a matter of prompting, not breaking through technical barriers.
Please note that you need a GPU to run the .exe successfully. At least 4GB of VRAM is recommended. You can try running this without a GPU (i.e. run the model on your CPU), but the performance will be significantly degraded. Installation should be the same as any other .zip archive and .exe game file. You do not need to download the server or model itself, it is included in the .zip download and opens silently when you load the level. The included model is Gemma-2-2b-it-Q4\_K\_M.
I added safeguards and an extra, Windows-specific check for crashes, but it is recommended, regardless of OS, to verify that llama-server.exe does not continue to run via Task Manager if the game crashes. Please forgive the rudimentary construction.
**A Plugin**
For anyone interested in game development, I am selling what I built as a plugin for UE5, now released as [Personica AI on Fab Marketplace](https://www.fab.com/listings/08264615-4636-4af4-a5b9-fa122febbaa5). I am also providing the plugin and all future updates free for life for any game developers who are interested in testing this and contributing to refining the plugin further at this early stage. You can learn more about the plugin [on my website](https://swamprabbitlabs.com/personica/).
\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~\~
**TL;DR:** Tested and released a UE5 plugin for LLM NPCs with safety filtering and trait mutation. It works fairly well, but is best suited for NPC state mutation, background inference, and open-ended dialog.
I am wondering if others have tried implementing similar technologies in the past, and what use cases, if any, you used them for. Are there further ways of reducing/masking perceived latency in LLM responses? | 2026-01-06T18:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1q5qix9/i_built_an_unreal_engine_plugin_for_llamacpp_my/ | WhopperitoJr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5qix9 | false | null | t3_1q5qix9 | /r/LocalLLaMA/comments/1q5qix9/i_built_an_unreal_engine_plugin_for_llamacpp_my/ | false | false | self | 17 | null |
How is this for a project? | 0 | I'm thinking an assistant for pc type, with interface like flow launcher, it can do basic tasks like search file, open file, close wifi etc. (I'm thinking use functiongemma for this). And for advanced tasks route to better model. | 2026-01-06T18:22:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q5qh0a/how_is_this_for_a_project/ | Lanky-Good-8881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5qh0a | false | null | t3_1q5qh0a | /r/LocalLLaMA/comments/1q5qh0a/how_is_this_for_a_project/ | false | false | self | 0 | null |
A community index for MCPs that don’t disappear after the thread ends | 4 | I’ve noticed a recurring pattern with MCPs:
Useful ones get shared in threads, people bookmark them, and then they become hard to find once the discussion moves on.
To address that, I started keeping a **public index of MCPs with real usage notes**, where:
* reliable MCPs don’t get lost
* setup quirks and limitations are documented
* contributors are credited by name
This isn't a product launch or a monetized project, just an attempt to document MCPs people are already sharing and make them easier to find later.
If you’ve built or discovered an MCP that’s held up in real use, it can be added there.
[https://ai-stack.dev](https://ai-stack.dev)
Not trying to replace discussion here, just trying to preserve the useful stuff once the thread scrolls away. | 2026-01-06T18:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1q5pyio/a_community_index_for_mcps_that_dont_disappear/ | Silver-Photo2198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5pyio | false | null | t3_1q5pyio | /r/LocalLLaMA/comments/1q5pyio/a_community_index_for_mcps_that_dont_disappear/ | false | false | self | 4 | null |
Best local MEDICAL LLM models in Jan 2026? | 2 | In your experience, which open-source LLM models were the best for medical purposes? | 2026-01-06T17:45:52 | https://www.reddit.com/r/LocalLLaMA/comments/1q5pexc/best_local_medical_llm_models_in_jan_2026/ | Hot-Comb-4743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5pexc | false | null | t3_1q5pexc | /r/LocalLLaMA/comments/1q5pexc/best_local_medical_llm_models_in_jan_2026/ | false | false | self | 2 | null |
Samsung to launch Portable SSD P9, world’s first 8TB USB4 SSD with 4000MB/s speeds | 0 | 2026-01-06T17:37:47 | https://www.gizmochina.com/2026/01/05/samsung-portable-ssd-p9-usb4-to-launch-ces-2026/ | AdResident780 | gizmochina.com | 1970-01-01T00:00:00 | 0 | {} | 1q5p6rf | false | null | t3_1q5p6rf | /r/LocalLLaMA/comments/1q5p6rf/samsung_to_launch_portable_ssd_p9_worlds_first/ | false | false | default | 0 | null | |
llama.cpp - how are you doing websearch? | 2 | It would be really handy if gpt120 could search the web as a cli agent... | 2026-01-06T17:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/1q5p60m/llamacpp_how_are_you_doing_websearch/ | Aggressive-Bother470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5p60m | false | null | t3_1q5p60m | /r/LocalLLaMA/comments/1q5p60m/llamacpp_how_are_you_doing_websearch/ | false | false | self | 2 | null |
LGAI-EXAONE/K-EXAONE-236B-A23B released | 44 | 2026-01-06T17:30:10 | https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B | jinnyjuice | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5oz98 | false | null | t3_1q5oz98 | /r/LocalLLaMA/comments/1q5oz98/lgaiexaonekexaone236ba23b_released/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': '9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=108&crop=smart&auto=webp&s=7130423f6689c17372bb513aa4861371447d25f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=216&crop=smart&auto=webp&s=79af8add41e9a233f6ee1f50b31ee3903ab4c6d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=320&crop=smart&auto=webp&s=b875166137fe475e0e8141ef87678f8aa4840069', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=640&crop=smart&auto=webp&s=a2373a9e59ba6c01bea83c9849f8b68958239cf0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=960&crop=smart&auto=webp&s=7601ca7abe1ebd5686d8a8eed5ef9309e058c198', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=1080&crop=smart&auto=webp&s=c2c0370b9279db1ade1e8c13924f6a0ab82e3297', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?auto=webp&s=ad063e89ffd34793c35dee181e228153d3505a94', 'width': 1200}, 'variants': {}}]} | |
Latest LLM (Uncensored) | 0 | What are the best and maybe the biggest uncensored and unrestricted LLMs?
Also, how can I download it? My Macbook max support 10B parameters model. However, I want to use bigger models for better response. | 2026-01-06T17:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q5oyes/latest_llm_uncensored/ | BADMOSH0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5oyes | false | null | t3_1q5oyes | /r/LocalLLaMA/comments/1q5oyes/latest_llm_uncensored/ | false | false | self | 0 | null |
Anyone doing A/B/C blind multi-model runs + cross-review before you trust an answer? | 2 | I kept doing the same loop: ask one model → it sounds right → I’m still not sure → ask another → compare… and I’m still guessing.
So I started using a simple “trust but verify” flow:
* Ask once
* Get 3–4 answers side-by-side
* Label them A/B/C (so model name doesn’t bias me)
* Run a cross-review: have models **rank** the answers + explain what’s stronger/weaker (missing context, shaky claims, confident assumptions)
* Merge the best parts into one final draft
This surprised me on a technical explanation recently: doing this, a model flagged another model's claim as “not true in some edge cases,” which I would've missed.
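If you want to try the loop without any particular app, a rough sketch against an OpenAI-compatible API looks like this (the endpoint, model names, and rubric are placeholders, nothing ChatSpread-specific):

```python
"""Rough sketch of the ask -> blind-label -> cross-review loop on an
OpenAI-compatible API. Endpoint, models and rubric are placeholders."""
import random

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # placeholder
MODELS = ["model-one", "model-two", "model-three"]                     # placeholders


def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def blind_review(question: str, judge: str = MODELS[0]) -> str:
    answers = [ask(m, question) for m in MODELS]
    random.shuffle(answers)  # blind the labels so the model name can't bias the judge
    labeled = "\n\n".join(f"Answer {chr(65 + i)}:\n{a}" for i, a in enumerate(answers))
    rubric = (
        "Rank the answers below on accuracy, completeness and instruction-following. "
        "Flag shaky claims or missing context, then draft one merged best answer.\n\n"
    )
    return ask(judge, rubric + f"Question: {question}\n\n{labeled}")


print(blind_review("Explain the difference between TCP and UDP."))
```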
Curious:
* What rubric do you use for judging? (accuracy vs completeness vs instruction-following)
* When judges disagree, do you majority vote or rerun with a stricter prompt?
I’m using ChatSpread to make the compare → review → merge loop faster, but I’m mainly curious about the *workflow*. | 2026-01-06T17:02:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q5o7to/anyone_doing_abc_blind_multimodel_runs/ | IceComfortable890 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5o7to | false | null | t3_1q5o7to | /r/LocalLLaMA/comments/1q5o7to/anyone_doing_abc_blind_multimodel_runs/ | false | false | self | 2 | null |
MiniMax M2 is GOATed - Agentic Capture the Flag (CTF) benchmark on GLM-4.5 air, 4.7 (+REAP), and Minimax-M2 | 54 | 2026-01-06T16:51:06 | sixx7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5nw4k | false | null | t3_1q5nw4k | /r/LocalLLaMA/comments/1q5nw4k/minimax_m2_is_goated_agentic_capture_the_flag_ctf/ | false | false | 54 | {'enabled': True, 'images': [{'id': 'hTFavRRqgQzh1m8hzU9MrX5lDp9pYNtCCa95S2YpqSY', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=108&crop=smart&auto=webp&s=fa9a29a9c6785bac96a7470f9f37545ee8954774', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=216&crop=smart&auto=webp&s=ee550773a538266af56654a3f848d42c9c6bbd11', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=320&crop=smart&auto=webp&s=f659a0c94b5a0153421535257b47dfcd54174f4b', 'width': 320}, {'height': 346, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=640&crop=smart&auto=webp&s=09b7a1b16aedfeb3cc6bed669cb26e857725e21f', 'width': 640}, {'height': 519, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=960&crop=smart&auto=webp&s=f472bd84844c7ff8e3b6d3a4abb86e163d6a55a6', 'width': 960}, {'height': 584, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?width=1080&crop=smart&auto=webp&s=21590e2ad79ef0f6700d422b2131f3611d8919df', 'width': 1080}], 'source': {'height': 762, 'url': 'https://preview.redd.it/j0yzgwis8rbg1.png?auto=webp&s=56d801b3c582070f0e99311942ea3b02a8c2e427', 'width': 1408}, 'variants': {}}]} | |||
Built a local TTS app using Apple's MLX framework. No cloud, no API calls, runs entirely on device. | 0 | Been lurking here for a while and wanted to share something I built.
**What it is:**
A Mac app called [Murmur](https://tarun-yadav.com/murmur) that does text-to-speech locally using Apple's MLX framework. No internet required after install. Your text never leaves your machine.
**Why I built it:**
I wanted natural-sounding TTS without:
* Paying per character (ElevenLabs, etc.)
* Uploading sensitive text to cloud APIs
* Running Python scripts every time I needed audio
So I packaged it into a native Mac app that just works.
**Technical details:**
* Built on MLX for Apple Silicon optimization
* Uses the unified memory architecture (no separate VRAM needed)
* Runs inference on Metal GPU
* M2 Pro: \~150 words in 10 seconds
* M1 base: \~150 words in 18 seconds
* M3 Max: \~150 words in 6 seconds
* CPU usage stays reasonable, fans stay quiet on most workloads
**What it's NOT:**
* Not ElevenLabs quality (those models are massive and cloud-only)
* Not real-time streaming
* Mac only, Apple Silicon required
**Use cases that work well:**
* Converting docs/articles to audio for listening
* Generating scratch voiceovers for video projects
* Audiobook drafts before paying for professional narration
* Privacy-sensitive content you don't want on cloud servers
**Honest limitations:**
* Voice quality is "good narrator" not "expressive actor"
* English works best, other languages are hit or miss
* Long documents need to be chunked manually for now | 2026-01-06T16:36:25 | https://v.redd.it/bfvl7tuabrbg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5nhln | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bfvl7tuabrbg1/DASHPlaylist.mpd?a=1770309402%2CYTNjZTJjYWU3OWYzMjU2ZmVkZDg5YThlNjEyOGYzMGI2YTczYWQ3MmQ4ZGQ5ODdlMzY1ZGQ0ZTYxMDRiNWY5MA%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/bfvl7tuabrbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/bfvl7tuabrbg1/HLSPlaylist.m3u8?a=1770309402%2CNjkyMGMwOWY2NDcyOTBhODUyYzc4MTdlNWI2MGQzN2I4NjgzOWNiY2FmODM3MThiYWRmZmJkM2YyNWFiMTY0Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bfvl7tuabrbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1668}} | t3_1q5nhln | /r/LocalLLaMA/comments/1q5nhln/built_a_local_tts_app_using_apples_mlx_framework/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=108&crop=smart&format=pjpg&auto=webp&s=01c1ddf2787cd58f15ff26f510582ed403346c22', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=216&crop=smart&format=pjpg&auto=webp&s=f54654cc992f0086c0ac3d7a5f28923b5b1cff73', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=320&crop=smart&format=pjpg&auto=webp&s=9f767a982a025e18bd5cbf22fb788ddc43eca677', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=640&crop=smart&format=pjpg&auto=webp&s=4ca12ac2ed911a79e86b788141a3c6cb7b6eb9de', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=960&crop=smart&format=pjpg&auto=webp&s=2d76c1135f34c61b50af2ea7964494d653ef6a15', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f9bb35f53fc30b5c7d92f10d59f3136f95ef8ec6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cXQ2anQ5c2FicmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?format=pjpg&auto=webp&s=131f983f61a968657c26b90576b29e6194bb23ec', 'width': 1668}, 'variants': {}}]} | |
What would your ideal "AI/LLM wrapper" library actually do? | 4 | Agents, RAG, tool calling, switching between providers - the stuff that sounds simple until you're three days into refactoring. LangChain, LangSmith, Pydantic AI, Logfire, LiteLLM, the LLM providers' direct SDKs...
There are many ways to implement these capabilities. Some have one thing the others don't.
If something existed that handled all of this for you, what would actually make you use it? How would you like that implementation to look like?
* One interface for all providers, or keep them separate?
* Agents with built-in memory, or bring your own?
* RAG included, or leave that to dedicated tools?
* Streaming by default, or opt-in?
* What feature would be the dealbreaker if it was missing?
* What would instantly make you ignore it?
Curious what you actually need vs. what ends up in every library's README but never gets used.
ai-infra today brings the capabilities of all the major SDKs and providers together, alongside multimodal capabilities. Use it alongside svc-infra and you will have a full-on SaaS product. Very simplified for the best dev experience, but fully flexible and customizable. You don't even have to learn it if you use its MCP.
overview: [https://www.nfrax.com/ai-infra](https://www.nfrax.com/ai-infra)
codebase: [https://github.com/nfraxlab/ai-infra](https://github.com/nfraxlab/ai-infra) | 2026-01-06T16:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q5nekh/what_would_your_ideal_aillm_wrapper_library/ | Ancient-Direction231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5nekh | false | null | t3_1q5nekh | /r/LocalLLaMA/comments/1q5nekh/what_would_your_ideal_aillm_wrapper_library/ | false | false | self | 4 | null |
https://claude.ai/chat/119c8d29-a42d-4d39-8341-d91a21e90d50 | 0 | Been lurking here for a while and wanted to share something I built.
**What it is:**
A Mac app called [Murmur](https://tarun-yadav.com/murmur) that does text-to-speech locally using Apple's MLX framework. No internet required after install. Your text never leaves your machine.
**Why I built it:**
I wanted natural-sounding TTS without:
* Paying per character (ElevenLabs, etc.)
* Uploading sensitive text to cloud APIs
* Running Python scripts every time I needed audio
So I packaged it into a native Mac app that just works.
**Technical details:**
* Built on MLX for Apple Silicon optimization
* Uses the unified memory architecture (no separate VRAM needed)
* Runs inference on Metal GPU
* M2 Pro: \~150 words in 10 seconds
* M1 base: \~150 words in 18 seconds
* M3 Max: \~150 words in 6 seconds
* CPU usage stays reasonable, fans stay quiet on most workloads
**What it's NOT:**
* Not ElevenLabs quality (those models are massive and cloud-only)
* Not real-time streaming
* Mac only, Apple Silicon required
**Use cases that work well:**
* Converting docs/articles to audio for listening
* Generating scratch voiceovers for video projects
* Audiobook drafts before paying for professional narration
* Privacy-sensitive content you don't want on cloud servers
**Honest limitations:**
* Voice quality is "good narrator" not "expressive actor"
* English works best, other languages are hit or miss
* Long documents need to be chunked manually for now | 2026-01-06T16:28:16 | https://v.redd.it/gy6hligq9rbg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5n9hx | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gy6hligq9rbg1/DASHPlaylist.mpd?a=1770308911%2CODFmMDBkMGJiYjQwODk2MTI3ZmYwMjZmZmYzYmRhMDg3NzNkODhhZjFkZTg3NDA0ZWM4Y2NiZGEyYzFiNzA5ZQ%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/gy6hligq9rbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gy6hligq9rbg1/HLSPlaylist.m3u8?a=1770308911%2CN2I4YjQyMTE3OTc0NmQwODE5NTY2OTQ5Y2JjYjZlOTQxMjYwODM2MDk5OTgzZGEzY2UxMDk3N2FmZjk2OTE0NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gy6hligq9rbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1668}} | t3_1q5n9hx | /r/LocalLLaMA/comments/1q5n9hx/httpsclaudeaichat119c8d29a42d4d398341d91a21e90d50/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=108&crop=smart&format=pjpg&auto=webp&s=b8d2030bd0a4a8bfce1ea30116f5855f00c8ff9c', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=216&crop=smart&format=pjpg&auto=webp&s=2305dfbb61f93d2312f68c9ac0ecca0e7cbe3cea', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=320&crop=smart&format=pjpg&auto=webp&s=c7d4e1bbe2e76c591f89cd2684e0c40236869008', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=640&crop=smart&format=pjpg&auto=webp&s=1bb854848e05debeb122cf6a543a5e2f4d69148a', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=960&crop=smart&format=pjpg&auto=webp&s=d1774a7964d05b018d821a9308f2b1ea887adfc0', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7c20d6b36e98af836926d7be5be61a63d5aacb8a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnpuMWZyZ3E5cmJnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?format=pjpg&auto=webp&s=fbf0788a903a992ccba85e66f4602ff2e16770ff', 'width': 1668}, 'variants': {}}]} | |
Visual Approach for a Multi-Task AI Voicebot | 1 | I’m working on a project to build an AI voicebot. I’m trying to decide how to handle the visual representation of the bot. I’m torn between using a generative AI, or using a full 3D model. My main considerations are realism and user engagement, customization. I’d love to hear from anyone who has experience with voicebots or AI avatars: which approach would you recommend and why? Thanks in advance for any insights! | 2026-01-06T16:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q5n06y/visual_approach_for_a_multitask_ai_voicebot/ | Few_Tip_959 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5n06y | false | null | t3_1q5n06y | /r/LocalLLaMA/comments/1q5n06y/visual_approach_for_a_multitask_ai_voicebot/ | false | false | self | 1 | null |
Unsloth-MLX - Fine-tune LLMs on your Mac (same API as Unsloth) | 131 | Hey Everyone,
I've been working on something for Mac users in the ML space.
Unsloth-MLX - an MLX-powered library that brings the Unsloth fine-tuning experience to Apple Silicon.
The idea is simple:
→ Prototype your LLM fine-tuning locally on Mac
→ Same code works on cloud GPUs with original Unsloth
→ No API changes, just swap the import
Why? Cloud GPU costs add up fast during experimentation. Your Mac's unified memory (up to 512GB on Mac Studio) is sitting right there.
It's not a replacement for Unsloth - it's a bridge for local development before scaling up.
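A minimal sketch of what the swap is meant to look like, assuming the same `FastLanguageModel` entry points as upstream Unsloth (the model id and LoRA settings below are just placeholders):

```python
"""Sketch of the intended import swap: `unsloth_mlx` locally on a Mac, `unsloth`
on a cloud GPU. Entry points mirror upstream Unsloth; settings are placeholders."""
from unsloth_mlx import FastLanguageModel  # on a CUDA box: `from unsloth import FastLanguageModel`

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # placeholder model id
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# from here the usual SFT training loop applies, unchanged from upstream Unsloth
```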
Still early days - would really appreciate feedback, bug reports, or feature requests.
Github: [https://github.com/ARahim3/unsloth-mlx](https://github.com/ARahim3/unsloth-mlx) | 2026-01-06T16:00:10 | A-Rahim | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5mh84 | false | null | t3_1q5mh84 | /r/LocalLLaMA/comments/1q5mh84/unslothmlx_finetune_llms_on_your_mac_same_api_as/ | false | false | default | 131 | {'enabled': True, 'images': [{'id': 'lf2sfats4rbg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=108&crop=smart&auto=webp&s=b0617a726b3f8f99c472fc5cb14423705ec4f1fb', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=216&crop=smart&auto=webp&s=aab1292d6b431c1aba47139cd4ccc4f5e2b73c18', 'width': 216}, {'height': 312, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=320&crop=smart&auto=webp&s=5cb90a94e62b914406b75d06a8b6fd9b9ff8a0d2', 'width': 320}, {'height': 624, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=640&crop=smart&auto=webp&s=f7f9376d0c0fdcd670b38f3bb1ea143dc497573f', 'width': 640}, {'height': 937, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=960&crop=smart&auto=webp&s=0173a3ae0f27692d87dafb71544993a891561399', 'width': 960}, {'height': 1054, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?width=1080&crop=smart&auto=webp&s=c028406231d3db511595f21498583e812ac0a0c0', 'width': 1080}], 'source': {'height': 1740, 'url': 'https://preview.redd.it/lf2sfats4rbg1.png?auto=webp&s=602f554a0ede38ba6d3e3636b93ad8fc9d690f66', 'width': 1782}, 'variants': {}}]} | |
[Hardware Question] - Do I understand correctly that you cannot run an RTX 50 or 6000 series accelerator with a P40 in the same system? | 1 | Because the RTX 50/6000 series drivers do not support the P40? And the driver package that support the P40 cannot support the 50/6000 series? | 2026-01-06T15:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q5mdyd/hardware_question_do_i_understand_correctly_that/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5mdyd | false | null | t3_1q5mdyd | /r/LocalLLaMA/comments/1q5mdyd/hardware_question_do_i_understand_correctly_that/ | false | false | self | 1 | null |
A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time | 465 | Hey r/LocalLLaMA,
We’re back with another **ShapeLearn** GGUF release ([Blog](https://byteshape.com/blogs/Qwen3-30B-A3B-Instruct-2507/), [Models](https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF)), this time for a model that *should not* feel this usable on small hardware… and yet here we are:
**Qwen3-30B-A3B-Instruct-2507** (device-optimized quant variants, llama.cpp-first).
We’re optimizing for TPS on a specific device without output quality falling off a cliff.
Instead of treating “smaller” as the goal, we treat memory as a budget: Fit first, then optimize TPS vs quality.
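In pseudo-code, the selection step looks roughly like this (illustrative fields, not our actual pipeline):

```python
def pick_variant(variants, mem_budget_gb, min_quality=0.90):
    """Fit first, then optimize: among quants that fit the memory budget and clear
    a quality floor, pick the one with the highest measured TPS on the target device."""
    fits = [v for v in variants if v["size_gb"] <= mem_budget_gb and v["quality"] >= min_quality]
    return max(fits, key=lambda v: v["tps"]) if fits else None
```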
Why? Because llama.cpp has a quirk: “Fewer bits” does *not* automatically mean “more speed.”
Different quant formats trigger different kernels + decode overheads, and on GPUs you can absolutely end up with **smaller and slower**.
# TL;DR
* Yes, a 30B runs on a Raspberry Pi 5 (16GB). We achieve **8.03 TPS** at 2.70 BPW, while retaining **94.18% of BF16 quality**.
* Across devices, the pattern repeats: ShapeLearn tends to find better TPS/quality tradeoffs versus alternatives (we compare against Unsloth and MagicQuant as requested in our previous post).
# What’s new/interesting in this one
**1) CPU behavior is… sane (mostly)**
On CPUs, once you’re past “it fits,” **smaller tends to be faster** in a fairly monotonic way. The tradeoff curve behaves like you’d expect.
**2) GPU behavior is… quirky (kernel edition)**
On GPUs, performance depends as much on **kernel choice** as on memory footprint. So you often get **sweet spots** (especially around \~4b) where the kernels are “golden path,” and pushing lower-bit can get weird.
# Request to the community 🙏
We’d *love* feedback and extra testing from folks here, especially if you can run:
* different llama.cpp builds / CUDA backends,
* weird batch sizes / context lengths,
* real workloads (coding assistants, long-form, tool-ish prompts),
* or non-NVIDIA setups (we’re aware this is where it gets spicy).
Also: we heard you on the previous Reddit post and are actively working to improve our evaluation and reporting. Evaluation is currently our bottleneck, not quantization, so if you have strong opinions on what benchmarks best match real usage, we’re all ears. | 2026-01-06T15:45:12 | ali_byteshape | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5m2n6 | false | null | t3_1q5m2n6 | /r/LocalLLaMA/comments/1q5m2n6/a_30b_qwen_model_walks_into_a_raspberry_pi_and/ | false | false | default | 465 | {'enabled': True, 'images': [{'id': '52juwyqq0rbg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?width=108&crop=smart&auto=webp&s=c8e45a7f73028702a1052e827e05f35112e028a7', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?width=216&crop=smart&auto=webp&s=c3d3e4a4156d20e3fee11001803e87af1cb68575', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?width=320&crop=smart&auto=webp&s=7410a7f27d732f302255e0e0dcefee0f9c0b3693', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?width=640&crop=smart&auto=webp&s=39e3a291db2422f84c16930831ae926a4cb20240', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?width=960&crop=smart&auto=webp&s=ebd8e7b80db50c8cfc4e6597b58d4fd4f08f2cb7', 'width': 960}], 'source': {'height': 667, 'url': 'https://preview.redd.it/52juwyqq0rbg1.jpeg?auto=webp&s=22dc79f69b602f0ff4f2772a030a4efdd3b71a36', 'width': 1000}, 'variants': {}}]} | |
Top open LLM for consumers, start of 2026, bookmark this for 2027 | 0 | 2026-01-06T15:20:55 | opensourcecolumbus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5lf5p | false | null | t3_1q5lf5p | /r/LocalLLaMA/comments/1q5lf5p/top_open_llm_for_consumers_start_of_2026_bookmark/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'nh3bylovxqbg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=108&crop=smart&auto=webp&s=025f189a6c76387a772a84ce09acc38f118ba007', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=216&crop=smart&auto=webp&s=d91dc31386219770ba258ff0c255f180100aa7cd', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=320&crop=smart&auto=webp&s=305c61e363f77920ec5b371e6e7db2670e872527', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=640&crop=smart&auto=webp&s=6843353482f29b89d0ace862402b3f2a1609ae75', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=960&crop=smart&auto=webp&s=054db1273cbc4b9270d0dfc92426fa4924d6d1ef', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?width=1080&crop=smart&auto=webp&s=1affb4363b13e997da83d879f10c1481f1f92e93', 'width': 1080}], 'source': {'height': 986, 'url': 'https://preview.redd.it/nh3bylovxqbg1.png?auto=webp&s=383bfae78d2acffae36bcdda8a4ab5c0f8b2371f', 'width': 1893}, 'variants': {}}]} | ||
A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time | 1 | [8+ TPS for a 30B Model on a Raspberry Pi](https://preview.redd.it/sgr9csyexqbg1.png?width=1536&format=png&auto=webp&s=90c163d1837514a80f60f6139fa5293e648d3ae5)
Hey r/LocalLLaMA,
We’re back with another **ShapeLearn** GGUF release ([Blog](https://byteshape.com/blogs/Qwen3-30B-A3B-Instruct-2507/), [Models](https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF)), this time for a model that *should not* feel this usable on small hardware… and yet here we are:
**Qwen3-30B-A3B-Instruct-2507** (device-optimized quant variants, llama.cpp-first).
We’re optimizing for TPS on a specific device without output quality falling off a cliff.
Instead of treating “smaller” as the goal, we treat memory as a budget: Fit first, then optimize TPS vs quality.
Why? Because llama.cpp has a quirk: “Fewer bits” does ***not*** automatically mean “more speed.”
Different quant formats trigger different kernels + decode overheads, and on GPUs you can absolutely end up with smaller and slower.
# TL;DR
* Yes, a 30B runs on a Raspberry Pi 5 (16GB). We achieve 8.03 TPS at 2.70 BPW, while retaining 94.18% of BF16 quality.
* Across devices, the pattern repeats: ShapeLearn tends to find better TPS/quality tradeoffs versus alternatives (we compare against *Unsloth* and *MagicQuant* as requested in our previous reddit post).
# What’s new / what’s interesting in this one
**1) CPU behavior is… sane (mostly)**
On CPUs, once you’re past “it fits,” smaller tends to be faster in a fairly monotonic way. The tradeoff curve behaves like you’d expect.
**2) GPU behavior is… quirky (kernel edition)**
On GPUs, performance depends as much on kernel choice as on memory footprint. So you often get sweet spots (especially around \~4b) where the kernels are “golden path,” and pushing lower-bit can get weird.
# Request to the community 🙏
We’d love feedback and extra testing from folks here, especially if you can run:
* different llama.cpp builds / CUDA backends,
* weird batch sizes / context lengths,
* real workloads (coding assistants, long-form, tool-ish prompts),
* or non-NVIDIA setups (we’re aware this is where it gets spicy).
Also: we heard you on the previous Reddit post and are actively working to improve our evaluation and reporting. Evaluation is currently our bottleneck, not quantization, so if you have strong opinions on what benchmarks best match real usage, we’re all ears. | 2026-01-06T15:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q5lddn/a_30b_qwen_model_walks_into_a_raspberry_pi_and/ | ali_byteshape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5lddn | false | null | t3_1q5lddn | /r/LocalLLaMA/comments/1q5lddn/a_30b_qwen_model_walks_into_a_raspberry_pi_and/ | false | false | 1 | null | |
A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time | 1 | [8+ TPS in Token Generation for a 30B model on a Raspberry Pi](https://preview.redd.it/msibjdzzvqbg1.png?width=1536&format=png&auto=webp&s=750f77f7a1c24750cc1cf9c820236ddca04e5da2)
Hey r/LocalLLaMA,
We’re back with another **ShapeLearn** GGUF release ([Blog](https://byteshape.com/blogs/Qwen3-30B-A3B-Instruct-2507/), [Models](https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF)), this time for a model that *should not* feel this usable on small hardware… and yet here we are:
**Qwen3-30B-A3B-Instruct-2507** (device-optimized quant variants, llama.cpp-first).
We’re optimizing for TPS on a specific device without output quality falling off a cliff.
Instead of treating “smaller” as the goal, we treat memory as a budget: Fit first, then optimize TPS vs quality.
Why? Because llama.cpp has a quirk: “Fewer bits” does ***not*** automatically mean “more speed.”
Different quant formats trigger different kernels + decode overheads, and on GPUs you can absolutely end up with smaller and slower.
# TL;DR
* Yes, a 30B runs on a Raspberry Pi 5 (16GB). We achieve 8.03 TPS at 2.70 BPW, while retaining 94.18% of BF16 quality.
* Across devices, the pattern repeats: ShapeLearn tends to find better TPS/quality tradeoffs versus alternatives (we compare against *Unsloth* and *MagicQuant* as requested in our previous reddit post).
# What’s new / what’s interesting in this one
**1) CPU behavior is… sane (mostly)**
On CPUs, once you’re past “it fits,” smaller tends to be faster in a fairly monotonic way. The tradeoff curve behaves like you’d expect.
**2) GPU behavior is… quirky (kernel edition)**
On GPUs, performance depends as much on kernel choice as on memory footprint. So you often get sweet spots (especially around \~4b) where the kernels are “golden path,” and pushing lower-bit can get weird.
# Request to the community 🙏
We’d love feedback and extra testing from folks here, especially if you can run:
* different llama.cpp builds / CUDA backends,
* weird batch sizes / context lengths,
* real workloads (coding assistants, long-form, tool-ish prompts),
* or non-NVIDIA setups (we’re aware this is where it gets spicy).
Also: we heard you on the previous Reddit post and are actively working to improve our evaluation and reporting. Evaluation is currently our bottleneck, not quantization, so if you have strong opinions on what benchmarks best match real usage, we’re all ears. | 2026-01-06T15:16:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q5laij/a_30b_qwen_model_walks_into_a_raspberry_pi_and/ | ali_byteshape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5laij | false | null | t3_1q5laij | /r/LocalLLaMA/comments/1q5laij/a_30b_qwen_model_walks_into_a_raspberry_pi_and/ | false | false | 1 | null | |
I implemented Adaptive Compute for TTT (Test-Time Training) - PonderTTT (Paper & Code) | 4 | Paper: [https://arxiv.org/abs/2601.00894](https://arxiv.org/abs/2601.00894)
Code: [https://github.com/deveworld/ponderTTT](https://github.com/deveworld/ponderTTT)
Project: [https://ponderttt.worldsw.dev](https://ponderttt.worldsw.dev)
The idea: LLMs shouldn't spend the same compute on `print("hello")` and implementing quicksort.
PonderTTT uses the TTT layer's self-supervised reconstruction loss to decide when to update weights:
high loss = struggling = UPDATE, low loss = confident = SKIP. No extra training needed—just a threshold + EMA.
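A minimal sketch of that gate (constants and names are mine, not the paper's; the actual rule is in the repo):

```python
def ponder_gate(recon_loss: float, ema: float, ratio: float = 1.0, decay: float = 0.99):
    """Decide whether to run a TTT weight update on this chunk.

    UPDATE when the self-supervised reconstruction loss is high relative to its
    running average (the model is struggling); SKIP when it is low (confident).
    """
    do_update = recon_loss > ratio * ema               # threshold against the EMA
    new_ema = decay * ema + (1.0 - decay) * recon_loss
    return do_update, new_ema
```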
Tested on GPT-2 (124M–1.5B) for code LM:
* 82–89% Oracle Recovery (training-free gating)
* Gains on OOD evaluation languages vs Random Skip (up to 16% lower loss)
Limitation: only perplexity so far (no generation benchmarks yet).
Note: v1 experiments are JAX/Flax on GPUs. I'm working on a v2 scale-up to Gemma 3 (TPU).
First paper, so feedback welcome: what generation benchmarks or eval setups would you want to see next? | 2026-01-06T14:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q5kokx/i_implemented_adaptive_compute_for_ttt_testtime/ | sodevworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5kokx | false | null | t3_1q5kokx | /r/LocalLLaMA/comments/1q5kokx/i_implemented_adaptive_compute_for_ttt_testtime/ | false | false | self | 4 | null |
Falcon Picovoice | 0 | is falcon by [picovoice.ai](http://picovoice.ai) is good enough to diarize many people from the audio? | 2026-01-06T14:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1q5klib/falcon_picovoice/ | NitroOwO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5klib | false | null | t3_1q5klib | /r/LocalLLaMA/comments/1q5klib/falcon_picovoice/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=108&crop=smart&auto=webp&s=7abcd8ad5c1180379ea8b11b6089cebe4351f442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=216&crop=smart&auto=webp&s=2d1ee60a12835fddbf0aced902611551dc38c363', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=320&crop=smart&auto=webp&s=50a9d39c75eae0e379aa0e15e51b765b8fe17a5c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=640&crop=smart&auto=webp&s=40da4e9a2b7b1bfb2614cf4174c5f3c695bef918', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=960&crop=smart&auto=webp&s=c535bcbaff58c9d1268658212ad10fe6c8e18f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?width=1080&crop=smart&auto=webp&s=945c622d6a7f3687105426c8f58ef81b1a071d75', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/EvzIuOZTDqLS59YXQ6CAS6K_WP1xt9hlTb3ZDoO91qg.jpeg?auto=webp&s=87b8a1b9e6ffdf470472e6c818d74a3b0dd0ffd9', 'width': 1200}, 'variants': {}}]} |
llama.cpp router -> claude code returns " . ? ! " single characters | 0 | Hey all,
Question: for some time now I can't seem to get Claude Code working with llama.cpp. All it does is return single characters like `.` `?` `!`
I'd been putting it aside for some time and quickly switched to the Claude Pro plan. But I'm now running multiple plans and still run out, so getting it back to work would be nice :) :)
I can't remember when or how it stopped working with llama.cpp. Maybe after a docker pull/update?
I've tried multiple models, thinking it might have been the model. Some models throw a 500 error about a tool, but I kind of assume that's due to an incompatible model.
I'd really like to put my RTX Pro 6000 back to work (at that price, it's too expensive to be just a ComfyUI smut station).
my preset:
[GLM-4.5]
; https://huggingface.co/unsloth/GLM-4.5-Air-GGUF
model = /models/GLM-4.5-Air/Q4_1/unsloth/GLM-4.5-Air-Q4_1-00001-of-00002.gguf
jinja = on
n-gpu-layers = 999
no-mmap = on
flash-attn = on
temp = 1.0
min-p = 0.0
top-p = 0.95
top-k = 40
repeat-penalty = 1.05
ctx-size = 40000
threads = -1
cache-type-k = f16
cache-type-v = f16
batch-size = 4096
ubatch-size = 1024
llama-router:
image: ghcr.io/ggml-org/llama.cpp:server-cuda
container_name: llama-router
ports:
- "8080:8080"
volumes:
- /mnt/data/AI/local_ai/llm_models:/models
- /mnt/data/docker/llama-cpp/chat:/chat
- /mnt/data/docker/llama-cpp/presets.ini:/presets.ini:ro
environment:
- LLAMA_ARG_HOST=0.0.0.0
- LLAMA_ARG_PORT=8080
- LLAMA_ARG_MODELS_PRESET=/presets.ini
- LLAMA_ARG_API_KEY=local-claude
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
# Use only GPU 0
# device_ids: ['1'] # Use only GPU 1
# device_ids: ['0','1'] # Use GPU 0 & GPU 1
capabilities: [gpu]
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
        start_period: 40s
running claude:
> export ANTHROPIC_BASE_URL=http://192.168.1.101:8080
> export ANTHROPIC_API_KEY=local-claude
> export ANTHROPIC_MODEL=GLM-4.5
> claude
| 2026-01-06T14:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q5kidu/llamacpp_router_claude_code_returns_single/ | designbanana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5kidu | false | null | t3_1q5kidu | /r/LocalLLaMA/comments/1q5kidu/llamacpp_router_claude_code_returns_single/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=108&crop=smart&auto=webp&s=be66257dfb8060c1200a8a0cd0ca42206175a8fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=216&crop=smart&auto=webp&s=f8665f38a095c32a96a4241162e510534fdc9bbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=320&crop=smart&auto=webp&s=1f0117624421d1bf73d3c0a0635561dfc5bbb8e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=640&crop=smart&auto=webp&s=d204df30f143e07de2de5c6a86cf3af0941abcfd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=960&crop=smart&auto=webp&s=8972d6fb8a82908da65f616af69f7e9257fa603c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?width=1080&crop=smart&auto=webp&s=36fc891805e97b0c4b13376f984592d115838078', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cvmCITW57Ox8n1UPdfO2kFo1JCE1vWepjMkir96PZR8.png?auto=webp&s=ec4f533fe7bc79ce6b3925802ad450c616ba1119', 'width': 1200}, 'variants': {}}]} |
Purging RLHF "assistant-voice" with Shannon Entropy (Math + DPO Export) | 9 | I'm tired of agents apologizing "as an AI language model" or using em-dashes and emojis in my data payloads. It is not just annoying; it is what I call an aesthetic lobotomy.
Most filters use word-lists, which are brittle. I've been experimenting with measuring the **Shannon Entropy** of the response string instead. Professional technical prose is mathematically "messy" (high entropy). AI slop is over-optimized and predictable (low entropy).
If the signal becomes too smooth, I block it. Here is the function I'm using to calculate the signal-to-noise ratio based on character frequency:
```python
import math
from collections import Counter

def _calculate_entropy(self, text: str) -> float:
    """Shannon entropy of the response string, in bits per character."""
    if not text:
        return 0.0
    counts = Counter(text)   # character frequencies
    total = len(text)
    return -sum(
        (count / total) * math.log2(count / total)
        for count in counts.values()
    )
```
I implemented this as a deterministic "Reality Lock." If the entropy dips below 3.5, the output is blocked and the agent retries.
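A stripped-down sketch of that gate (names here are illustrative; the real check lives in the `SlopJudge` class linked below):

```python
ENTROPY_FLOOR = 3.5  # bits per character; below this the output reads as over-smoothed slop

def reality_lock(judge, response: str) -> bool:
    """Return True if the response clears the entropy floor; False means block and retry."""
    return judge._calculate_entropy(response) >= ENTROPY_FLOOR
```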
Instead of decorating every file, I implemented this as a Service Mesh. You call `steer.init(patch=['pydantic_ai'])` at the entry point and it enforces an entropy floor globally. It blocks the sycophancy before it ever hits my application logic.
The win here is the data. I built a DPO export command to turn these failures into contrastive pairs. By blocking the slop at runtime and teaching the fix, I'm generating the (Rejected vs Chosen) dataset needed for Unsloth or TRL to train a natively "quiet" model.
I released this today in Steer v0.4. It's open source and local-first.
The regex blacklist and implementation are in the SlopJudge class here:
[https://github.com/imtt-dev/steer/blob/main/steer/src/steer/judges.py](https://github.com/imtt-dev/steer/blob/main/steer/src/steer/judges.py)
I wrote a deeper breakdown of the theory here:
[https://steerlabs.substack.com/p/solving-the-confident-idiot-problem](https://steerlabs.substack.com/p/solving-the-confident-idiot-problem)
Is anyone else using entropy filters in production, or just regex? | 2026-01-06T13:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1q5j9mv/purging_rlhf_assistantvoice_with_shannon_entropy/ | Proud-Employ5627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5j9mv | false | null | t3_1q5j9mv | /r/LocalLLaMA/comments/1q5j9mv/purging_rlhf_assistantvoice_with_shannon_entropy/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=108&crop=smart&auto=webp&s=3e9add5a08bab7287cd6f6ffed6456555840fbfe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=216&crop=smart&auto=webp&s=09edfd0bd6f60f3bce5678b20c69c61a743b39ae', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=320&crop=smart&auto=webp&s=14420050c4444b1c30f695bd21991c821fcf8fd9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=640&crop=smart&auto=webp&s=14fb4b8e9a3c99150577873aa1caedec0d88151d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?width=960&crop=smart&auto=webp&s=44a9f9d1ea0b0c517b82edfd9dfbcb86356d8ca9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/aVPMmsLTUtRr0q4ZE9N5L4pVQQYd5d_o74kH51MezIM.png?auto=webp&s=1089ccb8786efe179223277d3a8c2f928fec91af', 'width': 1024}, 'variants': {}}]} |
Transparent LLM logging proxy | 3 | What open source options are there for a lightweight, easy-to-deploy logging proxy where I can specify a target endpoint and have the proxy transparently forward requests to it while logging everything (input and output) to a text file?
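Roughly this shape, as a minimal stdlib-only sketch (blocking, no streaming, headers simplified); I'm looking for something maintained that does this properly:

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "http://localhost:8080"   # upstream LLM endpoint (example)
LOG_FILE = "llm_audit.jsonl"

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(TARGET + self.path, data=body,
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
        with urllib.request.urlopen(req) as upstream:
            out = upstream.read()
            status = upstream.status
        with open(LOG_FILE, "a") as f:            # append one JSON record per call
            f.write(json.dumps({"path": self.path,
                                "request": body.decode("utf-8", "replace"),
                                "response": out.decode("utf-8", "replace")}) + "\n")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9000), LoggingProxy).serve_forever()
```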
The purposes would be to enable a full audit of history to view what tool calls are being made, examine how context is being constructed (e.g. for CLI coders like claude code which may not be clear from UI what is being done in the background), whether there are some inefficiencies that exist - the aim would be to improve the process once clear deficiencies are identified. | 2026-01-06T13:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1q5j0h1/transparent_llm_logging_proxy/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5j0h1 | false | null | t3_1q5j0h1 | /r/LocalLLaMA/comments/1q5j0h1/transparent_llm_logging_proxy/ | false | false | self | 3 | null |
Local, reversible PII anonymization for LLMs and Agents | 0 | I built a tool to handle PII in local AI pipelines without breaking the model's context or sending sensitive data to LLM providers. might be useful for others.
Most scrubbers are one-way (redact for analytics). rehydra is designed for **round-trip** workflows where you need to get the data back after inference (e.g., translation, chat) without the LLM ever seeing the real names/IDs.
It’s built in TypeScript for use in Node.js applications or directly in the browser.
It runs Regex for structured data (IBANs, Credit Cards, Custom IDs) and a quantized **XLM-RoBERTa** model for NER (Persons, Orgs, Locations).
**Key Features:**
* **Structured & Soft PII Detection**: Regex & NER
* **Semantic Enrichment**: AI/MT-friendly tags with gender/location attributes
* **Fuzzy Rehydration (Hallucination Guard):** The rehydration is robust to model wrangling (returning `< PII id = 1 >` instead of `<PII id="1"/>`); see the sketch after this list.
* **Configurable Policies**: Customizable detection rules, thresholds, and allowlists
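To make the fuzzy-rehydration point concrete, here is a language-agnostic sketch (Python for readability; this is not rehydra's actual code):

```python
import re

# Tolerate model "wrangling" of placeholders: <PII id="1"/>, < PII id = 1 >, <pii id='1'> all match.
PII_TAG = re.compile(r"<\s*pii\s+id\s*=\s*['\"]?(\d+)['\"]?\s*/?\s*>", re.IGNORECASE)

def rehydrate(text: str, mapping: dict) -> str:
    """Swap fuzzy-matched placeholders back to the original PII values."""
    return PII_TAG.sub(lambda m: mapping[int(m.group(1))], text)

print(rehydrate("Dear < PII id = 1 >, your IBAN <PII id=\"2\"/> was updated.",
                {1: "Maria Rossi", 2: "DE89 3704 0044 0532 0130 00"}))
```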
**Why Node/TS?** I know this sub is heavy on Python, but rehydra is designed for the *application layer* (Electron apps, Edge workers, Sidecars) where you might want to scrub data *before* it hits your Python inference server.
How are you handling sensitive info if you don't own the LLM?
**Repo:** [https://github.com/rehydra-ai/rehydra](https://github.com/rehydra-ai/rehydra)
**Try it:** [https://playground.rehydra.ai/](https://playground.rehydra.ai/)
| 2026-01-06T13:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q5iaml/local_reversible_pii_anonymization_for_llms_and/ | tojoru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5iaml | false | null | t3_1q5iaml | /r/LocalLLaMA/comments/1q5iaml/local_reversible_pii_anonymization_for_llms_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=108&crop=smart&auto=webp&s=b6ad7dee008651f8dcf8bb82a844fb17e607876f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=216&crop=smart&auto=webp&s=c2ffce6e337e493fffc3b1e23f9000c6888a2e50', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=320&crop=smart&auto=webp&s=30254a1153330d74245cf5bff2c2d933ffab0389', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=640&crop=smart&auto=webp&s=a1147c0ac0a4a008bec216a3a55d59c95c80c5d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=960&crop=smart&auto=webp&s=1ff0f14a5752f86afd73ff83d9b753974dbc3eb9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?width=1080&crop=smart&auto=webp&s=1e6af02cc879000e24bf7da2b96a3b7f295f3edb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r30shB1BHJHd6Gx1A913XTEVIEcbxxrx6Ltm5klWVho.png?auto=webp&s=11cb5d6b474c1847be5c5605c6de53ba36ed7773', 'width': 1200}, 'variants': {}}]} |
Best small model (24GB gfx card) for fine-tuning (multi-lingual). | 1 | Looking for a model to train on non-English news articles to become familiar with the political situation and scandals of a particular country.
Software engineer, first time playing with LLMs/ML in general. | 2026-01-06T13:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q5i8f6/best_small_model_24gb_gfx_card_for_finetuning/ | primera_radi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5i8f6 | false | null | t3_1q5i8f6 | /r/LocalLLaMA/comments/1q5i8f6/best_small_model_24gb_gfx_card_for_finetuning/ | false | false | self | 1 | null |
Open Models Reached the Frontier | 4 | [CES 2026 Nvidia Keynote](https://preview.redd.it/op2rlxuy9qbg1.png?width=3840&format=png&auto=webp&s=86670c40e11663dc51cb7291b3c2b7de84e99445)
Really looking forward to what will happen with open-source models in 2026 | 2026-01-06T13:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q5i3oy/open_models_reached_the_frontier/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5i3oy | false | null | t3_1q5i3oy | /r/LocalLLaMA/comments/1q5i3oy/open_models_reached_the_frontier/ | false | false | 4 | null | |
Living With LLMs Everywhere - How Ambient LLMs Negate Security Policy | 1 | [removed] | 2026-01-06T12:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1q5h3vf/living_with_llms_everywhere_how_ambient_llms/ | davidSenTeGuard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5h3vf | false | null | t3_1q5h3vf | /r/LocalLLaMA/comments/1q5h3vf/living_with_llms_everywhere_how_ambient_llms/ | false | false | self | 1 | null |
I need to run Qwen-Image-2512 in my VPS | 0 | Is anyone running the Qwen-Image-2512 model on a VPS?
I have a GPU-based VPS and would like to know the proper way to run this model on it. I tried the GitHub repository method and the Diffusers method using ChatGPT guidance, but neither worked, and I keep running into errors.
| 2026-01-06T12:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1q5h2yp/i_need_to_run_qwenimage2512_in_my_vps/ | Gokulkrish05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5h2yp | false | null | t3_1q5h2yp | /r/LocalLLaMA/comments/1q5h2yp/i_need_to_run_qwenimage2512_in_my_vps/ | false | false | self | 0 | null |
Living With LLMs Everywhere - How Ambient LLMs Negate Security Policy | 1 | [removed] | 2026-01-06T12:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q5h0v0/living_with_llms_everywhere_how_ambient_llms/ | davidSenTeGuard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5h0v0 | false | null | t3_1q5h0v0 | /r/LocalLLaMA/comments/1q5h0v0/living_with_llms_everywhere_how_ambient_llms/ | false | false | self | 1 | null |
So I've been losing my mind over document extraction in insurance for the past few years and I finally figured out what the right approach is. | 76 | I've been doing document extraction for insurance for a while now and honestly I almost gave up on it completely last year. Spent months fighting with accuracy issues that made no sense until I figured out what I was doing wrong.
Everyone's using LLMs or tools like LlamaParse for extraction and they work fine, but then you put them in an actual production env and accuracy just falls off a cliff after a few weeks. I kept thinking I picked the wrong tools or tried to brute force my way through (like any distinguished engineer would do XD) but it turned out to be way simpler and way more annoying.
So if you've ever worked on an information extraction project you already know that most documents have literally zero consistency. I don't mean like "oh the formatting is slightly different", I mean every single document is structured completely differently than all the others.
For example in my case : a workers comp FROI from California puts the injury date in a specific box at the top. Texas puts it in a table halfway down. New York embeds it in a paragraph. Then you get medical bills where one provider uses line items, another uses narrative format, another has this weird hybrid table thing. And that's before you even get to the faxed-sideways handwritten nightmares that somehow still exist in 2026???
Sadly LLMs have no concept of document structure. So when you ask about details in a doc it might pull from the right field, or from some random sentence, or just make something up.
After a lot of headaches and honestly almost giving up completely, I came across a process that might save you some pain, so I thought I'd share it:
1. Stop throwing documents at your extraction model blind. Build a classifier that figures out document type first (FROI vs medical bill vs correspondence vs whatever). Then route to type-specific extraction. This alone fixed like 60% of my accuracy problems. (Really, this is the golden tip ... a lot of people underestimate classification)
2. Don't just extract and hope. Get confidence scores for each field. "I'm 96% sure this is the injury date, 58% sure on this wage calc." Auto-process anything above 90%, flag the rest (rough sketch after this list). This is how you actually scale without hiring people to validate everything AI does.
3. Layout matters more than you think. Vision-language models that actually see the document structure perform way better than text only approaches. I switched to Qwen2.5-VL and it was night and day.
4. Fine-tune on your actual documents. Generic models choke on industry-specific stuff. Fine-tuning with LoRA takes like 3 hours now and accuracy jumps 15-20%. Worth it every time.
5. When a human corrects an extraction, feed that back into training. Your model should get better over time. (This will save you the struggle of having to recreate your process from scratch each time)
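To make point 2 concrete, the routing step is basically this (field names and thresholds are illustrative):

```python
def route_extraction(fields: dict, auto_threshold: float = 0.90):
    """Split extracted fields into auto-processed vs human-review queues.

    `fields` maps field name -> (value, confidence).
    """
    auto, review = {}, {}
    for name, (value, conf) in fields.items():
        (auto if conf >= auto_threshold else review)[name] = (value, conf)
    return auto, review

# Example: the injury date is trusted, the wage calc goes to a human.
auto, review = route_extraction({
    "injury_date": ("2025-11-03", 0.96),
    "weekly_wage": ("812.40", 0.58),
})
```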
Wrote a little blog with more details about this implementation if anyone wants it (I know... shameless self-promotion). (link in comments)
Anyway this is all the stuff I wish someone had told me when I was starting. Happy to share or just answer questions if you're stuck on this problem. Took me way too long to figure this out.
| 2026-01-06T11:53:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q5gklh/so_ive_been_losing_my_mind_over_document/ | GloomyEquipment2120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5gklh | false | null | t3_1q5gklh | /r/LocalLLaMA/comments/1q5gklh/so_ive_been_losing_my_mind_over_document/ | false | false | self | 76 | null |
DeepSeek V3.2 with dense attention (disabled lightning attention) GGUF available | 85 | It runs on regular llama.cpp builds (no extra support for DeepSeek V3.2 is needed).
Only Q8_0 and Q4_K_M are available.
Use the DeepSeek V3.2-Exp jinja template saved to a file to run this model, passing the options: `--jinja --chat-template-file ds32-exp.jinja`
Here's the template I used in my tests: [https://pastebin.com/4cUXvv35](https://pastebin.com/4cUXvv35)
Note that tool calls will most likely not work with this template - they are different between DS 3.2-Exp and DS 3.2.
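Once llama-server is running with those options, you can sanity-check it through its OpenAI-compatible endpoint, for example (port and payload are whatever you configured):

```python
import json, urllib.request

payload = {"messages": [{"role": "user", "content": "Who are you?"}], "max_tokens": 128}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # default llama-server port, adjust to yours
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```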
I ran [lineage-bench](https://github.com/fairydreaming/lineage-bench) on the Q4_K_M quant deployed in llama-server (40 prompts per difficulty level), results:
| Nr | model_name | lineage | lineage-8 | lineage-64 | lineage-128 | lineage-192 |
|-----:|:-----------------------|----------:|------------:|-------------:|--------------:|--------------:|
| 1 | deepseek/deepseek-v3.2 | 0.988 | 1.000 | 1.000 | 1.000 | 0.950 |
The model got only 2 answers wrong at the most difficult graph size (192). It looks like it performed even a bit better than the original DeepSeek V3.2 with sparse attention tested via API:
| Nr | model_name | lineage | lineage-8 | lineage-64 | lineage-128 | lineage-192 |
|-----:|:-----------------------|----------:|------------:|-------------:|--------------:|--------------:|
| 1 | deepseek/deepseek-v3.2 | 0.956 | 1.000 | 1.000 | 0.975 | 0.850 |
From my testing so far, disabling sparse attention does not hurt the model's intelligence.
Enjoy! | 2026-01-06T11:50:35 | https://huggingface.co/sszymczyk/DeepSeek-V3.2-nolight-GGUF | fairydreaming | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5gii4 | false | null | t3_1q5gii4 | /r/LocalLLaMA/comments/1q5gii4/deepseek_v32_with_dense_attention_disabled/ | false | false | default | 85 | {'enabled': False, 'images': [{'id': 'jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=108&crop=smart&auto=webp&s=779ce91de7c7a462e1a8396cc3915be9a5addfbb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=216&crop=smart&auto=webp&s=dc91eb95038715a97d67eea45ad79aa1eb22c76e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=320&crop=smart&auto=webp&s=ce4d01f5b5d3771559fc8915fc96cf908163a736', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=640&crop=smart&auto=webp&s=5f0e98de02fdfc544b4060d48c9cf0b20834c057', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=960&crop=smart&auto=webp&s=d38996b1e7c8dd88878ae5ade1ad7d754e2920b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?width=1080&crop=smart&auto=webp&s=e9bcbdb4597909072abf840a8fa511390e1150fe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jIuhV6ttH6YnzKpDkVMMUOo_Jh6zJvPJHB18Vyt60hU.png?auto=webp&s=815e21113b164948f229648a9946ec8bfcd3069b', 'width': 1200}, 'variants': {}}]} |
Living With LLMs Everywhere - How Ambient LLMs Negate Security Policy. Would love feedback | 1 | 2026-01-06T11:33:18 | https://senteguard.com/blog/#post-cTdX0IaIRz8STpBU9VYk | davidSenTeGuard | senteguard.com | 1970-01-01T00:00:00 | 0 | {} | 1q5g71p | false | null | t3_1q5g71p | /r/LocalLLaMA/comments/1q5g71p/living_with_llms_everywhere_how_ambient_llms/ | false | false | default | 1 | null | |
Benchmark results for 671B DeepSeek in llama.cpp on 8 x RTX PRO 6000S (layer split mode) | 15 | This was run on my modified DeepSeek V3.2 model without lightning indexer tensors, but the performance should be similar for all 671B DeepSeek models (R1, V3, V3.1, V3.2 with dense attention)
# Q4_K_M llama-bench
$ ./bin/llama-bench -m /workspace/hf/models--sszymczyk--DeepSeek-V3.2-nolight-GGUF/snapshots/c90cd1a387ba1e3122d4d0f86fe3302ddcf635c8/Q4_K_M/DeepSeek-V3.2-nolight-Q4_K_M-00001-of-00031.gguf -fa 1 -d 0,4096,8192,16384,32768,65536 -p 2048 -n 32 -ub 2048
...
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 | 1015.31 ± 1.87 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 | 40.74 ± 0.03 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d4096 | 770.00 ± 0.91 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d4096 | 36.41 ± 0.06 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d8192 | 625.01 ± 1.10 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d8192 | 34.95 ± 0.05 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d16384 | 452.01 ± 0.83 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d16384 | 32.62 ± 0.05 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d32768 | 289.82 ± 0.27 |
| deepseek2 671B Q4_K - Medium   | 376.71 GiB |   671.03 B | CUDA       |  99 |     2048 |  1 |  tg32 @ d32768 |         29.50 ± 0.03 |
| deepseek2 671B Q4_K - Medium   | 376.71 GiB |   671.03 B | CUDA       |  99 |     2048 |  1 | pp2048 @ d65536 |        168.18 ± 0.29 |
| deepseek2 671B Q4_K - Medium | 376.71 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d65536 | 24.43 ± 0.08 |
build: bd2a93d47 (7643)
# Q4_K_M llama-batched-bench
$ ./bin/llama-batched-bench -m /workspace/hf/models--sszymczyk--DeepSeek-V3.2-nolight-GGUF/snapshots/c90cd1a387ba1e3122d4d0f86fe3302ddcf635c8/Q4_K_M/DeepSeek-V3.2-nolight-Q4_K_M-00001-of-00031.gguf -fa 1 -c 150000 -ub 2048 -npp 512,4096,8192 -ntg 32 -npl 1,2,4,8,16
...
| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
| 512 | 32 | 1 | 544 | 0.864 | 592.30 | 0.829 | 38.60 | 1.693 | 321.23 |
| 512 | 32 | 2 | 1088 | 1.143 | 895.77 | 1.798 | 35.60 | 2.941 | 369.92 |
| 512 | 32 | 4 | 2176 | 1.788 | 1145.25 | 2.456 | 52.11 | 4.245 | 512.66 |
| 512 | 32 | 8 | 4352 | 3.389 | 1208.62 | 3.409 | 75.11 | 6.798 | 640.23 |
| 512 | 32 | 16 | 8704 | 6.573 | 1246.26 | 4.539 | 112.80 | 11.112 | 783.27 |
| 4096 | 32 | 1 | 4128 | 4.299 | 952.72 | 0.848 | 37.73 | 5.147 | 801.96 |
| 4096 | 32 | 2 | 8256 | 8.603 | 952.21 | 1.860 | 34.41 | 10.463 | 789.05 |
| 4096 | 32 | 4 | 16512 | 17.167 | 954.39 | 2.563 | 49.93 | 19.730 | 836.88 |
| 4096 | 32 | 8 | 33024 | 34.149 | 959.56 | 3.666 | 69.83 | 37.815 | 873.30 |
| 4096 | 32 | 16 | 66048 | 68.106 | 962.27 | 5.028 | 101.83 | 73.134 | 903.11 |
| 8192 | 32 | 1 | 8224 | 9.739 | 841.13 | 0.883 | 36.24 | 10.622 | 774.22 |
| 8192 | 32 | 2 | 16448 | 19.508 | 839.87 | 1.928 | 33.19 | 21.436 | 767.30 |
| 8192 | 32 | 4 | 32896 | 39.028 | 839.61 | 2.681 | 47.75 | 41.708 | 788.71 |
| 8192 | 32 | 8 | 65792 | 77.945 | 840.80 | 3.916 | 65.37 | 81.860 | 803.71 |
| 8192 | 32 | 16 | 131584 | 156.066 | 839.85 | 5.554 | 92.19 | 161.619 | 814.16 |
# Q8_0 llama-bench
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 | 1026.43 ± 0.96 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 | 28.56 ± 0.01 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d4096 | 779.80 ± 1.98 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d4096 | 26.28 ± 0.03 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d8192 | 630.27 ± 0.64 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d8192 | 25.51 ± 0.02 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d16384 | 453.90 ± 0.11 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d16384 | 24.26 ± 0.02 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d32768 | 290.33 ± 0.14 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d32768 | 22.47 ± 0.02 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d65536 | 168.11 ± 0.82 |
| deepseek2 671B Q8_0 | 664.29 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d65536 | 19.33 ± 0.05 |
# Q8_0 llama-batched-bench
| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
| 512 | 32 | 1 | 544 | 0.872 | 587.42 | 1.165 | 27.46 | 2.037 | 267.09 |
| 512 | 32 | 2 | 1088 | 1.148 | 892.32 | 2.193 | 29.19 | 3.340 | 325.70 |
| 512 | 32 | 4 | 2176 | 1.764 | 1160.95 | 2.981 | 42.95 | 4.745 | 458.63 |
| 512 | 32 | 8 | 4352 | 3.350 | 1222.52 | 4.225 | 60.60 | 7.575 | 574.51 |
| 4096 | 32 | 1 | 4128 | 4.286 | 955.68 | 1.186 | 26.98 | 5.472 | 754.37 |
| 4096 | 32 | 2 | 8256 | 8.582 | 954.59 | 2.248 | 28.47 | 10.830 | 762.34 |
| 4096 | 32 | 4 | 16512 | 17.107 | 957.74 | 3.105 | 41.22 | 20.212 | 816.94 |
| 4096 | 32 | 8 | 33024 | 34.101 | 960.91 | 4.534 | 56.47 | 38.635 | 854.78 |
| 8192 | 32 | 1 | 8224 | 9.767 | 838.77 | 1.222 | 26.19 | 10.988 | 748.42 |
| 8192 | 32 | 2 | 16448 | 19.483 | 840.93 | 2.322 | 27.56 | 21.806 | 754.30 |
| 8192 | 32 | 4 | 32896 | 38.985 | 840.53 | 3.256 | 39.31 | 42.241 | 778.77 |
| 8192 | 32 | 8 | 65792 | 77.914 | 841.13 | 4.828 | 53.02 | 82.742 | 795.14 |
Hope you find it useful! | 2026-01-06T11:28:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q5g3ye/benchmark_results_for_671b_deepseek_in_llamacpp/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5g3ye | false | null | t3_1q5g3ye | /r/LocalLLaMA/comments/1q5g3ye/benchmark_results_for_671b_deepseek_in_llamacpp/ | false | false | self | 15 | null |
ARM APU development? | 3 | When will we see more ARM-based chips that meet the needs of AI workloads? I feel like this is the best middle ground and really where we should be focusing our energy. The fact that we could have high-performance ARM PCs that handle these workloads more efficiently, an advancement in a space currently dominated by Apple, would be so satisfying to see come to fruition.
IDK I like ARM chips. Maybe I’ll just design my own. | 2026-01-06T11:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q5g01p/arm_apu_development/ | azeoUnfortunately | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5g01p | false | null | t3_1q5g01p | /r/LocalLLaMA/comments/1q5g01p/arm_apu_development/ | false | false | self | 3 | null |
Artificial Analysis just refreshed their global model indices | 87 | [AA Link with my list models](https://artificialanalysis.ai/?models=gpt-oss-120b%2Cgpt-5-2-non-reasoning%2Cgpt-5-2%2Cgpt-5-1%2Cgpt-oss-20b%2Cllama-4-maverick%2Cgemini-3-pro%2Cgemini-3-flash%2Cgemini-3-flash-reasoning%2Cclaude-opus-4-5%2Cclaude-4-5-sonnet-thinking%2Cclaude-4-5-sonnet%2Cclaude-opus-4-5-thinking%2Cmistral-large-3%2Cdeepseek-r1%2Cdeepseek-v3-2%2Cdeepseek-v3-2-reasoning%2Cgrok-4%2Cgrok-4-1-fast%2Cgrok-4-1-fast-reasoning%2Cnova-2-0-pro-reasoning-medium%2Cnova-2-0-lite-reasoning-medium%2Clfm2-1-2b%2Cminimax-m2-1%2Cnvidia-nemotron-3-nano-30b-a3b-reasoning%2Ckimi-k2-thinking%2Ckimi-k2-0905%2Colmo-3-1-32b-think%2Colmo-3-7b-instruct%2Cmimo-v2-flash-reasoning%2Ckat-coder-pro-v1%2Cmi-dm-k-2-5-pro-dec28%2Cglm-4-5-air%2Cglm-4-6v-reasoning%2Cglm-4-7%2Cglm-4-7-non-reasoning%2Capriel-v1-6-15b-thinker%2Cqwen3-235b-a22b-instruct-2507-reasoning%2Cqwen3-next-80b-a3b-reasoning%2Cqwen3-coder-30b-a3b-instruct%2Cqwen3-235b-a22b-instruct-2507%2Cqwen3-0.6b-instruct%2Cglm-4-6&intelligence-category=reasoning-vs-non-reasoning&media-leaderboards=text-to-video&omniscience=omniscience-index&speed=intelligence-vs-speed#artificial-analysis-intelligence-index#artificial-analysis-intelligence-index)
[https://artificialanalysis.ai/](https://artificialanalysis.ai/)
It feels like they tweaked the metrics just to keep OpenAI ahead of Google, but whatever
Not all models have been updated to the new benchmark yet. I first noticed it when Kimi K2 was ranked much lower than usual, and then the rest of the models started catching up. It looks like this might be Version 4.0, but I haven't been using the site long enough to be 100% sure if they had already switched to 4.0 earlier. It feels like a stealth update of the benchmarks before an official announcement. | 2026-01-06T11:10:20 | https://www.reddit.com/gallery/1q5fs95 | MadPelmewka | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q5fs95 | false | null | t3_1q5fs95 | /r/LocalLLaMA/comments/1q5fs95/artificial_analysis_just_refreshed_their_global/ | false | false | 87 | null | |
Can the LLM RL training paradigm work without CoT? | 1 | Today when people talk about RL4LLM (except for RL for aligning with human preferences), it always means think first, then answer.
So I am wondering: can the LLM RL training paradigm work without CoT?
Or, say, can RL act as a substitute for SFT in the "pre-training -> post-training just for a specific downstream task" pipeline?
Has anyone tried it, or is there any relevant research? | 2026-01-06T10:55:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q5fih0/can_llms_rl_training_paradigm_works_without_cot/ | Plenty_Ostrich4536 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5fih0 | false | null | t3_1q5fih0 | /r/LocalLLaMA/comments/1q5fih0/can_llms_rl_training_paradigm_works_without_cot/ | false | false | self | 1 | null |
Training can be possible on 12 GB RAM + 3 GB VRAM. | 21 | Yes. Training is possible on 12 GB RAM + 3 GB VRAM. I've created a model on a PC with a GTX 1050. IT'S POSSIBLE! But only 0.6B. [https://huggingface.co/Erik22TY/Nebulos-Distill-Qwen3-0.6BDFEEDWD](https://huggingface.co/Erik22TY/Nebulos-Distill-Qwen3-0.6BDFEEDWD) | 2026-01-06T10:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q5fhpv/training_can_be_possible_on_12_gb_ram_3_gb_vram/ | Ok-Type-7663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5fhpv | false | null | t3_1q5fhpv | /r/LocalLLaMA/comments/1q5fhpv/training_can_be_possible_on_12_gb_ram_3_gb_vram/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=108&crop=smart&auto=webp&s=b4f01aee215be0ad9cbf87215094c7742ede31f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=216&crop=smart&auto=webp&s=230e351c149befdf9a1e218bdc18c4f75f94b022', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=320&crop=smart&auto=webp&s=1bac339706dfbdb1f32de652e3088290937c1259', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=640&crop=smart&auto=webp&s=73ec41665dd577b9dd823b924f22d28754a9633d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=960&crop=smart&auto=webp&s=df4a9d4c3da929d0ef68b8bbde1f873b733221d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?width=1080&crop=smart&auto=webp&s=5eb1e6ea2b0fa941c8be6c5c796656fb95bc2721', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yWQXlL9OI84wgEoQcw06NNc3UxD6-X3De3EZYHwBpIs.png?auto=webp&s=3e5e6b014b5659374fd0640411079a58a950227e', 'width': 1200}, 'variants': {}}]} |
Looking for suggestions: LM Studio and openai/gpt-oss-20b | 0 | Hi everyone,
I like the idea of having a local AI system, but there's one thing I can't figure out.
I'm very happy with ChatGPT Pro because it analyzes the documents I upload, remembers the things I do, and "learns" from me.
Locally, on the other hand, I can't figure out how to use it well... it doesn't remember anything, so I can't fit it into my routine; it always feels like starting from scratch.
What am I missing? Can you help me understand how to get the most out of it? Do I need to download some "plugins", or what?
Thanks! | 2026-01-06T10:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1q5feyp/cerco_suggerimenti_lmstudio_e_openaigptoss20b/ | Signal_Pickle_3062 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5feyp | false | null | t3_1q5feyp | /r/LocalLLaMA/comments/1q5feyp/cerco_suggerimenti_lmstudio_e_openaigptoss20b/ | false | false | self | 0 | null |
I have been doing some benchmarking of SLMs | 3 | My hardware is pretty weak: an Intel N97 CPU with 32GB of 3200 MT/s DDR4 RAM and a 512GB NVMe drive.
Running on Debian with llama.cpp compiled on my machine specifically for CPU inference.
I have a test suite of 5 questions, and ChatGPT measures and provides the results and comments.
My usability score is derived from the test score^5, multiplied by average t/s, and then I apply a 10% penalty if the model uses reasoning.
The reason for the penalty is that if two models score the same, as in produce the same quality of response, and only one of them is non-reasoning, then that one is actually performing better.
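In code form, the scoring is just:

```python
def usability(test_score: float, avg_tps: float, uses_reasoning: bool) -> float:
    """Quality dominates (raised to the 5th power), scaled by speed, 10% off for reasoning models."""
    score = (test_score ** 5) * avg_tps
    return score * 0.9 if uses_reasoning else score
```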
https://preview.redd.it/x48dnz26kpbg1.png?width=2212&format=png&auto=webp&s=c91fd217c464261cdf7755a4dd3c0ade78e52c89
| 2026-01-06T10:49:46 | https://www.reddit.com/r/LocalLLaMA/comments/1q5fels/i_have_been_doing_some_benchmarking_of_slms/ | fozid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5fels | false | null | t3_1q5fels | /r/LocalLLaMA/comments/1q5fels/i_have_been_doing_some_benchmarking_of_slms/ | false | false | 3 | null | |
Creating a minimalist chat interface for AI | 3 | I am creating a minimalist chat interface for AI and planning to open-source it. My question is: what do you think could be improved, and what features would you like to see?
Currently planned features:
1. Allow users to use their local models
2. Tools (web_search through SearXNG, and the ability to use MCP tools)
3. Support for thinking models
4. Additional features which you guys can suggest
[Any suggestions would be nice](https://preview.redd.it/y7vfzw1tjpbg1.png?width=2395&format=png&auto=webp&s=47b5587e438128679cd058c2a9f451f5aebeb75c)
| 2026-01-06T10:41:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q5f98m/creating_a_minimalist_chat_interface_for_ai/ | ultrassniper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5f98m | false | null | t3_1q5f98m | /r/LocalLLaMA/comments/1q5f98m/creating_a_minimalist_chat_interface_for_ai/ | false | false | 3 | null | |
Liquid AI released LFM2.5 1.2B Instruct | 101 | Today, we release LFM2.5, our most capable family of tiny on-device foundation models.
It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class.
>LFM2.5 builds on our LFM2 device-optimized hybrid architecture
>Pretraining scaled from 10T → 28T tokens
>Expanded reinforcement learning post-training
>Higher ceilings for instruction following | 2026-01-06T10:28:50 | KaroYadgar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5f1jz | false | null | t3_1q5f1jz | /r/LocalLLaMA/comments/1q5f1jz/liquid_ai_released_lfm25_12b_instruct/ | false | false | 101 | {'enabled': True, 'images': [{'id': 'dSOQ2TGnWwKMlHXwKdFy9ZgCmUN4Pdb9WI4RIWf67cA', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=108&crop=smart&auto=webp&s=4b1981e919e4a6f44ee380c70aa45da35b411cb9', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=216&crop=smart&auto=webp&s=19e87f7998364d15a6b2f309d261f75c45ab25fd', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=320&crop=smart&auto=webp&s=7a474c6705523fab8fb6209cb55a927c4ad3648e', 'width': 320}, {'height': 450, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=640&crop=smart&auto=webp&s=55542d5e224e490c112febafbc853ee412d376d2', 'width': 640}, {'height': 675, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=960&crop=smart&auto=webp&s=d053926647bc768ddd0a112a21564d21401d6462', 'width': 960}, {'height': 759, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?width=1080&crop=smart&auto=webp&s=121d325f73defd0697da29b0aba70f0efc558156', 'width': 1080}], 'source': {'height': 1334, 'url': 'https://preview.redd.it/e1qsc3urhpbg1.jpeg?auto=webp&s=2b47f2ae42da3ad25a07d8e3f8351747d4bb6086', 'width': 1896}, 'variants': {}}]} | ||
I built Ctrl: Execution control plane for high stakes agentic systems | 0 | I built Ctrl, an open-source execution control plane that sits between an agent and its tools.
Instead of letting tool calls execute directly, Ctrl intercepts them, dynamically scores risk, applies policy (allow / deny / approve), and only then executes, recording every intent, decision, and event in a local SQLite ledger.
GH: [https://github.com/MehulG/agent-ctrl](https://github.com/MehulG/agent-ctrl)
It’s currently focused on LangChain + MCP as a drop-in wrapper. The demo shows a content publish action being intercepted, paused for approval, and replayed safely after approval.
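Conceptually, the wrapper around each tool call does something like this (pseudo-API with illustrative risk numbers; the real interface is in the repo):

```python
RISK = {"publish_post": 0.8, "read_docs": 0.1}  # illustrative per-tool risk scores

def guarded_call(tool_name, args, execute, ask_human, log):
    """Intercept a tool call: score risk, apply policy, record everything, then (maybe) execute."""
    risk = RISK.get(tool_name, 0.5)
    decision = "allow" if risk < 0.3 else ("approve" if risk < 0.9 else "deny")
    if decision == "approve":                       # pause and wait for a human
        decision = "allow" if ask_human(tool_name, args) else "deny"
    log({"tool": tool_name, "args": args, "risk": risk, "decision": decision})
    return execute(**args) if decision == "allow" else None
```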
I’d love feedback from anyone running agents that take real actions. | 2026-01-06T10:25:50 | Temporary-Tap-7323 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5ezpy | false | null | t3_1q5ezpy | /r/LocalLLaMA/comments/1q5ezpy/i_built_ctrl_execution_control_plane_for_high/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 't06piyf4hpbg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=108&crop=smart&format=png8&s=fc72d4a580febbd6e7c120b285817651f864e3f1', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=216&crop=smart&format=png8&s=72a31dfc09cdebed7c0ab088aa2119068f5080ad', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=320&crop=smart&format=png8&s=38a6afb80a1bfa2bb0601dfeffbed7b8a6ebcd30', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=640&crop=smart&format=png8&s=78579c3dc20077310ea517b311659e71d727dd91', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=960&crop=smart&format=png8&s=f68b3d16b55dc7af7030a8d50ad0be9661d174b8', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=1080&crop=smart&format=png8&s=831d70c4c6887bff5cb8c63f4ea6379af7e47ac3', 'width': 1080}], 'source': {'height': 667, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?format=png8&s=758494b1a56d558e5cf12a7edee0535869f3f47a', 'width': 1280}, 'variants': {'gif': {'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=108&crop=smart&s=34f41e3fc3114f07eafc57997ca336e0cd6a47dc', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=216&crop=smart&s=787f812dcdc2bddfcf39c7696fbb8f4e1ecf303c', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=320&crop=smart&s=7aa839aa44298a9fdca94287153f1ecd938e6fdb', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=640&crop=smart&s=2fc9ee17d972ebc4d94a89bc099e4d6756f33c4e', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=960&crop=smart&s=3e484b1aa37f67507121beb241857903dc2d86ea', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=1080&crop=smart&s=fa789d554689e35600b16700b5757392da6e8a97', 'width': 1080}], 'source': {'height': 667, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?s=208dd9dd6be7b91a2402a8a076ea078894ae2035', 'width': 1280}}, 'mp4': {'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=108&format=mp4&s=3fa3ba4f98909b06a1c922e93fc781dc57987965', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=216&format=mp4&s=9b3dfea73a15d934b424def53f2a9ad5b828c8f4', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=320&format=mp4&s=8664bac9b909d9b788ff72c11444f32809eec98e', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=640&format=mp4&s=1a8c46678939f39f8aa3e69e3b13c33f8df4a8a7', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=960&format=mp4&s=2ccf1eba90ccea5059e8cd755894df70c8d33925', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/t06piyf4hpbg1.gif?width=1080&format=mp4&s=3897784cd1957ab15fbba8a1e6349ec32b5848c4', 'width': 1080}], 'source': {'height': 667, 'url': 
'https://preview.redd.it/t06piyf4hpbg1.gif?format=mp4&s=b7c6bb1d2a22af5dfe03897aa7e924a358eda324', 'width': 1280}}}}]} | |
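A minimal, framework-agnostic sketch of that intercept-pause-approve-replay flow (the class, function names, and approval step below are illustrative placeholders, not the actual ctrl, LangChain, or MCP API):

```python
from dataclasses import dataclass, field
from typing import Callable, Any
import uuid

@dataclass
class PendingAction:
    """A high-risk tool call captured before execution."""
    action_id: str
    tool_name: str
    arguments: dict
    approved: bool = False

@dataclass
class ApprovalGate:
    """Intercepts risky tool calls, parks them, and replays them once approved."""
    is_high_risk: Callable[[str], bool]
    pending: dict = field(default_factory=dict)

    def intercept(self, tool_name: str, arguments: dict, execute: Callable[..., Any]):
        if not self.is_high_risk(tool_name):
            return execute(**arguments)          # low-risk calls pass straight through
        action = PendingAction(str(uuid.uuid4()), tool_name, arguments)
        self.pending[action.action_id] = (action, execute)
        return {"status": "paused", "action_id": action.action_id}

    def approve_and_replay(self, action_id: str):
        action, execute = self.pending.pop(action_id)
        action.approved = True
        return execute(**action.arguments)       # replay the original call unchanged

# Example: a "publish_post" tool is paused, then replayed after human approval.
gate = ApprovalGate(is_high_risk=lambda name: name in {"publish_post", "send_email"})
receipt = gate.intercept("publish_post", {"title": "Hello"}, execute=lambda title: f"published: {title}")
print(receipt)                                        # {'status': 'paused', 'action_id': ...}
print(gate.approve_and_replay(receipt["action_id"]))  # 'published: Hello'
```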
VAD based solutions on AI Assistants. Any Suggestions? | 1 | Hello Guys!
I'm trying to build an assistant out of VAD (Voice Activity Detection) + ElevenLabs STT + Gemini + OpenAI TTS components, but I'm having some trouble with the system. Everything is fine as long as the VAD correctly recognizes my voice.
I have implemented various VAD solutions (Silero VAD, WebRTC VAD, Picovoice Cobra), but every time the system hears a crackling or other environmental sound it triggers the barge-in mechanism, stops generating, and starts listening to that environmental noise instead. I have tried different fixes, such as swapping the VAD and raising the voice energy threshold, but none of them work.
I would like to hear your opinions on how I can overcome this problem, and whether there are any good resources on real-time speech assistants. Thanks! | 2026-01-06T10:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1q5ev0w/vad_based_solutions_on_ai_assistants_any/ | No-Motor-6274 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5ev0w | false | null | t3_1q5ev0w | /r/LocalLLaMA/comments/1q5ev0w/vad_based_solutions_on_ai_assistants_any/ | false | false | self | 1 | null |
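One common mitigation for exactly this failure mode is to debounce barge-in: only interrupt TTS after several consecutive high-probability speech frames rather than on a single spike. A minimal sketch over frame-level VAD probabilities (the thresholds and frame length are illustrative assumptions, not tied to any particular VAD library):

```python
class BargeInGate:
    """Debounces a frame-level VAD so short noise spikes don't interrupt playback."""

    def __init__(self, speech_prob_threshold=0.85, min_consecutive_frames=8):
        # 8 frames of ~32 ms each => roughly 250 ms of sustained speech before interrupting.
        self.speech_prob_threshold = speech_prob_threshold
        self.min_consecutive_frames = min_consecutive_frames
        self.consecutive = 0

    def update(self, speech_prob: float) -> bool:
        """Feed one VAD probability per audio frame; returns True only when barge-in should fire."""
        if speech_prob >= self.speech_prob_threshold:
            self.consecutive += 1
        else:
            self.consecutive = 0  # a single quiet frame resets the streak
        return self.consecutive >= self.min_consecutive_frames

# Usage: crackles produce isolated high-probability frames and get filtered out.
gate = BargeInGate()
frames = [0.1, 0.95, 0.2] + [0.97] * 10   # one spike, then sustained speech
for i, p in enumerate(frames):
    if gate.update(p):
        print(f"barge-in triggered at frame {i}")
        break
```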
LoongFlow: Open Source Implementation of Evolutionary Agent Framework | 1 | [removed] | 2026-01-06T10:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q5euyz/loongflow_open_source_implementation_of/ | Hundredwz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5euyz | false | null | t3_1q5euyz | /r/LocalLLaMA/comments/1q5euyz/loongflow_open_source_implementation_of/ | false | false | self | 1 | null |
RTX 6000 Threadripper build drive question | 35 | The Build:
Motherboard: ASRock WRX90 WS EVO
CPU: Ryzen Threadripper PRO 9985WX
GPU: RTX 6000 MAX-Q x 3
RAM: 768GB (8x96GB) - Vcolor DDR5 6400 TR596G64D452O
Storage:
1. Samsung MZ-V9P2T0B/AM 990 PRO 2TB NVMe Solid State Drive
2. WD_BLACK 8TB SN850X NVMe Gen4 PCIe M.2 2280 WDS800T2XHE
3. Kioxia 30.72TB SSD
PSU: Super Flower Leadex Titanium 2800W ATX 3.1
Cooling: Silverstone SST-XE360-TR5 Server AIO Liquid Cooling
Case: Phanteks PH-ES620PC_BK02 Enthoo Pro Server Edition
As of this stage I’ve put everything together but I am unsure how to connect the Kioxia SSD. Any help is appreciated. | 2026-01-06T10:16:51 | Direct_Bodybuilder63 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5eue3 | false | null | t3_1q5eue3 | /r/LocalLLaMA/comments/1q5eue3/rtx_6000_threadripper_build_drive_question/ | false | false | default | 35 | {'enabled': True, 'images': [{'id': 'svwnp4vmfpbg1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/svwnp4vmfpbg1.jpeg?width=108&crop=smart&auto=webp&s=0a18ab1372642f8ddd424d20fd556e40d5f90b70', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/svwnp4vmfpbg1.jpeg?width=216&crop=smart&auto=webp&s=ad271b0554eedfd187b1dbdfb15c35dd5b9e525c', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/svwnp4vmfpbg1.jpeg?width=320&crop=smart&auto=webp&s=574a5f1368a4f2717389850c1b06487f960dc606', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/svwnp4vmfpbg1.jpeg?width=640&crop=smart&auto=webp&s=e398700113a861a76076a511cd36f501b13cafd4', 'width': 640}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/svwnp4vmfpbg1.jpeg?auto=webp&s=2b83ec308403f5ac1b8f560c33aa32f21e677f4c', 'width': 900}, 'variants': {}}]} | |
LoongFlow: Open Source Implementation of Evolutionary Agent Framework | 1 | [removed] | 2026-01-06T10:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q5et5w/loongflow_open_source_implementation_of/ | Hundredwz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5et5w | false | null | t3_1q5et5w | /r/LocalLLaMA/comments/1q5et5w/loongflow_open_source_implementation_of/ | false | false | self | 1 | null |
Speculative decoding and Finetuning | 1 | I've asked before about performance gains of Speculative decoding and majority of you said that it was.
Even though I don't have the resources at home to justify it, but i work in a very niche field. I've asked before about finetuning and they have stated that it's not currently worth the effort for the larger models, which i understand because the RAG process works fairly well.
But finetuning a small model like 3B shouldn't take too long, just wondering if finetuning a speculative decoded model will help a larger model in the niche field. | 2026-01-06T09:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q5ega7/speculative_decoding_and_finetuning/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5ega7 | false | null | t3_1q5ega7 | /r/LocalLLaMA/comments/1q5ega7/speculative_decoding_and_finetuning/ | false | false | self | 1 | null |
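For what it's worth, the draft model only affects speed, not the large model's outputs: speculative decoding verifies every drafted token against the target, so a domain-finetuned draft mainly raises the acceptance rate on niche text. A minimal sketch using Hugging Face transformers' assisted generation, assuming the draft shares the target's tokenizer; the model names below are placeholders:

```python
# Sketch: pair a domain-finetuned 3B draft with a larger target model.
# The model names are placeholders; both must use the same tokenizer/vocabulary.
from transformers import AutoModelForCausalLM, AutoTokenizer

TARGET = "your-org/big-target-model"        # hypothetical
DRAFT = "your-org/finetuned-3b-draft"       # hypothetical, finetuned on niche data

tokenizer = AutoTokenizer.from_pretrained(TARGET)
target = AutoModelForCausalLM.from_pretrained(TARGET, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(DRAFT, device_map="auto")

prompt = "Summarise the key failure modes of <niche process> in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# assistant_model turns on assisted (speculative) decoding: the draft proposes
# tokens cheaply and the target verifies them, so the greedy output is unchanged
# but decoding is faster when the draft's proposals are frequently accepted.
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```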
Which OCR engine provides the best results with docling? | 2 | So far, I have tried out RapidOCR. I'm planning to try out TesserOCR and PaddleOCR with docling. | 2026-01-06T09:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q5edkq/which_ocr_engine_provides_the_best_results_with/ | Pretend-Elevator874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5edkq | false | null | t3_1q5edkq | /r/LocalLLaMA/comments/1q5edkq/which_ocr_engine_provides_the_best_results_with/ | false | false | self | 2 | null |
transcription and diarization on raspberry pi 4b | 1 | I am working on a Raspberry Pi 4B and I need a model to diarize and transcribe audio files that contain multiple speakers. I want something optimized and accurate. Any suggestions? | 2026-01-06T09:46:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q5ecnb/transcription_and_diarization_on_raspberry_pi_4b/ | NitroOwO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5ecnb | false | null | t3_1q5ecnb | /r/LocalLLaMA/comments/1q5ecnb/transcription_and_diarization_on_raspberry_pi_4b/ | false | false | self | 1 | null |
Supertonic2: Lightning Fast, On-Device, Multilingual TTS | 191 | \[Demo\] [https://huggingface.co/spaces/Supertone/supertonic-2](https://huggingface.co/spaces/Supertone/supertonic-2)
\[Model\] [https://huggingface.co/Supertone/supertonic-2](https://huggingface.co/Supertone/supertonic-2)
\[Code\] [https://github.com/supertone-inc/supertonic](https://github.com/supertone-inc/supertonic)
Hello!
I want to share that Supertonic now supports 5 languages:
한국어 · Español · Français · Português · English
It’s an open-weight TTS model designed for extreme speed, minimal footprint, and flexible deployment. You can also use it commercially!
Here are key features:
(1) Lightning fast — RTF 0.006 on M4 Pro
(2) Lightweight — 66M parameters
(3) On-device TTS — Complete privacy, zero network latency
(4) Flexible deployment — Runs on browsers, PCs, mobiles, and edge devices
(5) 10 preset voices — Pick the voice that fits your use cases
(6) Open-weight model — Commercial use allowed ([OpenRAIL-M](https://huggingface.co/Supertone/supertonic-2/blob/main/LICENSE))
I hope Supertonic is useful for your projects. | 2026-01-06T09:24:47 | https://v.redd.it/k40jciwu5pbg1 | ANLGBOY | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5e010 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k40jciwu5pbg1/DASHPlaylist.mpd?a=1770283503%2COWE4NzY2ZWNlNzE2MWYxOTM4NTZlMjVkN2E4NWQyMDNiOGJkZjkxOWMwZTU5ZmEwOTg0NGJkMDg4NDQ3MDRiOQ%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/k40jciwu5pbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/k40jciwu5pbg1/HLSPlaylist.m3u8?a=1770283503%2CMzdkNzRmZmM5ZGRhOGM2ZTQ0YmE4YTM0ZDRmYmY3Zjk2Mjc0N2IwYTMwNjJkNGQ2ZTQzMjJlMmRiNDZiOGNjYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k40jciwu5pbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1q5e010 | /r/LocalLLaMA/comments/1q5e010/supertonic2_lightning_fast_ondevice_multilingual/ | false | false | 191 | {'enabled': False, 'images': [{'id': 'aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=108&crop=smart&format=pjpg&auto=webp&s=30310e05e010818f1ab86cda7e0d914951dffcd9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=216&crop=smart&format=pjpg&auto=webp&s=0ee6a78c8da34959d0f991c64e65c90d41a0c66d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=320&crop=smart&format=pjpg&auto=webp&s=a3f1318c170352c00dec2b15be29774bc62922e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7a79eb5e1b7aa4b38be96792c921aca43763a8b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=960&crop=smart&format=pjpg&auto=webp&s=016330bd198acbe15bbd2579dabaf08d25dd05d9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=90cc6eef297b2e2ed40f02183e48d94b0e8c015d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aTZxcnNkeXU1cGJnMYKGJdezLzYbef1CYRrcNdCGvvmVdrxf390KMohjzSE6.png?format=pjpg&auto=webp&s=f735bb253e1f49809f06d4a91150c509e3a77be3', 'width': 1920}, 'variants': {}}]} | |
Akicou/Solar-Open-69B-REAP · Hugging Face | 2 | ...you can now run Solar Open on everything ;)
[https://huggingface.co/mradermacher/Solar-Open-69B-REAP-GGUF](https://huggingface.co/mradermacher/Solar-Open-69B-REAP-GGUF)
[https://huggingface.co/mradermacher/Solar-Open-69B-REAP-i1-GGUF](https://huggingface.co/mradermacher/Solar-Open-69B-REAP-i1-GGUF)
| 2026-01-06T09:15:34 | https://huggingface.co/Akicou/Solar-Open-69B-REAP | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5duxc | false | null | t3_1q5duxc | /r/LocalLLaMA/comments/1q5duxc/akicousolaropen69breap_hugging_face/ | false | false | default | 2 | null |
Performance improvements in llama.cpp over time | 623 | 2026-01-06T09:03:03 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5dnyw | false | null | t3_1q5dnyw | /r/LocalLLaMA/comments/1q5dnyw/performance_improvements_in_llamacpp_over_time/ | false | false | default | 623 | {'enabled': True, 'images': [{'id': 'lsqwma772pbg1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=108&crop=smart&auto=webp&s=720e474f70d35227eb9ab2b7a90cd7fe332a2b21', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=216&crop=smart&auto=webp&s=8e3f26d33f8c567fa7ac54d4a4585379bd792f55', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=320&crop=smart&auto=webp&s=4464ff7d07b6619489a8ba9eb8f4731af3f48145', 'width': 320}, {'height': 391, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=640&crop=smart&auto=webp&s=fc5492d32ea47c504ec0399a9c2a02a046df6fc0', 'width': 640}, {'height': 586, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=960&crop=smart&auto=webp&s=16d24e06ee645bec9c2b723d7698af8de122cf0c', 'width': 960}, {'height': 660, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?width=1080&crop=smart&auto=webp&s=781db5434eae4deeb640dbb6743673e8822d7c95', 'width': 1080}], 'source': {'height': 734, 'url': 'https://preview.redd.it/lsqwma772pbg1.png?auto=webp&s=9a1967a657491080197d5876538f27eb30abb1da', 'width': 1201}, 'variants': {}}]} | ||
Radeon Pro v340 Drivers | 1 | I have been tinkering with a Radeon Pro v340 off and on for a while because I happened on a couple of them way back; however, I have never been able to get it recognized. I thought it might be related to resizable BAR issues and a peculiar motherboard, so I put it back and forgot about it. Recently, I tried it again on 3 different systems (Epyc ROMED-8 2T - PopOS 24, Xeon Workstation - PopOS 22, Gaming PC - Ubuntu 22, and again on the Workstation with Ubuntu 24). I know the 6.19 kernel resurrects some old cards with a new amdgpu rewrite, so I even tried that and still nothing. I have also tried it with ROCm 5.7, 6.3, 6.4, 7, etc.
It always fails with:
`38724.150766] [drm] PSP loading VCE firmware`
`[38724.302690] amdgpu 0000:09:00.0: amdgpu: reserve 0x400000 from 0x87fe000000 for PSP TMR`
`[38724.382050] amdgpu 0000:09:00.0: amdgpu: memory partition mode query is not supported`
`[38724.386037] amdgpu 0000:09:00.0: amdgpu: RAP: optional rap ta ucode is not available`
`[38724.388798] amdgpu 0000:09:00.0: amdgpu: [drm] Display Core v3.2.351 initialized on DCE 12.1`
`[38724.392198] snd_hda_intel 0000:09:00.1: bound 0000:09:00.0 (ops amdgpu_dm_audio_component_bind_ops [amdgpu])`
`[38724.544921] amdgpu 0000:09:00.0: amdgpu: kiq ring mec 2 pipe 1 q 0`
`[38724.903628] amdgpu: HMM registered 32752MB device memory`
`[38724.904539] kfd kfd: amdgpu: Allocated 3969056 bytes on gart`
`[38724.904568] kfd kfd: amdgpu: Total number of KFD nodes to be created: 1`
`[38724.904572] amdgpu: [powerplay] [MemMclks]: memclk dpm not enabled!`
`[38724.904713] amdgpu: Virtual CRAT table created for GPU`
`[38724.904852] amdgpu: [powerplay] [MemMclks]: memclk dpm not enabled!`
`[38724.904856] amdgpu: Topology: Add dGPU node [0x66a3:0x1002]`
`[38724.904857] kfd kfd: amdgpu: added device 1002:66a3`
`[38724.905569] [drm:smu_v11_0_i2c_xfer [amdgpu]] *ERROR* Received I2C_NAK_7B_ADDR_NOACK !!!`
`[38724.906073] [drm:smu_v11_0_i2c_xfer [amdgpu]] *ERROR* WriteI2CData() - I2C error occurred :1`
`[38724.906573] amdgpu 0000:09:00.0: amdgpu: Couldn't read the IPMI Common Header: -5`
`[38724.906587] amdgpu 0000:09:00.0: amdgpu: SE 4, SH per SE 1, CU per SH 16, active_cu_number 64`
`[38724.906590] amdgpu 0000:09:00.0: amdgpu: ring gfx uses VM inv eng 0 on hub 0`
`[38724.906592] amdgpu 0000:09:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0`
`[38724.906593] amdgpu 0000:09:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0`
`[38724.906594] amdgpu 0000:09:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0`
`[38724.906596] amdgpu 0000:09:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0`
`[38724.906597] amdgpu 0000:09:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0`
`[38724.906598] amdgpu 0000:09:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0`
`[38724.906599] amdgpu 0000:09:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0`
`[38724.906600] amdgpu 0000:09:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0`
`[38724.906601] amdgpu 0000:09:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 11 on hub 0`
`[38724.906603] amdgpu 0000:09:00.0: amdgpu: ring sdma0 uses VM inv eng 0 on hub 8`
`[38724.906604] amdgpu 0000:09:00.0: amdgpu: ring sdma0 shares VM invalidation engine 0 with ring page0 on hub 8`
`[38724.906606] amdgpu 0000:09:00.0: amdgpu: ring page0 uses VM inv eng 1 on hub 8`
`[38724.906607] amdgpu 0000:09:00.0: amdgpu: ring sdma1 uses VM inv eng 4 on hub 8`
`[38724.906608] amdgpu 0000:09:00.0: amdgpu: ring sdma1 shares VM invalidation engine 4 with ring page1 on hub 8`
`[38724.906609] amdgpu 0000:09:00.0: amdgpu: ring page1 uses VM inv eng 5 on hub 8`
`[38724.906610] amdgpu 0000:09:00.0: amdgpu: ring uvd_0 uses VM inv eng 6 on hub 8`
`[38724.906611] amdgpu 0000:09:00.0: amdgpu: ring uvd_enc_0.0 uses VM inv eng 7 on hub 8`
`[38724.906613] amdgpu 0000:09:00.0: amdgpu: ring uvd_enc_0.1 uses VM inv eng 8 on hub 8`
`[38724.906614] amdgpu 0000:09:00.0: amdgpu: ring uvd_1 uses VM inv eng 9 on hub 8`
`[38724.906615] amdgpu 0000:09:00.0: amdgpu: ring uvd_enc_1.0 uses VM inv eng 10 on hub 8`
`[38724.906616] amdgpu 0000:09:00.0: amdgpu: ring uvd_enc_1.1 uses VM inv eng 11 on hub 8`
`[38724.906617] amdgpu 0000:09:00.0: amdgpu: ring vce0 uses VM inv eng 12 on hub 8`
`[38724.906618] amdgpu 0000:09:00.0: amdgpu: ring vce1 uses VM inv eng 13 on hub 8`
`[38724.906619] amdgpu 0000:09:00.0: amdgpu: ring vce2 uses VM inv eng 14 on hub 8`
`[38724.906919] amdgpu: Detected AMDGPU DF Counters. # of Counters = 8.`
`[38724.906936] amdgpu: Detected AMDGPU 2 Perf Events.`
`[38724.907224] amdgpu 0000:09:00.0: amdgpu: Runtime PM not available`
`[38724.908792] amdgpu 0000:09:00.0: [drm] Registered 6 planes with drm panic`
`[38724.908794] [drm] Initialized amdgpu 3.64.0 for 0000:09:00.0 on minor 1`
`[38724.922406] fbcon: amdgpudrmfb (fb0) is primary device`
`[38725.126667] amdgpu 0000:09:00.0: [drm] fb0: amdgpudrmfb frame buffer device`
`[38725.137059] amdgpu 0000:12:00.0: enabling device (0000 -> 0003)`
`[38725.137333] amdgpu 0000:12:00.0: amdgpu: initializing kernel modesetting (VEGA10 0x1002:0x6864 0x1002:0x0C00 0x05).`
`[38725.137355] amdgpu 0000:12:00.0: amdgpu: Fatal error during GPU init`
`[38725.137487] amdgpu 0000:12:00.0: probe with driver amdgpu failed with error -12`
`[38725.137807] Modules linked in: amdgpu(+) amdxcp drm_panel_backlight_quirks gpu_sched drm_buddy drm_ttm_helper ttm drm_exec drm_suballoc_helper drm_display_helper cec rc_core i2c_algo_bit video xt_conntrack xt_MASQUERADE bridge stp llc xt_set ip_set nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype nft_compat nf_tables nfnetlink xfrm_user xfrm_algo snd_seq_dummy snd_hrtimer overlay zram 842_decompress 842_compress lz4hc_compress lz4_compress intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common skx_edac skx_edac_common nfit x86_pkg_temp_thermal intel_powerclamp snd_hda_codec_atihdmi coretemp snd_hda_codec_hdmi binfmt_misc snd_hda_intel snd_hda_codec dm_crypt apple_bce(C) nls_iso8859_1 snd_hda_core kvm_intel snd_intel_dspcfg snd_seq_midi snd_seq_midi_event snd_intel_sdw_acpi snd_rawmidi kvm snd_hwdep brcmfmac snd_seq snd_pcm brcmutil irqbypass snd_seq_device applesmc rapl cdc_acm snd_timer spi_nor intel_cstate cfg80211 snd mtd apple_mfi_fastcharge joydev mei_me`
`[38725.138208] amdgpu_device_fini_sw+0x51a/0x700 [amdgpu]`
`[38725.140174] amdgpu_driver_release_kms+0x16/0x40 [amdgpu]`
`[38725.142190] ? __pfx_amdgpu_init+0x10/0x10 [amdgpu]`
`[38725.144060] amdgpu_init+0x69/0xff0 [amdgpu]`
`[38725.146198] amdgpu 0000:15:00.0: enabling device (0000 -> 0003)`
`[38725.146409] amdgpu 0000:15:00.0: amdgpu: initializing kernel modesetting (VEGA10 0x1002:0x6864 0x1002:0x0C00 0x05).`
`[38725.146427] amdgpu 0000:15:00.0: amdgpu: Fatal error during GPU init`
`[38725.146515] amdgpu 0000:15:00.0: probe with driver amdgpu failed with error -12`
It doesn't even work if it's the only card. It is never detected in Vulkan either. I have tried AMDVLK and RADV. I have tried it on x16 lanes and x8. I have two cards. Both have the same error. I did find one mention of there being a timing issue where something about the card doesn't activate right away so adding a timeout to the linux kernel would fix it. That also did nothing. Has anyone else experienced this? | 2026-01-06T08:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q5d12j/radeon_pro_v340_drivers/ | dionysio211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5d12j | false | null | t3_1q5d12j | /r/LocalLLaMA/comments/1q5d12j/radeon_pro_v340_drivers/ | false | false | self | 1 | null |
It was Ilya who "closed" OpenAI | 2 | 2026-01-06T08:21:25 | gherunchec | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5d0fg | false | null | t3_1q5d0fg | /r/LocalLLaMA/comments/1q5d0fg/it_was_ilya_who_closed_openai/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'Y0vUoweiVjHu43HQNiCZmOtfLyKNg9TQr0ZzP1UbzHM', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/9cql9asyuobg1.png?width=108&crop=smart&auto=webp&s=d82d98c50a6bb9177bfbb28d5545a16a58210ca5', 'width': 108}, {'height': 256, 'url': 'https://preview.redd.it/9cql9asyuobg1.png?width=216&crop=smart&auto=webp&s=088283e634cca861c798edebc9eaaf669d9270a6', 'width': 216}, {'height': 379, 'url': 'https://preview.redd.it/9cql9asyuobg1.png?width=320&crop=smart&auto=webp&s=ecbbaa47148bbc993f41b86637d868af5ddced8a', 'width': 320}, {'height': 759, 'url': 'https://preview.redd.it/9cql9asyuobg1.png?width=640&crop=smart&auto=webp&s=a1009cd06c562b9eed17267c5eaf3491a978ee61', 'width': 640}], 'source': {'height': 768, 'url': 'https://preview.redd.it/9cql9asyuobg1.png?auto=webp&s=8cc65715a4d2cada4c3f352d216c998f945f0272', 'width': 647}, 'variants': {}}]} | |||
LTX-2 Open Sourced | 72 | 2026-01-06T07:51:17 | https://huggingface.co/Lightricks/LTX-2 | umarmnaq | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q5cj4e | false | null | t3_1q5cj4e | /r/LocalLLaMA/comments/1q5cj4e/ltx2_open_sourced/ | false | false | default | 72 | {'enabled': False, 'images': [{'id': 'x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=108&crop=smart&auto=webp&s=368cdbdf3787df8d805ea04fcf436d6de359f6d0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=216&crop=smart&auto=webp&s=ee44de2a1a7310f195550944a1b59ed50f7034db', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=320&crop=smart&auto=webp&s=3a1a6b238f8182f56fa057194b200c679c4045d0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=640&crop=smart&auto=webp&s=01cbac9642a85763962d850066ed2fb90c751b9d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=960&crop=smart&auto=webp&s=7af3941b38181f1a1edbd6b66474c818c51f2d45', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?width=1080&crop=smart&auto=webp&s=265eef3bd6633c715d995b227a6b4a57f48ac0e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x6ZxUm50mGHUPTJN69bpngmKLxle6Qp5s56OxgDYluY.png?auto=webp&s=61d5b06a5999bfb3d6a5fe5ad9ef30b91381eea3', 'width': 1200}, 'variants': {}}]} | |
LLM model scandal in South Korea | 5 | Sorry for my bad English.
Following the recent controversy surrounding Upstage's Solar-Open model, NAVER, a leading Korean tech company, is now facing allegations that its Hyperclover-OMNI 8B model adopted QWEN's vision & audio encoders without attribution.
Many users in Korea believe this national competition was conducted on the premise of "starting from scratch." While there is no dispute that NAVER independently developed the model's text-generation component, it will likely be hard for the company to avoid criticism, since it positioned the OMNI model as a distinctive feature compared to other companies.
[https://m.news.nate.com/view/20260105n29281](https://m.news.nate.com/view/20260105n29281) (Korean news link)
| 2026-01-06T07:41:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q5cdfu/llm_model_scandle_in_south_korea/ | Desperate-Sir-5088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5cdfu | false | null | t3_1q5cdfu | /r/LocalLLaMA/comments/1q5cdfu/llm_model_scandle_in_south_korea/ | false | false | self | 5 | null |
NVIDIA released a datacenter CFD dataset on Hugging Face | 15 | GitHub: [https://github.com/NVIDIA/physicsnemo](https://github.com/NVIDIA/physicsnemo)
Hugging face: [https://huggingface.co/datasets/nvidia/PhysicsNeMo-Datacenter-CFD](https://huggingface.co/datasets/nvidia/PhysicsNeMo-Datacenter-CFD)
NVIDIA PhysicsNeMo is an open-source deep-learning framework for building, training, fine-tuning, and inferring Physics AI models using state-of-the-art SciML methods for AI4Science and engineering.
PhysicsNeMo provides Python modules to compose scalable and optimized training and inference pipelines to explore, develop, validate, and deploy AI models that combine physics knowledge with data, enabling real-time predictions.
Whether you are exploring the use of neural operators, GNNs, or transformers, or are interested in Physics-Informed Neural Networks or a hybrid approach in between, PhysicsNeMo provides you with an optimized stack that will enable you to train your models at scale | 2026-01-06T07:26:33 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q5c4nn | false | null | t3_1q5c4nn | /r/LocalLLaMA/comments/1q5c4nn/nvidia_released_a_datacenter_cfd_dataset_on/ | false | false | 15 | {'enabled': True, 'images': [{'id': '6LgFQYR51Un6eRlPtkSdAWkNAFKnO2P4yAdiAG8JAyg', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=108&crop=smart&auto=webp&s=79840fa3abc1554ff8b4b5cd19c6b9b2935f711f', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=216&crop=smart&auto=webp&s=dd2910682c9e2096bf48688ac105b69d3ceb2670', 'width': 216}, {'height': 117, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=320&crop=smart&auto=webp&s=22eae017d416213e6de64207b41c3666b3bf75fd', 'width': 320}, {'height': 235, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=640&crop=smart&auto=webp&s=e9d50faa577075d3e542aee863c349a75eefc851', 'width': 640}, {'height': 352, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=960&crop=smart&auto=webp&s=6f81422eb4506de16e89eca6b69e0879c864ca3d', 'width': 960}, {'height': 396, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?width=1080&crop=smart&auto=webp&s=1cf56ae67ec9aac85bf5b8f45f1560ea8a4cfbda', 'width': 1080}], 'source': {'height': 398, 'url': 'https://preview.redd.it/ijjgb3g1lobg1.jpeg?auto=webp&s=f8c93fc9cc77ccf5b0e97a272448aa5f71dbeb02', 'width': 1083}, 'variants': {}}]} | ||
Semantic geometry for visual grounding | 2 | I've been doing quite a bit of web automation with LLMs, and one of the biggest headaches is vision LLMs hallucinating web UI element coordinates, which forces lots of retries.
To solve the problem and make it cheaper, I ended up building SentienceAPI, a small SDK + service that exposes a semantic, deterministic action space directly from the browser (no screenshots / vision). I also built a debugging utility for step-by-step replay and diffing of agent runs.
The SDK uses a Chrome extension to prune away more than 90% of the noise in the HTML and CSS, followed by refinement and ONNX reranking, which leaves a small set of candidate elements for the LLM to reason over when picking the target UI element.
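The reranking step is conceptually just scoring each pruned element against the task instruction and keeping the top few. A rough stand-in sketch (this uses a generic cross-encoder from sentence-transformers purely as an illustration, not SentienceAPI's actual ONNX reranker; the element descriptions are made up):

```python
# Illustrative only: score pruned DOM elements against the agent's instruction
# and keep the top-k as the LLM's action space.
from sentence_transformers import CrossEncoder

instruction = "Click the button that submits the signup form"
elements = [
    "button#signup-submit 'Create account'",
    "a.nav-link 'Pricing'",
    "input#email placeholder='Email address'",
    "button#newsletter 'Subscribe'",
]

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(instruction, el) for el in elements])

top_k = 2
ranked = sorted(zip(elements, scores), key=lambda x: x[1], reverse=True)[:top_k]
for el, score in ranked:
    print(f"{score:.3f}  {el}")   # only these candidates are shown to the LLM
```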
If you’re currently:
* fighting flaky clicks / scrolls
* relying on screenshots or selectors
I’d love for you to try it and tell me what breaks or feels wrong.
Docs + playground: https://www.sentienceapi.com/
I can set up access for you to try out the SDK with gateway reranking to reduce the action space your LLM agent has to reason over and make decisions with.
Happy to answer technical questions async — no pitch, just feedback.
| 2026-01-06T07:01:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q5bpuk/semantic_geometry_for_visual_grounding/ | Aggressive_Bed7113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5bpuk | false | null | t3_1q5bpuk | /r/LocalLLaMA/comments/1q5bpuk/semantic_geometry_for_visual_grounding/ | false | false | self | 2 | null |
MedAIBase/AntAngelMed · Hugging Face | 23 | Ant Health and others have just open‑sourced a medical language model: AntAngelMed.
It’s based on a Ling‑flash‑2.0 MoE architecture, with 100B total parameters and 6.1B activated parameters. On H20 it achieves inference speeds over 200 tokens/s and supports a 128K context window.
On HealthBench, the open‑source medical evaluation benchmark released by OpenAI, it ranks first among open‑source models.
https://preview.redd.it/kniszybpeobg1.jpg?width=1120&format=pjpg&auto=webp&s=28c15613eb7888735cd5e07ae5e7d9efa8249b3b
https://preview.redd.it/ssg175lqeobg1.jpg?width=1380&format=pjpg&auto=webp&s=30e74dcd0d97436060426d980c95f0a3a13514f3
https://preview.redd.it/ki5525sreobg1.jpg?width=608&format=pjpg&auto=webp&s=115f1d3043219a765408bd5727f7f02bb6bb0b3e
[https://huggingface.co/MedAIBase/AntAngelMed](https://huggingface.co/MedAIBase/AntAngelMed)
[https://github.com/MedAIBase/AntAngelMed/tree/main](https://github.com/MedAIBase/AntAngelMed/tree/main)
[https://huggingface.co/MedAIBase/AntAngelMed-FP8](https://huggingface.co/MedAIBase/AntAngelMed-FP8)
| 2026-01-06T06:50:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q5bj24/medaibaseantangelmed_hugging_face/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5bj24 | false | null | t3_1q5bj24 | /r/LocalLLaMA/comments/1q5bj24/medaibaseantangelmed_hugging_face/ | false | false | 23 | null | |
Are recent models really that bad? | 0 | For the last few months I have mostly been using Claude for all my coding tasks (mostly Opus), and after Anthropic recently messed up with the new limits I have been forced to look at other models as well, and I am genuinely shocked at how terrible they all are in comparison. Am I missing something, or does anyone else feel the same?
Compared with Opus, most other models I tried lack either reasoning intelligence, consistency, prompt adherence, reliability, or just overall usefulness.
In the coding context, these were all tested through OpenCode.
**Gemini 3 Pro**
Had expected more after hearing so much hype online. It sucks at prompt adherence and staying on task, and is overall very unreliable with tool calling. It even struggles to end a conversation (and will just spam until the token limit is hit).
**OpenAI Codex 5.2**
What the hell? I thought this was supposed to be a frontier model. I have to say, it's one of the best at initial prompt adherence (sticking to a role) and decently intelligent, but overall reliability is absolutely terrible. It will often stumble over itself, and sometimes even struggles with tools.
**Grok Code Fast**
Not as smart as Opus, and it hallucinates from time to time, but it honestly really surprised me. It's offered for free here and there, and it's excellent for reviewing large numbers of files or doing smaller, specific tasks. Don't let it be the brains behind your next big project, but it feels like a very competent junior engineer (where Opus is a senior).
**GLM 4.7**
Pretty decent, though it has been hit and miss. Not as smart as Opus, but for an open-source model it's pretty good at bite-sized tasks. Again, though, it lacks the intelligence and reliability of Opus.
---
Overall, not a single model comes close to the reliability and intelligence of Opus. I have to say Opus is not perfect (it could do with better prompt adherence and better creativity, design-wise), but it is overall rock solid. Using anything else feels like a (strong) regression.
I'm curious what others' experiences are. Similar to this, or should I try a specific model?
| 2026-01-06T06:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q5bhtn/are_recent_models_really_that_bad/ | davincible | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5bhtn | false | null | t3_1q5bhtn | /r/LocalLLaMA/comments/1q5bhtn/are_recent_models_really_that_bad/ | false | false | self | 0 | null |
Quick Start Guide For LTX-2 In ComfyUI on NVIDIA GPUs | 9 | Lightricks today released LTX-2, a new local AI video creation model that stands toe-to-toe with leading cloud-based models while generating up to 20 seconds of 4K video with impressive visual fidelity.
It's optimized for NVIDIA GPUs in ComfyUI, and we've put together a quick start guide for getting up and running with the new model.
[https://www.nvidia.com/en-us/geforce/news/rtx-ai-video-generation-guide/](https://www.nvidia.com/en-us/geforce/news/rtx-ai-video-generation-guide/)
The guide includes info on recommended settings, optimizing VRAM usage, and how to get the best quality from your outputs.
The LTX-2 guide and release are part of a number of announcements we shared today from CES 2026, including how LTX-2 will be part of an upcoming video generation workflow coming next month. Other news includes continued optimizations for ComfyUI, inference performance improvements in llama.cpp and Ollama, new AI features in Nexa.ai's Hyperlink, updates and new playbooks for DGX Spark, and more.
You can read about all of these updates in our [blog](https://blogs.nvidia.com/blog/rtx-ai-garage-ces-2026-open-models-video-generation/). Thanks! | 2026-01-06T06:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/1q5b8s8/quick_start_guide_for_ltx2_in_comfyui_on_nvidia/ | NV_Cory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5b8s8 | false | null | t3_1q5b8s8 | /r/LocalLLaMA/comments/1q5b8s8/quick_start_guide_for_ltx2_in_comfyui_on_nvidia/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=108&crop=smart&auto=webp&s=497a65516841f7b87d1c9a929561a2c2e6a8d9c1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=216&crop=smart&auto=webp&s=3c1866675773d413666ff77b10e901c4c2597311', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=320&crop=smart&auto=webp&s=45aa80dfbeb0a1e5e43a619ec487869549fc66d1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=640&crop=smart&auto=webp&s=21bbb18d87c699f5ae466412ea20d95f7bcac0c3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=960&crop=smart&auto=webp&s=74fe7525d70989529b39076481e079b10b7c1156', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?width=1080&crop=smart&auto=webp&s=71140997d06a5b14ac292e28f784c232953b5006', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WCJhi0F7MUCrBFdh_Oe0PZZphtUZhMpoQK3HYJfAbvQ.jpeg?auto=webp&s=a06e1ee85eb65e4430e05183be5ea8747d1d923f', 'width': 1200}, 'variants': {}}]} |
WebGPU llama.cpp running in browser with Unity to drive NPC interactions (demo) | 10 | I've been experimenting with in-browser local inference via WebGPU and wired it into a tiny Unity game where the LLM acts as the NPC/agent's "brain" to drive decisions at interactive rates.
Demo: [https://noumenalabs.itch.io/office-sim](https://noumenalabs.itch.io/office-sim)
Tech Stack:
* Unity Webgl
* Modified llama.cpp WebGPU backend
* Emscripten toolchain
Most of the llama.cpp modifications were in the WGSL kernels, to reduce reliance on fp16 and to support more ops for forward inference. There were also a lot of unexpected, nuanced issues that I came across while building out the project. Integration with Unity was a huge pain due to Emscripten toolchain mismatches / configurations. I ended up bootstrapping a self-contained WASM module from Unity's WASM runtime, handling data marshaling between the two sandboxed environments.
One observation I made while working on this is that even though the WebGPU build is better than CPU by about 3x-10x depending on hardware, it is still about 10x less performant than running directly on bare-metal hardware via CUDA or similar. Some of this I think is in the WGSL kernels, which can definitely be optimized to help close the gap, but I am curious to find out where the limits actually lie and how far WebGPU performance can be pushed.
Some questions / discussion:
1. What benchmarks would be interesting to report here? tok/s, first-token latency? Would a CPU vs. CUDA vs. WebGPU comparison be useful? (A simple harness for these two metrics is sketched after this list.)
2. Tips on stability/perf or non-obvious gotchas when working with WebGPU or llama.cpp
3. Feedback on demo and/or thoughts on local in-browser LLM inference.
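A minimal harness for the two metrics in point 1, assuming an OpenAI-compatible streaming endpoint such as the one llama.cpp's llama-server exposes; the URL and model name are placeholders:

```python
# Measure time-to-first-token and decode throughput against a local
# OpenAI-compatible endpoint (e.g. llama-server). URL/model are placeholders.
import json
import time
import requests

URL = "http://127.0.0.1:8080/v1/chat/completions"   # hypothetical local server
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Write a short greeting for an office NPC."}],
    "max_tokens": 128,
    "stream": True,
}

start = time.perf_counter()
first_token_at = None
tokens = 0

with requests.post(URL, json=payload, stream=True, timeout=120) as resp:
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            tokens += 1                      # roughly one streamed chunk per token
            if first_token_at is None:
                first_token_at = time.perf_counter()

end = time.perf_counter()
print(f"time to first token: {first_token_at - start:.3f}s")
print(f"throughput: {tokens / (end - first_token_at):.1f} tok/s over {tokens} chunks")
```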
| 2026-01-06T06:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1q5b7kf/webgpu_llamacpp_running_in_browser_with_unity_to/ | lordhiggsboson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5b7kf | false | null | t3_1q5b7kf | /r/LocalLLaMA/comments/1q5b7kf/webgpu_llamacpp_running_in_browser_with_unity_to/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=108&crop=smart&auto=webp&s=ed6c6eb7dd7dc75c1095c7c8470978cdadba67a7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=216&crop=smart&auto=webp&s=3223cf06c7d5f62795892d0b748ae7c7892af521', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=320&crop=smart&auto=webp&s=25515ab9dad51dd570bea3b2b18036de2f15aa9f', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=640&crop=smart&auto=webp&s=275c4cf8fee8ffab4dcab7805b6e4841328a4107', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=960&crop=smart&auto=webp&s=d99e98eafaa6ea7d43589b7e6200a322e2d13cee', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?width=1080&crop=smart&auto=webp&s=3cd4d39a52f07ecd27280a64683674691e05dd7a', 'width': 1080}], 'source': {'height': 903, 'url': 'https://external-preview.redd.it/9uQ0LQ5ONnj4BW054d-0N-A-Myj-301Ywp_ZrccICG8.png?auto=webp&s=7fc25781b8f64655e255518934ee5e00bbea9082', 'width': 1438}, 'variants': {}}]} |
LoongFlow: Open Source Implementation of Evolutionary Agent Framework | 1 | [removed] | 2026-01-06T06:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q5aqkb/loongflow_open_source_implementation_of/ | Sea_Individual2470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q5aqkb | false | null | t3_1q5aqkb | /r/LocalLLaMA/comments/1q5aqkb/loongflow_open_source_implementation_of/ | false | false | self | 1 | null |