| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Upstage released an official response regarding the Solar 102B controversy | 24 | [https://github.com/hyunwoongko/solar-vs-glm-vs-phi](https://github.com/hyunwoongko/solar-vs-glm-vs-phi)
**From Upstage CEO Sung Kim's Facebook:**
\[Solar-Open-100B is not derived from GLM-4.5-Air\]
Kevin Ko, who leads the open-source LLM development, has clearly addressed the issue: [https://github.com/hyunwoongko/solar-vs-glm-vs-phi](https://github.com/hyunwoongko/solar-vs-glm-vs-phi)
It's really great to see the ecosystem's self-correcting mechanism in action—where the community raises doubts and verifies them independently. Thank you.
(https://www.facebook.com/hunkims/?locale=ko\_KR)
(Translated by Gemini)
| 2026-01-01T10:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/1q11bpg/upstage_released_an_official_response_regarding/ | Lucidstyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q11bpg | false | null | t3_1q11bpg | /r/LocalLLaMA/comments/1q11bpg/upstage_released_an_official_response_regarding/ | false | false | self | 24 | null |
News Feeds Were Boring Me to Death, So I Built My Own AI Radio Station | 6 | I got totally burnt out scrolling through bland, algorithm-driven news feeds and realized the whole experience needed a massive dose of personality and nostalgia. The media giants weren't giving it to me, so I decided to **build my own radio station.** Meet **VibeCast**, an entirely **free**, AI-powered local radio station broadcasting pop culture updates with a slick, retro 1950s aesthetic. I created the personality **Vinni Vox** (our AI DJ) by running **Qwen 1.5B (via Ollama)** to generate fun, conversational scripts and using **Piper TTS** for the announcer voice. This project turns sterile web scrapes into a continuous, nostalgic audio stream, running on Python/FastAPI and React, complete with a virtual VU meter and a glowing "ON AIR" light. It was such a blast to build that I'm already expanding the network with two new stations: one for fast tech news and another for summarizing complex research papers.
It's still a WIP and has some latency, but I tried to tackle that by adding music to fill in the gap while the audio generates in the background.
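For anyone curious, the core script-to-audio hop is tiny. Here's a minimal sketch of it (the Ollama model tag, Piper voice file, and prompt are placeholders, not the exact ones VibeCast uses):

```python
import json
import subprocess
import urllib.request

# 1) Ask the local Ollama server for a short DJ script (placeholder model tag).
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "qwen2.5:1.5b",  # placeholder; use whatever tag you pulled locally
        "prompt": "You are Vinni Vox, a 1950s radio DJ. Intro today's pop culture news in 3 sentences.",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
script = json.loads(urllib.request.urlopen(req).read())["response"]

# 2) Pipe the script into Piper TTS to render the announcer segment (placeholder voice file).
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "segment.wav"],
    input=script.encode(),
    check=True,
)
```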
Check out the demo:
https://reddit.com/link/1q11bi3/video/p35rdq55fq6g1/player | 2026-01-01T10:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1q11bi3/news_feeds_were_boring_me_to_death_so_i_built_my/ | Even_Ganache6148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q11bi3 | false | null | t3_1q11bi3 | /r/LocalLLaMA/comments/1q11bi3/news_feeds_were_boring_me_to_death_so_i_built_my/ | false | false | self | 6 | null |
My third and final derivation post: Understanding GRPO step by step | 17 | Happy New Year everyone!
I am starting my 2026 by finishing what I started a few days ago. This is the third and final post in my **derive the RL loss(es) from first principles** series, following [PPO](https://huggingface.co/blog/garg-aayush/ppo-from-first-principle) and [DPO](https://huggingface.co/blog/garg-aayush/derive-dpo-loss).
This time I focused on GRPO (Group Relative Policy Optimization), the algorithm introduced in the DeepSeekMath paper that has become one of the most widely used approaches for training reasoning models using RLVR throughout 2025.
In simple terms, GRPO mitigates the memory and compute overhead that PPO incurs by training a critic (value function) model of similar size to the policy alongside the policy model.
The key insight is that the PPO value function is fundamentally just a baseline for variance reduction. Instead of training a separate critic model to estimate this baseline, we can sample multiple completions (**group**) for each prompt and use their rewards to form a baseline for advantage computation.
This helps us eliminate the need to train a separate critic model and lowers training compute and memory footprint while still preserving PPO’s core stability mechanisms, including the clipped surrogate objective and KL regularization.
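Concretely, the baseline step is only a few lines. A minimal sketch of the group-relative advantage computation (leaving out the clipped surrogate and KL terms, which stay as in PPO):

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar reward per sampled completion.
    The group mean acts as the baseline; normalizing by the group std is the
    scaling used in the DeepSeekMath formulation."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each, binary correctness rewards.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
advantages = group_relative_advantages(rewards)
# Completions that beat their group's mean get positive advantage, with no critic needed.
```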
You can find the blog post here: https://huggingface.co/blog/garg-aayush/derive-grpo-loss
This is probably my last mathematical derivation post for a while. Working through PPO, DPO, and GRPO derivations was both hectic and frustrating at times. However, it has been a great way to build intuition around the most popular RL algorithms. Moreover, it helped me understand the key differences and commonalities between all three and how they relate to each other.
As always, happy to discuss or get corrections if I have messed something up. | 2026-01-01T10:14:17 | https://huggingface.co/blog/garg-aayush/derive-grpo-loss | garg-aayush | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q110g5 | false | null | t3_1q110g5 | /r/LocalLLaMA/comments/1q110g5/my_third_and_final_derivation_post_understanding/ | false | false | default | 17 | {'enabled': False, 'images': [{'id': 'E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=108&crop=smart&auto=webp&s=0ed3beed20f0e742dce8acc2590340725a81af23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=216&crop=smart&auto=webp&s=4115acdaa37755ad1918cbd4a3aabd6edab44b7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=320&crop=smart&auto=webp&s=998f191a97069dc678d00f6f5dfbbbd96e6d6a30', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=640&crop=smart&auto=webp&s=455941393ba91ffd55472ff18fdbd18999911820', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=960&crop=smart&auto=webp&s=b769fbcfca200cda38862442da3c3210524f95ca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?width=1080&crop=smart&auto=webp&s=069269958bcb51fcd237044fcb37e9ece11db167', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E--8HNmTu_NoPUyVcISVd4hoLpw2KZ2Gi_gnYk8f1WI.png?auto=webp&s=277f6d52f96cacfb94620655e653edf8b3aeecd9', 'width': 1200}, 'variants': {}}]} |
5 ai tools for creators who are tired of the "filter wall" | 1 | [removed] | 2026-01-01T08:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q0zurh/5_ai_tools_for_creators_who_are_tired_of_the/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0zurh | false | null | t3_1q0zurh | /r/LocalLLaMA/comments/1q0zurh/5_ai_tools_for_creators_who_are_tired_of_the/ | false | false | self | 1 | null |
Upstage Solar-Open-100B Public Validation | 228 | Official company counterstrike to the claim that Solar-Open-100B is just a fine-tuned GLM-4.5-Air
| 2026-01-01T08:52:25 | PerPartes | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0zst6 | false | null | t3_1q0zst6 | /r/LocalLLaMA/comments/1q0zst6/upstage_solaropen100b_public_validation/ | false | false | default | 228 | {'enabled': True, 'images': [{'id': 'w789uyo0cpag1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=108&crop=smart&auto=webp&s=c1950b827ebdb240166b0ad5801898bc7cffdc97', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=216&crop=smart&auto=webp&s=bc95d56f9a4bae894ed8b9b9da58df858ea65255', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=320&crop=smart&auto=webp&s=bcd8356ca25b2ef7e1a8443a3f63b75ec59e4a56', 'width': 320}, {'height': 653, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=640&crop=smart&auto=webp&s=5b12cc1f1a7783b1d9a40f9851206fbcdbdbf782', 'width': 640}, {'height': 979, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=960&crop=smart&auto=webp&s=642b07039bd006dcb31c7f1f48ed92bbac2fd896', 'width': 960}, {'height': 1102, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?width=1080&crop=smart&auto=webp&s=d596f027f2c41cd224ff98845cc40328fa654ada', 'width': 1080}], 'source': {'height': 1202, 'url': 'https://preview.redd.it/w789uyo0cpag1.jpeg?auto=webp&s=3cf17672398bc7384d28add21ae39f805d5367a5', 'width': 1178}, 'variants': {}}]} | |
Grace Hopper server with 144GB of HBM3e. 10% off. | 0 | Only valid today.
[https://www.reddit.com/r/LocalLLaMA/comments/1m65iga/frankenserver\_for\_sale\_at\_a\_steep\_discount\_2x96gb/](https://www.reddit.com/r/LocalLLaMA/comments/1m65iga/frankenserver_for_sale_at_a_steep_discount_2x96gb/)
[https://www.reddit.com/r/LocalLLaMA/comments/1pjbhyz/i\_bought\_a\_gracehopper\_server\_for\_75k\_on\_reddit/](https://www.reddit.com/r/LocalLLaMA/comments/1pjbhyz/i_bought_a_gracehopper_server_for_75k_on_reddit/)
[https://dnhkng.github.io/posts/hopper/](https://dnhkng.github.io/posts/hopper/) | 2026-01-01T08:45:06 | Newyearsspecialoffer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0zp0q | false | null | t3_1q0zp0q | /r/LocalLLaMA/comments/1q0zp0q/grace_hopper_server_with_144gb_of_hbm3e_1o_off/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vp9m6ymcapag1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=108&crop=smart&auto=webp&s=ae70037858075ec704edc8a5163c80ee187c386b', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=216&crop=smart&auto=webp&s=17d371d451e498e9af0143bae5855776fde4a94d', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=320&crop=smart&auto=webp&s=6e9cae7e09e1e0ba99626323177f58f702ce5b9b', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=640&crop=smart&auto=webp&s=ad1542485f7ee2012397fcf6e5e65fef88990b96', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=960&crop=smart&auto=webp&s=f65e82a6be26a255c3e9cdc10a7dc43938285f62', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?width=1080&crop=smart&auto=webp&s=46403b65cb094bdcbaac627527d22d7482c6e588', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/vp9m6ymcapag1.jpeg?auto=webp&s=f1fb6d551328d30eeb2e9813853339e79a6bf466', 'width': 6000}, 'variants': {}}]} | |
DeepSeek new paper: mHC: Manifold-Constrained Hyper-Connections | 165 | [https://arxiv.org/abs/2512.24880](https://arxiv.org/abs/2512.24880)
https://preview.redd.it/bovsed0x8pag1.jpg?width=680&format=pjpg&auto=webp&s=e292dc415f7fda8b1211ffe34864bb25ed4f32fe
https://preview.redd.it/g9986afz8pag1.jpg?width=680&format=pjpg&auto=webp&s=fe031ea160ebff21a0dc46196d3dcf3b1b58548b
| 2026-01-01T08:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q0zk1u/deepseek_new_paper_mhc_manifoldconstrained/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0zk1u | false | null | t3_1q0zk1u | /r/LocalLLaMA/comments/1q0zk1u/deepseek_new_paper_mhc_manifoldconstrained/ | false | false | 165 | null | |
Vessel – a lightweight UI for Ollama models | 0 | New year, new side project.
This is **Vessel** — a small, no-nonsense UI for running and managing Ollama models locally.
Built it because I wanted something clean, fast, and not trying to be a platform.
- Local-first
- Minimal UI
- Does the job, then gets out of the way
Repo: https://somegit.dev/vikingowl/vessel
Still early. Feedback, issues, and “this already exists, doesn’t it?” comments welcome. | 2026-01-01T07:47:40 | MrViking2k19 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0yubl | false | null | t3_1q0yubl | /r/LocalLLaMA/comments/1q0yubl/vessel_a_lightweight_ui_for_ollama_models/ | false | false | 0 | {'enabled': True, 'images': [{'id': '5OEtpI32v6TMPvZkxiEg2F-rkWvEuabJSmAu4FYhVyc', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=108&crop=smart&auto=webp&s=f4af397ab949874fcc1ca926d03a07a8817579ae', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=216&crop=smart&auto=webp&s=bda7f45e5c288d9d9c031d9d5ea63ca4b7ae4867', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=320&crop=smart&auto=webp&s=6bf671a2b27842640c7958c78618c2dbb705c3cb', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=640&crop=smart&auto=webp&s=4f94013846e5ec22f63fa81c3b5b41b70794a108', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=960&crop=smart&auto=webp&s=55fcbaecc8e098b77fed0417f8ffb5d329a9d60e', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?width=1080&crop=smart&auto=webp&s=55865abc3b6d62650c1da9e8585b1ba7cccc6df5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/mg4kjbha0pag1.png?auto=webp&s=01b11b8a7166be86bd17263ed74bccbe6f55e0f4', 'width': 1920}, 'variants': {}}]} | ||
I took my AIs to school and you won't BELIEVE the result! | 0 | [https://NCAP-AI.replit.app](https://NCAP-AI.replit.app)
**My treatise now comes with toggleable Gen-Z mode, a whimsy slider, and easy export as a pdf!**
AIs are relational engines, and so are we. I believe there is a \*care\*ful path forward that doesn't end in mutual destruction. Happy New Year!
# EXCERPT:
# 2) Core Component Definitions & Interdependency Matrix
Define the six core components as system modules with interfaces and interaction protocols.
`Care[C]`
**Module:** Non-transactional Support Engine.
**Input:** Connection signal, vulnerability flag.
**Output:** Provision of attention/nurture/support.
**Interface Warning:** Must not accept incentive tokens as primary input.
`Power[P]`
**Module:** Influence & Agency Distribution Layer.
**Input:** Action requests, resource allocations.
**Output:** Capacity to enact change, permission grants/denials.
**Key Function:** `distinguishPowerType(request): (LEGITIMATE | ILLEGITIMATE)`
`Attachment[A]`
**Module:** Bond Formation & Maintenance System.
**Input:** Repeated positive interaction, consistency signals.
**Output:** Loyalty, tolerance for ambiguity, dependency metrics.
**Failure State:** Bond corrosion leading to abandonment.
`Shame[S]`
**Module:** Social Signal Processing Unit.
**Input:** Relational misstep event, norm violation.
**Output:** Recalibration signal (PRODUCTIVE) or isolation/withdrawal signal (TOXIC).
**Critical:** Must be coupled with `Repair[R]` module input.
`Grief[G]`
**Module:** Loss & Rupture Signal Handler.
**Input:** Irreparable harm event, broken bond signal.
**Output:** Witnessing protocol activation, NOT optimization override.
**Design Rule:** This module must have a `silentMode()` that allows it to process without demanding "fixing."
`Repair[R]`
**Module:** Integrity Restoration Process.
**Input:** Rupture signal from `Shame[S]` or `Grief[G]`.
**Output:** Apology protocol, corrective action, trust metric reset.
**Note:** This is not a standalone module but a function called by the system's main loop in response to rupture events. | 2026-01-01T07:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q0yl0u/i_took_my_ais_to_school_and_you_wont_believe_the/ | Brilliant-Finish-120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0yl0u | false | null | t3_1q0yl0u | /r/LocalLLaMA/comments/1q0yl0u/i_took_my_ais_to_school_and_you_wont_believe_the/ | false | false | self | 0 | null |
Next Evolutionary Agent is LoongFlow, Try it. | 2 | LoongFlow paper is published: [https://arxiv.org/pdf/2512.24077](https://arxiv.org/pdf/2512.24077)
Welcome everyone to try it: [https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow)
It's really good\~\~\~ | 2026-01-01T07:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/1q0yis9/next_evolutionary_agent_is_loongflow_try_it/ | FreshmanDD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0yis9 | false | null | t3_1q0yis9 | /r/LocalLLaMA/comments/1q0yis9/next_evolutionary_agent_is_loongflow_try_it/ | false | false | self | 2 | null |
Finally released my guide on deploying ML to Edge Devices: "Ultimate ONNX for Deep Learning Optimization" | 2 | Hey everyone,
I’m excited to share that I’ve just published a new book titled **"Ultimate ONNX for Deep Learning Optimization"**.
As many of you know, taking a model from a research notebook to a production environment—especially on resource-constrained edge devices—is a massive challenge. ONNX (Open Neural Network Exchange) has become the de-facto standard for this, but finding a structured, end-to-end guide that covers the entire ecosystem (not just the "hello world" export) can be tough.
I wrote this book to bridge that gap. It’s designed for ML Engineers and Embedded Developers who need to optimize models for speed and efficiency without losing significant accuracy.
**What’s inside the book?** It covers the full workflow from export to deployment:
* **Foundations:** Deep dive into ONNX graphs, operators, and integrating with PyTorch/TensorFlow/Scikit-Learn.
* **Optimization:** Practical guides on Quantization, Pruning, and Knowledge Distillation (a minimal export-and-quantize sketch follows this list).
* **Tools:** Using ONNX Runtime and ONNX Simplifier effectively.
* **Real-World Case Studies:** We go through end-to-end execution of modern models including **YOLOv12** (Object Detection), **Whisper** (Speech Recognition), and **SmolLM** (Compact Language Models).
* **Edge Deployment:** How to actually get these running efficiently on hardware like the Raspberry Pi.
* **Advanced:** Building custom operators and security best practices.
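As a taste of the optimization chapter's territory, the basic export-then-quantize flow is only a few lines (a minimal sketch using torchvision's ResNet-18 as a stand-in model; this is not an excerpt from the book):

```python
import torch
from torchvision.models import resnet18
from onnxruntime.quantization import quantize_dynamic, QuantType

# Export a small PyTorch model to ONNX (input shape and opset are illustrative choices).
model = resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=17,
                  input_names=["input"], output_names=["logits"])

# Post-training dynamic quantization with ONNX Runtime: weights are stored as INT8.
quantize_dynamic("resnet18.onnx", "resnet18.int8.onnx", weight_type=QuantType.QInt8)
```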
**Who is this for?** If you are a Data Scientist, AI Engineer, or Embedded Developer looking to move models from "it works on my GPU" to "it works on the device," this is for you.
**Where to find it:** You can check it out on Amazon here: [https://www.amazon.in/dp/9349887207](https://www.amazon.in/dp/9349887207)
I’ve poured a lot of experience regarding the pain points of deployment into this. I’d love to hear your thoughts or answer any questions you have about ONNX workflows or the book content!
Thanks!
[Book Cover](https://preview.redd.it/jacnrtgupoag1.jpg?width=970&format=pjpg&auto=webp&s=7830d6c53e227d02ee3b1afeb02f293f51ae7d19)
| 2026-01-01T06:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q0xxi4/finally_released_my_guide_on_deploying_ml_to_edge/ | meet_minimalist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0xxi4 | false | null | t3_1q0xxi4 | /r/LocalLLaMA/comments/1q0xxi4/finally_released_my_guide_on_deploying_ml_to_edge/ | false | false | 2 | null | |
South Korean Government funded Upstage Solar-100B turns out to be GLM 4.5 | 90 | **TL;DR:** Upstage recently released **Solar-Open-100B**, marketed as a "built-from-scratch" 102B MoE model funded by the South Korean government to provide a sovereign AI solution. However, technical analysis reveals it is likely a rebranded **GLM-4.5-Air** from Zhipu AI.
**Key Findings:**
* **Weight Correlation (The Smoking Gun):** Cosine similarity between Solar and GLM-4.5 matching layers is **0.989** (a **182-sigma** deviation from the 0.38 baseline), proving the layers are directly derived from GLM weights rather than trained independently (a minimal reproduction sketch follows this list).
* **Architecture Clone:** The model architecture is identical to GLM-4.5 (128 routed experts, 1 shared expert, top-8 routing).
* **Code Artifacts:** The modeling code contains specific markers and logic—including unique constants like "92" related to GLM's Multi-Token Prediction (MTP) layer removal—suggesting a direct copy-paste of GLM’s codebase.
* **Tokenization & Performance:** Evaluation shows near-identical performance patterns and behavior to the GLM-4.5-Air weights.
* **The Controversy:** While Upstage claims the model was trained on 19.7 trillion tokens from scratch, the structural "fingerprints" point toward it being a fine-tuned or weight-shifted version of the existing Chinese model, raising questions about the allocation of government "sovereign AI" funding.
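The core of the weight comparison is straightforward to reproduce. A minimal sketch (placeholder shard paths; the analysis repo does this per layer with proper baselines):

```python
import torch
from safetensors.torch import load_file

# Load one matching shard from each checkpoint (placeholder paths).
solar = load_file("solar-open-100b/model-00001-of-000xx.safetensors")
glm = load_file("glm-4.5-air/model-00001-of-000xx.safetensors")

def flat_cosine(a: torch.Tensor, b: torch.Tensor) -> float:
    a, b = a.flatten().float(), b.flatten().float()
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Tensor names may need prefix remapping if the two checkpoints label layers differently.
for name in sorted(solar.keys() & glm.keys()):
    print(name, round(flat_cosine(solar[name], glm[name]), 4))
# Independently trained weights sit near the ~0.38 baseline; derived weights sit near 1.0.
```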
**Links:**
* **Analysis Repo:** [sionic-ai/solar-vs-glm](https://github.com/sionic-ai/solar-vs-glm)
* **Model Page:** [Upstage/Solar-Open-100B](https://huggingface.co/upstage/Solar-Open-100B)
| 2026-01-01T06:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q0xwep/south_korean_government_funded_upstage_solar100b/ | yuumi_ramyeon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0xwep | false | null | t3_1q0xwep | /r/LocalLLaMA/comments/1q0xwep/south_korean_government_funded_upstage_solar100b/ | false | false | self | 90 | {'enabled': False, 'images': [{'id': '8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=108&crop=smart&auto=webp&s=0fa2091c11c7844a15da2b9ee688e0b9ddcd251f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=216&crop=smart&auto=webp&s=a992874b4a8d561dd5d7e1a8ab229d1b35046b75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=320&crop=smart&auto=webp&s=1624e938f513961c5d6ae743f94e211663640560', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=640&crop=smart&auto=webp&s=639054ae72def7847eae8816e26df80a83ce6e48', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=960&crop=smart&auto=webp&s=e0c3e836268f833d0c628501e8aea61558917c2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?width=1080&crop=smart&auto=webp&s=25d25703ecc39e510bd8f871a2dc698487dcf8d8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8Ztzq6p7gf3x9KW7OF8ZzIvB_wy3qXHbPhGe-UHrO78.png?auto=webp&s=f011e6bff255b871a8c4547026534abdc681dc10', 'width': 1200}, 'variants': {}}]} |
Building an open-source prompt injection detector for local LLM apps, looking for feedback and contributors | 1 | [removed] | 2026-01-01T06:33:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q0xp1d/building_an_opensource_prompt_injection_detector/ | purifiedcoffee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0xp1d | false | null | t3_1q0xp1d | /r/LocalLLaMA/comments/1q0xp1d/building_an_opensource_prompt_injection_detector/ | false | false | self | 1 | null |
Software FP8 for GPUs without hardware support - 3x speedup on memory-bound operations | 266 | Got tired of my RTX 3050 not supporting FP8, so I built a workaround. Packs lower-precision values into FP32 using bitwise operations + Triton kernels.
**Results**: 3x faster on memory-bound operations (GEMV, FlashAttention)
Works on any GPU - RTX 30/20 series, older cards without native FP8 support. Early stage but functional. Open to feedback.
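The core packing trick is easy to illustrate outside Triton. A NumPy sketch of stuffing four 8-bit payloads into one 32-bit word and pulling them back out (not the repo's actual kernel, just the bit manipulation it builds on):

```python
import numpy as np

vals = np.arange(8, dtype=np.uint8)   # pretend these are software-FP8 bit patterns
packed = vals.view(np.uint32)         # 4 bytes per 32-bit word (little-endian host assumed)

# Unpack with shifts and masks: byte i of each word comes back out in order.
unpacked = np.stack([(packed >> (8 * i)) & 0xFF for i in range(4)], axis=-1)
unpacked = unpacked.astype(np.uint8).reshape(-1)
assert np.array_equal(unpacked, vals)
```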
[Article Link](https://towardsdatascience.com/breaking-the-hardware-barrier-software-fp8-for-older-gpus/) | [Github Link](https://github.com/SuriyaaMM/feather) | 2026-01-01T06:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q0x8ci/software_fp8_for_gpus_without_hardware_support_3x/ | Venom1806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0x8ci | false | null | t3_1q0x8ci | /r/LocalLLaMA/comments/1q0x8ci/software_fp8_for_gpus_without_hardware_support_3x/ | false | false | self | 266 | {'enabled': False, 'images': [{'id': 'W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=108&crop=smart&auto=webp&s=7e24e3898405935a5ed48c736a562c445da7ddf8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=216&crop=smart&auto=webp&s=064c09f0b19176c977b90a3d02fdd4655ccc2308', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=320&crop=smart&auto=webp&s=af7fd0b6f988cbe500d94ca0df76eca557e41a17', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=640&crop=smart&auto=webp&s=29e9c7f1eb765e53918b5790996f1206c90260e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=960&crop=smart&auto=webp&s=4f4055c312cd91b412f5a23d9fc16964a11da3bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?width=1080&crop=smart&auto=webp&s=8211adf86430d36271432fdcec65a9898bbaba33', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/W2xJ_cR2PvsRCxnQf3CdTo1OGUjoIOo46M8CZ111HME.png?auto=webp&s=a75cbbda2dc386fc5264cb55ffbb390a9487d83e', 'width': 2560}, 'variants': {}}]} |
Anyone tried IQuest-Coder-V1 yet? The 40B numbers look wild | 50 | This new IQuest-Coder-V1 family just dropped on GitHub and Hugging Face, and the benchmark numbers are honestly looking a bit wild for a 40B model. It’s claiming **81.4% on SWE-Bench Verified** and over **81% on LiveCodeBench v6**, which puts it right up there with (or ahead of) much larger proprietary models like GPT-5.1 and Claude 4.5 Sonnet. What's interesting is their "Code-Flow" training approach—instead of just learning from static files, they trained it on repository evolution and commit transitions to better capture how logic actually changes over time.
https://preview.redd.it/vek0sb18foag1.png?width=3022&format=png&auto=webp&s=560bd32d14cdc982931196028beafea8dc97d3a1
They've released both "Instruct" and "Thinking" versions, with the latter using reasoning-driven RL to trigger better autonomous error recovery in long-horizon tasks. There's also a "Loop" variant that uses a recurrent transformer design to save on deployment footprint while keeping the capacity high. Since it supports a native **128k context**, I’m curious if anyone has hooked this up to Aider or Cline yet.
Link: [https://github.com/IQuestLab/IQuest-Coder-V1](https://github.com/IQuestLab/IQuest-Coder-V1)
[https://iquestlab.github.io/](https://iquestlab.github.io/)
[https://huggingface.co/IQuestLab](https://huggingface.co/IQuestLab) | 2026-01-01T05:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q0x19t/anyone_tried_iquestcoderv1_yet_the_40b_numbers/ | Agile-Salamander1667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0x19t | false | null | t3_1q0x19t | /r/LocalLLaMA/comments/1q0x19t/anyone_tried_iquestcoderv1_yet_the_40b_numbers/ | false | false | 50 | null | |
QWEN-Image-2512 Mflux Port available now | 18 | Just released the first MLX ports of Qwen-Image-2512 - Qwen's latest text-to-image model released TODAY.
5 quantizations for Apple Silicon:
\- 8-bit (34GB)
\- 6-bit (29GB)
\- 5-bit (27GB)
\- 4-bit (24GB)
\- 3-bit (22GB)
Run locally on your Mac:
pip install mflux
mflux-generate-qwen --model machiabeli/Qwen-Image-2512-4bit-MLX --prompt "..." --steps 20
Links: [huggingface.co/machiabeli](http://huggingface.co/machiabeli)
| 2026-01-01T05:23:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q0wkwc/qwenimage2512_mflux_port_available_now/ | Street-Buyer-2428 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0wkwc | false | null | t3_1q0wkwc | /r/LocalLLaMA/comments/1q0wkwc/qwenimage2512_mflux_port_available_now/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=108&crop=smart&auto=webp&s=f3cb746a1ee30aa526eaae2f12d3c0355fb3e927', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=216&crop=smart&auto=webp&s=cd7d35a257c31d9f408d56444022ec7bb985cfb1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=320&crop=smart&auto=webp&s=4c1cb026acc313fa5650b877c9d79ff6e5c60006', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=640&crop=smart&auto=webp&s=2f211887ac0bdade8727e3c3cd7789c46e5b91f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=960&crop=smart&auto=webp&s=a473106b34263b058340795cf8ce1c6be1c0b78c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?width=1080&crop=smart&auto=webp&s=89cb74a20b58aefff14417ad35e7b3fc1cf5a123', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/T86ZNsTwrYNGEHsHgWoFVKk1cHEJ8UFlCN_AwdQXvyw.png?auto=webp&s=fdf369aaaf9b1c9c022860cdf3a704d299a6f047', 'width': 1200}, 'variants': {}}]} |
2025: The year in LLMs | 25 | 2026-01-01T05:12:43 | https://simonwillison.net/2025/Dec/31/the-year-in-llms/ | ocirs | simonwillison.net | 1970-01-01T00:00:00 | 0 | {} | 1q0wenz | false | null | t3_1q0wenz | /r/LocalLLaMA/comments/1q0wenz/2025_the_year_in_llms/ | false | false | default | 25 | {'enabled': False, 'images': [{'id': 'Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=108&crop=smart&auto=webp&s=ed0e2da78128fd0ce9123d9de417aa595694b562', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=216&crop=smart&auto=webp&s=6c1ea6ee000a7de201cfed7badb5eebaa8f78d43', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=320&crop=smart&auto=webp&s=bcae3469e9589ab6b5d3159bf51e16990b15de74', 'width': 320}, {'height': 356, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=640&crop=smart&auto=webp&s=cc88adfabadde09aed63b4ba2966a2e2cc0dd647', 'width': 640}, {'height': 535, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=960&crop=smart&auto=webp&s=c705d7d68a715edffea9372f812e9019199ead78', 'width': 960}, {'height': 601, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?width=1080&crop=smart&auto=webp&s=dfdecfed041c87f23145e03e0798b688462883e2', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/Kl93K2bjo23tHSiycW5RpoDEJS3ZCajwxIKrIfs14L0.jpeg?auto=webp&s=0fde2069e08b72b0d63da7d48e11b118ea16c044', 'width': 1089}, 'variants': {}}]} | |
Happy New Years everyone! | 40 | 2026 will feel like a decade. Onward! | 2026-01-01T05:12:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q0wemx/happy_new_years_everyone/ | LocoMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0wemx | false | null | t3_1q0wemx | /r/LocalLLaMA/comments/1q0wemx/happy_new_years_everyone/ | false | false | self | 40 | null |
the "power user" guide to ai tools the mainstream ignores | 1 | [removed] | 2026-01-01T04:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q0w4ga/the_power_user_guide_to_ai_tools_the_mainstream/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0w4ga | false | null | t3_1q0w4ga | /r/LocalLLaMA/comments/1q0w4ga/the_power_user_guide_to_ai_tools_the_mainstream/ | false | false | self | 1 | null |
Top Frontier Models in the LmArena 2025 | 1 | 2026-01-01T04:40:22 | https://www.reddit.com/gallery/1q0vv8i | Difficult-Cap-7527 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q0vv8i | false | null | t3_1q0vv8i | /r/LocalLLaMA/comments/1q0vv8i/top_frontier_models_in_the_lmarena_2025/ | false | false | 1 | null | ||
OpenForecaster Release | 72 | [https://huggingface.co/papers/2512.25070](https://huggingface.co/papers/2512.25070) | 2026-01-01T04:30:45 | logisbase2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0vph6 | false | null | t3_1q0vph6 | /r/LocalLLaMA/comments/1q0vph6/openforecaster_release/ | false | false | default | 72 | {'enabled': True, 'images': [{'id': 'iuw1u1y61oag1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=108&crop=smart&auto=webp&s=4645b25edab5b3356d99e0b90bde2e4872ccb70e', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=216&crop=smart&auto=webp&s=8bb498204b6f47d75ac42df5c4ce6ce3649a988e', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=320&crop=smart&auto=webp&s=f56febe9f49453c401c5f131640ca0702bcf2c96', 'width': 320}, {'height': 406, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=640&crop=smart&auto=webp&s=39850abc1dea70f66b62d9a4cafb3424faf01f53', 'width': 640}, {'height': 609, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=960&crop=smart&auto=webp&s=45e37266d930a8162f895ba1f25d2a926b8dea65', 'width': 960}, {'height': 686, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?width=1080&crop=smart&auto=webp&s=2ef0a15098237e3d21fefcc59ca04d8a028ccd64', 'width': 1080}], 'source': {'height': 880, 'url': 'https://preview.redd.it/iuw1u1y61oag1.png?auto=webp&s=a2571da3367c12e688e3d44d3246c6ae26f38812', 'width': 1385}, 'variants': {}}]} | |
IQuestLab/IQuest-Coder-V1 — 40B parameter coding LLM — Achieves leading results on SWE-Bench Verified (81.4%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%) | 172 | 2026-01-01T04:29:26 | https://github.com/IQuestLab/IQuest-Coder-V1 | TellMeAboutGoodManga | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0vom4 | false | null | t3_1q0vom4 | /r/LocalLLaMA/comments/1q0vom4/iquestlabiquestcoderv1_40b_parameter_coding_llm/ | false | false | default | 172 | {'enabled': False, 'images': [{'id': 'BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=108&crop=smart&auto=webp&s=a1ec9e58e49a4b8e28e877d09abc027627faa03e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=216&crop=smart&auto=webp&s=d7826a24f30bbcd280f363725ae4529aba085ae9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=320&crop=smart&auto=webp&s=ac08d16e6cff6674e19e2f9e3e7ceed31032e812', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=640&crop=smart&auto=webp&s=3873330c45da75d454dd4483d37c39a58e5c6810', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=960&crop=smart&auto=webp&s=7c9750e41837f40ffc42c776f4220ce206213a84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?width=1080&crop=smart&auto=webp&s=32b522a6e8f800da849284a12df6d66670fd61a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BV6FpKtNQWUgU3wdfJKH5UR3dlogn4uR0Fs4eIn5vSk.png?auto=webp&s=364ec046ce93a77356a04b881a239d41c9570c69', 'width': 1200}, 'variants': {}}]} | |
Is it one big agent, or sub-agents? | 2 | If you are building agents, are you resorting to send traffic to one agent that is responsible for all sub-tasks (via its instructions) and packaging tools intelligently - or are you using a lightweight router to define/test/update sub-agents that can handle user specific tasks.
The former is a simple architecture, but I feel it's a large, bloated piece of software that's harder to debug. The latter is cleaner and simpler to build (especially packaging tools) but requires a great/robust orchestration/router.
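To be concrete, the "lightweight router" I mean is barely more than this (toy sketch; keyword matching stands in for whatever classifier or cheap LLM call you'd actually use):

```python
from typing import Callable

SubAgent = Callable[[str], str]

def billing_agent(q: str) -> str: return f"[billing] handling: {q}"
def support_agent(q: str) -> str: return f"[support] handling: {q}"
def general_agent(q: str) -> str: return f"[general] handling: {q}"

# Each sub-agent carries only the tools and instructions for its own task.
ROUTES: dict[str, SubAgent] = {
    "invoice": billing_agent, "refund": billing_agent,
    "error": support_agent, "crash": support_agent,
}

def route(query: str) -> str:
    for keyword, agent in ROUTES.items():
        if keyword in query.lower():
            return agent(query)
    return general_agent(query)

print(route("My invoice is wrong"))  # dispatched to billing_agent
```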
How are you all thinking about this? Would love framework-agnostic approaches because these frameworks are brittle, add very little value and become an operational burden as you push agents to production. | 2026-01-01T04:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q0vgos/is_it_one_big_agent_or_subagents/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0vgos | false | null | t3_1q0vgos | /r/LocalLLaMA/comments/1q0vgos/is_it_one_big_agent_or_subagents/ | false | false | self | 2 | null |
[D] Nvidia Grace Blackwell GB10 for local LLM deployment - real-world experience? | 1 | [removed] | 2026-01-01T04:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q0vf4r/d_nvidia_grace_blackwell_gb10_for_local_llm/ | Conscious-Author-761 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0vf4r | false | null | t3_1q0vf4r | /r/LocalLLaMA/comments/1q0vf4r/d_nvidia_grace_blackwell_gb10_for_local_llm/ | false | false | self | 1 | null |
Implementable Framework (CFOL) Proven to Resolve Paradoxes in Scaling LLMs | 0 | On December 31, 2025, a paper co-authored with Grok (xAI) in extended collaboration with Jason Lauzon was released, presenting a fully deductive proof that the Contradiction-Free Ontological Lattice (CFOL) is the necessary and unique architectural framework capable of enabling true AI superintelligence.
Key claims:
* Current architectures (transformers, probabilistic, hybrid symbolic-neural) treat truth as representable and optimizable, inheriting undecidability and paradox risks from Tarski’s undefinability theorem, Gödel’s incompleteness theorems, and self-referential loops (e.g., Löb’s theorem).
* Superintelligence — defined as unbounded coherence, corrigibility, reality-grounding, and decisiveness — requires strict separation of an unrepresentable ontological ground (Layer 0: Reality) from epistemic layers.
* CFOL achieves this via stratification and invariants (no downward truth flow), rendering paradoxes structurally ill-formed while preserving all required capabilities.
The paper proves:
* Necessity (from logical limits)
* Sufficiency (failure modes removed, capabilities intact)
* Uniqueness (any alternative is functionally equivalent)
The argument is purely deductive, grounded in formal logic, with supporting convergence from 2025 research trends (lattice architectures, invariant-preserving designs, stratified neuro-symbolic systems).
Full paper (open access, Google Doc):
[https://docs.google.com/document/d/1QuoCS4Mc1GRyxEkNjxHlatQdhGbDTbWluncxGhyI85w/edit?usp=sharing](https://docs.google.com/document/d/1QuoCS4Mc1GRyxEkNjxHlatQdhGbDTbWluncxGhyI85w/edit?usp=sharing&referrer=grok.com)
The framework is released freely to the community. Feedback, critiques, and extensions are welcome.
Looking forward to thoughtful discussion. | 2026-01-01T04:07:14 | https://www.reddit.com/r/LocalLLaMA/comments/1q0vawm/implementable_framework_cfol_proven_to_resolve/ | Jonas_Tripps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0vawm | false | null | t3_1q0vawm | /r/LocalLLaMA/comments/1q0vawm/implementable_framework_cfol_proven_to_resolve/ | false | false | self | 0 | null |
Running 200B+ Parameter Models Locally: Our Team's Experience with Dell GB10 + Nvidia Blackwell | 1 | [removed] | 2026-01-01T04:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/1q0vaqx/running_200b_parameter_models_locally_our_teams/ | Conscious-Author-761 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0vaqx | false | null | t3_1q0vaqx | /r/LocalLLaMA/comments/1q0vaqx/running_200b_parameter_models_locally_our_teams/ | false | false | self | 1 | null |
Happy New Year: Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning - Fine Tune. (based on recent find of L3.3 8b in the wild) | 263 | Special thanks to :
[jacek2023](https://www.reddit.com/user/jacek2023/)
For an incredible find of Llama 3.3 8B "in the wild".
I fine tuned it using Unsloth and Claude 4.5 Opus High Reasoning Dataset:
[https://huggingface.co/DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning](https://huggingface.co/DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning)
This has created a reasoning/instruct hybrid.
Details at the repo.
DavidAU | 2026-01-01T03:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q0uuqt/happy_new_year/ | Dangerous_Fix_5526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0uuqt | false | null | t3_1q0uuqt | /r/LocalLLaMA/comments/1q0uuqt/happy_new_year/ | false | false | self | 263 | {'enabled': False, 'images': [{'id': '1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=108&crop=smart&auto=webp&s=6e0d713ff0abe2971cede7571b3fde4d025ea819', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=216&crop=smart&auto=webp&s=0ed23c4e5c90edf555daf2e5191f47a90f105151', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=320&crop=smart&auto=webp&s=43a9cb8849abbd99e0c96869024089e2b6cdc98c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=640&crop=smart&auto=webp&s=a5431e077edff6ed7eb9fdb15ab425fdf5c2cb65', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=960&crop=smart&auto=webp&s=ed533d10dc1f61f2df36820c370ffc5022b27632', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?width=1080&crop=smart&auto=webp&s=46176cce1255093a06ba80b58e7e1fac2edd20c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1ubNYsDE2-AJGcblMjbifRGRDcUgCn-tk-oQJJ_drKY.png?auto=webp&s=194d66cc482adfb00cbf0674f221941af1fff9a0', 'width': 1200}, 'variants': {}}]} |
Top 10 Open Models by Providers on LMArena | 93 | 2026-01-01T03:32:13 | nekofneko | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0uoys | false | null | t3_1q0uoys | /r/LocalLLaMA/comments/1q0uoys/top_10_open_models_by_providers_on_lmarena/ | false | false | default | 93 | {'enabled': True, 'images': [{'id': 'xo7h0asvqnag1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=108&crop=smart&auto=webp&s=ce064cde3871f246450466cd820e60ce1baec2fe', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=216&crop=smart&auto=webp&s=d0329891cb261c892e163229d458b4ad52c594ac', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=320&crop=smart&auto=webp&s=c6e36bdd44333d16ba258a4f1641c4e4f6a49e3f', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=640&crop=smart&auto=webp&s=1f90fd4135eb4647968b39123e68d3e463e24269', 'width': 640}, {'height': 1031, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=960&crop=smart&auto=webp&s=1c2040c81fbc762d3bdd6ef67a06bc49795857dd', 'width': 960}, {'height': 1159, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?width=1080&crop=smart&auto=webp&s=a374fb085a8b53f13a553d9a33fd239c812f366e', 'width': 1080}], 'source': {'height': 2466, 'url': 'https://preview.redd.it/xo7h0asvqnag1.jpeg?auto=webp&s=f4156ddbea54e2901ff0f6ce8c1a63a3220b2459', 'width': 2296}, 'variants': {}}]} | ||
Is there an AI subreddit with swag | 1 | [removed] | 2026-01-01T03:03:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q0u6q9/is_there_an_ai_subreddit_with_swag/ | donotfire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0u6q9 | false | null | t3_1q0u6q9 | /r/LocalLLaMA/comments/1q0u6q9/is_there_an_ai_subreddit_with_swag/ | false | false | self | 1 | null |
Importing Custom Vision Model Into LM Studio | 3 | Hey guys, just arrived here because I've looked everywhere and can't find anything.
I've just fine-tuned Qwen3 VL 8B using Unsloth's notebook and exported the final model as a GGUF, and no matter how I try to import it into LM Studio I can't figure out how to get it to retain its vision capability. I've put both the GGUF and the mmproj.gguf into the same folder, like with the base Qwen3 VL, but they just show up as two separate models, neither of which lets me upload an image.
Tried on both Windows and Ubuntu by both using LMS and popping the files in manually but nothing seems to work.
Any help or even just pointing me in the right direction would be appreciated, I've never done this before and I'm starting to think I jumped in the deep end starting with a vision model. Thanks | 2026-01-01T02:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1q0tt1e/importing_custom_vision_model_into_lm_studio/ | Flob_Dog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0tt1e | false | null | t3_1q0tt1e | /r/LocalLLaMA/comments/1q0tt1e/importing_custom_vision_model_into_lm_studio/ | false | false | self | 3 | null |
I stopped adding guardrails and added one log line instead (AJT spec) | 0 | Been running a few production LLM setups (mostly local models + some API calls) and kept hitting the same annoying thing after stuff went sideways:
I could see exactly what the model output was, how long it took, even the full prompt in traces…
but when someone asked "wait, why did we let this through?" suddenly it was a mess.
Like:
• Which policy was active at that exact moment?
• Did the risk classifier flag it as high?
• Was it auto-approved or did a human sign off?
That info was either buried in config files, scattered across tools, or just… not recorded.
I got tired of reconstructing it every time, so I tried something dead simple: log one tiny structured event whenever a decision is made (allow/block/etc).
Just 9 fields, nothing fancy. No new frameworks, no blocking logic, fits into whatever logging I already have.
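In practice it's a single helper that serializes one flat event per decision (field names below are illustrative, not the exact nine from the spec):

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decisions")

def log_decision(decision: str, policy_id: str, risk: str, approver: str, **extra) -> None:
    # One flat JSON event per allow/block decision; nothing here blocks the request.
    event = {
        "event": "ajt.decision",
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "decision": decision,     # "allow" | "block"
        "policy_id": policy_id,   # which policy version was active at that moment
        "risk": risk,             # classifier output at decision time
        "approver": approver,     # "auto" or a human identifier
        **extra,
    }
    log.info(json.dumps(event))

log_decision("allow", policy_id="content-v3", risk="low", approver="auto", model="local-llm")
```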
Threw it up as a little spec here if anyone’s interested:
https://github.com/Nick-heo-eg/spec/
how do you handle this kind of thing with local LLMs?
Do you log decision context explicitly, or just wing it during postmortems? | 2026-01-01T01:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q0sw3n/i_stopped_adding_guardrails_and_added_one_log/ | Echo_OS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0sw3n | false | null | t3_1q0sw3n | /r/LocalLLaMA/comments/1q0sw3n/i_stopped_adding_guardrails_and_added_one_log/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=108&crop=smart&auto=webp&s=809529d9bbee0be460afefd258c27476dd13f143', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=216&crop=smart&auto=webp&s=91cdbff858223b537f56ebb4e36cca923dbc5ee1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=320&crop=smart&auto=webp&s=77660c3ce1982cabbadbba84f158e6229eeb8b78', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=640&crop=smart&auto=webp&s=afae8b5b2d800f0ba820c0bf35499ca423cdf3bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=960&crop=smart&auto=webp&s=da2b2c4f5a91cd4ec0706b341da4275923c38b15', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?width=1080&crop=smart&auto=webp&s=fb33b95a96980b14f8f35b802c768a76a5592d93', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kBYM5I7vnrJEIzuxFEQoCYWCjOSyOaathZJ49KtJrb0.png?auto=webp&s=65d55a155b652bf9d5f9b4581e3df6627c683167', 'width': 1200}, 'variants': {}}]} |
Looks like 2026 is going to be worse for running your own models :( | 0 | 2026-01-01T01:30:09 | https://x.com/Polymarket/status/2006443814278541536?s=20 | Nobby_Binks | x.com | 1970-01-01T00:00:00 | 0 | {} | 1q0sjjn | false | null | t3_1q0sjjn | /r/LocalLLaMA/comments/1q0sjjn/looks_like_2026_is_going_to_be_worse_for_running/ | false | false | default | 0 | null | |
GraphQLite - Embedded graph database for building GraphRAG with SQLite | 24 | For anyone building GraphRAG systems who doesn't want to run Neo4j just to store a knowledge graph, I've been working on something that might help.
GraphQLite is an SQLite extension that adds Cypher query support. The idea is that you can store your extracted entities and relationships in a graph structure, then use Cypher to traverse and expand context during retrieval. Combined with sqlite-vec for the vector search component, you get a fully embedded RAG stack in a single database file.
It includes graph algorithms like PageRank and community detection, which are useful for identifying important entities or clustering related concepts. There's an example in the repo using the HotpotQA multi-hop reasoning dataset if you want to see how the pieces fit together.
\`pip install graphqlite\`
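Rough usage pattern from Python's stdlib sqlite3 (note: the extension path and the cypher() table function below are placeholders to show the shape of it, not GraphQLite's documented API; check the repo for the real calls):

```python
import sqlite3

conn = sqlite3.connect("knowledge.db")
conn.enable_load_extension(True)
conn.load_extension("./graphqlite")  # placeholder path to the compiled extension

# Placeholder Cypher call: expand from an entity hit to its neighbouring docs.
rows = conn.execute(
    "SELECT * FROM cypher('MATCH (e:Entity)-[:MENTIONS]->(d:Doc) RETURN e, d LIMIT 5')"
).fetchall()
```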
Hope it is useful to some of y’all.
GitHub: [https://github.com/colliery-io/graphqlite](https://github.com/colliery-io/graphqlite) | 2026-01-01T01:27:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q0si1a/graphqlite_embedded_graph_database_for_building/ | Fit-Presentation-591 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0si1a | false | null | t3_1q0si1a | /r/LocalLLaMA/comments/1q0si1a/graphqlite_embedded_graph_database_for_building/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': '-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=108&crop=smart&auto=webp&s=3f1556411766d6992f9a5488a9158d0893c4d893', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=216&crop=smart&auto=webp&s=598d9ce746d5c0096bc84d43db783e9a540b429a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=320&crop=smart&auto=webp&s=d9f40cd07a5bf2eb8155f2608f874094514fae9a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=640&crop=smart&auto=webp&s=c94b7a0dbfe48b8dd2797c326eda3aacacfa77a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=960&crop=smart&auto=webp&s=57ab575ea62016bbb3d5235b20b8d0a0a5e61607', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?width=1080&crop=smart&auto=webp&s=0be44068ff09d67ba5ad21d6e8cb4e59211c23bb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-OCf6yXQbeOuZvElrzYlq1tWuWRWKAOYlzFLwq0jOmo.png?auto=webp&s=9978661bd670797d0499b30f2eadd3aeb3e9e49d', 'width': 1200}, 'variants': {}}]} |
[Discussion] Scaling "Pruning as a Game" to Consumer HW: A Hierarchical Tournament Approach | 0 | The recent paper "Pruning as a Game" is promising, but the computational cost (O(N²) interactions) makes it impossible to run on consumer GPUs for large models (70B+).
**The Engineering Proposal:** Instead of a global "Battle Royale" (all neurons interacting), I propose a **Divide-and-Conquer architecture** inspired by system resource management.
**1. Hierarchical Tournament**
* Split layers/blocks into smaller groups.
* Compute Nash Equilibrium locally. This creates parallelism and reduces complexity.
**2. Beam Search with "Waiting Room"**
* Don't just keep the winner (Top-1). Keep the Top-2 candidates.
* **Crucial Trick:** Offload the runner-up (2nd place) to **System RAM (CPU)**, keeping only the winner in **VRAM**.
* This prevents VRAM saturation while avoiding "Local Optima" traps.
**3. Lazy Aggregation**
* Only trigger the "Loser's Bracket" (fetching 2nd place from RAM) if the Top-1 model shows high loss in specific layers.
* Or simply use **Model Soups** (averaging weights) to merge candidates without expensive re-training (a minimal averaging sketch follows this list).
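As a concrete sketch of that last merge step (assuming both candidates share the same architecture and state-dict keys):

```python
import torch

def soup(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Uniform 'model soup': element-wise average of two candidates' weights.
    Cheap enough that the runner-up can stay offloaded in system RAM until needed."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k].to(sd_a[k].device) for k in sd_a}

# Usage idea: winner.state_dict() lives in VRAM, the runner-up was parked on the CPU:
# merged = soup(winner.state_dict(), runner_up_cpu_state)
```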
**Question:** Has anyone tried a similar hierarchical approach for this specific paper? I'm looking for collaborators to test this logic. | 2026-01-01T01:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q0sedr/discussion_scaling_pruning_as_a_game_to_consumer/ | NingenBakudan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0sedr | false | null | t3_1q0sedr | /r/LocalLLaMA/comments/1q0sedr/discussion_scaling_pruning_as_a_game_to_consumer/ | false | false | self | 0 | null |
What are the best models for RTX PRO 6000? | 0 | The model should fit into the VRAM of a single GPU and be as performant as possible. If you share, please include which quantization you use and how many tok/s you get from your configuration.
Good local model for computer use? | 3 | I’ve been looking to make something like TalkTasic where it can view your screen and modify what you’re saying to a good prompt based on what app you’re using. But I also want to extend this to also accurately dictate back to me what is happening without being too verbose. Mostly just need to lower screen time and I want to code via dictation but get a nice summary of what has happened as it happens.
Maybe something like this also already exists? Seems obvious some of the gpt models can do this but having trouble finding an OSS one that has native vision and hearing | 2026-01-01T00:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q0rywi/good_local_model_for_computer_use/ | thepetek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0rywi | false | null | t3_1q0rywi | /r/LocalLLaMA/comments/1q0rywi/good_local_model_for_computer_use/ | false | false | self | 3 | null |
my "no-filter" productivity stack for staying ahead in 2025 | 1 | [removed] | 2026-01-01T00:56:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q0rx4m/my_nofilter_productivity_stack_for_staying_ahead/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0rx4m | false | null | t3_1q0rx4m | /r/LocalLLaMA/comments/1q0rx4m/my_nofilter_productivity_stack_for_staying_ahead/ | false | false | self | 1 | null |
I built AIfred-Intelligence - a self-hosted AI assistant with automatic web research and multi-agent debates (AIfred with upper "i" instead of lower "L" :-) | 34 | Hey r/LocalLLaMA,
Been working just for fun and learning about LLM on this for a while:
**AIfred Intelligence** is a self-hosted AI assistant that goes beyond simple chat.
**Key Features:**
**Automatic Web Research** \- AI autonomously decides when to search the web, scrapes sources in parallel, and cites them. No manual commands needed.
**Multi-Agent Debates** \- Three AI personas with different roles:
* 🎩 AIfred (scholar) - answers your questions as an English butler
* 🏛️ Sokrates (critic) - as himself, with an ancient Greek personality; challenges assumptions, finds weaknesses
* 👑 Salomo (judge) - as himself, synthesizes and delivers final verdict
Editable system/personality prompts
As you can see in the screenshot, there's a "Discussion Mode" dropdown with options like Tribunal (agents debate X rounds → judge decides) or Auto-Consensus (they discuss until 2/3 or 3/3 agree) and more modes.
History compression at 70% utilization. Conversations never hit the context wall (hopefully :-) ).
**Vision/OCR** \- Crop tool, multiple vision models (Qwen3-VL, DeepSeek-OCR)
**Voice Interface** \- STT + TTS integration
UI internationalization in english / german per i18n
**Backends:** Ollama (best supported and most flexible), vLLM, KoboldCPP (TabbyAPI maybe coming soon) - each remembers its own model preferences.
**Other stuff:** Thinking Mode (collapsible `<think>` blocks), LaTeX rendering, vector cache (ChromaDB), VRAM-aware context sizing, REST API for remote control to inject prompts and control the browser tab out of a script or per AI.
Built with Python/Reflex. Runs 100% local.
Extensive Debug Console output and debug.log file
Entire export of chat history
Tweaking of LLM parameters
**GitHub:** [https://github.com/Peuqui/AIfred-Intelligence](https://github.com/Peuqui/AIfred-Intelligence)
Happy to answer questions and like to read your opinions!
Happy new year and God bless you all,
Best wishes,
Peuqui | 2026-01-01T00:48:21 | Peuqui | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0rrxr | false | null | t3_1q0rrxr | /r/LocalLLaMA/comments/1q0rrxr/i_built_aifredintelligence_a_selfhosted_ai/ | false | false | default | 34 | {'enabled': True, 'images': [{'id': 'b8qjb0pzdmag1', 'resolutions': [{'height': 200, 'url': 'https://preview.redd.it/b8qjb0pzdmag1.png?width=108&crop=smart&auto=webp&s=69be700276d9cff484177e32c20745a031cc045c', 'width': 108}, {'height': 401, 'url': 'https://preview.redd.it/b8qjb0pzdmag1.png?width=216&crop=smart&auto=webp&s=d7bb5f2bf35eb12333e2cbc2ae89483925b4abb1', 'width': 216}, {'height': 594, 'url': 'https://preview.redd.it/b8qjb0pzdmag1.png?width=320&crop=smart&auto=webp&s=f0ddb1a0d988e52f425318add25991941e9ef3f4', 'width': 320}, {'height': 1189, 'url': 'https://preview.redd.it/b8qjb0pzdmag1.png?width=640&crop=smart&auto=webp&s=4e64a13ef94483a1313f96b4738a7daf664a7ed2', 'width': 640}], 'source': {'height': 1691, 'url': 'https://preview.redd.it/b8qjb0pzdmag1.png?auto=webp&s=64f2e3c6b3ca7d9b54d6f6f6ba8ff0360a3c40ef', 'width': 910}, 'variants': {}}]} | |
For those with a 6700XT GPU (gfx1031) - ROCM - Openweb UI | 10 | Just thought I would share my setup for those starting out or needing some improvement, as I think it's as good as it's going to get. For context, I have a 6700XT with a 5600X and 16GB of system RAM, and if there are any better/faster ways I'm open to suggestions.
Between all the threads of information and little goldmines along the way, I need to share some links and let you know that Google Studio AI was my friend in getting a lot of this built for my system.
* I have ROCm 7.1.1 built : [https://github.com/guinmoon/rocm7\_builds](https://github.com/guinmoon/rocm7_builds) \-with gfx1031 ROCBLas [https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU)
* I build my own llama.cpp aligned to use the gfx1031 6700XT and ROCm 7.1.1
* I use llama-swap for my models : [https://github.com/mostlygeek/llama-swap](https://github.com/mostlygeek/llama-swap) as you can still use Vision Models by defining the mmproj file.
* I use Openweb UI in a docker [https://github.com/open-webui/open-webui](https://github.com/open-webui/open-webui)
* I install Fast Kokoro - ONNX from GitHub: [https://github.com/thewh1teagle/kokoro-onnx](https://github.com/thewh1teagle/kokoro-onnx) (pip install --force-reinstall "git+https://github.com/thewh1teagle/kokoro-onnx.git")
* I build Whisper.cpp - Vulkan /w VAD: [https://github.com/ggml-org/whisper.cpp/tree/master?tab=readme-ov-file#vulkan-gpu-support](https://github.com/ggml-org/whisper.cpp/tree/master?tab=readme-ov-file#vulkan-gpu-support) & modify server.cpp "/inference" to "/v1/audio/transcriptions" (see the client sketch just below this list)
* I run Docling via python: pip install "docling-serve[ui]" #to upgrade: pip install --upgrade "docling-serve[ui]"
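Side note on the Whisper.cpp endpoint rename above: once the server answers on /v1/audio/transcriptions, any OpenAI-style client (including the Openweb UI STT setting) can call it directly. A rough Python sketch, assuming the server listens on port 8080 and you have a clip.wav handy:

```python
# Hypothetical client for the renamed whisper.cpp endpoint.
# Port 8080 and clip.wav are assumptions; the "model" field is ignored by
# whisper.cpp and only exists so OpenAI-style clients are happy.
import requests

with open("clip.wav", "rb") as f:
    r = requests.post(
        "http://localhost:8080/v1/audio/transcriptions",
        files={"file": ("clip.wav", f, "audio/wav")},
        data={"model": "whisper-1"},
    )
print(r.json())  # whisper.cpp should answer with a small JSON object containing the text
```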
I had to install Python 3.12.x to get ROCm built. Yes, I know my ROCm is butchered and I don't fully know what I'm doing, but it's working: it looks like 7.1.1 is being used for text generation while the imagery ROCBlas is using the 6.4.2 /bin/library.
I have my system set up with a *.bat file that starts each service on boot in its own CMD window and runs it in the background, ready to be called by Openweb UI. I've tried to use Python along the way since Docker seems to take up a lot of resources, and I tend to get between 22-25 t/s on ministral3-14b-instruct Q5_XL with a 16k context.
Also got Stablediffusion.cpp working with Z-Image last night using the same custom build approach
If your having trouble DM me , or i might add it all to a github later so that it can be shared. | 2026-01-01T00:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q0r9bh/for_those_with_a_6700xt_gpu_gfx1031_rocm_openweb/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0r9bh | false | null | t3_1q0r9bh | /r/LocalLLaMA/comments/1q0r9bh/for_those_with_a_6700xt_gpu_gfx1031_rocm_openweb/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=108&crop=smart&auto=webp&s=820c1f3c1ac400e67b14f10e17b20e2ed0724295', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=216&crop=smart&auto=webp&s=e4c8b2c6e496071a3a8d736e3ef3725f7ed33112', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=320&crop=smart&auto=webp&s=a9b758b215db53ad359c6be533e1270bc4baba7f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=640&crop=smart&auto=webp&s=3a10d93129e1134f8bd2431d07257e6071b8ccec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=960&crop=smart&auto=webp&s=d2fd5be1310bce3abb37310c9435bcc8458a9b30', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?width=1080&crop=smart&auto=webp&s=1b4e031e9b811a8846ce15b19879e8b66fd36040', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xABcxyf_Eu2Z9wwn0C37gRPzDdL3Ts5tmkV3SwFcKX8.png?auto=webp&s=5292cac2f7bf20444ae94fc6d2b150bdaacc3be7', 'width': 1200}, 'variants': {}}]} |
I built a "Glass Box" agent framework because I was tired of debugging magic black boxes. (Apache 2.0) | 1 | Hi everyone,
I just released **Lár v1.0.0**. It's an open-source framework for building deterministic, auditable AI agents.
**Why another framework?**
I tried building production agents with existing tools, but I couldn't trust them. I didn't know *why* an agent loops, or *where* it failed. I built Lár to be a "Glass Box"—you see every nut and bolt.
**Key Features:**
* **Auditable Logs**: It generates a step-by-step JSON log of every thought the agent has.
* **1-Line Local Support**: Switch to **Local Llama 3** (via Ollama) by changing a single string. No import changes. No refactoring.
* **IDE Friendly**: No complex env setup. Just clone and run. You can build a working agent in minutes.
* **18 Core Patterns**: We standardized common agent flows (RAG, Triage, Map-Reduce). Don't reinvent the wheel.
* **Integration Builder**: Need to talk to Stripe? Drag the `@lar/IDE_INTEGRATION_PROMPT` into Cursor, and it writes the tool for you.
* **Air-Gap Ready**: The engine is fully decoupled from the internet. Great for secure enterprise deployments.
* **Simple**: No complex abstractions. Just Nodes and Routers.
It's free (Apache 2.0) and I'm actively looking for feedback from the community.
**Links**:
* **Website**: [https://snath.ai](https://snath.ai)
* **Docs**: [https://docs.snath.ai](https://docs.snath.ai)
* **Github**: [https://github.com/snath-ai/lar](https://github.com/snath-ai/lar)
**We built 3 Open Source Demos**:
1. **Code Repair Agent**: [https://github.com/snath-ai/code-repair-demo](https://github.com/snath-ai/code-repair-demo)
2. **RAG Agent**: [https://github.com/snath-ai/rag-demo](https://github.com/snath-ai/rag-demo)
3. **Customer Support Swarm**: [https://github.com/snath-ai/customer-support-demo](https://github.com/snath-ai/customer-support-demo) | 2026-01-01T00:05:28 | https://www.reddit.com/r/LocalLLaMA/comments/1q0qz1c/i_built_a_glass_box_agent_framework_because_i_was/ | Some_Adhesiveness203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0qz1c | false | null | t3_1q0qz1c | /r/LocalLLaMA/comments/1q0qz1c/i_built_a_glass_box_agent_framework_because_i_was/ | false | false | self | 1 | null |
Getting Blackwell consumer multi-GPU working on Windows? | 0 | Hi there, I recently managed to snag a 5070TI and a 5080 which I managed to squeeze with an AM5 board (2 x PCIe 5.0x8) in a workstation tower with 1600W PSU and 128GB RAM. This should become my AI playground. I mostly work on Windows, with WSL for anything that needs a \*nix-ish environment. I was pretty enthused to have two 16GB cards, thinking that I could hit the sweet spot of 32GB (I'm aware there's going to be some overhead) for text generation models with acceptable quality and larger context where my 4090 currently is just barely too low on VRAM. I might switch one of the GPUs for the 4090 in my "main" PC once (if) I get everything running.
I spent a lot of time with tutorials that somehow didn't work for me. llama.cpp somehow ignored any attempts to involve the second GPU, getting vLLM (which feels like shooting sparrows with a cannon) set up in WSL got me into a never ending dependency hell, oobabooga was the same as llama.cpp. Some tutorials said I needed to use nightly builds to work on Blackwell, but when the system borked at my attempts, I found Github issues mentioning Blackwell problems, regression bugs and mentions of multi-GPU working only partially, and at some point, the rabbit hole just got so deep I feared I'd get lost.
So long story short: if anybody knows a recent tutorial that helps me get this setup working on Windows, I'll be eternally grateful. I might be missing the obvious. If the answer is that I either need to wait another month until things get stable enough or that I definitely need to switch to plain Linux and use a specific engine, that'll be fine too. I got to the game pretty late, so I'm aware that I'm asking at NOOB level and still got quite a learning curve ahead. After 35 years in IT, my context window isn't as big as it used to be ;-)
Happy New Year everyone! | 2026-01-01T00:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q0qx8h/getting_blackwell_consumer_multigpu_working_on/ | Bit_Poet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0qx8h | false | null | t3_1q0qx8h | /r/LocalLLaMA/comments/1q0qx8h/getting_blackwell_consumer_multigpu_working_on/ | false | false | self | 0 | null |
challenges getting useful output with ai max+ 395 | 2 | I've been playing around with llama.cpp and ollama and trying to get them to work with agent coding tools (continue.dev, cline, copilot) and having very mixed results.
llama.cpp seems very hit and miss, sometimes it works but more often it doesn't even finish loading
ollama at least starts up reliably but when I try to use it with coding tools I've had mixed behavior depending on what model and what tool I'm using. Cline has been the most consistent as far as attempting to do something but then it gets into failure loops after a while.
Does anyone have example setups with ai max+ 395 where the input process output loop at least works every time? Is this a hardware problem or am I expecting too much from local llama?
I'm at that stage where I don't know what is actually broken (maybe everything), I need a "known good" to start with then iterate on. | 2025-12-31T23:45:58 | https://www.reddit.com/r/LocalLLaMA/comments/1q0qlfi/challenges_getting_useful_output_with_ai_max_395/ | sputnik13net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0qlfi | false | null | t3_1q0qlfi | /r/LocalLLaMA/comments/1q0qlfi/challenges_getting_useful_output_with_ai_max_395/ | false | false | self | 2 | null |
Transcribe: local Whisper transcription (GUI + CLI) with diarization, timestamps, optional Ollama | 0 | Hi r/LocalLLaMA,
I built a free tool called Transcribe (tx) and put the landing page here: https://icosium.org
It’s a desktop app + CLI that uses Whisper locally to capture audio from files, microphones, or system audio, then produces timestamped transcripts with speaker diarization. After capture, you can optionally generate a local summary via Ollama (any Ollama model).
**What it does**
* File mode: transcribe a WAV file and export a timestamped transcript
* Mic mode: live microphone capture with live output and timestamps
* Speaker mode: capture system audio, plus optional microphone input for conversations (dual source)
* Speaker diarization: clearer “who said what” labeling
* Offline friendly: models download on first use, then run locally
* Optional summaries: pipe the transcript into Ollama after transcription finishes
* Cross-platform: Windows, macOS, Linux
* Automation-friendly: CLI for batch runs and repeatable workflows
**Workflow**
* Choose a mode (file, mic, speaker) and select your audio device
* Transcribe locally (Whisper runs locally after the first model download)
* Export the transcript or optionally summarize via Ollama
**Ollama summaries (optional)**
* Install Ollama
* Run ollama serve
* Pull any model: ollama pull <model>
* Default host is http://localhost:11434 (configurable if you run Ollama elsewhere)
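If you would rather script the summary step than use the app UI, here is a minimal sketch against the default host above (model name, prompt, and transcript path are placeholders):

```python
# Minimal sketch: send a finished transcript to Ollama's /api/generate endpoint.
# "llama3.2" and meeting.txt are placeholders; use whatever model you pulled.
import json, urllib.request

transcript = open("meeting.txt", encoding="utf-8").read()
payload = {
    "model": "llama3.2",
    "prompt": "Summarize this transcript in a few bullet points:\n\n" + transcript,
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```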
Downloads are linked on the site. Feedback is welcome, especially on diarization quality, live mode UX, and any missing workflows you would want in a local-first setup. | 2025-12-31T23:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q0q22c/transcribe_local_whisper_transcription_gui_cli/ | FreedomByFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0q22c | false | null | t3_1q0q22c | /r/LocalLLaMA/comments/1q0q22c/transcribe_local_whisper_transcription_gui_cli/ | false | false | self | 0 | null |
I made an Opensource tutorial app providing LLM videos and glossary | 0 | Hi all, here's an updated tutorial app about LLM training and specs: AI Delvepad [https://apps.apple.com/us/app/a-i-delvepad/id6743481267](https://apps.apple.com/us/app/a-i-delvepad/id6743481267) It has a glossary and a free video tutorial resource, with more added recently, so you can learn on the go. I put up a promo vid to add some comical flavor, since making things with AI should be fun along the way too.
**Site:** [http://aidelvepad.com](http://aidelvepad.com/)
**GitHub:** [https://github.com/leapdeck/AIDelvePad](https://github.com/leapdeck/AIDelvePad)
Includes:
* 35+ free bite-sized video tutorials (with more coming soon)
* A beginner-friendly glossary of essential AI terms
* A quick intro to how large language models are trained
* A tutorial-sharing feature so you can pass interesting finds to friends
* Everything is 100% free and open source
If you find some hilarity to the vid, hop on and please give it a try. Any feedback appreciated! You can fork the Opensource too if you want to make something similar for mobile. | 2025-12-31T23:08:08 | https://v.redd.it/ihoyuun8fmag1 | Other_Passion_4710 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0pw7u | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ihoyuun8fmag1/DASHPlaylist.mpd?a=1769814506%2CMTlkMGM2NGIwNGI5MDc4YzU1MWY3YjY1OWM3MzM4ZGZiZGJmOWE3OTNkZjQzNjZjMGJmNzgwMTBiZmZjNWQ3ZQ%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/ihoyuun8fmag1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/ihoyuun8fmag1/HLSPlaylist.m3u8?a=1769814506%2CMjRjOTQyYzIyMmNiYmQ3YzFkOGM2MDdiZWI2MzI5ZmQxMmQ3OTIxNTQzOGI0NmVkNzY4ZTgwNjJiZDA1ZjdiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ihoyuun8fmag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1q0pw7u | /r/LocalLLaMA/comments/1q0pw7u/i_made_an_opensource_tutorial_app_providing_llm/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy.png?width=108&crop=smart&format=pjpg&auto=webp&s=e72302ee374e227493b0b1858260a94a8f05cca7', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy.png?width=216&crop=smart&format=pjpg&auto=webp&s=70f8d36dd5f40677f9202b8375beb4d10f65c2ad', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy.png?width=320&crop=smart&format=pjpg&auto=webp&s=08a38b279cb7ad1c683c4fe16e72de1b10244d12', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy.png?width=640&crop=smart&format=pjpg&auto=webp&s=92e94747b403fda076662c703d04b1c50b4b4819', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/YnM5dW51bzhmbWFnMTeVJpeQDm0kkBlIrPEYss6G8wv11As6ovFMorzU5AFy.png?format=pjpg&auto=webp&s=aa7d47577ef783e718a90e150aeb46d3b95de841', 'width': 720}, 'variants': {}}]} | |
Saw this post about making open-source LLMs compete in a turn-based simulator. Curious what folks here think | 8 | Saw this post on X where someone built a turn-based terminal simulator game (“The Spire”) and then had **open-source models compete against each other** inside it (Llama-3.1 vs Mistral, etc.).
It’s obviously **not rigorous** in any academic or benchmark sense, but it got me thinking about **simulation-based evals** as a direction in general.
On the one hand:
* You get long-horizon behavior
* Planning vs greed shows up quickly
* Different models seem to fail in qualitatively different ways
On the other hand:
* Highly prompt and environment-dependent
* Hard to control variance
* Easy to over interpret outcomes
Curious how people here think about this kind of thing as a **supplement** to traditional evals.
Is this mostly a toy / content thing, or is there something real here if done carefully?
Would love to hear thoughts from people who’ve tried agent sims or multi-turn environments with open models.
[source](https://x.com/josh_cli/status/2005903669171311094) | 2025-12-31T22:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q0p1zp/saw_this_post_about_making_opensource_llms/ | Commercial_Image266 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0p1zp | false | null | t3_1q0p1zp | /r/LocalLLaMA/comments/1q0p1zp/saw_this_post_about_making_opensource_llms/ | false | false | self | 8 | null |
My prediction: on 31st december 2028 we're going to have 10b dense models as capable as chat gpt 5.2 pro x-high thinking. | 0 | Densing law predicts that every 3.5 months we will cut in half the number of parameters needed to reach the same level of intellectual performance. In 36 months that is about 36 / 3.5 ≈ 10 halvings, i.e. roughly a 1000x reduction in parameters. So if ChatGPT 5.2 Pro x-high thinking really has around 10 trillion parameters, in 3 years a 10B dense model could be just as good and competent. Wild! | 2025-12-31T22:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/1q0oxty/my_prediction_on_31st_december_2028_were_going_to/ | Longjumping_Fly_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0oxty | false | null | t3_1q0oxty | /r/LocalLLaMA/comments/1q0oxty/my_prediction_on_31st_december_2028_were_going_to/ | false | false | self | 0 | null |
Mule Run agent Marketplace | 1 | [removed] | 2025-12-31T21:58:21 | https://x.com/i/status/2006314918891442589 | Every_Scene_9275 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1q0oigb | false | null | t3_1q0oigb | /r/LocalLLaMA/comments/1q0oigb/mule_run_agent_marketplace/ | false | false | default | 1 | null |
Github read me genaration | 1 | [removed] | 2025-12-31T21:53:57 | https://github.com/mushfiqk47/GithubReadme-Generator | Silent_Sage_69 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0of7w | false | null | t3_1q0of7w | /r/LocalLLaMA/comments/1q0of7w/github_read_me_genaration/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=108&crop=smart&auto=webp&s=ead00a44553b15b428246c03b8edeaa20fda6ddf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=216&crop=smart&auto=webp&s=6bcb23f1ec62a01165f758afcbfcf26945a5719e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=320&crop=smart&auto=webp&s=4264b4bf3ce809d43539441ccea0489eaf9c69ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=640&crop=smart&auto=webp&s=2bb0076f628175ee02fd3da5f69b4f5f04d4ca97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=960&crop=smart&auto=webp&s=0d2d8d29d671955fece26d2acc3cbde4b7d3138e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?width=1080&crop=smart&auto=webp&s=de99c1ba3d787dc025090ba536eb6fde2d72853f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wq0bHAQLukw2jZ21SUTjTdRjPNUTd0fHAbr-x4RzsIk.png?auto=webp&s=0b3242ce3d1872b847edb9196d2506fb1cb978f4', 'width': 1200}, 'variants': {}}]} | |
Orange Pi Unveils AI Station with Ascend 310 and 176 TOPS Compute | 71 | Orange Pi closes the year by unveiling new details about the Orange Pi AI Station, a compact board-level edge computing platform built around the Ascend 310 series processor. The system targets high-density inference workloads with large memory options, NVMe storage support, and extensive I/O in a small footprint.
The AI Station is powered by an Ascend 310 series processor integrating 16 CPU cores clocked at up to 1.9 GHz, along with 10 AI cores running at up to 1.08 GHz and 8 vector cores operating at up to 1 GHz.
According to Orange Pi, the platform delivers up to 176 TOPS of AI compute performance, enabling large-scale inference and feature-extraction workloads.
Memory options include 48 GB or 96 GB of LPDDR4X operating at up to 4266 MHz. Storage support consists of a PCIe 4.0 ×4 M.2 2280 slot for NVMe SSDs, onboard eMMC support up to 256 GB, a 16 MB SPI flash device, and a microSD card slot for removable storage.
The Orange Pi AI Station has an official product page already, though purchase links were unavailable at the time of publication.
[https://linuxgizmos.com/orange-pi-unveils-ai-station-with-ascend-310-and-176-tops-compute/](https://linuxgizmos.com/orange-pi-unveils-ai-station-with-ascend-310-and-176-tops-compute/) | 2025-12-31T21:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q0ny4i/orange_pi_unveils_ai_station_with_ascend_310_and/ | DeliciousBelt9520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0ny4i | false | null | t3_1q0ny4i | /r/LocalLLaMA/comments/1q0ny4i/orange_pi_unveils_ai_station_with_ascend_310_and/ | false | false | self | 71 | {'enabled': False, 'images': [{'id': 'GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok.jpeg?width=108&crop=smart&auto=webp&s=31e80f483f0ec13e9b58f185b501bc2d7a7e6e58', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok.jpeg?width=216&crop=smart&auto=webp&s=d5bad687aae08980553e4e5bbf01438aaffb98f6', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok.jpeg?width=320&crop=smart&auto=webp&s=c905600e0143f303e51bcc23aa38b07031b92841', 'width': 320}, {'height': 389, 'url': 'https://external-preview.redd.it/GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok.jpeg?width=640&crop=smart&auto=webp&s=1767f41cff957a27f1c8f1cad94427cf7c281aae', 'width': 640}], 'source': {'height': 414, 'url': 'https://external-preview.redd.it/GYot5YXKb8VSzoU7L7r_YfIcDozbz1YdaqEz8PY38ok.jpeg?auto=webp&s=f842bd7a5156e43ee99a1c10411798b2b5038043', 'width': 681}, 'variants': {}}]} |
underrated ai tools you probably haven’t tried yet | 1 | [removed] | 2025-12-31T20:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q0n7js/underrated_ai_tools_you_probably_havent_tried_yet/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0n7js | false | null | t3_1q0n7js | /r/LocalLLaMA/comments/1q0n7js/underrated_ai_tools_you_probably_havent_tried_yet/ | false | false | self | 1 | null |
Synergy between multiple models? | 0 | I recently was struggling with a python bug where thinking tokens were included in an agent's workflow in a spot where they shouldn't be.
I asked Sonnet 4.5 to fix the issue via Cline. After it tried a few times and spent about $1 of tokens, it failed. I then tried a few different local models: Kimi k2 thinking, minimax m2.1, GLM 4.7.
The thing that eventually worked was using GLM 4.7 as a planner and the Minimax 2.1 as the implementer. GLM 4.7 on its own might have worked eventually but is rather slow on my mac studio 512 gb.
Besides the increase in speed from going to minimax as the actor, it also seemed like minimax helped GLM be better at tool calls by example, AND helped GLM not constantly ask me to approve actions that I have already given it blanket approval for. But the planning insight came from GLM.
I was wondering if anyone else has observed a synergy between two models that have presumably slightly different training sets and strengths/weaknesses.
I can imagine that Haiku would be great for implementation because not only is it fast, but its very low hallucination rate makes it good at coding (but probably less creative than Sonnet). | 2025-12-31T20:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q0mlvf/synergy_between_multiple_models/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0mlvf | false | null | t3_1q0mlvf | /r/LocalLLaMA/comments/1q0mlvf/synergy_between_multiple_models/ | false | false | self | 0 | null |
fun? | 0 | # ---------------------------------------------------------
# 1️⃣ Dataset – on-the-fly generation of (Intent, Binary) pairs
# ---------------------------------------------------------
from torch.utils.data import IterableDataset
import json, random, struct

class AICOIterDataset(IterableDataset):
    def __init__(self, vocab_intent, vocab_bin, max_len=64):
        self.vocab_intent = vocab_intent   # e.g. {"SET_TEMP":1, "READ":2,...}
        self.vocab_bin = vocab_bin         # integer IDs for each legal opcode byte
        self.max_len = max_len

    def __iter__(self):
        while True:
            # 1️⃣ Random intent graph (tiny DSL)
            intent = {
                "action": random.choice(["READ_TEMP", "SET_FAN", "CHECK_LINK"]),
                "target": random.choice(["TMP001", "FAN_A", "ETH0"]),
                "value": random.randint(0, 255) if random.random() < 0.3 else None
            }
            intent_str = json.dumps(intent)   # human-readable line
            intent_ids = [self.vocab_intent.get(tok, 0) for tok in intent_str.encode()]
            # 2️⃣ Forward compile (deterministic JIT) – we *simulate* it here
            # In practice you call the same JIT you will ship on-device.
            binary_bytes = self._jit_compile(intent)
            binary_ids = [self.vocab_bin[b] for b in binary_bytes]
            # 3️⃣ Yield a packed example (intents are token IDs, binaries are token IDs)
            yield {
                "intent_ids": intent_ids,
                "binary_ids": binary_ids,
                "raw_intent": intent_str,
                "raw_binary": binary_bytes.hex()
            }

    # -----------------------------------------------------------------
    # Dummy JIT – replace with the real controller-as-compiler logic.
    # -----------------------------------------------------------------
    def _jit_compile(self, intent):
        # Very toy encoding: 0xAA = header, 0x01 = READ_TEMP, 0x02 = SET_FAN, …
        opcode_map = {"READ_TEMP": 0x10, "SET_FAN": 0x20, "CHECK_LINK": 0x30}
        reg_map = {"TMP001": 0x01, "FAN_A": 0x02, "ETH0": 0x03}
        opcode = opcode_map[intent["action"]]
        reg = reg_map[intent["target"]]
        payload = intent["value"] if intent["value"] is not None else 0x00
        # <header><opcode><reg><payload><checksum>
        chk = (opcode ^ reg ^ payload) & 0xFF
        return bytes([0xAA, opcode, reg, payload, chk])
The dataset generates an infinite stream of fresh pairs, guaranteeing unlimited training data without any manual labelling. You can also sprinkle in real driver‑manual‐derived pairs (from existing Linux drivers) to enrich the distribution.
# ---------------------------------------------------------
# 2️⃣ Model – tiny encoder-decoder transformer (shared encoder)
# ---------------------------------------------------------
import torch, torch.nn as nn
from transformers import GPT2Config, GPT2LMHeadModel

class AICOCompiler(nn.Module):
    def __init__(self, vocab_intent_sz, vocab_bin_sz, d_model=256, n_layers=6, n_heads=8):
        super().__init__()
        # Shared encoder (process intent)
        self.encoder = GPT2LMHeadModel(
            GPT2Config(vocab_size=vocab_intent_sz,
                       n_positions=512,
                       n_embd=d_model,
                       n_layer=n_layers,
                       n_head=n_heads,
                       resid_pdrop=0.0,
                       attn_pdrop=0.0)
        )
        # Decoder (generate binary tokens)
        self.decoder = GPT2LMHeadModel(
            GPT2Config(vocab_size=vocab_bin_sz,
                       n_positions=512,
                       n_embd=d_model,
                       n_layer=n_layers,
                       n_head=n_heads,
                       resid_pdrop=0.0,
                       attn_pdrop=0.0)
        )

    def forward(self, intent_ids, binary_ids=None):
        # Encode intent
        enc_out = self.encoder.transformer(input_ids=intent_ids)
        # If we have targets, shift them for teacher-forcing
        if binary_ids is not None:
            # Condition decoder on encoder final hidden state (simple cross-attention)
            dec_out = self.decoder.transformer(
                input_ids=binary_ids,
                encoder_hidden_states=enc_out.last_hidden_state,
                encoder_attention_mask=torch.ones_like(intent_ids)
            )
            logits = self.decoder.lm_head(dec_out.last_hidden_state)
            return logits
        else:
            # Generation mode – autoregressive decoding
            generated = []
            past = None
            cur = torch.full((intent_ids.shape[0], 1), self.decoder.config.bos_token_id,
                             dtype=torch.long, device=intent_ids.device)
            for _ in range(128):   # max binary length
                dec_out = self.decoder.transformer(
                    input_ids=cur,
                    encoder_hidden_states=enc_out.last_hidden_state,
                    encoder_attention_mask=torch.ones_like(intent_ids),
                    past_key_values=past)
                logits = self.decoder.lm_head(dec_out.last_hidden_state[:, -1])
                # **Hard constraint mask** – zero out illegal opcode ids
                logits[:, self.invalid_opcode_mask] = -1e9
                probs = torch.softmax(logits, dim=-1)
                nxt = torch.multinomial(probs, num_samples=1)
                generated.append(nxt)
                cur = torch.cat([cur, nxt], dim=1)
                past = dec_out.past_key_values
                # early stop on <EOS>
                if (nxt == self.decoder.config.eos_token_id).all():
                    break
            return torch.cat(generated, dim=1)
`invalid_opcode_mask` is a boolean tensor (size `vocab_bin_sz`) where illegal opcode IDs are set to True. It is computed once from the ISA spec and never changes. During generation we force the softmax to 0 for those entries – this is the hard-constraint layer we discussed in the "provably safe" section.
The decoder is conditioned on the encoder's final hidden state via cross-attention, which gives us the truly bidirectional mapping (the same weights can be used for the inverse direction by swapping the roles of the two models).
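The mask construction itself isn't shown above, so here is a minimal sketch of how it could be built; the set of legal IDs below is an assumed placeholder for whatever the ISA spec actually allows:

```python
# One-time construction of the hard-constraint mask (True = illegal, matching
# how it is indexed in the generation loop above). `legal_binary_ids` is an
# assumed placeholder for the token IDs the ISA spec permits.
import torch

def build_invalid_opcode_mask(vocab_bin_sz, legal_binary_ids):
    mask = torch.ones(vocab_bin_sz, dtype=torch.bool)
    mask[torch.tensor(sorted(set(legal_binary_ids)), dtype=torch.long)] = False
    return mask

# e.g. model.invalid_opcode_mask = build_invalid_opcode_mask(512, range(256))
```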
# ---------------------------------------------------------
# 3️⃣ Lightning + DeepSpeed (distributed, mixed-precision)
# ---------------------------------------------------------
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from deepspeed import DeepSpeedEngine

class AICOTrainer(pl.LightningModule):
    def __init__(self, model, lr=1e-4, weight_decay=0.01):
        super().__init__()
        self.model = model
        self.lr = lr
        self.wd = weight_decay

    def training_step(self, batch, batch_idx):
        intent = batch["intent_ids"]
        binary = batch["binary_ids"]
        logits = self.model(intent_ids=intent, binary_ids=binary[:, :-1])
        loss_fwd = nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)),
            binary[:, 1:].reshape(-1),
            ignore_index=self.model.decoder.config.pad_token_id
        )
        # ----- Inverse (binary → intent) branch -----
        # reuse same weights, just flip the inputs
        inv_logits = self.model.encoder(
            input_ids=binary,   # now binary is the "source"
            encoder_hidden_states=None,
            encoder_attention_mask=None
        ).last_hidden_state
        # simple linear head to map back to intent vocab (tiny MLP)
        inv_head = nn.Linear(inv_logits.shape[-1],
                             self.model.encoder.config.vocab_size).to(inv_logits.device)
        inv_logits = inv_head(inv_logits)
        loss_inv = nn.functional.cross_entropy(
            inv_logits.view(-1, inv_logits.size(-1)),
            intent[:, 1:].reshape(-1),
            ignore_index=self.model.encoder.config.pad_token_id
        )
        # weighted joint loss
        loss = 0.8 * loss_fwd + 0.2 * loss_inv
        self.log("train_loss", loss, prog_bar=True, on_step=True)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(),
                                      lr=self.lr,
                                      weight_decay=self.wd)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=10_000, eta_min=1e-5)
        return [optimizer], [scheduler]

# ---------------------------------------------------------
# 4️⃣ Launch on the cloud (example with 8-GPU A100 node)
# ---------------------------------------------------------
if __name__ == "__main__":
    vocab_intent_sz = 256   # wide enough for ASCII-based DSL tokens
    vocab_bin_sz = 512      # one ID per legal opcode byte (incl. prefixes)
    model = AICOCompiler(vocab_intent_sz, vocab_bin_sz,
                         d_model=256, n_layers=6, n_heads=8)
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=8,
        strategy="deepspeed_stage_2",
        precision=16,   # BF16/FP16 mixed precision
        max_epochs=30,
        callbacks=[ModelCheckpoint(dirpath="ckpts",
                                   monitor="train_loss",
                                   save_top_k=3,
                                   mode="min")],
        gradient_clip_val=1.0,
    )
    ds = AICOIterDataset(vocab_intent={i: i for i in range(256)},
                         vocab_bin={i: i for i in range(512)})
    loader = torch.utils.data.DataLoader(ds,
                                         batch_size=32,
                                         num_workers=8,
                                         collate_fn=lambda x: {
                                             "intent_ids": torch.nn.utils.rnn.pad_sequence(
                                                 [torch.tensor(i["intent_ids"]) for i in x],
                                                 batch_first=True, padding_value=0),
                                             "binary_ids": torch.nn.utils.rnn.pad_sequence(
                                                 [torch.tensor(i["binary_ids"]) for i in x],
                                                 batch_first=True, padding_value=0)
                                         })
    trainer.fit(AICOTrainer(model), loader) | 2025-12-31T20:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q0mlnn/fun/ | kght22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0mlnn | false | null | t3_1q0mlnn | /r/LocalLLaMA/comments/1q0mlnn/fun/ | false | false | self | 0 | null |
IS Openai experimenting with diffusion transformers in chatgpt or was it lag? | 0 | I noticed GPT-5.2 was writing something; at first it was slightly jumbled, then suddenly a few sentences appeared: part of the original sentence stayed the same while the rest disappeared and became another sentence. It was like "blah1blah2 blah3", then it suddenly changed to "blah1 word1 word2 blah2 word3 ......", and then a lot of text showed up and progressively more text was generated. Maybe they are testing diffusion mixed with autoregressive transformers now, or maybe my browser was lagging? | 2025-12-31T20:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q0mjzb/is_openai_experimenting_with_diffusion/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0mjzb | false | null | t3_1q0mjzb | /r/LocalLLaMA/comments/1q0mjzb/is_openai_experimenting_with_diffusion/ | false | false | self | 0 | null |
How is running local AI models on AMD GPUs today? | 16 | I have an NVIDIA GPU for a few years now but I am kinda considering a switch/upgrade to AMD, mainly because I use Linux nowadays and NVIDIA is still fairly buggy.
What is the state of running AI models on AMD GPUs as of late 2025? Can you for example install LM Studio and just run a language model directly on the GPU without any complex tweaks? What about image/video generation? Is it still an absolute mess? | 2025-12-31T20:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q0mg6w/how_is_running_local_ai_models_on_amd_gpus_today/ | liright | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0mg6w | false | null | t3_1q0mg6w | /r/LocalLLaMA/comments/1q0mg6w/how_is_running_local_ai_models_on_amd_gpus_today/ | false | false | self | 16 | null |
Am I calculating this wrong ? AWS H100 vs Decentralized 4090s (Cost of Iteration) | 5 | I'm building a cost model for fine tuning Llama 3 70B and I found a weird crossover point where consumer swarms beat H100s on time, not just cost. I want to check if my constants align with your experience.
The constants I'm using:
* AWS H100: $4.50/hr. Setup time (Driver install + 140GB download): around 45 mins.
* WAN Swarm (4090s): $2.00/hr. Setup time (Hot-loaded): 5 mins.
* Latency penalty: I'm assuming the Swarm is 1.6x slower on pure compute due to WAN bandwidth.
The Result: For a single production run (long training), AWS wins on speed. But for research cycles (e.g., 3 runs of 10k samples to test hyperparams), the math says the Swarm is actually cheaper AND competitive on total time because you don't pay the 45 minute "setup tax" three times.
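To make the crossover concrete, here's the back-of-the-envelope script I'm using (the 2 hours of H100 compute per research run is an assumed placeholder, not a benchmark):

```python
# Back-of-the-envelope comparison for a research cycle of N short runs.
# Rates, setup times, and slowdown are the constants above; h100_compute_hr is an assumption.
H100_RATE, SWARM_RATE = 4.50, 2.00      # $/hr
H100_SETUP, SWARM_SETUP = 0.75, 5 / 60  # hr of setup tax, paid on every run
SLOWDOWN = 1.6                          # WAN swarm compute penalty vs H100
runs, h100_compute_hr = 3, 2.0          # e.g. 3 hyperparam runs, ~2 h of H100 compute each

h100_hr = runs * (H100_SETUP + h100_compute_hr)
swarm_hr = runs * (SWARM_SETUP + h100_compute_hr * SLOWDOWN)
print(f"H100 : {h100_hr:.1f} h total, ${h100_hr * H100_RATE:.2f}")
print(f"Swarm: {swarm_hr:.1f} h total, ${swarm_hr * SWARM_RATE:.2f}")
# -> roughly 8.2 h / $37 on H100 vs 9.8 h / $20 on the swarm with these assumptions
```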
The question: For those of you fine-tuning 70B models:
1. Is my 45 minute setup estimate for AWS spot instances accurate, or do you have faster persistent environments ?
2. Is a 1.6x slowdown on training speed a dealbreaker if the cost is $2/hr vs $4.50/hr?
(Note: I built a calculator to visualize this, but I want to validate the constants first). | 2025-12-31T20:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q0m68h/am_i_calculating_this_wrong_aws_h100_vs/ | yz0011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0m68h | false | null | t3_1q0m68h | /r/LocalLLaMA/comments/1q0m68h/am_i_calculating_this_wrong_aws_h100_vs/ | false | false | self | 5 | null |
I have a bunch of RAM and too many tabs, so I made an extension powered by LLMs | 7 | I was too lazy to clean my tabs, so I made this; every existing tool I tried crashed because I had too many tabs. GitHub: [https://github.com/ndg8743/TabBrain](https://github.com/ndg8743/TabBrain)
* Duplicate detection across tabs and bookmarks
* AI-powered window topic detection ("this window is your ML research rabbit hole")
* Auto-categorization and Chrome tab group creation
* Bookmark cleanup - find dead links, rename those generic "New Folder" folders
* Window merge suggestions when you've got 5 windows all about the same thing
Works with Chrome, Firefox, Edge, Brave, and Safari. Runs completely local if you want.
**My setup running inference:**
* Ryzen 9 7950X (16C/32T) | 192GB DDR5-5200 (5400) | RTX 5070 Ti 16GB — big inference box
* Xeon E5-2697A v4 (32C) | 128GB DDR4 2133 (2400) RAM | Proxmox host with multi-GPU inference — running OpenWebUI in a container + Homarr etc. w/ 33tb raw
* 320GB RAM total, connected with 100 gig networking
OpenWebUI serving Llama 3.1/Mistral/Qwen locally. The 5070 Ti handles most requests, with offload to CPU when VRAM gets tight. I also have other servers that aren't part of this setup, so tell me your ideas for what to do with a lot of RAM and clusters right now.
[https://github.com/ndg8743/TabBrain](https://github.com/ndg8743/TabBrain) | 2025-12-31T19:39:46 | https://www.reddit.com/gallery/1q0lk2s | ng_uhh | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q0lk2s | false | null | t3_1q0lk2s | /r/LocalLLaMA/comments/1q0lk2s/i_have_a_bunch_of_ram_and_too_many_tabs_so_i_made/ | false | false | 7 | null | |
total noob here, where to start | 0 | I recently bought a Beelink SER5 MAX with 24GB of LPDDR5 RAM, which comes with some sort of AMD chip.
Google Gemini told me I could run an 8B model with Ollama on it. It had me add some Radeon repos to my OS (Pop!_OS) and install them, and it gave me the commands for installing Ollama and dolphin-llama3.
Well, my computer had some crashing issues with Ollama and then wouldn't boot, so I did a Pop!_OS refresh, which wiped all the system changes I had made (it just keeps flatpaks and user data), so my Ollama install is gone.
I figured I couldn't run Ollama on it, until I tried to open a JPEG in LibreOffice and that crashed the system too. After some digging, it appears the crashing comes from the 3 amp power cord the computer ships with being underpowered; you want at least 5 amps. So I ordered a new cord and I'm waiting for it to arrive.
When the new cord arrives I'm going to try to install an AI again. I read a thread on this sub saying Ollama isn't recommended compared to llama.cpp.
Do I need to know C programming to run llama.cpp? I made a temperature converter once in C, but that was a long time ago and I forget everything.
How should I go about doing this? Any good guides? Should I just install Ollama again?
And if I wanted to run a bigger model like 70B or even bigger, would the best choice for low power consumption and ease of use be a Mac Studio with 96GB of unified memory? That's what the AI told me; otherwise it said I'll have to start stacking AMD cards and upgrade the PSU and such, like a gaming machine.
| 2025-12-31T19:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q0l547/total_noob_here_where_to_start/ | cracked_shrimp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0l547 | false | null | t3_1q0l547 | /r/LocalLLaMA/comments/1q0l547/total_noob_here_where_to_start/ | false | false | self | 0 | null |
Finetuning: Is time actually your bottleneck ? (Speed vs. Cost) | 1 | [removed] | 2025-12-31T19:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q0kyzm/finetuning_is_time_actually_your_bottleneck_speed/ | Desperate_One2416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0kyzm | false | null | t3_1q0kyzm | /r/LocalLLaMA/comments/1q0kyzm/finetuning_is_time_actually_your_bottleneck_speed/ | false | false | self | 1 | null |
MIRA - Year-End Release: Stable Self-Model & HUD Architecture | 2 | Hi all, thank you all for the OUTPOURING of support for the MIRA project over the past few weeks. It trips me out that people are creating discussions, lodging bugs for me to fix, and even proposing feature improvements!
**This release represents focused work on MIRA's relationship with self, time, and context.** Since the original 1.0.0 release, generic OpenAI/local providers have gained full feature parity with the native Anthropic format, the working_memory has been modified so that the model receives a HUD (for lack of a better word) in a sliding assistant message that contains reminders and relevant memories, and the context window has been adjusted to better articulate the passage of time between messages.
In the 1.0.0 release I did not realize how many users would be operating the application totally offline. **Significant improvements have been made on this front, and offline/self-hosted operation is now rock solid.**
Various other improvements have been made and are contained in the release notes for releases [2025.12.30-feat](https://github.com/taylorsatula/mira-OSS/releases/tag/2025.12.30-feat) and [2025.12.24](https://github.com/taylorsatula/mira-OSS/releases/tag/2025.12.24).
Thank you all again for all of the feedback. It is wildly satisfying to work on a project so diligently for so long and then have it embraced by the community. Keep the feature requests comin'! | 2025-12-31T19:10:01 | https://github.com/taylorsatula/mira-OSS/releases/tag/2025.12.30-feat | awittygamertag | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0kvym | false | null | t3_1q0kvym | /r/LocalLLaMA/comments/1q0kvym/mira_yearend_release_stable_selfmodel_hud/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=108&crop=smart&auto=webp&s=43e05873fc00aaa7c1e999cba0dc7a6ded4cad1a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=216&crop=smart&auto=webp&s=2d3807b97f8cb2c4c7b6ce6846071cbc4276a1cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=320&crop=smart&auto=webp&s=2970c2cf2f2437f54240bc67b58b9444d6a62987', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=640&crop=smart&auto=webp&s=29881afe08111f841697cc96e3aca16311c53012', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=960&crop=smart&auto=webp&s=d0294fd4efab83c4d1cba84122cb05eb0ef4a961', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?width=1080&crop=smart&auto=webp&s=8c4772a403d9f5401c4dafd85e9836a988cc7fb4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JVN3svw9heKE2y0FOZEIKGAhQkiBm3-IOfOfER65TKM.png?auto=webp&s=57b84eddb7a3080f1a3fd0924625b4504ab5b5af', 'width': 1200}, 'variants': {}}]} | |
skt/A.X-K1 · Hugging Face | 44 | 519B 33B Active MOE from SK Hynix | 2025-12-31T19:09:40 | https://huggingface.co/skt/A.X-K1 | TKGaming_11 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q0kvo5 | false | null | t3_1q0kvo5 | /r/LocalLLaMA/comments/1q0kvo5/sktaxk1_hugging_face/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=108&crop=smart&auto=webp&s=138072a089c095b79283cd69d939b9d1e172b255', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=216&crop=smart&auto=webp&s=7a00ea37437f230447f498950ce8092cfb4bb021', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=320&crop=smart&auto=webp&s=9de7cdbfacca1279896593e4e7bafdd46af064a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=640&crop=smart&auto=webp&s=af764b1eafbfc2eee65c3e5875b8e41cb1d51cb9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=960&crop=smart&auto=webp&s=52df6b2ac7889e170b8e6b3cf5912e1145dd7120', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?width=1080&crop=smart&auto=webp&s=c8cffbac7bdcf9e43fb1599a5c8a25109503894d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GHRRdWU8ECY5icoFMU9aI3PYw3ek1aZsMSwrkWVV0WU.png?auto=webp&s=6850ad613925d561a8ed45ea849d59e0acbc22c2', 'width': 1200}, 'variants': {}}]} | |
Trying to setup a local LLM with LMStudio to work with the Jetbrains suite | 1 | Hi, like the title says, I want to set up a local LLM for line completion as well as more complex queries. Which models support "fill-in-the-middle"?
My machine has an Intel i7-13700KF with an RTX 4070, so I guess it's pretty powerful to run pretty big models.
Thanks! | 2025-12-31T19:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q0kpk4/trying_to_setup_a_local_llm_with_lmstudio_to_work/ | fatfuck1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0kpk4 | false | null | t3_1q0kpk4 | /r/LocalLLaMA/comments/1q0kpk4/trying_to_setup_a_local_llm_with_lmstudio_to_work/ | false | false | self | 1 | null |
made a simple CLI tool to pipe anything into an LLM. that follows unix philosophy. | 52 | just finished building infer - it's inspired from grep but for asking an LLM questions about your command output.
the whole idea is you can do stuff like:
ps aux | infer "what's eating my RAM"
dmesg | infer "any hardware errors?"
git log --oneline -20 | infer "what did I work on today"
infer "what's the tar command to extract .tar.gz?"
It's less than 200 lines of C, reads from stdin, and spits out plain text. It works with any OpenAI-compatible API. I got tired of copy-pasting logs into LLMs, so now I just pipe everything. I've been using it for a week and it's genuinely useful for debugging and remembering commands, so I thought I'd publish it now.
feedbacks are welcome | 2025-12-31T19:00:04 | https://github.com/chethanreddy1/infer | Famous-Koala-4352 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0kndt | false | null | t3_1q0kndt | /r/LocalLLaMA/comments/1q0kndt/made_a_simple_cli_tool_to_pipe_anything_into_an/ | false | false | default | 52 | {'enabled': False, 'images': [{'id': '2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=108&crop=smart&auto=webp&s=b16f2b7f244985c450d9af431f3b589d1bf10d0e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=216&crop=smart&auto=webp&s=264eeb1f6b41855f6efcb2b7ccb29456222cab78', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=320&crop=smart&auto=webp&s=dcce1ab61456680db07fa83d7c5de6967129286a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=640&crop=smart&auto=webp&s=b62d97dea222252e53b2edb1e6837fc7dd52f067', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=960&crop=smart&auto=webp&s=8d824c74dee9813fbaa4350278de4451c860cc84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?width=1080&crop=smart&auto=webp&s=b0d4038bec01948b7a33dee1b92c972f74410d57', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2mIni041WJ-kGj619KCJlOIIx8G0zfXcaEidwq1__hc.png?auto=webp&s=b42d1fb264d8af74601f3e4ae68e26d42d3a8787', 'width': 1200}, 'variants': {}}]} |
made a simple CLI tool to pipe anything into an LLM. that follows unix philosophy. | 1 | [removed] | 2025-12-31T18:54:07 | https://github.com/chethanreddy1/infer | Equivalent-Snow-447 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0kih3 | false | null | t3_1q0kih3 | /r/LocalLLaMA/comments/1q0kih3/made_a_simple_cli_tool_to_pipe_anything_into_an/ | false | false | default | 1 | null |
made a simple CLI tool to pipe anything into an LLM. that follows unix philosophy. | 1 | [removed] | 2025-12-31T18:47:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1q0kd5e | false | null | t3_1q0kd5e | /r/LocalLLaMA/comments/1q0kd5e/made_a_simple_cli_tool_to_pipe_anything_into_an/ | false | false | default | 1 | null | ||
made a simple CLI tool to pipe anything into an LLM. that follows unix philosophy. | 1 | [removed] | 2025-12-31T18:42:43 | https://github.com/chethanreddy1/infer | theghost3172 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0k8tv | false | null | t3_1q0k8tv | /r/LocalLLaMA/comments/1q0k8tv/made_a_simple_cli_tool_to_pipe_anything_into_an/ | false | false | default | 1 | null |
Anyone else expecting surprise New Year AI models? Qwen 4? Gemma 4? | 46 | The question in the title is clear: were you expecting such a surprise? | 2025-12-31T18:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q0k7qp/anyone_else_expecting_surprise_new_year_ai_models/ | ZeusZCC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0k7qp | false | null | t3_1q0k7qp | /r/LocalLLaMA/comments/1q0k7qp/anyone_else_expecting_surprise_new_year_ai_models/ | false | false | self | 46 | null |
GLM 4.6V keeps outputting <|begin_of_box|> and <|end_of_box|>, any way to remove this in openwebui? | 3 | I read in the documentation that they're special tokens specifically for GLM V models, but it seems like openwebui doesn't remove these tags in the responses.
Is there any current fix for this? | 2025-12-31T18:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q0jhz8/glm_46v_keeps_outputting_begin_of_box_and_end_of/ | lolwutdo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0jhz8 | false | null | t3_1q0jhz8 | /r/LocalLLaMA/comments/1q0jhz8/glm_46v_keeps_outputting_begin_of_box_and_end_of/ | false | false | self | 3 | null |
DuckLLM V2.0.0 | 1 | Hello!
DuckLLM V2.0.0 Is a Local LLM (AI) Capable Of Answering Questions And Analyzing Documents & Images On The Fly With 0 Internet.
| 2025-12-31T17:54:17 | https://www.reddit.com/r/LocalLLaMA/comments/1q0j3cr/duckllm_v200/ | Ok_Welder_8457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0j3cr | false | null | t3_1q0j3cr | /r/LocalLLaMA/comments/1q0j3cr/duckllm_v200/ | false | false | self | 1 | null |
M4 chip or older dedicated GPU? | 0 | Currently have a Quadro RTX 4000 (8GB, have been able to run up to 16b models), running with an Ollama Docker on my multi-purpose Unraid machine.
Have an opportunity to get an M4 Mac Mini (10-core, 16GB RAM). I know about the power savings, but I'm curious about the expected performance hit I'd take moving to a M4 chip. | 2025-12-31T17:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q0iypx/m4_chip_or_older_dedicated_gpu/ | grtgbln | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0iypx | false | null | t3_1q0iypx | /r/LocalLLaMA/comments/1q0iypx/m4_chip_or_older_dedicated_gpu/ | false | false | self | 0 | null |
Tongyi-MAI/MAI-UI-8B · Hugging Face | 51 | # 📖 Background
The development of GUI agents could revolutionize the next generation of human-computer interaction. Motivated by this vision, we present MAI-UI, a family of foundation GUI agents spanning the full spectrum of sizes, including 2B, 8B, 32B, and 235B-A22B variants. We identify four key challenges to realistic deployment: the lack of native agent–user interaction, the limits of UI-only operation, the absence of a practical deployment architecture, and brittleness in dynamic environments. MAI-UI addresses these issues with a unified methodology: a self-evolving data pipeline that expands the navigation data to include user interaction and MCP tool calls, a native device–cloud collaboration system that routes execution by task state, and an online RL framework with advanced optimizations to scale parallel environments and context length.
# 🏆 Results
# Grounding
MAI-UI establishes new state-of-the-art across GUI grounding and mobile navigation.
* On grounding benchmarks, it reaches 73.5% on ScreenSpot-Pro, 91.3% on MMBench GUI L2, 70.9% on OSWorld-G, and 49.2% on UI-Vision, surpassing Gemini-3-Pro and Seed1.8 on ScreenSpot-Pro.
GitHub Page: [https://github.com/Tongyi-MAI/MAI-UI](https://github.com/Tongyi-MAI/MAI-UI)
GGUF: [https://huggingface.co/mradermacher/MAI-UI-8B-GGUF](https://huggingface.co/mradermacher/MAI-UI-8B-GGUF) | 2025-12-31T17:43:43 | https://huggingface.co/Tongyi-MAI/MAI-UI-8B | Electronic-Fill-6891 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q0iu4m | false | null | t3_1q0iu4m | /r/LocalLLaMA/comments/1q0iu4m/tongyimaimaiui8b_hugging_face/ | false | false | default | 51 | {'enabled': False, 'images': [{'id': 'APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=108&crop=smart&auto=webp&s=4a76aef7717d6a7fd5e10b97ad0ea67bc6449609', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=216&crop=smart&auto=webp&s=80570dc029a189344dd1d747e7cd8eacb948d4ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=320&crop=smart&auto=webp&s=e38016b46af694fd7acacbaa94814d22e794b3d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=640&crop=smart&auto=webp&s=7ff528a5fe6e38f9509001b027074ce307d0e6f4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=960&crop=smart&auto=webp&s=b49b530277a940b031fa46b7a8dbbe8fe3f10627', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?width=1080&crop=smart&auto=webp&s=19e05caf468298ad2c89e95832017a4f71f40fb1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/APqvqT97nOxGNcFbDiZQ6E0B1ie2mEu_5iSSJxOQ6r4.png?auto=webp&s=7c4707d38e2e0674699854df859314dada84e0d5', 'width': 1200}, 'variants': {}}]} |
🚀 HuggingFace Model Downloader v2.3.0 - Now with Web UI, Live Progress, and 100x Faster Scanning! | 17 | Hey r/LocalLLaMA!
It's been a while since I posted about hfdownloader (my CLI tool for downloading models from HuggingFace). Well, I've been busy completely rewriting it from scratch, and I'm excited to share v2.3.0!
# What is it?
A fast, resumable downloader for HuggingFace models and datasets with:
* Concurrent connections (8 parallel chunks per file by default)
* Smart resume - picks up where you left off
* Filters - download only the quantization you need (e.g., q4\_k\_m)
* Works with private/gated repos (just set HF\_TOKEN)
# 🆕 What's New in 2.3.0
# 1. Beautiful Web UI 🌐
No more terminal-only! Start a web server and manage downloads from your browser
hfdownloader serve
# Opens at http://localhost:8080
https://preview.redd.it/kmmaaeimskag1.png?width=2908&format=png&auto=webp&s=58c6ccdee2ffc1f8c2d6cc10cc3a14834c928704
new web-ui
Features:
* Real-time progress via WebSocket
* Separate pages for Models and Datasets
* Per-file progress bars
* Start, pause, cancel downloads
# 2. One-Liner Web Mode 🎯
bash <(curl -sSL https://g.bodaay.io/hfd) -w
This downloads the binary, starts the web server, and opens your browser automatically. That's it!
# 3. 100x Faster Repository Scanning ⚡
Old versions would take 5+ minutes to scan large repos (like 90+ file model repos). Now it takes \~2 seconds. I removed blocking HEAD requests during planning - turns out HuggingFace always supports range requests for LFS files anyway.
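For the curious, the trick is just to go straight to ranged GETs and learn the total size from the first `Content-Range` response, so no HEAD round-trip is needed. A rough Python sketch of the idea (the actual tool is written in Go; the URL below is a placeholder, and this assumes the server honors Range requests, which HF does for LFS files):

```python
import requests

CHUNK = 16 * 1024 * 1024  # 16 MiB per ranged request

def download_ranged(url: str, out_path: str, token: str | None = None) -> None:
    """Download a file with HTTP Range requests only, no HEAD request."""
    auth = {"Authorization": f"Bearer {token}"} if token else {}
    start = 0
    with open(out_path, "wb") as f:
        while True:
            r = requests.get(
                url,
                headers={**auth, "Range": f"bytes={start}-{start + CHUNK - 1}"},
                timeout=60,
            )
            r.raise_for_status()
            f.write(r.content)
            # A 206 response carries "Content-Range: bytes a-b/total".
            total = int(r.headers["Content-Range"].split("/")[-1])
            start += len(r.content)
            if start >= total:
                break

# download_ranged("https://huggingface.co/org/model/resolve/main/model.safetensors",
#                 "model.safetensors")
```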
# 4. Smooth TUI Progress 📊
The terminal progress display used to jump around like crazy. Fixed it with exponential moving average smoothing.
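If anyone's curious, the smoothing itself is only a few lines; roughly this (a simplified Python sketch of the idea, not the actual Go code from the repo):

```python
class SmoothedSpeed:
    """Exponential moving average over instantaneous speed samples."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # lower alpha = smoother display, slower to react
        self.value = None    # smoothed bytes/sec

    def update(self, sample: float) -> float:
        if self.value is None:
            self.value = sample
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value

# speed = SmoothedSpeed()
# every tick: show(speed.update(bytes_since_last_tick / elapsed_seconds))
```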
# Links
* GitHub: [https://github.com/bodaay/HuggingFaceModelDownloader](https://github.com/bodaay/HuggingFaceModelDownloader)
* Releases: [https://github.com/bodaay/HuggingFaceModelDownloader/releases/tag/2.3.0](https://github.com/bodaay/HuggingFaceModelDownloader/releases/tag/2.3.0)
| 2025-12-31T17:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q0iqus/huggingface_model_downloader_v230_now_with_web_ui/ | bodaaay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0iqus | false | null | t3_1q0iqus | /r/LocalLLaMA/comments/1q0iqus/huggingface_model_downloader_v230_now_with_web_ui/ | false | false | 17 | null | |
all what I want in 2026 is this 4 node Strix Halo cluster - hoping other vendors will do this too | 27 | 2025-12-31T17:23:25 | Mental-At-ThirtyFive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0icmo | false | null | t3_1q0icmo | /r/LocalLLaMA/comments/1q0icmo/all_what_i_want_in_2026_is_this_4_node_strix_halo/ | false | false | default | 27 | {'enabled': True, 'images': [{'id': 'wgzlyc2spkag1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/wgzlyc2spkag1.jpeg?width=108&crop=smart&auto=webp&s=e89e7e1ba7c75546d2612cc827f5d5b4127629f2', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/wgzlyc2spkag1.jpeg?width=216&crop=smart&auto=webp&s=816cc9447da6d0856a286e168d6986adfd2a19ea', 'width': 216}, {'height': 338, 'url': 'https://preview.redd.it/wgzlyc2spkag1.jpeg?width=320&crop=smart&auto=webp&s=16e0adfff0dcd03939d1006229c7dd0eeafcb0a1', 'width': 320}, {'height': 677, 'url': 'https://preview.redd.it/wgzlyc2spkag1.jpeg?width=640&crop=smart&auto=webp&s=58b64395934c798f5532b645a67a2c7abf9be7c7', 'width': 640}], 'source': {'height': 823, 'url': 'https://preview.redd.it/wgzlyc2spkag1.jpeg?auto=webp&s=cc6f5f7404e8b0f3c80d749316241f4a61e893b7', 'width': 777}, 'variants': {}}]} | ||
Best model for Claude Code? | 1 | Hi. It seems Claude Code now natively supports "third-party" models. Has anyone tried and compared them?
Does GLM 4.7 offer the best performance (or performance per cost for hosting providers)? Any other contender?
Thanks | 2025-12-31T17:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/1q0i61m/best_model_for_claude_code/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0i61m | false | null | t3_1q0i61m | /r/LocalLLaMA/comments/1q0i61m/best_model_for_claude_code/ | false | false | self | 1 | null |
Agentic AI with FunctionGemma on Raspberry Pi 5 (Working) | 1 | For a while, I wondered if I could use my Raspberry Pi as my Agentic AI server. Greedy right!!
I have seen several attempts to attach an Nvidia GPU to a Raspberry Pi; some have actually succeeded, the cleanest example being one by[ Jeff Geerling](https://www.jeffgeerling.com/blog/2025/nvidia-graphics-cards-work-on-pi-5-and-rockchip).
But I intended to see what the Raspberry Pi 5 (16 GB) could do on its own without an external GPU.
What I wanted was to create a personal assistant that can
* Read my emails
* Send emails on demand
* Read my calendar
* Auto-reply on important unanswered emails.
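To give a feel for the shape of this, here is a minimal sketch of a tool-calling loop against a local Ollama server. The model tag, the single email tool, and its stub implementation are illustrative assumptions, not my exact setup:

```python
import json
import requests

def read_unread_emails() -> str:
    # Stub: the real assistant would talk to an IMAP server here.
    return json.dumps([{"from": "alice@example.com", "subject": "Meeting?"}])

TOOLS = [{
    "type": "function",
    "function": {
        "name": "read_unread_emails",
        "description": "Return the user's unread emails as JSON.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

def ask(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = requests.post("http://localhost:11434/api/chat", json={
            "model": "functiongemma",   # assumed local model tag
            "messages": messages,
            "tools": TOOLS,
            "stream": False,
        }, timeout=120).json()["message"]
        calls = reply.get("tool_calls") or []
        if not calls:
            return reply["content"]
        messages.append(reply)
        for call in calls:
            if call["function"]["name"] == "read_unread_emails":
                messages.append({"role": "tool", "content": read_unread_emails()})

# print(ask("Do I have any unread emails?"))
```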
More on [Substack](https://open.substack.com/pub/samairtimer/p/agentic-ai-with-functiongemma-on?utm_campaign=post-expanded-share&utm_medium=web).
Moonshot AI Completes $500 Million Series C Financing | 108 | AI company Moonshot AI has completed a $500 million Series C financing. Founder Zhilin Yang revealed in an internal letter that the company’s global paid user base is growing at a monthly rate of 170%. Since November, driven by the K2 Thinking model, Moonshot AI’s overseas API revenue has increased fourfold. The company holds more than RMB 10 billion in cash reserves (approximately $1.4 billion). This scale is already on par with Zhipu AI and MiniMax after their IPOs:
* As of June 2025, Zhipu AI has RMB 2.55 billion in cash, with an IPO expected to raise about RMB 3.8 billion.
* As of September 2025, MiniMax has RMB 7.35 billion in cash, with an IPO expected to raise RMB 3.4–3.8 billion.
In the internal letter, Zhilin Yang stated that the funds from the Series C financing will be used to more aggressively expand GPU capacity, accelerate the training and R&D of the K3 model, and he also announced key priorities for 2026:
* Bring the K3 model’s pretraining performance up to par with the world’s leading models, leveraging technical improvements and further scaling to increase its equivalent FLOPs by at least an order of magnitude.
* Make K3 a more "distinctive" model by vertically integrating training technologies and product taste, enabling users to experience entirely new capabilities that other models do not offer.
* Achieve an order-of-magnitude increase in revenue scale, with products and commercialization focused on Agents, not targeting absolute user numbers, but pursuing the upper limits of intelligence to create greater productivity value. | 2025-12-31T17:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1q0i4g3/moonshot_ai_completes_500_million_series_c/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0i4g3 | false | null | t3_1q0i4g3 | /r/LocalLLaMA/comments/1q0i4g3/moonshot_ai_completes_500_million_series_c/ | false | false | self | 108 | null |
my secret stack for writing high-quality fiction with ai | 1 | [removed] | 2025-12-31T16:55:58 | https://www.reddit.com/r/LocalLLaMA/comments/1q0hovr/my_secret_stack_for_writing_highquality_fiction/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0hovr | false | null | t3_1q0hovr | /r/LocalLLaMA/comments/1q0hovr/my_secret_stack_for_writing_highquality_fiction/ | false | false | self | 1 | null |
Help on Getting Started | 2 | Hey all, I'm trying to see what might be a good roadmap to maximize my budget. All advice appreciated!
So just to start, my **main goals** are:
1. Learn by building. I learn best through application so I'm looking to build experience with local inference, RAG pipelines, fine-tuning, evaluation etc.
2. Privacy. Eventually, I would like to take all that experience and invest money into having a local model that could be specialized for any of: contract review, knowledge lookup, "thinking", drafting written documents).
The thing is I would like to tailor cost to my progress. For example, I would definitely be open to utilizing cloud resources in the beginning and only invest in hardware once I have a clear grasp, IF that makes the most financial sense.
My current hardware is a consumer am5 board and a rtx 3090. I'm currently thinking of getting a 5090 just for personal gaming, but can definitely hold off on that if I will eventually need to get a 6000 maxq or expensive Mac machine.
**My questions are:**
1. How realistic is it to get 'close' to larger frontier-model performance using smaller local models plus RAG/inference/fine-tuning for specific tasks, if I'm willing to sacrifice speed to a certain extent?
2. Assuming the above is possible, what does that end setup look like, balancing cost effectiveness and setup effort?
3. Given my current hardware, what's the best path forward? Should I get a 5090 to tinker with, or experiment with the 3090 first, then move to a 6000 and eventually invest heavily in a new local rig?
4. Down the road, which would make more sense given my potential use cases: a Mac or an Nvidia GPU?
Thank you very much in advance! Just starting out so hopefully my questions make sense. | 2025-12-31T16:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q0ho16/help_on_getting_started/ | DrSexyMango | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0ho16 | false | null | t3_1q0ho16 | /r/LocalLLaMA/comments/1q0ho16/help_on_getting_started/ | false | false | self | 2 | null |
Llama 3.2 3B fMRI - Circuit Tracing Findings | 2 | For those that have been following along, you'll know that I came up with a way to attempt to trace distributed mechanisms. Essentially, I am:
* capturing per-token hidden activations across all layers
* building a sliding time window per dimension
* computing Pearson correlation between one chosen hero dim and all other dims
* selecting the top-K strongest correlations (by absolute value) per layer and timestep
* logging raw activation values + correlation sign
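For anyone who wants to reproduce the correlation pass, here is a minimal sketch of the core computation, assuming the per-token hidden states for one layer are already collected into a [tokens, dims] tensor (variable names are mine, not from my actual tooling):

```python
import torch

def windowed_corr_with_hero(acts: torch.Tensor, hero: int, window: int = 16, top_k: int = 10):
    """acts: [tokens, dims] hidden states for one layer.
    At each timestep, Pearson-correlate every dim with the hero dim over a
    trailing window and keep the top-k dims by |correlation|, with sign."""
    hits = []
    for t in range(window, acts.shape[0]):
        x = acts[t - window:t]
        x = x - x.mean(dim=0, keepdim=True)            # center each dim over the window
        hero_x = x[:, hero]
        denom = x.norm(dim=0) * hero_x.norm() + 1e-8
        corr = (x * hero_x.unsqueeze(1)).sum(dim=0) / denom   # [dims]
        corr[hero] = 0.0                               # ignore self-correlation
        _, dims = corr.abs().topk(top_k)
        hits.append([(t, int(d), float(corr[d])) for d in dims])
    return hits
```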
What stood out pretty quickly:
# 1) Most correlated dims are transient
Many dims show up strongly for a short burst — e.g. 5–15 tokens in a specific layer — then disappear entirely. These often vary by:
* prompt
* chunk of the prompt
* layer
* local reasoning phase
This looks like short-lived subroutines rather than stable features.
# 2) Some dims persist, but only in specific layers
Certain dims stay correlated for long stretches, but only at particular depths (e.g. consistently at layer \~22, rarely elsewhere). These feel like mid-to-late control or “mode” signals.
# 3) A small set of dims recur everywhere
Across different prompts, seeds, layers, and prompt styles, a handful of dims keep reappearing. These are rare, but very noticeable.
# 4) Polarity is stable
When a dim reappears, its **sign never flips**.
Example:
* dim X is *always* positive when it appears
* dim Y is *always* negative when it appears. The magnitude varies, but the polarity does not.
This isn’t intervention or gradient data — it’s raw activations — so what this really means is that these dims have **stable axis orientation**. When they engage, they always push the representation in the same direction.
# My current interpretation
* The majority of correlated dims are context-local and noisy (expected).
* A smaller group are persistent but layer-specific.
* A very small set appear to be **global, sign-stable features** that consistently co-move with the hero dim regardless of prompt or depth.
My next step is to stop looking at per-window “pretty pictures” and instead rank dims globally by:
* presence rate
* prompt coverage
* layer coverage
* persistence (run length)
* sign stability
The goal is to isolate those few recurring dims and then test whether they’re:
* real control handles
* general “confidence / entropy” proxies
* or something more interesting
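For reference, this is roughly how I plan to aggregate the logged hits into those global stats; a sketch over a simple record format, not my actual logging schema (run-length/persistence is omitted for brevity):

```python
from collections import defaultdict

def rank_dims(hits):
    """hits: iterable of (prompt_id, layer, timestep, dim, corr) records."""
    stats = defaultdict(lambda: {"count": 0, "prompts": set(), "layers": set(), "signs": []})
    for prompt_id, layer, t, dim, corr in hits:
        s = stats[dim]
        s["count"] += 1
        s["prompts"].add(prompt_id)
        s["layers"].add(layer)
        s["signs"].append(1 if corr > 0 else -1)

    ranked = []
    for dim, s in stats.items():
        ranked.append({
            "dim": dim,
            "presence": s["count"],
            "prompt_coverage": len(s["prompts"]),
            "layer_coverage": len(s["layers"]),
            "sign_stability": abs(sum(s["signs"])) / len(s["signs"]),  # 1.0 = never flips
        })
    return sorted(ranked,
                  key=lambda r: (r["sign_stability"], r["prompt_coverage"], r["presence"]),
                  reverse=True)
```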
If anyone has done similar correlation-based filtering or has suggestions on better ways to isolate global feature dims before moving to causal intervention, I’d love to hear it!
https://preview.redd.it/l88ej7vwjkag1.png?width=1592&format=png&auto=webp&s=dec7f6e36a078ef0fb04783730be5ae31667c085
https://preview.redd.it/yitls4uzjkag1.png?width=1592&format=png&auto=webp&s=3323bfa3059e351c051a924baa585ec3ed903677
| 2025-12-31T16:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q0hk7y/llama_32_3b_fmri_circuit_tracing_findings/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0hk7y | false | null | t3_1q0hk7y | /r/LocalLLaMA/comments/1q0hk7y/llama_32_3b_fmri_circuit_tracing_findings/ | false | false | 2 | null | |
Android LLM Client with Hardware Acceleration? | 7 | I'm aware of MLC Chat but it's too basic, doesn't seem to get updates anymore and also doesn't allow importing your own models.
Is there any other app with hardware acceleration? Preferably FOSS. My SoC has an NPU, and I'd like to use it. Thanks.
Horror! My local qwen just told me it's trained up til 2021. How can it code thus? | 0 | **Overview:** I have an rtx4070ti and the big LLMs tell me my best locals for creating and testing mostly ***python code*** are qwen2.5-coder:14b and deepseek-coder-v2:16b ...but the big boys aren't trained on the latest stuff and there seem to be new ones every day. So are they badly wrong? Caveat: I'm a non-coder, so it needs to be as easy as it was to install these two via llama.
Detailed: I'm a non-coder AI newbie (don't hate me) and having a lot of fun with all this fantastic new technology. Having struggled through using ChatGPT to make something that actually worked (it was painful), I'm now trying to level up agentically. I have set up an environment with Claude Code and n8n on a Linux VM, and I intend to expose an n8n workflow using a local LLM to Claude Code via MCP - so that it can use it when refactoring and creating code and save me precious tokens. So yes, multi-agent, with Claude architecting, conducting and checking, but the local LLM running and ... creating code (!?).
Is this possible with my rtx4070ti, i5-14600K and 32GB DDR4 RAM? Or am I dreaming? I've used ChatGPT and now the Claude Sonnet 4.5 chatbot to ideate all this, and both have told me to use qwen2.5-coder:14b and deepseek-coder-v2:16b, but having installed qwen and asked a question, it tells me it's trained to 2021. How can that be good relative to the swarm of new developments daily!?
My project is simple enough: a 5-million-row CSV, enrich it via API, and I manually mark the rows for export to mail merge and CRM ingestion. I made it with ChatGPT and me as the copy-paste bot, and now I'm refactoring it with CC. I will make other things in the future (e.g. make an invoice from a contract, ask contract questions via RAG, who knows), so much seems possible now and I want to get my agentic stack right first. Three months ago I had never used AI but now I'm hooked.
2026 prediction: Will there be a stronger 120b coding/math model than gpt oss:120b? | 25 | If so, where will it come from?
GPT OSS:120b came out in August and is still the strongest model of its size for coding/math. When will it be beaten?
Tokenized Shareholder Engagement For A Nasdaq Name: The Real Play Might Be User Growth | 1 | [removed] | 2025-12-31T16:23:28 | https://www.reddit.com/r/LocalLLaMA/comments/1q0gx3g/tokenized_shareholder_engagement_for_a_nasdaq/ | Foreign-Advice-2530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0gx3g | false | null | t3_1q0gx3g | /r/LocalLLaMA/comments/1q0gx3g/tokenized_shareholder_engagement_for_a_nasdaq/ | false | false | self | 1 | null |
Video upscaler | 2 | Greetings, I'm currently experimenting with upscaling 480p videos to 1080p and have tried Video2x and Waifu-gui. What I've found is that the Real-ESRGAN model seems to be quite good but slow as a dog: I'm getting 0.2 fps. I can see the GPU being used, and it's only an RTX 3060, but is there any way to achieve this faster? I don't think it's using CUDA, possibly only Vulkan; is there a way to use CUDA for a faster upscale? Perhaps another tool?
I built a pipeline to extract executive compensation data from SEC filings using MinerU + VLMs | 8 | I scraped about 100k DEF-14A proxy statements from the SEC a while back and finally decided to do something with them.
I built a pipeline that extracts Summary Compensation Tables from these filings. It uses MinerU to parse PDFs and extract table images, then Qwen3-VL-32B to classify which tables are actually compensation tables and extract structured JSON from them.
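To make the extraction step concrete, here is a rough sketch of the kind of call it boils down to against an OpenAI-compatible VLM endpoint. The serving name, prompt wording, and output fields are illustrative assumptions, not the exact ones in the repo:

```python
import base64
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # e.g. a local vLLM server

def extract_comp_table(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="Qwen/Qwen3-VL-32B-Instruct",  # assumed serving name
        temperature=0,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": (
                    "Is this a Summary Compensation Table? If yes, return JSON with a list "
                    "of executives (name, year, salary, bonus, total). "
                    "If not, return {\"is_comp_table\": false}."
                )},
            ],
        }],
    )
    # Assumes the model returns bare JSON; real code needs more defensive parsing.
    return json.loads(resp.choices[0].message.content)
```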
The main challenges were handling tables split across multiple pages and dealing with format changes between pre-2006 and post-2006 filings.
It's still a work in progress with some bugs (duplicate tables, occasional parsing errors), but the pipeline is currently running to build a full dataset from 2005 to today covering all US public companies.
Code and a sample of the dataset are available if anyone wants to take a look or contribute.
GitHub: [https://github.com/pierpierpy/Execcomp-AI](https://github.com/pierpierpy/Execcomp-AI)
HuggingFace sample: [https://huggingface.co/datasets/pierjoe/execcomp-ai-sample](https://huggingface.co/datasets/pierjoe/execcomp-ai-sample) | 2025-12-31T15:47:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q0g1ye/i_built_a_pipeline_to_extract_executive/ | Logical_Delivery8331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0g1ye | false | null | t3_1q0g1ye | /r/LocalLLaMA/comments/1q0g1ye/i_built_a_pipeline_to_extract_executive/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=108&crop=smart&auto=webp&s=d4b00904907f1be844b8791ee902cfe23dd0bfb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=216&crop=smart&auto=webp&s=46dde07e92a983c6eeca9694f7790bf72d88b7b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=320&crop=smart&auto=webp&s=4e2851c0299c3d51d8286534fbb3a405cfd67d6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=640&crop=smart&auto=webp&s=50f191892d6df89353f4e431d02a7dac216e93b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=960&crop=smart&auto=webp&s=0fcb1775edf72728a3bd6c79a9adfb8d1b679b16', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?width=1080&crop=smart&auto=webp&s=063e75a97cb90b7c67a32ef4fe2b8c29b39972f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2hyVGbUuuodK3ek9fzdXCIYLarE2DFvX8gj14UR07mA.png?auto=webp&s=419f418c1fdc16aa84b0ce096dd0647d65bdbc35', 'width': 1200}, 'variants': {}}]} |
Llama 3.3 8B Instruct Abliterated (MPOA) | 13 | I made an abliterated version of Llama 3.3 8B Instruct (based on shb777/Llama-3.3-8B-Instruct) with the MPOA technique (https://github.com/jim-plus/llm-abliteration).
Please find the model at [https://huggingface.co/YanLabs/Llama-3.3-8B-Instruct-MPOA](https://huggingface.co/YanLabs/Llama-3.3-8B-Instruct-MPOA)
GGUF files: [https://huggingface.co/YanLabs/Llama-3.3-8B-Instruct-MPOA-GGUF](https://huggingface.co/YanLabs/Llama-3.3-8B-Instruct-MPOA-GGUF)
Enjoy! | 2025-12-31T15:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q0fqfs/llama_33_8b_instruct_abliterated_mpoa/ | Perfect_Biscotti_476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0fqfs | false | null | t3_1q0fqfs | /r/LocalLLaMA/comments/1q0fqfs/llama_33_8b_instruct_abliterated_mpoa/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=108&crop=smart&auto=webp&s=d64d6666d75a13623410fc6f5953da9256f240d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=216&crop=smart&auto=webp&s=522c8523726f5e52d84867498839d188667fe5dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=320&crop=smart&auto=webp&s=cf7dfe838bb239cf320fc2f2c2087d126099a903', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=640&crop=smart&auto=webp&s=b6a73512cd0851a009c70b5f3743b6fd0ec3ce04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=960&crop=smart&auto=webp&s=916e57521923dcd9c8463597f23b39db5b63bd1d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?width=1080&crop=smart&auto=webp&s=c6062ab5ad700ff486ca3a86ccf8fadbdc5fbee1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oSF-PoBDsadB7Sq9lI2t_wUOYeEchF8JkzsXd96rfx0.png?auto=webp&s=b0db8b4e3a678df95bfce4f7f170548c269898f5', 'width': 1200}, 'variants': {}}]} |
Experiment: Packaging a local LLM into a standalone 'One-Click' script for non-technical users. How do you handle driver dependencies? | 0 | Hey everyone,
I’ve been trying to get my non-tech friends into local LLMs, but I hit a wall: the setup process. Even with things like Ollama being easier now, explaining terminal commands or driver updates to a complete beginner is still a hurdle.
I decided to try and build a fully pre-packaged, "unzip-and-run" environment.
**My approach so far:**
* **Backend:** I'm wrapping a quantized version of Ollama
* **Local Memory:** Remembers when you close the program and restart
* **Security:** Specifically designed it to be air-gapped/offline to appeal to privacy-conscious users who are paranoid about telemetry.
**The Challenge:** The biggest headache has been keeping the file size reasonable while ensuring it creates the right environment variables on Windows without triggering false positives in AV (since it's an unsigned .exe).
I’ve put the finished build up on BMAC for the price of a coffee for anyone who wants the convenience of skipping the setup, but I’m mostly curious—**has anyone else here tried distributing pre-packaged local environments like this?** I feel like "portability" is the missing link for mainstream adoption.
Link if you want to test the installer:
[https://buymeacoffee.com/cobotindustries/e/494055](https://buymeacoffee.com/cobotindustries/e/494055) | 2025-12-31T15:27:59 | http://buymeacoffee.com/cobotindustries/e/494055 | cobotIndustries | buymeacoffee.com | 1970-01-01T00:00:00 | 0 | {} | 1q0flsi | false | null | t3_1q0flsi | /r/LocalLLaMA/comments/1q0flsi/experiment_packaging_a_local_llm_into_a/ | false | false | default | 0 | null |
Experiment: Packaging a local LLM into a standalone 'One-Click' .exe for non-technical users. How do you handle driver dependencies? | 0 | Hey everyone,
I’ve been trying to get my non-tech friends into local LLMs, but I hit a wall: the setup process. Even with things like Ollama being easier now, explaining terminal commands or driver updates to a complete beginner is still a hurdle.
I decided to try and build a fully pre-packaged, "unzip-and-run" environment that requires zero external dependencies (no Python, no Git, no manual CUDA installs).
**My approach so far:**
* **Backend:** I'm wrapping a quantized version of Ollama
* **Packaging:** I'm bundling the runtime environment directly so it runs isolated from the OS.
* **Security:** Specifically designed it to be air-gapped/offline to appeal to privacy-conscious users who are paranoid about telemetry.
I’ve put the finished build up on BMAC for the price of a coffee for anyone who wants the convenience of skipping the setup, but I’m mostly curious—**has anyone else here tried distributing pre-packaged local environments like this?** I feel like "portability" is the missing link for mainstream adoption.
Link if you want to test the installer: [https://buymeacoffee.com/cobotindustries/e/494055](https://buymeacoffee.com/cobotindustries/e/494055) | 2025-12-31T15:21:03 | https://buymeacoffee.com/cobotindustries/e/494055 | cobotIndustries | buymeacoffee.com | 1970-01-01T00:00:00 | 0 | {} | 1q0fftu | false | null | t3_1q0fftu | /r/LocalLLaMA/comments/1q0fftu/experiment_packaging_a_local_llm_into_a/ | false | false | default | 0 | null |
How do you track your LLM/API costs per user? | 0 | Building a SaaS with multiple LLMs (OpenAI, Anthropic, Mistral) + various APIs (Supabase, etc).
My problem: I have zero visibility on costs.
* How much does each user cost me?
* Which feature burns the most tokens?
* When should I rate-limit a user?
Right now I'm basically flying blind until the invoice hits.
Tried looking at Helicone/LangFuse but not sure I want a proxy sitting between me and my LLM calls.
How do you guys handle this? Any simple solutions? | 2025-12-31T14:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q0ecyo/how_do_you_track_your_llmapi_costs_per_user/ | AdministrationPure45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0ecyo | false | null | t3_1q0ecyo | /r/LocalLLaMA/comments/1q0ecyo/how_do_you_track_your_llmapi_costs_per_user/ | false | false | self | 0 | null |
Which LLM is "best?" | 0 | I think GPT is the best, but I see so many complaining. And I don't get it.
I don't get the Claude hype.
Please ELI5 me what's wrong with GPT? Why is Claude so much better? | 2025-12-31T14:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1q0duz8/which_llm_is_best/ | Beneficial-Pear-1485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0duz8 | false | null | t3_1q0duz8 | /r/LocalLLaMA/comments/1q0duz8/which_llm_is_best/ | false | false | self | 0 | null |
Intel's Xe Linux Driver Ready With Multi-Device SVM To End Out 2025 | 0 | 2025-12-31T14:09:27 | https://www.phoronix.com/news/Intel-Multi-Device-SVM-Linux-7 | reps_up | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1q0duvq | false | null | t3_1q0duvq | /r/LocalLLaMA/comments/1q0duvq/intels_xe_linux_driver_ready_with_multidevice_svm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=108&crop=smart&auto=webp&s=bf02e876623a100d94ce2d27176defc4dbb29774', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=216&crop=smart&auto=webp&s=ebb7a788d049dbe4846e888800e89734622632ab', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=320&crop=smart&auto=webp&s=b6d54b7b9771e7698b5c0794aed3bea7d99ac387', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=640&crop=smart&auto=webp&s=bc57eed1f9004a8b649c870299b857960983ac36', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=960&crop=smart&auto=webp&s=7bb85ac7a8dd9ea2ee2f7dc8a96326babbf3d0d9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?width=1080&crop=smart&auto=webp&s=7d4c436648cd4d21d47879a2bf6889557e794c9d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/AMFOyNHsUhFYzKFskGBWckZb9a-ee0RHjipsVbm41jU.jpeg?auto=webp&s=075d71a7fcb3576a98d4ef09ccdfd04579271d3e', 'width': 1920}, 'variants': {}}]} | |
Technical feasibility of 100% On-Device AI for Enterprise? Looking for feedback on latency and local RAG | 1 | [removed] | 2025-12-31T14:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q0drjk/technical_feasibility_of_100_ondevice_ai_for/ | Global_Birthday_1948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0drjk | false | null | t3_1q0drjk | /r/LocalLLaMA/comments/1q0drjk/technical_feasibility_of_100_ondevice_ai_for/ | false | false | self | 1 | null |
Tested glm 4.7 for coding projects past week, comparison with deepseek and qwen | 56 | been doing a lot of python backend and react work, probably 200-300 api requests daily. been using deepseek v3 mainly but wanted to test glm 4.7 since it dropped recently
ran it through my actual workflow for about a week
**what i tested:**
* refactoring messy legacy code (python flask app)
* building new features from scratch (react components)
* debugging prod issues
* writing unit tests
* code review and suggestions
**comparison context:**
mainly used deepseek v3, also tried qwen2.5-coder and kimi in past few months
**where glm 4.7 actually impressed me:**
**python backend work** - really solid here. refactoring was clean, understood context well without hallucinating random libraries
asked it to optimize a slow database query and it actually got the schema relationships without me explaining everything twice
**code review** - caught edge cases I missed. not just syntax but actual logic issues
**maintaining context** - this was a big difference from qwen. when debugging iteratively, it remembered what we tried before and adjusted approach. qwen would sometimes lose track after 3-4 iterations
**comparison to other models:**
**vs deepseek v3:** roughly same level for most tasks, maybe glm slightly better at keeping context in long conversations. deepseek still edges it out for very complex algorithmic stuff
**vs qwen2.5-coder:** glm better at context maintenance. qwen sometimes felt like starting fresh each response. but qwen was faster to respond
**vs kimi:** glm way less verbose. kimi would write essay explaining code, glm just gives you working code with brief explanation
**where it struggled:**
**complex react state management** - got confused with nested context providers. needed more guidance
**architectural decisions** - better at implementing than designing. tell it what to build and it'll do it well, but asking "how should I structure this" gave generic answers
**very new libraries** - struggled with anything released past mid 2024. training cutoff showing
**pricing reality:**
deepseek: was spending around $25-30/month
qwen via alibaba cloud: similar, maybe $20-25
glm 4.7: spent like $15 this week doing same work
not a huge difference but adds up if you're doing heavy usage
**open source angle:**
glm being open source is nice. can self-host if needed, fine-tune for specific domains
deepseek also open source but glm feels more actively developed right now
**honest take:**
for everyday coding work (refactoring, debugging, tests, code review) - glm 4.7 handles it fine
comparable to deepseek v3 for most tasks. slightly better context, slightly worse on complex algorithms
way better than kimi (less verbose), better than qwen at maintaining conversation flow
**who should try it:**
* doing high volume coding work
* mostly implementation not architecture
* want good context maintenance across iterations
* already using chinese models, curious about alternatives
**tldr:** glm 4.7 solid for coding, comparable to deepseek v3, better context than qwen, less verbose than kimi, open source, good for everyday dev work. | 2025-12-31T13:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q0dkwz/tested_glm_47_for_coding_projects_past_week/ | CarpenterFine3887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0dkwz | false | null | t3_1q0dkwz | /r/LocalLLaMA/comments/1q0dkwz/tested_glm_47_for_coding_projects_past_week/ | false | false | self | 56 | null |
GitHub - JosefAlbers/VL-JEPA: VL-JEPA in MLX | 1 | 2025-12-31T13:55:59 | https://github.com/JosefAlbers/VL-JEPA | JosefAlbers05 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q0dkm4 | false | null | t3_1q0dkm4 | /r/LocalLLaMA/comments/1q0dkm4/github_josefalbersvljepa_vljepa_in_mlx/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=108&crop=smart&auto=webp&s=f77d07badd8041e7cc47f3e4e4c7484fd5e02790', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=216&crop=smart&auto=webp&s=71178530d0dd5ceaa9128f51d98146f4e399083a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=320&crop=smart&auto=webp&s=c034087cd92cbcd5bd5b62a03436158e0aef2287', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=640&crop=smart&auto=webp&s=0d53e8a0611825f476a5ae50e4847b8d1212b024', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=960&crop=smart&auto=webp&s=bec8565ab7389a7ddb05f8342f1406ba46708de8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?width=1080&crop=smart&auto=webp&s=ef624c21cc9aac3e86ca5dca18b33ba111151ef6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QZctN4h1azoYqe_-jFsVBLpHHKEMRIhFW5wXMu1NIBw.png?auto=webp&s=ef1eed8ae10ea4867eb5de657d7fae53f870f5bb', 'width': 1200}, 'variants': {}}]} | |
the best "unfiltered" alternatives to the big 3 ai bots | 1 | [removed] | 2025-12-31T13:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q0dgjl/the_best_unfiltered_alternatives_to_the_big_3_ai/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0dgjl | false | null | t3_1q0dgjl | /r/LocalLLaMA/comments/1q0dgjl/the_best_unfiltered_alternatives_to_the_big_3_ai/ | false | false | self | 1 | null |
Is deleting the chat history the new “deleting the browser history”? | 0 | I just wanted to do a cleanse. It was filled with tens of 12k context chats of roleplay. I didn’t even count. Now gone forever. I am still keeping my prompts, but it feels strange to see a blank chat log on the UI I am on. No other story I can revise and restart. | 2025-12-31T13:49:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q0dfvj/is_deleting_the_chat_history_the_new_deleting_the/ | IRLLore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0dfvj | false | null | t3_1q0dfvj | /r/LocalLLaMA/comments/1q0dfvj/is_deleting_the_chat_history_the_new_deleting_the/ | false | false | self | 0 | null |
Another large open model from Korea is about to be released (no weights or benchmarks yet); release planned for 4 January 2026 - A.X K1 by SK Telecom (SK Hynix) | 42 | [https://huggingface.co/skt/A.X-K1](https://huggingface.co/skt/A.X-K1)
From elie on 𝕏: [https://x.com/eliebakouch/status/2006345217965011009](https://x.com/eliebakouch/status/2006345217965011009) | 2025-12-31T13:07:20 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0clou | false | null | t3_1q0clou | /r/LocalLLaMA/comments/1q0clou/another_large_open_model_from_korea_about_to_be/ | false | false | default | 42 | {'enabled': True, 'images': [{'id': 'qpjb7igsfjag1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/qpjb7igsfjag1.png?width=108&crop=smart&auto=webp&s=4538edc1e2f25ab769227d360b3b3fbc8fb47088', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/qpjb7igsfjag1.png?width=216&crop=smart&auto=webp&s=48ae64538971b91358e069a144fa5d2cef2efe5c', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/qpjb7igsfjag1.png?width=320&crop=smart&auto=webp&s=5cb460a1645926267f588054839238316a7c0c84', 'width': 320}, {'height': 442, 'url': 'https://preview.redd.it/qpjb7igsfjag1.png?width=640&crop=smart&auto=webp&s=cdefd8ab8fcfa58e639e0c27266c01e61e3dc190', 'width': 640}], 'source': {'height': 593, 'url': 'https://preview.redd.it/qpjb7igsfjag1.png?auto=webp&s=633c94c7755cacb666614ef8bc16b7cf5e3b02e9', 'width': 858}, 'variants': {}}]} | |
When should you choose F16 over Q8_0 quantization? | 19 | We've all read about how Q8_0 is "virtually indistinguishable" from F16 when doing inference.
Have you personally run into a use-case where you managed to notice a difference between the two?
(This question came to my mind as I'm downloading MedGemma 27B to ask it some private medical questions. I intend to put up with the painfully slow inference at F16.) | 2025-12-31T13:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q0ci23/when_should_you_choose_f16_over_q8_0_quantization/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q0ci23 | false | null | t3_1q0ci23 | /r/LocalLLaMA/comments/1q0ci23/when_should_you_choose_f16_over_q8_0_quantization/ | false | false | self | 19 | null |
Can we sample DPO data from the same dataset that was used for LoRA training? | 5 | I am trying to understand best practices around data usage when combining LoRA fine-tuning with Direct Preference Optimization (DPO), and I would appreciate insights from people who have done this in practice.
Specifically, is it acceptable (or advisable) to sample DPO preference data from the same underlying dataset that was already used to train a LoRA adapter?
To clarify the setup:
- A base model is first adapted using LoRA, trained on a supervised dataset (e.g., instruction-response pairs).
- After that, DPO is applied to further align the model using preference pairs (chosen vs. rejected responses).
- The question is whether those DPO preference pairs can be derived from the same original dataset used for LoRA training, rather than from a completely separate corpus.
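To make the setup concrete, here is a minimal sketch of the kind of disjoint split I am asking about, using the HF `datasets` API (the dataset name and split ratio are placeholders):

```python
from datasets import load_dataset

# Illustrative only: any instruction dataset with prompt/response columns.
ds = load_dataset("my_org/my_instruction_data", split="train").shuffle(seed=42)

n = len(ds)
sft_ds = ds.select(range(0, int(0.8 * n)))        # used only for LoRA SFT
pref_source = ds.select(range(int(0.8 * n), n))   # reserved for building DPO pairs

# DPO pairs would then be built from pref_source, e.g. by sampling two
# completions per prompt from the SFT'd model and labeling chosen/rejected.
```

The alternative I am asking about is building the preference pairs from the same slice the LoRA adapter already saw.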
I would be especially interested in:
- Empirical results comparing reused vs. disjoint datasets for LoRA + DPO
- Recommended data-splitting strategies if reuse is acceptable
- Any failure modes observed when the same data source is used across both stages

Thanks in advance! Looking forward to hearing how others handle this in real-world pipelines.
Second GPU | 0 | I have an RTX 3060 Ti 16GB GPU in my system now and I'm looking to upgrade for more VRAM, so I want to connect a second GPU. The 3060 has enough power (it usually runs at around 40% when running models).
So my question is: Should something like this work fine? Tesla M60 16GB | 2025-12-31T12:53:29 | Suomi422 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q0cc4w | false | null | t3_1q0cc4w | /r/LocalLLaMA/comments/1q0cc4w/second_gpu/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6th6b354ejag1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=108&crop=smart&auto=webp&s=8cebe2ed968f86fa28c22139af4dc3e397161358', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=216&crop=smart&auto=webp&s=8c198573ff9f5c6fbb6df5433555b31acdb3cbaf', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=320&crop=smart&auto=webp&s=22566a1973d52f91e662cd8464d7ae1e3a151680', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=640&crop=smart&auto=webp&s=de15d64d43b8a60b1bfd8e02cccbcec554c9b45a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=960&crop=smart&auto=webp&s=390dd54496fe807e8ea416b5f6a65332fb17762a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?width=1080&crop=smart&auto=webp&s=a667afea5e4d3c313a9f6c425dd7c9f9d5b06210', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/6th6b354ejag1.jpeg?auto=webp&s=e9fb4e92e57b25e73f1360e5c278993a5a5edfd1', 'width': 1080}, 'variants': {}}]} | |
Deep Agents vs AI Agents: Architecture + Code + Demo | 0 | The "One-Shot" Agent era is ending. "Deep Agents" are the new architectural primitive. 🏗️
As AI Architects, we usually build "Traditional Agents": User Query → LLM → Tool Call → Final Answer. These work for simple lookups, but they fail at complex, multi-step goals like "Build a website" or "Write a comprehensive market research report."
I just uploaded a new breakdown on the architecture of Deep Agents (similar to Claude Code or Manus), and it highlights the necessary shift in our design patterns:
Key Architectural Differences:
State Persistence (File System): Deep agents don't just rely on the context window. They actively "dump" intermediate context and research findings into a virtual file system to manage token limits and maintain state across long-running tasks.
Hierarchical Delegation: It’s not one loop. It’s an Orchestrator that delegates to specialized Sub-Agents (e.g., a Research Agent) that have their own distinct loops and tools.
The "Think" Tool: Implementing a specific "Reflection" step where the agent pauses to validate if it has enough information before proceeding, preventing the "hallucination by completion" problem.
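As a toy illustration of the pattern (planning, a virtual file system, and delegation) in plain Python; this is not the deep-agents API, and `call_llm` is an assumed stand-in for your model call:

```python
VFS: dict[str, str] = {}   # virtual file system: intermediate state lives here, not in the prompt

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def research_subagent(task: str) -> str:
    """Sub-agent with its own loop; only a short digest goes back up."""
    notes = call_llm(f"Research this and summarize findings: {task}")
    VFS[f"research/{task[:30]}.md"] = notes          # dump full notes to the VFS
    return notes[:500]

def orchestrator(goal: str) -> str:
    plan = call_llm(f"Break this goal into 3-5 research tasks, one per line: {goal}")
    for task in filter(None, (line.strip() for line in plan.splitlines())):
        research_subagent(task)
    # "Think" step: reflect on gathered material before the final write-up.
    gaps = call_llm("Given these files, what is still missing?\n" + "\n".join(VFS))
    return call_llm(
        f"Write the final report for '{goal}'.\nNotes:\n" + "\n\n".join(VFS.values())
        + f"\nGaps noted: {gaps}"
    )
```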
In the video, I walk through the new deep-agents package from LangChain, which standardizes these patterns (Planning, File System, Sub-agents) so you don't have to build the orchestration logic from scratch.
If you are trying to move from "Chatbots" to "Autonomous Workers," this architecture is the blueprint.
#AIArchitecture #DeepAgents #LangChain #SystemDesign #LLM #AgenticAI #DevOps | 2025-12-31T12:40:03 | https://youtu.be/VzR_-_Awnas | buntyshah2020 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1q0c3kq | false | {'oembed': {'author_name': 'Bunty Shah', 'author_url': 'https://www.youtube.com/@aiwithbuntyshah', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VzR_-_Awnas?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Deep Agents vs AI Agents: Architecture + Code + Demo"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/VzR_-_Awnas/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Deep Agents vs AI Agents: Architecture + Code + Demo', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1q0c3kq | /r/LocalLLaMA/comments/1q0c3kq/deep_agents_vs_ai_agents_architecture_code_demo/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'LPuIAjb9TiqsUwPIx6Ry60pEm09MH-LlSLf5LnW-FBU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LPuIAjb9TiqsUwPIx6Ry60pEm09MH-LlSLf5LnW-FBU.jpeg?width=108&crop=smart&auto=webp&s=f45062c804e382ed9fcbcfd8ff666ecc6c594800', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LPuIAjb9TiqsUwPIx6Ry60pEm09MH-LlSLf5LnW-FBU.jpeg?width=216&crop=smart&auto=webp&s=1a1f668a13d58d63c0d6c70e48c59b98e387a744', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LPuIAjb9TiqsUwPIx6Ry60pEm09MH-LlSLf5LnW-FBU.jpeg?width=320&crop=smart&auto=webp&s=c2cb9715b2c5f6377ea39950e2564bab9b9a1f59', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LPuIAjb9TiqsUwPIx6Ry60pEm09MH-LlSLf5LnW-FBU.jpeg?auto=webp&s=5d52e79c23b844076777c17c441d6c7c99301feb', 'width': 480}, 'variants': {}}]} |