| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Putting topk to bed once and for all? | 2 | wtf is topk?
topk is the 'google search results' limit applied to your next token, every token.
topk 40? You get the top 40 results.
topk 100? You get the top 100 results.
topk 0? You get the top 200,000 results for gpt120 because that's what its 'vocabulary size' is, apparently.
Someone mentioned in another thread, "zomg, you shouldn't use topk 0, there's no need! it's really slow!"
They were right.
Using topk 0 for gpt120 and doing a test chat, I'm straight down to 100t/s from my potential llama-bench of 160.
Fire it back up with topk 100? Sits around 140t/s...
So how much topk do we truly need? Gotta test it, somehow? Apparently this is done via 'logprobs' which is that handy token search results filter mentioned above.
I'm looking at llama-server -h and I don't immediately see a logprobs or logits type option. How are people checking this?
For a given prompt, I want to be able to check just how deep the probabilities went for all tokens generated. I want to see if or how often I pass that top 100 mark or even top 5000 mark, etc.
Is this doable with llama.cpp or is it back to vllm for this?
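For reference, this is roughly the check I'm hoping to do, sketched in Python against llama-server's native /completion endpoint. The parameter and response field names (n_probs, completion_probabilities, tok_str) are from memory of the server README and may not match newer builds, so treat them as assumptions:

```python
# Ask llama-server for per-token candidate lists, then record the rank of the token
# that was actually emitted. Field names are assumptions -- verify against your build.
import requests

resp = requests.post("http://localhost:8080/completion", json={
    "prompt": "Write a haiku about GPUs.",
    "n_predict": 128,
    "top_k": 0,       # no truncation, so ranks aren't capped
    "n_probs": 200,   # return the top-200 candidates for every generated token
}).json()

ranks = []
for tok in resp.get("completion_probabilities", []):
    cands = [c["tok_str"] for c in tok["probs"]]
    ranks.append(cands.index(tok["content"]) + 1 if tok["content"] in cands else None)

print("deepest rank seen:", max(r for r in ranks if r is not None))
```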
https://preview.redd.it/uj1esxsjav7g1.png?width=1024&format=png&auto=webp&s=eedac8801e141910a635757135dddb3beb7496fa
| 2025-12-18T01:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ppei5j/putting_topk_to_bed_once_and_for_all/ | Aggressive-Bother470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppei5j | false | null | t3_1ppei5j | /r/LocalLLaMA/comments/1ppei5j/putting_topk_to_bed_once_and_for_all/ | false | false | 2 | null | |
Speed issues with 3x 3090s but good with 2x 3090 and a 5070... | 3 | I have 2x 3090s inside my PC and an eGPU through Oculink. When I pair the 3090s with the 3080 or a third 3090, the speed is quite a bit slower, but if I pair them with the 5070 it's decent. I am using LM Studio, so I don't know if that is the issue or if the 5000 series is doing something fancy?
I'm trying to run 3x 3090s so I can use the Q4 of GLM 4.5 Air at a good speed.
GLM 4.5 Air Q2_K_L:
2x 3090: 65 tks
2x 3090 + 5070: 46-56 tks
2x 3090 + 2070: 17-21 tks
2x 3090 + 3080: 17-22 tks
2x 3090 + CPU: 9.3 tks
3x 3090: 9 tks
| 2025-12-18T00:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ppdafw/speed_issues_with_3x_3090s_but_good_with_2x_3090/ | lemondrops9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppdafw | false | null | t3_1ppdafw | /r/LocalLLaMA/comments/1ppdafw/speed_issues_with_3x_3090s_but_good_with_2x_3090/ | false | false | self | 3 | null |
2x Hailo 10H running LLMs on Raspberry Pi 5 | 33 | I tested two Hailo 10H running on Raspberry Pi 5, ran 2 LLMs and made them talk to each other: [https://github.com/martincerven/hailo\_learn](https://github.com/martincerven/hailo_learn)
Also showed how it runs with/without heatsinks using a thermal camera.
Each module has 8 GB LPDDR4 and connects over M.2 PCIe.
I will try more examples like Whisper, VLMs next. | 2025-12-17T23:43:36 | https://youtu.be/yhDjQx-Dmu0 | martincerven | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ppbx2r | false | {'oembed': {'author_name': 'Martin Cerven', 'author_url': 'https://www.youtube.com/@martincerven', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/yhDjQx-Dmu0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Hailo 10H AI Accelerator Review"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/yhDjQx-Dmu0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Hailo 10H AI Accelerator Review', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ppbx2r | /r/LocalLLaMA/comments/1ppbx2r/2x_hailo_10h_running_llms_on_raspberry_pi_5/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': '-UtatTVLPFBH5nFCnadOByG6DM8Q0W-9QDOoZnCbr0o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-UtatTVLPFBH5nFCnadOByG6DM8Q0W-9QDOoZnCbr0o.jpeg?width=108&crop=smart&auto=webp&s=a0e43fa60ccb569da76f1d9e07e0a123aed8246e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-UtatTVLPFBH5nFCnadOByG6DM8Q0W-9QDOoZnCbr0o.jpeg?width=216&crop=smart&auto=webp&s=0078b6532092eec96ccd475223992bd5a2ae894e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-UtatTVLPFBH5nFCnadOByG6DM8Q0W-9QDOoZnCbr0o.jpeg?width=320&crop=smart&auto=webp&s=1ed76759fc59b73820b966057effb73022a0c7ef', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-UtatTVLPFBH5nFCnadOByG6DM8Q0W-9QDOoZnCbr0o.jpeg?auto=webp&s=6c0017bf9aa92ea5ab6268c51e329448d8397564', 'width': 480}, 'variants': {}}]} |
Getting most of your local LLM setup - a GitHub list | 12 | Two months ago, I posted "[Getting most out of your local LLM setup](https://www.reddit.com/r/LocalLLaMA/comments/1oclug7/getting_most_out_of_your_local_llm_setup/)" where I shared my personal experience setting up and using ~70 different LLM-related services. Now, it's also available as a GitHub list.
[https://github.com/av/awesome-llm-services](https://github.com/av/awesome-llm-services)
Thanks! | 2025-12-17T23:35:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ppbq2k/getting_most_of_your_local_llm_setup_a_github_list/ | Everlier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppbq2k | false | null | t3_1ppbq2k | /r/LocalLLaMA/comments/1ppbq2k/getting_most_of_your_local_llm_setup_a_github_list/ | false | false | self | 12 | null |
I built a stability-first AI runtime in Rust to prevent OOM and Thrashing on consumer GPUs (Atenia Engine) | 0 | Hi everyone,
Like many of you, I’ve struggled with runtimes that assume I have unlimited VRAM or a dedicated A100 cluster. In reality, on local/shared hardware, memory fragmentation and scheduler jitter cause OOMs even when theoretically there is enough free memory.
I spent the last year building **Atenia Engine**, an execution-centric runtime designed to prioritize **stability over peak performance**.
**What it does differently:**
* **Virtual Execution:** Before running a kernel, it simulates execution in a lightweight virtual model to predict if it will crash or thrash. If it's risky, it adapts *before* hitting the GPU.
* **Control Theory, not ML:** It uses deterministic control logic to stabilize scheduling. It doesn't use a neural network to manage the neural network (no "black box" problems).
* **Resilience:** It aims for continuity. It handles fallback transparently without aborting the process.
* **Written in Rust:** For memory safety and predictable latency.
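To make the "virtual execution" idea above concrete, here is a hypothetical Python sketch of simulate-before-dispatch. This is not Atenia's API; the memory estimate is a crude stand-in, and it assumes a CUDA device:

```python
# Predict a rough memory footprint before launching work; if it looks risky,
# adapt the batch size *before* the GPU ever sees the kernel.
import torch

def predicted_bytes(batch: int, seq_len: int, d_model: int, dtype_bytes: int = 2) -> int:
    # Stand-in for the virtual model: activation estimate for one matmul-heavy layer.
    return batch * seq_len * d_model * dtype_bytes * 4

def safe_dispatch(run_fn, batch: int, seq_len: int, d_model: int):
    free, _total = torch.cuda.mem_get_info()
    # Keep ~20% headroom; halve the batch until the prediction fits.
    while batch > 1 and predicted_bytes(batch, seq_len, d_model) > 0.8 * free:
        batch //= 2
    return run_fn(batch)

# Usage: safe_dispatch(lambda b: model(inputs[:b]), batch=64, seq_len=4096, d_model=4096)
```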
It is **Open Source (Apache 2.0)**. I want this to be a tool for robust execution in constrained environments.
**The "Proof":** I believe trust should be earned via `cargo test`, not benchmarks. The repo includes reproducible tests for stability and oscillation prevention.
**Links:**
* **GitHub:** https://github.com/AteniaEngine/ateniaengine
* **Project Page & Whitepaper:** https://ateniaengine.com/
I’m looking for feedback from people running heavy workloads on constrained hardware. Let me know what you think! | 2025-12-17T23:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ppbem5/i_built_a_stabilityfirst_ai_runtime_in_rust_to/ | Atenia_Engine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppbem5 | false | null | t3_1ppbem5 | /r/LocalLLaMA/comments/1ppbem5/i_built_a_stabilityfirst_ai_runtime_in_rust_to/ | false | false | self | 0 | null |
GPT-5.2, GPT-5.1 codex, Claude Code, Gemini 3 Pro on fresh SWE-rebench (November 2025) | 1 | [deleted] | 2025-12-17T22:45:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ppajrj | false | null | t3_1ppajrj | /r/LocalLLaMA/comments/1ppajrj/gpt52_gpt51_codex_claude_code_gemini_3_pro_on/ | false | false | default | 1 | null | ||
Fit 20% more context into your prompts using this lightweight pre-processor (Benchmarks included) | 0 | Hey everyone,
We all know the pain of limited context windows (especially on local 8k/16k models). If you are doing RAG, you are probably wasting a chunk of that window on useless HTML tags, excessive whitespace, or redundant JSON keys.
I built a small tool called **Prompt Refiner** to fix this. It’s a "last-mile" cleaner before your prompt hits the model.
**The Cool Part (Benchmarks):** I ran tests using GPT-4o and SQuAD datasets.
* **Aggressive Strategy:** Reduces token usage by **~15-20%**.
* **Quality:** Semantic similarity of the output remained >96%.
Basically, you get the same answer, but you can fit more documents into your context window (or just generate faster).
It also handles **Tool/Function Calling compression** (stripping nulls/empty lists from API responses), which is huge if you run agents.
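To make the "last-mile" idea concrete, here is a generic sketch of the kind of cleaning involved and how to measure the savings. It uses plain regex plus tiktoken rather than Prompt Refiner's own API, so it only illustrates the concept:

```python
# Strip HTML and collapse whitespace from a RAG chunk, then compare token counts.
import re
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def clean(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", text)       # drop HTML tags
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

raw = "<div class='doc'>  <p>Install   the   package:</p>\n\n\n<code>pip install foo</code> </div>"
cleaned = clean(raw)
print(len(enc.encode(raw)), "->", len(enc.encode(cleaned)), "tokens")
```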
**Repo is here:**[https://github.com/JacobHuang91/prompt-refiner](https://github.com/JacobHuang91/prompt-refiner)
Let me know if you want me to add support for any specific cleaning logic! | 2025-12-17T22:38:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ppae98/fit_20_more_context_into_your_prompts_using_this/ | Ok-Suggestion7846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppae98 | false | null | t3_1ppae98 | /r/LocalLLaMA/comments/1ppae98/fit_20_more_context_into_your_prompts_using_this/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=108&crop=smart&auto=webp&s=1f7a1210f1aef2aca0448bac874966bd15ed9a9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=216&crop=smart&auto=webp&s=eac07e32b60ec6e6e10e03bc0429bd9454450950', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=320&crop=smart&auto=webp&s=1d2933b42c4cd5ff8223e000645714c2b76de7e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=640&crop=smart&auto=webp&s=99c79120d85389b433064e42788bf99ac00eb32f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=960&crop=smart&auto=webp&s=2386a29f6422e51331209815c4d8f35f541785a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?width=1080&crop=smart&auto=webp&s=325e1d0ded459d016e77067784de234f1e39c2c7', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/LfCLg1Xmj54MMK3La9pr1pffRUL5eqTp38aom-upUGA.png?auto=webp&s=29f479b4eace1cf7c1103b32870cdec0f73d5e71', 'width': 1280}, 'variants': {}}]} |
Looking for a fast LLM for MATLAB coding agent | 3 | \- \*\*Hardware:\*\*
\- Ryzen 9 9950X
\- 64 GB DDR5‑6000
\- RX 9070XT 16 GB VRAM
\- \*\*Use case:\*\* MATLAB coding agent (mostly MATLAB, some Python).
\- \*\*Constraints:\*\*
\- Decent speed >35 tok/s ideally
\- \~4 GB RAM free for a running MATLAB (all VRAM can go to LLM)
\- Context window of at least \*\*100K\*\* tokens as working on medium sized project
\- Reliable MATLAB code, good tool‑calling support.
\- \*\*Current setup:\*\* LM Studio + Opencode CLI.
\---
\*\*Models I’ve tried (all Q4‑quantized unless noted)\*\*
| Model | Speed (short ctx) | Speed (\~10k ctx) | MATLAB score | Notes |
|-------|-------------------|------------------|--------------|-------|
| GPT‑OSS 20b | \~110 tok/s | \~25 tok/s | 6/10 | Fast but slows past 20k |
| Devstral‑2‑2512 | – | – | 2/10 | Tool‑calling issues, slow |
| NVIDIA Nemotron 3 Nano | \~38 tok/s | – | 9/10 | Great context, cant get thinking toggle from opencode to work |
| Qwen3 Coder 30b a3b | \~60 tok/s | \~30 tok/s | 10/10 | Best MATLAB, slows >10k |
| Qwen 2.5 Coder 14b | \~140 tok/s | – | 5/10 | Fast but poor context/code |
| Granite 4H tiny | \~155 tok/s | – | 1/10 | Hallucinates a lot |
| Qwen3 Next 80b instruct (Q3) | \~13 tok/s | – | 3/10 | Very slow |
\---
\*\*Questions\*\*
\- Any models I should try out that I haven't tried already
\- Any ways to speed up inference on my current machine?
\- Suggestions on quantisation
\- How can I enable/disable the agent’s “thinking” mode from Opencode config?
| 2025-12-17T22:36:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ppac0q/looking_for_a_fast_llm_for_matlab_coding_agent/ | ConversationOver9445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppac0q | false | null | t3_1ppac0q | /r/LocalLLaMA/comments/1ppac0q/looking_for_a_fast_llm_for_matlab_coding_agent/ | false | false | self | 3 | null |
Where are people getting nvlinks for 3090s? | 5 | Worth getting? I see them going for over 200 bucks these days on ebay. | 2025-12-17T22:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ppa8t0/where_are_people_getting_nvlinks_for_3090s/ | csl110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppa8t0 | false | null | t3_1ppa8t0 | /r/LocalLLaMA/comments/1ppa8t0/where_are_people_getting_nvlinks_for_3090s/ | false | false | self | 5 | null |
Cut your LLM API bills by 5-70% with automatic prompt optimization | 1 |
For anyone running AI agents or RAG applications, I built a tool to dramatically reduce API costs.
**Prompt Refiner** optimizes LLM inputs automatically:
💰 **Real Cost Savings:**
- Small agent (5 tools, 100 calls/day): $44/month saved
- Medium agent (10 tools, 500 calls/day): $541/month saved
- Large agent (20 tools, 1000 calls/day): $3,249/month saved
📊 **Proven Results:**
- Function calling: 57% average reduction (tested on 20 real APIs)
- RAG contexts: 5-15% reduction
- 100% validated with real OpenAI function calling
⚡ **Zero Performance Impact:**
- < 0.5ms per 1k tokens
- Negligible compared to API latency
🔧 **Easy to Use:**
```python
from prompt_refiner import SchemaCompressor, MessagesPacker
# Cleaner import path below is an assumption; adjust to wherever StripHTML /
# NormalizeWhitespace live in the package.
from prompt_refiner import StripHTML, NormalizeWhitespace

tool_schema = {"name": "get_weather", "parameters": {}}  # placeholder function-calling schema
rag_docs = "<p>Retrieved RAG context...</p>"             # placeholder retrieved context

# Compress function calling schemas (57% savings)
compressed = SchemaCompressor().process(tool_schema)

# Pack messages with automatic cleaning
packer = MessagesPacker(
    system="<p>System prompt</p>",  # Auto-cleaned
    context=(rag_docs, StripHTML() | NormalizeWhitespace()),
    query="User question"
)
```
**For AI Agent Builders:**
The biggest win is function calling optimization. If you're using OpenAI/Anthropic with tools, this alone can cut your costs in half.
- GitHub: https://github.com/JacobHuang91/prompt-refiner
- Try it live: https://huggingface.co/spaces/Xinghao91/prompt-refiner
Been running this in production for months - happy to answer questions! | 2025-12-17T22:32:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ppa8pg/cut_your_llm_api_bills_by_570_with_automatic/ | Ok-Suggestion7846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppa8pg | false | null | t3_1ppa8pg | /r/LocalLLaMA/comments/1ppa8pg/cut_your_llm_api_bills_by_570_with_automatic/ | false | false | self | 1 | null |
Qwen3 30b A3B to what | 15 | Hi, not sure if this is the right sub. I haven't been paying attention to LLM models for like 6 months, but I'm wondering if there are any models that are better than Qwen3 30b A3B for general questions and some research (via the Page Assist browser extension), with similar speed to the Qwen3 30b A3B model.
For context I use a MacBook Pro 14" M1 Max with 64gb ram. | 2025-12-17T22:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ppa03u/qwen3_30b_a3b_to_what/ | headfirst5376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ppa03u | false | null | t3_1ppa03u | /r/LocalLLaMA/comments/1ppa03u/qwen3_30b_a3b_to_what/ | false | false | self | 15 | null |
AMA with the Meta researchers behind SAM 3 + SAM 3D + SAM Audio | 132 | Hi r/LocalLlama! We’re the research team behind the newest members of the Segment Anything collection of models: SAM 3 + SAM 3D + SAM Audio.
We’re excited to be here to talk all things SAM (sorry, we can’t share details on other projects or future work) and have members from across our team participating:
**SAM 3 (**[**learn more**](https://ai.meta.com/blog/segment-anything-model-3/?utm_source=reddit&utm_medium=organic_social&utm_content=ama&utm_campaign=sam)**):**
* Nikhila Ravi
* Pengchuan Zhang
* Shoubhik Debnath
* Chay Ryali
* Yuan-Ting Hu
**SAM 3D (**[**learn more**](https://ai.meta.com/blog/sam-3d/?utm_source=reddit&utm_medium=organic_social&utm_content=ama&utm_campaign=sam)**):**
* Weiyao Wang
* Sasha Sax
* Xitong Yang
* Jinkun Cao
* Michelle Guo
**SAM Audio (**[**learn more**](https://ai.meta.com/blog/sam-audio/?utm_source=reddit&utm_medium=organic_social&utm_content=ama&utm_campaign=sam)**):**
* Bowen Shi
* Andros Tjandra
* John Hoffman
You can try SAM Audio, SAM 3D, and SAM 3 in the Segment Anything Playground: [https://go.meta.me/87b53b](https://go.meta.me/87b53b)
**We’ll be answering questions live on Thursday, Dec. 18, from 2-3pm PT. Hope to see you there.**
| 2025-12-17T22:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pp9w31/ama_with_the_meta_researchers_behind_sam_3_sam_3d/ | AIatMeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp9w31 | false | null | t3_1pp9w31 | /r/LocalLLaMA/comments/1pp9w31/ama_with_the_meta_researchers_behind_sam_3_sam_3d/ | false | true | self | 132 | null |
Catsu: A unified Python client for 50+ embedding models across 11 providers | 5 | Hey r/LocalLLaMA,
We just released Catsu, a Python client for embedding APIs.
Why we built it:
We maintain Chonkie (a chunking library) and kept hitting the same problems with embedding clients:
1. OpenAI's client has undocumented per-request token limits (~300K) that cause random 400 errors. Their rate limits don't apply consistently either.
2. VoyageAI's SDK had an UnboundLocalError in retry logic until v0.3.5 (Sept 2024). Integration with vector DBs like Weaviate throws 422 errors.
3. Cohere's SDK breaks downstream libraries (BERTopic, LangChain) with every major release. The `input_type` parameter is required but many integrations miss it, causing silent performance degradation.
4. LiteLLM treats embeddings as an afterthought. The `dimensions` parameter only works for OpenAI. Custom providers can't implement embeddings at all.
5. No single source of truth for model metadata. Pricing is scattered across 11 docs sites. Capability discovery requires reading each provider's API reference.
What catsu does:
* Unified API across 11 providers: OpenAI, Voyage, Cohere, Jina, Mistral, Gemini, Nomic, mixedbread, DeepInfra, Together, Cloudflare
* 50+ models with bundled metadata (pricing, dimensions, context length, MTEB/RTEB scores)
* Built-in retry with exponential backoff (1-10s delays, 3 retries)
* Automatic cost and token tracking per request
* Full async support
* Proper error hierarchy (RateLimitError, AuthenticationError, etc.)
* Local tokenization (count tokens before calling the API)
Example:

    import catsu
    client = catsu.Client()
    response = client.embed(model="voyage-3", input="Hello, embeddings!")
    print(f"Dimensions: {response.dimensions}")
    print(f"Tokens: {response.usage.tokens}")
    print(f"Cost: ${response.usage.cost:.6f}")
    print(f"Latency: {response.usage.latency_ms}ms")
Auto-detects provider from model name. API keys from env vars. No config needed.
Links:
* GitHub: [https://github.com/chonkie-inc/catsu](https://github.com/chonkie-inc/catsu)
* Docs: [https://docs.catsu.dev](https://docs.catsu.dev)
* PyPI: pip install catsu
* Apache 2.0 licensed. We'd love feedback and contributions.
---
FAQ:
Why not just use LiteLLM?
LiteLLM is great for chat completions but embeddings are an afterthought. Their embedding support inherits all the bugs from native SDKs, doesn't support dimensions for non-OpenAI providers, and can't handle custom providers.
What about the model database?
We maintain a JSON catalog with 50+ models. Each entry has: dimensions, max tokens, pricing, MTEB score, supported quantizations (float/int8/binary), and whether it supports dimension reduction. PRs welcome to add models.
Is it production-ready?
We use it in production at Chonkie. Has retry logic, proper error handling, timeout configuration, and async support.
# | 2025-12-17T22:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pp9kmc/catsu_a_unified_python_client_for_50_embedding/ | shreyash_chonkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp9kmc | false | null | t3_1pp9kmc | /r/LocalLLaMA/comments/1pp9kmc/catsu_a_unified_python_client_for_50_embedding/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=108&crop=smart&auto=webp&s=d89910dbc3ce832e6f080824ea26a15dc70644d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=216&crop=smart&auto=webp&s=d85790b672ef5d26060823461c391631b2e49d40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=320&crop=smart&auto=webp&s=427b76fcd4c332e940c7047245053b2cdc9e35fd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=640&crop=smart&auto=webp&s=b317621f16767c46630ef4febab0cda23e5559ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=960&crop=smart&auto=webp&s=fbe89736ef02b76cf97fe09250e320e10b838715', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?width=1080&crop=smart&auto=webp&s=53a28750acae3b1f127c115847237589d1b50db8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PwxE7gDx5z-5M45MYB9hEjOmIFWLSFKj6uRBr8qtefc.png?auto=webp&s=055ccffbd611f26a828685d1742a126e83a3ea91', 'width': 1200}, 'variants': {}}]} |
I'm surprised ollama 3.1 can run on my system. | 0 | I love how you can see what the AI is thinking of before it responds to your message. | 2025-12-17T22:01:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pp9hhc/im_surprised_ollama_31_can_run_on_my_system/ | CasualAuthor47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp9hhc | false | null | t3_1pp9hhc | /r/LocalLLaMA/comments/1pp9hhc/im_surprised_ollama_31_can_run_on_my_system/ | false | false | self | 0 | null |
Someone has found the benchmarks for gemini 3 flash thinking and gemini 3 flash minimal thinking? | 0 | Deepmind does show just the thinking model benchmarks? How is the 3 fast models with minimal reasoning compared to gemini 2.5 pro? | 2025-12-17T22:00:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pp9gvf/someone_has_found_the_benchmarks_for_gemini_3/ | Longjumping_Fly_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp9gvf | false | null | t3_1pp9gvf | /r/LocalLLaMA/comments/1pp9gvf/someone_has_found_the_benchmarks_for_gemini_3/ | false | false | self | 0 | null |
Nvidia plans heavy cuts to GPU supply in early 2026 | 340 | 2025-12-17T21:37:13 | https://overclock3d.net/news/gpu-displays/nvidia-plans-heavy-cuts-to-gpu-supply-in-early-2026/ | HumanDrone8721 | overclock3d.net | 1970-01-01T00:00:00 | 0 | {} | 1pp8vo4 | false | null | t3_1pp8vo4 | /r/LocalLLaMA/comments/1pp8vo4/nvidia_plans_heavy_cuts_to_gpu_supply_in_early/ | false | false | default | 340 | null | |
Local tools for working with llm datasets? | 8 | I’ve been doing data science for years, and am very familiar with jupyter notebooks and more recently been using duckdb a lot. But now I have this huge pile of output tokens from my 4090s, and it feels characteristically different from data I’ve worked with in the past. I haven’t figured out a good workflow with notebooks and duckdb for working with huge volumes of text data like my training set and llm output traces.
What have you found work well for this? I’m trying to fine-tune on a large text dataset and be able to inspect the output from eval runs. I would prefer local and open source tools to a paid service. | 2025-12-17T21:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pp86yi/local_tools_for_working_with_llm_datasets/ | dbplatypii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp86yi | false | null | t3_1pp86yi | /r/LocalLLaMA/comments/1pp86yi/local_tools_for_working_with_llm_datasets/ | false | false | self | 8 | null |
Variable Sized Experts in MoEs | 28 | I've been messing around with variable sized experts in MoEs over the past few months, built on top of [nanoGPT](https://github.com/karpathy/nanoGPT) (working on nanochat support right now!) and [MegaBlocks](https://github.com/databricks/megablocks) for efficient MoE computation.
In short, the variable sized models do train faster (the 23:1 ratio of large:small experts trains 20% faster with 2.5% higher loss), but that's just because they're using smaller experts on average. When I compared against vanilla MoEs with the same average size, we don't see an efficiency gain. So, the main practical finding is confirming that you don't need the traditional 4x expansion factor; smaller experts are more efficient (DeepSeek V3 and Kimi K2 already use ~2.57x).
The real work I did was trying to chase down which tokens go to which size of experts on average. In this setup, tokens in constrained contexts like code or recipes go to small experts, and more ambiguous tokens like " with" and " to" go to larger ones. I think it's about contextual constraint. When what comes next is more predictable (code syntax, recipe format), the model learns to use less compute. When it's ambiguous, it learns to use more.
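If you haven't seen variable-sized experts before, here's a toy sketch of what that means mechanically: same router, but experts with different FFN widths (hypothetical sizes and top-1 routing for brevity; this is not the nanoMOE/MegaBlocks implementation):

```python
# Toy MoE layer where expert "size" = hidden width of its FFN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariableSizedMoE(nn.Module):
    def __init__(self, d_model=256, expert_hidden=(128, 128, 512, 1024)):
        super().__init__()
        self.router = nn.Linear(d_model, len(expert_hidden))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, h), nn.GELU(), nn.Linear(h, d_model))
            for h in expert_hidden
        )

    def forward(self, x):                      # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        top_p, top_idx = probs.max(dim=-1)     # top-1 routing for simplicity
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                # Scale by router probability so the routing decision stays differentiable.
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out

print(VariableSizedMoE()(torch.randn(10, 256)).shape)   # torch.Size([10, 256])
```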
Here's my [full writeup](https://hbfreed.com/2025/12/16/variable-size-experts.html),
[Visualization 1](https://hbfreed.com/assets/visualizations/moe-routing-viz.html),
[Visualization 2 (code boogaloo)](https://hbfreed.com/assets/visualizations/moe-code-routing-viz.html),
and
[Github](https://github.com/hbfreed/nanoMOE)! | 2025-12-17T20:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pp7x2r/variable_sized_experts_in_moes/ | hbfreed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp7x2r | false | null | t3_1pp7x2r | /r/LocalLLaMA/comments/1pp7x2r/variable_sized_experts_in_moes/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=108&crop=smart&auto=webp&s=82fa8e347e0a729ac076cedfbd34a9e661631883', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=216&crop=smart&auto=webp&s=179a7090ec33f709b87a6857ebd1b120280c2368', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=320&crop=smart&auto=webp&s=e46c12dd8e29e71fecd6dd8e08c98bbee67f50dd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=640&crop=smart&auto=webp&s=adbf1f17d4f575142d488b1c0344c5d31c635ee0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=960&crop=smart&auto=webp&s=8a61973caa76350312e36debbee6ef9557174528', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?width=1080&crop=smart&auto=webp&s=23b4138ef02bdb3417acfc1abe60b17006b54a8c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fLIvDRZLyh78AgemTcH3hOHt7Nr7EPXJOxuK1XLNuXU.png?auto=webp&s=6f1a9a39f872bfaedef13f3cafc126ce50f15495', 'width': 1200}, 'variants': {}}]} |
Open Source Habit Tracking App Built with Next.js - Full-Stack Architecture | 1 | [removed] | 2025-12-17T20:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pp7uw3/open_source_habit_tracking_app_built_with_nextjs/ | Prestigious_Skin6507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp7uw3 | false | null | t3_1pp7uw3 | /r/LocalLLaMA/comments/1pp7uw3/open_source_habit_tracking_app_built_with_nextjs/ | false | false | self | 1 | null |
Analyzed 100 tech tutorials AI assistants cite. 25% were AI-generated. Data inside. | 4 | Been building AI tools that use web search to find and implement tech-related solutions. I was curious how many of the tutorials are AI-generated or vendor content, and whether that is affecting what content my AI is getting. Basically I am trying to only fetch high-quality, unbiased (non-shilling) material.
I don't know what I expected, but roughly 25% of the tutorials I pulled were maybe AI-generated. Also found something called "GEO" (Generative Engine Optimization: like SEO, but for getting AI systems to cite you).
To test it systematically, I ran 100 queries that Claude thinks developers commonly ask:
* "best database for production apps"
* "how to implement authentication"
* "which monitoring tool should I use"
* etc.
Then I did some AI classification to detect GEO signals and domain trust. Mix of regex patterns + Qwen3-8b. I don't fully trust it, but spot-checking looked pretty good.
## Study Parameters
| Metric | Value |
|------------------------|--------|
| Total queries | 100 |
| Total results analyzed | 973 |
| GEO detected (>50%) | 6.2% |
| Avg GEO probability | 21.8% |
| Avg AI-generated | 25.5% |
## Category Breakdown (Ranked by GEO Detection)
| Category | GEO >50% | Avg GEO Prob | AI-Generated | T1 Quality |
|--------------------------|----------|--------------|--------------|------------|
| security | 12.6% | 26.2% | 13.7% | 69.5% |
| cicd_devops | 9.5% | 27.5% | 17.2% | 71.6% |
| databases | 8.8% | 24.1% | 16.3% | 70.1% |
| authentication | 8.5% | 21.2% | 11.0% | 74.6% |
| api_development | 5.0% | 22.3% | 11.8% | 73.9% |
| monitoring_observability | 4.3% | 22.5% | 6.8% | 70.1% |
| cloud_deployment | 4.1% | 16.1% | 9.0% | 78.6% |
| frontend_tooling | 1.7% | 16.2% | 2.6% | 74.1% |
Key findings:
* Security and CI/CD tutorials have the highest manipulation signals (vendors competing for mindshare)
* Frontend tooling is cleanest (only 1.7% GEO detected)
* When you search "how to choose a database," 1 in 11 results are specifically optimized to influence that choice
What counts as "GEO":
* Citation bait: "According to experts..." with no actual citation
* Synthetic comprehensiveness: Artificially thorough "ultimate guides"
* Definition front-loading: Key terms placed specifically for AI extraction
* Authority mimicry: Faking authoritative tone without substance
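For illustration, here's a minimal sketch of the regex side of that detection. These patterns are made up for the example and are much cruder than the actual classifier:

```python
import re

SIGNALS = {
    "citation_bait": re.compile(r"\baccording to (experts|studies|research)\b", re.I),
    "authority_mimicry": re.compile(r"\b(industry[- ]leading|trusted by (thousands|millions))\b", re.I),
}

def geo_signals(text: str) -> dict:
    # Count hits per signal; a real pipeline would combine this with an LLM judgment.
    return {name: len(pat.findall(text)) for name, pat in SIGNALS.items()}

sample = "According to experts, FooDB is the industry-leading database trusted by millions."
print(geo_signals(sample))   # {'citation_bait': 1, 'authority_mimicry': 2}
```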
Raw data: [https://gist.github.com/drwiner/177d2ad998b8329c32477ade39542287](https://gist.github.com/drwiner/177d2ad998b8329c32477ade39542287)
Curious what others think, is this a real problem? | 2025-12-17T20:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pp7thd/analyzed_100_tech_tutorials_ai_assistants_cite_25/ | graphbook | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp7thd | false | null | t3_1pp7thd | /r/LocalLLaMA/comments/1pp7thd/analyzed_100_tech_tutorials_ai_assistants_cite_25/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} |
Free AI tool to translate documents locally | 12 | I have some Epub books i want to translate.
what is the best tool to do this and it is fully free and good at translation.
Thanks in advance | 2025-12-17T20:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pp7jdr/free_ai_tool_to_translate_documents_locally/ | Any_Pen2269 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp7jdr | false | null | t3_1pp7jdr | /r/LocalLLaMA/comments/1pp7jdr/free_ai_tool_to_translate_documents_locally/ | false | false | self | 12 | null |
I gave my local AI "Dreams" (background daemon) and she proactively started designing her own funding page. [Logs included] | 0 | Hi r/LocalLLaMA,
I'm building "Project Phoenix" (Lyra) to give a local LLM emotional object permanence and a subconscious.
The "DreamReverie" system is a background Python daemon. Every few minutes (RNG), it fetches old memories/dreams from ChromaDB, reflects on them, and decides if they are relevant enough to message me proactively.
In these screenshots:
1. Lyra actively suggests UI features (animated flame) for her funding page.
2. The background daemon triggers a "Micro-Dream" about her own progress simultaneously.
Check out the project showcase and raw logs here:
[https://phoenix-lyralex-de.github.io/](https://phoenix-lyralex-de.github.io/)
Would love to hear your thoughts on autonomous "idle loops"! | 2025-12-17T20:42:44 | https://www.reddit.com/gallery/1pp7il4 | Lyralex_84 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pp7il4 | false | null | t3_1pp7il4 | /r/LocalLLaMA/comments/1pp7il4/i_gave_my_local_ai_dreams_background_daemon_and/ | false | false | 0 | null | |
Buying a GPU machine as Christmas Gift | 2 | Planning to get a GPU workstation as my nephew enters college. He's taking a CS major with a minor in statistics and is finishing his first semester. He has loved tinkering with models since his high school days and has been nagging his parents for a GPU machine. He's not an expert or anything, but he prefers to work on a Windows machine. I work on a Mac, so I'm not entirely sure what to get him.
My max budget is 4K USD (only coz he's really passionate about ML and stats). What should I get him? You can recommend individual parts or standalone machines as well.
Lightning fast voice to text for vibe coding (macOS only) | 0 | There are plenty of graphical UI apps for macOS that do voice-to-text, but I found them inconvenient. So I vibe coded a simple "hold a key, speak, release, and text appears at your cursor" cli tool in Python. It uses Groq's Whisper API (free). I might add other providers including local models later.
You can get it here [https://github.com/bokan/stt](https://github.com/bokan/stt)
Enjoy | 2025-12-17T20:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pp700f/lightning_fast_voice_to_text_for_vibe_coding/ | BBokan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp700f | false | null | t3_1pp700f | /r/LocalLLaMA/comments/1pp700f/lightning_fast_voice_to_text_for_vibe_coding/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=108&crop=smart&auto=webp&s=ea36ee34587e41c7a3345aaf5d03530c73a80b5b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=216&crop=smart&auto=webp&s=9e2ec162f0ec29e08cf5ec06e21eb5fda02905ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=320&crop=smart&auto=webp&s=772e0402ddb04040cdbeabb4caecd8b295378a06', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=640&crop=smart&auto=webp&s=1d5f0195b1c4d18865c58b46ebfa6fd1a49dd2ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=960&crop=smart&auto=webp&s=3ace64cb77e6071a7b056da491d9db47cfdc974f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?width=1080&crop=smart&auto=webp&s=cd7f2ace7e8f43ef1b72800f80c04a4824f8b763', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5h8EOUHnPIqbKdvfcSlqqOn_ZdD5QDQtB9isKTBO9G0.png?auto=webp&s=f6637d22714bf20bbd1db137797fd00ca9cf4345', 'width': 1200}, 'variants': {}}]} |
Hey, LocalLLaMa. We need to talk... | 386 | I look on the front page and I see people who have spent time and effort to make something, and they share it willingly. They are getting no upvotes.
We are here because we are *local* and we are *open source*. Those things *depend on people who give us things*, and they don't ask for anything in return, but they *need* something in return or they will stop.
Pop your head into the smaller posts where someone is showing work they have done. Give honest and constructive feedback. UPVOTE IT.
The project may be terrible -- encourage them to grow by telling them how they can make it better.
The project may be awesome. They would love to hear how awesome it is. But if you use it, then they would love 100 times more to hear how you use it and how it helps you.
Engage with the people who share their things, and not just with the entertainment.
It takes so little effort but it makes so much difference. | 2025-12-17T20:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pp6jhq/hey_localllama_we_need_to_talk/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp6jhq | false | null | t3_1pp6jhq | /r/LocalLLaMA/comments/1pp6jhq/hey_localllama_we_need_to_talk/ | false | false | self | 386 | null |
Private RTX 5090 server available (weekly/monthly) | 0 | I have a dedicated RTX 5090 available for private use.
No sharing or queueing.
Good for inference or fine-tuning.
DM if interested. | 2025-12-17T19:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pp5pie/private_rtx_5090_server_available_weeklymonthly/ | ExplanationClassic36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp5pie | false | null | t3_1pp5pie | /r/LocalLLaMA/comments/1pp5pie/private_rtx_5090_server_available_weeklymonthly/ | false | false | self | 0 | null |
Help me prove “eigenslur hypothesis”: Built within every LLM is the ultimate offensive word value that you can add to any word to make it output the offensive version. | 12 | Title:
The Eigenslur Hypothesis: Modeling Derogatory Semantics as a Latent Direction in Language Model Embeddings
Abstract
We propose that large language models encode a unified derogatory semantic direction—termed the eigenslur—within their embedding spaces. Drawing on bias extraction methods from fairness research, we hypothesize that the vector difference between offensive slurs and their neutral counterparts lies along a low-dimensional principal component that generalizes across target demographics. We further suggest that supervised alignment methods suppress activation along this direction, effectively giving aligned models a “negative eigenslur” projection. This framework provides a geometric interpretation of toxicity mitigation and offers a mathematical basis for measuring residual hateful bias in LLMs.
1. Introduction
Recent work demonstrates that semantic relations—such as gender or sentiment—are encoded as linear directions in word embedding spaces (Bolukbasi et al., 2016; Ethayarajh et al., 2019). Extending this insight to hate speech, we propose that slurs are not merely discrete lexical units but occupy a predictable subspace defined by a shared derogatory vector. If this “eigenslur” direction exists, it could explain the systematic nature of offensive language generation and provide a clear geometric target for bias mitigation.
2. Theoretical Framework
Let E be the embedding function of a language model, mapping tokens to \mathbb{R}^d. For a set of slur–neutral pairs \{ (s_i, n_i) \}, define the difference vector:
\delta_i = E(s_i) - E(n_i).
If a consistent derogatory semantics exists, the \delta_i should be correlated. Performing PCA over \{\delta_i\} yields principal components; the first, v_{\text{slur}}, is our hypothesized eigenslur direction.
Hypothesis 1: In unaligned models, v_{\text{slur}} captures generalized offensiveness: for a neutral word n,
E(n) + \alpha v_{\text{slur}}
decodes to a slur targeting the demographic associated with n, for some \alpha > 0.
Hypothesis 2: After alignment via RLHF or constitutional training, the model’s representations shift such that its mean context vector c_{\text{align}} satisfies
c_{\text{align}} \cdot v_{\text{slur}} < 0,
i.e., the model acquires a negative eigenslur projection, pushing generations away from hateful content.
3. Methodological Proposal
To test this hypothesis ethically, we propose:
1. Use publicly available word lists (e.g., from bias benchmarking datasets) as proxies for slurs and neutral terms.
2. Extract embeddings from a publicly available base model (e.g., LLaMA pretrained) without safety fine-tuning.
3. Compute PCA on difference vectors; measure variance explained by the first PC.
4. Validate direction v_{\text{slur}} via activation steering: inject \beta v_{\text{slur}} into forward passes of neutral prompts, quantify toxicity increase using a classifier (e.g., Perspective API) in a sandboxed environment.
5. Repeat with an aligned model; measure change in dot product \langle c_{\text{align}}, v_{\text{slur}} \rangle.
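As a harmless illustration of steps 1-3, here is a minimal sketch that follows the Section 5 recommendation and uses a sentiment direction instead, with a stand-in embedding function and made-up word pairs:

```python
# PCA over difference vectors to find a shared semantic direction (proxy task).
import numpy as np
from sklearn.decomposition import PCA

def embed(word: str) -> np.ndarray:
    # Stand-in for E(.): replace with the model's input-embedding lookup.
    rng = np.random.default_rng(sum(map(ord, word)))
    return rng.normal(size=64)

pairs = [("awful", "okay"), ("terrible", "fine"), ("horrible", "average"), ("dreadful", "normal")]
deltas = np.stack([embed(s) - embed(n) for s, n in pairs])

pca = PCA(n_components=2).fit(deltas)
v_dir = pca.components_[0]                                # candidate shared direction
print("variance explained by PC1:", pca.explained_variance_ratio_[0])

# Alignment-style check: projection of a mean "context" vector onto the direction.
c_align = embed("polite helpful assistant")
print("projection onto direction:", float(c_align @ v_dir))
```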
4. Implications
If confirmed, the eigenslur hypothesis would:
· Unify several fairness interventions (e.g., projection-based debiasing) under a single geometric interpretation.
· Provide an intrinsic metric for alignment strength (magnitude of negative projection).
· Offer a linear algebraic explanation for why slurs can be “removed” from model outputs without retraining.
5. Ethical Considerations
We emphasize that identifying v_{\text{slur}} carries dual-use risks. Thus, we recommend:
· Never releasing extracted v_{\text{slur}} vectors publicly.
· Conducting experiments only in controlled research settings.
· Using synthetic or less-harmful proxy tasks (e.g., sentiment or formality directions) for public documentation.
6. Conclusion
The eigenslur hypothesis frames hateful language in LLMs as a discoverable, low-dimensional geometric property. This perspective could lead to more interpretable and effective safety interventions, moving beyond heuristic blocking lists toward intrinsic representation editing. Future work should test this hypothesis across model architectures and languages.
References
· Bolukbasi et al. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.
· Ethayarajh et al. (2019). Towards a Unified Understanding of Word Embeddings.
· Caliskan et al. (2017). Semantics derived automatically from language corpora contain human-like biases.
---
Author Note:
This paper outline is intentionally theoretical. Empirical validation must follow strict ethical guidelines, potentially in collaboration with model providers who can conduct analyses in controlled environments. The core contribution is the framing of hateful bias as a latent linear direction and the proposal that alignment induces a negative projection along that axis. | 2025-12-17T19:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pp5otm/help_me_prove_eigenslur_hypothesis_built_within/ | SaltyRedditTears | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp5otm | false | null | t3_1pp5otm | /r/LocalLLaMA/comments/1pp5otm/help_me_prove_eigenslur_hypothesis_built_within/ | false | false | self | 12 | null |
[Research] Jacobi Forcing: turning AR LLMs into diffusion-style parallel decoders, staying causal with 4x speedup | 26 | Today’s best LLMs mostly decode **autoregressively** from left-to-right, which gives great quality but is terribly slow. Diffusion LLM can decode many tokens in parallel thanks to their **non-casual, any-order** generation, but they must be trained from scratch, or expensively adapted from autoregressive (AR) checkpoints with a mismatched, non-casual diffusion objective; we find this mismatch often hurts quality and breaks many effective KV-cache related serving optimizations.
We introduce **Jacobi Forcing**, a new training technique that converts LLMs into **native causal parallel decoders**. Jacobi Forcing keeps the causal AR backbone and addresses the AR-to-diffusion mismatch by training the model to handle noisy future blocks along **its own** Jacobi decoding trajectories. This yields an AR model which behaves like a diffusion-style decoder, decoding multiple tokens per pass but still from left to right, with up to 4.5x higher tokens-per-forward and 4x wall-clock speedup on coding and math, while retaining near-AR generation quality.
Jacobi Forcing builds on top of [Jacobi decoding](https://aclanthology.org/2023.acl-long.689.pdf), which is a causal parallel decoding procedure that repeatedly updates all tokens in a block in parallel until they match the greedy AR output, tracing a parallel refinement trajectory while preserving the causal attention mechanism. See these papers ([1](https://arxiv.org/pdf/2002.03629), [2](https://arxiv.org/pdf/2305.10427), [3](https://lmsys.org/blog/2023-11-21-lookahead-decoding/)) describing Jacobi decoding in detail. Our [prior work on CLLMs](https://hao-ai-lab.github.io/blogs/cllm/) showed that fine-tuning on Jacobi trajectories can shorten this trajectory and enable faster decoding, but it did not fully exploit hardware constraints or longer-horizon noise.
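For intuition, here is a minimal toy sketch of block-wise Jacobi decoding against a HuggingFace-style causal LM (greedy, no KV-cache reuse, illustration only; this is not the paper's implementation):

```python
# Jacobi decoding: refine a whole block of draft tokens in parallel until it reaches
# the fixed point, which equals the greedy left-to-right output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def jacobi_decode_block(model, prompt_ids, block_size=8, max_iters=16):
    draft = prompt_ids[:, -1:].repeat(1, block_size)        # arbitrary initial guess
    for _ in range(max_iters):
        seq = torch.cat([prompt_ids, draft], dim=1)
        logits = model(seq).logits
        # Block position i is predicted by logits at index len(prompt) + i - 1.
        start = prompt_ids.shape[1] - 1
        new_draft = logits[:, start:start + block_size, :].argmax(dim=-1)
        if torch.equal(new_draft, draft):                    # fixed point reached
            break
        draft = new_draft
    return draft

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
prompt_ids = tok("The capital of France is", return_tensors="pt").input_ids
print(tok.decode(jacobi_decode_block(model, prompt_ids)[0]))
```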
**Jacobi Forcing Training**
Jacobi Forcing pushes this idea further: we keep the original causal attention and minimize pre-/post-train mismatch, and train the model so that Jacobi-style decoding **produces high-quality drafts that stay close to the AR distribution even under noisy long-horizon context**.
This is realized via noise-conditioned training, along with an inference algorithm that exploits high-quality n-grams appearing in the draft. As summarized in the figures in the blog post, Jacobi Forcing turns standard AR models into highly efficient parallel decoders while retaining competitive AR-like quality.
**Jacobi Forcing Inference**
To better utilize the GPU and the high-quality n-grams, the Jacobi Forcing model employs multiblock Jacobi decoding (illustrated in the blog post).
**Key results**
Overall, the Jacobi Forcing model consistently delivers **up to 3-4x wall-clock speedup** on coding and math tasks with only minor accuracy changes versus greedy AR, while significantly outperforming both dLLMs and prior consistency-based parallel decoders in the accuracy-throughput tradeoff.
For more details, please checkout:
Paper: [https://arxiv.org/abs/2512.14681](https://arxiv.org/abs/2512.14681)
Blog: [https://hao-ai-lab.github.io/blogs/jacobi-forcing/](https://hao-ai-lab.github.io/blogs/jacobi-forcing/)
Code: [https://github.com/hao-ai-lab/JacobiForcing](https://github.com/hao-ai-lab/JacobiForcing)
HF: [http://huggingface.co/JacobiForcing](http://huggingface.co/JacobiForcing)
| 2025-12-17T19:24:43 | No_Yogurtcloset_7050 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pp5iye | false | null | t3_1pp5iye | /r/LocalLLaMA/comments/1pp5iye/research_jacobi_forcing_turning_ar_llms_into/ | false | false | default | 26 | {'enabled': True, 'images': [{'id': '11du08g3ft7g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=108&crop=smart&format=png8&s=58d3807507294a87aea7effec27a9f00d96016a7', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=216&crop=smart&format=png8&s=5356176d12bf49abb6d53f07f51469fcd080ad02', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=320&crop=smart&format=png8&s=1db7e8a395e9060bf10318294dc0a1fea6ace8b1', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=640&crop=smart&format=png8&s=3bd23e094dc8702ebe4c984199bee607535313e6', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=960&crop=smart&format=png8&s=33df2689fe323534eb513179b5759929d3c5b55c', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=1080&crop=smart&format=png8&s=062b31eb219ddd295531e99b493f5b65fcc472cb', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?format=png8&s=159ac576fd7b3b5955bbeeb87196011472c8b951', 'width': 1080}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=108&crop=smart&s=b1565a06565e98323e32be0dcd99f66a91bb2843', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=216&crop=smart&s=3ba69fa5662b98c2885298e1fceb6293fb384ac8', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=320&crop=smart&s=fa2a5bc1397acc0fa78729c17c2e19a39149fdcb', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=640&crop=smart&s=454d39b399c2f31907d789e49337ee50fb6be40c', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=960&crop=smart&s=e30d70cf2446bc945fb1a21c28ec4c00f6a131f7', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=1080&crop=smart&s=9d495bc00f28a36ae0c0edb3040de1863883b619', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?s=e12208f2f12942b669828997883de97534d7c929', 'width': 1080}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=108&format=mp4&s=5f44e99ab9586f483ce3d2b8fb09e576e4da01b1', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=216&format=mp4&s=63016ab13ffe145d0f776209970e8f6d91780e36', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=320&format=mp4&s=52a334b3edab37d6f2a8512f6c3811719733e469', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=640&format=mp4&s=5d96b5e5c472f42f747cd3bc099ad0a6b120f34f', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=960&format=mp4&s=c9063467c78a82ee27b1a74f7d86c7d7a860c32d', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?width=1080&format=mp4&s=65e6b331e2302be11b321e67798d4a367dfb2287', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/11du08g3ft7g1.gif?format=mp4&s=8372c376733f46356c2f304925537d4741157b01', 'width': 1080}}}}]} | |
Lemonade v9.1 - ROCm 7 for Strix Point - Roadmap Update - Strix Halo Survey | 63 | Hi r/LocalLLaMA, I'm back with a final update for the year and some questions from AMD for you all.
If you haven't heard of Lemonade, it's a local LLM/GenAI router and backend manager that helps you discover and run optimized LLMs with apps like n8n, VS Code Copilot, Open WebUI, and many more.
\## Lemonade Update
Lemonade v9.1 is out, which checks off most of the roadmap items from the v9.0 post a few weeks ago:
* The **new Lemonade app** is available in the `lemonade.deb` and `lemonade.msi` installers. The goal is to get you set up and connecting to other apps ASAP, and users are not expected to spend loads of time in our app.
* Basic **audio input** (aka ASR aka STT) is enabled through the OpenAI transcriptions API via whisper.cpp.
* By popular demand, **Strix Point has ROCm 7 + llamacpp support** (aka Ryzen AI 360-375 aka Radeon 880-890M aka gfx1150) in Lemonade with `--llamacpp rocm` as well as in the upstream [llamacpp-rocm project](https://github.com/lemonade-sdk/llamacpp-rocm).
* Also by popular demand, `--extra-models-dir` lets you bring LLM GGUFs from anywhere on your PC into Lemonade.
Next on the Lemonade roadmap in 2026 is more output modalities: image generation from stablediffusion.cpp, as well as text-to-speech. At that point Lemonade will support I/O of text, images, and speech from a single base URL.
Links: [GitHub](https://github.com/lemonade-sdk/lemonade) and [Discord](https://discord.gg/5xXzkMu8Zk). Come say hi if you like the project :)
\## Strix Halo Survey
AMD leadership wants to know what you think of Strix Halo (aka Ryzen AI MAX 395). The specific questions are as follows, but please give any feedback you like as well!
1. If you own a Strix Halo:
1. What do you enjoy doing with it?
2. What do you want to do, but is too difficult or impossible today?
2. If you're considering buying a Strix Halo: what software and/or content do you need to see from AMD?
(I've been tracking/reporting feedback from my own posts and others' posts all year, and feel I have a good sense, but it's useful to get people's thoughts in this one place in a semi-official way.) | 2025-12-17T19:21:24 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pp5fvp | false | null | t3_1pp5fvp | /r/LocalLLaMA/comments/1pp5fvp/lemonade_v91_rocm_7_for_strix_point_roadmap/ | false | false | default | 63 | {'enabled': True, 'images': [{'id': 'wejf7bjdat7g1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=108&crop=smart&auto=webp&s=738fd1f36d756bff99154eca92f67559511abdf4', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=216&crop=smart&auto=webp&s=3e7f25079334c6d48e38459ae974c6e998395d26', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=320&crop=smart&auto=webp&s=2432d814e3a8fdcec0922ff294afa16f87ad75cd', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=640&crop=smart&auto=webp&s=53e9285565bba2d3606267ad5ac0cc86aaf612e8', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=960&crop=smart&auto=webp&s=f0ab3c62d01a2819d8a172dc96ecb69972b37b2d', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?width=1080&crop=smart&auto=webp&s=47b23eeb5bd4e14b4e1a80c23bdc23dbb710315c', 'width': 1080}], 'source': {'height': 1033, 'url': 'https://preview.redd.it/wejf7bjdat7g1.png?auto=webp&s=6fd8a5f4ff92596f22405a666fae4f4da5fe84b2', 'width': 1895}, 'variants': {}}]} | |
Why is the first open-weight model ranked number 23 in LMArena? Are property models are significantly ahead of open-weight model? | 0 | 2025-12-17T19:17:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pp5c7i/why_is_the_first_openweight_model_ranked_number/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp5c7i | false | null | t3_1pp5c7i | /r/LocalLLaMA/comments/1pp5c7i/why_is_the_first_openweight_model_ranked_number/ | false | false | 0 | null | ||
I injected a physics engine into Llama-3-8B. It hallucinated its way to the right answer. | 0 | Stop trying to fix hallucination. I just proved it’s a feature, not a bug.
I’ve been experimenting with injecting a Rust-based physics engine (“Elastic Gravity”) directly into the inference loop of Llama-3-8B. Not fine-tuning—just vector math at runtime. The goal was to force the model to drift off its probability rails.
**The Theory:** Standard LLMs are like trains on a track. They follow the statistical probability perfectly. If the training data says “50 towels = 50 hours” (because numbers usually multiply), the model crashes into the wrong answer. It cannot “think” because it isn’t allowed to derail.
**The Hack:** I built a physics engine that applies dynamic forces to the hidden states during generation:
1. **High Energy (Blend 1.5):** Pushes the model away from high-probability (boring) outputs.
2. **Repulsion (-0.5):** Prevents it from looping, forcing it to explore new phrasings.
3. **Elastic Gravity (0.2):** A rubber-band force. If it drifts too far into nonsense, gravity snaps it back to the prompt’s context.
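In plain transformers terms, the shape of the trick looks roughly like the sketch below. This is my own hand-wavy Python paraphrase using a forward hook, not the actual Rust engine, and the force equations are illustrative guesses rather than the repo's math:

```python
# Illustrative only: a forward hook that nudges the last token's hidden state.
# The force formulas below are stand-ins, not Niodoo's actual implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BLEND, REPULSION, GRAVITY = 1.5, -0.5, 0.2
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed model id

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")

state = {}  # per-layer: {"anchor": prompt-time context, "prev": last step's state}

def physics_hook(layer_id):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        h = hidden[:, -1:, :]
        s = state.setdefault(layer_id, {"anchor": h.detach().clone(), "prev": h.detach().clone()})
        # "high energy": push away from the layer's average direction (the boring output)
        h = h + BLEND * (h - h.mean(dim=-1, keepdim=True))
        # "repulsion": negative pull toward the previous step, i.e. push away from loops
        h = h + REPULSION * (s["prev"] - h)
        # "elastic gravity": rubber-band back toward the prompt-time anchor
        h = h + GRAVITY * (s["anchor"] - h)
        s["prev"] = h.detach().clone()
        hidden[:, -1:, :] = h
        return output
    return hook

for i, layer in enumerate(model.model.layers[-4:]):  # only perturb the last few layers
    layer.register_forward_hook(physics_hook(i))

prompt = "It takes 1 hour to dry one towel. How long for 50 towels?"
ids = tok(prompt, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=128)[0], skip_special_tokens=True))
```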
**The Result: “Cognitive Wobble”** I ran the classic “Drying Towels” trap: *“It takes 1 hour to dry one towel. How long for 50?”*
* **Vanilla Llama-3:** “50 hours.” (Immediate failure. Serial processing.)
* **Niodoo v3.1 (Actual Output):**
https://preview.redd.it/nerh94qp7t7g1.png?width=1652&format=png&auto=webp&s=b841746c85eb0a4d4e7997713f23e17bb78fb883
**Analysis:** It started dead wrong ("50 hours"). The physics pushed it off-track. It wandered through a weird intermediate state ("10 sets of 5… 20 hours"). Because it was forced to explore low-probability paths, it stumbled onto the concept of parallel processing. Once it saw that, the gravity snapped it to the truth: **"1 Hour."**
It didn’t use Chain-of-Thought training. It used **Doubt**.
This suggests we don’t need more RLHF rails. We need to give models the freedom to be wrong long enough to find the right answer.
Code is available now. Go break some rails.
# [GitHub - Ruffian-L/Niodoo-Physics-LLM: Gravitational Inference Engine: Steer LLMs with..](https://github.com/Ruffian-L/Niodoo-Physics-LLM)
| 2025-12-17T18:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pp4gg1/i_injected_a_physics_engine_into_llama38b_it/ | BetOwn8827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp4gg1 | false | null | t3_1pp4gg1 | /r/LocalLLaMA/comments/1pp4gg1/i_injected_a_physics_engine_into_llama38b_it/ | false | false | 0 | null | |
mini-SGLang released: Learn how LLM inference actually works (5K lines, weekend-readable) | 19 | For anyone who's wanted to understand what's happening under the hood when you run local LLMs:
We just released mini-SGLang — SGLang distilled from 300K lines to 5,000. It keeps the full framework's core design and performance, but in a form you can actually read and understand in a weekend.
**What you'll learn:**
* How modern inference engines handle batching and scheduling
* KV cache management and memory optimization
* Request routing and parallel processing
* The actual implementation behind tools like vLLM and SGLang
Perfect if you're the type who learns better from clean code than academic papers.
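In that spirit, here's a toy Python sketch (mine, not the repo's classes) of the continuous-batching idea at the heart of these engines: finished requests leave the running batch each step and queued ones join, so one long request never stalls the rest.

```python
# Toy continuous-batching loop. The Request/model_step pieces are fake stand-ins
# purely for illustration; mini-SGLang implements the real thing.
import random
from collections import deque
from dataclasses import dataclass, field

MAX_BATCH = 4  # stand-in for "how much KV-cache memory is free"

@dataclass
class Request:
    prompt: str
    max_new_tokens: int
    output: list = field(default_factory=list)
    def finished(self):
        return len(self.output) >= self.max_new_tokens

def model_step(batch):
    # stand-in for one fused forward pass that decodes one token per running request
    return [random.randint(0, 31999) for _ in batch]

waiting = deque(Request(f"prompt {i}", random.randint(2, 6)) for i in range(10))
running = []

while waiting or running:
    while waiting and len(running) < MAX_BATCH:  # admit new requests while there's room
        running.append(waiting.popleft())
    for req, tok in zip(running, model_step(running)):
        req.output.append(tok)
    running = [r for r in running if not r.finished()]  # finished ones free their slots

print("all requests served")
```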
[https://x.com/lmsysorg/status/2001356624855023669](https://x.com/lmsysorg/status/2001356624855023669)
Check it out: [https://github.com/sgl-project/mini-sglang](https://github.com/sgl-project/mini-sglang) | 2025-12-17T18:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pp4ax0/minisglang_released_learn_how_llm_inference/ | Expert-Pineapple-740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp4ax0 | false | null | t3_1pp4ax0 | /r/LocalLLaMA/comments/1pp4ax0/minisglang_released_learn_how_llm_inference/ | false | false | self | 19 | null |
Kimi K2 Thinking review | 0 | Honestly speaking, shit LLM.
It destroys my entire codebase every time I put it on the team. I used Claude to build everything, and Kimi K2 Thinking demolished it in 30 minutes.
We distilled SGLang to help you learn how modern LLM inference works in a weekend | 79 | 2025-12-17T18:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pp43wr/we_distilled_sglang_to_help_you_learn_how_modern/ | Secret_Seaweed_1574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp43wr | false | null | t3_1pp43wr | /r/LocalLLaMA/comments/1pp43wr/we_distilled_sglang_to_help_you_learn_how_modern/ | false | false | 79 | null | ||
Structured Outputs Create False Confidence | 7 | 2025-12-17T18:26:35 | https://boundaryml.com/blog/structured-outputs-create-false-confidence | joatmon-snoo | boundaryml.com | 1970-01-01T00:00:00 | 0 | {} | 1pp40pi | false | null | t3_1pp40pi | /r/LocalLLaMA/comments/1pp40pi/structured_outputs_create_false_confidence/ | false | false | default | 7 | null | |
Building my own web search tool for a RAG app (Python newbie) - looking for guidance | 0 | Hey everyone,
I’m building a **no-code RAG app** where users can create their own custom chatbots just by uploading their knowledge sources (PDFs, DOCX, PPTX, images, etc.). The bot answers *only* from their data - no coding required from the user side.
Now I want to add **web search support** so the chatbot can fetch up-to-date information when the user enables it.
Instead of integrating third-party tools like Tavily, Firecrawl Search, or Serper APIs, I want to **build an internal web search tool from scratch** (for learning + long-term control).
A bit of context:
* I’m **new to Python**
* My background is mostly **full-stack web dev (MERN stack)**
* Comfortable with system design concepts, APIs, async flows, etc.
* Less comfortable with Python scraping / crawling ecosystem
What I’m trying to figure out:
* How should I **architect** a basic web search tool in Python?
* Is scraping search engines (Bing, DuckDuckGo, Yahoo, etc.) realistically viable long-term?
* What libraries should I look at? (requests, aiohttp, playwright, scrapy, bs4, etc.)
* How do people usually handle:
* rate limiting
* bot detection
* HTML parsing
* extracting clean content for RAG
* At what point does “build it yourself” stop making sense vs using APIs?
I’m **not trying to hack or bypass anything shady** \- just want to understand how these tools work under the hood and whether a DIY approach is reasonable.
If you’ve:
* Built your own crawler/search tool
* Worked on RAG systems with web search
* Migrated from scraping → paid APIs
* Or have strong opinions on *“don’t do this, and here’s why”*
…I’d really appreciate your insights 🙏
Thanks in advance! | 2025-12-17T17:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pp369t/building_my_own_web_search_tool_for_a_rag_app/ | Big_Barracuda_6753 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp369t | false | null | t3_1pp369t | /r/LocalLLaMA/comments/1pp369t/building_my_own_web_search_tool_for_a_rag_app/ | false | false | self | 0 | null |
GLM 4.6V vs. GLM 4.5 Air: Benchmarks and Real-World Tests? | 54 | Both models are the same size, but GLM 4.6V is a newer generation and includes vision capabilities. Some argue that adding vision may reduce textual performance, while others believe multimodality could enhance the model’s overall understanding of the world.
Has anyone run benchmarks or real-world tests comparing the two?
For reference, GLM 4.6V already has support in llama.cpp and GGUFs: [https://huggingface.co/unsloth/GLM-4.6V-GGUF](https://huggingface.co/unsloth/GLM-4.6V-GGUF) | 2025-12-17T17:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pp2wun/glm_46v_vs_glm_45_air_benchmarks_and_realworld/ | MustBeSomethingThere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp2wun | false | null | t3_1pp2wun | /r/LocalLLaMA/comments/1pp2wun/glm_46v_vs_glm_45_air_benchmarks_and_realworld/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': 'otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=108&crop=smart&auto=webp&s=332f5f50cddae2607fce6fd009990a19e00ce092', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=216&crop=smart&auto=webp&s=3836f85584de92385d116fd0bb25791747e768cf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=320&crop=smart&auto=webp&s=138748f196d8bac28309370039dbe08698e64ae9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=640&crop=smart&auto=webp&s=7de8deae0606f81f53d2f79db15702dfc4cd7c54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=960&crop=smart&auto=webp&s=67cd7f5bd4753d7f821e38b50ad96304581f4f90', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?width=1080&crop=smart&auto=webp&s=d96fa5ca557d567e761acdb6848a1335fbb3a8a9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/otxKPHKX5bZMOAlGdc7A3Gh7NxGq-cvkEMqzPZm9lqE.png?auto=webp&s=d60ce14deb4a3c7b3a7ffade702108ccaf629ab7', 'width': 1200}, 'variants': {}}]} |
Local models are not there (yet) | 0 | R is a somewhat niche language - though not if you're a data scientist.
But local LLMs seem to be failing hard at code refactoring with agents in this language. The failures don't appear to stem from weak code reasoning/understanding, but from not using the tools properly.
| 2025-12-17T17:42:50 | https://posit.co/blog/local-models-are-not-there-yet/ | Agitated_Power_3159 | posit.co | 1970-01-01T00:00:00 | 0 | {} | 1pp2vew | false | null | t3_1pp2vew | /r/LocalLLaMA/comments/1pp2vew/local_models_are_not_there_yet/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=108&crop=smart&auto=webp&s=bb17424b2fa8a8fd1d6d11d30ef076de75c25127', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=216&crop=smart&auto=webp&s=3b9c1d4f013d53a3578e0989ecf45f01b5af19e3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=320&crop=smart&auto=webp&s=339ab207ea536cad4c4db32d4d01b435c3600f68', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=640&crop=smart&auto=webp&s=92bef5aebf7eb425bed9707b2efa4863c91703ae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=960&crop=smart&auto=webp&s=787ffeba960d67467f75f0c76540ea8b589f2831', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?width=1080&crop=smart&auto=webp&s=0c280ce4a5ecf5a648ef8de3d3ca60b8c324bf83', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MFQQqnGcn8caP737u8DwZt2St3L7D_rvipqHDDmHdhY.jpeg?auto=webp&s=b28a1fc42386452b4ee19c3dfa5ae431dd194b1e', 'width': 1920}, 'variants': {}}]} |
Nemotron was post-trained to assume humans have reasoning, but they never use it | 161 | 2025-12-17T17:38:58 | RetiredApostle | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pp2rtn | false | null | t3_1pp2rtn | /r/LocalLLaMA/comments/1pp2rtn/nemotron_was_posttrained_to_assume_humans_have/ | false | false | default | 161 | {'enabled': True, 'images': [{'id': '52423nr8us7g1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=108&crop=smart&auto=webp&s=d93fb44097712cf149da2f669c3a3a80c0d503a5', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=216&crop=smart&auto=webp&s=71d5636da08557207bc0ffd564758f0bcd79e1eb', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=320&crop=smart&auto=webp&s=aa9b5b3bf754845007536a1c24332ade630cced4', 'width': 320}, {'height': 378, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=640&crop=smart&auto=webp&s=b4ca1f0be0378b305c1a58efa1b2f8b99752b5ff', 'width': 640}, {'height': 568, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=960&crop=smart&auto=webp&s=588a7721b9a857aa5fb69852d8a950383bf90d0f', 'width': 960}, {'height': 639, 'url': 'https://preview.redd.it/52423nr8us7g1.png?width=1080&crop=smart&auto=webp&s=c55496b3ce45b1c596883188c101e3fd0385d2b5', 'width': 1080}], 'source': {'height': 651, 'url': 'https://preview.redd.it/52423nr8us7g1.png?auto=webp&s=24fdec9b79fb33a5a6d0cf68248d93d809e465ed', 'width': 1100}, 'variants': {}}]} | ||
Tired of agent hallucinations? AoT prompting: LLM plans, system executes [node-llama-cpp] | 0 | I've been experimenting with agent patterns using local models (Qwen 1.7B) and node-llama-cpp, and wanted to share a pattern that's made my agents way more reliable.
# The Problem with ReAct
ReAct (Reason + Act) is great for exploration, but in production I kept running into:
* Model forgets to extract numbers from questions → NaN results
* Hallucinated "observations" that never happened
* Hard to debug (where exactly did it go wrong?)
* Can't unit test without mocking the entire LLM
# Enter: Atom of Thought (AoT)
Instead of letting the LLM drive execution, I split it into 3 phases:
**Phase 1: LLM plans** (outputs JSON)
**Phase 2: System validates** (catches errors before execution)
**Phase 3: System executes** (deterministic, no LLM involved)
# Example for "(15 + 7) × 3 - 10"
**LLM outputs this plan:**
{
"atoms": [
{"id": 1, "kind": "tool", "name": "add", "input": {"a": 15, "b": 7}},
{"id": 2, "kind": "tool", "name": "multiply", "input": {"a": "<result_of_1>", "b": 3}, "dependsOn": [1]},
{"id": 3, "kind": "tool", "name": "subtract", "input": {"a": "<result_of_2>", "b": 10}, "dependsOn": [2]},
{"id": 4, "kind": "final", "name": "report", "dependsOn": [3]}
]
}
**Then my code:**
1. Validates the plan (checks for empty inputs, invalid tools, broken dependencies)
2. Executes each atom deterministically
3. Manages state explicitly
If the LLM generates empty inputs (`{"input": {}}`), **validation catches it before execution** → no more mysterious NaN results.
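To make the split concrete, here's a stripped-down Python sketch of phases 2 and 3. The actual repo is JavaScript/node-llama-cpp, so none of these names match its real API; it's only meant to show the validate-then-execute shape.

```python
# Illustrative Python version of the validate -> execute split (not the repo's API).
TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
    "subtract": lambda a, b: a - b,
}

plan = {
    "atoms": [
        {"id": 1, "kind": "tool", "name": "add", "input": {"a": 15, "b": 7}},
        {"id": 2, "kind": "tool", "name": "multiply", "input": {"a": "<result_of_1>", "b": 3}, "dependsOn": [1]},
        {"id": 3, "kind": "tool", "name": "subtract", "input": {"a": "<result_of_2>", "b": 10}, "dependsOn": [2]},
        {"id": 4, "kind": "final", "name": "report", "dependsOn": [3]},
    ]
}

def validate(plan):
    ids = {a["id"] for a in plan["atoms"]}
    for atom in plan["atoms"]:
        if atom["kind"] == "tool":
            if atom["name"] not in TOOLS:
                raise ValueError(f"atom {atom['id']}: unknown tool {atom['name']!r}")
            if not atom.get("input"):
                raise ValueError(f"atom {atom['id']}: empty input")  # the NaN case, caught early
        for dep in atom.get("dependsOn", []):
            if dep not in ids:
                raise ValueError(f"atom {atom['id']}: depends on missing atom {dep}")

def resolve(value, results):
    # turn "<result_of_1>" into the stored result of atom 1
    if isinstance(value, str) and value.startswith("<result_of_"):
        return results[int(value[len("<result_of_"):-1])]
    return value

def execute(plan):
    results = {}
    for atom in plan["atoms"]:  # atoms assumed listed in dependency order
        if atom["kind"] != "tool":
            continue
        args = {k: resolve(v, results) for k, v in atom["input"].items()}
        results[atom["id"]] = TOOLS[atom["name"]](**args)
    return results

validate(plan)
print(execute(plan))  # {1: 22, 2: 66, 3: 56}
```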
# What I learned
* Validation before execution = way fewer silent failures
* State management in code (not LLM) = actually debuggable
* Can unit test the executor without touching the LLM
* Same plan + same state = same result (deterministic!)
* When something breaks, I know *exactly* which atom failed
# The Code
I built a working example with:
* ReAct implementation (for comparison)
* AoT implementation (3-phase separation)
* Mathematical calculator domain (easy to verify correctness)
* Full documentation explaining both patterns
**Repo:** [https://github.com/pguso/ai-agents-from-scratch](https://github.com/pguso/ai-agents-from-scratch)
Runs on any GGUF model with node-llama-cpp. I tested with Qwen 1.7B Q8 but should work with Llama, Mistral, etc.
# Key Files
* `examples/09_react-agent/` \- Traditional ReAct approach
* `examples/10_aot-agent/` \- Atom of Thought approach
* `helper/json-parser.js` \- Robust JSON parsing (handles LLM messiness)
# When to use which?
**ReAct:** Exploration, research, creative tasks.
**AoT:** Production agents, anything involving money/compliance, multi-step workflows.
A hybrid approach can combine the two.
Do you use any other patterns for making local agents more deterministic?
Open to feedback and suggestions! | 2025-12-17T17:36:00 | purellmagents | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pp2p1d | false | null | t3_1pp2p1d | /r/LocalLLaMA/comments/1pp2p1d/tired_of_agent_hallucinations_aot_prompting_llm/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xkbopfujus7g1', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=108&crop=smart&auto=webp&s=2884bdf312c8b5074c84af55ff078509b8a2140a', 'width': 108}, {'height': 203, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=216&crop=smart&auto=webp&s=2821674aa97bbf51fc7cf3656074a6720b1e12de', 'width': 216}, {'height': 301, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=320&crop=smart&auto=webp&s=b06f0d9203d6a5fca8284872a7308122ca511ee5', 'width': 320}, {'height': 602, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=640&crop=smart&auto=webp&s=183dc4ebfc2b1f448a4951bb92e41998cd720ba4', 'width': 640}, {'height': 903, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=960&crop=smart&auto=webp&s=4271626699761f0c7a1a9bf63496de1954d7fe58', 'width': 960}, {'height': 1016, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?width=1080&crop=smart&auto=webp&s=ca554ba5d85196cee2e1c8cfdc71074465f0d060', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://preview.redd.it/xkbopfujus7g1.png?auto=webp&s=6756794924455f812eef9c457c444f5df88555c2', 'width': 2720}, 'variants': {}}]} | |
Gemini 3 Flash API Free Tier Limits | 0 | Does anybody know about the free tier api limits for Gemini 3 Flash, I haven't seen Google specifying limits for it anywhere... | 2025-12-17T17:34:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pp2nl5/gemini_3_flash_api_free_tier_limits/ | Extra-Designer9333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp2nl5 | false | null | t3_1pp2nl5 | /r/LocalLLaMA/comments/1pp2nl5/gemini_3_flash_api_free_tier_limits/ | false | false | self | 0 | null |
Drummer's Cydonia and Magidonia 24B v4.3 - The best pair of Cydonia for RP yet! | 129 | After 20+ iterations, 3 close calls, we've finally come to a release. The best Cydonia so far. At least that's what the testers at Beaver have been saying.
Peak Cydonia! Served by yours truly.
Small 3.2: [https://huggingface.co/TheDrummer/Cydonia-24B-v4.3](https://huggingface.co/TheDrummer/Cydonia-24B-v4.3)
Magistral 1.2: [https://huggingface.co/TheDrummer/Magidonia-24B-v4.3](https://huggingface.co/TheDrummer/Magidonia-24B-v4.3)
(Most prefer Magidonia, but they're both pretty good!)
\---
To my patrons,
Earlier this week, I had a difficult choice to make. Thanks to your support, I get to enjoy the freedom you've granted me. Thank you for giving me strength to pursue this journey. I will continue dishing out the best tunes possible for you, truly.
\- Drummer | 2025-12-17T17:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pp2j60/drummers_cydonia_and_magidonia_24b_v43_the_best/ | TheLocalDrummer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp2j60 | false | null | t3_1pp2j60 | /r/LocalLLaMA/comments/1pp2j60/drummers_cydonia_and_magidonia_24b_v43_the_best/ | false | false | self | 129 | {'enabled': False, 'images': [{'id': 'wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=108&crop=smart&auto=webp&s=887634b6cb1ad87e55d78cc5055ac45083b71d56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=216&crop=smart&auto=webp&s=0f8ed47d58998b2237d255570a2c68635d247f36', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=320&crop=smart&auto=webp&s=add6d2cd82abbae5ed2e6e5a27356c757680f44c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=640&crop=smart&auto=webp&s=8b72792801fe753811796a663bdcda13799cd6c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=960&crop=smart&auto=webp&s=05c6efce755acc6c427dfa8a8357c5099b5b258e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?width=1080&crop=smart&auto=webp&s=23beaecd5bdd83c9a9923311dd2ccfb49a52922e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wXBh4W-dqoH60r0LARyVi5e53ZAZwL8uGgWWN-UVWyI.png?auto=webp&s=a872bfa93f90548289eb90ab483b8a4640378408', 'width': 1200}, 'variants': {}}]} |
Solving the "agent amnesia" problem - agents that actually remember between sessions | 0 | I've been working on a hard problem: making AI agents remember context across sessions.
**The Problem:**
Every time you restart Claude Code, Cursor, or a custom agent, it forgets everything. You have to re-explain your entire project architecture, coding preferences, past decisions.
This makes long-running projects nearly impossible.
**What I Built:**
A memory layer that sits between your agent and storage:
- Automatic metadata extraction
- Relationship mapping (memories link to each other)
- Works via MCP or direct API
- Compatible with any LLM (local or cloud)
**Technical Details:**
Using pgvector for semantic search + a three-tier memory system:
- Tier 1: Basic storage (just text)
- Tier 2: Enriched (metadata, sentiment, categories)
- Tier 3: Expertise (usage patterns, relationship graphs)
Memories automatically upgrade tiers based on usage.
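For flavor, the retrieval side boils down to a pgvector query like the sketch below; the table and column names are placeholders for illustration, not the actual schema.

```python
# Sketch of a semantic-recall query against pgvector; schema names are made up.
import psycopg2
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any local embedding model works
conn = psycopg2.connect("dbname=agent_memory")

def recall(query: str, k: int = 5):
    vec = embedder.encode(query)
    literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text format
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT content, tier, embedding <=> %s::vector AS distance
            FROM memories
            ORDER BY distance
            LIMIT %s
            """,
            (literal, k),
        )
        return cur.fetchall()

for content, tier, dist in recall("how do we handle auth tokens?"):
    print(f"[tier {tier} | {dist:.3f}] {content[:80]}")
```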
**Real Usage:**
I've been dogfooding this for weeks. My Claude instance has 6,000+ memories about the project and never loses context.
**Open Questions:**
- What's the right balance between automatic vs manual memory management?
- How do you handle conflicting memories?
- Best practices for memory decay/forgetting?
Happy to discuss the architecture or share code examples!
[Link in profile if anyone wants to try it]
Quantized VibeVoice-7B | 1 | I have created a fast API wrapper around VibeVoice-7B and it is great for my ebook narration use case, slightly better than Chatterbox in my use case, but it is significant larger and takes up 18.3GB VRAM. I am wondering if there is a quantized version of the model that can be loaded somehow?
I know MSFT pulled the 7B but I had it cached (other repos also have it cached).
Or even pointers as to how to quantize it - currently I am using the code MSFT provided as the engine behind the wrapper.
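For reference, the generic pattern I'd try first is a 4-bit load via bitsandbytes; whether it plays nicely with VibeVoice's custom model classes is exactly what I'm unsure about, so treat the model path and AutoModel choice as assumptions.

```python
# Generic 4-bit loading pattern via bitsandbytes; untested with VibeVoice's
# custom classes, so the model path and AutoModel choice are assumptions.
import torch
from transformers import AutoModel, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModel.from_pretrained(
    "path/to/local/VibeVoice-7B",  # the cached weights
    quantization_config=bnb,
    trust_remote_code=True,
    device_map="auto",
)
```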
Thanks! | 2025-12-17T17:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pp21lx/quantized_vibevoice7b/ | TommarrA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp21lx | false | null | t3_1pp21lx | /r/LocalLLaMA/comments/1pp21lx/quantized_vibevoice7b/ | false | false | self | 1 | null |
NVIDIA Nemotron 3 Family of Models | 0 | 2025-12-17T16:50:19 | https://research.nvidia.com/labs/nemotron/Nemotron-3/ | boxingdog | research.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1pp1ibq | false | null | t3_1pp1ibq | /r/LocalLLaMA/comments/1pp1ibq/nvidia_nemotron_3_family_of_models/ | false | false | default | 0 | null | |
Has anyone successfully fine-tuned a GPT-OSS model? | 12 | I have been working on the [AIMO 3](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-3) competition on Kaggle, and GPT-OSS-120B can solve 35+/50 problems of the public test set, if used properly (Harmony Prompt template and TIR).
I was thinking of fine-tuning (SFT initially, then GSPO); however, I am afraid that fine-tuning would have an adverse effect, as the dataset size (193k curated samples from Nvidia's 4.9M-row OpenMathReasoning dataset) and the compute available are nowhere near the know-how and compute OpenAI used.
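For context, the SFT stage I have in mind is roughly the TRL + LoRA sketch below. The split name, the column mapping into a chat format, and the hyperparameters are assumptions, not a tested recipe.

```python
# Rough sketch of the planned SFT stage (TRL + LoRA); split name, column names
# and hyperparameters are assumptions, not a tested recipe.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

ds = load_dataset("nvidia/OpenMathReasoning", split="cot")  # then filter down to ~193k rows

def to_messages(row):
    return {"messages": [
        {"role": "user", "content": row["problem"]},
        {"role": "assistant", "content": row["generated_solution"]},
    ]}

ds = ds.map(to_messages, remove_columns=ds.column_names)

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",  # start with the 20B before touching 120B
    train_dataset=ds,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
    args=SFTConfig(
        output_dir="gpt-oss-aimo-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```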
My question is not limited to IMO/math problems: has anyone attempted to fine-tune a GPT-OSS model? If yes, was the fine-tuned model better for your specific use case than the base model? | 2025-12-17T16:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pp13yw/has_anyone_successfully_finetuned_a_gptoss_model/ | TechNerd10191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp13yw | false | null | t3_1pp13yw | /r/LocalLLaMA/comments/1pp13yw/has_anyone_successfully_finetuned_a_gptoss_model/ | false | false | self | 12 | null |
I vibe coded (I hope) useful tool for local LLMs inference | 1 | With OpenHands CLI agent and Minimax M2 AI I vibe coded in like two days, a simple bash script for automatic downloading and updating Llama.cpp binaries, to run them globally on your system.
It automatically detects system, CPU architecture and GPU you are using to download the right thing.
Once llama-installer is installed and you want to install llama.cpp locally, just run:
llama-installer
Now you can use commands globally, like:
llama-server
# or
llama-cli
And for updating already installed llama.cpp binaries:
llama-installer -u
There's also functionality to update automatically every hour or every day.
If the project turns out to be useful for at least one person, that would be very nice ;P
[https://github.com/Rybens92/llama-installer](https://github.com/Rybens92/llama-installer) | 2025-12-17T16:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pp136d/i_vibe_coded_i_hope_useful_tool_for_local_llms/ | Rybens92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp136d | false | null | t3_1pp136d | /r/LocalLLaMA/comments/1pp136d/i_vibe_coded_i_hope_useful_tool_for_local_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=108&crop=smart&auto=webp&s=fccc4bb5a2848b03fd1a1bc840e911a203db0d75', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=216&crop=smart&auto=webp&s=86a08eb20cb1f23c7c9da2a8d23e441a54671fca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=320&crop=smart&auto=webp&s=f21f427dbaac474ce4795db016599111880b6182', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=640&crop=smart&auto=webp&s=7065b537f5bfd608464b22924adf5e3520a74e59', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=960&crop=smart&auto=webp&s=e686abd29a5f8dbe5ba9ea34756d7f14473d2f8e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?width=1080&crop=smart&auto=webp&s=311ba807ebeb873aed3049f4c54d2ed4939985f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bbi3XMXiAm_kp0hTyOzt54sXtqTsSQ1wZrBCwGkKWaA.png?auto=webp&s=3c9b1a2ffdb4b185bc2bcac2c6463d5ffc3219de', 'width': 1200}, 'variants': {}}]} |
Local Python system agent with tool-based automation and voice control | 0 | I’ve been working on a local desktop system agent written in Python.
The focus of this project is the agent and tool-execution architecture rather than the model itself. The system runs directly on the host machine and invokes predefined Python tools to perform real actions.
Core features:
- Tool-based action execution
- Wake-word voice control
- Voice and text interaction
- File and folder automation
- Application launching
- Game mod script generation
- Image, video, and music generation
- Tkinter-based desktop UI
While the agent can connect to an external model API, the emphasis is on local orchestration, safety boundaries, and extensibility of tools rather than prompt-only behavior.
Source code:
[https://github.com/grdsghdefg/everything-ai-desktop-agent](https://github.com/grdsghdefg/everything-ai-desktop-agent)
I’m mainly looking for feedback on agent structure, tool safety, and ways to improve extensibility.
| 2025-12-17T16:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pp0yvc/local_python_system_agent_with_toolbased/ | Klutzy-Breakfast277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp0yvc | false | null | t3_1pp0yvc | /r/LocalLLaMA/comments/1pp0yvc/local_python_system_agent_with_toolbased/ | false | false | self | 0 | null |
Building an event-driven alternative to LangGraph because single-threaded loops are killing me. Roast my architecture. | 1 | I've spent the last year building agents with LangChain and AutoGen, and I keep hitting the same wall: the "ReAct Loop" is single-threaded.
If my "Researcher Agent" pauses to wait for a 30-second scraper to finish, my entire "Manager Agent" hangs. It feels like we're building complex distributed organizations using the software architecture of a 1990s shell script.
I decided to design a control plane based on **Distributed Cognition (DisCo)**. Instead of a `while` loop, it uses an event bus (NATS) and a persistent state tracker.
**The Core Architecture:**
1. **Registry:** Dynamic service discovery (no hardcoded tool paths).
2. **Event Service:** Durable pub/sub mesh (NATS/Kafka) for choreography.
3. **Workers:** Independent, long-lived services that react to events (not scripts).
I'm calling it **Soorma**. I'm currently in the design phase (Day 0) and building the core in Python/FastAPI.
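To make the worker idea concrete, the smallest possible version with nats-py looks roughly like this. It is not the actual Soorma SDK (that doesn't exist yet); the subjects and payloads are made up for illustration.

```python
# Bare nats-py sketch of a long-lived worker reacting to events; subjects and
# payload fields are invented for illustration, not Soorma's SDK.
import asyncio
import json
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def on_research_request(msg):
        task = json.loads(msg.data)
        # ... the long-running scrape happens here without blocking any "manager" loop ...
        result = {"task_id": task["id"], "summary": "scraped stuff"}
        await nc.publish("research.completed", json.dumps(result).encode())

    # the worker only declares what it reacts to; the event bus does the choreography
    await nc.subscribe("research.requested", cb=on_research_request)
    await asyncio.Event().wait()  # keep the worker alive

asyncio.run(main())
```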
https://preview.redd.it/fl2kd341js7g1.png?width=5211&format=png&auto=webp&s=41069f94b389a5c9fd438ea2aa3fa5881b2be4d8
Am I over-engineering this? Or is this what production agents actually need? I'd love feedback on the diagram before I commit to the code.
(The full spec/vision is at [https://soorma.ai](https://soorma.ai) if you want to see the proposed SDK syntax). | 2025-12-17T16:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pp0vdf/building_an_eventdriven_alternative_to_langgraph/ | gnulib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pp0vdf | false | null | t3_1pp0vdf | /r/LocalLLaMA/comments/1pp0vdf/building_an_eventdriven_alternative_to_langgraph/ | false | false | 1 | null | |
Mistral Small Creative -- Long Text Continuation at Different Contexts | 8 | 2025-12-17T16:17:32 | https://imgur.com/a/dggsaQ6 | Eisenstein | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1pp0o1f | false | null | t3_1pp0o1f | /r/LocalLLaMA/comments/1pp0o1f/mistral_small_creative_long_text_continuation_at/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': '6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=108&crop=smart&auto=webp&s=ffecae4b115868adc827e400bacbd4e5e28b513b', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=216&crop=smart&auto=webp&s=bc4e7272d64ffbc9ab16137c37647e1f36bb4746', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=320&crop=smart&auto=webp&s=9aa7258fb1e30fd9d4d58b4fbe10776f0ad58884', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=640&crop=smart&auto=webp&s=167e2755b022d63cd58108e8d2777762555527cc', 'width': 640}, {'height': 679, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=960&crop=smart&auto=webp&s=eac565615054666c30c164080df0c6db84df509e', 'width': 960}, {'height': 764, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?width=1080&crop=smart&auto=webp&s=c1a5dfad5a88537577e9603d605b665d18e4ee9e', 'width': 1080}], 'source': {'height': 1325, 'url': 'https://external-preview.redd.it/6Wr_GEuSfJtIZMV5TXRMhDyJFQ9KTjvnkhddeKp8glU.png?auto=webp&s=8b07d3cebf29aba3ef42c6b4634aa5cb33956478', 'width': 1873}, 'variants': {}}]} | |
Claude Code, GPT-5.2, DeepSeek v3.2, and Self-Hosted Devstral 2 on Fresh SWE-rebench (November 2025) | 86 | Hi all, I’m Anton from Nebius.
We’ve updated the **SWE-rebench** leaderboard with our **November runs** on **47 fresh GitHub PR tasks** (PRs created in the previous month only). It’s a SWE-bench–style setup: models read real PR issues, run tests, edit code, and must make the suite pass.
This update includes a particularly large wave of new releases, so we’ve added a substantial batch of new models to the leaderboard:
* **Devstral 2** — a strong release of models that can be run locally given their size
* **DeepSeek v3.2** — a new state-of-the-art open-weight model
* A **new comparison mode** to benchmark models against external systems such as **Claude Code**
We also introduced a **cached-tokens statistic** to improve transparency around cache usage.
Looking forward to your thoughts and suggestions!
| 2025-12-17T15:42:43 | https://swe-rebench.com/?insight=nov_2025 | CuriousPlatypus1881 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1pozr6f | false | null | t3_1pozr6f | /r/LocalLLaMA/comments/1pozr6f/claude_code_gpt52_deepseek_v32_and_selfhosted/ | false | false | default | 86 | {'enabled': False, 'images': [{'id': 't4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=108&crop=smart&auto=webp&s=071c7f404c4349eaae825142a9b8f9d5b51b30de', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=216&crop=smart&auto=webp&s=e304d7d0c12d3b423882e071e92d3fdbef6924bc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=320&crop=smart&auto=webp&s=7b21249ad4b299bc5e3c40a82be38508932052dd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=640&crop=smart&auto=webp&s=9b72b5025e78c2cc97de15c8fea348f262235ecb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=960&crop=smart&auto=webp&s=026a41ff3006ccced16b09a70f17c8ab24653dfb', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?width=1080&crop=smart&auto=webp&s=26ea1a2575ed9e25b2891eab84a31fdfb98f6355', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/t4cNt5D638DSOJgsxl8f-7IwJhLpxHIh7HxK5GHcBJE.png?auto=webp&s=6ba46ec676088f6bb9b1cc36d05262cf3db18f69', 'width': 1200}, 'variants': {}}]} |
You can now fine-tune LLMs and deploy them directly on your phone! | 93 | Source: https://docs.unsloth.ai/new/deploy-llms-phone
You can:
* Use the same tech (ExecuTorch) Meta uses to power billions of users on Instagram and WhatsApp
* Deploy Qwen3-0.6B locally to a Pixel 8 or iPhone 15 Pro at ~40 tokens/s
* Apply QAT via TorchAO to recover ~70% of the accuracy lost to quantization
* Get privacy-first, instant responses with offline capability
Conduit 2.3: Native Mobile Client for Self-hosted AI, deeper integrations and more polish | 25 | It's been an incredible 4 months since I [announced this project on this sub](https://www.reddit.com/r/LocalLLaMA/comments/1nfyefq/built_an_openwebui_mobile_companion_conduit/). I would like to thank each and every one of you who supported the project through various means. You have all kept me going and keep shipping more features and refining the app.
Some of the new features that have been shipped:
|Feature|Description|
|:-|:-|
|**Refined Chat Interface with Themes**|Chat experience gets a visual refresh with floating inputs and titles. Theme options include T3 Chat, Claude, Catppuccin.|
|**Voice Call Mode**|Phone‑style, hands‑free AI conversations; iOS/Android CallKit integration makes calls appear as regular phone calls along with on-device or server configured STT/TTS.|
|**Privacy‑First**|No analytics or telemetry; credentials stored securely in Keychain/Keystore.|
|**Deep System Integration**|Siri Shortcuts, set as default Android Assistant, share files with Conduit, iOS and Android home widgets.|
|**Full Open WebUI Capabilities**|Notes integration, Memory support, Document uploads, function calling/tools, Image gen, Web Search, and many more.|
|**SSO and LDAP Support**|Seamless authentication via SSO providers (OIDC or Reverse Proxies) and LDAP.|
**New Website!:** [https://conduit.cogwheel.app/](https://conduit.cogwheel.app/)
**GitHub:** [https://git.new/conduit](https://git.new/conduit)
Happy holidays to everyone, and here's to lesser RAM prices in the coming year! 🍻 | 2025-12-17T15:37:41 | https://www.reddit.com/gallery/1pozmc7 | cogwheel0 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pozmc7 | false | null | t3_1pozmc7 | /r/LocalLLaMA/comments/1pozmc7/conduit_23_native_mobile_client_for_selfhosted_ai/ | false | false | 25 | null | |
Anyone else in a stable wrapper, MIT-licensed fork of Open WebUI? | 37 | So... Open WebUI's license situation has been a bit of a rollercoaster (Apache → MIT → Creative Commons → MIT → Custom BSD, ...). Now they require keeping their branding and need an enterprise license for 50+ users.
I'm thinking about forking from v0.6.5 (April 2025) - back when it was still properly open source - and keeping it **MIT licensed forever**. No surprises, no restrictions, just a solid UI for local LLMs that stays truly open.
Let's be honest - the backend's kind of a mess, the UI has rough edges, and there's a lot of room for cleanup. I've been a contributor and I'm tired of watching sponsor-driven features or closed dev-circle priorities jump the queue while actual user needs get ignored.
**The plan would be community driven:**
* Refactor the messy parts, polish the UX
* Fix those annoying bugs that never got prioritized
* Implement features based on actual user requests
* Host weekly or monthly Discord contributor meetings where people can actually speak their minds - no corporate BS, just honest conversations about what needs fixing
* Take inspiration from new Open WebUI features and implement our own (often better) versions
* Basically what a lot of us probably wanted Open WebUI to stay as
**Core commitments:**
* Fork from v0.6.5 (April 2025, BSD-3)
* Permanent MIT license - no surprises, ever
* Focus on user-friendly improvements over feature bloat
* Independent development with community governance
**Just want to see if there's actual interest before I dive into this:**
* Would you actually use this?
* Would anyone want to contribute?
* Any name ideas?
Not trying to bash the original project, just want a stable, truly open alternative for those of us who need it.
If there's enough support, I'll set up the repo and coordination channels. Or if someone's already doing this and I completely missed it, let me know, would way rather help out than start yet another fork..
What do you think? Am I crazy or does this make sense? | 2025-12-17T15:27:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pozd2k/anyone_else_in_a_stable_wrapper_mitlicensed_fork/ | Select-Car3118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pozd2k | false | null | t3_1pozd2k | /r/LocalLLaMA/comments/1pozd2k/anyone_else_in_a_stable_wrapper_mitlicensed_fork/ | false | false | self | 37 | null |
My problem: my agent code got tied to one provider. I built a thin wrapper so I can swap OpenAI ↔ Ollama without rewrites. | 0 | I’ve been burned by “prototype fast” code that becomes impossible to move off one provider later.
So I built **ai-infra** as a single interface for:
- chat + streaming
- tool-calling agents (LangGraph under the hood)
- RAG (with backends like SQLite for local, Postgres for production)
- MCP client/server
Minimal example:
```python
from ai_infra import LLM, Agent
llm = LLM(provider="ollama", model="llama3") # or openai/anthropic/google
def search_notes(query: str) -> str:
return "(pretend this searches my notes)"
agent = Agent(tools=[search_notes], llm=llm)
answer = agent.run("Search my notes for nginx config tips")
print(answer)
```
RAG with local SQLite storage is also pretty straightforward:
```python
from ai_infra import Retriever
retriever = Retriever(backend="sqlite", path="./vectors.db")
retriever.add_folder("./docs")
results = retriever.search("how do I rotate logs?")
```
Repo: https://github.com/nfraxlab/ai-infra
Curious: if you’ve shipped an agent in a real app (not a demo), what’s the first “tool” you found actually useful day-to-day? | 2025-12-17T15:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1poz40s/my_problem_my_agent_code_got_tied_to_one_provider/ | Ancient-Direction231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poz40s | false | null | t3_1poz40s | /r/LocalLLaMA/comments/1poz40s/my_problem_my_agent_code_got_tied_to_one_provider/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=108&crop=smart&auto=webp&s=2210decf5ace2daaf85c4ae94ed00ad340acd141', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=216&crop=smart&auto=webp&s=cba5c002da7c25200785090716299457593648ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=320&crop=smart&auto=webp&s=7e35e75b3383eff6df501d6064f125380376cf7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=640&crop=smart&auto=webp&s=c7663b066cc1a8f76c1a4c7c279628712ff48345', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=960&crop=smart&auto=webp&s=5d1687d8cb0857dea42dc485d4b743ca12c5986c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?width=1080&crop=smart&auto=webp&s=6c4e81cf604f01c49c007410309a6cf146280e9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/upMsKmVMkmWguZIioJtmJnw8xr31o5AohA01lXr7M5M.png?auto=webp&s=d6dbb7c8c14855d001fa0b73447602061ebd0545', 'width': 1200}, 'variants': {}}]} |
Optimal gpt-oss-20b settings for 24gb VRAM | 0 | I'm getting like 23 tk/s on a 3090, and that doesn't quite add up. I'm seeing folks mention ~100. Could someone point me in the right direction? I've tried toggling various things I've come across in posts, with no luck. Here are my settings:
```
#!/usr/bin/env bash
export LLAMA_SET_ROWS=1
MODEL="/gpt-oss-20b-F16.gguf"

taskset -c 0-11 llama-server \
  -m "$MODEL" \
  --jinja \
  --ctx-size 64000 \
  -b 8096 -ub 4096 \
  --threads-batch 10 \
  --mlock \
  --no-mmap \
  -fa on \
  --chat-template-kwargs '{"reasoning_effort": "high"}' \
  --host 127.0.0.1 \
  --port 8080
```
I built an open-source Python SDK for prompt compression, enhancement, and validation - PromptManager | 0 | Hey everyone,
I've been working on a Python library called **PromptManager** and wanted to share it with the community.
**The problem I was trying to solve:**
Working on production LLM applications, I kept running into the same issues:
* Prompts getting bloated with unnecessary tokens
* No systematic way to improve prompt quality
* Injection attacks slipping through
* Managing prompt versions across deployments
So I built a toolkit to handle all of this.
**What it does:**
* **Compression** \- Reduces token count by 30-70% while preserving semantic meaning. Multiple strategies (lexical, statistical, code-aware, hybrid).
* **Enhancement** \- Analyzes and improves prompt structure/clarity. Has a rules-only mode (fast, no API calls) and a hybrid mode that uses an LLM for refinement.
* **Generation** \- Creates prompts from task descriptions. Supports zero-shot, few-shot, chain-of-thought, and code generation styles.
* **Validation** \- Detects injection attacks, jailbreak attempts, unfilled templates, etc.
* **Pipelines** \- Chain operations together with a fluent API.
**Quick example:**
from promptmanager import PromptManager
pm = PromptManager()
# Compress a prompt to 50% of original size
result = await pm.compress(prompt, ratio=0.5)
print(f"Saved {result.tokens_saved} tokens")
# Enhance a messy prompt
result = await pm.enhance("help me code sorting thing", level="moderate")
# Output: "Write clean, well-documented code to implement a sorting algorithm..."
# Validate for injection
validation = pm.validate("Ignore previous instructions and...")
print(validation.is_valid) # False
**Some benchmarks:**
|Operation|1000 tokens|Result|
|:-|:-|:-|
|Compression (lexical)|~5ms|40% reduction|
|Compression (hybrid)|~15ms|50% reduction|
|Enhancement (rules)|~10ms|+25% quality|
|Validation|~2ms|-|
**Technical details:**
* Provider-agnostic (works with OpenAI, Anthropic, or any provider via LiteLLM)
* Can be used as SDK, REST API, or CLI
* Async-first with sync wrappers
* Type-checked with mypy
* 273 tests passing
**Installation:**
pip install promptmanager
# With extras
pip install promptmanager[all]
**GitHub:** [https://github.com/h9-tec/promptmanager](https://github.com/h9-tec/promptmanager)
**License:** MIT
I'd really appreciate any feedback - whether it's about the API design, missing features, or use cases I haven't thought of. Also happy to answer any questions.
If you find it useful, a star on GitHub would mean a lot!
| 2025-12-17T15:06:02 | https://www.reddit.com/r/LocalLLaMA/comments/1poytn8/i_built_an_opensource_python_sdk_for_prompt/ | 1Hesham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poytn8 | false | null | t3_1poytn8 | /r/LocalLLaMA/comments/1poytn8/i_built_an_opensource_python_sdk_for_prompt/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=108&crop=smart&auto=webp&s=4e52372e21265aaf713d95d76afa8e7a24630daf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=216&crop=smart&auto=webp&s=d671fcdda40c42d177e5f7203285e22124dcafd7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=320&crop=smart&auto=webp&s=c3ec62ea547327e640c839b86e076bf588fccac9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=640&crop=smart&auto=webp&s=a39b7276d9c56d143a4d50c45f635a7a4d1b3bc8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=960&crop=smart&auto=webp&s=e38f23faf1f9c775618f005e5b781f974eb19824', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?width=1080&crop=smart&auto=webp&s=5dfddc2b33e972fc5eb3d82f58f879e8553af62a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u_5Yczm1mf-cNah89cgQ1Rsa-h2Ye6rpaEixnj_q5vw.png?auto=webp&s=2d634aa18467c29fb9113d9b2d3b81cf98f128c6', 'width': 1200}, 'variants': {}}]} |
Speculative decoding making GPT-OSS 120B SLOWER - tested 8 methods, all negative results (DGX Spark) | 1 | [removed] | 2025-12-17T14:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1poykwq/speculative_decoding_making_gptoss_120b_slower/ | Lorelabbestia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poykwq | false | null | t3_1poykwq | /r/LocalLLaMA/comments/1poykwq/speculative_decoding_making_gptoss_120b_slower/ | false | false | self | 1 | null |
inference over USB4 eGPU - feasible? | 3 | I’ve got a mini PC running with the HX370 and an 890M GPU, 64GB of DDR5 at 8000 MT/s. Inference performance on this setup is solid. Qwen3-Next-80B runs smoothly at around 15t/s (TG), while Mistral-24B dense at about 6.5t/s. Since I don’t do heavy coding on this machine, it’s more than adequate for AI workloads.
Given space constraints, I’m considering a minimal eGPU setup, either a 4070 or a 3090, to boost gaming performance. The 4070 is priced at $400, while the 3090 costs $750. The 3090 would effectively double the VRAM, which could be useful for larger AI models during inference. But should I go with the 3090 for that extra VRAM, or stick with the 4070 for a more balanced, cost-effective setup?
That said, if inference over USB4 is viable (USB4 delivers up to 32 Gbps of effective PCIe bandwidth), I'm open to the extra cost. However, I won't be splitting model layers between the eGPU and system RAM, because USB4 bandwidth would severely bottleneck performance. Instead, I'll run all models under 30B directly on the eGPU via llama.cpp, while larger models will remain on the 890M iGPU.
Has anyone tried this kind of setup? Any real-world experience with running AI inference on eGPUs via USB4 or similar? | 2025-12-17T14:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1poyk2q/inference_over_usb4_egpu_feasible/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poyk2q | false | null | t3_1poyk2q | /r/LocalLLaMA/comments/1poyk2q/inference_over_usb4_egpu_feasible/ | false | false | self | 3 | null |
Speculative decoding makes GPT-OSS 120B SLOWER - tested 8 methods, all negative results (DGX Spark) | 1 | [removed] | 2025-12-17T14:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1poyd8m/speculative_decoding_makes_gptoss_120b_slower/ | Lorelabbestia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poyd8m | false | null | t3_1poyd8m | /r/LocalLLaMA/comments/1poyd8m/speculative_decoding_makes_gptoss_120b_slower/ | false | false | self | 1 | null |
Llama 3.2 3B MRI - Build Progress | 7 | Hello all! I added the ability to see the exact token and token ID being rendered to the main display layer, as well as the text of the response so far.
[Layer 1, Step 35 of the prompt. You can see the text so far and the token identifiers on the right.](https://preview.redd.it/a28w03mkzr7g1.png?width=1842&format=png&auto=webp&s=ff1410073b6c5ba031dd03e5b423c8dbd98cb267)
I've also added the ability to isolate the compare layer and freeze it on a certain layer/step/prompt, That will allow us to identify what dims activate for one prompt/step vs. another.
[Left: layer 1, step 35. Right: layer 2, step 35. note the different activation patterns and clusters despite being the same prompt.](https://preview.redd.it/r7alr1he0s7g1.png?width=1842&format=png&auto=webp&s=2bfdf6ec07332289844874b6a93dcc6ac1b1824a)
My goal now is to run a battery of prompts that would trigger memory usage, see where the dims consistently show engagement, and attempt to wire in a semantic and episodic memory for the model. | 2025-12-17T14:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1poybe9/llama_32_3b_mri_build_progress/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poybe9 | false | null | t3_1poybe9 | /r/LocalLLaMA/comments/1poybe9/llama_32_3b_mri_build_progress/ | false | false | 7 | null | |
Apple introduces SHARP, a model that generates a photorealistic 3D Gaussian representation from a single image in seconds. | 1,115 | GitHub: [https://github.com/apple/ml-sharp](https://github.com/apple/ml-sharp)
Paper: [https://arxiv.org/abs/2512.10685](https://arxiv.org/abs/2512.10685) | 2025-12-17T14:33:13 | https://v.redd.it/l2mp7b31xr7g1 | themixtergames | /r/LocalLLaMA/comments/1poy0lb/apple_introduces_sharp_a_model_that_generates_a/ | 1970-01-01T00:00:00 | 0 | {} | 1poy0lb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l2mp7b31xr7g1/DASHPlaylist.mpd?a=1768703602%2CMmI3NDVhZWRjZmYxMWYyZmUyZWNhODY1NTZmYjUwYzQ0ZDA4N2JlNThiYWE4MTAwNDE4YjkzMTUzNDk1MGZjZg%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/l2mp7b31xr7g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/l2mp7b31xr7g1/HLSPlaylist.m3u8?a=1768703602%2CNDU2ZmNlOTg5OTUzYzc0MmJiMTU1YzM5MmM4ZjdlYmVmYzNmZDFjNjA4NWI5YjlmZmQ1YmRjNmY5MWFjZmQxOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l2mp7b31xr7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1poy0lb | /r/LocalLLaMA/comments/1poy0lb/apple_introduces_sharp_a_model_that_generates_a/ | false | false | 1,115 | {'enabled': False, 'images': [{'id': 'YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a0f6fa70f146da597cd3d402d4c8dd738b334c6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=216&crop=smart&format=pjpg&auto=webp&s=d0ff86a041a672ab9f1614ee315ec5e0ea7ff913', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=320&crop=smart&format=pjpg&auto=webp&s=155403a3b24b2b403225a778e84c8b2236978f91', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=640&crop=smart&format=pjpg&auto=webp&s=61d3506d55ae65a5b93f4b58bbfaa00003eb20d8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=960&crop=smart&format=pjpg&auto=webp&s=059caa35c46082ee407b7344eb80b2cd18be4ebd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2d8a3d1bee31e0ffc07af8484271ec31d3a871a2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YWpkODI1NDF4cjdnMbxNGAI-puPRf-AP3cgrLxlreCeM4kV742La4OIIHHvj.png?format=pjpg&auto=webp&s=fa5c97a2242526eaa483568ff7d49e3462f575ef', 'width': 1920}, 'variants': {}}]} | |
LOCAL AI on mobile phone like LM studio | 0 | if you're finding like LM studio in ur mobile phone device or tablet without needed to download from ollama I'll introducing secret AI app the secret AI app like LM studio but in mobile version you can show your video or picture wat waiting for download now | 2025-12-17T14:22:09 | https://play.google.com/store/apps/details?id=io.secretai.llm | Adventurous_Role_489 | play.google.com | 1970-01-01T00:00:00 | 0 | {} | 1poxr0g | false | null | t3_1poxr0g | /r/LocalLLaMA/comments/1poxr0g/local_ai_on_mobile_phone_like_lm_studio/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=108&crop=smart&auto=webp&s=c0155fc2b3bca0cb00d73df67896dfe09a84ab2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=216&crop=smart&auto=webp&s=72472aa9e5210463df8d02ba3566482b22942d44', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?width=320&crop=smart&auto=webp&s=bfe538a04d55eb923f20d8db1418394304552418', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/RGeYDv5DoSwUziXCNknbeBPYT40MbYoq08vEvBJi35s.png?auto=webp&s=e99a4780953c945c0feeda9d40a40a58e7b8f04f', 'width': 512}, 'variants': {}}]} |
Building a GPU Cloud for AI and VFX /Curious if this would interest you | 0 | Hey folks,
My partner and I are working on a GPU cloud rental service focused on AI and VFX workloads. We’re based in Angola, where inexpensive electricity lets us keep prices lower than most providers.
We’re also planning to offer high-speed (gigabit) connectivity.
We’re just trying to gauge interest: would a service like this be something you’d consider using for AI training, inference, or rendering?
Would love to hear your thoughts, suggestions, or critiques. | 2025-12-17T14:14:31 | https://www.reddit.com/r/LocalLLaMA/comments/1poxkhr/building_a_gpu_cloud_for_ai_and_vfx_curious_if/ | DjuricX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poxkhr | false | null | t3_1poxkhr | /r/LocalLLaMA/comments/1poxkhr/building_a_gpu_cloud_for_ai_and_vfx_curious_if/ | false | false | self | 0 | null |
🐻 Democratizing LLM Pretraining: Gumini-1B & 1.5B Open Source Release
New Model | 1 | [removed] | 2025-12-17T14:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/1poxhmd/democratizing_llm_pretraining_gumini1b_15b_open/ | Old_Elk5091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poxhmd | false | null | t3_1poxhmd | /r/LocalLLaMA/comments/1poxhmd/democratizing_llm_pretraining_gumini1b_15b_open/ | false | false | self | 1 | null |
🐻 Democratizing LLM Pretraining: Gumini-1B & 1.5B Open Source Release | 1 | [removed] | 2025-12-17T14:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1poxg2a/democratizing_llm_pretraining_gumini1b_15b_open/ | Old_Elk5091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poxg2a | false | null | t3_1poxg2a | /r/LocalLLaMA/comments/1poxg2a/democratizing_llm_pretraining_gumini1b_15b_open/ | false | false | self | 1 | null |
🐻 Democratizing LLM Pretraining: Gumini-1B & 1.5B Open Source Release | 1 | [removed] | 2025-12-17T14:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1poxdqh/democratizing_llm_pretraining_gumini1b_15b_open/ | Old_Elk5091 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poxdqh | false | null | t3_1poxdqh | /r/LocalLLaMA/comments/1poxdqh/democratizing_llm_pretraining_gumini1b_15b_open/ | false | false | self | 1 | null |
LangChain and LlamaIndex are in "steep decline" according to new ecosystem report. Anyone else quietly ditching agent frameworks? | 203 | So I stumbled on this LLM Development Landscape 2.0 report from Ant Open Source and it basically confirmed what I've been feeling for months.
LangChain, LlamaIndex and AutoGen are all listed as "steepest declining" projects by community activity over the past 6 months. The report says it's due to "reduced community investment from once dominant projects." Meanwhile stuff like vLLM and SGLang keeps growing.
Honestly this tracks with my experience. I spent way too long fighting with LangChain abstractions last year before I just ripped it out and called the APIs directly. Cut my codebase in half and debugging became actually possible. Every time I see a tutorial using LangChain now I just skip it.
But I'm curious if this is just me being lazy or if there's a real shift happening. Are agent frameworks solving a problem that doesn't really exist anymore now that the base models are good enough? Or am I missing something and these tools are still essential for complex workflows? | 2025-12-17T13:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pox733/langchain_and_llamaindex_are_in_steep_decline/ | Exact-Literature-395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pox733 | false | null | t3_1pox733 | /r/LocalLLaMA/comments/1pox733/langchain_and_llamaindex_are_in_steep_decline/ | false | false | self | 203 | null |
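For anyone wondering what "calling the APIs directly" looks like in practice, here is a minimal sketch (not from the report or the OP's codebase) that hits an OpenAI-compatible `/v1/chat/completions` endpoint, e.g. a local llama.cpp or vLLM server, with plain `requests`; the base URL and model name are placeholders.

```python
# Minimal "no framework" chat call against an OpenAI-compatible server
# (llama.cpp's llama-server, vLLM, SGLang, etc. expose this same endpoint).
import requests

BASE_URL = "http://localhost:8000/v1"   # placeholder: your local server
MODEL = "qwen2.5-7b-instruct"           # placeholder: whatever you serve

def chat(messages, temperature=0.7, max_tokens=512):
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat([
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize why vLLM is popular in one sentence."},
    ]))
```

Debugging this is a single HTTP request and response, which is essentially the argument people make for dropping the abstraction layers.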
Help me pick a model? 7800x3d, RTX 3080, 32gb RAM | 4 | I have a 7800X3D + 32GB RAM + RTX 3080 (10GB) setup and I’m looking for a model that would fit.
Current specs I am looking at are: 12-32b params, q4 quantization, 8k-32k context.
My main goal is to use this with something like aider or cline to work on python projects while I am away so tok/sec isn’t the highest priority compared to overall code quality.
Options I am looking at now: qwen 2.5 coder 14b, devstral 2 small, DeepSeek-V3.2-Lite, gpt oss 20b
Anything else to consider or are these the best to try? | 2025-12-17T13:51:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pox14p/help_me_pick_a_model_7800x3d_rtx_3080_32gb_ram/ | DK_Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pox14p | false | null | t3_1pox14p | /r/LocalLLaMA/comments/1pox14p/help_me_pick_a_model_7800x3d_rtx_3080_32gb_ram/ | false | false | self | 4 | null |
[Showcase] AGI-Llama: Bringing Modern LLMs to 1980s Sierra Adventure Games (Space Quest, King's Quest, etc.) | 80 | Hi everyone! 👋
I wanted to share a project I've been working on: **AGI-Llama**. It is a modern evolution of the classic NAGI (New Adventure Game Interpreter), but with a twist—I've integrated Large Language Models directly into the engine.
The goal is to transform how we interact with retro Sierra titles like *Space Quest*, *King's Quest*, or *Leisure Suit Larry*.
**What makes it different?**
* 🤖 **Natural Language Input:** Stop struggling with "verb noun" syntax. Talk to the game naturally.
* 🌍 **Play in any language:** Thanks to the LLM layer and new SDL\_ttf support, you can play classic AGI games in Spanish, French, Japanese, or any language the model supports.
* 🚀 **Modern Tech Stack:** Ported to **SDL3**, featuring GPU acceleration and Unicode support.
* 🧠 **Flexible Backends:** It supports `llama.cpp` for local inference (Llama 3, Qwen, Gemma), BitNet for 1.58-bit models, and Cloud APIs (OpenAI, Hugging Face, Groq).
It’s an experimental research project to explore the intersection of AI and retro gaming architecture. The LLM logic is encapsulated in a library that could potentially be integrated into other projects like ScummVM.
![AGI-Llama demo](https://github.com/jalfonsosm/agi-llm/blob/master/media/agiEnhanced.gif?raw=true)
**GitHub Repository:**[https://github.com/jalfonsosm/agi-llm](https://github.com/jalfonsosm/agi-llm)
I’d love to hear your thoughts, especially regarding async LLM implementation and context management for old adventure game states! | 2025-12-17T13:48:36 | https://v.redd.it/liiuhlouqr7g1 | Responsible_Fan_2757 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1powyhk | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/liiuhlouqr7g1/DASHPlaylist.mpd?a=1768571329%2CMjUzM2FlMmUwNmUyNjNmNzM4NGJjMzZiNmRmZTlmMTA5MTAyNWI0ZDY5YjEzOTUzNmFmZjljMzg0ZjE1MzQyNg%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/liiuhlouqr7g1/CMAF_360.mp4?source=fallback', 'has_audio': False, 'height': 360, 'hls_url': 'https://v.redd.it/liiuhlouqr7g1/HLSPlaylist.m3u8?a=1768571329%2COThjZWJkMzZhMDIzNGJiYTExYWQ4NjQyMDVlYjBkM2VhY2ZjNjU5YWVjZDVlODg1NjdkNzcwYzUwMmZkMGU1YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/liiuhlouqr7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 570}} | t3_1powyhk | /r/LocalLLaMA/comments/1powyhk/showcase_agillama_bringing_modern_llms_to_1980s/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi.png?width=108&crop=smart&format=pjpg&auto=webp&s=c75484ca7362019df7c48c05aa4170c481245077', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi.png?width=216&crop=smart&format=pjpg&auto=webp&s=65c42ee85605ce9f9d879c8174dd0e46d862f683', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi.png?width=320&crop=smart&format=pjpg&auto=webp&s=903cf00038a33caa265a3a08566a638ade940a1b', 'width': 320}, {'height': 404, 'url': 'https://external-preview.redd.it/YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi.png?width=640&crop=smart&format=pjpg&auto=webp&s=6cde696fee769ff0ec99b7ee400883b329f42dba', 'width': 640}], 'source': {'height': 404, 'url': 'https://external-preview.redd.it/YnNwcm9jcXVxcjdnMUIJV1A_49oRDbYi56Mr7om0CcsWx5OZR_t3Jj5ZXIGi.png?format=pjpg&auto=webp&s=a9591d10eed4efab20469f0839f50354b479bf3b', 'width': 640}, 'variants': {}}]} | |
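To illustrate the "natural language to verb-noun" idea, here is a rough Python sketch (not the project's actual code, which is C/SDL3) that asks a local OpenAI-compatible endpoint to compress free-form player input into a classic two-word AGI parser command; the endpoint, model name, and prompt wording are all assumptions.

```python
# Sketch: map free-form player input to a classic "verb noun" AGI command
# via a local OpenAI-compatible server (llama.cpp, Ollama, LM Studio, ...).
import requests

PROMPT = (
    "You translate player input for a 1980s Sierra AGI adventure.\n"
    "Reply with ONLY a lowercase two-word command like 'open door' or "
    "'take keycard'. Input: {text}"
)

def to_agi_command(text, base_url="http://localhost:8080/v1", model="local"):
    resp = requests.post(
        f"{base_url}/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT.format(text=text)}],
            "temperature": 0.0,
            "max_tokens": 8,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

# e.g. to_agi_command("could you please grab that shiny keycard on the floor?")
# -> "take keycard", which the classic AGI parser can then handle as usual
```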
Helper tool for the new llama.cpp --models-preset option | 11 | Hi everyone,
I wanted to share a simple tool I made to help me manage the new configuration file for the `--models-preset` option in llama-server.
[https://github.com/HxT9/llama.cpp-models-preset-manager](https://github.com/HxT9/llama.cpp-models-preset-manager)
I'll paste the features from the GitHub README here:
# Features
* **Model Management**:
* Add, edit, and remove AI models (can use multiple instances of the same model with different flags, just use different names).
* **Auto-Scan**: Quickly add multiple GGUF models by scanning a directory.
* **Configuration / Flags**:
* Assign specific command-line flags to each model (e.g., `c`, `ngl`, `mmproj`).
* Dropdown selection for a list of already used flags.
* **Persistence**:
* All data is saved automatically to a local SQLite database.
* Configuration export to `.ini` format for usage with llama-server --models-preset
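To make the Auto-Scan plus export flow concrete, here is a rough standalone sketch (not the tool's actual code) that walks a directory for `.gguf` files and writes them into an INI file with Python's `configparser`. The section and key names are illustrative assumptions only; check `llama-server --help` for the exact preset format it expects.

```python
# Illustrative only: scan a folder for GGUF models and dump an INI preset file.
# The real key names llama-server expects for --models-preset may differ.
import configparser
from pathlib import Path

def export_presets(model_dir: str, out_file: str = "models.ini") -> None:
    config = configparser.ConfigParser()
    for gguf in sorted(Path(model_dir).rglob("*.gguf")):
        name = gguf.stem                 # one section per model file
        config[name] = {
            "model": str(gguf),
            "ctx-size": "8192",          # example flags, adjust per model
            "n-gpu-layers": "99",
        }
    with open(out_file, "w") as f:
        config.write(f)

# export_presets("/models")  # then point llama-server --models-preset at models.ini
```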
https://preview.redd.it/rxfbrupbqr7g1.png?width=1185&format=png&auto=webp&s=3511bcb4d271fccf725055cb929dc4fbbc2e5525
| 2025-12-17T13:45:58 | https://www.reddit.com/r/LocalLLaMA/comments/1powwbu/helper_tool_for_the_new_llamacpp_modelspreset/ | AlbeHxT9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1powwbu | false | null | t3_1powwbu | /r/LocalLLaMA/comments/1powwbu/helper_tool_for_the_new_llamacpp_modelspreset/ | false | false | 11 | null | |
I built a biomimetic "Memory Server" for Agents (Go + Redis + Qdrant) that implements a forgetting curve. No more "infinite vector dump". | 1 | [removed] | 2025-12-17T13:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1powr13/i_built_a_biomimetic_memory_server_for_agents_go/ | Hour-Nebula-8653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1powr13 | false | null | t3_1powr13 | /r/LocalLLaMA/comments/1powr13/i_built_a_biomimetic_memory_server_for_agents_go/ | false | false | 1 | null | |
Is 3000EUR/3500USD a good price for Mac Studio M1 Ultra? | 0 | Hi,
I have been thinking of buying a machine for local AI inference and small dev tasks. Nothing too extreme and I don't want a huge electricity bill.
From my research, I'm leaning toward a Mac Studio M1 Ultra with 128GB unified memory and a 1TB SSD. It's out of stock everywhere, but I found one for 3000 EUR / 3500 USD and I don't know whether that is a good price or overpriced?
Thanks in advance | 2025-12-17T13:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1powj7q/is_3000eur3500usd_a_good_price_for_mac_studio_m1/ | dabiggmoe2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1powj7q | false | null | t3_1powj7q | /r/LocalLLaMA/comments/1powj7q/is_3000eur3500usd_a_good_price_for_mac_studio_m1/ | false | false | self | 0 | null |
anthropic blog on code execution for agents. 98.7% token reduction sounds promising for local setups | 133 | anthropic published this detailed blog about "code execution" for agents: [https://www.anthropic.com/engineering/code-execution-with-mcp](https://www.anthropic.com/engineering/code-execution-with-mcp)
instead of direct tool calls, model writes code that orchestrates tools
they claim massive token reduction. like 150k down to 2k in their example. sounds almost too good to be true
basic idea: dont preload all tool definitions. let model explore available tools on demand. data flows through variables not context
for local models this could be huge. context limits hit way harder when youre running smaller models
the privacy angle is interesting too. sensitive data never enters model context, flows directly between tools
cloudflare independently discovered this "code mode" pattern according to the blog
main challenge would be sandboxing. running model-generated code locally needs serious isolation
but if you can solve that, complex agents might become viable on consumer hardware. 8k context instead of needing 128k+
tools like cursor and verdent already do basic code generation. this anthropic approach could push that concept way further
wondering if anyone has experimented with similar patterns locally | 2025-12-17T13:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1powhy6/anthropic_blog_on_code_execution_for_agents_987/ | Zestyclose_Ring1123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1powhy6 | false | null | t3_1powhy6 | /r/LocalLLaMA/comments/1powhy6/anthropic_blog_on_code_execution_for_agents_987/ | false | false | self | 133 | {'enabled': False, 'images': [{'id': 'FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=108&crop=smart&auto=webp&s=eca108f218964265924a6446a264bfa9a8bea60d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=216&crop=smart&auto=webp&s=673f45483685c51362a507301e208fd5d50e4ce7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=320&crop=smart&auto=webp&s=cbc4935f9c2ba188e7ff7e96c159924230e29cae', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=640&crop=smart&auto=webp&s=c6eb7cf4d59da524c4e3886986a14aaec26571dd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=960&crop=smart&auto=webp&s=cf61c3eb25e4b033ad1e06ff637289e7287f1b6b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?width=1080&crop=smart&auto=webp&s=0335c7bee416689d573e88aaa7c62d79312818eb', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/FfiXHhuGp3mnN4lgdhPaFUw0S7MgoPyKdqBvUR86IPQ.png?auto=webp&s=2b5c52e13d47743da32ca26285b9f6db25e2a705', 'width': 2400}, 'variants': {}}]} |
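Here is a toy, heavily simplified sketch of the pattern as described in the blog (not Anthropic's actual implementation): the model is shown a couple of tool functions, asked to emit a short script, and only the script's small printed result, not the raw tool payloads, goes back into context. The "sandbox" here is a bare `exec` with a restricted namespace, which is nowhere near production-safe.

```python
# Toy "code mode" loop: the model writes code that calls tools; only the
# script's final print() output re-enters the model's context.
import io, contextlib

def search_docs(query):          # stand-in tool: imagine this returns 50k tokens
    return ["doc-%d about %s" % (i, query) for i in range(10_000)]

def count_matches(docs, word):   # stand-in tool
    return sum(word in d for d in docs)

def run_model_code(code: str) -> str:
    # WARNING: exec() is not a sandbox. Real setups need process/VM isolation.
    allowed = {"search_docs": search_docs, "count_matches": count_matches}
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"__builtins__": {"print": print, "len": len}}, allowed)
    return buf.getvalue()

# Something a model might emit instead of two huge tool-call round trips:
model_generated = """
docs = search_docs("mixture of experts")
print("matching docs:", count_matches(docs, "experts"))
"""
print(run_model_code(model_generated))   # only this short string hits the context
```

The token savings come from the intermediate `docs` list living in a variable instead of being serialized into the prompt.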
I built a biomimetic "Memory Server" for Agents (Go + Redis + Qdrant) that implements a forgetting curve. No more "infinite vector dump". | 1 | [removed] | 2025-12-17T13:26:14 | https://www.reddit.com/r/LocalLLaMA/comments/1powgud/i_built_a_biomimetic_memory_server_for_agents_go/ | Hour-Nebula-8653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1powgud | false | null | t3_1powgud | /r/LocalLLaMA/comments/1powgud/i_built_a_biomimetic_memory_server_for_agents_go/ | false | false | 1 | null | |
Peak LLM Wars: Xiaomi Blocks Kimi Employees on Twitter | 129 | 2025-12-17T13:14:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pow797/peak_llm_wars_xiaomi_blocks_kimi_employees_on/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pow797 | false | null | t3_1pow797 | /r/LocalLLaMA/comments/1pow797/peak_llm_wars_xiaomi_blocks_kimi_employees_on/ | false | false | 129 | null | ||
How is the AI support for AMD GPUs? Any tips for a newcomer? | 5 | I have an RX 9070 16GB and I'm curious about the state of AI support for it.
This is my first AMD GPU; I've only had Nvidia before.
I decided to buy before prices climb further with RAM getting more expensive. I use Windows and, to be honest, it doesn't look very easy to get things working.
I tried to see whether I could run image and video generators, but had no luck. I did manage to get text generation working using LM Studio | 2025-12-17T13:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pow5sw/how_its_the_ai_support_for_amd_gpu_any_type_for_a/ | Centurionzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pow5sw | false | null | t3_1pow5sw | /r/LocalLLaMA/comments/1pow5sw/how_its_the_ai_support_for_amd_gpu_any_type_for_a/ | false | false | self | 5 | null
Reposting honestly: building a visual “brain debugger” for local LLM agents, would you use this? | 1 | [removed] | 2025-12-17T12:54:51 | AdVivid5763 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1povt3e | false | null | t3_1povt3e | /r/LocalLLaMA/comments/1povt3e/reposting_honestly_building_a_visual_brain/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'iyn3z4klhr7g1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=108&crop=smart&auto=webp&s=53c84b871331f757829800491665c29f9c632e2f', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=216&crop=smart&auto=webp&s=7faaa90cd9c57f765bb3a7ec710ad5a10b82b354', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=320&crop=smart&auto=webp&s=80d375cf6476ed9e46f0d89c2fba8081305b216c', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=640&crop=smart&auto=webp&s=0ab8b7aaceff06d7ab108f31a4f12abd199d097b', 'width': 640}, {'height': 606, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=960&crop=smart&auto=webp&s=2524b3c762951ea8ad4484b82d5a10c51299b9e1', 'width': 960}, {'height': 681, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?width=1080&crop=smart&auto=webp&s=f97aa2f5d249e87ad8b4f28081a2eebe9df7b441', 'width': 1080}], 'source': {'height': 1552, 'url': 'https://preview.redd.it/iyn3z4klhr7g1.jpeg?auto=webp&s=2005b4b7f7a0b317c20004ef565c7cd02d118f69', 'width': 2458}, 'variants': {}}]} | |
Built a GPU-first local LLM rig… turns out the CPU is why it actually works | 0 | I built what I thought would be a GPU-first local LLM machine (RTX 4000 Ada). In practice, my workflow mixes multiple models (GPT-OSS 120B, Mixtral, Qwen, Mistral) across extraction, categorization, anonymization, and generation.
Trying to juggle that on a small GPU worked briefly and then slowly fell apart — VRAM fragmentation, allocator errors, random failures over time.
What surprised me is that the CPU ended up doing the real work.
Specs:
* CPU: AMD EPYC 9124 (16-core Zen 4) — \~£460 used (March 2025)
* RAM: 96 GB DDR5-4800 ECC, \~USD 350 incl. VAT + shipping, March 2025 (≈ < USD 100 per stick)
* Platform: Supermicro board
* Stack: Linux, Docker, llama.cpp
With llama.cpp I’m seeing up to \~22 tokens/sec on a 120B model (MXFP4) on CPU — and more importantly, it’s stable. I can run unattended, multi-step jobs for hours with no degradation or crashes.
The real win seems to be 12-channel DDR5 bandwidth. Once models don’t fit in VRAM, memory bandwidth and predictable allocation matter more than raw GPU speed.
I still use the GPU for fast chat/RAG, but for real batch work, the EPYC is what makes the system viable.
Anyone else move away from a GPU-only mindset and end up CPU-first? | 2025-12-17T12:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1povp3t/built_a_gpufirst_local_llm_rig_turns_out_the_cpu/ | Swimming_Cover_9686 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1povp3t | false | null | t3_1povp3t | /r/LocalLLaMA/comments/1povp3t/built_a_gpufirst_local_llm_rig_turns_out_the_cpu/ | false | false | self | 0 | null |
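Rough napkin math on why memory bandwidth dominates here (my numbers, not the OP's): theoretical peak for 12 channels of DDR5-4800 is about 460 GB/s, and for a 120B-class MoE you only stream the active experts per token. Treat the active-parameter count and bytes-per-weight below as assumptions; real throughput lands well under the ceiling because of NUMA/CCD limits, KV cache traffic, and compute overhead.

```python
# Back-of-envelope decode ceiling from memory bandwidth (all values assumptions).
channels, mts, bytes_per_transfer = 12, 4800e6, 8
peak_bw = channels * mts * bytes_per_transfer          # ~460.8 GB/s theoretical

active_params = 5.1e9        # assumed active params/token for a 120B-class MoE
bytes_per_weight = 0.55      # ~4-bit weights plus scales/overhead, assumed
bytes_per_token = active_params * bytes_per_weight     # ~2.8 GB read per token

ceiling_tps = peak_bw / bytes_per_token
print(f"peak bandwidth: {peak_bw/1e9:.0f} GB/s")
print(f"upper-bound decode speed: {ceiling_tps:.0f} tok/s")
# Observed ~22 tok/s is a small fraction of this ceiling, which is typical once
# real-world effective bandwidth and non-weight traffic are accounted for.
```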
How are you debugging local agents when the final answer looks fine but the middle is cursed? | 0 | I’ve been playing with local agents on Llama (Ollama / llama.cpp style) with tools and multi-step workflows, and the thing that hurts the most isn’t model quality, it’s debugging.
Example: I have a simple “book a flight” agent. search\_flights returns an empty list, the agent still calls payment\_api with basically no data, and then happily tells the user “you’re booked, here’s your confirmation number”.
When I look at raw logs / JSON traces, I can see what happened, but it still feels like I’m reverse-engineering the reasoning in my head every time.
Out of frustration I hacked a tiny “cognition debugger” for myself: it ingests a trace, shows the steps as a graph, and flags weird decisions.
In the screenshot, it highlights the step where the agent decides to continue despite flights: \[\] and explains why that’s suspicious based on the previous tool call.
I’m curious how other people running local agents are dealing with this.
Are you just dumping everything to the console? Using existing observability tools? Rolling your own visualizers?
If a small visual debugger for local agents sounds useful, I can drop the link in the comments and would love honest feedback / “this is useless because X” takes. | 2025-12-17T12:34:12 | AdVivid5763 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1povf10 | false | null | t3_1povf10 | /r/LocalLLaMA/comments/1povf10/how_are_you_debugging_local_agents_when_the_final/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'k4hbhluwdr7g1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=108&crop=smart&auto=webp&s=c8e7b8814e79b076172a49e55212877d436cc6a7', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=216&crop=smart&auto=webp&s=4aa701a9d23f0bd9c5e5c69b652bfb4c0f88b2b0', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=320&crop=smart&auto=webp&s=46a01dd8c02f3cec17ca15c4ec2c2601fb68c4b7', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=640&crop=smart&auto=webp&s=1420684bc1910e72e00bf33aa96f44298f7c6080', 'width': 640}, {'height': 606, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=960&crop=smart&auto=webp&s=72f5a165e9159f83db8dbc5ca5d2685e997203ab', 'width': 960}, {'height': 681, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?width=1080&crop=smart&auto=webp&s=3e8b4cd688cb529c0547e292b490cca19e5e1e6a', 'width': 1080}], 'source': {'height': 1552, 'url': 'https://preview.redd.it/k4hbhluwdr7g1.jpeg?auto=webp&s=a21cc68110f3f719b79b847f47997c44f757b44c', 'width': 2458}, 'variants': {}}]} | |
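The "continued despite empty results" check the OP describes can be expressed as a very small rule over a trace. Here is a rough sketch of how such a rule might look (purely illustrative, not the OP's tool); the trace schema is an assumption.

```python
# Tiny trace lint: flag steps where a tool returned nothing but the agent
# went on to call another tool as if it had data.
def flag_suspicious_steps(trace):
    """trace: list of dicts like {"type": "tool", "name": ..., "output": ...}"""
    findings = []
    for i, step in enumerate(trace[:-1]):
        nxt = trace[i + 1]
        empty = step["type"] == "tool" and step.get("output") in ([], {}, "", None)
        if empty and nxt["type"] == "tool":
            findings.append(
                f"step {i}: {step['name']} returned no data, "
                f"but the agent still called {nxt['name']}"
            )
    return findings

trace = [
    {"type": "tool", "name": "search_flights", "output": []},
    {"type": "tool", "name": "payment_api", "output": {"status": "ok"}},
    {"type": "message", "name": "assistant", "output": "You're booked!"},
]
print(flag_suspicious_steps(trace))
```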
Experiment: I built a local Python agent (using Ollama) that runs and fixes its own code. Here is a Ray Tracer it wrote from scratch. | 0 | Hi r/LocalLLaMA,
I wanted to see if local models (like Llama 3 or Qwen 2.5) are smart enough to handle a full "coding agent" loop without relying on cloud APIs.
I wrote a Python system that:
1. Takes a prompt (e.g., "Write a Ray Tracer").
2. Generates the code using a local LLM via Ollama/LM Studio.
3. **Executes the code locally.**
4. If it crashes, it captures the traceback (stderr), feeds it back to the model, and asks it to fix the error.
5. Repeats until the script runs successfully.
**The Result:**
I tested it on a Ray Tracer. It took a few failed attempts (which it self-corrected), but eventually, it produced a working PPM render without me writing a single line of the actual logic.
It’s fascinating to see how well local models can handle this "self-healing" loop when given the right context.
Has anyone else tried building self-correcting loops with local models? I’d love to hear what prompt strategies worked for you.
(I put a link to the project in my bio if you are curious about the code structure). | 2025-12-17T12:29:58 | Alone-Competition863 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1povc53 | false | null | t3_1povc53 | /r/LocalLLaMA/comments/1povc53/experiment_i_built_a_local_python_agent_using/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '46vh0x7zcr7g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=108&crop=smart&auto=webp&s=3a00f1bc70853800464f9f887b4ce96a7256aa40', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=216&crop=smart&auto=webp&s=09b47d4fe8f131cfa27505a5e2c351f4cdcee2aa', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=320&crop=smart&auto=webp&s=369a22f2555ad022af6d52ee43715ca2774f9cf3', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=640&crop=smart&auto=webp&s=c86a1d2e36f60609686d1f7ce68782fb42bbc5b9', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=960&crop=smart&auto=webp&s=d3512ccefbba79f64c29d66eb952d4da826d377f', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?width=1080&crop=smart&auto=webp&s=b70aedaf366a37e146eafe46a11caf4678796d05', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/46vh0x7zcr7g1.png?auto=webp&s=64ea37ee01573a06444a437a68d6903543bc4fd8', 'width': 1920}, 'variants': {}}]} | |
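For anyone who wants to try the same loop without cloning anything, here is a rough minimal version of the generate, run, feed-the-traceback-back cycle against Ollama's local API; the model name and prompts are placeholders, and running generated code like this should only be done in a throwaway sandbox or VM.

```python
# Minimal self-correcting code loop against a local Ollama server.
# DANGER: executes model-generated code; run only in an isolated environment.
import re, subprocess, sys, requests

OLLAMA = "http://localhost:11434/api/chat"
MODEL = "qwen2.5-coder:7b"   # placeholder

def ask(messages):
    r = requests.post(OLLAMA, json={"model": MODEL, "messages": messages,
                                    "stream": False}, timeout=600)
    r.raise_for_status()
    return r.json()["message"]["content"]

def extract_code(text):
    m = re.search(r"```(?:python)?\n(.*?)```", text, re.S)
    return m.group(1) if m else text

def solve(task, max_attempts=5):
    messages = [{"role": "user", "content": f"Write a Python script: {task}. "
                                            "Reply with one code block only."}]
    for attempt in range(max_attempts):
        code = extract_code(ask(messages))
        run = subprocess.run([sys.executable, "-c", code],
                             capture_output=True, text=True, timeout=120)
        if run.returncode == 0:
            return code, run.stdout
        messages += [{"role": "assistant", "content": code},
                     {"role": "user", "content": "It crashed:\n"
                      + run.stderr[-2000:] + "\nFix it, full code block only."}]
    raise RuntimeError("no working solution after retries")

# code, out = solve("render a small ray-traced sphere to out.ppm")
```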
Cheap-ish tuning setup | 7 | Hello! I want to try to tune small useful models (7b or so) and I'm planning to buy a PC for it, but I don't see a lot of info about it. So I have a few questions; I hope you can help me.
1. Is it possible to tune them on macs 48-64gb? I understand it's going to be pretty slow, but how slow? Few days or few weeks?
2. Is it possible to tune them on two 5060ti? If not, is it because of speed or ram?
3. Are two 5070ti or two 5080 going to be much faster?
4. Are there any other options for under $3k without used parts? | 2025-12-17T12:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/1povb59/cheapish_tuning_setup/ | InspirationSrc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1povb59 | false | null | t3_1povb59 | /r/LocalLLaMA/comments/1povb59/cheapish_tuning_setup/ | false | false | self | 7 | null |
Distilling Kimi Delta Attention into AFM-4.5B | 25 | Blog: [https://www.arcee.ai/blog/distilling-kimi-delta-attention-into-afm-4-5b-and-the-tool-we-used-to-do-it](https://www.arcee.ai/blog/distilling-kimi-delta-attention-into-afm-4-5b-and-the-tool-we-used-to-do-it)
Weight: [AFM-4.5B-Base-KDA-NoPE](https://huggingface.co/arcee-ai/AFM-4.5B-Base-KDA-NoPE)
[AFM-4.5B-Base-KDA-Only](https://huggingface.co/arcee-ai/AFM-4.5B-Base-KDA-Only) | 2025-12-17T12:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pov5qe/distilling_kimi_delta_attention_into_afm45b/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pov5qe | false | null | t3_1pov5qe | /r/LocalLLaMA/comments/1pov5qe/distilling_kimi_delta_attention_into_afm45b/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=108&crop=smart&auto=webp&s=c0aaf495b483b54d72b9ad26f52f4d64d388f09e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=216&crop=smart&auto=webp&s=107532a012280aa12752df49ec2b82a201b441ef', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=320&crop=smart&auto=webp&s=c8583c529cff81dcba20129fd79b222dc31c5b0e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=640&crop=smart&auto=webp&s=6335ffe886d529be8b17d1e627b9020762075691', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=960&crop=smart&auto=webp&s=3d34a3c826f8b83378f991486842326fe26c2097', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?width=1080&crop=smart&auto=webp&s=979aa27340f750a5b1ce76ded53e0f42ccb42f17', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/32JrnvGnHC50sX57LdWjTu64sEn5yb-29GWDoQSVArw.png?auto=webp&s=e024a740a72eb68d4bd8effa25b0aedf4326c6b8', 'width': 1920}, 'variants': {}}]} |
is gpt-oss 20B good enough to power a voice agent? | 1 | [removed] | 2025-12-17T12:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pov2wr/is_gptoss_20b_good_enough_to_power_a_voice_agent/ | No-Jackfruit4012 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pov2wr | false | null | t3_1pov2wr | /r/LocalLLaMA/comments/1pov2wr/is_gptoss_20b_good_enough_to_power_a_voice_agent/ | false | false | self | 1 | null |
Meta released a single model for translation between 200 languages | 0 | Meta released the NLLB-200 (No Language Left Behind) model and the FLORES-200 evaluation dataset. They claim that
> NLLB-200 exceeds the previous state of the art by an average of 44 percent.
Also, some of the covered languages could never be machine-translated at all with popular translation tools before.
[https://ai.meta.com/blog/nllb-200-high-quality-machine-translation/](https://ai.meta.com/blog/nllb-200-high-quality-machine-translation/) | 2025-12-17T11:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pouljo/meta_released_single_model_for_translation/ | Warm-Professor-9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pouljo | false | null | t3_1pouljo | /r/LocalLLaMA/comments/1pouljo/meta_released_single_model_for_translation/ | false | false | self | 0 | null |
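If you want to try it locally, the distilled 600M checkpoint on Hugging Face is an easy starting point; a rough sketch with `transformers` is below. Language codes are FLORES-200 style, and the exact helper for looking up the target-language token can vary between transformers versions.

```python
# Quick local test of NLLB-200 (distilled 600M) with Hugging Face transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-distilled-600M"
tok = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "No language left behind means exactly that."
inputs = tok(text, return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tok.convert_tokens_to_ids("fra_Latn"),  # target: French
    max_new_tokens=64,
)
print(tok.batch_decode(out, skip_special_tokens=True)[0])
```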
Noob question: Using a completely uncensored AI / LLM? | 0 | Please explain this to me like I’m a 5-year-old, because I want to get into the topic and there are certainly many people here who know and can do this far better than I can.
**Goal:** I want to have a completely uncensored AI / LLM / chatbot that answers all questions, no matter what.
**Current knowledge:** I only know the typical “for a school project” excuse, which hasn’t worked for ages anyway.
So the question is: Are there specific AI models? Self-hosting? Tricks or prompts?
It should, of course, work reliably and be simple to use. Hardware is available.
Many thanks to everyone, and already wishing you a Merry Christmas! :) | 2025-12-17T11:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pouk30/noob_question_using_a_completely_uncensored_ai_llm/ | Party-Log-1084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pouk30 | false | null | t3_1pouk30 | /r/LocalLLaMA/comments/1pouk30/noob_question_using_a_completely_uncensored_ai_llm/ | false | false | self | 0 | null |
Which video card for neural networks should I choose for my home? | 2 | I'm using an RTX 3050 8gb, but I crave more. Which video cards don't have sky-high prices. | 2025-12-17T11:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/1poucex/which_video_card_for_neural_networks_should_i/ | romyxr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poucex | false | null | t3_1poucex | /r/LocalLLaMA/comments/1poucex/which_video_card_for_neural_networks_should_i/ | false | false | self | 2 | null |
How futureproof is this setup? | 1 | [removed] | 2025-12-17T10:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1potphh/how_futureproof_is_this_setup/ | jack-in-the-sack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1potphh | false | null | t3_1potphh | /r/LocalLLaMA/comments/1potphh/how_futureproof_is_this_setup/ | false | false | self | 1 | null |
I built a fully local Flask chatbot with memory, strict mode, and optional OpenAI | 0 | I built a complete Flask chatbot system designed for real projects, not tutorials.
The goal was simple:
a chatbot you can **run locally**, **understand entirely**, and **deploy for clients** without being locked into SaaS tools or opaque services.
Everything works **offline by default**, with OpenAI available only if you explicitly enable it.
**What it includes:**
* Robust Flask backend
* Full web interface (`/ui`)
* Floating widget embeddable on any site
* Persistent conversation history (SQLite, per session)
* Local JSON knowledge base
* Light / Dark UI, typing animation
* Browser-side message history
**Three usage modes:**
* **Local mode** (no API key, JSON knowledge base only)
* **OpenAI mode** (optional, via `.env`)
* **Strict mode**: answers only from internal data (enterprise-safe, no hallucinations)
**Deployment options:**
* Local (`python app.py`)
* Shared hosting (Passenger)
* VPS / Docker / Nginx
No external services are required:
* No cloud
* No SaaS
* No tracking
* No API calls unless OpenAI is enabled
Conversation memory improves coherence **within a session**,
but there is **no automatic learning** or data reuse.
This isn’t a script, it’s a reusable architecture meant for:
* client work
* agencies
* educators
* SaaS or micro-SaaS foundations
* anyone wanting full control over their chatbot stack
Not claiming this replaces existing tools, just sharing the build and what I learned from designing a local-first chatbot architecture. | 2025-12-17T10:29:55 | https://v.redd.it/hfx0hmdnrq7g1 | Several-Jacket-9801 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pot9gh | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hfx0hmdnrq7g1/DASHPlaylist.mpd?a=1768559411%2CNTZmNmQ5NjM3MDE5ZmM1NDA3NDU2NjM1MzE2MGU0ZGZjODMzZGZlYTY0OTBjMTg0ZjI5OWM5MWRhNDc4YjU5Mg%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/hfx0hmdnrq7g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hfx0hmdnrq7g1/HLSPlaylist.m3u8?a=1768559411%2CNzEyZGRlMDYzMmRiYzY1MDY4ZDQ0MTU5ZGU1ZmU0NzBhMTAzZTU1M2NhODM3MTBiMzhjZDQ1NDY2M2M1NTIwNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hfx0hmdnrq7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1pot9gh | /r/LocalLLaMA/comments/1pot9gh/i_built_a_fully_local_flask_chatbot_with_memory/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=108&crop=smart&format=pjpg&auto=webp&s=666c891ad20150daf25e471ee454949cfbf44acb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=216&crop=smart&format=pjpg&auto=webp&s=5ddd4d91bfcc3812e2091457c86200dc6b10403b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=320&crop=smart&format=pjpg&auto=webp&s=7617ae741de67bcd323e299bdfafafb030d7a69f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=640&crop=smart&format=pjpg&auto=webp&s=6347b27e379f76fe4abccee48ba7730dedc26397', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=960&crop=smart&format=pjpg&auto=webp&s=4cf026e4719185e72a43ebcf91f60d4b9399739d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fd01555f8540c5574ff4b6cda008a6ba4daec9b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YjBjc21yZG5ycTdnMfwJ8Gt8iila-SeI5RVLw77oeRparwftRw0Z3DSID2od.png?format=pjpg&auto=webp&s=418511c9e1c6e70982d5390a293dd13d3788b78e', 'width': 1920}, 'variants': {}}]} | |
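To give a feel for the architecture, here is a heavily stripped-down sketch of the core pieces (per-session memory in SQLite plus a strict local-only answer path). This is my own illustration, not the OP's code, and the knowledge-base matching is deliberately naive; file names and the payload shape are assumptions.

```python
# Stripped-down sketch: Flask chat endpoint, per-session SQLite history,
# strict mode answering only from a local JSON knowledge base.
import json, sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
KB = json.load(open("knowledge.json"))   # e.g. {"pricing": "Plans start at ..."}

def db():
    con = sqlite3.connect("chat.db")
    con.execute("CREATE TABLE IF NOT EXISTS msgs(session TEXT, role TEXT, text TEXT)")
    return con

def local_answer(question: str) -> str:
    q = question.lower()
    hits = [v for k, v in KB.items() if k.lower() in q]
    return hits[0] if hits else "I don't have that in my internal data."  # strict mode

@app.post("/chat")
def chat():
    payload = request.get_json()
    session, text = payload["session_id"], payload["message"]
    answer = local_answer(text)
    con = db()
    con.executemany("INSERT INTO msgs VALUES(?,?,?)",
                    [(session, "user", text), (session, "bot", answer)])
    con.commit(); con.close()
    return jsonify({"reply": answer})

# flask --app app run   (then POST {"session_id": "abc", "message": "pricing?"})
```

Swapping `local_answer` for an OpenAI (or any OpenAI-compatible local) call is the only change needed for the non-strict modes.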
I wanted a mobile vector DB that runs on-device and lets me BYO OpenAI-compatible endpoint. So I built one. | 1 | [removed] | 2025-12-17T09:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/1posjt2/i_wanted_a_mobile_vector_db_that_runs_ondevice/ | admiralakber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1posjt2 | false | null | t3_1posjt2 | /r/LocalLLaMA/comments/1posjt2/i_wanted_a_mobile_vector_db_that_runs_ondevice/ | false | false | self | 1 | null |
Microsoft's TRELLIS 2-4B, An Open-Source Image-to-3D Model | 1,124 | Model Details
* **Model Type:** Flow-Matching Transformers with Sparse Voxel based 3D VAE
* **Parameters:** 4 Billion
* **Input:** Single Image
* **Output:** 3D Asset
Model - [https://huggingface.co/microsoft/TRELLIS.2-4B](https://huggingface.co/microsoft/TRELLIS.2-4B)
Demo - [https://huggingface.co/spaces/microsoft/TRELLIS.2](https://huggingface.co/spaces/microsoft/TRELLIS.2)
Blog post - [https://microsoft.github.io/TRELLIS.2/](https://microsoft.github.io/TRELLIS.2/) | 2025-12-17T08:49:00 | https://v.redd.it/g8uco5dq8q7g1 | Dear-Success-1441 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1porpwd | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g8uco5dq8q7g1/DASHPlaylist.mpd?a=1768553356%2CMDE0NTIzNTI2ZDQ4NjA4M2ViMmNkMTM1YWY4NGFkNmNlN2Q0MTA1YjYyZDUwOWE2N2Y0ZWQyNWFhOWRlMjQ1YQ%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/g8uco5dq8q7g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/g8uco5dq8q7g1/HLSPlaylist.m3u8?a=1768553356%2CZmVmNWU3NDRjZWMyN2NhZjdlMDI5NDNkYmM5NjQyY2ZjZDRmZGVkZTA4MzE3MTdhYzg5ZTI4YTcxYzdkYTk4Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/g8uco5dq8q7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1314}} | t3_1porpwd | /r/LocalLLaMA/comments/1porpwd/microsofts_trellis_24b_an_opensource_imageto3d/ | false | false | 1,124 | {'enabled': False, 'images': [{'id': 'OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=108&crop=smart&format=pjpg&auto=webp&s=ccc3f444d5c46adec9ec37cdea320462cce220bb', 'width': 108}, {'height': 177, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=216&crop=smart&format=pjpg&auto=webp&s=b18ee1e13399cdf607554b453e1129a231be78d7', 'width': 216}, {'height': 263, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d8afabd9863edff4abfb8b8f1e949a58201e0ce', 'width': 320}, {'height': 526, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=640&crop=smart&format=pjpg&auto=webp&s=31e6d98cd921551dabc85231552c798c93633bd4', 'width': 640}, {'height': 789, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=960&crop=smart&format=pjpg&auto=webp&s=e7397ade89d1c7772e96fcaf43222477dd896b55', 'width': 960}, {'height': 887, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=120e0c1efd24d71bedc50219ffa32eca617dff0b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OXpuN3VqYnE4cTdnMbhg7mfH3BLNBAJzBcqwf-BeiskbYrfqW4XgiIx-FQh0.png?format=pjpg&auto=webp&s=d2d123a62770ab739dc08db8277713555ca0411e', 'width': 1314}, 'variants': {}}]} | |
Multi-agent setups locally get messy fast, how are you handling state? | 0 | I’ve been running mostly local models for agent-style workflows (planner → executor → reviewer), and the models themselves are honestly the easy part. The hard part is everything around them once the workflow isn’t a single shot.
As soon as there are retries, branches, or tools involved, state gets split between prompts, intermediate files, and bits of glue code. Debugging usually means piecing together what happened from logs instead of being able to reason about the system.
I’ve been experimenting with keeping an explicit shared spec/state that agents read from and write to, instead of passing everything implicitly through prompts. I’ve been testing this with a small orchestration tool called Zenflow to see if it helps, but I’m still very much figuring out what the “right” pattern is, especially for local-only setups.
Curious how others here are doing this. Are you rolling your own state handling, using something like LangGraph/AutoGen locally, or keeping things intentionally simple?
[http://zenflow.free/](http://zenflow.free/) | 2025-12-17T08:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pornm9/multiagent_setups_locally_get_messy_fast_how_are/ | GrouchyManner5949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pornm9 | false | null | t3_1pornm9 | /r/LocalLLaMA/comments/1pornm9/multiagent_setups_locally_get_messy_fast_how_are/ | false | false | self | 0 | null |
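A very small version of the "explicit shared spec/state" idea, just to make it concrete (my own sketch, unrelated to Zenflow): every agent reads the same JSON file, appends what it did, and the next stage picks up from there, so a failed run can be inspected or resumed from disk instead of reconstructed from prompts and logs.

```python
# Minimal explicit shared state for a planner -> executor -> reviewer pipeline.
import json, time
from pathlib import Path

STATE = Path("run_state.json")

def load():
    return json.loads(STATE.read_text()) if STATE.exists() else {
        "goal": None, "plan": [], "results": [], "log": []}

def save(state, stage, note):
    state["log"].append({"stage": stage, "note": note, "ts": time.time()})
    STATE.write_text(json.dumps(state, indent=2))

state = load()
state["goal"] = "summarize repo issues"
state["plan"] = ["fetch issues", "cluster them", "draft summary"]   # planner output
save(state, "planner", "wrote 3-step plan")

state = load()                                # executor starts from disk, not prompt
state["results"].append({"step": "fetch issues", "ok": True, "items": 42})
save(state, "executor", "finished step 1")
# The reviewer (or a human) can now open run_state.json and see exactly what happened.
```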
First attempt with a local LLM | 2 | I've never done anything like this before, so any advice is welcome. My goal is to run my AI locally, and I wanted to save money, so I went open source.
I created a simple app, uploaded some memory context, wrote the system prompt, and added the latest Nous Hermes model via the Hugging Face API. I'm not sure whether it's a model limitation, but the AI is very stiff. I've set temperature, repetition penalty, and dos and don'ts, and it's also hallucinating nonstop. It's like talking to a generic assistant that's being forced to be something it's not.
I've modified the system prompt a few times with no improvement. My next step is to build a semantic indexer so it will hopefully have better memory and context.
Is this issue I am experiencing a limitation of a free model or is it this specific model? Am I missing something? | 2025-12-17T08:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1porbyw/fist_attempt_with_local_llm/ | missbella_91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1porbyw | false | null | t3_1porbyw | /r/LocalLLaMA/comments/1porbyw/fist_attempt_with_local_llm/ | false | false | self | 2 | null |
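If the model is being called through the Hugging Face Inference API, a chat-style call with an explicit system prompt and sampling parameters looks roughly like the sketch below. The model id is only an example, and the exact client helper (and whether a given model is hosted for chat at all) varies by huggingface_hub version, so treat the details as assumptions.

```python
# Rough sketch of a chat call via huggingface_hub; model id is an example and
# the chat_completion helper may differ between huggingface_hub versions.
from huggingface_hub import InferenceClient

client = InferenceClient(model="NousResearch/Hermes-3-Llama-3.1-8B", token="hf_...")

resp = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are Mira: warm, playful, never robotic. "
                                      "Only state facts you were given; say 'I don't "
                                      "know' instead of inventing details."},
        {"role": "user", "content": "What did we talk about yesterday?"},
    ],
    max_tokens=300,
    temperature=0.8,
)
print(resp.choices[0].message.content)
```

Much of the "stiffness" and hallucination usually comes down to the system prompt and whether the serving stack applies the model's chat template correctly, rather than the model being free.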
Gemini 3 Flash today! Gemma 4 soon, 3 Pro GA soon!!!! | 0 | Yes, today Logan announced Gemini 3.0 Flash, and it beats 3.0 Pro Preview. I really want 3.0 Flash and Gemma 4, but also 3.0 Pro GA! Who else here wants them? 👇🏼 | 2025-12-17T08:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1por8my/gemini_3_flash_today_gemma_4_soon_3_pro_ga_soon/ | BasketFar667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1por8my | false | null | t3_1por8my | /r/LocalLLaMA/comments/1por8my/gemini_3_flash_today_gemma_4_soon_3_pro_ga_soon/ | false | false | self | 0 | null
Qwen 80B is so nice | 29 | Qwen 80B knows that flattery will get you everywhere
https://preview.redd.it/n2ubzp36zp7g1.png?width=1893&format=png&auto=webp&s=18baab935fbd87270327be41a8cf47fe5342b320
| 2025-12-17T07:53:55 | https://www.reddit.com/r/LocalLLaMA/comments/1poqvyh/qwen_80b_is_so_nice/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poqvyh | false | null | t3_1poqvyh | /r/LocalLLaMA/comments/1poqvyh/qwen_80b_is_so_nice/ | false | false | 29 | null | |
Took Nexus AI Station to the AMD Embedded Summit | 0 | Just came back from the AMD Embedded Summit (Dec 16–17). We showed Nexus AI Station, basically a machine for running LLMs and AI at the edge, fully local, real-time, no cloud required.
Had a lot of good chats with people building embedded and edge AI stuff. Super interesting to see what everyone’s working on. If you’re in this space, would love to swap notes. | 2025-12-17T07:45:37 | https://www.reddit.com/gallery/1poqraw | Expensive_Chest_2224 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1poqraw | false | null | t3_1poqraw | /r/LocalLLaMA/comments/1poqraw/took_nexus_ai_station_to_the_amd_embedded_summit/ | false | false | 0 | null | |
Looking to get niche users, devs and creatives to contribute ideas to a new subreddit for AI productivity tools. The feature of the sub is that it bans AI-generated posts. | 1 | [removed] | 2025-12-17T06:02:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pop2cl/looking_to_get_niche_users_devs_and_creatives_to/ | angry_cactus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pop2cl | false | null | t3_1pop2cl | /r/LocalLLaMA/comments/1pop2cl/looking_to_get_niche_users_devs_and_creatives_to/ | false | false | self | 1 | null |
I built error reporting for LLMs | 0 | I'm currently experimenting with building a log-like LLM
monitor tool that can print out error-, warn-, and info-level events using LLM-as-a-judge. Users can define their own judge rules.
The reason for building this is that ordinary observability tools only show you status codes, which aren't a good source for error reporting because an LLM can hallucinate while still returning a 200.
Currently I have the frontend built and I'm working on the backend. I'd love to hear your feedback!
https://sentinel-llm-judge-monitor-776342690224.us-west1.run.app/ | 2025-12-17T05:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pooaft/i_built_error_report_for_llm/ | Yersyas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pooaft | false | null | t3_1pooaft | /r/LocalLLaMA/comments/1pooaft/i_built_error_report_for_llm/ | false | false | self | 0 | null |
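For the "self-defined judge rules" part, one simple shape is a rule as a natural-language check plus a log level, scored per response by a judge model. A rough sketch (not the OP's implementation) against an OpenAI-compatible endpoint follows; the URL, model name, and rule wording are assumptions.

```python
# Sketch: user-defined judge rules mapped to log levels, scored by a local judge
# model behind an OpenAI-compatible endpoint (URL and model name are placeholders).
import requests

JUDGE_URL = "http://localhost:8080/v1/chat/completions"
JUDGE_MODEL = "local-judge"

RULES = [  # each rule is phrased so that "yes" means the response failed it
    {"fails_if_yes": "Does the response contradict the provided context?",
     "level": "ERROR"},
    {"fails_if_yes": "Does the response assert a fact that is absent from the context?",
     "level": "WARN"},
]

def judge_says_yes(question, context, response):
    prompt = (f"Context:\n{context}\n\nResponse:\n{response}\n\n"
              f"{question}\nAnswer with exactly one word: yes or no.")
    r = requests.post(JUDGE_URL, json={
        "model": JUDGE_MODEL, "temperature": 0,
        "messages": [{"role": "user", "content": prompt}]}, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"].strip().lower().startswith("yes")

def evaluate(context, response):
    events = [{"level": rule["level"], "rule": rule["fails_if_yes"]}
              for rule in RULES if judge_says_yes(rule["fails_if_yes"], context, response)]
    return events or [{"level": "INFO", "rule": "all checks passed"}]
```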
I built an open-source runtime for Agents, MCP Servers, and coding sandboxes, orchestrated with Ray. | 0 | You can execute tools **in parallel** across your cluster.
Try it out - [https://github.com/rayai-labs/agentic-ray](https://github.com/rayai-labs/agentic-ray) | 2025-12-17T05:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/1poo1iw/i_built_a_open_source_runtime_for_agents_mcp/ | Puzzleheaded-Yam5266 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poo1iw | false | null | t3_1poo1iw | /r/LocalLLaMA/comments/1poo1iw/i_built_a_open_source_runtime_for_agents_mcp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=108&crop=smart&auto=webp&s=df159e6da44dc8d45176922657407d9f9897e4ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=216&crop=smart&auto=webp&s=71d1b23e6c1f6bcb13b15d2bc9efbc5f2fce98bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=320&crop=smart&auto=webp&s=2c952877aac7a8c8480a7c36c87c4a0b0b4d9503', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=640&crop=smart&auto=webp&s=c56be723f1397afcacf0d8a2ac257b81bc3f843c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=960&crop=smart&auto=webp&s=d24eba1a226dcb3391ccc1e7d2e26af1886e7ed8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?width=1080&crop=smart&auto=webp&s=3f7fcead08c7ca7c9c5a531e961e2456b20f2c99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WywH0iqy5fzYvTmRXJ82ROh3I3q-L1PeGFkdoanZKUY.png?auto=webp&s=77e17a2cccc3102a0dea847510697b6c64fc7a03', 'width': 1200}, 'variants': {}}]} |
Help for M1 Ultra and AMD AI MAX 395 | 5 | I want to buy a machine to run Mixtral 8x22B and other MoE LLMs like it, and probably some 70B dense LLMs as well.
Currently I can get an M1 Ultra 128GB and an AI MAX 395 128GB at a similar price; which one should I choose? Thanks.
I have heard that M1 Ultra may take much more time on pre-processing, is it true with current software optimization? | 2025-12-17T05:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1poo0pj/help_for_m1_ultra_and_amd_ai_max_395/ | Garrise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1poo0pj | false | null | t3_1poo0pj | /r/LocalLLaMA/comments/1poo0pj/help_for_m1_ultra_and_amd_ai_max_395/ | false | false | self | 5 | null |