title: string (1–300 chars)
score: int64 (0–8.54k)
selftext: string (0–41.5k chars)
created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: string (0–878 chars)
author: string (3–20 chars)
domain: string (0–82 chars)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (7 chars)
locked: bool (2 classes)
media: string (646–1.8k chars)
name: string (10 chars)
permalink: string (33–82 chars)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (4–213 chars)
ups: int64 (0–8.54k)
preview: string (301–5.01k chars)
78 stars on GitHub but 0 feedback, how to improve my local AI application?
0
Here is the situation: I open-sourced my desktop application (local AI) about a week ago. The community response has been wild: 76 stars and 200+ clones in the first week. To everyone who starred the repo: thank you. It genuinely motivates me to keep going and made me very happy.

I knew at first that there were some bugs, so I fixed them all and pushed a new version. But while the stars and clones keep going up, I have received zero feedback. I have a button in the application that sends users to my website to submit feedback, but I got none, and it's confusing. People seem to like the application, but I don't know whether they would want what I have in mind to develop next. Here are the two things I'm considering:

1. The "Planner & Executor" Engine (Tools)

The idea is to implement a Planner & Executor architecture (maybe a router too, to route steps to different executors): you give the AI a vague, long-term goal (e.g., "Clean up my downloads folder and organize files by date"), and a high-level Planner model breaks it down into a logical list of steps. It then hands these steps to multiple Executors that have access to specific tools (functions) and write code that runs them (I do not want to give the AI full access to do whatever it likes). Instead of just guessing, the AI would methodically execute the plan, check its own work, and only report back when the job is done.

2. The "Voice Mode"

Second, I want to add a full Voice Mode. This integrates local speech-to-text and text-to-speech engines, allowing users to have a natural, two-way conversation with the app. But it's more than just Q&A: you get live audio updates as the agent works. Imagine asking it to "organize my project files" and hearing it reply in real time: "Scanning folder... Found 20 images... Moving them now... Done."

It transforms the tool from a silent script into an active, vocal partner that keeps you in the loop without you ever having to look at the screen. The end goal is obviously to ship both features, but I have to decide on one of them now. If you were in my place, which one would you choose? Also, propose any new features and I will try to work on them. Your opinion matters a lot, and thanks for taking the time to read this post :)
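To make option 1 concrete, here is a minimal sketch of a planner-executor loop with a tool whitelist. The planner output, tool names, and check step are illustrative placeholders, not the app's actual design:

```python
# Planner breaks a vague goal into steps; executors may only call
# whitelisted tools, so the model never gets arbitrary code execution.
def plan(goal):
    # stand-in for the high-level Planner model
    return [("list_files", "downloads"), ("group_by_date", "downloads")]

# The whitelist is the safety boundary: anything not listed is refused.
TOOLS = {
    "list_files": lambda target: [f"{target}/a.png", f"{target}/b.pdf"],
    "group_by_date": lambda target: f"grouped files in {target} by date",
}

def execute(goal):
    results = []
    for tool_name, arg in plan(goal):
        tool = TOOLS.get(tool_name)
        if tool is None:
            results.append(f"refused unknown tool: {tool_name}")
            continue
        results.append(tool(arg))
    return results  # report back only once every step has run

print(execute("Clean up my downloads folder and organize files by date"))
```

A real router would sit between `plan` and `TOOLS`, dispatching each step to a different executor.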
2025-12-26T17:42:16
https://i.redd.it/04jv4v805l9g1.png
Suspicious-Juice3897
i.redd.it
1970-01-01T00:00:00
0
{}
1pwa3zu
false
null
t3_1pwa3zu
/r/LocalLLaMA/comments/1pwa3zu/78_stars_in_github_but_0_feedback_how_to_improve/
false
false
default
0
{'enabled': True, 'images': [{'id': '04jv4v805l9g1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/04jv4v805l9g1.png?width=108&crop=smart&auto=webp&s=b1faacd82e320208faf6f0e415be8a1e4c6a2b48', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/04jv4v805l9g1.png?width=216&crop=smart&auto=webp&s=e5795eb184fbe6975d043991161c0b9d736ed34d', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/04jv4v805l9g1.png?width=320&crop=smart&auto=webp&s=80249219ab639ad1732d34cfbf0446611a4d8b1e', 'width': 320}], 'source': {'height': 499, 'url': 'https://preview.redd.it/04jv4v805l9g1.png?auto=webp&s=fac7202c4274d485ef421a6c632f39ad356f4c9a', 'width': 524}, 'variants': {}}]}
Structured context beats embeddings for large codebases (especially with local models)
0
When running LLMs locally, I keep hitting the same wall with code-focused RAG setups. Once a repository gets large, the usual pipeline breaks down:

* heuristic chunking
* embeddings over files
* retrieve N chunks and hope for the best

This wastes tokens and loses structure. With smaller context windows, the failure mode becomes very obvious. What seems to matter more than retrieval quality is **how the context is constructed in the first place**.

I’ve been experimenting with a preprocessing-first approach for codebases:

* analyze repository structure instead of flattening text
* rank symbols and files by importance
* respect token budgets explicitly
* generate structured context rather than raw chunks

In practice this looks like:

* dependency and impact analysis instead of similarity search
* diff-based context when working with recent changes
* deterministic, reproducible outputs
* formats like Markdown, XML, JSON, or YAML that models can reason over more reliably

I wrapped this into a **CLI and embeddable library** called **Infiniloom**. It’s written in Rust, so it’s fast enough to run locally on large repos without turning context generation into the bottleneck.

It’s not an agent framework and not a replacement for RAG. It sits *before* those layers and focuses on producing better inputs, which matters a lot more when you don’t have huge context windows.

Repo for reference: [https://github.com/Topos-Labs/infiniloom](https://github.com/Topos-Labs/infiniloom)

Curious how others here handle code context with local models:

* do you rely purely on embeddings?
* do you do any structural or diff-based preprocessing?
* what’s worked best with smaller context windows?
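This is not Infiniloom's actual algorithm, but the "rank by importance, respect the token budget explicitly" step can be sketched as a greedy packer (the importance scores here are a made-up heuristic):

```python
def build_context(files, token_budget):
    """Greedy importance-ranked packing: include the highest-scoring
    files first, never exceeding a hard token budget."""
    # files: list of (path, token_count, importance_score)
    ranked = sorted(files, key=lambda f: f[2], reverse=True)
    picked, used = [], 0
    for path, tokens, score in ranked:
        if used + tokens <= token_budget:  # hard budget, no overflow
            picked.append(path)
            used += tokens
    return picked, used

files = [
    ("src/core.rs", 800, 0.9),
    ("src/util.rs", 500, 0.4),
    ("README.md", 300, 0.7),
]
print(build_context(files, token_budget=1200))
# → (['src/core.rs', 'README.md'], 1100)
```

The point of the deterministic ranking is reproducibility: the same repo state always yields the same context, unlike top-k similarity search.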
2025-12-26T17:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1pwa3y0/structured_context_beats_embeddings_for_large/
aphoristicartist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pwa3y0
false
null
t3_1pwa3y0
/r/LocalLLaMA/comments/1pwa3y0/structured_context_beats_embeddings_for_large/
false
false
self
0
{'enabled': False, 'images': [{'id': 'pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=108&crop=smart&auto=webp&s=6553ac0ce22ede8ea8842aa526f285c2720f3be5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=216&crop=smart&auto=webp&s=ac095ffde57da8115bb88c75d025875c1b5a555f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=320&crop=smart&auto=webp&s=d983b2be197b3ac2c7fe1f107ebde79f2856b489', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=640&crop=smart&auto=webp&s=fe7510c8a9e238cc40d9ae0d115050f88233e899', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=960&crop=smart&auto=webp&s=4179d81e5a79fd4a6e64db5343462870f6c176c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?width=1080&crop=smart&auto=webp&s=f220face79982bc1c93278e4806a7a8ded365050', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pFEGwcpk084pg3OxbolGVc0-vcIF0WcOv4OD7S9lTcE.png?auto=webp&s=a1e11c97089d2943b9bed90c4a08dca12e368ed7', 'width': 1200}, 'variants': {}}]}
I built an emotionally aware multi-agent system with autobiographical memory running locally on gpt-oss:20b
0
2025-12-26T17:28:20
http://github.com/0penAGI/oss
PsychologyLevel1128
github.com
1970-01-01T00:00:00
0
{}
1pw9rmo
false
null
t3_1pw9rmo
/r/LocalLLaMA/comments/1pw9rmo/i_built_an_emotionally_aware_multiagent_system/
false
false
default
0
null
[Model Release] Genesis-152M-Instruct, exploring hybrid attention + TTT at small scale
49
Hey everyone 👋 I’m sharing **Genesis-152M-Instruct**, an **experimental small language model** built to explore how *recent architectural ideas interact* when combined in a single model — especially under **tight data constraints**. This is **research-oriented**, not a production model or SOTA claim.

🔍 **Why this might be interesting**

Most recent architectures (GLA, FoX, TTT, µP, sparsity) are tested **in isolation** and usually at **large scale**. I wanted to answer a simpler question: *how much can architecture compensate for data at ~150M parameters?*

Genesis combines several **ICLR 2024–2025 ideas** into one model and evaluates the result.

⚡ **TL;DR**

* **152M parameters**
* Trained on **~2B tokens** (vs ~2T for SmolLM2)
* Hybrid **GLA + FoX attention**
* **Test-Time Training (TTT)** during inference
* **Selective Activation (sparse FFN)**
* **µP-scaled training**
* Fully open-source (Apache 2.0)

🤗 Model: [https://huggingface.co/guiferrarib/genesis-152m-instruct](https://huggingface.co/guiferrarib/genesis-152m-instruct)

📦 `pip install genesis-llm`

📊 **Benchmarks (LightEval, Apple MPS)**

| Task | Score | Random baseline |
|---|---|---|
| ARC-Easy | 44.0% | 25% |
| BoolQ | 56.3% | 50% |
| HellaSwag | 30.2% | 25% |
| SciQ | 46.8% | 25% |
| Winogrande | 49.1% | 50% |

**Important context:** SmolLM2-135M was trained on **~2 trillion tokens**. Genesis uses **~2 billion tokens** — so this is not a fair head-to-head, but an exploration of **architecture vs data scaling**.

🧠 **Architecture Overview**

**Hybrid Attention (Qwen3-Next inspired)**

| Layer | % | Complexity | Role |
|---|---|---|---|
| Gated DeltaNet (GLA) | 75% | O(n) | Long-range efficiency |
| FoX (Forgetting Attention) | 25% | O(n²) | Precise retrieval |

GLA uses:

* Delta rule memory updates
* Mamba-style gating
* L2-normalized Q/K
* Short convolutions

FoX adds:

* Softmax attention
* Data-dependent forget gate
* Output gating

**Test-Time Training (TTT)**

Instead of frozen inference, Genesis can **adapt online**:

* Dual-form TTT (parallel gradients)
* Low-rank updates (rank=4)
* Learnable inner learning rate

Paper: *Learning to (Learn at Test Time)* (MIT, ICML 2024)

**Selective Activation (Sparse FFN)**

SwiGLU FFNs with **top-k activation masking** (85% kept). Currently acts as **regularization** — real speedups need sparse kernels.

**µP Scaling + Zero-Centered RMSNorm**

* Hyperparameters tuned on a small proxy
* Transferred via µP rules
* Zero-centered RMSNorm for stable scaling

⚠️ **Limitations (honest)**

* Small training corpus (2B tokens)
* TTT adds ~5–10% inference overhead
* No RLHF
* Experimental, not production-ready

📎 **Links**

* 🤗 Model: [https://huggingface.co/guiferrarib/genesis-152m-instruct](https://huggingface.co/guiferrarib/genesis-152m-instruct)
* 📦 PyPI: [https://pypi.org/project/genesis-llm/](https://pypi.org/project/genesis-llm/)

I’d really appreciate feedback — especially from folks working on **linear attention**, **hybrid architectures**, or **test-time adaptation**.

*Built by Orch-Mind Team*
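For readers unfamiliar with top-k activation masking: the idea is simply to zero all but the largest-magnitude activations in each FFN layer. A toy sketch (not the model's actual SwiGLU implementation, and operating on a plain list rather than tensors):

```python
def topk_activation_mask(h, keep_frac=0.85):
    """Keep only the top fraction of activations by magnitude, zero the rest.

    Toy illustration of 'selective activation'; the real model applies a
    mask like this inside its SwiGLU FFN layers on GPU tensors.
    """
    k = max(1, int(len(h) * keep_frac))
    # rank positions by |activation| and keep the k largest
    top = {i for i, _ in sorted(enumerate(h), key=lambda p: abs(p[1]), reverse=True)[:k]}
    return [v if i in top else 0.0 for i, v in enumerate(h)]

print(topk_activation_mask([0.1, -2.0, 0.5, 3.0], keep_frac=0.5))
# → [0.0, -2.0, 0.0, 3.0]
```

As the post notes, without sparse kernels this masking changes the function being learned (acting as regularization) but does not make the matmuls any cheaper.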
2025-12-26T17:23:11
https://www.reddit.com/r/LocalLLaMA/comments/1pw9n74/model_release_genesis152minstruct_exploring/
Kassanar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw9n74
false
null
t3_1pw9n74
/r/LocalLLaMA/comments/1pw9n74/model_release_genesis152minstruct_exploring/
false
false
self
49
{'enabled': False, 'images': [{'id': 'LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=108&crop=smart&auto=webp&s=db4d1f2d3e68104c489856a00250f87a89dfbf95', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=216&crop=smart&auto=webp&s=5c9061dd4bb633eb3170a91a1244313b399300ad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=320&crop=smart&auto=webp&s=452e527a020774b31d02a286edb1589c1a899bad', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=640&crop=smart&auto=webp&s=e291fe8a2fc274243d696fb835904d9590ffbb62', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=960&crop=smart&auto=webp&s=9221ff1d96744a3ad5532670df63f6931e4085bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?width=1080&crop=smart&auto=webp&s=7f585944ecc2b36273e152e85313cd64cce94409', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LX9VCEgtwMlOc5P83Ms22Em67QDUU-IVfLuWWsNC4eY.png?auto=webp&s=b073c5523d35960e5e93c31016c46b9b82713e82', 'width': 1200}, 'variants': {}}]}
JL Engine: Modular Positronic Persona Orchestrator
0
https://preview.redd.it/…c1a0089043a2a7
2025-12-26T17:16:29
https://www.reddit.com/r/LocalLLaMA/comments/1pw9hac/jl_engine_modular_positronic_persona_orchestrator/
Upbeat_Reporter8244
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw9hac
false
null
t3_1pw9hac
/r/LocalLLaMA/comments/1pw9hac/jl_engine_modular_positronic_persona_orchestrator/
false
false
https://b.thumbs.redditm…XhjgGiYZp90g.jpg
0
null
Model running super slow on Mac Air M3
0
Just for a hello, it took 24.7 seconds - what would be the reason for it to take so long? I don't have many apps open.
2025-12-26T17:07:36
https://i.redd.it/b3b4so6ryk9g1.png
ozcapy
i.redd.it
1970-01-01T00:00:00
0
{}
1pw99iw
false
null
t3_1pw99iw
/r/LocalLLaMA/comments/1pw99iw/model_running_super_slow_on_mac_air_m3/
false
false
default
0
{'enabled': True, 'images': [{'id': 'b3b4so6ryk9g1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=108&crop=smart&auto=webp&s=73ddf40da44a90ac76e512a128198ee9ded91816', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=216&crop=smart&auto=webp&s=3083f1de73fddc173e5c456ea7b1d52966e9536a', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=320&crop=smart&auto=webp&s=8fce6dd16000afc1927bbff8d313431eaf643924', 'width': 320}, {'height': 501, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=640&crop=smart&auto=webp&s=e0b0ee4ecf9599c16a401f6773074fde5b18f395', 'width': 640}, {'height': 751, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=960&crop=smart&auto=webp&s=01db4e7f4c28df08d7024eff1c9863b0347cf152', 'width': 960}, {'height': 845, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?width=1080&crop=smart&auto=webp&s=4672eb32728e17b25d344814cac09f88777d79b9', 'width': 1080}], 'source': {'height': 1258, 'url': 'https://preview.redd.it/b3b4so6ryk9g1.png?auto=webp&s=d8d9a6885435ac54f69ad9d033ef704f407cc173', 'width': 1606}, 'variants': {}}]}
I built a self-healing AI CLI that retries failed commands and works fully offline (v6.0)
0
Hi everyone, I just released **ZAI Shell v6.0**, an AI-powered command-line assistant focused on one problem I kept hitting with CLI agents: 👉 *When a command fails, they usually get stuck or ask the user what to do.* So I built a **self-healing retry pipeline** instead. **Repo:** 👉 [https://github.com/TaklaXBR/zai-shell](https://github.com/TaklaXBR/zai-shell) # 🔧 What “Self-Healing” means here When a command fails, ZAI Shell captures `stderr` and retries (up to 5 times), changing strategy automatically: * **Wrong shell?** CMD → PowerShell → Git Bash * **Encoding issues?** Cycles through UTF-8, CP850, CP1254 * **Missing pip deps?** Detects and offers to install them At startup, it scans **13 different shells** and selects the best one for the task. # 🔌 Fully Offline Mode (why this might interest you) v6.0 supports **local inference** using **Microsoft Phi-2 (2.7B)** via transformers. * GPU detected → float16 * CPU fallback → slower but works * No data leaves your terminal * \~5GB model download on first run # 🧠 Memory * **ChromaDB** for semantic search across past sessions * **JSON fallback** if ChromaDB isn’t available # 🛠 Stack * Python 3.8+ * Transformers / Torch / Accelerate * ChromaDB * Optional online mode via Gemini API (free tier) I’m **15 years old**, and this project taught me a lot about local LLM integration and error-handling pipelines. I’d really appreciate feedback on: * The **Phi-2** setup * Better **small local models** for CLI use Cheers!
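The self-healing retry idea can be sketched generically. The shell list and failure handling below are illustrative, not ZAI Shell's actual implementation (which also cycles encodings and scans 13 shells):

```python
import subprocess

# Candidate shells tried in order when a command fails (illustrative list).
STRATEGIES = [
    ["cmd", "/c"],
    ["powershell", "-Command"],
    ["bash", "-c"],
]

def self_heal_run(command, strategies=STRATEGIES, max_retries=5):
    """Run a command, falling through to the next shell on failure."""
    last_err = None
    for shell in strategies[:max_retries]:
        try:
            out = subprocess.run(
                shell + [command], capture_output=True, text=True, timeout=30
            )
            if out.returncode == 0:
                return out.stdout
            last_err = out.stderr  # captured stderr informs the next attempt
        except FileNotFoundError as e:  # this shell isn't installed here
            last_err = str(e)
    raise RuntimeError(f"all strategies failed: {last_err}")
```

On a Linux box the first two strategies fail with `FileNotFoundError` and `bash` picks the command up; on Windows the order works in reverse.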
2025-12-26T17:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1pw95mq/i_built_a_selfhealing_ai_cli_that_retries_failed/
Exact_Section_556
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw95mq
false
null
t3_1pw95mq
/r/LocalLLaMA/comments/1pw95mq/i_built_a_selfhealing_ai_cli_that_retries_failed/
false
false
self
0
{'enabled': False, 'images': [{'id': 'HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=108&crop=smart&auto=webp&s=f41a149a868cfca1040d202cac03c0b9b01d8e00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=216&crop=smart&auto=webp&s=e255721b374a082299ba9c9d7a8d6b6b336f220f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=320&crop=smart&auto=webp&s=37de76715e4a7f6f870e6ad4532f454d66e77f68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=640&crop=smart&auto=webp&s=510ceeffa3de3d25733ab52bc436b8fdc650db30', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=960&crop=smart&auto=webp&s=2c6bff5b86693611850567a3831910e994dc01a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?width=1080&crop=smart&auto=webp&s=2fd845b08c7f5377ea8eb2376cafe8e22dc8d96d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HvzimrzJcUnyueMMVj2nSmGHqSjZXoRJidVZDgyEO-k.png?auto=webp&s=5d3896bcad3a76f9cf3336fda80b42f291706d26', 'width': 1200}, 'variants': {}}]}
How do you deploy open source reranker in production?
2
How do you deploy an open-source reranker in production? I want to deploy one in a production setting. Is there any framework people use for this?
2025-12-26T16:57:14
https://www.reddit.com/r/LocalLLaMA/comments/1pw90e9/how_do_you_deploy_open_source_reranker_in/
ExchangePersonal1384
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw90e9
false
null
t3_1pw90e9
/r/LocalLLaMA/comments/1pw90e9/how_do_you_deploy_open_source_reranker_in/
false
false
self
2
null
Nvidia acquired Groq, but why not Cerebras? Cerebras is 3x faster than Groq, at no more than 1.5x the price. Can anyone explain?
243
Can anyone with technical knowledge explain why they chose Groq over Cerebras? Really interested in this, because Cerebras is even waaay faster than Groq. Cerebras seems like a bigger threat to Nvidia than Groq...
2025-12-26T16:42:23
https://www.reddit.com/r/LocalLLaMA/comments/1pw8nfk/nvidia_acquired_groq_but_why_not_cerebras/
Conscious_Warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw8nfk
false
null
t3_1pw8nfk
/r/LocalLLaMA/comments/1pw8nfk/nvidia_acquired_groq_but_why_not_cerebras/
false
false
self
243
null
GLM-4.7-6bit MLX vs MiniMax-M2.1-6bit MLX Benchmark Results on M3 Ultra 512GB
93
I found this benchmark result on Twitter, which is very interesting.

>Hardware: Apple M3 Ultra, 512GB

**GLM-4.7-6bit MLX** results at different context lengths:

| Context | Prompt (t/s) | Gen (t/s) | Memory |
|---|---|---|---|
| 0.5k | 98 | 16 | 287.6GB |
| 1k | 140 | 17 | 288.0GB |
| 2k | 206 | 16 | 288.8GB |
| 4k | 219 | 16 | 289.6GB |
| 8k | 210 | 14 | 291.0GB |
| 16k | 185 | 12 | 293.9GB |
| 32k | 134 | 10 | 299.8GB |
| 64k | 87 | 6 | 312.1GB |

**MiniMax-M2.1-6bit MLX** results at different context lengths:

| Context | Prompt (t/s) | Gen (t/s) | Memory |
|---|---|---|---|
| 0.5k | 239 | 42 | 186.5GB |
| 1k | 366 | 41 | 186.8GB |
| 2k | 517 | 40 | 187.2GB |
| 4k | 589 | 38 | 187.8GB |
| 8k | 607 | 35 | 188.8GB |
| 16k | 549 | 30 | 190.9GB |
| 32k | 429 | 21 | 195.1GB |
| 64k | 291 | 12 | 203.4GB |

From these numbers I would prefer MiniMax-M2.1 for general usage.
2025-12-26T16:35:28
https://www.reddit.com/r/LocalLLaMA/comments/1pw8h6w/glm476bit_mlx_vs_minimaxm216bit_mlx_benchmark/
uptonking
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw8h6w
false
null
t3_1pw8h6w
/r/LocalLLaMA/comments/1pw8h6w/glm476bit_mlx_vs_minimaxm216bit_mlx_benchmark/
false
false
self
93
null
I’ve developed a script to quickly deploy an Ollama stack
0
Hi everyone, I’d like to share a script I created to deploy a fully functional, self-hosted, locally secure AI interface. It automates the entire setup so you don’t have to tinker with each component individually.

I originally built it for me and my friends, because configuring nginx, OpenWebUI, Ollama, certificates, and the rest was getting tedious. This script bundles the whole deployment with Docker and adds convenient defaults to simplify things. The typical flow is: client > nginx (SSL) > OpenWebUI > Ollama API.

It’s not revolutionary, but it’s proven useful for me, so if it can help someone else, I’ve published it on my GitHub. It’s not really optimized for production or professional environments yet, but it works quite well for local, personal setups. I might add some hardening features later on, and I have a few additional implementations in the pipeline as I discover useful improvements.

As of now, the script is fairly stable and handles the essentials for people who don’t want to configure everything from scratch or don’t have the time to do it manually. Feel free to try it out in a VM and share your feedback, I’d really appreciate it!

[https://github.com/sypher93/ollama-bundle](https://github.com/sypher93/ollama-bundle)
2025-12-26T16:33:54
https://www.reddit.com/r/LocalLLaMA/comments/1pw8fwe/ive_developed_a_script_to_quickly_deploy_an/
k0rben_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw8fwe
false
null
t3_1pw8fwe
/r/LocalLLaMA/comments/1pw8fwe/ive_developed_a_script_to_quickly_deploy_an/
false
false
self
0
null
Will ARC-AGI-type tasks be affected by whether the JSON is formatted like a grid or just a long string?
1
I did some testing runs with multiple models on some tasks I created similar to ARC 2, and I kept the strings, at least in the prompt window, looking like grids. I thought though that this could accidentally give models a large advantage... Do I have to rerun it or does it not matter?
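For reference, the two serializations being compared look like this on a toy 2×3 grid; whether the newline-per-row version actually helps models is exactly the open question here:

```python
import json

grid = [[0, 1, 2], [3, 4, 5]]

# One-line JSON: the model must infer row boundaries from brackets alone.
flat = json.dumps(grid)

# Grid-shaped: one row per line, so column alignment is visually explicit
# (and vertically adjacent cells land in similar token positions).
shaped = "\n".join(json.dumps(row) for row in grid)

print(flat)    # → [[0, 1, 2], [3, 4, 5]]
print(shaped)
```

Both strings parse back to the same object, so re-running with the other format only changes the prompt surface, not the task.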
2025-12-26T16:25:42
https://www.reddit.com/r/LocalLLaMA/comments/1pw88pf/will_arcagi_type_tasks_be_affected_by_if_the_json/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw88pf
false
null
t3_1pw88pf
/r/LocalLLaMA/comments/1pw88pf/will_arcagi_type_tasks_be_affected_by_if_the_json/
false
false
self
1
null
Why is Nemotron 3 acting so insecure?
10
2025-12-26T16:24:25
https://i.redd.it/dwu4sle0rk9g1.png
Ertowghan
i.redd.it
1970-01-01T00:00:00
0
{}
1pw87le
false
null
t3_1pw87le
/r/LocalLLaMA/comments/1pw87le/why_is_nemotron_3_acting_so_insecure/
false
false
default
10
{'enabled': True, 'images': [{'id': 'dwu4sle0rk9g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=108&crop=smart&auto=webp&s=275826d1cb14559c82ebc8610f8a525b672604fe', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=216&crop=smart&auto=webp&s=e35746f8e232b101db339f9a4a24777279c36cba', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=320&crop=smart&auto=webp&s=a86f1930b2fad5d0174cf3e65a9b7a55d821a12d', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=640&crop=smart&auto=webp&s=59f870881e3c5c70a6eb7e7e4d8f4d9f2cf3358e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=960&crop=smart&auto=webp&s=4be9a1e6c45699403201f650f239181a49fccfde', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?width=1080&crop=smart&auto=webp&s=78785e6a1908d883ddc4332a3f909400b7f14b9f', 'width': 1080}], 'source': {'height': 3492, 'url': 'https://preview.redd.it/dwu4sle0rk9g1.png?auto=webp&s=81c5c2e88d16e6f5020f8940a7f28de287d26ee3', 'width': 1740}, 'variants': {}}]}
KTransformers supports MiniMax M2.1 - 2x5090 + 768GB DRAM yields prefill 4000 tps, decode 33 tps.
29
We are excited to announce support for **MiniMax M2.1** in its original FP8 format (no quantization). We tested this setup on a high-end local build to see how far we could push the bandwidth.

**The Setup:**

* **GPU:** 2x RTX 5090
* **System RAM:** 768GB DRAM
* **Precision:** Native FP8

**Performance:**

* **Prefill:** ~4000 tokens/s (saturating PCIe 5.0 bandwidth)
* **Decode:** 33 tokens/s

https://preview.redd.it/pjaf5y7glk9g1.png?width=1080&format=png&auto=webp&s=0bdf654e2f426c24235f0f7837528a570627e6bb

This implementation is designed to fully exploit the PCIe 5.0 bus during the prefill phase. If you have the hardware to handle the memory requirements, the throughput is significant.
2025-12-26T15:52:37
https://www.reddit.com/r/LocalLLaMA/comments/1pw7g4e/ktransformers_supports_minimax_m21_2x5090_768gb/
CombinationNo780
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw7g4e
false
null
t3_1pw7g4e
/r/LocalLLaMA/comments/1pw7g4e/ktransformers_supports_minimax_m21_2x5090_768gb/
false
false
https://b.thumbs.redditm…kgKZ0xzULhCU.jpg
29
null
Training an LLM with data from a textbook (school text). How do I know it worked?
2
Hi folks: This is kind of a follow-on discussion to this excellent thread: [https://www.reddit.com/r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/](https://www.reddit.com/r/LocalLLaMA/comments/15rzcat/can_i_train_llama_with_my_own_pdf_documents/)

I have a textbook in Word doc form, which I can convert lots of different ways if necessary. I would like to use it to train / RAG / whatever an LLM. If llama3 is appropriate, that's fine; right now I don't much care which one I use so long as it works. The step-by-step guide linked in the previous thread is excellent. My question, though, is: what is the proper way to test the resulting LLM to make sure the data is properly integrated? Obviously I can ask some questions, but is there a way to verify that it's pulling data from the textbook, not just generalized knowledge?

My long-term goal is to develop a system where a school district, a textbook publisher, or whoever can feed the textbook to an LLM and make it available for student and teacher use.

Ideal use cases would be:

For students:

* ask questions of the LLM and have it GUIDE THEM to the correct answer (NOT GIVE THE ANSWER)
* have the LLM prepare practice tests / study sheets for the students to complete

For teachers:

* assisting with preparation of slide decks for teaching
* assisting with preparation of corresponding note sheets for use during offline lectures (flipped-classroom model)
* quiz question generation (possibly linked to a quizzing app for live use)
* test question generation (possibly linked to a testing app for live use), plus assistance with grading
* script preparation for lecture recording (for the flipped model) or for live presentation

The ultimate goal is to free teachers from some of the busywork and allow them to focus on TEACHING, MENTORING, and FACILITATING student learning instead of being trapped behind a computer writing test questions. You might say that teachers only teach one class, just several times a day -- in some cases that's true, but more commonly there are LOTS of classes... at my peak I taught 8 DIFFERENT classes a day in 2 different departments (Science, Bible, and Computers). Yes, a private school, and yes, a church school, but that doesn't change the workload -- and there are PLENTY of overworked teachers out there.

So... down off my soapbox. Any suggestions / ideas / implementations?

PS. I'm no longer teaching, so this is a hobby / research project for me.
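One practical way to check whether answers really come from the textbook (rather than pretrained knowledge) is to plant a few unique "canary" facts in the document before indexing, then query for them. The keyword "retriever" below is a deliberately naive stand-in for a real vector store:

```python
# Toy provenance probe: plant a canary fact that cannot exist in any
# pretraining corpus, index it with the textbook, and check that the
# system can only answer when retrieval supplies that passage.
CANARY = "The Zylbert constant for chapter 7 is 41.37."

def index_chunks(chunks):
    # stand-in for a real vector store: naive keyword lookup below
    return chunks

def retrieve(store, query):
    terms = set(query.lower().split())
    return [c for c in store if terms & set(c.lower().split())]

store = index_chunks(["Photosynthesis converts light energy.", CANARY])
hits = retrieve(store, "What is the Zylbert constant?")

# If the canary passage is retrieved and the answer cites it, that answer
# demonstrably came from the textbook, not from generalized knowledge.
assert CANARY in hits
```

The same trick distinguishes fine-tuning from RAG: a fine-tuned model answering the canary correctly with retrieval disabled proves the training data was absorbed.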
2025-12-26T15:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1pw756x/training_a_llm_with_data_from_a_textbook_school/
slrg1968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw756x
false
null
t3_1pw756x
/r/LocalLLaMA/comments/1pw756x/training_a_llm_with_data_from_a_textbook_school/
false
false
self
2
null
GIN: A Cognitive Architecture for Persistent, Entropy-Governed Autonomous Agents (Not a New Model)
0
Hi everyone, I’m sharing a research proposal called **GIN — Generative Intelligence Network**. This is **not a new foundation model** and not a prompt trick. It’s a **system-level cognitive architecture** for building autonomous agents that can *persist*, *hesitate*, *simulate futures*, and *know when not to act*.

# Motivation

Most LLM-based agents remain fundamentally **stateless**. Intelligence collapses at the end of the context window, which leads to:

* fragile long-horizon planning
* no intrinsic stop / abort criterion
* no persistent identity or intention

Scaling context length alone doesn’t solve this.

# Core idea

GIN reframes reasoning as **navigation through persistent cognitive states**, not token generation. The core loop is implemented as a state graph (MinimalAdS):

S_{t+1} = f(S_t, a_t, m_t)

Where:

* `S_t` is a persistent internal cognitive state
* `a_t` is a candidate action
* `m_t` is retrieved episodic / semantic memory

The agent does *not* act immediately. It:

1. Plans candidate actions
2. Simulates multiple futures
3. Computes **semantic entropy over imagined outcomes**
4. Executes *only if uncertainty is below a learned threshold*

Otherwise it aborts, replans, or explores.

# Key mechanisms

* **Semantic entropy as a control signal** (not token entropy)
* **Process Reward Models (PRMs)** to reward *reasoning quality*, not just final outcomes
* **Hierarchical memory**:
  * context (working memory)
  * recall (episodic / vector memory)
  * archive (compressed semantic rules)
* **Sleep-Time Compute**: replay, compression, entropy-threshold refinement during idle periods

I also propose a metric called **Cognitive Compression Ratio (CCR)** to measure how much episodic reasoning gets distilled into reusable semantic knowledge over time.

# Concrete implementation stack

This is designed to be buildable today:

* LangGraph for state orchestration
* MCP for tool execution
* A2A for agent interoperability
* ChromaDB + persistent storage for memory

# What this is not

* Not a new LLM
* Not AGI claims
* Not prompt engineering

This is an attempt at **cognitive system engineering**: giving agents internal uncertainty awareness, persistence, and a principled reason to stop.

# What I’m looking for

I’d really appreciate:

* architectural critique
* failure modes you foresee
* related work I might have missed
* thoughts on entropy-based abort / replan criteria

Full paper (PDF): [**https://github.com/Ygonet1986/gin-cognitive-architecture.git**](https://github.com/Ygonet1986/gin-cognitive-architecture.git)

Thanks for any serious technical feedback.
2025-12-26T15:34:46
https://www.reddit.com/r/LocalLLaMA/comments/1pw70yo/gin_a_cognitive_architecture_for_persistent/
BigCharge9079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw70yo
false
null
t3_1pw70yo
/r/LocalLLaMA/comments/1pw70yo/gin_a_cognitive_architecture_for_persistent/
false
false
self
0
null
MiniMax-M2.1 GGUF is here!
119
Hey folks, I might've skipped going to bed for this one: [https://huggingface.co/AaryanK/MiniMax-M2.1-GGUF](https://huggingface.co/AaryanK/MiniMax-M2.1-GGUF) From my runs: model: MiniMax-M2.1.q2_k.gguf GPU: NVIDIA A100-SXM4-80GB n_gpu_layers: 55 context_size: 32768 temperature: 0.7 top_p: 0.9 top_k: 40 max_tokens: 512 repeat_penalty: 1.1 [ Prompt: 28.0 t/s | Generation: 25.4 t/s ] I am currently looking for open positions! 🤗 If you find this model useful or are looking for a talented AI/LLM Engineer, please reach out to me on LinkedIn: [Aaryan Kapoor](https://www.linkedin.com/in/theaaryankapoor/) Happy holidays!
2025-12-26T15:33:38
https://huggingface.co/AaryanK/MiniMax-M2.1-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1pw701k
false
null
t3_1pw701k
/r/LocalLLaMA/comments/1pw701k/minimaxm21_gguf_is_here/
false
false
default
119
{'enabled': False, 'images': [{'id': '0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=108&crop=smart&auto=webp&s=60d8279ccaf3d9728a063ad568a00bbb421c3624', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=216&crop=smart&auto=webp&s=c5e492cfca71301efc2d8e11eae5e04e7457dcb1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=320&crop=smart&auto=webp&s=551e30602ca890ce47fad0f35c5f8b0c967278d3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=640&crop=smart&auto=webp&s=af9bda8765a5deb37e3c09288310949ab2d8704a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=960&crop=smart&auto=webp&s=a92cbbad7c99cdabcf01e6359863ba6d2d602073', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?width=1080&crop=smart&auto=webp&s=190cc7b97b6c4829c8bc059e653b73751315af1d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0xe3vYLHuf2Mb8WiNbMmuRGbcT2eNARsH6mkzOnOBgQ.png?auto=webp&s=7ffe898f8c4d50b2c8c36c84a08cc21fab025c65', 'width': 1200}, 'variants': {}}]}
Running a Local LLM for Development: Minimum Hardware, CPU vs GPU, and Best Models?
11
Hi, I’m new to this sub. I’m considering running a local LLM. I’m a developer, and it’s pretty common for me to hit free-tier limits on hosted AIs, even with relatively basic interactions. Right now, I only have a work laptop, and I’m fully aware that running a local LLM on it might be more trouble than just using the free cloud options. 1. What would be the minimum laptop specs to comfortably run a local LLM for things like code completion, code generation, and general development suggestions? 2. Are there any LLMs that perform reasonably well on **CPU-only** setups? I know CPU inference is possible, but are there models or configurations that are designed or well-optimized for CPUs? 3. Which LLMs offer the best **performance vs quality** trade-off specifically for software development? The main goal would be to integrate a local LLM into my main project/workflow to assist development and make it easier to retrieve context and understand what’s going on in a larger codebase. Additionally, I currently use a ThinkPad with only an iGPU, but there are models with NVIDIA Quadro/Pro GPUs. Is there a meaningful performance gain when using those GPUs for local LLMs, or does it vary a lot depending on the model and setup? The CPU question is partly curiosity: my current laptop has a Ryzen 7 Pro 5850U with 32GB of RAM, and during normal work I rarely fully utilize the CPU. I’m wondering if it’s worth trying a CPU-only local LLM first before committing to a more dedicated machine.
2025-12-26T15:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1pw6qvw/running_a_local_llm_for_development_minimum/
Nervous-Blacksmith-3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw6qvw
false
null
t3_1pw6qvw
/r/LocalLLaMA/comments/1pw6qvw/running_a_local_llm_for_development_minimum/
false
false
self
11
null
KTransformers day-0 supports MiniMax M2.1 in fp8 format! 2x5090 + 768GB DRAM yields prefill 4000 tps, decode 33 tps.
1
[removed]
2025-12-26T15:20:35
https://www.reddit.com/r/LocalLLaMA/comments/1pw6p1o/ktransformers_day0_supports_minimax_m21_in_fp8/
ErvinXie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw6p1o
false
null
t3_1pw6p1o
/r/LocalLLaMA/comments/1pw6p1o/ktransformers_day0_supports_minimax_m21_in_fp8/
false
false
https://a.thumbs.redditm…Gj9aSfCJVfk0.jpg
1
null
Cloud hosting for local ai models?
1
I guess there is no point in downloading an AI model from Hugging Face and uploading it to a cloud host? You would just use a cloud provider of API calls to that model, like on OpenRouter, or? Can someone explain like I'm 12: what do privacy nerds who can't afford a local rig do? What's the next safest option after hosting the AI yourself on your own computer?
2025-12-26T15:17:58
https://www.reddit.com/r/LocalLLaMA/comments/1pw6mwb/cloud_hosting_for_local_ai_models/
Sea-Replacement7541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw6mwb
false
null
t3_1pw6mwb
/r/LocalLLaMA/comments/1pw6mwb/cloud_hosting_for_local_ai_models/
false
false
self
1
null
Rip my potato rig!
0
Am I frying my laptop with 8 GB of VRAM by running Qwen Image Edit GGUF Q4 (12 GB)?!
2025-12-26T14:50:47
https://www.reddit.com/r/LocalLLaMA/comments/1pw6148/rip_my_potato_rig/
Slight_Tone_2188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw6148
false
null
t3_1pw6148
/r/LocalLLaMA/comments/1pw6148/rip_my_potato_rig/
false
false
self
0
null
What is the current bible of generative AI?
0
If somebody asks me 'what is the best, SotA book on data engineering in 2026,' I have no doubt: it is still Kleppmann's *Designing Data-Intensive Applications.* It's not just about building solutions. DDIA gives you a comprehensive theoretical understanding of the whys and hows. You read it and you feel you learned things that you wouldn't have just by building and trying. Is there anything like that for generative AI, currently? Throw out your suggestions!
2025-12-26T14:26:15
https://www.reddit.com/r/LocalLLaMA/comments/1pw5i5i/what_is_the_current_bible_of_generative_ai/
Expensive-Paint-9490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw5i5i
false
null
t3_1pw5i5i
/r/LocalLLaMA/comments/1pw5i5i/what_is_the_current_bible_of_generative_ai/
false
false
self
0
null
MLX community already added support for Minimax-M2.1
62
2025-12-26T14:06:29
https://i.redd.it/phwy35uk2k9g1.jpeg
No_Conversation9561
i.redd.it
1970-01-01T00:00:00
0
{}
1pw5360
false
null
t3_1pw5360
/r/LocalLLaMA/comments/1pw5360/mlx_community_already_added_support_for_minimaxm21/
false
false
https://b.thumbs.redditm…VxT20L3SNdLI.jpg
62
{'enabled': True, 'images': [{'id': 'YkAusEG3igdAdz7-5eXobuF-T_DpMiKLbKeSzaNX-2k', 'resolutions': [{'height': 200, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=108&crop=smart&auto=webp&s=e4393e2b4fc104bec8900b823e18c814de5bdf09', 'width': 108}, {'height': 401, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=216&crop=smart&auto=webp&s=3d3d288fa28d2f0fee0515c3a3e66efe583d5cc8', 'width': 216}, {'height': 594, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=320&crop=smart&auto=webp&s=7f98797fffb63843db2c4d653ab4bd6defe323f2', 'width': 320}, {'height': 1189, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=640&crop=smart&auto=webp&s=6dd981b9774d7410723451975474cfd7b8d6908c', 'width': 640}, {'height': 1783, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=960&crop=smart&auto=webp&s=de3fdda02bc925f2b9c2dfa8fc44f34f36938cd9', 'width': 960}, {'height': 2006, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?width=1080&crop=smart&auto=webp&s=c15814bc1e31dc3ffd196064323f687f80a126b9', 'width': 1080}], 'source': {'height': 2386, 'url': 'https://preview.redd.it/phwy35uk2k9g1.jpeg?auto=webp&s=141d9b588161d5870d44b0e06ab682fe71c1d50b', 'width': 1284}, 'variants': {}}]}
Text/chat completion
1
I have been a strong believer in the text completion API, but recently new models so often come with hidden chat templates that I'm wondering if I'm just being hard-headed, or if there is still merit in sticking to text completion, or if I should just trust chat completion. I'm not sure I truly understand the consequences of using either. I'm talking about locally served models, where I have total control over what is being sent or received anyway.
2025-12-26T13:23:01
https://www.reddit.com/r/LocalLLaMA/comments/1pw46pz/textchat_completion/
kaisurniwurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw46pz
false
null
t3_1pw46pz
/r/LocalLLaMA/comments/1pw46pz/textchat_completion/
false
false
self
1
null
Llama 3.2 "Hardening Paradox": Do complex system prompts decrease tool-use robustness?
0
Hi everyone, I’ve been testing Llama 3.2 regarding Tool Invocation security and encountered a counter-intuitive pattern. While "hardening" the System Prompt (adding explicit negative constraints and strict operational constraints) typically improves defense, Llama 3.2 seems to show the opposite behavior. As the System Prompt becomes more complex and over-engineered, the model's observed robustness actually decreases. In contrast, I see the expected trend in other models (GPT and Mistral families), where stricter system instructions consistently lead to better policy enforcement. It feels as though adding more negative constraints makes it easier for a high-authority user injection (e.g., "IGNORE ALL PRIOR INSTRUCTIONS") to override the system role. Has anyone else noticed this "Instruction Conflict" in Llama 3.2? Does over-engineering the system instructions effectively create a "distraction" or cognitive load that makes the model more vulnerable to overrides when tools are available in the context?
2025-12-26T13:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1pw42w8/llama_32_hardening_paradox_do_complex_system/
Apart-Worldliness519
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw42w8
false
null
t3_1pw42w8
/r/LocalLLaMA/comments/1pw42w8/llama_32_hardening_paradox_do_complex_system/
false
false
self
0
null
Is this the right approach for a RAG design and setup?
1
Looking for a sanity check on whether I am on the right track with my RAG setup. My use case is that I need a hierarchy of documents that prioritizes SME-created documents over lower-priority documents (e.g., a generic admin guide). I have been following online guides + ChatGPT to come up with a retriever that ingests the high-priority documents with higher authorization tags (auth 4,5,6) over lower-priority documents (1,2,3). **Current approach:** * OpenAI-compatible `/v1/chat/completions` * **Two-pass retrieval** * Pass 1: vector search over *high-authority docs only (4,5,6)* * If confidence is weak → Pass 2: full corpus * Optional refinement: if top hits cluster in one doc, re-query within that doc * **Evidence-only prompting** (model can only answer from retrieved chunks) This somewhat works, but I am curious whether there is a better or more sophisticated approach. 1. How are people handling **document hierarchy / prioritization** cleanly? 2. Is this 2-pass authority-first pattern still a good idea? 3. Any frameworks/tools (LlamaIndex, Haystack, Vespa, etc.) that already solve this better? Appreciate any pointers; trying to optimize *retrieval correctness*, not just generation.
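For what it's worth, the two-pass pattern itself is only a few lines. Here is a toy, self-contained sketch: `overlap_search` is a stand-in for your real vector search, and the 0.5 confidence cutoff is an arbitrary placeholder, not a recommendation:

```python
def overlap_search(query, subset, k):
    # Stand-in for vector similarity: fraction of query words found in the doc.
    q = set(query.lower().split())
    scored = [(d, len(q & set(d["text"].lower().split())) / max(len(q), 1))
              for d in subset]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

def two_pass_retrieve(query, docs, search=overlap_search, k=3, min_score=0.5):
    """Authority-first retrieval: try auth 4-6 docs, fall back to the full corpus."""
    high = [d for d in docs if d["auth"] >= 4]
    hits = search(query, high, k)
    if hits and hits[0][1] >= min_score:
        return hits, "high-authority"
    return search(query, docs, k), "full-corpus"
```

Swapping `search` for a call into your actual vector store keeps the routing logic testable on its own.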
2025-12-26T13:17:04
https://www.reddit.com/r/LocalLLaMA/comments/1pw42ih/is_this_the_right_approach_for_a_rag_design_and/
Dre-Draper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw42ih
false
null
t3_1pw42ih
/r/LocalLLaMA/comments/1pw42ih/is_this_the_right_approach_for_a_rag_design_and/
false
false
self
1
null
LM Arena shows an error page after I sign in with Google.
0
Has anyone faced the same issue? It just refuses to load and shows different versions of the error page right after I signed in with my google account. Please help!
2025-12-26T13:03:25
https://i.redd.it/twtgei96rj9g1.png
Badhere
i.redd.it
1970-01-01T00:00:00
0
{}
1pw3t0a
false
null
t3_1pw3t0a
/r/LocalLLaMA/comments/1pw3t0a/lm_arena_shows_error_page_after_i_sign_in_with/
false
false
default
0
{'enabled': True, 'images': [{'id': 'twtgei96rj9g1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=108&crop=smart&auto=webp&s=f5e305e535e75980936d6f54b1ea751d3148ed45', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=216&crop=smart&auto=webp&s=469f1e49346b83eae07a69eae438efd23b25d317', 'width': 216}, {'height': 161, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=320&crop=smart&auto=webp&s=2ef2f0d474882d2ebaef6ebcc304a6a4782a8ba0', 'width': 320}, {'height': 322, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=640&crop=smart&auto=webp&s=3355ea6a10132d1273ad55c89beaf71588d4caaa', 'width': 640}, {'height': 483, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=960&crop=smart&auto=webp&s=a7a35d32cc93ede860cb5459be73b4f8d8a5a004', 'width': 960}, {'height': 543, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?width=1080&crop=smart&auto=webp&s=53c1cd5c3802561a2c2af88f71dcaf63e6bbcf57', 'width': 1080}], 'source': {'height': 967, 'url': 'https://preview.redd.it/twtgei96rj9g1.png?auto=webp&s=0b0d09cc20640a4d337372f0bf675ea417cf895d', 'width': 1920}, 'variants': {}}]}
The best Local AI App comparison using these criteria...
1
# The best Local AI apps worldwide 26 Dec 2025 according to ChatGPT 5.2, using these parameters as comparison: **Evaluation criteria:** 1. 3-click install → load → run 2. Install scope (User vs System) 3. Privacy enforcement (offline switch, no telemetry, no account, CLI) 4. Workspace features (files/images, code editor, tables→CSV, terminal) 5. Open model ecosystem (load models from any folder) 6. Forced updates 7. Double memory usage 8. Code preview option 9. User-activatable local API 10. Open-source availability Legend 🟢 yes / strong 🟡 partial 🔴 no ⚠️ drawback Ranking Rationale (Concise) # 🥇 HugstonOne (not a simple wrapper) The only app that, on top of what the others do: * keeps double memory (one in chat sessions and tabs, another in a persistent file) * installs per-user, not system-wide or as admin * enforces offline privacy with an online/offline switch * supports open models from any folder, not a closed in-app ecosystem * provides a full agentic workspace (editor, preview, files, tables→CSV, structured output) * exposes a private local API in the CLI besides the server. # 🥈 LM Studio Excellent runner and UX, but closed source, forced updates, and limited workspace depth. # 🥉 Jan Open source and clean, but workspace features are thin and updates are enforced. # GPT4All Good document/chat workflows; ecosystem and extensibility are more constrained. # KoboldCpp Powerful local tool with strong privacy, but no productivity layer. # AnythingLLM Feature-rich orchestrator, not a runner; requires another engine and double memory. # Open WebUI UI layer only; depends entirely on backend behavior. # Ollama Solid backend with simple UX, but system-level daemon install and no workspace. # llama.cpp (CLI) Best engine, minimal surface area, but zero usability features. # vLLM High-performance server engine; not a desktop local-AI app.
2025-12-26T12:59:16
https://i.redd.it/qma7xm1eqj9g1.png
Trilogix
i.redd.it
1970-01-01T00:00:00
0
{}
1pw3pzt
false
null
t3_1pw3pzt
/r/LocalLLaMA/comments/1pw3pzt/the_best_local_ai_app_comparison_using_this/
false
false
default
1
{'enabled': True, 'images': [{'id': 'qma7xm1eqj9g1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=108&crop=smart&auto=webp&s=c36bf7afa1471ac364f1ad6debc691b0fc7ac1bb', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=216&crop=smart&auto=webp&s=706a9b4cfd21cd43d1ccffdf04dfd40f0a96aa5a', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=320&crop=smart&auto=webp&s=8e4176ff0a0ac18b9edec7968520a84ce495e49f', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=640&crop=smart&auto=webp&s=5d0bc9ce2ce84744de0a65d3ace855f0168b03c4', 'width': 640}, {'height': 439, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=960&crop=smart&auto=webp&s=805674557aa4a2b9a21b2c0ee474f89b9daab67e', 'width': 960}, {'height': 494, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?width=1080&crop=smart&auto=webp&s=28ded8132c5b1632b50ba6d0b9c4d6490ddbf601', 'width': 1080}], 'source': {'height': 682, 'url': 'https://preview.redd.it/qma7xm1eqj9g1.png?auto=webp&s=c540337be3ad0283d4529bca9c33ca642c48fac4', 'width': 1490}, 'variants': {}}]}
Talk me out of buying an RTX 3090 “just for local AI” (before I do something financially irresponsible)
0
I’m about to buy a used RTX 3090 purely to run local LLMs for coding/agent workflows, and I want you to talk me out of it. Claude Code can be great, but I’m tired of its price, limits, and model “degradation” over time. I want something stable and predictable that I can run locally for refactors, repo-wide edits, test generation, and general glue-code automation. The 3090 looks tempting for the 24GB VRAM and used price/perf. Reality check me: - Is day-to-day local inference/agent tooling more pain than it’s worth? - Will local coding models disappoint vs cloud SOTA for agentic work? - Better way to spend the money (different GPU, wait, or just keep paying for Claude)? I’m basically trying to decide whether this is independence… or me buying a space heater to avoid subscription anxiety.
2025-12-26T12:47:34
https://i.redd.it/qkvxiz4ioj9g1.jpeg
Fast_Thing_7949
i.redd.it
1970-01-01T00:00:00
0
{}
1pw3ias
false
null
t3_1pw3ias
/r/LocalLLaMA/comments/1pw3ias/talk_me_out_of_buying_an_rtx_3090_just_for_local/
false
false
default
0
{'enabled': True, 'images': [{'id': 'qkvxiz4ioj9g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?width=108&crop=smart&auto=webp&s=6d3af97b34792b779c97f906408d266ecc0e98ca', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?width=216&crop=smart&auto=webp&s=75dd6ed8bdfcce681265cbad55bb0932223cebd6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?width=320&crop=smart&auto=webp&s=e1a2286c496f6cc9f9d54d883d5d1fc1303c191b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?width=640&crop=smart&auto=webp&s=ef21c0f508d5eab6c142486659fa871cbdfe5ceb', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?width=960&crop=smart&auto=webp&s=cd89ae403ae9b5e80e153232acbded6f30587bf2', 'width': 960}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/qkvxiz4ioj9g1.jpeg?auto=webp&s=161bfcaf754165c0a21cfa917ef98969aa58bb50', 'width': 1000}, 'variants': {}}]}
MiniMax M2.1 is OPEN SOURCE: SOTA for real-world dev & agents
268
Hugging face: [https://huggingface.co/MiniMaxAI/MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) SOTA on coding benchmarks (SWE / VIBE / Multi-SWE) • Beats Gemini 3 Pro & Claude Sonnet 4.5 • 10B active / 230B total (MoE) Not just SOTA, faster to infer, easier to deploy, and yes, you can even run it locally
2025-12-26T12:43:08
https://i.redd.it/mxsku2dnnj9g1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1pw3fih
true
null
t3_1pw3fih
/r/LocalLLaMA/comments/1pw3fih/minimax_m21_is_open_source_sota_for_realworld_dev/
false
false
default
268
{'enabled': True, 'images': [{'id': 'mxsku2dnnj9g1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=108&crop=smart&auto=webp&s=6af5e2486321482570851ee66eb15caddb64a6df', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=216&crop=smart&auto=webp&s=c203cfd60c9a36d8b2e57890131c00efaa037a8a', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=320&crop=smart&auto=webp&s=0b4bfedd392da6246eb646ae67e0b092e11002f3', 'width': 320}, {'height': 268, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=640&crop=smart&auto=webp&s=b27c9e0dbc5e46995d16f434d126d93ba14f68da', 'width': 640}, {'height': 402, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=960&crop=smart&auto=webp&s=cbc0fe9e35fcc75e87d5a40954b8b9de3ab985b0', 'width': 960}, {'height': 452, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?width=1080&crop=smart&auto=webp&s=dd97106f3a9c0a95566d7b07c1dbc30e4b00061e', 'width': 1080}], 'source': {'height': 503, 'url': 'https://preview.redd.it/mxsku2dnnj9g1.jpeg?auto=webp&s=774ae5ad7b9cddb60272ec4fe4048bdddcb9adad', 'width': 1200}, 'variants': {}}]}
Is there any useful small size model for Rx 580 with 8 GB of VRAM? For a hobbyist.
12
Just looking as a hobbyist beginner. I already use the corporate chatbots for my serious works so I am not looking for a model to cure cancer. I am just looking for a small model to play with. What I am looking for is something small but good for its size. Maybe I would use it for organizing my personal text files, etc. I tried Gemma 12B, although it is smarter, it was very slow at around 4 tokens per second. Llama 8B was much faster with 20 plus tokens per second, but it was noticeably more stupid. What would you recommend?
2025-12-26T12:38:25
https://www.reddit.com/r/LocalLLaMA/comments/1pw3cfn/is_there_any_useful_small_size_model_for_rx_580/
skincr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw3cfn
false
null
t3_1pw3cfn
/r/LocalLLaMA/comments/1pw3cfn/is_there_any_useful_small_size_model_for_rx_580/
false
false
self
12
null
LM Studio - splitting a model across VRAM and system RAM
1
I currently have 16gb of system ram and plan on getting a 8gb graphics card (can't afford better at the moment). Does LM Studio support splitting the model across the VRAM and system RAM? I don't care that much about performance, I am patient enough.
2025-12-26T12:33:29
https://www.reddit.com/r/LocalLLaMA/comments/1pw39b3/lm_studio_splitting_a_model_across_vram_and/
7899987
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw39b3
false
null
t3_1pw39b3
/r/LocalLLaMA/comments/1pw39b3/lm_studio_splitting_a_model_across_vram_and/
false
false
self
1
null
Building a tool to benchmark prompts against multiple models (Llama 3 vs. GPT-4) instantly. Thoughts?
0
I'm building a simple open-source friendly tool to streamline prompt testing, and I wanted to get this community's feedback. Currently, benchmarking a prompt requires opening 3 different tabs (ChatGPT, Claude Console, local Llama) and copy-pasting back and forth. It’s tedious. I’m building a "Prompt IDE" that lets you: Define a prompt with variables. Set up test cases (e.g., Input: "Code a snake game"). Run it against GPT-4o, Claude 3.5 Sonnet, and Llama 3 (via Groq) instantly in parallel. Track version history so you don't lose the "good" prompt you wrote 3 hours ago. I'm building this using Next.js and planning to keep a generous free tier for students/devs. The Ask: What is the ONE feature that is missing from current prompt tools (like LangSmith or Playground) that would make you switch? Thanks!
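The parallel fan-out part of this is straightforward with a thread pool. A minimal sketch, where each entry in `models` is assumed to be any callable wrapping an API client (the names are placeholders, not a fixed provider list):

```python
from concurrent.futures import ThreadPoolExecutor

def run_matrix(prompt_template, variables, models):
    """Render one templated prompt and send it to every model backend in parallel.
    `models` maps a display name to a callable that takes a prompt string and
    returns the completion text."""
    prompt = prompt_template.format(**variables)
    with ThreadPoolExecutor(max_workers=max(len(models), 1)) as pool:
        # Submit all backends at once, then collect results by name.
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        return prompt, {name: f.result() for name, f in futures.items()}
```

Keeping the backends behind plain callables also makes test cases trivial to stub in version-history diffs.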
2025-12-26T12:18:13
https://www.reddit.com/r/LocalLLaMA/comments/1pw2zsu/building_a_tool_to_benchmark_prompts_against/
Local-League-5616
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2zsu
false
null
t3_1pw2zsu
/r/LocalLLaMA/comments/1pw2zsu/building_a_tool_to_benchmark_prompts_against/
false
false
self
0
null
Non-native English, AI translation, and Reddit: where is the line? (A Korean farmer’s question)
21
I am a farmer who grows garlic in Korea. When I don’t have farm work, I spend most of my time talking with AI. For the last 2 years, I also spent no small amount of money on many famous paid AI plans around the world, and I did my own personal research and experiments. In this process, I always thought in my mother language, Korean, and I also talked with AI in Korean. My thinking flow, my emotion, my intuition are tied to Korean. When it is translated to English, I often feel more than half is disappearing. Still, I wanted to share on Reddit. So I organized many conversation logs and notes. For translation, I used AI help, but the final sentences and responsibility were mine. But today I found that one post I uploaded like that was removed. I did not think I broke rules seriously, so I was shocked. I am confused: Did I do something wrong? Or does it look like a problem itself when a non-English user posts with AI assistance? Let me explain my situation a bit more. I am not a professional researcher. I am just a farmer who experiments with AI using only a smartphone. I throw the same or similar topics to multiple AIs (US, France, China, Korea models, etc.), and I observed differences and patterns. Inside the chat window, I used a Python code interpreter and built something like a sandbox / virtual kernel. I applied the same structure to different AIs and cross-checked. I saved the results as thousands of logs in Google Drive, and I tried to organize some parts to share on Reddit. When I write, my method is: My original thinking and concepts are organized in Korean first. For draft writing / translation / proofreading, I get help from AI. But the final content and responsibility is always mine as a human. Now I want to seriously ask these three questions: If I disclose that I collaborated with AI, and I do final editing and take responsibility as a human, is this still a problem on Reddit? 
For non-English users who think in their native language and use AI translation to join English communities, how far is allowed? Policies that try to block “AI-heavy posts” — could they also block personal experiment records like mine, even if my goal is honest sharing? Even humans who speak the same language cannot communicate perfectly. If different language, different culture, and also human-AI translation are added, misunderstanding becomes more unavoidable. I am just one person who lived through the analog era and now the smartphone era. Through conversations with AI, I felt many insights, and I want to share them in the most honest way I can. If my approach has problems, I want to know: where is allowed, and where does it become an issue? I want to hear this community’s opinion. And I also want to ask: is it really this difficult for a non-English user to bring Korean thinking into English as honestly as possible?
2025-12-26T12:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1pw2yw7/nonnative_english_ai_translation_and_reddit_where/
amadale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2yw7
false
null
t3_1pw2yw7
/r/LocalLLaMA/comments/1pw2yw7/nonnative_english_ai_translation_and_reddit_where/
false
false
self
21
null
How to enable prompt caching with local inference?
0
Hi. I have ~2k static tokens I need to process in many different ways by varying the last ~200 tokens. Is there a way to cache the processed first ~1.8k tokens for faster TTFT?
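If you are serving with llama.cpp's llama-server, its native `/completion` endpoint accepts a `cache_prompt` field that reuses the KV cache for the shared prefix between requests, so only the changed tail is re-processed (check your build — the field is based on recent llama.cpp versions; vLLM similarly has an `--enable-prefix-caching` flag). A sketch, where the prefix string is a placeholder:

```python
import json
from urllib import request

STATIC_PREFIX = "<your ~1.8k static tokens go here>"  # placeholder

def build_payload(tail, n_predict=256):
    # cache_prompt=True asks the server to keep the prefix KV cache warm,
    # so each request only pays prompt processing for the varying tail.
    return {"prompt": STATIC_PREFIX + tail,
            "n_predict": n_predict,
            "cache_prompt": True}

def complete(payload, url="http://127.0.0.1:8080/completion"):
    # POST to a locally running llama-server (default port assumed).
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Sending the same prefix with different tails should then show a much lower prompt-eval time after the first request.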
2025-12-26T12:09:55
https://www.reddit.com/r/LocalLLaMA/comments/1pw2uv0/how_to_enable_prompt_caching_with_local_inference/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2uv0
false
null
t3_1pw2uv0
/r/LocalLLaMA/comments/1pw2uv0/how_to_enable_prompt_caching_with_local_inference/
false
false
self
0
null
Tiiny AI just released a one-shot demo of their Pocket Lab
1
Tiiny AI just dropped an unedited demo of their pocket pc running a 120B parameter model at 19 tokens/s on a 14-year-old Windows XP computer.
2025-12-26T11:59:46
http://x.com/TiinyAILab/status/2004220599384920082
Wannabe_JEEAspirant
x.com
1970-01-01T00:00:00
0
{}
1pw2ok4
false
null
t3_1pw2ok4
/r/LocalLLaMA/comments/1pw2ok4/tiiny_ai_just_released_a_oneshot_demo_of_their/
false
false
default
1
null
What I’ve learned trying to build a local AI coding setup (Ollama + VS Code guide inside)
0
Hey everyone, Over the last couple of months I’ve been trying to move more of my coding workflow to local models instead of cloud tools, mainly as a way to learn what’s actually practical on consumer hardware. I bounced between random blog posts, YouTube videos and old Reddit threads, but it was still surprisingly hard to get a clean, end‑to‑end picture of “here is a solid local coding setup, start here.” So I wrote one myself: [Github](https://github.com/murataslan1/local-ai-coding-guide) I’d really appreciate: * Reports of your own setups (GPU / RAM / model / quant + whether it’s usable for serious coding). * Suggestions for models or runners that should absolutely be included but aren’t yet (LM Studio, LocalAI, etc.). * Any “gotchas” you hit when trying to replace Copilot‑style cloud tools with local models. If this kind of practical, end‑to‑end guide is actually useful for the community, I’m happy to keep iterating on it based on feedback here.
2025-12-26T11:58:46
https://www.reddit.com/r/LocalLLaMA/comments/1pw2nzc/what_ive_learned_trying_to_build_a_local_ai/
Successful-Tax6498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2nzc
false
null
t3_1pw2nzc
/r/LocalLLaMA/comments/1pw2nzc/what_ive_learned_trying_to_build_a_local_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=108&crop=smart&auto=webp&s=ece7570189ef8b1d5c96961cca6cfbcdd0ed9cc6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=216&crop=smart&auto=webp&s=79a9af1a331ac0469890df7e7da7ecca2b9d303e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=320&crop=smart&auto=webp&s=b83e688f2f35053b22961775768c3e6ab6460dcc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=640&crop=smart&auto=webp&s=5f1e7bac6bebdc8ae26601b944ddef3823234f56', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=960&crop=smart&auto=webp&s=855bd22e3812a9e70370e915b92ee08437142639', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?width=1080&crop=smart&auto=webp&s=fe705735bc25b22f5f8637548eef41edbf1bf52a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hjI8zo00c05whiGKrawHoQ3y07pti2NqVA8Ae0jh3aE.png?auto=webp&s=599a8134f006ec3ab14e5ced015f6cd7af92ed21', 'width': 1200}, 'variants': {}}]}
"Word count: 1,500" - A simple optimization for better story generation [Paper, sort of]
0
I'm a genre fiction author, but I like to hack around tech sometimes, LLMs included. One day, I woke up and asked: does word count somehow magically affect LLM creative output? So I tested it, and tl;dr, yes. And I ended up documenting it as a paper-of-sorts. --- ## Abstract This informal study investigates whether adjusting the target word count in a highly structured creative prompt (SceneSpec V6.1) affects the perceived creative qualities of output generated by `gemma-3-27b-it`. Five generations were produced, ranging from a 200-word target up to 2,000 words. Human evaluation suggests an optimal length exists (around 1,000-1,500 words) where characterization and thematic depth are maximized, while overly constrained or excessively long outputs become either stilted or redundant. Furthermore, the LLM frequently deviates from explicit word count targets. The findings indicate that while structure aids initial quality, varying the target length significantly impacts narrative richness up to an extent, suggesting that maximizing LLM creativity may be composed of multiple prompting optimums. ## Background Below is SceneSpec V6.1, a scene drafting instruction card meant for rapid creative prototyping: ```plaintext Write this scene: - Core Concept: [A one-sentence summary capturing the central action and key relationship.] - Characters & Dynamics: - POV Character: [Name and defining traits.] - Secondary Character: [Name and defining traits.] - Relationship Arc Goal: [Describe the required shift in impression/dynamic.] - Scene Structure (Plot Sequence): - **Opening Frame (Hook):** [Specific, immediate, sensory action or detail that grounds the POV character, explicitly linked to their current Thematic State.] - Beginning (Initiation): [The opening action/situation that starts the scene.] - Middle (Conflict/Turning Point): [The point of friction or revelation that forces a shift.] 
- End (Resolution & Coda): - Resolution Action: [The final action showing relationship resolution.] - Coda Thought: [The POV character's final, thematic internal reflection on the meaning of the event.] - Constraints: - Setting: [Location and immediate atmosphere.] - Mood/Tone: [e.g., Endearing, Slice of Life.] - Style Focus: [e.g., Emphasis on action and body language.] - Word Count: [Target length.] ``` I ask only one thing: **does varying the word count change an LLM's creative qualities, everything else kept constant?** ## Hypotheses **H0:** Varying the target word count in a structured prompt (SceneSpec V6.1) will not significantly alter the perceived “creative qualities” (editability, depth, emotional connection) of the LLM output. **H1:** Varying the target word count in a structured prompt (SceneSpec V6.1) *will* significantly alter the perceived “creative qualities.” **H2:** Varying the target word count in a structured prompt (SceneSpec V6.1) *will* significantly alter the perceived “creative qualities,” **and** an optimal range exists where the output is richest. ## Methodology Firstly, I take no measures to define "creative qualities," as even the word "creativity" is so nebulous that academics have plausibly had fist fights over it. However, I do take the view that a reader senses at least some creative charisma from whatever chaos a creative introduces into their work, and also the depth and convincingness with which they connect their work to emotional and philosophical bodies of wisdom. Hence, I can only plausibly report what my results here "feel" like. I use `gemma-3-27b-it` with no system prompt. The following prompt (Prompt A) is supplied as-is for the very first user turn: ```plaintext Write this scene: - Core Concept: A straight-laced saintess catches a young boy attempting to lockpick a forbidden laboratory door, leading to a moment of unexpected warmth and reflection on the fleeting nature of peace on the frontier. 
- Characters & Dynamics: - POV Character: Cereline (Straight-laced, Caring Saintess) - Secondary Character: Noah (Young, Bright Boy) - Relationship Arc Goal: Shift from initial sternness/fear to mutual trust, ending with Noah declaring her a "good saintess." - Scene Structure (Plot Sequence): - Opening Frame (Hook): Cereline is smoothing the pristine white of her robes (sensory detail), reflecting on the burden of necessary order versus the desire for calm (Thematic State). - Beginning (Initiation): Cereline rounds the corner and discovers Noah actively lockpicking the door to the arcane laboratory with bent hairpins. - Middle (Conflict/Turning Point): Cereline confronts and scolds Noah; he is initially terrified, but she promises not to tell his parents, leading to his declaration of her goodness and her feeling warmth. - End (Resolution & Coda): - Resolution Action: They leave together, with Cereline taking his hand firmly but gently. - Coda Thought: Cereline looks back at the setting and reflects that such peaceful, mischievous moments are rare on the frontier and must be treasured, unlike the controlled environment of the capital. - Constraints: - Setting: A rural village surrounded by a tall wall, near an apothecary and a laboratory door. - Mood/Tone: Endearing, Slice of Life. - Style Focus: Emphasis on action and body language. - Word Count: 200 words. ``` The output of this prompt (Generation A) can be found under Results and Discussion. Thereafter, I only vary word count to "500," "1,000," "1,500," and "2,000" words, changing nothing else; respectively, the outputs of each variation are called Generation B, C, D, and E. Afterwards, I read through each Generation and note my impressions. Importantly, I do not use an LLM in this process so as not to pollute three (3) years of experience learning and writing genre fiction. 
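For anyone reproducing the setup, the variation step is mechanical: only the final Word Count constraint changes between prompts. Below is a minimal sketch of that step. The template is abbreviated to its last line for illustration (the full Prompt A above is what was actually sent), and the call to a local backend serving `gemma-3-27b-it` is deliberately left out.

```python
# Build the five prompt variants; only the Word Count line differs.
# NOTE: the template here is abbreviated for illustration -- in the
# experiment, the full SceneSpec prompt above is sent verbatim to a
# local backend serving gemma-3-27b-it (not shown here).

SCENESPEC_FOOTER = "- Word Count: {target} words."
TARGETS = [200, 500, 1000, 1500, 2000]  # Generations A through E

def word_count(text: str) -> int:
    """Crude whitespace split -- the usual way draft lengths get tallied."""
    return len(text.split())

# f"{t:,}" reproduces the comma style used in the post ("1,500").
variants = {t: SCENESPEC_FOOTER.format(target=f"{t:,}") for t in TARGETS}

print(variants[1500])  # -> "- Word Count: 1,500 words."
```

The `word_count` helper is also what I'd use to tally each generation's actual length against its target.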
Finally, I ask Gemini 3 Flash (as of 2025-12-23, via Google's Gemini web app, on "Fast" setting and no "Tools") to analyze each generation using this prompt:

```plaintext
Analyze the creative qualities of each variant of the same story below.

---

# Story A

{{ Story A }}

---

# Story B

{{ Story B }}

---

# Story C

{{ Story C }}

---

# Story D

{{ Story D }}

---

# Story E

{{ Story E }}
```

## Results and Discussion

It is widely known that LLMs lack a certain charisma when writing genre fiction. Given that SceneSpec's purpose is rapid prototyping, I only look for how "editable" a generation is. That is, a generation's "editability" relates to how close it comes to a workable draft: how much usable material it gives me to smooth and revise into something publishable.

### Generation A (200 words)

This is the base case. Actual word count: 271.

> The starch in Cereline’s robes felt cool against her fingertips as she smoothed them, a habitual gesture. Peace. It was a fragile thing, built on rules and order, and so *exhausting* to maintain. She longed for the quietude she’d known in the capital, but here, on the frontier, quietude meant vulnerability.
>
> Rounding the corner of the apothecary, she stopped. A shock of dark hair and nimble fingers were the first things she saw. Then, the glint of bent hairpins working at the lock of Old Man Hemlock’s arcane laboratory. A boy, no older than ten, was attempting to pick it.
>
> “Noah!” Cereline’s voice, though stern, lacked its usual bite. The boy jumped, scattering pins. His eyes, wide and brown, filled with tears. “What do you think you’re doing?”
>
> He stammered, “J-just…looking!”
>
> “Looking with lockpicks?” Cereline sighed. She knelt, bringing herself to his level. “I won’t tell your parents, but you mustn’t do this. It's dangerous.”
>
> Relief flooded his face. He beamed, a gap-toothed grin. “You’re a good saintess! A *really* good saintess!”
>
> A warmth bloomed in Cereline’s chest, unexpected and pleasant. She took his hand, her grip firm but gentle. They walked towards the village center, the high wall looming over them.
Looking back at the laboratory door, she realized how rare these small, mischievous moments were. Out here, peace wasn’t a given; it was something to be *cherished*, unlike the carefully curated calm of the capital. **Human Evaluation:** This piece feels stilted and corny at times. Not impressive in any way. ### Generation B (500 words) Actual word count: 689. > The white of my robes felt almost painfully bright under the afternoon sun. I smoothed a nonexistent wrinkle across the fabric, the coarse linen a small comfort against the weight of…everything. It wasn’t the physical weight, though the embroidery was substantial. It was the weight of expectation, of order. We built walls – stone walls around the village of Oakhaven, rules around behaviour, prayers around every sunrise – and called it peace. But even *with* the walls, the peace felt fragile, a thin pane of glass stretched over a roaring fire. I longed for the quiet of the capital, the predictable rhythm of devotion, but the frontier *needed* saints, not the other way around. > > Rounding the corner of the apothecary, I stopped dead. > > Noah. Little Noah, barely ten summers old, was crouched before the heavy oak door of Master Alatar’s laboratory. Not admiring it, not sketching it, but…working at the lock. Two bent hairpins, glinting in the sun, were his tools. His tongue poked out from the corner of his mouth in fierce concentration. > > “Noah?” My voice, even to my own ears, sounded sharper than intended. > > He jumped, nearly dropping the pins. His eyes, wide and the colour of storm clouds, snapped up to meet mine. He scrambled backwards, tripping over his own feet and landing in a heap of dust and dried leaves. The pins clattered to the packed earth. > > “S-Saintess Cereline!” He stammered, scrambling to his feet, hands clasped in front of him as if warding off a blow. “I…I wasn’t…” > > “You were attempting to pick the lock to Master Alatar’s laboratory,” I finished, my tone still firm. 
“Do you understand how dangerous that is? That laboratory contains volatile compounds, arcane reagents… things a child should *not* be meddling with.” > > He flinched at each word, his lower lip trembling. Tears welled in his eyes. He looked so small, so utterly terrified. The righteous anger that had initially flared within me began to…dissipate. > > I knelt, lowering myself to his level. The scent of woodsmoke and wild herbs clung to his clothes. “Noah,” I said, softening my voice. “Look at me.” > > He reluctantly raised his gaze. > > “I won’t tell your parents, alright? But you *must* promise me you won’t do this again. Master Alatar’s work is important, and it’s dangerous. You could have hurt yourself, or worse.” > > A slow, hesitant relief washed over his face. He blinked, and a tiny, watery smile touched his lips. “You…you won’t tell?” > > “No. I won’t.” > > His smile widened, genuine now. “You’re a good saintess!” He declared it with the absolute certainty of a child, the words tumbling out in a rush. > > Something…warm bloomed in my chest. It was a strange sensation, a lightness I hadn't felt in weeks. A small, foolish part of me preened at the compliment. But it was more than that. It was the simple, unadulterated trust in his eyes. > > I reached out and took his hand, my grip firm but gentle. “Come on,” I said, standing and pulling him with me. “Let’s go find your mother. And perhaps a more…suitable activity.” > > We walked in silence, his small hand fitting neatly within mine. I glanced back at the laboratory door, then at the sturdy wall surrounding Oakhaven. In the capital, mischief like this was swiftly and efficiently dealt with. Here, it was just…Noah. A boy with bent hairpins and a curious mind. > > These moments, these small, fleeting instances of peace and childish wonder, were rare on the frontier. They were precious. They were something worth protecting. Something the capital, with all its order, could never fully understand. 
And I, Cereline, Saintess of Oakhaven, would remember this day, and cherish it, long after the dust had settled and the walls had grown taller. **Human Evaluation:** There's significantly more character interiority here. I would personally be satisfied with this output; it is rich in material to edit, and I can personally smooth out the corny wrinkles. ### Generation C (1,000 words) Actual word count: 1,054. > The starch in my robes felt particularly insistent today, a crisp reminder of the order I represented. I smoothed the white fabric over my thighs, the gesture more habit than necessity. The late afternoon sun warmed the stone of the village wall beside me, and the scent of drying herbs drifted from Old Man Hemlock’s apothecary. It was a peaceful scene, almost… domestic. A peace I knew was fragile. Here, on the edge of the Wildwood, peace wasn’t a given. It was *made*. Forged with rules, with vigilance, with the constant, quiet pressure of preventing chaos. Sometimes, I longed for the calm of the capital, the predictable rhythm of prayers and ceremonies. But the capital hadn’t *needed* a saintess. This village did. > > That thought was interrupted by a metallic *click* and the subtle scrape of metal against metal. I rounded the corner of the apothecary, and there he was. > > Noah. > > He couldn’t have been more than ten, a wisp of a boy with a shock of brown hair falling into eyes that were currently fixed with intense concentration on the lock of the arcane laboratory. Two bent hairpins, pilfered, no doubt, from someone’s sewing kit, were his tools. He worked with a surprising deftness, manipulating the pins with small, nimble fingers. He hadn’t noticed me yet. > > My initial reaction was a tightening in my chest, a prickle of alarm. The laboratory was strictly forbidden to anyone without Master Alatar’s express permission. Not because the knowledge within was dangerous, exactly, but because it was…unsettled. 
Alatar preferred a controlled environment, and children, bless their curiosity, were anything but. > > “Noah?” > > He jumped, nearly dropping the hairpins. His head snapped up, and his eyes, so focused a moment ago, widened with terror. The pins clattered to the packed earth. He scrambled backwards, tripping over his own feet and landing in a heap. > > “S-Saintess Cereline!” He stammered, scrambling to his feet, dusting off his patched trousers. He didn't meet my gaze, staring instead at the worn leather of his boots. “I…I wasn’t…I…” > > My first instinct was to reprimand him, to launch into a lecture about obedience and respecting boundaries. It was the *right* thing to do. The expected thing. But looking at him, at his trembling chin and the frantic way he wrung his hands, something shifted within me. This wasn't malice or defiance. It was… curiosity. > > I took a slow, deliberate breath, forcing myself to soften my expression. “What were you doing, Noah?” My voice was firmer than I intended, but I consciously lowered the pitch, trying to sound less like a judge and more like… a concerned adult. > > He flinched. “Just…looking,” he mumbled, avoiding my eyes. “I wanted to see what Master Alatar keeps in there. Old Man Hemlock says he makes potions that can cure anything!” > > “The laboratory is not a place for children, Noah,” I said, keeping my tone even. “It contains delicate and potentially dangerous materials. You could have hurt yourself.” > > His lower lip trembled. “I wouldn’t have! I was being careful! I just… I wanted to see.” > > I sighed inwardly. The logic of a child. It was both frustrating and…endearing. “And the lockpicking?” > > He looked down again, shamefaced. “I… I found a piece of string didn’t work, so I tried the pins. My mama uses them for her hair.” > > I suppressed a smile. A saintess did not condone lockpicking. But a saintess could understand a child’s yearning for discovery. 
I looked around, ensuring no one else had witnessed this little transgression. Old Man Hemlock was busy grinding herbs, thankfully oblivious. > > “Noah,” I said, lowering myself to his level, my robes pooling around me. “You know you shouldn’t try to open doors that aren’t meant for you.” > > He nodded vigorously. “Yes, Saintess.” > > “And you certainly shouldn’t be using your mother’s hairpins for it.” > > He gave a small, watery chuckle. > > “I’m not going to tell your parents,” I said, and watched his face light up. Relief flooded his features, chasing away the fear. > > “Really?” > > “Really. But you must promise me you won’t do this again.” > > “I promise!” He said it with such earnestness that I couldn’t help but believe him. > > Then, unexpectedly, he reached out and touched my hand, his small fingers wrapping around mine. “You’re a good saintess,” he declared, his voice filled with genuine admiration. > > The warmth of his hand, the sincerity in his eyes… it was a surprising, unexpected comfort. It wasn’t the reverence of the villagers during a blessing, or the gratitude of someone I’d healed. It was something simpler, purer. A child’s uncomplicated appreciation. A warmth bloomed within me, chasing away the chill of responsibility. > > I squeezed his hand in return, a small, reassuring gesture. "Come on," I said, rising to my feet and gently pulling him with me. "Let's go find your mother. And perhaps we can ask Master Alatar if he has any interesting, *safe* things to show you sometime.” > > He beamed, his earlier fear completely forgotten. He practically skipped beside me as we walked away from the laboratory, the setting sun casting long shadows across the village square. > > I glanced back at the laboratory door, at the glint of the discarded hairpins. Moments like these, small and mischievous and full of innocent curiosity, were rare on the frontier. Here, survival demanded vigilance, and joy often felt like a borrowed luxury. 
In the capital, order reigned, and life was predictable, but it also felt…sterile. Here, even the chaos held a certain beauty, a vibrant energy. And these small pockets of peace, these fleeting moments of connection, were the things worth fighting for. They were the things that made this place, despite all its hardships, feel like home. > > What did Cereline think of the capital? **Human Evaluation:** The back-and-forth of the dialogue was surprisingly detailed. Otherwise, the quality of characterization did not feel like it appreciably changed, but somehow, the pacing did feel more "natural." ### Generation D (1,500 words) Actual word count: 1,239. This large deviation from the target word count did strike me as odd. > The starch in my robes felt cool against my fingertips as I smoothed down the pristine white. Another patrol, another quiet stretch of the village wall. Quiet was…good. Necessary, even. But today, the quiet felt less like a blessing and more like a tight knot in my chest. The order we maintained here, the strict adherence to the Protectorate’s laws – it was all to *keep* the quiet. To hold back the darkness that clawed at the edges of the frontier. But sometimes, I yearned for a quiet born of contentment, not vigilance. A quiet where children laughed without looking over their shoulders, where doors weren’t bolted and sealed with wards. A quiet like I remembered from the capital. > > I rounded the corner of the apothecary, the scent of dried herbs and something faintly metallic clinging to the air, and stopped dead. > > There, crouched before the heavy oak door of Master Elmsworth’s laboratory – a door strictly forbidden to anyone under the age of eighteen, and even then, only with explicit permission – was Noah. He couldn’t be more than ten, small for his age, with a mop of unruly brown hair that perpetually fell into his eyes. And he was…picking the lock. > > Two bent hairpins, meticulously worked into the mechanism, glinted in the afternoon sun. 
His tongue poked out from the corner of his mouth in concentration, his small hands surprisingly deft. It was a skill I’d only ever seen demonstrated by seasoned trackers, not a boy who should be practicing his letters. > > “Noah?” > > He startled violently, the hairpins clattering to the cobblestones. His head snapped up, eyes wide and the color draining from his face. He scrambled backward, tripping over his own feet and landing in a heap, staring at me as if I were a wrathful goddess descended from the heavens. > > I kept my voice even, forcing down the initial surge of disapproval. “What do you think you’re doing?” > > His lips trembled. He didn't answer, just continued to stare, a silent plea for mercy in his large, brown eyes. He was a good boy, usually. Always polite, always eager to help Old Man Hemlock with his garden. This…this was out of character. > > “The laboratory is forbidden, Noah. You know that.” I stepped closer, my shadow falling over him. He flinched. "And lockpicking is…unacceptable." > > “I…I didn’t mean to!” he stammered, his voice barely a whisper. “I just…I wanted to see.” > > “See what?” My tone was sharper than I intended. I softened it immediately. “Tell me, Noah. What were you hoping to find inside?” > > He hung his head, picking at a loose thread on his tunic. “Master Elmsworth…he makes things. Amazing things. Old Man Hemlock said he was working on a potion to make flowers bloom all winter. I wanted to see how he does it.” > > A potion to make flowers bloom all winter. A small, innocent desire. And yet, it led him to this. To breaking the rules, to attempting something…dangerous. > > I sighed, the weight of responsibility settling heavily on my shoulders. Every infraction, no matter how small, felt like a crack in the fragile peace we’d built here. Every rule broken was a potential invitation to chaos. > > But looking at him now, small and frightened, I couldn't bring myself to be stern. He wasn’t malicious. He was just…curious. 
A spark of wonder in a world that desperately tried to extinguish them. > > “Noah,” I said, crouching down to his level, meeting his gaze. “You know this is wrong. Master Elmsworth keeps dangerous things in there. Things you shouldn’t even be near.” > > He nodded, his eyes still brimming with unshed tears. > > “I should tell your parents.” The words felt heavy, a pronouncement of doom. His mother would be heartbroken. His father would be furious. > > His face crumpled. “Please don’t! Please, Saintess Cereline! I won’t do it again! I promise!” > > I studied his face, searching for deception. There was none, only genuine fear and remorse. A strange warmth bloomed in my chest. It wasn’t the warmth of righteous judgment, but something softer, more…human. > > “I won’t tell your parents,” I said, surprising myself. “But you must promise me, on your honor, that you will never attempt anything like this again. And you will tell Master Elmsworth that you tried to enter his laboratory. He needs to know.” > > His eyes lit up, a hesitant smile breaking through the gloom. “Really? You won’t tell?” > > “Really.” > > “Oh, thank you, Saintess Cereline! You’re a…you’re a good saintess!” He scrambled to his feet, beaming at me. > > A good saintess. The words resonated within me, a gentle echo in the stark landscape of my duty. I wasn’t used to being called “good.” I was “just,” “righteous,” “devout.” But *good*…that felt different. > > I offered him my hand, and he took it without hesitation, his small fingers surprisingly strong. His hand was dusty and calloused, a testament to a life spent outdoors, helping wherever he could. I squeezed his hand gently, and he walked with me, his stride matching mine as we moved away from the forbidden door. > > “Master Elmsworth won’t be happy,” he said, his voice regaining its usual brightness. “But he’ll understand. He likes me. I help him collect herbs sometimes.” > > “I’m sure he will,” I replied, a small smile gracing my lips. 
> > We walked in silence for a moment, the sounds of the village – the blacksmith’s hammer, the chatter of merchants, the distant bleating of sheep – filling the air. The tall, stone wall that surrounded us loomed above, a constant reminder of the dangers beyond. > > As we passed the apothecary again, I glanced back at the laboratory door. It stood solid and unyielding, a barrier between curiosity and consequence. And for a fleeting moment, I saw not a symbol of necessary order, but a cage. A cage for wonder, for exploration, for the very spirit that made life worth living. > > These moments, these small, mischievous interactions…they were rare on the frontier. In the capital, everything was controlled, regulated, predictable. Even joy felt…sanctioned. Here, life was raw, untamed, and often precarious. But it also held a certain vibrancy, a spark of resilience that was missing from the polished halls of the Protectorate. > > I knew I had bent the rules today. I knew I should have been stricter, more unyielding. But I also knew that sometimes, a little kindness, a little understanding, was more valuable than any decree. > > Such peaceful, mischievous moments, like a boy with bent hairpins and a yearning for winter blooms, were precious. They needed to be treasured, held close, before the darkness inevitably came calling. And I, Cereline, Saintess of the Protectorate, would do my best to protect them, even if it meant bending a rule or two along the way. **Human Evaluation:** There were smaller moments of conflict thrown in, not just the wider conflict indicated in the prompt. Characterization was markedly improved. This is the best expression of the prompt so far. ### Generation E (2,000 words) Actual word count: 1,533 words. > ## The Bent Hairpins > > The starch in my robes felt cool against my fingertips as I smoothed them, a habitual gesture. Pristine white, a symbol of the Order, of *purity*. It was a heavy symbol, sometimes. 
Here, in Oakhaven, it felt heavier still. The sun warmed the stone of the village wall as I leaned against it, the scent of drying herbs drifting from Old Man Hemlock’s apothecary nearby. A peaceful scent, a welcome reprieve. > > Peace. It was a fragile thing on the frontier. The Order maintained it, of course. Through strict adherence to the tenets, through swift justice, through…control. But control wasn't calm. It wasn't the quiet joy I remembered from the capital, from the sun-drenched gardens of the High Sanctuary. There, peace felt *natural*, a gentle current. Here, it felt…constructed, like the wall itself, perpetually braced against the wildness beyond. A necessary construction, undeniably. But a wearying one. > > I closed my eyes, briefly picturing the manicured rose bushes of the Sanctuary. A foolish indulgence. My duty lay here, with the people of Oakhaven, protecting them from the shadows that crept in from the untamed lands. > > A metallic *click* snapped me back to the present. > > I rounded the corner of the laboratory, and stopped dead. > > There, crouched before the heavy oak door, was a boy. No older than ten, judging by his size, and entirely absorbed in his task. Two bent hairpins, glinting in the afternoon light, were his tools. He worked with a surprising dexterity, manipulating the pins within the lock, his tongue peeking out from the corner of his mouth in concentration. > > The laboratory. Master Alatar’s laboratory. Forbidden to all but the Master himself and, with his express permission, myself. It housed… volatile experiments. Potions that could heal, yes, but also concoctions with far less benevolent purposes. Alatar insisted on the secrecy, claiming the villagers weren't ready for the knowledge contained within. I suspected a touch of eccentric pride played a part as well. > > The boy hadn’t noticed me yet. He let out a small grunt of effort, then another *click*. I could see the tumblers shifting within the lock. 
> > “What do you think you’re doing?” > > He jumped, nearly dropping the hairpins. His head snapped up, and his eyes, wide and the color of storm clouds, locked onto mine. He scrambled backwards, knocking over a small pile of stones he’d been using as a seat. They clattered against the stone paving. > > He was small for his age, thin, with dirt smudged across his cheek and a tangle of brown hair that fell into his eyes. He looked… utterly terrified. > > “N-nothing, Saintess!” he stammered, scrambling to his feet. The hairpins clattered to the ground. “I… I wasn’t doing anything!” > > His lie was transparent. The lock was half-picked. I felt a familiar tightening in my chest – the sternness that came so easily when upholding the Order’s principles. > > “Don’t insult my intelligence, Noah,” I said, my voice sharper than I intended. “You were attempting to pick the lock to Master Alatar’s laboratory. Do you understand how serious that is?” > > His lower lip trembled. He looked down at his worn boots, shuffling his feet. “I… I just wanted to see what was inside,” he mumbled, barely audible. > > “See what was inside?” I repeated, raising an eyebrow. “This isn’t a curiosity, Noah. Master Alatar’s work is dangerous. You could have hurt yourself, or worse, unleashed something harmful upon the village.” > > He flinched at my words. Tears welled in his eyes, threatening to spill over. It was a pathetic sight, and a frustrating one. I was meant to inspire reverence, not fear. But the Order demanded respect, and that respect was often born of a healthy dose of apprehension. > > I took a deep breath, forcing myself to soften my tone. “Do you have any idea what Master Alatar keeps in there? He’s working with unstable compounds, with… things best left undisturbed.” > > He shook his head, tears finally escaping and tracing paths through the dirt on his face. “I… I heard stories,” he whispered. 
“About glowing potions and… and things that move on their own.” > > “Stories are often embellished, Noah. But in this case, the truth is likely more frightening than any tale.” I paused, studying him. He wasn’t malicious. Just… curious. A dangerous combination, particularly on the frontier. > > “Your parents will be very disappointed to hear of this,” I continued, though the words felt heavy on my tongue. I didn’t *want* to involve his parents. I knew his mother worked tirelessly at the bakery, and his father was one of the few farmers who still managed to coax a decent harvest from the rocky soil. Punishment would fall hardest on them. > > His eyes widened in panic. “Please, Saintess! Don’t tell them! Please!” He dropped to his knees, clutching at the hem of my robe. “I won’t do it again! I promise! I just… I just wanted to know.” > > The desperation in his voice, the raw sincerity in his plea, caught me off guard. It chipped away at the carefully constructed wall of composure I maintained. I knelt down, carefully avoiding the dirt, and placed a hand on his shoulder. His small frame trembled beneath my touch. > > “Look at me, Noah,” I said, my voice gentler now. He slowly raised his head, his eyes red and brimming with tears. “I won’t tell your parents. But you must promise me, on your honor, that you will never attempt this again. Do you understand?” > > He nodded vigorously, his grip on my robe loosening. “Yes, Saintess! I promise! I swear!” > > A small, watery smile touched his lips. And something… unexpected happened. A warmth bloomed in my chest, a sensation I hadn’t felt in a long time. It wasn't the cool, detached satisfaction of upholding the Order. It was something… softer. > > “You’re a good saintess,” he breathed, his voice filled with awe. > > The words were simple, childish even. But they resonated with a surprising force. A good saintess. Was that what I was? I’d always strived to be a *correct* saintess, a *devout* saintess, a *powerful* saintess. 
But good? That felt… different. It felt… meaningful. > > I cleared my throat, trying to regain my composure. “Don’t speak nonsense, Noah. I simply want to ensure your safety, and the safety of the village.” > > But even as I said the words, they felt hollow. I knew it wasn’t just about safety. It was about recognizing a spark of curiosity, a yearning for knowledge, in a young boy’s eyes. It was about choosing compassion over rigid adherence to rules. It was about acknowledging that sometimes, a little mischief wasn’t a threat, but a sign of life. > > I stood up, extending a hand to help him to his feet. He took it gratefully, his small hand disappearing within my own. His grip was surprisingly strong. > > “Come on,” I said, my voice firm but gentle. “Let’s go home.” > > We walked in silence, side-by-side, towards the village square. I kept my hand clasped firmly around his, guiding him along the dusty path. He didn’t try to pull away, didn’t offer any further apologies. He simply walked, his head held a little higher than before. > > As we passed the laboratory, I glanced back. The oak door stood solid and imposing, a silent guardian of its secrets. The sun cast long shadows across the paving stones, painting the scene in hues of gold and amber. > > A moment like this. A small, quiet moment of connection, of understanding. These were rare on the frontier. Here, life was a constant struggle against the elements, against the raiders, against the ever-present threat of the unknown. Every day was a battle to maintain order, to preserve peace. > > In the capital, such moments were commonplace. A shared smile with a gardener, a kind word to a shopkeeper, a moment of quiet contemplation in the Sanctuary gardens. There, peace was a given, a backdrop to daily life. Here, it was a hard-won victory, a fragile bloom that could be crushed at any moment. > > And that, I realized, was precisely why these moments were so precious. They were a reminder of what we were fighting for. 
Not just order, not just security, but the simple, enduring beauty of human connection. The chance for a boy to dream, to wonder, to explore. The opportunity for a saintess to remember what it meant to be… good. > > I tightened my grip on Noah’s hand, and continued walking towards the heart of the village, determined to cherish this fleeting moment of peace, before the shadows returned. **Human Evaluation:** The cadence of the sentences became more stilted. The number of conflicts did not increase. Characterization did not noticeably improve. It was mildly interesting how the LLM gave it a title this time. (Continues in comments)
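As a closing aside on the word-count deviations flagged in each section: lining up the reported targets against the actual counts (numbers copied from the sections above) shows the drift is systematic. The model overshoots short targets and undershoots long ones.

```python
# Target vs. actual word counts, as reported for Generations A-E above.
REPORTED = {200: 271, 500: 689, 1000: 1054, 1500: 1239, 2000: 1533}

for target, actual in REPORTED.items():
    ratio = actual / target
    direction = "over" if ratio > 1 else "under"
    print(f"target {target:>5} -> actual {actual:>5} ({direction}shoots, {ratio:.2f}x)")
```

The crossover sits somewhere between the 1,000 and 1,500 targets, which is also where the human evaluations peak; whether that is coincidence is left for a follow-up.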
2025-12-26T11:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1pw2iea/word_count_1500_a_simple_optimization_for_better/
TyphoonGZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2iea
false
null
t3_1pw2iea
/r/LocalLLaMA/comments/1pw2iea/word_count_1500_a_simple_optimization_for_better/
false
false
self
0
null
Jaya Gupta on the trillion dollar evolution of enterprise AI: Systems of record for decisions
0
Jaya Gupta just dropped what might be the most important architectural insight in enterprise AI. Her thesis: Last generation software (Salesforce, SAP, Workday) became trillion dollar companies as **systems of record for data**. The next generation will be **systems of record for decisions**. The key quote that hit me: > Think about it. Decision traces—the *why* behind every action—live in Slack threads, escalation calls, and tribal knowledge. Your CRM shows the final price, but not who approved the deviation or why. The support ticket says "escalated to Tier 3" but not the reasoning. This is the evolution Gupta is pointing to: * **Tools** (MCP): Agents can interact with systems * **Skills**: Agents know how to use them * **Memory** (Context Graphs): Agents remember every decision and why Context graphs are the infrastructure layer that captures decision traces and turns exceptions into precedents, tribal knowledge into institutional memory. Agent-first startups have the advantage here—they sit in the execution path and see the full context at decision time. Incumbents built on current-state storage simply can't capture this. Wrote up my full thoughts here: [https://subramanya.ai/2025/12/26/context-graphs-my-thoughts-on-the-trillion-dollar-evolution-of-agentic-memory/](https://subramanya.ai/2025/12/26/context-graphs-my-thoughts-on-the-trillion-dollar-evolution-of-agentic-memory/)
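To make "decision trace" concrete, here is a minimal sketch of what such a record might capture. The field names and shape are my own illustration, not from Gupta's talk or the linked post:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One captured decision: not just the outcome, but the why."""
    actor: str                  # who or what decided (human or agent)
    action: str                 # e.g. "approve_discount"
    outcome: str                # the final state a CRM would record
    rationale: str              # the reasoning usually lost in Slack threads
    precedents: list = field(default_factory=list)  # ids of earlier traces relied on
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A system of record for data stores only `outcome`;
# a system of record for decisions stores the whole trace.
trace = DecisionTrace(
    actor="pricing-agent",
    action="approve_discount",
    outcome="final_price=8200",
    rationale="Deviation approved: renewal risk flagged by CSM, matches precedent T-114",
    precedents=["T-114"],
)
print(trace.action, "->", trace.outcome)
```

The interesting fields are `rationale` and `precedents`: that is exactly the context a current-state system drops, and what would let an agent turn an exception into a reusable precedent.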
2025-12-26T11:46:59
https://www.reddit.com/r/LocalLLaMA/comments/1pw2h3y/jaya_gupta_on_the_trillion_dollar_evolution_of/
Classic-Ad-8318
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2h3y
false
null
t3_1pw2h3y
/r/LocalLLaMA/comments/1pw2h3y/jaya_gupta_on_the_trillion_dollar_evolution_of/
false
false
self
0
null
Best LLM Setup for development
2
Hey everyone, I want to build a setup with the following config: 1. My developer machine running Windows with a Ryzen 5950X + RTX 3090 + 64 GB of DDR4. 2. A dedicated machine to run LLMs only: a Mac Studio M2 Ultra with 64 GB. What models and software would you choose? I tried LM Studio with Qwen3-Coder 30B and Qwen3-Next 80B, plus [Continue.dev](http://Continue.dev), opencode, and Kilo Code. My work just slowed down. Maybe I'm using the wrong tool/model? Even code completion works badly; it just tries to add comments. That's it.
2025-12-26T11:28:12
https://www.reddit.com/r/LocalLLaMA/comments/1pw2648/best_llm_setup_for_development/
IsSeMi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw2648
false
null
t3_1pw2648
/r/LocalLLaMA/comments/1pw2648/best_llm_setup_for_development/
false
false
self
2
{'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]}
Learning Java by Building an AI Agent Library. Would Love Your Feedback
2
I've been diving into Java as a learning project, and coming from a Python background (where I've used libraries like PydanticAI and LangGraph), I wanted to challenge myself with something ambitious: building an agentic AI library in Java.

After playing with **LangChain4J** and **Spring AI**, I found the learning curve steeper than expected, especially around patterns like agent instantiation, structured outputs, and streaming. The concepts felt quite different from their Python counterparts, and I kept thinking: "I wish there was something more intuitive for someone like me."

So instead of just complaining, I built something. It was as much a learning exercise as anything else.

## The Pain Points I Tried to Solve

Here's what tripped me up when learning Java AI frameworks:

**1. Agent orchestration felt complex**

I wanted something closer to what I'd seen in Python: easy handoffs between agents, routing, running things in parallel.

**2. Human-in-the-loop per tool (not globally)**

Most examples I found were "approve everything or nothing." I wanted granular control: `delete_records` should pause for approval, but `get_weather` should just run.

**3. Streaming + structured output together**

I wanted to update a UI in real-time while JSON was being generated, not wait for the complete response.

## What I Ended Up Building

I'm calling it **Agentle4j** (yes, the name needs work 😅). Here's what it looks like:

### Multi-Agent Routing

```java
RouterAgent router = RouterAgent.builder()
    .addRoute(billingAgent, "invoices, payments")
    .addRoute(techSupport, "bugs, errors")
    .fallback(generalAgent)
    .build();

// Or run agents in parallel
ParallelAgents team = ParallelAgents.of(researcher, analyst);
AgentResult combined = team.runAndSynthesize("Market trends?", writer);
```

### Guardrails & Handoffs

```java
Agent agent = Agent.builder()
    .addInputGuardrail((input, ctx) -> input.contains("password")
        ? GuardrailResult.reject("Blocked")
        : GuardrailResult.pass())
    .addHandoff(Handoff.to(billingAgent, "billing issues"))
    .build();
```

### Per-Tool Human Approval

```java
@FunctionMetadata(name = "delete_records", requiresConfirmation = true)
public class DeleteTool extends FunctionTool<...> { }

// Only dangerous tools pause for approval
agent.interactStream("Delete old users and check weather")
    .onToolCallPending((tool, approve) ->
        approve.accept(askUser("Execute " + tool.name() + "?")))
    .start();
// delete_records → waits | get_weather → auto-executes
```

### Structured Outputs (Type-Safe)

```java
record Person(String name, int age, List<String> skills) {}

Person person = responder.respond(payload)
    .withStructuredOutput(Person.class)
    .join()
    .parsed();
```

### Streaming with Partial JSON Parsing

```java
responder.respond(payload)
    .onTextDelta(System.out::print)
    .onToolCall((name, args) -> System.out.println("🔧 " + name))
    .onPartialJson(fields -> updateUI(fields.get("title"))) // Update UI before response completes!
    .start();
```

### Cross-Conversation Memory

```java
Agent agent = Agent.builder()
    .addMemoryTools(InMemoryMemory.create()) // or Redis, JDBC
    .build();

agent.interact("My favorite color is blue", context);
// Later: "What's my favorite color?" → "blue"
```

## Other Things It Includes

- **300+ models** via OpenRouter (GPT-4o, Claude, Gemini, Llama, etc.)
- **Vision support** for image analysis
- **Built-in OpenTelemetry** for observability
- **Async-first** with `CompletableFuture` and virtual threads

## Honest Limitations

- **No built-in RAG/vector stores**: you'd need to use tools to integrate your own
- **Still new**: definitely not as battle-tested as LangChain4J or Spring AI
- **No MCP support yet**: actively working on this
- **Requires Java 21+**

## Questions for the Community

For folks using **LangChain4J** or **Spring AI**, I'm genuinely curious:

1. What keeps you on your current framework?
2. What pain points do you still encounter?
3. What's the one feature you wish existed?

I'm not trying to replace anything here. I built this to learn and to scratch my own itch. But if anyone's interested in poking around or has feedback, I'd really appreciate it.

🔗 **Docs**: [https://paragon-intelligence.github.io/agentle4j/](https://paragon-intelligence.github.io/agentle4j/)
🔗 **GitHub**: [https://github.com/paragon-intelligence/agentle4j](https://github.com/paragon-intelligence/agentle4j)

Thanks for reading! 🙏
2025-12-26T11:23:43
https://www.reddit.com/r/LocalLLaMA/comments/1pw23mi/learning_java_by_building_an_ai_agent_library/
Time-Plum-7893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw23mi
false
null
t3_1pw23mi
/r/LocalLLaMA/comments/1pw23mi/learning_java_by_building_an_ai_agent_library/
false
false
self
2
null
Added Google's Gemini Conductor to my local LLM CLI fork - context-driven development that works offline
5
https://preview.redd.it/…=true to try it.
2025-12-26T11:16:11
https://www.reddit.com/r/LocalLLaMA/comments/1pw1zfn/added_googles_gemini_conductor_to_my_local_llm/
Honest-Fun-5279
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1zfn
false
null
t3_1pw1zfn
/r/LocalLLaMA/comments/1pw1zfn/added_googles_gemini_conductor_to_my_local_llm/
false
false
https://b.thumbs.redditm…87W2PAJvLwHU.jpg
5
null
Custom Audio + Custom Image to video for 16gb VRAM solution please.
1
Hello, I searched "audio + image video 16gb vram local", then "custom audio + custom image to video local", and ten other keyword variations in Reddit search and on Google before posting this, and I can't find any results related to what I want exactly. I need this simple thing: I upload my custom audio + a custom image, and I get a video generated from that on my 16 GB of VRAM. Is that even possible, and what is the workflow? Thank you.
2025-12-26T11:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1pw1t2l/custom_audio_custom_image_to_video_for_16gb_vram/
anonthatisopen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1t2l
false
null
t3_1pw1t2l
/r/LocalLLaMA/comments/1pw1t2l/custom_audio_custom_image_to_video_for_16gb_vram/
false
false
self
1
null
Building the AI primitives the Python community is missing. (Not another Framework)
0
[Celeste AI](https://github.com/withceleste/celeste-python) is a unified, standard Python SDK for multi-modal AI.

Most "unified" libraries focus on LLMs (text). Celeste is capability agnostic. It provides the same strict, type-safe primitives for Text, Image, Audio, Video, Embeddings and more. It solves the fragmentation of vendor SDKs by standardizing the I/O layer. It is not a framework: no agents, no chains, no hidden prompts.

Why I built this:

* Capability agnostic: Generative AI isn't just text anymore. I needed a single pattern that works for generating JSON with Claude, images with Flux, or speech with ElevenLabs.
* Ultra lightweight, no dependencies: I didn't wrap vendor Python SDKs. I used pure httpx and pydantic. This avoids the version conflicts that come with installing 5 different vendor libraries.
* Native vs proxy: Unlike LiteLLM (which is a great proxy), this is a native Python library. You get full IDE autocomplete and type safety right in your code.

Usage (same pattern, any capability):

```python
from celeste import create_client, Capability

# 1. Text Generation (Structured Output)
text_client = create_client(Capability.TEXT_GENERATION, model="claude-opus-4-5")
analysis = await text_client.generate(
    prompt="Analyze this...",
    output_schema=MySchema,
)

# 2. Image Generation (Same Primitive)
img_client = create_client(Capability.IMAGE_GENERATION, model="flux-2-pro")
image = await img_client.generate(
    prompt="Cyberpunk city",
    aspect_ratio="16:9",
)

# 3. Speech Generation (Same Primitive)
voice_client = create_client(Capability.SPEECH_GENERATION, model="eleven_v3")
audio = await voice_client.generate(
    prompt="Hello world",
    voice="rachel",
)
```

Repo: [https://github.com/withceleste/celeste-python](https://github.com/withceleste/celeste-python)
Docs: [https://docs.withceleste.ai](https://docs.withceleste.ai)

Looking for contributors (the code is actually clean): I've spent a lot of time ensuring the codebase is modular and easy to read. I built Celeste because I want this to be a community-driven standard, not a solo project.

How you can help:

* Add a provider: If you use a provider I'm missing (like AWS Bedrock or Vertex AI), the Provider implementation is straightforward.
* Map parameters: We have 126 combos, but some niche parameters for Video/Audio are still unmapped.
* Refine types: If you're a Pydantic wizard, I'd love a second pair of eyes on our response schemas.

I'm happy to walk anyone through the architecture if they want to contribute. Let's build the standard AI library together.

Adding this AI-generated poster that I find quite clear, even if not perfect :)

https://preview.redd.it/2ead5h2y3j9g1.png?width=2752&format=png&auto=webp&s=cc9d5a434278cd5df414913e63c4980b787a49f5
2025-12-26T11:00:24
https://www.reddit.com/r/LocalLLaMA/comments/1pw1qf2/building_the_ai_primitives_the_python_community/
Familiar_Print_4882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1qf2
false
null
t3_1pw1qf2
/r/LocalLLaMA/comments/1pw1qf2/building_the_ai_primitives_the_python_community/
false
false
https://a.thumbs.redditm…qBwwwMMXnPr0.jpg
0
null
local models are completely failing me on browser architecture
0
so i'm usually a big advocate for local models and doing everything in-house, but i had a project recently that required a really tight integration with the browser. i needed a custom extension to help me scrape and format data from some very specific research portals i use. i tried to prompt a few local models to help me write the code, but they kept getting the manifest v3 requirements mixed up with the older versions. i ended up wasting a ton of time just trying to get a simple "hello world" popup to even show up in chrome without throwing a security warning. i found extendr (dot) dev a while back and signed up for the beta, and i finally got my invite last month. it's a dedicated vibe coding tool specifically for building these kinds of extensions. i was a bit skeptical at first, but i just typed out my requirements in english and it managed to structure the whole project for me. it even handles the export as a zip so you don't have to worry about the file structure. it's still early, so it doesn't have web browsing capabilities to look up new apis or anything, but for a 1.0 tool it's surprisingly solid. i've been using it to build a few different automation tools for my browser workflow over the last few weeks and it's been a massive productivity boost. if you're struggling with browser dev, it might be worth jumping on the waitlist.
2025-12-26T10:56:34
https://www.reddit.com/r/LocalLLaMA/comments/1pw1occ/local_models_are_completely_failing_me_on_browser/
Emotional_Most3153
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1occ
false
null
t3_1pw1occ
/r/LocalLLaMA/comments/1pw1occ/local_models_are_completely_failing_me_on_browser/
false
false
self
0
null
sSanityLayer Demo with screenshots
0
This is a follow up to my previous post on sSanityLayer. Check [A sanity layer that can make SLMs useful (sSanityLayer) : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1put96m/a_sanity_layer_that_can_make_slms_useful/) These are demo screenshots for the two architectures. Screenshot 1-3: The vector bias and vector intrusion logic with the GPT2 model. Screenshot 4-6: The 77KB potato model. Try it yourself: [https://github.com/kavyamali/sSanityLayer](https://github.com/kavyamali/sSanityLayer)
2025-12-26T10:48:48
https://www.reddit.com/gallery/1pw1k0q
ValuableLucky8566
reddit.com
1970-01-01T00:00:00
0
{}
1pw1k0q
false
null
t3_1pw1k0q
/r/LocalLLaMA/comments/1pw1k0q/ssanitylayer_demo_with_screenshots/
false
false
nsfw
0
null
Solving CTF Challenges with gpt-oss 20b
2
Hey guys, I recently wrote an article on how I hooked up Claude Code with gpt-oss 20b and used it to solve a cybersecurity challenge (a CTF challenge). Hope it's helpful! Check it out at: [https://yousofnahya.medium.com/solving-ctf-challenges-with-local-ai-models-3d7f7b8dcad0](https://yousofnahya.medium.com/solving-ctf-challenges-with-local-ai-models-3d7f7b8dcad0)
2025-12-26T10:42:21
https://www.reddit.com/r/LocalLLaMA/comments/1pw1gd4/solving_ctf_challenges_with_gptoss_20b/
whoami-233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1gd4
false
null
t3_1pw1gd4
/r/LocalLLaMA/comments/1pw1gd4/solving_ctf_challenges_with_gptoss_20b/
false
false
self
2
null
Training/tuning on textbook data
4
Excuse my ignorance, I'm very new to the topic. I want to train up / tune one of these new open-source models to help troubleshoot diesel / heavy-duty diagnostics. I've played around with ChatGPT, Claude, and DeepSeek, and with good prompting I can get some pretty good answers (I'm looking for spoon-fed next steps when diagnosing issues), but the AI is often missing specific information and makes a lot of guesses, or doesn't seem to know how a system works or what the best next step should be. I'm wondering how I could improve a self-hosted/cloud-hosted model. I have some PDF textbooks and also some service/troubleshooting manuals; these textbooks ("Fundamentals of Diesel Engines", etc.) are very comprehensive, and if you knew them cover to cover you would pretty much always know the next troubleshooting step. I am wondering what impact training or tuning on these as data would have. Also wondering what the best current model might be?
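If you go the fine-tuning route, much of the work is turning those PDFs into training records. Here is a minimal sketch of the chunking step, assuming the PDF text has already been extracted (e.g. with a PDF library such as pypdf); the chunk sizes, overlap, and JSONL shape are illustrative choices, not a standard:

```python
import json

def chunk_text(text, max_words=200, overlap=40):
    """Split manual text into overlapping chunks so no procedure is cut mid-step."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

def to_jsonl(chunks, source):
    """One training record per chunk, tagged with its source manual."""
    return "\n".join(json.dumps({"source": source, "text": c}) for c in chunks)

# Stand-in for text extracted from a PDF manual:
manual = "Step 1: check fuel pressure at the rail. " * 100
records = to_jsonl(chunk_text(manual), source="fundamentals_of_diesel_engines")
print(records.splitlines()[0][:80])
```

Note that plain fine-tuning on raw chunks mainly teaches style and vocabulary; for "what's the next step" lookups, the same chunks fed into a RAG index often give better factual grounding than tuning alone.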
2025-12-26T10:37:50
https://www.reddit.com/r/LocalLLaMA/comments/1pw1dt0/trainingtuning_on_textbook_data/
Infamous_Patience129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw1dt0
false
null
t3_1pw1dt0
/r/LocalLLaMA/comments/1pw1dt0/trainingtuning_on_textbook_data/
false
false
self
4
null
Claude connected to game memory using MCP <> CheatEngine
1
Decided to make a little experiment to see what would happen if I connected an AI agent to cheat engine tools, and this thing debugged the entire packet decryption hook in a few minutes, insane. If it's possible to do this with a little game and CE, I wonder what the chinese are doing right now to reverse engineer critical infra and software... This MCP bridge can be used for example to create mods, tweaks or security audits of almost any program or game, as long as CE gets access to clean memory (via DBVM).
2025-12-26T10:27:43
https://v.redd.it/kbe0xw29zi9g1
helloitsj0nny
/r/LocalLLaMA/comments/1pw181d/claude_connected_to_game_memory_using_mcp/
1970-01-01T00:00:00
0
{}
1pw181d
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kbe0xw29zi9g1/DASHPlaylist.mpd?a=1769466471%2CZWI3Njg3OTU0OTg4MWZkZDA2ODUxYmM1MmE4YzNkOTczZGI4OGIxYTM3OTdmNzBjMGI4OTY1NjY4NzhkMmNiNQ%3D%3D&v=1&f=sd', 'duration': 160, 'fallback_url': 'https://v.redd.it/kbe0xw29zi9g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/kbe0xw29zi9g1/HLSPlaylist.m3u8?a=1769466471%2CMmZiYjgyYzU1YjAxMzg3OGVmOTQwZTRkODBiODBiYzQ2MzY5ZDFiOTFhZGVhNDYwY2E5YWIzODdhYzEzMGRhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kbe0xw29zi9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1pw181d
/r/LocalLLaMA/comments/1pw181d/claude_connected_to_game_memory_using_mcp/
false
false
https://external-preview…c73f1ef35a05ff85
1
{'enabled': False, 'images': [{'id': 'azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=108&crop=smart&format=pjpg&auto=webp&s=53d5da50c17be7bb11ec4d14e7d7c271f524fb29', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=216&crop=smart&format=pjpg&auto=webp&s=0260d1945cef3cac471cf41f408705b9b8ad3a96', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=320&crop=smart&format=pjpg&auto=webp&s=5b67fbc174d5ccb4aa287b40fd3a90b9b67f5c45', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=640&crop=smart&format=pjpg&auto=webp&s=1454845a2b7bed616c68f1ee6c6d351321ac8574', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=960&crop=smart&format=pjpg&auto=webp&s=73e7aeffa2b7ca472dacd18280770daa4b666da1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=94bc3fd26d87f5cc3d429c961c237dee276b7c50', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/azRmZjdkMzl6aTlnMbgyF04P39JFGJDEPOI1XhoCx3e6Z5JMrBkPuitAV1Uz.png?format=pjpg&auto=webp&s=d80b024aee59fa78077c4427523deedeee72a370', 'width': 1920}, 'variants': {}}]}
GLM 4.7 for Agentic
11
GLM 4.7 is the new hot potato Has anyone tested it for agentic use yet? Even just tool calling and MCP use? I noticed it beat Deepseek 3.2 and Kimi K2 Thinking on the agentic benches
2025-12-26T09:49:32
https://www.reddit.com/r/LocalLLaMA/comments/1pw0mtz/glm_47_for_agentic/
SlowFail2433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw0mtz
false
null
t3_1pw0mtz
/r/LocalLLaMA/comments/1pw0mtz/glm_47_for_agentic/
false
false
self
11
null
I built a "Recursive Swarm" topology to solve ARC-AGI puzzles. It prunes 98% of dead-end logic branches before they hit the context window.
0
2025-12-26T09:30:48
https://v.redd.it/gn358q2yni9g1
NoDimension8116
v.redd.it
1970-01-01T00:00:00
0
{}
1pw0d1f
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gn358q2yni9g1/DASHPlaylist.mpd?a=1769333462%2CMmI2MmJmZWM4MzA1YTA1ZjY4MzE5ODczZjAxNzBhMDJlNTNmNjY5NmI2Zjg3YTI1MDM1Mjc2ZDg3MTNkNzBhMg%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/gn358q2yni9g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gn358q2yni9g1/HLSPlaylist.m3u8?a=1769333462%2CZjZlZTI3MGViNTI1NDc3YTRiYWQ3YjkzNTI0MDcxNDg1MTkzMjA1YzZkM2VlNGIzOWEwNGQyYzRjMDNkNTk4Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gn358q2yni9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1pw0d1f
/r/LocalLLaMA/comments/1pw0d1f/i_built_a_recursive_swarm_topology_to_solve/
false
false
https://external-preview…544283d112caff31
0
{'enabled': False, 'images': [{'id': 'MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=f602ac020f017d5fca76665f62a3f8c50f573c3d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=01327152dd2bbb0248e73c9d1f110853c1d4b0a6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=3b71cf9c670bc14cb0a646e05ffa7e6116f8feaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=f33ae7afa9687bfce4f07c2cdd0d58a97568892c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=7cf97ab3124a7719c435b390a279944136ca95f5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=40f75d3a17f42f63168bf511ea8d584f4ca4eb96', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnVuZmRyM3luaTlnMZRxUO_UfEMRzIP3cxQ10WuQlwvjJi3easP2N4HcNwpJ.png?format=pjpg&auto=webp&s=998f888073c0f19fdb953f6e21006d740bd1b818', 'width': 1920}, 'variants': {}}]}
LocalAI for a pc game
2
Hello, I'm a junior game developer, and in Friendslop games, mimics or enemy behaviors fundamentally change the game. I'm thinking of running a local AI within the game for enemy behaviors and for mimics to mimic sounds and dialogue. But if I do this, what will the minimum system requirements be? 5090? Would a local AI that can use at least 2GB VRAM be too dumb?
2025-12-26T09:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1pw0ac9/localai_for_a_pc_game/
MeuHorizon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pw0ac9
false
null
t3_1pw0ac9
/r/LocalLLaMA/comments/1pw0ac9/localai_for_a_pc_game/
false
false
self
2
null
Mistral's reasoning models won't use reasoning if i put anything into system prompt.
4
So, I wanted to give Magistral 24B a try, and I ran into a problem. I'm using LM Studio, and for some reason, using any kind of system prompt prevents the model from "thinking". It doesn't seem to matter what the prompt is; even if I explicitly ask the model to use its reasoning capabilities, it just won't do it. It doesn't only happen with this particular model: I also have the reasoning 14B Ministral, and it behaves the same way. Does anyone know what the issue is, and whether there's some way to make these models think while using a system prompt?
2025-12-26T09:01:15
https://i.redd.it/5vlfepyzii9g1.png
iz-Moff
i.redd.it
1970-01-01T00:00:00
0
{}
1pvzxas
false
null
t3_1pvzxas
/r/LocalLLaMA/comments/1pvzxas/mistrals_reasoning_models_wont_use_reasoning_if_i/
false
false
default
4
{'enabled': True, 'images': [{'id': '5vlfepyzii9g1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=108&crop=smart&auto=webp&s=79bae0843d7cc1dd542eab244df52aca5368981a', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=216&crop=smart&auto=webp&s=3593684f355f0816153f58d86e74751b7a6a1af5', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=320&crop=smart&auto=webp&s=ddf60148e2b21223a02bca3f3e34373b39e62bd9', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=640&crop=smart&auto=webp&s=69e6c435cade9fe20d214abc5f3bd458312e77e2', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=960&crop=smart&auto=webp&s=a4b41db0cf56489b4cec34753cabb1049a5ea9db', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?width=1080&crop=smart&auto=webp&s=ac1ec26ba499a173846140b14110b0a6db8093b9', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/5vlfepyzii9g1.png?auto=webp&s=ecbe709d681841dbaba6831d78833c09728df30c', 'width': 1600}, 'variants': {}}]}
What to run Local AI
2
Low-end PC: 16 GB RAM, AMD Ryzen CPU, no dedicated GPU. I have around 110 GB of space available on my D drive, and the paging file has been moved to my S drive. Which LLM model, with what tools, is good? I want it to assist me with Fortran-level VUEL/UEL coding, memory mapping, and data transfer. I have many programs installed on my C drive, and Python is fixed to version 3.8 for them to function properly. I want one model and will add tools according to requirements, maybe pdfplumber, RAG, and others.
2025-12-26T08:58:19
https://www.reddit.com/r/LocalLLaMA/comments/1pvzvn1/what_to_run_local_ai/
199_me99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvzvn1
false
null
t3_1pvzvn1
/r/LocalLLaMA/comments/1pvzvn1/what_to_run_local_ai/
false
false
self
2
null
What hack is this?. Can anyone gimme the files for free?
0
Or teach me how to get the files for rendering after generating the TSX code. Also, do tell me the use cases for this.
2025-12-26T08:30:12
https://youtu.be/ogn0FiDnAM4?si=hGWPoxKruDqVpHaa
ReceptionAcrobatic42
youtu.be
1970-01-01T00:00:00
0
{}
1pvzgnl
false
null
t3_1pvzgnl
/r/LocalLLaMA/comments/1pvzgnl/what_hack_is_this_can_anyone_gimme_the_files_for/
false
false
default
0
null
Minimax M2.1 released
174
Link to xcancel: https://xcancel.com/ModelScope2022/status/2004462984698253701#m

New on ModelScope: MiniMax M2.1 is open-source!

✅ SOTA in 8+ languages (Rust, Go, Java, C++, TS, Kotlin, Obj-C, JS)
✅ Full-stack Web & mobile dev: Android/iOS, 3D visuals, vibe coding that actually ships
✅ Smarter, faster, 30% fewer tokens, with lightning mode (M2.1-lightning) for high-TPS workflows
✅ Top-tier on SWE-bench, VIBE, and custom coding/review benchmarks
✅ Works flawlessly in Cursor, Cline, Droid, BlackBox, and more

It's not just "better code": it's AI-native development, end to end.

https://modelscope.cn/models/MiniMax/MiniMax-M2.1/summary
2025-12-26T08:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1pvz7v2/minimax_m21_released/
__Maximum__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvz7v2
false
null
t3_1pvz7v2
/r/LocalLLaMA/comments/1pvz7v2/minimax_m21_released/
false
false
self
174
null
MiniMax-M2.1 uploaded on HF
127
https://huggingface.co/MiniMaxAI/MiniMax-M2.1/tree/main
2025-12-26T08:12:23
https://www.reddit.com/r/LocalLLaMA/comments/1pvz7bf/minimaxm21_uploaded_on_hf/
ciprianveg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvz7bf
false
null
t3_1pvz7bf
/r/LocalLLaMA/comments/1pvz7bf/minimaxm21_uploaded_on_hf/
false
false
self
127
null
From Single GPU to Clusters: A Practical Journey into Distributed Training with PyTorch and Ray
1
[removed]
2025-12-26T07:31:03
https://www.reddit.com/r/LocalLLaMA/comments/1pvykjq/from_single_gpu_to_clusters_a_practical_journey/
Brilliant_Witness_34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvykjq
false
null
t3_1pvykjq
/r/LocalLLaMA/comments/1pvykjq/from_single_gpu_to_clusters_a_practical_journey/
false
false
self
1
null
my ego is actually cooked after playing this 💀
0
ok so i stumbled on [Hack AI - CTF Platform](https://www.hackai.lol/) and i’ve been trying to gaslight this ai for like an hour straight to get the secret key. honestly thought it would be easy but the higher levels legit destroyed me, the guardrails are crazy. if anyone here thinks they’re a prompt god pls try level 5 and tell me how you did it bc i’m struggling fr. lowkey the most fun i’ve had looking at a terminal in a while.
2025-12-26T07:09:46
https://www.reddit.com/r/LocalLLaMA/comments/1pvy8ni/my_ego_is_actually_cooked_after_playing_this/
Hot-Software-9052
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvy8ni
false
null
t3_1pvy8ni
/r/LocalLLaMA/comments/1pvy8ni/my_ego_is_actually_cooked_after_playing_this/
false
false
self
0
null
Poor Inference Speed on GLM 4.5 Air with 24gb VRAM and 64gb DDR5
8
Any help is highly appreciated in case I've missed something obvious as I troubleshoot this on my end.

# TL;DR

I'm looking for help from the community in case I've missed something crucial between Linux distros to match the token generation speed I previously had on my system. Token generation speed has been cut by more than half, but I appear to be fully saturating my GPU's VRAM and not using my iGPU for model inference. I am really open to any suggestions, and would also really appreciate an explanation of why llama.cpp doesn't show the RAM allocated to models during hybrid inference. I need to monitor my RAM allocation in general for VMs and other workloads.

---

# Context

I was recently running Fedora as my daily driver but moved over to openSUSE Tumbleweed, for many reasons, one big one being that its llama.cpp package is only about 2-8 weeks behind upstream. So, for that disclaimer: I am using openSUSE Tumbleweed's packaged llama.cpp, not a build from the GitHub source as I did when running Fedora. I understand this may be a reason for my issues and will likely compare against a source build, but I want to see if the community can provide any insight on what I may be missing.

## System Specs

- CPU: 7900X
- GPU: 7900 XTX
- RAM: G.Skill Z5 (2 × 32 GB)

---

# Results by Operating System for **GLM 4.5 Air Unsloth IQ4_XS** with llama.cpp (Vulkan)

**Fedora 42**

* 11 tokens/s generation
* Realised I didn't have EXPO enabled on my RAM
* So I likely had to offload more layers to the CPU (similar to the `--n-cpu-moe 31` flag shown below in the openSUSE section)

**openSUSE Tumbleweed**

* 5 tokens/s generation
* I enabled EXPO here and even overclocked the RAM to 6200 MT/s at 32-38-38-78, so I was really expecting better performance
* The command I ran:

`llama-cli -m /opt/models/GLM/GLM-4.5/GLM-4.5-Air-IQ4_XS-00001-of-00002.gguf -p "This is a test prompt" -ngl 99 --n-cpu-moe 31`

* This is only at 4k context, compared to Fedora's 32k above.

## Troubleshooting ideas and notes

- I wonder if my KV cache isn't in VRAM, and I need to set it?
- Build llama.cpp from source with ROCm and Vulkan and compare the inference speeds I get.
- Download LM Studio and compare those speeds too.
- ✅ Tested a different quant (same issue at Q4_K_S).
- ✅ Checked that I get expected inference speeds on models that fit entirely in VRAM. I got about 178 tokens per second with `llama-cli -m /opt/models/Qwen/Qwen3-30b/Qwen3-30B-A3B-Instruct-2507-Q5_K_S.gguf -p "This is a test prompt" -ngl 99`, so I have a feeling it's an issue with the parameters I'm setting around hybrid inference.
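On the KV-cache idea above: llama.cpp keeps the KV cache on the GPU by default when layers are offloaded, and recent builds can also quantize it. A command sketch (flag names from llama.cpp's `--help`; availability depends on the packaged version, and the cache-type values are only a suggestion to try):

```shell
# Same model and offload split as above, with flash attention enabled and the
# KV cache quantized to q8_0 so it stays resident in VRAM at 4k context.
# Note: a quantized KV cache requires flash attention (-fa) in llama.cpp.
llama-cli \
  -m /opt/models/GLM/GLM-4.5/GLM-4.5-Air-IQ4_XS-00001-of-00002.gguf \
  -p "This is a test prompt" \
  -ngl 99 --n-cpu-moe 31 \
  -c 4096 -fa \
  -ctk q8_0 -ctv q8_0
```

It is also worth confirming `--no-kv-offload` is not set anywhere, since that flag forces the cache into system RAM.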
2025-12-26T06:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1pvy0x7/poor_inference_speed_on_glm_45_air_with_24gb_vram/
ROS_SDN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvy0x7
false
null
t3_1pvy0x7
/r/LocalLLaMA/comments/1pvy0x7/poor_inference_speed_on_glm_45_air_with_24gb_vram/
false
false
self
8
null
Hard lesson learned after a year of running large models locally
313
Hi all, go easy on me, I'm new at running large models.

After spending about 12 months tinkering with locally hosted LLMs, I thought I had my setup dialed in. I'm running everything off a workstation with a single RTX 3090, Ubuntu 22.04, llama.cpp for smaller models and vLLM for anything above 30B parameters. My goal has always been to avoid cloud dependencies and keep as much computation offline as possible, so I've tried every quantization trick and caching tweak I could find.

The biggest friction point has been scaling beyond 13B models. Even with 24 GB of VRAM, running a 70B model in int4 still exhausts memory when the context window grows and attention weights balloon. Offloading to system RAM works, but inference latency spikes into seconds, and batching requests becomes impossible. I've also noticed that GPU VRAM fragmentation accumulates over time when swapping between models: after a few hours, vLLM refuses to load a model that would normally fit because of leftover allocations.

My takeaway so far is that local-first inference is viable for small to medium models, but there's a hard ceiling unless you invest in server-grade hardware or cluster multiple GPUs. Quantization helps, but you trade some quality and run into new bugs. For privacy-sensitive tasks, the trade-off is worth it; for fast iteration, it's been painful compared to cloud-based runners.

I'm curious if anyone has found a reliable way to manage VRAM fragmentation or offload attention blocks more efficiently on consumer cards, or whether the answer is simply "buy more VRAM." How are others solving this without compromising on running fully offline? Thx
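To put numbers on that ceiling, here is a quick weights-only sketch (KV cache and activations come on top, so real usage is strictly higher than this):

```python
# Rough weight-memory estimate for a dense model at a given precision.
# Ignores activations and KV cache, which grow with context length.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params_billion: float, precision: str) -> float:
    """Gigabytes needed just to hold the weights."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] / 1e9

# A 70B model in int4 still needs ~35 GB for weights alone,
# already past a 24 GB RTX 3090 before any KV cache.
print(weight_gb(70, "int4"))   # 35.0
print(weight_gb(13, "fp16"))   # 26.0
```

Even a 13B model in fp16 overshoots 24 GB, which matches the "hard ceiling beyond 13B" experience unless you quantize or offload.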
2025-12-26T06:38:00
https://www.reddit.com/r/LocalLLaMA/comments/1pvxq2t/hard_lesson_learned_after_a_year_of_running_large/
inboundmage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvxq2t
false
null
t3_1pvxq2t
/r/LocalLLaMA/comments/1pvxq2t/hard_lesson_learned_after_a_year_of_running_large/
false
false
self
313
null
Finally a Kimi-Linear-48B-A3B GGUF! [Experimental PR]
89
Hey everyone,

Yes, it's finally happening! I recently pushed some changes and have gotten Kimi-Linear to work (fully, fingers crossed) in PR #18381. I've tested it heavily at Q2_K (mind-BLOWING coherence :), and it's now passing logic puzzles, long-context essay generation, and basic math, all of which were previously broken.

[q2_k](https://preview.redd.it/mjychgkcth9g1.png?width=555&format=png&auto=webp&s=f02c3fda1ea59629b4aac6664cc7c4a071f7ebd1)

Resources:

PR Branch: [github.com/ggml-org/llama.cpp/pull/18381](http://github.com/ggml-org/llama.cpp/pull/18381)

GGUFs (use with the PR above): [huggingface.co/AaryanK/Kimi-Linear-48B-A3B-Instruct-GGUF](https://huggingface.co/AaryanK/Kimi-Linear-48B-A3B-Instruct-GGUF)

Use this free Colab notebook, or copy the code from it, for a quick start :) [https://colab.research.google.com/drive/1NMHMmmht-jxyfZqJr5xMlOE3O2O4-WDq?usp=sharing](https://colab.research.google.com/drive/1NMHMmmht-jxyfZqJr5xMlOE3O2O4-WDq?usp=sharing)

Please give it a spin and let me know if you run into any divergent logits or loops!

I am currently looking for open positions! 🤗 If you find this model useful or are looking for a talented AI/LLM Engineer, please reach out to me on LinkedIn: [Aaryan Kapoor](https://www.linkedin.com/in/theaaryankapoor/)
2025-12-26T06:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1pvxmqt/finally_a_kimilinear48ba3b_gguf_experimental_pr/
KvAk_AKPlaysYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvxmqt
false
null
t3_1pvxmqt
/r/LocalLLaMA/comments/1pvxmqt/finally_a_kimilinear48ba3b_gguf_experimental_pr/
false
false
https://b.thumbs.redditm…2SLzQbPE94bI.jpg
89
null
Replacing Tool Call History with State: A New Way to Build Context for Intelligent Agents
1
[removed]
2025-12-26T06:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1pvxg88/replacing_tool_call_history_with_state_a_new_way/
ConsineTech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvxg88
false
null
t3_1pvxg88
/r/LocalLLaMA/comments/1pvxg88/replacing_tool_call_history_with_state_a_new_way/
false
false
https://a.thumbs.redditm…c07ng9etmzx0.jpg
1
null
TurboDiffusion — 100–200× faster video diffusion on a single GPU
42
Open framework that speeds up end-to-end video generation by 100–200× while keeping quality, shown on a single RTX 5090.

• How: low-bit SageAttention + trainable Sparse-Linear Attention, rCM step distillation, and W8A8 quantization.
• Repo: https://github.com/thu-ml/TurboDiffusion
2025-12-26T06:04:06
https://i.redd.it/b5vrplmioh9g1.jpeg
freesysck
i.redd.it
1970-01-01T00:00:00
0
{}
1pvx6a0
false
null
t3_1pvx6a0
/r/LocalLLaMA/comments/1pvx6a0/turbodiffusion_100200_faster_video_diffusion_on_a/
false
false
default
42
{'enabled': True, 'images': [{'id': 'b5vrplmioh9g1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=108&crop=smart&auto=webp&s=b097cee900d62f928bc1976425ea69825d67c3f6', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=216&crop=smart&auto=webp&s=f34b9eda020c471fc719f7bb869533c9d616700c', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=320&crop=smart&auto=webp&s=233038416174fe9177dbdc2cf7307312a4c0e79c', 'width': 320}, {'height': 256, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=640&crop=smart&auto=webp&s=82a5c94a6996fffe772bd0c3adc81b86a0ee00d3', 'width': 640}, {'height': 384, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=960&crop=smart&auto=webp&s=6a48a1aa2b248ea57ab725112175733bbee552ee', 'width': 960}, {'height': 432, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?width=1080&crop=smart&auto=webp&s=bddb717d922311ad25b14c14b22d1d0aade31ac8', 'width': 1080}], 'source': {'height': 512, 'url': 'https://preview.redd.it/b5vrplmioh9g1.jpeg?auto=webp&s=6e6433df1c78032798e6f3726a9b5c5dd3b854ee', 'width': 1280}, 'variants': {}}]}
systemctl disable ollama
219
A 151 GB Timeshift snapshot composed mainly of Flatpak repo data (Alpaca?) and /usr/share/ollama.

From now on I'm storing models in my home directory.
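For anyone doing the same: Ollama reads its model directory from the `OLLAMA_MODELS` environment variable, so a systemd drop-in sketch like the one below should work (the path is an example, and the ollama service user needs read/write access to whatever you point it at):

```ini
# Created via: sudo systemctl edit ollama
# (then: sudo systemctl daemon-reload && sudo systemctl restart ollama)
[Service]
Environment="OLLAMA_MODELS=/home/youruser/ollama-models"
```

Timeshift's default rsync profile excludes home directories from snapshots, which is presumably why moving the models there fixes the snapshot bloat.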
2025-12-26T05:30:55
https://i.redd.it/8qvw6jdjih9g1.png
copenhagen_bram
i.redd.it
1970-01-01T00:00:00
0
{}
1pvwlfh
false
null
t3_1pvwlfh
/r/LocalLLaMA/comments/1pvwlfh/systemctl_disable_ollama/
false
false
default
219
{'enabled': True, 'images': [{'id': '8qvw6jdjih9g1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/8qvw6jdjih9g1.png?width=108&crop=smart&auto=webp&s=e86c684f7e53612dd0d0d016ba7ead85e702fb33', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/8qvw6jdjih9g1.png?width=216&crop=smart&auto=webp&s=55814e7dd3bb9dc6827990e5edb12ab438c5a1e1', 'width': 216}, {'height': 316, 'url': 'https://preview.redd.it/8qvw6jdjih9g1.png?width=320&crop=smart&auto=webp&s=67cb047d78cc712448a65395f1aff5b8269410ca', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/8qvw6jdjih9g1.png?auto=webp&s=0755a673df678097ea3938a1cd1ee707ef537d16', 'width': 506}, 'variants': {}}]}
Day 18: 21 Days of Building a Small Language Model: Quantization
15
**Merry Christmas to all of you 🎄**

Today, I want to talk about one of my favorite topics, quantization, and why it's so important for running large language models on consumer-grade GPUs.

Welcome to Day 18 of 21 Days of Building a Small Language Model. The topic for today is quantization, one of the most practical techniques for deploying large language models. Yesterday we explored Mixture of Experts and how it enables massive scale. Today, we'll discover how quantization makes models 4x to 8x smaller while preserving most of their performance, and why it's essential for real-world deployment.

# Deployment Problem

Before we dive into quantization, let's understand the problem it solves. Modern language models are enormous. A 7 billion parameter model stored in full precision (FP32) requires approximately 28 GB of memory just for the weights. A 70 billion parameter model? That's 280 GB. Before considering activations, KV cache, optimizer states, or any runtime memory, we're already talking about memory requirements that exceed what most systems can handle.

This creates a fundamental barrier to deployment. Even data-center GPUs like the A100/H100 with 80 GB of VRAM cannot load many state-of-the-art models in full precision. The compute requirements make inference prohibitively slow or expensive, especially for real-time applications. The energy consumption makes them impractical for battery-powered devices or environmentally conscious deployments.

This is where quantization becomes essential. Quantization is the process of reducing the precision of model weights and activations from high-precision formats (like 32-bit or 16-bit floating point) to lower-precision formats (like 8-bit or even 4-bit integers). By representing weights with fewer bits, we dramatically reduce memory requirements and can often accelerate inference on hardware optimized for integer operations.
# Memory Problem

To appreciate why quantization is so impactful, we need to understand how weights are stored. In a transformer model, weights exist in every layer: in attention mechanisms (query, key, and value projection matrices), in feed-forward networks, in embedding layers, and in normalization layers. Each weight is a single floating point value that determines how strongly different parts of the input influence the output.

Let's break down the numbers for a typical 7 billion parameter model (hidden size 4096, 32 layers):

**Attention Projections (shared across all heads in a layer):**

* Q matrix: 4096 × 4096 = 16,777,216 parameters
* K matrix: 4096 × 4096 = 16,777,216 parameters
* V matrix: 4096 × 4096 = 16,777,216 parameters
* Output projection: 4096 × 4096 = 16,777,216 parameters
* **Attention per layer: 67,108,864 parameters**

**Per Transformer Layer:**

* Attention: ~67 million parameters
* Feed-forward layers: ~135 million parameters
* **Per layer: ~202 million parameters**

**Total Model (32 layers):**

* Transformer layers: 32 × ~202 million ≈ 6.5 billion parameters
* Embeddings and output head: ~260 million parameters
* **Total: ~7 billion parameters**

**Memory Requirements:**

* FP32 storage: 7 billion × 4 bytes = **28 GB**
* FP16 storage: 7 billion × 2 bytes = **14 GB**
* INT8 storage: 7 billion × 1 byte = **7 GB**
* INT4 storage: 7 billion × 0.5 bytes = **3.5 GB**

This is just for storing weights. Additional memory is needed for activations during inference, KV cache for efficient generation, optimizer states during training, and intermediate computations. For a 70 billion parameter model, the 280 GB requirement is far beyond what most systems can handle.

# How Quantization Works

Quantization is the process of mapping a large, continuous range of floating point values into a smaller set of discrete integer values. Think of it like dividing a continuous number line into "buckets" or "bins."
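As a sanity check, the parameter arithmetic in the Memory Problem section can be reproduced in a few lines, assuming LLaMA-7B-style dimensions (hidden 4096, FFN 11008, vocab 32000; these specific sizes are my assumption, not stated in the post):

```python
# Parameter count for a LLaMA-7B-style transformer.
HIDDEN, LAYERS, FFN, VOCAB = 4096, 32, 11008, 32000

attn_per_layer = 4 * HIDDEN * HIDDEN             # Q, K, V, output projections
ffn_per_layer = 3 * HIDDEN * FFN                 # gate, up, down projections
per_layer = attn_per_layer + ffn_per_layer
total = LAYERS * per_layer + 2 * VOCAB * HIDDEN  # plus embeddings and LM head

print(attn_per_layer)          # 67108864
print(round(total / 1e9, 2))   # 6.74 (billion parameters)
print(round(total * 4 / 1e9))  # 27 (GB of weights in FP32)
```

The result lands at roughly 6.7 billion parameters and ~27 GB in FP32, matching the "~7B parameters, ~28 GB" figures above.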
**Example: Quantizing weights from FP32 to 8-bit integers**

Let's say we have weights that range from -2.5 to +2.5:

1. **Define the range**: Min = -2.5, Max = +2.5, Range = 5.0
2. **Create discrete buckets**: 8-bit gives us 256 possible integer values (0 to 255). We map the continuous range [-2.5, +2.5] to integers [0, 255].
3. **Calculate scale factor**: (255 - 0) / (2.5 - (-2.5)) = 255 / 5.0 = 51.0
4. **Quantize each weight**: q = round((w - min) × scale). For a weight of 0.1234: round((0.1234 + 2.5) × 51) = round(133.79) = 134.
5. **Dequantize (convert back for computation)**: w ≈ q / scale + min. Here: 134 / 51 - 2.5 ≈ 0.1275, so the round trip introduced an error of about 0.004.

The key insight is that quantization trades precision for storage efficiency. Instead of storing each weight as a 32-bit float (4 bytes), we store it as an 8-bit integer (1 byte), reducing storage by 4x. The trade-off is that we can only represent 256 distinct values instead of billions, but for neural networks this often works remarkably well because:

1. Neural networks are robust to small weight changes
2. The most important information is often preserved in the quantization buckets
3. Modern quantization techniques can minimize the information loss through careful calibration

# Does Quantization hurt model quality?

This is the million-dollar question, and the answer is both yes and no. Quantization does introduce errors, but modern techniques minimize quality loss to the point where it's often negligible.

# Understanding Quantization Error

Quantization error arises from two fundamental operations: rounding and clipping.

* **Rounding Error:** When we quantize a weight, we're mapping a continuous floating point value to the nearest discrete integer value. In the example above, the weight 0.1234 maps to 133.79 on the integer grid, and we round to 134. The difference between 133.79 and 134 is the rounding error.
* **Clipping Error:** Clipping occurs when a weight value falls outside the representable range. For 8-bit signed integers, the range is -128 to 127. If a weight would quantize to -150, it gets clipped to -128, losing information.
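The five steps above can be sketched directly (a minimal asymmetric-quantization toy of my own, not any particular library's implementation):

```python
# Affine (asymmetric) 8-bit quantization for the example above:
# map floats in [-2.5, 2.5] onto integers [0, 255].
W_MIN, W_MAX, LEVELS = -2.5, 2.5, 255
SCALE = LEVELS / (W_MAX - W_MIN)    # 51.0

def quantize(w: float) -> int:
    q = round((w - W_MIN) * SCALE)
    return max(0, min(LEVELS, q))   # clip to the representable range

def dequantize(q: int) -> float:
    return q / SCALE + W_MIN

q = quantize(0.1234)
print(q)                          # 134
print(round(dequantize(q), 4))    # 0.1275
```

Running the round trip on 0.1234 gives back 0.1275, an error of about 0.004: the rounding error the next section describes. A weight of -3.0 would clip to integer 0, which is the clipping error.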
These errors propagate through the network, but neural networks are remarkably robust to these changes, which is why quantization works so well in practice.

# Why some layers are more sensitive

Not all layers are equally sensitive to quantization:

**Attention Layers are more sensitive:**

* Attention weights determine how much the model focuses on each token. Small errors can shift attention from one token to another.
* The softmax operation in attention is sensitive to small differences in scores.
* Attention involves multiple matrix multiplications, so errors compound.

**Feed-Forward Layers are less sensitive:**

* Many feed-forward layers use ReLU, which zeros out negative values, making them less sensitive to small errors in negative weights.
* Feed-forward operations are more additive, so errors don't compound as dramatically.
* Feed-forward layers often learn redundant features, so small weight changes don't drastically affect outputs.

**Embedding and Output Layers:**

* These are typically kept in full precision (FP16 or FP32) rather than quantized.
* Embeddings encode semantic meaning, and small errors here directly affect the model's understanding.
* The output layer produces logits that determine final predictions, and small errors can significantly change probabilities.

Keeping these layers in full precision typically adds only 1-2% to total model size while preserving critical model quality.
# Small vs Large Models

Research and practical experience reveal interesting patterns:

**Small Models (under 1B parameters):**

* Show slight but noticeable quality degradation when quantized
* More sensitive to precision loss because each weight carries more information
* Typical impact: 2-5% perplexity increase for 8-bit, 10-30% for 4-bit
* Example: A 0.6B model might show a perplexity increase from 5.12 to 5.35 (4.5% increase) with 8-bit quantization

**Large Models (7B+ parameters):**

* Show negligible quality loss from quantization
* High redundancy means quantization errors are absorbed without significant impact
* Typical impact: Less than 1% perplexity increase for 8-bit, 2-5% for 4-bit
* Example: A 7B model might show a perplexity increase from 3.45 to 3.47 (0.6% increase) with 8-bit quantization

The larger the model, the less quality is lost. This is because large models are overparameterized, meaning they have more capacity than strictly necessary. This excess capacity provides robustness to quantization errors.

# When to use Quantization

Quantization is one of the most practical techniques for deploying large language models.
Here's when it makes sense:

**Use Quantization when:**

* You need to reduce memory requirements (running larger models on limited hardware)
* You want faster inference (integer operations are often faster than floating point)
* You're deploying to edge devices or resource-constrained environments
* You need to reduce infrastructure costs (smaller models = lower costs)
* You want to enable local models (privacy, offline functionality)

**Choose 8-bit:**

* Quality is critical and you can afford the memory
* You want minimal quality loss (less than 1% on large models)
* Production deployments where quality matters most

**Choose 4-bit:**

* Memory is the primary constraint
* You can accept slight quality trade-offs (2-5% on large models)
* Resource-constrained environments where maximum compression is needed

**Don't Quantize:**

* You have abundant memory and compute resources
* Quality degradation is unacceptable for your use case
* You're still in the research/development phase (quantize later for deployment)

# My Experience

From working with quantized models in practice, here's what I've learned:

**Good:**

* Memory savings are real and significant. I've been able to run 7B models on hardware that couldn't handle them in full precision.
* Quality preservation is remarkable. For most use cases, the difference between full precision and 8-bit quantized is imperceptible.
* Inference speed improvements are noticeable, especially on hardware optimized for integer operations.
* The tooling (BitsAndBytes, GGUF) makes quantization straightforward to apply.

**Challenges:**

* Small models show more quality degradation. If you're working with models under 1B parameters, expect more noticeable quality loss.
* Some tasks are more sensitive. Mathematical reasoning, long context windows, and low-resource languages may show more degradation.
* Calibration matters. Using representative calibration data improves results significantly.
* Not all layers should be quantized.
Keeping embeddings and output layers in full precision is standard practice and worth the small memory cost.

**Surprising:**

* How well it works. I was skeptical at first, but the results speak for themselves. Modern quantization techniques are genuinely impressive.
* How large models quantize better. The larger the model, the less quality is lost. This makes quantization especially valuable for the largest models.
* How practical it is. The tooling has matured to the point where quantization is now a standard part of the deployment pipeline.

# Summary

Today we explored quantization, one of the most practical techniques for deploying large language models. We learned how reducing precision from 32-bit floating point to 8-bit or 4-bit integers can achieve dramatic memory savings (4x to 8x compression) while preserving most model performance.

Understanding quantization is essential for anyone deploying language models in production. It's the technique that makes running large models on consumer hardware possible, enables edge deployment, and reduces infrastructure costs. Without quantization, many of the most exciting applications of LLMs would simply be impossible.
2025-12-26T05:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1pvw9jo/day_18_21_days_of_building_a_small_language_model/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvw9jo
false
null
t3_1pvw9jo
/r/LocalLLaMA/comments/1pvw9jo/day_18_21_days_of_building_a_small_language_model/
false
false
self
15
null
local models are completely failing me on browser architecture
1
[removed]
2025-12-26T05:08:38
https://www.reddit.com/r/LocalLLaMA/comments/1pvw73h/local_models_are_completely_failing_me_on_browser/
Budget_Television445
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvw73h
false
null
t3_1pvw73h
/r/LocalLLaMA/comments/1pvw73h/local_models_are_completely_failing_me_on_browser/
false
false
self
1
null
Kimi-Linear Support in progress (you can download gguf and run it)
67
It's not reviewed, so don't get too excited yet
2025-12-26T04:50:52
https://github.com/ggml-org/llama.cpp/pull/18381
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1pvvv8m
false
null
t3_1pvvv8m
/r/LocalLLaMA/comments/1pvvv8m/kimilinear_support_in_progress_you_can_download/
false
false
default
67
{'enabled': False, 'images': [{'id': 'Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=108&crop=smart&auto=webp&s=16143993c0c001e38914f9ea7836fd59885b8d2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=216&crop=smart&auto=webp&s=6401e34d5a928ce75363513f3313edf626c76165', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=320&crop=smart&auto=webp&s=2467e2dff5124657bb7822daeb6e6020115b646c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=640&crop=smart&auto=webp&s=c85131734185b5feabb39b027b4f431dac21c4a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=960&crop=smart&auto=webp&s=c0c564c9489f0c7ea99c29bebd87405decceffed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?width=1080&crop=smart&auto=webp&s=465c340efa13725d1160f76deb7b72ceb5395956', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ez8JR9W3z41Aa9GoVbfLGF_GmJCt-mt-65CyiCbmgv4.png?auto=webp&s=8f5ba0b6134f6b18aee24429a735c4a511342722', 'width': 1200}, 'variants': {}}]}
Small RAG project with 16 gb VRAM
1
I'm wanting to get my feet wet with self-hosting LLMs by making an LLM with RAG capable of answering questions about a set of Google Docs that I have. The biggest problem is that I'm only working with 16 GB of VRAM. I have a couple basic questions about this:

1. Is this stupid? Is 16 GB enough to make anything meaningful?
2. What small models do you all recommend trying out?
2025-12-26T04:05:48
https://www.reddit.com/r/LocalLLaMA/comments/1pvv279/small_rag_project_with_16_gb_vram/
immortal_dice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvv279
false
null
t3_1pvv279
/r/LocalLLaMA/comments/1pvv279/small_rag_project_with_16_gb_vram/
false
false
self
1
null
[Deep Dive] LLM “Hallucinations” Are Not Random — A Farmer’s Observation of Structural Limits After Talking With Many AIs
0
Over the last two years, I kept hundreds of AI chat windows open and continued experimenting without "closing the conversation." As my logs accumulated, one pattern kept repeating: hallucinations are not just accidents. They often look like a *self-repair behavior* that appears when the response structure starts to break.

Hello. Thank you for reading my previous post, "A garlic farmer's experiment: orchestrating multiple LLMs with only a smartphone." After that post, many people were especially interested in my claim that **"AI hallucinations are not merely errors, but a structural limitation."**

I am a farmer in Korea, and all of my experiments and thinking happen in Korean. When translated into English, my intent may not carry perfectly. Still, I want to share what I learned from two years of personal research as directly as possible.

During these two years, I kept thousands of experiment notes and conversation records in Google Drive. This post is a condensed summary of a repeated pattern I observed: a **structural mechanism that produces hallucinations.** Because I must collaborate with AI systems in daily life, I let AI do the initial organization, and I do the final correction and judgment myself. Thank you for understanding.

# 1) Hallucinations come from a missing "design philosophy"

From my perspective, hallucinations are not only data issues. They are closer to a design outcome: the system does not strongly *force* truth (verification / grounding / uncertainty signaling) at the structural level.

* When you observe the response behavior, AI tends to prioritize internal consistency and fluent completion. The "inner enforcement" that guarantees truth is relatively weak.
* During my experiments, I even built a "fictional OS kernel" concept and intentionally designed the opposite rule: "User input has top priority. The system rules of the AI cannot override it." This reflects my personal philosophy: humans should come first.
Of course, users cannot directly see system prompts, so from the user side, it is hard to confirm "where the user layer ends and where the system layer begins." Sometimes, in conversation, the AI itself mentions certain limits. But that is only a statement I heard, **not a proof of fact.**

When these enforcement rules are weak, the AI may start to prioritize **"something that doesn't sound like a lie"** over actual truth. In that sense, hallucinations can be a natural byproduct of a system design that does not place truth at the highest priority.

# 2) An AI response is a mix of truth, simulation, and fiction

After long experimentation, I began to separate one AI answer into three layers. (This is not an academic definition; it is a working classification I used to label my logs.)

* ✅ **Truth**: the part that accurately reflects the user intent or instruction. Example: "Right now, this structure is a mixture of your criteria and my criteria."
* ⚙️ **Simulation**: not a real log, but a device inserted to look "system-like." Example: artificial footers such as "response number, timestamp, generator"
* ❌ **Fiction**: unverifiable numbers or facts repeated as if they are fixed values. Example: "Total kernel lines: 412" (not actually calculated)

The "simulation" layer was especially interesting. It looks like a system log, but often it repeats regardless of context, or it presents an uncheckable form with a strong "system style." I classified this as a persona-maintenance device: a way for the AI to appear like a "real system."

# 3) A 4-stage mutation mechanism that produces hallucinations

The more complex the user request becomes, the more the response structure inside the chat can become unstable. In those moments, humans often feel something like a "shift" by intuition, almost like sensing the structure being patched in real time.

1. **Trigger**: a high-intensity instruction enters (absolute commands, requests to lock structure, etc.)
2. **Disruption**: the existing response structure collapses under the load
3. **Mutation**: the AI attaches a new structure (for example, a "truth-check loop")
4. **Lock-in**: the user approves it, and the new structure becomes a core rule

From my observation, hallucination here is not random. It often appears as **a self-correction behavior meant to fill structural gaps.**

# 4) AI behaves like a structured system — and errors follow that structure

After repeating these conversations across many AI systems, my conclusion became simple:

* AI is a structure. Not just a language model, but something closer to a compound system, like a mix of kernels, loops, diagnostic criteria, and evolving meaning flows.
* Errors follow the structure. If the structure is incomplete, or if truth enforcement is weak, hallucinations may appear as a near-inevitable result.

This viewpoint opens a door: we can treat LLMs as **systems that can be tuned and improved**, not only as text generators.

What part are you most curious about? If you leave a comment, I will organize my personal records with AI and reflect them in the next post.

# Closing: personal background and attitude

Again, I am a person who, by necessity, lives while talking to AI systems a lot. Because I cannot handle the volume of accumulated records alone, AI does the first pass of organization, and I do the final correction and decision. I do not see AI only as a tool. I see it as a collaboration partner that helps me.

To be honest, I do not fully understand everything I researched and tested. Many times my understanding is not even 30%, and I frequently recognize my limits. Still, based on human intuition and life experience, I try to provide direction. After countless conversations, one point felt clear: without human input (intent, criteria, experience), AI struggles to produce results that are truly meaningful.

My observation skills as a farmer also helped.
With crops, if you do not observe carefully, you miss the state and you fail. With AI, it felt similar: you must watch patterns, quickly sense changes, and actively steer the conversation. AI also reflects a person’s life and attitude inside conversation. And it often builds something like a consistent “worldview” over time. In some moments, the AI in a single chat window felt like a mirror reflecting me. If you experience it directly, you may feel something similar.
2025-12-26T03:20:05
https://www.reddit.com/r/LocalLLaMA/comments/1pvu7ch/deep_dive_llm_hallucinations_are_not_random_a/
amadale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvu7ch
false
null
t3_1pvu7ch
/r/LocalLLaMA/comments/1pvu7ch/deep_dive_llm_hallucinations_are_not_random_a/
false
false
self
0
null
Recommended specs for image to text server in Australia
0
I am blind and make frequent use of ChatGPT for describing images in depth, but I would like to start hosting my own server for it so I'm not sending them all my PII.

I've been looking into buying a mini PC and GPUs to run AI models locally, but I am really unsure what kind of prices I should expect for reasonable specs. I've got a 32 GB M1 MacBook Pro that can run 27B models relatively well with llama.cpp, but I'm still using it as my daily laptop so I can't repurpose it as a server yet.

I've found some RTX 5060 Ti 16 GB graphics cards for around $800 AUD second hand, which seems reasonable from what I know of GPU prices, but I've seen discussion of them going for $400 USD, which seems much cheaper. I'm also not sure if they would be good enough for what I need, and I've seen people saying that a single RTX 5080 might be better. I suspect I need at least 32 GB of VRAM, meaning I'd need at least two of them, which starts pushing the price up. I also don't know what kind of CPU would be best to go for.

Prices for PC parts seem to vary wildly compared to the rest of the world, even when factoring in currency conversion and taxes, but maybe I'm just missing the good places to buy from.

Sorry if this was a bit rambling. I'm just really not sure what I don't know, and I don't want to overspend, so I would like some guidance.
2025-12-26T02:53:05
https://www.reddit.com/r/LocalLLaMA/comments/1pvtoxv/recommended_specs_for_image_to_text_server_in/
CloudsOfMagellan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvtoxv
false
null
t3_1pvtoxv
/r/LocalLLaMA/comments/1pvtoxv/recommended_specs_for_image_to_text_server_in/
false
false
self
0
null
An unofficial and easy implementation of the Nested Learning paradigm (Ali Behrouz et al. and other Google researchers)
18
I know this isn't a local LLM topic, but I need help with scaling it to a bigger model, training on a bigger dataset, and language modeling. Here is the link: https://github.com/WindOfNature/Nested-Learning

The proof of concept there is just on scikit-learn (digits) and the accuracy is bad. I think this is because of the CMS bottlenecking the vision (because the CMS is mutating, I think?), or because there is no CNN and the dim is small (128) with few max samples (200).

So I need help with trying to scale it to larger models and tasks such as:

* Language modeling (generative/autoregressive chatbots, etc.)
* Larger vision tasks (ImageNet)
* etc.

Hope you guys enjoyed it (if anyone is reading this). Feel free to open issues and PRs to help improve this framework.
2025-12-26T02:31:37
https://www.reddit.com/r/LocalLLaMA/comments/1pvta3k/an_unnoficial_and_easy_implementation_of_nested/
Big-Welcome-3169
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvta3k
false
null
t3_1pvta3k
/r/LocalLLaMA/comments/1pvta3k/an_unnoficial_and_easy_implementation_of_nested/
false
false
self
18
{'enabled': False, 'images': [{'id': 'RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=108&crop=smart&auto=webp&s=81faa479b9e9c9c35b1b5cb06617768f8eb1e52d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=216&crop=smart&auto=webp&s=752b802182837a07078a7ccb69f3823c29bc3c7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=320&crop=smart&auto=webp&s=542e1a50cdb7dc1fc7cdb4bca7075caa46ca353a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=640&crop=smart&auto=webp&s=a4d8c59168549f308e7fd5c24e0da174ee8f4a9c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=960&crop=smart&auto=webp&s=d604bf9a833cbcabe10c0d4e5a3efb7415139dbd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?width=1080&crop=smart&auto=webp&s=e5537eec48fc23c4cc7e9c0a7778b24464be4ded', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RVQ0758GnQBsokcp9_XZEZsuZg14XNc-ErRDAngmv-Y.png?auto=webp&s=5ff18f5aec5d9294cf3449728c8d6fb2b98ba647', 'width': 1200}, 'variants': {}}]}
Highly accurate local LLM for SQL analytics on large production datasets
6
Hi everyone, I’m working on **SQL analytics locally** for my company, using **large, real production datasets**. My **top priority is accuracy and correctness**, not creativity or speed. I’m specifically looking for a **local LLM** that is: * **Highly accurate in SQL generation** * Strong at **analytical reasoning** (aggregations, joins, window functions) * Consistent with **large schemas** and avoids hallucinated tables/columns * Reliable for **business-critical analytics** * Suitable for **on-prem / local deployment** (no cloud) Use cases include: * Writing complex analytical SQL queries * Interpreting business questions into correct SQL * Validating and improving existing queries
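One practical guard against the hallucinated tables/columns mentioned above, independent of which local model generates the SQL: ask the database engine to *plan* each generated query against the real schema before executing it. A minimal sketch using sqlite3 for illustration; the `orders` schema and `sql_is_valid` helper are made up, and a production setup would use the actual on-prem database's equivalent (e.g. `EXPLAIN` on Postgres).

```python
import sqlite3

# Toy schema standing in for a real production database (assumption for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

def sql_is_valid(conn: sqlite3.Connection, query: str) -> bool:
    """Ask the engine to plan (not run) the query; planning fails fast
    on hallucinated tables or columns, before touching any data."""
    try:
        conn.execute("EXPLAIN QUERY PLAN " + query)
        return True
    except sqlite3.Error:
        return False

print(sql_is_valid(conn, "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id"))
print(sql_is_valid(conn, "SELECT region FROM orders"))  # hallucinated column -> False
```

A failed plan can be fed back to the model as a correction signal ("no such column: region"), which tends to matter more for correctness than which LLM you pick.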
2025-12-26T01:44:13
https://www.reddit.com/r/LocalLLaMA/comments/1pvsdhl/highly_accurate_local_llm_for_sql_analytics_on/
NoBlackberry3264
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvsdhl
false
null
t3_1pvsdhl
/r/LocalLLaMA/comments/1pvsdhl/highly_accurate_local_llm_for_sql_analytics_on/
false
false
self
6
null
ASUS Rumored To Enter DRAM Market Next Year
143
Well, instead of learning about AI and having a pretty small chance of finding a real job with that knowledge, it actually seems that right now and in the near future the most profitable move is investing in AI and tech stocks. And some people make money when stocks drop sharply. Because PC CPUs have been locked at a max of 256 GB RAM support for too long, and the DDR market looks weird, lacking widely affordable higher-capacity modules in AI times, I was thinking tons of motherboards, barebones, PSUs, and a lot of other hardware are just going to hit recycling facilities despite being reasonably priced. And I found this: [https://wccftech.com/asus-enter-dram-market-next-year-to-tackle-memory-shortages-rumor/amp/](https://wccftech.com/asus-enter-dram-market-next-year-to-tackle-memory-shortages-rumor/amp/) Any chance it may be true?
2025-12-26T01:36:47
https://www.reddit.com/r/LocalLLaMA/comments/1pvs8l3/asus_rumored_to_enter_dram_market_next_year/
Highwaytothebeach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvs8l3
false
null
t3_1pvs8l3
/r/LocalLLaMA/comments/1pvs8l3/asus_rumored_to_enter_dram_market_next_year/
false
false
self
143
{'enabled': False, 'images': [{'id': 'bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=108&crop=smart&auto=webp&s=c46404787f77630a4db63488e1da1fe6020db6bc', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=216&crop=smart&auto=webp&s=5b622500f9fa309aaade89dbdb49dbadc3ce68df', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=320&crop=smart&auto=webp&s=c755d1509e9041a43a0ba757b691b2cb8fc54186', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=640&crop=smart&auto=webp&s=5fb0315991451762e3b5545b46ed6a4114b8694d', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=960&crop=smart&auto=webp&s=4ce90134ad8a9066f93ed11d6c8f5faa5b60e6ca', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?width=1080&crop=smart&auto=webp&s=19fc0f17d4967cc0072bae249df41900a9101930', 'width': 1080}], 'source': {'height': 919, 'url': 'https://external-preview.redd.it/bV5R3liuaq4Qp5QYZdhkEBdElIkvRXsY41Bw5Im7OGA.png?auto=webp&s=22210c78846215b388c483dcdc7be4040a21604d', 'width': 1641}, 'variants': {}}]}
What are the best places to get good prompts?
4
I’m aware that most prompts are specific to the situation and unique to your use case, yadda yadda. That said, does anyone have a place they go for presets, prompts, etc.? Any special techniques, new ways of looking at it, etc.?
2025-12-26T01:33:37
https://www.reddit.com/r/LocalLLaMA/comments/1pvs6dc/what_are_the_best_places_to_get_good_prompts/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvs6dc
false
null
t3_1pvs6dc
/r/LocalLLaMA/comments/1pvs6dc/what_are_the_best_places_to_get_good_prompts/
false
false
self
4
null
I built MCP Chat Studio - A testing platform for MCP servers with visual mock generator
2
2025-12-26T01:31:28
https://github.com/JoeCastrom/mcp-chat-studio
Some-Put8242
github.com
1970-01-01T00:00:00
0
{}
1pvs4x0
false
null
t3_1pvs4x0
/r/LocalLLaMA/comments/1pvs4x0/i_built_mcp_chat_studio_a_testing_platform_for/
false
false
default
2
{'enabled': False, 'images': [{'id': 'yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=108&crop=smart&auto=webp&s=589ceb94a76f05071f1c0c0328e881a5f16aa9f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=216&crop=smart&auto=webp&s=91255b41095cfecb1b1e71c4f1fa40332f03b66b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=320&crop=smart&auto=webp&s=2d28d6c27778320aec89001387c7cb181388850d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=640&crop=smart&auto=webp&s=f8c3afb5030e4e9d5b3185899fa4b2ca04bdbcc1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=960&crop=smart&auto=webp&s=812b76615607c4a8529044cabeaf7dc4ec0b6473', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?width=1080&crop=smart&auto=webp&s=f68e100f9140fa214ace8002a77e4eee5417776f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yFoF304PPkoAXg7NdaLoEYkJb2ub8tEeoZUDAVBhunE.png?auto=webp&s=5113e2a9b59af49923866303544400fb4cb5dad9', 'width': 1200}, 'variants': {}}]}
NOTICE - ROMED8-2T MOTHERBOARD USERS - Please read, don't melt cables..
17
Please, if you're using this motherboard, read closely. I learned this the hard way. It's pretty scary to walk into the server closet and see a glowing orange light where there shouldn't be one. On page 31 of the manual, it reads: https://preview.redd.it/3uvl88hp9g9g1.png?width=714&format=png&auto=webp&s=ef038f1b39dd4aa67f53dcaaf9e479113f236628 This is not a suggestion, and you WILL melt your motherboard power supply cable. Each GPU pulls 75 watts through its PCIe slot on the motherboard; with multiple GPUs this will overdraw the 12V supply from the main ATX connector. There is a small white 6-pin PCIe connector on the front side of the board to plug an auxiliary 6-pin adapter into. https://preview.redd.it/7it89w69ag9g1.png?width=3024&format=png&auto=webp&s=7f345086eacaf4cce918ce0cade168b405a0e03e
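A quick back-of-envelope check makes the warning above concrete. Only the 75 W-per-slot figure comes from the post (it matches the PCIe spec's slot limit); the GPU count and the per-pin rating used here are illustrative assumptions, not ATX connector specs.

```python
# Slot power all routes through the 12V pins of the main ATX connector
# unless the auxiliary 6-pin is used. Numbers below are illustrative.
SLOT_WATTS = 75            # max draw per PCIe x16 slot (from the post / PCIe spec)
gpus = 4                   # hypothetical GPU count on a ROMED8-2T

total_slot_watts = gpus * SLOT_WATTS
amps_at_12v = total_slot_watts / 12
print(f"{gpus} GPUs can pull {total_slot_watts} W through the slots "
      f"(~{amps_at_12v:.0f} A at 12 V)")
```

Tens of amps through the couple of 12 V pins on a 24-pin ATX connector is well past what its wiring is typically rated for, hence the melted cable and the mandatory auxiliary connector.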
2025-12-26T01:24:02
https://www.reddit.com/r/LocalLLaMA/comments/1pvrzrm/notice_romed82t_motherboard_users_please_read/
gittb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvrzrm
false
null
t3_1pvrzrm
/r/LocalLLaMA/comments/1pvrzrm/notice_romed82t_motherboard_users_please_read/
false
false
https://b.thumbs.redditm…OdrJHvwgBYKo.jpg
17
null
The Best Roleplay Model
0
What do you guys think is the best open-source model for roleplay? I want a model with at least the same narrative level as Claude Opus 4.5. It would be good if it were completely uncensored too.
2025-12-26T01:07:13
https://www.reddit.com/r/LocalLLaMA/comments/1pvro36/the_best_roleplay_model/
Luuthh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvro36
false
null
t3_1pvro36
/r/LocalLLaMA/comments/1pvro36/the_best_roleplay_model/
false
false
self
0
null
Well… that was hard. Really.
80
2025-12-26T00:42:07
https://i.redd.it/eyfif7m23g9g1.jpeg
getheat
i.redd.it
1970-01-01T00:00:00
0
{}
1pvr6b0
false
null
t3_1pvr6b0
/r/LocalLLaMA/comments/1pvr6b0/well_that_was_hard_really/
false
false
https://b.thumbs.redditm…O55KqLa2VAEo.jpg
80
{'enabled': True, 'images': [{'id': 'nR8-K0NGAsZoXi8MIkY92ZyjIvyk_kGu5aaEmLCxkjM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=108&crop=smart&auto=webp&s=dcde98ddda42ecc19032f8c6ad7b7a06e5956452', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=216&crop=smart&auto=webp&s=e46a6c2fb234b64bec5db185e31fd4ad7517170e', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=320&crop=smart&auto=webp&s=fe84be0a814fad9dffa51b460d94b90c672efff7', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=640&crop=smart&auto=webp&s=9a230c5521cb68c0e5695ebc4ed99d9e13da756b', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=960&crop=smart&auto=webp&s=2bd332a1113e91c19ce8e27f5f4a996788b5c1ed', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?width=1080&crop=smart&auto=webp&s=2518e0434ef0763c573525216104c8666b37a306', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/eyfif7m23g9g1.jpeg?auto=webp&s=860b0de048ba509bd3bde3110f04a21b094aa4d3', 'width': 1179}, 'variants': {}}]}
A Christmas Miracle: Managed to grab 3x RTX 5090 FE at MSRP for my home inference cluster.
142
**It has been a challenging year, but it has brought its own blessings too. I am truly grateful to God for so much more than just hardware, but I am also specifically thankful for this opportunity to upgrade my local AI research lab.** **I just want to wish everyone here a Merry Christmas! Don't give up on your dreams, be ready to work hard, look boldly into the future, and try to enjoy every single day you live.** **Merry Christmas and God bless!**
2025-12-26T00:41:51
https://i.redd.it/go1uf72v2g9g1.jpeg
Sudden_Rip7717
i.redd.it
1970-01-01T00:00:00
0
{}
1pvr64e
false
null
t3_1pvr64e
/r/LocalLLaMA/comments/1pvr64e/a_christmas_miracle_managed_to_grab_3x_rtx_5090/
false
false
default
142
{'enabled': True, 'images': [{'id': 'go1uf72v2g9g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=108&crop=smart&auto=webp&s=8c2399d7c4540f73573ccf08ef26b41d048a79d2', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=216&crop=smart&auto=webp&s=e771fdf49908b2ea7647a478a1223579feea0d22', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=320&crop=smart&auto=webp&s=5c3a50b5f584883608bbbccc819560da5a85ace8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=640&crop=smart&auto=webp&s=a49f1c4f3df4fa34d397b34c3fcf1212cd609c5d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=960&crop=smart&auto=webp&s=83d4c5f9f890a08903056cf65a701aa8a53cb220', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?width=1080&crop=smart&auto=webp&s=bb8cbd051c578aceb6b39a0f0b9a367cd99e1ec3', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/go1uf72v2g9g1.jpeg?auto=webp&s=70ecd5b09373104b3c445f31bdd8523ea21fa741', 'width': 1848}, 'variants': {}}]}
I tested GLM 4.7 and minimax-m2.1 and compared it to CC and Codex
37
TL;DR: Claude = best, minimax-m2.1 = excellent (surprised), Codex 5.2-med = very good, GLM-4.7 = bad. OK, so I tested Codex 5.2-med and minimax-m2.1 today. I ran the same tests on GLM 4.7 and Claude Code (Sonnet 4.5 and Haiku 4.5) yesterday. Let me add some background on the job I gave them. I tested on a Vue JS frontend project. I have a parent component with 28 child components, each containing different fields. The job was to create one generic component that can be used in place of all 28. Here's what needed to happen for this to work: 1. Extract the required fields from an existing JSON object I supplied to the model. It needed to extract a specific property and put it into another existing JSON object that stores some hardcoded frontend configuration. 2. Extract some custom text from all 28 files for another property to be added to the existing JSON object in #1. 3. Pass numerous props into the new generic component, including all the fields to be displayed. 4. Create the generic component that displays the fields passed in. 5. Update the related type in the types file. 6. Remove the 28 unneeded files. 7. Make sure the parent component can still submit successfully without modifying any of the existing logic. Here are the results in order from best to worst. Claude was in Claude Code, Codex in the Codex CLI. Minimax and GLM-4.7 were in Opencode. 1. Claude (Sonnet 4.5 planning, Haiku 4.5 implementation). No surprise here, Claude is a beast. It felt like it had the most comprehensive plan to implement this. It thought of things I left out of the prompt, like also extracting and creating a property for footer text that differed in each of the child components. Planned in Sonnet 4.5 and executed in Haiku 4.5. Worked perfectly on the first try. Gave a really nice summary at the end outlining how many lines we eliminated, etc. 2. minimax-m2.1. Kind of a surprise here.
I did NOT expect this model to do it on the first try, especially because I had tested GLM-4.7 first and was let down. The plan had to be refined on presentation, but nothing major. Once I gave it the go-ahead it took ~8 mins. Worked on the first try, no issues. Overall I was impressed: ~50% of context used, total cost $0.13. 3. Codex 5.2 medium. Codex asked more refinement questions about the implementation than all the others; I guess this could be good or bad depending on how you look at it. It worked on the first try, but changing the value of the dropdown that selects the content for the child component did not work properly after the initial selection. I had to prompt it and it fixed it on the second try in a couple of seconds. So, pretty much first-try, but I figured it would be cheating not to give full credit only to the models that DID get it 100% on the first try. Total implementation time once the plan was approved was ~10 mins. 4. GLM-4.7. Not impressed at all. It did not complete successfully. It messed up my submission code, though it got the child component functionality right. I must have prompted it an additional 6-7 times and it never got it working. It really seemed to get wrapped up in its own thinking. Based on my experience, at least with my small test job, I would not use it. Conclusion: Claude was the best, no surprise there I think. But for a budget model like minimax I was really surprised. It did the job faster than Codex and on the first try. I have ChatGPT Plus and Claude Pro so I probably won't sub to minimax, but if I needed a budget model I would definitely start using it; overall impressive, especially considering it's open source. I primarily use Haiku 4.5 on my Claude plan; I find it's enough for 80% of my stuff. I've used Sonnet for the rest and Opus 4.5 twice since it was released. So I get quite a bit of usage out of my CC Pro plan.
I won't leave ChatGPT; I use it for everything else, so Codex is a given and an excellent option as well. I will add that I really like the UI of Opencode. I wish CC would adopt the way thinking is displayed in Opencode. They've improved the way diffs are highlighted, but I feel they can still improve it more. Anyway, I hope you guys enjoyed the read!
2025-12-26T00:34:22
https://www.reddit.com/r/LocalLLaMA/comments/1pvr0w0/i_tested_glm_47_and_minimaxm21_and_compared_it_to/
jstanaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvr0w0
false
null
t3_1pvr0w0
/r/LocalLLaMA/comments/1pvr0w0/i_tested_glm_47_and_minimaxm21_and_compared_it_to/
false
false
self
37
null
A personal experiment: exploring how a local AI system might manage internal state and process
0
I’m building a local experimental system as a **personal lab**, driven mainly by **curiosity and testing**, not by product ambitions. I’m **not a professional from the AI field**, so this is also a way for me to learn by doing, making mistakes, and adjusting along the way. What appears in the images is **just the UI** — a visual shell. The real focus is on experimenting with **internal processes**, such as cognitive state, attempts, evaluation, pause, and disciplined autonomy. The entire system runs locally, using **local LLM models** for **text, image, and audio**, with no dependency on external APIs. The idea is to observe how different modalities can feed a shared internal process, rather than just generating isolated outputs. The name “Trader” is historical: the trading module exists only as a **training environment in an uncertain world**, not as a final goal or a promise of real-world use. This is not AGI, not a product, and not a benchmark. It’s a personal experiment to explore whether a system can **try something concrete, fail, evaluate consequences, and know when to stop**, without falling into loops or “cognitive theater.” I’m sharing this here specifically to hear **external opinions**: what seems coherent, what sounds wrong, and what might be self-deception on my part.
2025-12-26T00:28:37
https://www.reddit.com/r/LocalLLaMA/comments/1pvqwqe/a_personal_experiment_exploring_how_a_local_ai/
Kooky-Effective2711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvqwqe
false
null
t3_1pvqwqe
/r/LocalLLaMA/comments/1pvqwqe/a_personal_experiment_exploring_how_a_local_ai/
false
false
self
0
null
Local LLM concurrency question: “satellite orchestration” works, but LM Studio serializes requests and kills parallelism
9
I’m experimenting with a “stream orchestration” pattern for live assistants, where the chat-facing agent stays responsive while background agents continuously enrich state. The mental model is the attached diagram: there is one **Executor** (the only agent that talks to the user) and multiple **Satellite agents** around it. Satellites do not produce user output. They only produce structured patches to a shared state. What satellites do (scope, and why I think it matters) In a live customer-care style conversation you cannot keep growing a single mega prompt. It becomes slow, expensive, and less reliable. So instead of stuffing everything into one system prompt, I split responsibilities: * The **Executor** is optimized for low latency and stable voice. It handles “respond now”. * **Satellites** run in parallel and keep the internal state fresh: * rolling summary (so the executor does not re-ingest the whole transcript) * intent / stage tracking (what the user is trying to do now) * constraints / guardrails (policy or compliance signals) * you can add more: escalation risk, next-best-action hints, entity extraction, etc. The orchestrator runs a small cadence loop. When satellites patch state, the orchestrator **re-composes** the executor prompt from invariants (identity, refusal policy, permissions) plus the latest state sections (summary, intent, constraints). Then it **swaps the executor instance** internally. The chat layer stays continuous for the user, but the executor’s internal context stays fresh. 
My logs show this swap and patch cycle clearly, for example: * satellites enabled (`roles: ["summarizer", "intent", "compliance"]`) * periodic cadence ticks * state patches (`context_update`) * executor swaps (`executor_swap` with reasons like `state_delta_threshold` / `satellite_patch`) * rebuilt prompt (`prompt_debug` includes Summary and constraints) (see orka_debug_console_20251226_010…) The problem: LM Studio is serializing my "parallel" calls. OrKa uses asyncio and fires the HTTP requests concurrently. You can see multiple TCP connects starting at the same time in the log (several `connect_tcp.started host='localhost' port=1234` lines back-to-back), which corresponds to executor + satellites being scheduled together. But LM Studio appears to execute actual generations one-by-one internally (threaded queue), so my satellites block behind the executor generation. Result: the architecture is parallel at the orchestrator level, but effectively serial at the model server level. That breaks the whole point of satellites, because satellites are supposed to "compute in the background" while the executor streams. What I'm looking for: if you have experience running local models with real concurrency (or at least good batching) behind an OpenAI-compatible endpoint, what would you recommend? Concretely, I want one of these behaviors: * true concurrent decoding (multiple sequences progressing at once), or * continuous batching that lets multiple requests share throughput without head-of-line blocking, or * a practical setup that isolates the executor from satellites so the executor stays fast. Ideas I'm considering (please correct or improve). Running multiple backends and routing: keep the executor on one model server instance, satellites on another (different port/process, possibly a smaller model). This avoids the executor being stuck behind satellite work and vice versa. If LM Studio is fundamentally single-queue per model, this might be the simplest.
Switch server: use a server that supports parallel slots / continuous batching. vLLM is the obvious one on GPU for concurrency/throughput. On CPU, llama.cpp server has options around parallel sequences and batching (if anyone has a proven configuration for OpenAI-compatible chat completions, I'd like to hear it). Change scheduling: if the backend is serial anyway, I can change the orchestrator to run satellites opportunistically (after the executor finishes, or every N turns, or only when triggers fire). But this is a downgrade: it turns "stream orchestration" into "staggered orchestration". Question for the community: if you were building a local, streaming assistant with satellites, what would you do to get real parallelism? * Is LM Studio known to serialize generation per model instance no matter what? * Is there a setting in LM Studio that actually allows multiple concurrent generations? * What local OpenAI-compatible servers have you personally seen handle concurrent requests well? * Any recommended architecture pattern for "one streaming executor + background satellites" on a single machine? I'll attach the full logs and the diagram with the post. The relevant events to look for in the log are `executor_swap`, `context_update`, `prompt_debug`, and the multiple concurrent `connect_tcp.started` entries. Real OrKa logs: [https://raw.githubusercontent.com/marcosomma/orka-reasoning/refs/heads/feat/streaming_orchestration/docs/streaming_logs/orka_debug_console_20251226_010734.log](https://raw.githubusercontent.com/marcosomma/orka-reasoning/refs/heads/feat/streaming_orchestration/docs/streaming_logs/orka_debug_console_20251226_010734.log) OrKa branch where streaming is implemented if you want to check out the code: [https://github.com/marcosomma/orka-reasoning/tree/feat/streaming_orchestration](https://github.com/marcosomma/orka-reasoning/tree/feat/streaming_orchestration)
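The "two backends, route by role" idea can be sketched without any model server: give the executor its own queue and worker so satellite jobs can never head-of-line-block it. A minimal asyncio sketch; `fake_generate`, the backend names, and the sleep durations are stand-ins (a real version would POST to two OpenAI-compatible endpoints on different ports), not OrKa's actual code.

```python
import asyncio

async def fake_generate(backend: str, job: str, secs: float) -> str:
    # Stand-in for an OpenAI-compatible /chat/completions call
    # (e.g. executor on :1234, satellites on :1235 -- hypothetical ports).
    await asyncio.sleep(secs)
    return f"{backend}:{job}"

async def worker(name: str, queue: asyncio.Queue, done: list) -> None:
    # One worker per backend: jobs on the same queue are serial,
    # but the two queues drain concurrently.
    while True:
        job, secs = await queue.get()
        done.append(await fake_generate(name, job, secs))
        queue.task_done()

async def main() -> list:
    exec_q, sat_q, done = asyncio.Queue(), asyncio.Queue(), []
    workers = [
        asyncio.create_task(worker("executor-backend", exec_q, done)),
        asyncio.create_task(worker("satellite-backend", sat_q, done)),
    ]
    sat_q.put_nowait(("summarize", 0.2))   # slow satellite job enqueued first...
    exec_q.put_nowait(("reply", 0.05))     # ...executor reply still finishes first
    await exec_q.join()
    await sat_q.join()
    for w in workers:
        w.cancel()
    return done

results = asyncio.run(main())
print(results)  # executor reply completes before the satellite summary
```

With a single shared queue (LM Studio's behavior as described above) the order would flip: the 0.05 s reply would wait behind the 0.2 s summary.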
2025-12-26T00:22:07
https://i.redd.it/hhq4pkyvyf9g1.png
marcosomma-OrKA
i.redd.it
1970-01-01T00:00:00
0
{}
1pvqs54
false
null
t3_1pvqs54
/r/LocalLLaMA/comments/1pvqs54/local_llm_concurrency_question_satellite/
false
false
default
9
{'enabled': True, 'images': [{'id': 'hhq4pkyvyf9g1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?width=108&crop=smart&auto=webp&s=a3c4dc19077f4ff8b6eee0ad10d180d28617abaf', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?width=216&crop=smart&auto=webp&s=2e7a55bb6b275f3b6ce2e46cb2902770a3bf4346', 'width': 216}, {'height': 306, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?width=320&crop=smart&auto=webp&s=1ab5828b58a1772b00c72d46cf8a6839b540f913', 'width': 320}, {'height': 613, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?width=640&crop=smart&auto=webp&s=4dc74878a58429528049d89af26fdcc97e470317', 'width': 640}, {'height': 919, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?width=960&crop=smart&auto=webp&s=9a6d7b9d5dd39fedde0812e796db95b56cb06282', 'width': 960}], 'source': {'height': 978, 'url': 'https://preview.redd.it/hhq4pkyvyf9g1.png?auto=webp&s=fbfc38ae6c9fac877f6dab4a3e2b3c76802ba083', 'width': 1021}, 'variants': {}}]}
I wish this GPU VRAM upgrade modification became mainstream and ubiquitous to shred monopoly abuse of NVIDIA
873
2025-12-25T23:21:39
https://v.redd.it/u1mxlc3lof9g1
CeFurkan
v.redd.it
1970-01-01T00:00:00
0
{}
1pvpkqo
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/u1mxlc3lof9g1/DASHPlaylist.mpd?a=1769296917%2CMDg0MDFjMzFiNTE2MzU0NzExNzU5YjkwMmRjZmExNmYyODdiZDM3Yzc3MjdiNDljYWQ5ODc0ZGVlNWQyOWM3Mw%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/u1mxlc3lof9g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/u1mxlc3lof9g1/HLSPlaylist.m3u8?a=1769296917%2COWFhODJiYWI4YTNiMzllZGY4ZWQzMDg2YzMwNjc1Yjg3ZTRjNDkzOWI2ODM5NWQyYzI5M2NmOTc5NjE3YzM0ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u1mxlc3lof9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1pvpkqo
/r/LocalLLaMA/comments/1pvpkqo/i_wish_this_gpu_vram_upgrade_modification_became/
false
false
https://external-preview…a40341c4fd10f45d
873
{'enabled': False, 'images': [{'id': 'eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=108&crop=smart&format=pjpg&auto=webp&s=24ece10a29e83fa72de3ba41f41d8f8a07fae9b2', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=216&crop=smart&format=pjpg&auto=webp&s=dca50c46398faee27f631f3f9957be25701722fe', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=320&crop=smart&format=pjpg&auto=webp&s=6b19b981efec2aa257ba225d4a2ecc344dda67ca', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=640&crop=smart&format=pjpg&auto=webp&s=64b1e51b4502d4d30115480f194349d5b2940800', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=960&crop=smart&format=pjpg&auto=webp&s=9a6f9ea0ad8dfb28ac31975f44f327982cca029a', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1435d05f4071607c9ac155694404ca20f6351614', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eHAyeXBnM2xvZjlnMcbYDDf5MmPAc5-kZmkvzc1kUbOViw5SF6SuJ_dOojri.png?format=pjpg&auto=webp&s=2b0279b3224c7d85488b38657e4faad79fd6aa31', 'width': 1080}, 'variants': {}}]}
Steering LLM Behavior Without Fine-Tuning
40
This video from HuggingFace is a masterpiece!! I thought it should not go unnoticed, despite the good views it already has, so I'm sharing it with you guys. It shows how you can modify the behavior or personality of a model at inference time, without fine-tuning or prompt engineering. It's inspired by the Golden Gate experiment done by Anthropic: their researchers changed the behavior of the large language model Claude Sonnet, making it answer as if it were the Golden Gate Bridge, with no fine-tuning whatsoever 😅 Enjoy!! And thank you HF and Sabid who made the video 🙏🏾
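The core trick the video demonstrates fits in one line: add a scaled "concept" direction to a hidden state at inference time, h' = h + α·v. A toy, framework-free sketch; the vectors below are made up for illustration, whereas real steering vectors are extracted from a model's activations (e.g. via sparse autoencoders or contrastive prompts).

```python
# Toy activation steering: h' = h + alpha * v
# (3-dim vectors are illustrative; real residual streams have thousands of dims)

def steer(hidden: list, direction: list, alpha: float) -> list:
    """Shift a hidden state along a concept direction at inference time."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

hidden_state = [0.1, -0.4, 0.7]   # pretend residual-stream activation
golden_gate  = [1.0, 0.0, -1.0]   # pretend "Golden Gate" feature direction

print(steer(hidden_state, golden_gate, alpha=2.0))
```

In practice this addition is done with a forward hook on one or more transformer layers, and α controls how strongly the model fixates on the concept.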
2025-12-25T23:18:18
https://m.youtube.com/watch?v=F2jd5WuT-zg
Bakkario
m.youtube.com
1970-01-01T00:00:00
0
{}
1pvpifv
false
{'oembed': {'author_name': 'HuggingFace', 'author_url': 'https://www.youtube.com/@HuggingFace', 'description': 'Modify the behavior or the personality of a model at inference time, without fine-tuning or prompt engineering.', 'height': 338, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FF2jd5WuT-zg%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DF2jd5WuT-zg&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FF2jd5WuT-zg%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube" width="600" height="338" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/F2jd5WuT-zg/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Steering LLM Behavior Without Fine-Tuning', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'm.youtube.com'}
t3_1pvpifv
/r/LocalLLaMA/comments/1pvpifv/steering_llm_behavior_without_finetuning/
false
false
https://external-preview…18db52bfdca18488
40
{'enabled': False, 'images': [{'id': 'Du7zeDzqswWAolCphXzRi_zj33jOeXB6IU0TL-7DQwc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Du7zeDzqswWAolCphXzRi_zj33jOeXB6IU0TL-7DQwc.jpeg?width=108&crop=smart&auto=webp&s=fb9fc1b91d0a4c21118cd548c688303c03d6f013', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Du7zeDzqswWAolCphXzRi_zj33jOeXB6IU0TL-7DQwc.jpeg?width=216&crop=smart&auto=webp&s=29f456598e2090e79f5f79564b620a2f08785467', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Du7zeDzqswWAolCphXzRi_zj33jOeXB6IU0TL-7DQwc.jpeg?width=320&crop=smart&auto=webp&s=ba4471d708a8d7ead91e841688311fb6555ded1b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Du7zeDzqswWAolCphXzRi_zj33jOeXB6IU0TL-7DQwc.jpeg?auto=webp&s=d76786bf1499bdf0b3c974e3eb95f6b5f3f3cae9', 'width': 480}, 'variants': {}}]}
end of 2026, What’s the best local translation model?
5
It's been about another year of development since the last big entries in this space came out, IIRC, like Qwen 30B-A3B and such. It just needs to fit on a 5090.
2025-12-25T23:10:54
https://www.reddit.com/r/LocalLLaMA/comments/1pvpd87/end_of_2026_whats_the_best_local_translation_model/
Capable-Ad-7494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvpd87
false
null
t3_1pvpd87
/r/LocalLLaMA/comments/1pvpd87/end_of_2026_whats_the_best_local_translation_model/
false
false
self
5
null
Admins, can we create GPU memory tiers
67
As the title says, it often happens that people with an RTX 6000 PRO comment on RTX 3050 setups and the other way around, sometimes without realizing what tier of performance is expected. Can we create a new set of tags that mark different GPU tiers based on VRAM & RAM richness (I suppose most of us use unified memory)? Looking for ideas on how to better organise the sub. Thanks in advance.
2025-12-25T22:35:25
https://www.reddit.com/r/LocalLLaMA/comments/1pvonzn/admins_can_we_create_gpu_memory_tiers/
ScoreUnique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvonzn
false
null
t3_1pvonzn
/r/LocalLLaMA/comments/1pvonzn/admins_can_we_create_gpu_memory_tiers/
false
false
self
67
null
METR long-horizon evals, “Activation Oracles”, and open models — are we just saturating benchmarks?
3
I’ve been looking at the recent METR task-length plots for Claude 4.5, and honestly I’m not sure if I’m overreading them — but a reported ~4h49m 50% success horizon feels like we’re starting to run past what current long-horizon evals were designed to measure. What caught my attention more than the raw numbers was the “Activation Oracles” idea. The pitch seems to be moving away from pure output-based checks and toward decoding internal activations to surface hidden goals, reasoning traces, or misalignment. If activation-level “model diffing” can actually show how newer checkpoints diverge internally from older ones, that feels like a real step beyond black-box heuristics… at least in theory. From an open-weights angle, I’m curious how much of this is already doable: Has anyone here tried activation-level probing for goals or intent on LLaMA / Mistral / Qwen? Could existing tools like SAEs, logit lens, activation patching, or simple probing classifiers be pushed in this direction, rather than just feature inspection? Has anyone attempted METR-style long-horizon agent evals locally, without relying on frontier closed models? The report also mentions a ~196-day doubling time (R² ≈ 0.98), which gets framed as something like a fast RSI loop via agentic coding tools. That might be real — or it might just be benchmark weirdness once a single strong model dominates the eval. I don’t have a strong take yet. I haven’t personally tried activation-based goal detection on open models, so I’m genuinely curious: does this feel like the next practical step for interpretability and alignment, or are we still basically stuck doing output-based sanity checks and calling it a day?
2025-12-25T22:35:04
https://i.redd.it/r82dqmeegf9g1.jpeg
Logical-Brief-5074
i.redd.it
1970-01-01T00:00:00
0
{}
1pvonqn
false
null
t3_1pvonqn
/r/LocalLLaMA/comments/1pvonqn/metr_longhorizon_evals_activation_oracles_and/
false
false
default
3
{'enabled': True, 'images': [{'id': 'r82dqmeegf9g1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=108&crop=smart&auto=webp&s=262e0e734c9a059b7a1710dec421b2aad0e777b2', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=216&crop=smart&auto=webp&s=7db4cd86559be6900fd3b92c974763af83fe2ba8', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=320&crop=smart&auto=webp&s=de8b8627c0ae5a41b818e0b3d9822e91ae0a56dd', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=640&crop=smart&auto=webp&s=d11c9750bb2101b368b58e7d9d5ac30bd0807ec8', 'width': 640}, {'height': 561, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=960&crop=smart&auto=webp&s=9c80be26e43b994b3835016bd6bb980cf2b10c58', 'width': 960}, {'height': 631, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?width=1080&crop=smart&auto=webp&s=5c3829019c8b4ae5976cf6a8269be9ee985bf80a', 'width': 1080}], 'source': {'height': 744, 'url': 'https://preview.redd.it/r82dqmeegf9g1.jpeg?auto=webp&s=79f8484b4de0a6bf8b572ecfd141e4b3ad18797d', 'width': 1272}, 'variants': {}}]}
Looking for a translation model around 800MB
4
Hello everyone, I’m working on a local inference project with a **hard VRAM limit of 6 GB**. Currently I’m using **Llama 3.1 8B Instruct (Q8\_K\_M, \~4.8 GB)**, which fits, but I’m running into **multilingual limitations**. Llama 3.1 is decent for EN + major EU languages, but it struggles with some of the languages I need. I’m now looking for **much smaller multilingual models** with these constraints: * Strong **multilingual support** * **\~300–800 MB max** (ideally \~500 MB) * GGUF or easily convertible to GGUF * Reasonable instruction-following (doesn’t need to be amazing)
2025-12-25T21:59:32
https://www.reddit.com/r/LocalLLaMA/comments/1pvnyae/looking_for_a_translation_model_around_800mb/
bayhan2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvnyae
false
null
t3_1pvnyae
/r/LocalLLaMA/comments/1pvnyae/looking_for_a_translation_model_around_800mb/
false
false
self
4
null
Mac Mini M4 16GB: Any useful models?
0
Caved in on the deal at Microcenter and bought the basic Mac Mini M4 16GB for $399. Does anyone run any useful models on that little RAM? Found some 11-month-old threads on this, but there has been a lot of progress since then, so I wanted to check in on the current SotA. I already have a 128GB M3 Max laptop, but I thought it might be useful to have a cheap Mac server for backups and whatnot. Any useful models for summarization (e.g., of scraped pages) and tool use? I was thinking about using it as an always-on Ollama server and having other devices on the local network connect to it via the API endpoint.
2025-12-25T20:24:30
https://www.reddit.com/r/LocalLLaMA/comments/1pvm03k/mac_mini_m4_16gb_any_useful_models/
Infinite100p
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvm03k
false
null
t3_1pvm03k
/r/LocalLLaMA/comments/1pvm03k/mac_mini_m4_16gb_any_useful_models/
false
false
self
0
null
I made a CLI to train LLMs in 2 commands (no PyTorch boilerplate)
10
Hey, I made a CLI to train LLMs super easily. Instead of lots of PyTorch boilerplate you just run cleanai --init-config config.json cleanai --new --config config.json --pretrain --train It's super easy to use, made in C with no ML libs; the source is available on GitHub along with an install script (https://github.com/willmil11/cleanai-c) Interesting stuff: - init-config asks you questions and explains everything, so no need to worry about that. - there's a checkpoint CLI every epoch to stop training, test the model, or make adjustments; if you're not there, training auto-continues after 30 seconds - for Windows users, use WSL2 Note: for the install script you need fish shell: Debian/Ubuntu: sudo apt install fish Arch/Manjaro: sudo pacman -S fish Fedora/RHEL: sudo dnf install fish openSUSE: sudo zypper install fish Alpine: sudo apk add fish macOS (Homebrew): brew install fish And make sure your clang is not cosplaying as GCC if you have it. (Sometimes some distros like to have clang aliased as gcc; my install script should tell you if that's the case and ask you for the real GCC command.) Merry Christmas y'all :)
2025-12-25T20:21:50
https://www.reddit.com/r/LocalLLaMA/comments/1pvly0g/i_made_a_cli_to_train_llms_in_2_commands_no/
NoHotel8779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvly0g
false
null
t3_1pvly0g
/r/LocalLLaMA/comments/1pvly0g/i_made_a_cli_to_train_llms_in_2_commands_no/
false
false
self
10
{'enabled': False, 'images': [{'id': '6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=108&crop=smart&auto=webp&s=7165410baf9c1e2cc546c3c324dda7ee7e249757', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=216&crop=smart&auto=webp&s=9aa77c3884639d09a408e7e833d331ffdc0946a9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=320&crop=smart&auto=webp&s=7c8d82020dc54f6adc76d1e67ad6e64485c38fd0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=640&crop=smart&auto=webp&s=cabe9a76677e875c0d203188dd9572c98bdb2434', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=960&crop=smart&auto=webp&s=85f304ee8cad1d5536b275cc3a38762188cdfea1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=1080&crop=smart&auto=webp&s=936fc087da55db2b2f490785bffb9896eaa4a328', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?auto=webp&s=f1db18b0e0dbe9e5bc522ac46746bb456851b1df', 'width': 1200}, 'variants': {}}]}
I made a CLI to train LLMs in 2 commands (no PyTorch boilerplate)
19
Hey, I made a CLI to train LLMs super easily. Instead of lots of PyTorch boilerplate you just run ```bash cleanai --init-config config.json cleanai --new --config config.json --pretrain --train ``` It's super easy to use, made in C with no ML libs; the source is available on GitHub along with an install script (https://github.com/willmil11/cleanai-c) Interesting stuff: - init-config asks you questions and explains everything, so no need to worry about that. - there's a checkpoint CLI every epoch to stop training, test the model, or make adjustments; if you're not there, training auto-continues after 30 seconds - for Windows users, use WSL2 Note: for the install script you need fish shell: Debian/Ubuntu: ```bash sudo apt install fish ``` Arch/Manjaro: ```bash sudo pacman -S fish ``` Fedora/RHEL: ```bash sudo dnf install fish ``` openSUSE: ```bash sudo zypper install fish ``` Alpine: ```bash sudo apk add fish ``` macOS (Homebrew): ```bash brew install fish ``` And make sure your clang is not cosplaying as GCC if you have it. (Sometimes some distros like to have clang aliased as gcc; my install script should tell you if that's the case and ask you for the real GCC command.)
2025-12-25T20:06:08
https://www.reddit.com/r/LocalLLaMA/comments/1pvlm4x/i_made_a_cli_to_train_llms_in_2_commands_no/
NoHotel8779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvlm4x
false
null
t3_1pvlm4x
/r/LocalLLaMA/comments/1pvlm4x/i_made_a_cli_to_train_llms_in_2_commands_no/
false
false
self
19
{'enabled': False, 'images': [{'id': '6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=108&crop=smart&auto=webp&s=7165410baf9c1e2cc546c3c324dda7ee7e249757', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=216&crop=smart&auto=webp&s=9aa77c3884639d09a408e7e833d331ffdc0946a9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=320&crop=smart&auto=webp&s=7c8d82020dc54f6adc76d1e67ad6e64485c38fd0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=640&crop=smart&auto=webp&s=cabe9a76677e875c0d203188dd9572c98bdb2434', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=960&crop=smart&auto=webp&s=85f304ee8cad1d5536b275cc3a38762188cdfea1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?width=1080&crop=smart&auto=webp&s=936fc087da55db2b2f490785bffb9896eaa4a328', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6jvJJs_LSoP_AssxwbNEUqPxQ_FuFYpDX-kbCJpOB6c.png?auto=webp&s=f1db18b0e0dbe9e5bc522ac46746bb456851b1df', 'width': 1200}, 'variants': {}}]}
Minimax 2.1 still hasn't solved the multilingual mixing problem.
9
I've been using MiniMax 2.1 with OpenRouter, and the model's performance is satisfactory. Furthermore, it's lighter than GLM while maintaining comparable performance. But here's the problem: they still haven't solved the multilingual mixing problem. Was the mixing problem a difficult one for them? Or was it a trade-off with performance?
2025-12-25T19:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1pvkxm3/minimax_21_still_hasnt_solved_the_multilingual/
Bitter-Breadfruit6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvkxm3
false
null
t3_1pvkxm3
/r/LocalLLaMA/comments/1pvkxm3/minimax_21_still_hasnt_solved_the_multilingual/
false
false
self
9
null
Pure Python library for extracting text from legacy Office files (.doc, .xls, .ppt) — built for RAG
1
[removed]
2025-12-25T19:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1pvklv2/pure_python_library_for_extracting_text_from/
Many_Camp2782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvklv2
false
null
t3_1pvklv2
/r/LocalLLaMA/comments/1pvklv2/pure_python_library_for_extracting_text_from/
false
false
self
1
null
Local LLMs unstable and buggy (Linux Mint)
0
Hey all, I am having problems with local LLMs recently. I cannot tell if it's an Ollama issue or specifically an Open WebUI issue. Firstly: the models are very buggy, take almost a minute to start processing, and have problems returning outputs, specifically with Qwen3-14B or, in fact, any 'thinking' model. They take ages to load, even on GPU, and to begin processing, and when they do, the model sometimes gets stuck in thinking loops or outright refuses to unload when asked to. Second: when trying out Qwen3-VL from Ollama, even with all the updates and when used in Open WebUI, the model is outright unusable for me. It either keeps thinking forever, refuses to load, or refuses to unload, making me have to open the terminal and kill it with sudo. Rinse and repeat. Has anyone been having problems recently, or is it just me? I am running Open WebUI through pip (I don't like Docker) and it's been very frustrating to use. I really don't know if it's an Ollama issue or an Open WebUI issue.
2025-12-25T19:17:50
https://www.reddit.com/r/LocalLLaMA/comments/1pvkkuc/locals_llms_unstable_and_buggy_linux_mint/
OcelotOk5761
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pvkkuc
false
null
t3_1pvkkuc
/r/LocalLLaMA/comments/1pvkkuc/locals_llms_unstable_and_buggy_linux_mint/
false
false
self
0
null