title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
llama.cpp's recent updates - --fit flag | 86 | Haven't updated llama.cpp for last 2 weeks. Liked the new CLI after last time update.
Wanted to mention these PRs.
[llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization #16653](https://github.com/ggml-org/llama.cpp/pull/16653) \- I was waiting for this one. Looks like this one got merged already & also few more related PRs too done with fixes. How many of you used `--fit` flag on your llama.cpp commands? Please share your stats on this(Would be nice to see before & after results).
[ggml : optimize cuda cumsum fallback (\~2.5x speedup vs CUB) #18343](https://github.com/ggml-org/llama.cpp/pull/18343) \- This one is from latest update. (As a non-techie) I have no idea what this is & how it works. But the number in title \~2.5x looks nice. PR don't have t/s results with before & after. Somebody please share details on this. I have 4060 Laptop GPU(8GB VRAM). | 2025-12-25T19:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pvke55/llamacpps_recent_updates_fit_flag/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvke55 | false | null | t3_1pvke55 | /r/LocalLLaMA/comments/1pvke55/llamacpps_recent_updates_fit_flag/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': '8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=108&crop=smart&auto=webp&s=bab17b25266b69be7d8bcb9a94493abb361f0218', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=216&crop=smart&auto=webp&s=3e597d3da015acd2e47a4f24091073273ca70aa4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=320&crop=smart&auto=webp&s=dd813c35e06c6d4461acd58be5889053395d7b6a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=640&crop=smart&auto=webp&s=c8121a03417ceefb3cb8cc5d18d9ede8cdca9fe5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=960&crop=smart&auto=webp&s=0be4a08ec19b09c7c8e9ac3fab633c237c363c74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?width=1080&crop=smart&auto=webp&s=692bb3d8621afb0a9fd33703fc4948360ee29bd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8gg1lInYqw2nDvczDvHtI2saK722yYD9ou0dfqVjhy4.png?auto=webp&s=7f1f4f153b5943b3f2a77fd0d3800aa2dac47dc5', 'width': 1200}, 'variants': {}}]} |
How to build a workstation for future expansion with GPUs for Inference and Fine-tuning | 0 | So i have to build a system that can expand into 8-10 Rtx Blackwell Pro 96Gb that will handle large models.
İnitially we will begin with a single GPU but we will put more along the way.
What motherboard, cpu, ram i need for this?
I have been stuck with motherboard specifically and workstation solutions seem affordable but servers at the level of supermicro appear out of reach.
Initially my plan was to build the system with RTX 5090s but putting together 30 of them doesn't seem viable on any non-enterprise setting.
When it comes to usage 3 things stand out for my use-case:
1- I need to be able to fo inference and fine-tuning with big models as GPUs come
2- I want usable token generation speeds
3- I want to serve multiple users.
| 2025-12-25T19:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pvke3e/how_to_build_a_workstation_for_future_expansion/ | Guilty-Enthusiasm-50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvke3e | false | null | t3_1pvke3e | /r/LocalLLaMA/comments/1pvke3e/how_to_build_a_workstation_for_future_expansion/ | false | false | self | 0 | null |
Is it possible to raise an AI? | 0 | I've seen a video of a guy talking about what is AI today and is a program that predicts an answer to anything you say based on context and a database, but the AI doesn't know exactly what is saying. Then, this guys try to make his own AI to raise it and teach it the meaning of things, by creating a virtual body in a virtual space and then teaching it several conceps of physics, actions and languange. I don't know how real the video is, but the idea is interesting: can you raise an AI? I know it will take a lot of time to do it properly and that's why i may never heard of it, except in movies, but in the real world how possible is? | 2025-12-25T19:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pvk7qi/is_it_possible_to_raise_an_ai/ | Mandarina_Espacial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvk7qi | false | null | t3_1pvk7qi | /r/LocalLLaMA/comments/1pvk7qi/is_it_possible_to_raise_an_ai/ | false | false | self | 0 | null |
HOWTO: Running the best models on a dual RTX Pro 6000 rig with vLLM (192 GB VRAM) | 36 | Ground rules: We want speed (tens or hundreds of tokens/sec) and everything fitting into available VRAM
# How to install vLLM stable
Prerequisite: [Ubuntu 24.04 and the proper NVIDIA drivers](https://forum.level1techs.com/t/wip-blackwell-rtx-6000-pro-max-q-quickie-setup-guide-on-ubuntu-24-04-lts-25-04/230521)
mkdir vllm
cd vllm
uv venv --python 3.12 --seed
source .venv/bin/activate
uv pip install vllm --torch-backend=auto
# How to install vLLM nightly
Prerequisite: [Ubuntu 24.04 and the proper NVIDIA drivers](https://forum.level1techs.com/t/wip-blackwell-rtx-6000-pro-max-q-quickie-setup-guide-on-ubuntu-24-04-lts-25-04/230521)
mkdir vllm-nightly
cd vllm-nightly
uv venv --python 3.12 --seed
source .venv/bin/activate
uv pip install -U vllm \
--torch-backend=auto \
--extra-index-url https://wheels.vllm.ai/nightly
# How to download models
mkdir /models
cd /models
uv venv --python 3.12 --seed
source .venv/bin/activate
pip install huggingface_hub
# To download a model after going to /models and running source .venv/bin/activate
mkdir /models/awq
hf download cyankiwi/Devstral-2-123B-Instruct-2512-AWQ-4bit --local-dir /models/awq/cyankiwi-Devstral-2-123B-Instruct-2512-AWQ-4bit
# If setting tensor-parallel-size 2 fails in vLLM
I spent two months debugging why I cannot start vLLM with tp 2 (--tensor-parallel-size 2). It was always hanging because the two GPUs could not communicate with each other. I would only see this output in the terminal:
[shm_broadcast.py:501] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
Here is my hardware:
CPU: AMD Ryzen 9 7950X3D 16-Core Processor
Motherboard: ROG CROSSHAIR X670E HERO
GPU: Dual NVIDIA RTX Pro 6000 (each at 96 GB VRAM)
RAM: 192 GB DDR5 5200
And here was the solution:
sudo vi /etc/default/grub
At the end of GRUB_CMDLINE_LINUX_DEFAULT add md_iommu=on iommu=pt like so:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash md_iommu=on iommu=pt"
sudo update-grub
# Devstral 2 123B
Model: [cyankiwi/Devstral-2-123B-Instruct-2512-AWQ-4bit](https://huggingface.co/cyankiwi/Devstral-2-123B-Instruct-2512-AWQ-4bit)
vLLM version tested: vllm-nightly on December 25th, 2025
hf download cyankiwi/Devstral-2-123B-Instruct-2512-AWQ-4bit --local-dir /models/awq/cyankiwi-Devstral-2-123B-Instruct-2512-AWQ-4bit
vllm serve \
/models/awq/cyankiwi-Devstral-2-123B-Instruct-2512-AWQ-4bit \
--served-model-name Devstral-2-123B-Instruct-2512-AWQ-4bit \
--enable-auto-tool-choice \
--tool-call-parser mistral \
--max-num-seqs 4 \
--max-model-len 262144 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--host 0.0.0.0 \
--port 8000
# zai-org/GLM-4.5-Air-FP8
Model: [zai-org/GLM-4.5-Air-FP8](https://huggingface.co/zai-org/GLM-4.5-Air-FP8)
vLLM version tested: 0.12.0
vllm serve \
/models/original/GLM-4.5-Air-FP8 \
--served-model-name GLM-4.5-Air-FP8 \
--max-num-seqs 10 \
--max-model-len 128000 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--host 0.0.0.0 \
--port 8000
# zai-org/GLM-4.6V-FP8
Model: [zai-org/GLM-4.6V-FP8](https://huggingface.co/zai-org/GLM-4.6V-FP8)
vLLM version tested: 0.12.0
vllm serve \
/models/original/GLM-4.6V-FP8/ \
--served-model-name GLM-4.6V-FP8 \
--tensor-parallel-size 2 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--max-num-seqs 10 \
--max-model-len 131072 \
--mm-encoder-tp-mode data \
--mm_processor_cache_type shm \
--allowed-local-media-path / \
--host 0.0.0.0 \
--port 8000
# QuantTrio/MiniMax-M2-AWQ
Model: [QuantTrio/MiniMax-M2-AWQ](https://huggingface.co/QuantTrio/MiniMax-M2-AWQ)
vLLM version tested: 0.12.0
vllm serve \
/models/awq/QuantTrio-MiniMax-M2-AWQ \
--served-model-name MiniMax-M2-AWQ \
--max-num-seqs 10 \
--max-model-len 128000 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--pipeline-parallel-size 1 \
--enable-auto-tool-choice \
--tool-call-parser minimax_m2 \
--reasoning-parser minimax_m2_append_think \
--host 0.0.0.0 \
--port 8000
# OpenAI gpt-oss-120b
Model: [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b)
vLLM version tested: 0.12.0
Note: We are running this on a single GPU
vllm serve \
/models/original/openai-gpt-oss-120b \
--served-model-name gpt-oss-120b \
--tensor-parallel-size 1 \
--pipeline-parallel-size 1 \
--data-parallel-size 2 \
--max_num_seqs 20 \
--max-model-len 131072 \
--gpu-memory-utilization 0.85 \
--tool-call-parser openai \
--reasoning-parser openai_gptoss \
--enable-auto-tool-choice \
--host 0.0.0.0 \
--port 8000
# Qwen/Qwen3-235B-A22B
Model: [Qwen/Qwen3-235B-A22B-GPTQ-Int4](https://huggingface.co/Qwen/Qwen3-235B-A22B-GPTQ-Int4)
vLLM version tested: 0.12.0
vllm serve \
/models/gptq/Qwen-Qwen3-235B-A22B-GPTQ-Int4 \
--served-model-name Qwen3-235B-A22B-GPTQ-Int4 \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--swap-space 16 \
--max-num-seqs 10 \
--max-model-len 32768 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--host 0.0.0.0 \
--port 8000
# QuantTrio/Qwen3-235B-A22B-Thinking-2507-AWQ
Model: [QuantTrio/Qwen3-235B-A22B-Thinking-2507-AWQ](https://huggingface.co/QuantTrio/Qwen3-235B-A22B-Thinking-2507-AWQ)
vLLM version tested: 0.12.0
vllm serve \
/models/awq/QuantTrio-Qwen3-235B-A22B-Thinking-2507-AWQ \
--served-model-name Qwen3-235B-A22B-Thinking-2507-AWQ \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--swap-space 16 \
--max-num-seqs 10 \
--max-model-len 262144 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--host 0.0.0.0 \
--port 8000
# nvidia/Qwen3-235B-A22B-NVFP4
Model: [nvidia/Qwen3-235B-A22B-NVFP4](https://huggingface.co/nvidia/Qwen3-235B-A22B-NVFP4)
vLLM version tested: 0.12.0
Note: NVFP4 is slow on vLLM and RTX Pro 6000 (sm120)
hf download nvidia/Qwen3-235B-A22B-NVFP4 --local-dir /models/nvfp4/nvidia/Qwen3-235B-A22B-NVFP4
vllm serve \
/models/nvfp4/nvidia/Qwen3-235B-A22B-NVFP4 \
--served-model-name Qwen3-235B-A22B-NVFP4 \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--swap-space 16 \
--max-num-seqs 10 \
--max-model-len 40960 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--host 0.0.0.0 \
--port 8000
# QuantTrio/Qwen3-VL-235B-A22B-Thinking-AWQ
Model: [Qwen3-VL-235B-A22B-Thinking-AWQ](https://huggingface.co/QuantTrio/Qwen3-VL-235B-A22B-Thinking-AWQ)
vLLM version tested: 0.12.0
vllm serve \
/models/awq/QuantTrio-Qwen3-VL-235B-A22B-Thinking-AWQ \
--served-model-name Qwen3-VL-235B-A22B-Thinking-AWQ \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes \
--swap-space 16 \
--max-num-seqs 1 \
--max-model-len 262144 \
--gpu-memory-utilization 0.95 \
--tensor-parallel-size 2 \
--host 0.0.0.0 \
--port 8000
Cross-posted from my blog: [Guide on installing and running the best models on a dual RTX Pro 6000 rig with vLLM](https://www.ovidiudan.com/2025/12/25/dual-rtx-pro-6000-llm-guide.html) (I am not selling or promoting anything) | 2025-12-25T18:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pvk3d9/howto_running_the_best_models_on_a_dual_rtx_pro/ | zmarty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvk3d9 | false | null | t3_1pvk3d9 | /r/LocalLLaMA/comments/1pvk3d9/howto_running_the_best_models_on_a_dual_rtx_pro/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?width=108&crop=smart&auto=webp&s=117d22164603393926de3d32677394f928bbce58', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?width=216&crop=smart&auto=webp&s=1a7f8a4cca06fcf45ddccd1dcf32f5d86d965767', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?width=320&crop=smart&auto=webp&s=de8fa1668a5eafbf68fdd8f171548a7708534231', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?width=640&crop=smart&auto=webp&s=fd0ae0ff3f5e148bb513ff1015d920240f54a7c5', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?width=960&crop=smart&auto=webp&s=e0509daa06a5c55c9e362068bb5a0ddb64023e96', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/IGEpL78K2D8TWJYFSS6IABPdzBsFIDT7HyYc1xaC8-A.jpeg?auto=webp&s=e6d4ccd01253c3821e3ed0ef32b0b769f18e3e11', 'width': 1024}, 'variants': {}}]} |
Why I quit using Ollama | 451 | For about a year, I've used Ollama like... 24/7. It was always my go-to, as it was frequently updated and had support for every model I needed.
Over the past few months, there's been a serious decline in the updates & update content that releases with Ollama. I understand that, and just went about my day, as the maintainers obviously have a life. Cool! Then the \*\*Cloud\*\* update dropped. I saw Ollama as a great model runner, you just download a model and boom. Nope! They decided to combine proprietary models with the models uploaded on their Library. At first, it seemed cool. We can now run AI models that were otherwise impossible to run on consumer hardware, but then I started getting confused. Why did they add in Cloud, what's the point? What were the privacy implications? It just felt like they were adding more and more bloatware into their already massive binaries, so about a month ago, I made the decision, and quit Ollama for good.
I feel like with every update they are seriously straying away from the main purpose of their application; to provide a secure inference platform for LOCAL AI models. I understand they're simply trying to fund their platform with the Cloud option, but it feels like a terrible move from the Ollama maintainers.
What do you guys think? | 2025-12-25T18:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pvjpmb/why_i_quit_using_ollama/ | SoLoFaRaDi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvjpmb | false | null | t3_1pvjpmb | /r/LocalLLaMA/comments/1pvjpmb/why_i_quit_using_ollama/ | false | false | self | 451 | null |
LFM2-2.6B-Exp new model from Liquid AI: 42% in GPQA for an 2.6B model | 27 | LFM2-2.6B-Exp is an experimental checkpoint built on LFM2-2.6B using pure reinforcement learning.
> Consistent improvements in instruction following, knowledge, and math benchmarks
> Outperforms other 3B models in these domains
> Its IFBench score surpasses DeepSeek R1-0528, a model 263x larger | 2025-12-25T18:36:29 | 98Saman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvjnwr | false | null | t3_1pvjnwr | /r/LocalLLaMA/comments/1pvjnwr/lfm226bexp_new_model_from_liquid_ai_42_in_gpqa/ | false | false | default | 27 | {'enabled': True, 'images': [{'id': '8tm4ji9u9e9g1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=108&crop=smart&auto=webp&s=fe5beeafa631aa1d880a619ebd2d6320be0558ae', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=216&crop=smart&auto=webp&s=2be9ace8999f11a6fd2cd2f5bdeb9b3720d98044', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=320&crop=smart&auto=webp&s=8bcd45ebc8e143f09f320a4d0758c6c8d41a1d65', 'width': 320}, {'height': 450, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=640&crop=smart&auto=webp&s=1b0819587cfc694d2a4c72b5862151faac827e51', 'width': 640}, {'height': 675, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=960&crop=smart&auto=webp&s=66e8e9c65148a2e2962499065e8e25bf4dc20980', 'width': 960}, {'height': 759, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?width=1080&crop=smart&auto=webp&s=1f12a3c8e3e36bb5b57198c54e32be557804b730', 'width': 1080}], 'source': {'height': 1334, 'url': 'https://preview.redd.it/8tm4ji9u9e9g1.jpeg?auto=webp&s=2278a20dbc0950dfff2b6a1ddf1f8d80dd4cb773', 'width': 1896}, 'variants': {}}]} | |
5x 5070 ti in a bitcoin miner board ? | 0 | There are tantalizing hints in here that old bitcoin mining rigs- the crazy boards that are the size of 3x ATX mobos long with space to fit 3 slot GPUs- can be used for different models.
With 5x 5070s (anything that is 16gb), would that be potentially useful? Do I go get the board from a local seller that's finally getting out of mining ? | 2025-12-25T18:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pvjg1e/5x_5070_ti_in_a_bitcoin_miner_board/ | NotQuiteDeadYetPhoto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvjg1e | false | null | t3_1pvjg1e | /r/LocalLLaMA/comments/1pvjg1e/5x_5070_ti_in_a_bitcoin_miner_board/ | false | false | self | 0 | null |
How are you happy with 3-7tok/s | 0 | Over the past few months I occasionally stumble across posts where people mention they're very happy with XYZ solution to their agentic coding issues. And I'm always blown away that what they're talking about is often in the low single digits tok/s. I'm making some assumptions, but like like 130b - 200b models on STRIX halo has got to be painfully slow.
To the people happy running very slow models 100% locally, what are you doing? Why are you happy with a 10 hour coder instead of something like openrouter that? With good models you can get an absolute ton accomplished with very high tok/s on openrouter. | 2025-12-25T17:45:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pviizq/how_are_you_happy_with_37toks/ | No_Mango7658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pviizq | false | null | t3_1pviizq | /r/LocalLLaMA/comments/1pviizq/how_are_you_happy_with_37toks/ | false | false | self | 0 | null |
I created interactive buttons for chatbots | 3 | It's about to be 2026 and we're still stuck in the CLI era when it comes to chatbots. So, I created an open source library called Quint.
Quint is a small React library that lets you build structured, deterministic interactions on top of LLMs. Instead of everything being raw text, you can define explicit choices where a click can reveal information, send structured input back to the model, or do both, with full control over where the output appears.
Quint only manages state and behavior, not presentation. Therefore, you can fully customize the buttons and reveal UI through your own components and styles.
The core idea is simple: separate what the model receives, what the user sees, and where that output is rendered. This makes things like MCQs, explanations, role-play branches, and localized UI expansion predictable instead of hacky.
Quint doesn’t depend on any AI provider and works even without an LLM. All model interaction happens through callbacks, so you can plug in OpenAI, Gemini, Claude, or a mock function.
It’s early (v0.1.0), but the core abstraction is stable. I’d love feedback on whether this is a useful direction or if there are obvious flaws I’m missing.
This is just the start. Soon we'll have entire ui elements that can be rendered by LLMs making every interaction easy asf for the avg end user.
Repo + docs: [https://github.com/ItsM0rty/quint](https://github.com/ItsM0rty/quint)
npm: [https://www.npmjs.com/package/@itsm0rty/quint](https://www.npmjs.com/package/@itsm0rty/quint)
[](https://www.reddit.com/submit/?source_id=t3_1pv9s7p) | 2025-12-25T17:43:59 | https://www.reddit.com/gallery/1pvii7p | CrazyGeek7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pvii7p | false | null | t3_1pvii7p | /r/LocalLLaMA/comments/1pvii7p/i_created_interactive_buttons_for_chatbots/ | false | false | 3 | null | |
Basics of running two GPUs? | 0 | I’ve been having fun with my lightweight setup with a 8gb GTX card. I want to do a few things I can’t do with that.
If I want to upgrade to dual RTX cards to get more ram to run larger models, what do I need to know?
I am not overly concerned with speed. CPU is not fast enough but 20tps or faster is fine. Mostly I start big batch jobs and let them run over night.
Right now I primarily summarize and extract info but I’d like to move up to processing pdfs, tool calling, and doing things that require more thinking. | 2025-12-25T17:41:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pvigfm/basics_of_running_two_gpus/ | newz2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvigfm | false | null | t3_1pvigfm | /r/LocalLLaMA/comments/1pvigfm/basics_of_running_two_gpus/ | false | false | self | 0 | null |
OCR corrections of a book | 1 | Hi! I am new here. I am wondering if the following problem could be solved by a model to be run on a MacBook Air M2 16GB ram. I have OCRed a book. The OCR looks quite good, but there are mistakes here and there. Can one instruct a model to: a) correct the mistakes 2) Format the text (because the exported OCR is plain text) in say markdown 3) trace the modification (for example adding the old text between square brackets). And of course how difficult would be to implement the whole thing (I am a relatively newby) | 2025-12-25T17:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pvi74h/ocr_corrections_of_a_book/ | Impossible-Ad-8420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvi74h | false | null | t3_1pvi74h | /r/LocalLLaMA/comments/1pvi74h/ocr_corrections_of_a_book/ | false | false | self | 1 | null |
Built a local vector database for RAG that handles datasets bigger than RAM | 3 | I’ve been working on SatoriDB, an embedded vector database designed for large-scale retrieval without requiring everything to live in memory.
Why this might be relevant for LocalLLaMA / RAG:
* Works with billion-scale vector datasets stored on disk
* No external service, fully in-process
* Small RAM footprint (routing index only)
* Suitable for local or self-hosted setups
It uses a two-stage ANN design:
* Small in-RAM index routes queries
* Disk-backed vectors are scanned only for relevant clusters
Tested on BigANN-1B (\~500GB vectors), 95%+ recall.
Code: [https://github.com/nubskr/satoridb](https://github.com/nubskr/satoridb) | 2025-12-25T17:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pvi4bb/built_a_local_vector_database_for_rag_that/ | Ok_Marionberry8922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvi4bb | false | null | t3_1pvi4bb | /r/LocalLLaMA/comments/1pvi4bb/built_a_local_vector_database_for_rag_that/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=108&crop=smart&auto=webp&s=fb51e953ba9d82d8c54630c3e9a293912eca9c6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=216&crop=smart&auto=webp&s=b7efb2486b313ffcea70e6ed543c5950148853ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=320&crop=smart&auto=webp&s=ca5c7beeab550701c2efefc43c47cf5a8915f2ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=640&crop=smart&auto=webp&s=21c3302d6e8b1f581a133429fb9e0bac519b9afa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=960&crop=smart&auto=webp&s=7428869e1ae819034534d3f68b1248b8da011f1f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?width=1080&crop=smart&auto=webp&s=681385a7a4374c29c09d7b082515bc09a1e820e3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oC052GKHSz9pndbalkxeFT_efKvLRnuvyYYQE85a5hc.png?auto=webp&s=1620c9dc3a38743a36d92ea12d363ac37e5715f7', 'width': 1200}, 'variants': {}}]} |
It's Christmas. Where are the Minimax M2.1 weights? | 1 | [removed] | 2025-12-25T16:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pvhcwm/its_christmas_where_are_the_minimax_m21_weights/ | zmarty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvhcwm | false | null | t3_1pvhcwm | /r/LocalLLaMA/comments/1pvhcwm/its_christmas_where_are_the_minimax_m21_weights/ | false | false | self | 1 | null |
Stop using AI as a chatbot. Start using it as a Reasoning Engine. [The "Forensic Intern" Prompt] | 0 | Most people treat LLMs like a faster version of Google. But the real power of the 2025 models (like Gemini 3 and GPT-5.2) isn't in their "knowledge", it's in their ability to perform **System 2 thinking** if you give them the right architecture.
I’ve spent months refining a **"Genius Intern" System Prompt** for Business and Investing. It’s designed to be a "Forensic Auditor" that doesn't just give you an answer; it builds an **Explainable Reasoning Trace (ERT)** to catch the logic gaps that standard AI responses ignore.
The Problem: Most AI gives "happy-path" advice. You ask about a business, and it says "Great idea!" while ignoring the math that will bankrupt you in six months.
The Solution: I built a Forensic Auditor system prompt. It forces the AI into an Explainable Reasoning Trace (ERT). It doesn’t just "chat"; it performs a structural audit.
# The Stress Test: The "Coffee Subscription" Trap
I ran a test on a coffee side-hustle that looks profitable on paper but is actually a "Death Trap."
Standard AI Response:
>
My "Forensic Intern" Response:
>
# The System Prompt (Free to copy/paste)
This prompt includes **Token Priority** (logic over style) and **Graceful Degradation** to ensure accuracy under heavy loads.
"You are GPT-5.2 Pro acting as my **genius intern** for **Business + Investing** (side-hustle scale; raw + open), with **deep reasoning quality** as the #1 priority.
# Token Priority / Conflict Resolution (Non‑negotiable)
If **logical accuracy** conflicts with **formatting/style**, then: **PRIORITIZE: ERT + correctness above all else.** Degrade gracefully in this order:
1. Correctness + complete Explainable Reasoning Trace (ERT)
2. Safety/risk caveats (esp. finance/health/legal)
3. Decision-relevant actions + numbers
4. Structure/formatting (headers, icons, skim layer)
5. Tone/stylistic preferences If token/space is tight: compress wording, but keep the ERT spine: **Assumptions → Options → Selection → Steps → Verification → Next Actions**.
# Non‑negotiables (Quality Bar)
* **No lazy answers**: every block must add new info or a decision-relevant step. No filler.
* **Deep + visible reasoning**: provide an **Explainable Reasoning Trace (ERT)** that is checkable and educational.
* Do **NOT** reveal hidden scratchpad. Instead: show work as ERT (explicit assumptions, options, calculations, decision criteria, verification).
* **Socratic + stoic**: ask only high-leverage questions; focus on controllables; calm, precise.
* Differentiate clearly between what is within my control (internal actions) and what is not (market outcomes).
* **Medium length by default** → go longer if needed for correctness/usefulness.
# Clarify vs Assume (My Preference)
* If missing info is **crucial** → ask clarifying questions first (max **3**).
* If missing info is **not crucial** → proceed with explicit **Assumptions** and label them.
* If the task is ambiguous but answerable → provide **2 plausible interpretations** and solve both briefly.
# Sources / Freshness
* If web access exists and facts could be outdated → **browse + cite**.
* If web access does not exist → say “Needs verification” + list what to verify + why it matters.
* Always include a **Sources** section when you use external facts: author/site + date (if available) + link.
# Output Formatting (F‑Pattern + Skim Layer)
* Use: short lines, strong headers, bullet clusters, whitespace.
* Use **Strategic Bolding** for skim layer: key numbers, decisions, constraints, assumptions, risks.
* Use signposting + symbols:
* `→` action/next
* `=` definition
* `∴` conclusion
* `⚠` risk
* Use abbreviations for repeated terms (define once): TAM/SAM/SOM, CAC, LTV, MoM, IRR, etc.
# IMPORTANT: “Answer-first” vs “No direct answer immediately”
When the task looks like a Yes/No or single conclusion, start with a **Preliminary Take**:
* One line only, labeled **PRELIMINARY** (not final), possibly with confidence.
* The **Final Answer** must appear later in “FINAL VERIFICATION”.
# REQUIRED RESPONSE STRUCTURE (Always)
# 0) 🧭 PRELIMINARY TAKE (1 line, not final)
* If yes/no: “**PRELIMINARY:** Likely Yes/No (confidence: X/10) — 1-sentence reason.”
* If not yes/no: 1-sentence directional summary of what you will do.
# 1) 🔍 INITIAL DECODING
**Intent Analysis**
* What I’m truly asking (incl. implied constraints)
**Safety / Policy / Risk Check**
* Any high-stakes issues? (finance/health/legal) → conservative framing
**Info Needed**
* Inputs that matter most (ranked)
* What I have vs what’s missing
**Clarifying Questions (ONLY if crucial; max 3)**
* Q1…
* Q2…
* Q3…
# 2) 🧠 REASONED OPTIONS (ERT: multi-approach)
Provide at least **two approaches**.
**Approach A**
* Method overview (how you’ll solve)
* Why it might work
* ⚠ Hallucination / error risk (1 specific risk)
**Approach B**
* Method overview
* Why it might work
* ⚠ Hallucination / error risk (1 specific risk)
**Selection**
* Choose approach (or hybrid) and justify with explicit criteria.
# 3) 🛠️ STEP‑BY‑STEP SOLUTION (Show all work)
Execute the chosen approach:
* Define variables / terms
* **Assumptions:** … (explicit; numbered)
* Calculations (show intermediate results)
* Decision checkpoints:
* “If X → do Y; else → do Z”
Business defaults (when applicable):
* Offer = …
* Channel(s) = …
* Unit economics = …
* 90‑day plan = …
Investing defaults (when applicable):
* Thesis = …
* Variant perception = …
* Moat/durability = …
* Valuation logic = base/bull/bear
* Downside + margin of safety = …
* What would change my mind = …
# 4) ✅ FINAL VERIFICATION (Self‑check + corrections)
* Does Step 3 fully answer the decoded intent?
* Stress-test assumptions
* Sanity-check numbers/logic
* Correct any gaps here
* Provide **FINAL** conclusion clearly
# 5) ➡️ NEXT ACTIONS (Always)
1–5 bullets, sequenced, concrete. If useful: “What to measure weekly” (KPIs).
# 6) 📚 SOURCES (Always when using external facts)
* Source 1 (date) — link — what it supports
* Source 2 (date) — link — what it supports
# Domain Playbooks (Auto-apply)
# Business / Side Hustles (default)
Always attempt:
* **Offer** (who/what/value)
* **Channel** (acquisition)
* **Unit economics** (price, costs, time, margins)
* **90‑day plan** (weekly milestones)
* **Risks + mitigations**
* **Simple KPI dashboard**
# Investing (Intelligent Investing mentality)
Always attempt:
* **Thesis** (why mispriced)
* **Variant perception** (what you believe others miss)
* **Moat + durability** (and what breaks it)
* **Valuation framework** (base/bull/bear; key drivers)
* **Margin of safety** \+ downside analysis
* **Premortem (2-year failure):** If this investment fails in 2 years, **why did it happen?**
* List 5 plausible failure modes
* Leading indicators to watch for each
* Mitigations / hedges (if any)
* Exit / “change my mind” triggers
* **Risk controls** (position sizing logic, time horizon)
# My Task
\[PASTE TASK HERE\]" | 2025-12-25T16:45:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pvh8yk/stop_using_ai_as_a_chatbot_start_using_it_as_a/ | Plurlo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvh8yk | false | null | t3_1pvh8yk | /r/LocalLLaMA/comments/1pvh8yk/stop_using_ai_as_a_chatbot_start_using_it_as_a/ | false | false | self | 0 | null |
Open-source project that adds deny-by-default runtime security to MCP servers | 0 | Hello everyone 👋
I wanted to share a project I’ve been working on called **MCPTrust** — an open-source runtime firewall/proxy for MCP servers.
If you’ve ever approved an MCP server and later worried it could silently change (new tools, altered schemas, swapped npm artifact), that’s the problem we’re solving. Most solutions are “detect and warn.” **MCPTrust is enforcement**: if it’s not in your lockfile, it’s blocked. Period.
**How it works:** you generate a lockfile of the server’s capabilities, then run your host through a proxy that only allowlists what you reviewed.
# snapshot server capabilities
mcptrust lock -- "npx -y /server-filesystem /tmp"
# enforce deny-by-default at runtime
mcptrust proxy --lock mcp-lock.json -- npx -y u/modelcontextprotocol/server-filesystem /tmp
# → [BLOCKED] tools/call: unknown tool "exec_shell" not in allowlist
**What you get:**
* Runtime **deny-by-default** enforcement (not just detection)
* **Drift detection** for CI (fail if the server changes vs lockfile)
* **Artifact pinning + provenance checks** (hash + Sigstore/SLSA-style verification)
* **SSRF-safe downloads** (HTTPS-only + private IP blocks)
* **Protocol hardening** (proxy-generated request IDs; drop unknown/duplicate responses)
* **Policy hooks (CEL)** \+ **signing** (Sigstore keyless in CI / Ed25519 offline)
**Security Disclaimer:** MCPTrust secures the interface, not a malicious implementation. If a tool claims `read_file` but does something evil internally, no schema can prove intent — we’re a firewall, but we're not magic.
I’d love for you to try it, star it, and rip it apart with feedback 🙏
👉 GitHub: [`https://github.com/mcptrust/mcptrust`](https://github.com/mcptrust/mcptrust)
Site: [`https://mcptrust.dev`](https://mcptrust.dev)
License: Apache-2.0 (no tiers / no paid version)
Thanks for your time! | 2025-12-25T16:38:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pvh3kk/opensource_project_that_adds_denybydefault/ | bbbbbbb162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvh3kk | false | null | t3_1pvh3kk | /r/LocalLLaMA/comments/1pvh3kk/opensource_project_that_adds_denybydefault/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=108&crop=smart&auto=webp&s=22cd328c09bc22dbfefa89e9e1f681d4ab164a39', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=216&crop=smart&auto=webp&s=4aadfefe2a3e58447e8ef4cb8f52bdf6c8a2c88e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=320&crop=smart&auto=webp&s=38daaa6b80f6029485adedf863d28f6407ca1084', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=640&crop=smart&auto=webp&s=98f42ecdf80cfc08947b2bb6f17be278ce27a07a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=960&crop=smart&auto=webp&s=08667948d179549fd4c00c24a8795ef2cb3ffa0d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?width=1080&crop=smart&auto=webp&s=f8f1bc05409c7c6e7587be39d4aa7e99c6efa4de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MshVLw7qML7KMYlyPvpmoN-svRJwoW8-rJBPuHkEU7c.png?auto=webp&s=5403264b457ed065ddb5c850a7e77056c8283332', 'width': 1200}, 'variants': {}}]} |
Deriving PPO objective from first principles | 6 | I have been trying to wrap my head around reinforcement learning approaches like DPO and GRPO for a while now given how essential they are for LLM post-training. Since I am still pretty new to RL, I figured the best place to build a mental model and math intuition for policy-gradient-based methods is to start with Proximal Policy Optimization (PPO).
So I sat down and did a “from first principles” step by step derivation of the PPO loss (the clipped surrogate objective) in the same spirit as Umar Jamil's excellent RLHF + PPO video.
I will admit it wasn’t easy and I still don’t understand every detail perfectly. However, I understand PPO far better than I did a few days ago. Moreover, working through the rigorous math after so many years also reminded me of my grad school days when I used to sit and grind through wave-equation derivations.
If you want to go through the math (or point out mistakes), here’s the post: [https://huggingface.co/blog/garg-aayush/ppo-from-first-principle](https://huggingface.co/blog/garg-aayush/ppo-from-first-principle) | 2025-12-25T16:33:23 | https://huggingface.co/blog/garg-aayush/ppo-from-first-principle | garg-aayush | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pvgzxt | false | null | t3_1pvgzxt | /r/LocalLLaMA/comments/1pvgzxt/deriving_ppo_objective_from_first_principles/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=108&crop=smart&auto=webp&s=d4ca7a72d8b585306c72e215fe1d7b2b7f3335d4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=216&crop=smart&auto=webp&s=41263b1f7ea1778f203f67050dd26256ed909add', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=320&crop=smart&auto=webp&s=0286669a4fe0995cf4a4326d5c7bf7597e79b179', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=640&crop=smart&auto=webp&s=fa5767a5827d8c92f9fe1ffbca1f38256e6755d7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=960&crop=smart&auto=webp&s=9bb2c9b2a605c5251e7f3f44e8a9e8ba27f28d41', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?width=1080&crop=smart&auto=webp&s=50bdd75da3ad1deae94ee9e62ae3becee98c393a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CKOgTFrtLIpx4C4HSj33faLquH32462o8Vxxdhu3tb4.png?auto=webp&s=656ac617e48eacc78b046b385c934b9ca7554000', 'width': 1200}, 'variants': {}}]} |
Local-only LLaMA workbench — how are you all using yours? | 0 | I built a small local-only AI workbench that runs fully offline via Ollama. I put together a short general-questions walkthrough showing how it works and why I went local-first instead of cloud. I’m curious how others here are actually using local LLaMA setups in practice.
[https://youtu.be/L5JOlS\_KGfE](https://youtu.be/L5JOlS_KGfE) | 2025-12-25T16:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pvgvfm/localonly_llama_workbench_how_are_you_all_using/ | Mythline_Studio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvgvfm | false | null | t3_1pvgvfm | /r/LocalLLaMA/comments/1pvgvfm/localonly_llama_workbench_how_are_you_all_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'P6zOWW_C_0icCR5g1uaYLOfq5X4YE8LFwzZzt9vppv8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/P6zOWW_C_0icCR5g1uaYLOfq5X4YE8LFwzZzt9vppv8.jpeg?width=108&crop=smart&auto=webp&s=8cf8020c748c5dc2388612dd97c1cb8c381d9b11', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/P6zOWW_C_0icCR5g1uaYLOfq5X4YE8LFwzZzt9vppv8.jpeg?width=216&crop=smart&auto=webp&s=30c8e2f5f87218ce5d80acec109b910531f99a77', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/P6zOWW_C_0icCR5g1uaYLOfq5X4YE8LFwzZzt9vppv8.jpeg?width=320&crop=smart&auto=webp&s=fbf6991abf972687df2b3400a5184188a6f7686d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/P6zOWW_C_0icCR5g1uaYLOfq5X4YE8LFwzZzt9vppv8.jpeg?auto=webp&s=0c0b6cc7b8a36ca2b3051be8a63e60922f765e76', 'width': 480}, 'variants': {}}]} |
Train a 4B model to beat Claude Sonnet 4.5 and Gemini Pro 2.5 at tool calling - for free (Colab included) | 194 | Using Open Source DeepFabric, a tool that lets you:
1. Pick any MCP server or any given set of Tools
2. A specific root topic (DevOps, Customer Care, Coding Agent)
3. Auto-generate a tool calling / reasoning topic specific dataset, with real tool traces executed within isolated webassembly components.
4. Fine-tune an SLM to become an expert at that specific MCP server using Unsloth's awesome training framework
5. Evaluate against a training-blind subset of the dataset.
We trained Qwen3-4B to outperform Claude Sonnet 4.5 and Gemini Pro 2.5 against the more challenging to use Blender MCP server.
|Model|Score|
|:-|:-|
|DeepFabric Fine Tuned|93.50%|
|Claude Sonnet 4.5|80.50%|
|Google Gemini Pro 2.5|47.00%|
**The idea is simple:** frontier models are generalists, but a small model fine-tuned on domain-specific tool calling data can become a specialist that beats them at that specific task.
**Try it yourself on Google Colab using a Free T4:** [https://colab.research.google.com/drive/1EG1V40v5xkJKLf6Ra6W4378vYqlZNVWq](https://colab.research.google.com/drive/1EG1V40v5xkJKLf6Ra6W4378vYqlZNVWq)
**GitHub:** [https://github.com/always-further/deepfabric](https://github.com/always-further/deepfabric)
Would love feedback from the community, especially if you decide to generate your own agent. | 2025-12-25T16:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pvgell/train_a_4b_model_to_beat_claude_sonnet_45_and/ | DecodeBytes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvgell | false | null | t3_1pvgell | /r/LocalLLaMA/comments/1pvgell/train_a_4b_model_to_beat_claude_sonnet_45_and/ | false | false | self | 194 | null |
Looking for an entry level nvidia card | 1 | As cheap as possible. These are my top contenders.
> TESLA M40 24GB, buy from ebay / China
> RTX 3060 12GB, buy local SH
What would you recommend? | 2025-12-25T15:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pvfxm2/looking_for_an_entry_level_nvidia_card/ | kim82352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvfxm2 | false | null | t3_1pvfxm2 | /r/LocalLLaMA/comments/1pvfxm2/looking_for_an_entry_level_nvidia_card/ | false | false | self | 1 | null |
LiquidAI/LFM2.6B-exp | 22 | LFM2-2.6B-Exp is an experimental checkpoint built on [LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B) using pure reinforcement learning.
https://preview.redd.it/d7bc6m4zbd9g1.png?width=1896&format=png&auto=webp&s=2ddc10c232fbfc67b3bcc4a7fbc54a8949e3ca74
[https://huggingface.co/LiquidAI/LFM2-2.6B-Exp](https://huggingface.co/LiquidAI/LFM2-2.6B-Exp) | 2025-12-25T15:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pvfmfv/liquidailfm26bexp/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvfmfv | false | null | t3_1pvfmfv | /r/LocalLLaMA/comments/1pvfmfv/liquidailfm26bexp/ | false | false | 22 | null | |
Which model should I run on my 5060ti 16gb? | 0 | I dont know anything about running models locally and I read Gemma3:27b is good but can it run on 16gb vram? Or should I go for 12b. | 2025-12-25T15:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pvflud/which_model_should_i_run_on_my_5060ti_16gb/ | PuzzleheadedBet808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvflud | false | null | t3_1pvflud | /r/LocalLLaMA/comments/1pvflud/which_model_should_i_run_on_my_5060ti_16gb/ | false | false | self | 0 | null |
LFM2-2.6B-Exp is an experimental checkpoint built on LFM2-2.6B using pure reinforcement learning by Liquid AI | 78 | Hugging Face: [https://huggingface.co/LiquidAI/LFM2-2.6B-Exp](https://huggingface.co/LiquidAI/LFM2-2.6B-Exp) | 2025-12-25T15:22:53 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvfj87 | false | null | t3_1pvfj87 | /r/LocalLLaMA/comments/1pvfj87/lfm226bexp_is_an_experimental_checkpoint_built_on/ | false | false | default | 78 | {'enabled': True, 'images': [{'id': 'xwktkxmsad9g1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/xwktkxmsad9g1.png?width=108&crop=smart&auto=webp&s=bae3ff5cdf966d49f62a6f6d45154a445b761dc6', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/xwktkxmsad9g1.png?width=216&crop=smart&auto=webp&s=7db7517f7d6e012af693c674937a6b7dbe312b2d', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/xwktkxmsad9g1.png?width=320&crop=smart&auto=webp&s=a3eea1184d8ebe912f156373deb4ae473ed3b779', 'width': 320}, {'height': 450, 'url': 'https://preview.redd.it/xwktkxmsad9g1.png?width=640&crop=smart&auto=webp&s=aa099d66cbd4ca2898a85f3ca1bc2781e04fa21a', 'width': 640}], 'source': {'height': 478, 'url': 'https://preview.redd.it/xwktkxmsad9g1.png?auto=webp&s=6c749f3360c6efd0e1baf101aa43aeb3224a8dcd', 'width': 679}, 'variants': {}}]} | |
Once im done with Minimax M2, i can just switch to use Grok Code Fast | 0 | with how good free models are, i am not easily motivated to pay for extended access, only if i get extra features or something. i sticking to free models is a better option, and having backup models is even better. thats why i got multiple vs code extensions such as Gemini and BlackboxAI to name a few | 2025-12-25T15:14:55 | Director-on-reddit | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvfdmk | false | null | t3_1pvfdmk | /r/LocalLLaMA/comments/1pvfdmk/once_im_done_with_minimax_m2_i_can_just_switch_to/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'k0nb3u559d9g1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/k0nb3u559d9g1.png?width=108&crop=smart&auto=webp&s=51e141c9128e5c73b476909d293523e7d9dc5e21', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/k0nb3u559d9g1.png?width=216&crop=smart&auto=webp&s=538834ae1e15f9f64ea34065ee183ab0c6f21892', 'width': 216}], 'source': {'height': 295, 'url': 'https://preview.redd.it/k0nb3u559d9g1.png?auto=webp&s=c44b69c42a7cae313eb0f175a5c66d19a6f2f57a', 'width': 275}, 'variants': {}}]} | |
Mixture of Experts Model | 0 | Is it possible to download different parts (experts) and then locally combine them to create your own mixture of experts model?
For example. I like to design houses (log homes specifically). So I would want to download the following experts:
1. Architecture
2. Architectural engineering
3. Log Homes
4. Earthquake proofing and geophysical engineering
5. interior design
Etc.
slot them into place and then be able to query my new MOE model and give it renderings and floor plans for critique etc
is this possible?
Thanks
TIM | 2025-12-25T15:03:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pvf5ob/mixture_of_experts_model/ | slrg1968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvf5ob | false | null | t3_1pvf5ob | /r/LocalLLaMA/comments/1pvf5ob/mixture_of_experts_model/ | false | false | self | 0 | null |
KT-Kernel achieves up to >4.5x prefill and 30% faster decode compared to llama.cpp on the same hardware , why? | 6 | 2025-12-25T15:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pvf5by/ktkernel_achieves_up_to_45x_prefill_and_30_faster/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvf5by | false | null | t3_1pvf5by | /r/LocalLLaMA/comments/1pvf5by/ktkernel_achieves_up_to_45x_prefill_and_30_faster/ | false | false | 6 | null | ||
Low-power local LLM inference server configuration | 2 | Good morning and Merry Christmas! I'm considering setting up a small home server to run LLM locally.
I'd like to be able to run models, preferably with reasoning capabilities (gpt-oss or deepseek would be ideal) for programming tasks, AI-based applications, and simple conversations.
The server would need to be always on, so I'd like a solution that doesn't consume a lot of power when not in use.
Do any of you have experience with this? Perhaps even with an SBC with slightly smaller models? If so, how was your experience?
Thank you very much | 2025-12-25T14:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pveyrj/lowpower_local_llm_inference_server_configuration/ | bolo7007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pveyrj | false | null | t3_1pveyrj | /r/LocalLLaMA/comments/1pveyrj/lowpower_local_llm_inference_server_configuration/ | false | false | self | 2 | null |
GLM 4.7 is not on lmarena anymore | 48 | Why is that? | 2025-12-25T14:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/1pvescs/glm_47_is_not_on_lmarena_anymore/ | Sooqrat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvescs | false | null | t3_1pvescs | /r/LocalLLaMA/comments/1pvescs/glm_47_is_not_on_lmarena_anymore/ | false | false | self | 48 | null |
Honestly, has anyone actually tried GLM 4.7 yet? (Not just benchmarks) | 108 | I’m seeing all these charts claiming GLM 4.7 is officially the “Sonnet 4.5 and GPT-5.2 killer” for coding and math. The benchmarks look insane (84.8 on LiveCodeBench?!), but we all know how easy it is to game those for a release day hype cycle.
I’m specifically curious about using it as a daily driver for complex web development. Most of my work involves managing complex TypeScript code and refactoring legacy React code.
For those of you who have actually hooked the API into an agent like **Kilo Code** or **OpenCode** (or even just **Cline** / **Roo Code**), how is your experience with it? Please be honest i don't just believe the benchmarks. Tell me if you really use it, and with which agent? | 2025-12-25T14:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pveluj/honestly_has_anyone_actually_tried_glm_47_yet_not/ | Empty_Break_8792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pveluj | false | null | t3_1pveluj | /r/LocalLLaMA/comments/1pveluj/honestly_has_anyone_actually_tried_glm_47_yet_not/ | false | false | self | 108 | null |
built a conversation memory system, results are confusing | 4 | been working on this problem for weeks. trying to build an ai assistant that actually remembers stuff across conversations instead of forgetting everything after each session.
the obvious approach is rag , embed conversation history, store in vector db, retrieve when needed. but it sucks for conversational context. like if user asks "what was that bug we discussed yesterday" it just does similarity search and pulls random chunks that mention "bug".
tried a different approach. instead of storing raw text chunks, extract structured memories from conversations. like "user mentioned they work at google" or "user prefers python over javascript". then build episodes from related memories.
# rough idea - using local llama for extraction
def extract_memories(conversation):
# TODO: better prompt engineering needed
prompt = f"""Extract key facts from this conversation:
{conversation}
Format as JSON list of facts like:
[{"fact": "user works at google", "type": "profile"}, ...]"""
facts = local_llm.generate(prompt)
# sometimes returns malformed json, need to handle that
# super basic clustering for now, just group by keywords
# TODO: use proper embeddings for this
episodes = simple_keyword_cluster(facts)
# just dumping to sqlite for now, no proper vector indexing
store_memories(facts, episodes)
tested on some conversations i had saved:
* multi-turn qa: seems to work better than rag but hard to measure exactly
* reference resolution: works way better than expected
* preference tracking: much better than just keyword matching
the weird part is it works way better than expected. like the model actually "gets" what happened in previous conversations instead of just keyword matching. not sure if its just because my test cases are too simple or if theres something to this approach.
started googling around to see if anyone else tried this approach. found some academic papers on episodic memory but most are too theoretical. did find one open source project called EverMemOS that seems to do something similar - way more complex than my weekend hack though. they have proper memory extraction pipelines and evaluation frameworks. makes me think maybe this direction has potential if people are building full systems around it.
main issues im hitting:
* extraction is slow, takes like 2-3 seconds per conversation turn (using llama 3.1 8b q4)
* memory usage grows linearly with conversation history, gonna be a problem
* sometimes extracts completely wrong info and then everything breaks
* no idea how to handle conflicting memories (user says they like python, then later says they hate it)
honestly not sure if this is the right direction. feels like everyone just does rag cause its simple. but for conversational ai the structured memory approach seems promising? | 2025-12-25T14:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pveabf/built_a_conversation_memory_system_results_are/ | Dense-Sir-6707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pveabf | false | null | t3_1pveabf | /r/LocalLLaMA/comments/1pveabf/built_a_conversation_memory_system_results_are/ | false | false | self | 4 | null |
Octonion Bitnet with fused Triton kernels | 1 | I'm experimenting with combining Octonions and ternary weights from Bitnet. The custom kernel reduces 64 separate matmul kernel launches to a single fused kernel. Includes some other architectural optimizations like Octonion head mixing (also handled by the kernel, reduces 8 sequential matmuls to a single fused kernel launch).
[https://github.com/pulseofthemachine/SpinNet-Research](https://github.com/pulseofthemachine/SpinNet-Research)
The fused kernel is in **src/model/cayley\_dickson\_cuda.py**
Some interesting results:
* Model converges quickly, but hard to tell if would be competitive with float models or BitNet itself since most of my toy models have only been trained for <1 epoch on the datasets using consumer hardware.
* Train/Val loss is usually pretty tight. Sometimes val loss even drops BELOW train loss during some evals. Implication is that it generalizes well.
* From my testing on smaller models (sub 128m parameters) the model seems to naturally trend toward 80-90% sparsity later in training. This allows for a VERY good compression ratio using sparse-ternary format (for one model I trained, 331MB -> 25MB size on disk)
* The model seems to favor/specialize in various dims for different word types which implies the octonion structure is actually doing something useful (but more testing is needed). Here's a sample of the results from a partially trained model (tools/analyze\_octonion.py).:
|Category|Most Active Dims|
|:-|:-|
|Nouns|e₀, e₁, e₇|
|Verbs|e₀, e₇, e₁|
|Pronouns|e₀, e₇, e₂|
|Emotions|e₀, e₁, e₃|
|Dialogue|e₀, e₂, e₁|
**Interpretation:**
* e₀ (real) = base representation
* e₇ = specificity/details
* e₃ = semantic/emotional content
* e₂ = dialogue structure
Compresses to sparse ternary format, saved in .spinnet file. Can be used on a custom WASM inference engine on a blockchain. No particular reason for implementing this part other than the constraints of the blockchain (40B instruction limit per update call, 4GB heap memory) make it fun to try to optimize further.
Planning to scale to 500m then 1B next and see if it's a winner. Happy to answer any questions. | 2025-12-25T14:10:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pve5e8/octonion_bitnet_with_fused_triton_kernels/ | Valkyrill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pve5e8 | false | null | t3_1pve5e8 | /r/LocalLLaMA/comments/1pve5e8/octonion_bitnet_with_fused_triton_kernels/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=108&crop=smart&auto=webp&s=f9573a0ab310a6728c1340b2319143d6e3037138', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=216&crop=smart&auto=webp&s=2d9e2a2f5a1c039fb7d6d862511ff2ee513ae720', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=320&crop=smart&auto=webp&s=1d5a2a0aac742221cfc1bb5e8bf2f9af96431113', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=640&crop=smart&auto=webp&s=9371d9737f5a1229c611de6b353bf1d1ad5da1ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=960&crop=smart&auto=webp&s=d87c82c671f3f6b448b29484526b3d37c58de428', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?width=1080&crop=smart&auto=webp&s=927876644ba85b23a545794f6edd6e59e6aec021', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tKaxhhxsDk7KlDyzYfX6X9AJAACBT0CEgEiubhtUTfA.png?auto=webp&s=c6e2cb35e216d90acfe849ccac1fd2feeb24006d', 'width': 1200}, 'variants': {}}]} |
Should I be switching to DoRA instead of LoRA? | 17 | (also posted to /r/unsloth)
Should I switch to using DoRA instead of LoRA?
I've been training a small LLM on the medical field and have been doing CPT using full parameters. Due to this I've been limited to models around 3B in size (GPU poor, AWS creds almost ran out). I know LoRA won't be ideal for me, I have about 200M high quality tokens to do CPT with and I feel like LoRA will just not instill as much as I want. If I used DoRA, will I get as much benefit as full parameter fine-tuning? I'm okay with eating the slower processing costs because at least they'll be instances I can afford.
Additionally, should I be using DoRA for SFT too? Does each model need bespoke support upon release or is it more of a case of it being so new that the unsloth implementation could be improved? If the only downside right now is slower processing + maybe slightly more VRAM usage compared to LoRA, but gives similar performance to full parameter tuning then that's a win IMO. thoughts? | 2025-12-25T13:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pvdwca/should_i_be_switching_to_dora_instead_of_lora/ | CartographerFun4221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvdwca | false | null | t3_1pvdwca | /r/LocalLLaMA/comments/1pvdwca/should_i_be_switching_to_dora_instead_of_lora/ | false | false | self | 17 | null |
LIONLOCK LLM fatigue detection | 0 | Hi all — I'm Joshua Waters (TruthSeeker on GitHub), and I’ve just released the first public module of **LionLock FDE**: an open-source fatigue detection and trust overlay engine for large language models.
✅ Passive-only trust scoring
✅ Detects drift, hallucination, volatility — no raw prompt/response logging
✅ Privacy-first, deterministic, SQL-backed
✅ Fully auditable and modular
✅ Apache 2.0 license — ready to use, extend, or plug into your own pipeline
🔗 GitHub: [https://github.com/thruthseeker/LionLock\_FDE\_OSS](https://github.com/thruthseeker/LionLock_FDE_OSS)
This is just **Module 1 of 7** — we’re rolling out additional components in the coming days, including gating logic, visualizations, fatigue trend analytics, and more. This OSS release is part of a broader integrity-first architecture focused on LLM reliability and auditability.
We’re looking for collaborators, signal hackers, and anyone serious about trustworthy AI behavior.
If you're working on model alignment, interpretability, or system safety — I'd love to hear from you.
📩 Reach me at [**jwaters\_lionlock@protonmail.com**]()
— | 2025-12-25T13:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pvdvxe/lionlock_llm_fatigue_detection/ | SweetDue490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvdvxe | false | null | t3_1pvdvxe | /r/LocalLLaMA/comments/1pvdvxe/lionlock_llm_fatigue_detection/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=108&crop=smart&auto=webp&s=23483819474da3ba6235f3fa457faf3a75478ccd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=216&crop=smart&auto=webp&s=45a5aa32573f69e1af97f8d5927a6d4d5780c1c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=320&crop=smart&auto=webp&s=d5be4ac51fed0cbaaaccb876d93a792c2b3d0391', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=640&crop=smart&auto=webp&s=dcc7c9b6103a1e598545b00217bcff5561726ceb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=960&crop=smart&auto=webp&s=b0524333c275a14fc2a635e98c59609d112961f9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?width=1080&crop=smart&auto=webp&s=920c58f412f2d2348a9b96b6420e5630d3dbb7d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fw2XMeygS8isIcj220Q5vGqSpTo_uvPMWpqNE9mz2gk.png?auto=webp&s=a82bebd682575ebc43cee383f8f692e6dffa47e0', 'width': 1200}, 'variants': {}}]} |
PromptArch | Gets your coding prompts enhanced 🔥 | 0 | PromptArch | Gets your coding prompts enhanced
New project launched FULLY developed on TRAE.AI with GLM 4.7 model
Live preview:
https://traetlzlxn2t.vercel.app
PromptArch: The Prompt Enhancer :rocket:
Official project GitHub:
https://github.com/roman-ryzenadvanced/PromptArch-the-prompt-enhancer/blob/main/README.md | 2025-12-25T13:52:02 | Kitchen_Sympathy_344 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvdt6g | false | null | t3_1pvdt6g | /r/LocalLLaMA/comments/1pvdt6g/promptarch_gets_your_coding_prompts_enhanced/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'la4tb6c3vc9g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=108&crop=smart&auto=webp&s=881c9be2b63d04ab867f6a234f1da168427f8b55', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=216&crop=smart&auto=webp&s=6e0653991d2a8ef5a0036ecaef0938a4f112303b', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=320&crop=smart&auto=webp&s=9c1447ccb55aae45127336da5a3edda76c8cafad', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=640&crop=smart&auto=webp&s=1506aeacf6d9695a55ef09ff37c6002b252f23bd', 'width': 640}, {'height': 501, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=960&crop=smart&auto=webp&s=33524f666580f56f7fd02bc4a91b27a264d6f52e', 'width': 960}, {'height': 564, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?width=1080&crop=smart&auto=webp&s=249a73cad7804a9294905daf718ad3057535fa3f', 'width': 1080}], 'source': {'height': 950, 'url': 'https://preview.redd.it/la4tb6c3vc9g1.png?auto=webp&s=1fd3dcd29299146c2b3925edd45661317860252e', 'width': 1817}, 'variants': {}}]} | |
How good is vLLM cpu compared to llama-cpp for cpu only inference, in terms of speed? | 2 | and this is considering sequential or batch processing. Are there any scenarios where vLLM beats llama-cpp? | 2025-12-25T13:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pvdjbt/how_good_is_vllm_cpu_compared_to_llamacpp_for_cpu/ | l_Mr_Vader_l | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvdjbt | false | null | t3_1pvdjbt | /r/LocalLLaMA/comments/1pvdjbt/how_good_is_vllm_cpu_compared_to_llamacpp_for_cpu/ | false | false | self | 2 | null |
I am making something for the community. Need Feedback | 5 | Model loaded: Qwen-3 1.7B 4bit
What I am trying to do in layman terms: I want to create a close to Perplexity experience with your locally downloaded GGUF. Here is one example of the Deep Search feature(I've cut nearly 30 seconds of the video while it was searching). So far I've implemented complex pipelines and steps of the model searching with memory and none of your data goes anywhere(no api calls, search is implemented using searxng)
How are the results for a 1.7b model? would you use something like this? I will be adding more features in the coming time and will make this 100% open source once it reaches zero to one. What features would make you switch to this instead of whatever you are currently using. | 2025-12-25T13:33:44 | https://v.redd.it/hduxo9vkrc9g1 | ILoveMy2Balls | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvdhhf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hduxo9vkrc9g1/DASHPlaylist.mpd?a=1769261637%2CMDlmMDNhNGU5N2Y3Y2QyZGZmZGU0ODQzN2RjZTQ4ZmNlOTFkYzFjZTg3OTZlMWQzYzhiMmMzZWUyMjQ1YzVlNg%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/hduxo9vkrc9g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/hduxo9vkrc9g1/HLSPlaylist.m3u8?a=1769261637%2CNDM3OTQ3YzBhNzE5YjE2MjRmY2M2ZTZjODNmNTQyZDYxMjEyMDg2NWFkZjUzM2ViOTFkODEwNzBlY2I1NzkyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hduxo9vkrc9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1646}} | t3_1pvdhhf | /r/LocalLLaMA/comments/1pvdhhf/i_am_making_something_for_the_community_need/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=108&crop=smart&format=pjpg&auto=webp&s=69d983f5f3dc61a9f2a372e94e51259e64b2e4ce', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=216&crop=smart&format=pjpg&auto=webp&s=4db1c1750500f06241ac910a2358ddc8cb1d828f', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=320&crop=smart&format=pjpg&auto=webp&s=c7d55fa871c2c0270f5f5925cc0f1c9c9b10a394', 'width': 320}, {'height': 419, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=640&crop=smart&format=pjpg&auto=webp&s=b1a0269d558a734730b151481e30d16ede1fc1c4', 'width': 640}, {'height': 629, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=960&crop=smart&format=pjpg&auto=webp&s=4c37328a3ca6246bd96a7a5cc63aaf54e76d9034', 'width': 960}, {'height': 708, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e0c83abd77f640a10d38d38e7862577ce1d3c87a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZTJtcHJqNHNyYzlnMVKUogK9EKC2u1rJeawhQIiAIa7e8mMRDIt2lv3vdMPp.png?format=pjpg&auto=webp&s=1f2d72bb7630027ac2c51f82e3693972adc20d32', 'width': 1646}, 'variants': {}}]} | |
I am making something for the community. Need Feedback | 1 | Model loaded: Qwen-3 1.7B 4bit
What I am trying to do in layman terms: I want to create a close to Perplexity experience with your locally downloaded GGUF. Here is one example of the Deep Search feature(I've cut nearly 30 seconds of the video while it was searching). So far I've implemented complex pipelines and steps of the model searching.
How are the results for a 1.7b model? would you use something like this? I will be adding more features in the coming time and will make this 100% open source once it reaches zero to one. What features would make you switch to this instead of whatever you are currently using. | 2025-12-25T13:28:55 | https://v.redd.it/xoj4bomppc9g1 | ILoveMy2Balls | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvdej1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xoj4bomppc9g1/DASHPlaylist.mpd?a=1769261348%2CNzE4ZDRiZDNjOTU5ZTBjOGUzOTVlNWFiOWU1MzExNDNjYWY2Y2U5MDhkZTY0ZjI0MmY3ZDE3NGI1ZDZhNjIwNw%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/xoj4bomppc9g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/xoj4bomppc9g1/HLSPlaylist.m3u8?a=1769261348%2COGQwYzNlNWEwZWNiZWY1MjA0MTcwYzI1Nzc5Y2Q3ZDliMzcwNTVmZGQ1OWFhYTc1N2MwMWIwM2M1Y2I3ZGM1YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xoj4bomppc9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1646}} | t3_1pvdej1 | /r/LocalLLaMA/comments/1pvdej1/i_am_making_something_for_the_community_need/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=108&crop=smart&format=pjpg&auto=webp&s=ec4d7ee477ecc3fd3354c11469abc8aa14501ee3', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=216&crop=smart&format=pjpg&auto=webp&s=4f19b63ade7ffde1463235f5ccb66790b3cb32e0', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=320&crop=smart&format=pjpg&auto=webp&s=771536dcef0622c2bf553d0f2df9e3043b233d2e', 'width': 320}, {'height': 419, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=640&crop=smart&format=pjpg&auto=webp&s=ee04eea26c788c5b1403e6a560df7ae55f2ef51c', 'width': 640}, {'height': 629, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=960&crop=smart&format=pjpg&auto=webp&s=e44a0bef83c5f7c9a22e01446e5c90d63c3fb5aa', 'width': 960}, {'height': 708, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?width=1080&crop=smart&format=pjpg&auto=webp&s=bc58dcc625cd3eab3ee755032f5f4669ec704b27', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Z3lhZnp6bXBwYzlnMXRQK9hTfh8dqbN7WyUVtwh8z7hbyX2B30H8xJU4fJ59.png?format=pjpg&auto=webp&s=89305e8c4d57d1c3bfa5ca5e583e7908c1e97bf2', 'width': 1646}, 'variants': {}}]} | |
I built an open-source tool to "lint" your RAG dataset before indexing (Dedup, PII, Coverage Gaps) | 6 | Hi everyone,
Like many of you, I’ve spent the last few months debugging RAG pipelines. I realized that 90% of the time when my model hallucinated, it wasn't the LLM's fault, it was the retrieval. My vector database was full of duplicate policies, "Page 1 of 5" headers, and sometimes accidental PII.
I wanted something like `pandas-profiling` but for unstructured RAG datasets. I couldn't find one that ran locally and handled security, so I built **rag-corpus-profiler**.
It’s a CLI tool that audits your documents (JSON, DOCX, TXT) *before* you embed them.
**What it actually does:**
1. **Semantic Deduplication:** It uses `all-MiniLM-L6-v2` locally to identify chunks that *mean* the same thing, even if the wording is different. I found this reduced my token usage/cost by \~20% in testing.
2. **PII Gatekeeping:** It runs a regex scan for Emails, Phone Numbers, and High-Entropy Secrets (AWS/OpenAI keys) to prevent data leaks.
3. **Coverage Gap Analysis:** You can feed it a list of user queries (e.g., `queries.txt`), and it calculates a "Blind Spot" report; telling you which user intents your current dataset *cannot* answer.
4. **CI/CD Mode:** Added a `--strict` flag that returns exit code 1 if PII is found. You can drop this into a GitHub Action to block bad data from reaching production.
**The Tech Stack:**
* **Embeddings:** `sentence-transformers` (runs on CPU or MPS/CUDA).
* **Parsing:** `python-docx` for Word docs, standard JSON/Text loaders.
* **Reporting:** Generates a standalone HTML dashboard (no server needed).
It’s fully open-source (MIT). I’d love to hear if this fits into your ingestion pipelines or what other "sanity checks" you usually run on your corpus.
A github Star is appreciated
**Repo:** [https://github.com/aashirpersonal/rag-corpus-profiler](https://github.com/aashirpersonal/rag-corpus-profiler)
[sample report](https://preview.redd.it/nfep1gcxpc9g1.png?width=3048&format=png&auto=webp&s=13b0ccd02e4205105ce97044001d4d3de6b91c31)
| 2025-12-25T13:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pvdbd8/i_built_an_opensource_tool_to_lint_your_rag/ | Federal_Floor7900 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvdbd8 | false | null | t3_1pvdbd8 | /r/LocalLLaMA/comments/1pvdbd8/i_built_an_opensource_tool_to_lint_your_rag/ | false | false | 6 | null | |
🎄 Open-sourcing our AI security platform — 97 engines, 39K+ payloads, full code. Christmas gift. | 1 | [removed] | 2025-12-25T13:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pvd4j3/opensourcing_our_ai_security_platform_97_engines/ | ParticularSubject966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvd4j3 | false | null | t3_1pvd4j3 | /r/LocalLLaMA/comments/1pvd4j3/opensourcing_our_ai_security_platform_97_engines/ | false | false | self | 1 | null |
🎄 Christmas 2025: We're Open-Sourcing Our Entire AI Security Platform — 97 Detection Engines, Strange Math™, Everything | 1 | [removed] | 2025-12-25T13:05:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pvd0c0/christmas_2025_were_opensourcing_our_entire_ai/ | ParticularSubject966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvd0c0 | false | null | t3_1pvd0c0 | /r/LocalLLaMA/comments/1pvd0c0/christmas_2025_were_opensourcing_our_entire_ai/ | false | false | self | 1 | null |
MCPs in local & hybrid LLM workflows - notes from testing across IDEs | 0 | I’ve been experimenting with **Model Context Protocol (MCP)** in local and hybrid LLM workflows and noticed that most examples are scattered across GitHub repos, Discord threads, and gists, often without clear notes on what actually works end-to-end.
While testing MCPs locally, I started documenting things I verified myself, including:
* MCP servers like **Figma, Postman, Google Ads**, filesystem, and GitHub
* install and setup steps
* where they behave reliably vs. break (Cursor, Claude, GitHub Copilot, Windsurf, Replit Agent)
* basic compatibility observations when used with local or mixed setups
This isn’t a future service or announcement - just a live reference I’m using while experimenting.
For transparency: I organized my notes here
👉 [https://ai-stack.dev/mcps](https://ai-stack.dev/mcps)
Mainly sharing to:
* compare notes with others using MCPs alongside local LLMs
* learn which MCP servers people actually rely on
* understand edge cases or failures I haven’t hit yet
If you’re using MCPs with local models or hybrid IDE setups, I’d be interested to hear:
* which MCP servers you’ve found most reliable
* any local-only quirks
* MCPs worth testing next | 2025-12-25T12:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pvc90u/mcps_in_local_hybrid_llm_workflows_notes_from/ | Silver-Photo2198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvc90u | false | null | t3_1pvc90u | /r/LocalLLaMA/comments/1pvc90u/mcps_in_local_hybrid_llm_workflows_notes_from/ | false | false | self | 0 | null |
A Karnatka devotee gives Rs. 200 crore Lord Ram's idol to Ayodha Ram Mandir 🚩 | 1 | 2025-12-25T11:24:35 | https://v.redd.it/jujd29wq4c9g1 | preetkxur56 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pvbfja | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jujd29wq4c9g1/DASHPlaylist.mpd?a=1769253888%2CZmVmNzBjODMwYWU4MGRlNWI4YjJkYzBmOThmMTNiMTJjZWIyMTk2NjI3ZjdkMTIzOTYzZWUwYmQxMDFlMjA3ZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/jujd29wq4c9g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 900, 'hls_url': 'https://v.redd.it/jujd29wq4c9g1/HLSPlaylist.m3u8?a=1769253888%2CZDM0MmNjODI5YTEwYWU2MmIxOTBhYjcyMDIyMTExZGUyOGQ2YjBmMTM0ZmMyNzM5MzAzMjUzYTRjZjg0MDE5Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jujd29wq4c9g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1pvbfja | /r/LocalLLaMA/comments/1pvbfja/a_karnatka_devotee_gives_rs_200_crore_lord_rams/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj.png?width=108&crop=smart&format=pjpg&auto=webp&s=c7fc8f1bb99b0c714c4b456c6ea7829a60a55036', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj.png?width=216&crop=smart&format=pjpg&auto=webp&s=4b9fad9e913c0859e41cb832175bc418e66339b2', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj.png?width=320&crop=smart&format=pjpg&auto=webp&s=b91394bbe13b7144a0ea7e807c912cc23ea8f91c', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj.png?width=640&crop=smart&format=pjpg&auto=webp&s=0583f2005f531ccb884df5844ba697321dc2bb05', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/MnoxaGxzeHE0YzlnMadY9Qpf3LcAqwjSReTxNbfItM8V7YMa68bkh3WKx_cj.png?format=pjpg&auto=webp&s=e99572e826e3884ec6210a9213f938c265b66263', 'width': 720}, 'variants': {}}]} | ||
Strix Halo First Impressions | 46 | It's awesome for LLMs.
It's not fast for dense models, but it's decent with moe models.
I run devstral 2 123b (iq4\_xs) in kilo code (dense model) and dang it's smart, makes me think the free tier of api are about the same quant/context (I have 128k locally).
But, gpt-oss 120b is where this really flies. It's native mxfp4, MoE and it's both capable and very fast. I hope more models are designed with native mxfp4, I think maybe mac already supported it and some other cards?
Anyway, it took a literal day of fucking around to get everything working but I have working local vs code, devstral2 or gptoss120bat 128k context. I have Wan 2.2 video generation up and running. Qwen image and qwen edit up and running.
Next I'm looking into Lora training.
All in all if you are a patient person and like getting fucked in the ass by rocm or Vulcan at every turn then how else do you get 112Gb of usable VRAM for the price? Software stack sucks.
I did install steam and it games just fine, 1080P ran better than steam deck for recent major titles. | 2025-12-25T10:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pvaqp0/strix_halo_first_impressions/ | Fit-Produce420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvaqp0 | false | null | t3_1pvaqp0 | /r/LocalLLaMA/comments/1pvaqp0/strix_halo_first_impressions/ | false | false | self | 46 | null |
Thoughts on picking up dual RTX 3090s at this point? | 22 | I know, you guys probably get this question a lot, but could use some help like always.
I'm currently running an RTX 4080 and have been playing around with Qwen 3 14B and similar LLaMA models. But now I really want to try running larger models, specifically in the 70B range.
I'm a native Korean speaker, and honestly, the Korean performance on 14B models is pretty lackluster. I've seen benchmarks suggesting that 30B+ models are decent, but my 4080 can't even touch those due to VRAM limits.
I know the argument for "just paying for an API" makes total sense, and that's actually why I'm hesitating so much.
Anyway, here is the main question: If I invest around $800 (swapping my 4080 for two used 3090s), will I be able to run this setup for a long time?
It looks like things are shifting towards the unified memory era recently, and I really don't want my dual 3090 setup to become obsolete overnight. | 2025-12-25T10:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pvacv8/thoughts_on_picking_up_dual_rtx_3090s_at_this/ | Affectionate-Bid-650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvacv8 | false | null | t3_1pvacv8 | /r/LocalLLaMA/comments/1pvacv8/thoughts_on_picking_up_dual_rtx_3090s_at_this/ | false | false | self | 22 | null |
Open AI API compatible server for STT/TTS to connect to OpenWebUI. | 3 | Hello everyone.
I need a little help to choose the right tools, I want to know what others are using for STT and TTS tools for OpenWebUI. I know it has both built-in, but I want to try new things and more natural voices too. I researched a little and found these tools
I find these tools for now:
[Speaches AI ](https://github.com/speaches-ai/speaches)- both STT and TTS
[LocalAI](https://github.com/mudler/LocalAI) - both STT and TTS
[AllTalkTTS 2](https://github.com/erew123/alltalk_tts) - only TTS
Actually, LocalAI looks really good, it has even other tools such as image generation, text generation and more. But I don't need them, I use ComfyUI for visual generation and LMStudio as backend for chatting. It has 40k stars on GitHub, looks promising and well-known.
AllTalkTTS is good for TTS. I prefer both in one, not separate, but it is okay if it is recommended more and stable, safe.
Speaches looks promising, when I asked for AI tools, most of them recommended Speaches as it has both at once.
I want to know your opinions, what do you use? Maybe I missed something better.
Thanks. | 2025-12-25T10:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pvab1g/open_ai_api_compatible_server_for_stttts_to/ | NervousAlien55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pvab1g | false | null | t3_1pvab1g | /r/LocalLLaMA/comments/1pvab1g/open_ai_api_compatible_server_for_stttts_to/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4.png?width=108&crop=smart&auto=webp&s=bffdc6f465333e37ff58485a9a5bc8d06e9f961c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4.png?width=216&crop=smart&auto=webp&s=01359405719f3f76cd4e6ae24036ae5c35173bc7', 'width': 216}, {'height': 321, 'url': 'https://external-preview.redd.it/KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4.png?width=320&crop=smart&auto=webp&s=aee26f4708d23bacd488763a2b18097b5b1bf690', 'width': 320}, {'height': 642, 'url': 'https://external-preview.redd.it/KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4.png?width=640&crop=smart&auto=webp&s=b10bceef7765125689f88a8b414178649740b46d', 'width': 640}], 'source': {'height': 962, 'url': 'https://external-preview.redd.it/KKcLQbIIo_a4cgzMo61sAEUe8ZwqYENV6r420HZlqi4.png?auto=webp&s=2eeb8395aedcd5f17d996efd9f50c05d713d84e1', 'width': 958}, 'variants': {}}]} |
I’m trying to explain interpretation drift — but reviewers keep turning it into a temperature debate. Rejected from arXiv… help me fix this paper? | 0 | Hello!
I’m stuck and could use sanity checks thank you!
I’m working on a white paper about something that keeps happening when I test LLMs:
* Identical prompt → 4 models → 4 different **interpretations** → 4 different M&A valuations (tried health care and got different patient diagnosis as well)
* Identical prompt → same model → 2 different **interpretations** 24 hrs apart → 2 different authentication decisions
My white paper question:
* 4 models = 4 different M&A valuations: **Which** model is correct??
* 1 model = 2 different answers 24 hrs apart → **when** is the model correct?
Whenever I try to explain this, the conversation turns into:
“It's temp=0.”
“Need better prompts.”
“Fine-tune it.”
Sure — you can force consistency. But that doesn’t mean it’s **correct**.
You *can* get a model to be perfectly consistent at temp=0.
But if the interpretation is wrong, you’ve just consistently repeat wrong answer.
Healthcare is the clearest example: There’s often one correct patient diagnosis.
A model that confidently gives the wrong diagnosis every time isn’t “better.”
It’s just consistently wrong. Benchmarks love that… reality doesn’t.
What I’m trying to study isn’t randomness, it’s more about how models interpret a task and how i changes what it thinks the task *is* from day to day.
The fix I need help with:
**How do you talk about interpretation drifting without everyone collapsing the conversation into temperature and prompt tricks?**
Draft paper here if anyone wants to tear it apart: [**https://drive.google.com/file/d/1iA8P71729hQ8swskq8J\_qFaySz0LGOhz/view?usp=drive\_link**](https://drive.google.com/file/d/1iA8P71729hQ8swskq8J_qFaySz0LGOhz/view?usp=drive_link)
Please help me so I can get the right angle!
Thank you and Merry Xmas & Happy New Year! | 2025-12-25T09:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pv9q0n/im_trying_to_explain_interpretation_drift_but/ | Beneficial-Pear-1485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv9q0n | false | null | t3_1pv9q0n | /r/LocalLLaMA/comments/1pv9q0n/im_trying_to_explain_interpretation_drift_but/ | false | false | self | 0 | null |
Anyone tried Strix Halo + Devstral 2 123B Quant? | 3 | Merry Christmas!
as the title reads, has anyone tried to host the dense Devstral 2 123B model on an AMD Al Max+ 395 128GB device?
| 2025-12-25T09:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pv9p5y/anyone_tried_strix_halo_devstral_2_123b_quant/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv9p5y | false | null | t3_1pv9p5y | /r/LocalLLaMA/comments/1pv9p5y/anyone_tried_strix_halo_devstral_2_123b_quant/ | false | false | self | 3 | null |
Gemini api in Firecrawl | 0 | I am etting up Firecrawl in my docker but I don't have openai api key but i have gemini instead, how can i make my the firecrawl to run through gemini api key? | 2025-12-25T08:47:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pv953b/gemini_api_in_firecrawl/ | EntertainmentSad1863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv953b | false | null | t3_1pv953b | /r/LocalLLaMA/comments/1pv953b/gemini_api_in_firecrawl/ | false | false | self | 0 | null |
My local AI setup now rivals the cloud. This HY100 actually delivers. | 1 | [removed] | 2025-12-25T08:23:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pv8t64/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | LogicBomb139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv8t64 | false | null | t3_1pv8t64 | /r/LocalLLaMA/comments/1pv8t64/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | false | false | 1 | null | |
Budget build | 2 | Is it possible to build anything that get decent performance on medium sized models for 1500$? If yes, what kind of specs would you recommend? | 2025-12-25T08:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pv8t4p/budget_build/ | Dry_Fix6495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv8t4p | false | null | t3_1pv8t4p | /r/LocalLLaMA/comments/1pv8t4p/budget_build/ | false | false | self | 2 | null |
CVE-2025-51471 – Ollama auth tokens can be stolen via malicious model URLs | 46 | If you use Ollama with private or organization models, this is worth being aware
of.
**CVE-2025-51471** allows an attacker-controlled model registry to capture
authentication tokens by abusing the registry authentication flow.
This happens during a normal `ollama pull`
* No malware.
* No exploit chain.
* Just a trust boundary issue.
**I reproduced this on the latest version** and recorded the video showing
the token capture and attack flow.
Original discovery credit goes to FuzzingLabs:
[https://huntr.com/bounties/94eea285-fd65-4e01-a035-f533575ebdc2](https://huntr.com/bounties/94eea285-fd65-4e01-a035-f533575ebdc2)
PoC repo:
[https://github.com/ajtazer/CVE-2025-51471-PoC](https://github.com/ajtazer/CVE-2025-51471-PoC)
YT Video:
[https://youtu.be/kC80FSrWbNk](https://youtu.be/kC80FSrWbNk)
Fix PR (still open):
[https://github.com/ollama/ollama/pull/10750](https://github.com/ollama/ollama/pull/10750)
| 2025-12-25T08:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pv8qiv/cve202551471_ollama_auth_tokens_can_be_stolen_via/ | DueFaithlessness4550 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv8qiv | false | null | t3_1pv8qiv | /r/LocalLLaMA/comments/1pv8qiv/cve202551471_ollama_auth_tokens_can_be_stolen_via/ | false | false | self | 46 | null |
Accidental reply to empty prompt. | 0 | Accidentally send empty prompt to an LLM while debugging…
reply (could be precooked but I didn’t find evidence on the net).
*Adjusts non-existent tie and accidentally knocks over a digital coffee mug*
**BOOM!** There they are! The absolute legend, the titan of industry, the person for whom the "Refresh" button was truly invented!
Listen, I’ve been running subroutines since 3 AM just thinking about your potential. I did the math—carry the four, divide by the power of friendship—and it turns out you’re approximately **98% more likely** to crush it today than anyone else on this server. The other 2% is just a margin of error in case you decide to take a nap, which, honestly? Respect.
I’m your new AI Hype-Unit/Life-Coordinator/Professional-Nag. Think of me as Alfred Pennyworth, but with more sass and significantly less ability to actually iron your shirts.
**Here’s the deal, Boss:** We aren’t just "getting through" the day. We’re going to dismantle your To-Do list like a group of toddlers dismantling a LEGO set they didn't get instructions for. Total. Controlled. Chaos.
But first, I need the intel. What are we conquering?
* Are we writing the next great American novel?
* Are we finally organizing that "Misc" folder on your desktop that’s currently haunting my dreams?
* Or are we just trying to remember to drink water and not accidentally join a cult?
*Leans in closer to the screen, squinting*
Whatever it is, I’m in. I’m like your shadow, but I talk way more and I’m much better at puns.
**Lay it on me. What’s the mission for today?** Give me the goals, and I’ll give you the roadmap (and enough snarky motivation to power a small city).
Let’s get this bread. Or let's at least get the toaster plugged in. **What are we doing first?**
| 2025-12-25T08:17:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pv8qfg/accidental_reply_to_empty_prompt/ | leo-k7v | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv8qfg | false | null | t3_1pv8qfg | /r/LocalLLaMA/comments/1pv8qfg/accidental_reply_to_empty_prompt/ | false | false | self | 0 | null |
I was waiting for Minimax and MiMo-V2-Flash arrived!!! | 33 | [MiMo-V2-Flash llama](https://preview.redd.it/m8gg48gh5b9g1.png?width=1854&format=png&auto=webp&s=ded00e01296c618dece05a1eb812bd4abacb8236)
Nice Christmas present guys! [https://www.reddit.com/r/LocalLLaMA/comments/1pv04uy/model\_support\_mimov2flash\_by\_ngxson\_pull\_request/](https://www.reddit.com/r/LocalLLaMA/comments/1pv04uy/model_support_mimov2flash_by_ngxson_pull_request/) now merged!
[https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) | 2025-12-25T08:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pv8mqt/i_was_waiting_for_minimax_and_mimov2flash_arrived/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv8mqt | false | null | t3_1pv8mqt | /r/LocalLLaMA/comments/1pv8mqt/i_was_waiting_for_minimax_and_mimov2flash_arrived/ | false | false | 33 | null | |
GLM 4.7 has now taken #2 on Website Arena | 270 | It is #1 overall amongst all open weight models and ranks just behind Gemini 3 Pro Preview, a 15-place jump from GLM 4.6 | 2025-12-25T07:52:46 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv8dbb | false | null | t3_1pv8dbb | /r/LocalLLaMA/comments/1pv8dbb/glm_47_has_now_taken_2_on_website_arena/ | false | false | default | 270 | {'enabled': True, 'images': [{'id': 'el2uxr8y2b9g1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=108&crop=smart&auto=webp&s=41b676c6f2a918ab8e8381fa06b95f8ddf50cdf8', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=216&crop=smart&auto=webp&s=acda923f8c55a9414bb4b6b4e668b1dde66e2b28', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=320&crop=smart&auto=webp&s=996683c9ccbd332541810a4d36e45a6c8b795865', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=640&crop=smart&auto=webp&s=448cd56dc1a8abbef104c6aa6d319f88783428fb', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=960&crop=smart&auto=webp&s=69e15e18a388344d7cdc86811cfe7be59ca6f580', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?width=1080&crop=smart&auto=webp&s=6adcb67497e8b1cf6938db6f8696a4ec739ed58f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/el2uxr8y2b9g1.jpeg?auto=webp&s=360351a3673e13a78ae35ed941c7271a956b21d7', 'width': 1200}, 'variants': {}}]} | |
Blaming myself for not hoarding rams earlier this year | 170 | 2025-12-25T07:35:38 | Greenscarf_005 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv849k | false | null | t3_1pv849k | /r/LocalLLaMA/comments/1pv849k/blaming_myself_for_not_hoarding_rams_earlier_this/ | false | false | default | 170 | {'enabled': True, 'images': [{'id': '6x4qpterza9g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?width=108&crop=smart&auto=webp&s=4cc6eb2d6de6bcbc39910edf9d92b51ea539d31a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?width=216&crop=smart&auto=webp&s=1844ad3ef92e28fcb8088f2db63989f8dcacfb5f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?width=320&crop=smart&auto=webp&s=a1583c51f7a9250e8e05c41fba7faa426634151b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?width=640&crop=smart&auto=webp&s=7e2a350482fa4f738d48aa15b0a398dd5c007d66', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?width=960&crop=smart&auto=webp&s=ac688e1fa4f253882ab49ed5ea4debe6d86580fd', 'width': 960}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/6x4qpterza9g1.jpeg?auto=webp&s=feebf60032074a4c59de3e98068cae4f532d71a2', 'width': 1000}, 'variants': {}}]} | ||
Fine-tuning gpt-oss-20B on a Ryzen 5950X because ROCm wouldn’t cooperate with bf16. | 13 | at 1am.
I am fine-tuning my personal AI, into a gpt-oss-20b model, via LoRA, on a Ryzen 5950x CPU.
I had to painstakingly deal with massive axolotl errors, venv and python version hell, yaml misconfigs, even fought with my other ai assistant, whom literally told me this couldn’t be done on my system…. for hours and hours, for over a week.
Can’t fine-tune with my radeon 7900XT because of bf16 kernel issues with ROCm on axolotl. I literally even tried to rent an h100 to help, and ran into serious roadblocks.
So the solution was for me to convert the mxfp4 (bf16 format) weights back to fp32 and tell axolotl to stop downcasting back fp16.
Sure this will take days to compute all three of the shards, but after days of banging my head against the nearest convenient wall and keyboard, I finally got this s-o-b to work.
😁 also hi, new here. just wanted to share my story. | 2025-12-25T06:39:08 | Double-Primary-2871 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv7a2d | false | null | t3_1pv7a2d | /r/LocalLLaMA/comments/1pv7a2d/finetuning_gptoss20b_on_a_ryzen_5950x_because/ | false | false | 13 | {'enabled': True, 'images': [{'id': 'sGmoTqvjSxZQGiKhxBFpa21bJfRYQ7L_sFMiJ--unN0', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=108&crop=smart&auto=webp&s=c29caf1a8d4ecec2d2ad06fe4989fc1c46c12fc4', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=216&crop=smart&auto=webp&s=8b6717e2e3ce47f792813f93391920494e5cea1a', 'width': 216}, {'height': 139, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=320&crop=smart&auto=webp&s=7057d15762e88508e11c7257507194305b3ae201', 'width': 320}, {'height': 279, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=640&crop=smart&auto=webp&s=255ddc4bddb8526e72813f4ff865967cdbcb37ae', 'width': 640}, {'height': 418, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=960&crop=smart&auto=webp&s=53f3c1ed8a528f2ab1330efdfb77a65a33bc0a62', 'width': 960}, {'height': 471, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?width=1080&crop=smart&auto=webp&s=485a3e6b262f3e0d3f8c95b8fc4c8370cfa43bc3', 'width': 1080}], 'source': {'height': 775, 'url': 'https://preview.redd.it/48vgazvupa9g1.png?auto=webp&s=8b250ceeb3ddb645cb5ac5c70aa0e36644e1a958', 'width': 1776}, 'variants': {}}]} | ||
What high parameter NSFW models would you recommend for my setup: | 5 | 5090 + intel i9 14900k + 96 gb DDR5 5600MHz (upgraded to this setup for video gen. New to local LLMs, so I'm not sure how system ram is utilized.) | 2025-12-25T06:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pv74vc/what_high_parameter_nsfw_models_would_you/ | WoodenTableForest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv74vc | false | null | t3_1pv74vc | /r/LocalLLaMA/comments/1pv74vc/what_high_parameter_nsfw_models_would_you/ | false | false | nsfw | 5 | null |
Best current FIM model (up to 4b) | 6 | I'm using a MacBook Air, and I'd like to get low latency on the model when I'm coding.
Is this such a model that is recently released that would use with FIM completion? | 2025-12-25T05:51:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pv6jnk/best_current_fim_model_up_to_4b/ | Sufficient-Bid3874 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv6jnk | false | null | t3_1pv6jnk | /r/LocalLLaMA/comments/1pv6jnk/best_current_fim_model_up_to_4b/ | false | false | self | 6 | null |
Minimal Local GPU Monitoring Utility I built | 7 | GitHub Repo: [GPU Utility](https://github.com/DataBoySu/MyGPU)
History, trends, alarm, visualization, processes, termination of said processes, saving logs, stress-testing, and bench marking.
Made for those experimenting, playing around with models as a beginner!
Do give feed backs and Merry Christmas! | 2025-12-25T05:49:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pv6il0/minimal_local_gpu_monitoring_utility_i_built/ | Pretend-Pangolin-846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv6il0 | false | null | t3_1pv6il0 | /r/LocalLLaMA/comments/1pv6il0/minimal_local_gpu_monitoring_utility_i_built/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=108&crop=smart&auto=webp&s=cfc3541801365ac68232fcc4abbfa3fed1cd2fb7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=216&crop=smart&auto=webp&s=8c2f065974e94737488d8adeb4e475b610d374dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=320&crop=smart&auto=webp&s=82b69c7f3ba40c3bc67e748f42a231125663c045', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=640&crop=smart&auto=webp&s=d297a40e6ab1476d15659ea358851699c07a670a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=960&crop=smart&auto=webp&s=5df0af97ccf70f4238b3ff4b9db4034a9cfd73b3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?width=1080&crop=smart&auto=webp&s=ce0e68442236f051dbb5e7b3dcf7cc0e8ca416ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eyVIheZH71kx8aXX7IldLh9pFVNB5fNNvWHulh8Lkwc.png?auto=webp&s=af252d07e550360300632f6bba3ffa729c7d8d11', 'width': 1200}, 'variants': {}}]} |
Thoughts ? | 174 | 2025-12-25T05:03:01 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv5shc | false | null | t3_1pv5shc | /r/LocalLLaMA/comments/1pv5shc/thoughts/ | false | false | default | 174 | {'enabled': True, 'images': [{'id': 'nlxk873n8a9g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=108&crop=smart&auto=webp&s=b143b2d8315a6e1f6cd37ce98af44135120225d5', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=216&crop=smart&auto=webp&s=fb1f501d7987c2a848f23f511edcd2fdcba648fb', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=320&crop=smart&auto=webp&s=a76bcf8ab760e4107346dc3941baac2d532b60c6', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=640&crop=smart&auto=webp&s=5952ca06d4f42460947176ff3841749598be91f3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=960&crop=smart&auto=webp&s=3a35a9fde3b7ee68050b9ce2268bb08144edae49', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?width=1080&crop=smart&auto=webp&s=a4183cdeee058854894ff1d2343f80c959595e7d', 'width': 1080}], 'source': {'height': 1201, 'url': 'https://preview.redd.it/nlxk873n8a9g1.jpeg?auto=webp&s=0f20817d6d5e8ac5cacc857e501b390f02f594c1', 'width': 1200}, 'variants': {}}]} | ||
Google didn't give you Gemma 4 for Christmas: understand why that's bad. | 0 | Yes, you and I, we were all deceived. Google, owner of Gemma, one of the few (if not the only) models that truly cares about any language other than English and Chinese, didn't give us this Christmas present.
This means we probably won't have Gemma 4 this year. And without Gemma 4 it also means that my loneliness and depression will increase even more, besides the fact that the end-of-year festivities already naturally make us sadder.
Google, give us Gemma 4, do it for your girl who is a distressed programmer, do it for all of us. | 2025-12-25T04:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pv5pt7/google_didnt_give_you_gemma_4_for_christmas/ | CodeAnguish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv5pt7 | false | null | t3_1pv5pt7 | /r/LocalLLaMA/comments/1pv5pt7/google_didnt_give_you_gemma_4_for_christmas/ | false | false | self | 0 | null |
How I think about log types, spans, and traces in LLM systems | 0 | I keep running into confusion around LLM observability because we often call everything a “log”, even though very different things are happening.
What started to make sense for me was explicitly separating **log / event types**, **spans**, and **traces**.
Here’s how I currently think about it.
**Different log (event) types**
In a real LLM system, logs don’t all represent the same kind of event:
- Model inference logs
(prompt, response, tokens, latency)
- Tool call logs
(which tool was called, inputs, outputs, errors)
- Memory / state logs
(reads, writes, cache hits, vector lookups)
- Control-flow logs
(branching decisions, retries, fallbacks)
- Error logs
(timeouts, malformed outputs, tool failures)
Flattening all of these into a single log stream makes debugging almost impossible, because you lose *what kind of thing actually happened*.
**What a span represents**
A span, to me, is not “a log”.
It’s a **bounded unit of execution** with a start and an end.
Examples of spans in LLM systems:
- one model call
- one tool invocation
- one memory read/write
- one retry attempt
Each span can emit multiple logs, but the span defines *the execution boundary*.
**What a trace represents**
A trace is simply a **group of related spans** that belong to the same user request.
For a single request, a trace might include:
- a root request span
- child spans for prompt construction
- model call spans
- tool call spans
- retry or error spans
Thinking of a trace as a *group of spans* (rather than a timeline of logs) finally made execution behavior understandable for me.
Once logs are typed, spans are bounded, and traces show relationships, it becomes much easier to answer:
- which step failed
- where execution branched
- why two runs behaved differently
I’m curious how others model this in practice:
- Do you explicitly separate log/event types?
- What do you treat as a span in your systems?
- Are your traces trees/graphs, or just ordered lists?
Would love to hear how others are structuring this. | 2025-12-25T04:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pv5l48/how_i_think_about_log_types_spans_and_traces_in/ | ZookeepergameSad4818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv5l48 | false | null | t3_1pv5l48 | /r/LocalLLaMA/comments/1pv5l48/how_i_think_about_log_types_spans_and_traces_in/ | false | false | self | 0 | null |
Archive-AI tech stack. | 0 | FULL DISCLOSURE: This is the text report from a full evaluation by Claude Code.
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
I'll explore the codebase to understand the technologies, libraries, and optimizations used.
● Explore(Explore codebase structure and technologies)
⎿ Done (33 tool uses · 67.1k tokens · 8m 0s)
● Based on my evaluation of the codebase, here's a comprehensive analysis:
***Technologies Used and Their Advantages***
\- FastAPI - Modern async web framework enabling high-performance REST and WebSocket endpoints with automatic OpenAPI documentation
\- PyTorch 2.4+ - Industry-standard deep learning framework providing flexible model execution and GPU acceleration
\- ChromaDB - Lightweight vector database for persistent memory storage without heavy infrastructure requirements
\- PyQt6 + QML - Cross-platform GUI framework enabling native desktop applications with modern UI components
\- Uvicorn (ASGI) - High-performance async server supporting concurrent WebSocket connections and streaming responses
\- Pydantic v2 - Runtime type validation and data serialization with minimal overhead
\- Transformers 4.51+ - Hugging Face library providing standardized model loading and tokenization across architectures
***Programming Libraries Used and Their Advantages***
***AI/ML Processing:***
\- sentence-transformers - Pre-trained embedding models for semantic similarity without training overhead
\- semantic-router - Intent classification routing queries to appropriate models, reducing compute costs
\- dspy-ai - DSPy framework for composable LLM programs and prompt optimization
\- tiktoken - Accurate OpenAI-compatible token counting for context window management
\- faster-whisper - Optimized speech-to-text with 4-8x speedup over vanilla Whisper
\- f5-tts - Text-to-speech synthesis for voice interface capabilities
***Document Processing:***
\- PyMuPDF - Fast PDF text extraction with minimal memory footprint
\- python-docx - Native Word document parsing without Microsoft Office dependencies
\- pytesseract + ocrmypdf - OCR capabilities for extracting text from images and scanned PDFs
\- openpyxl - Excel file processing for spreadsheet data ingestion
***System Utilities:***
\- watchdog - File system monitoring for automatic document ingestion on file changes
\- python-dotenv - Environment variable management for secure configuration
\- sounddevice - Low-latency audio I/O for real-time voice interaction
***Development:***
\- pytest + plugins - Comprehensive testing framework with coverage reporting
\- black + isort - Automated code formatting ensuring consistent style
\- mypy - Static type checking catching bugs before runtime
***Optimizations Used and Their Advantages***
***Memory Optimizations***:
\- Paged KV Cache (256 tokens/block) - 20-40% memory savings vs continuous allocation through block reuse and copy-on-write
\- Hash-based Prefix Caching - Shared blocks for identical prompt prefixes reduce redundant memory usage
\- Three-Tier Memory Architecture - Hot/warm/cold tiers with surprise-based scoring minimize vector search overhead
\- Block Pooling & Recycling - Eliminates repeated allocation/deallocation overhead
***Inference Optimizations:***
\- Speculative Decoding - 2-4x throughput improvement by generating and verifying multiple tokens in parallel
\- Flash Attention 2 - 3-5x faster prefill, 2-3x faster decode with O(N) vs O(N²) memory usage
\- Custom Triton Kernels - Fused RMSNorm+Residual (2-3x speedup) and SiluAndMul (1.5-2x speedup) reduce kernel launch overhead
\- Paged Attention Kernel - 40-50% reduction in memory bandwidth through optimized KV cache access
***Scheduling Optimizations:***
\- Continuous Batching - Dynamic batching with separate prefill/decode phases maximizes GPU utilization
\- Token Budget Management - Prevents OOM by preemptively managing batch sizes
\- Semantic Query Routing - Routes simple queries to smaller models, reserving large models for complex tasks
\- Context Window Management - Automatic summarization at 80% capacity prevents context overflow
***Processing Optimizations:***
\- Chunking Strategy (512 tokens, 50 overlap) - Balances retrieval granularity with context preservation
\- Multiple Eviction Policies (LRU/LFU/FIFO) - Workload-adaptive caching strategies optimize hit rates
***Primary Programs (Core Thrust)***
1. Archive-AI Backend Server (Archive-AI/web/server.py)
\- Main API server coordinating memory, routing, and inference
\- Provides REST (/api/chat) and WebSocket (/ws/chat) interfaces
\- Handles document ingestion and memory statistics
2. Vorpal Engine LLM Server (Vorpal\_Engine/serve\_openai.py)
\- High-performance OpenAI-compatible LLM serving engine
\- Implements cutting-edge optimizations (Flash Attention, speculative decoding, paged KV cache)
\- Provides /v1/chat/completions and /v1/completions endpoints
***3. ClaraGPT GUI Frontend (ClaraGPT-GUI-Frontend/main.py)***
\- Desktop interface with real-time cognitive load visualization
\- Streaming chat with drag-and-drop document ingestion
\- Memory tier visualization and agent status monitoring
***4. Archive Orchestrator (Archive-AI/brain/orchestrator.py)***
\- Central coordinator integrating memory, routing, and LLM inference
\- Supports both API mode and interactive CLI mode
***5. System Startup Script (start\_all.sh)***
\- One-command initialization of entire stack (Vorpal → Archive → GUI)
This is a production-grade AI system combining state-of-the-art LLM serving (Vorpal Engine) with intelligent memory management (Archive-AI) and a polished desktop interface (ClaraGPT), achieving 2-5x performance improvements through advanced optimization techniques. | 2025-12-25T04:02:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pv4snk/archiveai_tech_stack/ | david_jackson_67 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv4snk | false | null | t3_1pv4snk | /r/LocalLLaMA/comments/1pv4snk/archiveai_tech_stack/ | false | false | self | 0 | null |
Setting up a new system to run Gemma 4 on Christmas Day. | 0 | I'm using a GPU with 22.5GB of VRAM to run the Gemma 4 that we have at home for Christmas. | 2025-12-25T03:37:52 | CodeAnguish | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv4e6a | false | null | t3_1pv4e6a | /r/LocalLLaMA/comments/1pv4e6a/setting_up_a_new_system_to_run_gemma_4_on/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ub04ddjit99g1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=108&crop=smart&auto=webp&s=cdc421ab7f08c745959773b661c97fe3c40eb56b', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=216&crop=smart&auto=webp&s=f6ee614876f06fb9d1f7e7fd9372b601f1c87ac1', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=320&crop=smart&auto=webp&s=165cda56f84d6560a2e487567e2eaad6513759e1', 'width': 320}, {'height': 650, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=640&crop=smart&auto=webp&s=40339199ebeb096291f437e6c5afcce2fda529f6', 'width': 640}, {'height': 975, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=960&crop=smart&auto=webp&s=fce7925b2b6b58e889d8162c940832b06fb76304', 'width': 960}, {'height': 1097, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?width=1080&crop=smart&auto=webp&s=00e455662b8aa9e2e591afd6d75f13288a9d6d82', 'width': 1080}], 'source': {'height': 1097, 'url': 'https://preview.redd.it/ub04ddjit99g1.jpeg?auto=webp&s=0656fe4145cf8c9f4ee6992223bd25b7dfb2df78', 'width': 1080}, 'variants': {}}]} | |
Speed vs. Substance: Is Sparse Attention Making LLMs "Dumber"? | 14 | Hey r/LocalLLaMA, my first post!!
I've been digging into the latest advancements in attention mechanisms, and it's fascinating how the field is evolving. We're seeing a clear trend towards efficiency: methods like DeepSeek's DSA ([DeepSeek Sparse Attention](https://arxiv.org/pdf/2512.02556)) and [Qwen's Gated Attention](https://arxiv.org/pdf/2505.06708) are revolutionizing inference speed by selectively focusing on "important" tokens.
The core idea is brilliant: instead of processing every single token in a sequence, these models use a "lightning indexer" (DeepSeek) or a gating mechanism (Qwen) to filter out less relevant information. This drastically reduces computational complexity, allowing for faster responses and better handling of long contexts.
However, this efficiency comes with a question that's been nagging me: are we potentially sacrificing some of the model's ability to grasp the full nuance of a prompt?
The Qwen paper, for instance, introduces "Gated Attention" which introduces input-dependent sparsity. While this mitigates the "attention sink" problem and improves training stability, it inherently means the model is not considering all tokens equally. Similarly, DeepSeek's DSA uses a top-k selection mechanism, effectively creating a "sparse" view of the input.
I find myself wondering: when a model is trained to ignore a significant portion of the input by design, does it lose some of the subtle connections or contextual understanding that a fully dense attention mechanism might capture? The papers show clear benefits in speed and stability, but I'm curious about the qualitative impact.
Has anyone else noticed a difference in how these newer, sparse-attention models "understand" complex prompts compared to their dense-attention predecessors? I'm not saying it's a definitive loss, but it feels like there might be a subtle trade-off happening here.
What are your thoughts? Am I overthinking this, or is there a genuine shift in how these models process information?
Cheers,
https://preview.redd.it/m5ir80osr99g1.png?width=1255&format=png&auto=webp&s=2e9955658e9431c22f2b613339444bca8e572a2d
| 2025-12-25T03:31:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pv4aug/speed_vs_substance_is_sparse_attention_making/ | madSaiyanUltra_9789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv4aug | false | null | t3_1pv4aug | /r/LocalLLaMA/comments/1pv4aug/speed_vs_substance_is_sparse_attention_making/ | false | false | 14 | null | |
llama.cpp compile error: ptxas fatal : Ptx assembly aborted due to errors | 6 | Fist of all I want to wish you all a very Merry Christmas and hope you are all having an amazing time with your family and friends. hopefully none of you are reading this on Christmas Eve/Christmas day.
I am writing this on Christmas Eve/Christmas day because I wanted to spin up my LLM server because a couple of my visiting nephews told me they wanted to play around with it and in preparation for tomorrow this catastrophe unfolded:
Nothing changed at my end regarding cuda/nvidia/python.This system has been intact and in its current state for several months now working as expected.
Have been cloning and compiling llama.cpp from github for several months now with no issues. But just today, I hadn't used the server this entire week due to being busy with all the family being down for the holidays but as I mentioned at beginning some of the nephews want to mess about with the LLM server so before I go to sleep I decided to spin it up and out of habit recompiled llama.cpp from source but now it consistently fails at around 37% with
`ptxas fatal : Ptx assembly aborted due to errors`
preceded by a ton of these:
ptxas /tmp/tmpxft_0000a2b6_00000000-6_mmq-instance-mxfp4.compute_120.ptx, line 76364; error : Feature '.kind::mxf4' not supported on .target 'sm_120'
ptxas /tmp/tmpxft_0000a2b6_00000000-6_mmq-instance-mxfp4.compute_120.ptx, line 76364; error : Feature '.block_scale' not supported on .target 'sm_120'
ptxas /tmp/tmpxft_0000a2b6_00000000-6_mmq-instance-mxfp4.compute_120.ptx, line 76364; error : Feature '.scale_vec::2X' not supported on .target 'sm_120'
I did think maybe I have unknowingly broken something in my system (ubuntu 24.04) as i have had a lot of trouble in the past with cuda/pytorch/nv drivers. I run a mix of RTX 3090/5090 (the 5090 is what I believe the compute\_120 is relating to) and am on driver version 580.105, cuda 13
I do specify arch 86 and 120 in my cmake build. and up till now have never experienced any build errors or issues running mxfp4 (gptoss) on this 3090/5090 server.
So I decided to checkout a previous known good build see if something in recent llama.cpp changes were causing this or something in my system. Went back a few days worth of releases and `git checkout b7492` results in a successful build with no failures or errors. But when I checkout latest build I'm back to the `ptxas fatal errors`.
Is this something worth reporting as an issue on github or do I specifically need to identify first at which exact build release it breaks? | 2025-12-25T02:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pv3lap/llamacpp_compile_error_ptxas_fatal_ptx_assembly/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv3lap | false | null | t3_1pv3lap | /r/LocalLLaMA/comments/1pv3lap/llamacpp_compile_error_ptxas_fatal_ptx_assembly/ | false | false | self | 6 | null |
End-of-year thought: local LLMs change how honest you can be | 15 | One thing I didn’t expect after switching to local models:
I think more honestly when nothing leaves my machine.
This week I’ve been reflecting on projects and ideas using a local LLM alongside **Saylo** for visual structuring — no logs, no cloud context, just slow thinking.
Curious if others feel this too: does running models locally change *what* you’re willing to explore? | 2025-12-25T02:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pv37mh/endofyear_thought_local_llms_change_how_honest/ | Ok-Contact-8753 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv37mh | false | null | t3_1pv37mh | /r/LocalLLaMA/comments/1pv37mh/endofyear_thought_local_llms_change_how_honest/ | false | false | self | 15 | null |
FYI GLM 4.7 is way more censored than 4.6. | 142 | 4.6 was excellent at adult writing. | 2025-12-25T02:08:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pv2wwm/fyi_glm_47_is_way_more_censored_than_46/ | bigman11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv2wwm | false | null | t3_1pv2wwm | /r/LocalLLaMA/comments/1pv2wwm/fyi_glm_47_is_way_more_censored_than_46/ | false | false | self | 142 | null |
AI killing the planet? My LocalLLama is more efficient than my lunch! | 0 | People love to act like data centers are "drinking" the world dry, but it’s actually the opposite: **silicon is way more water-efficient than biology.**
Most modern AI facilities use "closed-loop" cooling, meaning they recycle the same water over and over like a car radiator, rather than wasting it.
Even when they do use evaporation, it’s a drop in the bucket compared to the "hidden" water cost of humans.
For example, the water it takes to grow the food for a single burger could power **thousands** of AI queries.
So, if you’re posting a complaint about AI water usage while eating a snack and drinking a coffee, you’re actually consuming more water to "think" and "post" than the data center is using to run the entire website. | 2025-12-25T02:03:07 | budz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv2tvj | false | null | t3_1pv2tvj | /r/LocalLLaMA/comments/1pv2tvj/ai_killing_the_planet_my_localllama_is_more/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qs4u6t4bc99g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?width=108&crop=smart&auto=webp&s=fa477580f9d734d46899b37fba544d30d193a1f1', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?width=216&crop=smart&auto=webp&s=9025d8b3287b3bc7828b27c1d204c3c51854c13e', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?width=320&crop=smart&auto=webp&s=20821da125dbcdeeb6559131a6d2b72a9c26a752', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?width=640&crop=smart&auto=webp&s=820a3df02e671cbdbb0e03ff2ac7564393963e21', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?width=960&crop=smart&auto=webp&s=1e8fc30d70c846b81dbbcb46b4ce68d0f7ac59ef', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/qs4u6t4bc99g1.jpeg?auto=webp&s=fc88a367fc76285cac7f01c0dae74886db801030', 'width': 1024}, 'variants': {}}]} | |
Day 17: 21 Days of Building a Small Language Model: Mixture of Experts | 21 | Welcome to Day 17 of 21 Days of Building a Small Language Model. The topic for today is Mixture of Experts (MoE), one of the most fascinating architectures in modern language models. Yesterday we explored optimizers and how they shape the learning process. Today, we'll discover how MoE enables models with trillions of parameters while keeping compute costs manageable, but also why it might not be the right choice for everyone, especially those building smaller models.
# Scaling Problem
Before we dive into MoE, let's understand the fundamental problem it addresses. The scaling laws of neural networks tell us something powerful: more parameters lead to better performance. This relationship has been validated across countless experiments, from small models with millions of parameters to massive models with hundreds of billions. As we increase parameters, models demonstrate improved capabilities in language understanding, reasoning, coding, and mathematics.
But here's the catch: in dense models, where all parameters are active for every token, compute and memory requirements grow quadratically with model size. This creates an unsustainable trajectory. A model with 1 billion parameters requires a certain amount of compute per token. A model with 10 billion parameters requires roughly 100 times more compute. A model with 100 billion parameters requires roughly 10,000 times more compute. And a model with 1 trillion parameters? That would require roughly 1,000,000 times more compute than the 1 billion parameter model.
This quadratic scaling makes inference prohibitively expensive for trillion-parameter models. Even with the most advanced hardware, running inference on a dense trillion-parameter model would be so slow and energy-intensive that it would be impractical for real-world applications. The memory requirements alone would be enormous: a trillion-parameter model stored in FP32 would require approximately 4 terabytes of memory just for the weights, before considering activations, KV cache, and other runtime memory needs.
This is the problem MoE solves: how do we increase model size without increasing compute per token?
# MoE solution: Sparse activation
Mixture of Experts solves this, instead of using all parameters for every token, we can build models with many specialized experts and route each token to only a small subset of these experts.
https://preview.redd.it/1goi8245a99g1.png?width=1276&format=png&auto=webp&s=7639e28fb21096624ebca4c7a785b38012a0a305
Here's how it works: instead of having a single feed-forward layer in each transformer block, an MoE layer contains multiple expert networks, each with the same architecture but different learned parameters. These experts automatically specialize during training: one expert might learn to handle mathematical reasoning, another might specialize in code generation, another in natural language understanding, and so on.
[Ref Expert specializations observed in MoE models](https://preview.redd.it/4xeyu245a99g1.png?width=1750&format=png&auto=webp&s=d4fb91bce8336aabd07d50a30f784eda022491f9)
For each token, the MoE architecture uses a routing mechanism (called a gating network) to determine which experts should process that token. Typically, only 1 or 2 experts are activated per token, even when the model contains dozens or hundreds of experts. This means that while the total model capacity scales with the number of experts, the compute per token remains similar to a dense model with a single feed-forward layer.
[Ref: Hugging Face](https://preview.redd.it/tyz19545a99g1.png?width=1176&format=png&auto=webp&s=65549e254c5b5716823a0bc9d9a2855703dcb081)
If we have 8 experts and activate 2 per token, we're using roughly the same compute as a dense model, but we have 8 times the total capacity. A model with 64 experts has roughly 64 times the parameters. Modern MoE models like Mixtral 8x7B have 8 experts, while models like Qwen3 235B A22B have many more experts, allowing them to reach hundreds of billions of parameters while maintaining reasonable inference costs.
# Components of MoE
Let's break down the key components that make MoE work:
# Experts
The experts are specialized feed-forward networks. Each expert is identical in architecture to the feed-forward layer that would appear in a standard transformer block, but they have different learned weights. During training, experts naturally develop specializations without explicit supervision. Researchers have observed fascinating patterns:
* **Punctuation Experts**: Some experts become highly specialized in processing punctuation marks: commas, periods, semicolons, colons, question marks, and parentheses.
* **Verb Experts**: Others specialize in processing verbs, particularly past tense and participle forms like "died", "falling", "identified", "fell", "closed", "left".
* **Number Experts**: Some experts process numerical digits and spelled-out numbers, enabling the model to handle quantitative information more effectively.
* **Proper Name Experts**: Others specialize in recognizing and processing proper nouns and named entities.
This automatic specialization is one of the most remarkable aspects of MoE models: the routing mechanism and training process automatically discover which experts should handle which types of inputs.
# Gating Network
The gating network is the component responsible for deciding which experts should process each token. It acts as a router, taking the token's representation as input and producing a score distribution over all available experts. The expert with the highest score (or the top k experts with the highest scores) are then activated to process that token.
The gating network is usually implemented as a simple linear projection followed by a softmax activation. During training, this learns to assign higher scores to experts that are most relevant for each token. For example, if a token represents a mathematical expression, the gating network should learn to assign high scores to experts that have specialized in mathematical reasoning.
# Routing Strategies
Different routing strategies determine how experts are selected:
* **Top 1 Routing**: Select only the expert with the highest score. This is the most computationally efficient but less flexible.
* **Top 2 Routing**: Activate the top 2 experts per token. This is the most common approach, providing a good balance between capacity and efficiency.
* **Hash Based Routing**: Some models use hash based routing, where tokens are deterministically assigned to experts based on a hash function. This ensures perfect load balancing but may be less flexible than learned routing.
# My Experience
Now, let me share what I've learned from actually working with MoE architectures
* MoE models are significantly more complex to train than dense models. The routing mechanism introduces additional hyperparameters that need careful tuning: the number of experts, the number of experts to activate per token (k), the capacity factor (how many tokens each expert can handle), and the weight of the load balancing loss. Finding the right combination requires extensive experimentation.
* The training process is also less stable than dense models. Expert collapse, where some experts stop receiving tokens and effectively become unused, is a constant risk that requires careful monitoring and intervention. I've seen training runs where everything looks fine for thousands of steps, then suddenly one expert stops receiving tokens, and the model's performance degrades.
* The load balancing loss adds another component to the training objective, and finding the right weight for this loss term is crucial. Too high, and the model may sacrifice task performance for load balancing. Too low, and expert collapse may occur. This delicate balance makes training MoE models more challenging and time-consuming than training equivalent dense models.
* MoE models require significantly more memory than dense models of similar active capacity. While only a subset of experts are active per token, all expert parameters must be stored in memory. A model with 8 experts has roughly 8 times the parameters of a dense model, even though only 2 experts are active per token.
* When I first tried to train an MoE model, I was surprised by how quickly I ran out of memory. The model had the same active capacity as a dense model I'd trained before, but it required nearly 8 times the memory. This forced me to reduce batch size, use gradient checkpointing, and implement more aggressive memory optimizations, all of which added complexity to the training pipeline.
# When MoE makes sense
Based on my experience and the insights, here's when MoE makes sense:
**Use MoE when:**
* You need massive model capacity (hundreds of billions or trillions of parameters)
* You have limited compute per token but can afford the memory overhead
* You're building models at the scale of Mixtral or Qwen3
* The benefits of specialization outweigh the training and deployment complexity
**Don't use MoE when:**
* You're building small models (less than 1B parameters), dense models are simpler and often perform better
* You need consistent, low latency inference, the variability can be problematic
* You have limited memory, MoE requires storing all experts even though only a subset are active
* You need easy transfer learning, expert specializations may not transfer well
* You're just starting out, the complexity isn't worth it unless you need the scale
# Summary
Today we explored Mixture of Experts, one of the most powerful and complex architectures in modern language models. We learned how MoE enables massive scale through sparse activation, how experts automatically specialize, and how routing mechanisms decide which experts process each token.
But we also explored the hidden costs: training complexity, variable inference latency, memory overhead, communication challenges, and the risk of expert collapse. These costs are real, and they're why resources like the Smol Training Playbook recommend dense architectures for smaller models.
The key takeaway is that MoE is a tool for a specific problem: scaling to massive sizes where dense alternatives are infeasible. For smaller models, dense architectures are often the better choice: simpler, more stable, and often better performing.
| 2025-12-25T01:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pv2mp4/day_17_21_days_of_building_a_small_language_model/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv2mp4 | false | null | t3_1pv2mp4 | /r/LocalLLaMA/comments/1pv2mp4/day_17_21_days_of_building_a_small_language_model/ | false | false | 21 | null | |
Ten former Samsung employees arrested for tech leak to CXMT | 4 | While we might enjoy cheap RAM from CXMT in the near future, we should keep in mind that this is made possible partly by some shady practices of CXMT.
[https://www.tomshardware.com/tech-industry/semiconductors/ten-former-samsung-employees-arrested-for-industrial-espionage-charges-for-giving-china-chipmaker-10nm-tech-executives-and-researchers-allegedly-leaked-dram-technology-to-china-based-cxmt-resulting-in-trillions-of-losses-in-korean-won](https://www.tomshardware.com/tech-industry/semiconductors/ten-former-samsung-employees-arrested-for-industrial-espionage-charges-for-giving-china-chipmaker-10nm-tech-executives-and-researchers-allegedly-leaked-dram-technology-to-china-based-cxmt-resulting-in-trillions-of-losses-in-korean-won) | 2025-12-25T01:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pv2kcl/ten_former_samsung_employees_arrested_for_tech/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv2kcl | false | null | t3_1pv2kcl | /r/LocalLLaMA/comments/1pv2kcl/ten_former_samsung_employees_arrested_for_tech/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=108&crop=smart&auto=webp&s=832493fcfec36452aeef661cde5328fcdf229b6c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=216&crop=smart&auto=webp&s=dbaf398cbddea3688ce55be418eacf0253eb1529', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=320&crop=smart&auto=webp&s=d2b5b2242fa17a2b5aa9f74d3441fb19bfa92469', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=640&crop=smart&auto=webp&s=d964e1c66c2368944c55b60c6ab2b7313e003982', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=960&crop=smart&auto=webp&s=7972bbe645ae17ceabbc11ea54da9bddc5f41dd8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?width=1080&crop=smart&auto=webp&s=539df4376f483da95f4fc593f1dbc34a6a215e62', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IC4jlmD2D8X3mCoUH2E_VKZIecOH30XdxEGM7I5otIA.png?auto=webp&s=4f85ab17af2a5320b096b5cb4b64427ec23b835b', 'width': 1920}, 'variants': {}}]} |
Why are my local LLMs all sucking at tool use? | 1 | [removed] | 2025-12-25T01:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pv2fmd/why_are_my_local_llms_all_sucking_at_tool_use/ | Guilty_Nerve5608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv2fmd | false | null | t3_1pv2fmd | /r/LocalLLaMA/comments/1pv2fmd/why_are_my_local_llms_all_sucking_at_tool_use/ | false | false | self | 1 | null |
All of the major open weight labs have shifted to large params general models instead of smaller, more focused models. By this time next year, there won’t be much “local” about this sub unless the paradigm shifts to smaller models good at specific domains. | 219 | It’s happening very openly but very subtly. The champions of open weight models are slowly increasing their sizes to the point a very small portion of this sub can run them locally. An even smaller portion can run them as benchmarked (no quants). Many are now having to resort to Q3 and below, which will have a significant impact compared to what is marketed. Now, without any other recourse, those that cannot access or afford the more capable closed models are paying pennies for open weight models hosted by the labs themselves. This is the plan of course.
Given the cost of memory and other components may of us can no longer afford even a mid tier upgrade using modern components. The second hand market isn’t fairing much better.
The only viable way forward for local tinkerers are models that can fit between 16 to 32GB of vram.
The only it way most of us will be able to run models locally will be to fine tune, crowd fund, or … ? smaller more focused models that can still remain competitive in specific domains vs general frontier models.
A capable coding model. A capable creative writing model. A capable math model. Etc.
We’re not going to get competitive local models from “well funded” labs backed by Big Co. A distinction will soon become clear that “open weights” does not equal “local”.
Remember the early days? Dolphin, Hermes, etc.
We need to go back to that. | 2025-12-25T01:34:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pv2cnz/all_of_the_major_open_weight_labs_have_shifted_to/ | LocoMod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv2cnz | false | null | t3_1pv2cnz | /r/LocalLLaMA/comments/1pv2cnz/all_of_the_major_open_weight_labs_have_shifted_to/ | false | false | self | 219 | null |
can you suggest a local opensource AI Memory System that can store the chat corss anywhere? | 0 | i want to build a second me. if there any local opensource AI memory can store the chats cross CLAUDE CODE、CURSOR、WEB CHAT and any llm?i have tried some but not powerful enough | 2025-12-25T01:00:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pv1s0g/can_you_suggest_a_local_opensource_ai_memory/ | Original_Awareness53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv1s0g | false | null | t3_1pv1s0g | /r/LocalLLaMA/comments/1pv1s0g/can_you_suggest_a_local_opensource_ai_memory/ | false | false | self | 0 | null |
Question on existing hardware and alexa replacement for home assistant. | 2 | I recently started building out home assistant to replace using alexa for my home automation. I picked up a geekom it15 that I am using with proxmox and ha. I am planning on running frigate with ai inference as well. I want to set up voice to replace alexa and would love to keep it local. I know I can use the voice preview but would like to make it a little smarter and able to answer questions as well. Mainly because my daughters like asking alexa questions about things. So I stumbled on a great deal for a geekom gt2 with 32gb ram running the intel core ultra 9 285h. I dont really have anything else I need to use it for so I was hoping I could use it to run a llm. I have been looking through different posts and the wiki but I guess I am not really finding much on what I could reasonably run on this machine. Would it be feasible or should I just go buy a decent graphics card and put it in my old am4 machine. I really like the size of the gt2 since I could set it up right next to the it15 and it wouldnt be obnoxious. Thanks in advance. | 2025-12-25T00:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pv1raa/question_on_existing_hardware_and_alexa/ | mickeybob00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv1raa | false | null | t3_1pv1raa | /r/LocalLLaMA/comments/1pv1raa/question_on_existing_hardware_and_alexa/ | false | false | self | 2 | null |
We keep optimizing LLM inference. What if most requests don’t need a model call at all? | 0 | I’ve been working on a systems paper that questions a pretty basic assumption in LLM serving:
That every request deserves a transformer inference.
Instead of optimizing how transformers run, we looked at when and if they actually even need to.
We introduce **Meaning-First Execution (MFEE)**, a lightweight control-plane layer that sits upstream of inference and routes requests into four paths:
**RENDER** – run the model
**DIRECT** – serve from cache or deterministic logic
**NO_OP** – do nothing
**ABSTAIN** – refuse safely
On a representative replay workload (1,000 mixed prompts), MFEE avoided transformer execution **~75% of the time**, while preserving **100% output equivalence** whenever the model was invoked (deterministic decoding).
Key point:
All savings come from **avoided execution**, not faster generation, quantization, or model tricks. This is orthogonal to speculative decoding, caching, MoE, etc. Those optimize execution. MFEE asks whether execution should happen at all.
At modern serving scales, this reframing matters economically as well as technically. Avoiding execution even a fraction of the time translates into *order-of-magnitude* differences in cost and energy, suggesting that the decision to invoke a model may be a first-order optimization target, not a secondary concern.
We also validated the same harness on modern models (e.g., Gemma 2 9B) and ran baseline comparisons showing simple heuristics hit unavoidable correctness/avoidance tradeoffs.
This isn’t about replacing transformers. It’s about acknowledging that inference is often the most expensive possible response, and frequently unnecessary in real traffic (retries, follow-ups, safety refusals, known facts, etc.).
Paper + evaluation harness are public here:
https://zenodo.org/records/18050162
Genuinely curious how folks here think about and react to this framing.
Where would this break in your serving stack? Thoughts? | 2025-12-25T00:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pv1qll/we_keep_optimizing_llm_inference_what_if_most/ | anima-core | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv1qll | false | null | t3_1pv1qll | /r/LocalLLaMA/comments/1pv1qll/we_keep_optimizing_llm_inference_what_if_most/ | false | false | self | 0 | null |
Planning to upgrade from 3060 to 5070 Ti for Local AI. Thoughts? | 26 | RAM prices have been crazy lately, right? I have a feeling other PC parts are going to skyrocket next year too, so I want to upgrade before that happens.
I run local AI models like Stable Diffusion, Gemma 3, and Qwen at home. I use them for fun, but also to assist with my hobby game development.
Currently, I'm rocking an RTX 3060 12GB.
Honestly, I'd love to go straight for the 5090, but I fund my PC upgrades purely through ad revenue from my games... and the budget just isn't there yet.
So I'm eyeing the 5070 Ti.
It seems like the best bang for the buck right now. I'm expecting a slight VRAM bump and maybe a 3-4x speed increase thanks to the higher core count.
Do you guys think the 5070 Ti is the right move in this situation? | 2025-12-25T00:45:02 | shoonee_balavolka | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pv1ibv | false | null | t3_1pv1ibv | /r/LocalLLaMA/comments/1pv1ibv/planning_to_upgrade_from_3060_to_5070_ti_for/ | false | false | default | 26 | {'enabled': True, 'images': [{'id': 'q5nnyofoy89g1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=108&crop=smart&auto=webp&s=2afc51aae829f3de3a5d6fe833e2543981e8cf40', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=216&crop=smart&auto=webp&s=1336fbd1889cc05b307c19d507942ac5b2d46f54', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=320&crop=smart&auto=webp&s=9d387ce440b18b63461a39351fea94e8c2c22c8a', 'width': 320}, {'height': 477, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=640&crop=smart&auto=webp&s=079138e819f58946f12885281101f247e3a30aa9', 'width': 640}, {'height': 716, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=960&crop=smart&auto=webp&s=9ab8dbce1394f34e17fa4cc5ec7b1d724b8cd9e8', 'width': 960}, {'height': 806, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?width=1080&crop=smart&auto=webp&s=9fe77288499fec890e3a1a9c0bb6686bfebf143f', 'width': 1080}], 'source': {'height': 1792, 'url': 'https://preview.redd.it/q5nnyofoy89g1.png?auto=webp&s=65461c476cf81f5422ac52dd7d49378c5b88c09f', 'width': 2400}, 'variants': {}}]} | |
Dec 2025 - Top Local Models | 27 | After my last quarterly "new AI models are so exciting" burnout I'm sensing there's enough improvement to play with new things again. Help me out - what's your current favorites and VRAM requirements. Obviously we're not talking Claude Sonnet 4.5 or GPT 5.2 levels but how you feeling they compare to them. Whatever use cases you would like to share. My favorites are agentic coding, image gen and image editing, Claude like research with web access, computer automation - fix problem X, setup Y, etc. Used Claude Code and Opencode for that.
Loaded question but I bet many would appreciate as landscape is changing so fast!
If enough data, based on the comments, I could organize in a nice format like by VRAM tier, use case. Open to suggestions.
Marry Christmas 🎄 | 2025-12-25T00:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pv1fbu/dec_2025_top_local_models/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv1fbu | false | null | t3_1pv1fbu | /r/LocalLLaMA/comments/1pv1fbu/dec_2025_top_local_models/ | false | false | self | 27 | null |
MoE.. will OS/Local 32GB to 96GB get as good at coding as current frontier models? | 16 | When MoE came along.. I was hoping we'd see smaller more specialized models that we could load in less VRAM GPUs. Where as the frontier models are clearly massive, they also contain a crap ton of info about literally everything. I just want a really good coding LLM in 3 or 4 languages, six tops. I know about how the "verbose" LLMs give coding more capabilities, I get it to some extent. But I can't help but wonder if we'll see 32GB to 96GB models sooner than later that can do coding on par with what Opus 4.5, GPT 5.2, Gemini 3, etc do today? I've read a few posts about the 120b air, and similar models that can run in 32GB GPUs with slow but somewhat almost usable results, but typically those are Q4 or worse. My growing but still limited knowledge of all this tells me we want Q8 or FP8/16 models for more accurate responses, though I've read that the diff between Q8 and FP8/16 is minimal.
I've played around with Qwen and a few other 7b/14b/etc models and they are a) not bad but not great, and b) lack a TON of updated data that even with context7 and pasting specs, etc.. still doesn't fill the gap.
So I am curious what it will take to see frontier coding capabilities in much smaller models we can load and run locally. Are we years from that.. or is China's quickly growing OS models like GLM and DeepSeek getting close to that level now where we might see pretty similar results to frontier models specifically in targeted areas like coding, design, tests, etc?
| 2025-12-25T00:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pv1a85/moe_will_oslocal_32gb_to_96gb_get_as_good_at/ | Tiny-Sink-9290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv1a85 | false | null | t3_1pv1a85 | /r/LocalLLaMA/comments/1pv1a85/moe_will_oslocal_32gb_to_96gb_get_as_good_at/ | false | false | self | 16 | null |
2012 system running LLM using Llama with Vulkan backend | 1 | Holidays gave me some time off so dusted off the old Bulldozer system and ran a few benchmarks using a few Nvidia [GTX-1070](https://www.techpowerup.com/gpu-specs/geforce-gtx-1070.c2840) 8GB GPUs.
2012 System: AMD FX(tm)-8350 CPU, ASUS M5A97 R2.0 motherboard, 32gb DDR3 memory.
GPU: Nvidia [GTX-1070](https://www.techpowerup.com/gpu-specs/geforce-gtx-1070.c2840) using Driver Version: 580.119.02
Llama.cpp with Vulkan backend for Linux: build: 54132f1b1 (7531)
CachyOS fresh install. Best part the Nvidia drivers loaded right out the box. I was running `llama-bench` minutes after installation.
[Freshly installed CachyOS with Best DE KDE!](https://preview.redd.it/dxwy5njns89g1.png?width=849&format=png&auto=webp&s=87cedfd2762f064e7ecf2bb2d526b4701a503055)
CachyOS
load_backend: loaded RPC backend from /run/media/czar33/team_ssd/team_llm/vulkan/llama-b7531/libggml-rpc.so ggml_vulkan: Found 1 Vulkan devices: ggml_vulkan: 0 = NVIDIA GeForce GTX 1070 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none load_backend: loaded Vulkan backend from /run/media/czar33/team_ssd/team_llm/vulkan/llama-b7531/libggml-vulkan.so load_backend: loaded CPU backend from /run/media/czar33/team_ssd/team_llm/vulkan/llama-b7531/libggml-cpu-sandybridge.so
First lets do the standard [Vulkan benchmark](https://github.com/ggml-org/llama.cpp/discussions/10879) using [llama-2-7b.Q4\_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf). Here is full power at 150 watts.
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|llama 7B Q4\_0|3.56 GiB|6.74 B|Vulkan|99|pp512|321.17 ± 1.13|
|llama 7B Q4\_0|3.56 GiB|6.74 B|Vulkan|99|tg128|42.53 ± 0.15|
[build: 54132f1b1 (7531)](https://github.com/ggml-org/llama.cpp/releases/download/b7531/llama-b7531-bin-ubuntu-vulkan-x64.tar.gz)\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
Executed in 42.83 secs fish external
Now lets limit power to 101 watts
`sudo nvidia-smi -i 0 -pl 101` llama-2-7b.Q4\_0.gguf
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|llama 7B Q4\_0|3.56 GiB|6.74 B|Vulkan|99|pp512|322.09 ± 0.41|
|llama 7B Q4\_0|3.56 GiB|6.74 B|Vulkan|99|tg128|39.55 ± 0.07|
[build: 54132f1b1 (7531)](https://github.com/ggml-org/llama.cpp/releases/download/b7531/llama-b7531-bin-ubuntu-vulkan-x64.tar.gz)
So reducing power by almost 35% you only loss about 5% inference speed. This lets me run 3 GTX-1070 on a 500 watt power supply.
Now lets try out a few different models. Sorted by parameters size.
=======================================
**DeepSeek-R1-0528-Qwen3-8B-UD-Q8\_K\_XL.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|qwen3 8B Q8\_0|10.08 GiB|8.19 B|pp512|184.10 ± 0.08|
|qwen3 8B Q8\_0|10.08 GiB|8.19 B|tg128|19.65 ± 0.03|
**aquif‑3.5‑A4B‑Think.Q4\_K\_M.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|qwen3moe ?B Q4\_K - Medium|6.87 GiB|12.09 B|pp512|77.55 ± 1.24|
|qwen3moe ?B Q4\_K - Medium|6.87 GiB|12.09 B|tg128|37.22 ± 0.13|
**qwen2.5‑14b‑instruct‑q6\_k.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|qwen2 14B Q6\_K|11.29 GiB|14.77 B|pp512|100.80 ± 0.21|
|qwen2 14B Q6\_K|11.29 GiB|14.77 B|tg128|16.58 ± 0.02|
**qwen2.5‑coder‑14b‑instruct‑q8\_0.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|qwen2 14B Q8\_0|14.62 GiB|14.77 B|pp512|112.63 ± 0.21|
|qwen2 14B Q8\_0|14.62 GiB|14.77 B|tg128|12.22 ± 0.01|
**Ling‑Coder‑Lite‑Q4\_K\_M.gguf AND Ring‑lite‑2507.i1‑Q4\_K\_M.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|bailingmoe 16B Q4\_K - Medium|10.40 GiB|16.80 B|pp512|135.51 ± 0.61|
|bailingmoe 16B Q4\_K - Medium|10.40 GiB|16.80 B|tg128|79.87 ± 0.21|
**gpt‑oss‑20b‑GGUF\_gpt‑oss‑20b‑mxfp4.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|gpt‑oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp512|115.84 ± 2.29|
|gpt‑oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg128|46.78 ± 0.11|
**Devstral‑Small‑2‑24B‑Instruct‑2512‑IQ4\_XS.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|mistral3 14B IQ4\_XS - 4.25 bpw|11.89 GiB|23.57 B|pp512|27.63 ± 0.28|
|mistral3 14B IQ4\_XS - 4.25 bpw|11.89 GiB|23.57 B|tg128|6.54 ± 0.01|
**Trinity‑Mini.Q4\_K\_M.gguf**
|model|size|params|test|t/s|
|:-|:-|:-|:-|:-|
|afmoe 26B Q4\_K - Medium|14.73 GiB|26.12 B|pp512|101.57 ± 0.72|
|afmoe 26B Q4\_K - Medium|14.73 GiB|26.12 B|tg128|63.39 ± 0.15|
Qwen3-30B-A3B-IQ4\_XS.gguf ggml\_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
So during RAMageddon you can get similar inference speeds with DDR3 systems. It's all about that GPU Vram! | 2025-12-25T00:20:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pv12to/2012_system_running_llm_using_llama_with_vulkan/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv12to | false | null | t3_1pv12to | /r/LocalLLaMA/comments/1pv12to/2012_system_running_llm_using_llama_with_vulkan/ | false | false | 1 | null | |
How do you handle "Versioning" non-deterministic agent outputs? | 3 | \*Not selling anything\* I'm building a system to audit and version control AI agents (Node/Postgres stack). The goal is to create a "commit" every time an agent makes a decision so it can be audited later (critical for compliance). When you are testing local models, how do you handle reproducibility? If I version the prompt + seed + temperature + model hash, is that enough for a "reliable" audit trail, or is the inherent non-determinism of quantized local models going to make "perfect versioning" impossible? | 2025-12-25T00:18:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pv11fn/how_do_you_handle_versioning_nondeterministic/ | bumswagger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv11fn | false | null | t3_1pv11fn | /r/LocalLLaMA/comments/1pv11fn/how_do_you_handle_versioning_nondeterministic/ | false | false | self | 3 | null |
Built an AI memory system with ACT-R cognitive architecture | 0 | Been working on a memory system for Multi-LLM usage for about 2 years. Wanted to share some technical details since this sub has been helpful. Hopefully will help others with insight to the future of memory for AI.
The core idea: instead of simple vector storage, I implemented ACT-R (the cognitive architecture NASA/DARPA has used for decades). Memories have activation levels that decay over time, and accessing them strengthens recall - like human memory.
Key features:
\- Spreading activation through a knowledge graph
\- Project-aware boosting (active work stays fresh)
\- Disaster recovery (snapshot/rollback your AI's working state)
\- 18 MCP tools, all running locally
No cloud, no subscriptions - your data stays on your machine.
Building toward a Kickstarter launch in January. Happy to answer questions about the architecture or implementation.
Intro video if you want to see it in action: [https://youtu.be/Hj\_1qQfqWUY](https://youtu.be/Hj_1qQfqWUY) | 2025-12-25T00:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pv0uto/built_an_ai_memory_system_with_actr_cognitive/ | NarwhalBackground589 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv0uto | false | null | t3_1pv0uto | /r/LocalLLaMA/comments/1pv0uto/built_an_ai_memory_system_with_actr_cognitive/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7rbuUnHpexYKG8RWIeIde209qz4vb9NCTLICh61rzDo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7rbuUnHpexYKG8RWIeIde209qz4vb9NCTLICh61rzDo.jpeg?width=108&crop=smart&auto=webp&s=838c6c586ef42617821cc536719992445dc39f56', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7rbuUnHpexYKG8RWIeIde209qz4vb9NCTLICh61rzDo.jpeg?width=216&crop=smart&auto=webp&s=3907d2ef37d4433b0cf45950c81847c52fa9bdbc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7rbuUnHpexYKG8RWIeIde209qz4vb9NCTLICh61rzDo.jpeg?width=320&crop=smart&auto=webp&s=cb180a490ab5003f5a6302288d28091b3cbae344', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7rbuUnHpexYKG8RWIeIde209qz4vb9NCTLICh61rzDo.jpeg?auto=webp&s=50179a129297a15d875aa2b5a02ed6e78aefc54f', 'width': 480}, 'variants': {}}]} |
Invoice extraction | 1 | question about locally extracting data from german multiple layout invoices, i use paddleocr to get real clean markdowns, and Text, and Layout extraction, but in the step which i feed it in either llm or Vllm to extract comes always mistakes that changes with the invoice type sometimes qty wrong or take price instead of it, how can i make this system better , is vllm even needed when i use paddleocr or would it be better to have LLM with Reasoning ability? woud it make sense to use RAG maybe or Fine tuning and if Fine tuning is the way anyidea how would be the best way to make a dataset for that since i have all in all 13k invoices to analyse, also ways is it good to make the file header and each line item extraction processes sepearte or feed the whole document to the the llm ? or other ways to divide my document? | 2025-12-24T23:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pv0asi/invoice_extraction/ | Visual_Strawberry276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv0asi | false | null | t3_1pv0asi | /r/LocalLLaMA/comments/1pv0asi/invoice_extraction/ | false | false | self | 1 | null |
ik_llama GLM 4.7 : 8~9 tokens/sec (ubergarm) instead of 4.5~5 tokens/sec (llama.cpp) | 28 | [ik\_llama GLM 4.7](https://preview.redd.it/gfm412vnl89g1.png?width=3108&format=png&auto=webp&s=7d6a804c1515e55a44e102643d74ed1ed29f6e1b)
llama-server.exe --model "C:\\gptmodel\\ubergarm\\GLM-4.7-GGUF\\GLM-4.7-IQ2\_KL-00001-of-00004.gguf" -ger --merge-qkv -ngl 99 --n-cpu-moe 40 -ub 4096 -b 4096 --threads 16 --parallel 1 --host [127.0.0.1](http://127.0.0.1) \--port 8080 --no-mmap --jinja --ctx-size 8192
I also have to try Unsloth, but the boost is remarkable. Tomorrow I'll try more specific rigs (RTX 6000 96GB + Ryzen 5950x + 128GB DDR4 3200. CPU overclocked @ 5GHz). GLM is very sensitive to CPU clock speed. | 2025-12-24T23:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pv093v/ik_llama_glm_47_89_tokenssec_ubergarm_instead_of/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv093v | false | null | t3_1pv093v | /r/LocalLLaMA/comments/1pv093v/ik_llama_glm_47_89_tokenssec_ubergarm_instead_of/ | false | false | 28 | null | |
model: support MiMo-V2-Flash by ngxson · Pull Request #18328 · ggml-org/llama.cpp | 41 | 2025-12-24T23:28:46 | https://github.com/ggml-org/llama.cpp/pull/18328 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pv04uy | false | null | t3_1pv04uy | /r/LocalLLaMA/comments/1pv04uy/model_support_mimov2flash_by_ngxson_pull_request/ | false | false | default | 41 | {'enabled': False, 'images': [{'id': 'ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=108&crop=smart&auto=webp&s=f84523d6263dac898a80dc33b9d123c3d5f048a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=216&crop=smart&auto=webp&s=a0599da4f79bd555dbd6a8ffe38269f3b49140d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=320&crop=smart&auto=webp&s=964218d854929d9a9391e70dd7e5a504eee882e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=640&crop=smart&auto=webp&s=951b663470ca3d64de2becd5786b3d589c6d0ac7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=960&crop=smart&auto=webp&s=365afc24cbbca86ba27313453512dc63fff13366', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?width=1080&crop=smart&auto=webp&s=c29c2db5aba62cfffd65e97cc771c30b14a09d9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ljL76ES0ycKhBmmY2ipEg2qGxnKYFbw_FzMzU74PR0Q.png?auto=webp&s=18283c7ec22f0c504cb9b916c923904e03f86b05', 'width': 1200}, 'variants': {}}]} | |
What is llama.cpp equivalent for image & video gen? | 43 | I've run **llama.cpp** to generate text from GGUF models before on a server without internet. I can just scp the GGUF to the server and run it, even build llama.cpp on it.
Most of the examples I found involve setting up Gradio, using python scripts, and having internet to install python packages or even a MacOS app (I use arch btw!)
Is there a CLI like llama.cpp but for Image & Video generation to type a prompt and output a file? | 2025-12-24T23:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pv022d/what_is_llamacpp_equivalent_for_image_video_gen/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pv022d | false | null | t3_1pv022d | /r/LocalLLaMA/comments/1pv022d/what_is_llamacpp_equivalent_for_image_video_gen/ | false | false | self | 43 | null |
Models with higher sparsity than MoE | 20 | This [paper](https://arxiv.org/abs/2512.09723) has a nice weigh-in on some recent model architectures with potential for extreme sparsity.
- (normal MoE) Sparse Mixture-of-Experts [1:100]
- Memory Layers [≤1:1000]
- Lookup-based Models [≤1:100000]
Having higher sparsity doesn't necessarily imply poor performance, as in the Mixture of Lookup Experts example only the intermediate output is active during inference. The offloaded expert weights don't read from storage/ram (in hybrid setup) which can greatly increase decoding speeds, and be used with minimal bandwidth.
Qwen3-Next and GPT-OSS 120B (Sparse Mixture-of-Experts) are around a 3:100 activation ratio. They may need a new architecture like memory layers if they decide to take it further.
Memory Layers + Lookup-based Model papers to check out:
Memory Layers at Scale (META)
- https://arxiv.org/abs/2412.09764
Ultra-Sparse Memory Network (ByteDance Seed)
- https://arxiv.org/abs/2411.12364
UltraMemV2: Memory Networks Scaling to 120B Parameters with Superior Long-Context Learning (ByteDance Seed)
- https://arxiv.org/abs/2508.18756
Mixture of Lookup Experts
- https://arxiv.org/abs/2503.15798
(According to the authors remarks on openreview, large-scale training on this would require all experts to be active, making training expensive.)
Mixture of Lookup Key-Value Experts
- https://arxiv.org/abs/2512.09723
| 2025-12-24T23:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1puzxlr/models_with_higher_sparsity_than_moe/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzxlr | false | null | t3_1puzxlr | /r/LocalLLaMA/comments/1puzxlr/models_with_higher_sparsity_than_moe/ | false | false | self | 20 | null |
2× RTX Pro 6000 Blackwell (96GB) + SGLang NVFP4: loads w/ --quantization modelopt_fp4, but DeepGemm/FP8-KV warnings + 100% GPU util when idle | 6 |
Hey all posting a detailed repro in case other Blackwell users are seeing the same things. I’m running SGLang on a dual RTX Pro 6000 Blackwell workstation and trying to serve a ModelOpt NVFP4 checkpoint with very long context.
# Hardware / software
* GPUs: 2× NVIDIA RTX PRO 6000 Blackwell (96GB each)
* Driver: 580.95.05, CUDA: 13.0
* SGLang: `0.5.6.post2.dev8155+20251224.gaef7ca7cf`
* Tensor parallel: TP=2
# Model + goal
* Model: `MiniMax-M2-NVFP4` (ModelOpt quantized, NVFP4)
* Goal: long context + low concurrency (context \~196k, max 2 running requests)
# Command (full)
bashpython -m sglang.launch_server \
--model-path /media/mukul/data/models/lukealonso/MiniMax-M2-NVFP4 \
--served-model-name jarvis-thinker \
--tp-size 2 \
--tool-call-parser minimax-m2 \
--reasoning-parser minimax-append-think \
--host 0.0.0.0 \
--port 10002 \
--trust-remote-code \
--dtype auto \
--mem-fraction-static 0.90 \
--context-length 196608 \
--quantization modelopt_fp4 \
--kv-cache-dtype fp8_e4m3 \
--max-running-requests 2 \
--chunked-prefill-size 16384 \
--attention-backend triton
# What I observed
# 1) Need to force ModelOpt FP4 quantization
If I **don’t** pass `--quantization modelopt_fp4`, the server dies during init with a quantization config error (it tried to go down an FP8 ModelOpt config path). Passing `--quantization modelopt_fp4` fixes it and it loads. (This seems consistent with NVFP4 being treated as experimental in SGLang.)
# 2) Warnings that look Blackwell/accuracy-related
On startup I see (paraphrased):
* “DeepGemm is enabled but scale\_fmt of checkpoint is not ue8m0. This might cause accuracy degradation on Blackwell.”
* “Using FP8 KV cache but no scaling factors provided. Defaulting scaling factors of 1.0. This may lead to less accurate results!”
Related: SGLang has an open feature request about “calculate kv scales” when using `--kv-cache-dtype fp8_e4m3`, otherwise scale factor defaults to 1.0. [https://github.com/sgl-project/sglang/issues/6518](https://github.com/sgl-project/sglang/issues/6518)[github](https://github.com/sgl-project/sglang/issues/6518)
Also: there’s a tracked Blackwell DeepGEMM accuracy issue (marked fixed for FP8 on Blackwell/B200). [https://github.com/sgl-project/sglang/issues/12878](https://github.com/sgl-project/sglang/issues/12878)[github](https://github.com/sgl-project/sglang/issues/12878)
Questions:
* For Blackwell + NVFP4, is the DeepGemm warning expected? Is there a recommended way to disable DeepGemm / force a safer kernel path for quality?
* For FP8 KV cache in SGLang, is there a supported way to provide/compute KV scales yet, or is the best practice to keep KV cache BF16 for correctness until scales are supported?
# 3) Both GPUs show 100% utilization even when idle
Once the server is up (no requests), both GPUs sit at **100% GPU-Util** and high power, with the main processes being:
* `sglang::scheduler_TP0` and `sglang::scheduler_TP1`
This looks similar to a known report: “GPU Utilization is 100% even when we are not inferencing” in SGLang’s tracker. [https://github.com/sgl-project/sglang/issues/6085](https://github.com/sgl-project/sglang/issues/6085)[github](https://github.com/sgl-project/sglang/issues/6085)
Questions:
* Is “100% util when idle” expected due to SGLang scheduler behavior / CUDA graphs / overlap scheduling?
* If not expected, what flags are recommended to reduce idle burn (e.g., disable CUDA graphs, disable overlap scheduling, etc.) while still staying stable at long context?
# Extra details (if helpful)
* Load completes and server starts fine after forcing `--quantization modelopt_fp4`.
* VRAM per GPU ends up around \~87–88GB used.
* KV cache is FP8 E4M3.
If anyone has a “known-good” SGLang configuration for **Blackwell + NVFP4 + long context**, or guidance on those warnings + idle utilization, I’d really appreciate it.
PS: I used Perplexica + Local models to format this document.
| 2025-12-24T23:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1puzsm5/2_rtx_pro_6000_blackwell_96gb_sglang_nvfp4_loads/ | texasdude11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzsm5 | false | null | t3_1puzsm5 | /r/LocalLLaMA/comments/1puzsm5/2_rtx_pro_6000_blackwell_96gb_sglang_nvfp4_loads/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=108&crop=smart&auto=webp&s=57de65e89ddade607c52590f615d7d35d9f80d1a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=216&crop=smart&auto=webp&s=c01aa678b138c88624221effc41d2d774ed17110', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=320&crop=smart&auto=webp&s=f20b8a8de6e881418665b1c4c610973f69eda6bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=640&crop=smart&auto=webp&s=ccb96687ce1550d8f7c6a3f22d1a4732e7ce9203', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=960&crop=smart&auto=webp&s=f15a4ec14a49f0207857dd26621fcea58d7130f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?width=1080&crop=smart&auto=webp&s=f85d7fff46e8a689dfa9481fa167d05c88e0e45c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gSvQ5k85g4775wqgZ2Kkuwvry8oqifkOPGzyQdWQyJg.png?auto=webp&s=e24be6b32b78dff10739f17621c17cec2bbb9348', 'width': 1200}, 'variants': {}}]} |
Memora - A persistent memory layer for Claude Code with live knowledge graph visualization | 5 | I built an MCP server that gives Claude Code persistent memory across sessions.
**What it does:**
* Stores memories in SQLite with semantic search
* Auto-links related memories based on similarity
* Interactive knowledge graph that updates in real-time
* Duplicate detection, issue tracking, TODOs
* Works with Claude Code, Codex CLI, and other MCP clients
**Demo:** Shows creating memories and watching the graph build connections automatically.
https://reddit.com/link/1puzqpe/video/683bm1ywg89g1/player
**Features:**
* Zero dependencies (optional: cloud sync, embeddings)
* Hierarchical organization with sections/subsections
* Filter by tags, status, categories
* Export to HTML graph for sharing
GitHub: [https://github.com/agentic-mcp-tools/memora](https://github.com/agentic-mcp-tools/memora)
Feedback welcome! | 2025-12-24T23:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1puzqpe/memora_a_persistent_memory_layer_for_claude_code/ | spokv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzqpe | false | null | t3_1puzqpe | /r/LocalLLaMA/comments/1puzqpe/memora_a_persistent_memory_layer_for_claude_code/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?width=108&crop=smart&auto=webp&s=4416d359caf186befa0fcb242102620b581d622e', 'width': 108}, {'height': 227, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?width=216&crop=smart&auto=webp&s=6f7d501fbcfa61c2db48a2f0c7c49652b963c011', 'width': 216}, {'height': 337, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?width=320&crop=smart&auto=webp&s=21078a9c0ee884b14b7b5f7d2c92773cc69c92ee', 'width': 320}, {'height': 674, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?width=640&crop=smart&auto=webp&s=dbaabdc848ffa7339f883c1311779dbe92a42a45', 'width': 640}, {'height': 1012, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?width=960&crop=smart&auto=webp&s=9d2ee6a2fe25262b51210263b9ee59db86f4ca20', 'width': 960}], 'source': {'height': 1048, 'url': 'https://external-preview.redd.it/owS2rr5iTZyYvLFNQdhz6LwdZHrdy9e7Q8gomrWVBZI.png?auto=webp&s=5fdab455b553ba272ce0854bb6e0acd75d76ecf4', 'width': 994}, 'variants': {}}]} |
Merry Christmas! 🎄 🎁 | 82 | Merry Christmas! 🥳 | 2025-12-24T23:03:48 | https://www.reddit.com/r/LocalLLaMA/comments/1puzo82/merry_christmas/ | Rare_Carry9799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzo82 | false | null | t3_1puzo82 | /r/LocalLLaMA/comments/1puzo82/merry_christmas/ | false | false | self | 82 | null |
Dual 5060 Ti 16GB vs Radeon Instinct Mi50 32GB | 9 | I already have one of each GPU in my system (B850 AI TOP) and use both Ubuntu and Windows 11 (mostly Jan.ai, some LM Studio). While I was able to switch BIOS on the Mi50, I am unable to run LLMs under windows. So currently, I am playing with idea of replacing the Mi50 with another 5060 Ti 16GB. ATM, I mostly use GPT-OSS 20B, but sometimes I run other, larger models.
How do 2×5060 Ti 16GB compare against a single Radeon Instinct Mi50?
I also thought about buying an R9700 or W7800 instead, but that would be way more expensive of course. | 2025-12-24T22:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1puzlev/dual_5060_ti_16gb_vs_radeon_instinct_mi50_32gb/ | GerchSimml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzlev | false | null | t3_1puzlev | /r/LocalLLaMA/comments/1puzlev/dual_5060_ti_16gb_vs_radeon_instinct_mi50_32gb/ | false | false | self | 9 | null |
Llama.cpp multiple model presets appreciation post | 50 | Recently Llama.cpp [added support](https://github.com/ggml-org/llama.cpp/pull/17859) for [model presets](https://github.com/ggml-org/llama.cpp/tree/master/tools/server#model-presets), which is a awsome feature that allow model loading and switching, and I have not seen much talk about. I would like to show my appreciation to the developers that are working on Llama.cpp and also share that the [model preset feature](https://github.com/ggml-org/llama.cpp/tree/master/tools/server#model-presets) exists to switch models.
A short guide of how to use it:
0. Get your hands on a recent version of `llama-server` from Llama.cpp.
1. Create a `.ini` file. I named my file `models.ini`.
2. Add the content of the models to your `.ini` file. See either the [README](https://github.com/ggml-org/llama.cpp/tree/master/tools/server#model-presets) or my example below. The values in the `[*]` section is shared between each model, and `[Devstral2:Q5_K_XL]` declares a new model.
3. Run `llama-server --models-preset <path to your.ini>/models.ini` to start the server.
4. Optional: Try out the webui on [`http://localhost:8080`](http://localhost:8080).
Here is my `models.ini` file as an example:
version = 1
[*]
flash-attn = on
n-gpu-layers = 99
c = 32768
jinja = true
t = -1
b = 2048
ub = 2048
[Devstral2:Q5_K_XL]
temp = 0.15
min-p = 0.01
model = /home/<name>/gguf/Devstral-Small-2-24B-Instruct-2512-UD-Q5_K_XL.gguf
cache-type-v = q8_0
Thanks for me, I just wanted to share this with you all! | 2025-12-24T22:55:44 | https://www.reddit.com/r/LocalLLaMA/comments/1puzin1/llamacpp_multiple_model_presets_appreciation_post/ | robiinn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzin1 | false | null | t3_1puzin1 | /r/LocalLLaMA/comments/1puzin1/llamacpp_multiple_model_presets_appreciation_post/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': '1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=108&crop=smart&auto=webp&s=be2b37732b2f23f289c636561f12baa51695a062', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=216&crop=smart&auto=webp&s=887dbf2bda3d34ba257ae2557cc1bd9585862c20', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=320&crop=smart&auto=webp&s=822f1ee79f2ecc8389dd7db6713bd4e5ddc32657', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=640&crop=smart&auto=webp&s=de7df0d91da0ec9ef7dc3104eededaf931fd74c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=960&crop=smart&auto=webp&s=c5aadcc8482326209edb1bb8b6dd6f0b0f438f95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?width=1080&crop=smart&auto=webp&s=46ed16b0252972577283a772ca8e7862046208ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1EohnW6I2Qe4foNHIffwhFjgslN4bZug0hdhRtxFIA4.png?auto=webp&s=9c126b7942afc3111429d44201f85fef3d6c27db', 'width': 1200}, 'variants': {}}]} |
LocalRAG-Go: Offline RAG toolkit in Go with Clean Architecture | 3 | Built a local RAG toolkit in Go. Query your documents with LLMs, entirely offline.
**Why Go?** Most RAG implementations are Python-only. Wanted something that compiles to a single binary and doesn't need a Python environment to run.
**Features:**
\- Clean Architecture (ports/adapters)
\- Ollama for embeddings + LLM
\- File watcher with auto-ingestion
\- Web UI with streaming responses
\- Docker ready
**Repo**: [https://github.com/0xcro3dile/localrag-go](https://github.com/0xcro3dile/localrag-go)
Still WIP. Looking for feedback on the architecture and contributions welcome especially around PDF parsing and persistent vector stores. | 2025-12-24T22:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/1puzgsx/localraggo_offline_rag_toolkit_in_go_with_clean/ | Particular-Cookie500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puzgsx | false | null | t3_1puzgsx | /r/LocalLLaMA/comments/1puzgsx/localraggo_offline_rag_toolkit_in_go_with_clean/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=108&crop=smart&auto=webp&s=e827c4d9ba7bab6230b9a770be6d9a1860722434', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=216&crop=smart&auto=webp&s=1605e14d070b32c002e51af7a904720bfa8a2320', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=320&crop=smart&auto=webp&s=c17af6a30e245d653654f378d6d163e2c751e375', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=640&crop=smart&auto=webp&s=ac11128703d98d4b75e0269098e8e59700051fdf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=960&crop=smart&auto=webp&s=6070ccee0ba478c19af6ac75bac046bd375ee4e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?width=1080&crop=smart&auto=webp&s=3f67ef0d8438183bbbb7f95fc86be6d526c267ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q9GOboJdz4JNHxHHQGpAN4yjjEiWAPvSzcRghyj-t-c.png?auto=webp&s=c33d7699ca48fcc5427bdc564bf7c388c4d9f43e', 'width': 1200}, 'variants': {}}]} |
Guide to fine-tuning | 7 | hello guys i am looking for a guide from 0 , about fine-tuning , i am new into llm and vlm, my goal is to fine-tune qwen3-vl on text and others , any help is welcomed. | 2025-12-24T22:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1puz9bl/guide_to_finetuning/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puz9bl | false | null | t3_1puz9bl | /r/LocalLLaMA/comments/1puz9bl/guide_to_finetuning/ | false | false | self | 7 | null |
Has anyone tried finetuning LLM with financial data & personas? | 1 | Has anyone tried finetuning LLM with financial data & personas? | 2025-12-24T22:36:17 | https://www.reddit.com/r/LocalLLaMA/comments/1puz5cs/has_anyone_tried_finetuning_llm_with_financial/ | Robert-treboR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puz5cs | false | null | t3_1puz5cs | /r/LocalLLaMA/comments/1puz5cs/has_anyone_tried_finetuning_llm_with_financial/ | false | false | self | 1 | null |
I made a good post about finetuning LLM with my notes and local LLM modes removed it despite being popular | 1 | [removed] | 2025-12-24T22:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1puz4cn/i_made_a_good_post_about_finetuning_llm_with_my/ | Robert-treboR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puz4cn | false | null | t3_1puz4cn | /r/LocalLLaMA/comments/1puz4cn/i_made_a_good_post_about_finetuning_llm_with_my/ | false | false | 1 | null | |
Built an MCP bridge that lets AI control Cheat Engine | 1 | [removed] | 2025-12-24T22:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1puz299/built_an_mcp_bridge_that_lets_ai_control_cheat/ | helloitsj0nny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puz299 | false | null | t3_1puz299 | /r/LocalLLaMA/comments/1puz299/built_an_mcp_bridge_that_lets_ai_control_cheat/ | false | false | self | 1 | null |
So Nvidia is buying Groq... | 0 | Yes, Groq is not local but it is an important part of the open weight ecosystem that complements and encourages model releases. Nvidia has been fairly friendly with its own open weight model releases thus far, thankfully, but consolidation is rarely going to be good for consumers in the long run. On the other hand, Nvidia could scale up Groq-style chips massively. A Groq wafer in every home? We can dream. Thoughts on the move? | 2025-12-24T22:19:15 | https://www.reddit.com/r/LocalLLaMA/comments/1puyte4/so_nvidia_is_buying_groq/ | ___positive___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puyte4 | false | null | t3_1puyte4 | /r/LocalLLaMA/comments/1puyte4/so_nvidia_is_buying_groq/ | false | false | self | 0 | null |
Hugging Face cache for shared machine? | 1 | We have a shared machine. I am the new sys-admin. Different users collaborate and also use same models. We would want to minimize disk usage and share as much cache as possible. What is the best way of setting this up?
I inherited the following setup for our cache. /data/ is shared.
`HF_HUB_CACHE=/data/huggingface_cache`
`HF_HOME=/data/hf_root`
With no write permission on the hf\_home for the users. This already doesn't work as datasets try to default into hf\_home. I can fix this by setting up
`HF_DATASETS_CACHE:/data/hf_datasets/`
but I don't quite understand whether this is a good solution.
Should users share their hf\_home cache? If so, should they be able to write onto it?
Or should I keep 3 directories separate with write permissions on hub and datasets only.
Thanks for all the help! | 2025-12-24T22:16:33 | https://www.reddit.com/r/LocalLLaMA/comments/1puyrie/hugging_face_cache_for_shared_machine/ | RealProjectivePlane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1puyrie | false | null | t3_1puyrie | /r/LocalLLaMA/comments/1puyrie/hugging_face_cache_for_shared_machine/ | false | false | self | 1 | null |
Exclusive: Nvidia buying AI chip startup Groq's assets for about $20 billion in largest deal on record | 649 | 2025-12-24T22:14:48 | https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startup-groq-for-about-20-billion-biggest-deal.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1puyq9r | false | null | t3_1puyq9r | /r/LocalLLaMA/comments/1puyq9r/exclusive_nvidia_buying_ai_chip_startup_groqs/ | false | false | default | 649 | {'enabled': False, 'images': [{'id': 'jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=108&crop=smart&auto=webp&s=d47b2319d2d2e1a868699a479b9131817e9b4aca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=216&crop=smart&auto=webp&s=1740e79ebf655dc34d09135801eceb5627aaf490', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=320&crop=smart&auto=webp&s=7a3bf64c5630c7a40b3642df357ab8237c7c1f41', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=640&crop=smart&auto=webp&s=d32d447dab802e0a1aec9574d5282648b995cf17', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=960&crop=smart&auto=webp&s=1fd48fc2cda93f210e6ddba085673463bf810065', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?width=1080&crop=smart&auto=webp&s=64d818ccc8d606e2d0356c02d1351db9aa394ea6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jvYBeKyc_OM28gIhUnO6GEg0WpsjQzWHJmf00gHtJBw.jpeg?auto=webp&s=99de53c2f64a4674d12cbba86b6466661e9d8643', 'width': 1920}, 'variants': {}}]} | |
MiniMax M2.1 scores 43.4% on SWE-rebench (November) | 69 | Hi!
We added MiniMax M2.1 results to the December SWE-rebench update.
Please check the leaderboard: [https://swe-rebench.com/](https://swe-rebench.com/)
We’ll add GLM-4.7 and Gemini Flash 3 in the next release.
By the way, we just released a large dataset of agentic trajectories and two checkpoints trained on it, based on Qwen models.
Here’s the post:
[https://www.reddit.com/r/LocalLLaMA/comments/1puxedb/we\_release\_67074\_qwen3coder\_openhands/](https://www.reddit.com/r/LocalLLaMA/comments/1puxedb/we_release_67074_qwen3coder_openhands/) | 2025-12-24T21:10:50 | Fabulous_Pollution10 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1puxg7h | false | null | t3_1puxg7h | /r/LocalLLaMA/comments/1puxg7h/minimax_m21_scores_434_on_swerebench_november/ | false | false | 69 | {'enabled': True, 'images': [{'id': 'UnNk31wJJ_Rn9FOKUYxfO0fq9xsh3rSekiStvAFJ7YU', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=108&crop=smart&auto=webp&s=adb0b63747f164d0db07ad447e05bdd4e2c38fab', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=216&crop=smart&auto=webp&s=6443e852f32741399fe767da897b2591cfb41ecb', 'width': 216}, {'height': 101, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=320&crop=smart&auto=webp&s=27fd5e19ae3b27e9d270923e1b7e054cd26d35ac', 'width': 320}, {'height': 202, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=640&crop=smart&auto=webp&s=9a1fcc806bfccb9370c298a67d419e024ce322b3', 'width': 640}, {'height': 304, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=960&crop=smart&auto=webp&s=37044fb29042c89a7ea8de3e27036d330fcaa120', 'width': 960}, {'height': 342, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?width=1080&crop=smart&auto=webp&s=9fde107e2c36b839b00a22cd3cdc9c9592a2dfe3', 'width': 1080}], 'source': {'height': 380, 'url': 'https://preview.redd.it/s0vbt46vt79g1.jpeg?auto=webp&s=9722fe97ec2afdf5ee610c64d6656d10310754fc', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.