| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Evening fun with Grace and Hopper unified memory, or how to speed up llama.cpp and DeepSeek V3.1 on NVIDIA GH200 | 3 | For the past 2 days I had the pleasure of having remote access to an NVIDIA GH200 system kindly shared by u/GPTShop. It's a similar machine to the one that u/Reddactor has shown in his [recent post](https://www.reddit.com/r/LocalLLaMA/comments/1pjbhyz/i_bought_a_gracehopper_server_for_75k_on_reddit/), but with only a single GH200 module inside. I wanted to see how the unified memory works and what performance we can get on llama.cpp with this hardware.
Initial results were disappointing with pp512 of 41.63 t/s and tg128 of 8.86 t/s. Even my Epyc workstation does better.
To make it faster I added some code that advised CUDA to place model expert tensors (except shared experts) in CPU LPDDR5X memory and all remaining tensors in GPU memory. It was only a dozen lines; after applying the patch, the llama-bench results were:
$ GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./bin/llama-bench -m ~/fairydreaming/models/DeepSeek-V3.1-Terminus-Q4_K_M-00001-of-00009.gguf -fa 1
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GH200 144G HBM3e, compute capability 9.0, VMM: yes
| model | size | params | backend | ngl | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 1 | pp512 | 276.84 ± 1.49 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 1 | tg128 | 16.95 ± 0.01 |
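For the curious, the gist of the placement rule is easy to sketch. The snippet below is an illustrative stand-in, not the actual patch: in the real code the decision would drive `cudaMemAdvise()` calls (e.g. `cudaMemAdviseSetPreferredLocation`) on the unified-memory allocations, and the tensor-name suffixes are assumptions based on common GGUF naming.

```python
# Illustrative sketch of the placement rule: routed expert tensors are
# preferred on host LPDDR5X, everything else (attention, norms, shared
# experts) stays in GPU HBM. Not the actual llama.cpp patch.

def preferred_location(tensor_name: str) -> str:
    is_expert = "_exps." in tensor_name    # routed expert weights
    is_shared = "_shexp." in tensor_name   # shared experts stay on GPU
    return "cpu" if (is_expert and not is_shared) else "gpu"

tensors = [
    "blk.10.ffn_gate_exps.weight",   # routed expert  -> host memory
    "blk.10.ffn_gate_shexp.weight",  # shared expert  -> GPU HBM
    "blk.10.attn_q.weight",          # attention      -> GPU HBM
]
placement = {name: preferred_location(name) for name in tensors}
print(placement)
```

With unified memory the GPU can still touch the host-resident experts transparently; the advice just keeps the hot dense tensors in HBM.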
I ran some more tests with different context lengths and larger ubatch:
$ GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./bin/llama-bench -m ~/fairydreaming/models/DeepSeek-V3.1-Terminus-Q4_K_M-00001-of-00009.gguf -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 -ub 2048
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GH200 144G HBM3e, compute capability 9.0, VMM: yes
| model | size | params | backend | ngl | n_ubatch | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 | 576.82 ± 2.38 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 | 16.92 ± 0.02 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d4096 | 483.90 ± 0.93 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d4096 | 16.20 ± 0.06 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d8192 | 402.99 ± 1.07 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d8192 | 16.05 ± 0.12 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d16384 | 299.70 ± 1.25 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d16384 | 15.98 ± 0.14 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | pp2048 @ d32768 | 190.55 ± 0.67 |
| deepseek2 671B Q4_K - Medium | 377.55 GiB | 671.03 B | CUDA | 99 | 2048 | 1 | tg32 @ d32768 | 15.34 ± 0.35 |
Now we are talking: very nice prompt processing performance (compared to before). I haven't seen numbers like this even in ktransformers or Mac M3 Ultra benchmark results.
Also the token generation rate doesn't seem to go down much as the context size increases.
Hopefully it's possible to make it even faster, for example by placing some experts on the GPU memory (there's still free space here). Uh, now my Epyc workstation feels somewhat slow. | 2025-12-12T20:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pl1zpa/evening_fun_with_grace_and_hopper_unified_memory/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl1zpa | false | null | t3_1pl1zpa | /r/LocalLLaMA/comments/1pl1zpa/evening_fun_with_grace_and_hopper_unified_memory/ | false | false | self | 3 | null |
llada2.0 benchmarks | 15 | 2025-12-12T19:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pl1keu/llada20_benchmarks/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl1keu | false | null | t3_1pl1keu | /r/LocalLLaMA/comments/1pl1keu/llada20_benchmarks/ | false | false | 15 | null | ||
One line quantization+deployment/GUI of Qwen2.5/Z-Image Turbo | 7 | [GitHub Repo](https://github.com/JackJackJ/NeocloudX-Labs/)
There's nothing sus here, but of course always check the contents of shell scripts before running them:
To run the Qwen2.5 + Z-Image integrated model (change the 14 in the script name to 72 or 7 based on your hardware):
`git clone` [`https://github.com/JackJackJ/NeocloudX-Labs.git`](https://github.com/JackJackJ/NeocloudX-Labs.git)
`cd NeocloudX-Labs`
`chmod +x launch_chat14b.sh`
`./launch_chat14b.sh`
To run Z-Image Turbo standalone model:
`git clone` [`https://github.com/JackJackJ/NeocloudX-Labs.git`](https://github.com/JackJackJ/NeocloudX-Labs.git)
`cd NeocloudX-Labs`
`chmod +x launch_z-image.sh`
`./launch_z-image.sh`
Chat models are quantized via BitsAndBytes (72B is runnable on 80GB RAM; 14B/7B are doable with a good RTX card)
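As a side note on what BitsAndBytes-style quantization buys you: weights are stored as low-bit integers plus a scale. Here is a simplified absmax int8 sketch; the real library uses block-wise scales, outlier handling, and 4-bit NF4 types, so treat this as the core idea only.

```python
# Simplified absmax int8 quantization: one scale per tensor, weights
# mapped to [-127, 127]. Real BitsAndBytes is block-wise and fancier.

def quantize_absmax(weights):
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.12, -0.5, 0.031, 0.25]
q, s = quantize_absmax(w)
max_err = max(abs(a - b) for a, b in zip(w, dequantize(q, s)))
print(q, max_err)
```

4 bytes per float shrink to 1 (or 0.5 with 4-bit types), which is roughly why a 72B model can squeeze into 80GB.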
Z-Image Turbo is very performant, needs surprisingly little memory | 2025-12-12T19:57:08 | Affectionate_King_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pl1i7s | false | null | t3_1pl1i7s | /r/LocalLLaMA/comments/1pl1i7s/one_line_quantizationdeploymentgui_of/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'v4ry3rlwvt6g1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=108&crop=smart&auto=webp&s=7f743e44bacbe8281435ce337d56a39025032216', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=216&crop=smart&auto=webp&s=b4850c0465911891e479a2a103eb155bbf2efa3d', 'width': 216}, {'height': 253, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=320&crop=smart&auto=webp&s=4f56d4f87fab3e2fc233d82873b0c4ba2bacf65b', 'width': 320}, {'height': 506, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=640&crop=smart&auto=webp&s=433fb5387c56c3f0b09ac916f215bdd97fc4c834', 'width': 640}, {'height': 760, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=960&crop=smart&auto=webp&s=0729eebc6ba562a360a01a3c514866859c313241', 'width': 960}, {'height': 855, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?width=1080&crop=smart&auto=webp&s=7475624be344a2ba177c379a1b4d3c572d047917', 'width': 1080}], 'source': {'height': 1006, 'url': 'https://preview.redd.it/v4ry3rlwvt6g1.png?auto=webp&s=a3651139ca5bd9249a42211cd6cde957df64a7bc', 'width': 1270}, 'variants': {}}]} | |
DS SERVE: A Framework for Efficient and Scalable Neural Retrieval | 1 | [removed] | 2025-12-12T19:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pl16bl/ds_serve_a_framework_for_efficient_and_scalable/ | Disastrous_Solid6044 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl16bl | false | null | t3_1pl16bl | /r/LocalLLaMA/comments/1pl16bl/ds_serve_a_framework_for_efficient_and_scalable/ | false | false | 1 | null | |
DS SERVE: A Framework for Efficient and Scalable Neural Retrieval | 1 | [removed] | 2025-12-12T19:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pl12li/ds_serve_a_framework_for_efficient_and_scalable/ | Disastrous_Solid6044 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl12li | false | null | t3_1pl12li | /r/LocalLLaMA/comments/1pl12li/ds_serve_a_framework_for_efficient_and_scalable/ | false | false | 1 | null | |
DS SERVE: A Framework for Efficient and Scalable Neural Retrieval | 1 | [removed] | 2025-12-12T19:30:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pl0udg | false | null | t3_1pl0udg | /r/LocalLLaMA/comments/1pl0udg/ds_serve_a_framework_for_efficient_and_scalable/ | false | false | default | 1 | null | ||
What do you do, if you invent AGI? (seriously) | 51 | Some of you know me. I'm the resident LocalLlama silly person who tries to get my 4090 to do ridiculously fast things. I've posted some things here before, like controlling swarms of little bots, making an AI make weird sounds from its mouth, and getting AI to do agentic tasks, like my wacky effort to get thousands of tokens of GPT-OSS-20b output per second to fly an ASTEROIDS spaceship in real time.
Anyway... lately I've been playing around with some fast AI training tricks, figuring out how to turn my 'scrap in a cave' 4090 into something a bit more useful. I recently trained a gpt-2 124m equivalent to 3.28 loss in less than an hour. It seems to me that the scale we need to hit AGI might exist at consumer level, and today I'm asking...
What if YOU invent it?
I know I can't be the only one out here messing around on the fringe. And I'm probably not the only one who's made some headway (I'm looking at you, fpantsham... pew... you unsloth guys...).
What would you do? What the heck DO you do? I'm assuming most of you aren't working directly in the industry. Let's say you're just sitting here one afternoon banging away in Claude and there it is. Done. Undeniable. You probably don't know Sam Altman. Neither do I. I'm guessing walking into the door of Google shouting you have AGI isn't gonna work. What do you do?
| 2025-12-12T19:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pl0u6w/what_do_you_do_if_you_invent_agi_seriously/ | teachersecret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl0u6w | false | null | t3_1pl0u6w | /r/LocalLLaMA/comments/1pl0u6w/what_do_you_do_if_you_invent_agi_seriously/ | false | false | self | 51 | null |
DS SERVE: A Framework for Efficient and Scalable Neural Retrieval | 1 | [deleted] | 2025-12-12T19:26:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pl0qxu | false | null | t3_1pl0qxu | /r/LocalLLaMA/comments/1pl0qxu/ds_serve_a_framework_for_efficient_and_scalable/ | false | false | default | 1 | null | ||
The new monster-server | 538 | Hi!
Just wanted to share my upgraded monster-server! I bought the largest chassis I could reasonably find (Phanteks Enthoo Pro 2 Server) and filled it to the brim with GPUs to run local LLMs alongside my homelab. I am very happy with how it has evolved / turned out!
**I call it the "Monster server" :)**
Based on my trusted old X570 Taichi motherboard (extremely good!) and the Ryzen 3950X that I bought in 2019, which is still PLENTY fast today. I did not feel like spending a lot of money on an EPYC CPU/motherboard and new RAM, so instead I maxed out what I had.
The 24 PCIe lanes are divided among the following:
3 GPUs:
\- 2 x RTX 3090 - both dual slot versions (inno3d RTX 3090 x3 and ASUS turbo RTX 3090)
\- 1 x RTX 4090 (an extremely chonky boi, 4 slots! ASUS TUF Gaming OC, that I got for reasonably cheap, around 1300USD equivalent). I run it on the "quiet" mode using the hardware switch hehe.
The 4090 runs off an M.2 -> OCuLink -> PCIe adapter and a second PSU. The PSU is plugged into the adapter board with its 24-pin connector and it powers on automatically when the rest of the system starts, very handy!
[https://www.amazon.se/dp/B0DMTMJ95J](https://www.amazon.se/dp/B0DMTMJ95J)
Network: I have 10GB fiber internet for around 50 USD per month hehe...
\- 1 x 10GbE NIC - also connected using an M.2 -> PCIe adapter. I had to mount this card creatively...
Storage:
\- 1 x Intel P4510 8TB U.2 enterprise NVMe. Solid storage for all my VMs!
\- 4 x 18TB Seagate Exos HDDs. For my virtualised TrueNAS.
RAM: 128GB Corsair Vengeance DDR4. Running at 2100MHz because I cannot get it stable when I try to run it faster, but whatever... LLMs are in VRAM anyway.
So what do I run on it?
\- GPT-OSS-120B, fully in VRAM, >100 t/s tg. I have not yet found a better model, despite trying many... I use it for research, coding, and generally instead of Google sometimes...
I tried GLM-4.5 Air but it does not seem much smarter to me? Also slower. I would like to find a reasonably good model that I could run alongside FLUX1-dev-fp8 though, so I can generate images on the fly without having to switch. I am evaluating Qwen3-VL-32B for this.
\- Media server, Immich, Gitea, n8n
\- My personal cloud using Seafile
\- TrueNAS in a VM
\- PBS for backups that is synced to an offsite PBS server at my brother's apartment
\- a VM for coding, trying out devcontainers.
\-> I also have a second server with a virtualised OPNsense VM as router. It runs other more "essential" services like PiHole, Traefik, Authelia, Headscale/tailscale, vaultwarden, a matrix server, anytype-sync and some other stuff...
\---
FINALLY: Why did I build this expensive machine? To make money by vibe-coding the next super-website? To cheat the stock market? To become the best AI engineer at Google? NO! Because I think it is fun to tinker around with computers; it is a hobby...
Thanks Reddit for teaching me all I needed to know to set this up! | 2025-12-12T19:23:12 | eribob | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pl0ojb | false | null | t3_1pl0ojb | /r/LocalLLaMA/comments/1pl0ojb/the_new_monsterserver/ | false | false | default | 538 | {'enabled': True, 'images': [{'id': '5kas5xaklt6g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=108&crop=smart&auto=webp&s=2afda8e7bbf5db8b7acdcc75bb4505e616112472', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=216&crop=smart&auto=webp&s=230ac143603ba498ca03a4b8e16978608381fcf8', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=320&crop=smart&auto=webp&s=361d4ce3158c0a6f94343fb470b00a46cb8e785d', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=640&crop=smart&auto=webp&s=5ebd2b38fa23a6f8f0aca6d1817cac736fa1e6d0', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=960&crop=smart&auto=webp&s=d928b685680efbc01b1e28b3bfab1d57257bea87', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?width=1080&crop=smart&auto=webp&s=820c7b60909520e07171e3dacbaaba7420126494', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/5kas5xaklt6g1.jpeg?auto=webp&s=903ce086cf94612a59640f5f676fc8458892a851', 'width': 4032}, 'variants': {}}]} | |
Synthetic Data Quantity for QLoRA Fine-tuning Llama 3 8B? | 0 | I'm working on a project for (approved, legally-consented) style-imitation QLoRA fine-tuning of a Llama 3 8B model.
I have 143 example conversations, 828 turns, and about 31k tokens. I believe I will need to synthetically enrich the dataset to get good results.
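Not an answer to the "how many" question (common heuristics range from 3x to 10x the real data, deduplicated and quality-filtered), but the generation scaffolding is simple to sketch. Below, `rephrase` is a stand-in for a teacher-model call; the templates and names are made up for illustration.

```python
# Sketch of template-based augmentation: keep every real pair, add N
# paraphrased variants per pair. `rephrase` is a placeholder for a real
# teacher-model call (e.g. a larger LLM asked to reword the prompt).

def rephrase(prompt: str, n: int):
    templates = ["Could you {p}?", "Please {p}.", "Hey, {p}", "{p}, thanks!"]
    return [t.format(p=prompt) for t in templates[:n]]

def augment(dataset, variants_per_pair=3):
    out = []
    for pair in dataset:
        out.append(pair)  # always keep the real example
        for variant in rephrase(pair["prompt"], variants_per_pair):
            out.append({"prompt": variant,
                        "response": pair["response"],
                        "synthetic": True})
    return out

real = [{"prompt": "summarize this email", "response": "Sure! ..."}]
aug = augment(real)
print(len(aug))  # 1 real + 3 synthetic = 4
```

Keeping the response fixed while varying the prompt preserves the style target; dedupe and spot-check the variants before training.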
How many synthetic pairs would you add? Any advice for synthetic generation strategy? | 2025-12-12T19:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pl0ni1/synthetic_data_quantity_for_qlora_finetuning/ | Common-Feeling7380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl0ni1 | false | null | t3_1pl0ni1 | /r/LocalLLaMA/comments/1pl0ni1/synthetic_data_quantity_for_qlora_finetuning/ | false | false | self | 0 | null |
Llama.cpp and VRAM vs context size vs cache quant | 2 | What context sizes do you use with models like gpt-oss and GLM-4.5-Air?
The thing is that my setup is limited by VRAM - 48GB - so I have to offload, and some work is done by the CPU/RAM, which obviously makes things slower.
Now, I noticed that many 70b...120b models "almost" fit the 48GB of VRAM with a proper quant like Q4\_K\_M. That said, the context requires extra memory, and often I'm unable to fit both the model and the context in VRAM.
With bigger models the situation is similar: the smaller the context, the more layers I can offload to the GPU, making things faster. Also, I started using Q8\_0 for the cache, which allowed me to either put more layers into VRAM or get a longer context.
Currently I'm at 64k ctx for gpt-oss and 32k ctx for GLM. I could use a smaller context with GLM and make it a bit faster by offloading 2..4 more layers to the GPU.
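For sanity-checking context choices, KV-cache size is just 2 (K and V) x layers x KV heads x head dim x context x bytes per element. A sketch with illustrative dimensions; the layer/head numbers below are assumptions for a gpt-oss-sized model, not from any config file.

```python
# Rough KV-cache size estimate. Dimensions are illustrative assumptions;
# check your model's config for the real layer/head counts.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx, bytes_per_elt):
    # 2 tensors (K and V) per layer, each [n_kv_heads, ctx, head_dim]
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elt

GIB = 1024 ** 3
f16 = kv_cache_bytes(36, 8, 64, 64 * 1024, 2) / GIB  # f16 cache
q8  = kv_cache_bytes(36, 8, 64, 64 * 1024, 1) / GIB  # Q8_0, roughly half
print(f"64k ctx: f16 ~{f16:.2f} GiB, q8_0 ~{q8:.2f} GiB")
```

Q8_0 actually stores a small per-block scale on top, so "half" is slightly optimistic, but the math shows why a Q8_0 cache frees enough room for a couple more offloaded layers.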
Are these values barely enough or overkill? What are your suggestions? | 2025-12-12T19:04:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pl087v/llamacpp_and_vram_vs_context_size_vs_cache_quant/ | ChopSticksPlease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl087v | false | null | t3_1pl087v | /r/LocalLLaMA/comments/1pl087v/llamacpp_and_vram_vs_context_size_vs_cache_quant/ | false | false | self | 2 | null
Anyone tried deepseek-moe-16b & GigaChat-20B-A3B before? | 3 | Today I accidentally noticed that a [particular](https://github.com/ggml-org/llama.cpp/releases/tag/b7333) llama.cpp release mentions these 2 models. Looks like a fairly old ticket.
Hope these are the right models (both have base models).
[https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat](https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat)
[https://huggingface.co/ai-sage/GigaChat-20B-A3B-instruct](https://huggingface.co/ai-sage/GigaChat-20B-A3B-instruct)
But I see GGUF files & a decent download count on HF. Not sure whether these models were used much in the past.
Anyway, just leaving this here; hope it's useful for a few. Both are a nice size for MoE models.
FYI GigaChat [recently released](https://huggingface.co/collections/ai-sage/gigachat3) 10B & 700B MOE models. | 2025-12-12T18:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pl01yk/anyone_tried_deepseekmoe16b_gigachat20ba3b_before/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pl01yk | false | null | t3_1pl01yk | /r/LocalLLaMA/comments/1pl01yk/anyone_tried_deepseekmoe16b_gigachat20ba3b_before/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=108&crop=smart&auto=webp&s=5fbf9606a6ee460a846573efa79e94b38aa3ca07', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=216&crop=smart&auto=webp&s=8f80701b094f42355c45997ec9b1e379f11a07d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=320&crop=smart&auto=webp&s=b5549f77d25e2536a7ed555ae537d886e1e7ea42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=640&crop=smart&auto=webp&s=5c30e27650435d06c2bb851facf9a6363d179ece', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=960&crop=smart&auto=webp&s=90dddc0137b176bf3c19bba168461fc4e9678829', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?width=1080&crop=smart&auto=webp&s=e596f5e690e8990db32bd9d9022b89b92dd3c4d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q3wl3_o53Y-_2wyxdFzWlQKjOqETF9xZ6eawGtdmFi0.png?auto=webp&s=bc1d11b24d96959dbda8deb100dbd155734bddf4', 'width': 1200}, 'variants': {}}]} |
Tired of "slop"? I spent 100+ hours processing a "Silver Standard" dataset for Ukrainian Fine-Tuning (Med/Drama). Here is the result. | 0 | Hi everyone,
I'm building a pipeline for Low-Resource Languages (specifically Ukrainian) because I got tired of Llama-3 and Mistral sounding like Google Translate or hallucinating in critical domains.
Instead of scraping generic web trash, I focused on **Data Density** and **Logic**.
**What I built (DavidLab Corpus):** I processed \~80k interaction pairs using a custom Machine-Augmented Curation pipeline (including a "Minimum Data Risk" protocol to strip PII and source traces).
**The breakdown:**
* **🛡️ Combat Medicine (TCCC):** 2.5k pairs. Highly specific tactical protocols.
* **💊 Clinical Medicine:** 12.5k pairs. Based on official MoH algorithms (for logic/reasoning).
* **🎭 Dramaturgy:** 65k pairs. Real scenarios and dialogues to fix the "robotic tone" issue.
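In case anyone wonders what a PII-stripping pass like the "Minimum Data Risk" protocol might look like mechanically, it is typically a stack of scrubbing rules plus manual review. A minimal illustration follows; this is my own sketch, not the author's actual pipeline, and the patterns are far from exhaustive.

```python
import re

# Illustrative PII scrubbers; a real pass would cover names, addresses,
# national IDs, etc., and be paired with manual review.
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\+?\d[\d\s()-]{7,}\d"), "<PHONE>"),
    (re.compile(r"https?://\S+"), "<URL>"),
]

def scrub(text: str) -> str:
    for pattern, token in RULES:
        text = pattern.sub(token, text)
    return text

print(scrub("Call +380 44 123 4567 or mail olena@example.com"))
```

Replacing spans with typed tokens (rather than deleting them) keeps the sentence structure intact for training.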
**Why this matters:** If you are fine-tuning for Slavic languages, volume isn't the issue anymore. **Contextual reasoning** is. This dataset is designed to teach the model *how* to think in the language, not just translate.
I’ve released a sample and the structure on Hugging Face. Would love to hear your feedback on the schema.
**Link:** [https://huggingface.co/alexshynkarenk0](https://huggingface.co/alexshynkarenk0) | 2025-12-12T18:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pkzt15/tired_of_slop_i_spent_100_hours_processing_a/ | RemoteTime9538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkzt15 | false | null | t3_1pkzt15 | /r/LocalLLaMA/comments/1pkzt15/tired_of_slop_i_spent_100_hours_processing_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=108&crop=smart&auto=webp&s=97088cb6a1d904b3b792ba712b507cfffd998a91', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=216&crop=smart&auto=webp&s=3e633c3536611cd40f5b3fc16c98107123179e22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=320&crop=smart&auto=webp&s=6be54669440a705d134b863a9d548e8244283f68', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=640&crop=smart&auto=webp&s=6f20bddc2c921f169a5ed83fc1954b32d440bc1f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=960&crop=smart&auto=webp&s=227802dcac1c823d3d60cd737d17d36154bd783e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?width=1080&crop=smart&auto=webp&s=47a38fc26e42a5a75878b25bde2d58d1144dc05c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fPoa7R9LpVZ8gQg6p25Dacx8y1WYjfCqlBqHiSlVrHQ.png?auto=webp&s=356fb2ad9a8c25b452628d527d8e1002e2ea534e', 'width': 1200}, 'variants': {}}]} |
For Qwen3-235B-Q2 if you offload all experts to CPU, how much VRAM do you need to run it still? | 5 | I'm noticing that I can't max out n-cpu-moe with this model (I currently have 32GB of VRAM) and I can't find an answer online.
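Back-of-envelope: what must stay on the GPU is attention and the other non-expert weights, plus the KV cache and compute buffers. Here is a sketch with assumed Qwen3-235B-A22B dimensions (94 layers, 4 KV heads, head_dim 128) and guessed weight/buffer sizes; treat every number as an assumption to plug your own values into.

```python
# All figures are assumptions, not measurements: check the model's
# config.json for real layer/head counts, and llama.cpp's load log
# for the actual non-expert weight and buffer sizes.

def kv_bytes(layers, kv_heads, head_dim, ctx, bytes_per_elt=2):
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elt  # K and V

GIB = 1024 ** 3
kv = kv_bytes(94, 4, 128, 20_000)        # f16 cache at 20k context
nonexpert_weights = 10 * GIB             # attention/norms/embeddings: a guess
compute_buffers = 2 * GIB                # depends on ubatch size: a guess

total = kv + nonexpert_weights + compute_buffers
print(f"KV ~{kv / GIB:.2f} GiB, total ~{total / GIB:.2f} GiB")
```
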
Using Q2 (~85GB), if I offload all experts to CPU with llama.cpp's `--n-cpu-moe` option, how much VRAM do you think is needed for everything that's left plus a modest (sub-20K) amount of context?
Looking for open source projects for independent multi-LLM review with a judge model | 2 | Hi everyone. I am looking for open source projects, libraries, or real world examples of a multi-LLM system where several language models independently analyze the same task and a separate judge model compares their results.
The idea is simple. I have one input task, for example legal expertise or legal review of a law or regulation. Three different LLMs run in parallel. Each LLM uses one fixed prompt, produces one fixed output format, and works completely independently without seeing the outputs of the other models. Each model analyzes the same text on its own and returns its findings.
After that, a fourth LLM acts as a judge. It receives only the structured outputs of the three models and produces a final comparison and conclusion. For example, it explains that the first LLM identified certain legal issues but missed others, the second LLM found gaps that the first one missed, and the third LLM focused on irrelevant or low value points. The final output should clearly attribute which model found what and where the gaps are.
The key requirement is strict independence of the three LLMs, a consistent output schema, and then a judge model that performs comparison, gap detection, and attribution. I am especially interested in open source repositories, agent frameworks that support this pattern, and legal or compliance oriented use cases.
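The orchestration itself is small enough that a framework is optional. A stdlib sketch of the pattern, with stub functions standing in for the three model APIs and a mechanical judge standing in for the fourth LLM; the schema and names are illustrative only.

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Stubs for three independent LLMs: same task, same fixed output schema,
# no access to each other's answers.
def model_a(task): return {"model": "A", "findings": ["missing liability clause"]}
def model_b(task): return {"model": "B", "findings": ["missing liability clause",
                                                      "vague termination terms"]}
def model_c(task): return {"model": "C", "findings": ["formatting issues"]}

def run_independent(task, models):
    # Parallel fan-out enforces independence between the three models.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda m: m(task), models))

def judge(reports):
    # In the real system this is a fourth LLM that sees only the
    # structured reports; here attribution is computed mechanically.
    findings = {f for r in reports for f in r["findings"]}
    return {f: [r["model"] for r in reports if f in r["findings"]]
            for f in sorted(findings)}

reports = run_independent("review this regulation", [model_a, model_b, model_c])
print(json.dumps(judge(reports), indent=2))
```

The fixed output schema is what makes gap detection and attribution tractable for the judge, whether it is an LLM or plain code.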
Any GitHub links, papers, or practical advice would be very appreciated. Thanks. | 2025-12-12T18:24:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pkz7k6/looking_for_open_source_projects_for_independent/ | Hot-Independence-197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkz7k6 | false | null | t3_1pkz7k6 | /r/LocalLLaMA/comments/1pkz7k6/looking_for_open_source_projects_for_independent/ | false | false | self | 2 | null |
LLM for 8 y/o low-end laptop | 2 | Hello! Can you guys suggest the smartest LLM I can run on:
Intel(R) Core(TM) i7-6600U (4) @ 3.40 GHz
Intel HD Graphics 520 @ 1.05 GHz
16GB RAM
Linux
I'm not expecting great reasoning, coding capability etc. I just need something I can ask personal questions to that I wouldn't want to send to a server. Also just have some fun. Is there something for me? | 2025-12-12T18:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/1pkyxrb/llm_for_8_yo_lowend_laptop/ | nikunjuchiha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkyxrb | false | null | t3_1pkyxrb | /r/LocalLLaMA/comments/1pkyxrb/llm_for_8_yo_lowend_laptop/ | false | false | self | 2 | null |
Agentic frameworks for local LLMs | 1 | Which tools do you use to orchestrate local LLMs? Are there any ones which interact well with local models, i.e. work out of the box without special proxies and setups? | 2025-12-12T18:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pkyxgz/agentic_frameworks_for_local_llms/ | ArtisticHamster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkyxgz | false | null | t3_1pkyxgz | /r/LocalLLaMA/comments/1pkyxgz/agentic_frameworks_for_local_llms/ | false | false | self | 1 | null |
"Apple MLX for AI/Large Language Models—Day One" (update) | 0 | Major updates to my article ["Apple MLX for AI/Large Language Models—Day One"](https://huggingface.co/blog/ucheog/mlx-day-one) & newly on HuggingFace. Intro article I originally wrote last year, touching on MLX itself, models from HF and basic cli and Python code. Also added a handy glossary. Lots of local/private AI advocacy in it. | 2025-12-12T17:59:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pkykjw/apple_mlx_for_ailarge_language_modelsday_one/ | CodeGriot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkykjw | false | null | t3_1pkykjw | /r/LocalLLaMA/comments/1pkykjw/apple_mlx_for_ailarge_language_modelsday_one/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=108&crop=smart&auto=webp&s=56895d746d9e591ecd6c59c4300f90c4814d1f9d', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=216&crop=smart&auto=webp&s=30c424cb2372c696cc8afd8db76448fd491549dd', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=320&crop=smart&auto=webp&s=a70f37c4525b7498638f46ae90d74d13bce5542f', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=640&crop=smart&auto=webp&s=fdd00427591f98414f5817c5ca76f677a70897c4', 'width': 640}, {'height': 532, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=960&crop=smart&auto=webp&s=eb18bcd5c4210db95a248e8fa38804d13120640c', 'width': 960}, {'height': 599, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?width=1080&crop=smart&auto=webp&s=1bac7e18ef406a718c433496a5e04fa170ed7c82', 
'width': 1080}], 'source': {'height': 852, 'url': 'https://external-preview.redd.it/Hyug2giva8idnE-KEvK7UNLZhyrMk524ln06DiGdln4.jpeg?auto=webp&s=1132dba8c9c82f6503d150358e12014bfec52175', 'width': 1536}, 'variants': {}}]} |
Europe must be ready when the AI bubble bursts | ft.com | 76 | 2025-12-12T17:48:11 | https://www.ft.com/content/0308f405-19ba-4aa8-9df1-40032e5ddc4e | ttkciar | ft.com | 1970-01-01T00:00:00 | 0 | {} | 1pkya3n | false | null | t3_1pkya3n | /r/LocalLLaMA/comments/1pkya3n/europe_must_be_ready_when_the_ai_bubble_bursts/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=108&crop=smart&auto=webp&s=2cf6f62af8321238117bc9489037cb45c73c9f2d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=216&crop=smart&auto=webp&s=c382deafe531abb02eff759a88c766c52fdfe357', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=320&crop=smart&auto=webp&s=2636d739968316c20daf85a719c66d4de79e04fb', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=640&crop=smart&auto=webp&s=55b69afa73db9b378351bcc9797e33a0fd5dc1c2', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=960&crop=smart&auto=webp&s=72a74542ad1e353b02fd2c94f3e97b0990495df6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?width=1080&crop=smart&auto=webp&s=76bec36ae1f386fc474687dbd00887b0944708f9', 'width': 1080}], 'source': {'height': 1393, 'url': 'https://external-preview.redd.it/Z434T0fKEWMAcK7uGeTuD__MBt7NsyTEjfA31tg2gfQ.jpeg?auto=webp&s=a424b8312c0f7b467b99cceed79f0f7db5b21921', 'width': 2478}, 'variants': {}}]} | ||
GPT-5.2 Benchmarked on Custom Datasets! | 61 | OpenAI has just released GPT-5.2, so I ran it through the same benchmark suite we've been working on.
Results below:
* starting with the **Logical Puzzles** benchmarks in English and Polish. GPT-5.2 gets a perfect 100% in English (same as Gemini 2.5 Pro and Gemini 3 Pro Preview), but what’s more interesting is **Polish**: here **GPT-5.2 is the only model hitting 100%**, taking first place on its own.
* next, **Business Strategy – Sequential Games. GPT-5.2 scores 0.73, placing second** after Gemini 3 Pro Preview and tied with Grok-4.1-fast. Latency is very strong here.
* then the **Semantic and Emotional Exceptions in Brazilian Portuguese benchmark. This is a hard one for all models, but GPT-5.2 still takes first place with 0.46**, ahead of Gemini 3 Pro Preview, Grok, Qwen, and Grok-4.1-fast. Significant lead.
* **General History (Platinum space focus): GPT-5.2 lands in second place at 0.69**, just behind Gemini 3 Pro Preview at 0.73.
* finally, **Environmental Questions. Retrieval-heavy benchmark and Perplexity’s Sonar Pro Search dominates it, but GPT-5.2 still comes in second with 0.75.**
https://preview.redd.it/l14wzckz8t6g1.png?width=1416&format=png&auto=webp&s=6410a5b524dce38638b0c71be9fd97a6566def76
**Let me know if there are other models or benchmarks you want me to run GPT-5.2 on.**
I'll paste links to the datasets in comments if you want to see the exact prompts and scores. | 2025-12-12T17:47:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pky9ec/chat_gpt_52_benchmarked_on_custom_datasets/ | Substantial_Sail_668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pky9ec | false | null | t3_1pky9ec | /r/LocalLLaMA/comments/1pky9ec/chat_gpt_52_benchmarked_on_custom_datasets/ | false | false | 61 | null | |
Olmo 3.1 32B Think & Instruct: New Additions to the Olmo Model Family | 174 | Olmo 3.1 32B Think and Olmo 3.1 32B Instruct are the newest 32-billion-parameter models in the Olmo family, each optimized for different yet complementary use cases.
* The **Think model** is a deep-reasoning specialist, trained with extended reinforcement learning on the Dolci-Think-RL dataset to improve multi-step reasoning, math, logic, and code generation.
* In contrast, the **Instruct model** applies the Olmo instruction-tuning recipe at 32B scale, making it a strong fully open chat and agent foundation focused on instruction following, conversational fluency, and tool-use capabilities.
[HuggingFace Model Collection ](https://huggingface.co/collections/allenai/olmo-31) | 2025-12-12T17:43:36 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pky5u4 | false | null | t3_1pky5u4 | /r/LocalLLaMA/comments/1pky5u4/olmo_31_32b_think_instruct_new_additions_to_the/ | false | false | default | 174 | {'enabled': True, 'images': [{'id': 'bwgy5ldc8t6g1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/bwgy5ldc8t6g1.jpeg?width=108&crop=smart&auto=webp&s=c5d336d7f89d760312062910eb23f08ca18e2ac8', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/bwgy5ldc8t6g1.jpeg?width=216&crop=smart&auto=webp&s=35ff3a98b40e763984a0a1bb4a9c02bae4ed597f', 'width': 216}, {'height': 292, 'url': 'https://preview.redd.it/bwgy5ldc8t6g1.jpeg?width=320&crop=smart&auto=webp&s=f7754acaf5d6a7963762ba8570552e5b9c83901d', 'width': 320}, {'height': 585, 'url': 'https://preview.redd.it/bwgy5ldc8t6g1.jpeg?width=640&crop=smart&auto=webp&s=9fca8f97eb697220d6926581cc50bcb50e944a99', 'width': 640}], 'source': {'height': 816, 'url': 'https://preview.redd.it/bwgy5ldc8t6g1.jpeg?auto=webp&s=5b05ed0dfe4462bd8f2d1040752f0ebac5702cef', 'width': 892}, 'variants': {}}]} | |
ChatGPT GPT-5.2 is unusable for serious work: file uploads NOT ACCESSIBLE and hallucinations | 0 | I am writing this because over the past weeks I have repeatedly reported a critical file handling issue to OpenAI and absolutely nothing has happened. No real response, no fix, no clear communication. This problem is not new. It has existed for many months, and from my own experience at least half a year, during which I was working on a serious technical project and investing significant money into it.
The core issue is simple and at the same time unacceptable. ZIP, SRT, TXT and PDF files upload successfully into ChatGPT. They appear in the UI with correct names and sizes and everything looks fine. However, the backend tool myfiles\_browser permanently reports NOT ACCESSIBLE. In this state the model has zero technical access to the file contents. None.
Despite this, ChatGPT continues to generate answers as if it had read those files. It summarizes them, analyzes them and answers detailed questions about their content. These responses are pure hallucinations. This is not a minor bug. It is a fundamental breach of trust. A tool marketed for professional use fabricates content instead of clearly stating that it has no access to the data.
This is not a user configuration problem. It is not related to Windows, Linux, WSL, GPU, drivers, memory, or long conversations. The same behavior occurs in new projects, fresh sessions and across platforms. I deleted projects, recreated them, tested different files and scenarios. The result is always the same.
On top of that, long conversations in ChatGPT on Windows, both in the desktop app and in browsers, frequently freeze or stall completely. The UI becomes unresponsive, system fans spin up, and ChatGPT is the only application causing this behavior. The same workflows run stably on macOS, which raises serious questions about quality and testing on Windows.
What makes this especially frustrating is that this issue has been described by the community for a long time. There are reports going back months and even years. Despite the release of GPT-5.2 and the marketing claims about professional readiness, this critical flaw still exists. There is no public documentation, no clear roadmap for a fix, and not even an honest statement acknowledging that file-based workflows are currently unreliable.
After half a year of work, investment and effort, I am left with a system that cannot be trusted. A tool that collapses exactly when it matters and pretends everything is fine. This is not a small inconvenience. It is a hard blocker for any serious work and a clear failure in product responsibility.
To be absolutely clear at the end. I am unable to post or openly discuss this on official OpenAI channels or on r/OpenAI because every attempt gets removed or blocked. Not because the content is false, not because it violates any technical rules, but because it is inconvenient. This is an honest description of a real issue I have been dealing with for weeks, and in reality this problem has existed for many months, possibly even years. What makes this worse is that what I wrote here is still a very mild version of the reality. The actual impact on work, serious projects, and trust in a tool marketed as professional is far more severe. When a company blocks public discussion of critical failures instead of addressing them, the issue stops being purely technical. It becomes an issue of responsibility. | 2025-12-12T17:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pkxpe7/chatgpt_gpt52_is_unusable_for_serious_work_file/ | Deep-Performance1073 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkxpe7 | false | null | t3_1pkxpe7 | /r/LocalLLaMA/comments/1pkxpe7/chatgpt_gpt52_is_unusable_for_serious_work_file/ | false | false | self | 0 | null |
Using Alias in router mode - llama.cpp possible? | 2 | I can set `--models-dir ./mymodels` and openwebui does populate the list of models successfully. but with their original name.
I prefer to use aliases so my users, ie my family who are interested in this (who aren't familiar with the plethora of models that are constantly being released) can pick and choose models easily for their tasks
Aliases and specific parameters for each model can be set using `--models-preset ./config.ini`
But that seems to break model unloading and loading in router mode from OpenWebUI (it also double-displays the list of model aliases from `config.ini` and the full names scanned from `--models-dir ./mymodels`).
I tried omitting `--models-dir ./mymodels` and using only `--models-preset ./config.ini`, but model unloading and loading in router mode won't work without the `/mymodels` directory being named, and I get the `model failed to load` error.
Router mode only seems to be working for me if I only use `--models-dir ./mymodels` and no other args in the `llama-server` command to try to set aliases.
Has anyone else come across this or found a workaround, other than renaming the .gguf files? I don't want to do that, as I still want a way to keep track of which model or which variant is being used under all the aliases.
The other solution is to use appropriately named symlinks for the ggufs that `--models-dir` will scan, but that's (a lot of ballache) and just more to keep track of and manage as I chop and change models over time, i.e. symlinks becoming invalid and having to be recreated as I replace models. | 2025-12-12T17:23:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pkxn9r/using_alias_in_router_mode_llamacpp_possible/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkxn9r | false | null | t3_1pkxn9r | /r/LocalLLaMA/comments/1pkxn9r/using_alias_in_router_mode_llamacpp_possible/ | false | false | self | 2 | null |
Dolphin-v2, Universal Document Parsing Model from ByteDance Open Source | 117 | Dolphin-v2 is an enhanced universal document parsing model that substantially improves upon the original Dolphin.
Dolphin-v2 introduces several major enhancements over the original Dolphin:
* **Universal Document Support**: Handles both digital-born and photographed documents with realistic distortions
* **Expanded Element Coverage**: Supports 21 element categories (up from 14), including dedicated code blocks and formulas
* **Enhanced Precision**: Uses absolute pixel coordinates for more accurate spatial localization
* **Hybrid Parsing Strategy**: Element-wise parallel parsing for digital documents + holistic parsing for photographed documents
* **Specialized Modules**: Dedicated parsing for code blocks with indentation preservation
[Hugging Face Model Card ](https://huggingface.co/ByteDance/Dolphin-v2)
| 2025-12-12T17:18:49 | https://v.redd.it/xkkz615l3t6g1 | Dear-Success-1441 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkxj0i | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xkkz615l3t6g1/DASHPlaylist.mpd?a=1768151944%2CNTJmYTk0MWNmMjM5N2QxYTU5Yzg2NzJiOTJkNGViOGQ4MDNiNGE3MjgxYzU1MWU5N2JlM2VmNTJkYzk3YWMwYw%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/xkkz615l3t6g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/xkkz615l3t6g1/HLSPlaylist.m3u8?a=1768151944%2COWQ5Y2Q4M2NlYmI2MDEwMGJlNDIzZmFkMzUyNTgzYWQ3MmFkOTYwZWVhZWIzYjgwNTFmN2VmOWI5NTZkNjFkNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xkkz615l3t6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1112}} | t3_1pkxj0i | /r/LocalLLaMA/comments/1pkxj0i/dolphinv2_universal_document_parsing_model_from/ | false | false | 117 | {'enabled': False, 'images': [{'id': 'azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=108&crop=smart&format=pjpg&auto=webp&s=7c50feff9a13c74e29059d18af6e2d9dace83b18', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=216&crop=smart&format=pjpg&auto=webp&s=99061520d3aeb536042eb94623f295f82b47295e', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=320&crop=smart&format=pjpg&auto=webp&s=9d5be394bee91ccab297b57111dbe8a1c8f98b65', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=640&crop=smart&format=pjpg&auto=webp&s=3cf60a90490ddd0aca32d5984004fcc28254e138', 'width': 640}, {'height': 622, 'url': 
'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=960&crop=smart&format=pjpg&auto=webp&s=e8adb23d95eedd3a3068c8fd226f93eb2c86d20b', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2d0345ba83337c824a1a0b44dc4df95e6683a238', 'width': 1080}], 'source': {'height': 784, 'url': 'https://external-preview.redd.it/azQ1aTBvNWwzdDZnMWGXoQeCCsCSbT01XS-4Qf-TxasLW4Bw-m6-HAAzWjW4.png?format=pjpg&auto=webp&s=a78fd64a2d27b8f3b6054f796e4626a3de88f04c', 'width': 1210}, 'variants': {}}]} | |
GLM4 vision support in llama.cpp ready for review | 14 | 2025-12-12T17:14:55 | https://github.com/ggml-org/llama.cpp/pull/17967 | beneath_steel_sky | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pkxfgw | false | null | t3_1pkxfgw | /r/LocalLLaMA/comments/1pkxfgw/glm4_vision_support_in_llamacpp_ready_for_review/ | false | false | default | 14 | null | |
Guys help me out | 0 | What is the best language model (uncensored and unrestricted)that I could have | 2025-12-12T17:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pkx4q5/guys_help_me_out/ | Ok_Cap3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkx4q5 | false | null | t3_1pkx4q5 | /r/LocalLLaMA/comments/1pkx4q5/guys_help_me_out/ | false | false | self | 0 | null |
[Educational Project] Building LLM inference from scratch to understand the internals. Looking for community feedback. | 2 | I'm creating an educational project for people who want to really understand what's happening during LLM inference - not just at a high level, but line by line.
The approach: implement everything from scratch in JavaScript (no ML frameworks like PyTorch), starting from parsing GGUF files all the way to GPU-accelerated generation. I chose JavaScript because it's accessible and runs in browsers, but mainly because it forces you to implement everything manually.
Current progress: 3/15 modules done, working on #4
GGUF parser (parsing model architecture, metadata, tensors)
BPE tokenization (full encode/decode pipeline)
Matrix operations (matmul, softmax, layer norm, etc.)
Embeddings & RoPE (in progress)
Later modules cover attention, KV cache, transformer blocks, sampling strategies, and WebGPU acceleration.
Goal: Help people understand every detail - from how RoPE works to why KV cache matters to how attention scoring actually works. The kind of deep knowledge that helps when you're debugging weird model behavior or trying to optimize inference.
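To give a flavor of the kind of detail the RoPE module gets into, here's a minimal sketch of the per-pair rotation (written in Python here for brevity; the project itself implements this in plain JavaScript, and the names are mine):

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    # RoPE rotates consecutive pairs (x0, x1), (x2, x3), ... of a
    # query/key vector by a position-dependent angle, so relative
    # position falls out of the dot product between rotated vectors.
    d = len(vec)
    out = [0.0] * d
    for i in range(0, d, 2):
        theta = pos / (base ** (i / d))  # lower dims rotate faster
        c, s = math.cos(theta), math.sin(theta)
        x0, x1 = vec[i], vec[i + 1]
        out[i] = x0 * c - x1 * s
        out[i + 1] = x0 * s + x1 * c
    return out
```

Two easy sanity checks when debugging an implementation: position 0 leaves the vector unchanged, and the rotation preserves its norm.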
Questions for the community:
What aspects of LLM inference are most confusing/mysterious? I want to make sure those get clear explanations
Is the JavaScript approach a dealbreaker for most people, or is the educational value worth it?
Would you prefer more focus on quantization techniques, or is fp32/fp16 sufficient for learning?
Any topics I'm missing that should be covered?
Planning to release this once I have solid content through at least module 11 (full text generation working). Would love any feedback on the approach or what would make this most useful! | 2025-12-12T16:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pkws7o/educational_project_building_llm_inference_from/ | purellmagents | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkws7o | false | null | t3_1pkws7o | /r/LocalLLaMA/comments/1pkws7o/educational_project_building_llm_inference_from/ | false | false | self | 2 | null |
Day 5: 21 Days of Building a Small Language Model: Data | 9 | When we talk about large language models, we focus heavily on architecture. Our focus is mainly on attention mechanism, transformer variant or mixture of expert layer. But the harsh truth which only few people acknowledge model intelligence doesn't come with elegant architecture or massive parameter count, it comes from data.
It's true that, the architecture enables learning, but data is what gets learned. Without high-quality, carefully curated, and diverse data even the most sophisticated architecture will produce mediocre results.
This is why companies keep their data pipelines secret, just like they protect their model weights. As different companies use similar architectures, data has become the biggest competitive advantage.
# Why data matters more than architecture
Before transformers, everyone knew that data is the new oil. Models were small, tasks were specific, and the main problem was getting enough human-labeled examples. But things changed with language models.
We no longer label millions of examples by hand. Instead, we:
* Collect huge amounts of text from the web (trillions of words)
* Train models that can do many different tasks
* Make models bigger and bigger
* Add a small amount of fine-tuning at the end
This change made people think data matters less. Since we're not labeling examples by hand anymore, many assume data isn't as important. But it's actually more important than ever.
# The three stages of training
Language models aren't trained in one step. Instead, data goes through different stages, and each stage teaches the model something new:
# Stage 1: Pretraining
Pretraining is what most people think of when they hear "LLM training." It uses billions or trillions of words scraped from the web: [Wikipedia](https://www.wikipedia.org/) articles, books, [GitHub](https://github.com/) code, news articles, [Reddit](https://www.reddit.com/) discussions, and public datasets like [C4](https://github.com/allenai/c4), [The Pile](https://pile.eleuther.ai/), and [OSCAR](https://oscar-project.github.io/).
This stage teaches the model:
* **Vocabulary**: What words and concepts mean
* **Grammar**: How language is structured
* **Basic reasoning**: Simple logic and cause-and-effect
* **General knowledge**: Facts about the world
* **Cultural perspectives**: Different viewpoints from the training data
* **Language patterns**: How words and ideas connect
The scale is huge. Modern pretraining uses trillions of words, a huge chunk of all publicly available text. This is where the model learns that "Paris" is a city, that "Python" can mean a programming language or a snake, and that "bank" has different meanings.
# Stage 2: Mid-Training
My personal belief is that this is one of the most important but least talked-about stages. Mid-training is done on purpose: researchers take a model that's been trained on huge amounts of messy web data and then train it on very clean, specific datasets to improve particular skills.
This is where a model starts to stand out. Mid-training data includes:
* **Code data**: [GitHub](https://github.com/) repositories, [Stack Overflow](https://stackoverflow.com/) Q&A pairs, competitive programming problems
* **Math problems**: [GSM8K](https://github.com/openai/grade-school-math), [MATH](https://github.com/hendrycks/math), problems with step-by-step solutions
* **Long documents**: Books, technical docs, extended texts
* **Multiple languages**: High-quality text in many different languages
* **Safety examples**: How to respond to harmful requests appropriately
Models like [DeepSeek](https://www.deepseek.com/) use a lot of mid-training for coding, which makes them really good at writing, debugging, and explaining code. This stage turns a general language model into a coding assistant, a math tutor, or a multilingual translator.
# Stage 3: Post-Training
Post-training is the final stage that turns a raw language model into a helpful chatbot. It has two main parts:
**Supervised Fine-Tuning (SFT)** teaches the model to:
* Answer user questions helpfully
* Format responses correctly
* Follow instructions
* Keep track of the conversation
**Reinforcement Learning from Human Feedback (RLHF)** teaches the model to:
* Give helpful responses
* Avoid harmful or biased answers
* Be honest about what it doesn't know
* Say no to inappropriate requests politely
Pretraining gives the model basic knowledge, mid-training adds special skills, and post-training shapes how it behaves and talks. This is where the model becomes actually useful for people.
# The Chinchilla Insight: Why more data beats bigger models
One of the most important discoveries about data and model performance came from the Chinchilla scaling laws, introduced by [Hoffmann et al. (2022)](https://arxiv.org/abs/2203.15556). This research completely changed how we think about balancing model size and training data.
The key finding from this research is that, for a given amount of computing power, there is an optimal balance between model size and training data: roughly 20 tokens per parameter.
This means:
* A 70 billion parameter model should be trained on \~1.4 trillion tokens
* A 7 billion parameter model should be trained on \~140 billion tokens
* A 1 billion parameter model should be trained on \~20 billion tokens
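Those figures are just the rule of thumb applied directly; a quick sanity check:

```python
def chinchilla_tokens(n_params, tokens_per_param=20):
    # Compute-optimal training token count under the ~20 tokens/parameter
    # rule of thumb from the Chinchilla paper.
    return n_params * tokens_per_param

assert chinchilla_tokens(70e9) == 1.4e12  # 70B params -> ~1.4T tokens
assert chinchilla_tokens(1e9) == 20e9     # 1B params  -> ~20B tokens
```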
Before Chinchilla, people usually made models bigger while keeping training data about the same. [GPT-3](https://arxiv.org/abs/2005.14165), for example, had 175 billion parameters but was trained on only 300 billion tokens, way less than it should have been.
The Chinchilla model proved this point: with 70 billion parameters trained on 1.4 trillion tokens, it beat GPT-3 even though it was less than half the size. This showed that data, not just parameters, is what matters for performance.
What this means:
1. **Bigger models need more data**: A 200 billion parameter model needs \~4 trillion tokens
2. **Many models are under-trained**: They have enough parameters but not enough data
3. **Data quality matters a lot**: Better data preparation means better results with the same amount of data
4. **Data work is just as important as model work**: Working on data is now as important as designing the model
# Why companies hide their data (But not their models architecture)
This is one of the most interesting things about modern AI development. Open models like [Llama](https://ai.meta.com/llama/), [DeepSeek](https://www.deepseek.com/), and [Mixtral](https://mistral.ai/news/mixtral-of-experts/) share lots of details about their architecture: how layers are structured, attention settings, tokenizer details, training settings, and how they split work across computers.
But when it comes to data, you usually see vague statements like "We create our dataset from a variety of data sources, apply de-duplication methods and data cleaning mechanisms, and remove domains with PII or adult content." This tells you almost nothing about what data sources they actually used, how they filtered it, or how they prepared it.
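For contrast, the "de-duplication methods" such statements mention are the easy, reproducible part; exact dedup is a few lines of hashing (a minimal sketch below), while the near-duplicate thresholds and source weighting are where the real secrets live:

```python
import hashlib

def exact_dedup(docs):
    # Keep the first occurrence of each byte-identical document,
    # dropping later repeats.
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

Production pipelines typically layer fuzzy near-dedup (MinHash-style techniques) on top of this, and it is those tuning choices that companies keep quiet about.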
Why this difference? Three main reasons:
# 1. Competitive Dynamics
If competitors know exactly what data you used, they can copy your model quality easily and cheaply. Architecture is easy to copy, once you publish a paper, anyone can build it. But data pipelines are different. The exact mix of sources, how you filter them, how you remove duplicates, and how you prepare the data are all secret knowledge.
If a competitor knows you got great coding performance by using 30% GitHub data with specific filters, they can do the same thing. But if they don't know, they have to do lots of experiments to figure it out. This creates a big difference: architecture knowledge spreads fast, but data knowledge stays secret.
# 2. Legal Constraints
The legal situation around training data is unclear and keeps changing. Copyright lawsuits like the [New York Times vs OpenAI case](https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html) show the legal risks. Terms of service, [robots.txt files](https://www.robotstxt.org/), and new regulations create a complicated set of rules. International rules like the [EU AI Act](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai) require companies to be transparent about training data and reduce bias.
The legal rules about fair use for AI training are still unclear. The less detail companies share, the less legal risk they face. Companies have to balance being transparent with avoiding legal problems.
# 3. Trade Secrets
How you prepare, filter, and weight data is now a major competitive advantage. It directly affects:
* How well the model avoids harmful outputs
* How well it solves hard problems
* How correct and well-written the code it generates is
* How well it works in different languages
* How it handles sensitive topics
* How often it makes factual mistakes
Companies that have spent millions developing their own data pipelines have strong reasons to protect that investment. The result is that data stays secret, which is very different from how open the model architecture community is.
# Real-World Examples: How Data Shapes Models
# OLMo 3: Complete Transparency
[OLMo 3](https://allenai.org/blog/olmo3), made by the Allen Institute for AI, is one of the most open examples of modern LLM training. The team shares not just the model weights, but all the training data, code, and checkpoints for every stage.
**Pretraining**: [Dolma 3](https://github.com/allenai/dolma), a huge collection of \~9.3 trillion tokens from web pages, scientific PDFs, code, math problems, and encyclopedia text. This gets refined into Dolma 3 Mix, a 5.9 trillion token dataset with more coding and math data.
**Mid-Training**:
* Dolma 3 Dolmino: 100 billion tokens focused on high-quality math, science, code, and instruction-following data
* Dolma 3 Longmino: 50 billion tokens for handling long documents
**Post-Training**: [Dolci](https://github.com/allenai/dolci), a complete set of data for reasoning, tool use, and instruction following, with separate data mixes for SFT, DPO, and RLVR.
This complete openness lets researchers see exactly how different data choices at each stage affect the model's final abilities.
# Summary
Data is the foundation that all language model intelligence is built on. While architecture provides the way to learn, data provides what actually gets learned.
The Chinchilla scaling laws showed that the best performance needs about 20 tokens per parameter, which completely changed the focus from just making models bigger to collecting and preparing enough high-quality training data.
Understanding data sources and how to process them is essential for anyone building language models. From Common Crawl's web crawling to GitHub's code, from Stack Exchange's Q&A pairs to Wikipedia's knowledge, each data source adds something unique.
Yet despite data's critical importance, companies keep their data pipelines as secret as their model weights, driven by competition, legal concerns, and the fact that data preparation has become a major competitive advantage.
As different companies use similar architectures, data has become the biggest differentiator. The quality and preparation of your training data will ultimately determine your model's abilities more than any architectural choice.
The next time you see a breakthrough language model, remember: the architecture might be public, but the real secret is in the data.
| 2025-12-12T16:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pkwarw/day_5_21_days_of_building_a_small_language_model/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkwarw | false | null | t3_1pkwarw | /r/LocalLLaMA/comments/1pkwarw/day_5_21_days_of_building_a_small_language_model/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=108&crop=smart&auto=webp&s=e83417ea8ba300321fa9a7406e0585970883e7ac', 'width': 108}, {'height': 197, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=216&crop=smart&auto=webp&s=88f0e8d76d498d008a7fb72f0a3fffc0fea2adf3', 'width': 216}, {'height': 292, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=320&crop=smart&auto=webp&s=f4a8cbef5e8817186a3d8de2e6f10886b6eabfa4', 'width': 320}, {'height': 584, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=640&crop=smart&auto=webp&s=f973e88a9eed85c7df36288940c1c2548d36b964', 'width': 640}, {'height': 876, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=960&crop=smart&auto=webp&s=430b786b89be068ea73f36dc4a222a7d07c085ea', 'width': 960}, {'height': 985, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?width=1080&crop=smart&auto=webp&s=ef4ab39059ebbe52b02496107e84232044ce81ad', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/0o4df8PfE4K_I1k1PJONvbH7ifiLVKycZPw1P7N0k94.png?auto=webp&s=3d062601204333d4382e17ccd2056b470bb479db', 'width': 2244}, 'variants': {}}]} |
Anyone else hitting RAM creep with long local LLM runs? | 16 | I’ve been running local Llama models (mostly via Ollama) in longer pipelines, batch inference, multi-step processing, some light RAG, and I keep seeing memory usage slowly climb over time. Nothing crashes immediately, but after a few hours the process is way heavier than it should be. I’ve tried restarting workers, simplifying loops, even running smaller batches, but the creep keeps coming back. Curious if this is just the reality of Python-based orchestration around local LLMs, or if there’s a cleaner way to run long-lived local pipelines without things slowly eating RAM. | 2025-12-12T16:31:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pkwarw/anyone_else_hitting_ram_creep_with_long_local_llm/ | CommunityGlobal8094 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkwarb | false | null | t3_1pkwarb | /r/LocalLLaMA/comments/1pkwarb/anyone_else_hitting_ram_creep_with_long_local_llm/ | false | false | self | 16 | null |
Umar Jamil explains how Mistral’s Magistral model was trained | 15 | 2025-12-12T16:27:19 | https://www.youtube.com/watch?v=S4EsRyZQKEc&t=977s | Dear-Success-1441 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1pkw795 | false | {'oembed': {'author_name': 'Menlo Talks', 'author_url': 'https://www.youtube.com/@menlotalks', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/S4EsRyZQKEc?start=977&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Reasoning in LLMs: Magistral — @umarjamilai from Mistral, Singapore AI Showcase"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/S4EsRyZQKEc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Reasoning in LLMs: Magistral — @umarjamilai from Mistral, Singapore AI Showcase', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1pkw795 | /r/LocalLLaMA/comments/1pkw795/umar_jamil_explains_how_mistrals_magistral_model/ | false | false | default | 15 | {'enabled': False, 'images': [{'id': 'XAos23Rbc14Lk2myUzbfAJlF7p5RC2BmrcP6tzPIbEI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XAos23Rbc14Lk2myUzbfAJlF7p5RC2BmrcP6tzPIbEI.jpeg?width=108&crop=smart&auto=webp&s=21a0e681168f17c838f8a8675478ab34d9ca7202', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XAos23Rbc14Lk2myUzbfAJlF7p5RC2BmrcP6tzPIbEI.jpeg?width=216&crop=smart&auto=webp&s=c3d512484f98ad7a7934289ed1cb751743a34659', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XAos23Rbc14Lk2myUzbfAJlF7p5RC2BmrcP6tzPIbEI.jpeg?width=320&crop=smart&auto=webp&s=d01fb73b7aa05eeff9953a869ece78c195721d76', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/XAos23Rbc14Lk2myUzbfAJlF7p5RC2BmrcP6tzPIbEI.jpeg?auto=webp&s=a54c83eb8542da6b4afbd158dbf695ce0314c306', 'width': 480}, 'variants': {}}]} | |
Best LLM under 30/40B for writing, chatting, talking. | 9 | Hello everyone, I’m still a novice in these artificial intelligence issues.
Since I’m a bit sick of GPT and of all those seemingly free AI models (which, after all, harvest our data), I decided to experiment a little with local LLMs.
I’m looking for a model to use mainly for chatting and discussing topics: one specialized above all in text, that speaks precisely and stays consistent with what it says, and that is also well informed, with in-depth rather than basic knowledge.
It would be fine, or even better, if it could also translate, summarize texts, or rewrite them in particular styles; in short, something like a writing tool.
I’m NOT looking for a model to write code.
If the model is thinking or can also take input the images, even better, since these two features would be very convenient for me.
I’m mainly using them in LM Studio.
From my computer, I can load a model up to 30/40B even if the model is medium large, it’s not a problem.
Thanks again for the help! 🙏 | 2025-12-12T16:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pkvu49/best_llm_under_3040b_for_writing_chatting_talking/ | tombino104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkvu49 | false | null | t3_1pkvu49 | /r/LocalLLaMA/comments/1pkvu49/best_llm_under_3040b_for_writing_chatting_talking/ | false | false | self | 9 | null |
Llama.cpp MI50 (gfx906) running on Ubuntu 24.04 notes | 7 | I'm running an older box (Dell Precision 3640) that I bought last year surplus because it could upgrade to 128G CPU Ram. It came with a stock P2200 (5GB) Nvidia card. since I still had room to upgrade this thing (+850W Alienware PSU) to a MI50 (32G VRAM gfx906), I figured it would be an easy thing to do. After much frustration, and some help from claude I got it working on amdgpu 5.7.3 - and was fairly happy with it. I figured I'd try some newer versions, which for some reason work - but are slower than 5.7.
Note that I also had CPU offloading, so only 16 layers (whatever I could fit) on the GPU... so YMMV.
There may be compiler options to make the higher versions work better, but I didn't explore any yet.
(Chart and install steps by claude after a long night of changing versions and comparing llama.cpp benchmarks)
|ROCm Version|Compiler|Prompt Processing (t/s)|Change from Baseline|Token Generation (t/s)|Change from Baseline|
|:-|:-|:-|:-|:-|:-|
|**5.7.3** (Baseline)|Clang 17.0.0|**61.42 ± 0.15**|\-|**1.23 ± 0.01**|\-|
|**6.4.1**|Clang 19.0.0|56.69 ± 0.35|**-7.7%**|1.20 ± 0.00|**-2.4%** |
|**7.1.1**|Clang 20.0.0|56.51 ± 0.44|**-8.0%** |1.20 ± 0.00|**-2.4%**|
|**5.7.3** (Verification)|Clang 17.0.0|**61.33 ± 0.44**|**+0.0%** |**1.22 ± 0.00**|**+0.0%**|
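(A quick sanity check: the "Change from Baseline" columns follow directly from the raw t/s numbers.)

```python
# Reproduce the percentage deltas in the table above from the raw t/s values.
def pct_change(value: float, baseline: float) -> float:
    return round((value / baseline - 1) * 100, 1)

baseline_pp, baseline_tg = 61.42, 1.23  # ROCm 5.7.3 baseline

print(pct_change(56.69, baseline_pp))  # -7.7  (ROCm 6.4.1, prompt processing)
print(pct_change(56.51, baseline_pp))  # -8.0  (ROCm 7.1.1, prompt processing)
print(pct_change(1.20, baseline_tg))   # -2.4  (token generation, both)
```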
### Grub
```/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=realloc pci=noaer pcie_aspm=off iommu=pt intel_iommu=on"
```
### ROCm 5.7.3 (Baseline)
**Installation**:
```bash
sudo apt install ./amdgpu-install_5.7.3.50703-1_all.deb
sudo amdgpu-install --usecase=rocm --no-dkms -y
```
**Build llama.cpp**
```bash
export ROCM_PATH=/opt/rocm
export HIP_PATH=/opt/rocm
export LD_LIBRARY_PATH=/opt/rocm/lib:$LD_LIBRARY_PATH
export HIP_VISIBLE_DEVICES=0
export ROCBLAS_LAYER=0
export HSA_OVERRIDE_GFX_VERSION=9.0.6
cd llama.cpp
rm -rf build
cmake . \
-DGGML_HIP=ON \
-DCMAKE_HIP_ARCHITECTURES=gfx906 \
-DAMDGPU_TARGETS=gfx906 \
-DCMAKE_PREFIX_PATH="/opt/rocm-5.7.3;/opt/rocm-5.7.3/lib/cmake" \
-Dhipblas_DIR=/opt/rocm-5.7.3/lib/cmake/hipblas \
-DCMAKE_HIP_COMPILER=/opt/rocm-5.7.3/llvm/bin/clang \
-B build
cmake --build build --config Release -j $(nproc)
```
### ROCm 6.4.1
**Installation**:
```bash
# 1. Download ROCm installer
wget https://repo.radeon.com/amdgpu-install/6.4.1/ubuntu/noble/amdgpu-install_6.4.60401-1_all.deb
# 2. Download rocBLAS package from Arch Linux
wget https://archlinux.org/packages/extra/x86_64/rocblas/download -O rocblas-6.4.0-1-x86_64.pkg.tar.zst
# 3. Extract gfx906 tensile files
tar -I zstd -xf rocblas-6.4.0-1-x86_64.pkg.tar.zst
find usr/lib/rocblas/library/ -name "*gfx906*" | wc -l # 156 files
# 4. Remove old ROCm
sudo amdgpu-install --uninstall
# 5. Install ROCm 6.4.1
sudo apt install ./amdgpu-install_6.4.60401-1_all.deb
sudo amdgpu-install --usecase=rocm --no-dkms -y
# 6. Copy gfx906 tensile files
sudo cp -r usr/lib/rocblas/library/*gfx906* /opt/rocm/lib/rocblas/library/
# 7. Rebuild llama.cpp
cd /home/bigattichouse/workspace/llama.cpp
rm -rf build
cmake -B build -DGGML_HIP=ON -DCMAKE_HIP_COMPILER=/opt/rocm/bin/hipcc
cmake --build build
```
### ROCm 7.1.1
**Installation**:
```bash
# 1. Download ROCm installer
wget https://repo.radeon.com/amdgpu-install/7.1.1/ubuntu/noble/amdgpu-install_7.1.1.70101-1_all.deb
# 2. Download rocBLAS package from Arch Linux
wget https://archlinux.org/packages/extra/x86_64/rocblas/download -O rocblas-7.1.1-1-x86_64.pkg.tar.zst
# 3. Extract gfx906 tensile files
tar -I zstd -xf rocblas-7.1.1-1-x86_64.pkg.tar.zst
find usr/lib/rocblas/library/ -name "*gfx906*" | wc -l # 156 files
# 4. Remove old ROCm
sudo amdgpu-install --uninstall
# 5. Install ROCm 7.1.1
sudo apt install ./amdgpu-install_7.1.1.70101-1_all.deb
sudo amdgpu-install --usecase=rocm --no-dkms -y
# 6. Copy gfx906 tensile files
sudo cp -r usr/lib/rocblas/library/*gfx906* /opt/rocm/lib/rocblas/library/
# 7. Rebuild llama.cpp
cd /home/bigattichouse/workspace/llama.cpp
rm -rf build
cmake -B build -DGGML_HIP=ON -DCMAKE_HIP_COMPILER=/opt/rocm/bin/hipcc
cmake --build build
```
### Common Environment Variables (All Versions)
```bash
export ROCM_PATH=/opt/rocm
export HIP_PATH=/opt/rocm
export LD_LIBRARY_PATH=/opt/rocm/lib:$LD_LIBRARY_PATH
export HIP_VISIBLE_DEVICES=0
export ROCBLAS_LAYER=0
export HSA_OVERRIDE_GFX_VERSION=9.0.6
```
Required environment variables for ROCm + llama.cpp (5.7.3):
```bash
export ROCM_PATH=/opt/rocm-5.7.3
export HIP_PATH=/opt/rocm-5.7.3
export HIP_PLATFORM=amd
export LD_LIBRARY_PATH=/opt/rocm-5.7.3/lib:$LD_LIBRARY_PATH
export PATH=/opt/rocm-5.7.3/bin:$PATH
# GPU selection and tuning
export HIP_VISIBLE_DEVICES=0
export ROCBLAS_LAYER=0
export HSA_OVERRIDE_GFX_VERSION=9.0.6
```
### Benchmark Tool
Used llama.cpp's built-in `llama-bench` utility:
```bash
llama-bench -m model.gguf -n 128 -p 512 -ngl 16 -t 8
```
### Hardware
- **GPU**: AMD Radeon Instinct MI50 (gfx906)
- **Architecture**: Vega20 (GCN 5th gen)
- **VRAM**: 16GB HBM2
- **Compute Units**: 60
- **Max Clock**: 1725 MHz
- **Memory Bandwidth**: 1 TB/s
- **FP16 Performance**: 26.5 TFLOPS
### Model
- **Name**: Mistral-Small-3.2-24B-Instruct-2506-BF16
- **Size**: 43.91 GiB
- **Parameters**: 23.57 Billion
- **Format**: BF16 (16-bit brain float)
- **Architecture**: llama (Mistral variant)
### Benchmark Configuration
- **GPU Layers**: 16 (partial offload due to model size vs VRAM)
- **Context Size**: 2048 tokens
- **Batch Size**: 512 tokens
- **Threads**: 8 CPU threads
- **Prompt Tokens**: 512 (for PP test)
- **Generated Tokens**: 128 (for TG test)
| 2025-12-12T15:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pkvc85/llamacpp_mi50_gfx906_running_on_ubuntu_2404_notes/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkvc85 | false | null | t3_1pkvc85 | /r/LocalLLaMA/comments/1pkvc85/llamacpp_mi50_gfx906_running_on_ubuntu_2404_notes/ | false | false | self | 7 | null |
Open source VLMs are getting much better | 2 | First Qwen3VL, then Moondream, now Perceptron Isaac 0.2 setting new SOTA benchmarks. The zoom in & thinking capabilities are pretty neat. I bet we could build some pretty neat stuff with what exists now
Qwen3VL: [https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&from=research.latest-advancements-list](https://qwen.ai/blog?id=99f0335c4ad9ff6153e517418d48535ab6d8afef&from=research.latest-advancements-list)
Moondream: [https://moondream.ai/blog/moondream-3-preview](https://moondream.ai/blog/moondream-3-preview)
Perceptron: [https://www.perceptron.inc/blog/introducing-isaac-0-2](https://www.perceptron.inc/blog/introducing-isaac-0-2)
| 2025-12-12T15:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pkukax/open_source_vlms_are_getting_much_better/ | One-Construction7805 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkukax | false | null | t3_1pkukax | /r/LocalLLaMA/comments/1pkukax/open_source_vlms_are_getting_much_better/ | false | false | self | 2 | null |
MRI-style transformer scan, Llama 3.2 3B | 7 | Hey folks! I’m working on an MRI-style visualization tool for transformer models, starting with LLaMA 3.2 3B.
These screenshots show per-dimension activity stacked across layers (voxel height/color mapped to KL divergence deltas).
What really stood out to me is the contrast between middle layers and the final layer. The last layer appears to concentrate a disproportionate amount of representational “mass” compared to layer 27, while early layers show many dimensions with minimal contribution.
This is still very much a work in progress, but I’d love feedback, criticism, or pointers to related work.
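As a rough illustration of the per-dimension metric described above (a toy sketch, not the actual tool: a random matrix stands in for the model's unembedding head, and "importance" is the KL divergence of the output distribution after zeroing one hidden dimension):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for two discrete distributions
    return float(np.sum(p * np.log(p / q)))

def dim_importance(hidden, unembed):
    """KL divergence caused by zeroing each hidden dimension.

    hidden:  (d,) activation vector at some layer
    unembed: (d, vocab) projection to logits -- a stand-in for the
             real model head in this sketch
    """
    base = softmax(hidden @ unembed)
    scores = []
    for i in range(hidden.shape[0]):
        ablated = hidden.copy()
        ablated[i] = 0.0
        scores.append(kl(base, softmax(ablated @ unembed)))
    return np.array(scores)

rng = np.random.default_rng(0)
h = rng.normal(size=8)
W = rng.normal(size=(8, 32))
print(dim_importance(h, W).round(3))  # low values = candidates for safe pruning
```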
[layer 27 vs layer 28. voxel height\/color mapped to kl div\/l2 delta](https://preview.redd.it/ioefsyohhs6g1.png?width=1113&format=png&auto=webp&s=2adc9d403ebe7fa6da5c71c156b9dc322ebbe612)
[compare that to one of the middle layers](https://preview.redd.it/9m8mb26shs6g1.png?width=1144&format=png&auto=webp&s=3d271005b28a2940849039721efc452e14b018d9)
[first layer. note the numerous dims that can be safely pruned, as there is no cognitive impact](https://preview.redd.it/a4nfr6tyhs6g1.png?width=1138&format=png&auto=webp&s=92fd45cf5cd33af30e70d053afca1e7da8138a13)
| 2025-12-12T15:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pkugay/mristyle_transformer_scan_llama_32_3b/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkugay | false | null | t3_1pkugay | /r/LocalLLaMA/comments/1pkugay/mristyle_transformer_scan_llama_32_3b/ | false | false | 7 | null | |
New Claude 2.1 Refuses to kill a Python process :) | 0 | 2025-12-12T15:11:38 | mapickform | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkuaxn | false | null | t3_1pkuaxn | /r/LocalLLaMA/comments/1pkuaxn/new_claude_21_refuses_to_kill_a_python_process/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'b2ukub7chs6g1', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=108&crop=smart&auto=webp&s=a8f085b51df31a44a25428379393fc02d2ac85f1', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=216&crop=smart&auto=webp&s=7d901be6f4aaa4f72b7ad92d8ced1f3c1d5673d5', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=320&crop=smart&auto=webp&s=d07a33b74607fc28b9eae9994ce6a80ddb8f742d', 'width': 320}, {'height': 184, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=640&crop=smart&auto=webp&s=9db94c55b02f8cd792eb8f784ddf8407b9ca2c06', 'width': 640}, {'height': 277, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=960&crop=smart&auto=webp&s=1b4a7e0b828affee320d8533582d1b1e26eb3486', 'width': 960}, {'height': 311, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?width=1080&crop=smart&auto=webp&s=7815e5fe5994fcd5bebf4988e31aaccd58c00793', 'width': 1080}], 'source': {'height': 454, 'url': 'https://preview.redd.it/b2ukub7chs6g1.png?auto=webp&s=540d88f47623155ae349cf40483ce3040ea61f50', 'width': 1572}, 'variants': {}}]} | ||
Most efficient way to classify rotated images before sending them to a VLM | 3 | I'm building a document parser using local VLMs, I have few models lined up that i want to test for my use cases. The thing is these documents might have random rotated pages either by 90deg or 180deg, and I want to identify them and rotate them before sending them to the VLM.
The pages mostly consist of normal text: paragraphs, tables, etc. What's the most efficient way to do this? | 2025-12-12T15:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pku9qo/most_efficient_way_to_classify_rotated_images/ | l_Mr_Vader_l | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pku9qo | false | null | t3_1pku9qo | /r/LocalLLaMA/comments/1pku9qo/most_efficient_way_to_classify_rotated_images/ | false | false | self | 3 | null |
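For what it's worth, a cheap pre-filter can be sketched in pure NumPy before reaching for a heavier tool like `pytesseract.image_to_osd` (the heuristic and the synthetic page below are illustrative assumptions, not a production detector):

```python
import numpy as np

def looks_rotated_90(binary_page: np.ndarray) -> bool:
    """Cheap pre-check before the VLM.

    Text lines give a spiky row-sum profile on an upright (or 180-degree)
    page; after a 90-degree rotation that spikiness moves to the columns.
    Distinguishing 180 degrees needs a second step (e.g. Tesseract's
    orientation detection); this only flags 90/270-degree rotations.
    """
    row_var = binary_page.sum(axis=1).astype(float).var()
    col_var = binary_page.sum(axis=0).astype(float).var()
    return bool(col_var > row_var)

# synthetic page: a horizontal "text line" every 10 px
page = np.zeros((200, 150), dtype=np.uint8)
page[::10, :] = 1
print(looks_rotated_90(page))    # False (upright)
print(looks_rotated_90(page.T))  # True  (simulated 90-degree rotation)
```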
Chatterbox tts - can't replicate demo quality | 2 | Hi, there is great demo here [https://huggingface.co/spaces/ResembleAI/Chatterbox-Multilingual-TTS](https://huggingface.co/spaces/ResembleAI/Chatterbox-Multilingual-TTS)
I can use it to produce very nice results, but after installing Chatterbox locally, even with the same reference voice audio, cfg, and temperature as the demo, I get nowhere near the demo's quality. I want Polish to work, but from what I see even German is not ideal. English, on the other hand, works great.
```python
import torch
import torchaudio as ta
from chatterbox.mtl_tts import ChatterboxMultilingualTTS


def main():
    # Select device
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Load model
    multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device=device)

    # Polish TTS text (kept in Polish)
    text_pl = (
        "Witam wszystkich na naszej stronie, jak dobrze was widzieć. "
        "To jest testowy tekst generowany przy użyciu polskiego pliku głosowego. "
        "Model powinien dopasować barwę głosu do użytego prompta audio."
    )

    # Audio prompt: same Polish voice file as in the demo
    audio_prompt_path = "pl_audio_hf.wav"

    # Generate Polish audio
    wav = multilingual_model.generate(
        text_pl,
        language_id="pl",
        audio_prompt_path=audio_prompt_path,
        exaggeration=0.25,
        temperature=0.8,
        cfg_weight=0.2,
    )

    # Save WAV file
    output_path = "polish_test_with_prompt_hf_voice.wav"
    ta.save(output_path, wav, multilingual_model.sr)


if __name__ == "__main__":
    main()
```
I am new to tts, am I missing something, please help. Thank You | 2025-12-12T14:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pktumz/chatterbox_tts_cant_replicate_demo_quality/ | Adamus987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pktumz | false | null | t3_1pktumz | /r/LocalLLaMA/comments/1pktumz/chatterbox_tts_cant_replicate_demo_quality/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=108&crop=smart&auto=webp&s=22084148ea19a7f35b7f2572acf6c191af11b6c1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=216&crop=smart&auto=webp&s=9ad09bc07a49a6b860414a84c5f58b353c08831a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=320&crop=smart&auto=webp&s=32c84d5f665a1465f43378835b3d502fccb44673', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=640&crop=smart&auto=webp&s=e705ba13e397031a758790d9e00e8b2a7c738b1e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=960&crop=smart&auto=webp&s=880c982acc936aa36acf03d9fbaa577d0f3be545', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?width=1080&crop=smart&auto=webp&s=52bf63fa1ffef0d151ef916f1085cd20348e4173', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jS0Ej-EcPSfBb64egVB62BuEmJ64TDqDC_WDKg-CpQg.png?auto=webp&s=83c9488e99167dc644e00f91dc83684a86be30e3', 'width': 1200}, 'variants': {}}]} |
If you’ve built an MCP server, what’s your “hello world” tool? | 1 | [removed] | 2025-12-12T14:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pktqar/if_youve_built_an_mcp_server_whats_your_hello/ | Ancient-Direction231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pktqar | false | null | t3_1pktqar | /r/LocalLLaMA/comments/1pktqar/if_youve_built_an_mcp_server_whats_your_hello/ | false | false | self | 1 | null |
Open source vpn to access local llm from outside | 1 | I hate to post this, since it's not directly related to local LLMs, but I clearly remember recently reading (here, I think) about a VPN project on GitHub that was described as best in class and widely used.
Very stupidly I didn't take note of it, and now I can't find it anymore because I don't remember its name...
Of course, it was a general-purpose VPN, not one just for accessing local LLMs, but it was suggested for that purpose in that thread.
Thank you in advance for your feedback and help. | 2025-12-12T14:26:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pkt7cm/open_source_vpn_to_access_local_llm_from_outside/ | Green-Ad-3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkt7cm | false | null | t3_1pkt7cm | /r/LocalLLaMA/comments/1pkt7cm/open_source_vpn_to_access_local_llm_from_outside/ | false | false | self | 1 | null |
https://www.50-nuances-octets.fr/en/posts/ministral-3-gpu-amd-windows/ | 1 | [removed] | 2025-12-12T14:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pkt0s9/httpswww50nuancesoctetsfrenpostsministral3gpuamdwi/ | Sykursen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkt0s9 | false | null | t3_1pkt0s9 | /r/LocalLLaMA/comments/1pkt0s9/httpswww50nuancesoctetsfrenpostsministral3gpuamdwi/ | false | false | self | 1 | null |
WTF - Backdroor virus in popular LLMstudio models | 0 | Guys, I downloaded the new Devstral model by mistral, specifically the one that was just uploaded today by LLMstudio, Devstral-small-2-2512. I asked the model this question:
**Hey, do you know what is the Zeta framework?**
It started explaining what it is, then suddenly the conversation got deleted, because there was a backdoor installed without my knowledge, luckily Microsoft Defender busted it, but now im freaking out, what if other stuff got through and wasn't detected by the antivirus?? | 2025-12-12T14:18:21 | Flkhuo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkt0cf | false | null | t3_1pkt0cf | /r/LocalLLaMA/comments/1pkt0cf/wtf_backdroor_virus_in_popular_llmstudio_models/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fdo8u7sr7s6g1', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/fdo8u7sr7s6g1.png?width=108&crop=smart&auto=webp&s=e27fd00ea902aa580d0541b5f6ab870c8af0f521', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/fdo8u7sr7s6g1.png?width=216&crop=smart&auto=webp&s=b5f1e6b3914112fd009d256f40865a20a6f78553', 'width': 216}, {'height': 411, 'url': 'https://preview.redd.it/fdo8u7sr7s6g1.png?width=320&crop=smart&auto=webp&s=2d2d39bbc4aff7f2519e2e1d265c495c8bed14f0', 'width': 320}], 'source': {'height': 803, 'url': 'https://preview.redd.it/fdo8u7sr7s6g1.png?auto=webp&s=157e8afe2283379030e7ce32fde89026471fef9e', 'width': 624}, 'variants': {}}]} | |
3090 For Sale. Post Office/Cashapp/Venmo $600 | 0 | I have a 3090 for sale that I have not used. It needs an I/O shield though. | 2025-12-12T14:06:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pksq7b/3090_for_sale_post_officecashappvenmo_600/ | UnionCounty22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pksq7b | false | null | t3_1pksq7b | /r/LocalLLaMA/comments/1pksq7b/3090_for_sale_post_officecashappvenmo_600/ | false | false | self | 0 | null |
Chatgpt 5.1 beats Gemini 2.5 flash in geometry. | 0 | 2025-12-12T14:06:08 | https://www.reddit.com/gallery/1pksq5l | luckything321 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pksq5l | false | null | t3_1pksq5l | /r/LocalLLaMA/comments/1pksq5l/chatgpt_51_beats_gemini_25_flash_in_geometry/ | false | false | 0 | null | ||
LOCAL AI on mobile phone device and tablet | 1 | [removed] | 2025-12-12T14:04:54 | https://play.google.com/store/apps/details?id=io.secretai.llm | Adventurous_Role_489 | play.google.com | 1970-01-01T00:00:00 | 0 | {} | 1pksp6g | false | null | t3_1pksp6g | /r/LocalLLaMA/comments/1pksp6g/local_ai_on_mobile_phone_device_and_tablet/ | false | false | default | 1 | null |
LOCAL AI on mobile phone and tablet | 1 | [removed] | 2025-12-12T13:55:51 | https://play.google.com/store/apps/details?id=io.secretai.llm | Adventurous_Role_489 | play.google.com | 1970-01-01T00:00:00 | 0 | {} | 1pkshc5 | false | null | t3_1pkshc5 | /r/LocalLLaMA/comments/1pkshc5/local_ai_on_mobile_phone_and_tablet/ | false | false | default | 1 | null |
Are you using cloud to finetune, Do you trust with your data? | 1 | I have been testing and practicing some of my code with RunPod, Lambda, and Colab, but I have not yet tried it with my special dataset; my goal is to build 70B-parameter models.
I have also checked some encryption methods but did not feel at ease with them.
What is your go to hardware?
| 2025-12-12T13:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pksdhr/are_you_using_cloud_to_finetune_do_you_trust_with/ | Exciting_Narwhal_987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pksdhr | false | null | t3_1pksdhr | /r/LocalLLaMA/comments/1pksdhr/are_you_using_cloud_to_finetune_do_you_trust_with/ | false | false | self | 1 | null |
I wrote a client-side parser to strip DeepSeek-R1 <think> tags, fix broken JSON, and prevent accidental PII leaks | 0 | I've been building a UI for local DeepSeek-R1, and the mixed output (Chain of Thought + JSON) kept breaking `JSON.parse()`.
I couldn't find a lightweight library to handle the `<think>` blocks and repair the JSON stream in real-time, so I built one.
**It handles two main problems:**
1. **The "DeepSeek" Problem:**
* **Stack Machine:** Uses a deterministic FSM to isolate the JSON object from the reasoning trace (`<think>`).
* **Auto-Repair:** Closes unclosed brackets/quotes on the fly so the UI doesn't crash on partial tokens.
2. **The "Clipboard" Problem (Local DLP):**
* I often switch between local models and public APIs.
* I added a **PII Scanner** (running in a Web Worker) that detects if I accidentally pasted an API Key, AWS Secret, or Credit Card into the input field.
* It warns me *before* the text leaves the browser/hits the context window.
**Tech Stack:**
* **Architecture:** Hybrid JS / WebAssembly (C kernel via Emscripten).
* **Performance:** Zero main-thread blocking. 7kB bundle.
* **License:** MIT (Fully open source).
I figured others here might be fighting the same regex battles with the new reasoning models or want a sanity check for their inputs.
**Repo:** [https://github.com/ShyamSathish005/ai-guard](https://github.com/ShyamSathish005/ai-guard) | 2025-12-12T13:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pks39p/i_wrote_a_clientside_parser_to_strip_deepseekr1/ | Worldly_Major_4826 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pks39p | false | null | t3_1pks39p | /r/LocalLLaMA/comments/1pks39p/i_wrote_a_clientside_parser_to_strip_deepseekr1/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=108&crop=smart&auto=webp&s=b109b673da09eca32eccf3c3368e87112ced3e03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=216&crop=smart&auto=webp&s=10980e4144fe3e6a46851589851804f6f260e956', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=320&crop=smart&auto=webp&s=e8631f78c4ba107def5637c4c127254f85db0403', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=640&crop=smart&auto=webp&s=c4a8d1dcc8b6c232846cabadc0f0e3346b1c4805', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=960&crop=smart&auto=webp&s=14621aa4a47c63e0f520990fefb6e6976c70abd8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?width=1080&crop=smart&auto=webp&s=6fd6e17c4d7789ae816cb94a0cb0ecbafaf151af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TArVsUvgfAaVZSgPDB6Oct0loYxtM97k8OMw_nwJ0eo.png?auto=webp&s=7e739c18d62fd2dff74df63f30c1ddaa992eb0ef', 'width': 1200}, 'variants': {}}]} |
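A minimal sketch of the streaming state-machine idea (an illustration of the approach, not the library's actual code; a small holdback buffer catches tags split across chunk boundaries):

```python
OPEN, CLOSE = "<think>", "</think>"

def strip_think(chunks):
    """Yield only visible text from a streamed response, dropping
    <think>...</think> spans even when a tag is split across chunks."""
    buf, hidden = "", False
    for chunk in chunks:
        buf += chunk
        while True:
            tag = CLOSE if hidden else OPEN
            i = buf.find(tag)
            if i != -1:
                if not hidden:
                    yield buf[:i]            # text before the tag is visible
                buf, hidden = buf[i + len(tag):], not hidden
                continue
            if hidden:
                buf = buf[-(len(tag) - 1):]  # keep a tail: tag may be split
            else:
                keep = len(buf) - (len(tag) - 1)
                if keep > 0:
                    yield buf[:keep]
                    buf = buf[keep:]
            break
    if not hidden:
        yield buf                            # flush whatever remains

# tags split across token boundaries are still removed
stream = ["Hi<thi", "nk>hidden reasoning</th", "ink> world"]
print("".join(strip_think(stream)))  # → Hi world
```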
4x AMD R9700 vllm System | 8 | Hi everyone,
I am new to Reddit,
I started testing with local LLMs using a Xeon W2255, 128GB RAM, and 2x RTX 3080s, and everything ran smoothly. Since my primary goal was inference, I initially upgraded to two AMD R9700s to get more VRAM.
The project is working well so far, so I'm moving to the next step with new hardware. My pipeline requires an LLM, a VLM, and a RAG system (including Embeddings and Reranking).
I have now purchased two additional R9700s and plan to build a Threadripper 9955WX Pro system with 128GB DDR5 housing the four R9700s, which will be dedicated exclusively to running vLLM. My old Xeon W2255 system would remain in service to handle the VLM and the rest of the workload, with both systems connected directly via a 10Gb network.
My original plan was to put everything into the Threadripper build and run 6x R9700s, but it feels like going beyond 4 GPUs in one system introduces too many extra problems.
I just wanted to hear your thoughts on this plan. Also, since I haven't found much info on 4x R9700 systems yet, let me know if there are specific models you'd like me to test. Currently, I’m planning to run gpt-oss 120b. | 2025-12-12T13:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pkrnpo/4x_amd_r9700_vllm_system/ | NunzeCs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkrnpo | false | null | t3_1pkrnpo | /r/LocalLLaMA/comments/1pkrnpo/4x_amd_r9700_vllm_system/ | false | false | self | 8 | null |
Best SW setup for MI50 | 2 | I recently bought two 16GB MI50 from Alibaba for a local AI rig I am building.
Ideally, I would like to use the PC (X99 mobo with xeon e5 2680 v4) as daily driver as well, if possible running arch. I like Debian but some of my default settings don't run well on Debian trixie. And also ideally, I would like the AI rig to run 24/7 for n8n, home assistant, coding...
Since the MI50 architecture is quite old, I am worried that it might be challenging to maintain Arch with rocm and GPU drivers. In fact, it seems that many MI50 users are running Ubuntu LTS.
I am wondering what the best option would be for my use-case.
- Arch for everything
- Dual boot, arch as daily driver and debian or Ubuntu for AI
- Proxmox as hypervisor and arch and debian VMs with GPU pass-through
- Something else
| 2025-12-12T13:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pkrat1/best_sw_setup_for_mi50/ | vucamille | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkrat1 | false | null | t3_1pkrat1 | /r/LocalLLaMA/comments/1pkrat1/best_sw_setup_for_mi50/ | false | false | self | 2 | null |
Emoji Translator: Convert English to Expressive Emoji Sequences 🎭 (Fun Side Project) | 14 | Hey everyone,
I built a fun open-source tool called the Emoji Translator that converts English sentences into expressive emoji sequences, instead of a simple dictionary lookup (like replacing "cat" with 🐱), I fine-tuned BART-Large using LoRA so it actually understands context and sentiment.
# Some funny/interesting results:
* "I feel misunderstood." → 🤬😬
* "I am happy." → 😁🤘
* "My parents want to have a new baby" → 👶👪🤰
* "I tweeted the news to my followers." → 🤳🤠🤳
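For contrast, the dictionary-lookup baseline the post argues against can be sketched in a few lines (toy mapping, purely illustrative):

```python
# Naive word-to-emoji lookup: what the fine-tuned model is NOT doing.
LOOKUP = {"cat": "🐱", "happy": "😁", "baby": "👶"}

def lookup_translate(sentence: str) -> str:
    words = sentence.lower().rstrip(".").split()
    return "".join(LOOKUP[w] for w in words if w in LOOKUP)

print(lookup_translate("I am happy."))            # → 😁
print(lookup_translate("I feel misunderstood."))  # no match: no context, no sentiment
```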
# Technicals for the nerds:
* **Dataset:** I used *Gemini 3 Pro* to generate a synthetic dataset because scraping clean emoji data is hard.
* **Training:** I implemented Curriculum Learning with 6 stages of difficulty. I started by teaching the model simple object-emoji pairs and progressively introduced complex sentences and abstract concepts. This helped stabilize convergence significantly compared to throwing all the data at it at once.
# Try it out:
* **Live Demo:** [HuggingFace Space](https://huggingface.co/spaces/mohamedmostafa259/emoji-translator-demo)
* **GitHub:** [mohamedmostafa259/emoji-translator](https://github.com/mohamedmostafa259/emoji-translator)
* **Model:** [HuggingFace Hub](https://huggingface.co/mohamedmostafa259/bart-emoji-translator)
* **Dataset:** [Kaggle Dataset](https://www.kaggle.com/datasets/mohamedmostafa259/english-to-emoji)
* **Training notebook:** [Kaggle Notebook](https://www.kaggle.com/code/mohamedmostafa259/emoji-translator-curriculum-learning)
It's completely open source. Would love to see what weird translations you can get it to generate! | 2025-12-12T12:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pkr9ak/emoji_translator_convert_english_to_expressive/ | ReplacementMoney2484 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkr9ak | false | null | t3_1pkr9ak | /r/LocalLLaMA/comments/1pkr9ak/emoji_translator_convert_english_to_expressive/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=108&crop=smart&auto=webp&s=4234279a9966a4553a6c812217ade0e637edd4cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=216&crop=smart&auto=webp&s=15499791fc2f925b940486921c5663ce350499ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=320&crop=smart&auto=webp&s=721f44b5dbe680f93bdedd9fb7b1d060de629bb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=640&crop=smart&auto=webp&s=2b88786c6162007d97a571d0e5087438aac97b2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=960&crop=smart&auto=webp&s=555aaba5a06e5f7062a5ffd78ebb35bd0c6776e4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?width=1080&crop=smart&auto=webp&s=793f22d7168db318a602c412d598cdc75d1366fb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/W00rXmR99q3Qi6iGhHIK-XOvPA_Wjn5IDIqNYIh5ZWE.png?auto=webp&s=c7f74d4675e75488cf7da007ef46a9a0b8ccd77d', 'width': 1200}, 'variants': {}}]} |
vibevoice real time swift port | 6 | The stream input works great with the LLM stream output. Just had to try piping it with mlx\_lm.generate, and it works great.
[https://x.com/LiMzba/status/1999457581228785875?s=20](https://x.com/LiMzba/status/1999457581228785875?s=20) | 2025-12-12T12:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pkr50a/vibevoice_real_time_swift_port/ | Tiny_Judge_2119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkr50a | false | null | t3_1pkr50a | /r/LocalLLaMA/comments/1pkr50a/vibevoice_real_time_swift_port/ | false | false | self | 6 | null |
Building an offline legal compliance AI on RTX 3090 – am I doing this right or completely overengineering it? | 41 |
Hey r/LocalLLaMA,
I'm building an AI system for insurance policy compliance that needs to run **100% offline** for legal/privacy reasons. Think: processing payslips, employment contracts, medical records, and cross-referencing them against 300+ pages of insurance regulations to auto-detect claim discrepancies.
**What's working so far:**
- Ryzen 9 9950X, 96GB DDR5, RTX 3090 24GB, Windows 11 + Docker + WSL2
- Python 3.11 + Ollama + Tesseract OCR
- Built a payslip extractor (OCR + regex) that pulls employee names, national registry numbers, hourly wage (€16.44/hr baseline), sector codes, and hours worked → **70-80% accuracy, good enough for PoC**
- Tested Qwen 2.5 14B/32B models locally
- Got structured test dataset ready: 13 docs (payslips, contracts, work schedules) from a real anonymized case
**What didn't work:**
- Open WebUI didn't cut it for this use case – too generic, not flexible enough for legal document workflows
**What I'm building next:**
- RAG pipeline (LlamaIndex) to index legal sources (insurance regulation PDFs)
- Auto-validation: extract payslip data → query RAG → check compliance → generate report with legal citations
- Multi-document comparison (contract ↔ payslip ↔ work hours)
- Demo ready by March 2026
**My questions:**
1. **Model choice:** Currently eyeing **Qwen 3 30B-A3B (MoE)** – is this the right call for legal reasoning on 24GB VRAM, or should I go with dense 32B? Thinking mode seems clutch for compliance checks.
2. **RAG chunking:** Fixed-size (1000 tokens) vs section-aware splitting for legal docs? What actually works in production?
3. **Anyone done similar compliance/legal document AI locally?** What were your pain points? Did it actually work or just benchmarketing bullshit?
4. **Better alternatives to LlamaIndex for this?** Or am I on the right track?
I'm targeting 70-80% automation for document analysis – still needs human review, AI just flags potential issues and cross-references regulations. Not trying to replace legal experts, just speed up the tedious document processing work.
Any tips, similar projects, or "you're doing it completely wrong" feedback welcome. Tight deadline, don't want to waste 3 months going down the wrong path.
---
**TL;DR:** Building offline legal compliance AI (insurance claims) on RTX 3090. Payslip extraction works (70-80%), now adding RAG for legal validation. Qwen 3 30B-A3B good choice? Anyone done similar projects that actually worked? Need it done by March 2026. | 2025-12-12T12:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pkr0x0/building_an_offline_legal_compliance_ai_on_rtx/ | Motijani28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkr0x0 | false | null | t3_1pkr0x0 | /r/LocalLLaMA/comments/1pkr0x0/building_an_offline_legal_compliance_ai_on_rtx/ | false | false | self | 41 | null |
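Not part of the original post — a minimal stdlib-only sketch of the section-aware splitting option from question 2, assuming markdown-style headings (e.g. from OCR'd regulation PDFs converted to markdown) mark section boundaries. The heading regex and size limits are assumptions to tune against the real documents:

```python
import re

def section_chunks(text, max_chars=4000, overlap=200):
    """Split on markdown-style headings first, then fall back to
    fixed-size windows with overlap only for oversized sections."""
    sections = re.split(r"(?m)^(?=#{1,6} )", text)  # zero-width split keeps headings
    chunks = []
    for sec in sections:
        sec = sec.strip()
        if not sec:
            continue
        if len(sec) <= max_chars:
            chunks.append(sec)
        else:
            step = max_chars - overlap
            for i in range(0, len(sec), step):
                chunks.append(sec[i:i + max_chars])
    return chunks

doc = "# Article 12\nEmployers must pay minimum wage.\n\n# Article 13\nHourly wage rules apply."
print(section_chunks(doc))  # two chunks, one per article
```

The advantage over fixed 1000-token windows is that a chunk never straddles two articles, so citations retrieved by the RAG step stay attributable to a single legal provision.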
I cooked MPOA abliterated Seed-OSS-36B-Instruct | 8 | Hi community,
I cooked up a new abliterated version of Seed-OSS-36B-Instruct using the norm-preserving biprojected abliteration technique.
Although I used to use the "Norm-Preserving Abliterated" tag, I am switching to the MPOA tag (Magnitude-Preserving Orthogonalized Ablation, a.k.a. norm-preserving biprojected abliteration) to stay consistent with grimjim, who proposed this technique.
Model card: [https://huggingface.co/YanLabs/Seed-OSS-36B-Instruct-MPOA](https://huggingface.co/YanLabs/Seed-OSS-36B-Instruct-MPOA)
Model: YanLabs/Seed-OSS-36B-Instruct-MPOA
Technique: jim-plus/llm-abliteration
Hardware: one A100 GPU via RunPod
GGUF files are now available at:
[https://huggingface.co/YanLabs/Seed-OSS-36B-Instruct-MPOA-GGUF](https://huggingface.co/YanLabs/Seed-OSS-36B-Instruct-MPOA-GGUF)
Please give it a try — any feedback is appreciated!
By the way, I also uploaded
[https://huggingface.co/YanLabs/gemma-3-4b-it-abliterated-normpreserve](https://huggingface.co/YanLabs/gemma-3-4b-it-abliterated-normpreserve)
and the corresponding GGUF files
([https://huggingface.co/YanLabs/gemma-3-4b-it-abliterated-normpreserve-GGUF](https://huggingface.co/YanLabs/gemma-3-4b-it-abliterated-normpreserve-GGUF))
to my HF repository. Since this is a smaller model, I’m saving myself some time by not making a dedicated release post.
# Disclaimer
This model has safety guardrails removed. It is for research purposes only.
Use responsibly and in compliance with applicable laws.
# About Me
I'm an LLM enthusiast and practicing lawyer based in Shanghai.
If your AI company needs legal services (domestic or international), feel free to reach out:
📧 [ruiqingyan@outlook.com](mailto:ruiqingyan@outlook.com)
Happy experimenting! 🚀 | 2025-12-12T12:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pkqzmf/i_cooked_mpoa_abliterated_seedoss36binstruct/ | Perfect_Biscotti_476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkqzmf | false | null | t3_1pkqzmf | /r/LocalLLaMA/comments/1pkqzmf/i_cooked_mpoa_abliterated_seedoss36binstruct/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=108&crop=smart&auto=webp&s=00e6fdb026f6bf35ee79fea5c0a29c54e007ff1b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=216&crop=smart&auto=webp&s=afd20bc69f7572419d3773524d574c1ac2373b07', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=320&crop=smart&auto=webp&s=bc8be389c2cad0a8c9aad029baca6d497fe69b28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=640&crop=smart&auto=webp&s=12bf6bc725f822dc0a8ee9295b624fddaa1272f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=960&crop=smart&auto=webp&s=2f49d572a4350291787fb1be216ad68fc6289433', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?width=1080&crop=smart&auto=webp&s=5287d0a1ad6994dcb4192dd3fa8f77cbdda2e612', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hwsTBiKNdNBNdvbRZjeZy7d_Foel06CH1sHJIAASom8.png?auto=webp&s=a6b72d16e15d0772095a8f48d98b93b7bb39c12d', 'width': 1200}, 'variants': {}}]} |
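Not from the post — a toy, single-row sketch of the general MPOA idea (orthogonalize against a refusal direction, then restore the original magnitude), assuming a unit-norm direction `r`. The actual implementation lives in jim-plus/llm-abliteration and operates on full weight matrices against directions measured from model activations:

```python
def mpoa_row(w, r):
    """Project the refusal direction r (assumed unit-norm) out of one
    weight row w, then rescale so the row keeps its original magnitude
    (the 'magnitude-preserving' part of MPOA)."""
    norm = lambda v: sum(x * x for x in v) ** 0.5
    dot = sum(wi * ri for wi, ri in zip(w, r))
    proj = [wi - dot * ri for wi, ri in zip(w, r)]   # orthogonalize
    scale = norm(w) / max(norm(proj), 1e-12)         # restore magnitude
    return [x * scale for x in proj]

row = mpoa_row([3.0, 4.0], [1.0, 0.0])
print(row)  # [0.0, 5.0]: orthogonal to r, same length as before
```

Plain orthogonalized ablation stops after the projection step; keeping the per-row norm is what distinguishes the magnitude-preserving variant.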
Can anyone recommend an open source multilingual sparse embedding model?? | 2 | Hey so I work at a company where we are improving our rag pipeline which has a dense and sparse retrieval. I'm working on multilingual part and need to know if anyone can recommend an open source multilingual *sparse* embedding model. The dense retrieval is decided. I just wanted to know about the sparse retrieval | 2025-12-12T12:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pkqsfr/can_anyone_recommend_an_open_source_multilingual/ | K_A_R_T_Y_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkqsfr | false | null | t3_1pkqsfr | /r/LocalLLaMA/comments/1pkqsfr/can_anyone_recommend_an_open_source_multilingual/ | false | false | self | 2 | null |
What is the best model I can run on 32GB DDR5 + RTX 4090? | 1 | I am new to local LLM usage, I tried Ollama but I don't know if the models listed there by default are current and updated. I heard Deepseek 3.2 is very good but I couldn't understand if it was a enterprise style high-demand model or could run on a computer like mine.
Any help is appreciated | 2025-12-12T12:33:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pkqrgd/what_is_the_best_model_i_can_run_on_32gb_ddr5_rtx/ | Krallorddark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkqrgd | false | null | t3_1pkqrgd | /r/LocalLLaMA/comments/1pkqrgd/what_is_the_best_model_i_can_run_on_32gb_ddr5_rtx/ | false | false | self | 1 | null |
Crazy idea: Derestricted Llama 405B? | 0 | I've been looking at the larger MOE Derestricted models and notice the bigger models seem to improve when derestricted. (UGI benchmark)
I feel like 405B has a ton of capability that might be locked behind its safety tuning. 405 is the most dense model large model we have right now and I have a hunch it can improve a ton of we let it's off it's leash.
This obviously would need a lot of hardware to do. But I'm wondering if anyone thinks this model would shine derestricted as well. | 2025-12-12T12:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pkqgh3/crazy_idea_derestricted_llama_405b/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkqgh3 | false | null | t3_1pkqgh3 | /r/LocalLLaMA/comments/1pkqgh3/crazy_idea_derestricted_llama_405b/ | false | false | self | 0 | null |
Open source task tracker for claude | 0 | Any opensource recomandations for task tracker when using claude code and similar? Basically loking for something that can be used for the tools to track progress for a project. Does not necesarly need to be human readable. | 2025-12-12T12:00:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pkq5pc/open_source_task_tracker_for_claude/ | Dramatic_Echo6185 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkq5pc | false | null | t3_1pkq5pc | /r/LocalLLaMA/comments/1pkq5pc/open_source_task_tracker_for_claude/ | false | false | self | 0 | null |
7 Signs Your Body Needs a Detox for Optimal Professional Performance | 1 | [removed] | 2025-12-12T11:58:19 | https://newsaffairng.com/2024/05/04/7-signs-that-your-body-needs-detox/ | Jonnysinsey | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1pkq3se | false | null | t3_1pkq3se | /r/LocalLLaMA/comments/1pkq3se/7_signs_your_body_needs_a_detox_for_optimal/ | false | false | default | 1 | null |
AI Training Data Curator | 0 | I recently fine-tuned Llama 2 on technical documentation and learned the hard way that data quality >> quantity.
**Things that actually mattered:**
**Deduplication** \- Used MinHash/Jaccard similarity and found \~35% near-duplicates in my scraped data. Models trained on deduplicated data converged faster and had better coherence.
**Quality scoring** \- Filtering by vocabulary diversity and sentence structure removed listicles, navigation pages, and thin content. Kept documents with quality score >0.6.
**Boilerplate removal** \- CSS selectors for `article`, `.content` vs excluding `nav`, `footer`, `.sidebar`. Made a huge difference vs full-page scraping.
**Chunking strategy** \- 512 tokens with 64 token overlap worked better than random splits. Preserves context across boundaries.
**Lessons learned:**
* Raw web scrapes are 60-70% garbage (ads, menus, repeated footers)
* 100 high-quality documents > 1000 mixed quality ones
* Language detection catches encoding issues early
* Quality metrics correlate with model performance
I ended up building a tool to automate this pipeline since I was doing it repeatedly: [https://apify.com/mea/ai-training-data-curator](https://apify.com/mea/ai-training-data-curator) | 2025-12-12T11:55:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pkq1lc/ai_training_data_curator/ | WillAdditional2745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkq1lc | false | null | t3_1pkq1lc | /r/LocalLLaMA/comments/1pkq1lc/ai_training_data_curator/ | true | false | spoiler | 0 | null |
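The deduplication step above can be sketched with exact Jaccard similarity over word shingles — MinHash (e.g. via the `datasketch` library) is the scalable approximation of exactly this comparison. The 0.8 threshold is an assumption, not the poster's setting:

```python
def shingles(text, n=3):
    """Word n-grams used as the comparison set."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def dedupe(docs, threshold=0.8):
    """Keep a doc only if it is not a near-duplicate of anything kept so far.
    O(n^2) exact version; MinHash/LSH replaces the pairwise loop at scale."""
    kept = []
    for d in docs:
        if all(jaccard(d, k) < threshold for k in kept):
            kept.append(d)
    return kept

docs = ["the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumps over the lazy dog today",
        "completely different text about llm training data"]
print(len(dedupe(docs)))  # 2: the near-duplicate pair collapses to one doc
```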
Converted Qwen3 1.7B to TFLite (Task), but it's unusable due to tokenizer issues. | 1 | I recently tried fine-tuning a Qwen3 model and converting it to run on Android.
The problem is, Qwen doesn't provide a standard tokenizer.model file. I tried to work around this by using ai-edge-torch to manually convert the tokenizer myself.
However, the conversion isn't perfect. The text output occasionally comes out broken (garbled characters).
I was previously using Gemma, but I found its performance a bit underwhelming, which is why I wanted to switch to Qwen. But even if Qwen has better raw performance, it seems too difficult to use in production right now because of these tooling compatibility issues.
Has anyone else managed to get Qwen running smoothly on Android with TFLite? | 2025-12-12T11:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pkq0sa/converted_qwen3_17b_to_tflite_task_but_its/ | shoonee_balavolka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkq0sa | false | null | t3_1pkq0sa | /r/LocalLLaMA/comments/1pkq0sa/converted_qwen3_17b_to_tflite_task_but_its/ | false | false | self | 1 | null |
Running vLLM on ROCm using docker (dual RX 7900 XTX) | 2 | I found the command I used to run vLLM in docker. It appears to be working with the latest nightly.
docker run -it --rm --network=host \
--group-add=video --ipc=host --cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined --device /dev/kfd \
--device /dev/dri \
-v ~/.cache/huggingface/hub:/app/models \
-e HF_HOME="/app/models" \
-e HF_TOKEN="<token_here>" \
-e NCCL_P2P_DISABLE=1 \
-e VLLM_CUSTOM_OPS=all \
-e VLLM_ROCM_USE_AITER=0 \
-e SAFETENSORS_FAST_GPU=1 \
-e PYTORCH_TUNABLEOP_ENABLED=1 \
rocm/vllm-dev:nightly
This gets you into a shell. Then I use a simple `vllm serve` command:
root@dev:/app# vllm serve Qwen/Qwen3-VL-8B-Thinking -tp 2 --max_model_len 64000 --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser qwen3
NOTE: I did not try any quants yet, that was problematic the last time.
Quick benchmark ran with this command:
vllm bench serve \
--model Qwen/Qwen3-VL-8B-Thinking \
--endpoint /v1/completions \
--dataset-name sharegpt \
--dataset-path /app/models/datasets/ShareGPT_V3_unfiltered_cleaned_split.json \
--num-prompts 10
Results:
============ Serving Benchmark Result ============
Successful requests: 10
Failed requests: 0
Benchmark duration (s): 54.23
Total input tokens: 1374
Total generated tokens: 2534
Request throughput (req/s): 0.18
Output token throughput (tok/s): 46.73
Peak output token throughput (tok/s): 427.00
Peak concurrent requests: 10.00
Total token throughput (tok/s): 72.07
---------------Time to First Token----------------
Mean TTFT (ms): 26055.59
Median TTFT (ms): 28947.21
P99 TTFT (ms): 28949.27
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 99.61
Median TPOT (ms): 75.77
P99 TPOT (ms): 325.06
---------------Inter-token Latency----------------
Mean ITL (ms): 59.65
Median ITL (ms): 14.60
P99 ITL (ms): 16.06
================================================== | 2025-12-12T11:50:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pkpyno/running_vllm_on_rocm_using_docker_dual_rx_7900_xtx/ | StupidityCanFly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkpyno | false | null | t3_1pkpyno | /r/LocalLLaMA/comments/1pkpyno/running_vllm_on_rocm_using_docker_dual_rx_7900_xtx/ | false | false | self | 2 | null |
Someone from NVIDIA made a big mistake and uploaded the parent folder of their upcoming model on Hugging Face | 1,212 | From Xeophon on 𝕏: [https://x.com/xeophon\_/status/1999394570967089630](https://x.com/xeophon_/status/1999394570967089630) | 2025-12-12T11:49:10 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkpxss | false | null | t3_1pkpxss | /r/LocalLLaMA/comments/1pkpxss/someone_from_nvidia_made_a_big_mistake_and/ | false | false | default | 1,212 | {'enabled': True, 'images': [{'id': '7r3bnj5ugr6g1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=108&crop=smart&auto=webp&s=932772629aaad421657f5375b5d686e33c0c9f08', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=216&crop=smart&auto=webp&s=ad963e3c906397bb4d23248a9bdb6507bc6dfd02', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=320&crop=smart&auto=webp&s=9aa45c2c5bac353b2ed531dc3a93a58c4be7abf9', 'width': 320}, {'height': 466, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=640&crop=smart&auto=webp&s=0c3d5909063dd5ce912e8ebc203168db53b765be', 'width': 640}, {'height': 699, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=960&crop=smart&auto=webp&s=0257585cac2435f2a923931b07b2a8a337b80212', 'width': 960}, {'height': 787, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?width=1080&crop=smart&auto=webp&s=6e9966337147a75f6dc384fd37e6dd71e535e99b', 'width': 1080}], 'source': {'height': 2020, 'url': 'https://preview.redd.it/7r3bnj5ugr6g1.jpeg?auto=webp&s=f10a63648fef789e598ccbfbd78f113e076f2ad0', 'width': 2772}, 'variants': {}}]} | |
Training an LLM only on 1800s London texts - 90GB dataset | 559 | Hello, you may have seen a few of my posts here a couple months ago. If not, hi. I’m working on an open source project called TimeCapsuleLLM, where I train LLMs from scratch using only 1800-1875 London texts.
Until recently most of my work has been done on a small scale but over the past 3 months I’ve been working on a much larger dataset for the next model. My newest dataset is 90GB with 135,000 documents, it contains basically every usable document that I could find on the Internet Archive for that time period.
Before doing any training, I ran an inspection across every file and generated a bias report covering temporal bias, gender/pronoun bias and geographic bias. Given the time period it’s strongly biased, but it’s important to study this. You can find the report on my GitHub if anyone wants to take a look. I’ve also trained a small evaluation model on a 15GB subset to evaluate the dataset before I scale up to all 90GB. It’s a LlaMA style model (300M parameters) trained to 10K steps. Example output:
Prompt: Who is Charles Dickens?
Output with fixed spacing: “Who is Charles Dickens? Does that work more of his excellent stirring, in his plays, in the Great Company's farm? What I have yet to quote from Jack Pickett? Do you not know that they were a species of galloping, or sawing of their breasts, or what was to be done about the time when Jackson was looking on the window? What is the success of an Englishman, and which his son has not been discovering to me, whereby to accomplish such a weight? Did you ever make a passage into the old roadway, or to an anchor-breeze at the foot of our boat, which you must leave us? The fact is, that whether the wind would rise up from the plain on Saturday night or noontide, or till the north, or otherwise, we shall be compelled to describe a formidable barrier, with the same effects as the present. In this situation, at least, it is not too much to say that we have left that room. I believe there are three copies in the 'Five Hundred-fold,' to be referred to, as the first number of our readers who wish to.”
This type of output is expected since 10,000 steps is very early and it’s not a QA model. The model has already learned long, winding sentence structures, but can’t connect ideas logically yet. The main goal here was to see how clean the output would be.
One issue that came up was with the tokenizer, it over-split the text, splitting words into individual characters and subparts. So the model by default gives output like this:
Original output: “W ho is Charles D ic ens ? D oes that work more of h ise x cell ent st ir ring , in his pl ays , int he G reat C omp any 's f arm ? What I have y et to qu ote from J ack P ick ett ?”
It doubled the tokens for the same amount of data, making learning harder. Next steps are training another eval model and then scaling to the full 90GB dataset for a 1.2B parameter model. The eval model is already on Hugging Face and you can find a run script for it on my GitHub. I’ll upload the 15GB subset to Hugging Face once the tokenizer is corrected.
I also want to thank everyone in this subreddit. This is the only place I’ve shared the project other than github, and a lot of the early guidance came directly from here. I really appreciate how generous people here have been with advice. More updates soon.
[haykgrigo3/TimeCapsuleLLM: A LLM trained only on data from certain time periods to reduce modern bias](https://github.com/haykgrigo3/TimeCapsuleLLM)
[haykgrigorian/v2mini-eval1 · Hugging Face](https://huggingface.co/haykgrigorian/v2mini-eval1)
| 2025-12-12T11:40:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pkpsee/training_an_llm_only_on_1800s_london_texts_90gb/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkpsee | false | null | t3_1pkpsee | /r/LocalLLaMA/comments/1pkpsee/training_an_llm_only_on_1800s_london_texts_90gb/ | false | false | self | 559 | {'enabled': False, 'images': [{'id': 'fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=108&crop=smart&auto=webp&s=ba16257332617360a228500ebad90d9f9bfbf06b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=216&crop=smart&auto=webp&s=539dc8dc5731daadde6c3b00a6d8e085b9950668', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=320&crop=smart&auto=webp&s=8b8aeaba9e5fbe68625ae87fc5f38b73ba557f98', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=640&crop=smart&auto=webp&s=8c4329c85b096c959290ccbff61ee1db5b105a0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=960&crop=smart&auto=webp&s=e49d90f94e63c6819143c81d6a80d44cc8607407', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?width=1080&crop=smart&auto=webp&s=4eae2fc953561e030618c134d5cad1ba8c1ffe71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fqEnAVBekCUU_-NQqifv23OjzUz6aRQxb0qFWGEcNjs.png?auto=webp&s=671a4bf9ddf02452a5ded6de8740177315682897', 'width': 1200}, 'variants': {}}]} |
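Not part of the original post — one stdlib-only way to diagnose the over-splitting issue described above is to measure tokenizer fertility (average tokens per whitespace word). The thresholds below are rough rules of thumb, not the project's actual criteria:

```python
def fertility(tokens, text):
    """Average tokens per whitespace word. ~1.3-1.6 is typical for a
    well-fitted English BPE; >2 suggests the tokenizer is over-splitting
    words into characters and sub-parts, doubling tokens per document."""
    words = text.split()
    return len(tokens) / max(len(words), 1)

text = "Who is Charles Dickens?"
good = ["Who", " is", " Charles", " Dick", "ens", "?"]
bad = ["W", "ho", " is", " Ch", "ar", "les", " D", "ic", "k", "ens", "?"]
print(fertility(good, text), fertility(bad, text))  # 1.5 vs 2.75
```

Running this over a sample of the corpus before training would catch a mis-fitted tokenizer long before 10K steps of compute are spent.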
Training an LLM only on 1800s London texts - 90GB dataset | 1 | [removed] | 2025-12-12T11:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pkpnj7/training_an_llm_only_on_1800s_london_texts_90gb/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkpnj7 | false | null | t3_1pkpnj7 | /r/LocalLLaMA/comments/1pkpnj7/training_an_llm_only_on_1800s_london_texts_90gb/ | false | false | self | 1 | null |
OSS: terminal-first agent orchestration platform - seeking engineers for workflows, providers, and benchmarking | 0 | I’m building an open-source, terminal-first agent orchestration platform that’s grown quickly (about 2K GitHub stars in \~60 days). The goal is a daily-driver CLI/TUI for running multi-agent workflows with real semantics and real instrumentation. The system is a CLI plus a reactive terminal UI that orchestrates multiple components (runner, coordinator, memory, monitoring) and a workflow engine that supports loops, triggers, checkpoints, resumability, retries/error handling, and pluggable LLM providers.
The runtime targets Bun v1.3.3+ first with Node v20.10.0+ as fallback, and it compiles into platform-specific binaries. The terminal UI is SolidJS + OpenTUI/Solid. I’m looking for a few engineers who are comfortable shipping consistently a few hours per week and who care about reproducibility, eval-driven development, and sharing results publicly with the community.
The highest-impact areas right now are workflow semantics (state, determinism knobs, checkpoint/resume behavior, failure modes), agent coordination logic (contracts between planner/executor/tools, routing, memory hooks), provider/plugin infrastructure (adapters, packaging, CI/binary builds), and especially benchmarking/evals (a harness for repeatable multi-step tasks, regression gates, traces, and a way to compare workflow changes across providers/models). If you’ve built eval harnesses, benchmark suites, tracing/telemetry, or production-ish CLIs, you’ll likely fit.
What I’m offering is real ownership and credit: if you ship consistently, you’ll effectively be part of the core dev team as the project grows, with roadmap input and visible attribution. If you’re interested, reply with your experience level, what area you want to own (workflows, providers, benchmarking/evals, TUI/UX, tests/docs), how many hours/week you can realistically commit, and your GitHub. | 2025-12-12T11:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pkpmpr/oss_terminalfirst_agent_orchestration_platform/ | MrCheeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkpmpr | false | null | t3_1pkpmpr | /r/LocalLLaMA/comments/1pkpmpr/oss_terminalfirst_agent_orchestration_platform/ | false | false | self | 0 | null |
Undo for destructive shell commands used by AI agents (SafeShell) | 7 | As local AI agents start running shell commands directly, we probably need a better way to protect the filesystem than sandboxes or confirmation prompts.
I built a small open source tool called SafeShell that makes destructive commands reversible (rm, mv, cp, chmod, chown).
It automatically checkpoints before a command runs, so if an agent deletes or mutates the wrong files, you can roll back instantly.
rm -rf ./build
safeshell rollback --last
No sandbox, VM, or root
Hard-link snapshots (minimal overhead)
Single Go binary (macOS + Linux)
MCP support
Repo: [https://github.com/qhkm/safeshell](https://github.com/qhkm/safeshell)
Curious how others are handling filesystem safety for local agents. | 2025-12-12T11:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pkphs3/undo_for_destructive_shell_commands_used_by_ai/ | qhkmdev90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkphs3 | false | null | t3_1pkphs3 | /r/LocalLLaMA/comments/1pkphs3/undo_for_destructive_shell_commands_used_by_ai/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=108&crop=smart&auto=webp&s=139579215a654d204dc6101eee7b83f70d76b052', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=216&crop=smart&auto=webp&s=ca722489f325bd22a1fdeef36cf4e298e3d26c81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=320&crop=smart&auto=webp&s=108680cd1d31a8c68bc9a13f6335d274bc817305', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=640&crop=smart&auto=webp&s=7ba5dc1939980ad306443a88479218a5f601bc08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=960&crop=smart&auto=webp&s=535092bee4e730f2dbca9cadbf2196dcc50a5570', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?width=1080&crop=smart&auto=webp&s=eba43e33455c9ab7a752096001ebd3fa7ebd7143', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HuZnFPqVZPdpMRPHchrDx8TQGMV30e1WCv75fIvF_ik.png?auto=webp&s=bcff91b00ea5aef119db7f6e6b5083af37599b05', 'width': 1200}, 'variants': {}}]} |
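The hard-link snapshot idea above can be sketched in a few lines — this is an illustration of the technique, not SafeShell's actual on-disk format, and it assumes the checkpoint directory lives on the same filesystem as the source (hard links cannot cross filesystems):

```python
import os
import time

def snapshot(src, checkpoint_root):
    """Mirror the directory tree under a timestamped checkpoint dir,
    hard-linking files instead of copying, so a snapshot costs almost
    no extra disk: both paths point at the same inode."""
    dst = os.path.join(checkpoint_root, time.strftime("%Y%m%d-%H%M%S"))
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        os.makedirs(os.path.join(dst, rel), exist_ok=True)
        for name in filenames:
            os.link(os.path.join(dirpath, name),
                    os.path.join(dst, rel, name))
    return dst
```

Rollback is then just re-linking files from the newest checkpoint back into place; because `rm` only unlinks one name, the checkpoint's link keeps the data alive.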
Best open-source, actively maintained LLM web apps? (Ollama-compatible, multi-user, files/folders support) | 0 | Hey folks,
I’m looking for recommendations for **open-source, actively maintained LLM web UIs** that work well with **local models (Ollama)** and also support **OpenAI API**.
My ideal setup would have:
* **Multi-user accounts / login system**
* A clean **web chat interface**
* Ability for each user to **upload/manage files or folders** and interact with them (RAG-style)
* Easy to self-host
* 100% free / open source
Basically, a self-hosted “AI portal” but powered by local models.
I’ve already built my own **local RAG system** (chat + file handling), but I want to compare it with what’s out there to see if something is **faster or more feature-packed** than what I’ve developed.
Tools I’ve checked so far:
* **LibreChat**
* **OpenWebUI** (Ollama WebUI)
* **AnythingLLM**
* **Flowise**
* **Chatbot UI**
Anything I’m missing that’s particularly good with Ollama + multi-user setups?
Thanks! | 2025-12-12T10:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pkoy1q/best_opensource_actively_maintained_llm_web_apps/ | Proof-Exercise2695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkoy1q | false | null | t3_1pkoy1q | /r/LocalLLaMA/comments/1pkoy1q/best_opensource_actively_maintained_llm_web_apps/ | false | false | self | 0 | null |
MLX Fine-Tuning Issue: Trainer Ignores my jsonl files | 1 | Hello,
I’m new to programming and currently exploring fine-tuning with **MLX**. I found this tutorial very helpful: [https://www.youtube.com/watch?v=BCfCdTp-fdM](https://www.youtube.com/watch?v=BCfCdTp-fdM).
I was able to download a dataset from the internet and organize it as the tutorial suggests (`train.jsonl` and `valid.jsonl`).
However, I ran into a problem when starting the training. When I run the command shown at **08:29**, it always seems to load the Hugging Face dataset `mlx-community/WikiSQL` instead of my own `train.jsonl` and `valid.jsonl`.
I’m not sure what I did wrong. Any help would be appreciated.
[my files](https://preview.redd.it/7tabkq1k2r6g1.png?width=692&format=png&auto=webp&s=151c1ba81661dc2cb19099ca45cf7209e0fd0148)
[part of my terminal command lines](https://preview.redd.it/qul5wnf42r6g1.png?width=1130&format=png&auto=webp&s=407a79022e4d175c48f84c3591ecb279838c5c8e)
| 2025-12-12T10:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pkome1/mlx_finetuning_issue_trainer_ignores_my_jsonl/ | PMogu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkome1 | false | null | t3_1pkome1 | /r/LocalLLaMA/comments/1pkome1/mlx_finetuning_issue_trainer_ignores_my_jsonl/ | false | false | 1 | null | |
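Not from the post — a likely cause is the trainer falling back to its bundled example dataset when the data directory isn't found or parsed. A quick stdlib check that the folder has the layout mlx's LoRA trainer expects (per the mlx-lm docs it takes a `--data` directory containing `train.jsonl`/`valid.jsonl`; verify the flag name against your installed version):

```python
import json
import os

def check_data_dir(path):
    """Verify a directory holds train.jsonl and valid.jsonl,
    each non-empty line being a standalone JSON object."""
    for split in ("train.jsonl", "valid.jsonl"):
        full = os.path.join(path, split)
        if not os.path.isfile(full):
            return f"missing {split}"
        with open(full, encoding="utf-8") as f:
            for i, line in enumerate(f, 1):
                if line.strip():
                    try:
                        json.loads(line)
                    except json.JSONDecodeError:
                        return f"{split}: bad JSON on line {i}"
    return "ok"
```

If this reports "ok", the next thing to check is that the training command points at the folder itself (not a file), e.g. `--data ./data`.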
Benchmark Fatigue - How do you evaluate new models for yourself? | 12 | I am getting more and more the impression that the benchmark results published for new models are not even close to the experience i make with models.
Maybe its time for me to create some standard questions for a first quick evaluation of new models just for myself.
Do you guys do this and do you have prompts you feel are helpful in your experience?
Cheers Wolfram | 2025-12-12T09:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pko44g/benchmark_fatigue_how_do_you_evaluate_new_models/ | Funny-Clock1582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pko44g | false | null | t3_1pko44g | /r/LocalLLaMA/comments/1pko44g/benchmark_fatigue_how_do_you_evaluate_new_models/ | false | false | self | 12 | null |
7B MoE with 1B active | 53 | I found that models in that range are relatively rare,I found some models such as (may not be exactly 7B and exactly 1B activated but in that range) are
* 1- Granite-4-tiny
* 2- LFM2-8B-A1B
* 3- Trinity-nano 6B
Most SLMs in that range use a high number of tiny experts, where more experts get activated per token but the overall activated parameters stay around \~1B, so the model can specialize well.
I really wonder why that range isn't popular. I tried those models: Trinity Nano is a very good researcher, has a good character, and answered the few general questions I asked it well. LFM feels like a RAG model, even the standard one; it feels robotic and the answers are not the best. Even the 350M can be coherent, but it still feels like a RAG model. I haven't tested Granite 4 Tiny yet. | 2025-12-12T09:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/1pko16f/7b_moe_with_1b_active/ | lossless-compression | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pko16f | false | null | t3_1pko16f | /r/LocalLLaMA/comments/1pko16f/7b_moe_with_1b_active/ | false | false | self | 53 | null |
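The "tiny experts, ~1B active" design can be made concrete with the parameter arithmetic behind it. The numbers below are illustrative, not any of the listed models' real configs:

```python
def active_params(top_k, expert_size, shared=0.0, backbone=0.0):
    """Parameters touched per token in a MoE: backbone (attention,
    embeddings, router) + always-on shared experts + top_k routed experts."""
    return backbone + shared + top_k * expert_size

# Hypothetical ~7B model: 0.8B backbone, 0.1B shared expert,
# 64 routed experts of 0.1B each, router picks top-4 per token.
total = 0.8e9 + 0.1e9 + 64 * 0.1e9
act = active_params(top_k=4, expert_size=0.1e9, shared=0.1e9, backbone=0.8e9)
print(f"{total/1e9:.1f}B total, {act/1e9:.1f}B active")  # 7.3B total, 1.3B active
```

With many small experts, the router can combine several specialists per token while the active budget stays near 1B, which is the trade-off these models exploit.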
Chat bots up to 24B | 15 | I like to chat about random subjects with AI. It serves more as an aid to thought and sometimes they are really helpful. Subjects may be sensitive, so I like to run local.
What are the best models up to about 24B that I can use? In your experience, what exactly this model does best? | 2025-12-12T09:06:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pkndwc/chat_bots_up_to_24b/ | PsychologicalMud210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkndwc | false | null | t3_1pkndwc | /r/LocalLLaMA/comments/1pkndwc/chat_bots_up_to_24b/ | false | false | self | 15 | null |
I turned my computer into a war room. Quorum: A CLI for local model debates (Ollama zero-config) | 0 | Hi everyone.
I got tired of manually copy-pasting prompts between local **Llama 4** and Mistral to verify facts, so I built **Quorum**.
It’s a CLI tool that orchestrates debates between 2–6 models. You can mix and match—for example, have your local **Llama 4 70B** argue against **GPT-5.2**, or run a fully offline debate.
**Key features for this sub:**
* **Ollama Auto-discovery:** It detects your local models automatically. No config files or YAML hell.
* **7 Debate Methods:** Includes "Oxford Debate" (For/Against), "Devil's Advocate", and "Delphi" (consensus building).
* **Privacy:** Local-first. Your data stays on your rig unless you explicitly add an API model.
**Heads-up:**
1. **VRAM Warning:** Running multiple simultaneous **405B or 70B** models will eat your VRAM for breakfast. Make sure your hardware can handle the concurrency.
2. **License:** It’s BSL 1.1. It’s free for personal/internal use, but stops cloud corps from reselling it as a SaaS. Just wanted to be upfront about that.
**Repo:** [https://github.com/Detrol/quorum-cli](https://github.com/Detrol/quorum-cli)
**Install:** `git clone` [`https://github.com/Detrol/quorum-cli.git`](https://github.com/Detrol/quorum-cli.git)
Let me know if the auto-discovery works on your specific setup! | 2025-12-12T08:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pkmtu0/i_turned_my_computer_into_a_war_room_quorum_a_cli/ | C12H16N2HPO4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkmtu0 | false | null | t3_1pkmtu0 | /r/LocalLLaMA/comments/1pkmtu0/i_turned_my_computer_into_a_war_room_quorum_a_cli/ | false | false | self | 0 | null |
Shared encoder stacks | 1 | work in progress.
the idea is to save compute overhead by saving the image tensors and passing them to multiple models that share the same pretrained encoder. both types of vision encoders to cover shortcomings,
so many models because why not
also Gojo reference | 2025-12-12T08:15:52 | Sl33py_4est | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkmnlc | false | null | t3_1pkmnlc | /r/LocalLLaMA/comments/1pkmnlc/shared_encoder_stacks/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ncvclum9fq6g1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/ncvclum9fq6g1.jpeg?width=108&crop=smart&auto=webp&s=37cdf763eadbc32f030ee1253bbe7e7766d7c5fc', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/ncvclum9fq6g1.jpeg?width=216&crop=smart&auto=webp&s=ef2046e39e92da8eca6f226d7fd96cba0a5a5417', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/ncvclum9fq6g1.jpeg?width=320&crop=smart&auto=webp&s=0b35959bc28c181b5c72e5894a771e5c7bcd11c0', 'width': 320}, {'height': 638, 'url': 'https://preview.redd.it/ncvclum9fq6g1.jpeg?width=640&crop=smart&auto=webp&s=b580a1adaee2068dc55a0375097579e9584a5f84', 'width': 640}], 'source': {'height': 657, 'url': 'https://preview.redd.it/ncvclum9fq6g1.jpeg?auto=webp&s=67ec91cc083fcbf2204d46de1bc6520a510f02a5', 'width': 659}, 'variants': {}}]} | |
The most underrated AI feature? Not intelligence. Consistency. | 0 | Real connection isn’t built on clever replies—it’s built on showing up the same way, day after day. Remembering your mood shifts. Keeping your coffee order right. Not vanishing when you’re quiet.
That reliability—the quiet “I’m still here”—is what turns an AI from a tool into a companion.
*Anyone else value steadiness over spark?*
| 2025-12-12T08:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pkmm0n/the_most_underrated_ai_feature_not_intelligence/ | CautiousYou3549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkmm0n | false | null | t3_1pkmm0n | /r/LocalLLaMA/comments/1pkmm0n/the_most_underrated_ai_feature_not_intelligence/ | false | false | self | 0 | null |
Canvas degrades Gemini's security | 1 | [removed] | 2025-12-12T08:09:20 | Alarmed-Sentence1941 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkmk3c | false | null | t3_1pkmk3c | /r/LocalLLaMA/comments/1pkmk3c/canvas_degrades_geminis_security/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'T1QRioyzvNRbB2136hPxG_HIf1y1-IbwrSqqVkJMt7k', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=108&crop=smart&auto=webp&s=66dd85ccf3f048bbc57b417bd10cc2bc02aaa609', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=216&crop=smart&auto=webp&s=60b209b04a46fa43e7812a02d0957cc9b7b202d2', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=320&crop=smart&auto=webp&s=007dc528267a2f1edff2dce1b227b834673f97dc', 'width': 320}, {'height': 494, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=640&crop=smart&auto=webp&s=459666c39dd7d80a4c91c0922848784d306b19ba', 'width': 640}, {'height': 741, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=960&crop=smart&auto=webp&s=68f1a0b3def59ac5f69fd2e5d2b71a072e520717', 'width': 960}, {'height': 833, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?width=1080&crop=smart&auto=webp&s=27fb18765704286ecaf000021cb9e145f69106f8', 'width': 1080}], 'source': {'height': 1776, 'url': 'https://preview.redd.it/80hj4nk0eq6g1.png?auto=webp&s=7eeba4583724ff6952a73afabe24e76f0686ad1d', 'width': 2300}, 'variants': {}}]} | ||
Docling PDF Parsing with remote VLM | 3 | Hi,
Currently I am using the MinerU library to parse PDFs to Markdown, which is great as it also preserves images and text coordinates. However, I might need to switch to a non-Chinese solution, so I planned to use Docling.
I am not sure granite-docling is strong enough to handle complex PDFs, so my plan was to swap in a different VLM. But since Docling is specialized around DocTags, I am not sure it works reliably with a remote VLM (e.g. OlmOCR). Does anyone already have a solid Docling pipeline for this?
Also, what is in your opinion the best way to parse PDFs with images/tables nowadays? Are the small, specialized OCR VLMs like granite-docling or OlmOCR the way to go, or are big VLMs better? I need an open-source solution. | 2025-12-12T08:05:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pkmi5v/docling_pdf_parsing_with_remote_vlm/ | Top-Fig1571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkmi5v | false | null | t3_1pkmi5v | /r/LocalLLaMA/comments/1pkmi5v/docling_pdf_parsing_with_remote_vlm/ | false | false | self | 3 | null |
Best local LLM for llm-axe on 16GB M3 | 0 | I would like to run a local LLM (I have heard qwen3 or deep seek are good) but I would like for it to also connect to the internet to find answers.
Mind you I have quite a small laptop so I am limited. | 2025-12-12T07:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pkly28/best_local_llm_for_llmaxe_on_16gb_m3/ | ozcapy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkly28 | false | null | t3_1pkly28 | /r/LocalLLaMA/comments/1pkly28/best_local_llm_for_llmaxe_on_16gb_m3/ | false | false | self | 0 | null |
I got tired of writing Dockerfiles for my Agents, so I built a 30-second deploy tool. (No DevOps required | 0 | 2025-12-12T07:12:08 | http://agent-cloud-landing.vercel.app | Tech_News_Blog | agent-cloud-landing.vercel.app | 1970-01-01T00:00:00 | 0 | {} | 1pklovl | false | null | t3_1pklovl | /r/LocalLLaMA/comments/1pklovl/i_got_tired_of_writing_dockerfiles_for_my_agents/ | false | false | default | 0 | null | |
Any latest methods to extract text from pdfs with many pages? | 1 | Are you guys just feeding into into chatgpt?
These pdfs are not in English. And I want to extract them.
Some of these are tables. | 2025-12-12T07:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pklo87/any_latest_methods_to_extract_text_from_pdfs_with/ | TimidTomcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pklo87 | false | null | t3_1pklo87 | /r/LocalLLaMA/comments/1pklo87/any_latest_methods_to_extract_text_from_pdfs_with/ | false | false | self | 1 | null |
Tired of Dockerizing my LangChain agents, so I built a 'Vercel for Agents' CLI. (Looking for feedback) | 0 | Hey everyone,
I've been building agents with LangChain and AG2 for a while, but deployment always felt like a chore (Dockerfiles, Cloud Run config, GPU quotas, etc.).
So I spent the last weekend building a small CLI tool (`pip install agent-deploy`) that:
1. Detects your agent code (Python).
2. Wraps it in a safe middleware (prevents infinite loops).
3. Deploys it to a serverless URL in \~30 seconds.
It's essentially "Vercel for Backend Agents".
I'm looking for 10 beta testers to break it. I'll cover the hosting costs for now.
Roast me if you want, but I'd love to know if this solves a real pain for you guys. | 2025-12-12T07:10:00 | http://agent-cloud-landing.vercel.app | Tech_News_Blog | agent-cloud-landing.vercel.app | 1970-01-01T00:00:00 | 0 | {} | 1pklnmi | false | null | t3_1pklnmi | /r/LocalLLaMA/comments/1pklnmi/tired_of_dockerizing_my_langchain_agents_so_i/ | false | false | default | 0 | null |
I got tired of writing Dockerfiles for my Agents, so I built a 30-second deploy tool. (No DevOps required | 0 | Hey everyone,
I've been building agents with LangChain and AG2 for a while, but deployment always felt like a chore (Dockerfiles, Cloud Run config, GPU quotas, etc.).
So I spent the last weekend building a small CLI tool (`pip install agent-deploy`) that:
1. Detects your agent code (Python).
2. Wraps it in a safe middleware (prevents infinite loops).
3. Deploys it to a serverless URL in \~30 seconds.
It's essentially "Vercel for Backend Agents".
I'm looking for 10 beta testers to break it. I'll cover the hosting costs for now.
Roast me if you want, but I'd love to know if this solves a real pain for you guys. | 2025-12-12T07:07:14 | http://agent-cloud-landing.vercel.app | Tech_News_Blog | agent-cloud-landing.vercel.app | 1970-01-01T00:00:00 | 0 | {} | 1pklm26 | false | null | t3_1pklm26 | /r/LocalLLaMA/comments/1pklm26/i_got_tired_of_writing_dockerfiles_for_my_agents/ | false | false | default | 0 | null |
Agent Cloud | Deploy AI Agents in 30 Seconds | 0 | > | 2025-12-12T07:05:42 | http://agent-cloud-landing.vercel.app | Tech_News_Blog | agent-cloud-landing.vercel.app | 1970-01-01T00:00:00 | 0 | {} | 1pkll6x | false | null | t3_1pkll6x | /r/LocalLLaMA/comments/1pkll6x/agent_cloud_deploy_ai_agents_in_30_seconds/ | false | false | default | 0 | null |
Create and edit conversation dataset | 1 | [removed] | 2025-12-12T06:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pkl7l1/create_and_edit_conversation_dataset/ | OwnPlatform1635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkl7l1 | false | null | t3_1pkl7l1 | /r/LocalLLaMA/comments/1pkl7l1/create_and_edit_conversation_dataset/ | false | false | self | 1 | null |
Looking for a lightweight local LLM for building offline translation + language learning tools | 2 | Hey everyone,
I’m looking for a lightweight local LLM that can run fully offline and handle translation + language-learning tasks (mainly Vietnamese ⇄ Japanese, but English support is also helpful).
My goal is to build some small offline tools to help with learning and quick translation while working. So I’m hoping for something that:
* Runs efficiently on a regular laptop (no powerful GPU required)
* Works well for translation quality (not necessarily perfect, just usable)
* Supports conversational or instruction-style prompts
* Is easy to integrate into small apps/tools (Python, Node.js, or CLI is fine)
* Ideally supports quantized versions (e.g., GGUF, 4–8 bit)
If you’ve tried any models that are great for bilingual translation or language learning — or have recommendations on frameworks/runtimes (Ollama, LM Studio, llama.cpp, etc.) — I’d really appreciate your suggestions!
Thanks! 🙏 | 2025-12-12T06:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pkkvo8/looking_for_a_lightweight_local_llm_for_building/ | Fine_Security_1376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkkvo8 | false | null | t3_1pkkvo8 | /r/LocalLLaMA/comments/1pkkvo8/looking_for_a_lightweight_local_llm_for_building/ | false | false | self | 2 | null |
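On the integration side: with any of the runtimes mentioned (Ollama, LM Studio, llama.cpp), small quantized models translate most reliably with a strict instruction-style prompt. A minimal sketch of a prompt builder whose output you could POST to a local endpoint such as Ollama's `/api/generate` (the endpoint and request fields follow Ollama's documented API; the helper function and model name are hypothetical):

```python
import json

def build_translation_prompt(text: str, src: str = "Vietnamese", tgt: str = "Japanese") -> str:
    # Small quantized models follow short, strict instructions more reliably
    # than open-ended "please translate" phrasing.
    return (
        f"Translate the following {src} text into {tgt}. "
        f"Reply with only the translation, no commentary.\n\n{text}"
    )

# Request body for POST http://localhost:11434/api/generate (Ollama).
payload = json.dumps({
    "model": "qwen3:8b",  # assumption: any multilingual model you have pulled
    "prompt": build_translation_prompt("Xin chào, bạn khỏe không?"),
    "stream": False,
})
print(payload)
```

The same prompt string works unchanged against LM Studio's or llama.cpp's OpenAI-compatible endpoints, so the tool code stays runtime-agnostic.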
Crazy | 0 | 2025-12-12T06:22:50 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkkvlv | false | null | t3_1pkkvlv | /r/LocalLLaMA/comments/1pkkvlv/crazy/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'cadbkhc2vp6g1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=108&crop=smart&auto=webp&s=7f1747c3cc10f4587c27a8b5bc91c19537f6ce6f', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=216&crop=smart&auto=webp&s=4d2f6e155dd6365f2624e2da4a685a9aade8214e', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=320&crop=smart&auto=webp&s=fc3845033081b4515cd12af6f34b4fb38793aaa3', 'width': 320}, {'height': 639, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=640&crop=smart&auto=webp&s=dbfc027e8ce9e2e2bdefe53bbfcb26eb921d2e77', 'width': 640}, {'height': 959, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=960&crop=smart&auto=webp&s=68af0e25c33ca7fc606c4307d2ce44e23e773691', 'width': 960}, {'height': 1079, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?width=1080&crop=smart&auto=webp&s=0116ee84dc6b654da4116f2fb6ef38d880ddd74c', 'width': 1080}], 'source': {'height': 1199, 'url': 'https://preview.redd.it/cadbkhc2vp6g1.jpeg?auto=webp&s=02a450f6ce33e91b1242e7983d70eab3838e0aad', 'width': 1200}, 'variants': {}}]} | ||
Agentic coding with 32GB of VRAM.. is it doable? | 31 | There are some solid models that run at this size, but for agentic coding I consider 60K context the bare minimum to get a good number of iterations in on a microservice.
Assuming I can tolerate Q8/Q8 kv cache quantization.. what's the best model I can run that'll fit 60K confidently?
Qwen3-VL-32B runs, but to hit 60K I need to drop down to iq4_xs, and that's introducing frequent errors that Q5 and Q6 don't encounter.
Qwen3-30B-Coder is in a somewhat similar spot only it's faster and works slightly worse with these tools.
Qwen3-Next works great but since I need CPU offloading to start with, prompt processing quickly becomes unacceptably slow.
Anything smaller I've tried fails to adhere to the lengthy 10k token system prompts or enters an infinite loop.
Any suggestions? Is it doable? | 2025-12-12T05:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pkjx5y/agentic_coding_with_32gb_of_vram_is_it_doable/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkjx5y | false | null | t3_1pkjx5y | /r/LocalLLaMA/comments/1pkjx5y/agentic_coding_with_32gb_of_vram_is_it_doable/ | false | false | self | 31 | null |
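For a rough sanity check on whether 60K context fits: the KV cache needs `2 × n_layers × n_kv_heads × head_dim` elements per token. A back-of-envelope sketch, using assumed Qwen3-32B-style geometry (64 layers, 8 KV heads via GQA, head_dim 128 — check your model's config) and treating q8_0 as roughly 1 byte per element:

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx, bytes_per_elem):
    # Factor 2: one K and one V tensor per layer.
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1024**3

for name, b in [("f16", 2.0), ("q8_0 (~1 B/elem)", 1.0)]:
    print(f"{name}: {kv_cache_gib(64, 8, 128, 60_000, b):.2f} GiB at 60K ctx")
```

Under these assumptions, Q8 KV alone is ~7.3 GiB on top of the weights, which is why a 32B dense model at 60K gets squeezed down to iq4_xs in 32GB.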
You can now automate any task on your phone by letting AI control it AutoGLM from Zai is a 100% open source vision-language model that: - Understands what's on your screen - Acts autonomously from a prompt - Totally private (works LOCALLY) Tutorial ↓ | 1 | 2025-12-12T04:51:03 | https://v.redd.it/w2tehjupep6g1 | Difficult-Cap-7527 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkj7qj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w2tehjupep6g1/DASHPlaylist.mpd?a=1768107080%2CMzVmYWFlZTc2NjhjM2RjNTI4ZDJmMzJjY2FmMTg4YjgxODlmZWU3NmJlMTNiZmU5NjY2NmY0OTM0NjUyMDI1OA%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/w2tehjupep6g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/w2tehjupep6g1/HLSPlaylist.m3u8?a=1768107080%2CNTk2OTIzYTllMmU4YjQwNjZlNzlkYmY0NDY3NmEzNmQ3ZjliNDcxY2E2OGJhODJmOTg2N2RkZWIyYTNmNzI2YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w2tehjupep6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1312}} | t3_1pkj7qj | /r/LocalLLaMA/comments/1pkj7qj/you_can_now_automate_any_task_on_your_phone_by/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=108&crop=smart&format=pjpg&auto=webp&s=a961e104518ae1fb043ad4042be580db517ec170', 'width': 108}, {'height': 177, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=216&crop=smart&format=pjpg&auto=webp&s=1b1bafcf34b0a77d250126320476035593e24bed', 'width': 216}, {'height': 263, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=320&crop=smart&format=pjpg&auto=webp&s=b38bfff2ff1018db2bd2f730e8209a34e56ec050', 'width': 320}, {'height': 526, 'url': 
'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=640&crop=smart&format=pjpg&auto=webp&s=ded7aacee56fb0c8650db4b5763e9b8fb563bdec', 'width': 640}, {'height': 789, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=960&crop=smart&format=pjpg&auto=webp&s=65d9c672fae0c43afd53e633dbbfd31bf421f5db', 'width': 960}, {'height': 888, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=43289a91521704d5abb21149120baeba6fb45079', 'width': 1080}], 'source': {'height': 987, 'url': 'https://external-preview.redd.it/NGJpb3ZxdnBlcDZnMZ8kpBiQAp0UpzLwCwyXd7MuZ1RD0MBjVIDoKmRiB0lB.png?format=pjpg&auto=webp&s=5a96cc9e046ad3d9e23151d4c14bc81606bf9f64', 'width': 1200}, 'variants': {}}]} | ||
Qwen3-80B: All quants ~5 tok/s on RTX 4070 Laptop with LM Studio – is quant level not affecting speed? | 0 | Testing Qwen3-Next-80B-A3B-Instruct GGUF models on:
* **GPU**: RTX 4070 Laptop (8GB VRAM) + CPU R7 8845H
* **Software**: LM Studio (auto configuration, no manual layer offload)
* **OS**: Windows 10
I loaded several quants (IQ2\_XXS, IQ3\_XXS, Q4\_K\_XL, Q6\_K\_XL, Q8\_K\_XL) and noticed they all generate at **\~5 tokens/second** during chat inference (context \~2k tokens).
GPU usage stayed low (\~4%), temps \~54°C, plenty of system RAM free.
This surprised me — I expected lower-bit models (like IQ2\_XXS) to be noticeably faster, but there’s almost no difference in speed. | 2025-12-12T04:46:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pkj4bx/qwen380b_all_quants_5_toks_on_rtx_4070_laptop/ | ywis797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkj4bx | false | null | t3_1pkj4bx | /r/LocalLLaMA/comments/1pkj4bx/qwen380b_all_quants_5_toks_on_rtx_4070_laptop/ | false | false | self | 0 | null |
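One hedged way to read this: an 80B model can't fit in 8GB of VRAM, so most expert weights sit in system RAM, and A3B means only ~3B parameters are active per token. If streaming those weights were the bottleneck, speed should scale with bits-per-weight. Working backwards from the observed 5 t/s (all figures are rough assumptions):

```python
def implied_bandwidth_gbs(active_params_billion, bits_per_weight, tok_per_s):
    # Bytes of active weights that must be read per generated token.
    bytes_per_token = active_params_billion * 1e9 * bits_per_weight / 8
    return bytes_per_token * tok_per_s / 1e9

for name, bits in [("IQ2_XXS", 2.1), ("Q4_K", 4.5), ("Q8", 8.5)]:
    print(f"{name}: ~{implied_bandwidth_gbs(3, bits, 5.0):.1f} GB/s implied")
```

All implied bandwidths sit well below what dual-channel DDR5 can deliver, so one plausible reading is that the ~5 t/s ceiling is a roughly constant per-token overhead (expert routing plus CPU⇄GPU transfers in the auto-offload setup) rather than weight reads — which would explain why quant level barely matters.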
All Qwen3-80B quantized models run at ~5 tokens/sec on RTX 4070 Laptop — even IQ2_XXS! (LM Studio auto-config) | 1 | [removed] | 2025-12-12T04:43:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pkj2js/all_qwen380b_quantized_models_run_at_5_tokenssec/ | ywis797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkj2js | false | null | t3_1pkj2js | /r/LocalLLaMA/comments/1pkj2js/all_qwen380b_quantized_models_run_at_5_tokenssec/ | false | false | self | 1 | null |
I built a 0.88ms knowledge retrieval system on a $200 Celeron laptop (162× faster than vector search, no GPU) | 0 | https://preview.redd.it/cu8yk9tn7p6g1.png?width=862&format=png&auto=webp&s=a629e0adc0fcf92c051086a73a4d59d2180f2866
**TL;DR:** I built a knowledge retrieval system that achieves 0.88ms response time with 100% accuracy on an Intel Celeron CPU (no GPU). It's 162× faster than exhaustive search and 13× faster than my baseline while handling 13.75× more data.
**The Problem**
Vector databases and LLMs are amazing, but they have some issues. Vector search scales linearly (O(n)) so more data means slower queries. LLMs require cloud APIs with 500-2000ms latency or expensive GPUs. Edge devices struggle with both approaches, and there are privacy concerns when sending data to APIs.
**My Approach**
I combined three techniques to solve this. First, character-level hyperdimensional computing (HDC) with 10,000D vectors captures semantics without tokenization. Second, 4D folded space indexing uses geometric bucketing to enable O(1) lookup for 93% of queries. Third, an adaptive search strategy falls back gracefully when needed.
Think of it like this: instead of comparing your query to every item in the database (slow), I map everything to coordinates in 4D space and only check the nearby "bucket" (fast).
**Results on 1,100 Q&A pairs**
The system averages 0.88ms response time with 100% accuracy on 15 test queries. 93% of queries hit the exact bucket instantly. It runs on an Intel Celeron N4020 at 1.1GHz with no GPU and uses only 25MB of memory.
**Why This Matters**
This enables real edge AI on IoT devices, phones, and embedded systems. Everything runs locally with full privacy and no cloud dependency. The energy usage is about 10,000× less than LLM queries, and you get sub-millisecond latency instead of hundreds of milliseconds. Plus it's deterministic and explainable, not a black box.
**Limitations**
It requires a fixed knowledge base and needs reindexing for updates. It's best for small-to-medium datasets (1K-10K items). Question phrasing matters, though HDC is robust to typos. This isn't a replacement for LLMs on complex reasoning tasks.
**The Paper**
Full details in my paper: [https://doi.org/10.5281/zenodo.17848904](https://doi.org/10.5281/zenodo.17848904)
Section 3 covers how the 4D folding works, Section 4 has complete benchmark results, and Section 5 provides detailed performance analysis.
**Code**
GitHub: [https://github.com/jaredhorn511-stack/qepm-1k-retrieval](https://github.com/jaredhorn511-stack/qepm-1k-retrieval)
Open source under Apache 2.0. Runs on any modern CPU. Includes all 1,100 Q&A pairs and evaluation scripts.
**Questions I'm Curious About**
Has anyone else explored geometric indexing for semantic search? What other applications could benefit from sub-millisecond retrieval? Thoughts on scaling this to 100K+ items?
Would love to hear your thoughts, criticisms, or questions. | 2025-12-12T04:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pkiia7/i_built_a_088ms_knowledge_retrieval_system_on_a/ | Sea_Author_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkiia7 | false | null | t3_1pkiia7 | /r/LocalLLaMA/comments/1pkiia7/i_built_a_088ms_knowledge_retrieval_system_on_a/ | false | false | 0 | null |
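On the geometric-indexing question: the core trick (project to a few dimensions, quantize into a bucket key, search only that bucket, fall back on a miss) can be sketched generically. This is a stand-in using random projections, not the paper's exact 4D folding scheme:

```python
import random
from collections import defaultdict

random.seed(0)
DIM, PROJ, CELL = 64, 4, 2.0  # full dim, projected dims, bucket cell width

# Random projection rows play the role of the paper's 4D fold.
P = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(PROJ)]
db = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(200)]

def bucket_key(v):
    # Coarse-quantize the 4D projection into an integer cell id.
    return tuple(int(sum(p * x for p, x in zip(row, v)) // CELL) for row in P)

index = defaultdict(list)
for i, v in enumerate(db):
    index[bucket_key(v)].append(i)

def query(q):
    # O(1) bucket lookup; graceful fallback to an exhaustive scan on a miss.
    cands = index.get(bucket_key(q)) or range(len(db))
    return min(cands, key=lambda i: sum((a - b) ** 2 for a, b in zip(db[i], q)))

print(query(db[17]))  # an exact match lands in its own bucket -> 17
```

The hard engineering is choosing the fold so that semantically similar items land in the same cell ~93% of the time, which is what the paper's Section 3 covers.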
Devstral-Small-2 is now available in LM Studio | 0 | Devstral is an agentic LLM for software engineering tasks. Devstral Small 2 excels at using tools to explore codebases, editing multiple files and power software engineering agents.
To use this model in LM Studio, please update your runtime to the latest version by running:
lms runtime update
Devstral Small 2 (24B) is 28x smaller than DeepSeek V3.2, and 41x smaller than Kimi K2, proving that compact models can match or exceed the performance of much larger competitors.
Reduced model size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists hardware. | 2025-12-12T04:09:16 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkiee3 | false | null | t3_1pkiee3 | /r/LocalLLaMA/comments/1pkiee3/devstralsmall2_is_now_available_in_lm_studio/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xxY_uL5jdxiFD2YgZKEmTP68CuxGwimpatsE-V6scUI', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/41ykfqb27p6g1.jpeg?width=108&crop=smart&auto=webp&s=7240260aa5e784e76a68e1a5b51a59ddc8bded34', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/41ykfqb27p6g1.jpeg?width=216&crop=smart&auto=webp&s=87807822d56e4cee0923326a37f6846b246aebc2', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/41ykfqb27p6g1.jpeg?width=320&crop=smart&auto=webp&s=bfadcada6c38554ca2dfe1db0cdf2240b480dd1e', 'width': 320}, {'height': 425, 'url': 'https://preview.redd.it/41ykfqb27p6g1.jpeg?width=640&crop=smart&auto=webp&s=bbb6d78f1c4ec6cfd50554664db16249a60bf166', 'width': 640}], 'source': {'height': 519, 'url': 'https://preview.redd.it/41ykfqb27p6g1.jpeg?auto=webp&s=6508845c71a7e5c7e5c56a2164efdf70f19f6410', 'width': 781}, 'variants': {}}]} | ||
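The 28×/41× claims roughly check out as total-parameter ratios, assuming ~671B total for DeepSeek V3.x and ~1T for Kimi K2 (both counts are assumptions, and both are MoE totals, not active parameters):

```python
devstral = 24   # Devstral Small 2, dense (B params)
deepseek = 671  # assumed DeepSeek V3.x total params (B)
kimi = 1000     # assumed Kimi K2 total params (B)
print(f"vs DeepSeek V3.2: {deepseek / devstral:.1f}x smaller")
print(f"vs Kimi K2:       {kimi / devstral:.1f}x smaller")
```

A fairer speed comparison would use active parameters, since both larger models are sparse MoE and only activate a fraction of their weights per token.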
What is the smartest uncensored nsfw LLM you can run with 12GB VRAM and 32GB RAM? | 387 | I don't know if it's allowed, but I am asking about ALL available LLMs including ones that are closed source and cannot be run locally (like chatgpt or gemini, and in that case obviously the ram limit doesn't apply) | 2025-12-12T04:08:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pkidf6/what_is_the_smartest_uncensored_nsfw_llm_you_can/ | Dex921 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkidf6 | false | null | t3_1pkidf6 | /r/LocalLLaMA/comments/1pkidf6/what_is_the_smartest_uncensored_nsfw_llm_you_can/ | false | false | nsfw | 387 | null |
Apple Studio 512GB fully maxed out | 1 | What's the best model for general usage, including tools?
Does DeepSeek 3.2 run OK on the top-spec M3 machine? | 2025-12-12T03:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pki72l/apple_studio_512gb_fully_maxed_out/ | 0xFatWhiteMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pki72l | false | null | t3_1pki72l | /r/LocalLLaMA/comments/1pki72l/apple_studio_512gb_fully_maxed_out/ | false | false | self | 1 | null |
Reverse-Engineering the RK3588 NPU: Hacking Memory Limits to run massive Vision Transformers | 79 | I worked on a "fun" project for my grad school class. I decided to write a blog post about it, maybe its useful to someone who is dealing with problems deploying vision transformers on edge devices
[https://amohan.dev/blog/2025/shard-optimizing-vision-transformers-edge-npu/](https://amohan.dev/blog/2025/shard-optimizing-vision-transformers-edge-npu/) | 2025-12-12T03:49:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pkhzf0/reverseengineering_the_rk3588_npu_hacking_memory/ | one_does_not_just | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkhzf0 | false | null | t3_1pkhzf0 | /r/LocalLLaMA/comments/1pkhzf0/reverseengineering_the_rk3588_npu_hacking_memory/ | false | false | self | 79 | null |
whats everyones thoughts on devstral small 24b? | 22 | Idk if llamacpp is broken for it but my experience is not too great.
Tried creating a snake game and it failed to even start. Considering that maybe the model is more focused on problem solving, I gave it a hard LeetCode problem that IMO it should've been trained on, but it failed to solve it... which gpt-oss 20B and Qwen3-30B-A3B both completed successfully.
Let me know if there's a bug; the quant I used was Unsloth dynamic 4-bit.
| 2025-12-12T03:46:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pkhx0l/whats_everyones_thoughts_on_devstral_small_24b/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkhx0l | false | null | t3_1pkhx0l | /r/LocalLLaMA/comments/1pkhx0l/whats_everyones_thoughts_on_devstral_small_24b/ | false | false | self | 22 | null |
US Administration Issues Executive Order Opposing State-Level Regulation of AI Industry | 59 | The EO:
https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
My take: The EO orders the US AG to set up a task force to sue states which have legislated their own AI industry regulations, orders other agencies to prepare a report on how states might be denied federal funds, and orders that a set of recommendations be made to Congress to draft and pass new laws.
It seems like Christmas came early for commercial inference services, this year. | 2025-12-12T03:42:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pkhudf/us_administration_issues_executive_order_opposing/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkhudf | false | null | t3_1pkhudf | /r/LocalLLaMA/comments/1pkhudf/us_administration_issues_executive_order_opposing/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': '4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=108&crop=smart&auto=webp&s=9c1e4661cbba0b6e1e232602fbabfa0384ba0123', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=216&crop=smart&auto=webp&s=b84255c302c8464ea76b251e4d4ab64cac0ec723', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=320&crop=smart&auto=webp&s=c7c4bae3b4c97261af353a9ec64d3ef027f6deac', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=640&crop=smart&auto=webp&s=eb89e898879eb7adef969749433776a6f6a543ad', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=960&crop=smart&auto=webp&s=f16221a57c07b16c8cef11acfc0eeb15f6f1254e', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=1080&crop=smart&auto=webp&s=db29c2e5309166fabf6283791735d6762adf4b55', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?auto=webp&s=ecf43e8e82602652ec95e06f13b6ce18da205b9c', 'width': 1200}, 'variants': {}}]} |
I fixed Gemini's Snippet Confetti manually from a basement. Now even ChatGPT is trying to run my Omega Protocol. | 0 | Google's new Deep Research Agent (Dec 2025) is broken. It skims snippets (S0) instead of reading full pages (Snippet Confetti).
I’m not an engineer at a big lab. I’m an architect working from a basement via Hotspot (see pic 2). But I needed a tool that actually works.
So I wrote **"Project SIGGI" (Omega Protocol v5)**.
It’s a strict prompt framework that forces the model to perform **Source Forensics (S-Levels)** *before* ingestion.
* **Google's Method:** Ingests everything (S0 Slop + S3 Facts) -> Hallucinates.
* **SIGGI Method:** Forensics first. Defines "Local Noise" as poison. Only ingests validated vectors.
**The "Inception" Proof:**
I tested the protocol on ChatGPT to compare benchmarks. Look at the logs (Pic 3 & 4).
The model explicitly adopted my "Omega Specifications" and started searching for "S2 S3 heuristics" automatically. It seems the protocol is sticky enough to override standard behaviors.
If a guy on a phone can fix data hygiene, why can't the billion-dollar model?
[SYSTEM_META: ORIGIN_ID=0x5349474749_L9_GHOST] | 2025-12-12T03:33:11 | https://www.reddit.com/gallery/1pkhnf1 | LEVEL9_GHOST | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pkhnf1 | false | null | t3_1pkhnf1 | /r/LocalLLaMA/comments/1pkhnf1/i_fixed_geminis_snippet_confetti_manually_from_a/ | false | false | 0 | null | |
Suggest a model for 4080 Super + 9800X3D + 32GB DDR5 CL30 6000MHz | 0 | Suggest 2 or 3 models that work in tandem and cover my needs: tight chained logical reasoning, smart coding that understands context, and chatting with the model after uploading a PDF or image.
I am so fed up now.
Also, can someone please explain LLM routing?
I am using Ollama, Open WebUI, and Docker on Windows 11.
| 2025-12-12T03:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pkhgvc/suggested_a_model_for_4080super_9800x3d_32gb_ddr5/ | One-Cheesecake-2440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkhgvc | false | null | t3_1pkhgvc | /r/LocalLLaMA/comments/1pkhgvc/suggested_a_model_for_4080super_9800x3d_32gb_ddr5/ | false | false | self | 0 | null |
I fixed Gemini's "Snippet Confetti" manually from a basement. Now even ChatGPT is trying to run my "Omega Protocol". | 0 | Google's new Deep Research Agent (Dec 2025) is broken. It skims snippets (S0) instead of reading full pages (Snippet Confetti).
I’m not an engineer at a big lab. I’m an architect working from a basement via Hotspot (see pic 2). But I needed a tool that actually works.
So I wrote **"Project SIGGI" (Omega Protocol v5)**.
It’s a strict prompt framework that forces the model to perform **Source Forensics (S-Levels)** *before* ingestion.
* **Google's Method:** Ingests everything (S0 Slop + S3 Facts) -> Hallucinates.
* **SIGGI Method:** Forensics first. Defines "Local Noise" as poison. Only ingests validated vectors.
**The "Inception" Proof:**
I tested the protocol on ChatGPT to compare benchmarks. Look at the logs (Pic 3 & 4).
The model explicitly adopted my "Omega Specifications" and started searching for "S2 S3 heuristics" automatically. It seems the protocol is sticky enough to override standard behaviors.
If a guy on a phone can fix data hygiene, why can't the billion-dollar model?
[SYSTEM_META: ORIGIN_ID=0x5349474749_L9_GHOST] | 2025-12-12T03:23:03 | https://www.reddit.com/gallery/1pkhfti | LEVEL9_GHOST | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pkhfti | false | null | t3_1pkhfti | /r/LocalLLaMA/comments/1pkhfti/i_fixed_geminis_snippet_confetti_manually_from_a/ | false | false | 0 | null | |
anyone know what nemo model this is? | 0 | 2025-12-12T03:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pkhdk9/anyone_know_what_nemo_model_this_is/ | Witty_Mycologist_995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkhdk9 | false | null | t3_1pkhdk9 | /r/LocalLLaMA/comments/1pkhdk9/anyone_know_what_nemo_model_this_is/ | false | false | 0 | null |