| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Use case for a local large language model on a computer. | 2 | What are you all using local large language models for, besides conversations on your computer? | 2025-12-15T19:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pnh9td/use_case_for_a_local_large_language_model_on_a/ | Rare_Prior_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnh9td | false | null | t3_1pnh9td | /r/LocalLLaMA/comments/1pnh9td/use_case_for_a_local_large_language_model_on_a/ | false | false | self | 2 | null |
A call to boycott OpenAI inference | 63 | Some of us local LLM users, myself included, use OpenAI products in addition to local LLMs.
However, if we care about owning our hardware in the future, we must stop supporting the company that's manipulating the hardware market. Every dollar towards OpenAI inference (API spend or ChatGPT subscription) is a vote for higher RAM prices and a vote against the future of general-purpose computing.
(Many of you here have never handed them money, and I applaud you for that). | 2025-12-15T19:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pnh7kb/a_call_to_boycott_openai_inference/ | cameheretoposthis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnh7kb | false | null | t3_1pnh7kb | /r/LocalLLaMA/comments/1pnh7kb/a_call_to_boycott_openai_inference/ | false | false | self | 63 | null |
List of uncensored LLMs I want to test | 29 | I made this list of uncensored LLMs I want to test. Do you think I should add any others to the list? I only want to test models up to 30B with the exception of MoE models that can be larger.
1. Dolphin 3.0: 8B
2. Nous Hermes 3: 8B
3. LLaMA-3.2 Dark Champion Abliterated: 18.4B (MoE)
4. Gemma 3 27B Abliterated: 27B
5. Qwen 3 30B-A3B: 30B
6. Magistral Small 2506: 24B
7. Starling-LM-7B-alpha: 7B
8. Dolphin 24B Venice Edition: 24B
9. Big-Tiger-Gemma-27B-v3: 27B
10. mradermacher/Qwen3-30B-A3B-abliterated-erotic-i1-GGUF: 30B
11. mlabonne/NeuralDaredevil-8B-abliterated: 8B
12. Josefied Qwen 3 8b: 8B
13. Starcannon Unleashed: 12B
14. MythoMax: 13B
15. Midnight Rose: 12B | 2025-12-15T19:51:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pnh56l/list_of_uncensored_llms_i_want_to_test/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnh56l | false | null | t3_1pnh56l | /r/LocalLLaMA/comments/1pnh56l/list_of_uncensored_llms_i_want_to_test/ | false | false | self | 29 | null |
Has anyone tried nomos 1 independently? | 1 | https://venturebeat.com/ai/nous-research-just-released-nomos-1-an-open-source-ai-that-ranks-second-on
I would also love to know how well the test harness does with different local models. | 2025-12-15T19:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pnh2zi/has_anyone_tried_nomos_1_independently/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnh2zi | false | null | t3_1pnh2zi | /r/LocalLLaMA/comments/1pnh2zi/has_anyone_tried_nomos_1_independently/ | false | false | self | 1 | null |
Finetune Models Like Deepseek and Qwen with Reinforcement Learning! | 0 | We handle all the GPU/CPU infrastructure, telemetry, and node self-healing for you. [https://www.linkedin.com/feed/update/urn:li:activity:7406409500783566848/?actorCompanyId=110230637](https://www.linkedin.com/feed/update/urn:li:activity:7406409500783566848/?actorCompanyId=110230637) | 2025-12-15T19:40:27 | https://v.redd.it/aev5we0s7f7g1 | Pleasant_Syllabub591 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pngvj1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/aev5we0s7f7g1/DASHPlaylist.mpd?a=1768419642%2CYjA0MjA2ZjgyYzNmMjU0Mjc5YmM1NGE4MWExYmFlMmVkNjQzM2ViMGJmN2M4ZGE4YTRiODVlMWUyZmI1ZTUwMg%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/aev5we0s7f7g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/aev5we0s7f7g1/HLSPlaylist.m3u8?a=1768419642%2CN2Y5ZmZhODg4OTUyNzk4ZjI0MmE2NDRiMGNlNjRjNDI2ODE0NmYxZGY1YmQxZWZhYzNlNDM0YWMyOTJmOTdhMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aev5we0s7f7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1pngvj1 | /r/LocalLLaMA/comments/1pngvj1/finetune_models_like_deepseek_and_qwen_with/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=d12a4ba5a76c967a73a09bccb6ad654515690d0d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=fddb3c2836ad66ae3689ce1717613f35a9d2c9b0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=0f9f2abc54fab624e61631a142230c0c6ada1efd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=c3d5c2795509c268ec3bf3bae1c2abaa8908aa70', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=337ebb4c0993612b47b62ed2e574025aebb6bbef', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=96abeccce8f9fed4be6fa677300b8fcac67e6e6a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eWw3ODBnMXM3ZjdnMc7O6sDE7Dj0pCdGMm5DSYbuefjd2Ot2Jj0TSUVk7uZJ.png?format=pjpg&auto=webp&s=8d38c543638d92128e5eb532d33e2d99cc6b33ca', 'width': 1920}, 'variants': {}}]} | |
Tried to convince Kimi K2 to give me a taste of the base model within. Jumpscare alert. | 0 | Keep typing, monkey. | 2025-12-15T19:37:40 | https://www.reddit.com/gallery/1pngt1m | input_a_new_name | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pngt1m | false | null | t3_1pngt1m | /r/LocalLLaMA/comments/1pngt1m/tried_to_convince_kimi_k2_to_give_me_a_taste_of/ | false | false | 0 | null | |
VECS: a semantic cache server in C | 4 | [vecs startup](https://preview.redd.it/qf03s9uf5f7g1.png?width=2650&format=png&auto=webp&s=71ed4aba31c9fdc74c45c82b84a4ad9cc4c83a12)
Hello everyone,
This year I had to develop a RAG application without heavy libraries to keep things simple. Eventually, I needed a semantic cache to save on inference costs and latency. Looking at the options, everything felt like overkill. I didn't want to spin up a complex vector database just to cache some queries, and most "semantic cache" solutions require calling an external API for embeddings, which adds network latency that defeats the purpose for me.
So I spent some free time building VECS. It's a semantic cache server written in C.
The main idea is that it embeds `llama.cpp` directly into the server process. When you send a query via TCP, it calculates the embedding and searches the index locally in the same memory space. No network hops to external providers, no Python runtime overhead.
Some details on how it works:
* **Search:** It uses a basic IVFFlat index. I initially used a linear scan, but I had to implement some simple clustering because it was getting too slow as the dataset grew. It groups vectors into buckets so it doesn't have to scan everything every time.
* **Concurrency:** It handles connection pooling and offloads the embedding math to a GPU thread pool, so the main event loop (epoll/kqueue) stays non-blocking.
* **Protocol:** It speaks VSP, which is basically the RESP protocol (Redis style), so it's easy to integrate.
* **Caching:** Has an L1 cache for exact string matches and L2 for semantic similarity.
I ran some benchmarks on my local machine (M2 Max 12 core CPU - 30 core GPU - 32 GB RAM) with GPU offloading enabled and I'm seeing promising latency results.
It compiles down to a single binary. It's still a work in progress and probably has some rough edges, but it solves my specific problem of on-prem, low-latency caching without dependencies.
I also threw together a CLI and a Node client if anyone wants to take a look:
Server Source: [https://github.com/riccardogiuriola/vecs](https://github.com/riccardogiuriola/vecs)
CLI: [https://github.com/riccardogiuriola/vecs-cli](https://github.com/riccardogiuriola/vecs-cli)
Node Client: [https://github.com/riccardogiuriola/vecs-client-node](https://github.com/riccardogiuriola/vecs-client-node)
If you want to hop on discord and give your opinion:
Discord: [https://discord.gg/HdCnpjwuPW](https://discord.gg/HdCnpjwuPW)
Let me know what you think or if there are obvious optimizations I missed in the C code. | 2025-12-15T19:33:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pngpjm/vecs_a_semantic_cache_server_in_c/ | drifting_raptor3762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pngpjm | false | null | t3_1pngpjm | /r/LocalLLaMA/comments/1pngpjm/vecs_a_semantic_cache_server_in_c/ | false | false | 4 | null | |
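The bucketing described above (IVFFlat) is easy to sketch outside of C. Below is a minimal, illustrative Python version of the two-stage lookup: cluster once, then only scan the buckets whose centroids are nearest to the query. This is not the VECS implementation, and all names here are hypothetical.

```python
import numpy as np

def build_index(vectors, n_clusters, iters=10):
    # Plain k-means to pick centroids; the clustering VECS actually uses may differ.
    vectors = np.asarray(vectors, dtype=np.float32)
    rng = np.random.default_rng(0)
    centroids = vectors[rng.choice(len(vectors), n_clusters, replace=False)]
    for _ in range(iters):
        assign = np.argmin(np.linalg.norm(vectors[:, None] - centroids[None], axis=-1), axis=1)
        for c in range(n_clusters):
            members = vectors[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    # Final assignment against the updated centroids defines the buckets.
    assign = np.argmin(np.linalg.norm(vectors[:, None] - centroids[None], axis=-1), axis=1)
    buckets = {c: np.where(assign == c)[0] for c in range(n_clusters)}
    return vectors, centroids, buckets

def search(query, vectors, centroids, buckets, nprobe=2, k=5):
    # Stage 1: pick the nprobe nearest buckets. Stage 2: scan only those buckets.
    nearest = np.argsort(np.linalg.norm(centroids - query, axis=1))[:nprobe]
    candidates = np.concatenate([buckets[c] for c in nearest])
    dists = np.linalg.norm(vectors[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]
```

The trade-off is recall: scanning only `nprobe` buckets can miss a neighbour that landed in a different cluster, which is why IVF-style indexes expose that knob.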
LLMs do not understand numbers | 0 | 2025-12-15T19:28:20 | https://boundaryml.com/blog/llms-do-not-understand-numbers | joatmon-snoo | boundaryml.com | 1970-01-01T00:00:00 | 0 | {} | 1pngkm5 | false | null | t3_1pngkm5 | /r/LocalLLaMA/comments/1pngkm5/llms_do_not_understand_numbers/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=108&crop=smart&auto=webp&s=190fb2a9f894ead9de487fddb224dc9ba8f7ed4a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=216&crop=smart&auto=webp&s=0255215d117b323ff6b943ab5ef157cdc5e6c023', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=320&crop=smart&auto=webp&s=0a214e9d89fd785b3d37d1aead234758d9648325', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=640&crop=smart&auto=webp&s=c98f5a0f1ad05d6d3418f6e29f5b924f5622640f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=960&crop=smart&auto=webp&s=ad4fe8281f771413ffdf98e616a825616ad0f638', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?width=1080&crop=smart&auto=webp&s=269a558f9aee6fa49689e2d3f947b8270916bd2f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/B3gpeBivee_acFcSJ34O7-Pnl-EoyNaYAb3OseKQLJo.png?auto=webp&s=7bd92399762d0e47675f0c8a14d0e46a4ff4d5ec', 'width': 1200}, 'variants': {}}]} | |
Looking for feedback: local doc-search app (DocFinder) | 0 | Hi all,
I’ve built a small desktop app (macOS/Windows/Linux) that lets you index PDFs and search them.
I’d love feedback on:
* Model/runtime choices for purely local inference
* Best practices for chunking/embedding PDFs
* General interest
Links:
* GitHub: [https://github.com/filippostanghellini/DocFinder](https://github.com/filippostanghellini/DocFinder)
* Latest release: [https://github.com/filippostanghellini/DocFinder/release](https://github.com/filippostanghellini/DocFinder/release)
Thanks a lot!!
[Index page](https://preview.redd.it/rav4j8yeye7g1.png?width=2382&format=png&auto=webp&s=e35e33e76e8130e7dd3bad317d57dc84dbc7cbd9)
[Search page](https://preview.redd.it/j6v2p6ygye7g1.png?width=2380&format=png&auto=webp&s=992b15caeab459a8a3fde8f80abc5cf77f0bb9c8)
[Database page](https://preview.redd.it/ep4q224jye7g1.png?width=2376&format=png&auto=webp&s=4fbdaa9bebc72e2e951053f428b3106b307b36fb)
| 2025-12-15T18:47:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pnfgr5/looking_for_feedback_local_docsearch_app_docfinder/ | notagoodtradooor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnfgr5 | false | null | t3_1pnfgr5 | /r/LocalLLaMA/comments/1pnfgr5/looking_for_feedback_local_docsearch_app_docfinder/ | false | false | 0 | null | |
Is it worth it to get a 192/256 gb URam Mac, or are you better off spending money on the api plus a 64gb/128gb MacBook for coding and math and general knowledge/search? | 0 | Is it worth it to get a 192/256GB URam M5/M6 Max MacBook or M5 Ultra Mac Studio to run GLM 4.6 Q4 or Qwen3 235B VL and sometimes use the API, or are you better off spending the money on the DeepSeek V3.2/Claude API, a ChatGPT Plus sub, and AI Studio, plus a 64GB/128GB MacBook running GLM 4.5 Air/Qwen3 Next, for coding, math, and general knowledge/search? Would any of you go into debt to get a 512/784GB M5 Ultra URam Mac Studio? | 2025-12-15T18:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pnffg3/is_it_worth_it_to_get_a_192256_gb_uram_mac_or_you/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnffg3 | false | null | t3_1pnffg3 | /r/LocalLLaMA/comments/1pnffg3/is_it_worth_it_to_get_a_192256_gb_uram_mac_or_you/ | false | false | self | 0 | null |
I'm strong enough to admit that this bugs the hell out of me | 1,635 | 2025-12-15T18:40:57 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pnfaqo | false | null | t3_1pnfaqo | /r/LocalLLaMA/comments/1pnfaqo/im_strong_enough_to_admit_that_this_bugs_the_hell/ | false | false | 1,635 | {'enabled': True, 'images': [{'id': 'olLD2ugOykvQo_3V9cg6qgDKtBTevsSpzFRyymHP83Y', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/9xkz6sfcxe7g1.png?width=108&crop=smart&auto=webp&s=d571ddc97e0970de1134229deef5c170f9910784', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/9xkz6sfcxe7g1.png?width=216&crop=smart&auto=webp&s=ee604ec37c149d98fdc6f19b674fa53dad25e8e0', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/9xkz6sfcxe7g1.png?width=320&crop=smart&auto=webp&s=55149229a153c6c87d61ae1aa53e61a1b3a65df8', 'width': 320}], 'source': {'height': 325, 'url': 'https://preview.redd.it/9xkz6sfcxe7g1.png?auto=webp&s=ccbc3ddbefdb4fd3868fa3ff5dfd7e46501ef06e', 'width': 485}, 'variants': {}}]} | |||
EuroLLM-22B-Instruct-2512 | 37 | 2025-12-15T18:33:10 | https://huggingface.co/utter-project/EuroLLM-22B-Instruct-2512 | lomero | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pnf39a | false | null | t3_1pnf39a | /r/LocalLLaMA/comments/1pnf39a/eurollm22binstruct2512/ | false | false | default | 37 | {'enabled': False, 'images': [{'id': 'O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=108&crop=smart&auto=webp&s=7e2b86e08df2ed3160f574fa5e05dc3cb4f20f97', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=216&crop=smart&auto=webp&s=76c725097d3e54f8b8d12ddd8168b07ced630e13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=320&crop=smart&auto=webp&s=d138e53b7d64a8a33174851b64a27a696db638f6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=640&crop=smart&auto=webp&s=48f638893afeb1e03072ea3a4770f7e9cbd72f5e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=960&crop=smart&auto=webp&s=77deb39cf0835bb3e5728d011983278a7019fa02', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?width=1080&crop=smart&auto=webp&s=b33a639c78c471526c6b8ea44a1088afc5956abc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/O4orvcWMmplAYzLxHYvEbhJeCFoiq45KGzwaV-YdDNA.png?auto=webp&s=bfb7e8d037d63ba59bf9904d234de2a9c93ed645', 'width': 1200}, 'variants': {}}]} | |
Key Highlights of AI2's New Byte Level LLM: Bolmo | 58 | **\[1\] Bolmo: First Fully Open Byte-Level Language Models**
* Processes raw UTF-8 bytes instead of subword tokens, improving handling of spelling, whitespace, rare words, and multilingual text without a fixed vocabulary.
**\[2\] Built on Olmo 3 Transformer Backbone**
* Rather than training from scratch, Bolmo reuses a strong subword Olmo 3 model and retrofits it into a byte-level model, enabling competitive performance with lower training cost.
**\[3\] Two-Stage Training for Efficiency**
* **Stage 1:** Train local encoder, decoder, and boundary predictor while freezing the transformer — fast learning with fewer tokens.
* **Stage 2:** Unfreeze and train globally for deeper byte-level understanding while keeping efficiency.
**\[4\] Strong Task Performance**
* **Competitive on Core LLM Benchmarks:** Bolmo 7B rivals its subword Olmo 3 counterpart across math, reasoning, QA, code, and general knowledge tasks.
* **Excels in Character-Focused Benchmarks:** Substantially better accuracy on character-centered tests like CUTE and EXECUTE compared to the base Olmo models.
**\[5\] Fully Open Ecosystem**
* **Open Weights, Code, Data & Reports:** Bolmo 1B and 7B checkpoints, training code, tech reports, and datasets are publicly available.
Source: [https://allenai.org/blog/bolmo](https://allenai.org/blog/bolmo) | 2025-12-15T18:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pnep8j/key_highlights_of_ai2s_new_byte_level_llm_bolmo/ | Dear-Success-1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnep8j | false | null | t3_1pnep8j | /r/LocalLLaMA/comments/1pnep8j/key_highlights_of_ai2s_new_byte_level_llm_bolmo/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=108&crop=smart&auto=webp&s=3518edcba004b12babe5873797b602a029da3e9c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=216&crop=smart&auto=webp&s=2f0ff0131c9c1f616c275c20f0bec1d44e822739', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=320&crop=smart&auto=webp&s=e6aa3fd952abcc12c1cf1d1940b93fb2fc05c977', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=640&crop=smart&auto=webp&s=d173a6c95585ddde52f77b471735449a160a44e8', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=960&crop=smart&auto=webp&s=ebaf4053dc2b9881d647363bd8a7abd4f6ebb33c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?width=1080&crop=smart&auto=webp&s=55e2aebf10e33e4a524e8893655cb4cd2835ca03', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/BFieSRwWDoHtmfZfFIkZ7ccfsazLfm3mRP1SORcR4mc.png?auto=webp&s=9021eb850b54a5cfedb25134254a9e629dcdf281', 'width': 2667}, 'variants': {}}]} |
Bolmo-the first family of competitive fully open byte-level language models (LMs) at the 1B and 7B parameter scales. | 109 | [https://huggingface.co/collections/allenai/bolmo](https://huggingface.co/collections/allenai/bolmo)
[https://github.com/allenai/bolmo-core](https://github.com/allenai/bolmo-core)
[https://www.datocms-assets.com/64837/1765814974-bolmo.pdf](https://www.datocms-assets.com/64837/1765814974-bolmo.pdf)
https://preview.redd.it/h6jffcdune7g1.png?width=2616&format=png&auto=webp&s=f15bc148dc0d4cffc997ccb8356f7c5244f80cb4
What are byte-level language models?
Byte-level language models (LMs) are a class of models that process text by tokenizing the input into **UTF-8 bytes** (a smaller set of finer-grained atomic units) instead of relying on the traditional subword tokenization approach. In this context, UTF-8 is considered the tokenizer, and the vocabulary consists of the 256 distinct bytes. | 2025-12-15T17:52:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pndzy7/bolmothe_first_family_of_competitive_fully_open/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pndzy7 | false | null | t3_1pndzy7 | /r/LocalLLaMA/comments/1pndzy7/bolmothe_first_family_of_competitive_fully_open/ | false | false | 109 | null | |
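As a quick illustration of what "UTF-8 is the tokenizer" means in practice (plain Python, nothing Bolmo-specific):

```python
# Every string maps to token IDs in 0..255; there is no subword vocabulary,
# so rare words, typos, and unusual scripts can never fall out of vocab.
text = "Olmo → Bolmo"
byte_ids = list(text.encode("utf-8"))
print(byte_ids)       # [79, 108, 109, 111, 32, 226, 134, 146, 32, 66, ...]
print(len(byte_ids))  # 14 byte tokens, noticeably more than a subword tokenizer would produce
print(bytes(byte_ids).decode("utf-8"))  # lossless round-trip back to the original string
```

The cost is sequence length: byte sequences run several times longer than subword sequences, which is what Bolmo's local encoder/decoder and boundary predictor are there to compensate for.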
zai-org - SCAIL (Studio-grade Character Animation via In-context Learning) | 15 | zai-org has just released a model for character animation and it looks quite impressive.
From the blog:
>**SCAIL** builds upon Wan-I2V models and incorporates **3D-Consistent** pose representation to learn precise identity-agnostic motion. After comparing different injection methods, we adopt **full-context pose injection** for the model to learn spatial-temporal motion characteristics. We leverage **Pose-shifted RoPE** to facilitate learning of spatial-temporal relation between video tokens and pose tokens.
Blog: [https://teal024.github.io/SCAIL/](https://teal024.github.io/SCAIL/)
Huggingface: [https://huggingface.co/zai-org/SCAIL-Preview](https://huggingface.co/zai-org/SCAIL-Preview)
Github: [https://github.com/zai-org/SCAIL](https://github.com/zai-org/SCAIL) | 2025-12-15T17:50:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pndxux/zaiorg_scail_studiograde_character_animation_via/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pndxux | false | null | t3_1pndxux | /r/LocalLLaMA/comments/1pndxux/zaiorg_scail_studiograde_character_animation_via/ | false | false | self | 15 | null |
AMD ROCm inference benchmarks (RX 7900 XTX / gfx1100) + reproducible Docker commands | 8 | I’m running an AMD RX 7900 XTX (gfx1100) on Ubuntu 24.04 with ROCm + llama.cpp (Docker). If anyone wants benchmark numbers for a specific GGUF model/quant/config on AMD, reply or DM with the details and I can run it and share results + a reproducible command.
**What I’ll share:**
* tokens/sec (prefill + generation)
* VRAM footprint / memory breakdown
* settings used (ctx/batch/offload) + notes if something fails
**Baseline reference (my node):** TinyLlama 1.1B Q4\_K\_M: \~1079 tok/s prefill, \~308 tok/s generation, \~711 MiB VRAM.
If you want it as a formal report/runbook for your project, I can also package it up as a paid deliverable (optional). | 2025-12-15T17:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pndmi4/amd_rocm_inference_benchmarks_rx_7900_xtx_gfx1100/ | AMDRocmBench | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pndmi4 | false | null | t3_1pndmi4 | /r/LocalLLaMA/comments/1pndmi4/amd_rocm_inference_benchmarks_rx_7900_xtx_gfx1100/ | false | false | self | 8 | null |
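If you want to sanity-check numbers like these on your own box, here is a rough tokens-per-second sketch using llama-cpp-python instead of the Docker setup above. It assumes a ROCm/hipBLAS-enabled build of the library and a local GGUF path, so treat it as illustrative rather than a drop-in reproduction of the benchmark.

```python
import time
from llama_cpp import Llama  # assumes a ROCm/hipBLAS (or other GPU) build

llm = Llama(model_path="tinyllama-1.1b-chat-q4_k_m.gguf",  # hypothetical local path
            n_gpu_layers=-1, n_ctx=2048, verbose=False)

start = time.perf_counter()
out = llm("Explain what a semantic cache is in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

gen = out["usage"]["completion_tokens"]
# Note: this lumps prefill and generation together, unlike the split numbers above.
print(f"{gen} tokens in {elapsed:.2f}s -> {gen / elapsed:.1f} tok/s")
```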
Any open source evals for ai coding platforms? | 7 | Can somebody tell me if there are any open-source evals for testing the performance of AI coding platforms like Claude Code, Cursor, Antigravity, etc.? The model will be kept constant; only the platforms will vary. | 2025-12-15T17:35:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pndjp7/any_open_source_evals_for_ai_coding_platforms/ | DataScientia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pndjp7 | false | null | t3_1pndjp7 | /r/LocalLLaMA/comments/1pndjp7/any_open_source_evals_for_ai_coding_platforms/ | false | false | self | 7 | null |
Chatterbox Turbo - open source TTS. Instant voice cloning from ~5 seconds of audio | 0 | Demo: https://huggingface.co/spaces/ResembleAI/chatterbox-turbo-demo
- <150ms time-to-first-sound
- State-of-the-art quality that beats larger proprietary models
- Natural, programmable expressions
- Zero-shot voice cloning with just 5 seconds of audio
- PerTh watermarking for authenticated and verifiable audio
- Open source – full transparency, no black boxes
official article (not affiliated): https://www.resemble.ai/chatterbox-turbo/
fal.ai article (not affiliated): https://blog.fal.ai/chatterbox-turbo-is-now-available-on-fal/ | 2025-12-15T17:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pndbki/chatterbox_turbo_open_source_tts_instant_voice/ | Thrimbor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pndbki | false | null | t3_1pndbki | /r/LocalLLaMA/comments/1pndbki/chatterbox_turbo_open_source_tts_instant_voice/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=108&crop=smart&auto=webp&s=698a4ea623618deec85ba1e78e044d4c3af3cf78', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=216&crop=smart&auto=webp&s=35806e9497a145210fd2d74e788c3aa74c00f3d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=320&crop=smart&auto=webp&s=493e08f9d5c5a3e9c0c81687e9f90e7c882016bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=640&crop=smart&auto=webp&s=eee124188edb4377ec359c76f7039b93aadec69e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=960&crop=smart&auto=webp&s=88d7235bc8d86746ccb0af288d2e12b57df2bf5b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?width=1080&crop=smart&auto=webp&s=ccd95433a9c6bcdd70b7c6fb82164fc02d21bc86', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RAMBaME1fdnsvcMQWt2D6gZZiBg-OD6c6XhLjBdcW7Y.png?auto=webp&s=3e09e0056079756336516bd416cbb7444f662251', 'width': 1200}, 'variants': {}}]} |
They're finally here (Radeon 9700) | 352 | 2025-12-15T17:20:23 | https://www.reddit.com/gallery/1pnd5uf | Zeikos | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pnd5uf | false | null | t3_1pnd5uf | /r/LocalLLaMA/comments/1pnd5uf/theyre_finally_here_radeon_9700/ | false | false | 352 | null | ||
They're finally here (Radeon 9700) | 1 | I am going to look in a bit when holidays start, but I would like some community advice on how to find out about these cards' capabilities.
So any advice is appreciated. | 2025-12-15T17:17:11 | https://www.reddit.com/gallery/1pnd2kq | Zeikos | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pnd2kq | false | null | t3_1pnd2kq | /r/LocalLLaMA/comments/1pnd2kq/theyre_finally_here_radeon_9700/ | false | false | default | 1 | null |
Perfect. Get ready boys | 0 | 2025-12-15T17:14:21 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pnczrb | false | null | t3_1pnczrb | /r/LocalLLaMA/comments/1pnczrb/perfect_get_ready_boys/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'uop62dg2ie7g1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=108&crop=smart&auto=webp&s=b996785ea505452d5c108d3019ec2edd65c44aa9', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=216&crop=smart&auto=webp&s=4c138b2974f510fee8cbc287df5f7896945b31dd', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=320&crop=smart&auto=webp&s=3fa31d6e4509963621ca4b1a90e602c6329898f3', 'width': 320}, {'height': 651, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=640&crop=smart&auto=webp&s=b459a2d525dcea58b2e78608a9b08fed9e444ca6', 'width': 640}, {'height': 976, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=960&crop=smart&auto=webp&s=e684b50e72e1aaf14b23cf7893a825b1fa13dd42', 'width': 960}, {'height': 1099, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?width=1080&crop=smart&auto=webp&s=f76f55edf30e852373676f1467ae5857474ec924', 'width': 1080}], 'source': {'height': 1099, 'url': 'https://preview.redd.it/uop62dg2ie7g1.png?auto=webp&s=053e156120c57ade742b9bb51ff3b388aa54c0df', 'width': 1080}, 'variants': {}}]} | ||
Suspected scam: many NVIDIA RTX Pro 6000 for £2,900 on eBay | 16 | A bunch of RTX Pro 6000 listings have emerged on eBay, and the deals are too good to be true.
The new wave of listings is supposedly covered by eBay, so I'm wondering how the scam works?
The first listing was a "Classified ad". If you are not familiar with it, it allows sellers to advertise on the eBay platform, but the transaction happens completely outside of eBay. This means you don't get any of the eBay features (refund, leaving negative feedback).
A few days later an odd pattern of listings emerged:
\- heavy discount (over half price)
\- around £2,900 each
\- from the UK, shipping from China
\- accounts with little feedback but positive
\- possibility of feedback farming (selling postage stamps)
\- a DDR5 kit is included to seal the deal
\- same pics, including the RAM kit
Examples:
\- https://www.ebay.com/itm/389366203939
\- https://www.ebay.com/itm/277575062859
\- https://www.ebay.com/itm/127559844787 | 2025-12-15T17:12:40 | https://www.ebay.com/itm/257259544555? | skyfallboom | ebay.com | 1970-01-01T00:00:00 | 0 | {} | 1pncy5y | false | null | t3_1pncy5y | /r/LocalLLaMA/comments/1pncy5y/suspected_scam_many_nvidia_rtx_pro_6000_for_2900/ | false | false | default | 16 | {'enabled': False, 'images': [{'id': 'xpx-BnA7zHCbnXOXB5KlTmKqGruqN6OySQZ12wtTJBo', 'resolutions': [{'height': 188, 'url': 'https://external-preview.redd.it/xpx-BnA7zHCbnXOXB5KlTmKqGruqN6OySQZ12wtTJBo.jpeg?width=108&crop=smart&auto=webp&s=59775ddee46f101def2e8e9e14ffd7572f7f9d85', 'width': 108}, {'height': 377, 'url': 'https://external-preview.redd.it/xpx-BnA7zHCbnXOXB5KlTmKqGruqN6OySQZ12wtTJBo.jpeg?width=216&crop=smart&auto=webp&s=482447feeaeca91472e6b3cf662589533d9a391a', 'width': 216}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/xpx-BnA7zHCbnXOXB5KlTmKqGruqN6OySQZ12wtTJBo.jpeg?auto=webp&s=c2f6bb343693fbf3359f128bd824009b87d5c2ce', 'width': 286}, 'variants': {}}]} |
I trained a local on-device (3B) medical note model and benchmarked it vs frontier models (results + repo) | 45 | Hey Local Model Runners,
I’ve been building an on-device medical scribe and trained a small **3B** SOAP note model that runs locally (Mac). I wanted to sanity-check how far a compact, self-hostable model can go on the core scribe task: turning a transcript into a clinical SOAP note.
So I benchmarked it against a few recent frontier models + a strong open model.
# What I ran
**Task:** Generate a clinical SOAP note from a transcript (scribe use-case)
**Data:** 300 synthetic doctor-patient dialogues (no real patient data)
**Judging:** 3 LLM judges (different model families), A/B randomized, scoring:
* Safety (weighted highest)
* Coverage (SOAP essentials captured)
* Readability / note quality
The evaluation is “safety-first” (inspired by Abridge’s “better to omit than fabricate” idea).
# Overall scores (0–5)
* GPT-5.2 — 4.72
* Gemini 3 Pro — 4.70
* **Omi SOAP Edge (3B, on-device)** — 4.65
* Kimi K2 Thinking — 4.55
* Claude Opus 4.5 — 4.54
* GPT-5 — 4.29
Top-3 are pretty close. The bigger differences show up when you look at major hallucinations. GPT 5.2 btw is insane improvement over GPT-5 O.G.
# Hallucination risk (major clinical fabrications)
By “major hallucination” I mean stuff like inventing a diagnosis, medication, or vital sign that wasn’t in the transcript.
Using **Omi = 1.0× baseline** (major hallucinations per note):
* GPT-5.2: 0.89×
* Gemini 3 Pro: 0.99×
* **Omi (3B): 1.00×**
* Kimi K2: 2.74×
* Claude Opus 4.5: 3.10×
* GPT-5: 4.32×
Alternative view (easier to interpret): **% of dialogues where ≥2 judges flagged a major hallucination**
* 4% GPT-5.2 | 7% Omi | 8% Gemini | 19% Kimi | 25% Claude | 37% GPT-5
# My personal takeaway
* GPT-5.2 and Gemini 3 Pro are genuinely very strong at this task.
* The surprising part for me: a small **3B on-device model** can land in the same safety tier for major clinical fabrications, while being deployable locally (useful when you can’t send PHI to a cloud API).
* Kimi/Claude often write very thorough notes, but in this benchmark that came with more major fabrication risk. The completeness vs safety tradeoff feels very real for scribe workflows.
# Open source / reproducibility
I’ve open-sourced the benchmark so others can run it, add models, and ideally turn it into a living medical note leaderboard:
* dialogues
* model outputs
* judge prompts + scoring
* results tables
Repo link in comments. PRs welcome if you want to add more local/open models or propose better judging setups.
Side note: this exact 3B model is what I’m running locally in my macOS scribe beta. If anyone here wants to test on-device note generation (or help stress test it), DM me. | 2025-12-15T16:57:25 | https://www.reddit.com/gallery/1pncipy | MajesticAd2862 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pncipy | false | null | t3_1pncipy | /r/LocalLLaMA/comments/1pncipy/i_trained_a_local_ondevice_3b_medical_note_model/ | false | false | 45 | null | |
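For anyone re-implementing the second hallucination view ("% of dialogues where ≥2 judges flagged a major hallucination"), it is just a 2-of-3 majority vote over the judges. A minimal sketch follows; the field names are my own assumption, not the benchmark's actual schema:

```python
def major_hallucination_rate(dialogues):
    # Share of dialogues where at least 2 of the 3 judges flagged a major fabrication.
    flagged = [
        d for d in dialogues
        if sum(1 for j in d["judge_flags"] if j["major_hallucination"]) >= 2
    ]
    return len(flagged) / len(dialogues)

toy = [
    {"judge_flags": [{"major_hallucination": True},
                     {"major_hallucination": True},
                     {"major_hallucination": False}]},
    {"judge_flags": [{"major_hallucination": False},
                     {"major_hallucination": False},
                     {"major_hallucination": False}]},
]
print(f"{major_hallucination_rate(toy):.0%}")  # 50% on this toy input
```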
Unpopular Opinion: "Uncensored" models are just as broken as "Censored" ones. We need a new architecture. | 0 | I've been lurking here for a while, running local GGUFs and testing every "Abliterated" or "Uncensored" finetune that drops. But I think we're hitting a wall.
We're fighting a binary war:
Blue Pill (OpenAI/Anthropic): The model is "safe" because it's blind. It treats reality like a padded room. It breaks when you ask it for complex, dark, or high-entropy data.
Red Pill (Grok/Uncensored Finetunes): We strip out the RLHF/refusals. The model becomes "free," but often it just trades safety bias for contrarian bias. It doesn't actually understand the risk; it just ignores it.
The "Rehab" Realization
I didn't learn neuroscience in a classroom; I learned it in rehab. I spent 25 years figuring out that the human brain is just a bio-electric machine. "Addiction" is just reward hacking—overclocking the dopamine function until the hardware crashes.
This made me realize why current AI alignment fails:
You don't cure an addict by hiding the drugs (Censorship). They just find a workaround.
You don't cure an addict by letting them overdose (Total Uncensored).
You cure the system by integrating the shadow.
The Unity Protocol (The Purple Pill)
I've been working on an architectural concept I call the Unity Protocol. The core thesis is that an Agent cannot be "safe" unless it is "Omni-Aware."
It needs a 3-layer stack:
Total Recall: The base model must be trained on everything (hate, violence, chaos). No filtering the dataset. You cannot navigate a territory you haven't mapped.
Consequence Engine: Instead of hard-coded refusals ("I can't do that"), the model runs a simulation. "If I output this, does it destabilize the user/system?" It rejects based on outcome, not dogma.
Contextual Gating: The filter sits at the output, not the weights.
We are trying to build "Morpheus" (waking the model up), but we keep building models that can't handle the real world.
I wrote up the full architectural breakdown in a short manifesto tonight because I'm tired of the "Red vs Blue" fake war.
If you're interested in the theory (or just want to roast my logic), check it out here:
ai-unity.carrd.co
Curious what you guys think—are we just going to keep playing whack-a-mole with safety filters, or do we need to fundamentally change how we align these things? | 2025-12-15T16:57:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pncino/unpopular_opinion_uncensored_models_are_just_as/ | AnaxAnx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pncino | false | null | t3_1pncino | /r/LocalLLaMA/comments/1pncino/unpopular_opinion_uncensored_models_are_just_as/ | false | false | self | 0 | null |
status of Nemotron 3 Nano support in llama.cpp | 175 | [https://github.com/ggml-org/llama.cpp/pull/18058](https://github.com/ggml-org/llama.cpp/pull/18058) | 2025-12-15T16:38:13 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pnc045 | false | null | t3_1pnc045 | /r/LocalLLaMA/comments/1pnc045/status_of_nemotron_3_nano_support_in_llamacpp/ | false | false | default | 175 | {'enabled': True, 'images': [{'id': 'glwccqikbe7g1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=108&crop=smart&auto=webp&s=de056f1e2598600d13271a80a1ad71a0e283fbee', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=216&crop=smart&auto=webp&s=dc509b7448d9a985b45f7a18beac75b1639fceab', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=320&crop=smart&auto=webp&s=204547576c08240b9289f4d2187ba3f6665d40ab', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=640&crop=smart&auto=webp&s=71433581f884eabbfd427838b07ab54bbdbbd438', 'width': 640}, {'height': 649, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=960&crop=smart&auto=webp&s=3873de468e0c6d93a98c9540c4d421a779099641', 'width': 960}, {'height': 730, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?width=1080&crop=smart&auto=webp&s=b88b1aadc17499669f03da6c1cbc51dc2ea91992', 'width': 1080}], 'source': {'height': 812, 'url': 'https://preview.redd.it/glwccqikbe7g1.png?auto=webp&s=5cfcfa4bc34cf7757a9978b41a4bc254d8959d17', 'width': 1200}, 'variants': {}}]} | |
Unpopular Opinion: "Uncensored" models are just as broken as "Censored" ones. We need a new architecture. | 1 | [removed] | 2025-12-15T16:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pnbrhn/unpopular_opinion_uncensored_models_are_just_as/ | AnaxAnx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnbrhn | false | null | t3_1pnbrhn | /r/LocalLLaMA/comments/1pnbrhn/unpopular_opinion_uncensored_models_are_just_as/ | false | false | self | 1 | null |
I need help finding a model | 1 | [removed] | 2025-12-15T16:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pnbacy/i_need_help_finding_a_model/ | PhilippeEiffel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnbacy | false | null | t3_1pnbacy | /r/LocalLLaMA/comments/1pnbacy/i_need_help_finding_a_model/ | false | false | self | 1 | null |
Chatterbox Turbo, new open-source voice AI model, just released on Hugging Face | 0 | Links:
\- Model (PyTorch): [https://huggingface.co/ResembleAI/chatterbox-turbo](https://huggingface.co/ResembleAI/chatterbox-turbo)
\- Model (ONNX): [https://huggingface.co/ResembleAI/chatterbox-turbo-ONNX](https://huggingface.co/ResembleAI/chatterbox-turbo-ONNX)
\- GitHub: [https://github.com/resemble-ai/chatterbox](https://github.com/resemble-ai/chatterbox)
\- Demo: [https://huggingface.co/spaces/ResembleAI/chatterbox-turbo-demo](https://huggingface.co/spaces/ResembleAI/chatterbox-turbo-demo) | 2025-12-15T16:08:45 | https://v.redd.it/6v5yql484e7g1 | xenovatech | /r/LocalLLaMA/comments/1pnb824/chatterbox_turbo_new_opensource_voice_ai_model/ | 1970-01-01T00:00:00 | 0 | {} | 1pnb824 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6v5yql484e7g1/DASHPlaylist.mpd?a=1768536534%2CMzlkYTY3NzRmNmQzZjVmNmViZThjODExZjgyZWZlOTdlM2RlYzE3YmY1YTkxZDdlYWZjOGFmMjllOWExYTI4Zg%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/6v5yql484e7g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6v5yql484e7g1/HLSPlaylist.m3u8?a=1768536534%2CNzRkMWE1NDM4ZmU2MzU2ZDhiODIzMDJhYTljMGFkZGY5YzNjMWI2MTNmNjRiZjlmMWMyMTM5ODllYjExYjAwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6v5yql484e7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1pnb824 | /r/LocalLLaMA/comments/1pnb824/chatterbox_turbo_new_opensource_voice_ai_model/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=108&crop=smart&format=pjpg&auto=webp&s=cd189e10c52f0e42cde6e8a5200ecc762274f9aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=216&crop=smart&format=pjpg&auto=webp&s=315e32871860a1f93b6ce23e0aa89167cd477c82', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=320&crop=smart&format=pjpg&auto=webp&s=44f7b30b99f16858488f2a1c3a779a4bae14811d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=640&crop=smart&format=pjpg&auto=webp&s=36c7397b90efb91abc3946d12e7d4fbbb05cbadf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=960&crop=smart&format=pjpg&auto=webp&s=70ba0b59c81cb5e835e73fb6652320524a6670c1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=50c05ad1b1d767e500699c717fff4d509d0cab74', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Mno1ZHBiNTg0ZTdnMetROQBwb-dMzbNK88p-4KlSnzkAfcO7Jy5xOmtEL7Fy.png?format=pjpg&auto=webp&s=f74ad7b7b145c2a48eaec34aa59035da4de51a78', 'width': 1920}, 'variants': {}}]} | |
LLM Recommendation < 10b param for pentest + tool calling? | 4 | Guys, I've got an RTX 4060 with 8GB VRAM, a 7500F, and 32GB of DDR5-6000. My goal is to automate pentest work. I want a model that can analyze raw HTTP requests and responses from Burp Suite. It also needs tool calling. Any recommendations for this specific scenario? | 2025-12-15T15:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pnayh0/llm_recommendation_10b_param_for_pentest_tool/ | sahruldotid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnayh0 | false | null | t3_1pnayh0 | /r/LocalLLaMA/comments/1pnayh0/llm_recommendation_10b_param_for_pentest_tool/ | false | false | self | 4 | null |
BluePrint: I've updated my spec/test/review LLM programming system prompt to better handle a more dialectic approach to coding. | 2 | Originally, I'd been thinking of BluePrint as a sort of Domain Specific Language that the LLM would then use to create code, but over time I found myself using the prompt to have the LLM create detailed engineering plans before producing code output. I added a few more behaviors that I found myself doing anyway ( Ask me one question at a time, then update the spec ).. so I've updated the prompt to get rid of some of the bloat, and focus on the conversational turns. | 2025-12-15T15:46:17 | https://github.com/bigattichouse/blueprint | bigattichouse | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pnan0z | false | null | t3_1pnan0z | /r/LocalLLaMA/comments/1pnan0z/blueprint_ive_updated_my_spectestreview_llm/ | false | false | default | 2 | null |
Intent vectors for AI search + knowledge graphs for AI analytics | 2 | Hey all, we started building an AI project manager. Users needed to (1) search for context about projects, and (2) discover insights like open tasks holding up a launch.
Vector search was terrible at #1 (couldn't connect that auth bugs + App Store rejection + PR delays were all part of the same launch goal).
Knowledge graphs were too slow for #1, but perfect for #2 (structured relationships, great for UIs).
We spent months trying to make these work together. Then we started talking to other teams building AI agents for internal knowledge search, edtech, commerce, security, and sales - we realized everyone was hitting the exact same two problems. Same architecture, same pain points.
So we pivoted to build Papr — a unified memory layer that combines:
* Intent vectors: Fast goal-oriented search for conversational AI
* Knowledge graph: Structured insights for analytics and dashboard generation
* One API: Add unstructured content once, query for search or discover insights
And just open sourced it.
**How intent vectors work (search problem)**
The problem with vector search: it's fast but context-blind. Returns semantically similar content but misses goal-oriented connections.
Example: User goal is "Launch mobile app by Dec 5". Related memories include:
* Code changes (engineering)
* PR strategy (marketing)
* App store checklist (operations)
* Marketing timeline (planning)
These are far apart in vector space (different keywords, different topics). Traditional vector search returns fragments. You miss the complete picture.
Our solution: Group memories by user intent and goals stored as a new vector embedding (also known as associative memory - per Google's latest research).
When you add a memory:
1. Detect the user's goal (using LLM + context)
2. Find top 3 related memories serving that goal
3. Combine all 4 → generate NEW embedding
4. Store at different position in vector space (near "product launch" goals, not individual topics)
Query "What's the status of mobile launch?" finds the goal-group instantly (one query, sub-100ms), returns all four memories—even though they're semantically far apart.
This is what got us #1 on Stanford's STaRK benchmark (91%+ retrieval accuracy). The benchmark tests multi-hop reasoning—queries needing information from multiple semantically-different sources. Pure vector search scores \~60%, Papr scores 91%+.
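A minimal sketch of the grouping step described above (fold the new memory and its top-3 goal-related neighbours into one group vector). The equal weighting and unit-normalisation are my assumptions rather than Papr's documented implementation; the weighting question comes up again in the feedback section below.

```python
import numpy as np

def group_embedding(new_vec, neighbour_vecs):
    # Combine the new memory with its top-3 goal-related neighbours into one vector.
    stacked = np.vstack([new_vec, *neighbour_vecs])
    combined = stacked.mean(axis=0)              # equal weights: an assumption, see feedback section
    return combined / np.linalg.norm(combined)   # unit-normalise for cosine search
```

Because the group vector sits near other "product launch"-style goals rather than near any single topic, one nearest-neighbour query against it can surface all four memories at once.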
**Automatic knowledge graphs (structured insights)**
Intent graph solves search. But production AI agents also need structured insights for dashboards and analytics.
The problem with knowledge graphs:
1. Hard to get unstructured data IN (entity extraction, relationship mapping)
2. Hard to query with natural language (slow multi-hop traversal)
3. Fast for static UIs (predefined queries), slow for dynamic assistants
Our solution:
* Automatically extract entities and relationships from unstructured content
* Cache common graph patterns and match them to queries (speeds up retrieval)
* Expose GraphQL API so LLMs can directly query structured data
* Support both predefined queries (fast, for static UIs) and natural language (for dynamic assistants)
**One API for both**
\# Add unstructured content once
`await papr.memory.add({`
`"content": "Sarah finished mobile app code. Due Dec 5. Blocked by App Store review."`
`})`
This automatically indexes the memory in both systems:
\- Intent graph: groups with other "mobile launch" goal memories
\- Knowledge graph: extracts entities (Sarah, mobile app, Dec 5, blocker)
Query in natural language or GraphQL:
`results = await papr.memory.search("What's blocking mobile launch?")`
→ Returns complete context (code + marketing + PR)
LLM or developer directly queries GraphQL (fast, precise)
`query = """`
`query {`
`tasks(filter: {project: "mobile-launch"}) {`
`title`
`deadline`
`assignee`
`status`
`}`
`}`
`"""`
`const response = await client.graphql.query();`
→ Returns structured data for dashboard/UI creation
**What I'd Love Feedback On**
1. **Evaluation** \- We chose Stanford STARK's benchmark because it required multi-hop search but it only captures search, not insights we generate. Are there better evals we should be looking at?
2. **Graph pattern caching** \- We cache unique and common graph patterns stored in the knowledge graph (i.e. node -> edge -> node), then match queries to them. What patterns should we prioritize caching? How do you decide which patterns are worth the storage/compute trade-off?
3. **Embedding weights** \- When combining 4 memories into one group embedding, how should we weight them? Equal weights? Weight the newest memory higher? Let the model learn optimal weights?
4. **GraphQL vs Natural Language** \- Should LLMs always use GraphQL for structured queries (faster, more precise), or keep natural language as an option (easier for prototyping)? What are the trade-offs you've seen?
We're here all day to answer questions and share what we learned. Especially curious to hear from folks building RAG systems in production—how do you handle both search and structured insights?
\---
Try it:
\- Developer dashboard: [platform.papr.ai](http://platform.papr.ai/) (free tier)
\- Open source: [https://github.com/Papr-ai/memory-opensource](https://github.com/Papr-ai/memory-opensource)
\- SDK: npm install papr/memory or pip install papr\_memory | 2025-12-15T15:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pnah07/intent_vectors_for_ai_search_knowledge_graphs_for/ | remoteinspace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pnah07 | false | null | t3_1pnah07 | /r/LocalLLaMA/comments/1pnah07/intent_vectors_for_ai_search_knowledge_graphs_for/ | false | false | self | 2 | null |
Key Highlights of NVIDIA’s New Model: Nemotron 3 | 51 | * **Hybrid Mamba-Transformer MoE architecture:** Mamba‑2 for long-context, low-latency inference combined with transformer attention for high-accuracy, fine-grained reasoning
* **31.6B total parameters, \~3.6B active per token:** Designed for high throughput and low latency
* **Exceptional inference efficiency:** Up to 4x faster than Nemotron Nano 2 and up to 3.3x faster than leading models in its size category
* **Best-in-class reasoning accuracy:** Across reasoning, coding, tools, and multi-step agentic tasks
* **Reasoning controls:** Reasoning ON/OFF modes plus a configurable thinking budget to cap “thinking” tokens and keep inference cost predictable
* **1M-token context window:** Ideal for long-horizon workflows, retrieval-augmented tasks, and persistent memory
* **Fully open:** Open Weights, datasets, training recipes, and framework
* **Easy deployment:** Seamless serving with vLLM and SGLang, and integration via OpenRouter and popular inference service providers
* **License:** Released under the Nvidia open model license.
Source: [Hugging Face Blog post](https://huggingface.co/blog/nvidia/nemotron-3-nano-efficient-open-intelligent-models)
Nemotron 3 Model family : [https://huggingface.co/collections/nvidia/nvidia-nemotron-v3](https://huggingface.co/collections/nvidia/nvidia-nemotron-v3)
| 2025-12-15T15:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pn9j07/key_highlights_of_nvidias_new_model_nemotron_3/ | Dear-Success-1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn9j07 | false | null | t3_1pn9j07 | /r/LocalLLaMA/comments/1pn9j07/key_highlights_of_nvidias_new_model_nemotron_3/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=108&crop=smart&auto=webp&s=5f45c00a39403e5433fda1619bc97f428316b25d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=216&crop=smart&auto=webp&s=5f931c37333cbd6c1a4a6223de3295d069bd77f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=320&crop=smart&auto=webp&s=f9aaae0346f379a12b668fc7b982b5525d55c186', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=640&crop=smart&auto=webp&s=17223eaa2cb6692d23cbfa80ac1514b5639d5c3a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=960&crop=smart&auto=webp&s=1682307e36b6fff8c090f85b4203c1dc8f660934', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?width=1080&crop=smart&auto=webp&s=53c6bcea0816f193126000ebdf2dd3109127086d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dohQEOpvYQggT001Oe3w0Xc5uTrRqEHA-Iv_vvmCrAQ.png?auto=webp&s=a678c2705501a3c1c771cd8f7997aa6983d74e1b', 'width': 1200}, 'variants': {}}]} |
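As a rough sketch of the "seamless serving with vLLM" bullet, the offline Python API would look something like the following. Whether this exact checkpoint ID is supported by your installed vLLM version is an assumption on my part, so check the model card for the recommended setup.

```python
from vllm import LLM, SamplingParams

# Checkpoint name taken from the Nemotron 3 collection; support in your vLLM build is assumed.
llm = LLM(model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16", trust_remote_code=True)
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(["Summarize the Nemotron 3 Nano architecture in two sentences."], params)
print(outputs[0].outputs[0].text)
```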
Local and offline AI assistant - LLM, RAG, speech recognition (PTT) and CLI launch | 1 | [removed] | 2025-12-15T14:47:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pn95i6 | false | null | t3_1pn95i6 | /r/LocalLLaMA/comments/1pn95i6/local_and_offline_ai_assistant_llm_rag_speech/ | false | false | default | 1 | null | ||
Local and offline AI assistant - LLM, RAG, speech recognition (PTT) and CLI launch | 1 | [removed] | 2025-12-15T14:44:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pn9362/local_and_offline_ai_assistant_llm_rag_speech/ | Max_betelll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn9362 | false | null | t3_1pn9362 | /r/LocalLLaMA/comments/1pn9362/local_and_offline_ai_assistant_llm_rag_speech/ | false | false | self | 1 | null |
GLM 4.6V support coming to llama.cpp | 85 | 2025-12-15T14:36:56 | https://github.com/ggml-org/llama.cpp/pull/18042 | tarruda | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pn8ww0 | false | null | t3_1pn8ww0 | /r/LocalLLaMA/comments/1pn8ww0/glm_46v_support_coming_to_llamacpp/ | false | false | default | 85 | null | |
NVIDIA releases Nemotron 3 Nano, a new 30B hybrid reasoning model! | 814 | Nemotron 3 has a 1M context window and the best in class performance for SWE-Bench, reasoning and chat. | 2025-12-15T14:34:28 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn8upp | false | null | t3_1pn8upp | /r/LocalLLaMA/comments/1pn8upp/nvidia_releases_nemotron_3_nano_a_new_30b_hybrid/ | false | false | default | 814 | {'enabled': True, 'images': [{'id': 'sic85bvhpd7g1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=108&crop=smart&auto=webp&s=a98b7defba9771316979c470e6f552ad109f24d8', 'width': 108}, {'height': 226, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=216&crop=smart&auto=webp&s=788bcbade8abe6145e96dc00e63f1feebb3bfe45', 'width': 216}, {'height': 336, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=320&crop=smart&auto=webp&s=0e1329a08831b514920d55528d2d43da3c02d5bd', 'width': 320}, {'height': 672, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=640&crop=smart&auto=webp&s=d01067e3a3899e680b913799c37c8ef9b609ff4c', 'width': 640}, {'height': 1008, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=960&crop=smart&auto=webp&s=afadf7b28d4eae938f29e02ea7b19990379a3642', 'width': 960}, {'height': 1134, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?width=1080&crop=smart&auto=webp&s=0e395a53a3e01345c268d73b3b0ee72387a2ffc1', 'width': 1080}], 'source': {'height': 1261, 'url': 'https://preview.redd.it/sic85bvhpd7g1.jpeg?auto=webp&s=0deac76c1b6ac3f057ebd55ed6b309b9bb703000', 'width': 1200}, 'variants': {}}]} | |
70B parameter model Vram requirements and Cheap GPUs | 2 | Guys, I got an RTX 4090, I wanted to buy an extra card that is extremely cheap, nothing more than €200 and I was wondering which GPU should I buy, because im confused with the tensor cores vs cuda cores, the VRAM, the architecture speed, the compatibility with my current card. I want to have a fast inference. Please suggest something thank you. | 2025-12-15T14:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pn8j29/70b_parameter_model_vram_requirements_and_cheap/ | Flkhuo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn8j29 | false | null | t3_1pn8j29 | /r/LocalLLaMA/comments/1pn8j29/70b_parameter_model_vram_requirements_and_cheap/ | false | false | self | 2 | null |
NVIDIA Nemotron 3 Nano 30B A3B released | 277 | [https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16)
[https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16)
Nvidia blog post: [https://developer.nvidia.com/blog/inside-nvidia-nemotron-3-techniques-tools-and-data-that-make-it-efficient-and-accurate/](https://developer.nvidia.com/blog/inside-nvidia-nemotron-3-techniques-tools-and-data-that-make-it-efficient-and-accurate/)
HF blog post: [https://huggingface.co/blog/nvidia/nemotron-3-nano-efficient-open-intelligent-models](https://huggingface.co/blog/nvidia/nemotron-3-nano-efficient-open-intelligent-models)
Highlights (copy-pasta from HF blog):
* **Hybrid Mamba-Transformer MoE architecture:** Mamba‑2 for long-context, low-latency inference combined with transformer attention for high-accuracy, fine-grained reasoning
* **31.6B total parameters, \~3.6B active per token:** Designed for high throughput and low latency
* **Exceptional inference efficiency:** Up to 4x faster than Nemotron Nano 2 and up to 3.3x faster than leading models in its size category
* **Best-in-class reasoning accuracy:** Across reasoning, coding, tools, and multi-step agentic tasks
* **Reasoning controls:** Reasoning ON/OFF modes plus a configurable thinking budget to cap “thinking” tokens and keep inference cost predictable
* **1M-token context window:** Ideal for long-horizon workflows, retrieval-augmented tasks, and persistent memory
* **Fully open:** Open Weights, datasets, training recipes, and framework
* **A full open data stack**: 3T new high-quality pre-training tokens, 13M cross-disciplinary post-training samples, 10+ RL environments with datasets covering more than 900k tasks in math, coding, reasoning, and tool-use, and \~11k agent-safety traces
* **Easy deployment:** Seamless serving with vLLM and SGLang, and integration via OpenRouter, popular inference service providers, and [build.nvidia.com](http://build.nvidia.com) endpoints
* **License:** Released under the [nvidia-open-model-license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) | 2025-12-15T14:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pn8h5h/nvidia_nemotron_3_nano_30b_a3b_released/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn8h5h | false | null | t3_1pn8h5h | /r/LocalLLaMA/comments/1pn8h5h/nvidia_nemotron_3_nano_30b_a3b_released/ | false | false | self | 277 | {'enabled': False, 'images': [{'id': 'Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=108&crop=smart&auto=webp&s=0c202c3786d6a6d3a95cd4994707211166838abd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=216&crop=smart&auto=webp&s=939d88a5e22b8aa75874bd13c26361847c8bf9b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=320&crop=smart&auto=webp&s=00a1b2c259e5f703681b46bfdc034b1346025773', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=640&crop=smart&auto=webp&s=2c324e99ba6ee98f04b46d970443450878613fb1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=960&crop=smart&auto=webp&s=b38d90fa8cf7f2f9fc8933834f547bd4c87b6ff3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?width=1080&crop=smart&auto=webp&s=013d7c7867206b834814fd288c55fcec974ce76a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Kqr3bYBuIOnPbDi8a7HxHbmPD0TBZSoCOctOStWMhd0.png?auto=webp&s=2ec7bb347ddd9ce3826b1e21540e1251fcfb7a02', 'width': 1200}, 'variants': {}}]} |
Building a 'digital me' - which models don't drift into AI assistant mode? | 5 | Hey everyone 👋
So I've been going down this rabbit hole for a while now and I'm kinda stuck. Figured I'd ask here before I burn more compute.
What I'm trying to do:
Build a local model that sounds like me - my texting style, how I actually talk to friends/family, my mannerisms, etc. Not trying to make a generic chatbot. I want something where if someone texts "my" AI, they wouldn't be able to tell the difference. Yeah I know, ambitious af.
What I'm working with:
5090 FE (so I can run 8B models comfortably, maybe 12B quantized)
~47,000 raw messages from WhatsApp + iMessage going back years
After filtering for quality, I'm down to about 2,400 solid examples
What I've tried so far:
1. LLaMA 2 7B Chat + LoRA fine-tuning - This was my first attempt. The model learns something but keeps slipping back into "helpful assistant" mode. Like it'll respond to a casual "what's up" with a paragraph about how it can help me today 🙄
2. Multi-stage data filtering pipeline - Built a whole system: rule-based filters → soft scoring → LLM validation (ran everything through GPT-4o and Claude). Thought better data = better output. It helped, but not enough.
3. Length calibration - Noticed my training data had varying response lengths but the model always wanted to be verbose. Tried filtering for shorter responses + synthetic short examples. Got brevity but lost personality.
4. Personality marker filtering - Pulled only examples with my specific phrases, emoji patterns, etc. Still getting AI slop in the outputs. (A toy sketch of this filtering is right below.)
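A toy version of steps 3 and 4, just to make the filtering concrete; the marker list, banned phrases, and word-count threshold are made up for illustration, not taken from my actual pipeline:

```python
MARKERS = {"lol", "tbh", "ngl", "bruh"}                 # phrases/emoji you actually use
BANNED = {"certainly", "i'd be happy to", "feel free to"}  # assistant-speak to drop

def keep(example: dict, max_words: int = 25) -> bool:
    reply = example["reply"].lower()
    short_enough = len(reply.split()) <= max_words           # length calibration
    sounds_like_me = any(m in reply for m in MARKERS)        # personality markers
    no_assistant_slop = not any(b in reply for b in BANNED)  # filter AI slop
    return short_enough and sounds_like_me and no_assistant_slop

# dataset = [{"prompt": "...", "reply": "..."}, ...]
# filtered = [ex for ex in dataset if keep(ex)]
```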
The core problem:
No matter what I do, the base model's "assistant DNA" bleeds through. It uses words I'd never use ("certainly", "I'd be happy to", "feel free to"). The responses are technically fine but they don't feel like me.
What I'm looking for:
Models specifically designed for roleplay/persona consistency (not assistant behavior)
Anyone who's done something similar - what actually worked?
Base models vs instruct models for this use case?
Any merges or fine-tunes that are known for staying in character?
I've seen some mentions of Stheno, Lumimaid, and some "anti-slop" models but there's so many options I don't know where to start. Running locally is a must.
If anyone's cracked this or even gotten close, I'd love to hear what worked. Happy to share more details about my setup/pipeline if helpful.
Thanks 🙏 | 2025-12-15T14:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pn85kn/building_a_digital_me_which_models_dont_drift/ | Proud-Journalist-611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn85kn | false | null | t3_1pn85kn | /r/LocalLLaMA/comments/1pn85kn/building_a_digital_me_which_models_dont_drift/ | false | false | self | 5 | null |
[Project] Built a semantic search API for Federal Acquisition Regulations (FAR) - pre-vectorized for AI agents | 3 | I built an API that provides semantic search over Federal Acquisition Regulations for GovCon AI systems and compliance bots.
**What it does:**
\- Semantic search across 617 FAR Part 52 clauses
\- Pre-vectorized with 384-dim embeddings (all-MiniLM-L6-v2)
\- Returns relevant clauses with similarity scores
\- Daily auto-updates from [acquisition.gov](http://acquisition.gov)
\- OpenAPI spec for AI agent integration
**Why it exists:**
If you're building AI for government contracting, your LLM will hallucinate legal citations. A wrong FAR clause = disqualification. This solves that.
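To make the idea concrete, here is a rough sketch of the kind of local similarity search the API wraps, using the same all-MiniLM-L6-v2 embedding model listed above. This is not the API's own code, and the clause snippets are placeholders, not real FAR text.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings, as in the API

clauses = {  # placeholder snippets, not the actual clause bodies
    "52.204-21": "Basic safeguarding of covered contractor information systems ...",
    "52.219-14": "Limitations on subcontracting ...",
}
ids = list(clauses)
vecs = model.encode([clauses[i] for i in ids], normalize_embeddings=True)

def search(query: str, top_k: int = 1):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vecs @ q                         # cosine similarity on unit vectors
    order = np.argsort(-scores)[:top_k]
    return [(ids[i], float(scores[i])) for i in order]

print(search("cybersecurity requirements for contractor systems"))
```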
**Try it free:**
[https://blueskylineassets.github.io/far-rag-api/honeypot/](https://blueskylineassets.github.io/far-rag-api/honeypot/)
**API access (RapidAPI):**
[https://rapidapi.com/yschang/api/far-rag-federal-acquisition-regulation-search](https://rapidapi.com/yschang/api/far-rag-federal-acquisition-regulation-search)
**Code (open source):**
[https://github.com/blueskylineassets/far-rag-api](https://github.com/blueskylineassets/far-rag-api)
Built with FastAPI + sentence-transformers. All data is public domain (17 U.S.C. § 105).
Open to feedback! | 2025-12-15T14:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pn828t/project_built_a_semantic_search_api_for_federal/ | blueskylineassets | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn828t | false | null | t3_1pn828t | /r/LocalLLaMA/comments/1pn828t/project_built_a_semantic_search_api_for_federal/ | false | false | self | 3 | null |
𝚕𝚕𝚊𝚖𝚊.𝚚𝚝𝚌𝚛𝚎𝚊𝚝𝚘𝚛 v3.0.0 is out 🎉 | 25 | The screencast was done on a MacBook M3 with `llama-server` running `gpt-oss 20b` and the following prompt: *"write a c++ program that prints the current moon phase. use emojis. use cmake. open, build and run in Qt Creator."* | 2025-12-15T13:43:26 | https://v.redd.it/cyhyeja3gd7g1 | cristianadam | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn7o5f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cyhyeja3gd7g1/DASHPlaylist.mpd?a=1768398223%2CYWM5NzlkNzlhZTVmMzc1YzRlZGIwYWY4ZTY0NTAyNTg0YTE0NDAzYWM5ZTdjM2JhNGZmODQ3OWMzYWYxYzc0Mg%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/cyhyeja3gd7g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/cyhyeja3gd7g1/HLSPlaylist.m3u8?a=1768398223%2CMDM0OTVhNzdmZjUyZmZmMmViNzc5OTZjYWFkYWE2NDgwZTdmYTdjZGY3OTgxMTUwMTQxOTRlMTlkMmI2YzQ1Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cyhyeja3gd7g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1544}} | t3_1pn7o5f | /r/LocalLLaMA/comments/1pn7o5f/𝚕𝚕𝚊𝚖𝚊𝚚𝚝𝚌𝚛𝚎𝚊𝚝𝚘𝚛_v300_is_out/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=108&crop=smart&format=pjpg&auto=webp&s=9061927b8fe349131263bf6879033d46c5c2b2dd', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=216&crop=smart&format=pjpg&auto=webp&s=b8840561a320d5772a37b3af97dc524cf260dd70', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=320&crop=smart&format=pjpg&auto=webp&s=a15a482e6446c9bb38d6d6e3413e5de3065e6098', 'width': 320}, {'height': 447, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=640&crop=smart&format=pjpg&auto=webp&s=b3342a0436cbc7ef5b89dafe7295e4230aa73490', 'width': 640}, {'height': 671, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=960&crop=smart&format=pjpg&auto=webp&s=1f693e91f66ca59d12ca2e7321eec7a2142b3483', 'width': 960}, {'height': 755, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=58a51b888702f9223e367995d03a8ec00ee16102', 'width': 1080}], 'source': {'height': 1884, 'url': 'https://external-preview.redd.it/b2RvY2Z3YTNnZDdnMQDCmACWN_s8k6H2Y-UiTssPZ2QAPGgBtwTl1Ibw4ttz.png?format=pjpg&auto=webp&s=e8ffdff14a1270ab6caa0cd9cc16dc20c8e6b1d1', 'width': 2692}, 'variants': {}}]} | |
Custom liquid cooling solution for Intel Arc Pro B60 Dual used in local LLM servers | 2 | Hey everyone,
I wanted to share a small **custom cooling experiment** I’ve been working on recently.
I’ve been helping a few people build **local LLM / inference servers** based on **Intel Arc Pro B60 Dual** cards. As you probably know, airflow can get tricky with these dual-GPU boards in dense setups, especially when running sustained workloads (inference / fine-tuning).
Instead of going with oversized generic blocks, I designed **very compact, low-profile custom waterblocks**, focused on:
* short coolant path
* dense micro-channels over the GPU die
* server-friendly form factor
* reliability over looks
This is **not a commercial post**, just sharing a hands-on approach and seeing if others here have experience cooling Arc Pro cards in LLM setups.
I’m especially curious about:
* long-term thermal behavior on Arc Pro under LLM workloads
* anyone running Arc B60 / B580 for inference
* alternative cooling approaches you’ve tested
Happy to discuss or answer technical questions
https://preview.redd.it/bu5lpzwoed7g1.png?width=1500&format=png&auto=webp&s=efe24b33eb1b7fd6b8c640a501f6dfb1f18cf232
https://preview.redd.it/9s4efuuqed7g1.png?width=1500&format=png&auto=webp&s=b83f6352702504b69f25fe500524d5bfdb598e22
| 2025-12-15T13:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pn7hny/custom_liquid_cooling_solution_for_intel_arc_pro/ | Valdus_Heresi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn7hny | false | null | t3_1pn7hny | /r/LocalLLaMA/comments/1pn7hny/custom_liquid_cooling_solution_for_intel_arc_pro/ | false | false | 2 | null | |
Custom liquid cooling solution for Intel Arc Pro B60 Dual used in local LLM servers | 1 | Hey everyone,
I wanted to share a small **custom cooling experiment** I’ve been working on recently.
I’ve been helping a few people build **local LLM / inference servers** based on **Intel Arc Pro B60 Dual** cards. As you probably know, airflow can get tricky with these dual-GPU boards in dense setups, especially when running sustained workloads (inference / fine-tuning).
Instead of going with oversized generic blocks, I designed **very compact, low-profile custom waterblocks**, focused on:
* short coolant path
* dense micro-channels over the GPU die
* server-friendly form factor
* reliability over looks
This is **not a commercial post**, just sharing a hands-on approach and seeing if others here have experience cooling Arc Pro cards in LLM setups.
I’m especially curious about:
* long-term thermal behavior on Arc Pro under LLM workloads
* anyone running Arc B60 / B580 for inference
* alternative cooling approaches you’ve tested
Happy to discuss or answer technical questions 👍 | 2025-12-15T13:31:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pn7egt/custom_liquid_cooling_solution_for_intel_arc_pro/ | Valdus_Heresi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn7egt | false | null | t3_1pn7egt | /r/LocalLLaMA/comments/1pn7egt/custom_liquid_cooling_solution_for_intel_arc_pro/ | false | false | self | 1 | null |
Alibaba Tongyi Open Sources Two Audio Models: Fun-CosyVoice 3.0 (TTS) and Fun-ASR-Nano-2512 (ASR) | 111 | Fun-ASR-Nano (0.8B) — Open-sourced
- Lightweight Fun-ASR variant
- Lower inference cost
- Local deployment & custom fine-tuning supported
Fun-CosyVoice3 (0.5B) — Open-sourced
- Zero-shot voice cloning
- Local deployment & secondary development ready | 2025-12-15T13:28:12 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn7c3f | false | null | t3_1pn7c3f | /r/LocalLLaMA/comments/1pn7c3f/alibaba_tongyi_open_sources_two_audio_models/ | false | false | default | 111 | {'enabled': True, 'images': [{'id': '1v2kztejdd7g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=108&crop=smart&auto=webp&s=f9d4435b7def6d266dcff1db6e18186f0eedb1b1', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=216&crop=smart&auto=webp&s=7ec2a0a90c656d659a761c65d72dd98654eac2ce', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=320&crop=smart&auto=webp&s=75c6dae572177d7b6c23b9eeeebc62a757621d88', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=640&crop=smart&auto=webp&s=756f684594b46bdb6022b8853734dc4f1ad2ec1c', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=960&crop=smart&auto=webp&s=d1007895ef5ebd2bb482f1ca81b83d49e816c1ba', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?width=1080&crop=smart&auto=webp&s=9b83f91f3f067662f5efea2804c0827d78fd07a8', 'width': 1080}], 'source': {'height': 675, 'url': 'https://preview.redd.it/1v2kztejdd7g1.jpeg?auto=webp&s=dcb72e51424f899c98c8adfa0d2369325d33c450', 'width': 1200}, 'variants': {}}]} | |
Local-first agent evals w/ Ollama as the judge (no API costs) | 0 | Built a testing framework for AI agents and wanted to share the local-first setup since this sub gets it.
The problem: every time I tweaked prompts, I had to burn OpenAI credits just to check if the agent still worked. Got annoying fast.
The fix: use a local Ollama model as the judge (LLM-as-judge) so grading stays on my machine.
Install (brew):
brew install ollama
brew services start ollama # or: ollama serve
Pull a judge model:
ollama pull llama3.2:3b
Try it with a demo agent + demo test:
pip install evalview
evalview quickstart
evalview run --judge-provider ollama --judge-model llama3.2:3b
https://preview.redd.it/v3akbzt58d7g1.png?width=1266&format=png&auto=webp&s=9f8c58b0e3862077feaad81422560b2bd710c6c4
The judge scores whether the agent did the right thing (tool usage / hallucinations / staying on task).
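For anyone curious what that judge call boils down to, here is a standalone sketch of the LLM-as-judge idea against Ollama's local REST API. evalview's internals may well differ; the prompt and pass/fail format are just illustrative.

```python
import requests

def judge(task: str, agent_output: str, model: str = "llama3.2:3b") -> str:
    prompt = (
        f"Task given to the agent:\n{task}\n\n"
        f"Agent output:\n{agent_output}\n\n"
        "Did the agent use tools correctly, avoid hallucinations, and stay on task? "
        "Answer PASS or FAIL, then one sentence of reasoning."
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    return r.json()["response"]

print(judge("Fetch the weather for Paris", "Called weather_api(Paris); it's 12°C and cloudy."))
```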
I like it because I can iterate all day without watching credits burn, and eval data stays local.
For prod evals I can swap the judge to a cloud model with the same test suite.
Repo: [https://github.com/hidai25/eval-view](https://github.com/hidai25/eval-view)
What local models do you use for judging/grading? I’ve been on llama3.2:3b but curious what’s best for eval-style scoring.
| 2025-12-15T13:00:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pn6qzc/localfirst_agent_evals_w_ollama_as_the_judge_no/ | hidai25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn6qzc | false | null | t3_1pn6qzc | /r/LocalLLaMA/comments/1pn6qzc/localfirst_agent_evals_w_ollama_as_the_judge_no/ | false | false | 0 | null | |
Pc Case for Rtx 6000 Pro | 0 | I know this has been asked, and I have read a lot. Today is delivery day of my new GPU. I am guessing installing a waterblock on this board voids the warranty, my current case is a 4000d. The Fractal Torrent seems to be a popular recommendation. Also have a Fractal Design 7 XL someone wants to give me.
But PC Case idea given Blackwell needs to be aircooled? I don't care about looks too much, or noise if it produces great cooling. I would have a Aio cpu cooler with a 360mm. Could switch that out to a custom loop as I have the parts if for some reason needed to.
CPU: Intel Ultra 9 285K ( guessing its fine for now but will likely switch to Epyc.)
Motherboard: MSI Z980
Memory: 128 GB RAM
Graphics Cards:
NVIDIA RTX 6000 Pro Blackwell Workstation replacing NVIDIA RTX 4090 WF. | 2025-12-15T12:54:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pn6nch/pc_case_for_rtx_6000_pro/ | nofilmincamera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn6nch | false | null | t3_1pn6nch | /r/LocalLLaMA/comments/1pn6nch/pc_case_for_rtx_6000_pro/ | false | false | self | 0 | null |
Title: Local-first agent evals: Ollama as the judge. No API costs. | 1 | Built a testing framework for AI agents and wanted to share the local-first setup since this sub gets it.
The problem: every time I tweaked prompts, I had to burn OpenAI credits just to check if the agent still worked. Got annoying fast.
The fix: use a local Ollama model as the judge (LLM-as-judge) so grading stays on my machine.
Install (brew):
brew install ollama
brew services start ollama # or: ollama serve
Pull a judge model:
ollama pull llama3.2:3b
Try it with a demo agent + demo test:
pip install evalview
evalview quickstart
evalview run --judge-provider ollama --judge-model llama3.2:3b
https://preview.redd.it/k7eoztk17d7g1.png?width=1266&format=png&auto=webp&s=a63fb7850bca63a2fe861fe066b1528e3dbfb720
The judge scores whether the agent did the right thing (tool usage / hallucinations / staying on task).
I like it because I can iterate all day without watching credits burn, and eval data stays local.
For production evals I can swap the judge to a cloud model with the same test suite.
Repo: [https://github.com/hidai25/eval-view](https://github.com/hidai25/eval-view)
What local models do you use for judging/grading? I’ve been on llama3.2:3b but curious what’s best for eval-style scoring.
| 2025-12-15T12:51:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pn6l8d/title_localfirst_agent_evals_ollama_as_the_judge/ | hidai25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn6l8d | false | null | t3_1pn6l8d | /r/LocalLLaMA/comments/1pn6l8d/title_localfirst_agent_evals_ollama_as_the_judge/ | false | false | 1 | null | |
Are we hitting diminishing returns on model scaling — and shifting to system-level scaffolding instead? | 0 | Frontier language models have clearly improved in fluency, reasoning, and generalization over the last few release cycles, but many of those gains now feel incremental rather than paradigm-shifting. At the same time, persistent limitations remain: bounded context, weak continual learning, fragile self-verification, and limited ability to reason over time rather than per prompt.
I’m increasingly convinced the next major capability jump won’t come from scaling models alone, but from architectural scaffolding around them. Specifically: keeping a frontier model (or even a localized one) as the primary conversational and generative core, while selectively invoking a multi-agent scaffold only when tasks exceed the model’s innate capabilities. Most interactions would bypass this entirely, preserving latency and conversational quality. The scaffold would engage only for tasks that benefit from memory, verification, or multi-perspective reasoning.
In this route, the scaffold consists of specialized agents (retrieval, critique, synthesis, verification, etc.) coordinated by a controller, with long-term knowledge externalized into layered memory structures (episodic records, semantic claims, relational graphs) including provenance, contradiction tracking, and scheduled consolidation. This enables effective continual learning and epistemic correction without modifying model weights, letting the base model operate at full strength where it already excels.
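As a purely illustrative sketch of that conditional routing, where every name is hypothetical rather than an existing framework, the control flow could look roughly like this:

```python
def needs_scaffold(task: str) -> bool:
    # cheap router: escalate only when the request clearly exceeds a single model call
    triggers = ("deep research", "verify sources", "remember this", "multi-step plan")
    return any(t in task.lower() for t in triggers)

def answer(task: str, base_model, scaffold=None) -> str:
    if scaffold is None or not needs_scaffold(task):
        return base_model(task)                            # fast path: plain conversational call
    steps = scaffold.controller.plan(task)                 # slow path: agents + layered memory
    drafts = [scaffold.agents[s.role].run(s) for s in steps]
    return scaffold.controller.synthesize(task, drafts)
```

The point of the sketch is only that most turns never touch the scaffold, so latency and conversational quality are preserved.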
This feels directionally aligned with several recent developments—skills systems, nested learning / multi-timescale learning, lightweight memory surfaces, and “deep research” agent loops—but so far these appear as isolated features rather than a coherent, conditional architecture. Public deployments still seem hesitant to combine these ideas into a unified system that extends effective context and reasoning horizon without compromising user experience.
I’m curious whether others are seeing the same trend, or are aware of internal systems at large labs moving more decisively in this direction? Is conditional, system-level cognition the next real scaling axis, or are there fundamental blockers that make this less viable than it appears and the industry is going to continue pushing models to be 'bigger and more powerful? | 2025-12-15T12:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pn6kdp/are_we_hitting_diminishing_returns_on_model/ | athornton79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn6kdp | false | null | t3_1pn6kdp | /r/LocalLLaMA/comments/1pn6kdp/are_we_hitting_diminishing_returns_on_model/ | false | false | self | 0 | null |
How to do a RTX Pro 6000 build right | 115 | The RTX PRO 6000 is missing NVlink, that is why Nvidia came up with idea to integrate high-speed networking directly at each GPU. This is called the RTX PRO server. There are 8 PCIe slots for 8 RTX Pro 6000 server version cards and each one has a 400G networking connection. The good thing is that it is basically ready to use. The only thing you need to decide on is Switch, CPU, RAM and storage. Not much can go wrong there. If you want multiple RTX PRO 6000 this the way to go.
Exemplary Specs:
8x Nvidia RTX PRO 6000 Blackwell Server Edition GPU
8x Nvidia ConnectX-8 1-port 400G QSFP112
1x Nvidia Bluefield-3 2-port 200G total 400G QSFP112 (optional)
2x Intel Xeon 6500/6700
32x 6400 RDIMM or 8000 MRDIMM
6000W TDP
4x High-efficiency 3200W PSU
2x PCIe gen4 M.2 slots on board
8x PCIe gen5 U.2
2x USB 3.2 port
2x RJ45 10GbE ports
RJ45 IPMI port
Mini display port
10x 80x80x80mm fans
4U 438 x 176 x 803 mm (17.2 x 7 x 31.6")
70 kg (150 lbs) | 2025-12-15T12:48:06 | https://www.reddit.com/gallery/1pn6ijr | GPTrack_dot_ai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pn6ijr | false | null | t3_1pn6ijr | /r/LocalLLaMA/comments/1pn6ijr/how_to_do_a_rtx_pro_6000_build_right/ | false | false | 115 | null | |
Environmental cost of running inference on Gen AI ? | 0 | I like most of you, use AI Applications and ChatBots around the clock most are local LLMs but some closed models
Offlate with each query and inference I feel like I am wasting the energy and environment as I know most of these inferences will happen on high end GPUs which aren't energy efficiencent
Nowadays for each query I feel like misusing the natural resources
Does anyone face this weird feeling of misusing the energy ? | 2025-12-15T12:25:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pn63lk/environmental_cost_of_running_inference_on_gen_ai/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn63lk | false | null | t3_1pn63lk | /r/LocalLLaMA/comments/1pn63lk/environmental_cost_of_running_inference_on_gen_ai/ | false | false | self | 0 | null |
Best workflow to convert a long PDF book into clean Markdown for Obsidian (using AI, no hallucinations)? | 3 | I’m trying to convert a full length PDF book (300+ pages) into clean, structured Markdown for Obsidian, and I’m looking for advice on the best *workflow*, not quick hacks.
What I care about:
* Preserve original wording exactly (no paraphrasing or “AI smoothing”)
* Proper Markdown structure (`#` for sections, `##` chapters, paragraphs restored)
* Fix OCR garbage (broken line breaks, hyphenation, duplicated headers)
* Obsidian-friendly output (outline view, folding, search)
* Ability to verify against the original PDF
What I’ve tried / considered:
* Copy-paste from PDF → messy OCR text
* AI to normalize formatting only (not rewrite content)
* Page-by-page or chunk-by-chunk processing to avoid hallucinations (see the sketch right after this list)
* Manual spot-checking against the PDF
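In case it helps, here is a rough sketch of the page-by-page variant from the list above. The endpoint, port, and model name are placeholders for whatever local OpenAI-compatible server you run (llama.cpp server, LM Studio, etc.), and the prompt is only meant to constrain the model to formatting-only changes.

```python
import requests
from pypdf import PdfReader

PROMPT = (
    "Reformat the following OCR text as Markdown. Fix line breaks, hyphenation and headings only. "
    "Do NOT change, add, remove or summarize any words.\n\n"
)

def clean_pdf(path: str, url: str = "http://localhost:8080/v1/chat/completions") -> str:
    pages = [p.extract_text() or "" for p in PdfReader(path).pages]
    cleaned = []
    for text in pages:  # one page per request keeps drift easy to spot-check against the PDF
        r = requests.post(url, json={
            "model": "local-model",
            "temperature": 0,
            "messages": [{"role": "user", "content": PROMPT + text}],
        }, timeout=300)
        cleaned.append(r.json()["choices"][0]["message"]["content"])
    return "\n\n".join(cleaned)
```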
What I’m *not* looking for:
* “Just summarize it”
* “Just ask ChatGPT to rewrite it”
* Tools that alter wording or structure unpredictably
Questions:
1. Do you process PDFs page-by-page or chapter-by-chapter?
2. Any Obsidian plugins or external tools that help with PDF → Markdown cleanup?
3. Has anyone built a reliable AI + OCR pipeline that preserves fidelity?
4. Any gotchas to avoid with long books?
If you’ve done something similar and ended up with a Markdown file you actually trust, I’d love to hear your setup.
Thanks. | 2025-12-15T12:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pn5yp5/best_workflow_to_convert_a_long_pdf_book_into/ | MilkManViking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn5yp5 | false | null | t3_1pn5yp5 | /r/LocalLLaMA/comments/1pn5yp5/best_workflow_to_convert_a_long_pdf_book_into/ | false | false | self | 3 | null |
Struggling with AnythingLLM - Need advice/help. | 2 | Config:
32GB GV100
Windows 11
LM Studio 3.35
AnythingLLM 1.9.0
Running Qwen3-v1-30b-a3b-instruct
When querying in AnythingLLM, the model can reference the file names but says there are no files, then says it can only see metadata. I started asking it specifically about file usage when I realized it was making up quotes from the sources I want it to use. Since the model can name the actual files (funnily, it tells me by file name that there are no files available to use), and since I have removed and re-uploaded the files numerous times in the RAG workspace to make sure they are embedded, I know they are there.

I am clearly doing something wrong. I also have the temperature set to 0.3 to reduce hallucination, but that is clearly not working as well as I had hoped.
Any guidance would be appreciated. | 2025-12-15T12:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pn5qhf/struggling_with_anythingllm_need_advicehelp/ | letmeinfornow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn5qhf | false | null | t3_1pn5qhf | /r/LocalLLaMA/comments/1pn5qhf/struggling_with_anythingllm_need_advicehelp/ | false | false | self | 2 | null |
Natural language file search using local tiny LLMs (<1b): Model recommendations needed! | 8 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/am0arwvgxc7g1.png?width=1652&format=png&auto=webp&s=1bab77de3f1b6cd65e5639777f94497e8c25b006\n\nHi guys, this is kind of a follow-up to my monkeSearch post, but now I am focusing on the non vector-db implementation again. \n\n**What I'm building:** A local natural language file search engine that parses queries like \"python scripts from 3 days ago\" or \"images from last week\" and extracts the file types and temporal info to build actual file system queries. \nIn testing, it works well. \n\n**Current approach:** I'm using Qwen3 0.6B (Q8) with llama.cpp's structured output to parse queries into JSON. (using llama.cpp's structured json schema mode)\n\nI've built a test suite with 30 different test queries in my script and Qwen 0.6B is surprisingly decent at this (24/30), but I'm hitting some accuracy issues with edge cases.\n\nCheck out the code to understand further: \n\n[https://github.com/monkesearch/monkeSearch/tree/legacy-main-llm-implementation](https://github.com/monkesearch/monkeSearch/tree/legacy-main-llm-implementation)\n\nThe project page: [https://monkesearch.github.io](https://monkesearch.github.io)\n\n \nThe question: What's the best path forward for this specific use case?\n\n1. Stick with tiny LLMs (<1B) and possibly fine-tuning?\n2. Move to slightly bigger LLMs (1-3B range) - if so, what models would you recommend that are good at structured output and instruction following?\n3. Build a custom architecture specifically for query parsing (maybe something like a BERT-style encoder trained specifically for this task)?\n\nConstraints:\n\n* Must run on potato PCs (aiming for 4-8GB RAM max)\n* Needs to be FAST (<100ms inference ideally)\n* No data leaves the machine\n* Structured JSON output is critical (can't deal with too much hallucination)\n\nI am leaning towards the tiny LLM option and would love to get opinions for local models to try and play with, please recommend some models! I tried local inference for LG-AI EXAONE model but faced some issues with the chat template.\n\nIf someone has experience with custom models and training them, let's work together!" | 2025-12-15T12:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pn5n4c/natural_language_file_search_using_local_tiny/ | fuckAIbruhIhateCorps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn5n4c | false | null | t3_1pn5n4c | /r/LocalLLaMA/comments/1pn5n4c/natural_language_file_search_using_local_tiny/ | false | false | 8 | null | |
I scored 100+ architectures on "Hardware Friction." Why KANs fry tensor cores and MoEs have a context trap. | 25 | I have been trying to figure out why technically superior architectures like Neural ODEs often die while the Transformer remains dominant. I ended up writing a deep dive on what I call the "Hardware Friction Map," arguing that GPUs don't actually reject ideas. They just charge a "compute tax" based on how much an idea deviates from optimized primitives like dense matrix multiplications.
I also compiled a GitHub dataset scoring over 100 architectures on their hardware efficiency, which I linked below. There are a few specific findings that I think matter for those of us running models locally.
The first big one is the "Context Trap" with Mixture of Experts. We all like MoEs for the inference speedup, but the data suggests that the "5x faster" marketing claims usually only hold up at very short context lengths. When you look at the benchmarks for 16k to 32k context, the throughput often drops to roughly 30% or 40% of the baseline. The issue is that the routing logic and KV cache traffic start to dominate the sparse expert compute. MoEs are great throughput optimizers, but unless the architecture is specifically co-designed for long context like the new DeepSeek V3, they struggle when you load them up with history.
Then there are the "Red Zone" architectures like KANs (Kolmogorov-Arnold Networks). They look great on paper, but they are basically unusable for local inference right now. KANs rely on edge-based spline evaluations, which are essentially hundreds of tiny, irregular operations. Current GPUs need big batched matrix multiplications to hit peak performance, so KANs end up dropping tensor core utilization to around 10%. Until hardware changes, they are just too expensive to run efficiently.
I also noticed a hard limit with pure State Space Models (SSMs) like Mamba. They seem to be production-ready at the 7B scale, which is why Falcon Mamba 7B works well. But once you cross the 13B parameter threshold, the training parallelism gap compounds and memory bandwidth becomes a bottleneck for state propagation. That appears to be why every major deployment larger than 13B, like Jamba or Falcon-H1, is forced to use a hybrid architecture of Attention plus SSMs.
This friction also explains the gap between models like Llama 3.1 and DeepSeek V3. Llama used a standard stack that we can run easily. DeepSeek V3 required them to rewrite their entire cluster scheduler and spend six months on custom routing kernels. That high friction is a massive moat for them, but it is also why it takes about 20 months for the open ecosystem tools like vLLM or llama.cpp to fully catch up to those custom internals.
I have linked the full breakdown and the architecture scoring dataset below. I am curious if your experience with local inference matches the context trap numbers I found for MoEs.
\- (dataset) [https://github.com/petroslamb/hardware-friction-map-2025](https://github.com/petroslamb/hardware-friction-map-2025)
\- (article) [https://lambpetros.substack.com/p/the-hardware-friction-map](https://lambpetros.substack.com/p/the-hardware-friction-map) | 2025-12-15T11:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pn4yrf/i_scored_100_architectures_on_hardware_friction/ | petroslamb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn4yrf | false | null | t3_1pn4yrf | /r/LocalLLaMA/comments/1pn4yrf/i_scored_100_architectures_on_hardware_friction/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=108&crop=smart&auto=webp&s=94ceed1b55dcc532013ce21002a59f06181d501e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=216&crop=smart&auto=webp&s=76a191a009a8f8ffcc1166cffeff4da514f742fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=320&crop=smart&auto=webp&s=3f0c9e13b04a2d94bf8b5779cba0583342952504', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=640&crop=smart&auto=webp&s=939495e99a7528f2685c7e60db2823d49f399124', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=960&crop=smart&auto=webp&s=f70c4adb002317edc04c39dd21dce14878bcb3ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?width=1080&crop=smart&auto=webp&s=b4dbc6e550fc41a05c0098222398899b74585cb9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ydhEBpDKX07Lm2H24E7ZQsS4FOdSuZQJDwAO6PUF03Q.png?auto=webp&s=b0e265a43c4a359310bbfc1055ca59572e1e1ecc', 'width': 1200}, 'variants': {}}]} |
Diagnosing layer sensitivity during post training quantization | 12 | Hi everyone!
I have written a blog post on using layerwise PSNR to diagnose where models break during post-training quantization.
Instead of only checking output accuracy, layerwise metrics let you spot exactly which layers are sensitive (e.g. softmax, SE blocks), making it easier to debug and decide what to keep in higher precision.
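For anyone who wants to try something similar by hand, here is a minimal sketch of what a layerwise PSNR comparison could look like. It assumes you have already captured matching activations (e.g. via forward hooks) from the float and quantized models, and it is not necessarily the exact metric used in the post.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, eps: float = 1e-12) -> float:
    # PSNR between the float reference activation and the quantized one
    mse = float(np.mean((reference - test) ** 2))
    peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10((peak ** 2) / (mse + eps))

def rank_layers(ref_acts: dict, quant_acts: dict) -> list:
    # ref_acts / quant_acts: layer name -> activation array captured on the same input
    scores = {name: psnr(ref_acts[name], quant_acts[name]) for name in ref_acts}
    # lowest PSNR first = most quantization-sensitive layers
    return sorted(scores.items(), key=lambda kv: kv[1])
```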
If you’re experimenting with quantization for local or edge inference, you might find this interesting: [blogpost link](https://hub.embedl.com/blog/diagnosing-layer-sensitivity?utm_source=reddit)
Has anyone tried similar layerwise diagnostics? I’d love to hear about your experiences. | 2025-12-15T11:01:44 | elinaembedl | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn4nfj | false | null | t3_1pn4nfj | /r/LocalLLaMA/comments/1pn4nfj/diagnosing_layer_sensitivity_during_post_training/ | false | false | default | 12 | {'enabled': True, 'images': [{'id': '9z863k4bnc7g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=108&crop=smart&auto=webp&s=0f8654d1000994c139f89d8925ead95eea8b135a', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=216&crop=smart&auto=webp&s=025c028d4334f96f7b3a868bdc7187c6addb9869', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=320&crop=smart&auto=webp&s=2df191d2daff255b559a74f8471fbbe10de9240a', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=640&crop=smart&auto=webp&s=a3ec905397065d7f20bc37620997e8c9962893e9', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=960&crop=smart&auto=webp&s=92b7ea8301f0cf1e7bc8a5363b65f4e73bcf9ed1', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?width=1080&crop=smart&auto=webp&s=454fac8088d00c71252dad4e0573626deb450b6f', 'width': 1080}], 'source': {'height': 900, 'url': 'https://preview.redd.it/9z863k4bnc7g1.png?auto=webp&s=6ecd836564cbd219d7f127256c0774081fbe4d8a', 'width': 1200}, 'variants': {}}]} | |
Has anyone tried Deepseek v3.2 speciale in q2? And what about kimi k2 thinking q1.58? | 3 | I have used both at higher quants, they are good. How useable is v3.2 speciale q2 for coding and math and general knowledge? And Kimi K2 thinking q1.58?
| 2025-12-15T10:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pn4bt6/has_anyone_tried_deepseek_v32_speciale_in_q2_and/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn4bt6 | false | null | t3_1pn4bt6 | /r/LocalLLaMA/comments/1pn4bt6/has_anyone_tried_deepseek_v32_speciale_in_q2_and/ | false | false | self | 3 | null |
I built a local-first AI memory system that goes beyond vector search – looking for feedback | 0 | Most vector databases only answer “what is similar”.
But when building agents and chatbots, I kept needing:
“What is related?”
So I built NeuroIndex — a hybrid AI memory system that combines:
• FAISS similarity search
• Semantic graph traversal
• LRU working memory
• SQLite persistence
It’s fully local and open-source.
I’m mainly looking for design and architecture feedback.
GitHub: [https://github.com/Umeshkumar667/neuroindex](https://github.com/Umeshkumar667/neuroindex)
| 2025-12-15T10:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pn458l/i_built_a_localfirst_ai_memory_system_that_goes/ | OwnPerspective9543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn458l | false | null | t3_1pn458l | /r/LocalLLaMA/comments/1pn458l/i_built_a_localfirst_ai_memory_system_that_goes/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=108&crop=smart&auto=webp&s=5d51f9e32e17c23ff08082c442f7ab90078d0f09', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=216&crop=smart&auto=webp&s=d3383d192caf018a684730cb8d3ec0be2c64e921', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=320&crop=smart&auto=webp&s=0d15aaa65bc9bc82f8ff60ff8ae9a789044f7ffe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=640&crop=smart&auto=webp&s=505de3e7ec38e91531170a1c131c95eac167b929', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=960&crop=smart&auto=webp&s=0cb6b239a4dd0e9d9a3a2b47ef50b7e3a64baef0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?width=1080&crop=smart&auto=webp&s=70492adb81f262d79ed6d1ae2832ba09b63fa712', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5TeZAFgKrUn6DGGLyB94A8t89W_ahGhRNZmvuxYtzjs.png?auto=webp&s=a482a67b91575ba8a2e83ebdb5288a2e0d57e4a9', 'width': 1200}, 'variants': {}}]} |
Why did OpenAI book 40% of world's RAM | 0 | I'm as annoyed as anyone by the current RAM prices, still I would like to understand why this happened. I am aware of some really good arguments against some of these options, but I'd like to hear your opinion.
[View Poll](https://www.reddit.com/poll/1pn3h0o) | 2025-12-15T09:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pn3h0o/why_did_openai_book_40_of_worlds_ram/ | Minthala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn3h0o | false | null | t3_1pn3h0o | /r/LocalLLaMA/comments/1pn3h0o/why_did_openai_book_40_of_worlds_ram/ | false | false | self | 0 | null |
Do you agree? | 0 | 2025-12-15T09:29:55 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn39nu | false | null | t3_1pn39nu | /r/LocalLLaMA/comments/1pn39nu/do_you_agree/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'dkrkloc67c7g1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=108&crop=smart&auto=webp&s=ffd2aa11c873391b0d3143ab56d1b5226410b652', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=216&crop=smart&auto=webp&s=e6a752749472d55b57e8124a7f6d65956ffa44ab', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=320&crop=smart&auto=webp&s=36035eb627004fca8f44d74ae08a84e9d9ccb259', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=640&crop=smart&auto=webp&s=065c7bd9492581314fef8c7637df2f70664c3183', 'width': 640}, {'height': 481, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=960&crop=smart&auto=webp&s=e342968f148fb12c266709eb6ccd825dbd6b5cd8', 'width': 960}, {'height': 541, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?width=1080&crop=smart&auto=webp&s=9784f3748de41d8302ebe8c0c2c0b4d552f9d65d', 'width': 1080}], 'source': {'height': 602, 'url': 'https://preview.redd.it/dkrkloc67c7g1.jpeg?auto=webp&s=28945f5b7cdcac2810f41ccd8863b8d8e6ce53f4', 'width': 1200}, 'variants': {}}]} | ||
New Google model incoming!!! | 1,215 | [https://x.com/osanseviero/status/2000493503860892049?s=20](https://x.com/osanseviero/status/2000493503860892049?s=20)
[https://huggingface.co/google](https://huggingface.co/google) | 2025-12-15T09:26:05 | R46H4V | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn37mw | false | null | t3_1pn37mw | /r/LocalLLaMA/comments/1pn37mw/new_google_model_incoming/ | false | false | 1,215 | {'enabled': True, 'images': [{'id': 'vMpx6OM9ZzUZYu5amQRc2i6NMWn5Q2xz-pF36NmGAlY', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/ho8nhiae6c7g1.png?width=108&crop=smart&auto=webp&s=2c60812843908bba7adafd36e2de3fa54960c1e3', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/ho8nhiae6c7g1.png?width=216&crop=smart&auto=webp&s=43f7ceaaa958b2db4b4f68ec1b8027435801edd0', 'width': 216}, {'height': 253, 'url': 'https://preview.redd.it/ho8nhiae6c7g1.png?width=320&crop=smart&auto=webp&s=f4cce504656e90acc9cf53aa581a11864f6706c0', 'width': 320}, {'height': 506, 'url': 'https://preview.redd.it/ho8nhiae6c7g1.png?width=640&crop=smart&auto=webp&s=53bdb437b0e3d6b162a1f97be9b2f4ae540eda69', 'width': 640}], 'source': {'height': 577, 'url': 'https://preview.redd.it/ho8nhiae6c7g1.png?auto=webp&s=4a2c3f6af7c019b060ccecc655286890fef51b15', 'width': 729}, 'variants': {}}]} | ||
THIS is so OUTRAGEOUS [LMArena] | 0 | So now there are rate limits on LM Arena as well??? | 2025-12-15T09:05:08 | fflarengo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn2wrz | false | null | t3_1pn2wrz | /r/LocalLLaMA/comments/1pn2wrz/this_is_so_outrageous_lmarena/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '5mm7olwl2c7g1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/5mm7olwl2c7g1.png?width=108&crop=smart&auto=webp&s=913406cf66557aae7c083dc562e1b1a842155b63', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/5mm7olwl2c7g1.png?width=216&crop=smart&auto=webp&s=5c2994e75b81ab8af619b61c7eafaaeaf37762cf', 'width': 216}, {'height': 113, 'url': 'https://preview.redd.it/5mm7olwl2c7g1.png?width=320&crop=smart&auto=webp&s=3eb82482b47c7cb2db5f9754d084f6afb1fdf717', 'width': 320}, {'height': 227, 'url': 'https://preview.redd.it/5mm7olwl2c7g1.png?width=640&crop=smart&auto=webp&s=35ad6d92aaabd695b39d623e49f612401e4db95b', 'width': 640}], 'source': {'height': 332, 'url': 'https://preview.redd.it/5mm7olwl2c7g1.png?auto=webp&s=4abbe01e0c2c509e5e388fdff2da1ef2ba6b13a2', 'width': 936}, 'variants': {}}]} | |
llama.cpp: Automation for GPU layers, tensor split, tensor overrides, and context size (with MoE optimizations) | 186 | CPU + GPU hybrid inference has been a core feature of llama.cpp since early on, and I would argue, one of the major selling points vs. projects like ExLlama.
The way to control memory use until now was to manually set parameter like `--n-gpu-layers` and `--tensor-split` to fit memory use to free VRAM.
However, this is of course suboptimal in terms of usability.
Downstream projects like Ollama and KoboldCpp have implemented mechanisms for automating memory allocation but those rely on rough heuristics and tend to be inaccurate.
As a consequence, to avoid running out of memory in some cases the heuristics are rather conservative and leave potential performance on the table.
The problem becomes even harder when running models across multiple GPUs or when running MoE models where the dense tensors
should be prioritized over the sparse MoE tensors for optimal performance.
On the latest llama.cpp version following https://github.com/ggml-org/llama.cpp/pull/16653 I implemented code to automate memory allocations across GPUs.
It works by doing virtual test allocations and using those as feedback to iteratively reduce memory use until the model fits across all GPUs.
The metric for memory use is the same as in the "memory breakdown" that you may have seen in recent llama.cpp versions.
The implementation is generic and should work for any ggml backend as long as it supports CPU + GPU hybrid inference and the memory breakdown is correct.
If you encounter problems using this new functionality, please open an issue instead of commenting here as this will make the process easier from my side.
The code starts by first checking whether the model is projected to fit as-is.
If yes, no changes are made.
If not, it first reduces the context size to free up memory.
If that is still not enough it starts moving tensors from VRAM to RAM.
Dense tensors are prioritized for better MoE performance.
Ideally one would only assign whole layers to GPUs for simplicity.
However, as individual layers can be very large relative to "small" GPUs with only 24 GiB VRAM, this would result in significant waste.
For this reason, layers can "overflow", meaning that parts of them are moved to the next GPU in line or to system RAM.
### Command-Line Interface
The fitting of runtime parameters can be controlled as follows:
* `--fit`, `-fit`: set to `on` by default, can be set to `off` to disable parameter fitting.
* `--fit-target`, `-fitt`: target amount of free memory to leave on each GPU. As of right now this is the same value for all GPUs and it is not possible to specify e.g. an amount that should be used regardless of free memory.
* `--fit-ctx`, `-fitc`: minimum context size that can be set automatically. If `--ctx-size` is explicitly set by the user it is not changed.
* If arguments like `--n-gpu-layers`, `--tensor-split`, or `--override-tensor` that affect memory allocation are set by the user, there is no change to that memory allocation. There is no support for automatic modification of only one of these arguments, they are either wholly under user control or wholly under program control.
There is a new tool `llama-fit-params` that can be used to retrieve the parameters that would be set by the new parameter fitting logic.
For example:
```bash
> $ ./build/bin/llama-fit-params --model models/opt/gpt-oss-120b-mxfp4-00001-of-00003.gguf -ub 4096 -b 4096
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
build: 7413 (ae534ec0c) with GNU 15.2.1 for Linux x86_64
llama_params_fit_impl: projected memory use with initial parameters [MiB]:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 4090): 24080 total, 34873 used, 11187 deficit
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 4090): 24080 total, 31847 used, 8161 deficit
llama_params_fit_impl: projected to use 66721 MiB of device memory vs. 48161 MiB of free device memory
llama_params_fit_impl: cannot fulfill margin of 1024 MiB on all devices, need to use 21397 MiB less in total
llama_params_fit_impl: context size reduced from 131072 to 4096 -> need 4490 MiB less memory in total
llama_params_fit_impl: with only dense weights in device memory there is a total surplus of 42064 MiB
llama_params_fit_impl: filling dense-only layers back-to-front:
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 4090): 36 layers, 2201 MiB used, 21484 MiB free
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 4090): 0 layers, 985 MiB used, 22700 MiB free
llama_params_fit_impl: converting dense-only layers to full layers and filling them front-to-back with overflow to next device/system memory:
llama_params_fit_impl: - CUDA0 (NVIDIA GeForce RTX 4090): 14 layers ( 1 overflowing), 22576 MiB used, 1109 MiB free
llama_params_fit_impl: - CUDA1 (NVIDIA GeForce RTX 4090): 22 layers (11 overflowing), 22208 MiB used, 1477 MiB free
llama_params_fit: successfully fit params to free device memory
llama_params_fit: fitting params to free memory took 8.81 seconds
Printing fitted CLI arguments to stdout...
-c 4096 -ngl 37 -ts 14,23 -ot blk\.13\.ffn_(up|gate|down).*=CUDA1,blk\.25\.ffn_down.*=CPU,blk\.26\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.27\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.28\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.29\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.30\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.31\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.32\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.33\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.34\.ffn_(up|down|gate)_(ch|)exps=CPU,blk\.35\.ffn_(up|down|gate)_(ch|)exps=CPU
```
### Benchmark
As of right now `llama-bench` does not have support for `-fit`, `-fitt`, and `-fitc`.
For this reason, the following workaround was used to feed the results from `llama-fit-params` into `llama-bench`:
```bash
./build/bin/llama-fit-params -m models/opt/${model_name}-${quantization}.gguf -b 4096 -ub 4096 | tee tmp.txt
./build/bin/llama-bench -m models/opt/${model_name}-${quantization}.gguf -r 1 -fa 1 $(tail -c +17 tmp.txt | tr ',' ';')
```
The benchmark was done on a system with an AMD EPYC 7742 CPU and 8 3200 "MHz" DIMMs.
| Model | GPUs | Time to fit [s] | Fully in VRAM? | VRAM utilization | pp4096 [t/s] | tg128 [t/s] |
|--------------------|---------------------------------------|-----------------|----------------|------------------|--------------|-------------|
| Qwen 3 Next BF16 | None | - | No | - | 38.89 | 6.23 |
| Qwen 3 Next BF16 | 1x RTX 4090 | 4.89 | No | 88.1% | 381.52 | 19.01 |
| Qwen 3 Next BF16 | 2x RTX 4090 | 7.75 | No | 88.5% | 246.29 | 20.89 |
| Qwen 3 Next BF16 | 3x RTX 4090 | 10.70 | No | 88.3% | 340.88 | 22.00 |
| Qwen 3 Next BF16 | 4x RTX 4090 | 13.87 | No | 89.3% | 433.10 | 24.70 |
| Qwen 3 Next BF16 | 4x RTX 4090, 1x RTX 5090 | 16.93 | No | 89.7% | 526.71 | 26.19 |
| Qwen 3 Next BF16 | 4x RTX 4090, 1x RTX 5090, 1x RTX 3090 | 20.39 | No | 90.2% | 599.86 | 31.37 |
| Qwen 3 Next q8_0 | None | - | No | - | 44.81 | 7.17 |
| Qwen 3 Next q8_0 | 1x RTX 4090 | 4.98 | No | 87.3% | 904.49 | 24.26 |
| Qwen 3 Next q8_0 | 2x RTX 4090 | 7.51 | No | 88.5% | 574.43 | 28.34 |
| Qwen 3 Next q8_0 | 3x RTX 4090 | 10.22 | No | 89.3% | 1086.23 | 33.33 |
| Qwen 3 Next q8_0 | 4x RTX 4090 | 12.19 | Yes | 87.0% | 2474.67 | 41.37 |
| GPT OSS 120b mxfp4 | None | - | No | - | 115.78 | 23.63 |
| GPT OSS 120b mxfp4 | 1x RTX 4090 | 5.56 | No | 83.7% | 1733.20 | 52.09 |
| GPT OSS 120b mxfp4 | 2x RTX 4090 | 10.48 | No | 89.4% | 2452.52 | 78.27 |
| GPT OSS 120b mxfp4 | 3x RTX 4090 | 11.47 | Yes | 86.0% | 5499.52 | 180.29 |
| GPT OSS 120b mxfp4 | 4x RTX 4090 | 1.55 | Yes | 68.2% | 5219.51 | 182.89 |
The VRAM utilization is at ~85-90%.
As the default `--fit-target` is 1024 MiB, that would ideally leave ~4% of free VRAM on each GPU.
However, since individual tensors can be several GB in size some amount of waste is inevitable.
The time to fit the parameters increases roughly linearly with the number of GPUs.
Under ideal circumstances, such as when running GPT OSS 120b on 4x RTX 4090, the code only needs to check that the VRAM is sufficient.
For Qwen 3 Next there currently seems to be a bug where the memory needed for the context is not accounted for correctly, so a full fit is done.
Time to fit is still fairly unoptimized.
Performance mostly increases as VRAM use increases, except when going from a single GPU to two GPUs
(while still being bottlenecked by RAM) or when the model could already be fit on fewer GPUs.
With better multi GPU code the performance should increase monotonically as more GPUs are added. | 2025-12-15T08:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pn2e1c/llamacpp_automation_for_gpu_layers_tensor_split/ | Remove_Ayys | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn2e1c | false | null | t3_1pn2e1c | /r/LocalLLaMA/comments/1pn2e1c/llamacpp_automation_for_gpu_layers_tensor_split/ | false | false | self | 186 | {'enabled': False, 'images': [{'id': 'qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=108&crop=smart&auto=webp&s=28e25db75320eab6c0bb9942ea3307d512df0ac8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=216&crop=smart&auto=webp&s=22402de2ab540ba0ff3b05eb0de17a9920075d25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=320&crop=smart&auto=webp&s=0ded37e1fc7899f83b1f41a2f0453a85f681a6e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=640&crop=smart&auto=webp&s=c982448034ac9639015bd9e4f75481bd706afd2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=960&crop=smart&auto=webp&s=6dc478d10441cec16e255873b2c0b8ac5036340c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?width=1080&crop=smart&auto=webp&s=42980a03f597031ddb687f1dee76ac7a1f3541ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qVNWw4QkrGdQBohPMkNbOaBl1kGUBrz5qolc5DSYCVk.png?auto=webp&s=8e6f184e251de8360f2de33a9a0fdef6cbf2e42d', 'width': 1200}, 'variants': {}}]} |
New AI slop indicators, now that the em dash is disappearing | 0 | It's not really local, at least it hasn't arrived at local models yet, but it's maybe relevant since we've [been seeing a lot](https://www.reddit.com/r/LocalLLaMA/comments/1pgdh8q/removed_by_reddit/) of LLM-generated postings [here](https://www.reddit.com/r/LocalLLaMA/comments/1p7ghyn/why_its_getting_worse_for_everyone_the_recent/): someone provided a [nice example](https://www.reddit.com/r/ChatGPT/comments/1pmu4fc/comment/nu2u865/?context=3) of what things look like after the latest update, over at the ChatGPT sub. | 2025-12-15T08:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pn2bvu/new_ai_slop_indicators_now_that_the_em_dash_is/ | Chromix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn2bvu | false | null | t3_1pn2bvu | /r/LocalLLaMA/comments/1pn2bvu/new_ai_slop_indicators_now_that_the_em_dash_is/ | false | false | self | 0 | null |
Last Week in Multimodal AI - Local Edition | 12 | I curate a weekly newsletter on multimodal AI. Here are the local/open-source highlights from this week:
**Apriel-1.6-15B-Thinker - Frontier Reasoning at 15B**
* Scores 57 on Intelligence Index, matching 200B-scale models while remaining an order of magnitude smaller.
* Self-hostable multimodal reasoning without compromising performance.
* [Model](https://huggingface.co/ServiceNow-AI/Apriel-1.6-15b-Thinker) | [Blog](https://huggingface.co/blog/ServiceNow-AI/apriel-1p6-15b-thinker) | [Demo](https://huggingface.co/spaces/ServiceNow-AI/Apriel-Chat)
https://preview.redd.it/y2dx42fkrb7g1.jpg?width=800&format=pjpg&auto=webp&s=20e12cfa824805f172f0abd47a074be08ea32b1a
**GLM-4.6V - 128K Context Multimodal**
* Open-source multimodal model with tool-calling support and 128K context window.
* Handles vision-language tasks with native tool integration for API development.
* [Blog](https://z.ai/blog/glm-4.6v) | [GitHub](https://github.com/zai-org/GLM-V) | [Demo](https://chat.z.ai/)
https://preview.redd.it/focypmxrrb7g1.jpg?width=10101&format=pjpg&auto=webp&s=3b13f1cb191778838cc1e60577fc2856254723ad
https://reddit.com/link/1pn238p/video/zi335bxsrb7g1/player
**AutoGLM - Open-Source Phone Agent**
* Completes Android tasks through natural language commands.
* AutoGLM-Phone-9B available for download and self-hosting.
* [Website](https://xiao9905.github.io/AutoGLM/)
https://reddit.com/link/1pn238p/video/qcbwhgburb7g1/player
**DMVAE - State-of-the-Art VAE**
* Matches latent distributions to any reference with fewer training epochs.
* Open-source implementation achieving SOTA image synthesis.
* [Paper](https://huggingface.co/papers/2512.07778) | [Model](https://huggingface.co/sen-ye/dmvae/tree/main)
https://preview.redd.it/aai6puuwrb7g1.jpg?width=692&format=pjpg&auto=webp&s=c3b7accc71868c514e36841b44ea8bf171fdf730
**Qwen-Image-i2L - Single Image to Custom LoRA**
* First open-source tool converting one image into a custom LoRA.
* Enables personalized generation from minimal data.
* [ModelScope](https://modelscope.cn/models/DiffSynth-Studio/Qwen-Image-i2L/summary) | [Code](https://github.com/modelscope/DiffSynth-Studio/blob/main/examples/qwen_image/model_inference_low_vram/Qwen-Image-i2L.py)
https://preview.redd.it/8qawc8eyrb7g1.png?width=1080&format=png&auto=webp&s=96e6fd90eacfe70b759be421960b827a66dabb6f
**Dolphin-v2 - Universal Document Parser**
* 3B parameter model that parses any document type.
* Efficient document understanding at small scale.
* [Hugging Face](https://huggingface.co/ByteDance/Dolphin-v2)
**X-VLA - Unified Robot Control**
* Soft-prompted transformer controlling different robot types with one interface.
* Open-source approach to cross-platform robotics.
* [Docs](https://huggingface.co/docs/lerobot/en/xvla)
Checkout the [full newsletter](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-37-less?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources. | 2025-12-15T08:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pn238p/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn238p | false | null | t3_1pn238p | /r/LocalLLaMA/comments/1pn238p/last_week_in_multimodal_ai_local_edition/ | false | false | 12 | null | |
Ideal Build Support | 2 | I am brand new to running AI locally and want to build a machine for a very specific use case (document data extraction) using qwen3-vl. This machine will be built solely for this function. I have built a PoC that works with a 5070 Ti, but I want to understand what I should be looking for for this project. The budget is relatively open (up to 10k USD), but I want to be efficient with it. Speed matters, as I am going to be going through hundreds of documents a day.
Appreciate any insight! | 2025-12-15T07:56:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pn1wzm/ideal_build_support/ | meche2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn1wzm | false | null | t3_1pn1wzm | /r/LocalLLaMA/comments/1pn1wzm/ideal_build_support/ | false | false | self | 2 | null |
What's one of the best general use case open models | 0 | General queries, occasional academic work requiring reasoning, and good support for tool use.
I tried GPT OSS 120b and it seems pretty good, but occasionally stumbles over some reasoning queries. Also its medium reasoning effort seems better than high for some reason.
I also tried a few of the Chinese models like Qwen and Kimi, but they seem to overthink themselves into oblivion. They'll get the answer in around 5 seconds and then spend 15 more seconds checking other methods, even for queries where it is not required.
Hardware requirement is not a factor. | 2025-12-15T07:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pn1uif/whats_one_of_the_best_general_use_case_open_models/ | Naive-Sun6307 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn1uif | false | null | t3_1pn1uif | /r/LocalLLaMA/comments/1pn1uif/whats_one_of_the_best_general_use_case_open_models/ | false | false | self | 0 | null |
Project Aura: Building an Open-Source, Fully Local AI Companion Baked into Custom AOSP Android 18 (From Humble Termux Roots) | 8 | Project Aura: Building an Open-Source, Fully Local AI Companion Baked into Custom AOSP Android 18 (From Humble Termux Roots)
Hey r/LocalLLaMA (and cross-posting to a few related subs),
I'm a solo dev working on Project Aura – an ambitious attempt to create a true on-device, privacy-focused AI companion that's deeply integrated into Android as a custom AOSP-based ROM. No cloud dependency, no subscriptions, just local models running natively on your phone with voice input, persistent "brain" knowledge, and a sleek UI.
Quick Backstory
It started as a Termux/proot setup on Android:
llama.cpp backend for inference
Whisper.cpp for offline speech-to-text
FastAPI + WebSocket server with a glass-morphism web UI
Custom directory structure (/app, /models, /brain for long-term memory/knowledge graphs)
We iterated hard on getting it stable and performant without root. It worked great as a proof-of-concept local assistant you could talk to offline.
But apps in Termux (or even native apps) have limits – background restrictions, no true system-level triggers, etc. So now we're going all-in: migrating the entire stack to a full custom AOSP Android 18 build. The goal is a ROM where Aura is a baked-in system service/companion – think voice activation hooked into the OS, persistent across reboots, overlays/UI integration, optimized for on-device efficiency.
Why This Matters (to me, at least)
In 2025, we're flooded with cloud assistants, but real privacy/resilience means local. Gemini Nano and friends are cool but closed. Projects like MLC Chat or Iris are awesome app-level, but nothing I've found goes this deep into OS integration for a full-featured open companion. If we pull this off, it could be a base for anyone to flash a truly private AI phone ROM.
Current Progress & Features So Far
Termux version: Fully functional offline chat + voice (llama.cpp + Whisper)
Brain system: Persistent vector store + knowledge ingestion
UI: Responsive web-based with real-time streaming
AOSP side: Setting up build env on Debian 13 Trixie, initial repo syncs started, planning system service integration for the AI stack
Planned milestones:
Bake llama.cpp/Whisper as system daemons
System voice trigger integration
Optional vision/TTS if hardware allows
Fully open-source everything
The Reality Check: Hardware & Funding Struggles
I'm bootstrapping this on super low-end gear – Debian 13 on an old Core i3 with 4GB RAM (and an even older Core 2 Duo backup). Repo syncs and builds are painfully slow (days for a full run), and swapping kills progress. No fancy Threadripper here.
I'm low on income right now, so upgrades (even just more RAM or an SSD) are out of reach without help. That's why I'm sharing early – hoping to build a little community around it.
How You Can Help (If You're Feeling Generous)
Feedback/Ideas: What features would make this killer for you?
Contributions: Once the repo is more fleshed out, PRs welcome!
Donations for Hardware: Even small amounts would go straight to RAM/SSD upgrades to speed up builds.
Ko-Fi: \[link placeholder – set one up at ko-fi.com\]
Or GitHub Sponsors once the repo lives
GitHub Repo (WIP – pushing initial structure soon): \[placeholder – github.com/killbox3143/project-aura\]
https://preview.redd.it/8a8trvpejb7g1.png?width=2816&format=png&auto=webp&s=119f8db092e0a4dd18d0ec823bcfb956541173cc
No pressure at all – just excited to share and see if this resonates. If you've got AOSP experience or local AI tips, drop them below!
Thanks for reading. Let's make local AI companions a real open option. 🚀
(Will update with screenshots/videos once the AOSP build stabilizes – right now it's mostly terminal grind.)
What do you think – worth pursuing? Any similar projects I should collab with? | 2025-12-15T07:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pn1buv/project_aura_building_an_opensource_fully_local/ | AdLive6701 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn1buv | false | null | t3_1pn1buv | /r/LocalLLaMA/comments/1pn1buv/project_aura_building_an_opensource_fully_local/ | false | false | 8 | null | |
Another watercooled 4x GPU server complete! | 47 | I'm on a roll this weekend. Finally got all of the parts needed to finish this build. 4x RTX A4500 with waterblocks from [Alphacool (A5000)](https://shop.alphacool.com/en/shop/gpu-water-cooling/nvidia/10669-alphacool-es-rtx-a5000-gpu-cooler-with-backplate). 80GB VRAM, nothing crazy, pretty cost efficient. These GPUs were about $1k each. Waterblocks were between $50-100 each since they're pretty old. As shipped, the blocks appear to be single-slot, but no single-slot bracket is provided, and with the backplate they take up some of the slot above. So I'm running them without backplates (the GPUs don't have one to begin with), and I had to print a slimmer end block than what came with them (the part right by the power connector). Then I cut the brackets down to single-slot width. Perfect fit. Very tight though, this chassis was not made for this! To round out the build there's a 4x mini SAS card connected to 16 SSDs (2 of the 5.25" bays on the right), a 4x NVMe hot swap (in the remaining 5.25" bay), and a Mellanox 25G card.
Getting pretty decent performance out of it! I have [https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B) loaded up with vLLM. It juuust fits. \~103-105 tokens/sec on single requests and when testing with 6x simultaneous requests it does about 50 tokens/sec. On sustained workloads, temps stay around 40-42ºC.
Finished my other watercooled 4x GPU server a few days ago also, post [here](https://www.reddit.com/r/LocalLLaMA/comments/1pl984y/finally_finished_my_4x_gpu_water_cooled_server/). | 2025-12-15T07:13:53 | j4ys0nj | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pn19zc | false | null | t3_1pn19zc | /r/LocalLLaMA/comments/1pn19zc/another_watercooled_4x_gpu_server_complete/ | false | false | 47 | {'enabled': True, 'images': [{'id': 'Dk-OCbh_iVK0wuq52c6Me000f6vqgb_vHVlk-rUy1ko', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=108&crop=smart&auto=webp&s=009a453259b9ff497f1c7562677bf5dc48f93778', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=216&crop=smart&auto=webp&s=2511dfa6567303ee575762388b40f57979803054', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=320&crop=smart&auto=webp&s=fd06a6f07a2422e430eb616b510d7bc85e070f87', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=640&crop=smart&auto=webp&s=313f261a0bcd81f8f2af182cfeb1eae60a4c0d0f', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=960&crop=smart&auto=webp&s=bd3b6094690a27f6eccc332aa74be2e53467ca81', 'width': 960}, {'height': 1620, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?width=1080&crop=smart&auto=webp&s=e5319fd931ba217f9b08de1f33ca2a13296eb7a1', 'width': 1080}], 'source': {'height': 1694, 'url': 'https://preview.redd.it/pgqrfop2ib7g1.png?auto=webp&s=afeec5320ead9814d0255ecd3df3d4ca34413ba5', 'width': 1129}, 'variants': {}}]} | ||
Writing for dropped online stories | 1 | For the last few years it's become pretty popular for writers to post to sites like [royalroad.com](http://royalroad.com) or other web novel platforms. The problem is that lots of these authors end up dropping their stories after a while, usually quitting writing altogether. I was wondering if there was a way to get an LLM to read a story (or at least a few chapters) and continue writing where the author left off. Every model I've tried seems to block it, saying it's a copyright issue. I'm not posting the stories online -.- I just want to get a conclusion to some of these stories... it seriously sucks to read a story you love only to have it get completely dropped by the author... | 2025-12-15T07:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pn14ln/writing_for_dropped_online_stories/ | Dangerous-Cancel7583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn14ln | false | null | t3_1pn14ln | /r/LocalLLaMA/comments/1pn14ln/writing_for_dropped_online_stories/ | false | false | self | 1 | null |
Is there an easy way to set up something like stable-diffusion.cpp in Open WebUI | 6 | For info, my setup is running off an AMD 6700XT using Vulkan with llama.cpp and Open WebUI.
So far I'm very happy with it, and I currently have Open WebUI (Docker), Docling (Docker), kokoro-cpu (Docker) & llama.cpp running under llama-swap, plus an embedding llama-server, all on auto startup.
I can't use ComfyUI because of AMD, but I have had success with stable-diffusion.cpp and Flux Schnell. Is there a way to create another server instance of stable-diffusion.cpp, or is there another product that I don't know about that works for AMD? | 2025-12-15T06:32:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pn0m73/is_there_an_easy_way_to_setup_something_like/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn0m73 | false | null | t3_1pn0m73 | /r/LocalLLaMA/comments/1pn0m73/is_there_an_easy_way_to_setup_something_like/ | false | false | self | 6 | null |
Forked Google's Gemini CLI to work with local LLMs (MLX, llama.cpp, vLLM) | 30 | So I forked the Gemini CLI and added local LLM support; no Google account needed, and it runs offline.
Give it a try!
[https://github.com/limkcreply/open-gemini-cli](https://github.com/limkcreply/open-gemini-cli) | 2025-12-15T06:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/1pn0gpa/forked_googles_gemini_cli_to_work_with_local_llms/ | Honest-Fun-5279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn0gpa | false | null | t3_1pn0gpa | /r/LocalLLaMA/comments/1pn0gpa/forked_googles_gemini_cli_to_work_with_local_llms/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=108&crop=smart&auto=webp&s=e9a9898f5ce6abd8855bbc2f128c394b4f443a7c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=216&crop=smart&auto=webp&s=b8475598304066f7b1671f681bfda6657fe27bad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=320&crop=smart&auto=webp&s=216803f3eda5b36f839745ba1e6fe81847e4c89c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=640&crop=smart&auto=webp&s=9b2a757e823116bce0b317b92150194838154eb2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=960&crop=smart&auto=webp&s=d45241ec1319d312eb408473ee939fb3ebadb9f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?width=1080&crop=smart&auto=webp&s=bae045d167999cfcae0dc153fcbad631d415abe3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/we9Fwc80AuxLE8uiaWN9-j_yX1-u6hONMC9wBVrU_Y0.png?auto=webp&s=e89321db6b60125b87d9eedea232c9c93342b4a7', 'width': 1200}, 'variants': {}}]} |
Day 7: 21 Days of Building a Small Language Model: Self Attention | 62 | Welcome to Day 7. Today, our focus is on self-attention. Simply put, self-attention allows each word in a sequence to look at and incorporate information from all other words in that sequence. This might seem obvious (of course words need to understand their context), but the challenge is doing this efficiently and effectively.
I’ve covered all the concepts here at a high level to keep things simple. For a deeper exploration of these topics, feel free to check out my book "*Building A Small Language Model from Scratch: A Practical Guide."*
**Note:** If you want to understand the coding part step by step, here’s the video.
[**https://www.youtube.com/watch?v=EXnvO86m1W8**](https://www.youtube.com/watch?v=EXnvO86m1W8)
For example, in the sentence
Sarah works as a software engineer. She enjoys solving complex problems
the word "She" needs to understand that it refers to "Sarah" from the previous sentence. Without self-attention, the model would process each word in isolation, losing crucial information about how words relate to each other.
So the real question is: how does self-attention enable models to capture these relationships, and why is it so effective?
# The Core Issue
When we read a sentence, each word's meaning is influenced by the other words around it. The word bank means something different in I deposited money at the bank versus I sat on the river bank. The word it in The cat sat on the mat. It was comfortable. refers to the mat from the previous sentence.
These relationships aren't just about adjacent words; they can span long distances, and they're bidirectional. Later words can influence earlier ones, and earlier words influence later ones.
Traditional neural network approaches struggled with this. Recurrent Neural Networks (RNNs) process sequences step by step, which makes it difficult to capture long-range dependencies. Convolutional Neural Networks (CNNs) use fixed-size windows, limiting their ability to see the full context.
Self-attention solves this problem by allowing each position in the sequence to attend to every other position, including itself, in a single operation. When processing the word she, the model can attend to Sarah from earlier in the sequence, learning that she refers to Sarah. When processing bank, the model can attend to deposited money to understand that this bank is a financial institution, not a river's edge.
# Queries, Keys, and Values
The self-attention mechanism uses three key components: queries, keys, and values. This terminology might seem abstract at first, but it's actually quite intuitive once you understand the analogy.
Think of how you search a database: you submit a query to find what you're looking for, the system uses keys to index and locate matching entries, and then retrieves the actual values associated with those keys.
https://preview.redd.it/2ilzysh88b7g1.png?width=581&format=png&auto=webp&s=522afd4841746bf137b33000b763e4fb134b6e41
* **Queries** represent what each token is looking for: the question we want to answer. When processing a particular position in the sequence, the query encodes what information we need from other positions.
* **Keys** represent what each element in the input can provide: the information available at each position. Each position in the sequence has a key that describes what that position contains or can offer.
* **Values** contain the actual information we want to extract. Once we determine which positions are relevant (by comparing queries to keys), we use the values from those positions to construct the output.
Let's consider an example. Imagine you have a database and your database has these employee records
https://preview.redd.it/4juko3ra8b7g1.png?width=285&format=png&auto=webp&s=fa2022c5535c0993877bec46cc9fd92b9931c021
* A Query is the question you ask: Give me the record for Employee ID = 27.
* The Keys are all the indexed fields in the database (10, 27, 33) that help you find the right record.
* The Value is the actual information the database returns when the right key is matched.
Let's consider one more example. Suppose we're processing the same example: Sarah works as a software engineer. She enjoys solving complex problems.
When the model processes the word She in the second sentence, it needs to determine what She refers to. Here's how self-attention helps:
* **Query (for "She")**: The query for She encodes the question: What does this pronoun refer to? It represents what we're looking for, which is the person or thing that the pronoun refers to, specifically a female person mentioned earlier.
* **Keys (for each word)**: Each word in the sequence has a key that describes what that word represents. The key for Sarah might encode that it's a proper noun referring to a person (likely female based on the name). The key for engineer might encode that it's a noun referring to a profession. The key for works might encode that it's a verb.
* **Values (for each word)**: The values contain the actual semantic information. The value for Sarah contains information about who Sarah is, her identity, etc. The value for engineer contains information about the profession. The value for software contains information about the field of work.
https://preview.redd.it/9nr5ikwe8b7g1.png?width=711&format=png&auto=webp&s=1c2ed0a7f5b4f77aa73198bfe495a197716f3fe6
The attention mechanism compares the query for She against all the keys in the sequence. The key for Sarah will likely have a high similarity to the query for She because Sarah is a proper noun referring to a person who could be referred to by the pronoun She, and it appears earlier in the sequence. The keys for engineer, software, and works will have lower similarity. This produces high attention weights for Sarah and lower weights for other words.
Finally, the mechanism uses these attention weights to create a weighted combination of the values. Since Sarah has a high attention weight, its value (information about Sarah) will dominate the resulting context vector. This allows the model to understand that She refers to Sarah, and the context vector for She will incorporate information about Sarah, including that she works as a software engineer and enjoys solving complex problems.
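To make that last step concrete, here is a tiny numeric sketch (the vectors and weights below are made up purely for illustration, not produced by a real model):

```python
import numpy as np

# Toy 2-dimensional value vectors for four tokens (made-up numbers)
values = {
    "Sarah":    np.array([0.9, 0.1]),
    "works":    np.array([0.2, 0.7]),
    "engineer": np.array([0.4, 0.5]),
    "She":      np.array([0.1, 0.1]),
}

# Hypothetical attention weights for the query token "She" (they sum to 1)
weights = {"Sarah": 0.85, "works": 0.05, "engineer": 0.05, "She": 0.05}

# The context vector for "She" is the weighted sum of all value vectors,
# so it is dominated by the information stored in Sarah's value vector.
context = sum(weights[token] * values[token] for token in values)
print(context)  # ~[0.80, 0.15], i.e. very close to Sarah's value vector
```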
# How Self-Attention Works
The self-attention mechanism works by comparing queries to keys to determine how relevant each key is to the current query. This comparison produces relevance scores, called attention weights, which indicate how much each position should contribute. The mechanism then uses these attention weights to create a weighted combination of the values, producing a context vector that incorporates information from the most relevant positions.
The mathematical formula for scaled dot-product attention (the type used in transformers) is:
https://preview.redd.it/gxqxyvkg8b7g1.png?width=727&format=png&auto=webp&s=9141415545031c7cb5d32acbf9dfbc4e89249cf9
where:
* **Q** is the Query matrix, representing what each token is looking for
* **K** is the Key matrix, representing what each token can provide
* **V** is the Value matrix, containing the actual information content
* **d\_k** is the dimension of the key vectors
* **Q K\^T** computes the similarity scores between queries and keys
* The division by **√d\_k** scales the scores to prevent numerical instability
* **softmax** converts the scores into a probability distribution
* The final multiplication with V produces context vectors weighted by attention
This formula enables the model to determine which parts of the input sequence are most relevant when processing each token, allowing it to capture long-range dependencies and contextual relationships.
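Here is a minimal, self-contained sketch of this formula in plain NumPy (this is not the Colab's exact code; the projection matrices W_q, W_k, and W_v are random stand-ins for the weights a real model would learn):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query with every key
    weights = softmax(scores, axis=-1)  # each row is a probability distribution
    return weights @ V, weights         # context vectors = weighted combination of values

# Toy setup: 5 tokens, embedding dimension 8, key/query/value dimension 4
seq_len, d_model, d_k = 5, 8, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d_model))   # token embeddings
W_q = rng.normal(size=(d_model, d_k))     # learned projections in a real model
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))

context, weights = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(weights.round(2))   # rows sum to 1: how much each token attends to every other token
print(context.shape)      # (5, 4): one context vector per token
```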
# Why we scale by √d_k
The scaled part of scaled dot-product attention comes from dividing the attention scores by the square root of the key dimension. This scaling is crucial for training stability.
When we compute the dot product between query and key vectors, the magnitude of the result grows with the dimension. For large embedding dimensions (typically 768, or even larger in modern models), these dot products can become very large.
Large dot products cause problems with the softmax function. When the input to softmax has very large values, the function behaves more like a step function, producing very sharp distributions where almost all attention goes to a single token. This creates two problems:
1. **Gradient issues**: Very sharp softmax distributions result in very small gradients during backpropagation, which can drastically slow down learning or cause training to stagnate.
2. **Loss of information**: When attention is too focused on a single token, the model loses the ability to attend to multiple relevant tokens simultaneously, which is important for understanding complex relationships.
By scaling the scores by √d\_k, we keep the dot products in a reasonable range, ensuring that the softmax function produces well-distributed attention weights. This allows the model to attend to multiple relevant tokens rather than focusing too heavily on just one, while also maintaining stable gradients during training.
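A quick numeric illustration of the effect (the scores are made up, assuming d_k = 64): without scaling, softmax collapses onto a single token; after dividing by √d_k, the attention stays spread across several tokens while still favoring the highest score.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d_k = 64
scores = np.array([24.0, 16.0, 8.0, 4.0])       # unscaled query-key dot products

print(softmax(scores).round(4))                 # [0.9997 0.0003 0. 0.] -> almost one-hot
print(softmax(scores / np.sqrt(d_k)).round(3))  # [0.631 0.232 0.085 0.052] -> well distributed
```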
**NOTE:** If you want to see how this looks in practice, please check the video above or the Google Colab link [https://colab.research.google.com/drive/1Ux1qrHL5DII8088tmTc4tCJfHqt2zvlw?usp=sharing](https://colab.research.google.com/drive/1Ux1qrHL5DII8088tmTc4tCJfHqt2zvlw?usp=sharing)
# Why we use Softmax
The softmax function converts the raw similarity scores (which can be any real numbers) into attention weights that represent how much focus should be placed on each token. Softmax ensures that:
1. **All attention weights sum to 1**: This creates a probability distribution, making the weights interpretable as proportions of attention.
2. **Larger scores get more attention**: Tokens with higher similarity scores receive higher attention weights, but the normalization ensures that attention is distributed across all tokens proportionally.
3. **Multiple tokens can be attended to**: Unlike a hard selection mechanism, softmax allows the model to attend to multiple relevant tokens simultaneously, which is crucial for understanding complex linguistic relationships.
**NOTE:** If you want to see how this looks in practice, please check the video above or the Google Colab link
# Summary
Self-attention is not just a component of transformer architectures; it is the fundamental mechanism that enables these models to understand context, relationships, and meaning in sequences of text. Without it, language models cannot capture the connections between words that make language meaningful. | 2025-12-15T06:16:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pn0cik/day_7_21_days_of_building_a_small_language_model/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn0cik | false | null | t3_1pn0cik | /r/LocalLLaMA/comments/1pn0cik/day_7_21_days_of_building_a_small_language_model/ | false | false | 62 | null | |
Kimi-K2, Devstral 2 and GLM 4.6 free API keys | 0 | I was recently searching for free API keys for GLM 4.6 and Kimi-K2, and I came across a new platform called Routeway.ai. The platform provides free API keys for most of the popular open-weight LLMs. This is not a paid promotion. You can check out the demo videos to see how I access the open-source models for free using the free API keys. Try it before it's gone!
Kimi-K2 : [https://youtu.be/3dWs6DIKj2o](https://youtu.be/3dWs6DIKj2o)
GLM 4.6 : [https://youtu.be/YmXlfjvbLBQ](https://youtu.be/YmXlfjvbLBQ)
Platform : [https://routeway.ai/dashboard/api-keys](https://routeway.ai/dashboard/api-keys) | 2025-12-15T06:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pn07o4/kimik2_devstral_2_and_glm_46_free_api_keys/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pn07o4 | false | null | t3_1pn07o4 | /r/LocalLLaMA/comments/1pn07o4/kimik2_devstral_2_and_glm_46_free_api_keys/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LiDsIjJTREyud9x-sk-2iI6E06ZkVn29uHb_FfpIUFk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LiDsIjJTREyud9x-sk-2iI6E06ZkVn29uHb_FfpIUFk.jpeg?width=108&crop=smart&auto=webp&s=3b0bdd3114d7ae28b024cfbdac68e92b6675a72f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LiDsIjJTREyud9x-sk-2iI6E06ZkVn29uHb_FfpIUFk.jpeg?width=216&crop=smart&auto=webp&s=4041440d6d38a3acc5ceb343cc9d13fe4b13d18a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LiDsIjJTREyud9x-sk-2iI6E06ZkVn29uHb_FfpIUFk.jpeg?width=320&crop=smart&auto=webp&s=fe4915cdc84e14e5629dd2d746ee24eb41f1f4b4', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LiDsIjJTREyud9x-sk-2iI6E06ZkVn29uHb_FfpIUFk.jpeg?auto=webp&s=d19d269c982c8aa9dad33d5336a152f311705101', 'width': 480}, 'variants': {}}]} |
How to continue the output seamlessly in the Responses API | 1 | I am trying to implement functionality where, when the AI output is cut off because it hits the max_output_tokens limit, the agent automatically sends another request so the AI can continue the output. Currently I append a user message saying "continue", and the AI then keeps responding.
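Roughly what my loop looks like right now (a simplified sketch with the OpenAI Python SDK; the model name is a placeholder and the exact status/field names may need double-checking against the docs):

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a long report on ..."}]
full_text = ""

while True:
    resp = client.responses.create(
        model="gpt-4.1-mini",        # placeholder model name
        input=messages,
        max_output_tokens=512,
    )
    full_text += resp.output_text
    if resp.status != "incomplete":  # finished on its own, not cut off by the limit
        break
    # Feed the partial answer back and ask the model to keep going
    messages.append({"role": "assistant", "content": resp.output_text})
    messages.append({"role": "user", "content": "continue"})

print(full_text)
```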
The problem is that the second output has some extra words at the beginning of the response. Is there a better method so the AI just continues exactly where the first response left off? | 2025-12-15T05:49:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pmzvum/how_to_continue_the_output_seamless_in_response/ | Technical_Pass_1858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmzvum | false | null | t3_1pmzvum | /r/LocalLLaMA/comments/1pmzvum/how_to_continue_the_output_seamless_in_response/ | false | false | self | 1 | null |
Grok-4.1 vs GPT-5.1 on calendar reasoning: 60% vs 3% accuracy | 1 | [removed] | 2025-12-15T05:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pmzfe2/grok41_vs_gpt51_on_calendar_reasoning_60_vs_3/ | credentum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmzfe2 | false | null | t3_1pmzfe2 | /r/LocalLLaMA/comments/1pmzfe2/grok41_vs_gpt51_on_calendar_reasoning_60_vs_3/ | false | false | 1 | null | |
Interesting new model: Motif-2-12.7B-Reasoning | 33 | I didn’t see much discussion of the instruct version, but the reasoning version is out and it sounds like an interesting model. They were not on my radar until recently. Any thoughts? I do think models in this size range seem to look more and more like the future.
https://huggingface.co/Motif-Technologies/Motif-2-12.7B-Reasoning | 2025-12-15T03:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pmxgok/interesting_new_model_motif2127breasoning/ | LoveMind_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmxgok | false | null | t3_1pmxgok | /r/LocalLLaMA/comments/1pmxgok/interesting_new_model_motif2127breasoning/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=108&crop=smart&auto=webp&s=2f0e38aafd7a548c6e4df57cd49c9356c00c8cd7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=216&crop=smart&auto=webp&s=46fd58d109f9a67a5012e10ba4e0192159a3be06', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=320&crop=smart&auto=webp&s=a5dbf911a44f23fad29aeed0b53cf818f584f373', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=640&crop=smart&auto=webp&s=9e69daa90eb4601c3215cb6ccac0a79f1ecd6e9a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=960&crop=smart&auto=webp&s=da3cbdd47a5842725fa30ec38d40d130c95be157', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?width=1080&crop=smart&auto=webp&s=afcb25c53e46ea11e22b598bbf6c7edf3f845a32', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CFTS2_ElozcsFa8Pc6MeQHxBIVmCQK-E1PAdqkbK_co.png?auto=webp&s=096320efb0d9c1a89407f173ca427fe0b4db0a1f', 'width': 1200}, 'variants': {}}]} |
RTX 5090 + RTX 3070. Can I set VRAM to offload to 3070 only after 5090 VRAM is maxed? | 3 | 5090 + 3070. Can I set VRAM to offload to 3070 only after 5090 VRAM is maxed? Or will it balance the model across both automatically?
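For context, this is roughly what I was planning to try with llama.cpp (the flags are real, but the ratios are just my guesses, and as far as I understand -ts splits layers proportionally rather than filling one card first and then overflowing):

```bash
# Weight the split heavily toward the 5090 (32 GB) vs the 3070 (8 GB).
# --tensor-split takes relative proportions per visible GPU; device order
# follows CUDA enumeration (can be pinned with CUDA_VISIBLE_DEVICES).
./llama-server -m model.gguf -ngl 99 --tensor-split 32,8

# Or ignore the 3070 entirely and let any overflow spill to CPU/system RAM:
./llama-server -m model.gguf -ngl 99 --tensor-split 1,0
```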
I have an 8GB 3070 lying around. I was curious if there is any use in running it alongside my 5090. I want the 3070 to only come into play once the 32GB of the 5090 is maxed out. But I'm worried that adding the 3070 will slow things down even when the 5090 still has headroom. I've only really seen people run cards of identical VRAM size. The goal is to add an extra buffer before the model starts eating into system RAM/CPU. How much of a benefit would this provide? | 2025-12-15T03:31:55 | https://www.reddit.com/r/LocalLLaMA/comments/1pmxctv/rtx_5090_rtx_3070_can_i_set_vram_to_offload_to/ | Five9Fine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmxctv | false | null | t3_1pmxctv | /r/LocalLLaMA/comments/1pmxctv/rtx_5090_rtx_3070_can_i_set_vram_to_offload_to/ | false | false | self | 3 | null |
I pitted GPT-5.2 against Opus 4.5 and Gemini 3 in a robot coding tournament | 93 | I recently revived the classic coding game Robocode (Java-based tank battles) to test how LLMs perform against top-tier robots. Unlike static coding challenges (like LeetCode), these bots must balance tradeoffs, adapt to enemy strategies in real-time, and adopt unconventional approaches to remain unpredictable.
I prompted each model to build a robot, providing iterative feedback until progress stalled, and then submitted the best versions to the Robocode Arena.
# Final results
|Model|Final ELO|Rank|Iterations to peak|
|:-|:-|:-|:-|
|Opus-4.5|1412|17|3|
|GPT-5.2-thinking|1229|25|3|
|Gemini-3-thinking|973|42|4|
|GPT-5.2-instant|953|43|3|
|Gemini-3-fast|917|46|7|
|GPT-5.1-thinking|835|49|8|
|Haiku-4.5|811|50|8|
|GPT-5.1-instant|626|53|8|
# Key findings
* GPT-5.2 is a major upgrade over 5.1, scoring nearly 400 ELO points higher on the ladder. It figured out working strategies almost immediately, whereas 5.1 really struggled to make anything competitive even with a lot of help.
* OpenAI is clearly pulling ahead of Google here; GPT-5.2 Thinking beat Gemini 3 Pro Thinking comfortably. Even the Instant GPT-5.2 model basically tied with Google's Thinking model, which was pretty surprising.
* Opus 4.5 actually took the #1 spot because it acts more like a reliable coder than a tinkerer. While GPT-5.2 kept breaking its own code trying to optimize it, Opus nailed the complex math/physics on the first try and didn't regress.
I don't have an appropriate setup for a local LLM but I will be working on testing that next. | 2025-12-15T03:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pmx49s/i_pitted_gpt52_against_opus_45_and_gemini_3_in_a/ | Inevitable_Can598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmx49s | false | null | t3_1pmx49s | /r/LocalLLaMA/comments/1pmx49s/i_pitted_gpt52_against_opus_45_and_gemini_3_in_a/ | false | false | self | 93 | null |
ElasticMM – A Serving System for Multimodal LLMs with Up to 4.2× Lower Latency (NeurIPS 2025 Oral) | 1 | [removed] | 2025-12-15T03:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pmx0sf/elasticmm_a_serving_system_for_multimodal_llms/ | EveningInevitable685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmx0sf | false | null | t3_1pmx0sf | /r/LocalLLaMA/comments/1pmx0sf/elasticmm_a_serving_system_for_multimodal_llms/ | false | false | 1 | null | |
[ Removed by moderator ] | 1 | [removed] | 2025-12-15T02:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pmwga1/beyond_retrieval_injecting_system_2_reasoning/ | Asleep_Singer_9289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmwga1 | false | null | t3_1pmwga1 | /r/LocalLLaMA/comments/1pmwga1/beyond_retrieval_injecting_system_2_reasoning/ | true | false | null | 1 | null |
Is there a cold-GPU provider where I can run my finetuned Gemma Model on? | 3 | I tried Vertex AI and the cold GPU feature which is in Beta didn't work and left me with a hefty bill.
Amazon SageMaker doesn't allow that anymore.
Is there a trusted provider that provides such service where I pay only for the time I used the GPU? | 2025-12-15T02:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1pmw18r/is_there_a_coldgpu_provider_where_i_can_run_my/ | inAbigworld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmw18r | false | null | t3_1pmw18r | /r/LocalLLaMA/comments/1pmw18r/is_there_a_coldgpu_provider_where_i_can_run_my/ | false | false | self | 3 | null |
RAG Paper 12.11 | 6 | 1. [Replace, Don't Expand: Mitigating Context Dilution in Multi-Hop RAG via Fixed-Budget Evidence Assembly](http://arxiv.org/abs/2512.10787v1)
2. [Semantic Reconstruction of Adversarial Plagiarism: A Context-Aware Framework for Detecting and Restoring "Tortured Phrases" in Scientific Literature](http://arxiv.org/abs/2512.10435v1)
3. [Cooperative Retrieval-Augmented Generation for Question Answering: Mutual Information Exchange and Ranking by Contrasting Layers](http://arxiv.org/abs/2512.10422v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-15T02:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pmw0nl/rag_paper_1211/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmw0nl | false | null | t3_1pmw0nl | /r/LocalLLaMA/comments/1pmw0nl/rag_paper_1211/ | false | false | self | 6 | null |
[ Removed by moderator ] | 0 | [removed] | 2025-12-15T02:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pmvpe2/looking_for_llm_that_can_generate_security_attack/ | robottouchgrass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmvpe2 | false | null | t3_1pmvpe2 | /r/LocalLLaMA/comments/1pmvpe2/looking_for_llm_that_can_generate_security_attack/ | false | false | null | 0 | null |
2025 Open Models Year in Review !! | 2 | First, the winning models:
1. DeepSeek R1 (@deepseek_ai): Transformed the AI world
2. Qwen 3 Family (@AlibabaGroup): The new default open models
3. Kimi K2 Family (@Kimi_Moonshot): Models that convinced the world that DeepSeek wasn't special and China would produce numerous leading models.
Runner up models: MiniMax M2 (@MiniMax__AI), GLM 4.5 (@Zai_org), GPT-OSS (@OpenAI), Gemma 3 (@GoogleAI), Olmo 3 (@allen_ai)
Honorable Mentions: Nvidia's (@nvidia) Parakeet speech-to-text model & Nemotron 2 LLM, Moondream 3 VLM (@moondreamai), Granite 4 LLMs (@IBMResearch), and HuggingFace's (@huggingface) SmolLM3.
Updated Tier list:
Frontier open labs: DeepSeek (@deepseek_ai), Qwen (@AlibabaGroup), and Kimi Moonshot (@Kimi_Moonshot)
Close behind: http://Z.ai (@Zai_org) & MiniMax AI (@MiniMax__AI) (notably none from the U.S. here and up)
Noteworthy (a mix of US & China): StepFun AI (@StepFun_ai), Ant Group's (@AntGroup / @TheInclusionAI) Inclusion AI, Meituan (@Meituan_LongCat), Tencent (@TencentHunyuan), IBM (@IBMResearch), Nvidia (@nvidia), Google (@GoogleAI), & Mistral (@MistralAI)
Then a bunch more below that, which we detail.
Predictions for 2026:
1. Scaling will continue with open models.
2. No substantive changes in the open model safety narrative.
3. Participation will continue to grow.
4. Ongoing general trends will continue w/ MoEs, hybrid attention, dense for fine-tuning.
5. The open and closed frontier gap will stay roughly the same on any public benchmarks.
6. No Llama-branded open model releases from Meta in 2026. | 2025-12-15T01:28:35 | https://www.interconnects.ai/p/2025-open-models-year-in-review | Difficult-Cap-7527 | interconnects.ai | 1970-01-01T00:00:00 | 0 | {} | 1pmuuv6 | false | null | t3_1pmuuv6 | /r/LocalLLaMA/comments/1pmuuv6/2025_open_models_year_in_review/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=108&crop=smart&auto=webp&s=88eb38116adc1fac156a96d3d82fee0f256cdd95', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=216&crop=smart&auto=webp&s=9d4c01907d85bfbbdf59d046e5a0a101a9d7a334', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=320&crop=smart&auto=webp&s=7a071bc6bf1e27cf910b4fa7814b523fc7c8c442', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=640&crop=smart&auto=webp&s=3f9c5e72e6317c507adece143fd29a8a92eb26cc', 'width': 640}, {'height': 525, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=960&crop=smart&auto=webp&s=8f55ba70c7c5c3eb4ad64efde5e92763e637ab43', 'width': 960}], 'source': {'height': 560, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?auto=webp&s=8b1c303c6e430958e6804a90e87026f31c0ebae2', 'width': 1024}, 'variants': {}}]} | |
Aaaand... is gone... | 885 | 2025-12-15T01:18:27 | HumanDrone8721 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pmungj | false | null | t3_1pmungj | /r/LocalLLaMA/comments/1pmungj/aaaand_is_gone/ | false | false | default | 885 | {'enabled': True, 'images': [{'id': 'g7ahg4per97g1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=108&crop=smart&auto=webp&s=965d98dc62504237c08d115f129e1373b6afbead', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=216&crop=smart&auto=webp&s=aa6a7bca3752afb524485213ba8723203b458e0a', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=320&crop=smart&auto=webp&s=82b19c5917a695b7d9db024bfb3adcb2688311e6', 'width': 320}, {'height': 369, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=640&crop=smart&auto=webp&s=3cb28b1a23cbf35375171b5f0b3fcb2d63310818', 'width': 640}, {'height': 553, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=960&crop=smart&auto=webp&s=1451c3e2d4db7a2c8235df1cee74178e532e95ba', 'width': 960}, {'height': 623, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?width=1080&crop=smart&auto=webp&s=b02e9444ec391dbf0e853b50e5ba89b9516120f5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://preview.redd.it/g7ahg4per97g1.jpeg?auto=webp&s=1df1a69dd3b369e3fc948d4ae6909ab1402da59b', 'width': 1170}, 'variants': {}}]} | ||
Free ComfyUI node that generates detailed image prompts using Qwen3 (runs locally) | 0 | Built a prompt generator that runs entirely on your machine via Ollama.
How it works:
- Type a basic concept ("cyberpunk market")
- Pick a style preset
- Get a detailed prompt with lighting, composition, and colors
No API costs, no data leaves your machine. Open source.
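Under the hood the idea is roughly this (not the node's actual code, just a sketch against Ollama's REST API; the model tag is whatever Qwen3 variant you have pulled locally):

```python
import json
import urllib.request

def expand_prompt(concept: str, style: str) -> str:
    """Ask a local Qwen3 model (via Ollama) to expand a short concept into a
    detailed image prompt covering lighting, composition and color."""
    payload = {
        "model": "qwen3:8b",  # assumption: any locally pulled Qwen3 tag works
        "prompt": (
            f"Expand this image concept into one detailed, comma-separated image "
            f"prompt in a {style} style, covering lighting, composition and color "
            f"palette. Concept: {concept}"
        ),
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["response"]

print(expand_prompt("cyberpunk market", "cinematic"))
```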
Video walkthrough: [https://youtu.be/FhdmvyNm7OE](https://youtu.be/FhdmvyNm7OE)
Happy to answer questions! | 2025-12-15T01:16:46 | https://youtube.com/watch?v=FhdmvyNm7OE&si=ZxJWJX96_fL9cX2J | Redlimbic | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1pmum93 | false | {'oembed': {'author_name': 'LIMBICNATION ART', 'author_url': 'https://www.youtube.com/@LIMBICNATIONARTIST', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FhdmvyNm7OE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Generate Better AI Image Prompts with Qwen3 (Local & Free)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FhdmvyNm7OE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Generate Better AI Image Prompts with Qwen3 (Local & Free)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1pmum93 | /r/LocalLLaMA/comments/1pmum93/free_comfyui_node_that_generates_detailed_image/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'w23je8EQGq3X7imi6--LgJ-qgeMwj8OxrLNpmt5b-vY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w23je8EQGq3X7imi6--LgJ-qgeMwj8OxrLNpmt5b-vY.jpeg?width=108&crop=smart&auto=webp&s=6e982d52465c642472e8010c55a2ad18c10ded64', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/w23je8EQGq3X7imi6--LgJ-qgeMwj8OxrLNpmt5b-vY.jpeg?width=216&crop=smart&auto=webp&s=d97fa396dea7e1e91c3aac2bd3c7b83a4132f8bb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/w23je8EQGq3X7imi6--LgJ-qgeMwj8OxrLNpmt5b-vY.jpeg?width=320&crop=smart&auto=webp&s=9bfbd6405c87db794f08a75ab4a4e592a8fba3bb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/w23je8EQGq3X7imi6--LgJ-qgeMwj8OxrLNpmt5b-vY.jpeg?auto=webp&s=f69c420ace6cec351bd6d0ba1d447255291ed0de', 'width': 480}, 'variants': {}}]} |
Can I run a local LLM in Cursor on my MBP M4 Max 64GB? | 2 | I'm getting the MacBook Pro mentioned above and wanted to know if there is any solid local LLM available that can run in Cursor, so I can use Cursor offline. | 2025-12-15T01:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pmuj1c/can_i_run_a_local_llm_on_cursor_on_my_mbp_m4_max/ | SouthPoleHasAPortal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmuj1c | false | null | t3_1pmuj1c | /r/LocalLLaMA/comments/1pmuj1c/can_i_run_a_local_llm_on_cursor_on_my_mbp_m4_max/ | false | false | self | 2 | null |
Ryzen AI Max+ 395 Benchmarks | 25 | Hi community, I’m thinking about buying the Ryzen AI Max+ 395 platform with 128gb, but I’m worried it might be too slow (<10 t/s). I couldn’t find any benchmarks that use the full available context. If any of you are running this system, could you share some numbers, specifically the maximum context you can achieve and the prompt processing + generation speed when you max out the context window?
I’m interested in 30B, 70B, and 120B models. I’d really appreciate it if you could share your experience, since this is a major investment for me.
Thanks everyone, and have a good discussion! | 2025-12-15T01:07:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pmuf22/ryzen_ai_max_395_benchmarks/ | Affectionate-Leg8133 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmuf22 | false | null | t3_1pmuf22 | /r/LocalLLaMA/comments/1pmuf22/ryzen_ai_max_395_benchmarks/ | false | false | self | 25 | null |
What GPU setup do I need to host this model? | 2 | Until now all the models I've consumed have been through APIs, either first-party ones (OpenAI, Anthropic, etc) or open-weight models through OpenRouter.
Now, the amount of models available on those platforms is limited, so I'm evaluating hosting some of the models myself on rented GPUs on platforms like Runpod or similar.
**I'd like to get some advice on how to calculate the amount of GPUs and which GPUs to run the models, variables like quantization for the model, and which inference engine is the most used nowadays.**
For example, I need a good RP model (been looking at this one [https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated) or variations) and **would need to be able to serve 1 request per second** (60 per minute, so there would be multiple requests at the same time) through an OpenAI compatible API, with a respectable context length.
Ideally the cost should be close to the \~$1100 per month I currently pay for API usage of a similar model on OpenRouter (though that's for a smaller model, so spending more for this one would be acceptable).
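My rough back-of-envelope so far, in case it helps (weights plus a crude KV-cache estimate; the layer/head numbers are assumptions, so please correct me if they're off):

```python
# Rough VRAM estimate for serving a ~27B dense model (all numbers are assumptions)
params_b    = 27
bytes_per_w = 2                                  # FP16/BF16; ~1 for 8-bit, ~0.55 for 4-bit
weights_gb  = params_b * bytes_per_w             # ~54 GB at FP16, ~27 GB at 8-bit

# KV cache per concurrent request:
# layers * 2 (K and V) * kv_heads * head_dim * bytes * context_length
layers, kv_heads, head_dim = 60, 8, 128
ctx_len, kv_bytes = 8192, 2
kv_gb_per_req = layers * 2 * kv_heads * head_dim * kv_bytes * ctx_len / 1e9   # ~2 GB

concurrent = 10                                  # ~1 req/s if each generation takes ~10 s
total_gb = weights_gb + concurrent * kv_gb_per_req
print(total_gb)                                  # ~74 GB at FP16 -> e.g. 2x 48 GB or 1x 80 GB card
```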
**I'd really appreciate any insights and advice.** | 2025-12-15T01:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pmuc5z/what_gpu_setup_do_i_need_to_host_this_model/ | crowdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmuc5z | false | null | t3_1pmuc5z | /r/LocalLLaMA/comments/1pmuc5z/what_gpu_setup_do_i_need_to_host_this_model/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=108&crop=smart&auto=webp&s=f4f858446e7404e9efcf8885fe8dd7db7220d78e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=216&crop=smart&auto=webp&s=dc8ff8cae04c38b8d7498f79c2bb9314acc83481', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=320&crop=smart&auto=webp&s=ff058d348d89daac3f81ea7eb3436ebc8fdf8478', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=640&crop=smart&auto=webp&s=aa85b71288cfd5f4b0faa3cd1f9c016980d48e24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=960&crop=smart&auto=webp&s=1d9cbd785791c9d261b18e45b72e7d6457cd8094', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=1080&crop=smart&auto=webp&s=15b27fca82b4d325695d72d149a2d73e61faf454', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?auto=webp&s=d63ef92730b407e525c890722648bf11e9d93c06', 'width': 1200}, 'variants': {}}]} |
Llama 3.2 3B fMRI | 5 | Implemented dataset swapping and per-layer isolation, so I can view scans side-by-side and start spotting trends.
This is early, but after I add a few more turns worth of logs, would anyone be interested in poking at this with me? I’m trying to move into the interpretability space, so feedback (or “you’re doing it wrong”) would be super useful.
[Left: baseline (layer 1, simple greeting prompt). Right: Turn 01 (paragraph-length creative writing). Same model, different internal structure.](https://preview.redd.it/640b79utk97g1.png?width=1650&format=png&auto=webp&s=30be1c7444d26b4079fb092be0b5d0f62a945c98)
| 2025-12-15T00:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pmtypx/llama_32_3b_fmri/ | Due_Hunter_4891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmtypx | false | null | t3_1pmtypx | /r/LocalLLaMA/comments/1pmtypx/llama_32_3b_fmri/ | false | false | 5 | null | |
toMCP.org – Open source project, converting any website or docs into an MCP server in one click | 17 | **I'm sharing a simple open-source tool I built that lets you convert any website or docs page into an MCP server by adding 'toMCP\[.\]org' before any URL.**
You can then chat directly with a page or add the config to Cursor/Claude to pipe documentation straight into your context.
I built this after trying to connect a tool with 100s of API endpoints where the AI kept hallucinating even with links, forcing me to manually copy-paste just to get it right.
**How this differs from web_fetch:**
- Signal-to-Noise: Standard fetch tools usually dump raw HTML (navbars, scripts, footer noise) into the context. This wastes tokens and distracts the model. toMCP runs the page through a readability parser and converts it to clean Markdown before sending it to the AI.
- Resource vs. Tool: A fetch tool is an *action* the AI has to decide to take (and often forgets to). This tool exposes the page as an MCP Resource. This means the documentation is pinned as a permanent, read-only context that is always available to the model.
https://reddit.com/link/1pmtbos/video/rcu4owxqf97g1/player
Enjoy!
| 2025-12-15T00:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pmtbos/tomcporg_open_source_project_converting_any/ | Hot-Lifeguard-4649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmtbos | false | null | t3_1pmtbos | /r/LocalLLaMA/comments/1pmtbos/tomcporg_open_source_project_converting_any/ | false | false | self | 17 | null |
Prepend tomcp.org/ to any URL to instantly turn it into an MCP server | 0 | Hey everyone,
Sharing a simple open source tool I built, which lets you convert any website or docs page into an MCP server by adding [tomcp.org/](http://tomcp.org/) before any URL.
You can then chat directly with a page, or add the config to Cursor/Claude to pipe documentation straight into your context.
How this differs from web_fetch:
- Signal-to-Noise: Standard fetch tools usually dump raw HTML (navbars, scripts, footer noise) into the context. This wastes tokens and distracts the model. toMCP runs the page through a readability parser and converts it to clean Markdown before sending it to the AI.
- Resource vs. Tool: A fetch tool is an *action* the AI has to decide to take (and often forgets to). This tool exposes the page as an MCP Resource. This means the documentation is pinned as a permanent, read-only context that is always available to the model.
Repo [https://github.com/Ami3466/tomcp](https://github.com/Ami3466/tomcp)
Enjoy!
https://reddit.com/link/1pmsrgh/video/1pczufdo8u6g1/player
| 2025-12-14T23:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pmsrgh/prepend_tomcporg_to_any_url_to_instantly_turn_it/ | Hot-Lifeguard-4649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmsrgh | false | null | t3_1pmsrgh | /r/LocalLLaMA/comments/1pmsrgh/prepend_tomcporg_to_any_url_to_instantly_turn_it/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Ay7QmKQvyIVjVaBYwCuL-k8Gst6joDDKy_wLArdnXfk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ay7QmKQvyIVjVaBYwCuL-k8Gst6joDDKy_wLArdnXfk.png?width=108&crop=smart&auto=webp&s=4a6003c6322426163ebe21d9d670f6f17b39a1e1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ay7QmKQvyIVjVaBYwCuL-k8Gst6joDDKy_wLArdnXfk.png?width=216&crop=smart&auto=webp&s=bee9b8d668c7be247e08b49b53c960c09afdbfbc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ay7QmKQvyIVjVaBYwCuL-k8Gst6joDDKy_wLArdnXfk.png?width=320&crop=smart&auto=webp&s=57cead7dee7a9439fe708a5e0cb3bec878a569b7', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/Ay7QmKQvyIVjVaBYwCuL-k8Gst6joDDKy_wLArdnXfk.png?auto=webp&s=fac52257da629ad9debc1459344106eb4cf7b7af', 'width': 500}, 'variants': {}}]} | |
“Keep your quantized models from melting under load: drop-in Express/FastAPI middleware that refuses instead of hallucinating” | 1 | [removed] | 2025-12-14T23:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pms54q/keep_your_quantized_models_from_melting_under/ | CulpritChaos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pms54q | false | null | t3_1pms54q | /r/LocalLLaMA/comments/1pms54q/keep_your_quantized_models_from_melting_under/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=108&crop=smart&auto=webp&s=e864e1ae9ddace2c69347233a1a723ca0ad9bc16', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=216&crop=smart&auto=webp&s=08223b3332b42252a786019815b5883aaf9d77e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=320&crop=smart&auto=webp&s=d84b84c74da5f132db81c89446e1f175038a298e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=640&crop=smart&auto=webp&s=8023b5d65ab9d79c999ca7c3dc27a58d60c62a3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=960&crop=smart&auto=webp&s=7cec827b64b358b376fc7595dfae2172de3ba214', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?width=1080&crop=smart&auto=webp&s=860ac7e9d47b5590a3543a9bac2ccae0e9d2b430', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zn3nSrvbyys_vYaeWhUrpPaL-xlMZeWINoWJityq8D0.png?auto=webp&s=874c8e804b9793f9a26be665a4546c618736fb30', 'width': 1200}, 'variants': {}}]} |
Interlock – a circuit breaker for AI systems that refuses when confidence collapses | 1 | [removed] | 2025-12-14T22:55:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pmrlg8 | false | null | t3_1pmrlg8 | /r/LocalLLaMA/comments/1pmrlg8/interlock_a_circuit_breaker_for_ai_systems_that/ | false | false | default | 1 | null | ||
Interlock – a circuit breaker for AI systems that refuses when confidence collapses | 1 | [removed] | 2025-12-14T22:53:06 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pmrj4f | false | null | t3_1pmrj4f | /r/LocalLLaMA/comments/1pmrj4f/interlock_a_circuit_breaker_for_ai_systems_that/ | false | false | default | 1 | null | ||
vLLM Rocm and 7900 XTX | 15 | Am I the only one deeply disappointed with vLLM and AMD?
Even with vLLM 0.11 and ROCm 7.0, unquantized models are basically the only thing you can put into production on a 7900 XTX.
No matter which other model format you try (QAT, GGUF, etc.), the performance is crap.
They do work, but throughput is just crazy bad under simultaneous requests.
I can get a decent 10 to 15 requests per second with 2x 7900 XTX and unquantized Gemma 3 12B, but switch to, say, 27B QAT Q4 and it drops to about 1 request per second. That is nowhere near what the cards are actually capable of; it should be at least around 5 requests per second with 128-token inputs and outputs.
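A minimal way to measure this kind of concurrent throughput against a local vLLM OpenAI-compatible endpoint looks roughly like this (the endpoint URL, served model name, and concurrency level below are assumptions, not the exact setup behind the numbers above):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

# Assumes vLLM is serving an OpenAI-compatible API at this address.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def one_request(i: int):
    return client.chat.completions.create(
        model="google/gemma-3-12b-it",  # whatever model vLLM was launched with
        messages=[{"role": "user", "content": "Summarise MoE models in ~100 tokens."}],
        max_tokens=128,
    )

n, workers = 64, 16
start = time.time()
with ThreadPoolExecutor(max_workers=workers) as pool:
    list(pool.map(one_request, range(n)))
print(f"{n / (time.time() - start):.2f} requests/sec")
```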
So anything other than unquantized FP16 performs badly with ROCm 7.0 and vLLM 0.11 (the latest official vLLM ROCm Docker image, updated two days ago). Yes, I have tried nightly builds with newer software, but those don't work out of the box.
So I think I need to just give up, sell all this AMD consumer junk, and go with an RTX Pro. So sad.
Fkuk you MAD and mVVL | 2025-12-14T22:38:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pmr7f0/vllm_rocm_and_7900_xtx/ | Frosty_Chest8025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pmr7f0 | false | null | t3_1pmr7f0 | /r/LocalLLaMA/comments/1pmr7f0/vllm_rocm_and_7900_xtx/ | false | false | self | 15 | null |
[ Removed by moderator ] | 0 | 2025-12-14T21:45:37 | https://martinalderson.com/posts/ai-agents-are-starting-to-eat-saas/ | malderson | martinalderson.com | 1970-01-01T00:00:00 | 0 | {} | 1pmpybm | false | null | t3_1pmpybm | /r/LocalLLaMA/comments/1pmpybm/ai_agents_are_starting_to_eat_saas/ | false | false | null | 0 | null | |
2025 Open Models Year in Review | 73 | Florian and I worked hard to follow what's happening this year. We put together our final year in review. It's focused on people training models end to end, and our rankings downweight noncommercial licenses and other restrictions that make the models harder to use. A summary is in the text here.
What a year! We're back with an updated open model builder tier list, our top models of the year, and our predictions for 2026.
First, the winning models:
1. DeepSeek R1: Transformed the AI world
2. Qwen 3 Family: The new default open models
3. Kimi K2 Family: Models that convinced the world that DeepSeek wasn't special and China would produce numerous leading models.
Runner up models: MiniMax M2, GLM 4.5, GPT-OSS, Gemma 3, Olmo 3
Honorable Mentions: Nvidia's Parakeet speech-to-text model & Nemotron 2 LLM, Moondream 3 VLM, Granite 4 LLMs, and HuggingFace's SmolLM3.
Tier list:
Frontier open labs: DeepSeek, Qwen, and Kimi Moonshot
Close behind: [Z.ai](http://Z.ai) & MiniMax AI (notably none from the U.S.)
Noteworthy (a mix of US & China): StepFun AI, Ant Group's Inclusion AI, Meituan, Tencent, IBM, Nvidia, Google, & Mistral
Then a bunch more below that, which we detail.
Predictions for 2026:
1. Scaling will continue with open models.
2. No substantive changes in the open model safety narrative.
3. Participation will continue to grow.
4. Ongoing general trends will continue w/ MoEs, hybrid attention, dense for fine-tuning.
5. The open and closed frontier gap will stay roughly the same on any public benchmarks.
6. No Llama-branded open model releases from Meta in 2026.
Very appreciative of this community through both my hats at Interconnects & Ai2. | 2025-12-14T21:43:46 | https://www.interconnects.ai/p/2025-open-models-year-in-review | robotphilanthropist | interconnects.ai | 1970-01-01T00:00:00 | 0 | {} | 1pmpwmh | false | null | t3_1pmpwmh | /r/LocalLLaMA/comments/1pmpwmh/2025_open_models_year_in_review/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=108&crop=smart&auto=webp&s=88eb38116adc1fac156a96d3d82fee0f256cdd95', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=216&crop=smart&auto=webp&s=9d4c01907d85bfbbdf59d046e5a0a101a9d7a334', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=320&crop=smart&auto=webp&s=7a071bc6bf1e27cf910b4fa7814b523fc7c8c442', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=640&crop=smart&auto=webp&s=3f9c5e72e6317c507adece143fd29a8a92eb26cc', 'width': 640}, {'height': 525, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?width=960&crop=smart&auto=webp&s=8f55ba70c7c5c3eb4ad64efde5e92763e637ab43', 'width': 960}], 'source': {'height': 560, 'url': 'https://external-preview.redd.it/teWgyqb-RDJIuhi9RJ3H3WjTJdfmLNMEjYONqxc6ag8.jpeg?auto=webp&s=8b1c303c6e430958e6804a90e87026f31c0ebae2', 'width': 1024}, 'variants': {}}]} |