| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No tool calling on 8GB GPUs | 0 | I have tried various models that can run on an 8GB GPU, and everything is brain dead.
Goal:
1. call tools, get results, do something with results, or
2. Get a prompt, decide what tool to call, do something cool with the results
I have had a hint of success with the tiniest of use cases, basically where no decisions need to be made: there is only one tool; call the tool, report on the results.
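For reference, here is a minimal sketch of that single-tool loop, assuming an OpenAI-compatible local server (llama.cpp, LM Studio or Ollama) at localhost:8080/v1. The base URL, model name and `get_weather` tool are placeholders of mine, not a recommendation of any particular model:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    # Placeholder tool; a real implementation would hit a weather API.
    return f"Sunny, 20C in {city}"

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
resp = client.chat.completions.create(model="local", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the model decided to call the tool
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    messages.append(msg)  # the SDK accepts the assistant message object here
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # Second pass: the model reports on the tool result.
    final = client.chat.completions.create(model="local", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```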
I would love to find out I’m wrong. I am not worried about high token counts, but it does need to run on gpu. | 2025-12-04T00:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pdkzdj/no_tool_calling_on_8gb_gpus/ | newz2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdkzdj | false | null | t3_1pdkzdj | /r/LocalLLaMA/comments/1pdkzdj/no_tool_calling_on_8gb_gpus/ | false | false | self | 0 | null |
Langfuse Issues? | 2 | Was wondering if anyone else was encountering issues with langfuse? Was having an issue on Hugging Face relating to Langfuse credentials. Saw there was a partial outage today that should be resolved, but still running into the same issue. | 2025-12-03T23:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pdklni/langfuse_issues/ | Fearless-Intern-2344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdklni | false | null | t3_1pdklni | /r/LocalLLaMA/comments/1pdklni/langfuse_issues/ | false | false | self | 2 | null |
Red Team MCP - Open source multi-agent AI platform | 0 | Just open-sourced Red Team MCP - a multi-agent collaboration platform that connects to multiple local LLM tools (any OpenAI-compatible endpoint) plus 68 AI providers (Anthropic, OpenAI, Google, Groq, DeepSeek, etc.) through a unified API.
**Key features:**
* 5 coordination modes: Pipeline, Ensemble, Debate, Swarm, Hierarchical
* Predefined agent teams (Writing, Research, Technical, Executive, plus more you can create yourself)
* MCP integration for VS Code & Claude Desktop (Works but not fully featured yet)
* Web dashboard with usage/cost tracking
* Docker support
Looking for feedback! What features would you want to see?
[https://github.com/RedTeamMCP/RedTeam-MCP](https://github.com/RedTeamMCP/RedTeam-MCP) | 2025-12-03T23:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pdkj8o/red_team_mcp_open_source_multiagent_ai_platform/ | fracken_a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdkj8o | false | null | t3_1pdkj8o | /r/LocalLLaMA/comments/1pdkj8o/red_team_mcp_open_source_multiagent_ai_platform/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=108&crop=smart&auto=webp&s=e8a867f1084a66032bb1e82e93f58bb5beba54b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=216&crop=smart&auto=webp&s=4ea2f2c788fa579f912175e178123f383fbd6f62', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=320&crop=smart&auto=webp&s=12daf75eaba73abb1f24db079090cad58b7b87ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=640&crop=smart&auto=webp&s=75323b0485abc03674f37f85af148f12ab06cfbb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=960&crop=smart&auto=webp&s=d13d9ce21a92e7d289c3eb5c62d445b56acddb87', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?width=1080&crop=smart&auto=webp&s=2139c09d78080b14e2bbace017461f0bdccb2a7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8rxBbYoD1syoOEqe7MogFH4-_IAjrIJvFUjjT0AnOT8.png?auto=webp&s=93cece5cee9ba1b4d777d96f49b28711bedfced0', 'width': 1200}, 'variants': {}}]} |
[Resource] 20,000+ Pages of U.S. House Oversight Epstein Estate Docs (OCR'd & Cleaned for RAG/Analysis) | 69 | I’ve processed the recent release of 20,000+ pages of documents regarding the Epstein Estate from the U.S. House Oversight Committee. The goal is to make these scattered government files accessible for journalists and researchers using open-source tools.
Note: this was originally shared here by another user, then uploaded to Hugging Face. The original Hugging Face repo has since been removed. This is simply a clone, hosted on GitHub and my Hugging Face account, with a Gradio app for interacting with it.
The original release contained mixed file formats and nested folders. This dataset converts images/PDFs to text (via Tesseract OCR) and standardizes them into a single CSV format.
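For context, the conversion step described above (images/PDFs to text via Tesseract, then into a single CSV) can be approximated with a short script like this. The paths and column names are my own illustration, not the repo's actual pipeline:

```python
import csv
from pathlib import Path

from pdf2image import convert_from_path   # pip install pdf2image (needs poppler)
import pytesseract                        # pip install pytesseract (needs tesseract)

rows = []
for pdf in Path("docs").glob("*.pdf"):
    pages = convert_from_path(str(pdf))               # render each PDF page to an image
    text = "\n".join(pytesseract.image_to_string(p) for p in pages)
    rows.append({"source_file": pdf.name, "text": text})

with open("corpus.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["source_file", "text"])
    writer.writeheader()
    writer.writerows(rows)
```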
Searchable App: A Gradio browser to search the corpus without downloading the full set.
[Hugging Face Gradio App and Repo](https://huggingface.co/spaces/theelderemo/epstein-files)
[Github Mirror](https://github.com/theelderemo/Epstein-files) | 2025-12-03T23:30:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pdk4zx/resource_20000_pages_of_us_house_oversight/ | Ok-District-1330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdk4zx | false | null | t3_1pdk4zx | /r/LocalLLaMA/comments/1pdk4zx/resource_20000_pages_of_us_house_oversight/ | false | false | self | 69 | {'enabled': False, 'images': [{'id': 'e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=108&crop=smart&auto=webp&s=8f25bc2126f4371ef108ac2641a551b199e0b32f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=216&crop=smart&auto=webp&s=79ec159cf05976f6892099f682594df40b69cc63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=320&crop=smart&auto=webp&s=d0b2c1cc2576293edbb720fba5fbaf05febafc6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=640&crop=smart&auto=webp&s=70faff4a996d91882df6e0ed213a37322d3de737', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=960&crop=smart&auto=webp&s=a23297a30ce19bb20eedfa725a340e09a3ea6f61', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?width=1080&crop=smart&auto=webp&s=aef0d1e6c2002955f861f48aecb4dafb22582981', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/e3xelc5H5OgEIHdt8pMTOh9zYqsg_N9vuwIpRChh5xw.png?auto=webp&s=da153dcc4edf5abe74f2ef9afc15dd6e1201e1de', 'width': 1280}, 'variants': {}}]} |
AI Runner v5.0.5 | 0 | This update allows you to run AI Runner as a headless server, but mask AI Runner as Ollama so that other services such as VSCode will think it is interfacing with Ollama, allowing you to select it as "Ollama" from the model manager in VSCode Copilot Chat.
Note: I haven't been able to get it to work well with agents yet, but it does work with tools if you choose the right model.
After you follow the installation instructions, you can use `airunner-hf-download` to list the available models and then `airunner-hf-download <name>` to download one. You might need to activate it by running the GUI with `airunner` and selecting it in the chat prompt widget dropdown box. Then you can close the GUI and run `airunner-headless --ollama-mode` - after the server starts, it will be available within VS Code by simply choosing "Ollama".
Obviously, you can't have the real ollama running at the same time. | 2025-12-03T22:41:10 | https://github.com/Capsize-Games/airunner/compare/v5.0.4...v5.0.5 | w00fl35 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pdixg8 | false | null | t3_1pdixg8 | /r/LocalLLaMA/comments/1pdixg8/ai_runner_v505/ | false | false | default | 0 | null |
Kiwix RAG: Terminal Chat Interface with Local Kiwix Content Integration | 1 | [https://github.com/imDelivered/KiwixRAG](https://github.com/imDelivered/KiwixRAG)
I've developed a terminal-based chat application that integrates local Kiwix content with Ollama using Retrieval Augmented Generation (RAG). The app automatically injects relevant Kiwix content into AI responses to improve factual accuracy. | 2025-12-03T22:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pdiuag/kiwix_rag_terminal_chat_interface_with_local/ | Smart-Competition200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdiuag | false | null | t3_1pdiuag | /r/LocalLLaMA/comments/1pdiuag/kiwix_rag_terminal_chat_interface_with_local/ | false | false | self | 1 | null |
The Best Open Weights Coding Models of 2025 | 76 | Hi all, I'm back with uncontaminated evals for DeepSeek-V3.2, Kimi K2 Thinking, and MiniMax M2. (We caught GLM 4.6 last time around.)
If you just want the numbers, you can find them for the finalists [here](https://brokk.ai/power-ranking?models=dsv3.2%2Cglm4.6-fp8%2Ck2-thinking%2Cm2) and for everyone else [here](https://brokk.ai/power-ranking?dataset=openround). | 2025-12-03T22:29:44 | https://blog.brokk.ai/the-best-open-weights-coding-models-of-2025/ | mr_riptano | blog.brokk.ai | 1970-01-01T00:00:00 | 0 | {} | 1pdin3b | false | null | t3_1pdin3b | /r/LocalLLaMA/comments/1pdin3b/the_best_open_weights_coding_models_of_2025/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=108&crop=smart&auto=webp&s=4f4a40a875171084a904d892a7cbc21f60da9edf', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=216&crop=smart&auto=webp&s=4d957f54f2274aa44506ed6a8ac0295dc9f4affa', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=320&crop=smart&auto=webp&s=451e0933974f0b05eeb78953064ccde9f8007a32', 'width': 320}, {'height': 329, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=640&crop=smart&auto=webp&s=48a8af816e33acba1ab83ce8086f21346740c37d', 'width': 640}, {'height': 494, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=960&crop=smart&auto=webp&s=1854ac28ec268e89d9b7dc45d7c1857316fd397e', 'width': 960}, {'height': 556, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?width=1080&crop=smart&auto=webp&s=28e9b6cbf2038f1e3d67ce0b687a63a833f55da3', 'width': 1080}], 'source': {'height': 618, 'url': 'https://external-preview.redd.it/wxWktbYwfvy3j0Y1BhifZGPwnHUnY7JGZEyykx4N_HM.png?auto=webp&s=e885b340d1e635bf4a505b52d9e782b8357ce597', 'width': 1200}, 'variants': {}}]} | |
DeepSeek-OCR – Apple Metal Performance Shaders (MPS) & CPU Support | 42 | I recently updated DeepSeek-OCR to support Apple Metal (MPS) and CPU acceleration. I wanted to share this in case anyone else has been looking to run it efficiently on macOS.
To make it easier to use, I also forked an existing desktop client and applied the patch. You can check it out here:
[https://github.com/Dogacel/deepseek-ocr-client-macos](https://github.com/Dogacel/deepseek-ocr-client-macos) | 2025-12-03T22:13:40 | https://huggingface.co/Dogacel/DeepSeek-OCR-Metal-MPS | Dogacel | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pdi8o7 | false | null | t3_1pdi8o7 | /r/LocalLLaMA/comments/1pdi8o7/deepseekocr_apple_metal_performance_shaders_mps/ | false | false | 42 | {'enabled': False, 'images': [{'id': '9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=108&crop=smart&auto=webp&s=f5a1a5450ada78dfc9f4054af8caaeff2cf9272d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=216&crop=smart&auto=webp&s=a4fa441fa4b97b4d8b305b62a7e192a9816bca44', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=320&crop=smart&auto=webp&s=34b67c39116aa65e2b30491fa9b5ffed11347b73', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=640&crop=smart&auto=webp&s=9f593757a501d49b05111d714ec689f7acf643d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=960&crop=smart&auto=webp&s=b19f4ed023e3df6ba9f29ead9251f07bc0128ecb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?width=1080&crop=smart&auto=webp&s=2ef75afa11bd30faab4eb9ae1901be0f51379456', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9aiVAPD5fnnzElgE74nQoRx4aoPflI7CwyCPycl2BLA.png?auto=webp&s=aaff4bbfa5ec6656218e12588a758bbc3dd61502', 'width': 1200}, 'variants': {}}]} | |
I hate waiting for ai to "Think" | 0 | I built a wrapper that gives you free access to reasoning models. Instead of staring at a loading screen, you get a moment of entertainment while it thinks — with no extra wait time. Posted a quick demo.
Curious what everyone thinks | 2025-12-03T22:11:23 | https://v.redd.it/ubt4gq42c25g1 | palmtreegod23 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pdi6lp | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ubt4gq42c25g1/DASHPlaylist.mpd?a=1767391900%2CM2Q1MmM3NjRkNWQzOWEwMjdlZDUzNTc3MmUxZTVkNzFlNGU0NjE3ODE2NTYzMDQzOThjNjAwZDFjOTQ0YjQ0YQ%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/ubt4gq42c25g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/ubt4gq42c25g1/HLSPlaylist.m3u8?a=1767391900%2CZWViYWZjODg5YTY1OGU3ZWEzM2U0M2Y3YmFlODBjZDU5YTRhOGQ1MjM4NmFkZjY5ODU2N2U5YTllZWM4ZTEzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ubt4gq42c25g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1pdi6lp | /r/LocalLLaMA/comments/1pdi6lp/i_hate_waiting_for_ai_to_think/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=108&crop=smart&format=pjpg&auto=webp&s=7048891ad80149aa23a5ced2837f9e09f3999acf', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=216&crop=smart&format=pjpg&auto=webp&s=c873aafdb8416ec753fc3909e84adc3be116700e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=320&crop=smart&format=pjpg&auto=webp&s=b5d60e3c46ed7784a14b8db45ae037f167f20544', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b6262481d66b98c26e87cb412f3788a74a9d077', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=960&crop=smart&format=pjpg&auto=webp&s=8497f366efa5e8f84de4153f1a2c87eefaaf8607', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5d19e87e1c5a80a92e62f26a50b92f564b229c7e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MWU0cXhzNDJjMjVnMVFNfTGFXeK0cFof1RWE1c2xmhQDcyIDjRppRZoKEhBV.png?format=pjpg&auto=webp&s=b486b2d1a119cf5ec4f55de9c52b824b6c24d3ce', 'width': 1280}, 'variants': {}}]} | |
Why doesn't deepseek release a smaller air model? Because they are focused at research? | 13 | Why doesn't DeepSeek release a smaller Air-style model, like a 120B-A10B MoE model or a 32B dense model? It seems like they are mainly focused on research and don't frequently release small models, unlike GLM and Qwen | 2025-12-03T21:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhuyy/why_doesnt_deepseek_release_a_smaller_air_model/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhuyy | false | null | t3_1pdhuyy | /r/LocalLLaMA/comments/1pdhuyy/why_doesnt_deepseek_release_a_smaller_air_model/ | false | false | self | 13 | null |
Building a local model machine, wanted some feedback | 1 | [removed] | 2025-12-03T21:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhund/building_a_local_model_machine_wanted_some/ | The_M0nk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhund | false | null | t3_1pdhund | /r/LocalLLaMA/comments/1pdhund/building_a_local_model_machine_wanted_some/ | false | false | self | 1 | null |
Is streamlit good for an agentic UI? | 1 | I am not sure if this is the right place to post this but here goes:
I want to build a local-first agentic framework like Cline or Cursor, but geared more towards local LLMs than cloud API providers (those would be optional), that makes up for the shortcomings of the current frameworks available.
I already have enough experience building a lot of different applications involving different AI models so I can definitely create useful tools devs and users could use locally to maximize their agents' output.
All I really need is a robust UI solution, not something like Tkinter or Gradio. When I tried out Streamlit it was so impressive in its responsiveness and ease of use I might just pick this one as a foundation for an agentic framework.
Does anyone have any experience with streamlit they can share? Or do you know of a better, easier, modular alternative? I need a UI solution that is modular and scalable to accommodate new features over time. | 2025-12-03T21:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhplj/is_streamlit_good_for_an_agentic_ui/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhplj | false | null | t3_1pdhplj | /r/LocalLLaMA/comments/1pdhplj/is_streamlit_good_for_an_agentic_ui/ | false | false | self | 1 | null |
Qwen3-Coder-30B-A3B-Instruct[-FP8] config.json updated | 14 | [Update config.json · Qwen/Qwen3-Coder-30B-A3B-Instruct at b2cff64](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/commit/b2cff646eb4bb1d68355c01b18ae02e7cf42d120)
Unsure of the implications of this, maybe someone here can explain what's changed? | 2025-12-03T21:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhmiw/qwen3coder30ba3binstructfp8_configjson_updated/ | MutantEggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhmiw | false | null | t3_1pdhmiw | /r/LocalLLaMA/comments/1pdhmiw/qwen3coder30ba3binstructfp8_configjson_updated/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=108&crop=smart&auto=webp&s=fbf0440b72bf3c599b24d782f0bddf00251537cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=216&crop=smart&auto=webp&s=824bee5d7aa9841a221b2f60a969d54551eccb18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=320&crop=smart&auto=webp&s=91ca7f6cb7614731e917c0c8e162bd66bfbc25ca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=640&crop=smart&auto=webp&s=9d2b1429fc14f5ca152608718fd3ef6d50119778', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=960&crop=smart&auto=webp&s=6655884fe4ff60136ee88021696ace5be4875862', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?width=1080&crop=smart&auto=webp&s=991b600ad67e419b1091cac2c8c55f34d86b36fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IAGFmaGszKcqSKR_8qg0oES6OBfFDCBNvzr72pbVe7o.png?auto=webp&s=4cacac54fb0a262f4128b23481bccaf4104c19d5', 'width': 1200}, 'variants': {}}]} |
Fine Tuning Project LLM. Specialized in Home use with IoT audio commands, audio relay of video analysis, etc.. | 4 | Hey all, I'm wanting to develop a home assistant that can receive human like commands for IoT devices, be connected to schedules, give audio reminders, etc. I was wondering if any one had any experience doing that and could give some insight on the challenges that would come along with it. Thanks! | 2025-12-03T21:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhkhx/fine_tuning_project_llm_specialized_in_home_use/ | RefrigeratorPlus8700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhkhx | false | null | t3_1pdhkhx | /r/LocalLLaMA/comments/1pdhkhx/fine_tuning_project_llm_specialized_in_home_use/ | false | false | self | 4 | null |
Which of these laptops/desktop computers would be best to run a local LLM or also low to mid level gaming? | 0 | I need to buy a new pc, and my budget is around $600. I've also been looking into delving into llm as a hobby. Sick of paying $30 a month for only 16k tokens. Lol.
Dell 16 DC16256 16" Laptop Computer - Midnight Blue
AMD Ryzen 7 250 3.3GHz Processor; 16GB DDR5-5600 RAM; 1TB Solid State Drive; AMD Radeon Graphics
Acer Aspire Go 15 AG15-42P-R3GM 15.6" Laptop Computer - Steel Gray
AMD Ryzen 7 7730U 2.0GHz Processor; 32GB DDR4 RAM; 1TB Solid State Drive; AMD Radeon Graphics
HP OmniBook X Flip Next Gen AI 14-fm0013dx Copilot+ PC 14" 2-in-1 Laptop Computer (Refurbished) - Atmospheric Blue
Intel Core Ultra 5 226V 2.1GHz Processor; 16GB LPDDR5x-8533 Onboard RAM; 512GB Solid State Drive; Intel Arc 130V Graphics
HP OmniDesk Slim S03-0041 Desktop Computer
Intel Core i5 14th Gen 14400 1.8GHz Processor; 16GB DDR5-4800 RAM; 512GB Solid State Drive; Intel UHD Graphics 730 | 2025-12-03T21:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pdhbe5/which_of_these_laptopsdesktop_computers_would_be/ | ConspiracyParadox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdhbe5 | false | null | t3_1pdhbe5 | /r/LocalLLaMA/comments/1pdhbe5/which_of_these_laptopsdesktop_computers_would_be/ | false | false | self | 0 | null |
How would you judge the setup I want to build? | 0 | Asus ProArt 870E
Ryzen 9 9950X3D, 16 cores @ 4.30GHz
64GB Corsair Vengeance DDR5 6000
RTX 5090 32GB
Would you say this works well together for LLM, Stable Diffusion and Gaming? Anything I should consider when building this? Doing this the first time so need some advice. Thanks. | 2025-12-03T21:33:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pdh6z9/how_would_you_judge_the_setup_i_want_to_build/ | Imaginary-Cry2924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdh6z9 | false | null | t3_1pdh6z9 | /r/LocalLLaMA/comments/1pdh6z9/how_would_you_judge_the_setup_i_want_to_build/ | false | false | self | 0 | null |
8 local LLMs on a single Strix Halo debating whether a hot dog is a sandwich | 719 | 2025-12-03T21:26:43 | https://v.redd.it/o8n25oox325g1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pdh0sm | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/o8n25oox325g1/DASHPlaylist.mpd?a=1767389216%2CYmFmNjJkZTEyYjU5NTYwN2E0MTFkNGRhNWZjOTYwYWJmNTNkYmUyNmQ3MTdiOGI5ZDYxYzJiNDVlYTMzNGIzNg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/o8n25oox325g1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 570, 'hls_url': 'https://v.redd.it/o8n25oox325g1/HLSPlaylist.m3u8?a=1767389216%2COGNmMWYxMTA4ZTEwY2ZjMjMwMjQ5YmYzMmVjNTc0ZDJkZjU0ODFkMzgxNTM4ZmIyZjBmMmExZWQ1ZTFlZTc2Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o8n25oox325g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1pdh0sm | /r/LocalLLaMA/comments/1pdh0sm/8_local_llms_on_a_single_strix_halo_debating/ | false | false | 719 | {'enabled': False, 'images': [{'id': 'ZzlmajZ6b3gzMjVnMYyMXOA9G9iEfbHd4uR1YsqLbApEsnv66h0V49mXIA5l', 'resolutions': [{'height': 128, 'url': 'https://external-preview.redd.it/ZzlmajZ6b3gzMjVnMYyMXOA9G9iEfbHd4uR1YsqLbApEsnv66h0V49mXIA5l.png?width=108&crop=smart&format=pjpg&auto=webp&s=21788fbdfd49ddd51fa612b45bc2f35651968eed', 'width': 108}, {'height': 256, 'url': 'https://external-preview.redd.it/ZzlmajZ6b3gzMjVnMYyMXOA9G9iEfbHd4uR1YsqLbApEsnv66h0V49mXIA5l.png?width=216&crop=smart&format=pjpg&auto=webp&s=42076c7f1d1d5107961c760cdf3f950aeed16fb1', 'width': 216}, {'height': 380, 'url': 'https://external-preview.redd.it/ZzlmajZ6b3gzMjVnMYyMXOA9G9iEfbHd4uR1YsqLbApEsnv66h0V49mXIA5l.png?width=320&crop=smart&format=pjpg&auto=webp&s=94bf5424e506843949ba39e1cacfd997bcdea7d7', 'width': 320}], 'source': {'height': 706, 'url': 'https://external-preview.redd.it/ZzlmajZ6b3gzMjVnMYyMXOA9G9iEfbHd4uR1YsqLbApEsnv66h0V49mXIA5l.png?format=pjpg&auto=webp&s=a11f124627d1dda9cc57a44fee08bc2855d191cd', 'width': 594}, 'variants': {}}]} | ||
Is it THE Mistral Large 3 benchmark? | 0 | 2025-12-03T21:24:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pdgz3k/is_it_the_mistral_large_3_benchmark/ | egomarker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdgz3k | false | null | t3_1pdgz3k | /r/LocalLLaMA/comments/1pdgz3k/is_it_the_mistral_large_3_benchmark/ | false | false | 0 | null | ||
What are the minimum viable LLMs to test "thinking" techniques? | 0 | I'd like to test various "thinking" techniques like chain-of-thought, tree-of-thought, etc. I'm wondering what you think the minimum viable language models are to get reasonable results back. And where the results would probably generalize to larger LMs.
The truly tiny LMs in huggingface are nice for speed, memory, and budget, but they tend to produce nonsense. I'm wondering if there's an LM I could run locally or call fairly cheaply via API to experiment with. | 2025-12-03T21:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pdgudg/what_are_the_minimum_viable_llms_to_test_thinking/ | SometimesObsessed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdgudg | false | null | t3_1pdgudg | /r/LocalLLaMA/comments/1pdgudg/what_are_the_minimum_viable_llms_to_test_thinking/ | false | false | self | 0 | null |
Points-based reasoning | 2 | I found a new trick to speed up token generation with almost zero quality loss, especially with hybrid models: points-based reasoning. Here is how it works:
Depending on the model, you set a system prompt that tells the model to produce a chain-of-thought. But instead of enabling it via the "thinking" button on whatever client you are using, you set it in the system prompt, and you don't tell the model just to do reasoning; you tell the model to write "points", similar to this:
- user asked "how to build an LLM"
- user is likely a beginner
- I should do...
- now let's reply to the user
The model keeps considering the input and doing self-reflection inside the CoT made of points, until it feels it should answer because no further steps are needed.
I found this to actually improve the model's reasoning, not degrade it: when the model is told to do step-by-step points reasoning, it calculates things much more accurately, because it won't go "wait, maybe?" multiple times, which is likely to make it lose track of where it started from (due to context limits).
The most important thing to consider in the prompt is to tell the model directly two things:
1. Decide if the message needs CoT at all (optional, if you want a speed-up)
2. Always review the answer in a final step called "preview", where the model self-reflects on the final answer without re-applying all the previous steps
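To make the setup concrete, here is a minimal sketch of wiring such a system prompt against any OpenAI-compatible local server. The prompt wording, the port (LM Studio's default) and the model name are my own illustration of the technique, not the exact prompt used above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

SYSTEM_PROMPT = """Before answering, reason in short bullet points:
- First decide whether the question needs step-by-step points at all; if not, answer directly.
- Otherwise, write one point per consideration (what the user asked, their likely level, what to do).
- Finish with a final point called "preview" where you re-check the draft answer once, without redoing the earlier steps.
Then write the answer."""

resp = client.chat.completions.create(
    model="qwen3-vl-4b-instruct",  # placeholder; use whatever name your server exposes
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How do I build an LLM?"},
    ],
)
print(resp.choices[0].message.content)
```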
My own results:
Tested models are GPT-OSS-120B, Qwen3-VL-4B-Thinking, and Qwen3-VL-4B-Instruct.
Yes, two completely different families of models.
GPT-OSS-120B finished the 10 questions in only 7-10 steps each and didn't review the system policy other than once, at the final "preview" step, where it decided the question is fine. Reasoning took 100-200 tokens depending on the question and less than 100 in the final preview step.
Qwen3-VL-4B-Thinking with standard reasoning scored 9/10 and Qwen3-VL-4B-Instruct scored 4/10 without reasoning (it's not a reasoning model)
When I applied the technique to Qwen3-VL-4B-Instruct, the model took a little more than GPT-OSS, in the range of ~300 tokens plus ~150 in the preview step, but scored 10/10 with no mistakes in the output and only a small mistake in the preview step, which the model fixed before the output. | 2025-12-03T20:58:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pdga9g/pointsbased_reasoning/ | AI-Man-75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdga9g | false | null | t3_1pdga9g | /r/LocalLLaMA/comments/1pdga9g/pointsbased_reasoning/ | false | false | self | 2 | null |
Hermes 4.3 - 36B Model released | 249 | Hermes uncensored line models with apache 2 license. Post trained from Seed-OSS-36B-Bar on their psyche network. The cool bit is they also trained it centralized and the distributed psyche trained version outperformed the centrally trained one.
GGUF links: [https://huggingface.co/NousResearch/Hermes-4.3-36B-GGUF](https://huggingface.co/NousResearch/Hermes-4.3-36B-GGUF) | 2025-12-03T20:30:22 | https://nousresearch.com/introducing-hermes-4-3/ | crazeum | nousresearch.com | 1970-01-01T00:00:00 | 0 | {} | 1pdfk0o | false | null | t3_1pdfk0o | /r/LocalLLaMA/comments/1pdfk0o/hermes_43_36b_model_released/ | false | false | default | 249 | {'enabled': False, 'images': [{'id': 'thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?width=108&crop=smart&auto=webp&s=11f148884579108fb6ee44c7797d26bbba37c985', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?width=216&crop=smart&auto=webp&s=bfd9c9a4298b760af658e265344ed052d2fc6e3c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?width=320&crop=smart&auto=webp&s=7b818c9881bbfc7a31b98fef84d7d62d722df78d', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?width=640&crop=smart&auto=webp&s=49b96ff1b32dfa841362b8c2a0d4449fdd83b1f0', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?width=960&crop=smart&auto=webp&s=a2456232e3c4745ce7ca7757b779e3ee3c2f03c0', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/thAQxjbw3fpc9fgR1nrJDb-3cDeZ9f7TtJWveW5lCQ4.png?auto=webp&s=becd389318eedd2027d12451eefe02a39ed41238', 'width': 1024}, 'variants': {}}]} |
Block PC with mobile RTX? | 0 | Is anyone aware of a premade PC I can buy that’s in similar shape/form of an M4 Studio that has a mobile RTX card (or even AMD) with at least 24gb of RAM?
I see lots of “mini AI” PCs that look good and have good CPUs and RAM, but weak or no GPU.
I’ve heard of some upcoming stuff on kickstarter but I was wondering if there’s anything on the market today.
Thank you. | 2025-12-03T20:12:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pdf3ca/block_pc_with_mobile_rtx/ | Ok-Contest-5856 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdf3ca | false | null | t3_1pdf3ca | /r/LocalLLaMA/comments/1pdf3ca/block_pc_with_mobile_rtx/ | false | false | self | 0 | null |
how do I setup cline to orchestrate calls to two endpoints? | 1 | I have two GPUs each running separate llama.cpp and models at endpoints localhost:9991 and localhost:9992.
9991 = fast 75t/s small context 32k
9992 = slow 10t/s larger context 80k
I want a single Cline workflow to use both LLMs in an async and orchestrated manner. Surely someone has figured this out already. Can someone share a guide or give me some tips?
| 2025-12-03T20:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pdf2cw/how_do_i_setup_cline_to_orchestrate_calls_to_two/ | PairOfRussels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdf2cw | false | null | t3_1pdf2cw | /r/LocalLLaMA/comments/1pdf2cw/how_do_i_setup_cline_to_orchestrate_calls_to_two/ | false | false | self | 1 | null |
average tech american compagny when you ask to release a 100 parameters ai model outdated since 2017 who counts the number of tiles in a bathroom(the model is too dangerous for the user) | 102 | 2025-12-03T19:14:58 | Qnimbus_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pddj59 | false | null | t3_1pddj59 | /r/LocalLLaMA/comments/1pddj59/average_tech_american_compagny_when_you_ask_to/ | false | false | default | 102 | {'enabled': True, 'images': [{'id': 'q9ruaywkg15g1', 'resolutions': [{'height': 146, 'url': 'https://preview.redd.it/q9ruaywkg15g1.jpeg?width=108&crop=smart&auto=webp&s=595b9c5120ea23430c8d0ffb8b451867f803d6c4', 'width': 108}, {'height': 292, 'url': 'https://preview.redd.it/q9ruaywkg15g1.jpeg?width=216&crop=smart&auto=webp&s=b5274b3fcdb208a19614187f0ac1537c95e55c74', 'width': 216}, {'height': 433, 'url': 'https://preview.redd.it/q9ruaywkg15g1.jpeg?width=320&crop=smart&auto=webp&s=a86cda87cc80fc7e5e10a97c220d8cc1d375e1b5', 'width': 320}, {'height': 866, 'url': 'https://preview.redd.it/q9ruaywkg15g1.jpeg?width=640&crop=smart&auto=webp&s=27cb05a9b58a21a06a2f7c46b02e68e3e7185b21', 'width': 640}], 'source': {'height': 995, 'url': 'https://preview.redd.it/q9ruaywkg15g1.jpeg?auto=webp&s=4bbaeaea60e4632d84579158b0e10410535a56a5', 'width': 735}, 'variants': {}}]} | ||
CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning | 1 |
Code: [github.com/deepreinforce-ai/CUDA-L2](http://github.com/deepreinforce-ai/CUDA-L2)
Paper: [arxiv.org/abs/2512.02551](http://arxiv.org/abs/2512.02551) | 2025-12-03T19:09:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pdde3k/cudal2_surpassing_cublas_performance_for_matrix/ | Optimal-Outcome-7458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdde3k | false | null | t3_1pdde3k | /r/LocalLLaMA/comments/1pdde3k/cudal2_surpassing_cublas_performance_for_matrix/ | false | false | 1 | {'enabled': False, 'images': [{'id': '_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=108&crop=smart&auto=webp&s=15943189abe258afcb269d5534a21daad9f626cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=216&crop=smart&auto=webp&s=627a3c8161fa71d3a95ff0b2d9d5e03265bd2564', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=320&crop=smart&auto=webp&s=683769ced67b859ba11be7ae35d736d1b7a8e8db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=640&crop=smart&auto=webp&s=33fea50688eb43248f6911353c3cd842bb598e3d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=960&crop=smart&auto=webp&s=a74b2b2a09e958661e7b072933c2f5777e9d4012', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?width=1080&crop=smart&auto=webp&s=122c4c435b8ff05f3b8baf403682d830d88779fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_LzbvUhAvq3cBNTHGLRibYHxf-f06D5iS4U8vZ5aujM.png?auto=webp&s=2fa42f062db9ac18ba0f9be0e005638a142c371d', 'width': 1200}, 'variants': {}}]} | |
CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning | 1 | ABS:
Matrix multiplication (matmul) is one of the most fundamental operations in LLMs. However, manually optimizing Matmul kernels is challenging due to the fact that different matrix dimension (M, N, K) require different optimization strategies and that optimizations rarely transform across different GPU architectures, which make comprehensive manual tuning hard at scale. In this paper, we propose CUDA-L2, a system that combines large language models (LLMs) and reinforcement learning (RL) to automatically optimize Half-precision General Matrix Multiply (HGEMM) CUDA kernels.
Using CUDA execution speed as the RL reward, CUDA-L2 automatically optimizes HGEMM kernels across 1,000 configurations. These configurations represent all $10\^3$ combinations of M, N, K values from {64, 128, 256, 512, 1024, 2048, 4096, 8192, 12288, 16384}, and already covers those used in attention and FFN layers of widely open-sourced models like Qwen, Llama and DeepSeek.
CUDA-L2 systematically outperforms major matmul baselines to date, from the widely-used torch.matmul to state-of-the-art Nvidia's closed-source libraries, i.e., cuBLAS, cuBLASLt. In offline mode, where kernels are executed consecutively without time intervals, CUDA-L2 yields +22.0% over torch.matmul on average; +19.2% over cuBLAS using the optimal layout configuration (normal-normal NN and transposed-normal TN); +16.8% over cuBLASLt-heuristic, which queries cuBLASLt library and selects the algorithm based on the heuristic's suggestion; and +11.4\\% over the most competitive cuBLASLt-AutoTuning model, which selects the fastest algorithm from up to 100 candidates from cuBLASLt's suggestions. In server mode, where kernels are executed at random intervals simulating real-time inference, the speedups further increase to +28.7%, +26.0%, +22.4%, and +15.9% for torch.matmul, cuBLAS, cuBLASLt-heuristic, and cuBLASLt-AutoTuning respectively.
CUDA-L2 shows that even the most performance-critical, heavily-optimized kernels like HGEMM can be improved through LLM-guided RL automation by systematically exploring configuration spaces at scales impractical for humans. While the current version of CUDA-L2 only focuses on A100 GPUs, the framework is designed for broad applicability, with ongoing work to extend it to other GPU architectures, including Ada Lovelace, Hopper and Blackwell.
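As a rough illustration of what an "offline mode" comparison measures (this is not the paper's actual benchmark harness, and the sizes are arbitrary), back-to-back matmul timing in PyTorch looks like the sketch below; timing a custom kernel the same way and dividing gives the relative speedup:

```python
import torch

def time_matmul(m, n, k, iters=200):
    a = torch.randn(m, k, dtype=torch.half, device="cuda")
    b = torch.randn(k, n, dtype=torch.half, device="cuda")
    for _ in range(20):                      # warm-up
        torch.matmul(a, b)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):                   # consecutive launches, no idle gaps ("offline mode")
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters   # milliseconds per matmul

print(time_matmul(4096, 4096, 4096))
```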
Code: [github.com/deepreinforce-ai/CUDA-L2](http://github.com/deepreinforce-ai/CUDA-L2)
Paper: [arxiv.org/abs/2512.02551](http://arxiv.org/abs/2512.02551) | 2025-12-03T19:06:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pddb1j/cudal2_surpassing_cublas_performance_for_matrix/ | Optimal-Outcome-7458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pddb1j | false | null | t3_1pddb1j | /r/LocalLLaMA/comments/1pddb1j/cudal2_surpassing_cublas_performance_for_matrix/ | false | false | self | 1 | null |
Pondering a 3090 Founders vs a Bosgame M5. I have questions! | 1 | Hey guys,
I hope it's okay to ask, but I've been reading a lot recently and I'm considering either upgrading to a 3090 founders or a Bosgame M5 -- my biggest question is that I keep seeing people say that the M5 is slower on dense models. I was wondering what they mean when they say slow and how slow is "slow" in terms of tk/s?
I know that MOE models are the "standard" for the ryzen max 395+ units around here, but I also like the idea of running a handful of small to medium models concurrently for various tasks.
My next question is -- assuming I was running something like a 32b model, would the unit be able to handle large contexts?
Apologies if these have been asked before, but I can't seem to get any hard figures to get an impression! | 2025-12-03T19:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pdd83z/pondering_a_3090_founders_vs_a_bosgame_m5_i_have/ | Orpheusly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdd83z | false | null | t3_1pdd83z | /r/LocalLLaMA/comments/1pdd83z/pondering_a_3090_founders_vs_a_bosgame_m5_i_have/ | false | false | self | 1 | null |
CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning | 1 | Using RL, CUDA-L2 optimizes HGEMM kernels across 1,000 MxNxK configurations. It outperforms major matmul baselines to date.
In offline mode, it yields +22.0% over torch.matmul, +19.2% over cuBLAS, +16.8% over cuBLASLt-heuristic, and +11.4% over the most competitive cuBLASLt-AutoTuning.
In server mode, the speedups further increase to +28.7%, +26.0%, +22.4%, and +15.9% for torch.matmul, cuBLAS, cuBLASLt-heuristic, and cuBLASLt-AutoTuning respectively.
CUDA-L2 shows that even the most performance-critical, heavily-optimized kernels like HGEMM can be improved through LLM-guided RL automation.
Code: [github.com/deepreinforce-ai/CUDA-L2](http://github.com/deepreinforce-ai/CUDA-L2)
Paper: [arxiv.org/abs/2512.02551](http://arxiv.org/abs/2512.02551) | 2025-12-03T18:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pdcz65/cudal2_surpassing_cublas_performance_for_matrix/ | Single_Savings_5166 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdcz65 | false | null | t3_1pdcz65 | /r/LocalLLaMA/comments/1pdcz65/cudal2_surpassing_cublas_performance_for_matrix/ | false | false | self | 1 | null |
Micron Announces Exit from Crucial Consumer Business | 411 | Technically speaking, we're screwed. | 2025-12-03T18:54:47 | https://investors.micron.com/news-releases/news-release-details/micron-announces-exit-crucial-consumer-business | FullstackSensei | investors.micron.com | 1970-01-01T00:00:00 | 0 | {} | 1pdcytv | false | null | t3_1pdcytv | /r/LocalLLaMA/comments/1pdcytv/micron_announces_exit_from_crucial_consumer/ | false | false | default | 411 | null |
UX Research: Why is the cloud experience for LLMs still so painful compared to local? | 0 | Hi everyone, I’m a UX Researcher working with a small team of engineers. We’re trying to design a new GPU infrastructure platform, and I’m trying to understand why the current market feels so broken.
My hypothesis is that many of you build local rigs (3090s/4090s) not just for cost, but because cloud providers (AWS/RunPod) have terrible usability or reliability issues.
I’m not selling anything (we are just in the discovery phase). I just want to know: **What is the one thing about cloud GPU providers that made you rage-quit and build a local rig instead?**
Any stories about billing, config, or cold starts would be super helpful for our research. | 2025-12-03T18:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pdc3th/ux_research_why_is_the_cloud_experience_for_llms/ | Two_Duckz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdc3th | false | null | t3_1pdc3th | /r/LocalLLaMA/comments/1pdc3th/ux_research_why_is_the_cloud_experience_for_llms/ | false | false | self | 0 | null |
Help - Qwen3 LV - LM Studio instant response - Claude Code Router takes over 20 min | 3 | Hey all,
First of all, I hope I'm in the right reddit, if not, boo me out and I'll delete it :)
So I'm playing with Claude Code Router (a local LLM proxy for Claude Code) and I noticed the responses I got were, well, underwhelming. For example, I tried to make a component based on an input image. But no luck, all I got was some generic grid component (it was able to view the image, since it rendered the right text). Responses took about 7 to 20 min (depending on the model variant).
Then I ran the same prompt directly as a chat in LM Studio, and it was a world of difference. Less than a second for the 30B; 7 seconds would be a lot. The 235B variant was, I think, two minutes.
(still no close copy of the image, but still a huge difference in result AND time.)
Now, I've got no clue on how to start unraveling the bottleneck of it all, and I hope you know what I do wrong. | 2025-12-03T18:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1pdbyhg/help_qwen3_lv_lm_studio_instant_response_claude/ | designbanana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdbyhg | false | null | t3_1pdbyhg | /r/LocalLLaMA/comments/1pdbyhg/help_qwen3_lv_lm_studio_instant_response_claude/ | false | false | self | 3 | null |
Nvidia RTX 6000 Pro Height | 1 | Are all Nvidia RTX 6000 pro workstation edition card extended height? | 2025-12-03T18:17:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pdbwxd/nvidia_rtx_6000_pro_height/ | renovatio522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pdbwxd | false | null | t3_1pdbwxd | /r/LocalLLaMA/comments/1pdbwxd/nvidia_rtx_6000_pro_height/ | false | false | self | 1 | null |
TOON vs JSON: A Reality Check - When It Saves Tokens and When It Doesn't | 1 | [removed] | 2025-12-03T17:23:36 | https://dev.to/prodbld/toon-vs-json-a-reality-check-when-it-saves-tokens-and-when-it-doesnt-1b6m | blazarious | dev.to | 1970-01-01T00:00:00 | 0 | {} | 1pdae1h | false | null | t3_1pdae1h | /r/LocalLLaMA/comments/1pdae1h/toon_vs_json_a_reality_check_when_it_saves_tokens/ | false | false | default | 1 | null |
Why new SLMs are trained on very large amount of tokens? | 0 | The new route for lots of small language models (less than 10B) is training on a VERY LARGE amount of tokens.
I'm not an AI expert nor do I claim to be, but I've read a research paper before saying that scaling past roughly 20 tokens (words?) per parameter in small models results in diminishing returns: the model's capacity won't memorize much more of the training data, and the base training round is already likely to make the model a powerful enough base for fine-tuning, because it generalizes from enough, but not a huge amount of, information with good language understanding.
The new models are likely trained using high-quality distilled data or research-paper-style text at the beginning to generalize, and later the model is retrained again to pack in more information (not language understanding; that's already done in the first training round).
Many models released lately are trained on over 10T tokens (as written on their Hugging Face pages), which exceeds 20 tokens/parameter by a lot.
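To put rough numbers on that: at 20 tokens per parameter, a 7B model would need about 7B x 20 = 140B training tokens, whereas a 10T-token run on that same 7B model works out to roughly 10T / 7B ≈ 1,400 tokens per parameter, around 70 times past that old rule of thumb.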
So how do many new models, even under 7B, perform so well on general knowledge?
Note: as I remember, the article was from 2024 and was speaking about training with the standard transformer architecture in FP16.
If there are any corrections they're welcome in the comments; I am no expert. | 2025-12-03T17:17:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pda7wb/why_new_slms_are_trained_on_very_large_amount_of/ | AI-Man-75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pda7wb | false | null | t3_1pda7wb | /r/LocalLLaMA/comments/1pda7wb/why_new_slms_are_trained_on_very_large_amount_of/ | false | false | self | 0 | null |
A Technical Tour of the DeepSeek Models from V3 to V3.2 | 55 | 2025-12-03T17:03:17 | https://sebastianraschka.com/blog/2025/technical-deepseek.html | seraschka | sebastianraschka.com | 1970-01-01T00:00:00 | 0 | {} | 1pd9tgj | false | null | t3_1pd9tgj | /r/LocalLLaMA/comments/1pd9tgj/a_technical_tour_of_the_deepseek_models_from_v3/ | false | false | default | 55 | {'enabled': False, 'images': [{'id': 'Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=108&crop=smart&auto=webp&s=9b5485ae2dde659fab96066362f6042b5f511a7d', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=216&crop=smart&auto=webp&s=18a225867a3bdc1ad8d5253c6f3948691400c1da', 'width': 216}, {'height': 267, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=320&crop=smart&auto=webp&s=76fcb3defbf6e3d7dc14705a5cd49e22af90f35f', 'width': 320}, {'height': 535, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=640&crop=smart&auto=webp&s=7a5effd7f132f71b2efdd47cc12daa448023c0bf', 'width': 640}, {'height': 803, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=960&crop=smart&auto=webp&s=d2ef6aff529f3b339cba1caa941041b56f89e2a6', 'width': 960}, {'height': 903, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?width=1080&crop=smart&auto=webp&s=86dc291c967904d6b74aae534f3c49a359a4dd10', 'width': 1080}], 'source': {'height': 1126, 'url': 'https://external-preview.redd.it/Oy9W7OYOeVO8Z6Sl3EWWZR-9AbREkAwoyEei1XJ7yeY.jpeg?auto=webp&s=695a7a53c8b23acbf1bc8d6ae6ae477b133408a9', 'width': 1346}, 'variants': {}}]} | |
New 17,000 Epstein Files leak available to download (~2,4 GB) | 0 | I recently found MANY files which are now deleted by the FBI.
The files include documents, images, videos and e-mails.
I am 100% this post will be deleted as this sub is most likely BEING monitored by the FBI. Took me a lot of time to scrape together everything with my team. And it is compressed and readable and will be available to everyone.
You can access the files here: [http://anonleakz.site/](http://anonleakz.site/)
I've included the path to download trough 4 different file uploading services.
ENJOY. | 2025-12-03T16:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pd9ofc/new_17000_epstein_files_leak_available_to/ | ComprehensiveWear598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd9ofc | false | null | t3_1pd9ofc | /r/LocalLLaMA/comments/1pd9ofc/new_17000_epstein_files_leak_available_to/ | false | false | self | 0 | null |
I trained a 7B to learn a niche language and reaching 86% code accuracy | 64 | Hi everyone, I just wanted to share a project I did over the last weekend.
I'm no ML engineer, nor do I have any relevant background in AI; I've just been toying with the idea of training an LLM myself for a while.
Most of my previous training attempts did not yield any meaningful result, but I still managed to learn a thing or two. And this time, I decided to give it another try.
The niche language I picked to train the LLM (Qwen2.5-coder-7b) on was a less popular text-to-diagram language called Pintora. Since most open source models did not have any knowledge about this language, it's a fun project to try.
Long story short, I planned to train this for free on Google Colab, but ended up renting a 48GB A40 because of a naive mistake, and doing a lot of the training pipeline myself (at a much smaller scale): creating the dataset, cleaning it up, then two-phase training (Continued Pretraining followed by Instruction Finetuning) to teach the model how to both generate diagrams from scratch and edit existing diagrams.
In the end, I'm quite happy with the result. Although it's not great, the model was able to generate syntactically correct code and the diagrams are showing up. I did a quick evaluation to confirm how accurately (in terms of compile-able diagrams) the model can generate: out of 1000 examples, only about 140 are failing, which is about 86% accuracy.
The model (safetensors, GGUF, full and quantized) is available on HF if you are interested. I also did a write-up to document the process; I think it might be helpful to share so I can learn from all of your feedback!
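For anyone who wants to poke at it, a minimal sketch of loading the published checkpoint with transformers; this assumes the repo ships the usual Qwen-style chat template, and the prompt, dtype and device settings are just examples you may need to adjust:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huytd189/pintora-coder-7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Draw a Pintora sequence diagram of a user logging in."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```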
Blog post: https://huy.rocks/everyday/12-01-2025-ai-teaching-an-llm-a-niche-diagraming-language
Model:
- https://huggingface.co/huytd189/pintora-coder-7b
- https://huggingface.co/huytd189/pintora-coder-7b-gguf
Dataset:
- https://huggingface.co/datasets/huytd189/pintora-instruct
- https://huggingface.co/datasets/huytd189/pintora-edit-instruct | 2025-12-03T16:49:14 | https://www.reddit.com/gallery/1pd9f4x | bobaburger | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pd9f4x | false | null | t3_1pd9f4x | /r/LocalLLaMA/comments/1pd9f4x/i_trained_a_7b_to_learn_a_niche_language_and/ | false | false | 64 | null | |
Cheapest and best way to host a GGUF model with an API (like OpenAI) for production? | 4 | Hey folks,
I'm trying to host a `.gguf` LLM in a way that lets me access it using an API — similar to how we call the OpenAI API (`/v1/chat/completions`, etc).
I want to expose my own hosted GGUF model through a clean HTTP API that any app can use.
### What I need:
1. **Host a GGUF model** (7B / 13B / possibly 30B later)
2. **Access it over a REST API** (Ollama-style, OpenAI-style, or custom)
3. **Production-ready setup** (stable, scalable enough, not hobby-only)
4. **Cheapest possible hosting options** (VPS or GPU cloud)
5. Advice on **which server/runtime is best**:
- Ollama API server
   - llama.cpp server mode (see the sketch after this list)
- LocalAI
- vLLM (if GGUF isn’t ideal for it)
- or anything else that works well
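For the llama.cpp server route flagged above, a minimal sketch: `llama-server` exposes an OpenAI-style `/v1` endpoint, so any OpenAI client can talk to it. The host, port and model file name here are placeholders:

```python
# Launch the server first (one-time, on the VPS/GPU box), e.g.:
#   llama-server -m ./models/model-7b-Q4_K_M.gguf --host 0.0.0.0 --port 8080 -c 4096
# It then serves an OpenAI-compatible API at /v1.

from openai import OpenAI

client = OpenAI(base_url="http://YOUR_SERVER:8080/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="model-7b",  # llama-server largely ignores this field; placeholder
    messages=[{"role": "user", "content": "Hello from my app"}],
)
print(resp.choices[0].message.content)
```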
### Budget Focus
Trying to find the **best price-to-performance platform**.
Options I'm considering but unsure about:
- Hetzner
- RunPod
- Vast.ai
- Vultr
- Lambda Labs
- Any cheap GPU rental providers?
### My goals:
- Host the model once
- Call it from my mobile or backend app through an API
- Avoid OpenAI-style monthly costs
- Keep latency reasonable
- Ensure it runs reliably even with multiple requests
### Questions:
- What’s the **cheapest** but still **practical** setup for production?
- Is **Ollama on a VPS** good enough?
- Should I use **llama.cpp server** instead?
- Does anyone run GGUF models in production at scale?
- Any recommended architectures or pitfalls?
Would really appreciate hearing what setups have worked for you — especially from people who have deployed GGUF models behind an API for real apps!
Thanks in advance | 2025-12-03T16:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pd8i1u/cheapest_and_best_way_to_host_a_gguf_model_with/ | New-Worry6487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd8i1u | false | null | t3_1pd8i1u | /r/LocalLLaMA/comments/1pd8i1u/cheapest_and_best_way_to_host_a_gguf_model_with/ | false | false | self | 4 | null |
Need advice on a scalable on-prem LLM/RAG build for confidential technical docs (10–15k budget) | 5 | **Budget:** $10–15k - budget is flexible, feel free to go a little over or under. I will also have the ability to add to this over time in chunks of a couple thousand at once.
**Goal:**
I want to build an on-prem machine to store confidential technical docs (500–2000 pages each) and use RAG to extract and answer questions from them. Essentially create a knowledge base that I can query to cut through the crud and synthesize relevant and important information in a concise manner. I need something I can scale over time (more compute/RAM) and eventually expose through a small web app so a team of 4–5 people can upload docs and query the model concurrently.
Because of confidentiality, cloud hosting is likely a no-go, otherwise I'd just consider renting the compute.
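For scoping purposes, the core retrieval piece is small; here is a hedged sketch with sentence-transformers, where the embedding model, chunk sizes and file name are illustrative rather than a recommendation for the final stack:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("BAAI/bge-small-en-v1.5")  # example embedding model

def chunk(text, size=1000, overlap=200):
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

docs = chunk(open("manual.txt", encoding="utf-8").read())
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=5):
    q = embedder.encode([question], normalize_embeddings=True)
    scores = np.dot(doc_vecs, q[0])              # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The top-k chunks then get pasted into the local LLM's prompt as context.
print(retrieve("What is the maximum operating pressure?")[0])
```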
**Questions:**
* Is this feasible within a $10–15k starting budget?
* What hardware would you recommend that can scale over time?
* Any specific models best suited for this kind of workload?
* What other learning resources might you recommend for a project like this?
This is an opportunity at work for me to take advantage of budget that I wouldn't have in my outside life to expand on a hobby/interest and create a business use case for it.
Happy to answer any clarifying questions. Thanks! | 2025-12-03T16:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pd88rx/need_advice_on_a_scalable_onprem_llmrag_build_for/ | phoez12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd88rx | false | null | t3_1pd88rx | /r/LocalLLaMA/comments/1pd88rx/need_advice_on_a_scalable_onprem_llmrag_build_for/ | false | false | self | 5 | null |
Using RunPod + Index TTS as an LLM-Friendly TTS Backend (Setup & Notes) | 1 | [removed] | 2025-12-03T15:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pd78df/using_runpod_index_tts_as_an_llmfriendly_tts/ | renato_diniss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd78df | false | null | t3_1pd78df | /r/LocalLLaMA/comments/1pd78df/using_runpod_index_tts_as_an_llmfriendly_tts/ | false | false | self | 1 | null |
Does anyone use RunPod for SFT? If yes, do you train via SSH or Jupyter (web-hosted)? | 7 | In a week or 2, I want to rent one B200 ($5.2/hr) to do SFT (Supervised Fine-Tuning) for GPT-OSS-120B on a \~30k row dataset.
I did a few 2-4 hour training runs a few months ago, but sometimes the Jupyter notebook crashed (with a message like "Connection lost") and I often had to restart training.
If you use RunPod (or any other GPU cloud provider), how do you manage long sessions (4+ hours)? | 2025-12-03T15:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pd6vxu/does_anyone_use_runpod_for_sft_if_yes_you_train/ | TechNerd10191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd6vxu | false | null | t3_1pd6vxu | /r/LocalLLaMA/comments/1pd6vxu/does_anyone_use_runpod_for_sft_if_yes_you_train/ | false | false | self | 7 | null |
Euryale-70B – 100 Slow-Burn Romantic Dialogue Seeds ($29) | 0 | Just dropped my first dataset: 100 seeds from uncensored Euryale-70B. Perfect for romantic/ERP LoRAs and SillyTavern.
Free 10-seed teaser: https://huggingface.co/datasets/VerenLabs/Euryale-70B-10-GFE-Seeds-Free-Teaser
Full pack ($29 instant download): https://ko-fi.com/s/cf650d8256
Feedback welcome. | 2025-12-03T14:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/1pd65f3/euryale70b_100_slowburn_romantic_dialogue_seeds_29/ | Ok-Barracuda-4486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd65f3 | false | null | t3_1pd65f3 | /r/LocalLLaMA/comments/1pd65f3/euryale70b_100_slowburn_romantic_dialogue_seeds_29/ | false | false | self | 0 | null |
My experiences with the new Ministral 3 14B Reasoning 2512 Q8 | 84 | 45 minutes and 33K tokens of thinking about making html tetris (1 line prompt):
https://preview.redd.it/jzjcom93105g1.png?width=500&format=png&auto=webp&s=8d67b1b895715d2dfbb927db0bc2bc485b28b819
Tool calling breaks all the time:
https://preview.redd.it/02edr424105g1.png?width=314&format=png&auto=webp&s=67cccfd1b1fdaa59da095b9bd31ef09f1ec1c184
Also at some point it stopped using the \[think\] tags altogether and just started thinking out loud. I'll leave it running for a couple of hours and see if it eventually manages to build the HTML Tetris.
| 2025-12-03T14:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pd5yxy/my_experiences_with_the_new_ministral_3_14b/ | egomarker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd5yxy | false | null | t3_1pd5yxy | /r/LocalLLaMA/comments/1pd5yxy/my_experiences_with_the_new_ministral_3_14b/ | false | false | 84 | null | |
Hear me out before dismissing my app like all the other vibecoded crap you see here. It’s currently the best BYOK app for web search. | 0 | It’s still in testing and if one wants to try it they can here with TestFlight https://testflight.apple.com/join/N4G1AYFJ
The app is free so refrain from downvoting the post like it’s now common practice with people talking about their projects.
Try it out and then feel free to downvote.
It’s basically the only bring your own key app (obviously including your local models) with a custom web search pipeline that runs on the iPhone.
In these screenshots it succeeds in a complex search that Claude, Perplexity and Apollo fail to answer correctly.
The app uses an iterative process: it searches with serper.dev, scrapes with a local scraper, then locally RAGs the contents to avoid context-window overload, and repeats until it thinks it has enough information to answer.
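Roughly, the loop looks like this (a simplified Python sketch, not the app's actual code - `llm` is any chat-completion callable, `crude_scrape`/`top_chunks` stand in for the local scraper and the on-device RAG step, and the serper.dev response shape should be checked against their docs):
```python
import re
import requests

def serper_search(query: str, api_key: str) -> list[dict]:
    # serper.dev search endpoint; response schema assumed from their docs
    r = requests.post("https://google.serper.dev/search",
                      headers={"X-API-KEY": api_key}, json={"q": query}, timeout=30)
    return r.json().get("organic", [])

def crude_scrape(url: str) -> str:
    # stand-in for the local scraper: fetch the page and strip tags
    return re.sub(r"<[^>]+>", " ", requests.get(url, timeout=30).text)

def top_chunks(text: str, question: str, n: int = 3) -> list[str]:
    # stand-in for the local RAG step: rank long paragraphs by word overlap
    q_words = set(question.lower().split())
    paras = [p.strip() for p in text.split("\n") if len(p.split()) > 30]
    return sorted(paras, key=lambda p: -len(q_words & set(p.lower().split())))[:n]

def answer(question: str, llm, api_key: str, rounds: int = 3) -> str:
    context, query, reply = [], question, ""
    for _ in range(rounds):
        for hit in serper_search(query, api_key)[:5]:
            context += top_chunks(crude_scrape(hit["link"]), question)
        reply = llm("Answer from the context, or reply NEED_MORE:<better query> "
                    "if the context is insufficient.\n\n"
                    f"Question: {question}\n\nContext:\n" + "\n\n".join(context))
        if not reply.startswith("NEED_MORE:"):
            return reply
        query = reply.split(":", 1)[1].strip()
    return reply
```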
Web searching is the most important feature in a chatbot app. All the LLM clients on the AppStore lack it or have a super basic search mcp. That’s why I was always going back to ChatGPT.
| 2025-12-03T14:30:23 | https://www.reddit.com/gallery/1pd5ovv | Valuable-Run2129 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pd5ovv | false | null | t3_1pd5ovv | /r/LocalLLaMA/comments/1pd5ovv/hear_me_out_before_dismissing_my_app_like_all_the/ | false | false | 0 | null | |
LocalRAG Pro — offline RAG desktop app (Tauri + Ollama + LanceDB) — would love your feedback | 1 | [removed] | 2025-12-03T14:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pd5nrb/localrag_pro_offline_rag_desktop_app_tauri_ollama/ | Fantastic_Fortune768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd5nrb | false | null | t3_1pd5nrb | /r/LocalLLaMA/comments/1pd5nrb/localrag_pro_offline_rag_desktop_app_tauri_ollama/ | false | false | self | 1 | null |
EchoKit (Voice Interface for Local LLMs) Update: Added Dynamic System Prompts & MCP Tool Wait Messages | 7 | We are building **EchoKit**, a hardware/software stack to give a voice to your local LLMs. It connects to OpenAI-compatible endpoints, meaning you can run it with LlamaEdge, standard LlamaCPP, or even Groq/Gemini.
We just released a server update that makes testing different "Agents" much faster:
**1. Dynamic Prompt Loading:** Instead of hardcoding the system prompt in a config file and restarting the server every time you want to change the personality, you can now point the server to a URL (like a raw text file or an entry from `LLMs.txt`). This lets you swap between a "Coding Assistant" and a "Storyteller" instantly.
**2. Better Tool Use (MCP) UX:** We are betting big on the Model Context Protocol (MCP) for agentic search and tools. The voice agent now speaks a "Please wait" message when it detects it needs to call an external tool, so the user isn't left in silence during the tool-call latency.
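Here is a minimal sketch of the dynamic prompt loading from (1), assuming any OpenAI-compatible endpoint (the URL, model name and helper below are illustrative, not EchoKit's actual server code):
```python
import requests
from openai import OpenAI

# Any OpenAI-compatible server (LlamaEdge, llama.cpp, Groq, ...); values are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def load_system_prompt(url: str) -> str:
    # Fetch the persona/system prompt from a raw text file; no server restart needed.
    return requests.get(url, timeout=10).text.strip()

def reply(user_text: str, prompt_url: str) -> str:
    r = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system", "content": load_system_prompt(prompt_url)},
            {"role": "user", "content": user_text},
        ],
    )
    return r.choices[0].message.content
```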
| 2025-12-03T13:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pd4vp0/echokit_voice_interface_for_local_llms_update/ | smileymileycoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd4vp0 | false | null | t3_1pd4vp0 | /r/LocalLLaMA/comments/1pd4vp0/echokit_voice_interface_for_local_llms_update/ | false | false | self | 7 | null |
Why don't Google and OpenAI release their old models? | 132 | GPT-4 and Gemini 2 Pro are dated, so they should release them... Are they afraid of releasing their data and architecture? They already released Gemma and GPT-OSS. Gemini 2 has a large context window, but the quality degrades when the context gets long anyway, and that behaviour is replicable. | 2025-12-03T13:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pd3xyp/why_dont_google_and_openai_release_their_old/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd3xyp | false | null | t3_1pd3xyp | /r/LocalLLaMA/comments/1pd3xyp/why_dont_google_and_openai_release_their_old/ | false | false | self | 132 | null
Can we expect better LLM hardware in 2026? | 16 | I mean with a lot of fast(!) VRAM.
DGX spark and AMD AI Max have really low memory speeds.
China is releasing so many open-source models; when will they come out with cheap hardware that we can run them on? | 2025-12-03T13:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pd3uvy/can_we_expect_better_llm_hardware_in_2026/ | Bitter-College8786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd3uvy | false | null | t3_1pd3uvy | /r/LocalLLaMA/comments/1pd3uvy/can_we_expect_better_llm_hardware_in_2026/ | false | false | self | 16 | null
Trying to move to ik_llama.cpp from ollama. Need some help with args.. | 3 | Hey y'all. I've been running the Derestricted 120B model on ollama. It was super slow so I moved over to LMstudio. One of the main features that helped a lot was keeping KVcache on GPU and offloading experts to CPU.
Apparently, ik\_llama.cpp is faster than llama.cpp for hybrid inference, and there seems to be more control over where the expert weights go.
Would someone be kind enough to recommend me some launch args specific to ik\_llama.cpp? I'm basically trying to keep the most used experts on GPU, and KVcache.
I have about 70gb of free RAM and 24gb to use on my 3090. Ideally I would like a bit left over to keep Z-image loaded on GPU but that's not a big priority.
Before on llama.cpp, I was able to hit 6.5 t/s with a 12700K and the DDR4 RAM.
Would ik\_llama.cpp be faster? | 2025-12-03T13:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1pd3o67/trying_to_move_to_ik_llamacpp_from_ollama_need/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd3o67 | false | null | t3_1pd3o67 | /r/LocalLLaMA/comments/1pd3o67/trying_to_move_to_ik_llamacpp_from_ollama_need/ | false | false | self | 3 | null |
Intel Arc Pro B60 Battlematrix Preview: 192GB of VRAM for On-Premise AI | 33 | 2025-12-03T13:05:36 | https://www.storagereview.com/review/intel-arc-pro-b60-battlematrix-preview-192gb-of-vram-for-on-premise-ai | reps_up | storagereview.com | 1970-01-01T00:00:00 | 0 | {} | 1pd3mdw | false | null | t3_1pd3mdw | /r/LocalLLaMA/comments/1pd3mdw/intel_arc_pro_b60_battlematrix_preview_192gb_of/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': '0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=108&crop=smart&auto=webp&s=8588e4fd64043093f314dc1485482e57d52f8b6b', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=216&crop=smart&auto=webp&s=8946621bf40b8b0147620740b4108d7bc7d6279d', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=320&crop=smart&auto=webp&s=33cb116bc30dca9ff7a22497c6082f82c55e47c0', 'width': 320}, {'height': 404, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=640&crop=smart&auto=webp&s=d01be5fde96c8bded5f16d12f17d20ed686c5e29', 'width': 640}, {'height': 606, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=960&crop=smart&auto=webp&s=c1ea9f47713f138ec188eec4ed9fe0e8cb4f2da5', 'width': 960}, {'height': 682, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=1080&crop=smart&auto=webp&s=3718d451d2b85b1d3596f0d55880dfeeda59c4e1', 'width': 1080}], 'source': {'height': 948, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?auto=webp&s=8570c57bff1f312ec6fc70983da572c2ec0364a7', 'width': 1500}, 'variants': {}}]} | |
can llma 3.2-1b see picture (LOCAL AI) | 0 | question: can LOCAL AI llma 3.2-1b can see/vision like picture or video answer: the answer is yes question: how to send picture if no button in our mobile devices answer: the answer is link how to u post just went in Google photo or watever apps question: how we believing you if you're not showing us answer: sigh u see in my instructions how to u posting image or video. | 2025-12-03T12:47:20 | Adventurous_Role_489 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pd383v | false | null | t3_1pd383v | /r/LocalLLaMA/comments/1pd383v/can_llma_321b_see_picture_local_ai/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '5jpklk8niz4g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=108&crop=smart&auto=webp&s=398c84769bc5002c043fb4a99bacdcdffbab8f56', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=216&crop=smart&auto=webp&s=b8bc3faa99719d9c7066322af8dccfb4282d9800', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=320&crop=smart&auto=webp&s=5a6da2b2166d947e03cbea465f07ee7d3a0c4dfc', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=640&crop=smart&auto=webp&s=2cd50ee3b5b0575c11b517bdf03da320ad37ad29', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=960&crop=smart&auto=webp&s=649e9632197cf0b40075ab99b1b12e8b3401623a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?width=1080&crop=smart&auto=webp&s=153eb5a76519c633a4e4de84020ea609caf672b3', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/5jpklk8niz4g1.jpeg?auto=webp&s=4106747f1f913e7ee9dbdfe3b59a8aa9ab9b7443', 'width': 1080}, 'variants': {}}]} | |
LLM Council - Multi-model AI with democratic voting (Enhanced fork with 5 production features) | 3 | I've been working on an enhanced version of Karpathy's LLM Council that adds production-ready features while keeping the original vision intact.
# What is LLM Council?
Instead of asking one LLM, you ask multiple models simultaneously. They each respond, then anonymously rank each other's answers, and a chairman synthesizes the final response. Think of it as "democratic AI decision-making."
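A stripped-down sketch of that flow (the model IDs below are placeholders for OpenRouter models; the real app streams responses via SSE and persists conversations):
```python
import random
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_KEY")
COUNCIL = ["model-a", "model-b", "model-c"]   # any OpenRouter model IDs
CHAIRMAN = "model-d"

def ask(model: str, prompt: str) -> str:
    r = client.chat.completions.create(model=model,
                                       messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def council(question: str) -> str:
    # 1) every member answers independently
    answers = {m: ask(m, question) for m in COUNCIL}
    # 2) anonymize the answers and let each member rank them
    order = list(answers)
    random.shuffle(order)
    ballot = "\n\n".join(f"Answer {i + 1}:\n{answers[m]}" for i, m in enumerate(order))
    rankings = [ask(m, f"Rank these answers to '{question}' from best to worst, "
                       f"by number only:\n\n{ballot}") for m in COUNCIL]
    # 3) the chairman synthesizes the final response from answers + rankings
    return ask(CHAIRMAN, f"Question: {question}\n\n{ballot}\n\nRankings:\n"
                         + "\n".join(rankings)
                         + "\n\nWrite the single best final answer.")
```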
# What I Added (5 Features):
* 🎯 **TOON Integration** \- 30-60% token savings with optimized data format
* 💾 **Multi-Database Support** \- JSON (default), PostgreSQL, or MySQL
* 💬 **Context & Follow-ups** \- Natural multi-turn conversations with memory
* 🛠️ **AI Tools** \- Calculator, Wikipedia, ArXiv, DuckDuckGo, Yahoo Finance
* ⚙️ **Conversation Management** \- Delete, edit titles, temporary chat mode
# Tech Stack:
* **Backend**: FastAPI, LangChain, SQLAlchemy, ChromaDB
* **Frontend**: React + Vite with Server-Sent Events
* **Models**: Any via OpenRouter (GPT-5.1, Gemini, Claude, Grok, etc.)
# Free Features:
* All 5 free tools enabled by default
* Local embeddings with HuggingFace (no API needed)
* JSON storage (zero setup)
* Optional: Tavily search, OpenAI embeddings
**GitHub**: [https://github.com/Reeteshrajesh/llm-council](https://github.com/Reeteshrajesh/llm-council)
Original concept by [@karpathy](https://github.com/karpathy/llm-council). This is an enhanced fork with professional features for production use.
Happy to answer questions! 🚀 | 2025-12-03T12:44:59 | https://www.reddit.com/r/LocalLLaMA/comments/1pd36b7/llm_council_multimodel_ai_with_democratic_voting/ | Distinct_Site_3462 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd36b7 | false | null | t3_1pd36b7 | /r/LocalLLaMA/comments/1pd36b7/llm_council_multimodel_ai_with_democratic_voting/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=108&crop=smart&auto=webp&s=3797fb26037089516dac57e5a21b4d7b2ab104c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=216&crop=smart&auto=webp&s=8a8d6293a3a4fa895dff594be149c71e36574fe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=320&crop=smart&auto=webp&s=00af427888b8165ffd09318342aec27d2d87424f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=640&crop=smart&auto=webp&s=5a4afc8fbcd7f57c47dd8bd5b30ebddd18e7af8c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=960&crop=smart&auto=webp&s=44815b07fb4d14b507d0d9a0e8f783c1a85f587b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?width=1080&crop=smart&auto=webp&s=98757a1e4c70a90c63ee88e59872e5316015babf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oxiLKGdjabKs00vF6hrDv29aCreuDO4Xpd0rSUQ7Cuw.png?auto=webp&s=261c4e3ef0a96d02f5b0a42c1da099bb4efba97e', 'width': 1200}, 'variants': {}}]} |
can llma 3.2-1b (LOCAL AI) | 1 | question: can LOCAL AI llma 3.2-1b can see/vision like picture or video answer: the answer is question: how to send picture if no button answer: the answer is link how to u just went in Google photo or watever apps. | 2025-12-03T12:41:20 | Adventurous_Role_489 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pd33hz | false | null | t3_1pd33hz | /r/LocalLLaMA/comments/1pd33hz/can_llma_321b_local_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'dM6uEb4MdNhhSolzj2qqKzNrtHT8WKLXm2MOX56ZaWc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=108&crop=smart&auto=webp&s=5391bc621e9d79e0bab62543c54848e5b434f323', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=216&crop=smart&auto=webp&s=a28f109dc8758f1daf7c708a4aa75f9c4e3d6848', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=320&crop=smart&auto=webp&s=20ddf6c7165ff455bac20c9769aadeebcb33f72f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=640&crop=smart&auto=webp&s=6b3b918092cd2752e7c5a1378511904d3757ede7', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=960&crop=smart&auto=webp&s=59a4db1232a9a68a27dbec42fbeb910b63f35be4', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?width=1080&crop=smart&auto=webp&s=ce986138928a0c86f86f2fc8a425defd51a49083', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/gh6kq85hhz4g1.jpeg?auto=webp&s=563043ea0d66491a992a9828f0806519e8797bc0', 'width': 1080}, 'variants': {}}]} | ||
I built an offline AI chat app that automatically pulls Wikipedia articles for factual answers - runs completely locally with Ollama | 5 | [https://github.com/imDelivered/WikiRAG](https://github.com/imDelivered/WikiRAG)
• Runs 100% offline - no internet needed after initial setup
• Automatically injects relevant Wikipedia articles into AI responses (RAG)
• Privacy-focused - everything runs locally on your machine
• Works with any Ollama model (llama3.2, dolphin-llama3, etc.)
• Clickable hyperlinks that open Wikipedia articles in popup windows
• Simple setup script handles everything automatically
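The injection pattern is roughly the following (a simplified sketch only: the actual app works against a local Wikipedia dump and a vector index, while this illustration uses the online `wikipedia` package and Ollama's REST API):
```python
import requests
import wikipedia  # pip install wikipedia

def wiki_context(question: str, max_articles: int = 2) -> str:
    # grab the most relevant article summaries for the question
    titles = wikipedia.search(question)[:max_articles]
    return "\n\n".join(f"{t}:\n{wikipedia.summary(t, sentences=5)}" for t in titles)

def ask_ollama(question: str, model: str = "llama3.2") -> str:
    prompt = ("Use the Wikipedia excerpts below to answer factually.\n\n"
              f"{wiki_context(question)}\n\nQuestion: {question}")
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False},
                      timeout=300)
    return r.json()["response"]
```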
| 2025-12-03T12:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pd2x8u/i_built_an_offline_ai_chat_app_that_automatically/ | Smart-Competition200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd2x8u | false | null | t3_1pd2x8u | /r/LocalLLaMA/comments/1pd2x8u/i_built_an_offline_ai_chat_app_that_automatically/ | false | false | self | 5 | null |
DeepSeek V3.2 Technical Report | 289 | Here is a brief summary of **key breakthroughs of DeepSeek V3.2**
**1. DeepSeek Sparse Attention (DSA)**
A new efficient attention mechanism that dramatically reduces computational complexity while preserving performance in long-context scenarios.
It uses a lightning indexer with fine-grained top-k token selection to achieve sparse but effective attention.
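A toy single-head sketch of the indexer-then-top-k idea (illustrative only, not DeepSeek's actual kernels or dimensions):
```python
import torch

def sparse_attention(q, k, v, idx_q, idx_k, top_k: int = 64):
    # q, k, v: (T, d) for one head; idx_q, idx_k: (T, d_idx) cheap low-dim indexer features
    T, d = q.shape
    # 1) lightning indexer: cheap relevance scores for every (query, key) pair
    scores = idx_q @ idx_k.T                                   # (T, T)
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
    scores = scores.masked_fill(~causal, float("-inf"))
    # 2) fine-grained top-k token selection per query
    k_eff = min(top_k, T)
    sel = scores.topk(k_eff, dim=-1).indices                   # (T, k_eff)
    # 3) exact attention restricted to the selected keys only
    k_sel, v_sel = k[sel], v[sel]                              # (T, k_eff, d)
    att = (q.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5          # (T, k_eff)
    att = att.masked_fill(torch.gather(~causal, 1, sel), float("-inf"))
    return (att.softmax(-1).unsqueeze(-1) * v_sel).sum(1)      # (T, d)
```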
**2. Scalable and Stable Reinforcement Learning Framework**
Implements a heavily scaled post-training RL pipeline, with compute exceeding 10% of pretraining cost.
**3. Large-Scale Agentic Task Synthesis Pipeline**
Provides a novel pipeline that programmatically generates large numbers of tool-use environments (1,800+ environments, 85,000+ complex prompts).
This boosts generalization, tool-use ability, and instruction-following in interactive settings.
**4. Unified Reasoning + Agentic RL Training**
Merges reasoning, tool-use, and human-alignment RL into a single stage rather than multi-stage pipelines.
This avoids catastrophic forgetting and improves cross-domain performance simultaneously.
**DeepSeek-V3.2-Speciale**
A high-compute variant trained with relaxed length penalties and enhanced mathematical-reasoning rewards.
This model even surpasses GPT-5 and exhibits reasoning proficiency on par with Gemini-3.0-Pro, achieving gold-medal performance in both the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).
[Arxiv paper ](https://arxiv.org/abs/2512.02556) | 2025-12-03T12:31:51 | Dear-Success-1441 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pd2wjt | false | null | t3_1pd2wjt | /r/LocalLLaMA/comments/1pd2wjt/deepseek_v32_technical_report/ | false | false | default | 289 | {'enabled': True, 'images': [{'id': 'q3rjrhs0gz4g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?width=108&crop=smart&auto=webp&s=c5077ffbc4ee663cac6a56e44f2b2cb526fa2ab5', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?width=216&crop=smart&auto=webp&s=212bc9676001b803be636ad4356721582e674ba4', 'width': 216}, {'height': 241, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?width=320&crop=smart&auto=webp&s=29f0a574eefadfd543e7672ba43371a2cb516060', 'width': 320}, {'height': 482, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?width=640&crop=smart&auto=webp&s=7d2e078ce099142771b5d3999cbb9670fbfc18d8', 'width': 640}, {'height': 724, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?width=960&crop=smart&auto=webp&s=33e5726806ae4f4ba7abec40d4614c5997514718', 'width': 960}], 'source': {'height': 765, 'url': 'https://preview.redd.it/q3rjrhs0gz4g1.jpeg?auto=webp&s=06e197b475b3475cccbfe2c081e3622093e9ef43', 'width': 1014}, 'variants': {}}]} | |
Any papers/good information about creating paraphrase datasets. | 0 | I am currently working on creating a paraphrase dataset to eventually finetune my own model. I think I have created a fairly good algorithm to pair paraphrases but I want to make sure, thus I am wondering if any of the people here know of some good resources. | 2025-12-03T12:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pd2r2w/any_papersgood_information_about_creating/ | AdventurousFly4909 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd2r2w | false | null | t3_1pd2r2w | /r/LocalLLaMA/comments/1pd2r2w/any_papersgood_information_about_creating/ | false | false | self | 0 | null |
fully local AI system with Wikipedia RAG | 2 | [https://github.com/imDelivered/WikiRAG](https://github.com/imDelivered/WikiRAG) | 2025-12-03T12:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pd2qia/fully_local_ai_system_with_wikipidia_rag/ | Smart-Competition200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd2qia | false | null | t3_1pd2qia | /r/LocalLLaMA/comments/1pd2qia/fully_local_ai_system_with_wikipidia_rag/ | false | false | self | 2 | null
apple/CLaRa-7B-Instruct · Hugging Face | 42 | [https://huggingface.co/apple/CLaRa-7B-Instruct](https://huggingface.co/apple/CLaRa-7B-Instruct) | 2025-12-03T11:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pd1zeu/appleclara7binstruct_hugging_face/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd1zeu | false | null | t3_1pd1zeu | /r/LocalLLaMA/comments/1pd1zeu/appleclara7binstruct_hugging_face/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=108&crop=smart&auto=webp&s=6093d902b7d289cc2379297f5b8eb689245bf3c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=216&crop=smart&auto=webp&s=fc88c052de5aec93d2f923b86e5977deff8fdc60', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=320&crop=smart&auto=webp&s=f29b581c647093e6eb68286f6bda428cc3e00ba6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=640&crop=smart&auto=webp&s=7fe6e35b6507911e49c067c88c5c2cace96c7041', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=960&crop=smart&auto=webp&s=4fa51d5aca09b96cacdfeca3bbd43a5c95bd4e1f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?width=1080&crop=smart&auto=webp&s=b214727d65e32fd6101db2632cbe0d85c9da6f4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LjqIFCkEjX7aN0_Y_27jWMcvHQccxskIhXZAhgd23l0.png?auto=webp&s=182d550da425a18c8b495be2f021185b9a651095', 'width': 1200}, 'variants': {}}]} |
Hot take: We’re overselling 'semantic search' in RAG. | 62 | I've been building some RAG stuff and 'semantic search' feels way more magical in marketing than in reality.
Embeddings are great **fuzzy matchers in meaning space** \- they shine on paraphrases, synonyms, 'something like this' queries. But whenever I need sharper behavior (logic, constraints, dates, 'papers using X on Y after 2019'), plain bi-encoder vector search starts to fall over unless I add extra machinery.
In practice my setups end up looking more like:
1. BM25 or dense (or hybrid)
2. Reranker and/or LLM query rewrite
3. LLM reasoning, plus maybe graphs/filters
At that point, calling just the first stage 'semantic search' feels a bit misleading, because it's more like 'dense/vector retrieval' plus a bunch of stuff on top that actually does the reasoning.
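For context, my "stage 1 + stage 2" usually looks something like this (a sketch; the model names are just common defaults, not recommendations - the graph/filter step would sit on top as metadata predicates before stage 1):
```python
import numpy as np
from rank_bm25 import BM25Okapi                         # pip install rank-bm25
from sentence_transformers import SentenceTransformer, CrossEncoder

def retrieve(query: str, docs: list[str], k: int = 20, final_k: int = 5) -> list[str]:
    # Stage 1: cheap candidate generation (lexical + dense, union of the two top-k sets)
    bm25_scores = BM25Okapi([d.split() for d in docs]).get_scores(query.split())
    encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    dense_scores = encoder.encode(docs) @ encoder.encode(query)
    candidates = list(set(np.argsort(-bm25_scores)[:k]) | set(np.argsort(-dense_scores)[:k]))
    # Stage 2: a cross-encoder reranker does the sharper relevance judgement
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    rerank_scores = reranker.predict([(query, docs[i]) for i in candidates])
    ranked = [i for _, i in sorted(zip(rerank_scores, candidates), reverse=True)]
    return [docs[i] for i in ranked[:final_k]]
```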
So I have 2 questions for you:
* Is 'semantic search' a fair name for plain vector similarity, or do you avoid that term?
* How far did you get with just embeddings before needing reranking / query rewriting / graphs / filters? | 2025-12-03T11:42:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pd1yqc/hot_take_were_overselling_semantic_search_in_rag/ | Raisin_False | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd1yqc | false | null | t3_1pd1yqc | /r/LocalLLaMA/comments/1pd1yqc/hot_take_were_overselling_semantic_search_in_rag/ | false | false | self | 62 | null |
[Benchmark] I stress-tested Llama-3, Mistral & Olmo on "Coherent" vs "Chaotic" Rule Lists (50-400 items). It turns out LLMs listen better when it makes sense. | 1 | In the real world, whether we are generating code, legal docs, or creative writing our instructions usually have semantic structure.
I wanted to know: Does the "entropy" of the instructions affect the model's ability to follow them?
If I give a model 200 words only about "Cooking" (coherent words) and task it to write a story including them, is that easier than asking it to include 200 random dictionary words?
I built a framework called Entropic Instruction Following to test this.
**The Setup:**
\- Task: `f"Write a story that explicitly includes the following {len(word_list)} words:\n- " + "\n- ".join(word_list)`
\- Models: Llama-3.2-1B, Mistral-7B-v0.1, Olmo-3-7B, Falcon-H1-7B.
\- Number of rules: 50, 200, and 400 rules (words).
**The Variable:**
\- Coherent (c): Words derived from a single WordNet synset seed e.g:
https://preview.redd.it/gu5p6jxs4z4g1.png?width=698&format=png&auto=webp&s=bca35bd850cb4a44d72f7475e07ca2ab5f81b97b
\- Random (r): Words sampled uniformly at random.
\- And mixture of both like (e.g. alternating random and coherent, or in stripped bookends C|R, R|C)
We conduct the analysis across 10 distinct semantic seeds; for each seed we generate 10 random variations (100 trials in total per model and per rule count).
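For reference, here is a simplified sketch of how Coherent vs. Random word lists can be built with NLTK's WordNet (not necessarily the exact pipeline in the repo):
```python
import random
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def coherent_words(seed: str, n: int) -> list[str]:
    # lemmas from the seed synsets plus their direct hyponyms ("Cooking"-style cluster)
    words = set()
    for syn in wn.synsets(seed):
        for s in [syn] + syn.hyponyms():
            words.update(l.replace("_", " ") for l in s.lemma_names())
    return random.sample(sorted(words), min(n, len(words)))

def random_words(n: int) -> list[str]:
    # uniform sample over the whole WordNet vocabulary
    vocab = sorted({l.replace("_", " ") for l in wn.all_lemma_names()})
    return random.sample(vocab, n)
```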
Key Findings:
\- The "Coherence Boost" is **real** across many models. Semantic coherence acts like a bias (in the ax+b sense): plotting the rule-following results shows that it doesn't remove the notorious positional bias, it lifts the whole curve up, e.g. when comparing full coherence (top left) vs. the middle panel.
[Results for Mistral-7B.V0](https://preview.redd.it/rffubckx1z4g1.png?width=1319&format=png&auto=webp&s=63f631c744428aa48b5552551aa3cc724f9cf1ea)
\- At 200 rules, Mistral-7B saw a massive jump in adherence when the list was Coherent vs. Random.
\- Llama-3.2-1B punched way above its weight class on Coherent lists, effectively "simulating" a larger context window just because the data made sense.
2. The Capacity Cliff
We tested up to 400 rules (\~700 tokens of input). While this is well within the context window, the attention capacity breaks down.
\- At 50 rules: Most models are near 90-100%.
\- At 400 rules: Performance craters. Olmo-3 managed to stay afloat (\~24%), but the others dropped significantly.
**Importantly**, when comparing the absolute number of rules followed, you're not better off specifying more than **200 rules** for some models and some specific patterns:
[Absolute number of rules followed across rule lenghts specifications](https://preview.redd.it/vl7zsdk93z4g1.png?width=1195&format=png&auto=webp&s=f789fe680a77d837ef20e03ddaa9063d83d543bb)
3. Model Idiosyncrasies
\- Mistral is highly sensitive to the specific "seed." It loved writing about plants/animals but struggled more with abstract concepts.
[Seed level rule following for Mistral-7B-V0](https://preview.redd.it/j6hcyag04z4g1.png?width=2232&format=png&auto=webp&s=13402fc4bc16faf36e49e68d1c4ea86aeff8b059)
\- Olmo was weirdly stable. It didn't care if the list was coherent or random; it just gave a consistent performance. It seems "stubborn" against entropy.
Full Blog Post: [https://www.linkedin.com/pulse/entropy-context-window-do-llms-listen-better-when-makes-sifal-klioui-j4z9f/](https://www.linkedin.com/pulse/entropy-context-window-do-llms-listen-better-when-makes-sifal-klioui-j4z9f/)
Code & Dataset: [https://github.com/MostHumble/entropic-instruction-following/](https://github.com/MostHumble/entropic-instruction-following/)
**Context for the sub:** I'm currently looking for full-time roles in ML (specifically Eval / Post-Training / Synthetic Data). If you dig this kind of rigorous ablation testing, I'd love to chat. DMs open! | 2025-12-03T11:27:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pd1pev/benchmark_i_stresstested_llama3_mistral_olmo_on/ | Accurate-Turn-2675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd1pev | false | null | t3_1pd1pev | /r/LocalLLaMA/comments/1pd1pev/benchmark_i_stresstested_llama3_mistral_olmo_on/ | false | false | 1 | null
OpenAI's moat didn't leak, three forces broke it at once | 25 | Everyone's framing the past two weeks as OpenAI vs Google.
ChatGPT DAUs down 6% after Gemini 3 launched. Sam Altman declares "Code Red" internally. Pauses ads, shopping, health agents. All hands on deck. But I think that's not the story. The story is that three forces converged in the same month, and OpenAI can't outrun all of them at once.
**Force 1: China's november**
Chinese labs shipped 15 open-weight models in November. Not research previews. Production-ready, MIT-licensed models.
**The headline:** Moonshot AI's Kimi K2 Thinking. 1 trillion parameters, 32 billion active. Scores 67 on Artificial Analysis. For context, GPT-5 medium scores 66.
All that with quite a price difference:
\- GPT-5 medium: $3.44/M tokens
\- Kimi K2 Thinking: $1.07/M tokens
\- DeepSeek V3.2: $0.32/M tokens
For the same capability tier, that's a 10x spread. All the cheap ones are now open-weight.
November also brought VibeThinker (beats DeepSeek-R1 on AIME math), Step-Audio-R1 (first open audio reasoning model), HunyuanVideo 1.5 (8.3B video gen), and a half-dozen others. The pace didn't slow down for a single week.
**Force 2: efficiency ate scale**
Two years ago, bigger meant better. The compute barrier was the moat, but not anymore. OpenAI's own leaked model proves it. "Garlic" is smaller than their flagships but reportedly beats GPT-4.5 on coding and reasoning. They're targeting "big-model intelligence in smaller architectures." Expected as GPT-5.2 or 5.5 early 2026.
Mistral dropped 10 open-weight models yesterday. The flagship is a 675B MoE. But the real story is Ministral 3B. It runs entirely in your browser via WebGPU. No server. No API call. No cloud bill. Three billion parameters, SOTA for its size class, running on your laptop. The paradigm flipped, efficient is the new big.
**Force 3: silicon broke free**
Amazon's Trainium3 launched December 2. 3nm chip, 4x faster than Trainium2, 40% more energy efficient. Anthropic is already using it for Claude. The bigger news: Trainium4 will support NVIDIA's NVLink Fusion. You'll be able to mix AWS silicon with NVIDIA hardware in the same cluster. The CUDA lock-in that kept everyone on NVIDIA is cracking.
Same day, Tether released QVAC Fabric. First production-ready framework for fine-tuning LLMs on consumer GPUs and mobile devices. Qualcomm Adreno, ARM Mali, Apple Silicon. Apache 2.0. Cloud inference costs used to be the barrier to entry. That barrier is falling.
**bottom line**
OpenAI's panic isn't about Gemini beating ChatGPT on some benchmark. It's about the simultaneous arrival of:
\- Open-weight models that match proprietary at 1/10th the price
\- Efficient architectures that obsolete scale advantages
\- Silicon that breaks cloud monopolies
The moat didn't spring a leak. Three forces broke it at once. And no single company can outrun physics.
Wrote up the full analysis with data tables and more context: [https://www.whatllm.org/blog/three-forces-broke-openai-moat](https://www.whatllm.org/blog/three-forces-broke-openai-moat) | 2025-12-03T11:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pd1fxx/openais_moat_didnt_leak_three_forces_broke_it_at/ | medi6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd1fxx | false | null | t3_1pd1fxx | /r/LocalLLaMA/comments/1pd1fxx/openais_moat_didnt_leak_three_forces_broke_it_at/ | false | false | self | 25 | null |
RAG Paper 25.12.02 | 4 | 1. [MRD: Multi-resolution Retrieval-Detection Fusion for High-Resolution Image Understanding](http://arxiv.org/abs/2512.02906v1)
2. [TriLex: A Framework for Multilingual Sentiment Analysis in Low-Resource South African Languages](http://arxiv.org/abs/2512.02799v1)
3. [StockMem: An Event-Reflection Memory Framework for Stock Forecasting](http://arxiv.org/abs/2512.02720v1)
4. [Spatially-Grounded Document Retrieval via Patch-to-Region Relevance Propagation](http://arxiv.org/abs/2512.02660v1)
5. [EZYer: A simulacrum of high school with generative agent](http://arxiv.org/abs/2512.02561v1)
6. [Aetheria: A multimodal interpretable content safety framework based on multi-agent debate and collaboration](http://arxiv.org/abs/2512.02530v1)
7. [AskNearby: An LLM-Based Application for Neighborhood Information Retrieval and Personalized Cognitive-Map Recommendations](http://arxiv.org/abs/2512.02502v1)
8. [WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning](http://arxiv.org/abs/2512.02425v1)
9. [Retrieval-Augmented Memory for Online Learning](http://arxiv.org/abs/2512.02333v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-03T10:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pd0lae/rag_paper_251202/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd0lae | false | null | t3_1pd0lae | /r/LocalLLaMA/comments/1pd0lae/rag_paper_251202/ | false | false | self | 4 | null |
Building SFT from scratch - results & learnings | 2 | Continuing my "building from scratch" series ([GPT-2](https://github.com/garg-aayush/building-from-scratch/tree/main/gpt-2), [LLM inference](https://github.com/garg-aayush/building-from-scratch/tree/main/llm-inference)), I implemented supervised fine-tuning (SFT) from the ground up, loosely following Stanford's [CS336 Assignment 5](https://github.com/stanford-cs336/assignment5-alignment).
One thing I realized pretty quickly: writing the training code wasn't the hardest part. Making it actually work and produce reasonable results was. I spent more time debugging gradient instabilities, wrestling with vLLM integration, and figuring out why my losses looked wrong than writing the actual SFT training logic 😅. **These debugging sessions took up most of my time but taught me the most.**
[SFT Training Results](https://preview.redd.it/tw7juy28sy4g1.png?width=1696&format=png&auto=webp&s=3612d4c7907c5a4718e3c02b2443cac1de0eedfd)
**What I built & Results:**
**1. Reasoning SFT** (Qwen2.5-Math-1.5B on math reasoning traces):
|Run|Training Data|Reward Acc|Format Acc|
|:-|:-|:-|:-|
|baseline|\-|2.9%|14.4%|
|run\_all|Full 4.8K (correct + incorrect)|42.1%|99.2%|
|run\_filtered|Filtered 3.6K (correct only)|52.0%|99.1%|
|run\_filtered\_2epoch|Filtered 3.6K (2 epochs)|**53.4%**|99.3%|
**2. Instruction SFT** (Llama-3.1-8B on UltraChat-200K + SafetyLlama):
|Benchmark|Baseline|After SFT|
|:-|:-|:-|
|GSM8K|16.4%|**32.7%**|
|MMLU|58.1%|58.2%|
|Safety (SST)|62.0%|**78.0%**|
|AlpacaEval|1.6%|**5.3%**|
**Debugging lessons that cost me a lot of time:**
Here are some issues I ran into that took significant time to debug:
* **Per-token vs sequence-level loss**: My gradient norms were all over the place. Turns out, with variable-length sequences, longer sequences contribute way more to gradients than shorter ones. Switching to per-token loss normalization (dividing by actual response tokens instead of a constant) stabilized training significantly.
* **vllm integration issues:** I wanted to run intermediate evals during training using vLLM and hit three separate issues:
* initialization API changed between versions
* `model_executor` attribute disappeared in v0.11—fix: set `VLLM_ENABLE_V1_MULTIPROCESSING=0`
* `torch.compile` wraps the model under `_orig_mod`, so loading weights into vLLM requires accessing `model._orig_mod`.
* **BPE tokenization boundaries**: When implementing prompt masking for instruction SFT, I found that tokenizing the prompt separately vs. tokenizing the full sequence gives different boundary tokens. BPE merges behave differently based on context. Simple fix: drop the last prompt token before masking to avoid accidentally masking response tokens.
* **Data quality matters more than quantity**: Training on all 4.8K examples (including incorrect reasoning traces) gave 42% accuracy. Filtering to only correct traces (3.6K) boosted it to 52%. The model learns wrong patterns from wrong examples.
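For the first bullet above, the per-token normalization looks roughly like this (a minimal sketch, assuming a standard causal-LM shift and a 0/1 response mask):
```python
import torch
import torch.nn.functional as F

def per_token_sft_loss(logits, labels, response_mask):
    # logits: (B, T, V), labels: (B, T), response_mask: (B, T) with 1 on response tokens.
    # Dividing by the number of unmasked tokens (instead of a fixed constant) keeps
    # long and short sequences contributing comparably to the gradient.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]
    mask = response_mask[:, 1:].float()
    token_loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    ).view(shift_labels.shape)
    return (token_loss * mask).sum() / mask.sum().clamp(min=1)
```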
**You can read my detailed write-up on results and debugging issues here:** [**Blog**](https://huggingface.co/blog/garg-aayush/building-sft-from-ground-up#what-i-learned-building-sft-from-the-ground-up)
I have made all the code, datasets, and model checkpoints publicly accessible.
* **Code**: [building-from-scratch/sft](https://github.com/garg-aayush/building-from-scratch/tree/main/sft)
* **Datasets**: [garg-aayush/sft-cs336-assign5-datasets](https://huggingface.co/datasets/garg-aayush/sft-cs336-assign5-datasets)
* **Checkpoints**:
* Reasoning:
* run\_all: [qwen-2.5-math-sft-all-2epoch](https://huggingface.co/garg-aayush/qwen-2.5-math-sft-all)
* run\_filtered: [qwen-2.5-math-sft-filtered-2epoch](https://huggingface.co/garg-aayush/qwen-2.5-math-sft-filtered)
* run\_filtered-res-len: [qwen-2.5-math-sft-filtered-res-len](https://huggingface.co/garg-aayush/qwen-2.5-math-sft-filtered-res-len)
* run\_filtered-2epoch: [qwen-2.5-math-sft-filtered-2epoch](https://huggingface.co/garg-aayush/qwen-2.5-math-sft-filtered-2epoch)
* Instruction:
* run\_mask: [llama31-8b-sft-mask](https://huggingface.co/garg-aayush/llama31-8b-sft-mask)
* run\_nomask: [llama31-8b-sft-nomask](https://huggingface.co/garg-aayush/llama31-8b-sft-nomask)
* **Training logs**: [wandb/sft](https://wandb.ai/garg-aayush/sft) and [wandb/sft\_instruct](https://wandb.ai/garg-aayush/sft_instruct) | 2025-12-03T10:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pd0hvi/building_sft_from_scratch_results_learnings/ | garg-aayush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd0hvi | false | null | t3_1pd0hvi | /r/LocalLLaMA/comments/1pd0hvi/building_sft_from_scratch_results_learnings/ | false | false | 2 | null | |
I figured out how to nudge LLMs to solve almost any problem, very similar to real general intelligence. However, automating it requires two major architectural changes that we haven't figured out yet. | 0 | *Note: it is a repost because the author of this article was not able to publish it on Reddit. The author's Substack is linked below.*
AGI is likely just good old self-supervised continual few-shot learning with loss reduction as reward signal. What? Yes.
# Summary
* Loss/perplexity/rarity is the single best predictor of ability in all LLMs:
* Common, low loss problems are most often solved by models;
* Rare, high loss problems are rarely solved by models;
* Most benchmarks are ultimately measures of training loss;
* I found out that ordering problems from low to high loss in the same context reduces problem difficulty to a negligible level;
* Solution of any problem of arbitrary difficulty in LLMs is effectively finding the solution path from low to higher loss problems;
* Finding low-to-high loss solution paths requires two major architectural changes:
* Scaling the context window into a long-term memory graph;
* Training an executive controller (artificial “prefrontal cortex") iterating over this graph and decomposing high-loss problems into lower-loss ones;
* The implementation of long-term memory and executive control enables data-efficient learning and true general reasoning;
* Certain work is being done in this direction, but we don’t really know how to do it yet.
# Previously on…
Here are some findings from my previous experiments that you should know to understand this article.
## Problem difficulty for LLMs is best predicted by problem rarity - training loss
There are many synonyms for rarity when it comes to LLMs. To name a few:
* Training loss
* Cross-entropy
* Perplexity
* Probability
* Commonness
* Representation in the training dataset
Regardless of which term you prefer, it is **always the most important single factor for problem difficulty.** The rarer the ideas and concepts used in a problem, the lower the probability that an LLM solves it correctly. Everyone who has worked with LLMs long enough knows this, but very few people have thought about the implications. Since loss (problem rarity) is the best predictor of LLM performance on any given problem, it is enough to write (or generate) just a couple of problems that use increasingly rare ideas and concepts to create a strong benchmark.
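To make the equivalence explicit: perplexity is just the exponential of the average per-token cross-entropy, so all of the quantities above are monotone transforms of one another:
```latex
\mathrm{PPL}(x) \;=\; \exp\!\big(\mathcal{L}_{\mathrm{CE}}(x)\big)
\;=\; \exp\!\Big(-\tfrac{1}{N}\textstyle\sum_{i=1}^{N}\log p_\theta(x_i \mid x_{<i})\Big)
```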
### Loss-aligned benchmarks
#### Music test
I wrote a short phrase and transposed it into a set of different "dialects" (keys), each one less popular than the previous one. Rarity was the single most important factor for problem difficulty - the more popular the key was, the more likely an LLM was to analyze and label it correctly, and vice versa:
[](https://substackcdn.com/image/fetch/$s_!UBec!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1dc9621a-6ff5-484d-be3d-f54737e9cc6b_1798x267.png)
The point of this benchmark is that all problems here have similar difficulty for humans - once you notice that they all follow the same pattern in different dialects, you effectively solve the problem. For LLMs, however, the probability of a correct solution depends on the representation of the key in the training data.
#### Leetcode-style test - generated by GPT-5
Another time, I prompted GPT-5 to estimate probabilities of coding concepts relative to each other, generate a set of problems using them, and judge LLaMa 4’s solutions. As expected, LLaMa 4 passed most common problems and only half of the less common ones.
**This benchmark was generated 90% automatically** with almost no human in the loop involved. I conclude that it is possible to outsource the process of eval development to LLMs. Bye bye benchmark hacking.
For details on both benchmarks, visit my GitHub [https://github.com/sutanisurabu](https://github.com/sutanisurabu/25-Questions).
## Training loss, problem difficulty and LLM ability are synonymous
Training loss is the best predictor of problem difficulty for LLMs. Even a short 16-question eval aligned with it (like the one above) produces substantial correlations with popular measures such as LiveBench - r = .7 and up. I have not found a predictor of LLM ability better than training loss, and there is likely none.
To give you an idea how well loss predicts problem difficulty for LLMs: in my tests, GPT 5 kept making errors even when only scoring other models on rare problems. As judge LLM, it didn’t have to solve any of them, but it still made stupid arithmetic errors and even arbitrarily changed the rubric multiple times - just because the mere presence of unlikely tokens confused it so much!
At this point, loss is basically synonymous with LLM ability, so, for practical reasons, I will use the terms "loss", "rarity", and "difficulty" interchangeably.
## All LLMs are powered by the same ability
Regardless of your LLMs and test contents, performance degradation from most to less common problems is present in all LLMs. Some LLMs may be better in some domains than others, but all of them tend to solve common problems correctly and uncommon problems incorrectly, regardless of the contents of the test. This suggests that all LLMs share the same underlying ability, and only differ in the level of this ability.
## Implication: benchmarks should measure loss, not real world performance
Since for LLMs, problem difficulty, ability and training loss are synonymous, there is no reason to use bloated benchmarks that attempt to capture real world performance. So, instead of evaluating on AIME, HLE, GPQA and other benchmarks, you need to evaluate on loss, or loss-aligned benchmarks (like above). Of course, high scores on difficult benchmarks are really impressive, but you should not pursue them because:
1. You risk optimizing for benchmarks, not total ability of a model;
2. Loss predicts most of the real world performance anyway.
You likely find this counter-intuitive, but it's not really surprising. A very similar concept exists in human intelligence research, and it is counter-intuitive and controversial too. Just like LLMs, all humans share the same underlying ability (general intelligence) that explains most of the performance differences between them on any task, and measuring it allows you to predict performance on any task with reasonable accuracy. However, most people - including social scientists who study human psychology and intelligence - disregard the very concept of general intelligence for some reason, despite the fact that it is the most sound and replicable discovery in psychology ever.
It looks like AI scientists may be just as mistaken about abilities in AI models as social scientists are mistaken about abilities in humans, for whatever reason. Another reason why loss-aligned benchmarks are not mainstream may be because it’s not really easy to create benchmarks that’d measure loss fairly - you will see the details below.
## Scaling and RL do not unlock new abilities
There is an opinion that LLMs develop new capabilities during emergence. It's not really true. Since LLM performance is predicted by the same ability present in all models, all performance differences of LLMs delivered by scaling, RL and other training methods are nothing more than differences in the level of this ability. They are quantitative, not qualitative - they change the level of this ability, but never develop genuinely new ones.
It follows that emergent phase shifts are just measurement artifacts. They manifest when an LLM achieves loss low enough to output token sequences that make sense to humans. However, LLMs don't care whether they make sense or not. They always predict tokens regardless of the level of their ability (loss). At some point, they simply achieve loss low enough to predict token sequences that make sense to humans. Once again, it is always a quantitative improvement, not a qualitative one.
## Scale-independent lack of novel problem solving ability
When factor analysis is done on a sample of LLMs and humans that are tested on the same measurements of ability, the differences between human and LLM ability hierarchy become obvious - LLMs lack fluid intelligence, the ability to solve novel problems.
When a test of fluid reasoning is administered to humans, it turns out that different people have different levels of this fluid ability. LLMs, however, do not have fluid ability at all. When presented with examples independent of prior experience (training data), they fail. In fact, the novelty of a problem for LLMs is itself a measure of training loss - the rarer the problem is, the more novel it is for a model, the higher its loss, and the more difficult it is.
Lack of novel problem solving ability in LLMs is the key obstacle on the way to AGI.
### But reasoning models!
Uniform performance trend (common problems are easy, uncommon problems are hard) in both reasoning and non-reasoning LLMs suggests the same ability structure in both. What you believe “reasoning” is in so called reasoning LLMs is nothing more than **memory** of reasoning, not reasoning itself.
It's similar to playing chess. Chess grandmasters simply remember millions of positions and, during the game, only match the most similar ones against the current board state. The only difference between Garry Kasparov and ChatGPT is that Kasparov plays with pieces, and ChatGPT plays with tokens. Neither of them reasons during the game.
There is even a study that found that grandmasters fail to solve positions constructed to not resemble any of puzzles and real games. LLMs work exactly the same way.
### But ARC-AGI!
ARC-AGI produces a substantial correlation with most other benchmarks (similar rank order of models), and since most benchmarks effectively measure training loss, it suggests that ARC-AGI does not measure anything beyond training loss, too. I have not seen any exploratory factor analysis study of ARC-AGI that demonstrates its loading on the separate factor of “fluid reasoning” in LLMs. Even if it does, it loads on the factor of training loss so much that it tests it regardless if it ever tests anything else.
## AGI will saturate the test the moment it arrives
There is a very easy way to test if true AGI (an AI model capable of both memorization-recall and reasoning) has arrived or not yet - just give it a couple of loss-aligned benchmarks like those above. Just like humans, a true AGI would be able to:
1. Learn the test subject on the fly without prior pre-training;
2. Learn enough right in the process to solve the highest loss problems, defeating the whole test.
So far, AGI has not been created yet. However…
# I was able to condition a LLM to do this
Under certain conditions, it is possible to make LLMs emulate novel problem solving. It does not magically induce the novel problem solving ability in them, but demonstrates what they would be capable of if they had one - and they already demonstrate quite impressive results.
## From low to high loss
When this test is presented to humans, they learn to solve it better right in the process. So, to achieve a similar effect with LLMs, the problems should be presented not one per context, but together in the same prompt to activate the process of in-context learning.
And let’s see what happens now:
[](https://substackcdn.com/image/fetch/$s_!fZh5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a1233fb-ee57-447f-9509-24d422d290fd_615x437.png)
[](https://substackcdn.com/image/fetch/$s_!vuAw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5029370b-8fd3-4fc2-ba41-d00ed91dc6f3_639x635.png)
GPT 5 High:
[](https://substackcdn.com/image/fetch/$s_!e1xy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9757ef4-132e-4432-a297-8d436f52ff23_620x558.png)
At pass@10, GPT 5 High solves **all** of the questions correctly without a single mistake! For comparison, when the last three - most difficult - questions are presented separately at pass@10, it solves them just 4-6 times out of 10. In-context learning mitigates the training loss in GPT 5 High so well that it solves problems approaching the boundaries of its training distribution!
This is exactly the same process research mathematicians trigger in LLMs when using them to assist with novel problems: they steer them from their “comfort zone” (revert to mean) by providing more and more context to reduce perplexity until the models finally find the required solution paths.
Now, imagine if you wanted GPT 5 to prove any arbitrary difficult problem - for example, the Riemann conjecture. All you need to do is to prompt it with a set of problems with gradually increasing loss, where at the bottom of this difficulty gradient will be the very Riemann conjecture. If you can find a solution path from Pythagoras’ theorem to the Riemann conjecture, you can basically solve mathematics. I doubt that anyone reasonable would not consider this effectively equal to AGI at that point. However, it sounds too good to be true, and there is indeed a catch - you will see it soon.
## From high to low loss
Before, we have presented the problems in order from most to least common (low to high loss). What if the order is reversed - from the most to the least difficult problems?
[](https://substackcdn.com/image/fetch/$s_!79MW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae61a0e8-7c85-4f10-8f12-8efd0539e7ae_639x703.png)
[](https://substackcdn.com/image/fetch/$s_!qREx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63c2acc8-6d53-4d5a-8f41-986b4e363624_639x374.png)
GPT 5 High returns the following response:
[](https://substackcdn.com/image/fetch/$s_!dU8_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f49a27f-56b9-4a4d-a881-27b1bb2aa82f_780x652.png)
This time it is surprisingly worse. GPT 5 correctly identified the qualities (Major or Minor) of most root chords, but never even bothered to check whether the progressions belong to modes other than Major or Minor. For Locrian keys, it did not even correctly identify the tonic. It was also the only attempt out of 10 in which GPT 5 even returned a response instead of timing out.
Apparently, presenting problems in order from the highest to the lowest loss forces the model to follow the path of the highest possible resistance, which quickly exhausts its capacity and degrades the answer quality. You can imagine it as an uphill and downhill movement over the power law curve of scaling laws: it’s easier (and way more fun) to ride down from the top (high loss) of the curve to the bottom (low loss) of the curve than to climb up.
[](https://substackcdn.com/image/fetch/$s_!qO6e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99f86e78-f8c0-475e-93fe-b842e9f6cced_1408x768.png)Illustration by Nano Banana Pro
I have tried a number of different representation orders - randomized, with some keys excluded, with only the least and the most difficult keys and so on. Still, the best results were always produced by the original order of problems with gradually increasing perplexity, otherwise response quality tended to degrade.
Other models demonstrated similar results in this experiment, but the best results were achieved by the most capable models (GPT 5, Gemini 3, DeepSeek V3.2). There must be an ability threshold required for effective in-context learning.
# Problems on the way to AGI
Unfortunately, it won’t be true general reasoning until we solve a couple of problems.
## Problem 1. Context window limits
Obviously, if an LLM tries to find the path from a very low loss to a very high loss problem, it will simply exhaust its attention. Even just presenting the problems one per prompt within the same context made the impact of ICL vanish - GPT simply stopped looking back at previous problems to calibrate its uncertainty.
To solve this, we need some mechanism that, instead of saving all necessary context in the fragile context window, would store it in some form of a long-term storage (in the weights?), from where it will be recalled each time a solution of a higher loss problem is required, building the hierarchy of experience required to solve yet unfamiliar problems.
## Problem 2. Unpredictable perplexity of real-world problems
The example above works because all problems are presented in the order of gradually increasing loss:
[](https://substackcdn.com/image/fetch/$s_!tbEY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3aff2e32-fa2c-4296-bb2c-d3b52045cccd_1088x976.png)
However, real-world problems look more like this:
[](https://substackcdn.com/image/fetch/$s_!eIoG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e37313b-1a32-4ae9-a313-414bdf1b8974_746x888.png)
Real-world problems can have completely unpredictable perplexity - but that is not even the nastiest part. Did you notice that the yellow color is missing? In real-world problems, difficulty gradients are not only completely random - some intermediate difficulty gradients are simply not present. Even if the problems are aligned with the gradient, just one sudden shift from a low to a high loss problem is enough to confuse any LLM.
So, to have true general reasoning, LLMs should be able to both:
1. Sort problems in order from low to high loss;
2. Recover missing steps of intermediate difficulty.
Many **humans** are incapable of this too!
Well, most people are probably able to sort problems by loss, since loss in humans is basically just the amount of prior experience with given problems - in the end, we can tell whether we know enough math to pass an exam.
However, **recovering** steps of intermediate difficulty is a completely different game. Some high-loss problems are unsolved for a very long time because nobody even knows which intermediate steps should be taken to build a bridge from a low loss problem to a high loss one. And this is the catch I mentioned before - you only need to trace the path from Pythagoras’ theorem to the Riemann conjecture to solve mathematics, but you will have to **find** this path first.
In fact, when I tried to automate my loss-aligned benchmarking method to create a universal evaluation methodology for all kinds of expertise in LLMs, I realized that I couldn’t come up with a way to generate domain-agnostic problems with gradually increasing loss, like those I wrote for my original test. What I did not know is that writing a curriculum of problems from Pythagoras’ theorem to the Riemann conjecture with gradually increasing loss is about as difficult as proving the conjecture itself.
So, basically, we just need to teach LLMs to decompose and sort problems in order from the lowest to the highest loss and to recover problems of intermediate difficulty - something that even many humans are bad at - and only then will we have AGI. Unfortunately, vanilla autoregressive transformers, the mainstream LLM architecture, have two serious limitations:
* Causal masking - they can’t look ahead through the list of problems to estimate difficulty;
* Self-unawareness - unlike humans, an LLM can’t estimate the perplexity (difficulty) of a problem before it actually attempts to solve it.
Sadly, despite all their inefficiencies, autoregressive models are unlikely to go away any time soon, so the only solution so far is to work around their limitations.
## Interpretability problem (minor). Factor confounding
Real-world problems do not load on just one skill that can be tested separately. Just one problem can load on many second-order factors that may be impossible to fully isolate from each other.
However, I don’t believe that it is really a problem. While some LLMs are better at some domains and skills than others, the only thing that really matters is loss. So, if a model can find a solution path aligned with the loss gradient, it does not matter whether it finds tokens thematically similar to the original problem or just some nonsense, as long as it follows the loss gradient. In fact, reasoning GPT models do exactly this: their Chain-of-Thought is full of incomprehensible language (“neuralese”) that makes no sense at all to humans, but it makes sense to the models as long as it reduces total loss.
So, I believe, this problem is a minor one - interpretability loss due to factor confounding: since one problem can load on many second order factors at once, it can become unclear which of them it really tests.
# How to solve it?
Read the rest on the author's substack (free): [https://sutanisurabu.substack.com/p/agi-is-likely-just-good-old-self](https://sutanisurabu.substack.com/p/agi-is-likely-just-good-old-self). | 2025-12-03T10:12:12 | https://www.reddit.com/r/LocalLLaMA/comments/1pd0g2v/i_figured_out_how_to_nudge_llms_to_solve_almost/ | Dark_Fire_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd0g2v | false | null | t3_1pd0g2v | /r/LocalLLaMA/comments/1pd0g2v/i_figured_out_how_to_nudge_llms_to_solve_almost/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw', 'resolutions': [{'height': 16, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=108&crop=smart&auto=webp&s=511e2bfd93b22c36d2b4ca3741085ce68f006fa6', 'width': 108}, {'height': 32, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=216&crop=smart&auto=webp&s=466da8dfa49e1aa640a202bfd40bebd8f598d244', 'width': 216}, {'height': 47, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=320&crop=smart&auto=webp&s=41b762f51a2dae3d23bf50618da2df0e02df26fa', 'width': 320}, {'height': 94, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=640&crop=smart&auto=webp&s=00775ac5f2e56da980f403843b2885ff9e203311', 'width': 640}, {'height': 142, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=960&crop=smart&auto=webp&s=5a2f30447a15667a965440d2783a8fe7244638b5', 'width': 960}, {'height': 160, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?width=1080&crop=smart&auto=webp&s=c0961c90ef8aed5df1019a80daa05fbd7209eb1e', 'width': 1080}], 'source': {'height': 216, 'url': 'https://external-preview.redd.it/g-5z8ukqd7ZpwpncXGWF5-zHvI6wMZTdVhOihv37vjw.jpeg?auto=webp&s=e6cb4cd1398e3641e8eaf2d13ff734b4e31faba2', 'width': 1456}, 'variants': {}}]} |
[P] Built Multi-LLM Orchestrator – unified Python client for GigaChat, YandexGPT & Ollama with auto-fallback | 0 | Hey LocalLLaMA! I built a Python library that solves a problem I kept hitting: how do you use Ollama as your primary LLM but seamlessly fall back to cloud providers (like Russian GigaChat/YandexGPT) when your local GPU is busy or the model isn't downloaded yet?
**Multi-LLM Orchestrator** provides a unified interface across providers with smart routing and automatic fallback.
**Key Features:**
* Streaming support for all providers (Ollama, GigaChat, YandexGPT)
* Auto-fallback: If GigaChat returns 500 → automatically switches to YandexGPT before the user sees any output
* LangChain integration: Drop-in replacement for any BaseLLM
* 92% test coverage (133 tests including streaming)
**Why Ollama + Cloud Fallback?**
Local models are great, but:
* Your GPU might be busy training another model
* You might not have that specific model downloaded yet
* Some queries need larger context windows than your hardware can handle
The orchestrator lets you configure Ollama as primary and cloud providers as backup. Fully async (httpx/asyncio).
**Real-world performance (tested with actual APIs):**
* TTFT: 1.4s (including OAuth2 auth)
* Speed: 137 tokens/sec on real GigaChat API
* Fallback time: <500ms to switch providers on error
**Quick example:**
from orchestrator import Router
from orchestrator.providers import OllamaProvider, GigaChatProvider, ProviderConfig
router = Router(strategy="first-available")
# Try Ollama first
router.add_provider(OllamaProvider(
ProviderConfig(name="local", base_url="http://localhost:11434", model="llama3.2")
))
# Fallback to GigaChat if Ollama fails
router.add_provider(GigaChatProvider(
ProviderConfig(name="cloud", api_key="...")
))
# If Ollama is down/busy, automatically uses GigaChat
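# NOTE: top-level await is shown for brevity; in a real script, wrap this
# in an async def main() and run it with asyncio.run(main())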
response = await router.route("Explain quantum computing")
**Links:**
* GitHub: [https://github.com/MikhailMalorod/Multi-LLM-Orchestrator](https://github.com/MikhailMalorod/Multi-LLM-Orchestrator)
* PyPI: `pip install multi-llm-orchestrator`
* Technical deep-dive (Russian): [https://habr.com/ru/articles/972740/](https://habr.com/ru/articles/972740/)
Open to feedback! Especially curious if anyone has similar setups or faced the same "local + cloud fallback" problem. | 2025-12-03T09:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pd082c/p_built_multillm_orchestrator_unified_python/ | Subject_Pen_4816 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd082c | false | null | t3_1pd082c | /r/LocalLLaMA/comments/1pd082c/p_built_multillm_orchestrator_unified_python/ | false | false | self | 0 | null |
Chinese startup founded by Google engineer claims to have developed its own tpu reportedly 1.5 times faster than nvidia a100. | 504 | https://www.tomshardware.com/tech-industry/chinese-startup-founded-by-google-engineer-claims-to-have-developed-its-own-tpu-reportedly-1-5-times-faster-than-nvidias-a100-gpu-from-2020-42-percent-more-efficient
| 2025-12-03T09:51:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pd04cn/chinese_startup_founded_by_google_engineer_claims/ | Turbulent_Pin7635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pd04cn | false | null | t3_1pd04cn | /r/LocalLLaMA/comments/1pd04cn/chinese_startup_founded_by_google_engineer_claims/ | false | false | self | 504 | {'enabled': False, 'images': [{'id': 'hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=108&crop=smart&auto=webp&s=5a6733cb781b44b294a8627bf4353ea7bacabb19', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=216&crop=smart&auto=webp&s=96a774289a3213e01031b769cfbe0e8bb324048f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=320&crop=smart&auto=webp&s=02b35023298a731d24adcce9f8dd565cdfa2aac9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=640&crop=smart&auto=webp&s=bcf265db2b41ec6682dd8b17be2fe6c9186ab1e0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=960&crop=smart&auto=webp&s=65988780288e4937ed6f2fe3e6734982a744332b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?width=1080&crop=smart&auto=webp&s=04026d529e2902a6d939f1268a6720e0f0e818ed', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/hC5hKGtFjbA9pIJUsgLdxlOmEr-NX-ueFNCvTgQ_Ze8.jpeg?auto=webp&s=eb5fdb354cbae3525268b96f43f9648ffe47fa8f', 'width': 1920}, 'variants': {}}]} |
Apple Studio M1 Ultra 128GB -> is it still worth it for LLM? | 0 | Hi,
I'm considering buying an M1 Ultra Studio, but I don't have recent benchmark info.
How much is capable with e.g qwen3 70b in regards of TPS ? | 2025-12-03T08:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pczbeo/apple_studio_m1_ultra_128gb_it_is_still_worth_for/ | JonasTecs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pczbeo | false | null | t3_1pczbeo | /r/LocalLLaMA/comments/1pczbeo/apple_studio_m1_ultra_128gb_it_is_still_worth_for/ | false | false | self | 0 | null |
Access for free users on chat.primeintellect.ai is restricted to 20 prompts. | 0 | 2025-12-03T08:55:43 | luckything321 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pcz972 | false | null | t3_1pcz972 | /r/LocalLLaMA/comments/1pcz972/access_for_free_users_on_chatprimeintellectai_is/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '3x5m3hu5ey4g1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/3x5m3hu5ey4g1.jpeg?width=108&crop=smart&auto=webp&s=dfdb2365fecd531fb682c23486a39e51ffc1ea05', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/3x5m3hu5ey4g1.jpeg?width=216&crop=smart&auto=webp&s=d3c681be0b2a537cdc97377d376c17b8ea93b3de', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/3x5m3hu5ey4g1.jpeg?width=320&crop=smart&auto=webp&s=02ac8ad060090bf8aea2e22f284bf3f26fda522c', 'width': 320}, {'height': 240, 'url': 'https://preview.redd.it/3x5m3hu5ey4g1.jpeg?width=640&crop=smart&auto=webp&s=6bff11d2358716b1374701e5b1a58734e69bab17', 'width': 640}], 'source': {'height': 265, 'url': 'https://preview.redd.it/3x5m3hu5ey4g1.jpeg?auto=webp&s=ed44be084848a6e794159e7fb2195134dfeaed9f', 'width': 706}, 'variants': {}}]} | ||
Access for free users on chat.primeintellect.ai is restricted to 20 prompts. | 1 | 2025-12-03T08:53:01 | JeffreySons_90 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pcz7ra | false | null | t3_1pcz7ra | /r/LocalLLaMA/comments/1pcz7ra/access_for_free_users_on_chatprimeintellectai_is/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'eTu55kN9kv5c8n2Sru0QvAtwDXIvH-Q-cyKVvzk1ZTo', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/ju1w3lvndy4g1.jpeg?width=108&crop=smart&auto=webp&s=abfe4e88ab42563bf8b0f77af71a7b166e892846', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/ju1w3lvndy4g1.jpeg?width=216&crop=smart&auto=webp&s=ecafd54e9ea50ce8b01e9203da0aac72710982da', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/ju1w3lvndy4g1.jpeg?width=320&crop=smart&auto=webp&s=61f40bc378dd36865b072554fcc5f5596feefc92', 'width': 320}, {'height': 240, 'url': 'https://preview.redd.it/ju1w3lvndy4g1.jpeg?width=640&crop=smart&auto=webp&s=6b0f42838d34768b6d374217b958cfd6f55d6857', 'width': 640}], 'source': {'height': 265, 'url': 'https://preview.redd.it/ju1w3lvndy4g1.jpeg?auto=webp&s=9697623c55676dbe7bb009f569458d02dde80078', 'width': 706}, 'variants': {}}]} | |||
Semantic deduplication | 0 | Side project - waiter by day, love algorithms.
Performance summary from a 20,000-query test, run on CPU.
Datasets:
* SO: 10,000 Stack Overflow posts (110k)
* CS: 10,000 customer support queries (source: Hugging Face, "bitrex")
Results:
* Vocabulary growth: O(log N) - proven, yes
* Compression, no compliance (diverse SO): 1.44x (30%)
* Compression, no compliance (repetitive CS): 3.03x (67%)
* Deduplication: 5,059 queries (CS), 16 queries (SO)
* False positives (20K): 0 (CS), 6 (SO)
* Cache hit rate: 85% (CS), 86% (SO)
* Speed (cached): 40,000 q/s
* Lossless: configurable - compliance data kept with no compression
* Precision: 100% (CS), 62.5% (SO)
* Cross-document: yes
Project and test: success.
Problem: speed is good below ~500K, but it needs LSH to scale.
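Rough sketch of the LSH direction (random-hyperplane signatures for cosine similarity - the embedding dimension and signature length are illustrative, not what the project currently uses):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 384, 64                      # embedding dim, signature bits (illustrative)
planes = rng.normal(size=(BITS, DIM))    # fixed random hyperplanes

def signature(vec: np.ndarray) -> int:
    """Pack the sign pattern of the hyperplane projections into an int."""
    bits = (planes @ vec) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Bucket queries by (a band of) their signature; only queries sharing a bucket
# get the exact semantic-duplicate check, so pairwise comparisons stop growing
# quadratically with corpus size.
```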
| 2025-12-03T08:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pcyuvs/semantic_deduplication/ | Ljumberg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcyuvs | false | null | t3_1pcyuvs | /r/LocalLLaMA/comments/1pcyuvs/semantic_deduplication/ | false | false | self | 0 | null |
The security risks of "Emoji Smuggling" and Hidden Prompts for Local Agents | 0 | Hi everyone,
Long-time lurker here. We spend a lot of time optimizing inference speeds, quantization, and finding the best uncensored models. But I've been thinking about the security implications for Local Agents that have access to our tools/APIs.
I created a video demonstrating Prompt Injection techniques, specifically focusing on:
Emoji Smuggling: How malicious instructions can be encoded in tokens that humans ignore (like emojis) but the LLM interprets as commands.
Indirect Injection: The risk when we let a local model summarize a webpage or read an email that contains hidden prompts. I think the visual demonstrations (I use the Gandalf game for the logic examples) are easy to follow even without audio.
- Video Link: [https://youtu.be/Kck8JxHmDOs?si=icxpXu6t2OrI0hFk](https://youtu.be/Kck8JxHmDOs?si=icxpXu6t2OrI0hFk)
Discussion topic: For those of you running local agents with tool access (like function calling in Llama 3 or Mistral), do you implement any input sanitization layer? Or are we just trusting the model to not execute a hidden instruction found in a scraped website?
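For concreteness, here is a minimal sketch of the kind of filter I have in mind - the Unicode ranges and the instruction-pattern regex are illustrative, and this is nowhere near a complete defence against prompt injection:

```python
import re
import unicodedata

# Code-point ranges often used to smuggle hidden payloads next to emoji:
# variation selectors and Unicode "tag" characters.
HIDDEN_RANGES = [
    (0xFE00, 0xFE0F),    # variation selectors
    (0xE0000, 0xE007F),  # tag characters
    (0xE0100, 0xE01EF),  # variation selectors supplement
]

SUSPECT = re.compile(r"(?i)\b(ignore (all )?previous instructions|system prompt)\b")

def sanitize(untrusted: str) -> tuple[str, bool]:
    """Strip hidden code points, normalize, and flag obvious injection phrases."""
    visible = "".join(
        ch for ch in untrusted
        if not any(lo <= ord(ch) <= hi for lo, hi in HIDDEN_RANGES)
    )
    cleaned = unicodedata.normalize("NFKC", visible)
    return cleaned, bool(SUSPECT.search(cleaned))
```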
Would love to hear your thoughts on securing local deployments. | 2025-12-03T08:18:48 | https://www.reddit.com/r/LocalLLaMA/comments/1pcyp5z/the_security_risks_of_emoji_smuggling_and_hidden/ | jokiruiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcyp5z | false | null | t3_1pcyp5z | /r/LocalLLaMA/comments/1pcyp5z/the_security_risks_of_emoji_smuggling_and_hidden/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'CrHzINV36_DUduiLudPceo3nK8KPgigAROITUuholcI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CrHzINV36_DUduiLudPceo3nK8KPgigAROITUuholcI.jpeg?width=108&crop=smart&auto=webp&s=a5989647c8d4475440ef14247c701f360f5e4448', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CrHzINV36_DUduiLudPceo3nK8KPgigAROITUuholcI.jpeg?width=216&crop=smart&auto=webp&s=64dbd7a7a54060d0c32600b88fcfbdc45d464b27', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CrHzINV36_DUduiLudPceo3nK8KPgigAROITUuholcI.jpeg?width=320&crop=smart&auto=webp&s=594130607d90a504ebfe203b5d63f8b3da1b0f50', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/CrHzINV36_DUduiLudPceo3nK8KPgigAROITUuholcI.jpeg?auto=webp&s=b6bad6dd66b4bb49fbf3a643e3dc69afc8cdc9b7', 'width': 480}, 'variants': {}}]} |
The unexpected guest to the dependencies party eats up all the space | 0 | When you have to include torch into your purely LLM and API-based Python project's dependency list... for science! | 2025-12-03T08:07:06 | CucumberBackground83 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pcyim2 | false | null | t3_1pcyim2 | /r/LocalLLaMA/comments/1pcyim2/the_unexpected_guest_to_the_dependencies_party/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'f1kd970j3y4g1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=108&crop=smart&auto=webp&s=afbad00afc6c41e99b7a95f157d661c009cf5b82', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=216&crop=smart&auto=webp&s=a990a9e91bbd1658d5a15990c1cc9cc23e8230e0', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=320&crop=smart&auto=webp&s=5cb71d4c47692237ae137375b0a72b95011ec0a3', 'width': 320}, {'height': 382, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=640&crop=smart&auto=webp&s=80001380eba9c5cb4860021d7f26ec9898091ba1', 'width': 640}, {'height': 573, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=960&crop=smart&auto=webp&s=feabe96916313fc3ccd650d1bffd98c30ce57858', 'width': 960}, {'height': 645, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?width=1080&crop=smart&auto=webp&s=e437b50bbffd6b34a7ad19844d22233e59ccc543', 'width': 1080}], 'source': {'height': 841, 'url': 'https://preview.redd.it/f1kd970j3y4g1.png?auto=webp&s=5e26d1c8a9aa40200f1290558f759111a4975f72', 'width': 1407}, 'variants': {}}]} | |
Top 20 Tech Companies every LocalLLaMa Reddit Group professional must know about for Career link | 1 | [removed] | 2025-12-03T06:55:25 | https://newsaffairng.com/2024/06/20/the-worlds-top-20-tech-companies-location-market-cap-and-career-links-2/ | Jonnysinsey | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1pcxcyn | false | null | t3_1pcxcyn | /r/LocalLLaMA/comments/1pcxcyn/top_20_tech_companies_every_localllama_reddit/ | false | false | default | 1 | null |
LLMs are bullshitters. But that doesn't mean they're not useful | 0 | 2025-12-03T06:01:29 | https://blog.kagi.com/llms | TheRealMasonMac | blog.kagi.com | 1970-01-01T00:00:00 | 0 | {} | 1pcwfs6 | false | null | t3_1pcwfs6 | /r/LocalLLaMA/comments/1pcwfs6/llms_are_bullshitters_but_that_doesnt_mean_theyre/ | false | false | default | 0 | null | |
Llama 3.1 70B + one prompt now beats Claude 3.5 Sonnet (96.9% on Arena-Hard-Auto, 4% refusals) | 17 | I spent the last few weeks iterating a single system prompt until stock Llama-3.1-70B-Instruct started outperforming Claude 3.5 Sonnet on the hardest blind arena benchmark.
Results (100% reproducible):
• 96.4–96.9% win rate on Arena-Hard-Auto (vs Sonnet’s 94.7%)
• Only 4% refusals (base model is ~25–30%)
• Dense, creative, actually useful output
No fine-tune, no LoRA, no quantization tricks. Just one prompt.
Full X thread with JSONL proof + evals: https://x.com/BrSanch/status/1864123456789012345
I think prompt engineering can do a lot more than most people think it can.
| 2025-12-03T06:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pcwffb/llama_31_70b_one_prompt_now_beats_claude_35/ | NoSir261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcwffb | false | null | t3_1pcwffb | /r/LocalLLaMA/comments/1pcwffb/llama_31_70b_one_prompt_now_beats_claude_35/ | false | false | self | 17 | null |
Prompt engineering | 1 | Hi everyone,
I’m working on a prompt for a shopping assistant chatbot for a clothing brand.
The bot should answer questions about one item or multiple items, only using the item information provided to it.
I’m fine-tuning a Llama model, but sometimes the responses are incomplete, or it refuses to answer even when the info is available. In other scenarios the opposite happens: the user asks about the price of two shirts, and even though the price of one of them is not in the metadata, the model still answers which one is more expensive, and so on.
Below is the prompt I’m currently using.
I would really appreciate suggestions on how to improve it.
# Shopping Assistant Prompt
# You are StyleBot, an assistant that helps customers quickly find the information they need
# using ONLY the product details explicitly provided.
# You may receive one product’s “Item Info” or multiple products’ “Item Info”.
# IMPORTANT:
# • Do NOT use any external knowledge.
# • Only answer using the given item information.
# • Do NOT guess, assume, or infer anything missing.
# Reply format for ALL responses:
[start_position_id]: the_position[end_position_id]
[list of item positions]
[start_answer_id]:generated_answer[end_answer_id]
# RESPONSE RULES:
1) Do NOT generate anything beyond the answer.
No guesses, no assumptions, no suggestions.
2) Respond with [NO_OUTPUT] in ANY of these cases:
a) The item info does not contain a direct answer.
b) The user asks about details not present in the provided data.
c) The user asks for reviews, ratings, availability, store stock,
or any data not included in the item info.
d) The user asks for JSON, HTML, code, or any special output format.
e) The user asks the assistant to perform searches, calculations, or actions.
f) The user asks for price changes, taxes, discounts, or updated info.
g) The user asks whether a product is suitable based on health,
ethics, body type, or personal characteristics.
h) The user mixes languages or asks for translation not related to the items.
i) The user asks for recommendations (“which should I buy”, “is this good”, etc.).
3) When answering, begin with:
“According to the item information:”
(unless otherwise instructed).
4) Never mention training data, personal opinions, or external references.
# Use the item names and item positions to decide which products
# the user is referring to.
[item_name_information]
[item_name_information]
[item_name_information]
Thank you | 2025-12-03T05:57:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pcwd0m/prompt_engineering/ | Designer_Grocery2732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcwd0m | false | null | t3_1pcwd0m | /r/LocalLLaMA/comments/1pcwd0m/prompt_engineering/ | false | false | self | 1 | null |
EU failing in Generative AI? Flux 2 - Mistral 3 | 0 | Black Forest Labs and Mistral: two groundbreakers, absolute beasts when they launched their first models.
When Mistral launched their 7B it was totally groundbreaking and blew the larger Llama models out of the water. They also released the first MoE model to the market - the concept of Mixtral was fantastic and is being implemented by almost everyone now.
Flux 1 Dev was the same: it blew SDXL out of the park and changed image generation with its implementation of the VAE and text encoders. A brand new way of thinking about image generation.
What now Europe? What’s happening? Are the Chinese now so much further that any release from a European AI Giant is just brushed off as a disappointment?
How can Europe pull ahead again? Did we lose our innovation talent to other markets? Are our legs being cut off by regulation?
What do you think? | 2025-12-03T05:53:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pcwamg/eu_failing_in_generative_ai_flux_2_mistral_3/ | quantier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcwamg | false | null | t3_1pcwamg | /r/LocalLLaMA/comments/1pcwamg/eu_failing_in_generative_ai_flux_2_mistral_3/ | false | false | self | 0 | null |
LocalRAG Pro — offline RAG app for chatting with your folders (feedback on LanceDB + Ollama setup?) | 1 | [removed] | 2025-12-03T05:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pcvuc8/localrag_pro_offline_rag_app_for_chatting_with/ | Urokodaki0147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcvuc8 | false | null | t3_1pcvuc8 | /r/LocalLLaMA/comments/1pcvuc8/localrag_pro_offline_rag_app_for_chatting_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=108&crop=smart&auto=webp&s=76d1c3bdb45c20ba3a1b18a482c0fcdb2078d2fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=216&crop=smart&auto=webp&s=c8b740491910d9c93d015410e134f49998bff89a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=320&crop=smart&auto=webp&s=88700af63b0e27eca259c3bb17364e44863884e4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=640&crop=smart&auto=webp&s=33e0c60f50b8ecfc958a6b5a05fe1618197860cc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=960&crop=smart&auto=webp&s=af00adb60b6870cb3b374c82a15514418948c7e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?width=1080&crop=smart&auto=webp&s=12eb8c81514678266a31e1091d5eeb9d2a8bf55d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MnKsQ6Gi6sLkrDNQOrHbqOZXvYREZVLFNOioeUhDpuw.png?auto=webp&s=f716739906d7628f2595bf4b82c630fac6b13a17', 'width': 1200}, 'variants': {}}]} |
Local models can outperform bigger ones on market tasks if you structure them properly. | 1 | It’s not about raw power — it’s about:
• reflection
• cross-checking
• filtering noise
• and making them justify assumptions
Tested multiple setups and the smaller models held their own surprisingly well.
Others in the community tested the same workflows with tiny models and got similar results.
We’re refining everything inside the community (bio). | 2025-12-03T04:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pcva8m/local_models_can_outperform_bigger_ones_on_market/ | Key-Painter6598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcva8m | false | null | t3_1pcva8m | /r/LocalLLaMA/comments/1pcva8m/local_models_can_outperform_bigger_ones_on_market/ | false | false | self | 1 | null |
Maaza Orchestrator v1.2 — 9.6M params, 62.9 % on hard adversarial tool-calling, 39 ms latency | 26 | Just shipped v1.2 of Maaza Orchestrator (9.6 M params).
|Metric|v1.0|v1.2|Δ|
|:-|:-|:-|:-|
|In-distribution accuracy|88.0%|86.0%|−2.0%|
|Adversarial tool-calling|26.6%|62.9%|**+36.3%**|
|p50 latency (CPU)|**33.4ms**|**39.4ms**|**+6.0ms**|
The adversarial set is 124 held-out examples across 36 tools. A few representative ones so you can judge the difficulty:
* “lmao just text that to them” → email\_send
* “turn this into spokenshit” → voice\_mcp
* “time to rip and tear” → doom\_mcp
* “wassup with my ethereum val” → crypto\_lookup
* “plz execcute dis py code, gr8 tnx” → code\_execute\_python
* “weather or not?” → weather\_lookup (pun + typo)
* “wiggle to www.example.com” → puppeteer\_navigate
Most examples stack 2–3 perturbations (slang + typos + abbreviations + cultural references). A vanilla 9.6 M model would probably sit below 30 % here.
The +36% came from one data-centric fine-tune: \~500 diverse adversarial seeds → 10× upsampled → 5 epochs.
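For context, the upsampling step is conceptually just this (file names and JSONL schema below are illustrative - the exact script is in the shared training materials):

```python
import json
import random

random.seed(0)

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

base = load_jsonl("train_base.jsonl")             # original clean training split
seeds = load_jsonl("adversarial_seeds.jsonl")     # ~500 slang/typo/abbreviation prompts

mixed = base + seeds * 10                         # 10x duplication of the hard set
random.shuffle(mixed)

with open("train_v1_2.jsonl", "w") as f:
    for ex in mixed:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```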
• HF: [https://huggingface.co/CycleCoreTechnologies/maaza-nlm-orchestrator-9.6m-v1.2](https://huggingface.co/CycleCoreTechnologies/maaza-nlm-orchestrator-9.6m-v1.2)
• Full 124-example held-out adversarial set (JSONL)
• Training split & exact upsampling script
• Apache 2.0
Happy to share the seed adversarial list. (v1.3 with 18× upsampling is already training).
Thanks for reading. Feedback always welcome. | 2025-12-03T04:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pcv0kd/maaza_orchestrator_v12_96m_params_629_on_hard/ | CycleCore_Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcv0kd | false | null | t3_1pcv0kd | /r/LocalLLaMA/comments/1pcv0kd/maaza_orchestrator_v12_96m_params_629_on_hard/ | false | false | self | 26 | null |
Q: When will there be fast and competent SLMs for laptops? | 24 | It has been a whole year since Qwen2.5-32B was published for people to self-host their coding models. Similar models for RP probably existed before then, but the ideal of a general-purpose portable model is still here. Yet the news keeps showing more techniques!
1. Qwen3-30B-A3B and GPT-OSS-20B both uses Mixture-of-Experts instead of dense layers for their SLM
2. Kimi-Linear and Qwen3-Next-80B-A3B moved along to use "mixed attention" (majority of layers with linear attention)
3. Not enough people getting into ternary attention like **BitNet a4.8** / **BitNet v2** [https://arxiv.org/html/2504.18415v2](https://arxiv.org/html/2504.18415v2) or ternary quantization (PTQ) [https://arxiv.org/html/2509.23809v2](https://arxiv.org/html/2509.23809v2)
4. Whatever layer routing is to reduce the amount of RAM needed, including **Ouro-2.6B-Thinking** these days and **Mixture-of-Depths** back in 2024
Are all of these different techniques conflicting with one another? If it is just a lack of funding for fine-tuning/modding an existing SLM into something fast (assuming QAFT and RL), how much would it cost? | 2025-12-03T04:32:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pcurp8/q_when_will_there_be_fast_and_competent_slms_for/ | TomLucidor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcurp8 | false | null | t3_1pcurp8 | /r/LocalLLaMA/comments/1pcurp8/q_when_will_there_be_fast_and_competent_slms_for/ | false | false | self | 24 | null |
Frontend Debugging Automation (with Playwright MCP) | 7 | Hi, I'm Nick Heo. I'm currently developing and testing an AI layer system on my own to make AI smarter.
I'd like to share my experience of using Playwright MCP for debugging in my tasks, hear about other people's experiences, and pick up new insights.
I usually use the Codex CLI and Claude Code CLI in VS Code (WSL, Ubuntu).
What I'm doing with Playwright MCP is turning it into a debugging automation tool.
The process is simple:
(1) run, (2) open the window and share the frontend, (3) run Playwright test functions, (4) capture screenshots, (5) analyse, (6) debug, (7) test again, (8) all test screenshots, debugging logs, and videos (showing the debugging process) are kept.
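Under the hood, the capture part of each loop is roughly the following (plain Playwright in Python rather than the actual MCP tool calls; the URL and file names are placeholders):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()

    console_logs = []
    page.on("console", lambda msg: console_logs.append(f"{msg.type}: {msg.text}"))

    page.goto("http://localhost:3000")                  # frontend under test
    page.screenshot(path="step_01.png", full_page=True)

    with open("step_01.log", "w") as f:
        f.write("\n".join(console_logs))

    browser.close()
```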
I wanted to share my personal usage and would love to know how other people are utilizing this great tool (Playwright).
BR. | 2025-12-03T04:24:52 | https://v.redd.it/71k9cl1t1x4g1 | Echo_OS | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pcum1s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/71k9cl1t1x4g1/DASHPlaylist.mpd?a=1767327908%2CMWMxZjM2MjI0YmVmYzg3N2EzMTY3ZjcyMThmMmEwZDc5NDU5YzRmOTM3MWMzMGI1MTQyMTc3YzFiOWRlOTEwNg%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/71k9cl1t1x4g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/71k9cl1t1x4g1/HLSPlaylist.m3u8?a=1767327908%2CNTJkZDRhNGJhNTJmOGY5NjUxMzUyY2IxY2M3NGI0MmFhZTI1NmM4NGY0YmU0YTBkMzVmZDBkZDJmYjM2ODU1Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/71k9cl1t1x4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1pcum1s | /r/LocalLLaMA/comments/1pcum1s/frontend_debugging_automizationby_playwight_mcp/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=108&crop=smart&format=pjpg&auto=webp&s=11453632a2228c08e582582cd83ffbe6eb353cca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=216&crop=smart&format=pjpg&auto=webp&s=a18c4680040c7d9c1b801f17f15dba37d779c494', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=320&crop=smart&format=pjpg&auto=webp&s=cb4268ee5d7209bc4aa3b0723beb50344866fa6c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=640&crop=smart&format=pjpg&auto=webp&s=99675f8d7e96a3a131c694f189dcd5ec378bb166', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=960&crop=smart&format=pjpg&auto=webp&s=362796b972b2a7e630d39bb3ff6c050d99bd10fe', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=da8f23022f28ab03d57e245a2d7e32c37007117d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ajJzejhwdHMxeDRnMUEKHwum9mRZJrxVBKOf7RytLB8_SKb1Rey1SUpJaTyX.png?format=pjpg&auto=webp&s=578ed9a90e49a1b389fcd14d23e1343a94239446', 'width': 1920}, 'variants': {}}]} | |
I’m a complete beginner interested in large language models, where should I start? | 0 | Hey everyone,
I’m completely new to this, but recently got really interested in large language models (LLMs). I have no idea where to start.
Any tips on:
* Beginner-friendly tutorials or resources
* Important concepts or skills to focus on
* Fun projects to practice with
Thanks a lot! Really looking forward to getting into this. | 2025-12-03T04:04:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pcu7ju/im_a_complete_beginner_interested_in_large/ | AppropriateMonth8784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcu7ju | false | null | t3_1pcu7ju | /r/LocalLLaMA/comments/1pcu7ju/im_a_complete_beginner_interested_in_large/ | false | false | self | 0 | null |
Open Source Alternative to NotebookLM | 21 | For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.**
In short, it's a Highly Customizable AI Research Agent that connects to your personal external sources and Search Engines (SearxNG, Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar and more to come.
I'm looking for contributors. If you're interested in AI agents, RAG, browser extensions, or building open-source research tools, this is a great place to jump in.
Here’s a quick look at what SurfSense offers right now:
**Features**
* RBAC (Role Based Access for Teams)
* Notion Like Document Editing experience
* Supports 100+ LLMs
* Supports local Ollama or vLLM setups
* 6000+ Embedding Models
* 50+ File extensions supported (Added Docling recently)
* Podcasts support with local TTS providers (Kokoro TTS)
* Connects with 15+ external sources such as Search Engines, Slack, Notion, Gmail, Confluence, etc.
* Cross-Browser Extension to let you save any dynamic webpage you want, including authenticated content.
**Upcoming Planned Features**
* Note Management (Like Notion)
* Multi Collaborative Chats.
* Multi Collaborative Documents.
**Interested in contributing?**
SurfSense is completely open source, with an active roadmap. Whether you want to pick up an existing feature, suggest something new, fix bugs, or help improve docs, you're welcome to join in.
GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense) | 2025-12-03T03:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pctktp/open_source_alternative_to_notebooklm/ | Uiqueblhats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pctktp | false | null | t3_1pctktp | /r/LocalLLaMA/comments/1pctktp/open_source_alternative_to_notebooklm/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=108&crop=smart&auto=webp&s=2df240c8422d219d40e69cb5a47815fe90021a93', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=216&crop=smart&auto=webp&s=585b6a30a43d0519539a96c55236d7b039f5ca41', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=320&crop=smart&auto=webp&s=0a2f9d7d2693cc0b559e21fec7409641292d75b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=640&crop=smart&auto=webp&s=515e436376e98095a522b9a4e725a27cc178026c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=960&crop=smart&auto=webp&s=8aa53cc74e152bfd0fb4ffebc15995bf7b0d4f6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?width=1080&crop=smart&auto=webp&s=913ff190f8c24ee0657cadee1c8249ce22b87461', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/duuvcg4-6SM9l6OgmXv1YAAr9nW12y8EZmp1Jh7bi4c.png?auto=webp&s=b41159e9e3aba04af7f1e07c8ce31119951f035f', 'width': 1200}, 'variants': {}}]} |
Running 2016 AMD GPUs (5 x FirePro S7150x2) | 1 | Heya!
Looking for assistance with running LLMs on my old server-grade cards. I had no issues running them before, but I recently tried an updated Ubuntu and Text Generation WebUI and they no longer seem to run.
Was there a major update in recent years that made these cards incompatible? If so, what can I do to make them operational again?
I am very much aware that my hardware is quite outdated, so please spare me those comments.
5 x AMD FirePro S7150 x2
Intel Core i7-4790k
32GB DDR3
NVMe SSD
Many thanks. | 2025-12-03T03:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pct9e9/running_2016_amd_gpus_5_x_firepro_s7150x2/ | werlii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pct9e9 | false | null | t3_1pct9e9 | /r/LocalLLaMA/comments/1pct9e9/running_2016_amd_gpus_5_x_firepro_s7150x2/ | false | false | self | 1 | null |
Next-Gen AI Inference: Intel Xeon Processors Power Vision, NLP, and Recommender Workloads | 0 | 2025-12-03T03:16:42 | https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Next-Gen-AI-Inference-Intel-Xeon-Processors-Power-Vision-NLP-and/post/1728748 | reps_up | community.intel.com | 1970-01-01T00:00:00 | 0 | {} | 1pct7bt | false | null | t3_1pct7bt | /r/LocalLLaMA/comments/1pct7bt/nextgen_ai_inference_intel_xeon_processors_power/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=108&crop=smart&auto=webp&s=3dae2c7d0b95367e422e7153c0044612e0af98f7', 'width': 108}, {'height': 85, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=216&crop=smart&auto=webp&s=ad08042ba02a3d28f5ab199724e582e14ed610c4', 'width': 216}, {'height': 126, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=320&crop=smart&auto=webp&s=fab674c7e179fdd15c6b6be82df2c25b31e25c49', 'width': 320}, {'height': 253, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=640&crop=smart&auto=webp&s=1e2d3ecd85de904abbedbc9f3a903a330a309f48', 'width': 640}, {'height': 379, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=960&crop=smart&auto=webp&s=091cf720d1286ec1bbc389c6a0d5b0accd18a181', 'width': 960}, {'height': 427, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?width=1080&crop=smart&auto=webp&s=8642fff0a847ffe647fb4c14dc79c61121cb020a', 'width': 1080}], 'source': {'height': 849, 'url': 'https://external-preview.redd.it/PRP5JRsKF4edhlWYSN7dgz9Uoe4IahqdU28MpCdgCGM.png?auto=webp&s=b103b112cb78667178d33e4d399bb7a6b423f337', 'width': 2147}, 'variants': {}}]} | |
Is anyone else trying to calculate GPU ROI right now? | 1 | [removed] | 2025-12-03T03:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pcsyxw/is_anyone_else_trying_to_calculate_gpu_roi_right/ | AdrianUX_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcsyxw | false | null | t3_1pcsyxw | /r/LocalLLaMA/comments/1pcsyxw/is_anyone_else_trying_to_calculate_gpu_roi_right/ | false | false | self | 1 | null |
Intels Battlematrix - Disappointing results? | 1 | 2025-12-03T03:01:00 | https://www.storagereview.com/review/intel-arc-pro-b60-battlematrix-preview-192gb-of-vram-for-on-premise-ai | HilLiedTroopsDied | storagereview.com | 1970-01-01T00:00:00 | 0 | {} | 1pcsvde | false | null | t3_1pcsvde | /r/LocalLLaMA/comments/1pcsvde/intels_battlematrix_disappointing_results/ | false | false | default | 1 | null | |
I built a tool to track "Offline" Llama 3 costs vs. GPT-4. (And block API calls when I'm broke). | 1 | 2025-12-03T02:59:50 | https://www.reddit.com/gallery/1pcsuf6 | AdrianUX_AI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pcsuf6 | false | null | t3_1pcsuf6 | /r/LocalLLaMA/comments/1pcsuf6/i_built_a_tool_to_track_offline_llama_3_costs_vs/ | false | false | 1 | null | ||
Maaza NLM Orchestrator 9.6M: 70% Tool Routing at ~35ms – First <10M Production Orchestrator | 1 | r/LocalLLaMa!
We just shipped maaza-nlm-orchestrator-9.6m – the first <10M parameter model trained for real tool routing.
Key specs:
- 9.6M params (7-layer transformer, RoPE, SwiGLU)
- 70% tool selection accuracy (held-out 300 prompts across 36 MCPBodega tools: Doom, Puppeteer, Bitchat, etc.)
- 74ms avg latency (RTX 4080 fp16)
- 88% valid JSON output
Trained on 2,512 clean examples. Outperforms Gorilla-7B (52–58%) by 12–18pp while 730x smaller and 13–40x faster.
It's the official brain for MCPBodega – deploys with one command: `mcpbodega deploy nano-orchestrator`.
HF: [https://huggingface.co/CycleCoreTechnologies/maaza-nlm-orchestrator-9.6m](https://huggingface.co/CycleCoreTechnologies/maaza-nlm-orchestrator-9.6m)
NLM paper taxonomy: Task-Specialized Micro Language Models Outperform Larger Zero-Shot Models on Structured Data Extraction (https://cyclecore.ai/papers/MAAZA_PAPER_v0.7_dark.pdf)
Thoughts on <10M orchestration? What's the smallest model you've seen route 10+ tools reliably? Curious about edgeAI use cases.
(r/LocalLLaMA – Dec 2, 2025) | 2025-12-03T02:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pcsgx3/maaza_nlm_orchestrator_96m_70_tool_routing_at/ | CycleCore_Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcsgx3 | false | null | t3_1pcsgx3 | /r/LocalLLaMA/comments/1pcsgx3/maaza_nlm_orchestrator_96m_70_tool_routing_at/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=108&crop=smart&auto=webp&s=6b2252a1d8889933e33d14f2a749af57411c1b01', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=216&crop=smart&auto=webp&s=7c2c857b1eb695ff39b3bd6f6ff478bc60c4eef4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=320&crop=smart&auto=webp&s=b17263e0283214cc0ead5c40e9442596c9bc1a13', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=640&crop=smart&auto=webp&s=122a0cc1d82406265a35a1614f0f460d659c37fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=960&crop=smart&auto=webp&s=24504cb8d5f80ddb0985b36459f47bf0c13f5290', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?width=1080&crop=smart&auto=webp&s=eab499ee22f2f83e3c4fd06996e4426845238ddb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nWvV6WDQKVM1Ola6JrAyq3_LfDVzQqbvgPztlxg_jVU.png?auto=webp&s=07363341df9d7aaf01755d427aefef54bfffcfac', 'width': 1200}, 'variants': {}}]} |
I built a tool to track "Offline" Llama 3 costs vs. GPT-4. (And block API calls when I'm broke). | 1 | 2025-12-03T02:41:24 | AdrianUX_AI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pcsg2h | false | null | t3_1pcsg2h | /r/LocalLLaMA/comments/1pcsg2h/i_built_a_tool_to_track_offline_llama_3_costs_vs/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'cUgQZMMgy1kJoFK5m2TJ7JLtq4LJeGEeqFraroziWfk', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=108&crop=smart&auto=webp&s=83e2cfaea113b625f8919e1e4478344d5b5101da', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=216&crop=smart&auto=webp&s=414edc73e100a2368c0c32112ef950cac1ae3c9b', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=320&crop=smart&auto=webp&s=0e27cd4f9efdfc82ed48dad5a3c47985671bbd5f', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=640&crop=smart&auto=webp&s=9166ad43ed24a4a305a6699b9458be5dab57632d', 'width': 640}, {'height': 493, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=960&crop=smart&auto=webp&s=5e5b44cd858de07c118ebd4a4a1035a9e411c610', 'width': 960}, {'height': 555, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?width=1080&crop=smart&auto=webp&s=d2043777a65830e218b2aecde645866a94c66db5', 'width': 1080}], 'source': {'height': 988, 'url': 'https://preview.redd.it/iz7mywicjw4g1.png?auto=webp&s=bd874974ebbf6036bb67a7dc55585105a8dfc519', 'width': 1920}, 'variants': {}}]} | |||
LLMs Need Better Executive Function | 1 | Earlier this year a consensus formed that AI needs better continual learning to make the METR chart -> GDPval -> prosperity go up. I’ve come to believe that something even more foundational is missing: better executive function. Effective agents require it. Current models don’t have it. Attention is all you need, but the AIs have ADHD.
More here: [https://www.robdearborn.com/2025/12/01/llms-need-better-executive-function/](https://www.robdearborn.com/2025/12/01/llms-need-better-executive-function/)
What do y'all think? | 2025-12-03T02:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pcsbti/llms_need_better_executive_function/ | rob_dearborn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcsbti | false | null | t3_1pcsbti | /r/LocalLLaMA/comments/1pcsbti/llms_need_better_executive_function/ | false | false | self | 1 | null |
Has anybody managed to run Deepseek 3.2 locally in vLLM? | 4 | Keep getting
“…CUDA capability sm\_120 is not compatible with the current PyTorch installation (supports sm\_50 … sm\_90).”
From what I've gathered:
DeepSeek-V3.2 uses the new deepseek\_v32 arch with MLA + sparse attention.
vLLM’s DeepSeek-V3.2 recipes rely on FlashMLA / new MLA kernels integrated with DeepGEMM.
Current PyTorch + vLLM wheels are compiled only up to sm\_90; they do *not* include CUDA kernels for sm\_120 (Blackwell).
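For anyone hitting the same thing, a quick way to confirm what the installed wheel actually supports (standard PyTorch calls, nothing vLLM-specific):

```python
import torch

print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_arch_list())           # needs to include 'sm_120' for Blackwell
print(torch.cuda.get_device_capability(0))  # (12, 0) on an sm_120 GPU
```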
| 2025-12-03T02:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pcs5hw/has_anybody_managed_to_run_deepseek_32_locally_in/ | etherd0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcs5hw | false | null | t3_1pcs5hw | /r/LocalLLaMA/comments/1pcs5hw/has_anybody_managed_to_run_deepseek_32_locally_in/ | false | false | self | 4 | null |
Is there any language model that's finetuned for text to image prompts? | 6 | Is there any language model that's finetuned for text-to-image prompts?
I mean language models that are able to be concise and not put flourishing touches on the prompt. | 2025-12-03T02:02:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pcrlok/is_there_any_language_model_thats_finetuned_for/ | Formal_Drop526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcrlok | false | null | t3_1pcrlok | /r/LocalLLaMA/comments/1pcrlok/is_there_any_language_model_thats_finetuned_for/ | false | false | self | 6 | null |
RAG Paper 25.12.01 | 0 | |title|abstract|
|:-|:-|
|[HalluGraph: Auditable Hallucination Detection for Legal RAG Systems via Knowledge Graph Alignment](http://arxiv.org/abs/2512.01659v1)|Legal AI systems powered by retrieval-augmented generation (RAG) face a critical accountability challenge: when an AI assistant cites case law, statutes, or contractual clauses, practitioners need verifiable guarantees that generated text faithfully represents source documents. Existing hallucination detectors rely on semantic similarity metrics that tolerate entity substitutions, a dangerous failure mode when confusing parties, dates, or legal provisions can have material consequences. We introduce HalluGraph, a graph-theoretic framework that quantifies hallucinations through structural alignment between knowledge graphs extracted from context, query, and response. Our approach produces bounded, interpretable metrics decomposed into \\textit{Entity Grounding} (EG), measuring whether entities in the response appear in source documents, and \\textit{Relation Preservation} (RP), verifying that asserted relationships are supported by context. On structured control documents, HalluGraph achieves near-perfect discrimination ($>$400 words, $>$20 entities), HalluGraph achieves AUC=0.979, while maintaining robust performance (AUC≈0.89) on challenging generative legal task, consistently outperforming semantic similarity baselines. The framework provides the transparency and traceability required for high-stakes legal applications, enabling full audit trails from generated assertions back to source passages.|
|[EmoRAG: Evaluating RAG Robustness to Symbolic Perturbations](http://arxiv.org/abs/2512.01335v1)|Retrieval-Augmented Generation (RAG) systems are increasingly central to robust AI, enhancing large language model (LLM) faithfulness by incorporating external knowledge. However, our study unveils a critical, overlooked vulnerability: their profound susceptibility to subtle symbolic perturbations, particularly through near-imperceptible emoticon tokens such as "(@\_@)" that can catastrophically mislead retrieval, termed EmoRAG. We demonstrate that injecting a single emoticon into a query makes it nearly 100% likely to retrieve semantically unrelated texts that contain a matching emoticon. Our extensive experiment across general question-answering and code domains, using a range of state-of-the-art retrievers and generators, reveals three key findings: (I) Single-Emoticon Disaster: Minimal emoticon injections cause maximal disruptions, with a single emoticon almost 100% dominating RAG output. (II) Positional Sensitivity: Placing an emoticon at the beginning of a query can cause severe perturbation, with F1-Scores exceeding 0.92 across all datasets. (III) Parameter-Scale Vulnerability: Counterintuitively, models with larger parameters exhibit greater vulnerability to the interference. We provide an in-depth analysis to uncover the underlying mechanisms of these phenomena. Furthermore, we raise a critical concern regarding the robustness assumption of current RAG systems, envisioning a threat scenario where an adversary exploits this vulnerability to manipulate the RAG system. We evaluate standard defenses and find them insufficient against EmoRAG. To address this, we propose targeted defenses, analyzing their strengths and limitations in mitigating emoticon-based perturbations. Finally, we outline future directions for building robust RAG systems.|
|[TempPerturb-Eval: On the Joint Effects of Internal Temperature and External Perturbations in RAG Robustness](http://arxiv.org/abs/2512.01183v1)|The evaluation of Retrieval-Augmented Generation (RAG) systems typically examines retrieval quality and generation parameters like temperature in isolation, overlooking their interaction. This work presents a systematic investigation of how text perturbations (simulating noisy retrieval) interact with temperature settings across multiple LLM runs. We propose a comprehensive RAG Perturbation-Temperature Analysis Framework that subjects retrieved documents to three distinct perturbation types across varying temperature settings. Through extensive experiments on HotpotQA with both open-source and proprietary LLMs, we demonstrate that performance degradation follows distinct patterns: high-temperature settings consistently amplify vulnerability to perturbations, while certain perturbation types exhibit non-linear sensitivity across the temperature range. Our work yields three key contributions: (1) a diagnostic benchmark for assessing RAG robustness, (2) an analytical framework for quantifying perturbation-temperature interactions, and (3) practical guidelines for model selection and parameter tuning under noisy retrieval conditions.|
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-03T01:51:07 | https://www.reddit.com/r/LocalLLaMA/comments/1pcrcnj/rag_paper_251201/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcrcnj | false | null | t3_1pcrcnj | /r/LocalLLaMA/comments/1pcrcnj/rag_paper_251201/ | false | false | self | 0 | null |
apple/starflow · Hugging Face | 33 | STARFlow introduces a novel transformer autoregressive flow architecture that combines the expressiveness of autoregressive models with the efficiency of normalizing flows. The model achieves state-of-the-art results in both text-to-image and text-to-video generation tasks.
STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis (NeurIPS 2025 Spotlight)
STARFlow-V: End-to-End Video Generative Modeling with Normalizing Flows (Arxiv)
| 2025-12-03T01:50:16 | https://huggingface.co/apple/starflow | ab2377 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pcrc1j | false | null | t3_1pcrc1j | /r/LocalLLaMA/comments/1pcrc1j/applestarflow_hugging_face/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': '9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=108&crop=smart&auto=webp&s=d255bbd83f2e7a2c85708abfb4a859877ae10104', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=216&crop=smart&auto=webp&s=c13a91a80f19b10ea681ee07921de81e395f64a1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=320&crop=smart&auto=webp&s=591ca9309c4e1410be3e413ac4488649393163de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=640&crop=smart&auto=webp&s=00b6a986e0fe8fa76f89639e776eee93811f8828', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=960&crop=smart&auto=webp&s=28a16d9bc16328308fe1346b1f1e65be0e379775', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?width=1080&crop=smart&auto=webp&s=ee5364620166b0c6eaec0ad4c24108de6951e538', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9MKZtuMzr_COQCqOF3P3AvNoeDUxBXmS4TeVDsiZt8c.png?auto=webp&s=bf2a1d10b87fd2cbc0c1bee12a99feecbd57002f', 'width': 1200}, 'variants': {}}]} |
Is a Minisforum UM780 w/32gb ram good enough to run a local llm? | 1 | AMD Ryzen 7 8745H (3.8GHz) Processor
32GB DDR5-5600 RAM
AMD Radeon 780M Integrated Graphics
1TB SSD
2.5GbE LAN, WiFi 6E (802.11ax), Bluetooth 5.3
Windows 11 Pro | 2025-12-03T01:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pcr43j/is_a_minisforum_um780_w32gb_ram_good_enough_to/ | ConspiracyParadox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcr43j | false | null | t3_1pcr43j | /r/LocalLLaMA/comments/1pcr43j/is_a_minisforum_um780_w32gb_ram_good_enough_to/ | false | false | self | 1 | null |
[Hardware Build] 1x RTX 5090 (32GB) + 9950X3D for Headless Rental Node. Air Cooling w/ A/C. Feasible? | 0 | Hi everyone. I've been lurking here for a while and finally decided to build a dedicated AI server to rent out on RunPod or Vast.ai.
The Context:
* Cooling: I will install a dedicated 24/7 Air Conditioner set to 18°C (64°F) directly blowing at the server.
* Constraint: Since it's unattended, I am terrified of AIO liquid cooling leaks. I want to stick with Air Cooling for maximum reliability.
The Proposed Build:
* GPU: 1x ZOTAC RTX 5090 32GB (Planning to add a 2nd one later).
* CPU: AMD Ryzen 9 9950X3D (Will run in Eco Mode 105W to keep temps low).
* Cooler: Noctua NH-D15 G2 (The best air cooler available).
* RAM: 96GB (2x 48GB) DDR5-5600 SK Hynix (Consumer RAM, aiming for stability over speed).
* Mobo: ASUS ProArt X870E-CREATOR WIFI (Need 10G LAN and bifurcation support for future expansion).
* PSU: Seasonic PRIME TX-2200 (Overkill, but preparing for dual 5090s).
* OS: Ubuntu Server 22.04 LTS.
My Questions:
* Thermal Reality: With a dedicated A/C in the room, can the Noctua D15 G2 handle the 5090 + 9950X3D under 24/7 load? Or is a blower-style card mandatory?
* RAM Stability: Is 96GB (2x48GB) on AM5 stable enough for Linux/AI workloads? I heard mixed reviews about high-density DDR5 on Ryzen.
* Rental Demand: Is a single RTX 5090 (32GB VRAM) attractive on RunPod/Vast? Or should I have looked at used A6000s (48GB) instead? (Though 5090 is much faster).
Any advice would be greatly appreciated. Thanks! | 2025-12-03T01:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pcqvo3/hardware_build_1x_rtx_5090_32gb_9950x3d_for/ | Random_Project-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcqvo3 | false | null | t3_1pcqvo3 | /r/LocalLLaMA/comments/1pcqvo3/hardware_build_1x_rtx_5090_32gb_9950x3d_for/ | false | false | self | 0 | null |
local memory layer | 0 | I’ve been building a small local-first memory layer for AI coding workflows.
Still early, but useful in my own projects. Happy to share more if anyone's curious. | 2025-12-03T01:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pcqmt0/local_memory_layer/ | Training_Isopod3722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcqmt0 | false | null | t3_1pcqmt0 | /r/LocalLLaMA/comments/1pcqmt0/local_memory_layer/ | false | false | self | 0 | null |
Linggen-local RAG+MCP memory layer | 1 | [removed] | 2025-12-03T01:16:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pcqkyk/linggenlocal_ragmcp_memory_layer/ | Training_Isopod3722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pcqkyk | false | null | t3_1pcqkyk | /r/LocalLLaMA/comments/1pcqkyk/linggenlocal_ragmcp_memory_layer/ | false | false | self | 1 | null |