title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
MiniMax M2.2 Coming Soon. Confirmed by Head of Engineering @MiniMax_AI | 88 | 2026-01-16T02:29:49 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe3p8d | false | null | t3_1qe3p8d | /r/LocalLLaMA/comments/1qe3p8d/minimax_m22_coming_soon_confirmed_by_head_of/ | false | false | default | 88 | {'enabled': True, 'images': [{'id': 'r2y0g60chmdg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=108&crop=smart&auto=webp&s=38e1d578d9f5a4d17ccf890d1da5deb685e0f500', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=216&crop=smart&auto=webp&s=b86e9fc41671a4728519a450128edc418affe4a7', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=320&crop=smart&auto=webp&s=2ef7dfdd019e43ae4c04e347e0ca8c8c685cc801', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=640&crop=smart&auto=webp&s=8b4997ba8fe2fc1231baad6317e2eca9f7350374', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=960&crop=smart&auto=webp&s=52e933b183dfc07ff27b1930497fd349fe187a45', 'width': 960}, {'height': 567, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?width=1080&crop=smart&auto=webp&s=8baa9be63bada7041b7b1d5564ff1110524f1290', 'width': 1080}], 'source': {'height': 630, 'url': 'https://preview.redd.it/r2y0g60chmdg1.jpeg?auto=webp&s=f266de26bc91dbdc254e081b295b5046acdb15e2', 'width': 1200}, 'variants': {}}]} | ||
RAG Paper 26.1.12 | 4 | 1. [Benchmarking Small Language Models and Small Reasoning Language Models on System Log Severity Classification](http://arxiv.org/abs/2601.07790v1)
2. [Is Agentic RAG worth it? An experimental comparison of RAG approaches](http://arxiv.org/abs/2601.07711v1)
3. [From RAG to Agentic RAG for Faithful Islamic Question Answering](http://arxiv.org/abs/2601.07528v1)
4. [FROAV: A Framework for RAG Observation and Agent Verification - Lowering the Barrier to LLM Agent Research](http://arxiv.org/abs/2601.07504v1)
5. [BayesRAG: Probabilistic Mutual Evidence Corroboration for Multimodal Retrieval-Augmented Generation](http://arxiv.org/abs/2601.07329v1)
6. [ActiShade: Activating Overshadowed Knowledge to Guide Multi-Hop Reasoning in Large Language Models](http://arxiv.org/abs/2601.07260v1)
7. [Lost in the Noise: How Reasoning Models Fail with Contextual Distractors](http://arxiv.org/abs/2601.07226v1)
8. [Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG](http://arxiv.org/abs/2601.07192v1)
**Collected by OpenBMB, transferred by** [**https://www.ragview.ai/components/arena**](https://www.ragview.ai/components/arena) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2026-01-16T02:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qe3l63/rag_paper_26112/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe3l63 | false | null | t3_1qe3l63 | /r/LocalLLaMA/comments/1qe3l63/rag_paper_26112/ | false | false | self | 4 | null |
Gaming/AI PC build | 3 | ERROR: type should be string, got "https://preview.redd.it/0jl08mkhamdg1.jpg?width=3024&format=pjpg&auto=webp&s=401fb203d4af855ac1a7308b2581e6a3bd31b029\n\nThis is my first attempt at a clean build where everything fits in the case. It's an Intel Ultra 9 285k with a 420mm AIO (front), an MSI Suprim LC 5090 with a 360mm AIO (top), and an RTX Pro 4500 32GB. 1300W platinum power supply and Aorus Master. 192GB RAM (4x48GB). Samsung 9100 Pro 8TB NVMe PCIe5. Intake fans on the back. Phanteks case was super easy to work with. I used Gemini Thinking to check compatibility on all of the parts before I ordered, and everything snapped together in a few hours.\n\nIt's nice to leave a model loaded in the Pro GPU, and leave the consumer GPU dedicated for video and games. No need to unload the model when you want to do something else. The Pro GPU idles at 2-3 watts with the model loaded, and spikes up to 150W when you feed it a prompt. The consumer GPU idles at 35W just to run the display, and 29C with the cooler running silently.\n\nI had wanted a used L4, L40S, or A100 40GB but didn't trust the eBay rebuilds from China that were 50% cheaper than US/Canada items. The RTX Pro 4500 was a better choice for me.\n\nRuns GPT OSS 120B about 30 tok/sec (doesn't fit) and GPT OSS 20B at >200 tok/sec.\n\nhttps://preview.redd.it/ck7tq2bjamdg1.jpg?width=4284&format=pjpg&auto=webp&s=00e0028c7ade881c208d2a87f8ee6ac46a4c553b\n\n" | 2026-01-16T01:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qe2ugl/gamingai_pc_build/ | gwestr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe2ugl | false | null | t3_1qe2ugl | /r/LocalLLaMA/comments/1qe2ugl/gamingai_pc_build/ | false | false | 3 | null | |
Gaming/AI PC | 1 | This is my first attempt at a clean build where everything fits in the case. It's an Intel Ultra 9 285k with a 420mm AIO (front), an MSI Suprim LC 5090 with a 360mm AIO (top), and an RTX Pro 4500 32GB. 1300W platinum power supply and Aorus Master. 192GB RAM (4x48GB). Samsung 9100 Pro 8TB NVMe PCIe5. Intake fans on the back. Phanteks case was super easy to work with. I used Gemini Thinking to check compatibility on all of the parts before I ordered, and everything snapped together in a few hours.
It's nice to leave a model loaded in the Pro GPU, and leave the consumer GPU dedicated for video and games. No need to unload the model when you want to do something else. The Pro GPU idles at 2-3 watts with the model loaded, and spikes up to 150W when you feed it a prompt. The consumer GPU idles at 35W just to run the display, and 29C with the cooler running silently.
I had wanted a used L4, L40S, or A100 40GB but didn't trust the eBay rebuilds from China that were 50% cheaper than US/Canada items. The RTX Pro 4500 was a better choice for me.
https://preview.redd.it/yt550igc9mdg1.jpg?width=3024&format=pjpg&auto=webp&s=fc9d5828aa9b8a3d76af55fbbe314f9aa06b3197
https://preview.redd.it/7thz1fzc9mdg1.jpg?width=4284&format=pjpg&auto=webp&s=9231b306aa5c381cae1033c9a56f2c7954d97d1f
| 2026-01-16T01:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qe2ro0/gamingai_pc/ | gwestr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe2ro0 | false | null | t3_1qe2ro0 | /r/LocalLLaMA/comments/1qe2ro0/gamingai_pc/ | false | false | 1 | null | |
GEPA Prompt Optimization in Vercel's AI SDK | 1 | Been building with AI SDK for a while now, and it's easily the best TypeScript ADK I've worked with. Coming from building agents in Python I often find myself wishing prompt optimization algorithms like [GEPA](https://github.com/gepa-ai/gepa) were more accessible in the TypeScript ecosystem.
For the uninitiated: GEPA is a Genetic-Pareto algorithm that finds optimal prompts by running your system through iterations and letting an LLM explore the search space for winning candidates. The catch? It was built in Python, so TypeScript developers have been left out in the cold. And if you want full workflow optimization (not just single prompt tuning), your options have been basically nonexistent.
I just published `gepa-rpc` to fix that. It's has a very simple API that works directly with AI SDK so no need to learn yet another opinionated framework. You can find the tutorial here and some examples here: [https://github.com/modaic-ai/gepa-rpc/tree/main](https://github.com/modaic-ai/gepa-rpc/tree/main)
Also looking to expand the package to support more languages and frameworks. Lmk which you would like to see. | 2026-01-16T01:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qe2n8g/gepa_prompt_optimization_in_vercels_ai_sdk/ | Disneyskidney | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe2n8g | false | null | t3_1qe2n8g | /r/LocalLLaMA/comments/1qe2n8g/gepa_prompt_optimization_in_vercels_ai_sdk/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=108&crop=smart&auto=webp&s=0752779e804efaeae410ad23aa70bc84daaba5d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=216&crop=smart&auto=webp&s=0c03d3f29c914bb23b6f4b7ed1b848b92032729f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=320&crop=smart&auto=webp&s=56830c64089619f4a6847eb18bd1c40ea8d9cf8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=640&crop=smart&auto=webp&s=a22c7309dd58699622cbce230ae07a144449149e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=960&crop=smart&auto=webp&s=c2364b0cf31eac7de627e603b8344c6da0fbf8f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?width=1080&crop=smart&auto=webp&s=dd2ec3f60e5957b4bf5011e8d49ebe66fa119b8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YrgCvoRzW9UcvXCv934-6V__MzWm9zy-82MmWmMTxHc.png?auto=webp&s=1b4c7557cdf36f1cb66bfd8368ed00a03d34e016', 'width': 1200}, 'variants': {}}]} |
My story of underestimating /r/LocalLLaMA's thirst for VRAM | 1,182 | 2026-01-16T01:36:54 | EmPips | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe2i88 | false | null | t3_1qe2i88 | /r/LocalLLaMA/comments/1qe2i88/my_story_of_underestimating_rlocalllamas_thirst/ | false | false | default | 1,182 | {'enabled': True, 'images': [{'id': 'lwod7dtv7mdg1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=108&crop=smart&auto=webp&s=24d1bb18362abc0eb13afcb8019424f12e8fb894', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=216&crop=smart&auto=webp&s=de488d84130e33f7dca161fc31c32e8a11ed7269', 'width': 216}, {'height': 346, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=320&crop=smart&auto=webp&s=704b6e50fa3a4970e27f745b227589d59102dffc', 'width': 320}, {'height': 693, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=640&crop=smart&auto=webp&s=660b9563f79c6dad7bb50c952f2b76bf9062955b', 'width': 640}, {'height': 1039, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=960&crop=smart&auto=webp&s=eaee81d021c47553592321875f86514411727f31', 'width': 960}, {'height': 1169, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?width=1080&crop=smart&auto=webp&s=200c80d5116e12292e8925a060a08ff1bd32ab9a', 'width': 1080}], 'source': {'height': 1434, 'url': 'https://preview.redd.it/lwod7dtv7mdg1.jpeg?auto=webp&s=320fe989588b57d25d7dc1c2a324993bc1df4953', 'width': 1324}, 'variants': {}}]} | ||
Style Transfer: Swiss Brutalism Tests the model GLM-Image | 1 | [removed] | 2026-01-16T01:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qe1qg3/style_transfer_swiss_brutalism_tests_the_model/ | Impressive-Olive8372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe1qg3 | false | null | t3_1qe1qg3 | /r/LocalLLaMA/comments/1qe1qg3/style_transfer_swiss_brutalism_tests_the_model/ | false | false | 1 | null | |
Cursor For Data: We built a tool to connect LLMs and Agents to the entire user data and have row-level intelligence | 2 |
Modern AI tools use SQL/Code generation agents or RAG to access the user data to perform transformations. However, the drawback is that this doesn't provide row-level intelligence, especially in case of semantic intelligence is required to understand how to perform the transfromations on each row of the data.
We've released Datatune (https://github.com/vitalops/datatune) as a tool to let users connect entire user Data with LLMs and Agents, to help users access their entire data with just a prompt.
While building Agents for a customer who had large amounts of data, we saw that their Agent struggled with certain data transformation tasks, which would have performed better if LLMs had access to the full user data as well. We built Datatune as a first step, to solve this issue.
Datatune supports:
\- Diverse data backends such as Databases, DataFrames, etc.
\- Batch Processing of data to pass to LLMs + distributed computing using dask, for faster and efficient transformations, while also helping reduce cost and context length limit issues.
\- First order primitive data engineering operations such as Map, Filter, etc.
\- Chain Multiple transformations together.
\- Simplify user tasks with complex chained transformations using an Internal data engineering agent as a super orchestrator to split user prompt into sub prompts for the respective Map, Filter (primitive Agents), or code generating agents.
Next Steps:
\- Build an Embedding Layer to work in parallel with LLMs & Agents
\- Use Embedding Layer to build Semantic Deduplication, Tabular Querying, etc
Github : [https://github.com/vitalops/datatune](https://github.com/vitalops/datatune) | 2026-01-16T00:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qe1mx9/cursor_for_data_we_built_a_tool_to_connect_llms/ | metalvendetta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe1mx9 | false | null | t3_1qe1mx9 | /r/LocalLLaMA/comments/1qe1mx9/cursor_for_data_we_built_a_tool_to_connect_llms/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=108&crop=smart&auto=webp&s=672770ea95f504faef4ceea1a658a7400a9c7f57', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=216&crop=smart&auto=webp&s=4a04de66235514a08bcc091608008b6a39f77d14', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=320&crop=smart&auto=webp&s=08bb3e7a950cd226145a9b5738ae02d596404865', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=640&crop=smart&auto=webp&s=ad717a361aa80900a5ffd0d168b2a48eaca2afde', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=960&crop=smart&auto=webp&s=bb6b365699ffbb531e773796053b696b896b93c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?width=1080&crop=smart&auto=webp&s=5ec7899f23da0cf59f942ad8dc0c549dcbfeed07', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qiGAzMTAYAM2YgkU4ClBDPyL0bnZ4AXF2LRPDt43WZg.png?auto=webp&s=d31671b07324e365a3825028531f23b57b974321', 'width': 1200}, 'variants': {}}]} |
Is 5060Ti 16GB and 32GB DDR5 system ram enough to play with local AI for a total rookie? | 17 | For future proofing would it be better to get a secondary cheap GPU (like 3060) or another 32GB DDR5 RAM? | 2026-01-16T00:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qe1dec/is_5060ti_16gb_and_32gb_ddr5_system_ram_enough_to/ | danuser8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe1dec | false | null | t3_1qe1dec | /r/LocalLLaMA/comments/1qe1dec/is_5060ti_16gb_and_32gb_ddr5_system_ram_enough_to/ | false | false | self | 17 | null |
Need help: llama.cpp memory usage when using ctk/v on multi RTX 3090 setup | 2 | Owners of RTX 3090 rigs, may I ask you to test something like that with your setup:
\- llamacpp + a model that is not too small for your rig + -ctk & -ctv set to q4\_0 + as much context as possible + if possible increase -b & -ub + no use of GGML\_CUDA\_ENABLE\_UNIFIED\_MEMORY
\- a tool like opencode or claude code
\- ask directly a question like "explain with details the following file" on a file that requires several big batches in prompt processing (e.g 1k loc)
\- observing the memory usage when the agent reads the file to check if stays flat or there the usage increases gradually (my issue)
I've been told it may be due to the llama.cpp temporary buffers as the CUDA backend does not have kernels that can use q4\_0 directly for all batch sizes so it may need to be converted to FP16 (and same for q8\_0).
But the goal is more to see if that's a common thing or not. So thank you for any help!!
| 2026-01-16T00:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qe0o4x/need_help_llamacpp_memory_usage_when_using_ctkv/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe0o4x | false | null | t3_1qe0o4x | /r/LocalLLaMA/comments/1qe0o4x/need_help_llamacpp_memory_usage_when_using_ctkv/ | false | false | self | 2 | null |
Configure Open Web UI to connect to an intranet web server for information | 0 | Can anyone tell me how to configure opeb web ui to connect to an internal intranet web server so that it will include content from it in its chat responses? | 2026-01-16T00:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qe0nqt/configure_open_web_ui_to_connect_to_an_intranet/ | szutcxzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe0nqt | false | null | t3_1qe0nqt | /r/LocalLLaMA/comments/1qe0nqt/configure_open_web_ui_to_connect_to_an_intranet/ | false | false | self | 0 | null |
Blackbox for AI agents | 2 | Hey,
I been working on this and wanted to share, I call it Vouch its basically a "black box" for AI agents
**What I tried to solve**
If an agent messes up or gets compromised it can just delete the logs, There is no proof of what actually happened
**What I built**
Vouch sits between agent and its tools, before any action executes Vouch will logs it to cryptographically signed chain which cant be modified later
Checks if its risky action if its risky it blocks it and ask you first
>\[Your Agent\] → \[Vouch\] → \[Actual Tools\]
>↓
>\[Signed Ledger\]
This is quick example
# Install
go install github.com/slyt3/Vouch@latest
# Start it
vouch-proxy --upstream http://localhost:3000 --port 9999
# Defines what is risky in vouch-policy.yaml
policies:
- match_methods: ["db.drop", "stripe.charge"]
action: "stall" # Wait for human approval
so when agent tries something risky they will ask for approval
Agent wants to run: db.drop_table("users")
Approve? (y/n)
annoyed of agents doing irreversible stuff. so the cryptographic chain part is maybe overkill, but I like it and wanted to make sure even if somebody hacks the agent, they can fake the audit trail
https://preview.redd.it/1x8oahwaqldg1.jpg?width=1024&format=pjpg&auto=webp&s=816e58c4e9864d3defcdcfc303b7ac44775289fa
**If you want to try it:**
git clone https://github.com/slyt3/Vouch.git
cd Vouch
go build -o vouch-proxy ./main.go
./vouch-proxy --upstream http://localhost:3000 --port 9999
`vouch-policy.yaml` is in the repo. Works with Claude Desktop, AutoGPT, LangChain, or anything that uses MCP/JSON-RPC.
**Questions that I have**
* Is this actually useful or I am overthinking like always?
* does the "human approval" part even makes sense or it would it be to annoying in practice?
* what would you guys would love to see in this tool?
**GitHub**: [https://github.com/slyt3/Vouch](https://github.com/slyt3/Vouch)
Still very early but wanted to get feedback before going further.
Thankyou for your time guys and feedback
| 2026-01-16T00:06:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qe0fep/blackbox_for_ai_agents/ | Apart_Suggestion9191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe0fep | false | null | t3_1qe0fep | /r/LocalLLaMA/comments/1qe0fep/blackbox_for_ai_agents/ | false | false | 2 | null | |
Latest upgrade…A100 40 GB | 378 | Originally this was my gaming rig but I went ITX and basically bought a new computer. So I had the case, fans, AIO, 64 GB DDR5, motherboard, PSU, and 3080 (upgraded to 5070ti RIP). I was going to sell these parts, but I started running models on my 5070ti and eventually I wanted to start running larger models. I found a 3090 on eBay for $680, and 7950x for $350. I put that together with the parts and it’s been a great AI rig for me. I really didn’t plan on upgrading this for a while, especially now with the current price surges. Welp, I saw an A100 get listed for $1000 on eBay. The catch? Listed for parts, and the description just said “card reports CUDA error”. So I figured it was worth the risk (for me), I could’ve probably sold it for the price I paid. Well, I swapped out the 3080 and on the first boot it was recognized instantly by nvidia-smi. I was able to run and train models immediately. Nice. | 2026-01-16T00:03:21 | inserterikhere | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qe0cxc | false | null | t3_1qe0cxc | /r/LocalLLaMA/comments/1qe0cxc/latest_upgradea100_40_gb/ | false | false | default | 378 | {'enabled': True, 'images': [{'id': 'f66wnmearldg1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=108&crop=smart&auto=webp&s=5f0d41594827ec4cf5d2eab5b38fdba975339913', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=216&crop=smart&auto=webp&s=73bd8c14f3a8591eb4b3904ce6c8c1ed5a178331', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=320&crop=smart&auto=webp&s=12f7c9ed02545a5928cbdb746472f87a58c7ae88', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=640&crop=smart&auto=webp&s=703e9ae4724a0bb0a84546215e98301f06d28541', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=960&crop=smart&auto=webp&s=1a53f2efc57952fa1488a55cdae22dd5fa1e4240', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?width=1080&crop=smart&auto=webp&s=79b0a05467c946db266895e338156f6b31794a54', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/f66wnmearldg1.jpeg?auto=webp&s=b179e111c5c3738363c9208f887ea55fa9976fa4', 'width': 4284}, 'variants': {}}]} | |
Need hardware guidance to run local LLM | 2 | I run the it department for a small company that provides fiber to the home. I would love to experiment with using llm with RAG, maybe some LORA or QLORA level training to make an AI helper for our help desk techs and billing staff. I also would love to have something good to help with coding at home. I am looking at the DGX Spark, AMD Strix Halo, Mac Studio, or i do have an x870e motherboard, ryzen 9950x, 96gb of RAM, a good power supply and case available. I could put one or two R9700 Pros in it. I would love to be able to run models good enough to impress and prove the value of the system. Thinking GPT-OSS 120B? or similiar what is my best bang for the buck that gives usable performance and could be used for a nice proof of concept. | 2026-01-15T23:49:23 | https://www.reddit.com/r/LocalLLaMA/comments/1qe00dn/need_hardware_guidance_to_run_local_llm/ | rennocneb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qe00dn | false | null | t3_1qe00dn | /r/LocalLLaMA/comments/1qe00dn/need_hardware_guidance_to_run_local_llm/ | false | false | self | 2 | null |
Mi50 32gb cards | 0 | Bought a couple of mi50 32gb cards from alibaba early last year for about £100 each.
Never really got them running as I found Linux a total pain - eventually went 3090.
Wondering if these old mi50s are worth selling especially with the current ram crisis. Anyone know wha they are worth? | 2026-01-15T23:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qdzzs5/mi50_32gb_cards/ | PinkyPonk10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdzzs5 | false | null | t3_1qdzzs5 | /r/LocalLLaMA/comments/1qdzzs5/mi50_32gb_cards/ | false | false | self | 0 | null |
LG, SKT, Upstage advance in Korea’s sovereign AI project; Naver, NC dropped in 1st round | 3 | 2026-01-15T23:30:44 | https://m.theinvestor.co.kr/article/10656530 | snowfordessert | m.theinvestor.co.kr | 1970-01-01T00:00:00 | 0 | {} | 1qdzjt5 | false | null | t3_1qdzjt5 | /r/LocalLLaMA/comments/1qdzjt5/lg_skt_upstage_advance_in_koreas_sovereign_ai/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'ca6fOzpEdiOMCo-836yntVUTziHLEUWf7zsq3n980o0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ca6fOzpEdiOMCo-836yntVUTziHLEUWf7zsq3n980o0.jpeg?width=108&crop=smart&auto=webp&s=d42712e7376d1ef12c4b39e58bd9632693caccad', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/ca6fOzpEdiOMCo-836yntVUTziHLEUWf7zsq3n980o0.jpeg?width=216&crop=smart&auto=webp&s=c1de39e1a9140c533c6224bced45335b19cb5b49', 'width': 216}], 'source': {'height': 149, 'url': 'https://external-preview.redd.it/ca6fOzpEdiOMCo-836yntVUTziHLEUWf7zsq3n980o0.jpeg?auto=webp&s=c254de786ce309a4632ad34c2ad00e3971960795', 'width': 300}, 'variants': {}}]} | |
Do AI agents need TLS-style identities and ‘certificates’? | 0 | We’ve largely accepted that services have identities (certs) and supply chains have SBOMs. Agents are now acting like semi-autonomous clients that can call tools and trigger actions, but most systems still treat them as anonymous app code. If an agent causes damage, we often can’t prove what agent it was, what it was configured to do, or what tools it was allowed to use at that time.
What’s the minimal verifiable claim set an agent should present (model, tools, constraints, owner, version)? | 2026-01-15T23:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qdzhu9/do_ai_agents_need_tlsstyle_identities_and/ | PutPurple844 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdzhu9 | false | null | t3_1qdzhu9 | /r/LocalLLaMA/comments/1qdzhu9/do_ai_agents_need_tlsstyle_identities_and/ | false | false | self | 0 | null |
Check out LinkRip - Extract transcripts, download videos and audio from YouTube! | 0 | 2026-01-15T23:14:34 | https://www.linkrip.com | bennybearlover | linkrip.com | 1970-01-01T00:00:00 | 0 | {} | 1qdz5jm | false | null | t3_1qdz5jm | /r/LocalLLaMA/comments/1qdz5jm/check_out_linkrip_extract_transcripts_download/ | false | false | default | 0 | null | |
I made a simple way to run Dia2 TTS without a local GPU | 1 | Hey folks! I’ve seen a lot of people struggling to get Dia2 running locally, whether it’s CUDA setup, dependency issues, or just not having a capable GPU.
I put together a small wrapper that lets you run Dia2 entirely in the cloud using Modal (serverless GPU compute), so no local GPU is required. You deploy it once, and then interact with it via a simple HTTP API.
Features:
* No local GPU or CUDA setup
* Simple REST endpoint for text → speech
* Supports multi-speaker scripts with `[S1]` / `[S2]`
* Optional voice cloning from short WAV samples
* Fast to deploy (a few minutes on first run)
It’s mainly meant as a practical way to try Dia2 or integrate it into other projects without fighting local setup.
Repo here:
[https://github.com/khariha/dia2-easy-tts](https://github.com/khariha/dia2-easy-tts)
Would love feedback, and happy to answer questions if anyone tries it out. | 2026-01-15T22:58:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qdyqvl/i_made_a_simple_way_to_run_dia2_tts_without_a/ | SolidSailor7898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdyqvl | false | null | t3_1qdyqvl | /r/LocalLLaMA/comments/1qdyqvl/i_made_a_simple_way_to_run_dia2_tts_without_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'd_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=108&crop=smart&auto=webp&s=fb3c43d81d1c996daabfc1ea46f197149be8f424', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=216&crop=smart&auto=webp&s=106fa399a1a1867012bfa93a2a24ab850e4541af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=320&crop=smart&auto=webp&s=a9c0fff5383040f48b9a3ab43d10355414884064', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=640&crop=smart&auto=webp&s=4781da7f4d6785cd4f0056ec83bc1861da713ffb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=960&crop=smart&auto=webp&s=2fe376da434e9a355c52063fbe2714c9aee3fe1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?width=1080&crop=smart&auto=webp&s=ceeb42de6941e60814e10f3b886a1b3b838cd5d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d_ZCu3ireps9-CZPlwA-ei23X3du3jP7gNFyq_OvxZg.png?auto=webp&s=8ea2f53c6d6297f288ab7a5737fbc54b702e36fa', 'width': 1200}, 'variants': {}}]} |
Job wants me to develop RAG search engine for internal documents | 24 | this would be the first time I develop a RAG tool that searches through 2-4 million documents (mainly PDFs and many of those needing OCR). I was wondering what sort of approach I should take with this and whether it makes more sense to develop a local or cloud tool. Also the information needs to be secured so that's why I was leaving toward local. Have software exp in other things but not working with LLMs or RAG systems so looking for pointers. Also turnkey tools are out of the picture unless they're close to 100k. | 2026-01-15T22:42:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qdyc3e/job_wants_me_to_develop_rag_search_engine_for/ | Next-Self-184 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdyc3e | false | null | t3_1qdyc3e | /r/LocalLLaMA/comments/1qdyc3e/job_wants_me_to_develop_rag_search_engine_for/ | false | false | self | 24 | null |
When context stops being guidance and starts being history | 0 | I keep seeing teams rely more and more on “context” to make LLM systems behave well over time. Rules files, examples, internal notes, summaries, and project knowledge, all of it clearly helps early on. You get better outputs faster, iterate quicker, and see fewer obvious mistakes. That part works.
What’s less discussed is what happens after the system runs for a while.
At some point, context stops being something you reference and starts behaving like something you inherit. It’s no longer just onboarding material. It quietly turns into a record of prior decisions. Nothing breaks when this happens. Outputs still look fine, sometimes even better than before. But the system becomes harder to explain.
If two summaries encode different assumptions. The model doesn’t object, it just absorbs both. If a rule is edited, earlier behavior becomes inconsistent without a clear trace. If someone asks why a decision was made a month ago, the answer is usually reconstructed rather than replayed. That doesn’t feel like a tooling bug so much as a property of using editable, heuristic material as both instruction and history.
Early on, context reduces human effort. Later, it often increases it. People get pulled back in to reconcile contradictions, decide which version “counts,” or explain why behavior shifted. Oversight slowly moves from prevention to explanation.
The optimistic story is that context compounds. Good context leads to better outputs, which teaches you what to add, which makes context better, repeat. That loop works, but until context is carrying past decisions. Once that happens, accumulation introduces tension. Revision introduces ambiguity. Growth introduces contradictions.
At that point, some uncomfortable questions show up whether you like them or not. When context changes over time, what actually stays canonical? When interpretations diverge, who decides which one wins? When does helpful guidance quietly turn into implicit truth? At what point do you stop trusting context to just work and start managing it like a system of record?
I’m not arguing that context is bad. It’s clearly powerful. I’m arguing that it has a boundary, and that boundary only becomes visible once time and edits enter the picture. Curious how others who’ve run long lived setups see this, whether this resonates or if people are handling it differently. | 2026-01-15T21:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qdx5ts/when_context_stops_being_guidance_and_starts/ | Boring-Store-3661 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdx5ts | false | null | t3_1qdx5ts | /r/LocalLLaMA/comments/1qdx5ts/when_context_stops_being_guidance_and_starts/ | false | false | self | 0 | null |
When you press purchase on AMD hardware to do inference. | 7 | 2026-01-15T21:44:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qdwu59/when_you_press_purchase_on_amd_hardware_to_do/ | SquareAbrocoma2203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdwu59 | false | null | t3_1qdwu59 | /r/LocalLLaMA/comments/1qdwu59/when_you_press_purchase_on_amd_hardware_to_do/ | false | false | 7 | null | ||
Is it common for a mid-sized tech company (>500 employees) to completely ignore LLMs and AI agents? | 0 | Feel like almost everyone over 30 in my company has no freaking idea how an AI agent is built AND they have zero interest in knowing these things.
They keep making dumb jokes on how LLM keeps getting functions wrong, and forget there are concrete LLM engineering steps: designing functions, letting the agent select which function to use with what arguments, adding guardrails and validation layers, testing outputs, developing a test suite iterating, and eventually building toward MCP or smth.
The frustrating part is my company has already purchased enterprise licenses for an LLM API, but almost noone is willing to use it or create a PoC.
Is this common at other companies, or am I just stuck in a place that's stuck in the past?
Personally feel like it might get harder for me to switch companies in a couple of years if I have never even seen/built tools like that. | 2026-01-15T21:41:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qdwr58/is_it_common_for_a_midsized_tech_company_500/ | Positive_Affect_6720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdwr58 | false | null | t3_1qdwr58 | /r/LocalLLaMA/comments/1qdwr58/is_it_common_for_a_midsized_tech_company_500/ | false | false | self | 0 | null |
AI hardware landscape shifts from GPUs to NPUs as edge computing gains ground | 0 | 2026-01-15T20:53:16 | https://www.digitimes.com/news/a20251210PD203/gpu-npu-cost-kneron-training-data.html | Dontdoitagain69 | digitimes.com | 1970-01-01T00:00:00 | 0 | {} | 1qdvgra | false | null | t3_1qdvgra | /r/LocalLLaMA/comments/1qdvgra/ai_hardware_landscape_shifts_from_gpus_to_npus_as/ | false | false | default | 0 | null | |
what would you want to see in a dataset that would make it work better for your projects? | 1 | [removed] | 2026-01-15T20:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qdv6i3/what_would_you_want_to_see_in_a_dataset_that/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdv6i3 | false | null | t3_1qdv6i3 | /r/LocalLLaMA/comments/1qdv6i3/what_would_you_want_to_see_in_a_dataset_that/ | false | false | self | 1 | null |
Karparthi_prime | 0 | If Andreij is here I am the architecht | 2026-01-15T20:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qdulvk/karparthi_prime/ | WranglerConscious296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdulvk | false | null | t3_1qdulvk | /r/LocalLLaMA/comments/1qdulvk/karparthi_prime/ | false | false | self | 0 | null |
CPU only llama-bench | 6 | 2026-01-15T20:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qduaz4/cpu_only_llamabench/ | Snow_Sylph | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qduaz4 | false | null | t3_1qduaz4 | /r/LocalLLaMA/comments/1qduaz4/cpu_only_llamabench/ | false | false | 6 | null | ||
Ilya was right: We're back to the age of research. DeepSeek's mHC proves it. | 0 | In Ilya Sutskever's recent podcast with Dwarkesh Patel, he made a striking point: the era of just scaling compute is over. We need to return to fundamental architectural research.
He also discussed something fascinating—why humans generalize so much better than our models. It's not just about data or parameters. There's something fundamental about our architecture (literally, DNA-encoded priors from evolution) that makes us sample-efficient learners.
**This is exactly what makes DeepSeek's mHC paper interesting.**
**The core problem:**
Transformers rely on residual connections for stability. Hyper-Connections tried to add more expressivity by introducing multiple residual streams, but they broke training stability at scale. At 27B parameters, HC models showed chaotic gradients and loss spikes.
It's a perfect example of what Ilya was talking about—scaling alone doesn't fix architectural problems. You can't just throw more compute at unstable information flow.
**What mHC does differently:**
Instead of letting the model learn arbitrary routing patterns between residual streams, mHC constrains mixing operations to behave like balanced redistributions. Think of it as adding "traffic rules" to the information highway—you still get multiple lanes and flexibility, but without the chaos.
**The results:**
\- Stable training even at 27B parameters
\- 6-7% compute overhead (minimal cost)
\- Better performance on reasoning tasks (GSM8K, BBH, MATH)
\- Gains persist as model size increases
**Why this matters for local models:**
This represents exactly the shift Ilya described—from "throw more compute at it" to "fix the architecture itself." For those of us running local models or working with limited resources, architectural improvements like this could be more valuable than raw scaling.
The approach uses doubly stochastic matrices and the Birkhoff polytope to enforce geometric constraints on residual mixing. Technical but elegant.
I wrote a detailed breakdown here: [https://medium.com/@shreyanshjain05/deepseeks-mhc-breakthrough-how-fixing-transformers-could-end-the-ai-scaling-era-f01141a87aed?postPublishedType=initial](https://medium.com/@shreyanshjain05/deepseeks-mhc-breakthrough-how-fixing-transformers-could-end-the-ai-scaling-era-f01141a87aed?postPublishedType=initial)
**Discussion:** Ilya predicted we're entering an "age of research" after years of scaling dominance. Do you think mHC-style innovations will matter more than raw compute in 2026? | 2026-01-15T20:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qdu4ub/ilya_was_right_were_back_to_the_age_of_research/ | shreyanshjain05 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdu4ub | false | null | t3_1qdu4ub | /r/LocalLLaMA/comments/1qdu4ub/ilya_was_right_were_back_to_the_age_of_research/ | false | false | self | 0 | null |
Does Context Engineering (RAG) actually make reduce hallucinations in LLMs? | 3 | Hey everyone,
I just published my second paper on zenodo today.
**TL;DR:** RAG, tools, and memory reduce hallucinations short-term, but each layer adds compression artifacts that compound errors. Like re-saving a JPEG multiple times.
It's about a fundamental problem I noticed. Context engineering is not what it is marketed actually.
You can read paper. I'm attaching paper link in comment below.
Also if anyone wants to understand the paper is more simple way then you can follow the repo page I'm attaching in comment.
**Note:** This is not a novel work. I just shared my view in the paper. It's a pre-print. | 2026-01-15T19:54:15 | Moist_Landscape289 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdtwfs | false | null | t3_1qdtwfs | /r/LocalLLaMA/comments/1qdtwfs/does_context_engineering_rag_actually_make_reduce/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'YhdeL_r2Zu0GSqEutQXaWD1u9xr5DIcAFQ36-EmO4cg', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=108&crop=smart&auto=webp&s=ccb809e8ed0dadd3f85558b66754b4dee9d2f9ff', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=216&crop=smart&auto=webp&s=ea366ffd95cdff983a752382eef0a00679e686c6', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=320&crop=smart&auto=webp&s=8349aa002b47c9accc134f2a6bb8343bfdb00f92', 'width': 320}, {'height': 423, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=640&crop=smart&auto=webp&s=93ed63e7fee4cabdd167a3cb9e24f661c77484a5', 'width': 640}, {'height': 634, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=960&crop=smart&auto=webp&s=d91f2505b926c84dbc98d2e7aab542a972042454', 'width': 960}, {'height': 714, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?width=1080&crop=smart&auto=webp&s=101a31bd27d5c5907371f3af738f793de551baaf', 'width': 1080}], 'source': {'height': 796, 'url': 'https://preview.redd.it/cwa610aoikdg1.png?auto=webp&s=1ac125dc9d23d6c5718132240f4ab2a69c4ccbf0', 'width': 1204}, 'variants': {}}]} | ||
Not as impressive as most here, but really happy I made it in time! | 140 | I'm in the Netherlands, I apologize in advance for my grammar (Happy to be corrected!), not using AI for translation.
Over here, getting cards is increasingly more difficult and prices are quite steep.
It was a bit of a gamble to get the second GPU; I had the RTX 5060 Ti on order for 509EU by Paradigit but it wasn't delivered for 2 weeks straight, and they still aren't sure when supply will arrive. Cancelled the order and payed the premium for Azerty's model in stock (600EU), but it arrived the next day!
So if you're in the Netherlands, I recommend calling up the store to ask about stock availability in advance. The listings on Tweakers wasn't accurate for this card.
Today the announcement from HardwareUnboxed came that the RTX 5060 Ti 16GB is becoming unavailable. Really happy it arrived just in time.
Specs:
* AMD Ryzen 5 9600X
* Crosair Vengence 96GB (2x48) DDR5-6000 CL30
* ASUS ProArt X870E-Creator Wifi
* 2x ASUS Prime RTX 5060 Ti 16GB
* BeQuiet! Dark Power 13 860W
Notes:
* I don't use the CPU for inference much (embeddings) and the PCI lanes are the same across all models, so I went with the lowest TDP.
* Wished I had more (192GB) but I can hold off.
* Picked the motherboad specifically for it's PCI-E 5.0 splitting to get the most out of the GPUs.
* Power draw during inference is \~300W.
| 2026-01-15T19:53:15 | Kahvana | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdtvgs | false | null | t3_1qdtvgs | /r/LocalLLaMA/comments/1qdtvgs/not_as_impressive_as_most_here_but_really_happy_i/ | false | false | 140 | {'enabled': True, 'images': [{'id': 'PoaXcCsDJF2YI17vj4hLj54AG43LLCQJ64hM5uZg-FU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=108&crop=smart&auto=webp&s=ac5311158f2db62031c6cd2ebbafa09f0be7ac5d', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=216&crop=smart&auto=webp&s=1f0812856494aecc80031f0b00b8c0ad5a9a4d45', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=320&crop=smart&auto=webp&s=6e719d0285a023550eb9816358d7d0ea4df35404', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=640&crop=smart&auto=webp&s=572838999e31352821862a1f71f5bd35cb328147', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=960&crop=smart&auto=webp&s=f65d59f9f05169228ce8c03152d085a50d9a2003', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?width=1080&crop=smart&auto=webp&s=be53b95df4db1e6ad89fd5eb94c4480d375e93f3', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/g997soqdgkdg1.jpeg?auto=webp&s=49f5007aea31a2b096c3ddc21fc0cc8c9185d2cd', 'width': 3024}, 'variants': {}}]} | ||
Will Substrate disrupt the chip market? | 4 | If they succeed in mass fabbing 1nm to a14 process nodes using x rays by 2027/2028 then other companie/countries like TsMc/Taiwan and Smic/ China and Netherlands will be quite behind! They are estimated to produce 1.2 mil wafers at 10k / wafer (10x cheaper than tsmc ) by 2030… | 2026-01-15T19:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qdtl85/will_substrate_disrupt_the_chip_market/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdtl85 | false | null | t3_1qdtl85 | /r/LocalLLaMA/comments/1qdtl85/will_substrate_disrupt_the_chip_market/ | false | false | self | 4 | null |
Looking for resources on finetuning data formatting | 1 | [removed] | 2026-01-15T19:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qdtk7f/looking_for_resources_on_finetuning_data/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdtk7f | false | null | t3_1qdtk7f | /r/LocalLLaMA/comments/1qdtk7f/looking_for_resources_on_finetuning_data/ | false | false | self | 1 | null |
I built a scraping API to simplify RAG data ingestion (3 endpoints). Looking for feedback on utility. | 2 | [removed] | 2026-01-15T19:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qdtfap/i_built_a_scraping_api_to_simplify_rag_data/ | g4m3r1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdtfap | false | null | t3_1qdtfap | /r/LocalLLaMA/comments/1qdtfap/i_built_a_scraping_api_to_simplify_rag_data/ | false | false | self | 2 | null |
Starting my own model journey. | 14 | Just wanted to start a little online dev log about making my very own model. I’m not doing a LoRA, I’m literally training a tokenizer and model on my own data, from scratch.
So far it’s been pretty fun. And it really helps you understand what goes into an LM. I’ve gotten basically gibberish, in fact the most coherent thing the model has produced so far was to the prompt, “There once was a man” to which the model replied, “a maned ined” so… nothing really yet.
BUT that’s the fun part. Just learning and playing with this thing and feeding it more open sourced data. I’ll post more updates in the future if I ever get past the model just randomly stringing together tokens! | 2026-01-15T19:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qdt76f/starting_my_own_model_journey/ | AllTheCoins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdt76f | false | null | t3_1qdt76f | /r/LocalLLaMA/comments/1qdt76f/starting_my_own_model_journey/ | false | false | self | 14 | null |
Nexa × Qualcomm On-Device AI Bounty Program - Build Local Android AI Apps and Win Awards | 3 | On-device AI will be everywhere in 2026. Nexa AI partnered with Qualcomm to host a bounty program for builders who want to level-up local AI on mobile, ship real impact and get recognized.
**Build:**
A working Android AI app that runs locally on Qualcomm Hexagon NPU using NexaSDK.
**Win:**
\- $6,500 total cash prizes
\- Grand Winner: $5,000 cash + Edge AI Impact Award certificate
\- Top 3 finalists: $500 + flagship Snapdragon powered device
\- The real upside: Qualcomm marketing spotlight + partnership opportunities, plus expert mentorship
**Timeline (PT):**
\- Jan 15: Launch
\- Feb 15: Phase 1 deadline
\- Feb 23: Finalists announced
\- March 24: Phase 2 deadline
\- March 31: Winner announced
**Register on the program website and start building today:** [https://sdk.nexa.ai/bounty](https://sdk.nexa.ai/bounty)
https://reddit.com/link/1qdsy5t/video/60ru5xcmckdg1/player
| 2026-01-15T19:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qdsy5t/nexa_qualcomm_ondevice_ai_bounty_program_build/ | Material_Shopping496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdsy5t | false | null | t3_1qdsy5t | /r/LocalLLaMA/comments/1qdsy5t/nexa_qualcomm_ondevice_ai_bounty_program_build/ | false | false | self | 3 | null |
translategemma 27b/12b/4b | 77 | # Description
TranslateGemma is a family of lightweight, state-of-the-art open translation models from Google, based on the Gemma 3 family of models.
TranslateGemma models are designed to handle translation tasks across 55 languages. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art translation models and helping foster innovation for everyone.
# [](https://huggingface.co/google/translategemma-27b-it#inputs-and-outputs)
# Inputs and outputs
* **Input:**
* Text string, representing the text to be translated
* Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
* Total input context of 2K tokens
* **Output:**
* Text translated into the target language
[https://huggingface.co/google/translategemma-27b-it](https://huggingface.co/google/translategemma-27b-it)
[https://huggingface.co/google/translategemma-12b-it](https://huggingface.co/google/translategemma-12b-it)
[https://huggingface.co/google/translategemma-4b-it](https://huggingface.co/google/translategemma-4b-it)
https://preview.redd.it/aza4kprrakdg1.png?width=1372&format=png&auto=webp&s=bed28fac0a9878478a7cec3f0eac6c1c585b8a85
| 2026-01-15T19:08:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qdsnul/translategemma_27b12b4b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdsnul | false | null | t3_1qdsnul | /r/LocalLLaMA/comments/1qdsnul/translategemma_27b12b4b/ | false | false | 77 | null | |
Did anyone of you fine tune gpt oss 20b or an llm ? if so, what for, and was it worth it ? | 10 | I'm a masters ai student in germany, i work on rag systems, and i'm getting this strong urge to fine tune gpt oss 20b for rag.
I'm generally alright with gpt oss 20b, it generally works well, calls tools when it needs to, follows instructions. i was just wondering if i could fine tune it to reply how i want, like with citations, references formatted a specific way, optimise it for say legal documents, that kind of thing
but before i sink time into this, did anyone actually fine tune gpt oss 20b? or another llm around that size? what did you fine tune it for? And did you see a real difference.
i'm not talking about minor differences or benchmark numbers, i'm talking about things that actually made a difference in practice. wanna hear about personal experiences
these experiments might turn into thesis material so genuinely curious what people's experiences have been.
I already did my research, but couldn't find much in terms of actual user's experience. I found helpful training material tutorials, and cookbooks, just don't know if it creates an actual difference, and if so how much.
I've always got genuinely good replies here, so big thanks in advance ❤️
I'd welcome any thing you have to add... | 2026-01-15T18:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qds04y/did_anyone_of_you_fine_tune_gpt_oss_20b_or_an_llm/ | Hour-Entertainer-478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qds04y | false | null | t3_1qds04y | /r/LocalLLaMA/comments/1qds04y/did_anyone_of_you_fine_tune_gpt_oss_20b_or_an_llm/ | false | false | self | 10 | null |
OpenAI has signed a $10 billion contract with Cerebras | 27 | [https://en.ain.ua/2026/01/15/openai-has-signed-a-10-billion-contract-with-cerebras/](https://en.ain.ua/2026/01/15/openai-has-signed-a-10-billion-contract-with-cerebras/)
A few days ago, I read some comments about this hypothetical wedding and why it wasn't happening. And yet, it happened! | 2026-01-15T18:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qdrxiu/openai_has_signed_a_10_billion_contract_with/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdrxiu | false | null | t3_1qdrxiu | /r/LocalLLaMA/comments/1qdrxiu/openai_has_signed_a_10_billion_contract_with/ | false | false | self | 27 | null |
Framework Desktop vs. 5090 for code analysis | 13 | I need opinions on what hardware to get, between Framework Desktop (AMD Stryx Halo 128GB unified RAM) and self-built PC with Nvidia 5090 32GB VRAM.
The use case is somewhat peculiar. I will be working with still copyrighted vintage code, mostly for early x86 PC but some of it for other 80s/90s platforms. Mostly in C89 and some of it in 8086 and 68k assembly. I'm far from an expert in this and I will be working alone. I need an AI assistant for code analysis and expediting the learning process.
I am really not sure how to approach this. I have no experience with local models and don't know what to expect from either option. My worries are that AMD will be slow and 32gb in 5090 might not be enough. In theory, slow is better that nothing, I guess. As long as it's not unbearably slow. The price, form factor and cost of operating are also leaning in AMD's favor. But in any case, I don't want to spent thousands for a doorstop if it can't do the job. Anybody who has experience with this, is most welcome to express their opinion.
I'm not even sure if LLMs are even capable of handling this somewhat obscure code base. But what I have tested with ChatGPT and Claude Code free models handle vintage C and assembly pretty well. But those are commercial cloud solutions, so yeah....
I am also open to suggestions on which local LLM is the most suitable for this kind of work. | 2026-01-15T18:33:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qdroja/framework_desktop_vs_5090_for_code_analysis/ | Albedo101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdroja | false | null | t3_1qdroja | /r/LocalLLaMA/comments/1qdroja/framework_desktop_vs_5090_for_code_analysis/ | false | false | self | 13 | null |
OlmOCR Settings for Rag | 2 | I‘ve got a few hundred fairly normal PDFs, that for some reason have some bad font embeddings. I am using OlmOCR.pipeline using a model served on vLLM. I like the parallelism, but even with multiple retries it still discards documents as not processable - maybe because they contain text as well as images without text?
I have split the PDFs into 5 page chunks, set max-retries at 8, set the threshold to discard documents very high so it won’t ”fail” the whole file for 3 broken pages out of 50 etc.
The end result is a maybe 83% failure rate. Anybody have better results? What are your settings? | 2026-01-15T18:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qdrgui/olmocr_settings_for_rag/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdrgui | false | null | t3_1qdrgui | /r/LocalLLaMA/comments/1qdrgui/olmocr_settings_for_rag/ | false | false | self | 2 | null |
Raspberry Pi AI HAT+ 2 announced! Featuring the new Hailo-10H neural network accelerater, 40 TOPS (INT4) of inferencing performance, $130 | 0 | https://www.raspberrypi.com/news/introducing-the-raspberry-pi-ai-hat-plus-2-generative-ai-on-raspberry-pi-5/ | 2026-01-15T18:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qdrfih/raspberry_pi_ai_hat_2_announced_featuring_the_new/ | Fear_ltself | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdrfih | false | null | t3_1qdrfih | /r/LocalLLaMA/comments/1qdrfih/raspberry_pi_ai_hat_2_announced_featuring_the_new/ | false | false | self | 0 | null |
Nemotron-3-nano:30b is a spectacular general purpose local LLM | 193 | Just want to sing the praises of this model. I am stunned at how intelligent it is for a 30b model. Comparing it to Llama 3.3:70b, I have yet to find a general purpose question that Nemotron hasn't answered better. It is quite robotic so I won't be using it for creative or chat purposes. Everything else though has been stellar.
If you have the capacity to give it a try, I highly recommend it. | 2026-01-15T18:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qdrf3o/nemotron3nano30b_is_a_spectacular_general_purpose/ | DrewGrgich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdrf3o | false | null | t3_1qdrf3o | /r/LocalLLaMA/comments/1qdrf3o/nemotron3nano30b_is_a_spectacular_general_purpose/ | false | false | self | 193 | null |
Any point putting a 1060 6GB in with a 3090 for partial offload 70B type scenarios? | 1 | I already run mostly 70B partially offloaded, get around 2.4 t/s.
Will adding the 1060 actually help at all? I got one from my friend for free he had sitting around. There would be less running on the CPU RAM (~ 12GB vs ~ 18 GB) but also whatever additional overhead comes with having multiple GPUs, and the 1060's memory bandwidth is only about twice as much as my normal RAM.
Should I bother digging out my PSU cables and making it happen? | 2026-01-15T18:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qdr6wr/any_point_putting_a_1060_6gb_in_with_a_3090_for/ | Ill_Yam_9994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdr6wr | false | null | t3_1qdr6wr | /r/LocalLLaMA/comments/1qdr6wr/any_point_putting_a_1060_6gb_in_with_a_3090_for/ | false | false | self | 1 | null |
Made "LLM Initiative" prototype - Main idea is to break "USER : ASSISTANT" format to "ASSISTANT : USER : ASSISTANT : ASSISTANT". It can force LLM to start empty dialogue, answer in a row, using fixed or random timer. Can use Web Search and Lorebook for prompt injection. | 0 | Vascura FRONT: https://github.com/Unmortan-Ellary/Vascura-FRONT
**LLM Initiative:** Interesting timer based system that force LLM to take Initiative and start new conversations to engage with the user (even with empty chats), messaging multiple times in a row trying to engage with AFK user on its own, continue to perform given task, acting as different characters each new message (if instructed). Will use Lorebook injections, can use Web Search to find fresh information about last topic of conversation. | 2026-01-15T17:55:14 | https://www.reddit.com/gallery/1qdql8e | -Ellary- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qdql8e | false | null | t3_1qdql8e | /r/LocalLLaMA/comments/1qdql8e/made_llm_initiative_prototype_main_idea_is_to/ | false | false | 0 | null | |
Help with Llama Guard 3 prompting for OpenAI moderation taxonomy | 1 | [removed] | 2026-01-15T17:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qdprz2/help_with_llama_guard_3_prompting_for_openai/ | WerewolfSpecial1162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdprz2 | false | null | t3_1qdprz2 | /r/LocalLLaMA/comments/1qdprz2/help_with_llama_guard_3_prompting_for_openai/ | false | false | self | 1 | null |
How to counter Qwen3 VL Thinking emerging catchphrases? | 6 | Most people agree that Qwen3 VL Thinking is currently the best dense model under 32B parameters. That said, Qwen3 VL has some quirks that are driving me crazy.
I've noticed a weird pattern that shows up consistently in longer conversations (over 5 turns). It's a type of repetition, but not the straightforward kind that repetition or frequency penalties can fix.
Here's what happens: As the chat goes on, Qwen3 starts ending its responses (not the thinking block) with what becomes essentially a signature catchphrase. This isn't typical AI slop, it's more like an "emerging" tagline... always different. Once the model locks onto a phrase like "Now what?", it becomes almost impossible to break the pattern without addressing it in the chat. Even worse, it starts standardizing the structure leading up to that catchphrase. Each response becomes a template where it just swaps out variables... like using "Now let's talk about X" over and over, just changing what X is.
The thinking block stays sharp, but it increasingly gets boxed into formatting each answer the same way, and there's a growing, though subtle, disconnect between what it's thinking and what it actually outputs.
Has anyone else run into this? What's the best way to deal with it? Thanks in advance! | 2026-01-15T17:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qdpjzc/how_to_counter_qwen3_vl_thinking_emerging/ | IrisColt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdpjzc | false | null | t3_1qdpjzc | /r/LocalLLaMA/comments/1qdpjzc/how_to_counter_qwen3_vl_thinking_emerging/ | false | false | self | 6 | null |
[ Removed by moderator ] | 29 | [removed] | 2026-01-15T17:18:29 | https://modelgrep.com | Turbulent-Sky5396 | modelgrep.com | 1970-01-01T00:00:00 | 0 | {} | 1qdpjnh | false | null | t3_1qdpjnh | /r/LocalLLaMA/comments/1qdpjnh/modelgrep_open_source_project_to_help_you/ | false | false | null | 29 | null |
I built agent-of-empires: cli session manager to manage all your local LLM coding agents (opencode) | 14 | Hi! My name's Nathan, I'm an MLE at mozilla.ai.
I'm loving my LM Studio LLMs (nemotron, qwen3-coder, gpt-oss) running on a mac mini, and I wanted to give them a try at coding. Unfortunately I'm impatient and since they can run a little slower than the LLMs hosted on the expensive NVIDIA gpus, I found myself opening up a ton of terminal windows to try to do stuff while I waited. I started spending a lot of time toggling between windows to try to figure out which ones were waiting on me vs sitting idle.
So, I built a solution! Agent of Empires (aoe) is terminal session manager that manages your agents with tmux and gives you a TUI dashboard that shows session status at a glance.
* Status monitoring - See Running/Waiting/Idle state for all sessions without attaching
* Persistent sessions - Sessions survive terminal closure; your agent keeps working
* Multiple parallel sessions - Run several agents across projects while you work elsewhere
* Git worktree integration - Spin up agents on different branches simultaneously
* Docker sandboxing - Isolate agent execution for safety
Links
* GitHub: [https://github.com/njbrake/agent-of-empires](https://github.com/njbrake/agent-of-empires)
* MIT licensed, Rust, Linux/macOS
install via \`brew install njbrake/aoe/aoe\` or check out the github repo for the bash script for linux/WSL.
Happy to hear any thoughts about missing features or how it's working for you! | 2026-01-15T17:17:45 | https://v.redd.it/7xv99jhrqjdg1 | river_otter412 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdpiw8 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/7xv99jhrqjdg1/DASHPlaylist.mpd?a=1771089489%2CNDUxMTMwZTVmYmI3ZDY5YTlkYzMyY2EzNWJkYzQxNGRkYmZhYmM1M2I1YzM1YjEzODY5ZDBhNGM0YTY2MjRmYg%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/7xv99jhrqjdg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/7xv99jhrqjdg1/HLSPlaylist.m3u8?a=1771089489%2CZjgxYTU5OGU3ZTk2NGRiYjI0MDk4NDU1YWFiOTIxOGI4ZjlmYzk0YWRlNzBhNTNhNTBjZGEzMDI0NjA2ZGRjZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7xv99jhrqjdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 822}} | t3_1qdpiw8 | /r/LocalLLaMA/comments/1qdpiw8/i_built_agentofempires_cli_session_manager_to/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b8ce1d38384ff28a588f4ca5b97bc3ed0177c34', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=216&crop=smart&format=pjpg&auto=webp&s=4a4e28eb9e29d966bb385efd27de9b4147d155c0', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=320&crop=smart&format=pjpg&auto=webp&s=eaef3b97fa80c18c1859bcebc3af4b819f5bb244', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=640&crop=smart&format=pjpg&auto=webp&s=4de0a9cbda7f1820029e512c751ed08de72d7dd6', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=960&crop=smart&format=pjpg&auto=webp&s=b4e1eb7016ebde4ac0122186f9d1435933146a96', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1b71bb1b069ba0892d2a3d05feb70a4a4871a0e8', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/M3l2NGppaHJxamRnMfWVxy7rvYMCqiqCOPfcqx6hDwGA71STZapyDGhDWUDn.png?format=pjpg&auto=webp&s=ba66d7f1747c45f3856e3d3f9c1e07c3ce8c9d73', 'width': 1200}, 'variants': {}}]} | |
Thanks to you guys, Soprano TTS now supports OpenAI-compatible endpoint, ONNX, ComfyUI, WebUI, and CLI on CUDA, MPS, ROCm, and CPU! | 98 | [https://github.com/ekwek1/soprano](https://github.com/ekwek1/soprano)
[https://huggingface.co/ekwek/Soprano-1.1-80M](https://huggingface.co/ekwek/Soprano-1.1-80M)
[https://huggingface.co/spaces/ekwek/Soprano-TTS](https://huggingface.co/spaces/ekwek/Soprano-TTS)
Hello everyone,
This final day of updates is dedicated to all of you. When I first released Soprano, I had no idea how much support I would get from the community. Within the first day, I received an enormous number PRs adding onto the codebase. I have finally merged most of them, and am happy to announce that you can now run Soprano on nearly any device, and with a wide number of supported inference methods.
Here is a list of all the contributions you guys have made:
WebUI: (from Mateusz-Dera & humair-m)
soprano-webui
CLI: (from bigattichouse)
soprano "Hello world!"
OpenAI-compatible endpoint (from bezo97)
uvicorn soprano.server:app
In addition, several of you have made your own modifications to Soprano, allowing for ONNX and ComfyUI support! Here are some repos that implement this:
[https://github.com/SanDiegoDude/ComfyUI-Soprano-TTS](https://github.com/SanDiegoDude/ComfyUI-Soprano-TTS)
[https://github.com/jo-nike/ComfyUI-SopranoTTS](https://github.com/jo-nike/ComfyUI-SopranoTTS)
[https://github.com/KevinAHM/soprano-web-onnx](https://github.com/KevinAHM/soprano-web-onnx)
Soprano also supports more than just CUDA devices now, too! It also supports CPU (from bigattichouse), MPS (from visionik), and there is an ROCm PR (from Mateusz-Dera) that can be found here:
[https://github.com/ekwek1/soprano/pull/29](https://github.com/ekwek1/soprano/pull/29)
If you have an ROCm device I would love some help for testing this PR!
Finally, I want to thank the countless other contributions to Soprano, including an automatic hallucination detector from ChangeTheConstants and transformers streaming support from sheerun. You all have improved Soprano tremendously!
This will likely be my last update for a bit, since I still have some unfinished business left on the roadmap that will take some time. I’m not abandoning you guys though! New capabilities for Soprano will be coming soon. :)
\- Eugene | 2026-01-15T17:10:00 | eugenekwek | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdpb2v | false | null | t3_1qdpb2v | /r/LocalLLaMA/comments/1qdpb2v/thanks_to_you_guys_soprano_tts_now_supports/ | false | false | default | 98 | {'enabled': True, 'images': [{'id': '581sfa20ojdg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=108&crop=smart&auto=webp&s=58e114b08430da9af88822994540dbdc7d05975d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=216&crop=smart&auto=webp&s=1c702d1b5265c7903aba8990d21dd12959e91f95', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=320&crop=smart&auto=webp&s=8a11d2bd8ca8face89851ac9fbd88ea63e7c8e1b', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=640&crop=smart&auto=webp&s=3839305a28dd4d4bbbbd3af2e716c6b974829cd3', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=960&crop=smart&auto=webp&s=0a53b5a673d7ee79278f4d76dea4a1f8b78e8d4a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?width=1080&crop=smart&auto=webp&s=d7656f5c447e931f494abffa8868e307888dd010', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/581sfa20ojdg1.png?auto=webp&s=ebdc64285568d30d7d6ebf63633a860a3eb211dc', 'width': 1280}, 'variants': {}}]} | |
[Resource] AI Guardrails: Open-source middleware to add PII Redaction & Injection Defense to local LLMs | 0 | Hey everyone,
I built a lightweight middleware API designed to sit between your users and your local LLM stack (Ollama, vLLM, Llama.cpp).
\*\*Repo:\*\* [https://github.com/koppalanaveenkumar/ai-guardrails](https://github.com/koppalanaveenkumar/ai-guardrails)
\*\*Demo:\*\* [https://aiguardrails.vercel.app](https://aiguardrails.vercel.app)
\*\*What it solves:\*\*
If you are building an agent or RAG app, you usually don't want the LLM to see raw PII (emails, SSNs) or handle malicious prompt injections ("Ignore instructions...").
\*\*How it works:\*\*
It's a standalone FastAPI service that adds <50ms latency.
1. \*\*Injections:\*\* Uses \`sentence-transformers\` (all-MiniLM-L6-v2) to detect semantic jailbreaks locally. No API calls to OpenAI.
2. \*\*PII:\*\* Uses Microsoft Presidio (running locally) to scrub sensitive data before it hits the prompt context.
3. \*\*Logs:\*\* Audits everything to Postgres.
\*\*Features:\*\*
\* Async architecture (doesn't block your generation token stream much)
\* Deployable on a cheap VPS or run alongside Ollama
\* Fully open source (MIT)
I wanted something that didn't require me to send my private data to a cloud "Security API", so I built this to run entirely offline/self-hosted.
I'm happy to answer any questions about the detection logic. | 2026-01-15T17:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qdp3ix/resource_ai_guardrails_opensource_middleware_to/ | naveenkumarkoppala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdp3ix | false | null | t3_1qdp3ix | /r/LocalLLaMA/comments/1qdp3ix/resource_ai_guardrails_opensource_middleware_to/ | false | false | self | 0 | null |
GPT OSS Using V100s On vLLM? | 1 | Hello!
I'm looking to run GPT OSS 120B using vLLM on my set up of 8 V100s. Should be more than enough compute but I'm having trouble figuring out if V100s can run this set up? The official documentation states "In vLLM, you can run it on NVIDIA H100, H200, B200 as well as MI300x, MI325x, MI355x and Radeon AI PRO R9700" ([documentation](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html)) but I wasn't sure if this was just what was *officially* supported and there's a way around this. From what I've read it seems like there might be, I just can't find a definitive answer. I'm running into plenty of errors when I try to run it but I know that using vLLM can notoriously be error whack-a-mole.
Thanks for the help. | 2026-01-15T17:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qdp26c/gpt_oss_using_v100s_on_vllm/ | NimbleTie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdp26c | false | null | t3_1qdp26c | /r/LocalLLaMA/comments/1qdp26c/gpt_oss_using_v100s_on_vllm/ | false | false | self | 1 | null |
Smallest model in Hugging Face/llama.cpp? | 0 | Tell me a small model (Hugging Face/llama.cpp). | 2026-01-15T17:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qdp1st/smallest_model_in_hugging_facellamacpp/ | Ok-Type-7663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdp1st | false | null | t3_1qdp1st | /r/LocalLLaMA/comments/1qdp1st/smallest_model_in_hugging_facellamacpp/ | false | false | self | 0 | null |
Agent Skills in 100 lines of Python | 5 | Agent Skills are an exciting feature, but I think the conversation around them gets a bit too mystical.
After implementing the standard myself, I realized their true power isn't in some complex technical breakthrough. It's that they are a perfect example of progressive disclosure.
They allow us to replace complex sub-agent orchestration with something much more manageable: a file system.
All you need is three tools:
\- Skill(name) to read a SKILL.md
\- Read(path) to progressively read more files
\- Run(path) to execute scripts without having to read them
If you are building agents, I'd argue you should look at Skills as a very cheap tool to give your agent flexibility. It’s a lightweight way to organize prompts that might replace the complex orchestration you thought you needed.
I wrote up the full implementation (compatible with Anthropic's public skills) here:
https://www.jairtrejo.com/blog/2026/01/agent-skills | 2026-01-15T16:59:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qdp00e/agent_skills_in_100_lines_of_python/ | jairtrejo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdp00e | false | null | t3_1qdp00e | /r/LocalLLaMA/comments/1qdp00e/agent_skills_in_100_lines_of_python/ | false | false | self | 5 | null |
google/translategemma | 167 | [https://huggingface.co/collections/google/translategemma](https://huggingface.co/collections/google/translategemma)
tech report: [https://arxiv.org/abs/2601.09012](https://arxiv.org/abs/2601.09012)
| 2026-01-15T16:42:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qdok2i/googletranslategemma/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdok2i | false | null | t3_1qdok2i | /r/LocalLLaMA/comments/1qdok2i/googletranslategemma/ | false | false | self | 167 | {'enabled': False, 'images': [{'id': '3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=108&crop=smart&auto=webp&s=11ed6729f1b53102e968ad5e14a5ad6c69c48544', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=216&crop=smart&auto=webp&s=d9a39259e9150ab462730e5a50bd374e59322cd6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=320&crop=smart&auto=webp&s=49f9b982a3f01601b9b86ab42a30a30af22b0449', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=640&crop=smart&auto=webp&s=74cd92a3cd8b9b8fec1e7a8ffe5144d15daeba0d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=960&crop=smart&auto=webp&s=35dc09b4021f501f547f2fc8b6348a2032ca552a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?width=1080&crop=smart&auto=webp&s=3258e1276c2ff6a50fe39078641f3eb4b3dcfc8c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3nmGq4Pb4Wd45VEcnJn6aN4zY72i19oKKecxe6Vlcbg.png?auto=webp&s=f1b2947f16defb6b0c107d2b7a4535285774bbbc', 'width': 1200}, 'variants': {}}]} |
AI Max 395+ tips please | 6 | I've been enjoy my dual 5090 set-up but the models I'm running are just too small. Decided to get the 128gb 395+ to run larger models.
I'm seeing some mixed reviews where people give conflicting information on what/how to run.
What's the MUST DO for local LLM on the AI Max 395+? I'm planning either Popos24(my goto) or cachyos(idk sounds fun). | 2026-01-15T16:38:08 | No_Mango7658 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdofdx | false | null | t3_1qdofdx | /r/LocalLLaMA/comments/1qdofdx/ai_max_395_tips_please/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'q8q5r3bvjjdg1', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=108&crop=smart&auto=webp&s=27331bc8ff0db6c45658c76f8999038b7e43fc65', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=216&crop=smart&auto=webp&s=0e73fe1295ce6a2b5516eb5ced397d526db30abe', 'width': 216}, {'height': 552, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=320&crop=smart&auto=webp&s=72751048497a72515f2995467a62ba11856455d4', 'width': 320}, {'height': 1104, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=640&crop=smart&auto=webp&s=7d8c75ce439f64a65f35f86cc9a0f2b3a336b821', 'width': 640}, {'height': 1656, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=960&crop=smart&auto=webp&s=15b945ee192ed7be61eb9053b8d4f75f6bc66c71', 'width': 960}, {'height': 1863, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?width=1080&crop=smart&auto=webp&s=c6ad4987327a274eb94343c4213f4926d1a3f35e', 'width': 1080}], 'source': {'height': 2485, 'url': 'https://preview.redd.it/q8q5r3bvjjdg1.jpeg?auto=webp&s=35eb2a0ea2e84cfe5a75bc00b05e68142a46510a', 'width': 1440}, 'variants': {}}]} | |
Alexandre Pedrosa AI Integrator, Meta, Microsoft, Google, OpeAI.. | 0 | https://github.com/alexandrepedrosaai/MESHES-Meta-Microsoft-Second-and-Third-AI-Integration-and-AI-Endorsement-Smart-Contracs | 2026-01-15T16:05:52 | Altruistic_Bet3198 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdnjfh | false | null | t3_1qdnjfh | /r/LocalLLaMA/comments/1qdnjfh/alexandre_pedrosa_ai_integrator_meta_microsoft/ | false | false | 0 | {'enabled': True, 'images': [{'id': '1Btz0cVs7JS8Js6eGGtzihstM9akNPtP0wmAa917CzY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=108&crop=smart&auto=webp&s=516ed857761dbd5776c2b81b5d089fa6061b0378', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=216&crop=smart&auto=webp&s=0204885022142240e37593e6f2a3b14d0844cadb', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=320&crop=smart&auto=webp&s=f8b51897c683d02ee93124b38a5fba5429a3e718', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=640&crop=smart&auto=webp&s=b23c1542dfebf067258701375fe4473ea6043317', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=960&crop=smart&auto=webp&s=ad8cfdd6c75d2c892bcd6c92a78fd87cc59e07e7', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?width=1080&crop=smart&auto=webp&s=2c5ebc5074f4a2299d2e63ceed92a33231b96a3e', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/42yb5704ejdg1.jpeg?auto=webp&s=070f28a60465581069c01987e0a30d208b1f156f', 'width': 1080}, 'variants': {}}]} | ||
Best AI for coding that isn't from the major disgusting companies? (Local or online) | 0 | Hi guys which one in your opinion is the best AI to use that is open source and more ethical not to support cancer companies like Open AI, microsoft and so on? I use it mostly as a study partner for coding.
| 2026-01-15T16:04:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qdnhz2/best_ai_for_coding_that_isnt_from_the_major/ | Quiet_Bus_6404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdnhz2 | false | null | t3_1qdnhz2 | /r/LocalLLaMA/comments/1qdnhz2/best_ai_for_coding_that_isnt_from_the_major/ | false | false | self | 0 | null |
7x Longer Context Reinforcement Learning in Unsloth | 236 | Hey r/LocalLlama! We're excited to show how Unsloth now enables **7x longer context lengths** (up to 12x) for Reinforcement Learning! By using 3 new techniques we developed, we enable you to train gpt-oss 20b QLoRA up to **20K context on a 24Gb card** \- all with **no accuracy degradation**. Unsloth GitHub: [https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth)
* For larger GPUs, Unsloth now trains gpt-oss QLoRA with **380K context** on a single 192GB NVIDIA B200 GPU
* Qwen3-8B GRPO reaches **110K context** on an 80GB VRAM H100 via vLLM and QLoRA, and **65K** for gpt-oss with BF16 LoRA.
* Unsloth GRPO RL runs with Llama, Gemma & all models auto support longer contexts
Also, all features in Unsloth can be combined together and work well together:
1. Unsloth's [weight-sharing](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl) feature with vLLM and our Standby Feature in [Memory Efficient RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl)
2. Unsloth's [Flex Attention](https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training) for long context gpt-oss and our [500K Context Training](https://unsloth.ai/docs/new/500k-context-length-fine-tuning)
3. Float8 training in [FP8 RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning) and Unsloth's [async gradient checkpointing](https://unsloth.ai/blog/long-context) and much more
You can read our educational blogpost for detailed analysis, benchmarks and more: [https://unsloth.ai/docs/new/grpo-long-context](https://unsloth.ai/docs/new/grpo-long-context)
And you can of course train any model using our new features and kernels via our free fine-tuning notebooks: [https://docs.unsloth.ai/get-started/unsloth-notebooks](https://docs.unsloth.ai/get-started/unsloth-notebooks)
Some free Colab notebooks below which has the 7x longer context support backed in:
|[gpt-oss-20b](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-(20B)-GRPO.ipynb) GSPO Colab|[Qwen3-VL-8B](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_VL_(8B)-Vision-GRPO.ipynb) Vision RL|[Qwen3-8B - FP8](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_8B_FP8_GRPO.ipynb) L4 GPU|
|:-|:-|:-|
To update Unsloth to automatically make training faster, do:
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth_zoo
And to enable GRPO runs in Unsloth, do
import os
os.environ["UNSLOTH_VLLM_STANDBY"] = "1" # Standby = extra 30% context lengths!
from unsloth import FastLanguageModel
import torch
max_seq_length = 20000 # Can increase for longer reasoning traces
lora_rank = 32 # Larger rank = smarter, but slower
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "unsloth/Qwen3-4B-Base",
max_seq_length = max_seq_length,
load_in_4bit = False, # False for LoRA 16bit
fast_inference = True, # Enable vLLM fast inference
max_lora_rank = lora_rank,
)
Hope you all have a great rest of the week and thank you! | 2026-01-15T15:56:40 | danielhanchen | i.redd.it | 2026-01-15T16:05:50 | 0 | {} | 1qdna3t | false | null | t3_1qdna3t | /r/LocalLLaMA/comments/1qdna3t/7x_longer_context_reinforcement_learning_in/ | false | false | 236 | {'enabled': True, 'images': [{'id': 'n_r-m47P4gHQQrvVd4l7XwnxqQ7f4_8MrX8-a83hT98', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=108&crop=smart&auto=webp&s=646f1fc37efa0d5afdaebb19a4f93e1d2ab65d57', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=216&crop=smart&auto=webp&s=3eea07d77b6e4edff23ad77c341f9fd0a906a15c', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=320&crop=smart&auto=webp&s=9462ab421da9427aacd774a576e514501f136b5c', 'width': 320}, {'height': 714, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=640&crop=smart&auto=webp&s=3cd97dbf853be6596556f70c467d1dccc0cc22a1', 'width': 640}, {'height': 1072, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=960&crop=smart&auto=webp&s=1ef808d56184209bd44b6e057030a59eda54903d', 'width': 960}, {'height': 1206, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?width=1080&crop=smart&auto=webp&s=0ff42c371143287a2333c19bbf8645251c654faf', 'width': 1080}], 'source': {'height': 3350, 'url': 'https://preview.redd.it/nmkee12vbjdg1.png?auto=webp&s=1dddc57e23cad48fceface749bf934a6dd22717c', 'width': 3000}, 'variants': {}}]} | ||
Building a Local-First OS foundation for Trustable AI (Rust + Radxa RK3588). Open Source. | 4 | Hi everyone,
To make AI truly helpful, it needs context - it needs to see what I see. But streaming camera feeds to the cloud creates a privacy paradox.
**I believe privacy must be guaranteed by architecture, not just by policy.**
That is why I started **paiOS**. It is a Local-First OS foundation designed to enable **Trustable AI devices**.
**The Concept:** Instead of trusting a vendor's promise, the OS uses a strict runtime (Rust) to physically isolate sensors. Applications only receive data if the user explicitly grants access. "Don't trust, verify."
**The Roadmap (Pragmatic approach):**
1. **paiOS:** The core OS (Current focus, running on Radxa Rock 5C).
2. **paiLink:** A USB-NPU accelerator. It exposes standard APIs (Ollama/OpenAI compatible) to the host. Plug-and-play local AI for tools like VSCode, Obsidian, or n8n.
3. **paiGo:** The fully independent privacy-wearable (Long-term vision).
**Status:** Day 1. I just published the repository. It is a technical foundation, not a finished product yet.
**Links:**
* **Code:** [https://github.com/aurintex/pai-os](https://github.com/aurintex/pai-os)
* **Docs:** [https://docs.aurintex.com/](https://docs.aurintex.com/)
I would love your feedback on the architecture.
Cheers, Riccardo | 2026-01-15T15:53:41 | aurintex | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdn78z | false | null | t3_1qdn78z | /r/LocalLLaMA/comments/1qdn78z/building_a_localfirst_os_foundation_for_trustable/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'eiBtZ8NeJNAy6P9BeLzNhPp644mDST6Fl_dHdAw_j18', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=108&crop=smart&auto=webp&s=28ef1665cccbf0f02160299d33fac6cf6675a0c6', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=216&crop=smart&auto=webp&s=6c9ca363e7e9b65dcd06a58064c8ad4b4e3675f8', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=320&crop=smart&auto=webp&s=9241ec066402085956e4e3ee37150b69ca7eca83', 'width': 320}, {'height': 389, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=640&crop=smart&auto=webp&s=e74acdb1124a83d97c7b6a68648a2ee2a064f439', 'width': 640}, {'height': 584, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=960&crop=smart&auto=webp&s=1cd59fb5205765d5178c6c907425e494a0311a88', 'width': 960}, {'height': 657, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?width=1080&crop=smart&auto=webp&s=7eaee5637e81a310f2fc131fae40a0508554b733', 'width': 1080}], 'source': {'height': 2815, 'url': 'https://preview.redd.it/azz0wkmpajdg1.jpeg?auto=webp&s=b1310e6edd054608c63ef14bcaf40cbeae27d70e', 'width': 4624}, 'variants': {}}]} | ||
Custom RAG pipeline worth it? | 0 | I'm currently stuck between two paths for a new project involving RAG with PDFs and audio transcriptions.
On one hand, I could use a turnkey solution to get up and running fast. On the other hand, my users are "power users" who need more control than a standard ChatGPT-style interface. Specifically, they need to:
Manually correct/verify document OCR results.
Define custom chunks (not just recursive character splitting).
I see many "plug and play" tools, but I often hear that high-quality RAG requires a specialized pipeline.
For those who have built both: is it worth the effort to go full DIY with custom components (LangChain/LlamaIndex/Haystack), or are there existing solutions that allow this level of granular control? I don’t want to reinvent the wheel if a "one size fits all" tool actually handles these power-user requirements well.
Looking for any "lessons learned" from people who have implemented RAG pipelines in their product. What worked for you? | 2026-01-15T15:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qdmmn8/custom_rag_pipeline_worth_it/ | _camera_up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdmmn8 | false | null | t3_1qdmmn8 | /r/LocalLLaMA/comments/1qdmmn8/custom_rag_pipeline_worth_it/ | false | false | self | 0 | null |
My new morning routine - we sure live in exciting times! | 46 | 2026-01-15T15:21:03 | https://v.redd.it/n2nzwp246jdg1 | platinumai | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdmc2p | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/n2nzwp246jdg1/DASHPlaylist.mpd?a=1771082489%2CNTllOGYwNTEyNmIyZjU0MThkZTUwZmQ2ZGNmMGE2OWM1ZDJkMjNiYzQ1MWRjMmI3NzE3ZDQ2ZjJmNzI1MzA1MA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/n2nzwp246jdg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/n2nzwp246jdg1/HLSPlaylist.m3u8?a=1771082489%2CYzEyYmJjZmZkYzJmNzQ0MWRhYmU4ZTQ1Y2U0ZTJjM2I0ZDdhZDg3ZTk5NTgxZDYxYWUwNTAyOWQ0ODdlZmRiZQ%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/n2nzwp246jdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 576}} | t3_1qdmc2p | /r/LocalLLaMA/comments/1qdmc2p/my_new_morning_routine_we_sure_live_in_exciting/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'NmR1bTcweDM2amRnMXGSoRM9CsVG5MjIxh7_8FkNO_bGbbyWAgzzElo_0-YV', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/NmR1bTcweDM2amRnMXGSoRM9CsVG5MjIxh7_8FkNO_bGbbyWAgzzElo_0-YV.png?width=108&crop=smart&format=pjpg&auto=webp&s=12759ee819816b35316cb074b6441c941100f4f9', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/NmR1bTcweDM2amRnMXGSoRM9CsVG5MjIxh7_8FkNO_bGbbyWAgzzElo_0-YV.png?width=216&crop=smart&format=pjpg&auto=webp&s=fdada4f65265b43cc77fed3f1b3926b17026a2b9', 'width': 216}, {'height': 266, 'url': 'https://external-preview.redd.it/NmR1bTcweDM2amRnMXGSoRM9CsVG5MjIxh7_8FkNO_bGbbyWAgzzElo_0-YV.png?width=320&crop=smart&format=pjpg&auto=webp&s=bb2aa3895751fd1e4c64c2de5f0926561b486bf0', 'width': 320}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/NmR1bTcweDM2amRnMXGSoRM9CsVG5MjIxh7_8FkNO_bGbbyWAgzzElo_0-YV.png?format=pjpg&auto=webp&s=7219ab546ccf61179c5b8048c180073447c29b21', 'width': 576}, 'variants': {}}]} | ||
first time making my own product, im really happy | 0 | Been lurking here forever, never posted.
Just launched my first thing on Kickstarter and people are actually backing it?? Still feels surreal.
It's a tiny AI wearable called Berry — you talk to it and it turns what you say into notes and to-dos. Made it because I kept forgetting ideas right after I had them.
Anyway, would really appreciate honest feedback. Roast it if you want, I can take it. | 2026-01-15T15:09:16 | https://www.kickstarter.com/projects/berrygo/berry-lightest-ai-second-braincapture-inspiration-in-1-sec?ref=user_menu | supericx | kickstarter.com | 1970-01-01T00:00:00 | 0 | {} | 1qdm111 | false | null | t3_1qdm111 | /r/LocalLLaMA/comments/1qdm111/first_time_making_my_own_product_im_really_happy/ | false | false | default | 0 | null |
Raspberry Pi AI Hat+ 2 | 1 | [This](https://www.raspberrypi.com/news/introducing-the-raspberry-pi-ai-hat-plus-2-generative-ai-on-raspberry-pi-5/) was just released.
For vision-based models — such as Yolo-based object recognition, pose estimation, and scene segmentation
Says it can do LLMs, VLMS, up to 1.5B parameter models.
I am missing something here because this seems like it would suck or be slow as balls.
Never heard of a Hailo-10H neural network accelerator
Anyone have context into this chip? | 2026-01-15T15:08:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qdm050/raspberry_pi_ai_hat_2/ | Eam404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdm050 | false | null | t3_1qdm050 | /r/LocalLLaMA/comments/1qdm050/raspberry_pi_ai_hat_2/ | false | false | self | 1 | null |
WorldModel-Qwen3-0.6B : Building a "world model" into a thinking model as a modified toolcalling format. (Work in progress) | 2 | Recent discussions about AGI talk about world models being a requirement. While I have no aspirations for that kind of complexity, I thought it would be an interesting experiment to see if I could bake a "modeling" step after the <think> tag, where the model writes code to attempt to model the problem. In a way, this is just a glorified <tool> call, but I wanted something a little different, including a <requires> tag that allows our inference tool the ability to call the code.
If figure this is a way to take a VERY small model and give it the ability to look up or calculate answers without hallucinating them.
I'm using a QEMU-based VM system I created (scratch pad) to execute the code.
If people are interested, I'll see about expanding the test datset for the finetune and posting on huggingface. As is, you'd have to train it yourself.
This isn't revolutionary, but more an experiment in fine tuning a small model to go find the answers it needs using small bits of code.
User: what is 15% of 200?
<think>I need to calculate 15% of 200...</think>
<model>
result = 0.15 * 200
print(f"15% of 200 = {result}")
</model>
<requires>python:math</requires>
15% of 200 equals 30. | 2026-01-15T15:04:53 | https://github.com/bigattichouse/worldmodel | bigattichouse | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qdlwwn | false | null | t3_1qdlwwn | /r/LocalLLaMA/comments/1qdlwwn/worldmodelqwen306b_building_a_world_model_into_a/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=108&crop=smart&auto=webp&s=6ae2443bdd5b0d2b4ea8fe309b6c1b7c918a80d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=216&crop=smart&auto=webp&s=e674a84131b02b2ae9579546121ed674846807e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=320&crop=smart&auto=webp&s=052d83535c79c0764fac0e94ce3271c66a0163e5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=640&crop=smart&auto=webp&s=bc0ef10c525685d88fec0e841826d43a25fae8ed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=960&crop=smart&auto=webp&s=8b9204ba37b3801b4ca4d78b3d93a02e3971b90f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?width=1080&crop=smart&auto=webp&s=ffb2bb7ca27381163a0da4dae3d9d5af9a813958', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DgZKX_W-G2btlwviPw01K-Bo-WqwT1-voG4W5IAUKwc.png?auto=webp&s=768aa01c5b2b7852ebf92678a78df625dfc5669e', 'width': 1200}, 'variants': {}}]} |
Will you spend US$5,000 for a local surveillance VideoRAG device? | 0 | I know it all depends on the exact specs and features, but I’m trying to assess a monetary value of the perception of surveillance VideoRAG in general. Suppose, if a device can locally monitor and store 5 of your IP cameras 24/7 and respond to your queries based on the stored videos from last 30 days (while immediately searching the relevant clips), what is the maximum price you’d pay to purchase it? Please provide your number with some rationale why. | 2026-01-15T14:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qdlgpa/will_you_spend_us5000_for_a_local_surveillance/ | Middle_Investment_81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdlgpa | false | null | t3_1qdlgpa | /r/LocalLLaMA/comments/1qdlgpa/will_you_spend_us5000_for_a_local_surveillance/ | false | false | self | 0 | null |
Falcon 90M | 85 | ...it's not 90B it's 90M, so you can run it on anything :)
[https://huggingface.co/tiiuae/Falcon-H1-Tiny-90M-Instruct-GGUF](https://huggingface.co/tiiuae/Falcon-H1-Tiny-90M-Instruct-GGUF)
[https://huggingface.co/tiiuae/Falcon-H1-Tiny-Coder-90M-GGUF](https://huggingface.co/tiiuae/Falcon-H1-Tiny-Coder-90M-GGUF)
[https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-90M-GGUF](https://huggingface.co/tiiuae/Falcon-H1-Tiny-R-90M-GGUF)
| 2026-01-15T14:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1qdl9za/falcon_90m/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdl9za | false | null | t3_1qdl9za | /r/LocalLLaMA/comments/1qdl9za/falcon_90m/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=108&crop=smart&auto=webp&s=8e2c013ad4c604daa9c5cacf8940d1eefe225b1f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=216&crop=smart&auto=webp&s=b9566a2d8e693f7a02d9ff56b50e4fa99d066a7a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=320&crop=smart&auto=webp&s=d9816f482ebc9dc96371cc0a49861137298ba494', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=640&crop=smart&auto=webp&s=22cb9b9e37e4cdeaf656db0dee00828556680980', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=960&crop=smart&auto=webp&s=074a75ddda8161c8eeb0b72606d1fa1309ebad80', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?width=1080&crop=smart&auto=webp&s=bd19be29f6ef3cce1653c2607ba2aa869f91bd47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KWqFMZZsyTPHjYJkhwghE9feVJnj5N7djCZD8Z5yAbg.png?auto=webp&s=37f22033af5c92c27307f37023008a2875152f2f', 'width': 1200}, 'variants': {}}]} |
🧠 Inference seems to be splitting: cloud-scale vs local-first | 0 | Lately I've been thinking about where AI *inference* is actually heading.
I recently read a VentureBeat article arguing that inference is starting to split into two distinct paths:
- **Cloud-scale inference** for massive shared workloads (data centers, hyperscalers, orchestration at scale)
- **Local / on-device inference** for low-latency, private, offline-capable use cases
That framing resonated with me.
On one side, cloud inference keeps getting faster and more specialized (GPUs, NPUs, custom silicon). On the other, local inference keeps getting *good enough* - smaller models, quantization, better runtimes, and consumer hardware that can now comfortably run useful models.
What's interesting is that these paths optimize for **very different constraints**:
- Cloud: throughput, elasticity, centralized updates
- Local: privacy, latency, offline reliability, user ownership of context
Personally, I've been experimenting more with local-first setups recently (visual AI workflow automations platform, AI browser assistants, even game AI NPCs), and it's made me realize how often **privacy and latency** matter more than raw model size.
As models continue to shrink and hardware improves, I wouldn't be surprised if we see a clearer divide:
- cloud AI for scale and aggregation
- local/edge AI for personal, agentic, and interactive experiences
Curious how people here see it:
- Are you mostly building **cloud-first**, **local-first**, or **hybrid** systems?
- Do you think local inference will remain “secondary,” or become the default for many use cases?
Original article for context:
https://venturebeat.com/infrastructure/inference-is-splitting-in-two-nvidias-usd20b-groq-bet-explains-its-next-act/
| 2026-01-15T14:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qdl2i1/inference_seems_to_be_splitting_cloudscale_vs/ | Code-Forge-Temple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdl2i1 | false | null | t3_1qdl2i1 | /r/LocalLLaMA/comments/1qdl2i1/inference_seems_to_be_splitting_cloudscale_vs/ | false | false | self | 0 | null |
New arXiv review: "High-Performance Serverless" is the future of AI Inference (and Static Clusters are dying) | 0 | Just read through this new systematic review (arXiv:2601.09334) on Serverless for HPC/AI. It’s a solid read if you're dealing with infrastructure scaling.
The TL;DR:
1. Static Allocation is breaking: The paper argues that rigid GPU clusters can't handle modern "bursty" AI workloads efficiently. You either over-provision (waste money) or under-provision (crash during spikes).
2. Serverless is the fix: The industry is moving toward elastic, serverless execution models to survive the efficiency gap.
3. The Bottleneck: They identify Cold Start Latency as the #1 blocker preventing this shift.
We've been seeing this exact pattern in production. We actually built our engine specifically to solve that Cold Start problem via state snapshotting, so it's validating to see the academic side converging on the same architecture.
Paper link: \[https://arxiv.org/abs/2601.09334\]
Anyone seeing this shift from static -> serverless in their own clusters? | 2026-01-15T14:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qdkzdl/new_arxiv_review_highperformance_serverless_is/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdkzdl | false | null | t3_1qdkzdl | /r/LocalLLaMA/comments/1qdkzdl/new_arxiv_review_highperformance_serverless_is/ | false | false | self | 0 | null |
I've been working on yet another GGUF converter (YaGUFF). It is a GUI on top of llama.cpp (isn't everything?). | 55 | My goals here were self-educational so I'm curious to see how it survives contact with the outside world. It's supposed to be simple and easy. After weeks of adding features and changing everything I can't be sure. With some luck it should still be intuitive enough.
Installation should be as easy as a git clone and then running the appropriate run\_gui script for your system. Let me know how it goes!
[https://github.com/usrname0/YaGGUF](https://github.com/usrname0/YaGGUF) | 2026-01-15T14:18:51 | AllergicToTeeth | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdkqgd | false | null | t3_1qdkqgd | /r/LocalLLaMA/comments/1qdkqgd/ive_been_working_on_yet_another_gguf_converter/ | false | false | default | 55 | {'enabled': True, 'images': [{'id': 'qbt8bfuh9idg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=108&crop=smart&auto=webp&s=f563b168147917fad5c10c2616468ae5820034c7', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=216&crop=smart&auto=webp&s=27be941ae23bbb13898ab4922ab1991a27297c27', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=320&crop=smart&auto=webp&s=a94c30c687f967da5a5d4d30bec2dbe41998dfea', 'width': 320}, {'height': 350, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=640&crop=smart&auto=webp&s=84e565840ad80ad7d1a4fa5d61ba9da5057dbda1', 'width': 640}, {'height': 525, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=960&crop=smart&auto=webp&s=ab4a884b1bab2d5d01baa35645e6392af9cf14e1', 'width': 960}, {'height': 590, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?width=1080&crop=smart&auto=webp&s=f1e16f87003f05da83d2472c4947ec8aa3d57718', 'width': 1080}], 'source': {'height': 1221, 'url': 'https://preview.redd.it/qbt8bfuh9idg1.png?auto=webp&s=1ad1ab15dc3c1868198a1f00c6175ddd4af19c1f', 'width': 2232}, 'variants': {}}]} | |
"Memory Wall" is the real bottleneck for Inference, not Compute. Why aren't we taking HBM-CPUs seriously? | 1 | [removed] | 2026-01-15T14:05:24 | Any_Cod_4947 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdkeii | false | null | t3_1qdkeii | /r/LocalLLaMA/comments/1qdkeii/memory_wall_is_the_real_bottleneck_for_inference/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'i1hml70lsidg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/i1hml70lsidg1.jpeg?width=108&crop=smart&auto=webp&s=7f99a2d07e73682d0efc9b096ad825880265c488', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/i1hml70lsidg1.jpeg?width=216&crop=smart&auto=webp&s=c864dc9aee5f562b8ce6914ddea9e089381d5d73', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/i1hml70lsidg1.jpeg?width=320&crop=smart&auto=webp&s=878c58cb503bbaebd9058e65f49694de8c479154', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/i1hml70lsidg1.jpeg?width=640&crop=smart&auto=webp&s=61f4dca1d40fc18a7cfac8bb76c95c15473a422b', 'width': 640}], 'source': {'height': 349, 'url': 'https://preview.redd.it/i1hml70lsidg1.jpeg?auto=webp&s=d86df48b4134cf500f6968ec01096a226af09b23', 'width': 640}, 'variants': {}}]} | |
Why High-Bandwidth CPUs might actually challenge Nvidia for inference | 0 | We've been analyzing the current hardware landscape for enterprise AI deployments, and we're noticing a shift that isn't discussed enough: the "Memory Wall."
Everyone is scrambling for H100s, but for inference tasks especially with LLMs raw compute (FLOPs) is rarely the bottleneck. It's memory bandwidth. The tokens can only be generated as fast as you can move weights from memory to the compute unit.
This creates an opening for High-Bandwidth CPUs (like Intel's Xeon Max or AMD's MI300 variants). If CPUs can offer massive bandwidth directly on the die, the latency penalty of moving data over PCIe to a discrete GPU disappears for many use cases.
**The core argument:**
1. **Inference is bandwidth-bound:** For batch size 1 (real-time chat), you are almost always waiting on memory, not calculation.
2. **Cost efficiency:** High-end GPUs maintain their "moat" through HBM supply constraints and CUDA optimization. But if CPUs integrate HBM, the TCO for inference clusters could drop significantly.
3. **Simplicity:** Managing a CPU-only stack for inference removes substantial complexity in the software architecture (drivers, orchestration) compared to maintaining GPU clusters.
We dive deeper into the technical specs and the economic implications in our latest analysis. I’m curious if anyone here is already experimenting with HBM-equipped CPUs for local production workloads?
**Full analysis here:** [https://medium.com/researchable/nvidias-moat-is-leaking-the-rise-of-high-bandwidth-cpus-b4d4578457e4](https://medium.com/researchable/nvidias-moat-is-leaking-the-rise-of-high-bandwidth-cpus-b4d4578457e4) | 2026-01-15T13:42:17 | https://medium.com/researchable/nvidias-moat-is-leaking-the-rise-of-high-bandwidth-cpus-b4d4578457e4 | Any_Cod_4947 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1qdjtwa | false | null | t3_1qdjtwa | /r/LocalLLaMA/comments/1qdjtwa/why_highbandwidth_cpus_might_actually_challenge/ | false | false | default | 0 | null |
The Memory Wall is the real bottleneck: Why High-Bandwidth CPUs might actually challenge Nvidia for inference | 1 | I've been analyzing the current hardware landscape for enterprise AI deployments, and we're noticing a shift that isn't discussed enough: the "Memory Wall."
Everyone is scrambling for H100s, but for inference tasks especially with LLMs raw compute (FLOPs) is rarely the bottleneck. It's memory bandwidth. The tokens can only be generated as fast as you can move weights from memory to the compute unit.
This creates an opening for High-Bandwidth CPUs (like Intel's Xeon Max or AMD's MI300 variants). If CPUs can offer massive bandwidth directly on the die, the latency penalty of moving data over PCIe to a discrete GPU disappears for many use cases.
**The core argument:**
1. **Inference is bandwidth-bound:** For batch size 1 (real-time chat), you are almost always waiting on memory, not calculation.
2. **Cost efficiency:** High-end GPUs maintain their "moat" through HBM supply constraints and CUDA optimization. But if CPUs integrate HBM, the TCO for inference clusters could drop significantly.
3. **Simplicity:** Managing a CPU-only stack for inference removes substantial complexity in the software architecture (drivers, orchestration) compared to maintaining GPU clusters.
I dive deeper into the technical specs and the economic implications in our latest analysis. I’m curious if anyone here is already experimenting with HBM-equipped CPUs for local production workloads?
**Full analysis here:** [https://medium.com/p/b4d4578457e4](https://medium.com/p/b4d4578457e4) | 2026-01-15T13:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qdjrt7/the_memory_wall_is_the_real_bottleneck_why/ | Any_Cod_4947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdjrt7 | false | null | t3_1qdjrt7 | /r/LocalLLaMA/comments/1qdjrt7/the_memory_wall_is_the_real_bottleneck_why/ | false | false | self | 1 | null |
Best local LLM for M1 Max 32gb for a small law office? | 0 | I want proficiency in Greek and English. Also , how do you train the model? I just read about it somewhere and have no idea how it works. Thanks! | 2026-01-15T13:21:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qdjciv/best_local_llm_for_m1_max_32gb_for_a_small_law/ | findthemistke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdjciv | false | null | t3_1qdjciv | /r/LocalLLaMA/comments/1qdjciv/best_local_llm_for_m1_max_32gb_for_a_small_law/ | false | false | self | 0 | null |
Didn’t realize duplicate invoices were such a common (and expensive) problem | 0 | [removed] | 2026-01-15T13:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qdjbrt/didnt_realize_duplicate_invoices_were_such_a/ | Helpful_Milk_5618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdjbrt | false | null | t3_1qdjbrt | /r/LocalLLaMA/comments/1qdjbrt/didnt_realize_duplicate_invoices_were_such_a/ | false | false | self | 0 | null |
solution for local deep research | 13 | I am still trying to set up a good local deep research workflow.
What I’ve found so far:
* [https://github.com/assafelovic/gpt-researcher](https://github.com/assafelovic/gpt-researcher) – the best one so far, but I need to refresh the browser after each research run
* [https://github.com/bytedance/deer-flow](https://github.com/bytedance/deer-flow) – another good option, but I was only able to run it in text mode (without webui)
In general, you always need to set the OpenAI endpoint to a local LLM and then switch web search from a paid provider to duckduckgo, for example:
$env:OPENAI_BASE_URL = "http://127.0.0.1:8080/v1"
$env:RETRIEVER = "duckduckgo"
Another popular project is [https://github.com/Alibaba-NLP/DeepResearch](https://github.com/Alibaba-NLP/DeepResearch), but it looks like it requires a specific model.
Do you use something else? Please share your experiences.
| 2026-01-15T13:09:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qdj2nn/solution_for_local_deep_research/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdj2nn | false | null | t3_1qdj2nn | /r/LocalLLaMA/comments/1qdj2nn/solution_for_local_deep_research/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=108&crop=smart&auto=webp&s=1e6a60156250b10f1ccdec8e2048c10496ae68f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=216&crop=smart&auto=webp&s=a073cf6237230953f1ef6fadb279025be242baec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=320&crop=smart&auto=webp&s=74e9b4e5834ba5f7706a715c748b45245f3a7fe6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=640&crop=smart&auto=webp&s=7cbc58b0336a239cc7998d272957da5efa85a810', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=960&crop=smart&auto=webp&s=1d4e8f2219eace6d1b5b809ad716218a8a6d8b21', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?width=1080&crop=smart&auto=webp&s=3c55a791d0f9bdea4e69aba3e9515113f929e959', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/zIj3ZCnCTgiapX4S5a4AsQzcBt1PHe8Od8z7qw4Y96g.png?auto=webp&s=43b7e59d5a4355422a48288c50c222981cdfb9f3', 'width': 1800}, 'variants': {}}]} |
kimi lied to me | 0 | I should be more careful. Asked it to produce market prices for over 2000 items on a spread sheet. I had to break them up manually and feed to the Kimi.
After an hour of work. I asked it the source of the prices and it said it fabricated all the data!.
Damn I need to be more careful. It doesn't help Kimi never saying nice things.
https://preview.redd.it/tbgbyzm8didg1.png?width=1098&format=png&auto=webp&s=62e43c47bbe3a0224469538199e040f69b9ac345
| 2026-01-15T12:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qdifs0/kimi_lied_to_me/ | Hammerhead2046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdifs0 | false | null | t3_1qdifs0 | /r/LocalLLaMA/comments/1qdifs0/kimi_lied_to_me/ | false | false | 0 | null | |
Question: temporary private LLM setup for interview transcript analysis? | 1 | Hi,
I’m looking for advice on how to set up a temporary, private LLM environment to analyze qualitative interview transcripts (ask questions, find patterns, draw inferences across texts).
Key constraints:
- I don’t have strong coding skills and want to avoid complex setups
- I don’t want to train a model – just use an existing strong reasoning/instruct model
- Privacy matters: transcripts shouldn’t go into a public chat service or be stored long-term
- I only need this for 2–3 days and have a small budget
- Cloud is fine if it’s “my own” instance and can be deleted afterwards
What setups/tools would you recommend (e.g. platforms, UIs, models) with a low setup effort?
Thank you! | 2026-01-15T12:25:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qdi5e6/question_temporary_private_llm_setup_for/ | Lost-Fruit-3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdi5e6 | false | null | t3_1qdi5e6 | /r/LocalLLaMA/comments/1qdi5e6/question_temporary_private_llm_setup_for/ | false | false | self | 1 | null |
Curious about AI as ethical social guides? Fine-tuned a model for thoughtful companionship + built a full app around it | 1 | [removed] | 2026-01-15T12:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qdhqsx/curious_about_ai_as_ethical_social_guides/ | EmbarrassedHuman1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdhqsx | false | null | t3_1qdhqsx | /r/LocalLLaMA/comments/1qdhqsx/curious_about_ai_as_ethical_social_guides/ | false | false | self | 1 | null |
How to get local LLMs answer VERY LONG answers? | 10 | Even if they have a ton of context active (32K, 200K, whatever) I cannot get a model write a very long answer. Why is that? Is it possible with any trick to keep a model writing code or a long story on one shot?
I don't get how a model can have a huge context window, but it cannot give long answers.
I use LM Studio and all the common models (gptoss 20b, qwen 3, those from mistral, nemotron 3, lfm2.5, and so on).
Isn't there a way to set how long the answer should be? | 2026-01-15T12:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qdhphy/how_to_get_local_llms_answer_very_long_answers/ | mouseofcatofschrodi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdhphy | false | null | t3_1qdhphy | /r/LocalLLaMA/comments/1qdhphy/how_to_get_local_llms_answer_very_long_answers/ | false | false | self | 10 | null |
Finally finished my all-in-one Local AI app (Flux, Music, Agent) | 16 | # Finally finished my all-in-one Local AI app (Flux, Music, Agent)
Just wanted to show off what I’ve been building for the last few months.
It’s called **V6rge**. Basically, I got tired of dealing with 10 different command-line windows just to run Flux, a Chatbot, and some standard tools. So I built a single, unified desktop app for all of them.
**What it does :**
* **Local Mode:** An agent that can actually control your PC by instructing it .
* **Image Gen:** Flux.1 & Qwen-Image (no subscriptions, just your GPU).
* **Music:** Generates tracks with MusicGen.
* **Video:** HunyuanVideo support.
* Vocal Remover
**The Update (v0.1.5):** I posted this a while ago and the installer was... kinda buggy 😅. I spent the last week rewriting the backend extraction logic. **v0.1.5 is live now**.
**Link:** [https://github.com/Dedsec-b/v6rge-releases-/releases/tag/v0.1.5](https://github.com/Dedsec-b/v6rge-releases-/releases/tag/v0.1.5)
Let me know if it breaks (but it shouldn't this time lol).
https://preview.redd.it/9bg618685idg1.png?width=1343&format=png&auto=webp&s=88428534e918dafdea84bc1de329f90e36494700
https://preview.redd.it/3askgei85idg1.png?width=1352&format=png&auto=webp&s=911bde2512c5a5f5da08c7fa48fe494092d67921
https://preview.redd.it/s1jk3l695idg1.png?width=1353&format=png&auto=webp&s=35eb437bb288447954826a3a474462547f109066
https://preview.redd.it/koeschr95idg1.png?width=1365&format=png&auto=webp&s=703aebfc380ef332d73589ee99e37d42461583f1
| 2026-01-15T11:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qdhkxi/finally_finished_my_allinone_local_ai_app_flux/ | Motor-Resort-5314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdhkxi | false | null | t3_1qdhkxi | /r/LocalLLaMA/comments/1qdhkxi/finally_finished_my_allinone_local_ai_app_flux/ | false | false | 16 | null | |
[Project] Benchmark your local LLM inference speed with auto-submission (One-line install + Multi-GPU DP support) | 1 | Hi r/LocalLLaMA,
We are working on a project to collect and visualize real-world LLM inference performance across various hardware setups (Consumer GPUs, Macs, Server grade, etc.).
We realized it's often hard to compare "apples to apples" performance without a standardized test. So, we built a CLI tool that streamlines the process with auto-submission.
**Key Features:**
* **Standardized Testing:** Consistent models and settings for fair comparison.
* **Auto-Submission:** Results are automatically uploaded—no manual copy-pasting required.
* **Multi-GPU Ready:** Automatically detects multi-card setups and launches in **Data Parallel (DP)** mode to maximize throughput testing.
* **Smart Coverage:** The tool prioritizes models that haven't been tested enough on your specific hardware class.
**🚀 Quick Start**
You can install and run the full benchmark suite with a single command:
Bash
curl -fsSL https://ai.0.af/install.sh | bash && source ~/.bashrc && aibench autorun
**Advanced Usage**
If you want to contribute specifically where data is missing, or randomize the test order:
Bash
# Prioritize missing coverage (helps fill gaps in our database)
curl -fsSL https://ai.0.af/install.sh | bash && source ~/.bashrc && aibench autorun --fill-gaps
# Randomize model order
curl -fsSL https://ai.0.af/install.sh | bash && source ~/.bashrc && aibench autorun --shuffle
Check out the leaderboard and project here:[https://ai.0.af/](https://ai.0.af/)
We’d love to see how your rig performs. Let us know if you run into any issues! | 2026-01-15T11:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qdhdlt/project_benchmark_your_local_llm_inference_speed/ | Tiredwanttosleep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdhdlt | false | null | t3_1qdhdlt | /r/LocalLLaMA/comments/1qdhdlt/project_benchmark_your_local_llm_inference_speed/ | false | false | self | 1 | null |
I built a 100% Rust orchestrator to chain local models (Ollama, Whisper) without Python or LangChain. Runs entirely offline. | 0 | Hey r/LocalLLaMA,
Like many of you, I got tired of the "modern" AI stack. I wanted to build complex workflows (like "watch folder -> transcribe audio -> summarize text -> save to obsidian"), but every tool out there felt like overkill. They were either wrappers around the OpenAI API or massive Python frameworks that required a venv just to say "hello."
I wanted the "Unix pipes" philosophy, but for local intelligence. So I built **LAO (Local AI Orchestrator)**.
**The Pitch:** It’s a desktop app (in alpha so lower ur expectations) written in **Rust** (backend + native egui frontend) that lets you chain local models into Directed Acyclic Graphs (DAGs). It runs completely offline.
**Key Features:**
* **No Python Required:** It's a single binary. No `pip install`, no CUDA version conflicts.
* **The Stack:** It connects to **Ollama** for LLMs and uses native Rust bindings for tools like Whisper.
* **Visual Builder:** I built a node-based graph editor in `egui` so you can visually drag-and-drop steps (or just write YAML if you prefer).
* **Prompt-to-Workflow:** You can literally type "Summarize this audio file and tag action items," and it uses a local model (like Llama 3) to generate the execution graph for you.
* **Plugin System:** I implemented a dynamic loading system (`.dll`/`.so`) so you can write your own high-performance plugins in Rust and drop them in.
**Why I built it:** I realized that if we want "Edge AI" to be real, we need tools that respect system resources. Python is great for prototyping, but I wanted something that could run in the background without eating 4GB of RAM just for the orchestrator itself.
**Repo:** [https://www.github.com/abendrothj/lao](https://www.github.com/abendrothj/lao)
It’s open source (MIT). I’d love to hear what kind of local workflows you all are running and if this would be useful for your setups. | 2026-01-15T11:32:37 | stxrmcrypt | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdh5l5 | false | null | t3_1qdh5l5 | /r/LocalLLaMA/comments/1qdh5l5/i_built_a_100_rust_orchestrator_to_chain_local/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'uzq44ca1xhdg1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=108&crop=smart&auto=webp&s=2665a23280ea453cb9ac3b68cf3027527d29bca5', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=216&crop=smart&auto=webp&s=ed40419d7ef78125bbc681adbfd2674d6ee68327', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=320&crop=smart&auto=webp&s=84f885bf3e7c2fb26328502b66062fdf245c97d2', 'width': 320}, {'height': 451, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=640&crop=smart&auto=webp&s=72539543c1f46574256c1d988775f0e7d56e5d30', 'width': 640}, {'height': 677, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=960&crop=smart&auto=webp&s=d6da4bca8d8c30c46145d53ca250a1ff8faf4578', 'width': 960}, {'height': 762, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?width=1080&crop=smart&auto=webp&s=ac861d9d6bf05bdd4823e25dc75cbad33e23f759', 'width': 1080}], 'source': {'height': 1682, 'url': 'https://preview.redd.it/uzq44ca1xhdg1.png?auto=webp&s=61895bf6eaab270d360153772425595250e65849', 'width': 2382}, 'variants': {}}]} | |
RTX 5070 Ti and RTX 5060 Ti 16 GB no longer manufactured | 229 | Nvidia has essentially killed off supply for the RTX 5070 Ti. Also supply of RTX 5060 Ti 16 GB has been significantly reduced. This happened partially due to memory supply shortages. This means that most AIBs will no longer manufacture these GPUs. Prices are already jumping significantly. The 5070 Ti has risen \~$100 over MSRP, and retailers expect further hikes. 8 GB configuration of RTX 5060 Ti remains unaffected.
Credit: Hardware Unboxed
[https://m.youtube.com/watch?v=yteN21aJEvE](https://m.youtube.com/watch?v=yteN21aJEvE) | 2026-01-15T11:27:15 | https://www.reddit.com/r/LocalLLaMA/comments/1qdh28f/rtx_5070_ti_and_rtx_5060_ti_16_gb_no_longer/ | Paramecium_caudatum_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdh28f | false | null | t3_1qdh28f | /r/LocalLLaMA/comments/1qdh28f/rtx_5070_ti_and_rtx_5060_ti_16_gb_no_longer/ | false | false | self | 229 | {'enabled': False, 'images': [{'id': 'dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=108&crop=smart&auto=webp&s=ae0c3bd55a2886bf5b21ac75252dd3634059b337', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=216&crop=smart&auto=webp&s=decb4b5768f2ecf5831d8c578f2433da78e301f1', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=320&crop=smart&auto=webp&s=075e717aaa94bf7307eb5002d33a138eb6da8e5e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?auto=webp&s=86bd6a80a92bc8a6d90647747e201c3031a5e867', 'width': 480}, 'variants': {}}]} |
Mistral 3 just released! | 0 | Model Collection: [https://huggingface.co/collections/mistralai/ministral-3](https://huggingface.co/collections/mistralai/ministral-3)
Tech Report: [https://arxiv.org/abs/2601.08584](https://arxiv.org/abs/2601.08584) | 2026-01-15T11:26:41 | https://mistral.ai/news/mistral-3 | Accomplished_Ad9530 | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1qdh1vp | false | null | t3_1qdh1vp | /r/LocalLLaMA/comments/1qdh1vp/mistral_3_just_released/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]} | |
Ministral 3 just released | 2 | Model Collection: [https://huggingface.co/collections/mistralai/ministral-3](https://huggingface.co/collections/mistralai/ministral-3)
Tech Report: [https://arxiv.org/abs/2601.08584](https://arxiv.org/abs/2601.08584) | 2026-01-15T11:24:27 | https://mistral.ai/news/mistral-3 | Accomplished_Ad9530 | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1qdh0j4 | false | null | t3_1qdh0j4 | /r/LocalLLaMA/comments/1qdh0j4/ministral_3_just_released/ | false | false | default | 2 | null |
What is the impact of running (some or all) PCIe5 GPUs on PCIe4 slot (with the same # of lanes) in a multi-GPU server? | 1 | I was thinking about multi-GPU scenarios where a mobo either has no PCIe5 at all, or a limited number of them with the rest being PCIe4.
Someone told me that running PCIe5 cards in a multi-GPU setup on PCIe4 for LLM is not a big deal and doesn't affect pp and tg speeds when sharding a model across multiple GPUs.
However, I've been going down the rabbit hole and it seems that, at least in theory, that's not the case.
Suppose, we have 6x GPUs 24GB VRAM each (I have Arc Pro B60's in mind, which is a PCIe5 x8 card natively) for a total of 144 VRAM.
Suppose, we want to run a model that takes (with overhead and context cache) close to 144 VRAM, so full sharding across 6x GPUs.
Suppose, 2x out of 6x B50 run on PCIe4 x8 instead of PCIe5 x8.
Wouldn't it be the case that if the model is actually sharded across all 6 GPUs (so the GPUs must exchange activations/partials during every forward pass), then the two GPUs running at PCIe 4.0 x8 can reduce both prefill throughput and token-generation speed by becoming "slow links" in the multi‑GPU communication path?
I'm curious if anyone had a chance to observe the difference in multi-GPU setups (even if it's only 2x cards) when moving some of all of the PCIe5 cards to PCIe4: Did you experience noticeable drop in pp/tg speeds, and how much?
Based on your experience, if you had to guess:
What would be the impact of 1x GPU (out of 6) at PCIe4, in your opinion?
What would be the impact of 2x GPUs at PCIe4, in your opinion?
What would be the impact if all of them are on PCIe4?
(I.e., how does it down-scale, if it does?)
| 2026-01-15T11:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qdh026/what_is_the_impact_of_running_some_or_all_pcie5/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdh026 | false | null | t3_1qdh026 | /r/LocalLLaMA/comments/1qdh026/what_is_the_impact_of_running_some_or_all_pcie5/ | false | false | self | 1 | null |
Microsoft releases FrogMini on HF. Built on Qwen3-14B | 2 | Hugging face: [https://huggingface.co/microsoft/FrogMini-14B-2510](https://huggingface.co/microsoft/FrogMini-14B-2510)
Achieving state-of-the-art performance on SWE-Bench Verified (Pass@1: 45.0%)
Employs supervised fine-tuning (SFT) on successful debugging trajectories generated by a strong teacher model (e.g., Claude), obtained from a mix of real-world and synthetic bug datasets | 2026-01-15T11:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qdgtny/microsoft_releases_frogmini_on_hf_built_on/ | Difficult-Cap-7527 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdgtny | false | null | t3_1qdgtny | /r/LocalLLaMA/comments/1qdgtny/microsoft_releases_frogmini_on_hf_built_on/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=108&crop=smart&auto=webp&s=bc5221433884db74b5cbfcfbf837a51ff14fe28e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=216&crop=smart&auto=webp&s=bad3bbc3878e7859c7a85291207bcfdb92a9055c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=320&crop=smart&auto=webp&s=b1bf87e13b664a96f05baa78d6149458ddca671a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=640&crop=smart&auto=webp&s=6c4b90a416b87d34ff34137e175b64d31eedc467', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=960&crop=smart&auto=webp&s=26ee5a552b927e963d37bdb55cbc8fbb7d4091fa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?width=1080&crop=smart&auto=webp&s=5c019babfd2e95e325409dc22fb5300467bee2ae', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9z7a37kgPjb8VzDQoeXE-zuBzj-58iOLaF-tmw7HTiU.png?auto=webp&s=5a83228051e1b3e2d8bce0114212d845eb38e877', 'width': 1200}, 'variants': {}}]} |
Local AI App With SD-1.5 Models | 1 | Got tired of the existing Android local AI apps being slow and losing chat history, so I rewrote mine.
Runs any GGUF model + SD 1.5 (uncensored) offline. One user reported their 8B Q6 went from 30sec response time to 7sec after the rewrite. Encrypted storage with WAL so conversations don't corrupt.
Right now you can load local models or add HuggingFace repos to browse available GGUFs. Working on RAG system for document injection.
No cloud, no tracking, no accounts. Apache 2.0.
GitHub: [https://github.com/Siddhesh2377/ToolNeuron](https://github.com/Siddhesh2377/ToolNeuron)
Play Store: [https://play.google.com/store/apps/details?id=com.dark.tool\_neuron](https://play.google.com/store/apps/details?id=com.dark.tool_neuron)
Built it for myself, sharing in case it's useful to anyone else. | 2026-01-15T11:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qdgpx3/local_ai_app_with_sd15_models/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdgpx3 | false | null | t3_1qdgpx3 | /r/LocalLLaMA/comments/1qdgpx3/local_ai_app_with_sd15_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=108&crop=smart&auto=webp&s=6decc2c8e1dd1fad55de7d48bf35c7e39ef3e7a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=216&crop=smart&auto=webp&s=a4bb0266f9e33de606409e8dc3bbf74da877aee6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=320&crop=smart&auto=webp&s=bad229d6ff8835507463e38025367fb9a26fe119', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=640&crop=smart&auto=webp&s=1687a1e57bb7db5faf99778f352614a2027c799c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=960&crop=smart&auto=webp&s=08ba4ce247f9e9604c7acade4ef5c15c25393dc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?width=1080&crop=smart&auto=webp&s=7b0093fec4584e5083f097ee167a2cd7ebec7848', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NsJk07DCAgoa5IzBmVhvY4L_KKwgNXadLAq_Y-RUfj8.png?auto=webp&s=8d3b9c21a14929db6a409ec58abe583bccfd7805', 'width': 1280}, 'variants': {}}]} |
MiniMax-M2.1 REAP models (0xSero) are fixed! | 51 | Previously, some experts where mistakenly left out and that caused loops, new GGUF uploads happening right now.
\- REAP-20 Deprecated
\- REAP-30 **Fixed**
\- REAP-40 **Fixed**
\- REAP-50 Deprecated
[https://huggingface.co/mradermacher/MiniMax-M2.1-REAP-30-GGUF](https://huggingface.co/mradermacher/MiniMax-M2.1-REAP-30-GGUF)
[https://huggingface.co/mradermacher/MiniMax-M2.1-REAP-40-GGUF](https://huggingface.co/mradermacher/MiniMax-M2.1-REAP-40-GGUF) | 2026-01-15T10:48:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qdgeak/minimaxm21_reap_models_0xsero_are_fixed/ | AdamDhahabi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdgeak | false | null | t3_1qdgeak | /r/LocalLLaMA/comments/1qdgeak/minimaxm21_reap_models_0xsero_are_fixed/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=108&crop=smart&auto=webp&s=183f4ca1eca90173e61293398b23badb04f400ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=216&crop=smart&auto=webp&s=017dbdd87da98e578b579c5b00582ad4eba0c2d5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=320&crop=smart&auto=webp&s=1b9e89d2f14c9a5f0eb76a58ac03a3628561ca3d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=640&crop=smart&auto=webp&s=77af51be399ad17c96463e485cbe60d13284778f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=960&crop=smart&auto=webp&s=f30f9fcdda14f0e93ad7c17aed936f7943158468', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?width=1080&crop=smart&auto=webp&s=39044805a8d8ae82c217720bb54dd5c5f76bdfd2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z032RXX5jad-ma_orE3eYhh4A2ItNZUj_yWYnS15G7M.png?auto=webp&s=7e4d90264dcc34c2aab1effbb17a272a007f23b7', 'width': 1200}, 'variants': {}}]} |
RTX 5070 Ti is end of life now | 0 | [https://m.youtube.com/watch?v=yteN21aJEvE](https://m.youtube.com/watch?v=yteN21aJEvE)
Nvidia is killing off supply for RTX 5070 Ti and RTX 5060 Ti 16 gb.
Credit: Hardware Unboxed | 2026-01-15T10:48:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qdge4b/rtx_5070_ti_is_end_of_life_now/ | Paramecium_caudatum_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdge4b | false | null | t3_1qdge4b | /r/LocalLLaMA/comments/1qdge4b/rtx_5070_ti_is_end_of_life_now/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=108&crop=smart&auto=webp&s=ae0c3bd55a2886bf5b21ac75252dd3634059b337', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=216&crop=smart&auto=webp&s=decb4b5768f2ecf5831d8c578f2433da78e301f1', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?width=320&crop=smart&auto=webp&s=075e717aaa94bf7307eb5002d33a138eb6da8e5e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/dJjS3Ve9z-p_IzlGIqMccDsq-fqcnhnXiDVcaUy0gEw.jpeg?auto=webp&s=86bd6a80a92bc8a6d90647747e201c3031a5e867', 'width': 480}, 'variants': {}}]} |
AI created this app in 12hrs. Used open models, mostly local LLMs. | 0 | From a single prompt to a fully working project. I wrote less than 1% of the code.
MindMapp is available here: [https://mindm.app](https://mindm.app)
GitHub repo: [https://github.com/cepa/mindmapp](https://github.com/cepa/mindmapp)
The first version took 12 hours of work with various AI models with focus on using local ones. Call it vibe coding or agentic coding but the productivity boost is not just 10x it way more. In fact, the basic app was made in two hours, the rest was debugging and fixing issues on desktop and mobile devices.
Used local models:
\- Devstrall Small 2 - a swiss army knife, fast but you need to be precise to get the right result
\- Seed OSS - a real gem, heavy dense and slow, but you can just throw a task that require thinking and it delivers
\- GLM-4.5-Air - experienced web developer, understands the UI/UX aspects of a project better than Seed OSS
Used open models (via OpenRouter):
\- GLM-4.7 - an absolute beast in terms of app development, can refactor parts of project to make it work better, understands what app does and how it should work
\- Kimi K2 - architect, good for high level design
\- Qwen3 Max - architect, its nice to have a side by side comparision with Kimi
Process:
\- Create a mockup of an app that does X
\- Create app scaffold in Angular
\- Analyze the generated mockups and create components
\- Improve UI/UX
\- Debug, fix, debug, fix, debug, fix...
\- Dockerize
\- Deploy
(see VIBE.md)
Can you substitute huge online models with specialized local ones?
It depends.
Devstral Small 2 and alike are very handy and fast if you know exactly what needs to be done, also the more code can be used as a reference the better they get. However, they often lack proper understanding.
Local Seed OSS and GLM-4.5-Air are far better in solving complex issues requiring thinking but are slow. So, you will probably prefer to run a faster online model unless you are patitent or limited by privacy.
Well, I had to do some google search and look at Stack Overflow despite having all mighty power of LLM, so some human skill and understanding is still needed :)
Feel free to test the app and let me know please if you see issues or have ideas for improvement. | 2026-01-15T10:40:28 | ChopSticksPlease | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qdg9gd | false | null | t3_1qdg9gd | /r/LocalLLaMA/comments/1qdg9gd/ai_created_this_app_in_12hrs_used_open_models/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'RHd7CjBzK5b3-wDv6rfmsjbzysl5hgCg5vRTooV076E', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/kk1n16o9rhdg1.png?width=108&crop=smart&auto=webp&s=616854fc8c0cd3326a205929ac3e22aa8fb0f37e', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/kk1n16o9rhdg1.png?width=216&crop=smart&auto=webp&s=d1147fa2fc2af605ce6c39a07278ac8d917e47c1', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/kk1n16o9rhdg1.png?width=320&crop=smart&auto=webp&s=85eb3708737de9bd00340f5fae17db32dfbb27fd', 'width': 320}, {'height': 643, 'url': 'https://preview.redd.it/kk1n16o9rhdg1.png?width=640&crop=smart&auto=webp&s=f7c2243f59c35df599c45a5f0ad822dd02546964', 'width': 640}], 'source': {'height': 833, 'url': 'https://preview.redd.it/kk1n16o9rhdg1.png?auto=webp&s=d6003236a0ff9d65b80ceb562514a5233823ea4f', 'width': 828}, 'variants': {}}]} | ||
Anyone finetuned the OLMocr 2 on custom data? | 4 | I need help in fine tuning OLMocr on custom dataset including data preparation pipeline | 2026-01-15T10:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qdfosg/anyone_finetuned_the_olmocr_2_on_custom_data/ | nightwing_2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdfosg | false | null | t3_1qdfosg | /r/LocalLLaMA/comments/1qdfosg/anyone_finetuned_the_olmocr_2_on_custom_data/ | false | false | self | 4 | null |
Mi50 32gb vbios flash | 1 | Hey all! Got two mi50s for a local rig and want to also use them as display outs. Does anyone have any working vbios's for them? (If possible please no baidu links downloading them is a nightmare) All help is much appreciated, thanks! | 2026-01-15T09:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qdfefa/mi50_32gb_vbios_flash/ | onephn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdfefa | false | null | t3_1qdfefa | /r/LocalLLaMA/comments/1qdfefa/mi50_32gb_vbios_flash/ | false | false | self | 1 | null |
Raspberry Pi AI HAT+ 2 launch | 7 | The Raspberry Pi AI HAT+ 2 is available now at $130, with 8 GB onboard LPDDR4X-4267 SDRAM, with the Hailo-10H accelerator
Since it uses the only pcie express port, there's no easy way to have both the accelerator and an nvme at the same time I presume.
What do you guys this about this for edge LLMs ? | 2026-01-15T09:33:04 | https://www.raspberrypi.com/products/ai-hat-plus-2/ | nicolash33 | raspberrypi.com | 1970-01-01T00:00:00 | 0 | {} | 1qdf6h7 | false | null | t3_1qdf6h7 | /r/LocalLLaMA/comments/1qdf6h7/raspberry_pi_ai_hat_2_launch/ | false | false | default | 7 | null |
Tired of LLM Hallucinations in Data Analysis? I’m building a "Universal Excel Insight Engine" using RAG. | 0 | Hey everyone,
I’ve been working on a project to solve a problem we’ve all faced: getting LLMs to reliably analyze structured data without making things up or losing track of the schema.
I’m calling it the Universal Excel Insight Engine. It’s a RAG-based tool designed to ingest any .XLSX file (up to 200MB) and provide evidence-based insights with a strict "No Hallucination" policy.
What makes it different?
Schema-Aware: Instead of just dumping text into a vector DB, it understands the relationship between columns and rows.
Data Quality Guardrails: It automatically flags "Data Quality Gaps" like missing visit dates, null status codes, or repeated IDs.
Low-Information Detection: It identifies records that lack proper explanation (e.g., short, vague notes like "Not Working") so you can clean your data before deep analysis.
Evidence-Based: Every insight is tied back to the specific row index and rule applied, so you can actually verify the output.
Current Progress:
Right now, it’s great at identifying "what’s wrong" with a dataset (audit mode) and extracting specific patterns across thousands of rows. I’m currently working on making it even more advanced—moving toward deeper predictive insights and more complex multi-sheet reasoning.
I’d love to get some feedback from this community.
What are the biggest "deal-breakers" for you when using RAG for Excel?
What kind of "Deep Insights" would you find most valuable for a tool like this to surface automatically?
I'm still in active development, so I'm open to all suggestions! | 2026-01-15T08:28:53 | https://v.redd.it/aunecc7k4hdg1 | Accomplished_Life416 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qde614 | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/aunecc7k4hdg1/DASHPlaylist.mpd?a=1771057750%2COWI1YjE1MmNkNTJhZGY1NGM4ZWRjYjI4YzQzNTk4NWYyZTNiOTdlZDZmOGU1OTZiYTdhZWJjZTAzMzMzNzljNw%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/aunecc7k4hdg1/CMAF_360.mp4?source=fallback', 'has_audio': True, 'height': 640, 'hls_url': 'https://v.redd.it/aunecc7k4hdg1/HLSPlaylist.m3u8?a=1771057750%2CMmRjMGM0ZTkyOGU2ZDczYmU5NTdmMzc4NzU1YmVmNjY4M2VlZjVmZjBhY2ExNjg0ZWM2MGNjMGYzYjlhMGEwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aunecc7k4hdg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 288}} | t3_1qde614 | /r/LocalLLaMA/comments/1qde614/tired_of_llm_hallucinations_in_data_analysis_im/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI.png?width=108&crop=smart&format=pjpg&auto=webp&s=2248acd4c68b918ef0ddd37120308191bda8804b', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI.png?width=216&crop=smart&format=pjpg&auto=webp&s=fc3a36902084947ecd28d060ed4d1b9fc6409062', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI.png?width=320&crop=smart&format=pjpg&auto=webp&s=276e0d421de4b03c8d22a467067f3f4f683fd3df', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI.png?width=640&crop=smart&format=pjpg&auto=webp&s=b4c8f981d6889b63417a8c751cef0e84bc0644b8', 'width': 640}], 'source': {'height': 1792, 'url': 'https://external-preview.redd.it/eDFtdDZhOGs0aGRnMf7vQmqNzPHFLGwo0f7Ld1rEa5Y5H5Pxe6jZiY4Jd9EI.png?format=pjpg&auto=webp&s=038d307ea17c4e1a64f6201aa3b996ae1e704904', 'width': 805}, 'variants': {}}]} | |
CPT (40 epochs) + SFT (10 epochs) vs. Pure SFT (50 epochs): Why is CPT failing for my Domain-Specific Regulation Model? (Qwen3-8B) | 1 | I have a question. I'm trying to fine-tune an electricity regulation model on the qwen3:8B dataset. Currently, I'm trying to train using CPT (40 epochs) + SFT (10 epochs), but the results are not as good as with SFT (50 epochs), even though the latter is clearly overfitting. However, training with SFT for 50 epochs works much better. Has anyone fine-tuned a regulation dataset? How did you do it?
## My Setup & Experiment:
. Base Model: Qwen3:8B
Domain: 2000 domain-specific datasets + 1500 general knowledge datasets
. Approach A (CPT + SFT):
CPT: 40 epochs on raw regulation text (unstructured PDF/text data).
SFT: 8 epochs on Q&A pairs/instruction data.
. Approach B (Pure SFT):
SFT: 50 epochs on the same instruction dataset.
. Approach C (Pure SFT):
SFT: 8 epochs on the same instruction dataset.
---
#### base model:
```json
{
"predict_bleu-4": 7.944340499999999,
"predict_model_preparation_time": 0.0088,
"predict_rouge-1": 24.4166615,
"predict_rouge-2": 8.1770335,
"predict_rouge-l": 11.698803499999999,
"predict_runtime": 3556.8434,
"predict_samples_per_second": 0.056,
"predict_steps_per_second": 0.028
}
```
#### C model:
```json
{
"predict_bleu-4": 38.525301999999996,
"predict_model_preparation_time": 0.01,
"predict_rouge-1": 54.4093715,
"predict_rouge-2": 36.7284015,
"predict_rouge-l": 46.633444,
"predict_runtime": 6134.5863,
"predict_samples_per_second": 0.033,
"predict_steps_per_second": 0.016
}
```
#### B model
```json
{
"predict_bleu-4": 44.747576,
"predict_model_preparation_time": 0.0102,
"predict_rouge-1": 58.0690875,
"predict_rouge-2": 41.78211,
"predict_rouge-l": 51.8099465,
"predict_runtime": 6330.4745,
"predict_samples_per_second": 0.032,
"predict_steps_per_second": 0.016
}
```
#### A model:
```json
{
"predict_bleu-4": 42.1212865,
"predict_model_preparation_time": 0.0101,
"predict_rouge-1": 56.6093725,
"predict_rouge-2": 39.5310185,
"predict_rouge-l": 49.4864215,
"predict_runtime": 6180.096,
"predict_samples_per_second": 0.032,
"predict_steps_per_second": 0.016
}
```
| 2026-01-15T08:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qde5ua/cpt_40_epochs_sft_10_epochs_vs_pure_sft_50_epochs/ | Ok-Money-9173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qde5ua | false | null | t3_1qde5ua | /r/LocalLLaMA/comments/1qde5ua/cpt_40_epochs_sft_10_epochs_vs_pure_sft_50_epochs/ | false | false | self | 1 | null |
Step-Audio-R1.1 (Open Weight) by StepFun just set a new SOTA on the Artificial Analysis Speech Reasoning leaderboard | 28 | [https://x.com/ModelScope2022/status/2011687986338136089](https://x.com/ModelScope2022/status/2011687986338136089)
[https://huggingface.co/stepfun-ai/Step-Audio-R1.1](https://huggingface.co/stepfun-ai/Step-Audio-R1.1)
**It outperforms Grok, Gemini, and GPT-Realtime with a 96.4% accuracy rate.**
* Native Audio Reasoning (End-to-End)
* Audio-native CoT (Chain of Thought)
* Real-time streaming inference
* FULLY OPEN SOURCE
https://preview.redd.it/wln8b464sgdg1.png?width=6507&format=png&auto=webp&s=fc83bc19c38b1c973fe264d3f32ca1b0ee860fbc
https://preview.redd.it/nxnh1w35sgdg1.png?width=3960&format=png&auto=webp&s=303afc1cca072ad309af2b75944f675d033da76c
https://preview.redd.it/14mu93p5sgdg1.png?width=6008&format=png&auto=webp&s=403110a80e8d15fc1e4a48362ab28c34f6a42042
| 2026-01-15T07:20:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qdd1l7/stepaudior11_open_weight_by_stepfun_just_set_a/ | Inevitable_Sea8804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdd1l7 | false | null | t3_1qdd1l7 | /r/LocalLLaMA/comments/1qdd1l7/stepaudior11_open_weight_by_stepfun_just_set_a/ | false | false | 28 | null | |
Any Medical doctor related Finetunes of open models ? | 0 | Hi Everyone
I am looking for a few recommendations for open AI models specialized in medical advice/diagnostic just to kind of throw onto the pc and forget in case it ever comes in handy. I have checked out Medgemma but I was wondering if there are others as well. Huggingface is not easily searchable by use case or field.
thanks, | 2026-01-15T07:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qdcubr/any_medical_doctor_related_finetunes_of_open/ | deathcom65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdcubr | false | null | t3_1qdcubr | /r/LocalLLaMA/comments/1qdcubr/any_medical_doctor_related_finetunes_of_open/ | false | false | self | 0 | null |
VLA模型研究进展 | 0 | 2026-01-15T06:39:36 | https://vlamodels-nm5d2sqk.manus.space | SquashThis3025 | vlamodels-nm5d2sqk.manus.space | 1970-01-01T00:00:00 | 0 | {} | 1qdccee | false | null | t3_1qdccee | /r/LocalLLaMA/comments/1qdccee/vla模型研究进展/ | false | false | default | 0 | null | |
System Prompts When Using Claude Code and Codex with Local Models | 0 | Here are system prompts for:
* [Claude Code](https://gist.github.com/chigkim/1f37bb2be98d97c952fd79cbb3efb1c6): 16.5K tokens
* [Codex](https://gist.github.com/chigkim/ffed11a3e017d98698707dd24e78af51): 6.5K tokens
I wonder if long system prompts are one of the main reasons smaller models, without being fine-tuned on a specific system prompt, might not perform well with Claude Code or Codex.
When I used qwen3-coder-30b-a3b-q8_0, Claude Code often talked about completely unrelated things to my code and eventually failed. I.E. reading pdf even though I had no pdf.
After reading the system prompt, I realized it was talking about stuff in the system prompt.
Codex worked better. Maybe because it has shorter instruction?
Also I allocated 64K context length, so it wasn't running out of the context window. | 2026-01-15T06:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qdcaol/system_prompts_when_using_claude_code_and_codex/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qdcaol | false | null | t3_1qdcaol | /r/LocalLLaMA/comments/1qdcaol/system_prompts_when_using_claude_code_and_codex/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.