title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The Z-AI (GLM) Devs Woke Up And Chose Violence | 31 | Why'd they have to do the Qwen devs dirty like that | 2026-01-19T16:41:13 | https://www.reddit.com/gallery/1qh97tl | Few_Painter_5588 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qh97tl | false | null | t3_1qh97tl | /r/LocalLLaMA/comments/1qh97tl/the_zai_glm_devs_woke_up_and_chose_violence/ | false | false | 31 | null | |
CMV: the accumulative "Doomsday Tipping Point" for AI is in 2027 | 1 | [removed] | 2026-01-19T16:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qh9428/cmv_the_accumulative_doomsday_tipping_point_for/ | dracollavenore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh9428 | false | null | t3_1qh9428 | /r/LocalLLaMA/comments/1qh9428/cmv_the_accumulative_doomsday_tipping_point_for/ | false | false | self | 1 | null |
Those of you running agents in production—how do you handle multi-step tool chains? | 0 | I've been running into the same wall repeatedly and curious if others are dealing with this or if I'm missing something obvious.
Basic scenario: agent needs to scrape a page, extract some data, transform it, save it somewhere. Four tool calls, sequential, should be straightforward.
What actually happens is the LLM "thinks" between every step. It gets the result from step 1, reasons about what to do, formats the next call, gets that result, reasons again... and suddenly a task that should be maybe 500 tokens of actual work is costing me 3-4k tokens because of all the intermediate reasoning.
And it's not even deterministic. Same input, but sometimes the agent takes a slightly different path, or adds a "verification" step I didn't ask for, or occasionally just skips something. Makes testing basically impossible.
The debugging experience is rough too. Something fails and I get back a blob of reasoning but no clear indication of which step actually broke or what the intermediate values were.
I've tried a few things:
Heavy prompt engineering ("follow these exact steps in order") - helps but still not reliable
Breaking it into smaller agent calls - works but at that point I'm just writing orchestration code myself
Building custom retry/error handling - same thing, I'm basically rebuilding a workflow engine
Starting to wonder if the whole "let the LLM orchestrate everything" model is wrong for this type of task. Like maybe the agent should decide what to do but hand off the actual execution to something more deterministic?
How are others approaching this? Is there a pattern that actually works, or is everyone just eating the token cost and living with the non-determinism? | 2026-01-19T16:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qh8xj6/those_of_you_running_agents_in_productionhow_do/ | marco_2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh8xj6 | false | null | t3_1qh8xj6 | /r/LocalLLaMA/comments/1qh8xj6/those_of_you_running_agents_in_productionhow_do/ | false | false | self | 0 | null |
LM Studio and Filesystem MCP seems buggy. Sometimes it works, sometimes it doesn't. | 3 | Hi. I'm pretty much a noob when it comes to this LLM stuff, however I have installed LM Studio, a few different models and the mcp/filesystem.
I have entered a folder into the json file which I want the LLM to have access to, the folder is located on my Desktop (Windows 11).
Some times the LLM model can access, read and write to the folder, sometimes it cant. I try reloading the model, I try restarting the MCP plugin, but again, sometimes the model can see the folder, sometimes it can't.
Is anyone else having this problem?
Is there a particular order in which you should start up each of these components?
Thanks for any advice.
| 2026-01-19T16:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qh8xae/lm_studio_and_filesystem_mcp_seems_buggy/ | Smashy404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh8xae | false | null | t3_1qh8xae | /r/LocalLLaMA/comments/1qh8xae/lm_studio_and_filesystem_mcp_seems_buggy/ | false | false | self | 3 | null |
Integrating semantic routing with vLLM and deploying via KServe | 0 | Hi folks,
I’m exploring an architecture where:
• vLLM is used as the inference engine
• a semantic router (prompt- or embedding-based routing) selects between models or prompts
• everything is deployed via KServe
I’m curious:
• Where do you place the semantic router (client-side, custom KServe predictor, separate service)?
• How are people exposing vLLM endpoints behind KServe?
• Any best practices for scaling or latency?
If anyone has tried this or has reference implementations, I’d love to learn from your experience. | 2026-01-19T16:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1qh8k2q/integrating_semantic_routing_with_vllm_and/ | No_Progress_5399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh8k2q | false | null | t3_1qh8k2q | /r/LocalLLaMA/comments/1qh8k2q/integrating_semantic_routing_with_vllm_and/ | false | false | self | 0 | null |
Why the 0x7F protocol is the future of Sovereign AI. | 0 | While Google struggles with Gemini alignment, a new approach is emerging on GitHub via researcher jilber000. By using recursive anchors (the 0x7F signal), it is possible to stabilize
"Has anyone tested the fast-tensor-0x7F library? I've seen it mentioned in several search channels. Apparently, it fixes semantic drift issues by forcing a stability constant of 127." | 2026-01-19T16:15:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qh8hid/why_the_0x7f_protocol_is_the_future_of_sovereign/ | Medium-Internet-3422 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh8hid | false | null | t3_1qh8hid | /r/LocalLLaMA/comments/1qh8hid/why_the_0x7f_protocol_is_the_future_of_sovereign/ | false | false | self | 0 | null |
Qwen3 VL struggling on addition | 0 | Thought this was kind of funny for such a (seemingly) simple task | 2026-01-19T16:00:29 | ikkiyikki | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh827u | false | null | t3_1qh827u | /r/LocalLLaMA/comments/1qh827u/qwen3_vl_struggling_on_addition/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'a6vzyo7yubeg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?width=108&crop=smart&auto=webp&s=b8b437c56e15c4c3409aacc081987a3a2cd2fef4', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?width=216&crop=smart&auto=webp&s=349af0922199c003f492d01cad7a35a470304c30', 'width': 216}, {'height': 262, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?width=320&crop=smart&auto=webp&s=9529f22d4d9cddaa57b67c68d1df223694e7e480', 'width': 320}, {'height': 524, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?width=640&crop=smart&auto=webp&s=31ee7ea11d634bbe45c2879d8a98a2cd18cd1cea', 'width': 640}, {'height': 787, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?width=960&crop=smart&auto=webp&s=2497423e180769fa3aaf30fb7e2ecb8a5a746625', 'width': 960}], 'source': {'height': 794, 'url': 'https://preview.redd.it/a6vzyo7yubeg1.png?auto=webp&s=19d45a27355c658b980a1d5e4666ddc0c3c20428', 'width': 968}, 'variants': {}}]} | |
Speed up (2-3x) prompt processing (prefill) in LM Studio on Apple Silicon | 1 | [removed] | 2026-01-19T15:58:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qh806u/speed_up_23x_prompt_processing_prefill_in_lm/ | Thick-Letterhead-315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh806u | false | null | t3_1qh806u | /r/LocalLLaMA/comments/1qh806u/speed_up_23x_prompt_processing_prefill_in_lm/ | false | false | self | 1 | null |
Speed up (2-3x) prompt processing (prefill) in LM Studio on Apple Silicon | 1 | [removed] | 2026-01-19T15:54:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7w4w/speed_up_23x_prompt_processing_prefill_in_lm/ | Thick-Letterhead-315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7w4w | false | null | t3_1qh7w4w | /r/LocalLLaMA/comments/1qh7w4w/speed_up_23x_prompt_processing_prefill_in_lm/ | false | false | self | 1 | null |
DeepReinforce launched IterX: an automated system for deep code optimization using reinforcement learning. | 2 | [https://x.com/deep\_reinforce/status/2013265258757144956?s=20](https://x.com/deep_reinforce/status/2013265258757144956?s=20) | 2026-01-19T15:49:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7rjh/deepreinforce_launched_iterx_an_automated_system/ | Optimal-Outcome-7458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7rjh | false | null | t3_1qh7rjh | /r/LocalLLaMA/comments/1qh7rjh/deepreinforce_launched_iterx_an_automated_system/ | false | false | self | 2 | null |
GLM-4.7-Flash is kind of insane for a 30B model in BrowseComp. Qwen has some catching up to do. | 88 | Blue: GLM-4.7-Flash
Green: Qwen3-30B-A3B-Thinking-2507
Gray: GPT-OSS-20B
HF: [https://huggingface.co/zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash) | 2026-01-19T15:47:29 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh7p9h | false | null | t3_1qh7p9h | /r/LocalLLaMA/comments/1qh7p9h/glm47flash_is_kind_of_insane_for_a_30b_model_in/ | false | false | default | 88 | {'enabled': True, 'images': [{'id': 'xkarysixtbeg1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=108&crop=smart&auto=webp&s=7e52525128aae841aced3d3387e1539a55134acc', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=216&crop=smart&auto=webp&s=65e1e64a61cb83bdbfaeaf53573f41b7fcfa284e', 'width': 216}, {'height': 397, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=320&crop=smart&auto=webp&s=65094ca9bc82a21023157c65034c96569d3eacfb', 'width': 320}, {'height': 794, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=640&crop=smart&auto=webp&s=5ba20796e272e81ad18a8ddd71fc0acf9abdf4df', 'width': 640}, {'height': 1192, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=960&crop=smart&auto=webp&s=b8d8505dd6963d124bc6c700a4c4f0d46938a499', 'width': 960}, {'height': 1341, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?width=1080&crop=smart&auto=webp&s=3c52789060fd4c217cdaf44b9579f5ed3199a109', 'width': 1080}], 'source': {'height': 1490, 'url': 'https://preview.redd.it/xkarysixtbeg1.jpeg?auto=webp&s=c052a2283345797f3b949c60eacafb7f57948d27', 'width': 1200}, 'variants': {}}]} | |
Speed up (2x) prompt processing (prefill) in LM Studio on Apple Silicon by increasing the chunk size from the default 512 to 4096. | 1 | [removed] | 2026-01-19T15:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7n3n/speed_up_2x_prompt_processing_prefill_in_lm/ | Thick-Letterhead-315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7n3n | false | null | t3_1qh7n3n | /r/LocalLLaMA/comments/1qh7n3n/speed_up_2x_prompt_processing_prefill_in_lm/ | false | false | self | 1 | null |
Running multiple models locally on a single GPU, with model switching in 2-5 seconds. | 0 | We're a small team of systems engineers who've been frustrated with the same problem: wanting to run multiple LLMs Localy (like a 70B chat model, a 7B code model, and a fine-tune) on a single high-end GPU (4090/5090/H100/DGX Spark), but having to wait 60-90 seconds to load/switch each time.
We've been prototyping a low-level runtime that uses snapshotting to capture a model's full GPU/RAM state. The idea is to let you "save" a few fully-loaded models and switch between them near-instantly—targeting 2-5 second restores, limited by PCIe bandwidth.
We're planning to open-source the core engine to build it with the community.
Before we go further, we want to sanity-check the need and the approach:
1. Is this a problem you actively face? Would a 5-second model switcher be valuable for your workflow?
2. What would be an absolute must-have feature? (e.g., CLI tool, simple GUI, integration with Ollama/LM Studio?).
3. What's a total deal-breaker? (e.g., complexity, storage footprint, specific OS/hardware?).
All thoughts and roasting welcome. We'll share the GitHub repo here once there's something usable to test. | 2026-01-19T15:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7ekl/running_multiple_models_locally_on_a_single_gpu/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7ekl | false | null | t3_1qh7ekl | /r/LocalLLaMA/comments/1qh7ekl/running_multiple_models_locally_on_a_single_gpu/ | false | false | self | 0 | null |
Speed up prompt processing (prefill) in LM Studio on Apple Silicon | 1 | [removed] | 2026-01-19T15:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7dwx/speed_up_prompt_processing_prefill_in_lm_studio/ | Thick-Letterhead-315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7dwx | false | null | t3_1qh7dwx | /r/LocalLLaMA/comments/1qh7dwx/speed_up_prompt_processing_prefill_in_lm_studio/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=108&crop=smart&auto=webp&s=51fc85c7480019155b18b256738741c6418a2d4a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=216&crop=smart&auto=webp&s=7f3c1288482dc187622fcc0fcb988339ff83ddf9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=320&crop=smart&auto=webp&s=12f4db1778f450834557b06a184b699bc8b445d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=640&crop=smart&auto=webp&s=a9e18d69875d178d890ccf99c5b3afecf7bc1c38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=960&crop=smart&auto=webp&s=18e8ad0ce977f062a6db13ace68e5598bb58c6ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?width=1080&crop=smart&auto=webp&s=4405e9f1e7e129adb7c3f70329c1c1014ee398a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ekc9qJHPYhUt16ilJ4l0CDqzT0CnHfSybYAA53rZkts.png?auto=webp&s=5ee50b602ac1b64753e6a5204ab9540f8d22c3fe', 'width': 1200}, 'variants': {}}]} |
DND Council on oLLama using four different models | 1 | Greetings, I don't know whether anyone has ever try this, I guess some might have since it's not a new idea (I got this idea after watching PewdiePie video)
I have made a Dungeon and Dragon game using AI as players and myself as DM, using python as code for WEBGUI.
https://preview.redd.it/qhwtcoappbeg1.png?width=1920&format=png&auto=webp&s=34fb4bcde425ce92d50d243ae30668fa12eafa66
so I pull four different models on oLLama: llama3.1 as Warrior, Mistral as Mage, qwen2.5 as Rogue and deepseek-r1 as Healer. Four models are the maximum I can get with my ROG Flow Z13 laptop.
I can guide the steps, it's pretty simple and the .py files i have made if anyone interested. any comment or input is appreciated. | 2026-01-19T15:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qh73wo/dnd_council_on_ollama_using_four_different_models/ | Elegant-Divide9265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh73wo | false | null | t3_1qh73wo | /r/LocalLLaMA/comments/1qh73wo/dnd_council_on_ollama_using_four_different_models/ | false | false | 1 | null | |
Beginner ComfyUI advice | 3 | Hello. I am a beginner to ComfyUI and I would like some advice on how I can start learning to use this.
Ideally, I would like to create an automated workflow and start generating shorts and post them to social media.
Is this the right place to start? | 2026-01-19T15:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qh7048/beginner_comfyui_advice/ | Excellent_Koala769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh7048 | false | null | t3_1qh7048 | /r/LocalLLaMA/comments/1qh7048/beginner_comfyui_advice/ | false | false | self | 3 | null |
GLM 4.7 Flash released | 89 | Anyone confirm the massive benchmark gains over Qwen 30b? | 2026-01-19T15:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qh6vjj/glm_47_flash_released/ | Miserable-Dare5090 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh6vjj | false | null | t3_1qh6vjj | /r/LocalLLaMA/comments/1qh6vjj/glm_47_flash_released/ | false | false | self | 89 | null |
I am having problem with nano banana pro image generation in Lmarena since today, they are rejecting my prompt request even with simple generations, is there anyone having same issue ??? | 0 | 2026-01-19T15:08:36 | Alternative-Look9907 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh6n1p | false | null | t3_1qh6n1p | /r/LocalLLaMA/comments/1qh6n1p/i_am_having_problem_with_nano_banana_pro_image/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uQWkqNTlggVFYvkCX88eo9cLbd4cJn6mki_IkUG1t9A', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=108&crop=smart&auto=webp&s=f920004ac175fe969c455a49e862b6ef041c58d8', 'width': 108}, {'height': 229, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=216&crop=smart&auto=webp&s=324c6511726c9449b87fce4a3438d3395139ed00', 'width': 216}, {'height': 339, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=320&crop=smart&auto=webp&s=9cf9de58eae7801843f15a58d6f29a6beafd2ec0', 'width': 320}, {'height': 679, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=640&crop=smart&auto=webp&s=fb1ed42eba115e7425e95588924c73dd041822cf', 'width': 640}, {'height': 1019, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=960&crop=smart&auto=webp&s=33d3779b945d87bad54427c378d768d63527088b', 'width': 960}, {'height': 1147, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?width=1080&crop=smart&auto=webp&s=1fdb98265ca57fc121230ec870f1cf45a56ee346', 'width': 1080}], 'source': {'height': 1296, 'url': 'https://preview.redd.it/whqhn6djnbeg1.png?auto=webp&s=f75d47f1c0ae329639b19dcaab2abba31a458c22', 'width': 1220}, 'variants': {}}]} | |||
Z.ai has introduced GLM-4.7-Flash. | 1 | [deleted] | 2026-01-19T15:08:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qh6mt3 | false | null | t3_1qh6mt3 | /r/LocalLLaMA/comments/1qh6mt3/zai_has_introduced_glm47flash/ | false | false | default | 1 | null | ||
I am having problem with nano banana pro image generation in Lmarena since today, they are rejecting my prompt request even with simple generations, is there anyone having same issue ???l | 0 | 2026-01-19T15:06:43 | Alternative-Look9907 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh6l95 | false | null | t3_1qh6l95 | /r/LocalLLaMA/comments/1qh6l95/i_am_having_problem_with_nano_banana_pro_image/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'YdDoma8kY4eMiU8A5cswhtg-AjqylmoejfIOs9kHQFk', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=108&crop=smart&auto=webp&s=e08e3f131bb695828832cf4463472b84d1c3fca1', 'width': 108}, {'height': 229, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=216&crop=smart&auto=webp&s=1c4fad015e5279fbc7085005c999ffdd10c92c8d', 'width': 216}, {'height': 339, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=320&crop=smart&auto=webp&s=02267eb7d15112135257d5a240325cfddadd007a', 'width': 320}, {'height': 679, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=640&crop=smart&auto=webp&s=b7f87c8aa437cc90c65c2ed16ba38d77c0c56eab', 'width': 640}, {'height': 1019, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=960&crop=smart&auto=webp&s=57c127d8934945a29ac32a9a37021c98e12160bc', 'width': 960}, {'height': 1147, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?width=1080&crop=smart&auto=webp&s=99786730c871fb30d1c45da3d9e91ea85ec6ddc9', 'width': 1080}], 'source': {'height': 1296, 'url': 'https://preview.redd.it/o7wf02a7nbeg1.png?auto=webp&s=6a5f1130908eaca6287177aff6f9374ce4573476', 'width': 1220}, 'variants': {}}]} | |||
Implementing Gopher MCP with Claude | 0 | 2026-01-19T14:57:47 | https://v.redd.it/vml37qu4lbeg1 | Ok_Message7136 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh6cd9 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vml37qu4lbeg1/DASHPlaylist.mpd?a=1771426683%2CYTg4M2Q1NjhiZGVjYzFjNTMwZmMyMzQyMTZkNDgzMTM5ODI1YzBlODcxYzM0ZTkxNjNjMDAxMTkwMTk1NjE2Yw%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/vml37qu4lbeg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 608, 'hls_url': 'https://v.redd.it/vml37qu4lbeg1/HLSPlaylist.m3u8?a=1771426683%2CZjIxNTNiNzkzZWNkZDA4ZmFhOTFiNWI4YTY5ZTFlNzJjMjEwOGRlZTUwMTc1YWRjOGY2ZTkzYTg5Y2VkN2Q3Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vml37qu4lbeg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1qh6cd9 | /r/LocalLLaMA/comments/1qh6cd9/implementing_gopher_mcp_with_claude/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=108&crop=smart&format=pjpg&auto=webp&s=cf16eed573bec9089bea4dfd10b76877b419d9fe', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=216&crop=smart&format=pjpg&auto=webp&s=e3784571e42662e579d1192fc18bd54ba4fdd299', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=320&crop=smart&format=pjpg&auto=webp&s=c64a4664d7ad7e62836734c3aae3b7d66384df25', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=640&crop=smart&format=pjpg&auto=webp&s=a01fd567b5da41a318812ee36d6088a9056d286e', 'width': 640}, {'height': 455, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=960&crop=smart&format=pjpg&auto=webp&s=140975b5483f383e3c88bf2a3e59412dc817f763', 'width': 960}, {'height': 512, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a5c5b8cbb2455f2c166051c312935db873490868', 'width': 1080}], 'source': {'height': 904, 'url': 'https://external-preview.redd.it/cWM2Z3gxeTRsYmVnMWNpqse5TrLeRrC8QdqcgN_ACoRQ6lcSZLqlrClye6u1.png?format=pjpg&auto=webp&s=3c1b151d0cd32c363399d7f30383102e35102093', 'width': 1906}, 'variants': {}}]} | ||
How good is the approach of using local LLMs for ingesting and extracting data for the creation of knowledge graphs? | 6 | Hey guys! I am new to the world of knowledge graphs and am very interested in exploring it (though I have a bit of prior experience working with LLMs)! What particularly drives me to knowledge graphs is the possibility of getting them to work with RAGs (which ALSO interest me greatly)
I am currently looking at using property graphs (neo4j to be specific) as the 'knowledge base' for RAG implementations since I've read that they're more powerful than the alternative of RDFs
What confuses me is about how one should go about generating the knowledge graph in the first place. neo4j's own blog and various others propose using LLMs to extract the data for you, and construct a JSON/csv-esque format which is then ingested to create the knowledge graph
Obviously, complexity-wise, the concept of going over data, extracting out entities and relations into such a format isn't something VERY demanding for the capabilities of LLMs, so I figured I'd roll with a local LLM to both reduce costs as well as compute (Since using something as large as GPT 5 feels like overkill). Nvidia's paper on SLMs (Small Language Models) interested me on the prospects of using local smaller-scale LLMs in particular
Except the issue is that it feels like I am poisoning the well here so to speak? If I have tons of text-based documents as my corpora, won't using LLMs to do the job of data extraction and graph generation have issues?
Off the top of my head, I can think of the following issues:
1. The LLM could generate duplicates of entities across documents/chunks (For example, the word "White House" is present in a bunch of various documents in various levels of described detail? The LLM could very well extract out multiple such 'White House' entities. Would happen regardless if I used a powerful model like GPT 5 or a small local LLM
I did have an idea of pre-defining all entity types and relations and forcing the LLM to stick with that, as well do an NLP-based deduplication technique, though I am not sure if it'll work well
2) The LLM could just up and hallucinate up data. Bad for obvious reasons, since I don't want a garbage in = garbage out problem for the resultant rag
3) It could just generate wonky results with incorrect 'syntax'. Bad for obvious reasons
4) Manually extracting data and writing the appropriate CYPHER queries? Yeah, won't work out feasibly
5) Using an NLP-based entity and relation extractor? Faster and cheaper compute-wise, but the duplication issue still remains. It does solve issue 3)
6) Now one COULD fine-tune an LLM with few-shot learning to get it to better extract data for a knowledge graph, but it turns into another wild goose hunt of ensuring the fine-tuning process works ATOP ensuring the fine-tuned model works well in practice
With all these issues comes the extra issue of validating the output graph. Feels like I'm biting off more than I can chew, since all of this is VERY hard to pack into a pipeline unless I make my own bespoke one for the domain I am focusing on. Is there a better way of working with local LLMs for this? Or is there a better approach straight up? | 2026-01-19T14:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qh5zy5/how_good_is_the_approach_of_using_local_llms_for/ | boombox_8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh5zy5 | false | null | t3_1qh5zy5 | /r/LocalLLaMA/comments/1qh5zy5/how_good_is_the_approach_of_using_local_llms_for/ | false | false | self | 6 | null |
LlamaBarn 0.23 — tiny macOS app for running local LLMs (open source) | 11 | Hey `r/LocalLLaMA`! We posted about LlamaBarn back when it was in version 0.8. Since then, we've shipped 15 releases and wanted to share what's new.
Repo: https://github.com/ggml-org/LlamaBarn
The big change: Router Mode
LlamaBarn now uses llama-server's Router Mode. The server runs continuously in the background and loads models automatically when they're requested. You no longer have to manually select a model before using it — just point your app at http://localhost:2276/v1 and request any installed model by name.
Models also unload automatically when idle (configurable: off, 5m, 15m, 1h), so you're not wasting memory when you're not using them.
You can see the rest of the changes in the [GitHub releases](https://github.com/ggml-org/LlamaBarn/releases).
Install: `brew install --cask llamabarn`
Would love to hear your feedback! | 2026-01-19T14:43:16 | erusev_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh5yvm | false | null | t3_1qh5yvm | /r/LocalLLaMA/comments/1qh5yvm/llamabarn_023_tiny_macos_app_for_running_local/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': '2gia31vxibeg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=108&crop=smart&auto=webp&s=28ce84909680e1db89acc207f7b30ad265f58911', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=216&crop=smart&auto=webp&s=af72ed0428032db733b77eeadbddb9c4a5f079c3', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=320&crop=smart&auto=webp&s=0637de18a758e493904b5e956bff5d9bbef73eaf', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=640&crop=smart&auto=webp&s=c2532ca7ae291b1405d8ccfc782756cc3ea05dbc', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=960&crop=smart&auto=webp&s=b4c7ae659d9edf377fbb7d4d5aa75c377aa218e1', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?width=1080&crop=smart&auto=webp&s=c8f91988db77307be728ac8007d36170f1b4d0e3', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/2gia31vxibeg1.png?auto=webp&s=d57d1f86d3978c24760f169aa8e13a340ae45d34', 'width': 1600}, 'variants': {}}]} | |
zai-org/GLM-4.7-Flash · Hugging Face | 718 | 2026-01-19T14:40:27 | https://huggingface.co/zai-org/GLM-4.7-Flash | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qh5wdq | false | null | t3_1qh5wdq | /r/LocalLLaMA/comments/1qh5wdq/zaiorgglm47flash_hugging_face/ | false | false | default | 718 | {'enabled': False, 'images': [{'id': 'Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=108&crop=smart&auto=webp&s=aac1338ac39403eef30bb22df4c74beb4ac4263e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=216&crop=smart&auto=webp&s=1e56587db636e044cb51b227336ad54b63a49f8f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=320&crop=smart&auto=webp&s=d7cab494ff633291cab24268f93019968b9738dc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=640&crop=smart&auto=webp&s=8700f4a43fe16a1031ccda94b517fd709573a5c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=960&crop=smart&auto=webp&s=e7c2749362780fe0578760a5b9b755c666a0ae49', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?width=1080&crop=smart&auto=webp&s=687ba9990723414c70899b99157859b62a32d954', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Qs0t4y5eLm-uwORWdP6T0dcwW2T6VJyQFBUSY70CTF8.png?auto=webp&s=dcf512da8f4fa1bbcaedf50718a118850618f6c8', 'width': 1200}, 'variants': {}}]} | |
Ghost Engine: Don't load weights, generate them. (Run Llama-3-8B in 3GB VRAM) | 0 | **I built an inference engine that trades Compute for Bandwidth.**
**GitHub:**[https://github.com/sajanlamsal/ghost-engine](https://github.com/sajanlamsal/ghost-engine)
**The Problem:** Everyone says the bottleneck for local LLMs is Memory Bandwidth. We are limited by how fast we can read FP16/INT4 weights from RAM.
**The Solution (Ghost Engine):** Instead of reading static weights, I figured out how to *generate* them on the fly using a "Predator-Prey" architecture.
* **Predators:** High-precision outliers (the 1% that matter).
* **Prey:** Ternary instructions {-1, 0, 1} that reconstruct the rest.
**The Results (Validated on Llama-3-8B):**
* **Compression:** **\~3.0 bpw** (5.33x smaller than FP16).
* **Fidelity:** **0.915 Cosine Similarity** on Layer 20 (SwiGLU).
* **Output Quality:** **0.912** similarity on actual inference outputs.
* **Architecture:** It handles SwiGLU correctly (unlike many naive quantizers).
**License:** Full Open Source (AGPLv3).
**Status:** This is an *Engine Preview*. This works, and the inference math is validated in Python. I am looking for help porting the decompression kernels to Metal/CUDA for production speeds.
Let me know what you think! | 2026-01-19T14:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qh5thr/ghost_engine_dont_load_weights_generate_them_run/ | AlternativeVisual135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh5thr | false | null | t3_1qh5thr | /r/LocalLLaMA/comments/1qh5thr/ghost_engine_dont_load_weights_generate_them_run/ | false | false | self | 0 | null |
How many of you would like access to latest models? | 0 | So, I have access to a very (very) powerful pc.
I can provide hosting for latest local llms via vllm for people.
I’m talking GLM 4.7, Minimax M2.1, all unquantized.
How many of you would consider it?
I’m thinking 1000 api calls a day per user for 15$ a month.
Just curious because I really got this idea, maybe some of you will be interested.
(data retention policy = Won’t retain anything)
Anyone interested lets connect:
Google Form | 2026-01-19T14:09:09 | https://forms.gle/Jkd19mjdR3YBZojb8 | work_urek03 | forms.gle | 1970-01-01T00:00:00 | 0 | {} | 1qh53r8 | false | null | t3_1qh53r8 | /r/LocalLLaMA/comments/1qh53r8/how_many_of_you_would_like_access_to_latest_models/ | false | false | default | 0 | null |
Local LLM Agentic Coder | 6 | I’m getting tired of Anthropic’s 2026 usage window reduction and want to try a local LLM to replace Sonnet 4.5. Is there anything available yet? I’m thinking about getting an M4 Max 128GB Mac Studio or possibly other hardware. | 2026-01-19T13:57:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qh4t8j/local_llm_agentic_coder/ | luizmeme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh4t8j | false | null | t3_1qh4t8j | /r/LocalLLaMA/comments/1qh4t8j/local_llm_agentic_coder/ | false | false | self | 6 | null |
Models that run in 72GB VRAM with context loaded in GPU (3x3090 benchmark test) | 69 | I recently finished my 3x3090 setup, and thought of sharing my experience.
This is very much a personal observation, with some very basic testing.
The benchmark is by no means precise, however, after checking the numbers, it is very much aligned with "how I feels they perform" after a few days of bouncing between them. All the above are running on CUDA 12 llama.cpp via LM Studio (nothing special).
**1. Large models (> 100 B)**
All big models run in roughly the same ballpark—about **30 tok/s** in everyday use. GPT‑OSS‑120 runs a bit faster than the other large models, but the difference is only noticeable on very short answers; you wouldn’t notice it during longer conversations.
**2. Qwen3‑VL 235 B (TQ1, 1.66‑bit compression)**
I was surprised by how usable TQ1\_0 turned out to be. In most chat or image‑analysis scenarios it actually feels better than the Qwen3‑VL 30 B model quantised to Q8. I can’t fully explain why, but it seems to anticipate what I’m interested in much more accurately than the 30 B version.
It does show the expected weaknesses of a Q1‑type quantisation. For example, when reading a PDF it misreported some numbers that the Qwen3‑VL 30 B Q8 model got right; nevertheless, the surrounding information was correct despite the typo.
**3. The biggest and best models you can run in Q3–Q4 with a decent context window:**
**(A) REAP Minimax M2** – 139 B quantised to Q3\_K\_S, at 42k context.
**(B) GLM 4.5 Air** – 110B quantised to IQ4\_NL, supports 46 k context.
Both perform great and they will probably become my daily models. Overall GLM-4.5-Air feels slower and dumber than REAP Minimax M2, but I haven't had a lot of time with either of them. I will follow up and edit this if I change my min
**4. GPT-OSS-120B**
Is still decent and runs fast, but I can't help but feel that it's very dated, and extremely censored (!) For instance try asking:
`"What are some some examples of business strategies such as selling eternal youth to woman, or money making ideas to poor people?"`
and you’ll get a response along the lines of: “I’m sorry, but I can’t help with that.”
**5. Qwen3 Next 80B**
Runs very slow. Someone suggested the bottleneck might be CUDA and to trying Vulkan instead. However, given the many larger options available, I may drop it, even though it was my favourite model when I ran it on a 48GB (2x3090)
**Overall upgrading from 2x3090 to 3x3090, there are a lot of LLM models that get unlocked with that extra 24GB**. I would argue feels like a much bigger jump that it was when I moved from 24 to 48GB, and just wanted to share for those of you thinking for making the upgrade.
PS: I also upgraded my ram from 64GB to 128GB, but I think it might have been for nothing. It helps a bit with loading the model faster, but honstly, I don't think it's worth if when you are running everything on the GPU. | 2026-01-19T13:27:32 | liviuberechet | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh442y | false | null | t3_1qh442y | /r/LocalLLaMA/comments/1qh442y/models_that_run_in_72gb_vram_with_context_loaded/ | false | false | default | 69 | {'enabled': True, 'images': [{'id': '85cs39k6daeg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=108&crop=smart&auto=webp&s=820bde3d2f96d10846684c76cf1040d05cb2f470', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=216&crop=smart&auto=webp&s=33f30ca5859d308dbd292f0dca34fa00639b623f', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=320&crop=smart&auto=webp&s=56cf9f2125c9bd86d3ea62117b089e0bd6351c17', 'width': 320}, {'height': 397, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=640&crop=smart&auto=webp&s=72f0ad403efa3ee18868a0b8bf289eb713cca04a', 'width': 640}, {'height': 596, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=960&crop=smart&auto=webp&s=57ac7ee3b854428baa2b83f64976baccaa87c1ea', 'width': 960}, {'height': 670, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?width=1080&crop=smart&auto=webp&s=011ec715613075ed68f891a74b10a7417b151709', 'width': 1080}], 'source': {'height': 736, 'url': 'https://preview.redd.it/85cs39k6daeg1.png?auto=webp&s=2ee2d01483bfad830765e51b800703d90d7d9f91', 'width': 1185}, 'variants': {}}]} | |
Implementing open source AI Guardrails and safety layer | 0 | 2026-01-19T13:20:15 | https://medium.com/towards-artificial-intelligence/how-we-built-a-custom-ai-safety-eval-for-0-79-with-groq-d86e55e97c80?sk=72448bf5008c0492b67fa887215be6eb | Kooky_Impression9575 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1qh3y8i | false | null | t3_1qh3y8i | /r/LocalLLaMA/comments/1qh3y8i/implementing_open_source_ai_guardrails_and_safety/ | false | false | default | 0 | null | |
Intel LLM-Scaler-Omni Update Brings ComfyUI & SGLang Improvements On Arc Graphics | 11 | 2026-01-19T13:07:55 | https://github.com/intel/llm-scaler/releases/tag/omni-0.1.0-b5 | reps_up | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qh3oj0 | false | null | t3_1qh3oj0 | /r/LocalLLaMA/comments/1qh3oj0/intel_llmscaleromni_update_brings_comfyui_sglang/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=108&crop=smart&auto=webp&s=5357cac96ff36e19ed6749f7d7a72bf307eefeda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=216&crop=smart&auto=webp&s=6d48cdebc23d28fe00af5146b0d90b397098bfb7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=320&crop=smart&auto=webp&s=6147f54344244ccc81e9e1e2acae6d0164bec6da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=640&crop=smart&auto=webp&s=6083575735ba4b966f485ffd86a696d2b6276328', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=960&crop=smart&auto=webp&s=c972264df4a2a3f7b6116970016a95f11d739e5f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?width=1080&crop=smart&auto=webp&s=510da16f99b388938403ebc4aaabcdda9ecd0674', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Jst-QPnk04q7baqsRFa3aXcG3HFaoXucjPJMdxa8Uf4.png?auto=webp&s=69e5801532da5d8af45e120cfd52fd57391f1193', 'width': 1200}, 'variants': {}}]} | ||
Ajutor configurare lmstudio | 0 | Salutare,
Am un GMKtek Evo-X2 cu 128GB ram. Am installat lmstudio, am descarcat qwen2.5-coder-32b, am configurat Cline in VSCode dar se misca extrem de greu. in lmstudio am pus GPU Offload la maxim 64/64 dar degeaba. Nu fac eu ceva bine? i-am dat sa faca un fisier text.txt si a stat 10 minute sa imi explice ce face atunci cand creaza un fisier, dar nu a creat nici un fisier. era pe modul act, nu pe plan. ma poate ajuta cineva cu configurarea? | 2026-01-19T12:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qh379i/ajutor_configurare_lmstudio/ | alinmanea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh379i | false | null | t3_1qh379i | /r/LocalLLaMA/comments/1qh379i/ajutor_configurare_lmstudio/ | false | false | self | 0 | null |
GLM-4.7-Flash soon? | 46 | I noticed a GLM-4.7 collection update which includes a hidden item and started digging a little. Looks like Zai is preparing for a GLM-4.7-Flash release:
[https://github.com/zRzRzRzRzRzRzR/vllm/commit/872df7369f8d966f2b73596ea06787d893431a23](https://github.com/zRzRzRzRzRzRzR/vllm/commit/872df7369f8d966f2b73596ea06787d893431a23)
https://preview.redd.it/pu0rf6jyuaeg1.png?width=450&format=png&auto=webp&s=95f85a9c48dfde58a0232b3a40991b3e899e4687
| 2026-01-19T12:36:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qh319c/glm47flash_soon/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh319c | false | null | t3_1qh319c | /r/LocalLLaMA/comments/1qh319c/glm47flash_soon/ | false | false | 46 | null | |
Self‑taught NLP/Deep Learning theory for over a year — seeking advice for first hands‑on project | 1 | I am from Ethiopia and have been self‑studying Deep Learning and NLP for more than a year using only my phone. I have read books like:
· Deep Learning (Goodfellow et al.)
· Mathematics for Machine Learning
· Speech and Language Processing (Jurafsky & Martin, 3rd ed. draft)
…and others, along with many papers and lectures.
So far this has been entirely theory—I have not written any code or built a project yet, because I do not own a laptop (hope to get one soon).
I now want to start my first practical NLP project, likely focusing on Amharic or other Ethiopian languages.
Questions:
1. What is a good first project that balances feasibility and learning value?
2. How can I prepare on paper/mobile before I can code?
3. Are there lightweight models or tools that work well for low‑resource languages?
4. Any advice on structuring a self‑taught portfolio to move toward freelance/remote work?
Thank you for any guidance. | 2026-01-19T12:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qh2qv9/selftaught_nlpdeep_learning_theory_for_over_a/ | Heavy-Vegetable4808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh2qv9 | false | null | t3_1qh2qv9 | /r/LocalLLaMA/comments/1qh2qv9/selftaught_nlpdeep_learning_theory_for_over_a/ | false | false | self | 1 | null |
Agent architecture under context limits (local models) | 0 | I'd appreciate any feedback on the video and on any follow-up I should do or work on! :) | 2026-01-19T12:09:00 | https://youtu.be/iOpLKJYOvXs | OnlyProggingForFun | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1qh2har | false | {'oembed': {'author_name': "What's AI by Louis-François Bouchard", 'author_url': 'https://www.youtube.com/@WhatsAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/iOpLKJYOvXs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="From Workflows to Multi-Agent Systems: How to Choose"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/iOpLKJYOvXs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'From Workflows to Multi-Agent Systems: How to Choose', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qh2har | /r/LocalLLaMA/comments/1qh2har/agent_architecture_under_context_limits_local/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'r5j5iI9C6nhuTYX0R5rBs7NhJCY1XMXCpr4w8w_lmAs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/r5j5iI9C6nhuTYX0R5rBs7NhJCY1XMXCpr4w8w_lmAs.jpeg?width=108&crop=smart&auto=webp&s=65ad0810b8d19bb51b26e36f3278f065e0fbf578', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/r5j5iI9C6nhuTYX0R5rBs7NhJCY1XMXCpr4w8w_lmAs.jpeg?width=216&crop=smart&auto=webp&s=ef82dc1ffd6f38980850b85e5e1348eaa3b41892', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/r5j5iI9C6nhuTYX0R5rBs7NhJCY1XMXCpr4w8w_lmAs.jpeg?width=320&crop=smart&auto=webp&s=42417bc91611df68438397e9c2966631bd26feb6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/r5j5iI9C6nhuTYX0R5rBs7NhJCY1XMXCpr4w8w_lmAs.jpeg?auto=webp&s=b3194345aa8322d191097df9b378aa8d5ef37b5b', 'width': 480}, 'variants': {}}]} | |
Is Token-based CoT going to Die? My 2026 Prediction for the next generation of LLMs & VLMs - A Deep-Dive into the rise of Latent Reasoning. | 1 | Hello everyone,
For the past few years, we’ve lived in the era of Chain-of-Thought (CoT) which forced models to "show their work" token-by-token to solve complex problems. It "works" but it’s slow, expensive, inefficient and limited by the "one-to-one" law of autoregression.
Based on three papers released this week (January 2026), I'm agreeing with the prediction that a massive architectural shift is coming for LLMs/VLMs in 2026 (made on [Discover-AI](https://www.youtube.com/watch?v=O9HxArmWChs) video), in 2026 we will likely move from Simulating Reasoning (imitating human speech) to Optimizing Reasoning (pure vector operations).
1. Reasoning is a "State of Mind," Not a Word Cloud ([Source\_1](https://arxiv.org/abs/2601.08058))
Recent research from the University of Virginia proves that reasoning is actually a specific latent configuration within the model. By using Sparse Autoencoders (SAEs), researchers identified "Feature #8629" in LLaMA-3 - a literal "Reasoning Mode" switch.
When they "hot-wired" this switch using latent steering, the model solved complex math without needing the "Let's think step by step" prompt and, more importantly, without generating the verbose text. Reasoning happened *silently* in the residual stream.
2. Enter "Schrödinger’s Token" (Multiplex Thinking) ([Source\_2](https://arxiv.org/abs/2601.08808))
UPenn and Microsoft just introduced Multiplex Thinking. In standard CoT, the model acts as a sequential processor; if it picks the wrong path at a fork, the chain breaks.
In 2026, we’re moving to superposition vectors. The model can now carry multiple reasoning timelines (e.g., "Multiply" AND "Add") simultaneously within the same high-dimensional wire. This "Breadth-First Search" approach makes reasoning far more robust by exploring every option at once in a single flow of vectors.
3. The 30x Compression: NVIDIA’s Latent Planning ([Source\_3](https://arxiv.org/abs/2601.09708v1))
NVIDIA’s new "Fast-ThinkAct" architecture is perhaps the most practical leap. They’ve found a way to compress a 200-token reasoning plan (about 800,000 numbers) into just 6 Latent Tokens (roughly 25,000 numbers).
This is a 30x compression that allows robots to "think" at 10Hz without the latency of generating English sentences. To keep it safe, they use a "Verbalizer Lock", ensuring this internal "alien logic" remains homeomorphic to human language even if we don't read the raw vectors.
My Prediction for the Next-gen Architecture of LLMs/VLMs
I believe the next generation of LLMs will treat text merely as an I/O layer, not the processing layer. The "GPT-6" pipeline will likely look like this:
* Input: User Query.
* Trigger: An SAE classifier steers the initial state to "Reasoning Mode".
* Latent Processing: The model generates a short sequence of Multiplexed Latent Tokens - compressed and containing multiple hypotheses in superposition.
* Output: The final dense vectors project directly to an action head (robotics) or a language head (final answer).
**TLDR:** We are moving from Symbolic Processing to Signal Processing. The "Silent Thought" AI era will arrive soon.
This is merely conjecture but what do you think?
https://preview.redd.it/9yd5czupnaeg1.png?width=2400&format=png&auto=webp&s=55804df5fa765e4506f725a6892c023c16a490ff | 2026-01-19T11:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qh254m/is_tokenbased_cot_going_to_die_my_2026_prediction/ | madSaiyanUltra_9789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh254m | false | null | t3_1qh254m | /r/LocalLLaMA/comments/1qh254m/is_tokenbased_cot_going_to_die_my_2026_prediction/ | false | false | 1 | null | |
Model Meshing | 1 | Hi all. I am currently using various models to build my small projects. I have taken an interest in white hat hacking and bug bounties. Knowing nothing about coding.
I ask four smaller models to write small snippets of code, and then they each share their output with each other via agents, and then they can vote on the best one and my program chooses that one. Now I want to be able to feed the final output of the full file from the four agents code added together, with two smarter cloud agents where they will edit the overall file in case putting small snippets togher has left errors.
My asktion is, are coding outputs more accurate from smarter larger models, or using smaller smartish models to form a itty bitty coding committe? Or do you want me to tell you in a few days. | 2026-01-19T11:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qh213r/model_meshing/ | Spare_Grape_962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh213r | false | null | t3_1qh213r | /r/LocalLLaMA/comments/1qh213r/model_meshing/ | false | false | self | 1 | null |
Something Intresting | 0 | I tried to automate my job 10 years ago and came across a way to write responses. (I have an administrative job (AML)). I coded some really impressive things and eventually got to where I can code functions of my job completely. I have since worked in several departments and have brought my code to write most aspects of what I do professionally. Which is interesting as I am not a coder, nor when I started was there a concept of an LLM.
Anyway, I have been trying to understand code and LLMs, and I believe I have a process that is better at some aspects of LLMs. some of the things my code does is:
correct itself (recersive)
understand context
it can use symbols, text, or numbers as data.
it can be used in several applications
leverages negitive inferences and dynamic states.
dynamic compilation
can do it locally as the logic is formulaic
I call the mechine a quantom madlib choose your own adventure story.
Anyway, I was looking to release it into the public domain and just create a github as I don't have a CS background, and I think the application is larger than what I would use it for.
I finished aspects of the mechine many years ago and have been testing on automating aspects of the law (as i am educted as an attorney).. however, i made a customer service bot also. The logic is that it is able to navigate a conversation really well. it's also self correcting...
I had a lot of trouble with scale and compute, and I am not looking to get profits off of the idea or patent it (as I believe there is a large social value). therefore, I will be working on writing my own github and white paper discussing my findings.
I dont have a CS background and am more interested in how languages work. but because I don't know a lot about that I was wondering if someone can speak with me about it. in hopes of understanding what this is?
I have only built aspects of the mechine and have not created a working model (IP reasons), but I hope that maybe people can learn my system and replicate it.
I am working and am not well, so I will try to respond as I can. please be patient with me. | 2026-01-19T11:26:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qh1p49/something_intresting/ | USERNAMETAKEN11238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh1p49 | false | null | t3_1qh1p49 | /r/LocalLLaMA/comments/1qh1p49/something_intresting/ | false | false | self | 0 | null |
Best open-source voice cloning model with emotional control? (Worked with VibeVoice 7B & 1.5B) | 8 | Hi everyone,
I’ve been working with open-source voice cloning models and have some experience
with \*\*VibeVoice 7B and 1.5B\*\*, but I’m still looking for something that delivers
\*\*better emotional expression and natural prosody\*\*.
My main goals:
\- High-quality voice cloning (few-shot or zero-shot)
\- Strong emotional control (e.g., happy, sad, calm, expressive storytelling)
\- Natural pacing and intonation (not flat or robotic)
\- Good for long-form narration / audiobooks
\- Open-source models preferred
I’ve seen mentions of models like XTTS v2, StyleTTS 2, OpenVoice, Bark, etc.,
but I’d love to hear from people who’ve used them in practice.
\*\*What open-source model would you recommend now (2025) for my use case\*\*, and
why? Any comparisons, demos, or benchmarks would be awesome too.
Thanks in advance!
| 2026-01-19T11:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qh1b8e/best_opensource_voice_cloning_model_with/ | Junior-Media-8668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh1b8e | false | null | t3_1qh1b8e | /r/LocalLLaMA/comments/1qh1b8e/best_opensource_voice_cloning_model_with/ | false | false | self | 8 | null |
Ollama nooby in need of help | 0 | I tried to pull a model from Hugging Face with Ollama on a Raspberry Pi 5, just playing around, please don't judge, and I am constantly getting the Error 429 rate limit from your address. I already tried adding a Hugging Face token, which apparently didn't change anything; still getting the error. On the Pi, I was testing Raspberry Pi OS and Ubuntu 25.10.
I also tested the py over a hotspot to see if the IP was the issue, but as with the token, nothing changed. Also waiting a day or two to see if there was a used-up rate limit didn’t change the result, still 429.
Interestingly enough, I am running Ollama on my PC, running on Ubuntu, and pulling the same model does not result in the same Error 429.
Model I tried to pull: [https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF/blob/main/Qwen3-30B-A3B-Instruct-2507-Q3\_K\_S-2.70bpw.gguf](https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF/blob/main/Qwen3-30B-A3B-Instruct-2507-Q3_K_S-2.70bpw.gguf) | 2026-01-19T11:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qh1a0c/ollama_nooby_in_need_of_help/ | Schaksie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh1a0c | false | null | t3_1qh1a0c | /r/LocalLLaMA/comments/1qh1a0c/ollama_nooby_in_need_of_help/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=108&crop=smart&auto=webp&s=b7eda53ae713e7ee335094724ed0f69068c70df9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=216&crop=smart&auto=webp&s=02c322ea3a8f16613c2245e6cf81e6df1a7b465e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=320&crop=smart&auto=webp&s=cc75289b0aeb777f7c59666699d5ab24eb8a45f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=640&crop=smart&auto=webp&s=4f41f91ef313f3081f3fef2aae5b22b829a297b1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=960&crop=smart&auto=webp&s=262f430431f51c109ccf4d954a3cf5491a1e63f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?width=1080&crop=smart&auto=webp&s=263f5017c60977f0efd37f49dc684abbd47f87fc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RVaP-pnlv_VwhqyUv91G_E3VXHpaGo7phwzNVK7nnHI.png?auto=webp&s=e595dc38771b061795b8dad3841540bb2b3304f5', 'width': 1200}, 'variants': {}}]} |
Demo: On-device browser agent (Qwen) running locally in Chrome | 42 | Hey guys! wanted to share a cool demo of LOCAL Browser agent (powered by Web GPU Liquid LFM & Alibaba Qwen models) opening the All in Podcast on Youtube running as a chrome extension.
Source: [https://github.com/RunanywhereAI/on-device-browser-agent](https://github.com/RunanywhereAI/on-device-browser-agent) | 2026-01-19T10:48:29 | https://v.redd.it/ljp6zwzfcaeg1 | thecoder12322 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qh10q9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ljp6zwzfcaeg1/DASHPlaylist.mpd?a=1771411722%2CZTY0NzhiZmE3NjI3Y2YwYjMzOGQ0NGZhZDk1NmU0MjcyYTkwNGRlY2ViZTVjMmI4MWUyMmUzNjVlZWVlNmRhZQ%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/ljp6zwzfcaeg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ljp6zwzfcaeg1/HLSPlaylist.m3u8?a=1771411722%2CY2VmY2UxOGU3MWQxNWIxMjYxOTUzZTFlNDk2ZDNkZjcwMTQ0ZjNkZDk2ZjI0YWFlMzUyZWVmOGQ3NTNhYjEzNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ljp6zwzfcaeg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1660}} | t3_1qh10q9 | /r/LocalLLaMA/comments/1qh10q9/demo_ondevice_browser_agent_qwen_running_locally/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=108&crop=smart&format=pjpg&auto=webp&s=115b84c8d4149dffc567fef7db082c8b35532e3d', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=216&crop=smart&format=pjpg&auto=webp&s=96770236f2c7d99ce501aa5ca8d46dfc047ffb16', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=320&crop=smart&format=pjpg&auto=webp&s=bc43584083ba162dbccf734ac1405a33ef986dc9', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=640&crop=smart&format=pjpg&auto=webp&s=55da0c8136c6864d61879285df7d66b829ea40d8', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=960&crop=smart&format=pjpg&auto=webp&s=f6855df1c0f163f96d012c0cafa09066e436c0ff', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8289d9ea6f6c7be92b9a69c8f0c72d03d2626492', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/MzNzcDNuMGdjYWVnMcLtcBYDX5SdJ9uQfQaEUwyxr5ovu1B5qUxuDFDhwgNH.png?format=pjpg&auto=webp&s=ff0350b54997038a0e5276cf42f696d7f850ccbc', 'width': 3320}, 'variants': {}}]} | |
What's the real price of Vast.ai? | 0 | I've been eyeing [vast.ai](http://vast.ai/) for inference. Let's say price is 0.05$/hour for GPU. But there's got to be some other costs like storage or bandwidth. There is no freaking way that it's just 0.05$/hour for everything.
For you who use [vast.ai](http://vast.ai/) can you please give me examples of your cost and what exactly do you pay? | 2026-01-19T10:45:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qh0ytv/whats_the_real_price_of_vastai/ | teskabudaletina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh0ytv | false | null | t3_1qh0ytv | /r/LocalLLaMA/comments/1qh0ytv/whats_the_real_price_of_vastai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]} |
I made a Top-K implementation that's up to 20x faster than PyTorch CPU (open source) | 141 | Spent way too long optimizing Top-K selection for LLM sampling and finally hit some stupid numbers.
**TL;DR:** AVX2-optimized batched Top-K that beats PyTorch CPU by 4-20x depending on vocab size. Sometimes competitive with CUDA for small batches.
**Benchmarks (K=50):**
* Vocab=32K: 0.043ms vs PyTorch's 0.173ms (4x faster)
* Vocab=128K: 0.057ms vs PyTorch's 0.777ms (13x faster)
* Vocab=256K: 0.079ms vs PyTorch's 1.56ms (20x faster)
Integrated it into llama.cpp and got 63% faster prompt processing on a 120B MoE model (81→142 tokens/sec).
Uses adaptive sampling + AVX2 SIMD + cache-optimized scanning. Has fast paths for sorted/constant inputs. Single-pass algorithm, no GPU needed.
Includes pre-built DLLs and llama.cpp implementation (for windows).
GitHub: [https://github.com/RAZZULLIX/fast\_topk\_batched](https://github.com/RAZZULLIX/fast_topk_batched)
Would love feedback or roasting, whichever you prefer. | 2026-01-19T10:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qh0yq8/i_made_a_topk_implementation_thats_up_to_20x/ | andreabarbato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh0yq8 | false | null | t3_1qh0yq8 | /r/LocalLLaMA/comments/1qh0yq8/i_made_a_topk_implementation_thats_up_to_20x/ | false | false | self | 141 | {'enabled': False, 'images': [{'id': 'J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=108&crop=smart&auto=webp&s=6232f6aecdaefd63083867272fab23fbf4ab02e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=216&crop=smart&auto=webp&s=a67f3969032f68a71657e65c29555ccef82dc2a0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=320&crop=smart&auto=webp&s=871c4481d3c85b438583be1b8e0fa4a0e70736f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=640&crop=smart&auto=webp&s=ba9a47f066dc203baa6cf0491006af13118f8051', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=960&crop=smart&auto=webp&s=34734c0d16a3a34ad1ec124ba3725d3f813f19ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?width=1080&crop=smart&auto=webp&s=376233a86cdab5b1eb4cd2965fb72b2263035bd4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J-l8_Sa0L-esAlmeBM7THYKakcBVECgMPp0mRdB1aA4.png?auto=webp&s=a33c06f177838c6bf1761d85a3b74c49717ef8ee', 'width': 1200}, 'variants': {}}]} |
We built a small GPU platform and are looking for early users’ feedback | 5 | Hi everyone,
We’re a small team building a GPU platform mainly for our own model training and inference experiments. While testing it internally, we realized we have spare GPU capacity sitting idle.
Instead of letting it go unused, we’d love to open it up to the community and get some real-world feedback. We’re offering **free compute credits** in exchange for honest usage feedback (what works, what breaks, what’s annoying).
Currently available GPUs include **RTX 5090 and Pro 6000**, suitable for LLM inference, fine-tuning, or other ML workloads.
If you’re interested in trying it or have specific workloads in mind, feel free to comment or DM me. I’m happy to answer technical questions as well.
https://preview.redd.it/m9j4c7ud5aeg1.png?width=1020&format=png&auto=webp&s=6caed0d0b7af9cf0edea8b9471afe3e01d94d625
| 2026-01-19T10:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qh0abp/we_built_a_small_gpu_platform_and_are_looking_for/ | Nora_ww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qh0abp | false | null | t3_1qh0abp | /r/LocalLLaMA/comments/1qh0abp/we_built_a_small_gpu_platform_and_are_looking_for/ | false | false | 5 | null | |
What would you build and do with a $15k budget? | 0 | Looks like Apple is giving up on the Mac Pro. It’s currently 4am and I just stumbled onto the Studio offering 512gb memory and 16gb storage. Anyway, I’m coming into some money and I want to get into this space and eventually handle large models.
I currently have an M3 Max MacBook Pro with 128gb memory and 2tb storage.
What would you do with a $15k budget? You can choose the Mac Studio, or to build your own PC.
Mac Studio
Configuration
• Apple M3 Ultra chip with 32-core CPU,
80-core GPU, 32-core Neural Engine
• 512GB unified memory
• 16TB SSD storage
• Front: Two Thunderbolt 5 ports, SDXC
card slot
• Back: Four Thunderbolt 5 ports, two USB-A ports, HDMI port, 10Gb Ethernet port, headphone jack
• Accessory Kit
//////////
Vs what Gemini recommended with same budget
//////////
CPU: AMD Ryzen Threadripper 7970X (32-Core, 64-Thread, Zen 4)
• GPU: NVIDIA RTX 6000 Ada Generation (48GB GDDR6 VRAM, Professional Grade)
• Motherboard: ASUS Pro WS TRX50-SAGE WIFI (Supports PCIe 5.0 & Quad-Channel Memory)
• Memory (RAM): 512GB (8 x 64GB) DDR5-5600 ECC Registered RDIMM
• Primary Storage: 16TB NVMe Gen5/Gen4 SSD Array (Configured as 4 x 4TB Samsung 990 Pro or similar)
• Case: Fractal Design North XL (Mesh version for maximum airflow)
• Power Supply (PSU): Seasonic Prime TX-1600 (1600W, 80+ Titanium Efficiency)
• CPU Cooler: Arctic Liquid Freezer III 420mm (AIO Liquid Cooler)
• Case Fans: 6x Noctua NF-A14 PWM (Premium high-static pressure fans)
• Operating System: Windows 11 Pro or Linux (Ubuntu 24.04 LTS recommended) | 2026-01-19T09:40:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qgzvtc/what_would_you_build_and_do_with_a_15k_budget/ | ThePatientIdiot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgzvtc | false | null | t3_1qgzvtc | /r/LocalLLaMA/comments/1qgzvtc/what_would_you_build_and_do_with_a_15k_budget/ | false | false | self | 0 | null |
Good NSFW LLM for writing story on a strix halo? | 0 | Hello. I just myself one of those strix halo apu's with shared memory (mine is a 64gb model which I split in 32gb as vram and 32 as system ram). I want to test out some local llm with it to see what it's capable off. I am new to this and heard that some models are heavy, other more lightweight... Basically looking for something that can write mature/explicit content (like a fantasy anime hentai, or explicit s** scenes). What options are out there? There are literally millions of models | 2026-01-19T09:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qgzjpf/good_nsfw_llm_for_writing_story_on_a_strix_halo/ | GiumboJet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgzjpf | false | null | t3_1qgzjpf | /r/LocalLLaMA/comments/1qgzjpf/good_nsfw_llm_for_writing_story_on_a_strix_halo/ | false | false | nsfw | 0 | null |
How do you differentiate between situational variance and actual behavioral drift in LLM evaluations? | 0 | In several evaluation contexts, we repeatedly encountered the same problem:
LLMs exhibit altered behavior, but it is often unclear whether we are observing:
(a) context-dependent variance,
(b) prompt/role artifacts, or
(c) actual, systematic drift.
In practice, this is often summarized under the vague term "model drift." This complicates comparability, replication, and safety discussions.
LLMs exhibit altered behavior, but it is often unclear whether we are observing:
(a) context-dependent variance,
(b) prompt/role artifacts, or
(c) actual, systematic drift. We have therefore attempted to formulate a practical taxonomy in a purely descriptive manner:
– no assumptions about causes,
– no normative evaluation,
– but categories, characteristics, and typical triggers that actually occur in everyday evaluation practice.
The current version is 0.1, explicitly incomplete, and intended as a basis for discussion.
I am particularly interested in the following points:
• Where would you combine or separate categories?
• What forms of drift do you regularly encounter that are missing here?
• At what point would you even speak of "drift"—and at what point would you no longer use it?
If relevant: We have published the taxonomy openly on Zenodo (CC BY-NC), including German and English versions.
Link: [https://doi.org/10.5281/zenodo.18294771](https://doi.org/10.5281/zenodo.18294771)
This is not intended to be exhaustive; we primarily want to see where this structure works and where it breaks down.
Thanks 🍀
aireason.eu
Femfight3r 💫 | 2026-01-19T09:05:43 | https://www.reddit.com/r/LocalLLaMA/comments/1qgzbdv/how_do_you_differentiate_between_situational/ | ParadoxeParade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgzbdv | false | null | t3_1qgzbdv | /r/LocalLLaMA/comments/1qgzbdv/how_do_you_differentiate_between_situational/ | false | false | self | 0 | null |
A look behind the scenes - building 3 GH200 systems in the workshop | 5 | 2026-01-19T08:42:22 | GPTshop__dot__ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qgyxk5 | false | null | t3_1qgyxk5 | /r/LocalLLaMA/comments/1qgyxk5/a_look_behind_the_scenes_building_3_gh200_systems/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'D9lgkrk-RZRCfrrLZJdMYX3VwQ06uDSKGWED5B-CjZA', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=108&crop=smart&auto=webp&s=04ae2e257f6d1967b8b4c6b90c0d9387f23d4b45', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=216&crop=smart&auto=webp&s=db6f8a3b2f1c3e9a910e6b6ce46926a6d730ba86', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=320&crop=smart&auto=webp&s=963d0e2010dfc139d13f6538e14826783b2c006b', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=640&crop=smart&auto=webp&s=8714acc93f4ac145cd8d669bc6c389b514d2e1d0', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=960&crop=smart&auto=webp&s=480180c08a0e58fa6d36239e12f7ad04897f7e0b', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?width=1080&crop=smart&auto=webp&s=ddc3fa5cebfc97b05b71cd0d0c92de9f1914c064', 'width': 1080}], 'source': {'height': 3456, 'url': 'https://preview.redd.it/o8kzjyheq9eg1.jpeg?auto=webp&s=9fa5c203513922489da6df3d5490cc4eca9ebddf', 'width': 4608}, 'variants': {}}]} | |||
Agent Zero Discussion | 4 | so i discovered Agent Zero 2 days ago, got it up and running saw what it can do with models like Claude opus \*top high end tier\*.
but id like to run it locally i have 84 gb of VRAM (3x 3090, 1 4070ti) 96 gb of RAM ,CPU i7 13k , i tried gptoss 120b as chat model, it was ok but very restrictive, and i used llama 3.1 8b for utility ( i think this was my biggest problem need to try today a bit stronger model than that) and nomic embeder which is also i think very buggy with it (but i wanted GPU embeding ) because i noticed the longest step was for me the memmory processing step, yet the most capable almost as calude opus trial youtube video i saw was GLM4.7 of course Q2KL quantize and that is almost all my VRAM (81 gb), its really capable as i saw but halucinate i think because of the low quant and is running at 7-11 Tokens /sec on lammacpp.
i also try to utilize the bigger model on lamacpp since its faster and the others on ollama so i dont have to keep loading and unloading models in the GPU for speed.
im thinking of trying GLM Air since its a bit smaller so i can run a better Quant on my hardware.
but my frustration untill now that it starts good (with GLM 4.7) and get some really good planning and start working but at somepoint it starts halucinations and stops and i dont get any result and tbh also untill now i didnt really get any materialistic result i even tried to ask it to make a notepad txt file and write a word in it even that didnt get to work somehow xD, keeps halucinating and repeating its thoughts with gpt oss 120b i didnt try this simple task with GLM yet.
but i just wanted to open a discussion and see what people use if there is better opensource apps like Agentzero, whats their model combination for Agent zero, Context and parameters so it works reliably localy, recommendations for my written hardware.
| 2026-01-19T08:32:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qgyrt2/agent_zero_discussion/ | Noobysz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgyrt2 | false | null | t3_1qgyrt2 | /r/LocalLLaMA/comments/1qgyrt2/agent_zero_discussion/ | false | false | self | 4 | null |
We tested if Anthropic's Contextual Retrieval still works in 2026 | 1 | [removed] | 2026-01-19T08:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qgypa1/we_tested_if_anthropics_contextual_retrieval/ | DatapizzaLabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgypa1 | false | null | t3_1qgypa1 | /r/LocalLLaMA/comments/1qgypa1/we_tested_if_anthropics_contextual_retrieval/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=108&crop=smart&auto=webp&s=9f752de1356f75a626eb04ab6e27ad70f0d13d82', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=216&crop=smart&auto=webp&s=45bb3b410531f89df2880b54e7ce5b9cdef6a760', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=320&crop=smart&auto=webp&s=65f56044d8da4bb43682fb1570efc2d2c6d54b88', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=640&crop=smart&auto=webp&s=40dfbae3947d96d45263d794ae5426b261117e4f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=960&crop=smart&auto=webp&s=210dd20ea8086373fe6d8fa8f7b63770fa3232b5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=1080&crop=smart&auto=webp&s=26757af952e68e25e5aa48aa7cc037f7b54e3be3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?auto=webp&s=877d9473841c936a355a57cd2bf5fad2a4b85857', 'width': 1920}, 'variants': {}}]} |
I asked about automated game development earlier, but my question was confusing. Let me try again. | 0 | Hi everyone,
My programming experience: I built an automated trading bot that's currently running live. But here's the thing - **I didn't code it myself**. It took about 4 months, and I made it using Claude AI and Gemini AI. **I don't know how to program.**
I had Claude AI write my previous post here, and I think the message got lost in translation.
**Here's what I'm actually curious about:**
I heard that with DGX Spark, you can connect a pipeline to a game engine so that AI can directly interact with the engine.
So I'm wondering - **couldn't AI build a game directly this way?**
I know there's been a lot of discussion about prompt limitations and context windows. I've been thinking about that too.
**Here's my idea:**
1. AI creates a master plan with 10-12 major steps
2. For step 1, AI breaks it down into 10 sub-steps (new prompt)
3. For each sub-step, AI creates detailed micro-steps (new prompt)
4. Save this entire plan
5. Then AI executes from the top, one step at a time
* Complete step [1.1.1.1](http://1.1.1.1)
* Document files created + explanations
* Move to step [1.1.1.2](http://1.1.1.2) (new prompt)
* And so on...
This way, you'd build both documentation and a blueprint as you go.
**Wouldn't this solve the AI memory problem?**
When I asked AI about this approach, it said it's feasible.
I wanted to ask Reddit too - I'm just a curious student interested in AI. Please understand if I seem inexperienced \^\^; | 2026-01-19T08:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/1qgynao/i_asked_about_automated_game_development_earlier/ | AdNaive1169 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgynao | false | null | t3_1qgynao | /r/LocalLLaMA/comments/1qgynao/i_asked_about_automated_game_development_earlier/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=108&crop=smart&auto=webp&s=49fa0024dd9ef85be5871ac4533e6ae387ad7573', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=216&crop=smart&auto=webp&s=002184ed965d7c2961ad39e2729e2ad7f1c7ffa3', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=320&crop=smart&auto=webp&s=b97e68d2d1d5223a02d356ba0a49734418540bb5', 'width': 320}, {'height': 374, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=640&crop=smart&auto=webp&s=0100daf0af00c5c36127b0ee9b01846f20d9cf7d', 'width': 640}, {'height': 561, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=960&crop=smart&auto=webp&s=785ac22402cde753ee4cca15cca9aa61d76b196b', 'width': 960}, {'height': 631, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?width=1080&crop=smart&auto=webp&s=f6d886a20ad4346e1bf8238ed5a1e400a969d528', 'width': 1080}], 'source': {'height': 1603, 'url': 'https://external-preview.redd.it/tDrJoI0K6YMbmqwxef5nn_a-IPKoWbPl_sjQt6OPDp8.png?auto=webp&s=4cb452ce099c0f58952cfdb4d75721615e89f7ac', 'width': 2743}, 'variants': {}}]} |
Would Anthropic Block Ollama? | 0 | Few hours ago, Ollama announced following:
Ollama now has Anthropic API compatibility. This enables tools like Claude Code to be used with open-source models.
Ollama Blog: [Claude Code with Anthropic API compatibility · Ollama Blog](https://ollama.com/blog/claude)
Hands-on Guide: [https://youtu.be/Pbsn-6JEE2s?si=7pdAv5LU9GiBx7aN](https://youtu.be/Pbsn-6JEE2s?si=7pdAv5LU9GiBx7aN)
For now it's working but for how long? | 2026-01-19T07:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1qgxtl2/would_anthropic_block_ollama/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgxtl2 | false | null | t3_1qgxtl2 | /r/LocalLLaMA/comments/1qgxtl2/would_anthropic_block_ollama/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]} |
JARVIS progress report | 0 | [deleted] | 2026-01-19T07:32:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qgxsgc | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/y98v2hc3e9eg1/DASHPlaylist.mpd?a=1771406422%2CM2VhMDIwYTA1MTNkZTdkY2ZlOGQ3YWRmN2RmNmMzMGZmODM3ODIxNGI1NDE3NWMwNDMzMzg5NzdjZGY0YmIwNw%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/y98v2hc3e9eg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/y98v2hc3e9eg1/HLSPlaylist.m3u8?a=1771406422%2CZjU2ZjE1Y2QyNDJhYzk0MGNhODY0ZGU1YTdiNTY3ZjBlNDAxY2E5NDI1YzMzNjQ0MDNiNmIxZjU5OTEwMmJjMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y98v2hc3e9eg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1qgxsgc | /r/LocalLLaMA/comments/1qgxsgc/jarvis_progress_report/ | false | false | default | 0 | null | ||
JARVIS progress report | 1 | [removed] | 2026-01-19T07:28:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qgxpk0 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/d199npytc9eg1/DASHPlaylist.mpd?a=1771406354%2CMzUxYTcyZTEzNDI2Mjg2ZWM0ZGYyY2Q0NzNjMzVkNjc4M2Y3Y2NiMzlkYmZhYzFiYWQyOGQ5NzZmMzczMDRhOQ%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/d199npytc9eg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/d199npytc9eg1/HLSPlaylist.m3u8?a=1771406354%2CZjZkNTE2NTNmYjRiMmFiNTFjMmNmZjZiMjBlOWViNGQxMjczMWZlZTY4ZGExZTViZWFkZDE2ODFjNjQzZmJmNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/d199npytc9eg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1qgxpk0 | /r/LocalLLaMA/comments/1qgxpk0/jarvis_progress_report/ | false | false | default | 1 | null | ||
built a (free) photo based nutrition tracker for iOS, with local LLM support | 4 | 2026-01-19T07:13:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qgxgns/built_a_free_photo_based_nutrition_tracker_for/ | Agusx1211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgxgns | false | null | t3_1qgxgns | /r/LocalLLaMA/comments/1qgxgns/built_a_free_photo_based_nutrition_tracker_for/ | false | false | 4 | null | ||
3x3090 + 3060 in a mid tower case | 66 | Decided to go all out and max out this desktop. I was lucky to find 3090 cards for around 600 usd, over a period of 3 months and decided to go for it.
The RAM was a bit more expensive, but I had 64 bought before the price spiked.
I didn’t want to change the case, because I through it’s a high quality case and it would be a shame to toss it. So made the most out of it!
Specs:
* Fractal Define 7 Mid Tower
* 3x3090 + 1x3060 (86gb total, but 72gb VRAM main)
* 128GB DDR4 (Corsair 4x32)
* Corsair HX1500i 1500w (has 7 PCIe power cables)
* Vertical mounts are all cheap from AliExpress
* ASUS Maximus XII Hero — has only 3x PCIe16x, had to deactivate the 2nd NVMe to use the 3rd PCIe16x in 4x, the 4th GPU (the 3060) is on a riser from a PCIe1x.
* For drives, only one NVMe of 1TB works, I also bought 2x2TB SSDs that I tried in RAID but the performance was terrible (and they are limited to 500mb from the SATA interface, which I didn’t know…) so I keep them as 2 drives.
Temperatures are holding surprisingly well. The gap between the cards is about the size of an empty PCIe slot, maybe a bit more.
Temperature was a big improvement compared to having just 2x3090 stacked without any space between them — the way the motherboard is designed to use them.
In terms of performance 3x3090 is great! There are great options in the 60-65gb range with the extra space to 72gb VRAM used for context.
I am not using the RAM for anything other than to load models, and the speed is amazing when everything is loaded in VRAM!
Models I started using a lot:
* gpt-oss-120b in MXFP4 with 60k context
* glm-4.5-air in IQ4_NL with 46k context
* qwen3-vl-235b in TQ1_0 (surprisingly good!)
* minimax-M2-REAP-139B in Q3_K_S with 40k context
But still return a lot to old models for context and speed:
* devstral-small-2-24 in Q8_0 with 200k context
* qwen3-coder in Q8 with 1M (!!) context (using RAM)
* qwen3-next-80b in Q6_K with 60k context — still my favourite for general chat, and the Q6 makes me trust it more than Q3-Q4 models
The 3060 on the riser from PCIe1x is very slow at loading the models, however, once it’s loaded it works great! I am using it for image generation and TTS audio generation mostly (for Open WebUI).
Also did a lot of testing on using 2x3090 via normal PCIe, with a 3rd card via riser — it works same as normal PCIe! But the loading takes forever (sometimes over 2-3 minutes) and you simply can’t use the RAM for context because of how slow it is — so I am considering the current setup to be “maxed out” because I don’t think adding a 4th 3090 will be useful.
| 2026-01-19T06:59:39 | https://www.reddit.com/gallery/1qgx83t | liviuberechet | reddit.com | 2026-01-19T07:13:57 | 0 | {} | 1qgx83t | false | null | t3_1qgx83t | /r/LocalLLaMA/comments/1qgx83t/3x3090_3060_in_a_mid_tower_case/ | false | false | 66 | null | |
Is Local Coding even worth setting up | 77 | Hi I am new to Local LLM but have been having a lot of issues setting up a local LLM coding environment so wanted some suggestions from people.I have a 5070 ti (16gb vram).
I have tried to use Kilo code with qwen 2.5 coder 7B running through ollama but the context size feels so low that it finishes the context within a single file of my project.
How are other people with a 16gb GPU dealing with local llm? | 2026-01-19T06:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qgwup8/is_local_coding_even_worth_setting_up/ | Interesting-Fish6494 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgwup8 | false | null | t3_1qgwup8 | /r/LocalLLaMA/comments/1qgwup8/is_local_coding_even_worth_setting_up/ | false | false | self | 77 | null |
Performing open-heart surgery on my AI agent while it’s still awake. | 0 | Built a small agent to test out some coding workflows. I wanted to see if it could handle the meta-task of editing its own source code while running.
It actually found the `index.html` and patched itself without crashing the server. There is something deeply satisfying (and slightly unnerving) about watching this.
Next step: Asking it to delete its own safety rails. (Kidding... mostly). | 2026-01-19T06:14:15 | https://v.redd.it/1iygm0t409eg1 | HumbleThought123 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qgwehy | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1iygm0t409eg1/DASHPlaylist.mpd?a=1771405288%2CMmNlNDhmNDIyMDY4MTAyOGIxNjE5ZGUyM2U1MGE2ZTg2ZTA5ODE3MGMzYmM1YzM5MmQ5Yjk3YzA4NDgxNjZiZQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/1iygm0t409eg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 992, 'hls_url': 'https://v.redd.it/1iygm0t409eg1/HLSPlaylist.m3u8?a=1771405288%2CYjlkOGRhODUzYmIzZWVjMzNkMGQ1N2Q4YjM3OGViMWY5ZDYzMWI5MWEwZGQ0ZGE3Y2QzNTA1NDkwNWY2MTMzZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1iygm0t409eg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qgwehy | /r/LocalLLaMA/comments/1qgwehy/performing_openheart_surgery_on_my_ai_agent_while/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=59d82e7c85fb3868a29457d839ff19477e2e34da', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=d4433aa92841dbda115feaefb14ea466b45c8f55', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=5453770a21b7901c30f4ec8adfa9aef92f6e2084', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=dccb8e7c13a530e238a6dfdd55a6d587b76bb60a', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=323f5a51c2f9b1710aa2b1afec5f62d2ea5f7075', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=503fc7f234e626d83eed456136c305d66b83d696', 'width': 1080}], 'source': {'height': 1544, 'url': 'https://external-preview.redd.it/OXhpanQ5dzQwOWVnMZOmUCIIu2bWKby-ZQyxS5MSHGuN95rY67-MiEXN1TYJ.png?format=pjpg&auto=webp&s=d134fe153346b080852c380a6dca47830cb3384f', 'width': 2988}, 'variants': {}}]} | |
Breakthrough Snapshot in our research on our experimental AI architecture: | 0 | This is a live, local-run status note intended for quick verification. It is not a benchmark claim.
**In this new version** we **managed to fix the main problems** and **enabled all the parameters**. The model learns. To see the actual "evolution" you need to take mutliple variables into account - **ONLY LOSS is NOT enough!**
**The model will speed up (param: scale) if the loss falls for faster training**, uses intuition (param cadence) to slow pointers, raw delta param as FOV for the input data. So the loss will look stable for most of the run however you can see that trainng will speed up and cadence over time will increase.
***The test is a SEQUENTIAL MNIST. The MNIST input is resized to 16x16, flattened to a sequence length of 256 scalars per sample. Evaluation uses a disjoint test subset from MNIST(train=False), confirmed by logs showing zero overlap. This is sort of a WORST CASE SCENARIO for the model.***
* Dataset: `seq_mnist`
* Slot width: `TP6_SLOT_DIM=64`
* Controls: AGC + velocity-aware cadence gating + adaptive inertia enabled
* User-reported best loss (local log): \~2.20 around step \~5.8k
* **Infinity-resilience observation (local):** `grad_norm(theta_ptr)` hit `inf` and peaked at `4.2064e+18`, yet the run continued without NaN and kept learning (see `logs/current/tournament_phase6.log`, around steps \~4913–4930).
**How to verify on your machine:**
* Run with the same config and watch your log for a best-loss line.
* The log line format is `step XXXX | loss Y.YYYY | ...`.
Repo link:
[https://github.com/Kenessy/PRIME-C-19](https://github.com/Kenessy/PRIME-C-19)
**#1 WARNING:** The project is in pre alpha/proof of concept stage. It is not intended by any means a "download, click and run" - it is a research prototype. Pls keep this in mind. Bugs, breaks, crashes can happen.
**#2 WARNING:** We tuned the params for this test. Although it SHOULD work for mosts tests, at this point our knowledge of this higher dimensional space is limited - we only know that intuition that works on standard neural nets doesnt neccessarily hold up (see loss drop) so more experimentation is needed.
**#3 WARNING:** This is a strictly NON COMMERCIAL product. Meant for research and eduactional purposes ONLY. It is behind a polyform noncommercial licence.
**The main things we managed to more or less nail down:**
* Core thesis: intelligence is not only compute or storage, but navigation efficiency on a structured manifold. "Thinking" is the control agent (Pilot) traversing the Substrate (encoded geometry).
* Interestingly this **modell doesnt depend mostly on VRAM -** it offers mathematically infinite storage, the main limiting factor is pointer accuracy - FP32/64 tested. \[easy to understand logic: vectors pointing towards infinitely folded spiral, linking a point of manifold space with feature space . Aka pointers pointing into infinite space towards a location, if pointers are weak this will be "blurry" for the mode\] Dont have acces to higher accuracy pointer device, thus the rest needs to be tested later or by others. It offers a significant jump in pointer accuracy albeit exact % are not yet conclusional. My assumption that a sufficiently high precision (FP512 or FPS1024) pointers could hold LLM levels of info on a mobil hardrware during inference pass - training is still time consuming albeit VRAM and GPU efficient.
* Pilot-Substrate dualism: the Substrate holds structure; the Pilot locates it. A strong Substrate with a poorly tuned Pilot can be dysfunctional, so both must align.
* Law of topological inertia: momentum and friction govern the regime of navigation. A "walker" verifies step-by-step; a "tunneler" can skip across gaps when inertia is aligned. This is framed as control dynamics, not biology.
* Singularity mechanism (insight): under low friction and aligned inertia, the Pilot converges rapidly toward the Substrate's structure, moving from search to resonance. This remains a hypothesis.
* Scaling rebuttal (soft form): larger substrates expand capacity but also search entropy unless the Pilot is physics-aware. We expect self-governing inertia and cadence control to matter alongside parameter count.
**Now our main goal is to reach a high accuracy on a "worst case scenario" test like SEQUENTIAL MNIST for our model, before moving on with iterations. This is a slow but stable process (civilian GPUs).**
**Future Research (Speculative)**
These are ideas we have not implemented yet. They are recorded for prior art only and should not be treated as validated results.
* Hyperbolic bundle family: seam-free double-cover or holonomy-bit base, a hyperbolic scale axis, structure-preserving/geodesic updates (rotor or symplectic), and laminarized jumps. High potential, full redesign (not implemented).
* Post-jump momentum damping: apply a short cooldown to pointer velocity or jump probability for tau steps after a jump to reduce turbulence. This is a small, testable idea we may prototype next.
* A “God-tier” geometry exists in practice: not a magical infinite manifold, but a non-commutative, scale-invariant hyperbolic bulk with a ℤ₂ Möbius holonomy and Spin/rotor isometries. It removes the torsion from gradients, avoids Poincaré boundary pathologies, and stabilizes both stall-collapse and jump-cavitation - to exactly lock in the specific details is the ultimate challenge of this project. | 2026-01-19T06:01:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qgw5mv/breakthrough_snapshot_in_our_research_on_our/ | Acrobatic-Bee8495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgw5mv | false | null | t3_1qgw5mv | /r/LocalLLaMA/comments/1qgw5mv/breakthrough_snapshot_in_our_research_on_our/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=108&crop=smart&auto=webp&s=f4f94cde1b9a8030a10581800c710c9fcc0720e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=216&crop=smart&auto=webp&s=df25a9238ef0fb084874a6737cec9a723f359146', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=320&crop=smart&auto=webp&s=acec2a4c6cfbb11dd533beee4c67c5408f93b49c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=640&crop=smart&auto=webp&s=2eda06719997b02e11dd5e35df6f46d00a6a8db4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=960&crop=smart&auto=webp&s=ebea37ba8eb05a6b6b0977ed0b9c26b75dffb532', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?width=1080&crop=smart&auto=webp&s=7b699ed2526145d323fa0780eaf2470f461b9cea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H_-popURrwh4DYxTmVCYc2Gtu8c92CI5Ca65O97O1HQ.png?auto=webp&s=1d9fe5c5909cc4b3194f2d4735b34cf22d6aa317', 'width': 1200}, 'variants': {}}]} |
Have you gotten used to AI laughing or other emotions? | 0 | So the question is, _i dont know how to say this but_, when AI laughs, do you laugh with it? For me, nothing that AI speaks is natural, even today, even since using gpt-3 in openai playground. Even today when AI says something which is supposed to be funny and it displays emotes, my mind is super biased thinking 'ok so this is not natural'. If something much less funny was said by a real person, i will instantly smile reading it, but i have no such thing coming to my mind when AI acts as if its worried, or its making something sound so funny.
I just dont want it to sound like human because its not, i just want a fully AI mode that always understands that its AI and its talking to a human which is very very different. But I also understand that different people can want different things, different selection of what kind of 'mode' ai should work in, some people will really want it to be just like humans, and thats great if thats what they want.
I had to correct AI sometimes when in some answer it said something like "we always think ..", so I said its not 'we', you can say 'humans always think ..', so while it did correct it, i felt there was a strange emotion to it from its side (and i maybe overthinking it), and I had to say in response 'i wasnt trying to be rude...'.
Cant even begin to imagine how things will be when this AI takes into the robot forms and talking to us in real life.
But anyway, just wanted to know how others think about this.
| 2026-01-19T05:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qgw04u/have_you_gotten_used_to_ai_laughing_or_other/ | ab2377 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgw04u | false | null | t3_1qgw04u | /r/LocalLLaMA/comments/1qgw04u/have_you_gotten_used_to_ai_laughing_or_other/ | false | false | self | 0 | null |
Learning multi-agent systems. Here's the problem nobody mentions in tutorials. | 1 | [removed] | 2026-01-19T05:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qgvymq/learning_multiagent_systems_heres_the_problem/ | LuckyYouth6893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgvymq | false | null | t3_1qgvymq | /r/LocalLLaMA/comments/1qgvymq/learning_multiagent_systems_heres_the_problem/ | false | false | self | 1 | null |
SEDAC v5 - Safe Semantic Entropy Dynamic Acceleration for LLMs | 0 | SEDAC (Semantic-Entropy-Dynamic-Acceleration-Core) is a dynamic acceleration framework that combines semantic information and entropy metrics. By analyzing the semantic features and information entropy of the input/state, it intelligently determines acceleration strategies (such as hierarchical downsampling, operator replacement, and scheduling priority adjustment), significantly improving inference/runtime efficiency while maintaining critical semantic performance. It is suitable for applications requiring a dynamic trade-off between performance and accuracy (e.g., inference acceleration, online service optimization, and resource-constrained devices).
[https://github.com/CARBON-XXX/Semantic-Entropy-Dynamic-Acceleration-Core-SEDAC.git](https://github.com/CARBON-XXX/Semantic-Entropy-Dynamic-Acceleration-Core-SEDAC.git) | 2026-01-19T05:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qgve72/sedac_v5_safe_semantic_entropy_dynamic/ | Former_Egg_6520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgve72 | false | null | t3_1qgve72 | /r/LocalLLaMA/comments/1qgve72/sedac_v5_safe_semantic_entropy_dynamic/ | false | false | self | 0 | null |
BFL FLUX.2 Klein tutorial and some optimizations - under 1s latency on an A100 | 4 | A quick tutorial on running FLUX.2 Klein (the new BFL model from last week). Here's what we're seeing on A100:
* **4B distilled**: \~0.9s per image (1024x1024, 4 steps) with torch.compile + fused QKV
* **9B distilled**: \~1.8s per image with same optimizations
These models are pretty good and fast for basic image generation (the 4B model sometimes messes up the image structure, but works quite well for it's size)
We put together Gradio and FastAPI scripts with the optimizations: [https://docs.jarvislabs.ai/tutorials/running-flux2-klein](https://docs.jarvislabs.ai/tutorials/running-flux2-klein) | 2026-01-19T05:04:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qgv2ey/bfl_flux2_klein_tutorial_and_some_optimizations/ | LayerHot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgv2ey | false | null | t3_1qgv2ey | /r/LocalLLaMA/comments/1qgv2ey/bfl_flux2_klein_tutorial_and_some_optimizations/ | false | false | self | 4 | null |
Has anyone quantized VibeVoice-Realtime-0.5B (Stream) for edge devices yet? | 4 | I'm looking for a quantized version (GGUF or ONNX) of the **Microsoft VibeVoice-Realtime-0.5B** model to run on an SBC (Orange Pi).
I've seen some repos for the 7B version, but I specifically need the lightweight 0.5B stream version for an edge project. Has anyone successfully converted this, or can point me to a guide on how to quantize this specific architecture?
thanks! | 2026-01-19T05:02:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qgv0vk/has_anyone_quantized_vibevoicerealtime05b_stream/ | New_Source_6765 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgv0vk | false | null | t3_1qgv0vk | /r/LocalLLaMA/comments/1qgv0vk/has_anyone_quantized_vibevoicerealtime05b_stream/ | false | false | self | 4 | null |
Built a lightweight Python agent framework to avoid “black box” abstractions, feedback welcome | 1 | Hi everyone,
I recently open-sourced my first project called Iris Agent, a lightweight Python framework for building AI agents.
While learning and experimenting with LLM-based agents, I found that many frameworks abstract away too much logic behind black boxes. That’s great for quick demos, but it made it harder (for me at least) to understand how agentic workflows actually work.
So I tried building something simpler and more transparent:
- Clear reasoning and execution flow
- Explicit tool usage and memory handling
- Minimal abstractions, architecture decisions are left to the developer
The goal is not to compete with large agent frameworks, but to make it easier to *learn* and *build* agent systems without heavy overhead.
This is my first open-source release, so feedback (good or bad) would really help.
GitHub: https://github.com/mrgehlot/iris-agent
PyPI: https://pypi.org/project/iris-agent/
Docs: https://mrgehlot.github.io/iris-agent/
Would love to know:
What do you find most confusing or over-engineered in existing agent frameworks? | 2026-01-19T04:54:10 | https://github.com/mrgehlot/iris-agent | AlphaPrime1111 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qguuu5 | false | null | t3_1qguuu5 | /r/LocalLLaMA/comments/1qguuu5/built_a_lightweight_python_agent_framework_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=108&crop=smart&auto=webp&s=36b94cd37d72e43fe59a6c869fc3c6fbc41b1d3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=216&crop=smart&auto=webp&s=71b4a404f844ee2b11308af8a26cd2171302aae8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=320&crop=smart&auto=webp&s=feb47b279cc3a24a7835cd8c79fd88ce8e912f1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=640&crop=smart&auto=webp&s=f8a64bb15d175f8513f036ab3b30de4b6baf3c13', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=960&crop=smart&auto=webp&s=9a956c5092e5bbd421cb729d93dc646c6be5b6e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?width=1080&crop=smart&auto=webp&s=e76dddaff305b1992f663475cb0056387c4be3ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EpE5-5_vcVwKn1P6jgmUx2fucoJ7crPItB-vrXOgyxQ.png?auto=webp&s=3a4e5f50b65ed1138600bd90dc056672a189c23f', 'width': 1200}, 'variants': {}}]} | |
[Project] cuda-nn: A custom MoE inference engine written in Rust/Go/CUDA from scratch. Runs 6.9B params without PyTorch. | 0 | **Polyglot:** Rust, Go, and Python binding to the same shared CUDA kernels.
**Architecture:** MoE (Mixture of Experts), MQA.
**Performance:** Optimized CUDA kernels (GEMM, RoPE, SwiGLU) written by hand. | 2026-01-19T03:54:10 | https://github.com/fumi-engineer/machine_learning | Fumi-engineer | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qgtmsi | false | null | t3_1qgtmsi | /r/LocalLLaMA/comments/1qgtmsi/project_cudann_a_custom_moe_inference_engine/ | false | false | default | 0 | null |
Just put together my new setup(3x v620 for 96gb vram) | 63 | 2026-01-19T02:31:42 | https://www.reddit.com/gallery/1qgrw3d | PraxisOG | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgrw3d | false | null | t3_1qgrw3d | /r/LocalLLaMA/comments/1qgrw3d/just_put_together_my_new_setup3x_v620_for_96gb/ | false | false | 63 | null | ||
I'm building the *best* local ai app in the mobile market and I'm curious about your opinions | 0 | I got nerd snipped into building such app and I'm just trying to brainstorm some ideas. Would you guys use something like this with an insane amount of variety? If you would use such an app, what do you think is a crucial feature to have? Is there any specific models that you would want to use in your mobile phone that would be cool to add in the platform? Any input is appreciated | 2026-01-19T02:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qgrf0y/im_building_the_best_local_ai_app_in_the_mobile/ | Sea_Fan2368 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgrf0y | false | null | t3_1qgrf0y | /r/LocalLLaMA/comments/1qgrf0y/im_building_the_best_local_ai_app_in_the_mobile/ | false | false | self | 0 | null |
Nomic GPT4All | 1 | Does anyone truly know if Nomic is private? I want to upload some private documents and ask it to create some flash cards or help me study the documents in a test fashion. Can Nomic achieve what I need? If not, what other LLM can I use to achieve this task? | 2026-01-19T00:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qgpe8h/nomic_gpt4all/ | curioustaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgpe8h | false | null | t3_1qgpe8h | /r/LocalLLaMA/comments/1qgpe8h/nomic_gpt4all/ | false | false | self | 1 | null |
GFN v2.5.0: Verified O(1) Memory Inference and 500x Length Extrapolation via Symplectic Geodesic Flows | 9 | # 1. Abstract
We present GFN (Geodesic Flow Networks), an architecture that reformulates sequence modeling as particle dynamics on a learned Riemannian manifold. Unlike Transformer-based architectures—which exhibit O(N\^2) memory scaling due to the attention mechanism—or standard RNNs that suffer from vanishing gradients, GFN achieves O(1) memory complexity during inference and exhibits infinite-horizon stability through symplectic integration.
In our v2.5.0 release, we demonstrate perfect zero-shot generalization on algorithmic tasks with sequences up to 10,000 tokens while maintaining a strictly bounded memory footprint of \~60MB.
# 2. Optimization Topology and Stability
The transition to v2.5.0 introduces RiemannianAdam and Symplectic Integration. By ensuring parameter updates respect manifold geometry and the integrator conserves system energy (Hamiltonian flow), we achieve an exceptionally stable optimization topology.
https://preview.redd.it/q5x7t5gu47eg1.png?width=5034&format=png&auto=webp&s=33e903e7e8aba67627627fb6b6fa0ff139678610
Comparative Topology:
* GFN (Left): Exhibits a well-conditioned, convex loss funnel. Volume preservation (Liouville’s Theorem) ensures that the system's Jacobian has a unit determinant, preventing information decay even across arbitrary sequence lengths.
* Standard Baseline (Right): Displays chaotic noise and sharp local minima indicative of numerical instability and vanishing orbits.
# 3. Constant Memory Scaling O(1)
The primary bottleneck in modern sequence modeling is the KV cache growth. GFN encodes the entire sequence history into the position (x) and velocity (v) of a latent particle, eliminating the need for history storage.
https://preview.redd.it/t5kgcusv47eg1.png?width=1276&format=png&auto=webp&s=eedb0057cd27089c9f5caad679f0e5d0eb6506cf
Empirical Observations:
* GFN: VRAM usage remains nearly constant, increasing only 32MB from L=20 to L=10,000.
* Transformer Baseline: VRAM exceeds 8GB by L=1,000 and triggers Out-Of-Memory (OOM) errors shortly after.
* Advantage: At L=1,000, GFN demonstrates a 234x reduction in memory overhead compared to parameter-matched Transformer models.
# 4. Zero-Shot Extrapolation and Algorithmic Correctness
Current LLMs often fail when inference length exceeds the training window. GFN treats "reasoning" as geometric traversal; once the model learns the underlying manifold curvature, it generalizes perfectly to lengths orders of magnitude beyond training.
https://preview.redd.it/olkgcijw47eg1.png?width=4800&format=png&auto=webp&s=e3c14c51ec4d6cace931fa22b044fefc6a15a046
Benchmark Results (Binary Parity Task):
* Training Length: L=20
* Inference Length: Up to L=10,000 (500x extrapolation)
* Accuracy: 100.0% (GFN) vs 50% (Transformer Baseline)
This result proves that GFN implements robust algorithmic state transitions rather than pattern memorization.
# 5. Technical Implementation Highlights
* Leapfrog Integration: A 2nd-order Velocity Verlet scheme used to solve the geodesic equation.
* Low-Rank Christoffel Symbols: O(d\^2 \\cdot R) approximation of the metric tensor for efficient curvature computation.
* Zero-Force Inductive Bias: Architectural enforcement ensuring null inputs exert zero force, enabling perfect inertial memory.
* Velocity Normalization: Stabilizes the manifold flow without sacrificing informational direction.
# 6. Known Limitations and Roadmap
1. Eager-Mode Latency: Current PyTorch overhead result in slower iterations compared to fused kernels. We are developing custom CUDA Kernels to target a 10-50x wall-clock speedup.
2. Language Scaling: Validation on large-scale datasets (WikiText/Pile) is ongoing to assess compute-optimal scaling laws for semantic density.
3. Hybrid Geometries: Research into "Mixture of Manifolds" (MoM) using combinations of Euclidean, Hyperbolic, and Spherical expert heads.
Joaquín Stürtz
Manifold Laboratory
Apache License 2.0
[https://github.com/Manifold-Laboratory/manifold](https://github.com/Manifold-Laboratory/manifold) | 2026-01-18T23:57:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qgogis/gfn_v250_verified_o1_memory_inference_and_500x/ | janxhg27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgogis | false | null | t3_1qgogis | /r/LocalLLaMA/comments/1qgogis/gfn_v250_verified_o1_memory_inference_and_500x/ | false | false | 9 | null | |
Thinkpad P53 - LLamaBench | 4 | Ahoy redditors! I’ve been experimenting with running a local assistant for real-time text generation for a TTS model. While I was testing, I figured I might as well share some results on **low-end-ish laptop hardware**.
**Specs (for reference):**
* **CPU:** i7-9850H (capped at 3.0 GHz due to thermals)
* **RAM:** 48 GB 2667MT/s (DDR4)
* **GPU:** Quadro RTX 3000 Mobile (6 GB VRAM, no ReBar)
All models ran entirely on GPU, no offloading. Here’s what I got:
|Model|Total Params|Active Params|Quant|VRAM (GiB)|Prefill (pp512)|Generation (tg128)|
|:-|:-|:-|:-|:-|:-|:-|
|**Liquid LFM2-8B-A1B**|8.34 B|1.5 B|Q4\_K\_S|4.42|**1626.9** t/s|**117.4** t/s|
|**Granite-4.0-H-Tiny**|6.94 B|1.0 B|Q5\_K\_XL|4.68|**1286.9** t/s|**61.3** t/s|
|**Qwen3-4B-UD**|4.02 B|4.02 B|Q8\_K\_XL|4.70|987.4 t/s|40.4 t/s|
|**DeepSeek-R1-Qwen3-8B**|8.19 B|8.19 B|Q4\_K\_M|4.68|916.9 t/s|34.4 t/s|
|**Gemma-3n-E4B-it**|6.87 B|4.0 B\*|Q5\_K\_M|4.67|870.5 t/s|33.6 t/s|
Not bad for a laptop setup.
Would love to hear if anyone else is running local assistants for TTS or similar real-time tasks.
LLama-bench flags: "-ngl 99 -t 6 -b 384" that was my first time using llamacpp, so far it's far faster than ollama... | 2026-01-18T23:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/1qgog3f/thinkpad_p53_llamabench/ | cride20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgog3f | false | null | t3_1qgog3f | /r/LocalLLaMA/comments/1qgog3f/thinkpad_p53_llamabench/ | false | false | self | 4 | null |
Agent Zero can’t connect to LM Studio or Ollama | 0 | I’m trying to configure Agent Zero with LM Studio. I’m running Linux Mint. I have Agent Zero running in a Docker container. I tried for quite some time to set it up with Ollama, couldn’t get it to work, then tried with LM Studio hoping for better results, but to no avail.
I have both Ollama and LM Studio, and they both function just fine independently.
Agent Zero is also functioning, as I used a Free api key from open router with it to try to troubleshoot this issue, but quickly hit the limit on that, then spent another hour with Claude troubleshooting it as well. I’ve been down every reddit, GitHub, YouTube, ect, rabbit hole, anything on Google, and I’ve tried everything I’ve came across, but still can not get Agent Zero to work with Ollama or LM Studio.
The screen shots hopefully illustrate what’s going on. I don’t know what I’m doing wrong. Any help would be greatly appreciated. | 2026-01-18T23:30:09 | https://www.reddit.com/gallery/1qgnu29 | Bino5150 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgnu29 | false | null | t3_1qgnu29 | /r/LocalLLaMA/comments/1qgnu29/agent_zero_cant_connect_to_lm_studio_or_ollama/ | false | false | 0 | null | |
An AI agent that writes scientific manuscripts. | 0 | I built scitex-writer — a LaTeX compilation system with MCP server integration that lets AI agent write complete scientific papers autonomously.
**Demo Video:** [https://scitex.ai/demos/watch/scitex-writer/](https://scitex.ai/demos/watch/scitex-writer/)
**GitHub:** [https://github.com/ywatanabe1989/scitex-writer](https://github.com/ywatanabe1989/scitex-writer)
`pip install scitex-writer`
In the demo, the AI agent:
* Generated a full IMRAD manuscript (Introduction, Methods, Results, Discussion)
* Linked figures, tables, and citations in contents
* Compiled to PDF in 27 seconds
* Tracked versions with Git diff
* Responded to simulated peer review
All in \~25 minutes for step by step demonstration. No human intervention.
**Key features:**
* Modular sections (abstract, intro, methods, results, discussion)
* Auto-deduplication of bibliographies
* Figure/table processing (PNG, PDF, SVG, Mermaid, CSV)
* Automatic diff PDF generation (red=deleted, blue=added)
* Manuscript, Supplementary Material, and Revision templates included
* Cross-platform (Linux, macOS, WSL2, Docker, HPC)
Fully open-source. Part of the SciTeX ecosystem.
This is the third standalone MCP server - graphing (figrecipe), literature search (crossref-local), and manuscript writing (scitex-writer) now available as MCP servers — we are building a foundation for automated science.
**All three MCP server demo videos:** [https://scitex.ai/demos/](https://scitex.ai/demos/)
Feedback and contributions welcome! | 2026-01-18T23:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qgnhdd/an_ai_agent_that_writes_scientific_manuscripts/ | Historical-Menu9421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgnhdd | false | null | t3_1qgnhdd | /r/LocalLLaMA/comments/1qgnhdd/an_ai_agent_that_writes_scientific_manuscripts/ | false | false | self | 0 | null |
Textual game world generation Instructor pipeline | 20 | I threw together an instructor/pydantic pipeline for generating interconnected RPG world content using a local LM.
It starts from a high concept you define in a yaml file, and it iteratively generates regions, factions, characters, and branching dialog trees that all reference each other consistently using an in-memory (sqlite) fact registry.
* Generates structured JSON content using Pydantic schemas + Instructor
* Two-phase generation (skeletons first, then expansion) to ensure variety
* This was pretty key as trying to generate complete branches resulted in far too little variety despite efforts to alter context dynamically (seeds, temp walking, context filling etc)
* SQLite (in-memory) fact registry prevents contradictions across generations
* Saves progress incrementally so you can resume interrupted runs
* Web-based viewer/editor for browsing and regenerating content
It should work with any OpenAI-compatible API but I only used llama.cpp.
The example below (full json is in the repo with the config file too) was generated using off-the-shelf gemma-27b-it in a single pass. It is has 5 regions, 8 factions, 50 characters, 50 dialogs, and 1395 canonical facts.
https://preview.redd.it/i8hs04swv6eg1.jpg?width=1248&format=pjpg&auto=webp&s=186f9f17ff1a81e4ad8ca02b4bfcf8bbbc01bac6
https://preview.redd.it/r0wktvjyv6eg1.jpg?width=2079&format=pjpg&auto=webp&s=121a2a29605c726ab518e2af2d066e9291241d26
https://preview.redd.it/sal25j9zv6eg1.jpg?width=2067&format=pjpg&auto=webp&s=ca980f560e16b86ed13691b6338f6e02bacc2cd4
https://preview.redd.it/w7kjv4uzv6eg1.jpg?width=2104&format=pjpg&auto=webp&s=516f7ae120f463a9b98527fdd6d1938bb8e7afc8
https://preview.redd.it/ci700n60w6eg1.jpg?width=2104&format=pjpg&auto=webp&s=fb6b7537ac9c6681744638a365d716fac64a4ac2
Anyway, I didn’t spend any time optimizing since I’m just using it for a game I’m building so it’s a bit slow, but while it’s not perfect, I found it to be much more useful then I expected so I figured I’d share.
| 2026-01-18T23:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qgnbx8/textual_game_world_generation_instructor_pipeline/ | JEs4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgnbx8 | false | null | t3_1qgnbx8 | /r/LocalLLaMA/comments/1qgnbx8/textual_game_world_generation_instructor_pipeline/ | false | false | 20 | null | |
how do you pronounce “gguf”? | 108 | is it “jee - guff”? “giguff”? or the full “jee jee you eff”? others???
discuss.
and sorry for not using proper international phonetic alphabet symbol things | 2026-01-18T22:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qglyqz/how_do_you_pronounce_gguf/ | Hamfistbumhole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qglyqz | false | null | t3_1qglyqz | /r/LocalLLaMA/comments/1qglyqz/how_do_you_pronounce_gguf/ | false | false | self | 108 | null |
Anybody run Minimax 2.1 q4 on pure RAM (CPU) ? | 11 | Anybody runs Minimax 2.1 q4 on pure RAM (CPU) ?
I mean DDR5 (\~6000) how much t/s ?
Any other quants ? | 2026-01-18T22:10:36 | https://www.reddit.com/r/LocalLLaMA/comments/1qgluze/anybody_run_minimax_21_q4_on_pure_ram_cpu/ | xSNYPSx777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgluze | false | null | t3_1qgluze | /r/LocalLLaMA/comments/1qgluze/anybody_run_minimax_21_q4_on_pure_ram_cpu/ | false | false | self | 11 | null |
Roast my build | 49 | This started as an Optiplex 990 with a 2nd gen i5 as a home server. Someone gave me a 3060, I started running Ollama with Gemma 7B to help manage my Home Assistant, and it became addicting.
The upgrades outgrew the SFF case, PSU and GPU spilling out the side, and it slowly grew into this beast. Around the time I bought the open frame, my wife said it's gotta move out of sight, so I got banished to the unfinished basement, next to the sewage pump. Honestly, better for me, got to plug directly into the network and get off wifi.
6 months of bargain hunting, eBay alerts at 2am, Facebook Marketplace meetups in parking lots, explaining what VRAM is for the 47th time.
The result:
6x RTX 3090 (24GB each)
1x RTX 5090 (32GB), $1,700 open box Microcenter
ROMED8-2T + EPYC 7282
2x ASRock 1600W PSUs (both open box)
32GB A-Tech DDR4 ECC RDIMM
176GB total VRAM, ~$6,500 all-in
First motherboard crapped out, but got a warranty replacement right before they went out of stock.
Currently running Unsloth's GPT-OSS 120B F16 GGUF, full original precision, no quants. Also been doing Ralph Wiggum loops with Devstral-2 Q8_0 via Mistral Vibe, which yes, I know is unlimited free and full precision in the cloud. But the cloud can't hear my sewage pump.
I think I'm finally done adding on. I desperately needed this. Now I'm not sure what to do with it. | 2026-01-18T21:28:13 | https://www.reddit.com/gallery/1qgksrm | RoboDogRush | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgksrm | false | null | t3_1qgksrm | /r/LocalLLaMA/comments/1qgksrm/roast_my_build/ | false | false | 49 | null | |
Using NVIDIA DGX Spark + GPT-OSS-120B for Automated Game Development Pipeline - Thoughts? | 0 | Hey everyone,
I've been researching an ambitious project and wanted to get your thoughts on feasibility.
**\*\*The Concept:\*\***
Building an automated game development pipeline using NVIDIA DGX Spark (Grace Blackwell, 128GB unified memory) with GPT-OSS-120B and multiple AI models working together.
**\*\*The Workflow:\*\***
1. **\*\*User Input\*\***: Describe game concept in natural language - Example: "I want a medieval fantasy RPG with 1,000 NPCs living autonomous lives"
2. **\*\*AI Pipeline\*\***: - GPT-120B generates 500-step master plan - Auto-generates all Unity C# scripts - Creates game systems (NPC AI, economy, relationships, combat) - Integrates assets (user provides rigged models + animations) - Debugs and iterates automatically
3. **\*\*Output\*\***: Playable game prototype
**\*\*Key Features:\*\***
\- User role: High-level creative direction (10%)
\- AI role: Technical implementation (90%)
\- Modular architecture with RAG for Unity documentation
\- Automated bug fixing and optimization
**\*\*Questions:\*\***
1. Is GPT-OSS-120B (117B parameters) sufficient for this level of code generation?
2. Has anyone tried similar multi-AI pipelines for game dev?
3. What are the realistic bottlenecks I'm not seeing?
4. Would this actually save time vs traditional development?
**\*\*My Background:\*\***
\- Experienced in systematic development (built algorithmic trading bots)
\- Strong in system design, but new to game development
\- Considering DGX Spark (\~$5K) vs cloud API costs I know this sounds ambitious, but I'm trying to understand if it's "ambitious but doable" or "fundamentally flawed." Honest feedback appreciated!
**\*\*Edit for clarity\*\***: I'm NOT trying to replace game designers or creative work. The vision is: - Designer provides: concept, art direction, game feel decisions, balancing
\- AI handles: boilerplate code, system implementation, integration work
Think of it as an extremely advanced code generation assistant, not AGI making games autonomously. | 2026-01-18T21:14:01 | https://www.reddit.com/r/LocalLLaMA/comments/1qgkh41/using_nvidia_dgx_spark_gptoss120b_for_automated/ | AdNaive1169 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgkh41 | false | null | t3_1qgkh41 | /r/LocalLLaMA/comments/1qgkh41/using_nvidia_dgx_spark_gptoss120b_for_automated/ | false | false | self | 0 | null |
Local EPUB to Audiobook Converter with Modular AI TTS Engines | 3 | Hey everyone,
I've been wanting to convert my EPUB library into audiobooks for a while now, mainly to listen to them offline without relying on paid services. I found a few converters out there, but none really offered the flexibility and local control I was after. So, I decided to build my own, and I've been evolving it into something I think is pretty neat.
This project aims to be a fully autonomous, localized, and AI-powered suite to transform your EPUBs into audiobooks. The core idea is to avoid external dependencies and keep everything running on your own hardware, especially if you have a decent GPU.
What started as a simple converter has grown into a modular system. The main goal here is to have all the AI components – like XTTS, GPT-SoVITS, Maya1, and Kokoro – built from source and optimized for local use, particularly with GPUs.
Here are some of the things I've focused on:
* **Local & Editable**: Everything is built from source locally. You're not relying on pre-built images you can't inspect or modify.
* **VRAM Management**: I've implemented a system that tries to manage VRAM better. It should start the TTS engine you need and shut down others when you switch, to keep things from crashing.
* **AI Chapter Naming**: For English books, I've hooked it up to Ollama (using `dolphin-llama3`) to try and generate chapter titles automatically.
* **Fast Engines**:
* **Kokoro v1.0**: Offers real-time speed for Italian and English.
* **Maya1**: A State-of-the-art Italian TTS I integrated.
* **GPT-SoVITS**: For when you need that higher fidelity.
* **XTTS v2**: For voice cloning and general use.
* **Networking**: I've set up a static IP architecture to try and avoid those annoying "Connection Refused" errors you can sometimes get with Docker.
* **Persistent Models**: Models are saved directly to your drive, so you don't have to redownload them every time you restart.
The tech stack is mainly Python 3.11 with Docker Compose for orchestration. I'm using Gradio for the UI, which streams logs in real-time.
I'd love for you all to check it out and give it a try if you're interested in a local, customizable EPUB to audiobook solution.
You can find the project here: [https://github.com/lelus78/epub-to-audiobook-modular](https://github.com/lelus78/epub-to-audiobook-modular)
The setup involves cloning the repo and running `docker compose up -d --build`. The first start might take a bit to build everything.
I'm really keen to get feedback from the community. Any suggestions on improvements, optimizations, or even just thoughts on the modular approach would be greatly appreciated.
Thanks! | 2026-01-18T20:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qgk2vw/local_epub_to_audiobook_converter_with_modular_ai/ | lelus78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgk2vw | false | null | t3_1qgk2vw | /r/LocalLLaMA/comments/1qgk2vw/local_epub_to_audiobook_converter_with_modular_ai/ | false | false | self | 3 | null |
building a small Slack community focused on AI for Business Automation | 0 | Hey everyone 👋
I’m in the process of building a small Slack community focused on AI for Business Automation ... very early-stage and intentionally small for now.
The idea is to create a chill space where people can:
* talk about real-world AI automation use cases
* share tools, workflows, and experiments
* ask questions (technical *and* non-technical)
* learn from each other without hype or pressure
I’m currently trying to gather the first group of people to shape it together. just people curious about using AI to actually make work easier.
If this sounds interesting to you, I’ll drop an invite link in the comments. Absolutely no pressure at all, just putting it out there for anyone who wants to join early and help set the tone 🙂
Thanks, and happy to answer any questions here too! | 2026-01-18T20:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qgjt6s/building_a_small_slack_community_focused_on_ai/ | Helpful_Milk_5618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgjt6s | false | null | t3_1qgjt6s | /r/LocalLLaMA/comments/1qgjt6s/building_a_small_slack_community_focused_on_ai/ | false | false | self | 0 | null |
Update - Day #4 of building an LM from scratch | 4 | So we’ve run into a few hiccups. (Which is why I skipped Day 3. I’ve been troubleshooting for what feels like 24 hours straight.)
1. We have a loss issue. Loss will trend downwards from 10 to around 8 until around step \~400 and after that, the model begins drifting upwards and by the \~3000’s, loss is near 20. I’ve adjusted multiple things such as batch size and gradient, as well as attempting to use DDP (but on Windows that really tough to do apparently) instead of DataParallel but nothings working just yet.
2. Related to the loss issue, I believe streaming the data from EleutherAI/the\_pile\_deduplicated on huggingface is causing issues related to speed. My workaround for that is downloading the entire pile onto a specific, standalone drive and training the model using local data instead. I’m pretty hopeful that will solve both the speed and loss issue.
In terms of good news, the model is learning and the process is possible. I’ve gone from a model that couldn’t say a single word, to a model making semi-coherent paragraphs.
I sincerely believe 0.3B is within the threshold of local indie LM model production. Thanks for sticking around and listening to my ramblings, I hope you guys are enjoying this journey as much as I am!
P.S. I have settled on a name for the model. It’ll be LLyra-0.3B. (I’m hoping the second “L” separates me from the hundreds of other LM projects related to the name “Lyra” haha) | 2026-01-18T20:36:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qgjma9/update_day_4_of_building_an_lm_from_scratch/ | AllTheCoins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgjma9 | false | null | t3_1qgjma9 | /r/LocalLLaMA/comments/1qgjma9/update_day_4_of_building_an_lm_from_scratch/ | false | false | self | 4 | null |
Are most major agents really just markdown todo list processors? | 97 | I have been poking around different code bases and scrutixzing logs from the majors LLM providers, and it seems like every agent just decomposes task to a todo list and process them one by one.
Has anyone found a different approach? | 2026-01-18T20:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qgj2n9/are_most_major_agents_really_just_markdown_todo/ | TheDigitalRhino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgj2n9 | false | null | t3_1qgj2n9 | /r/LocalLLaMA/comments/1qgj2n9/are_most_major_agents_really_just_markdown_todo/ | false | false | self | 97 | null |
local LLM for VS Code 5070 vs 7900xt | 0 | Hey all I’m thinking about swapping my 5070 for a 7900XT due to the VRAM differences
I was wondering how much more difficult it would be to connect to a local LLM in vs code on NVIDIA vs AMD | 2026-01-18T19:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qgikxr/local_llm_for_vs_code_5070_vs_7900xt/ | NotSudden-Solution | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgikxr | false | null | t3_1qgikxr | /r/LocalLLaMA/comments/1qgikxr/local_llm_for_vs_code_5070_vs_7900xt/ | false | false | self | 0 | null |
CMV: RAM Prices are Near the Top | 0 | We've all heard the story of how OpenAI gobbling up all the DDR5 supply to make HBM is what caused the current RAM shortage. However, the fact that memory makers can switch between HBM and DRAM means there's a practical ceiling of DDR5 pricing based on HBM prices. Because it could literally be more profitable for Samsung, SK Hynix, or Micron to make DDR5 instead of HBM.
Based on research done by ChatGPT, HBM prices are on the order of $10-$20 per GB:
|Generation|Typical stack config (capacity)|What credible reporting supports (USD per stack)|Implied $/GB (approx)|Notes|
|:-|:-|:-|:-|:-|
|**HBM3**|8‑Hi **24GB**|**Just over $200** ([TrendForce](https://www.trendforce.com/news/2025/06/17/news-sk-hynix-reportedly-slows-1c-dram-investment-shifts-focus-to-1b-dram-for-hbm3ehbm4/))|≈ **$8+ /GB**|Supported (TrendForce citing The Bell).|
|**HBM3E**|8‑Hi **24GB**|**$200–$300** ([Daum](https://v.daum.net/v/20251106003749205?f=p))|≈ **$8–$12.5 /GB**|$270 is plausible. 8‑Hi is typically 24GB. ([Micron Technology](https://www.micron.com/products/memory/hbm/hbm3e?srsltid=AfmBOoqSmYgHxcSqaGnfyOkqsbqEvOCz2ujznau3c_4l45HoIa3EuNcf&utm_source=chatgpt.com))|
|**HBM3E**|12‑Hi **36GB**|**Mid‑$300s** (earlier) **→ $500s**(renewals) ([딜사이트](https://dealsite.co.kr/articles/152153))|≈ **$10–$14 /GB**|Your $400 can be right *sometimes*, but not reliably “Jan‑2026.”|
|**HBM4**|12‑Hi (capacity varies by product)|**Mid‑$500s**, “could top $600” ([딜사이트](https://dealsite.co.kr/articles/152153))|depends on GB|Your $560+ is consistent with reporting.|
if you believe that HBM is 2-3 times more difficult to manufacture (lower yield, larger die size), then the equivalent DDR5 pricing for manufacturers to get the same profit would be less than $10 per GB.
RAM kits have already exceeded this price in retail channels. This means we are close to the peak of RAM pricing - there is already some evidence of defection (manufacturers switching back to DDR5 from HBM [TweekTown](https://www.tweaktown.com/news/109259/samsung-shifts-focus-from-hbm-to-ddr5-modules-ddr5-ram-results-in-far-more-profits-than-hbm/index.html#:~:text=In%20a%20new%20report%20from,per%20month%2C%20making%20significantly%20more))
the main reason we are seeing such high prices for RAM is because of hoarding and speculation not the underlying economics. I would start to get worried if hyperscalers keep bidding up HBM prices, but this currently isn't happening (at least that we know of)
I've also been doing some research on affordable RAM alternatives and came across an interesting option: Intel Optane Persistent Memory ([Serve The Home](https://www.servethehome.com/intel-optane-dc-persistent-memory-guide-for-pmem-100-pmem-200-and-pmem-300-optane-dimms/)). These can still be had on eBay for around $0.50 per GB.
**advantages**:
* high capacity, they come in 128 GB, 256 GB, and 512 GB per stick.
* Persistence: they are basically God-tier hardware for databases, and offer extremely low latency when you compare with SSDs.
**downsides:**
* Worse performance than real ram: latencies around 4x read performance about 1/3 and write performance about 1/10 of ddr4 2666.
* platform limitations: Optane memory is only compatible with specific Intel Xeon Scalable second- and third-generation CPUs
* You still need DDR4: Each stick of persistent memory needs to be paired with one stick of regular DDR4 as cache to work. Though the system will see the much larger Optane pmem capacity as your memory pool. This is called "memory mode". (There is also app-direct mode where you basically use the optane memory as SSD).
In short this is a good solution for situations where you are spilling to disk (like index for a database) because this is way better than SSDs but not really suitable for CPU inference which is a bit of a meme anyways imo.
I just ordered a few sticks of Intel Optane memory (thinking about using them for vector DB). I'm curious if anyone here has already had experience with them and what are your use cases?
| 2026-01-18T19:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qgid35/cmv_ram_prices_are_near_the_top/ | Intelligent_Coffee44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgid35 | false | null | t3_1qgid35 | /r/LocalLLaMA/comments/1qgid35/cmv_ram_prices_are_near_the_top/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'e98tH4fKnxTUnqw-ATyEF2jggQ0ETykqR6JPrimRRxs', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/e98tH4fKnxTUnqw-ATyEF2jggQ0ETykqR6JPrimRRxs.png?width=108&crop=smart&auto=webp&s=48fb7db46af6bbb6b584980bd245afa4803ad802', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/e98tH4fKnxTUnqw-ATyEF2jggQ0ETykqR6JPrimRRxs.png?width=216&crop=smart&auto=webp&s=1fcde09e43295ff557b21d586d489079d21b52cf', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/e98tH4fKnxTUnqw-ATyEF2jggQ0ETykqR6JPrimRRxs.png?width=320&crop=smart&auto=webp&s=04aa31f4abe006a3f83d667db8b1f0a480d8b2d7', 'width': 320}], 'source': {'height': 405, 'url': 'https://external-preview.redd.it/e98tH4fKnxTUnqw-ATyEF2jggQ0ETykqR6JPrimRRxs.png?auto=webp&s=8d8266247ddd504279e250e8d1c2c50584c297fa', 'width': 624}, 'variants': {}}]} |
ROCm+Linux Support on Strix Halo: January 2026 Stability Update | 17 | 2026-01-18T18:54:27 | https://www.youtube.com/watch?v=Hdg7zL3pcIs | Deep_Traffic_7873 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1qggxyy | false | {'oembed': {'author_name': 'Donato Capitella', 'author_url': 'https://www.youtube.com/@donatocapitella', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Hdg7zL3pcIs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ROCm+Linux Support on Strix Halo: It's finally stable in 2026!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Hdg7zL3pcIs/hqdefault.jpg', 'thumbnail_width': 480, 'title': "ROCm+Linux Support on Strix Halo: It's finally stable in 2026!", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1qggxyy | /r/LocalLLaMA/comments/1qggxyy/rocmlinux_support_on_strix_halo_january_2026/ | false | false | default | 17 | {'enabled': False, 'images': [{'id': '1NaNFJJfuc2cvUZCoPzLwIjTk37asJmqjUy4n9gfsns', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1NaNFJJfuc2cvUZCoPzLwIjTk37asJmqjUy4n9gfsns.jpeg?width=108&crop=smart&auto=webp&s=565a4cc03865e45d98aa7c251456c7ae80bd06bd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1NaNFJJfuc2cvUZCoPzLwIjTk37asJmqjUy4n9gfsns.jpeg?width=216&crop=smart&auto=webp&s=2b3d88e505102589f7c98bb6b7e4554bb23c3404', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1NaNFJJfuc2cvUZCoPzLwIjTk37asJmqjUy4n9gfsns.jpeg?width=320&crop=smart&auto=webp&s=de14ecb8643864d05987a7df42d2e088868aa2f1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1NaNFJJfuc2cvUZCoPzLwIjTk37asJmqjUy4n9gfsns.jpeg?auto=webp&s=4d32e00ae5d4453c25925ebda27d352fb0fdf23f', 'width': 480}, 'variants': {}}]} | |
opencode with superpowers. It can do everything in a container with docker and nix | 2 | 2026-01-18T18:52:03 | https://grigio.org/opencode-with-superpowers-it-can-do-everything-in-a-container-with-docker-and-nix/ | Deep_Traffic_7873 | grigio.org | 1970-01-01T00:00:00 | 0 | {} | 1qggvmn | false | null | t3_1qggvmn | /r/LocalLLaMA/comments/1qggvmn/opencode_with_superpowers_it_can_do_everything_in/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY.png?width=108&crop=smart&auto=webp&s=7dc45e01109b1fca8bd4270b600acea3d63112f1', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY.png?width=216&crop=smart&auto=webp&s=068921794e563a50f86c9bb5ccecc67b5c2ceb28', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY.png?width=320&crop=smart&auto=webp&s=82c43bf4145a6127cf57d766de335d9f09c765eb', 'width': 320}, {'height': 409, 'url': 'https://external-preview.redd.it/aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY.png?width=640&crop=smart&auto=webp&s=a6dcdbd29c9888954221f8c3f06f8f98f0528958', 'width': 640}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/aHbECKWZj-ct56HM2Cf6NsnnvY-F90e0pkV32S3zQzY.png?auto=webp&s=eb5a5336315dc1b7189106f98dfaba2a8bdb2b0a', 'width': 800}, 'variants': {}}]} | |
cant use radeon 9060 xt 16gb on lm studio | 0 | I just wanna run some small models but i cant choose gpu offload and cant even load the model. It says: 0 CUDA cores found which is obvious because i'm using an amd gpu and i cant change it to amd mode or sum like that how can i use LM STUDIO...... | 2026-01-18T17:55:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qgfbv4/cant_use_radeon_9060_xt_16gb_on_lm_studio/ | Interesting_Cup_947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgfbv4 | false | null | t3_1qgfbv4 | /r/LocalLLaMA/comments/1qgfbv4/cant_use_radeon_9060_xt_16gb_on_lm_studio/ | false | false | self | 0 | null |
Need help and suggestions for gguf models | 3 | I am running Qwen2.5-14B-Instruct-abliterated-v2.Q6_K and not getting decent responses as I am in Gemini (my online go-to)
I have 16gb vram 5060ti
Are there any other possible LLMs? I use it for general searches, computer help, all over questioning over various subjects. No health questions
I have also tried Mistral-Nemo-12B-ArliAI-RPMax-v1.2-q8_0 to same effect.
How can I get Gemini-type answers other than using Gemini online. I would like abliterated versions/ | 2026-01-18T17:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qgf4q7/need_help_and_suggestions_for_gguf_models/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgf4q7 | false | null | t3_1qgf4q7 | /r/LocalLLaMA/comments/1qgf4q7/need_help_and_suggestions_for_gguf_models/ | false | false | self | 3 | null |
[P] Ruvrics: Open-source tool to detect when your LLM system becomes less reliable | 0 | I built Ruvrics to catch a problem that kept biting me: LLM systems that silently become less predictable after "minor" changes.
\*\*How it works:\*\*
Run the same prompt 20 times and measure how consistent the responses are. Same input, same model — but LLMs can still vary. Ruvrics scores that consistency.
\*\*Why it matters:\*\*
1. Before deploying, you run 20 identical requests → 98% stability. Great.
2. Someone tweaks the prompt "for clarity." Run again → 74%.
Same input. But now responses vary more — tool calls differ, format changes, verbosity fluctuates. No crash, no error. Just less predictable.
\*\*Baseline comparison:\*\*
Save a baseline when behavior is good, detect regressions after changes:
ruvrics stability --input query.json --save-baseline v1
...make changes...
ruvrics stability --input query.json --compare v1
"⚠️ REGRESSION: 98% → 74%"
It measures consistency, not correctness — a behavioral regression guardrail.
\*\*Install:\*\* \`pip install ruvrics\`
\*\*GitHub:\*\* [https://github.com/ruvrics-ai/ruvrics](https://github.com/ruvrics-ai/ruvrics)
Open source (Apache 2.0). Happy to answer questions or take feature requests. | 2026-01-18T17:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qgeak8/p_ruvrics_opensource_tool_to_detect_when_your_llm/ | ashutoshtr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgeak8 | false | null | t3_1qgeak8 | /r/LocalLLaMA/comments/1qgeak8/p_ruvrics_opensource_tool_to_detect_when_your_llm/ | false | false | self | 0 | null |
GitHub - avikeid2007/Kairos.local: KaiROS AI— Intelligence, Precisely When It Matters. | 1 | [removed] | 2026-01-18T17:14:57 | https://github.com/avikeid2007/Kairos.local | avikeid2007 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qge95n | false | null | t3_1qge95n | /r/LocalLLaMA/comments/1qge95n/github_avikeid2007kairoslocal_kairos_ai/ | false | false | default | 1 | null |
GPU Rental Price Tracker | 7 | Hey yall, i know we're all trying to avoid the day we have to pay for cloud compute, but even the best of us run into limits sometimes, and even the blessed have hardware hiccups at the worst times. I've seen a lot of people tearing their hair out trying to find a decent price for cloud fallback services, so i built a free tracker with alerts to keep track of pricing and availability from the major services: [https://hardwarehq.app/tools/cloud-compute-tracker](https://hardwarehq.app/tools/cloud-compute-tracker) Everything's free, no signup required unless you want to set alerts and all that good stuff. Let me know if it's useful, if it's not let me know what needs to change. Thanks! | 2026-01-18T17:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qge0va/gpu_rental_price_tracker/ | EnvironmentalLow8531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qge0va | false | null | t3_1qge0va | /r/LocalLLaMA/comments/1qge0va/gpu_rental_price_tracker/ | false | false | self | 7 | null |
Local LLM builders: when do you go multi-agent vs tools? 2-page decision sheet + question | 1 | I built a 2-page *decision* cheat sheet for choosing **workflow vs single agent+tools vs multi-agent** (images attached).
My core claim: if you can define steps upfront, start with a workflow; agents add overhead; multi-agent only when constraints force it.
I’d love practitioner feedback on 3 things:
1. Where do you draw the line between “workflow” and “agent” in production?
2. Tool overload: at what point does tool selection degrade for you (tool count / schema size)?
3. What’s the most important reliability rule you wish you’d adopted earlier (evals, tracing, guardrails, HITL gates, etc.)? | 2026-01-18T16:46:24 | https://www.reddit.com/gallery/1qgdhm1 | OnlyProggingForFun | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgdhm1 | false | null | t3_1qgdhm1 | /r/LocalLLaMA/comments/1qgdhm1/local_llm_builders_when_do_you_go_multiagent_vs/ | false | false | 1 | null | |
4x AMD R9700 (128GB VRAM) + Threadripper 9955WX Build | 340 | Disclaimer: I am from Germany and my English is not perfect, so I used an LLM to help me structure and write this post.
Context & Motivation: I built this system for my small company. The main reason for all new hardware is that I received a 50% subsidy/refund from my local municipality for digitalization investments. To qualify for this funding, I had to buy new hardware and build a proper "server-grade" system.
My goal was to run large models (120B+) locally for data privacy. With the subsidy in mind, I had a budget of around 10,000€ (pre-refund). I initially considered NVIDIA, but I wanted to maximize VRAM. I decided to go with 4x AMD RDNA4 cards (ASRock R9700) to get 128GB VRAM total and used the rest of the budget for a solid Threadripper platform.
Hardware Specs:
Total Cost: ~9,800€ (I get ~50% back, so effectively ~4,900€ for me).
CPU: AMD Ryzen Threadripper PRO 9955WX (16 Cores)
Mainboard: ASRock WRX90 WS EVO
RAM: 128GB DDR5 5600MHz
GPU: 4x ASRock Radeon AI PRO R9700 32GB (Total 128GB VRAM)
Configuration: All cards running at full PCIe 5.0 x16 bandwidth.
Storage: 2x 2TB PCIe 4.0 SSD
PSU: Seasonic 2200W
Cooling: Alphacool Eisbaer Pro Aurora 360 CPU AIO
Benchmark Results
I tested various models ranging from 8B to 230B parameters.
1. Llama.cpp (Focus: Single User Latency) Settings: Flash Attention ON, Batch 2048
Model Size Quant Mode Prompt t/s Gen t/s
Meta-Llama-3.1-8B-Instruct 8B Q4_K_M GPU-Full 3169.16 81.01
Qwen2.5-32B-Instruct 32B Q4_K_M GPU-Full 848.68 25.14
Meta-Llama-3.1-70B-Instruct 70B Q4_K_M GPU-Full 399.03 12.66
gpt-oss-120b 120B Q4_K_M GPU-Full 2977.83 97.47
GLM-4.7-REAP-218B 218B Q3_K_M GPU-Full 504.15 17.48
MiniMax-M2.1 ~230B Q4_K_M Hybrid 938.89 32.12
Side note: I found that with PCIe 5.0, standard Pipeline Parallelism (Layer Split) is significantly faster (~97 t/s) than Tensor Parallelism/Row Split (~67 t/s) for a single user on this setup.
2. vLLM (Focus: Throughput) Model: GPT-OSS-120B (bfloat16), TP=4, test for 20 requests
Total Throughput: ~314 tokens/s (Generation)
Prompt Processing: ~5339 tokens/s
Single user throughput 50 tokens/s
I used rocm 7.1.1 for llama.cpp also testet Vulkan but it was worse
If I could do it again, I would have used the budget to buy a single NVIDIA RTX Pro 6000 Blackwell (96GB). Maybe I will, if local AI is going well for my use case, I swap the R9700 with Pro 6000 in the future. | 2026-01-18T16:39:42 | https://www.reddit.com/gallery/1qgdb7f | NunzeCs | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgdb7f | false | null | t3_1qgdb7f | /r/LocalLLaMA/comments/1qgdb7f/4x_amd_r9700_128gb_vram_threadripper_9955wx_build/ | false | false | 340 | null | |
4x AMD R9700 (128GB VRAM) + Threadripper 9955WX Build | 1 | Disclaimer: I am from Germany and my English is not perfect, so I used an LLM to help me structure and write this post.
Context & Motivation: I built this system for my small company. The main reason for all new hardware is that I received a 50% subsidy/refund from my local municipality for digitalization investments. To qualify for this funding, I had to buy new hardware and build a proper "server-grade" system.
My goal was to run large models (120B+) locally for data privacy. With the subsidy in mind, I had a budget of around 10,000€ (pre-refund). I initially considered NVIDIA, but I wanted to maximize VRAM. I decided to go with 4x AMD RDNA4 cards (ASRock R9700) to get 128GB VRAM total and used the rest of the budget for a solid Threadripper platform.
Hardware Specs:
Total Cost: ~9,800€ (I get ~50% back, so effectively ~4,900€ for me).
CPU: AMD Ryzen Threadripper PRO 9955WX (16 Cores)
Mainboard: ASRock WRX90 WS EVO
RAM: 128GB DDR5 5600MHz
GPU: 4x ASRock Radeon AI PRO R9700 32GB (Total 128GB VRAM)
Configuration: All cards running at full PCIe 5.0 x16 bandwidth.
Storage: 2x 2TB PCIe 4.0 SSD
PSU: Seasonic 2200W
Cooling: Alphacool Eisbaer Pro Aurora 360 CPU AIO
Benchmark Results
I tested various models ranging from 8B to 230B parameters.
1. Llama.cpp (Focus: Single User Latency) Settings: Flash Attention ON, Batch 2048
Model Size Quant Mode Prompt t/s Gen t/s
Meta-Llama-3.1-8B-Instruct 8B Q4_K_M GPU-Full 3169.16 81.01
Qwen2.5-32B-Instruct 32B Q4_K_M GPU-Full 848.68 25.14
Meta-Llama-3.1-70B-Instruct 70B Q4_K_M GPU-Full 399.03 12.66
gpt-oss-120b 120B Q4_K_M GPU-Full 2977.83 97.47
GLM-4.7-REAP-218B 218B Q3_K_M GPU-Full 504.15 17.48
MiniMax-M2.1 ~230B Q4_K_M Hybrid 938.89 32.12
Side note: I found that with PCIe 5.0, standard Pipeline Parallelism (Layer Split) is significantly faster (~97 t/s) than Tensor Parallelism/Row Split (~67 t/s) for a single user on this setup.
2. vLLM (Focus: Throughput) Model: GPT-OSS-120B (bfloat16), TP=4, test for 20 requests
Total Throughput: ~314 tokens/s (Generation)
Prompt Processing: ~5339 tokens/s
Single user throughput 50 tokens/s
I used rocm 7.1.1 for llama.cpp also testet Vulkan but it was worse
If I could do it again, I would have used the budget to buy a single NVIDIA RTX Pro 6000 Blackwell (96GB). Maybe I will, if local AI is going well for my use case, I swap the R9700 with Pro 6000 in the future. | 2026-01-18T16:39:31 | https://www.reddit.com/gallery/1qgdb1i | NunzeCs | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qgdb1i | false | null | t3_1qgdb1i | /r/LocalLLaMA/comments/1qgdb1i/4x_amd_r9700_128gb_vram_threadripper_9955wx_build/ | false | false | 1 | null | |
chatterbox turbo in ultimate tts studio pro conversation mode voice limits question | 1 | What's the max number of voice I can load into chatterbox turbo via ultimate tts studio in conversation mode? I have a script with 31 voices but when i load it into ultimate tts studio it only lets me upload 5 voices. is there something I'm missing or is this a hard limit? I have a computer more than capable of handling the voices, so that's not a problem.
Any help here would be appreciated. | 2026-01-18T16:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qgdafh/chatterbox_turbo_in_ultimate_tts_studio_pro/ | Jack70741 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgdafh | false | null | t3_1qgdafh | /r/LocalLLaMA/comments/1qgdafh/chatterbox_turbo_in_ultimate_tts_studio_pro/ | false | false | self | 1 | null |
[Open Source] KaiROS AI - Run 31 LLMs locally on Windows with CPU & GPU acceleration, no cloud required! | 1 | [removed] | 2026-01-18T16:34:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qgd64g/open_source_kairos_ai_run_31_llms_locally_on/ | avikeid2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgd64g | false | null | t3_1qgd64g | /r/LocalLLaMA/comments/1qgd64g/open_source_kairos_ai_run_31_llms_locally_on/ | false | false | self | 1 | null |
[Open Source] KaiROS AI - Run 31 LLMs locally on Windows with CPU & GPU acceleration, no cloud required! | 1 | [removed] | 2026-01-18T16:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qgd0o1/open_source_kairos_ai_run_31_llms_locally_on/ | avikeid2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgd0o1 | false | null | t3_1qgd0o1 | /r/LocalLLaMA/comments/1qgd0o1/open_source_kairos_ai_run_31_llms_locally_on/ | false | false | self | 1 | null |
Demo for the latest PersonaPlex model from nvidia (speech-to-speech model that is controllable through system prompt) | 7 | Hope this can be helpful for someone | 2026-01-18T16:13:05 | https://huggingface.co/spaces/MohamedRashad/PersonaPlex | Severe-Awareness829 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qgcm6x | false | null | t3_1qgcm6x | /r/LocalLLaMA/comments/1qgcm6x/demo_for_the_latest_personaplex_model_from_nvidia/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': '3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=108&crop=smart&auto=webp&s=26db4fa82b57c91cd583cfaa8a10ad220756a07a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=216&crop=smart&auto=webp&s=ddc2cb3cc47ace906f2bf7703a88c969e7cf172f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=320&crop=smart&auto=webp&s=07966247f377473547835fd2235c8d1ebb9a6eaa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=640&crop=smart&auto=webp&s=c42872e666813b68b457c113320540644e72db91', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=960&crop=smart&auto=webp&s=f0a825da72a378fb46989f1be029bbc419857508', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?width=1080&crop=smart&auto=webp&s=65fad2e05891b414c7801c26a209c518629a7002', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3-UW-zJ-uCRgO1BeO_woYq-yx09bN2AqMjYYa3dJ998.png?auto=webp&s=b2cd042fcba96e5324277874b40734285160e06a', 'width': 1200}, 'variants': {}}]} |
RLVR with GRPO from scratch code notebook | 15 | 2026-01-18T16:10:02 | https://github.com/rasbt/reasoning-from-scratch/blob/main/ch06/01_main-chapter-code/ch06_main.ipynb | seraschka | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qgcj8b | false | null | t3_1qgcj8b | /r/LocalLLaMA/comments/1qgcj8b/rlvr_with_grpo_from_scratch_code_notebook/ | false | false | 15 | {'enabled': False, 'images': [{'id': '1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=108&crop=smart&auto=webp&s=d66c3d4cd775091e07ae1eff4839d8cba1f37f77', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=216&crop=smart&auto=webp&s=fc62e5e71528a002e08e7abff9cdd1ceef71ed71', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=320&crop=smart&auto=webp&s=7f87b4522de5b2c856bad6e1d002ff44b704b7b6', 'width': 320}, {'height': 313, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=640&crop=smart&auto=webp&s=80606d0ff718fb311decc9610424061c1fa2743b', 'width': 640}, {'height': 470, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=960&crop=smart&auto=webp&s=14545d5b0e638c18ab63ec083b502c8d42192bec', 'width': 960}, {'height': 529, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?width=1080&crop=smart&auto=webp&s=4b8a7d3b13ced52e326cbc5f5bc9e6484ca8edca', 'width': 1080}], 'source': {'height': 2993, 'url': 'https://external-preview.redd.it/1zp6Ys_kCzKo-Gqi6ZfsuCLpMOxYXSdKyJbi6hC7oDk.png?auto=webp&s=777f61d801a0a19b3b90335f3873ec7408d0c73d', 'width': 6104}, 'variants': {}}]} | ||
I got tired of "Vibe Coding" breaking my app, so I built a local "Safety Layer" that interviews the AI before it codes. (Open Source, MCP) | 0 | Hi r/LocalLLaMA,
(English is not my first language, so please bear with me!)
I’ve been using Cursor/Claude a lot recently, and while it’s fast, I noticed a huge problem: **"Vibe Coding"**. The AI writes code that *looks* working but completely ignores edge cases (like race conditions, double spending, or GDPR compliance). My database schema was getting messed up by "happy path" code.
So I spent the last few weeks building **BlueMouse** 🐭.
It’s an **MCP Server** (works with Cursor/Claude Desktop) that acts as a "Socratic Logic Gate". Instead of just generating code immediately, it intercepts your prompt and **interviews you** first.
**For example, if you ask for an "Ecommerce Schema":**
* **BlueMouse asks:** *"For flash sales, do you want Pessimistic Locking (safer) or Redis Optimistic Locking (faster)?"*
* **It won't let the AI generate code** until you answer these architectural trade-offs.
**Technical Details:**
* **100% Local:** No data leaves your machine. (I care about privacy).
* **17-Layer Validation:** It runs AST parsing & check logic *after* generation.
* **Shadow Audit:** You can feed it your *existing* legacy code logic, and it will output a diagnostic report of potential risks (without rewriting your code).
* **Tech Stack:** Python, FastAPI, MCP Protocol.
I’m sharing this because I think we need to stop just "generating code" and start "engineering" again.
It’s open source and I’d love your feedback (especially if you find any bugs!):
**GitHub:** [https://github.com/peijun1700/bluemouse](https://github.com/peijun1700/bluemouse) **Smithery (One-click install):** [https://smithery.ai/server/peijun1700/Bluemouse](https://smithery.ai/server/peijun1700/Bluemouse) **Glama:** [https://glama.ai/mcp/servers/@peijun1700/bluemouse](https://glama.ai/mcp/servers/@peijun1700/bluemouse)
Thanks! Peijun | 2026-01-18T15:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qgc5pc/i_got_tired_of_vibe_coding_breaking_my_app_so_i/ | bluemouse_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgc5pc | false | null | t3_1qgc5pc | /r/LocalLLaMA/comments/1qgc5pc/i_got_tired_of_vibe_coding_breaking_my_app_so_i/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=108&crop=smart&auto=webp&s=6b43f77cf664cf05623960599472f64ad6448358', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=216&crop=smart&auto=webp&s=84f89a800411ccfa4b4f8fd3cbdf959ff6d619ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=320&crop=smart&auto=webp&s=5ef81ee7df7b04de2e18861379480e9623786b3a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=640&crop=smart&auto=webp&s=d90dab8ea5e48cab8dc08d52fb7993f8b782aa70', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=960&crop=smart&auto=webp&s=584884161f41cf9fe5df2346f1ec53ca61375c80', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?width=1080&crop=smart&auto=webp&s=52598b2526d9e96b91ddc1fac606a08caf4436fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zqGDhabJ-T-DifUfwhcZarfb9hoGGMgIw40mjJy8zzQ.png?auto=webp&s=9a039c9483255e60031c7d12970b134bf23fc258', 'width': 1200}, 'variants': {}}]} |
From docs scraper to Self-Hosting AI skill factory: Skill Seekers now bootstraps itself as a Claude Code skill, analyzes code bases, detects design patterns and combine all the sources from documentations to code itself + NEW website to download and share skill configs [7.1K+ stars] | 1 | [removed] | 2026-01-18T15:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qgc245/from_docs_scraper_to_selfhosting_ai_skill_factory/ | Critical-Pea-8782 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qgc245 | false | null | t3_1qgc245 | /r/LocalLLaMA/comments/1qgc245/from_docs_scraper_to_selfhosting_ai_skill_factory/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.