title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can I get any kind of technical detail on the Tesla distributed inference fleet? | 0 | Recently, Tesla announced the "Tesla distributed inference fleet".
As a researcher,
I'm curious about the details of Tesla's system.
Is it pipeline parallelism (layer split)?
Or does each individual car run its own LLM?
Or is it speculative decoding?
And what are the details of the communication layer, which will be the biggest bottleneck? (I've heard it's through Starlink, but how specifically...?)
Personally, I don't think it's feasible to communicate the KV cache, so I'd guess they use a layer split.
Does anyone have any kind of information? and Welcome any kind of opinion! | 2026-01-14T10:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1qcjolc/can_i_get_a_any_kind_of_technical_detail_of_tesla/ | LingonberryOk5517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcjolc | false | null | t3_1qcjolc | /r/LocalLLaMA/comments/1qcjolc/can_i_get_a_any_kind_of_technical_detail_of_tesla/ | false | false | self | 0 | null |
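Some rough arithmetic makes the KV-cache vs layer-split intuition concrete. A minimal sketch, using purely illustrative numbers for a hypothetical ~70B dense model (the layer count, hidden size, and KV-head shape are assumptions, not anything known about Tesla's actual system):

```python
# Illustrative numbers only: a hypothetical ~70B dense transformer
# (80 layers, hidden 8192, 8 KV heads of dim 128, fp16). Nothing Tesla-specific.
layers, hidden, kv_heads, head_dim, fp16 = 80, 8192, 8, 128, 2
seq_len = 4096

# Layer split (pipeline parallel): a stage boundary only ships the current
# token's hidden state to the next device.
activation_per_token = hidden * fp16
print(f"activation per token: {activation_per_token / 1024:.0f} KB")      # ~16 KB

# Sharing a KV cache instead means moving keys+values for every layer and position.
kv_cache_bytes = layers * 2 * kv_heads * head_dim * seq_len * fp16
print(f"KV cache at {seq_len} tokens: {kv_cache_bytes / 1e9:.2f} GB")     # ~1.34 GB
```

Per-token activations at a stage boundary are tiny compared to a full KV cache, which is why exchanging KV caches over a high-latency link looks implausible and a layer split looks more realistic.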
Local VLMs struggling with OCR accuracy in NLP pipelines | 2 | Trying to use local VLMs like Llama-4 Scout or Qwen3-VL-30B for OCR on scanned docs to feed into NLP for entity extraction/summarization, but hitting constant accuracy walls. The model hallucinates on blurry text images, mangles handwritten notes, and totally botches complex layouts like tables or multi-column pages, which ends up garbling the NLP input and throwing off downstream analysis.
From digging around, common issues people run into: hallucinations on low-res/noisy scans (especially with ML-based OCR), bias towards clean printed text over handwriting, vulnerability to blur/high-frequency noise, lack of contextual understanding (it just spits out text without semantics), and high compute needs making local runs sluggish without beefy hardware. Dataset biases in training make it worse for edge cases too.
Anyone dealt with this? Tweaks like better pre-processing or sharpening images, or maybe specific quants that help? Or is traditional OCR still the move for reliability before VLM reasoning? | 2026-01-14T10:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qcjllm/local_vlms_struggling_with_ocr_accuracy_in_nlp/ | aidenclarke_12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcjllm | false | null | t3_1qcjllm | /r/LocalLLaMA/comments/1qcjllm/local_vlms_struggling_with_ocr_accuracy_in_nlp/ | false | false | self | 2 | null |
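On the pre-processing idea: a common first step before handing scans to a VLM is to denoise, upscale low-resolution pages, and binarize. A minimal OpenCV sketch (assumes `opencv-python` is installed; the file names and thresholds are placeholder values to tune, not recommendations):

```python
import cv2

def preprocess_scan(path: str, out_path: str) -> None:
    """Basic cleanup for a scanned page before VLM/OCR: grayscale, denoise,
    upscale small scans, and adaptive-threshold to suppress background noise."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.fastNlMeansDenoising(img, h=10)          # remove speckle noise
    if img.shape[1] < 1600:                            # upscale low-res scans
        img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, blockSize=31, C=15)
    cv2.imwrite(out_path, img)

preprocess_scan("page_raw.png", "page_clean.png")
```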
Speech to text via LLM | 3 | Hi,
Is there already something more convenient than the WhisperKit SDK (https://github.com/argmaxinc/WhisperKit)? That one works on iOS/macOS and other platforms, and it worked very well; it actually deploys an LLM on an iPhone.
I know that similar setups were already discussed here (https://www.reddit.com/r/LocalLLaMA/comments/1h2u9ed/introducing_whisper_cpp_macos_utils_a_terminal/).
Looking at some projects like this one (https://github.com/rishikanthc/Scriberr) it looks like setting it up is still quite complex? | 2026-01-14T10:06:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qcjjm7/speech_to_text_via_llm/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcjjm7 | false | null | t3_1qcjjm7 | /r/LocalLLaMA/comments/1qcjjm7/speech_to_text_via_llm/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=108&crop=smart&auto=webp&s=18b5214143195986f6278a84d2f5da36242fc1cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=216&crop=smart&auto=webp&s=23b12ffff0d528b5a5536a7adf9781ef50b82a69', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=320&crop=smart&auto=webp&s=a1e57179796939de24b05c14a2b9856cd1168344', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=640&crop=smart&auto=webp&s=1a55b6d2f4fda77e81a9dc3ee6beb3cbe505d6e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=960&crop=smart&auto=webp&s=4db0d86e787d20a4dbc8060471326c84c40d36a1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?width=1080&crop=smart&auto=webp&s=36d0921f7daa9bfd061174a1a29af8679cea4e1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0FB-U7GfnqsUhU7x4U4lPR0SPTnvbtdpaTtXSbLO5XQ.png?auto=webp&s=61c30aad4c0b672bc08dc999623c0790d7685aba', 'width': 1200}, 'variants': {}}]} |
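If the goal is a simple local transcription setup rather than on-device iOS, one commonly used option is faster-whisper. A minimal sketch, assuming `pip install faster-whisper` and a local audio file (`meeting.m4a` is just a placeholder):

```python
from faster_whisper import WhisperModel

# "small" is a reasonable CPU-friendly default; use "large-v3" plus a GPU for best quality.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("meeting.m4a", vad_filter=True)
print(f"detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:6.1f}s -> {seg.end:6.1f}s] {seg.text}")
```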
What happened to 1.58bit LLMs? | 76 | Last year I remember them being super hyped and largely theoretical. Since then, I understand there’s a growing body of evidence that larger sparse models outperform smaller denser models, which 1.58bit quantisation seems poised to drastically improve
I haven’t seen people going “oh, the 1.58bit quantisation was overhyped” - did I just miss it? | 2026-01-14T09:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qcj1lr/what_happened_to_158bit_llms/ | Sloppyjoeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcj1lr | false | null | t3_1qcj1lr | /r/LocalLLaMA/comments/1qcj1lr/what_happened_to_158bit_llms/ | false | false | self | 76 | null |
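For context on the name: "1.58 bit" is log2(3), the information content of a ternary weight in {-1, 0, +1}. A quick back-of-envelope sketch (the 70B figure is just an example):

```python
import math

bits_per_weight = math.log2(3)                    # ternary {-1, 0, +1} -> ~1.585 bits
params = 70e9                                     # e.g. a 70B-parameter model

fp16_gb = params * 16 / 8 / 1e9                   # ~140 GB
ternary_gb = params * bits_per_weight / 8 / 1e9   # ~13.9 GB before packing overhead

print(f"{bits_per_weight:.3f} bits/weight")
print(f"fp16: {fp16_gb:.0f} GB, ternary: {ternary_gb:.1f} GB")
```

The catch, and part of why the hype cooled, is that BitNet-style ternary models have to be trained (or heavily retrained) that way; it is not a post-hoc quantization you can apply to existing weights.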
🚀 Announcing EROS — The Autonomous State-Derived Reward System (Released Under SBAAL v1.0) | 1 | [removed] | 2026-01-14T09:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qcitt2/announcing_eros_the_autonomous_statederived/ | ShadovvBeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcitt2 | false | null | t3_1qcitt2 | /r/LocalLLaMA/comments/1qcitt2/announcing_eros_the_autonomous_statederived/ | false | false | self | 1 | null |
Notes on time. | 0 | Three slides.
Time.
Dissipation.
Modular flow.
No definitions. | 2026-01-14T09:07:46 | https://www.reddit.com/gallery/1qcimhd | StarThinker2025 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qcimhd | false | null | t3_1qcimhd | /r/LocalLLaMA/comments/1qcimhd/notes_on_time/ | false | false | 0 | null | |
Pocket TTS: a 100M-parameter text-to-speech | 26 | 2026-01-14T08:51:14 | https://huggingface.co/kyutai/pocket-tts | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qcid3l | false | null | t3_1qcid3l | /r/LocalLLaMA/comments/1qcid3l/pocket_tts_a_100mparameter_texttospeech/ | false | false | default | 26 | {'enabled': False, 'images': [{'id': '-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=108&crop=smart&auto=webp&s=0c945c479fbfc02efe507dda5eea9fa7f3344900', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=216&crop=smart&auto=webp&s=f8e218c76ad9950b5864a5ebdeebd657b59c8a49', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=320&crop=smart&auto=webp&s=03e282271e866cfb61d690e3913855de2a0118f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=640&crop=smart&auto=webp&s=3304eb1bb7dc20cd79911028958854a3039569d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=960&crop=smart&auto=webp&s=9db1aa8a5f6acadb7d0c109082db126e79eb9097', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?width=1080&crop=smart&auto=webp&s=d11d9dc6882967cb93d844e7d02bf59f0b83a048', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-wU8cKM1ybBFD4hDGC_AsWfo00bhoyCdexKDfL5kTEQ.png?auto=webp&s=56ebfa96adfb1161e4ac4c146be2e9b2f0d8fde9', 'width': 1200}, 'variants': {}}]} | |
Intel Arc Pro B60? (In Quad... 6x... 8x configuration) | 6 | Has anyone tried running multiples of Intel Arc Pro B60 with 24GB VRAM with larger models like MiniMax, maybe quants of GLM?
Would it be a good budget choice at \~$650 per GPU given that 3090 stock is very thin now and they go for much more with no warranty and most of the lifespan gone?
It's hard to find eBay listings below $800 for 3090, and that will get you a (severely?) used GPU with no warranty.
I only found [these benchmarks](https://www.storagereview.com/review/intel-arc-pro-b60-battlematrix-preview-192gb-of-vram-for-on-premise-ai) for a multi-B60 setup, but the numbers seem off, and [this discussion here blames the author](https://www.reddit.com/r/LocalLLaMA/comments/1pd3mdw/comment/ns37lg3/) aka the tests were probably not properly set up.
Would love to check in if anyone has **new** data points/experience to report?
I am considering a 6x B60 set up.
Thanks | 2026-01-14T08:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qchn6x/intel_arc_pro_b60_in_quad_6x_8x_configuration/ | Infinite100p | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qchn6x | false | null | t3_1qchn6x | /r/LocalLLaMA/comments/1qchn6x/intel_arc_pro_b60_in_quad_6x_8x_configuration/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=108&crop=smart&auto=webp&s=8588e4fd64043093f314dc1485482e57d52f8b6b', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=216&crop=smart&auto=webp&s=8946621bf40b8b0147620740b4108d7bc7d6279d', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=320&crop=smart&auto=webp&s=33cb116bc30dca9ff7a22497c6082f82c55e47c0', 'width': 320}, {'height': 404, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=640&crop=smart&auto=webp&s=d01be5fde96c8bded5f16d12f17d20ed686c5e29', 'width': 640}, {'height': 606, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=960&crop=smart&auto=webp&s=c1ea9f47713f138ec188eec4ed9fe0e8cb4f2da5', 'width': 960}, {'height': 682, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?width=1080&crop=smart&auto=webp&s=3718d451d2b85b1d3596f0d55880dfeeda59c4e1', 'width': 1080}], 'source': {'height': 948, 'url': 'https://external-preview.redd.it/0mZ7_HvOTkdLgtq4s_qT3vry9cE_RWRALKiuljZ3Fl8.jpeg?auto=webp&s=8570c57bff1f312ec6fc70983da572c2ec0364a7', 'width': 1500}, 'variants': {}}]} |
Diffing PDFs meaningfully | 1 | As per title. I come across scenarios where there are year-to-year versions of the same PDF (the difference could be 1 small edit in a slide, or 20 pages)
I usually use winmerge for plaintext but PDFs are finicky, what options are there? | 2026-01-14T07:28:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qch1h5/diffing_pdfs_meaningfully/ | MullingMulianto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qch1h5 | false | null | t3_1qch1h5 | /r/LocalLLaMA/comments/1qch1h5/diffing_pdfs_meaningfully/ | false | false | self | 1 | null |
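One low-tech option is to extract text per page and run an ordinary unified diff over it. A minimal sketch assuming `pip install pypdf` (file names are placeholders; heavily formatted or scanned PDFs will still need OCR or layout-aware extraction first):

```python
import difflib
from pypdf import PdfReader

def pdf_to_lines(path: str) -> list[str]:
    """Extract text page by page so the diff stays roughly aligned to pages."""
    lines = []
    for i, page in enumerate(PdfReader(path).pages, start=1):
        lines.append(f"=== page {i} ===")
        lines.extend((page.extract_text() or "").splitlines())
    return lines

old, new = pdf_to_lines("report_2024.pdf"), pdf_to_lines("report_2025.pdf")
for line in difflib.unified_diff(old, new, fromfile="2024", tofile="2025", lineterm=""):
    print(line)
```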
PocketPal doesn't detect NPU and GPU on Snapdragon 8 Gen 5 | 3 | Device is OnePlus 15R 12/256 GB running OxygenOS 16.0.2 (Android 16)
Are there other free apps that support NPU for LLM? | 2026-01-14T07:18:42 | LdWilmore | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qcgw06 | false | null | t3_1qcgw06 | /r/LocalLLaMA/comments/1qcgw06/pocketpal_doesnt_detect_npu_and_gpu_on_snapdragon/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'BDKfFIZoZwzZfuiUKLaP9nMbe8dbdRLcln69yaednP8', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=108&crop=smart&auto=webp&s=c0447bf42d07e291fc0381baefd5b3ecd512c496', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=216&crop=smart&auto=webp&s=9a23c351be62bb6f88867453bd41da137ae3b960', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=320&crop=smart&auto=webp&s=3657b23a0840ac71c61db29c513492ffdcc640c1', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=640&crop=smart&auto=webp&s=f7f5c8bdb36dc93646ddf51185ab47f73b056113', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=960&crop=smart&auto=webp&s=3030020d735336ad6945395a63cd3c193d16cd7a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?width=1080&crop=smart&auto=webp&s=3b8e8f149d12772a68a5a9d6b8089ddd1d75fd62', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/o0v1ctxrk9dg1.jpeg?auto=webp&s=61749010a25b608d1d49264393a1b2184d257ede', 'width': 1272}, 'variants': {}}]} | ||
Unique 3.2M-word bilingual (DE-EN) literary erotica corpus available for AI training—teasers on Hugging Face | 78 | Hi r/LocalLLaMA,
As an independent author, I've created a large original bilingual erotic fiction corpus (German originals + expanded English adaptations) that's well-suited for training or fine-tuning creative/uncensored models. Highlights:
* \~3.2 million words across 500+ chapters
* Long-form, character-driven narrative with progressive consensual kink (e.g., urophilia, period sex), rural/urban Vietnam settings
* Sophisticated prose with philosophical references (Kant, Hegel, existential themes)
* Bilingual parallel structure (German first, English creatively reworked—sometimes longer, sometimes shorter)
Three gated teaser datasets (\~475k bilingual words total) are now live on Hugging Face:
* Profile with all three: [https://huggingface.co/douglasvanwyck](https://huggingface.co/douglasvanwyck)
* With Anna in Saigon (complete mini-series, \~87k words)
* "Phung's Quest" (ongoing series, 7 chapters, \~87k words)
* "Center of the Universe"—First 35 chapters (main saga teaser, \~301k words) | 2026-01-14T07:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qcgviy/unique_32mword_bilingual_deen_literary_erotica/ | kardinalzahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcgviy | false | null | t3_1qcgviy | /r/LocalLLaMA/comments/1qcgviy/unique_32mword_bilingual_deen_literary_erotica/ | false | false | nsfw | 78 | {'enabled': False, 'images': [{'id': 'd9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=108&crop=smart&auto=webp&s=26036914ae69aa65f49c0596a74de9842d0e67df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=216&crop=smart&auto=webp&s=e23eaa0d40337ecfa1d6522c4dfbadeb0938cfa2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=320&crop=smart&auto=webp&s=67e38f6edc9833c2f579837a6c88ecbdf9dd8dff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=640&crop=smart&auto=webp&s=f441825c6afc1242012a12a720326aafe58d4494', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=960&crop=smart&auto=webp&s=7184c56044cc19c0063c91ee0b44e7f060976459', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=1080&crop=smart&auto=webp&s=d92c19d902c69760ec548203fa153b8993754307', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?auto=webp&s=1af02360b6a5d66bf7071d424daea2f191e5f7d8', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=465230bc174fb8682b0e3cf9896e0dfb32060bf2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ce8d5215c0dee1d0594b62e93e7813d51827a03a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c3b60ca9b149abae9212745274daf04320d61f71', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=915121a5b920d6fb6ac52b41b6123c4ce705b409', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=3bd4980ce7329a05cacdc7c06fda9b0d44001c12', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=7f8a9b85b8519fe5002408ce404ef4f5a864d9e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?blur=40&format=pjpg&auto=webp&s=3c5e71e6afe65fe9eac5b6b761c87338567717c3', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=465230bc174fb8682b0e3cf9896e0dfb32060bf2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=ce8d5215c0dee1d0594b62e93e7813d51827a03a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c3b60ca9b149abae9212745274daf04320d61f71', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=915121a5b920d6fb6ac52b41b6123c4ce705b409', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=3bd4980ce7329a05cacdc7c06fda9b0d44001c12', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=7f8a9b85b8519fe5002408ce404ef4f5a864d9e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d9kfiBV0c0hx27vfrKwakP8bP2DuwI7VvnUZEMkSpaI.png?blur=40&format=pjpg&auto=webp&s=3c5e71e6afe65fe9eac5b6b761c87338567717c3', 'width': 1200}}}}]} |
Minimal LLM memory retrieval | 2 | I’ve been experimenting with a small lab project for local LLM usage to better understand context injection, memory, and retrieval.
The idea is intentionally simple:
Every user request generates a compact, one line summary of the reply that is appended to a plain text memory file.
Memory lines are retrieved semantically before inference (top-k + similarity threshold).
Conversation history is treated as “what was previously said”, not as verified facts.
Context is injected at the prompt level only when semantically relevant.
This is not meant to replace tools like Open WebUI. It’s a learning environment to reason about minimal architectures and compare transparent text based memory vs more traditional RAG setups under identical model and embedding conditions.
Repo (experimental, evolving):
https://github.com/paxal-l/CxAGT
I'm interested in feedback from others who have explored similar minimalistic or transparent approaches to memory handling in local LLM systems.
| 2026-01-14T07:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qcgmxk/minimal_llm_memory_retrieval/ | Text-Sufficient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcgmxk | false | null | t3_1qcgmxk | /r/LocalLLaMA/comments/1qcgmxk/minimal_llm_memory_retrieval/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=108&crop=smart&auto=webp&s=e2387c223ec7753c46effcf01b94faf42d16ed51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=216&crop=smart&auto=webp&s=617c1332fd7117e372e28755c0f92292b8947e2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=320&crop=smart&auto=webp&s=3fe7ff89b415295779dd59a9593494f3286f82f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=640&crop=smart&auto=webp&s=506856ec3d68e2d21d7919d10e204fb65867ee46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=960&crop=smart&auto=webp&s=820a190d9826e032bf43b17cc432a37b7c4df3cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?width=1080&crop=smart&auto=webp&s=2142c03a8aa015bab5277f886d5a969d5a7eb8bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AAtUZvcVs90Ge4l9d17XjuEaaglpkh-BqZ3uzMJnnzc.png?auto=webp&s=f2428b233b7011016396e80a7fdda8ba4f248548', 'width': 1200}, 'variants': {}}]} |
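For readers who want the retrieval step spelled out, here is a minimal sketch of the top-k + similarity-threshold pattern described above, using sentence-transformers; this is not the repo's actual code, and the model name, file name, and threshold are placeholder choices:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_memories(query: str, memory_file: str = "memory.txt",
                      top_k: int = 5, threshold: float = 0.35) -> list[str]:
    """Return the most similar memory lines, treated as 'what was previously said'."""
    with open(memory_file) as f:
        memories = [line.strip() for line in f if line.strip()]
    if not memories:
        return []
    scores = util.cos_sim(model.encode(query), model.encode(memories))[0]
    ranked = sorted(zip(scores.tolist(), memories), reverse=True)[:top_k]
    return [m for s, m in ranked if s >= threshold]

# Injected at the prompt level only when something relevant comes back:
context = retrieve_memories("what did we decide about the backup schedule?")
```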
"Computer Use" agents are smart, but they don't know your computer. (So I built a tool to show them) | 14 | I’ve been testing Computer Use models for local automation, and I keep hitting the same wall: **Context Blindness.**
The models are smart, but they don't know my specific environment. They try to solve problems the "generic" way, which usually breaks things.
**2 real examples where my agent failed:**
1. **The Terminal Trap:** I asked it to "start the server." It opened the default Terminal and failed because it didn't know to run `source .venv/bin/activate` first.
* *The scary part:* It then started trying to `pip install` packages globally to "fix" it.
2. **The "Wrong App" Loop:** "Message the group on WhatsApp." It launched the native desktop app (which I never use and isn't logged in). It got stuck on a QR code.
* *Reality:* I use WhatsApp Web in a pinned tab because it's always ready.
**The Solution: Record, Don't Prompt.**
I built **AI Mime** to fix this. Instead of prompting and hoping, I **record** the workflow once.
* I show it *exactly* how to activate the .venv.
* I show it *exactly* how to use whatsapp on the browser
The agent captures this "happy path" and replays it, handling dynamic data without getting "creative" with my system configuration.
Repo: [https://github.com/prakhar1114/ai\_mime](https://github.com/prakhar1114/ai_mime)
Is this "Context Blindness" stopping anyone else from using these agents for real work? | 2026-01-14T06:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qcfxk0/computer_use_agents_are_smart_but_they_dont_know/ | slow-fast-person | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcfxk0 | false | null | t3_1qcfxk0 | /r/LocalLLaMA/comments/1qcfxk0/computer_use_agents_are_smart_but_they_dont_know/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=108&crop=smart&auto=webp&s=58115dab855908484ed9f7a5057b40efdfbf368e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=216&crop=smart&auto=webp&s=b11a7a22449be2b79ae2e65c1c6c95f5885226b0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=320&crop=smart&auto=webp&s=06dad177c39f62e91af6b0717b324ee66e4dc41a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=640&crop=smart&auto=webp&s=79fec1b148d486d9d1ac8f4564ed7c179e359ba8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=960&crop=smart&auto=webp&s=4b3845e6ad09160e6169c788096ca00bd8527a94', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?width=1080&crop=smart&auto=webp&s=8c9b1413ea3e1274dfd1e60f2dce9171743ee6c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qIrdq3MvUpKTtIe8G25iqCGUHMCqT-LTGljwH7lXFDo.png?auto=webp&s=290dacdb082b48ed7a568562cc3a5c1740eafee6', 'width': 1200}, 'variants': {}}]} |
Noob question: imatrix, yes or not? | 16 | Does it make sense to use imatrix for specialized models (e.g. RP, coding, medical models), or would regular/static GGUFs be a better choice for these?
In the past I've been told imatrix affected things like thinking and story-writing, so I was wondering if it actually hurts specialized models.
Thanks in advance! | 2026-01-14T06:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qcfto0/noob_question_imatrix_yes_or_not/ | TheGlobinKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcfto0 | false | null | t3_1qcfto0 | /r/LocalLLaMA/comments/1qcfto0/noob_question_imatrix_yes_or_not/ | false | false | self | 16 | null |
Bounding box coordinates of objects in settelite imagery using Qwen3-vl:30b | 1 | [removed] | 2026-01-14T06:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qcfr1x/bounding_box_coordinates_of_objects_in_settelite/ | SheepherderExact2366 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcfr1x | false | null | t3_1qcfr1x | /r/LocalLLaMA/comments/1qcfr1x/bounding_box_coordinates_of_objects_in_settelite/ | false | false | self | 1 | null |
EXAONE MoE support has been merged into llama.cpp | 52 | # K-EXAONE-236B-A23B
# Introduction
We introduce **K-EXAONE**, a large-scale multilingual language model developed by LG AI Research. Built using a Mixture-of-Experts architecture, K-EXAONE features **236 billion total** parameters, with **23 billion active** during inference. Performance evaluations across various benchmarks demonstrate that K-EXAONE excels in reasoning, agentic capabilities, general knowledge, multilingual understanding, and long-context processing.
# Key Features
* **Architecture & Efficiency:** Features a 236B fine-grained MoE design (23B active) optimized with **Multi-Token Prediction (MTP)**, enabling self-speculative decoding that boosts inference throughput by approximately 1.5x.
* **Long-Context Capabilities:** Natively supports a **256K context window**, utilizing a **3:1 hybrid attention** scheme with a **128-token sliding window** to significantly minimize memory usage during long-document processing.
* **Multilingual Support:** Covers 6 languages: Korean, English, Spanish, German, Japanese, and Vietnamese. Features a redesigned **150k vocabulary** with **SuperBPE**, improving token efficiency by \~30%.
* **Agentic Capabilities:** Demonstrates superior tool-use and search capabilities via **multi-agent strategies.**
* **Safety & Ethics:** Aligned with **universal human values**, the model uniquely incorporates **Korean cultural and historical contexts** to address regional sensitivities often overlooked by other models. It demonstrates high reliability across diverse risk categories. | 2026-01-14T05:55:19 | https://github.com/ggml-org/llama.cpp/pull/18543 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qcff41 | false | null | t3_1qcff41 | /r/LocalLLaMA/comments/1qcff41/exaone_moe_support_has_been_merged_into_llamacpp/ | false | false | default | 52 | {'enabled': False, 'images': [{'id': 'zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=108&crop=smart&auto=webp&s=ba1adf86251c1c740e3b5a4bb8a3766dfff518af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=216&crop=smart&auto=webp&s=c9086f37186abb1d3110d2ea5062465d4cb1834e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=320&crop=smart&auto=webp&s=689c27574337d3de8d65ed625f185e3321eb462e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=640&crop=smart&auto=webp&s=02a15a93673baf3e7e305c8147197d65844556ed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=960&crop=smart&auto=webp&s=772166f96f3245ed80fa58ceeab1659cf117300b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?width=1080&crop=smart&auto=webp&s=8a80000f8fe5417c43891acfc2403da1d761d335', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zj2pPBSKKE7hlpLBhVdaJfKDygb15HG1H-ApMccLwl8.png?auto=webp&s=33d47bce910161a1b3fc56092cc058e52a0d5d3f', 'width': 1200}, 'variants': {}}]} |
What's the fuzz about Kimi K2 thinking? | 0 | So, I tried it. Specifically IQ1\_M quant, following [these instructions](https://unsloth.ai/docs/models/kimi-k2-thinking-how-to-run-locally#kimi-k2-thinking-guide), using llama.cpp. It overthinks no matter what options or prompts I try. Where Qwen3 and GLM 4.7 take 1.6K-4K tokens to generate my test case (simple bounced ball mobile app) it spends 9K. I don't let it pass much beyond that. It feels speedy, but completely useless with all the slop it generates with endless:
- But wait...
- Actually, a better approach
- Wait, I can use
- Wait, I think
- Actually, I can use
- Let me just do this
- etc.
How do you use it? | 2026-01-14T05:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qcejkv/whats_the_fuzz_about_kimi_k2_thinking/ | Clear_Lead4099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcejkv | false | null | t3_1qcejkv | /r/LocalLLaMA/comments/1qcejkv/whats_the_fuzz_about_kimi_k2_thinking/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
Shadows-Gemma-3-1B: cold start reasoning from topk20 logprob distillation | 27 |
[Shadows-Gemma-1B](https://huggingface.co/Echo9Zulu/Shadows-Gemma-3-1B) was trained for the google tunix hackathon and is my first finetuning project. Trained on 1569 samples in ~10 minutes on TPUv5-8e, and around 20min on A40, Shadows-Gemma is a general reasoning model trained without RL, code or math data distilled from non reasoning teacher gemma-3-4b-it.
When looking at topk20 logprob data, I noticed that some tokens appear early in the low ranks and sort of float around until eventually being selected much later. It turns out that when the average distance between first appearance and selection (what I'm calling "persistence") is greater, the features we know from reasoning traces (backtracking, solution exploration, drafting, rewriting) are more prominent in the training data. I'm calling these shadow tokens, and they may indicate reasoning behavior in the output distribution and surface text.
Shadows-Gemma-1B was trained using logprob distillation from teacher gemma-3-4b-it, which I rejection sampled to meet the following system prompt, which encourages interleaved reasoning;
```
You are Gemma, a thinking model who reasons through problems step by step before providing an answer. Conduct your reasoning within a <reasoning></reasoning> block, with intermediate steps using <processing></processing> tags, with the intermediate step inside. Continue like this until closing the </reasoning> block and providing your answer within <answer></answer>.
```
Once I started modeling token trajectories forward towards the end of a completion, I kept seeing the pattern *everywhere*, in other language models as well. Knowing more research, evaluation and compute would be required to study shadow tokens, I set myself on empirically demonstrating that shadow tokens are a trainable signal, which is about all I can say for sure at this time. Regardless, Shadow-Gemma-1B gives better answers on most questions I have tried and has become a generally capable reasoning model, thinking more on harder questions. To be clear, I'm not saying Shadows-Gemma beats any other model, even the base model, at a given task.
I am working on a post mortem with more details about the adventure, loss functions, code optimizations, interpretability data analysis tools, war stories from a one week port of pytorch --> JAX framework, discuss how SOTA LLMs were not always useful etc. Other datasets I made for this project will also be published soon:
- ~4800 Reasoning traces from DeepCogito-v2.1
- Full solutions for GSM8K by DeepSeekProverv2
[Shadows-Gemma-3-4B](https://huggingface.co/Echo9Zulu/Shadows-Gemma-3-4B) was a last minute full send using some runpod credits I had leftover just to see if it would work. Well, it did! I barely tested this one so ymmv. | 2026-01-14T04:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qcd9m1/shadowsgemma31b_cold_start_reasoning_from_topk20/ | Echo9Zulu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcd9m1 | false | null | t3_1qcd9m1 | /r/LocalLLaMA/comments/1qcd9m1/shadowsgemma31b_cold_start_reasoning_from_topk20/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=108&crop=smart&auto=webp&s=d57ccedcef1b7d7b0f2bb6a8cb242553b1da9c98', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=216&crop=smart&auto=webp&s=c06c732d9b9de64dfbcd3c7510120018ed039308', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=320&crop=smart&auto=webp&s=65256219c419511d9885e6f5ba498d51ed523a92', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=640&crop=smart&auto=webp&s=a756542bf398724ac1169f9505897e0979c31ad1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=960&crop=smart&auto=webp&s=3c33150e998a5a5295729b0451bf2811dcb764e4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?width=1080&crop=smart&auto=webp&s=27f8a7ad0810ab41ee84d2b8c76453b4e5240c99', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hQGged5Xu9fR-gFbXlBkTaRbZFLdL1-IMyr7hN2Sg9g.png?auto=webp&s=101af284fe10f70b1c68b1395244502831f82a52', 'width': 1200}, 'variants': {}}]} |
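The "persistence" measurement is easy to state in code. A minimal sketch of one reading of the description above (not the author's actual analysis pipeline): for each emitted token, measure how many steps earlier it first appeared anywhere in the top-k candidates, then average.

```python
def shadow_persistence(topk_ids: list[list[int]], chosen_ids: list[int], k: int = 20) -> float:
    """topk_ids[t] = token ids in the top-k distribution at step t;
    chosen_ids[t] = token actually emitted at step t.
    Returns the mean gap (in steps) between a token's first top-k appearance
    and the step where it is finally selected."""
    first_seen: dict[int, int] = {}
    gaps = []
    for t, (candidates, chosen) in enumerate(zip(topk_ids, chosen_ids)):
        for tok in candidates[:k]:
            first_seen.setdefault(tok, t)          # remember earliest sighting
        gaps.append(t - first_seen.get(chosen, t)) # 0 if it only just appeared
    return sum(gaps) / max(len(gaps), 1)

# Higher average persistence == more "shadow token" behaviour in the trace.
# (A real analysis would need to handle repeated tokens per occurrence.)
```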
Using local VLMs for OCR to feed into an NLP categorization pipeline - looking for beta testers (Loggr) | 2 | Building a health journaling app (Loggr) that runs entirely local on Apple Silicon. The core is a custom NLP pipeline that extracts structured health data from free-form text - food, exercise, supplements, sleep, etc. No LLM in the loop for extraction, sub-100ms latency, works on an air-gapped device.
Currently adding a feature to scan handwritten journals. Testing with Qwen2.5-VL-3B quantized via MLX for the OCR step, then feeding that text into the same pipeline. The 3B fits comfortably in 8GB unified memory, 7B needs 12GB+ but handles messier handwriting better. Running it as a batch process overnight since you're potentially processing years of journals.
Considered Apple's Vision framework but the handwriting recognition is hit or miss compared to the VLMs. Might end up doing a hybrid approach - Vision for quick preview, VLM for the actual extraction.
Looking for beta testers with old paper journals to throw at it. Especially interested in edge cases - bad handwriting, mixed languages, weird layouts. Sign up at [loggr.info](http://loggr.info) if you want to help stress test. I'll send you a beta build and you run your entries through it, then tell me how it went/ send me some human-readable diagnostics data.
What VLMs are people using for OCR these days? Qwen2.5-VL seems to be the go-to but curious if there's anything better for handwriting specifically. | 2026-01-14T04:02:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qcd8sw/using_local_vlms_for_ocr_to_feed_into_an_nlp/ | Mescallan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcd8sw | false | null | t3_1qcd8sw | /r/LocalLLaMA/comments/1qcd8sw/using_local_vlms_for_ocr_to_feed_into_an_nlp/ | false | false | self | 2 | null |
Loggr Beta testers wanted: scanning old handwritten journals into searchable health data | 1 | Building a health journaling app (Loggr) that runs entirely local on Apple Silicon. The core is a custom NLP pipeline that extracts structured lifestyle data from free-form text - food, exercise, supplements, sleep, etc. No LLM in the loop for extraction, sub-100ms latency.
Currently adding a feature to scan handwritten journals. Testing with Qwen2.5-VL-3B quantized via MLX for the OCR step, then feeding that text into the same pipeline. The 3B fits comfortably in 8GB unified memory, 7B needs 12GB+ but handles messier handwriting better. Running it as a batch process overnight since you're potentially processing years of journals.
Considered Apple's Vision framework but the handwriting recognition is hit or miss compared to the VLMs. Might end up doing a hybrid approach - Vision for quick preview, VLM for the actual extraction.
Looking for beta testers with old paper journals to throw at it. Especially interested in edge cases - bad handwriting, mixed languages, weird layouts. Sign up at [loggr.info](http://loggr.info) if you want to help stress test. Just to be clear, I will never see your entries, everything in Loggr is designed to work on an air-gapped machine.
What VLMs are people using for OCR these days? Qwen2.5-VL seems to be the go-to but curious if there's anything better for handwriting specifically. | 2026-01-14T04:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qcd6z5/loggr_beta_testers_wanted_scanning_old/ | Mescallan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcd6z5 | false | null | t3_1qcd6z5 | /r/LocalLLaMA/comments/1qcd6z5/loggr_beta_testers_wanted_scanning_old/ | false | false | self | 1 | null |
For RAG serving: how do you balance GPU-accelerated index builds with cheap, scalable retrieval at query time? | 2 | In RAG-style vector retrieval, I keep running into the same tradeoff: building high-quality graph indexes (HNSW/NSW-like) can be very compute-heavy, while query-time retrieval needs to scale cheaply with bursty traffic and decent tail latency.
GPUs can speed up index/graph construction a lot, but keeping GPUs around just for serving often feels expensive and harder to scale out.
One approach we’ve been experimenting with in an open-source database project I contribute to is a hybrid build/serve split: use GPU parallelism to construct a high-quality proximity graph (NN-Descent-style build + pruning), then serve queries on CPU replicas by storing/loading the built structure in a CPU-friendly form (think “GPU build → CPU search”). The idea is to spend GPU where it helps most (build time) while keeping serving scalable and cost-efficient.
It looks promising so far, but I'm curious: are there other ways to handle this build vs serve tradeoff in production? Do you (1) serve on GPU, (2) build on GPU but serve on CPU, (3) use different index families at large scale, or something else?
If you want to pick apart the approach, the full write-up is here: [https://milvus.io/blog/faster-index-builds-and-scalable-queries-with-gpu-cagra-in-milvus.md](https://milvus.io/blog/faster-index-builds-and-scalable-queries-with-gpu-cagra-in-milvus.md?utm_source=chatgpt.com)
| 2026-01-14T03:56:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qcd48i/for_rag_serving_how_do_you_balance_gpuaccelerated/ | IllGrass1037 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcd48i | false | null | t3_1qcd48i | /r/LocalLLaMA/comments/1qcd48i/for_rag_serving_how_do_you_balance_gpuaccelerated/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=108&crop=smart&auto=webp&s=158fcd61955ed3f82f8bdccf6dcfca497a8fb0fb', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=216&crop=smart&auto=webp&s=5f232e3f6d997eb90ad7b78fac0d3546094b0ede', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=320&crop=smart&auto=webp&s=2ad5ec025888de8b81f8d8de96f5efa4e17d4edc', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=640&crop=smart&auto=webp&s=cb79db21a70dd460af99849ba6055f2caf723888', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=960&crop=smart&auto=webp&s=bfcc5b573cb470fa2a03a05822ec0933377c1099', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?width=1080&crop=smart&auto=webp&s=206457e9f5df9f3c5bc2c10801a0e9e44ad7a6d9', 'width': 1080}], 'source': {'height': 1881, 'url': 'https://external-preview.redd.it/-kJYNtkMxFG7yxLj34KN0ir3jR3pm0SmVkmbat3uUjw.png?auto=webp&s=89a689d4c737753d5d1117483ecb724014513dc6', 'width': 3600}, 'variants': {}}]} |
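Outside Milvus/CAGRA specifically, the build-on-GPU / serve-on-CPU split can be sketched with FAISS, assuming `faiss-gpu` on the build machine and `faiss-cpu` on the serving replicas (the index type, dimensions, and nprobe below are arbitrary example values):

```python
import numpy as np
import faiss

d = 768
xb = np.random.rand(1_000_000, d).astype("float32")   # stand-in for your embeddings

# --- build box (GPU): train/add on GPU for speed ---
cpu_index = faiss.index_factory(d, "IVF4096,PQ64")
gpu_index = faiss.index_cpu_to_all_gpus(cpu_index)
gpu_index.train(xb)
gpu_index.add(xb)

# --- convert back to a CPU-friendly form and ship it to replicas ---
faiss.write_index(faiss.index_gpu_to_cpu(gpu_index), "ivfpq.faiss")

# --- serving replica (CPU only): load and query, no GPU needed ---
index = faiss.read_index("ivfpq.faiss")
index.nprobe = 32
scores, ids = index.search(xb[:5], 10)
```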
Best model for roleplay service? | 0 | Hey all! I'm building a companion chat service in my country's native language, since I haven't seen much of that here yet, with a fine-tuned regional voice, roleplay capability, and optional NSFW mode. I'm looking for model suggestions that are strong at: (1) roleplay/character consistency, (2) safe optional NSFW handling, (3) good latency/cost balance, (4) adherence to instructions.
I've tried [https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct), which tends to ignore instructions a lot,
and then I tried [https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b), with mixed results: sometimes I get really good convos, sometimes it just straight up ignores instructions and yolos it.
The AI being able to follow instructions is pretty important since the companions have a routine, a life, an identity; I've already spent a bunch of time on systems that try to bring as much realism as possible to the companion.
What models would you recommend?
initially im trying to spend at most 1000-1200 hosting a month, i want to see first if its something desired/worth continuing putting money into | 2026-01-14T03:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qcd3sn/best_model_for_roleplay_service/ | Alcacholaruz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcd3sn | false | null | t3_1qcd3sn | /r/LocalLLaMA/comments/1qcd3sn/best_model_for_roleplay_service/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=108&crop=smart&auto=webp&s=31d18ce28fa60855908c80d766b94ecb773136ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=216&crop=smart&auto=webp&s=a12ac9056b0695c04d5c4884d831122b278bbcd6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=320&crop=smart&auto=webp&s=2b8ca805673edb0e79fec358fd58aeb558cd09a6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=640&crop=smart&auto=webp&s=0aa79900782d4da7da0c145e010b809f4fc014ee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=960&crop=smart&auto=webp&s=5f563ff8a002d1a7b8a5a808a3b9d907cd765fa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?width=1080&crop=smart&auto=webp&s=4d30ab39a6e46008559533f6e15d04e221db46b3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X4QSJWWVTVGQSf7Kjxz4vC86H71FBD-NCqMssC0D9h4.png?auto=webp&s=7e83c35505778adb4f341480905c448482487b07', 'width': 1200}, 'variants': {}}]} |
Should I upgrade RTX 4070 SUPER? | 2 | I updated my gear in early 2025: AMD Ryzen 7 9700X, 32GB RAM, GeForce RTX 4070 SUPER. Even at that time, I was already worried that Nvidia only provided 12GB of VRAM.
Now that I'm entering the local LLM world, I'm upset that I can't run the bigger models. For example, I can't run the OCR ones, like olmOCR and DeepSeek-OCR. In ComfyUI, I can't run any decent realistic image or video model.
And with the recent RAM price hike, I definitely don't want to invest in buying more of it, so I thought about upgrading the GPU instead. I would wait 1-2 years if Nvidia releases an RTX 5070 Ti Super with 16GB or if AMD releases a competitive GPU for AI, if the price stays around $700-800.
But if GPU prices skyrocket through 2028, maybe I should upgrade to a normal RTX 5070 Ti right now.
IDK. I am really clueless and maybe you guys could have some different opinion. | 2026-01-14T03:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1qcc7ft/should_i_upgrade_rtx_4070_super/ | issamu2k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcc7ft | false | null | t3_1qcc7ft | /r/LocalLLaMA/comments/1qcc7ft/should_i_upgrade_rtx_4070_super/ | false | false | self | 2 | null |
Two ASRock Radeon AI Pro R9700's cooking in CachyOS. | 11 | Run alone, it reads them hitting 3.3GHz sometimes. I use Vulkan because ROCm seems intermittently unstable. I'm running one agent on each card, mostly Qwen3-vl-30b-a3b Q5 quants (decent performance:context window trade-off), Devstral2-24b, Qwen3-coder, and sometimes Nemotron for simple tasks, but Nemotron has been unimpressive and prone to error during heavy tool use.
I guess my bifurcated motherboard lacks P2P, so loading a big 52GB Qwen-Next-32B model across both GPUs works and gets like \~28 tok/s from zero-shot, but there is still a bottleneck with it juggling read-write across the motherboard.
The limitation forced me to run separate quantized agents, which has been better for productivity and I prefer HITL. (I launch 2x LM Studio instances as a fish function, w/separate APIs and shared qdrant+Neo4j+postgres+memory servers via MCP for long-memory coordination in projects. This allows me to have an orchestration model on GPU0 write and execute python scripts that are queued on GPU1's API. (This coordinated governance structure also aligns with the new [Atlas method](https://www.youtube.com/watch?v=tez4AyTm1Rs) of Agent Orchestration.)
I just wanted to share my experience since I know these cards are new'ish.
I hope everyone had a great day!
RocmBandwidthTest Version: 2.6.0
Launch Command is: rocm-bandwidth-test (rocm_bandwidth -a + rocm_bandwidth -A)
Device: 0, Intel(R) Core(TM) Ultra 7 265KF
Device: 1, AMD Radeon Graphics, GPU-[UUID1], 04:0.0
Device: 2, AMD Radeon Graphics, GPU-[UUID2], 08:0.0
Inter-Device Access
D/D 0 1 2
0 1 1 1
1 1 1 0
2 1 0 1
Inter-Device Numa Distance
D/D 0 1 2
0 0 20 20
1 20 0 N/A
2 20 N/A 0
Unidirectional copy peak bandwidth GB/s
D/D 0 1 2
0 N/A 28.622 28.727
1 28.160 449.668 N/A
2 28.099 N/A 571.232
Bidirectional copy peak bandwidth GB/s
D/D 0 1 2
0 N/A 33.557 34.633
1 33.557 N/A N/A
2 34.633 N/A N/A
| 2026-01-14T03:08:30 | -philosopath- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qcc3dg | false | null | t3_1qcc3dg | /r/LocalLLaMA/comments/1qcc3dg/two_asrock_radeon_ai_pro_r9700s_cooking_in_cachyos/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'j6k5y1je58dg1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/j6k5y1je58dg1.png?width=108&crop=smart&auto=webp&s=4083a48c4e7c75e27cacc70aa609380c6c8893a4', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/j6k5y1je58dg1.png?width=216&crop=smart&auto=webp&s=f276710fb94545820c122759f38152762d64da4c', 'width': 216}, {'height': 317, 'url': 'https://preview.redd.it/j6k5y1je58dg1.png?width=320&crop=smart&auto=webp&s=c41bcd249e0373a621013977a5f1b4e20c66180d', 'width': 320}], 'source': {'height': 474, 'url': 'https://preview.redd.it/j6k5y1je58dg1.png?auto=webp&s=7b93ebe716fff19f4731360a1507ac4ed3011b66', 'width': 477}, 'variants': {}}]} | |
GLM-Image just dropped — an open multimodal model from Zai Org (language + vision). | 15 | Zai Org released GLM-Image, extending the GLM family with native image understanding and cross-modal reasoning. It’s not just captioning — the model is built to reason over visual inputs and text together.
Why it’s interesting:
• Unified vision + language model
• Designed for VQA, image understanding, and multimodal reasoning
• Fully open on Hugging Face (weights available)
• Fits into the growing ecosystem of open multimodal GLM models
Feels like another signal that open multimodal models are maturing fast — not just matching basic vision tasks, but moving toward real reasoning over images.
Curious how this compares in practice vs Qwen-VL, InternVL, or LLaVA variants, especially on reasoning-heavy prompts.
Model page: https://huggingface.co/zai-org/GLM-Image | 2026-01-14T02:51:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qcbq2n/glmimage_just_dropped_an_open_multimodal_model/ | InternationalToe2678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcbq2n | false | null | t3_1qcbq2n | /r/LocalLLaMA/comments/1qcbq2n/glmimage_just_dropped_an_open_multimodal_model/ | false | false | self | 15 | null |
Is there any way to estimate tokens per second given VRAM and such? The calculators don’t have every model. | 0 | For example if I find GiggleBox-Super-Cool-Mega-SLOP-XL-24B.Q3629272\_K-x-ULTRA-super-DPO-LX-MegasXLR-Voldemort-GodU-Homelander-XFiles-Hopped-Into-A-Coffee-Shop.gguf and I know I have 24GB VRAM is there a way I can estimate the T/s I’ll get running it? | 2026-01-14T02:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qcbkkg/is_there_any_way_to_estimate_tokens_per_second/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcbkkg | false | null | t3_1qcbkkg | /r/LocalLLaMA/comments/1qcbkkg/is_there_any_way_to_estimate_tokens_per_second/ | false | false | self | 0 | null |
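There's no universal calculator, but a first-order estimate for single-stream decode is that it's memory-bandwidth-bound: each new token has to stream the (active) weights once. A rough sketch with made-up example numbers (the bandwidth figures, file sizes, and the 0.7 efficiency factor are assumptions):

```python
def est_tokens_per_sec(weights_gb: float, bandwidth_gbs: float, efficiency: float = 0.7) -> float:
    """Single-stream decode is roughly memory-bound: each new token streams the
    (active) weights once, so speed ~ bandwidth / file size. efficiency covers
    overhead, KV-cache reads, etc. For MoE models use the *active* expert size."""
    return bandwidth_gbs * efficiency / weights_gb

# A ~24B dense model at Q3 is roughly an 11 GB file; assume ~1000 GB/s VRAM bandwidth:
print(f"~{est_tokens_per_sec(11, 1000):.0f} tok/s, upper-ish bound")    # ~64

# If the file doesn't fit (say 30 GB: 20 GB in VRAM, 10 GB in system RAM at ~60 GB/s),
# the slow portion dominates the per-token time:
per_token = 20 / 1000 + 10 / 60
print(f"~{1 / per_token:.0f} tok/s with CPU offload")                   # ~5
```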
Free local tool for preparing PDFs and CSVs into JSON / TXT for LLaMA workflows | 1 | [removed] | 2026-01-14T02:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qcbfb1/free_local_tool_for_preparing_pdfs_and_csvs_into/ | Mythline_Studio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcbfb1 | false | null | t3_1qcbfb1 | /r/LocalLLaMA/comments/1qcbfb1/free_local_tool_for_preparing_pdfs_and_csvs_into/ | false | false | self | 1 | null |
Tired of Claude's pricing? I built a CLI wrapper that lets you switch to cheaper providers with one command | 4 | Hey r/LocalLLaMA,
Like many of you, I got tired of Claude's API pricing eating into my dev budget. So I built something simple: **ClaudeGate** - a CLI wrapper that lets you use Claude Code with cheaper API providers.
**The Problem:**
Claude is amazing, but Anthropic's pricing adds up fast. Many of us already know about cheaper alternatives through OpenRouter, DeepSeek, etc. but switching between them is a pain.
**The Solution:**
ClaudeGate wraps Claude Code and lets you hot-swap providers with a single command:
```
npm install -g claudegate
claudegate config # Set up your provider
claudegate # Run Claude Code with your chosen provider
```
**Currently supported providers:**
- Anthropic (original)
- OpenRouter
- DeepSeek
- [Z.AI](http://Z.AI)
- Kimi K2
- MiniMax
- Novita AI
The beauty is you keep using Claude Code's interface - same commands, same workflow - just with different (often much cheaper) backend providers.
GitHub link in comments. Would love feedback from this community since you all understand the local/alternative LLM landscape better than anyone.
What providers would you like to see added? | 2026-01-14T02:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qcb6cr/tired_of_claudes_pricing_i_built_a_cli_wrapper/ | Euphoric_Paint4055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcb6cr | false | null | t3_1qcb6cr | /r/LocalLLaMA/comments/1qcb6cr/tired_of_claudes_pricing_i_built_a_cli_wrapper/ | false | false | self | 4 | null |
Frustrated with cloud GPU pricing, so I built something - looking for feedback | 0 | **TL;DR:** Built a serverless GPU platform called SeqPU. 15% cheaper than Runpod, pay per second, no idle costs. Free credits on signup, DM me for extra if you want to really test it. [SeqPU.com](http://SeqPU.com)
**Why I built this**
I was burning money on cloud GPUs just to experiment with different models and fine-tunes. Every time I wanted to test a new quant or run a quick LoRA, I'd spin up an instance, wait for setup, run for 20 minutes, forget to shut it down, and pay for 3 hours.
Wanted something where I could just run the thing and pay for exactly what I used.
**How it works**
* Upload your Python script through the web IDE
* Pick your GPU (A100 80GB, H100, etc.)
* Hit run - billed per second of actual execution
* Logs stream in real-time, download outputs when done
No Docker, no SSH, no instance management. Just code and run.
**Why it's cheaper**
Downloads and environment setup happen on CPUs, not your GPU bill. Most platforms start charging the moment you spin up - even while you're pulling model weights or installing dependencies. That's 80GB of Llama getting billed at GPU rates for no reason.
Files persist between runs too. Download a model once, it's there next time. No re-downloading 50GB every session.
**What LocalLLaMA people would use it for**
* Fine-tuning with LoRA/QLoRA without babysitting instances
* Batch inference when you need more VRAM than your local card has
* Testing different model sizes before committing to buy hardware
* Running the big stuff (70B+, 120B) that won't fit on consumer GPUs
**Try it**
Free credits on signup at seqpu.com. Run your actual workloads, compare the bill yourself.
DM me if you want extra credits to really put it through its paces. Would rather get real feedback than have you scroll past. [SeqPU.com](http://SeqPU.com) | 2026-01-14T02:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qcb4vl/frustrated_with_cloud_gpu_pricing_so_i_built/ | Impressive-Law2516 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcb4vl | false | null | t3_1qcb4vl | /r/LocalLLaMA/comments/1qcb4vl/frustrated_with_cloud_gpu_pricing_so_i_built/ | false | false | self | 0 | null |
My friend bought the Nvidia Spark and asked me to set it up for him... | 0 | Hey all, looking for advice here. I have a close friend that bought the Nvidia DGX Spark machine. He also has multiple businesses and is super into AI. On top of that, he loves all things Nvidia and has the capital to blow money on the Spark without much thought of what to do with it.
He's asked me if I can figure out how to set it up for him and what he could do with it. He is not tech savvy whatsoever. Me on the other hand, I'm a tech enthusiast and work in IT. I told him I'd look into it and help him see if he can get any practical business use out of it.
At first, my research told me how the Spark is a local AI machine. I thought great, I have no idea how to setup a local AI box but it'd be a great learning experience for me. For him, I was hoping he could use it to help analyze private internal documents for his companies. Financials, forms, legal documents, the like. However, the more I research, the more I see that many people recommend against using it in this case. That the Spark is geared towards developers creating AI models to run on more powerful machines, not using it as a self-hosted AI server.
I'm looking for more insight and community feedback into this situation I'm in. Should I continue to attempt to set it up? Would there be any practical use case for him? He's familiar with ChatGPT and would expect performance similar or not far off from that. Or do I break the news that he wasted his money on this thing and give up before I get started. Keep in mind, I've never setup a self-hosted AI box before but I do work in IT (Systems Administrator) and know how to research and problem solve. Thank you all! | 2026-01-14T01:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qcaf86/my_friend_bought_the_nvidia_spark_and_asked_me_to/ | Jonny_Boy_808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qcaf86 | false | null | t3_1qcaf86 | /r/LocalLLaMA/comments/1qcaf86/my_friend_bought_the_nvidia_spark_and_asked_me_to/ | false | false | self | 0 | null |
Introducing GLM-Image | 111 | Introducing GLM-Image: A new milestone in open-source image generation.
GLM-Image uses a hybrid auto-regressive plus diffusion architecture, combining strong global semantic understanding with high fidelity visual detail. It matches mainstream diffusion models in overall quality while excelling at text rendering and knowledge intensive generation.
Tech Blog: http://z.ai/blog/glm-image
Experience it right now: http://huggingface.co/zai-org/GLM-Image
GitHub: http://github.com/zai-org/GLM-Image
| 2026-01-14T01:25:35 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qc9sw2 | false | null | t3_1qc9sw2 | /r/LocalLLaMA/comments/1qc9sw2/introducing_glmimage/ | false | false | default | 111 | {'enabled': True, 'images': [{'id': '70ypvyc5w7dg1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=108&crop=smart&auto=webp&s=be9a4f33c97a308eb313c76cc361ed27ac5fb7fa', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=216&crop=smart&auto=webp&s=557d8af050a7520bde2f41a956a5b43cca4eb047', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=320&crop=smart&auto=webp&s=2ac6afdd23eb87bbc6dd092d8f915739fa17596f', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=640&crop=smart&auto=webp&s=df4d302e9bb74550a3c16fd5342ab649a5bc3a53', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=960&crop=smart&auto=webp&s=0bb5512932b694b4e2827e59084e1147fa63d795', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?width=1080&crop=smart&auto=webp&s=0ec21205413ba03e169f67e8683349072847cfeb', 'width': 1080}], 'source': {'height': 908, 'url': 'https://preview.redd.it/70ypvyc5w7dg1.jpeg?auto=webp&s=3860865fdae0f0d6911dae21bac2a7b82d147e68', 'width': 1814}, 'variants': {}}]} | |
GLM-Image is released! | 578 | GLM-Image is an image generation model that adopts a hybrid autoregressive + diffusion decoder architecture. In general image generation quality, GLM‑Image aligns with mainstream latent diffusion approaches, but it shows significant advantages in text-rendering and knowledge‑intensive generation scenarios. It performs especially well in tasks requiring precise semantic understanding and complex information expression, while maintaining strong capabilities in high‑fidelity and fine‑grained detail generation. In addition to text‑to‑image generation, GLM‑Image also supports a rich set of image‑to‑image tasks including image editing, style transfer, identity‑preserving generation, and multi‑subject consistency.
Model architecture: a hybrid autoregressive + diffusion decoder design. | 2026-01-14T01:17:16 | https://huggingface.co/zai-org/GLM-Image | foldl-li | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1qc9m6x | false | null | t3_1qc9m6x | /r/LocalLLaMA/comments/1qc9m6x/glmimage_is_released/ | false | false | 578 | {'enabled': False, 'images': [{'id': 'Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=108&crop=smart&auto=webp&s=da7129c68aadb2bd163b5306c69e4bc164d03be3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=216&crop=smart&auto=webp&s=fae2a0fb4c7795d1bcc3de10f3df7ad743f87e33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=320&crop=smart&auto=webp&s=eb1608c27b4622ae04acd97f9a03b199e26f3059', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=640&crop=smart&auto=webp&s=251fac1763ed77fdaf4e281f649fddd4555de498', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=960&crop=smart&auto=webp&s=c69bf1e75fc808a4b2e0801756f5a7a394d0f34b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?width=1080&crop=smart&auto=webp&s=7d0f90cf734ab268356b128f8ee296eb4040965a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ei4JzvCHJGNODl-Xo97JEKHuZJZU81UlEy5iyXWioSw.png?auto=webp&s=4b62df508ea121de01825545a28de06d0099d06b', 'width': 1200}, 'variants': {}}]} | |
Can I get a TL;DR on what's so great about gpt-oss? | 0 | I apologize for actively being too lazy to research, but I was looking up what was good to run on my 16GB card (96GB RAM) and the common answer was gpt-oss, specifically the 20B.
I understand it's a MoE, which allows it to be so small and fast compared to other models of the same size, but what's so significant about it that it's so popular? | 2026-01-14T01:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qc9i28/can_i_get_a_tlder_on_whats_so_great_about_gptoss/ | IZA_does_the_art | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc9i28 | false | null | t3_1qc9i28 | /r/LocalLLaMA/comments/1qc9i28/can_i_get_a_tlder_on_whats_so_great_about_gptoss/ | false | false | self | 0 | null |
Hey Local LLM. Who will win the NFL divisional playoff this weekend? | 0 |
Here's my small setup. Was able to give internet access to my local LLM.
AMD 9900X, 5080, 32GB DDR5 (MSI, found at Costco)
Windows 11 WSL: Docker, Traefik, Wireguard, Ministral 3 14b reasoning 2512, Kokoro TTS, Nvidia Parakeet v3, LLAVA 7b tunneled to my VPS server.
cherta@racknerd-e983f61:\~/projects/jamba$ docker exec jamba-web wget -qO- --post-data='{"message":"What are the NFL divisional round playoff matchups this weekend January 18 2026?","backend":"ministral"}' --header='Content-Type: application/json' --timeout=90 [http://localhost:3000/jamba/api/chat](http://localhost:3000/jamba/api/chat) 2>&1 | python3 -c "import sys, json; d=json.load(sys.stdin); print('Mode:', d.get('crawl',{}).get('mode') if d.get('crawl') else 'No crawl'); print('Source:', d.get('crawl',{}).get('source') if d.get('crawl') else 'N/A'); print(); print('Answer:'); print(d.get('content'))"
Mode: smart-browse
Source: [https://fbschedules.com/nfl-playoff-schedule-2026-divisional-round-sites-dates-time-tv-set/](https://fbschedules.com/nfl-playoff-schedule-2026-divisional-round-sites-dates-time-tv-set/)
Answer:
Here is the confirmed data for the \*\*NFL Divisional Round playoff matchups\*\* for \*\*this weekend (January 17–18, 2026)\*\*, with the games scheduled for \*\*January 18, 2026\*\*:
| \*\*Matchup\*\* | \*\*Time (ET)\*\* | \*\*Network\*\* |
|--------------------------|--------------|--------------------------|
| \*\*(5) Houston Texans\*\* at \*\*(2) New England Patriots\*\* | 3:00 PM | ESPN/ABC/ESPN+ |
| \*\*(5) Los Angeles Rams\*\* at \*\*(2) Chicago Bears\*\* | 6:30 PM | NBC/Peacock |
| 2026-01-14T00:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qc90jb/hey_local_llm_who_will_win_the_nfl_divisional/ | Fabulous_Fact_606 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc90jb | false | null | t3_1qc90jb | /r/LocalLLaMA/comments/1qc90jb/hey_local_llm_who_will_win_the_nfl_divisional/ | false | false | self | 0 | null |
Free tool to parse & chunk your AI conversation exports (ChatGPT, Claude, Grok) | 1 | [removed] | 2026-01-14T00:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qc8vgm/free_tool_to_parse_chunk_your_ai_conversation/ | CoDy-28601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc8vgm | false | null | t3_1qc8vgm | /r/LocalLLaMA/comments/1qc8vgm/free_tool_to_parse_chunk_your_ai_conversation/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=108&crop=smart&auto=webp&s=d3163372322419180415d3131f0d1ffc303e92dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=216&crop=smart&auto=webp&s=83892152f6826ee44db1fd2c41e0203492859986', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=320&crop=smart&auto=webp&s=28e281b081a1784ba89fa945799e030277cd602d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=640&crop=smart&auto=webp&s=cdc291322b1156b5ecf8b2a17cedb1f7df0f2ca4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=960&crop=smart&auto=webp&s=c3501db5dc1101b75e853083efff3780daddcdd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?width=1080&crop=smart&auto=webp&s=f5ea2449d15d1c12bb395d0eff79c79774cd9370', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Iw5js3u-q1g8TVB6pQI8APkWMrvEvU8gMukudH4QT4U.png?auto=webp&s=31f67a77fe9caa750df12af126134a15f9dd7430', 'width': 1200}, 'variants': {}}]} |
Anyone here tried Rerun (usererun.com)? How local is it really + battery impact? | 1 | [removed] | 2026-01-14T00:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qc820v/anyone_here_tried_rerun_usereruncom_how_local_is/ | clean-db | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc820v | false | null | t3_1qc820v | /r/LocalLLaMA/comments/1qc820v/anyone_here_tried_rerun_usereruncom_how_local_is/ | false | false | self | 1 | null |
Built an 8× RTX 3090 monster… considering nuking it for 2× Pro 6000 Max-Q | 41 | I’ve been running an 8× RTX 3090 box on an EPYC 7003 with an ASUS ROMED8-2T and 512 GB DDR4-3200.
The setup is not pretty. Lots of PCIe risers, I didn’t know about MCIO 8 months ago. The board has 7× x16 Gen4 slots, so for the 8th GPU I’m using an x8/x8 bifurcator plus a daisy-chained riser: motherboard to riser to bifurcator to GPU 1 on the bifurcator and GPU 2 on another riser. This is purely because of physical space and riser length limits.
As expected, things are weird. One GPU runs at x8, the other at x4, likely the daisy-chained riser, but I haven’t had time to deep-debug. Another GPU shows up as x8 even when it shouldn’t, either a jumper I’m missing or a 3090 with a mining or modded vBIOS. Stability only became acceptable after forcing all PCIe slots to Gen3, although I still see one of the x8 GPUs "falling off the PCI bus" (it shows up as NA in nvtop), which forces me to reboot the server (10 minutes to vLLM readiness).
Because of this Frankenstein setup, I’m considering replacing the whole thing with 2× RTX Pro 6000 Max-Q, basically trading 8 riser-mounted 3090s for a clean dual-GPU build. This would triple the cost of the system. My 3090s were about $600 each, while the Max-Qs are quoted at about $8,300 each.
Putting elegance and some hit-or-miss stability gains aside, is there any real performance upside here?
Quick power-efficiency napkin math says it would take about 7.1 years of nonstop usage to break even compared to the 8×3090 setup. I could switch from AWQ to NVFP4 quantization. How much performance should I realistically expect for AI coding agents like Claude Code and OpenCode?
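If anyone wants to sanity-check that number, here is roughly the shape of the napkin math as a minimal sketch. The per-card power draws and the $0.10/kWh electricity price below are placeholder assumptions (my own inputs differ slightly, which is why this lands near 7.5 rather than 7.1 years):

```python
# Break-even sketch: 8x RTX 3090 vs 2x RTX Pro 6000 Max-Q (assumed numbers)
capex_3090 = 8 * 600            # what I paid per 3090, USD
capex_maxq = 2 * 8300           # quoted Max-Q price, USD
extra_capex = capex_maxq - capex_3090                  # ~11,800 USD

watts_3090_rig = 8 * 300        # assumed average draw per 3090 under load
watts_maxq_rig = 2 * 300        # Max-Q cards are capped at 300 W
saved_kw = (watts_3090_rig - watts_maxq_rig) / 1000    # 1.8 kW saved

price_per_kwh = 0.10            # assumed electricity price, USD/kWh
savings_per_year = saved_kw * 24 * 365 * price_per_kwh  # ~1,577 USD/yr

print(f"years to break even: {extra_capex / savings_per_year:.1f}")  # ~7.5
```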
Would prefill latency improve in a meaningful way?
VRAM would be roughly the same today, with room to add 2 more GPUs later without risers and potentially double max VRAM. But is this even a good platform for FP8 coding models like MiniMax 2.1 or GLM 4.7?
Am I missing any real advantages here, or is this mostly an expensive way to clean up a messy but functional setup? | 2026-01-14T00:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qc81si/built_an_8_rtx_3090_monster_considering_nuking_it/ | BeeNo7094 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc81si | false | null | t3_1qc81si | /r/LocalLLaMA/comments/1qc81si/built_an_8_rtx_3090_monster_considering_nuking_it/ | false | false | self | 41 | null |
An.. MCP… Commercial? | 10 | I’m still not sure if this is real or AI-generated, but the first comment says it’s “unhinged”. Is this really an MCP commercial? | 2026-01-13T23:44:11 | https://youtu.be/Nejecji5XNQ | slurmernetes | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1qc7fhd | false | null | t3_1qc7fhd | /r/LocalLLaMA/comments/1qc7fhd/an_mcp_commercial/ | false | false | default | 10 | null |
NovaSR: A tiny 52kb audio upsampler that runs 3600x realtime. | 67 | I released NovaSR, a very tiny 52kb audio upsampler that enhances muffled 16kHz audio to produce clearer 48kHz audio. It's incredibly small and really fast (it can process 100 to 3600 seconds of audio in just 1 second on a single GPU).
Why is it useful?
1. It can enhance any TTS model's quality. Most generate at 16kHz or 24kHz, and NovaSR can enhance them with nearly 0 computation cost.
2. It can restore low quality audio datasets really quickly.
3. It can fit on basically any device. It's just 52kb, which means it's smaller than a 3-second audio file itself.
Right now, it was only trained on just 100 hours of data so it has room for improvement, but it still produces good quality audio at such a tiny size.
Github repo: [https://github.com/ysharma3501/NovaSR](https://github.com/ysharma3501/NovaSR)
Model with some examples: [https://huggingface.co/YatharthS/NovaSR](https://huggingface.co/YatharthS/NovaSR)
Space to try it(It's running on a weak 2 core cpu machine so won't be 3600x realtime but still around 10x realtime): [https://huggingface.co/spaces/YatharthS/NovaSR](https://huggingface.co/spaces/YatharthS/NovaSR)
Stars or Likes would be appreciated if found helpful. Thank you.
| 2026-01-13T23:33:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qc76dc/novasr_a_tiny_52kb_audio_upsampler_that_runs/ | SplitNice1982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc76dc | false | null | t3_1qc76dc | /r/LocalLLaMA/comments/1qc76dc/novasr_a_tiny_52kb_audio_upsampler_that_runs/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=108&crop=smart&auto=webp&s=34cf0cfb10edf83cfc45e8c02222411ea1496aa1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=216&crop=smart&auto=webp&s=9fbf01eec6102d7a61289e0317905a4971be0c28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=320&crop=smart&auto=webp&s=b7bfb71f7243d0d4ea413c1cc4a96d26fd4325b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=640&crop=smart&auto=webp&s=b00f942931e0f14ed9bec45a3048bf9436c76f15', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=960&crop=smart&auto=webp&s=0a090b9d8bb8b4c919275823f64037ed35ea0b98', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?width=1080&crop=smart&auto=webp&s=89305d9763a7f71d0752827407b4efb0fbb43a56', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SXJA6mw48DKbWR0ypJ-bCoeNPKnGRyw-CFm5eOfn_Cc.png?auto=webp&s=59384f886da3a9cb312659579042f6f3a932d8cb', 'width': 1200}, 'variants': {}}]} |
How to use AI locally to get ahead in a workplace that’s rolling out AI | 0 | So I just built my own Nvidia RTX 16 GB GPU powered system. Idea was to test and develop software, scripts etc locally and then see introducing improvements to the company I work for. Initially I planned to use my local system to extract data from our vast repository of pdf documents, and build processes around having said data ready for piping into various tools to get the output we need.
But I got worried about data exfiltration, so I parked the idea.
I then thought I could leverage my years of Python experience (I’m not a pro coder nor is my company in the IT sector, but I had gotten some qualifications on Python years back) and use my AI machine for developing other projects.
However ; the company I work for is rolling out CoPilot with its agents etc to across the board and running a big training campaign for everyone.
The ultimate goal is to leverage my domain knowledge plus Python / general IT knowledge to gain an advantage and further my career.
But now that gap I had over everyone else in the office is closing rapidly.
Just looking for some general advice and wondering where do I go from here ?
Thanks | 2026-01-13T23:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qc6gbr/how_to_use_ai_locally_to_get_ahead_in_a_workplace/ | Prinzen2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc6gbr | false | null | t3_1qc6gbr | /r/LocalLLaMA/comments/1qc6gbr/how_to_use_ai_locally_to_get_ahead_in_a_workplace/ | false | false | self | 0 | null |
Any Claude Cowork alternative worth checking? | 2 | I am not a dev, hence Cowork really appeals to me | 2026-01-13T22:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qc687e/any_claude_cowork_qlternative_worth_checking/ | marsxyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc687e | false | null | t3_1qc687e | /r/LocalLLaMA/comments/1qc687e/any_claude_cowork_qlternative_worth_checking/ | false | false | self | 2 | null |
Soprano TTS training code released: Create your own 2000x realtime on-device text-to-speech model with Soprano-Factory! | 304 | Hello everyone!
I’ve been listening to all your feedback on Soprano, and I’ve been working nonstop over these past three weeks to incorporate everything, so I have a TON of updates for you all!
For those of you who haven’t heard of Soprano before, it is an on-device text-to-speech model I designed to have highly natural intonation and quality with a small model footprint. It can run up to 20x realtime on CPU, and up to 2000x on GPU. It also supports lossless streaming with 15 ms latency, an order of magnitude lower than any other TTS model. You can check out Soprano here:
Github: [https://github.com/ekwek1/soprano](https://github.com/ekwek1/soprano)
Demo: [https://huggingface.co/spaces/ekwek/Soprano-TTS](https://huggingface.co/spaces/ekwek/Soprano-TTS)
Today, I am releasing training code for you guys! This was by far the most requested feature to be added, and I am happy to announce that you can now train your own ultra-lightweight, ultra-realistic TTS models like the one in the video with your own data on your own hardware with Soprano-Factory! Using Soprano-Factory, you can add new voices, styles, and languages to Soprano. The entire repository is just 600 lines of code, making it easily customizable to suit your needs.
In addition to the training code, I am also releasing Soprano-Encoder, which converts raw audio into audio tokens for training. You can find both here:
Soprano-Factory: [https://github.com/ekwek1/soprano-factory](https://github.com/ekwek1/soprano-factory)
Soprano-Encoder: [https://huggingface.co/ekwek/Soprano-Encoder](https://huggingface.co/ekwek/Soprano-Encoder)
I hope you enjoy it! See you tomorrow,
\- Eugene | 2026-01-13T22:32:00 | https://v.redd.it/wnuwfpdqz6dg1 | eugenekwek | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qc5nml | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wnuwfpdqz6dg1/DASHPlaylist.mpd?a=1770935544%2CYzZkNTIyNTFjMzdkNzdkNmZjNWM3MWEwOWFjMzNjYmM1YzNlN2UyMjc0MmU0NGIxYWZlMDZhN2MxZjhjNjQyZQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/wnuwfpdqz6dg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/wnuwfpdqz6dg1/HLSPlaylist.m3u8?a=1770935544%2COGJjMTk3YjQ5MTkwOGE0ZDkxOTJhNWE5NDYyZGI0Njg0ZTBkNjMyNzZkYTlmZjZmNjFiODA2MjJmZDk2YTg3MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wnuwfpdqz6dg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1qc5nml | /r/LocalLLaMA/comments/1qc5nml/soprano_tts_training_code_released_create_your/ | false | false | 304 | {'enabled': False, 'images': [{'id': 'amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=108&crop=smart&format=pjpg&auto=webp&s=8644ddd61dfb3f99480d25bba8043c0dbeeb423c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=216&crop=smart&format=pjpg&auto=webp&s=8a2460636c543e36fcf0f21de7a3bab8bd9b1205', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=320&crop=smart&format=pjpg&auto=webp&s=2f124db19e31e68d3b0c6b63d6846f08f9bce385', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=640&crop=smart&format=pjpg&auto=webp&s=01d1a6d6394ef5cff01d7b32464418094b1cec20', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=960&crop=smart&format=pjpg&auto=webp&s=93816c5b74a27165d35b017bc5ac5a2c6345eca2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=575db7b934a1ad8a35ec3512f36d583e6a12b55e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/amZxajFtZXF6NmRnMQO5kEggYbW8-0IppaPjE5mW-pGiD_HSvWQwK_psM6yd.png?format=pjpg&auto=webp&s=e5f15ad319287b0d0f93625ba8ee586685c443aa', 'width': 1280}, 'variants': {}}]} | |
Llama Mycelium vs Llama Baseline 91% vs 52% Math500 L5 | 1 | Math Benchmark Feat: 91 % vs 52% with the same base model.
[https://github.com/bryceroche/mycelium](https://github.com/bryceroche/mycelium)
[https://drive.google.com/file/d/1Gn8Efk4F2GW1bT3qGlHmKV-V\_C6hIaLk/view](https://drive.google.com/file/d/1Gn8Efk4F2GW1bT3qGlHmKV-V_C6hIaLk/view) | 2026-01-13T22:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qc5mxo/llama_mycelium_vs_llama_baseline_91_vs_52_math500/ | Free_Preference_3340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc5mxo | false | null | t3_1qc5mxo | /r/LocalLLaMA/comments/1qc5mxo/llama_mycelium_vs_llama_baseline_91_vs_52_math500/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=108&crop=smart&auto=webp&s=730cccfbc1842a89bf6a10e977982200f0e133c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=216&crop=smart&auto=webp&s=dcf696c422b681d7a976f85526b550cd20ac1a81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=320&crop=smart&auto=webp&s=19a9ea1f75885adff3e90d4f24778da3641e210a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=640&crop=smart&auto=webp&s=ef579e2b24f713fb86574dab32a7aaca130537c6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=960&crop=smart&auto=webp&s=9c61afc7184c59d028cebeca8724ed1021aed648', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?width=1080&crop=smart&auto=webp&s=94312022ce6e35bc6135ebacc2fca0a9ed983093', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pTL-B8HSqq6gnT_hR2x7r4sJJ0VyAhZD8xRPncewa_s.png?auto=webp&s=32f6d4fff93cb8c91b13b549f3a4e28a212c0536', 'width': 1200}, 'variants': {}}]} |
I built a way to make infrastructure safe for AI | 0 | I built a platform that lets AI agents work on infrastructure by wrapping KVM/libvirt with a Go API.
Most AI tools stop at the codebase because giving an LLM root access to prod is crazy. [fluid.sh](http://fluid.sh) creates ephemeral sandboxes where agents can execute tasks like configuring firewalls, restarting services, or managing systemd units safely.
**How it works:**
- It uses qcow2 copy-on-write backing files to instantly clone base images into isolated sandboxes (see the sketch after this list).
- The agent gets root access within the sandbox.
- Security is handled via an ephemeral SSH Certificate Authority; agents use short-lived certificates for authentication.
- As the agent works, it builds an Ansible playbook to replicate the task.
- You review the changes in the sandbox and the generated playbook before applying it to production.
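To make the cloning step concrete, here is a minimal sketch of the qcow2 copy-on-write idea using stock `qemu-img` from Python. This is illustrative only, not fluid.sh's actual code, and the file paths are placeholders:

```python
import subprocess

# Create an ephemeral overlay that shares all unchanged blocks with the base
# image: the sandbox gets instant, writable, isolated storage while the base
# image stays untouched. Deleting the overlay file throws the sandbox away.
base = "ubuntu-base.qcow2"          # placeholder path
overlay = "sandbox-overlay.qcow2"   # placeholder path

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", "-b", base, "-F", "qcow2", overlay],
    check=True,
)
```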
Tech: Go, libvirt/KVM, qcow2, Ansible, Python SDK.
GitHub: [https://github.com/aspectrr/fluid.sh](https://github.com/aspectrr/fluid.sh)
Demo: [https://youtu.be/nAlqRMhZxP0](https://youtu.be/nAlqRMhZxP0)
Happy to answer any questions or feedback! | 2026-01-13T22:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1qc5hg2/i_built_a_way_to_make_infrastructure_safe_for_ai/ | poltergeist-__- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc5hg2 | false | null | t3_1qc5hg2 | /r/LocalLLaMA/comments/1qc5hg2/i_built_a_way_to_make_infrastructure_safe_for_ai/ | false | false | self | 0 | null |
What happens when you load two models and let each model take a turn generating a token? | 9 | To really make sure there is no misunderstanding, here it is played out:
I like eating hotdogs.
Model 1: I, eat, hot
Model 2: like, ing, dogs.
This is a simulation to demonstrate the idea.
So why? And is it worth it?
The first thought that came to my mind was that clearly it will be slower… but I wondered if a few adjustments to the software could ensure the context isn’t fully reprocessed for each model each time.
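To be concrete, here is a minimal sketch of the naive loop I have in mind. The model names are placeholders, it assumes both checkpoints share a tokenizer (e.g. two finetunes of the same base), and it deliberately re-runs the full context for whichever model is up, which is exactly the reprocessing cost mentioned above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("base/model-a")   # placeholder names,
models = [                                            # shared vocab assumed
    AutoModelForCausalLM.from_pretrained("base/model-a"),
    AutoModelForCausalLM.from_pretrained("base/model-b"),
]

ids = tok("I like eating", return_tensors="pt").input_ids
for step in range(20):
    model = models[step % 2]                # take turns, one token each
    with torch.no_grad():
        logits = model(ids).logits          # naive: full prefill every step
    next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0], skip_special_tokens=True))
```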
My next thought was how would two different model families handle this? For example GPT-OSS 120b and GLM-4.6V? What happens when the east meets west?
What happens if you always did inference on a smaller model, but only used it when it predicted the next word with high confidence and/or it was a common word (the, a, an, has, etc.) from the top 200 English words? Would this be faster than a draft model with a larger model and how much less accurate would it be?
One idea that came to mind is the fingerprint of the models would get muddied. How muddied? Only one way to find out.
And here you might get a little grumpy. I’m still at work and my knowledge to accomplish this is pretty narrow so I can’t give you this answer… yet. But a helpful upvote and a comment from you should get this some visibility so that those that have done this or have the knowledge to do so can beat me to providing you and I with an answer.
Have you done something wacky like this? I'd love to hear your experiences along these lines. | 2026-01-13T22:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1qc5f0q/what_happens_when_you_load_two_models_and_let/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc5f0q | false | null | t3_1qc5f0q | /r/LocalLLaMA/comments/1qc5f0q/what_happens_when_you_load_two_models_and_let/ | false | false | self | 9 | null |
Help with memory compatibility. | 0 | I bought a Xeon X99 kit with a 2680v4 that came with a 16GB stick. I bought another 16GB stick.
Before it arrived, I bought two 32GB sticks.
See the attached photos.
I didn't notice that the memory sticks were different, the 16GB ones being 2Rx4 and the 32GB ones 4DRx4.
If I put all 4 in, the PC doesn't turn on.
If I put only the 16GB ones in, it turns on normally.
If I remove them and put in the two 32GB ones, it turns on normally.
If I mix the two, it doesn't turn on.
Is there anything I can do to make it accept all 4 memory modules, or will I have to delete the 16GB ones??? | 2026-01-13T22:21:31 | https://www.reddit.com/gallery/1qc5dmb | NullKalahar | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qc5dmb | false | null | t3_1qc5dmb | /r/LocalLLaMA/comments/1qc5dmb/helps_with_memory_compatibility/ | false | false | 0 | null | |
VibeVoice - first impresssion and discussion | 2 | I've been trying to find something that could read aloud something that I've written, so I can sanity-check my own work without having to get bleary-eyed. This seems to work well. Windows 10, RTX3090. My starting point:
**Install Recipe:** [How to Install VibeVoice TTS Locally - Jarods Journey](https://www.youtube.com/watch?v=YWGAkfWL6R4)
It has captured my voice and how I would say things with almost flawless precision. I'm floored.
With made-up voices I've created using RVC, I get equally expressive content. I'm amazed that all it needs is a few minutes of spoken content.
However, I've noticed a couple things that I would love to see fixed.
1. The random intro sting thing has got to go. If it bothers me, as soon as I hear it, I stop and regenerate on another seed because it's sort of useless. It would be better to allow manual addition of some preferred intro/outro content.
2. Syllable density per second rises as time progresses. That's not very useful. I never talk that fast. I'd be grateful for a way to ensure syllable density per second remains constant at whatever rate I choose. Granted, some speakers are faster than others, and that's okay. I just want to control that.
3. Occasionally, I get a generation where it starts speaking in tongues and then snaps out of it and resumes speaking normally. I imagine CFG scale could manage some of that, but it might be nice if it didn't have to be managed at all.
4. For a hill in Rome named "Aventine", an American might pronounce it "A ven teen" (rhymes with bean), but someone from the UK might pronounce it "A ven tyne" (rhymes with fine). In this case, it doesn't matter, but when people are in the same story or reading the same paper, I'd like to directly control the pronunciation to my preferences for words like that. Especially for last names.
5. I'd like to see the documents saved to a preferred folder rather than piling up in a temp folder on drive C. I'm curious if the temp files go away, or if I have to go in and manually remove them. I'm guessing there's a way to do that, but I don't see directions in the repo. There seem to be suggestions that someone might build it out in the gradio interface.
So those are some of the questions/feedback I have for starters.
Thanks for any thoughts you may provide.
| 2026-01-13T22:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qc59e5/vibevoice_first_impresssion_and_discussion/ | LaughterOnWater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc59e5 | false | null | t3_1qc59e5 | /r/LocalLLaMA/comments/1qc59e5/vibevoice_first_impresssion_and_discussion/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'KAQnZ5B4chsHlVu6kn0VdZKszG36ovkCI0on-PizTe4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KAQnZ5B4chsHlVu6kn0VdZKszG36ovkCI0on-PizTe4.jpeg?width=108&crop=smart&auto=webp&s=fc68e5e34389cae8bc7dc74e9083a9ffb18cc495', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KAQnZ5B4chsHlVu6kn0VdZKszG36ovkCI0on-PizTe4.jpeg?width=216&crop=smart&auto=webp&s=846e6d8adb618c87696170c21c07900d38cbddc7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KAQnZ5B4chsHlVu6kn0VdZKszG36ovkCI0on-PizTe4.jpeg?width=320&crop=smart&auto=webp&s=acecfe30f84f90d32ff76ea738c8c46ae9690084', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KAQnZ5B4chsHlVu6kn0VdZKszG36ovkCI0on-PizTe4.jpeg?auto=webp&s=719b17ad908f9660b467583936c7069ebabf16ff', 'width': 480}, 'variants': {}}]} |
Behind the Scenes: An Earlier Version of EIVES | 0 | Quick behind-the-scenes clip from an earlier EIVES build.
One of the first working iterations, before the latest upgrades to voice flow, memory, and conversation pacing.
Runs fully local (LLM + ASR + TTS), no cloud.
I’m sharing this because I want people to see something clearly:
This isn’t a concept, it’s a working local system that I’m iterating fast.
If anyone wants to help beta test the current build, drop a comment or DM me. | 2026-01-13T22:14:45 | https://v.redd.it/h0kc39box6dg1 | The-Build | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qc573g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h0kc39box6dg1/DASHPlaylist.mpd?a=1770934510%2CMTNmMjFmNmEyMWUxZGUyMDg1MGZlM2FiNmVlMjY3ZWE4MTM5MWVjYTBkY2E1MTkzYjcxYTA2NTAwNzZkN2U1MA%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/h0kc39box6dg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1298, 'hls_url': 'https://v.redd.it/h0kc39box6dg1/HLSPlaylist.m3u8?a=1770934510%2CMTZjNDhlNTA2MjBjYmM0MDBiZWJhNDlmOTZhMzIxMzkwMDEwZTMyMjZhMDIxYzExNWFmNGQ4ZTRkZTM3NTA0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h0kc39box6dg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1qc573g | /r/LocalLLaMA/comments/1qc573g/behind_the_scenes_an_earlier_version_of_eives/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX', 'resolutions': [{'height': 129, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=108&crop=smart&format=pjpg&auto=webp&s=c5f46641025e1554b6c753ec2fbcb6e7f4f10b77', 'width': 108}, {'height': 259, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=216&crop=smart&format=pjpg&auto=webp&s=a0a093a5dcba62b55ecdc5880b2455e3906a6ab2', 'width': 216}, {'height': 384, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=320&crop=smart&format=pjpg&auto=webp&s=c9becac8deeb27caf41bb59ed01a512471e650a3', 'width': 320}, {'height': 768, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=640&crop=smart&format=pjpg&auto=webp&s=a941086d318ffb3ad8987f1139b7133d4da7a76f', 'width': 640}, {'height': 1152, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=960&crop=smart&format=pjpg&auto=webp&s=d3143040e09835af6e20e5c5fe3b0c226abf8cde', 'width': 960}, {'height': 1296, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=49431c03d8581d260f7b86ad73ee947b9193ec81', 'width': 1080}], 'source': {'height': 1729, 'url': 'https://external-preview.redd.it/dWp2MGlkNTl4NmRnMXawKAMC5n6WPd-Ey7PrQEgO_7yfdqsCiD7rwYKeawAX.png?format=pjpg&auto=webp&s=105a705dfea52f8ea7f8676a62dc58eac75a6b53', 'width': 1440}, 'variants': {}}]} | |
MedGemma 1.5: Google Research announces latest Open Medical AI model | 1 | [deleted] | 2026-01-13T22:11:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1qc54cg | false | null | t3_1qc54cg | /r/LocalLLaMA/comments/1qc54cg/medgemma_15_google_research_announces_latest_open/ | false | false | default | 1 | null | ||
MedGemma 1.5: Next generation medical image interpretation with medical speech to text with MedASR | 79 | 2026-01-13T22:11:12 | https://research.google/blog/next-generation-medical-image-interpretation-with-medgemma-15-and-medical-speech-to-text-with-medasr/ | CheekyBastard55 | research.google | 1970-01-01T00:00:00 | 0 | {} | 1qc53rf | false | null | t3_1qc53rf | /r/LocalLLaMA/comments/1qc53rf/medgemma_15_next_generation_medical_image/ | false | false | 79 | {'enabled': False, 'images': [{'id': 'Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=108&crop=smart&auto=webp&s=e85522ec0f6b9c59a8434a90d2ecebe8c2d71652', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=216&crop=smart&auto=webp&s=7456a0a4ebd37982129042b9b4aaa1a14401a280', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=320&crop=smart&auto=webp&s=0b4b0f3f5d7fb66280168c071659b8dfbc9f2f75', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=640&crop=smart&auto=webp&s=c9dad5b13e20f57d64f5fc0bbc7415c9f4186b1d', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?auto=webp&s=722aaac4c4cb8a58930bb43bac788a1400ae000c', 'width': 800}, 'variants': {}}]} | ||
Chinese Room | 2 | A web app I made to configure any 2 LLMs via an OpenRouter key to chat in a sandbox.
[https://chineseroom.org](https://chineseroom.org) | 2026-01-13T21:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qc4bvm/chinese_room/ | ifiwereu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc4bvm | false | null | t3_1qc4bvm | /r/LocalLLaMA/comments/1qc4bvm/chinese_room/ | false | false | self | 2 | null |
Building a game where you talk to NPCs using Llama 3.1-8B-q4, optimized for 6GB VRAM | 18 | I’ve been working on an investigative indie game. The core mechanic isn't a dialogue tree. It’s a direct interface with local LLMs. My goal was to make a polished, atmospheric experience that runs entirely offline on mid-range consumer hardware.
The game runs a local **Llama-3.1-8B (Q4\_K\_M)** instance. I am using tauri and llama-server with vulkan support. The UI is a custom WebGL-driven "OS" that simulates a retro-future terminal.
Targeting **6GB VRAM** was the biggest challenge. I had to keep the context window low, around 2048-4096 tokens, to fit the LLM’s KV cache.
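For anyone curious why the context cap matters, here is the rough KV-cache arithmetic. It is a sketch assuming Llama-3.1-8B's published config (32 layers, 8 KV heads, head dim 128) and an fp16 cache; a quantized KV cache would shrink these numbers:

```python
# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem
layers, kv_heads, head_dim = 32, 8, 128   # Llama-3.1-8B
bytes_per_elem = 2                        # fp16 cache assumed

per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem   # ~128 KB/token
for ctx in (2048, 4096, 8192):
    print(f"{ctx} tokens -> {per_token * ctx / 2**20:.0f} MiB of KV cache")
# 2048 -> 256 MiB, 4096 -> 512 MiB, 8192 -> 1024 MiB
# On top of roughly 4.9 GB of Q4_K_M weights, 6 GB fills up fast past 4k context.
```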
In this clip, I’m testing a bribery scenario. NPC tries to bribe me with bribe action, basically function calling at the end of the prompt.
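The action format is simpler than it sounds; the sketch below is only an illustration of the idea (the tag name and JSON schema here are made up, not the game's actual format). The model ends its reply with a single action tag that the game parses and executes:

```python
import json, re

# Hypothetical NPC reply: free-form dialogue followed by one JSON action tag.
reply = 'Nobody has to know about this... <action>{"type": "bribe", "amount": 200}</action>'

match = re.search(r"<action>(\{.*?\})</action>", reply, re.DOTALL)
action = json.loads(match.group(1)) if match else {"type": "none"}

if action["type"] == "bribe":
    # Game logic decides what the player can do with the offer.
    print(f"NPC offers a bribe of {action['amount']}")
```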
I have tested with an RTX 2060 and a 4070 Ti Super, and it works in real time on both.
I am planning to train a custom LoRA specifically for the game’s world and essentially eliminate any remaining hallucinations. It works surprisingly well right now, but a dedicated fine-tune will be the final step for total immersion.
I would like to hear your thoughts!! | 2026-01-13T21:38:36 | https://v.redd.it/rrveazsgq6dg1 | bayhan2000 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qc485u | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rrveazsgq6dg1/DASHPlaylist.mpd?a=1770932334%2CZTA0NGZlNDViZjkyZGU5ZmRlNWE4OTFhMjJjNjQ2N2U3MTM2YTM2ZTk3YzcwNjEwMmVhZDU3NGIyODkyNjE1Ng%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/rrveazsgq6dg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rrveazsgq6dg1/HLSPlaylist.m3u8?a=1770932334%2CZGZhMWIyM2FmZjdlNTI2MDQ5ZWYzZDAxNDFlOGMzM2M3YWNiZWJkOGZhMjg4OGM0YjJhODkzM2VmZjZkNjUwMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rrveazsgq6dg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qc485u | /r/LocalLLaMA/comments/1qc485u/building_a_game_where_you_talk_to_npcs_using/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=108&crop=smart&format=pjpg&auto=webp&s=983d66ed1f0f5a86f5ae93e386ba573537ecf105', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=216&crop=smart&format=pjpg&auto=webp&s=564a10f42937d22bbad2f9c715362e91ffc0fd8e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=320&crop=smart&format=pjpg&auto=webp&s=434c0ef0bafd686aa2155c039534a05003334d0b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=640&crop=smart&format=pjpg&auto=webp&s=ad9d8361380aa53f1d0c49a242f3e78340b4df58', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=960&crop=smart&format=pjpg&auto=webp&s=be543b164553840db31c48d0e383b4777b91487e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4411a9c73717339669d6dad728572263e0cac59b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MjFraTE5dGdxNmRnMeC5Rm5xTPWPllbMPfh3CJD-5GSC-t7xIsMmhXMeJtjN.png?format=pjpg&auto=webp&s=e3ed97350722aca0efdb1a2ab56f2e4292db08fb', 'width': 1920}, 'variants': {}}]} | |
I built an offline voice assistant that runs entirely locally — looking for 10 testers | 1 | [removed] | 2026-01-13T21:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qc3utu/i_built_an_offline_voice_assistant_that_runs/ | The-Build | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc3utu | false | null | t3_1qc3utu | /r/LocalLLaMA/comments/1qc3utu/i_built_an_offline_voice_assistant_that_runs/ | false | false | self | 1 | null |
Google dropped MedGemma 1.5 4b - Model for medical tasks | 1 | [removed] | 2026-01-13T21:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qc3ryi/google_dropped_medgemma_15_4b_model_for_medical/ | Ok_Emergency_5577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc3ryi | false | null | t3_1qc3ryi | /r/LocalLLaMA/comments/1qc3ryi/google_dropped_medgemma_15_4b_model_for_medical/ | false | false | self | 1 | null |
M.2 to 4x PCIe for extra GPU power question | 0 | Hello everyone, my 3 PCIe slots (one x16 and two x4) are now occupied with GPUs, and the 4th 3090 is arriving in 2 days, but I have no more PCIe slots. So I got an M.2 to PCIe x4 adapter with a 4-pin-to-SATA power cable, but I read that this only delivers about 50W, which is too little and can risk melting or fire. I searched more and found M.2 to OCuLink to PCIe adapters, but those need a 24-pin ATX cable, and if I use that I won't be able to add the sync cable between my motherboard and the 2 PSUs (because the sync cable is also 2 female and 1 male 24-pin). There are also 6-pin-powered M.2 USB to PCIe adapters, but those are x1, which seems a waste for an M.2 x4 slot. Any help on how this should work, what the best approach is, whether there is a way to run 2 ATX connections to the PSU, or whether there is a 6/8-pin PSU-powered M.2 adapter with PCIe x4 to supply the ~70W the 3090 might pull from the slot? Or any ideas :)
Thanks in Advance! | 2026-01-13T21:21:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qc3r7x/m2_to_4x_pcie_for_extra_gpu_power_question/ | Far_Gur_3974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc3r7x | false | null | t3_1qc3r7x | /r/LocalLLaMA/comments/1qc3r7x/m2_to_4x_pcie_for_extra_gpu_power_question/ | false | false | self | 0 | null |
Google dropped MedGemma 1.5 4b - LLM for medical tasks | 1 | [removed] | 2026-01-13T21:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qc3poh/google_dropped_medgemma_15_4b_llm_for_medical/ | Ok_Emergency_5577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc3poh | false | null | t3_1qc3poh | /r/LocalLLaMA/comments/1qc3poh/google_dropped_medgemma_15_4b_llm_for_medical/ | false | false | self | 1 | null |
Google dropped MedGemma 1.5 4b - LLM for medical tasks | 1 | [removed] | 2026-01-13T21:17:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qc3ntx/google_dropped_medgemma_15_4b_llm_for_medical/ | Ok_Emergency_5577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc3ntx | false | null | t3_1qc3ntx | /r/LocalLLaMA/comments/1qc3ntx/google_dropped_medgemma_15_4b_llm_for_medical/ | false | false | self | 1 | null |
How non-vision LLM handle image vision ? | 4 | Hey guys, thanks for all your valuable and very interesting posts on this sub.
It's my 1st post. I was wondering how do non-vision LLMs such as Deepseek v3.2 or GLM-4.7 handle images visions/understanding despite not being multimodal ?
Thank you for your help | 2026-01-13T20:57:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qc34r2/how_nonvision_llm_handle_image_vision/ | Individual-Source618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc34r2 | false | null | t3_1qc34r2 | /r/LocalLLaMA/comments/1qc34r2/how_nonvision_llm_handle_image_vision/ | false | false | self | 4 | null |
a2e.ai | 1 | [removed] | 2026-01-13T20:55:43 | https://video.a2e.ai/?coupon=dtjM | OkBuy30 | video.a2e.ai | 1970-01-01T00:00:00 | 0 | {} | 1qc32pl | false | null | t3_1qc32pl | /r/LocalLLaMA/comments/1qc32pl/a2eai/ | false | false | default | 1 | null |
Seline V0.1.4 - Codex OAuth | 7 | Hi guys!
Still improving and rocking PRs and commits lately with it!
Feedback is much welcomed.
[https://github.com/tercumantanumut/seline](https://github.com/tercumantanumut/seline) | 2026-01-13T20:18:46 | https://v.redd.it/qk5zb315d6dg1 | Diligent-Builder7762 | /r/LocalLLaMA/comments/1qc23oc/seline_v014_codex_oauth/ | 1970-01-01T00:00:00 | 0 | {} | 1qc23oc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qk5zb315d6dg1/DASHPlaylist.mpd?a=1771061404%2CNTQ0Y2MyNDQ5NDI3YTcwNjhkODZjYjk4OTQ0MTEyMGE1MmUwOWRjOWMzYzIzNmQ1NjA4MjE3MzJiZjIzOTA1NA%3D%3D&v=1&f=sd', 'duration': 151, 'fallback_url': 'https://v.redd.it/qk5zb315d6dg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/qk5zb315d6dg1/HLSPlaylist.m3u8?a=1771061404%2CNzM2ZGEwYmM4NmIyYjAzMzM5NWE1OGUxMzcwNWNiNzE2ZTczYjgxMDY2ZWMyNmZhYTRiZWVjOTQ5Zjc1Yjg3Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qk5zb315d6dg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1qc23oc | /r/LocalLLaMA/comments/1qc23oc/seline_v014_codex_oauth/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=108&crop=smart&format=pjpg&auto=webp&s=293dee86805c7c3bf29517590a86f4e246db950c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=216&crop=smart&format=pjpg&auto=webp&s=86e4f50e671d94eb6be699ce9ae5a585e7f791f8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=320&crop=smart&format=pjpg&auto=webp&s=ec1ffb369e3c639adca0122b63fe1ea15cc51d4f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=640&crop=smart&format=pjpg&auto=webp&s=93656f627995a59e65f48fd3730ddf2c1412ce22', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=960&crop=smart&format=pjpg&auto=webp&s=b91a28b654a900b65cd06241c0136b44d420abd9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=21ba94aff16b145988f6471926a3886d2f199daa', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dzY1NnUyMjVkNmRnMUmSYbXnweP7RjR2NYMpOSLE5Z9tkRJZWPp5KiKGLBp5.png?format=pjpg&auto=webp&s=4a4e5796800f81d65590455fc2ee33f902b8e039', 'width': 1920}, 'variants': {}}]} | |
RTX 6000 Pro (Blackwell) Wouldn’t POST on MSI Z790-P Pro [FIXED] | 15 | On Friday, I picked up an RTX6000, mobo, nvme, and ram. Recently, I replaced my 13600K in my desktop with a 14700K, and sent the 13600K back to Intel for warranty replacement due to the Vmin shift issue. Everyone knows what happens when you have spare parts, it turns into a whole new build...
I wanted to document this whole experience because there are very few reports out there about Blackwell setups and problems, and the ones that exist are mostly unresolved threads (see https://forum-en.msi.com/index.php?threads/msi-pro-z790-p-wifi-ddr4-no-boot-with-rtx-pro-blackwell.412240/ and https://www.reddit.com/r/nvidia/comments/1kt3uoi/finally_got_the_rtx_6000_blackwell_workstation/ ). Also because it was something like 12 hours of torture getting it all figured out.
Parts
* NVIDIA RTX 6000 Pro (Blackwell)
* MSI Pro Z790‑P
* Meshroom S v2 15L case
* 128GB DDR5‑6400, Samsung 990 Pro 4TB
After getting the whole system put together and the RTX 6000 installed, the system wouldn’t POST at all. EZ Debug LEDs would light up red -> yellow -> red -> yellow and then die, never reaching white or green. Just everything black.
I pulled the RTX 6000 and booted on the iGPU, that posted and dropped me into the UEFI. That also helped me understand how the EZ Debug LEDs should behave:
* Red -> Yellow -> White -> Green -> UEFI. With the iGPU, the sequence was perfect. With the RTX 6000, it died, just black after yellow.
Once I got into BIOS on the iGPU, I tried the settings that people mentioned in other threads:
* Disable CSM for pure UEFI
* Enable Above 4GB decoding for crypto mining support (some funky msi option, I don't think I've ever heard of this before)
* Disable ReBAR
The blackwell board doesn't seem to be able to negotiate rebar with the mobo, whatever, all disabled.
So... I reinstalled the RTX6000 and it POSTs, wow... then... I updated the BIOS... shit. The card wouldn't POST anymore... then I tried the iGPU, that shit wouldn't work either, the graphics would constantly get busted in BIOS everytime the iGPU booted up.
Since the RTX6000 and iGPU both wouldn't boot up into a working state, I pulled out my old old old Geforce 760 and plugged it in, and it POSTed and dropped into UEFI just fine. At this point, I tried downgrading the BIOS just to see if the iGPU would work; it didn't (same corrupt graphics in BIOS issue), and the blackwell wouldn't POST at all either. I took a look at the settings again and saw that CSM was still disabled, but the other settings for >4GB decoding and disabling rebar were reset. I put them back into place, reinstalled the RTX6000, and that shit POSTs again.
Key takeaways from this:
* Stay away from MSI, they have broken GPU support in this situation. And they refuse to acknowledge it, other than saying that they will not support the RTX6000 on a consumer board, despite it being a standard PCIE5 card.
* iGPU is also broken under MSI when CSM is disabled for pure UEFI
* BIOS updates wipes settings that leaves the blackwell card unusable and the system in a broken state unless the card is pulled and another discrete gpu is put in, maybe other Z790 boards would work with just iGPU, I haven't tried.
What's next:
* I spent like 12 hours figuring this all out, so I'm going to use the mobo as is for a few more days while I get the system fully built, then I'll replace it with another Z790 from someone else; hopefully I don't have as much of a pain with it. But upon further shopping, sadly, it looks like the Z790-P is the only board available locally for me that supports 64GB RAM sticks. All the other Z790 boards top out at 128-192GB of RAM.
* I've finished setting up Debian13 and Steam. Trying to get 4K120 working on my TV, but no luck with that yet, ugh.
* Setting up vLLM, Docker, ComfyUI, etc. Already have llama.cpp running, but would prefer a more solid/production type of setup.
* I started running some models, and qwen3-vl 235b in Q5/Q6 quants... | 2026-01-13T19:57:23 | https://www.reddit.com/gallery/1qc1isk | pfn0 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qc1isk | false | null | t3_1qc1isk | /r/LocalLLaMA/comments/1qc1isk/rtx_6000_pro_blackwell_wouldnt_post_on_msi_z790p/ | false | false | 15 | null | |
Fun and totally ridiculous video about MCP | 0 | We just put out a fun and totally ridiculous video about Agents and MCP. And yes, it's inspired by 90s workout videos. Thought you all might enjoy it. :)
Would love a share on social if you like it. | 2026-01-13T19:52:05 | https://youtu.be/Nejecji5XNQ | MostlyGreat | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1qc1dix | false | null | t3_1qc1dix | /r/LocalLLaMA/comments/1qc1dix/fun_and_totally_ridiculous_video_about_mcp/ | false | false | default | 0 | null |
Best local model / agent for coding, replacing Claude Code | 42 | I usually use Claude Code (Pro) for coding (Xcode / Swift etc). Are there any decent local agents / models which could be a replacement for it? I don't expect it to match the intelligence of Claude Code, but I quite like the terminal-based experience, and wonder if there's a system which nearly matches it. Just for when I've used up 100% of Claude plan.
Computer specs: MacBook Pro, M3 Pro chip, 36 GB RAM. | 2026-01-13T19:42:50 | https://www.reddit.com/r/LocalLLaMA/comments/1qc14cz/best_local_model_agent_for_coding_replacing/ | joyfulsparrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc14cz | false | null | t3_1qc14cz | /r/LocalLLaMA/comments/1qc14cz/best_local_model_agent_for_coding_replacing/ | false | false | self | 42 | null |
What actually breaks when AI agents move from demos into real production workflows | 0 | We have been building and evaluating agent-based systems in real production contexts, and one pattern keeps repeating.
The failures are rarely about model quality.
They tend to show up once workflows become multi-step and stateful: retries with side effects, partial execution, permission boundaries across tools, and the inability to answer “what exactly happened” after the fact.
A lot of this feels less like an AI problem and more like classic distributed systems failure modes, just amplified by agent autonomy and non-determinism.
I am curious how people here are handling execution control, auditability, and safe failure once agents are allowed to touch real systems.
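To make "execution control" concrete, the kind of minimal guardrail I have in mind looks roughly like this (a toy sketch, not taken from any framework; the file name and helper are made up):

```python
# Toy guardrail: every tool call gets an idempotency key and an append-only
# audit record, so retries do not repeat side effects and "what exactly
# happened" stays answerable after the fact.
import hashlib
import json
import time

AUDIT_LOG = "agent_audit.jsonl"
_completed: dict[str, object] = {}  # in-memory idempotency cache for the sketch

def run_tool(name, fn, **kwargs):
    key = hashlib.sha256(json.dumps([name, kwargs], sort_keys=True).encode()).hexdigest()
    if key in _completed:
        return _completed[key]  # retried step: reuse the old result, no second side effect
    result = fn(**kwargs)
    _completed[key] = result
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "tool": name,
                            "args": kwargs, "result": repr(result)}) + "\n")
    return result
```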
There is also a longer discussion in a different format for anyone interested: [https://news.ycombinator.com/item?id=46603800](https://news.ycombinator.com/item?id=46603800) | 2026-01-13T19:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qc0w13/what_actually_breaks_when_ai_agents_move_from/ | saurabhjain1592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc0w13 | false | null | t3_1qc0w13 | /r/LocalLLaMA/comments/1qc0w13/what_actually_breaks_when_ai_agents_move_from/ | false | false | self | 0 | null |
When an LLM-powered agent demo finally works... | 11 | And then someone asks: “it works for more than one user, right?”
This kept happening to us while playing with agent setups on top of LLMs, so we made [a silly parody video](https://youtu.be/Nejecji5XNQ) about that exact confidence spike — very hypey, very unserious, no technical walkthrough at all.
Just a joke about the moment before real users enter the picture. | 2026-01-13T19:31:30 | Ok-Classic6022 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qc0t2r | false | null | t3_1qc0t2r | /r/LocalLLaMA/comments/1qc0t2r/when_an_llmpowered_agent_demo_finally_works/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'r3miltdr46dg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=108&crop=smart&auto=webp&s=1c5adf0278ddbeb20b6e0847bc600a8113db4e33', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=216&crop=smart&auto=webp&s=0aa5dcce55d06ada6fd757a2b726db9ff60fac34', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=320&crop=smart&auto=webp&s=8748c917e71a5991da271d2cf08e283ec4c445f5', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=640&crop=smart&auto=webp&s=861330992b509b71c65ccea5be5b72f605e6fce7', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=960&crop=smart&auto=webp&s=77940287d2a1ee14cb772703b6350c5fadca11c4', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?width=1080&crop=smart&auto=webp&s=26646913093086f0a95e296030d520a1c4142cd6', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/r3miltdr46dg1.png?auto=webp&s=3bdcc8b78998cb92108e5bb65ce77486f76f00ec', 'width': 1200}, 'variants': {}}]} | |
The Quantization Threshold: Why 4-bit Llama 3 405B still outperforms FP16 70B for multi-step reasoning. | 0 | There’s a lot of debate about quantization loss, but after running some logic benchmarks, I’m convinced that "Model Size > Precision."
We ran a series of LSAT-style logic puzzles. The 405B model (quantized to 4-bit) maintained a 15% higher accuracy on multi-step deduction compared to the 70B at full FP16. This essentially means that for complex reasoning, we should stop worrying about bit-loss and start focusing on how to serve massive quants efficiently. What’s your experience with the reasoning degradation on GGUF vs EXL2 for the 405B? | 2026-01-13T19:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qc0rg5/the_quantization_threshold_why_4bit_llama_3_405b/ | Foreign-Job-8717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc0rg5 | false | null | t3_1qc0rg5 | /r/LocalLLaMA/comments/1qc0rg5/the_quantization_threshold_why_4bit_llama_3_405b/ | false | false | self | 0 | null |
In 2 months, no new OCR model competing with Hunyuan OCR, and the SOTA is still 1B and not usable in the EU...? | 0 | [https://www.youtube.com/watch?v=TOsLdlDwIZs](https://www.youtube.com/watch?v=TOsLdlDwIZs)
[https://www.youtube.com/watch?v=c6ZgQkMqR7s](https://www.youtube.com/watch?v=c6ZgQkMqR7s)
It's the best in class @ 1B parameters, whoever used it says it's incredible.... and it's not licensed in the EU due to our "nice"regulamentations. Do anything similar will ever appear? Seems all OCR research stopped there, And I want to convert to .md a lot of documents. | 2026-01-13T19:24:40 | https://www.reddit.com/r/LocalLLaMA/comments/1qc0m8n/in_2_months_no_new_ocr_model_competing_to_hunyuan/ | R_Duncan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qc0m8n | false | null | t3_1qc0m8n | /r/LocalLLaMA/comments/1qc0m8n/in_2_months_no_new_ocr_model_competing_to_hunyuan/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'khGrvB5h3rvWbmyQxxdruN5LjyBrcTjIVebqS57GTB8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/khGrvB5h3rvWbmyQxxdruN5LjyBrcTjIVebqS57GTB8.jpeg?width=108&crop=smart&auto=webp&s=d928ddc5177a9d131481c596ee54fdc6cd0384a0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/khGrvB5h3rvWbmyQxxdruN5LjyBrcTjIVebqS57GTB8.jpeg?width=216&crop=smart&auto=webp&s=ef96d5a1fdf0d596b5991c23d037a26445e05a3c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/khGrvB5h3rvWbmyQxxdruN5LjyBrcTjIVebqS57GTB8.jpeg?width=320&crop=smart&auto=webp&s=62c925abba8f5d7663cdd47c2a65d1d50bae7484', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/khGrvB5h3rvWbmyQxxdruN5LjyBrcTjIVebqS57GTB8.jpeg?auto=webp&s=ec5e3940cbdb54d1abb210fee9f73e5212f5a59f', 'width': 480}, 'variants': {}}]} |
I've developed a hypothetical model and would love to hear your critique.The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: | 0 | 2026-01-13T19:04:05 | https://www.reddit.com/gallery/1qc01qp | Erikqamalyan16 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qc01qp | false | null | t3_1qc01qp | /r/LocalLLaMA/comments/1qc01qp/ive_developed_a_hypothetical_model_and_would_love/ | false | false | 0 | null | ||
Built "Kong in the Loop" - using Ollama to catch Claude's hallucinations for $0 | 1 | [removed] | 2026-01-13T18:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qbzjyr/built_kong_in_the_loop_using_ollama_to_catch/ | Legitimate-Koala-389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbzjyr | false | null | t3_1qbzjyr | /r/LocalLLaMA/comments/1qbzjyr/built_kong_in_the_loop_using_ollama_to_catch/ | false | false | self | 1 | null |
Built "Kong in the Loop" - using Ollama to catch Claude's hallucinations for $0 | 1 | [removed] | 2026-01-13T18:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qbzfn2/built_kong_in_the_loop_using_ollama_to_catch/ | Legitimate-Koala-389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbzfn2 | false | null | t3_1qbzfn2 | /r/LocalLLaMA/comments/1qbzfn2/built_kong_in_the_loop_using_ollama_to_catch/ | false | false | self | 1 | null |
Looking for ChatGPT-5.2-codex-like planning model in OpenCode | 1 | I'm using opencode with the oh-my-opencode plugin, and usually spend 2-5 hours planning before any code is actually written.
ChatGPT-5.2-codex has been brilliant for the specific purpose of planning -- it asks all the right questions, trusts my engineering knowledge, and seems to really keep to the exact specification that I lay out without deviating -- and when it does deviate, it's easy to get back on track. The questions it asks funnel very well from general to specific, and take into account my previous responses. Compared to Opus, it asks far more open-ended questions, which helps out a lot.
When I try to place GLM-4.7 in the same workflow, it just fails to ask questions and tries to do its own thing -- kind of similar to Opus in some ways -- it's really agentic but I'm not a vibe-coder, so this isn't what I'm looking for, especially when planning.
I've figured out at this point that benchmarks don't really mean much for whether a model will fit this specific purpose -- it seems like Opus performs quite a lot worse than GPT in the planning phase, even though it should do better according to benchmarks.
Does anyone have an open model that they like using for this purpose? I was thinking about using Deepseek V3.2, but I noticed that there isn't really a "subscription"-type plan for it, and I'm a bit worried about using API credits. (I could also technically run the model locally, but it would be a bit slow, since it would be running on DDR5 RDIMM) | 2026-01-13T18:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qbzc0o/looking_for_chatgpt52codexlike_planning_model_in/ | Hoak-em | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbzc0o | false | null | t3_1qbzc0o | /r/LocalLLaMA/comments/1qbzc0o/looking_for_chatgpt52codexlike_planning_model_in/ | false | false | self | 1 | null |
Owners, not renters: Mozilla's open source AI strategy | 88 | 2026-01-13T18:34:22 | https://blog.mozilla.org/en/mozilla/mozilla-open-source-ai-strategy/ | NelsonMinar | blog.mozilla.org | 1970-01-01T00:00:00 | 0 | {} | 1qbz7h6 | false | null | t3_1qbz7h6 | /r/LocalLLaMA/comments/1qbz7h6/owners_not_renters_mozillas_open_source_ai/ | false | false | default | 88 | {'enabled': False, 'images': [{'id': 'eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=108&crop=smart&auto=webp&s=1347be04a24ea4cb6c83216fcd7940c060b8b8a6', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=216&crop=smart&auto=webp&s=31b01fdceab56f6ccf829fa26232ada8c153a79f', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=320&crop=smart&auto=webp&s=3815ccad211e1cb8bfbe6825677a8e51300c25e6', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=640&crop=smart&auto=webp&s=272d58abf490660743722caac57b754d523d0b9f', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=960&crop=smart&auto=webp&s=ad56af855f04ddb80c54f0c6ad2c341b9216de26', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?width=1080&crop=smart&auto=webp&s=8a4a3020f3a12cf5653d4408727bc03dbbe9744f', 'width': 1080}], 'source': {'height': 853, 'url': 'https://external-preview.redd.it/eBhDV53Bx2pdf58HHsmIDWpPzti_SmsXMDBh7hzdnLA.png?auto=webp&s=c39efb30790ce1507c8373a06460ae56ddb259f9', 'width': 1280}, 'variants': {}}]} | |
What would you do with ONE Dell Pro Max GB10 (DGX Spark class box) in a Copilot heavy org? | 1 | Hi
My company bought me a Dell Pro Max with GB10 (DGX Spark class). We are a few hundred employees and we already use Microsoft Copilot a lot because we have M365 Enterprise. So I am trying to be realistic about what a single on prem GPU box is actually good for, instead of building a toy.
What I want is simple: a useful, safe, low maintenance setup that complements Copilot.
My main questions:
- Can I run a clean “internal ChatGPT” style UI (Open WebUI or similar) on this and point it at a local inference server?
- What inference stack would you pick on this kind of system (Ollama vs vLLM vs SGLang vs TensorRT-LLM), if the goal is stability and easy ops?
- Which models are a good starting point for general chat on a single box? I was thinking Mistral or Qwen3, but I am open.
- If I want the assistant to be current, is the right pattern “tool use with web search” (grounding) instead of trying to fine tune?
- If I do that, what is the safest architecture so it can browse the web but still keep internal PDF Q&A fully local?
- If the PDFs are internal/confidential, what are the non negotiable controls you would put in place? (network isolation, auth/SSO, RBAC, audit logs, encryption at rest, backups/retention)
- Any gotchas with web UIs like Open WebUI in enterprise settings? I saw there was at least one security issue around connecting to external model servers, so I want to avoid obvious footguns.
Constraints / goal
- Only one box for now, not a cluster.
- I want high ROI and low drama. Something I can demo internally and maybe expand later.
If you had this hardware what are the top 2 to 3 use cases you would prioritize and what would you avoid? | 2026-01-13T17:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qby242/what_would_you_do_with_one_dell_pro_max_gb10_dgx/ | nofuture09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qby242 | false | null | t3_1qby242 | /r/LocalLLaMA/comments/1qby242/what_would_you_do_with_one_dell_pro_max_gb10_dgx/ | false | false | self | 1 | null |
Thoughts on this AI computer? 80GB RAM for $1399 vs. DIY build. | 11 | I want to get a machine running to handle my daily personal and professional agent workflows and I spotted Tiiny AI PC from CES. They have an early bird price of 1399 bucks. Specs are 80GB LPDDR5X RAM & 1TB SSD storage. The price/RAM ratio seems better compared to things like the HP Z2 or DGX Spark. Also it's very small and fits in a pocket and that's why I like it.
But as a beginner, I am not sure if 80GB is enough for 120B models. Should I grab this or just build a custom rig? I would love some honest advice on the value here:) | 2026-01-13T17:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qbxppx/thoughts_on_this_ai_computer_80gb_ram_for_1399_vs/ | randomweeb9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbxppx | false | null | t3_1qbxppx | /r/LocalLLaMA/comments/1qbxppx/thoughts_on_this_ai_computer_80gb_ram_for_1399_vs/ | false | false | self | 11 | null |
Hmm? | 74 | 2026-01-13T17:17:38 | Altruistic_Heat_9531 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qbx0z3 | false | null | t3_1qbx0z3 | /r/LocalLLaMA/comments/1qbx0z3/hmm/ | false | false | 74 | {'enabled': True, 'images': [{'id': 'LspP7q-yNjLJFLCvDM1THrN5y2gSJmBF7_KveLRUmto', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/j6odh0s0h5dg1.png?width=108&crop=smart&auto=webp&s=9898f3d9600f29afb8ad6ee377aba45f0c11e58c', 'width': 108}, {'height': 351, 'url': 'https://preview.redd.it/j6odh0s0h5dg1.png?width=216&crop=smart&auto=webp&s=495753011c3115f90e2b7957a79e06bc138e9009', 'width': 216}, {'height': 520, 'url': 'https://preview.redd.it/j6odh0s0h5dg1.png?width=320&crop=smart&auto=webp&s=fc2316241924137a5133989774b48d3f91a3731b', 'width': 320}], 'source': {'height': 650, 'url': 'https://preview.redd.it/j6odh0s0h5dg1.png?auto=webp&s=0fb1d35bb8c2e5b057c8a3689094c5b0e372190d', 'width': 400}, 'variants': {}}]} | |||
I’m offering free automation in exchange for a testimonial | 0 | Hi everyone!
I have experience building automations and working with businesses and agencies. I’ve already built a few automation systems for different brands.
I want to take this more seriously, so I’m offering to build a free MVP automation for you.
In return, I only ask for an honest testimonial if you’re happy with the result.
What tasks or processes are you struggling to automate?
What would you like to automate so you don’t have to think about it anymore?
Please reach out only if you’re serious.
Thank you! | 2026-01-13T17:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/1qbws0b/im_offering_free_automation_in_exchange_for_a/ | Due-Poem-7501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbws0b | false | null | t3_1qbws0b | /r/LocalLLaMA/comments/1qbws0b/im_offering_free_automation_in_exchange_for_a/ | false | false | self | 0 | null |
Are my system specs right for Ollama Qwen 2.5 (3B)? | 0 | So far I haven't used any LLM locally on my machine and I want to explore this, so I thought of installing a Qwen 2.5 based model (3B) with Ollama on Linux. Will this work properly on my machine?
specs:
ram: 12gb
ssd: 512gb | 2026-01-13T17:05:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qbwoxb/system_specs_is_right_or_not_for_ollama_qween_25/ | Jinkaza772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbwoxb | false | null | t3_1qbwoxb | /r/LocalLLaMA/comments/1qbwoxb/system_specs_is_right_or_not_for_ollama_qween_25/ | false | false | self | 0 | null |
I've developed a hypothetical model and would love to hear your critique.The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: | 0 | stet atmet' that this is philosophy | 2026-01-13T17:05:15 | https://www.reddit.com/gallery/1qbwoft | erikqamalyan14 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qbwoft | false | null | t3_1qbwoft | /r/LocalLLaMA/comments/1qbwoft/ive_developed_a_hypothetical_model_and_would_love/ | false | false | 0 | null | |
Help: Create a fully autonomous browser agent? | 0 | What do you think? Is this possible?
\*AI slop below\*
Create a **fully autonomous web crawler** where your **local LLM is the brain**. The LLM "sees" web pages via text extraction, understands all links, decides which to follow, reads content, and keeps exploring until it finds sufficient information — or backtracks to try a different approach.
**100% Local** — No external vision APIs. Uses your existing LLM infrastructure.
┌─────────────────────────────────────────────────────────────────────┐
│ USER QUESTION │
│ "What are the latest NFL playoff scores?" │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ LLM WEB CRAWLER │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 1. LLM thinks: "I need NFL scores, let me search Google" │ │
│ │ 2. Navigate to google.com │ │
│ │ 3. 📄 Extract page → Text "view" of page + all links │ │
│ │ 4. LLM sees: [search box], decides to type query │ │
│ │ 5. Execute: type "NFL playoff scores January 2026" │ │
│ │ 6. 📄 Extract search results page │ │
│ │ 7. LLM sees 10 results, analyzes each: │ │
│ │ - ESPN.com/nfl/scores ← "This looks official, follow" │ │
│ │ - reddit.com/r/nfl ← "Skip, want official source" │ │
│ │ 8. Navigate to ESPN, extract page │ │
│ │ 9. LLM reads content: "Bills 27, Chiefs 24... found it!" │ │
│ │ 10. LLM thinks: "Do I have ALL scores? Let me check..." │ │
│ │ 11. LLM: "Missing NFC game, I see link to 'Full Scores'" │ │
│ │ 12. Follow link, extract more data │ │
│ │ 13. LLM: "Now I have everything. Done." │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ OR if stuck: │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 7. LLM reads ESPN: "Page shows old scores from December" │ │
│ │ 8. LLM thinks: "Wrong data. Go back, try different search" │ │
│ │ 9. Return to Google, search "NFL scores today January 13" │ │
│ │ 10. Try CBS Sports instead... │ │
│ └─────────────────────────────────────────────────────────────┘ │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ SYNTHESIZED ANSWER │
│ "Here are the NFL playoff scores from January 12-13, 2026: │
│ AFC: Bills 27, Chiefs 24 │
│ NFC: Eagles 31, Lions 28 │
│ Source: ESPN.com (visited), CBS Sports (verified)" │
└─────────────────────────────────────────────────────────────────────┘
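A minimal sketch of the loop I have in mind, assuming a local OpenAI-compatible server plus requests/BeautifulSoup (the model name, port, and JSON reply protocol are all made up for illustration):

```python
# Rough sketch of the crawl loop: extract text + links, let the local LLM pick
# the next action, stop when it says it has the answer. Model name, port and
# the JSON reply format are illustrative assumptions, not a real protocol.
import json
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def extract(url):
    """Fetch a page and return (visible text, absolute links)."""
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)[:6000]  # keep the prompt small
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)][:40]
    return text, links

def crawl(question, start_url, max_steps=10):
    url = start_url
    for _ in range(max_steps):
        text, links = extract(url)
        reply = llm.chat.completions.create(
            model="local-model",
            messages=[{"role": "user", "content":
                f"Question: {question}\nPage text: {text}\nLinks: {links}\n"
                'Answer ONLY with JSON: {"done": true, "answer": "..."} if this page '
                'answers it, or {"done": false, "next_url": "..."} to follow a link.'}],
        ).choices[0].message.content
        step = json.loads(reply)  # a robust version would retry on bad JSON
        if step.get("done"):
            return step["answer"]
        url = step["next_url"]
    return "Gave up after max_steps without a confident answer."
```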
| 2026-01-13T16:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qbwi1x/help_create_a_fully_autonomous_browser_agent/ | Fabulous_Fact_606 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbwi1x | false | null | t3_1qbwi1x | /r/LocalLLaMA/comments/1qbwi1x/help_create_a_fully_autonomous_browser_agent/ | false | false | self | 0 | null |
I'm building a real-life BMO with a Raspberry Pi 5 (Mistral/OpenAI + YOLO11n) | 5 | https://reddit.com/link/1qbwa6p/video/mz3l26yka5dg1/player
GitHub Repo: [https://github.com/ivegotanheadache/BMO](https://github.com/ivegotanheadache/BMO)
Hi! A few months ago I posted about building a Voice Assistant on Raspberry Pi 5. Because of university, I couldn't update the project for a while, but now it’s almost finished! It’s now a full AI companion with object recognition (YOLO11n). I’m also working on face and voice recognition, so he can play games with you, and I plan to add robotic arms in the future.
I hope you like it! All the faces were drawn by me. I’ll be adding more emotions and the canon green color soon. Right now it’s pink because my case is pink… lol | 2026-01-13T16:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qbwa6p/im_building_a_reallife_bmo_with_a_raspberry_pi_5/ | Strange-Dimension675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbwa6p | false | null | t3_1qbwa6p | /r/LocalLLaMA/comments/1qbwa6p/im_building_a_reallife_bmo_with_a_raspberry_pi_5/ | false | false | 5 | {'enabled': False, 'images': [{'id': '15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=108&crop=smart&auto=webp&s=f61c63392b62a63942a2b3979086747d1e307d84', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=216&crop=smart&auto=webp&s=db1ba5880d22d72e77e4651bc0b049b27392f830', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=320&crop=smart&auto=webp&s=41c6fe535e78e7637aae05bce8ed4e047300ac68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=640&crop=smart&auto=webp&s=366ff75d318aee28970ee94f0150fafb0328d57f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=960&crop=smart&auto=webp&s=dd578b9b94ffcdae7d80af5e8fe0f8396e4942a6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?width=1080&crop=smart&auto=webp&s=75476da84bf2f1e1ee17867a6c4f24e8ab3d3b32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/15cip32FsxzJbAkbN7pLJbleZKMUqqLx3FNfLtbiLgI.png?auto=webp&s=1d720a2e9f96006701bbf7c42bb19abb2a88ee52', 'width': 1200}, 'variants': {}}]} | |
Apple/ Google deal | 0 | Is anyone else seeing the huge issue with Apple and Google's Siri deal? Apple (whose big thing has always been privacy) just gave all of your voice requests to a company that is built on sharing all of your data. Siri now lives on their servers. That's why local AI is becoming less of a nicety and needs to be more of a standard. Anyone else building or using local alternatives? | 2026-01-13T16:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qbw52u/apple_google_deal/ | Brave-Ear-4429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbw52u | false | null | t3_1qbw52u | /r/LocalLLaMA/comments/1qbw52u/apple_google_deal/ | false | false | self | 0 | null |
I'm building a real-life BMO with a Raspberry Pi 5 (Mistral/OpenAI + YOLO11n) | 1 | https://reddit.com/link/1qbw4wi/video/1altrf5d95dg1/player
* GitHub repo: [https://github.com/ivegotanheadache/BMO/](https://github.com/ivegotanheadache/BMO/)
Hi!A few months ago I posted about building a Voice Assistant on Raspberry Pi 5. Because of university, I couldn't update the project for a while, but now it’s almost finished!
It’s now a full AI companion with object recognition (YOLO11n).I’m also working on face and voice recognition, so he can play games with you, and I plan to add robotic arms in the future.
I hope you like it! Also, all the faces were drawn by me. I’ll be adding more emotions and the canon green color soon. Right now it’s pink because my case is pink… lol | 2026-01-13T16:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qbw4wi/im_building_a_reallife_bmo_with_a_raspberry_pi_5/ | Strange-Dimension675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbw4wi | false | null | t3_1qbw4wi | /r/LocalLLaMA/comments/1qbw4wi/im_building_a_reallife_bmo_with_a_raspberry_pi_5/ | false | false | self | 1 | null |
My wishes for 2026 | 606 | Which do you think will happen first? And which won’t happen in 2026? | 2026-01-13T16:35:06 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qbw325 | false | null | t3_1qbw325 | /r/LocalLLaMA/comments/1qbw325/my_wishes_for_2026/ | false | false | default | 606 | {'enabled': True, 'images': [{'id': '8knck5zv85dg1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?width=108&crop=smart&auto=webp&s=12e163a7219c7e82329afe6373cb4fc8bf180992', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?width=216&crop=smart&auto=webp&s=5f2bfa2ead028d9770f650da50cee4239799515e', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?width=320&crop=smart&auto=webp&s=09ea308ad6c77ab753b54aeb4feed029abced489', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?width=640&crop=smart&auto=webp&s=6a8be13989bebb31b688873f7197d169cb43651e', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?width=960&crop=smart&auto=webp&s=00f6a2e7886f72903c9630549e34f79799e28d24', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/8knck5zv85dg1.png?auto=webp&s=cd17756b634ddfca35123d15ec6df76cbfef4cfa', 'width': 1024}, 'variants': {}}]} | |
Thoughts sharing | 1 | I was thinking: even after using LLMs, it's not like development is 10x. Yes, it's certainly faster now if you know debugging and syntax and can read code quickly, and even if you are doing complete vibe coding you still need some basic knowledge of the database, the applications, and the tech stack you are going to use. Depending on the complexity, it mostly takes me weeks to 1-2 months to complete a good working application, so I was wondering, is that common or am I actually very slow?? | 2026-01-13T16:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qbvroi/thoughts_sharing/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbvroi | false | null | t3_1qbvroi | /r/LocalLLaMA/comments/1qbvroi/thoughts_sharing/ | false | false | self | 1 | null |
Open-source JAX library for coupled oscillator networks and dynamical systems analysis | 6 | I just released OscNet - a framework for building neural networks based on oscillatory dynamics.
Focus is on classical dynamical systems (Kuramoto, FitzHugh-Nagumo, Stuart-Landau) as computational primitives, with tools for:
- Stability and bifurcation analysis
- Floquet multipliers
- Edge-of-chaos detection
- Various coupling topologies (hierarchical, power-law)
Built on JAX/Equinox for differentiable training.
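For anyone who has not met these systems before, the Kuramoto model at the core is tiny; a rough single Euler step in plain JAX (illustrative only, not OscNet's API) looks like this:

```python
# One Euler step of the classic Kuramoto model: N phase oscillators with
# natural frequencies `omega` and all-to-all coupling strength K.
import jax.numpy as jnp

def kuramoto_step(theta, omega, K, dt=0.01):
    # pairwise phase differences theta_j - theta_i, summed over j
    coupling = jnp.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / theta.shape[0]) * coupling)
```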
Blog: [https://samim.io/p/2026-01-07-oscnet/](https://samim.io/p/2026-01-07-oscnet/)
Code: [https://github.com/samim23/oscnet](https://github.com/samim23/oscnet)
| 2026-01-13T15:50:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qbuuxp/opensource_jax_library_for_coupled_oscillator/ | samim23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbuuxp | false | null | t3_1qbuuxp | /r/LocalLLaMA/comments/1qbuuxp/opensource_jax_library_for_coupled_oscillator/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'rijCiQxgNGUhMBS7_njiv9AfVt2s0vifNo6feA4BXeQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/rijCiQxgNGUhMBS7_njiv9AfVt2s0vifNo6feA4BXeQ.jpeg?width=108&crop=smart&auto=webp&s=41fa71db820779e42bcba110d2b379bebff9338d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/rijCiQxgNGUhMBS7_njiv9AfVt2s0vifNo6feA4BXeQ.jpeg?width=216&crop=smart&auto=webp&s=46dc6e4726155defdc0361449d0d87c7862b850b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/rijCiQxgNGUhMBS7_njiv9AfVt2s0vifNo6feA4BXeQ.jpeg?width=320&crop=smart&auto=webp&s=d1a81614d600709bcb93c62ebfefc97a5b4bafb4', 'width': 320}], 'source': {'height': 378, 'url': 'https://external-preview.redd.it/rijCiQxgNGUhMBS7_njiv9AfVt2s0vifNo6feA4BXeQ.jpeg?auto=webp&s=acd90e1d37ff7f3a70267159a9490b53b66ca520', 'width': 378}, 'variants': {}}]} |
Best OCR for making an epub out of photographs of book pages? | 0 | I am looking to digitize a book that I own for personal use. **What OCR model will have the best results for turning photographs of book pages into an epub?** I have a 3090 and 96gb ram to use. | 2026-01-13T15:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qbuhvn/best_ocr_for_making_an_epub_out_of_photographs_of/ | GotHereLateNameTaken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbuhvn | false | null | t3_1qbuhvn | /r/LocalLLaMA/comments/1qbuhvn/best_ocr_for_making_an_epub_out_of_photographs_of/ | false | false | self | 0 | null |
LFM 2.5 1.2b IS FAST | 38 | So I recently saw the 1.4GB model by Liquid and decided to give it a go; at that size it could run on a Pi, maybe not fast, but it's small enough. For context, I ran this on my desktop in LM Studio on a 5090 with 192GB RAM and asked it "What can you do?" Here was the output:
https://preview.redd.it/5y7lb7a0w4dg1.png?width=964&format=png&auto=webp&s=8684757df67f09ee88b27e83a7cd45aa7426ea6d
Output was 578.01 tok/s for 389 tokens, in 0.08s. That was FAST... compared to other 1B and 2B models I have tried recently, the max I was getting was in the 380s for about 0.5 of a second.
Of note, yes, I have checked, because I know people will ask: no, it is not UNCENSORED. I tried the standard questions like stealing a car and such, and its response was "I cannot assist with that type of information", which is perfectly fine. At that speed and size I could see this model being a handy little RAG model for an embedded device.
Anyone tried anything on it themselves yet? | 2026-01-13T15:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qbuchh/lfm_25_12b_is_fast/ | TheyCallMeDozer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbuchh | false | null | t3_1qbuchh | /r/LocalLLaMA/comments/1qbuchh/lfm_25_12b_is_fast/ | false | false | 38 | null | |
Text summaries | 1 | What LLMs are good for text summaries at the moment?
Are there any good frameworks or github repos in this area?
Are there good techniques beyond hierarchical summary-of-summary or grounded-summarisation? | 2026-01-13T15:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/1qbtwp8/text_summaries/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbtwp8 | false | null | t3_1qbtwp8 | /r/LocalLLaMA/comments/1qbtwp8/text_summaries/ | false | false | self | 1 | null |
Local LLM setup for education. | 1 | [removed] | 2026-01-13T15:14:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qbtwn9/local_llm_setup_for_education/ | Asillatem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbtwn9 | false | null | t3_1qbtwn9 | /r/LocalLLaMA/comments/1qbtwn9/local_llm_setup_for_education/ | false | false | self | 1 | null |
There's more than Python - we need more trained models and Benchmarks for Typescript and other major languages | 0 | Sorry, I'm emotional right now. More and more models are now released in less and less time. They all seem to be amazing at first glance and looking at the benchmarks, but - COME ON, it seems they're all trained mainly on Python, benchmaxxed for benchmarks based on Python. Like, Python is the only major "coding" language on earth. I understand that most ppl working in AI stick to Python, and I'm totally fine with that, but they shouldn't assume everybody else is, too :D
Please don't take this as an entitled request. Just look at [https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/](https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/)
TLDR: "for the first time, TypeScript overtook both Python and JavaScript in August 2025 to become the most used language on GitHub, reflecting how developers are reshaping their toolkits. This marks the most significant language shift in more than a decade.". I'm a TS SWE, so I'm biased. Of course if I had to choose I'd humbly asked to at least train on Python and Typescript. But C#, C++, even Go also deserve to be addressed.
And I don't understand it: RL should be SO EASY given all the tooling around TypeScript (again, talking about TypeScript here as that's my business): we have eslint (with TS rules), JSDoc, and vitest, which all give us deterministic harnesses (sorry, not a native speaker).
So please, if anyone reads that, think about it. Pretty please!
| 2026-01-13T15:09:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qbts5v/theres_more_than_python_we_need_more_trained/ | Firm_Meeting6350 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbts5v | false | null | t3_1qbts5v | /r/LocalLLaMA/comments/1qbts5v/theres_more_than_python_we_need_more_trained/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=108&crop=smart&auto=webp&s=57c06ef49648131d674bfe0f915a06d0fadba55a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=216&crop=smart&auto=webp&s=e384ba0efd8215a9120a7afccb44fc1db15e8b1f', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=320&crop=smart&auto=webp&s=801d23dfa6f5c473a1d768f71b3eebac6f8dcd13', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=640&crop=smart&auto=webp&s=925c1f61a2c787d8b42aa0684fdc7226c9a80bdc', 'width': 640}, {'height': 520, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=960&crop=smart&auto=webp&s=0d503a0161ac15ca0c1885c7f61a43a17d56e080', 'width': 960}, {'height': 585, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?width=1080&crop=smart&auto=webp&s=ef3db03a31c14565840adfff8766f47b33c1aa33', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/_gunGj1o6zzPuwsqsG4KwbYyvQgC9BEsm87qzZRcr2k.png?auto=webp&s=94db05ad7e06024537a17c7437ee7a7ae8991027', 'width': 2400}, 'variants': {}}]} |
[CPU] I'm looking for the best model for a CPU. | 7 | Hello.
Basically, I have a problem :D
I work for a company that potentially wants AI (we'll see if it's realistic). I asked for an AMD Halo Strix machine, but the company prefers to save money (because it does). Instead, I got a server with two 10-core processors (20 threads each), a total of 40 threads, and over 700GB of RAM, and that's with virtualization...
I want to find an AI model that is as intelligent as possible, but also fast.
I've tested many models (and I'm happy to check out the ones you recommend).
I think GPT-OSS 120B works quite well, generating 7 tokens per second (approximately).
Gemma 3n E4B generates faster, at over 11, but looking at the number of parameters, I suspect it will be significantly weaker.
I was wondering if any of you have tested different models and can recommend one. I tried various ones, even as large as the Mistral Large 3, but it worked at 1 token per second, and of course there are applications where such AI can run on the CPU, e.g., XD automation. But I would like a model that is quite good in terms of performance and quality, which could be offered as a proof-of-concept in applications (maybe this will allow me to raise funds for better machines...). | 2026-01-13T15:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qbtn6j/cpu_im_looking_for_the_best_model_for_a_cpu/ | lordfervi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbtn6j | false | null | t3_1qbtn6j | /r/LocalLLaMA/comments/1qbtn6j/cpu_im_looking_for_the_best_model_for_a_cpu/ | false | false | self | 7 | null |
I've developed a hypothetical model and would love to hear your critique.The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: | 0 | stet atmet' that this is philosophy | 2026-01-13T14:56:20 | https://www.reddit.com/gallery/1qbtfxc | Erikqamalyan3 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qbtfxc | false | null | t3_1qbtfxc | /r/LocalLLaMA/comments/1qbtfxc/ive_developed_a_hypothetical_model_and_would_love/ | false | false | 0 | null | |
Local server | 0 | I set up a local server on Linux, but was not able to access it from my Mac on the same network. So far I have tried Jan AI and LM Studio; both didn't work. On the other hand, I tried oobabooga and it was so simple: just download it and open it with --listen, and I was able to access the server from my Mac. Is there any other app similar to oobabooga, or is oobabooga enough? | 2026-01-13T14:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1qbt02q/local_server/ | pravbk100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbt02q | false | null | t3_1qbt02q | /r/LocalLLaMA/comments/1qbt02q/local_server/ | false | false | self | 0 | null |
Extracting technical docs with mixed content - what's working for you? | 3 | Probably asked multiple times in somewhat similar cases, but I have a bit of a complicated scenario here:
I have a couple hundred technical training documents, mostly PDFs but also presentations, Word files, etc.
The text-only ones are easy to convert into markdown, but the ones in a hybrid format (text + screenshots + arrows pointing at things + tables and such) are a pain in my butt to extract. When I use text extraction only, I lose all of this information; when I use OCR tools like docling, markitdown, etc., they capture the tables and formulas, but the screenshots are still missing.
I set up a hand-crafted benchmark to test some approaches and compare them (think table names, codes, etc.) in terms of recall and precision.
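For clarity, the scoring itself is nothing fancy, roughly this kind of helper (my own toy code, not from any library):

```python
# Toy precision/recall of extracted items (table names, codes, ...) against
# a hand-labelled ground-truth list for one document.
def score_extraction(extracted_items, expected_items):
    ex = {s.lower() for s in extracted_items}
    gt = {s.lower() for s in expected_items}
    hits = ex & gt
    precision = len(hits) / len(ex) if ex else 0.0
    recall = len(hits) / len(gt) if gt else 0.0
    return precision, recall
```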
I am stuck between PaddlePaddle, DeepSeek, and maybe some API calls to big models (ikr). What is the current SOTA these days for keeping most of the semantic relations while keeping precision and recall against the ground-truth document? Any tips and tricks that worked for you? | 2026-01-13T14:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qbsukp/extracting_technical_docs_with_mixed_content/ | missing-in-idleness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbsukp | false | null | t3_1qbsukp | /r/LocalLLaMA/comments/1qbsukp/extracting_technical_docs_with_mixed_content/ | false | false | self | 3 | null |
I've developed a hypothetical model and would love to hear your critique.The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: | 0 | stet atmet' that this is philosophy | 2026-01-13T14:30:05 | https://www.reddit.com/gallery/1qbssl0 | Erikqamalyan12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qbssl0 | false | null | t3_1qbssl0 | /r/LocalLLaMA/comments/1qbssl0/ive_developed_a_hypothetical_model_and_would_love/ | false | false | 0 | null | |
I've developed a hypothetical model and would love to hear your critique.The RI Model (Index Resonance) is a philosophical and mathematical framework describing the "source code" of the Universe. RI explains how data is processed "under the hood" of existence. Presenting the 9 Fundamental Laws: | 1 | stet atmet' that this is philosophy | 2026-01-13T14:28:37 | https://www.reddit.com/gallery/1qbsr8a | Erikqamalyan12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qbsr8a | false | null | t3_1qbsr8a | /r/LocalLLaMA/comments/1qbsr8a/ive_developed_a_hypothetical_model_and_would_love/ | false | false | 1 | null | |
Fashion Virtual Try-On Workflow | 1 | [removed] | 2026-01-13T14:28:20 | horizon_echo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qbsqzp | false | null | t3_1qbsqzp | /r/LocalLLaMA/comments/1qbsqzp/fashion_virtual_tryon_workflow/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'w8gr7a8tm4dg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=108&crop=smart&auto=webp&s=959540e3f9a9f6918de2197a349b987fbd010b2a', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=216&crop=smart&auto=webp&s=277d424aaa0dce451136297b312971043c548ad0', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=320&crop=smart&auto=webp&s=90cd3377c32c8896191a341a3e994dcf5c062241', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=640&crop=smart&auto=webp&s=d0a1567ae388f63ced79ba3a443b12b9a2df2a22', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=960&crop=smart&auto=webp&s=2d32296b7cb1c9e4ab39799fda397c8f8a4f1711', 'width': 960}, {'height': 602, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?width=1080&crop=smart&auto=webp&s=5176e66e05e432c19a9aa937dd842e7b1ad3c043', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/w8gr7a8tm4dg1.jpeg?auto=webp&s=446f6243b0d6f7cbcc04d7424d3273fcccc4b41d', 'width': 1376}, 'variants': {}}]} | |
Fine-tuning Qwen-3-VL for object coordinate detection | 3 | I’m trying to fine-tune Qwen-3-VL-8B-Instruct for object keypoint detection, and I’m running into serious issues.
Back in August, I managed to do something similar with Qwen-2.5-VL, and while it took some effort, it did work. One reliable signal back then was the loss behavior:
If training started with a high loss (e.g., ~100+) and steadily decreased, things were working.
If the loss started low, it almost always meant something was wrong with the setup or data formatting.
With Qwen-3-VL, I can’t reproduce that behavior at all. The loss starts low and stays there, regardless of what I try.
So far I’ve:
Tried Unsloth
Followed the official Qwen-3-VL docs
Experimented with different prompts / data formats
Nothing seems to click, and it’s unclear whether fine-tuning is actually happening in a meaningful way.
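To make "data format" concrete, this is roughly the shape of sample I mean, with the keypoints emitted as JSON text in the assistant turn (field names and the pixel-coordinate convention are my own, not anything official from the Qwen docs):

```python
# One conversation-style training sample (illustrative layout, my own field
# names / coordinate convention, not an official Qwen-3-VL format).
sample = {
    "messages": [
        {"role": "user", "content": [
            {"type": "image", "image": "images/board_0001.jpg"},
            {"type": "text", "text": "Return the connector keypoints as JSON."},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text":
                '{"keypoints": [{"name": "pin_1", "x": 412, "y": 233},'
                ' {"name": "pin_2", "x": 468, "y": 231}]}'},
        ]},
    ]
}
```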
If anyone has successfully fine-tuned Qwen-3-VL for keypoints (or similar structured vision outputs), I’d really appreciate it if you could share:
Training data format
Prompt / supervision structure
Code or repo
Any gotchas specific to Qwen-3-VL
At this point I’m wondering if I’m missing something fundamental about how Qwen-3-VL expects supervision compared to 2.5-VL.
Thanks in advance 🙏 | 2026-01-13T14:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qbsdm4/finetuning_qwen3vl_for_object_coordinate_detection/ | Due_Veterinarian5820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbsdm4 | false | null | t3_1qbsdm4 | /r/LocalLLaMA/comments/1qbsdm4/finetuning_qwen3vl_for_object_coordinate_detection/ | false | false | self | 3 | null |
How possible is this project idea? | 2 | Hello!
I'm relatively new to diving into this space, but I am quite intrigued with the capabilities and developments in the AI space.
I'm currently running a local instance of Gemma3 27B with a custom system prompt to play a character, and I'm trying to expand on that. It's intended to be a conversation-focused experience with some tool use, think scifi hologram AI like cortana.
My achievable end-state would be a local instance with some form of "learning" or "evolution" potential, at the very least some workflow that could adjust itself outside of a single chat in order to improve responses based on user "approval" or "praise".
My ideal end state would be an integrated workflow that allows for machine vision, speech processing and response, and a rigged visual model with real-time motion and actions in tune with the voice and text output, like those hologram AI assistants being advertised by Razer, but with the privacy and customization of local models. This would obviously be a crazy ambitious moonshot and very likely isn't achievable, but I figured I'd list it anyway.
I've done some research and acquired some hardware (RTX6k blackwell arriving this week, 7900xtx and 5060 on hand for now).
I'm open to cloud options or proprietary things if they're secure enough; I just really don't like the idea of personal interactions being used for broad-dispersion and training.
I also don't expect this to be a simple or cheap thing (if it's even a possible thing right now). I just want to find resources, information and tools that might help me work towards those desired end states.
Any and all advice, reality-checks or opinions are welcome! thanks in advance! | 2026-01-13T14:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qbsbkw/how_possible_is_this_project_idea/ | Polymorphic-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbsbkw | false | null | t3_1qbsbkw | /r/LocalLLaMA/comments/1qbsbkw/how_possible_is_this_project_idea/ | false | false | self | 2 | null |
Built a security layer for self-hosted RAG - filters at the vector DB level, not after retrieval | 0 | If you're running RAG locally with multiple users or document access levels, you've probably hit this problem: most implementations filter documents after retrieval. But by then, the unauthorized content has already been exposed to the retrieval layer.
I built RAGGuard to solve this. It translates permission policies into native vector DB filters, so unauthorized documents are never retrieved in the first place.
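To make that concrete, here is a rough sketch of the same pre-retrieval pattern written directly against qdrant-client rather than through RAGGuard (the "allowed_groups" payload field is just an example of what you would write at ingestion time):

```python
# Sketch of permission filtering inside the vector store itself: the ACL is a
# payload filter, so unauthorized chunks are never returned to the RAG layer.
from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, MatchAny

client = QdrantClient(url="http://localhost:6333")

def search_as_user(query_vector, user_groups):
    acl = Filter(must=[FieldCondition(key="allowed_groups",
                                      match=MatchAny(any=user_groups))])
    return client.search(
        collection_name="docs",
        query_vector=query_vector,
        query_filter=acl,   # applied by Qdrant before anything leaves the store
        limit=5,
    )
```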
Works with:
- ChromaDB, Qdrant, pgvector, Milvus, Weaviate + 9 more
- Any auth system (OPA, Cerbos, OpenFGA, or your own RBAC)
- LangChain, LlamaIndex, LangGraph
Fully open source (Apache 2.0):
[https://github.com/maximus242/ragguard](https://github.com/maximus242/ragguard)
pip install ragguard
Would love feedback from anyone running multi-tenant or access-controlled RAG setups. | 2026-01-13T13:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qbrxtz/built_a_security_layer_for_selfhosted_rag_filters/ | Strange-Mastodon9490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qbrxtz | false | null | t3_1qbrxtz | /r/LocalLLaMA/comments/1qbrxtz/built_a_security_layer_for_selfhosted_rag_filters/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=108&crop=smart&auto=webp&s=c6e6a41d5d44aef18f39392102658d25dc2d165e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=216&crop=smart&auto=webp&s=35bf03b401d7bd1e6ed34563578db166d8ee7aad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=320&crop=smart&auto=webp&s=611453072817a9c8b504d57125b1574a46144c18', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=640&crop=smart&auto=webp&s=9e2e78346e578ebf2230fccfd43cff1135e2de50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=960&crop=smart&auto=webp&s=685a5ed8c87db6ef4f1e3b5b1af88ed0b5ac9fa1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?width=1080&crop=smart&auto=webp&s=a8805a1a79017e32ed76a4617f7045793f681148', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/14IwCSt98y3z_ghSxoVJOLs4Abrjdt8K0WNuvRG-5YE.png?auto=webp&s=c7391c3600fec157fc66bbddd3c21a7f86275e4b', 'width': 1200}, 'variants': {}}]} |