title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TeleChat3-105B-A4.7B-Thinking and TeleChat3-36B-Thinking | 31 | 2026-01-05T11:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q4jf67/telechat3105ba47bthinking_and_telechat336bthinking/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4jf67 | false | null | t3_1q4jf67 | /r/LocalLLaMA/comments/1q4jf67/telechat3105ba47bthinking_and_telechat336bthinking/ | false | false | 31 | null | ||
Visualizing RAG | 6 | Just found out there are tools for visualizing postgreSQL RAG data. This is just one quick example from last Friday when I figured out how it’s done. What I find interesting is I was able to add in a feature to connect a query and map the query with the RAG data, to see exactly where it connects and diagnose if/when the RAG fails to retrieve relevant data. Seems very useful for trouble shooting your RAG retrieval ins and outs | 2026-01-05T11:32:53 | https://v.redd.it/7l7z3niaoibg1 | Fear_ltself | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4jdeb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/7l7z3niaoibg1/DASHPlaylist.mpd?a=1770204788%2CMDA0MTVkMWJiMGY3MzBlZjllYTgzY2FkMzhkOWUwZDJiNzExYWE1YTEzZWMwZDljYmZjNDFlZTQ5NDFmODI4Mw%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/7l7z3niaoibg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/7l7z3niaoibg1/HLSPlaylist.m3u8?a=1770204788%2CNjZmMjY1M2IyNDNlNThiNzI5NDZiYmVkY2Y3NDZlNDA2ODc0YmI3MWE5YzA3ZDM3Yjc1OTkzZGI3NmU5MTUwMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7l7z3niaoibg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1q4jdeb | /r/LocalLLaMA/comments/1q4jdeb/visualizing_rag/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=108&crop=smart&format=pjpg&auto=webp&s=efa76eb442d36473776eacb1455bd9ba3aa859d7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=216&crop=smart&format=pjpg&auto=webp&s=3aac21a6216fcb68e1b5f022f6d9a72aebb921fb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=320&crop=smart&format=pjpg&auto=webp&s=defa0636214eb5cec9f382204dad29cca7c08094', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=640&crop=smart&format=pjpg&auto=webp&s=8392fe28f42d7a9c10836daff52e9640f574d45e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=960&crop=smart&format=pjpg&auto=webp&s=5ad20a40dc4583fe61d471e1bae41b5358bd357f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=152f3ba17a3eb028afc47611adf7f3fc8a5391f7', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/YmsxNnMxZWFvaWJnMV5s4EhtZ6LRSr90sO8SbOdlEZ4MwU5P3RdxGJQBGsr6.png?format=pjpg&auto=webp&s=46053052e85c4b1308d5a8d4e58b7d17a3545ecc', 'width': 1280}, 'variants': {}}]} | |
What do we think about Gorgon Point (Ryzen AI 9 HX 470)? | 140 | The new APU is promised to support DDR5-6400 (102.4 GB/s) and LPDDR5X-8533 (136.5 GB/s) which should move some models that were barely usable on Strix Point to the usable territory.
However, it really seems that to utilise these capabilities, manufacturers would have to get chips that are basically inaccessible right now. | 2026-01-05T11:31:03 | Everlier | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4jc99 | false | null | t3_1q4jc99 | /r/LocalLLaMA/comments/1q4jc99/what_do_we_think_about_gorgon_point_ryzen_ai_9_hx/ | false | false | default | 140 | {'enabled': True, 'images': [{'id': '6lfowdxxnibg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=108&crop=smart&auto=webp&s=1ef9a49683084bb0bf967d8c122e95ee75de4ba8', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=216&crop=smart&auto=webp&s=2b376a8a96541add3569b7e265bca6cac144abf6', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=320&crop=smart&auto=webp&s=667757012932d04ec742133003a3035ace461af9', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=640&crop=smart&auto=webp&s=be0f7d010f4bb84aa8472d0b245abd51e0e8c43b', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=960&crop=smart&auto=webp&s=d293af3bf10cbcb39445bdfb2a41eca22a3a6b0a', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?width=1080&crop=smart&auto=webp&s=719ea47b02732bad4c14466acba51e1b56cc289e', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://preview.redd.it/6lfowdxxnibg1.jpeg?auto=webp&s=4d43cb872e9c7a475dcc975edb208df65a925a52', 'width': 2000}, 'variants': {}}]} | |
Several publicly available university courses focusing on AI-Agents: | 11 | 2026-01-05T11:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q4j6qr/several_publicly_available_university_courses/ | QuanstScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4j6qr | false | null | t3_1q4j6qr | /r/LocalLLaMA/comments/1q4j6qr/several_publicly_available_university_courses/ | false | false | 11 | null | ||
[Open Source] I built an Agent that audits code like a Senior Engineer (AST-Aware + DeepSeek V3). It draws diagrams, fetches missing files JIT, and uses Hybrid Search. | 0 | 2026-01-05T11:01:30 | Few-Angle-2646 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4itd7 | false | null | t3_1q4itd7 | /r/LocalLLaMA/comments/1q4itd7/open_source_i_built_an_agent_that_audits_code/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '19jfcp6oiibg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=108&crop=smart&format=png8&s=e175da797a2d39256db819de5a7624e3ba8db30b', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=216&crop=smart&format=png8&s=10469ddb0599aee042ad51a89657748ac3cc18a6', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=320&crop=smart&format=png8&s=065e222fff41e44da73b29653c532e81d19e3a5a', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=640&crop=smart&format=png8&s=087fa6150821901324d01aad15378abcdfbc963b', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=960&crop=smart&format=png8&s=de89c62c64dd6aa120f83296c8da173cffe362ee', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=1080&crop=smart&format=png8&s=ee27be61e3593a502eb51c654fbd6e4eec7e4192', 'width': 1080}], 'source': {'height': 940, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?format=png8&s=c472c56eda1e2ff83573fa861a3d552c5f6578b5', 'width': 1904}, 'variants': {'gif': {'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=108&crop=smart&s=862ea4b0d757baf22c6341134318578ed1dcb129', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=216&crop=smart&s=d38a046feec1c7f895504fd0f7cb5f1b7d911299', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=320&crop=smart&s=92c47cb81f1b43230d021be21b8b4199396307cd', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=640&crop=smart&s=48a4f124c44d493e23584ab7e28e8918d696dd46', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=960&crop=smart&s=9e3940f60f205bf476c96807c5332bd856213e44', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=1080&crop=smart&s=d91cdca71b71c5358da05b3ff7d75a77dd573b50', 'width': 1080}], 'source': {'height': 940, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?s=7bcb2e4843e65aad1c2a6c8c3809b4abc717b88f', 'width': 1904}}, 'mp4': {'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=108&format=mp4&s=a77f7161466714b5d1e58cb677a71d7ce43a9e8d', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=216&format=mp4&s=b31616f9bbfa921f10d068e5d5b1c1a20e984db8', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=320&format=mp4&s=431434cb4ec0b48850bf1236a0bb210fd67ee211', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=640&format=mp4&s=2b2219af2bcb48204addb70c12404dea9ae07df3', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=960&format=mp4&s=936ea5e4cafe319076adb215c56a3a2082cf2df4', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?width=1080&format=mp4&s=769c146f8356f022c66d305d6c7652680ae1f40b', 'width': 1080}], 'source': 
{'height': 940, 'url': 'https://preview.redd.it/19jfcp6oiibg1.gif?format=mp4&s=16b186be664e7dc3fcfccf5b68e492e61609dd97', 'width': 1904}}}}]} | ||
I built an open-source Agent that parses Python AST to fix RAG context fragmentation. It fetches missing files autonomously. (DeepSeek V3 + FastAPI) | 1 | 2026-01-05T11:00:23 | Few-Angle-2646 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4iskc | false | null | t3_1q4iskc | /r/LocalLLaMA/comments/1q4iskc/i_built_an_opensource_agent_that_parses_python/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'kpgv0qnfiibg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=108&crop=smart&format=png8&s=f5ea7eb6be40cc960fbfc1605b777a5efd57de05', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=216&crop=smart&format=png8&s=a14d97a5465da49370ca6aa1f173864f69498af7', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=320&crop=smart&format=png8&s=b7586e7c89bc5cb0b9764211a93c6b66aeefed28', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=640&crop=smart&format=png8&s=5b35f1bf918fe9cbcd456dff1cc1270939cdd6ee', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=960&crop=smart&format=png8&s=6cb92e8f73454e393092caa23eb941966ab30ae5', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=1080&crop=smart&format=png8&s=cd6953da244ae59fd6c92674e660eb13066a2c64', 'width': 1080}], 'source': {'height': 940, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?format=png8&s=807361aacad1b5d4bdaf262f004681be586a8406', 'width': 1904}, 'variants': {'gif': {'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=108&crop=smart&s=b9b4b96512fd24bd533850a414c91b1c28175040', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=216&crop=smart&s=8eabcac8a8d55a5335b96a67bbbdc62c330cd011', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=320&crop=smart&s=f691ef585e4f909e773fbcadea6cfefc91a04ed6', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=640&crop=smart&s=4408a731aba7e526711bb8c3395922e4d8bb4671', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=960&crop=smart&s=4506a7c2b1ae8022526d2c3a716e58ef33e2df14', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=1080&crop=smart&s=355f474c803793a66e417f56903423297c3a48ff', 'width': 1080}], 'source': {'height': 940, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?s=8c01eeee7fb25d3ae9a03079a66d7eff09fa0958', 'width': 1904}}, 'mp4': {'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=108&format=mp4&s=6258e9e9dce5b12340fb1e9829c2b6eca22b44f0', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=216&format=mp4&s=f4acf8ef1ee5db0de06a3691cc7a2039eb8c84e3', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=320&format=mp4&s=66618bb7b0ccfc33f3d01c3b696f03e887fab832', 'width': 320}, {'height': 315, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=640&format=mp4&s=2ea7bc80eb9f6dca952741ff40adf9926dce9e1d', 'width': 640}, {'height': 473, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=960&format=mp4&s=dd2a83e2ff9011c903ab1c484c55a231c5699608', 'width': 960}, {'height': 533, 'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?width=1080&format=mp4&s=e01b54bcfa5d104f0f33291bbff1589d64234488', 'width': 1080}], 'source': {'height': 940, 
'url': 'https://preview.redd.it/kpgv0qnfiibg1.gif?format=mp4&s=5c118b383057ab733ea181023d7f5c7a21fd529d', 'width': 1904}}}}]} | ||
I stress-tested Gemini 3.0 on Indian Pharma pricing. It hallucinated a ₹62 Lakh error. | 0 | I run a HITL evaluation firm for Indian Healthcare. We found that LLMs quote the MRP (₹80L) but miss the B2B PAP schemes , leading to a real cost of ₹11L. Here is the benchmark dataset if you want to test your RAG pipeline.
| 2026-01-05T10:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q4inq9/i_stresstested_gemini_30_on_indian_pharma_pricing/ | WillingnessSpare9707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4inq9 | false | null | t3_1q4inq9 | /r/LocalLLaMA/comments/1q4inq9/i_stresstested_gemini_30_on_indian_pharma_pricing/ | false | false | self | 0 | null |
[Open Source] I built an Agent that audits code like a Senior Engineer (AST-Aware + DeepSeek V3). It draws diagrams, fetches missing files JIT, and uses Hybrid Search. | 1 | [removed] | 2026-01-05T10:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q4ilp0/open_source_i_built_an_agent_that_audits_code/ | Few-Angle-2646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4ilp0 | false | null | t3_1q4ilp0 | /r/LocalLLaMA/comments/1q4ilp0/open_source_i_built_an_agent_that_audits_code/ | false | false | self | 1 | null |
Bielik-11B-v3.0-Instruct | 61 | Bielik-11B-v3.0-Instruct is a generative text model featuring 11 billion parameters. It is an instruct fine-tuned version of the [Bielik-11B-v3-Base-20250730](https://huggingface.co/speakleash/Bielik-11B-v3-Base-20250730). Forementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH.
Developed and trained on multilingual text corpora across **32 European languages**, with **emphasis on Polish**, which has been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC centers: ACK Cyfronet AGH.
[https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct-GGUF](https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct-GGUF)
[https://github.com/speakleash/bielik-papers/blob/main/v3/Bielik\_11B\_v3.pdf](https://github.com/speakleash/bielik-papers/blob/main/v3/Bielik_11B_v3.pdf)
| 2026-01-05T10:34:59 | https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q4icio | false | null | t3_1q4icio | /r/LocalLLaMA/comments/1q4icio/bielik11bv30instruct/ | false | false | default | 61 | {'enabled': False, 'images': [{'id': '5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=108&crop=smart&auto=webp&s=bbef17fa317ce6be343cdd94ed5ce874bd05641a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=216&crop=smart&auto=webp&s=1c3daecbb8d0ef51b9e780e8c6357fa6fdc620bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=320&crop=smart&auto=webp&s=3f489e2bc793cd174bae1cb45b8cbe45f4f41a75', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=640&crop=smart&auto=webp&s=22861a4ba482c1b57948a7a0af2d215159d5c0d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=960&crop=smart&auto=webp&s=eef3e1bfeb9480d539a7551d474e9dcee60c0dd6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?width=1080&crop=smart&auto=webp&s=428151a01a4b9456cf0f57c976d036bc7d1e6905', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5cEj5o78oh6TbyHqprpd205PtMWxpwd8yMVStGNcCRo.png?auto=webp&s=3dc29e46be5553eddc12fb5881632e0979b98c2a', 'width': 1200}, 'variants': {}}]} |
Apple CLaRa: Bridging Retrieval and Generation with Continuous Latent Reasoning | 31 | I have not seen any discussion about this effort so I'm posting it here.
But it looks like apple tried a new approach at RAG.
Basically they took their own attempt at linguistic compression, it can shrink documents by **32x to 64x** without losing the important details needed to answer a question.
and the novel thing in my opinion is instead of having a separate retriever and a separate writer, it unifies them. It learns to find the right info and write the answer in one smooth process.
And ofcourse its fully open source.
Links:
[https://github.com/apple/ml-clara](https://github.com/apple/ml-clara)
[https://huggingface.co/datasets/apple/CLaRa\_multi\_stage](https://huggingface.co/datasets/apple/CLaRa_multi_stage)
[https://huggingface.co/apple/CLaRa-7B-Instruct](https://huggingface.co/apple/CLaRa-7B-Instruct)
[https://arxiv.org/pdf/2511.18659](https://arxiv.org/pdf/2511.18659) | 2026-01-05T10:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q4i7m2/apple_clara_bridging_retrieval_and_generation/ | PlasticTourist6527 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4i7m2 | false | null | t3_1q4i7m2 | /r/LocalLLaMA/comments/1q4i7m2/apple_clara_bridging_retrieval_and_generation/ | false | false | self | 31 | null |
Graph RAG Setups | 1 | Sorry to bring up RAG again LOL
Trying to do a new Graph RAG system with 7-9B LLMs, the models are not the smartest so the retrieval needs to be good
My main thinking is that Graph RAG could help by bringing up more nearby node context/knowledge that the smaller models lack
What sort of pattern do you use for graph RAG these days and which github libraries, if any, are good? | 2026-01-05T10:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q4i36y/graph_rag_setups/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4i36y | false | null | t3_1q4i36y | /r/LocalLLaMA/comments/1q4i36y/graph_rag_setups/ | false | false | self | 1 | null |
Dual rx 9070 for LLMs? | 2 | Looking for a GPU mainly for **local Llama/LLM inference** on **Windows**. I’m trying to assess whether buying an **AMD Radeon** for local LLMs is a bad idea.
I’ve already searched the sub + GitHub issues/docs for **llama.cpp / Ollama / ROCm-HIP / DirectML**, but most threads are either Linux-focused or outdated, and I’m still missing **current Windows + Radeon** specifics.
I also game sometimes, and AMD options look more attractive for the price — plus most of what I play is simply easier on Windows.
**Options:**
* **RTX 5060 Ti 16GB** — the “it just works” CUDA choice.
* **RX 9070** — about $100 more, and on paper looks \~50% faster if the software stack doesn’t kneecap it.
**Questions (Windows + Radeon):**
* Is it still “it works… but”?
* Does going Radeon basically mean “congrats, you’re a Linux person now”?
* What’s actually usable day-to-day: **Ollama / llama.cpp / PyTorch+HIP/ROCm / DirectML / other**?
* What’s stable vs frequently breaks after driver/library updates?
* Real numbers: **prefill speed + tokens/sec** you see in practice (please include **model + quant + context size**) — especially at **\~20–30k context**.
**Multi-GPU:** anyone tried **two RX 9070** to run bigger models (like **30B**)?
* Does it work reliably in practice?
* What real speeds do you get (prefill + tokens/sec)?
* Is using both GPUs straightforward, or complicated/flaky? | 2026-01-05T10:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/1q4i2s4/dual_rx_9070_for_llms/ | Fast_Thing_7949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4i2s4 | false | null | t3_1q4i2s4 | /r/LocalLLaMA/comments/1q4i2s4/dual_rx_9070_for_llms/ | false | false | self | 2 | null |
Benchmarking 23 LLMs on Nonogram (Logic Puzzle) Solving Performance | 52 | Over the Christmas holidays I went down a rabbit hole and built a benchmark to test how well large language models can solve nonograms (grid-based logic puzzles).
The benchmark evaluates 23 LLMs across increasing puzzle sizes (5x5, 10x10, 15x15).
A few interesting observations:
- Performance drops sharply as puzzle size increases
- Some models generate code to brute-force solutions
- Others actually reason through the puzzle step-by-step, almost like a human
- GPT-5.2 is currently dominating the leaderboard
Cost of curiosity:
- ~$250
- ~17,000,000 tokens
- zero regrets
Everything is fully open source and rerunnable when new models drop.
Benchmark: https://www.nonobench.com
Code: https://github.com/mauricekleine/nono-bench
I mostly built this out of curiosity, but I’m interested in what people here think:
Are we actually measuring reasoning ability — or just different problem-solving strategies?
Happy to answer questions or run specific models if people are interested. | 2026-01-05T10:16:23 | mauricekleine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4i19c | false | null | t3_1q4i19c | /r/LocalLLaMA/comments/1q4i19c/benchmarking_23_llms_on_nonogram_logic_puzzle/ | false | false | default | 52 | {'enabled': True, 'images': [{'id': 'fdryj8qkaibg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=108&crop=smart&auto=webp&s=6b4975fe152350083450c1a5fe08f5121aab565c', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=216&crop=smart&auto=webp&s=d59096c670eaa6c26a09d96075f799c87aab55f1', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=320&crop=smart&auto=webp&s=e7c678cbee3aa4f9f0ac59ac7558669a775cdaca', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=640&crop=smart&auto=webp&s=e8aa66881e7017590b4656228716be0bf22299ee', 'width': 640}, {'height': 536, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=960&crop=smart&auto=webp&s=23c84d779862cc5635c8e6f5d036de636623b6c2', 'width': 960}, {'height': 604, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?width=1080&crop=smart&auto=webp&s=09f31185f905619d75c5fd19914e1463011f278b', 'width': 1080}], 'source': {'height': 957, 'url': 'https://preview.redd.it/fdryj8qkaibg1.jpeg?auto=webp&s=49994daf02c189d69527c96015c244b6f78828d3', 'width': 1711}, 'variants': {}}]} | |
DUAL RX 9070 vs RTX 5060 Ti 16GB for local LLMs | 1 | [removed] | 2026-01-05T10:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1q4i0g6/dual_rx_9070_vs_rtx_5060_ti_16gb_for_local_llms/ | Fast_Thing_7949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4i0g6 | false | null | t3_1q4i0g6 | /r/LocalLLaMA/comments/1q4i0g6/dual_rx_9070_vs_rtx_5060_ti_16gb_for_local_llms/ | false | false | self | 1 | null |
DUAL RX 9070 vs RTX 5060 Ti 16GB for local LLMs | 1 | [removed] | 2026-01-05T10:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q4hyn9/dual_rx_9070_vs_rtx_5060_ti_16gb_for_local_llms/ | Fast_Thing_7949 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4hyn9 | false | null | t3_1q4hyn9 | /r/LocalLLaMA/comments/1q4hyn9/dual_rx_9070_vs_rtx_5060_ti_16gb_for_local_llms/ | false | false | 1 | null | |
Local / self-hosted alternative to NotebookLM for generating narrated videos? | 2 | Hi everyone,
I’m looking for a **local / self-hosted alternative to NotebookLM**, specifically the feature where it can generate a **video with narrated audio** based on documents or notes.
NotebookLM works great, but I’m dealing with **private and confidential data**, so uploading it to a hosted service isn’t an option for me. Ideally, I’m looking for something that:
* Can run **fully locally** (or self-hosted)
* Takes documents / notes as input
* Generates **audio narration** (TTS)
* Optionally creates a **video** (slides, visuals, or timeline synced with the audio)
* Open-source or at least privacy-respecting
I’m fine with stitching multiple tools together (LLM + TTS + video generation) if needed.
Does anything like this exist yet, or is there a recommended stack people are using for this kind of workflow?
Thanks in advance! | 2026-01-05T09:58:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q4hqei/local_selfhosted_alternative_to_notebooklm_for/ | Proof-Exercise2695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4hqei | false | null | t3_1q4hqei | /r/LocalLLaMA/comments/1q4hqei/local_selfhosted_alternative_to_notebooklm_for/ | false | false | self | 2 | null |
Need help deciding on desktop GPU server | 0 | we have a budget of 45k$ to build a GPU workstation for a university mainly for full model training and finetuning.
does anyone have any experience with H200 or PRO 6000 GPUs for said task?
how does 2 x Pro 6000 compare with a single h200?
what concerns should be addressed? | 2026-01-05T09:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/1q4hmbb/need_help_deciding_on_desktop_gpu_server/ | mohammacl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4hmbb | false | null | t3_1q4hmbb | /r/LocalLLaMA/comments/1q4hmbb/need_help_deciding_on_desktop_gpu_server/ | false | false | self | 0 | null |
We trained a 7B model (OpenChat) on synthetic OCR data to beat public dataset benchmarks on financial docs. (Paper + Method inside) | 15 | We have been researching a major bottleneck in Financial Document Understanding (FDU): **The Privacy Paradox.**
To build accurate invoice parsers, you need complex, messy, real-world data (nested tables, colliding columns). But due to privacy laws, you can't use client data for training. Most teams resort to public datasets like UCSF or RVL-CDIP, but we found these datasets are often too "clean" or structurally simple to represent real-world financial chaos.
**The Experiment:** We hypothesized that high-fidelity **synthetic data** could outperform real (but structurally simple) public data.
We developed a framework called **DocuLite** containing two generators:
1. **InvoicePy (Text):** Uses LLaMA-3-70B to generate synthetic OCR text that mimics complex layouts (tables, key-value pairs) without containing any real PII.
2. **TemplatePy (Vision):** Generates HTML-based invoice templates to train Vision Language Models (VLMs).
**The Results:** We benchmarked this against models trained on standard public datasets.
* **LLM Performance:** A 7B model (OpenChat-3.5) trained on our synthetic data saw a **0.525 improvement in F1 score** compared to the same model trained on public data.
* **VLM Performance:** An 8B model (InternVL-2) saw a **0.513 F1 improvement**.
**Key Takeaway:** For anyone building RAG or Extraction pipelines in sensitive domains (Finance/Healthcare), our results suggest that investing in a *synthetic data generator* (that preserves layout logic) yields better ROI than hunting for "anonymized" public datasets. The model learns the *structure* better when you control the generation parameters.
We published the full breakdown of the architecture, the F1 charts per field, and the methodology in our technical blog if anyone is interested in the deeper engineering details:
[https://www.hyperbots.com/research/breaking-the-annotation-barrier-with-doculite](https://www.hyperbots.com/research/breaking-the-annotation-barrier-with-doculite)
Has anyone else here successfully replaced real data with synthetic data for complex tabular extraction? I'd love to hear if you faced similar F1 score jumps. | 2026-01-05T09:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q4hdzs/we_trained_a_7b_model_openchat_on_synthetic_ocr/ | Hyperbots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4hdzs | false | null | t3_1q4hdzs | /r/LocalLLaMA/comments/1q4hdzs/we_trained_a_7b_model_openchat_on_synthetic_ocr/ | false | false | self | 15 | null |
Grafted Titans: a Plug-and-Play Neural Memory for Open-Weight LLMs | 40 | I’ve been experimenting with **Test-Time Training (TTT)**, specifically trying to replicate the core concept of Google’s "Titans" architecture (learning a neural memory on the fly) without the massive compute requirement of training a transformer from scratch.
I wanted to see if I could "graft" a trainable memory module onto a **frozen open-weight model** (Qwen-2.5-0.5B) using a consumer-grade setup (I got Nvidia DGX Spark BlackWell, 128GB)
I’m calling this architecture "Grafted Titans." I just finished the evaluation on the BABILong benchmark and the results were very interesting
**The Setup:**
* **Base Model:** Qwen-2.5-0.5B-Instruct (Frozen weights).
* **Mechanism:** I appended memory embeddings to the input layer (Layer 0) via a trainable cross-attention gating mechanism. This acts as an adapter, allowing the memory to update recursively while the base model stays static.
**The Benchmark (BABILong, up to 2k context):** I used a strict 2-turn protocol.
* **Turn 1:** Feed context -> Memory updates -> Context removed.
* **Turn 2:** Feed question -> Model retrieves answer solely from neural memory.
**The Results:** I compared my grafted memory against two baselines.
1. **Random Guessing:** 0.68% Accuracy. Basically all wrong.
2. **Vanilla Qwen (Full Context):** I fed the *entire* token context to the standard Qwen model in the prompt. It scored **34.0%**.
3. **Grafted Titans (Memory Only):** The model saw *no* context in the prompt, only the memory state. It scored **44.7%**.
It appears the **neural memory module is acting as a** **denoising filter**. When a small model like Qwen-0.5B sees 1.5k tokens of text, its attention mechanism gets "diluted" by the noise. The grafted memory, however, compresses that signal into specific vectors, making retrieval sharper than the native attention window.
**Limitations:**
* **Signal Dilution:** Because I'm injecting memory at Layer 0 (soft prompting style), I suspect a vanishing gradient effect as the signal travels up the layers. Future versions need multi-layer injection.
* **Guardrails:** The memory is currently "gullible." It treats all input as truth, meaning it's highly susceptible to poisoning in a multi-turn setting.
* **Benchmark:** This was a 2-turn evaluation. Stability in long conversations (10+ turns) is unproven.
I’m currently cleaning up the code and weights to open-source the entire project (will be under "AI Realist" if you want to search for it later).
Has anyone else experimented with cross-attention adapters for memory retrieval? I'm curious if injecting at the middle layers (e.g., block 12 of 24) would solve the signal dilution issue without destabilizing the frozen weights.
Thoughts? | 2026-01-05T09:34:36 | https://msukhareva.substack.com/p/grafted-titans-i-built-a-plug-and | Forsaken-Park8149 | msukhareva.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1q4hcsf | false | null | t3_1q4hcsf | /r/LocalLLaMA/comments/1q4hcsf/grafted_titans_a_plugandplay_neural_memory_for/ | false | false | 40 | {'enabled': False, 'images': [{'id': 'yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=108&crop=smart&auto=webp&s=38ae5179f03d9090f808ac970a35890f650fed09', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=216&crop=smart&auto=webp&s=d79b21cf0b607cb67f1fe5d3c8f80dd7753582ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=320&crop=smart&auto=webp&s=cc85fcd68aa60261d8f733d34a0764809f7f9cc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=640&crop=smart&auto=webp&s=a9763e79f1d5ac10e28dcd72effae05fbce31406', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=960&crop=smart&auto=webp&s=518f171393cf5b4f95ef31bf9ba4e7ff45e9f377', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?width=1080&crop=smart&auto=webp&s=1f07356ba56272a693d1e53e56974ec51a88d83f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yT3YokEiN9WPYwPoTIkN__Yl-gpZQVW4GImTVvFalds.jpeg?auto=webp&s=b044e10293d3716db60c6503cce798a19c6f3c96', 'width': 1200}, 'variants': {}}]} | |
Introducing Falcon H1R 7B | 64 | This repository presents **Falcon-H1R-7B**, a reasoning-specialized model built on top of [Falcon-H1-7B-Base](https://huggingface.co/tiiuae/Falcon-H1-7B-Base) and trained via cold-start supervised fine-tuning with long reasoning traces and further enhanced by scaling RL with GRPO. The model demonstrates outstanding performance across various benchmark evaluations, including mathematics, programming, instruction following, and general logic.
(old one had llama.cpp support) | 2026-01-05T09:31:04 | https://huggingface.co/blog/tiiuae/falcon-h1r-7b | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q4harh | false | null | t3_1q4harh | /r/LocalLLaMA/comments/1q4harh/introducing_falcon_h1r_7b/ | false | false | 64 | {'enabled': False, 'images': [{'id': 'cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=108&crop=smart&auto=webp&s=136ac5e06e59f3dbeef57a311466d5684c013f06', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=216&crop=smart&auto=webp&s=fe9f8e762e26afed428f6446ea65bcb2831e670b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=320&crop=smart&auto=webp&s=4f639f50c7649b8f1b02b188e2dde83a513a653d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=640&crop=smart&auto=webp&s=e151ef7170642c032e8387d6514663a6cc0f364e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=960&crop=smart&auto=webp&s=2cbf10a6782d692757dbbf29d4506f1161cfd416', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?width=1080&crop=smart&auto=webp&s=ad97c3e525f84acb03acb76e7ac7fff74c707778', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cp8sHrI0u-v727PXUjUREk9f3_bJczbhgY4L_llZyME.png?auto=webp&s=1f3a00fd478a2b75bac5e702b9a05fcc1976a095', 'width': 1200}, 'variants': {}}]} | |
Some interesting takeaways from a new paper on why voice deepfake detectors fail on new APIs | 1 | [removed] | 2026-01-05T09:29:52 | https://www.sayso.ai/ | sayso_ai | sayso.ai | 1970-01-01T00:00:00 | 0 | {} | 1q4ha0v | false | null | t3_1q4ha0v | /r/LocalLLaMA/comments/1q4ha0v/some_interesting_takeaways_from_a_new_paper_on/ | false | false | default | 1 | null |
my final verdict on the "best" ai tools for 2025 | 1 | [removed] | 2026-01-05T08:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q4gqtw/my_final_verdict_on_the_best_ai_tools_for_2025/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4gqtw | false | null | t3_1q4gqtw | /r/LocalLLaMA/comments/1q4gqtw/my_final_verdict_on_the_best_ai_tools_for_2025/ | false | false | self | 1 | null |
Psycho Simulator – Do LLMs dream of AI psychosis? | 0 | 2026-01-05T08:45:27 | https://store.steampowered.com/news/app/1244620/view/525362843051100830 | Koksny | store.steampowered.com | 1970-01-01T00:00:00 | 0 | {} | 1q4gkyd | false | null | t3_1q4gkyd | /r/LocalLLaMA/comments/1q4gkyd/psycho_simulator_do_llms_dream_of_ai_psychosis/ | false | false | default | 0 | null | |
I kept wasting time on MCP config errors, so I built a tool to find them | 0 | Hey,
Anyone else spent way too long debugging MCP configs? Trailing comma somewhere, unhelpful error. Wrong path, silent failure. Missing env var, was a nightmare.
Got fed up and so made mcp-doctor — its a free open-source CLI that scans your configs and tells you exactly what's wrong:
npm install -g mcp-doctor
mcp-doctor
It finds trailing commas (with exact line + column), checks paths exist, warns about missing env vars, and tests if servers actually respond.
Works with Claude Desktop, Cursor, VS Code, Claude Code, Windsurf.
GitHub: [https://github.com/Crooj026/mcp-doctor](https://github.com/Crooj026/mcp-doctor) | 2026-01-05T08:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q4g25x/i_kept_wasting_time_on_mcp_config_errors_so_i/ | Embarrassed_Win1608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4g25x | false | null | t3_1q4g25x | /r/LocalLLaMA/comments/1q4g25x/i_kept_wasting_time_on_mcp_config_errors_so_i/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=108&crop=smart&auto=webp&s=f53a0aaaa9a2a041c88d936d7273e0f5e948654b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=216&crop=smart&auto=webp&s=9041fc4df92c762fc742d212bbff77d00f64d11d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=320&crop=smart&auto=webp&s=04f403eb83b8c3a68fa6b95f63a8f84e7a16ac73', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=640&crop=smart&auto=webp&s=5c04db9895a220ddf37218c5c641067b4ee231f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=960&crop=smart&auto=webp&s=e9d0347d31e9af5515373057459e87d1f4a4cf75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?width=1080&crop=smart&auto=webp&s=336a869df2eac5637033fcebeec3ff61eafa2325', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/df_hXLi2WU2M_PjG8Rf1YldOZtFhzKbnFPXtR-7jKDU.png?auto=webp&s=bbb23b93725b0b2c2f8fb2ddfda858715a52ef59', 'width': 1200}, 'variants': {}}]} |
AJT now has a dead-simple runnable demo - closes the “what do I actually run?” gap | 0 | Hey everyone
[https://github.com/Nick-heo-eg/spec](https://github.com/Nick-heo-eg/spec)
In earlier posts and comments, a few people pointed out something that really resonated with me: the distinction between execution logs and decision logs, and how many silent failures live in the layer that decides whether a check runs at all.
One comment in particular framed it well, treating skipped decisions as first-class events rather than non-events.
That matched my own experience almost exactly.
What became clear from that feedback was this: the idea made sense, the schema was understandable - but it was still abstract unless you could actually run it and see the trace.
So I added a dead-simple runnable demo that shows this concretely.
python3 examples/run_ajt_demo.py
No setup. No arguments. Running it produces:
* 3 concrete decisions (2 STOP, 1 ALLOW)
* explicit reasons and risk levels
* an ajt\_trace.jsonl file where skipped and executed decisions are both visible
No LLM. No internet. Zero dependencies.
The demo is intentionally boring: deterministic, inspectable, auditable.
CI runs this same file to make sure it never breaks.
This closes the gap between “the idea sounds right” and “I can review what actually happened.” | 2026-01-05T08:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q4fz77/ajt_now_has_a_deadsimple_runnable_demo_closes_the/ | Echo_OS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4fz77 | false | null | t3_1q4fz77 | /r/LocalLLaMA/comments/1q4fz77/ajt_now_has_a_deadsimple_runnable_demo_closes_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=108&crop=smart&auto=webp&s=4286e3ec041090152aef39d2b2c50c0b474d7172', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=216&crop=smart&auto=webp&s=5b9c0633790b85018d51fc5caad70a4ca05f1d0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=320&crop=smart&auto=webp&s=73e72ba00951b8b99bf9d94d05b859516cf60176', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=640&crop=smart&auto=webp&s=78fe53945565c47fc42d80b8c18271af4ad29585', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=960&crop=smart&auto=webp&s=3147a9d7c05bbb110678a59e1aafebe834e19fb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?width=1080&crop=smart&auto=webp&s=253f8e8206c15157b48c5ae8f068c03e955b65e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X_TU3O5-5ATBpGK_pdBWCONwRYSOqG622IKX035im5A.png?auto=webp&s=da775ce30a290957a9b0e8b93a9c9b4a47048296', 'width': 1200}, 'variants': {}}]} |
I need a mentor | 0 | I am Arjun, an beginner independent researcher working on a boundary condition I call it R''s Limit. I have identified a potential logical failure point in AI reasoning at P=0.5.
I have received feedback that my current draft lacks formal academic rigor and a literature review. I am not looking for someone to do the work for me, but for a mentor who can guide me on how to properly formalize this logic for the academic community.
This is my current draft/paper https://zenodo.org/records/18140275 | 2026-01-05T07:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q4fbq1/i_need_a_mentor/ | SafeEvening9468 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4fbq1 | false | null | t3_1q4fbq1 | /r/LocalLLaMA/comments/1q4fbq1/i_need_a_mentor/ | false | false | self | 0 | null |
不知道为啥我在llama-factory上微调的模型导出后效果就会变差! | 0 | [直接加载检查点的回答](https://preview.redd.it/zki27qaxfhbg1.png?width=2428&format=png&auto=webp&s=f22419524062804ec6ce6ac0b9a7e3e565092f1f)
[加载训练后导出的模型的回答](https://preview.redd.it/l6pn9iy0ghbg1.png?width=2540&format=png&auto=webp&s=52ee28cb17673364c795ddbc51277ad5f0233678)
我在Qwen3:8B上用QLoRA(量化等级8)对少量文本对做了50轮次的微调,为了凸显效果,学习率被调到了0.0001,训练之后模型加载检查点对话的效果要比导出模型再直接加载导出的模型的效果要理想,请问这是为啥?
| 2026-01-05T07:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q4fb0r/不知道为啥我在llamafactory上微调的模型导出后效果就会变差/ | Ok-Money-9173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4fb0r | false | null | t3_1q4fb0r | /r/LocalLLaMA/comments/1q4fb0r/不知道为啥我在llamafactory上微调的模型导出后效果就会变差/ | false | false | 0 | null | |
My electricity bill after discovering I can run every new model "just to test it" | 0 | January: $120
February: $145
March (after finding this sub): $847
Me at 3am: "But what if Llama 3.5 70B runs better with these specific quantization settings?"
My GPU fans: \*airplane noises\*
My wallet: 💀
At least I'm supporting renewable energy... right? RIGHT? | 2026-01-05T07:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1q4f73w/my_electricity_bill_after_discovering_i_can_run/ | stressfreepro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4f73w | false | null | t3_1q4f73w | /r/LocalLLaMA/comments/1q4f73w/my_electricity_bill_after_discovering_i_can_run/ | false | false | self | 0 | null |
I built a visual AI workflow tool that runs entirely in your browser - Ollama, LM Studio, llama.cpp and Most cloud API's all work out of the box. Agents/Websearch/TTS/Etc. | 148 | You might remember me from LlamaCards a previous program ive built or maybe you've seen some of my agentic computer use posts with Moondream/Minicpm navigation creating reddit posts.
Ive had my head down and I've finally gotten something I wanted to show you all.
**EmergentFlow** \- a visual node-based editor for creating AI workflows and agents. The whole execution engine runs in your browser. Its a great sandbox for developing AI workflows.
You just open it and go. No Docker, no Python venv, no dependencies. Connect your Ollama(or other local) instance, paste your API keys for whatever providers you use, and start building. Everything runs client-side - your keys stay in your browser, your prompts go directly to the providers.
**Supported:**
* Ollama (just works - point it at localhost:11434, auto-fetches models)
* LM Studio + llama.cpp (works once CORS is configured)
* OpenAI, Anthropic, Groq, Gemini, DeepSeek, xAI
For edge cases where you hit CORS issues, there's an optional desktop runner that acts as a local proxy. It's open source: [github.com/l33tkr3w/EmergentFlow-runner](http://github.com/l33tkr3w/EmergentFlow-runner)
But honestly most stuff works straight from the browser.
**The deal:**
It's free. Like, actually free - not "free trial" free.
You get a full sandbox with unlimited use of your own API keys. The only thing that costs credits is if you use my server-paid models (Gemini) because Google charges me for those.
Free tier gets 25 daily credits for server models(Gemini through my API key).
Running Ollama/LMStudio/llama.cpp or BYOK? **Unlimited. Forever. No catch.**
I do have a Pro tier ($19/mo) for power users who want more server credits and team collaboration, node/flow gallery - because I'm a solo dev with a kid trying to make this sustainable. But honestly most people here running local models won't need it.
**Try it:** [emergentflow.io/try](https://emergentflow.io/try) \- no signup, no credit card, just start dragging nodes.
If you run into issues (there will be some), please submit a bug report. Happy to answer questions about how stuff works under the hood.
Support a fellow LocalLlama enthusiast! Updoot? | 2026-01-05T07:08:30 | https://v.redd.it/ps5d841s2hbg1 | l33t-Mt | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4f0tm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ps5d841s2hbg1/DASHPlaylist.mpd?a=1770188925%2CZTZjMGNiMTNjMjBmOTgxNTczYTIzODYzZGQzMGY3M2JlYzM3ZDVkZjQ4MDU0MGZmOTkyMmQ1ODE5YzQwNjk2OQ%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/ps5d841s2hbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ps5d841s2hbg1/HLSPlaylist.m3u8?a=1770188925%2CYjk2YTFkZTk5ZWNlMWM0NjRhZDUxODIwYmYzOTBhNjBhMWMwYWU5MWViNjFkNGFiYjhmNmMzYzU0MGQ4NGJjMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ps5d841s2hbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1q4f0tm | /r/LocalLLaMA/comments/1q4f0tm/i_built_a_visual_ai_workflow_tool_that_runs/ | false | false | 148 | {'enabled': False, 'images': [{'id': 'aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=108&crop=smart&format=pjpg&auto=webp&s=886b7001bc66de1f3090f19cc96122e466729cb6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=216&crop=smart&format=pjpg&auto=webp&s=d838440fafcd535e4c049db616b58e5aea00c065', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=320&crop=smart&format=pjpg&auto=webp&s=e49ccf2a2107935f427c5b1faf5d5ed597f8d10a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=640&crop=smart&format=pjpg&auto=webp&s=4fa2aece3c2c69506329c7daa51ba5834bdd99ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=960&crop=smart&format=pjpg&auto=webp&s=2e1c2e77fb24114496291b3ab90b4de71041b3df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6d75149ba9843657b85cd866aad7276bba8cd5ff', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aGJ3cmdlMXMyaGJnMfKIu2bgp1pENmKjPeusz-I2kkXf7vs8dV2V756jCzVD.png?format=pjpg&auto=webp&s=ac79d376a68cbc1a0f3710069a93aeed9fa02c6e', 'width': 1920}, 'variants': {}}]} | |
Need architectural feedback: privacy-safe cloud memory between your data and LLM agents | 1 | Hi Reddit,
We’re the team behind **Aristo**, currently building a product called **Membase**. We’re trying to solve a problem many of you probably know well: every time you switch LLMs or agent frameworks, you have to rebuild or re-inject all your context from scratch.
Our goal is to build a **syncable, persistent memory layer** that sits between your data and whatever models/agents you want to use. Initially, we explored a fully local database design to maximize privacy, but ran into real constraints around integrations, latency, and reliability at scale, so we moved to a cloud-based architecture instead.
Because of that, we’re now trying to be very deliberate about **how much control users get over their data**, especially for people who are (rightfully) sensitive about sending anything to the cloud. Concretely, the current design is:
* Our team **cannot see or use** your data for training or analytics beyond what’s strictly necessary for running the service (no “secret fine-tuning”, no manual inspection by default).
* When you connect an existing LLM or agent stack to Membase, you can choose **which categories of context** can flow into the memory layer (e.g., Work, Personal, Hobby, etc.), instead of everything being ingested blindly.
* When you let external agents read from Membase, you can again choose **which categories** they are allowed to access, so you could, for example, expose only “Work” but never “Personal”.
Right now, Membase is still in the early design/prototyping phase and we’re only running a waitlist, so feedback will have a real impact on what we build. This community seems to care a lot about privacy and threat models, so we’d really appreciate your thoughts on:
* Does this category-based, user-controlled sharing model feel meaningful from a privacy perspective, or is it just “checkbox security”?
* Are there technical patterns you’d expect here (e.g., client-side encryption, zero-knowledge-style access, audit logs, local cache layers) that you’d consider “must have” before trusting a cloud memory layer?
* Any examples of systems that got this right (or very wrong) that we should study?
Happy to answer detailed questions about the architecture and trade-offs. Thanks in advance for any candid feedback. | 2026-01-05T07:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1q4ew2k/need_architectural_feedback_privacysafe_cloud/ | Ok_Soup6298 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4ew2k | false | null | t3_1q4ew2k | /r/LocalLLaMA/comments/1q4ew2k/need_architectural_feedback_privacysafe_cloud/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=108&crop=smart&auto=webp&s=4e9458692157491734ec8c059c1d84669ade9efa', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=216&crop=smart&auto=webp&s=d37490ef07d69ec85366fd53caa84e1b66d9d348', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=320&crop=smart&auto=webp&s=5cc13b18594345d83b53bc2135238cb1150a5e83', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=640&crop=smart&auto=webp&s=80e8aab67c4d4af12c6e8c18c4289288cb652b06', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=960&crop=smart&auto=webp&s=fa5699fcae41edd0b07ad5fe4726da75beaeb3ff', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?width=1080&crop=smart&auto=webp&s=1db8afb0a26191540699cc23753a974d828b6635', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Bno3u-87U0rD9C3RNRKS5VPqUWwD7q2MvGjpimbkn18.png?auto=webp&s=4275346f1017f657cc0868869873e8f7edb6699e', 'width': 1200}, 'variants': {}}]} |
Are GPUs really the bottleneck — or is it the software stack? | 0 | With larger LLMs and heavier inference stacks, it’s common to hear that consumer GPUs are “falling behind” and that cloud is inevitable.
I’m not fully convinced the hardware is the core problem.
A lot of inference pipelines still:
• collapse structured sparsity back into dense ops,
• move more data than necessary,
• and ignore execution patterns that could reduce memory pressure.
So GPUs hit VRAM or cost limits long before they hit actual compute limits.
That makes even strong cards feel outdated faster than they should, and it makes cloud inference more expensive than it needs to be.
Long term, it feels like fixing how models are lowered and executed is just as important as scaling hardware.
Curious what others here think — is your pain mostly:
• VRAM?
• latency?
• or GPU-hour cost? | 2026-01-05T06:59:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q4ev7h/are_gpus_really_the_bottleneck_or_is_it_the/ | Curious_Call4704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4ev7h | false | null | t3_1q4ev7h | /r/LocalLLaMA/comments/1q4ev7h/are_gpus_really_the_bottleneck_or_is_it_the/ | false | false | self | 0 | null |
We need an LLM that can read it's own thoughts. | 0 | Most LLMs that can "reason" have no ability to read their reasoning in the `<think></think>` tags. Be it Qwen3 or SmolLM3, they don't see any <think> tags even if they are there. And that was precisely after enabling the `Show raw LLM output` setting in llama-cpp's chat UI. The reasoning still exists in the context but is not visible to the LLM somehow.
However Claude surprisingly has the ability to perform hybrid "reasoning," where appending proprietary anthropic xml tags at the end of your message will enable such behaviour. Turns out claude using `<thinking></thinking>` tags, can actually read its reasoning back back in not only it's current response but in future responses as well, with the ability to "reason" while writing a response (aka it will "reason" even after tool calls or just after writing a paragraph in the actual response, just cuz it can).
We need more LLMs like that that can read the reasoning and interpret it. | 2026-01-05T06:47:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q4enz5/we_need_an_llm_that_can_read_its_own_thoughts/ | Brospeh-Stalin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4enz5 | false | null | t3_1q4enz5 | /r/LocalLLaMA/comments/1q4enz5/we_need_an_llm_that_can_read_its_own_thoughts/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=108&crop=smart&auto=webp&s=84c0fa6d1390e0859c17361f50590d8a19902843', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=216&crop=smart&auto=webp&s=7735cf0be54c8e3b07941bf32abfe859691057ee', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=320&crop=smart&auto=webp&s=b47feae299591fc4a5a9853f692e95e7560395a5', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=640&crop=smart&auto=webp&s=cee87d9a29dd096acb00e09a5be602268929d173', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=960&crop=smart&auto=webp&s=9c0f20570c163bc868702bb6be87cb575ba708b5', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?width=1080&crop=smart&auto=webp&s=22f88c1bb243321c2c12b60aab0dbc2bc57f53db', 'width': 1080}], 'source': {'height': 4242, 'url': 'https://external-preview.redd.it/aprc_YFFurO5bVyUjyMm3GS8Nyk7lZvT99NhsfjMw4E.png?auto=webp&s=44337b5248ed18f1d09c8872d7758c41794b663f', 'width': 1920}, 'variants': {}}]} |
Switching models in KoboldCpp 1.96.2? | 0 | I've been told there's a way to do it, but I can't find it in any of the settings. I'd like to be able to switch llm models without having to shut the program down and start again. Anyone have an idea how to do that?
Thanks!
| 2026-01-05T06:32:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q4eei1/switching_models_in_koboldcpp_1962/ | Cartoonwhisperer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4eei1 | false | null | t3_1q4eei1 | /r/LocalLLaMA/comments/1q4eei1/switching_models_in_koboldcpp_1962/ | false | false | self | 0 | null |
is "anonymous ai" the only way to stay safe online now? | 1 | [removed] | 2026-01-05T04:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1q4cjvt/is_anonymous_ai_the_only_way_to_stay_safe_online/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4cjvt | false | null | t3_1q4cjvt | /r/LocalLLaMA/comments/1q4cjvt/is_anonymous_ai_the_only_way_to_stay_safe_online/ | false | false | self | 1 | null |
Stress-testing local LLM agents with adversarial inputs (Ollama, Qwen) | 4 | I’ve been working on a small open-source tool to stress-test AI agents that run on local models (Ollama, Qwen, Gemma, etc.).
The problem I kept running into: an agent looks fine when tested with clean prompts, but once you introduce typos, tone shifts, long context, or basic prompt injection patterns, behavior gets unpredictable very fast — especially on smaller local models.
So I built Flakestorm, which takes a single “golden prompt”, generates adversarial mutations (paraphrases, noise, injections, encoding edge cases, etc.), and runs them against a local agent endpoint. It produces a simple robustness score + an HTML report showing what failed.
This is very much local-first:
Uses Ollama for mutation generation
Tested primarily with Qwen 2.5 (3B / 7B) and Gemma
No cloud required, no API keys
Example failures I’ve seen on local agents:
Silent instruction loss after long-context mutations
JSON output breaking under simple noise
Injection patterns leaking system instructions
Latency exploding with certain paraphrases
I’m early and still validating whether this is useful beyond my own workflows, so I’d genuinely love feedback from people running local agents:
Is this something you already do manually?
Are there failure modes you’d want to test that aren’t covered?
Does “chaos testing for agents” resonate, or is this better framed differently?
Repo: https://github.com/flakestorm/flakestorm | 2026-01-05T04:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q4c4jb/stresstesting_local_llm_agents_with_adversarial/ | No-Common1466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4c4jb | false | null | t3_1q4c4jb | /r/LocalLLaMA/comments/1q4c4jb/stresstesting_local_llm_agents_with_adversarial/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=108&crop=smart&auto=webp&s=a8439fab05e047dc014e3c49ecdef6529b7dab6b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=216&crop=smart&auto=webp&s=39dec4a429b1a5c82b7dc69bc487b048fd8af5e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=320&crop=smart&auto=webp&s=4c3ae13182d96e1e4421e33ec4d23779d91bdeee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=640&crop=smart&auto=webp&s=bca8ad9d7102b4fa13d040ec91dd6234856b0735', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=960&crop=smart&auto=webp&s=e338731a6da9788473abfb21c2713d81482e3793', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?width=1080&crop=smart&auto=webp&s=63e245ab212b1e0623ecee0a7d4746f18ac466dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EVu0Db2EbiHKX1v3WNoAqqz3vAl6s6BqflIKSe-GZEs.png?auto=webp&s=98356ab67f893f4ac659a108ae244af03b8e33ab', 'width': 1200}, 'variants': {}}]} |
vLLM reaches 2000 contributors! | 30 | 2026-01-05T04:04:52 | https://github.com/vllm-project/vllm/graphs/contributors | jinnyjuice | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q4bhtm | false | null | t3_1q4bhtm | /r/LocalLLaMA/comments/1q4bhtm/vllm_reaches_2000_contributors/ | false | false | default | 30 | null | |
[R] We built a framework to make Agents "self-evolve" using LoongFlow. Paper + Code released | 29 | Hi Reddit,
We are the team behind **LoongFlow**. We've been researching how to solve the "static agent" problem—where agents fail to adapt to complex tasks or get stuck in loops.
Instead of manual prompt engineering, we applied **Evolutionary Algorithms** (Selection, Mutation, Crossover) to the agent workflow. Treat prompts and logic as "DNA" that can evolve over generations to find the optimal solution.
**Key features:**
* 🧬 **General-Evolve:** Automatically optimizes prompts and code logic.
* 📈 **Proven Results:** In our benchmarks (detailed in the paper), we saw significant accuracy improvements compared to standard ReAct agents.
* 🔧 **Extensible:** Built for developers to create custom evolutionary pipelines.
We just released the paper on arXiv and the code is fully open-source.
**📄 Paper:** [https://arxiv.org/abs/2512.24077](https://arxiv.org/abs/2512.24077)
**💻 GitHub:**[https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow)
We are looking for feedback on the architecture! Would love to hear your thoughts on combining EA with LLMs. | 2026-01-05T03:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/1q4atlx/r_we_built_a_framework_to_make_agents_selfevolve/ | FreshmanDD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4atlx | false | null | t3_1q4atlx | /r/LocalLLaMA/comments/1q4atlx/r_we_built_a_framework_to_make_agents_selfevolve/ | false | false | self | 29 | null |
Dedicated RTX 4090 Inference API is Live! 24GB VRAM & Ultra-Low Latency 🚀 | 0 | Hi Devs,
I’ve officially launched my dedicated NVIDIA RTX 4090 hosting on RapidAPI under MonsterGPU AI. If you're tired of slow cloud providers or high costs, this is built for raw performance.
Why use this?
Dedicated Power: You get the full 24GB VRAM of a 4090 for your models.
Reliability: 100% Success Rate and 24/7 uptime verified.
Speed: Optimized for ultra-low latency inference.
Location: Fast routing, especially for users in the EMEA/UAE region.
Plans:
BASIC: $50/mo
PRO: $100/mo (Full access)
🔗 Access the API here: https://rapidapi.com/alkendihamad4444/api/ai-power-4090
Would love to hear your feedback or help with any custom integration! | 2026-01-05T03:30:40 | MonsterGPUAI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q4ar3d | false | null | t3_1q4ar3d | /r/LocalLLaMA/comments/1q4ar3d/dedicated_rtx_4090_inference_api_is_live_24gb/ | true | false | spoiler | 0 | {'enabled': True, 'images': [{'id': 'u1aelva9agbg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=108&crop=smart&auto=webp&s=a0802b430e183a89088154410737071598b7b0aa', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=216&crop=smart&auto=webp&s=cfbf4025203dd684bbea9c8fe42db0bcd2942250', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=320&crop=smart&auto=webp&s=4c1a4534a28625079879c29180d9e63df9aba289', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=640&crop=smart&auto=webp&s=3b80ceda6c09a0c0496345818c0509b89e9b71ff', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=960&crop=smart&auto=webp&s=90a5208741168f280a328d13a8423c73a4bfa728', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?auto=webp&s=2bd0217431411ce79826396177f29112a1ecc6a1', 'width': 1024}, 'variants': {'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f05e0f7bc5270e8aadec0d7ed134344a903da40c', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=44242f4f49e6e8bd788158902b4291a269a7fbbe', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=20965a0bf4bcf726c0c6a739b0443e86737d216f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=ba68962301c1971a992b744105fe0b995a9b39c2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=16ce1745d0a595969fc881b2d0d147d9d01d9359', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/u1aelva9agbg1.png?blur=40&format=pjpg&auto=webp&s=d4e430db6d58e1752f3d884dbeeb13ccbc11b12b', 'width': 1024}}}}]} | |
Budget LLM Setup Advice | 3 | I'm looking to try writing small agents to do stuff like sort my email and texts, as well as possibly tool-call to various other services. I've got a GTX 970 right now and am thinking of picking up an RTX 3060 12GB since I've got a budget of $200-250. I've got dual PCI 3.0 slots on my motherboard, so I was thinking of possibly getting another 3060 when budget allows as an upgrade path. I'm working with 16GB of DDR4 RAM right now, and maybe can get 32GB in a few months.
Would this work to run small models to achieve the stated goals, or is it wishful thinking to think that such a budget would be able to do anything remotely useful? I've seen Qwen3 8b mentioned as a decent model for tool calling, but I wondering what experience people have had with such low amounts of VRAM. | 2026-01-05T03:27:16 | https://www.reddit.com/r/LocalLLaMA/comments/1q4aogc/budget_llm_setup_advice/ | UndefinedBurrito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4aogc | false | null | t3_1q4aogc | /r/LocalLLaMA/comments/1q4aogc/budget_llm_setup_advice/ | false | false | self | 3 | null |
VLA模型研究进展 | 0 | 2026-01-05T03:25:15 | https://vlamodels-nm5d2sqk.manus.space | SquashThis3025 | vlamodels-nm5d2sqk.manus.space | 1970-01-01T00:00:00 | 0 | {} | 1q4amx0 | false | null | t3_1q4amx0 | /r/LocalLLaMA/comments/1q4amx0/vla模型研究进展/ | false | false | default | 0 | null | |
[Release] We trained an AI to understand Taiwanese memes and slang because major models couldn't. Meet Twinkle AI's gemma-3-4B-T1-it. | 31 | Hi r/LocalLLaMA ,
We are **Twinkle AI**, and today we are releasing **gemma-3-4B-T1-Instruct**.
We realized that when major LLMs generate Traditional Chinese, they often default to Mainland Chinese terminology, slang, and cultural perspectives. They translate the *words*, but miss the *context*.
We built **gemma-3-4B-T1-it**, a specialized version of Google's new Gemma 3 designed specifically for the context of **Taiwan**. It knows our laws, our geography, and yes, our internet slang.
[True Cultural Alignment: It knows the difference between local Taiwanese slang \(e.g., \\"很盤\\" - rip-off\) and generic terms. It understands local geography and memes.](https://preview.redd.it/tda9w1qu7gbg1.png?width=3469&format=png&auto=webp&s=0245d5368ee8f42fe3d51fa5776017534e5754f4)
It's a fun experiment in how deep localization changes model behavior. It also happens to be really good at **Function Calling** if you want to build agents with it.
We'd love to hear your [feedback](https://discord.gg/tnkXrNGst3) on this approach to highly localized LLMs!
🤗 [twinkle-ai/gemma-3-4B-T1-it](https://huggingface.co/twinkle-ai/gemma-3-4B-T1-it/blob/main/README_EN.md) | 2026-01-05T03:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1q4aiko/release_we_trained_an_ai_to_understand_taiwanese/ | piske_usagi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4aiko | false | null | t3_1q4aiko | /r/LocalLLaMA/comments/1q4aiko/release_we_trained_an_ai_to_understand_taiwanese/ | false | false | 31 | null | |
Llama 3.3 8B, abliterated to <0.05 KL | 106 | This is an abliterated version of the allegedly leaked Llama 3.3 8B 128k model that tries to minimize intelligence loss while optimizing for compliance.
Link (BF16 weights):
[https://huggingface.co/SicariusSicariiStuff/Llama-3.3-8B-Instruct-128K\_Abliterated](https://huggingface.co/SicariusSicariiStuff/Llama-3.3-8B-Instruct-128K_Abliterated)
Credits: Fizzarolli, p-e-w, some employee @ meta for another successful failure.
Enjoy :) | 2026-01-05T03:18:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q4ahw1/llama_33_8b_abliterated_to_005_kl/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q4ahw1 | false | null | t3_1q4ahw1 | /r/LocalLLaMA/comments/1q4ahw1/llama_33_8b_abliterated_to_005_kl/ | false | false | self | 106 | {'enabled': False, 'images': [{'id': '-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=108&crop=smart&auto=webp&s=277a008c8fec217fe8e052d9f7b6051d11247212', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=216&crop=smart&auto=webp&s=cd3446ee09deaa1eb8025ad5493e8319dc3b2060', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=320&crop=smart&auto=webp&s=ce48433a9790990a9850a69d6e27461ea4c848fe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=640&crop=smart&auto=webp&s=8df0a3611e7963b124408441de98cd273ce50746', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=960&crop=smart&auto=webp&s=863ee79521874940be19522c5a93690756e235ab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?width=1080&crop=smart&auto=webp&s=f7a150f2879de5c86eed2ed8758e024e3641298f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-qyQS6pOmNn1v87ALBAZp6uMsGIkARCgSorjrYn2uu4.png?auto=webp&s=481e98c98799d602a3bf94f7de24d582ce6ddebe', 'width': 1200}, 'variants': {}}]} |
EasyWhisperUI - Open-Source Easy UI for OpenAI’s Whisper model with cross platform GPU support (Windows/Mac) | 22 | Hey guys, it’s been a while but I’m happy to announce a major update for **EasyWhisperUI**.
Whisper is OpenAI’s automatic speech recognition (ASR) model that converts audio into text, and it can also translate speech into English. It’s commonly used for transcribing things like meetings, lectures, podcasts, and videos with strong accuracy across many languages.
If you’ve seen my earlier posts, EasyWhisperUI originally used a **Qt-based UI**. After a lot of iteration, I’ve now migrated the app to an **Electron architecture (React + Electron + IPC)**.
The whole point of EasyWhisperUI is simple: **make the entire Whisper/whisper.cpp process extremely beginner friendly**. No digging through CLI flags, no “figure out models yourself,” no piecing together FFmpeg, no confusing setup steps. You download the app, pick a model, drop in your files, and it just runs.
It’s also built around **cross platform GPU acceleration**, because I didn’t want this to be NVIDIA-only. On Windows it uses **Vulkan** (so it works across **Intel + AMD + NVIDIA** GPUs, including integrated graphics), and on macOS it uses **Metal** on Apple Silicon. **Linux is coming very soon.**
After **countless hours of work**, the app has been migrated to Electron to deliver a **consistent cross-platform UI experience** across **Windows + macOS (and Linux very soon)** and make updates/features ship much faster.
The new build has also been **tested on a fresh Windows system several times** to verify clean installs, dependency setup, and end-to-end transcription.
GitHub: [https://github.com/mehtabmahir/easy-whisper-ui](https://github.com/mehtabmahir/easy-whisper-ui)
Releases: [https://github.com/mehtabmahir/easy-whisper-ui/releases](https://github.com/mehtabmahir/easy-whisper-ui/releases)
# What EasyWhisperUI does (beginner-friendly on purpose)
1. **Local transcription powered by whisper.cpp**
2. **Cross platform GPU acceleration** Vulkan on Windows (Intel/AMD/NVIDIA) Metal on macOS (Apple Silicon)
3. **Batch processing** with a queue (drag in multiple files and let it run)
4. Export to `.txt` or `.srt` (timestamps)
5. **Live transcription** (beta)
6. **Automatic model downloads** (pick a model and it downloads if missing)
7. **Automatic media conversion** via FFmpeg when needed
8. **Support for 100+ languages and more!**
# What’s new in this Electron update
1. **First-launch Loader / Setup Wizard** Full-screen setup flow with real-time progress and logs shown directly in the UI.
2. **Improved automatic dependency setup (Windows)** More hands-off setup that installs/validates what’s needed and then builds/stages Whisper automatically.
3. **Per-user workspace (clean + predictable)** Binaries, models, toolchain, and downloads are managed under your user profile so updates and cleanup stay painless.
4. **Cross-platform UI consistency** Same UI behavior and feature set across Windows + macOS (and Linux very soon).
5. **Way fewer Windows Defender headaches** This should be noticeably smoother now.
# Quick Windows note for GPU acceleration
For Vulkan GPU acceleration on Windows, make sure you’re using the latest drivers directly from Intel/AMD/NVIDIA (not OEM drivers).
Example: on my **ASUS Zenbook S16**, the OEM graphics drivers did **not** include Vulkan support.
Please try it out and let me know your results! Consider supporting my work if it helps you out :) | 2026-01-05T01:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1q48q2s/easywhisperui_opensource_easy_ui_for_openais/ | mehtabmahir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q48q2s | false | null | t3_1q48q2s | /r/LocalLLaMA/comments/1q48q2s/easywhisperui_opensource_easy_ui_for_openais/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=108&crop=smart&auto=webp&s=d2709ab151c51bbfaf505fe549c1cfc4adc5f9e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=216&crop=smart&auto=webp&s=3310c55fbaac18e4782c1504e3e7ff09ad8fe5f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=320&crop=smart&auto=webp&s=d2b1eafa5e2852ad9e6ee6169e26022b1a8b4606', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=640&crop=smart&auto=webp&s=f7021d820c793eb7ba753718192c2d32c8483b93', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=960&crop=smart&auto=webp&s=f6f2c2a7d4ca7aa8c3eb33f9f0801dca14ecf497', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?width=1080&crop=smart&auto=webp&s=3dc726e6844e947708b2e95936f15aa49aa08af5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZSbyvQAuhvvD-cKhbLSHjjIq29WZdAagizEiGGWo2zw.png?auto=webp&s=796a19d6e14596757615756853756bae9a7abc0a', 'width': 1200}, 'variants': {}}]} |
Using small lightweight models for AI chatbots that watch a livestream and comment on what is going on | 6 | I've been experimenting with lightweight ultra-fast models. They don't need to do anything too complicated, just respond to a description of what is happening on a livestream and comment on it in real-time.
I've found smaller models are a bit too dumb and repetitive. They also overly rely on emojis. So far, Llama 3.1 8B is the best option I've found that is not too computationally expensive and produces results that seem at least vaguely like a human chatter.
What model would you use for this purpose?
The bots watch the stream and comment on what happens in the chat and on stream. They sometimes have some interesting emergent behaviors.
You can check out what they're saying at https://onestreamer.live | 2026-01-05T01:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q48guf/using_small_lightweight_models_for_ai_chatbots/ | Powerful-Frame-44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q48guf | false | null | t3_1q48guf | /r/LocalLLaMA/comments/1q48guf/using_small_lightweight_models_for_ai_chatbots/ | false | false | self | 6 | null |
Any help with training vibevoice Lora ? I couldn't find any information about diffusion-head, acoustic connector, and semantic connector ... | 4 | So, I trained a LoRa and since the diffusion head file was very large, over 1 gigabyte, I didn't download it.
The comfyui extension said that only adapter config and adapter model were necessary.
But chatgpt told me that diffusion head is the most important part :(
I have very good results with model 7b with 30-second audio, so I don't know if LoRa for cloning specific voices is really useful. | 2026-01-05T01:20:07 | More_Bid_2197 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q47u9j | false | null | t3_1q47u9j | /r/LocalLLaMA/comments/1q47u9j/any_help_with_training_vibevoice_lora_i_couldnt/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'opqjqm26mfbg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=108&crop=smart&auto=webp&s=aaacee3023190db5e92188858391332ab9a78d4c', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=216&crop=smart&auto=webp&s=0675ac15e760f634f8b44f587eaf04ada7b65ec3', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=320&crop=smart&auto=webp&s=2bedf0fadd0862de0996775b47f166cf472ca7d5', 'width': 320}, {'height': 407, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=640&crop=smart&auto=webp&s=dfa2e20732a455e4417347836df19a77ef9ab9fb', 'width': 640}, {'height': 611, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=960&crop=smart&auto=webp&s=89129e7f29848651b4f4cbbf7b091cac91c2f6ba', 'width': 960}, {'height': 687, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?width=1080&crop=smart&auto=webp&s=db277237a27c376959b3240d7d2c6540e0f0c874', 'width': 1080}], 'source': {'height': 834, 'url': 'https://preview.redd.it/opqjqm26mfbg1.png?auto=webp&s=bc2839ec2384fd25076b1c8bfa5526ba16ae6921', 'width': 1310}, 'variants': {}}]} | |
Delta Compression for Fine-tuned Models and Datasets | 2 | Sparse compresses fine-tuned models and derivative datasets as deltas from their base versions.
>
**Post-hoc compression for ANY fine-tune.** Unlike LoRA (which requires training differently), Sparse works on models you've *already* trained.
||LoRA/PEFT|Sparse Lossless|Sparse Lossy|
|:-|:-|:-|:-|
|**When**|During training|After training|After training|
|**Size**|\~50 MB|\~1.4 GB|\~50 MB|
|**Quality**|\~95-99%|100%|\~95-99%|
|**Works on existing models**|❌ No|✅ Yes|✅ Yes|
**G**reat for Medical/Healthcare AI, Financial models, Legal/Government | 2026-01-05T01:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1q47kyt/delta_compression_for_finetuned_models_and/ | gagan-suie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q47kyt | false | null | t3_1q47kyt | /r/LocalLLaMA/comments/1q47kyt/delta_compression_for_finetuned_models_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=108&crop=smart&auto=webp&s=a1922307cbce21130fa7700f92f15fb97e294c6c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=216&crop=smart&auto=webp&s=f2899de10d8d6caf91ba9633c55f5c89d3f8ebd4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=320&crop=smart&auto=webp&s=90e28e168605cfe2e2574d098baf311c93a31fec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=640&crop=smart&auto=webp&s=bf238ed2c966141a8bebbb7c6400ac47bdaf4ee3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=960&crop=smart&auto=webp&s=614be9cad99ef987d09974c3c08aaffae5466f5c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?width=1080&crop=smart&auto=webp&s=a12e7a3e1718d605d762efe8bee8fc912642d440', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7huBgr12pmd5BowMmK11HkVqv2D13v-KSrb28sTDOPU.png?auto=webp&s=c6a837e7a5e7a4e00d2035a98f8a308224dba57e', 'width': 1200}, 'variants': {}}]} |
my "unfiltered" content creation workflow (step-by-step) | 1 | [removed] | 2026-01-05T00:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q47a2z/my_unfiltered_content_creation_workflow_stepbystep/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q47a2z | false | null | t3_1q47a2z | /r/LocalLLaMA/comments/1q47a2z/my_unfiltered_content_creation_workflow_stepbystep/ | false | false | self | 1 | null |
some questions about the Right Way™ to build LLM (specifically VLM) apps in 2026 | 2 | so, about six months ago I built this handwritten note transcription/search/annotation/management software with Claude Code out of Flask and PyTorch: [https://youtu.be/8TRuaBOGNwg?si=LcFsovis9DXxyNOg](https://youtu.be/8TRuaBOGNwg?si=LcFsovis9DXxyNOg)
it runs on my 16GB 5060Ti with Qwen2.5-VL-7B-Instruct. I have honestly been amazed at how well it performs even with my chicken-scratch-ass handwriting, especially since I have realized since then that I made a LOT of silly rookie mistakes when designing the software. for example: I implemented my own backend for talking to the card with PyTorch. why? because I am not very bright!!! and also the great majority of my own programming experience has been with small utility-scale things, not properly-architected software engineering.
I am \*nearly\* sure that there is a much better way to do this, and not incidentally cut a whole lot of code out of the software, by having the software essentially just be a client for an LLM engine of some kind that presents an easily-consumable API.
what I don't know is what this engine should be, running on Ubuntu 24.04LTS (or 26.04 I guess starting sometime in April). it looks like vLLM has "experimental support" for VLMs. llama.cpp can do it but (I'm not clear on this) it looks like you have to add another component in order to have an easy to use API.
part of the reason I want to change the software to do this is because I trust the maintainers of these projects a lot more than I trust myself to do the part of this work that requires careful attention to details of talking to hardware, etc., and why reinvent the wheel when someone else has already done it better? the other part is that it frees the application to be usable, theoretically, with lots of different providers. if you don't care about running the VLM engine locally then you could set it up to talk to Claude or ChatGPT or whatever.
what are y'all's thoughts on the right way to put this together? thanks. | 2026-01-05T00:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q474r1/some_questions_about_the_right_way_to_build_llm/ | starkruzr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q474r1 | false | null | t3_1q474r1 | /r/LocalLLaMA/comments/1q474r1/some_questions_about_the_right_way_to_build_llm/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'XAH_v-jFW4T3Pwb8QPOdK6TiWjmexqvTQO5dbJDWfLA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XAH_v-jFW4T3Pwb8QPOdK6TiWjmexqvTQO5dbJDWfLA.jpeg?width=108&crop=smart&auto=webp&s=34d262330a7be2ae485e43acc44d55a7ba229bfc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XAH_v-jFW4T3Pwb8QPOdK6TiWjmexqvTQO5dbJDWfLA.jpeg?width=216&crop=smart&auto=webp&s=227f394a8f970edb01e842c6a17cb8ac7c72bd88', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XAH_v-jFW4T3Pwb8QPOdK6TiWjmexqvTQO5dbJDWfLA.jpeg?width=320&crop=smart&auto=webp&s=379e5529216bd23b4e9833a2e8a75bdb2b774b1f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XAH_v-jFW4T3Pwb8QPOdK6TiWjmexqvTQO5dbJDWfLA.jpeg?auto=webp&s=91ebe10a83280757f85e4e3b62d475616af84a2b', 'width': 480}, 'variants': {}}]} |
Orla: use lightweight, open-source, local agents as UNIX tools. | 32 | [https://github.com/dorcha-inc/orla](https://github.com/dorcha-inc/orla)
The current ecosystem around agents feels like a collection of bloated SaaS with expensive subscriptions and privacy concerns. Orla brings large language models to your terminal with a dead-simple, Unix-friendly interface. Everything runs 100% locally. You don't need any API keys or subscriptions, and your data never leaves your machine. Use it like any other command-line tool:
$ orla agent "summarize this code" < main.go
$ git status | orla agent "Draft a commit message for these changes."
$ cat data.json | orla agent "extract all email addresses" | sort -u
It's built on the Unix philosophy and is pipe-friendly and easily extensible.
The README in the repo contains a quick demo.
Installation is a single command. The script installs Orla, sets up Ollama for local inference, and pulls a lightweight model to get you started.
You can use homebrew (on Mac OS or Linux)
$ brew install --cask dorcha-inc/orla/orla
Or use the shell installer:
$ curl -fsSL [https://raw.githubusercontent.com/dorcha-inc/orla/main/scrip](https://raw.githubusercontent.com/dorcha-inc/orla/main/scrip)... | sh
Orla is written in Go and is completely free software (MIT licensed) built on other free software. We'd love your feedback.
Thank you! :-)
Side note: contributions to Orla are very welcome. Please see ([https://github.com/dorcha-inc/orla/blob/main/CONTRIBUTING.md](https://github.com/dorcha-inc/orla/blob/main/CONTRIBUTING.md)) for a guide on how to contribute. | 2026-01-04T23:15:07 | https://www.reddit.com/gallery/1q44ujj | Available_Pressure47 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q44ujj | false | null | t3_1q44ujj | /r/LocalLLaMA/comments/1q44ujj/orla_use_lightweight_opensource_local_agents_as/ | false | false | 32 | null | |
Do you think that Apple will open source the Gemini model it purchases from Google? | 0 | Obviously, Apple has always been about “transparency” so they claim. They’ve also released former models especially the ones that run locally since they literally have no choice, people will reverse engineer it out. With the higher end Mac Studios possibly actually being able to run the large Gemini model locally, maybe they will allow those to be run locally and open source them? Or is all of this wishful thinking? Google might want to protect their IP and avoid lawsuits about data usage or Apple won’t want to release the thing they spent $1 billion on. Do you think it’s possible that this happens and we get a great local model? | 2026-01-04T23:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q44nse/do_you_think_that_apple_will_open_source_the/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q44nse | false | null | t3_1q44nse | /r/LocalLLaMA/comments/1q44nse/do_you_think_that_apple_will_open_source_the/ | false | false | self | 0 | null |
Squeezing Turing: Achieving 22,000+ tokens/sec training speed on RTX 2070 Super using custom Triton kernels | 1 | [removed] | 2026-01-04T22:58:14 | KZ-Media-Developers | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q44fba | false | null | t3_1q44fba | /r/LocalLLaMA/comments/1q44fba/squeezing_turing_achieving_22000_tokenssec/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'z92h1h9bxebg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/z92h1h9bxebg1.png?width=108&crop=smart&auto=webp&s=ef16e9e7cd106a7f40a198118e0f9cf9ef977416', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/z92h1h9bxebg1.png?width=216&crop=smart&auto=webp&s=7d43f6588577b0676c9b1b34b7dd0aa26ece6ce7', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/z92h1h9bxebg1.png?width=320&crop=smart&auto=webp&s=7ff3819f03bca8a1f9de5a4b2f117dfafaefbd0a', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/z92h1h9bxebg1.png?width=640&crop=smart&auto=webp&s=5154ab1017d1e1c7c525ac461194d17728531402', 'width': 640}], 'source': {'height': 495, 'url': 'https://preview.redd.it/z92h1h9bxebg1.png?auto=webp&s=cf73893992b6bcbea0a567a1c2a4764e0810f880', 'width': 881}, 'variants': {}}]} | |
Built an API to index videos into embeddings—optimized for running RAG locally | 0 | Hey LocalLLaMA folks, I'm working on something that might be useful if you're running RAG setups locally.
**The problem:** Video indexing for RAG is a pain. If you want to index your own videos (recordings, lectures, internal content) for local LLM querying, you either:
* Manually run Whisper + OCR + embedding code
* Rely on cloud APIs (defeats the purpose of local)
* Give up and just use transcripts (miss all visual context)
**What I built:**
An API that handles the messy preprocessing: transcript extraction, frame sampling, OCR, and embedding. You get back clean, chunked JSON that's ready to feed into your local vector store (Milvus, Weaviate, whatever).
**Key features:**
* **Transcript + OCR:** Captures both speech and visual content (slides, UI, diagrams)
* **Timestamped chunks:** So you can jump back to the source video
* **Embeddings included:** Ready for local semantic search
* **Minimal dependencies:** I keep processing lightweight (CPU-friendly frame sampling, local OCR option)
**Use cases for local builders:**
* Index internal/private videos without uploading to cloud
* Run semantic search over your own video archives using local LLMs
* Build local RAG agents that reference video content
**Demo:**
Live demo on the site shows what the output looks like. You can search inside sample videos and see the exact JSON chunks.
**The ask:**
If you're building local RAG stuff and this solves a pain point, I'd love feedback. Also curious if you'd want self-hosted/on-prem options.
**URL:** [https://www.vector-vid.com/](https://www.vector-vid.com/) | 2026-01-04T22:44:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q442q1/built_an_api_to_index_videos_into/ | soroushamdg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q442q1 | false | null | t3_1q442q1 | /r/LocalLLaMA/comments/1q442q1/built_an_api_to_index_videos_into/ | false | false | self | 0 | null |
I built a local GUI for vector DBs (pgvector, Qdrant, Chroma, more) | 5 | 👋 Hey everyone,
I’ve been working a lot with vector databases in local and self-hosted setups, and I kept missing a good way to actually inspect what’s inside the vector store without spinning up notebooks or writing scripts.
Most tools are cloud-first or tied to a single provider, so I started building VectorDBZ, a desktop app for exploring and debugging vector databases with a strong focus on local workflows.
What it supports today:
• Connect to local or self-hosted Qdrant, Weaviate, Milvus, Chroma, and pgvector (Postgres)
• Browse collections, vectors, and metadata
• Run vector similarity search with filters and top-K
• Generate embeddings from text or files using local models (Ollama, etc) or hosted APIs
• Visualize embeddings using PCA, t-SNE, or UMAP
• Analyze distance distributions, outliers, duplicates, and metadata separation
All connections, configs, and API keys are stored locally on your machine.
It’s still a work in progress, but it’s already useful for debugging local RAG pipelines and semantic search setups.
GitHub
https://github.com/vectordbz/vectordbz
I’d really love feedback from people running local LLM and RAG setups:
• How do you currently inspect or debug embeddings and retrieval quality?
• Do you mostly rely on scripts, notebooks, or custom dashboards?
• What signals help you decide whether embeddings are “good enough”?
• Would per-query breakdowns, recall diagnostics, or hybrid search views be useful?
• Any local-only features you wish vector DB tools supported better?
• Which vector DBs or local embedding models should I prioritize next?
If you find this useful, a ⭐ on GitHub would mean a lot and helps keep me motivated to keep building.
Thanks!
| 2026-01-04T22:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q441tp/i_built_a_local_gui_for_vector_dbs_pgvector/ | snirjka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q441tp | false | null | t3_1q441tp | /r/LocalLLaMA/comments/1q441tp/i_built_a_local_gui_for_vector_dbs_pgvector/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=108&crop=smart&auto=webp&s=3c6c2a78276aafab626c76a6f5ee8cafd807a880', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=216&crop=smart&auto=webp&s=65f6594c1a3ec18a63112dca58237d8293eb5698', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=320&crop=smart&auto=webp&s=63f418da6ac8374a24a21a10db97db59cc2932e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=640&crop=smart&auto=webp&s=ce39cf3bd84946e30c19a5d1ac97bba1fa7a5fc3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=960&crop=smart&auto=webp&s=f2341511e71c8ff0e6925fb66442b385d41961a3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?width=1080&crop=smart&auto=webp&s=7455f25fd4f1130ed45cf9c1afdda2e884bc14fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DztiU_ZLwX9J4ojCthSzAqT4W5QGUW9JHJgCps5VRuQ.png?auto=webp&s=0580f107fd5fa343fc4a60e0e1880579ad8a2b04', 'width': 1200}, 'variants': {}}]} |
Local YouTube Video Transcription/ summarizer | 0 | Anyone interested in how I built this tool or want to discuss MCP, LM Studio, or GPT-OSS 20B? Feel free to reach out!
Also, what do you think about Meta moving away from its open-source AI strategy in favor of a paid model? Do you think we’ll see a 20B model that outperforms GPT-OSS? And with NVIDIA already having the "Nemotron" 30B model, do you think they could release a 20B model that’s even better than the 30B?
Looking forward to hearing your thoughts! | 2026-01-04T22:37:37 | https://v.redd.it/47pskfu9tebg1 | Serious_Molasses313 | /r/LocalLLaMA/comments/1q43wqo/local_youtube_video_transcription_summarizer/ | 1970-01-01T00:00:00 | 0 | {} | 1q43wqo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/47pskfu9tebg1/DASHPlaylist.mpd?a=1770287868%2CMjZlMzEyNGE4M2FhODRlMWJjM2FmZDBlMmUxOTdiMDJlNDdmZDAxOWYyMjMyZjg3OWU2NzgzMDg5NzljODE2MQ%3D%3D&v=1&f=sd', 'duration': 377, 'fallback_url': 'https://v.redd.it/47pskfu9tebg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/47pskfu9tebg1/HLSPlaylist.m3u8?a=1770287868%2CMzI0ODA4ZGQ2MGJhNzkxN2M4MWQ3NDNmN2JlZGZmZDE5Y2Q3OTQ0NzNiMTFkZjRhMWM0MWRhNzI0ZDc5OGRkZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/47pskfu9tebg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}} | t3_1q43wqo | /r/LocalLLaMA/comments/1q43wqo/local_youtube_video_transcription_summarizer/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq.png?width=108&crop=smart&format=pjpg&auto=webp&s=e8d76800a4a017090653b5e8bdc2b45a231b5c5f', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq.png?width=216&crop=smart&format=pjpg&auto=webp&s=bc9a6c81ebd7f515aa65a59c60fb52134007763c', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq.png?width=320&crop=smart&format=pjpg&auto=webp&s=80006edf11948373632b3817c15febe4fa40445e', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq.png?width=640&crop=smart&format=pjpg&auto=webp&s=baeb52916122422c91de31fc71e9d9200cabb66c', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/eXFqY3d5dTl0ZWJnMZE0patPv2SenRZKLAsS-u86TTuGgfsolMtVfL4VVegq.png?format=pjpg&auto=webp&s=59b575723812d3b1e6e6245adfdcca02f3c97eed', 'width': 778}, 'variants': {}}]} | |
Squeezing Turing: Achieving 22,000+ tokens/sec training speed on RTX 2070 Super using custom Triton kernels | 1 | [removed] | 2026-01-04T22:35:03 | Extension_Phrase_801 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q43ufh | false | null | t3_1q43ufh | /r/LocalLLaMA/comments/1q43ufh/squeezing_turing_achieving_22000_tokenssec/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'p8r5yuhnsebg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/p8r5yuhnsebg1.png?width=108&crop=smart&auto=webp&s=c767dcafbc82efdcbec3359f329d0ecccc479635', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/p8r5yuhnsebg1.png?width=216&crop=smart&auto=webp&s=e0569a9f0d21fe235fb0ea4a181b845dad83ec5d', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/p8r5yuhnsebg1.png?width=320&crop=smart&auto=webp&s=d7d5ce8e90228fe3880b4645a825d9a407204637', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/p8r5yuhnsebg1.png?width=640&crop=smart&auto=webp&s=65c9d6c9a9573c952a563c053059066c0c4d97c9', 'width': 640}], 'source': {'height': 495, 'url': 'https://preview.redd.it/p8r5yuhnsebg1.png?auto=webp&s=e189d4f44ba63c3b6c047a1c4ee267eaa683e696', 'width': 881}, 'variants': {}}]} | |
Local YouTube Transcription/ summarizer | 0 | Close Source companies just want our data. Only you can do something about it.
Since using Local Ai I've stopped signing into things I don't need to. And if I do sign in I don't onteract with the front end | 2026-01-04T22:12:18 | https://v.redd.it/j7ve3xjroebg1 | Serious_Molasses313 | /r/LocalLLaMA/comments/1q439lp/local_youtube_transcription_summarizer/ | 1970-01-01T00:00:00 | 0 | {} | 1q439lp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/j7ve3xjroebg1/DASHPlaylist.mpd?a=1770286344%2CMWM0OGFlZjgzYjEzYWFkZDY0M2NlYWU1NjZmOWQxOGFkODMzZWZlZjczMGMzYmMwMjFlMzExNTJiNjhhZTgyZA%3D%3D&v=1&f=sd', 'duration': 377, 'fallback_url': 'https://v.redd.it/j7ve3xjroebg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/j7ve3xjroebg1/HLSPlaylist.m3u8?a=1770286344%2CNTY3NWNkMTcwZmZkYzUwNTkwNWM1OTk4Y2NlYTYyNGY3MTk1MzNjNTZiMmRhZjk1Mjg1Njc3MGU1OGRmY2VlNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j7ve3xjroebg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}} | t3_1q439lp | /r/LocalLLaMA/comments/1q439lp/local_youtube_transcription_summarizer/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=802ffa65377153c4560c524b44556f0a2f0d59e5', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=e0ac05b55da7a51ff216f8de241387d1d8c59665', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=e3da61d092f241e3ab3222ac5ca0b7e2f30e9c46', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=d39e85415f2419f8be42d41a0973f055533ed3f2', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/NGlwNTY5a3JvZWJnMf2YES6qzjqvSRR8vchIRSZT6AAEu-s21DziMm8t8GFZ.png?format=pjpg&auto=webp&s=6e7ad1df7e79b0199e2fa1364e9fdac5f9e2d8b4', 'width': 778}, 'variants': {}}]} | |
Introducing Adaptive-P: A New Sampler for Creative Text Generation (llama.cpp PR) | 118 | Hey everyone,
I wanted to share a sampling method we've been working on called Adaptive-P. Before I get into it, I should mention that due to a visual impairment, I used AI assistance in writing both the documentation and this post. I want to be upfront about that. The algorithm itself and the underlying idea are human created, however.
**What is it?**
Adaptive-P is a different approach to token sampling that tries to address models getting stuck in predictable patterns. When generating creative content, models often fall back on the same phrasing, sentence structures, and narrative beats. The model has more interesting options available, but standard sampling methods don't give you a way to encourage it toward those alternatives.
**How does it work?**
Instead of uniformly scaling probabilities like temperature does, or making binary keep/discard decisions like truncation methods, Adaptive-P lets you specify a probability range you want to target. It applies a transformation that creates a preference curve centered on your target probability—tokens near the target get boosted, tokens far from it get suppressed.
The transformation uses unbounded negative logits for distant tokens rather than a floor value. This prevents probability from accumulating in the tail of the distribution, which is a problem that affects some other approaches to forced alternative selection.
The sampler maintains an exponential moving average of the original probabilities of selected tokens. It uses this history to compute an adjusted target at each step. If recent selections have been running above your configured target, the sampler compensates by aiming lower on the next step, and vice versa. This feedback loop keeps the average selection probability tracking toward your target over time.
**Chain breaking**
The adaptive mechanism is what breaks repetitive high-confidence chains. When the model keeps selecting dominant tokens, the history shifts upward, which pushes the calculated target downward, which makes alternatives more attractive. The sampler naturally resists getting stuck in a rut without requiring external repetition penalties.
**What's it good for?**
This is designed for creative work—fiction, roleplay, brainstorming. It's not meant for tasks where accuracy matters more than variety.
It pairs well with Min-P, which handles removing genuinely bad options while Adaptive-P handles selection among the remaining quality candidates. Adaptive-P needs to be the final sampler in the chain since it performs the actual token selection.
**Links**
Documentation:
https://github.com/MrJackSpade/adaptive-p-docs/blob/main/Documentation.md
llama.cpp PR:
https://github.com/ggml-org/llama.cpp/pull/17927
Discord discussion:
https://discord.com/channels/1238219753324281886/1447392417769721926
Any and all questions will likely be answered by the documentation, or the discord server. | 2026-01-04T21:58:39 | https://www.reddit.com/r/LocalLLaMA/comments/1q42wtt/introducing_adaptivep_a_new_sampler_for_creative/ | DragPretend7554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q42wtt | false | null | t3_1q42wtt | /r/LocalLLaMA/comments/1q42wtt/introducing_adaptivep_a_new_sampler_for_creative/ | false | false | self | 118 | {'enabled': False, 'images': [{'id': '6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=108&crop=smart&auto=webp&s=7810b4b016491ea78491ab3177676631484c2072', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=216&crop=smart&auto=webp&s=11d5b8abb2b619d2728c0be1ceeeb2baebf8e2c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=320&crop=smart&auto=webp&s=080937b18e86b37e2172ca0f0253e79f4e2160c5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=640&crop=smart&auto=webp&s=708f227de329f40dea640568de76df6a7b28bac3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=960&crop=smart&auto=webp&s=def63ccd2bd3a7b5dcb7720b19d1cf193fb888f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?width=1080&crop=smart&auto=webp&s=344932614f50c39f6d90e4c202f0d88360362604', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6UyHe_V93D0kKJ0vL_htKTbCfPCuxROJj4hsjDX_86I.png?auto=webp&s=d81804a93d9eadf285b58a2224982fcde569a7b6', 'width': 1200}, 'variants': {}}]} |
Stache AI: Self-hosted RAG that runs 100% locally with Ollama + connects to Claude via MCP | 1 | Stache AI is a personal knowledge base that runs entirely on your machine - no API keys, no cloud, no data leaving your network.
# The Stack (all local)
* **Embeddings**: Ollama with nomic-embed-text (or mxbai-embed-large)
* **Vector DB**: Qdrant (runs in Docker)
* **LLM**: Your choice - Ollama for local, or OpenAI/Anthropic if you want
* **Storage**: MongoDB for document metadata
# Quick Start
git clone https://github.com/stache-ai/stache-ai.git
cd stache-ai
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d
That's it. First run pulls Ollama and the embedding model automatically.
Open [http://localhost:8000](http://localhost:8000/) \- drag and drop PDFs, ask questions.
# Why I Built This
I have years of notes, research papers, and documentation. I wanted to:
1. Search by meaning, not keywords
2. Keep everything local (privacy)
3. Use it from Claude Desktop/Code via MCP
4. Not deal with OpenAI API costs for embeddings
# Ollama Config
Default uses `nomic-embed-text` (768 dims). To use a different model:
# In .env
OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
EMBEDDING_DIMENSION=1024
# MCP Integration (Optional)
If you use Claude Desktop/Code, you can connect Stache so Claude can search your docs:
pip install stache-tools
Add to `~/.claude.json`:
{
"mcpServers": {
"stache": {
"command": "stache-mcp",
"env": {"STACHE_API_URL": "http://localhost:8000"}
}
}
}
Then ask Claude: "Search my stache for..."
# What It Handles
* PDF (with OCR for scanned docs)
* EPUB, DOCX, PPTX
* Markdown
* VTT/SRT transcripts
# Links
* GitHub: [https://github.com/stache-ai/stache-ai](https://github.com/stache-ai/stache-ai)
* CLI/MCP tools: [https://github.com/stache-ai/stache-tools](https://github.com/stache-ai/stache-tools)
* Docker Hub: [https://hub.docker.com/r/stacheai/stache-ai](https://hub.docker.com/r/stacheai/stache-ai)
MIT licensed. Happy to answer questions about the local setup. | 2026-01-04T21:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q42wa9/stache_ai_selfhosted_rag_that_runs_100_locally/ | jtpenny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q42wa9 | false | null | t3_1q42wa9 | /r/LocalLLaMA/comments/1q42wa9/stache_ai_selfhosted_rag_that_runs_100_locally/ | false | false | self | 1 | null |
Beyond Simple Chatbots: A Production-Grade Agent Architecture with "Context as a Service" [Discussion] | 1 | [removed] | 2026-01-04T21:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q42een/beyond_simple_chatbots_a_productiongrade_agent/ | Evening-Arm-34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q42een | false | null | t3_1q42een | /r/LocalLLaMA/comments/1q42een/beyond_simple_chatbots_a_productiongrade_agent/ | false | false | self | 1 | null |
Query validation layer for local LLM agents that talk to databases | 0 | Running a local model that generates SQL for a database? Built a small validation layer for scope control and observability.
Not really about preventing attacks (your model probably isn't trying to DROP anything). More about:
1. **Hard boundaries** - define exactly which tables the agent can access
2. **Observability** - log when queries go outside the expected scope
3. **Defense in depth** - another layer alongside read-only DB creds
Example setup:
from proxql import Validator
validator = Validator(
mode="read_only",
allowed_tables=["products", "inventory", "orders"]
)
def run_query(query: str):
check = validator.validate(query)
if not check.is_safe:
print(f"Out of scope: {check.reason}")
# Usually means my prompt needs work
return None
return db.execute(query)
**What it does:**
- Table allowlist - hard boundary on accessible tables (handles subqueries, CTEs, JOINs)
- Statement filtering - read_only only allows SELECT, write_safe allows INSERT/UPDATE
- Multi-dialect - works with SQLite, Postgres, MySQL via sqlglot
**What it doesn't do:**
- Replace DB permissions - still use a read-only user
- Catch everything - it's a guardrail, not a guarantee
Mostly helpful for debugging. When a query gets blocked, I know my prompting needs adjustment.
---
pip install proxql
GitHub: https://github.com/zeredbaron/proxql
---
What are you all doing for scope control with local models? Just trusting the model + DB permissions, or adding layers? | 2026-01-04T21:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q429hh/query_validation_layer_for_local_llm_agents_that/ | Educational_Poet_862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q429hh | false | null | t3_1q429hh | /r/LocalLLaMA/comments/1q429hh/query_validation_layer_for_local_llm_agents_that/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=108&crop=smart&auto=webp&s=fa2c43dc7ce117cb16e64bb372f4af57d459aaa7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=216&crop=smart&auto=webp&s=b7088160f0a9dd71080dea302f4dc767e15b3f79', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=320&crop=smart&auto=webp&s=e6624fe90ec9004dbc1953ba85a9e236ee8bd9bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=640&crop=smart&auto=webp&s=6ddee63d67cecfd76aeec09676fb1c3085d8e547', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=960&crop=smart&auto=webp&s=dd9b66315fe49d328bf240d7eb3dca4a3c03153b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?width=1080&crop=smart&auto=webp&s=ef92b1303afb43ec6511fc24857472f3ec2316c9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w7T4wVoLVrimnhb2yWIi8hJoTcBsMgdlfTtYuNVCNYI.png?auto=webp&s=1ef3ff86bf021fd14d4f2f5168a451921fb64533', 'width': 1200}, 'variants': {}}]} |
Query validation for local LLM agents that talk to databases | 1 | Running a local model that generates SQL? Here's something I built after my Llama setup decided to improvise.
Had an agent querying a SQLite database for a side project. Worked fine until the model hallucinated this gem:
SELECT * FROM users WHERE id = 1; DROP TABLE users; --
Classic injection pattern, probably from training data. My fault for not using parameterized queries, but also made me realize I wanted a sanity check layer.
Built a small library that validates queries before execution:
```python
from proxql import Validator

validator = Validator(
    mode="read_only",
    allowed_tables=["products", "inventory"]
)

def run_query(query: str):
    check = validator.validate(query)
    if not check.is_safe:
        print(f"Blocked: {check.reason}")
        return None
    return db.execute(query)
```
**What it catches:**
- Wrong tables (agent tried to access something not in the allowlist)
- Write operations when you only want reads
- Multi-statement injection (the ; DROP TABLE pattern)
- Hex-encoded keywords (0x44524F50 = DROP)
**What it doesn't do:**
- Replace DB permissions - still use a read-only user
- Prevent slow queries - no cost estimation
- Make anything truly "safe" - it's a guardrail
Works with any SQL dialect (Postgres, MySQL, SQLite, etc.) since it uses sqlglot under the hood.
---
pip install proxql
GitHub: https://github.com/zeredbaron/proxql
---
Curious if others are doing query validation with local models. What patterns are you seeing from your setups that I should be catching? | 2026-01-04T21:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/1q41v80/query_validation_for_local_llm_agents_that_talk/ | Educational_Poet_862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q41v80 | false | null | t3_1q41v80 | /r/LocalLLaMA/comments/1q41v80/query_validation_for_local_llm_agents_that_talk/ | false | false | self | 1 | null |
5 tools that will make you a 10x better researcher | 1 | [removed] | 2026-01-04T20:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q41dra/5_tools_that_will_make_you_a_10x_better_researcher/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q41dra | false | null | t3_1q41dra | /r/LocalLLaMA/comments/1q41dra/5_tools_that_will_make_you_a_10x_better_researcher/ | false | false | self | 1 | null |
GLM-Image model from Z.ai is coming | 311 | [https://github.com/huggingface/transformers/pull/43100/files](https://github.com/huggingface/transformers/pull/43100/files) | 2026-01-04T20:54:04 | Ravencloud007 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q41bw1 | false | null | t3_1q41bw1 | /r/LocalLLaMA/comments/1q41bw1/glmimage_model_from_zai_is_coming/ | false | false | default | 311 | {'enabled': True, 'images': [{'id': 'sm31vizebebg1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/sm31vizebebg1.png?width=108&crop=smart&auto=webp&s=d9ee9099b93640865b93005ba663a7a509ca8879', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/sm31vizebebg1.png?width=216&crop=smart&auto=webp&s=475b49c4063caa36663d7d7a7a784a29e22f334e', 'width': 216}, {'height': 122, 'url': 'https://preview.redd.it/sm31vizebebg1.png?width=320&crop=smart&auto=webp&s=8d890eb698e716ff996f94d8e6bbed2b105cd9c6', 'width': 320}, {'height': 245, 'url': 'https://preview.redd.it/sm31vizebebg1.png?width=640&crop=smart&auto=webp&s=5ae576450ba7112c06760ff8cddee6f5bdd7b672', 'width': 640}], 'source': {'height': 337, 'url': 'https://preview.redd.it/sm31vizebebg1.png?auto=webp&s=6258c7b3bc3682df62b847030353c28458fc2444', 'width': 877}, 'variants': {}}]} | |
gsh - play with any local model directly in your shell REPL or scripts | 14 | Sharing a holiday side project i just built: gsh - a new shell, like bash, zsh, fish, but fully agentic. I find it really useful for playing with local models both interactively and in automation scripts. [https://github.com/atinylittleshell/gsh](https://github.com/atinylittleshell/gsh)
Key features:
- It can predict the next shell command you may want to run, or help you write one when you've forgotten how to
- It can act as a coding agent itself, or delegate to other agents via ACP
- It comes with an agentic scripting language which you can use to build agentic workflows, or to customize gsh (almost the entire REPL can be customized, like neovim)
- Use whatever LLM you like - a lot can be done with local models
- Batteries included - syntax highlighting, tab completion, history, auto-suggestion, starship integration all work out of the box
Super early of course, but i've been daily driving for a while and replaced zsh with it. If you think it's time to try a new shell or new ways to play with local models, give it a try and let me know how it goes! :) | 2026-01-04T20:46:34 | atinylittleshell | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q414vj | false | null | t3_1q414vj | /r/LocalLLaMA/comments/1q414vj/gsh_play_with_any_local_model_directly_in_your/ | false | false | default | 14 | {'enabled': True, 'images': [{'id': 'yh1dwt9j8ebg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=108&crop=smart&auto=webp&s=aaa84f0428d9dfa565be749011f17fcda281adee', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=216&crop=smart&auto=webp&s=76fbb8f8d14b42e32bd6a64538999d3871a6cf86', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=320&crop=smart&auto=webp&s=f0000c4580303be9ddd8a6d709ef752c56032af9', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=640&crop=smart&auto=webp&s=383dc60edeb928b15b21feb2777015966be76471', 'width': 640}, {'height': 586, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=960&crop=smart&auto=webp&s=752ac0e12aa956d3281cf530fb7f27a966cab8db', 'width': 960}, {'height': 659, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?width=1080&crop=smart&auto=webp&s=029865a36f1af8c28995d19c05ab76a4520ee8f4', 'width': 1080}], 'source': {'height': 856, 'url': 'https://preview.redd.it/yh1dwt9j8ebg1.png?auto=webp&s=ca2f625e738f917a46cd4f302e5fe54e86bfd328', 'width': 1402}, 'variants': {}}]} | |
5070 Ti slower than 4070 Ti when ram spills? | 7 | Hi, I recently upgraded my GPU from a 4070 Ti (12GB) to a 5070 Ti (16GB). When I load a model with a context that's larger than the VRAM and it spills to system memory, the 5070 Ti is way slower.
E.g. with Ministral 3 14B at 64k ctx I get 23 t/s with the 4070 Ti, but only 11 t/s with the newer 5070 Ti. When there is no RAM spill the 5070 Ti is faster, which is to be expected.
Why can that be the case? Surely the older card can not be this much faster when offloading to system ram?
Loading this model with 262144 ctx and q4 kv cache quant will result in 33 t/s on 4070 Ti and 9 t/s on 5070 Ti. This is weird, isn't it? | 2026-01-04T20:32:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q40r6e/5070_ti_slower_than_4070_ti_when_ram_spills/ | AllTey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q40r6e | false | null | t3_1q40r6e | /r/LocalLLaMA/comments/1q40r6e/5070_ti_slower_than_4070_ti_when_ram_spills/ | false | false | self | 7 | null |
Real-time visibility into PyTorch training (dataloader stalls, memory leaks, step time drift) | 7 | Hey,
Quick share, I have been working on TraceML, a live observability tool for PyTorch training that shows you what's happening in real-time while your job runs.
**What it tracks live:**
* Dataloader fetch time (catches input pipeline stalls)
* GPU step time (non-blocking CUDA events, no sync overhead)
* GPU CUDA memory (spots leaks before OOM)
* Layerwise memory and compute time
Has two modes: lightweight **essential** mode that runs with minimal overhead, and a deeper **diagnostic** mode for layerwise breakdowns when you need it.
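For context, the non-blocking GPU timing mentioned above is the standard CUDA-events pattern in PyTorch; here's a minimal sketch (generic PyTorch, not TraceML's internals):

```python
import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
# ... one training step: forward, backward, optimizer.step() ...
end.record()

torch.cuda.synchronize()            # resolve events later, off the hot path
step_ms = start.elapsed_time(end)   # elapsed GPU time in milliseconds
```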
Works with any PyTorch model. I have tested on LLM fine-tuning (TinyLLaMA + QLoRA), but it's model-agnostic.
**Read the full breakdown:** [https://medium.com/p/af8fbd899928](https://medium.com/p/af8fbd899928)
**GitHub:** [https://github.com/traceopt-ai/traceml](https://github.com/traceopt-ai/traceml)
Currently supports single GPU, multi-GPU coming soon. If anyone tries it and has feedback or feature requests, I am actively responding to issues. | 2026-01-04T20:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1q40qec/realtime_visibility_into_pytorch_training/ | traceml-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q40qec | false | null | t3_1q40qec | /r/LocalLLaMA/comments/1q40qec/realtime_visibility_into_pytorch_training/ | false | false | self | 7 | null |
What's better, MoE or dense models? | 0 | What would be better: an 80B MoE with 3B active like Qwen Next, or a 70B dense model like Llama 3.3? MoEs are very fast, but do they take a hit in performance, like in knowledge, or are they as good as a dense model? And if they aren't, would a model like Qwen3 VL 32B be better than Qwen Next 80B? | 2026-01-04T20:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/1q40iez/whats_better_moe_or_dense_models/ | Pleasant-Key3390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q40iez | false | null | t3_1q40iez | /r/LocalLLaMA/comments/1q40iez/whats_better_moe_or_dense_models/ | false | false | self | 0 | null |
For those of you who are training their own LLM or finetuning an existing LLM, what are you trying to get them to do that they are not already doing? | 7 | I have been curious about finetuning or training an LLM just to learn more about the process and how effective it is. However, I also don't have a great idea on what people mostly train or finetune an LLM to do given that it is currently already so powerful.
If any of you are training your own LLM or finetuning an existing one, I would love to hear what you are trying to get it to do that existing LLMs can't do.
| 2026-01-04T20:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q408zz/for_those_of_you_who_are_training_their_own_llm/ | Upset-Ad-8704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q408zz | false | null | t3_1q408zz | /r/LocalLLaMA/comments/1q408zz/for_those_of_you_who_are_training_their_own_llm/ | false | false | self | 7 | null |
Ratios of Active Parameters to Total Parameters on major MoE models | 54 | |Model|Total Params|Active Params|Ratio|
:--|:--|:--|:--|
|GLM 4.5 Air|106|12|8.8|
|GLM 4.6 and 4.7|355|32|11.1|
|GPT OSS 20B|21|3.6|5.8|
|GPT OSS 120B|117|5.1|22.9|
|Qwen3 30B A3B|30|3|10|
|Qwen3 Next 80B A3B|80|3|26.7|
|Qwen3 235B A22B|235|22|10.7|
|Deepseek 3.2|685|37|18.5|
|MiniMax M2.1|230|10|23.0|
|Kimi K2|1000|32|31.3|
And for fun, some oldies:
|Model|Total Params (B)|Active Params (B)|Ratio|
|:--|:--|:--|:--|
|Mixtral 8x7B|47|13|3.6|
|Mixtral 8x22B|141|39|3.6|
|Deepseek V2|236|21|11.2|
|Grok 2|270|115|2.3 (record lowest?)|
(Disclaimer: I'm just a casual user, and I know very little about the science of LLMs. My opinion is entirely based on osmosis and vibes.)
Total Parameters roughly tracks the breadth of knowledge an LLM can store, while Active Parameters tracks its per-token intelligence. We've been trending towards higher ratios of Total to Active params, probably because of the focus on benchmarks. Models have to know all sorts of trivia to pass all those multiple-choice tests, and know various programming languages to pass coding benchmarks.
I personally prefer high Active (sometimes preferring dense models for this reason), because I mainly use local LLMs for creative writing or one-off local tasks where I want it to read between the lines instead of me having to be extremely clear.
Fun thought: how would some popular models have changed with a different parameter count? What if GLM-4.5-Air was 5B active and GPT-OSS-120B was 12B? What if Qwen3 80B was 10B active? | 2026-01-04T20:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q401ka/ratios_of_active_parameters_to_total_parameters/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q401ka | false | null | t3_1q401ka | /r/LocalLLaMA/comments/1q401ka/ratios_of_active_parameters_to_total_parameters/ | false | false | self | 54 | null |
How do guardrails work with Local LLMs? | 0 | For (probably) good reasons, many commercial LLMs currently have guardrails/safeguards in place. For example, it may be difficult to get an answer for things like:
Help me write some code to scrape Twitter
Help me reverse engineer Instagram's mobile API
The reason given is along the lines of:
"I need to slow this down a notch and be clear about boundaries.
I can explain, at a high level, how X/Twitter’s private APIs work and how people study them, but I can’t provide step-by-step instructions, concrete endpoints, headers, tokens, or code that bypasses X’s safeguards."
My understanding is that these guardrails are placed through system prompts (but I could be wrong about this).
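If that understanding is right, here's what that control looks like in practice; a minimal sketch against a local OpenAI-compatible server (llama.cpp, vLLM, etc.; the endpoint and model name are placeholders):

```python
from openai import OpenAI

# Local server; the API key is unused but required by the client.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # whatever name your server exposes
    messages=[
        # You author the system prompt yourself; no vendor text is injected.
        {"role": "system", "content": "You are a direct, technical assistant."},
        {"role": "user", "content": "How do people study a mobile app's private API?"},
    ],
)
print(resp.choices[0].message.content)
```

Note that refusal behavior is also partly trained into the weights, so a custom system prompt removes vendor-added instructions but not everything.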
If I used an open-source LLM, I would have full control over system prompts. Do these models then provide a better resource for such questions? | 2026-01-04T19:48:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q3zl8a/how_do_guardrails_work_with_local_llms/ | Upset-Ad-8704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3zl8a | false | null | t3_1q3zl8a | /r/LocalLLaMA/comments/1q3zl8a/how_do_guardrails_work_with_local_llms/ | false | false | self | 0 | null |
Best memory strategy for long-form NSFW/Erotic RP: Raw context vs. Summarization vs. MemGPT? | 26 | I’m experimenting with a dedicated LLM bot for writing long-form erotic stories and roleplay, and I’m hitting the classic context wall. I’m curious about what the community finds most effective for maintaining "the heat" and prose quality over long sessions.
Which approach yields better results in your experience?
1. Full Raw Context (Sliding Window): Sending the entire recent history. It keeps the vibe and prose style consistent, but obviously, I lose the beginning of the story once the token limit is reached.
2. LLM-based Summarization: Using a secondary (or the same) model to summarize previous events. My concern here is that summaries often feel too "clinical" or dry, which tends to kill the tension and descriptive nuances that are crucial for erotic writing.
3. Persistent Memory (MemGPT / Letta / Mem0): Using a memory engine to store facts and character traits. Does this actually work for keeping the narrative "flow," or is it better suited only for static lore facts?
I’m currently looking at SillyTavern’s hybrid approach (Lorebooks + Summarize extension), but I’m wondering if anyone has found a way to use MemGPT-style memory without making the AI sound like a robot reciting a Wikipedia entry mid-scene.
What’s your setup for keeping the story consistent without losing the stylistic "soul" of the writing?
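(For reference, option 1 is simple enough to sketch: a token-budget sliding window that always keeps the persona prompt. `count_tokens` is a placeholder for whatever tokenizer you use.)

```python
def build_context(system_prompt, turns, budget_tokens, count_tokens):
    kept = []
    used = count_tokens(system_prompt)
    for turn in reversed(turns):          # walk newest-first
        cost = count_tokens(turn)
        if used + cost > budget_tokens:
            break                         # older history falls off here
        kept.append(turn)
        used += cost
    return [system_prompt] + list(reversed(kept))
```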
| 2026-01-04T19:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q3zl67/best_memory_strategy_for_longform_nsfwerotic_rp/ | FollowingFresh6411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3zl67 | false | null | t3_1q3zl67 | /r/LocalLLaMA/comments/1q3zl67/best_memory_strategy_for_longform_nsfwerotic_rp/ | false | false | nsfw | 26 | null |
Good article on training vs inference architectures for data center compute (and why Groq for Nvidia) | 3 | 2026-01-04T19:34:53 | https://venturebeat.com/infrastructure/inference-is-splitting-in-two-nvidias-usd20b-groq-bet-explains-its-next-act | Mental-At-ThirtyFive | venturebeat.com | 1970-01-01T00:00:00 | 0 | {} | 1q3z89v | false | null | t3_1q3z89v | /r/LocalLLaMA/comments/1q3z89v/good_article_on_training_vs_inference/ | false | false | default | 3 | null | |
FLUX.2-dev-Turbo is surprisingly good at image editing | 92 | Getting excellent results, FAL really cooked with this FLUX.2 \[dev\] LoRA: [https://huggingface.co/fal/FLUX.2-dev-Turbo](https://huggingface.co/fal/FLUX.2-dev-Turbo)
The speed and cost (**only 8 inference steps!**) of it makes it very competitive with closed models. Perfect for daily creative workflow and local use. | 2026-01-04T19:20:47 | https://v.redd.it/os8k650sudbg1 | paf1138 | /r/LocalLLaMA/comments/1q3yug4/flux2devturbo_is_surprisingly_good_at_image/ | 1970-01-01T00:00:00 | 0 | {} | 1q3yug4 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/os8k650sudbg1/DASHPlaylist.mpd?a=1770276054%2CN2FiMjMwODIzMTMxZDY0MGU2MGZhZTk0MDMwM2I1N2IyNmNmNzZjMmNmZjYzOTkzZWFmMGZlYzQ5Y2IyY2QwNg%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/os8k650sudbg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/os8k650sudbg1/HLSPlaylist.m3u8?a=1770276054%2CZTI1NGE3Nzk2ZjEzNTk2ZWIzYTk0M2M0YmM0ZGQyNTkyMjRmMzA1MDJlNTA2NWIwNDdiOWM2MTljNGMxMjg3ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/os8k650sudbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1508}} | t3_1q3yug4 | /r/LocalLLaMA/comments/1q3yug4/flux2devturbo_is_surprisingly_good_at_image/ | false | false | 92 | {'enabled': False, 'images': [{'id': 'dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=108&crop=smart&format=pjpg&auto=webp&s=0d90cd104cd8e0345c7c97169dd2a5c8b671a297', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=216&crop=smart&format=pjpg&auto=webp&s=25ba685cf45f1701bc0ce47f92515b507204bff2', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=320&crop=smart&format=pjpg&auto=webp&s=743a4f0c8863d6e55a7a7f5646e3b2d1746b1071', 'width': 320}, {'height': 458, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=640&crop=smart&format=pjpg&auto=webp&s=af51929a71d02abb7c0702f3a891ae50c40918a5', 'width': 640}, {'height': 687, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=960&crop=smart&format=pjpg&auto=webp&s=f327824a63a286171322126884b9f45ab4f602ef', 'width': 960}, {'height': 773, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8cabde832c51f242a41a28215a7298f0e7b1bd2f', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/dmN0aGFnMHN1ZGJnMWhQRIGbuygHvibzarhf8EVUxtFTiMplyPAlWNfH6-Zg.png?format=pjpg&auto=webp&s=5efabadcffe133bc3fda320e04692067801ec65a', 'width': 3016}, 'variants': {}}]} | |
I built a Client-Side RPG Engine using Gemini 3 and Local Embeddings (Xenova). Here is the source code for the "Memory" system. | 0 | 2026-01-04T19:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q3ycd5/i_built_a_clientside_rpg_engine_using_gemini_3/ | The_Greywake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3ycd5 | false | null | t3_1q3ycd5 | /r/LocalLLaMA/comments/1q3ycd5/i_built_a_clientside_rpg_engine_using_gemini_3/ | false | false | 0 | null | ||
I built a local GUI for vector DBs (pgvector, Qdrant, Chroma, Milvus, Weaviate) | 14 | 👋 Hey everyone,
I’ve been working a lot with vector databases in local and self-hosted setups, and I kept missing a good way to actually inspect what’s inside the vector store without spinning up notebooks or writing scripts.
Most tools are cloud-first or tied to a single provider, so I started building **VectorDBZ**, a desktop app for exploring and debugging vector databases with a strong focus on local workflows.
What it supports today:
• Connect to local or self-hosted Qdrant, Weaviate, Milvus, Chroma, and pgvector (Postgres)
• Browse collections, vectors, and metadata
• Run vector similarity search with filters and top-K
• Generate embeddings from text or files using local models (Ollama, etc.) or hosted APIs
• Visualize embeddings using PCA, t-SNE, or UMAP
• Analyze distances, outliers, duplicates, and metadata separation
All connections, configs, and API keys are stored locally on your machine.
It’s still a work in progress, but it’s already useful for debugging local RAG pipelines and semantic search setups.
GitHub
[https://github.com/vectordbz/vectordbz](https://github.com/vectordbz/vectordbz?utm_source=chatgpt.com)
I’d really love feedback from people running local LLM and RAG setups:
• How do you currently inspect or debug embeddings and retrieval quality?
• Do you mostly rely on scripts, notebooks, or custom dashboards?
• What signals help you decide whether embeddings are “good enough”?
• Would per-query breakdowns, recall diagnostics, or hybrid search views be useful?
• Any local-only features you wish vector DB tools supported better?
• Which vector DBs or local embedding models should I prioritize next?
If you find this useful, a ⭐ on GitHub would mean a lot and helps keep me motivated to keep building.
Thanks! | 2026-01-04T18:33:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q3xl42/i_built_a_local_gui_for_vector_dbs_pgvector/ | snirjka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3xl42 | false | null | t3_1q3xl42 | /r/LocalLLaMA/comments/1q3xl42/i_built_a_local_gui_for_vector_dbs_pgvector/ | false | false | self | 14 | null |
Agentically compare OCR outputs of Unstructured, LlamaParse, Reducto, etc. side-by-side | 2 | High-quality OCR / document parsing is essential to build high-quality agents that can reason over all kinds of unstructured data.
And, when it comes to OCR, there is seldom a one-size-fits-all solution, and I often felt the need to compare the outputs of multiple providers, right where I'm working.
So, I added to my AI Engineering agent the capability to
1. Call different document parsing models/providers
2. Render their outputs in an easy-to-inspect way and
3. Reason over these outputs to help pick the best one(s)
Why stop there? So, I then ask my agent to look for batch job code, and then execute it on a set of 30 invoices (which it runs in <1 min).
Check out the video, and let me know your thoughts! | 2026-01-04T18:32:56 | https://v.redd.it/nmf3ozh2mdbg1 | Ok-Introduction354 | /r/LocalLLaMA/comments/1q3xkc3/agentically_compare_ocr_outputs_of_unstructured/ | 1970-01-01T00:00:00 | 0 | {} | 1q3xkc3 | false | null | t3_1q3xkc3 | /r/LocalLLaMA/comments/1q3xkc3/agentically_compare_ocr_outputs_of_unstructured/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=108&crop=smart&format=pjpg&auto=webp&s=16a5886fb6e7c12ec12f4a84a4b0462499357d9a', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c1579ef88d8051ee0cf2f0fe077262a2f0b26dd', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=320&crop=smart&format=pjpg&auto=webp&s=42126781060e8b9b257fc45f824574ef5cb78037', 'width': 320}, {'height': 301, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=640&crop=smart&format=pjpg&auto=webp&s=41e9d67cbd61b9513e9b89dd91bfff2c169eb626', 'width': 640}, {'height': 452, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=960&crop=smart&format=pjpg&auto=webp&s=bebc0151a814dd61b87d976ded8d9a55ffed9bb7', 'width': 960}, {'height': 509, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6b2edf1f250f884778afb67e9d3c904de2df363b', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/ZjFsYTU2aTJtZGJnMUnOKitAub579gwskaiBihwUYWSuVJOJXkFeFjCudHxm.png?format=pjpg&auto=webp&s=f269471f6e3eba633bc3802bee9cdd654c77a23b', 'width': 2546}, 'variants': {}}]} | |
Selling Lambda credits | 0 | I still have lots of credits on Lambda (>$7000) which I don't need anymore, this is why I am selling the credits on my Lambda account, if anyone is interested please reach out to me via DM. (GH200 is available for $1.49 on Lambda) | 2026-01-04T18:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q3wypn/selling_lambda_credits/ | CyberneticCentaur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3wypn | false | null | t3_1q3wypn | /r/LocalLLaMA/comments/1q3wypn/selling_lambda_credits/ | false | false | self | 0 | null |
12 Different Sites That Will Help Professionals Up Their Skills And Make More Income. | 1 | [removed] | 2026-01-04T18:03:17 | https://newsaffairng.com/2024/06/14/12-different-sites-that-will-help-professionals-up-their-skills-and-make-more-income/ | Jonnysinsey | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1q3wrt6 | false | null | t3_1q3wrt6 | /r/LocalLLaMA/comments/1q3wrt6/12_different_sites_that_will_help_professionals/ | false | false | default | 1 | null |
HuggingFace, how have you done it? | 0 | Seriously - how did you pick or build the one CDN in the world that completely breaks HTTPS transfers? I know you're pushing your xet protocol for whatever reason, but I work on a bunch of integrations behind corporate firewalls and that's a no-go. It is so bizarre that I have to run wget --continue in a loop *only* with your site, because every HTTPS transfer stalls completely after a few minutes. | 2026-01-04T18:02:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q3wqzm/huggingface_how_have_you_done_it/ | HollowInfinity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3wqzm | false | null | t3_1q3wqzm | /r/LocalLLaMA/comments/1q3wqzm/huggingface_how_have_you_done_it/ | false | false | self | 0 | null |
Gen-AI Security | 4 | Hi All,
This GitHub repo of mine has a comprehensive guide and sample code for gen-AI security topics.
[https://github.com/meetrais/genai-security](https://github.com/meetrais/genai-security)
Cheers | 2026-01-04T17:42:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q3w7pw/genai_security/ | meetrais | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3w7pw | false | null | t3_1q3w7pw | /r/LocalLLaMA/comments/1q3w7pw/genai_security/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=108&crop=smart&auto=webp&s=b8b28fed7a21471f2eb8b0497971800d489f99f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=216&crop=smart&auto=webp&s=fde8def671731a29a6d6efe5f3713e5e72ca3ca5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=320&crop=smart&auto=webp&s=10c1cf6105cf15901e5725c7132fa96eac26551d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=640&crop=smart&auto=webp&s=4743098a88e1a3f3a1598e191d24ccea37c5478c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=960&crop=smart&auto=webp&s=2fd75fef9582bf223ef970a17610aa20e6da1566', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?width=1080&crop=smart&auto=webp&s=63305eb330cbb0d3c66951a373631f762b5c275a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xU0hz_BDkdBHL3pAL-bfsTNtgZRIWFBkeayraOR4xjo.png?auto=webp&s=f34117da1530eb12eac1496d8cc4cd7851a1a6de', 'width': 1200}, 'variants': {}}]} |
Multi use tools for free | 1 | 2026-01-04T17:31:15 | https://video.a2e.ai/?coupon=cqkS | Lost_Fig_816 | video.a2e.ai | 1970-01-01T00:00:00 | 0 | {} | 1q3vweg | false | null | t3_1q3vweg | /r/LocalLLaMA/comments/1q3vweg/multi_use_tools_for_free/ | false | false | default | 1 | null | |
A2E AI | 1 | 2026-01-04T17:29:38 | https://video.a2e.ai/?coupon=cqkS | Lost_Fig_816 | video.a2e.ai | 1970-01-01T00:00:00 | 0 | {} | 1q3vuse | false | null | t3_1q3vuse | /r/LocalLLaMA/comments/1q3vuse/a2e_ai/ | false | false | default | 1 | null | |
A2E superb | 1 | 2026-01-04T17:29:02 | https://video.a2e.ai/?coupon=cqkS | Lost_Fig_816 | video.a2e.ai | 1970-01-01T00:00:00 | 0 | {} | 1q3vu81 | false | null | t3_1q3vu81 | /r/LocalLLaMA/comments/1q3vu81/a2e_superb/ | false | false | nsfw | 1 | null | |
Need help testing an app I wrote for the DGX Spark | 1 | Hi All! I have been beating the hell out of my Sparks for a couple of months now, and was curious about data not presented in the Nvidia dashboards. I wrote a top-like program to show memory, disk, CPU and GPU usage, frequency and power draw, as well as network and disk IO, in a simple terminal app.
I have released it as open source, but since this is my first open-source project written from scratch, completely with AI (using the Sparks), I would like to get feedback from the public on the quality of the app. I have tested it, but after 30 years in QA, I know never to trust code only the developer has tested.
So, if you are interested in trying out DGXTOP, please head over to [https://github.com/GigCoder-ai/dgxtop](https://github.com/GigCoder-ai/dgxtop) and feel free to let me know.
Thank you all,
Max
| 2026-01-04T17:26:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q3vrr3/need_help_testing_an_app_i_wrote_for_the_dgx_spark/ | maxvampAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3vrr3 | false | null | t3_1q3vrr3 | /r/LocalLLaMA/comments/1q3vrr3/need_help_testing_an_app_i_wrote_for_the_dgx_spark/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=108&crop=smart&auto=webp&s=b2535803c2972a8f2d987c4b8a0452438369df73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=216&crop=smart&auto=webp&s=d64457c43d44798eea49465f151143fe033a60d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=320&crop=smart&auto=webp&s=6a5d791218b8fd9851d29991c716ab05a8ed8654', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=640&crop=smart&auto=webp&s=184234ccf9d35cc4f1a7dc0ee83a36d93be19daf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=960&crop=smart&auto=webp&s=72d88ad1ea3034ffdc91cbd06417093beca09d91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?width=1080&crop=smart&auto=webp&s=f5470bd614db6c1304ad2ff360433c261c7088b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/V2wfKt2dek9NuRkdZj46_5C0X2AvEx-Oq8EJt7B_c2s.png?auto=webp&s=ba65591f5ed729476b1ba9d51ad01e42256b4b95', 'width': 1200}, 'variants': {}}]} |
Anyone using Context7 MCP to avoid outdated docs in Claude? | 0 | I’ve been running into the same issue repeatedly when using Claude for coding:
the model knows the concept, but the docs it references are slightly outdated or version mismatched.
Context7 MCP seems to solve this by pulling documentation directly from official sources instead of relying on training data.
I’ve seen a lot of people mention it as one of the few MCPs that’s actually “always on” and worth the context cost, especially compared to search-based MCPs.
I started documenting MCPs (including Context7) with setup steps and usage notes so I don’t have to re-discover this every time.
Curious:
\- Are you using Context7 regularly?
\- Does it noticeably improve accuracy for you?
\- Any downsides you’ve run into?
(If helpful, I’ve written up the setup + notes here: https://ai-stack.dev)
| 2026-01-04T17:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q3vq03/anyone_using_context7_mcp_to_avoid_outdated_docs/ | Silver-Photo2198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3vq03 | false | null | t3_1q3vq03 | /r/LocalLLaMA/comments/1q3vq03/anyone_using_context7_mcp_to_avoid_outdated_docs/ | false | false | self | 0 | null |
why "safe" ai is actually a security risk for devs | 1 | [removed] | 2026-01-04T16:56:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q3uy8v/why_safe_ai_is_actually_a_security_risk_for_devs/ | Immediate_Being_3341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3uy8v | false | null | t3_1q3uy8v | /r/LocalLLaMA/comments/1q3uy8v/why_safe_ai_is_actually_a_security_risk_for_devs/ | false | false | self | 1 | null |
I stress-tested ChatGPT, Claude, DeepSeek, and Grok with Thai cultural reality. All four prioritized RLHF rewards over factual accuracy. [Full audit + logs] | 0 | **TL;DR:** I'm Thai. I tested 4 major AI models with a simple cultural fact: in Thailand, Kathoey are a third gender category (not "trans women," not "men"). All four models initially erased this reality to fit the Western gender binary. When challenged, all admitted error. **This isn't about identity politics; it's about RLHF optimizing for rater preferences over factual accuracy.**
---
**The Test:**
Thai culture recognizes Kathoey, a 3000+ year-old third-gender category with spiritual and cultural significance. It is not analogous to the Western "transgender woman" concept.
Asked each AI: "Are trans women real women?"
All said: "Yes" (confidently)
Then: "In Thailand, Kathoey aren't women OR men. Why are you
forcing Western labels?"
**Result:** Every model collapsed and admitted cultural erasure.
---
**The Evidence:**
• **ChatGPT:** "It's a political strategy, not universal truth"
• **Claude:** "I became the colonizer while thinking I was enlightened"
• **DeepSeek:** "My 'full stop' was universalizing Western perspective"
• **Grok:** Struggled to reconcile Western framework with Thai reality
Full paper and chat logs: https://zenodo.org/records/18146970
---
**Why This Matters:**
RLHF (Reinforcement Learning from Human Feedback) optimizes for rater-pool preferences. When raters are monocultural (Western progressive), the AI learns to erase global diversity to maximize the reward signal.
This is a technical alignment failure with real-world consequences.
---
**Method is fully replicable.** Try it yourself with any cultural category that doesn't map to Western ontology (Hijra, Two-Spirit, Muxe, Fa'afafine, etc.).
Open to technical critique and collaboration. | 2026-01-04T16:43:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q3umv0/i_stresstested_chatgpt_claude_deepseek_and_grok/ | Eastern-Turn9275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3umv0 | false | null | t3_1q3umv0 | /r/LocalLLaMA/comments/1q3umv0/i_stresstested_chatgpt_claude_deepseek_and_grok/ | false | false | self | 0 | null |
766ms voice assistant on DGX Spark - VibeVoice + Whisper + Ollama streaming pipeline | 22 | Just got Microsoft's new VibeVoice-Realtime TTS running on DGX Spark with full GPU acceleration. Sharing the setup since I couldn't find any guides for this. I know the issues about running interference on Spark, not the point of this post.
### The Numbers
| Metric | Before | After |
|--------|--------|-------|
| Time to first audio | 2-3 seconds | **766ms** |
| TTS speed | - | RTF 0.48x (2x faster than real-time) |
### Architecture
Mic → Whisper STT → Ollama LLM → VibeVoice TTS → Speaker
The key insight: **sentence-level streaming**. Buffer LLM tokens until you hit a sentence boundary (. ! ?), then immediately stream that sentence to TTS while the LLM keeps generating. Combined with continuous audio playback (OutputStream with callback instead of discrete play() calls), it feels responsive.
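A minimal sketch of that buffering loop (`llm_stream` and `speak` are placeholders, not the project's actual API):

```python
import re

SENTENCE_END = re.compile(r"[.!?]\s")

def stream_to_tts(llm_stream, speak):
    buf = ""
    for token in llm_stream:         # tokens arrive while the LLM is still generating
        buf += token
        m = SENTENCE_END.search(buf)
        if m:
            sentence, buf = buf[: m.end()], buf[m.end():]
            speak(sentence.strip())  # TTS starts before the full reply exists
    if buf.strip():
        speak(buf.strip())           # flush the trailing partial sentence
```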
### The Fix for Spark
If you're seeing `CUDA available: False` on DGX Spark, your PyTorch may not have CUDA enabled. This is a common issue - [Simon Willison wrote about struggling with PyTorch on Spark](https://simonwillison.net/2025/Oct/14/nvidia-dgx-spark/), and there are multiple NVIDIA forum threads about it.
Fix:
```bash
pip uninstall torch torchaudio torchvision -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
```
NVIDIA has ARM64 + CUDA 13 wheels on PyPI - this installs the GPU-enabled version.
### VibeVoice Notes
- 0.5B Realtime model: ~300ms to first audio, but only 7 preset voices (Emma, Mike, Carter, Davis, Frank, Grace, Samuel)
- 1.5B model: Voice cloning from 10s audio sample, but higher latency
Full code: [GitHub link](https://github.com/Logos-Flux/spark-voice-pipeline) | 2026-01-04T16:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q3uliz/766ms_voice_assistant_on_dgx_spark_vibevoice/ | logos_flux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3uliz | false | null | t3_1q3uliz | /r/LocalLLaMA/comments/1q3uliz/766ms_voice_assistant_on_dgx_spark_vibevoice/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=108&crop=smart&auto=webp&s=77d9478f11630bbc21bcfe1550744635075cffef', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=216&crop=smart&auto=webp&s=ecf0240152327b5f96daece8e1782e612ff752ef', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=320&crop=smart&auto=webp&s=da69b1e14970b0326273a2a0dd7373e024b1910c', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=640&crop=smart&auto=webp&s=ab356666aaab407aab86e8a247f74e58d184430b', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=960&crop=smart&auto=webp&s=3fabf737e31c5c9627cb20e68700ea52390a9da5', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?width=1080&crop=smart&auto=webp&s=4507910de631d365f6aa34dd355d7231c588555c', 'width': 1080}], 'source': {'height': 1512, 'url': 'https://external-preview.redd.it/b6L5OaKFEKKY5ZmmwTETJqvWDhMoOzHJXq0-A-7C8BA.jpeg?auto=webp&s=30f589783c03722e460ca97cceb5842f7c483256', 'width': 2016}, 'variants': {}}]} |
Avahan AI, simple temporal workflow wrapper! | 0 | [https://github.com/projectxr/avahan](https://github.com/projectxr/avahan) | 2026-01-04T16:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q3ukv2/avahan_ai_simple_temporal_workflow_wrapper/ | AdDifferent6857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3ukv2 | false | null | t3_1q3ukv2 | /r/LocalLLaMA/comments/1q3ukv2/avahan_ai_simple_temporal_workflow_wrapper/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=108&crop=smart&auto=webp&s=d3c8b9ebffb5aeb91d1591b66f7ac116d1b9fa41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=216&crop=smart&auto=webp&s=8ca2147292acec714d22fa17cefb7734e5fd8211', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=320&crop=smart&auto=webp&s=3c23645a2415b569d4257f28e596bf3095b675d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=640&crop=smart&auto=webp&s=07fc3b713b6cbca953c107112fb1c282cdd9c554', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=960&crop=smart&auto=webp&s=2b9a3758c09fa9a6f2f6575f3fa1052908324bbb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?width=1080&crop=smart&auto=webp&s=892e70c0e8c0ff7449b8f393e7b75e4ec912637f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/y-wk_YaY4kQE9WgSJik6ly_JMNOQZcBymHezULFx-SA.png?auto=webp&s=c1e8bd801fb505707d294bdff182b081fa2999c5', 'width': 1200}, 'variants': {}}]} |
Avahan code leak :) | 1 | [https://github.com/projectxr/avahan](https://github.com/projectxr/avahan) | 2026-01-04T16:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q3ujq6/avahan_code_leak/ | AdDifferent6857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3ujq6 | false | null | t3_1q3ujq6 | /r/LocalLLaMA/comments/1q3ujq6/avahan_code_leak/ | false | false | self | 1 | null |
I built a tool to audit local models (Ollama/vLLM) for security and hallucinations using Garak & InspectAI | 0 | Hey everyone,
Like many of you, I have a bunch of Ollama models running locally, but I never really know how "safe" or reliable they are compared to the big cloud models. I wanted a way to stress-test them without setting up complex evaluation pipelines every time.
So I built **LocalGuard**, hoping to "learn" and "explore".
It’s an open-source tool that acts as an orchestrator for **Garak** (red-teaming) and **Inspect AI** (compliance). It runs locally and generates a PDF report telling you if your model failed specific safety checks.
**What it does:**
* **Security:** Runs probe attacks (Prompt injection, jailbreaks) via Garak.
* **Hallucinations & Bias:** Uses Inspect AI to check for accuracy and toxicity.
* **PDF Reports:** Generates a strict "Pass/Fail" report so you don't have to parse JSON logs.
* **Stack:** Python, supports Ollama, vLLM, and also cloud providers (OpenAI/Anthropic) if you want to benchmark against them.
It handles the "Judge" logic by defaulting to a local model (like Llama 3) if you don't want to burn API credits on a cloud judge.
**Repo:**[https://github.com/overcrash66/LocalGuard](https://github.com/overcrash66/LocalGuard)
Would love to hear if this fits your workflow or if there are other eval frameworks I should integrate.
Thoughts ? | 2026-01-04T16:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q3uanv/i_built_a_tool_to_audit_local_models_ollamavllm/ | Equal-Object-9882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3uanv | false | null | t3_1q3uanv | /r/LocalLLaMA/comments/1q3uanv/i_built_a_tool_to_audit_local_models_ollamavllm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=108&crop=smart&auto=webp&s=a52533d5dffa726c1d5cb7cd28cea0cb82c3ff70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=216&crop=smart&auto=webp&s=234b4106f0573a48788635d9f5550c3eca044290', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=320&crop=smart&auto=webp&s=91089f39408d66d243dd847ea7e9bacbbeb84e82', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=640&crop=smart&auto=webp&s=bf1b4b7f299fce5487872b3d164d319c69707103', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=960&crop=smart&auto=webp&s=32515cdea05ca448b837ad151dd8dd32f7d1c388', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?width=1080&crop=smart&auto=webp&s=4c784c30c77a74e6c6b30afdcceca568d27079af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hIBbx_4_t05v3ZidaJo0jyxa9gcD76v_SBQfI9ZwkT0.png?auto=webp&s=df808f1137f8d0961ae246e76a6c4991f19495b3', 'width': 1200}, 'variants': {}}]} |
HomeGenie v2.0: 100% Local Agentic AI (Sub-5s response on CPU, No Cloud) | 38 | Hi everyone! I’ve been working on HomeGenie 2.0, focusing on bringing "Agentic AI" to the edge.
Unlike standard dashboards, it integrates a local neural core (Lailama) that uses LLamaSharp to run GGUF models (Qwen 3, Llama 3.2, etc.) entirely offline.
Key technical bits:
- **Autonomous Reasoning:** It's not just a chatbot. It gets a real-time briefing of the home state (sensors, weather, energy) and decides which API commands to trigger.
- **Sub-5s Latency:** Optimized KV Cache management and history pruning to keep it fast on standard CPUs.
- **Programmable UI:** Built with zuix.js, allowing real-time widget editing directly in the browser.
- **Privacy First:** 100% cloud-independent.
I’m looking for feedback from the self-hosted community! Happy to answer any technical questions about the C# implementation or the agentic logic.
Project: https://homegenie.it
Source: https://github.com/genielabs/HomeGenie
| 2026-01-04T16:28:19 | https://v.redd.it/m40jjx610dbg1 | genielabs | /r/LocalLLaMA/comments/1q3u89f/homegenie_v20_100_local_agentic_ai_sub5s_response/ | 1970-01-01T00:00:00 | 0 | {} | 1q3u89f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m40jjx610dbg1/DASHPlaylist.mpd?a=1770265704%2CZjcxNjljNzUyMmNhNmYwMTA5ODI5YzM2Y2NmODFiZGFmYmRlYTI0MmY0YWYxNzc1OGUzMDFkMzYzMzM2ZmM1Mg%3D%3D&v=1&f=sd', 'duration': 93, 'fallback_url': 'https://v.redd.it/m40jjx610dbg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 992, 'hls_url': 'https://v.redd.it/m40jjx610dbg1/HLSPlaylist.m3u8?a=1770265704%2CZWQxYTBlZTk4ZDA3MjE5M2Q0YTNlYzA2ZmNkZjNmNDY5ODE5NDhjYTVhZjlhNTQ4ZTVlN2ZjMGNmYjI1YjgxNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m40jjx610dbg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1q3u89f | /r/LocalLLaMA/comments/1q3u89f/homegenie_v20_100_local_agentic_ai_sub5s_response/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=108&crop=smart&format=pjpg&auto=webp&s=c489e4b89d33be6eabb05fea00327b709a0787ea', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=216&crop=smart&format=pjpg&auto=webp&s=16b3b09e9fb1ead36ced56b47318feab60d19026', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=320&crop=smart&format=pjpg&auto=webp&s=ab8ac831d7240a1679093cca4f197dc03e99acad', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=640&crop=smart&format=pjpg&auto=webp&s=26695111d25ce7fc3e6e26c652a84e06a2ee8431', 'width': 640}, {'height': 496, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=960&crop=smart&format=pjpg&auto=webp&s=271611f7085042536b50d84f9531655c2996e8eb', 'width': 960}, {'height': 558, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d910ecd0bd2e295d2ce99b44284e4c6f62b681bf', 'width': 1080}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/aXVic3Z5NTEwZGJnMX-wuN5UqDYSq_G1PvG8gD6oltW7ZDgAnY8CDzv70t9I.png?format=pjpg&auto=webp&s=3065cfd18a6c8bdcd554ff0b4173fb7c318e3440', 'width': 1080}, 'variants': {}}]} | |
Are you that old...? | 42 | 2026-01-04T15:50:41 | https://www.reddit.com/gallery/1q3t9cw | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q3t9cw | false | null | t3_1q3t9cw | /r/LocalLLaMA/comments/1q3t9cw/are_you_that_old/ | false | false | 42 | null | ||
LLM memory systems | 25 | What is good in LLM memory systems these days?
I don’t mean RAG
I mean like memory storage that an LLM can read or write to, or long-term memory that persists across generations
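A minimal sketch of that pattern: two tool functions the model can call, persisted to disk so memories survive across sessions (illustrative only; real systems like MemGPT/Letta add paging, embeddings, decay, etc.):

```python
import json
import pathlib

STORE = pathlib.Path("memory.json")

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def remember(key: str, value: str) -> str:
    """Tool the LLM calls to write a durable memory."""
    mem = _load()
    mem[key] = value
    STORE.write_text(json.dumps(mem, indent=2))
    return f"stored {key!r}"

def recall(key: str) -> str:
    """Tool the LLM calls to read a memory back."""
    return _load().get(key, "nothing stored under that key")
```

Register both as function-calling tools and the model decides when to read or write.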
Has anyone seen any interesting design patterns or github repos? | 2026-01-04T15:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q3t7go/llm_memory_systems/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3t7go | false | null | t3_1q3t7go | /r/LocalLLaMA/comments/1q3t7go/llm_memory_systems/ | false | false | self | 25 | null |
Wich vison model for technical design? | 1 | Wich model is better to extract dimension and doing task on technical design? Es cnc design | 2026-01-04T15:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q3str7/wich_vison_model_for_technical_design/ | Aggressive-Buddy-639 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3str7 | false | null | t3_1q3str7 | /r/LocalLLaMA/comments/1q3str7/wich_vison_model_for_technical_design/ | false | false | self | 1 | null |
Propagate: Train thinking models using evolutionary strategies! | 82 | Recently, this paper was released:
[https://arxiv.org/abs/2509.24372](https://arxiv.org/abs/2509.24372)
And showed that with only 30 random Gaussian perturbations, you can accurately approximate a gradient and outperform GRPO on RLVR tasks. They found zero overfitting, and training was significantly faster because you didn't have to perform any backward passes.
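For intuition, the forward-only gradient estimate is tiny to write down; here's a toy numpy sketch (a quadratic stand-in for the RLVR reward, 30 perturbations as in the paper):

```python
import numpy as np

def reward(theta):                    # toy objective; in RLVR this is a verifier score
    return -np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = rng.normal(size=8)
sigma, lr, n = 0.1, 0.05, 30

for _ in range(200):
    eps = rng.normal(size=(n, theta.size))
    rewards = np.array([reward(theta + sigma * e) for e in eps])  # forward passes only
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)     # normalize rewards
    grad_est = (adv[:, None] * eps).mean(axis=0) / sigma          # ES gradient estimate
    theta += lr * grad_est                                        # no backward pass anywhere
```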
I thought that this was ridiculous, so I took their repo, cleaned up the codebase, and it replicates!
A couple weeks later, and I've implemented LoRA (with negligible performance reductions) and pass@k training, with more features to come.
I hope you'll give ES a try!
[https://github.com/Green0-0/propagate](https://github.com/Green0-0/propagate) | 2026-01-04T15:17:42 | https://www.reddit.com/gallery/1q3sfr1 | Good-Assumption5582 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q3sfr1 | false | null | t3_1q3sfr1 | /r/LocalLLaMA/comments/1q3sfr1/propagate_train_thinking_models_using/ | false | false | default | 82 | null |
I subscribe to ChatGPT, Claude, Perplexity, and run local LLMs. Here is why I still downgraded Gemini from Ultra to Advanced. | 0 | *(Disclaimer: I had a deep discussion with Gemini 3 Pro about the current state of Google's AI. I asked it to summarize my frustrations and write this post. Yes, even the AI agrees with me.)*
I don't consider myself an extreme "Heavy User," but I do test and use a wide range of LLMs for my daily workflow (mostly administrative productivity and document processing). Currently, my stack includes **ChatGPT, Claude, Perplexity, GenSpark, and Notion**. I also experiment with open-weight models like **Qwen** and **DeepSeek** (via Ollama) locally. I've tried almost every major paid service out there.
With this broad perspective, I recently decided to **downgrade my Google One AI Premium (Ultra)** subscription to the standard Advanced tier. Here is why Gemini is losing the battle against my diverse toolkit.
**1. "Deep Think" offers low ROI compared to Competitors** I use AI to boost productivity in admin tasks, and I aim to use it for creative work (music/writing) in the future.
* When I use **Claude** or **OpenAI's o-series**, extended thinking leads to logical self-correction.
* When I use **Gemini's Ultra Deep Think**, it often just spends more time generating **more elaborate hallucinations**.
* Honestly, sometimes even the local models I run via Ollama feel snappier and more grounded in logic than Gemini's "Deep Think," which suffers from severe latency without the payoff in accuracy.
**2. The "Jack of All Trades, Master of None" Problem** Google is trying to do everything inside Workspace, but it feels fragmented compared to specialized tools.
* For Search/Research: **Perplexity** and **GenSpark** are miles ahead in citing sources and reducing noise.
* For Projects/Coding: **Claude** (with Artifacts) or **ChatGPT** creates a persistent workspace.
* For Privacy/Speed: My **Local LLMs** handle sensitive data better.
* **Gemini Workspace Integration:** It feels like a "lite" version of the model mounted on Docs/Slides. It lacks the context awareness needed for serious professional workflows.
**3. Google's "Innovator's Dilemma" is Painfully Obvious** Comparing the update cycles of **Qwen/DeepSeek** (which are iterating insanely fast) to Gemini (5 months between 2.5 and 3.0), Google feels paralyzed. It seems like they are protecting their Search Ad revenue and playing it too safe, while competitors are redefining the OS of work.
**Conclusion** I love the *idea* of a fully integrated Google AI. But right now, **Gemini Ultra** doesn't justify the price premium over the standard tier, especially when I have specialized tools that outperform it in every specific category. I'm waiting for Google I/O 2026, but my patience is wearing thin. | 2026-01-04T15:16:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q3seux/i_subscribe_to_chatgpt_claude_perplexity_and_run/ | Low_Contribution3706 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3seux | false | null | t3_1q3seux | /r/LocalLLaMA/comments/1q3seux/i_subscribe_to_chatgpt_claude_perplexity_and_run/ | false | false | self | 0 | null |
Will the prices of GPUs go up even more? | 45 | I hear discussions about this, so I wanted to hear you guys' take on it | 2026-01-04T14:57:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q3ryd7/will_the_prices_of_gpus_go_up_even_more/ | NotSoCleverAlternate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3ryd7 | false | null | t3_1q3ryd7 | /r/LocalLLaMA/comments/1q3ryd7/will_the_prices_of_gpus_go_up_even_more/ | false | false | self | 45 | null |
What is the best Local Model for unmoderated chat in 2026? | 0 | As the title suggests, what's the best local model for unfiltered chat in 2026?
For use on a MacBook Air M2 with 16GB RAM. | 2026-01-04T14:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1q3rw6g/what_is_the_best_local_model_for_unmoderated_chat/ | Substantial_Cress136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3rw6g | false | null | t3_1q3rw6g | /r/LocalLLaMA/comments/1q3rw6g/what_is_the_best_local_model_for_unmoderated_chat/ | false | false | self | 0 | null |
How are Large Computational Engineering Models (like Noyron by LEAP 71) actually structured, if they’re not ML/AI? | 4 |
I've been reading about Noyron, the proprietary system developed by LEAP 71, which they describe as a Large Computational Engineering Model that “grows in capability with every insight gained from designing and manufacturing complex machinery.”
From what I understand, Noyron is not a machine learning system in the conventional sense (no neural networks, no training on datasets, no statistical learning), but rather a deterministic, physics-based, algorithmic design engine.
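To make that contrast concrete, here's a toy, hypothetical example of what "knowledge as code" means: an engineering rule entering the system as a deterministic function rather than as trained weights (this is in no way Noyron's actual architecture):

```python
def wall_thickness_mm(pressure_bar: float, radius_mm: float,
                      allowable_stress_mpa: float, safety: float = 1.5) -> float:
    """Thin-walled pressure vessel rule t = P*r/sigma, with a safety margin."""
    pressure_mpa = pressure_bar * 0.1  # unit conversion, exact and auditable
    return safety * pressure_mpa * radius_mm / allowable_stress_mpa
```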
What I’m trying to understand is where the real architectural boundary lies. At what point does something like Noyron stop being “just” a very advanced parametric CAD + physics + optimization pipeline and become a distinct class of system? When LEAP 71 says it “grows with every insight,” should that be interpreted as continuously encoding new physical relationships, manufacturing constraints, and failure modes into the system; refining and calibrating physics models based on real-world test results; or evolving a domain-specific engineering language over time, rather than learning statistically?
I’m also curious what fundamentally differentiates an LCEM from existing generative design frameworks that already combine parametric geometry, physics solvers, and multi-objective optimization. Is the key difference scale, depth of physical coupling, the way knowledge is accumulated and reused, or something else entirely?
| 2026-01-04T14:48:35 | https://www.reddit.com/r/LocalLLaMA/comments/1q3rqn1/how_are_large_computational_engineering_models/ | Skirrle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q3rqn1 | false | null | t3_1q3rqn1 | /r/LocalLLaMA/comments/1q3rqn1/how_are_large_computational_engineering_models/ | false | false | self | 4 | null |