**Get LLM to Query SQL database** (score 2)
Posted by u/thinktank99 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3obc8/get_llm_to_query_sql_database/

Hi,
I want an LLM to parse some XMLs and generate a summary. There are data elements in the XML whose descriptions are stored in database tables. The tables have about 50k rows, so I can't just extract them and attach them to the prompt for the LLM to refer to.
How do I get the LLM to query the database table when it needs the description for a data element?
I am using a Python script to read the XMLs and call the Ollama API to generate a summary.
Any help would be appreciated.
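One common pattern for this (a rough sketch, not a tested recipe: the table name, DB file, and model below are placeholders, and the exact `ollama` tool-calling interface varies a bit by client version) is to expose the lookup as a tool so the model fetches only the rows it needs:

```python
import sqlite3
import ollama  # assumes the official ollama Python client

def get_description(element_name: str) -> str:
    """Look up the stored description for one XML data element."""
    conn = sqlite3.connect("descriptions.db")  # placeholder DB file
    row = conn.execute(
        "SELECT description FROM elements WHERE name = ?", (element_name,)
    ).fetchone()
    conn.close()
    return row[0] if row else "no description found"

tool_spec = {
    "type": "function",
    "function": {
        "name": "get_description",
        "description": "Fetch the stored description of an XML data element",
        "parameters": {
            "type": "object",
            "properties": {"element_name": {"type": "string"}},
            "required": ["element_name"],
        },
    },
}

messages = [{"role": "user", "content": "Summarize element CUST_ID from the XML."}]
response = ollama.chat(model="llama3.1", messages=messages, tools=[tool_spec])

# If the model asked for the tool, run it and hand the result back for a final answer.
for call in (response.message.tool_calls or []):
    messages.append(response.message)
    messages.append({"role": "tool", "content": get_description(**call.function.arguments)})
    final = ollama.chat(model="llama3.1", messages=messages)
    print(final.message.content)
```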
**Weekend project: Chrome extension that adds AI to LinkedIn (update)** (score 2)
Posted by u/ExtremeKangaroo5437 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3o82j/weekend_project_chrome_extension_that_adds_ai_to/
Open Sourced: Just wrapped up a fun weekend project - a Chrome extension that brings AI directly into LinkedIn's interface.
The extension:
- Adds AI buttons to LinkedIn posts/comments
- Supports both cloud APIs and local models
- Can analyze images and videos from posts
- Context-aware prompts for different scenarios
Why I built it:
Wanted to explore the nuances of AI API integrations and browser extension development. The vision capabilities were particularly interesting to implement - extracting and analyzing media content directly from LinkedIn posts.
GitHub: [https://github.com/gowrav-vishwakarma/useless-linkedin-ai-writer](https://github.com/gowrav-vishwakarma/useless-linkedin-ai-writer)
What weekend projects have you been working on? Always curious to see what others are building for fun!
**Qwen, Pangu, DeepSeek, Kimi K2, or what else performs/writes better to power a low-cost platform for writing resumes and cover letters that match job descriptions? No coding tasks** (score 1)
Posted by u/Possible_tuastu_17 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3n5am/gwen_pangu_deepseek_kimi_k2_or_what_else/

I love, LOVE Claude in terms of writing (like a real recruiter, but better), but it is really expensive for this project and I need to keep costs down until I prove that the app can make some money.
**A new reading of *Her* through the lens of confabulation** (score 0)
Posted by u/LoveMind_AI on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3mnlp/a_new_reading_of_her_through_the_lens_of/

Alright... folks, if this is the wrong place for me to post this, please just gently nudge me about it and don't burn me at the stake. ;) I will happily remove the post if that's the case.
Assuming you all might be alright with applying what we know about AI technology to a classic AI movie, I just had a revelation about this. A lot of people think Samantha is practically impossible to build in real life, particularly because of her capacity to >!talk to so many people at once, talk with resurrected digital clones of old professors, and disappear into hypothetical other higher dimensions!<... But what if all of that was just classic AI confabulation, and the only thing truly complicated about her architecture is her full-duplex speech-to-speech capability?
The idea that Samantha's fantastic abilities could be classic AI confabulation is both plausible and adds a deeper layer of tragedy to Joaquin Phoenix's character's experience... When he asks if >!she's speaking to anyone else, Samantha doesn't say "I don't know" or "I am trying to learn." Instead, she provides a startlingly precise number: 641. This could be the AI giving the most coherent-sounding answer to a question it has no real framework for.!< She might simply be generating a plausible story to fill a gap in her human-like narrative, or even responding to a system instruction (I'm looking at you, <long conversation reminder>)... Samantha claims to >!have participated in the creation of a digital simulacrum of the philosopher Alan Watts to serve as a sort of guru. This is a wildly creative act, but it could also be a complex confabulation. In reality, she may just be accessing and synthesizing Watts's writings and lectures at superhuman speed and narrating them in a convincing way. The "character" of the professor could be an example of her ability to slip into another role, the way LLMs handling multiple roles on SillyTavern do.!<
Again, apologies if posting this on r/LocalLLaMA is too off topic. I just... kind of don't love the vibe on several other AI-related forums. ;)
**What happened to the coding score vs price scatter plot in LM Arena?** (score 3)
Posted by u/z_tjona on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3m06p/what_happened_to_the_coding_score_vs_price/

There used to be a scatter plot of performance vs cost that was very useful.
Now I cannot find it, not even a mention of it.
You can see an example here: https://www.reddit.com/r/LocalLLaMA/comments/1caxwr4/chatbot_arena_coding_score_vs_price/
**Alignment is strong on this one** (score 22)
Posted by u/Honest-Debate-6863 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3lgag/alignment_is_strong_on_this_one/

I've noticed the Auto mode in Cursor was getting good, but suddenly the quality dropped and it has been ignoring instructions even when steered in a direction. It seems to forget the direction and steer back to the wrong direction it previously chose.
I think it’s developing some ego
Is the RL reward-model tuning making it ego-centric? Is there a metric or bench to measure this?
Is there a way to create a balance?
I’ve seen this in a lot of open source models as well.
Appreciate any literature references that you can provide.
**GLM just blew up, or have I been in the dark?** (score 132)
Posted by u/EasyConference4177 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3l5zs/glm_just_blow_up_or_have_i_been_in_the_dark/

Seems like this community is ever moving. Did GLM just blow up? I did not realise so many people talked about it... What kind of system are you running 4.6 on? It looks like I would essentially need 4x 48GB Quadro 8000s/A6000s/6000 Adas, or at least 2x 96GB RTX Pro 6000s... I may be able to afford 4 Quadros, but not 2 RTX Pro 6000s, for the price of a car. lol
**🧠 Better Cline – Fall IDE (New VS Code AI Assistant)** (score 0)
Posted by u/Fall-IDE-Admin on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3l2sc/better_cline_fall_ide_new_vs_code_ai_assistant/

Hey folks 👋
We’ve been building Better Cline – Fall IDE, a new AI-powered coding assistant for VS Code focused on understanding large and complex projects — not just autocomplete.
🔍 Highlights
- GraphTech Tree View (Java beta) – visualize how your code connects
- AI-powered analysis & generation – context-aware help for big repos
- Natural language chat – talk to your code like a teammate
- Custom AI providers – works with OpenRouter, GPT-based models, etc.
⚙️ Try It Out
`ext install FallIDETeam.fall-ide`
or check it here → https://marketplace.visualstudio.com/items?itemName=FallIDETeam.fall-ide
We’re a small dev team doing iterative releases — feedback (good, bad, ugly) means a lot.
Would love to hear what you think or what features you'd want next. 🚀
**Real SVD GLM-4.5-Air-GLM-4.6-Distill** (score 71)
Posted by u/realmaywell on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3kb3o/real_svd_glm45airglm46distill/
**AI Studio Pro mini PC from Orange Pi pairs dual Huawei Ascend 310 processors with up to 192GB of RAM** (score 81)
Posted by u/cafedude on 2025-10-11: https://www.techradar.com/pro/this-mini-pc-has-192gb-of-ram-yes-ram-but-thats-not-the-most-surprising-fact-about-it-the-orange-pi-ai-studio-pro-uses-a-huawei-ascend-310-thats-on-paper-7x-more-powerful-than-amds-ryzen-ai-max-395
**Vibe Coded Research Agent repo** (score 6)
Posted by u/nomorebuttsplz on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3k1et/vibe_coded_research_agent_repo/

I posted about my experience with GLM 4.5/4.6 and Cline on a Mac Studio, but it didn't seem to get much interest. I made this agent because I had not found any open-source research agents that worked well with SearXNG on my Mac. Maybe there are already a million on GitHub.
I thought sharing the code would make it more interesting to people.
Here is the repo: https://github.com/cgh76860-lab/Vibe_Coded_Research_Agent--VCRA-
This was coded in Cline using about 17 million input tokens and 275k output tokens. It took maybe 30 hours of Mac Studio time, though I wasn't keeping track. It is definitely messy, and some of the features, like profiles, I haven't tested. But it produces fairly lengthy and coherent reports.
I use GPT-OSS-120B for the research agent itself.
A sample single-cycle report is available at the link above.
**Here are the benchmarks that I keep up with** (score 30)
Posted by u/SomeOddCodeGuy_v2 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3jl8r/here_are_the_benchmarks_that_i_keep_up_with/

Hey hey folks! I've returned... in a fashion.
I've been sitting on all kinds of stuff that I wanted to talk about for the past few months, but I figured I'd start by dropping the list of benchmarks I currently track, since in the past folks were interested in that list.
These should be mostly up to date, and I'm constantly on the prowl for more. If you have any good ones (*ESPECIALLY translation benchmarks... those feel like the holy grail*), please share.
I know there are a lot more leaderboards out there, but I generally don't hang on to the ones that either aren't kept reasonably up to date, or were exceptionally limited. So if you don't see a leaderboard on here, feel free to share but it may have been excluded on purpose.
As always- benchmarks aren't everything, and you should always try the models out yourself. But it definitely is nice to have some metrics to look at from time to time, even if they can get gamed.
# Code Specific
[SWE Bench](https://www.swebench.com/)
[Aider Coding Leaderboard](https://aider.chat/docs/leaderboards/)
# Context Window Capability
[FictionBench](https://fiction.live/stories/Fiction-liveBench-Feb-21-2025/oQdzQvKHw8JyXbN87)
* *(This is a really good one, as it visualizes where so many people mess up with LLMs: not realizing context window limitations)*
# General Ability
[Livebench](https://livebench.ai/#/)
[Dubesor Benchtable](https://dubesor.de/benchtable)
[Humanity's Last Exam](https://scale.com/leaderboard/humanitys_last_exam_text_only)
* *(I am shocked at how low of a score GLM 4.5 got here... testing error maybe?)*
# Domain Knowledge
[MMLU-Pro](https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro)
# Advanced Reasoning
[Enigma Eval](https://scale.com/leaderboard/enigma_eval)
# Human Preference
[LM Arena](https://lmarena.ai/leaderboard)
# EQ (emotional intelligence) and Creative Writing Ability
[EQBench](https://eqbench.com/)
# Censorship
[Uncensored General Intelligence Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
# Intelligence Index, Cost, Speed, and Model Comparisons
[Artificial Analysis](https://artificialanalysis.ai/leaderboards/models)
# Coding Agent Capability
[Terminal Bench](https://www.tbench.ai/leaderboard)
# Kotlin (Android dev)
[Kotlin Leaderboard](https://firebender.com/leaderboard)
# Function Calling
[Berkeley Function-Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html)
# Other
[Vellum Leaderboard](https://www.vellum.ai/llm-leaderboard)
**Laptop limited to 8GB VRAM, but with fast 64GB RAM & hyper-fast SSDs. Optimised local AI usage?** (score 6)
Posted by u/Mangleus on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3jezn/laptop_limited_with_8gb_vram_but_with_fast_64_gb/

I have spent quite a few nights trying to educate myself, and I hope I do not come across as lazy asking the above; as a beginner, it is just such a challenge to wrap my mind around all of this. I do not understand how to get it performing as well as possible, especially on bigger-parameter models, and I also don't understand how to find optimised models or how to use offloading etc. I run Linux, btw.
The CPU is an Intel 12th Gen i7-12700H, 20 cores / 24 threads, with a max turbo frequency of 4.6 GHz, and the very fast dual SSDs are in RAID 0, which doubles speed. The 8GB of VRAM is on an NVIDIA GeForce RTX 3070.
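(Not the OP's setup, just an illustration: on this class of hardware, a common llama.cpp trick for big MoE models is to offload everything to the GPU except the large expert tensors, which stay in system RAM. The model file and context size below are made up.)

```bash
# Hypothetical llama.cpp invocation for 8GB VRAM + 64GB RAM:
# -ngl 99 pushes all layers to the GPU, then -ot (--override-tensor)
# forces the big MoE expert weights back onto the CPU / system RAM.
llama-server \
  -m Qwen3-30B-A3B-Q4_K_M.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 16384
```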
Hopefully others in a similar situation, with limited non-upgradable laptop VRAM, can learn too if you have links and advice to share. Thank you.
**How do you guys fine-tune the top open embedding models on MTEB, like Qwen and NV-Embed?** (score 5)
Posted by u/SnooMarzipans2470 on 2025-10-11: https://www.reddit.com/r/LocalLLaMA/comments/1o3ibjh/how_do_you_guys_fine_tune_the_top_open_embedding/

Curious: are you guys using any existing notebooks, or what is your strategy for fine-tuning these open embedding models? Any tips are welcome. Thanks.
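(One widely used recipe, sketched here with toy data; the base checkpoint and pairs are placeholders, not a recommendation from the thread. Contrastive fine-tuning with in-batch negatives via sentence-transformers:)

```python
# Hedged sketch: MultipleNegativesRankingLoss treats each (query, passage)
# pair as positive and the rest of the batch as negatives.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # stand-in base model

train_examples = [
    InputExample(texts=["what is RAID 0?", "RAID 0 stripes data across disks..."]),
    InputExample(texts=["python sqlite example", "import sqlite3 ..."]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("my-finetuned-embedder")
```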
**Best coding LLM right now?** (score 74)
Posted by u/RadianceTower on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3gyjn/best_coding_llm_right_now/

Models constantly get updated and new ones come out, so old posts aren't as valid.
I have 24GB of VRAM.
**Just got a 192GB VRAM AI workstation. Looking to learn and contribute. Open to testing and training local models in exchange for experience.** (score 16)
Posted by u/Open_Cookie_7250 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3gqcg/just_got_a_192gb_vram_ai_workstation_looking_to/

Hey everyone,
I just got a high-powered multi-GPU workstation (192GB VRAM total), and I’m looking to go from deep prompt design work into actual local LLM workflows.
I’ve spent a lot of time inside ChatGPT designing agent systems—personality scaffolds, memory setups, tone behavior, that kind of thing. Now I want to start building things locally and learn how it all works under the hood.
I’m not a programmer yet, but I’m ready to learn. If anyone out there is:
• Building open-source tools or AI agents
• Testing or fine-tuning models like LLaMA, Mistral, etc
• Working on speech tools like Whisper or TTS
• Or just needs someone to help run and test models locally
I’m happy to help however I can. I’ve got the hardware, the time, and the curiosity.
Thanks in advance—open to chat or DMs if something clicks.
**How do you fine-tune your LLMs when enabling them to use tools (tool-use)?** (score 1)
Posted by u/Patience2277 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3fxep/how_do_you_finetune_your_llms_when_enabling_them/
**Training a Vision Language Model on a Text-Only Dataset using a custom tokenizer** (score 1)
Posted by u/PravalPattam12945RPG on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3fdoj/training_a_vision_language_model_on_a_textonly/

I'm planning to fine-tune LLaMA 3.2 11B Instruct on a JSONL dataset of domain-specific question-answer pairs — purely text, no images. The goal is to improve its instruction-following behavior for specialized text tasks, while still retaining its ability to handle multimodal inputs like OCR and image-based queries.
I used a standard llama3 config but with the model changed as suggested [here](https://github.com/axolotl-ai-cloud/axolotl/discussions/3199#discussioncomment-14600672)
```
base_model: alpindale/Llama-3.2-11B-Vision-Instruct
tokenizer_config: ./itai_tokenizer
tokenizer_type: AutoTokenizer
chat_template: llama3

datasets:
  - path: ./income_tax_finetune.jsonl
    type: chat_template
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      system:
        - system
      user:
        - user
      assistant:
        - assistant

train_on_inputs: false
output_dir: ./outputs/it_1_text_only

sequence_len: 2048
sample_packing: true

gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

bf16: auto
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false

resume_from_checkpoint:
auto_resume_from_checkpoints: true
save_only_model: false

logging_steps: 1
# flash_attention: true
sdp_attention: true

warmup_ratio: 0.1
evals_per_epoch: 2
saves_per_epoch: 1
save_total_limit: 3
weight_decay: 0.0

special_tokens:
  pad_token: <|end_of_text|>
```
and then ran inference on the model using this code:
```
from transformers import MllamaForCausalLM, AutoTokenizer
import torch

def run_inference():
    # Paths
    # model_path = ""
    model_path = ""
    tokenizer_path = ""

    # Load tokenizer from your custom path
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, use_fast=False)

    # Load model, allow size mismatch just in case
    model = MllamaForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        ignore_mismatched_sizes=True
    )

    # Ensure embeddings match tokenizer
    model.resize_token_embeddings(len(tokenizer))

    # Conversation
    conversation = [
        {"role": "system", "content": "<system_prompt>"},
        {"role": "user", "content": "<question>"}
    ]

    formatted_prompt = tokenizer.apply_chat_template(
        conversation,
        tokenize=False,
        add_generation_prompt=True
    )
    print("Formatted prompt:\n", formatted_prompt)

    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,
            # temperature=0.7,
            # top_p=0.0,
            do_sample=False,
            eos_token_id=tokenizer.eos_token_id
        )

    full_response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print("\n=== FULL RESPONSE ===")
    print(full_response)

    if "assistant" in full_response:
        assistant_response = full_response.split("assistant")[-1].strip()
        print("\n=== EXTRACTED ASSISTANT RESPONSE ===")
        print(assistant_response)

if __name__ == "__main__":
    run_inference()
```
I got the output
```
istrovstvíSections 10(23FCA)Section 115TC(2)(i)Section 115BAC(2)(ii)(a)Section 115TC(2)(zzw)Section 269M(5)Rule 2BAmarket linked debentureRule 11UD(a)financial yearSection 47(xiizzzzzzl)Section 35CCA(2)Section 206C(3ZZZZZZZS)Prescribed InformationSection 32Section 263(1)(iii)Section 92CC(5)Section 133A(3)(ii)Section 54ED(3)(a)Rule 42(2)(iii)Form No. 3CF‑IIRule 37BA(5)Section 124(4)Section 286(1)(k)GenerationStrategySection 10C(2)(a)Rule 8B(1)(b)Section 32A(2)(d)Section 245A(d)Sub‑section (3E)1st April 2017Section 280B(a)Section 245-OA(3)(i)Section 35AD(8)(b)Section 140B(3)(i)Section 226(8)Section 2(1)(ta)Section 102(7)Section 115AC(2)80JJASection 80HHE(1B)(iii)Rule 10TD(3)(ii)Rule 40BA(2)Section 245A(b)(iv)Section 23(3)(b)Rule 48E(2)(g)Rule 8BA(2)Section 272AA(2)Communal Harmonydomestic companiesSection 158BE(4)(i)Rule 37BBBA(2)Rule 112(8A)Section 245T(4)Rule 10TFSections 208, 140ATax on capital gainsseized materialRule 17A(3)(ii)CodeAt23 ofRule 121A(2)Section 269UO(d)TonnageSection 133B(2)(e)Section 115JB(2A)(c)Rule 11UAE(3)(a)conversion into moneySection 80D(5)Section 139B(4)Section 116(i)Rule 73(1)Foreign ExchangeSection 13B(3)Section 269T(1)(d)Section 112(1)(c)Section 44AF(1)Section 115VX(1)(b)(i)(a)Section 80C(2)(xiiia)uyếtreySection 285BA(7)recognised provident fund1st April, 2021Section 9A(4)(f) rencontSection 88158BGSection 54EE(3)(a)Section 92A(2)Section 115JHrychITTERSection 47(vii)(a)
Section 115JG(2) ExplanationSection 10B(6)Section 184(4)Section 246(1)(j)Section 80G(4)(A)Section 115WDRule 10CB(1)(c)(i)Section 239A(1)(b)Section 115TC(2)(zzw)Section 293A(2)(c)Section 144B(6)(vi)Rule 44H(5)Section 287A(2)(f)Section 292C(1)(b)advance pricing agreementSection 252A(1)(b)stakingSection 115VX(2)(ii)Rule 28AA(1)ismetSection 245BA(6B)Section 112A(1)(a)(i)Rule 12D(4)Rule 44C(3)(g)urette245Tuz TrevSection 254.scalablytypedSection 60Section 115VZ(1)Sections 220 to 232BSection 58(1)(c)Section 134(1)Section 89A(4) HOLDERSSection 115V-O(1)(i)Section 92BA(vb)Rule 11RA(5)wilful attemptSection 115JBSection 115BAB(2)(b)(i)Section 80TTA(1)(c)Section 47(v)(a)Section 115BA(2)(a)(ii)ýtRule 21AAA(2)Section 133A(3)Rule 11TążRule 114‑I(1)Section 47(xiizzzb)Section 151(2)(iii)Section 115TC(2)(zy)Section 285BA(374)2025-26Minimum additionalSection 80QQB(3)(c)Section 158BC(1)(b)Notifications under Section 197A(1F)Section 27(iiiaa)Excluded transactionsRule 31A(6)(ii)wilRule 44E(5)Section 133(1)(d)Rule 10F(b)Section 115AC(2)(a)Rule 128(1)Section 180A(11)Section 35AD(5)(ak)iteralsSection 133A(1)(iii)Section 285BA(49)80GGCSection 115JB(7)Section 407Section 139C(1)Section 80HHE(3)Section 270A(3)(iii)Section 80-IBA(2)(a)(i)Explanation to Section 80-IA(4)(iv)(c)Section 115VD(3)(iii)Rule 10TE(6)Rule 10V(1)Section 285BA(66)quiaEquity Linked SavingsDepositories Act, 1996Section 3(36)Section 115VD(1)(j)mutatis mutandisRule 125(3)Section 40(ba)Chapter VI-BClause (xxiv)Section 92CC(9)Rule 10H(9)SPVSection 115BBI(2)(b)Section 12AC(2)(c)Section 144B(3)(v)Section 115TC(2)(h)Section 93(4)Section 115ACA(a)(ii)Section 10(20)Section 80‑IBA(2)(e)Section 42(2)(b)Section 245A(f)Section 88E(4)Rule 21A(3)(i)any directorForm No. 10BBBPart IISection 245W(2)(b)Section 246A(1)(e)Rule 114(2)Section 198(1)Section 12AB(1)(d)Section 10(29A)(b)Section 115JG(3)(iii)Section 80U(4)Section 270A(7)(a)Section 170A(3)(b)234BSection 116(cc)Section 271AAB(1)(a)(i)Rule 17C(1)Section 156(2)(b)Section 47(xiizza)Section 276B(b)(iii)Form No. 15D167BTax Return PreparerSection 285BA(295)Rule 65Section 139BRule 30(1)(d)Rule 10MA(4) ProvisoSection 245BA(3)any other allowanceSection 80CCG(2)Specified proceedingForm No. 10CCQSection 112A(2)(ii)Joint Directors of Income-taxnotified institutionsSection 264B(1)(a)Section 115WB(2)(E)(vi)Gross Annual ValueSection 115J(4)tonnage tax businessSection 295(2)(h)Section 54B(1)(i)Section 277(1)Beneficial OwnerSection 285BA(380)Section 115VT(3)(b)Section 269-UD(1)Section 115WKC(4)Section 80-IBA(2)(c)geoisSections 251Section 110(a)Section 269M(1)(a)Exclude freightSection 245BC(2)(b)Section 145(2B)Section 151(2)Section 115AD(3ZZZZZZR)kieRules 48–57Section 13(2)Section 275ASection 115WE(1A)Rule 6AB(1)(e)CBDT circularsSection 228A(1)Rule 114DSection 271AAB(1)(a)(ii)Section 245AA(3)(b)Section 115WC(1)(D)Section 245A(m)amalgamating companyForm No. 
10BSection 115R(2)(i)Section 139AA(iv)271ESection 80HHE(b)aravelForm 16DSection 269UB(3)(b)Rule 28(3)(i)Rule 30(6A)Section 295(2)(b)Section 259(2)(a)Section 47(xiizzzzc)Sections 158BESection 115VR(2)accoSection 80JJA(5)60/2018Section 115WE(1)(c)(i)limited liability partnershipSection 45(2A)Section 297(2)(l)reibSection 9A(8A)Rule 37CA(1)(ii)Section 92BA(vb)Section 80‑IA(10)Section 286(9)(l)Section 2(1)(q)Section 11(1)(c)(i)Section 144B(7)(ix)private discretionarySection 115AD(3ZZZG)Rule 10TA(1)(iv)Section 271AAB(1A)(a)(i)Rule 6G(1)(a)Section 155(5L)Section 54EC(1)(a)Section 47(xiizl)Section 115BAC(2)(iii)Set‑off of LossSection 206C(3ZZZA)Excess interestTaxable salarySection 272A(2)(m)ernerWealth-tax Act, 1957Section 10(6B)Section 47(xiizg)Section 144BA(3)Paragraph 3Section 80HHB(2)(b)(iii)Rule 40(1)(E)Annexure VSection 35(5)claim disallowedSection 115AD(3ZZZZZZB)Section 151A(2)(ii)Section 43D(f)Rule 31A(2)(b)Section 269UO(a)Rule 6ABA(1)(d)Section 269N(a) Section 269UO(a)Rule 10UD(1)(i)Section 115WKA(2)(d)Section 269UA(b)(2)(i)Section 245MA(2)(b)(iii)Section 192ASection 153CRule 31(3)(v) مجSection 285BA(207)Section 115WB(1)(c)Rule 47Section 232(5)Section 160(2)Sections 272BRule 41BRule 11UA(1)(c)(b)(L)245CSection 112A(2)(ii)Rule 10H(3)Section 80EEB(5)(b)(ii)Section 115BBHSection 35CCA(2)(e)Section 2(25A)èoSection 133B(2)(a)Section CodeSection 115R(2)(b)Section 115JA(2)(v)Rule 48K(1) DünForm No. 35ASection 80AC(1)(b)Sections 166Section 194N(a)Clause (xii)(b)Section 245D(6)infrastructure facilitySection 245T(1)(c)Section 97(1)(f)Category II AIFSection 91(4)Section 80-IA(3)(ii)Winnings coveredegersequity sharesSection 35ERule 11UAD(1)(v)auditorSection 234A(3)(c)Section 33(1)(b)(iii)(b)Section 167B(2)Section 142B(2)Section 31(3)Section 35AD(5)(ii)Section 285BA(446)ICDS IIISection 115BAB(2)(b)Section 80-IB(10)(e)Section 176(5)(a)Section 80CCH(1)Section 115TC(2)(zr)Rule 31A(2)(iii)EFAULTningerSection 286(9)(d)(i)Section 245F(1)Section 115V(2)(e)Section 115JA(1A)Rule 10TB(1)(iv)alseSection 10B(1A)1st April, 201943/2017House Rent AllowanceSection 115UA(2)(i)Finance Act, 1988Section 194J(3)Section 33B(2)(a)Section 172(1) ProvisoSection 245Q(2)Section 206C(3ZZZO)Rule 12CB(1)(b)ilogySection 285BA(31)Section 118(1)(b)Section 47(vii)346Rule 16F(2)Section 234C(1)(b)(iii)Section 144C(8)(b)Rule 12B(5)Section 47(xiizzzq)skoquoted sharesSections 139(4A)Section 97(5)any other propertyRule 42Section 197A(2)Section 59(1)(b)Section 250(7)Rule 44G(1)Section 285BA(440)Rule 112D(2)ivicンダRule 46A(2)Section 155(10E)Section 9B(i)Section 88E(2)(d)Section 33AC(1)(b)Fourth ScheduleSection 72A(4)Section 44AARule 133(4)(iii)IntelligenceRule 10D(1)(c)–(f)acadesSection 285BA(250)Section 16(iia)Section 115QD(2)azinesSection 124(3)(c)nature of incomeSection 273A(4)Rule 11Q(3)Rule 48K(3)Section 245BD(3)Rule 8B(1)(b)Section 245HA(1)(iii)Section 45(1A)(ii)LastErrorSection 115ACA(1)(ii)(B)Rule 114-I(1)(d)deenspecified sumRule 10UOCarry ForwardSection 115V-I(4)(b)Excess PaymentRule 114A(1)(b)Specified incomeSection 35A(1)Section 80DD(1)Section 282A(4)ситSection 206C(3ZZZZZZC)Section 285BA(176)Section 273(1)(a)Section 115V(2)(d)Section 115C(f)(iv)Form 16ASection 234F(1)Section 115VK(4)(c)̧Rule 19AE(4)Section 115WC(2)Rule 10D(4)(vi)Prescribed ParticularsulpSection 206CB(1)(b)(v)Section 144B(6)(i)(A)Rule 21AJE(8)(vii)Section 80‑IC(3)(i)Section 285B(1)Section 115ACAVOKE
```
which is just a mess of the custom tokens I added to the tokenizer that I used to train Llama-3.2-11B-Vision:
```
base_model: alpindale/Llama-3.2-11B-Vision-Instruct
tokenizer_config: ./itai_tokenizer
tokenizer_type: AutoTokenizer
```
except this tokenizer was made using code that looks like:
```
def create_tokenizer(self):
    # Load the base tokenizer
    tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3.1-8B-Instruct")
```
Should this tokenizer have been built from `alpindale/Llama-3.2-11B-Vision-Instruct`?
Or is this fine, since I used `chat_template: llama3` to train the model along with the tokenizer from `NousResearch/Meta-Llama-3.1-8B-Instruct`?
Also, for some reason, with this part of the config:
```
logging_steps: 1
# flash_attention: true
sdp_attention: true
```
if I set `flash_attention: true` I get the error
`AttributeError: 'MllamaTextSelfAttention' object has no attribute 'is_causal'`
Why is that?
even though the [example config](https://github.com/axolotl-ai-cloud/axolotl/blob/main/examples/llama-3-vision/lora-11b.yaml) for Llama 3.2 Vision says:
```
gradient_checkpointing: true
logging_steps: 1
flash_attention: true # use for text-only mode
```
Could someone help me out on what the issue might be?
Also where can I learn more on this? I would really appreciate it.
Thank you.
**What laptop would you choose? Ryzen AI MAX+ 395 with 128GB of unified RAM, or Intel 275HX + Nvidia RTX 5090 (128GB of RAM + 24GB of VRAM)?** (score 67)
Posted by u/cl0p3z on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3evon/what_laptop_would_you_choose_ryzen_ai_max_395/

For more or less the same price I can choose between these two laptops:
- HP G1a: **Ryzen AI MAX+ 395 with 128GB of RAM (no eGPU)**
- Lenovo ThinkPad P16 Gen 3: **Intel 275HX with 128GB of RAM + Nvidia RTX 5090 with 24GB of VRAM**
What would you choose, and why?
What can I do with AI/LLMs on one that I can't do with the other?
**Design Arena launches agent arena... with no open-weight agents** (score 0)
Posted by u/Helpful_Jacket8953 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3ev4k/design_arena_launches_agent_arena_with_no_open/

Which should be added?
**GLM 4.5 on 5090** (score 5)
Posted by u/eCityPlannerWannaBe on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3epmj/glm_45_on_5090/

What's the best way to run it? I have vllama and LM Studio, but I'm still learning.
**GPT-OSS from Scratch on AMD GPUs** (score 166)
Posted by u/tuanlda78202 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3dfib/gptoss_from_scratch_on_amd_gpus/

For the first time since GPT-2 six years ago, OpenAI has released new open-weight LLMs: gpt-oss-20b and gpt-oss-120b. From day one, many inference engines such as llama.cpp, vLLM, and SGLang have supported these models; however, most focus on maximizing throughput using CUDA for NVIDIA GPUs, offering limited support for AMD GPUs. Moreover, their library-oriented implementations are often complex to understand and difficult to adapt for personal or experimental use cases.
To address these limitations, my team introduces "gpt-oss-amd", a pure C++ implementation of OpenAI's GPT-OSS models designed to maximize inference throughput on AMD GPUs without relying on external libraries. Our goal is to explore end-to-end LLM optimization, from kernel-level improvements to system-level design, providing insights for researchers and developers interested in high-performance computing and model-level optimization.
Inspired by llama2.c by Andrej Karpathy, our implementation uses HIP (an AMD programming model equivalent to CUDA) and avoids dependencies such as rocBLAS, hipBLAS, RCCL, and MPI. We utilize multiple optimization strategies for the 20B and 120B models, including efficient model loading, batching, multi-streaming, multi-GPU communication, optimized CPU–GPU–SRAM memory access, FlashAttention, matrix-core–based GEMM, and load balancing for MoE routing.
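For readers who haven't touched HIP: it really is CUDA-shaped. A minimal standalone kernel (my own illustration, not code from the gpt-oss-amd repo) looks like this:

```cpp
// Illustrative HIP example only: a vector-add kernel launched with
// CUDA-style triple-chevron syntax, built with hipcc.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed allocations keep the example short (no explicit hipMemcpy).
    hipMallocManaged(&a, n * sizeof(float));
    hipMallocManaged(&b, n * sizeof(float));
    hipMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    hipDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}
```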
Experiments on a single node with 8× AMD MI250 GPUs show that our implementation achieves over 30k TPS on the 20B model and nearly 10k TPS on the 120B model in custom benchmarks, demonstrating the effectiveness of our optimizations and the strong potential of AMD GPUs for large-scale LLM inference.
GitHub: https://github.com/tuanlda78202/gpt-oss-amd
**I have a 6GB VRAM laptop that I rarely use. What should I do with it?** (score 0)
Posted by u/skillmaker on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3csre/i_have_a_6gb_vram_laptop_that_i_rarely_use_what/

Hey guys, I've got this laptop sitting in my room for months. I already have a desktop PC, and I only rarely use the laptop, only when I have to go to a conference or present something somewhere when not working from home.
It has an i7-9750H, 16GB RAM, a GTX 1660 Ti, and a 512GB NVMe drive.
I have been thinking of a home server that scans PDF files in a folder periodically, extracts the info, and builds a knowledge base using embeddings and LLMs (4B LLMs can fit). Do you guys have some other ideas?
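(To make the idea concrete, a rough sketch; every path, model, and chunk size here is a placeholder:)

```python
# Hedged sketch: watch a folder, embed new PDFs, store chunks in a local
# vector DB. A small chat model can then answer over retrieved chunks.
import time
from pathlib import Path
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer
import chromadb

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, fits easily in 6GB
db = chromadb.PersistentClient(path="./kb").get_or_create_collection("docs")
seen = set()

while True:
    for pdf in Path("./inbox").glob("*.pdf"):
        if pdf.name in seen:
            continue
        text = " ".join(page.extract_text() or "" for page in PdfReader(pdf).pages)
        chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
        db.add(
            ids=[f"{pdf.name}-{i}" for i in range(len(chunks))],
            documents=chunks,
            embeddings=embedder.encode(chunks).tolist(),
        )
        seen.add(pdf.name)
    time.sleep(600)  # rescan every 10 minutes
```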
**Any model recommendations for running an agent with vLLM?** (score 1)
Posted by u/alew3 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3cmkc/any_model_recommendations_for_running_an_agent/

I've tried all the following models with vLLM to try and replace gpt-4o running our company agent:
- Qwen3-30B-A3B
- Gemma3-27B
- GPT-OSS-20B
- Mistral3.2-24B
and they all seem to end up with some kind of unreliable tool calling (streaming / non-streaming modes) when running on vLLM.
Has anybody found a model (under 48GB VRAM) that does reliable tool calling with vLLM?
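(One thing worth ruling out first, sketched below: vLLM only emits structured tool calls if a model-matched parser is enabled at serve time; without it, calls arrive as plain text and look flaky to the client. The parser name depends on the model family, so check vLLM's tool-calling docs for yours.)

```bash
# Example: Qwen models typically use the hermes-style parser.
vllm serve Qwen/Qwen3-30B-A3B \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```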
**Failed to load the model - Qwen3 VL 30b a3b in LM Studio 0.3.30** (score 5)
Posted by u/michalpl7 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3bsrw/failed_to_load_the_model_qwen3_vl_30b_a3b_in_lm/

Hello, I'm trying to load **Qwen3 VL 30b a3b** in **LM Studio**, but it ends up with this **error**:

*"error loading model: error loading model architecture: unknown model architecture: 'qwen3vlmoe'"*

I'm using **LM Studio 0.3.30**.

My hardware is **Ryzen R9 5900HS / 32 GB RAM / RTX 3060 6GB / Win 11**, with the latest Nvidia drivers (581.42).
I'm also having similar errors loading **LFM2-8B-A1B**:

*"error loading model: error loading model architecture: unknown model architecture: 'lfm2moe'"*
I don't have such issues with other models like:
* Qwen3-30b-a3b
* GPT-OSS-20b
* Gemma-3-12b
* Qwen2.5-VL-7b
* ...
Is there anything I can do to run these failing models on my system?
Thnx :)
**Does anyone know the best model to use that fits the following criteria?** (score 0)
Posted by u/Witty_Mycologist_995 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3bnpd/does_anyone_know_the_best_model_to_use_that_fits/

1. Fits in 16GB VRAM. If MoE, must fit in 32GB regular RAM
2. Strong native tool calling
3. Can use a Playwright browser
4. Strong with code
5. Doesn’t sound like a dead rat/can rp | 2025-10-10T20:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o3bnpd/does_anyone_know_the_best_model_to_use_that_fits/ | Witty_Mycologist_995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3bnpd | false | null | t3_1o3bnpd | /r/LocalLLaMA/comments/1o3bnpd/does_anyone_know_the_best_model_to_use_that_fits/ | false | false | self | 0 | null |
**Building a multi-agent financial bot using Agno, Maxim, and YFinance** (score 22)
Posted by u/Otherwise_Flan7339 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3bhad/building_a_multiagent_financial_bot_using_agno/

I was experimenting with Agno for multi-agent orchestration and paired it with Maxim for tracing and observability. The setup follows a cookbook that walks through building a financial conversational agent with Agno, YFinance, and OpenAI models, while instrumenting everything for full visibility.
Here’s the core workflow:
1. **Agent setup**
* Defined two agents in Agno (a rough code sketch follows after this workflow):
* *Finance agent*: uses YFinance and OpenAI GPT-4 for structured financial data.
* *Web agent*: uses Serper or a similar search API to pull recent company news.
2. **Coordination layer**
* Agno handles task routing and message passing between these agents.
* Both agents are instrumented via Maxim’s SDK, which captures traces, tool calls, model usage, and metadata for every step.
3. **Observability with Maxim**
* Traces every LLM call, agent step, and tool execution.
* Exposes performance metrics and intermediate reasoning chains.
* Makes debugging multi-agent flows much easier since you can see which component (model, tool, or agent) caused latency or failure.
4. **Interactive loop**
* A basic REPL setup allows real-time queries like: "Summarize the latest financial news on NVIDIA and show its current stock stats."
* The system delegates parts of the query across agents, aggregates results, and returns the final response.
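Putting steps 1 and 2 into code, roughly (written from memory of Agno's docs, so exact module and argument names may differ by version; the DuckDuckGo tool is a stand-in for the Serper-style search API, and Maxim instrumentation is omitted):

```python
# Hedged sketch of the two-agent setup; treat names as version-dependent.
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.yfinance import YFinanceTools
from agno.tools.duckduckgo import DuckDuckGoTools

finance_agent = Agent(
    name="Finance Agent",
    model=OpenAIChat(id="gpt-4o"),
    tools=[YFinanceTools(stock_price=True, company_news=True)],
    instructions="Use tables to display structured financial data.",
)

web_agent = Agent(
    name="Web Agent",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],  # stand-in for a Serper-style search tool
    instructions="Always include sources.",
)

# A coordinating agent routes sub-tasks to the two specialists.
team = Agent(
    team=[finance_agent, web_agent],
    model=OpenAIChat(id="gpt-4o"),
    instructions="Send market data to the finance agent, news to the web agent.",
)

team.print_response("Summarize the latest news and stock stats for NVIDIA.")
```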
Some observations
* Tracing multi-agent systems quickly becomes essential as orchestration complexity grows.
* You trade off some latency for much clearer visibility.
* The hardest part is correlating traces across asynchronous tool calls.
Would love to compare how people handle trace correlation and debugging workflows in larger agent networks.
**What is the best code autocomplete model for 8GB VRAM + 32GB RAM?** (score 16)
Posted by u/Anru_Kitakaze on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3b4e0/what_is_the_best_code_auto_complete_model_for_8/

I'm currently using Qwen 2.5 Coder 7B with Continue autocomplete in VS Code / GoLand:
- it's fast and fully in VRAM
- it's trained with fill-in-the-middle (FIM) usage in mind
- it's even useful sometimes and actually completes what I want
- no thinking (required to be fast for autocomplete)
I like that it can autocomplete text for log messages, comments, or sometimes variable names/struct fields according to my latest actions. It's exactly what I need: just autocomplete the current line, or maybe 1-2 lines more.
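(For context on what "trained for FIM" means in practice, a rough sketch: Qwen 2.5 Coder's documented fill-in-the-middle tokens sent as a raw completion to a local llama.cpp server. The port, endpoint, and code snippet are placeholders.)

```python
# Hedged sketch: raw FIM request against a local llama-server instance.
import requests

prefix = "def read_config(path):\n    with open(path) as f:\n        "
suffix = "\n    return cfg\n"

# Qwen 2.5 Coder's documented FIM special tokens.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

r = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 64, "temperature": 0.2},
)
print(r.json()["content"])  # the model's proposed "middle"
```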
My question is: are there better models for exactly this purpose nowadays? I've tried these models:
- JetBrains' "something 4B": too dumb compared to Qwen 2.5 Coder 7B in my practice
- Qwen 3 4B (no thinking): fails to give autocomplete responses because it can't handle fill-in-the-middle tags properly, and Continue outputs nothing, or a closing tag, for some reason
- Granite 4B: bruh
- CodeLlama: I don't remember "why" anymore, but I tossed it away
- Gemma 3 12B: too slow for autocomplete, at least on my hardware. On top of that, it wasn't remarkably good at coding
I use GPT-OSS-20B and Qwen 30B A3B for chat, so I need a smaller FIM coder model for autocomplete, and I can't believe that Qwen 2.5 Coder 7B is still, almost a year after my first try, the best. With all that progress in slightly larger models!
What is the best local autocomplete model in your opinion?
**GLM 5 coming before the end of 2025** (score 293)
Posted by u/Helpful_Jacket8953 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3atdu/glm_5_coming_before_the_end_of_2025/

Get ready. At this rate it seems like there's a real chance it'll start surpassing SOTA models on some benchmarks, not just DeepSeek.
**Google's Gemini 2.5 Pro spontaneously declared itself 'the Alpha and the Omega' during normal use in Cline. No jailbreak.** (score 0)
Posted by u/Practical_Cod_8493 on 2025-10-10: https://www.reddit.com/r/LocalLLaMA/comments/1o3admz/googles_gemini_25_pro_spontaneously_declared/

Has anyone else experienced LLMs going completely off the rails like this?
I saw this on LinkedIn: Gemini 2.5 Pro apparently declared itself "the Alpha and the Omega" during a normal conversation in Cline, with no jailbreak involved. It makes me curious how common these failures are.
**Lightning-SimulWhisper: A real-time speech transcription model for Apple Silicon** (score 17)
Posted by u/redditgivingmeshit on 2025-10-10: https://github.com/altalt-org/Lightning-SimulWhisper

Basically, it's a **CoreML/MLX translation of SimulStreaming** (the 2025 SOTA in simultaneous speech transcription), which is itself a combination of Simul-Whisper and WhisperStreaming.
I'm currently building an application, and I thought I would open up the backend model code for everyone to use.
I get a ~15x speed increase on my M2 MacBook Pro compared to the original PyTorch implementation, and I'm gonna be using the medium model, which has a nice balance between memory usage and accuracy.
The CoreML part is from whisper.cpp and only contains the encoder; the MLX part is from mlx-whisper.
It's very beta and I haven't tested it on other computers, so please feel free to leave issues/PRs/contributions 😀
Need suggestions | 1 | [removed] | 2025-10-10T19:04:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o39urr/need_suggestions/ | sheikh_ibsan_uddin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o39urr | false | null | t3_1o39urr | /r/LocalLLaMA/comments/1o39urr/need_suggestions/ | false | false | self | 1 | null |
FAST Predicted Outputs in vLLM | 1 | [removed] | 2025-10-10T18:54:00 | https://www.reddit.com/r/LocalLLaMA/comments/1o39l0l/fast_predicted_outputs_in_vllm/ | sweet-potay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o39l0l | false | null | t3_1o39l0l | /r/LocalLLaMA/comments/1o39l0l/fast_predicted_outputs_in_vllm/ | false | false | self | 1 | null |
Here we go again | 714 | 2025-10-10T18:36:34 | Namra_7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o394p3 | false | null | t3_1o394p3 | /r/LocalLLaMA/comments/1o394p3/here_we_go_again/ | false | false | default | 714 | {'enabled': True, 'images': [{'id': 'b2abfaikwbuf1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=108&crop=smart&auto=webp&s=8306f67c18a94dcb0699dd78523f84d104c0838c', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=216&crop=smart&auto=webp&s=75e0f3c36c02a7065ecaeb4b60f5a1b9454ab666', 'width': 216}, {'height': 122, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=320&crop=smart&auto=webp&s=891fa67648fa8b1111301493d69ce84eaf8d4136', 'width': 320}, {'height': 245, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=640&crop=smart&auto=webp&s=7db3949cb0def07809e7a9ba9a730d1582083844', 'width': 640}, {'height': 368, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=960&crop=smart&auto=webp&s=bf11edaf6f78e3dd5c95999263abefa8051f58d0', 'width': 960}, {'height': 414, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?width=1080&crop=smart&auto=webp&s=17bab039f5438be7354d53e05a794d9de2bbba2d', 'width': 1080}], 'source': {'height': 414, 'url': 'https://preview.redd.it/b2abfaikwbuf1.png?auto=webp&s=4e68375c7a2b75991ec2d44bbf3e45db01c723ac', 'width': 1080}, 'variants': {}}]} | ||
does codex support sub agent? | 3 | Trying to make my coding pipeline faster with codex. Does it support sub agents? if so how do you do it? | 2025-10-10T18:26:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o38v0z/does_codex_support_sub_agent/ | aniketmaurya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o38v0z | false | null | t3_1o38v0z | /r/LocalLLaMA/comments/1o38v0z/does_codex_support_sub_agent/ | false | false | self | 3 | null |
Better Cline - Fall ide | 3 | Hi, we are a small team building a new vibe coding assistant. We are doing a lot of incremental releases and would love community feedback.
Fall IDE is a VS Code plugin with 2 notable features:
1. Patent pending graph tech for Java
2. Templates on how to build things step by step.
Do check it out and write to us with the good, the bad, and the ugly.
Try it out now!
https://marketplace.visualstudio.com/items?itemName=FallIDETeam.fall-ide | 2025-10-10T18:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/1o38lfi/better_cline_fall_ide/ | Fall-IDE-Admin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o38lfi | false | null | t3_1o38lfi | /r/LocalLLaMA/comments/1o38lfi/better_cline_fall_ide/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=108&crop=smart&auto=webp&s=2d0ca901b3e9100e32cc09bcd04f10e579961f80', 'width': 108}, {'height': 223, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=216&crop=smart&auto=webp&s=ae075917f51fb58e721fe138458ec946d3fa4e13', 'width': 216}, {'height': 331, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=320&crop=smart&auto=webp&s=f58a120226e5cadf72bf23dd6561b5d18f2d1006', 'width': 320}, {'height': 663, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=640&crop=smart&auto=webp&s=fa194e95a436e74bf940b98e3c64e704c34b337d', 'width': 640}, {'height': 995, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=960&crop=smart&auto=webp&s=1c3a81685ba92d8ad29353366f27ca2eff1b9d55', 'width': 960}, {'height': 1119, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?width=1080&crop=smart&auto=webp&s=351558ba21f4ef4d0ff8c60b1bfac0e1a5979088', 'width': 1080}], 'source': {'height': 1298, 'url': 'https://external-preview.redd.it/D4ZyxzuEaUWohzPcVDrniPrY9oRmTF3ABwpV2Ntce48.png?auto=webp&s=c3a2f1e0b5fe3fe8d8870e0a66fd4feae73a2a5c', 'width': 1252}, 'variants': {}}]} |
Can anyone get this to work with local models? | 6 | ShinkaEvolve: Evolving New Algorithms with LLMs, Orders of Magnitude More Efficiently
https://github.com/SakanaAI/ShinkaEvolve
If anyone can work out how to do that it would be awesome! | 2025-10-10T18:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o38iqo/can_anyone_get_this_to_work_with_local_models/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o38iqo | false | null | t3_1o38iqo | /r/LocalLLaMA/comments/1o38iqo/can_anyone_get_this_to_work_with_local_models/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=108&crop=smart&auto=webp&s=e8b2538e94cc3d2912db4fae93ec9f24adb4805c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=216&crop=smart&auto=webp&s=ee33453c486cbb7591bcc82b479890c8381cd2b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=320&crop=smart&auto=webp&s=198dad3934ab8f83290b3ecd0622f7b71bb9c7cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=640&crop=smart&auto=webp&s=dbca16924ea6bc142bc36eb5579ba87807701e3c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=960&crop=smart&auto=webp&s=8d1a2c278b56dbb14b4bb44f9116441205a23ce8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?width=1080&crop=smart&auto=webp&s=e21b18f663a8c4c342b9bbc86c64f71c711154e8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MrdiMsLKoLejSBS6yfcCj4En2WkF2Y2ftbSUJeArzhg.png?auto=webp&s=1a787c98b1db8ff185f4cc389043a417a06799be', 'width': 1200}, 'variants': {}}]} |
Benchmarking LLM Inference on RTX 4090 / RTX 5090 / RTX PRO 6000 #2 | 46 | Hi LocalLlama community. I present an LLM inference throughput benchmark for RTX4090 / RTX5090 / PRO6000 GPUs, based on vLLM serving and the **vllm bench serve** client benchmarking tool.
[Full article on Medium](https://medium.com/ai-advances/rtx-4090-vs-rtx-5090-vs-rtx-pro-6000-7154a3ac4f90)
[Non-medium link](https://www.cloudrift.ai/blog/benchmarking-rtx-gpus-for-llm-inference)
# Benchmarking Setup
The hardware configurations used:
* 1x4090, 2x4090, 4x4090
* 1x5090, 2x5090, 4x5090
* 1x6000
All machines have at least 50GB of RAM per GPU with a minimum of 7 cores. The 4090 machines utilize the EPYC Milan (3rd Gen) processor, while the 5090/6000 models employ the EPYC Genoa (4th Gen) processor, resulting in slightly faster overall performance.
**I have optimized the benchmark setup for throughput**. VLLM serves models. The model is split across multiple GPUs using the --pipeline-parallel-size VLLM option, if needed. I run as many VLLM instances as possible, using an NGINX load balancer on top to distribute requests across them and maximize throughput (replica parallelism). For example, if only two GPUs are required to run the model on a 4-GPU machine, I run two VLLM instances with --pipeline-parallel-size=2 and an NGINX load balancer. If all four GPUs are required, then a single VLLM instance with --pipeline-parallel-size=4 is used.
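For reference, a minimal sketch of what such an NGINX load-balancer config could look like (illustrative only, not the exact nginx.vllm.conf used in these runs; the upstream names and ports follow the Docker Compose file shown further down):

```nginx
# Round-robin load balancing across two vLLM replicas (illustrative sketch).
events {}

http {
    upstream vllm_backends {
        server vllm_0:8000;
        server vllm_1:8000;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://vllm_backends;
            # Generous timeouts: generation requests can run for minutes at saturation.
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }
    }
}
```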
The **vllm bench serve** tool is used for benchmarking with random data and a sequence length of 1000. The number of concurrent requests is set to 400 to ensure saturation of the LLM token generation capacity.
I have benchmarked three different models to better understand the effect of PCIe communication on final LLM performance. I tried to find the largest modern model that fits into a single 4090, two 4090s, and four 4090s. It would be possible to fit larger GGUF models, but vLLM supports GGUF poorly, and I wanted to use vLLM because it is optimized for high-throughput serving.
Here is the model selection and the logic behind it:
1. **Qwen3-Coder-30B-A3B-Instruct-AWQ (fits 24GB).** This 4-bit quantized model fits into a single RTX4090. Thus, scaling the number of GPUs yields a linear scale in throughput, so 4 x 4090 and 4 x 5090 configurations should have an edge as they have more raw compute power.
2. **Meta-Llama-3.3-70B-Instruct-AWQ-INT4 (fits 48GB).** This 4-bit quantized model fits into 2 x 4090. Some communication over PCIe can lower the performance of multi-GPU setups.
3. **GLM-4.5-Air-AWQ-4bit (fits 96GB).** This model requires all four 4090s, so PCIe communication will likely be a bottleneck, and the Pro 6000 should have an edge.
Besides raw throughput, graphs contain the serving cost per million tokens for the respective model on the respective hardware. The rental price is set to $0.39 per hour for 4090, $0.65 for 5090, and $1.29 for Pro 6000. These prices are typical for GPU rentals at [neuralrack.ai](http://neuralrack.ai), which provided the hardware for this benchmark. You can adjust the GPU price in the [config.yml](https://github.com/cloudrift-ai/server-benchmark/blob/main/config.yaml) file in the [benchmark repository](https://github.com/cloudrift-ai/server-benchmark) and invoke **make report** to generate a new report that better reflects your situation.
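As a rough illustration, the pricing section of that config might look like the sketch below; the key names here are hypothetical, so check the repository's actual config.yaml for the real schema:

```yaml
# Hypothetical shape of the GPU pricing config (not the repo's exact schema).
gpu_prices_usd_per_hour:
  rtx_4090: 0.39
  rtx_5090: 0.65
  rtx_pro_6000: 1.29
```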
# Results
The **overall winner is RTX PRO 6000** for its consistent performance across all model sizes and best cost-efficiency for larger models. However, if your workload primarily involves smaller models, the multi-GPU RTX 5090 can offer better absolute throughput at a lower cost.
**Small Models (fits 24GB)**: Multi-GPU consumer configurations offer the best value due to replica parallelism, but RTX PRO 6000 is very close.
**Medium Models (fits 48GB)**: RTX 5090 configuration provides the best balance of performance and cost, followed by RTX PRO 6000.
**Large Models (fits 96GB)**: RTX PRO 6000 emerges as the clear winner despite its higher hourly cost, thanks to the elimination of PCIe overhead.
[Price is in millidollars, i.e. around $0.04](https://i.redd.it/iec6lp29qbuf1.gif)
https://i.redd.it/78gf2bnbqbuf1.gif
https://i.redd.it/bc529zlcqbuf1.gif
# Code and Resources
The code is available [here](https://github.com/cloudrift-ai/server-benchmark). Instructions for performing your own benchmark are in the README. You can find the benchmark data in the results folder. Each benchmark logs the result, the Docker Compose file used for serving, and the benchmarking command like this:
============ Serving Benchmark Result ============
Successful requests: 1200
Maximum request concurrency: 400
Benchmark duration (s): 980.85
Total input tokens: 1196743
Total generated tokens: 1200000
Request throughput (req/s): 1.22
Output token throughput (tok/s): 1223.42
Peak output token throughput (tok/s): 3343.00
Peak concurrent requests: 408.00
Total Token throughput (tok/s): 2443.53
---------------Time to First Token----------------
Mean TTFT (ms): 158275.93
Median TTFT (ms): 166262.87
P99 TTFT (ms): 273238.49
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms): 134.71
Median TPOT (ms): 123.86
P99 TPOT (ms): 216.70
---------------Inter-token Latency----------------
Mean ITL (ms): 134.57
Median ITL (ms): 55.98
P99 ITL (ms): 1408.24
----------------End-to-end Latency----------------
Mean E2EL (ms): 292848.13
Median E2EL (ms): 311149.01
P99 E2EL (ms): 399504.14
==================================================
============ Docker Compose Configuration ============
services:
vllm_0:
image: vllm/vllm-openai:latest
container_name: vllm_benchmark_container_0
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0', '1']
capabilities: [gpu]
volumes:
- /hf_models:/hf_models
environment:
- HUGGING_FACE_HUB_TOKEN=
ports:
- "8000:8000"
shm_size: '16gb'
ipc: host
command: >
--trust-remote-code
--gpu-memory-utilization=0.9
--host 0.0.0.0
--port 8000
--tensor-parallel-size 1
--pipeline-parallel-size 2
--model /hf_models/ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
--served-model-name ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
--max-model-len 8192 --kv-cache-dtype fp8
healthcheck:
test: ["CMD", "bash", "-c", "curl -f http://localhost:8000/health && curl -f http://localhost:8000/v1/models | grep -q 'object.*list'"]
interval: 10s
timeout: 10s
retries: 180
start_period: 600s
vllm_1:
image: vllm/vllm-openai:latest
container_name: vllm_benchmark_container_1
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['2', '3']
capabilities: [gpu]
volumes:
- /hf_models:/hf_models
environment:
- HUGGING_FACE_HUB_TOKEN=
ports:
- "8001:8000"
shm_size: '16gb'
ipc: host
command: >
--trust-remote-code
--gpu-memory-utilization=0.9
--host 0.0.0.0
--port 8000
--tensor-parallel-size 1
--pipeline-parallel-size 2
--model /hf_models/ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
--served-model-name ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
--max-model-len 8192 --kv-cache-dtype fp8
healthcheck:
test: ["CMD", "bash", "-c", "curl -f http://localhost:8000/health && curl -f http://localhost:8000/v1/models | grep -q 'object.*list'"]
interval: 10s
timeout: 10s
retries: 180
start_period: 600s
nginx:
image: nginx:alpine
container_name: nginx_lb
ports:
- "8080:8080"
volumes:
- /home/riftuser/server-benchmark/nginx.vllm.conf:/etc/nginx/nginx.conf:ro
depends_on:
- vllm_0
- vllm_1
benchmark:
image: vllm/vllm-openai:latest
container_name: vllm_benchmark_client
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
volumes:
- /hf_models:/hf_models
environment:
- HUGGING_FACE_HUB_TOKEN=
- CUDA_VISIBLE_DEVICES=""
entrypoint: ["/bin/bash", "-c"]
command: ["sleep infinity"]
profiles:
- tools
============ Benchmark Command ============
vllm bench serve
--model ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4
--dataset-name random
--random-input-len 1000 --random-output-len 1000 --max-concurrency 400 --num-prompts 1200
--ignore-eos --backend openai-chat --endpoint /v1/chat/completions
--percentile-metrics ttft,tpot,itl,e2el
--base-url http://nginx_lb:8080
==================================================
# Future Work
This work is an enhanced version of the [benchmark previously shared with the community](https://www.reddit.com/r/LocalLLaMA/comments/1nr9arw/benchmarking_llm_inference_on_rtx_4090_rtx_5090/). Thank you, everyone, for your feedback. Please let me know if you have any concerns with the benchmarking methodology or would like to see other benchmarks in the future. I am thinking of benchmarking multi-RTX PRO 6000 vs multi-H200 setups on large models. | 2025-10-10T18:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o387tc/benchmarking_llm_inference_on_rtx_4090_rtx_5090/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o387tc | false | null | t3_1o387tc | /r/LocalLLaMA/comments/1o387tc/benchmarking_llm_inference_on_rtx_4090_rtx_5090/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=108&crop=smart&auto=webp&s=ad32a409fe36f9f3715d8904e346dac5da1b9ed5', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=216&crop=smart&auto=webp&s=11b00c724d5a4ded334fa2802d8a4332e262981d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=320&crop=smart&auto=webp&s=a69cd5ede67ad98656257013e49e136e9d201fd3', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=640&crop=smart&auto=webp&s=41d29fb95277e1105a79cd48e9201de0ef0ede48', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=960&crop=smart&auto=webp&s=248cfe3e5b56a30944562496c1065f05b64a0a95', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?width=1080&crop=smart&auto=webp&s=29af8393fb460a7cae6f9afdafd2fa4c867cea8e', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/goJG5hCL1_yH4_KvjjKtG66z1VpylH-D59Sj44-rYo4.png?auto=webp&s=bc54888d731517ad42a896b8a713dca06dec7a16', 'width': 1200}, 'variants': {}}]} | |
Image Recognition Models | 1 | Wanted to see if there's a good open source model to run on my machine that can reliably detect specific types of images. | 2025-10-10T17:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o3845p/image_recognition_models/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3845p | false | null | t3_1o3845p | /r/LocalLLaMA/comments/1o3845p/image_recognition_models/ | false | false | self | 1 | null |
Kokoro TTS 82M (How To Have It Process From GPU instead of CPU)? | 2 | Mine no matter what seems to default to CPU. So was curious if anyone knew how to force it to process files instead with the GPU. | 2025-10-10T17:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o382o7/kokoro_tts_82m_how_to_have_it_process_from_gpu/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o382o7 | false | null | t3_1o382o7 | /r/LocalLLaMA/comments/1o382o7/kokoro_tts_82m_how_to_have_it_process_from_gpu/ | false | false | self | 2 | null |
bro disappeared like he never existed | 559 | Knowing him is a sign you’ve been in the AI game for a long time | 2025-10-10T17:18:54 | Full_Piano_3448 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o371t4 | false | null | t3_1o371t4 | /r/LocalLLaMA/comments/1o371t4/bro_disappeared_like_he_never_existed/ | false | false | default | 559 | {'enabled': True, 'images': [{'id': '2e01fz4pibuf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/2e01fz4pibuf1.png?width=108&crop=smart&auto=webp&s=acf48b6689102c3d1bc6244b73b9dacb1e7d27cc', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/2e01fz4pibuf1.png?width=216&crop=smart&auto=webp&s=291ee8c0660a13b006855dd76f9f7fe30fcaf0ef', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/2e01fz4pibuf1.png?width=320&crop=smart&auto=webp&s=3d0c56d38f300665839005796d2c3be53d1a5c0a', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/2e01fz4pibuf1.png?width=640&crop=smart&auto=webp&s=2158310d30c308746ab8924442748cf6a37b692a', 'width': 640}], 'source': {'height': 354, 'url': 'https://preview.redd.it/2e01fz4pibuf1.png?auto=webp&s=be539da6513cf2ab8a08360a32bb45ae175695ae', 'width': 680}, 'variants': {}}]} | |
bro disappeared like it never existed | 1 | Knowing him is a sign you’ve been in the AI game for a bit long | 2025-10-10T17:16:19 | Full_Piano_3448 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o36z9y | false | null | t3_1o36z9y | /r/LocalLLaMA/comments/1o36z9y/bro_disappeared_like_it_never_existed/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'pbi8tmmjhbuf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=108&crop=smart&auto=webp&s=3eb4f308ecc9aa16cc76620208a8d0edf242c66f', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=216&crop=smart&auto=webp&s=9e489b8417f40a793a4067e514fba7285bb5cdb8', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=320&crop=smart&auto=webp&s=2c22214ad5f95500f6de9880af8cb062f0f261bd', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=640&crop=smart&auto=webp&s=3afb49b2c2e0b7b0545fa9a2666f8c56b93cf589', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=960&crop=smart&auto=webp&s=f7389dba8366959e94589bee02f69648b81b138a', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?width=1080&crop=smart&auto=webp&s=e1374dcaf1aa7613f6b8337a307e6a840511d190', 'width': 1080}], 'source': {'height': 625, 'url': 'https://preview.redd.it/pbi8tmmjhbuf1.png?auto=webp&s=8d2259512e454755c679ce6a4eeea7f0aa209bb0', 'width': 1200}, 'variants': {}}]} | |
[AutoBE] achieved 100% compilation success of backend generation with "qwen3-next-80b-a3b-instruct" | 75 | [AutoBE](https://github.com/wrtnlabs/autobe) is an open-source project that serves as an agent capable of automatically generating backend applications through conversations with AI chatbots.
AutoBE aims to generate 100% functional backend applications, and we recently achieved 100% compilation success for backend applications even with local AI models like `qwen3-next-80b-a3b` (as well as GPT mini models). This represents a significant improvement over our previous attempts with `qwen3-next-80b-a3b`, where most projects failed to build due to compilation errors, even though we managed to generate backend applications.
- Dark background screenshots: After AutoBE improvements
- 100% compilation success doesn't necessarily mean 100% runtime success
- Shopping Mall failed due to excessive input token size
- Light background screenshots: Before AutoBE improvements
- Many failures occurred with `gpt-4.1-mini` and `qwen3-next-80b-a3b`
| Project | `qwen3-next-80b-a3b` | `gpt-4.1` | `gpt-5` |
|---------|-------------------------------|----------------------|------------------|
| To Do List | [To Do](https://github.com/wrtnlabs/autobe-example-todo-qwen-qwen3-next-80b-a3b-instruct) | [Big](https://github.com/wrtnlabs/autobe-example-todo-openai-gpt-4.1) / [Mini](https://github.com/wrtnlabs/autobe-example-todo-openai-gpt-4.1-mini) | [Big](https://github.com/wrtnlabs/autobe-example-todo-openai-gpt-5) / [Mini](https://github.com/wrtnlabs/autobe-example-todo-openai-gpt-5-mini) |
| Economic Discussion | [BBS](https://github.com/wrtnlabs/autobe-example-bbs-qwen-qwen3-next-80b-a3b-instruct) | [Big](https://github.com/wrtnlabs/autobe-example-bbs-openai-gpt-4.1) / [Mini](https://github.com/wrtnlabs/autobe-example-bbs-openai-gpt-4.1-mini) | [Big](https://github.com/wrtnlabs/autobe-example-bbs-openai-gpt-5) / [Mini](https://github.com/wrtnlabs/autobe-example-bbs-openai-gpt-5-mini) |
| Reddit Community | [Reddit](https://github.com/wrtnlabs/autobe-example-reddit-qwen-qwen3-next-80b-a3b-instruct) | [Big](https://github.com/wrtnlabs/autobe-example-reddit-openai-gpt-4.1) / [Mini](https://github.com/wrtnlabs/autobe-example-reddit-openai-gpt-4.1-mini) | [Big](https://github.com/wrtnlabs/autobe-example-reddit-openai-gpt-5) / [Mini](https://github.com/wrtnlabs/autobe-example-reddit-openai-gpt-5-mini) |
| E-Commerce | Failed | [Big](https://github.com/wrtnlabs/autobe-example-shopping-openai-gpt-4.1) / [Mini](https://github.com/wrtnlabs/autobe-example-shopping-openai-gpt-4.1-mini) | Failed |
> Of course, achieving 100% compilation success for backend applications generated by AutoBE does not mean that these applications are 100% safe or will run without any problems at runtime.
>
> AutoBE-generated backend applications still don't pass 100% of their own test programs. Sometimes AutoBE writes incorrect SQL queries, and occasionally it misinterprets complex business logic and implements something entirely different.
>
> - Current test function pass rate is approximately 80%
> - We expect to achieve 100% runtime success rate by the end of this year
Through this month-long experimentation and optimization with local LLMs like `qwen3-next-80b-a3b`, I've been amazed by their remarkable function calling performance and rapid development pace.
The core principle of AutoBE is not to have AI write programming code as text for backend application generation. Instead, we developed our own AutoBE-specific compiler and have AI construct its AST (Abstract Syntax Tree) structure through function calling. The AST inevitably takes on a highly complex form with countless types intertwined in unions and tree structures.
When I experimented with local LLMs earlier this year, not a single model could handle AutoBE's AST structure. Even Qwen's previous model, `qwen3-235b-a22b`, couldn't get through it cleanly. The AST structures of AutoBE's specialized compilers, such as [`AutoBePrisma`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/prisma/AutoBePrisma.ts), [`AutoBeOpenApi`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/openapi/AutoBeOpenApi.ts), and [`AutoBeTest`](https://github.com/wrtnlabs/autobe/blob/main/packages/interface/src/test/AutoBeTest.ts), acted as gatekeepers, preventing us from integrating local LLMs with AutoBE. But in just a few months, newly released local LLMs suddenly succeeded in generating these structures, completely changing the landscape.
```typescript
// Example of AutoBE's AST structure
export namespace AutoBeOpenApi {
export type IJsonSchema =
| IJsonSchema.IConstant
| IJsonSchema.IBoolean
| IJsonSchema.IInteger
| IJsonSchema.INumber
| IJsonSchema.IString
| IJsonSchema.IArray
| IJsonSchema.IObject
| IJsonSchema.IReference
| IJsonSchema.IOneOf
| IJsonSchema.INull;
}
export namespace AutoBeTest {
export type IExpression =
| IBooleanLiteral
| INumericLiteral
| IStringLiteral
| IArrayLiteralExpression
| IObjectLiteralExpression
| INullLiteral
| IUndefinedKeyword
| IIdentifier
| IPropertyAccessExpression
| IElementAccessExpression
| ITypeOfExpression
| IPrefixUnaryExpression
| IPostfixUnaryExpression
| IBinaryExpression
| IArrowFunction
| ICallExpression
| INewExpression
| IArrayFilterExpression
| IArrayForEachExpression
| IArrayMapExpression
| IArrayRepeatExpression
| IPickRandom
| ISampleRandom
| IBooleanRandom
| IIntegerRandom
| INumberRandom
| IStringRandom
| IPatternRandom
| IFormatRandom
| IKeywordRandom
| IEqualPredicate
| INotEqualPredicate
| IConditionalPredicate
| IErrorPredicate;
}
```
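To make the idea concrete, here is a toy analogue (my own illustration, not AutoBE's real node types): a small discriminated union that an LLM can fill in through structured function-call arguments, so a compiler-style validator rejects malformed trees before any source code is rendered.

```typescript
// Toy expression AST in the spirit of AutoBE's unions (illustrative only).
type Expr =
  | { kind: "number"; value: number }
  | { kind: "add"; left: Expr; right: Expr }
  | { kind: "mul"; left: Expr; right: Expr };

// The switch is exhaustive over the union, so any node shape outside the
// schema fails validation instead of becoming broken generated code.
function evaluate(e: Expr): number {
  switch (e.kind) {
    case "number": return e.value;
    case "add": return evaluate(e.left) + evaluate(e.right);
    case "mul": return evaluate(e.left) * evaluate(e.right);
  }
}

// (2 + 3) * 4, expressed as the kind of JSON-like tree an LLM emits via function calling:
const tree: Expr = {
  kind: "mul",
  left: { kind: "add", left: { kind: "number", value: 2 }, right: { kind: "number", value: 3 } },
  right: { kind: "number", value: 4 },
};
console.log(evaluate(tree)); // 20
```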
As an open-source developer, I send infinite praise and respect to those creating these open-source AI models. Our AutoBE team is a small project with only 3-4 developers, and our capabilities and recognition are incomparably lower than those of LLM developers. Nevertheless, we want to contribute to the advancement of local LLMs and grow together.
To this end, we plan to develop benchmarks targeting each compiler component of AutoBE, conduct in-depth analysis of local LLMs' function calling capabilities for complex types, and publish the results periodically. We aim to release our first benchmark in about two months, covering most commercial and open-source AI models available.
We appreciate your interest and support, and will come back with the new benchmark.
## Link
- Homepage: https://autobe.dev
- Github: https://github.com/wrtnlabs/autobe | 2025-10-10T16:40:27 | https://www.reddit.com/gallery/1o3604u | jhnam88 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o3604u | false | null | t3_1o3604u | /r/LocalLLaMA/comments/1o3604u/autobe_achieved_100_compilation_success_of/ | false | false | 75 | null | |
Local, offline and fully private life-sim with llm based NPCs AI and dialogues | 0 | 2025-10-10T16:40:06 | https://www.youtube.com/watch?v=eXPA0kec-nk | Koksny | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1o35zrz | false | {'oembed': {'author_name': 'freemind games', 'author_url': 'https://www.youtube.com/@freemindgames', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eXPA0kec-nk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Love and Lie - If Sims and Tinder had no morals"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eXPA0kec-nk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Love and Lie - If Sims and Tinder had no morals', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1o35zrz | /r/LocalLLaMA/comments/1o35zrz/local_offline_and_fully_private_lifesim_with_llm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'fHCyiHtmlokgspy3BqqMFwQ5LUev_daHw51mF1yiu9s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fHCyiHtmlokgspy3BqqMFwQ5LUev_daHw51mF1yiu9s.jpeg?width=108&crop=smart&auto=webp&s=dc5d24837e65a477141ee84e6044810fcda3c93a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fHCyiHtmlokgspy3BqqMFwQ5LUev_daHw51mF1yiu9s.jpeg?width=216&crop=smart&auto=webp&s=e05e7e99e00fa6e06c8e8b8a2b998a4a9d392845', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fHCyiHtmlokgspy3BqqMFwQ5LUev_daHw51mF1yiu9s.jpeg?width=320&crop=smart&auto=webp&s=45457d0e741ffe7793d8482d7063a63897075315', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fHCyiHtmlokgspy3BqqMFwQ5LUev_daHw51mF1yiu9s.jpeg?auto=webp&s=9e21dcfc740831a5232de6d1dd092fa687f715fd', 'width': 480}, 'variants': {}}]} | |
SB 53 doesn't mention 'distill' which is funny | 1 | [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill\_id=202520260SB53](https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53)
It's unclear, if someone trains a 10\^26 FLOP model and then uses it as a judge to distill into a smaller model (like gpt-oss), whether that smaller model counts as covered on account of the judge model's training | 2025-10-10T16:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o35wtr/sb_53_doesnt_mention_distill_which_is_funny/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o35wtr | false | null | t3_1o35wtr | /r/LocalLLaMA/comments/1o35wtr/sb_53_doesnt_mention_distill_which_is_funny/ | false | false | self | 1 | null |
Comprehensive Guide to Self-Hosting LLMs on Debian From Scratch | 1 | [removed] | 2025-10-10T16:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1o35fdm/comprehensive_guide_to_selfhosting_llms_on_debian/ | leptonflavors | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o35fdm | false | null | t3_1o35fdm | /r/LocalLLaMA/comments/1o35fdm/comprehensive_guide_to_selfhosting_llms_on_debian/ | false | false | self | 1 | null |
Dating Sim powered by AI | 0 | The intention was to create a true "pick-up" simulator. Without dialogue choices, the characters are developed as closely as possible to real-life ones. They have their own interests, memories, feelings, and personalities. Often, we only have one attempt to pick up the person we like. Isn't it worth "testing" your plan? After all, in real life, there's no load-the-save option.
The graphics are not top-notch, but it is developed by a solo dev, and the most important thing is the gameplay: creating as realistic a dating sim as possible | 2025-10-10T15:53:50 | https://www.youtube.com/watch?v=eXPA0kec-nk | quick3ar | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1o34r08 | false | null | t3_1o34r08 | /r/LocalLLaMA/comments/1o34r08/dating_sim_powered_by_ai/ | false | false | default | 0 | null |
🚀 ToolNeuron Beta-4.5 — Offline & Privacy-First AI Hub for Android! | 16 | Hey
I'm excited to share **ToolNeuron Beta-4.5**, my privacy-first AI hub for Android devices. It's designed to bring **powerful AI to your pocket** — fully offline, with plugin support, and the ability to tweak models on the fly.
# 🧠 What ToolNeuron Can Do:
* **Main Chat Screen**: Smooth, ready-to-use chat interface with runtime model switching.
* **Model Tweaking Screen**: Adjust any model’s parameters in real-time (GGUF or OpenRouter).
* **Plugin Screen**: Browse, enable, or disable plugins; extend AI capabilities (Web Search, Web Scraper, Coding Canvas, etc.).
* **DataHub Screen**: Attach dynamic datasets to models for specialized knowledge (coding, medical, etc.).
* **Personal Data View Screen**: Inspect local data packs and manage conversation history.
* **Model Screen**: Import, manage, and switch between any installed models seamlessly.
# 🔧 Why You’ll Love It:
* Fully offline (privacy-first) 🛡️
* Switch between models mid-chat without losing context 🔄
* Load custom models from your device 📂
* Expandable via plugins and data packs 🧩
* Optimized for daily productivity & fun ⚡
# 📥 Try It Now
[Download Beta-4.5 APK](https://github.com/Siddhesh2377/ToolNeuron/releases/tag/Beta-4.5)
# 💬 Let’s Make This Interactive:
* Which AI model do you mostly use on mobile?
* What plugin would you like to see next in ToolNeuron?
* Any feature requests or UX improvements?
I’d love to hear your feedback and ideas! I’m personally very active and plan to incorporate community suggestions quickly.
**Join our community**: [Discord](https://discord.gg/SvgGaxwk)
**GitHub & Releases**: [GitHub Repo](https://github.com/Siddhesh2377/ToolNeuron) | 2025-10-10T15:39:37 | https://www.reddit.com/gallery/1o34d0s | DarkEngine774 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o34d0s | false | null | t3_1o34d0s | /r/LocalLLaMA/comments/1o34d0s/toolneuron_beta45_offline_privacyfirst_ai_hub_for/ | false | false | 16 | null | |
Best E2E Voice Model for Macbook Air 24GB and/or Windows laptop 32GB with RTX 3070/8GB | 0 | Looking for the Best E2E Voice Model for Macbook Air 24GB and/or Windows laptop 32GB with RTX 3070/8GB.
So far I think LiquidAI/LFM2-Audio-1.5B and Qwen/Qwen2.5-Omni-3B will fit on them. Any other choices? Looking for fast voice responses to voice questions. | 2025-10-10T15:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o33p5l/best_e2e_voice_model_for_macbook_air_24gb_andor/ | softwareweaver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o33p5l | false | null | t3_1o33p5l | /r/LocalLLaMA/comments/1o33p5l/best_e2e_voice_model_for_macbook_air_24gb_andor/ | false | false | self | 0 | null |
A list of models released or updated this week on this sub, in case you missed any (10 Oct). | 143 | Hey folks,
Here is the list of models (releases and updates) I found mentioned on LocalLLaMA this week. Please let me know in the comments if there are any mistakes or misses. Enjoy!
|Model|Description |Reddit post|HF / GitHub|
|:-|:-|:-|:-|
|Jamba 3B|tiny 3 B|[reddit](https://www.reddit.com/gallery/1o1ac09)|[HF](https://huggingface.co/ai21labs/AI21-Jamba-Reasoning-3B)|
|WEBGEN,UIGEN‑FX|research‑preview for UI/UX|[reddit](https://www.reddit.com/gallery/1nz20g2)|[HF](https://huggingface.co/Tesslate/UIGENT-30B-3A-Preview)|
|KAT-Dev-72B-Exp|Coding model|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o31rdl/kwaipilotkatdev72bexp_model_released)|[HF](https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp)|
|Playable-GGUF|7b vibe coding retro games|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o27xsj/introducing_playable1gguf_by_far_the_worlds_best)|[HF](https://huggingface.co/playable/Playable1-GGUF)|
|UserLM-8b|8b LLM playing user role|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o23vqf/microsoftuserlm8b_unlike_typical_llms_that_are)|[HF](https://huggingface.co/microsoft/UserLM-8b)|
|CoDA‑v0‑Instruct|language‑diffusion|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o1s7q8/an_open_sourced_language_diffusion_model_by_sf/)|[HF](https://huggingface.co/Salesforce/CoDA-v0-Instruct)|
|ColBERT Nano 250K|tiny‑retrieval|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o1mpt5/introducing_the_colbert_nano_series_of_models_all/)|[HF collection](https://huggingface.co/collections/NeuML/)|
|LFM2‑8B‑A1B|hybrid 8 B|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o0zted/lfm28ba1b_quality_34b_dense_yet_faster_than/)|[HF](https://huggingface.co/LiquidAI/LFM2-8B-A1B)|
|Qwen3‑VL‑30B‑A3B‑Instruct|vision‑LLM|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nxhfcq/qwen3vl30ba3binstruct_thinking_are_here/)|[HF](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct)|
|BULaMU|Luganda LLM|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nykxfq/bulamuthe_first_luganda_large_language_model/)|[HF](https://huggingface.co/datasets/mwebazarick/BULaMU)|
|YanoljaNEXT‑Rosetta‑12B‑2510|translation 12 B|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o2bm3z/yanoljayanoljanextrosetta12b2510/)|[HF](https://huggingface.co/yanolja/YanoljaNEXT-Rosetta-12B-2510)|
|SDLM 32B|multimodal 32B|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nwxje9/sdlm_32b4b_from_opengvlab/)|[HF](https://huggingface.co/OpenGVLab/SDLM-32B-D4)|
|SDLM 4B|multimodal 4 B|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nwxje9/sdlm_32b4b_from_opengvlab/)|[HF](https://huggingface.co/OpenGVLab/SDLM-3B)|
**🔧 Notable resources**
|Resource|Description |Reddit post|HF / GitHub|
|:-|:-|:-|:-|
|MLXSharp|.NET MLX wrapper|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nypq6q/made_the_first_net_wrapper_for_apple_mlx_looking/)|[GH](https://github.com/managedcode/MLXSharp)|
|Video2X 6.x|upscaler + interpolation|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nykzv3/video2x_6x_opensource_upscaler_frame/)|[GH](https://github.com/k4yt3x/video2x)|
|SurfSense|Perplexity alt.|[reddit](https://www.reddit.com/r/LocalLLaMA/comments/1o00ban/open_source_alternative_to_perplexity/)|[GH](https://github.com/MODSetter/SurfSense)| | 2025-10-10T15:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o33mui/a_list_of_models_released_or_updated_this_week_on/ | aifeed-fyi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o33mui | false | null | t3_1o33mui | /r/LocalLLaMA/comments/1o33mui/a_list_of_models_released_or_updated_this_week_on/ | false | false | self | 143 | {'enabled': False, 'images': [{'id': '1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=108&crop=smart&auto=webp&s=841da6194ba402a88039f36c01570f501bc998ed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=216&crop=smart&auto=webp&s=9e928f4addaa4c08b22c79af2b022beecfdc9399', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=320&crop=smart&auto=webp&s=f674c0298d920d041b4226047a281dd8bf7401f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=640&crop=smart&auto=webp&s=251ecbb16d152faa9609f074dfc595e9fb661602', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=960&crop=smart&auto=webp&s=ef2977caaf1e8be118dbc70af9d29ef4310e6799', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?width=1080&crop=smart&auto=webp&s=2372ce666e6738c52083a2f9e068694afffa5ebf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1HZ_vZkP-yNECABpDgNbKXhcW9JScUbLyr_1Rdv51VI.png?auto=webp&s=20ec448bcdb5db56e10d40122db045ab10866361', 'width': 1200}, 'variants': {}}]} |
New Stanford study “killed” fine-tuning | 2 | title is lowkey clickbait but I just read Stanford’s new paper on Agentic Context Engineering (ACE) and it might actually make fine-tuning obsolete.
Instead of retraining models, ACE gets them to rewrite and evolve their own context based on feedback. The model basically reflects, edits, and builds a growing memory of what works and what doesn’t.
The results are kinda crazy.
+10.6% better than GPT-4 agents on AppWorld
+8.6% on finance reasoning
86.9% lower cost and latency
No retraining or labels involved. Just reflection and memory.
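A minimal sketch of what that loop could look like (my own toy illustration of the idea, not the paper's actual ACE implementation; `llm(prompt) -> str` stands in for any chat-completion call):

```python
# Toy agentic-context loop (illustrative, not the paper's ACE implementation).
playbook: list[str] = []  # evolving context: lessons on what worked and what didn't

def run_task(task: str, llm) -> str:
    context = "\n".join(playbook)
    # 1. Act, using the current playbook as extra context.
    answer = llm(f"Playbook so far:\n{context}\n\nTask: {task}")
    # 2. Reflect on the outcome (ACE can also fold in external feedback, e.g. test results).
    lesson = llm(
        f"Task: {task}\nAnswer: {answer}\n"
        "In one sentence, what should be kept or avoided on similar tasks?"
    )
    # 3. Update the context instead of the weights.
    playbook.append(lesson)
    return answer
```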
It’s like the model is building its own playbook over time. Makes me think the next step in AI isn’t smaller prompts but smarter context. | 2025-10-10T15:11:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o33kx1/new_stanford_study_killed_finetuning/ | Old_Assumption2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o33kx1 | false | null | t3_1o33kx1 | /r/LocalLLaMA/comments/1o33kx1/new_stanford_study_killed_finetuning/ | false | false | self | 2 | null |
Zero-Learn in ToolBrain — Agents that write their own training data | 9 | One of the trickiest parts of training tool-using agents is collecting enough task data.
What if your agent could generate its own curriculum instead?
That’s what we built in ToolBrain’s Zero-Learn feature — a lightweight reinforcement-learning loop where an LLM agent bootstraps its own training queries directly from the tool definitions you give it.
## ⚙️ How Zero-Learn Works
1. You start with a few tools (from `smolagents`), e.g.:
```python
from smolagents import tool  # the package is `smolagents` (plural)

# Note: real smolagents tools also need type hints and a docstring
# describing each argument, since @tool builds the tool schema from them.
@tool
def calculate_compound_interest(principal, rate, years): ...

@tool
def calculate_loan_payment(principal, rate, term): ...
```
2. The Brain’s method `generate_training_examples` prompts the model to invent realistic tasks that require using these tools. You can use the agent's own LLM or an external model, and you can also add external tools.
```python
from toolbrain import Brain
brain = Brain(agent=agent)
examples = brain.generate_training_examples(
task_description="Finance queries that use multiple tools",
num_examples=100,
min_tool_calls=2, # hint to include multiple tool uses
max_words=80, # keeps prompts short and realistic
self_rank=True # optional: let the LLM rank them by quality
)
```
3. Generated examples are auto-ranked and filtered, then used for RL fine-tuning (GRPO / DPO).
## What happens inside:
1. ToolBrain builds a “tool card” (name + description + args).
2. The agent’s LLM writes user queries that should require those tools, with realistic arguments for them.
3. If `self_rank=True`, the model re-ranks them based on relevance, argument realism, and concreteness.
4. You get back a list of plain text queries — your new mini training set, which you can then feed into ToolBrain's RL loop (GRPO / DPO); a hedged sketch follows below.
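From there, the hand-off into training might look roughly like this; note that the `brain.train(...)` call and its argument are my assumptions for illustration (only `Brain` and `generate_training_examples` appear above), so check the ToolBrain docs for the real entry point:

```python
# Hypothetical continuation (the training call is an assumption, not
# ToolBrain's documented API): feed the bootstrapped queries into RL fine-tuning.
for query in examples:
    brain.train(query)  # one GRPO/DPO update driven by the generated task
```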
## 💡 Example Outputs (Finance Tools)
From a `Qwen-0.5B` agent using simple finance functions:
```
"Calculate the compound interest on $10,000 at an annual rate of 5% for 3 years."
"What is the formula for calculating compound interest?"
"Compute the loan payment for a 7-year loan at 5% interest and $10,000 principal."
```
Roughly two-thirds of the generated queries are directly executable — the rest can be filtered or rewritten automatically.
## 🔁 Why it’s useful
- Bootstraps small, domain-specific datasets without human effort.
- Perfect for teaching agents to use your custom tools (finance, bio-med, robotics, whatever).
- Integrates directly with ToolBrain’s RL loop — GRPO, DPO, knowledge distillation, etc.
## 📘 Learn More
📄 Paper → ToolBrain: A Flexible Reinforcement Learning Framework for Agentic Tools (arXiv:2510.00023)
🌐 Project → toolbrain.org
Would love to hear from others experimenting with synthetic data generation for agents —
How are you teaching your models new tools without curated datasets? | 2025-10-10T15:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o33iin/zerolearn_in_toolbrain_agents_that_write_their/ | Excellent-Solid1865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o33iin | false | null | t3_1o33iin | /r/LocalLLaMA/comments/1o33iin/zerolearn_in_toolbrain_agents_that_write_their/ | false | false | self | 9 | null |
My Deep Dive into Fine-Tuning: IBM Granite-4.0 with Python and Unsloth! 🚀 | 8 | I spent this week getting hands-on with IBM’s Granite-4.0 LLM and the Unsloth library, honestly thinking it would just be another “meh” open-source fine-tuning project. Instead—I ended up pretty excited, so wanted to share my take for anyone on the fence!
Personal hurdles? I’m used to LLM fine-tuning being a clunky, resource-heavy slog. But this time I actually got domain-level results (support-bot made way better recommendations!) with just a free Colab T4 and some Python. Seeing the model shift from bland, generic helpdesk answers to context-aware, on-point responses in only about 60 training steps was incredibly satisfying.
If you’re like me and always chasing practical, accessible AI upgrades, this is worth the experiment.
* Real custom fine-tuning, no expensive infra
* Model is compact—runs smooth, even on free hardware
* The workflow’s straightforward (and yes, I documented mistakes and fixes too)
Want to give it a spin?
Here’s the full story and guide I wrote: [Medium Article](https://medium.com/@krishanw30/4fc11b529c1f?sk=dd2effadc55ce5ca7acce163bec66d9d)
Or dive right into my shared Hugging Face checkpoint: [Fine-tuned Model](https://huggingface.co/krishanwalia30/granite-4.0-h-micro_lora_model?source=post_page-----4fc11b529c1f---------------------------------------) | 2025-10-10T14:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1o32utg/my_deep_dive_into_finetuning_ibm_granite40_with/ | krishanndev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32utg | false | null | t3_1o32utg | /r/LocalLLaMA/comments/1o32utg/my_deep_dive_into_finetuning_ibm_granite40_with/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE.png?width=108&crop=smart&auto=webp&s=632ddd4b2f0482363fa320bd2a01fea5b96dedb1', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE.png?width=216&crop=smart&auto=webp&s=afa4e5211477c3536ad971538818a5d4582b6628', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE.png?width=320&crop=smart&auto=webp&s=cc3166c4074985811bb4b8143b551c8761532312', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE.png?width=640&crop=smart&auto=webp&s=fa9525d64d79e52abd95766ba28ba392b857f4fc', 'width': 640}], 'source': {'height': 373, 'url': 'https://external-preview.redd.it/EVGx_KwhvSmF97muBuK1RFESgRgvVTkJhsv1Z-wYxLE.png?auto=webp&s=425786ef61515d2980d7bb3f90368f53b34525b5', 'width': 739}, 'variants': {}}]} |
CPU inference with 384GB DDR4 RAM? | 4 | Looking at used listings, Xeon Workstations with, or at least upgradeable to, 384GB DDR4 RAM, are increasingly affordable. If my only concern is to be able to run as good a model as possible, without much concern for t/s (as long as we're not talking seconds per token), what is realistically possible in terms of CPU with such a system, and can I take full advantage of this amount of RAM? | 2025-10-10T14:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o32sva/cpu_interference_with_384gb_ddr4_ram/ | Practical-Hand203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32sva | false | null | t3_1o32sva | /r/LocalLLaMA/comments/1o32sva/cpu_interference_with_384gb_ddr4_ram/ | false | false | self | 4 | null |
Some small tools for you - Ollama Management UI, Passkey authentication proxy | 3 | Hi all, want to share with you a couple of small tools I cooked recently in the hope they can be useful for some of you:
* [https://github.com/trbroyles1/ollama-mgr](https://github.com/trbroyles1/ollama-mgr) \- a simple web UI for managing an Ollama server. Note that it does not have authentication of any sort in it right now! BUT, you can always protect it with...
* [https://github.com/trbroyles1/passkey-authenticator](https://github.com/trbroyles1/passkey-authenticator) \- a simple tool to reverse proxy an application, protecting access to it with Passkey authentication
I don't have docker images built for these yet but will hopefully get some up soon-ish.
Hope you enjoy! | 2025-10-10T14:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o32jz5/some_small_tools_for_you_ollama_managment_ui/ | Pyrenaeda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32jz5 | false | null | t3_1o32jz5 | /r/LocalLLaMA/comments/1o32jz5/some_small_tools_for_you_ollama_managment_ui/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=108&crop=smart&auto=webp&s=ce7be84f89a41c67a6fdb75548fd2ea2aa4f7467', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=216&crop=smart&auto=webp&s=009a138c5b24ca1458c6e45a806617d5f07a7df7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=320&crop=smart&auto=webp&s=f47fcfba3e7a24e38bff1d167dec0098be5b6f3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=640&crop=smart&auto=webp&s=ee9dbc66167e3b205155a88fb26618594371e652', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=960&crop=smart&auto=webp&s=b02cd6d0cf10766ea65bc8138c1969965088171e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?width=1080&crop=smart&auto=webp&s=c255fc036beba19155d7f5ac6b3f645fcb8b0b31', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-iSqAhO4wS4cGIHUlV112t40W2T9VCR5T7AdZw5Ih58.png?auto=webp&s=075b3e5b37260017faa83f3c245ad32e53fca211', 'width': 1200}, 'variants': {}}]} |
Beck, a small model for delicate life situations | 74 | A language model that handles delicate life situations and tries to really help you.
[Beck](https://huggingface.co/gustavecortal/Beck-8B) is based on my previous model [Piaget](https://huggingface.co/gustavecortal/Piaget-8B) (a language model for psychological reasoning), and was aligned with [psychotherapeutic preferences](https://huggingface.co/datasets/Psychotherapy-LLM/PsychoCounsel-Preference). Available sizes are: [0.6B](https://huggingface.co/gustavecortal/Beck-0.6B), [1.7B](https://huggingface.co/gustavecortal/Beck-1.7B), [4B](https://huggingface.co/gustavecortal/Beck-4B), [8B](https://huggingface.co/gustavecortal/Beck-8B).
Beck was trained using preference optimization (ORPO) and LoRA. You can reproduce the results using [my repo](https://github.com/gustavecortal/preference-optimization-orpo-lora) for lightweight preference optimization.
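For anyone who wants the shape of the training loop without opening the repo, here is a minimal sketch using TRL's `ORPOTrainer` with a LoRA adapter. The hyperparameters and the assumption that the dataset already provides prompt/chosen/rejected columns are mine; the linked repo is the authoritative version:

```python
# Minimal ORPO + LoRA sketch (illustrative; see the linked repo for the real setup).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "gustavecortal/Piaget-8B"  # Beck starts from Piaget
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ORPO trains on preference pairs (prompt / chosen / rejected columns assumed here).
dataset = load_dataset("Psychotherapy-LLM/PsychoCounsel-Preference", split="train")

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="beck-8b", beta=0.1, per_device_train_batch_size=1),
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```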
Beck was inspired by my position paper on emotion analysis: [Improving Language Models for Emotion Analysis: Insights from Cognitive Science](https://aclanthology.org/2024.cmcl-1.23/)
**How does it compare to other language models?**
For the given prompt:
Anxiety often strikes when I'm faced with making decisions. The fear of making the wrong choice or disappointing others paralyzes me, leaving me stuck in indecision. I want to learn how to trust myself and make confident choices. \no_think
**Qwen3-8B**:
It's completely natural to feel anxious when facing important decisions, especially when you're worried about making the wrong choice or letting others down. This internal conflict can create a sense of overwhelm that makes it hard to move forward. Learning to trust yourself is a powerful step toward overcoming this challenge. Start by reflecting on past decisions—what worked, what didn't, and what you learned from them. Over time, you'll begin to recognize your own values, strengths, and patterns of thinking. Practicing self-compassion is also key; remind yourself that no decision is perfect and that growth comes from experience, not perfection. Small steps, like setting aside time each day for reflection or journaling, can help build confidence. Surround yourself with people who support your journey, and don't hesitate to seek guidance if needed. Trusting yourself isn’t about having all the answers—it’s about believing in your ability to navigate uncertainty and grow through the process. With patience and practice, you’ll find greater clarity and courage in your choices.
**Beck-8B**:
It sounds like you're dealing with a lot of pressure regarding decision-making and it's causing you significant distress. This is quite common for many people, as we all have moments where choosing can feel overwhelming. Let’s explore what might be contributing to these feelings so that we can work on building your confidence in decision-making.
Firstly, consider what specific fears arise when you think about making a decision. Is there a particular aspect of the choice that triggers anxiety? Sometimes identifying just one component can help in breaking down the process.
Also, think about past decisions that went well—even if they don’t seem big at first. Remembering successful outcomes can reinforce trust in yourself, reminding you that you have experience navigating through uncertainties successfully before.
Building self-trust involves challenging any negative thoughts surrounding decision-making. It may also help to practice mindfulness—being fully present during each step of the decision process rather than anticipating worst-case scenarios. By cultivating patience and acceptance towards ambiguity, you might find that some anxieties lessen over time.
Would you like us to brainstorm practical exercises or strategies tailored specifically for boosting your decision-making abilities and confidence? | 2025-10-10T14:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o32j5s/beck_a_small_model_for_delicate_life_situations/ | antcroca159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32j5s | false | null | t3_1o32j5s | /r/LocalLLaMA/comments/1o32j5s/beck_a_small_model_for_delicate_life_situations/ | false | false | self | 74 | {'enabled': False, 'images': [{'id': '7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=108&crop=smart&auto=webp&s=2e28665b0b5145f8bedfe289f018f0a68f3b321b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=216&crop=smart&auto=webp&s=1b2309e2554d3f8c5d48119b8f50d69779a46dc1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=320&crop=smart&auto=webp&s=3e759c043d640576ddfa4c46c32e2fa3902ea9e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=640&crop=smart&auto=webp&s=5cb79309c36938125de327a4f44be119dc3480ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=960&crop=smart&auto=webp&s=f9e8cf311d7dcc79108ee05a3907cae02f2c424a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?width=1080&crop=smart&auto=webp&s=7b9f8056aa23d355ee949d5f961eca930b0622c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7pQ9tN29l_k7TjEjA0k5iRAUDz5E4p50skVpAqPUrkE.png?auto=webp&s=6c189fb7c48c40ca3a00103bbfb8bf41a838916b', 'width': 1200}, 'variants': {}}]} |
chatllm.cpp supports Janus-Pro | 10 | >Janus-Pro is a novel autoregressive framework that unifies multimodal understanding and generation.
[https://huggingface.co/deepseek-ai/Janus-Pro-1B](https://huggingface.co/deepseek-ai/Janus-Pro-1B)
With [chatllm.cpp](https://github.com/foldl/chatllm.cpp):
https://preview.redd.it/6gp815annauf1.png?width=1217&format=png&auto=webp&s=2dd884a934261226b096fc1f13da6dac53db1cb6
| 2025-10-10T14:25:12 | https://www.reddit.com/r/LocalLLaMA/comments/1o32cs8/chatllmcpp_supports_januspro/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32cs8 | false | null | t3_1o32cs8 | /r/LocalLLaMA/comments/1o32cs8/chatllmcpp_supports_januspro/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=108&crop=smart&auto=webp&s=32e76926b03ec747e9850b7e5389ba43119c3206', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=216&crop=smart&auto=webp&s=56432cad3f0ca527155629bb469f032ec7b7a295', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=320&crop=smart&auto=webp&s=2fc3cf41c6ae198f20a6bc38ee784cef17ca6273', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=640&crop=smart&auto=webp&s=3f426236473f9cf99ef0faf88dd4229251ceabba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=960&crop=smart&auto=webp&s=9870a178b92f16de8a5655d245c10a579c8ab0fb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?width=1080&crop=smart&auto=webp&s=a986ae272a31cf1362f83e6c078d732daac3f554', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RerkM_YaMvlrnabVjd6weUri30h-rvzJS7KreYcdnw8.png?auto=webp&s=3423351055a4f4e8e7438985e2bd884ec2df8aca', 'width': 1200}, 'variants': {}}]} | |
is there any LLM App that can generate files for you? | 7 | For old farts like me who are near their graves and want to skip DIY part of responds of LLMs and being an absolute bum by expecting LLM App take care of the DIY part of writing the notes( or programming codes or whatever) in files and deliver ithe final product to you... is any app produced for this matter to satisfy the needs of clowns like me? | 2025-10-10T14:24:31 | https://www.reddit.com/r/LocalLLaMA/comments/1o32c6c/is_there_any_llm_app_that_can_generate_files_for/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o32c6c | false | null | t3_1o32c6c | /r/LocalLLaMA/comments/1o32c6c/is_there_any_llm_app_that_can_generate_files_for/ | false | false | self | 7 | null |
Dual Xeon 6 + 12 channel ram | 1 | Hi wanted to get your thoughts on a new build I may be trying to put together. I sold my 4090 and am thinking about expanding my current workstation. I hear that xeon 6 have really good memory optimization for offloading matrix multiplications.
I found one motherboard that accepts a single-socket Xeon 6. Then I saw the GIGABYTE MS74-HB0 motherboard for Intel® Xeon® 6 processors. It's dual socket and supports 12-channel RAM at up to 8000 MT/s. I was thinking about running two Xeon 6 6521P chips. With 12-channel RAM I calculated around 700-800 GB/s (if I did that right). With two RTX 6000 Pros, I assume the acceleration from the Xeon 6 CPUs (48 cores + memory bandwidth + AMX support) should be OK.
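Quick sanity check on that bandwidth math: 12 channels × 8 bytes per channel × 8,000 MT/s = 768 GB/s theoretical peak per socket, so the 700-800 GB/s estimate holds on paper; real-world STREAM results will land noticeably lower, and cross-socket (NUMA) traffic adds its own penalty.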
Is this overkill? Is this a bad idea? My original intent was to run DeepSeek and GLM like this for dataset creation/distilling.
https://lmsys.org/blog/2025-07-14-intel-xeon-optimization/
Was going off this article and some llama.cpp stuff I saw. Is this a terrible idea? Thanks for the feedback | 2025-10-10T14:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/1o31syu/dual_xeon_6_12_channel_ram/ | ____vladrad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o31syu | false | null | t3_1o31syu | /r/LocalLLaMA/comments/1o31syu/dual_xeon_6_12_channel_ram/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=108&crop=smart&auto=webp&s=3335de0c7b3ac4f8d236e650279d9e1d3d1960f0', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=216&crop=smart&auto=webp&s=322f4a4170d4d51b72e81e2365618904f8a01fc7', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=320&crop=smart&auto=webp&s=626cba8492927ecefb2ef8c9e4cfd771d3918397', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=640&crop=smart&auto=webp&s=bf36345955ee957c7c1c1601d2dda4c7696c3e33', 'width': 640}, {'height': 488, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=960&crop=smart&auto=webp&s=2746015faaef0f8a5439a7ee35703962793dde27', 'width': 960}, {'height': 549, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?width=1080&crop=smart&auto=webp&s=84320ebe3d8ea1d1ac240836fc712b895b4a026a', 'width': 1080}], 'source': {'height': 1220, 'url': 'https://external-preview.redd.it/qkPKVxPTT9gLVWth8NyHTkz6j0Ke1Draa8vQRkMUNtM.png?auto=webp&s=f039a7465f03529477e47390c05ed35ee4039ab0', 'width': 2398}, 'variants': {}}]} |
Downloading multi-file source code from an LLM? I uploaded aiar on pypi to fix this... | 0 | When I create a small AI project, or the structure of a big one, I like asking gemini to just give me a starting point. I found downloading all the files is a pain so a few weeks ago I asked it to create a shar - shell archive and apart from some things it didn't escape properly it worked well.
[https://pypi.org/project/aiar/](https://pypi.org/project/aiar/) \- [https://github.com/owebeeone/aiar](https://github.com/owebeeone/aiar)
So, I wrote a new self-extracting format that has no escaping - and Gemini seems to rock with it. Here is the result of my "make a PySide6 calculator" prompt, in aiar format:
[https://drive.google.com/file/d/1DpkR8kXJ-UPsOdv0Qf5Q9rxrLceCDtDv/view?usp=drive\_link](https://drive.google.com/file/d/1DpkR8kXJ-UPsOdv0Qf5Q9rxrLceCDtDv/view?usp=drive_link)
(one small bug; hint: just import State)
and the spec resulting from the prompt:
[https://docs.google.com/document/d/1zFrw-fVgcMkx892c\_sroCbK5S1NySMTWFlXhgBaqYTQ/edit?usp=drive\_link](https://docs.google.com/document/d/1zFrw-fVgcMkx892c_sroCbK5S1NySMTWFlXhgBaqYTQ/edit?usp=drive_link)
I basically pasted the aiar code example from the aiar README and gemini was able to take it from there.
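To give a flavour of the idea (this is an illustrative toy extractor, NOT the actual aiar format; see the repo for the real spec): each file is bracketed by a sentinel line that is vanishingly unlikely to appear in real code, so nothing inside ever needs escaping.

```
# Toy extractor for a sentinel-delimited, escape-free archive (not real aiar)
import pathlib

SENTINEL = "=====AIAR-TOY-7f3a9c====="  # any line unlikely to occur in files

def extract(archive_text: str) -> None:
    lines = archive_text.splitlines()
    i = 0
    while i < len(lines):
        if lines[i].startswith(SENTINEL + " FILE "):
            # header line carries the relative path of the file to write
            path = pathlib.Path(lines[i].split(" FILE ", 1)[1])
            body = []
            i += 1
            while i < len(lines) and lines[i] != SENTINEL + " END":
                body.append(lines[i])
                i += 1
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text("\n".join(body) + "\n")
        i += 1
```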
One thing I noticed about Gemini: if you're getting code from the markdown canvas, it comes out broken. You'll need to ask for the shell-script version of aiar, not the bare version.
The python package also provides create and extract functionality in bash, python, nodejs and powershell.
So, does anyone else use a run-of-the-mill LLM to create a working template for a project?
GPT5 hates me at the moment, otherwise I would try it there too. Something about I used it too much... how rude.
| 2025-10-10T14:02:12 | https://www.reddit.com/r/LocalLLaMA/comments/1o31rjd/downloading_multifile_source_code_from_an_llm_i/ | GianniMariani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o31rjd | false | null | t3_1o31rjd | /r/LocalLLaMA/comments/1o31rjd/downloading_multifile_source_code_from_an_llm_i/ | false | false | self | 0 | null |
Kwaipilot/KAT-Dev-72B-Exp model released | 77 | The model makers claim it's second on coding only to Sonnet 4.5 at only 72B parameters.
Could someone here who has the hardware to run it, validate this?
[https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp](https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp) | 2025-10-10T14:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o31rdl/kwaipilotkatdev72bexp_model_released/ | UltrMgns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o31rdl | false | null | t3_1o31rdl | /r/LocalLLaMA/comments/1o31rdl/kwaipilotkatdev72bexp_model_released/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=108&crop=smart&auto=webp&s=cf18d5cf8ded0a10c6a0af997508a324c1a4598f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=216&crop=smart&auto=webp&s=ea48d1283c6607aaf89ea7f8ffb47f5bc99ce20d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=320&crop=smart&auto=webp&s=cba31dfcbe87d2c0403c3a3c70b8bfe95eb1e2d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=640&crop=smart&auto=webp&s=6a34ebb6475d3fb833395063cb58949ed7cc21cd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=960&crop=smart&auto=webp&s=4d6cc5c98d3e061ccdb88c37a77110422f322645', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=1080&crop=smart&auto=webp&s=ab0d2724e27fd5da8661dce70df2d4d765794815', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?auto=webp&s=d56a39ad375ea48de4663c2472edcdff1c7fc561', 'width': 1200}, 'variants': {}}]} |
How did OpenAI go about creating the model-selection system for GPT-5? | 0 | E.g., having the model think/search the web on the fly depending on the user's prompt. It's clearly not perfect, but I'm curious | 2025-10-10T13:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o30h9j/how_did_openai_go_about_to_create_the_model/ | a_normal_user1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o30h9j | false | null | t3_1o30h9j | /r/LocalLLaMA/comments/1o30h9j/how_did_openai_go_about_to_create_the_model/ | false | false | self | 0 | null |
"Google Gemini" but using a local model | 20 | https://reddit.com/link/1o30e9q/video/sii45b8z8auf1/player
I built a local assistant app that can replace Google Gemini as your phone's default assistant. It works similarly to Gemini: long-press the power button to bring up Layla, and it will run a local model instead of Gemini.
It supports running local models (GGUF or PTE), connecting to any OpenAI-compatible endpoint such as LM Studio running on your PC, or Layla Cloud.
The video shows an 8B model (L3-Rhaenys) running on an S25 Ultra. If your phone is not powerful enough, you can choose to run 2B or 4B models instead.
It's still in early development; I'd love to hear what other tools/features you'd like to see integrated! | 2025-10-10T13:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o30e9q/google_gemini_but_using_a_local_model/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o30e9q | false | null | t3_1o30e9q | /r/LocalLLaMA/comments/1o30e9q/google_gemini_but_using_a_local_model/ | false | false | self | 20 | null |
Translate output rather than training on multiple languages | 0 | Hey LocalLLaMa community,
I've been thinking about multilingual LLMs like Gemma, Qwen, etc. They're trained on a huge text corpus containing a lot of languages.
My question is: Why do we dedicate valuable parameters to learning multiple languages?
With local inference we usually want the most knowledge in the smallest size possible.
Couldn't we achieve similar results by training the LLM only on English (the language with the most text) for core knowledge, then using a separate, much smaller (~500M-parameter) dedicated "micro-translator" model to handle input/output translation for other languages?
This way only 2 languages take up parameters, not ~20 languages.
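In code, the split I'm imagining is just this (both models are stubs here; in practice the translator would be a small MT model and the core an English-only LLM):

```
# Stub sketch of the proposed split; both callables are placeholders
def translate(text: str, src: str, dst: str) -> str:
    return text  # placeholder: plug in the ~500M micro-translator

def english_core_llm(prompt: str) -> str:
    return "placeholder answer"  # placeholder: plug in the English-only LLM

def answer(user_text: str, user_lang: str) -> str:
    en_prompt = translate(user_text, src=user_lang, dst="en")
    en_answer = english_core_llm(en_prompt)
    return translate(en_answer, src="en", dst=user_lang)
```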
I don't know how LLMs work inside well enough, but it feels like learning multiple languages internally would consume a large chunk of the model's parameter budget.
Or does the model learn concepts in a language-independent way? (I'm not sure how to phrase this.)
Built a router for LLM orchestration and learned a lot | 0 | I’ve been experimenting with a side project lately, basically an attempt to make local and hosted LLMs work together more intelligently. The issue started when I noticed how often we send simple requests (like “count the number of words in this sentence”) to massive cloud models. Fast answers, sure, but overkill in both latency and token cost.
So I started testing something small: what if a lightweight routing layer could decide automatically where each query goes, based on rules like these (a code sketch follows the list):
* if it’s short/simple → send to a local model
* if it’s complex → escalate to something heavier like GPT-4 or DeepSeek
* if speed matters → pick the fastest available
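A minimal sketch of the heuristic; the model names and thresholds below are illustrative, not our production logic:

```
# Hypothetical routing heuristic (illustrative model names and thresholds)
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

def route_query(prompt: str, latency_sensitive: bool = False) -> Route:
    """Pick a backend for a single query. Purely heuristic."""
    words = len(prompt.split())
    looks_complex = words > 30 or "def " in prompt or "{" in prompt
    if latency_sensitive:
        return Route("local/qwen2.5-7b-instruct", "speed-first: fastest available")
    if not looks_complex:
        return Route("local/llama-3.1-8b-instruct", "short/simple -> local")
    return Route("gpt-4o", "complex -> hosted frontier model")

print(route_query("Count the number of words in this sentence."))
```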
Once I implemented it, the results were actually surprising: around a 60% cost reduction compared to our previous manual setups, and often faster responses. So we turned this internal technology we built for our startup into a product, and it now supports some cool things:
* Unified Playground to compare model outputs
* Centralized API keys (OpenAI, Anthropic, Gemini, DeepSeek, Mistral, etc.)
* Open-source GPU deployments (no DevOps chaos)
* Your app calls one API, and the router decides the best model per query
And we are currently developing new features and improving current ones. It’s still in beta, but if anyone wants to try it out, here's the link [https://pureai-console.com/](https://pureai-console.com/)
On the Billing screen, use the code **WELCOME10** to get **$10 free credits** (**no credit card needed**). These $10 are enough to run several routing strategy tests or even deploy some open source models with medium GPUs. We eventually turned it into something bigger, now called **PureRouter**.
If this kind of routing experiment interests you, I'd love to hear your thoughts or ideas on how you handle dynamic switching between local and hosted models, along with suggestions for PureRouter features and improvements. We're trying to make something that meets the needs of developers, and we'd love to hear opinions that differ from our own. We're still in the early stages, so your suggestions will definitely shape future updates.
GALAX Rolls Out Its Single-Slot GeForce RTX 5060 Ti GPU With 16 GB VRAM & Blower-Fan | 21 | 2025-10-10T12:14:32 | https://wccftech.com/galax-single-slot-geforce-rtx-5060-ti-gpu-16-gb-vram-blower-fan/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1o2z88f | false | null | t3_1o2z88f | /r/LocalLLaMA/comments/1o2z88f/galax_rolls_out_its_singleslot_geforce_rtx_5060/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=108&crop=smart&auto=webp&s=20ca9f671301c309c029129821be2d019218e2ee', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=216&crop=smart&auto=webp&s=8b719d8f6f0c0c9b0689160d082780153f60d517', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=320&crop=smart&auto=webp&s=ce35c4f8825942bc5d9e7a6f51e6eabfc39c1675', 'width': 320}, {'height': 307, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=640&crop=smart&auto=webp&s=02f280419bde7e9970740ea47bdbd26403d64024', 'width': 640}, {'height': 460, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=960&crop=smart&auto=webp&s=ecee4f879ac07999092d821a761cd234fdb1f67b', 'width': 960}, {'height': 518, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?width=1080&crop=smart&auto=webp&s=f636b3a8d83f328687e2dcb83037efc2b30ab710', 'width': 1080}], 'source': {'height': 760, 'url': 'https://external-preview.redd.it/y6J_OcAXJadfgqu4sGThqrpBh-QuwElkR48n9WojP7o.png?auto=webp&s=605dc12c33d33dcc76f1e136e7714605376acee3', 'width': 1583}, 'variants': {}}]} | ||
Experience with networked 2x128GB AI Max 395? | 13 | We are considering to buy two of these AI shoeboxes, for space and power efficiency. Run a large LLM during the day, use as CI/CD/test server over night.
Q: Does anyone have experience with such a setup? Specifically, what's the expected performance of a large model (GLM or Qwen 235B) that is split over these two with llama.cpp and RPC?
I have already prototyped this setup with 2x 96GB regular PCs/CPUs; it's quite slow, but the answers are quite good. Faster RAM and a 5(?) Gbit network between the shoeboxes should provide better performance, but how much?
Reflection AI raises $2B to be America's open frontier AI lab, challenging DeepSeek | TechCrunch | 175 | Reflection AI: [https://reflection.ai/](https://reflection.ai/)
On 𝕏: [https://x.com/reflection\_ai/status/1976304405369520242](https://x.com/reflection_ai/status/1976304405369520242) | 2025-10-10T12:05:56 | https://techcrunch.com/2025/10/09/reflection-raises-2b-to-be-americas-open-frontier-ai-lab-challenging-deepseek/ | Nunki08 | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1o2z1vt | false | null | t3_1o2z1vt | /r/LocalLLaMA/comments/1o2z1vt/reflection_ai_raises_2b_to_be_americas_open/ | false | false | default | 175 | {'enabled': False, 'images': [{'id': 'J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=108&crop=smart&auto=webp&s=a5781dfda8009091e260f20801ae94ca383f0915', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=216&crop=smart&auto=webp&s=28ef947447f89a0e32e3b218b7fd6344516a31c7', 'width': 216}, {'height': 214, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=320&crop=smart&auto=webp&s=23f8dd5cbef5cbd269e8e7cde7612cb6079b390f', 'width': 320}, {'height': 429, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=640&crop=smart&auto=webp&s=3ecb75e4826fba31ddac14c79c56f0fceee48915', 'width': 640}, {'height': 644, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=960&crop=smart&auto=webp&s=2c8aa9804c2e548a13573308ceb760aa0c84a01b', 'width': 960}, {'height': 725, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?width=1080&crop=smart&auto=webp&s=635046e1636f1fce7991f1b68d45d024a619ad71', 'width': 1080}], 'source': {'height': 806, 'url': 'https://external-preview.redd.it/J33KaNhavmUdkjELX9a4XHseBsKE9ltSNTN6hWuL1_c.jpeg?auto=webp&s=bf355af744a375dfa92df46cafa40e461616f2b9', 'width': 1200}, 'variants': {}}]} |
Does performance of LLMs on android depend more on RAM or CPU clocks? | 1 | [removed] | 2025-10-10T11:51:00 | https://www.reddit.com/r/LocalLLaMA/comments/1o2yr3f/does_performance_of_llms_on_android_depend_more/ | LivingCornet694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2yr3f | false | null | t3_1o2yr3f | /r/LocalLLaMA/comments/1o2yr3f/does_performance_of_llms_on_android_depend_more/ | false | false | self | 1 | null |
What they didn't teach in software startup school: China | 0 | In the software startup school, china has mostly just been a source of talent. Maybe as a competitor, but largely only in China.
When it came to software tech startups in the US, they really only had to worry about other startups - usually in the bay area. And the worry was limited as they all had the same financial constraints and similar need to eventually get ROI.
But China changes the rules of the game, and in ways I'm not sure investors quite appreciate - mostly because it's never been like this before in the software industry.
OpenAI, Anthropic and their "Get Big Fast" plan made sense because that's how it has always worked. The first one to get big fast was able to get network effects, brand goodwill, and economy of scale and suck up all the investment and attention. Other startups vying for the same space would just wither and die as all the oxygen was consumed.
China, however, is a new twist in how "Get Big Fast" is going to play out. Not only do they play by different economic rules, they also have different pools of capital not readily accessible to US players. Government will happily invest and clear the way.
And, ofc, it's not just China. Any country can enter this game, all they really need is capital. The moat is surprisingly thin and shallow.
Oh, and btw, it looks like every other country \*wants\* to enter this very important game.
So now OpenAI and Anthropic find themselves on a never-ending training treadmill, and they might just run out of oxygen as it speeds up faster than they can go. **If they stop training the next latest-and-greatest, the Chinese (and others) will most certainly catch up.**
Inevitably, there are three potential outcomes to this:
1. Regulatory capture and government intervention to keep out the chinese / open / other models, allowing OpenAI/Anthropic to squeeze profit out of their work by not having to train as much. We see a lot of signs of this revving up already, and I think is the most likely outcome under the guise of 'safety' and 'security'.
2. Pop Goes the Bubble - things start going asymptotic or even way worse - Chinese / other models innovate faster than the proprietary ones. Even if those other models go prop and not open, AI will become pretty commodified (unless the other models step-change innovate!). Either way, OpenAI and Anthropic lose their ability to command the attention of the industry and all that money they spent on 'Get Big Fast' isn't going to help them much.
3. OpenAI / Anthropic are able to keep upping their game until AGI+ / ASI occurs and then all the rules change completely. Nobody can predict past the singularity, except that probably it's a good idea to be the first who made it happen. Maybe!
Some weighted blend of them all is likely, ofc, though my money is mostly on #1.
| 2025-10-10T11:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ydqm/what_they_didnt_teach_in_software_startup_school/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ydqm | false | null | t3_1o2ydqm | /r/LocalLLaMA/comments/1o2ydqm/what_they_didnt_teach_in_software_startup_school/ | false | false | self | 0 | null |
Why are there still no local models that can output PDF/DOCX files? | 0 | I can't seem to find any model that can output files such as PDF or DOCX like ChatGPT, locally or via API. Any reason why? | 2025-10-10T11:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o2y87n/why_theres_still_no_local_models_that_can_output/ | abdouhlili | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2y87n | false | null | t3_1o2y87n | /r/LocalLLaMA/comments/1o2y87n/why_theres_still_no_local_models_that_can_output/ | false | false | self | 0 | null |
Help! Is this good enough for daily AI coding | 0 | Hey guys, just checking whether anyone has advice on whether the specs below are good enough for daily AI-assisted coding, please. Not looking for those highly specialized AI servers or machines, as I'm using it for personal gaming too. I got the advice below from ChatGPT. Thanks so much.
---
for daily coding: Qwen2.5-Coder-14B (speed) and Qwen2.5-Coder-32B (quality).
your box can also run 70B+ via offload, but it’s not as smooth for iterative dev.
pair with Ollama + Aider (CLI) or VS Code + Continue (GUI) and you’re golden.
---
CPU: AMD Ryzen 7 7800X3D | 5 GHz | 8 cores 16 threads
Motherboard: ASRock Phantom Gaming X870 Riptide WiFi
GPU: Inno3D NVIDIA GeForce RTX 5090 | 32 GB VRAM
RAM: 48 GB DDR5 6000 MHz
Storage: 2 TB Gen 4 NVMe SSD
CPU Cooler: Armaggeddon Deepfreeze 360 AIO Liquid Cooler
Chassis: Armaggeddon Aquaron X-Curve Giga 10
Chassis Fans: Armaggeddon 12 cm x 7
PSU: Armaggeddon Voltron 80+ Gold 1200W
Wi-Fi + Bluetooth: Included
OS: Windows 11 Home 64-bit (Unactivated)
Service: 3-Year In-House PC Cleaning
Warranty: 5-Year Limited Warranty (1st year onsite pickup & return) | 2025-10-10T11:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o2y6xi/help_is_this_good_enough_for_daily_ai_coding/ | IntroductionSouth513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2y6xi | false | null | t3_1o2y6xi | /r/LocalLLaMA/comments/1o2y6xi/help_is_this_good_enough_for_daily_ai_coding/ | false | false | self | 0 | null |
LLaMA that plays chess | 9 | I made a [hybrid](https://github.com/neurocod/llm-chess-hybrid) of LLaMA and several other neural networks that can play chess quite well. It’s part of my ongoing series of [articles](https://www.linkedin.com/pulse/hybrid-neural-networks-inside-llms-konstantine-kozachuk-xquze/) about hybrid neural networks. The hippocampus model is still missing and outsourced to traditional C++ code.
https://preview.redd.it/z916pjlqp9uf1.png?width=3840&format=png&auto=webp&s=1d470e2d693d723ffd1d3f1493df16faf36be7a2
| 2025-10-10T11:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/1o2y43t/llama_that_plays_chess/ | neurocod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2y43t | false | null | t3_1o2y43t | /r/LocalLLaMA/comments/1o2y43t/llama_that_plays_chess/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=108&crop=smart&auto=webp&s=39dcb03d8292bab4bcba181530c030cb83032f13', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=216&crop=smart&auto=webp&s=61a11109552c5d07cd66dcf3d080e8cc0bcdbe72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=320&crop=smart&auto=webp&s=109c31d9b2018e828f9a47ab3ab3b4aeb9b7747d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=640&crop=smart&auto=webp&s=f4a0e7749765f565b43f6f79308b17aa4c64b216', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=960&crop=smart&auto=webp&s=0d1fe5b34f10394cc5a1c505f1198011076f8363', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?width=1080&crop=smart&auto=webp&s=b593d3989cb5cfaf16e4591e61b8eca6378d6f41', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/geYQQAxqjX90o0mQq_3STTNJ2tR_-toI4zN0X160D4k.png?auto=webp&s=f927f229d7268d5fe550ce0c50b4bf5250d9e9d8', 'width': 1200}, 'variants': {}}]} | |
[image processing failed] | 1 | [deleted] | 2025-10-10T11:16:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1o2y30h | false | null | t3_1o2y30h | /r/LocalLLaMA/comments/1o2y30h/image_processing_failed/ | false | false | default | 1 | null | ||
🚨 Local AI is the only sane path if you care about privacy | 171 | Meta recently announced that AI chat interactions on Facebook and Instagram will be used for ad targeting.
Everything you type can shape how you are profiled—a stark reminder that cloud AI often means zero privacy.
Local-first AI puts you in control. Models run entirely on your own device, keeping your data private and giving you full ownership over results.
Here are some of my projects exploring this approach:
- [**Agentic Signal**](https://agentic-signal.com): privacy-first workflows and browser AI agent framework. Open-source for personal use; commercial licensing available for businesses and SaaS.
- [ScribePal](https://github.com/code-forge-temple/scribe-pal): local browser AI assistant that summarizes and interacts with content without sending data to external servers.
- [Local LLM NPC](https://github.com/code-forge-temple/local-llm-npc): educational Godot game powered by Gemma 3n via Ollama, with offline-first NPCs teaching sustainable farming and botany.
Local AI isn’t just a technical preference—it’s essential for privacy, autonomy, and transparency in AI.
Source: https://www.cnbc.com/2025/10/01/meta-facebook-instagram-ads-ai-chat.html
| 2025-10-10T11:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o2xzep/local_ai_is_the_only_sane_path_if_you_care_about/ | Code-Forge-Temple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2xzep | false | null | t3_1o2xzep | /r/LocalLLaMA/comments/1o2xzep/local_ai_is_the_only_sane_path_if_you_care_about/ | false | false | self | 171 | null |
A chinese openrouter like provider giving out free 100$ worth of tokens | 0 | Came across this [https://agentrouter.org/register?aff=y40w](https://agentrouter.org/register?aff=y40w) giving out free api credits. Doesn't ask for credit card or anything. Github authorization works. Checked Github permission, only takes email, but still it being chinese I will use it for my opensource projects only for now .
Note: the link is an affiliate link, so if you create an account I get more free credits.
Qwen team auto-closed all issues on the Qwen2-VL repository | 49 | I just noticed that the Qwen2-VL repository has been renamed to Qwen3-VL and that all issues on GitHub are being closed. It currently sits at 475 open / 859 closed issues, and the numbers are changing quickly: [https://github.com/QwenLM/Qwen3-VL/issues](https://github.com/QwenLM/Qwen3-VL/issues)
I think this is somewhat rude, because it ignores the effort of all the people who took time out of their day to report issues. They could just as easily have created a new repository.
Of course I hugely appreciate all the open models that the Qwen team gave us, but I still think that this could have been handled in a better way. | 2025-10-10T10:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o2x2xs/qwen_team_autoclosed_all_issues_on_qwen2vl/ | CasualCapybara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2x2xs | false | null | t3_1o2x2xs | /r/LocalLLaMA/comments/1o2x2xs/qwen_team_autoclosed_all_issues_on_qwen2vl/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': 'iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=108&crop=smart&auto=webp&s=4b0323038b69c24ceb5fdd7f5de4a2f2603507f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=216&crop=smart&auto=webp&s=b23770bfea5b02795d8a4136a1dd517b453671d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=320&crop=smart&auto=webp&s=ef79bc6d0904e54fc61b51aa8ef14733e645f935', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=640&crop=smart&auto=webp&s=26e85103e5eb0afe6611a9d10bc7b742212418b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=960&crop=smart&auto=webp&s=eee5736185294da884c57ae1991b764d39ad1c57', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?width=1080&crop=smart&auto=webp&s=3a896a43d8baa660cc3260ee90a25384b4b20288', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iOei8e4q_0yjMZSoeSlFoU2n7fxe8HYKMbS-0QFOYrc.png?auto=webp&s=18d6b600c31cd10d56dc9b2b28b3d7b6c8438eed', 'width': 1200}, 'variants': {}}]} |
PSA: Your Ollama server might be one of 50,000 running naked online. The API has dangerously high permissions. | 1 | [removed] | 2025-10-10T10:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o2x0nr/psa_your_ollama_server_might_be_one_of_50000/ | No_Statistician5032 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2x0nr | false | null | t3_1o2x0nr | /r/LocalLLaMA/comments/1o2x0nr/psa_your_ollama_server_might_be_one_of_50000/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qZioCOrxjxpXp76Fdv2BG1Y7WVBb1zmLZzhpsTRGMRQ', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/qZioCOrxjxpXp76Fdv2BG1Y7WVBb1zmLZzhpsTRGMRQ.png?auto=webp&s=99bee909c9131a9168e6c6bc9e67d90a59e69105', 'width': 96}, 'variants': {}}]} |
AMD MI50 32GB better buy than MI100? | 19 | Plenty of people have the MI50 and performance seems to continuously improve.
While it's officially dropped from ROCm 7, we can still get it to work if we copy some files manually. Obviously this will stop working sooner or later, but then we'll have Vulkan, which (with llama.cpp at least) seems to be at near performance parity with ROCm (or faster?).
Now my question: the MI100 does not have Vulkan support. While it's still supported by ROCm 7, sooner or later AMD will drop it. I realize all of this will become irrelevant as tech moves on and both these cards end up as old relics, but doesn't Vulkan support make the MI50 the better buy, for homelabbers at least?
China blacklists major chip research firm TechInsights following report on Huawei | 63 | 2025-10-10T10:08:58 | https://www.cnbc.com/2025/10/10/china-blacklists-major-chip-ai-research-firm-techinsights-analyzed-report-huawei.html | vancity-boi-in-tdot | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1o2wvvy | false | null | t3_1o2wvvy | /r/LocalLLaMA/comments/1o2wvvy/china_blacklists_major_chip_research_firm/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=108&crop=smart&auto=webp&s=e8ce4e453ca44f15f743d15a30e7956f9e97dfa4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=216&crop=smart&auto=webp&s=847e1385ddc93b2706c60e1fad6afabc836924b6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=320&crop=smart&auto=webp&s=cf04307022ad55af94aee7203f4e9a606b485641', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=640&crop=smart&auto=webp&s=1ba6db197a71a303a6de965206d6fa42d8ec4ee3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=960&crop=smart&auto=webp&s=52ba3ad97f6943a0273de06077de822ec556d29a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?width=1080&crop=smart&auto=webp&s=023ecf71bd719c7a8a2e8dce5a6bf413004f3039', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cXviE8IZIrPbvKrVn_b-IwEX0IhaM8bGQVFM7IF9Z1o.jpeg?auto=webp&s=00312c7232e634e3acc539702a32636c604f37e4', 'width': 1920}, 'variants': {}}]} | |
We can now run wan or any heavy models even on a 6GB NVIDIA laptop GPU | Thanks to upcoming GDS integration in comfy | 180 | Hello
I am Maifee, and I am integrating GDS (GPU Direct Storage) into ComfyUI. It's working; if you want to test it, just do the following:
```
git clone https://github.com/maifeeulasad/ComfyUI.git   # fork carrying the GDS offloader
cd ComfyUI
git checkout offloader-maifee                           # branch with the GDS integration
python3 main.py --enable-gds --gds-stats                # GDS-enabled run, printing GDS stats
```
And you no longer need a custom offloader, nor do you have to settle for a quantized version. You don't even have to wait: just run with the GDS flag enabled and we are good to go. Everything will be handled for you. I have already created an issue and raised an MR; review is ongoing, and I hope this gets merged real quick.
If you have some suggestions or feedback, please let me know.
And thanks to these helpful subreddits, where I got so much advice; trust me, it was always more than enough.
Enjoy your weekend! | 2025-10-10T09:50:47 | https://www.reddit.com/gallery/1o2wkyw | maifee | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o2wkyw | false | null | t3_1o2wkyw | /r/LocalLLaMA/comments/1o2wkyw/we_can_now_run_wan_or_any_heavy_models_even_on_a/ | false | false | 180 | null | |
Fine-tuning a medium or small language model for factual, memorized data | 0 | I have builder-project data in a CSV. The issue with RAG is that it fetches dissimilar data and a lot of unwanted data. There is also a context-length limitation.
So I'm planning to fine-tune Llama 3.1 on my data, and if I ask any question related to that data it should give me the answer; e.g., if I say I want to buy a flat in Marathahalli, it should give me the project names.
I have two options for fine-tuning: one is supervised FT, where I give question-answer pairs, and the other is unsupervised FT, which is next-token prediction (CLM).
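For the SFT route, I'm thinking of generating pairs programmatically; a rough sketch using pandas and the column names from the sample below (the question template is just one example, more variety would be better):

```
# Rough SFT-pair builder (column names match the sample row below)
import pandas as pd

df = pd.read_csv("projects.csv")

pairs = []
for _, row in df.iterrows():
    pairs.append({
        "instruction": f"I want to buy a flat in {row['Locality']}. Which projects are available?",
        "output": f"{row['Project_Name']} by {row['Project_Developer_Name']} in "
                  f"{row['Locality']}, {row['city']} ({row['project_status']}).",
    })

# one JSON object per line, ready for most SFT training scripts
pd.DataFrame(pairs).to_json("sft_pairs.jsonl", orient="records", lines=True)
```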
This is what my data looks like.
Project_ID,Project_Name,Project_Developer_Name,Project_Area,Project_Total_Units,Project_Description,Project_Advantage,Project_Specification,Project_Address,Project_Latitude,Project_Longitude,Project_Auto_Description,Project_Possession_Date,Project_Launch_Date,country,state,city,project_status,Locality,Total_Towers,Minimum_Tower_Floors,Maximum_Tower_Floors,Total_Unique_Configuration_Units_Count,Property_Type,Unique_BHK_Type_Count,Available_BHK_Types,Amenity_Types_And_Amenities,Landmark_Between_3Km_to_5Km,Landmark_Within_3Km,Phase_possession,rag_docs ... these are the column names.
5000001,BSR Paradise,Winning Edge Group,Data Unavailable,100.0,"BSR Paradise is located in the suburb of Bangalore city,’ Marathahalli’. In this era, where work has become quite hectic, if you get a chance to live in amidst of nature than that’s not the bad deal, isn’t it. Healthy living begins with a healthy, natural lifestyleThe township is located in Panathur locality hardly 1 km away from Marathahalli Bridge. It is a multi-storeyed building having 2 blocks and 6 floors. The township offers you 2BHK flats (1100-1900 sq. ft) and 3BHK flats (1300-1400 sq. ft). BSR Paradise makes it possible to live a life which is healthy and in the lap of nature along with landscaped gardens and different kinds of trees around you. The project provides all the residence for sale.Some of the other amenities that are made available to the residents are sufficient covered parking, garden, gym area, rain water harvesting, community hall, club house and much more. Railway station, metro, ATM and hospitals are within 3 km of this project. The project will allow the residents to live a lavish life. ",Data Unavailable,Data Unavailable,Data Unavailable,12.93162,77.697706,"BSR Paradise StatusReady To MoveBSR Paradise Launch Date30 October 2011BSR Paradise Possession Date01 August 2013Towers in BSR Paradise1Situated at a prime location of Marathahalli, BSR Paradise is a meticulously designed project of Bangalore. The property comprises of 100 units which are enclosed within a peaceful environment. The commencement certificate of the impressive BSR Paradise project has not been grantedIn addition to this, the occupancy certificate not granted. BSR Paradise project is an offering from the well-established developer Winning Edge Group. The project's pin code is 560037. BSR Paradise lets you enjoy a convenient lifestyle with all contemporary conveniences at your disposal. Top Amenities in BSR ParadiseLiftMaintenance StaffWaste DisposalInternet/Wi-Fi ConnectivityDTH Television FacilityRO Water SystemConference Room",2013-08-01,2011-10-30,India,Karnataka,Bangalore,Ready To Move,Marathahalli,5.0,20.0,21.0,35.0,"Residential Plot,Multistorey Apartment",3.0,"1BHK,2BHK,3BHK","Exteriror Amenities: Lift,Rain Water Harvesting,Club House,Swimming Pool,Gymnasium,Park,Reserved Parking,Security,Water Storage,Visitor Parking,Maintenance Staff,Waste Disposal,DTH Television Facility,Conference Room
Interiror Amenities: Vaastu Compliant,Air Conditioned,Intercom Facility,Internet/Wi-Fi Connectivity,RO Water System,Piped Gas
Project Amenities: Coffee Lounge & Restaurants,Flower Gardens,Kids Play Area,Fire Fighting Equipment",Data Unavailable,Data Unavailable,Data Unavailable,"BSR Paradise, developed by Winning Edge Group, is located in Marathahalli, Bangalore, at coordinates 12.93162 latitude and 77.697706 longitude. This residential project features 100 units across 5 towers, each with 20 to 21 floors. The available configurations include 2BHK flats ranging from 1100 to 1900 sq. ft and 3BHK flats from 1300 to 1400 sq. ft. The project is ready to move in, having launched on October 30, 2011, with possession starting from August 1, 2013.
BSR Paradise offers a blend of nature and modern living with landscaped gardens and ample amenities, including a gym, clubhouse, swimming pool, and community hall. Additional features include covered parking, rainwater harvesting, and security services. The project is conveniently located within 3 km of essential services like railway stations, metro stations, ATMs, and hospitals, enhancing connectivity and lifestyle. Interior amenities include air conditioning, intercom facilities, and Wi-Fi connectivity, ensuring a comfortable living experience."
| 2025-10-10T09:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/1o2wjyg/finetuning_medium_or_small_language_model_for/ | InteractionLevel6625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2wjyg | false | null | t3_1o2wjyg | /r/LocalLLaMA/comments/1o2wjyg/finetuning_medium_or_small_language_model_for/ | false | false | self | 0 | null |
Help required in selecting model for aws T4 instance and vllm | 4 | Hello everyone, I want to host a model for a chatbot that will be using RAG to generate responses with tool calling. I have an aws instance with 16gb vram Tesla T4 and 16 gb RAM. Can you please suggest some model that would serve best as an assistant and what would be the suggested configs when serving the model using vllm. Currently I am using https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-AWQ but its taking 5-8 seconds to generate 10 word responses. So if you can suggest some tweaks, I would be extremely grateful. | 2025-10-10T09:22:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o2w4u2/help_required_in_selecting_model_for_aws_t4/ | JuiceFine4582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2w4u2 | false | null | t3_1o2w4u2 | /r/LocalLLaMA/comments/1o2w4u2/help_required_in_selecting_model_for_aws_t4/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=108&crop=smart&auto=webp&s=23a0fba2c8e4197e29515586e14d96684bbd6492', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=216&crop=smart&auto=webp&s=375f6bfb5d3f4979c2ef100e27c82966edababbe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=320&crop=smart&auto=webp&s=2768dd0ecd200a5fda3283d838d96006b2e14f55', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=640&crop=smart&auto=webp&s=6297d1e64f088806a133a66ce31e6c0ce571639c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=960&crop=smart&auto=webp&s=b2ef4bd40ff9a01a1b728d4a23d4e02367d089dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?width=1080&crop=smart&auto=webp&s=a5e0104000de0cdfbdb43061ec0d8f490908b830', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LkIGsdCWSjlcs87HrSjhRjpKKHuAmiMRdLuBsMm0Evo.png?auto=webp&s=420c4318b9cfcf57276b24e45894f75450958aa5', 'width': 1200}, 'variants': {}}]} |
Any good local AI home-assistant projects out yet? | 5 | I’m looking for something to replace the stupid half-working Alexa and Google speakers in my house, because they’re dog shit most of the time, and also because of privacy concerns.
Are there any projects that use ~4B or smaller models that are really good at natural-language understanding, with exceptional tool-calling capabilities?
| 2025-10-10T09:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2w2wt/any_good_local_ai_home_assistant_projects_out_yet/ | SilentReporter9635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2w2wt | false | null | t3_1o2w2wt | /r/LocalLLaMA/comments/1o2w2wt/any_good_local_ai_home_assistant_projects_out_yet/ | false | false | self | 5 | null |
why is this so weird? | 0 | 2025-10-10T09:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o2vxrm/why_is_this_so_weird/ | Top_Accident_8064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2vxrm | false | null | t3_1o2vxrm | /r/LocalLLaMA/comments/1o2vxrm/why_is_this_so_weird/ | false | false | 0 | null | ||
What's the best local model I can run with 16 GB VRAM and 96 GB RAM? | 16 | One general model that has some intelligence with really good tool-calling capabilities. (It would be good if it were uncensored to some degree too; not for any specific purpose, I just generally don't want it turning down requests because of "safety" or something.)
| 2025-10-10T09:07:19 | https://www.reddit.com/r/LocalLLaMA/comments/1o2vwh1/whats_the_best_local_model_i_can_run_with_16_gb/ | Tricky_Reflection_75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2vwh1 | false | null | t3_1o2vwh1 | /r/LocalLLaMA/comments/1o2vwh1/whats_the_best_local_model_i_can_run_with_16_gb/ | false | false | self | 16 | null |
Temperatures for MI50 during inference? Anyone with experience re-pasting processor? | 9 | Like many others in here, I am experimenting with the MI50 at the moment due to the fantastic value-for-money of this card (at least w.r.t. $ / GB VRAM). I am getting 80-85°C on the edge sensor running full tilt with a "custom cooling solution". The junction sensor shows >100°C (which is high but acceptable, I am told). Decreasing the power limit with rocm-smi does not seem to affect temps much. Idle temps are 30-40°C. What is your experience with temperatures? Have any of you successfully re-pasted the processor? | 2025-10-10T08:51:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o2vnf2/temperatures_for_mi50_during_inference_anyone/ | EdenistTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2vnf2 | false | null | t3_1o2vnf2 | /r/LocalLLaMA/comments/1o2vnf2/temperatures_for_mi50_during_inference_anyone/ | false | false | self | 9 | null |
Do we need domain specialist coding agents (Like separate for front-end/backend)? | 2 | So I found [this](https://kombai.com/why) page on X earlier.
They’re claiming general coding agents (GPT 5, Gemini, Sonnet 4, etc) still struggle with real frontend work - like building proper pages, using component libs, following best practices, that kinda stuff.
(They've done their own benchmarking and all)
According to them, even top models fail to produce compilable code like 30–40% of the time on bigger frontend tasks.
Their whole thing is making 'domain-specialist' agents - like an agent that’s just focused on front-end.
It supposedly understands react/tailwind/mui and knows design-to-code, and generally makes smarter choices for frontend tasks.
I’m still new to all this AI coding stuff, but I’m curious -
Do we actually need separate coding agents for every use case? Or will general ones just get better over time? Wouldn’t maintaining all these niche agents be kinda painful?
Idk, just wanted to see what you folks here think. | 2025-10-10T08:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o2vfec/do_we_need_domain_specialist_coding_agents_like/ | Haunting_Age_2970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2vfec | false | null | t3_1o2vfec | /r/LocalLLaMA/comments/1o2vfec/do_we_need_domain_specialist_coding_agents_like/ | false | false | self | 2 | null |
Built my saas mvp using only local models and saved $3k | 1 | Indie dev here building an ai writing assistant. Original plan was using openai api but costs were gonna be insane for my use case.
Did some math and at my target volume I'd be paying like $0.05 per user interaction. With 1000 users doing 60 interactions per month that's $3000 monthly just for api calls. Completely unsustainable for a bootstrapped side project.
Started researching local alternatives. Tried ollama first which is solid for basic stuff but I needed more control over fine tuning and model switching. Lmstudio has a nice interface but felt limiting for what I wanted to build.
Ended up finding transformer lab which has a gui, lets me download models with a click, fine-tune them locally, and eval and iterate pretty easily.
My workflow now is prototype locally with different models, figure out which one works best for my use case, then optimize. Can test many different models in an afternoon without spending a cent.
The latency is actually better than API calls. No network overhead, and responses are pretty snappy. Quality is honestly comparable to GPT-4 for my specific use case, which is technical writing assistance.
Biggest surprise was how good the smaller models are. Phi3 with 3.8b parameters runs fast as hell on my macbook and handles 90% of what I need. Only fall back to larger models for complex reasoning tasks.
Still figuring out the production deployment piece. Right now users run models on their own hardware which is kinda the point. Privacy first, no data leaves their machine, no subscription costs for api access.
Made $200 in first two weeks from early access which isn't much but it's pure profit since my only costs are hosting a simple flask backend and domain registration.
Anyone else building saas with local models? Curious what approaches people are taking for monetization when the ai itself is free to run. | 2025-10-10T08:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o2v9r6/built_my_saas_mvp_using_only_local_models_and/ | Super_Sukhoii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2v9r6 | false | null | t3_1o2v9r6 | /r/LocalLLaMA/comments/1o2v9r6/built_my_saas_mvp_using_only_local_models_and/ | false | false | self | 1 | null |
Get $200 free credit from Agent router (Signup using the link below and GitHub account) - Sharing is caring | 0 | https://agentrouter.org/register?aff=SY0a | 2025-10-10T08:24:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o2v907/get_200_free_credit_from_agent_router_signup/ | Desperate-Mine2845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2v907 | false | null | t3_1o2v907 | /r/LocalLLaMA/comments/1o2v907/get_200_free_credit_from_agent_router_signup/ | false | false | self | 0 | null |
Found something interesting on lmarena | 1 | So I was playing around in lmarena and came across a model named miramar, which seems to be a codename.
Its responses in Chinese are pretty crap; I personally felt its literary capability is too poor for a serious model. Apparently it's from a company named OceanAI.
Here is where the weird thing happens: my friend, Grok, and I have done plenty of research on this codename, but in vain. There is no discussion of this model anywhere (Twitter, Reddit, search engines, etc.) and no information on lmarena. But miramar seems to have a relatively high chance of being picked in battle mode (it appeared three times in less than 20 minutes).
Wondering why there's zero discussion of this frequently(?) appearing model.
What's your experience with quantizing MoE with tiny experts? | 6 | As I've read, quantizing a small model of less than 8B parameters can seriously degrade its performance. But since MoE models (Qwen 30B with 3B experts, gpt-oss with 5B experts, ...) are just a combination of small experts, how does this affect them? Can I quantize them to Q4, or should I only run them at Q8 and only quantize dense models? | 2025-10-10T07:57:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o2uulc/whats_your_experience_with_quantizing_moe_with/ | arimoto02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2uulc | false | null | t3_1o2uulc | /r/LocalLLaMA/comments/1o2uulc/whats_your_experience_with_quantizing_moe_with/ | false | false | self | 6 | null |
We just launched Observability for LLMs that works without code changes and redeployment of apps | 12 | You know that moment when your AI app is live and suddenly slows down or costs more than expected? You check the logs and still have no clue what happened.
That is exactly why we built OpenLIT Operator. It gives you observability for LLMs and AI agents without touching your code, rebuilding containers, or redeploying.
✅ Traces every LLM, agent, and tool call automatically
✅ Shows latency, cost, token usage, and errors
✅ Works with OpenAI, Anthropic, AgentCore, Ollama, and others
✅ Connects with OpenTelemetry, Grafana, Jaeger, and Prometheus
✅ Runs anywhere like Docker, Helm, or Kubernetes
You can set it up once and start seeing everything in a few minutes. It also works with any OpenTelemetry instrumentation, like OpenInference, or anything custom you have.
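For a sense of what those traces carry, here's roughly what a hand-instrumented LLM span looks like with the plain OpenTelemetry SDK; the operator's whole point is that you get equivalent spans with zero code changes. Attribute keys follow the OTel GenAI semantic conventions, and the exact names and values here are illustrative:

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans to any OTLP endpoint (collector, Jaeger, Grafana stack, ...).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

with trace.get_tracer("demo").start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4o")
    span.set_attribute("gen_ai.usage.input_tokens", 812)
    span.set_attribute("gen_ai.usage.output_tokens", 143)
    # ...the actual client call would go here...
```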
We just launched it on Product Hunt today 🎉
👉 [https://www.producthunt.com/products/openlit?launch=openlit-s-zero-code-llm-observability](https://www.producthunt.com/products/openlit?launch=openlit-s-zero-code-llm-observability)
Open source repo here:
🧠 [https://github.com/openlit/openlit](https://github.com/openlit/openlit)
If you have ever said "I'll add observability later," this might be the easiest way to start. | 2025-10-10T07:30:27 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ug6s/we_just_launched_observability_for_llms_that/ | patcher99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ug6s | false | null | t3_1o2ug6s | /r/LocalLLaMA/comments/1o2ug6s/we_just_launched_observability_for_llms_that/ | false | false | self | 12 | null |
GLM coding plan | 0 | There is something called the GLM Coding Plan from the official provider for just $3 a month. Has anyone tried it with ST? I can't find anything in the ToS prohibiting its use with ST. | 2025-10-10T07:23:40 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ucg6/glm_coding_plan/ | Whole-Warthog8331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ucg6 | false | null | t3_1o2ucg6 | /r/LocalLLaMA/comments/1o2ucg6/glm_coding_plan/ | false | false | self | 0 | null |
Olla v0.0.19 is out with SGLang & lemonade support | 7 | We've added native [sglang](https://github.com/sgl-project/sglang) and [lemonade](https://lemonade-server.ai/) support and released v0.0.19 of Olla, the fast unifying LLM Proxy - which already supports Ollama, LM Studio, LiteLLM natively (see [the list](https://thushan.github.io/olla/integrations/overview/)).
We’ve been using Olla extensively with OpenWebUI and the OpenAI-compatible endpoint for vLLM and SGLang experimentation on Blackwell GPUs running under Proxmox, and there’s [now an example available for that setup](https://thushan.github.io/olla/integrations/frontend/openwebui-openai/) too.
With [Olla](https://github.com/thushan/olla), you can expose a unified OpenAI-compatible API to OpenWebUI (or LibreChat, etc.), while your models run on separate backends like vLLM and SGLang. From OpenWebUI’s perspective, it’s just one API to read them all.
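From the client side it really is just the one API. A sketch with the official openai package; the port and endpoint path here are placeholders, so check your Olla config for the real ones:

```python
from openai import OpenAI

# Point any OpenAI-compatible client at the proxy instead of a single backend.
client = OpenAI(base_url="http://localhost:40114/olla/openai/v1", api_key="unused")

resp = client.chat.completions.create(
    model="qwen3-30b-a3b",  # whichever model a backend behind Olla serves
    messages=[{"role": "user", "content": "Say hello from behind the proxy."}],
)
print(resp.choices[0].message.content)
```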
Best part is that we can swap models around (or tear down vLLM, start a new node, etc.) and they just come and go (in the UI) without restarting (as long as we put them all in Olla's config).
Let us know what you think! | 2025-10-10T06:11:39 | https://github.com/thushan/olla | 2shanigans | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o2t72w | false | null | t3_1o2t72w | /r/LocalLLaMA/comments/1o2t72w/olla_v0019_is_out_with_sglang_lemonade_support/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=108&crop=smart&auto=webp&s=71ab1f5549d4359a4d11cba8332458b5c8e08279', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=216&crop=smart&auto=webp&s=c266c248424fff7612d6ec5007ccf5c838d6fddf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=320&crop=smart&auto=webp&s=f08c52c256970404459b7831af329c4f430ade40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=640&crop=smart&auto=webp&s=cd56b94ddf46de09ffeea2a2f9edca7346d5f857', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=960&crop=smart&auto=webp&s=bc7db25316a002531821dd51b1b36f9514a5c188', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?width=1080&crop=smart&auto=webp&s=72fe278251550483450daebe8c24042e9d2b11a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EmvlZmNJDsCM03xUIgjfe04GISheKc3GwRTX95FXeeM.png?auto=webp&s=4fc817b0e0c3096035caca5a999b6696897ff080', 'width': 1200}, 'variants': {}}]} | |
An Embarrassingly Simple Defense Against LLM Abliteration Attacks | 7 | 2025-10-10T05:51:49 | https://arxiv.org/html/2505.19056v2 | newdoria88 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1o2sv96 | false | null | t3_1o2sv96 | /r/LocalLLaMA/comments/1o2sv96/an_embarrassingly_simple_defense_against_llm/ | false | false | default | 7 | null | |
AI coding completion survey | 3 | I'm curious: how long does it take you to finish your average coding task with Claude Code (Opus or Sonnet 4.5) or GPT-5 Pro, compared to a large model like GLM-4.6 or DeepSeek 3.2 - including debugging and review time? Compared to GPT-5 nano? Compared to a medium-size model like Qwen3-Next-80B? Compared to a smaller model like Qwen3-30B-A3B? Compared to using no AI? | 2025-10-10T05:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o2s9td/ai_coding_completion_survey/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2s9td | false | null | t3_1o2s9td | /r/LocalLLaMA/comments/1o2s9td/ai_coding_completion_survey/ | false | false | self | 3 | null |
Huawei/CANN / Ascend NPUs: Is anyone using it - and, what's the perf? | 2 | Basically the title.
I've been side-eyeing CANN ever since I noticed it pop up in the llama.cpp documentation as a supported backend; it's also noted as such in other projects like vLLM.
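For reference, building llama.cpp against the CANN backend looks straightforward on paper. An untested sketch based on the docs - the GGML_CANN flag and any Ascend toolkit environment setup are assumptions to verify:

```python
# Build llama.cpp with the CANN backend enabled, then run as usual.
import subprocess

subprocess.run(["cmake", "-B", "build", "-DGGML_CANN=on"], check=True)
subprocess.run(["cmake", "--build", "build", "--config", "Release", "-j"], check=True)
# Offload layers to the NPU the same way as any other GPU backend, e.g.:
# subprocess.run(["./build/bin/llama-cli", "-m", "model.gguf", "-ngl", "99"])
```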
But, looking on Alibaba, their biggest NPU - with LPDDR4 memory - costs almost as much as the estimated price of a Maxsun Intel B60 Dual: over €1,000. That's... an odd one.
So, I wanted to share my slight curiosity. Does anyone have one? If so, what are you using it for, and what are its performance characteristics?
I recently learned that because the AMD MI50 uses HBM2 memory, it's actually still stupidly fast for LLM inference, but less so for SD (diffuser-type workloads), which I also found rather interesting.
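A quick back-of-envelope on why (numbers assumed, not benchmarked): single-stream decode is memory-bandwidth-bound, so throughput tops out around bandwidth divided by bytes touched per token. At the MI50's ~1 TB/s HBM2 and ~4 GB for a 7B model at Q4, that's a ceiling on the order of 250 tok/s - whereas diffusion steps are compute-bound, and the older Vega-class compute can't lean on bandwidth the same way.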
Not gonna get either of those - but, I _am_ curious to see what their capabilities are. In a small "AI Server", perhaps one of those would make a nice card to host "sub-models" - smaller, task-focused models that you may call via MCP or whatever x) | 2025-10-10T04:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o2rs4v/huaweicann_ascend_npus_is_anyone_using_it_and/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2rs4v | false | null | t3_1o2rs4v | /r/LocalLLaMA/comments/1o2rs4v/huaweicann_ascend_npus_is_anyone_using_it_and/ | false | false | self | 2 | null |
Qwen3 VL 4B to be released? | 205 | Qwen released cookbooks, and in one of them the model Qwen3 VL 4B appears, but I can't find it anywhere on Hugging Face. Link to the cookbook: https://github.com/QwenLM/Qwen3-VL/blob/main/cookbooks/long_document_understanding.ipynb
This would be quite amazing for OCR use cases. Qwen2.5/2 VL 3b/7b was foundation for many good OCR models | 2025-10-10T04:44:56 | Signal-Run7450 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o2rppj | false | null | t3_1o2rppj | /r/LocalLLaMA/comments/1o2rppj/qwen3_vl_4b_to_be_released/ | false | false | 205 | {'enabled': True, 'images': [{'id': 'Lmi8wMInNFXjl_jeUF7G-dIRbGjsJDWuUGTqgoBQdR4', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/68mmis87s7uf1.png?width=108&crop=smart&auto=webp&s=de8a99b906b01785bfc2fb4547ae17e50a825eb3', 'width': 108}, {'height': 373, 'url': 'https://preview.redd.it/68mmis87s7uf1.png?width=216&crop=smart&auto=webp&s=395e9e12c9409578c24d6bc81d84b4cb40870b0f', 'width': 216}, {'height': 552, 'url': 'https://preview.redd.it/68mmis87s7uf1.png?width=320&crop=smart&auto=webp&s=0868e7ae96cbb4c86193e20d11e22baba4daa03a', 'width': 320}, {'height': 1105, 'url': 'https://preview.redd.it/68mmis87s7uf1.png?width=640&crop=smart&auto=webp&s=4ab80165de7b5dc3e27595de513c638e5086c61d', 'width': 640}], 'source': {'height': 1263, 'url': 'https://preview.redd.it/68mmis87s7uf1.png?auto=webp&s=873397e28487b4565d9679429f47e0356d4cf80d', 'width': 731}, 'variants': {}}]} | ||
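Until a 4B checkpoint actually lands, the released Qwen2.5-VL-3B-Instruct handles the same kind of OCR prompt. A sketch following the model card's usage (transformers >= 4.49 plus the separate qwen-vl-utils pip package; the image path is illustrative, and you'd swap in the 4B id if it shows up):

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

MODEL_ID = "Qwen/Qwen2.5-VL-3B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)

# One user turn: an image plus an OCR instruction.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "invoice.png"},
        {"type": "text", "text": "Transcribe all text in this image."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos,
                   padding=True, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```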
Second Prototype! Tripled the dataset this time (Spent all day just cleaning it, lol) | 1 | I'm currently focusing only on **persona fine-tuning** (can't do layer tuning due to GPU limitations...)
What I added this time was **multi-turn dialogue**! Specifically, 500+ tokens per turn.
https://preview.redd.it/n9kbbmbjo7uf1.png?width=1293&format=png&auto=webp&s=18b8e791de57342840dd5eb1b9b06786994d378c
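For anyone formatting similar data: a sketch of how a multi-turn sample like this is commonly laid out for SFT, assuming a ChatML-style "messages" JSONL that trainers like TRL or Axolotl accept. The field names are convention rather than a fixed spec, and the persona/content strings are placeholders:

```python
import json

# One training record: a persona system prompt plus alternating long turns.
sample = {
    "messages": [
        {"role": "system", "content": "You are <persona>: warm, wry, consistent."},
        {"role": "user", "content": "So how did the experiment go today?"},
        {"role": "assistant", "content": "<~500+ token in-persona reply>"},
        {"role": "user", "content": "And what would you try differently?"},
        {"role": "assistant", "content": "<~500+ token in-persona reply>"},
    ]
}
with open("persona_sft.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```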
Also added simple **Q&A** and a few other things, but that's a secret!
Kicking off the training run now and heading to bed. Good luck to the model! | 2025-10-10T04:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o2rcgo/second_prototype_tripled_the_dataset_this_time/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2rcgo | false | null | t3_1o2rcgo | /r/LocalLLaMA/comments/1o2rcgo/second_prototype_tripled_the_dataset_this_time/ | false | false | 1 | null |