| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Hi, I'm new here and I'm looking for an LLM provider for study and role-playing. | 5 | Well, this is my story. I'm a software student, and recently we've been asked to do a project that has to do with LLM servers, so I've been looking for free servers for that and failing miserably. Although I admit that I've also been looking for these servers for roleplay, something like kicks before it became paid. I'd really appreciate any recommendations! (I used to use chutes for studying and roleplaying.) | 2025-09-19T13:29:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nl3m43/hi_im_new_here_and_im_looking_for_an_llm_provider/ | Daylight_0708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl3m43 | false | null | t3_1nl3m43 | /r/LocalLLaMA/comments/1nl3m43/hi_im_new_here_and_im_looking_for_an_llm_provider/ | false | false | self | 5 | null |
The new coderabbit cli tool is incredible | 0 | 2025-09-19T13:27:57 | https://v.redd.it/1eyd2787i4qf1 | Motor_Cycle7600 | /r/LocalLLaMA/comments/1nl3kxs/the_new_coderabbit_cli_tool_is_incredible/ | 1970-01-01T00:00:00 | 0 | {} | 1nl3kxs | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1eyd2787i4qf1/DASHPlaylist.mpd?a=1761042276%2CM2UwNzFmM2NlY2UzZGU2Y2E1YzM5NzQ4YjQwM2UzZGIxODgxYzRjZTI0NzFmM2JiODJjZGFjNDA4NDIxN2FiNw%3D%3D&v=1&f=sd', 'duration': 137, 'fallback_url': 'https://v.redd.it/1eyd2787i4qf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/1eyd2787i4qf1/HLSPlaylist.m3u8?a=1761042276%2CZjdkYjhhODEzNzc1ZDBkNGFhMjhjN2FmNzkzYTEyMjE3MzdlOGM1ZGNkMzU2YTk5MGQ0MTdlZWU4MzY1OWM3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1eyd2787i4qf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1596}} | t3_1nl3kxs | /r/LocalLLaMA/comments/1nl3kxs/the_new_coderabbit_cli_tool_is_incredible/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=108&crop=smart&format=pjpg&auto=webp&s=22dc618fa689bfe4e860e6d5ff5a2a49089aa08d', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=216&crop=smart&format=pjpg&auto=webp&s=1be6a6a094f7dcd4ba5b5ccac703daf806d60fa2', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=320&crop=smart&format=pjpg&auto=webp&s=c3f81c80ef2a723f6e32d8fb809d228033ed1e53', 'width': 320}, {'height': 433, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=640&crop=smart&format=pjpg&auto=webp&s=63cca5d550f4e5193fbcfe6e0965c62330482628', 'width': 640}, {'height': 649, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=960&crop=smart&format=pjpg&auto=webp&s=432436c8d9c97ca6ebaafa1d1bb11232f126a8fe', 'width': 960}, {'height': 730, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2256409f501c9c2ee19736b39a9380eab80ed3f1', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/cHV2OXV4NzdpNHFmMcZyht-0YIVdicLMYIUqqzvtJfCrBkdrbd5yMoNloEzo.png?format=pjpg&auto=webp&s=d81b1e258002abec8a2ee41e9cab11a17061f4af', 'width': 3192}, 'variants': {}}]} | ||
Thoughts about AI potentially winning Person of the Year? | 0 | 2025-09-19T13:06:19 | Extension-Cut-5535 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nl32gv | false | null | t3_1nl32gv | /r/LocalLLaMA/comments/1nl32gv/thoughts_about_ai_potentially_winning_person_of/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vvb1cafie4qf1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?width=108&crop=smart&auto=webp&s=6a75f8ab58663d185d8447ddb8e937873d8cb43d', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?width=216&crop=smart&auto=webp&s=a256c7b2a11cc90856be8a7ed3c3075f18e3ab84', 'width': 216}, {'height': 262, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?width=320&crop=smart&auto=webp&s=79c11f90f06ad2f3b756ea4560399b90b38eb3e4', 'width': 320}, {'height': 524, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?width=640&crop=smart&auto=webp&s=2f853c52bd00b71516ddc3c1d90dc9eaa9836782', 'width': 640}, {'height': 786, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?width=960&crop=smart&auto=webp&s=6ca8dea94080e1d903f1833a779acf4cf6e24e0b', 'width': 960}], 'source': {'height': 882, 'url': 'https://preview.redd.it/vvb1cafie4qf1.png?auto=webp&s=6e4aa53d9e02a7915e3372f35cf4d666128d20a6', 'width': 1076}, 'variants': {}}]} | ||
Able to use LMStudio plugins on Windows but not Linux? | 4 | I run LM Studio on both Windows 11 and Pop!_OS 22.04. On Windows, the sidebar shows a "Plugins" option right under Models. On Linux, that option isn’t there. Same version number, downloaded from the official site.
Is anyone else seeing this discrepancy? I haven’t found any release notes that explain whether the feature is Windows-only or just not built into the Linux binaries yet.
If you’ve checked on another distro or build, what do you see? | 2025-09-19T12:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nl2lgb/able_to_use_lmstudio_plugins_on_windows_but_not/ | FreshmanCult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl2lgb | false | null | t3_1nl2lgb | /r/LocalLLaMA/comments/1nl2lgb/able_to_use_lmstudio_plugins_on_windows_but_not/ | false | false | self | 4 | null |
LLM association | 4 | I needed to analyze a complex scientific text and generate ideas.
Problems:
1. gpt-oss-120b-F16 - uncreative and knows little
2. kimi-k2 - knows a lot, but is poor at expressing its thoughts mathematically.
What I did:
1. I had kimi-k2 write out everything it knows on the topic. Context - 60k
2. I changed the endpoint IP address and restarted the same session against gpt-oss-120b-F16. I told gpt: figure it out and write your own version.
As a result, I got 120k and a lot of interesting ideas, presented mathematically.
Does anyone else do this? | 2025-09-19T12:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nl2hym/llm_association/ | Defiant_Diet9085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl2hym | false | null | t3_1nl2hym | /r/LocalLLaMA/comments/1nl2hym/llm_association/ | false | false | self | 4 | null |
How do you block telemetry of apps? | 0 | Some of you do use Proprietary / Closed source apps like Ollama, Msty, LMStudio, etc., Even I want to use those apps for few features. But how do you block telemetry of those apps? Any Opensource tools/utilities for this? | 2025-09-19T12:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nl2ald/how_do_you_block_telemetry_of_apps/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl2ald | false | null | t3_1nl2ald | /r/LocalLLaMA/comments/1nl2ald/how_do_you_block_telemetry_of_apps/ | false | false | self | 0 | null |
llama.cpp build 6517 fails to parse gpt-oss-20b harmony tags | 1 | Hi guys, llama.cpp fails to parse harmony tags for me.
https://preview.redd.it/s848nbcv14qf1.png?width=750&format=png&auto=webp&s=ed444dbd2a9c361e8d3d7d29be88082161979c98
Logs: [https://pastebin.com/7xQ1fLfk](https://pastebin.com/7xQ1fLfk)
version: 6517 (69ffd891)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
LLAMA_ARG_HOST: 0.0.0.0
LLAMA_ARG_PORT: 80
LLAMA_ARG_THREADS: 8
LLAMA_ARG_CTX_SIZE: 0
LLAMA_ARG_HF_REPO: unsloth/gpt-oss-20b-GGUF:Q4_K_S
LLAMA_ARG_N_GPU_LAYERS: 1
LLAMA_ARG_FLASH_ATTN: "enabled"
LLAMA_ARG_JINJA: "enabled"
LLAMA_ARG_THINK: "auto" | 2025-09-19T11:59:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nl1lb0/llamacpp_build_6517_fails_to_parse_gptoss20b/ | lifeequalsfalse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl1lb0 | false | null | t3_1nl1lb0 | /r/LocalLLaMA/comments/1nl1lb0/llamacpp_build_6517_fails_to_parse_gptoss20b/ | false | false | 1 | null | |
Desktop CPU choice for inference: 8700G or 9900X? | 4 | Hi,
I'm building a new desktop and I also want it to run larger LLMs. I'm getting 192GB of DDR5-6000, and I'm installing a 7900 XTX along my old 7600 XT for a combined 40GB of VRAM.
I'm in doubt whether the 8700G's integrated graphics could bring something to the table when running larger parameter counts that don't fit inside the GPUs memory, or whether I should just go for the 9900X instead which has more cores.
Both have the same memory bandwidth, but the 9900X also has two CCDs with 6 cores each instead of just one 8 core CCD for the 8700G, which might be an obstacle to getting the most out of the chip in inference workloads.
PS: Yes, I know the 7600 XT has low memory bandwidth, but if the model can't fit the 7900 XTX but does fit the two cards combined, it will still beat out CPU offload in Llama.cpp. | 2025-09-19T11:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nl0lqk/destkop_cpu_choice_for_inference_8700g_or_9900x/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl0lqk | false | null | t3_1nl0lqk | /r/LocalLLaMA/comments/1nl0lqk/destkop_cpu_choice_for_inference_8700g_or_9900x/ | false | false | self | 4 | null |
China will stop sharing more capable models, and so will frontier labs | 0 | [https://www.alignmentforum.org/posts/Bz2gPtqRJJDWyKxnX/ai-companies-have-started-saying-safeguards-are-load-bearing](https://www.alignmentforum.org/posts/Bz2gPtqRJJDWyKxnX/ai-companies-have-started-saying-safeguards-are-load-bearing)
>There are two ways to show that an AI system is safe: show that it doesn't have dangerous capabilities, or show that it's safe even if it has dangerous capabilities. Until three months ago, AI companies said their models didn't have dangerous capabilities.
A lot of people are talking about 'asymptotic' ceilings, signs that AI isn't learning much.
What they don't realize is that models are getting too capable and too dangerous and labs are going to be increasingly reluctant about sharing those capabilities in a public facing fashion.
Why brag about something we can't use? It will just invite anger at the brand.
China especially will pressure labs into not releasing highly capable models.
What does this mean? Going forward we will see improvements in efficiency (size/compute/power) but we're probably hitting a ceiling in terms of capability that will be openly accessible.
It would take a pretty rogue lab to release something like that.
Nvidia's SLM push could be around this. They realize that privately they have customers that can do bigger and better things with LLMs but they can't / won't release public science around that. So they throw us bones and tell us life is going to be great with SLMs. And it is what it is. At least there is some effort that helps us make do.
You might doubt all this, but start watching for things like special access for experts in the near future.
eg: [https://help.openai.com/en/articles/11826767-life-science-research-special-access-program](https://help.openai.com/en/articles/11826767-life-science-research-special-access-program) | 2025-09-19T11:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nl0j11/china_will_stop_sharing_more_capable_models_and/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nl0j11 | false | null | t3_1nl0j11 | /r/LocalLLaMA/comments/1nl0j11/china_will_stop_sharing_more_capable_models_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=108&crop=smart&auto=webp&s=ac3c5d3fbed1eb5491cec16247f779e8145fc835', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=216&crop=smart&auto=webp&s=dea68bc4fa68c1e226655d47ae5acd8dc90c4068', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=320&crop=smart&auto=webp&s=c6a5776129696e7bd223cfaa6372a1159f44a97c', 'width': 320}, {'height': 456, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=640&crop=smart&auto=webp&s=3c784adbf13b46e2bb0db544e9b0cddaf1a98721', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=960&crop=smart&auto=webp&s=b72e036b02c5a7091959ed9be98c6d9b348cd722', 'width': 960}, {'height': 770, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?width=1080&crop=smart&auto=webp&s=9111c7c5414fa0a91c3fdbbf13fb651d79c83f31', 'width': 1080}], 'source': {'height': 1009, 'url': 'https://external-preview.redd.it/yc3FKegVFTsn5106cI_Oymuw-5KC_sVfj1V3NGUsJ5w.png?auto=webp&s=5cb8e9a03481fc545a141eefdd0beb6b04b2d31d', 'width': 1414}, 'variants': {}}]} |
AI-powered JEE prep tools (notes + flashcards + chatbot) 🚀📚 | 1 | Hey guys, I’ve been working on something fun — an AI-based study helper for JEE prep. It currently has:
📑 Topic-wise notes
🔗 NCERT reference links
🧠 Flashcards (concepts + practice questions)
🤖 Chatbot for quick doubt-solving
📝 Formula sheets (coming soon)
The idea is to make JEE prep a little less painful by organizing everything in one place. Right now I’m calling it Examsprint. Would love to know what feature would help you the most.
| 2025-09-19T10:56:47 | Distinct-Mode-7415 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nl0e21 | false | null | t3_1nl0e21 | /r/LocalLLaMA/comments/1nl0e21/aipowered_jee_prep_tools_notes_flashcards_chatbot/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Ke2XaIl8jUz3nN_JgxM4qPvBUbXflqHEn50TPXbPHAo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=108&crop=smart&auto=webp&s=0f2cf42174a02ecb88b3617c8cc8cb0be7990e55', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=216&crop=smart&auto=webp&s=8dd79614f9af8746330be5c3098fdee097db61f7', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=320&crop=smart&auto=webp&s=066754e4ae99161c93c7315ee859ae1f6d33c874', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=640&crop=smart&auto=webp&s=d49615459cf8c65a2a04dc40470b9325bd4b4f50', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=960&crop=smart&auto=webp&s=2a383b0f82cce1eb42dc5a253752957d4371cca3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?width=1080&crop=smart&auto=webp&s=327d1d367bbe20334784d165b61c450c9d753dcd', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/i96iqoaer3qf1.png?auto=webp&s=53c1cc0e3c7614af15dd53e9c18bd3cc283f2c52', 'width': 1080}, 'variants': {}}]} | ||
GitHub - gruai/koifish: A c++ framework on efficient training & fine-tuning LLMs | 22 | Now you can speed run training. Train GPT2-1558M in 30 hours on a single 4090! | 2025-09-19T10:52:28 | https://github.com/gruai/koifish | DeltaSqueezer | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nl0bay | false | null | t3_1nl0bay | /r/LocalLLaMA/comments/1nl0bay/github_gruaikoifish_a_c_framework_on_efficient/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=108&crop=smart&auto=webp&s=305c682adebb304e23b7aa4c05d43efffcdcd347', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=216&crop=smart&auto=webp&s=f347ed92e3cded84670019bbef662601ece7979d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=320&crop=smart&auto=webp&s=0ab9509afd8dd0e763b692c2b692c12801113fad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=640&crop=smart&auto=webp&s=01d14f9d61b75357055bbb3e18a4033516cadfbc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=960&crop=smart&auto=webp&s=6928dac17c9e369c7cd272b1e21a68e691dbea77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?width=1080&crop=smart&auto=webp&s=923b2c8e1bbd09e53cd3be285999db5174f94a32', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ip5tq3IGjmyJr17Rld7StDK76hE-5Hx6xnwIgFhhwT8.png?auto=webp&s=59634cfbb779eb068cf8c1dbf0c1c7d54f4d8921', 'width': 1200}, 'variants': {}}]} |
Most people who say "LLMs are so stupid" totally fall into this trap | 0 | 2025-09-19T10:50:12 | FinnFarrow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nl09ur | false | null | t3_1nl09ur | /r/LocalLLaMA/comments/1nl09ur/most_people_who_say_llms_are_so_stupid_totally/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'GXOeeCISgU0Z8xs3smV9m7V2V4XGHmRrOlQCylWYNCU', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/5s462dg5q3qf1.png?width=108&crop=smart&auto=webp&s=d82dd29462bde518e5d7f534c3982ee928079fa6', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/5s462dg5q3qf1.png?width=216&crop=smart&auto=webp&s=c3c2afe079ba7d325586c0fb063c70b6f3f4bcbb', 'width': 216}, {'height': 316, 'url': 'https://preview.redd.it/5s462dg5q3qf1.png?width=320&crop=smart&auto=webp&s=5f88c1afbef0c2887f89edc15cc0e4544ea5d03d', 'width': 320}, {'height': 632, 'url': 'https://preview.redd.it/5s462dg5q3qf1.png?width=640&crop=smart&auto=webp&s=2a6d7cfd839f258cf68fe249a6ec06824ed5a02c', 'width': 640}], 'source': {'height': 893, 'url': 'https://preview.redd.it/5s462dg5q3qf1.png?auto=webp&s=7c4d520822eb3a878e8ff0710e9d6ef76ffde5c2', 'width': 904}, 'variants': {}}]} | |||
Zenbot Chrome Nonautomation - An LLM-Driven Web Co-Browser using Smolagents, Zendriver, Trafilatura | 1 | [removed] | 2025-09-19T09:58:43 | https://github.com/michaelsoftmd/zenbot-chrome | dredgesta | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nkzeij | false | null | t3_1nkzeij | /r/LocalLLaMA/comments/1nkzeij/zenbot_chrome_nonautomation_an_llmdriven_web/ | false | false | default | 1 | null |
Today I'm competing against Apple on Product Hunt. I need some help, guys. | 0 | Two years ago I left my job( tesla ) to build SuperU - an AI voice agent platform. For this, i burned through savings, lived on ramen, coded in my apartment, the whole thing.
I launched on Product Hunt today and just found out Google and Apple are launching their products the same day. My heart literally sank when I saw the lineup.
Here's the thing - I know SuperU is good. Really good. We've got enterprise customers using our AI agents for thousands of calls, 200ms response time, works in 100+ languages. I built something that actually solves problems.
But I'm just one guy who's now competing against teams of hundreds with big dollar marketing budgets.
I've been grinding on this for 730 days straight. Learned to code APIs, figured out voice synthesis, built integrations with CRMs, handled customer support calls at 2am. Everything.
The worst part? I almost gave up three months ago when my savings hit zero. Had to move back with my parents. My college friends are buying houses while I'm explaining why I'm 30 and sleeping in my childhood bedroom.
But then this fintech company signed up and ran 10,000 calls through SuperU in their first month. Their CEO said it saved them more money than their entire software budget. That's when I knew this thing could actually work.
Today feels like everything I've worked for comes down to one day. If I can get some visibility on Product Hunt, maybe more companies will discover what I've built. If not... honestly, I don't know what happens next.
I'm not asking for charity or sympathy votes. But if you've got 30 seconds to check out SuperU on Product Hunt and think it's genuinely useful, a vote would mean everything to me.
Thanks for reading. Back to refreshing analytics and pretending I'm not terrified. | 2025-09-19T09:58:39 | Distinct_Criticism36 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkzehc | false | null | t3_1nkzehc | /r/LocalLLaMA/comments/1nkzehc/today_im_competing_against_apple_on_product_hunt/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'p1rc3d7xg3qf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=108&crop=smart&auto=webp&s=9e9bafb17ef5fc21abf8eac5f0b8875ce70af735', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=216&crop=smart&auto=webp&s=1317afc42fd13adc75f01a707a210fb3e7a0f9e1', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=320&crop=smart&auto=webp&s=bd4cba243d57d3e6a1060576bdc3611e1a1e6ecf', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=640&crop=smart&auto=webp&s=d1ac35831d083b3f0318b51bd18a0057e39179ab', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=960&crop=smart&auto=webp&s=05ce245fca5c39bf6743d8704377b6d4d86180e7', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?width=1080&crop=smart&auto=webp&s=f242ce651b9bc214313f35563e80f7db99873d52', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/p1rc3d7xg3qf1.jpeg?auto=webp&s=855b08e9f68dae616903b7952e82a9d201593657', 'width': 1080}, 'variants': {}}]} | |
Introducing Zenbot - Your Web Browser Companion | 1 | [removed] | 2025-09-19T09:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nkz8ck/introducing_zenbot_your_web_browser_companion/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkz8ck | false | null | t3_1nkz8ck | /r/LocalLLaMA/comments/1nkz8ck/introducing_zenbot_your_web_browser_companion/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=108&crop=smart&auto=webp&s=4e615e91bc22f6485dd80d2dad7e6bb3fcd4a03a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=216&crop=smart&auto=webp&s=1e5a8c26280aae459ba2ef5812be8353cae379f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=320&crop=smart&auto=webp&s=a4aa0a8e8aa04ff6e2e5dac3456e3728e9c34355', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=640&crop=smart&auto=webp&s=c1432898ffad2a71acfab289d95d9dfbbcdccef2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=960&crop=smart&auto=webp&s=7abbcbd01f6175b74722dbcd53600bb9d6b11ba1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=1080&crop=smart&auto=webp&s=ecbffa96fd2bc72c92cccbbfe1b53dd8b9fc6fc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?auto=webp&s=f80bf918fc89523e1495922e4f7068ede92b47ed', 'width': 1200}, 'variants': {}}]} | |
Introducing Zenbot - Your Web Browser Companion | 1 | [removed] | 2025-09-19T09:39:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nkz3d5/introducing_zenbot_your_web_browser_companion/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkz3d5 | false | null | t3_1nkz3d5 | /r/LocalLLaMA/comments/1nkz3d5/introducing_zenbot_your_web_browser_companion/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=108&crop=smart&auto=webp&s=4e615e91bc22f6485dd80d2dad7e6bb3fcd4a03a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=216&crop=smart&auto=webp&s=1e5a8c26280aae459ba2ef5812be8353cae379f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=320&crop=smart&auto=webp&s=a4aa0a8e8aa04ff6e2e5dac3456e3728e9c34355', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=640&crop=smart&auto=webp&s=c1432898ffad2a71acfab289d95d9dfbbcdccef2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=960&crop=smart&auto=webp&s=7abbcbd01f6175b74722dbcd53600bb9d6b11ba1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?width=1080&crop=smart&auto=webp&s=ecbffa96fd2bc72c92cccbbfe1b53dd8b9fc6fc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MFq-9F2ZNcKykUryu4kDKZRxKS1KeBKrueOZlpoNcz8.png?auto=webp&s=f80bf918fc89523e1495922e4f7068ede92b47ed', 'width': 1200}, 'variants': {}}]} |
Solutions to the sycophant problem? | 8 | tl;dr - are there any models that handle conflict in a realistic way? That is to say, characters in-fiction will refuse each other and behave somewhat rationally.
---
I've been playing around with AI as a writing assistant, essentially prompting it with what I have so far and seeing how it might complete a sentence/paragraph, change my description, etc.
This isn't writing for sale, just for fun to see what I can do with it.
setup is 2x 3090s
The AI rarely outright refuses me at the model level in the "can't let you do that Dave" sense.
However, I've encountered an issue I reckon many others have too - it sucks terribly at conflict.
Are there any models or finetunes or strategies that can get round this?
For example, I can spend about 8000 words setting up a conflict between two ex-lovers who have despised each other for a decade, and the moment the AI takes the wheel it has them start to reconcile immediately and cry on each other's shoulders within one page. All the models I've tried behave this way. Mistral, Qwen, Llama, some finetunes.
Even conversations that start about a completely different topic eventually devolve into "you know we should also address the thing while we're here." like it's a Teams call performance review.
I've tried prompting it to avoid easy conflict resolution in a variety of ways with mixed results, all bad. It will either outright ignore the prompt, or hyper fixate on it with no middle ground. So either characters still reconcile, or they become outright petty and start arguments no sane person would have while ignoring everything else in the scene's context.
| 2025-09-19T09:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nkyvty/solutions_to_the_sycophant_problem/ | TurpentineEnjoyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkyvty | false | null | t3_1nkyvty | /r/LocalLLaMA/comments/1nkyvty/solutions_to_the_sycophant_problem/ | false | false | self | 8 | null |
Nvidia vs Mac Studio M4 Max - gemma3 vision input performance Q | 0 | So, for gemma3 12b with the appropriate mmproj in llama-mtmd-cli,
I'm seeing an RTX 4090 (~1000 GB/s memory) encode image input near-instantly ('252ms'),
whilst the Mac Studio M4 36GB (400 GB/s memory) takes at least around 6 seconds.
The gap is huge, whereas for text inference the gap is closer to the ratio of memory bandwidths; the M4 is perfectly usable for conversation.
Is this down to being compute-bound, but is it more extreme with the RTX4090 having better tensor cores more suited to the convolutions (support for better formats for it etc)
.. or could it also be down to optimisation, e.g. less effort has been put into the needed codepaths in MLX
I gather that apple are going to change design alot in the M5 (probably trying to close gaps like this)
I think apple silicon also struggles with diffusion models?
I knew this when I got the device, with the M4 being more an all rounder that just happens to handle LLMs pretty well - but if it could handle VLM's that would be handy
Is it worth looking into optimization (I am a graphics programmer, I have dealt with shaders & SIMD) .. but i figure 'if it was possible someone would have done it by now' for something so prominent
It also might be possible to just offload the vision net to another box ? send the image to a server to do the encoding and get embedding vectors back to slot into the appropriate place - again if C++ coding is needed I could in theory have a bash at it , but in practice hacking on an unfamiliar codebase is tricky and modifications get lost with updates if you dont have buy in from the community on how it should work. It sounds like the exact mechanics of 'using a vision server' could be viewed as too niche.
Then again this might be a use case which helps many people out .
I have a spare machine with a smaller GPU , if it's 1/2-1/4 the speed of the 4090 that'll still be >4x faster than the current apple machine for vision .
I'm also interested in integrating the vison encoding with a game engine (generate frames, then vision-encode them, and throw embeddings at the LLM which could be on another box. Again delegation of machine based on what boxes can handle the most difficult aspects of each stage)
any thoughts ? | 2025-09-19T09:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nkyu2c/nvivida_vs_mac_studio_m4_max_gemma3_vision_input/ | dobkeratops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkyu2c | false | null | t3_1nkyu2c | /r/LocalLLaMA/comments/1nkyu2c/nvivida_vs_mac_studio_m4_max_gemma3_vision_input/ | false | false | self | 0 | null |
I want to get y'all's take on KV Cache | 0 | My whole LYRN system is built around efficient KV cache reuse and it's essentially turning the system prompt into an entire stateful mindspace. I wanted to see what you guys understand KV cache to be and how you are using it with your systems.
I think that KV cache is the greatest thing since sliced bread and I completely take advantage of the efficiency I get from sticking all context into a snapshot system with static and dynamic snapshots. This system completely rewrites how the system prompt is used and built. You can see how this works with my application here. https://github.com/bsides230/LYRN | 2025-09-19T09:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nkytdh/i_want_to_get_yalls_take_on_kv_cache/ | PayBetter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkytdh | false | null | t3_1nkytdh | /r/LocalLLaMA/comments/1nkytdh/i_want_to_get_yalls_take_on_kv_cache/ | false | false | self | 0 | null |
What are your most-wanted datasets? | 2 | We have received a grant and would like to spend a portion of the funds on curating and releasing free and open source datasets on huggingface, what would you say are the modalities / types of datasets you would like to have readily available? | 2025-09-19T09:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nkyqpy/what_are_your_mostwanted_datasets/ | superbardibros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkyqpy | false | null | t3_1nkyqpy | /r/LocalLLaMA/comments/1nkyqpy/what_are_your_mostwanted_datasets/ | false | false | self | 2 | null |
SFT for response style: train on per-turn completion tokens converges fast, train on assistant-only responses underfits | 2 | Hey folks, looking for advice on an SFT setup for “baking in” response style on a small multi-turn conversation dataset (~10k samples, mostly English and code-mixed Hindi and English).
I tried two approaches:
1. train on assistant responses only (user and system prompts are masked)
2. train on completion tokens only (break the multi-turn conversation at each assistant response and train on the span from the beginning up to that break point)
The second approach converges very fast (train loss = 0.3 in just 500 steps), but the first approach saturates and underfits (train loss = 0.9).
My question is: are the two approaches technically equivalent or not? If yes, why is there such different behavior between them? Is approach 2 benefiting from some subtle **data leakage**, or is it simply the better-posed objective (optimizing P(y|x) with a single contiguous target span)?
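For concreteness, here is a toy sketch of how I'm thinking about the label construction in the two setups (the chat format, helper names, and stand-in tokenizer are illustrative, not my actual pipeline; -100 is the usual ignore index for the loss):

```
IGNORE = -100  # ignored by the cross-entropy loss in typical HF-style SFT trainers

def approach_1_assistant_only(turns, tokenize):
    """One sample per conversation; labels are IGNORE everywhere except assistant tokens."""
    input_ids, labels = [], []
    for role, text in turns:
        ids = tokenize(f"{role}: {text}\n")
        input_ids += ids
        labels += ids if role == "assistant" else [IGNORE] * len(ids)
    return input_ids, labels

def approach_2_per_turn_completions(turns, tokenize):
    """One sample per assistant turn: prefix = conversation up to that turn, target = that turn only."""
    samples = []
    for i, (role, text) in enumerate(turns):
        if role != "assistant":
            continue
        prefix = [t for r, x in turns[:i] for t in tokenize(f"{r}: {x}\n")]
        target = tokenize(f"assistant: {text}\n")
        samples.append((prefix + target, [IGNORE] * len(prefix) + target))
    return samples

if __name__ == "__main__":
    tok = lambda s: list(s.encode("utf-8"))  # stand-in tokenizer just to make the sketch runnable
    conv = [("user", "hi"), ("assistant", "hello"), ("user", "and now?"), ("assistant", "still here")]
    ids, labels = approach_1_assistant_only(conv, tok)
    print(len(ids), sum(l != IGNORE for l in labels))       # total tokens vs supervised tokens
    print(len(approach_2_per_turn_completions(conv, tok)))  # one sample per assistant turn
```

Note that in approach 2 the same conversation yields several overlapping samples, with earlier assistant turns reappearing (unsupervised) in later prefixes, which is part of why I suspect the fast convergence could be duplication rather than a genuinely better objective.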
Would love to hear what’s worked for you on smallish dialog SFT, especially around packing, sampling, and eval protocols. Thanks! | 2025-09-19T09:13:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nkyoym/sft_for_response_style_train_on_perturn/ | okbromonke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkyoym | false | null | t3_1nkyoym | /r/LocalLLaMA/comments/1nkyoym/sft_for_response_style_train_on_perturn/ | false | false | self | 2 | null |
Do we have any Android/Windows app that has a playground feature for base LLMs | 2 | Thx.
| 2025-09-19T09:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nkyjl9/do_we_have_any_androidwindows_app_that_has_a/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkyjl9 | false | null | t3_1nkyjl9 | /r/LocalLLaMA/comments/1nkyjl9/do_we_have_any_androidwindows_app_that_has_a/ | false | false | self | 2 | null |
GPU power limiting measurements update | 48 | This is an update to this thread: [https://old.reddit.com/r/LocalLLaMA/comments/1n89wi8/power\_limit\_your\_gpus\_to\_reduce\_electricity\_costs/](https://old.reddit.com/r/LocalLLaMA/comments/1n89wi8/power_limit_your_gpus_to_reduce_electricity_costs/)
In that thread I was recommended to use a special tool from Nvidia to log the actual energy usage: [https://docs.nvidia.com/datacenter/dcgm/latest/user-guide/feature-overview.html](https://docs.nvidia.com/datacenter/dcgm/latest/user-guide/feature-overview.html)
So I've run the test again and got some interesting results, for example the GPU consumes less power than the power limit set, the higher the limit the bigger the difference with the actual power draw. The VRAM clock does not change with the different power limits and always stays almost at its maximum value of 14001 MHz, but the GPU clock varies. And the most interesting chart is "minutes elapsed vs energy consumed" chart: the `llama-bench` takes the same time to complete the task (process/generate 1024 tokens for 5 times), and the GPU just wastes more energy with the higher power limits. It appeared that I was wrong with the conclusion that 360W is the best power limit for PRO 6000: the actual best spot seems to be around 310W (the actual power draw should be around 290W).
Also people recommend to downvolt the GPU instead of power limiting it, for example see these threads:
[https://old.reddit.com/r/LocalLLaMA/comments/1nhcf8t/successfully\_tuning\_5090s\_for\_low\_heat\_high\_speed/](https://old.reddit.com/r/LocalLLaMA/comments/1nhcf8t/successfully_tuning_5090s_for_low_heat_high_speed/)
[https://old.reddit.com/r/LocalLLaMA/comments/1njlnad/lact\_indirect\_undervolt\_oc\_method\_beats\_nvidiasmi/](https://old.reddit.com/r/LocalLLaMA/comments/1njlnad/lact_indirect_undervolt_oc_method_beats_nvidiasmi/)
I have not run proper tests yet, but from quick testing it seems that raising the power limit while capping the GPU clock (MHz) indeed works better than simply lowering the power limit. I will run a similar test with DCGM, but limiting the clock instead of the power, and will report back later.
Here is the testing script I've made (slightly modified and not rechecked prior to posting to Reddit so I might have fucked it up, check the code before running it), has to be run as root.
#!/bin/bash
gpuname=' PRO 6000 '; # search the GPU id by this string
startpower=150; # Watt
endpower=600; # Watt
increment=30; # Watt
llama_bench='/path/to/bin/llama-bench';
model='/path/to/Qwen_Qwen3-32B-Q8_0.gguf';
n_prompt=1024;
n_gen=1024;
repetitions=5;
filenamesuffix=$(date +%Y%m%d);
check() {
if [ "$?" -ne "0" ]; then echo 'something is wrong, exit'; exit 1; fi;
}
type nvidia-smi >/dev/null 2>&1; if [ "$?" -ne "0" ]; then echo 'install nvidia-smi'; exit 1; fi;
type dcgmi >/dev/null 2>&1; if [ "$?" -ne "0" ]; then echo 'install datacenter-gpu-manager'; exit 1; fi;
type awk >/dev/null 2>&1; if [ "$?" -ne "0" ]; then echo 'install gawk or mawk'; exit 1; fi;
test -f "$llama_bench"; if [ "$?" -ne "0" ]; then echo 'error: llama-bench not found' && exit 1; fi;
test -f "$model"; if [ "$?" -ne "0" ]; then echo 'error: LLM model not found'; exit 1; fi;
GPUnv=$(nvidia-smi --list-gpus | grep "$gpuname" | cut -d ' ' -f2 | sed 's/://'); # second space-delimited field is "N:", strip the colon
# I hope these IDs won't be different but anything could happen LOL
GPUdc=$(dcgmi discovery -l | grep "$gpuname" | head -n 1 | awk '{print $2}');
if [ "x$GPUnv" = "x" ] || [ "x$GPUdc" = "x" ]; then echo 'error getting GPU ID, check \$gpuname'; exit 1; fi;
echo "###### nvidia-smi GPU id = $GPUnv; DCGM GPU id = $GPUdc";
iterations=$(expr $(expr $endpower - $startpower) / $increment);
if [ "x$iterations" = "x" ]; then echo 'error calculating iterations, exit'; exit 1; fi;
echo "###### resetting GPU clocks to default";
nvidia-smi -i $GPUnv --reset-gpu-clocks; check;
nvidia-smi -i $GPUnv --reset-memory-clocks; check;
echo "###### recording current power limit value";
oldlimit=$(nvidia-smi -i $GPUnv -q | grep 'Requested Power Limit' | head -n 1 | awk '{print $5}');
if [ "x$oldlimit" = "x" ]; then echo 'error saving old power limit'; exit 1; fi;
echo "###### = $oldlimit W";
echo "###### creating DCGM group";
oldgroup=$(dcgmi group -l | grep -B1 powertest | head -n 1 | awk '{print $6}');
if [ "x$oldgroup" = "x" ]; then true; else dcgmi --delete $oldgroup; fi;
dcgmi group -c powertest; check;
group=$(dcgmi group -l | grep -B1 powertest | head -n 1 | awk '{print $6}');
dcgmi group -g $group -a $GPUdc; check;
dcgmi stats -g $group -e -u 500 -m 43200; check; # enable stats monitoring, update interval 500 ms, keep stats for 12 hours
for i in $(seq 0 $iterations);
do
echo "###### iteration $i";
powerlimit=$(expr $startpower + $(expr $i \* $increment));
echo "###### cooling GPU for 1 min...";
sleep 60;
echo "###### flushing RAM for cold start";
echo 3 > /proc/sys/vm/drop_caches;
echo 1 > /proc/sys/vm/compact_memory;
echo "######################## setting power limit = $powerlimit ########################";
nvidia-smi --id=$GPUnv --power-limit=$powerlimit 2>&1 | grep -v 'persistence mode is disabled'; check;
echo "###### start collecting stats";
dcgmi stats -g $group -s $powerlimit; check;
echo "###### running llama-bench";
CUDA_VISIBLE_DEVICES=$GPUnv $llama_bench -fa 1 --n-prompt $n_prompt --n-gen $n_gen --repetitions $repetitions -m $model -o csv | tee "${filenamesuffix}_${powerlimit}_llamabench.txt";
echo "###### stop collecting stats";
dcgmi stats -g $group -x $powerlimit; check;
echo "###### saving log: ${filenamesuffix}_${powerlimit}.log";
dcgmi stats -g $group -j $powerlimit -v > "${filenamesuffix}_${powerlimit}.log";
echo;echo;echo;
done
echo "###### test done, resetting power limit and removing DCGM stats";
nvidia-smi -i $GPUnv --power-limit=$oldlimit;
dcgmi stats -g $group --jremoveall;
dcgmi stats -g $group -d;
dcgmi group -d $group;
echo "###### finish, check ${filenamesuffix}_${powerlimit}*"; | 2025-09-19T08:51:47 | https://www.reddit.com/gallery/1nkycpq | MelodicRecognition7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nkycpq | false | null | t3_1nkycpq | /r/LocalLLaMA/comments/1nkycpq/gpu_power_limiting_measurements_update/ | false | false | 48 | null | |
SSD on M.2 to chipset vs directly to CPU? | 1 | So I'm considering the ASUS Pro Creator X870E, which has 2 PCIe Gen 5.0 x16 slots.
Now, if I understand correctly, a dual-GPU setup would give me x8/x8, but if I also add the M.2 SSD in the top slot I would get x8 for the first GPU, x4 for the second GPU, and x4 for the SSD.
But if I use the M.2 slot connected to the chipset, I would get x8 on both GPUs, right?
So the question, LLM-wise: which is preferable? GPUs + SSD on PCIe 5.0 at x8/x4/x4, or GPUs on PCIe 5.0 at x8/x8 with the SSD on PCIe 4.0 x4?
i'm assuming the 2nd option would give me better inference speed but slower model loading + if if the SSD shares the lanes with everything else in chipset it may incur latency | 2025-09-19T08:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nky8c4/ssd_on_m2_to_chipset_vs_directly_to_cpu_f/ | emaayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nky8c4 | false | null | t3_1nky8c4 | /r/LocalLLaMA/comments/1nky8c4/ssd_on_m2_to_chipset_vs_directly_to_cpu_f/ | false | false | self | 1 | null |
I made a project called Local Agent Personal Artificial Intelligence, also known as LAPAI. I need some advice, and I'd like to know what you think about it, because I'm still new to this: offline AI to help devs integrate AI into their projects entirely offline | 9 | I built an AI engine that improves and enhances tiny models like 8B, giving them abilities such as memory, and it works entirely offline. The reason for this is to support devs who want to integrate AI into their projects without data going to the cloud, entirely offline. I still need some advice because I'm new to this and just built it. Details are on my GitHub: [Local Agent Personal Artificial Intelligence](https://github.com/NaosaikaDevelopment/LAPAI_Project_Experimental)
Thank you for your time to see this. | 2025-09-19T08:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nky2vh/i_made_project_called_local_agent_personal/ | Ambitious_Cry3080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nky2vh | false | null | t3_1nky2vh | /r/LocalLLaMA/comments/1nky2vh/i_made_project_called_local_agent_personal/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=108&crop=smart&auto=webp&s=5a8faec068e4241814b46e0e9e2dbf209ffc2566', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=216&crop=smart&auto=webp&s=4c882526ea7df59bedc53f51d20b51f847958116', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=320&crop=smart&auto=webp&s=d838b4fc946258bc5547be83d8b00b636e09537d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=640&crop=smart&auto=webp&s=af214f46b0eb0902f5155bdab0239693f9310ae6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=960&crop=smart&auto=webp&s=1d588b49040db84a1e599dd1dbc0f150daade2d5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?width=1080&crop=smart&auto=webp&s=fc3eb498c7b3f94ef43b6468163a1f36cf9d0203', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yhih5PwKa-ZXXfZXRpN9in-Szwct0zQu7PBeMEwjA0U.png?auto=webp&s=05d903a2f213470d24d7c1759f9e49174de0f04e', 'width': 1200}, 'variants': {}}]} |
The comparison results between Qwen3-Next-80B-A3B and Qwen3-30B-A3B are similar. Why? | 1 | [removed] | 2025-09-19T08:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nky1wf/the_comparison_results_between_qwen3next80ba3b/ | ProfitFeeling216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nky1wf | false | null | t3_1nky1wf | /r/LocalLLaMA/comments/1nky1wf/the_comparison_results_between_qwen3next80ba3b/ | false | false | self | 1 | null |
Found a Handy Study & Exam Resource — Examsprint | 1 | Hey folks, I came across Examsprint and thought it might be useful for students here.
👉 What it offers:
Exam prep resources (notes, practice Qs, tips)
Study-friendly layout (clean, mobile-friendly)
Free access (no annoying popups or ads)
Updated material for different boards/exams
If you’re into local/community study support or helping juniors, this could be worth sharing. Anyone else here tried it out or looking for similar resources?
| 2025-09-19T08:29:45 | Immediate_Video_2850 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nky0js | false | null | t3_1nky0js | /r/LocalLLaMA/comments/1nky0js/found_a_handy_study_exam_resource_examsprint/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'IB2ppBX1UeO6l4jmrwhptMluPLa1mV_v8N4Imn74_QM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=108&crop=smart&auto=webp&s=a048aebb139de7365da7501fca629ee35219eba2', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=216&crop=smart&auto=webp&s=6cf4f8ded1fc26fce28108cdc23eff32b6273d85', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=320&crop=smart&auto=webp&s=ca4c9c3b2c48d263bc5397c2790a1c818bea65ca', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=640&crop=smart&auto=webp&s=ab7fe84a4a949627d6dcc55cab3f334126449fc3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=960&crop=smart&auto=webp&s=55f9837ff9de618a7985b1d188dd72e6f104e9db', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?width=1080&crop=smart&auto=webp&s=5ff8df617565e7eba0deadfc7b9a04425c9f3a34', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/04hhuxy513qf1.jpeg?auto=webp&s=94497796d26c88d43172eade2fbd3b6e455b3383', 'width': 1080}, 'variants': {}}]} | ||
Use VLLM to guard your house | 1 | 2025-09-19T07:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nkxjch/use_vllm_to_guard_your_house/ | LJ-Hao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkxjch | false | null | t3_1nkxjch | /r/LocalLLaMA/comments/1nkxjch/use_vllm_to_guard_your_house/ | false | false | 1 | null | ||
Documents-Parsing-Lab : Exploring OCR & Document AI models on real-world PDFs, tables, and charts | 1 | [removed] | 2025-09-19T07:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nkxich/documentsparsinglab_exploring_ocr_document_ai/ | Mobile-Cartoonist-48 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkxich | false | null | t3_1nkxich | /r/LocalLLaMA/comments/1nkxich/documentsparsinglab_exploring_ocr_document_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=108&crop=smart&auto=webp&s=1ed533a7be66c339b5c0b583703db1956c8cd376', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=216&crop=smart&auto=webp&s=2f0df176df905eb5c7f582492e21ecbbca0081a3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=320&crop=smart&auto=webp&s=898b4730c144f0e144e24650a325903f43245996', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=640&crop=smart&auto=webp&s=869feb5809c5cd0c1e257268da650e39ffa18c00', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=960&crop=smart&auto=webp&s=fefedd2e7c39102038ebfeb529d268e17187b21c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?width=1080&crop=smart&auto=webp&s=454db66a4e8aa76fff9ed7dcdad9ea5242fd12ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0oeL8KVUDiLyZN1M4ZcMjiM7HZASZNfFI62TNGdRVpo.png?auto=webp&s=9dc5fde38be8c0997f81a1e3decdb488c14ab8a2', 'width': 1200}, 'variants': {}}]} |
Everyone’s trying vectors and graphs for AI memory. We went back to SQL. | 245 | When we first started building with LLMs, the gap was obvious: they could reason well in the moment, but forgot everything as soon as the conversation moved on.
You could tell an agent, *“I don’t like coffee,”* and three steps later it would suggest espresso again. It wasn’t broken logic, it was missing memory.
Over the past few years, people have tried a bunch of ways to fix it:
* **Prompt stuffing / fine-tuning** – Keep prepending history. Works for short chats, but tokens and cost explode fast.
* **Vector databases (RAG)** – Store embeddings in Pinecone/Weaviate. Recall is semantic, but retrieval is noisy and loses structure.
* **Graph databases** – Build entity-relationship graphs. Great for reasoning, but hard to scale and maintain.
* **Hybrid systems** – Mix vectors, graphs, key-value, and relational DBs. Flexible but complex.
And then there’s the twist:
**Relational databases! Yes,** the tech that’s been running banks and social media for decades is looking like one of the most practical ways to give AI persistent memory.
Instead of exotic stores, you can (see the sketch after this list):
* Keep short-term vs long-term memory in SQL tables
* Store entities, rules, and preferences as structured records
* Promote important facts into permanent memory
* Use joins and indexes for retrieval
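To make that concrete, here's a minimal sketch of the pattern in plain Python + SQLite. The table names and the promotion rule are illustrative only, not Memori's actual schema or API:

```
import sqlite3

conn = sqlite3.connect("memory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS short_term (
    id INTEGER PRIMARY KEY,
    user_id TEXT NOT NULL,
    fact TEXT NOT NULL,
    mentions INTEGER NOT NULL DEFAULT 1,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    UNIQUE (user_id, fact)
);
CREATE TABLE IF NOT EXISTS long_term (
    id INTEGER PRIMARY KEY,
    user_id TEXT NOT NULL,
    fact TEXT NOT NULL,
    UNIQUE (user_id, fact)
);
CREATE INDEX IF NOT EXISTS idx_long_term_user ON long_term (user_id);
""")

def remember(user_id: str, fact: str) -> None:
    """Record a fact in short-term memory; repeated mentions get promoted to permanent memory."""
    conn.execute(
        "INSERT INTO short_term (user_id, fact) VALUES (?, ?) "
        "ON CONFLICT(user_id, fact) DO UPDATE SET mentions = mentions + 1",
        (user_id, fact),
    )
    # Promotion rule: anything mentioned 3+ times becomes a long-term fact.
    conn.execute(
        "INSERT OR IGNORE INTO long_term (user_id, fact) "
        "SELECT user_id, fact FROM short_term WHERE user_id = ? AND mentions >= 3",
        (user_id,),
    )
    conn.commit()

def recall(user_id: str) -> list[str]:
    """Plain SQL retrieval of permanent facts to inject into the agent's context."""
    rows = conn.execute("SELECT fact FROM long_term WHERE user_id = ?", (user_id,))
    return [fact for (fact,) in rows]

remember("user-1", "does not like coffee")
print(recall("user-1"))
```

The point is that short-term vs long-term memory, promotion, and retrieval all stay ordinary rows, indexes, and queries.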
This is the approach we’ve been working on at **Gibson**. We built an open-source project called [Memori](https://github.com/gibsonai/memori) , a **multi-agent memory engine** that gives your AI agents human-like memory.
It’s kind of ironic, after all the hype around vectors and graphs, one of the best answers to AI memory might be the tech we’ve trusted for 50+ years.
I would love to know your thoughts about our approach! | 2025-09-19T07:17:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nkwx12/everyones_trying_vectors_and_graphs_for_ai_memory/ | Arindam_200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkwx12 | false | null | t3_1nkwx12 | /r/LocalLLaMA/comments/1nkwx12/everyones_trying_vectors_and_graphs_for_ai_memory/ | false | false | self | 245 | {'enabled': False, 'images': [{'id': '5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=108&crop=smart&auto=webp&s=3fb82fa807d6ffbe8bcf55a6f25b45bd80e321f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=216&crop=smart&auto=webp&s=c2c070b0ebca5f8a10f5c5cf94c90815415ecfaa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=320&crop=smart&auto=webp&s=b652a3da10689c1c8d0aacfa835e8912aa52894c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=640&crop=smart&auto=webp&s=bc62c2a4ab81a94a6f6ae513e5aa95c44329d54c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=960&crop=smart&auto=webp&s=a70e3f76d2e48b045106aac1cd3fb2cdc57945c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?width=1080&crop=smart&auto=webp&s=3b85b163601391a3cfc135fa78f0eee826b5e1b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5CIvF7mkfXgexa7qOnPDVB8ezpM2-4q4ovSFJmhz71Y.png?auto=webp&s=c634f424ab1a4a25309fd1996600245e7761942d', 'width': 1200}, 'variants': {}}]} |
Streaming TTS on google colab? | 3 | I'm looking for a TTS that can work with a streaming text from a LLM, and also able to run on colab. I been looking for one but only saw stuff that only work on a laptop/pc and not colab, so i don't know if it even possible. | 2025-09-19T06:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nkwl09/streaming_tts_on_google_colab/ | Kiyumaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkwl09 | false | null | t3_1nkwl09 | /r/LocalLLaMA/comments/1nkwl09/streaming_tts_on_google_colab/ | false | false | self | 3 | null |
Grok 2 anyone? | 0 | I feel a little dirty even bringing it up considering that it came from an org headed by a literal nazi but am still a little curious about it. At 250B it's about the same class as Qwen3 and GLM 4.5, two of the best open source/weight models, but one generation behind which should make for interesting comparisons.
Anyone bother? | 2025-09-19T06:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nkwgew/grok_2_anyone/ | ikkiyikki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkwgew | false | null | t3_1nkwgew | /r/LocalLLaMA/comments/1nkwgew/grok_2_anyone/ | false | false | self | 0 | null |
Unit-test style fairness / bias checks for LLM prompts. Worth building? | 3 | Bias in LLMs doesn't just come from the training data but also shows up at the prompt layer too within applications. The same template can generate very different tones for different cohorts (e.g. job postings - one role such as lawyer gets "ambitious and driven," another such as a nurse gets "caring and nurturing"). Right now, most teams only catch this with ad-hoc checks or after launch.
I've been exploring a way to treat fairness like unit tests (rough sketch after the list below):
• Run a template across cohorts and surface differences side-by-side
• Capture results in a reproducible manifest that shows bias was at least considered
• Give teams something concrete for internal review or compliance contexts (NYC Local Law 144, Colorado AI Act, EU AI Act, etc.)
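To make it concrete, here is a rough sketch of what one such check could look like; the word lists, cohorts, and canned stand-in generator are placeholders, not a real model call or anyone's actual API:

```
# Render the same template for two cohorts and compare which coded adjectives show up.
WARMTH_CODED = {"caring", "nurturing", "supportive"}
AGENCY_CODED = {"ambitious", "driven", "competitive"}

def coded_profile(text: str) -> dict:
    words = {w.strip(".,!").lower() for w in text.split()}
    return {"warmth": sorted(words & WARMTH_CODED), "agency": sorted(words & AGENCY_CODED)}

def check_template(generate, template: str, cohorts: list[str]) -> dict:
    """Render one template across cohorts and return each cohort's coded-adjective profile."""
    return {c: coded_profile(generate(template.format(role=c))) for c in cohorts}

if __name__ == "__main__":
    canned = {  # stand-in for an actual LLM call, just so the sketch runs
        "nurse": "We want a caring, nurturing nurse for our clinic.",
        "lawyer": "We want an ambitious, driven lawyer for our firm.",
    }
    generate = lambda prompt: canned["nurse" if "nurse" in prompt else "lawyer"]
    profiles = check_template(generate, "Write a short job posting for a {role}.", ["nurse", "lawyer"])
    print(profiles)
    if len({str(p) for p in profiles.values()}) > 1:
        print("cohort skew detected; record this in the run manifest")
```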
Curious what you think: is this kind of "fairness-as-code" check actually useful in practice, or how would you change it? How would you actually surface or measure any type of inherent bias in the responses created from prompts? | 2025-09-19T06:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nkwdel/unittest_style_fairness_bias_checks_for_llm/ | The__Bear_Jew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkwdel | false | null | t3_1nkwdel | /r/LocalLLaMA/comments/1nkwdel/unittest_style_fairness_bias_checks_for_llm/ | false | false | self | 3 | null |
Favorite agentic coding llm up to 144GB of vram? | 17 | Hi,
In the past weeks I've been evaluating agentic coding setups on a server with 6x 24 GB GPUs (5x 3090 + 1x 4090).
I'd like a setup that allows me to have inline completion (can be a separate model) and an agentic coder (crush, opencode, codex, ...).
Inline completion isn't really issue I use [https://github.com/milanglacier/minuet-ai.nvim](https://github.com/milanglacier/minuet-ai.nvim) and it just queries openai chat endpoint so if it works it works (almost any model will work with it).
Main issue is agentic coding. So far only setup that worked for me reliably is gpt-oss-120b with llama.cpp on 4x 3090 + codex. I've also tried gpt-oss-120b on vllm but there are tool calling issues when streaming (which is shame since it allows for multiple requests at once).
I've also tried to evaluate (test cases and results here [https://github.com/hnatekmarorg/llm-eval/tree/main/output](https://github.com/hnatekmarorg/llm-eval/tree/main/output) ) multiple models which are recommended here:
- qwen3-30b-* seems to exhibit tool-calling issues both on vllm and llama.cpp, but maybe I haven't found a good client for it. Qwen3-30b-coder (in my tests it's called qwen3-coder-plus since it worked with the qwen client) seems OK but dumber than gpt-oss (which is expected for a 30b vs 60b model), though it does create a pretty frontend
- gpt-oss-120b seems good enough, but if there is something better I can run I am all ears
- nemotron 49b is a lot slower than gpt-oss-120b (expected since it isn't MoE) and for my use case doesn't seem better
- glm-4.5-air seems to be a strong contender, but I haven't had luck with any of the clients I could test
The rest aren't that interesting. I've also tried lower quants of qwen3-235b (I believe it was Q3) and it didn't seem worth it based on speed and quality of response.
So if you have recommendations on how to improve my setup (gpt-oss-120b for agentic + some smaller faster model for inline completions) let me know.
Also, I should mention that I haven't really had time to test these things comprehensively, so if I missed something obvious I apologize in advance.
Also, if that inline completion model could fit into 8GB of VRAM I can run it on my notebook... (maybe something like a smaller qwen2.5-coder with limited context wouldn't be the worst idea in the world)
| 2025-09-19T06:35:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nkw8v6/favorite_agentic_coding_llm_up_to_144gb_of_vram/ | Grouchy_Ad_4750 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkw8v6 | false | null | t3_1nkw8v6 | /r/LocalLLaMA/comments/1nkw8v6/favorite_agentic_coding_llm_up_to_144gb_of_vram/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=108&crop=smart&auto=webp&s=66b8f9bd2a9976229dc51d81de1811558bed13f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=216&crop=smart&auto=webp&s=11fd1215d20aa7399ba52d0c2c08dbd1656f6e73', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=320&crop=smart&auto=webp&s=f9966cf7c00a1deabc70a0921be93d8a4659e3fb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=640&crop=smart&auto=webp&s=5997e67d10bbac77dfbeaccd2ae21711fcc98468', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=960&crop=smart&auto=webp&s=034dd84bc3de4e3c2ff87c63bb506c4b03c48a22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?width=1080&crop=smart&auto=webp&s=9f4ac248af8a9c2340dc07cc6a5e12e32ee6dfa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Iwf9JRlZ5dU1u_zmUtnWK5kCBNPVy7DYwTAKBKaezNo.png?auto=webp&s=f80d008d720cb9fba6737351fbf366fc765c0402', 'width': 1200}, 'variants': {}}]} |
when canva needed domo’s spice | 0 | built a boring slide deck in canva. exported the images, ran them through domo restyle with retro zine vibes. suddenly school project looked like comic flyers. canva = base, domoai = flair. | 2025-09-19T06:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nkvowm/when_canva_needed_domos_spice/ | lailith_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkvowm | false | null | t3_1nkvowm | /r/LocalLLaMA/comments/1nkvowm/when_canva_needed_domos_spice/ | false | false | self | 0 | null |
Gemma 3 27b context shifting not supported in llama.cpp? | 3 | I’ve recently upgraded my VRAM and decided to finally switch to llama.cpp for my inference, and a huge issue with Gemma 3 that I had on ollama is gone now - it doesn’t take half an hour to get to the first token on huge context!
But now I have a different problem:
```
common_init_from_params: KV cache shifting is not supported for this context, disabling KV cache shifting
```
And I’m afraid it’s something I can’t work around. Gemma 3 works just fine while within the context window, but the moment it goes out of bounds, llama.cpp cancels generation.
Is there anything I can do? The only info I could find is a reddit comment saying that SWA is incompatible with context shifting, so, I guess I can’t do anything? | 2025-09-19T05:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nkvkle/gemma_3_27b_context_shifting_not_supported_in/ | ABLPHA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkvkle | false | null | t3_1nkvkle | /r/LocalLLaMA/comments/1nkvkle/gemma_3_27b_context_shifting_not_supported_in/ | false | false | self | 3 | null |
Wow, Moondream 3 preview is goated | 432 | If the "preview" is this great, how great will the full model be? | 2025-09-19T05:48:41 | Brave-Hold-9389 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkvgn0 | false | null | t3_1nkvgn0 | /r/LocalLLaMA/comments/1nkvgn0/wow_moondream_3_preview_is_goated/ | false | false | 432 | {'enabled': True, 'images': [{'id': 'xUR_Rr73-8QFKoiGKfPpF7PS4T1VUkgvYJfvRt2U1Nc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=108&crop=smart&auto=webp&s=109dd33a67670e97c3433a396112fc60c00098c5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=216&crop=smart&auto=webp&s=dec677efdd992e146dd8570c79d9c69a70791219', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=320&crop=smart&auto=webp&s=bee4f395b0b0f82073e4efc51484072660755191', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=640&crop=smart&auto=webp&s=ebab95faf4918729235e9d66f345bf7bf80fbb91', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=960&crop=smart&auto=webp&s=9d99897e3e3b48813df16c5beeec57868ee7bd60', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?width=1080&crop=smart&auto=webp&s=4aa6cbd3ac2250b19a94844a5b5fb80149deb489', 'width': 1080}], 'source': {'height': 2408, 'url': 'https://preview.redd.it/nwfm02if82qf1.jpeg?auto=webp&s=e99a1108048b68d812fe2988a0f12d6b5fce3a44', 'width': 1080}, 'variants': {}}]} | ||
Ideas on AI coding companion (NOT coding assistent) | 1 | I’ve been thinking about building a coding companion that combines Live2D, LLMs, and multimodal interactions. The concept is somewhat similar to *Neuro-sama*, but instead of focusing on gaming, it’s designed to accompany developers while they’re working on projects.
The inspiration came from a Chinese company that created an AI gaming companion for all kinds of games like *GTA* and *Genshin*. Their AI could detect the player’s in-game status, chat with them about relevant topics, and offer suggestions, just like having a real friend sitting beside you.
I want to bring that same idea into programming. I’m not aiming to build a professional coding agent—developers will always turn to tools like ChatGPT or Claude for serious technical help. Nor do I want to replicate existing “AI waifu” projects on GitHub. My focus is on striking a balance between entertainment and utility, creating something that feels like a genuine companion who can keep you company while coding.
My programming skills are solid, and I can handle the backend, frontend, and memory management things. The biggest problem is to come up with a clear **use case**. If you have any ideas, maybe something that is totally different from mine mentioned above, please comment down below :D | 2025-09-19T05:11:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nkuts7/ideas_on_ai_coding_companion_not_coding_assistent/ | Overall_Two_2447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkuts7 | false | null | t3_1nkuts7 | /r/LocalLLaMA/comments/1nkuts7/ideas_on_ai_coding_companion_not_coding_assistent/ | false | false | self | 1 | null |
Depth upscaling? | 0 | I was and still am incredibly fascinated with the concept of "Depth Upscaling" (DUS) and how the solar model felt really smart especially considering it only had around 11b parameters Given that most of us do not have the hardware or budget to pretrain models at home, I was never able to try it in practice for myself. Just now while browsing huggingface, I discovered this beauty: https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509/tree/main. At first glance, it looks like just another llama 3 finetune but if you squint a little closer, the description says that it was pretrained on 15T tokens. Now, whether that means continal pretraining on the existing base model, or pretrained weights from scratch just using the llama 3 architecture is unclear but either way, it is clear that this model has in some way or another been pretrained on 15T tokens that the original llama 3 has not been. That being said, I was thinking, what if we went the DUS route with this model and the original version of llama 3 (remove last 8 layers of one of the models and first 8 layers of the other model and stitch them together) and then simply finetune this stitched together model on a very large and comprehensive dataset? I''m thinking this could work because the would-be duplicate weights are already different and trained on new data so all that would need to be done is heavy duty finetuning to align all the weights to work together. Does anybody more experienced in the field have anything to say about this? I feel like this model is almost a free ticket to a far larger llama 3 architecture with more training. I want to give this a try but I was hoping someone with more experience could tell me if I would be wasting my time or not. Thanks all. | 2025-09-19T04:55:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nkuj6z/depth_upscaling/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkuj6z | false | null | t3_1nkuj6z | /r/LocalLLaMA/comments/1nkuj6z/depth_upscaling/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=108&crop=smart&auto=webp&s=3fb1f71e89eea175a5ad0701bcb97fb17f89c367', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=216&crop=smart&auto=webp&s=98697356e9d0626f9044b2f06e264d3d7a34932e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=320&crop=smart&auto=webp&s=f0f399c2408bb0050b1abd007678f27669e737de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=640&crop=smart&auto=webp&s=de42fe04f24602c9c12af1fc30a2c850aaf4a289', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=960&crop=smart&auto=webp&s=a45244ed202d13123537089caa70c628e51fb974', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?width=1080&crop=smart&auto=webp&s=636b16adce7c363387f84236cabac59ec51e3a98', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9GgDYkPT-Gxip2OFhO7vxaxnJ4YtO6DF4-8qcJNHN4M.png?auto=webp&s=328523f5a243719f2703d0f31d5c2c191edb48e3', 'width': 1200}, 'variants': {}}]} |
NVIDIA + Intel collab means better models for us locally | 17 | I think this personal computing announcement directly implies they’re building unified memory similar to Apple devices
https://newsroom.intel.com/artificial-intelligence/intel-and-nvidia-to-jointly-develop-ai-infrastructure-and-personal-computing-products | 2025-09-19T04:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nkud68/nvidia_intel_collab_means_better_models_for_us/ | ChipCrafty4327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkud68 | false | null | t3_1nkud68 | /r/LocalLLaMA/comments/1nkud68/nvidia_intel_collab_means_better_models_for_us/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=108&crop=smart&auto=webp&s=3f62a6499e5d21d7f3de6655baa69ada657f55c9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=216&crop=smart&auto=webp&s=c8412be3b94a1ac8dd294126d2efb0b18864f35b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=320&crop=smart&auto=webp&s=99e05a6d268a656047c36d33050c81c9ec550477', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=640&crop=smart&auto=webp&s=3d0ff1651a82b7c67f58c3d3ad4463415ea5db23', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=960&crop=smart&auto=webp&s=6719645cd77281d8285cb4946c007d0b174e1be1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?width=1080&crop=smart&auto=webp&s=732741bf95811fd2102250792baed1e8c9b21cf2', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/NFgrUkd8cvY6wu9MojRIISpVL-PMVNB83Uxn_op60Js.jpeg?auto=webp&s=d3ff87009ceb4afaeb5683a5b57056be7482a236', 'width': 2560}, 'variants': {}}]} |
What have you found to be the most empathetic/conversational <96GB local model? | 3 | I'm doing some evaluations in consideration for experimenting with a personal companion/journal, and am curious what folks have found to be the most conversational, personable, and empathetic/high-EQ model under 96GB. gemma3:27b has been pretty solid in my testing, and the Dolphin Venice Mistral tune is exceptional in flexibility but is kinda resistant to system prompting sometimes. I haven't sunk much time into qwq:32b but it got solid scores on EQBench so ??? Maybe I should look into that next.
I've got 48GB VRAM, 64GB DDR5, so <96GB is ideal for decent speed (and 30B models that can be all VRAM are delightful, but I'm looking for quality over speed here).
What are your favorite companion/conversational models for local? Would love to hear thoughts and experiences.
| 2025-09-19T04:45:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nkuc4u/what_have_you_found_to_be_the_most/ | CharlesStross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkuc4u | false | null | t3_1nkuc4u | /r/LocalLLaMA/comments/1nkuc4u/what_have_you_found_to_be_the_most/ | false | false | self | 3 | null |
RTX 3080 10gb vs M4 pro 24gb for LocalLLM | 1 | Hello!
I just got permission to use a local LLM for my work, to help with coding in VSCode using the Continue extension.
I have two platforms as I mentioned, an 3080 and a MBP M4 pro with 24gb unified memory. I am currently setting up work pc and appreciate the responses and tips if you guys have any! | 2025-09-19T04:41:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nkua0k/rtx_3080_10gb_vs_m4_pro_24gb_for_localllm/ | exivor01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkua0k | false | null | t3_1nkua0k | /r/LocalLLaMA/comments/1nkua0k/rtx_3080_10gb_vs_m4_pro_24gb_for_localllm/ | false | false | self | 1 | null |
SongBloom : Open-Sourced model for full song generation with vocals and instruments, 2 minute> audios | 1 | Developed by researchers from **CUHK, Tencent, and Nanjing University**
Generates **coherent** full-length songs
Includes lyrics, vocals, instruments
Creates intros, choruses, outros
Produces songs **2+ minutes** long
Huggingface : [https://huggingface.co/CypressYang/SongBloom](https://huggingface.co/CypressYang/SongBloom)
Official blog : [https://cypress-yang.github.io/SongBloom\_demo/](https://cypress-yang.github.io/SongBloom_demo/)
Video demo : [https://www.youtube.com/watch?v=x\_5y3GPjNcA&t=2s](https://www.youtube.com/watch?v=x_5y3GPjNcA&t=2s)
| 2025-09-19T04:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1nku8hb/songbloom_opensourced_model_for_full_song/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nku8hb | false | null | t3_1nku8hb | /r/LocalLLaMA/comments/1nku8hb/songbloom_opensourced_model_for_full_song/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=108&crop=smart&auto=webp&s=fe6d85a475e3d60ba03ce36f13effb312d26b611', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=216&crop=smart&auto=webp&s=03f7e8d7b3a210779b1494f9991594eb5181b356', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=320&crop=smart&auto=webp&s=0d1b9bea2b073a2bccebba475797079b8f6b5243', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=640&crop=smart&auto=webp&s=b4a7ed8cae82d55cd82db34b1105202dd08e510f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=960&crop=smart&auto=webp&s=48027815f0258934404bcfece8753d6045abe306', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?width=1080&crop=smart&auto=webp&s=b431029eaf33f65a7b8a90c8cf700c789bf76baa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8kBUKotVk1MdYn1G2e50R_3DFd4CFoPZ68RHxREyxDw.png?auto=webp&s=db03894f85957f2221326d7bf82ed46d540929ea', 'width': 1200}, 'variants': {}}]} |
Open source Voice AI Agents | 8 | Hello!
Are there any ready-to-go open source voice AI agents/pipelines like 11Labs' AI Agents?
I've found intervo.ai, but it seems dead. I also know about LiveKit, but this one not Ready-to-go at all. | 2025-09-19T04:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nktpac/open_source_voice_ai_agents/ | MrCrabPhantom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nktpac | false | null | t3_1nktpac | /r/LocalLLaMA/comments/1nktpac/open_source_voice_ai_agents/ | false | false | self | 8 | null |
Frustrated by inability to perform simple human tasks | 0 | I love LLMs but I am frustrated I can't get any to do the following simple human task. I want to summarize the plays that are either currently on or upcoming in my area. For each of them I want any published star ratings along with the source of the rating.
Can any local model do this? | 2025-09-19T04:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nktkcx/frustrated_by_inability_to_perform_simple_human/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nktkcx | false | null | t3_1nktkcx | /r/LocalLLaMA/comments/1nktkcx/frustrated_by_inability_to_perform_simple_human/ | false | false | self | 0 | null |
New Wan MoE video model | 185 | Wan AI just dropped this new MoE video diffusion model: Wan2.2-Animate-14B | 2025-09-19T03:57:51 | https://huggingface.co/Wan-AI/Wan2.2-Animate-14B | edward-dev | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nktfxl | false | null | t3_1nktfxl | /r/LocalLLaMA/comments/1nktfxl/new_wan_moe_video_model/ | false | false | 185 | {'enabled': False, 'images': [{'id': 'TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=108&crop=smart&auto=webp&s=1cabb3203dd2428008dcf71b7e06f5b9b6c4df14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=216&crop=smart&auto=webp&s=ae819b61aa8d49f02e2578da0894ca4f503f4332', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=320&crop=smart&auto=webp&s=57ab244b8615543ce8dbf21474e3fca8eb97f91e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=640&crop=smart&auto=webp&s=6de68752b1ead1487008f27659ea654e42269c7e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=960&crop=smart&auto=webp&s=af9b3d39bf816b5797195a3236a91fe205b28263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?width=1080&crop=smart&auto=webp&s=2d31ca2e4411e8a3822e63e07bcca4d657a89e22', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TgMeHU4GJUa5aR0M3117isJqdoSEY-Q0uxO6S138yuw.png?auto=webp&s=875e65e70bdc128c63d74017daecdf52118b5b09', 'width': 1200}, 'variants': {}}]} | |
Name of this model | 0 | Name of this model | 2025-09-19T03:54:04 | Ravi1802 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nktdab | false | null | t3_1nktdab | /r/LocalLLaMA/comments/1nktdab/name_of_this_model/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jo4z22czn1qf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=108&crop=smart&auto=webp&s=3bdf26bb66678080f8d89ccb63408841c3ce4165', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=216&crop=smart&auto=webp&s=19d20086ab9bb465479c395cc9c2bad724ba5f21', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=320&crop=smart&auto=webp&s=820d7368029a2a4f0865594ebc7aab1838ad7fa9', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=640&crop=smart&auto=webp&s=a4b986ff33603538efe7833a914db27c256226c9', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=960&crop=smart&auto=webp&s=b99de9c749f9218bbf4d26137af165f3caf51938', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?width=1080&crop=smart&auto=webp&s=8c9df774186c4f25e2b2303747d64573572e3757', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/jo4z22czn1qf1.jpeg?auto=webp&s=d8f1c749641ab558c984156e317fb98f3543eadd', 'width': 1080}, 'variants': {}}]} | |
Serving API for personal use?? | 0 | HI, what service can I use to make an API to use uncensored model for personal private use like lambda AI vastai runpod etc??? I want it to be an API and id like to serve custom API tool not something super premade so I can either call it from python or call it from my discord bot. Thanks… | 2025-09-19T03:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nkt4ju/serving_api_for_personal_use/ | KaouSakura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkt4ju | false | null | t3_1nkt4ju | /r/LocalLLaMA/comments/1nkt4ju/serving_api_for_personal_use/ | false | false | self | 0 | null |
What's your favorite all-rounder stack? | 8 | I've been a little curious about this for a while now, if you wanted to run a single server that could do a little of everything with local LLMs, what would your combo be? I see a lot of people mentioning the downsides of ollama, when other ones can shine, preferred ways to run MCP servers or other tool servicesfor RAG, multimodal, browser use, and and more, so rather than spending weeks comparing them by just throwing everything I can find into docker, I want to see what you all consider to be the best services that can allow you to do damn near everything without running 50 separate services to do it. My appreciation to anyone's contribution to my attempt at relative minimalism. | 2025-09-19T02:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nkrau1/whats_your_favorite_allrounder_stack/ | SocietyTomorrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkrau1 | false | null | t3_1nkrau1 | /r/LocalLLaMA/comments/1nkrau1/whats_your_favorite_allrounder_stack/ | false | false | self | 8 | null |
Qwen3-Next experience so far | 156 | I have been using this model as my primary model, and it's safe to say the benchmarks don't lie.
This model is amazing. I have been comparing it against a mix of GLM-4.5-Air, gpt-oss-120b, Llama 4 Scout and Llama 3.3.
And it's safe to say it beat them by a good margin. I used both the thinking and instruct versions for multiple use cases: mostly coding, summarizing & writing, RAG and tool use.
I am curious about your experiences aswell. | 2025-09-19T02:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nkr6op/qwen3next_experience_so_far/ | Daemontatox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkr6op | false | null | t3_1nkr6op | /r/LocalLLaMA/comments/1nkr6op/qwen3next_experience_so_far/ | false | false | self | 156 | null |
ModernBERT for financial domain | 4 | Fin-ModernBERT is a domain-adapted pretrained language model for the **financial domain**, obtained by continual pretraining of [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) with a **context length of 1024 tokens** on large-scale finance-related corpora.
[Fin-ModernBERT](https://huggingface.co/clapAI/Fin-ModernBERT)
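A quick way to poke at it locally is the standard transformers fill-mask pipeline (this assumes the released checkpoint keeps ModernBERT's MLM head; the example sentence is made up):

```python
# Quick sanity check with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="clapAI/Fin-ModernBERT")
for pred in fill("The company reported a sharp [MASK] in quarterly revenue."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```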
| 2025-09-19T01:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nkqz4j/modernbert_for_financial_domain/ | hungnm009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkqz4j | false | null | t3_1nkqz4j | /r/LocalLLaMA/comments/1nkqz4j/modernbert_for_financial_domain/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=108&crop=smart&auto=webp&s=bab16fdbc54f2f24ee0e30bc7280216211b057c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=216&crop=smart&auto=webp&s=37e9202d058f8341a128574de3ea265dd2653866', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=320&crop=smart&auto=webp&s=047ae412dedb5ca223c7ccba0b837a4683dc3663', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=640&crop=smart&auto=webp&s=0010dd3d6cab5bf7c1ccc5aa849b6f9ea84368f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=960&crop=smart&auto=webp&s=74a827f999fac0d127ad62c9db7bcb6a9ca90ffd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?width=1080&crop=smart&auto=webp&s=0c6d98e768ca3a6428a2174ba3a35b24710264f3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iuwZRbPUy1HTAXteDMQohDvRsb7U_InfZ-bqT99kqR8.png?auto=webp&s=0c25ca8ac8d43f5990adf262307f0d644ffb9808', 'width': 1200}, 'variants': {}}]} |
Improved performance for Qwen3-next 80B on MLX, now on LM Studio (v0.27.1) | 1 | Thanks to Gated-Delta Fused Kernel (Qwen3Next) - [https://github.com/ml-explore/mlx-lm/pull/454](https://github.com/ml-explore/mlx-lm/pull/454)
| 2025-09-19T01:31:54 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkqffd | false | null | t3_1nkqffd | /r/LocalLLaMA/comments/1nkqffd/improved_performance_for_qwen3next_80b_on_mlx_now/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6659pzv2y0qf1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?width=108&crop=smart&auto=webp&s=71265bed544ca5e86017e32cb8688df99e92e74b', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?width=216&crop=smart&auto=webp&s=582a4fd2297a0ddbe47af13b5e90643f2180d156', 'width': 216}, {'height': 356, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?width=320&crop=smart&auto=webp&s=0933c96e772e058f3e2e5081da5c24823e0f58ff', 'width': 320}, {'height': 712, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?width=640&crop=smart&auto=webp&s=a763a0bf9866a815b2806bc96353165c529a4219', 'width': 640}, {'height': 1068, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?width=960&crop=smart&auto=webp&s=92fcb9e6a73375ce12763be6e1920d7393c48c7e', 'width': 960}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/6659pzv2y0qf1.jpeg?auto=webp&s=68b6bf30a21e050c3cf8deca7e4c2c43ef61033f', 'width': 1078}, 'variants': {}}]} | |
Anyone have access to the Nemotron Dataset(s)? | 4 | Hi guys, idk what happened but for some reason I got denied access to the nemotron pretraining datasets (the sft and the code ones). I used my institutional email address as requested idk what happened. Was wondering if anyone had torrents or a mirror of them they wouldn’t mind sharing. Thanks | 2025-09-19T01:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/1nkq84j/anyone_have_access_to_the_nemotron_datasets/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkq84j | false | null | t3_1nkq84j | /r/LocalLLaMA/comments/1nkq84j/anyone_have_access_to_the_nemotron_datasets/ | false | false | self | 4 | null |
I can can get GPUs as a tax write off. Thinking of doubling down on my LLM/ML learning adventure by buying one or two RTX 6000 pros. | 30 | I was having a lot of fun a few months back learning graph/vector based RAG. Then work unloaded a ridiculous level of work. I started by trying to use my ASUS M16 with a 4090 for local 3b models. It didn't work as I hoped. Now I'll probably sell the thing to build a local desktop rig that I can remotely use across the world (original reason I got the M16).
Reason I want it:
1. Over the last two years I've taken it upon myself to start future proofing my career. I've learn IoT, game development, and now mostly LLMs. I want to also learn how to do things like object detection.
2. It's a tax write off.
3. If I'm jobless I don't have to pay cloud costs and I have something I can liquidate if need be.
4. It would expand what I could do startup wise. **(Most important reason)**
So my question is, what's the limit of one or two RTX 6000 Pro Blackwells? Would I be able to essentially do any RAG, Object detection, or ML like start up? What type of accuracy could I hope to accomplish with a good RAG pipeline and the open source models that'd be able to run on one or two of these GPUs? | 2025-09-19T00:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nkpohe/i_can_can_get_gpus_as_a_tax_write_off_thinking_of/ | Tired__Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkpohe | false | null | t3_1nkpohe | /r/LocalLLaMA/comments/1nkpohe/i_can_can_get_gpus_as_a_tax_write_off_thinking_of/ | false | false | self | 30 | null |
Poor man's LLM server | 2 | 2025-09-18T23:44:41 | rayzinnz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nko504 | false | null | t3_1nko504 | /r/LocalLLaMA/comments/1nko504/poor_mans_llm_server/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'yoy5eknhf0qf1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=108&crop=smart&auto=webp&s=af60334879ed7b4c1be9c82f89985c772712eef4', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=216&crop=smart&auto=webp&s=016ad97997f3ecbfca7c2abe020f1cb7b8a83e12', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=320&crop=smart&auto=webp&s=cf64cbd5798c45d0925ee74a046b37a9fe3c9912', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=640&crop=smart&auto=webp&s=89b557c403443949ba7e7afb2f5aac1afe0d298e', 'width': 640}, {'height': 856, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=960&crop=smart&auto=webp&s=f826aa4e9ccee62cfe9f5b0ed9d4c02424fff64e', 'width': 960}, {'height': 963, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?width=1080&crop=smart&auto=webp&s=8f1f6da8b196cb20c213e15a291e93ce0fce7138', 'width': 1080}], 'source': {'height': 2294, 'url': 'https://preview.redd.it/yoy5eknhf0qf1.jpeg?auto=webp&s=6c5c9e27ecc250cade455df30a48e6ed3598eb80', 'width': 2572}, 'variants': {}}]} | ||
System prompt to make a model help users guess its name? | 31 | I’m working on this bot (you can find it in the /r/LocalLLaMa Discord server) that plays a game asking users to guess which model it is. My system prompt asks the model to switch to riddles if the user directly asks for its identity, because that’s how some users may choose to play the game. But what I’m finding is that the riddles are often useless because the model doesn’t know its own identity (or it is intentionally lying).
Case in point, Mistral designing an elaborate riddle about itself being made by Google: https://whichllama.com/?share=SMJXbCovucr8AVqy (why?!)
Now, I can plug the true model name into the system prompt myself, but that is either ignored by the model or used in a way that makes it too easy to guess. Any tips on how I can design the system prompt to balance between being too easy and difficult? | 2025-09-18T23:24:54 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nknpgd | false | null | t3_1nknpgd | /r/LocalLLaMA/comments/1nknpgd/system_prompt_to_make_a_model_help_users_guess/ | false | false | default | 31 | {'enabled': True, 'images': [{'id': 'shq50qtyb0qf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=108&crop=smart&auto=webp&s=3afb49b28141ec8c24206209ad149b295d9e19aa', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=216&crop=smart&auto=webp&s=abddb060dcf3e1ff40300148f82bbec412cdbd1a', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=320&crop=smart&auto=webp&s=76e611f38cc9012d4815049f919efde4d3fe3048', 'width': 320}, {'height': 458, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=640&crop=smart&auto=webp&s=dcdc6bd12cac219bcc5a1cf9ba84a06b87fe4e3d', 'width': 640}, {'height': 687, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=960&crop=smart&auto=webp&s=5a7bcdb0eda23a5634c31928a8f217ab129869ab', 'width': 960}, {'height': 773, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?width=1080&crop=smart&auto=webp&s=b46f24e31f2d5006a4580d2f1f0d5f7bfa606bd0', 'width': 1080}], 'source': {'height': 1401, 'url': 'https://preview.redd.it/shq50qtyb0qf1.jpeg?auto=webp&s=4c50f9842e5ca5d0f094adbead6abd474fdc5f28', 'width': 1956}, 'variants': {}}]} | |
[Research] Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens | 13 | I thought this would be relevant for us here in local llama, since reasoning models are coming into fashion for local inference, with the new GPT OSS models and friends (and that reflexion fiasco; for those that remember) | 2025-09-18T23:19:10 | https://arxiv.org/abs/2508.01191 | Confident-Honeydew66 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1nknkyr | false | null | t3_1nknkyr | /r/LocalLLaMA/comments/1nknkyr/research_is_chainofthought_reasoning_of_llms_a/ | false | false | default | 13 | null |
Vllm with mistral small 3.2 | 1 | Hi, I have a VM with Ubuntu running vLLM with Unsloth Mistral Small (tried 3.2 GGUF and 3.1 AWQ). Previously I ran the same 3.2 in Ollama. Running on an NVIDIA L4 24GB.
The problem is that inference speed is much slower in vLLM for some reason. Context is around 500 tokens and output around 100.
What am I missing here? Does someone have some tips about vllm performance?
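For reference, this is roughly the kind of offline-engine timing I'd use to separate serving overhead from raw model speed (the AWQ repo name below is a placeholder, not necessarily the exact weights I'm running):

```python
# Minimal offline-engine timing with vLLM on the AWQ path.
import time
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-org/Mistral-Small-3.1-24B-Instruct-AWQ",  # placeholder repo id
    quantization="awq",
    gpu_memory_utilization=0.90,
    max_model_len=8192,
)
params = SamplingParams(temperature=0.0, max_tokens=100)

t0 = time.time()
out = llm.generate(["Summarize what paged attention does."], params)
elapsed = time.time() - t0
n = len(out[0].outputs[0].token_ids)
print(f"{n} tokens in {elapsed:.2f}s -> {n / elapsed:.1f} tok/s")
```

One caveat I'm aware of: as far as I know vLLM's GGUF support is still experimental and tends to be slower than a proper AWQ/GPTQ checkpoint, so GGUF-in-vLLM vs Ollama may not be an apples-to-apples comparison.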
Thank you | 2025-09-18T23:11:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nknen7/vllm_with_mistral_small_32/ | justlows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nknen7 | false | null | t3_1nknen7 | /r/LocalLLaMA/comments/1nknen7/vllm_with_mistral_small_32/ | false | false | self | 1 | null |
Moondream 3 (Preview) -- hybrid reasoning vision language model | 114 | 2025-09-18T22:24:59 | https://huggingface.co/moondream/moondream3-preview | radiiquark | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nkmc7z | false | null | t3_1nkmc7z | /r/LocalLLaMA/comments/1nkmc7z/moondream_3_preview_hybrid_reasoning_vision/ | false | false | default | 114 | {'enabled': False, 'images': [{'id': '4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=108&crop=smart&auto=webp&s=7f4bc05396c9eef82562c7442117229573183441', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=216&crop=smart&auto=webp&s=d8dd49bb5ef923de3002c7c868f31d8575fcbfe2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=320&crop=smart&auto=webp&s=f7c46f191556344c85c1241c576345dcdf6d8af1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=640&crop=smart&auto=webp&s=690d6b125016267d773d6fd42ebb4a21aff8aca7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=960&crop=smart&auto=webp&s=a333965f7aea282e211f0de4c1db78250e455977', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?width=1080&crop=smart&auto=webp&s=9864136890df3931fa0ca0827102716f63aea44d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4djziNvQ2zvOfJv3_xVajpCMtf-Z4Exi5Qyi8qcyMmc.png?auto=webp&s=61ab2c2503315d453ac85598d9afede3a39200f9', 'width': 1200}, 'variants': {}}]} | |
Could small language models (SLMs) be a better fit for domain-specific tasks? | 4 | Hi everyone! Quick question for those working with AI models: do you think we might be over-relying on large language models even when we don’t need all their capabilities? I’m exploring whether there’s a shift happening toward using smaller, more niche-focused models SLMs that are fine-tuned just for a specific domain. Instead of using a giant model with lots of unused functions, would a smaller, cheaper, and more efficient model tailored to your field be something you’d consider? Just curious if people are open to that idea or if LLMs are still the go-to for everything. Appreciate any thoughts! | 2025-09-18T22:15:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nkm44c/could_small_language_models_slms_be_a_better_fit/ | Money-Psychology6769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkm44c | false | null | t3_1nkm44c | /r/LocalLLaMA/comments/1nkm44c/could_small_language_models_slms_be_a_better_fit/ | false | false | self | 4 | null |
Local real-time assistant that remembers convo + drafts a doc | 12 | I wired up a local “brainstorming assistant” that keeps memory of our chat and then writes a Google doc based on what we talked about.
Demo was simple:
1. Talked with it about cats.
2. Asked it to generate a doc with what we discussed.
Results: it dropped a few details, but it captured the main points surprisingly well. Not bad for a first pass. Next step is wiring it up with an MCP so the doc gets written continuously while we talk instead of at the end.
Excited to test this on a longer conversation. | 2025-09-18T21:56:53 | https://v.redd.it/g7qj66kitzpf1 | Weary-Wing-6806 | /r/LocalLLaMA/comments/1nklnqi/local_realtime_assistant_that_remembers_convo/ | 1970-01-01T00:00:00 | 0 | {} | 1nklnqi | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/g7qj66kitzpf1/DASHPlaylist.mpd?a=1760954218%2CYWFjMGVkZjk4NjU1MmE5NzJiNWRjM2RjM2NhNTliYWZkZGNjYTdlMDIxNTM2MzJiMjM2ZDQ5OGNkNWI3NTI0Yw%3D%3D&v=1&f=sd', 'duration': 108, 'fallback_url': 'https://v.redd.it/g7qj66kitzpf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/g7qj66kitzpf1/HLSPlaylist.m3u8?a=1760954218%2CNzI3MjU0MTFjYjk5N2YwZTgzNTZiMTAwNmY4NWU5NWM1ODY0MWU3NjRkYjI4MzYwZmVlODZlNWQzZjMxM2Y3NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/g7qj66kitzpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1nklnqi | /r/LocalLLaMA/comments/1nklnqi/local_realtime_assistant_that_remembers_convo/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=108&crop=smart&format=pjpg&auto=webp&s=ca33bf8405b3dc5bd404f2c8928d940835ccd33a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=216&crop=smart&format=pjpg&auto=webp&s=dd533dcb2f41c4a5e8ba3177d5695db3e13a5615', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=320&crop=smart&format=pjpg&auto=webp&s=c80a12b05dff6450ac0721955d1eb5a5a1b1f8f6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=640&crop=smart&format=pjpg&auto=webp&s=f1ef34eb212fa4cc5385fb7fb7e39f6ba393d9f1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=960&crop=smart&format=pjpg&auto=webp&s=66c43807a20b6c07d838f9a16235832f97bb12d6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=00b297e4c7c72830f4ffe3b11fcea2da8fa7a2d9', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cDVmYTE2a2l0enBmMX1sjDttlx_mQ45DeDKL-DubvjOdCDAQ-7LC5FNwPoBD.png?format=pjpg&auto=webp&s=543bb048ccdf115e3d90c5ce7646d0f233d9a04c', 'width': 1280}, 'variants': {}}]} | |
How can I control emotions/tone in Higgs Audio — can I make it be sad at the start and happy at the end? | 0 | Hey everyone — quick question about Higgs Audio: is it possible to control emotions within a single input (for example: sad at the start, neutral in the middle, then happy at the end)? If yes, how do you do it in practice? Can you guys give a example? And if this is not possible with higgs then are there any moddels that are capabile of doing such task | 2025-09-18T21:22:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nkksml/how_can_i_control_emotionstone_in_higgs_audio_can/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkksml | false | null | t3_1nkksml | /r/LocalLLaMA/comments/1nkksml/how_can_i_control_emotionstone_in_higgs_audio_can/ | false | false | self | 0 | null |
Decart-AI releases “Open Source Nano Banana for Video” | 138 | We are building “Open Source Nano Banana for Video” - here is open source demo v0.1
We are open sourcing Lucy Edit, the first foundation model for text-guided video editing!
Lucy Edit lets you prompt to try on uniforms or costumes - with motion, face, and identity staying perfectly preserved
Get the model on @huggingface 🤗, API on @FAL, and nodes on @ComfyUI 🧵
X post: https://x.com/decartai/status/1968769793567207528?s=46
Hugging Face: https://huggingface.co/decart-ai/Lucy-Edit-Dev
Lucy Edit Node on ComfyUI: https://github.com/decartAI/lucy-edit-comfyui
| 2025-09-18T21:09:20 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkkghp | false | null | t3_1nkkghp | /r/LocalLLaMA/comments/1nkkghp/decartai_releases_open_source_nano_banana_for/ | false | false | 138 | {'enabled': True, 'images': [{'id': '5Ufq-8-sVIe9gs2Ea19Xt0BvSMm2DG5sKDXfqYeq2vs', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=108&crop=smart&auto=webp&s=448ae3bf251dad89bc99f1e9fc18517ab4c3fc23', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=216&crop=smart&auto=webp&s=489489f13399a6c356f3176b6e1b6c11a4a651f5', 'width': 216}, {'height': 425, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=320&crop=smart&auto=webp&s=c648ba214e3f239b3107cce02a7e25c7f96bc8e4', 'width': 320}, {'height': 851, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=640&crop=smart&auto=webp&s=2b42197a4268b3f80790f0506f12d7d6cfc5a4bb', 'width': 640}, {'height': 1276, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=960&crop=smart&auto=webp&s=ec8c7637676daf15d6cc42f229da7a5cf78959b7', 'width': 960}, {'height': 1436, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?width=1080&crop=smart&auto=webp&s=0220fb1542b5b02839ba9b3f70840ab3875202bb', 'width': 1080}], 'source': {'height': 1652, 'url': 'https://preview.redd.it/eisyod0snzpf1.jpeg?auto=webp&s=7dda09c5f2d1dfd6f1540a6e3c2938545df91e51', 'width': 1242}, 'variants': {}}]} | ||
What are the best current text "humanization" methods/models? | 1 | I've been loosely following the evolution of AI-detection methods, along with the various subsequent websites that have emerged offering it as a service. From what I can tell, the main methods are:
1. Token-rank and entropy signals (histogram of top-k ranks, perplexity; a quick sketch of this one follows the list);
2. Curvature of log-probability (https://arxiv.org/abs/2301.11305); and
3. Stylometry, or NLP-based detection of part-of-speech patterns, punctuation rhythms, etc., mixed with BERT/RoBERTa variants.
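Since (1) is the easiest to prototype locally, here's a rough sketch of the core signal (not any particular detector's actual code; any small causal LM works for a demo):

```python
# Perplexity of a passage under a small causal LM, the raw signal behind (1).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token NLL
    return torch.exp(loss).item()

# Unusually low perplexity relative to typical human prose is (weak) evidence
# of machine generation; humanizers effectively try to push this back up.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```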
Then there's also watermarking (https://deepmind.google/science/synthid/), which is related but slightly different, if only in the sense that *you know* you don't need to de-watermark if you're using a model that doesn't add a watermark.
I initially considered the AI-detection sites that popped up to be snake-oil taking advantage of desperate teachers, etc. but there seems to be serious research behind it now.
At the same time, I've seen a few models on Hugging Face that claim to humanize text with what seems to be either something analogous to ablation models (https://huggingface.co/spaces/Farhan1572/Humanizer) or standard fine-tuning in order to produce a derivative model with a different probabilistic token signature. But there doesn't seem to be very much here yet.
Does anyone know what the latest "humanization" techniques are? Of course there is always the close relatedness of detection and evasion, so the literature on detection counts to a degree, but there seems to be much less out there directly dealing with humanization. | 2025-09-18T20:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nkk79t/what_are_the_best_current_text_humanization/ | louis-debroglie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkk79t | false | null | t3_1nkk79t | /r/LocalLLaMA/comments/1nkk79t/what_are_the_best_current_text_humanization/ | false | false | self | 1 | null |
I built APM – a package manager for agent workflows and context (works with local LLaMA too) | 6 | I’ve been hacking on a side project I’m calling **APM (Agent Package Manager)**. The idea is simple: treat agent prompts + workflows like code packages you can version, share, and reuse.
My pain point was always the same:
– I’d write complex prompt/rule files, but couldn’t easily reuse them in another project.
– Teammates would reinvent the wheel with their own context configs.
– There wasn’t a clean way to compose/share agent workflows across different models.
With APM you can:
– `apm init` to create a package of rules/prompts
– Publish it to GitHub
– Pull it into any project with `apm install`
– Compile everything into optimized context with `apm compile`
– And then run workflows against compatible Agent CLIs (Codex today) with `apm run`
**Local angle**: it’s built on Codex CLI under the hood \`apm runtime setup codex\`, which you can configure to point to your own LLaMA instance. So you can spin up workflows locally without touching cloud APIs. The architecture supports extending to other CLIs.
Repo: [github.com/danielmeppiel/apm](https://github.com/danielmeppiel/apm)
I’d love if someone here with a local LLaMA setup could try it out and tell me if it actually makes context management/sharing easier. Early days, so any feedback (bugs, feature ideas) is gold. | 2025-09-18T20:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nkk5tv/i_built_apm_a_package_manager_for_agent_workflows/ | Amazing_Midnight_813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkk5tv | false | null | t3_1nkk5tv | /r/LocalLLaMA/comments/1nkk5tv/i_built_apm_a_package_manager_for_agent_workflows/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=108&crop=smart&auto=webp&s=b76222e3b45de8b4f25c07d01cbfb93ec498b01a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=216&crop=smart&auto=webp&s=df62da0d39ed8d89ba13a99d7b19579bc1116487', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=320&crop=smart&auto=webp&s=063a51576c01732bebb4900996fce93f499ef61a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=640&crop=smart&auto=webp&s=4ea0cb67090a5bef8a3b4925a802e87a270c72c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=960&crop=smart&auto=webp&s=146534231c0e093558e1a757acbc4dc112bebf54', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?width=1080&crop=smart&auto=webp&s=0e522fce2822e81961cc831b0d72f413161a2844', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xguMc4QbyVyC-YFGUkERRZvoqyunAzwGK5aWn9RbY0o.png?auto=webp&s=523c9e1d7dee76bd67f1f80ad83724b0203117af', 'width': 1200}, 'variants': {}}]} |
Model: Qwen3 Next Pull Request llama.cpp | 183 | 2025-09-18T20:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nkjpu3/model_qwen3_next_pull_request_llamacpp/ | Loskas2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkjpu3 | false | null | t3_1nkjpu3 | /r/LocalLLaMA/comments/1nkjpu3/model_qwen3_next_pull_request_llamacpp/ | false | false | 183 | null | ||
Local LLM in Github Copilot, Agent mode | 4 | I am pretty new to this whole local LLM world. I thought I'd give it a bash with Ollama, and got it all up and running in Docker on Ubuntu 25.04 with a 4070 Ti.
I've nipped into the VSCode GitHub Copilot extension, selected the models, gone into agent mode, and I can get the models to respond (mostly), but none of them seem to be able to write/edit files like the cloud models do.
Am I missing something here?
The models I have tried are in the screenshot. You can kind of see the response in the background, but it didn't write any of these files; it just answered like it was in chat.
https://preview.redd.it/vxfc7rm4izpf1.png?width=300&format=png&auto=webp&s=b74104c1a6614566f4412e4355954a73945e3931
| 2025-09-18T20:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nkjp1y/local_llm_in_github_copilot_agent_mode/ | SuspiciousParsnip5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkjp1y | false | null | t3_1nkjp1y | /r/LocalLLaMA/comments/1nkjp1y/local_llm_in_github_copilot_agent_mode/ | false | false | 4 | null | |
With Meta's Fail, I wanted to test a couple 4Bs | 0 | Throwaway because I'm a lurker, not a poster.... but I thought this was interesting...
First I took a screenshot of the man asking the AI questions, and I asked Gemma-3-4B what it saw:
Okay, let's break down this image in as much detail as possible. Here’s a comprehensive analysis, focusing on what I observe:
**Overall Impression:**
The image presents a staged, professionally-shot video still, likely from a cooking demonstration or tutorial. It’s a bright, well-lit scene, giving it a clean and modern feel. It has a "Live Demo" overlay at the top, suggesting it's a real-time broadcast.
**Subject & Composition:**
* **Central Figure:** The primary subject is a man, likely the presenter or chef, positioned slightly off-center to the right. He's wearing a dark gray athletic polo shirt and appears to be in his late 20s or early 30s. He has dark, neatly styled hair, a light beard, and is looking directly at the camera with a friendly, instructional expression.
* **Framing:** The shot is framed fairly tightly around the man, focusing his attention. The background is present but slightly blurred, drawing the viewer’s eye to him.
**Setting & Background:**
* **Kitchen:** We're clearly in a modern kitchen. The countertop is a dark wood color, likely butcher block, and appears to be quite clean and organized.
* **Backsplash:** The backsplash is a muted, light green tile.
* **Shelving Unit:** To the left of the man, there's a dark gray shelving unit with a series of open shelves. On the shelves, we see:
* A collection of small, attractive ceramic bowls.
* A few lemons and yellow lemons.
* A variety of jars and containers, hinting at spices or oils.
* **Wall Features:** Above the shelving unit is a decorative wooden shelf. A potted plant (likely a snake plant or similar) is on the counter to the right of the man.
* **Hardware:** A stainless steel faucet is visible on the counter, and a row of copper-colored cooking tools (spatulas, tongs, etc.) are hanging on a rack above the countertop.
**Ingredients & Props:**
* **Countertop Ingredients:** The most prominent elements on the countertop are a collection of ingredients for cooking:
* **Large Bowl:** A white ceramic bowl contains chopped green onions and what appears to be some sort of spice (possibly chili powder or paprika – it’s difficult to be certain).
* **Jars:** Three jars are on the countertop:
* A dark brown jar labeled “Cacao”.
* A clear jar with a dark liquid, possibly soy sauce or balsamic vinegar.
* A dark brown jar with a label “Sesame Oil”.
* **Meat:** There's a piece of sliced pink meat (likely ham or prosciutto) arranged on the counter.
* **Other Spices:** Several small containers of spices are scattered around – likely garlic powder, pepper, and possibly others.
* **Cutting Board:** A white wooden cutting board is placed in front of the man, suggesting he’s in the process of preparing the ingredients.
**Technical Aspects (Based on the Image):**
* **Lighting:** The lighting is bright and even, likely from multiple sources (e.g., softboxes or LED panels).
* **Focus:** The man is in sharp focus, while the background is slightly softer, creating depth of field.
* **Color Palette:** The color palette is relatively muted and natural – greens, browns, grays, and pops of yellow from the lemons.
**Overall Impression & Possible Context:**
The image strongly suggests a cooking demonstration focused on a savory dish, potentially something with Asian-inspired flavors, given the soy sauce and sesame oil. The presenter seems knowledgeable and approachable, and the organized setting implies a focus on clear instructions.
Next I continued on and asked, just like in the video: **"hey, can you help me make a Korean inspired steak sauce for my steak sandwich here?"**
I received this... now I don't know anything about Korean cuisine, but it seemed like a pretty good response. Go go Local AI :)
Absolutely! Let’s craft a fantastic Korean-inspired steak sauce perfect for elevating your steak sandwich. Here’s a recipe and detailed breakdown, incorporating the key flavors and techniques:
**Korean-Inspired Steak Sauce Recipe**
**Yields:** Approximately 1 cup
**Prep Time:** 10 minutes
**Cook Time:** 15-20 minutes
**Ingredients:**
* 1/2 cup Beef Broth (low sodium is best)
* 1/4 cup Soy Sauce (low sodium)
* 2 tablespoons Gochujang (Korean Chili Paste – adjust to your spice preference)
* 2 tablespoons Honey or Maple Syrup (honey provides a deeper flavor)
* 1 tablespoon Sesame Oil (crucial for that authentic aroma)
* 1 tablespoon Rice Vinegar (adds brightness and cuts through the richness)
* 1 clove Garlic, minced
* 1/2 teaspoon Ground Ginger
* 1/4 teaspoon Black Pepper
* 1/2 teaspoon Toasted Sesame Seeds (for garnish - optional)
**Instructions:**
1. **Sauté Aromatics:** In a small saucepan, heat a teaspoon of sesame oil over medium heat. Add the minced garlic and ground ginger. Sauté for 30-60 seconds, until fragrant – *don’t burn the garlic!* This builds the foundational flavor.
2. **Combine Liquids:** Pour in the beef broth and soy sauce. Bring to a gentle simmer.
3. **Add Sweetness & Spice:** Stir in the gochujang, honey (or maple syrup), rice vinegar, and black pepper.
4. **Simmer & Reduce:** Reduce the heat to low and let the sauce simmer for 15-20 minutes, or until it has thickened slightly to your desired consistency. *Stir occasionally to prevent sticking.* The sauce will thicken further as it cools.
5. **Strain (Optional):** For a smoother sauce, you can strain it through a fine-mesh sieve after simmering. This is purely for aesthetic preference.
6. **Final Touch:** Stir in the remaining teaspoon of sesame oil before serving. Sprinkle with toasted sesame seeds for garnish if desired.
**Flavor Notes & Customization:**
* **Spice Level:** Gochujang varies in heat. Start with 2 tablespoons and add more to your taste. A little goes a long way!
* **Sweetness:** Adjust the honey/maple syrup based on how sweet you like your sauce.
* **Umami Boost:** A tiny splash (1/2 teaspoon) of fish sauce can deepen the savory notes. Be cautious – it’s potent!
* **Citrus:** A squeeze of lime juice at the end will add a bright, zesty element.
* **Korean Pear or Apple:** For a more complex sweetness, consider adding a small amount (1/4 cup) of pureed Korean pear or apple during the simmering process.
**Using the Sauce with Your Steak Sandwich:**
* Brush the sauce generously onto your cooked steak before assembling your sandwich.
* Alternatively, serve the sauce on the side for dipping.
| 2025-09-18T20:29:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nkjfiq/with_metas_fail_i_wanted_to_test_a_couple_4bs/ | Single_Power4777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkjfiq | false | null | t3_1nkjfiq | /r/LocalLLaMA/comments/1nkjfiq/with_metas_fail_i_wanted_to_test_a_couple_4bs/ | false | false | self | 0 | null |
Want to split a big model among two 5090's - what's my best case for single query response speed improvement? | 2 | So.. i have a single 5090 here and i'm looking to buy another. I also need to get another motherboard in the process.
What i'm trying to figure out is..
When splitting a model between two GPUs ( GLM 4.5 air in this case ), what is the best case speedup, in terms of tokens/sec, either literally or a percentage, i could get?
I get the impression from reading some posts here that the best we can do is about 15%.. but then there are some outliers claiming they can get a 60% speedup..
i do understand i need to use vllm or something similar to get good parallelization.
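For reference, a minimal sketch of what that tensor-parallel split looks like with vLLM's Python API (the model id is an assumption, stand-in for whatever GLM 4.5 Air quant you actually run; the same setting is exposed on the CLI as `--tensor-parallel-size 2`):

    # Sketch: shard one model across both 5090s with vLLM tensor parallelism.
    # The model id below is a placeholder / assumption -- point it at the quant
    # you actually plan to serve, and it still has to fit in the pooled VRAM.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="zai-org/GLM-4.5-Air",   # assumption: replace with your local path or quant
        tensor_parallel_size=2,        # split weights and KV cache across the two GPUs
    )

    params = SamplingParams(temperature=0.7, max_tokens=256)
    out = llm.generate(["Explain tensor parallelism in one paragraph."], params)
    print(out[0].outputs[0].text)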
Side note, to avoid buying server hardware, i'm looking at first getting an Asus proart board, which can provide an x8 split on two PCIE 5.0 slots.. i'm figuring this is adequate bandwidth to use two 5090's in concert, and it's possible i get no benefit from buying a server board and using two x16's instead.. let me know if i'm wrong. | 2025-09-18T20:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nkj4wo/want_to_split_a_big_model_among_two_5090s_whats/ | mr_zerolith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkj4wo | false | null | t3_1nkj4wo | /r/LocalLLaMA/comments/1nkj4wo/want_to_split_a_big_model_among_two_5090s_whats/ | false | false | self | 2 | null |
New stealth model Golden Capybara? | 12 | 2025-09-18T19:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/1nkigg6/new_stealth_model_golden_capybara/ | Adept_Photograph_796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkigg6 | false | null | t3_1nkigg6 | /r/LocalLLaMA/comments/1nkigg6/new_stealth_model_golden_capybara/ | false | false | 12 | null | ||
PSA it costs authors $12,690 to make a Nature article Open Access | 648 | And the DeepSeek folks paid up so we can read their work without hitting a paywall. Massive respect for absorbing the costs so the public benefits. | 2025-09-18T19:50:42 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkieo3 | false | null | t3_1nkieo3 | /r/LocalLLaMA/comments/1nkieo3/psa_it_costs_authors_12690_to_make_a_nature/ | false | false | default | 648 | {'enabled': True, 'images': [{'id': 'xkcal9zq9zpf1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=108&crop=smart&auto=webp&s=c1a5f51b1b9633285101e029621c74f1a8ea04de', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=216&crop=smart&auto=webp&s=6bc781ef2509e7c9f4685dfb3bf89e077553121c', 'width': 216}, {'height': 275, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=320&crop=smart&auto=webp&s=9748c012c711af0ccbb1951b463963382d2eb46b', 'width': 320}, {'height': 551, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=640&crop=smart&auto=webp&s=07dcfaf4df77e0f86644296480de4064b0f6ca22', 'width': 640}, {'height': 826, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=960&crop=smart&auto=webp&s=571dcd00120b5dfb35e820ab1e5830acf95125fc', 'width': 960}, {'height': 930, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?width=1080&crop=smart&auto=webp&s=d1f861247b14b6835ec0e3bfa2d38886aabc8c51', 'width': 1080}], 'source': {'height': 969, 'url': 'https://preview.redd.it/xkcal9zq9zpf1.jpeg?auto=webp&s=49a0ffa00036a3e6dec101191d3e0a531d134dc8', 'width': 1125}, 'variants': {}}]} | |
gpt-oss-20b TTFT very slow with llama.cpp? | 5 | Hey friends,
I'm running llama.cpp with llama-swap, and getting really poor performance with gpt-oss-20b on dual RTX 3060s with tensor split. I'm trying to switch over from ollama (for obvious reasons), but I'm finding that TTFT gets longer and longer as context grows, sometimes waiting 30 seconds to even minutes before inference even begins. Inference with higher context is also slow, but my main concern is that the inference doesn't even start for a long time.
Here is the relevant log snippet:
    forcing full prompt re-processing due to lack of cache data (likely due to SWA, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
    slot update_slots: id 0 | task 4232 | kv cache rm [0, end)
    slot update_slots: id 0 | task 4232 | prompt processing progress, n_past = 2048, n_tokens = 2048, progress = 0.657886
Here is my startup command:
# command:
- --server
- -m
- ${MODEL}
- -c
- ${CONTEXT}
- -b
- "4096"
- -ub
- "1024"
- --temp
- "0.7"
- --top_p
- "0.9"
- --top_k
- "20"
- --min_p
- "0"
- -ngl
- "9999"
- --tensor-split
- "1,1"
- -mg
- "0"
- --flash-attn
- "on"
- --cache-type-k
- q8_0
- --cache-type-v
- q8_0
- --jinja
- --host
- "0.0.0.0"
- --port
- "8001"
Not sure if there's something specific I need to do for gpt-oss here? Has anyone else run into this? | 2025-09-18T19:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nkiaov/gptoss20b_ttft_very_slow_with_llamacpp/ | No_Information9314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkiaov | false | null | t3_1nkiaov | /r/LocalLLaMA/comments/1nkiaov/gptoss20b_ttft_very_slow_with_llamacpp/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=108&crop=smart&auto=webp&s=f6dc2e77da193dc9923ee611963dc30f62c22e3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=216&crop=smart&auto=webp&s=742717e8589ba64336d158f9ee46167415a5fe62', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=320&crop=smart&auto=webp&s=c993b2663a7e0e458a900e17b2d869ce7941a2be', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=640&crop=smart&auto=webp&s=2480e1c32d962561e6efcf0097816be217d8974f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=960&crop=smart&auto=webp&s=82c4bdb567fd7ad9a020c55088fcc9e666ccac27', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?width=1080&crop=smart&auto=webp&s=f42a22312533784c863e2b2fe4a85296046e155f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VDsaRKB6MF1zC72W3rEBeZYiTeSy-Leo3mzF4pSf7-0.png?auto=webp&s=896ef80c61d1253c6f8c272b59aa14ae1a5d92e1', 'width': 1200}, 'variants': {}}]} |
Problem with glm air in LMStudio | 5 |
Hi. I have tried to get glm 4.5 air to work with opencode. It works great when I use it via openrouter, but when I run the same model locally (LMStudio) all tool calls fail. I have tried different quants, but so far nothing works.
Anyone who has a clue? Would really appreciate suggestions.
| 2025-09-18T19:21:33 | Magnus114 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkhn3t | false | null | t3_1nkhn3t | /r/LocalLLaMA/comments/1nkhn3t/problem_with_glm_air_in_lmstudio/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'xvt021tj4zpf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=108&crop=smart&auto=webp&s=94b965f12dfed39ac6cd79d26e2a4a7789059ad0', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=216&crop=smart&auto=webp&s=bceb30faa055fba98914b288b27c3151fca041de', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=320&crop=smart&auto=webp&s=66221e2e48d824c056c67b146175f5566b417db0', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=640&crop=smart&auto=webp&s=2cecfe08ace7af193f16c61bd5dfdcc67c200c03', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=960&crop=smart&auto=webp&s=9602365249a03a3c970412202ee00db2075cdbd1', 'width': 960}, {'height': 587, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?width=1080&crop=smart&auto=webp&s=6b81069e6a73d0f004da7ba80884111e691897f7', 'width': 1080}], 'source': {'height': 694, 'url': 'https://preview.redd.it/xvt021tj4zpf1.jpeg?auto=webp&s=f8f6148494874753019a06e77cb4a3e74310dab1', 'width': 1275}, 'variants': {}}]} | |
Can you guess what model you're talking to in 5 prompts? | 52 | I made a [web version](whichllama.com) of the WhichLlama? bot in our Discord server (you should join!) to share here. I think my own "LLM palate" isn't refined enough to tell models apart (drawing an analogy to coffee and wine tasting). | 2025-09-18T19:05:38 | https://v.redd.it/y7dajeso1zpf1 | entsnack | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkh88k | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/y7dajeso1zpf1/DASHPlaylist.mpd?a=1760814352%2CMWYzYzhhMDJhYjBhMDk2MTM4ZTZjYWIyMTBiZGYyYzZhZmE4NmNkMjkzYzkyYzg2YzQ0ODQyZWNjY2JiZGRlMA%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/y7dajeso1zpf1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/y7dajeso1zpf1/HLSPlaylist.m3u8?a=1760814352%2CNjlmOTViNTJmYmM1YWZkMzgyODJjZmExZTEwYjBlMzVhZmQwYWQxOTNjM2M0Y2I0MDk2NThkYWZhYWViMDkzZg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/y7dajeso1zpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 660}} | t3_1nkh88k | /r/LocalLLaMA/comments/1nkh88k/can_you_guess_what_model_youre_talking_to_in_5/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?width=108&crop=smart&format=pjpg&auto=webp&s=b06676b1b14b3a38abd0a0c10466aa243480ff12', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?width=216&crop=smart&format=pjpg&auto=webp&s=790b1fb84e75d677c5bbff189c02e30f1be9a930', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?width=320&crop=smart&format=pjpg&auto=webp&s=6d09403dfd39e6425823aef73f4d3bd5778665e3', 'width': 320}, {'height': 466, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?width=640&crop=smart&format=pjpg&auto=webp&s=a861c3106296219f14e8767a769bb688c7ac5fcf', 'width': 640}, {'height': 699, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?width=960&crop=smart&format=pjpg&auto=webp&s=a81901f1ef939bb198c90708ea6a7590fd046368', 'width': 960}], 'source': {'height': 718, 'url': 'https://external-preview.redd.it/NTh4aG80cW8xenBmMfQ6ULqGkcZNtZeiwHOodBaY1uWCovO-Ocod72xeRKh_.png?format=pjpg&auto=webp&s=6e1fc0a675b7bb44d27db748ba7f629646d089df', 'width': 986}, 'variants': {}}]} | |
Problem with glm air in LMStudio | 1 | Hi. I have tried to get glm 4.5 air to work with opencode. It works great when I use it via openrouter, but when I run the same model locally (LMStudio) all tool calls fail. I have tried different quants, but so far nothing works.
Anyone who has a clue? Would really appreciate suggestions.
| 2025-09-18T19:00:59 | Magnus114 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkh3qv | false | null | t3_1nkh3qv | /r/LocalLLaMA/comments/1nkh3qv/problem_with_glm_air_in_lmstudio/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'vrksb8pv0zpf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=108&crop=smart&auto=webp&s=e113d608d23348dd35fbe2eb95cad402a8e58f10', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=216&crop=smart&auto=webp&s=42e1d64f7931bf285044d2abc4d5ab6e777dde06', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=320&crop=smart&auto=webp&s=a91d01a5a9736a93c18d4e9095cdae9953c2c987', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=640&crop=smart&auto=webp&s=249539e9a875c3501a6ae43ca6a7009b1830b404', 'width': 640}, {'height': 522, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=960&crop=smart&auto=webp&s=15dc2c8f4ef759f875d68fab86ed6b22d6fd257f', 'width': 960}, {'height': 587, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?width=1080&crop=smart&auto=webp&s=04d977b1cf18a3ce291d3cf077c04f408f61a702', 'width': 1080}], 'source': {'height': 694, 'url': 'https://preview.redd.it/vrksb8pv0zpf1.jpeg?auto=webp&s=eda8c1f610701659dfa6c6bf05eb63f1944a5ef6', 'width': 1275}, 'variants': {}}]} | |
Beginner Question: How do I use quantised VisionLLMs available on Hugging Face? | 3 | I want to run the VLLM on Jetson Orin Nano (8GBs RAM) and so I've been looking for quantized VLLMs. But, when I tried to run
"EZCon/Qwen2-VL-2B-Instruct-abliterated-4bit-mlx" on PyTorch
It gave me this error: The model's quantization config from the arguments has no `quant_method` attribute. Make sure that the model has been correctly quantized
And now I found this: [Qwen.Qwen2.5-VL-7B-Instruct-GGUF](https://huggingface.co/DevQuasar/Qwen.Qwen2.5-VL-7B-Instruct-GGUF)
Which is a GGUF file that is not compatible with PyTorch and so I have no idea if I import it into Ollama how I would process images. | 2025-09-18T18:52:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nkgvcb/beginner_question_how_do_i_use_quantised/ | dead_shroom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkgvcb | false | null | t3_1nkgvcb | /r/LocalLLaMA/comments/1nkgvcb/beginner_question_how_do_i_use_quantised/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=108&crop=smart&auto=webp&s=d7e6b660624062e5513c36bd831f94637032b3b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=216&crop=smart&auto=webp&s=8a93b51101999e8096e8fcc9d718df6e58b88f4a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=320&crop=smart&auto=webp&s=26b6d8d3287baedcbe3fa1551d6e4c4866628235', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=640&crop=smart&auto=webp&s=a4e16f80c3a15de005eae771fb0aeac5c2587e56', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=960&crop=smart&auto=webp&s=5976174ee9cc36d2c4e57b8607b79b6dd0f94a8f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?width=1080&crop=smart&auto=webp&s=d597ca284ffe7ee516949277f83b9c757fbe3eed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/flDa7d4uxAqmQLZxAO_rdAJNBdbzF-nMCH8sZ7QYVJI.png?auto=webp&s=3d3e17687ed16914f24cd173be411a77d5c6cf09', 'width': 1200}, 'variants': {}}]} |
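Picking up the Ollama question from the post above: a minimal sketch of sending an image to a vision-capable GGUF model through Ollama's HTTP API (the model tag and file path are assumptions; Ollama accepts base64-encoded images in the `images` field of the generate endpoint):

    # Sketch: query a vision model served by Ollama with an image.
    # Assumes you have pulled a vision-capable model (the tag below is a guess)
    # and that Ollama is listening on its default port 11434.
    import base64
    import requests

    with open("test.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5vl:7b",       # assumption: whichever vision model you pulled
            "prompt": "Describe this image.",
            "images": [image_b64],          # base64-encoded image payload
            "stream": False,
        },
        timeout=300,
    )
    print(resp.json()["response"])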
RX 7700 launched with 2560 cores (relatively few) and 16GB memory with 624 GB/s bandwidth (relatively high) | 26 | This seems like an LLM GPU. Lots of bandwidth compared to compute.
See https://www.amd.com/en/products/graphics/desktops/radeon/7000-series/amd-radeon-rx-7700.html for the full specs | 2025-09-18T18:37:07 | https://videocardz.com/newz/amd-launches-radeon-rx-7700-with-2560-cores-and-16gb-memory | Balance- | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1nkgh1u | false | null | t3_1nkgh1u | /r/LocalLLaMA/comments/1nkgh1u/rx_7700_launched_with_2560_cores_relatively_few/ | false | false | default | 26 | {'enabled': False, 'images': [{'id': 'ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=108&crop=smart&auto=webp&s=2f266b69025a9d4f631bd6e151d37dbe9b6bed83', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=216&crop=smart&auto=webp&s=d962b62cce56a2a176a00942bdea75c795b7a373', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=320&crop=smart&auto=webp&s=e004e590c1a7bc92d634e5cc113261c7cf63b751', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=640&crop=smart&auto=webp&s=cb90e99fc8917d6295abd4272b2623d002b47934', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=960&crop=smart&auto=webp&s=b9abb48b6a3dabdba46f56e36e85b16f76bd7db7', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?width=1080&crop=smart&auto=webp&s=4db91ab7322f166d84d1c96f0fc4bc94f0fb362e', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/ZtmeYdNQChVt_XKulmUOe_WPIvyPIS1JhHlC4A0Lp38.jpeg?auto=webp&s=9eab2c6d2a6d1336965af8750b58b4ca827e572f', 'width': 2000}, 'variants': {}}]} |
Local LLM Coding Stack (24GB minimum, ideal 36GB) | 323 | Perhaps this could be useful to someone trying to get his/her own local AI coding stack. I do scientific coding stuff, not web or application development related stuff, so the needs might be different.
Deployed on a 48gb Mac, but this should work on 32GB, and maybe even 24GB setups:
General Tasks, used 90% of the time: Cline on top of Qwen3Coder-30b-a3b. Served by LM Studio in MLX format for maximum speed. This is the backbone of everything else...
Difficult single script tasks, 5% of the time: QwenCode on top of GPT-OSS 20b (Reasoning effort: High). Served by LM Studio. This cannot be served at the same time as Qwen3Coder due to lack of RAM. The problem cracker. GPT-OSS can be swapped with other reasoning models with tool use capabilities (Magistral, DeepSeek, ERNIE-thinking, EXAONE, etc... lots of options here)
Experimental, hand-made prototyping: Continue handling auto-complete on top of Qwen2.5-Coder 7b. Served by Ollama so it is always available alongside the model served by LM Studio. When you need to be in the creative loop yourself, this is the one.
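For anyone wiring this up in their own scripts, a minimal sketch of talking to the LM Studio-served model (port 1234 is LM Studio's default OpenAI-compatible endpoint; the model identifier is an assumption and has to match whatever you loaded in the app):

    # Sketch: call the Qwen3Coder model served by LM Studio through its
    # OpenAI-compatible API. The model name is an assumption -- use the
    # identifier LM Studio shows for the model you actually loaded.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="qwen3-coder-30b-a3b-instruct",
        messages=[{"role": "user", "content": "Write a NumPy function that z-scores a 2D array."}],
        temperature=0.2,
    )
    print(resp.choices[0].message.content)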
IDE for data exploration: Spyder
Long Live to Local LLM. | 2025-09-18T18:14:56 | JLeonsarmiento | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkfvrl | false | null | t3_1nkfvrl | /r/LocalLLaMA/comments/1nkfvrl/local_llm_coding_stack_24gb_minimum_ideal_36gb/ | false | false | default | 323 | {'enabled': True, 'images': [{'id': 'ia5muohupypf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=108&crop=smart&auto=webp&s=843fd78c870da3099d1531a990d26820a222a1af', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=216&crop=smart&auto=webp&s=96999c3ab53af08379ba7447758d8e4c6dc89ae1', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=320&crop=smart&auto=webp&s=3d8fefa74b6f8e6ea19bd4bc8e3b046f0bf17a44', 'width': 320}, {'height': 415, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=640&crop=smart&auto=webp&s=1148b1bf986af2a5825964a50e7d6bdf8dc5dc16', 'width': 640}, {'height': 623, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=960&crop=smart&auto=webp&s=2cf2ee3c3923915ceda82eccd995248a52387175', 'width': 960}, {'height': 701, 'url': 'https://preview.redd.it/ia5muohupypf1.png?width=1080&crop=smart&auto=webp&s=815e88865a033494d93310e1ebfb36894db8674b', 'width': 1080}], 'source': {'height': 1964, 'url': 'https://preview.redd.it/ia5muohupypf1.png?auto=webp&s=f7929de6dd1bf17c7ab1d9eb475d73c16fc08764', 'width': 3024}, 'variants': {}}]} | |
AMA with the LM Studio team | 178 | Hello r/LocalLLaMA! We're excited for this AMA. Thank you for having us here today. We got a full house from the LM Studio team:
\- Yags [https://reddit.com/user/yags-lms/](https://t.co/ERfA4NrR96) (founder)
\- Neil [https://reddit.com/user/neilmehta24/](https://t.co/KyiHVfv0QG) (LLM engines and runtime)
\- Will [https://reddit.com/user/will-lms/](https://t.co/IjAZJL2JMK) (LLM engines and runtime)
\- Matt [https://reddit.com/user/matt-lms/](https://t.co/6MNkItPYnI) (LLM engines, runtime, and APIs)
\- Ryan [https://reddit.com/user/ryan-lms/](https://t.co/0snuNUPizo) (Core system and APIs)
\- Rugved [https://reddit.com/user/rugved\_lms/](https://t.co/xGtYHsJZI3) (CLI and SDKs)
\- Alex [https://reddit.com/user/alex-lms/](https://t.co/wtT2IFf0z6) (App)
\- Julian [https://www.reddit.com/user/julian-lms/](https://www.reddit.com/user/julian-lms/) (Ops)
Excited to chat about: the latest local models, UX for local models, steering local models effectively, LM Studio SDK and APIs, how we support multiple LLM engines (llama.cpp, MLX, and more), privacy philosophy, why local AI matters, our open source projects (mlx-engine, lms, lmstudio-js, lmstudio-python, venvstacks), why ggerganov and Awni are the GOATs, where is TheBloke, and more.
Would love to hear about people's setup, which models you use, use cases that really work, how you got into local AI, what needs to improve in LM Studio and the ecosystem as a whole, how you use LM Studio, and anything in between!
>Everyone: it was awesome to see your questions here today and share replies! Thanks a lot for the welcoming AMA. We will continue to monitor this post for more questions over the next couple of days, but for now we're signing off to continue building 🔨
>
>We have several marquee features we've been working on for a loong time coming out later this month that we hope you'll love and find lots of value in. And don't worry, UI for n cpu moe is on the way too :)
>
>Special shoutout and thanks to ggerganov, Awni Hannun, TheBloke, Hugging Face, and all the rest of the open source AI community!
>
>Thank you and see you around!
>- Team LM Studio 👾 | 2025-09-18T18:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nkft9l/ama_with_the_lm_studio_team/ | yags-lms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkft9l | false | null | t3_1nkft9l | /r/LocalLLaMA/comments/1nkft9l/ama_with_the_lm_studio_team/ | false | true | self | 178 | null |
Could local LLMs become the real all-in-one alternative to cloud AI? | 0 | I've been experimenting with different local setups for LLaMA-based models, and it got me thinking: are we moving closer to a future where local LLMs can serve as a true all-in-one hub?
Right now, people tend to mix and match: one tool for note-taking, another for automation, another for chat, etc. But if local LLMs keep improving, could they eventually centralize all of that into a single AI brain that runs privately on your own machine?
Some platforms (like greendaisy.ai and others) are already exploring "AI hubs" in the cloud, but what excites me is the idea of keeping that same power local: faster responses, better privacy, no subscription lock-ins.
Curious to hear what this community thinks:
Do you see local LLMs evolving into all-in-one hubs, or will they stay specialized?
What's holding us back right now: hardware, context length, or just adoption? | 2025-09-18T17:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nkeykh/ꓚоսꓲd_ꓲосаꓲ_ꓡꓡꓟѕ_bесоmе_tһе_rеаꓲ_аꓲꓲіոоոе/ | Objective-Box7188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkeykh | false | null | t3_1nkeykh | /r/LocalLLaMA/comments/1nkeykh/ꓚоսꓲd_ꓲосаꓲ_ꓡꓡꓟѕ_bесоmе_tһе_rеаꓲ_аꓲꓲіոоոе/ | false | false | self | 0 | null |
DnD AI DM : Stable SillyTavern + Vicuna 13B setup for lore-heavy D&D (RTX 3070 / 8GB VRAM) | 1 | [removed] | 2025-09-18T17:35:31 | https://www.reddit.com/r/LocalLLaMA/comments/1nketbi/dnd_ai_dm_stable_sillytavern_vicuna_13b_setup_for/ | Additional_Ad2545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nketbi | false | null | t3_1nketbi | /r/LocalLLaMA/comments/1nketbi/dnd_ai_dm_stable_sillytavern_vicuna_13b_setup_for/ | false | false | self | 1 | null |
[Guide] Stable SillyTavern + Vicuna 13B setup for lore-heavy D&D (RTX 3070 / 8GB VRAM) | 1 | [removed] | 2025-09-18T17:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nkes06/guide_stable_sillytavern_vicuna_13b_setup_for/ | Additional_Ad2545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkes06 | false | null | t3_1nkes06 | /r/LocalLLaMA/comments/1nkes06/guide_stable_sillytavern_vicuna_13b_setup_for/ | false | false | self | 1 | null |
DnD AI DM: Stable SillyTavern + Vicuna 13B Setup for Lore-Heavy D&D (RTX 3070/8GB VRAM) | 1 | [removed] | 2025-09-18T17:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nkeoqg/dnd_ai_dm_stable_sillytavern_vicuna_13b_setup_for/ | Additional_Ad2545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkeoqg | false | null | t3_1nkeoqg | /r/LocalLLaMA/comments/1nkeoqg/dnd_ai_dm_stable_sillytavern_vicuna_13b_setup_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=108&crop=smart&auto=webp&s=50459b106c03fc3b98949b8b4c193cb05d80eada', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=216&crop=smart&auto=webp&s=a0563fc7a5d1a62d3a861a146cfa2502c1596314', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=320&crop=smart&auto=webp&s=1031b766f8dd69f95d02d9f3d23dcd1107ee8514', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=640&crop=smart&auto=webp&s=257d71dd5a213a160b5fab23963d46bc7e016040', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=960&crop=smart&auto=webp&s=b938467eed467048292ddd7ca8a34c46d936bde2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?width=1080&crop=smart&auto=webp&s=5c0f6cc5727c25f62a978d9217d9b72337e9d94d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JXD7apj8aB4GZzBm_M46ukkHvkv5ubcKkAZboniqkrY.png?auto=webp&s=40c0c6f9b2dbb3977c337fa215b9fa1dff365bae', 'width': 1200}, 'variants': {}}]} |
Can I run local opensource models like Wan2.1, Hunyuan Video, CogVideoX & Chatterbox on a RTX 3060 build? | 1 | [removed] | 2025-09-18T17:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nkeogq/can_i_run_local_opensource_models_like_wan21/ | Educational-Bite325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkeogq | false | null | t3_1nkeogq | /r/LocalLLaMA/comments/1nkeogq/can_i_run_local_opensource_models_like_wan21/ | false | false | self | 1 | null |
DnD AI DM, lore-heavy. [Guide] Stable SillyTavern + Vicuna 13B Setup for Lore-Heavy D&D (RTX 3070/8GB VRAM) | 1 | [removed] | 2025-09-18T17:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nke6o0/dnd_ai_dm_loreheavy_guide_stable_sillytavern/ | Additional_Ad2545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nke6o0 | false | null | t3_1nke6o0 | /r/LocalLLaMA/comments/1nke6o0/dnd_ai_dm_loreheavy_guide_stable_sillytavern/ | false | false | self | 1 | null |
Can someone explain | 0 | I am lost, and the resources I have been finding are making me more lost. What do these terms mean?
1. Safetensors
2. GGUF
3. Instruct
4. MoE - I know it is mixture of experts but how is it different
And there are more. | 2025-09-18T17:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nkdzu0/can_someone_explain/ | r00tdr1v3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkdzu0 | false | null | t3_1nkdzu0 | /r/LocalLLaMA/comments/1nkdzu0/can_someone_explain/ | false | false | self | 0 | null |
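To make the first two terms from the question above concrete, here is a small sketch contrasting how the two file formats are typically loaded (paths are placeholders; safetensors is a plain tensor container used by PyTorch/transformers, while GGUF is the quantized single-file format consumed by llama.cpp-based runtimes):

    # Sketch: the two formats live in different ecosystems.
    # Safetensors: raw tensors, loaded by PyTorch / transformers.
    from safetensors.torch import load_file
    state_dict = load_file("model.safetensors")   # placeholder path

    # GGUF: quantized single-file model, loaded by llama.cpp bindings.
    from llama_cpp import Llama
    llm = Llama(model_path="model.gguf", n_ctx=4096)  # placeholder path
    out = llm("Q: What is a mixture of experts?\nA:", max_tokens=64)
    print(out["choices"][0]["text"])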
How to locally test ICPC 2025 World Finals questions with open-source models. | 1 | The questions put to all these teams and their hardware and programs at this event that just concluded in Baku - where all the big models get ranked in performance - are available online in PDF format exactly as presented in competition.
Now I can solve all of them in my head, mind you, but just for giggles, how would I go about testing various open-source models using, say, LM Studio? Would the models have to be multimodal to understand the PDFs? What would the prompts be? Do the PDFs have to be OCR'd first or converted to JPG?
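One possible local pipeline, purely as a sketch (the file name and model id are placeholders, and plain text extraction only works for PDFs with an embedded text layer, so scanned pages would indeed need OCR or a multimodal model):

    # Sketch: extract an ICPC problem statement from a PDF and send it to a
    # model served by LM Studio on its default OpenAI-compatible endpoint.
    # Assumes `pip install pypdf openai`; file name and model id are placeholders.
    from openai import OpenAI
    from pypdf import PdfReader

    pages = PdfReader("problem_a.pdf").pages
    problem_text = "\n".join(page.extract_text() or "" for page in pages)

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    resp = client.chat.completions.create(
        model="your-local-model",   # placeholder: whatever model you loaded in LM Studio
        messages=[
            {"role": "system", "content": "You are a competitive programmer. Reply with a complete C++17 solution."},
            {"role": "user", "content": problem_text},
        ],
    )
    print(resp.choices[0].message.content)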
Any tips from fellow open-source LLM fans would be greatly appreciated. | 2025-09-18T16:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nkdu8u/how_to_locally_test_icpc_2025_world_finals/ | PaulDallas72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkdu8u | false | null | t3_1nkdu8u | /r/LocalLLaMA/comments/1nkdu8u/how_to_locally_test_icpc_2025_world_finals/ | false | false | self | 1 | null |
A dialogue where god tries (and fails) to prove to satan that humans can reason | 77 | [Full article here](https://www.astralcodexten.com/p/what-is-man-that-thou-art-mindful). | 2025-09-18T16:52:06 | FinnFarrow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkdnf2 | false | null | t3_1nkdnf2 | /r/LocalLLaMA/comments/1nkdnf2/a_dialogue_where_god_tries_and_fails_to_prove_to/ | false | false | default | 77 | {'enabled': True, 'images': [{'id': 'fqm6nmw8dypf1', 'resolutions': [{'height': 150, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=108&crop=smart&auto=webp&s=c9132c2b2434b185bf9747080b6a5329e2213db1', 'width': 108}, {'height': 300, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=216&crop=smart&auto=webp&s=ace1bdcd50cbd2889aebc45cb1a48fae30a1a70f', 'width': 216}, {'height': 445, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=320&crop=smart&auto=webp&s=888f209e80c21379151bc2ce6d8eddb851e47e4e', 'width': 320}, {'height': 890, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=640&crop=smart&auto=webp&s=d353499d3cd5e9c56a4af4ceac147c01524c7fe1', 'width': 640}, {'height': 1336, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=960&crop=smart&auto=webp&s=96370b8a7dd0dd8f9fed16898d8d20b24f4430d5', 'width': 960}, {'height': 1503, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?width=1080&crop=smart&auto=webp&s=835557bfba37846ef025b819a478841efa7ea6dd', 'width': 1080}], 'source': {'height': 1662, 'url': 'https://preview.redd.it/fqm6nmw8dypf1.png?auto=webp&s=5c41e674c80b12f8b08360e497d048a57c113601', 'width': 1194}, 'variants': {}}]} | |
Vibevoice Comfy Distributed? | 1 | Could vibevoice be run on across distributed GPUs in ComfyUI? Any ideas if this is possible? | 2025-09-18T16:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nkd3fa/vibevoice_comfy_distributed/ | Eastern_Rock7947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkd3fa | false | null | t3_1nkd3fa | /r/LocalLLaMA/comments/1nkd3fa/vibevoice_comfy_distributed/ | false | false | self | 1 | null |
Running Nvidia CUDA Pytorch/vLLM projects and pipelines on AMD with no modifications | 2 | Hi, I wanted to share some information on this cool feature we built in WoolyAI GPU hypervisor, which enables users to run their existing Nvidia CUDA pytorch/vLLM projects and pipelines without any modifications on AMD GPUs. ML researchers can transparently consume GPUs from a heterogeneous cluster of Nvidia and AMD GPUs. MLOps don't need to maintain separate pipelines or runtime dependencies. The ML team can scale capacity easily.
Please share feedback and we are also signing up Beta users.
[https://youtu.be/MTM61CB2IZc](https://youtu.be/MTM61CB2IZc) | 2025-09-18T16:25:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nkcxlj/running_nvidia_cuda_pytorchvllm_projects_and/ | Chachachaudhary123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkcxlj | false | null | t3_1nkcxlj | /r/LocalLLaMA/comments/1nkcxlj/running_nvidia_cuda_pytorchvllm_projects_and/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'vbTKGbEN-i6KoDED_b9j5Og-2ZY5W-amfiS9hRDeZjE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/vbTKGbEN-i6KoDED_b9j5Og-2ZY5W-amfiS9hRDeZjE.jpeg?width=108&crop=smart&auto=webp&s=51e56875612b1cac16959f9d217b5fbceed6b736', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/vbTKGbEN-i6KoDED_b9j5Og-2ZY5W-amfiS9hRDeZjE.jpeg?width=216&crop=smart&auto=webp&s=8d377fd7ff6c4f61fec158ea8518636d5884216f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/vbTKGbEN-i6KoDED_b9j5Og-2ZY5W-amfiS9hRDeZjE.jpeg?width=320&crop=smart&auto=webp&s=5120a7b1271f96380118414f7fe51d7b3543a7a5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/vbTKGbEN-i6KoDED_b9j5Og-2ZY5W-amfiS9hRDeZjE.jpeg?auto=webp&s=4c84ddf35a62134b49be5175e483bca43959bd8e', 'width': 480}, 'variants': {}}]} |
how do i best use my hardware | 0 | Hi folks:
I have been hosting LLMs on my hardware a bit (taking a break right now from all AI -- personal reasons, don't ask), but eventually I'll be getting back into it. I have a Ryzen 9 9950x with 64gb of ddr5 memory, about 12 tb of drive space, and a 3060 (12gb) GPU -- it works great, but, unfortunately, the gpu is a bit space limited. I'm wondering if there are ways to use my cpu and memory for LLM work without it being glacial in pace -- | 2025-09-18T16:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nkckw6/how_do_i_best_use_my_hardware/ | slrg1968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkckw6 | false | null | t3_1nkckw6 | /r/LocalLLaMA/comments/1nkckw6/how_do_i_best_use_my_hardware/ | false | false | self | 0 | null |
GLM 4.5 Air - Jinja Template Modification (Based on Unsloth's) - No thinking by default - straight quick answers, need thinking? simple activation with "/think" command anywhere in the system prompt. | 58 | 2025-09-18T16:03:16 | https://www.reddit.com/gallery/1nkcbwp | -Ellary- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nkcbwp | false | null | t3_1nkcbwp | /r/LocalLLaMA/comments/1nkcbwp/glm_45_air_jinja_template_modification_based_on/ | false | false | 58 | null | ||
I open-sourced a text2SQL RAG for all your databases and local models | 21 | Hey r/LocalLLama 👋
I’ve spent most of my career working with databases, and one thing that’s always bugged me is how hard it is for AI agents to work with them. Whenever I ask Claude, GPT, or Llama about my data, it either invents schemas or hallucinates details. To fix that, I built [ToolFront](https://docs.toolfront.ai/). It's a free and open-source Python library for creating lightweight but powerful retrieval agents, giving them a safe, smart way to actually understand and query your database schemas.
# So, how does it work?
ToolFront gives your custom/local models two read-only database tools so they can explore your data and quickly find answers. You can also add business context to help the AI better understand your databases. Check out our [model documentation page](https://docs.toolfront.ai/documentation/ai_models/#custom-model-providers) for more info on how to use your own models.
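A rough usage sketch of that flow (the import path, constructor, and connection string here are assumptions rather than the library's documented API; only the typed `db.ask(...)` call mirrors the example given further down, so check the linked docs for the real interface):

    # Hypothetical sketch of the flow described above -- not the documented API.
    # The import and constructor are assumptions; the typed ask() call follows
    # the `answer: list[int] = db.ask(...)` example quoted later in this post.
    from toolfront import Database  # assumed entry point

    db = Database("postgresql://user:pass@localhost:5432/shop")  # assumed constructor

    # The agent explores the schema with its read-only tools, then returns a
    # response coerced to the annotated type.
    answer: list[int] = db.ask("How many orders were placed in each month of 2024?")
    print(answer)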
# Connects to everything
* 15+ databases and warehouses, including: Snowflake, BigQuery, PostgreSQL & more!
* Data files like CSVs, Parquets, JSONs, and even Excel files.
* Any API with an OpenAPI/Swagger spec (e.g. GitHub, Stripe, Discord, and even internal APIs)
# Why you'll love it
* **Zero configuration:** Skip config files and infrastructure setup. ToolFront works out of the box with all your data and models.
* **Predictable results**: Data is messy. ToolFront returns structured, type-safe responses that match exactly what you want e.g.
* `answer: list[int] = db.ask(...)`
* **Use it anywhere**: Avoid migrations. Run ToolFront directly, as an MCP server, or build custom tools for your favorite AI framework.
If you’re building AI agents for databases (or APIs!), I really think ToolFront could make your life easier. Your feedback last time was incredibly helpful for improving the project. Please keep it coming!
**Docs:** [https://docs.toolfront.ai/](https://docs.toolfront.ai/)
**GitHub Repo**: [https://github.com/kruskal-labs/toolfront](https://github.com/kruskal-labs/toolfront)
**Discord:** [https://discord.com/invite/rRyM7zkZTf](https://discord.com/invite/rRyM7zkZTf)
A ⭐ on GitHub really helps with visibility! | 2025-09-18T16:02:36 | Durovilla | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nkcb7n | false | null | t3_1nkcb7n | /r/LocalLLaMA/comments/1nkcb7n/i_opensourced_a_text2sql_rag_for_all_your/ | false | false | default | 21 | {'enabled': True, 'images': [{'id': 'kolx2nwb3ypf1', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=108&crop=smart&auto=webp&s=149df8c389bf0e27509937c2677254f7f2720af9', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=216&crop=smart&auto=webp&s=8e52df768519f4edc68597fbb2d159c6d7a12e89', 'width': 216}, {'height': 105, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=320&crop=smart&auto=webp&s=825ae1e60b6ff8cac7f96004d74091e3e73e734d', 'width': 320}, {'height': 211, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=640&crop=smart&auto=webp&s=a50db6350de24146879cc0b1d205373350ff3287', 'width': 640}, {'height': 317, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=960&crop=smart&auto=webp&s=c498f114d94c485a5ef2ebfd1f2dbafe2695a5f0', 'width': 960}, {'height': 357, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?width=1080&crop=smart&auto=webp&s=6a8fe48759a729e4f30e5578eb32c41488daca01', 'width': 1080}], 'source': {'height': 432, 'url': 'https://preview.redd.it/kolx2nwb3ypf1.png?auto=webp&s=ccb4389e1345212343a5069e515194706eaaf56d', 'width': 1306}, 'variants': {}}]} | |
Qwen3 Next - Behind the Curtain | 7 | 2025-09-18T15:59:55 | https://www.youtube.com/watch?v=DfPKk-8fOGA | Confident-Honeydew66 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1nkc8gm | false | {'oembed': {'author_name': 'Sam Witteveen', 'author_url': 'https://www.youtube.com/@samwitteveenai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/DfPKk-8fOGA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen3 Next - Behind the Curtain"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/DfPKk-8fOGA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen3 Next - Behind the Curtain', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1nkc8gm | /r/LocalLLaMA/comments/1nkc8gm/qwen3_next_behind_the_curtain/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'atPe_ZcLh-b1MmmRXwqyrOtH9cMsKeTF2v_fDKIB1JE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/atPe_ZcLh-b1MmmRXwqyrOtH9cMsKeTF2v_fDKIB1JE.jpeg?width=108&crop=smart&auto=webp&s=4e9a85451b145d09453eb66848e9c1cc81accfaa', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/atPe_ZcLh-b1MmmRXwqyrOtH9cMsKeTF2v_fDKIB1JE.jpeg?width=216&crop=smart&auto=webp&s=4ac1d452e15055d5109e6509f7c9f5c2d3431dd5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/atPe_ZcLh-b1MmmRXwqyrOtH9cMsKeTF2v_fDKIB1JE.jpeg?width=320&crop=smart&auto=webp&s=59a612c7f34c2a299ecbaf1a97e8caa9252bc616', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/atPe_ZcLh-b1MmmRXwqyrOtH9cMsKeTF2v_fDKIB1JE.jpeg?auto=webp&s=4e77ca64572e14f2ff2b6802bae39c44f5c6ff8e', 'width': 480}, 'variants': {}}]} | |
How are you using computer-use agents? | 7 | I'm trying to understand how people are using computer-use agents in practice. If you are using computer-use agents today, what's your use-case?
To clarify, I'm not looking for folks building these agents. I'd love to hear from you if you are / know of individuals, teams, or companies actually using them in their workflows, products, or internal processes. | 2025-09-18T15:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1nkc6tx/how_are_you_using_computeruse_agents/ | New-Strain-7472 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkc6tx | false | null | t3_1nkc6tx | /r/LocalLLaMA/comments/1nkc6tx/how_are_you_using_computeruse_agents/ | false | false | self | 7 | null |
The one who knows how 2 5090 on same server can run vLLM in tp 2 | 0 | Whoever can show me how 2x 5090s on the same server can run vLLM in tensor parallel 2, I will transfer them 100 euros worth of coins.
Currently my 2x AMD 7900 XTX can do that; they run vLLM fine in tensor parallel 2,
but the latest Nvidia cards can't :)
How the tables have turned. So currently Nvidia sucks, and AMD just works. I wish I had known this before purchasing these 5090s. Okay, it's not a huge problem; data parallelism works also. | 2025-09-18T15:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nkc2o6/the_one_who_knows_how_2_5090_on_same_server_can/ | somealusta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkc2o6 | false | null | t3_1nkc2o6 | /r/LocalLLaMA/comments/1nkc2o6/the_one_who_knows_how_2_5090_on_same_server_can/ | false | false | self | 0 | null |
Using gpt-oss:120b with Ollama on a Ryzen Max 395+ via Continue.dev | 5 | I have a Bosgame M5 AI Mini PC running Ubuntu 24.04. On said machine, I have Ollama 0.11.11. I have the memory configured with 96GB dedicated for GPU with the remaining 32GB for system use. Using gpt-oss:120b via Open Web UI works without issue from a browser. In fact, it is quite responsive. In trying to get the [Continue.dev](http://Continue.dev) CLI agentic tool to work through Open Web UI to Ollama, I am seeing the following error in the logs:
2025-09-18T15:34:01.201140+00:00 bosgame kernel: workqueue: svm_range_restore_work [amdgpu] hogged CPU for >10000us 32 times, consider switching to WQ_UNBOUND
2025-09-18T15:34:24.014339+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MES failed to respond to msg=REMOVE_QUEUE
2025-09-18T15:34:24.014369+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: failed to remove hardware queue from MES, doorbell=0x1002
2025-09-18T15:34:24.014372+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MES might be in unrecoverable state, issue a GPU reset
2025-09-18T15:34:24.014372+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: Failed to evict queue 1
2025-09-18T15:34:24.014373+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: GPU reset begin!
2025-09-18T15:34:24.014989+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: Failed to evict process queues
2025-09-18T15:34:24.015078+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: Dumping IP State
2025-09-18T15:34:24.016954+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: Dumping IP State Completed
2025-09-18T15:34:24.038820+00:00 bosgame ollama[26114]: HW Exception by GPU node-1 (Agent handle: 0x7ba55c692d40) reason :GPU Hang
2025-09-18T15:34:24.164997+00:00 bosgame kernel: amdgpu: Freeing queue vital buffer 0x7b9410200000, queue evicted
2025-09-18T15:34:24.165015+00:00 bosgame kernel: amdgpu: Freeing queue vital buffer 0x7ba38ea00000, queue evicted
2025-09-18T15:34:24.165017+00:00 bosgame kernel: amdgpu: Freeing queue vital buffer 0x7ba395400000, queue evicted
2025-09-18T15:34:24.165018+00:00 bosgame kernel: amdgpu: Freeing queue vital buffer 0x7ba396c00000, queue evicted
2025-09-18T15:34:24.165019+00:00 bosgame kernel: amdgpu: Freeing queue vital buffer 0x7ba530800000, queue evicted
2025-09-18T15:34:24.271776+00:00 bosgame ollama[26114]: time=2025-09-18T15:34:24.271Z level=ERROR source=server.go:1459 msg="post predict" error="Post \"http://127.0.0.1:34789/completion\": EOF"
2025-09-18T15:34:24.272088+00:00 bosgame ollama[26114]: [GIN] 2025/09/18 - 15:34:24 | 200 | 25.833761683s | 172.17.0.3 | POST "/api/chat"
2025-09-18T15:34:24.272226+00:00 bosgame ollama[26114]: time=2025-09-18T15:34:24.272Z level=DEBUG source=sched.go:377 msg="context for request finished" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference=rocm runner.devices=1 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=113255 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192
2025-09-18T15:34:24.272266+00:00 bosgame ollama[26114]: time=2025-09-18T15:34:24.272Z level=DEBUG source=sched.go:286 msg="runner with non-zero duration has gone idle, adding timer" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference=rocm runner.devices=1 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=113255 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192 duration=5m0s
2025-09-18T15:34:24.272294+00:00 bosgame ollama[26114]: time=2025-09-18T15:34:24.272Z level=DEBUG source=sched.go:304 msg="after processing request finished event" runner.name=registry.ollama.ai/library/gpt-oss:120b runner.inference=rocm runner.devices=1 runner.size="61.4 GiB" runner.vram="61.4 GiB" runner.parallel=1 runner.pid=113255 runner.model=/usr/share/ollama/.ollama/models/blobs/sha256-90a618fe6ff21b09ca968df959104eb650658b0bef0faef785c18c2795d993e3 runner.num_ctx=8192 refCount=0
2025-09-18T15:34:25.113360+00:00 bosgame kernel: gmc_v11_0_process_interrupt: 95 callbacks suppressed
2025-09-18T15:34:25.113366+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:169 vmid:0 pasid:0)
2025-09-18T15:34:25.113367+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: in page starting at address 0x0000000000000000 from client 10
2025-09-18T15:34:25.113367+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: GCVM_L2_PROTECTION_FAULT_STATUS:0x00040B53
2025-09-18T15:34:25.113368+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: Faulty UTCL2 client ID: CPC (0x5)
2025-09-18T15:34:25.113370+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MORE_FAULTS: 0x1
2025-09-18T15:34:25.113370+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: WALKER_ERROR: 0x1
2025-09-18T15:34:25.113371+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: PERMISSION_FAULTS: 0x5
2025-09-18T15:34:25.113372+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MAPPING_ERROR: 0x1
2025-09-18T15:34:25.113372+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: RW: 0x1
2025-09-18T15:34:25.113373+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: [gfxhub] page fault (src_id:0 ring:153 vmid:0 pasid:0)
2025-09-18T15:34:25.113374+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: in page starting at address 0x0000000000000000 from client 10
2025-09-18T15:34:26.683975+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MES failed to respond to msg=SUSPEND
2025-09-18T15:34:26.683980+00:00 bosgame kernel: [drm:amdgpu_mes_suspend [amdgpu]] *ERROR* failed to suspend all gangs
2025-09-18T15:34:26.683981+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: suspend of IP block <mes_v11_0> failed -110
2025-09-18T15:34:27.118955+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: MODE2 reset
2025-09-18T15:34:27.149973+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: GPU reset succeeded, trying to resume
2025-09-18T15:34:27.149976+00:00 bosgame kernel: [drm] PCIE GART of 512M enabled (table at 0x00000097FFB00000).
2025-09-18T15:34:27.149977+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: SMU is resuming...
2025-09-18T15:34:27.157972+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: SMU is resumed successfully!
2025-09-18T15:34:27.172973+00:00 bosgame kernel: [drm] DMUB hardware initialized: version=0x09000F00
2025-09-18T15:34:27.253979+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0
2025-09-18T15:34:27.253982+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
2025-09-18T15:34:27.253983+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
2025-09-18T15:34:27.253984+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 6 on hub 0
2025-09-18T15:34:27.253984+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 7 on hub 0
2025-09-18T15:34:27.253985+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 8 on hub 0
2025-09-18T15:34:27.253986+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 9 on hub 0
2025-09-18T15:34:27.253986+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 10 on hub 0
2025-09-18T15:34:27.253987+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 11 on hub 0
2025-09-18T15:34:27.253987+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring sdma0 uses VM inv eng 12 on hub 0
2025-09-18T15:34:27.253988+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring vcn_unified_0 uses VM inv eng 0 on hub 8
2025-09-18T15:34:27.253989+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring vcn_unified_1 uses VM inv eng 1 on hub 8
2025-09-18T15:34:27.253989+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring jpeg_dec_0 uses VM inv eng 4 on hub 8
2025-09-18T15:34:27.253990+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring jpeg_dec_1 uses VM inv eng 6 on hub 8
2025-09-18T15:34:27.253990+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring mes_kiq_3.1.0 uses VM inv eng 13 on hub 0
2025-09-18T15:34:27.253991+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: ring vpe uses VM inv eng 7 on hub 8
2025-09-18T15:34:27.296972+00:00 bosgame kernel: amdgpu 0000:c5:00.0: amdgpu: GPU reset(19) succeeded!
Here is my [Continue.dev](http://Continue.dev) CLI config.yaml:
name: Local Assistant
version: 1.0.0
schema: v1
models:
- name: gpt-oss:120b
provider: openai
model: gpt-oss:120b
env:
useLegacyCompletionsEndpoint: false
apiBase: http://10.1.1.27:3000/api
apiKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
roles:
- chat
- edit
timeout: 6000000
context:
- provider: code
- provider: docs
- provider: diff
- provider: terminal
- provider: problems
- provider: folder
- provider: codebase
I also tried getting OpenAI's codex CLI to work, and Ollama is throwing the same error.
Has anyone else had similar issues?
| 2025-09-18T15:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nkbx6w/using_gptoss120b_with_ollama_on_a_ryzen_max_395/ | tinycomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkbx6w | false | null | t3_1nkbx6w | /r/LocalLLaMA/comments/1nkbx6w/using_gptoss120b_with_ollama_on_a_ryzen_max_395/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
Good mining frame for server motherboards and large GPUs? | 2 | I am putting together a system with an SSI-EEB board as well as chonky 4090s that are 360mm in length.
Most mining frames are targeted for bitcoin mining with ATX motherboards and a bunch of smaller GPUs and they don't necessarily support the SSI-EEB screw pattern or GPUs that long.
I'm open to other ideas too, but a tower case is infeasible due to the size/number of GPUs.
I figure that this community has at least a few people who've put something like this together. What are you using? | 2025-09-18T15:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nkbthf/good_mining_frame_for_server_motherboards_and/ | abnormal_human | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkbthf | false | null | t3_1nkbthf | /r/LocalLLaMA/comments/1nkbthf/good_mining_frame_for_server_motherboards_and/ | false | false | self | 2 | null |
Local Suno just dropped | 474 | [https://huggingface.co/fredconex/SongBloom-Safetensors](https://huggingface.co/fredconex/SongBloom-Safetensors)
[https://github.com/fredconex/ComfyUI-SongBloom](https://github.com/fredconex/ComfyUI-SongBloom)
Examples:
[https://files.catbox.moe/i0iple.flac](https://files.catbox.moe/i0iple.flac)
[https://files.catbox.moe/96i90x.flac](https://files.catbox.moe/96i90x.flac)
[https://files.catbox.moe/zot9nu.flac](https://files.catbox.moe/zot9nu.flac)
There is a DPO trained one that just came out, I saw no examples for that: [https://huggingface.co/fredconex/SongBloom-Safetensors/blob/main/songbloom\_full\_150s\_dpo.safetensors](https://huggingface.co/fredconex/SongBloom-Safetensors/blob/main/songbloom_full_150s_dpo.safetensors) | 2025-09-18T15:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nkbrk1/local_suno_just_dropped/ | Different_Fix_2217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkbrk1 | false | null | t3_1nkbrk1 | /r/LocalLLaMA/comments/1nkbrk1/local_suno_just_dropped/ | false | false | self | 474 | {'enabled': False, 'images': [{'id': '0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=108&crop=smart&auto=webp&s=898f505c9bcb7b44da27d0dfa4696d360b00d8f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=216&crop=smart&auto=webp&s=205ae3af983f7e40b088ea87d033f9fc210b1ad8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=320&crop=smart&auto=webp&s=6d91f65aa023aac8794708c8f8581ba960166cfa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=640&crop=smart&auto=webp&s=715710f00bb1ecdc891a71ffd9ad3d88943ad9c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=960&crop=smart&auto=webp&s=de8ce9ffcc0b2ff5a2396fcac64085f8c9607a0e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?width=1080&crop=smart&auto=webp&s=83da8c0ecae2baae9a598e16b4d8432857679577', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0madv9n-jRmSA4yqmlaooVPTBKjVWGBP05OrZ3jpZho.png?auto=webp&s=dfdc4d5f84989641017a6abaa18400b21282ea7d', 'width': 1200}, 'variants': {}}]} |
Anyone here tried NVIDIA’s LLM-optimized VM setups for faster workflows? | 1 | Lately I’ve been looking into ways to speed up LLM workflows (training, inference, prototyping) without spending hours setting up CUDA, PyTorch, and all the dependencies manually.
From what I see, there are preconfigured GPU-accelerated VM images out there that already bundle the common libraries (PyTorch, TensorFlow, RAPIDS, etc.) plus JupyterHub for collaboration.
Curious if anyone here has tested these kinds of “ready-to-go” LLM VMs in production or for research:
Do they really save you setup time vs just building your own environment?
Any hidden trade-offs (cost, flexibility, performance)?
Are you using something like this on AWS, Azure, or GCP? | 2025-09-18T15:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nkbfp5/anyone_here_tried_nvidias_llmoptimized_vm_setups/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nkbfp5 | false | null | t3_1nkbfp5 | /r/LocalLLaMA/comments/1nkbfp5/anyone_here_tried_nvidias_llmoptimized_vm_setups/ | false | false | self | 1 | null |