| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Tracking MCP Server Growth: 1,150+ servers and climbing | 2 | 2025-10-12T13:49:26 | https://martinalderson.com/posts/tracking-mcp-server-growth/ | malderson | martinalderson.com | 1970-01-01T00:00:00 | 0 | {} | 1o4pk9x | false | null | t3_1o4pk9x | /r/LocalLLaMA/comments/1o4pk9x/tracking_mcp_server_growth_1150_servers_and/ | false | false | 2 | {'enabled': False, 'images': [{'id': '8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=108&crop=smart&auto=webp&s=026f2a90fac2076aa4156bf9b14fea53d1c57865', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=216&crop=smart&auto=webp&s=2ab5004050c7fc6147ed66141db2b150f93e91b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=320&crop=smart&auto=webp&s=96132c3cc02c7b2271fa6edd02131d1efa8f618f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=640&crop=smart&auto=webp&s=e74dae782f8b7043cfc210e3929a58d3e1abbbb1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=960&crop=smart&auto=webp&s=4d4b4cbe5d3c84bcbc8ec3219e43d56eee593afb', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?width=1080&crop=smart&auto=webp&s=31f3191865cb5d9684295492a2d92f5417241b30', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8Cu81YajYCaT4p00UYebB_W6eiURi1xEFXqNrO8eoW0.png?auto=webp&s=f9174cde039a5fa9a4f704e996e3964d1a77c2c2', 'width': 1200}, 'variants': {}}]} | ||
Question about power-cheap and economical solution for selfhosting | 4 | Hello, I come here because, after some research, I am currently thinking of self-hosting AI but am curious about the hardware to buy.
Originally, I wanted to buy an M1 Max with 32GB of RAM and put some LLM on it.
After some research I am considering a *Yahboom Jetson Orin Nano Super 8GB Development Board Kit (67 TOPS)* on one hand for my dev needs, running Ministral or Phi, and on the other, buying a *Google Coral USB* for one of my servers (24GB of RAM) for everything else, which would mostly be *stupid questions that I want answered fast*, shared with my gf.
I want to prioritize power consumption. My budget is around 1k EUR, which is the price I could get a second-hand M1 Max with 32GB of RAM for.
Thanks | 2025-10-12T13:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o4pgh5/question_about_powercheap_and_economical_solution/ | XenYaume | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4pgh5 | false | null | t3_1o4pgh5 | /r/LocalLLaMA/comments/1o4pgh5/question_about_powercheap_and_economical_solution/ | false | false | self | 4 | null |
No Prob Llama Roller Skating ! | 1 | LocalLLaMA | 2025-10-12T13:43:21 | https://www.reddit.com/gallery/1o4pfce | Available-Grade-4415 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o4pfce | false | null | t3_1o4pfce | /r/LocalLLaMA/comments/1o4pfce/no_prob_llama_roller_skating/ | false | false | 1 | null | |
Very interesting! OmniInsert — mask-free video insertion of any reference | 8 | New diffusion-transformer method that inserts a referenced subject into a source video **without masks**, with robust demos and a technical report. Paper + project page are live; repo is up—**eager to test once code & weights drop**.
* Highlights: InsertPipe data pipeline, condition-specific feature injection, progressive training; introduces **InsertBench**. [arXiv](https://arxiv.org/abs/2509.17627?utm_source=chatgpt.com)
* Status: Apache-2.0 repo; no releases yet; open issue requesting HF models/dataset; arXiv says “code will be released.”
[**https://phantom-video.github.io/OmniInsert/**](https://phantom-video.github.io/OmniInsert/) | 2025-10-12T13:29:16 | https://www.reddit.com/r/LocalLLaMA/comments/1o4p3vj/very_interesting_omniinsert_maskfree_video/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4p3vj | false | null | t3_1o4p3vj | /r/LocalLLaMA/comments/1o4p3vj/very_interesting_omniinsert_maskfree_video/ | false | false | self | 8 | null |
Local llm grounding? | 3 | Hey, I was wondering if there are any open-source projects that allow LLMs to web-search for relevant info before answering, right out of the box?
LM Studio is nice but doesn't have this feature, and as small models are typically very limited in knowledge, inserting some knowledge automatically via RAG or just the top x search results into the context behind the scenes would boost them quite a bit. | 2025-10-12T13:07:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o4olyx/local_llm_grounding/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4olyx | false | null | t3_1o4olyx | /r/LocalLLaMA/comments/1o4olyx/local_llm_grounding/ | false | false | self | 3 | null |
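For anyone sketching the "top-x search results in the context" idea from the grounding question above, here is a minimal outline. It assumes an OpenAI-compatible local server (llama.cpp server, LM Studio, etc.) on localhost and a placeholder `web_search()` that you would back with whatever search API you have; it is not taken from any existing project.

```python
import requests

def web_search(query: str, k: int = 3) -> list[str]:
    """Placeholder: return the top-k result snippets from whatever search API you use."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    # Fetch snippets and prepend them to the prompt so the small model has facts to lean on.
    snippets = "\n".join(f"- {s}" for s in web_search(question))
    messages = [
        {"role": "system", "content": "Answer using the web snippets below when relevant.\n" + snippets},
        {"role": "user", "content": question},
    ]
    r = requests.post(
        "http://localhost:1234/v1/chat/completions",  # LM Studio default port; adjust for your server
        json={"model": "local-model", "messages": messages},
        timeout=120,
    )
    return r.json()["choices"][0]["message"]["content"]
```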
I benchmarked my Redmagic 9 Pro phone, initially to find out whether the BLAS batch size parameter had an observable effect on performance, and got some interesting results. | 11 | Phone maker and model: Redmagic 9 Pro 512/16GB, released Jan. 03 2024.
Results :
* Basically a wash on prompt processing speeds ;
* Some interesting results on the 100 tokens generations, including massive outliers I have no explanation for ;
* Going from 3840 to 4096 context window sizes increased the PP and generation speeds slightly.
Notes :
* Ran on Termux, KoboldCpp compiled on-device ;
* 100% battery. Power consumption stood at around 7.5 to 9W at the wall, factory phone charger losses included ;
* Choice of number of threads: going from 3 to 6 threads registered a great boost in speeds, while 7 threads halved the results obtained at 6 threads. 8 threads not tested. Hypothesis: all cores run at the same frequency, and the slowest cores slow the rest too much to be worth adding to the process. KoboldCpp notes "6 threads and 6 BLAS threads" were spawned ;
* Choice of quant: Q4\_0 allows using the Llama.cpp improvements for ARM with memory interleaving, increasing performance ; I have observed Q4\_K\_M models running single-digit speeds at under 1k context window usage ;
* Choice of KV quant: Q8 was basically for the compromise on memory usage, considering the device used. I only evaluated whether the model was coherent on a random topic repeatedly ("A wolf has entered my house, what do I do? AI: <insert short response here> User: Thank you. Any other advice? AI: <insert 240+ tokens response here>") before using it for the benchmark ;
* FlashAttention: this one I was divided on, but settled on using it because KoboldCpp highly discourages using QuantKV without it, citing possible higher memory usage than without QuantKV at all ;
* I highly doubt KoboldCpp uses the Qualcomm Hexagon NPU at all ;
* htop reported RAM usage went up from 8.20GB to 10.90GB which corresponds to the model size, while KoboldCpp reported 37.72MiB for llama\_context at 4096 context window. I'm surprised by this "small" memory footprint for the context.
* This benchmark session took the better part of 8 hours ;
* While the memory footprint of the context allowed for testing larger context windows, going all the way to 8192 context window size would take an inordinate amount of time to benchmark.
If you think other parameters can improve those charts, I'll be happy to try a few of them! | 2025-10-12T11:43:37 | https://www.reddit.com/gallery/1o4mwtv | PurpleWinterDawn | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o4mwtv | false | null | t3_1o4mwtv | /r/LocalLLaMA/comments/1o4mwtv/i_benchmarked_my_redmagic_9_pro_phone_initially/ | false | false | 11 | null | |
How do you benchmark the cognitive performance of local LLM models? | 6 | Hey everyone,
I’ve been experimenting with running local LLMs (mainly open-weight models from Hugging Face) and I’m curious about **how to systematically benchmark their cognitive performance** — not just speed or token throughput, but things like reasoning, memory, comprehension, and factual accuracy.
I know about `lm-evaluation-harness`, but it’s pretty cumbersome to run manually for each model. I’m wondering if:
* there’s any **online tool or web interface** that can run multiple benchmarks automatically (similar to Hugging Face’s Open LLM Leaderboard, but for local models), or
* a more **user-friendly script or framework** that can test reasoning / logic / QA performance locally without too much setup.
Any suggestions, tools, or workflows you’d recommend?
Thanks in advance! | 2025-10-12T11:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/1o4mwlp/how_do_you_benchmark_the_cognitive_performance_of/ | LastikPlastic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4mwlp | false | null | t3_1o4mwlp | /r/LocalLLaMA/comments/1o4mwlp/how_do_you_benchmark_the_cognitive_performance_of/ | false | false | self | 6 | null |
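Not an online tool, but one way to take the manual pain out of the harness mentioned above is to script it over a list of local models. A rough sketch, assuming `lm-evaluation-harness` is installed (`pip install lm-eval`) and that the model paths and task names are placeholders:

```python
import subprocess

models = ["path/to/model-a", "path/to/model-b"]   # local HF-format checkpoints (placeholders)
tasks = "hellaswag,arc_easy,gsm8k"                # rough reasoning / comprehension / math mix

for m in models:
    # One lm_eval run per model; results are written as JSON for later comparison.
    subprocess.run([
        "lm_eval",
        "--model", "hf",
        "--model_args", f"pretrained={m}",
        "--tasks", tasks,
        "--batch_size", "auto",
        "--output_path", f"results/{m.split('/')[-1]}",
    ], check=True)
```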
GPU Poor LLM Arena is BACK! 🎉🎊🥳 | 520 | **🚀 GPU Poor LLM Arena is BACK! New Models & Updates!**
Hey everyone,
First off, a massive apology for the extended silence. Things have been a bit hectic, but the GPU Poor LLM Arena is officially back online and ready for action! Thanks for your patience and for sticking around.
**🚀 Newly Added Models:**
* Granite 4.0 Small Unsloth (32B, 4-bit)
* Granite 4.0 Tiny Unsloth (7B, 4-bit)
* Granite 4.0 Micro Unsloth (3B, 8-bit)
* Qwen 3 Instruct 2507 Unsloth (4B, 8-bit)
* Qwen 3 Thinking 2507 Unsloth (4B, 8-bit)
* Qwen 3 Instruct 2507 Unsloth (30B, 4-bit)
* OpenAI gpt-oss Unsloth (20B, 4-bit)
**🚨 Important Notes for GPU-Poor Warriors:**
* Please be aware that Granite 4.0 Small, Qwen 3 30B, and OpenAI gpt-oss models are quite bulky. Ensure your setup can comfortably handle them before diving in to avoid any performance issues.
* I've decided to default to Unsloth GGUFs for now. In many cases, these offer valuable bug fixes and optimizations over the original GGUFs.
I'm happy to see you back in the arena, testing out these new additions! | 2025-10-12T11:43:02 | https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena | kastmada | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o4mwet | false | null | t3_1o4mwet | /r/LocalLLaMA/comments/1o4mwet/gpu_poor_llm_arena_is_back/ | false | false | default | 520 | {'enabled': False, 'images': [{'id': 'xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=108&crop=smart&auto=webp&s=0cdddaee9fbd292bf4c72ae29fb8d38ffe50fcfd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=216&crop=smart&auto=webp&s=97d57b4eed2bf0cee56acb5ae5af8141546d1167', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=320&crop=smart&auto=webp&s=6d78a07974dd09abcc40a5ace2a1b3e598f11c94', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=640&crop=smart&auto=webp&s=49fb1bf881d9a00b0e731a0269d44b4ea6c31968', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=960&crop=smart&auto=webp&s=38f0aaef9f70003d476a782b17ef8bc1b6959bdd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?width=1080&crop=smart&auto=webp&s=eb0fe012c8804c95ee9b3dde01421400dc546b97', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xnvppfD8q4Rvrqs00KT2LLxfAKmO_ypt1REhqFgxlVw.png?auto=webp&s=669024ca053eaab0f4a27cd26a959f3210e6aed7', 'width': 1200}, 'variants': {}}]} |
embedding model which is good for short term tokens? | 1 | Hi, I'm trying to RAG on an XML document in order to be able to ask it several questions about various parts of the document which can be connected semantically.
The idea is that instead of actually taking the tags, I flatten them, so instead of XML
you'll have
a.b.c=2328
The thing is, some of those XML tags represent entities with odd names such as ab-23-north and mk-28-clean,
so I would like to pose a query such as "what's the distance between entity ab-23-north and mk-28-clean?", as they both have x,y coordinates. (Note that not all entities have such identifications, and not all entities are the same type either; you could have another entity which is a polygon.)
I could ask, for example, how many entities are inside polygon x, whether certain entities have their properties configured properly based on some knowledge document someone wrote, etc.
So I'm wondering what type of embedding is good for this.
<a>
<b>
<c>23ssd</c>
</b>
</a> | 2025-10-12T11:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o4mmah/embedding_model_which_is_good_for_short_term/ | emaayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4mmah | false | null | t3_1o4mmah | /r/LocalLLaMA/comments/1o4mmah/embedding_model_which_is_good_for_short_term/ | false | false | self | 1 | null |
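Separately from the embedding-model question above, a minimal sketch of the flattening step described in that post, using only Python's standard library (attributes and the odd entity names are left out; the structure is just an example):

```python
import xml.etree.ElementTree as ET

def flatten(elem, prefix=""):
    """Yield 'a.b.c=value' strings for every leaf element in the tree."""
    path = f"{prefix}.{elem.tag}" if prefix else elem.tag
    children = list(elem)
    if not children:
        yield f"{path}={(elem.text or '').strip()}"
    for child in children:
        yield from flatten(child, path)

root = ET.fromstring("<a><b><c>23ssd</c></b></a>")
print(list(flatten(root)))  # ['a.b.c=23ssd']
```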
Local open source AI-sheets? | 11 | Is there any solution for local and open source AI that generates content based on an Excel sheet or preferably something web-based?
The use case is to generate content based on other column, try to fill gaps, etc. | 2025-10-12T11:24:50 | SuddenWerewolf7041 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o4mkve | false | null | t3_1o4mkve | /r/LocalLLaMA/comments/1o4mkve/local_open_source_aisheets/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'u9r363k61ouf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/u9r363k61ouf1.png?width=108&crop=smart&auto=webp&s=f692d967e5aaa846f1492000dd1efad0d3f23d45', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/u9r363k61ouf1.png?width=216&crop=smart&auto=webp&s=cf32eb6e7b744e396073fe4e2b4a32f45c962c0a', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/u9r363k61ouf1.png?width=320&crop=smart&auto=webp&s=6801691f6a13702b411221cd785bd9fa2cb388da', 'width': 320}], 'source': {'height': 282, 'url': 'https://preview.redd.it/u9r363k61ouf1.png?auto=webp&s=10600f287d4929924d81f78373bbc9e5c62e58bd', 'width': 434}, 'variants': {}}]} | |
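A minimal sketch of the column-fill idea above, assuming pandas plus any OpenAI-compatible local endpoint; the file name, column names, and URL are placeholders:

```python
import pandas as pd
import requests

def complete(prompt: str) -> str:
    # Any OpenAI-compatible local server works here (llama.cpp server, LM Studio, etc.).
    r = requests.post("http://localhost:8080/v1/chat/completions",
                      json={"model": "local", "messages": [{"role": "user", "content": prompt}]},
                      timeout=120)
    return r.json()["choices"][0]["message"]["content"].strip()

df = pd.read_excel("products.xlsx")      # placeholder spreadsheet
mask = df["description"].isna()          # only fill the gaps
df.loc[mask, "description"] = df[mask].apply(
    lambda row: complete(f"Write a one-sentence product description for: {row['name']}"), axis=1)
df.to_excel("products_filled.xlsx", index=False)
```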
Why has Meta research failed to deliver a foundational model at the level of Grok, Deepseek or GLM? | 241 | They have been in the space for longer - could have attracted talent earlier, and their means are comparable to other big tech. So why have they been outcompeted so heavily? I get that they are currently one generation behind and the Chinese did some really clever wizardry which allowed them to eke a lot more out of every iota of compute. But what about xAI? They compete for the same talent and had to start from scratch. Or was starting from scratch actually an advantage here? Or is it just a matter of how many key ex-OpenAI employees each company was capable of attracting - trafficking out the trade secrets? | 2025-10-12T11:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1o4mdkh/why_has_meta_research_failed_to_deliver/ | External_Natural9590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4mdkh | false | null | t3_1o4mdkh | /r/LocalLLaMA/comments/1o4mdkh/why_has_meta_research_failed_to_deliver/ | false | false | self | 241 | null |
LM Studio: no new runtimes in weeks..? | 9 | Pardon the hyperbole and sorry to bother, but since the release of GLM-4.6 on Sep. 30 (roughly two weeks ago), I have been checking daily on LM Studio whether new runtimes are provided to finally run the successor to my favourite model, GLM-4.5. [I was told](https://www.reddit.com/r/LocalLLaMA/comments/1nw4sv6/unsloth_glm46_gguf_doesnt_work_in_lm_studio/) their current runtime v1.52.1 is based on llama.cpp's b6651, with b6653 (just two releases later) adding support for GLM-4.6. Meanwhile, as of writing, llama.cpp is on release b6739.
@ LM Studio, thank you so much for your *amazing* platform, and sorry that we cannot contribute to your tireless efforts in proliferating Local LLMs. (obligatory "open-source when?")
I sincerely hope you are doing alright... | 2025-10-12T11:04:23 | https://www.reddit.com/r/LocalLLaMA/comments/1o4m7yt/lm_studio_no_new_runtimes_since_weeks/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4m7yt | false | null | t3_1o4m7yt | /r/LocalLLaMA/comments/1o4m7yt/lm_studio_no_new_runtimes_since_weeks/ | false | false | self | 9 | null |
Help with RTX6000 Pros and vllm | 5 | So at work we were able to scrape together the funds to get a server with 6 x RTX 6000 Pro Blackwell server editions, and I want to setup vLLM running in a container. I know support for the card is still maturing, I've tried several different posts claiming someone got it working, but I'm struggling. Fresh Ubuntu 24.04 server, cuda 13 update 2, nightly build of pytorch for cuda 13, 580.95 driver. I'm compiling vLLM specifically for sm120. The cards show up running Nvidia-smi both in and out of the container, but vLLM doesn't see them when I try to load a model. I do see some trace evidence in the logs of a reference to sm100 for some components. Does anyone have a solid dockerfile or build process that has worked in a similar environment? I've spent two days on this so far so any hints would be appreciated. | 2025-10-12T11:02:48 | https://www.reddit.com/r/LocalLLaMA/comments/1o4m71e/help_with_rtx6000_pros_and_vllm/ | TaiMaiShu-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4m71e | false | null | t3_1o4m71e | /r/LocalLLaMA/comments/1o4m71e/help_with_rtx6000_pros_and_vllm/ | false | false | self | 5 | null |
I have an interview scheduled after 2 days from now and I'm hoping to get a few suggestions on how to best prepare myself to crack it. These are the possible topics which will have higher focus | 20 | 2025-10-12T10:02:47 | alone_musk18 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o4l7qk | false | null | t3_1o4l7qk | /r/LocalLLaMA/comments/1o4l7qk/i_have_an_interview_scheduled_after_2_days_from/ | false | false | default | 20 | {'enabled': True, 'images': [{'id': '1rpwzkgqmnuf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=108&crop=smart&auto=webp&s=d0ed3012e9957df7af68ff106cd1fe885ede78e1', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=216&crop=smart&auto=webp&s=26184f183ddfce744400d6d37297f5751d029e03', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=320&crop=smart&auto=webp&s=6e56a94787ba054e250bde4ac7cf59194d901a6f', 'width': 320}, {'height': 487, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=640&crop=smart&auto=webp&s=afa5a5bad4ea1892710d6879e8e3673370dee73e', 'width': 640}, {'height': 730, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=960&crop=smart&auto=webp&s=dbc749a3b090240e8e4c487a6c73f1a5842f4de9', 'width': 960}, {'height': 822, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?width=1080&crop=smart&auto=webp&s=bb4a7d5c592b05c7fb3f8fcd5a7ae08237caca04', 'width': 1080}], 'source': {'height': 822, 'url': 'https://preview.redd.it/1rpwzkgqmnuf1.png?auto=webp&s=83b1ad7f36ab401b471d0b4d56642534666bc35f', 'width': 1080}, 'variants': {}}]} | ||
Deep Think with Confidence | 1 | Hi everyone, sorry if my English is terrible 🙏
Do you know of any models that implement these functions?
👇
[https://arxiv.org/abs/2508.15260](https://arxiv.org/abs/2508.15260)
I found this but I'm having a lot of trouble installing it.
👇
[https://github.com/facebookresearch/deepconf](https://github.com/facebookresearch/deepconf)
👇
[https://jiaweizzhao.github.io/deepconf/](https://jiaweizzhao.github.io/deepconf/) | 2025-10-12T09:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o4kyo9/deep_think_with_confidence/ | Temporary-Roof2867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4kyo9 | false | null | t3_1o4kyo9 | /r/LocalLLaMA/comments/1o4kyo9/deep_think_with_confidence/ | false | false | self | 1 | null |
Help me brainstorm a webapp | 0 | Thinking of building a webapp that can render 3D architecture models like Autodesk Revit in the browser (IFC models). The prime purpose is to let architects (users) navigate the structure and use their perspective view to generate a realistic image of it with nano banana, then edit the image as they like through a copilot-like interface, possibly for interior decoration, a realistic render-like view, changing materials, etc. It will be somewhat like how Revit works with the vectra plugin, and it will let users skip the time-consuming rendering process.
Also possibly some basic BIM editing features and 3D asset generation using the Hunyuan3D-2 model for assets like furniture, decorations, etc. I also need advice on choosing between Hunyuan and Trellis for this. If possible it would be great to run Trellis in-browser using WebGPU, since it is only a 2B model.
Would it be a useful/viable product? | 2025-10-12T09:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1o4kweq/help_me_brainstorm_a_webapp/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4kweq | false | null | t3_1o4kweq | /r/LocalLLaMA/comments/1o4kweq/help_me_brainstorm_a_webapp/ | false | false | self | 0 | null |
Recommendation for a relatively small local LLM model and environment | 1 | I have an M2 Macbook Pro with 16 GB RAM.
I want to use a local LLM mostly to go over work logs (tasks, meeting notes, open problems, discussions, ...) for review and planning (LLM summarizes, suggests, points out on different timespans), so not very deep or sophisticated intelligence work.
What would you recommend currently as the best option, in terms of the actual model and the environment in which the model is obtained and served, if I want relative ease of use through terminal? | 2025-10-12T09:18:23 | https://www.reddit.com/r/LocalLLaMA/comments/1o4kive/recommendation_for_a_relatively_small_local_llm/ | __kex__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4kive | false | null | t3_1o4kive | /r/LocalLLaMA/comments/1o4kive/recommendation_for_a_relatively_small_local_llm/ | false | false | self | 1 | null |
Please tell me what you think. | 1 | [removed] | 2025-10-12T08:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o4jmn2/please_tell_me_what_you_think/ | HauntingStranger4858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4jmn2 | false | null | t3_1o4jmn2 | /r/LocalLLaMA/comments/1o4jmn2/please_tell_me_what_you_think/ | false | false | 1 | null | |
Help me finish my LLM benchmark tool | 1 | [removed] | 2025-10-12T08:12:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o4jht8/help_me_finish_my_llm_benchmark_tool/ | HauntingStranger4858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4jht8 | false | null | t3_1o4jht8 | /r/LocalLLaMA/comments/1o4jht8/help_me_finish_my_llm_benchmark_tool/ | false | false | 1 | null | |
Help me finish my LLM benchmark tool | 1 | [removed] | 2025-10-12T07:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o4iz60/help_me_finish_my_llm_benchmark_tool/ | HauntingStranger4858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4iz60 | false | null | t3_1o4iz60 | /r/LocalLLaMA/comments/1o4iz60/help_me_finish_my_llm_benchmark_tool/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=108&crop=smart&auto=webp&s=cbb7b5eebaed12e6f919781546415e015288e273', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=216&crop=smart&auto=webp&s=7534a032824b15aceb4a7a48d2e52248d927e251', 'width': 216}, {'height': 206, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=320&crop=smart&auto=webp&s=5eb4098b413206832f4beeff9867b6e2032fd0c8', 'width': 320}, {'height': 412, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=640&crop=smart&auto=webp&s=44ad35bac6c04aa5998606536c1a39a0bef33275', 'width': 640}, {'height': 618, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=960&crop=smart&auto=webp&s=70a9f316b4ce379e5c0afc77e92fc001809edb05', 'width': 960}, {'height': 695, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?width=1080&crop=smart&auto=webp&s=e4a441387ccc1bb6092f124e8d40596a8b46851d', 'width': 1080}], 'source': {'height': 1242, 'url': 'https://external-preview.redd.it/NCAkhz5gfC0WGpxirUFnVhX5YhNEqWDq8LhsAF8AAo0.jpeg?auto=webp&s=0f62c13264824381e810b4e28eb17b52bd783cdd', 'width': 1928}, 'variants': {}}]} | |
Help me finish my LLM benchmark tool – I’m too lazy! | 1 | [removed] | 2025-10-12T07:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o4itvs/help_me_finish_my_llm_benchmark_tool_im_too_lazy/ | HauntingStranger4858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4itvs | false | null | t3_1o4itvs | /r/LocalLLaMA/comments/1o4itvs/help_me_finish_my_llm_benchmark_tool_im_too_lazy/ | false | false | 1 | null | |
I built an open-source repo to learn and apply AI Agentic Patterns | 11 | Hey everyone 👋
I’ve been experimenting with how AI agents actually *work in production* — beyond simple prompt chaining. So I created an **open-source project** that demonstrates **30+ AI Agentic Patterns**, each in a single, focused file.
Each pattern covers a core concept like:
* Prompt Chaining
* Multi-Agent Coordination
* Reflection & Self-Correction
* Knowledge Retrieval
* Workflow Orchestration
* Exception Handling
* Human-in-the-loop
* And more advanced ones like Recursive Agents & Code Execution
✅ Works with OpenAI, Gemini, Claude, Fireworks AI, Mistral, and even **Ollama** for local runs.
✅ Each file is self-contained — perfect for learning or extending.
✅ Open for contributions, feedback, and improvements!
You can check the full list and examples in the README here:
🔗 [https://github.com/learnwithparam/ai-agents-pattern](https://github.com/learnwithparam/ai-agents-pattern)
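To make the first pattern above concrete, here's a minimal prompt-chaining sketch (not lifted from the repo; `call_llm()` is a stand-in for whichever provider you wire up):

```python
def call_llm(prompt: str) -> str:
    """Stand-in for an OpenAI / Gemini / Ollama call; wire up your provider here."""
    raise NotImplementedError

def summarize_then_translate(text: str) -> str:
    # Step 1: run the first prompt and keep its output.
    summary = call_llm(f"Summarize in three bullet points:\n{text}")
    # Step 2: chain that intermediate result into the next prompt.
    return call_llm(f"Translate these bullet points to French:\n{summary}")
```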
Would love your feedback — especially on:
1. Missing patterns worth adding
2. Ways to make it more beginner-friendly
3. Real-world examples to expand
Let’s make AI agent design patterns as clear and reusable as software design patterns once were. | 2025-10-12T07:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o4im2n/i_built_an_opensource_repo_to_learn_and_apply_ai/ | learnwithparam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4im2n | false | null | t3_1o4im2n | /r/LocalLLaMA/comments/1o4im2n/i_built_an_opensource_repo_to_learn_and_apply_ai/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=108&crop=smart&auto=webp&s=0084091a4e18d33c13abe3950e7848d04437d144', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=216&crop=smart&auto=webp&s=568d014fe86fc897bafb2d35a3cd2c39277aca03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=320&crop=smart&auto=webp&s=260d174625d5a7c0a060e879fe4e260869bf881f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=640&crop=smart&auto=webp&s=fdd9ecdce7c1103762eefbc0b6e828c1d16688e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=960&crop=smart&auto=webp&s=f351b938c4e5021925ff936f876a467d82492247', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?width=1080&crop=smart&auto=webp&s=51830f198cd1053709ad68f6b8e116fca446d4b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bS_xHArxDGeb7yFAkcSVcJbEjIcazq1udPyRBpP1D-o.png?auto=webp&s=acd6141ca9e938ac5871f1a32da12a0bb4bc8015', 'width': 1200}, 'variants': {}}]} |
I feel goooood | 0 | Something is going well! I have a lot to learn, though.
Get ready, everyone! If the results are satisfactory, I'll release it on GitHub.
Oh, but since I only have Korean datasets, it probably won't be very good at English... | 2025-10-12T06:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o4i6v4/i_feel_goooood/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4i6v4 | false | null | t3_1o4i6v4 | /r/LocalLLaMA/comments/1o4i6v4/i_feel_goooood/ | false | false | self | 0 | null |
KoboldCpp now supports video generation | 140 | 2025-10-12T06:32:40 | https://github.com/LostRuins/koboldcpp/releases/latest | fish312 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o4hxqe | false | null | t3_1o4hxqe | /r/LocalLLaMA/comments/1o4hxqe/koboldcpp_now_supports_video_generation/ | false | false | 140 | {'enabled': False, 'images': [{'id': 'n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=108&crop=smart&auto=webp&s=901c79827463c68d29dba482acddc86f905280f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=216&crop=smart&auto=webp&s=e2db123488c69bcb5507b2554daac5293b0e70cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=320&crop=smart&auto=webp&s=9e9094fc4479e45e2a1a00cb4e5064a7f4c29b22', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=640&crop=smart&auto=webp&s=e0f4fc22015a57b49dec0643ef6f0d2a92f83c37', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=960&crop=smart&auto=webp&s=8e3bdf581e8a7ac439a944274f602fda488a4f31', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?width=1080&crop=smart&auto=webp&s=44c2aaf1b6ae2ff90f6d24898e3063f7f87f7a8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n7QpKCCkcHUBLj4nC-Lh95amFG6mzdqatT5L5ZA_y1k.png?auto=webp&s=bf07709bffa4a31cea3802da2abfcef41b396375', 'width': 1200}, 'variants': {}}]} | ||
Good balance between RP and instructions | 4 | Hi all, I’ve been playing for a while with several LLMs for a project I’m working on that requires the LLM to:
- Follow instructions regarding text output (mainly things like adding BBCode that require opening/closing tags)
- Ability to read JSON in messages correctly
- Be decent at creating vivid descriptions of locations, engaging conversations while still respecting some form of scope boundaries.
Some context about the project; I’m aiming to create an interactive experience that puts the user in charge of running an alchemy shop. It’s basically inventory management with dynamic conversations :-)
I tried a few LLMs:
- Qwen3 instruct: very good instruction wise, but I feel it lacks something
- Stheno: Very good at roleplaying, bad at instructions (when I asked it, it told me it “glances over” instructions like the ones I need)
- Claude: Pretty good, but it started doing its own thing and disregarded my instructions.
This project started off as an experiment a few weeks ago but snowballed into something I'd like to finish; most parts are done (the player can talk to multiple unique characters running their own prompts, moving between locations works, characters can move between locations, and drilling down into items to explore them works too). I'm using Qwen3-4B Instruct right now and while that works pretty smoothly, I'm missing the “cozy” descriptions/details Stheno came up with.
As a newcomer in the world of LLMs there’s way too many and I was hoping someone here could guide me to some LLMs I could try that would fit my requirements?
| 2025-10-12T06:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1o4hqm7/good_balance_between_rp_and_instructions/ | GarmrNL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4hqm7 | false | null | t3_1o4hqm7 | /r/LocalLLaMA/comments/1o4hqm7/good_balance_between_rp_and_instructions/ | false | false | self | 4 | null |
Please recommend me local models based on my specs | 0 | I have the following
Ryzen 7800x3d
64GB DDR5 RAM
Rtx 5080 16gb vram
I am new to this and for now am only interested in general questions and, if possible, image-based questions
I have Ollama with open web ui in docker and I also have lm studio if it matters
Please and thank you | 2025-10-12T05:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o4h82r/please_recommend_me_local_models_based_on_my_specs/ | zeek988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4h82r | false | null | t3_1o4h82r | /r/LocalLLaMA/comments/1o4h82r/please_recommend_me_local_models_based_on_my_specs/ | false | false | self | 0 | null |
Trying to get Ollama to use Radeon RX 6800S GPU | 2 | I’m running Pop!OS on a Zephyrus G14 (Ryzen 9 6900HS, Radeon RX 6800S, 32GB RAM) and trying to get Ollama to use the GPU.
ROCm installation keeps failing due to version conflicts — it seems support for this GPU starts at 6.0.
Before I waste hours troubleshooting, does anyone have a recent guide or confirmed setup for using Ollama or text-generation-webui with an AMD GPU (RDNA2 / RX 6800S) on Linux?
(I am new to this) | 2025-10-12T05:45:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o4h5pl/trying_to_get_ollama_to_use_radeon_rx_6800s_gpu/ | iKf8ui | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4h5pl | false | null | t3_1o4h5pl | /r/LocalLLaMA/comments/1o4h5pl/trying_to_get_ollama_to_use_radeon_rx_6800s_gpu/ | false | false | self | 2 | null |
I got this error with kokoro tts card 5060 TI, anyone know how to fix it? | 1 | [I'm having trouble running Kokoro TTS on my RTX 5060 Ti; has anyone else run into this issue or found a fix?](https://preview.redd.it/q5ngmzvw8muf1.png?width=1085&format=png&auto=webp&s=4bca755b9c02d4131bf556191f964732b629485a)
https://preview.redd.it/g6fch1hx8muf1.png?width=1046&format=png&auto=webp&s=77a98b72f0a005cbd42f3af8735d6501cac9a633
| 2025-10-12T05:23:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o4gt8c/i_got_this_error_with_kokoro_tts_card_5060_ti/ | Low_Round_2941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4gt8c | false | null | t3_1o4gt8c | /r/LocalLLaMA/comments/1o4gt8c/i_got_this_error_with_kokoro_tts_card_5060_ti/ | false | false | 1 | null | |
Llama5 is cancelled long live llama | 332 | 2025-10-12T05:20:06 | SelectionCalm70 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o4gqv0 | false | null | t3_1o4gqv0 | /r/LocalLLaMA/comments/1o4gqv0/llama5_is_cancelled_long_live_llama/ | false | false | default | 332 | {'enabled': True, 'images': [{'id': 'un7exota8muf1', 'resolutions': [{'height': 148, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=108&crop=smart&auto=webp&s=bd54f15fddb520bb40cbb8d33aba40fd303651ac', 'width': 108}, {'height': 297, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=216&crop=smart&auto=webp&s=49dd37bfd72a7ea0004eef88a5a415964605b468', 'width': 216}, {'height': 440, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=320&crop=smart&auto=webp&s=e1d2fdd7af2b9f6ec6f83088a8c1c55a22eb75ff', 'width': 320}, {'height': 880, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=640&crop=smart&auto=webp&s=88cfa4523e8a7104c067d423fd7f685e54e54432', 'width': 640}, {'height': 1320, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=960&crop=smart&auto=webp&s=772af5906f9e72cf16c0734e927e1d7bf9ed5c85', 'width': 960}, {'height': 1486, 'url': 'https://preview.redd.it/un7exota8muf1.png?width=1080&crop=smart&auto=webp&s=d28bef6fb16b0aafa2d2a3f977503c9e701b8040', 'width': 1080}], 'source': {'height': 1486, 'url': 'https://preview.redd.it/un7exota8muf1.png?auto=webp&s=cd463303fc941d997b7743bab6d8bc7cdc1392aa', 'width': 1080}, 'variants': {}}]} | ||
How to handle long running tools in realtime conversations. | 8 | Hi everyone.
I've been working on a realtime agent that has access to different tools for my client. Some of those tools might take a few seconds or even sometimes minutes to finish.
Because of the sequential behavior of models it just forces me to stop talking or cancels the tool call if I interrupt.
Did anyone here have this problem? How did you handle it?
I know pipecat has async tool calls done with some orchestration, but I've tried this pattern and it kinda works with GPT-5; for any other model, replacing the tool result in the past just screws it up and it has no idea what just happened. Similarly with Claude. Gemini is the worst of them all.
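To make it concrete, this is the rough shape of what I mean by letting the conversation keep going while the tool runs: acknowledge the call immediately, finish the work in the background, and inject the result as a later turn. Asyncio pseudo-structure only; `run_tool` and the message format are placeholders, not any framework's real API.

```python
import asyncio

async def run_tool(name: str, args: dict) -> str:
    await asyncio.sleep(30)          # stands in for the slow tool (seconds to minutes)
    return "...tool output..."

async def handle_tool_call(name: str, args: dict, conversation: list) -> None:
    # 1) Reply to the model right away so the spoken dialogue isn't blocked.
    conversation.append({"role": "tool", "name": name,
                         "content": "Started. I'll report back when it's done."})

    async def finish() -> None:
        result = await run_tool(name, args)
        # 2) Inject the real result as a new turn instead of rewriting the old tool message.
        conversation.append({"role": "user",
                             "content": f"[system note] The '{name}' tool finished: {result}"})
        # ...then trigger another model turn so it can react to the result.

    asyncio.create_task(finish())
```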
Thanks! | 2025-10-12T05:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1o4gka5/how_to_handle_long_running_tools_in_realtime/ | fajfas3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4gka5 | false | null | t3_1o4gka5 | /r/LocalLLaMA/comments/1o4gka5/how_to_handle_long_running_tools_in_realtime/ | false | false | self | 8 | null |
sm120 - is like everything gated? (Pre-training my own) | 5 | Let me say that I'm new to this whole world of LM training and I've pretty much learned as I go. For a couple of weeks now I've been working on a 1.8B-param model just chugging along in pre-training. I've done many a search for a better, more effective strategy. Things I read about, such as FA2/3, MXFP8/4, and some Hopper stuff, all seem gated. I set up a nightly torchao build in another venv and I'm getting blocked all around. I mean, sm120 has been out for some time, right? Here's the most stable setup I've come up with to date. If anyone has any advice to share, I would love to hear it:
Ubuntu 22.04 (WSL2 on Win 11)
PyTorch 2.8 + CUDA 12.8 / 13.0 drivers (5090 32gb)
Transformer Engine 2.8 FP8 linears active
cudaMallocAsync allocator enabled
Doc-aware SDPA attention (efficient path, flash off)
TE RMSNorm swap (+15 % throughput vs baseline)
AdamW fused, D2Z LR schedule
Training data ≈ 20 B tokens Nemotron HQ mixed with some Nemo Math, The Stack V2 and 2025 Wikipedia.
15 k tokens/s steady @ batch 4 × grad-accum 6, ctx = 2048, loss ≈ 0.7 → 0.5 about 10b tokens chewed on. Had a bad 30k run because for whatever reason I had one or both embed.weight and lm_head.weight tensors blow up on me and since I had them tied, that was a bad day. Since then, smooth sailing. | 2025-10-12T05:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o4gi5j/sm120_is_like_everything_gated_pretraining_my_own/ | exhorder72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4gi5j | false | null | t3_1o4gi5j | /r/LocalLLaMA/comments/1o4gi5j/sm120_is_like_everything_gated_pretraining_my_own/ | false | false | self | 5 | null |
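For anyone curious what the SDPA piece of the setup above looks like in code, a stripped-down sketch (PyTorch 2.x attention API; shapes and dtypes are only illustrative, and the allocator part is just an environment variable):

```python
# export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync   # the cudaMallocAsync allocator mentioned above
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(4, 16, 2048, 64, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Pin SDPA to the memory-efficient path (flash off), as described in the config list.
with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```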
Recently started to dabble in LocalLLMs... | 5 | Had an Android-powered ToughPad (3GB RAM) that I had lying around, so I got it set up running an uncensored Llama 3.2 1B as an off-grid mobile, albeit rather limited, LLM option.
But naturally I wanted more, so working with what I had spare, I set up a headless Windows 11 box running Ollama and LM Studio, which I remote desktop into via RustDesk from my Android and Windows devices in order to use the GUIs.
System specs:
i7 4770K (Running at 3000mhz)
16gb DDR3 RAM (Running at 2200mhz)
GTX 1070 8gb
I have got it up and running and managed to get Wake-on-LAN working correctly, so that it sleeps when not being used; I just need to use an additional program to ping the PC prior to the RD connection.
The current setup can run the following models at the speeds shown below (prompt: "Hi"):
Gemma 4b 23.21 tok/sec (43 tokens)
Gemma 12b 8.03 tok/sec (16 tokens)
I have a couple of questions
I can perform a couple of upgrades to this systems for a low price in just wondering would they be worth it
I can double the ram to 32gb for around £15
I can pick up an additional GTX 1070 8gb for around £60.
If I doubled my RAM to 32GB and my VRAM to 16GB, and I can currently just about run a 12B model, what can I likely expect to see?
Can Ollama and LM Studio (and Open WebUI) utilize and take advantage of 2 GPUs and if so would I need the SLI connector?
And finally, does CPU speed, core count, or even RAM speed matter at all when offloading 100% of the model to the GPU? This very old (2014) 4-core/8-thread CPU runs stable at a 4.6GHz overclock, but is currently underclocked to 3.0 GHz (from 3.5GHz stock). | 2025-10-12T05:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o4gfy0/recently_started_to_dabble_in_localllms/ | Asbular | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4gfy0 | false | null | t3_1o4gfy0 | /r/LocalLLaMA/comments/1o4gfy0/recently_started_to_dabble_in_localllms/ | false | false | self | 5 | null |
Good provider? | 0 |
I was wondering which API provider I should get. I think I would prefer a subscription-based one. I was looking at the Chutes 3 USD sub and the NanoGPT 8 USD subscription. The 300 daily messages of Chutes would be more than enough for me, but Chutes seems to have a bad rep for having toned-down models. I was wondering if someone could let me know about that. Meanwhile NanoGPT offers 2k requests per day, which seems overkill to me.
I wouldn't want to subscribe to official APIs because I like switching between different models. OpenRouter pay-as-you-go would cost me somewhere between Chutes and NanoGPT per month.
So, which should I get? I'd use it for creative writing mostly. | 2025-10-12T04:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1o4g6wk/good_provider/ | thunderbolt_1067 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4g6wk | false | null | t3_1o4g6wk | /r/LocalLLaMA/comments/1o4g6wk/good_provider/ | false | false | self | 0 | null |
PSA: Ollama no longer supports the Mi50 or Mi60 | 72 | [https://github.com/ollama/ollama/pull/12481](https://github.com/ollama/ollama/pull/12481)
Ollama recently upgraded its ROCM version and therefore no longer supports the Mi50 or Mi60.
Their most recent release notes states that "AMD gfx900 and gfx906 (MI50, MI60, etc) GPUs are no longer supported via ROCm. We're working to support these GPUs via Vulkan in a future release."
This means if you pull the latest version of Ollama you won't be able to use the Mi50 even though Ollama docs still list it as being supported.
| 2025-10-12T04:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o4fxer/psa_ollama_no_longer_supports_the_mi50_or_mi60/ | TechEnthusiastx86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4fxer | false | null | t3_1o4fxer | /r/LocalLLaMA/comments/1o4fxer/psa_ollama_no_longer_supports_the_mi50_or_mi60/ | false | false | self | 72 | {'enabled': False, 'images': [{'id': 'NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=108&crop=smart&auto=webp&s=df9d6408e4cd1bbf0d4714fb228e5cf05ab103bf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=216&crop=smart&auto=webp&s=e8335cce7c89ab4579673c2337057fbe711b7e3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=320&crop=smart&auto=webp&s=7ef249549418b4fb70ac1f571657f1925e3eee1a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=640&crop=smart&auto=webp&s=6f2760aac14e92486ae1593cb0fe933ca5c80337', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=960&crop=smart&auto=webp&s=ce061af5e32936f8981dee7b8824c7619e61773f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?width=1080&crop=smart&auto=webp&s=4881fdcaf16bcef9b0327e2679baedc573f36be9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NOGwsuRmGTvZ1xQ9XZdHww9OGQpcIxwOYUXO176sQ3E.png?auto=webp&s=55e31f9f8a4b4cea5b640a98e28a26d98a000288', 'width': 1200}, 'variants': {}}]} |
I got this error with 5060 TI card, anyone know how to fix it? | 1 | [removed] | 2025-10-12T04:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1o4fwkf/i_got_this_error_with_5060_ti_card_anyone_know/ | SubstantialFun8805 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4fwkf | false | null | t3_1o4fwkf | /r/LocalLLaMA/comments/1o4fwkf/i_got_this_error_with_5060_ti_card_anyone_know/ | false | false | 1 | null | |
LLM advice | 1 | To start things out, I'll give you my hardware first:
AMD Ryzen 5 4500
32GB DDR4
1TB NVMe SSD
RX 7900 XT 20GB
RX 6800 16 GB
OS Ubuntu 22.04 LTS
Ollama
LM Studio
just downloaded llama.cpp not set up yet
What models could I conceivably run on this hardware? I'm fine with slower t/s so long as I'm not waiting 5 minutes till the first token. I'm a slow reader anyway lol
Any help would be greatly appreciated. I'm currently using GPT-OSS-20B and while it's useful for talking to, it seems as if it's not the best for the small coding tasks I give it. And I'm very interested in something bigger. | 2025-10-12T04:11:48 | https://www.reddit.com/r/LocalLLaMA/comments/1o4fkin/llm_advice/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4fkin | false | null | t3_1o4fkin | /r/LocalLLaMA/comments/1o4fkin/llm_advice/ | false | false | self | 1 | null |
What open-source models that run locally are the most commonly used? | 1 | Hello everyone! I'm about to start exploring the world of local AI, and I'd love to know which models you use. I just want to get an idea of what's popular or worth trying - any category is fine! | 2025-10-12T04:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o4fjfx/what_opensource_models_that_run_locally_are_the/ | Mister_X-16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4fjfx | false | null | t3_1o4fjfx | /r/LocalLLaMA/comments/1o4fjfx/what_opensource_models_that_run_locally_are_the/ | false | false | self | 1 | null |
PSU for 2x RTX 3090, 3x 8-pin each | 2 | Not sure if this is the right place to ask, so feel free to shoo me away to elsewhere.
Building a LLM rig, I intend to put 2 second-hand 3090s in it. What I see in the used market is a mix of boards, some with 2 8-pin connectors, some with 3 8-pin connectors. Cards being 350W and each connector rated for 150W, ok I guess that makes sense.
Not wanting to paint myself into a corner, I think I have to get a PSU with at least 8x 8-pin sockets, ballparking it at 1200W or higher.
Do I have this right? | 2025-10-12T04:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o4firv/psu_for_2x_rtx_3090_3x_8pin_each/ | zhambe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4firv | false | null | t3_1o4firv | /r/LocalLLaMA/comments/1o4firv/psu_for_2x_rtx_3090_3x_8pin_each/ | false | false | self | 2 | null |
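As a rough sanity check on the numbers above (ballpark only, not a spec): 2 × 350 W = 700 W for the GPUs, plus roughly 150–250 W for CPU, board, drives, and fans, gives about 850–950 W sustained, so a quality 1200 W unit leaves roughly 25–30% headroom. And 2 × 3 = 6 PCIe 8-pin leads is the minimum if both cards turn out to be triple-connector boards, so a PSU with 8 such outputs gives spare margin for cabling and transient spikes.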
I made a plugin to run LLMs on phones | 12 | Hi everyone, I've been working on a side project to get LLMs (GGUF models) running locally on Android devices using Flutter.
The result is a plugin I'm calling Llama Flutter. It uses llama.cpp under the hood and lets you load any GGUF model from Hugging Face. I built a simple chat app as an example to test it.
I'm sharing this here because I'm looking for feedback from the community. Has anyone else tried building something similar? I'd be curious to know your thoughts on the approach, or any suggestions for improvement.
Video Demo: [https://files.catbox.moe/xrqsq2.mp4](https://files.catbox.moe/xrqsq2.mp4)
Example APK: [https://github.com/dragneel2074/Llama-Flutter/blob/master/example-app/app-release.apk](https://github.com/dragneel2074/Llama-Flutter/blob/master/example-app/app-release.apk)
**Here are some of the technical details / features:**
* Uses the latest llama.cpp (as of Oct 2025) with ARM64 optimizations.
* Provides a simple Dart API with real-time token streaming.
* Supports a good range of generation parameters and several built-in chat templates.
* For now, it's Android-only and focused on text generation.
If you're interested in checking it out to provide feedback or contribute, the links are below. If you find it useful, a star on GitHub would help me gauge interest.
Links:
\* GitHub Repo: [https://github.com/dragneel2074/Llama-Flutter](https://github.com/dragneel2074/Llama-Flutter)
\* Plugin on pub.dev: https://pub.dev/packages/llama\_flutter\_android
What do you think? Is local execution of LLMs on mobile something you see a future for in Flutter? | 2025-10-12T03:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/1o4ez13/i_made_a_plugin_to_run_llms_on_phones/ | Dragneel_passingby | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4ez13 | false | null | t3_1o4ez13 | /r/LocalLLaMA/comments/1o4ez13/i_made_a_plugin_to_run_llms_on_phones/ | false | false | self | 12 | null |
I just asked about UNO cards | 0 | I was playing UNO earlier today and I wanted to know weather I was using the right amount of cards. So I asked the deepseek model I have on my laptop and forgot about it when I went out. It's been generating this for at least an hour haha | 2025-10-12T03:30:40 | https://v.redd.it/6txwvqyioluf1 | epic2142 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o4etgu | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6txwvqyioluf1/DASHPlaylist.mpd?a=1762831852%2CMTA1OTg5MjE3MjQ5MTM5OTA2NThkNTdjYTdiOGIwNDIzMTg1ZjQ2YmI2NTQzM2Q0NDU5OGM1ODRjYjU4NGM0Mw%3D%3D&v=1&f=sd', 'duration': 98, 'fallback_url': 'https://v.redd.it/6txwvqyioluf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 682, 'hls_url': 'https://v.redd.it/6txwvqyioluf1/HLSPlaylist.m3u8?a=1762831852%2CNTgzMzJlNTczNjc4NzgwNWE1MmVjNjAzOTA3NDFiYjFmZGUxODk2ZTkyZWU4ZjBkYTI3NWRjYmQ4NzczMTQ5Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6txwvqyioluf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1o4etgu | /r/LocalLLaMA/comments/1o4etgu/i_just_asked_about_uno_cards/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=108&crop=smart&format=pjpg&auto=webp&s=0a4972ac3635041216faa50ce24d03281212ba49', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=216&crop=smart&format=pjpg&auto=webp&s=d0bdf4533ca7adee3c80c75a754aafc62f633b07', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=320&crop=smart&format=pjpg&auto=webp&s=c4ec273b699140310c09e2bd7d2105cfdeb477f6', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=640&crop=smart&format=pjpg&auto=webp&s=56ddcac470fca66792a35d610c3489199ce751a0', 'width': 640}, {'height': 511, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=960&crop=smart&format=pjpg&auto=webp&s=aaaddd0c7f2e64d29634dd8fdf56a44fc80baf25', 'width': 960}, {'height': 574, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8e5cb7335f2de7c85f5c634c9d50f47b3b01fbc2', 'width': 1080}], 'source': {'height': 1022, 'url': 'https://external-preview.redd.it/b2UwbmNyeWlvbHVmMa-zrUgLyXkmc0NVBgUQMwT83FFaVFp3c5zywyXZ8x74.png?format=pjpg&auto=webp&s=c8bbf174a609b3441af94d0e579e4c5523963b02', 'width': 1920}, 'variants': {}}]} | |
I got this error with 5060 TI card, anyone know how to fix it? | 1 | [removed] | 2025-10-12T03:19:44 | https://www.reddit.com/r/LocalLLaMA/comments/1o4em1k/i_got_this_error_with_5060_ti_card_anyone_know/ | SubstantialFun8805 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4em1k | false | null | t3_1o4em1k | /r/LocalLLaMA/comments/1o4em1k/i_got_this_error_with_5060_ti_card_anyone_know/ | false | false | 1 | null | |
Oscillink - Self-Optimizing Scalable Memory for Generative Models and Databases | 0 | I just released an open-source SDK that adds *physics-based working memory* to any generative or retrieval system.
It turns raw embeddings into a coherent, explainable memory layer — no training, no drift, just math.
**What it does**
* ⚡ **Scales smoothly** — latency < 40 ms even as your corpus grows
* 🎯 **Hallucination control** — 42.9 % → 0 % trap rate in a controlled fact-retrieval test
* 🧾 **Deterministic receipts** — every run produces a signed, auditable ΔH energy log
* 🔧 **Universal** — works with any embedding model, no retraining
* 📈 **Self-optimizing** — learns the best λ-params over time
**Why it matters**
Instead of another neural reranker, Oscillink minimizes a real energy functional
M U\* = λ\_G Y + λ\_Q B 1ψᵀ to reach the most coherent global state.
It’s explainable, predictable, and mathematically guaranteed to converge.
Check it out, Test it out, and let me know how it works in your stack.
[https://github.com/Maverick0351a/Oscillink](https://github.com/Maverick0351a/Oscillink) | 2025-10-12T03:02:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o4eaic/oscillink_selfoptimizing_scalable_memory_for/ | Otherwise_Hold_189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4eaic | false | null | t3_1o4eaic | /r/LocalLLaMA/comments/1o4eaic/oscillink_selfoptimizing_scalable_memory_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=108&crop=smart&auto=webp&s=caa088ab73548aba48c941c278f2b9b594d7e9ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=216&crop=smart&auto=webp&s=735848600a7fc2f6418ff6f20cc65048f0c425e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=320&crop=smart&auto=webp&s=1ed83da87aeabeefa619f088ce34239fc7670762', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=640&crop=smart&auto=webp&s=bdc5cd6c71f4e69fa0cd913e23600f82183395af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=960&crop=smart&auto=webp&s=12ef0068de60ba5547bb9f26e1defeb1dd8d7059', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?width=1080&crop=smart&auto=webp&s=9228bddf0fc3f75c3ffee4a7635c2173d14ad8d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LI2Z6mtSmzgrn6xJP_spLs8C7DsHNuPSJgWgTei6biw.png?auto=webp&s=39c66e1fbaa3f5f0aba2fe99cf8c0eb72bf71b57', 'width': 1200}, 'variants': {}}]} |
Optimized Docker image for Unsloth fine-tuning + GGUF export via llama.cpp | 11 | # 🐳 unsloth-docker
**Optimized Docker image for Unsloth fine-tuning + GGUF export via llama.cpp**
This Docker image seamlessly integrates [Unsloth](https://github.com/unslothai/unsloth) — the ultra-fast LLM fine-tuning library — with [llama.cpp](https://github.com/ggml-org/llama.cpp) to enable end-to-end training and quantized GGUF model export in a single, GPU-accelerated environment.
---
## ✨ Features
- **Pre-installed Unsloth** with FlashAttention, xformers, and custom CUDA kernels for blazing-fast training
- **Full llama.cpp toolchain**, including `convert_hf_to_gguf.py` for easy GGUF conversion
- **Jupyter Lab** pre-configured for interactive development
- **GPU-accelerated** (CUDA 12.1 + cuDNN)
- **Quantization-ready**: supports all standard GGUF quant types (`q4_k_m`, `q5_k_m`, `q8_0`, etc.)
---
## 🚀 Quick Start
### 1. Build & Launch
```bash
# Build the image
docker compose build
# Start the container (Jupyter Lab runs on port 38888)
docker compose up -d
```
### 2. Access Jupyter Lab
Open your browser at **http://127.0.0.1:38888** and log in with your password.
Create a new notebook to fine-tune your model using Unsloth.
After training, save and convert your model directly inside the notebook:
```python
# Save merged model (Unsloth syntax)
model.save_pretrained_merged("your-new-model", tokenizer)
# Convert to GGUF using pre-installed llama.cpp
!python /workspace/llama.cpp/convert_hf_to_gguf.py \
--outfile your-new-model-gguf \
--outtype q8_0 \
your-new-model
```
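If you need something smaller than `q8_0`, one option (assumed paths below, adjust to wherever llama.cpp was built inside the container) is to export f16 first and then quantize with llama.cpp's `llama-quantize`, still from the same notebook:

```python
# Sketch: export f16 first, then quantize down to 4-bit.
# The llama-quantize path is an assumption -- check where your build placed it.
!python /workspace/llama.cpp/convert_hf_to_gguf.py \
    --outfile your-new-model-f16.gguf \
    --outtype f16 \
    your-new-model

!/workspace/llama.cpp/build/bin/llama-quantize \
    your-new-model-f16.gguf \
    your-new-model-q4_k_m.gguf \
    Q4_K_M
```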
---
Train fast. Quantize smarter. Run anywhere. 🚀
👉 **Star the repo if you find it useful!**
https://github.com/covrom/unsloth-docker | 2025-10-12T02:41:51 | https://github.com/covrom/unsloth-docker | rtsov | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o4dwj6 | false | null | t3_1o4dwj6 | /r/LocalLLaMA/comments/1o4dwj6/optimized_docker_image_for_unsloth_finetuning/ | false | false | default | 11 | {'enabled': False, 'images': [{'id': 'iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=108&crop=smart&auto=webp&s=c5cd0a48576fa171bc880d854d546f8b336c2896', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=216&crop=smart&auto=webp&s=25a70a7dc5e2a40ce289a94fb9e012cccb3ad86d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=320&crop=smart&auto=webp&s=f79b63ea85fe1a4590723e2773b5ff40852ad2c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=640&crop=smart&auto=webp&s=87f388a35515f2a499f727fef68a4423768197b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=960&crop=smart&auto=webp&s=6e77e0cfb40f766019c0ee3a47f794d5ee0c512e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?width=1080&crop=smart&auto=webp&s=a900138230a95a1626b75988512fbad10e328fe5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iRMYKPaKkEEDms46u7jhdX3Sb7NwqrocaVgHjkYHszM.png?auto=webp&s=d3fa0a1f797598473653cd9392fb6e889ba79a7a', 'width': 1200}, 'variants': {}}]} |
HuggingFace storage is no longer unlimited - 12TB public storage max | 421 | In case you’ve missed the memo like me, HuggingFace is no longer unlimited.
| Type of account | Public storage | Private storage |
|-----------------------------------|---------------------------------------------|--------------------------------------------------------|
| Free user or org | Best-effort* <br>usually up to 5 TB for impactful work | 100 GB |
| PRO | Up to 10 TB included* <br>✅ grants available for impactful work† | 1 TB + pay-as-you-go |
| Team Organizations | 12 TB base + 1 TB per seat | 1 TB per seat + pay-as-you-go |
| Enterprise Organizations | 500 TB base + 1 TB per seat | 1 TB per seat + pay-as-you-go |
As seen on https://huggingface.co/docs/hub/en/storage-limits
—-
For ref. https://web.archive.org/web/20250721230314/https://huggingface.co/docs/hub/en/storage-limits | 2025-10-12T02:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o4dswr/huggingface_storage_is_no_longer_unlimited_12tb/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4dswr | false | null | t3_1o4dswr | /r/LocalLLaMA/comments/1o4dswr/huggingface_storage_is_no_longer_unlimited_12tb/ | false | false | self | 421 | {'enabled': False, 'images': [{'id': 't7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=108&crop=smart&auto=webp&s=69eb2121b4d6d3f3fe1b5e16b9e75fc42cab53c1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=216&crop=smart&auto=webp&s=af1fb1f66917feb6cfa126b306bb42828f38c48f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=320&crop=smart&auto=webp&s=9b9fd707bdba03645c2850f448cc3e98b044e9ef', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=640&crop=smart&auto=webp&s=77449b017deebb1b00797741775b155e1ff56ef8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=960&crop=smart&auto=webp&s=7b9d1de04c5fefb294277511b4c3d662cf1141f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?width=1080&crop=smart&auto=webp&s=5b14abdac9fc310790648ce1027bc51004fe2a86', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/t7-yAwqgKpqBAjVAVkt52s_omQvtV3HUx8h8fSlyRwk.png?auto=webp&s=9b1c198948ed77bc2d19c193a89ec81c6a8d8923', 'width': 1200}, 'variants': {}}]} |
Appreciate advice on labeling sound files | 2 | I’d like to automate the process of labeling a large catalog of music files - bpm, chords, etc.
What tools work best for this?
Thanks in advance for
any suggestions! | 2025-10-12T02:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o4da60/appreciate_advice_on_labeling_sound_files/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4da60 | false | null | t3_1o4da60 | /r/LocalLLaMA/comments/1o4da60/appreciate_advice_on_labeling_sound_files/ | false | false | self | 2 | null |
How can I connect everything toolbar to llm in order for it to search better based on my inquiries ? | 0 | Is there any plugin or app made for this already? | 2025-10-12T01:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o4c5jm/how_can_i_connect_everything_toolbar_to_llm_in/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4c5jm | false | null | t3_1o4c5jm | /r/LocalLLaMA/comments/1o4c5jm/how_can_i_connect_everything_toolbar_to_llm_in/ | false | false | self | 0 | null |
Looking for Smallest-Yet-Efficient-English model for its linguistic reasoning capacity only | 5 | I am going to use this model only for English language and only for it to be able to understand semantics between different English notes, in order to be able to connect these notes together. So some normal degree of linguistic wisdom and reasoning in the model is needed, while it doesn’t need to have knowledge of many other fields nor other languages so It respond faster.
What models are good candidates? It must be less than 6GB , the smaller the better as long as it doesn’t go so dumb for understanding semantics .
| 2025-10-12T01:07:53 | https://www.reddit.com/r/LocalLLaMA/comments/1o4c2dg/looking_for_smallestyetefficientenglish_model_for/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4c2dg | false | null | t3_1o4c2dg | /r/LocalLLaMA/comments/1o4c2dg/looking_for_smallestyetefficientenglish_model_for/ | false | false | self | 5 | null |
OpenRouter API's on Android? | 6 | On Desktop I use MstyStudio, but as far as I know they don't have an android app? Most don't as far as I know?
How are y'all using something like an API from OpenRouter on mobile? Are there any apps? | 2025-10-12T00:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o4bcnv/openrouter_apis_on_android/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4bcnv | false | null | t3_1o4bcnv | /r/LocalLLaMA/comments/1o4bcnv/openrouter_apis_on_android/ | false | false | self | 6 | null |
Out of shared memory | 0 | anyone come across this issue:
```
triton.runtime.errors.OutOfResources: out of resource: shared memory, Required: 98304, Hardware limit: 49152. Reducing block sizes or `num_stages` may help.
```
and how did you resolve it? | 2025-10-12T00:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/1o4az4v/out_of_shared_memory/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4az4v | false | null | t3_1o4az4v | /r/LocalLLaMA/comments/1o4az4v/out_of_shared_memory/ | false | false | self | 0 | null |
LLM-JEPA: Large Language Models Meet Joint Embedding Predictive Architectures | 28 | Abstract
>Large Language Model (LLM) pretraining, finetuning, and evaluation rely on input-space reconstruction and generative capabilities. Yet, it has been observed in vision that embedding-space training objectives, e.g., with Joint Embedding Predictive Architectures (JEPAs), are far superior to their input-space counterpart. That mismatch in how training is achieved between language and vision opens up a natural question: *can language training methods learn a few tricks from the vision ones?* The lack of JEPA-style LLMs is a testimony to the challenge of designing such objectives for language. In this work, we propose a first step in that direction where we develop LLM-JEPA, a JEPA-based solution for LLMs applicable both to finetuning and pretraining. Thus far, LLM-JEPA is able to outperform the standard LLM training objectives by a significant margin across models, all while being robust to overfitting. Those findings are observed across numerous datasets (NL-RX, GSM8K, Spider, RottenTomatoes) and various models from the Llama3, OpenELM, Gemma2 and Olmo families. Code: [this https URL](https://github.com/rbalestr-lab/llm-jepa).
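For intuition, a JEPA-style auxiliary objective for an LLM could look roughly like the sketch below: the usual next-token loss plus a loss for predicting the embedding of one view (e.g. code) from another view (e.g. its natural-language description). This is my own illustration assuming a Hugging Face-style causal LM, not the paper's exact formulation; see the repo for the real thing.

```python
import torch.nn.functional as F

def jepa_style_loss(model, predictor, text_ids, code_ids, lam=1.0):
    """Toy sketch: standard LM loss + an embedding-space prediction loss
    between two views of the same sample. Not the paper's exact objective."""
    lm_out = model(text_ids, labels=text_ids)  # usual autoregressive loss
    z_text = model(text_ids, output_hidden_states=True).hidden_states[-1][:, -1]
    z_code = model(code_ids, output_hidden_states=True).hidden_states[-1][:, -1]
    pred = predictor(z_text)                   # small head predicting the code view
    jepa = 1.0 - F.cosine_similarity(pred, z_code.detach(), dim=-1).mean()
    return lm_out.loss + lam * jepa
```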
Limitations
Despite its strong accuracy gains, LLM-JEPA introduces two additional hyperparameters. As shown in fig. 7, the optimal configuration may occur at any point in a grid (λ, k), which imposes a significant cost for hyperparameter tuning. While we have not identified an efficient method to explore this space, we empirically observe that adjacent grid points often yield similar accuracy, suggesting the potential for a more efficient tuning algorithm.
The primary bottleneck at present is the 2-fold increase in compute cost during training, which is mitigated by random loss dropout. | 2025-10-12T00:08:28 | https://arxiv.org/abs/2509.14252 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1o4av71 | false | null | t3_1o4av71 | /r/LocalLLaMA/comments/1o4av71/llmjepa_large_language_models_meet_joint/ | false | false | default | 28 | null |
How do you discover & choose right models for your agents? (genuinely curious) | 17 | I'm trying to understand how people actually find the right model for their use case.
If you've recently picked a model for a project, how did you do it?
A few specific questions:
1. Where did you start your search? (HF search, Reddit, benchmarks, etc.)
2. How long did it take? (minutes, hours, days?)
3. What factors mattered most? (accuracy, speed, size?)
4. Did you test multiple models or commit to one?
5. How confident were you in your choice?
Also curious: what would make this process easier?
My hypothesis is that most of us are winging it more than we'd like to admit. Would love to hear if others feel the same way or if I'm just doing it wrong! | 2025-10-11T23:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o4ahfi/how_do_you_discover_choose_right_models_for_your/ | Curious-Engineer22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4ahfi | false | null | t3_1o4ahfi | /r/LocalLLaMA/comments/1o4ahfi/how_do_you_discover_choose_right_models_for_your/ | false | false | self | 17 | null |
Help running models for roleplay. | 1 | I was using openrouter with an API key to silly tavern okn my laptop with deepseek 3.1 until they got rid of the free plan fke that. I tried r1 T2 and it worked kinda (continuing the role play I did was really well but new one on open router not as much and silly tavern new one just said it was not allowed to do what openrouter same model let me) and I kinda just wanted to see what I can run locally. I can't get fandom roleplays but I tried custom ones and the setup works but it just won't stop playing as my character also. I have a 4070 and 32gb of ram on the laptop so I can do anything less then about 30B fast enough (I think, I know less then 24B fast enough) but all the models I've tried with LLM studio keep playing as myself also.
TLDR can I get fandom roleplays to work? Probably not I mash world's so I can't just 3 page lore dump info. For regular roleplays, why is the API key version of the deepseek r1 T2 more censored and how do I get models to not reply as my character? Can I run good enough stuff on my laptop? | 2025-10-11T23:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/1o4ahek/help_running_models_for_roleplay/ | newbuildertfb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4ahek | false | null | t3_1o4ahek | /r/LocalLLaMA/comments/1o4ahek/help_running_models_for_roleplay/ | false | false | self | 1 | null |
What is the most you can do to scale the inference of a model? Specifically looking for lesser known tricks and optimization you have found while tinkering with models | 19 | Scenario: Assuming I have the Phi 4 14b model hosted on a A100 40GB machine, and I can run it for a single data. If i have 1 million legal text documents, what is the best way to scale the inference such that I can process the 1 million text (4000 million words) and extract information out of it? | 2025-10-11T22:38:20 | https://www.reddit.com/r/LocalLLaMA/comments/1o48x07/what_is_the_most_you_can_do_to_scale_the/ | SnooMarzipans2470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o48x07 | false | null | t3_1o48x07 | /r/LocalLLaMA/comments/1o48x07/what_is_the_most_you_can_do_to_scale_the/ | false | false | self | 19 | null |
50-series and pro 6000s sm120 cards. supported models in vllm, exl3, sglang etc. thread | 12 | Hi guys I'm starting this thread so people like me with sm120 cards can share with each other which models they get working how they got them working in vllm, sglang, exl3 etc. If you have one or more of these cards please share your experiences and what works and what doesn't etc. I will post too. For now I have gpt-oss working both 20b and 120b and will be trying GLM-4.6 soon | 2025-10-11T22:35:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o48ufw/50series_and_pro_6000s_sm120_cards_supported/ | Sorry_Ad191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o48ufw | false | null | t3_1o48ufw | /r/LocalLLaMA/comments/1o48ufw/50series_and_pro_6000s_sm120_cards_supported/ | false | false | self | 12 | null |
I built a community crowdsourced LLM benchmark leaderboard (Claude Sonnet/Opus, Gemini, Grok, GPT-5, o3) | 0 | I built [CodeLens.AI](http://CodeLens.AI) \- a tool that compares how 6 top LLMs (GPT-5, Claude Opus 4.1, Claude Sonnet 4.5, Grok 4, Gemini 2.5 Pro, o3) handle your actual code tasks.
**How it works:**
* Upload code + describe task (refactoring, security review, architecture, etc.)
* All 6 models run in parallel (\~2-5 min)
* See side-by-side comparison with AI judge scores
* Community votes on winners (blind voting)
* Each evaluation gets reflected in the overall AI model leaderboard, showing us best ones
**Why I built this:** Existing benchmarks (HumanEval, SWE-Bench) don't reflect real-world developer tasks. I wanted to know which model actually solves MY specific problems - refactoring legacy TypeScript, reviewing React components, etc. It's also similar to LMArena, but their evaluations are not entirely transparent.
**Current status:**
* Live at [https://codelens.ai](https://codelens.ai)
* 23 evaluations so far (small sample, I know!)
* Free tier processes 3 evals per day (first-come, first-served queue)
* Looking for real tasks to make the benchmark meaningful
* Happy to answer questions about the tech stack, cost structure, or methodology.
Currently in validation stage. What are your first impressions? | 2025-10-11T21:55:02 | CodeLensAI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o47y61 | false | null | t3_1o47y61 | /r/LocalLLaMA/comments/1o47y61/i_built_a_community_crowdsourced_llm_benchmark/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'l7vbr33v0kuf1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=108&crop=smart&auto=webp&s=5d8285108e2484e2427a1522af4102e6d4b92abc', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=216&crop=smart&auto=webp&s=0d579588a78f721477b20336fd131327d38f8ac7', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=320&crop=smart&auto=webp&s=b201e687a8633a9e0a1c23e1d341a0de712a4237', 'width': 320}, {'height': 452, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=640&crop=smart&auto=webp&s=b3327242865f4e782ce261565e7b1fd502cb63f4', 'width': 640}, {'height': 678, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=960&crop=smart&auto=webp&s=21fbb36c40ab4219e3c2719502abb5b6242ee6d9', 'width': 960}, {'height': 763, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?width=1080&crop=smart&auto=webp&s=de53a4f27e748b4efdd99abd4c8ca5dfe9330c04', 'width': 1080}], 'source': {'height': 1236, 'url': 'https://preview.redd.it/l7vbr33v0kuf1.png?auto=webp&s=da8ba7c83715a6ceb2590398726a9627b307165d', 'width': 1749}, 'variants': {}}]} | |
Snapdragon 8 Lite Gen 5 + cheap | 0 | This is a benefit for everyone | 2025-10-11T21:32:52 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o47foi | false | null | t3_1o47foi | /r/LocalLLaMA/comments/1o47foi/snapdragon_8_lite_gen_5_cheap/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'z8sf9lvxwjuf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=108&crop=smart&auto=webp&s=b33a9b328f62831b0b97ca542eb43a5c75e52ec7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=216&crop=smart&auto=webp&s=cfeae3cf9b1707154c3f0fecacccd9e7f7407ea1', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=320&crop=smart&auto=webp&s=e7c4110fda58bcdb95a41d9406ba7cbc0c12a055', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=640&crop=smart&auto=webp&s=2c1cc3bbb6fb299988ae8b9a21fe60f1043798fc', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=960&crop=smart&auto=webp&s=e97d932ad32101bfc1045958165a312395243542', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?width=1080&crop=smart&auto=webp&s=5f285aa1664f54ead638930e600526e9feeface3', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/z8sf9lvxwjuf1.jpeg?auto=webp&s=f676b3d7ff158c302630109cb5b4851dcd51f1d6', 'width': 1080}, 'variants': {}}]} | |
Running a large model overnight in RAM, use cases? | 21 | I have a 3945wx with 512gb of ddr4 2666mhz. Work is tossing out a few old servers so I am getting my hands on 1TB of ram for free. I have 2x3090 currently.
But I was thinking of doing some scraping and analysis, particularly for stocks. My pricing drops to 7p per kWh overnight, so I was thinking of running a large, slow model in RAM at night and using the GPUs during the day.
Surely I’m not the only one who has thought about this?
Perplexity has started to throttle labs queries so this could be my replacement for deep research. It might be slow, but it will be cheaper than a GPU furnace!! | 2025-10-11T21:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o47di4/running_a_large_model_overnight_in_ram_use_cases/ | Salt_Armadillo8884 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o47di4 | false | null | t3_1o47di4 | /r/LocalLLaMA/comments/1o47di4/running_a_large_model_overnight_in_ram_use_cases/ | false | false | self | 21 | null |
auditlm: dirt simple self-hostable code review | 7 | Following up from [this thread](https://www.reddit.com/r/LocalLLaMA/comments/1o1gdp9/im_seeking_alternatives_to_coderabbit_cli_for/), I implemented a very basic [self-hostable code review tool](https://github.com/ellenhp/auditlm) for when I want a code review but don't have any humans available to help with that. It is an extremely cavewoman-brained piece of software, I basically just give an agent free reign inside of a docker container and ask it to run any commands it needs to get context about the codebase before providing a review of the diff. There's no forge integration yet so it's not usable as a copilot alternative, but perhaps I'll get to that in due time :)
I don't know if I'd recommend anyone actually *use* this at least in its current state, especially without additional sandboxing, but I'm hoping either this project or something else will grow to fill this need.
Cheers. | 2025-10-11T21:24:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o478xe/auditlm_dirt_simple_selfhostable_code_review/ | ellenhp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o478xe | false | null | t3_1o478xe | /r/LocalLLaMA/comments/1o478xe/auditlm_dirt_simple_selfhostable_code_review/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=108&crop=smart&auto=webp&s=63571beb1159d524e9394c63efa56fb624b63ef2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=216&crop=smart&auto=webp&s=08b3b014b727c0d17f3494386973750733644fc1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=320&crop=smart&auto=webp&s=551855e27b71707bee64f7e2b6eb5861d8f3e1b0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=640&crop=smart&auto=webp&s=2f7a005f6e22f64cda7f6ccc9cdd412737a42581', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=960&crop=smart&auto=webp&s=33ddf5f679dda73cb0d335727cd5f98101fca0b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?width=1080&crop=smart&auto=webp&s=839aea683aea12e48f4413bd6a56b8a6a2074e79', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/55NYsbj6ARKxGtaVbTWiw98x5zQFXuuCzMILByZ9boE.png?auto=webp&s=ae692efc433dca79f374a9a9c58fa3bab5e02f6a', 'width': 1200}, 'variants': {}}]} |
We know the rule of thumb… large quantized models outperform smaller less quantized models, but is there a level where that breaks down? | 78 | I ask because I’ve also heard quants below 4 bit are less effective, and that rule of thumb always seemed to compare 4bit large vs 8bit small.
As an example let’s take the large GLM 4.5 vs GLM 4.5 Air. You can have a much higher bitrate with GLM 4.5 Air… but… even with a 2bit quant made by unsloth, GLM 4.5 does quite well for me.
I haven’t figured out a great way to have complete confidence though so I thought I’d ask you all. What’s your rule of thumb when having to weigh a smaller model vs larger model at different quants? | 2025-10-11T19:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/1o44u78/we_know_the_rule_of_thumb_large_quantized_models/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o44u78 | false | null | t3_1o44u78 | /r/LocalLLaMA/comments/1o44u78/we_know_the_rule_of_thumb_large_quantized_models/ | false | false | self | 78 | null |
Question on privacy when using Openrouter API | 2 | I am unable to run a fully local LLM on my old laptop, so I need to use an LLM in the cloud.
Excluding fully local LLMs, Duck.ai is so far one of the most private options. As far as I know, these are the privacy upsides of using duck.ai:
- All messages go through DuckDuckGo's proxy to the LLM provider, making everyone look the same to the providers, as if duck.ai is the one asking all the different questions.
- duck.ai has it set so the LLM providers do not train on the data submitted through duck.ai.
- all the chats are stored locally on the device in the browser files, not on DuckDuckGo’s servers.
Is using the Openrouter API via a local interface like Jan, LM Studio, etc. the same in terms of privacy? All messages go through Openrouter's server, so it's indistinguishable which user is asking; users can turn off data training from within the Openrouter settings, and the chat history is stored locally within the Jan or LM Studio app. Am I missing anything, or is the Openrouter API with a local app interface just as private as Duck.ai? | 2025-10-11T19:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1o44et0/question_on_privacy_when_using_openrouter_api/ | JaniceRaynor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o44et0 | false | null | t3_1o44et0 | /r/LocalLLaMA/comments/1o44et0/question_on_privacy_when_using_openrouter_api/ | false | false | self | 2 | null |
[Looking for testers] TraceML: Live GPU/memory tracing for PyTorch fine-tuning | 5 | I am looking for a few people to test TraceML, an open-source tool that shows GPU/CPU/memory usage live during training. It is for spotting CUDA OOMs and inefficiency.
It works for single-GPU fine-tuning and tracks activation + gradient peaks, per-layer memory, and step timings (forward/backward/optimizer).
Repo: github.com/traceopt-ai/traceml
I would love to find a couple of regular testers / design partners whose feedback can shape what to build next.
Active contributors will also be mentioned in the README 🙏 | 2025-10-11T19:22:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o44bom/looking_for_testers_traceml_live_gpumemory/ | traceml-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o44bom | false | null | t3_1o44bom | /r/LocalLLaMA/comments/1o44bom/looking_for_testers_traceml_live_gpumemory/ | false | false | self | 5 | null |
What rig are you running to fuel your LLM addiction? | 116 | Post your shitboxes, H100's, nvidya 3080ti's, RAM-only setups, MI300X's, etc. | 2025-10-11T18:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o43qhn/what_rig_are_you_running_to_fuel_your_llm/ | Striking_Wedding_461 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o43qhn | false | null | t3_1o43qhn | /r/LocalLLaMA/comments/1o43qhn/what_rig_are_you_running_to_fuel_your_llm/ | false | false | self | 116 | null |
What if we made LLMs paranoid? | 0 | What if we made LLMs paranoid about their output / the context they're given? For example, the code they're given might have bugs or not work; the model obviously might not know that, but unless it thinks it is 100% certain about it, it would of course recheck the code.
This raises another question: would Anthropic shut this down for the model's wellbeing? How would it, in Anthropic's antics, affect the model's wellbeing? | 2025-10-11T18:54:50 | https://www.reddit.com/r/LocalLLaMA/comments/1o43mjy/what_if_we_made_llms_paranoid/ | EmirTanis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o43mjy | false | null | t3_1o43mjy | /r/LocalLLaMA/comments/1o43mjy/what_if_we_made_llms_paranoid/ | false | false | self | 0 | null |
Recommendation for a local Japanese -> English vision model | 7 | As per the title I'm looking for a multimodal model that can perform competent JP to ENG translations from images. Ideally it'd fit in 48 gb of VRAM but I'm not opposed to doing a bit of CPU offloading. | 2025-10-11T18:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o43hyn/recommendation_for_a_local_japanese_english/ | Confident-Willow5457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o43hyn | false | null | t3_1o43hyn | /r/LocalLLaMA/comments/1o43hyn/recommendation_for_a_local_japanese_english/ | false | false | self | 7 | null |
Who makes the Code Supernova model that's available for free on KiloCode now? | 5 | It's pretty decent. And it's got a 1M token context window. Not sure now long it's going to remain free, though. | 2025-10-11T18:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o42n2x/who_makes_the_code_supernova_model_thats/ | cafedude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o42n2x | false | null | t3_1o42n2x | /r/LocalLLaMA/comments/1o42n2x/who_makes_the_code_supernova_model_thats/ | false | false | self | 5 | null |
Optimize my environment for GLM 4.5 Air | 19 | Hello there people.
For the last month I have been using GLM Air (Q4_K_S quant) and I really like it!
It's super smart and always to the point!
I only have one problem, the t/s are really low (6-7 tk/s)
So I'm looking for a way to upgrade my local rig, that's why I'm calling on you, the smart people! ☺️
My current setup is an AMD 7600 CPU, 64 GB DDR5-6000, and two GPUs, one 5060 Ti 16GB and one 4060 Ti 16GB.
My backend is LM Studio.
So, should I change backend?
Should I get a third GPU?
What do you think? | 2025-10-11T18:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1o42jx1/optimize_my_environment_for_glm_45_air/ | Former-Tangerine-723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o42jx1 | false | null | t3_1o42jx1 | /r/LocalLLaMA/comments/1o42jx1/optimize_my_environment_for_glm_45_air/ | false | false | self | 19 | null |
Total noob, please recommend my next steps | 2 | Hello everyone, this is my first post here and in general, I have a what I believe a humble setup, an AMD 8845HS mini-PC with 64GB DDR5-5600 and an ASUS 4090 with 24GB VRAM eGPU via OCULINK. I would like to experiment with fully local hosted LLM and to get familiar with the domain concepts. Please recommend me a good starting point, there's a lot of material on the net and a lot of conflicting info, sometimes deliberately hostile of wrong. Any useful comment and guidance is appreciated. | 2025-10-11T18:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o42hrv/total_noob_please_recommend_my_next_steps/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o42hrv | false | null | t3_1o42hrv | /r/LocalLLaMA/comments/1o42hrv/total_noob_please_recommend_my_next_steps/ | false | false | self | 2 | null |
Choosing a code completion (FIM) model | 29 | Fill-in-the-middle (FIM) models don't necessarily get all of the attention that coder models get but they work great llama.cpp and [llama.vim](https://github.com/ggml-org/llama.vim) or [llama.vscode](https://github.com/ggml-org/llama.vscode).
Generally, when picking an FIM model, ***speed*** is absolute priority because no one wants to sit waiting for the completion to finish. Choosing models with few active parameters and running GPU only is key. Also, counterintuitively, "base" models work just as well as instruct models.
***Note that only some models support FIM.*** Sometimes, it can be hard to tell from model cards whether they are supported or not.
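For reference, an FIM request against llama.cpp's `llama-server` looks roughly like the sketch below (Python, hitting the `/infill` endpoint; field names can differ a bit between server versions, so treat it as an approximation and check your build's docs).

```python
import requests

prefix = "def fib(n):\n    "   # code before the cursor
suffix = "\n    return a\n"     # code after the cursor

resp = requests.post(
    "http://127.0.0.1:8080/infill",  # default llama-server address
    json={
        "input_prefix": prefix,
        "input_suffix": suffix,
        "n_predict": 64,
        "temperature": 0.1,
    },
    timeout=60,
)
print(resp.json()["content"])        # the model's fill for the middle
```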
Recent models:
* [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) (the larger variant might also be FIM, I don't have the hardware to try it)
* [Kwaipilot/KwaiCoder-23B-A4B-v1](https://huggingface.co/Kwaipilot/KwaiCoder-23B-A4B-v1)
* [Kwaipilot/KwaiCoder-DS-V2-Lite-Base](https://huggingface.co/Kwaipilot/KwaiCoder-DS-V2-Lite-Base) (16b 2.4b active)
Slightly older but reliable small models:
* [Qwen/Qwen2.5-Coder-3B](https://huggingface.co/Qwen/Qwen2.5-Coder-3B)
* [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B)
Untested, new models:
* [Salesforce/CoDA-v0-Instruct](https://huggingface.co/Salesforce/CoDA-v0-Instruct) (I'm unsure if this is FIM)
What models am I missing? What models are you using? | 2025-10-11T18:03:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o42ch4/choosing_a_code_completion_fim_model/ | Zc5Gwu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o42ch4 | false | null | t3_1o42ch4 | /r/LocalLLaMA/comments/1o42ch4/choosing_a_code_completion_fim_model/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=108&crop=smart&auto=webp&s=43923e10c0b16e08874b05ceede0c632c590487e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=216&crop=smart&auto=webp&s=e4c45b547a1d2941ee9ec47f1a098c55412d777d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=320&crop=smart&auto=webp&s=1b34ce529b511e27d05fac0680f915259eedac61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=640&crop=smart&auto=webp&s=0970bf622c85060c3df907aec27e34e848f2f0ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=960&crop=smart&auto=webp&s=88763eee17c662fc3a5d79c34b23da388cdd5454', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?width=1080&crop=smart&auto=webp&s=02ecfe58dfa27f39043189ca4d753ad71a95f6b4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eq90-IL0kufeajOSuYnk2ejN5Sd3wmxK83XDApUVVp4.png?auto=webp&s=80468bb1ade130e11d1be85db8f9267409e4af08', 'width': 1200}, 'variants': {}}]} |
Wanted to share tool for linking LM Studio/Ollama to a discord bot for mobile chatting! | 6 | I built this for myself while I was rating chats for RLHF training and wanted to do it from my phone. I felt this was the easiest way to get my models on mobile, saves chat logs, message ratings and has a quick and easy setup!
[https://github.com/ella0333/Local-LLM-Discord-Bot](https://github.com/ella0333/Local-LLM-Discord-Bot) (free/opensource) | 2025-10-11T17:59:36 | https://www.reddit.com/r/LocalLLaMA/comments/1o4290b/wanted_to_share_tool_for_linking_lm_studioollama/ | ella0333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o4290b | false | null | t3_1o4290b | /r/LocalLLaMA/comments/1o4290b/wanted_to_share_tool_for_linking_lm_studioollama/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=108&crop=smart&auto=webp&s=08fe060d01af7cafdc094a1e0a7a8e3f308616e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=216&crop=smart&auto=webp&s=b674cd5d26b9d74fcb306f8b4353835773b37cf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=320&crop=smart&auto=webp&s=ea850b9061c2e32809e864e4352b499eeb545168', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=640&crop=smart&auto=webp&s=bb6e933fcfbcb7f5592dbac2048bfe14b5a4fe97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=960&crop=smart&auto=webp&s=88ebf40a7245ec039c6b73c2b7d4bae15634950b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?width=1080&crop=smart&auto=webp&s=2c35c383f13f86111fa7a7171e52d26965716970', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ni2gJBTCgYrQFEFbOkI1W8LOh5UVk6xS4rMGO6Pp4yw.png?auto=webp&s=4f31076c66acee74f4e318028071b931a5a2ab21', 'width': 1200}, 'variants': {}}]} |
video to audio? | 5 | Do you know any open models to generate audio from the video?
I know video from image, audio to text, text to audio, but can't find audio from video. | 2025-10-11T17:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1o41j5y/video_to_audio/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o41j5y | false | null | t3_1o41j5y | /r/LocalLLaMA/comments/1o41j5y/video_to_audio/ | false | false | self | 5 | null |
I built a privacy-first desktop assistant that see your screen and learn when to help. | 1 | [removed] | 2025-10-11T17:23:48 | https://github.com/Vokturz/loyca-ai | Vokturz | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o41cye | false | null | t3_1o41cye | /r/LocalLLaMA/comments/1o41cye/i_built_a_privacyfirst_desktop_assistant_that_see/ | false | false | default | 1 | null |
Locally or Cloud (Chatgpt) | 1 | I have a 5090 card and 64 GB of RAM, for context.
So I run AI locally via LM Studio, and I also use ChatGPT. I have the business subscription (1-month trial). I love how fast things are in LM Studio, but it obviously has features that don't work, etc., and I have to use others.
I currently use LM Studio for text-based AI, and have been experimenting with ComfyUI for image/video generation, but I always see myself going back to ChatGPT. Why?
Even though I find it a little bit slower, it does everything: it can generate images (logos, banners, etc.), and it can also make files for me (I have made some websites and workflows just from ChatGPT). I feel like even though it's slower, it's much better than all the other local models I have used (and I have tried many of them).
Obviously I am kinda new to the AI scene, so I need to learn more about how to actually use the different AIs, but I just didn't know if you guys knew any easy local AIs that are like ChatGPT in how they behave.
Like you can chat to it, and get it to generate images, videos, files, etc.
Also as an autist and adhder who loves technology, I have spent so many hours the past 2 days, and this AI thing is all I can think of as of now, how to make a good system for all kind of things (video generation, image etc), and I hope I can learn some more coding etc. | 2025-10-11T16:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1o40oz8/locally_or_cloud_chatgpt/ | GiljaS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o40oz8 | false | null | t3_1o40oz8 | /r/LocalLLaMA/comments/1o40oz8/locally_or_cloud_chatgpt/ | false | false | self | 1 | null |
How to use GLM Plan + Claude Plan with Claude Code on macOS | 3 | 2025-10-11T16:38:25 | https://gist.github.com/RuiNelson/a5af5620404a0a9fbf3cf3e92fe97585 | Routine-Teach5293 | gist.github.com | 1970-01-01T00:00:00 | 0 | {} | 1o408is | false | null | t3_1o408is | /r/LocalLLaMA/comments/1o408is/how_to_use_glm_plan_claude_plan_with_claude_code/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} | |
Poor GPU Club : Anyone use Q3/Q2 quants of 20-40B Dense models? How's it? | 13 | FYI My System Info: ^(Intel(R) Core(TM) i7-14700HX 2.10 GHz |) **^(32 GB RAM)** ^(| 64-bit OS, x64-based processor | NVIDIA GeForce RTX 4060 Laptop GPU ()**^(8GB VRAM)**^() |) **^(Cores - 20)** ^(|) **^(Logical Processors - 28)**^(.)
Unfortunately I can't use Q4 or above quants of 20-40B Dense models; they'd be slower, with single-digit t/s.
How are Q3/Q2 quants of 20-40B Dense models? Talking about perplexity, KL divergence, etc. metrics. Are they worthy enough to use? I wish there were a portal with such metrics for all models & with all quants.
List of models I want to use:
* Magistral-Small-2509 ( **IQ3\_XXS** \- 9.41GB | Q3\_K\_S - 10.4GB | Q3\_K\_M - 11.5GB )
* Devstral-Small-2507 ( **IQ3\_XXS** \- 9.41GB | Q3\_K\_S - 10.4GB | Q3\_K\_M - 11.5GB )
* reka-flash-3.1 ( **IQ3\_XXS** \- 9.2GB )
* Seed-OSS-36B-Instruct ( IQ3\_XXS - 14.3GB | **IQ2\_XXS** \- 10.2GB )
* GLM-4-32B-0414 ( IQ3\_XXS - 13GB | **IQ2\_XXS** \- 9.26GB )
* Gemma-3-27B-it ( IQ3\_XXS - 10.8GB | **IQ2\_XXS** \- 7.85GB )
* Qwen3-32B ( IQ3\_XXS - 13GB | **IQ2\_XXS** \- 9.3GB )
* KAT-V1-40B ( **IQ2\_XXS** \- 11.1GB )
* KAT-Dev ( IQ3\_XXS - 12.8GB | **IQ2\_XXS** \- 9.1GB )
* EXAONE-4.0.1-32B ( IQ3\_XXS - 12.5GB | **IQ2\_XXS** \- 8.7GB )
* Falcon-H1-34B-Instruct ( IQ3\_XXS - 13.5GB | **IQ2\_XXS** \- 9.8GB )
Please share your thoughts. Thanks. | 2025-10-11T16:27:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o3zz30/poor_gpu_club_anyone_use_q3q2_quants_of_2040b/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3zz30 | false | null | t3_1o3zz30 | /r/LocalLLaMA/comments/1o3zz30/poor_gpu_club_anyone_use_q3q2_quants_of_2040b/ | false | false | self | 13 | null |
Best Bang for Buck? | 4 | I converted prices so may not match US stores but as a comparison between each other, what is the best deal here?
Is the 3060 the best option since cheapest price + 12GB VRAM?
I can't get a clear answer of whether the more recent technology of the newer cards cancels out the higher VRAM of the 3060.
* **MSI RTX 3060 12GB – $310**
* **PNY RTX 3070 8GB – $408**
* **ASUS RTX 4060 8GB – $365**
* **ASUS RTX 4060 Ti 8GB – $462**
* **ASUS RTX 5050 8GB – $354**
* **MSI RTX 5060 8GB – $326**
* **ASUS RTX 5060 Ti 16GB – $517**
Additional info: 1080p gaming + Ryzen 5 5600x + B550M DS3H | 2025-10-11T16:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/1o3zk2l/best_bang_for_buck/ | CranberryTraining614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3zk2l | false | null | t3_1o3zk2l | /r/LocalLLaMA/comments/1o3zk2l/best_bang_for_buck/ | false | false | self | 4 | null |
The LLM running on my local PC is too slow. | 15 | Hey, I'm getting really slow speeds and need a sanity check.
I'm only getting 1.0 t/s running a C4AI 111B model (63GB Q4\_GGUF) on an RTX 5090 with 128GB of RAM.
Is this normal, or is something wrong with my config? | 2025-10-11T16:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o3zgy7/the_llm_running_on_my_local_pc_is_too_slow/ | Glanble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3zgy7 | false | null | t3_1o3zgy7 | /r/LocalLLaMA/comments/1o3zgy7/the_llm_running_on_my_local_pc_is_too_slow/ | false | false | self | 15 | null |
optimize qwen3 4b | 1 | how i can optimize qwen-3-2507 for my potato pc, i heard that this was the best model | 2025-10-11T16:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o3zbpt/optimize_qwen3_4b/ | No-Selection2972 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3zbpt | false | null | t3_1o3zbpt | /r/LocalLLaMA/comments/1o3zbpt/optimize_qwen3_4b/ | false | false | self | 1 | null |
LM Studio + Open-WebUI - no reasoning | 9 | Hello, I run **LM Studio** \+ **Open-WebUI** with model **GPT-OSS-20b** but it's much worse on that web page than used locally in LM Studio, it answers completely stupid. I also don't see the **reasoning button**, checked models settings in Open-WebUI admin page but there were nothing matching, only vision, file input, code interpreter, etc. Do you know how to make it working same smart with open-webui as local? | 2025-10-11T15:57:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o3z7q9/lm_studio_openwebui_no_reasoning/ | michalpl7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3z7q9 | false | null | t3_1o3z7q9 | /r/LocalLLaMA/comments/1o3z7q9/lm_studio_openwebui_no_reasoning/ | false | false | self | 9 | null |
How Skygen AI can convince your boss to give you a raise | 0 | 2025-10-11T15:54:21 | https://v.redd.it/r9zu6j2c8iuf1 | cammmtheemann | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o3z4pm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r9zu6j2c8iuf1/DASHPlaylist.mpd?a=1762790078%2CYzdhYThhYTBmN2ViYjMxZjFkZmFhZjFjZjA4ZDg1YTdkOTg3ZDczZTIzZGIzMTVlZmNmZTFmZDI1NjllYzM4Yw%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/r9zu6j2c8iuf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/r9zu6j2c8iuf1/HLSPlaylist.m3u8?a=1762790078%2CZjQyOWIxNGVkMDU5MmQ5ODk2NDE3YjQzOGRiMzhlNDk5NTI5NzAwZTZlZGZjZjgwZmQxYzk4ZDU2YWVjMjI0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r9zu6j2c8iuf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1668}} | t3_1o3z4pm | /r/LocalLLaMA/comments/1o3z4pm/how_skygen_ai_can_convince_your_boss_to_give_you/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5482933ac96db768dce1059de3362e5551876ab', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=216&crop=smart&format=pjpg&auto=webp&s=ec63bce4a1ff10c14a4d718e794881747114affd', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=320&crop=smart&format=pjpg&auto=webp&s=119cde12da3b54e703435704d4ef4b6fd2788a1c', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=640&crop=smart&format=pjpg&auto=webp&s=b7a17752e9778a210cc871ddfc1a3e9411129ab4', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=960&crop=smart&format=pjpg&auto=webp&s=15b91812e55a198a3b0b9f00b4cff206931f861c', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c70a7a1f62a36eee3afa1fb781ff93ab1ba17e3c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MHFyMmJqMmM4aXVmMZhyTuSFJFtUREHIH5H7EIZmhfhHKTulSh4cL9QeVTuw.png?format=pjpg&auto=webp&s=d1bfacdb4c69b24c577b5a934c239f62a802d811', 'width': 1668}, 'variants': {}}]} | ||
Fighting Email Spam on Your Mail Server with LLMs — Privately | 40 | I'm sharing a blog post I wrote: https://cybercarnet.eu/posts/email-spam-llm/
It's about how to use local LLMs on your own mail server to identify and fight email spam.
This uses Mailcow, Rspamd, Ollama and a custom proxy in python.
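For anyone curious about the proxy piece, the core idea can be sketched in a few lines of Python: ask a local model over Ollama's HTTP API whether a message looks like spam. This is a simplified illustration of the concept; the blog post's actual proxy that sits between Rspamd and Ollama is more involved.

```python
import requests

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def classify_email(subject: str, body: str, model: str = "llama3.1:8b") -> str:
    """Return 'SPAM' or 'HAM' for an email, using a local model via Ollama."""
    prompt = (
        "You are an email spam filter. Answer with exactly one word: SPAM or HAM.\n\n"
        f"Subject: {subject}\n\nBody:\n{body[:4000]}"
    )
    r = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    verdict = r.json()["response"].strip().upper()
    return "SPAM" if "SPAM" in verdict else "HAM"
```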
Give your opinion on the post, and whether it could be useful for those of you who self-host mail servers.
Thanks | 2025-10-11T14:52:20 | https://www.reddit.com/r/LocalLLaMA/comments/1o3xluf/fighting_email_spam_on_your_mail_server_with_llms/ | unixf0x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3xluf | false | null | t3_1o3xluf | /r/LocalLLaMA/comments/1o3xluf/fighting_email_spam_on_your_mail_server_with_llms/ | false | false | self | 40 | null |
Tooling+Model recommendations for base (16G) mac Mini M4 as remote server? | 18 | I use Intel laptop as my main coding machine. Recently got myself a base model mac Mini and got surprised how fast it is for inference.
I'm still very new at using AI for coding. Not trying to be lazy, but want to get an advice in a large and quickly developing field from knowledgeable people.
>What I already tried: [Continue.dev](http://Continue.dev) in VS Code + ollama with qwen2.5-coder:7B. It works, but is there a better, more efficient way? I'm quite technical, so I won't mind running a more complex software stack if it brings significant improvements.
I'd like to automate some routine, boring programming tasks, for example: writing boilerplate html/js, writing bash scripts (yes, I very carefully check them before running), writing basic, boring python code. Nothing too complex, because I still prefer using my brain for actual work, plus even paid edge models are still not good at my area.
So I need a model that is:
* is good at the tasks specified above (should I use a specially optimized model, or are generic ones OK?)
* outputs at least 15+ tokens/sec
* would integrate nicely with tooling on my work machine
Also, what does a proper, modern VS Code setup look like nowadays?
| 2025-10-11T13:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o3vnt3/toolingmodel_recommendations_for_base_16g_mac/ | Valuable-Question706 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3vnt3 | false | null | t3_1o3vnt3 | /r/LocalLLaMA/comments/1o3vnt3/toolingmodel_recommendations_for_base_16g_mac/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
Can we use glm coding api in python code? | 3 | Need to do some concurrent requests to rewrite a small book. | 2025-10-11T13:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o3vjz1/can_we_use_glm_coding_api_in_python_code/ | n3pst3r_007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3vjz1 | false | null | t3_1o3vjz1 | /r/LocalLLaMA/comments/1o3vjz1/can_we_use_glm_coding_api_in_python_code/ | false | false | self | 3 | null |
How do I compare cost per token for serverless vs provisioned hardware? | 7 | How are you guys comparing the cost per token for serverless vs provisioned hardware?
Eg, aws bedrock vs an EC2 running vllm
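For the provisioned side, the naive arithmetic I've been using so far is the sketch below (all numbers are placeholders). Is this roughly the right way to think about it?

```python
# Rough provisioned-hardware cost per 1M tokens (placeholder numbers)
gpu_hourly_usd = 1.20      # on-demand price for the instance
throughput_tok_s = 2500    # measured batch throughput from your own vllm runs
utilization = 0.7          # fraction of the hour the GPU is actually busy

tokens_per_hour = throughput_tok_s * 3600 * utilization
cost_per_1m = gpu_hourly_usd / tokens_per_hour * 1_000_000
print(f"${cost_per_1m:.3f} per 1M tokens")  # compare to the serverless $/1M token price
```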
Mostly interested in batch inference costs | 2025-10-11T12:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1o3ttlo/how_do_i_compare_cost_per_token_for_serverless_vs/ | OverclockingUnicorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3ttlo | false | null | t3_1o3ttlo | /r/LocalLLaMA/comments/1o3ttlo/how_do_i_compare_cost_per_token_for_serverless_vs/ | false | false | self | 7 | null |
When will 100M context window be available for retail users? | 0 | I saw this [https://lablab.ai/tech/ltm-2-mini](https://lablab.ai/tech/ltm-2-mini) and was surprised new models these days can already handle 100M, while consumer models can only handle up to 1M currently.
Just wondering how long it will take before we can see 100M context window? Asking as I am building a summarization model, so ingesting 100M at 1 go will be better than using mapReduce or RAG strategy, is that right? | 2025-10-11T11:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o3su0l/when_will_100m_context_window_be_available_for/ | milkygirl21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3su0l | false | null | t3_1o3su0l | /r/LocalLLaMA/comments/1o3su0l/when_will_100m_context_window_be_available_for/ | false | false | self | 0 | null |
Every single COT terms score. | 10 | **1. Zeroing** \- By far the most annoying word I've ever heard, used by Gemini 2.5 Pro's Thinking but when people actually knows this meaning, it just a fancy word that means 'I am directing all the focus into this or that (subject)'. Its efficient (it only uses two tokens) and fancy but annoying. **8.7/10**.
**2. Synthesizing** \- GLM 4.6 uses this, Gemini 2.5 Pro Exp, and more. Its fancy wording too, but it just means 'I am combining this thought I have with the old thought I made', its good and it doesn't really sound the much annoying (IMO). It also helps the AI combining ideas and thoughts in one go so its **9/10**.
**3. Hmm** \- Used by Qwen (idk what model but Qwen), Deepseek V3.1 to V3.2, and more. Its not fancy, but it just means lots of things, sometimes Qwen 3 235B do this by pausing and hesitating before dumping more thoughts, and Deepseek uses this in the first word to think. It doesn't do much I would say, its by far mid and its only for pausing and hesitating to think. My only favorite part is that it uses two tokens, **6.6/10**.
**4. Confidence Score/Confident Score** \- For some reason, it's one of my favorite terms. It makes the LLM aware how confident it is with the answer or not. It can also make the LLM think more further ahead for some reason, but its not perfect, most LLM's hallucinate and would think its 5/5 or 10/10 in a wrong answer they would give so sometimes it had no point to use it. **7.1/10**.
**5. Alternatively** \- The old Deepseek times. ONE OF MY FAVORITE TERMS. It makes the LLM aware of its thoughts so it lowers the mistake.
The biggest con when the LLM uses these terms is that it BLOATS: where you expect a single thought to use only 941 tokens, it turns into a massive 5000 tokens before the response. It cripples my API usage so badly, and that's the biggest con. So it's a 5.4/10; I wish it could come back, but this time more efficient. | 2025-10-11T11:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o3sstr/every_single_cot_terms_score/ | Ambitious-a4s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3sstr | false | null | t3_1o3sstr | /r/LocalLLaMA/comments/1o3sstr/every_single_cot_terms_score/ | false | false | self | 10 | null |
Which company is likely to be the first to release the artificial general intelligence (AGI) model? | 0 | and most importantly would they release it as a commercial model or can the companies advancing open-weight models take the lead? | 2025-10-11T10:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o3so61/which_company_is_likely_to_be_the_first_to/ | zoxtech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3so61 | false | null | t3_1o3so61 | /r/LocalLLaMA/comments/1o3so61/which_company_is_likely_to_be_the_first_to/ | false | false | self | 0 | null |
I used Llama 3.3 70b to build Examsprint AI and get best Education Startup by Scout Forge | 0 | I am Aadarsh Pandey 13y/o from India. I am the developer and founder of Examsprint AI.
features of Examsprint AI are:
Chapters and topics list
Direct NCERT Links
Practice questions in form of Flashcards specialised for each chapter[For Class 11 and 12]
Personal AI chatbot to SOLVE any type of Questions regarding Physics , Chemistry , BIology and Maths
TOPPER'S Notes[ Variety from class 9 to 12]
AI chatbot that gives visual representation with textual answer for better understanding
JEE blueprint
Neet blueprint
Boards blueprint
School blueprints
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET COMING SOON
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
Upto date calendar for instant date previews | 2025-10-11T10:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o3si8q/i_used_llama_33_70b_to_build_examsprint_ai_and/ | SoggyAward2697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3si8q | false | null | t3_1o3si8q | /r/LocalLLaMA/comments/1o3si8q/i_used_llama_33_70b_to_build_examsprint_ai_and/ | false | false | self | 0 | null |
Adding search to open models | 6 | Right now I mostly use the small GLM Plan in Roo - the main missing thing is search, which is only available in the medium plan that costs five times as much.
Do I need to bite the bullet there and get the medium plan or are there better options? | 2025-10-11T10:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o3sch6/adding_search_to_open_models/ | Simple_Split5074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3sch6 | false | null | t3_1o3sch6 | /r/LocalLLaMA/comments/1o3sch6/adding_search_to_open_models/ | false | false | self | 6 | null |
Local LLM on iPhone 17 with RAG —unrealistic or can I just not find it? | 0 | Is there a real option for a decent LLM with RAG that could run on a base model iPhone 17?
My goal is to use local LLMs for most simple tasks and tasks related to my personal life. I'd like to save my tokens for the SOTA models for heavy-lifting tasks that aren't private. Apple Intelligence is nowhere to be found.
I know I could do it on my actual computer with ollama or similar but most tasks like this are lighter ones where I’m more likely to be on the go so it would be awesome if it could be on my phone.
Locally AI gave me a nice interface and LFM2, which seems to be more than I need. It advertises RAG, but I couldn't figure out how to get it to work. I did get Gemma 3 QAT 4B running on that app, and it seems more powerful than I need for most things.
Pocketpal and MLC Chat both worked, but I wasn't able to get them fully to where I was hoping, and I can't tell if they just don't have the features (MLC Chat has no conversation history?) or if I just can't figure them out.
I got LLM Farm downloaded and it just crashed on me before I got any real output.
Any suggestions would be really appreciated. I might just need to wait until one of the above options is updated, but I wanted to ask. SaaS or paid apps aren't as attractive to me because this feels like it should be totally possible open source. Open source would also give me more confidence in my data actually staying put, too. Thank you! | 2025-10-11T10:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o3sc0u/local_llm_on_iphone_17_with_rag_unrealistic_or/ | TheSnowCroow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3sc0u | false | null | t3_1o3sc0u | /r/LocalLLaMA/comments/1o3sc0u/local_llm_on_iphone_17_with_rag_unrealistic_or/ | false | false | self | 0 | null |
How should I translate movie subtitles? | 11 | (I hope this thread's replies can serve as a tutorial for other people interested in this. I was not able to find a similar discussion in this sub's search history. There were a few threads about "best model for X language", but nothing about translation workflows.)
I have English subtitles for movies and I'd like to convert them to Arabic. This is just a low-effort thing; I'm not planning on distributing it on the internet, just quickly putting out some SRTs for someone I know when none exist in his language.
Just to remind the folks at home, SRT subtitle files are text files that look like this:
1
00:00:12,222 --> 00:00:15,333
Oh no, the ship's sinking!
2
00:00:16,123 --> 00:00:20,456
To the life rafts, now!
...
For reference, the subtitle file for Interstellar is 58k tokens.
I don't think I should be just dumping 50k+ tokens into a small local LLM and asking it to translate. I'm worried about the following:
1. Hallucinations in the timestamps: this would completely mess up the subtitles, ruining the movie
2. Hallucinations/schizoness in the content: many LLMs degrade at such large contexts
3. The LLM might simply drop some entries altogether; I've seen it before
So what are my options here? How do hobbyist AI translators do it?
My thoughts in comments. | 2025-10-11T10:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1o3s74i/how_should_i_translate_movie_subtitles/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3s74i | false | null | t3_1o3s74i | /r/LocalLLaMA/comments/1o3s74i/how_should_i_translate_movie_subtitles/ | false | false | self | 11 | null |
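One workflow that addresses all three worries is to keep the indices and timestamps out of the model entirely: parse the SRT yourself, send only the dialogue lines in small numbered batches, verify that the same number of lines comes back, and reassemble against the original timing. Below is a minimal sketch of that approach, assuming an OpenAI-compatible local server such as llama.cpp's llama-server; the endpoint URL, model name, and batch size are placeholders.

```python
import re
import requests  # assumption: any OpenAI-compatible local endpoint (llama-server, vLLM, etc.)

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # placeholder, point at your own server
BATCH = 20  # cues per request; small enough to stay far from long-context degradation

def parse_srt(path):
    """Split an .srt file into (index, timing, text) entries."""
    blocks = re.split(r"\n\s*\n", open(path, encoding="utf-8-sig").read().strip())
    entries = []
    for block in blocks:
        lines = block.splitlines()
        entries.append((lines[0].strip(), lines[1].strip(), "\n".join(lines[2:]).strip()))
    return entries

def translate_batch(texts, target="Arabic"):
    """Send only the dialogue lines, numbered, and require the same numbering back."""
    numbered = "\n".join(f"{i + 1}. {t.replace(chr(10), ' / ')}" for i, t in enumerate(texts))
    prompt = (f"Translate these subtitle lines into {target}. "
              f"Return exactly {len(texts)} numbered lines and nothing else.\n\n{numbered}")
    r = requests.post(ENDPOINT, json={
        "model": "local",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }, timeout=600)
    raw = r.json()["choices"][0]["message"]["content"]
    lines = [re.sub(r"^\d+[.)]\s*", "", line).strip() for line in raw.splitlines() if line.strip()]
    if len(lines) != len(texts):
        raise ValueError(f"model returned {len(lines)} lines, expected {len(texts)}")  # catches dropped cues
    return lines

def translate_srt(src, dst):
    entries = parse_srt(src)
    translated = []
    for i in range(0, len(entries), BATCH):
        translated += translate_batch([text for _, _, text in entries[i:i + BATCH]])
    with open(dst, "w", encoding="utf-8") as f:
        for (idx, timing, _), text in zip(entries, translated):
            f.write(f"{idx}\n{timing}\n{text}\n\n")  # indices and timestamps copied verbatim

translate_srt("interstellar.en.srt", "interstellar.ar.srt")
```

Because the timestamps never pass through the model, worry 1 disappears; the small batches keep each request short enough to avoid long-context degradation (worry 2); and the line-count check catches dropped or merged entries (worry 3), where the simplest recovery is to retry that batch.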
Which one is sota ?! | 0 | 2025-10-11T10:11:31 | Cheryl_Apple | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o3rwj7 | false | null | t3_1o3rwj7 | /r/LocalLLaMA/comments/1o3rwj7/which_one_is_sota/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'K-gFthAZ-rmf8mCUlCOL-H3MCpm4iB3spM1aKgapyik', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=108&crop=smart&auto=webp&s=73b0817d9e38f56b6a41291620e5e5d1f4ae3b6f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=216&crop=smart&auto=webp&s=b21233c3cf0e832d68558b61f855f9004b16cf80', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=320&crop=smart&auto=webp&s=1c0ca84f02883d556a74836e1ef497c1b930b376', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=640&crop=smart&auto=webp&s=e619caa94e2680541a6b10c04100388acd08fc8a', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=960&crop=smart&auto=webp&s=230494dd389fef78f052b0baf928a9d42d69f8d4', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?width=1080&crop=smart&auto=webp&s=28e43631d170aa0ee9bfcc6692bd4fb3481ffb65', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/54m9hhm7jguf1.jpeg?auto=webp&s=fc193aedb377ff9c5d8f24e831c604c6e6e12d97', 'width': 1280}, 'variants': {}}]} | |||
NVIDIA 3060 12GB Vs AMD 6600XT | 0 | GPU: 6600XT.
CPU: AMD Ryzen 5 5600X
Motherboard: B550
Monitor: 1080p
I do not play mainstream games or multiplayer games for that matter. The most graphics-intensive game I play would be BeamNG.
I have recently gotten into image generation and local LLMs and would like to explore further into the whole AI rabbit hole.
I am not interested in saving up for better cards, considering that getting an Nvidia GPU with 12GB(+) VRAM for a price similar to the 3060 is highly unlikely.
1) Would a 3060 12GB be sufficient for most AI things while being significantly better than the 6600XT strictly for AI?
2) For gaming in general, would there be a major loss/gain in performance? | 2025-10-11T10:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o3rtxk/nvidia_3060_12gb_vs_amd_6600xt/ | Cute_Mark543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3rtxk | false | null | t3_1o3rtxk | /r/LocalLLaMA/comments/1o3rtxk/nvidia_3060_12gb_vs_amd_6600xt/ | false | false | self | 0 | null |
Broke grads in Bangalore , tiny PG, huge product dreams, need devs & AI engineers to join the ride | 1 | [deleted] | 2025-10-11T10:03:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1o3rrqz | false | null | t3_1o3rrqz | /r/LocalLLaMA/comments/1o3rrqz/broke_grads_in_bangalore_tiny_pg_huge_product/ | false | false | default | 1 | null | ||
Best model for CV reading? | 0 | I have a web application which currently uses the DeepSeek API to read CVs and parse them into JSON format for storing on the system.
I'd like to find a lightweight model that I can run locally (or on the server where my web application is hosted) to do this same task. There are so many models on Ollama that I'm not sure which one is best. Presumably I don't need a large one, as the task is very specific.
Any help appreciated, thanks! | 2025-10-11T09:28:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o3r809/best_model_for_cv_reading/ | propostor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3r809 | false | null | t3_1o3r809 | /r/LocalLLaMA/comments/1o3r809/best_model_for_cv_reading/ | false | false | self | 0 | null |
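If the goal is just to swap the DeepSeek call for a local one, the same parse-to-JSON step works with a small instruct model behind Ollama's chat API using its JSON output mode. A minimal sketch, assuming a local Ollama server on the default port; the model name and schema fields are placeholders, not details from the original application.

```python
import json
import requests  # assumption: Ollama running locally on its default port

SCHEMA_HINT = (
    "Return ONLY JSON with keys: name, email, phone, skills (list of strings), "
    "experience (list of {company, title, start, end}), education (list of {school, degree, year})."
)

def parse_cv(cv_text: str, model: str = "qwen2.5:7b-instruct") -> dict:
    # model name is a placeholder; any small instruct model that follows instructions should work
    r = requests.post("http://localhost:11434/api/chat", json={
        "model": model,
        "format": "json",   # Ollama constrains the reply to valid JSON
        "stream": False,
        "messages": [
            {"role": "system", "content": SCHEMA_HINT},
            {"role": "user", "content": cv_text},
        ],
    }, timeout=300)
    return json.loads(r.json()["message"]["content"])
```

A 4B-8B instruct model is usually enough for field extraction like this, and the JSON mode keeps the output parseable even when the model occasionally rambles.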
Quality degradation of fp8 quantization? | 16 | So I've been entirely in the GGUF ecosystem up till now, but I've been wanting to try out vllm for a potential speed boost as well as batching.
Generally with GGUFs Q8\_0 is considered to be so close to full precision that it's practically indistinguishable. It's my understanding that Q8\_0 is a bit closer to full precision than FP8, but how much worse is FP8 than full precision exactly? As a reference, is it between Q8 and Q6? Worse? | 2025-10-11T09:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o3r6et/quality_degradation_of_fp8_quantization/ | Confident-Willow5457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3r6et | false | null | t3_1o3r6et | /r/LocalLLaMA/comments/1o3r6et/quality_degradation_of_fp8_quantization/ | false | false | self | 16 | null |
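One way to get a concrete number for your own model is to measure the round-trip error of each format on the actual weights. A rough sketch below, assuming PyTorch 2.1+ for the float8 dtype; note the plain cast is a simplification, since real FP8 inference in vLLM applies per-tensor or per-channel scales that narrow the gap.

```python
import torch  # assumption: PyTorch >= 2.1 for the float8 dtypes

x = torch.randn(4096, 4096)  # stand-in for one weight matrix; substitute real weights for a fair test

# FP8 (e4m3) round trip: a straight cast, no calibration scale
fp8 = x.to(torch.float8_e4m3fn).to(torch.float32)

# Q8_0-style round trip: blocks of 32 values share one scale, int8 payload
blocks = x.reshape(-1, 32)
scale = (blocks.abs().amax(dim=1, keepdim=True) / 127.0).clamp(min=1e-12)
q8 = (torch.clamp((blocks / scale).round(), -127, 127) * scale).reshape(x.shape)

for name, y in [("fp8 e4m3 cast", fp8), ("Q8_0-style int8", q8)]:
    rel_rms = ((x - y).pow(2).mean().sqrt() / x.pow(2).mean().sqrt()).item()
    print(f"{name}: relative RMS error ~ {rel_rms:.4%}")
```

Weight round-trip error is only a proxy; perplexity or task evals on the actual served model are the real test, but this at least shows how the two formats behave on the same tensor.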
Kwaipilot/KAT-Dev-72B-Exp seems to be a great coding model? | 19 | [https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp](https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp)
https://preview.redd.it/j0m718zu7guf1.png?width=1186&format=png&auto=webp&s=0f04d36fe4ab33026c8bafc9cb90592b260562ec
| 2025-10-11T09:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/1o3qx3e/kwaipilotkatdev72bexp_seems_to_be_a_great_coding/ | Human-Gas-1288 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3qx3e | false | null | t3_1o3qx3e | /r/LocalLLaMA/comments/1o3qx3e/kwaipilotkatdev72bexp_seems_to_be_a_great_coding/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=108&crop=smart&auto=webp&s=cf18d5cf8ded0a10c6a0af997508a324c1a4598f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=216&crop=smart&auto=webp&s=ea48d1283c6607aaf89ea7f8ffb47f5bc99ce20d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=320&crop=smart&auto=webp&s=cba31dfcbe87d2c0403c3a3c70b8bfe95eb1e2d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=640&crop=smart&auto=webp&s=6a34ebb6475d3fb833395063cb58949ed7cc21cd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=960&crop=smart&auto=webp&s=4d6cc5c98d3e061ccdb88c37a77110422f322645', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?width=1080&crop=smart&auto=webp&s=ab0d2724e27fd5da8661dce70df2d4d765794815', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rxyepxgYUof3_-pxPA16Sj6OoiuoO3OTQiZrKV-cxps.png?auto=webp&s=d56a39ad375ea48de4663c2472edcdff1c7fc561', 'width': 1200}, 'variants': {}}]} | |
Future plans..? | 0 | A question for devs working on AI-based systems: if the tools were significantly better, what would you do differently?
Say LLM context window size, speed and reasoning capability are an order of magnitude better - what impact?
Why I ask is, building from scratch, it's taken me almost a year to roughly lay the foundations of a system. It may take another 6 months to make it genuinely useful. That timescale is such that it's worth planning for *tomorrow's* tech.
We are about due another jump forward. I doubt it'll be a linear step, like a GPT 6, but rather a phase change in architecture. Unpredictable. But better be prepared... somehow.
Need expert recommendations for a scalable, portable midrange AI hardware setup (2025) | 5 | Hi all,
I’m a bit lost when it comes to configuring AI hardware and would really appreciate some expert advice. My goal is to start with a solid midrange setup that is truly expandable — meaning I want to be able to add more GPUs, RAM, and storage later on without major hassle. Ideally, this setup should also be portable enough to bring to client sites when needed.
Right now, the main components I’m considering include:
* **CPU:** AMD Threadripper PRO or EPYC 7004 series for high core count and ECC support
* **GPU:** NVIDIA RTX 4090 or RTX 6000 Ada for strong AI performance and CUDA compatibility
* **RAM:** Minimum 128GB DDR5 ECC with at least 8 slots for future upgrades
* **Storage:** NVMe SSDs (1TB system drive + multiple TBs for data with RAID options)
* **Mainboard:** Supports multiple PCIe 5.0 x16 slots for GPU expansion, robust VRM for stable power delivery
* **Chassis:** Portable midtower or flight case with good airflow and room for multiple GPUs
* **Power supply:** 1200W or higher modular platinum rated PSU, with capacity for future GPU additions
Has anyone built or used similar systems recently? What are the key things to watch out for when balancing portability, cooling, and expandability? Any advice on choosing between workstation motherboards vs. small server boards for such setups?
Thanks a lot in advance! | 2025-10-11T07:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/1o3p83a/need_expert_recommendations_for_a_scalable/ | Beautiful-Buy4321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3p83a | false | null | t3_1o3p83a | /r/LocalLLaMA/comments/1o3p83a/need_expert_recommendations_for_a_scalable/ | false | false | self | 5 | null |
Self Hosted AI Advice for Repetitive Work Task | 1 | I have successfully managed to train both Copilot and ChatGPT to carry out a fairly repetitive task I have at work. This involves going through pages of timber cutting details and combining them in efficient ways to create the least amount of waste from the timbers that we stock. The output would be a PDF (or similar) report that can be printed out.
I can see that if I put more time into fine tuning the way it works, it would only further improve.
Before I go down that path, I'd be much more comfortable hosting my own version so these resources aren't either taken away without notice, or put behind a hefty paywall.
Are there any good self-hosted solutions (ideally something that runs dockerised or within a Proxmox container) that would be good at carrying out this sort of task? | 2025-10-11T06:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o3ouqm/self_hosted_ai_advise_for_repetative_work_task/ | voyto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o3ouqm | false | null | t3_1o3ouqm | /r/LocalLLaMA/comments/1o3ouqm/self_hosted_ai_advise_for_repetative_work_task/ | false | false | self | 1 | null |
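Worth noting that the combining step itself is the classic one-dimensional cutting-stock problem, so whichever self-hosted model ends up reading the cutting details, a small deterministic pass can do (or sanity-check) the actual packing and keep the waste figure reproducible. A first-fit-decreasing sketch below; the stock length, kerf, and cut list are made-up placeholders.

```python
def plan_cuts(required_mm, stock_len=4800, kerf=3):
    """First-fit-decreasing: place each required cut into the first stock length
    that still has room, reserving a saw-kerf allowance per cut."""
    boards = []  # each board is [remaining_mm, [cuts]]
    for cut in sorted(required_mm, reverse=True):
        for board in boards:
            if board[0] >= cut + kerf:
                board[0] -= cut + kerf
                board[1].append(cut)
                break
        else:
            boards.append([stock_len - cut - kerf, [cut]])
    return boards

# toy cutting list, lengths in mm
for i, (left, cuts) in enumerate(plan_cuts([2100, 1800, 1800, 950, 900, 620, 450, 450]), 1):
    print(f"board {i}: cuts={cuts}, offcut={left} mm")
```

The LLM then only has to handle the messy part, turning pages of cutting details into a plain list of lengths, while the packing itself stays deterministic.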
What the sub feels like lately | 825 | 2025-10-11T06:47:33 | marderbot13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o3opq5 | false | null | t3_1o3opq5 | /r/LocalLLaMA/comments/1o3opq5/what_the_sub_feels_like_lately/ | false | false | 825 | {'enabled': True, 'images': [{'id': '3CLclIHbJPk7x6N4ZpvDuvVgfK1WBXFF8dJwOPgz7pw', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/92s8znbxifuf1.jpeg?width=108&crop=smart&auto=webp&s=745834e2d08efad26d8150f4c279cab69d23b8c5', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/92s8znbxifuf1.jpeg?width=216&crop=smart&auto=webp&s=cb63d90fc28cc0d01c21201b16127902d48d5167', 'width': 216}, {'height': 236, 'url': 'https://preview.redd.it/92s8znbxifuf1.jpeg?width=320&crop=smart&auto=webp&s=fb4866bff0d572386ea47fc19d643a6b2261fbdb', 'width': 320}], 'source': {'height': 430, 'url': 'https://preview.redd.it/92s8znbxifuf1.jpeg?auto=webp&s=1ab1de5f9c86d4eadfdb03593277608a0f478474', 'width': 581}, 'variants': {}}]} |