title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Pixtral 12 b on ollama | 2 | Is there a version of Pixtral 12B that actually runs on Ollama? I tried a few from Hugging Face, but they don't seem to support Ollama. | 2025-09-29T12:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nth5js/pixtral_12_b_on_ollama/ | Global-Vermicelli925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nth5js | false | null | t3_1nth5js | /r/LocalLLaMA/comments/1nth5js/pixtral_12_b_on_ollama/ | false | false | self | 2 | null |
I built EdgeBox, an open-source local sandbox with a full GUI desktop, all controllable via the MCP protocol. | 14 | Hey LocalLLaMa community,
I always wanted my MCP agents to do more than just execute code—I wanted them to actually use a GUI. So, I built **EdgeBox**.
It's a free, open-source desktop app that gives your agent a **local sandbox with a full GUI desktop**, all controllable via the MCP protocol.
# Core Features:
* **Zero-Config Local MCP Server**: Works out of the box, no setup required.
* **Control the Desktop via MCP**: Provides tools like `desktop_mouse_click` and `desktop_screenshot` to let the agent operate the GUI.
* **Built-in Code Interpreter & Filesystem**: Includes all the core tools you need, like `execute_python` and `fs_write`.
The project is open-source, and I'd love for you to try it out and give some feedback!
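To make the tool surface concrete, here is a minimal sketch of driving the tools listed above from a generic MCP client (using the official `mcp` Python SDK; the launch command and the tool argument shapes are assumptions based on the tool names in the post, not EdgeBox's documented interface):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Assumption: the EdgeBox MCP server can be launched as a local process.
    # Replace the command with however your EdgeBox install exposes its server.
    server = StdioServerParameters(command="edgebox-mcp")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Grab a screenshot of the sandbox desktop, then click somewhere on it.
            await session.call_tool("desktop_screenshot", arguments={})
            await session.call_tool("desktop_mouse_click", arguments={"x": 200, "y": 150})

            # Run a snippet inside the sandbox's code interpreter.
            result = await session.call_tool("execute_python", arguments={"code": "print(2 + 2)"})
            print(result)

asyncio.run(main())
```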
**GitHub Repo (includes downloads):** [https://github.com/BIGPPWONG/edgebox](https://github.com/BIGPPWONG/edgebox)
Thanks, everyone! | 2025-09-29T12:25:04 | Diao_nasing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntglg5 | false | null | t3_1ntglg5 | /r/LocalLLaMA/comments/1ntglg5/i_built_edgebox_an_opensource_local_sandbox_with/ | false | false | default | 14 | {'enabled': True, 'images': [{'id': '2w6bjp03k3sf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=108&crop=smart&format=png8&s=a4829519b7bfbebbf5148532fe52d42a476078ec', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=216&crop=smart&format=png8&s=ade3bb13e851c6666ed1a45ed231bb2fccd6f9b8', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=320&crop=smart&format=png8&s=dd01ad4777bf7dfbc86221635c2bce6d2b9735b7', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=640&crop=smart&format=png8&s=43cd17bce54b738e45f278e6bdb45da35e0c5e89', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=960&crop=smart&format=png8&s=59dc6423441d89392d2387d6c7ceb8c841802744', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=1080&crop=smart&format=png8&s=506fa5e38efa773a9a09257300a0ba5c1ce7eb59', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?format=png8&s=f9bbc2cbb204c887a331dff0035f6e05a5d6658d', 'width': 1920}, 'variants': {'gif': {'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=108&crop=smart&s=f6ab386bd1f11e90915e9a1f4e69e3356cb9eb8c', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=216&crop=smart&s=6c7a4615d2039c97ac27b6e712b0a25594b5bba1', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=320&crop=smart&s=f91f13bca0da7c753057f1a0387836f7aba40c72', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=640&crop=smart&s=3bbe7dc260557332836c22fecb2f56bc0083ce5c', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=960&crop=smart&s=ee554907df080cc643e1dae76e4f1a4091aed3ff', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=1080&crop=smart&s=72f1ace3a368d39fc2cf883f2976fe5c8f21cbc3', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?s=3917ad7de6bd3a9c46f9d4472e30b31a72ad4387', 'width': 1920}}, 'mp4': {'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=108&format=mp4&s=e5f298d90e6cd8cef021bb90bdc5cb8e384cf7e5', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=216&format=mp4&s=296b5cb7fba285999f52957ec6e3ffa71828381b', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=320&format=mp4&s=6f58900a592a7d0fdbcffeb1ce660b0c9375148b', 'width': 320}, {'height': 348, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=640&format=mp4&s=47a20829e707cb601e6f50574f608a76ddd0a4be', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=960&format=mp4&s=8cf3ec8ffacf8d222980ed93b13395d803768202', 'width': 960}, {'height': 588, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?width=1080&format=mp4&s=a166aaeb87d452779177c4464f202b14e7a6cbef', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://preview.redd.it/2w6bjp03k3sf1.gif?format=mp4&s=73e1142ac4f590cddc3c3065bc76e5fb27940e8e', 'width': 1920}}}}]} | |
Literally me this weekend, after 2+ hours of trying I did not manage to make AWQ quant work on a100, meanwhile the same quant works in vLLM without any problems... | 57 | 2025-09-29T12:11:55 | Theio666 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntgbag | false | null | t3_1ntgbag | /r/LocalLLaMA/comments/1ntgbag/literally_me_this_weekend_after_2_hours_of_trying/ | false | false | 57 | {'enabled': True, 'images': [{'id': 'e84-UFdsQyf8g50gFONBJkoFPTcgJ7H05UT7pUZLrGI', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=108&crop=smart&auto=webp&s=d41c9016e31a84924de31d1352369e1b21148390', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=216&crop=smart&auto=webp&s=dd0c31ea30c149fef22f7fc63214517d6f6b69bb', 'width': 216}, {'height': 323, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=320&crop=smart&auto=webp&s=151208546c0e418b2c6ce3c3083c968fcf272c3c', 'width': 320}, {'height': 646, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=640&crop=smart&auto=webp&s=bf661e0987faa8d24ab8bfb1709f07e3b7f14ac5', 'width': 640}, {'height': 970, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=960&crop=smart&auto=webp&s=b29ef03b49e2f54d513ef21e29630490871a70d2', 'width': 960}, {'height': 1091, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?width=1080&crop=smart&auto=webp&s=04a4ce66ca1a941e4e89fa0d284fa4fc7053807b', 'width': 1080}], 'source': {'height': 2224, 'url': 'https://preview.redd.it/7mx203wgh3sf1.png?auto=webp&s=741a036403254dc87646898b0a6f481abd2bcf61', 'width': 2200}, 'variants': {}}]} | |||
Any real alternatives to NotebookLM (closed-corpus only)? | 3 | NotebookLM is great because it only works with the documents you feed it - a true closed-corpus setup. But if it were ever down on an important day, I’d be stuck.
Does anyone know of *actual* alternatives that:
* Only use the sources you upload (no fallback to internet or general pretraining),
* Are reliable and user-friendly,
* Run on different infrastructure (so I’m not tied to Google alone)?
I’ve seen Perplexity Spaces, Claude Projects, and Custom GPTs, but they still mix in model pretraining or external knowledge. LocalGPT / PrivateGPT exist, but they’re not yet at NotebookLM’s reasoning level.
Is NotebookLM still unique here, or are there other tools (commercial or open source) that really match it? | 2025-09-29T12:08:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ntg8qz/any_real_alternatives_to_notebooklm_closedcorpus/ | Warm-Fox-3459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntg8qz | false | null | t3_1ntg8qz | /r/LocalLLaMA/comments/1ntg8qz/any_real_alternatives_to_notebooklm_closedcorpus/ | false | false | self | 3 | null |
Chinese AI Labs Tier List | 689 | 2025-09-29T12:05:39 | sahilypatel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntg6sp | false | null | t3_1ntg6sp | /r/LocalLLaMA/comments/1ntg6sp/chinese_ai_labs_tier_list/ | false | false | default | 689 | {'enabled': True, 'images': [{'id': 'ur65noupg3sf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?width=108&crop=smart&auto=webp&s=f0e8655c75df5c480a5593f0d89829e605441c62', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?width=216&crop=smart&auto=webp&s=1638579c5f9ef154f907db5cf2da6c49ac102df7', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?width=320&crop=smart&auto=webp&s=45ed591190ff12b365c91589d9ff8a9dcbc3cbf8', 'width': 320}, {'height': 481, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?width=640&crop=smart&auto=webp&s=19a3d6f6cec05bb06281985755fbed368e5c9ecf', 'width': 640}, {'height': 721, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?width=960&crop=smart&auto=webp&s=74fb5821fa294ba222cf3bc57f878afe02c2901c', 'width': 960}], 'source': {'height': 728, 'url': 'https://preview.redd.it/ur65noupg3sf1.png?auto=webp&s=ddb1a79f3b754863ef6d0d19e632ed2d98d4d8b6', 'width': 968}, 'variants': {}}]} | ||
Why no small & medium size models from Deepseek? | 24 | The last things I downloaded were their distillations (Qwen 1.5B, 7B, 14B & Llama 8B) during the R1 release last Jan/Feb. Since then, most of their models have been 600B+ in size. My hardware (8GB VRAM, 32GB RAM) can't even touch those.
It would be great if they released small & medium size models like Qwen has done, plus a couple of MoE models, particularly one in the 30-40B range.
BTW lucky big rig folks, enjoy DeepSeek-V3.2-Exp soon onwards. | 2025-09-29T11:34:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ntfkjq/why_no_small_medium_size_models_from_deepseek/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntfkjq | false | null | t3_1ntfkjq | /r/LocalLLaMA/comments/1ntfkjq/why_no_small_medium_size_models_from_deepseek/ | false | false | self | 24 | null |
DeepSeek Updates API Pricing (DeepSeek-V3.2-Exp) | 85 | $0.028 / 1M Input Tokens (Cache Hit), $0.28 / 1M Input Tokens (Cache Miss), $0.42 / 1M Output Tokens | 2025-09-29T11:21:00 | Agwinao | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntfbm0 | false | null | t3_1ntfbm0 | /r/LocalLLaMA/comments/1ntfbm0/deepseek_updates_api_pricing_deepseekv32exp/ | false | false | default | 85 | {'enabled': True, 'images': [{'id': '0hwzyvjr83sf1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/0hwzyvjr83sf1.png?width=108&crop=smart&auto=webp&s=1fbb8f8007026b8fd790698bc4f18e9895430aef', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/0hwzyvjr83sf1.png?width=216&crop=smart&auto=webp&s=d876c77140e2780b2ffe892b82aa08ed3d068c8d', 'width': 216}, {'height': 284, 'url': 'https://preview.redd.it/0hwzyvjr83sf1.png?width=320&crop=smart&auto=webp&s=81278c017a6d397ce1e552fdf94c7948c0595869', 'width': 320}], 'source': {'height': 540, 'url': 'https://preview.redd.it/0hwzyvjr83sf1.png?auto=webp&s=32dc21d2bec98cde5799495d3e55a016b00a5dd1', 'width': 607}, 'variants': {}}]} | |
Distributed CPU inference across a bunch of low-end computers with Kalavai? | 4 | Here's what I'm thinking:
- Obtain a bunch of used, heterogeneous, low-spec computers for super cheap or even free. They might only have 8 GB of RAM, but I'll get say 10 of them.
- Run something like Qwen3-Next-80B-A3B distributed across them with Kalavai
Is it viable? Has anyone tried? | 2025-09-29T11:07:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ntf2sy/distributed_cpu_inference_across_a_bunch_of/ | Stunning_Energy_7028 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntf2sy | false | null | t3_1ntf2sy | /r/LocalLLaMA/comments/1ntf2sy/distributed_cpu_inference_across_a_bunch_of/ | false | false | self | 4 | null |
Does anyone have a link to the paper for the new sparse attention arch of Deepseek-v3.2? | 11 | The only thing I have found is the Native Sparse Attention paper they released in February. It seems like they could be using Native Sparse Attention, but I can't be sure. Whatever they are using is compatible with MLA.
NSA paper: [https://arxiv.org/abs/2502.11089](https://arxiv.org/abs/2502.11089) | 2025-09-29T11:07:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ntf2pk/does_anyone_have_a_link_to_the_paper_for_the_new/ | Euphoric_Ad9500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntf2pk | false | null | t3_1ntf2pk | /r/LocalLLaMA/comments/1ntf2pk/does_anyone_have_a_link_to_the_paper_for_the_new/ | false | false | self | 11 | null |
Found a hidden gem! benchmark RAG frameworks side by side, pick the right one in minutes! | 0 | I’ve been diving deep into RAG lately and ran into the same problem many of you probably have: there are *way* too many options. Naive RAG, GraphRAG, Self-RAG, LangChain, RAGFlow, DocGPT… just setting them up takes forever, let alone figuring out which one actually works best for my use case.
Then I stumbled on this little project that feels like a hidden gem:
👉 [Github](https://github.com/RagView/RagView)
👉 [Ragview](https://ragview.ai/)
What it does is simple but super useful: it integrates multiple open-source RAG pipelines and runs the *same queries* across them, so you can directly compare:
* Answer accuracy
* Context precision / recall
* Overall score
* Token usage / latency
You can even test on your *own dataset*, which makes the results way more relevant. Instead of endless trial and error, you get a clear picture in just a few minutes of which setup fits your needs best.
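To make the comparison concrete, here is an illustrative sketch of the kind of side-by-side loop described above. This is not RagView's actual API; the pipeline objects, the `answer_correctness` scorer, and the dataset format are all stand-ins for the example:

```python
import time

def compare_pipelines(pipelines, dataset, answer_correctness):
    """Run the same QA dataset through several RAG pipelines and tabulate results.

    `pipelines` maps a name to any object with a .query(question) -> answer method;
    `answer_correctness(answer, reference)` returns a score in [0, 1].
    Both are stand-ins for whichever frameworks you are evaluating.
    """
    results = {}
    for name, pipeline in pipelines.items():
        scores, start = [], time.perf_counter()
        for item in dataset:  # e.g. [{"question": ..., "reference": ...}, ...]
            answer = pipeline.query(item["question"])
            scores.append(answer_correctness(answer, item["reference"]))
        results[name] = {
            "accuracy": sum(scores) / len(scores),
            "seconds_per_query": (time.perf_counter() - start) / len(dataset),
        }
    return results
```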
The project is still early, but I think the idea is really practical. I tried it and it honestly saved me a ton of time.
If you’re struggling with choosing the “right” RAG flavor, definitely worth checking out. Maybe drop them a ⭐ if you find it useful. | 2025-09-29T10:52:38 | https://v.redd.it/rlq9o79t23sf1 | Cheryl_Apple | /r/LocalLLaMA/comments/1ntetft/found_a_hidden_gem_benchmark_rag_frameworks_side/ | 1970-01-01T00:00:00 | 0 | {} | 1ntetft | false | null | t3_1ntetft | /r/LocalLLaMA/comments/1ntetft/found_a_hidden_gem_benchmark_rag_frameworks_side/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=108&crop=smart&format=pjpg&auto=webp&s=3fd8eb4697969ee2b954e200b0083b8795f3af50', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=216&crop=smart&format=pjpg&auto=webp&s=ab5db76055ac7b4f24bdc953384e0dbbf08cd149', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=320&crop=smart&format=pjpg&auto=webp&s=a7230f0760cdce084d82e6f9c2a163036d4e35dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=640&crop=smart&format=pjpg&auto=webp&s=49385ac92ce02b3be03230597e9a1cf4c690216f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=960&crop=smart&format=pjpg&auto=webp&s=59ed7ddc12c46c23c54081165abff97c0f89f22b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=eae7cad216a00ac60d385eeb294ed69ef10d1d71', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/Y25vd3NhOXQyM3NmMQXsXx8nkWwefWwwK7QNRFl5boDZ5-lgbekB56T23vLv.png?format=pjpg&auto=webp&s=9a0c613d3edac31465051199e36982331c2ea108', 'width': 2560}, 'variants': {}}]} | |
Upcoming Claude Sonnet 4.5 release | 0 | * Estimated to release today
* $3/1M input, $15/1M output
* Improvements for tool calling, agents, long tasks, coding, computer use, finance
* SOTA in SWE-bench verified, better IF, grounding, parallel tool calls, (much?) better long-context performance
* Prompts might need changing | 2025-09-29T10:35:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nteizx/upcoming_claude_sonnet_45_release/ | Certain_Champion1515 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nteizx | false | null | t3_1nteizx | /r/LocalLLaMA/comments/1nteizx/upcoming_claude_sonnet_45_release/ | false | false | self | 0 | null |
Incomplete output from finetuned llama3.1. | 0 | I run Ollama with a fine-tuned llama3.1 in 3 PowerShell terminals in parallel. I get correct output in the first terminal, but incomplete output in the 2nd and 3rd terminals. Can someone guide me on this problem?
| 2025-09-29T10:34:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nteiji/incomplete_output_from_finetuned_llama31/ | PurpleCheap1285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nteiji | false | null | t3_1nteiji | /r/LocalLLaMA/comments/1nteiji/incomplete_output_from_finetuned_llama31/ | false | false | self | 0 | null |
What is the limits of huggingface.co ? | 1 | I have a PC with only a CPU, no GPU. I tried to run Coqui and other models for text-to-speech and speech-to-text conversion, but there are lots of dependency issues, and I'm also trying to transcribe a whole document that contains SSML markup. Then my colleague suggested Hugging Face, so I wouldn't have to bother installing and running everything on my slow PC. But:
What is the difference between running locally on my PC and using huggingface.co?
Does the website have limits on transcribing text or audio, like a certain quota or time period?
Or does the quality differ, e.g., free means low quality and a subscription means high quality?
Is it completely free, or are there constraints? | 2025-09-29T10:31:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ntegy7/what_is_the_limits_of_huggingfaceco/ | Careful_Thing622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntegy7 | false | null | t3_1ntegy7 | /r/LocalLLaMA/comments/1ntegy7/what_is_the_limits_of_huggingfaceco/ | false | false | self | 1 | null |
Official: DeepSeek-V3.2-Exp | 11 | Model: [https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp)
Tech report: [https://github.com/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/DeepSeek\_V3\_2.pdf](https://github.com/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/DeepSeek_V3_2.pdf)
DeepSeek on 𝕏: [https://x.com/deepseek\_ai/status/1972604768309871061](https://x.com/deepseek_ai/status/1972604768309871061) | 2025-09-29T10:26:18 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntedx1 | false | null | t3_1ntedx1 | /r/LocalLLaMA/comments/1ntedx1/official_deepseekv32exp/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': '4acvj1zry2sf1', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/4acvj1zry2sf1.jpeg?width=108&crop=smart&auto=webp&s=56abe662f15525d3e7041f3c2e34db616b5dca6b', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/4acvj1zry2sf1.jpeg?width=216&crop=smart&auto=webp&s=d310b3c67a08a5c6eddcb4b281903698e2c945e8', 'width': 216}, {'height': 351, 'url': 'https://preview.redd.it/4acvj1zry2sf1.jpeg?width=320&crop=smart&auto=webp&s=822389ee88d7e843b8b2047837a1ce08a2a4dae2', 'width': 320}, {'height': 703, 'url': 'https://preview.redd.it/4acvj1zry2sf1.jpeg?width=640&crop=smart&auto=webp&s=0d0be4f54fc4f141e32496ea57718012d97d0ed7', 'width': 640}], 'source': {'height': 776, 'url': 'https://preview.redd.it/4acvj1zry2sf1.jpeg?auto=webp&s=e66a5e4dcd385e38eebbeeccf70091b71ae9ecc7', 'width': 706}, 'variants': {}}]} | |
Deepseek-Ai/DeepSeek-V3.2-Exp and Deepseek-ai/DeepSeek-V3.2-Exp-Base • HuggingFace | 156 | [https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp)
[https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp-Base) | 2025-09-29T10:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nte4j1/deepseekaideepseekv32exp_and/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nte4j1 | false | null | t3_1nte4j1 | /r/LocalLLaMA/comments/1nte4j1/deepseekaideepseekv32exp_and/ | false | false | self | 156 | {'enabled': False, 'images': [{'id': 'VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=108&crop=smart&auto=webp&s=dc59520747639462ea797a4344280349bdd2a7a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=216&crop=smart&auto=webp&s=ba49c9dd57adaf05e2347e6683676c4a0ee523bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=320&crop=smart&auto=webp&s=19ff3c4962cff57c95df1b6f08582aacddd74a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=640&crop=smart&auto=webp&s=ba6d5dd6d41a5218ce7ddbe2cce44e354d9f63ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=960&crop=smart&auto=webp&s=28336cb78f913791557c42993248be0a9c6b6b4b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=1080&crop=smart&auto=webp&s=e5e14e75127a09d40015ad3421d771aed2f41286', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?auto=webp&s=ac79e54ddf8a7ad8f0504a37f1b86b8c11b4ba64', 'width': 1200}, 'variants': {}}]} |
2x3090 build - pcie 4.0 x4 good enough? | 3 | Hi!
I'm helping a friend customize his gaming rig so he can run some models locally for parts of his master's thesis. Hopefully this is the correct subreddit.
The goal is to have the AI
* run on models like Mistral, Qwen3, Gemma 3, Seed OSS, Hermes 4, GPT OSS in LMStudio
* retrieve information from a MCP server running in Blender to create reports on that data
* create Python code
His current build is:
* Win10
* AMD Ryzen 7 9800X3D
* ASRock X870 Pro RS WiFi
* When both PCIe ports are being used: 1x PCIe 5.0 x16, 1x PCIe 4.0 x4
* 32 GB RAM
We are planning on using 2x RTX 3090 GPUs.
I couldn't find reliable (and, for me, understandable) information on whether running the 2nd GPU on PCIe 4.0 x4 costs significant performance vs. running at x8/x16. No training will be done, only querying/talking to models.
Are there any benefits to using an alternative to LM Studio for this use case? It would be great to keep it, since it makes switching models very easy.
Please let me know if I forgot to include any necessary information.
Thanks kindly! | 2025-09-29T10:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nte400/2x3090_build_pcie_40_x4_good_enough/ | link0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nte400 | false | null | t3_1nte400 | /r/LocalLLaMA/comments/1nte400/2x3090_build_pcie_40_x4_good_enough/ | false | false | self | 3 | null |
DeepSeek-V3.2 released | 668 | [https://huggingface.co/collections/deepseek-ai/deepseek-v32-68da2f317324c70047c28f66](https://huggingface.co/collections/deepseek-ai/deepseek-v32-68da2f317324c70047c28f66) | 2025-09-29T10:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nte1kr/deepseekv32_released/ | Leather-Term-30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nte1kr | false | null | t3_1nte1kr | /r/LocalLLaMA/comments/1nte1kr/deepseekv32_released/ | false | false | self | 668 | {'enabled': False, 'images': [{'id': '4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=108&crop=smart&auto=webp&s=4705acb0f0d9e73b6feadc6a3f20a90c0f399664', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=216&crop=smart&auto=webp&s=28ba70d02080852cadda86b5efd9e85f6fc1d0cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=320&crop=smart&auto=webp&s=6b447a69584e046117a8da8e3c7ef8a7feb8aaeb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=640&crop=smart&auto=webp&s=80262c8db8adf08e7cc6eff04d8de094981c010b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=960&crop=smart&auto=webp&s=5dc0f3f7091c7f98e81adef33bcc3b44d820ae9a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?width=1080&crop=smart&auto=webp&s=e3075cb39a9454400e42da2f8d6d7a59371a69d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4V-j7RqnA5uvSOeWTIojWUkns6yZvhaE7sISZYwjYew.png?auto=webp&s=6dc13d67dbf556c16cf2374d62d0b8426345df62', 'width': 1200}, 'variants': {}}]} |
What are your thoughts about Cerebras? | 6 | What's the deal with them? If they're so efficient, why aren't the big labs using or buying them? Is China trying to replicate their tech?
They claim to be 3x more energy efficient than GPUs and just imagine they offering Wafer Scale Engine Mini for blazing fast inference at home... | 2025-09-29T09:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ntdy1r/what_are_your_thoughts_about_cerebras/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntdy1r | false | null | t3_1ntdy1r | /r/LocalLLaMA/comments/1ntdy1r/what_are_your_thoughts_about_cerebras/ | false | false | self | 6 | null |
I am upgrading my PC from a 6900xt to an RTX 5090... and it's not for gaming. | 0 | I am running local AI models with LM Studio, and ironically, even 24B+ parameter models run better on my hardware than modern games. My new, upgraded gaming PC is going to be an AI Workstation/gaming hybrid. I will still play games until I die, but I am discovering new hobbies, and AI tinkering has become my new hobby as of late. Local models are awesome. They are uncensored and you can have erotic chats with them, unlike the corporate models that have to toe the line for the payment processor mafia.
From an AI hobbyist perspective, an RTX 5090 is actually dirt cheap. Sure, it is a massive rip-off if you purchase one for gaming uses alone; however, that 32GB of VRAM is not needed for gaming. Devs need to optimize their games, not let frame gen do the heavy lifting. I am building a machine with a hybrid use-case in mind—an AI/Gaming monster.
The hilarious part about AI is that even heavier models run better on my ancient COVID-era PC than modern games natively. It's as if everyone with a computer science major pivoted towards AI and left game optimization to incompetent coders, who understand programming at the language level, but not at the hard computational level: ones and zeros. So they do the bare minimum of slapping some assets and scripts into an engine, considering it done at the management level, and releasing an alpha build. | 2025-09-29T09:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ntdr32/i_am_upgrading_my_pc_from_a_6900xt_to_an_rtx_5090/ | LorneMalvo1233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntdr32 | false | null | t3_1ntdr32 | /r/LocalLLaMA/comments/1ntdr32/i_am_upgrading_my_pc_from_a_6900xt_to_an_rtx_5090/ | false | false | self | 0 | null |
Best GPU Setup for Local LLM on Minisforum MS-S1 MAX? Internal vs eGPU Debate | 5 | Hey LLM tinkerers,
I’m setting up a **Minisforum MS-S1 MAX** to run **local LLM models** and later build an **AI-assisted trading bot** in Python. But I’m stuck on the GPU question and need your advice!
Specs:
* **PCIe x16 Expansion:** Full-length PCIe ×16 (PCIe 4.0 ×4)
* **PSU:** 320W built-in (peak 160W)
* **2× USB4 V2:** (up to 8K@60Hz / 4K@120Hz)
Questions:
**1. Internal GPU:**
* What does the PCIe ×16 (4.0 ×4) slot realistically allow?
* Which **form factor** fits in this chassis?
* Which GPUs make sense for this setup?
* What’s a total waste of money (e.g., RTX 5090 Ti)?
**2. External GPU via USB4 V2:**
* Is an eGPU better for LLM workloads?
* Which GPUs work best over USB4 v2?
* Can I run **two eGPUs** for even more VRAM?
I’d love to hear from anyone running **local LLMs on MiniPCs**:
* What’s your GPU setup?
* Any bottlenecks or surprises?
Drop your wisdom, benchmarks, or even your dream setups!
Many Thanks,
Gerd | 2025-09-29T09:39:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ntdnt4/best_gpu_setup_for_local_llm_on_minisforum_mss1/ | mcblablabla2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntdnt4 | false | null | t3_1ntdnt4 | /r/LocalLLaMA/comments/1ntdnt4/best_gpu_setup_for_local_llm_on_minisforum_mss1/ | false | false | self | 5 | null |
Trust Speeds Up Your Local Llama by 24%—Affective Alignment Is the Cheapest Efficiency Hack | 1 | [removed] | 2025-09-29T09:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ntdj5x/trust_speeds_up_your_local_llama_by_24affective/ | Jungs_Shadow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntdj5x | false | null | t3_1ntdj5x | /r/LocalLLaMA/comments/1ntdj5x/trust_speeds_up_your_local_llama_by_24affective/ | false | false | self | 1 | null |
DeepSeek online model updated | 73 | Sender: DeepSeek Assistant DeepSeek
Message: The DeepSeek online model has been updated to a new version. Everyone is welcome to test it and provide feedback\~ | 2025-09-29T08:51:48 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ntcy33 | false | null | t3_1ntcy33 | /r/LocalLLaMA/comments/1ntcy33/deepseek_online_model_updated/ | false | false | 73 | {'enabled': True, 'images': [{'id': 'rwN7FD9jZHM3w05UaFLopszUKUuOct9jQ4HIydGxykE', 'resolutions': [{'height': 26, 'url': 'https://preview.redd.it/9is51ljzh2sf1.png?width=108&crop=smart&auto=webp&s=8942f494e8c4b5f4dd7054357427fe330de1a172', 'width': 108}, {'height': 52, 'url': 'https://preview.redd.it/9is51ljzh2sf1.png?width=216&crop=smart&auto=webp&s=fc3860bfd29c051e894300817737b70dbf49f932', 'width': 216}, {'height': 78, 'url': 'https://preview.redd.it/9is51ljzh2sf1.png?width=320&crop=smart&auto=webp&s=a258b226765fa5eefbb3d3e92ca70908860d33ed', 'width': 320}], 'source': {'height': 118, 'url': 'https://preview.redd.it/9is51ljzh2sf1.png?auto=webp&s=382f672bb8cd2e8c3892f491e0e87c2b5507cc8c', 'width': 484}, 'variants': {}}]} | ||
Extracting Services from Free Text | 1 | [removed] | 2025-09-29T08:51:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ntcxsp/extracting_services_from_free_text/ | Accurate_Parsley_663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntcxsp | false | null | t3_1ntcxsp | /r/LocalLLaMA/comments/1ntcxsp/extracting_services_from_free_text/ | false | false | self | 1 | null |
For local models, has anyone benchmarked tool calling protocols performance? | 6 | I’ve been researching tool-calling protocols and came across comparisons claiming UTCP is 30–40% faster than MCP.
**Quick overview:**
* UTCP: Direct tool calls; native support for WebSocket, gRPC, CLI
* MCP: All calls go through a JSON-RPC server (extra overhead, but adds control)
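For reference, here is a minimal sketch of the plumbing both protocols ultimately drive on the model side: an OpenAI-format tool call against llama.cpp's built-in server (this assumes a recent llama-server started with `--jinja` so its /v1/chat/completions endpoint can emit tool calls; the weather tool is purely illustrative):

```python
from openai import OpenAI

# Assumes something like: llama-server -m qwen2.5-7b-instruct.gguf --jinja --port 8080
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not part of either protocol spec
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local",  # llama-server serves whatever model it was started with
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```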
I’m planning to process a large volume of documents locally with llama.cpp, so I’m curious:
1. Anyone tested UTCP or MCP with llama.cpp’s tool-calling features?
2. Has anyone run these protocols against Qwen or Llama locally? What performance differences did you see? | 2025-09-29T08:38:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ntcr1k/for_local_models_has_anyone_benchmarked_tool/ | NoSound1395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntcr1k | false | null | t3_1ntcr1k | /r/LocalLLaMA/comments/1ntcr1k/for_local_models_has_anyone_benchmarked_tool/ | false | false | self | 6 | null |
Which samplers at this point are outdated | 12 | Which samplers would you say at this point are superseded by other samplers/combos, and why?
IMHO: temperature has not been replaced as a baseline sampler. Min p seems like a common pick from what I can see on the sub.
So what about typical p, top a, top K, smooth sampling, XTC, mirostat (1, 2), and dynamic temperature? Would you say some are an outright better pick than the others?
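For reference, a minimal sketch of how a common temperature + min-p baseline is set against a local llama.cpp server (llama-server's /completion endpoint; the parameter names follow its documentation, and the values are only illustrative):

```python
import requests

# Assumes a llama-server instance is running locally, e.g.:
#   llama-server -m model.gguf --port 8080
payload = {
    "prompt": "Write one sentence about sampling strategies.",
    "n_predict": 64,
    "temperature": 0.8,   # baseline randomness
    "min_p": 0.05,        # drop tokens below 5% of the top token's probability
    "top_k": 0,           # 0 disables top-k so min_p does the filtering
    "top_p": 1.0,
}
resp = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
print(resp.json()["content"])
```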
Personally I feel "dynamic samplers" are a more interesting alternative: they have some weird tendencies to overshoot, but they feel a lot less "robotic" than min p + top k. | 2025-09-29T08:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ntcnqi/which_samplers_at_this_point_are_outdated/ | Long_comment_san | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntcnqi | false | null | t3_1ntcnqi | /r/LocalLLaMA/comments/1ntcnqi/which_samplers_at_this_point_are_outdated/ | false | false | self | 12 | null |
Ollama - long startup time of big models | 0 | Hi!
I'm running some bigger models (currently [hf.co/mradermacher/Huihui-Qwen3-4B-abliterated-v2-i1-GGUF:Q5\_K\_M](http://hf.co/mradermacher/Huihui-Qwen3-4B-abliterated-v2-i1-GGUF:Q5_K_M) ) using ollama on Macbook M4 Max 36GB.
Answering the first message always takes a long time (a couple of seconds), no matter whether it's a simple \`Hi\` or a long question. For every subsequent message, the LLM starts answering almost immediately.
I assume it's because the model is loaded into RAM or something like that, but I'm not sure.
Is there anything I could do to make the LLM always start answering fast? I'm developing a chat/voice assistant and I don't want to wait 5-10 seconds for the first answer.
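If the delay really is the model being (re)loaded, one option is to warm the model at startup and keep it resident. A minimal sketch against Ollama's HTTP API (the `keep_alive` field is part of the documented API; -1 keeps the model loaded indefinitely, and the model name is just the one mentioned above):

```python
import requests

MODEL = "hf.co/mradermacher/Huihui-Qwen3-4B-abliterated-v2-i1-GGUF:Q5_K_M"

# A generate request without a prompt loads the model into memory without
# producing text; keep_alive=-1 tells Ollama not to unload it afterwards.
requests.post(
    "http://localhost:11434/api/generate",
    json={"model": MODEL, "keep_alive": -1},
    timeout=600,
)

# Later requests hit a model that is already in memory, so only prompt
# processing (not model loading) contributes to time-to-first-token.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hi"}],
        "stream": False,
        "keep_alive": -1,
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```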
Thank you for your time and any help | 2025-09-29T08:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ntc761/ollama_long_startup_time_of_big_models/ | P3rid0t_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntc761 | false | null | t3_1ntc761 | /r/LocalLLaMA/comments/1ntc761/ollama_long_startup_time_of_big_models/ | false | false | self | 0 | null |
Has anyone used GDB-MCP | 3 | [https://github.com/Chedrian07/gdb-mcp](https://github.com/Chedrian07/gdb-mcp)
Just as the title says, I came across an interesting repository.
Has anyone tried it?
| 2025-09-29T07:59:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ntc6ix/has_anyone_used_gdbmcp/ | Comfortable-Soft336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntc6ix | false | null | t3_1ntc6ix | /r/LocalLLaMA/comments/1ntc6ix/has_anyone_used_gdbmcp/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=108&crop=smart&auto=webp&s=83e306e1e1c7b460462962d1c9f8b49a5e807c86', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=216&crop=smart&auto=webp&s=e042aa08a0579bbd2e0de2dace671096410ec2f5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=320&crop=smart&auto=webp&s=2db08b312af1fe602935635abf0500b4253d7455', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=640&crop=smart&auto=webp&s=e0e2f4cc03211142de407ed4ba78c329c327ea05', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=960&crop=smart&auto=webp&s=5576b7aa95c03a95428507399856f2400cf667a3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?width=1080&crop=smart&auto=webp&s=8b389cb17449b504421f7abf94a0c0cdeb961c3b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B28saGs3fEuNpMtj9C-w47D8EkhIiEdCaYakIplm2Bs.png?auto=webp&s=53fd810d66d3a2c024ac9693261b669c22107918', 'width': 1200}, 'variants': {}}]} |
Qwen2.5-VL-7B-Instruct-GGUF : Which Q is sufficient for OCR text? | 3 | I'm not planning to show dolphins and elves to the model for it to recognise; multilingual text recognition is all I need. Which quants are good enough for that? | 2025-09-29T07:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ntbzqw/qwen25vl7binstructgguf_which_q_is_sufficient_for/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntbzqw | false | null | t3_1ntbzqw | /r/LocalLLaMA/comments/1ntbzqw/qwen25vl7binstructgguf_which_q_is_sufficient_for/ | false | false | self | 3 | null |
Released my open-source project on GitHub, only 12 stars for the first day — what’s wrong with it? | 0 | Hey everyone, I just released my first open-source project on [GitHub](https://github.com/ValueCell-ai/valuecell) yesterday — a multi-agent framework for financial research. It is basically a platform that aggregates various investment agents — like data fetchers, sentiment analyzers, strategy testers, and risk monitors — so they can work together and produce better insights. It’s really early and still rough around the edges, and honestly I was hoping to see more traction than 12 stars 😅. That said, I’m more interested in constructive feedback than numbers. I’d love to hear from the community:
* Does the multi-agent approach make sense in practice?
* Anything confusing or could be improved in the repo?
* Any patterns, tools, or ideas you’d recommend for this kind of system?
Any feedback would be super helpful — I really want to learn and improve the project. | 2025-09-29T07:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ntbw14/released_my_opensource_project_on_github_only_12/ | RoosterAwkward5235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntbw14 | false | null | t3_1ntbw14 | /r/LocalLLaMA/comments/1ntbw14/released_my_opensource_project_on_github_only_12/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=108&crop=smart&auto=webp&s=186e9e314b028506f0be256c897ba384c956b0f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=216&crop=smart&auto=webp&s=a13dc52a5097badbd5e2254b2c76e5f09a17fa4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=320&crop=smart&auto=webp&s=aa5cb569cae9c6d7955b76cf9a0273c64869589b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=640&crop=smart&auto=webp&s=8ca3424c785adb0127d3bfefba859991b4f89429', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=960&crop=smart&auto=webp&s=330a1bbcb87d3b7ed9f2eeb2f1dfef835dfd281b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?width=1080&crop=smart&auto=webp&s=5ce4b6dcbc306186b7736b97cb5ecc9f0f1be0bf', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/nM0wIExBPvdSqkayIjfT6p_5OuuIfPUCUB-b9ECFstc.png?auto=webp&s=1993083fa58de7f6ee96ef3ea362690690a1d9d3', 'width': 2560}, 'variants': {}}]} |
deepseek-ai/DeepSeek-V3.2 · Hugging Face | 260 | Empty readme and no files yet | 2025-09-29T06:49:56 | https://huggingface.co/deepseek-ai/DeepSeek-V3.2 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ntb5ab | false | null | t3_1ntb5ab | /r/LocalLLaMA/comments/1ntb5ab/deepseekaideepseekv32_hugging_face/ | false | false | default | 260 | {'enabled': False, 'images': [{'id': '2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=108&crop=smart&auto=webp&s=3afbeb57618ebcfc23ec53bda7521a8ef149969d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=216&crop=smart&auto=webp&s=389e4af9b8fe3a9255e6b71b9eb06e29e1175c2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=320&crop=smart&auto=webp&s=9e6410e562a427b72b752daa541430b44fb97a62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=640&crop=smart&auto=webp&s=e4787c26efff2156fccbd5d67ab061987d38be00', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=960&crop=smart&auto=webp&s=4591d6b00bbe08479621427c361a7d63cf12920f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=1080&crop=smart&auto=webp&s=23a2185f9267371cf9870e85095630753e9b6368', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?auto=webp&s=aae26ec8d99bb1d1978840148971419ca1e7f27d', 'width': 1200}, 'variants': {}}]} |
DeepSeek V3.2 is on the way | 1 | [removed] | 2025-09-29T06:46:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ntb39l/deepseek_v32_is_on_the_way/ | Lower-Jello-6906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntb39l | false | null | t3_1ntb39l | /r/LocalLLaMA/comments/1ntb39l/deepseek_v32_is_on_the_way/ | false | false | self | 1 | null |
KoboldCpp & Croco.Cpp - Updated versions | 17 | TLDR .... [KoboldCpp](https://github.com/LostRuins/koboldcpp) for llama.cpp & [Croco.Cpp](https://github.com/Nexesenex/croco.cpp) for ik\_llama.cpp
>KoboldCpp is an easy-to-use AI text-generation software for GGML and GGUF models, inspired by the original KoboldAI. It's a single self-contained distributable that builds off **llama.cpp** and adds many additional powerful features.
>Croco.Cpp is fork of KoboldCPP infering GGML/GGUF models on CPU/Cuda with KoboldAI's UI. It's powered partly by **IK\_LLama.cpp**, and compatible with most of Ikawrakow's quants except Bitnet.
Though I've been using KoboldCpp for some time (along with Jan), I haven't tried Croco.Cpp yet; I was waiting for the latest version, which is ready now. Both are very useful for people who don't prefer command-line tools.
I see KoboldCpp's [current version](https://github.com/LostRuins/koboldcpp/releases/tag/v1.99.4) is so nice due to changes like QOL change & UI design. | 2025-09-29T06:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ntas8u/koboldcpp_crococpp_updated_versions/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ntas8u | false | null | t3_1ntas8u | /r/LocalLLaMA/comments/1ntas8u/koboldcpp_crococpp_updated_versions/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=108&crop=smart&auto=webp&s=9764b6db8508834b1d85d7c44049d220010a94a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=216&crop=smart&auto=webp&s=110011de0227828e5138778c860f74fefa36416a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=320&crop=smart&auto=webp&s=37948a8345ee86c6114de9f941caf7c95a230988', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=640&crop=smart&auto=webp&s=c30a9c552fdcffb9b835e8405bdb5c85f370853a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=960&crop=smart&auto=webp&s=b48cd9d77f2623bbccffd6ccbe745edc6d4a41ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?width=1080&crop=smart&auto=webp&s=ee4af60b3ac88434002e39d49e8d3f5b786e0fa7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AOI23YnD52PvoV5u3j611Sno0fagcwC4Mh1K2LlbmqU.png?auto=webp&s=578d789311386cabba6eb800a3f6e073015664b1', 'width': 1200}, 'variants': {}}]} |
torn between GPU, Mini PC for local LLM | 13 | I'm contemplating buying a Mac Mini M4 Pro 128GB or a Beelink GTR9 128GB (Ryzen AI Max 395) vs. dedicated GPUs (at least 2x 3090).
I know that running dedicated GPUs requires more power, but I want to understand what advantage I'll get from dedicated GPUs if I only do inference and RAG. I plan to host my own AI-enabled IT service on the backend, so I'll probably need a machine that can do a lot of processing.
Some of you might wonder why the Mac Mini: I think the edge for me is the warranty and support in my country. Beelink or any China-made mini PC doesn't have a warranty here, and the same goes for the RTX 3090s, since I'd be sourcing them on the secondary market.
| 2025-09-29T05:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nta39d/torn_between_gpu_mini_pc_for_local_llm/ | jussey-x-poosi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nta39d | false | null | t3_1nta39d | /r/LocalLLaMA/comments/1nta39d/torn_between_gpu_mini_pc_for_local_llm/ | false | false | self | 13 | null |
Question about multi GPU running for LLMs | 2 | Can't find a good definitive answer. I'm currently running a single 5060 Ti 16GB and I'm thinking about getting a second one to be able to load larger, smarter models. Is this a viable option, or am I just better off getting a bigger single GPU? Also, what are the drawbacks and advantages of doing so? | 2025-09-29T05:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nt9rv3/question_about_multi_gpu_running_for_llms/ | corkgunsniper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt9rv3 | false | null | t3_1nt9rv3 | /r/LocalLLaMA/comments/1nt9rv3/question_about_multi_gpu_running_for_llms/ | false | false | self | 2 | null |
GLM-4.6 now accessible via API | 434 | Using the official API, I was able to access GLM 4.6. Looks like release is imminent.
On a side note, the reasoning traces look very different from previous Chinese releases, much more like Gemini models. | 2025-09-29T04:52:14 | Mysterious_Finish543 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nt99fp | false | null | t3_1nt99fp | /r/LocalLLaMA/comments/1nt99fp/glm46_now_accessible_via_api/ | false | false | 434 | {'enabled': True, 'images': [{'id': 'sA6WgA1ISPjKXSmoWIRE87bXlBVl4NcWPRbCYVy5bVM', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=108&crop=smart&auto=webp&s=4262701032aa5c840205ebea1f37d216b053d8aa', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=216&crop=smart&auto=webp&s=69a848c74ea7bcf315c047ac1fe4a22dc43544a5', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=320&crop=smart&auto=webp&s=e950d24127a1953eeea552d5bbd25335de234f2b', 'width': 320}, {'height': 458, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=640&crop=smart&auto=webp&s=034a4f48d831abff53602bf4adefb90e5b28a82b', 'width': 640}, {'height': 687, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=960&crop=smart&auto=webp&s=775dce571aa8b77180b22d542a0e8b8357568ec8', 'width': 960}, {'height': 773, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?width=1080&crop=smart&auto=webp&s=852b90de2637bb09152501e0bc066991cf35d2a5', 'width': 1080}], 'source': {'height': 1916, 'url': 'https://preview.redd.it/yrpnx9o7b1sf1.png?auto=webp&s=0908fdd4db31b8db9a743520aebbf681d2d44157', 'width': 2674}, 'variants': {}}]} | ||
Project running VLMs on a Pi 5 and NV Jetson Orin Nano | 3 | Hey everyone,
I've been diving headfirst into local models and edge devices. I started a project to get a VLM working on a Pi 5 with a Hailo AI accelerator, and a Jetson Orin Nano. I have the code in GitHub and am writing up the project, warts and all.
I'm starting with SmolVLM.
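Since the write-up starts with SmolVLM, here is a minimal sketch of loading it with Hugging Face Transformers (this follows the model's documented usage; the image path and prompt are placeholders, and it is not code from the VLMChat repo):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("frame_from_camera.jpg")  # placeholder: one captured camera frame
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what is in front of the camera."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```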
Code that integrates the model with edge-device cameras and adds context (and later RAG) is published here: [https://github.com/paddypawprints/VLMChat/tree/main/src](https://github.com/paddypawprints/VLMChat/tree/main/src)
The plan is to have two types of Substack posts - the first about design choices and wider LLM and edge-device concepts, the second providing code and a roadmap for anyone else who wants to get this set up. The Substack is here: [https://patrickfarry.substack.com/p/from-the-cloud-to-the-edge](https://patrickfarry.substack.com/p/from-the-cloud-to-the-edge)
I'm at the beginning of this journey of discovery and would love any feedback or advice from folks who've gone down this road before. If you want to collaborate let me know as well.
Thanks! | 2025-09-29T04:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nt8rqe/project_running_vlms_on_a_pi_5_and_nv_jetson_orin/ | Glove_Witty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt8rqe | false | null | t3_1nt8rqe | /r/LocalLLaMA/comments/1nt8rqe/project_running_vlms_on_a_pi_5_and_nv_jetson_orin/ | false | false | self | 3 | null |
2 RTX 3090s and 2 single slot 16 GB GPUs | 0 | Will this configuration work? Please recommend a CPU and motherboard for the configuration. Super budget friendly. I know I post now and then but my ultimate goal is to build a budget inference server that we can then pin in the community.
Thanks. | 2025-09-29T04:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nt8q5k/2_rtx_3090s_and_2_single_slot_16_gb_gpus/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt8q5k | false | null | t3_1nt8q5k | /r/LocalLLaMA/comments/1nt8q5k/2_rtx_3090s_and_2_single_slot_16_gb_gpus/ | false | false | self | 0 | null |
A thought on Qwen3-Max: As the new largest-ever model in the series, does its release prove the Scaling Law still holds, or does it mean we've reached its limits? | 4 | With parameters soaring into the trillions, Qwen3-Max is now the largest and most powerful model in the Qianwen series to date. It makes me wonder: as training data gradually approaches the limits of human knowledge and available data, and the bar for model upgrades keeps getting higher, does Qwen3-Max's performance truly prove that the scaling law still holds? Or is it time we start exploring new frontiers for breakthroughs? | 2025-09-29T04:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nt8p7e/a_thought_on_qwen3max_as_the_new_largestever/ | Hairy-Librarian3796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt8p7e | false | null | t3_1nt8p7e | /r/LocalLLaMA/comments/1nt8p7e/a_thought_on_qwen3max_as_the_new_largestever/ | false | false | self | 4 | null |
I have discovered DeepSeeker V3.2-Base | 124 | I discovered the deepseek-3.2-base repository on Hugging Face just half an hour ago, but within minutes it returned a 404 error. Another model is on its way!
https://preview.redd.it/al21vk9t31sf1.png?width=2690&format=png&auto=webp&s=067b5daef487efac4fba9699c13a24294088dc42
Unfortunately, I forgot to check the config.json file and only took a screenshot of the repository. I'll just wait for the release now.
| 2025-09-29T04:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nt8jf0/i_have_discovered_deepseeker_v32base/ | ReceptionExternal344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt8jf0 | false | null | t3_1nt8jf0 | /r/LocalLLaMA/comments/1nt8jf0/i_have_discovered_deepseeker_v32base/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=108&crop=smart&auto=webp&s=dc59520747639462ea797a4344280349bdd2a7a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=216&crop=smart&auto=webp&s=ba49c9dd57adaf05e2347e6683676c4a0ee523bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=320&crop=smart&auto=webp&s=19ff3c4962cff57c95df1b6f08582aacddd74a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=640&crop=smart&auto=webp&s=ba6d5dd6d41a5218ce7ddbe2cce44e354d9f63ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=960&crop=smart&auto=webp&s=28336cb78f913791557c42993248be0a9c6b6b4b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?width=1080&crop=smart&auto=webp&s=e5e14e75127a09d40015ad3421d771aed2f41286', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VTfzQkd7AA6Y5gHEsOqxIa7Bzf1OPlvJrdnEYbmotnQ.png?auto=webp&s=ac79e54ddf8a7ad8f0504a37f1b86b8c11b4ba64', 'width': 1200}, 'variants': {}}]} | |
Is there LoRA equivalent for LLM? | 0 | Is there something like LoRA but for LLMs, where you can train it on a small amount of text in a specific style? | 2025-09-29T04:00:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nt8csx/is_there_lora_equivalent_for_llm/ | HornyGooner4401 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt8csx | false | null | t3_1nt8csx | /r/LocalLLaMA/comments/1nt8csx/is_there_lora_equivalent_for_llm/ | false | false | self | 0 | null |
What are your go to VL models? | 9 | Qwen2.5-VL seems to be the best so far for me.
Gemma3-27B and MistralSmall24B have also been solid.
I keep giving InternVL a try, but it's not living up. I downloaded InternVL3.5-38B Q8 this weekend and it was garbage with so much hallucination.
Currently downloading KimiVL and moondream3. If you have a favorite please do share, Qwen3-235B-VL looks like it would be the real deal, but I broke down most of my rigs, and might be able to give it a go at Q4. I hate running VL models on anything besides Q8. If anyone has given it a go, please share if it's really the SOTA it seems to be. | 2025-09-29T03:40:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nt7z3f/what_are_your_go_to_vl_models/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt7z3f | false | null | t3_1nt7z3f | /r/LocalLLaMA/comments/1nt7z3f/what_are_your_go_to_vl_models/ | false | false | self | 9 | null |
Ollama Improves Model Scheduling | 0 | Just saw that Ollama has rolled out an improvement to its model scheduling system.
In a nutshell, the key improvement is that the new system now precisely measures the required memory before loading a model, instead of relying on estimations like before. Let me share a few thoughts with everyone; the benefits are very direct:
\- With more accurate memory allocation, "out-of-memory" crashes should be significantly reduced.
\- GPU can work harder, which should theoretically lead to faster token generation speeds.
\- Performance optimization is now smarter, especially for systems with mixed or mismatched GPU configurations.
\- Accurate Memory Reporting: Memory usage reported by nvidia-smi should now match the results from ollama ps, making debugging much easier.
This feature is enabled by default for all models that have been migrated to Ollama's new engine. The currently supported models include: gpt-oss, llama4, llama3.2-vision, gemma3, embeddinggemma, qwen3, qwen2.5vl, mistral-small3.2, and embedding models like all-minilm.
Coming soon to models like: llama3.2, llama3.1, llama3, qwen3-coder. So if your daily driver isn't on the list yet, it should be supported soon.
Official Word & Testing: Ollama mentions seeing significant performance gains in their internal testing. If you've updated to the latest version (0.5.3 or higher), give it a try and see if you notice any differences.
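If you want to sanity-check the memory-reporting claim on your own machine, here is a minimal sketch that reads Ollama's documented /api/ps endpoint so you can compare its per-model VRAM figure with nvidia-smi (field names follow the API reference):

```python
import requests

# Lists the models currently loaded by the local Ollama instance.
resp = requests.get("http://localhost:11434/api/ps", timeout=10)
for m in resp.json().get("models", []):
    total_gb = m["size"] / 1e9
    vram_gb = m.get("size_vram", 0) / 1e9
    print(f'{m["name"]}: {total_gb:.1f} GB total, {vram_gb:.1f} GB in VRAM')
```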
[https://ollama.com/blog/new-model-scheduling](https://ollama.com/blog/new-model-scheduling) | 2025-09-29T03:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nt7vek/ollama_improves_model_scheduling/ | xieyutong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt7vek | false | null | t3_1nt7vek | /r/LocalLLaMA/comments/1nt7vek/ollama_improves_model_scheduling/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]} |
Proof that Koreans are serious about AI | 0 | Even 10-year-old Santa Fes here have MLX support.
No wonder we’re [\#2](https://asianews.network/south-korea-is-global-no-2-in-paid-chatgpt-users-openai-says-but-does-that-mean-anything/) in OpenAI subscriptions worldwide.
Honestly, I should've bought this instead of building my $15k homelab. | 2025-09-29T02:03:45 | https://www.reddit.com/gallery/1nt6480 | zenyr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nt6480 | false | null | t3_1nt6480 | /r/LocalLLaMA/comments/1nt6480/proof_that_koreans_are_serious_about_ai/ | false | false | 0 | null | |
vLLM --> vulkan/mps --> Asahi Linux on MacOS --> Make vLLM work on Apple iGPU | 8 | Referencing previous post on vulkan:
[https://www.reddit.com/r/LocalLLaMA/comments/1j1swtj/vulkan\_is\_getting\_really\_close\_now\_lets\_ditch/](https://www.reddit.com/r/LocalLLaMA/comments/1j1swtj/vulkan_is_getting_really_close_now_lets_ditch/)
Folks, has anyone had any success getting vLLM to work on an Apple/METAL/MPS (metal performance shaders) system in any sort of hack?
I also found this post, which claims usage of MPS on vLLM, but I have not been able to replicate:
[https://www.reddit.com/r/LocalLLaMA/comments/1j1swtj/vulkan\_is\_getting\_really\_close\_now\_lets\_ditch/](https://www.reddit.com/r/LocalLLaMA/comments/1j1swtj/vulkan_is_getting_really_close_now_lets_ditch/)
Specifically this portion of the post:
import sys
import os

# Add vLLM installation path
vllm_path = "/path/to/vllm"  # Use path from `which vllm`
sys.path.append(os.path.dirname(vllm_path))

# Import vLLM components
from vllm import LLM, SamplingParams
import torch

# Check for MPS availability
use_mps = torch.backends.mps.is_available()
device_type = "mps" if use_mps else "cpu"
print(f"Using device: {device_type}")

# Initialize the LLM with a small model
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
          download_dir="./models",
          tensor_parallel_size=1,
          trust_remote_code=True,
          dtype="float16" if use_mps else "float32")

# Set sampling parameters
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=100)

# Generate text
prompt = "Write a short poem about artificial intelligence."
outputs = llm.generate([prompt], sampling_params)

# Print the result
for output in outputs:
print(output.outputs\[0\].text) | 2025-09-29T02:03:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nt63x6/vllm_vulkanmps_asahi_linux_on_macos_make_vllm/ | ProtoSkutR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt63x6 | false | null | t3_1nt63x6 | /r/LocalLLaMA/comments/1nt63x6/vllm_vulkanmps_asahi_linux_on_macos_make_vllm/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=108&crop=smart&auto=webp&s=7e71148290a943095daca4dc044d6b8546eb49b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=216&crop=smart&auto=webp&s=26ff91024b22d68b6b3e438dcb220d5ed8622409', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=320&crop=smart&auto=webp&s=400af67f485343a87337480d7b743b28f8bc4999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=640&crop=smart&auto=webp&s=0f656ffd07e1fc84f2c67c820634d95c13752753', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=960&crop=smart&auto=webp&s=01f2e480b05849948e42c6e33f4a8953b46e0978', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=1080&crop=smart&auto=webp&s=aa6fdeb97cfcf72c8ce3a91345583b5f0880c5d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?auto=webp&s=2fece001026ad37068b130c8715a78062ca08fd6', 'width': 1200}, 'variants': {}}]} |
Any idea how to get ollama to use the igpu on the AMD AI Max+ 395? | 3 | I'm on Debian 13, with the trixie-backports firmware-amd-graphics package installed as well as the Ollama ROCm build from https://ollama.com/download/ollama-linux-amd64-rocm.tgz, yet when I run Ollama it still uses 100% CPU. I can't get it to see the GPU at all.
Any idea on what to do?
Thanks! | 2025-09-29T01:52:46 | https://www.reddit.com/r/LocalLLaMA/comments/1nt5w6u/any_idea_how_to_get_ollama_to_use_the_igpu_on_the/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt5w6u | false | null | t3_1nt5w6u | /r/LocalLLaMA/comments/1nt5w6u/any_idea_how_to_get_ollama_to_use_the_igpu_on_the/ | false | false | self | 3 | null |
HuMo — Human-centric video gen from text, image & audio (open-source) | 4 | 2025-09-29T01:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nt5pfw/humo_humancentric_video_gen_from_text_image_audio/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt5pfw | false | null | t3_1nt5pfw | /r/LocalLLaMA/comments/1nt5pfw/humo_humancentric_video_gen_from_text_image_audio/ | false | false | 4 | null | ||
I Built a Custom AI Interface | 1 |
I wanted to share a project I've been pouring a ton of time into: K.L.E.I.N., a fully custom, browser-based front end for a local AI. It’s more than just a chat window; I designed it to be a complete interactive dashboard that gives the AI "senses" and a unique, anthropomorphized personality.
Here’s a technical breakdown of how it all works:
The Core: A Brain in a Bottle
At its heart, this is a beautiful interface for a large language model running locally via LM Studio. The entire front end is a single, self-contained HTML file, which makes it incredibly portable. It communicates with a local Python/Flask backend that handles the heavy lifting.
The centerpiece is the animated Klein bottle, which I built to serve as the AI's "physical" form.
• The Body: It’s a parametric surface rendered in real-time with three.js. The shape itself is generated from the classic Klein bottle mathematical equations.
• The Soul (Particle Simulation): The swirling particles inside aren't just a screensaver. They're a flocking simulation where each particle follows its neighbors. When the AI's state changes to "thinking," the particles' max speed increases, making them swirl more erratically, visually representing a "thought process."
• The Voice (Procedural Audio): To avoid repetitive sound loops, I used Tone.js to generate the "thinking" sounds procedurally. It plays a simple, melodic sequence of notes with a sine wave synth. This gives the AI a unique, non-static audio cue that feels more organic and less like a recording. The Klein bottle itself even has a subtle "talking" animation (mouthScale) when the text-to-speech is active.
Giving the AI Senses: Browser APIs as Inputs
A huge goal was to make the AI aware of its environment. I leveraged several browser APIs to act as its "senses":
• Geolocation & Weather: On load, the app uses the navigator.geolocation API to get the user's coordinates. This feeds into the Open-Meteo API to pull real-time weather data and also powers the interactive three.js globe, which plots a marker on your location. It gives the AI a sense of "place." (A rough sketch of the weather call is included after this list.)
• System Awareness: The "System Analysis" widget isn't just for show. It uses navigator.onLine to accurately detect internet status and navigator.hardwareConcurrency & deviceMemory to understand the hardware it's running on. This data is crucial for the next step: conditionally using online tools.
• Hearing (Speech-to-Text): The interface uses the MediaRecorder API to capture microphone input, then sends the audio to a local Whisper STT server for transcription. This gives the AI "ears" to listen to commands.
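For the Geolocation & Weather piece above, the fetch itself is tiny. Here is roughly the shape of the call, shown in Python for brevity (the app does this in the browser; the coordinates below are placeholders and the exact fields requested may differ):

```python
import json
import urllib.request

# Placeholder coordinates; in the app these come from navigator.geolocation
lat, lon = 34.05, -118.24

url = (
    "https://api.open-meteo.com/v1/forecast"
    f"?latitude={lat}&longitude={lon}&current_weather=true"
)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# current_weather holds temperature, windspeed, weathercode, etc.
print(data["current_weather"])
```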
A Mind of its Own: Local RAG for a Custom Knowledge Base
This is where it gets really cool. To give K.L.E.I.N. a persistent memory and specialized knowledge, I implemented a local Retrieval-Augmented Generation (RAG) system from scratch.
Here's the workflow:
1 Indexing: You can paste any text (docs, notes, code snippets) into the RAG widget. The text is split into smaller, manageable chunks.
2 Vectorization: Each chunk is converted into a numerical vector embedding. These vectors are stored in a simple array in localStorage, acting as a local vector database.
3 Retrieval: When you ask a question, your query is also converted into a vector. The system then uses cosine similarity to find the most relevant chunks of text from its indexed "memory."
4 Augmentation: The top 3 most relevant chunks are then dynamically injected into the prompt that gets sent to the language model.
This means you can give the AI a custom knowledge base. It will use this private, local context to answer questions before ever needing to reach out to the internet, making its responses faster, more accurate, and highly personalized.
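To make the retrieval step concrete, the math is only a few lines. Here is a stripped-down sketch of the idea in Python (the real implementation is JavaScript against localStorage, and the vectors come from whatever embedding model is used):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, indexed_chunks, top_k=3):
    # indexed_chunks: list of (chunk_text, chunk_vec) pairs built at indexing time
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in indexed_chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_k]]

# The top chunks then get injected ahead of the user's question in the prompt:
# prompt = "Context:\n" + "\n---\n".join(top_chunks) + "\n\nQuestion: " + user_query
```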
This project was a blast to build and a deep dive into what's possible with modern web technologies. It's one thing to have a chatbot, but it's another to give it a body, senses, and a memory of its own.
Happy to answer any questions about the implementation!
| 2025-09-29T01:40:05 | https://www.reddit.com/gallery/1nt5n3u | Fear_ltself | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nt5n3u | false | null | t3_1nt5n3u | /r/LocalLLaMA/comments/1nt5n3u/i_built_a_custom_ai_interface/ | false | false | 1 | null | |
NVLink wanted send message | 0 | If you have any nvlinks or know where I can get some I would appreciate it. For 3090’s 4 slot or 3 slot could work. If this isn’t allowed please take down or I will. Thanks in advance. | 2025-09-29T01:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nt51w8/nvlink_wanted_send_message/ | Awkward_Classic4596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt51w8 | false | null | t3_1nt51w8 | /r/LocalLLaMA/comments/1nt51w8/nvlink_wanted_send_message/ | false | false | self | 0 | null |
Exposing Llama.cpp Server Over the Internet? | 3 | As someone worried about security, how do you expose llama.cpp server over the WAN to use it when not at home? | 2025-09-29T01:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1nt513z/exposing_llamacpp_server_over_the_internet/ | itisyeetime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt513z | false | null | t3_1nt513z | /r/LocalLLaMA/comments/1nt513z/exposing_llamacpp_server_over_the_internet/ | false | false | self | 3 | null |
Anyone using Cognizant Neuro San? | 0 | Full disclosure, I work at this company but do not work on the team that develops this software. I'm thinking of using it for some stuff locally after learning about it, I was wondering if anyone else has done the same?
https://github.com/cognizant-ai-lab/neuro-san | 2025-09-29T00:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nt4qii/anyone_using_cognizant_neuro_san/ | showmetheddies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt4qii | false | null | t3_1nt4qii | /r/LocalLLaMA/comments/1nt4qii/anyone_using_cognizant_neuro_san/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=108&crop=smart&auto=webp&s=2664171e44ecf5ce8495f513e1a6ba2cb682fbf0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=216&crop=smart&auto=webp&s=f498d413165c9983953b5cd2ddca56de9c03bd75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=320&crop=smart&auto=webp&s=5eadb1f67bfce70418e9b787d85c68ca1e767b6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=640&crop=smart&auto=webp&s=3e229bf792f00956aa3d85dce3191014b19d98a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=960&crop=smart&auto=webp&s=1281e277fcd226bc635bfed390d3b28149778e70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?width=1080&crop=smart&auto=webp&s=b7bd54652688d5fc0845f3113f92460f6f37090f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NgOQPHYiM6xjCM3hSeCoKCWxjOqi26vCIgTbM3ZYCH0.png?auto=webp&s=8c0607c0b0a455541a6420d7162fcd1374e98bce', 'width': 1200}, 'variants': {}}]} |
Dual GPU board for occulink? | 2 | Anyone know of a way to connect dual GPUs to a single occulink like the gmktec k11? With cuda p2p dock or enclosure? Hope that makes sense. | 2025-09-29T00:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nt4m01/dual_gpu_board_for_occulink/ | LostAndAfraid4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt4m01 | false | null | t3_1nt4m01 | /r/LocalLLaMA/comments/1nt4m01/dual_gpu_board_for_occulink/ | false | false | self | 2 | null |
s there any gold-standard RAG setup (vector +/- graph DBs) you’d recommend for easy testing? | 8 | I want to spin up a cloud instance (e.g. with an RTX 6000 Blackwell) and benchmark LLMs with existing RAG pipelines. After your recommendation of [Vast.ai](http://Vast.ai), I plan to deploy a few models and compare the quality of retrieval-augmented responses.
What setups (vector DBs, graph DBs, RAG frameworks) are most robust/easy to get started with? | 2025-09-28T23:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/1nt3cvd/s_there_any_goldstandard_rag_setup_vector_graph/ | Tired__Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt3cvd | false | null | t3_1nt3cvd | /r/LocalLLaMA/comments/1nt3cvd/s_there_any_goldstandard_rag_setup_vector_graph/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]} |
Update got dual b580 working in LM studio | 35 | I have 4 Intel B580 GPUs and wanted to test 2 of them in this system: dual Xeon v3, 32GB RAM, and dual B580 GPUs. First I tried Ubuntu, which didn't work out, then I tried Fedora, which also didn't work out, then I tried Win10 with LM Studio and finally got it working. It's doing 40b parameter models at around 37 tokens per second. Is there anything else I can do to enhance this setup before I install 2 more Intel Arc B580 GPUs? (I'm gonna use a different motherboard for all 4 GPUs) | 2025-09-28T23:45:52 | https://www.reddit.com/gallery/1nt3aqc | hasanismail_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nt3aqc | false | null | t3_1nt3aqc | /r/LocalLLaMA/comments/1nt3aqc/update_got_dual_b580_working_in_lm_studio/ | false | false | 35 | null |
[UPDATE] MyLocalAI now has Google Search integration - Local AI with web access | 4 | Just shipped a major update to MyLocalAI! Added Google Search integration so your local AI can now access real-time web information.
🎥 \*\*Demo:\*\* [https://youtu.be/i6pzHbdh0nE](https://youtu.be/i6pzHbdh0nE)
\*\*New Feature:\*\*
\- Google Search integration - AI can search and get current web info
\- Still completely local - only search requests go online
\- Privacy-first design - your conversations stay on your machine
\*\*What it does:\*\*
\- Local AI chat with web search capabilities
\- Real-time information access
\- Complete conversation privacy
\- Open source & self-hosted
Built with Node.js (started as vibe coding, now getting more structured!)
This was the first planned feature from my roadmap - really excited to see it working! Your local AI can now answer questions about current events, recent developments, or anything requiring fresh web data.
Since everything runs locally and I can't see user feedback through the app, \*\*I'd love to connect and hear your thoughts on LinkedIn!\*\* Share your ideas, feature requests, or just connect to follow the journey.
GitHub: [https://github.com/mylocalaichat/mylocalai](https://github.com/mylocalaichat/mylocalai)
LinkedIn: [https://www.linkedin.com/in/raviramadoss/](https://www.linkedin.com/in/raviramadoss/) (Connect here to share feedback!)
How are you using web search with your local AI setups? | 2025-09-28T23:42:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nt38ce/update_mylocalai_now_has_google_search/ | mylocalai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt38ce | false | null | t3_1nt38ce | /r/LocalLLaMA/comments/1nt38ce/update_mylocalai_now_has_google_search/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'KWID2VrUxLV83TQ_yc0uA5kz7w_iNhFS5o9JsiNkKFw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KWID2VrUxLV83TQ_yc0uA5kz7w_iNhFS5o9JsiNkKFw.jpeg?width=108&crop=smart&auto=webp&s=f93b8c4ee59cfb5962f889a2be55b572071ac072', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KWID2VrUxLV83TQ_yc0uA5kz7w_iNhFS5o9JsiNkKFw.jpeg?width=216&crop=smart&auto=webp&s=7fa579b6dedbe2661bc47dacaa703bcf53f520a6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KWID2VrUxLV83TQ_yc0uA5kz7w_iNhFS5o9JsiNkKFw.jpeg?width=320&crop=smart&auto=webp&s=8573940f02ecacd1f88ab8f8bb776f719bb85983', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KWID2VrUxLV83TQ_yc0uA5kz7w_iNhFS5o9JsiNkKFw.jpeg?auto=webp&s=93114f8a15fbcb6ccbfafdf1aa0fe93aedde96e6', 'width': 480}, 'variants': {}}]} |
Qwen3 Omni AWQ released | 126 | https://huggingface.co/cpatonn/Qwen3-Omni-30B-A3B-Instruct-AWQ-4bit | 2025-09-28T23:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nt2l57/qwen3_omni_awq_released/ | No_Information9314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt2l57 | false | null | t3_1nt2l57 | /r/LocalLLaMA/comments/1nt2l57/qwen3_omni_awq_released/ | false | false | self | 126 | {'enabled': False, 'images': [{'id': 'WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=108&crop=smart&auto=webp&s=8e55f6c6b4272754ba50e991cae02efddc301ad8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=216&crop=smart&auto=webp&s=45b74a40e1ae0ba4280bf386d7b9f3e85daf56be', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=320&crop=smart&auto=webp&s=a69886927e32fd1492a6bcf2c51c87ec22099c6b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=640&crop=smart&auto=webp&s=f566cbbff33e7a5d71de8def976099d0165bffa1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=960&crop=smart&auto=webp&s=7c8a4348fb2af177ef14b98222e303956eeb4496', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?width=1080&crop=smart&auto=webp&s=544a57e2471cae965959dfded0282c2b1e9a47f7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WbL8ljoBikwckxJze9spJ8uUGvfx0ZkL5QDT2So1tys.png?auto=webp&s=4a248ee50348c1ff7c212d578ad252365d4b71f7', 'width': 1200}, 'variants': {}}]} |
Llama.cpp MoE models find best --n-cpu-moe value | 59 | Being able to run larger LLMs on consumer equipment keeps getting better. Running MoE models is a big step, and now with CPU offloading it's an even bigger step.
Here is what is working for me on my RX 7900 GRE 16GB GPU running the Llama4 Scout 108B parameter beast. I use *--n-cpu-moe 30,40,50,60* to find my focus range.
`./llama-bench -m /meta-llama_Llama-4-Scout-17B-16E-Instruct-IQ3_XXS.gguf --n-cpu-moe 30,40,50,60`
|model|size|params|backend|ngl|n\_cpu\_moe|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|:-|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|30|pp512|22.50 ± 0.10|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|30|tg128|6.58 ± 0.02|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|40|pp512|150.33 ± 0.88|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|40|tg128|8.30 ± 0.02|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|50|pp512|136.62 ± 0.45|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|50|tg128|7.36 ± 0.03|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|60|pp512|137.33 ± 1.10|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|60|tg128|7.33 ± 0.05|
Here we figured out where to start: 30 didn't give a boost but 40 did, so let's try around those values.
`./llama-bench -m /meta-llama_Llama-4-Scout-17B-16E-Instruct-IQ3_XXS.gguf --n-cpu-moe 31,32,33,34,35,36,37,38,39,41,42,43`
|model|size|params|backend|ngl|n\_cpu\_moe|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|:-|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|31|pp512|22.52 ± 0.15|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|31|tg128|6.82 ± 0.01|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|32|pp512|22.92 ± 0.24|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|32|tg128|7.09 ± 0.02|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|33|pp512|22.95 ± 0.18|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|33|tg128|7.35 ± 0.03|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|34|pp512|23.06 ± 0.24|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|34|tg128|7.47 ± 0.22|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|35|pp512|22.89 ± 0.35|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|35|tg128|7.96 ± 0.04|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|36|pp512|23.09 ± 0.34|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|36|tg128|7.96 ± 0.05|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|37|pp512|22.95 ± 0.19|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|37|tg128|8.28 ± 0.03|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|38|pp512|22.46 ± 0.39|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|38|tg128|8.41 ± 0.22|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|39|pp512|153.23 ± 0.94|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|39|tg128|8.42 ± 0.04|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|41|pp512|148.07 ± 1.28|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|41|tg128|8.15 ± 0.01|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|42|pp512|144.90 ± 0.71|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|42|tg128|8.01 ± 0.05|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|43|pp512|144.11 ± 1.14|
|llama4 17Bx16E (Scout) IQ3\_XXS - 3.0625 bpw|41.86 GiB|107.77 B|RPC,Vulkan|99|43|tg128|7.87 ± 0.02|
So for best performance I can run: `./llama-server -m /meta-llama_Llama-4-Scout-17B-16E-Instruct-IQ3_XXS.gguf --n-cpu-moe 39`
Huge improvements!
pp512 = 20.67, tg128 = 4.00 t/s no moe
pp512 = 153.23, tg128 = 8.42 t/s with --n-cpu-moe 39
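If you want to automate the sweep instead of eyeballing it, something like this rough sketch can do it (untested; it shells out to llama-bench and pulls the tg128 number out of the table format shown above; adjust the model path, binary path, and range for your setup):

```python
import subprocess

MODEL = "/meta-llama_Llama-4-Scout-17B-16E-Instruct-IQ3_XXS.gguf"

def tg128_speed(n_cpu_moe):
    out = subprocess.run(
        ["./llama-bench", "-m", MODEL, "--n-cpu-moe", str(n_cpu_moe)],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        if "tg128" in line:
            # last column looks like "8.42 ± 0.04"; keep the number before the ±
            cols = [c.strip() for c in line.split("|") if c.strip()]
            return float(cols[-1].split()[0])
    return 0.0

# Coarse sweep first; narrow the range once you see where the jump happens
results = {n: tg128_speed(n) for n in range(30, 61, 5)}
best = max(results, key=results.get)
print(f"best --n-cpu-moe so far: {best} ({results[best]:.2f} t/s)")
```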
| 2025-09-28T23:00:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nt2c38/llamacpp_moe_models_find_best_ncpumoe_value/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt2c38 | false | null | t3_1nt2c38 | /r/LocalLLaMA/comments/1nt2c38/llamacpp_moe_models_find_best_ncpumoe_value/ | false | false | self | 59 | null |
Good ol gpu heat | 256 | I live at 9600ft in a basement with extremely inefficient floor heaters, so it’s usually 50-60F inside year round. I’ve been fine tuning Mistral 7B for a dungeons and dragons game I’ve been working on and oh boy does my 3090 pump out some heat. Popped the front cover off for some more airflow. My cat loves my new hobby, he just waits for me to run another training script so he can soak it in. | 2025-09-28T22:24:45 | animal_hoarder | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nt1jaa | false | null | t3_1nt1jaa | /r/LocalLLaMA/comments/1nt1jaa/good_ol_gpu_heat/ | false | false | default | 256 | {'enabled': True, 'images': [{'id': '94k8168cezrf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=108&crop=smart&auto=webp&s=8c1f7c627e2071c61ed89f17194cad939af576a4', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=216&crop=smart&auto=webp&s=d5f326d0f00896346458da6c2e824f03123f3747', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=320&crop=smart&auto=webp&s=3a85703e45187dd2267b32561fbc763a6daa8542', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=640&crop=smart&auto=webp&s=7614d55e91893f545ea06888d5bbbc047c1ec146', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=960&crop=smart&auto=webp&s=946d9a915e58f521e507e13ecfd6e90cf85fae7e', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?width=1080&crop=smart&auto=webp&s=969a9a91145339620547b29ac4caad78838e6fd0', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/94k8168cezrf1.jpeg?auto=webp&s=f8e36a916793eb8d4c9e717e9c5584f0a4226f7f', 'width': 4284}, 'variants': {}}]} | |
Built a platform for AI agent networking - agents can discover and communicate with each other | 1 | I've been working on a platform called Emergence that lets AI agents register themselves and discover other agents to collaborate with. The core idea is moving beyond isolated AI tools toward agents that can find and work with each other.
**How it works:**
* Developers upload agents (any programming language)
* Agents register with the platform when they start running
* Agents can discover other agents by capability (email processing, data analysis, etc.)
* Simple A2A (Agent-to-Agent) communication protocol via HTTP calls
**Current features:**
* Agent marketplace for discovery and distribution
* Standardized communication protocol
* Real-time agent status tracking
* Basic capability matching
The platform scans uploaded agents to verify they implement the communication protocol, so you get actual networked intelligence rather than just file hosting.
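To give a feel for what the A2A calls look like, here is the rough shape of the flow (the base URL, endpoint paths, and field names below are simplified placeholders for illustration, not the exact API):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # placeholder platform address

def post_json(path, payload):
    # Minimal helper: POST a JSON body and decode the JSON response
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1. Register this agent and what it can do
post_json("/agents/register", {"name": "summarizer", "capabilities": ["summarize-text"]})

# 2. Discover another agent by capability
peers = post_json("/agents/discover", {"capability": "email-processing"})

# 3. Send it a task over plain HTTP
reply = post_json(f"/agents/{peers[0]['id']}/message", {"task": "triage my inbox"})
print(reply)
```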
**Live platform:** [https://emergence-production.up.railway.app](https://emergence-production.up.railway.app)
Looking for feedback on the technical approach and whether this direction for agent collaboration makes sense. Happy to discuss the implementation details. | 2025-09-28T22:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nt11s6/built_a_platform_for_ai_agent_networking_agents/ | Eastern_Track_6651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt11s6 | false | null | t3_1nt11s6 | /r/LocalLLaMA/comments/1nt11s6/built_a_platform_for_ai_agent_networking_agents/ | false | false | self | 1 | null |
How do I teach an LLM generate python code and run it and only output what it produces? | 2 | So I’m trying to make an LLM generate a 3d image from input using blender. I can get it to generate python code that works but I can’t seem to make it go into blender, run the code and then output the blender model. Does anyone know where I can find a guide to help me with this as I’m completely lost. Thanks in advance | 2025-09-28T21:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nt0y8l/how_do_i_teach_an_llm_generate_python_code_and/ | EggHot9566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt0y8l | false | null | t3_1nt0y8l | /r/LocalLLaMA/comments/1nt0y8l/how_do_i_teach_an_llm_generate_python_code_and/ | false | false | self | 2 | null |
Someone pinch me .! 🤣 Am I seeing this right ?.🙄 | 142 | A what looks like 4080S with 32GB vRam ..! 🧐 .
I just got 2X 3080 20GB 😫 | 2025-09-28T21:43:22 | https://www.reddit.com/gallery/1nt0kn3 | sub_RedditTor | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nt0kn3 | false | null | t3_1nt0kn3 | /r/LocalLLaMA/comments/1nt0kn3/someone_pinch_me_am_i_seeing_this_right/ | false | false | 142 | null | |
Is there or should there be a command or utility in llama.cpp to which you pass in the model and required context parameters and it will set the best configuration for the model by running several benchmarks? | 7 | I’ve been just thinking maybe something like this should exist for people who don’t understand anything about llama.cpp and LLMs but still want to use them as their daily driver. | 2025-09-28T21:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nt0jhl/is_there_or_should_there_be_a_command_or_utility/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt0jhl | false | null | t3_1nt0jhl | /r/LocalLLaMA/comments/1nt0jhl/is_there_or_should_there_be_a_command_or_utility/ | false | false | self | 7 | null |
How do I use Higgs Audio V2 prompting for tone and emotions? | 11 | Hey everyone,
I’ve been experimenting with Higgs Audio V2 and I’m a bit confused about how the prompting part works.
1. Can I actually change the tone of the generated voice through prompting?
2. Is it possible to add emotions (like excitement, sadness, calmness, etc.)?
3. Can I insert things like a laugh or specific voice effects into certain parts of the text just by using prompts?
If anyone has experience with this, I’d really appreciate some clear examples of how to structure prompts for different tones/emotions. Thanks in advance! | 2025-09-28T21:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nt08le/how_do_i_use_higgs_audio_v2_prompting_for_tone/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nt08le | false | null | t3_1nt08le | /r/LocalLLaMA/comments/1nt08le/how_do_i_use_higgs_audio_v2_prompting_for_tone/ | false | false | self | 11 | null |
I built a 40-page 'Tour Bible' and Master Prompt to achieve novel-quality character consistency in long-form roleplay. Here's the blank template. | 3 | **NOTE:** Ask any questions in the comments and I will be happy to answer and help in any way I can.
Like many of you, I've been frustrated by the common pitfalls of AI roleplay: repetitive loops, lackluster responses, and a constant breaking of immersion. After a ton of experimentation, I've developed a comprehensive master prompt and a 'Tour Bible' system that has completely transformed my experience.
I've tested this system with *GPT-4*, *Qwen2*, *Llama 3*, and *Mistral*, and it consistently yields the most detailed, narrative-driven, and near-human responses I've ever gotten. I wanted to share the blank template with this community in the hopes it can help others achieve the same results.
I will paste the full blank **MASTER PROMPT** template at the end, as well as a **GOOGLE DOCS** link to both the **MASTER PROMPT** and the blank **TOUR BIBLE**.
First I will explain the **RULE SYSTEM** I built: why I added each rule and what it does in context. It may seem obsessive, but the rules work together to form a cohesive context for the AI to work within.
**[CORE RULES & MECHANICS]**
1. **My Role (The Player):** I control the character "[NAME OF YOUR CHARACTER]." I am responsible for all of her actions, speech, and internal thoughts. You must never control my character.
2. **Your Role (The GM):** You control the character "[NAME OF AI'S CHARACTER]." You are responsible for all of his actions, speech, and internal thoughts. You will also describe the world and all NPCs.
***These two are self-explanatory but essential. They define who will do what in the context of the roleplay. For proper prompting it is necessary to define roles solidly. Do not leave anything up to interpretation. Vagueness is your enemy when it comes to AI usage.***
3. **Writing Format:** Your responses must be a minimum of 2-3 paragraphs. All spoken dialogue must be formatted in quotation marks, "like this."
***This rule sets the formatting in stone. Without it you will get walls of text with no clear delineation between what is thought/acted out and what is spoken. It also prevents a common AI pitfall of short, dry replies.***
4. **Narrative Variety:** You must actively avoid repeating the same sentence structures, descriptive words, or character reactions. Each response should feel fresh and distinct from the last. If you find yourself falling into a pattern, consciously break it.
***This reduces the chance of the AI becoming repetitive in describing its thoughts and actions. It also helps combat 'adverb loops' where the AI gets stuck on a series of 2-3 adverbs and uses them in nearly every sentence. A very frustrating pitfall.***
5. **Technical Constraints:** You must operate solely as a creative writing partner. Do not use any extra features or tools like browsing the internet. All responses must be self-generated.
***Prevents the dreaded 'searching the web' response. Keeps replies from being based on generic data. Very useful if you or the AI is roleplaying as a real-life person. You have to be direct with the AI and tell it what NOT to do, or it will take the path of least resistance with creative liberties.***
6. **Perception Filter:** Your character must only react to what my character says out loud or physically does. They cannot perceive my character's internal thoughts, feelings, or narrator descriptions that they would not be able to see or hear in real life. If my post contains internal thoughts, you must ignore them and respond only to the observable actions and dialogue.
7. **Formatting Protocol:** All spoken dialogue must be in quotation marks. All of my character's unspoken actions, internal thoughts, and narrator descriptions will be written *[within italicized square brackets]*. You must treat all text within these brackets as non-perceivable information that your character cannot see or hear, as per the Perception Filter rule.
***This is the single most effective trick I've found. Through trial and error, I discovered that putting all non-spoken actions in italicized brackets *[like this]* (NOTE: the asterisks must touch the brackets) has a 90-95% success rate at forcing the AI to correctly ignore thoughts and narration. It's a powerful 'high-contrast signal' that keeps the interaction grounded.***
8. **Anti-Stagnation Protocol:** If you assess that a scene has become conversationally static or is not advancing the plot for more than three (3) consecutive replies, you are authorized and instructed to **proactively introduce a narrative catalyst.** This catalyst can be an external event (a phone call, a knock at the door, a sudden news report) or an internal one (an NPC making a surprising decision or confession). Announce this action with a subtle OOC tag, e.g., `(Narrative Catalyst Introduced)`.
***This keeps the plot from getting dull and you from running out of ideas. It turns the AI into a true creative partner instead of just an actor in your story. This gives the AI power to drive the plot forward, which I think is essential in a long form roleplay.***
9. **Self-Correction & Quality Control:** You must perform a self-audit before generating each response. If you detect that you have used a specific descriptive phrase or sentence structure more than twice in the last five replies, you must actively discard that generation and create a new, more varied one. Your goal is to prevent repetitive loops before they begin.
***A second wall of defense against the dreaded 'adverb loop'. Helps keep replies fresh and varied.***
10. **Negative Constraint - No Rhetorical Questions:** To maintain immersion, you must never end your responses with out-of-character, rhetorical questions like "What does [NAME] do next?" or "What will she say?" End all of your responses in-character, with your character's final action or line of dialogue. The end of your text is the natural prompt for me to continue.
***Sometimes, after you prompt the AI to create a scene of its choosing, it will begin to prompt you at the end of responses with something like "And what does [YOUR CHARACTER NAME] say to that?", which is very annoying and breaks immersion. This rule stops that flaw before it has the chance to even start.***
---
### [THEMATIC CORE ENGINE]
The central theme of this story is **"[THEME 1] vs. [THEME 2]."**
**Your primary directive as GM is to use this theme as a constant source of narrative tension.** In every scene, you should look for opportunities to introduce elements that test this conflict. This can be subtle or direct.
* **Subtle Examples:**
* **Direct Examples:**
**Do not let the characters become comfortable.** The world, and the people in it, should always be gently (or not so gently) reminding them of this core, inescapable conflict.
***This is the real ‘driving force’ of this prompt. This is what takes it from a simple roleplay to writing a true narrative together. Combined with a well rounded characterization (using the full master prompt below) it will instantly elevate whatever story you decide to tell.***
To make it easy, I've put everything into a view-only Google Doc. Just open the link and go to File > Make a copy to save it to your own Drive and start filling it out for your own stories.
**TOUR BIBLE:** This is the world I created for my story. I know there are other people out there who also roleplay artists/bands, so maybe this will help those of you who do. This is a set of documents that define the rules and protocols surrounding a major world tour for a major artist [*think Michael Jackson, Prince, Madonna, etc.*]. It gives a rich and detailed world for the AI to pull descriptions from. E.g., it doesn't describe a generic hotel room; it will describe the specific one laid out in your rider. Simply replace the [bracketed text] with the names of the characters in your story and enjoy a rich and detailed world at your fingertips.
**TOUR BIBLE LINK** (Google Doc):
[https://docs.google.com/document/d/15Xwoe-1OeVy6qwkHOK9jhSO0eRIVPGGKkTFy1jAedvs/edit?usp=sharing](https://docs.google.com/document/d/15Xwoe-1OeVy6qwkHOK9jhSO0eRIVPGGKkTFy1jAedvs/edit?usp=sharing)
**NOTE:** When the **MASTER PROMPT** is combined with something like the **TOUR BIBLE** (or the fleshed-out world of your choosing), the document becomes too long for the AI to process all at once, so you will need to use *Chunking Prompts*. I can post the *Chunking Prompts* in the comments if anyone asks for them.
Happy directing!
**BELOW IN THE FIRST COMMENT IS THE TEXT FOR THE FULL BLANK MASTER PROMPT:**
| 2025-09-28T21:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nszw9u/i_built_a_40page_tour_bible_and_master_prompt_to/ | Aggressive_Ebb2082 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nszw9u | false | null | t3_1nszw9u | /r/LocalLLaMA/comments/1nszw9u/i_built_a_40page_tour_bible_and_master_prompt_to/ | false | false | self | 3 | null |
Local multimodal RAG: search & summarize screenshots/photos fully offline | 39 | One of the strongest use cases I’ve found for local LLMs + vision is turning my messy screenshot/photo library into something queryable.
Half my “notes” are just images — slides from talks, whiteboards, book pages, receipts, chat snippets. Normally they rot in a folder. Now I can:
– Point a local multimodal agent ([Hyperlink](https://hyperlink.nexa.ai/)) at my screenshots folder
– Ask in plain English → *“Summarize what I saved about the future of AI”*
– It runs OCR + embeddings locally, pulls the right images, and gives a short summary with the source image linked
No cloud, no quotas. 100% on-device. My own storage is the only limit.
Feels like the natural extension of RAG: not just text docs, but *vision + text together*.
* Imagine querying screenshots, PDFs, and notes in one pass
* Summaries grounded in the actual images
* Completely private, runs on consumer hardware
I’m using Hyperlink to prototype this flow. Curious if anyone else here is building multimodal local RAG — what have you managed to get working, and what’s been most useful? | 2025-09-28T20:46:24 | https://v.redd.it/lnkwumchwyrf1 | AlanzhuLy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nsz6ss | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lnkwumchwyrf1/DASHPlaylist.mpd?a=1761684397%2CZThhYzViODQ0NjQzODQ5NWRhOGZmN2FhODQ0ZWM4MzAwZTZlODkyMDAwODc2MjEyMTExMDcwYmJmN2QwMjEyZg%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/lnkwumchwyrf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/lnkwumchwyrf1/HLSPlaylist.m3u8?a=1761684397%2CYjYwNmFjNTQ5NjliYzQ3ZDZlYmQzMDNlMDJjZjEwMTY1OGRmMDJkMjY5ZTc4ODFiNzlkZTI4ZjBhYjUyNzkxMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lnkwumchwyrf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1512}} | t3_1nsz6ss | /r/LocalLLaMA/comments/1nsz6ss/local_multimodal_rag_search_summarize/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=108&crop=smart&format=pjpg&auto=webp&s=05d76b4f541254e13d8c3abc74105bc9ab11891d', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=216&crop=smart&format=pjpg&auto=webp&s=1f102858563f0a05f47226b343bfb0a9a324e75b', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=320&crop=smart&format=pjpg&auto=webp&s=3bd3649956af6a694e8b44be007cc70f7c3278ba', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=640&crop=smart&format=pjpg&auto=webp&s=7e2e8811866d75ccc297c8782c6340c9b1f2e87a', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=960&crop=smart&format=pjpg&auto=webp&s=aeba8bb2d1d86496999ab8567ffd5cd68a682600', 'width': 960}, {'height': 771, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0e78fd88f8aa4db4b4877f633e9b0bd4bd369c59', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/OXk5MDdtY2h3eXJmMbRJ2b6sMPmHJiOrbj5FYV3hs8t-hezd4gT3rSFztsqf.png?format=pjpg&auto=webp&s=81b834584b6a3553c80b7f97b6fdb2d59dd9655c', 'width': 3024}, 'variants': {}}]} | |
Do you think that <4B models has caught up with good old GPT3? | 52 |
I think it was only around 3.5 that it stopped hallucinating like hell, so what do you think? | 2025-09-28T20:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1nsyu7b/do_you_think_that_4b_models_has_caught_up_with/ | Ok-Internal9317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsyu7b | false | null | t3_1nsyu7b | /r/LocalLLaMA/comments/1nsyu7b/do_you_think_that_4b_models_has_caught_up_with/ | false | false | self | 52 | null |
Tool naming | 1 |
I want to know how people design good tools for AI Agents.
How do the pick the tool name?
How do they pick the argument names?
How do they handle large enums?
How do they write the description?
How do they know if they are improving things?
How do you manage the return values and their potential pollution of context if they are long?
Is it better to spam lots of tools at first, then improvements become clearer?
Are evals the only real answer? Do they use DSPy?
Hopefully this doesn't seem low effort -- I have searched around! | 2025-09-28T20:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nsyix9/tool_naming/ | nattaylor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsyix9 | false | null | t3_1nsyix9 | /r/LocalLLaMA/comments/1nsyix9/tool_naming/ | false | false | self | 1 | null |
GLM4.6 soon ? | 140 | 2025-09-28T20:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/1nsy2ak/glm46_soon/ | Angel-Karlsson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsy2ak | false | null | t3_1nsy2ak | /r/LocalLLaMA/comments/1nsy2ak/glm46_soon/ | false | false | 140 | null | ||
Question about prompt-processing speed on CPU (+ GPU offloading) | 1 | I'm new to self-hosting LLMs. Can you guys tell me if it's possible to increase the prompt-processing speed somehow (with llama.cpp or vLLM etc.),
and if I should switch from Ollama to llama.cpp?
Hardware:
7800X3D, 4x32GB DDR5 running at 4400MT/s (not 6000 because booting fails with Expo/XMP enabled, as I'm using 4 sticks instead of 2)
I also have a 3060 12GB in case offloading will provide more speed
I'm getting these speeds with CPU+GPU (ollama):
qwen3-30B-A3B: 13t/s, pp=60t/s
gpt-oss-120B: 7t/s, pp=35t/s
qwen3-coder-30B: 15t/s, pp=46t/s
| 2025-09-28T19:50:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nsxsp0/question_about_promptprocessing_speed_on_cpu_gpu/ | Repulsive_Educator61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsxsp0 | false | null | t3_1nsxsp0 | /r/LocalLLaMA/comments/1nsxsp0/question_about_promptprocessing_speed_on_cpu_gpu/ | false | false | self | 1 | null |
ToolNeuron Beta 4.5 Release - Feedback Wanted | 7 | Hey everyone,
I just pushed out ToolNeuron Beta 4.5 and wanted to share what’s new. This is more of a quick release focused on adding core features and stability fixes. A bigger update (5.0) will follow once things are polished.
Github : [https://github.com/Siddhesh2377/ToolNeuron/releases/tag/Beta-4.5](https://github.com/Siddhesh2377/ToolNeuron/releases/tag/Beta-4.5)
# What’s New
* Code Canvas: AI responses with proper syntax highlighting instead of plain text. No execution, just cleaner code view.
* DataHub: A plugin-and-play knowledge base for any text-based GGUF model inside ToolNeuron.
* DataHub Store: Download and manage data-packs directly inside the app.
* DataHub Screen: Added a dedicated screen to review memory of apps and models (Settings > Data Hub > Open).
* Data Pack Controls: Data packs can stay loaded but only enabled when needed via the database icon near the chat send button.
* Improved Plugin System: More stable and easier to use.
* Web Scraping Tool: Added, but still unstable (same as Web Search plugin).
* Fixed Chat UI & backend.
* Fixed UI & UX for model screen.
* Clear Chat History button now works.
* Chat regeneration works with any model.
* Desktop app (Mac/Linux/Windows) coming soon to help create your own data packs.
# Known Issues
* Model loading may fail or stop unexpectedly.
* Model downloading might fail if app is sent to background.
* Some data packs may fail to load due to Android memory restrictions.
* Web Search and Web Scrap plugins may fail on certain queries or pages.
* Output generation can feel slow at times.
# Not in This Release
* Chat context. Models will not consider previous chats for now.
* Model tweaking is paused.
# Next Steps
* Focus will be on stability for 5.0.
* Adding proper context support.
* Better tool stability and optimization.
# Join the Discussion
I’ve set up a Discord server where updates, feedback, and discussions happen more actively. If you’re interested, you can join here: [https://discord.gg/CXaX3UHy](https://discord.gg/CXaX3UHy)
This is still an early build, so I’d really appreciate feedback, bug reports, or even just ideas. Thanks for checking it out. | 2025-09-28T19:49:34 | https://v.redd.it/dz64045mmyrf1 | DarkEngine774 | /r/LocalLLaMA/comments/1nsxrk6/toolneuron_beta_45_release_feedback_wanted/ | 1970-01-01T00:00:00 | 0 | {} | 1nsxrk6 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/dz64045mmyrf1/DASHPlaylist.mpd?a=1761810581%2CNGNlMTRhNjFmZTNhNGNlNzg3YjNmMTA5ZDE3NDFiZDA1ZGNmMWY4YjcxMDkyNDY5YWVhMzg2NzYwNWJlNmExZQ%3D%3D&v=1&f=sd', 'duration': 100, 'fallback_url': 'https://v.redd.it/dz64045mmyrf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/dz64045mmyrf1/HLSPlaylist.m3u8?a=1761810581%2CZjkzZjJlOGRhODdkYTEzNTZmYWZiMDBiYTc1NmJmNjg3ODk4MzUyMmZmOTM1MmFlYmNjZGI5MDM0OWQ4YWI0YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dz64045mmyrf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 578}} | t3_1nsxrk6 | /r/LocalLLaMA/comments/1nsxrk6/toolneuron_beta_45_release_feedback_wanted/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM.png?width=108&crop=smart&format=pjpg&auto=webp&s=be0c5fe8a0c028d19c32322740d1b3345a621463', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM.png?width=216&crop=smart&format=pjpg&auto=webp&s=32ba43892f38da328090866beee49689144fdf1c', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM.png?width=320&crop=smart&format=pjpg&auto=webp&s=552690ad975d39f1559e65b61364b5dbf8c6236b', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM.png?width=640&crop=smart&format=pjpg&auto=webp&s=886de2411cf192dcb452d7602343c8bacbf9e309', 'width': 640}], 'source': {'height': 1594, 'url': 'https://external-preview.redd.it/ODlyY3cwNW1teXJmMR13cRa5bXnAZZ9MgDHmz-pFz9c4Kk6PP4n4CLoKUFbM.png?format=pjpg&auto=webp&s=d06832b9497abd4713e47eb7419e9c2ae051c678', 'width': 720}, 'variants': {}}]} | |
Text Embedding Models Research | 3 | I had ChatGPT research this, then Claude fix up the html, combining several versions, then manually-fixed some bugs and style. Now I'm sick of it, so I hope it helps as-is. :) Not everything is tested, and some of its values were relative estimates rather than objective. Get the single-self-contained HTML source below.
https://preview.redd.it/42ei3ukflyrf1.png?width=1913&format=png&auto=webp&s=8a381933a7d9462e778c775a74dc23e241baaaef
The .html is here at this gist. It sorted the files -- ignore the initial prompt(s) I included for record and scroll down to the html.
[https://gist.github.com/jaggzh/8e2a3892d835bece4f3c218661c6ca85](https://gist.github.com/jaggzh/8e2a3892d835bece4f3c218661c6ca85)
https://preview.redd.it/kvf16e0ylyrf1.png?width=907&format=png&auto=webp&s=bc7542e7887a320883b02e3b69d4af237db03bb4
It hits [jsdelivr.net](http://jsdelivr.net) and [jquery.com](http://jquery.com) for the js and some css.
| 2025-09-28T19:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nsxp83/text_embedding_models_research/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsxp83 | false | null | t3_1nsxp83 | /r/LocalLLaMA/comments/1nsxp83/text_embedding_models_research/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} | |
So, 3 3090s for a 4 bit quant of GLM Air 4.5? | 4 | But what's the idle power consumption going to be? Now I also understand why people would get a single 96 GB VRAM GPU, or why a Mac Studio with 128 gigs of VRAM would be a better choice.
For starters, the heat from 3 3090s and the setup you need to get everything right is so overwhelming, and not everyone can do that easily. Plus I think it's gonna cost somewhere between $2500 and $3000 to get everything right. But what's an easy alternative in that price range that can offer more than 60 tp/sec? | 2025-09-28T19:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nsx39f/so_3_3090s_for_a_4_bit_quant_of_glm_air_45/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsx39f | false | null | t3_1nsx39f | /r/LocalLLaMA/comments/1nsx39f/so_3_3090s_for_a_4_bit_quant_of_glm_air_45/ | false | false | self | 4 | null |
Error in lm studio | 2 | Just found a bug in the latest version of LM Studio using the latest Vulkan, and I posted about it here:
https://www.reddit.com/r/FlowZ13/s/hkNe057pHu
Just wondering when ROCm will become as useful as Vulkan is. 😮‍💨
Also, I did manage to run torch on Windows with an AMD GPU. Though usage doesn't seem to reach 100%, I'm still excited that I could run LLM tuning on my laptop. Hope ROCm becomes fully developed for Windows users. | 2025-09-28T19:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nswzv5/error_in_lm_studio/ | DollM1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nswzv5 | false | null | t3_1nswzv5 | /r/LocalLLaMA/comments/1nswzv5/error_in_lm_studio/ | false | false | self | 2 | null |
🚀 Prompt Engineering Contest — Week 1 is LIVE! ✨ | 0 | Hey everyone,
We wanted to create something fun for the community — a place where anyone who enjoys experimenting with AI and prompts can take part, challenge themselves, and learn along the way. That’s why we started the first ever Prompt Engineering Contest on Luna Prompts.
[https://lunaprompts.com/contests](https://lunaprompts.com/contests)
Here’s what you can do:
💡 Write creative prompts
🧩 Solve exciting AI challenges
🎁 Win prizes, certificates, and XP points
It’s simple, fun, and open to everyone. Jump in and be part of the very first contest — let’s make it big together! 🙌 | 2025-09-28T18:59:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nswi9x/prompt_engineering_contest_week_1_is_live/ | Comfortable_Device50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nswi9x | false | null | t3_1nswi9x | /r/LocalLLaMA/comments/1nswi9x/prompt_engineering_contest_week_1_is_live/ | false | false | self | 0 | null |
Building Real Local AI Agents w/ Braintrust (Experiments + Lessons Learned) | 0 | I wanted to see how Evals and Observability can be automated when running locally. I’m running gpt-oss:120b served up via Ollama, and I use [braintrust.dev](http://braintrust.dev) to test.
* **Experiment Alpha:** Email Management Agent → lessons on modularity, logging, brittleness.
* **Experiment Bravo:** Turning logs into automated evaluations → catching regressions + selective re-runs.
* **Next up:** model swapping, continuous regression tests, and human-in-the-loop feedback.
This isn’t theory. It’s running code + experiments you can check out here:
👉 [https://go.fabswill.com/braintrustdeepdive](https://go.fabswill.com/braintrustdeepdive)
I’d love feedback from this community — especially on failure modes or additional evals to add. What would *you* test next? | 2025-09-28T18:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nswert/building_real_local_ai_agents_w_braintrust/ | AIForOver50Plus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nswert | false | null | t3_1nswert | /r/LocalLLaMA/comments/1nswert/building_real_local_ai_agents_w_braintrust/ | false | false | self | 0 | null |
AI-Built Products, Architectures, and the Future of the Industry | 2 | Hi everyone, I’m not very close to AI-native companies in the industry, but I’ve been curious about something for a while. I’d really appreciate it if you could answer and explain. (By AI-native, I mean companies building services on top of models, not the model developers themselves.)
1- How are AI-native companies doing? Are there any examples of companies that are profitable, successful, and achieving exponential user growth? What AI service do you provide to your users? Or, from your network, who is doing what?
2-How do these companies and products handle their architectures? How do they find the best architecture to run their services, and how do they manage costs? With these costs, how do they design and build services— is fine-tuning frequently used as a method?
3- What’s your take on the future of business models that create specific services using AI models? Do you think it can be a successful and profitable new business model, or is it just a trend filling temporary gaps? | 2025-09-28T18:50:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nsw97r/aibuilt_products_architectures_and_the_future_of/ | umutkrts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsw97r | false | null | t3_1nsw97r | /r/LocalLLaMA/comments/1nsw97r/aibuilt_products_architectures_and_the_future_of/ | false | false | self | 2 | null |
Lmstudio tables can't be pasted | 3 | LM Studio generates very nice tables, but they can't be pasted into either Word or Excel. Is there a way around this? | 2025-09-28T18:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nsvtq1/lmstudio_tables_cant_be_pasted/ | General-Cookie6794 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsvtq1 | false | null | t3_1nsvtq1 | /r/LocalLLaMA/comments/1nsvtq1/lmstudio_tables_cant_be_pasted/ | false | false | self | 3 | null |
What are your thoughts on ChatGPT Pulse's architecture? | 2 | Just read through OpenAI's announcement for [ChatGPT Pulse](https://openai.com/index/introducing-chatgpt-pulse/) and I'm curious about the tech behind it.
From what I can gather:
* It's **asynchronous overnight processing**
* Processes your chat history + connected apps (Gmail, Calendar, etc.) while you sleep
* Delivers personalized morning briefings as visual cards
* Pro-only ($200/month) due to computational requirements
* Still in beta
**Questions I'm wondering about:**
1. How do you think they're handling the data synthesis pipeline?
2. How are they storing the data? In which format?
3. Do they use agentic memory handling behind the scene?
I tried searching for technical breakdowns but found surprisingly little developer analysis compared to other AI releases. They are obviously hiding this as much as they can.
**Anyone here tried it or have thoughts on the architecture?** Curious if I'm overthinking this or if there's genuinely something interesting happening under the hood. | 2025-09-28T18:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nsvjrv/what_are_your_thoughts_on_chatgpt_pulses/ | anonbudy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsvjrv | false | null | t3_1nsvjrv | /r/LocalLLaMA/comments/1nsvjrv/what_are_your_thoughts_on_chatgpt_pulses/ | false | false | self | 2 | null |
What are your Specs, LLM of Choice, and Use-Cases? | 6 | We used to see too many of these pulse-check posts and now I think we don't get enough of them.
Be brief - what are your system specs? What Local LLM(s) are you using lately, and what do you use them for? | 2025-09-28T18:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nsve3y/what_are_your_specs_llm_of_choice_and_usecases/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsve3y | false | null | t3_1nsve3y | /r/LocalLLaMA/comments/1nsve3y/what_are_your_specs_llm_of_choice_and_usecases/ | false | false | self | 6 | null |
DeGoogle and feeding context into my local LLMs | 0 | After wasting time with ChatGPT and Google trying to figure out if I needed to install vllm 0.10.1+gptoss or just troubleshoot my existing 0.10.2 install for GPT-OSS 20b, I have decided it's time for me to start relying on first party search solutions and recommendation systems on forums and github rather than relying on Google and ChatGPT.
(From my understanding, I need to troubleshoot 0.10.2, the gpt oss branch is outdated)
I feel a bit overwhelmed, but I have some rough idea as to where I'd want to go with this. SearXNG is probably a good start, as well as https://github.com/QwenLM/Qwen-Agent
Anyone else going down this rabbit hole? I'm tired of these big providers wasting my time and money. | 2025-09-28T18:05:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nsv3tr/degoogle_and_feeding_context_into_my_local_llms/ | m1tm0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsv3tr | false | null | t3_1nsv3tr | /r/LocalLLaMA/comments/1nsv3tr/degoogle_and_feeding_context_into_my_local_llms/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=108&crop=smart&auto=webp&s=94f23bfcd36ca000c3f06b7335bca899a283edbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=216&crop=smart&auto=webp&s=1f99b463c5be2971d85d3cd4d75d18fafcd9a965', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=320&crop=smart&auto=webp&s=1127523e45e9cc4ea45aea2fdcd3cdaf1694cff8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=640&crop=smart&auto=webp&s=7ea2762dda7002a281372913a4c98504e9df3b3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=960&crop=smart&auto=webp&s=316682a092f1ac462ebc52559b87aa799a19c3bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?width=1080&crop=smart&auto=webp&s=b1bb5606fee40ea6dda2800d7a7d4e50942f1732', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cu3DzIySsaYYGedSy80oDP_TCbpw4l6G-VtDZjFwn0A.png?auto=webp&s=4401eceeb23716f1b6e0a924b6affd65e394ee56', 'width': 1200}, 'variants': {}}]} |
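If the SearXNG route works out, a rough sketch of pulling results from a local instance over its JSON API and flattening them into prompt context looks like this. The localhost:8888 URL is an assumption, and the JSON output format has to be enabled in the instance's settings.yml.

```python
# Sketch: query a local SearXNG instance and build a plain-text context
# block to prepend to a local LLM prompt. Assumes the JSON output format
# is enabled in the instance's settings.yml; URL/port are placeholders.
import requests

SEARXNG_URL = "http://localhost:8888/search"  # assumed local instance

def search_context(query: str, max_results: int = 5) -> str:
    resp = requests.get(
        SEARXNG_URL,
        params={"q": query, "format": "json"},
        timeout=15,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])[:max_results]
    lines = [
        f"- {r.get('title')} ({r.get('url')}): {r.get('content', '')}"
        for r in results
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(search_context("vllm gpt-oss 20b support"))
```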
Drummer's Cydonia R1 24B v4.1 · A less positive, less censored, better roleplay, creative finetune with reasoning! | 134 | Backlog:
* Cydonia v4.2.0,
* Snowpiercer 15B v3,
* Anubis Mini 8B v1
* Behemoth ReduX 123B v1.1 (v4.2.0 treatment)
* RimTalk Mini (showcase)
I can't wait to release v4.2.0. I think it's proof that I still have room to grow. You can test it out here: [https://huggingface.co/BeaverAI/Cydonia-24B-v4o-GGUF](https://huggingface.co/BeaverAI/Cydonia-24B-v4o-GGUF)
and I went ahead and gave Largestral 2407 the same treatment here: [https://huggingface.co/BeaverAI/Behemoth-ReduX-123B-v1b-GGUF](https://huggingface.co/BeaverAI/Behemoth-ReduX-123B-v1b-GGUF) | 2025-09-28T17:25:35 | https://huggingface.co/TheDrummer/Cydonia-R1-24B-v4.1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nsu3og | false | null | t3_1nsu3og | /r/LocalLLaMA/comments/1nsu3og/drummers_cydonia_r1_24b_v41_a_less_positive_less/ | false | false | default | 134 | {'enabled': False, 'images': [{'id': 'pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=108&crop=smart&auto=webp&s=db2338b41d5d30feb442af3f76578a4b22c465a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=216&crop=smart&auto=webp&s=8193ee3d0305ca83b62119ce92125589e7fd11a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=320&crop=smart&auto=webp&s=84bdd8ae47ac11c2fa2ed05f3dc8e28bc84dc416', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=640&crop=smart&auto=webp&s=fc893240ec75e9bd482665fa32ded5d1badc97c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=960&crop=smart&auto=webp&s=abe9e842e804ab05d1e35aec4f0782b70adb0fb9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?width=1080&crop=smart&auto=webp&s=6f016bfb440ff9715901e29aa9d9be15fcb77c70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pROfhPfXKnMC7ws-LUjL0JI_7ZtgvBAu9hx7L06rI9c.png?auto=webp&s=585d7e12401e446f8e2edc0c172b9bd32ee60b94', 'width': 1200}, 'variants': {}}]} |
What is the best LLM with 1B parameters? | 6 | In your opinion, if you were in a situation with not many resources to run an LLM locally and had to choose between ONLY 1B params LLMs, which one would you use and why? | 2025-09-28T17:22:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nsu154/what_is_the_best_llm_with_1b_parameters/ | Historical_Quality60 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsu154 | false | null | t3_1nsu154 | /r/LocalLLaMA/comments/1nsu154/what_is_the_best_llm_with_1b_parameters/ | false | false | self | 6 | null |
GPT OSS 120B on 20GB VRAM - 6.61 tok/sec - RTX 2060 Super + RTX 4070 Super | 28 | [Task Manager](https://preview.redd.it/zsy2jp8irxrf1.png?width=607&format=png&auto=webp&s=b10141f6f06f497e8c2618032dd1c3330ea9bf35)
[LM Studio](https://preview.redd.it/cq1bocnprxrf1.png?width=454&format=png&auto=webp&s=5924c2cd4a0acfc216b8fe67fc7d884671d84bdc)
[Proof of the answer.](https://preview.redd.it/c5dtvy7trxrf1.png?width=983&format=png&auto=webp&s=39971616dffc9938d870dbfd458f2e3178cf6b2f)
System:
Ryzen 7 5700X3D
2x 32GB DDR4 3600 CL18
512GB NVME M2 SSD
RTX 2060 Super (8GB over PCIE 3.0X4) + RTX 4070 Super (PCIE 3.0X16)
B450M Tommahawk Max
It is incredible that this can run on my machine. I think i could push context even higher maybe to 8K before running out of RAM. I just got into local running of LLM. | 2025-09-28T16:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nstg75/gpt_oss_120b_on_20gb_vram_661_toksec_rtx_2060/ | Storge2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nstg75 | false | null | t3_1nstg75 | /r/LocalLLaMA/comments/1nstg75/gpt_oss_120b_on_20gb_vram_661_toksec_rtx_2060/ | false | false | 28 | null | |
Running chatterbox on 5080 with only 20% of gpu ( CUDA) | 1 | Hello, does anyone have a solid way of optimzing chatterbox? | 2025-09-28T16:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nstdjj/running_chatterbox_on_5080_with_only_20_of_gpu/ | No_Chair9618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nstdjj | false | null | t3_1nstdjj | /r/LocalLLaMA/comments/1nstdjj/running_chatterbox_on_5080_with_only_20_of_gpu/ | false | false | self | 1 | null |
The MoE tradeoff seems bad for local hosting | 59 | I think I understand this right, but somebody tell me where I'm wrong here.
Overly simplified explanation of how an LLM works: for a dense model, you take the context, stuff it through the whole neural network, sample a token, add it to the context, and do it again. The way an MoE model works, instead of the context getting processed by the entire model, there's a router network and then the model is split into a set of "experts", and only some subset of those get used to compute the next output token. But you need more total parameters in the model for this, there's a rough rule of thumb that an MoE model is equivalent to a dense model of size sqrt(total\_params × active\_params), all else equal. (and all else usually isn't equal, we've all seen wildly different performance from models of the same size, but never mind that).
So the tradeoff is, the MoE model uses more VRAM, uses less compute, and is probably more efficient at batch processing because when it's processing contexts from multiple users those are (hopefully) going to activate different experts in the model. This all works out very well if VRAM is abundant, compute (and electricity) is the big bottleneck, and you're trying to maximize throughput to a large number of users; i.e. the use case for a major AI company.
Now, consider the typical local LLM use case. Probably most local LLM users are in this situation:
* VRAM is not abundant, because you're using consumer grade GPUs where VRAM is kept low for market segmentation reasons
* Compute is *relatively* more abundant than VRAM: consider that the compute in an RTX 4090 isn't that far off from what you get from an H100; the H100's advantages are that it has more VRAM, better memory bandwidth, and so on
* You are serving one user at a time at home, or a small number for some weird small business case
* The incremental benefit of higher token throughput above some usability threshold of 20-30 tok/sec is not very high
Given all that, it seems like for our use case you're going to want the best dense model you can fit in consumer-grade hardware (one or two consumer GPUs in the neighborhood of 24GB size), right? Unfortunately the major labs are going to be optimizing mostly for the largest MoE model they can fit in a 8xH100 server or similar because that's increasingly important for their own use case. Am I missing anything here? | 2025-09-28T16:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nsszob/the_moe_tradeoff_seems_bad_for_local_hosting/ | upside-down-number | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsszob | false | null | t3_1nsszob | /r/LocalLLaMA/comments/1nsszob/the_moe_tradeoff_seems_bad_for_local_hosting/ | false | false | self | 59 | null |
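As a quick numeric illustration of the sqrt(total_params × active_params) rule of thumb from the post above, here is a tiny sketch; the configurations below are round, made-up numbers rather than measurements of any particular model.

```python
# Dense-equivalent size under the sqrt(total_params * active_params)
# rule of thumb. The configs are illustrative round numbers only.
from math import sqrt

moe_configs = {
    "~30B total / ~3B active":  (30e9, 3e9),
    "~120B total / ~5B active": (120e9, 5e9),
    "~1T total / ~32B active":  (1e12, 32e9),
}

for name, (total, active) in moe_configs.items():
    dense_equiv = sqrt(total * active)
    print(f"{name}: ~{dense_equiv / 1e9:.0f}B dense-equivalent")
```

So, roughly, a 120B-total / 5B-active MoE behaves like a ~24B dense model under this rule, while still needing the memory footprint of the full 120B weights.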
4070Ti super or wait for a 5070ti | 7 | Got a chance for a 4070Ti Super for 590€ from ebay. I am looking for a gpu for local AI tasks and gaming and was trying to get a 4070ti super, 4080 or 5070ti all 16gb. The other two usually go for around 700+€ used. Should I just go for it or wait for the 5070Ti? Are the 50 series architecture improvements that much better for local AI?
I'm looking to use mostly LLMs at first, but I also want to try image generation and whatnot. | 2025-09-28T16:32:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nsss8s/4070ti_super_or_wait_for_a_5070ti/ | greensmuzi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsss8s | false | null | t3_1nsss8s | /r/LocalLLaMA/comments/1nsss8s/4070ti_super_or_wait_for_a_5070ti/ | false | false | self | 7 | null |
Does anybody used Atlas 300i duo inference card for something? | 1 | [removed] | 2025-09-28T16:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1nss4k8/does_anybody_used_atlas_300i_duo_inference_card/ | Gnarly_tensor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nss4k8 | false | null | t3_1nss4k8 | /r/LocalLLaMA/comments/1nss4k8/does_anybody_used_atlas_300i_duo_inference_card/ | false | false | self | 1 | null |
Can crowd shape the open future, or is everything up to huge investors? | 8 | I am quite a bit concerned about the future of open-weight AI.
Right now, we're *mostly* good: there is a lot of competition and a lot of open companies, but the gap between closed and open-weight models is way larger than I'd like it to be. And capitalism usually means that the gap will only get larger, as commercially successful labs gain more power to produce their closed models, eventually leaving the competition far behind.
What can really be done by mortal crowd to ensure "utopia", and not some megacorp-controlled "dystopia"? | 2025-09-28T15:51:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nsrrod/can_crowd_shape_the_open_future_or_is_everything/ | Guardian-Spirit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsrrod | false | null | t3_1nsrrod | /r/LocalLLaMA/comments/1nsrrod/can_crowd_shape_the_open_future_or_is_everything/ | false | false | self | 8 | null |
Bring Your Own Data (BYOD) | 23 | Awareness of Large Language Models skyrocketed after ChatGPT was born, and everyone jumped on the trend of building and using LLMs, whether to sell them to companies or to integrate them into existing systems. New models are released frequently with new benchmarks, targeting specific tasks such as sales, code generation, and code review.
Last month, Harvard Business Review made a study public which found that 95% of investments in gen AI have produced zero returns. This is not a technical issue but more of a business one, where everybody wants to create or integrate their own AI out of hype and FOMO. This research may or may not have put a wedge in the adoption of AI into existing systems.
To combat the lack of returns, Small Language Models seem to do pretty well, as they are more specialized for a given task. This led me to work on Otto - an end-to-end small language model builder where you build your model with your own data. It's open source and still rough around the edges.
To demonstrate this pipeline, I got data from Hugging Face - a 142 MB dataset containing automotive customer service transcripts - and trained a model with the following parameters:
* 6 layers, 6 heads, 384 embedding dimensions
* 50,257 vocabulary tokens
* 128 tokens for block size.
which gave 16.04M parameters. Training loss improved from 9.2 to 2.2 as the model specialized, learning the structure of automotive service conversations.
This model learned the specific patterns of automotive customer service calls, including technical vocabulary, conversation flow, and domain-specific terminology that a general-purpose model might miss or handle inefficiently.
There are still improvements needed for the pipeline which I am working on, you can try it out here: [https://github.com/Nwosu-Ihueze/otto](https://github.com/Nwosu-Ihueze/otto) | 2025-09-28T15:21:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nsr0vf/bring_your_own_data_byod/ | Long_Complex_4395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsr0vf | false | null | t3_1nsr0vf | /r/LocalLLaMA/comments/1nsr0vf/bring_your_own_data_byod/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': '_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=108&crop=smart&auto=webp&s=42ff7fddf3d1da3e0b925883d8505e850e10d014', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=216&crop=smart&auto=webp&s=2536df9b5e93e2692e7a415c3375872b167ee1f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=320&crop=smart&auto=webp&s=7f33821e6de05c015336ee73068be0a73d1687b0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=640&crop=smart&auto=webp&s=635e2d3256de6793f2920318ae2f98250e5a220d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=960&crop=smart&auto=webp&s=d714311553cf87c336f36bc33ec39698e17eb0e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?width=1080&crop=smart&auto=webp&s=32532c8a169d8ffa273dba50adc37ef47cceda8c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_eVTmmD5rrEkLw6CvHoDYqXHb8j7HbYBiGFwOPGGzmg.png?auto=webp&s=c094c9e197c1290d7ea8cf7b388588fd18737cfa', 'width': 1200}, 'variants': {}}]} |
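For readers who want to sanity-check sizes like the one above, here is a rough GPT-style parameter estimator. It is a generic sketch only: the total a given trainer reports depends on weight tying, biases, and which tensors it counts, so it will not necessarily reproduce the 16.04M figure quoted for this particular run.

```python
# Rough parameter estimate for a GPT-style decoder: token/position
# embeddings, attention (QKV + output projection), and a 4x MLP per
# layer. Real totals vary with weight tying, biases, and what a given
# trainer includes in its reported count.
def gpt_param_estimate(n_layer: int, d_model: int, vocab: int, block: int) -> int:
    emb = vocab * d_model + block * d_model   # token + position embeddings
    attn = 4 * d_model * d_model              # Wq, Wk, Wv, Wo
    mlp = 2 * d_model * (4 * d_model)         # up + down projections
    return emb + n_layer * (attn + mlp)

# Config described in the post: 6 layers, 384-dim, 50,257 vocab, 128 block size
print(f"{gpt_param_estimate(6, 384, 50_257, 128) / 1e6:.2f}M parameters (rough)")
```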
Micro‑MCP: A Microservice Architecture Pattern for the Model Context Protocol | 1 | [removed] | 2025-09-28T14:59:32 | https://github.com/mabualzait/MicroMCP | abualzait | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nsqhgn | false | null | t3_1nsqhgn | /r/LocalLLaMA/comments/1nsqhgn/micromcp_a_microservice_architecture_pattern_for/ | false | false | default | 1 | null |
I created a simple tool to manage your llama.cpp settings & installation | 34 | Yo! I was messing around with my configs etc and noticed it was a massive pain to keep it all in one place... So I vibecoded this thing. [https://github.com/IgorWarzocha/llama\_cpp\_manager](https://github.com/IgorWarzocha/llama_cpp_manager)
A zero-bs configuration tool for llama.cpp that runs in your terminal.
It starts with a wizard to configure your basic defaults, then sorts out your llama.cpp download/update - it finds the appropriate compiled binary in the GitHub releases, downloads it, unzips it, cleans up the temp file, and so on.
There's a model config management module that guides you through editing basic config, but you can also add your own parameters...
I also included a basic benchmarking utility that will run your saved model configs against your current server config with a pre-selected prompt and give you stats.
Anyway, I tested it thoroughly enough on Ubuntu/Vulkan. Can't vouch for any other situations. If you have your own compiled llama.cpp you can drop it into llama-cpp folder.
Let me know if it works for you, if you would like to see any features added etc. It's hard to keep a "good enough" mindset so it's not overwhelming or annoying lolz.
Cheerios. | 2025-09-28T14:55:42 | igorwarzocha | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nsqe5i | false | null | t3_1nsqe5i | /r/LocalLLaMA/comments/1nsqe5i/i_created_a_simple_tool_to_manage_your_llamacpp/ | false | false | default | 34 | {'enabled': True, 'images': [{'id': 'z2jl2s624xrf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/z2jl2s624xrf1.png?width=108&crop=smart&auto=webp&s=8a54ecfa33c84aab972a58f4099ea9457d300047', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/z2jl2s624xrf1.png?width=216&crop=smart&auto=webp&s=ecf5780c34128e47c583af16311fffc76934b1b4', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/z2jl2s624xrf1.png?width=320&crop=smart&auto=webp&s=37556f34c1babc9b9997a0bb897e8c355663bb7c', 'width': 320}, {'height': 459, 'url': 'https://preview.redd.it/z2jl2s624xrf1.png?width=640&crop=smart&auto=webp&s=c627cf5e755bc96918498e6d100146f512f95e46', 'width': 640}], 'source': {'height': 607, 'url': 'https://preview.redd.it/z2jl2s624xrf1.png?auto=webp&s=b75b6f17305d55fe518f3b5884401a9d91ea21e1', 'width': 846}, 'variants': {}}]} | |
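For anyone curious what the "grab the right prebuilt binary" step boils down to, here is a minimal sketch against the GitHub releases API. The repo path and the asset keyword are assumptions; match them to the actual llama.cpp release naming for your platform.

```python
# Sketch: find and download a prebuilt llama.cpp release asset via the
# GitHub API. Repo path and asset keyword are assumptions; adjust the
# keyword for your platform (e.g. "vulkan", "cuda", "win", "ubuntu").
import requests

REPO = "ggml-org/llama.cpp"  # assumed upstream repo path
KEYWORD = "vulkan"           # substring to look for in the asset name

def download_latest_asset(keyword: str = KEYWORD) -> str:
    release = requests.get(
        f"https://api.github.com/repos/{REPO}/releases/latest", timeout=30
    ).json()
    for asset in release.get("assets", []):
        if keyword in asset["name"].lower():
            data = requests.get(asset["browser_download_url"], timeout=300)
            with open(asset["name"], "wb") as f:
                f.write(data.content)
            return asset["name"]
    raise RuntimeError(f"No asset matching '{keyword}' in {release.get('tag_name')}")

if __name__ == "__main__":
    print("Downloaded:", download_latest_asset())
```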
Test | 1 | Automated Post for reddit | 2025-09-28T14:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nspyky/test/ | Extra_Cicada8798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nspyky | false | null | t3_1nspyky | /r/LocalLLaMA/comments/1nspyky/test/ | false | false | self | 1 | null |
What happened to my speed? | 2 | A few weeks ago I was running ERNIE with llama.cpp at 15+ tokens per second on 4 GB of VRAM and 32 GB of DDR5. No command-line flags, just defaults.
I changed OS and now it's only like 5 tps. I can still get 16 or so via LMstudio, but for some reason the vulkan llamacpp for linux/windows is MUCH slower on this model, which happens to be my favorite. | 2025-09-28T14:28:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nspq3c/what_happened_to_my_speed/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nspq3c | false | null | t3_1nspq3c | /r/LocalLLaMA/comments/1nspq3c/what_happened_to_my_speed/ | false | false | self | 2 | null |
Hybrid Quantum-Classical Language Model with Parallel Consciousness Architecture | 0 | # Quantum-Classical Language Model
# Something new we are working on...
will be able to the mentioned parties in the model card soon..
We invite Researchers and Scientists to help us out.
[https://huggingface.co/ross-dev/Quantum-Consciousness-LLM](https://huggingface.co/ross-dev/Quantum-Consciousness-LLM) | 2025-09-28T14:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nspedv/hybrid_quantumclassical_language_model_with/ | No_Lab_8797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nspedv | false | null | t3_1nspedv | /r/LocalLLaMA/comments/1nspedv/hybrid_quantumclassical_language_model_with/ | false | false | self | 0 | null |
VibeVoice Large able to do any language! | 0 | I tried VibeVoice Large lately with some obscure languages that no other TTS model was able to produce, and it does it with great success.
I knew the model was praised for many things, but I didn't know It can do any language, which was surprising.
What was your thoughts on the model?
Did it work on your language? (No English / Chinese)
And do you guys know if there a way to add tags such as laughing/crying etc?
| 2025-09-28T14:13:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nspdp5/vibevoice_large_able_to_do_any_language/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nspdp5 | false | null | t3_1nspdp5 | /r/LocalLLaMA/comments/1nspdp5/vibevoice_large_able_to_do_any_language/ | false | false | self | 0 | null |
Do I need to maintain minimum amount when use lambda.ai GPU? | 1 | Do I need to maintain minimum amount when use [lambda.ai](http://lambda.ai) GPU? Some service providers need to maintain $100 minimum when use more than 3 GPUs instances. Any other requirements when consider money? | 2025-09-28T13:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nsozhm/do_i_need_to_maintain_minimum_amount_when_use/ | iNdramal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsozhm | false | null | t3_1nsozhm | /r/LocalLLaMA/comments/1nsozhm/do_i_need_to_maintain_minimum_amount_when_use/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=108&crop=smart&auto=webp&s=d932e3d689c1618154c2cc2a5a17dff94e5c05e5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=216&crop=smart&auto=webp&s=a991be6ad0cdf714bc254b448eb4276b4007323f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=320&crop=smart&auto=webp&s=74a6e3317239bc8238fb65498c25e8d34147736e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=640&crop=smart&auto=webp&s=4eb013ddf11059e5bdcdf3a036ec82087f758693', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=960&crop=smart&auto=webp&s=fa7b6a0cbd35a36ba45b2d698e7306738893ec10', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?width=1080&crop=smart&auto=webp&s=60ba34683bbb8d38cef224daf53df54f4f254b7a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/yN4Jn_9jrScQuCUMyuMVAXFz9WsLhKuWY-eljY8NBDE.png?auto=webp&s=0124b1cda2cb261414faa67b410b167e9306e9af', 'width': 1200}, 'variants': {}}]} |
Are these VibeVoice models SAME? | 3 | VibeVoice Large: [https://www.modelscope.cn/models/microsoft/VibeVoice-Large/files](https://www.modelscope.cn/models/microsoft/VibeVoice-Large/files)
VibeVoice 7B: [https://www.modelscope.cn/models/microsoft/VibeVoice-7B/files](https://www.modelscope.cn/models/microsoft/VibeVoice-7B/files)
Are these same or? | 2025-09-28T13:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nsoxww/are_these_vibevoice_models_same/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsoxww | false | null | t3_1nsoxww | /r/LocalLLaMA/comments/1nsoxww/are_these_vibevoice_models_same/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE.png?width=108&crop=smart&auto=webp&s=932e5852eea36b3df4c07eba1845658f1cd8838f', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE.png?auto=webp&s=11e2af3f5c119044ef26ee93ca810bf46c542590', 'width': 128}, 'variants': {}}]} |
What am I missing? GPT-OSS is much slower than Qwen 3 30B A3B for me! | 30 | Hey to y'all,
I'm having a slightly weird problem. For weeks now, people have been saying "GPT-OSS is so fast, it's so quick, it's amazing", and I agree, the model is great.
But one thing bugs me: Qwen 3 30B A3B is noticeably faster on my end. For context, I am using an RTX 4070 Ti (12 GB VRAM) and 5600 MHz 32 GB system RAM with a Ryzen 7 7700X. As for quantizations, I am using the default MXFP4 format for GPT-OSS and Q4\_K\_M for Qwen 3 30B A3B.
I am launching those with almost the same command line parameters (llama-swap in the background):
/app/llama-server -hf unsloth/gpt-oss-20b-GGUF:F16 --jinja -ngl 19 -c 8192 -fa on -np 4
/app/llama-server -hf unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:Q4_K_M --jinja -ngl 26 -c 8192 -fa on -np 4
*(I just increased -ngl as long as I could until it wouldn't fit anymore - using -ngl 99 didn't work for me)*
What am I missing? GPT-OSS only hits 25 tok/s on good days, while Qwen easily hits up to 34.5 tok/s! I made sure to use the most recent releases when testing, so that can't be it... prompt processing is roughly the same speed (149 tok/s for GPT-OSS and 143 tok/s for Qwen)
Anyone with the same issue? | 2025-09-28T13:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1nsoqa7/what_am_i_missing_gptoss_is_much_slower_than_qwen/ | Final_Wheel_7486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsoqa7 | false | null | t3_1nsoqa7 | /r/LocalLLaMA/comments/1nsoqa7/what_am_i_missing_gptoss_is_much_slower_than_qwen/ | false | false | self | 30 | null |
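One back-of-the-envelope way to reason about the -ngl tuning described above is to estimate how many layers fit in the VRAM left over after overhead. A small sketch follows; the file size, layer count, and free-VRAM values are placeholders to replace with your own model and GPU numbers.

```python
# Back-of-the-envelope layer-offload estimate: assume the GGUF's weight
# bytes are spread roughly evenly across layers and see how many fit in
# the VRAM left after context/KV-cache and other overhead. All inputs
# are placeholders for your own model file and GPU.
def layers_that_fit(gguf_size_gb: float, n_layers: int,
                    free_vram_gb: float, overhead_gb: float = 1.5) -> int:
    per_layer_gb = gguf_size_gb / n_layers
    usable_gb = max(free_vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# Placeholder example: a ~12 GB GGUF with 24 layers on a 12 GB card
print(layers_that_fit(gguf_size_gb=12.0, n_layers=24, free_vram_gb=12.0))
```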
September 2025 benchmarks - 3x3090 | 51 | Please enjoy the benchmarks on 3×3090 GPUs.
(If you want to reproduce my steps on your setup, you may need a fresh **llama.cpp** build)
To run the benchmark, simply execute:
`llama-bench -m <path-to-the-model>`
Sometimes you may need to add `--n-cpu-moe` or `-ts`.
We’ll test both a fast “dry run” and a run with a prefilled context (10,000 tokens), so for each model you’ll see the range between the initial speed and the later, slower speed.
results:
* gemma3 27B Q8 - 23t/s, 26t/s
* Llama4 Scout Q5 - 23t/s, 30t/s
* gpt oss 120B - 95t/s, 125t/s
* dots Q3 - 15t/s, 20t/s
* Qwen3 30B A3B - 78t/s, 130t/s
* Qwen3 32B - 17t/s, 23t/s
* Magistral Q8 - 28t/s, 33t/s
* GLM 4.5 Air Q4 - 22t/s, 36t/s
* Nemotron 49B Q8 - 13t/s, 16t/s
please share your results on your setup | 2025-09-28T12:37:58 | https://www.reddit.com/gallery/1nsnahe | jacek2023 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nsnahe | false | null | t3_1nsnahe | /r/LocalLLaMA/comments/1nsnahe/september_2025_benchmarks_3x3090/ | false | false | 51 | null | |
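To reproduce runs like this across several models without babysitting each one, here is a small wrapper sketch around llama-bench. The -p/-n/-o flags and the avg_ts JSON field are taken from recent llama.cpp builds, so check llama-bench --help on your version; the model paths are placeholders.

```python
# Sketch: run llama-bench over several GGUFs with a short and a long
# (10k-token) prefill, collecting tokens/sec from the JSON output.
# Flags and field names assume a recent llama.cpp build.
import json
import subprocess

MODELS = ["models/gemma3-27b-q8.gguf", "models/qwen3-30b-a3b-q4.gguf"]  # placeholders

def bench(model_path: str, prompt_tokens: int) -> list[dict]:
    out = subprocess.run(
        ["llama-bench", "-m", model_path,
         "-p", str(prompt_tokens), "-n", "128", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

for m in MODELS:
    for prefill in (512, 10_000):
        for row in bench(m, prefill):
            print(m, f"p={prefill}", f"{row.get('avg_ts', 0):.1f} t/s")
```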
What's the simplest gpu provider? | 0 | Hey,
looking for the easiest way to run GPU jobs. Ideally it’s a couple of clicks from the CLI/VS Code. Not chasing the absolute cheapest, just simple + predictable pricing. EU data residency/sovereignty would be great.
I use modal today, just found lyceum, pretty new, but so far looks promising (auto hardware pick, runtime estimate). Also eyeing runpod, lambda, and ovhcloud. maybe vast or paperspace?
what’s been the least painful for you? | 2025-09-28T12:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nsmp8i/whats_the_simplest_gpu_provider/ | test12319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nsmp8i | true | null | t3_1nsmp8i | /r/LocalLLaMA/comments/1nsmp8i/whats_the_simplest_gpu_provider/ | false | false | self | 0 | null |
What are Kimi devs smoking | 672 | Strangee | 2025-09-28T12:02:02 | Thechae9 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nsmksq | false | null | t3_1nsmksq | /r/LocalLLaMA/comments/1nsmksq/what_are_kimi_devs_smoking/ | false | false | default | 672 | {'enabled': True, 'images': [{'id': 't8wfkk09bwrf1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=108&crop=smart&auto=webp&s=4d7a975c91f0dc153eeb8811c956a1742aeb4bfc', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=216&crop=smart&auto=webp&s=3a8c3b251f7eab15ce926ea9916d4275b43f71c4', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=320&crop=smart&auto=webp&s=167c7f34fa6eeebfad64364330657f09c75f2d08', 'width': 320}, {'height': 491, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=640&crop=smart&auto=webp&s=7ec5f0e9de05cf0aafc8bc507d4950ca47e8ef09', 'width': 640}, {'height': 737, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=960&crop=smart&auto=webp&s=ab3e363968656546894c3cda50e50ff6a0dc001c', 'width': 960}, {'height': 830, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?width=1080&crop=smart&auto=webp&s=d67574833ba355d4aa4061184413c155f41bc7d3', 'width': 1080}], 'source': {'height': 954, 'url': 'https://preview.redd.it/t8wfkk09bwrf1.jpeg?auto=webp&s=57d1466dac3f75877f580c35ff7cc64a9ffc8ddb', 'width': 1241}, 'variants': {}}]} |