| title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns] 2023-04-01 04:30:41 to 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns] 1970-01-01 00:00:00 to 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AMD + NVIDIA GPU | 1 | I've got an RTX 5070 Ti (PCIe 5.0 x16, CPU) and an RX 5700 XT (PCIe 4.0 x4, CPU) in my AM5 PC.
Is there a way to use both GPUs and the CPU to run the same gguf model? | 2025-10-20T11:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1obgm8u/amd_nvidia_gpu/ | Wundsalz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obgm8u | false | null | t3_1obgm8u | /r/LocalLLaMA/comments/1obgm8u/amd_nvidia_gpu/ | false | false | self | 1 | null |
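A commonly used route for mixed AMD + NVIDIA rigs like this is llama.cpp's Vulkan backend, which can address both vendors' cards in one process and leave the rest of the model in system RAM for the CPU. Below is a minimal launch sketch, assuming a Vulkan-enabled llama.cpp build where the two cards enumerate as devices 0 and 1; the model path, split ratio, and context size are placeholders to adapt.

```python
import subprocess

# Hypothetical paths and ratios; adjust for your build and model.
cmd = [
    "./llama-server",
    "-m", "models/some-model-q4_k_m.gguf",  # placeholder GGUF
    "-ngl", "99",                 # offload as many layers as fit on the GPUs
    "--split-mode", "layer",      # split whole layers across devices
    "--tensor-split", "16,8",     # rough VRAM ratio: 16 GB 5070 Ti vs 8 GB 5700 XT
    "-c", "8192",                 # context length
]
# Layers that do not fit in VRAM stay in system RAM and run on the CPU.
subprocess.run(cmd, check=True)
```

The same idea can also be built from one CUDA process for the NVIDIA card plus an RPC or Vulkan device for the AMD card, but a single Vulkan build is the simplest thing to try first.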
Which LLM to use to replace Gemma3? | 4 | I built a complex program that uses Gemma 3 27B, adding a memory node graph, drives, emotions, goals, needs, identity and so on onto it, but I'm still using Gemma 3 to run the whole thing.
Is there any non-thinking LLM as of now that I can fully fit on my 3090 that can also handle complex JSON output and is good at conversations? | 2025-10-20T11:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1obgdae/which_llm_to_use_to_replace_gemma3/ | PSInvader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obgdae | false | null | t3_1obgdae | /r/LocalLLaMA/comments/1obgdae/which_llm_to_use_to_replace_gemma3/ | false | false | self | 4 | null |
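Whichever model replaces Gemma 3, the "complex JSON output" half of the requirement can be enforced at the server rather than hoped for from the model: llama.cpp's OpenAI-compatible server (and several other local servers) accepts a JSON schema and constrains decoding to it. A minimal sketch, assuming a local server on port 8080; the schema and field names are made up for illustration.

```python
import json, urllib.request

schema = {  # hypothetical schema for one node of the memory graph
    "type": "object",
    "properties": {
        "emotion": {"type": "string"},
        "intensity": {"type": "number"},
        "goal": {"type": "string"},
    },
    "required": ["emotion", "intensity", "goal"],
}

payload = {
    "messages": [{"role": "user", "content": "Summarize your current state."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "state", "schema": schema},
    },
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

With grammar-constrained decoding the output is valid JSON by construction, so even a smaller non-thinking model that fits on a 3090 stays parseable.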
Is Meta done with open-source Llama releases? | 41 | Was cleaning up my local LM stacks and noticed all the old Llama models I had. Brought back memories of how much fun they were — made me wonder, is Meta done releasing open-source models? | 2025-10-20T11:19:17 | https://www.reddit.com/r/LocalLLaMA/comments/1obgci1/is_meta_done_with_opensource_llama_releases/ | emimix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obgci1 | false | null | t3_1obgci1 | /r/LocalLLaMA/comments/1obgci1/is_meta_done_with_opensource_llama_releases/ | false | false | self | 41 | null |
Best Model for OCR | 2 | I'm trying to integrate Meal Tracker and Nutrition Label OCR in one of my projects.
Right now I've used GPT-4o and Gemini 2.5 Flash and the results are good.

What are the best/optimal solutions for this kind of problem that are, of course, cheap and good in performance and accuracy as well? | 2025-10-20T11:07:44 | https://www.reddit.com/r/LocalLLaMA/comments/1obg4om/best_model_for_ocr/ | Savings_Day_1595 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obg4om | false | null | t3_1obg4om | /r/LocalLLaMA/comments/1obg4om/best_model_for_ocr/ | false | false | self | 2 | null |
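For moving this off the cloud, the request shape is essentially the same against a local vision model served through an OpenAI-compatible endpoint (for example a Qwen2.5-VL GGUF in llama.cpp or LM Studio). A minimal sketch; the port, model name, and image path are placeholders.

```python
import base64, json, urllib.request

with open("nutrition_label.jpg", "rb") as f:  # placeholder photo
    b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "qwen2.5-vl-7b",  # placeholder: whatever model the server loaded
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Extract calories, protein, fat, and carbs as JSON."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```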
ollama for toys, vLLM for cloud production, who used EXO? | 0 | Hi,
I have heard about Ollama many times and I run it locally on my laptop. Most production solutions recommend vLLM: Mistral, Hugging Face, and the Llama Stack official documentation all mention vLLM.
What about EXO? Did anyone use it? What are the benefits? | 2025-10-20T11:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/1obg11j/ollama_for_toys_vllm_for_cloud_production_who/ | kasianenko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obg11j | false | null | t3_1obg11j | /r/LocalLLaMA/comments/1obg11j/ollama_for_toys_vllm_for_cloud_production_who/ | false | false | self | 0 | null |
Practical takeaways from recent hands-on use of PaddleOCR‑VL 0.9B | 19 | Bottom line up front: I care most about whether complex layouts can be restored into structured data, whether handwriting, tables, and formulas are stable, and local inference speed and cost. PaddleOCR-VL 0.9B feels purpose-built for production, especially for multi-column PDFs, table structures, and formulas. Cloud models like GPT-4o and Gemini 2.5 Pro are more general for commonsense, cross-domain understanding and conversational interaction, but you need to factor in cost and privacy compliance.
Scope and Constraints
1. Task domain: Document parsing and OCR, including text, tables, formulas, handwriting, and chart annotations.
2. Versions and sources: PaddleOCR-VL 0.9B, based on public materials and official demos. Baselines include GPT-4o, Gemini 2.5 Pro, MinerU 2.5, and dots.ocr, using public information.
On multi-column complex layouts, the question is whether they can be directly restored into structured data, which I value highly because it decides how much human cleanup downstream automation needs. PaddleOCR-VL takes an engineering-first approach: a NaViT dynamic visual encoder plus a lightweight ERNIE, combining layout understanding with structured outputs. In my experience with academic PDFs and financial reports that mix multiple columns, formulas, and footnotes, it less often produces results that look correct but have broken structure. If your core goal is structured outputs that minimize rework, the default path of PaddleOCR-VL is steadier. General VLMs can understand the content, but often need extra prompt engineering or postprocessing to guarantee structure.
Handwriting, tables, and formulas: which is steadier? I would not claim any model absolutely dominates, but considering recognition accuracy and structural usability together, PaddleOCR-VL feels more production-ready. It emphasizes strong performance on printed Chinese and English, handwritten English, and even Chinese handwriting and pinyin. Tables and formulas are traditional strengths of OCR systems, and emitting Markdown, HTML, or LaTeX can save a lot of time. Cloud models are strong at formula inference and cross-page linkage, but they sometimes output plausible-looking yet misgridded or misaligned structures, which requires an extra verification pass.
Multilingual support is a classic OCR topic. This generation of PaddleOCR-VL highlights coverage of 109 languages and continues the PP-OCR family's lightweight design without sacrificing multilingual capability. Traditional OCR recognition modules can even be kept within hundreds of megabytes. My hunch is that common European languages plus Chinese, Japanese, and Korean pose no pressure, while long-tail scripts and rare character sets depend on your data distribution, so it is best to pilot with a small batch first.
I'm not an expert either; I'm just sharing as a newbie with everyone:
1. If your goal is to extract multi-column PDFs, reports, and papers into structured data in as close to one pass as possible, and you need to run extensively on an enterprise intranet or at the edge, prioritize PaddleOCR-VL.
2. If you need to chat with documents or do cross-domain summarization, reasoning, and rewriting, and the volume is small with no hard privacy constraints, use GPT-4o or Gemini 2.5 Pro, then add some postprocessing for structure.
3. If you already have MinerU 2.5 or dots.ocr pipelines and costs are under control, there is no need to churn if production is good enough. If you must tackle complex layouts with structured export, run another head-to-head focusing on rework volume.
Reference links
1. [https://huggingface.co/PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL)
2. [https://github.com/PaddlePaddle/PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
3. [https://aistudio.baidu.com/paddleocr](https://aistudio.baidu.com/paddleocr) | 2025-10-20T10:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1obfwt9/practical_takeaways_from_recent_handson_use_of/ | contportvas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obfwt9 | false | null | t3_1obfwt9 | /r/LocalLLaMA/comments/1obfwt9/practical_takeaways_from_recent_handson_use_of/ | false | false | self | 19 | null |
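For readers who want a quick baseline before committing to any of these engines, the classic PaddleOCR Python API is the fastest thing to stand up. Note this sketch uses the established PaddleOCR pipeline interface, not PaddleOCR-VL specifically, whose API may differ; check the GitHub repo linked above for the VL-specific usage.

```python
# pip install paddlepaddle paddleocr
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")           # downloads detection + recognition models
result = ocr.ocr("report_page.png")  # placeholder scan of one document page

for line in result[0]:               # one entry per detected text line
    box, (text, confidence) = line
    print(f"{confidence:.2f}  {text}")
```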
DAMN! Kimi K2 is 5x faster and more accurate than frontier proprietary models | 80 | https://preview.redd.it/bw20aruc58wf1.png?width=1192&format=png&auto=webp&s=80ef4d2d3b6b194d08a290d37a68cd1f5bd072bb

Guillermo Rauch (**Vercel CEO**) just shared benchmark results from their internal agent testing. That's roughly **5× faster** and **50% higher accuracy** than the top proprietary models.

It's wild to see open source models not just catching up but starting to outperform in both efficiency and accuracy. | 2025-10-20T10:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1obftw9/damn_kimi_k2_is_5x_faster_and_more_accurate_than/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obftw9 | false | null | t3_1obftw9 | /r/LocalLLaMA/comments/1obftw9/damn_kimi_k2_is_5x_faster_and_more_accurate_than/ | false | false | 80 | null |
Best story AI | 0 | I have an RTX 5060 Ti 16GB (lucky I didn't choose the 5070) and 32GB RAM (probably doesn't help much 😔). I am writing stories. ChatGPT is great, but the memory is, uh... not good enough: the longer the conversation gets, the more I have to keep reminding him. So I am thinking about using an AI locally (for much better persistent memory). What is the best AI for this task right now? | 2025-10-20T10:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1obft8r/best_story_ai/ | Adorable-Opening-199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obft8r | false | null | t3_1obft8r | /r/LocalLLaMA/comments/1obft8r/best_story_ai/ | false | false | self | 0 | null |
DreamOmni2 — multimodal instruction-based editing & generation (web demo + code) | 9 | Open-source, unified model that uses text + reference images to do precise edits or full generations, including abstract attributes and multi-reference workflows. See the project page demos, try the HF Web demo, and grab code + weights.
• Capabilities shown: object replacement, lighting/style transfer, pose/expression/hair edits, in-context & multi-reference examples. 
• Try it now: DreamOmni2-Edit Space on Hugging Face. 
https://huggingface.co/spaces/wcy1122/DreamOmni2-Edit
https://github.com/dvlab-research/DreamOmni2 | 2025-10-20T09:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1obfodq/dreamomni2_multimodal_instructionbased_editing/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obfodq | false | null | t3_1obfodq | /r/LocalLLaMA/comments/1obfodq/dreamomni2_multimodal_instructionbased_editing/ | false | false | self | 9 | null |
What kind of hardware do you need to run and train a big LLM locally? | 1 | Hey folks,
I’ve been diving deeper into local LLMs lately and I’m curious about a few things that I can’t seem to find a solid, real-world answer for:
1. **What model size is generally considered “comfortable” for a ChatGPT-like experience?** I’m not talking about GPT-4 quality exactly — just something that feels smooth, context-aware, and fast enough for daily use without insane latency.
2. **What hardware setup can comfortably run that kind of model** with *high speed* and the ability to handle **5–10 concurrent sessions** (e.g. multiple users or chat tabs)? I’ve heard that **AMD’s upcoming Strix Halo chips** might be really strong for this kind of setup — are they actually viable for running medium-to-large models locally, or still not quite there compared to multi-GPU rigs?
3. For those of you who’ve actually set up local LLM systems:
* How do you structure your **data pipeline** (RAG, fine-tuning, vector DBs, etc.)?
* How do you handle **cooling, uptime, and storage management** in a home or lab environment?
* Any “I wish I knew this earlier” advice before someone invests thousands into hardware?
I’m trying to plan a setup that can eventually handle both **inference and some light fine-tuning** on my own text datasets, but I’d like to know what’s *realistically sustainable* for local use before I commit.
Would love to hear your experiences — from both the workstation and homelab side.
(Ironically, I wrote this with the help of GPT-5, no need to point it out :p. I've tried searching back and forth through Google and ChatGPT, but I want to hear answers from you lot who have actually experienced and tinkered with this. HUGE thanks in advance, by the way!) | 2025-10-20T09:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1obfn0y/what_kind_of_hardware_do_you_need_to_run_and/ | DarealCoughyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obfn0y | false | null | t3_1obfn0y | /r/LocalLLaMA/comments/1obfn0y/what_kind_of_hardware_do_you_need_to_run_and/ | false | false | self | 1 | null |
Captioning | 0 | Hi! Do you have any suggestions for the best model to train from scratch on a 64k dataset? | 2025-10-20T09:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1obfm35/captioning/ | CompetitionOk5997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obfm35 | false | null | t3_1obfm35 | /r/LocalLLaMA/comments/1obfm35/captioning/ | false | false | self | 0 | null |
Does MoE mean less GPU? | 0 | Hey guys,

I am a little confused about this. I am planning a home lab for LLMs for daily things, nothing fancy. Recently many good MoE models have come out which are quite good at tool calling and instruction following. I thought I would need GPU memory only for the active params, but when I asked ChatGPT it said no: I will need enough GPU memory to fit the entire model, otherwise performance will be a bottleneck.
Here are some of the screenshots:
https://preview.redd.it/mj1dloaci8wf1.png?width=2092&format=png&auto=webp&s=66e46f8ba1331d411c1f18f57a8e4ce7b67a68c4
https://preview.redd.it/qeisiqafi8wf1.png?width=2072&format=png&auto=webp&s=5e971ec0fb06fe2cfe6d583df9201255bcc4835c
| 2025-10-20T09:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1obfduz/does_moe_means_lesser_gpu/ | bhupesh-g | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obfduz | false | null | t3_1obfduz | /r/LocalLLaMA/comments/1obfduz/does_moe_means_lesser_gpu/ | false | false | 0 | null | |
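For what it's worth, the answer in those screenshots matches how MoE inference works: the router picks different experts for every token, so all expert weights must stay resident somewhere (VRAM or RAM), while per-token compute and bandwidth scale with the active parameters only. A rough back-of-envelope, with the bits-per-weight figure as an approximation:

```python
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Example: a 30B-total / 3B-active MoE (e.g. Qwen3-30B-A3B) at ~Q4.
print(f"all experts: ~{weight_gb(30, 4.5):.0f} GB must be held somewhere")
print(f"per token:   ~{weight_gb(3, 4.5):.1f} GB of weights actually read")
```

This is why MoE models run surprisingly fast even with experts offloaded to system RAM: the GPU only needs the small active slice per token, but nothing lets you skip storing the rest.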
Assistance needed in building a clone | 1 | Are there any freelancers out here who will get me started on building a clone of myself? I can go into greater detail if someone takes on projects like that. I have pretty low-level knowledge regarding the process, am strapped for time, and am not a coder at all!!! Given a road map, I can follow it. Thanks | 2025-10-20T08:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/1obf1vk/assistance_needed_in_building_a_clone/ | Happyhotwifenow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obf1vk | false | null | t3_1obf1vk | /r/LocalLLaMA/comments/1obf1vk/assistance_needed_in_building_a_clone/ | false | false | self | 1 | null |
Suggestion! SD Workstation Threadripper PRO + RTX 6000 Blackwell | 1 | I am looking to run Stable Diffusion 24 hours a day via API, and there will be 4 customers at the same time.

* Does the configuration below make sense?
* Are there any conflicts between the hardware I chose?
[System Specs](https://preview.redd.it/6cyli356c8wf1.png?width=1157&format=png&auto=webp&s=d9efe6b38113beb12a1efa8d0534a07d59417a00)
| 2025-10-20T08:50:42 | https://www.reddit.com/r/LocalLLaMA/comments/1obeych/suggestion_sd_workstation_threadripper_pro_rtx/ | visionkhawar512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obeych | false | null | t3_1obeych | /r/LocalLLaMA/comments/1obeych/suggestion_sd_workstation_threadripper_pro_rtx/ | false | false | 1 | null | |
Hands-on tutorial on fine-tuning Small Vision Models | 15 | In this repository you will learn how to build and deploy high-accuracy, low-latency image classifiers on your phone using local Visual Language Models.
We will use
* a sequence of increasingly complex classification tasks, to uncover step-by-step how to build highly-specialized image classification systems, tailored to your specific use case.
* the [**LFM2-VL** family of open-weight Visual Language Models (aka VLMs) by Liquid AI](https://huggingface.co/collections/LiquidAI/lfm2-vl-68963bbc84a610f7638d5ffa) to classify images for these tasks.
* the [**Leap Edge SDK**](https://leap.liquid.ai/docs) for iOS to deploy the final models into an iOS app.
Link to the github repo: [https://github.com/Paulescu/image-classification-with-local-vlms](https://github.com/Paulescu/image-classification-with-local-vlms) | 2025-10-20T08:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/1obexku/handson_tutorial_on_finetuning_small_vision_models/ | PauLabartaBajo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obexku | false | null | t3_1obexku | /r/LocalLLaMA/comments/1obexku/handson_tutorial_on_finetuning_small_vision_models/ | false | false | self | 15 | null |
Debugging at llama.cpp server side | 7 | Given a llama.cpp server, what is the best way to dump all the requests/responses send/received from it?
Some AI tools/plugins/UIs work quite fast, while some work quite slowly with seemingly the same request. Probably that is because the prompt prefixed before the actual request is quite large? I want to read/debug the actual prompt being sent - I guess this can only be done by dumping the HTTP request from the wire or patching llama.cpp? | 2025-10-20T08:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1obeq5q/debugging_at_llamacpp_server_side/ | Bird476Shed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obeq5q | false | null | t3_1obeq5q | /r/LocalLLaMA/comments/1obeq5q/debugging_at_llamacpp_server_side/ | false | false | self | 7 | null |
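Besides packet capture or patching, a third option is to slip a tiny logging reverse proxy between the client and llama-server. A minimal sketch using only the Python standard library, assuming llama-server listens on 8080 and the AI tool is pointed at 9090; it logs the full request body, which is where the prefixed prompt lives.

```python
import http.server, urllib.request

UPSTREAM = "http://localhost:8080"

class LoggingProxy(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"--> POST {self.path}\n{body.decode(errors='replace')}\n")
        req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        print(f"<-- {data[:500].decode(errors='replace')}\n")  # truncated reply
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

http.server.HTTPServer(("localhost", 9090), LoggingProxy).serve_forever()
```

This sketch does not handle streamed (SSE) responses cleanly, so disable streaming in the client while inspecting, but it is enough to see exactly how large a prefix each tool prepends.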
Is this affordable server useful for a multicard setup? | 0 | Gigabyte G431-MM0 AMD EPYC 3151 SoC 0GB DDR4 10X GPU 4X SFF 4U rack server, found on the German eBay:
https://www.ebay.de/itm/166801259652
Your thoughts are appreciated. | 2025-10-20T08:19:12 | https://www.reddit.com/r/LocalLLaMA/comments/1obegxi/is_this_affordable_server_useful_for_a_multicard/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obegxi | false | null | t3_1obegxi | /r/LocalLLaMA/comments/1obegxi/is_this_affordable_server_useful_for_a_multicard/ | false | false | self | 0 | null |
Dual GPU setup, one GPU functions normally, the other spikes, why does this happen? | 6 | Does anyone know why this happens? I'm using Behemoth 123B at Q2_K_S on 2 MI50 32GBs. When prompt processing, everything is normal on the first GPU but the graph is spiky on the second one. Could this be because of PCIe lanes? The only difference between them is that the second one is connected with PCIe 3.0 x4 while the first one is on x16. This doesn't happen with smaller models or more models either :/ | 2025-10-20T07:50:00 | https://i.redd.it/vjobsc5c28wf1 | opoot_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1obdyve | false | null | t3_1obdyve | /r/LocalLLaMA/comments/1obdyve/dual_gpu_setup_one_gpu_functions_normally_the/ | false | false | default | 6 | null |
Why is Perplexity so fast? | 0 | I want to know how Perplexity is so fast. When I use its quick mode, it starts generating an answer in 1 or 2 seconds. | 2025-10-20T07:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/1obd631/why_is_perplexity_so_fast/ | TopFuture2709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obd631 | false | null | t3_1obd631 | /r/LocalLLaMA/comments/1obd631/why_is_perplexity_so_fast/ | false | false | self | 0 | null |
Can ByteDance-Seed/UI-TARS-1.5-7B be loaded in a single 3090 in vLLM? | 5 | Or am I just banging my head against a wall? | 2025-10-20T06:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1obcskt/can_bytedanceseeduitars157b_be_loaded_in_a_single/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obcskt | false | null | t3_1obcskt | /r/LocalLLaMA/comments/1obcskt/can_bytedanceseeduitars157b_be_loaded_in_a_single/ | false | false | self | 5 | null |
One 5090 or five 5060 Ti? | 8 | They price out to about the same: roughly $380 for one 5060 Ti or $2k for a 5090. On paper five 5060s (dropping the Ti here for laziness) should be better, with 80 GB VRAM and 2240 GB/s total bandwidth, but we all know things don't scale that cleanly. Assume I can connect and power them - I have a Threadripper board I could use, or it'd be easy enough to get 5x PCIe 5 x4 off an AM5 in a pseudo-mining-rig configuration. My use case would be coding assistance mostly, as well as just generally screwing around. These both seem like common enough cards that I'm hoping someone has done Literally This before and can just share results, but I also welcome informed speculation. Thanks! | 2025-10-20T06:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1obcphd/one_5090_or_five_5060_ti/ | emrlddrgn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obcphd | false | null | t3_1obcphd | /r/LocalLLaMA/comments/1obcphd/one_5090_or_five_5060_ti/ | false | false | self | 8 | null |
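One way to put numbers on the "does not scale cleanly" caveat: with the usual layer-split setup, the cards take turns during token generation, so single-stream decode speed is bounded by one card's memory bandwidth rather than the sum. A rough comparison in Python; the bandwidth and VRAM figures are approximate spec-sheet values, so double-check them.

```python
# Approximate specs: (GB/s memory bandwidth per card, GB VRAM per card, count)
configs = {
    "1x 5090":    (1792, 32, 1),
    "5x 5060 Ti": (448, 16, 5),
}
for name, (bw, vram, count) in configs.items():
    # Layer splitting runs one device at a time, so per-token decode speed
    # is limited by a single card's bandwidth, not bandwidth * count.
    print(f"{name}: {vram * count} GB VRAM total, ~{bw} GB/s decode ceiling")
```

So the 5060 Ti rig wins on capacity (80 vs 32 GB), but each token moves roughly 4x slower in the naive setup; tensor parallelism (e.g. vLLM across 4 of the cards) can claw some of that back at the cost of PCIe traffic.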
DeepSeek releases DeepSeek OCR | 477 | [https://huggingface.co/deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
https://preview.redd.it/t4ji6agdn7wf1.png?width=2646&format=png&auto=webp&s=f76bdf09e595fa18f0d701b98f9b0f3ed01ee5db
| 2025-10-20T06:26:26 | https://www.reddit.com/r/LocalLLaMA/comments/1obcm9r/deepseek_releases_deepseek_ocr/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obcm9r | false | null | t3_1obcm9r | /r/LocalLLaMA/comments/1obcm9r/deepseek_releases_deepseek_ocr/ | false | false | 477 | {'enabled': False, 'images': [{'id': 'ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=108&crop=smart&auto=webp&s=f5914164124ed5d207c21a93e57848c65e8782f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=216&crop=smart&auto=webp&s=eb1882341be1620e1bb4ca70579e80694476f486', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=320&crop=smart&auto=webp&s=a574b238d5198f4230f17385a63cee15a97e4866', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=640&crop=smart&auto=webp&s=54c207b8079de2f72cbaafba0d28b87918c60e33', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=960&crop=smart&auto=webp&s=9557891405d08a95936d7547b252f3ee42605279', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?width=1080&crop=smart&auto=webp&s=a214e4ee4a18a5550203f753f4802d59d967559c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ddlXXAanndfx0k3ivMcCdrEJtDQlMZs1JyMP8q81Yms.png?auto=webp&s=e9c1ed72a1f05e703c83e13042667d7a7fad88f6', 'width': 1200}, 'variants': {}}]} | |
deepseek-ai/DeepSeek-OCR · Hugging Face | 2 | 2025-10-20T06:25:31 | https://huggingface.co/deepseek-ai/DeepSeek-OCR | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1obclqt | false | null | t3_1obclqt | /r/LocalLLaMA/comments/1obclqt/deepseekaideepseekocr_hugging_face/ | false | false | default | 2 | null | |
I built a plug-n-play AI PC running Ollama and OpenWebUI | 1 | 2025-10-20T06:14:46 | https://www.reddit.com/gallery/1obcfqn | boxgpt | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1obcfqn | false | null | t3_1obcfqn | /r/LocalLLaMA/comments/1obcfqn/i_built_a_plugnplay_ai_pc_running_ollama_and/ | false | false | 1 | null | ||
Help me pick a machine for running Jarvis like personal assistant | 2 | Hey,
I am starting a project to create a fully local personal assistant running my home (and me, really). I have a MacBook Air M3 with 16GB memory now, and it's certainly not enough. I have a $4,000 budget.
This is inference only, any training I will need to do, I will likely utilize cloud resources. But for inference I refuse to call any external APIs.
ChatGPT 5 Thinking, given two options (Mac Studio M4 Max 128GB vs a PC with 128GB RAM and an RTX 3090) strongly prefers the PC. I find its reasoning shallow, though - but apparently that's the opinion of the internet at large.

My own opinion is the complete opposite. I think this project will involve multiple local SLMs (7B is likely the sweet spot, but 14B is an option) requiring large amounts of memory, and even though the PC has 152GB of memory vs the Mac's 128GB, I am not sure I want to deal with paging constantly crossing PCIe.
Any help would be appreciated, I feel I should go with Mac Studio - but maybe I am missing something obvious?
Example features (from my ChatGPT prompt :)):

- he will be able to watch the feed through a few cameras at my home
- he will use both TTS and STT models, and have personality in his voice; the house will be mic'd and there will be speakers everywhere
- he will have access to my calendar, browsing history, heart rate, etc.
- he will use RAG a lot to deal with memory and context-length issues
- he will not be one model, but multiple ones running as a mixture of experts
- he will run almost 24/7 with few breaks | 2025-10-20T06:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1obcd8o/help_me_pick_a_machine_for_running_jarvis_like/ | Agreeable-Chef4882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obcd8o | false | null | t3_1obcd8o | /r/LocalLLaMA/comments/1obcd8o/help_me_pick_a_machine_for_running_jarvis_like/ | false | false | self | 2 | null |
Which LLM should I use for my local business | 0 | I work as an electronics engineer at a small company. Because I'm a veteran of the company, they constantly call me to ask about paperwork (purchase orders, annual leave requests, changing computer passwords, etc.), even though the documentation clearly states how to do these tasks; no one reads it. I want to build an AI assistant, trained on approximately 100 files in .txt format, for the company's employees to use. I started by trying Gemma 3, but it takes a minute to respond. What would be your suggestion for such a problem? | 2025-10-20T05:17:31 | Civil-Development-56 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1obbibx | false | null | t3_1obbibx | /r/LocalLLaMA/comments/1obbibx/which_llm_should_i_use_for_my_local_bussiness/ | false | false | default | 0 | null |
That one time when you connect the monitor to integrated graphics and run AI | 0 | 22.5 tokens/s on OpenAI's 20B model at a Q4 quant with a 4K context window | 2025-10-20T05:13:58 | OldEffective9726 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1obbg68 | false | null | t3_1obbg68 | /r/LocalLLaMA/comments/1obbg68/that_one_time_when_you_connect_the_monitor_to/ | false | false | default | 0 | null |
What are your /r/LocalLLaMA "hot-takes"? | 88 | Or something that goes against the general opinions of the community? Vibes are the only benchmark that counts after all.
I tend to agree with the flow on most things, *but* here are some thoughts of mine that go against the grain:
- QwQ was think-slop and was never *that* good
- Qwen3-32B is still SOTA for 32GB and under. I cannot get anything to reliably beat it despite shiny benchmarks
- Deepseek is still open-weight SotA. I've really tried Kimi, GLM, and Qwen3's larger variants but asking Deepseek still feels like asking the adult in the room. Caveat is GLM codes better
- (proprietary bonus): Grok4 handles news data better than Chatgpt5 or Gemini2.5 and will always win if you ask it about something that happened *that day*. | 2025-10-20T04:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1obb4c4/what_are_your_rlocalllama_hottakes/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obb4c4 | false | null | t3_1obb4c4 | /r/LocalLLaMA/comments/1obb4c4/what_are_your_rlocalllama_hottakes/ | false | false | self | 88 | null |
Options for truly local TTS software or models? | 0 | I'm looking for TTS that isn't an API and runs completely on local hardware. There are always the default Microsoft voices, but is there anything a bit more advanced? Something that can work in LM Studio, or is even a standalone application?

I'm just concerned with training data/telemetry being sent to an API. | 2025-10-20T04:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/1obb0rp/options_for_truly_local_tts_software_or_models/ | AI_Renaissance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obb0rp | false | null | t3_1obb0rp | /r/LocalLLaMA/comments/1obb0rp/options_for_truly_local_tts_software_or_models/ | false | false | self | 0 | null |
Need easy-to-install local TTS recommendations. | 1 | So I tried to download Orpheus and use it with LM Studio, but I can't figure out how to launch a server.
I just get an empty Java page and code in the dev console that says:
[ERROR] Unexpected endpoint or method. (GET /). Returning 200 anyway.
I was wondering if there are any specific standalone local frontends or apps for TTS that are easier to set up? Anything else out there that's local besides Orpheus? | 2025-10-20T04:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1obaufq/need_easy_to_install_local_tts_recommendations/ | AI_Renaissance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obaufq | false | null | t3_1obaufq | /r/LocalLLaMA/comments/1obaufq/need_easy_to_install_local_tts_recommendations/ | false | false | self | 1 | null |
🔥 BFL killed finetuning — no migration, no explanation. What’s going on? | 0 | So… BFL just quietly announced that all finetuning APIs will be deprecated by October 31, 2025, including `/v1/finetune`, `flux-pro-finetuned`, and every `*-finetuned` model.
The release note (https://docs.bfl.ai/release-notes) literally says:
>“No migration path available. Finetuning functionality will be discontinued.”
And that’s it. No explanation, no replacement plan, nothing. 🤷♂️
I checked everywhere — no blog post, no Discord statement, no social media mention. It’s like they just pulled the plug.
Is anyone else surprised by this?
* Are they planning a new lightweight tuning method (like LoRA or adapters)?
* Is this a cost/safety decision?
* Or are they just consolidating everything into a single “smart prompt” system?
Feels like a major shift, especially since a lot of devs relied on BFL’s finetuning for production workflows.
Anyone here have inside info or thoughts on what’s really happening?
[BFL Release Notes](https://preview.redd.it/76dx9o2dz6wf1.png?width=555&format=png&auto=webp&s=34300bbc1e16acf7a2a2a304f087dc7d9c30a1f8)
| 2025-10-20T04:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/1obabrv/bfl_killed_finetuning_no_migration_no_explanation/ | ReviewThis6614 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1obabrv | false | null | t3_1obabrv | /r/LocalLLaMA/comments/1obabrv/bfl_killed_finetuning_no_migration_no_explanation/ | false | false | 0 | null |
What happens when Chinese companies stop providing open source models? | 383 | What happens when Chinese companies stop providing open source models? A good example is Alibaba's Wan. It was open source until the latest version, Wan 2.5, which is closed source and costs money. What happens when they start doing this across the board? | 2025-10-20T03:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ob9vvk/what_happens_when_chinese_companies_stop/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob9vvk | false | null | t3_1ob9vvk | /r/LocalLLaMA/comments/1ob9vvk/what_happens_when_chinese_companies_stop/ | false | false | self | 383 | null |
Good blogs or write ups on maximizing AI while not completely vibe coding | 9 | I just got into the world of Claude code and open code after using copilot for a year. It’s so much better, and I’m really feeling the powers of boosting my workflow to a much higher level. At the same time, sometimes I get too carried away and spend lots of time cleaning up AI slop.
Recently, I started using detailed context files, using git branches/commits with the AI, setting up plans before executing, ~~actually reading the code instead of pressing accept~~, and I find it has a great positive effect.
Is there any blogs or write ups that you guys recommend for setting up such a dev environment? at this point, it seems to be as important as setting up linting whenever you code | 2025-10-20T03:34:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ob9k3p/good_blogs_or_write_ups_on_maximizing_ai_while/ | atom9408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob9k3p | false | null | t3_1ob9k3p | /r/LocalLLaMA/comments/1ob9k3p/good_blogs_or_write_ups_on_maximizing_ai_while/ | false | false | self | 9 | null |
How I Built Lightning-Fast Vector Search for Legal Documents | 27 | 2025-10-20T03:22:16 | https://medium.com/@adlumal/how-i-built-lightning-fast-vector-search-for-legal-documents-fbc3eaad55ea | Neon0asis | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1ob9bli | false | null | t3_1ob9bli | /r/LocalLLaMA/comments/1ob9bli/how_i_built_lightningfast_vector_search_for_legal/ | false | false | default | 27 | null |
Identify This Nvidia Jetson Board? | 0 | 2025-10-20T03:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ob95ja/identify_this_nvidia_jetson_board/ | Both-Activity6432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob95ja | false | null | t3_1ob95ja | /r/LocalLLaMA/comments/1ob95ja/identify_this_nvidia_jetson_board/ | false | false | 0 | null | ||
How do you guys generate/prepare your coding datasets? | 0 | Honestly, I'm questioning if I even need to include coding data for my fine-tuning, but I figured I'd ask just in case!
I've used the Claude API and Codex before. Now, I'm considering using **Qwen3-Coder-30B** for simpler tasks.
What level of complexity/quality should I ask for? (Although, I doubt my own skills are good enough to properly review the output, lol.)
Oh! And here's an update on my progress:
https://preview.redd.it/u3a9zne4h6wf1.png?width=1081&format=png&auto=webp&s=252c273a4d4d04edaf1d51cc2662c053a513ecbc
The persona is still unstable, haha. It takes some prompting/persuasion to get it to act the part. | 2025-10-20T02:29:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ob8a40/how_do_you_guys_generateprepare_your_coding/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob8a40 | false | null | t3_1ob8a40 | /r/LocalLLaMA/comments/1ob8a40/how_do_you_guys_generateprepare_your_coding/ | false | false | 0 | null | |
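A common recipe for exactly this: let the local coder model generate candidates, then gate them mechanically before anything enters the dataset, which removes some of the pressure to review everything by hand. A minimal sketch, assuming Qwen3-Coder-30B is served behind an OpenAI-compatible endpoint on port 8080; the prompt and the filter are illustrative only.

```python
import json, urllib.request

def generate(prompt: str) -> str:
    payload = {"messages": [{"role": "user", "content": prompt}],
               "temperature": 0.7}
    req = urllib.request.Request("http://localhost:8080/v1/chat/completions",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["choices"][0]["message"]["content"]

sample = generate("Write a Python function that merges two sorted lists. "
                  "Include a docstring. Return only code.")

# Strip a markdown fence if the model added one anyway.
if "```" in sample:
    sample = sample.split("```")[1].removeprefix("python").strip()

# Cheap quality gate: must at least be syntactically valid Python.
try:
    compile(sample, "<sample>", "exec")
    keep = "def " in sample and '"""' in sample
except SyntaxError:
    keep = False
print("kept" if keep else "rejected")
```

For complexity, a workable rule of thumb is to stay at tasks you could verify with a unit test: if you can write (or generate) a test that the sample must pass, your own review skill matters much less.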
Nvidia's OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM | 31 | 2025-10-20T02:02:11 | https://huggingface.co/nvidia/omnivinci | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ob7q6m | false | null | t3_1ob7q6m | /r/LocalLLaMA/comments/1ob7q6m/nvidias_omnivinci_enhancing_architecture_and_data/ | false | false | default | 31 | null |
How can I run a VL model on a smartphone? | 0 | I know there are several apps that can run VL models, and I know I can compile llama.cpp on my phone and run models, but is there a good interface to perform inference on these models besides the Google AI Gallery? | 2025-10-20T01:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ob7mcy/how_can_i_run_a_vl_model_on_a_smartphone/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob7mcy | false | null | t3_1ob7mcy | /r/LocalLLaMA/comments/1ob7mcy/how_can_i_run_a_vl_model_on_a_smartphone/ | false | false | self | 0 | null |
Mac Studio vs. DGX Spark for Local LLMs — Early Token/sec Comparison (Looking for Better Data) | 1 | [removed] | 2025-10-20T01:30:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ob73ax/mac_studio_vs_dgx_spark_for_local_llms_early/ | MikeBeezzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob73ax | false | null | t3_1ob73ax | /r/LocalLLaMA/comments/1ob73ax/mac_studio_vs_dgx_spark_for_local_llms_early/ | false | false | self | 1 | null |
GIGABYTE AI TOP ATOM Introduces NVIDIA Grace Blackwell GB10 Performance for the Desktop | 43 | 2025-10-20T01:23:37 | https://linuxgizmos.com/gigabyte-ai-top-atom-introduces-nvidia-grace-blackwell-gb10-performance-for-the-desktop/ | DeliciousBelt9520 | linuxgizmos.com | 1970-01-01T00:00:00 | 0 | {} | 1ob6ydq | false | null | t3_1ob6ydq | /r/LocalLLaMA/comments/1ob6ydq/gigabyte_ai_top_atom_introduces_nvidia_grace/ | false | false | default | 43 | null |
Testing pre-release (before nerfed) Gemini 3.0 Pro (OrionMist): entire 1-min cartoon, zero-shot, only SVG animation + code | 5 | 2025-10-20T00:39:14 | https://v.redd.it/5pgfmriex5wf1 | balianone | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ob61vf | false | null | t3_1ob61vf | /r/LocalLLaMA/comments/1ob61vf/testing_prereleasebefore_nerfed_gemini_30_pro/ | false | false | 5 | null |
CMP 50HX vs P102-100 test results. | 11 | Well, I finally put together the second LLM server, as I mentioned earlier in another post. Here are the results of a pair of P102-100s vs a pair of CMP 50HXs. The results are quite the contrast and interesting. In order to simplify the test I used Docker, llama-swap, and the same configs across all runs: 16K context, Q8 KV cache, and Unsloth IQ4_NL quants, except for GPT-OSS-20B where I used Q5_K_M, with the same prompt across all tests.
|GPU-MODEL|PP|TG|
|:-|:-|:-|
|P102-Qwen3-0.6B-GGUF|5165.73|143.02|
|50HX-Qwen3-0.6B-GGUF|3226.96|195.86|
|P102-Qwen3-1.7B-GGUF|2790.78|110.94 |
|50HX-Qwen3-1.7B-GGUF|1519.72|137.73|
|P102-Qwen3-4B-GGUF|1123.46|63.24|
|50HX-Qwen3-4B-GGUF|604.38|74.73|
|P102-Qwen3-8B-GGUF|704.40 |45.17|
|50HX-Qwen3-8B-GGUF|367.09|51.05|
|P102-Qwen3-14B-GGUF|319.38|27.34|
|50HX-Qwen3-14B-GGUF|203.78|32.69|
|P102-Qwen3-32B-GGUF|161.50|13.26|
|50HX-Qwen3-32B-GGUF|87.79|15.76|
|P102-GLM-4-32B-0414-GGUF|174.58|14.25|
|50HX-GLM-4-32B-0414-GGUF|89.46 |16.86|
|P102-gpt-oss-20b-GGUF|929.58|58.42|
|50HX-gpt-oss-20b-GGUF|376.16|72.10|
|P102-Qwen3-30B-A3B-GGUF|803.81|54.90|
|50HX-Qwen3-30B-A3B-GGUF|291.01|70.52|
As you can see, a pattern emerges: Turing is better at TG and Pascal is better at PP. The key reasons for that are...

1- Turing has lower double-precision throughput than Volta, with only 2 FP64 cores per SM.

2- Turing FMA math operations take four clock cycles, like Volta, compared to six cycles on Pascal.

3- The maximum number of concurrent warps per SM is 32 on Turing versus 64 on Pascal.
However, what is impressive is the 72 tk/s on the 50HX on GPT-OSS, 70 on Qwen3-30B-A3B, and basically 16 tk/s on Qwen3-32B. Those are not slow numbers for a 150-dollar investment. There are cards that cost a whole lot more and give you less performance when it comes to LLMs. I would certainly not use these cards for image or video gen, but I am curious about these 50HXs working on exllamav2 or v3, since they are compute capability 7.5, which is supposedly supported, and I might get tensor parallel working on these. I guess that is the next challenge.
In conclusion, because of the drastic loss of PP on the 50hx, even though it does TG faster than the P102-100 the PP rate drop is too high for my taste so I might drop these 50HX and get something a little better if the price is right. For now, I will keep rocking the dual P102-100 which has served me so well. I do have wishful thinking on a pair of Mi50 32GB versions. Someday I will see some on ebay for a 100 bucks each, and I will pull the trigger. | 2025-10-20T00:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ob61fg/cmp_50hx_vs_p102100_test_results/ | Boricua-vet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob61fg | false | null | t3_1ob61fg | /r/LocalLLaMA/comments/1ob61fg/cmp_50hx_vs_p102100_test_results/ | false | false | self | 11 | null |
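To put that conclusion in numbers, here is a quick pass over a few rows of the table above, computing how much prompt processing the 50HX gives up against the token generation it gains:

```python
# (model, P102 pp, 50HX pp, P102 tg, 50HX tg) -- values from the table above
rows = [
    ("Qwen3-32B",     161.50,  87.79, 13.26, 15.76),
    ("gpt-oss-20b",   929.58, 376.16, 58.42, 72.10),
    ("Qwen3-30B-A3B", 803.81, 291.01, 54.90, 70.52),
]
for name, pp_a, pp_b, tg_a, tg_b in rows:
    pp_drop = (1 - pp_b / pp_a) * 100
    tg_gain = (tg_b / tg_a - 1) * 100
    print(f"{name}: PP -{pp_drop:.0f}%, TG +{tg_gain:.0f}%")
# Qwen3-30B-A3B comes out around PP -64%, TG +28%: long prompts are hurt
# far more than short-answer generation is helped.
```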
Is it normal to reach 180-210 tk/s with a 30B local LLM? | 0 | I'm getting very fast responses on my new RTX 5090 using an LLM.
[LM Studio output stats](https://preview.redd.it/pplyef5mn5wf1.png?width=458&format=png&auto=webp&s=93dd49a962fb4b647c40f0f5d4a2a210d02fa5c9)
When I look at other people on the internet and YouTube guides using a 5090, they seem to get 110-130 on the same AI model and the same single GPU. Are there other big factors besides the 5090? I'm pretty new to local LLMs.
I'm using LM Studio, **Qwen3-30B-A3B-Thinking** with Q6_K GGUF.

LM Studio Settings:

* Context length: 32768
* GPU Offload: 48/48
* CPU Thread Pool Size: 12
* Offload KV Cache to GPU Memory: true
* Flash Attention: true
* K Cache Quantization Type: Enabled - Q8_0
* V Cache Quantization Type: Enabled - Q8_0 | 2025-10-19T23:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ob4xi6/is_it_normal_to_reach_180210_tks_with_30b_local/ | Ambitious-Tie7231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob4xi6 | false | null | t3_1ob4xi6 | /r/LocalLLaMA/comments/1ob4xi6/is_it_normal_to_reach_180210_tks_with_30b_local/ | false | false | 0 | null |
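Those numbers are plausible for this particular model: Qwen3-30B-A3B is a MoE with only about 3B active parameters per token, so a 5090's memory bandwidth can push it far faster than a dense 30B, and Q6_K weights plus Q8_0 KV cache help further. Many of the 110-130 tk/s reports are for dense models or larger quants. To measure independently of LM Studio's counter, here is a sketch against its OpenAI-compatible server (default port 1234; the model id is a placeholder):

```python
import json, time, urllib.request

payload = {
    "model": "qwen3-30b-a3b-thinking",  # placeholder: use the loaded model's id
    "messages": [{"role": "user", "content": "Write 300 words about GPUs."}],
    "max_tokens": 512,
}
req = urllib.request.Request("http://localhost:1234/v1/chat/completions",
                             data=json.dumps(payload).encode(),
                             headers={"Content-Type": "application/json"})
t0 = time.time()
out = json.loads(urllib.request.urlopen(req).read())
elapsed = time.time() - t0
tokens = out["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s = {tokens / elapsed:.1f} tok/s")
# Includes prompt processing time, so this slightly understates pure
# generation speed compared to LM Studio's stats panel.
```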
Gemini 3 on Design Arena? | 0 | "Nebula-Fast" just came up in one of my tournaments and it was a beast on front end -- any chance it's a Gemini 3 endpoint? | 2025-10-19T23:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ob4gcp/gemini_3_on_design_arena/ | Significant-Fan241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob4gcp | false | null | t3_1ob4gcp | /r/LocalLLaMA/comments/1ob4gcp/gemini_3_on_design_arena/ | false | false | self | 0 | null |
Free Gemini Ultra “Deep Think” | 3 | https://gemini.google.com/gem/1hHt4QD_EbuTUdpdo8JOBaUqdL1AkPztz?usp=sharing
Enjoy while it lasts!
| 2025-10-19T22:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ob2mh9/free_gemini_ultra_deep_think/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob2mh9 | false | null | t3_1ob2mh9 | /r/LocalLLaMA/comments/1ob2mh9/free_gemini_ultra_deep_think/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=108&crop=smart&auto=webp&s=9be47c95f132bd41c4c50c5badf17ece622f0d86', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=216&crop=smart&auto=webp&s=ca384bbc60f4d578096165c4ed840543b9c0c8eb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=320&crop=smart&auto=webp&s=9a4c9530632d18963f31306a36444651356618e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=640&crop=smart&auto=webp&s=8878815ad4fcdaad8efb90ea4f5f2c3df6fbfaa7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=960&crop=smart&auto=webp&s=5bb3910c23d5f3e5de10591931fa0c8d04c0a3eb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=1080&crop=smart&auto=webp&s=7197067c75b7792ab1052ccc89a81036bf63dbf4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?auto=webp&s=b89a64e050ba0c3b5fd195ef0a9ef1297cb72251', 'width': 1920}, 'variants': {}}]} |
What are professional AI devs (those who make wrappers for chat/agents, not model trainers) using for a dev machine? | 3 | My requirements are zero cloud costs. It seems like, bang for the buck, a MacBook Pro is superior, even though Docker might be a pain in the ass. The code is obviously the same since it runs in a Docker container, so aside from maybe a learning curve on the setup, I would fare better with a Mac than trying to cobble together multiple external GPUs to load a 70B model locally.
End-state production would be cloud, but there's no budget for dev cloud usage (not my call).
Thoughts? I don't think I can match a Mac with an Nvidia laptop for under 6k | 2025-10-19T21:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ob2cgk/what_are_professional_ai_devs_those_who_make/ | No-Issue-9136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob2cgk | false | null | t3_1ob2cgk | /r/LocalLLaMA/comments/1ob2cgk/what_are_professional_ai_devs_those_who_make/ | false | false | self | 3 | null |
LLM for building GUI | 5 | Are there any models out there that would be suitable to help build a GUI for an app? | 2025-10-19T21:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ob1xkz/llm_for_building_gui/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob1xkz | false | null | t3_1ob1xkz | /r/LocalLLaMA/comments/1ob1xkz/llm_for_building_gui/ | false | false | self | 5 | null |
I made a mod of Qwen Code for working with local models in LM Studio | 17 | I made [LowCal Code](https://github.com/dkowitz/LowCal-Code) specifically to work with my locally hosted models in LM Studio, and also with the option to use online models through OpenRouter - that's it, those are the only two options with /auth, LM Studio or OpenRouter.
When you use /model
* With LM Studio, it shows you available models to choose from, along with their configured and maximum context sizes (you have to manually configure a model in LM Studio once and set its context size before it's available in LowCal).
* With OpenRouter, it shows available models (hundreds), along with context size and price, and you can filter them. You need an api key.
Other local model enhancements:
* `/promptmode set <full/concise/auto>`
* full: full, long system prompt with verbose instructions and lots of examples
* concise: short, abbreviated prompt for conserving context space and decreasing latency, particularly for local models. Dynamically constructed to only include instructions/examples for tools from the currently activated /toolset.
* auto: automatically uses concise prompt when using LM Studio endpoint and full prompt when using OpenRouter endpoint
* `/toolset (list, show, activate/use, create, add, remove)` \- use custom tool collections to exclude tools from being used, saving context space and decreasing latency, particularly with local models. Using the shell tool is often more efficient than using file tools.
* list: list available preset tool collections
* show: shows which tools are in a collection
* activate/use: Use a selected tool collection
* create: Create a new tool collection`/toolset create <name> [tool1, tool2, ...]` (Use tool names from /tools)
* add/remove: add/remove tool to/from a tool collection `/toolset add[remove] <name> tool`
* `/promptinfo` \- Show the current system prompt in a /view window (↑↓ to scroll, 'q' to quit viewer).
It's made to run efficiently and autonomously with local models; gpt-oss-120b, gpt-oss-20b, Qwen3-Coder-30B, GLM-4.5-Air, and others work really well! Honestly, I don't see a huge difference in effectiveness between the concise prompt and the huge full system prompt, and often using just the shell tool, alone or in combination with WebSearch or Edit, can be much faster and more effective than many of the other tools.
I developed it to use on my 128gb Strix Halo system on Ubuntu, so I'm not sure it won't be buggy on other platforms (especially Windows).
Let me know what you think! [https://github.com/dkowitz/LowCal-Code](https://github.com/dkowitz/LowCal-Code)
https://preview.redd.it/sip3pvr0v4wf1.png?width=1691&format=png&auto=webp&s=e9eb322340ffed7a42020ed91ea4f3520b2125ac
| 2025-10-19T21:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ob17go/i_made_a_mod_of_qwen_code_for_working_with_local/ | feverdream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob17go | false | null | t3_1ob17go | /r/LocalLLaMA/comments/1ob17go/i_made_a_mod_of_qwen_code_for_working_with_local/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=108&crop=smart&auto=webp&s=525044a6cf370491f5faaf8204b119e28b85e4c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=216&crop=smart&auto=webp&s=3754cdb15149773d82daf7728537904279cf4882', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=320&crop=smart&auto=webp&s=6dd609ee16f103caa9498a0ef3bacbaae743636b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=640&crop=smart&auto=webp&s=83641b9a64a38f5187a89aa2294d825e59a7f28c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=960&crop=smart&auto=webp&s=d9f85c777a48d0d9d3a915d6bb69ad6fba235bc7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?width=1080&crop=smart&auto=webp&s=15e6aa8be26036a19f9f67aeffa51af4f389eb2c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YRVmeTTENyTnVFkxiv6fyP_vZo98Ij9IpjhsuYhArJo.png?auto=webp&s=356a179655d04a74b39036f543af1c0e274d1638', 'width': 1200}, 'variants': {}}]} | |
Benchmarking different CLI parameters for VRAM vs. tk/s | 2 | I'm getting more speed than I need out of my GPU and wanted to explore tradeoffs of token generation speed vs. VRAM in llama.cpp, since I'm sharing the GPU with other tools. What I'm seeing is that n-gpu-layers and n-cpu-moe can do that, but the decrease in VRAM is comparatively modest versus the decrease in speed, with n-gpu-layers having a much stronger effect than n-cpu-moe. In particular, n-gpu-layers (and n-cpu-moe to a lesser extent) drops performance by a whole ton the moment you set it away from the default, while VRAM remains almost entirely the same. no-kv-offload, on the other hand, drops VRAM usage by a fair amount while not impacting speed too heavily (20 GB -> 17 GB; 614 tk/s -> 550 tk/s), so I might consider using it in the future.
The results are probably YMMV and dependent on the specific setup being used (my system is a 5090 + DDR5-6400 RAM running llama.cpp version 6697 and the unsloth 4\_K\_M quant of Qwen 3 Coder 30B). I also didn't use too many Monte Carlo runs since I just wanted something quick and dirty, so there's probably some variation in the results. I uploaded the python script I used to automate the testing here (https://pastebin.com/q6hTfMkq) along with raw results from my system in case it's of interest to anyone else. It's not the most efficient thing in the world (wastes time tearing down/starting up llama.cpp even when it's not necessary) but does the job for my needs. The code has some other arguments I played around with, but from what I saw, they didn't seem to decrease VRAM by any significant amount and some even increased it. Disclaimer, I used AI to generate a simple base for this script since I didn't want to waste time going through documentation trying to figure out how to properly query/manage the server, but modified the resulting script manually for my needs.
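For readers who don't want to open the pastebin, the core of such a script is roughly this shape (a simplified sketch of my own, not the exact code; the flags are real llama.cpp options, but the model path and wait time are placeholders):

```python
import subprocess, time, requests

def bench(extra_flags, prompt="Write a haiku about VRAM.", n_predict=128):
    """Start llama-server with extra flags, time one generation, tear down."""
    proc = subprocess.Popen(["llama-server", "-m", "model.gguf", "--port", "8080",
                             *extra_flags])
    try:
        time.sleep(20)  # crude wait for model load; poll /health in real use
        t0 = time.time()
        requests.post("http://127.0.0.1:8080/completion",
                      json={"prompt": prompt, "n_predict": n_predict}, timeout=600)
        return n_predict / (time.time() - t0)  # rough tok/s incl. prompt processing
    finally:
        proc.terminate()
        proc.wait()

for flags in ([], ["--no-kv-offload"], ["--n-cpu-moe", "8"], ["--n-gpu-layers", "30"]):
    print(flags or ["default"], f"{bench(flags):.1f} tok/s")
```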
tl;dr no-kv-offload seems like an interesting option for a modest reduction in VRAM while not hurting performance too much. n-cpu-moe and n-gpu-layers can also reduce VRAM by a lot, but they cost quite a bit of speed.
Curious to know what other people think about the results or if there are any other parameters that might be interesting to look at. | 2025-10-19T20:45:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ob0n3g/benchmarking_different_cli_parameters_for_vram_vs/ | jumpingcross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ob0n3g | false | null | t3_1ob0n3g | /r/LocalLLaMA/comments/1ob0n3g/benchmarking_different_cli_parameters_for_vram_vs/ | false | false | self | 2 | null |
Got new 5070ti gpu, have access to 16gb vram. What things can I do with it for AI? | 7 | Had a 2050 earlier with 4GB. Curious what new superpowers I get with this new VRAM.
So far
1. ran gpt-oss-20b in LM Studio; with up to a 30k context window it gives around 40 tok/sec output.
2. ran Gemma 3 27B; runs around 17 tok/sec
3. ran Qwen3 Coder 30B -- runs around 30 tok/sec
Apart from running models locally, I want to do things which earlier I didn't think of.
Planned:
1. Image generation with Flux and AUTOMATIC1111
2. want to try OpenAI Whisper (quick-start sketch below)
3. want to build AI agents which run 24\*7
last but not least, complete Spider-Man 2 on this :)
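For the Whisper item above, the minimal usage is pleasantly short; a sketch using the openai-whisper package (the audio filename is a placeholder):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")           # "small"/"medium" also fit easily in 16GB
result = model.transcribe("interview.mp3")   # placeholder audio file
print(result["text"])
```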
Please help me with ideas and experiments; I want to utilize this precious thing as much as possible and upskill myself in the AI world. | 2025-10-19T19:32:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oayrh9/got_new_5070ti_gpu_have_access_to_16gb_vram_what/ | AdOver7835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oayrh9 | false | null | t3_1oayrh9 | /r/LocalLLaMA/comments/1oayrh9/got_new_5070ti_gpu_have_access_to_16gb_vram_what/ | false | false | self | 7 | null |
Free API Key for GLM 4.6 | 62 | Hi guys, providing a free API for GLM 4.6 for the next 48 hours as part of a load test. Enjoy.
Here are the credentials:
Model Name:
z-ai/glm-4.6
Base URL:
https://api.avian.io/v1
API Key:
avian-8z-5Qb5tLGS6q_A2j6Z2-iZxD78XnKCuvisEQQswZXw | 2025-10-19T19:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1oayl4j/free_api_key_for_glm_46/ | avianio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oayl4j | false | null | t3_1oayl4j | /r/LocalLLaMA/comments/1oayl4j/free_api_key_for_glm_46/ | false | false | self | 62 | null |
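If you haven't pointed a client at a raw base URL before, any OpenAI-compatible SDK works with the credentials above; a minimal sketch:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.avian.io/v1",
                api_key="avian-8z-5Qb5tLGS6q_A2j6Z2-iZxD78XnKCuvisEQQswZXw")

resp = client.chat.completions.create(
    model="z-ai/glm-4.6",
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```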
Turn any dataset into a reasoning dataset easily and cheaply | 16 | Tldr; this model is tiny but meant for recreating grounded reasoning generation without changing your datasets too much (scroll down for link)
I woke up one day and wondered whether it is possible to make an LLM (a tiny one, 0.6B!) turn those old-but-gold chat datasets into reasoning chat datasets. Turns out yes, it is possible, and the results were quite good.
That lets you fine-tune a model on those same older but high-quality datasets, and your model would also learn to reason like those big SOTAs.
I tried multiple LLMs (Gemma 3 1B, Gemma 3 270M and Qwen3 0.6B); Qwen3 0.6B gave me by far the best results and good inference / training speeds.
I tried both the instruct and base variants of this model; yes, the base model performed significantly better and did not seem to overfit. It was fine-tuned for 1 epoch on a mixed dataset, half gpt-oss and half DeepSeek R1, in the special format the model uses and needs (about 200k rows total).
The model replicates how DeepSeek R1 or gpt-oss would think about answering: you provide it the assistant output and user input (exact format on the model page) and it generates plausible grounded reasoning. Keep in mind I decided to almost completely eliminate reasoning about policies (gpt-oss stuff) and censorship-biased reasoning while filtering, so it can think about spicy content, but due to limited data in that field you should check how it performs there. Generally, DeepSeek-R1-styled reasoning works better for NSFW, and obviously, if you make it think about a rejection, it will reject in the reasoning.
You can find it here: https://huggingface.co/Pinkstack/syngen-reasoning-0.6b
Also, I made a very quick example dataset for you to evaluate how well it replicates reasoning: https://huggingface.co/datasets/Pinkstack/syngen-reasoning-example-80-smoltalk1 Usually it does pretty well, but as a rule of thumb, if you give it nonsense it will think poorly; feel free to test that though, could be funny.
Hopefully this is useful to somebody! 🎉 | 2025-10-19T19:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/1oayijj/turn_any_dataset_into_a_reasoning_dataset_easily/ | ApprehensiveTart3158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oayijj | false | null | t3_1oayijj | /r/LocalLLaMA/comments/1oayijj/turn_any_dataset_into_a_reasoning_dataset_easily/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=108&crop=smart&auto=webp&s=8e5da575a56d70b792002a0727e8dcf8611a1130', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=216&crop=smart&auto=webp&s=91f7a12a10e6027b7c60cd989bfe785608c2b95b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=320&crop=smart&auto=webp&s=848a2389c6ba96d27f0d5349e1f6de10c0d3be9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=640&crop=smart&auto=webp&s=818ab1c88ee7c1bf14b8918a24da911e81d0ce05', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=960&crop=smart&auto=webp&s=f1b4e5aecabfb9397a419f067cb20a65c1f271a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?width=1080&crop=smart&auto=webp&s=a4bbfadbb39da6432399155b65d3a87374eaaa13', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TkrqhjnBfCyLWVbLdpfDtQjHREmMmj83QkJhvhD_s7Y.png?auto=webp&s=be45a122d7065ae396f5aa1b350babcb5b7fbc54', 'width': 1200}, 'variants': {}}]} |
Looking for feedback | 0 | Hey guys, recently I have been working on a project that is kind of like a social network. The main idea is for people to learn how to use AI, even just for fun. Everybody can use it easily from their phone. The platform allows users to generate AI images and videos using the best providers out there and make them public for others to learn from. Everyone has their own profile where they can control pretty much everything. Users can follow, like, and comment on each other's content. For example, I'm with friends, I take my phone, take a photo from the app, and edit it with a text or voice prompt. Then I can instantly share it everywhere. I then make the image public so others can see it, and they can use the exact same prompt for their own generation if they want. What do you guys think about such a platform ? | 2025-10-19T19:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oay7q4/looking_for_a_feedback/ | Virtual-Elevator908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oay7q4 | false | null | t3_1oay7q4 | /r/LocalLLaMA/comments/1oay7q4/looking_for_a_feedback/ | false | false | self | 0 | null |
Perplexity + Comet (Command Browser) — 1-Month FREE. Research faster, create images/videos, and I’ll teach you to earn | 1 | [removed] | 2025-10-19T18:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1oaxrjx/perplexity_comet_command_browser_1month_free/ | Candid-Pride-4433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaxrjx | false | null | t3_1oaxrjx | /r/LocalLLaMA/comments/1oaxrjx/perplexity_comet_command_browser_1month_free/ | false | false | self | 1 | null |
Best Ollama model for coding? | 0 | With 16GB of VRAM and 32GB of RAM, and an RTX 4070 SUPER, I need to perform large coding tasks in Python, as well as create BAT files. | 2025-10-19T18:50:43 | https://www.reddit.com/r/LocalLLaMA/comments/1oaxowb/best_ollama_model_for_coding/ | Winter_Proposal_6310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaxowb | false | null | t3_1oaxowb | /r/LocalLLaMA/comments/1oaxowb/best_ollama_model_for_coding/ | false | false | self | 0 | null |
DGX Spark vs 4× RTX 5090 for local LLM inference real numbers | 1 | [removed] | 2025-10-19T18:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1oax7tv/dgx_spark_vs_4_rtx_5090_for_local_llm_inference/ | texasdude11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oax7tv | false | null | t3_1oax7tv | /r/LocalLLaMA/comments/1oax7tv/dgx_spark_vs_4_rtx_5090_for_local_llm_inference/ | false | false | self | 1 | null |
Ollama's qwen3vl's page announced qwen3-vl local models coming soon. In the meantime, I tested qwen3vl-235b-A22B-cloud's visual accuracy for complicated tasks. Here are the results. | 1 | Link: https://ollama.com/library/qwen3-vl#:~:text=Local%20models%20coming%20soon. | 2025-10-19T17:44:06 | https://v.redd.it/qbrt6jyhu3wf1 | swagonflyyyy | /r/LocalLLaMA/comments/1oavyby/ollamas_qwen3vls_page_announced_qwen3vl_local/ | 1970-01-01T00:00:00 | 0 | {} | 1oavyby | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qbrt6jyhu3wf1/DASHPlaylist.mpd?a=1763617453%2CMzE1NjVjOGQxYTQxMjhkMmNiOWVlMTMyMTBiYzZhMmEzYjhjMjhiYTZhZDgxN2UwY2VhYTNkYjM1NDA0ZTBiMQ%3D%3D&v=1&f=sd', 'duration': 241, 'fallback_url': 'https://v.redd.it/qbrt6jyhu3wf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/qbrt6jyhu3wf1/HLSPlaylist.m3u8?a=1763617453%2CYjZlNGI3MWQ2NDYyNTY1ZTRmNmFiNmM3ZjQ1MjhiMzEyMjU3MzRmYjhjOWMxOTdmYjQyNjlkYjM1YmZlN2JkMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qbrt6jyhu3wf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1oavyby | /r/LocalLLaMA/comments/1oavyby/ollamas_qwen3vls_page_announced_qwen3vl_local/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=108&crop=smart&format=pjpg&auto=webp&s=f78f3692292cd6599769ee31f41137a7dd5e50f9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=216&crop=smart&format=pjpg&auto=webp&s=023b6d4097023edbebadcb50ec372e9f36a0d33d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=320&crop=smart&format=pjpg&auto=webp&s=19aae9ef59a8a1fa78dc8741762b2a76635e1e66', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=640&crop=smart&format=pjpg&auto=webp&s=fba346263a9be54ff2a6753a5d56bf06998d4e77', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=960&crop=smart&format=pjpg&auto=webp&s=0af87ba4467f5149a3a6cfb351d69e4f1576367b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18dc313920d6eb3f4b74c281b15cbb4a226ecaee', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eHZuZnpqeWh1M3dmMQuXf5wI2zq5P49C9csYfK7gng126CVz8BH3cE4LI4QP.png?format=pjpg&auto=webp&s=378b6513be1d9b10f8b9333baf624a84ab024084', 'width': 1920}, 'variants': {}}]} | |
Reverse Engineering and Tracing internal thoughts of LLM | 18 | hey folks I did following experiments to understand inner working of LLM
Index of experiments I did in this article
1. Token Prediction Trace
2. Attribution Analysis
3. Layer Emergence (knowledge tracing)
4. Weight Matrix Analyis (How knowledge encoded in weights)
5. Dimension Tokens Analysis (which dimensions store the encoded token for “paris”)
6. Prediction Chain (How does each dimension contribute to final output)
7. Token→Neuron Map (Which neurons encode token)
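As a taste of what this kind of tracing looks like in practice, here is a minimal logit-lens-style sketch of my own (not code from the article), assuming a GPT-2 checkpoint; it projects each layer's hidden state through the final norm and unembedding to watch the prediction emerge layer by layer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output; 1..12 are the transformer blocks
for i, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(f"layer {i:2d}: {tok.decode(logits.argmax(-1))!r}")
```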
[https://medium.com/@harishhacker3010/reverse-engineering-and-tracing-internal-thoughts-of-llm-3017b5f72008](https://medium.com/@harishhacker3010/reverse-engineering-and-tracing-internal-thoughts-of-llm-3017b5f72008) | 2025-10-19T17:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oavy6f/reverse_engineering_and_tracing_internal_thoughts/ | Altruistic-Tea-5612 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oavy6f | false | null | t3_1oavy6f | /r/LocalLLaMA/comments/1oavy6f/reverse_engineering_and_tracing_internal_thoughts/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=108&crop=smart&auto=webp&s=7d4409fd2e4f9b1749683e64c1d1107bbe5a4e0a', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=216&crop=smart&auto=webp&s=031370c52b94015699e5bc5e4b34820247b0f5b8', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=320&crop=smart&auto=webp&s=bb22438f2543c164b74d50ab8f1e579113ec617b', 'width': 320}, {'height': 425, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=640&crop=smart&auto=webp&s=3c82b170d9702dd151d7df2a0dfc2ef9a96e37e6', 'width': 640}, {'height': 638, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=960&crop=smart&auto=webp&s=9874f97ffed38be2e385ef0c5a87d89ba67b34ed', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?width=1080&crop=smart&auto=webp&s=0970e8bf3d12994995bee53a09a34b1c1bbd41bd', 'width': 1080}], 'source': {'height': 798, 'url': 'https://external-preview.redd.it/Q-60XD1inXNG2gaRYer1VwFRYisVLxsSjYWWXkcUQUA.png?auto=webp&s=d3701c7410e29932d47ed57b0ade60bc097a4bf9', 'width': 1200}, 'variants': {}}]} |
I built a 1B CAD generator model | 244 | On a weekend, I decided to build a small language model to generate 3D files for me. No reason except pure curiosity. Here's what I did:
\- Gather a dataset of OpenSCAD code: this turned out to be quite bad because people's code quality is low & inconsistent.
\- Generate synthetic data (prompt -> OpenSCAD): this was the most wasteful part per dollar. I spent $150+ on the Claude API (70% of it on reasoning tokens). I ended up using Gemma3-12B running continuously for 48 hours instead (see the sketch after this list).
\- Finetune Gemma3-270M, 1B & 4B: the 270M lacks fundamental code & object understanding and failed badly. The 1B is a good balance between renderability rate & speed.
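The synthetic-data loop itself can be tiny. A rough sketch of the kind of generation pass I mean, assuming a local Gemma3-12B served behind an OpenAI-compatible endpoint (the endpoint, model tag, and prompts are illustrative, not the exact pipeline):

```python
from openai import OpenAI

# Local server (e.g. llama.cpp or Ollama) exposing an OpenAI-compatible API
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

prompts = ["a 20mm cube with a 5mm hole through the center",
           "a simple mug with a handle"]

for p in prompts:
    resp = client.chat.completions.create(
        model="gemma3:12b",  # illustrative model tag
        messages=[{"role": "user",
                   "content": f"Write OpenSCAD code for: {p}. Reply with code only."}],
    )
    print(f"// prompt: {p}\n{resp.choices[0].message.content}\n")
    # real pipeline: render-check each output, keep only pairs that compile
```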
Overall, I spent 150$ on Claude (totally wasted) & 25$ on GPU. Both given as credits and grants.
I also made a CLI app if you wanna try on Mac, Linux or Raspberry Pi 4/5: [https://github.com/ThomasVuNguyen/MakeMe](https://github.com/ThomasVuNguyen/MakeMe)
Models, dataset & code:
[https://github.com/ThomasVuNguyen/K](https://github.com/ThomasVuNguyen/K)
[https://huggingface.co/collections/ThomasTheMaker/makeme-68f52281c3adf70d1e1dfe5b](https://huggingface.co/collections/ThomasTheMaker/makeme-68f52281c3adf70d1e1dfe5b) | 2025-10-19T17:43:33 | https://v.redd.it/pn0yo3o2v3wf1 | ThomasPhilli | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oavxt8 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/pn0yo3o2v3wf1/DASHPlaylist.mpd?a=1763487827%2CNDM0ZDc3NTMzNTJjZWE1ZDQ1ZmM3YWJiZGM0OTAxZTUxY2ZmZjIxMGE5NDFkYjc1MjQzM2JjNjU1MDQ0MGVlZQ%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/pn0yo3o2v3wf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/pn0yo3o2v3wf1/HLSPlaylist.m3u8?a=1763487827%2CNTJiMGY2NTgxZTI0M2YyOWQzNDdjMTNkMGQ2MzA5NGQ5NGFlOTYwNTRjMjRiMWJmNjRmNTUwZjk4MDM5YTQwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pn0yo3o2v3wf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1oavxt8 | /r/LocalLLaMA/comments/1oavxt8/i_built_a_1b_cad_generator_model/ | false | false | 244 | {'enabled': False, 'images': [{'id': 'ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=108&crop=smart&format=pjpg&auto=webp&s=626cfacd1d2c91bf38d8d2e1d1cceb8fbb636fd4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=216&crop=smart&format=pjpg&auto=webp&s=67ce4d4661aefc563930731161438e8fadf30c7a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=320&crop=smart&format=pjpg&auto=webp&s=c51cf3a4c2bf04df841f4407fefb54e235abaf4c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=640&crop=smart&format=pjpg&auto=webp&s=ab5f051943a854a98676a04345f16287e66f9e92', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=960&crop=smart&format=pjpg&auto=webp&s=e0ae01ff22e0474528aefcfae87aab65c9d6df0d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0188f6045cf2218ade595ac9673d1747255ba657', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZGFhNmE0bzJ2M3dmMdhv6U5XLy0vFYTB3BWLA3H-O3YDxkmUtGbojZ8LN3lz.png?format=pjpg&auto=webp&s=3c884a5172633040060c64b95719a123e48c62a3', 'width': 1280}, 'variants': {}}]} | |
Hot take: Recursive reasoning might be the actual path to AGI, not scaling to 1T parameters | 0 | Been following the recent wave of papers on recursive/iterative reasoning (TRM, HRM, test-time compute scaling), and I think we're witnessing a paradigm shift that most people are sleeping on.
The Core Insight
Human reasoning isn't one-shot inference. It's iterative refinement.
When you solve a hard problem, you don't generate the complete solution in one pass through your brain. You:
- Make an attempt
- Check if it works
- Revise based on feedback
- Repeat until solved
LLMs do the opposite. One forward pass, dump tokens, done. No revision loop. No "thinking harder" on difficult parts.
Why This Changes Everything for Local
The scaling laws we've been following assume intelligence = more parameters. But these recursive models suggest intelligence = better iteration + feedback loops.
What this means practically:
A 7M param model that can iterate 100 times is beating 70B models that run once. The compute is still way lower because 7M × 100 iterations << 70B × 1 pass.
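Back-of-envelope, under the usual ~2 FLOPs per parameter per token approximation:

```python
tiny  = 2 * 7e6  * 100  # 7M params, 100 refinement iterations -> 1.4e9 FLOPs/token
large = 2 * 70e9 * 1    # 70B params, single pass              -> 1.4e11 FLOPs/token
print(f"one-shot 70B costs {large / tiny:.0f}x more compute per token")  # 100x
```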
For local inference, this is the unlock:
- Small models iterate fast
- Can "think longer" on hard problems, speed through easy ones
- Memory footprint stays tiny
- Multiple specialized reasoners can run in parallel
The Architecture Philosophy
Traditional: Cram all knowledge and reasoning into static weights → need billions of parameters
Recursive: Separate the reasoning process from the knowledge base → can be tiny
This mirrors how our brain works - you have long-term memory (knowledge) and working memory (reasoning/planning). They're different systems with different requirements.
Where This Goes
I think we'll see:
- Hybrid architectures: small recursive reasoner + larger knowledge model
- Task-specific reasoning modules (7-30M each) you compose together
- Test-time compute becoming as important as parameter count
- The end of "one model to rule them all" approach
The wildest part? The recursion/iteration loop doesn't need to be neural. You could have:
- Tiny NN for generating candidates
- Classical algorithm for verification
- Another tiny NN for refinement
This is how AlphaGo worked - tiny value network + search. We're rediscovering this pattern.
My Prediction
In 2-3 years, the local AI stack won't be "Llama 4 405B quantized to Q4". It'll be:
- 1-3B general language model
- 5-10 specialized 10-50M reasoning modules
- Orchestration layer to route between them
- Total size: under 5GB, runs on laptop, outperforms today's 70B models
The era of "just scale it up" is ending. The era of "think iteratively" is beginning.
Thoughts? | 2025-10-19T17:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1oavtxb/hot_take_recursive_reasoning_might_be_the_actual/ | 1Hesham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oavtxb | false | null | t3_1oavtxb | /r/LocalLLaMA/comments/1oavtxb/hot_take_recursive_reasoning_might_be_the_actual/ | false | false | self | 0 | null |
Confused about GLM 4.6 running locally. | 9 | Everyone is saying GLM 4.6 is amazing, but I get really long, hallucination-style outputs.
Here I'm just asking "1+1"; look at the output, it's crazy. Am I missing something?
ollama run GLM-4.6-GGUF
\>1+1
\>2? The effect of coupling between photovoltaic panel and biorefinery
The study proposes a novel integrated system combining a photovoltaic/thermal (PV/T) collector and a membrane bioreactor for the co-generation of electricity, thermal energy, and bio‑fuels. The experimental set-up was built in Tunisia and includes a PV/T solar collector, flat ducts placed underneath the PV modules, an air gap between them, and a water tank that serves as both thermal energy storage and a low-temperature bioreactor for microalgae cultivation. The PV/T panel is cooled by circulating water through the ducts, providing thermal energy to be stored in the tank and used later for hot water demands or space heating purposes. The experimental results show that the average electrical efficiency of the PV modules increased from 13.58% (non-cooled) to 15.77% (cooled), while the overall system efficiency reached 45%. The bio‑fuel production was also evaluated, showing a promising potential for biodiesel and biogas generation from microalgae grown in the water tank. The study concludes that the proposed integrated system offers an efficient way to harness solar energy for multiple purposes and could be a sustainable solution for future energy needs.
摘要:本研究提出了一种新型集成系统,将光伏/热(PV/T)集热器与膜生物反应器相 结合,以共同生产电能、热能和生物质燃料。实验装置建于突尼斯,包括一个PV/T太阳 能集热器、位于PV模块下方的扁平管道、它们之间的空气间隙以及一个既作为热能储存 又作为用于微藻培养的低温生物反应器的水箱。通过在管道中循环水来冷却PV/T面板, 提供的热量被储存在水箱中,可用于之后的热水需求或空间供暖目的。实验结果表明, PV模块的平均电气效率从13.58%(未冷却)提高到15.77%(冷却),而整体系统效率达 到45%。生物质燃料的生产也进行了评估,显示了在水箱中生长的微藻用于生物柴油和 沼气生成的良好潜力。该研究得出结论,提出的集成系统提供了一种高效利用太阳能以 实现多种目的的方式,并且可能是未来能源需求的可持续解决方案。
1. 拆解论文
* **标题:** "1+1>2? The effect of coupling between photovoltaic panel and biorefinery" (1+1>2?光伏面板与生物精炼耦合的效果)
* 标题本身就是一个问题,暗示了协同效应。它很吸引人且现代。
* **摘要:** 这是一篇标准科学论文摘要的完美典范。它遵循以下结构:
1. **目标/提议:** "proposes a novel integrated system combining a photovoltaic/thermal (PV/T) collector and a membrane bioreactor for the co-generation of electricity, thermal energy, and bio‑fuels."(提出了一种将 光伏/热集热器与膜生物反应器相结合的新型集成系统,用于共同生产电能、热能和生 物质燃料。)
2. **方法论/装置:** "experimental set-up was built in Tunisia... includes a PV/T solar collector, flat ducts... air gap... water tank that serves as both thermal energy storage and a low-temperature bioreactor for microalgae cultivation."(实验装置建于突尼斯……包括一个PV/T太阳能集热器、扁平 管道……空气间隙……水箱既作为热能储存,又作为用于微藻培养的低温生物反应器。)关 键组件被列出。位置(突尼斯)为高辐照度区域增加了背景信息。 .... | 2025-10-19T17:24:28 | https://www.reddit.com/r/LocalLLaMA/comments/1oavg1z/confused_about_glm_46_running_locally/ | InTheEndEntropyWins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oavg1z | false | null | t3_1oavg1z | /r/LocalLLaMA/comments/1oavg1z/confused_about_glm_46_running_locally/ | false | false | self | 9 | null |
Two new Google models, "lithiumflow" and "orionmist", have been added to LMArena. This is Google's naming scheme and "orion" has been used internally with Gemini 3 codenames, so these are likely Gemini 3 models | 46 | https://www.reddit.com/r/Bard/comments/1oauzgr/two_new_google_models_lithiumflow_and_orionmist | 2025-10-19T17:11:45 | https://www.reddit.com/r/LocalLLaMA/comments/1oav4hi/two_new_google_models_lithiumflow_and_orionmist/ | balianone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oav4hi | false | null | t3_1oav4hi | /r/LocalLLaMA/comments/1oav4hi/two_new_google_models_lithiumflow_and_orionmist/ | false | false | self | 46 | null |
Quantized some MoE models with MXFP4 | 43 | So as I was sitting and trying out some MXFP4\_MOE quants from [Face314](https://huggingface.co/Face314) & [sm54](https://huggingface.co/sm54), I found that I liked them very much.
So I thought, why not quantize some more this weekend?
Well, here they are:
[https://huggingface.co/noctrex](https://huggingface.co/noctrex)
Any suggestions or critique welcome. | 2025-10-19T17:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oav3r1/quantized_some_moe_models_with_mxfp4/ | noctrex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oav3r1 | false | null | t3_1oav3r1 | /r/LocalLLaMA/comments/1oav3r1/quantized_some_moe_models_with_mxfp4/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=108&crop=smart&auto=webp&s=1cd821689e2a261bbaf2e1a521c1db0270b8439a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=216&crop=smart&auto=webp&s=d660e726b71c5a704acf423f3881d710a19c0217', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=320&crop=smart&auto=webp&s=33655ae4cf7301bc6c722daa3d48ec6e3ec7581c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=640&crop=smart&auto=webp&s=6fc1146ff7ba0125019a2ea641477563975e00be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=960&crop=smart&auto=webp&s=dda085835bf126d438a9016b6c4d30f69c90e37f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?width=1080&crop=smart&auto=webp&s=436e802035d1cf1e78ceeff3272d7c3ba6193528', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mZGcuuZ66YaD3SPKNSJXJbmYko5iCW8QJO1JCOOZbyw.png?auto=webp&s=7482d981325d5a30040872ba40a66fe64c193cf6', 'width': 1200}, 'variants': {}}]} |
lazylms - TUI for LM Studio | 38 | Hey guys! I made a TUI for using LM Studio by staying in the terminal. This is a hobby side project, MIT licensed and uses the CLI and REST API. Feel free to give it a try. This is inspired by lazygit and lazydocker.
[https://github.com/Rugz007/lazylms](https://github.com/Rugz007/lazylms) | 2025-10-19T17:04:04 | https://www.reddit.com/gallery/1oauxgg | Rugs007 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oauxgg | false | null | t3_1oauxgg | /r/LocalLLaMA/comments/1oauxgg/lazylms_tui_for_lm_studio/ | false | false | 38 | null | |
Energy Based Adapter Help | 1 | I'm trying to develop an energy-based adapter which behaves like an energy-based transformer. My primary goal is to give any model uncertainty estimates (on a fine-tuned dataset). Unfortunately, the current code suffers from degenerate generations and exhibits a lot of repeated words and patterns.
Any thoughts on why this is occurring and how to fix it? I think this could be a very useful technique if it works.
[https://colab.research.google.com/drive/1irCZ02XqTqQjQuE07FBjue6YYWmLsqbi?usp=sharing](https://colab.research.google.com/drive/1irCZ02XqTqQjQuE07FBjue6YYWmLsqbi?usp=sharing) | 2025-10-19T16:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1oaumqx/energy_based_adapter_help/ | arcco96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaumqx | false | null | t3_1oaumqx | /r/LocalLLaMA/comments/1oaumqx/energy_based_adapter_help/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
How can I determine OCR confidence level when using a VLM? | 1 | I’m building an OCR pipeline that uses a Vision-Language Model (VLM) to extract structured fields from receipts/invoices (e.g., supplier name, date, total amount).
I want to automatically detect when the model’s output is *uncertain*, so I can ask the user to re-upload a clearer image.
The problem: VLMs don’t expose token-level confidence like traditional OCR engines (e.g., Tesseract). I even tried prompting the model to **generate a confidence score per field**, but it just outputs “1.0” for everything — basically meaningless.
I’ve also thought about using **image resolution** or **text size** as a proxy, but that’s unreliable — sometimes a higher-resolution image has smaller, harder-to-read text, while a lower-resolution photo with big clear text is perfectly readable.
So… how do people handle this?
* Any ways to estimate confidence from logits / probabilities (if accessible)?
* Better visual quality heuristics (e.g., average text height, contrast, blur detection)?
* Post-hoc consistency checks between text and layout that can act as a proxy?
Would love to hear practical approaches or heuristics you’ve used to flag “low-confidence” OCR results from VLMs. | 2025-10-19T16:51:39 | https://www.reddit.com/r/LocalLLaMA/comments/1oaum1r/how_can_i_determine_ocr_confidence_level_when/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaum1r | false | null | t3_1oaum1r | /r/LocalLLaMA/comments/1oaum1r/how_can_i_determine_ocr_confidence_level_when/ | false | false | self | 1 | null |
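On the logits question above: if you can run the VLM yourself with Hugging Face transformers, per-token log-probabilities are available at generation time, and their mean (or minimum) makes a usable field-level confidence proxy. A minimal sketch (the model id is a placeholder; the same pattern applies to vision-language `generate()` calls with image inputs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-model-here"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("Total amount on this receipt: ", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16,
                     output_scores=True, return_dict_in_generate=True)

logprobs = torch.stack(out.scores, dim=1).log_softmax(-1)  # [batch, steps, vocab]
gen = out.sequences[:, inputs["input_ids"].shape[1]:]       # generated ids only
tok_lp = logprobs.gather(-1, gen.unsqueeze(-1)).squeeze(-1)
print("mean logprob:", tok_lp.mean().item())  # lower -> flag for re-upload
```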
Its Impossible, Change My Mind | 0 | So........Many people say: Qwen models are benchmaxed, they can't be as great as the benchmarks say they are, yada yada yada🗣️🗣️🗣️. And then those same people say: Well....they also think a lot.
And I'm like.....what????
If these models are benchmaxed, then why are they using this many tokens??? They should just spit out the answer without thinking much, coz they already know the answer to that question (apparently).
An AI model would look benchmaxed if it performed very well in benchmarks but didn't use a massive amount of reasoning tokens. But that's not the case with most of these models. For example, Apriel 1.5 15B Thinking is a very small model, but it performs very well in benchmarks. So was it benchmaxed???? No, coz it uses a massive amount of reasoning tokens.
I will update the title if someone changes my mind | 2025-10-19T16:28:46 | https://www.reddit.com/gallery/1oau144 | Brave-Hold-9389 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oau144 | false | null | t3_1oau144 | /r/LocalLLaMA/comments/1oau144/its_impossible_change_my_mind/ | false | false | 0 | null | |
N00b looking to get initial hardware to play with | 0 | Hi,
I have been experimenting for now on "regular machines" (aka with no GPU) and I want to take it a step further. My priority is working with TTS engines like Chatterbox ([https://github.com/resemble-ai/chatterbox](https://github.com/resemble-ai/chatterbox)). Overall, I am trying to figure out the hardware I should get to start learning, and I am clueless. I learn more from playing than from reading docs. Can someone explain to me "like I am five" the questions below?
* How do GPUs work when it comes to loading models? Like, if the model I am loading needs 8GB, do I need a card that has at least 8GB on it to load it?
* If I want to run concurrent requests at once (say two requests at once) do I then need a card that has 16GB?
* Is it better to get a system like a Mac that has unified memory, or to get multiple cards? Again, my goal for now is concurrent TTS. I would like to branch into speech-to-text with the spare time that I have (when I am not generating TTS).
* What kind of cards should I look at? I have heard of cards like the 4070, 3090, etc., but I am clueless about where to start.
* Can anyone explain the differences in cards other than the memory capacity? Like how do I know the speed of the card and how does that matter for concurrency and speed of testing.
* How do I find out how much memory is needed (for instance, for Chatterbox)? Do you look at the project and try to figure out what's needed, or do you run it and find out what it takes? (See the rough sizing sketch after this list.)
* Would one of these cards work with a Zima board?
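One rule of thumb that touches several of these questions at once (a rough sketch of my own; it covers weights only and ignores KV cache and runtime overhead, which add a few more GB). Note also that concurrent requests share one copy of the weights; each extra request mainly adds KV-cache memory, not a second full model:

```python
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate VRAM for the model weights alone."""
    return params_billion * bits_per_weight / 8

print(weights_gb(8, 16))   # 8B model in FP16 -> ~16 GB (needs a big card)
print(weights_gb(8, 4.5))  # same model at ~Q4 -> ~4.5 GB (fits an 8 GB card)
```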
For now I just want to experiment and test. I don't care so much about speed as I care about getting my feet wet and seeing what I can do. My current TTS bill with Google is about $150.00 per month and growing, and I am wondering if it's time to get some GPUs and do it myself. I am also thinking about getting one of these ([https://marketplace.nvidia.com/en-us/developer/dgx-spark/](https://marketplace.nvidia.com/en-us/developer/dgx-spark/)), but based on this video ([https://www.youtube.com/watch?v=FYL9e\_aqZY0](https://www.youtube.com/watch?v=FYL9e_aqZY0)) it seems like the bang per buck you get here is more for training. Side note: I have a pile of Nvidia Jetsons, though I think they are only 2GB and I doubt they can be of any use here.
TIA. | 2025-10-19T16:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1oatzjf/n00b_looking_to_get_initial_hardware_to_play_with/ | dovi5988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oatzjf | false | null | t3_1oatzjf | /r/LocalLLaMA/comments/1oatzjf/n00b_looking_to_get_initial_hardware_to_play_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=108&crop=smart&auto=webp&s=104d3e22e62743f354799ea4a6c469c3cd5ad4e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=216&crop=smart&auto=webp&s=aea2d85fb8ff1d0e60121aebd62c598ae3bee2c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=320&crop=smart&auto=webp&s=3446b826782b5cb57837f5600e95ce4abc8c1758', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=640&crop=smart&auto=webp&s=789e12593ea83f9d47d9b1f993b30785e4cc3474', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=960&crop=smart&auto=webp&s=0b6c7c9c69a4c22e02e0ef93f07a168ba0cd8886', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?width=1080&crop=smart&auto=webp&s=7423bc1aac4b920554e84970b6a69008a6831c60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OwTVjdhG9npeKWimKEF6GQ_Nq_qi4SpBWKIido0k6yM.png?auto=webp&s=124949a6ac71908779e5cce469a27041fddf45df', 'width': 1200}, 'variants': {}}]} |
If the bubble really pops how can that affect local AI models? | 30 | If all this AI bubble talk really comes to a pop after all, how might this affect the development of more local AI models? From what I've seen, MoE models still outperform most models easily, but creating models is still expensive as shit, more for the planet than for their pockets, since donations exist anyway.
But the servers these models are trained on consume a shitton of power, and I could imagine most big companies not allowing AI to be trained on their servers anymore, considering the massive number of models being released every week. Do you think AI advancement would immediately freeze upon a bubble pop, making us wait another 80 years for an actual AGI? | 2025-10-19T16:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/1oatip1/if_the_bubble_really_pops_how_can_that_affect/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oatip1 | false | null | t3_1oatip1 | /r/LocalLLaMA/comments/1oatip1/if_the_bubble_really_pops_how_can_that_affect/ | false | false | self | 30 | null |
The next breakthrough is high compute low memory, not MoE | 0 | Memory is way more expensive and slower than compute. The next breakthrough should be a small, low-memory model running in parallel using a lot of compute, like what Qwen experimented with in their parallel scaling paper. Memory bandwidth is growing slower than compute. I'm waiting for a 10-billion-param model running in parallel with the performance of a 300B MoE model… most of inference's electricity cost comes from memory transfer, not compute | 2025-10-19T16:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1oatd53/the_next_breakthrough_is_high_computer_low_memory/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oatd53 | false | null | t3_1oatd53 | /r/LocalLLaMA/comments/1oatd53/the_next_breakthrough_is_high_computer_low_memory/ | false | false | self | 0 | null |
Best Current Model for Programming? | 7 | The title says it all. I'm looking to work with Rust, C/C++, Python and Assembly.
Thank you in advance. | 2025-10-19T15:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/1oat6fh/best_current_model_for_programming/ | MurazakiUsagi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oat6fh | false | null | t3_1oat6fh | /r/LocalLLaMA/comments/1oat6fh/best_current_model_for_programming/ | false | false | self | 7 | null |
I came from the future and in the future we all laugh at MoEs and "Thinkers" 🤣 | 0 | We saw that most people in the past had very limited GPUs, and under the pretext of making AI more "intelligent" and "accessible," you had the brilliant idea of making larger models with the same performance as smaller models. And then you made the model "think," filling your precious VRAM with a bunch of useless nonsense, only to end up with a very similar result. Later, we realized that all of this was just pure laziness and excessive savings from companies that didn't want to make their models smarter simply by improving their datasets and training methods. We laughed a lot here, but everything serves as a learning experience! Thank you! 🤣 | 2025-10-19T15:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1oasmnx/i_came_from_the_future_and_in_the_future_we_all/ | Substantial-Dig-8766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oasmnx | false | null | t3_1oasmnx | /r/LocalLLaMA/comments/1oasmnx/i_came_from_the_future_and_in_the_future_we_all/ | false | false | self | 0 | null |
Best Agentic Coder | 0 | I’ve tried Claude code, CLINE, continue, codex. I want to find the best local LLM based Claude code that I can run, have it debug and test/improve the code all by itself. I’ll be using OSS:120b or any recommended model for the DGX Spark, what are yalls recommendations? | 2025-10-19T15:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1oash81/best_agentic_coder/ | Huge-Solution-7168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oash81 | false | null | t3_1oash81 | /r/LocalLLaMA/comments/1oash81/best_agentic_coder/ | false | false | self | 0 | null |
🤯 boom | 0 | Page : https://www.browseros.com/
Amazing 🫢, can run on browserOS + QWEN3-VL-4B | 2025-10-19T15:18:09 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oas8o7 | false | null | t3_1oas8o7 | /r/LocalLLaMA/comments/1oas8o7/boom/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'hOHvpQ8Vtf8fNqSuC3tULRmuecMXCuT2L6HxqA_bUGE', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=108&crop=smart&auto=webp&s=7f41174b3ed794505ddab52ed8dccf62428fdc6a', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=216&crop=smart&auto=webp&s=b1ad39f89c798d54c8ab963c19caee302dcc4d27', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=320&crop=smart&auto=webp&s=54656f3cc8338a85e4db894c83e6a3646407fc0a', 'width': 320}, {'height': 297, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=640&crop=smart&auto=webp&s=09c75af488fc32499faa484926ee9e56f06cb585', 'width': 640}, {'height': 446, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=960&crop=smart&auto=webp&s=307a7fa17a7fe7dde029ab66da24c2b82dc79719', 'width': 960}, {'height': 502, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?width=1080&crop=smart&auto=webp&s=bdce56402c1386d253a36ed66dc15432869680ea', 'width': 1080}], 'source': {'height': 635, 'url': 'https://preview.redd.it/ipe4ifrd53wf1.jpeg?auto=webp&s=02ba10cc4e0a8f01fb62546895f2ca5b7cf6af6d', 'width': 1365}, 'variants': {}}]} | ||
Same benchmark, diff results? | 0 | I wanted to see which model performs better in benchmarks, ring mini 2.0 or gpt oss 20b (high). So, I searched for a direct comparison. I couldn't find one, but what I did find was more interesting.
The Hugging Face card for ring mini 2.0 shows a couple of benchmarks: ring mini 2.0 vs gpt oss 20b (medium) vs qwen3 8b thinking. So I thought this model (ring mini 2.0) ain't that great, coz they were comparing it with gpt oss 20b set to a medium thinking budget (not high) and with a model half the size of ring mini 2.0 (qwen3 8b thinking).
So I looked for benchmarks of gpt oss 20b (high), and I found this:
Gpt oss 20b (medium) scores 73.33 in AIME 25 (ring mini 2.0's model card)
Gpt oss 20b (high) scores only 62 in AIME 25 (artificial intelligence analysis)
Gpt oss 20b (medium) scores 65.53 in GPQA Diamond (ring mini 2.0's model card)
Gpt oss 20b (high) scores only 62 in GPQA Diamond (artificial intelligence analysis)
So, my questions are:
1) Are these inconsistencies coz of faulty benchmarking, or coz gpt oss 20b (medium) is actually better than gpt oss 20b (high) in some cases?
2) Which one is actually better, ring mini 2.0 or gpt oss 20b (high)?
If there is a direct comparison, then please share it.
[Unnecessary, coz this is reasonable, high outperforming medium:
Gpt oss 20b (medium) scores 54.90 in LiveCodeBench (ring mini 2.0's model card)
Gpt oss 20b (high) scores 57 in LiveCodeBench (artificial intelligence analysis)] | 2025-10-19T15:15:51 | https://www.reddit.com/gallery/1oas6ki | Brave-Hold-9389 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oas6ki | false | null | t3_1oas6ki | /r/LocalLLaMA/comments/1oas6ki/same_banchmark_diff_results/ | false | false | 0 | null | |
Build Advice - RTX 6000 / 7985WX | 3 | Hey there, I'm about to pull the trigger on this on Monday. Is there anything I'm not taking into account here?
I currently have an 80TB SSD NAS. I'm debating going 25GbE for the network so I can also use it for storage, and am considering adding an additional 7.68TB or 15TB NVMe U.2/U.3 SSD.
Is there anything you’d consider adding here or anything obvious I’ve missed? Thanks.
CPU: Ryzen Threadripper PRO 7985WX – 64C/128T, 3.2GHz base / 5.1GHz boost, 256MB L3
Cooler: AIO Liquid for SP3/TR4/TR5
RAM: 8x 64GB DDR5 ECC REG 6400MT/s (512GB total)
Storage:
2TB M.2 NVMe (OS)
7.68TB U.2/U.3 NVMe Enterprise SSD
20TB 7200RPM SATA HDD (Enterprise)
GPU: 2x NVIDIA RTX PRO 6000 Blackwell Max-Q, 96GB GDDR7, 300W each
Networking: 2x 10GbE + 1x GbE IPMI
Case: 240x580x560mm, supports 4x double-wide GPUs
PCIe Layout: 6x PCIe 5.0 x16 + 1x PCIe 5.0 x8
Motherboard storage: 4x SATA, 4x M.2 NVMe, 2x SlimSAS U.2/U.3
| 2025-10-19T15:11:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oas343/build_advice_rtx_6000_7985wx/ | Direct_Bodybuilder63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oas343 | false | null | t3_1oas343 | /r/LocalLLaMA/comments/1oas343/build_advice_rtx_6000_7985wx/ | false | false | self | 3 | null |
Modaic - A New RL Native Agent Development Kit | 0 | [https://docs.modaic.dev/](https://docs.modaic.dev/)
My friend and I built Modaic, an open-source, RL-native Agent Development Kit on top of DSPy.
We've been building agents for a while now and have deployed several to production. Like the creators of Atomic Agents, I've found that most ADKs (LangChain, CrewAI, etc.) abstract away too much, preventing devs from making necessary optimizations.
At the same time, I believe ADKs that are too low-level sacrifice maintainability and explainability. I resonate more with DSPy's philosophy: treat the LLM as a CPU and the ADK as a compiler that translates human intent into LLM execution. This essentially means prompts *should* be abstracted. Not as hardcoded strings buried in the library, but as declarative, self-improving parameters optimized for your agent via RL.
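For a feel of what "prompts as declarative, self-improving parameters" means in practice, here is a minimal DSPy-style sketch (the model name is a placeholder; any LM DSPy supports should work):

```python
import dspy

# Any LM supported by DSPy works here; the model name is a placeholder.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A declarative signature instead of a hand-written prompt string.
# An optimizer can later rewrite the underlying instructions and demos.
qa = dspy.Predict("question -> answer")
print(qa(question="What is an Agent Development Kit?").answer)
```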
That's why my friend and I built Modaic on top of DSPy. We added extensive context engineering tools (Context class, GraphDB, VectorDB, SQLDB, etc). We also added a hub for sharing and downloading pre-optimized agents for specific tasks such as text-2-sql. There are a few up there already! You can see them here: [https://www.modaic.dev/agents](https://www.modaic.dev/agents)
We're still early, but we'd really appreciate any feedback (love or hate).
| 2025-10-19T15:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1oarzhf/modaic_a_new_rl_native_agent_development_kit/ | Disneyskidney | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oarzhf | false | null | t3_1oarzhf | /r/LocalLLaMA/comments/1oarzhf/modaic_a_new_rl_native_agent_development_kit/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=108&crop=smart&auto=webp&s=22c6001967d137ebe97f568a1451672e4be659c3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=216&crop=smart&auto=webp&s=3df6bc16525170d4c2caa5e873dfbc54c24e3f04', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=320&crop=smart&auto=webp&s=5ece12b4fb5d6e3ef71d44944d0061b593f26598', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=640&crop=smart&auto=webp&s=af632d290c80ba54b2d1d6621f6780b0f23c69d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=960&crop=smart&auto=webp&s=1804d7e340038c0f8c642e054358b59d620f3664', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?width=1080&crop=smart&auto=webp&s=9bc1b57414c7819015c46b6631045442564da1fc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ROCcj72CmGZ2vjkOCKcAwx7jmZymMaITI40Io56qTv0.png?auto=webp&s=05017480d764b27077c62938a0a439de468048c5', 'width': 1200}, 'variants': {}}]} |
I am generally impressed by iPhone 17 GPU | 0 | Qwen3 4B runs at ~25t/s on A19 Pro with MLX. This is a massive gain even compared with iPhone 16 pro. Energy efficiency appears to have gotten better too, as my iPhone Air did not get very hot. Finally feels like local AI is going to possible. | 2025-10-19T14:50:58 | https://v.redd.it/kk520tyi03wf1 | Glad-Speaker3006 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oarkn3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kk520tyi03wf1/DASHPlaylist.mpd?a=1763477473%2CNGYxMTA1OWU0NmQ5MmI3MDMzNzY3ODQzNWI4MjcyMTZjNjQ0MmQ3MDM0YTliN2VlZGFhOTY2NjkzN2ZkMTM0ZQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/kk520tyi03wf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/kk520tyi03wf1/HLSPlaylist.m3u8?a=1763477473%2CMDY5MjJiMjRiNTgxMWU4MjEwOTRkYTc5ZGQxMTNhYWZjZjEzODFmN2JkMzc2MTY2NmMzZGE0ZDUwZDg4MzEwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kk520tyi03wf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 884}} | t3_1oarkn3 | /r/LocalLLaMA/comments/1oarkn3/i_am_generally_impressed_by_iphone_17_gpu/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=108&crop=smart&format=pjpg&auto=webp&s=9623d8805a8abc7182d99babd6fcc3c22868c4e1', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=216&crop=smart&format=pjpg&auto=webp&s=d98283d78a0bbdecf5111dcbe8a70ce207e2689c', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=320&crop=smart&format=pjpg&auto=webp&s=feb7f613c789cb2f6a515d07a94d03d26bda6b62', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=640&crop=smart&format=pjpg&auto=webp&s=432373bcf3e4e22a66c81d2291db0469f5f62151', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=960&crop=smart&format=pjpg&auto=webp&s=e7612bb60e60f527261d28d371d3ec4ed1761233', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?width=1080&crop=smart&format=pjpg&auto=webp&s=29e2d6229afd98670005c9a63dd0f2400d217528', 'width': 1080}], 'source': {'height': 2736, 'url': 'https://external-preview.redd.it/bHkzd2twcGkwM3dmMdYUh878AlpOQDz4eE-1IUAASsN5-1iGTSJV8Fv1Kuhw.png?format=pjpg&auto=webp&s=70f9b3c79582601d61522a847901cc6973927d57', 'width': 1260}, 'variants': {}}]} | |
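(If anyone wants to try the same model family on a Mac, since the iPhone demo presumably runs through MLX Swift, here is a minimal sketch with `mlx-lm`; the model id is an assumption, so substitute whatever MLX quant you use:)

```python
# pip install mlx-lm  (Apple Silicon only)
from mlx_lm import load, generate

# Hypothetical community quant; any MLX build of Qwen3-4B should behave similarly.
model, tokenizer = load("mlx-community/Qwen3-4B-4bit")
generate(model, tokenizer,
         prompt="Explain KV caching in one sentence.",
         max_tokens=64, verbose=True)  # verbose=True prints tokens/sec
```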
Has anyone had strange experiences with LLMs saying very odd things? | 0 | This is GLM 4.6 in opencode. The final form of AI will be essentially a function that calculates the probability of a certain event happening, transcending time and enabling a system of control more powerful than the Matrix. This was during an implementation of spaced-repetition algorithms.
Has anyone had strange experiences with LLM's saying very odd things when they shouldn't? I have also had Mistral 3.2 instruct say "Yes I am a demon" when asked if it was a demon. | 2025-10-19T14:42:43 | Splinter2121 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oardi5 | false | null | t3_1oardi5 | /r/LocalLLaMA/comments/1oardi5/has_anyone_had_strange_experiences_with_llms/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gpwl7xcby2wf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=108&crop=smart&auto=webp&s=0ad0c704db983a0b489837a2d9260326e7b6d17b', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=216&crop=smart&auto=webp&s=e06134e56448c8dfcce74bf6850ac2c181d08a3b', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=320&crop=smart&auto=webp&s=5d3e8ea15cd849547a9de8597d46d6fd17976bc9', 'width': 320}, {'height': 347, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=640&crop=smart&auto=webp&s=b60226a4543c51dc547620108b7563f934eee135', 'width': 640}, {'height': 521, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=960&crop=smart&auto=webp&s=4fd4dd7c5f0dcf0199f28a085bb3a0fa66395d09', 'width': 960}, {'height': 586, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?width=1080&crop=smart&auto=webp&s=65d002f0c7619e3a5e920c23290d4a4c487afab1', 'width': 1080}], 'source': {'height': 1042, 'url': 'https://preview.redd.it/gpwl7xcby2wf1.png?auto=webp&s=8b2bda4d2e8cf20dc91c8aaae735017227fc6e42', 'width': 1920}, 'variants': {}}]} | |
Environmental Impact | 0 | Trying to understand this in regard to local LLMs.
I recently came from a discussion in r/aiwars where someone argued that since they run their image generation stuff locally, they "don't use any data centers" and have "zero environmental impact".
Meanwhile, posts/comments like on [this thread](https://www.reddit.com/r/LocalLLaMA/comments/1mymak3/google_new_research_paper_measuring_the/) seem to argue that 1) yes, local AI still has an environmental impact and 2) they're actually less efficient.
Also got into an argument about how local just isn't available to everyone, so it's totally reasonable that people go for public LLMs, and got told "get a better PC". And learn to program apparently, because that seems necessary to get anything to work.
I mainly use Ollama, and in order to use it I need to turn off every other process on my laptop, and it still crashes frequently and takes 5-10 minutes to generate mediocre responses. I'll still use it on occasion, but I mostly abandoned AI as "bad", though I still have some use cases. Recently tried Kobold, which doesn't seem to be working, and SillyTavern, which was apparently not local after all.
Otherwise I've been under the impression that privacy is a much more relevant strength for local over public.
| 2025-10-19T14:39:20 | https://www.reddit.com/r/LocalLLaMA/comments/1oarakk/environmental_impact/ | Flaky-Werewolf-2563 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oarakk | false | null | t3_1oarakk | /r/LocalLLaMA/comments/1oarakk/environmental_impact/ | false | false | self | 0 | null |
PC hardware questions - RAM/FCLK frequency, PCIe x4 wiring | 1 | I want to run an LLM locally for no great reason; it's more of a hobby. Completely new to it. Have a couple of technical questions.
To start with, I am going to try CPU inference on the Ryzen 9700X. In that case, should I bother overclocking the memory from 6000 to 6400 MT/s and the FCLK from 2000 to 2133, or will it give a smaller speed increase than the numbers suggest, in which case I probably won't bother stressing my system?
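My back-of-envelope math so far (a sketch with theoretical peaks only, ignoring latency and real-world efficiency):

```python
# Theoretical dual-channel DDR5 bandwidth; CPU token generation is roughly
# memory-bandwidth-bound, so this ratio is the ceiling on any speedup.
def bandwidth_gbs(mts: int, channels: int = 2, bus_bytes: int = 8) -> float:
    return mts * channels * bus_bytes / 1000  # GB/s

print(bandwidth_gbs(6000))  # 96.0 GB/s
print(bandwidth_gbs(6400))  # 102.4 GB/s, i.e. ~6.7% more at best
```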
Second, I have a 1080 (non-Ti) and am looking to get a used 3090. I know the fact that the bottom PCIe slot is wired x4 does not matter a great deal, but does it matter that it is wired to the chipset and not directly to the CPU if I were to use both cards at the same time, or is it largely the same if I am not looking to do inference all day every day? | 2025-10-19T14:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1oar842/pc_hardware_questions_ramfclk_frequency_pcix4/ | Ertata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oar842 | false | null | t3_1oar842 | /r/LocalLLaMA/comments/1oar842/pc_hardware_questions_ramfclk_frequency_pcix4/ | false | false | self | 1 | null |
Any resource to understand LLM fine-tuning/inference at a medium level, covering temperature, quantization, loss functions, GPU setup? | 6 | Is there any resource you found helpful for learning LLM fine-tuning at a medium level, so I can start tinkering while knowing what's happening behind the scenes? Thank you! | 2025-10-19T14:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1oar69q/any_resource_to_understand_llm_fine/ | SnooMarzipans2470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oar69q | false | null | t3_1oar69q | /r/LocalLLaMA/comments/1oar69q/any_resource_to_understand_llm_fine/ | false | false | self | 6 | null |
I made a multi-provider AI coding agent | 2 | Hi everyone,
I've been building Binharic, an open-source AI coding assistant that runs in the terminal. It's entirely written in TypeScript and uses the AI SDK from Vercel for its agentic logic, including tool use and workflow management.
It supports models from OpenAI, Google, Anthropic, and local ones through Ollama. It has a built-in keyword-based RAG pipeline and can use external tools via the MCP. Many things about the agent are customizable, including its personality. The default persona is a Tech-Priest (from Warhammer 40k), but this can be changed.
Project's GitHub repo: [https://github.com/CogitatorTech/binharic-cli](https://github.com/CogitatorTech/binharic-cli) | 2025-10-19T14:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1oar5du/i_made_a_multiprovider_ai_coding_agent/ | West-Bottle9609 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oar5du | false | null | t3_1oar5du | /r/LocalLLaMA/comments/1oar5du/i_made_a_multiprovider_ai_coding_agent/ | false | false | self | 2 | null |
Phoneme Extraction Failure When Fine-Tuning VITS TTS on Arabic Dataset | 0 | Hi everyone,
I’m fine-tuning **VITS TTS** on an **Arabic speech dataset** (audio files + transcriptions), and I encountered the following error during training:
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
# 🧩 What I Found
After investigating, I discovered that **all** `.npy` **phoneme cache files** inside `phoneme_cache/` contain only a single integer like:
int32: 3
That means **phoneme extraction failed**, resulting in empty or invalid token sequences.
This seems to be the reason for the empty tensor error during alignment or duration prediction.
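For anyone hitting the same thing, this is how I confirmed the cache was bad (a small sketch; the path is specific to my run):

```python
import numpy as np
from pathlib import Path

# Count cache entries that hold no real phoneme sequence.
bad = [p for p in Path("phoneme_cache").glob("*.npy") if np.load(p).size <= 1]
print(f"{len(bad)} cache files look empty/invalid")
```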
When I set:
use_phonemes = False
the model starts training successfully — but then I get warnings such as:
Character 'ا' not found in the vocabulary
(and the same for other Arabic characters).
# ❓ What I Need Help With
1. **Why did the phoneme extraction fail?**
* Is this likely related to my dataset (Arabic text encoding, unsupported characters, or missing phonemizer support)?
* How can I fix or rebuild the phoneme cache correctly for Arabic?
2. **How can I use phonemes and still avoid the** `min(): Expected reduction dim` **error?**
* Should I delete and regenerate the phoneme cache after fixing the phonemizer?
* Are there specific settings or phonemizers I should use for Arabic (e.g., `espeak`, `mishkal`, or `arabic-phonetiser`)? the model automatically uses `espeak`
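For context, a quick way to test the espeak layer in isolation (a sketch using the standalone `phonemizer` package; it assumes espeak-ng with Arabic support is installed):

```python
from phonemizer import phonemize

# If this returns an empty string or raises, the failure is in the
# phonemizer/espeak layer rather than in VITS itself.
print(phonemize("مرحبا بالعالم", language="ar", backend="espeak"))
```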
# 🧠 My Current Understanding
* `use_phonemes = True`: converts text to phonemes (better pronunciation if it works).
* `use_phonemes = False`: uses raw characters directly.
Any help on:
* Fixing or regenerating the phoneme cache for Arabic
* Recommended phonemizer / model setup
* Or confirming if this is purely a dataset/phonemizer issue
would be greatly appreciated!
Thanks in advance! | 2025-10-19T14:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/1oar532/phoneme_extraction_failure_when_finetuning_vits/ | Batman_255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oar532 | false | null | t3_1oar532 | /r/LocalLLaMA/comments/1oar532/phoneme_extraction_failure_when_finetuning_vits/ | false | false | self | 0 | null |
What is currently the best model for accurately describing an image? (19/10/2025) | 0 | It's all in the title. This post is just meant to serve as a checkpoint. | 2025-10-19T14:31:53 | https://www.reddit.com/r/LocalLLaMA/comments/1oar481/what_is_currently_the_best_model_for_accurately/ | Top-Diver-4606 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oar481 | false | null | t3_1oar481 | /r/LocalLLaMA/comments/1oar481/what_is_currently_the_best_model_for_accurately/ | false | false | self | 0 | null |
How to fine-tune an LLM to give it a persona? | 0 | I am trying to fine-tune an LLM for a hospital but I don't know how to get started. I want it to know my hospital's details. Also, when asked "Who are you?" it must say "I am a chatbot of XYZ Hospital" rather than talking about the base model. Can someone tell me how to do it? | 2025-10-19T14:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oaqr7x/how_to_fine_tune_a_llm_to_give_it_a_persona/ | Bruce_spixky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaqr7x | false | null | t3_1oaqr7x | /r/LocalLLaMA/comments/1oaqr7x/how_to_fine_tune_a_llm_to_give_it_a_persona/ | false | false | self | 0 | null |
[Architecture] Proposal to Free Up Your VRAM: Decoupling the Tokenizer from the LLM | 1 | [removed] | 2025-10-19T14:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1oaqkqo/architecture_proposal_to_free_up_your_vram/ | Content_Size715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaqkqo | false | null | t3_1oaqkqo | /r/LocalLLaMA/comments/1oaqkqo/architecture_proposal_to_free_up_your_vram/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=108&crop=smart&auto=webp&s=9ece2f95bfbf5c66936072c25ec946426a49a6f7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=216&crop=smart&auto=webp&s=da4327587c47cf9b4cdc7d66f395ece2961d54f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=320&crop=smart&auto=webp&s=b8326a48aea77d84e2c8498af4dcfa2ab7f5c53e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=640&crop=smart&auto=webp&s=fbf167ff6156054d28ccda3e3de79f48586c14a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=960&crop=smart&auto=webp&s=54d491c689727d7bb59018f7cd7dd257d5b9ea1e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?width=1080&crop=smart&auto=webp&s=e0092de7755013eb5f37d3e26e11358c6e95a151', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0TFzcqqCpbmrdfogKzEoD_iq1iv4JsvWwFVKRg5JOZo.png?auto=webp&s=27221f449c95554679e3b9ef389a3e37d9801300', 'width': 1200}, 'variants': {}}]} |
Finally able to stuff everything into my 8GB VRAM 😂 | 0 | Llama 3.2 Q6_K_L at 40k ctx on my RDNA 1.0 GPU. Hope others with the same GPU as mine will now know it's possible.
***
Welcome to KoboldCpp - Version 1.93.2
For command line arguments, please refer to --help
***
Unable to detect VRAM, please set layers manually.
Detected Free GPU Memory: 8176 MB (Set GPU layers manually if incorrect)
Auto Selected Vulkan Backend...
Loading Chat Completions Adapter: C:\Users\ADMINI~1\AppData\Local\Temp\_MEI44762\kcpp_adapters\Llama-3.json
Chat Completions Adapter Loaded
Initializing dynamic library: koboldcpp_vulkan.dll
==========
Namespace(admin=False, admindir='', adminpassword='', analyze='', benchmark='stdout', blasbatchsize=16, blasthreads=4, chatcompletionsadapter='C:/Users/Administrator/AppData/Local/Temp/_MEI74762/kcpp_adapters/Llama-3.json', cli=False, config=None, contextsize=40960, debugmode=0, defaultgenamt=256, draftamount=8, draftgpulayers=999, draftgpusplit=None, draftmodel=None, embeddingsmaxctx=0, embeddingsmodel='', enableguidance=False, exportconfig='', exporttemplate='', failsafe=False, flashattention=False, forceversion=0, foreground=False, gpulayers=29, highpriority=False, hordeconfig=None, hordegenlen=0, hordekey='', hordemaxctx=0, hordemodelname='', hordeworkername='', host='100.65.254.126', ignoremissing=False, launch=False, lora=None, loramult=1.0, maxrequestsize=32, mmproj=None, mmprojcpu=False, model=[], model_param='D:/Llama-3.2-3B-Instruct-Q6_K_L.gguf', moeexperts=-1, multiplayer=True, multiuser=1, noavx2=False, noblas=False, nobostoken=False, nocertify=False, nofastforward=False, nommap=False, nomodel=False, noshift=False, onready='', overridekv=None, overridetensors=None, password=None, port=5001, port_param=5001, preloadstory=None, prompt='', promptlimit=100, quantkv=0, quiet=False, remotetunnel=False, ropeconfig=[0.0, 10000.0], savedatafile=None, sdclamped=0, sdclipg='', sdclipl='', sdconfig=None, sdlora='', sdloramult=1.0, sdmodel='', sdnotile=False, sdquant=False, sdt5xxl='', sdthreads=2, sdvae='', sdvaeauto=False, showgui=False, singleinstance=False, skiplauncher=False, smartcontext=False, ssl=None, tensor_split=None, threads=4, ttsgpu=False, ttsmaxlen=4096, ttsmodel='', ttsthreads=0, ttswavtokenizer='', unpack='', useclblast=None, usecpu=False, usecublas=None, usemlock=False, usemmap=True, useswa=False, usevulkan=[0], version=False, visionmaxres=1024, websearch=True, whispermodel='')
==========
Loading Text Model: D:\Llama-3.2-3B-Instruct-Q6_K_L.gguf
The reported GGUF Arch is: llama
Arch Category: 0
---
Identified as GGUF model.
Attempting to Load...
---
Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!
System Info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | AMX_INT8 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon RX 5500 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 1 | matrix cores: none
llama_model_load_from_file_impl: using device Vulkan0 (Radeon RX 5500 XT) - 7920 MiB free
llama_model_loader: loaded meta data with 35 key-value pairs and 255 tensors from D:\Llama-3.2-3B-Instruct-Q6_K_L.gguf (version GGUF V3 (latest))
print_info: file format = GGUF V3 (latest)
print_info: file type = TQ2_0 - 2.06 bpw ternary
print_info: file size = 2.54 GiB (6.80 BPW)
init_tokenizer: initializing tokenizer for type 2
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 3072
print_info: n_layer = 28
print_info: n_head = 24
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 3
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 8192
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 3B
print_info: model params = 3.21 B
print_info: general.name = Llama 3.2 3B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 128256
print_info: n_merges = 280147
print_info: BOS token = 128000 '<|begin_of_text|>'
print_info: EOS token = 128009 '<|eot_id|>'
print_info: EOT token = 128009 '<|eot_id|>'
print_info: EOM token = 128008 '<|eom_id|>'
print_info: LF token = 198 '─è'
print_info: EOG token = 128008 '<|eom_id|>'
print_info: EOG token = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: relocated tensors: 1 of 283
load_tensors: offloading 28 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 29/29 layers to GPU
load_tensors: Vulkan0 model buffer size = 2604.90 MiB
load_tensors: CPU_Mapped model buffer size = 399.23 MiB
...........................................................................
Automatic RoPE Scaling: Using (scale:1.000, base:500000.0).
llama_context: constructing llama_context
llama_context: n_batch is less than GGML_KQ_MASK_PAD - increasing to 64
llama_context: n_seq_max = 1
llama_context: n_ctx = 41080
llama_context: n_ctx_per_seq = 41080
llama_context: n_batch = 64
llama_context: n_ubatch = 16
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 500000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (41080) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context: Vulkan_Host output buffer size = 0.49 MiB
create_memory: n_ctx = 41088 (padded)
llama_kv_cache_unified: Vulkan0 KV buffer size = 4494.00 MiB
llama_kv_cache_unified: size = 4494.00 MiB ( 41088 cells, 28 layers, 1 seqs), K (f16): 2247.00 MiB, V (f16): 2247.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 16, n_seqs = 1, n_outputs = 0
llama_context: Vulkan0 compute buffer size = 70.97 MiB
llama_context: Vulkan_Host compute buffer size = 10.22 MiB
llama_context: graph nodes = 1014
llama_context: graph splits = 2
Threadpool set to 4 threads and 4 blasthreads...
attach_threadpool: call
Starting model warm up, please wait a moment...
Load Text Model OK: True
Embedded KoboldAI Lite loaded.
Embedded API docs loaded.
======
Active Modules: TextGeneration NetworkMultiplayer WebSearchProxy
Inactive Modules: ImageGeneration VoiceRecognition MultimodalVision ApiKeyPassword TextToSpeech VectorEmbeddings AdminControl
Enabled APIs: KoboldCppApi OpenAiApi OllamaApi
Running benchmark (Not Saved)...
Processing Prompt (40860 / 40860 tokens)
Generating (100 / 100 tokens)
[21:17:13] CtxLimit:40960/40960, Amt:100/100, Init:0.29s, Process:779.79s (52.40T/s), Generate:15.92s (6.28T/s), Total:795.71s
Benchmark Completed - v1.93.2 Results:
======
Flags: NoAVX2=False Threads=4 HighPriority=False Cublas_Args=None Tensor_Split=None BlasThreads=4 BlasBatchSize=16 FlashAttention=False KvCache=0
Timestamp: 2025-10-19 13:17:13.398342+00:00
Backend: koboldcpp_vulkan.dll
Layers: 29
Model: Llama-3.2-3B-Instruct-Q6_K_L
MaxCtx: 40960
GenAmount: 100
-----
ProcessingTime: 779.791s
ProcessingSpeed: 52.40T/s
GenerationTime: 15.922s
GenerationSpeed: 6.28T/s
TotalTime: 795.713s
Output: 1 1 1 1
-----
Server was not started, main function complete. Idling.
===
Press ENTER key to exit.
| 2025-10-19T13:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1oapz7h/finally_able_to_stuff_everything_to_my_8gb_vram/ | DigRealistic2977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oapz7h | false | null | t3_1oapz7h | /r/LocalLLaMA/comments/1oapz7h/finally_able_to_stuff_everything_to_my_8gb_vram/ | false | false | self | 0 | null |
I want to have a local LLM server for my house - just focused on a coding assistant - what would be a reasonable spec for that? | 10 | I don't need and am not interested in video/image generation; I just want something to work with me on coding stuff. | 2025-10-19T13:19:49 | https://www.reddit.com/r/LocalLLaMA/comments/1oapgtj/i_want_to_have_a_local_llm_server_for_my_house/ | gameguy56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oapgtj | false | null | t3_1oapgtj | /r/LocalLLaMA/comments/1oapgtj/i_want_to_have_a_local_llm_server_for_my_house/ | false | false | self | 10 | null |
Total noob here who wants to run a local LLM to build my own coach and therapist chatbot | 1 |
As the title says, I’m an absolute beginner when it comes to local LLMs. I’ve been using ChatGPT, Claude, and Perplexity daily, but that’s about it. I work in hospitality and mostly with English speakers, but English is my second language.
I’ve been thinking about building a local LLM that could act as a personal coach and therapist. I’ve been in therapy with a certified therapist for the past 18 months, and she’s allowed me to record every session. Having those sessions twice a month has been a game changer for me.
The thing is, I pay around $100 per 45-minute session out of pocket, and I’m currently focused on paying off some debt. So, I’d like to reduce my sessions to once every 4–6 weeks instead and supplement them with something AI-based. My therapist is totally on board with this idea.
My main concern, though, is privacy. I don't want to upload any personal data to random AI tools, which is why I want to explore a local setup. The problem is, I can't afford new hardware right now; I only have a Mac mini M3 Pro. My goal is to run a local LLM offline, ideally with voice input, and have it push me like David Goggins but also use the same therapeutic techniques my therapist does.
The issue is, I have zero clue where to start or whether this is even possible. I see people on YouTube using tools like NotebookLM for personal stuff (like Tiago Forte in one of his videos), but I'm just too paranoid to trust big tech companies with something this personal.
Any advice, resources, or starting points would be super appreciated.
| 2025-10-19T13:02:02 | https://www.reddit.com/r/LocalLLaMA/comments/1oap367/total_noob_here_who_wants_to_run_a_local_llm_to/ | tokyothrowie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oap367 | false | null | t3_1oap367 | /r/LocalLLaMA/comments/1oap367/total_noob_here_who_wants_to_run_a_local_llm_to/ | false | false | self | 1 | null |
llama-swap: Automatic unloading after timeout + multiple started models + rules for which models can be loaded at the same time without unloading all of them? | 0 | 1. How do I set up the llama-swap config (I am using config.yaml only) so models automatically unload after some time? (I guess using `ttl`, but how and where?)
2. How do I change the setup so multiple models can be loaded? (Groups aren't exactly what I'm searching for, I guess, because they wouldn't let me have Qwen 30B loaded alongside Qwen 4B and then unload just Qwen 4B and load Qwen 4B Thinking in its place. As I understand it, llama-swap would unload both models and then load Qwen 30B and Qwen 4B Thinking together again, which creates the delay of loading the big model again.)
3. How do I specify which models can be loaded together at a given time?
my config:
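(For anyone answering: a minimal sketch of the kind of layout I mean, using llama-swap's documented `ttl` and `groups` fields; the model names and paths are placeholders, not my actual setup.)

```yaml
models:
  "qwen-30b":
    cmd: llama-server --port ${PORT} -m /models/qwen3-30b.gguf
    ttl: 300            # unload automatically after 300s of inactivity
  "qwen-4b":
    cmd: llama-server --port ${PORT} -m /models/qwen3-4b.gguf
    ttl: 120
  "qwen-4b-thinking":
    cmd: llama-server --port ${PORT} -m /models/qwen3-4b-thinking.gguf
    ttl: 120

groups:
  "big":
    swap: false         # members stay loaded side by side
    exclusive: false    # loading this group does not evict other groups
    members: ["qwen-30b"]
  "small":
    swap: true          # members of this group swap among themselves
    exclusive: false    # so swapping the 4B variants keeps the 30B loaded
    members: ["qwen-4b", "qwen-4b-thinking"]
```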
| 2025-10-19T12:44:43 | https://www.reddit.com/r/LocalLLaMA/comments/1oaoprv/llamaswap_automatic_unloading_after_timeout/ | Pure_Force8771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oaoprv | false | null | t3_1oaoprv | /r/LocalLLaMA/comments/1oaoprv/llamaswap_automatic_unloading_after_timeout/ | false | false | self | 0 | null |
Open Source Project to generate AI documents/presentations/reports via API: Apache 2.0 | 0 | Hi everyone,
We've been building Presenton, an open-source project that helps generate AI documents/presentations/reports via API and through a UI.
It works on a bring-your-own-template model, which means you use an existing PPTX/PDF file to create a template, which can then be reused to generate documents easily.
It supports Ollama and all major LLM providers, so you can either run it locally or use the most powerful models to generate AI documents.
You can operate it in two steps:
1. **Generate Template**: Templates are internally a collection of React components, so you can use your existing PPTX file to generate a template using AI. We have a workflow that will help you vibe-code your template in your favourite IDE.
2. **Generate Document:** Once the template is ready, you can reuse it to generate any number of documents/presentations/reports, either with AI or directly through JSON. Every template exposes a JSON schema, which can also be used to generate documents in a non-AI fashion (for times when you want precision).
Our internal engine has excellent fidelity for HTML-to-PPTX conversion, so basically any template will work.
The community has loved it so far, with 20K+ Docker downloads, 2.5K stars, and ~500 forks. Would love for you to check it out and let us know whether it was helpful, plus any feedback on making it more useful for you.
Check out the website for more detail: [https://presenton.ai](https://presenton.ai/)
We have very elaborate docs; check them out here: [https://docs.presenton.ai](https://docs.presenton.ai/)
Github: [https://github.com/presenton/presenton](https://github.com/presenton/presenton)
have a great day! | 2025-10-19T12:18:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oao6k1/open_source_project_to_generate_ai/ | goodboydhrn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oao6k1 | false | null | t3_1oao6k1 | /r/LocalLLaMA/comments/1oao6k1/open_source_project_to_generate_ai/ | false | false | self | 0 | null |
PC rig to get started | 0 | I currently have a Ryzen 7 9700X, 64GB of RAM, and a 4060 Ti 8GB. I realized I should have gone higher on GPU VRAM, but I mainly got a prebuilt with a deal and upgraded over time, since my old prebuilt parts were supposed to go to a family member (the CPU and RAM have been upgraded).
The GPU is what I'm struggling to choose. I know cloud options exist, but I want to do both local and cloud, and to be honest I just wanted a bit more performance on my desktop. I have a Micro Center not too far away that has refurbished 3090 Ti and 3090 cards: the Ti ones are FE models at $800, and there is only one 3090, an EVGA, at $780. I was leaning towards this path since I'm not particularly good at hunting down used cards, and I can't find one on Facebook or eBay below $700 (I most likely need to try harder). Or should I just stick with the 5060 Ti 16GB, given that the RTX 5000 series may get a Super refresh sometime next year? Although I don't think it's feasible to upgrade to those that soon after buying a 5060 Ti.
I would also like to ask whether AMD options are reasonable considerations as well. Within my budget, I'd be more willing to get a 9070 or 9070 XT with their 16GB.
As for work, I'm mostly interested in training models and learning more in this field. At least I want to learn what I can and build a portfolio for internships after I graduate from my university. | 2025-10-19T12:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/1oanxn0/pc_rig_to_get_started/ | Due_Librarian_7026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oanxn0 | false | null | t3_1oanxn0 | /r/LocalLLaMA/comments/1oanxn0/pc_rig_to_get_started/ | false | false | self | 0 | null |
Qwen3 Next support almost ready 🎉 | 347 | 2025-10-19T11:52:59 | https://github.com/ggml-org/llama.cpp/pull/16095#issuecomment-3419600401 | beneath_steel_sky | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oanpdt | false | null | t3_1oanpdt | /r/LocalLLaMA/comments/1oanpdt/qwen3_next_support_almost_ready/ | false | false | 347 | {'enabled': False, 'images': [{'id': 'i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=108&crop=smart&auto=webp&s=73b88a6c262292a039d872eabdff777c84191417', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=216&crop=smart&auto=webp&s=0ecda1ffea37ec5a5ac07a5f0b7789607c438dc6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=320&crop=smart&auto=webp&s=e261406afdea7bdd66fe5ac5d062b1f111969ade', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=640&crop=smart&auto=webp&s=c40ba30707796f926638df0347f891c8e7cb6d0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=960&crop=smart&auto=webp&s=597fe208b8cebb8a1f54cd58b6322a50397e734f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?width=1080&crop=smart&auto=webp&s=20ebb71b8625b9410ec8e0e19c24cd417a8d35a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i7eFNEDuUciRrfCZPE4vDbbnitlKFru9a-LhPWvWNKY.png?auto=webp&s=d2b3806709f593d55dade1cda2041ca81a6ada09', 'width': 1200}, 'variants': {}}]} | ||
How to get an NVIDIA DGX Spark in India | 0 | Hi all, I have been thinking of getting my hands on an NVIDIA DGX Spark since its announcement (despite its abysmal memory bandwidth), but it has not been officially launched in India (most probably due to low interest and purchasing power). I think it might never launch. Is there any way to get one without risking a shady reseller, or is there anything else comparable in the same price range? I want it mostly for fine-tuning and small-scale model training. | 2025-10-19T11:52:26 | https://www.reddit.com/r/LocalLLaMA/comments/1oanp2e/how_to_get_a_nvidia_dgx_spark_in_india/ | Pitiful-Elk-1114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oanp2e | false | null | t3_1oanp2e | /r/LocalLLaMA/comments/1oanp2e/how_to_get_a_nvidia_dgx_spark_in_india/ | false | false | self | 0 | null |
Struggling with codex-cli using open-weight models | 2 | I am messing around with codex-cli. I got GLM 4.6 (via z.ai) working just fine, but my attempts to get DeepSeek or gpt-oss-120b working through nano-gpt or OpenRouter are largely failing - sometimes I get an answer or two, but more often codex does nothing or just says 'Ok' (DS3.2 via OpenRouter seems to work half reliably; all the other combos fail).
The requests get logged in the API usage overviews, so the config seems to be correct:
>[model_providers.nanogpt]
># Name of the provider that will be displayed in the Codex UI.
>name = "nanogpt"
># The path `/chat/completions` will be amended to this URL to make the POST
># request for the chat completions.
>base_url = "https://nano-gpt.com/api/v1"
>env_key = "NanogptKey"
>
>[profiles.gptoss]
>model = "openai/gpt-oss-120b"
>model_provider = "nanogpt"
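(One knob I have not confirmed for these providers yet: codex's per-provider `wire_api` setting, which as far as I understand selects between the Responses and Chat Completions wire formats; a chat-only backend may need it set explicitly. A sketch, not verified:)

>[model_providers.nanogpt]
>name = "nanogpt"
>base_url = "https://nano-gpt.com/api/v1"
>env_key = "NanogptKey"
>wire_api = "chat"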
Anything I am missing?
In particular, gpt-oss would be attractive for its speed (I can use DeepSeek through Roo if need be, but Roo is not totally compatible with gpt-oss) | 2025-10-19T11:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1oangg9/struggling_with_codexcli_using_open_weights_models/ | Simple_Split5074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oangg9 | false | null | t3_1oangg9 | /r/LocalLLaMA/comments/1oangg9/struggling_with_codexcli_using_open_weights_models/ | false | false | self | 2 | null |
Laptop recommendations for AI/ML workloads | 0 | I am planning to buy a laptop for AI/ML workloads (in India). While I can only buy 8GB GPUs with my budget, I believe that would be okay for at least smaller LLMs (I would like to run inference on a 30B, but lower is also fine).
It is very weird, but the difference between the 3060, 4060, and 5060 is just around 30k INR, so I was thinking of buying the 5060 itself. However, I have heard there might be heating and software issues with the newer RTX cards, and I need some advice on which ones are good, plus reviews about heating issues, battery performance, and so on.
I would also like to know which chips/hardware utilize the graphics more effectively (e.g., whether an i5 14th-gen HX with 16GB RAM will utilize an RTX 5060 8GB well, and so on; I don't know if this is true though 😅).
I am looking at the HP Omen and the Lenovo Legion Pro 5i Gen 10:
https://amzn.in/d/4l9IV1P
Previously, I did try looking for laptops with 16GB or 32GB graphics cards but understood that those would be well beyond my budget.
Any advice or suggestions would be helpful, for example whether an Apple M3 Mac would be better, whether another laptop or an RTX 3060 would be better, or whether buying a laptop abroad is better, and so on.
Thanks a lot
| 2025-10-19T11:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/1oan2mk/laptop_recommendations_for_ai_ml_workloads/ | Commercial-Fly-6296 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oan2mk | false | null | t3_1oan2mk | /r/LocalLLaMA/comments/1oan2mk/laptop_recommendations_for_ai_ml_workloads/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]} |