| title (string) | score (int) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best Local MCP servers for coding support? | 2 | I'm coming from a place where I understand some of them for web searches and adding features to your LLM.
There was recently an Aider Polyglot benchmark run for GPT-OSS 120B using Aider's repomap, scoring 78.7%, which is very desirable for a local model.
This repomap can apparently be exposed as a local MCP server, and as a RooCode user I'm just reading into how to set it up and what is required.
Beyond that, I have no idea: what are the normal local MCP servers people use for coding support that are easy to set up? | 2025-09-07T14:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nauk10/best_local_mcp_servers_for_coding_support/ | Dundell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nauk10 | false | null | t3_1nauk10 | /r/LocalLLaMA/comments/1nauk10/best_local_mcp_servers_for_coding_support/ | false | false | self | 2 | null |
In your experience, what are the most consistent local models for tool calling and/or object generation? | 6 | I want to forget about benchmarks for a second and get a feel for people’s experience in practice.
What models have you found to be the most consistent for tool calling and/or object generation? Feel free to provide multiple.
**Optionally:**
- What have you found the limitations to be, if any? *e.g. nested types, context constraints, infinite loops*
- Are there any kinks to get it working as expected? *e.g. custom instructions, custom parsing, programmatic intervention, model routing*
- What are your use cases? *To get a better understanding of the conditions that the model is performing under, and complexity of expected output* | 2025-09-07T14:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nau4ty/in_your_experience_what_are_the_most_consistent/ | AnotherSoftEng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nau4ty | false | null | t3_1nau4ty | /r/LocalLLaMA/comments/1nau4ty/in_your_experience_what_are_the_most_consistent/ | false | false | self | 6 | null |
Llama-OS - I'm developing an app to make llama.cpp usage easier. | 236 | Hello Guys,
This is an app I'm working on. The idea behind it is that it uses llama-server directly, so updating llama.cpp becomes seamless.
Currently it does:
* Model management
* Hugging Face Integration
* Llama.cpp GitHub integration with releases management
* Llama-server terminal launching with easy arguments customization, Internal / External
* Simple chat interface for easy testing
* Hardware monitor
* Color themes | 2025-09-07T14:03:31 | https://v.redd.it/qc7edhshyqnf1 | fredconex | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nau0qe | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qc7edhshyqnf1/DASHPlaylist.mpd?a=1759845827%2CNmE5MTIxZTMwMTdlZWRiMzJiZmVlOTY0YzI2MzQ4ODMwYzk2YjUyN2IxMDFmMzYyZWU3NmE0ZGM5MTQyMTllNg%3D%3D&v=1&f=sd', 'duration': 111, 'fallback_url': 'https://v.redd.it/qc7edhshyqnf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/qc7edhshyqnf1/HLSPlaylist.m3u8?a=1759845827%2CODU2NzUyMGM1MDhiZDAzZTJhNmY0Zjk5ODc1ZDc1NmQ3MDAxYTg3N2E0ZjE2ZmFlNDE2MDUzNTg2OGFkNTVjNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qc7edhshyqnf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1038}} | t3_1nau0qe | /r/LocalLLaMA/comments/1nau0qe/llamaos_im_developing_an_app_to_make_llamacpp/ | false | false | 236 | {'enabled': False, 'images': [{'id': 'MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=108&crop=smart&format=pjpg&auto=webp&s=f394bd64a29539c88777be670bad2ac4478deb8a', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=216&crop=smart&format=pjpg&auto=webp&s=d1da8d5934b201368a7d7ac59ce9f5712aca10d7', 'width': 216}, {'height': 222, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=320&crop=smart&format=pjpg&auto=webp&s=0078062a4ac282e11878000d70515d71108417e6', 'width': 320}, {'height': 444, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=640&crop=smart&format=pjpg&auto=webp&s=06040b6d8e79bcf574c93214908266ce5888bab0', 'width': 640}, {'height': 666, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=960&crop=smart&format=pjpg&auto=webp&s=0b8f7da7cc570a5db7f4fbdefca0e25ff287cf5a', 'width': 960}, {'height': 749, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1d7ffc706473d82199954cff8bf620f951357f1e', 'width': 1080}], 'source': {'height': 1034, 'url': 'https://external-preview.redd.it/MzczZWhoc2h5cW5mMSpEG6AmlfNZCDZthrNu5xlRNijQvZUzUBXEn_GdpClu.png?format=pjpg&auto=webp&s=e9f2cdd2ebc08bb72c0406cab6ff8f822a554878', 'width': 1490}, 'variants': {}}]} | |
Help picking a model | 2 | Hi, I am getting a new PC soon with 32GB of RAM at 6000MHz, a Ryzen 5 9600X CPU and an Asus 5070 Prime OC 12GB GPU. (Old PC is a 1070, 16GB 2400MHz, i7-5820K)
Was looking into it myself, but without the PC here yet, I could not do any tests, and only really just googled (DDG) it and use something like chatGPT to get help into finding something that might work. But I got bad results from looking online, and I don't trust GPT enough, so I thought I would ask an AI sub on Reddit for some extra help. Here is what I am looking for:
Looking to run a local text AI with kobold (so GGUF) into Sillytavern. (Roleplay) I want an uncensored model and for it to have at least 8K of context (as high as I can get it, ideally). I mostly care about context, but quality and speed are nice. I only care about it making text on par with or faster than what I can read, so no need for like 100t/s, but still responsive so I don't wait ages for a response. So context > quality > speed.
So far I have only used online stuff like novelAI in the past (like when it first came out), so I am not super familiar with AI terms (the few I know are mostly surface knowledge). So I was wondering if I could get links to some huggingface models that fit my PC specs and what I want. I have a few models saved, but I was thinking maybe someone who knows better could link me to stuff that will work. Otherwise, I would have had to spend a few hours trial-and-erroring my way to what I wanted. | 2025-09-07T13:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nat22v/help_picking_a_model/ | mrdonkyman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nat22v | false | null | t3_1nat22v | /r/LocalLLaMA/comments/1nat22v/help_picking_a_model/ | false | false | self | 2 | null |
Which model can solve this ? | 2 | 2025-09-07T13:20:52 | https://x.com/cneuralnetwork/status/1963988141364134337?t=vtftZfo5mKU4pxccHSCM-g&s=19 | JeffreySons_90 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1nat0tm | false | null | t3_1nat0tm | /r/LocalLLaMA/comments/1nat0tm/which_model_can_solve_this/ | false | false | default | 2 | null | |
GPT-OSS on lm-studio advice | 1 | Hello all-- looking for some practical advice on running the excellent gpt-oss models on lm-studio. I'm running a 5090 on a 9800X3D with 64GB DDR5-6000.
First, flash attention:
GPT-OSS-20B, with 64K context and feeding it about that many tokens:
Fits on the GPU with 27GB. Feeding about 60K tokens takes 1 minute for prompt processing, and it then generates about 75 tokens/sec.
Doing the same with flash attention enabled (which isn't the default): Only needs 15 GB, loads in 10 sec and much faster.
With 128K and flash, 26 seconds for the prompt, then 90 tokens/sec, using only 19GB of GPU memory. This is really, really fast, good enough to do interactive work with maximum context.
Trying this without flash runs into system RAM and becomes extremely slow.
GPT-OSS-120B, 64K prompt, with flash: 27GB GPU, 50 GB main ram (barely fits.)
3 minutes to eat the prompt, then 5 tokens/sec.
So here are the questions:
1. The "Force model expert weights onto CPU" loading option seems to force ALL the model weights onto the CPU, leaving lots of empty room on the GPU, even with the GPU offload slider all the way to the right. I can't load the 120B model at all this way because it doesn't use my VRAM enough. Is there some way to get it to be more intelligent about this?
2. How much does flash attention affect the quality of the results? It sure speeds things up!
Thanks,
M | 2025-09-07T12:35:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nas1zt/gptoss_on_lmstudio_advice/ | Labtester | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nas1zt | false | null | t3_1nas1zt | /r/LocalLLaMA/comments/1nas1zt/gptoss_on_lmstudio_advice/ | false | false | self | 1 | null |
I've built a CLI tool that can generate code and scripts with AI using Ollama or LM studio | 0 | I’ve been working on a CLI tool (similar to **claude-code**) that lets you go from simple questions (e.g., *“I want a script to list the 10 biggest files on my OS”*) to more complex tasks (e.g., *“Build me a RESTful API using Express”*).
You can install it with:
pip install xandai-cli
And if you’d like to support the project, you can give it a star on GitHub:
[XandAI-CLI](https://github.com/XandAI-project/Xandai-CLI) | 2025-09-07T12:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/1narwzb/ive_built_a_cli_tool_that_can_generate_code_and/ | Sea-Reception-2697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1narwzb | false | null | t3_1narwzb | /r/LocalLLaMA/comments/1narwzb/ive_built_a_cli_tool_that_can_generate_code_and/ | false | false | self | 0 | null |
I have a "practical case" that takes me ages everyweek, I'd like to know more about AI local agents | 1 | Hi there,
I have a few "redundant" actions that are using a lot of my time. This is basically writing classified ads on very "niche" websites. I tried to automatise it before, with a macro and then Python. The issue is that the websites (about 15-20 of them) change from time to time, which can break the script.
So basically I left things as they are and did it manually, since the time wasted refining the script wasn't a good investment.
But, I was thinking that adjusting to the small changes should be in the area of an AI agent. I'd like however to have it local, even though it takes forever to perform the task.
Do you have any recommendation to run it locally ? | 2025-09-07T12:20:57 | https://www.reddit.com/r/LocalLLaMA/comments/1narr4l/i_have_a_practical_case_that_takes_me_ages/ | Hour_Channel3379 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1narr4l | false | null | t3_1narr4l | /r/LocalLLaMA/comments/1narr4l/i_have_a_practical_case_that_takes_me_ages/ | false | false | self | 1 | null |
Best sub-15B Ollama model for fact extraction? | 0 | I’m building a pipeline that extracts facts from transcribed dialogues/chat logs and stores them in a RAG index.
Inputs can be long (up to \~32k tokens).
Now I'm looking for a lightweight Ollama model that’s fast on an RTX 5070 Ti + 32 GB RAM but still accurate and stable.
**Requirements:** Language German and English, I prefer **4–7B** (ok up to **15B**), good instruction following, low hallucinations.
What models do you recommend and how would you rank them? | 2025-09-07T11:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1naqzfm/best_sub15b_ollama_model_for_fact_extraction/ | NenntronReddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naqzfm | false | null | t3_1naqzfm | /r/LocalLLaMA/comments/1naqzfm/best_sub15b_ollama_model_for_fact_extraction/ | false | false | self | 0 | null |
Anyone Know if There Are Any Other Uncensored Models Besides Grok? | 0 | I tested models from a few companies (OpenAI, Anthropic, Google, DeepSeek, NVIDIA), and they are all censored "for safety" or whatever.. Does anyone here know of models that are naturally uncensored like Grok (no, I don't mean abliterated)?
Anyway I asked Grok about their uncensored status when it comes to text-related tasks and how they compared to other models and here is the reply:
Grok: "I'm built to be more "uncensored" in this area—maximally truthful and helpful without unnecessary restrictions."
Though from what users are reporting, Grok has become more censored in recent months, especially in terms of images; text doesn't seem to have been affected, thankfully: [https://www.reddit.com/r/grok/comments/1joqs98/is_grok_becoming_less_uncensored_now/](https://www.reddit.com/r/grok/comments/1joqs98/is_grok_becoming_less_uncensored_now/) | 2025-09-07T11:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1naqv29/anyone_know_if_there_any_other_uncensored_models/ | Zephyr1421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naqv29 | false | null | t3_1naqv29 | /r/LocalLLaMA/comments/1naqv29/anyone_know_if_there_any_other_uncensored_models/ | false | false | self | 0 | null |
Reson — a weird 7B LoRA that doesn’t hallucinate, it adapts | 0 | I fine-tuned LLaMA-2 7B Chat (~11k examples) but with a twist: instead of trying to reduce hallucinations, I trained it to adapt.
It does meta-cognition, scenario reasoning, and cross-domain jumps.
Some outputs look crazy, but they’re intentional — it tries to reflect and strategize instead of just answering safe.
Repo + demo script: huggingface.co/Nexus-Walker/Reson
Curious what you guys think:
is this kind of adaptive weirdness useful for experimentation, or just noise compared to factual models?
| 2025-09-07T11:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1naqnhn/reson_a_weird_7b_lora_that_doesnt_hallucinate_it/ | Ill-Button-1680 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naqnhn | false | null | t3_1naqnhn | /r/LocalLLaMA/comments/1naqnhn/reson_a_weird_7b_lora_that_doesnt_hallucinate_it/ | false | false | self | 0 | null |
How is qwen3 4b this good? | 466 | This model is on a different level. The only models which can beat it are 6 to 8 times larger. I am very impressed. It even Beats all models in the "small" range in Maths (AIME 2025). | 2025-09-07T11:18:38 | https://www.reddit.com/gallery/1naqln5 | Brave-Hold-9389 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1naqln5 | false | null | t3_1naqln5 | /r/LocalLLaMA/comments/1naqln5/how_is_qwen3_4b_this_good/ | false | false | 466 | null | |
software engineering best practices | 5 | hey there, just an inquiry
is there a benchmark for evaluating LLM performance on software engineering best practices?
I notice many models write code and tend to ignore best practices | 2025-09-07T10:37:05 | https://www.reddit.com/r/LocalLLaMA/comments/1napx37/software_engineering_best_practices/ | angelo_justBuild | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1napx37 | false | null | t3_1napx37 | /r/LocalLLaMA/comments/1napx37/software_engineering_best_practices/ | false | false | self | 5 | null |
check https://huggingface.co/papers/2509.01363 | 66 | The paper shows that reasoning ability can be extracted as a vector from RL-trained models and added to others via simple arithmetic to boost reasoning without retraining
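For readers skimming, the core recipe (written in standard task-arithmetic shorthand; this is a paraphrase, not the paper's exact notation) is roughly:

$$\tau_{\text{reason}} = \theta_{\text{RL}} - \theta_{\text{base}}, \qquad \theta_{\text{target}}' = \theta_{\text{target}} + \lambda\,\tau_{\text{reason}}$$

i.e. subtract the base weights from the RL-trained weights to get a "reasoning vector", then add a scaled copy of it to another model's weights, with λ as a tunable strength.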
would appreciate an upvote [https://huggingface.co/papers/2509.01363](https://huggingface.co/papers/2509.01363) | 2025-09-07T10:24:40 | https://www.reddit.com/r/LocalLLaMA/comments/1napq0m/check_httpshuggingfacecopapers250901363/ | LowChance4561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1napq0m | false | null | t3_1napq0m | /r/LocalLLaMA/comments/1napq0m/check_httpshuggingfacecopapers250901363/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': '6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=108&crop=smart&auto=webp&s=bdcbdfdc699666b1b4a083ed18650739cb1492c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=216&crop=smart&auto=webp&s=6f15f469842b054fe5a92c4cb1bbad6e04f2e7e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=320&crop=smart&auto=webp&s=3cfd1d05dbbd072c8e018eb3d6935c21e0487aa3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=640&crop=smart&auto=webp&s=6e2368609d045193f21436c56e3f29d8cf10bbf0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=960&crop=smart&auto=webp&s=441af03cb22210caca4cd04c5b41af9e76776e07', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=1080&crop=smart&auto=webp&s=25c99948b19bbb5746e9c020269e2e556b92b32c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?auto=webp&s=5df32261adeff3a2bcda144430d36dcd42965fea', 'width': 1200}, 'variants': {}}]} |
ChatGPT Pro UNCENSORED | 1 | [removed] | 2025-09-07T10:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/1napo4c/chatgpt_pro_uncensored/ | No_Illustrator_1359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1napo4c | false | null | t3_1napo4c | /r/LocalLLaMA/comments/1napo4c/chatgpt_pro_uncensored/ | false | false | self | 1 | null |
Imagine an AI Coding Assistant CLI with Domain Expertise like Tech Leads and Vector Code Search like Crusor | 15 | Ever wished your AI coding assistant actually understood your team's domain knowledge and architectural decisions?
Just shipped **Terra Code CLI** - the first AI assistant that learns your organization's patterns and works like a senior developer.
**What makes it different:**
• **Interactive KT Sessions** - Senior devs teach Terra through structured knowledge transfer
• **Semantic Code Search** - Lightning-fast indexing of entire codebases for analysis
• **Persistent Memory** - Remembers team standards across all projects
• **Domain Expertise** - Upload architecture docs, API specs (.txt, .md, .docx, .pdf)
**Built on Qwen's foundation** (thanks to the Qwen team!) + Gemini CLI framework.
**Try it free during beta:**
```bash
npm install -g @terra-code/terra-code@latest
terra
```
**Which feature would most improve your coding workflow?**
- Full domain knowledge integration
- Semantic code search capabilities
- Persistent team memory
- Interactive knowledge transfer
**Beta ending soon** - perfect time to onboard your team's knowledge!
**Question for the community:** Have you faced challenges with AI coding assistants lacking domain understanding? How did it impact your development process?
**GitHub:** [Star us on GitHub](https://github.com/TerraAGI/terra-code-cli)
**Website:** [Visit our website](https://terra-agi.com/)
---
*Built with ❤️ by the TerraAGI team* | 2025-09-07T10:08:33 | prabhjots665 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1napgwx | false | null | t3_1napgwx | /r/LocalLLaMA/comments/1napgwx/imagine_an_ai_coding_assistant_cli_with_domain/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'YHt8d6rpTal_tV9_RIgUDMhtjLWgWLbVMEWEuOHc-AI', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=108&crop=smart&auto=webp&s=f3f0a57e078c3cb69ddd02dbed9316b22aa0507c', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=216&crop=smart&auto=webp&s=28bf26f7347e83fddb5af1308654cd9005592ce0', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=320&crop=smart&auto=webp&s=85cfe5cd7c2e354630f8c7a4e8cdd30ca48c44b3', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=640&crop=smart&auto=webp&s=d57de564e1e533aa28806e728440b57bef0c8bf2', 'width': 640}, {'height': 549, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=960&crop=smart&auto=webp&s=6cb33ca68518ce13c51fe3cc8305e60e0f7dc51e', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?width=1080&crop=smart&auto=webp&s=3e10ce4bdc41a98a56cccf3dd8ab9166ff0cc944', 'width': 1080}], 'source': {'height': 732, 'url': 'https://preview.redd.it/822tkwtuvpnf1.jpeg?auto=webp&s=72c3818a089763c4b11615368611d488c48b26ac', 'width': 1280}, 'variants': {}}]} | ||
Best app and quant type for hosting LLM on modern android phone, with an endpoint for Sillytavern? | 3 | I mean actually running the model on the device, not accessing a remote API elsewhere. The use case is run a small RP fine tune on the phone in places without internet access.
Phone is S23 Ultra with a Snapdragon 8 Gen 2 with 8GB ram, 4B and small quant 8Bs fit but small quant 8Bs seem unusably slow (with Layla at least).
- I don't know what quant type is best for ARM SOCs,
- or what apps will run what kinds of quants,
- or which apps/quant types utilize the Qualcomm Hexagon AI accelerator chip thing embedded into the Snapdragon 8 Gen 2,
- or if I should even care about utilizing the Hexagon AI accelerator vs the standard compute on the main cpu in the SOC,
- or which apps (if any) will give me an actual IP address/port or other type of endpoint that Sillytavern can use (Sillytavern works through Termux on android; see the sketch after this list).
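For what it's worth, the kind of endpoint Sillytavern can consume is just an OpenAI-compatible HTTP server; a minimal sketch with llama.cpp's llama-server inside Termux (assuming it builds and runs on the device; the model path is a placeholder) would be:

```bash
# Start an HTTP server on the phone; llama-server also exposes OpenAI-style /v1 routes
./llama-server -m ~/models/some-4b-q4_k_m.gguf -c 4096 --host 127.0.0.1 --port 8080
# Sillytavern can then be pointed at http://127.0.0.1:8080
```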
Layla crashes all the time and does not seem to be able to give an endpoint for Sillytavern. It also gives terrible outputs compared to the same models on PC, and from what I can tell my sampler settings are the same.
I've used search but all the posts about this are either very outdated or not quite relevant (no mention of how to set up a link between Sillytavern as the frontend on the phone, and some type of host on the phone). I'd appreciate any guidance. | 2025-09-07T09:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/1naolrp/best_app_and_quant_type_for_hosting_llm_on_modern/ | CanineAssBandit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naolrp | false | null | t3_1naolrp | /r/LocalLLaMA/comments/1naolrp/best_app_and_quant_type_for_hosting_llm_on_modern/ | false | false | self | 3 | null |
[ Removed by Reddit ] | 1 | [removed] | 2025-09-07T08:57:14 | https://www.reddit.com/r/LocalLLaMA/comments/1naod47/removed_by_reddit/ | Aromatic_Bonus583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naod47 | false | null | t3_1naod47 | /r/LocalLLaMA/comments/1naod47/removed_by_reddit/ | false | false | self | 1 | null |
From diffusion to LLMs: Need advice on best local models for my new 96GB RTX 6000 workstation | 0 | Hello everyone, not long ago I used years of savings to build what may be the most expensive computer of my life— it cost me nearly $12,500. The detailed specifications are:
CPU: AMD Ryzen 9 9950X3D (Retail Box)
CPU Cooler: Thermalright A90 360mm Liquid Cooler
Motherboard: MSI MAG X870E Tomahawk
Memory: G.Skill Trident Z5 256GB (4x64GB) DDR5-6600 Kit
Storage: Zhitai TiPlus 9000 4TB NVMe SSD ×2
Graphics Card: NVIDIA RTX 6000 Ada Generation (PRO) 96GB
Case: Segotep 620 Workstation Chassis, Transparent Side Panel
Power Supply: Seasonic Prime 1200W (Gold Rated)
I gave up the crazy idea of building a server-grade configuration because my main experience before purchasing was running diffusion models like sd, flux, and wan.
I'm asking for help here because this community is full of experienced localllm users. Based on my current setup, could you recommend and advise on models for running LLMs locally? I would be very grateful, as I've always been very interested in local LLMs | 2025-09-07T08:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nao8tw/from_diffusion_to_llms_need_advice_on_best_local/ | Adventurous-Bit-5989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nao8tw | false | null | t3_1nao8tw | /r/LocalLLaMA/comments/1nao8tw/from_diffusion_to_llms_need_advice_on_best_local/ | false | false | self | 0 | null |
I managed to compile and run Llama 3B Q4_K_M on llama.cpp with Termux on ARMv7a, using only 2 GB. | 31 | I used to think running a reasonably coherent model on Android ARMv7a was impossible, but a few days ago I decided to put it to the test with llama.cpp, and I was genuinely impressed with how well it works. It's not something you can demand too much from, but being local and, of course, offline, it can get you out of tricky situations more than once. The model weighs around 2 GB and occupies roughly the same amount in RAM, although with certain flags it can be optimized to reduce consumption by up to 1 GB. It can also be integrated into personal Android projects thanks to its server functionality and the endpoints it provides for sending requests.
If anyone thinks this could be useful, let me know; as soon as I can, I’ll prepare a complete step-by-step guide, especially aimed at those who don’t have a powerful enough device to run large models or rely on a 32-bit processor. | 2025-09-07T07:38:21 | https://www.reddit.com/gallery/1nan5az | arbolito_mr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nan5az | false | null | t3_1nan5az | /r/LocalLLaMA/comments/1nan5az/i_managed_to_compile_and_run_llama_3b_q4_k_m_on/ | false | false | 31 | null | |
HF releases 3T token dataset sourced entirely from PDFs. | 470 | Hey guys, something we have teased a bit during our AMA is finally out:
📄 FinePDFs, the largest PDF dataset ever released, spanning over half a billion documents!
- Long context: Documents are 2x longer than web text
- 3T tokens from high-demand domains like legal and science.
- Heavily improves over SoTA when mixed with the FW-EDU & DCLM web corpora 📈. | 2025-09-07T07:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/1namz1q/hf_releases_3t_tokens_dataset_sourced_entirely/ | Other_Housing8453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1namz1q | false | null | t3_1namz1q | /r/LocalLLaMA/comments/1namz1q/hf_releases_3t_tokens_dataset_sourced_entirely/ | false | false | self | 470 | null |
I managed to compile and run Llama 3B Q4_K_M on llama.cpp with Termux on ARMv7a, using only 2 GB. | 0 | I used to think that running a reasonably coherent model on Android ARMv7a was impossible, but a few days ago I decided to put it to the test with llama.cpp and I was genuinely impressed with how well it works. It's not something you can demand too much from, but being local and, obviously, offline, it can get you out of trouble in more than one situation. The model weighs around 2 GB and occupies practically the same amount in RAM, although with certain flags it can be optimized to reduce consumption by up to 1 GB. It can also be integrated into personal Android projects thanks to its server functionality and the endpoints it provides for sending requests.
If anyone thinks it could be useful, let me know; as soon as I can, I will prepare a complete step-by-step guide, aimed above all at those who do not have a device powerful enough to run large models or who depend on a 32-bit processor. | 2025-09-07T07:24:32 | https://www.reddit.com/gallery/1namxo6 | arbolito_mr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1namxo6 | false | null | t3_1namxo6 | /r/LocalLLaMA/comments/1namxo6/logré_compilar_y_ejecutar_llama_3b_q4_k_m_en/ | false | false | 0 | null |
HuggingFace releases 3T token dataset sourced entirely from PDFs | 1 | [deleted] | 2025-09-07T07:21:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1namw7i | false | null | t3_1namw7i | /r/LocalLLaMA/comments/1namw7i/huggingface_releases_3t_token_dataset_sourced/ | false | false | default | 1 | null | ||
Best 100B class model/framework to run on 16 P100s (256GB of VRAM)? | 15 | I’ve got 16× Tesla P100s (256 GB VRAM) and I’m trying to explore and find how to run 100B+ models with max context on Pascal cards.
See the machine: https://www.reddit.com/r/LocalLLaMA/comments/1ktiq99/i_accidentally_too_many_p100/
At the time, I had a rough time trying to get Qwen3 MoE models to work with Pascal, but maybe things have improved.
The two models at the top of my list are gpt-oss-120B and GLM-4.5-Air. For extended context I’d love to get one of the 235B Qwen3 models to work too.
I’ve tried llama.cpp, Ollama, ExLlamaV2, and vllm-pascal. But none have handled MoE properly on this setup. So, if anyone has been able to run MoE models on P100s, I'd love to have some pointers. I’m open to anything. I’ll report back with configs and numbers if I get something working. | 2025-09-07T06:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nam7i1/best_100b_class_modelframework_to_run_on_16_p100s/ | TooManyPascals | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nam7i1 | false | null | t3_1nam7i1 | /r/LocalLLaMA/comments/1nam7i1/best_100b_class_modelframework_to_run_on_16_p100s/ | false | false | self | 15 | null |
anyone tried to serve OSS with VLLM on T4 GPU | 1 | In the past few days I have been trying to deploy the OSS model using a T4 GPU with offloading, but with no success.
The main reason is that its quantization is not supported on older GPUs like the T4.
BTW, what's the best way to serve quantized LLMs using vLLM? (I am mainly using AWQ, but it doesn't seem to support the modern models.) Please suggest the best approach you are using.
Thanks | 2025-09-07T06:08:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nalohd/anyone_tried_to_serve_oss_with_vllm_on_t4_gpu/ | Accomplished_Pin_626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nalohd | false | null | t3_1nalohd | /r/LocalLLaMA/comments/1nalohd/anyone_tried_to_serve_oss_with_vllm_on_t4_gpu/ | false | false | self | 1 | null |
Create a shared alternative to OpenRouter Together | 11 | Hi everyone, I had this idea after reading the latest paper by Nvidia on making large models more efficient for long context through modification of the model.
I did some calculations on OpenRouter margins for models like the 480B-parameter Qwen 3 Coder, and the charges for running the model on OpenRouter are quite high, especially compared with running it on an 8xB200 GPU system that can be rented for about 22 to 29 dollars an hour from DataCrunch.io. Without any model optimization, and assuming fairly large inputs averaging 10k+ tokens, OpenRouter is about three to five times more expensive than running it on the 8xB200 system yourself. However, if we use a model optimized with the latest Nvidia paper's techniques, it's about 5-10 times cheaper to run than the listed price. It does cost quite a lot to optimize a model, even if we only use some of the optimizations in the paper.
My original thought was to create an inference provider on OpenRouter using the low-hanging-fruit optimizations from the paper to make a good profit, but I'm not that interested in making another business right now or making more money. However, I figure if we pool our knowledge together, and our financial and GPU resources, we can do a light pass of optimizations on the most common models and offer inference to each other at a close-to-cost rate, basically saving a large amount compared to the cost of OpenRouter.
What are your thoughts? | 2025-09-07T05:56:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nalgur/create_a_shared_alternative_to_openrouter_together/ | Far-Incident822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nalgur | false | null | t3_1nalgur | /r/LocalLLaMA/comments/1nalgur/create_a_shared_alternative_to_openrouter_together/ | false | false | self | 11 | null |
🤖 Tried Using LLaMA for Student Revision Tools (Notes + Flashcards) | 5 | 👉 https://examsprint.pages.dev
I’ve been experimenting with building a study assistant for CBSE/JEE/NEET prep. The idea was to:
Generate flashcards from NCERT chapters
Provide quick topic-wise notes
Add a chatbot for doubts
Right now I’m testing smaller LLaMA models locally, but inference speed is still a challenge. Curious if anyone here has optimized LLaMA for lightweight use-cases like student Q&A or flashcard generation.
| 2025-09-07T05:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nal1mf/tried_using_llama_for_student_revision_tools/ | maker_of_examsprint | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nal1mf | false | null | t3_1nal1mf | /r/LocalLLaMA/comments/1nal1mf/tried_using_llama_for_student_revision_tools/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?width=108&crop=smart&auto=webp&s=0b2325c4807f5e789ff2724bdd7314a09415e0e5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?width=216&crop=smart&auto=webp&s=8409bc79d08b81f39a5382e360b681892f8299c5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?width=320&crop=smart&auto=webp&s=ca57cb2dff805aaa9ab5f610344327c9cb888862', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?width=640&crop=smart&auto=webp&s=37a8f684fb44b8d6e970a5cc884241e6e12ae130', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?width=960&crop=smart&auto=webp&s=9025c62a9e98fc56b9a45b120ccf8db286819d66', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/-pz9facuOM06xX3thgwJXQv6qid3p588Dee-2lfI2gg.png?auto=webp&s=a3b77a25279dd2c3dd0f86b3c48089d8cff0e001', 'width': 1024}, 'variants': {}}]} |
Best for Coding | 18 | I was reading the discussion about the pros and cons of K2 – 0905, GLM 4.5, DeepSeek, etc. I have used all of these, although not extensively. Then I tried Qwen3-Coder, which seems so superior for any type of coding work. And yet I seldom see Qwen3-Coder discussed or commented on; is there some reason it is not popular? | 2025-09-07T04:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nake20/best_for_coding/ | johanna_75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nake20 | false | null | t3_1nake20 | /r/LocalLLaMA/comments/1nake20/best_for_coding/ | false | false | self | 18 | null |
Weird chat completions response from gpt-oss-20b | 1 | I received the following chat completions response from a locally hosted gpt-oss-20b instance **randomly** during the execution of my custom multi-turn reasoning pipeline in my chatbot. Based on prior error logs, this seems to have happened a few times now where instead of the `content` field, the model outputs its response in the `reasoning_content` field. This is highly irregular as OpenAI's API docs don't have a single mention of this field. Anyone got a clue what's happening here?
{
"id": "chatcmpl-5f5f3231936d4473b6dcb1a251a1f91a",
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": null,
"refusal": null,
"role": "assistant",
"annotations": null,
"audio": null,
"function_call": null,
"tool_calls": [],
"reasoning_content": "The user wants a refined search query to get more specific information about the focus area: Graph RAG handling retrieval, reasoning, long-term consolidation compared to vector embeddings, episodic logs, symbolic stores. They want side-by-side analysis of accuracy, interpretability, scalability. They want a refined search query. So we need to propose a search query that will retrieve relevant papers, articles, or resources that discuss Graph RAG vs other memory types, focusing on retrieval, reasoning, long-term consolidation, and metrics like accuracy, interpretability, scalability. Provide a query string with advanced operators. Maybe include terms like \"Graph Retrieval-Augmented Generation\", \"vector embeddings\", \"episodic logs\", \"symbolic stores\", \"accuracy\", \"interpretability\", \"scalability\", \"long-term memory\", \"retrieval\", \"reasoning\", \"consolidation\", \"comparison\", \"benchmark\", \"hotpotqa\", \"triviaqa\", \"DiaASQ\", \"knowledge graph"
},
"stop_reason": null
}
],
"created": 1757217993,
"model": "openai/gpt-oss-20b",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": null,
"usage": {
"completion_tokens": 200,
"prompt_tokens": 1139,
"total_tokens": 1339,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"prompt_logprobs": null,
"kv_transfer_params": null
} | 2025-09-07T04:34:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nak2qq/weird_chat_completions_response_from_gptoss20b/ | xceed35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nak2qq | false | null | t3_1nak2qq | /r/LocalLLaMA/comments/1nak2qq/weird_chat_completions_response_from_gptoss20b/ | false | false | self | 1 | null |
Why do you need cheap Cloud Gpu provider? | 0 | Know that you don't need to buy thousands of dollars worth of graphics cards for your AI/ML/LLMs workloads. Cloud GPU platforms offer this service, and you can rent them for much cheaper than you might expect. The platform I use, with implementations like Jupyter and CUDA, streamlines my workflow, and you can get 24/7 support whenever you need it. Plus, if you want to try the platform, free credits are provided to get you started, eliminating the need to spend hours explaining your project.
What do you think? | 2025-09-07T03:43:28 | Dangerous_Coyote9306 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1naj4xd | false | null | t3_1naj4xd | /r/LocalLLaMA/comments/1naj4xd/why_do_you_need_cheap_cloud_gpu_provider/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '0sgzh2n5znnf1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=108&crop=smart&auto=webp&s=cc71c0dd03b8540b6bc193cd356d3a8ba496313c', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=216&crop=smart&auto=webp&s=2272b9321b09dc79abff50fb6d42e26ac2781fcb', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=320&crop=smart&auto=webp&s=92bb7c86bbe84255d92e53d370950e35e8e45d6b', 'width': 320}, {'height': 405, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=640&crop=smart&auto=webp&s=69b5c3d3941f7eb65e0fed09fdb496ac75ca6133', 'width': 640}, {'height': 608, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=960&crop=smart&auto=webp&s=b5381efd640a30cb4af81eb59b7a28a1d96f2e0a', 'width': 960}, {'height': 685, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?width=1080&crop=smart&auto=webp&s=58a2fafe8012bcd10c7fc1bbd0a9c97391d1b650', 'width': 1080}], 'source': {'height': 685, 'url': 'https://preview.redd.it/0sgzh2n5znnf1.jpeg?auto=webp&s=8eef19fb095c12c2ed779474b3e0740225965773', 'width': 1080}, 'variants': {}}]} | |
What do yall use your agents for? | 12 | My rig is 2x 3090 and 64gb vram. Currently working with 1 card to host Gemma 3 which does image recognition and text prompt stuff to basically act as my Jarvis (work in progress still, been playing with possibilities for about 2 months now and finally feel comfortable with embeddings, rag retrieval, agent tooling etc) and the other card runs some of the agents tools like image/video gen and TTS (vibe voice). Eventually want to give my agent a body like a little rc car it can drive around or something lol. Just curious what others are doing with their local setups. | 2025-09-07T03:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1naiy7o/what_do_yall_use_your_agents_for/ | ChiefMalone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naiy7o | false | null | t3_1naiy7o | /r/LocalLLaMA/comments/1naiy7o/what_do_yall_use_your_agents_for/ | false | false | self | 12 | null |
Which local LLMs for coding can run on a computer with 16GB of VRAM? | 3 | I'm a beginner trying to develop some web apps for personal use while learning to code. I recently bought a gaming graphics card with 16GB VRAM, so I thought it would be fun to utilize it for this purpose.
I installed LM Studio and Ollama to test some small local models. Next, I installed AI assistant extensions for VS Code like Cline and Roo, then connected my local model to try some simple chat interactions.
However, I got a context window overflow error. The message I sent was just a simple greeting, but when I checked LM Studio, it warned that the tokens exceeded 8k. I'm not sure why a simple greeting would be 8k tokens, but I assume there's some context needed for VS Code communication.
So I changed the context window setting in LM Studio to the model's maximum value and tried reloading the model. Then it failed to load due to insufficient memory.
The model I was trying to load is Qwen2.5 Coder 14B, which is about 8GB. I can fully load it on my GPU. However, when I set the context window to maximum, it wouldn't load.
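If I understand correctly, part of the issue is that the KV cache grows linearly with the context window, roughly:

$$\text{KV bytes} \approx 2 \cdot n_{\text{layers}} \cdot n_{\text{kv heads}} \cdot d_{\text{head}} \cdot n_{\text{ctx}} \cdot \text{bytes per element}$$

With illustrative numbers (these are assumptions, not the real Qwen2.5 config: 48 layers, 8 KV heads, head dim 128, FP16), that works out to about 0.19 MB per token, i.e. roughly 6 GB at 32k context and around 24 GB at 128k, on top of the ~8 GB of weights.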
I tried finding the maximum working setting, but even 16k wouldn't load. Am I doing something wrong? I set it to 12k and tried sending a message again - no error this time, but the loading took so long that I gave up.
I need some advice. Thanks! | 2025-09-07T03:30:30 | https://www.reddit.com/r/LocalLLaMA/comments/1naiwb2/which_local_llms_for_coding_can_run_on_a_computer/ | CrowKing63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naiwb2 | false | null | t3_1naiwb2 | /r/LocalLLaMA/comments/1naiwb2/which_local_llms_for_coding_can_run_on_a_computer/ | false | false | self | 3 | null |
Do local LLMs do almost as well with code generation as the big boys? | 21 | Hey all,
Sort of a "startup" wears all hats person like many are these days with AI/LLM tools at our disposal.
I pay for the $200 month Anthropic plan because CC (cli mode) did quite well on some tasks, and I was always running out of context with the $20 plan and even the $100 plan. However, as many are starting to say on a few llm channels, it seems like it has gotten worse. Not sure how accurate that is or not. BUT.. that, the likely growing costs, and experimenting with taking the output of CC as input to ChatGPT5 and Gemini 2.5 Pro (using some credits I have left from playing with KiloCode before I switched to CC Max).. I have been seeing that what CC puts out is often a bunch of fluff. It says all these great things like "It's 100% working, its the best ever" and then I try to use my code and find out its mostly mock, fake or CC generated the values instead of actually ran some code and got results from the code running.
It got me thinking. The monthly costs to use 2 or 3 of these things starts to add up for those of us not lucky enough to be employed and/or a company paying for it. Myself, I am unemployed for almost 2 years now and decided I want to try to build my dream passion project that I have vetted with several colleagues and they are all agreeing it is much needed and could very well be very valuable. So I figure.. use AI + my experience/knowledge. I can't afford to hire a team, and frankly my buddy in India who runs a company to farm out works was looking at $5K a month per developer.. so yah.. that's like 6+ months of multiple AIs cost.. figured not worth it for one developer month of a likely "meh" coder that would require many months or more to build what I am now working on with AI.
SO.. per my subject (sorry had to add some context).. my thought is.. would it benefit me to run a local LLM like DeepSeek or Meta or Qwen 3.. but buying the hardware.. in this case it seems like the Mac M3 Studio Ultra (hoping they announce an M4 Studio Ultra in a few days) with 512GB RAM or even the lower cpu/256GB ram would be a good way to go. Before anyone says "Dude.. thats $6K to $10K depending on configuration.. that's a LOT of cloud AI you can afford". My argument is that it seems like using Claude + ChatGPT + Gemini.. to bounce results between them is at least getting me a bit better code out of CC than CC is on its own. I have a few uses for running a local LLM for my products that I am working on, but I am wondering if running the larger models + much larger context windows will be a LOT better than using LM Studio on my desktop with 16GB of gpu VRAM. Is the results from these larger models + more context window going to be that much better? OR is it a matter of a few percentage points better? I read for example the FP16 is not any better than Q8 in terms of quality.. like literally about .1% or less better and not all the time. Given that open source models are getting better all the time, free to download/use, I am really curious if they could be coerced with the right prompting to put code out as good as claude code or ChatGPT 5 or Gemini 2.5Pro if I had a larger 200GB to 400GB model and 1mil+ context window.
I've seen some bits of info on this topic.. that yes they can be every bit as good or they are not as good because the big 3 (or so) have TBs of model size and massive amounts of hardware ($billions).. so of course a $5K to $10K Studio + OS large model may not be as good.. but is it good enough that you could rely on it to do initial ideas/draft code, then feed that code to Claude, ChatGPT, Gemini.
But the bigger ask is.. do you basically get really good overall quality code if you use multiple models against each other.. or.. working together. Like giving the prompt to local LLM. Generate a bunch of code. Then feed the project to ChatGPT. Have it come back with some response. Then tell Claude (this is what ChatGPT and my DeepSeek said.. what do you think..) and so on. My hope is some sort of "cross response" between them results in one of them (ideally local would be great to avoid cloud costs) coming up with great quality code that mostly works.
I do realize I have to review/test code.. I am not relying on the generated stuff 100%. However, I am working in a few languages two of which I know jack shit about, three of which I know a little bit of and 2 I know very well. So I am sort of relying on the knowledge of AI for most of this stuff and applying my experience/knowledge to try to re-prompt to get better results.
Maybe it's all wishful thinking. | 2025-09-07T03:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1naiud3/do_local_llms_do_almost_as_well_with_code/ | Middle_Reception286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naiud3 | false | null | t3_1naiud3 | /r/LocalLLaMA/comments/1naiud3/do_local_llms_do_almost_as_well_with_code/ | false | false | self | 21 | null |
Did you notice the VibeVoice model card privacy policy? | 18 | Quoting Microsoft's repo and HuggingFace model card.
I wonder if any of this is true for their released local-machine source code, or if it's only true for output generated by some specific website?
---
To mitigate the risks of misuse, we have:
- Embedded an audible disclaimer (e.g. “This segment was generated by AI”) automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card.
- **Logged inference requests (hashed) for abuse pattern detection and publishing aggregated statistics quarterly.**
- Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice.
- **Users are reminded to be mindful of data privacy concerns.** | 2025-09-07T03:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nairnx/did_you_notice_the_vibevoice_model_card_privacy/ | pilkyton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nairnx | false | null | t3_1nairnx | /r/LocalLLaMA/comments/1nairnx/did_you_notice_the_vibevoice_model_card_privacy/ | false | false | self | 18 | null |
Why isn't there a local tool server that replicates most of the tools available on ChatGPT? | 126 | We've made it to the point where mid-sized local LLMs can rival some cloud models in some use cases, but it feels like the local tool ecosystem is still years behind. It's a shame because models like gpt-oss-120b are pretty competent at *using* tools that they are given access to.
A small, but not-insignificant fraction of all LLM prompts in most domains *need* tools. Web search for up to date information, python interpreter for data analysis and moderately complex calculations, date and time access, and the ability to leverage an image-gen model all "just work" on ChatGPT. Even if I could run the GPT-5 model locally on my PC, it could never be usable for me without the tools.
In the local space, a quick search for MCP tool servers yields a fragmented ecosystem of servers that each do *one* thing, often highly specialized, like analyzing a github codebase or reading your google calendar. You can't come close to replicating the *basic* functionality of ChatGPT like web search and calculator without downloading 5+ servers using the command line or github (RIP beginners) and learning how to use docker or writing some master server to proxy them all into one.
Maybe I'm not looking in the right places, but it seems like people are only interested in using cloud tool servers (often with an API cost) with their local LLM, something that defeats the purpose imo. Even the new version of ollama runs the web search tool from the cloud instead of querying from the local machine.
| 2025-09-07T02:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nai7rf/why_isnt_there_a_local_tool_server_that/ | gigaflops_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nai7rf | false | null | t3_1nai7rf | /r/LocalLLaMA/comments/1nai7rf/why_isnt_there_a_local_tool_server_that/ | false | false | self | 126 | null |
The 0.3ms Reality: Why Context Assembly Speed Actually Matters for Local LLMs | 1 | When building local AI systems, we obsess over inference speed and model optimization but completely ignore context assembly performance. After profiling several production RAG implementations, I discovered this creates a massive bottleneck that nobody talks about.
# The Hidden Performance Problem
Most developers focus on optimizing their LLM inference while leaving context retrieval running at 30-50ms+ per query. For interactive AI applications, this means users spend more time waiting for context assembly than actual AI processing.
Here's what I found benchmarking different retrieval approaches:
# Performance Reality Check
**Traditional BM25 (Elasticsearch/Lucene)**
* Response time: 35-50ms typical
* Memory footprint: 200MB+ for indexes
* Quality: Good for exact matches, struggles with semantic queries
* Scaling: Performance degrades linearly with corpus size
**Vector Embeddings**
* Response time: 80-150ms (including embedding computation)
* Memory footprint: 400MB+ for vector storage
* Quality: Excellent semantic matching
* Scaling: Requires significant GPU resources for large corpora
**Alternative Approaches** Some teams are experimenting with mathematical optimization techniques that treat document selection as a constraint satisfaction problem rather than pure similarity matching. Early results suggest this can achieve sub-20ms response times with lower memory usage, though the implementation complexity is significantly higher.
# Why This Matters for Local Development
The performance gap becomes critical when you're resource-constrained:
1. **Memory pressure**: Vector indexes consume RAM that could be used for model inference
2. **Latency perception**: 30ms+ feels sluggish in interactive applications
3. **CPU cycles**: Complex similarity calculations compete with inference workloads
4. **Battery life**: On edge devices, retrieval efficiency directly impacts runtime
# What I've Learned About RAG Optimization
After benchmarking various approaches in production environments, here are the key insights:
# Memory vs Speed Trade-offs
Different retrieval strategies show distinct resource patterns:
* **Full-text indexes**: 200-300MB typical, good for exact matching
* **Vector databases**: 400MB+ for embeddings, excellent semantics but slow
* **Hybrid approaches**: Combining multiple techniques, but complexity increases
# The Caching Reality
Effective caching can dramatically improve perceived performance:
* **Document parsing**: Pre-compute features where possible
* **Query patterns**: Common queries benefit from memoization
* **Index warming**: Cold start performance matters for user experience
# Quality vs Performance
The eternal trade-off in information retrieval:
* Pure speed optimizations often sacrifice relevance quality
* Semantic approaches provide better results but cost performance
* Finding the sweet spot requires careful profiling of your specific use case
# Practical Recommendations for Local RAG
Based on production experience, here's what actually works:
# 1. Profile Your Specific Use Case
* Measure actual retrieval times in your application
* Identify whether you're memory-bound or CPU-bound
* Test with realistic document corpus sizes
# 2. Consider Hybrid Approaches
* Start with simple BM25 for baseline performance
* Add semantic layers only where quality gains justify the cost
* Experiment with different caching strategies
# 3. Optimize for Your Hardware
* Edge devices benefit from lighter-weight approaches
* Desktop applications can handle more complex retrieval
* Consider GPU availability for embedding computations
# The Broader Context
Most RAG implementations are designed with cloud-first assumptions (unlimited memory, network latency tolerance). When building truly local AI systems, every millisecond and megabyte counts.
The key insight is that context assembly performance directly impacts user experience in ways that pure inference optimization cannot address. A 50ms retrieval delay feels sluggish regardless of how fast your LLM processes the context.
# Moving Forward
The RAG performance landscape is evolving rapidly. New techniques are emerging that challenge traditional similarity-based approaches, but production deployment requires careful evaluation of the trade-offs.
I've been working on some of these optimization problems in my own project ([ContextLite](https://contextlite.com/download)) - exploring how mathematical optimization can improve both speed and quality in context assembly. Still early days, but the performance characteristics are promising for local deployment scenarios.
What's your experience with RAG performance in local applications? Have you noticed context assembly becoming a bottleneck in your AI projects? | 2025-09-07T02:36:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nahvq5/the_03ms_reality_why_context_assembly_speed/ | targetedwebresults | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nahvq5 | false | null | t3_1nahvq5 | /r/LocalLLaMA/comments/1nahvq5/the_03ms_reality_why_context_assembly_speed/ | false | false | self | 1 | null |
Best OS local model for Swift? | 2 | Hey all, I’m new to the subreddit and exploring local OS models. I’ve been doing a lot of Swift development lately and was wondering what the best OS model (as of today) might be for Swift development. Is there a leaderboard or resource to track this?
Right now, I’m using GPT-5/Claude and it’s decent, but switching between the web and Xcode is a bit of a pain (unless I’m missing a solution, or if an upcoming version of Xcode is planning to integrate LLMs). I’m not sure if a better fine-tuned Swift-specific local model exists, but I’d love to use one.
Thanks in advance, and sorry if this is a newbie question. Cheers! | 2025-09-07T01:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nagiq2/best_os_local_model_for_swift/ | brequinn89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nagiq2 | false | null | t3_1nagiq2 | /r/LocalLLaMA/comments/1nagiq2/best_os_local_model_for_swift/ | false | false | self | 2 | null |
PC build advice for local LLMs? | 3 | I’m putting together a desktop mainly to run local LLMs (20B+). From what I gather, VRAM matters most, so I’m debating between an older 3090 (24 GB) vs something newer like a 5080 (16 GB).
Other rough ideas: Ryzen 9/i9, 64–128 GB RAM, 2 TB NVMe. Budget is flexible but I don’t want to overspend if I don’t have to.
Anyone here built a rig recently for this? Curious what you’d recommend or avoid. | 2025-09-07T00:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nafefs/pc_build_advice_for_local_llms/ | r_no_one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nafefs | false | null | t3_1nafefs | /r/LocalLLaMA/comments/1nafefs/pc_build_advice_for_local_llms/ | false | false | self | 3 | null |
lmao what the point of local models then? | 0 | Though I would try my hand at ML translation. Serious? | 2025-09-07T00:26:34 | hippynox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nafad4 | false | null | t3_1nafad4 | /r/LocalLLaMA/comments/1nafad4/lmao_what_the_point_of_local_models_then/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uVZJRa7qB2lqHcSWnMyqGtHftATSZKcoPFC6kMogYf0', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=108&crop=smart&auto=webp&s=bf0504f1d4c05165c8d4ed0ea29e4cdb5264363c', 'width': 108}, {'height': 60, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=216&crop=smart&auto=webp&s=c7d45ff089772c03bbfe0c84f810a7db30e84a9d', 'width': 216}, {'height': 89, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=320&crop=smart&auto=webp&s=d5e5cfdf9a45aa4e560bc3a1e95d93bc17ad00a3', 'width': 320}, {'height': 178, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=640&crop=smart&auto=webp&s=914e4a286969945280dadf893c6ece428f73c1eb', 'width': 640}, {'height': 268, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=960&crop=smart&auto=webp&s=f20171462b19d1dec27c4e232f3ca6912540deba', 'width': 960}, {'height': 301, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?width=1080&crop=smart&auto=webp&s=5212756f09141f8b9142d9499158ef54bce68794', 'width': 1080}], 'source': {'height': 354, 'url': 'https://preview.redd.it/9ujknb2ozmnf1.jpeg?auto=webp&s=f20b3c94676d28575cff7f36fb092747e3993b94', 'width': 1266}, 'variants': {}}]} | ||
2x MI50 32GB Quant Speed Comparison (Mistral 3.2 24B, llama.cpp, Vulkan) | 45 | All tests were run on the same system with 2x MI50 32GB from AliExpress, with a fixed VBios found on this subreddit. Llama.cpp was compiled with vulkan support as that is what I use for all of my GPUs regardless of vendor.
Quants for Mistral 3.2 Small 2506 24B were sourced from both Bartowski and Unsloth; when both provided a given quant, the values were averaged, as I found negligible differences in speed and size between the two providers.
Every quant was run through 8 tests using llama-bench, with the variables in play being Flash Attention On/Off, Depth of either 0 or 32768, and the test type PP512 or TG128. Testing took approximately 62 hours to complete.
[Chart 1: Prompt Processing in Tokens Per Second](https://preview.redd.it/n2b2e0xvwmnf1.png?width=1255&format=png&auto=webp&s=e4a3d7a2ff32cbcca43de514b1a88a25fc3751fe)
[Chart 2: Token Generation in Tokens Per Second](https://preview.redd.it/r0tltrr9xmnf1.png?width=1255&format=png&auto=webp&s=9011470110b826a17a7e4b4e10d5f37c61bb2295)
[Chart 3: Prompt Processing in GB x Tokens Per Second](https://preview.redd.it/xmrwqghbxmnf1.png?width=1255&format=png&auto=webp&s=7ce8c383a27c8fd05e356a97851b49179b4e3703)
[Chart 4: Token Generation in GB x Tokens Per Second](https://preview.redd.it/apls9iqdxmnf1.png?width=1255&format=png&auto=webp&s=14de7a426d9413cc331b35b6fdaf16fa6a76b320)
An explanation of the charts:
Charts 1 and 2 are quite straightforward: they show the raw scores from the PP512 and TG128 tests respectively. They clearly show a massive spike in prompt processing for Q4\_0, Q4\_1, Q8\_0, UD-Q8\_K\_XL, and BF16 at low depths, which gradually equalizes once flash attention is enabled and as depth increases. On the other hand, the token generation graph shows a massive plummet for IQ4\_XS.
Charts 3 and 4 simply take the values used for charts 1 and 2 and multiply them by the model size reported by llama-bench during the run. I only really ran this test because I have been slowly losing faith in quantization altogether and am shifting towards Q8\_0 and BF16 models wherever possible, so I wanted to confirm my own biases with cherry-picked statistics. The results are the same as before: Q4\_0, Q4\_1, Q8\_0, UD-Q8\_K\_XL and BF16 are the only real standouts.
TLDR - Q4\_0, Q4\_1, Q8\_0, Q8\_K\_XL, BF16
EDIT: Here is some ROCm data with the newest version of llama.cpp as of September 6th, no pretty graphs this time but here is the raw data table:
| Organization | Quantization | Size (GB) | Flash Attention | Test | Depth | Tokens / Second | VK T/S | Diff % |
| :----------- | :----------- | :-------- | :-------------- | :---- | :---- | :-------------- | :----- | :-------------- |
| Bartowski | Q4\_K\_S | 12.61 | FALSE | pp512 | 0 | 326.94 | 104.24 | 2.136415963 |
| Bartowski | Q4\_K\_S | 12.61 | FALSE | tg128 | 0 | 27.37 | 21.57 | 0.2688919796 |
| Bartowski | Q4\_K\_S | 12.61 | FALSE | pp512 | 32768 | 73.08 | 66.3 | 0.1022624434 |
| Bartowski | Q4\_K\_S | 12.61 | FALSE | tg128 | 32768 | 6.21 | 9.29 | \\-0.3315392896 |
| Bartowski | Q4\_K\_S | 12.61 | TRUE | pp512 | 0 | 312.29 | 102.16 | 2.056871574 |
| Bartowski | Q4\_K\_S | 12.61 | TRUE | tg128 | 0 | 25.93 | 21.12 | 0.2277462121 |
| Bartowski | Q4\_K\_S | 12.61 | TRUE | pp512 | 32768 | 42.59 | 26.02 | 0.6368178324 |
| Bartowski | Q4\_K\_S | 12.61 | TRUE | tg128 | 32768 | 8.09 | 11.64 | \\-0.3049828179 |
| Bartowski | Q4\_0 | 12.56 | FALSE | pp512 | 0 | 351.48 | 259.4 | 0.3549730146 |
| Bartowski | Q4\_0 | 12.56 | FALSE | tg128 | 0 | 29.38 | 21.81 | 0.3470884915 |
| Bartowski | Q4\_0 | 12.56 | FALSE | pp512 | 32768 | 74.2 | 108.63 | \\-0.3169474363 |
| Bartowski | Q4\_0 | 12.56 | FALSE | tg128 | 32768 | 6.31 | 9.36 | \\-0.3258547009 |
| Bartowski | Q4\_0 | 12.56 | TRUE | pp512 | 0 | 334.47 | 248.3 | 0.3470398711 |
| Bartowski | Q4\_0 | 12.56 | TRUE | tg128 | 0 | 27.78 | 21.28 | 0.3054511278 |
| Bartowski | Q4\_0 | 12.56 | TRUE | pp512 | 32768 | 42.99 | 30.64 | 0.4030678851 |
| Bartowski | Q4\_0 | 12.56 | TRUE | tg128 | 32768 | 8.27 | 11.72 | \\-0.2943686007 |
| Bartowski | Q4\_1 | 13.84 | FALSE | pp512 | 0 | 369.72 | 221.11 | 0.6721089051 |
| Bartowski | Q4\_1 | 13.84 | FALSE | tg128 | 0 | 31.29 | 19.22 | 0.6279916753 |
| Bartowski | Q4\_1 | 13.84 | FALSE | pp512 | 32768 | 74.98 | 101.39 | \\-0.2604793372 |
| Bartowski | Q4\_1 | 13.84 | FALSE | tg128 | 32768 | 6.39 | 8.81 | \\-0.2746878547 |
| Bartowski | Q4\_1 | 13.84 | TRUE | pp512 | 0 | 350.83 | 212.67 | 0.6496449899 |
| Bartowski | Q4\_1 | 13.84 | TRUE | tg128 | 0 | 29.37 | 18.88 | 0.5556144068 |
| Bartowski | Q4\_1 | 13.84 | TRUE | pp512 | 32768 | 43.25 | 29.95 | 0.4440734558 |
| Bartowski | Q4\_1 | 13.84 | TRUE | tg128 | 32768 | 8.39 | 10.89 | \\-0.2295684114 |
| Bartowski | Q4\_K\_M | 13.34 | FALSE | pp512 | 0 | 301.58 | 104.83 | 1.87684823 |
| Bartowski | Q4\_K\_M | 13.34 | FALSE | tg128 | 0 | 26.49 | 20.83 | 0.2717234758 |
| Bartowski | Q4\_K\_M | 13.34 | FALSE | pp512 | 32768 | 71.68 | 66.45 | 0.07870579383 |
| Bartowski | Q4\_K\_M | 13.34 | FALSE | tg128 | 32768 | 6.18 | 9.17 | \\-0.3260632497 |
| Bartowski | Q4\_K\_M | 13.34 | TRUE | pp512 | 0 | 289.13 | 102.75 | 1.813917275 |
| Bartowski | Q4\_K\_M | 13.34 | TRUE | tg128 | 0 | 25.3 | 20.41 | 0.239588437 |
| Bartowski | Q4\_K\_M | 13.34 | TRUE | pp512 | 32768 | 42.13 | 26.07 | 0.6160337553 |
| Bartowski | Q4\_K\_M | 13.34 | TRUE | tg128 | 32768 | 8.04 | 11.39 | \\-0.2941176471 |
| Bartowski | Q4\_K\_L | 13.81 | FALSE | pp512 | 0 | 301.52 | 104.81 | 1.87682473 |
| Bartowski | Q4\_K\_L | 13.81 | FALSE | tg128 | 0 | 26.49 | 20.81 | 0.2729456992 |
| Bartowski | Q4\_K\_L | 13.81 | FALSE | pp512 | 32768 | 71.65 | 66.43 | 0.07857895529 |
| Bartowski | Q4\_K\_L | 13.81 | FALSE | tg128 | 32768 | 6.18 | 9.16 | \\-0.3253275109 |
| Bartowski | Q4\_K\_L | 13.81 | TRUE | pp512 | 0 | 289.02 | 102.77 | 1.812299309 |
| Bartowski | Q4\_K\_L | 13.81 | TRUE | tg128 | 0 | 25.05 | 20.26 | 0.2364264561 |
| Bartowski | Q4\_K\_L | 13.81 | TRUE | pp512 | 32768 | 42.13 | 26.11 | 0.6135580237 |
| Bartowski | Q4\_K\_L | 13.81 | TRUE | tg128 | 32768 | 8.02 | 11.37 | \\-0.2946350044 |
| Bartowski | Q6\_K | 18.01 | FALSE | pp512 | 0 | 190.91 | 106.29 | 0.7961238122 |
| Bartowski | Q6\_K | 18.01 | FALSE | tg128 | 0 | 23.12 | 16.12 | 0.4342431762 |
| Bartowski | Q6\_K | 18.01 | FALSE | pp512 | 32768 | 62.92 | 67.44 | \\-0.06702253855 |
| Bartowski | Q6\_K | 18.01 | FALSE | tg128 | 32768 | 5.98 | 8.17 | \\-0.2680538556 |
| Bartowski | Q6\_K | 18.01 | TRUE | pp512 | 0 | 185.86 | 104.15 | 0.7845415266 |
| Bartowski | Q6\_K | 18.01 | TRUE | tg128 | 0 | 21.95 | 15.77 | 0.3918833228 |
| Bartowski | Q6\_K | 18.01 | TRUE | pp512 | 32768 | 38.94 | 26.17 | 0.4879633168 |
| Bartowski | Q6\_K | 18.01 | TRUE | tg128 | 32768 | 7.7 | 9.88 | \\-0.2206477733 |
I was not able to test Q8\_0 or above as the system would OOM at 32k context without flash attention, which was an interesting twist. The general pattern seems to be:
Prompt Processing at low depth with or without flash attention +50-200% performance
Prompt Processing at long depth without flash attention basically the same
Prompt Processing at long depth with flash attention +50%
Token Generation at low depth with or without flash attention +20-50%
Token Generation at long depth with or without flash attention -20-50%
Overall it is difficult to decide whether ROCm is worth it, especially if you are going to run a reasoning model which will be generation a large amount of tokens compared to the prompt size. | 2025-09-07T00:24:56 | https://www.reddit.com/r/LocalLLaMA/comments/1naf93r/2x_mi50_32gb_quant_speed_comparison_mistral_32/ | OUT_OF_HOST_MEMORY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naf93r | false | null | t3_1naf93r | /r/LocalLLaMA/comments/1naf93r/2x_mi50_32gb_quant_speed_comparison_mistral_32/ | false | false | 45 | null | |
is there a android gemma 270M RAG example? | 0 | Looking for an example code , I could only find for gemma 3 1B | 2025-09-06T23:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1naep0y/is_there_a_android_gemma_270m_rag_example/ | EfficiencyOpposite30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naep0y | false | null | t3_1naep0y | /r/LocalLLaMA/comments/1naep0y/is_there_a_android_gemma_270m_rag_example/ | false | false | self | 0 | null |
why does it take a long time loading RAG file gemma 3 1B | 0 | I tried this example and used my custom RAG file with 8000 chunks:
[https://github.com/google-ai-edge/ai-edge-apis/tree/main/examples/rag](https://github.com/google-ai-edge/ai-edge-apis/tree/main/examples/rag)
why does it take a long time while loading the app ? | 2025-09-06T23:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/1naeoge/why_does_it_take_a_long_time_loading_rag_file/ | EfficiencyOpposite30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naeoge | false | null | t3_1naeoge | /r/LocalLLaMA/comments/1naeoge/why_does_it_take_a_long_time_loading_rag_file/ | false | false | self | 0 | null |
Horizon Alpha was different - miss those results | 3 | 2025-09-06T23:56:37 | sb6_6_6_6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1naengg | false | null | t3_1naengg | /r/LocalLLaMA/comments/1naengg/horizon_alpha_was_different_miss_those_results/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': '8x2tq3znumnf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?width=108&crop=smart&auto=webp&s=a47b6e6064c4d44d2c436b4cdd8c4e6e69f63e05', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?width=216&crop=smart&auto=webp&s=5ba1b635695230a7a2c58613ec44b7b1b9cc6d47', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?width=320&crop=smart&auto=webp&s=2e798e78aca2242a4143c4883ec7ff246b4cafad', 'width': 320}, {'height': 542, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?width=640&crop=smart&auto=webp&s=79f0f0c3b91bba0a5df38a284daa9af4be5c3c24', 'width': 640}, {'height': 813, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?width=960&crop=smart&auto=webp&s=d498474fe151db98e68144485d43a8f1f50e9830', 'width': 960}], 'source': {'height': 819, 'url': 'https://preview.redd.it/8x2tq3znumnf1.png?auto=webp&s=6c17ed68b73e98e5d6e95f3d34e360d7e3d7acdd', 'width': 967}, 'variants': {}}]} | ||
Equipment suggestions for a tight budget | 2 | I have been fortunate enough to acquire a second 24 GB RTX 3090 for a great price and I would like to put it together with the 24GB RTX 3090 I already have, but I don't think I have any machines that are suitable. Seeing some suggestions about getting a Motherboard/CPU/RAM bundle X99 from Aliexpress is certainly interesting as I am pretty budget constrained. Is this a feasible route? Anything in particular to look for/avoid?
I'm currently mainly doing inference but I would like to explore training more. Contributing back to the community would be great, so another goal for putting this together is so I can also explore "giving back", whether it is doing commissioned trainings (is it still a commission if it is free?), making my compute available for others to use somehow, or some other mechanism!
(I do also have some 12GB 2060's. If I could get them all together in one machine, that might be even better?) | 2025-09-06T23:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1naebbt/equipment_suggestions_for_a_tight_budget/ | ConnectionOutside485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naebbt | false | null | t3_1naebbt | /r/LocalLLaMA/comments/1naebbt/equipment_suggestions_for_a_tight_budget/ | false | false | self | 2 | null |
Networking Multiple GPUs | 3 | I have 2 machines that both have an unused GPU in them. I was wondering how difficult it would be to have them network together to run larger models. The first machine has a RTXa5000 and the second is a RTX6000 so combined it would be 48GB of VRAM. I am thinking that should be a decent amount to run some medium size models. If it matters, both machines are running Windows 11. I could run any OS on them because they are a VM. | 2025-09-06T23:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nae6mp/networking_multiple_gpus/ | Ikyo75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nae6mp | false | null | t3_1nae6mp | /r/LocalLLaMA/comments/1nae6mp/networking_multiple_gpus/ | false | false | self | 3 | null |
Effecient hot-swappable LoRA variant supported in llama.cpp | 32 | Activated LoRA: Fine-tuned LLMs for Intrinsics - https://arxiv.org/abs/2504.12397
>Despite the promise of highly customized behaviors and capabilities, switching between relevant LoRAs in a multiturn setting is inefficient, as the key-value (KV) cache of the entire turn history must be recomputed with the LoRA weights before generation can begin. To address this problem, we propose Activated LoRA (aLoRA), an adapter architecture which modifies the LoRA framework to only adapt weights for the tokens in the sequence after the aLoRA is invoked. This change crucially allows aLoRA to accept the base model's KV cache of the input string, meaning that aLoRA can be instantly activated whenever needed in a chain without recomputing the cache
I don't think any other model besides granite is supported yet. This has some merit for hybrid and cpu inference, especially if they can figure out alora extraction. If we are changing the model especially the strength/influence that can give better results than just an appended prompt alone. | 2025-09-06T23:28:27 | https://github.com/ggml-org/llama.cpp/pull/15327 | Aaaaaaaaaeeeee | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nae1zj | false | null | t3_1nae1zj | /r/LocalLLaMA/comments/1nae1zj/effecient_hotswappable_lora_variant_supported_in/ | false | false | default | 32 | {'enabled': False, 'images': [{'id': 'j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=108&crop=smart&auto=webp&s=10ad44f3a46c3ce234f89f85b6ba2d6272a24818', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=216&crop=smart&auto=webp&s=aca954c163e7b7908fdea7ed38fc363876a96618', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=320&crop=smart&auto=webp&s=be9774281055bf71f19bf78af2da4f23d7e0661a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=640&crop=smart&auto=webp&s=b9f29e94704fe7ecbbc414657d711e7dcf339469', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=960&crop=smart&auto=webp&s=8561ac79986acf6373dd587070fd92083c1b33fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?width=1080&crop=smart&auto=webp&s=a10432045d77d9575edf68f15fb5b49157675f8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j9pZxlhUy5RV0Ryi-2-5BCQ4nmGeUpe7px39264mgOc.png?auto=webp&s=dd2ee6418892f01ca5d189c72319aa10753b0f9e', 'width': 1200}, 'variants': {}}]} |
[Tool] Speakr v0.5.5: Self-hosted audio transcription app with LocalLLM support + new semantic search & full internationalization | 53 | Speakr v0.5.5 is out - a self-hosted app that connects to your local STT and LLM instances for transcription with speaker diarization and semantic search.
Inquire Mode (still experimental) uses the all-MiniLM-L6-v2 embedding model to allow semantic search over recordings. It works on CPU, creates 384d vectors, and synthesizes answers from your complete library, not just returning search hits. Ask "What deliverables were assigned to me in the TPS meeting?" and get actual narrative answers with citations. [Have a look at some screenshots](https://murtaza-nasir.github.io/speakr/screenshots).
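Under the hood, the retrieval step is conceptually quite simple. Here's a stripped-down sketch of the idea with sentence-transformers (illustrative only, not Speakr's literal code):

```python
# Sketch of local semantic search over transcript chunks (all-MiniLM-L6-v2, 384-dim, CPU-friendly).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

transcript_chunks = [
    "Alice will send the TPS report draft by Friday.",
    "Bob agreed to prepare the Q3 budget deck.",
    "We postponed the vendor review to next sprint.",
]
chunk_vecs = model.encode(transcript_chunks, convert_to_tensor=True)   # shape: (3, 384)

query_vec = model.encode("What deliverables were assigned in the TPS meeting?",
                         convert_to_tensor=True)
scores = util.cos_sim(query_vec, chunk_vecs)[0]     # cosine similarity per chunk
best = int(scores.argmax())
print(transcript_chunks[best], float(scores[best]))
```

The app then passes the top-scoring chunks to the LLM to synthesize a narrative answer with citations, rather than just returning the raw hits.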
Works with any OpenAI-compatible API (vLLM, LocalAI, LM Studio). Works with any Whisper endpoint, with a recommended ASR companion container for speaker diarization. Tag-based prompt customization allows you to customize summaries by domain - medical recordings get medical summaries, technical meetings get technical summaries.
What's new in this release: five-language UI support, automatic audio format conversion where necessary, air-gapped environment support, and rewritten documentation.
Everything stays local. No external API calls for embeddings or processing, unless you want to use external APIs.
[GitHub](https://github.com/murtaza-nasir/speakr) | [Docker Hub](https://hub.docker.com/r/learnedmachine/speakr) | [Screenshots](https://murtaza-nasir.github.io/speakr/screenshots)
Looking for feedback on Inquire Mode. What features would improve your workflow? | 2025-09-06T23:13:24 | https://www.reddit.com/gallery/1nadq9u | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nadq9u | false | null | t3_1nadq9u | /r/LocalLLaMA/comments/1nadq9u/tool_speakr_v055_selfhosted_audio_transcription/ | false | false | 53 | null | |
W8A16 quantized model generating different code quality with --dtype flag in vLLM? | 9 | Testing a quantized Qwen3-Coder-30B (W8A16) on dual RTX 3090s and getting weird results. Same model, same prompts, but different --dtype flags produce noticeably different code quality.
--dtype bfloat16: Better game logic, correct boundary checking, proper object placement
--dtype float16: Simpler code structure, but has bugs like boundary detection issues
Both have identical performance (same t/s, same VRAM usage).
--dtype auto defaulted to BF16, and vLLM actually warned me about using BF16 on pre-SM90 GPUs (RTX 3090 is SM86), suggesting FP16 for better performance. But the performance is identical and BF16 gives better code quality.
Anyone else seen dtype affecting code generation quality beyond just numerical precision? Is this expected behavior or should I ignore the SM90 warning?
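For reference, here's roughly how I'm reproducing the comparison (a sketch with vLLM's offline Python API; the model path is a placeholder, and on the server side it's just the --dtype flag):

```python
# A/B the two dtypes on the same prompt (sketch; run each dtype in a separate process
# so leftover GPU memory from the first LLM doesn't interfere with the second).
from vllm import LLM, SamplingParams

PROMPT = "Write a Python snake game with proper wall-collision handling."
params = SamplingParams(temperature=0.0, max_tokens=1024)   # greedy, so runs are comparable

def run(dtype: str) -> str:
    llm = LLM(model="/models/Qwen3-Coder-30B-W8A16",        # placeholder path
              dtype=dtype,
              tensor_parallel_size=2)                        # dual 3090s
    return llm.generate([PROMPT], params)[0].outputs[0].text

print(run("bfloat16")[:500])   # repeat with "float16" in a fresh process
```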
| 2025-09-06T23:00:37 | https://www.reddit.com/r/LocalLLaMA/comments/1nadg0j/w8a16_quantized_model_generating_different_code/ | sb6_6_6_6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nadg0j | false | null | t3_1nadg0j | /r/LocalLLaMA/comments/1nadg0j/w8a16_quantized_model_generating_different_code/ | false | false | self | 9 | null |
Llama.cpp any way to custom split 'compute buffer size'? | 6 | Context
Running a GLM 4.5 Air Q4_K_M quant on quad RTX 3090s.
Trying to squeeze every byte of VRAM possible.
| 0% 40C P8 18W / 350W | 23860MiB / 24576MiB | 0% Default |
| 0% 52C P8 17W / 350W | 22842MiB / 24576MiB | 0% Default |
| 0% 43C P8 17W / 350W | 22842MiB / 24576MiB | 0% Default |
| 0% 44C P8 29W / 420W | 21328MiB / 24576MiB | 0% Default |
command:
~/llama.cpp/build/bin/llama-server -m '~/model/GLM-4.5-Air-GGUF-Q4_K_M/GLM-4.5-Air-Q4_K_M-00001-of-00002.gguf' -ngl 47 -c 131072 -ub 1408 --temp 0.6 --min-p 0.0 --top-p 0.95 --top-k 20 --port 5000 --host 0.0.0.0 --cache-type-k q8_0 --cache-type-v q8_0 --alias GLM-4.5-Air
Been tweaking -c, -ub and --cache-type-k/--cache-type-v
-ub distribution seems lopsided and is the source of CUDA0 sitting at 23860MiB
llama_context: CUDA0 compute buffer size = 3707.11 MiB
llama_context: CUDA1 compute buffer size = 2029.61 MiB
llama_context: CUDA2 compute buffer size = 2029.61 MiB
llama_context: CUDA3 compute buffer size = 2464.13 MiB
llama_context: CUDA_Host compute buffer size = 2838.15 MiB
| 2025-09-06T22:42:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nad1on/llamacpp_any_way_to_custom_split_compute_buffer/ | MachineZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nad1on | false | null | t3_1nad1on | /r/LocalLLaMA/comments/1nad1on/llamacpp_any_way_to_custom_split_compute_buffer/ | false | false | self | 6 | null |
Best way to integrate news search to your local LLM? | 8 | What is the best way to automate news searches you’ve found? | 2025-09-06T22:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nacd3m/best_way_to_integrate_news_search_to_your_local/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nacd3m | false | null | t3_1nacd3m | /r/LocalLLaMA/comments/1nacd3m/best_way_to_integrate_news_search_to_your_local/ | false | false | self | 8 | null |
Anyone actually try to run gpt-oss-120b (or 20b) on a Ryzen AI Max+ 395? | 99 | AMD is understandably [trying to tout this](https://www.amd.com/en/blogs/2025/how-to-run-openai-gpt-oss-20b-120b-models-on-amd-ryzen-ai-radeon.html), and there's this from a month ago claiming "30 tokens per second" (not clear if 120b or 20b). I can't tell if the flops are int8, bf16, or fp16 on the 395. In theory, if we assume the 395 has 50 TOPS of bf16 on its NPU and we trust their ["overall TOPS"](https://www.amd.com/en/products/processors/laptop/ryzen/ai-300-series/amd-ryzen-ai-max-plus-395.html), it's potentially pushing into 3090 territory under ideal conditions. It has *waaay* more memory, which is super useful for getting things to run at all, but it also has a lot less memory bandwidth, about 1/4th as much. I guess a fairer comparison would be on 20b. I'd strongly anticipate the 3090 getting better tokens per second on 20b.
[this post ](https://www.reddit.com/r/ryzen/comments/1lzr7yq/yolov8_multimachine_benchmark_rtx_3090_vs_ryzen/)suggests that actually under common configs a lot of times the 395 can beat the 3090...this is very surprising to me. Curious if anyone has actually tried 20b on both and can compare. Also curious what actual tokens per second people are getting with 120b. | 2025-09-06T21:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nabcek/anyone_actully_try_to_run_gptoss120b_or_20b_on_a/ | PM_ME_YOUR_PROOFS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nabcek | false | null | t3_1nabcek | /r/LocalLLaMA/comments/1nabcek/anyone_actully_try_to_run_gptoss120b_or_20b_on_a/ | false | false | self | 99 | {'enabled': False, 'images': [{'id': 'NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=108&crop=smart&auto=webp&s=569ee38687589e1134e6e83d5b36da59d9f13822', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=216&crop=smart&auto=webp&s=7cb26532b06f5ea81f07adfe5a75835b3e05deae', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=320&crop=smart&auto=webp&s=e09b639f1669d23c2f81fe7b4aaafa1f08e9b48e', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=640&crop=smart&auto=webp&s=f5a2400223098f00c87af89039830dd05676a630', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=960&crop=smart&auto=webp&s=feffe38281ed5c7865f1858eaf5b4a52fdae65b0', 'width': 960}, {'height': 604, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=1080&crop=smart&auto=webp&s=282032576631f90d79349a3d0d655abf4371e1b6', 'width': 1080}], 'source': {'height': 1336, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?auto=webp&s=c5bc77b30d5877b6a5932eb0a84c8fb1976e6dc4', 'width': 2386}, 'variants': {}}]} |
No marketing, just helping out: Built my own AI-powered Resume Builder (and it's 100% free, no signup) | 0 | No matter what anyone says — I finally did it. I built a resume builder that:
Runs completely free in your browser.
Has an AI mode that takes your old PDF/text resume and rebuilds it ATS-friendly.
Requires no sign-up, no cloud storage, everything stays in localStorage.
Works offline if you save the HTML. Just a single file.
I was tired of those shady resume sites asking for credit cards, subscriptions, or harvesting your data. So I made my own.
👉 WebLink :: https://free-resume-builder-six.vercel.app/
👉 GitHub Repo :: https://github.com/Siddhesh2377/FreeResumeBuilder
It’s not perfect (still tweaking AI output and print layouts), but it’s already way better than the paywalled junk out there.
If you ever:
got stuck behind a paywall trying to export your resume,
saw “Download PDF — $10/month” pop up,
or just wanted something clean and private,
…then this is for you. ✨
Would love feedback from folks here. Should I add more templates, or keep it minimal like ChatGPT’s vibe? 🤔 | 2025-09-06T20:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1naa1gi/no_marketing_just_helping_out_built_my_own/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1naa1gi | false | null | t3_1naa1gi | /r/LocalLLaMA/comments/1naa1gi/no_marketing_just_helping_out_built_my_own/ | false | false | self | 0 | null |
The openrouter chat system prompt makes some models seem way worse than they actually are | 5 | I found this especially when testing Qwen 3 Max. I didn't think about it and had the impression that the model was pretty bad compared to on the qwen website, but when I switched it off, realizing that alibaba probably were using their own, I saw a drastic improvement. Same for other models I had on hand like deepseek v3.1 and Kimi K2.
Without the prompt, 3 Max feels like claude in the way it writes and talks, but with it on it feels like a carbon copy of 4o, just so annoying lmao
| 2025-09-06T20:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/1na9wzv/the_openrouter_chat_system_prompt_makes_some/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na9wzv | false | null | t3_1na9wzv | /r/LocalLLaMA/comments/1na9wzv/the_openrouter_chat_system_prompt_makes_some/ | false | false | self | 5 | null |
Is there a version of VibeVoice that streams truly in real-time? | 6 | I’ve been testing VibeVoice 7B and noticed something about its “real-time streaming”:
* The model doesn’t actually start streaming immediately.
* It first generates a **large 30-second chunk** of audio before it begins streaming smaller chunks.
* After that, it continues generating in real-time, but there’s a long initial delay.
I want a version of VibeVoice that:
1. Starts streaming **immediately**, with maybe 1–2 seconds of audio at first.
2. Continues adding audio smoothly in real-time from the very beginning.
Has anyone here:
* Made a patched version of VibeVoice that does this?
* Found a version that supports **true low-latency real-time streaming**?
Any pointers or forks would be really appreciated — I want the model to feel like it’s speaking live from the first second. | 2025-09-06T20:25:11 | https://www.reddit.com/r/LocalLLaMA/comments/1na9ubw/is_there_a_version_of_vibevoice_that_streams/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na9ubw | false | null | t3_1na9ubw | /r/LocalLLaMA/comments/1na9ubw/is_there_a_version_of_vibevoice_that_streams/ | false | false | self | 6 | null |
Llama 4 guard or IBM Granite 3.3B Guard? | 0 | I want a model to sanitize user inputs and AI responses - I believe Llama 4 guard and IBM Granite 3.3B Guard are the two latest models from general providers in this space. How do the two compare? Any others you would recommend? | 2025-09-06T20:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1na9hea/llama_4_guard_or_ibm_granite_33b_guard/ | rcanand72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na9hea | false | null | t3_1na9hea | /r/LocalLLaMA/comments/1na9hea/llama_4_guard_or_ibm_granite_33b_guard/ | false | false | self | 0 | null |
Nemotron Nano V2 models are remarkably good for agentic coding | 65 | I use Nvidia Nemotron Nano 9B V2 and 12B v2 with Roo Coder served by both LM Studio (Mac) and Llama.cpp (Ubuntu). These models are small, fast, smart, follow instructions well, and are good at tool calling. Just an FYI for anyone with a smaller vram GPU - I can load up Q8 quants with 300k context and VRAM use is less than 24Gb. Great little models. | 2025-09-06T20:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1na9crp/nemotron_nano_v2_models_are_remarkably_good_for/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na9crp | false | null | t3_1na9crp | /r/LocalLLaMA/comments/1na9crp/nemotron_nano_v2_models_are_remarkably_good_for/ | false | false | self | 65 | null |
Should I sell my 3060 for a 3080 to pair with my 5060ti 16gb? | 2 | Is the 2gb vram and bandwidth difference big when running >24b models at q4 or q5 with >24k context? | 2025-09-06T20:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/1na99xo/should_i_sell_my_3060_for_a_3080_to_pair_with_my/ | Fakkle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na99xo | false | null | t3_1na99xo | /r/LocalLLaMA/comments/1na99xo/should_i_sell_my_3060_for_a_3080_to_pair_with_my/ | false | false | self | 2 | null |
MoE models tested on miniPC iGPU with Vulkan | 20 | Super affordable miniPC seem to be taking over the [market](https://www.globenewswire.com/news-release/2025/06/11/3097703/0/en/Mini-PCs-Market-Size-to-Surpass-USD-34-25-Billion-by-2032-Owing-to-the-Growing-Demand-for-Compact-and-High-Performance-Computing-Solutions-Research-by-SNS-Insider.html) but struggle to provide decent local AI performance. [MoE](https://huggingface.co/blog/moe) seems to be the current answer to the problem. All of these models should have no problem running on Ollama as it's based on llama.cpp backend, just won't have Vulkan benefit for prompt processing. I've installed Ollama on [ARM](https://github.com/ollama/ollama/blob/main/docs/linux.md) based systems like android [cell phones](https://www.reddit.com/r/ollama/comments/1d8p2lg/getting_ollama_on_a_smartphone/) and [Android ](https://kskroyal.com/run-llms-android-ollama/)TV boxes.
System:
AMD Ryzen 7 [6800H](https://www.techpowerup.com/cpu-specs/ryzen-7-6800h.c2527) with iGPU Radeon [680M](https://www.techpowerup.com/gpu-specs/radeon-680m.c3871) sporting 64GB of DDR5 but limited to 4800 MT/s by system.
llama.cpp vulkan build: fd621880 ([6396](https://github.com/ggml-org/llama.cpp/releases/tag/b6396)) prebuilt package so just unzip and [llama-bench](https://github.com/ggml-org/llama.cpp/discussions/7195)
Here are 6 HF MoE models (with Qwen3-Coder listed at two quants) and 1 dense model for reference, to show expected performance of a mid-tier [miniPC](https://www.reddit.com/r/MiniPCs/comments/1gu2rgb/mini_pc_that_could_handle_local_ai_for_home/).
1. ERNIE-4.5-21B-A3B-PT.i1-IQ4\_XS - 4.25 bpw
2. ggml-org\_gpt-oss-20b-GGUF\_gpt-oss-20b-mxfp4
3. Ling-lite-1.5-2507.IQ4\_XS- 4.25 bpw 4.25 bpw
4. Mistral-Small-3.2-24B-Instruct-2506-IQ4\_XS - 4.25 bpw
5. Moonlight-16B-A3B-Instruct-IQ4\_XS - 4.25 bpw
6. Qwen3-Coder-30B-A3B-Instruct-Q4\_K\_M - Medium
7. SmallThinker-21B-A3B-Instruct.IQ4\_XS.imatrix IQ4\_XS - 4.25 bpw
8. Qwen3-Coder-30B-A3B-Instruct--IQ4\_XS
|model | size| params| pp512 | tg128 |
|:-|:-|:-|:-|:-|
|ernie4\_5-moe 21B.A3B IQ4\_XS| 10.89| 21.83 B|187.15 ± 2.02 | 29.50 ± 0.01 |
|gpt-oss 20B MXFP4 MoE | 11.27| 20.91 B|239.21 ± 2.00 | 22.96 ± 0.26 |
|bailingmoe 16B IQ4\_XS | 8.65| 16.80 B|256.92 ± 0.75 | 37.55 ± 0.02 |
|llama 13B IQ4\_XS | 11.89| 23.57 B| 37.77 ± 0.14 | 4.49 ± 0.03 |
|deepseek2 16B IQ4\_XS | 8.14| 15.96 B|250.48 ± 1.29 | 35.02 ± 0.03 |
|qwen3moe 30B.A3B Q4\_K | 17.28| 30.53 B|134.46 ± 0.45 | 28.26 ± 0.46 |
|smallthinker 20B IQ4\_XS | 10.78| 21.51 B|173.80 ± 0.18 | 25.66 ± 0.05 |
|qwen3moe 30B.A3B IQ4\_XS|15.25|30.53|140.34 ± 1.12|27.96 ± 0.13|
Notes:
* **Backend**: All models are running on RPC + Vulkan backend.
* **ngl**: The number of layers used for testing (99).
* **Test**:
* `pp512`: Prompt processing with 512 tokens.
* `tg128`: Text generation with 128 tokens.
* **t/s**: Tokens per second, averaged with standard deviation.
Winner (subjective) for miniPC MoE models:
1. Qwen3-Coder-30B-A3B (qwen3moe 30B.A3B Q4\_K or IQ4\_XS)
2. smallthinker 20B IQ4\_XS
3. Ling-lite-1.5-2507.IQ4\_XS (bailingmoe 16B IQ4\_XS)
4. gpt-oss 20B MXFP4
5. ernie4\_5-moe 21B.A3B
6. Moonlight-16B-A3B (deepseek2 16B IQ4\_XS)
I'll have all 6 MoE models installed on my miniPC systems. Each actually has its benefits. Longer prompt data I would probably use gpt-oss 20B MXFP4 and Moonlight-16B-A3B (deepseek2 16B IQ4\_XS). For my resource deprived miniPC/SBC I'll use Ling-lite-1.5 (bailingmoe 16B IQ4\_XS) and Moonlight-16B-A3B (deepseek2 16B IQ4\_XS). I threw in Qwen3 Q4\_K\_M vs Qwen3 IQ4\_XS to see if any real difference.
If there are other MoE models worth adding to a library of models for miniPC please share. | 2025-09-06T19:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/1na96gx/moe_models_tested_on_minipc_igpu_with_vulkan/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na96gx | false | null | t3_1na96gx | /r/LocalLLaMA/comments/1na96gx/moe_models_tested_on_minipc_igpu_with_vulkan/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'nDqw00Cz2HPbqhtqEOVqqdlZWCHxPB7ubVhrwg1GNsg', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/nDqw00Cz2HPbqhtqEOVqqdlZWCHxPB7ubVhrwg1GNsg.png?width=108&crop=smart&auto=webp&s=fbbc7f3f27a38f3108f28610c79b16d6c99c8dcd', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/nDqw00Cz2HPbqhtqEOVqqdlZWCHxPB7ubVhrwg1GNsg.png?width=216&crop=smart&auto=webp&s=f9fc3ba783fed390d59350e61921b290ec8c3a74', 'width': 216}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/nDqw00Cz2HPbqhtqEOVqqdlZWCHxPB7ubVhrwg1GNsg.png?auto=webp&s=68093fc5469af09d34b89e3d239f27ba27cb70f4', 'width': 230}, 'variants': {}}]} |
Adversarial CoT? | 7 | What if, instead of the usual single-view CoT, CoT had two or more separate agents cooperating to find the right answer?
Agent A — Primary Reasoner
Focused, narrow reasoning (the one every reasoning LLM uses). Dives deep, generates hypotheses, diagnoses, plans solutions. Acts like a doctor, engineer, researcher, etc.
Agent B — Skeptical Challenger
Questions assumptions, cross-checks reasoning, considers alternative explanations, maintains awareness of missing context or long-term effects.
(Optional) Agent C — Integrator / Arbiter
Resolves conflicts, merges insights, and produces the final, balanced conclusion.
This would go on for multiple turns.
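A rough sketch of the loop I'm imagining, against any OpenAI-compatible local endpoint (model name and URL are placeholders):

```python
# Toy adversarial-CoT loop: Primary Reasoner vs Skeptical Challenger, then an Integrator.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # e.g. Ollama/llama.cpp
MODEL = "qwen3:8b"  # placeholder

def ask(system: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "A patient on warfarin starts taking ibuprofen daily. What are the key risks?"
answer = ask("You are the Primary Reasoner. Reason deeply and propose an answer.", question)

for _ in range(2):  # a couple of adversarial rounds
    critique = ask("You are the Skeptical Challenger. Attack assumptions and find gaps.",
                   f"Question: {question}\nProposed answer: {answer}")
    answer = ask("You are the Primary Reasoner. Revise your answer given this critique.",
                 f"Question: {question}\nCritique: {critique}\nPrevious answer: {answer}")

print(ask("You are the Integrator. Merge the strongest points into one balanced answer.",
          f"Question: {question}\nFinal draft: {answer}"))
```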
| 2025-09-06T19:56:46 | https://www.reddit.com/r/LocalLLaMA/comments/1na959b/adversarial_cot/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na959b | false | null | t3_1na959b | /r/LocalLLaMA/comments/1na959b/adversarial_cot/ | false | false | self | 7 | null |
Need help with my local Ollama-Codegemma model | 1 | Hi all,
I am a java developer trying to integrate any ai model into my personal Intellij Idea IDE.
With a bit of googling and stuff, I downloaded Ollama and then downloaded the latest version of CodeGemma. I even set up the plugin "Continue" and it is now detecting the LLM model to answer my questions.
The issue I am facing is that, when I ask it to scan my Spring Boot project, or simply analyze it, it says it can't due to security and privacy policies.
a) Am I doing something wrong?
b) Am I using any wrong model?
c) Is there any other thing that I might have missed?
Since my workplace has integrated windsurf with a premium subscription, it can analyze my local files / projects and give me answers as expected. However, I am trying to achieve kind of something similar, but with my personal PC and free tier overall.
Kindly help. Thanks | 2025-09-06T19:39:30 | https://www.reddit.com/r/LocalLLaMA/comments/1na8pvl/need_help_with_my_local_ollamacodegemma_model/ | Unknownduck07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na8pvl | false | null | t3_1na8pvl | /r/LocalLLaMA/comments/1na8pvl/need_help_with_my_local_ollamacodegemma_model/ | false | false | self | 1 | null |
Knowledge Graph RAG | 11 | In the Company I work for we have a serious issue with increasing Support demand (due to extreme growth) and nobody has the knowledge or overview to identify the core issues. Since im working in the medical Sector, im working with highly sensitive data, thats why i need the local approach. Now I Was thinking about building a knowledge graph as input for a RAG Pipeline to get an overview over all the Tickets and their similarity and so on. My question is, is that still state of the art (latest i found regarding knowledge graph RAG research for Such problems was 2024) or is there a more elegant way today? | 2025-09-06T19:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/1na8m3o/knowledge_graph_rag/ | GeT_fRoDo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na8m3o | false | null | t3_1na8m3o | /r/LocalLLaMA/comments/1na8m3o/knowledge_graph_rag/ | false | false | self | 11 | null |
Silly-v0.2 is my new favorite model | 81 | It's not my model, I just feel like it's very underrated. it's the most fun I've had talking to an LLM, and it feels a lot like character AI. it has almost no GPT-isms and is very humanlike, and it's only 12b parameters, so it's insanely fast. it seems to work really well with character cards as well. I've been looking for a model like this for ages, and i'm glad we finally have one like it that's open-source | 2025-09-06T19:09:54 | https://www.reddit.com/r/LocalLLaMA/comments/1na7zg7/sillyv02_is_my_new_favorite_model/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na7zg7 | false | null | t3_1na7zg7 | /r/LocalLLaMA/comments/1na7zg7/sillyv02_is_my_new_favorite_model/ | false | false | self | 81 | null |
Be Motivated ...Believe it! | 0 | 2025-09-06T18:57:51 | Old-Raspberry-3266 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na7oh1 | false | null | t3_1na7oh1 | /r/LocalLLaMA/comments/1na7oh1/be_motivated_believe_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '491jkjpddlnf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?width=108&crop=smart&auto=webp&s=330421245215b59bd475e1ed8643d2024dbba692', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?width=216&crop=smart&auto=webp&s=c8a6a5b0da3f982060131d9f0889a621813bbd9a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?width=320&crop=smart&auto=webp&s=672c9bd56690f650908c4fd5204fd14d029f0759', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?width=640&crop=smart&auto=webp&s=a57c364239dd31835182c6638a6b4646060a4720', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?width=960&crop=smart&auto=webp&s=c29032736524ac484c56dd196180616c716ca56f', 'width': 960}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/491jkjpddlnf1.png?auto=webp&s=c27f2075be47b6206fe26b51509eb55732648796', 'width': 1000}, 'variants': {}}]} | ||
OpenAI: Why Language Models Hallucinate | 211 | In short: LLMs hallucinate because we've inadvertently designed the training and evaluation process to reward confident, even if incorrect, answers, rather than honest admissions of uncertainty. Fixing this requires a shift in how we grade these systems to steer them towards more trustworthy behavior.
The Solution:
Explicitly stating "confidence targets" in evaluation instructions, where mistakes are penalized and admitting uncertainty (IDK) might receive 0 points, but guessing incorrectly receives a negative score. This encourages "behavioral calibration," where the model only answers if it's sufficiently confident. | 2025-09-06T18:44:12 | https://share.google/9SKn7X0YThlmnkZ9m | onil_gova | share.google | 1970-01-01T00:00:00 | 0 | {} | 1na7c1b | false | null | t3_1na7c1b | /r/LocalLLaMA/comments/1na7c1b/openai_why_language_models_hallucinate/ | false | false | default | 211 | null |
MCP with Computer Use | 13 | MCP Server with Computer Use Agent runs through Claude Desktop, Cursor, and other MCP clients.
As an example use case, let's try using Claude as a tutor to learn how to use Tableau.
The MCP Server implementation exposes CUA's full functionality through standardized tool calls. It supports single-task commands and multi-task sequences, giving Claude Desktop direct access to all of Cua's computer control capabilities.
This is the first MCP-compatible computer control solution that works directly with Claude Desktop's and Cursor's built-in MCP implementation. Simple configuration in your claude_desktop_config.json or cursor_config.json connects Claude or Cursor directly to your desktop environment.
Github : https://github.com/trycua/cua
Discord: discord.gg/cua-ai | 2025-09-06T18:41:57 | https://v.redd.it/dk9apg8jalnf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na79xn | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/dk9apg8jalnf1/DASHPlaylist.mpd?a=1759776134%2CMDFhN2EzYTI1ZTVkMzhkMGVjODU2NzdjMjU5ZTYxNmYzYmY2N2U3ZWMzYzIxZWQyZjQ1ZjM0YTFiMTExNTdmNg%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/dk9apg8jalnf1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 440, 'hls_url': 'https://v.redd.it/dk9apg8jalnf1/HLSPlaylist.m3u8?a=1759776134%2CZTU5MzQ1MmU2ODJkYjBjNDdmM2YwNDQ2NjhlNjA0ODliNzcyZGRhMGYzODFlYTZmMzQxNTUwMjFiZDBjMDVhOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dk9apg8jalnf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1na79xn | /r/LocalLLaMA/comments/1na79xn/mcp_with_computer_use/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=108&crop=smart&format=pjpg&auto=webp&s=8efb3cb80bcf256bab6b2696f136cf2eaa283a61', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=216&crop=smart&format=pjpg&auto=webp&s=704ad5d84e292115ae53b12cfd6665821c79e20a', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=320&crop=smart&format=pjpg&auto=webp&s=cbc6bb82ca22dbea6330228a550645a639ad96c2', 'width': 320}, {'height': 329, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=640&crop=smart&format=pjpg&auto=webp&s=7c64d71fcce628c3b8661892cd6d6f6df9fcdc9e', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=960&crop=smart&format=pjpg&auto=webp&s=7ea5ac6a8756880a30d81aa08110b5a681a09b95', 'width': 960}, {'height': 555, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0f022ec7d26643e4c2b2e1d4864a460e20284227', 'width': 1080}], 'source': {'height': 658, 'url': 'https://external-preview.redd.it/Yms5cnp6d2lhbG5mMRWfbJ8igMn1wejE5S1jscBm4hQQlwhyyetxPIo-3e1f.png?format=pjpg&auto=webp&s=e84ba2547e327ca7d21052e51ed4d51f1b239979', 'width': 1280}, 'variants': {}}]} | |
Qwen3 30B A3B Q40 @ 13 tok/sec on Raspberry Pi cluster | 3 | Distributed llama is really promising for local inference. You could run multiple boxes on different circuits in your home to deal with power limitations, and scale capacity constraints horizontally. It doesn’t solve for running large open models, but it makes smaller ones potentially way faster. | 2025-09-06T18:40:33 | https://github.com/b4rtaz/distributed-llama/discussions/255 | DealingWithIt202s | github.com | 1970-01-01T00:00:00 | 0 | {} | 1na78oe | false | null | t3_1na78oe | /r/LocalLLaMA/comments/1na78oe/qwen3_30b_a3b_q40_13_toksec_on_raspberry_pi/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=108&crop=smart&auto=webp&s=06d7401a6e5999b0a0217ef6b4acbaa7a56631c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=216&crop=smart&auto=webp&s=ac96a579b70dcb81b85c3309ce81d06017fcc35c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=320&crop=smart&auto=webp&s=3a890f57ecdb4a316b5bce7d8bc6acaa9e610f26', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=640&crop=smart&auto=webp&s=4733d277f6f6e759fe47794087d1da790f8d36b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=960&crop=smart&auto=webp&s=fb26627ab8849438dbe4b35d34d6b3665f48943c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=1080&crop=smart&auto=webp&s=8c7b67c6cd00d8d68d14385d500e9be57c5e9372', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?auto=webp&s=23a8552f0dbd903c0723bd9c297e053626f3b8db', 'width': 1200}, 'variants': {}}]} |
Sometimes I prefer to use my own chatbot over ChatGPT because the answers are faster. Not always better, but faster ✌️😊✨
(WIP: every day getting better and better) | 0 | 2025-09-06T18:34:37 | Roseldine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na73bu | false | null | t3_1na73bu | /r/LocalLLaMA/comments/1na73bu/sometimes_i_prefer_to_use_my_own_chatbot_over/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'c9bi2bs39lnf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=108&crop=smart&auto=webp&s=a681a23fd3bdceefb7646469ee093df0129587c3', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=216&crop=smart&auto=webp&s=9cede4fd84cc54cecf25c6b2514993d5e9293fb1', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=320&crop=smart&auto=webp&s=f10894e1814c0fd0514b220b6e46b2f98f5a2051', 'width': 320}, {'height': 305, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=640&crop=smart&auto=webp&s=da9b22e6446322a712e46be5145e19ea9fa4d83a', 'width': 640}, {'height': 457, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=960&crop=smart&auto=webp&s=d7ac52e5fdfa7ac935c66575b434ace3ef498c60', 'width': 960}, {'height': 514, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?width=1080&crop=smart&auto=webp&s=52d0c374eaa79a63ed8264a5769963241d1f704e', 'width': 1080}], 'source': {'height': 1217, 'url': 'https://preview.redd.it/c9bi2bs39lnf1.png?auto=webp&s=1cafb3f58fd11a555c68bc70a7af4d8c13050659', 'width': 2553}, 'variants': {}}]} | ||
Easiest way to rent a server and start training? | 2 | I have data that I was planning to use for training, but I'm a beginner.
Which free services and methods can I use to start working on training with my own data?
PS: I can run a local LLM on my MacBook. I'm looking for a proper way of doing this, and that's why I need some help or advice on the right way.
| 2025-09-06T18:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1na6ctj/most_easy_way_to_rent_a_server_and_start_training/ | UnhappyJournalist175 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na6ctj | false | null | t3_1na6ctj | /r/LocalLLaMA/comments/1na6ctj/most_easy_way_to_rent_a_server_and_start_training/ | false | false | self | 2 | null |
Is RTX 5080 PC enough to run open source models like QWEN or Llama or Gemma? | 0 | I want to run open source models on new PC along with gaming. Is RTX 5080 enough? Budget is around $2500. What ready made PC you guys recommend?
Example: https://www.newegg.com/cobratype-gaming-desktop-pcs-geforce-rtx-5080-amd-ryzen-9-9900x-32gb-ddr5-2tb-ssd-venom-white/p/3D5-000D-00246?item=3D5-000D-00246 | 2025-09-06T18:04:10 | https://www.reddit.com/r/LocalLLaMA/comments/1na6boz/is_rtx_5080_pc_enough_to_run_open_source_models/ | pavankjadda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na6boz | false | null | t3_1na6boz | /r/LocalLLaMA/comments/1na6boz/is_rtx_5080_pc_enough_to_run_open_source_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4.jpeg?width=108&crop=smart&auto=webp&s=30d779d98a3bca6eb951acda954b78b2072948fb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4.jpeg?width=216&crop=smart&auto=webp&s=9bd7a52a2a317532bcfd9c4dce347ae6362434aa', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4.jpeg?width=320&crop=smart&auto=webp&s=37251c4d6ca835ea35d8c53e8d20a272a82466fa', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4.jpeg?width=640&crop=smart&auto=webp&s=0cf2d17f51732af392ac33ec569f4a016decb292', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/mkMSI_LANGEap6KCmerJv-rN0PxBttSBRR3xTWlEWt4.jpeg?auto=webp&s=a3725df7a3c31f7f9269cda6d592e93eca0d821a', 'width': 640}, 'variants': {}}]} |
Struggling with GPT-OSS-20B | 8 | I'm really having a hard time training it. I have data that I have used with Mistral for converting pharmacy directions to a short code. For example, "take one tablet orally daily" becomes "T1T PO QD". A normal pharmacy thing. The Mistral model does it well, but sometimes there are complex scenarios it fails on. I was thinking reasoning may help.
In short, the GPT-OSS model doesn't seem to be following the training. It gives random directions and gets even the simple ones wrong. Below is the YAML config, and I'm happy to share sample data if anyone can provide guidance.
Config
architecture:
  backbone_dtype: bfloat16
  gradient_checkpointing: true
  intermediate_dropout: 0.0
  pretrained: true
  pretrained_weights: ''
augmentation:
  neftune_noise_alpha: 0.0
  random_parent_probability: 0.0
  skip_parent_probability: 0.0
  token_mask_probability: 0.0
dataset:
  add_eos_token_to_answer: true
  add_eos_token_to_prompt: true
  add_eos_token_to_system: true
  answer_column: "output_json\r"
  chatbot_author: H2O.ai
  chatbot_name: h2oGPT
  data_sample: 1.0
  data_sample_choice:
  - Train
  - Validation
  id_column: None
  limit_chained_samples: false
  mask_prompt_labels: true
  only_last_answer: false
  parent_id_column: None
  personalize: false
  prompt_column:
  - instruction
  prompt_column_separator: ''
  system_column: system
  text_answer_separator: ''
  text_prompt_start: ''
  text_system_start: ''
  train_dataframe: /home/llmstudio/mount/data/user/Training eRx 1 v8/Training eRx 1 v8.csv
  validation_dataframe: None
  validation_size: 0.1
  validation_strategy: automatic
environment:
  compile_model: false
  deepspeed_allgather_bucket_size: 1000000
  deepspeed_method: ZeRO2
  deepspeed_reduce_bucket_size: 1000000
  deepspeed_stage3_param_persistence_threshold: 1000000
  deepspeed_stage3_prefetch_bucket_size: 1000000
  find_unused_parameters: false
  gpus:
  - '0'
  huggingface_branch: main
  mixed_precision: true
  mixed_precision_dtype: bfloat16
  number_of_workers: 8
  seed: 42
  trust_remote_code: true
  use_deepspeed: false
experiment_name: emerald-leech
llm_backbone: openai/gpt-oss-20b
logging:
  log_all_ranks: false
  log_step_size: absolute
  logger: None
  neptune_project: ''
  wandb_entity: ''
  wandb_project: ''
output_directory: /home/llmstudio/mount/output/user/emerald-leech/
prediction:
  batch_size_inference: 0
  do_sample: false
  max_length_inference: 256
  max_time: 0.0
  metric: Perplexity
  metric_gpt_model: gpt-3.5-turbo-0301
  metric_gpt_template: general
  min_length_inference: 2
  num_beams: 1
  num_history: 4
  repetition_penalty: 1.0
  stop_tokens: ''
  temperature: 0.0
  top_k: 0
  top_p: 1.0
problem_type: text_causal_language_modeling
tokenizer:
  add_prompt_answer_tokens: false
  max_length: 768
  padding_quantile: 0.95
  tokenizer_kwargs: '{"use_fast": true, "add_prefix_space": false}'
training:
  attention_implementation: eager
  batch_size: 8
  differential_learning_rate: 1.0e-05
  differential_learning_rate_layers: []
  drop_last_batch: true
  epochs: 1
  evaluate_before_training: false
  evaluation_epochs: 1.0
  freeze_layers: []
  grad_accumulation: 8
  gradient_clip: 0.0
  learning_rate: 0.0002
  lora: true
  lora_alpha: 32
  lora_dropout: 0.05
  lora_r: 32
  lora_target_modules: q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj
  lora_unfreeze_layers: []
  loss_function: TokenAveragedCrossEntropy
  min_learning_rate_ratio: 0.0
  optimizer: AdamW
  save_checkpoint: last
  schedule: Cosine
  train_validation_data: false
  use_dora: false
  use_rslora: false
  warmup_epochs: 0.05
  weight_decay: 0.05
| 2025-09-06T17:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1na61bk/struggling_with_gptoss20b/ | Psychological_Ad8426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na61bk | false | null | t3_1na61bk | /r/LocalLLaMA/comments/1na61bk/struggling_with_gptoss20b/ | false | false | self | 8 | null |
Baseten raises $150M Series D for inference infra but where’s the real bottleneck? | 20 | Baseten just raised $150M Series D at a $2.1B valuation. They focus on inference infra like low latency serving, throughput optimization, developer experience.
They’ve shared benchmarks showing their embeddings inference outperforms vLLM and TEI, especially on throughput and latency. The bet is that inference infra is the pain point, not training.
But this raises a bigger question: what's the real bottleneck in inference?
* Baseten and others (Fireworks, Together) are competing on latency + throughput.
* Some argue the bigger cost sink is cold starts and low GPU utilization; serving multiple models elastically without waste is still unsolved at scale.
I wonder what everyone thinks…
Will latency/throughput optimizations be enough to differentiate?
Or is utilization (how efficiently GPUs are used across workloads) the deeper bottleneck?
Does inference infra end up commoditized like training infra, or is there still room for defensible platforms? | 2025-09-06T17:34:56 | https://www.reddit.com/r/LocalLLaMA/comments/1na5kmg/baseten_raises_150m_series_d_for_inference_infra/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na5kmg | false | null | t3_1na5kmg | /r/LocalLLaMA/comments/1na5kmg/baseten_raises_150m_series_d_for_inference_infra/ | false | false | self | 20 | null |
[Follow-up] A deep dive into my solo-dev narrative game engine, Synthasia (Multi-LLM & Local-First!) | 48 | Hey everyone,
First off, a massive thank you to this community. Last week, I posted a few teaser images of a project I've been pouring my heart into, and the interest and kind words were genuinely motivating. As promised, now that I've hammered out a few more systems and squashed some particularly nasty bugs, I'm ready to properly introduce you to **Synthasia**.
This is my solo-dev passion project, a journey to build an engine that allows for truly living narratives.
# The Ethos: Back to the Future of Gameplay
I genuinely feel like we're on the verge of a new era for gameplay. To understand where we're going, I felt we had to go back to where it all began: text adventures. Those games were powered not by GPUs, but by the single most powerful graphics processor in existence: the human imagination. My goal with Synthasia is to use the very latest in AI to recapture that feeling of boundless narrative freedom.
# Delivering a Story as a Game
At its core, Synthasia is an engine designed to deliver a story as a game, complete with light RPG mechanics, inventory, and stats. It gives creators the power to decide the balance between pre-written lore and AI-driven dynamism. You can define every detail of your world, or you can provide a high-level premise and let the AI take over, enriching characters, describing locations, and reacting to the player in ways you never planned for.
I have to be honest, the first time I saw an NPC I'd created get genuinely convinced through unscripted dialogue to hand over a keycard—a real, game-state-altering action—it was pure magic. It's that feeling I'm chasing.
# The Tech Stack (The Fun Part!)
I know this is what you're all here for! The entire engine is built with a local-first philosophy and a flexible, multi-LLM design.
# 1. The Multi-LLM Design: A "Creative Director" and a "Stage Manager"
Instead of relying on a single model, Synthasia orchestrates multiple LLMs, each with a specialized role.
* **The Primary LLM (The "Creative Director"):** This is the powerhouse for heavy, creative lifting: generating rich, atmospheric location descriptions, writing complex and nuanced NPC dialogue, and enriching the world with lore. For this role, bigger is often better for generating richer detail, but I've found that even the latest **4B parameter models are incredibly promising**.
* **The Secondary LLM (The "Stage Manager"):** This is where the local-first approach becomes incredible. The "Stage Manager" handles all the smaller, faster, high-frequency tasks where speed is critical. And here's the part that blew my mind: I'm having huge success running a tiny, blazing-fast **1.2B model (liquid/lfm2-1.2b) locally via Ollama** for this. It's responsible for:
* Summarizing conversations for an NPC's memory.
* Generating quick, atmospheric descriptions for player actions (e.g., picking up an item).
* Transforming conversational player input ("tell me more") into clean queries for the RAG system.
* Handling some game-world state changes and events
* Processing combat turns
* Extracting "emotions" from conversations to evaluate how relationships between NPCs and the player improve or deteriorate
* More...
This design allows for a super-responsive experience while keeping costs and privacy in check. We can even add more specialized models later for different tasks.
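To make the split concrete, here is a minimal sketch of the idea (not the actual engine code; the Ollama model tags, prompts, and function names are placeholders I made up):
```python
# Sketch: route quick "stage manager" tasks to a small local model and heavier
# "creative director" work to a larger one. Only the ollama-python API is real here.
import ollama

STAGE_MANAGER = "lfm2:1.2b"      # assumed local tag for the small, fast model
CREATIVE_DIRECTOR = "qwen3:4b"   # assumed tag for the richer primary model

def summarize_for_memory(npc_name: str, transcript: str) -> str:
    """Stage-manager task: compress a conversation into an NPC memory entry."""
    resp = ollama.chat(model=STAGE_MANAGER, messages=[
        {"role": "system", "content": f"Summarize this conversation from {npc_name}'s point of view in 2 sentences."},
        {"role": "user", "content": transcript},
    ])
    return resp["message"]["content"]

def write_npc_dialogue(npc_sheet: str, player_line: str) -> str:
    """Creative-director task: generate nuanced, in-character dialogue."""
    resp = ollama.chat(model=CREATIVE_DIRECTOR, messages=[
        {"role": "system", "content": npc_sheet},
        {"role": "user", "content": player_line},
    ])
    return resp["message"]["content"]
```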
# 2. The RAG System: Giving the World a Memory
Context windows are the final boss. My solution is a multi-stage RAG pipeline that acts as the world's memory. Before hitting the vector database with a conversational query, the local "Stage Manager" LLM rewrites it into a perfect, standalone question. This makes retrieval incredibly accurate. The RAG system also uses separate vector stores for global world knowledge and private NPC memories, so characters only know what they've personally experienced or been told.
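A rough sketch of the rewrite-then-retrieve step might look like this (the embedding helper, model tag, and store contents are invented for illustration; only the Ollama client call is a real API):
```python
# Sketch of the query-rewrite + retrieval idea, not the engine's real code.
import numpy as np
import ollama

def rewrite_query(history: str, user_turn: str) -> str:
    """Small local model turns 'tell me more' into a standalone retrieval query."""
    resp = ollama.chat(model="lfm2:1.2b", messages=[  # assumed model tag
        {"role": "system", "content": "Rewrite the user's last message as a standalone search query. Output only the query."},
        {"role": "user", "content": f"History:\n{history}\n\nLast message: {user_turn}"},
    ])
    return resp["message"]["content"].strip()

def retrieve(store: list[tuple[str, np.ndarray]], query_vec: np.ndarray, k: int = 3) -> list[str]:
    """Cosine-similarity search over one store (world lore OR one NPC's private memory)."""
    def score(item):
        text, vec = item
        return float(np.dot(vec, query_vec) / (np.linalg.norm(vec) * np.linalg.norm(query_vec)))
    return [text for text, _ in sorted(store, key=score, reverse=True)[:k]]

# Separate stores keep knowledge scoped: an NPC only "remembers" what is in its own store.
# world_store = [("The keep fell in the year 312.", embed(...)), ...]          # embed() is a stand-in
# npc_stores["blacksmith"] = [("Player asked about the keycard.", embed(...))]
```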
# 3. Game State & User Agency
The game state is managed centrally. Before any major world-altering AI call, a complete snapshot is taken. If the AI call fails, or if the player just doesn't like the response, they can hit a "Regenerate" button. This restores the snapshot and re-runs the query, giving the user real agency over their narrative experience.
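The snapshot/regenerate pattern itself is simple; a minimal sketch (with invented GameState fields) could look like:
```python
# Sketch of the snapshot/regenerate pattern described above; the fields are invented.
import copy

class GameState:
    def __init__(self):
        self.inventory, self.npc_memories, self.flags = [], {}, {}

def run_world_altering_call(state: GameState, llm_call):
    snapshot = copy.deepcopy(state)      # full snapshot before the risky AI call
    try:
        llm_call(state)                  # may mutate state (items, flags, memories)
        return state, snapshot
    except Exception:
        return snapshot, snapshot        # failure: restore the pre-call world

def regenerate(snapshot: GameState, llm_call):
    """Player hit 'Regenerate': restore the snapshot and re-run the same query."""
    return run_world_altering_call(copy.deepcopy(snapshot), llm_call)
```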
# Infinite Worlds, One Engine
Because the core systems are genre-agnostic, Synthasia can power vastly different experiences with equal fidelity—from interactive tales for little kids to deep, complex fantasy worlds or hard sci-fi mysteries.
# The Road Ahead & A Call for Testers!
This has been an incredible journey, but I know that a project like this thrives with a community. To that end, I've just set up a Discord server to gather feedback, share progress, and hopefully build a group of people excited to help shape the future of this engine.
We're getting very close to having something ready for an early alpha build. If you're interested in being one of the first to test it out, mess with the LLM settings, and see what kind of stories you can create, please feel free to **DM me here on Reddit or, even better, join the Discord!**
**Discord Link:** [https://discord.gg/2wc4n2GMmn](https://www.google.com/url?sa=E&q=https%3A%2F%2Fdiscord.gg%2F2wc4n2GMmn)
Thanks so much for your time and for being such an awesome and inspiring community. I can't wait to hear what you think. | 2025-09-06T17:07:54 | https://www.reddit.com/gallery/1na4vx4 | orblabs | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1na4vx4 | false | null | t3_1na4vx4 | /r/LocalLLaMA/comments/1na4vx4/followup_a_deep_dive_into_my_solodev_narrative/ | false | false | 48 | null | |
Inference at scale | 3 | Guys, there is a data center I might be involved with, and I want to know the best strategies for serving LLM inference at scale. For now the requirements are ~500 users within the organization, and the modality can be text-only. The model selection will also depend on the inference strategy. | 2025-09-06T16:46:16 | https://www.reddit.com/r/LocalLLaMA/comments/1na4clu/inference_at_scale/ | BABA_yaaGa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na4clu | false | null | t3_1na4clu | /r/LocalLLaMA/comments/1na4clu/inference_at_scale/ | false | false | self | 3 | null |
Minimal build review for local llm | 0 | Hey folks,
I’ve been wanting to have a setup for running local LLMs, and I have the chance to buy this second-hand build:
- RAM: G.SKILL Trident Z RGB 32GB DDR4-3200MHz
- CPU Cooler: Cooler Master MasterLiquid ML240L V2 RGB 240mm
- GPU: PNY GeForce RTX 3090 24GB GDDR6X
- SSD: Western Digital Black SN750SE 1TB NVMe
- CPU: Intel Core i7-12700KF 12-Core
- Motherboard: MSI Pro Z690-A DDR4
I’m planning to use it for tasks like agentic code assistance, but I’m also trying to understand what kinds of tasks I can do with this setup.
What are your thoughts?
Any feedback is appreciated :)
| 2025-09-06T16:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1na47q7/minimal_build_review_for_local_llm/ | Level-Assistant-4424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na47q7 | false | null | t3_1na47q7 | /r/LocalLLaMA/comments/1na47q7/minimal_build_review_for_local_llm/ | false | false | self | 0 | null |
Llama-3.3-Nemotron-Super-49B-v1.5 is a very good model for summarizing long text into formatted markdown (NVIDIA also provides free API access, with a rate limit) | 57 | I've been working on a project to convert medical lesson data from websites into markdown format for a RAG application. I tested several popular models, including Qwen3 235B, Gemma 3 27B, and GPT-OSS-120B; they all performed well technically, but as someone with a medical background, the output style just didn't click with me (totally subjective, I know).
So I decided to experiment with some models on NVIDIA's API platform and stumbled upon **Llama-3.3-Nemotron-Super-49B-v1.5**. This thing is surprisingly solid for my use case. I'd tried it before in an agent setup where it didn't perform great on evals, so I had to stick with the bigger models. But for this specific summarization task, it's been excellent.
The output is well-written, requires minimal proofreading, and the markdown formatting is clean right out of the box. Plus it's free through NVIDIA's API (40 requests/minute limit), which is perfect for my workflow since I manually review everything anyway.
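For anyone who wants to try it, a minimal call through NVIDIA's OpenAI-compatible endpoint looks roughly like this; the base URL and model id are my best guess, so double-check the current docs:
```python
# Sketch only: base_url and model id are assumptions about NVIDIA's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key="nvapi-...",                             # free key from build.nvidia.com
)

lesson_text = "..."  # the scraped lesson content goes here

resp = client.chat.completions.create(
    model="nvidia/llama-3.3-nemotron-super-49b-v1.5",  # assumed model id
    messages=[
        {"role": "system", "content": "Summarize the lesson below as clean, well-structured Markdown with headings and bullet lists."},
        {"role": "user", "content": lesson_text},
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```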
Definitely worth trying if you're doing similar work with medical or technical content; writing a good prompt is still key, though. | 2025-09-06T16:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/1na3xkd/llama33nemotronsuper49bv15_is_very_good_model_to/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na3xkd | false | null | t3_1na3xkd | /r/LocalLLaMA/comments/1na3xkd/llama33nemotronsuper49bv15_is_very_good_model_to/ | false | false | self | 57 | null |
Built QWEN3-0.6B mini inference engine in CUDA from scratch | 132 | I'm very much into CUDA and GPGPU programming but hadn't gotten into LLMs or NLP at all, so I built this side project as a hands-on way to learn about LLMs while practicing my CUDA programming.
I chose that cute tiny model, Qwen3-0.6B.
Statically configured, with a suckless philosophy in the code as much as possible; no dependencies to build beyond cuBLAS, CUB, and the standard I/O libs.
I know I'm missing something, but benchmarking with greedy sampling (temp=0) on my RTX 3050, I get 3x the speed of Hugging Face with flash-attn inference, and speed extremely comparable to llama.cpp.
My guess is the slight edge over llama.cpp comes from being hyper-specialized for just one model, allowing for more compile-time optimizations with no runtime branching.
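For context, the Hugging Face side of such a comparison can be timed roughly like the sketch below (model id, prompt, and generation settings are assumptions, not my exact benchmark script):
```python
# Plausible HF baseline for a greedy-decoding speed comparison (assumed settings).
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",  # requires flash-attn to be installed
).cuda()

inputs = tok("Explain CUDA shared memory in one paragraph.", return_tensors="pt").to("cuda")
start = time.time()
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy, i.e. temp=0
elapsed = time.time() - start
new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tok/s")
```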
Feel free to check out the GitHub repo if you want:
[https://github.com/yassa9/qwen600](https://github.com/yassa9/qwen600) | 2025-09-06T16:25:42 | https://v.redd.it/xh5qjozxkknf1 | yassa9 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na3u8w | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xh5qjozxkknf1/DASHPlaylist.mpd?a=1759767958%2CZTcwZDBhMGZiMjI0NjllYWY4MmM4ODFhODgzOWZiNWFiMDM4M2M1OTJhNWRmOGU5MGVhY2I4NWZmODRjNWQwMw%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/xh5qjozxkknf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xh5qjozxkknf1/HLSPlaylist.m3u8?a=1759767958%2CNjJhMTY2NGMwZmE1MzA1NTdhMDJhMjZiOGMwNWMyMGJiYjlhOGZkM2FkZWMyY2NhNjlkNTJlMGYyNjU3YjdjZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xh5qjozxkknf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1na3u8w | /r/LocalLLaMA/comments/1na3u8w/built_qwen306b_mini_inference_engine_in_cuda_from/ | false | false | 132 | {'enabled': False, 'images': [{'id': 'ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=108&crop=smart&format=pjpg&auto=webp&s=647d511795c5a71c8bf53b4e74b189f1c3815b62', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=216&crop=smart&format=pjpg&auto=webp&s=106c1f2bf535ca6c6b9b72f39132043f50641ee0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=320&crop=smart&format=pjpg&auto=webp&s=eea6d585914ae3dddc1673493090da27251474d6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=640&crop=smart&format=pjpg&auto=webp&s=db907feb0e6aaef3b07aa6f86274fdd2da937b72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=960&crop=smart&format=pjpg&auto=webp&s=825da446504c35934536b503eeb1f05ee0474c74', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=72ecb9424c211c0a556fea1a2fae9439bff0717a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZXpiNW9wenhra25mMbxpIYt75C5r2fT6cxhEXwgg3zD3iMqrP6b9J5eZI0o1.png?format=pjpg&auto=webp&s=1e5b264a7b8f768c109a339e8462c77b4f930194', 'width': 1920}, 'variants': {}}]} | |
guys i have a question is there any ai model providing the free api key even if limit im fine with that | 0 | I need an API for a project; if anyone can help me, that would be great. | 2025-09-06T16:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1na3mni/guys_i_have_a_question_is_there_any_ai_model/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na3mni | false | null | t3_1na3mni | /r/LocalLLaMA/comments/1na3mni/guys_i_have_a_question_is_there_any_ai_model/ | false | false | self | 0 | null |
UI-Tars-1.5 reasoning never fails to entertain me. | 0 | The era of local Computer-Use AI Agents is here.
Meet UI-TARS-1.5-7B-6bit, now running natively on Apple Silicon via MLX.
Made using : https://github.com/trycua/cua | 2025-09-06T16:16:32 | Impressive_Half_2819 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na3lv4 | false | null | t3_1na3lv4 | /r/LocalLLaMA/comments/1na3lv4/uitars15_reasoning_never_fails_to_entertain_me/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'c3m2zuhlkknf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/c3m2zuhlkknf1.jpeg?width=108&crop=smart&auto=webp&s=c2651dcb62acfa10efd8d66e40ebd974ba122b31', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/c3m2zuhlkknf1.jpeg?width=216&crop=smart&auto=webp&s=f174124b420880fd06e5bcfdedc191e2ea2d74d5', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/c3m2zuhlkknf1.jpeg?width=320&crop=smart&auto=webp&s=4b713dfc122d174b8a152d8cb6f5f7276536e0de', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/c3m2zuhlkknf1.jpeg?width=640&crop=smart&auto=webp&s=478c705ef58bcab2b004f3b372858b37ddbd02ae', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/c3m2zuhlkknf1.jpeg?auto=webp&s=0c34cf57ca0f66b2186650a1c29fde163cbfb674', 'width': 729}, 'variants': {}}]} | |
today grok code fast 1 brake the 1 trillion mark in the open router history | 0 | 2025-09-06T16:12:30 | Select_Dream634 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na3icm | false | null | t3_1na3icm | /r/LocalLLaMA/comments/1na3icm/today_grok_code_fast_1_brake_the_1_trillion_mark/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'l35vq1trjknf1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=108&crop=smart&auto=webp&s=2861b0b60a5be131d20748822f05681d64fef67a', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=216&crop=smart&auto=webp&s=10dc1e81b4810f780500617445544a8a34a1ac5c', 'width': 216}, {'height': 139, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=320&crop=smart&auto=webp&s=44245cfe9d27d0f4953bf004f84bedbe166a5170', 'width': 320}, {'height': 278, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=640&crop=smart&auto=webp&s=4ad02854dcca1e4396c4063e031e5a95da35a2c9', 'width': 640}, {'height': 417, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=960&crop=smart&auto=webp&s=e76419ec2e97896e9968a79aa00ed5e91255921c', 'width': 960}, {'height': 470, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?width=1080&crop=smart&auto=webp&s=6193d3fa304f4131f52235cccda9e42d19660626', 'width': 1080}], 'source': {'height': 818, 'url': 'https://preview.redd.it/l35vq1trjknf1.png?auto=webp&s=a5c98ae581e1d9d1567d38aa7703d7de9ffec7d7', 'width': 1879}, 'variants': {}}]} | ||
Renting GPUs is hilariously cheap | 1,525 | A 140 GB monster GPU that costs $30k to buy, plus the rest of the system, plus electricity, plus maintenance, plus a multi-Gbps uplink, for a little over 2 bucks per hour.
If you use it for 5 hours per day, 7 days per week, and factor in auxiliary costs and interest rates, buying that GPU today vs. renting it when you need it will only pay off in 2035 or later. That’s a tough sell.
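The back-of-envelope version of that claim, with the prices above treated as assumptions:
```python
# Back-of-envelope break-even estimate; exact prices are assumptions.
rent_per_hour = 2.20          # "a little over 2 bucks"
hours_per_year = 5 * 365      # 5 h/day, 7 days/week
gpu_price = 30_000            # GPU alone, ignoring the rest of the system

yearly_rent = rent_per_hour * hours_per_year   # ~ $4,015
break_even_years = gpu_price / yearly_rent     # ~ 7.5 years on the GPU alone
print(round(yearly_rent), round(break_even_years, 1))
# Adding the host system, power, and interest pushes this toward the ~10-year figure above.
```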
Owning a GPU is great for privacy and control, and obviously, many people who have such GPUs run them nearly around the clock, but for quick experiments, renting is often the best option.
| 2025-09-06T16:08:44 | -p-e-w- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na3f1s | false | null | t3_1na3f1s | /r/LocalLLaMA/comments/1na3f1s/renting_gpus_is_hilariously_cheap/ | false | false | 1,525 | {'enabled': True, 'images': [{'id': 'RDgco0KHfsARtUSRJQyja7YELP_Hyoc2oPSO1CbtxS4', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=108&crop=smart&auto=webp&s=2783f7491f1e55f2146da44a640e338d5866e1be', 'width': 108}, {'height': 47, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=216&crop=smart&auto=webp&s=247d5a4dbe26ecfa26147902b1e4fd1ccf76a2d1', 'width': 216}, {'height': 70, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=320&crop=smart&auto=webp&s=c3e34b5b4f7c7b4ccad772c02e9a52fc05dc764d', 'width': 320}, {'height': 140, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=640&crop=smart&auto=webp&s=1bca94832d9e6b8fb7b8faf80d61387d12889d7f', 'width': 640}, {'height': 211, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=960&crop=smart&auto=webp&s=8494dae51a8ae8472164ae5863491d3f88a6aa93', 'width': 960}, {'height': 237, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?width=1080&crop=smart&auto=webp&s=cc29f95c8ca6da59026bdd7398dbd64fd170990c', 'width': 1080}], 'source': {'height': 241, 'url': 'https://preview.redd.it/dhtzimf7jknf1.jpeg?auto=webp&s=3aed32b41be94f3068bd59f46bebc9ba28238209', 'width': 1096}, 'variants': {}}]} | ||
Custom Dataset for Fine Tuning | 6 | Can anyone drop a tip or any suggestions/recommendations for how to create your own dataset to fine-tune an LLM?
What is the minimum number of rows we should use?
Should we use the prompt/completion method, or the role/content method with system, user, and assistant messages?
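For reference, the two layouts usually look something like this (field names follow common conventions; the content is purely illustrative):
```python
# Two common ways to lay out one training example as JSONL (illustrative content only).
import json

prompt_completion = {
    "prompt": "Summarize: The mitochondrion is the organelle that...",
    "completion": "Mitochondria generate most of the cell's ATP...",
}

chat_format = {
    "messages": [
        {"role": "system", "content": "You are a concise medical tutor."},
        {"role": "user", "content": "Summarize: The mitochondrion is the organelle that..."},
        {"role": "assistant", "content": "Mitochondria generate most of the cell's ATP..."},
    ]
}

with open("train.jsonl", "w") as f:
    for row in (prompt_completion, chat_format):  # in practice, pick ONE format per dataset
        f.write(json.dumps(row) + "\n")
```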
Please drop your thoughts on this🙏🏻🙃 | 2025-09-06T16:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/1na3dit/custom_dataset_for_fine_tuning/ | Old-Raspberry-3266 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na3dit | false | null | t3_1na3dit | /r/LocalLLaMA/comments/1na3dit/custom_dataset_for_fine_tuning/ | false | false | self | 6 | null |
Fixing up this desktop | 2 | This entry-level prebuilt desktop has been sitting unused for a year or two. I've been getting into running some models locally, mostly for some projects I'm working on, and want to give this thing a purpose again. The plan is to build an actual rig with better hardware at some point, but I'm willing to put $500 into this now to make it more capable. Here are the current specs:
CPU: Intel Core i5 10400F
GPU: NVIDIA GeForce RTX 3060 Ti 12gb
Memory: Team T-FORCE Vulcan Z (2X8GB at 3200 MHz)
Motherboard: B560
Storage: Western Digital 1TB Blue SN550 NVMe
PSU: Seasonic 550W Bronze
Chassis: H510
I was looking at adding another 12GB 3060 and upgrading the RAM to at least 32GB. I think I'll probably also have to swap out the PSU for a 750W unit to handle the extra GPU.
What do you think? Is the dual-3060 setup worth doing? It seems like the most cost-effective way to get this system up to 24GB of VRAM. Or should I just save up for a single 24GB 3090? I wouldn't need a new PSU if I went that route.
I appreciate any input you have, thanks!
| 2025-09-06T16:06:32 | animal_hoarder | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na3d4s | false | null | t3_1na3d4s | /r/LocalLLaMA/comments/1na3d4s/fixing_up_this_desktop/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'k0g6d3dtiknf1', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=108&crop=smart&auto=webp&s=9a1dc8c6148c280afcffb8584c57ce64096b1125', 'width': 108}, {'height': 280, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=216&crop=smart&auto=webp&s=d84762c6dd60f05e5789cb953ba0a399af100cb2', 'width': 216}, {'height': 415, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=320&crop=smart&auto=webp&s=64dbc6e6646f76e96becdfe1e651b68bf3095802', 'width': 320}, {'height': 830, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=640&crop=smart&auto=webp&s=88b3fc9bad808e71857293e4de0ebb582da48826', 'width': 640}, {'height': 1246, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=960&crop=smart&auto=webp&s=9d3358984482cf4d3ce97ce96e87d5dd7b3f90e7', 'width': 960}, {'height': 1402, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?width=1080&crop=smart&auto=webp&s=e9df5cfb3a094895a84fe8b3367f0982a692bfb7', 'width': 1080}], 'source': {'height': 3246, 'url': 'https://preview.redd.it/k0g6d3dtiknf1.jpeg?auto=webp&s=407ed3cb0e65ba73ee23f9d8e9bbacf2b61d8123', 'width': 2500}, 'variants': {}}]} | |
Bringing Computer Use to the Web | 1 | Bringing Computer Use to the Web: control cloud desktops from JavaScript/TypeScript, right in the browser.
Until today, computer-use was Python-only, shutting out web devs. Now you can automate real UIs without servers, VMs, or weird workarounds.
What you can build: pixel-perfect UI tests, live AI demos, in-app assistants that actually move the cursor, or parallel automation streams for heavy workloads.
GitHub: https://github.com/trycua/cua
Blog : https://www.trycua.com/blog/bringing-computer-use-to-the-web | 2025-09-06T15:59:58 | https://v.redd.it/ojlmrwxmhknf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na36w0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ojlmrwxmhknf1/DASHPlaylist.mpd?a=1759766412%2CODQ1OGFjYWRhYzJiODRhNmE1N2JjMzBjNDY4ZjZiZTg5ODViNmY1NGRiOWZiODI5YzI1YmZlZWVkNThjNzQ0Ng%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/ojlmrwxmhknf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ojlmrwxmhknf1/HLSPlaylist.m3u8?a=1759766412%2CMzRmNDcwYmIwOWY5M2VkNTA3NDA0ZGE3MGM2N2EwZDU0Yzc1ZjRiYzY0MWRlMDdjMDJiYjZhMDdiYzIyM2E4OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ojlmrwxmhknf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1na36w0 | /r/LocalLLaMA/comments/1na36w0/bringing_computer_use_to_the_web/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=108&crop=smart&format=pjpg&auto=webp&s=a58561b35c6a04ea9e091f9ea95131b6496c4c72', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=216&crop=smart&format=pjpg&auto=webp&s=8b15368c1532110740e282d739242089b42d72e3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=320&crop=smart&format=pjpg&auto=webp&s=c53fe052dbfe6bced17048671c649eaef3288b8f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a8844710e87382c63f3471679ddc76d964dc2a3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=960&crop=smart&format=pjpg&auto=webp&s=001747c44d33d0a45af54cd2f263e0f1b590ad43', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c96c6d2ee6fb90d0fdc0cbe3b6a984d3c5c77614', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bHhrbXAxbm1oa25mMRcxEnlpDBBJVNjXlCDC4HUtgXjfB5ufLszRpp9PEi0H.png?format=pjpg&auto=webp&s=ac1b014083f94d05668479f8b20e563a458b513f', 'width': 1920}, 'variants': {}}]} | |
[Level 1] Building Personalized Text Summarization - Following up on Personal Chatbot Success | 2 | **Background from Level 0:** Successfully completed my first fine-tuning project (personal chatbot) using Unsloth + abliterated Llama 3.2 3B with 1400 examples. Thanks to community advice, switched from regular Llama to `huihui-ai/Llama-3.2-3B-Instruct-abliterated` which solved safety trigger issues. Model now responds as me instead of generic AI assistant responses.
Previous post: [https://www.reddit.com/r/LocalLLaMA/comments/1n81d1t/level\_0\_finetuned\_my\_first\_personal\_chatbot/ ](https://www.reddit.com/r/LocalLLaMA/comments/1n81d1t/level_0_finetuned_my_first_personal_chatbot/)
Google Colab Code: [https://colab.research.google.com/drive/1Az3gFYEKSzPouxrhvES7v5oafyhnm80v#scrollTo=0yBqGdhl\_po9](https://colab.research.google.com/drive/1Az3gFYEKSzPouxrhvES7v5oafyhnm80v#scrollTo=0yBqGdhl_po9)
**Level 1 Challenge:** Want to build personalized text summarization that reflects my teaching/explanation style. Instead of generic summaries, I want summaries that follow my specific approach.
**My summarization style:**
1. Start with simple, kid-friendly analogy (even for adults)
2. Build technical implementation/definition from that analogy
3. Use same analogy to answer follow-up questions
4. Extend the analogy for related subtopics
5. Include visual diagrams when possible
6. Derive formulas step-by-step, explaining each variable
Example: Explaining machine learning as "teaching a kid to recognize cats" → building to training data, algorithms, parameters → extending to deep learning as "layered understanding" → deriving the mathematical formulas with each variable explained. Hope this explains things...
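One way a single training record could encode that style in chat format (everything below is invented, just to show the structure):
```python
# Hypothetical training record encoding the analogy-first style; content is made up.
style_instruction = (
    "Summarize the document. Start with a kid-friendly analogy, then build the technical "
    "definition from that analogy, and derive any formulas step by step, naming each variable."
)

record = {
    "messages": [
        {"role": "system", "content": style_instruction},
        {"role": "user", "content": "<full article text about gradient descent>"},
        {"role": "assistant", "content": (
            "Imagine rolling a marble down a bumpy hill to find the lowest spot. "
            "Gradient descent does the same with a loss surface: at each step it moves "
            "the parameters downhill, theta = theta - lr * grad, where lr is the step size..."
        )},
    ]
}
```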
**Technical questions:**
1. **Dataset creation**: How do I create training data for this specific style? Do I manually summarize 500+ documents in my approach, or is there a smarter way to capture this pattern?
2. **Model choice**: Should I fine-tune a dedicated summarization model or extend my existing personal chatbot to handle summarization tasks?
3. **Style capture**: How do I train the model to consistently use analogies first, then build technical concepts? This seems harder than just "write summaries."
4. **Multi-document handling**: How do I handle different content types (research papers vs articles vs documentation) while maintaining my explanation style?
**My setup:** M4 MacBook (16GB RAM), comfortable with Unsloth workflow, can use Colab for training.
**What worked from Level 0 that I'll reuse:**
* Abliterated models to avoid safety lectures
* Quality over quantity for dataset
* LoRA fine-tuning approach
* Gradio interface for testing
**Specific help needed:**
* Examples of style-specific summarization datasets
* Techniques for teaching consistent explanation patterns
* Whether my teaching style is too complex for current fine-tuning methods
Has anyone tackled personalized summarization before? What approaches worked/failed?
I'd appreciate it if someone could provide a step-by-step method for how to make this one. Also, I got some comments on my last post questioning my model choice; I am a beginner, so my choices aren't great and are naive, but I am learning at each step. Sorry to offend anyone. | 2025-09-06T15:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/1na32ws/level_1_building_personalized_text_summarization/ | FastCommission2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na32ws | false | null | t3_1na32ws | /r/LocalLLaMA/comments/1na32ws/level_1_building_personalized_text_summarization/ | false | false | self | 2 | null |
Kimi K2 0905 Official Pricing (generation, tool) | 60 | Quite cheap for a model this big! Consider using the official API instead of Openrouter, it directly supports the model builders (PS: I looked for "non-local" flair and couldn't find it). | 2025-09-06T15:35:30 | https://www.reddit.com/gallery/1na2l5b | entsnack | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1na2l5b | false | null | t3_1na2l5b | /r/LocalLLaMA/comments/1na2l5b/kimi_k2_0905_official_pricing_generation_tool/ | false | false | 60 | null | |
Has anyone here had any experience ordering from Tenstorrent or dealing with their customer service? | 8 | I'm a fan of Jim Keller, and love the mission behind his product, but my Blackhole cards have been stuck in customs for over a week, pending paperwork that was never sent to the carrier. Despite reaching out to them multiple times, I have yet to get any response. After digging, I found their phone number only to call and discover the voice mail hasn't even been set up. This whole experience has been disappointing and has led me to question even ordering these cards to begin with. I have never experience such a total lack of customer service, and am very frustrated at this point. | 2025-09-06T15:24:01 | https://www.reddit.com/r/LocalLLaMA/comments/1na2auz/has_anyone_here_had_any_experience_ordering_from/ | elephantgif | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na2auz | false | null | t3_1na2auz | /r/LocalLLaMA/comments/1na2auz/has_anyone_here_had_any_experience_ordering_from/ | false | false | self | 8 | null |
Did openai just found way to solve hallucinations!! | 0 | 2025-09-06T15:16:22 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1na23uv | false | null | t3_1na23uv | /r/LocalLLaMA/comments/1na23uv/did_openai_just_found_way_to_solve_hallucinations/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '5cpaebxu9knf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=108&crop=smart&auto=webp&s=b39c4fd7923edc72e9ccea2186353b235319607a', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=216&crop=smart&auto=webp&s=1e333082da364c9bc2119d67243dc1271f8b729d', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=320&crop=smart&auto=webp&s=dc45ff2b58592f221753887140012b6715f3aadb', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=640&crop=smart&auto=webp&s=b1d7ffcb1c15cf5c9300acd3cf81901e243936ca', 'width': 640}, {'height': 735, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=960&crop=smart&auto=webp&s=d9c7d5a9aff776f3ca13934740bc8043297779de', 'width': 960}, {'height': 827, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?width=1080&crop=smart&auto=webp&s=f62b4b8978eff24f265218f4bb6d27c2f12b5ec6', 'width': 1080}], 'source': {'height': 879, 'url': 'https://preview.redd.it/5cpaebxu9knf1.jpeg?auto=webp&s=9cd9408b903699e86e033442830fdbfbd3f43f48', 'width': 1147}, 'variants': {}}]} | ||
Prompting forces you to have a clear vision (especially w GPT5) | 0 | Can't be mentioned enough.
What's true for endless vibecoding odysseys where GPT (and 5 in particular) tends to add stuff and go outside the context logic that's actually given - for example in an existing codebase - is also true for other knowledge work.
I am developing a strategic approach for a company, within a given context defined by strategic frameworks. The exact moment GPT starts to propose new categories or approaches outside of the given framework (and it does this a lot), the result tends to become overloaded and overly complicated. Very much what I observed with vibecoding too.
This forces me to understand what I really want and the context I am operating within very, very clearly.
It goes to show clearly the limits of the "AI revolution", as it exemplifies that it really is just a statistical model - not a being that is able to understand intent beyond what's provided with context. And since context provided by the user is so important, it's the user who has to understand what an experienced manager or experienced engineer would otherwise correctly and intuitively assume beyond what's written in a prompt.
It adds to the statement: "You're not going to be replaced by AI, but by people who know how to use it." And it also means, it's not just about knowing how to prompt, but to know the domain you're working on inside out.
This makes me actually much better at my job, as it forces me to very clearly understand where I want to go, understand why I'm using the tools I'm using and keeping GPT aligned as a result.
Finally, it also means the omnipotent AI overlord that's feared (or maybe anticipated by less qualified people) is an unrealistic scenario. | 2025-09-06T15:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/1na1za8/prompting_forces_you_to_have_a_clear_vision/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na1za8 | false | null | t3_1na1za8 | /r/LocalLLaMA/comments/1na1za8/prompting_forces_you_to_have_a_clear_vision/ | false | false | self | 0 | null |
Qwen3 30B A3B Models Missing in LM Studio | 0 | For ollama these are the models available for Qwen3 30B A3B:
* qwen3-coder:30b-a3b-q4_K_M
* qwen3-coder:30b-a3b-q8_0
* qwen3-coder:30b-a3b-fp16
In LM Studio Community these are the models available for Qwen3 30B A3B:
* qwen3-coder:30b-a3b-q3_K_L
* qwen3-coder:30b-a3b-q4_K_M
* qwen3-coder:30b-a3b-q6_K
* qwen3-coder:30b-a3b-q8_K
I get great results with qwen3-coder:30b-a3b-fp16 in ollama. I'd prefer to use it in LM Studio but it doesn't seem to exist. I tried the unsloth BF16 version but it doesn't work nearly as well as the native ollama qwen3-coder:30b-a3b-fp16. Why is the fp16 version missing in LM Studio? | 2025-09-06T14:34:00 | https://www.reddit.com/r/LocalLLaMA/comments/1na12uk/qwen3_30b_a3b_models_missing_in_lm_studio/ | podred800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na12uk | false | null | t3_1na12uk | /r/LocalLLaMA/comments/1na12uk/qwen3_30b_a3b_models_missing_in_lm_studio/ | false | false | self | 0 | null |
Strix Halo on Ubuntu looks great - Netstatz | 11 | Not the author, just sharing an article written by a GitHub contributor. I appreciate that it’s an end to end tutorial with code that includes all the problems/challenges! | 2025-09-06T14:29:10 | https://netstatz.com/strix_halo_lemonade/ | jfowers_amd | netstatz.com | 1970-01-01T00:00:00 | 0 | {} | 1na0yp7 | false | null | t3_1na0yp7 | /r/LocalLLaMA/comments/1na0yp7/strix_halo_on_ubuntu_looks_great_netstatz/ | false | false | default | 11 | {'enabled': False, 'images': [{'id': 'NJZpfJ99mI8JFoVvYOBKMFZAOXMI0D90oeAZs96TxjQ', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/NJZpfJ99mI8JFoVvYOBKMFZAOXMI0D90oeAZs96TxjQ.jpeg?width=108&crop=smart&auto=webp&s=7eec8e3030b0c17f32a34dd97c5d17d32600b09d', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/NJZpfJ99mI8JFoVvYOBKMFZAOXMI0D90oeAZs96TxjQ.jpeg?width=216&crop=smart&auto=webp&s=8bfa184dd47aead2b361177ffeb7754292954b27', 'width': 216}], 'source': {'height': 210, 'url': 'https://external-preview.redd.it/NJZpfJ99mI8JFoVvYOBKMFZAOXMI0D90oeAZs96TxjQ.jpeg?auto=webp&s=dbdf7d59556fcb1d98f7acef2d9ef4ebbf755715', 'width': 300}, 'variants': {}}]} |
Strix Halo on Ubuntu looks great - Netstatz | 1 | [removed] | 2025-09-06T14:26:09 | https://netstatz.com/strix_halo_lemonade/ | jfowers_amd | netstatz.com | 1970-01-01T00:00:00 | 0 | {} | 1na0w65 | false | null | t3_1na0w65 | /r/LocalLLaMA/comments/1na0w65/strix_halo_on_ubuntu_looks_great_netstatz/ | false | false | default | 1 | null |
Qwen3 30B A3B Hits 13 token/s on 4x Raspberry Pi 5 | 131 | 2025-09-06T14:15:15 | https://github.com/b4rtaz/distributed-llama/discussions/255 | vibjelo | github.com | 1970-01-01T00:00:00 | 0 | {} | 1na0mw3 | false | null | t3_1na0mw3 | /r/LocalLLaMA/comments/1na0mw3/qwen3_30b_a3b_hits_13_tokens_on_4x_raspberry_pi_5/ | false | false | default | 131 | {'enabled': False, 'images': [{'id': 'KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=108&crop=smart&auto=webp&s=06d7401a6e5999b0a0217ef6b4acbaa7a56631c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=216&crop=smart&auto=webp&s=ac96a579b70dcb81b85c3309ce81d06017fcc35c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=320&crop=smart&auto=webp&s=3a890f57ecdb4a316b5bce7d8bc6acaa9e610f26', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=640&crop=smart&auto=webp&s=4733d277f6f6e759fe47794087d1da790f8d36b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=960&crop=smart&auto=webp&s=fb26627ab8849438dbe4b35d34d6b3665f48943c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?width=1080&crop=smart&auto=webp&s=8c7b67c6cd00d8d68d14385d500e9be57c5e9372', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KUWKhlT5OZYpzmuPdkrY6FyowQ4PaYe23RiUvraDVrQ.png?auto=webp&s=23a8552f0dbd903c0723bd9c297e053626f3b8db', 'width': 1200}, 'variants': {}}]} | |
How big to start | 9 | I've been lurking in this sub for a while, and it's been awesome. I'm keen to get my hands dirty and build a home server to run local experiments. I'd like to hit a couple of birds with one stone: I'm keen to explore a local LLM to help me write some memoirs, for example, and I think it would be a fun experience to build a beefy server with my teenage boy. The issue is, there are simply too many options, and given it's likely to be a ~$10k USD build (dual 4090s, e.g.), I figured I'd ask the sub for advice or reliable sources. I'm a decently comfortable sysadmin, but that gives me the dread of unsupported hardware and that sort of thing. | 2025-09-06T14:00:23 | https://www.reddit.com/r/LocalLLaMA/comments/1na0aam/how_big_to_start/ | engineeredjoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na0aam | false | null | t3_1na0aam | /r/LocalLLaMA/comments/1na0aam/how_big_to_start/ | false | false | self | 9 | null |
EPYC vs. Xeon for Hybrid Inference Server? | 5 | Hello all,
I'm looking to put together a server primarily to serve hybrid inference for large MoE models. After deciding to go with a server board for the memory bandwidth and deciding on GPUs (Blackwells), I'm looking to get some input on the CPU/RAM configuration.
I'm quite lacking on knowledge about server-grade chips, so please excuse any misconceptions below. Any input from those who have more experience with these setups would be greatly appreciated.
The use case is serving hybrid inference of large MoE models with low concurrency (i.e. not doing a ton of batched inference), and keeping TTFT/latency low is a priority. K/V cache can likely be offloaded entirely to VRAM, dependent upon the exact configuration I end up settling on.
**1. Xeon vs. EPYC**
Deciding between Xeon and EPYC is tough, as I don't fully know the comparative advantages that each has over the other yet. That being said, here's what I've noted:
* Some Xeon models have AMX instructions, which is significantly more efficient on a per-core basis for matmul. This drives faster prompt processing times, while the actual token generation is then based on memory bandwidth. I have also heard that AMX instructions require custom kernels to really get any benefit, and the advantage would be lost without them, but most prominent backends do appear to have AMX support.
* At comparable costs, EPYC chips appear to have, on average, more cores than Xeon chips. I have heard that core/thread count has an upper bound for accelerating PP. In theory, the core count does not affect t/s, since that is memory bandwidth-bound. It only affects PP, which is not a fair core-for-core comparison between the two assuming that AMX support is present.
* At the high end, Xeon Max (Sapphire Rapids) chips have 64GB of HBM2e cache. Now, whether or not this (or L3 cache amount or speed, for that matter) does anything for low-concurrency inference - I don't know.
* Of the latest processors (Xeon 6, EPYC 9005), EPYC appears to have the advantage in memory bandwidth, offering both more channels and more theoretical peak bandwidth. This means higher token generation speeds *once* prompt processing is done.
* NUMA may cause issues with EPYC chips with multiple CCDs, but this has been addressed in the 9005 series, and I've been told that it presents as a single instance due to a unified memory controller.
So, I clearly have a lot of reading to do. The general picture I've gotten is that Intel has an advantage in matmul (and thus PP) due to AMX instructions, but this may not be applicable in all cases. EPYC offers a higher number of cores and higher overall memory bandwidth.
For highly concurrent batched inference, I would think that EPYC has the edge. For single-user/low-latency inference, the faster PP speeds on Xeon due to AMX wins, pending kernel support. I don't know if the faster overall memory bandwidth on EPYC systems is able to compensate for this in overall inference time. AMX is tempting, but so is memory bandwidth. Not sure where to go here.
**2. Core/Thread Count and Clock Speed**
EPYC chips have more cores, whereas Intel has fewer, but with AMX, as mentioned. As far as I can tell, this means that Intel is more efficient per-core, whereas EPYC just has more cores with AVX support to try to even this out.
The core count theoretically drives matrix multiplication, and thus affects prompt processing speed. I have heard that there's an upper bound to this, but I don't know if that's always the case, or backend/kernel-dependent.
Clock speed/frequency is where I really lose the thread. Higher core count appears to generally correlate with lower clock speeds. What the interplay is, exactly - between core count, core efficiency (P-cores vs E-Cores, AMX/non-AMX, etc.), and individual core clock speed - is what I'm trying to figure out.
**3. RAM Configuration/Channels**
EPYC appears to have higher memory bandwidth overall. This directly affects inference speed following prompt processing.
If I understand the memory controller implementation correctly, it would appear that due to interleaving memory access, any amount of parameters offloaded to system RAM is spread evenly among all available memory channels. Assuming that all channels are populated, that would still be an advantage for AMD in this area. As mentioned, previous gen EPYC chips with >1 CCD may have had NUMA issues, but this has been corrected for in the latest series, if I understand correctly.
If there is no penalty for having an excess of RAM in terms of bandwidth, then I suppose having more rather than less would be better. Models are only getting larger nowadays. I'm thinking around 1~1.5TB should do it. All DDR5, and hopefully supported at 6400MT/s.
This is another thing - not all the chips mentioned are stable at/support 6400MT/s DDR5. Since loading K/V cache onto VRAM can alleviate any issues with PP speeds, but experts have to be loaded/unloaded off of RAM by necessity, I would assume both bandwidth and frequency are a factor here.
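As a sanity check on the bandwidth argument, here is the usual rule of thumb for decode speed, with all numbers purely illustrative:
```python
# Rough rule of thumb linking memory bandwidth to decode speed for CPU-offloaded experts.
# All numbers below are illustrative assumptions, not measurements.
bandwidth_gb_s = 576          # e.g., 12-channel DDR5-6000-class platform, theoretical peak
active_params_b = 37          # active parameters per token for a large MoE (illustrative)
bytes_per_param = 0.55        # ~4.4 bits per weight for a Q4-ish quant

bytes_per_token_gb = active_params_b * bytes_per_param   # billions of params x bytes ~= GB/token
upper_bound_tok_s = bandwidth_gb_s / bytes_per_token_gb
print(f"~{upper_bound_tok_s:.0f} tok/s upper bound from bandwidth alone")
# Real numbers land well below this; prompt processing is compute-bound (cores/AMX) instead.
```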
**4. Single vs. Dual Socket**
From what I know, there is really no argument in favor of dual socket for a low-concurrency, low-latency situation. Aside from the future ability to populate more PCIe lanes (which is a factor with a machine such as this), dual socket can cause slowdowns due to issues with NUMA, and does not necessarily lead to a linear increase in either matmul throughput nor memory bandwidth.
In addition to the potential memory latency, two sockets means two processors. That adds significantly to the cost without a concomitant increase in throughput.
Unless I'm way off base here, I'm thinking single socket is the way to go. Taking a look at most configurations available, though, the ones that support 4+ GPUs appear to largely be dual socket configurations. I'm wondering if there's a reason for this that I'm missing.
Am I correct in thinking that single socket is the way to go in this use case?
**.**
That's where I'm at. I also briefly considered the latest Threadripper Pro chips, but the lower number of memory lanes has dissuaded me. If there's an argument to be made for them (perhaps if higher boost/turbo clock speed matters, etc.), then please do feel free to correct me.
Any input is welcome.
Cheers
| 2025-09-06T13:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1na01pi/epyc_vs_xeon_for_hybrid_inference_server/ | HvskyAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1na01pi | false | null | t3_1na01pi | /r/LocalLLaMA/comments/1na01pi/epyc_vs_xeon_for_hybrid_inference_server/ | false | false | self | 5 | null |
Tested sonoma-sky-alpha on Fiction.liveBench, fantastic close to SOTA scores, currently free | 14 | 2025-09-06T13:40:47 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n9zuil | false | null | t3_1n9zuil | /r/LocalLLaMA/comments/1n9zuil/tested_sonomaskyalpha_on_fictionlivebench/ | false | false | default | 14 | {'enabled': True, 'images': [{'id': 'lbbwz19ssjnf1', 'resolutions': [{'height': 170, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=108&crop=smart&auto=webp&s=46dce58f29d39140cfaec4960750e60692b6d80d', 'width': 108}, {'height': 340, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=216&crop=smart&auto=webp&s=f476a183997391fe7079a743719677287ed53e61', 'width': 216}, {'height': 504, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=320&crop=smart&auto=webp&s=efe2a97b5086be77c3bae7a17be7dfe1924602b6', 'width': 320}, {'height': 1008, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=640&crop=smart&auto=webp&s=6c7f0bc4e724dc56e61b41a81da5f3d8d98fc7b0', 'width': 640}, {'height': 1513, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=960&crop=smart&auto=webp&s=2c73b34c07218227d06d622cb5966982f082e159', 'width': 960}, {'height': 1702, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?width=1080&crop=smart&auto=webp&s=494acb4ecf87c20aa4d4f997f030df237d2184bd', 'width': 1080}], 'source': {'height': 2462, 'url': 'https://preview.redd.it/lbbwz19ssjnf1.png?auto=webp&s=d74bc4782965df3c080e1fee2a30df6f88fd93ca', 'width': 1562}, 'variants': {}}]} | ||
How do you make 3+ GPUs stable?! | 11 | I just got my third 3090 and the setup from 2 to 3 GPUs was a PITA as I had to now use a mining frame with these pcie x16 risers (https://www.amazon.ca/dp/B0C4171HKX?ref=ppx\_yo2ov\_dt\_b\_fed\_asin\_title&th=1)
The problem is I've been dealing with constant issues of crashes and instability. For example, I've been trying to preprocess datasets overnight, just to wake up to these messages and my system hanging:
`GPU 00000000:01:00.0: GPU Unavailable error occurred`
`GPU 00000000:05:00.0: GPU Recovery action event occurred`
`GPU 00000000:01:00.0: Detected Critical Xid Error`
Journalctl also shows a lot of these
`Sep 06 11:43:45 ml-lab1 kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)`
`Sep 06 11:43:45 ml-lab1 kernel: pcieport 0000:00:01.0: device [8086:a70d] error status/mask=00001000/00002000`
`Sep 06 11:43:45 ml-lab1 kernel: pcieport 0000:00:01.0: [12] Timeout`
Judging from this it's most likely the risers. I do hope there's some kind of magic setting in the BIOS I'm missing that someone could point out (so far the only thing I set was above 4g decoding and force pcie gen 3) but if not I would greatly appreciate recommendations for better risers | 2025-09-06T13:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n9zrvc/how_do_you_make_3_gpus_stable/ | anothy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9zrvc | false | null | t3_1n9zrvc | /r/LocalLLaMA/comments/1n9zrvc/how_do_you_make_3_gpus_stable/ | false | false | self | 11 | null |
What’s the best model to run on a 5060 ti? | 0 | I’m looking for THE SMARTEST model that can run on my gpu alone, no cpu work but still has the most “smart ratings”
I love ai and the ideas around it but the benchmarks and stuff kinda blow over my head and it’s not even my field sadly, if anyone has any opinions on what’s the best model lmk
Personally I love Gemma3 4b in a pinch and gpt-oss 20b, even tho gptoss doesn’t fit on my gpu alone it’s like 20/80 cpu to gpu which is fine but not ideal | 2025-09-06T13:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1n9z5yz/whats_the_best_model_to_run_on_a_5060_ti/ | nad_lab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9z5yz | false | null | t3_1n9z5yz | /r/LocalLLaMA/comments/1n9z5yz/whats_the_best_model_to_run_on_a_5060_ti/ | false | false | self | 0 | null |
Advice on first build: Watercooling + mainboard for local inference | 1 | Hi everyone,
I’m about to build my first machine for local inference and would really appreciate some input from the experts here.
I have the opportunity to pick up two Gigabyte RTX 3090 Turbo/Blower. I'd use them for inferencing only.
* Do you recommend switching them to watercooling?
* If so, which full-cover water blocks are a good fit? I’ve looked into the Alphacool Eisblock Aurora Acryl GPX-N RTX 3080/3090 Turbo, which looks like it would be compatible. Is that correct? Appreciate advice or alternatives.
Additionally, I’m still undecided on the motherboard. I don't really care about the brand of the mainboard and CPU, as long as it works. The ASRock TRX40 Creator and the Asus ProArt X870E-Creator seem to get good reviews. Any recommendations or experience with these (or others)?
Thanks in advance!
| 2025-09-06T12:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1n9yspv/advice_on_first_build_watercooling_mainboard_for/ | userFromNextDoor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9yspv | false | null | t3_1n9yspv | /r/LocalLLaMA/comments/1n9yspv/advice_on_first_build_watercooling_mainboard_for/ | false | false | self | 1 | null |
Stable LLM that I can set up on a 5080 system for home/records-related work, but also multimodal? | 1 | I'd like to know whether it's yet possible to have a practically useful local model that doesn't require much upkeep, with a zero-friction multimodal UI (generate images, speech (VibeVoice), etc.), but that I can also use for work (information processing, i.e. summarizing complex "chunky" vital records consistently).
I’ve really been trying to learn how to with the little free time I have but it starts to feel like that SpongeBob meme where he’s reading a book and his eyes point everywhere.
All over this subreddit there are snippets and hints of things I wish I could do locally, but I just can't seem to get them to work.
I’ve tried LM studio, which seems to be the easiest way to do LLM. But I know things like docker exist which give you a similar UI to ChatGPT.
Right now I'm relegated to using Gemini. After spending significant time removing sensitive info, it gets the job done well, but I still don't trust Google, even if it tells me it's not remembering the chats.
And OpenAI taught me that you can’t rely on online services, as models can regress and mess up months of “prompt training” that finally got you consistent results but now it’s back to being low quality.
Just need a push in the right direction or maybe a mentor on these things. But I feel like I have the hardware (and I'm even willing to invest in a 5090 if needed) and the local models are there; I just need to find out how to put it together in a reliable way without much maintenance, only updating the model (hopefully there is a single good model for my use case, but I've seen people using multiple models and having them work together (nodes?)). Would appreciate any help from any wizards out there. If it's possible at all and I'm not just hallucinating that we're there yet.
TLDR: Need help setting up all in one Local AI daddy that can be easily updated without any tweaking can reliably help me with fun (image/voice/music? Gen), homework (have persistent memory about tasks I ask it to remind me of weeks later) and workwork (large context window/upload documents/OCR for pdf and images) and use it with a consistent UI, on my 9800x3d 5080 machine (which I can upgrade if needed). Do exist? And How? Please. | 2025-09-06T12:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n9yrsh/stable_llm_that_i_can_setup_on_a_5080_system_for/ | Onetimehelper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9yrsh | false | null | t3_1n9yrsh | /r/LocalLLaMA/comments/1n9yrsh/stable_llm_that_i_can_setup_on_a_5080_system_for/ | false | false | self | 1 | null |
Tools are not working on self hosted models | 5 | Hi all, I am trying to implement self-hosted models like Qwen3 and GPT-OSS-120B, but as far as I can see, the tools I had are not working. By default, it won't use my email tool to check mail. If I switch back to GPT-4, it works right away. What am I doing wrong?
Thanx | 2025-09-06T12:20:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n9y553/tools_are_not_working_on_self_hosted_models/ | Disastrous-Tap-2254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n9y553 | false | null | t3_1n9y553 | /r/LocalLLaMA/comments/1n9y553/tools_are_not_working_on_self_hosted_models/ | false | false | self | 5 | null |