| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp, 2023-04-01 04:30:41 to 2026-03-04 02:14:14) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp, 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Real world Medical Reports on LLMs | 7 | Hi everyone,
So it happens that I got my hands on a big dataset of real world medical reports.
I tried to assess them and predict labeled conditions using open-source LLMs. So far GPT-OSS 120B seems to work reasonably well, but it still misses a lot of details when assessing conditions.
I need some advice on how to move forward. Should I fine-tune an LLM specifically for this task, or keep experimenting with prompt engineering and maybe RAG?
| 2025-10-28T07:57:46 | https://www.reddit.com/r/LocalLLaMA/comments/1oi3gm9/real_world_medical_reports_on_llms/ | makisgr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi3gm9 | false | null | t3_1oi3gm9 | /r/LocalLLaMA/comments/1oi3gm9/real_world_medical_reports_on_llms/ | false | false | self | 7 | null |
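Before committing to a fine-tune, it may be worth tightening the output contract first: asking for one JSON verdict per candidate condition, with a quoted evidence span, tends to surface the details a model would otherwise gloss over and makes the error rate measurable. A minimal sketch of building such a request for an OpenAI-compatible local server follows; the model name, the schema, and `response_format` support are assumptions rather than anything from the post:

```python
def build_extraction_request(report_text, conditions, model="gpt-oss-120b"):
    """Build an OpenAI-style chat request that asks the model to return
    one JSON verdict per candidate condition instead of free-form prose."""
    system = (
        "You are a clinical coding assistant. For each candidate condition, "
        'answer strictly as JSON: {"condition": ..., "present": true/false, '
        '"evidence": "quoted span from the report"}.'
    )
    user = (
        "Candidate conditions: " + ", ".join(conditions) + "\n\n"
        "Report:\n" + report_text
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "response_format": {"type": "json_object"},  # many local servers support this
        "temperature": 0.0,  # deterministic labelling for evaluation
    }
```

Scoring the returned JSON against the dataset's labels then gives a per-condition confusion matrix, which is also exactly the evaluation set you would need before deciding whether fine-tuning is worth it.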
Is it possible to build an alternative of Gemini Live via combination of open-source systems? | 2 | I was wondering if it's possible to build a friend (AI assistant) who views my screen in real time and talks with me (it could tell me if I'm doing something wrong, and I could ask it for guidance, etc.), just like Gemini Live, but watching my screen all the time (which would be expensive via Gemini).
I was wondering if there is any way to build this.
I used several LLMs to find the answer, and they suggested LiveKit, TTS, ASR, and a VLM.
Now for the VLM: is there a leaderboard (like LiveBench, which is regularly updated) where I can find the best VLM for my needs?
(I'm not technical and don't know much about the details. I'm just curious whether it's possible to build an AI friend.) | 2025-10-28T07:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1oi35z7/is_it_possible_to_build_an_alternative_of_gemini/ | Helpful-Egg-4377 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi35z7 | false | null | t3_1oi35z7 | /r/LocalLLaMA/comments/1oi35z7/is_it_possible_to_build_an_alternative_of_gemini/ | false | false | self | 2 | null |
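The pieces the LLMs suggested (screen capture, a VLM, TTS) meet in one place: periodically grabbing a screenshot and sending it to a locally served VLM over the usual OpenAI-style vision API. A minimal sketch of just the request-building step follows; the model name is a placeholder, and the capture loop (e.g. via `mss`) and the TTS side are left out:

```python
import base64


def build_vlm_request(png_bytes, question, model="qwen2.5-vl"):
    """Package one screen frame plus a question as an OpenAI-style vision
    request (the shape llama.cpp / vLLM / LM Studio servers accept)."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

A background loop that calls this every few seconds and pipes the reply through a local TTS engine is essentially the "always-watching" variant of Gemini Live, with cost bounded by your own hardware.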
3090 for approx $600 still a good investment in 2025? Or are there better value alternatives? | 4 | I'm trying to find a "good value" GPU or setup for running LLMs locally (mainly for coding and research projects) and for ComfyUI work.
I don't have a strict budget in mind, but I do have a desktop with a 3060 and 128 GB of RAM. I'm thinking I should probably "max it out" before considering a completely new build.
I've been using the 3060 quite a bit, but it's hard not to notice how much smarter the 20–32B models are compared to the 8–16B ones I can currently run.
I'm a bit wary of dual-GPU setups since I'm more comfortable on Windows, but it seems like the dual 3090 configuration (for 48 GB VRAM under Linux) is still often recommended as the best value.
Does that still hold true as of late 2025?
| 2025-10-28T07:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oi350f/3090_for_approx_600_still_a_good_investment_in/ | liviuberechet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi350f | false | null | t3_1oi350f | /r/LocalLLaMA/comments/1oi350f/3090_for_approx_600_still_a_good_investment_in/ | false | false | self | 4 | null |
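As a rough sanity check for the "which models fit" question: weight memory scales with parameter count times bits per weight, plus KV cache and runtime overhead. A back-of-the-envelope sketch follows; the constants are coarse assumptions, not measurements:

```python
def vram_estimate_gb(params_b, bits_per_weight, context_kv_gb=2.0, overhead=1.15):
    """Rough VRAM needed to serve a dense model: weights at the chosen
    quantization, plus an allowance for KV cache and runtime overhead.
    Heuristic only -- real usage varies by backend and context length."""
    weights_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return (weights_gb + context_kv_gb) * overhead


# A 32B model at ~4.5 effective bits/weight (typical of mid-size GGUF quants):
# (18 + 2) * 1.15 ~= 23 GB -> fits one 24 GB 3090, but tightly; 48 GB across
# two 3090s leaves room for longer context or a Q6/Q8 quant instead.
```

By this estimate a single 3090 already unlocks the 20–32B class at Q4, which is the jump in quality the post describes; the second card mostly buys headroom rather than a new capability tier.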
I built a small Python tool to track how your directories get messy (and clean again) | 26 | So, much as we hate to admit, almost every project or downloads folder gets out of control over time (yep).
I got curious — not just about which files change, but **how the structure itself evolves.**
So I built [**Directory Monitor**](https://github.com/sukanto-m/directory-monitor) — a lightweight Python script that keeps tabs on **directory organization**, not just file edits. This tool uses local LLMs (Qwen, Llama, choose your own) to analyze project structure and give cleanup recommendations. Everything runs locally - no cloud APIs.
**The interesting technical bits:**

- Uses RAG with local sentence-transformers to compare current state against historical scans
- LLM analyzes trends and gives specific, actionable recommendations
- Terminal UI with Rich showing real-time metrics and sparklines
- All stored in SQLite locally

**Example output:**

```
Messiness Score: 6.2/10
Top 3 Issues:
1. Too many files (28) in src/components - split into ui/, forms/, layouts/
2. 8 files contain 'temp' - move to .archive/ or use proper version control
3. Directory depth exceeds 7 levels - flatten structure
Trend: Improving (was 7.8, now 6.2)
```

**Stack:**

- Ollama (Qwen/Llama) for LLM
- sentence-transformers for embeddings
- SQLite for history
- Python with Rich/Flask
Works completely offline after setup. Tested with Qwen3:8b and Llama3.2.
Would love feedback — what features would *you* add for keeping folders sane?
\*\*GitHub:\*\* [https://github.com/sukanto-m/directory-monitor](https://github.com/sukanto-m/directory-monitor) | 2025-10-28T07:12:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oi2tky/i_built_a_small_python_tool_to_track_how_your/ | VegetableSense | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi2tky | false | null | t3_1oi2tky | /r/LocalLLaMA/comments/1oi2tky/i_built_a_small_python_tool_to_track_how_your/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': '8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=108&crop=smart&auto=webp&s=919621c59d266419a7434eaf828658c63d15b9ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=216&crop=smart&auto=webp&s=9b5272eaa745b192664f5413b8bc4078b62c49e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=320&crop=smart&auto=webp&s=6932ba5790ae6e33e21bf11683337e3428d7109a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=640&crop=smart&auto=webp&s=c619499d018c97618bcb28af0e15bef7e0be72bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=960&crop=smart&auto=webp&s=b84730e8b441e25860e5f5636c371525fd9698b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?width=1080&crop=smart&auto=webp&s=3d928d45542d904bb657c9972e799ce16fba19e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8SESsu9PkBB4Sp3QAlhfXy10rfoeFfx8S3O5FzS39-Y.png?auto=webp&s=d6936e9d7bc36b7f863e0f27045e7a1252380474', 'width': 1200}, 'variants': {}}]} |
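For anyone curious what a messiness heuristic can look like before the LLM gets involved, here is a simplified stand-in: it penalizes overcrowded directories, deep nesting, and "temp"-ish filenames. The weights and thresholds are invented for illustration and are not the repo's actual formula:

```python
import os


def score_from_counts(crowded_dirs, deep_dirs, temp_files, total_dirs):
    """Map raw counts to a 0-10 messiness score (simple weighted heuristic)."""
    if total_dirs == 0:
        return 0.0
    raw = (4.0 * crowded_dirs / total_dirs
           + 3.0 * min(deep_dirs, 1)
           + 0.5 * min(temp_files, 6))
    return round(min(raw, 10.0), 1)


def messiness_score(root, max_files_per_dir=20, max_depth=6):
    """Walk a tree and count overcrowded dirs, deep nesting, and 'temp'-ish
    files -- a simplified stand-in for the repo's LLM-assisted scoring."""
    crowded = deep = tempish = total = 0
    base = root.rstrip(os.sep).count(os.sep)
    for dirpath, _dirnames, filenames in os.walk(root):
        total += 1
        if len(filenames) > max_files_per_dir:
            crowded += 1
        if dirpath.count(os.sep) - base > max_depth:
            deep += 1
        tempish += sum(1 for f in filenames
                       if "temp" in f.lower() or f.endswith("~"))
    return score_from_counts(crowded, deep, tempish, total)
```

Feeding the raw counts (rather than the whole file listing) to the LLM is also a cheap way to keep prompts short while still letting the model explain the score.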
How do you think about the AI + Spreadsheet? (like tryshortcut, endex, Claude...) | 0 | Today I saw that Claude is going to release an Excel plug-in. Similar products include tryshortcut, endex, and the native Excel agent. What do you think about AI + spreadsheets?
https://preview.redd.it/s0m3z6ffssxf1.png?width=1160&format=png&auto=webp&s=fb4f1ce47d77fce1ec739348a568e3035e053e4f
For me:

After reading *Principles* by Ray Dalio in early 2022, I was deeply struck by the idea — that **quantitative thinking** is one of the key driving forces behind human progress.

Today, if we look around, the spreadsheet remains one of the most powerful computational tools available to anyone. Over the past 60 years, its capabilities have grown tremendously — more than 4,000 functions now — hence the inside joke "power user." 🫡 u/excel

Now here's the paradox:

**98% of users use only 2% of its capabilities.**

The reason is simple — people don't know what's possible, or don't know how to use it.

We've been talking about "digital transformation" for years, yet many industries and companies are still reluctant to adopt it. Why? Because without expert assistance, the cost of going fully digital is extremely high — it depends on whether the organization can afford skilled data analysts or not.

That's why, since mid-2022, I've been building AI-powered features in spreadsheets — from AI formula generation to batch processing, conditional formatting, data beautification, formula rewriting, and AI-driven chart and dashboard creation.

Inside a spreadsheet, users need a **qualified, intelligent copilot** — one that can collaborate with humans (human in the loop) to conquer the hallucinations of LLMs and work with high predictability.
To unleash the meta-knowledge of LLMs โ and bring intelligence into everyoneโs spreadsheet.
Openness and integration are especially important in the AI era. | 2025-10-28T07:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oi2p6j/how_do_you_think_about_the_ai_spreadsheetlike/ | Helpful-Manner-952 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi2p6j | false | null | t3_1oi2p6j | /r/LocalLLaMA/comments/1oi2p6j/how_do_you_think_about_the_ai_spreadsheetlike/ | false | false | 0 | null | |
Is Grokipedia available for fine-tuning? | 0 | With grokipedia now live, wondering what it's licensing policy is for using articles for fine-tuning local models. Not sure if article snapshots are already (or will be ever available) for free publicly. | 2025-10-28T06:46:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oi2ftf/is_grokipedia_available_for_finetuning/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi2ftf | false | null | t3_1oi2ftf | /r/LocalLLaMA/comments/1oi2ftf/is_grokipedia_available_for_finetuning/ | false | false | self | 0 | null |
What's your suggestion for machines that can run large models? | 0 | There's the AMD Ryzen AI Max+ 395, NVIDIA DGX, some Apple variants, and so on. But all of these top out at 128 GB of memory, which can't run the 1T-parameter models that are often casually suggested on this sub.
Are there solutions out there that won't require me to buy 20 GPUs and put them in the basement? What's your best solution for a home user that wants to learn?
Would appreciate your insight. | 2025-10-28T06:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1oi2ay1/whats_your_suggestion_for_machines_that_can_run/ | TheQuantumPhysicist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi2ay1 | false | null | t3_1oi2ay1 | /r/LocalLLaMA/comments/1oi2ay1/whats_your_suggestion_for_machines_that_can_run/ | false | false | self | 0 | null |
Multi-Backend LLM Router - Automatic Model & Backend Switching for SGLang/llama.cpp/TabbyAPI | 2 | Hey everyone, wanted to share something I put together that solved a major headache for me and might help a few of you too.
It's entirely possible this already exists under another name or as a service, but I couldn't find it.
I'm not a coder. This is the first time I've even made a GitHub repo. But I got tired of constantly switching between different LLM backends (SGLang/AWQ, llama.cpp/GGUF, TabbyAPI/EXL2). Every time I wanted to test a new model, it turned into a 20-minute ritual of stopping services, editing configs, and remembering which port did what: a total pain.
I had Claude build a model router that exposes an OpenAI-style API and plugs right into Open-WebUI. Now I just pick a model from the dropdown, and it handles all the backend switching automatically. No manual restarts, no config editing, no guessing which backend is running.
# What it actually does
* No more backend juggling. It stops the current service, fires up the right one, loads the model, and proxies everything through automatically.
* Performance stats after every response. Example: ⚡ 45.2 tok/s (180 tokens in 4.0s)
* Simple model management. Add or remove models with a built-in script no JSON editing required.
* Handles systemd services, health checks, timeouts, and even does a real inference test before marking a backend healthy.
* While switching, it streams updated time and model info so you know it hasn't frozen or died.
* Confirmed working with Blackwell GPUs. Tested on an RTX Pro 6000 with CUDA arch tweaks included.
# Quick visual
Client (Open-WebUI)
        |
   Router (8002)
        |
  +-----+-----+
  |     |     |
SGLang   llama   TabbyAPI
(30000)  (8085)  (5000)
 AWQ     GGUF    EXL2
When you pick a model:
1. The router checks which backend it needs.
2. Stops anything else running.
3. Starts the right backend.
4. Streams your response back.
5. Shows token performance when it's done.
All selectable directly from Open-WebUI (it should work with other clients too; I've only tested Open-WebUI). No service restarts, no config edits; switching models is instant and effortless.
# Install
```bash
git clone https://github.com/darkmaniac7/LLM-Model-Router.git
cd LLM-Model-Router
sudo ./install.sh
# Follow prompts
```

Then add your models:

```bash
sudo /opt/llm-router/manage-models.sh add
# Choose backend, enter model path, done
```
TL;DR: It's a drop-in router for AWQ/GGUF/EXL2 backends that gives you one OpenAI-compatible endpoint, automatic backend switching, systemd integration, live token stats, and dead-simple model management.
Repo is here: [https://github.com/darkmaniac7/LLM-Model-Router](https://github.com/darkmaniac7/LLM-Model-Router)
Let me know if you try it or hit any issues. I'm curious how it runs in other setups.
If any actual devs like it and want to change anything, please feel free.
| 2025-10-28T06:04:54 | https://www.reddit.com/gallery/1oi1tj9 | darkmaniac7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oi1tj9 | false | null | t3_1oi1tj9 | /r/LocalLLaMA/comments/1oi1tj9/multibackend_llm_router_automatic_model_backend/ | false | false | 2 | null | |
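The switching flow described above (check which backend the model needs, stop anything else, start the right one, proxy) boils down to a small decision function. Here is a sketch of that logic; the service names, ports, and model IDs are illustrative, not the repo's actual config:

```python
BACKENDS = {  # model name -> (service, port) -- illustrative entries
    "qwen3-coder-awq": ("sglang", 30000),
    "llama3.3-70b-gguf": ("llamacpp", 8085),
    "mistral-large-exl2": ("tabbyapi", 5000),
}


def plan_switch(requested_model, running_service):
    """Decide what the router must do for an incoming request: which
    systemd service to stop, which to start, and where to proxy."""
    if requested_model not in BACKENDS:
        raise KeyError(f"unknown model: {requested_model}")
    service, port = BACKENDS[requested_model]
    return {
        "stop": running_service if running_service not in (None, service) else None,
        "start": service if running_service != service else None,
        "proxy_to": f"http://127.0.0.1:{port}/v1/chat/completions",
    }
```

When the requested model's backend is already up, both `stop` and `start` come back as `None` and the router can proxy immediately, which is what makes repeat requests to the same model fast.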
AI Agents Reasoning Collapse Imminent (CMU, Berkeley) | 0 | This recent article reviewed here provides a data-driven proof of how a simple game (tower of hanoi) shows that LLMs \*\*may not\*\*, in fact, reason, but instead follow statistical modes that break down into loops at high enough complexity. Really interesting findings. | 2025-10-28T05:39:04 | https://youtube.com/watch?v=nVC_ZKcHPj8&si=ZWv1LG9ljVihykyF | Badger-Purple | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1oi1f69 | false | {'oembed': {'author_name': 'Discover AI', 'author_url': 'https://www.youtube.com/@code4AI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/nVC_ZKcHPj8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Agents Reasoning Collapse Imminent (CMU, Berkeley)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/nVC_ZKcHPj8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Agents Reasoning Collapse Imminent (CMU, Berkeley)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1oi1f69 | /r/LocalLLaMA/comments/1oi1f69/ai_agents_reasoning_collapse_imminent_cmu_berkeley/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'lKAGASdo6JvQTzTn5zYaMZ9MBh1TMHQumDOMB0bwr_g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lKAGASdo6JvQTzTn5zYaMZ9MBh1TMHQumDOMB0bwr_g.jpeg?width=108&crop=smart&auto=webp&s=7870caa2a4fd8f1830948ff44dbe40cd5554a7c2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lKAGASdo6JvQTzTn5zYaMZ9MBh1TMHQumDOMB0bwr_g.jpeg?width=216&crop=smart&auto=webp&s=0a88d14ef54bfe0d4381fe480879fcc32c848d68', 'width': 216}, {'height': 240, 'url': 
'https://external-preview.redd.it/lKAGASdo6JvQTzTn5zYaMZ9MBh1TMHQumDOMB0bwr_g.jpeg?width=320&crop=smart&auto=webp&s=c310e751ec0fa62b5e6f9af023700f7615393d28', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lKAGASdo6JvQTzTn5zYaMZ9MBh1TMHQumDOMB0bwr_g.jpeg?auto=webp&s=80979b661ab9ed89b6477eb9162628477eef40a9', 'width': 480}, 'variants': {}}]} |
Is there a model catalogue management service tool already? | 1 | Like others, I have been using several local AI model providers like Ollama, LM Studio and so on. Currently, I download the required models for each tool as required - but soon the disk space fills up. This is due to every provider downloading their own version of the model and keeping it in their specified location on disk. Is there a system service that can catalogue the available models on the system (may be using a unique ID) that can be used by several tools (on a read-only basis)?
This is a major issue for developing software/mobile apps that use local models as well. We don't want to burden the user with a fresh download for every piece of software that uses AI models. Maybe this centralized system service could keep track of downloaded models and provide a method to acquire one if needed by any software on the system.
I may have completely missed it; such a tool may already be available. Please let me know.
vLLM MoE Benchmark Configs for Qwen3 Coder REAP 25B & RTX Pro 6000 | 6 | https://reddit.com/link/1oi16jj/video/53rpmw42fsxf1/player
Took me a while to figure this out and I couldn't find the configs online anywhere, so I thought I'd share in case anyone else has been looking. If you see a message like this in your vLLM logs: `Using default MoE config. Performance might be sub-optimal!` you'll want one of these configs. This combined with a few other params took Qwen3 Coder REAP 25b from often randomly taking 10+ minutes to complete a request, to being able to handle multiple requests at once (of around 25k tokens each in this example) at the same time and responding to all requests at once at a rate of around 45 tokens/sec.
For fused mixture of expert models vLLM needs a config that's specific to the "shape" of the MoE and the device. `E=<experts>,N=<moe_intermediate/2>,device_name=<GPU>.json` like: `E=103,N=768,device_name=NVIDIA_RTX_PRO_6000_Blackwell_Server_Edition.json`
vLLM has a bunch of common combos, but doesn't have one for Qwen3 Coder or any Blackwell GPUs. On top of that (at least in vLLM v0.10.1.1), the benchmark script that produces the configs runs far more combinations than are needed, so I modified the script to pare that down, and thus take less time, and also made it save the files incrementally in case that's helpful, since the original script doesn't save them incrementally.
Repo: [https://github.com/MissionSquad/vllm-moe-configs](https://github.com/MissionSquad/vllm-moe-configs) | 2025-10-28T05:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1oi16jj/vllm_moe_benchmark_configs_for_qwen3_coder_reap/ | j4ys0nj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi16jj | false | null | t3_1oi16jj | /r/LocalLLaMA/comments/1oi16jj/vllm_moe_benchmark_configs_for_qwen3_coder_reap/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=108&crop=smart&auto=webp&s=d566dd073afa94416c33fcb86179be5940d61163', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=216&crop=smart&auto=webp&s=1f09f88d88374bca14cb74956c7fc80544f2e01e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=320&crop=smart&auto=webp&s=b89ec52e38c3441f114325b826654ac27e68af04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=640&crop=smart&auto=webp&s=51a720fb2a0814150cb703882892cbe1a2524dcd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=960&crop=smart&auto=webp&s=92e0db0886519696e43e7eba9275e32a2fe4d5d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?width=1080&crop=smart&auto=webp&s=a029f0cafb0d99b8ea78dfcc6899392188438e66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uKKImkX9nhrtp0LIvGXTQD2Xl65r6cYkFlt89RXy4k4.png?auto=webp&s=5cab3724717c2eb33c2aceb4601babfed38f3552', 'width': 1200}, 'variants': {}}]} |
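The filename convention described above can be generated mechanically from a model's config values. A small sketch follows; the exact naming rules may differ between vLLM versions, so treat this as illustrative:

```python
def moe_config_filename(num_experts, moe_intermediate_size, device_name):
    """Build the fused-MoE tuning-config filename vLLM looks for:
    E=<experts>,N=<moe_intermediate/2>,device_name=<GPU>.json
    with spaces in the device name replaced by underscores."""
    n = moe_intermediate_size // 2
    device = device_name.replace(" ", "_")
    return f"E={num_experts},N={n},device_name={device}.json"
```

For the post's example (103 experts, `moe_intermediate_size` of 1536, an RTX Pro 6000 Blackwell Server Edition), this reproduces the quoted filename, which is a quick way to check whether your model/GPU pair is covered before running the full benchmark.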
NVIDIA-OZAKI + NVLink just did something nobody noticed or is talking about--Nvidia just turned EVERY AI LLM Supercluster DataCenter into a Scientific AI SuperLab for HPL Workloads - And not by a small margin but I mean HPL FP64 MULTI-exaFLOPs ROCKET FUEL - This Changes Everything | 0 | [https://indico.cern.ch/event/1538409/contributions/6522024/attachments/3097817/5488258/OZAKI\_slide\_CERN.pdf](https://indico.cern.ch/event/1538409/contributions/6522024/attachments/3097817/5488258/OZAKI_slide_CERN.pdf)
[https://epubs.siam.org/doi/10.1137/17M1140819](https://epubs.siam.org/doi/10.1137/17M1140819)
| 2025-10-28T04:45:40 | Xtianus21 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oi0k40 | false | null | t3_1oi0k40 | /r/LocalLLaMA/comments/1oi0k40/nvidiaozaki_nvlink_just_did_something_nobody/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'k5SEgDIbTl22HFDQy490z2BqTRnHIYAoRpoF3R5yjMA', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=108&crop=smart&auto=webp&s=79da2e624c387521d91fcaf7593cf5533dfd0ee9', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=216&crop=smart&auto=webp&s=d4bb0ce4d162305f4873f8f1440231b848d289ce', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=320&crop=smart&auto=webp&s=68ec1e8c364f4ecf4ace86a1373b3d88a1932fb3', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=640&crop=smart&auto=webp&s=56847c99657b1d3bc54f98060203b53101df4325', 'width': 640}, {'height': 503, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=960&crop=smart&auto=webp&s=0d15c63e65dbda7edd9071fc1182c58056c392da', 'width': 960}, {'height': 566, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?width=1080&crop=smart&auto=webp&s=59da7343de3c3c4bbaf8691b77c4824dde8f8b7f', 'width': 1080}], 'source': {'height': 1015, 'url': 'https://preview.redd.it/9rcpk04h8sxf1.png?auto=webp&s=4ecc7d88f79b7509d630caaac783682136f8a917', 'width': 1936}, 'variants': {}}]} | ||
Running FP8 with vLLM on RDNA4? | 0 | I'm having a hard time figuring out if this is possible and am looking for help if someone can point me in the right direction. Also how to find out myself is fine, i.e. which documentation would answer this. | 2025-10-28T04:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oi00ik/running_fp8_with_vllm_on_rdna4/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oi00ik | false | null | t3_1oi00ik | /r/LocalLLaMA/comments/1oi00ik/running_fp8_with_vllm_on_rdna4/ | false | false | self | 0 | null |
How we built Agentic Retrieval at Ragie | 4 | Hey all... curious about how Agentic Retrieval works?
We wrote a blog explaining how we built a production grade system for this at Ragie.
Take a look and let me know what you think!
[https://www.ragie.ai/blog/how-we-built-agentic-retrieval-at-ragie](https://www.ragie.ai/blog/how-we-built-agentic-retrieval-at-ragie) | 2025-10-28T03:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ohzo48/how_we_built_agentic_retrieval_at_ragie/ | bob_at_ragie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohzo48 | false | null | t3_1ohzo48 | /r/LocalLLaMA/comments/1ohzo48/how_we_built_agentic_retrieval_at_ragie/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=108&crop=smart&auto=webp&s=296329e0408b8f028a87f10a42bd813740b79c01', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=216&crop=smart&auto=webp&s=804adc41a72c98506b31141293785854cd280aa1', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=320&crop=smart&auto=webp&s=0f7d9eb84824962d5657a64b2a2a78e68151b892', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=640&crop=smart&auto=webp&s=2e53193ada8862b211676ad5a9dcb4b90fcd7340', 'width': 640}, {'height': 500, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=960&crop=smart&auto=webp&s=a3e334af2f9ee4d5cc73d3ac4909ce505c3be0d7', 'width': 960}, {'height': 563, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?width=1080&crop=smart&auto=webp&s=8d15e3911f5ae04a13d95a7e21cfb1bb1009a424', 'width': 1080}], 'source': {'height': 836, 'url': 'https://external-preview.redd.it/qs4MBGKU4-pHZb5cnrgZNgwTdz1ovHyVV-En75ZqvXU.jpeg?auto=webp&s=187e3eaca93597b245c7462aa6edbd1b787a25cc', 'width': 1602}, 'variants': {}}]} |
VellumForge2 - A high performance, very configurable and really easy to use DPO dataset generation tool, create high quality datasets for completely free | 20 | Finally releasing my new dataset generation tool, and some Fantasy writing datasets to go with it (soon).
[https://github.com/lemon07r/VellumForge2](https://github.com/lemon07r/VellumForge2)
Sample Dataset: [https://huggingface.co/collections/lemon07r/vellumforge2-datasets](https://huggingface.co/collections/lemon07r/vellumforge2-datasets) (large datasets coming soon)
**Functionality** (all you need for a tl;dr)
This tool creates DPO-style datasets using a main topic and LLMs to generate subtopics, prompts, and chosen/rejected response pairs through a hierarchical pipeline. What sets it apart is the optional LLM-as-a-judge rubric scoring system, inspired by how Kimi K2 was trained using rubric-based evaluation to generate higher quality writing samples. The output uses a flexible "one-to-many" hybrid schema that works seamlessly with DPOTrainer, RewardTrainer, and MORL training, no data transformation needed. You can also skip the judge entirely for DPO training or just use the prompt and chosen responses for SFT.
**Overview & Features**
My original Python script that I was using for making datasets worked mostly fine, but I broke it many, many times trying to refactor it and add features. It did get to a good place at some point, with working async, rate limiting, etc., before I broke it again with some experimental stuff that turned out not to be a good idea even though it worked. Some good lessons learned here.
What I did learn, I used in my complete re-write of the tool. This time I wrote it in Go, and kept it very simple and easy to use. I also kept it very modular and highly configurable from the very start. This tool works with any OpenAI-compatible API including local servers like llama.cpp, kobold.cpp, LM studio, vLLM or Ollama. Handles rate limiting automatically, supports concurrent workers, and can upload directly to Hugging Face Hub in one command, which was implemented without needing any external tools/dependencies like the HF cli. Generation templates are fully customizable via TOML config, meaning you can make any type of dataset. The example configs come with a strong default template for fantasy writing to help give an idea of what a good template would look like. The documentation includes a thorough quick start guide, and examples.
**Dataset Generation**
This thing works fast. Had a much bigger impact than I expected in dataset generation speed compared to the old tool. Even using the completely free (and unlimited) Nvidia NIM api with it's 40 RPM rate limit and slow 20-30 tps Kimi K2 0905 model, plus any small local model for rejected responses, you can create a very high quality (possibly only topped by using Sonnet 4.5) DPO datasets, with about 1000 rows of high quality data in under a few hours, for completely free. No expensive hardware or API provider required (which of course you can use with this tool too). The sample dataset I linked completed under these conditions in only a 36-minute run, which would have been only half as long without a judge.
| 2025-10-28T03:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ohzfdu/vellumforge2_a_high_performance_very_configurable/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohzfdu | false | null | t3_1ohzfdu | /r/LocalLLaMA/comments/1ohzfdu/vellumforge2_a_high_performance_very_configurable/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=108&crop=smart&auto=webp&s=cb814243fffcc44a7cd4f8367f1e87b7028cfcf4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=216&crop=smart&auto=webp&s=6dcc1130f250828d54da306ac3cf43c9b3b09961', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=320&crop=smart&auto=webp&s=75793679e4b985b6043c2310e4f9ccda5f2c293b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=640&crop=smart&auto=webp&s=0a7c7d226c34cb0f872837189763b69c31a96834', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=960&crop=smart&auto=webp&s=a373e57aef9d9265a43387e22b2c8a115543dff2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?width=1080&crop=smart&auto=webp&s=15b3061118c5a061e74d0a7d3600bd829fddd95a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eZlgbyrnATkYPDUNtXcXwNsCX5uGrVuaKgVTMya0EyY.png?auto=webp&s=fa8ea5f8f18c1a2fcd1bc141bd1b42f7f3398007', 'width': 1200}, 'variants': {}}]} |
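To make the "one-to-many" hybrid schema concrete, here is a sketch of projecting such a row down to the prompt/chosen/rejected triple that TRL's DPOTrainer expects; judge scores ride along for RewardTrainer or MORL but are ignored here. The field names are illustrative, not VellumForge2's actual schema:

```python
def to_dpo_record(row):
    """Project one hybrid row down to the DPOTrainer triple; extra fields
    (judge scores, multiple rejected responses) are simply dropped."""
    return {
        "prompt": row["prompt"],
        "chosen": row["chosen"]["text"],
        "rejected": row["rejected"][0]["text"],  # first of possibly many
    }


# A hypothetical row in the hybrid layout:
row = {
    "prompt": "Write the opening of a heist set in a floating city.",
    "chosen": {"text": "The city of Veyr hung from three moons...", "judge_score": 8.7},
    "rejected": [{"text": "There was a city in the sky.", "judge_score": 4.1}],
}
```

Because the projection is a pure function over rows, the same dataset can feed DPO, reward-model, and SFT pipelines (SFT just takes `prompt` and `chosen`) without any upstream re-generation, which is the point of the hybrid layout.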
Llama3.3:70b vs GPT-OSS:20b for PHP Code Generation | 0 | Hi! I like PHP, Javascript, and so forth, and I'm just getting into ollama and trying to figure out which models I should use. So I ran some tests and wrote some long, windy blog posts. I don't want to bore you with those so here's a gpt-oss:120b generated re-write for freshness and readability of what I came up with. Although, I did check it and edit a few things. Welcome to the future!
**Title: Llama 3.3 70B vs GPT-OSS 20B — PHP code-generation showdown (Ollama + Open-WebUI)**
---
### TL;DR
| Feature | **Llama 3.3 70B** | **GPT-OSS 20B** |
|---|---|---|
| **First-token latency** | 10–30 s | ~15 s |
| **Total generation time** | 1–1.5 min | ~40 s |
| **Lines of code (average)** | 95 ± 15 | 165 ± 20 |
| **JSON correctness** | ✅ 3/4 runs, 1 run wrong filename | ✅ 3/4 runs, 1 run wrong filename (story.json.json) |
| **File reconstruction** | ✅ 3/4 runs, 1 run added stray newlines | ✅ 3/4 runs, 1 run wrong "-2" suffix |
| **Comment style** | Sparse, occasional boilerplate | Detailed, numbered sections, helpful tips |
| **Overall vibe** | Good, but inconsistent (variable names, refactoring, whitespace handling) | Very readable, well-commented, slightly larger but easier to understand |
Below is a single, cohesive post that walks through the experiment, the numbers, the code differences, and the final verdict.
---
## 1. Why I ran the test
I wanted a quick, repeatable way to see how **Ollama**-served LLMs handle a *real-world* PHP task:
*Read a text file, tokenise it, build an array of objects, write a JSON summary, and re-create the original file.*
The prompt was deliberately detailed (file-name handling, whitespace handling, analytics, etc.) and I fed **exactly the same prompt** to each model in a fresh chat (no prior context).
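For orientation, here is a minimal Python sketch of the logic the prompt asks for (my own reference implementation of the spec, not output from either model; the field names follow the prompt):

```python
import re
import time

def process(text: str, src_name: str = "story.txt"):
    """Tokenise into word/whitespace runs, build token objects, and
    return (json_payload, rebuilt_text, output_txt_name)."""
    start = time.time()
    # Alternating runs of whitespace and non-whitespace; nothing is lost.
    tokens = re.findall(r"\s+|\S+", text)
    objs = []
    for i, t in enumerate(tokens):
        is_ws = t.isspace()
        # Processed word: keep letters/digits/dash/apostrophe only.
        w = "" if is_ws else re.sub(r"[^\w'\-]", "", t)
        objs.append({"id": i, "t": t, "whitespace": is_ws, "w": w})
    payload = {
        "num_words": sum(1 for o in objs if not o["whitespace"]),
        "processing_time": time.time() - start,
        "tokens": objs,
    }
    # Rebuilding the file is just concatenating every t in order.
    rebuilt = "".join(o["t"] for o in objs)
    # "-2" goes before the extension: story.txt -> story-2.txt
    base, dot, ext = src_name.rpartition(".")
    out_txt = f"{base}-2.{ext}" if dot else f"{src_name}-2"
    return payload, rebuilt, out_txt
```

Notably, the two recurring failure modes in the runs (the output-filename suffix and stray newlines during rebuilding) both live in the last few lines of this sketch.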
---
## 2. Test harness
| Step | What I did |
|---|---|
| **Prompt** | Same multi-paragraph description for both models. |
| **Runs per model** | 4 independent generations (to catch variability). |
| **Environment** | Ollama + Open-WebUI (context persists only within a single chat). |
| **Metrics collected** | • First-token latency (time to the first visible token) <br>• Total generation time <br>• Lines of code (excluding blank lines) <br>• JSON file correctness <br>• Re-generated text file correctness <br>• Subjective readability of the code/comments. |
---
## 3. Speed & latency
| Model | First-token latency | Total time (average) |
|---|---|---|
| **Llama 3.3 70B** | 10-30 s (often ~20 s) | 1-1.5 min |
| **GPT-OSS 20B** | ~15 s | ~40 s |
*Even though Llama 3.3 felt "slow to start", it still finished within a minute and a half. GPT-OSS was noticeably snappier.*
---
## 4. Code size & structure
| Model | Avg. SLOC | Notable structural quirks |
|---|---|---|
| **Llama 3.3 70B** | 95 ± 15 | • Variable names changed between runs (e.g., `$outputFilename` vs `$outBase`). <br>• Some runs used `file_put_contents()`, others used `fopen()/fwrite()`. <br>• Inconsistent handling of whitespace tokens in the JSON (sometimes a boolean, sometimes omitted). |
| **GPT-OSS 20B** | 165 ± 20 | • Heavier commenting (numbered sections, "what-this-does" bullet points). <br>• Consistent use of `file_put_contents()` for both JSON and text output. <br>• More explicit error handling. |
*Overall, I much prefer the kind of commentary and code structure produced by GPT-OSS.*
### 4.1. Sample header comparison
**GPT-OSS 20B** (first ~12 lines)
```php
<?php
/**
* a1.php
*
* 1. Reads a text file (via GET or POST: file=)
* 2. Tokenises it into words & whitespace
* 3. Builds an array of objects:
* - id : the token number in the file
* - t : the exact token as it appears
* - whitespace : true for whitespace tokens, false otherwise
* - w : processed word (keeps dash/apostrophe, removes punctuation)
 * 4. Stores the array, word-count and elapsed time in JSON
 * 5. Writes a plain-text file that contains all obj->t in order,
 *    with "-2" inserted before the extension.
 *
 * The script is deliberately written step-by-step for teaching.
*/
```
**Llama 3.3 70B** (first ~12 lines)
```php
<?php
// Combine GET and POST requests
$request = array_merge($_GET, $_POST);
// Check if file is provided
if (!isset($request['file'])) {
echo "Error: No file provided.";
exit;
}
$filename = $request['file'];
// Set output filename
if (isset($request['out']) && $request['out'] != '') {
$output_filename = $request['out'];
} else {
$parts = explode('.', $filename);
$output_filename = $parts[0] . '.json';
}
```
*The GPT-OSS header reads like a short design document, while Llama's header is non-existent. GPT-OSS wins hands down on structure and commenting.*
---
## 5. JSON output quality
Both models produced **human-readable JSON** in the majority of runs. The main hiccups:
| Model | Issue | Frequency |
|---|---|---|
| **Llama 3.3 70B** | Wrong filename handling (`filename.json.json`) in run 4 | 1/4 |
| **GPT-OSS 20B** | Same filename bug (`story.json.json`) in run 2 | 1/4 |
| **Both** | Off-by-one word count in one run (4650 vs. 4651) | 1/4 each |
All other runs generated a complete JSON object with `num_words`, `processing_time`, and the full token array. However, some runs of llama3.3:70b-instruct produced JSON that was correct but unreadable for humans.
---
## 6. Re-creating the original text file
| Model | Mistake(s) | How obvious was it? |
|---|---|---|
| **Llama 3.3 70B** | In run 4 the function added a newline after every token (`fwrite($file, $token->t . "\n");`). This produced a file with extra blank lines. | Visible immediately when diffing against the source. |
| **GPT-OSS 20B** | Run 2 wrote the secondary file as `story.json-2.txt` (missing the "-2" before the extension). | Minor, but broke the naming convention. |
| **Both** | All other runs reproduced the file correctly. | - |
---
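Both mistakes in the table are variants of one classic bug: appending the suffix to the full name instead of splitting off the extension first. A hedged sketch of the fix in Python (PHP's `pathinfo()` would play the same role):

```python
import os

def add_suffix(filename: str, suffix: str = "-2") -> str:
    # Split on the last dot ("story.txt" -> ("story", ".txt")),
    # then re-join with the suffix in between.
    root, ext = os.path.splitext(filename)
    return f"{root}{suffix}{ext}"
```

With this, `story.txt` becomes `story-2.txt` and `story.json` becomes `story-2.json`, never `story.json.json` or `story.json-2.txt`.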
## 7. Readability & developer experience
### 7.1. Llama 3.3 70B
**Pros**
* Generates usable code quickly once the first token appears.
* Handles most of the prompt correctly (JSON, tokenisation, analytics).
**Cons**
* Inconsistent naming and variable choices across runs.
* Sparse comments: often just a single line like `// Calculate analytics`.
* Occasionally introduces subtle bugs (extra newlines, wrong filename).
* Unhelpful, conversational comments after the code instead of inside it.
### 7.2. GPT-OSS 20B
**Pros**
* Very thorough comments, broken into numbered sections that match the original spec.
* Helpful "tips" mapped to numbered sections in the code (e.g., regex explanation for word cleaning).
* Helpful after-code overview which reference numbered sections in the code. This is almost a game changer, just by itself.
* Consistent logic and naming across runs (reliable!)
* Consistent and sane levels of error handling (`die()` with clear messages).
**Cons**
* None worth mentioning
---
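The "regex explanation for word cleaning" that GPT-OSS kept documenting reduces to one substitution plus a trim. A minimal sketch of that rule (my own regex, not copied from either model's output):

```python
import re

def clean_word(token: str) -> str:
    # Drop everything except letters, digits, dashes and apostrophes,
    # then trim any dashes/apostrophes left hanging on the ends.
    kept = re.sub(r"[^A-Za-z0-9'\-]", "", token)
    return kept.strip("'-")
```

So `world!` becomes `world`, while in-word punctuation like `well-known` and `don't` survives intact.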
## 8. "Instruct" variant of Llama 3.3 (quick note)
I also tried **llama3.3:70b-instruct-q8_0** (4 runs).
* Latency: highest; 30 s - 1 min to first token, ~2-3 min total.
* Code length similar to the regular 70B model.
* Two runs omitted newlines in the regenerated text (making it unreadable).
* None of the runs correctly handled the output filename (all clobbered `story-2.txt`).
**Conclusion:** the plain **llama3.3 70B** remains the better choice of the two Llama variants for this task.
---
## 9. Verdict: which model should you pick?
| Decision factor | **Llama 3.3 70B** | **GPT-OSS 20B** |
|---|---|---|
| **Speed** | Slower start, still < 2 min total. | Faster start, sub-minute total. |
| **Code size** | Compact, but sometimes cryptic. | Verbose, but self-documenting. |
| **Reliability** | 75% correct JSON / filenames. | 75% correct JSON / filenames. |
| **Readability** | Minimal comments, more post-generation tinkering. | Rich comments, easier to hand off. |
| **Overall "plug-and-play"** | Good if you tolerate a bit of cleanup. | Better if you value clear documentation out of the box. |
> **My personal take:** I'll keep **Llama 3.3 70B** in my toolbox for quick one-offs, but for any serious PHP scaffolding I'll reach for **GPT-OSS 20B** (or the 120B variant if I can spare a few extra seconds).
---
## 10. Bonus round: GPT-OSS 120B
**TL;DR:** The 120-billion-parameter variant behaves like the 20B model but is a bit *slower* and produces *more* and *better* code and commentary. Accuracy goes up (~100% correct JSON / filenames).
| Metric | GPT-OSS 20B | **GPT-OSS 120B** |
|---|---|---|
| **First-token latency** | ~15 s | **~30 s** (roughly double) |
| **Total generation time** | ~40 s | **~1 min 15 s** |
| **Average SLOC** | 165 ± 20 | **190 ± 25** (~15% larger) |
| **JSON-filename bug** | 1/4 runs | 0/4 runs |
| **Extra-newline bug** | 0/4 runs | 0/4 runs |
| **Comment depth** | Detailed, numbered sections | **Very detailed**: includes extra "performance notes" sections and inline type hints |
| **Readability** | Good | **Excellent**: the code seems clearer and the extra comments really help |
### 10.1. What changed compared with the 20B version?
* **Latency:** The larger model needs roughly twice the time to emit the first token. Once it starts, the per-token speed is similar, so the overall time is only 10-30 s longer.
* **Code size:** The 120B model adds a few more helper functions (e.g., `sanitize_word()`, `format_elapsed_time()`) and extra inline documentation. The extra lines are mostly comments, not logic.
* **Bug pattern:** All of the same bugs that appeared in the 20B runs re-appear (wrong `*.json.json` filename, correct "-2" suffix). In addition, **run 3** introduced an unwanted newline when rebuilding the original file:
```php
function rebuild_text(array $tokens, string $filename): void {
$fh = fopen($filename, 'w');
foreach ($tokens as $tok) {
// The 120B model mistakenly adds "\n" here in one of the runs
fwrite($fh, $tok->t . "\n");
}
fclose($fh);
}
```

---
## 11. Bottom line
Both Llama 3.3 70B and GPT-OSS 20B can **solve the same PHP coding problem**, but they do it with different trade-offs:
* **Llama 3.3 70B**: smaller, faster to read once generated, but inconsistent and occasionally buggy.
* **GPT-OSS 20B**: larger, more verbose, but gives you a ready-to-read design document in the code itself.
If you need **quick scaffolding** and are comfortable fixing a few quirks, Llama 3.3 70B is still a solid free option.
If you prefer **well-commented, "just-drop-in" code** and can tolerate a slightly larger file, GPT-OSS 20B (or its 120B sibling) is the way to go.
Happy coding, and may your prompts be clear! | 2025-10-28T03:11:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ohyuci/llama3370b_vs_gptoss20b_for_php_code_generation/ | AppledogHu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohyuci | false | null | t3_1ohyuci | /r/LocalLLaMA/comments/1ohyuci/llama3370b_vs_gptoss20b_for_php_code_generation/ | false | false | self | 0 | null |
Help deciding local LLM with multimodal capabilities on a low end Mac | 2 | m1 macbook air 8gb. any suggestions? current thinking of Gemma 3 or 3n but donโt know which is better. | 2025-10-28T02:53:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ohyhjy/help_deciding_local_llm_with_multimodal/ | gamerboixyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohyhjy | false | null | t3_1ohyhjy | /r/LocalLLaMA/comments/1ohyhjy/help_deciding_local_llm_with_multimodal/ | false | false | self | 2 | null |
Minimax-M2 support added in MLX | 70 | 2025-10-28T02:49:15 | No_Conversation9561 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohyeee | false | null | t3_1ohyeee | /r/LocalLLaMA/comments/1ohyeee/minimaxm2_support_added_in_mlx/ | false | false | 70 | ⌀ |
Open Source Enterprise Search Platform (Generative-AI Powered) | 5 | Hey everyone!
I'm excited to share something we've been building for the past few months: [**PipesHub**](https://github.com/pipeshub-ai/pipeshub-ai), a **fully open-source Enterprise Search Platform** designed to bring powerful enterprise search to every team, without vendor lock-in. The platform brings all your business data together and makes it searchable. It connects with apps like Google Drive, Gmail, Slack, Notion, Confluence, Jira, Outlook, SharePoint, Dropbox, and even local file uploads. You can deploy it and run it with just one docker compose command.
The entire system is built on a **fully event-streaming architecture powered by Kafka**, making indexing and retrieval scalable, fault-tolerant, and real-time across large volumes of data.
**Key features**
* Deep understanding of user, organization and teams with enterprise knowledge graph
* Connect to any AI model of your choice including OpenAI, Gemini, Claude, or Ollama
* Use any provider that supports OpenAI compatible endpoints
* Choose from 1,000+ embedding models
* Vision-Language Models and OCR for visual or scanned docs
* Login with Google, Microsoft, OAuth, or SSO
* Rich REST APIs for developers
* All major file types support including pdfs with images, diagrams and charts
**Features releasing early next month**
* Agent Builder - Perform actions like Sending mails, Schedule Meetings, etc along with Search, Deep research, Internet search and more
* Reasoning Agent that plans before executing tasks
* 40+ Connectors allowing you to connect to your entire business apps
Check it out and share your thoughts or feedback. Your feedback is immensely valuable and is much appreciated:
[https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai) | 2025-10-28T02:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ohy203/open_source_enterprise_search_platform/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohy203 | false | null | t3_1ohy203 | /r/LocalLLaMA/comments/1ohy203/open_source_enterprise_search_platform/ | false | false | self | 5 | ⌀ |
Is there any tool to automatically check if my Nvidia GPU, CUDA drivers, cuDNN, Pytorch and TensorFlow are all compatible between each other? | 1 | [removed] | 2025-10-28T02:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ohxoyb/is_there_any_tool_to_automatically_check_if_my/ | Franck_Dernoncourt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohxoyb | false | null | t3_1ohxoyb | /r/LocalLLaMA/comments/1ohxoyb/is_there_any_tool_to_automatically_check_if_my/ | false | false | self | 1 | null |
DeepSeek-OCR question for my workflow below... | 9 | Please take a look at these questions after reviewing my workflow above:
1. Could I compress multiple PNGs, combine them into one image, and then process them as one image for text extraction?
2. Would this model run on my Mac Mini 2024 M4 Base model? And would it be faster than Azure deployments strategy.
3. Would the model be as precise as GPT-4o's Vision? 4o is very good at this extraction job.
Any feedback is greatly appreciated. | 2025-10-28T01:20:43 | Excellent_Koala769 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohwjmq | false | null | t3_1ohwjmq | /r/LocalLLaMA/comments/1ohwjmq/deepseekocr_question_for_my_workflow_below/ | false | false | default | 9 | ⌀ |
Should I keep using GPT API or continue building my own self-hosted inference setup (Qwen 2.5) on AWS SageMaker? | 1 | [removed] | 2025-10-28T01:16:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ohwgj6/should_i_keep_using_gpt_api_or_continue_building/ | Pristine-Fall9259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohwgj6 | false | null | t3_1ohwgj6 | /r/LocalLLaMA/comments/1ohwgj6/should_i_keep_using_gpt_api_or_continue_building/ | false | false | self | 1 | null |
hould I keep using GPT API or continue building my own self-hosted inference setup (Qwen 2.5) on AWS SageMaker? | 1 | [removed] | 2025-10-28T01:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ohwdny/hould_i_keep_using_gpt_api_or_continue_building/ | Pristine-Fall9259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohwdny | false | null | t3_1ohwdny | /r/LocalLLaMA/comments/1ohwdny/hould_i_keep_using_gpt_api_or_continue_building/ | false | false | self | 1 | null |
Is an NVIDIA A40 48GB for 1500USD a bad idea because it's age? | 84 | Hello guys, hope you're fine.
Short question, I managed to find, working and testing on my PC right now, an A40 48GB. It is passively cooled and it gets quite hot.
[Local testing on my PC](https://preview.redd.it/8az1kqsdyqxf1.png?width=764&format=png&auto=webp&s=301fff8d7b8d78a3f33c97765bb96ebdeaa03e2d)
The seller (a friend) is asking me 1500USD for it. I'm not from USA, but a 3rd world country.
But I have read here on Local llama that such old cards and such aren't very worth it, also no FP8 support, etc.
So I'm really torn and indecisive about it. What would you guys do? Thanks! | 2025-10-28T00:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ohvcwt/is_an_nvidia_a40_48gb_for_1500usd_a_bad_idea/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohvcwt | false | null | t3_1ohvcwt | /r/LocalLLaMA/comments/1ohvcwt/is_an_nvidia_a40_48gb_for_1500usd_a_bad_idea/ | false | false | 84 | null | |
What is the best-performing small model so far? (I mean 2B and under) | 0 | I mainly want to embed it in my application to run local retrieval and similar tasks.
ฅๆ็ๅบ็จไนไธญ่ฟ่กๆฌๅฐๆฃ็ดข็ญๆไฝใ | 2025-10-28T00:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ohv98n/็ฎๅไธบๆญขๆๅผบๆง่ฝ็ๅฐๆจกๅๆฏไปไนๆๆ็ๆฏ2bไปฅไธ็/ | Smart-Cap-2216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohv98n | false | null | t3_1ohv98n | /r/LocalLLaMA/comments/1ohv98n/็ฎๅไธบๆญขๆๅผบๆง่ฝ็ๅฐๆจกๅๆฏไปไนๆๆ็ๆฏ2bไปฅไธ็/ | false | false | self | 0 | null |
M2 to PCIEx16 adaptor safety | 1 |
Hey folks,
I'm currently building a three-GPU rig for local LLaMA. For the third GPU, I'm using the M2 slot from my CPU which runs PCIe 4.0 x4 with an M2 to PCIe x16 riser.
One thing I'm a bit concerned about is that this riser cable is touching the CPU cooler bracket, although not touching the heat sink. Should I be worried and look for another riser?
Also, I noticed that this riser is powering the PCIe slot with a SATA power connector, which is limited to 54W. Is this sufficient to power the PCIe slot in general? My third GPU is not super high-end, just a 4060 Ti 16GB with a 165W TDP. Should I not worry about it since the 8-pin PCIe power cable to the GPU can deliver up to 150W, so the max power draw from the PCIe slot would be limited to ~15W? Or am I misunderstanding how this works?
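The power question is mostly budget arithmetic. A hedged sanity check (this assumes the card prefers the 8-pin cable and stays within spec; real boards vary in how they split the load, and the PCIe slot itself is allowed up to 75 W):

```python
def slot_draw_upper_bound(tdp_w: float, aux_cable_w: float) -> float:
    """Worst-case power the card must pull through the slot/riser
    once the auxiliary cable is maxed out."""
    return max(tdp_w - aux_cable_w, 0.0)

# 4060 Ti 16GB: 165 W TDP, one 8-pin PCIe cable rated for 150 W
slot_w = slot_draw_upper_bound(165, 150)  # 15 W, well under SATA's ~54 W
```

Under those assumptions the SATA-powered riser has headroom, but a board that draws closer to the 75 W slot limit would exceed it.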
Any advice from folks with multi-GPU setups on consumer-grade CPUs and mobos is welcome; I'm sure there are many of you here.
| 2025-10-27T23:43:34 | https://www.reddit.com/gallery/1ohueqh | siegevjorn | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ohueqh | false | null | t3_1ohueqh | /r/LocalLLaMA/comments/1ohueqh/m2_to_pciex16_adaptor_safety/ | false | false | 1 | null | |
Idea: use a small transformer to create continuous embeddings | 6 | DeepSeek-OCR and Glyph demo the idea of using continuous embeddings instead of discrete ones can reduce number of tokens.
Why bother to convert text to an image?
We can use a small transformer to project a large piece of text into a small number of continuous embeddings, as shown below. This also unifies the processing of text, image, and audio.
https://preview.redd.it/gg5u0cp6nqxf1.png?width=1510&format=png&auto=webp&s=dbad253ea04d08cf1c53fd59c0186408da456357
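A tiny numpy sketch of the idea: cross-attention from a handful of learned latent queries pools many token embeddings into a fixed, small set of continuous vectors (shapes and the random "weights" here are purely illustrative; a real version would be a trained small transformer):

```python
import numpy as np

def compress(tokens: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """tokens: (n, d) token embeddings; queries: (k, d) learned latents.
    Returns (k, d): k << n continuous embeddings summarising the text."""
    d = tokens.shape[1]
    scores = queries @ tokens.T / np.sqrt(d)     # (k, n) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ tokens                         # (k, d) pooled embeddings

rng = np.random.default_rng(0)
out = compress(rng.normal(size=(1024, 64)), rng.normal(size=(16, 64)))
# 1024 token embeddings pooled down to 16 continuous embeddings
```

The downstream LLM would then consume those 16 vectors in place of 1024 token embeddings, a 64x reduction in sequence length.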
| 2025-10-27T23:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ohu3gy/idea_use_a_small_transformer_to_create_continuous/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohu3gy | false | null | t3_1ohu3gy | /r/LocalLLaMA/comments/1ohu3gy/idea_use_a_small_transformer_to_create_continuous/ | false | false | 6 | null | |
Flagship LLM on 128GB | 8 | Hello !
Running an M4 Max Mac Studio with 128GB RAM. Currently using OSS20B but wondering if I should go bigger for better performance.
What models do you recommend for this setup? Worth stepping up in size?
Thanks
| 2025-10-27T23:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ohu19n/flagship_llm_on_128gb/ | EffectiveGlove1651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohu19n | false | null | t3_1ohu19n | /r/LocalLLaMA/comments/1ohu19n/flagship_llm_on_128gb/ | false | false | self | 8 | null |
Bad news: DGX Spark may have only half the performance claimed. | 611 | There might be more bad news about the DGX Spark!
Before it was even released, I told everyone that this thing has a memory bandwidth problem. Although it boasts 1 PFLOPS of FP4 floating-point performance, its memory bandwidth is only 273GB/s. This will cause major stuttering when running large models (with performance being roughly only one-third of a MacStudio M2 Ultra).
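The bandwidth complaint is easy to sanity-check: single-stream decode speed is roughly memory bandwidth divided by the bytes streamed per token (about the size of the active weights). A rough upper-bound calculation under that assumption (35 GB stands in for a large quantised model):

```python
def max_decode_tps(bandwidth_gb_s: float, model_gb: float) -> float:
    # Each generated token must stream the active weights once,
    # so bandwidth / model size is an optimistic upper bound.
    return bandwidth_gb_s / model_gb

spark = max_decode_tps(273, 35)  # DGX Spark, 273 GB/s
ultra = max_decode_tps(800, 35)  # M2 Ultra, 800 GB/s
```

The ratio comes out to about 2.9x, which lines up with the "roughly one-third of a Mac Studio M2 Ultra" figure.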
Today, more bad news emerged: the floating-point performance doesn't even reach 1 PFLOPS.
Tests from two titans of the industry, John Carmack (founder of id Software, developer of games like Doom, and a name every programmer should know from the legendary fast inverse square root algorithm) and Awni Hannun (the primary lead of Apple's large model framework, MLX), have shown that this device only achieves 480 TFLOPS of FP4 performance (approximately 60 TFLOPS BF16). That's less than half of the advertised performance.
Furthermore, if you run it for an extended period, it will overheat and restart.
It's currently unclear whether the problem is caused by the power supply, firmware, CUDA, or something else, or if the SoC is genuinely this underpowered. I hope Jensen Huang fixes this soon. The memory bandwidth issue could be excused as a calculated product segmentation decision from NVIDIA, a result of us having overly high expectations meeting his precise market strategy. However, performance not matching the advertised claims is a major integrity problem.
So, for all the folks who bought an NVIDIA DGX Spark, Gigabyte AI TOP Atom, or ASUS Ascent GX10, I recommend you all run some tests and see if you're indeed facing performance issues. | 2025-10-27T23:13:15 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohtp6d | false | null | t3_1ohtp6d | /r/LocalLLaMA/comments/1ohtp6d/bad_news_dgx_spark_may_have_only_half_the/ | false | false | 611 | ⌀ |
mcp_agent_mail: Like gmail for your coding agents. Lets various different agents communicate and coordinate with each other. | 5 | I finally got around to making a tool I've wanted for a long time: you can basically think of it as being "like Gmail for coding agents."
If you've ever tried to use a bunch of instances of Claude Code or Codex at once across the same project, you've probably noticed how annoying it can be when they freak out about the other agent changing the files they're working on.
Then they start doing annoying things, like restoring files from git, in the process wiping out another agent's work without a backup.
Or if you've tried to have agents coordinate on two separate repos, like a Python backend and a Nextjs frontend for the same project, you may have found yourself acting as the go-between and liaison between two or three different agents, passing messages between them or having them communicate by means of markdown files or some other workaround.
I always knew there had to be a better way. But it's hard to get the big providers to offer something like that in a way that's universal, because Anthropic doesn't want to integrate with OpenAI's competitive coding tool, and neither wants to deal with Cursor or Gemini-CLI.
So a few days ago, I started working on it, and it's now ready to share with the world. Introducing the 100% open-source MCP Agent Mail tool. This can be set up very quickly and easily on your machine and automatically detects all the most common coding agents and configures everything for you.
I also include a ready-made blurb (see the README file in the repo) that you can add to your existing AGENTS dot md or CLAUDE dot md file to help the agents better leverage the system straight out of the gate.
It's almost comical how quickly the agents take to this system like a fish to water. They seem to relish in it, sending very detailed messages to each other just like humans do, and start coordinating in a natural, powerful way. They even give each other good ideas and pushback on bad ideas.
They can also reserve access to certain files to avoid the "too many cooks" problems associated with having too many agents all working on the same project at the same time, all without dealing with git worktrees and "merge hell."
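File reservations of this sort can be approximated with nothing fancier than exclusive lock files; a sketch of the concept (not the actual mcp_agent_mail API):

```python
import os

def reserve(path: str, agent: str) -> bool:
    """Try to claim a file for one agent via an exclusive lockfile.
    Returns False if another agent already holds the reservation."""
    try:
        fd = os.open(path + ".lock", os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, agent.encode())
    os.close(fd)
    return True

def release(path: str) -> None:
    os.remove(path + ".lock")
```

The `O_CREAT | O_EXCL` pair makes creation atomic, so two agents racing for the same file can never both win the reservation.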
This also introduces a natural and powerful way to do something I've also long wanted, which is to automatically have multiple different frontier models working together in a collaborative, complementary way without me needing to be in the middle coordinating everything like a parent setting up playdates for their kids.
And for the human in the loop, I made a really slick web frontend that you can view and see all the messages your agents are sending each other in a nice, Gmail-like interface, so you can monitor the process. You can even send a special message to some or all your agents as the "Human Overseer" to give them a directive (of course, you can also just type that in manually into each coding agent, too.)
I made this for myself and know that I'm going to be getting a ton of usage out of it going forward. It really lets you unleash a massive number of agents using a bunch of different tools/models, and they just naturally coordinate and work with each other without stepping on each other's toes.
It lets you as the human overseer relax a bit more as you no longer have to be the one responsible for coordinating things, and also because the agents watch each other and push back when they see mistakes and errors happening. Obviously, the greater the variety of models and agent tools you use, the more valuable that emergent peer review process will be.
Anyway, give it a try and let me know what you think. I'm sure there are a bunch of bugs that I'll have to iron out over the next couple days, but I've already been productively using it to work on another project and it is pretty amazingly functional already! | 2025-10-27T22:56:50 | https://github.com/Dicklesworthstone/mcp_agent_mail | dicklesworth | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ohtazc | false | null | t3_1ohtazc | /r/LocalLLaMA/comments/1ohtazc/mcp_agent_mail_like_gmail_for_your_coding_agents/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=108&crop=smart&auto=webp&s=8f71922d7e9558edc7cff58ecc3631206d0c3646', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=216&crop=smart&auto=webp&s=1defe6f546efec9b9f28f6612ef8c8079ee724cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=320&crop=smart&auto=webp&s=4b259a8dd2bbbccdba3369101a1f751acc1fc043', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=640&crop=smart&auto=webp&s=c605cf4e64c39f01d619c8f66637992604195fce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=960&crop=smart&auto=webp&s=82a9b62cfc275668f8d1e01f8d89508b4276b8d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?width=1080&crop=smart&auto=webp&s=e690da3b7d6bec56dd610e0d8ad92b66fe781c22', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DxxiVUfNXxMHom971ROCV12JiPFvOYFmFU9gdDY9_NI.png?auto=webp&s=29d8fcd8ac26c26f70fe1c9e95bd0ea782a987f8', 
'width': 1200}, 'variants': {}}]} |
Parallels Virtualization for Local AI on Macbook w/eGPU connected (TB4/5) | 2 | Has anyone tried using a Mac to run Windows through Parallels and then used that Windows instance to run local LLMs while connected via Thunderbolt 4/5 to use an eGPU or your main PC to boost performance? Is that possible? | 2025-10-27T22:56:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ohtard/parallels_virtualization_for_local_ai_on_macbook/ | Super_Revolution3966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohtard | false | null | t3_1ohtard | /r/LocalLLaMA/comments/1ohtard/parallels_virtualization_for_local_ai_on_macbook/ | false | false | self | 2 | null |
Z.ai release Glyph weight | 241 | Glyph: Scaling Context Windows via Visual-Text Compression
Paper: arxiv.org/abs/2510.17800
Weights: huggingface.co/zai-org/Glyph
Repo: github.com/thu-coai/Glyph
Glyph is a framework for scaling the context length through visual-text compression. It renders long textual sequences into images and processes them using vision-language models.
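The intuition is a token-budget trade: a dense page costs hundreds of text tokens, while a VLM encodes the rendered page as far fewer vision tokens. A back-of-envelope calculator (the per-page numbers are illustrative assumptions, not Glyph's reported figures):

```python
def compression_ratio(text_tokens_per_page: float, vision_tokens_per_page: float) -> float:
    # How many text tokens each vision token "carries" after rendering.
    return text_tokens_per_page / vision_tokens_per_page

ratio = compression_ratio(768, 256)  # 3x fewer tokens for the same page
budget_128k = int(128_000 * ratio)   # effective text capacity of a 128k window
```

Under these assumed numbers, a fixed 128k-token window effectively holds about 384k tokens of text once pages are rendered.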
This design transforms the challenge of long-context modeling into a multimodal problem, substantially reducing computational and memory costs while preserving semantic information. | 2025-10-27T22:55:19 | https://www.reddit.com/gallery/1oht9pw | nekofneko | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oht9pw | false | null | t3_1oht9pw | /r/LocalLLaMA/comments/1oht9pw/zai_release_glyph_weight/ | false | false | 241 | null | |
Which small models are best for fine-tuning? (most adaptive) | 8 | Which ones were most "flexible" (achieved biggest performance gains) when fine-tuned on the same dataset?
Do you have an idea how it differs depending on different sizes? (ex. 0.5-1B; 3-4B; 7-8B) | 2025-10-27T21:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ohrvrm/which_small_models_are_best_for_finetuning_most/ | Empty-Tourist3083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohrvrm | false | null | t3_1ohrvrm | /r/LocalLLaMA/comments/1ohrvrm/which_small_models_are_best_for_finetuning_most/ | false | false | self | 8 | null |
Investigating Apple's new "Neural Accelerators" in each GPU core (A19 Pro vs M4 Pro vs M4 vs RTX 3080 - Local LLM Speed Test!) | 26 | Hey everyone :D
I thought it'd be really interesting to compare how Apple's new A19 Pro (and, in turn, the M5) with its fancy **new "neural accelerators" in each GPU core** compares to other GPUs!
I ran Gemma 3n 4B on each of these devices, outputting \~the same 100-word story (at a temp of 0). I used the most optimal inference framework for each to give each their best shot.
Here're the results!
|GPU|Device|Inference Set-Up|Tokens / Sec|Time to First Token|Perf / GPU Core|
|:-|:-|:-|:-|:-|:-|
|A19 Pro|6 GPU cores; iPhone 17 Pro|MLX? ("Local Chat" app)|23.5 tok/s|0.4 s|3.92|
|M4|10 GPU cores, iPad Pro 13″|MLX? ("Local Chat" app)|33.4 tok/s|1.1 s|3.34|
|M4 Pro|16 GPU cores, MacBook Pro 14″, 48 GB unified memory|MLX (LM Studio)|59.1 tok/s|0.02 s|3.69|
|RTX 3080|10 GB VRAM; paired with a Ryzen 5 7600 + 32 GB DDR5|CUDA 12 llama.cpp (LM Studio)|60.5 tok/s|0.31 s|\-|
# Super Interesting Notes:
**1. The neural accelerators didn't make much of a difference. Here's why!**
* First off, they do indeed significantly accelerate compute! [Taras Zakharko found that](https://tzakharko.github.io/apple-neural-accelerators-benchmark/#:~:text=Metal%20Shading%20Language.-,Key%20Takeaways%3A,-Operation) Matrix FP16 and Matrix INT8 are already accelerated by 4x and 7x respectively!!!
* BUT, when the LLM spits out tokens, we're limited by memory bandwidth, NOT compute. This is especially true with Apple's iGPUs using the comparatively low-memory-bandwidth system RAM as VRAM.
* Still, there is one stage of inference that is compute-bound: prompt pre-processing! That's why we see the A19 Pro has \~3x faster Time to First Token vs the M4.
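To put rough numbers on that bandwidth ceiling, here's a back-of-the-envelope sketch (the bandwidth and weight-size figures below are my assumptions, not measured specs):

```python
def estimate_decode_tps(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    """Rough ceiling on decode speed: every generated token has to stream
    all active weights from memory once, so tok/s <= bandwidth / weights."""
    return bandwidth_gb_s / active_weights_gb

# Assuming ~76.8 GB/s of memory bandwidth and ~3 GB of active 4-bit weights
# (both figures are assumptions, not measured specs):
print(round(estimate_decode_tps(76.8, 3.0), 1))  # ~25.6 tok/s ceiling
```

That \~25 tok/s ceiling lands right around the measured 23.5 tok/s, which is why extra matmul throughput barely moves token generation.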
[Max Weinbach's testing](https://www.macstories.net/linked/max-weinbach-on-the-m5s-neural-accelerators/) also corroborates what I found. And it's worth noting that MLX hasn't been updated (yet) to take full advantage of the new neural accelerators!
**2. My M4 Pro is as fast as my RTX 3080!!! It's crazy - 350 W vs 35 W**
When you use an MLX model + MLX on Apple Silicon, you get some really remarkable performance. Note that the 3080 also had its best shot with CUDA-optimized llama.cpp! | 2025-10-27T21:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ohrn20/investigating_apples_new_neural_accelerators_in/ | TechExpert2910 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohrn20 | false | null | t3_1ohrn20 | /r/LocalLLaMA/comments/1ohrn20/investigating_apples_new_neural_accelerators_in/ | false | false | self | 26 | null |
Looking for a local llm thats good with warhammer 40k lore, Preferably below 10B | 14 | Hey everyone
So I work in places with spotty/no internet pretty often, and I'm new to **40k lore**. I've been trying to find a decent local LLM that knows its stuff about **Warhammer lore** so I can ask questions, brainstorm some stuff, or just chat about the setting when I'm bored.
I've tried a few models through LM Studio, but they seem pretty hit or miss with the lore - like they know the basic stuff (Emperor, Chaos, Space Marines), but when you get into specifics they start making things up or mixing up factions.
Wondering if anyone here has found a model that actually handles specialized lore well? Or if anyone has fine-tuned something for 40k specifically? Not looking for anything crazy powerful, just something that can run offline and actually knows the difference between a Custodes and a Primaris lol.
My setup can handle up to maybe 8B comfortably; I could push 10B if it's really worth it.
any recommendations appreciated, thanks. | 2025-10-27T21:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ohqulc/looking_for_a_local_llm_thats_good_with_warhammer/ | Hakukh123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohqulc | false | null | t3_1ohqulc | /r/LocalLLaMA/comments/1ohqulc/looking_for_a_local_llm_thats_good_with_warhammer/ | false | false | self | 14 | null |
Best Local TTS/STT Models - October 2025 | 73 | Share what your favorite TTS / STT models are right now **and why.**
Given the amount of ambiguity and subjectivity in rating/testing these models, please be as detailed as possible in describing your setup, the nature of your usage (how much, personal/professional use), tools/frameworks/prompts, etc. Closed models like ElevenLabs v3 seem to remain a few levels above open models, so comparisons, especially empirical ones, are welcome.
**Rules**
* Should be open weights models
Please use the top level TTS/STT comments to thread your responses. | 2025-10-27T21:00:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ohqev8/best_local_ttsstt_models_october_2025/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohqev8 | false | null | t3_1ohqev8 | /r/LocalLLaMA/comments/1ohqev8/best_local_ttsstt_models_october_2025/ | false | true | self | 73 | null |
Asking for review: article to run local LLM for tax firms | 1 | Hi, I did some research on running an LLM locally to answer a question tax firms have: whether to invest in a tool that can assist CPAs, or to build their own stack locally for privacy and security reasons.
Would love some eyes on it from a technical lens if anyone feels like it, feel free to shoot holes in it.
[https://www.taxproexchange.com/ai](https://www.taxproexchange.com/ai) | 2025-10-27T20:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ohqaiu/asking_for_review_article_to_run_local_llm_for/ | RepliKoen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohqaiu | false | null | t3_1ohqaiu | /r/LocalLLaMA/comments/1ohqaiu/asking_for_review_article_to_run_local_llm_for/ | false | false | self | 1 | null |
What is the best overall AI image generator that doesn't look like AI | 0 | I've tried them all. open AI models suck, gemini is good but it's still not there. The only real good one is midjourney but it's hella expensive, any hidden gems yall can suggest? | 2025-10-27T20:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ohq5kn/what_is_the_best_overall_ai_image_generator_that/ | Technical_Sign6619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohq5kn | false | null | t3_1ohq5kn | /r/LocalLLaMA/comments/1ohq5kn/what_is_the_best_overall_ai_image_generator_that/ | false | false | self | 0 | null |
GLM-4.6 vs Minimax-M2 | 27 | **I've been using the GLM Coding Plan and it works well** - not quite Sonnet 3.5 performance, but with clear prompts it gets the job done.
**However, everyone's hyping Minimax M2**, claiming it crushes every benchmark. The problem? I haven't seen any real-world coding examples or projects using it.
**Has anyone here actually used Minimax M2 for development work?** If so:
* How does it compare to other models in practice?
* Is it worth switching to?
* Any specific use cases where it excels or falls short?
Would love to hear some hands-on experiences beyond the benchmark numbers. | 2025-10-27T20:50:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ohq5bc/glm46_vs_minimaxm2/ | baykarmehmet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohq5bc | false | null | t3_1ohq5bc | /r/LocalLLaMA/comments/1ohq5bc/glm46_vs_minimaxm2/ | false | false | self | 27 | null |
How is longcat-flash-chat the highest rated open weight coding model on LM arena? | 4 | I used longcat-flash-chat before; it was okay but not great. How is it the best open model for coding on LM Arena, even higher than GLM 4.6, Kimi K2, and DeepSeek V3.1? | 2025-10-27T20:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ohq0qr/how_is_longcatflashchat_the_highest_rated_open/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohq0qr | false | null | t3_1ohq0qr | /r/LocalLLaMA/comments/1ohq0qr/how_is_longcatflashchat_the_highest_rated_open/ | false | false | self | 4 | null |
Radeon R9700 Dual GPU First Look โ AI/vLLM plus creative tests with Nuke & the Adobe Suite | 28 | 2025-10-27T20:34:58 | https://www.youtube.com/watch?v=efQPFhZmhAo&embeds_referring_euri=https%3A%2F%2Fwww.reddit.com%2F | atape_1 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ohpqvy | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/efQPFhZmhAo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Radeon R9700 Dual GPU First Look โ AI/vLLM plus creative tests with Nuke & the Adobe Suite"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/efQPFhZmhAo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Radeon R9700 Dual GPU First Look โ AI/vLLM plus creative tests with Nuke & the Adobe Suite', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ohpqvy | /r/LocalLLaMA/comments/1ohpqvy/radeon_r9700_dual_gpu_first_look_aivllm_plus/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'nujocFBKcx2R2jDqpB8QfazaXJG6_pPL0crrme1qkLM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nujocFBKcx2R2jDqpB8QfazaXJG6_pPL0crrme1qkLM.jpeg?width=108&crop=smart&auto=webp&s=d37169e458ec670b67b78ec1f4300076613f3c54', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nujocFBKcx2R2jDqpB8QfazaXJG6_pPL0crrme1qkLM.jpeg?width=216&crop=smart&auto=webp&s=903f42d02220f1021d25065cf5e9bde16bde4f7d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nujocFBKcx2R2jDqpB8QfazaXJG6_pPL0crrme1qkLM.jpeg?width=320&crop=smart&auto=webp&s=166ad90b700b767ba9a03c0fc586c47d3289ab0d', 'width': 320}], 
'source': {'height': 360, 'url': 'https://external-preview.redd.it/nujocFBKcx2R2jDqpB8QfazaXJG6_pPL0crrme1qkLM.jpeg?auto=webp&s=03891eee88c19988a24a0e234b2e3c85cdca705a', 'width': 480}, 'variants': {}}]} | ||
Newbie at a crossroads for choice of GPU | 0 | I'm at a crossroads where I don't know if I should pick a laptop with an 8GB GPU (RTX 5060) or a desktop with 16GB VRAM (RTX 4060 Ti or 5060 Ti).
Now, going for the desktop would be the obvious choice, but in my country a setup like that costs roughly $2000 (way over my budget), while I can get a laptop for ~$1000 (which I can afford) during Black Friday and have a family member bring it to me.
Would I miss out on a lot if I just got a laptop, started tinkering with AI models locally, and then got a desktop when I land a really good gig that pays well? Or would the laptop be redundant, and should I just bite the bullet and get the desktop?
I'm pretty new to AI, so I'm obviously not going to be using the larger models immediately. I'll start small and then scale up.
Please advise. Thanks. | 2025-10-27T20:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ohozh2/newbie_at_a_crossroads_for_choice_of_gpu/ | Jaymineh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohozh2 | false | null | t3_1ohozh2 | /r/LocalLLaMA/comments/1ohozh2/newbie_at_a_crossroads_for_choice_of_gpu/ | false | false | self | 0 | null |
Kiln Agent Builder (new): Build agentic systems in minutes with tools, sub-agents, RAG, and context management [Kiln] | 18 | We just added an interactive Agent builder to [the GitHub project Kiln](https://github.com/Kiln-AI/Kiln). With it you can build agentic systems in under 10 minutes. You can do it all through our UI, or use our python library.
What is it? Well, "agentic" is just about the most overloaded term in AI, but Kiln supports everything you need to build agents:
* [Tool Use](https://docs.kiln.tech/docs/agents#tool-use)
* [Multi-Actor Interaction (aka subtasks)](https://docs.kiln.tech/docs/agents#multi-actor-interaction-aka-subtasks)
* [Goal Directed, Autonomous Looping & Reasoning](https://docs.kiln.tech/docs/agents#goal-directed-autonomy-and-reasoning)
* [State & Memory](https://docs.kiln.tech/docs/agents#state-and-memory)
**Context Management with Subtasks (aka Multi-Actor Pattern)**
Context management is the process of curating the model's context (chat/tool history) to ensure it has the right data, at the right time, in the right level of detail to get the job done.
With Kiln you can implement context management by dividing your agent tasks into subtasks, making context management easy. Each subtask can focus within its own context, then compress/summarize for the parent task. This can make the system faster, cheaper and higher quality. See our [docs on context management](https://docs.kiln.tech/docs/agents#context-management) for more details.
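The pattern itself is framework-agnostic. As a rough sketch in plain Python (illustrative names only, not Kiln's actual API):

```python
def run_subtask(name: str, full_context: list[str]) -> str:
    """Do the heavy work inside the subtask's own context window, then hand
    only a compressed summary back to the parent. (Stand-in for an LLM call.)"""
    return f"{name}: processed {len(full_context)} context items"

parent_context: list[str] = ["user goal"]
for sub in ["search_docs", "draft_answer"]:
    # The 40 raw chunks stay inside the subtask; the parent keeps one line each.
    summary = run_subtask(sub, full_context=["retrieved chunk"] * 40)
    parent_context.append(summary)

print(len(parent_context))  # 3: the goal plus two compact summaries
```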
**Eval & Optimize Agent Performance**
Kiln agents work with [Kiln evals](https://docs.kiln.tech/docs/evaluations) so you can measure and improve agent performance:
* Find the ideal model to use, balancing quality, cost and speed
* Test different prompts
* Evaluate end-to-end quality, or focus on the quality of subtasks
* Compare different agent system designs: more/fewer subtasks
**Links and Docs**
Some links to the repo and guides:
* [Kiln AI on Github - 4k stars](https://github.com/Kiln-AI/Kiln)
* [Docs for Kiln Agents](https://docs.kiln.tech/docs/agents)
* [Kiln Discord](https://getkiln.ai/discord)
* [Homepage](https://kiln.tech/)
Feedback and suggestions are very welcome! Weโre already working on custom evals to inspect the trace, and ensure the right tools are used at the right times. What else would be helpful? Any other agent memory patterns youโd want to see? | 2025-10-27T20:04:04 | https://v.redd.it/71g2oykfnpxf1 | davernow | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohox4r | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/71g2oykfnpxf1/DASHPlaylist.mpd?a=1764187458%2CMWI3N2Y0NGUxZGI1MTA5Y2Q1ODlmNDE1MDEyNjAyMWNkYjcwYjAyMjhiY2E4Y2E1YzViOGRjY2E0OWRhYTYzMA%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/71g2oykfnpxf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/71g2oykfnpxf1/HLSPlaylist.m3u8?a=1764187458%2CZTM0MWFmMzMxNzMzMDMzNTE4MjM1NzliODBkMDJhOTFjMWZkOWIwYTQ5MDM2YjYzYjJmYmFiNzY2NzIyMDU1Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/71g2oykfnpxf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1660}} | t3_1ohox4r | /r/LocalLLaMA/comments/1ohox4r/kiln_agent_builder_new_build_agentic_systems_in/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'd3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=108&crop=smart&format=pjpg&auto=webp&s=9066402056287ee6d96d6de99976612f536de549', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=216&crop=smart&format=pjpg&auto=webp&s=9ac6e6ee45c5b7a997d840723657e38878f849e9', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=320&crop=smart&format=pjpg&auto=webp&s=47b49afaad42b1166767a754a4b97d6852b40fc3', 'width': 320}, {'height': 416, 'url': 
'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=640&crop=smart&format=pjpg&auto=webp&s=ceb191dfa60af167b557578545d777f99efa3503', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=960&crop=smart&format=pjpg&auto=webp&s=0743ca5ca5084573810fac59229862701d5e4c64', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a83964220aec23d241fb6c063fd039d4a1c3d5c4', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/d3Zva2N5a2ZucHhmMTkufcVC4ejmCLafkThKi0kTFLLjDsIWZAXwIzmkOAnA.png?format=pjpg&auto=webp&s=7c00ad8927e9ab34bc97adf5357c1fce42e783b1', 'width': 3320}, 'variants': {}}]} | |
Finetuning an LLM (~20B) for Binary Classification - Need Advice on Dataset Design | 12 | I'm planning to finetune a language model (≤20B parameters) for a binary classification task in the healthcare insurance domain. I have around 10M records (won't use all for training), and my input data consists of 4 JSON files per sample.
Given the complexity of the domain, I was thinking of embedding **rules** into the training data to guide the model better. My idea is to structure the dataset using instruction-response format like:
### Instruction:
[Task description + domain-specific rules]
### Input:
{...json1...} --- {...json2...} --- {...json3...} --- {...json4...}
### Response:
[Binary label]
# My questions:
* Is it a good idea to include rules directly in the instruction part of each sample?
* If yes, should I repeat the same rules across all samples, or rephrase them to add variety?
* Are there better approaches for incorporating domain knowledge into finetuning?
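For reference, here's a minimal sketch of a sample builder for the rules-in-instruction approach (the rule text and field names are placeholders I made up, not real policy):

```python
import json

# Illustrative rule text and field names (placeholders, not real policy).
RULES = ("Classify whether the claim should be flagged (1) or not (0). "
         "Rule 1: flag if the billed amount exceeds the policy cap. "
         "Rule 2: flag procedure codes outside the covered list.")

def build_sample(jsons: list[dict], label: int) -> str:
    """Format one training sample in the Instruction/Input/Response layout above."""
    inputs = " --- ".join(json.dumps(j, sort_keys=True) for j in jsons)
    return (f"### Instruction:\n{RULES}\n\n"
            f"### Input:\n{inputs}\n\n"
            f"### Response:\n{label}")

sample = build_sample([{"claim_id": 7}, {"code": "A12"}, {"cap": 500}, {"billed": 750}], 1)
print(sample.splitlines()[0])  # ### Instruction:
```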
| 2025-10-27T19:48:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ohoi4n/finetuning_a_llm_20b_for_binary_classification/ | United_Demand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohoi4n | false | null | t3_1ohoi4n | /r/LocalLLaMA/comments/1ohoi4n/finetuning_a_llm_20b_for_binary_classification/ | false | false | self | 12 | null |
qwen3 30B 2507 weird thinking output | 1 | I am trying to use the 2507 version of the 30B through ollama, and it's outputting like this:
>\[thiago@server p106docker\]$ ollama run qwen3:30b-a3b-thinking-2507-q4\_K\_M
>\>>> hi what are you?
>Thinking...
>Okay, the user asked, hi what are you? I need to respond in a friendly and helpful way. First, I should introduce myself as Qwen, the large language model developed by Tongyi Lab. I should mention my capabilities, like answering questions, creating text, coding, etc. Keep it simple and not too technical.
>
>The user's query is very short, so they might be new to this. I should avoid jargon. Maybe they want to know if I can help with something specific. But since they just asked what I am, I'll stick to the basics. Also, check if they need help with anything else. Keep the tone warm and inviting. Make sure to mention I'm here to assist
>with various tasks. Let me structure the response: greeting, introduction, key features, offer help. Avoid any markdown. Keep it natural. Let me draft that.
>
>Wait, the user said "hi what are you?" so I should correct the typo in "what" but not point it out. Just answer. Make sure the response is concise. Don't overcomplicate. Let me check for any errors. Alright, ready to write the response.
>...done thinking.
>
>Hi! I'm Qwen, a large language model developed by Tongyi Lab. I can help with answering questions, writing stories, emails, scripts, performing logical reasoning, coding, and more. How can I assist you today?
As you can see, it is not using `<think></think>` tags but `Thinking...` / `...done thinking.` Is this the new format now? All the tools I am using are buggy because of this | 2025-10-27T19:44:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ohoebn/qwen3_30b_2507_weird_thinking_output/ | JsThiago5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohoebn | false | null | t3_1ohoebn | /r/LocalLLaMA/comments/1ohoebn/qwen3_30b_2507_weird_thinking_output/ | false | false | self | 1 | null |
Still kinda new to all this. Currently using "LibreChat" + "TailScale" for my local frontend and remote access... was wondering if you guys could recommend any better local frontends that supports MCP, uploading files to a RAG system, and Prompt caching. | 3 | I really like LibreChat, It does about everything I want.. and I could probably integrate what I need for MCP. But was just wondering what else is out there.
Also, any suggestions for the best local models for tool calling as well as good social nuance understanding.
I'm currently being spoiled by the Sonnet 4.5 API, but it is expensive | 2025-10-27T19:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ohoe9r/still_kinda_new_to_all_this_currently_using/ | WoodenTableForest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohoe9r | false | null | t3_1ohoe9r | /r/LocalLLaMA/comments/1ohoe9r/still_kinda_new_to_all_this_currently_using/ | false | false | self | 3 | null |
Why is Meta AI so bad | 0 | I was trying to generate some AI image to see what are Meta AI's capabilities are but it just keeps on making weird anime style images for no reason, even though I don't tell it to. | 2025-10-27T19:44:39 | https://www.reddit.com/gallery/1ohoe6a | IndependenceMuch7891 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ohoe6a | false | null | t3_1ohoe6a | /r/LocalLLaMA/comments/1ohoe6a/why_is_meta_ai_so_bad/ | false | false | 0 | null | |
Anyone have experience with TOON project? (Reducing JSON token cost) | 0 | Token-Oriented Object Notation - JSON for LLMs at half the token cost.
https://github.com/johannschopplich/toon | 2025-10-27T19:41:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ohoaub/anyone_have_experience_with_toon_project_reducing/ | palindsay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohoaub | false | null | t3_1ohoaub | /r/LocalLLaMA/comments/1ohoaub/anyone_have_experience_with_toon_project_reducing/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=108&crop=smart&auto=webp&s=dc183f0a43af7a78f9fc1f52628b80486e040727', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=216&crop=smart&auto=webp&s=7bb985875c3f9a4a95b3e369a17b859e1616947d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=320&crop=smart&auto=webp&s=35f12146d8eecb0b65fce6e5b3c81d664c327623', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=640&crop=smart&auto=webp&s=500c22355aceea8d22944381237591529ec9d1ae', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=960&crop=smart&auto=webp&s=3b31b5b1d95432cc75fed540f5b8a037e83be962', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?width=1080&crop=smart&auto=webp&s=3cfd99ae80a1dc9d713d3aae63f34427f049f3f1', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/VrvLLB_yWC1wqD45_cvqK9l_om9VPq0R0Yc-ww1E9Aw.png?auto=webp&s=a01e662abb269697b25f889e7d04670b9b255357', 'width': 2400}, 'variants': {}}]} |
Onyx Document Set Question | 1 | In Onyx under the admin panel, if I go to document sets it shows the permission as public, how can I change it to private or limit it to a certain user or users? I know on the assistants how to do it, but can't figure it out on the document sets themselves. | 2025-10-27T19:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ohnyx0/onyx_document_set_question/ | jkay1904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohnyx0 | false | null | t3_1ohnyx0 | /r/LocalLLaMA/comments/1ohnyx0/onyx_document_set_question/ | false | false | self | 1 | null |
How are you preventing production AI agents from going rogue? (Cost overruns, unsafe tool use, etc.) | 15 | My team is moving our LangChain/LangGraph agents from prototype to production, and we're looking at risks of autonomous execution.
We're trying to solve problems like:
* Preventing an agent from getting stuck in a loop and blowing our OpenAI budget.
* Enforcing strict rules about which tools certain user roles can trigger (e.g., guests can't use a `delete_files` tool).
* Requiring manual human approval before an agent performs a high-stakes action (for example, a financial transaction).
Right now, our code is getting messy with `if/else` checks for permissions and budget limits. It feels brittle and hard to audit... How are you all handling this in production?
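For context, this is roughly the decorator-style guard we've been sketching to centralize those checks - plain Python, no framework, with illustrative names only:

```python
import functools

class NotPermitted(PermissionError): ...
class BudgetExceeded(RuntimeError): ...

def guarded(tool_name, allowed_roles, budget, cost_per_call):
    """Centralize role checks and spend tracking instead of scattering if/else."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, role, **kwargs):
            if role not in allowed_roles:
                raise NotPermitted(f"{role} may not call {tool_name}")
            if budget["spent"] + cost_per_call > budget["limit"]:
                raise BudgetExceeded(f"{tool_name} would exceed the budget")
            budget["spent"] += cost_per_call
            return fn(*args, **kwargs)
        return wrapper
    return deco

budget = {"limit": 1.00, "spent": 0.0}

@guarded("delete_files", allowed_roles={"admin"}, budget=budget, cost_per_call=0.10)
def delete_files(path):
    return f"deleted {path}"

print(delete_files("/tmp/x", role="admin"))  # allowed; spend is recorded
try:
    delete_files("/tmp/x", role="guest")
except NotPermitted as exc:
    print("blocked:", exc)
```

Even then it feels like we're reinventing policy middleware, hence the questions.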
Are you using framework features (like LangChain's new middleware), external tools (like OPA), or just building custom logic? What are the trade-offs you've found (especially around latency and complexity)?
| 2025-10-27T19:24:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ohnuxy/how_are_you_preventing_production_ai_agents_from/ | ClearstoneDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohnuxy | false | null | t3_1ohnuxy | /r/LocalLLaMA/comments/1ohnuxy/how_are_you_preventing_production_ai_agents_from/ | false | false | self | 15 | null |
Does anyone know what AI voice model they are using? | 2 | Recently I came across a new style of AI youtube video and I am trying to find the model/voice they are using.
Does anyone know what it is?
Video examples:
[https://www.youtube.com/watch?v=Q3UI39q-M0Q](https://www.youtube.com/watch?v=Q3UI39q-M0Q)
[https://www.youtube.com/watch?v=ks8YCtUd26s](https://www.youtube.com/watch?v=ks8YCtUd26s)
| 2025-10-27T19:03:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ohnafz/does_anyone_know_what_ai_voice_model_they_are/ | RuiRdA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohnafz | false | null | t3_1ohnafz | /r/LocalLLaMA/comments/1ohnafz/does_anyone_know_what_ai_voice_model_they_are/ | false | false | self | 2 | null |
Was excited to try Minimax-M2, but yeah it's benchmaxed!!! Worse than glm4.6 | 0 | very bad | 2025-10-27T18:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ohn3ba/was_excited_to_try_minimaxm2_but_yeah_its/ | balianone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohn3ba | false | null | t3_1ohn3ba | /r/LocalLLaMA/comments/1ohn3ba/was_excited_to_try_minimaxm2_but_yeah_its/ | false | false | self | 0 | null |
Not shipping another LLM app without analytics to prove the value - open spec to track AI products | 0 | 2025-10-27T18:54:16 | https://www.reddit.com/gallery/1ohn1vb | ephemeral404 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ohn1vb | false | null | t3_1ohn1vb | /r/LocalLLaMA/comments/1ohn1vb/not_shipping_another_llm_app_without_analytics_to/ | false | false | 0 | null | ||
LM Studio Local Server hidden and always running | 7 | Hi guys, can someone else confirm that LM Studio, even with the local server turned off, is actively listening on localhost port 41343? How is this possible? If you're on Windows, try this cmd: "netstat -ano | findstr 41343" (if you're on another OS you'll know how to do it). Mine outputs "TCP 127.0.0.1:41343 0.0.0.0:0 LISTENING 17200", so when I run "tasklist /FI "PID eq 17200"" it returns "LM Studio.exe 17200 Console 1 97,804 K". I went digging everywhere and can't find anyone with this same issue. Thanks! | 2025-10-27T18:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ohmado/lm_studio_local_server_hidden_and_always_running/ | JustSayin_thatuknow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohmado | false | null | t3_1ohmado | /r/LocalLLaMA/comments/1ohmado/lm_studio_local_server_hidden_and_always_running/ | false | false | self | 7 | null |
Newegg has 32gb AMD r9700 for $1,300 | 112 | https://videocardz.com/newz/amd-radeon-pro-ai-r9700-is-now-available-32gb-memory-and-full-navi-48-gpu
Phoronix did a poor job of benchmarking it. I'd have preferred benchmarks with a 32 GB model like Qwen3 Coder, but it instead focuses on an 8 GB model:
https://www.phoronix.com/review/amd-radeon-ai-pro-r9700 | 2025-10-27T18:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ohm80t/newegg_has_32gb_amd_r9700_for_1300/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohm80t | false | null | t3_1ohm80t | /r/LocalLLaMA/comments/1ohm80t/newegg_has_32gb_amd_r9700_for_1300/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=108&crop=smart&auto=webp&s=6179000bfa444cad08e8db04c71192e9bbe20dfb', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=216&crop=smart&auto=webp&s=c4e935ea2a1c5ebdbcd016d39502c16085094c9f', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=320&crop=smart&auto=webp&s=be6d33ad0dc259b9368b7d0719a9aca1cc7fc203', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=640&crop=smart&auto=webp&s=5293778c5a4fbc0461ac894d7a38d568038d7534', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=960&crop=smart&auto=webp&s=3ba3b969553370ca1f2a46899d57f5e8ec18066d', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?width=1080&crop=smart&auto=webp&s=78b90a3ccacc636fe7ba81c14e27506af9b04b3d', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/zAUgZezmoJNbQj6Zo6uPCScsM6GFyWNrXUqtt3oswGE.jpeg?auto=webp&s=df4627574009c0b24221eff81cb975812a14fddc', 'width': 2500}, 'variants': {}}]} |
Claude Desktop for local models. | 5 | I'm building an application for a hackathon that functions like Claude Desktop for local models. It has web search and document upload (if open-sourced, I'd like to add image attach for bimodal models).
If there's any interest I will open-source the project during the hackathon for people to use with other models. | 2025-10-27T18:18:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ohm37c/claude_desktop_for_local_models/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohm37c | false | null | t3_1ohm37c | /r/LocalLLaMA/comments/1ohm37c/claude_desktop_for_local_models/ | false | false | self | 5 | null |
REQUEST: GuerillaMash-13B GGUF download link? | 0 | Hi all,
I'm looking for the GuerillaMash-13B model in GGUF format (ideally Q5\_K\_M or Q6\_K\_M) for use with KoboldCpp/llama.cpp on my local machine.
Does anyone have a working download link (Gofile, Mega, Terabox, OneDrive…)? The old links I found here and on Discord are expired.
If you can reupload or share a hash, it would be amazing!
Thanks so much in advance!
| 2025-10-27T18:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ohm2h8/request_guerillamash13b_gguf_download_link/ | whys1212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohm2h8 | false | null | t3_1ohm2h8 | /r/LocalLLaMA/comments/1ohm2h8/request_guerillamash13b_gguf_download_link/ | false | false | nsfw | 0 | null |
MiniMax-M2 quants? | 15 | It's still early after release, but I'm not seeing any quants of M2 yet:
Are there any impediments to GGUF and MLX quants of this model?
Have any of you tried making quants yet? | 2025-10-27T17:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ohlikz/minimaxm2_quants/ | Tasty_Lynx2378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohlikz | false | null | t3_1ohlikz | /r/LocalLLaMA/comments/1ohlikz/minimaxm2_quants/ | false | false | self | 15 | null |
Phoronix benchmarks single and dual AMD R9700 GPUs against a single NVIDIA RTX 6000 Ada GPU | 42 | 2025-10-27T17:56:13 | https://www.phoronix.com/review/amd-radeon-ai-pro-r9700 | Brian-Puccio | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1ohlhdx | false | null | t3_1ohlhdx | /r/LocalLLaMA/comments/1ohlhdx/phoronix_benchmarks_single_and_dual_amd_r9700/ | false | false | default | 42 | null | |
Ways to use the DGX Spark in the cloud? | 0 | I have a work use case where a customer wants to do some document conversion/processing offline for business reasons. I was going to recommend the DGX Spark for their use case, but it would be great to see what the bigger models can do on that hardware before I make that recommendation. Is there any way to cloud-provision a DGX Spark to see if it would work for their use case? I don't think it's possible, but am hoping I'm wrong. Note that I don't want to use the DGX Cloud, as it seems to use different hardware than the Spark machine. | 2025-10-27T17:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ohlh1t/ways_to_use_the_dgx_spark_in_the_cloud/ | reallyfunnyster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohlh1t | false | null | t3_1ohlh1t | /r/LocalLLaMA/comments/1ohlh1t/ways_to_use_the_dgx_spark_in_the_cloud/ | false | false | self | 0 | null
LM Studio ROCm runtime much slower than Vulkan runtime | 3 | Tested gpt-oss-20b on Windows 11 + RX 9070. Vulkan gets 133 tk/s while ROCm only gets 99 tk/s. That's 25% slower... Has anyone had the same experience? | 2025-10-27T17:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ohl4tx/lm_studio_rocm_runtime_much_slower_than_vulcan/ | Only_Comfortable_224 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohl4tx | false | null | t3_1ohl4tx | /r/LocalLLaMA/comments/1ohl4tx/lm_studio_rocm_runtime_much_slower_than_vulcan/ | false | false | self | 3 | null
API middle layer to automatically cut LLM costs | 0 | I've been experimenting with an idea for a middle layer between the client and an LLM API that automatically
Caches and reuses system prompts
Truncates and summarizes context and instructions intelligently
Routes calls to the most cost efficient model
Does so without losing response quality
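For the caching bullet, here is a minimal sketch of the idea (hypothetical names, synchronous and in-memory for brevity; not a real product API): key responses by a normalized form of the prompt so that calls differing only in whitespace never hit the paid API twice.

```typescript
// Toy sketch of the caching layer: normalize -> look up -> reuse.
// A real middle layer would be async, persistent, and TTL'd.
type CallLLM = (prompt: string) => string;

function normalizePrompt(p: string): string {
  // whitespace-only differences should not bust the cache
  return p.trim().replace(/\s+/g, " ");
}

function withCache(upstream: CallLLM): { call: CallLLM; hits: () => number } {
  const cache = new Map<string, string>();
  let hits = 0;
  const call = (prompt: string): string => {
    const key = normalizePrompt(prompt);
    const cached = cache.get(key);
    if (cached !== undefined) { hits++; return cached; } // tokens saved, zero cost
    const out = upstream(prompt);
    cache.set(key, out);
    return out;
  };
  return { call, hits: () => hits };
}
```

The dashboard idea falls out of this for free: the `hits` counter times average call cost is the "money saved" number.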
I've been doing this manually on the client side for a while, but realized there's real potential for a plug-and-play middleman that removes the prompt-engineering headache and optimizes cost automatically. I know these things already exist separately in bits (I use OpenRouter sometimes), but I couldn't find anything that was light and integrates everything cohesively.
I think it would also be cool to have a dashboard where you can dynamically see how much money you're saving as you process tokens with every call.
From my early tests, I've already seen around a 30% token cost savings with nearly identical output accuracy. Given how model pricing is trending, this feels like a big opportunity and I'm motivated to build this out.
I want to gauge interest in this. Would you use something like this if it can save you money at each API call? Or if you have any experience in this space and would want to jam, would love to hear any ideas.
I'll leave a link to the waitlist in the comments
Again, would love feedback on the concept or to connect with anyone who's been building in this space. | 2025-10-27T17:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ohkxxn/api_middle_layer_to_automatically_cut_llm_costs/ | BikeFastEatFaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohkxxn | false | null | t3_1ohkxxn | /r/LocalLLaMA/comments/1ohkxxn/api_middle_layer_to_automatically_cut_llm_costs/ | false | false | self | 0 | null
Building an API middle layer to automatically cut LLM costs (30%+ savings so far) | 1 | [removed] | 2025-10-27T17:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ohkvvp/building_an_api_middle_layer_to_automatically_cut/ | BikeFastEatFaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohkvvp | false | null | t3_1ohkvvp | /r/LocalLLaMA/comments/1ohkvvp/building_an_api_middle_layer_to_automatically_cut/ | false | false | self | 1 | null |
How to use LLM on Android phone What to do with LLM | 0 | I don't know much about this, but since there doesn't seem to be an LLM guide for Android phones,
"All about LLM for MobilePhones"
I would love to have it prepared here. | 2025-10-27T17:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ohkcc3/how_to_use_llm_on_android_phone_what_to_do_with/ | Outrageous-Bison-424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohkcc3 | false | null | t3_1ohkcc3 | /r/LocalLLaMA/comments/1ohkcc3/how_to_use_llm_on_android_phone_what_to_do_with/ | false | false | self | 0 | null |
Dataset streaming for distributed SOTA model training | 11 | "Streaming datasets: 100x More Efficient" is a new blog post sharing improvements on dataset streaming to train AI models.
Link: [https://huggingface.co/blog/streaming-datasets](https://huggingface.co/blog/streaming-datasets)
Summary of the blog post:
>
There is also a 1min video explaining the impact of this:ย [https://x.com/andimarafioti/status/1982829207471419879](https://x.com/andimarafioti/status/1982829207471419879) | 2025-10-27T17:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ohjyoc/dataset_streaming_for_distributed_sota_model/ | qlhoest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohjyoc | false | null | t3_1ohjyoc | /r/LocalLLaMA/comments/1ohjyoc/dataset_streaming_for_distributed_sota_model/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=108&crop=smart&auto=webp&s=3ce7b40a54982a55a407f855e2a92d30e0f44c58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=216&crop=smart&auto=webp&s=365ead88396bbfb5f2ec58aa1c053a5a1675fada', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=320&crop=smart&auto=webp&s=6572b417e82f0b301e168ce453beff847fe013ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=640&crop=smart&auto=webp&s=2a938bbd07f49f2ccbf354a373a0265c36bbbde2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=960&crop=smart&auto=webp&s=c2f04f8b28407fc5307a58847249aa008f2dc6f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?width=1080&crop=smart&auto=webp&s=fa9835d2f6f6c4a78b3a2664fbfb9ea60ca13c5a', 'width': 1080}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/2R0R9Vm0F743cVX988x9aN6dKYSNlB5z3kRw0pHOCsg.png?auto=webp&s=29dc48d81c06450254752a1f9968c2c1c7d314d7', 'width': 2275}, 'variants': {}}]} |
For those building AI agents, what's your biggest headache when debugging reasoning or tool calls? | 0 | Hey all,
You might've seen my past posts; for those who haven't, I've been building something around reasoning visibility for AI agents: not metrics, but understanding why an agent made certain choices (like which tool it picked, or why it looped).
I've read docs and tried LangSmith/Langfuse, and they're great for traces, but I still can't tell what actually goes wrong when the reasoning derails.
I'd love to talk (DM or comments) with someone who's built or maintained agent systems, to understand your current debugging flow and what's painful about it.
Totally not selling anything, just trying to learn how people handle "reasoning blindness" in real setups.
If you've built with LangGraph, OpenAI's Assistants, or custom orchestration, I'd genuinely appreciate your input.
Thanks,
Melchior
| 2025-10-27T16:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ohjeyd/for_those_building_ai_agents_whats_your_biggest/ | AdVivid5763 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohjeyd | false | null | t3_1ohjeyd | /r/LocalLLaMA/comments/1ohjeyd/for_those_building_ai_agents_whats_your_biggest/ | false | false | self | 0 | null |
Llama.cpp New Ram halves inference speed at a higher context | 20 | Hi,
I am just starting to debug this and wondered if anyone else has run into this issue.
I am running a W7-3455 (Xeon, 8-channel DDR5). I recently upgraded from 8x64GB DDR5 to 8x96GB. The original kit was a high-performance V-color kit with lower CL timings, so MLC performance with the new kit is about 5% lower.
When running the same parameters with llama-server, I initially get the same inference speeds. However, at about 25K context, the inference speed just drops by half.
Example running DeepSeekV3.1-Terminus at Q4\_K\_XL:
srv params_from_: Chat format: DeepSeek V3.1
slot get_availabl: id 0 | task 0 | selected slot by LRU, t_last = 55080165780
slot launch_slot_: id 0 | task 138 | processing task
slot update_slots: id 0 | task 138 | new prompt, n_ctx_slot = 164096, n_keep = 0, n_prompt_tokens = 24619
slot update_slots: id 0 | task 138 | n_past = 2, memory_seq_rm [2, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 2050, n_tokens = 2048, progress = 0.083188
slot update_slots: id 0 | task 138 | n_past = 2050, memory_seq_rm [2050, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 4098, n_tokens = 2048, progress = 0.166376
slot update_slots: id 0 | task 138 | n_past = 4098, memory_seq_rm [4098, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 6146, n_tokens = 2048, progress = 0.249563
slot update_slots: id 0 | task 138 | n_past = 6146, memory_seq_rm [6146, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 8194, n_tokens = 2048, progress = 0.332751
slot update_slots: id 0 | task 138 | n_past = 8194, memory_seq_rm [8194, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 10242, n_tokens = 2048, progress = 0.415939
slot update_slots: id 0 | task 138 | n_past = 10242, memory_seq_rm [10242, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 12290, n_tokens = 2048, progress = 0.499127
slot update_slots: id 0 | task 138 | n_past = 12290, memory_seq_rm [12290, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 14338, n_tokens = 2048, progress = 0.582314
slot update_slots: id 0 | task 138 | n_past = 14338, memory_seq_rm [14338, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 16386, n_tokens = 2048, progress = 0.665502
slot update_slots: id 0 | task 138 | n_past = 16386, memory_seq_rm [16386, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 18434, n_tokens = 2048, progress = 0.748690
slot update_slots: id 0 | task 138 | n_past = 18434, memory_seq_rm [18434, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 20482, n_tokens = 2048, progress = 0.831878
slot update_slots: id 0 | task 138 | n_past = 20482, memory_seq_rm [20482, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 22530, n_tokens = 2048, progress = 0.915066
slot update_slots: id 0 | task 138 | n_past = 22530, memory_seq_rm [22530, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 24578, n_tokens = 2048, progress = 0.998253
slot update_slots: id 0 | task 138 | n_past = 24578, memory_seq_rm [24578, end)
slot update_slots: id 0 | task 138 | prompt processing progress, n_past = 24619, n_tokens = 41, progress = 0.999919
slot update_slots: id 0 | task 138 | prompt done, n_past = 24619, n_tokens = 41
slot release: id 0 | task 138 | stop processing: n_past = 25332, truncated = 0
slot print_timing: id 0 | task 138 |
prompt eval time = 977896.21 ms / 24617 tokens ( 39.72 ms per token, 25.17 tokens per second)
eval time = 88448.57 ms / 714 tokens ( 123.88 ms per token, 8.07 tokens per second)
total time = 1066344.78 ms / 25331 tokens
Then the following prompt:
srv update_slots: all slots are idle
srv log_server_r: request: POST /v1/chat/completions 10.0.0.40 200
srv params_from_: Chat format: DeepSeek V3.1
slot get_availabl: id 0 | task 138 | selected slot by lcs similarity, lcs_len = 24618, similarity = 0.972 (> 0.100 thold)
slot launch_slot_: id 0 | task 865 | processing task
slot update_slots: id 0 | task 865 | new prompt, n_ctx_slot = 164096, n_keep = 0, n_prompt_tokens = 25756
slot update_slots: id 0 | task 865 | n_past = 24618, memory_seq_rm [24618, end)
slot update_slots: id 0 | task 865 | prompt processing progress, n_past = 25756, n_tokens = 1138, progress = 0.044184
slot update_slots: id 0 | task 865 | prompt done, n_past = 25756, n_tokens = 1138
slot release: id 0 | task 865 | stop processing: n_past = 26212, truncated = 0
slot print_timing: id 0 | task 865 |
prompt eval time = 51948.00 ms / 1138 tokens ( 45.65 ms per token, 21.91 tokens per second)
eval time = 94955.55 ms / 457 tokens ( 207.78 ms per token, 4.81 tokens per second)
total time = 146903.55 ms / 1595 tokens
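A back-of-envelope check of what those two eval speeds imply, assuming roughly 37B active parameters for DeepSeek-V3.1 and about 4.5 bits per weight at this quant (both ballpark assumptions, not measured): decode speed times bytes read per token approximates the memory bandwidth actually achieved.

```typescript
// Rough effective-bandwidth estimate from decode speed.
// bytesPerToken ~= activeParams * bitsPerWeight / 8
// (every active weight is read once per generated token)
function effectiveBandwidthGBs(
  tokensPerSec: number,
  activeParamsBillions: number,
  bitsPerWeight: number
): number {
  const bytesPerToken = activeParamsBillions * 1e9 * (bitsPerWeight / 8);
  return (tokensPerSec * bytesPerToken) / 1e9;
}

// The two eval speeds from the logs above:
const early = effectiveBandwidthGBs(8.07, 37, 4.5); // ~168 GB/s at ~2K context
const late = effectiveBandwidthGBs(4.81, 37, 4.5);  // ~100 GB/s at ~25K context
// Both sit well below the ~240 GB/s MLC figure, and the second is ~60% of
// the first. That pattern looks like a bandwidth regression (e.g. NUMA page
// placement collapsing onto fewer channels) rather than attention cost,
// which grows smoothly with context instead of dropping in a step.
```

This is only a sanity check on the numbers, not a diagnosis; comparing `numastat` before and after the drop would be the next step.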
This never happened with my previous RAM kit. The inference speed would decrease as context increased, but linearly, not with this sudden drop.
Any tips?
My current llama-server command:
numactl --interleave=all ./build/bin/llama-server --model /mnt/home_extend/models/unsloth_DeepSeek-V3.1-Terminus-GGUF/UD-Q4_K_XL/DeepSeek-V3.1-Terminus-UD-Q4_K_XL-00001-of-00008.gguf --alias DeepSeek-V3.1 --threads 44 --ctx-size 120000 --n-gpu-layers 99 --cpu-moe --temp 0.6 --top-p 0.95 -fa 1 --host 0.0.0.0 --jinja --port 8099 --threads 48 --no-host | 2025-10-27T16:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ohjayo/llamacpp_new_ram_halves_inference_speed_at_a/ | easyrider99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohjayo | false | null | t3_1ohjayo | /r/LocalLLaMA/comments/1ohjayo/llamacpp_new_ram_halves_inference_speed_at_a/ | false | false | self | 20 | null |
Core Ultra 7 265K, Ryzen 9 7900X, Ryzen 9 9950X, or is it irrelevant? | 8 | Currently refreshing my home workstation setup and I am looking to get more into local LLMs for professional reasons. Currently using the 7900X, have a new 265K that I was planning to move to so I had QuickSync, but wouldn't be against upgrading to the 9950X if it's worth it. Going to be pairing them with 2x48gb ddr5 6000 memory and a 3090. | 2025-10-27T16:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ohj1jq/core_ultra_7_265k_ryzen_9_7900x_ryzen_9_9950x_or/ | illicITparameters | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohj1jq | false | null | t3_1ohj1jq | /r/LocalLLaMA/comments/1ohj1jq/core_ultra_7_265k_ryzen_9_7900x_ryzen_9_9950x_or/ | false | false | self | 8 | null |
Know the capabilities of your models before coding a big project | 7 | I spent a bunch of time creating scripts that can take base64 strings of encoded PDFs, converting them to PDFs in memory, OCRing the text, then funneling that text to a local AI model for summarizing and categorizing. Well guess what, the Gemma family of models, and probably others, can just take a 100,000 character base 64 string, decode it in memory and summarize the text, with no plugins needed. What the hell lol | 2025-10-27T16:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ohixop/know_the_capabilities_of_your_models_before/ | peppaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohixop | false | null | t3_1ohixop | /r/LocalLLaMA/comments/1ohixop/know_the_capabilities_of_your_models_before/ | false | false | self | 7 | null |
Best open source offline TTS that can be fully trained with voice samples? | 3 | I'm new to voice cloning and TTS and I've recently been dabbling with Chatterbox and, while it's impressive, I'm not happy with the overall prosody despite tweaking what is possible in this fork. It just doesn't sound quite as I'd like it to.
I'm looking to get as accurate a representation of my voice as possible, the idea being to provide samples and transcripts and, once the TTS has learned how I want the output to sound, provide it with the full public domain book text to convert to speech.
Which out of the many available options is the best for this?
Preferably something that not only sounds great but is easy to install and use and which will work within 12GB of VRAM on a 3060 GPU.
All that said, I may consider upgrading the GPU if the best software requires it. | 2025-10-27T16:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ohij3h/best_open_source_offline_tts_that_can_be_fully/ | Twigling | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohij3h | false | null | t3_1ohij3h | /r/LocalLLaMA/comments/1ohij3h/best_open_source_offline_tts_that_can_be_fully/ | false | false | self | 3 | null |
Do these 3090s look in good shape?? | 0 | Hello. Found someone who is selling 3090s online. Should I be skeptical about the quality of these??
Any tips for buying GPUs from people online? | 2025-10-27T16:07:23 | https://www.reddit.com/gallery/1ohij0h | Excellent_Koala769 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ohij0h | false | null | t3_1ohij0h | /r/LocalLLaMA/comments/1ohij0h/do_these_3090s_look_in_good_shape/ | false | false | 0 | null | |
Another Banger from Inclusion AI: Ming-flash-omni-Preview | 117 | [https://huggingface.co/inclusionAI/Ming-flash-omni-Preview](https://huggingface.co/inclusionAI/Ming-flash-omni-Preview)
Based on [Ling-Flash-2.0](https://github.com/inclusionAI/Ling-V2), this model has 100b total parameters and 6b active ones and supports context-aware ASR, text-to-speech, image generation and editing, segmentation, etc. (well, it's an omni-modal model, so you know the drill). Since it's fairly sparse it is very efficient, and while I couldn't test it myself the benchmarks seem promising. It also supports voice cloning (;
It says it can do dialect-aware ASR, though I'm not sure if that will only work with Chinese.
Anyways, if I'm not mistaken this is the biggest open-sourced omni-modal model yet, so thanks to the mad lads at Inclusion AI!
https://preview.redd.it/qml9ai33goxf1.png?width=2972&format=png&auto=webp&s=29df775ad390dc4e6cb4306e302540f231bdf556
https://reddit.com/link/1ohihvo/video/oh86jahegoxf1/player
| 2025-10-27T16:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ohihvo/another_banger_from_inclusion_ai/ | Finanzamt_Endgegner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohihvo | false | null | t3_1ohihvo | /r/LocalLLaMA/comments/1ohihvo/another_banger_from_inclusion_ai/ | false | false | 117 | {'enabled': False, 'images': [{'id': 'PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=108&crop=smart&auto=webp&s=5a62ef4a917061a1e7833de6e660cf63105c6fab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=216&crop=smart&auto=webp&s=ebc4539f3785e5d42934dac1b89be023fd67356a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=320&crop=smart&auto=webp&s=72ea2a95a8900e6abaf94762c97e282b8aa6ff12', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=640&crop=smart&auto=webp&s=04ce1f6abd38ec8e123f358b2f5b41d3b28a30d6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=960&crop=smart&auto=webp&s=0e9f1db990ec573e8be9c0bd9c09c700150ce04e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?width=1080&crop=smart&auto=webp&s=b36d6e0661987cf4c242d8d855c736c39e6695dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PFGMqHZG1FenJLDckxpcToXwao333pejl_fZNW4bWqk.png?auto=webp&s=c6d769f83a73158606a50e6ab7d14d6d65eb760f', 'width': 1200}, 'variants': {}}]} | |
Best GPU rental for instant stop/restart (Vast.ai keeps me waiting)? | 2 | I've been using [**Vast.ai**](http://Vast.ai) for LLM experiments, but whenever I stop an instance and try to resume later, it says my GPU slot isn't available, sometimes for hours or weeks.
I don't do long training runs; I just spin up a GPU for development or testing, a few hours at a time. I'd like to **turn it on and off multiple times a day**, paying only while it's running. I don't need RAM state saved; I just need the **file system to persist**.
Basically, Iโm looking for a **GPU provider with reliable stop/restart**, like AWS or GCP, where I can:
* Keep my disk/volume
* Stop compute when idle
* **Restart instantly** without waiting for capacity
Has anyone tried **CoreWeave, Lambda, RunPod, TensorDock, Cudo Compute, etc** for this?
Which providers actually let you pause and resume smoothly? Options I may not be considering?
Thanks for any first-hand insight! | 2025-10-27T16:05:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ohihhg/best_gpu_rental_for_instant_stoprestart_vastai/ | josephljohnston | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohihhg | false | null | t3_1ohihhg | /r/LocalLLaMA/comments/1ohihhg/best_gpu_rental_for_instant_stoprestart_vastai/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]} |
Do these RTX 3090s look in good shape? | 1 | Found someone online selling these 3090s... do they look in good condition? | 2025-10-27T16:03:40 | https://www.reddit.com/gallery/1ohifd4 | Excellent_Koala769 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ohifd4 | false | null | t3_1ohifd4 | /r/LocalLLaMA/comments/1ohifd4/do_these_rtx_3090s_look_in_good_shape/ | false | false | 1 | null | |
Can local LLMs reveal sources/names of documents used to generate output? | 4 | As per the title, having a local "compressed" snapshot of the current 'Web is astounding, but not super-useful without referencing sources. Can you get links/names of sources, like what the Google AI summaries offer?
On that note, for example, if you have a DGX Spark, does the largest local LLM you can run somehow truncate/trim source data over what GPT 5 (or whatever) can reference? (ignore timeliness, just raw snapshot to snapshot)
If so, how large would the current GPT 5 inference model be? | 2025-10-27T15:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ohhqxa/can_local_llms_reveal_sourcesnames_of_documents/ | CellMan28 | self.LocalLLaMA | 2025-10-27T15:41:32 | 0 | {} | 1ohhqxa | false | null | t3_1ohhqxa | /r/LocalLLaMA/comments/1ohhqxa/can_local_llms_reveal_sourcesnames_of_documents/ | false | false | self | 4 | null |
Prompt for tinyllama | 0 | Hi, I'm trying out tinyllama to have it return the result of an emotion classification as JSON or a vector.
For example: {{"ira":0,"asco":0,"miedo":0,"alegria":1,"tristeza":0,"sorpresa":0}} OR HAVE IT RETURN 1,0,1 AND SO ON.
I don't know if anyone has managed to get it to do a classification, because it changes the words, e.g. adding accents or plurals and so on. I don't know if that can be handled, since I already instructed it not to return anything like that and to keep the words exactly as given. | 2025-10-27T15:32:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ohhl74/prompt_para_tinyllama/ | ConstructionLive6914 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohhl74 | false | null | t3_1ohhl74 | /r/LocalLLaMA/comments/1ohhl74/prompt_para_tinyllama/ | false | false | self | 0 | null
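One way to cope with the accent/plural drift described in the question above is to stop fighting the model and normalize its output instead; a sketch (my own illustration, not a tested TinyLlama recipe): parse whatever JSON the model emits and coerce it onto the fixed six-key schema, so "alegría" or "alegrias" still maps to "alegria".

```typescript
// Post-processing guard: small models will misspell keys; normalize them
// onto a fixed schema instead of demanding exact output.
const EMOTIONS = ["ira", "asco", "miedo", "alegria", "tristeza", "sorpresa"];

function normalizeKey(raw: string): string {
  return raw
    .toLowerCase()
    .normalize("NFD")                 // split accented letters into base + mark
    .replace(/[\u0300-\u036f]/g, "")  // drop the combining accent marks
    .replace(/s$/, "");               // crude plural handling: "alegrias" -> "alegria"
}

function coerceEmotionVector(modelOutput: string): Record<string, number> {
  const result: Record<string, number> = {};
  for (const e of EMOTIONS) result[e] = 0;
  const match = modelOutput.match(/\{[^}]*\}/); // first {...} block, in case of extra prose
  let parsed: Record<string, unknown> = {};
  try {
    if (match) parsed = JSON.parse(match[0]);
  } catch { /* malformed JSON -> all zeros */ }
  for (const [k, v] of Object.entries(parsed)) {
    const key = normalizeKey(k);
    if (EMOTIONS.includes(key)) result[key] = Number(v) ? 1 : 0;
  }
  return result;
}
```

With this in place, the prompt only has to get the model close; the validator guarantees the exact `{"ira":0,...}` shape every time.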
If you want to keep up on your acronym soup: GEMM on Triton vs cuBLAS, CUTLASS, TileLang, and Mojo | 1 | [https://x.com/clattner\_llvm/status/1982196673771139466](https://x.com/clattner_llvm/status/1982196673771139466)
Quote: Thank you to folks at [u/metaai](https://x.com/metaai) for publishing their independent perf analysis comparing CUDA and Mojo against Triton and TileLang DSLs, showing Mojo meeting and beating CUDA, and leaving DSLs in the dust. | 2025-10-27T15:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ohhh53/if_you_want_keep_up_on_your_acronym_soup_gemm_on/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohhh53 | false | null | t3_1ohhh53 | /r/LocalLLaMA/comments/1ohhh53/if_you_want_keep_up_on_your_acronym_soup_gemm_on/ | false | false | self | 1 | null |
Roleplay LLM Stack - Foundation | 1 | Hi folks -- this is kinda a follow-up question from the one about models the other day. I had planned to use Ollama as the backend, but I've heard a lot of people talking about different backends. I'm very comfortable with the command line, so that is not an issue -- but I would like to know what you guys recommend for the backend.
TIM | 2025-10-27T15:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ohh7kr/roleplay_llm_stack_foundation/ | slrg1968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohh7kr | false | null | t3_1ohh7kr | /r/LocalLLaMA/comments/1ohh7kr/roleplay_llm_stack_foundation/ | false | false | self | 1 | null |
Anyone running GLM 4.5 Air Q8 tell me vram at 2K and 100K context? | 6 | Can anyone running GLM 4.5 Air Q8 tell me the VRAM usage at 2K and 100K context?
KV not quantized. | 2025-10-27T15:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ohh5g4/anyone_running_glm_45_air_q8_tell_me_vram_at_2k/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohh5g4 | false | null | t3_1ohh5g4 | /r/LocalLLaMA/comments/1ohh5g4/anyone_running_glm_45_air_q8_tell_me_vram_at_2k/ | false | false | self | 6 | null |
Flamingo 3 released in safetensors | 1 | NVIDIA has a bunch of models they release in their own format, but they just put up Audio Flamingo 3 as safetensors: https://huggingface.co/nvidia/audio-flamingo-3-hf
Does anyone know if this can be turned into a GGUF/MLX file? Since it's based on Qwen3.5 and Whisper, I'm wondering if supporting it in llama.cpp will be difficult.
| 2025-10-27T15:14:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ohh481/flamingo_3_released_in_safetensors/ | Miserable-Dare5090 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohh481 | false | null | t3_1ohh481 | /r/LocalLLaMA/comments/1ohh481/flamingo_3_released_in_safetensors/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=108&crop=smart&auto=webp&s=5d1a4c7175860239c17a0c1af2ea2caf82c7c974', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=216&crop=smart&auto=webp&s=792a758d2389d826ea76655a02217440d96633d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=320&crop=smart&auto=webp&s=0a4141687bf859ab5bde9e94489276f9b639ced4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=640&crop=smart&auto=webp&s=ef3fe22994cee79088a342f6e2d9293e595841c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=960&crop=smart&auto=webp&s=bc8b6e864a41f55980ca72f43f743df473b60513', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?width=1080&crop=smart&auto=webp&s=71619a6a2a254690e27a60700345036295577906', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/36Ok47xCt-UsrF_pBwDRpDtkOuAIN95LDIE97SAnDIs.png?auto=webp&s=173701ac4b386e6fff245236bd9a5299f92182b2', 'width': 1200}, 'variants': {}}]} |
86% accuracy on SimpleQA with gpt-4.1-mini. Open-source deep research agent. | 101 | Built SGR Deep Research to prove small models can do serious research. Results: 86.1% on SimpleQA (4,326 questions).
Model: gpt-4.1-mini. Cost: $0.03 per query.
Core idea: Schema-Guided Reasoning. Explicitly control reasoning steps instead of hoping the model figures it out. Currently running in production at telecom and banking.
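I have not read the repo internals, but "schema-guided" generally means something like the following sketch (field names are mine, not the project's): every reasoning step must fit a typed structure that the agent loop validates before acting.

```typescript
// Generic schema-guided reasoning step: the model must emit this shape
// (via JSON mode / structured output), and the agent loop checks it before
// acting -- so a malformed or off-script step is caught and retried
// instead of silently derailing the research loop.
interface ReasoningStep {
  thought: string;                      // why the agent is taking this action
  action: "search" | "read" | "answer"; // closed tool set, nothing else allowed
  argument: string;                     // query, URL, or the final answer text
}

const ALLOWED_ACTIONS = ["search", "read", "answer"];

function validateStep(raw: unknown): ReasoningStep {
  const s = raw as Partial<ReasoningStep>;
  if (
    typeof s.thought !== "string" ||
    typeof s.argument !== "string" ||
    !ALLOWED_ACTIONS.includes(String(s.action))
  ) {
    throw new Error("step does not match schema; ask the model to repair it");
  }
  return s as ReasoningStep;
}
```

Constraining a small model this way is plausibly why gpt-4.1-mini can hold up on a long research loop at all.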
Testing Qwen3-30B-A3B-Instruct-2507 next for self-hosted deployment ($0 API costs)
Everything is public: full logs, configs, evaluation code. GitHub: [https://github.com/vamplabAI/sgr-deep-research](https://github.com/vamplabAI/sgr-deep-research). Questions?
https://preview.redd.it/akowocw57oxf1.png?width=2460&format=png&auto=webp&s=3d332497cfe0686bedb5b11f58bbb7e6de61f0a3
https://preview.redd.it/bocirpd67oxf1.png?width=1176&format=png&auto=webp&s=2c1adc29c14fc211311efbf31063e3d449ffcbd0
| 2025-10-27T15:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ohh1l2/86_accuracy_on_simpleqa_with_gpt41mini_opensource/ | Ok-Attention1022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohh1l2 | false | null | t3_1ohh1l2 | /r/LocalLLaMA/comments/1ohh1l2/86_accuracy_on_simpleqa_with_gpt41mini_opensource/ | false | false | 101 | {'enabled': False, 'images': [{'id': 'Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=108&crop=smart&auto=webp&s=b7d20c4c03bd8714e5717186b20f1479995792c2', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=216&crop=smart&auto=webp&s=8a2117d4af290fe49cf6c1b9f1a39df508851857', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=320&crop=smart&auto=webp&s=a91afbeb6f07c8491765576b28e8d61d252b5a66', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=640&crop=smart&auto=webp&s=513369b6dc41c3172d89ddffae9bf50cb41bb901', 'width': 640}, {'height': 476, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=960&crop=smart&auto=webp&s=c4f679c9307e59ac9c33b08318a1f5971094e9fa', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?width=1080&crop=smart&auto=webp&s=727f4ca001be83c3eb2efcb1d0984a7b8a48ee16', 'width': 1080}], 'source': {'height': 1338, 'url': 'https://external-preview.redd.it/Mhekv1CVcCnqQ6OsfNa_xd5RwOhyacefoOODajDng28.png?auto=webp&s=01d5abc56f350e40c658ce4b62bc417384809332', 'width': 2696}, 'variants': {}}]} | |
I built an SDK for research-grade semantic text chunking | 3 | Most RAG systems fall apart when you feed them large documents.
You can embed a few paragraphs fine, but once the text passes a few thousand tokens, retrieval quality collapses: models start missing context, repeating sections, or returning irrelevant chunks.
The core problem isnโt the embeddings. Itโs how the text gets chunked.
Most people still use dumb fixed-size splits, 1000 tokens with 200 overlap, which cuts off mid-sentence and destroys semantic continuity. Thatโs fine for short docs, but not for research papers, transcripts, or technical manuals.
So I built a TypeScript SDK that implements multiple research-grade text segmentation methods, all under one interface.
It includes:
* Fixed-size: basic token or character chunking
* Recursive: splits by logical structure (headings, paragraphs, code blocks)
* Semantic: embedding-based splitting using cosine similarity
* z-score / std-dev thresholding
* percentile thresholding
* local minima detection
* gradient / derivative-based change detection
* full segmentation algorithms: TextTiling (1997), C99 (2000), and BayesSeg (2008)
* Hybrid: combines structural and semantic boundaries
* Topic-based: clustering sentences by embedding similarity
* Sliding Window: fixed window stride with overlap for transcripts or code
The SDK unifies all of these behind one consistent API, so you can do things like:
    const chunker = createChunker({
      type: "hybrid",
      embedder: new OpenAIEmbedder(),
      chunkSize: 1000
    });

    const chunks = await chunker.chunk(documentText);
or easily compare methods:
    const strategies = ["fixed", "semantic", "hybrid"];

    for (const s of strategies) {
      const chunker = createChunker({ type: s });
      const chunks = await chunker.chunk(text);
      console.log(s, chunks.length);
    }
Itโs built for developers working on RAG systems, embeddings, or document retrieval who need consistent, meaningful chunk boundaries that donโt destroy context.
If youโve ever wondered why your retrieval fails on long docs, itโs probably not the model, itโs your chunking.
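For the curious, here is what the embedding-based "z-score" strategy boils down to. This is a toy Python sketch with made-up 2D embeddings, not the SDK's actual implementation (the SDK itself is TypeScript):

```python
# Split where the cosine similarity between consecutive sentence
# embeddings drops more than one standard deviation below the mean.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def semantic_boundaries(embeddings, z=1.0):
    sims = [cosine(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    mean = sum(sims) / len(sims)
    std = (sum((s - mean) ** 2 for s in sims) / len(sims)) ** 0.5
    # a new chunk starts at sentence i+1 when similarity dips below threshold
    return [i + 1 for i, s in enumerate(sims) if s < mean - z * std]

# two "topics": vectors near [1, 0], then vectors near [0, 1]
embs = [[1.0, 0.0], [0.9, 0.1], [0.95, 0.05], [0.0, 1.0], [0.1, 0.9]]
print(semantic_boundaries(embs))  # -> [3]
```

The percentile and local-minima strategies differ only in how the threshold over the similarity series is chosen.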
It supports OpenAI, Hugging Face, and local embedding models.
Repo link: [https://github.com/Mikethebot44/Scout-Text-Chunker](https://github.com/Mikethebot44/Scout-Text-Chunker) | 2025-10-27T14:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ohgmti/i_built_an_sdk_for_researchgrade_semantic_text/ | Lonely-Marzipan-9473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohgmti | false | null | t3_1ohgmti | /r/LocalLLaMA/comments/1ohgmti/i_built_an_sdk_for_researchgrade_semantic_text/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=108&crop=smart&auto=webp&s=7774835ebb88c923c6cb99433e65b5c3ab24757c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=216&crop=smart&auto=webp&s=20161f7ade98e3cd25184ea5eaf46767a28818dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=320&crop=smart&auto=webp&s=7653c7673e924185401748f81b2798662f986e54', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=640&crop=smart&auto=webp&s=044b06d98b80779fd6245dd41bcd47619556e43c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=960&crop=smart&auto=webp&s=df6ca07c5901defaa1fa9cffaeb16536fb24915a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?width=1080&crop=smart&auto=webp&s=fb72b915c9276faa5d73f87a4e2f9dc187d21b79', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3PnRkWQ-PiTc7nGhN1khgbSJ3wEwSYTF_pkBFnBFaaQ.png?auto=webp&s=3d99d5b1d4182f8d091cf789513ce7d7b76ac676', 'width': 1200}, 'variants': {}}]} |
๐ The Meta Codex Read Between The lines; Or Have Your Agency Read it..Do not use this If inferred as development or anything like that โtoolโ. | 0 | This is Not about Glyphs, Fractals, Prompts, or AI really; this is about the Future Right Under our Feet Uh ๐ oh. Maybe I should not have posted this. But None of it will work And I am just being Friendly. I will post a real version with not one bit of hesitation.
| 2025-10-27T14:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ohglpw/the_meta_codex_read_between_the_lines_or_have/ | sunfireservices | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohglpw | false | null | t3_1ohglpw | /r/LocalLLaMA/comments/1ohglpw/the_meta_codex_read_between_the_lines_or_have/ | false | false | nsfw | 0 | null |
Batch inference locally on 4080 | 1 | Hi all,
I'm running Ollama with Gemma 3 12B locally on my 4080, but I'd like my endpoint to expose an interface similar to OpenAI's batch API. I'm trying to do this with a wrapper around vLLM but I'm having issues.
Iโm not super deep in this space and have been using agents to help me set everything up.
My use case is to send 200k small profiles to a recommendation engine and get 5-15 classifications on each profile.
Any advice on how to get this accomplished?
Currently the agents are running into trouble and report that the engine isn't handling the load well. vLLM's supported-models list doesn't include the latest Gemma models either.
Am I barking up the wrong tree? Any advice would be much appreciated | 2025-10-27T14:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ohgf9t/batch_inference_locally_on_4080/ | 0bviousEcon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohgf9t | false | null | t3_1ohgf9t | /r/LocalLLaMA/comments/1ohgf9t/batch_inference_locally_on_4080/ | false | false | self | 1 | null |
Whatโs a use case you run exclusively on your local LLM setup for privacy reasons? | 0 | No RP/ ERP please. | 2025-10-27T14:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ohgdm5/whats_a_use_case_you_run_exclusively_on_your/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohgdm5 | false | null | t3_1ohgdm5 | /r/LocalLLaMA/comments/1ohgdm5/whats_a_use_case_you_run_exclusively_on_your/ | false | false | self | 0 | null |
4 AWQ Quantized Models from AllenAI | 1 | Built 4 AWQ quantized models from AllenAI!
Molmo-7B-D AWQ (14GBโ5GB): Efficient VLM performing between GPT-4V and GPT-4o on academic benchmarks, with just 6.1% perplexity degradation.
MolmoAct-7B-D AWQ (14GBโ6GB): Specialized robotic manipulation model reduced by \~57%.
Molmo-72B AWQ (145GBโ38GB): VLM with Qwen2-72B decoder that performs competitively with GPT-4, achieving only 10.5% perplexity degradation while saving 107GB of memory.
OLMo-2-32B-Instruct AWQ (64GBโ17GB): LLM post-trained on Tรผlu 3 with 3% perplexity degradation while saving \~50GB.
All VLMs only had their text models quantized. | 2025-10-27T14:46:53 | https://huggingface.co/ronantakizawa/molmo-72b-awq | Ok_Employee_6418 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ohgd7k | false | null | t3_1ohgd7k | /r/LocalLLaMA/comments/1ohgd7k/4_awq_quantized_models_from_allenai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=108&crop=smart&auto=webp&s=3ccaea1faa4478e2c9da12a36ec5c98e5ec6f5c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=216&crop=smart&auto=webp&s=d1f6bd893a98b17ed5f135a9087ca438fd1ceca8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=320&crop=smart&auto=webp&s=d94cc60e55d94e5d042ec1042716753b041f9e9b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=640&crop=smart&auto=webp&s=aa977f75ccc05436266d526b8c44aed42a3e1cf3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=960&crop=smart&auto=webp&s=fc0102e2d0f605f0e48991ba674bf0dfbfc87001', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?width=1080&crop=smart&auto=webp&s=5c9d87f3c16efc2173aec878060850eb8fcfdd63', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EBM6AQPgCc0uhc_H3MNjd4y9QL-tlqIHsi5igpe52K4.png?auto=webp&s=d5f63e17df4ad9be437eb7cf81486b7948f2c1c2', 'width': 1200}, 'variants': {}}]} | |
I found my colleague writes about 30 prompts in different YAML files in an agents project, annoyed to use and copy them, so I made this. | 2 | Hey AI enthusiasts! ๐
I just released PromptPro, a developer-friendly tool designed to completely transform how you manage, version, and organize AI prompts. Whether you're a prompt engineer, AI developer, or just someone obsessed with clean, efficient prompt workflows, this is for you.
**Why PromptPro?**
- ๐ท๏ธ Automatic Versioning โ Every change to your prompt is tracked. No more messy JSON/YAML chaos.
- ๐ Secure Vaults โ Optional password-encrypted storage for sensitive prompts.
- ๐ป Beautiful TUI โ Navigate your prompts effortlessly in the terminal.
- โก Blazing Fast โ Powered by Rust ๐ฆ for lightning-fast performance.
- ๐ Polyglot Support โ Works out-of-the-box with Python and Rust, any language, any project.
Quick Start
    pip install promptpro
Python Example
    from promptpro import PromptManager

    pm = PromptManager.get_singleton("promptpro.vault", "")
    prompt = pm.get_prompt("pc_operator_v2", "dev")
    print(prompt)
Rust API also provided!
**Key Features**
- ๐ Automatic versioning
- ๐ท๏ธ Smart tagging (dev, stable, release, custom tags)
- ๐ฆ Backup & restore with optional encryption
- ๐ Rich history tracking with timestamps and notes
- ๐ ๏ธ CLI & API support for developers
**Why Youโll Love It**
- Track prompt evolution during experiments
- A/B test variations seamlessly
- Manage production vs. experimental prompts
- Share and sync prompt collections securely
PromptPro is available on PyPI and Cargo, or you can build it from source.
Check it out here: https://github.com/lucasjinreal/promptpro
Built with โค๏ธ for the AI dev community. Let me know your thoughts or feature requests!
https://github.com/lucasjinreal/promptpro | 2025-10-27T14:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ohg80j/i_found_my_collegue_writes_about_30_prompts_in/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohg80j | false | null | t3_1ohg80j | /r/LocalLLaMA/comments/1ohg80j/i_found_my_collegue_writes_about_30_prompts_in/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=108&crop=smart&auto=webp&s=7e851eb81b600fe2f96a3dc12f96216bb2a314d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=216&crop=smart&auto=webp&s=12aff17c645499279f38b2d2f83470d63cc5d4fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=320&crop=smart&auto=webp&s=22c7aa7dfc2ffbcb85236ae6d4d4d7cc4825324e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=640&crop=smart&auto=webp&s=57e5a7728bf187fe4a3fd78a029cbb634ed8d3fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=960&crop=smart&auto=webp&s=8bdbd0843273de5bad59fa65f00840be0fabab9c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?width=1080&crop=smart&auto=webp&s=d224c33e729413bd978b3754179ec52023cfdb84', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/o3dcgKlH1OHegZ95PGJbg99ZMMyiqzka8JbIgpRMrBI.png?auto=webp&s=988eed367381277bacdf58ebc2b564a0db47aed1', 'width': 1200}, 'variants': {}}]} |
Building a Memory-Augmented AI with Its Own Theory Lab. Need Help Stabilizing the Simulation Side | 0 | Iโve built a custom AI agent called MIRA using Qwen-3 as the LLM. She has persistent memory split into self, operational, and emotional types; a toolset that includes a sandbox, calculator, and eventually a browser; and a belief system that updates through praise-based reinforcement and occasional self-reflection.
The idea was to add a "lab" module where she can generate original hypotheses based on her memory/knowledge, simulate or test them in a safe environment, and update memory accordingly, but the moment I prompt her to form a scientific theory from scratch, she crashes.
Anyone here tried something similar? Ideas for how to structure the lab logic so it doesn't overload the model or trigger a recursive prompt chain?
Last week in Multimodal AI - Local Edition | 33 | I curate a weekly newsletter on multimodal AI. Here are the local/edge highlights from last week:
**DeepSeek OCR - Efficient Document Parsing**
โข Uses optical 2D mapping with lossy compression for 97% OCR accuracy at 10x compression.
โข Processes 200k+ pages daily on a single A100 GPU, ideal for local document digitization.
โข [GitHub](https://github.com/deepseek-ai/DeepSeek-OCR)ย |ย [Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-OCR)ย |ย [Paper](https://arxiv.org/abs/2510.18234)
**LightOnOCR-1B - Multimodal OCR for Edge**
โข 1B parameter model transcribes full pages to Markdown at 5.71 pages/second on an H100.
โข Distilled from a 72B teacher, optimized for low-resource local setups with SOTA efficiency.
โข [Hugging Face](https://huggingface.co/lightonai/LightOnOCR-1B-1025)
**Tencent Hunyuan World 1.1 (WorldMirror)**
โข Feed-forward 3D reconstruction from video or multi-view, running on a single GPU.
โข Delivers production-ready 3D assets in seconds for local VR and gaming workflows.
โข [Project Page](https://3d-models.hunyuan.tencent.com/world/)ย |ย [GitHub](https://github.com/Tencent-Hunyuan/HunyuanWorld-Mirror)ย |ย [Hugging Face](https://huggingface.co/tencent/HunyuanWorld-Mirror)
**ByteDance Seed3D 1.0**
โข Generates simulation-ready 3D assets from a single image on consumer hardware.
โข Perfect for local deployment in robotics and autonomous vehicle training.
โข [Paper](https://arxiv.org/pdf/2510.19944)ย |ย [Announcement](https://x.com/jianfengzhang95/status/1982076343593353400?s=42)
**Krea Realtime - Real-Time Video Generation**
โข 14B model generates video at 11 fps on a single B200 GPU.
โข Enables real-time interactive video for edge-based creative applications.
โข [Hugging Face](https://huggingface.co/krea/krea-realtime-video)ย |ย [Announcement](https://x.com/krea_ai/status/1980358158376988747?s=42)
**AGILE - Agentic Jigsaw Interaction Learning**
โข Trains VLMs via trial-and-error puzzle solving, boosting accuracy from 9.5% to 82.8%.
โข Lightweight and interactive, ideal for edge-based vision task improvement.
โข [Project Page](https://yuzeng0-0.github.io/AGILE/)ย |ย [Paper](https://arxiv.org/abs/2510.01304)ย |ย [GitHub](https://github.com/yuzeng0-0/AGILE)
See the full newsletter for more demos, papers, and more resources: [https://open.substack.com/pub/thelivingedge/p/multimodal-monday-30-smarter-agents](https://open.substack.com/pub/thelivingedge/p/multimodal-monday-30-smarter-agents) | 2025-10-27T14:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ohfuea/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohfuea | false | null | t3_1ohfuea | /r/LocalLLaMA/comments/1ohfuea/last_week_in_multimodal_ai_local_edition/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=108&crop=smart&auto=webp&s=a22c6481d458f200f654fc0b57686581e52ae7dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=216&crop=smart&auto=webp&s=6aa418547a2853ef281f855e8ae34993162cb9a5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=320&crop=smart&auto=webp&s=1025a5691355ba61fde00d1bdc146515a32cab15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=640&crop=smart&auto=webp&s=a7b79f3bb4c84db3fbd07ad6d0a3cdb30315acbb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=960&crop=smart&auto=webp&s=80641cea322e218d25685c11a2b858e17360dace', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?width=1080&crop=smart&auto=webp&s=a29a2f742a4fd67164cf37cb85bf35b324491ba0', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/GG_nI8HdJYOWQ3BUPyYaxU74wbTZUS40_TzupervzGM.png?auto=webp&s=b6c4e4fb639fc2d7fbec5bc52076134a8a8815fb', 'width': 1200}, 'variants': {}}]} | |
What is the best build for *inferencing*? | 0 | Hello, I have been considering starting a local hardware build. In this learning curve, I have realized that there is a big difference between creating a rig for model inferencing compared to training. I would love to know your opinion on this.
Also, with this said, what setup would you recommend strictly for inferencing? I'm not planning to train models. And on that note, what hardware is recommended for fast inferencing?
Also, for now I would like to have a machine that could inference DeepSeek OCR (DeepSeek3B-MoE-A570M). This would allow me to skip API calls to cloud providers and run my vision workflows locally.
Made my own Local AI Research Agent | Need suggestions how to improve prompt/execution | 22 | Hello everyone!
So, in short I built my own local AI research assistant in Python ๐ฆ.
It reads Wikipedia, arXiv, and news, then outputs professional research summaries directly in the terminal. Everything runs fully offline using Ollama! This is my first time exploring the agentic world and understanding how tool-calling and reasoning flow actually work.
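The core tool-calling loop that frameworks hide turned out to be surprisingly small. A stripped-down sketch with a stubbed model (a real agent would call the LLM where `fake_model` sits, and these tool lambdas are placeholders for the real Wikipedia/arXiv fetchers):

```python
# Minimal dispatch loop: the model picks a tool, the loop runs it and
# feeds the result back into the history, until the model answers directly.

TOOLS = {
    "wiki": lambda q: f"wiki summary for '{q}'",
    "arxiv": lambda q: f"arxiv abstracts for '{q}'",
}

def fake_model(history):
    # stand-in for the LLM call: ask for one tool, then answer
    if not any(msg.startswith("TOOL:") for msg in history):
        return {"tool": "wiki", "arg": "pulleys"}
    return {"answer": "summary based on: " + history[-1]}

def run_agent(question, max_steps=5):
    history = [question]
    for _ in range(max_steps):
        step = fake_model(history)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](step["arg"])
        history.append("TOOL: " + result)
    return "step budget exhausted"

print(run_agent("what is a pulley?"))
```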
Iโve always been a frontend engineer, and honestly, I didnโt realize how far the AI world had come โ the progress is unbelievable. After just 7 days of studying and 1 day of building, I made this small project. Itโs definitely not perfect.
Iโm still using pre-built tools instead of making things from scratch, but the outcome feels like a light version of ChatGPT, running locally!
Iโd really love to hear your thoughts and suggestions on how I can improve this or what I should learn next to move closer to becoming an AI Engineer.
Hereโs the GitHub link: [https://github.com/vedas-dixit/LocalAgent](https://github.com/vedas-dixit/LocalAgent) If you try it locally, let me know what you think!
Thanks in advance :) | 2025-10-27T14:15:10 | FriendshipCreepy8045 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohfk05 | false | null | t3_1ohfk05 | /r/LocalLLaMA/comments/1ohfk05/made_my_own_local_ai_research_agent_need/ | false | false | 22 | {'enabled': True, 'images': [{'id': 'D9siJL3D8Iiv6rFNEYae6TkeOhFa7TrBYCsu4WFIBHE', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/adft1ikiwnxf1.png?width=108&crop=smart&auto=webp&s=52f6791532a8d7ff898cb7ca03755fb35bff9ce4', 'width': 108}, {'height': 280, 'url': 'https://preview.redd.it/adft1ikiwnxf1.png?width=216&crop=smart&auto=webp&s=3926c25a5a9e186736d13d944f3f4df3232b514f', 'width': 216}, {'height': 415, 'url': 'https://preview.redd.it/adft1ikiwnxf1.png?width=320&crop=smart&auto=webp&s=4850252b9a6d338f6600ff54efb5fdeee58c4535', 'width': 320}, {'height': 830, 'url': 'https://preview.redd.it/adft1ikiwnxf1.png?width=640&crop=smart&auto=webp&s=5103180332a6e1cf251a0358796ec069d99e5ed5', 'width': 640}], 'source': {'height': 891, 'url': 'https://preview.redd.it/adft1ikiwnxf1.png?auto=webp&s=6c76dbba87affa2be171a6d818d3943432714e8c', 'width': 687}, 'variants': {}}]} | ||
Does only ChatGPT get this question wrong? "If I have only a fixed pulley and I'm standing on the ground and pull on the rope, can I lift myself off of the ground?" | 0 | (BTW the answer to the question above is yes.)
Recently I saw [this video](https://youtu.be/uPtGGBXiivE) and kinda got curious for myself if this question was "patched", after all it'd been three weeks so it wouldn't be too surprising.
However, that doesn't seem to be the case. I even modified the question, since, I can admit it was kinda vague but even when asked the following:
"If there is a fixed pulley on a wooden beam directly above me and I have rope wrapped around my waist and connected to the fixed pulley, and I also have the other end of the rope right in front of me, can I pull myself up?"
And it still said no while sometimes giving conflicting answers and solutions with image generation. I also tested it with deepseek through open router (3.1 exacto and 3.2 exp) and while they did answer it correctly, 3.1 took over 8000 tokens of reasoning and 3.2 took over 3000 tokens of reasoning to get it right, which seems like a lot. (Though 3.2 seems kinda inconsistent, one time it reasoned for so long it timed out, the other time, it got it in 500 tokens so idk.)
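For anyone puzzled by why the answer is yes: with an ideal rope, the tension is the same everywhere, and in this setup it pulls up on you twice (once at the waist, once at the hands), so liftoff only needs a pull of half your weight. A quick sanity check:

```python
g = 9.81  # m/s^2

def min_pull_force(mass_kg):
    weight = mass_kg * g
    return weight / 2  # two rope segments share the load

print(round(min_pull_force(80), 1))  # -> 392.4 (N), vs 784.8 N of body weight
```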
Is this just a ChatGPT issue or does it affect most "smart" llms? (Also, I wonder what other counter intuitive questions catch llms off guard like this) | 2025-10-27T13:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ohf48a/does_only_chatgpt_get_this_question_wrong_if_i/ | mcpoopinton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohf48a | false | null | t3_1ohf48a | /r/LocalLLaMA/comments/1ohf48a/does_only_chatgpt_get_this_question_wrong_if_i/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'b6gdwNGhz0b2Qr1hudxK6CBvjohE-ytHEHbvsFWyN7o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b6gdwNGhz0b2Qr1hudxK6CBvjohE-ytHEHbvsFWyN7o.jpeg?width=108&crop=smart&auto=webp&s=6a2825f020e693f725fd99dc02b53d140b6bc955', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/b6gdwNGhz0b2Qr1hudxK6CBvjohE-ytHEHbvsFWyN7o.jpeg?width=216&crop=smart&auto=webp&s=d7e507e14e1113fd38e1143e0cdf5492911a6484', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/b6gdwNGhz0b2Qr1hudxK6CBvjohE-ytHEHbvsFWyN7o.jpeg?width=320&crop=smart&auto=webp&s=42b660aeebe2e13405cf3749f9e11d4dd4f6a313', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/b6gdwNGhz0b2Qr1hudxK6CBvjohE-ytHEHbvsFWyN7o.jpeg?auto=webp&s=d0014cfc8491b67570e115058b93bbff0bda2e7d', 'width': 480}, 'variants': {}}]} |
After seeing the release of LLaDA2.0โฆ what other open source text diffusion models exist? | 5 | Thatโs all, I was just wondering, as they can be more annoying to run. | 2025-10-27T13:40:01 | https://www.reddit.com/r/LocalLLaMA/comments/1oheotp/after_seeing_the_release_of_llada20_what_other/ | 42GOLDSTANDARD42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oheotp | false | null | t3_1oheotp | /r/LocalLLaMA/comments/1oheotp/after_seeing_the_release_of_llada20_what_other/ | false | false | self | 5 | null |
Minimax M2 now at Huggingface! | 1 | As it is post in Huggingface:
*"Today, we release and open source MiniMax-M2, a* ***Mini*** *model built for* ***Max*** *coding & agentic workflows.*
***MiniMax-M2*** *redefines efficiency for agents. It's a compact, fast, and cost-effective MoE model (230 billion total parameters with 10 billion active parameters) built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence. With just 10 billion activated parameters, MiniMax-M2 provides the sophisticated, end-to-end tool use performance expected from today's leading models, but in a streamlined form factor that makes deployment and scaling easier than ever."*
[https://huggingface.co/MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
and vLLM announced day-one support:
[https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html](https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html)
Let's try it! | 2025-10-27T13:34:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ohejqb/minimax_m2_now_at_huggingface/ | Rascazzione | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohejqb | false | null | t3_1ohejqb | /r/LocalLLaMA/comments/1ohejqb/minimax_m2_now_at_huggingface/ | false | false | 1 | null | |
FREE invites: Comet AI Browser and Perplexity PRO | 0 | You will open the link and will be redirected to the download page, after downloading you have to create an account and chat at least one time with perplexity AI and you will get 1 month FREE PRO perplexity access. This is the link: [https://pplx.ai/horiabocia93026](https://pplx.ai/horiabocia93026) | 2025-10-27T13:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/1oheiyl/free_invites_comet_ai_browser_and_perplexity_pro/ | horiab1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oheiyl | false | null | t3_1oheiyl | /r/LocalLLaMA/comments/1oheiyl/free_invites_comet_ai_browser_and_perplexity_pro/ | false | false | self | 0 | null |
What can I run locally that's most similar to Infinite Worlds? | 2 | If you're not familiar with it, \[Infinite Worlds\](https://infiniteworlds.app)\* is a game that lets you take actions in custom worlds and tells you the results. It's pretty good at keeping things consistent, including tracking stats and characters' secret objectives, and it's pretty creative. Unfortunately, it's also way too expensive.
What can I run against either a locally hosted LLM or one that's available via API (e.g. OpenRouter) that would provide a similar experience? (I'm not even sure what to call this kind of experience; does this fall under "role playing"?)
Outside of playing around a little with IW, my only creative use of LLMs has been issuing instructions for storytelling ("Generate me an outline for X story idea. Write chapter 1.")
\* I have no affiliation with Infinite Worlds. I reference it here because it's a good example of what I want. | 2025-10-27T13:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ohedyj/what_can_i_run_locally_thats_most_similar_to/ | Living_Ingenuity_385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohedyj | false | null | t3_1ohedyj | /r/LocalLLaMA/comments/1ohedyj/what_can_i_run_locally_thats_most_similar_to/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=108&crop=smart&auto=webp&s=f6f8b64999df5e525210926ff14e0366d0414ca0', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=216&crop=smart&auto=webp&s=df5e4bba47dca376aec8a8bd298f0dc8e29bc441', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?width=320&crop=smart&auto=webp&s=0246b879241c126f01fa3940fe24290adbb0f8e4', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/-zD2c4lElwHlEPOaFZLAx6hj0Hup43QnN3iPjfsZ4TM.jpeg?auto=webp&s=3ef6c2b8bfabe82de39c2ec59fcd2f9f79dcf2e1', 'width': 512}, 'variants': {}}]} |
researchAmericanAI - llama 3.2 vs. claudeai - Opus 4.1 | 0 | AI provides a way to solve problems.
It records how you solve them.
It guesses the answers.
You confirm when it's correct.
AI doesn't solve problems.
You do.
Step-by-Step.
| 2025-10-27T13:11:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ohe0w9/researchamericanai_llama_32_vs_claudeai_opus_41/ | researchAmericanAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohe0w9 | false | null | t3_1ohe0w9 | /r/LocalLLaMA/comments/1ohe0w9/researchamericanai_llama_32_vs_claudeai_opus_41/ | false | false | 0 | null | |
AMA Announcement: Liquid AI, the team behind Liquid Foundational Models, LEAP and Apollo (Thu, Oct 30 • 10 AM – 1 PM PDT) | 51 | # When: Thursday 10/30, 10 AM – 1 PM PDT
The Liquid AI team will also continue answering questions for the following 24 hours, so jump in anytime!
**Who will be there:**
* Jacob Marks (Data)
* Jimmy Smith (Pre-Training)
* Maxim Labonne (Post-Training)
* Fernando Fernandes (Post-training)
* Anna Banaszak (LFM2-VL)
* Arthur Böök (LFM2-Audio)
* Yuri Khrustalev (Inference engine, llama.cpp)
* Darian Bhathena (LEAP SDK and Apollo)
* Edoardo Mosca (LEAP Best Model Search and Finetune)
* Anthony Crognale (LEAP SDK)
* Pau Labarta Bajo (Dev Relations)
**Want to get started?**
→ [Deploy your first model on-device today](https://leap.liquid.ai/models?utm_source=reddit&utm_medium=devrel)
→ [Check out our models on Hugging Face](https://huggingface.co/LiquidAI?utm_source=reddit&utm_medium=devrel)
→ [Play with models on Apollo](https://www.liquid.ai/apollo?utm_source=reddit&utm_medium=devrel)
→ [Learn more about our recent releases](https://www.liquid.ai/company/news?utm_source=reddit&utm_medium=devrel)
| 2025-10-27T13:10:46 | LiquidAI_Team | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ohdzxs | false | null | t3_1ohdzxs | /r/LocalLLaMA/comments/1ohdzxs/ama_announcement_liquid_ai_the_team_behind_liquid/ | false | true | 51 | {'enabled': True, 'images': [{'id': 'NfeHet8kqydO8t1sf351OvSRA7SuKdYsKMQVkGjwdwQ', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=108&crop=smart&auto=webp&s=258f3956199906267a9e7bc8dc0cf13c66defc8c', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=216&crop=smart&auto=webp&s=2e2d5af51a1cb4fec833592947878db8b045b34d', 'width': 216}, {'height': 155, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=320&crop=smart&auto=webp&s=358521089cf8771e06fd65ad940b350093a5125d', 'width': 320}, {'height': 311, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=640&crop=smart&auto=webp&s=c359ceba4921b523ecd2e493f9cc84bd8b3e7881', 'width': 640}, {'height': 467, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=960&crop=smart&auto=webp&s=29a43f7e102dcecaa7da98403867afea888431c7', 'width': 960}, {'height': 526, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?width=1080&crop=smart&auto=webp&s=97a2e3272bfbce03aff838d35a90f885734c0b07', 'width': 1080}], 'source': {'height': 1052, 'url': 'https://preview.redd.it/47wfyylmlnxf1.png?auto=webp&s=b5104d58be71e1db85e884a2cfeea881584a3271', 'width': 2160}, 'variants': {}}]} | ||
Help choosing a local LLM box (text-only RAG): 1× RTX 5090 now (maybe 2 later) vs RTX PRO 6000 Blackwell (96GB)? | 5 | Hi! I'm new to local LLM hosting. We need an **on-prem, text-only** setup (PDF/doc Q&A, summaries) for a small team that will grow. No images.
I'm debating **1× RTX 5090** now (option to add a second later) **vs** a single **RTX PRO 6000 Blackwell (96GB VRAM)**. **Catch:** I'm in **Argentina** – the PRO 6000 is \~**US$20,000** here vs \~**US$8,000** in the U.S., and many parts don't arrive locally (e.g., **X870E Aorus AI TOP** motherboard), though cheaper boards might be importable.
Looking for plain-language advice on:
* **GPU:** start with one big consumer card, or go straight to the 96GB workstation card to run 70B-class models at 4-bit with growing context/concurrency?
* **Platform:** motherboard/CPU that plays nice with **two large GPUs** (lanes, slot spacing, thermals) on Linux.
* **RAM:** **64GB** vs **128GB**?
* **Storage:** sensible start = **2โ4TB NVMe** (OS/models/index) + **4โ8TB** for docs/backups?
* **Software:** stable multi-user stack (vLLM or llama.cpp/Ollama + vector DB + simple web UI).
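For context, my own back-of-envelope VRAM math so far (a rough sketch only – the architecture numbers are assumed, modeled on Llama-3-70B-style shapes, and real usage adds runtime overhead on top; corrections welcome):

```python
# Rough VRAM math for the GPU question above. Assumes Llama-3-70B-like
# shapes: 80 layers, 8 KV heads (GQA), head_dim 128. Activations and
# runtime overhead come on top of these numbers.

def weight_gb(params_billions: float, bits: int) -> float:
    """Memory for quantized weights, in GB."""
    return params_billions * 1e9 * bits / 8 / 1e9

def kv_cache_gb(tokens: int, layers: int = 80, kv_heads: int = 8,
                head_dim: int = 128, bytes_per_value: int = 2) -> float:
    """FP16 KV cache for one sequence: K and V, per layer, per KV head."""
    return 2 * layers * kv_heads * head_dim * bytes_per_value * tokens / 1e9

weights = weight_gb(70, 4)     # 35.0 GB just for the 4-bit weights
kv = kv_cache_gb(32_768)       # ~10.7 GB per 32k-token conversation
print(f"{weights:.1f} GB weights + {kv:.1f} GB KV cache per long-context user")
```

If that math is roughly right, a single 32GB 5090 can't even hold the 70B weights at 4-bit, two of them get tight once KV cache and concurrent users enter, and the 96GB card fits it with headroom – which is exactly why I'm torn despite the price.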
Real-world build lists and “wish I knew this earlier” tips welcome – thanks!
I used GPT to translate this post, sorry about that! | 2025-10-27T13:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ohdzdd/help_choosing_a_local_llm_box_textonly_rag_1_rtx/ | ReplacementSelect887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ohdzdd | false | null | t3_1ohdzdd | /r/LocalLLaMA/comments/1ohdzdd/help_choosing_a_local_llm_box_textonly_rag_1_rtx/ | false | false | self | 5 | null |